StarlingX open source release updates

Signed-off-by: Dean Troyer <dtroyer@gmail.com>

parent 1341a878b6
commit 17c909ec83

CONTRIBUTORS.wrs (new file, 8 lines)
@@ -0,0 +1,8 @@
The following contributors from Wind River have developed the seed code in this
repository. We look forward to community collaboration and contributions for
additional features, enhancements and refactoring.

Contributors:
=============
Bin Qian <Bin.Qian@windriver.com>
Eric Macdonald <Eric.MacDonald@windriver.com>
LICENSE (new file, 202 lines)
@@ -0,0 +1,202 @@
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
README.rst (new file, 5 lines)
@@ -0,0 +1,5 @@
======
stx-ha
======

StarlingX Service Management
mwa-solon.map (new file, 5 lines)
@@ -0,0 +1,5 @@
cgcs/recipes-svcmgmt/service-mgmt-api|service-mgmt-api
cgcs/recipes-svcmgmt/service-mgmt-client|service-mgmt-client
cgcs/recipes-svcmgmt/service-mgmt-tools|service-mgmt-tools
cgcs/recipes-svcmgmt/service-mgmt|service-mgmt
cgcs/middleware/mtce/recipes-common/cgts-mtce-common/cgts-mtce-common-1.0/pmon|pmon
service-mgmt-api/centos/build_srpm.data (new file, 4 lines)
@@ -0,0 +1,4 @@
SRC_DIR=sm-api
TAR_NAME=sm-api
VERSION=1.0
TIS_PATCH_VER=2
service-mgmt-api/centos/sm-api.spec (new file, 81 lines)
@@ -0,0 +1,81 @@
|
||||
Summary: Service Management API
|
||||
Name: sm-api
|
||||
Version: 1.0
|
||||
Release: %{tis_patch_ver}%{?_tis_dist}
|
||||
License: Apache-2.0
|
||||
Group: base
|
||||
Packager: Wind River <info@windriver.com>
|
||||
URL: unknown
|
||||
Source0: %{name}-%{version}.tar.gz
|
||||
|
||||
%define debug_package %{nil}
|
||||
|
||||
BuildRequires: python
|
||||
BuildRequires: python-setuptools
|
||||
BuildRequires: util-linux
|
||||
# BuildRequires: systemd provides the %_unitdir macro used below
|
||||
BuildRequires: systemd
|
||||
BuildRequires: systemd-devel
|
||||
Requires: python-libs
|
||||
|
||||
# Needed for /etc/init.d, can be removed when we go fully systemd
|
||||
Requires: chkconfig
|
||||
# Needed for /etc/pmon.d
|
||||
Requires: cgts-mtce-common-pmon
|
||||
|
||||
|
||||
%prep
|
||||
%setup -q
|
||||
|
||||
%build
|
||||
%{__python2} setup.py build
|
||||
|
||||
%install
|
||||
%global _buildsubdir %{_builddir}/%{name}-%{version}
|
||||
%{__python2} setup.py install -O1 --skip-build --root %{buildroot}
|
||||
install -d %{buildroot}/etc/sm
|
||||
install -d %{buildroot}/etc/init.d
|
||||
install -d %{buildroot}/etc/pmon.d
|
||||
install -d %{buildroot}%{_unitdir}
|
||||
install -m 644 %{_buildsubdir}/scripts/sm_api.ini %{buildroot}/etc/sm
|
||||
install -m 755 %{_buildsubdir}/scripts/sm-api %{buildroot}/etc/init.d
|
||||
install -m 644 %{_buildsubdir}/scripts/sm-api.service %{buildroot}%{_unitdir}
|
||||
install -m 644 %{_buildsubdir}/scripts/sm-api.conf %{buildroot}/etc/pmon.d
|
||||
|
||||
%description
|
||||
Service Management API
|
||||
|
||||
#%package -n sm-api-py-src-tar
|
||||
#Summary: Service Management API
|
||||
#Group: base
|
||||
|
||||
#%description -n sm-api-py-src-tar
|
||||
#Service Management API
|
||||
|
||||
|
||||
#%post -n sm-api-py-src-tar
|
||||
## sm-api-py-src-tar - postinst
|
||||
# if [ -f $D/usr/src/sm-api-1.0.tar.bz2 ] ; then
|
||||
# ( cd $D/ && tar -xf $D/usr/src/sm-api-1.0.tar.bz2 )
|
||||
# fi
|
||||
|
||||
%post
|
||||
/usr/bin/systemctl enable sm-api.service >/dev/null 2>&1
|
||||
|
||||
%files
|
||||
%defattr(-,root,root,-)
|
||||
%dir "/usr/lib/python2.7/site-packages/sm_api"
|
||||
/usr/lib/python2.7/site-packages/sm_api/*
|
||||
%dir "/usr/lib/python2.7/site-packages/sm_api-1.0.0-py2.7.egg-info"
|
||||
/usr/lib/python2.7/site-packages/sm_api-1.0.0-py2.7.egg-info/*
|
||||
"/usr/bin/sm-api"
|
||||
%dir "/etc/sm"
|
||||
"/etc/init.d/sm-api"
|
||||
"/etc/pmon.d/sm-api.conf"
|
||||
"/etc/sm/sm_api.ini"
|
||||
%{_unitdir}/*
|
||||
|
||||
#%files -n sm-api-py-src-tar
|
||||
#%defattr(-,-,-,-)
|
||||
#"/usr/src/sm-api-1.0.tar.bz2"
|
||||
|
service-mgmt-api/sm-api/LICENSE (new file, 202 lines)
@@ -0,0 +1,202 @@
(Apache License, Version 2.0: verbatim copy of the top-level LICENSE above.)
service-mgmt-api/sm-api/scripts/sm-api (new executable file, 114 lines)
@@ -0,0 +1,114 @@
|
||||
#! /bin/sh
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
# chkconfig: - 60 60
|
||||
# processname: sm-api
|
||||
# description: Service Management API
|
||||
#
|
||||
### BEGIN INIT INFO
|
||||
# Description: sm-api...
|
||||
#
|
||||
# Short-Description: Service Management API.
|
||||
# Provides: sm-api
|
||||
# Required-Start: $network
|
||||
# Should-Start: $syslog
|
||||
# Required-Stop: $network
|
||||
# Default-Start: 3 5
|
||||
# Default-Stop: 0 6
|
||||
### END INIT INFO
|
||||
|
||||
. /etc/init.d/functions
|
||||
|
||||
# Linux Standard Base (LSB) Error Codes
|
||||
RETVAL=0
|
||||
LSB_GENERIC_ERROR=1
|
||||
LSB_INVALID_ARGS=2
|
||||
LSB_UNSUPPORTED_FEATURE=3
|
||||
LSB_NOT_INSTALLED=5
|
||||
LSB_NOT_RUNNING=7
|
||||
|
||||
SM_API_NAME="sm-api"
|
||||
SM_API="/usr/bin/${SM_API_NAME}"
|
||||
|
||||
daemon_pidfile="/var/run/${SM_API_NAME}.pid"
|
||||
|
||||
|
||||
if [ ! -e "${SM_API}" ] ; then
|
||||
logger "${SM_API} is missing"
|
||||
exit ${LSB_NOT_INSTALLED}
|
||||
fi
|
||||
|
||||
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin
|
||||
export PATH
|
||||
|
||||
case "$1" in
|
||||
start)
|
||||
echo -n "Starting ${SM_API_NAME}: "
|
||||
if [ -n "`pidof ${SM_API_NAME}`" ] ; then
|
||||
echo -n "is already running "
|
||||
RETVAL=0
|
||||
else
|
||||
/bin/sh -c "${SM_API} --debug --verbose --use-syslog --syslog-log-facility local1"' >> /dev/null 2>&1 & echo $!' > ${daemon_pidfile}
|
||||
RETVAL=$?
|
||||
fi
|
||||
if [ ${RETVAL} -eq 0 ] ; then
|
||||
pid=`pidof ${SM_API_NAME}`
|
||||
echo "OK"
|
||||
logger "${SM_API} (${pid})"
|
||||
else
|
||||
echo "FAIL"
|
||||
RETVAL=${LSB_GENERIC_ERROR}
|
||||
fi
|
||||
;;
|
||||
|
||||
stop)
|
||||
echo " "
|
||||
echo -n "Stopping ${SM_API_NAME}: "
|
||||
|
||||
if [ -e ${daemon_pidfile} ] ; then
|
||||
pid=`cat ${daemon_pidfile}`
|
||||
kill -TERM $pid
|
||||
rm -f ${daemon_pidfile}
|
||||
rm -f /var/lock/subsys/${SM_API_NAME}
|
||||
echo "OK"
|
||||
else
|
||||
echo "FAIL"
|
||||
fi
|
||||
;;
|
||||
|
||||
restart)
|
||||
$0 stop
|
||||
sleep 1
|
||||
$0 start
|
||||
;;
|
||||
|
||||
status)
|
||||
if [ -e ${daemon_pidfile} ] ; then
|
||||
pid=`cat ${daemon_pidfile}`
|
||||
ps -p $pid | grep -v "PID TTY" >> /dev/null 2>&1
|
||||
if [ $? -eq 0 ] ; then
|
||||
echo "${SM_API_NAME} is running"
|
||||
RETVAL=0
|
||||
else
|
||||
echo "${SM_API_NAME} is NOT running"
|
||||
RETVAL=1
|
||||
fi
|
||||
else
|
||||
echo "${SM_API_NAME} is running; no pidfile"
|
||||
RETVAL=1
|
||||
fi
|
||||
;;
|
||||
|
||||
condrestart)
|
||||
[ -f /var/lock/subsys/${SM_API_NAME} ] && $0 restart
|
||||
;;
|
||||
|
||||
*)
|
||||
echo "usage: $0 { start | stop | status | restart | condrestart | status }"
|
||||
;;
|
||||
esac
|
||||
|
||||
exit ${RETVAL}
|
service-mgmt-api/sm-api/scripts/sm-api.conf (new file, 14 lines)
@@ -0,0 +1,14 @@
;
; Copyright (c) 2014 Wind River Systems, Inc.
;
; SPDX-License-Identifier: Apache-2.0
;
[process]
process = sm-api
pidfile = /var/run/sm-api.pid
script = /etc/init.d/sm-api
style = lsb ; ocf or lsb
severity = major ; minor, major, critical
restarts = 3 ; restarts before error assertion
interval = 5 ; number of seconds to wait between restarts
debounce = 20 ; number of seconds to wait before degrade clear
service-mgmt-api/sm-api/scripts/sm-api.service (new file, 15 lines)
@@ -0,0 +1,15 @@
[Unit]
Description=Service Management API Unit
After=network-online.target syslog-ng.service config.service sm.service
Before=sm-eru.service pmon.service

[Service]
Type=forking
RemainAfterExit=yes
User=root
ExecStart=/etc/init.d/sm-api start
ExecStop=/etc/init.d/sm-api stop
PIDFile=/var/run/sm-api.pid

[Install]
WantedBy=multi-user.target
service-mgmt-api/sm-api/scripts/sm_api.ini (new file, 10 lines)
@@ -0,0 +1,10 @@
;
; Copyright (c) 2014 Wind River Systems, Inc.
;
; SPDX-License-Identifier: Apache-2.0
;
[logging]
use_syslog=

[database]
database=/etc/sm/sm.database.v1
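For reference, a minimal sketch of how a ConfigParser-style loader could read this file. The actual loader (sm_api.common.config) is not part of this change, so the parsing details below are an assumption; the paths and option names come from sm_api.ini above.

```python
# Minimal sketch (Python 2 style, matching this repository): read the
# [logging] and [database] sections from sm_api.ini.
import ConfigParser

parser = ConfigParser.SafeConfigParser()
parser.read('/etc/sm/sm_api.ini')

use_syslog = parser.get('logging', 'use_syslog')   # empty string by default
database = parser.get('database', 'database')       # /etc/sm/sm.database.v1
print("syslog=%r database=%r" % (use_syslog, database))
```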
service-mgmt-api/sm-api/setup.py (new file, 31 lines)
@@ -0,0 +1,31 @@
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import setuptools

setuptools.setup(
    name='sm_api',
    description='Service Management API',
    version='1.0.0',
    license='Apache-2.0',
    packages=['sm_api', 'sm_api.common', 'sm_api.db', 'sm_api.objects',
              'sm_api.api', 'sm_api.api.controllers', 'sm_api.api.middleware',
              'sm_api.api.controllers.v1', 'sm_api.cmd',
              'sm_api.db.sqlalchemy',
              'sm_api.db.sqlalchemy.migrate_repo',
              'sm_api.db.sqlalchemy.migrate_repo.versions',
              'sm_api.openstack', 'sm_api.openstack.common',
              'sm_api.openstack.common.db',
              'sm_api.openstack.common.db.sqlalchemy',
              'sm_api.openstack.common.rootwrap',
              'sm_api.openstack.common.rpc',
              'sm_api.openstack.common.notifier',
              'sm_api.openstack.common.config',
              'sm_api.openstack.common.fixture'],
    entry_points={
        'console_scripts': [
            'sm-api = sm_api.cmd.api:main'
        ]}
)
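The console_scripts entry point wires the `sm-api` command to `sm_api.cmd.api:main`. That module is not shown in this section, so the sketch below only illustrates what the generated wrapper script effectively does, assuming the module exposes a main() callable as declared.

```python
# Illustrative equivalent of the generated 'sm-api' console script:
# import the declared entry point target and call it.
from sm_api.cmd import api  # assumption: provides main(), per entry_points above

if __name__ == '__main__':
    api.main()
```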
service-mgmt-api/sm-api/sm_api/__init__.py (new file, 10 lines)
@@ -0,0 +1,10 @@
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# All Rights Reserved.
#
service-mgmt-api/sm-api/sm_api/api/__init__.py (new file, 27 lines)
@@ -0,0 +1,27 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from oslo_config import cfg

API_SERVICE_OPTS = [
    cfg.StrOpt('sm_api_bind_ip',
               default='0.0.0.0',
               help='IP for the Service Management API server to bind to',
               ),
    cfg.IntOpt('sm_api_port',
               default=7777,
               help='The port for the Service Management API server',
               ),
    cfg.IntOpt('api_limit_max',
               default=1000,
               help='the maximum number of items returned in a single '
                    'response from a collection resource'),
]

CONF = cfg.CONF
opt_group = cfg.OptGroup(name='api',
                         title='Options for the sm-api service')
CONF.register_group(opt_group)
CONF.register_opts(API_SERVICE_OPTS)
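A minimal sketch of how these options become available after importing the package. Note that register_opts() above is called without a group argument, so the options land in oslo.config's DEFAULT section rather than under [api]; the snippet reflects that as written.

```python
# Minimal sketch: importing sm_api.api registers the options as a side effect,
# then they can be read from the global oslo_config object.
from oslo_config import cfg
import sm_api.api  # noqa: registers sm_api_bind_ip / sm_api_port / api_limit_max

conf = cfg.CONF
conf([])  # parse with no CLI arguments, keeping the defaults
print("%s:%d (limit %d)" % (conf.sm_api_bind_ip,
                            conf.sm_api_port,
                            conf.api_limit_max))
```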
service-mgmt-api/sm-api/sm_api/api/acl.py (new file, 76 lines)
@@ -0,0 +1,76 @@
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
# Copyright © 2012 New Dream Network, LLC (DreamHost)
|
||||
#
|
||||
# Author: Doug Hellmann <doug.hellmann@dreamhost.com>
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Access Control Lists (ACL's) control access the API server."""
|
||||
|
||||
from keystonemiddleware import auth_token as keystone_auth_token
|
||||
from oslo_config import cfg
|
||||
from pecan import hooks
|
||||
from webob import exc
|
||||
|
||||
from sm_api.api.middleware import auth_token
|
||||
from sm_api.common import policy
|
||||
|
||||
|
||||
OPT_GROUP_NAME = 'keystone_authtoken'
|
||||
|
||||
|
||||
def register_opts(conf):
|
||||
"""Register keystoneclient middleware options
|
||||
|
||||
:param conf: SmApi settings.
|
||||
"""
|
||||
#conf.register_opts(keystone_auth_token._OPTS, group=OPT_GROUP_NAME)
|
||||
keystone_auth_token.CONF = conf
|
||||
|
||||
|
||||
register_opts(cfg.CONF)
|
||||
|
||||
|
||||
def install(app, conf, public_routes):
|
||||
"""Install ACL check on application.
|
||||
|
||||
:param app: A WSGI application.
|
||||
:param conf: Settings. Must include OPT_GROUP_NAME section.
|
||||
:param public_routes: The list of the routes which will be allowed to
|
||||
access without authentication.
|
||||
:return: The same WSGI application with ACL installed.
|
||||
|
||||
"""
|
||||
keystone_config = dict(conf.get(OPT_GROUP_NAME))
|
||||
return auth_token.AuthTokenMiddleware(app,
|
||||
conf=keystone_config,
|
||||
public_api_routes=public_routes)
|
||||
|
||||
|
||||
class AdminAuthHook(hooks.PecanHook):
|
||||
"""Verify that the user has admin rights.
|
||||
|
||||
Checks whether the request context is an admin context and
|
||||
rejects the request otherwise.
|
||||
|
||||
"""
|
||||
def before(self, state):
|
||||
ctx = state.request.context
|
||||
|
||||
if not policy.check_is_admin(ctx) and not ctx.is_public_api:
|
||||
raise exc.HTTPForbidden()
|
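A hedged sketch of how install() is meant to be used: it wraps a WSGI application with the keystone auth_token middleware, reading the [keystone_authtoken] section (OPT_GROUP_NAME above) from the configuration object. The wrapper function name here is hypothetical.

```python
# Hypothetical wiring: wrap a WSGI app with the ACL middleware defined above.
from oslo_config import cfg

from sm_api.api import acl


def wrap_with_acl(wsgi_app):
    # Routes reachable without a keystone token (mirrors acl_public_routes
    # in config.py); cfg.CONF must carry a [keystone_authtoken] section.
    return acl.install(wsgi_app, cfg.CONF, ['/', '/v1'])
```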
service-mgmt-api/sm-api/sm_api/api/api.ini (new file, 5 lines)
@@ -0,0 +1,5 @@
[logging]
use_syslog=

[database]
database=/tmp/sm.database.v1
service-mgmt-api/sm-api/sm_api/api/api.py (new file, 67 lines)
@@ -0,0 +1,67 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
import os
|
||||
import sys
|
||||
import argparse
|
||||
import ConfigParser
|
||||
import eventlet
|
||||
from wsgiref import simple_server
|
||||
|
||||
from sm_api.common import config
|
||||
from sm_api.common import log
|
||||
from sm_api import app
|
||||
|
||||
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'
|
||||
eventlet.monkey_patch(os=False)
|
||||
|
||||
|
||||
def get_handler_cls():
|
||||
cls = simple_server.WSGIRequestHandler
|
||||
|
||||
# old-style class doesn't support super
|
||||
class MyHandler(cls, object):
|
||||
def address_string(self):
|
||||
# In the future, we could provide a config option to allow reverse DNS lookup
|
||||
return self.client_address[0]
|
||||
|
||||
return MyHandler
|
||||
|
||||
|
||||
def main():
|
||||
try:
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument('-c', '--config', required=True,
|
||||
help='configuration file')
|
||||
args = parser.parse_args()
|
||||
|
||||
config.load(args.config)
|
||||
|
||||
if not config.CONF:
|
||||
print "Error: configuration not available."
|
||||
sys.exit(-1)
|
||||
|
||||
log.configure(config.CONF)
|
||||
|
||||
wsgi = simple_server.make_server('0.0.0.0', 7777, app.Application(),
|
||||
handler_class=get_handler_cls())
|
||||
wsgi.serve_forever()
|
||||
|
||||
except ConfigParser.NoOptionError as e:
|
||||
print e
|
||||
sys.exit(-2)
|
||||
|
||||
except ConfigParser.NoSectionError as e:
|
||||
print e
|
||||
sys.exit(-3)
|
||||
|
||||
except KeyboardInterrupt:
|
||||
sys.exit()
|
||||
|
||||
except Exception as e:
|
||||
print e
|
||||
sys.exit(-4)
|
||||
|
||||
main()
|
service-mgmt-api/sm-api/sm_api/api/app.py (new file, 109 lines)
@@ -0,0 +1,109 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
"""
|
||||
Application
|
||||
"""
|
||||
|
||||
from oslo_config import cfg
|
||||
import pecan
|
||||
|
||||
from sm_api.api import config
|
||||
from sm_api.api import hooks
|
||||
|
||||
from sm_api.api import acl
|
||||
from sm_api.api import middleware
|
||||
|
||||
|
||||
auth_opts = [
|
||||
cfg.StrOpt('auth_strategy',
|
||||
default='noauth',
|
||||
help='Method to use for auth: noauth or keystone.'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(auth_opts)
|
||||
|
||||
|
||||
def get_pecan_config():
|
||||
filename = config.__file__.replace('.pyc', '.py')
|
||||
return pecan.configuration.conf_from_file(filename)
|
||||
|
||||
|
||||
def create_app():
|
||||
pecan_conf = get_pecan_config()
|
||||
app_hooks = [hooks.ConfigHook(),
|
||||
hooks.DatabaseHook()]
|
||||
|
||||
pecan.configuration.set_config(dict(pecan_conf), overwrite=True)
|
||||
|
||||
app = pecan.make_app(
|
||||
pecan_conf.app.root,
|
||||
static_root=pecan_conf.app.static_root,
|
||||
debug=False,
|
||||
force_canonical=getattr(pecan_conf.app, 'force_canonical', True),
|
||||
hooks=app_hooks
|
||||
)
|
||||
|
||||
return app
|
||||
|
||||
|
||||
def setup_app(pecan_config=None, extra_hooks=None):
|
||||
app_hooks = [hooks.ConfigHook(),
|
||||
hooks.DatabaseHook(),
|
||||
hooks.ContextHook(pecan_config.app.acl_public_routes),
|
||||
]
|
||||
# hooks.RPCHook()
|
||||
if extra_hooks:
|
||||
app_hooks.extend(extra_hooks)
|
||||
|
||||
if not pecan_config:
|
||||
pecan_config = get_pecan_config()
|
||||
|
||||
if pecan_config.app.enable_acl:
|
||||
app_hooks.append(acl.AdminAuthHook())
|
||||
|
||||
pecan.configuration.set_config(dict(pecan_config), overwrite=True)
|
||||
|
||||
app = pecan.make_app(
|
||||
pecan_config.app.root,
|
||||
static_root=pecan_config.app.static_root,
|
||||
debug=CONF.debug,
|
||||
force_canonical=getattr(pecan_config.app, 'force_canonical', True),
|
||||
hooks=app_hooks,
|
||||
wrap_app=middleware.ParsableErrorMiddleware,
|
||||
)
|
||||
|
||||
if pecan_config.app.enable_acl:
|
||||
return acl.install(app, cfg.CONF, pecan_config.app.acl_public_routes)
|
||||
|
||||
return app
|
||||
|
||||
|
||||
class Application(object):
|
||||
def __init__(self):
|
||||
self.v1 = create_app()
|
||||
|
||||
@classmethod
|
||||
def unsupported_version(cls, start_response):
|
||||
start_response('404 Not Found', [])
|
||||
return []
|
||||
|
||||
def __call__(self, environ, start_response):
|
||||
if environ['PATH_INFO'].startswith("/v1/"):
|
||||
return self.v1(environ, start_response)
|
||||
|
||||
return Application.unsupported_version(start_response)
|
||||
|
||||
|
||||
class VersionSelectorApplication(object):
|
||||
def __init__(self):
|
||||
pc = get_pecan_config()
|
||||
pc.app.enable_acl = (CONF.auth_strategy == 'keystone')
|
||||
self.v1 = setup_app(pecan_config=pc)
|
||||
|
||||
def __call__(self, environ, start_response):
|
||||
return self.v1(environ, start_response)
|
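For context, a minimal sketch of serving the Pecan application with the stdlib WSGI server, mirroring what api.py above does with Application(). It assumes the package's Pecan config and oslo options are importable and uses the 0.0.0.0:7777 defaults that appear elsewhere in this commit.

```python
# Minimal sketch: serve VersionSelectorApplication with wsgiref, the same
# pattern api.py uses for Application().
from wsgiref import simple_server

from sm_api.api import app

wsgi_app = app.VersionSelectorApplication()   # enables ACL when auth_strategy is 'keystone'
server = simple_server.make_server('0.0.0.0', 7777, wsgi_app)
server.serve_forever()
```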
service-mgmt-api/sm-api/sm_api/api/config.py (new file, 17 lines)
@@ -0,0 +1,17 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# Server Configuration
server = {'host': '0.0.0.0', 'port': '7777'}

# Pecan Application Configurations
app = {'root': 'sm_api.api.controllers.root.RootController',
       'modules': ['sm_api'],
       'static_root': '',
       'debug': False,
       'enable_acl': False,
       'acl_public_routes': ['/', '/v1']
       }
@@ -0,0 +1,5 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
service-mgmt-api/sm-api/sm_api/api/controllers/root.py (new file, 65 lines)
@@ -0,0 +1,65 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import pecan
|
||||
from pecan import rest
|
||||
from wsme import types as wsme_types
|
||||
from wsmeext import pecan as wsme_pecan
|
||||
|
||||
from sm_api.api.controllers import v1
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import link
|
||||
|
||||
|
||||
class Version(base.APIBase):
|
||||
"""An API version representation."""
|
||||
|
||||
id = wsme_types.text
|
||||
"The ID of the version, also acts as the release number"
|
||||
|
||||
links = [link.Link]
|
||||
"A Link that point to a specific version of the API"
|
||||
|
||||
@classmethod
|
||||
def convert(cls, id):
|
||||
version = Version()
|
||||
version.id = id
|
||||
version.links = [link.Link.make_link('self', pecan.request.host_url,
|
||||
id, '', bookmark=True)]
|
||||
return version
|
||||
|
||||
|
||||
class Root(base.APIBase):
|
||||
|
||||
name = wsme_types.text
|
||||
"The name of the API"
|
||||
|
||||
description = wsme_types.text
|
||||
"Some information about this API"
|
||||
|
||||
version = [Version]
|
||||
"Links to all the versions available in this API"
|
||||
|
||||
default_version = Version
|
||||
"A link to the default version of the API"
|
||||
|
||||
@classmethod
|
||||
def convert(cls):
|
||||
root = Root()
|
||||
root.name = "System Management API"
|
||||
root.description = "System Management API from Wind River"
|
||||
root.version = [Version.convert("v1")]
|
||||
root.default_version = Version.convert("v1")
|
||||
return root
|
||||
|
||||
|
||||
class RootController(rest.RestController):
|
||||
|
||||
v1 = v1.Controller()
|
||||
|
||||
@wsme_pecan.wsexpose(Root)
|
||||
def get(self):
|
||||
return Root.convert()
|
service-mgmt-api/sm-api/sm_api/api/controllers/v1/__init__.py (new file, 101 lines)
@@ -0,0 +1,101 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import pecan
|
||||
from pecan import rest
|
||||
|
||||
from wsme import types as wsme_types
|
||||
import wsmeext.pecan as wsme_pecan
|
||||
|
||||
from sm_api.api.controllers.v1 import link
|
||||
from sm_api.api.controllers.v1 import service_groups
|
||||
from sm_api.api.controllers.v1 import services
|
||||
from sm_api.api.controllers.v1 import servicenode
|
||||
from sm_api.api.controllers.v1 import sm_sda
|
||||
from sm_api.api.controllers.v1 import nodes
|
||||
|
||||
|
||||
class Version1(wsme_types.Base):
|
||||
""" Version-1 of the API.
|
||||
"""
|
||||
|
||||
id = wsme_types.text
|
||||
"The ID of the version, also acts as the release number"
|
||||
|
||||
links = [link.Link]
|
||||
"Links that point to a specific URL for this version and documentation"
|
||||
|
||||
service_group = [link.Link]
|
||||
"Links to the SM service-group resource"
|
||||
|
||||
servicenode = [link.Link]
|
||||
"Links to the SM service node resource"
|
||||
|
||||
sm_sda = [link.Link]
|
||||
"Links to the SM service domain assignments resource "
|
||||
|
||||
@classmethod
|
||||
def convert(cls):
|
||||
v1 = Version1()
|
||||
v1.id = "v1"
|
||||
v1.links = [link.Link.make_link('self', pecan.request.host_url,
|
||||
'v1', '', bookmark=True)]
|
||||
v1.service_groups = [link.Link.make_link('self',
|
||||
pecan.request.host_url,
|
||||
'service_groups', ''),
|
||||
link.Link.make_link('bookmark',
|
||||
pecan.request.host_url,
|
||||
'service_groups', '',
|
||||
bookmark=True)]
|
||||
v1.services = [link.Link.make_link('self',
|
||||
pecan.request.host_url,
|
||||
'services', ''),
|
||||
link.Link.make_link('bookmark',
|
||||
pecan.request.host_url,
|
||||
'services', '',
|
||||
bookmark=True)]
|
||||
|
||||
v1.servicenode = [link.Link.make_link('self',
|
||||
pecan.request.host_url,
|
||||
'servicenode', ''),
|
||||
link.Link.make_link('bookmark',
|
||||
pecan.request.host_url,
|
||||
'servicenode', '',
|
||||
bookmark=True)]
|
||||
|
||||
v1.sm_sda = [link.Link.make_link('self',
|
||||
pecan.request.host_url,
|
||||
'sm_sda', ''),
|
||||
link.Link.make_link('bookmark',
|
||||
pecan.request.host_url,
|
||||
'sm_sda', '',
|
||||
bookmark=True)]
|
||||
|
||||
v1.nodes = [link.Link.make_link('self',
|
||||
pecan.request.host_url,
|
||||
'nodes', ''),
|
||||
link.Link.make_link('bookmark',
|
||||
pecan.request.host_url,
|
||||
'nodes', '',
|
||||
bookmark=True)]
|
||||
|
||||
return v1
|
||||
|
||||
|
||||
class Controller(rest.RestController):
|
||||
"""Version 1 API controller root."""
|
||||
|
||||
service_groups = service_groups.ServiceGroupController()
|
||||
services = services.ServicesController()
|
||||
servicenode = servicenode.ServiceNodeController()
|
||||
sm_sda = sm_sda.SmSdaController()
|
||||
nodes = nodes.NodesController()
|
||||
|
||||
@wsme_pecan.wsexpose(Version1)
|
||||
def get(self):
|
||||
return Version1.convert()
|
||||
|
||||
__all__ = Controller
|
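An illustrative client call against this v1 root, assuming a local sm-api instance is listening on the default port 7777 registered above; the GET returns the Version1 document with its links. This snippet is not part of the commit.

```python
# Illustrative only: fetch the v1 version document from a running sm-api.
import json
import urllib2

raw = urllib2.urlopen('http://127.0.0.1:7777/v1').read()
version = json.loads(raw)
print(version['id'])  # "v1"
for entry in version.get('links', []):
    print("%s %s" % (entry['rel'], entry['href']))
```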
service-mgmt-api/sm-api/sm_api/api/controllers/v1/base.py (new file, 55 lines)
@@ -0,0 +1,55 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# All Rights Reserved.
|
||||
#
|
||||
|
||||
import datetime
|
||||
|
||||
import wsme
|
||||
from wsme import types as wsme_types
|
||||
|
||||
|
||||
class APIBase(wsme_types.Base):
|
||||
|
||||
# created_at = datetime.datetime
|
||||
# "The time in UTC at which the object is created"
|
||||
|
||||
# updated_at = datetime.datetime
|
||||
# "The time in UTC at which the object is updated"
|
||||
|
||||
def as_dict(self):
|
||||
"""Render this object as a dict of its fields."""
|
||||
return dict((k, getattr(self, k))
|
||||
for k in self.fields
|
||||
if hasattr(self, k) and
|
||||
getattr(self, k) != wsme.Unset)
|
||||
|
||||
def unset_fields_except(self, except_list=None):
|
||||
"""Unset fields so they don't appear in the message body.
|
||||
|
||||
:param except_list: A list of fields that won't be touched.
|
||||
|
||||
"""
|
||||
if except_list is None:
|
||||
except_list = []
|
||||
|
||||
for k in self.as_dict():
|
||||
if k not in except_list:
|
||||
setattr(self, k, wsme.Unset)
|
||||
|
||||
@classmethod
|
||||
def from_rpc_object(cls, m, fields=None):
|
||||
"""Convert a RPC object to an API object."""
|
||||
obj_dict = m.as_dict()
|
||||
# Unset non-required fields so they do not appear
|
||||
# in the message body
|
||||
obj_dict.update(dict((k, wsme.Unset)
|
||||
for k in obj_dict.keys()
|
||||
if fields and k not in fields))
|
||||
return cls(**obj_dict)
|
@ -0,0 +1,55 @@
|
||||
#!/usr/bin/env python
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2013 Red Hat, Inc.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import pecan
|
||||
from wsme import types as wtypes
|
||||
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import link
|
||||
|
||||
|
||||
class Collection(base.APIBase):
|
||||
|
||||
next = wtypes.text
|
||||
"A link to retrieve the next subset of the collection"
|
||||
|
||||
@property
|
||||
def collection(self):
|
||||
return getattr(self, self._type)
|
||||
|
||||
def has_next(self, limit):
|
||||
"""Return whether collection has more items."""
|
||||
return len(self.collection) and len(self.collection) == limit
|
||||
|
||||
def get_next(self, limit, url=None, **kwargs):
|
||||
"""Return a link to the next subset of the collection."""
|
||||
if not self.has_next(limit):
|
||||
return wtypes.Unset
|
||||
|
||||
resource_url = url or self._type
|
||||
q_args = ''.join(['%s=%s&' % (key, kwargs[key]) for key in kwargs])
|
||||
next_args = '?%(args)slimit=%(limit)d&marker=%(marker)s' % {
|
||||
'args': q_args, 'limit': limit,
|
||||
'marker': self.collection[-1].uuid}
|
||||
|
||||
return link.Link.make_link('next', pecan.request.host_url,
|
||||
resource_url, next_args).href
|
service-mgmt-api/sm-api/sm_api/api/controllers/v1/link.py (new file, 30 lines)
@@ -0,0 +1,30 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

from wsme import types as wsme_types


class Link(wsme_types.Base):
    """ Representation of a link.
    """

    href = wsme_types.text
    "The url of a link."

    rel = wsme_types.text
    "The name of a link."

    type = wsme_types.text
    "The type of document or link."

    @classmethod
    def make_link(cls, rel_name, url, resource, resource_args,
                  bookmark=False, type=wsme_types.Unset):
        template = '%s/%s' if bookmark else '%s/v1/%s'
        template += '%s' if resource_args.startswith('?') else '/%s'

        return Link(href=template % (url, resource, resource_args),
                    rel=rel_name, type=type)
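A short example of the URL templates above: with bookmark=False the href includes the /v1 prefix, with bookmark=True it does not. The host URL is a placeholder.

```python
# Example of Link.make_link() output (host URL is a placeholder).
from sm_api.api.controllers.v1 import link

l1 = link.Link.make_link('self', 'http://10.0.0.1:7777', 'services', '23')
# l1.href == 'http://10.0.0.1:7777/v1/services/23'

l2 = link.Link.make_link('bookmark', 'http://10.0.0.1:7777', 'services', '23',
                         bookmark=True)
# l2.href == 'http://10.0.0.1:7777/services/23'
```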
service-mgmt-api/sm-api/sm_api/api/controllers/v1/nodes.py (new file, 144 lines)
@@ -0,0 +1,144 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import json
|
||||
import wsme
|
||||
from wsme import types as wsme_types
|
||||
import wsmeext.pecan as wsme_pecan
|
||||
import pecan
|
||||
from pecan import rest
|
||||
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import collection
|
||||
from sm_api.api.controllers.v1 import link
|
||||
from sm_api.api.controllers.v1 import utils
|
||||
|
||||
from sm_api.common import log
|
||||
from sm_api import objects
|
||||
|
||||
|
||||
LOG = log.get_logger(__name__)
|
||||
|
||||
|
||||
class NodesCommand(wsme_types.Base):
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
|
||||
|
||||
class NodesCommandResult(wsme_types.Base):
|
||||
# Host Information
|
||||
hostname = wsme_types.text
|
||||
state = wsme_types.text
|
||||
# Command
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
# Result
|
||||
error_code = wsme_types.text
|
||||
error_details = wsme_types.text
|
||||
|
||||
|
||||
class Nodes(base.APIBase):
|
||||
id = int
|
||||
name = wsme_types.text
|
||||
administrative_state = wsme_types.text
|
||||
operational_state = wsme_types.text
|
||||
availability_status = wsme_types.text
|
||||
ready_state = wsme_types.text
|
||||
|
||||
links = [link.Link]
|
||||
"A list containing a self link and associated nodes links"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self.fields = objects.sm_node.fields.keys()
|
||||
for k in self.fields:
|
||||
setattr(self, k, kwargs.get(k))
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, rpc_nodes, expand=True):
|
||||
minimum_fields = ['id', 'name', 'administrative_state',
|
||||
'operational_state', 'availability_status',
|
||||
'ready_state']
|
||||
fields = minimum_fields if not expand else None
|
||||
nodes = Nodes.from_rpc_object(
|
||||
rpc_nodes, fields)
|
||||
|
||||
return nodes
|
||||
|
||||
|
||||
class NodesCollection(collection.Collection):
|
||||
"""API representation of a collection of nodes."""
|
||||
|
||||
nodes = [Nodes]
|
||||
"A list containing nodes objects"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self._type = 'nodes'
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, nodes, limit, url=None,
|
||||
expand=False, **kwargs):
|
||||
collection = NodesCollection()
|
||||
collection.nodes = [
|
||||
Nodes.convert_with_links(ch, expand)
|
||||
for ch in nodes]
|
||||
url = url or None
|
||||
collection.next = collection.get_next(limit, url=url, **kwargs)
|
||||
return collection
|
||||
|
||||
|
||||
# class Nodess(wsme_types.Base):
|
||||
# nodes = wsme_types.text
|
||||
class NodesController(rest.RestController):
|
||||
|
||||
def _get_nodes(self, marker, limit, sort_key, sort_dir):
|
||||
|
||||
limit = utils.validate_limit(limit)
|
||||
sort_dir = utils.validate_sort_dir(sort_dir)
|
||||
marker_obj = None
|
||||
if marker:
|
||||
marker_obj = objects.sm_node.get_by_uuid(
|
||||
pecan.request.context, marker)
|
||||
|
||||
nodes = pecan.request.dbapi.sm_node_get_list(limit,
|
||||
marker_obj,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
return nodes
|
||||
|
||||
@wsme_pecan.wsexpose(Nodes, unicode)
|
||||
def get_one(self, uuid):
|
||||
rpc_sg = objects.sm_node.get_by_uuid(pecan.request.context, uuid)
|
||||
|
||||
return Nodes.convert_with_links(rpc_sg)
|
||||
|
||||
@wsme_pecan.wsexpose(NodesCollection, unicode, int,
|
||||
unicode, unicode)
|
||||
def get_all(self, marker=None, limit=None,
|
||||
sort_key='name', sort_dir='asc'):
|
||||
"""Retrieve list of nodes."""
|
||||
|
||||
nodes = self._get_nodes(marker,
|
||||
limit,
|
||||
sort_key,
|
||||
sort_dir)
|
||||
|
||||
return NodesCollection.convert_with_links(nodes, limit,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
|
||||
@wsme_pecan.wsexpose(NodesCommandResult, unicode,
|
||||
body=NodesCommand)
|
||||
def put(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
||||
|
||||
@wsme_pecan.wsexpose(NodesCommandResult, unicode,
|
||||
body=NodesCommand)
|
||||
def patch(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
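For orientation, a minimal sketch of how a client could walk this paginated collection. The mount path (/v1/nodes) and port (7777, taken from cmd/api.py later in this commit) are assumptions rather than something this file defines, and the 'next' key mirrors the Collection.next attribute set above.

# Illustrative client sketch; URL, port and the 'next' key are assumptions.
import requests

def list_all_nodes(base_url="http://localhost:7777"):
    nodes = []
    url = base_url + "/v1/nodes"
    params = {"limit": 50, "sort_key": "name", "sort_dir": "asc"}
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        body = resp.json()
        nodes.extend(body.get("nodes", []))
        url = body.get("next")   # built by Collection.get_next(); may be absent
        params = None            # the 'next' URL already carries the query string
    return nodes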
|
@ -0,0 +1,164 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import json
|
||||
import wsme
|
||||
from wsme import types as wsme_types
|
||||
import wsmeext.pecan as wsme_pecan
|
||||
import pecan
|
||||
from pecan import rest
|
||||
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import collection
|
||||
from sm_api.api.controllers.v1 import link
|
||||
from sm_api.api.controllers.v1 import utils
|
||||
|
||||
from sm_api.common import log
|
||||
from sm_api import objects
|
||||
|
||||
|
||||
LOG = log.get_logger(__name__)
|
||||
|
||||
|
||||
class ServiceGroupCommand(wsme_types.Base):
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
|
||||
|
||||
class ServiceGroupCommandResult(wsme_types.Base):
|
||||
# Host Information
|
||||
hostname = wsme_types.text
|
||||
state = wsme_types.text
|
||||
# Command
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
# Result
|
||||
error_code = wsme_types.text
|
||||
error_details = wsme_types.text
|
||||
|
||||
|
||||
class ServiceGroup(base.APIBase):
|
||||
name = wsme_types.text
|
||||
state = wsme_types.text
|
||||
status = wsme_types.text
|
||||
|
||||
# JKUNG new
|
||||
uuid = wsme_types.text
|
||||
"The UUID of the service_groups"
|
||||
|
||||
id = int
|
||||
|
||||
links = [link.Link]
|
||||
"A list containing a self link and associated service_groups links"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self.fields = objects.service_groups.fields.keys()
|
||||
for k in self.fields:
|
||||
setattr(self, k, kwargs.get(k))
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, rpc_service_groups, expand=True):
|
||||
minimum_fields = ['id', 'name', 'state', 'status']
|
||||
fields = minimum_fields if not expand else None
|
||||
service_groups = ServiceGroup.from_rpc_object(
|
||||
rpc_service_groups, fields)
|
||||
|
||||
return service_groups
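The expand flag controls whether the full field set or only minimum_fields is serialized for collection listings. A stand-alone illustration of that filtering idea with plain dicts (from_rpc_object itself is provided by base.APIBase and is not shown in this hunk):

def _filter_fields(obj, fields=None):
    # fields=None means "expanded": keep every field.
    if fields is None:
        return dict(obj)
    return dict((k, v) for k, v in obj.items() if k in fields)

sg = {'id': 1, 'name': 'controller-services', 'state': 'active',
      'status': '', 'uuid': 'fake-uuid'}
print(_filter_fields(sg, ['id', 'name', 'state', 'status']))  # collapsed listing view
print(_filter_fields(sg))                                     # expanded single-object view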
|
||||
|
||||
|
||||
class ServiceGroupCollection(collection.Collection):
|
||||
"""API representation of a collection of service_groups."""
|
||||
|
||||
service_groups = [ServiceGroup]
|
||||
"A list containing service_groups objects"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self._type = 'service_groups'
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, service_groups, limit, url=None,
|
||||
expand=False, **kwargs):
|
||||
collection = ServiceGroupCollection()
|
||||
collection.service_groups = [
|
||||
ServiceGroup.convert_with_links(ch, expand)
|
||||
for ch in service_groups]
|
||||
url = url or None
|
||||
collection.next = collection.get_next(limit, url=url, **kwargs)
|
||||
return collection
|
||||
|
||||
|
||||
class ServiceGroups(wsme_types.Base):
|
||||
service_groups = wsme_types.text
|
||||
|
||||
|
||||
class ServiceGroupController(rest.RestController):
|
||||
|
||||
def _get_service_groups(self, marker, limit, sort_key, sort_dir):
|
||||
|
||||
limit = utils.validate_limit(limit)
|
||||
sort_dir = utils.validate_sort_dir(sort_dir)
|
||||
marker_obj = None
|
||||
if marker:
|
||||
marker_obj = objects.service_groups.get_by_uuid(
|
||||
pecan.request.context, marker)
|
||||
|
||||
service_groups = pecan.request.dbapi.iservicegroup_get_list(limit,
|
||||
marker_obj,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
return service_groups
|
||||
|
||||
@wsme_pecan.wsexpose(ServiceGroup, unicode)
|
||||
def get_one(self, uuid):
|
||||
rpc_sg = objects.service_groups.get_by_uuid(pecan.request.context, uuid)
|
||||
|
||||
return ServiceGroup.convert_with_links(rpc_sg)
|
||||
|
||||
@wsme_pecan.wsexpose(ServiceGroupCollection, unicode, int,
|
||||
unicode, unicode)
|
||||
def get_all(self, marker=None, limit=None,
|
||||
sort_key='name', sort_dir='asc'):
|
||||
"""Retrieve list of servicegroups."""
|
||||
|
||||
service_groups = self._get_service_groups(marker,
|
||||
limit,
|
||||
sort_key,
|
||||
sort_dir)
|
||||
|
||||
return ServiceGroupCollection.convert_with_links(service_groups, limit,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
|
||||
# cursor = pecan.request.database.cursor()
|
||||
|
||||
# cursor.execute("SELECT name, state from service_groups")
|
||||
|
||||
# data = cursor.fetchall()
|
||||
|
||||
# if data is not None:
|
||||
# service_groups = []
|
||||
|
||||
# for row in data:
|
||||
# service_groups.append({'name': row[0], 'state': row[1]})
|
||||
|
||||
# return ServiceGroups(service_groups=json.dumps(service_groups))
|
||||
|
||||
#return wsme.api.Response(ServiceGroups(service_groups=json.dumps([])),
|
||||
# status_code=404)
|
||||
|
||||
@wsme_pecan.wsexpose(ServiceGroupCommandResult, unicode,
|
||||
body=ServiceGroupCommand)
|
||||
def put(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
||||
|
||||
@wsme_pecan.wsexpose(ServiceGroupCommandResult, unicode,
|
||||
body=ServiceGroupCommand)
|
||||
def patch(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
447
service-mgmt-api/sm-api/sm_api/api/controllers/v1/servicenode.py
Executable file
@ -0,0 +1,447 @@
|
||||
#!/usr/bin/env python
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2013 Red Hat, Inc.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import pecan
|
||||
from pecan import rest
|
||||
import wsme
|
||||
from wsme import types as wtypes
|
||||
import wsmeext.pecan as wsme_pecan
|
||||
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import smc_api
|
||||
from sm_api.openstack.common import log
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
ERR_CODE_SUCCESS = "0"
|
||||
ERR_CODE_HOST_NOT_FOUND = "-1000"
|
||||
ERR_CODE_ACTION_FAILED = "-1001"
|
||||
ERR_CODE_NO_HOST_TO_SWACT_TO = "-1002"
|
||||
|
||||
SM_NODE_STATE_UNKNOWN = "unknown"
|
||||
SM_NODE_ADMIN_LOCKED = "locked"
|
||||
SM_NODE_ADMIN_UNLOCKED = "unlocked"
|
||||
SM_NODE_OPER_ENABLED = "enabled"
|
||||
SM_NODE_OPER_DISABLED = "disabled"
|
||||
SM_NODE_AVAIL_AVAILABLE = "available"
|
||||
SM_NODE_AVAIL_DEGRADED = "degraded"
|
||||
SM_NODE_AVAIL_FAILED = "failed"
|
||||
|
||||
# sm_types.c
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_NIL = "nil"
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_UNKNOWN = "unknown"
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_NONE = "none"
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_N = "N"
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_N_PLUS_M = "N + M"
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_N_TO_1 = "N to 1"
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_N_TO_N = "N to N"
|
||||
|
||||
# sm_types.c
|
||||
SM_SERVICE_GROUP_STATE_NIL = "nil"
|
||||
SM_SERVICE_GROUP_STATE_NA = "not-applicable"
|
||||
SM_SERVICE_GROUP_STATE_INITIAL = "initial"
|
||||
SM_SERVICE_GROUP_STATE_UNKNOWN = "unknown"
|
||||
SM_SERVICE_GROUP_STATE_STANDBY = "standby"
|
||||
SM_SERVICE_GROUP_STATE_GO_STANDBY = "go-standby"
|
||||
SM_SERVICE_GROUP_STATE_GO_ACTIVE = "go-active"
|
||||
SM_SERVICE_GROUP_STATE_ACTIVE = "active"
|
||||
SM_SERVICE_GROUP_STATE_DISABLING = "disabling"
|
||||
SM_SERVICE_GROUP_STATE_DISABLED = "disabled"
|
||||
SM_SERVICE_GROUP_STATE_SHUTDOWN = "shutdown"
|
||||
|
||||
# sm_types.c
|
||||
SM_SERVICE_GROUP_STATUS_NIL = "nil"
|
||||
SM_SERVICE_GROUP_STATUS_NONE = ""
|
||||
SM_SERVICE_GROUP_STATUS_WARN = "warn"
|
||||
SM_SERVICE_GROUP_STATUS_DEGRADED = "degraded"
|
||||
SM_SERVICE_GROUP_STATUS_FAILED = "failed"
|
||||
|
||||
# sm_types.c
|
||||
SM_SERVICE_GROUP_CONDITION_NIL = "nil"
|
||||
SM_SERVICE_GROUP_CONDITION_NONE = ""
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_INCONSISTENT = "data-inconsistent"
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_OUTDATED = "data-outdated"
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_CONSISTENT = "data-consistent"
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_SYNC = "data-syncing"
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_STANDALONE = "data-standalone"
|
||||
SM_SERVICE_GROUP_CONDITION_RECOVERY_FAILURE = "recovery-failure"
|
||||
SM_SERVICE_GROUP_CONDITION_ACTION_FAILURE = "action-failure"
|
||||
SM_SERVICE_GROUP_CONDITION_FATAL_FAILURE = "fatal-failure"
|
||||
|
||||
|
||||
class ServiceNodeCommand(base.APIBase):
|
||||
origin = wtypes.text
|
||||
action = wtypes.text # swact | swact-force | unlock | lock | event
|
||||
admin = wtypes.text # locked | unlocked
|
||||
oper = wtypes.text # enabled | disabled
|
||||
avail = wtypes.text # none | ...
|
||||
|
||||
|
||||
class ServiceNodeCommandResult(base.APIBase):
|
||||
# Origin and Host Information
|
||||
origin = wtypes.text # e.g. "mtce" or "sm"
|
||||
hostname = wtypes.text
|
||||
# Command
|
||||
action = wtypes.text
|
||||
admin = wtypes.text
|
||||
oper = wtypes.text
|
||||
avail = wtypes.text
|
||||
# Result
|
||||
error_code = wtypes.text
|
||||
error_details = wtypes.text
|
||||
|
||||
|
||||
class ServiceNode(base.APIBase):
|
||||
origin = wtypes.text
|
||||
hostname = wtypes.text
|
||||
admin = wtypes.text
|
||||
oper = wtypes.text
|
||||
avail = wtypes.text
|
||||
active_services = wtypes.text
|
||||
swactable_services = wtypes.text
|
||||
|
||||
|
||||
class ServiceNodeController(rest.RestController):
|
||||
|
||||
def __init__(self, from_isystem=False):
|
||||
self._seqno = 0
|
||||
|
||||
def _seqno_incr_get(self):
|
||||
self._seqno += 1
|
||||
return self._seqno
|
||||
|
||||
def _seqno_get(self):
|
||||
return self._seqno
|
||||
|
||||
def _get_current_sm_sdas(self):
|
||||
sm_sdas = pecan.request.dbapi.sm_sda_get_list()
|
||||
for sm in sm_sdas:
|
||||
LOG.debug("sm-api sm_sdas= %s" % sm.as_dict())
|
||||
|
||||
return sm_sdas
|
||||
|
||||
def _sm_sdm_get(self, server, service_group_name):
|
||||
return pecan.request.dbapi.sm_sdm_get(server, service_group_name)
|
||||
|
||||
def _smc_node_exists(self, hostname):
|
||||
# check whether hostname exists in nodes table
|
||||
node_exists = False
|
||||
sm_nodes = pecan.request.dbapi.sm_node_get_by_name(hostname)
|
||||
for sm_node in sm_nodes:
|
||||
node_exists = True
|
||||
|
||||
return node_exists
|
||||
|
||||
def _get_sm_node_state(self, hostname):
|
||||
sm_nodes = pecan.request.dbapi.sm_node_get_by_name(hostname)
|
||||
|
||||
# default values
|
||||
node_state = {'hostname': hostname,
|
||||
'admin': SM_NODE_STATE_UNKNOWN,
|
||||
'oper': SM_NODE_STATE_UNKNOWN,
|
||||
'avail': SM_NODE_STATE_UNKNOWN}
|
||||
|
||||
for sm_node in sm_nodes:
|
||||
node_state = {'hostname': hostname,
|
||||
'admin': sm_node.administrative_state,
|
||||
'oper': sm_node.operational_state,
|
||||
'avail': sm_node.availability_status}
|
||||
break
|
||||
|
||||
LOG.debug("sm-api get_sm_node_state hostname: %s" % (node_state))
|
||||
return node_state
|
||||
|
||||
def _have_active_sm_services(self, hostname, sm_sdas):
|
||||
# check db service_domain_assignments for any "active"
|
||||
# in either state or desired state:
|
||||
active_sm_services = False
|
||||
|
||||
# active: current or transition state
|
||||
active_attr_list = [SM_SERVICE_GROUP_STATE_ACTIVE,
|
||||
SM_SERVICE_GROUP_STATE_GO_ACTIVE,
|
||||
SM_SERVICE_GROUP_STATE_GO_STANDBY,
|
||||
SM_SERVICE_GROUP_STATE_DISABLING,
|
||||
SM_SERVICE_GROUP_STATE_UNKNOWN]
|
||||
|
||||
for sm_sda in sm_sdas:
|
||||
if sm_sda.node_name == hostname:
|
||||
for aa in active_attr_list:
|
||||
if sm_sda.state == aa or sm_sda.desired_state == aa:
|
||||
active_sm_services = True
|
||||
LOG.debug("sm-api have_active_sm_services True")
|
||||
return active_sm_services
|
||||
|
||||
LOG.debug("sm-api have_active_sm_services: False")
|
||||
return active_sm_services
|
||||
|
||||
def _have_swactable_sm_services(self, hostname, sm_sdas):
|
||||
# check db service_domain_assignments for any "active"
|
||||
# in either state or desired state:
|
||||
swactable_sm_services = False
|
||||
|
||||
# active: current or transition state
|
||||
active_attr_list = [SM_SERVICE_GROUP_STATE_ACTIVE,
|
||||
SM_SERVICE_GROUP_STATE_GO_ACTIVE,
|
||||
SM_SERVICE_GROUP_STATE_GO_STANDBY,
|
||||
SM_SERVICE_GROUP_STATE_DISABLING,
|
||||
SM_SERVICE_GROUP_STATE_UNKNOWN]
|
||||
|
||||
for sm_sda in sm_sdas:
|
||||
if sm_sda.node_name == hostname:
|
||||
for aa in active_attr_list:
|
||||
if sm_sda.state == aa or sm_sda.desired_state == aa:
|
||||
sdm = self._sm_sdm_get(sm_sda.name,
|
||||
sm_sda.service_group_name)
|
||||
if sdm.redundancy_model == \
|
||||
SM_SERVICE_DOMAIN_MEMBER_REDUNDANCY_MODEL_N_PLUS_M:
|
||||
swactable_sm_services = True
|
||||
LOG.debug("sm-api have_swactable_sm_services True")
|
||||
return swactable_sm_services
|
||||
|
||||
LOG.debug("sm-api have_active_sm_services: False")
|
||||
return swactable_sm_services
|
||||
|
||||
def _swact_pre_check(self, hostname):
|
||||
# run pre-swact checks, verify that services are in the right state
|
||||
# to accept service
|
||||
have_destination = False
|
||||
check_result = None
|
||||
|
||||
sm_sdas = pecan.request.dbapi.sm_sda_get_list(None, None,
|
||||
sort_key='name',
|
||||
sort_dir='asc')
|
||||
|
||||
origin_state = self._collect_svc_state(sm_sdas, hostname)
|
||||
|
||||
for sm_sda in sm_sdas:
|
||||
if sm_sda.node_name != hostname:
|
||||
have_destination = True
|
||||
|
||||
# Verify that target host state is unlocked-enabled
|
||||
node_state = self._get_sm_node_state(sm_sda.node_name)
|
||||
if SM_NODE_ADMIN_LOCKED == node_state['admin']:
|
||||
check_result = ("%s is not ready to take service, "
|
||||
"%s is locked"
|
||||
% (sm_sda.node_name, sm_sda.node_name))
|
||||
break
|
||||
|
||||
if SM_NODE_OPER_DISABLED == node_state['oper']:
|
||||
check_result = ("%s is not ready to take service, "
|
||||
"%s is disabled"
|
||||
% (sm_sda.node_name, sm_sda.node_name))
|
||||
break
|
||||
|
||||
# Verify that
|
||||
# all the services are in the standby or active
|
||||
# state on the other host
|
||||
# or service only provisioned in the other host
|
||||
# or service state are the same on both hosts
|
||||
if SM_SERVICE_GROUP_STATE_ACTIVE != sm_sda.state \
|
||||
and SM_SERVICE_GROUP_STATE_STANDBY != sm_sda.state \
|
||||
and sm_sda.service_group_name in origin_state \
|
||||
and origin_state[sm_sda.service_group_name] != sm_sda.state:
|
||||
check_result = (
|
||||
"%s on %s is not ready to take service, "
|
||||
"service not in the active or standby "
|
||||
"state" % (sm_sda.service_group_name,
|
||||
sm_sda.node_name))
|
||||
break
|
||||
|
||||
# Verify that all the services are in the desired state on
|
||||
# the other host
|
||||
if sm_sda.desired_state != sm_sda.state:
|
||||
check_result = ("%s on %s is not ready to take service, "
|
||||
"services transitioning state"
|
||||
% (sm_sda.service_group_name,
|
||||
sm_sda.node_name))
|
||||
break
|
||||
|
||||
# Verify that all the services are ready to accept service
|
||||
# i.e. not failed or syncing data
|
||||
if SM_SERVICE_GROUP_STATUS_FAILED == sm_sda.status:
|
||||
check_result = ("%s on %s is not ready to take service, "
|
||||
"service is failed"
|
||||
% (sm_sda.service_group_name,
|
||||
sm_sda.node_name))
|
||||
break
|
||||
|
||||
elif SM_SERVICE_GROUP_STATUS_DEGRADED == sm_sda.status:
|
||||
degraded_conditions \
|
||||
= [SM_SERVICE_GROUP_CONDITION_DATA_INCONSISTENT,
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_OUTDATED,
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_CONSISTENT,
|
||||
SM_SERVICE_GROUP_CONDITION_DATA_STANDALONE]
|
||||
|
||||
if sm_sda.condition == SM_SERVICE_GROUP_CONDITION_DATA_SYNC:
|
||||
check_result = ("%s on %s is not ready to take "
|
||||
"service, service is syncing data"
|
||||
% (sm_sda.service_group_name,
|
||||
sm_sda.node_name))
|
||||
break
|
||||
|
||||
elif sm_sda.condition in degraded_conditions:
|
||||
check_result = ("%s on %s is not ready to take "
|
||||
"service, service is degraded, %s"
|
||||
% (sm_sda.service_group_name,
|
||||
sm_sda.node_name, sm_sda.condition))
|
||||
break
|
||||
else:
|
||||
check_result = ("%s on %s is not ready to take "
|
||||
"service, service is degraded"
|
||||
% (sm_sda.service_group_name,
|
||||
sm_sda.node_name))
|
||||
break
|
||||
|
||||
if check_result is None and not have_destination:
|
||||
check_result = "no peer available"
|
||||
|
||||
if check_result is not None:
|
||||
LOG.info("swact pre-check failed host %s, reason=%s."
|
||||
% (hostname, check_result))
|
||||
|
||||
return check_result
|
||||
|
||||
@staticmethod
|
||||
def _collect_svc_state(sm_sdas, hostname):
|
||||
sm_state_ht = {}
|
||||
for sm_sda in sm_sdas:
|
||||
if sm_sda.node_name == hostname:
|
||||
sm_state_ht[sm_sda.service_group_name] = sm_sda.state
|
||||
LOG.info("%s" % sm_state_ht)
|
||||
return sm_state_ht
|
||||
|
||||
def _do_modify_command(self, hostname, command):
|
||||
|
||||
if command.action == smc_api.SM_NODE_ACTION_SWACT_PRE_CHECK or \
|
||||
command.action == smc_api.SM_NODE_ACTION_SWACT:
|
||||
check_result = self._swact_pre_check(hostname)
|
||||
if check_result is not None:
|
||||
result = ServiceNodeCommandResult(
|
||||
origin="sm", hostname=hostname, action=command.action,
|
||||
admin=command.admin, oper=command.oper,
|
||||
avail=command.avail, error_code=ERR_CODE_ACTION_FAILED,
|
||||
error_details=check_result)
|
||||
|
||||
if command.action == smc_api.SM_NODE_ACTION_SWACT_PRE_CHECK:
|
||||
return wsme.api.Response(result, status_code=200)
|
||||
return wsme.api.Response(result, status_code=400)
|
||||
|
||||
elif command.action == smc_api.SM_NODE_ACTION_SWACT_PRE_CHECK:
|
||||
result = ServiceNodeCommandResult(
|
||||
origin="sm", hostname=hostname, action=command.action,
|
||||
admin=command.admin, oper=command.oper,
|
||||
avail=command.avail, error_code=ERR_CODE_SUCCESS,
|
||||
error_details=check_result)
|
||||
return wsme.api.Response(result, status_code=200)
|
||||
|
||||
if command.action == smc_api.SM_NODE_ACTION_UNLOCK or \
|
||||
command.action == smc_api.SM_NODE_ACTION_LOCK or \
|
||||
command.action == smc_api.SM_NODE_ACTION_SWACT or \
|
||||
command.action == smc_api.SM_NODE_ACTION_SWACT_FORCE or \
|
||||
command.action == smc_api.SM_NODE_ACTION_EVENT:
|
||||
|
||||
sm_ack_dict = smc_api.sm_api_set_node_state(command.origin,
|
||||
hostname,
|
||||
command.action,
|
||||
command.admin,
|
||||
command.avail,
|
||||
command.oper,
|
||||
self._seqno_incr_get())
|
||||
|
||||
ack_admin = sm_ack_dict['SM_API_MSG_NODE_ADMIN'].lower()
|
||||
ack_oper = sm_ack_dict['SM_API_MSG_NODE_OPER'].lower()
|
||||
ack_avail = sm_ack_dict['SM_API_MSG_NODE_AVAIL'].lower()
|
||||
|
||||
LOG.debug("sm-api _do_modify_command sm_ack_dict: %s ACK admin: "
|
||||
"%s oper: %s avail: %s." % (sm_ack_dict, ack_admin,
|
||||
ack_oper, ack_avail))
|
||||
|
||||
# loose check on admin and oper only
|
||||
if (command.admin == ack_admin) and (command.oper == ack_oper):
|
||||
return ServiceNodeCommandResult(
|
||||
origin=sm_ack_dict['SM_API_MSG_ORIGIN'],
|
||||
hostname=sm_ack_dict['SM_API_MSG_NODE_NAME'],
|
||||
action=sm_ack_dict['SM_API_MSG_NODE_ACTION'],
|
||||
admin=ack_admin,
|
||||
oper=ack_oper,
|
||||
avail=ack_avail,
|
||||
error_code=ERR_CODE_SUCCESS,
|
||||
error_details="success")
|
||||
else:
|
||||
result = ServiceNodeCommandResult(
|
||||
origin="sm",
|
||||
hostname=hostname,
|
||||
action=sm_ack_dict['SM_API_MSG_NODE_ACTION'],
|
||||
admin=ack_admin,
|
||||
oper=ack_oper,
|
||||
avail=ack_avail,
|
||||
error_code=ERR_CODE_ACTION_FAILED,
|
||||
error_details="action failed")
|
||||
|
||||
return wsme.api.Response(result, status_code=500)
|
||||
else:
|
||||
raise wsme.exc.InvalidInput('action', command.action, "unknown")
|
||||
|
||||
@wsme_pecan.wsexpose(ServiceNode, unicode)
|
||||
def get_one(self, hostname):
|
||||
|
||||
try:
|
||||
data = self._get_sm_node_state(hostname)
|
||||
except Exception:
|
||||
LOG.exception("No entry in database for %s:" % hostname)
|
||||
return ServiceNode(origin="sm",
|
||||
hostname=hostname,
|
||||
admin=SM_NODE_STATE_UNKNOWN,
|
||||
oper=SM_NODE_STATE_UNKNOWN,
|
||||
avail=SM_NODE_STATE_UNKNOWN,
|
||||
active_services="unknown",
|
||||
swactable_services="unknown")
|
||||
|
||||
sm_sdas = self._get_current_sm_sdas()
|
||||
|
||||
if self._have_active_sm_services(hostname, sm_sdas):
|
||||
active_services = "yes"
|
||||
else:
|
||||
active_services = "no"
|
||||
|
||||
if self._have_swactable_sm_services(hostname, sm_sdas):
|
||||
swactable_services = "yes"
|
||||
else:
|
||||
swactable_services = "no"
|
||||
|
||||
return ServiceNode(origin="sm",
|
||||
hostname=data['hostname'],
|
||||
admin=data['admin'],
|
||||
oper=data['oper'],
|
||||
avail=data['avail'],
|
||||
active_services=active_services,
|
||||
swactable_services=swactable_services)
|
||||
|
||||
@wsme_pecan.wsexpose(ServiceNodeCommandResult, unicode,
|
||||
body=ServiceNodeCommand)
|
||||
def patch(self, hostname, command):
|
||||
|
||||
if command.origin != "mtce" and command.origin != "sysinv":
|
||||
LOG.warn("sm-api unexpected origin: %s. Continuing."
|
||||
% command.origin)
|
||||
|
||||
return self._do_modify_command(hostname, command)
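For reference, a hedged sketch of the kind of PATCH body a caller such as mtce might send to this controller and the fields it can expect back. The field names come from ServiceNodeCommand/ServiceNodeCommandResult above, while the mount path and port are assumptions.

# Illustrative only: '/v1/servicenode/<hostname>' and port 7777 are assumptions.
import json
import urllib2   # Python 2, matching the rest of this tree

command = {
    "origin": "mtce",
    "action": "unlock",      # swact | swact-force | unlock | lock | event
    "admin": "unlocked",
    "oper": "enabled",
    "avail": "available",
}
req = urllib2.Request("http://localhost:7777/v1/servicenode/controller-0",
                      data=json.dumps(command),
                      headers={"Content-Type": "application/json"})
req.get_method = lambda: "PATCH"
result = json.loads(urllib2.urlopen(req).read())
# A successful ack echoes the requested states with error_code "0"; a failed
# swact pre-check returns error_code "-1001" plus error_details.
print(result.get("error_code"), result.get("error_details"))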
|
144
service-mgmt-api/sm-api/sm_api/api/controllers/v1/services.py
Normal file
@ -0,0 +1,144 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import json
|
||||
import wsme
|
||||
from wsme import types as wsme_types
|
||||
import wsmeext.pecan as wsme_pecan
|
||||
import pecan
|
||||
from pecan import rest
|
||||
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import collection
|
||||
from sm_api.api.controllers.v1 import link
|
||||
from sm_api.api.controllers.v1 import utils
|
||||
|
||||
from sm_api.common import log
|
||||
from sm_api import objects
|
||||
|
||||
|
||||
LOG = log.get_logger(__name__)
|
||||
|
||||
|
||||
class ServicesCommand(wsme_types.Base):
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
|
||||
|
||||
class ServicesCommandResult(wsme_types.Base):
|
||||
# Host Information
|
||||
hostname = wsme_types.text
|
||||
state = wsme_types.text
|
||||
# Command
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
# Result
|
||||
error_code = wsme_types.text
|
||||
error_details = wsme_types.text
|
||||
|
||||
|
||||
class Services(base.APIBase):
|
||||
id = int
|
||||
name = wsme_types.text
|
||||
desired_state = wsme_types.text
|
||||
state = wsme_types.text
|
||||
status = wsme_types.text
|
||||
|
||||
# online_uuid = wsme_types.text
|
||||
# "The UUID of the services"
|
||||
|
||||
links = [link.Link]
|
||||
"A list containing a self link and associated services links"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self.fields = objects.service.fields.keys()
|
||||
for k in self.fields:
|
||||
setattr(self, k, kwargs.get(k))
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, rpc_services, expand=True):
|
||||
minimum_fields = ['id', 'name', 'desired_state', 'state', 'status']
|
||||
fields = minimum_fields if not expand else None
|
||||
services = Services.from_rpc_object(
|
||||
rpc_services, fields)
|
||||
|
||||
return services
|
||||
|
||||
|
||||
class ServicesCollection(collection.Collection):
|
||||
"""API representation of a collection of services."""
|
||||
|
||||
services = [Services]
|
||||
"A list containing services objects"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self._type = 'services'
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, services, limit, url=None,
|
||||
expand=False, **kwargs):
|
||||
collection = ServicesCollection()
|
||||
collection.services = [
|
||||
Services.convert_with_links(ch, expand)
|
||||
for ch in services]
|
||||
url = url or None
|
||||
collection.next = collection.get_next(limit, url=url, **kwargs)
|
||||
return collection
|
||||
|
||||
|
||||
# class Servicess(wsme_types.Base):
|
||||
# services = wsme_types.text
|
||||
class ServicesController(rest.RestController):
|
||||
|
||||
def _get_services(self, marker, limit, sort_key, sort_dir):
|
||||
|
||||
limit = utils.validate_limit(limit)
|
||||
sort_dir = utils.validate_sort_dir(sort_dir)
|
||||
marker_obj = None
|
||||
if marker:
|
||||
marker_obj = objects.service.get_by_uuid(
|
||||
pecan.request.context, marker)
|
||||
|
||||
services = pecan.request.dbapi.sm_service_get_list(limit,
|
||||
marker_obj,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
return services
|
||||
|
||||
@wsme_pecan.wsexpose(Services, unicode)
|
||||
def get_one(self, uuid):
|
||||
rpc_sg = objects.service.get_by_uuid(pecan.request.context, uuid)
|
||||
|
||||
return Services.convert_with_links(rpc_sg)
|
||||
|
||||
@wsme_pecan.wsexpose(ServicesCollection, unicode, int,
|
||||
unicode, unicode)
|
||||
def get_all(self, marker=None, limit=None,
|
||||
sort_key='name', sort_dir='asc'):
|
||||
"""Retrieve list of services."""
|
||||
|
||||
services = self._get_services(marker,
|
||||
limit,
|
||||
sort_key,
|
||||
sort_dir)
|
||||
|
||||
return ServicesCollection.convert_with_links(services, limit,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
|
||||
@wsme_pecan.wsexpose(ServicesCommandResult, unicode,
|
||||
body=ServicesCommand)
|
||||
def put(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
||||
|
||||
@wsme_pecan.wsexpose(ServicesCommandResult, unicode,
|
||||
body=ServicesCommand)
|
||||
def patch(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
162
service-mgmt-api/sm-api/sm_api/api/controllers/v1/sm_sda.py
Executable file
@ -0,0 +1,162 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import json
|
||||
import wsme
|
||||
from wsme import types as wsme_types
|
||||
import wsmeext.pecan as wsme_pecan
|
||||
import pecan
|
||||
from pecan import rest
|
||||
|
||||
from sm_api.api.controllers.v1 import base
|
||||
from sm_api.api.controllers.v1 import collection
|
||||
from sm_api.api.controllers.v1 import link
|
||||
from sm_api.api.controllers.v1 import utils
|
||||
|
||||
from sm_api.common import log
|
||||
from sm_api import objects
|
||||
|
||||
|
||||
LOG = log.get_logger(__name__)
|
||||
|
||||
|
||||
class SmSdaCommand(wsme_types.Base):
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
|
||||
|
||||
class SmSdaCommandResult(wsme_types.Base):
|
||||
# Host Information
|
||||
hostname = wsme_types.text
|
||||
state = wsme_types.text
|
||||
# Command
|
||||
path = wsme_types.text
|
||||
value = wsme_types.text
|
||||
op = wsme_types.text
|
||||
# Result
|
||||
error_code = wsme_types.text
|
||||
error_details = wsme_types.text
|
||||
|
||||
|
||||
class SmSda(base.APIBase):
|
||||
id = int
|
||||
uuid = wsme_types.text
|
||||
"The UUID of the sm_sda"
|
||||
|
||||
name = wsme_types.text
|
||||
node_name = wsme_types.text
|
||||
service_group_name = wsme_types.text
|
||||
state = wsme_types.text
|
||||
desired_state = wsme_types.text
|
||||
status = wsme_types.text
|
||||
condition = wsme_types.text
|
||||
|
||||
links = [link.Link]
|
||||
"A list containing a self link and associated sm_sda links"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self.fields = objects.sm_sda.fields.keys()
|
||||
for k in self.fields:
|
||||
setattr(self, k, kwargs.get(k))
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, rpc_sm_sda, expand=True):
|
||||
minimum_fields = ['id', 'uuid', 'name', 'node_name',
|
||||
'service_group_name', 'desired_state',
|
||||
'state', 'status', 'condition']
|
||||
fields = minimum_fields if not expand else None
|
||||
sm_sda = SmSda.from_rpc_object(
|
||||
rpc_sm_sda, fields)
|
||||
|
||||
return sm_sda
|
||||
|
||||
|
||||
class SmSdaCollection(collection.Collection):
|
||||
"""API representation of a collection of sm_sda."""
|
||||
|
||||
sm_sda = [SmSda]
|
||||
"A list containing sm_sda objects"
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
self._type = 'sm_sda'
|
||||
|
||||
@classmethod
|
||||
def convert_with_links(cls, sm_sda, limit, url=None,
|
||||
expand=False, **kwargs):
|
||||
collection = SmSdaCollection()
|
||||
collection.sm_sda = [
|
||||
SmSda.convert_with_links(ch, expand)
|
||||
for ch in sm_sda]
|
||||
url = url or None
|
||||
collection.next = collection.get_next(limit, url=url, **kwargs)
|
||||
return collection
|
||||
|
||||
|
||||
class SmSdas(wsme_types.Base):
|
||||
sm_sda = wsme_types.text
|
||||
|
||||
|
||||
class SmSdaController(rest.RestController):
|
||||
|
||||
def _get_sm_sda(self, marker, limit, sort_key, sort_dir):
|
||||
|
||||
limit = utils.validate_limit(limit)
|
||||
sort_dir = utils.validate_sort_dir(sort_dir)
|
||||
marker_obj = None
|
||||
if marker:
|
||||
marker_obj = objects.sm_sda.get_by_uuid(pecan.request.context,
|
||||
marker)
|
||||
|
||||
sm_sdas = pecan.request.dbapi.sm_sda_get_list(limit,
|
||||
marker_obj,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
|
||||
# Remap OpenStack_Services to Cloud_Services
|
||||
for sm_sda in sm_sdas:
|
||||
if sm_sda.service_group_name.lower() == "openstack_services":
|
||||
sm_sda.service_group_name = "Cloud_Services"
|
||||
|
||||
return sm_sdas
|
||||
|
||||
@wsme_pecan.wsexpose(SmSda, unicode)
|
||||
def get_one(self, uuid):
|
||||
|
||||
rpc_sda = objects.sm_sda.get_by_uuid(pecan.request.context, uuid)
|
||||
|
||||
# temp: remap OpenStack_Services to Cloud_Services
|
||||
if rpc_sda.service_group_name.lower() == "openstack_services":
|
||||
rpc_sda.service_group_name = "Cloud_Services"
|
||||
|
||||
return SmSda.convert_with_links(rpc_sda)
|
||||
|
||||
@wsme_pecan.wsexpose(SmSdaCollection, unicode, int,
|
||||
unicode, unicode)
|
||||
def get_all(self, marker=None, limit=None,
|
||||
sort_key='name', sort_dir='asc'):
|
||||
"""Retrieve list of sm_sdas."""
|
||||
|
||||
sm_sda = self._get_sm_sda(marker,
|
||||
limit,
|
||||
sort_key,
|
||||
sort_dir)
|
||||
|
||||
return SmSdaCollection.convert_with_links(sm_sda, limit,
|
||||
sort_key=sort_key,
|
||||
sort_dir=sort_dir)
|
||||
|
||||
@wsme_pecan.wsexpose(SmSdaCommandResult, unicode,
|
||||
body=SmSdaCommand)
|
||||
def put(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
||||
|
||||
@wsme_pecan.wsexpose(SmSdaCommandResult, unicode,
|
||||
body=SmSdaCommand)
|
||||
def patch(self, hostname, command):
|
||||
|
||||
raise NotImplementedError()
|
160
service-mgmt-api/sm-api/sm_api/api/controllers/v1/smc_api.py
Executable file
@ -0,0 +1,160 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
import os
|
||||
import socket
|
||||
|
||||
from sm_api.openstack.common import log
|
||||
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
SM_API_SERVER_ADDR = "/tmp/.sm_server_api"
|
||||
SM_API_CLIENT_ADDR = "/tmp/.sm_client_api"
|
||||
|
||||
SM_API_MSG_VERSION = "1"
|
||||
SM_API_MSG_REVISION = "1"
|
||||
SM_API_MSG_TYPE_SET_NODE = "SET_NODE"
|
||||
SM_API_MSG_TYPE_SET_NODE_ACK = "SET_NODE_ACK"
|
||||
|
||||
SM_API_MSG_NODE_ADMINSTATE_LOCK = "LOCK"
|
||||
SM_API_MSG_NODE_ADMINSTATE_UNLOCK = "UNLOCK"
|
||||
|
||||
# offsets
|
||||
SM_API_MSG_VERSION_FIELD = 0
|
||||
SM_API_MSG_REVISION_FIELD = 1
|
||||
SM_API_MSG_SEQNO_FIELD = 2
|
||||
SM_API_MSG_TYPE_FIELD = 3
|
||||
SM_API_MSG_ORIGIN_FIELD = 4
|
||||
SM_API_MSG_NODE_NAME_FIELD = 5
|
||||
SM_API_MSG_NODE_ACTION_FIELD = 6
|
||||
SM_API_MSG_NODE_ADMIN_FIELD = 7
|
||||
SM_API_MSG_NODE_OPER_FIELD = 8
|
||||
SM_API_MSG_NODE_AVAIL_FIELD = 9
|
||||
SM_API_MAX_MSG_SIZE = 2048
|
||||
|
||||
SM_NODE_ACTION_UNLOCK = "unlock"
|
||||
SM_NODE_ACTION_LOCK = "lock"
|
||||
SM_NODE_ACTION_SWACT_PRE_CHECK = "swact-pre-check"
|
||||
SM_NODE_ACTION_SWACT = "swact"
|
||||
SM_NODE_ACTION_SWACT_FORCE = "swact-force"
|
||||
SM_NODE_ACTION_EVENT = "event"
|
||||
|
||||
|
||||
def sm_api_notify(sm_dict):
|
||||
|
||||
sm_ack_dict = {}
|
||||
|
||||
sm_buf_dict = {'SM_API_MSG_VERSION': SM_API_MSG_VERSION,
|
||||
'SM_API_MSG_REVISION': SM_API_MSG_REVISION}
|
||||
|
||||
sm_buf_dict.update(sm_dict)
|
||||
sm_buf = ("%s,%s,%i,%s,%s,%s,%s,%s,%s,%s" % (
|
||||
sm_buf_dict['SM_API_MSG_VERSION'],
|
||||
sm_buf_dict['SM_API_MSG_REVISION'],
|
||||
sm_buf_dict['SM_API_MSG_SEQNO'],
|
||||
sm_buf_dict['SM_API_MSG_TYPE'],
|
||||
sm_buf_dict['SM_API_MSG_ORIGIN'],
|
||||
sm_buf_dict['SM_API_MSG_NODE_NAME'],
|
||||
sm_buf_dict['SM_API_MSG_NODE_ACTION'],
|
||||
sm_buf_dict['SM_API_MSG_NODE_ADMIN'],
|
||||
sm_buf_dict['SM_API_MSG_NODE_OPER'],
|
||||
sm_buf_dict['SM_API_MSG_NODE_AVAIL']))
|
||||
|
||||
LOG.debug("sm-api buffer to SM API: %s" % sm_buf)
|
||||
|
||||
# notify SM
|
||||
s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
|
||||
try:
|
||||
if os.path.exists(SM_API_CLIENT_ADDR):
|
||||
os.unlink(SM_API_CLIENT_ADDR)
|
||||
|
||||
s.setblocking(1) # blocking, timeout must be specified
|
||||
s.settimeout(6) # give sm a few secs to respond
|
||||
s.bind(SM_API_CLIENT_ADDR)
|
||||
s.sendto(sm_buf, SM_API_SERVER_ADDR)
|
||||
|
||||
count = 0
|
||||
while count < 5:
|
||||
count += 1
|
||||
sm_ack = s.recv(1024)
|
||||
|
||||
try:
|
||||
sm_ack_list = sm_ack.split(",")
|
||||
if sm_ack_list[SM_API_MSG_SEQNO_FIELD] == \
|
||||
str(sm_buf_dict['SM_API_MSG_SEQNO']):
|
||||
break
|
||||
else:
|
||||
LOG.debug(_("sm-api mismatch seqno tx message: %s rx message: %s " % (sm_buf, sm_ack)))
|
||||
except Exception:
|
||||
LOG.exception(_("sm-api bad rx message: %s" % sm_ack))
|
||||
|
||||
except socket.error as e:
|
||||
LOG.exception(_("sm-api socket error: %s on %s") % (e, sm_buf))
|
||||
sm_ack_dict = {
|
||||
'SM_API_MSG_TYPE': "unknown_set_node",
|
||||
'SM_API_MSG_NODE_ACTION': sm_dict['SM_API_MSG_NODE_ACTION'],
|
||||
'SM_API_MSG_ORIGIN': "sm",
|
||||
'SM_API_MSG_NODE_NAME': sm_dict['SM_API_MSG_NODE_NAME'],
|
||||
'SM_API_MSG_NODE_ADMIN': "unknown",
|
||||
'SM_API_MSG_NODE_OPER': "unknown",
|
||||
'SM_API_MSG_NODE_AVAIL': "unknown"}
|
||||
|
||||
return sm_ack_dict
|
||||
|
||||
finally:
|
||||
s.close()
|
||||
if os.path.exists(SM_API_CLIENT_ADDR):
|
||||
os.unlink(SM_API_CLIENT_ADDR)
|
||||
|
||||
LOG.debug("sm-api set node state sm_ack %s " % sm_ack)
|
||||
try:
|
||||
sm_ack_list = sm_ack.split(",")
|
||||
sm_ack_dict = {
|
||||
'SM_API_MSG_VERSION': sm_ack_list[SM_API_MSG_VERSION_FIELD],
|
||||
'SM_API_MSG_REVISION': sm_ack_list[SM_API_MSG_REVISION_FIELD],
|
||||
'SM_API_MSG_SEQNO': sm_ack_list[SM_API_MSG_SEQNO_FIELD],
|
||||
'SM_API_MSG_TYPE': sm_ack_list[SM_API_MSG_TYPE_FIELD],
|
||||
'SM_API_MSG_NODE_ACTION': sm_ack_list[SM_API_MSG_NODE_ACTION_FIELD],
|
||||
|
||||
'SM_API_MSG_ORIGIN': sm_ack_list[SM_API_MSG_ORIGIN_FIELD],
|
||||
'SM_API_MSG_NODE_NAME': sm_ack_list[SM_API_MSG_NODE_NAME_FIELD],
|
||||
'SM_API_MSG_NODE_ADMIN': sm_ack_list[SM_API_MSG_NODE_ADMIN_FIELD],
|
||||
'SM_API_MSG_NODE_OPER': sm_ack_list[SM_API_MSG_NODE_OPER_FIELD],
|
||||
'SM_API_MSG_NODE_AVAIL': sm_ack_list[SM_API_MSG_NODE_AVAIL_FIELD]
|
||||
}
|
||||
except Exception:
|
||||
LOG.exception(_("sm-api ack message error: %s" % sm_ack))
|
||||
sm_ack_dict = {
|
||||
'SM_API_MSG_TYPE': "unknown_set_node",
|
||||
|
||||
'SM_API_MSG_ORIGIN': "sm",
|
||||
'SM_API_MSG_NODE_NAME': sm_dict['SM_API_MSG_NODE_NAME'],
|
||||
'SM_API_MSG_NODE_ADMIN': "unknown",
|
||||
'SM_API_MSG_NODE_OPER': "unknown",
|
||||
'SM_API_MSG_NODE_AVAIL': "unknown"
|
||||
}
|
||||
|
||||
return sm_ack_dict
|
||||
|
||||
|
||||
def sm_api_set_node_state(origin, hostname, action, admin, avail, oper, seqno):
|
||||
sm_ack_dict = {}
|
||||
sm_dict = {'SM_API_MSG_TYPE': SM_API_MSG_TYPE_SET_NODE,
|
||||
|
||||
'SM_API_MSG_ORIGIN': origin,
|
||||
'SM_API_MSG_NODE_NAME': hostname,
|
||||
|
||||
'SM_API_MSG_NODE_ACTION': action,
|
||||
'SM_API_MSG_NODE_ADMIN': admin,
|
||||
'SM_API_MSG_NODE_OPER': oper,
|
||||
'SM_API_MSG_NODE_AVAIL': avail,
|
||||
'SM_API_MSG_SEQNO': seqno,
|
||||
}
|
||||
|
||||
sm_ack_dict = sm_api_notify(sm_dict)
|
||||
|
||||
return sm_ack_dict
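The datagram exchanged with SM above is a flat comma-separated record addressed by the *_FIELD offsets defined earlier in this file. A small illustration of building and splitting one such buffer (the values are made up):

fields = ["1", "1", "42", "SET_NODE", "mtce", "controller-0",
          "unlock", "unlocked", "enabled", "available"]
sm_buf = ",".join(fields)
# -> "1,1,42,SET_NODE,mtce,controller-0,unlock,unlocked,enabled,available"
parts = sm_buf.split(",")
assert parts[SM_API_MSG_SEQNO_FIELD] == "42"             # offset 2
assert parts[SM_API_MSG_NODE_ACTION_FIELD] == "unlock"   # offset 6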
|
98
service-mgmt-api/sm-api/sm_api/api/controllers/v1/utils.py
Normal file
@ -0,0 +1,98 @@
|
||||
#!/usr/bin/env python
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2013 Red Hat, Inc.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import jsonpatch
|
||||
import re
|
||||
import wsme
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
CONF = cfg.CONF
|
||||
|
||||
|
||||
JSONPATCH_EXCEPTIONS = (jsonpatch.JsonPatchException,
|
||||
jsonpatch.JsonPointerException,
|
||||
KeyError)
|
||||
|
||||
|
||||
def validate_limit(limit):
|
||||
if limit and limit < 0:
|
||||
raise wsme.exc.ClientSideError(_("Limit must be positive"))
|
||||
|
||||
return min(CONF.api_limit_max, limit) or CONF.api_limit_max
|
||||
|
||||
|
||||
def validate_sort_dir(sort_dir):
|
||||
if sort_dir not in ['asc', 'desc']:
|
||||
raise wsme.exc.ClientSideError(_("Invalid sort direction: %s. "
|
||||
"Acceptable values are "
|
||||
"'asc' or 'desc'") % sort_dir)
|
||||
return sort_dir
|
||||
|
||||
|
||||
def validate_patch(patch):
|
||||
"""Performs a basic validation on patch."""
|
||||
|
||||
if not isinstance(patch, list):
|
||||
patch = [patch]
|
||||
|
||||
for p in patch:
|
||||
path_pattern = re.compile("^/[a-zA-Z0-9-_]+(/[a-zA-Z0-9-_]+)*$")
|
||||
|
||||
if not isinstance(p, dict) or \
|
||||
any(key for key in ["path", "op"] if key not in p):
|
||||
raise wsme.exc.ClientSideError(_("Invalid patch format: %s")
|
||||
% str(p))
|
||||
|
||||
path = p["path"]
|
||||
op = p["op"]
|
||||
|
||||
if op not in ["add", "replace", "remove"]:
|
||||
raise wsme.exc.ClientSideError(_("Operation not supported: %s")
|
||||
% op)
|
||||
|
||||
if not path_pattern.match(path):
|
||||
raise wsme.exc.ClientSideError(_("Invalid path: %s") % path)
|
||||
|
||||
if op == "add":
|
||||
if path.count('/') == 1:
|
||||
raise wsme.exc.ClientSideError(_("Adding an additional "
|
||||
"attribute (%s) to the "
|
||||
"resource is not allowed")
|
||||
% path)
|
||||
|
||||
|
||||
class ValidTypes(wsme.types.UserType):
|
||||
"""User type for validate that value has one of a few types."""
|
||||
|
||||
def __init__(self, *types):
|
||||
self.types = types
|
||||
|
||||
def validate(self, value):
|
||||
for t in self.types:
|
||||
if t is wsme.types.text and isinstance(value, wsme.types.bytes):
|
||||
value = value.decode()
|
||||
if isinstance(value, t):
|
||||
return value
|
||||
else:
|
||||
raise ValueError("Wrong type. Expected '%s', got '%s'" % (
|
||||
self.types, type(value)))
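A quick stand-alone illustration of the path and op checks validate_patch performs, mirroring the regex and whitelist above without the wsme error plumbing:

import re

path_pattern = re.compile("^/[a-zA-Z0-9-_]+(/[a-zA-Z0-9-_]+)*$")
allowed_ops = ("add", "replace", "remove")

def looks_valid(p):
    return (isinstance(p, dict)
            and "path" in p and "op" in p
            and p["op"] in allowed_ops
            and path_pattern.match(p["path"]) is not None)

print(looks_valid({"op": "replace", "path": "/name", "value": "x"}))  # True
print(looks_valid({"op": "move", "path": "/name"}))                   # False: op not supported
print(looks_valid({"op": "replace", "path": "name"}))                 # False: path must start with /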
|
98
service-mgmt-api/sm-api/sm_api/api/hooks.py
Normal file
@ -0,0 +1,98 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
"""
|
||||
Hooks
|
||||
"""
|
||||
|
||||
import sqlite3
|
||||
from pecan import hooks
|
||||
|
||||
from sm_api.common import context
|
||||
from sm_api.common import utils
|
||||
|
||||
from sm_api.common import config
|
||||
from sm_api.db import api as dbapi
|
||||
from sm_api.openstack.common import policy
|
||||
|
||||
|
||||
|
||||
|
||||
class ConfigHook(hooks.PecanHook):
|
||||
def __init__(self):
|
||||
super(ConfigHook, self).__init__()
|
||||
|
||||
def before(self, state):
|
||||
state.request.config = config.CONF
|
||||
|
||||
|
||||
class DatabaseHook(hooks.PecanHook):
|
||||
# JKUNG def __init__(self):
|
||||
# super(DatabaseHook, self).__init__()
|
||||
# self.database = sqlite3.connect(config.CONF['database']['database'])
|
||||
|
||||
def before(self, state):
|
||||
# state.request.database = self.database
|
||||
state.request.dbapi = dbapi.get_instance()
|
||||
|
||||
# def __del__(self):
|
||||
# self.database.close()
|
||||
|
||||
|
||||
class ContextHook(hooks.PecanHook):
|
||||
"""Configures a request context and attaches it to the request.
|
||||
|
||||
The following HTTP request headers are used:
|
||||
|
||||
X-User-Id or X-User:
|
||||
Used for context.user_id.
|
||||
|
||||
X-Tenant-Id or X-Tenant:
|
||||
Used for context.tenant.
|
||||
|
||||
X-Auth-Token:
|
||||
Used for context.auth_token.
|
||||
|
||||
X-Roles:
|
||||
Used for setting context.is_admin flag to either True or False.
|
||||
The flag is set to True, if X-Roles contains either an administrator
|
||||
or admin substring. Otherwise it is set to False.
|
||||
|
||||
"""
|
||||
def __init__(self, public_api_routes):
|
||||
self.public_api_routes = public_api_routes
|
||||
super(ContextHook, self).__init__()
|
||||
|
||||
def before(self, state):
|
||||
user_id = state.request.headers.get('X-User-Id')
|
||||
user_id = state.request.headers.get('X-User', user_id)
|
||||
tenant = state.request.headers.get('X-Tenant-Id')
|
||||
tenant = state.request.headers.get('X-Tenant', tenant)
|
||||
domain_id = state.request.headers.get('X-User-Domain-Id')
|
||||
domain_name = state.request.headers.get('X-User-Domain-Name')
|
||||
auth_token = state.request.headers.get('X-Auth-Token', None)
|
||||
creds = {'roles': state.request.headers.get('X-Roles', '').split(',')}
|
||||
|
||||
is_admin = policy.check('is_admin', state.request.headers, creds)
|
||||
|
||||
path = utils.safe_rstrip(state.request.path, '/')
|
||||
is_public_api = path in self.public_api_routes
|
||||
|
||||
state.request.context = context.RequestContext(
|
||||
auth_token=auth_token,
|
||||
user=user_id,
|
||||
tenant=tenant,
|
||||
domain_id=domain_id,
|
||||
domain_name=domain_name,
|
||||
is_admin=is_admin,
|
||||
is_public_api=is_public_api)
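For illustration, the headers a typical authenticated request would carry and the context fields the hook derives from them; the header names are the ones listed in the docstring above and the values are made up:

headers = {
    "X-User-Id": "42b1f0",            # -> context.user
    "X-Tenant-Id": "9c21aa",          # -> context.tenant
    "X-User-Domain-Id": "default",    # -> context.domain_id
    "X-User-Domain-Name": "Default",  # -> context.domain_name
    "X-Auth-Token": "gAAAAfaketoken", # -> context.auth_token
    "X-Roles": "admin,member",        # -> creds['roles']; drives the is_admin policy check
}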
|
||||
|
||||
|
||||
class RPCHook(hooks.PecanHook):
|
||||
"""Attach the rpcapi object to the request so controllers can get to it."""
|
||||
|
||||
def before(self, state):
|
||||
state.request.rpcapi = rpcapi.ConductorAPI()
|
19
service-mgmt-api/sm-api/sm_api/api/middleware/__init__.py
Normal file
@ -0,0 +1,19 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
|
||||
from sm_api.api.middleware import auth_token
|
||||
from sm_api.api.middleware import parsable_error
|
||||
|
||||
|
||||
ParsableErrorMiddleware = parsable_error.ParsableErrorMiddleware
|
||||
AuthTokenMiddleware = auth_token.AuthTokenMiddleware
|
||||
|
||||
__all__ = (ParsableErrorMiddleware,
|
||||
AuthTokenMiddleware)
|
34
service-mgmt-api/sm-api/sm_api/api/middleware/auth_token.py
Normal file
@ -0,0 +1,34 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
|
||||
from keystonemiddleware import auth_token
|
||||
|
||||
from sm_api.common import utils
|
||||
|
||||
|
||||
class AuthTokenMiddleware(auth_token.AuthProtocol):
|
||||
"""A wrapper on Keystone auth_token middleware.
|
||||
|
||||
Does not perform verification of authentication tokens
|
||||
for public routes in the API.
|
||||
|
||||
"""
|
||||
def __init__(self, app, conf, public_api_routes=[]):
|
||||
self.public_api_routes = set(public_api_routes)
|
||||
|
||||
super(AuthTokenMiddleware, self).__init__(app, conf)
|
||||
|
||||
def __call__(self, env, start_response):
|
||||
path = utils.safe_rstrip(env.get('PATH_INFO'), '/')
|
||||
|
||||
if path in self.public_api_routes:
|
||||
return self.app(env, start_response)
|
||||
|
||||
return super(AuthTokenMiddleware, self).__call__(env, start_response)
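A hedged sketch of how this wrapper would be wired into the WSGI pipeline; the keystone conf values are placeholders and the actual wiring lives in sm_api.api.app, not in this file:

# Illustrative only: conf values and route list are placeholders.
keystone_conf = {
    "auth_uri": "http://127.0.0.1:5000/",
    "identity_uri": "http://127.0.0.1:35357/",
}
# pecan_app is assumed to be the inner WSGI application built elsewhere.
wrapped_app = AuthTokenMiddleware(pecan_app, keystone_conf,
                                  public_api_routes=['/', '/v1'])
# Requests to '/' or '/v1' skip token validation; everything else goes
# through keystonemiddleware.auth_token as usual.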
|
@ -0,0 +1,95 @@
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
# Copyright © 2012 New Dream Network, LLC (DreamHost)
|
||||
#
|
||||
# Author: Doug Hellmann <doug.hellmann@dreamhost.com>
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
"""
|
||||
Middleware to replace the plain text message body of an error
|
||||
response with one formatted so the client can parse it.
|
||||
|
||||
Based on pecan.middleware.errordocument
|
||||
"""
|
||||
|
||||
import json
|
||||
import webob
|
||||
from xml import etree as et
|
||||
|
||||
from sm_api.openstack.common import log
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
|
||||
class ParsableErrorMiddleware(object):
|
||||
"""Replace error body with something the client can parse.
|
||||
"""
|
||||
def __init__(self, app):
|
||||
self.app = app
|
||||
|
||||
def __call__(self, environ, start_response):
|
||||
# Request for this state, modified by replace_start_response()
|
||||
# and used when an error is being reported.
|
||||
state = {}
|
||||
|
||||
def replacement_start_response(status, headers, exc_info=None):
|
||||
"""Overrides the default response to make errors parsable.
|
||||
"""
|
||||
try:
|
||||
status_code = int(status.split(' ')[0])
|
||||
state['status_code'] = status_code
|
||||
except (ValueError, TypeError): # pragma: nocover
|
||||
raise Exception((
|
||||
'ErrorDocumentMiddleware received an invalid '
|
||||
'status %s' % status
|
||||
))
|
||||
else:
|
||||
if (state['status_code'] / 100) not in (2, 3):
|
||||
# Remove some headers so we can replace them later
|
||||
# when we have the full error message and can
|
||||
# compute the length.
|
||||
headers = [(h, v)
|
||||
for (h, v) in headers
|
||||
if h not in ('Content-Length', 'Content-Type')
|
||||
]
|
||||
# Save the headers in case we need to modify them.
|
||||
state['headers'] = headers
|
||||
return start_response(status, headers, exc_info)
|
||||
|
||||
app_iter = self.app(environ, replacement_start_response)
|
||||
if (state['status_code'] / 100) not in (2, 3):
|
||||
req = webob.Request(environ)
|
||||
if (req.accept.best_match(['application/json', 'application/xml'])
|
||||
== 'application/xml'):
|
||||
try:
|
||||
# simple check xml is valid
|
||||
body = [et.ElementTree.tostring(
|
||||
et.ElementTree.fromstring('<error_message>'
|
||||
+ '\n'.join(app_iter)
|
||||
+ '</error_message>'))]
|
||||
except et.ElementTree.ParseError as err:
|
||||
LOG.error('Error parsing HTTP response: %s' % err)
|
||||
body = ['<error_message>%s' % state['status_code']
|
||||
+ '</error_message>']
|
||||
state['headers'].append(('Content-Type', 'application/xml'))
|
||||
else:
|
||||
body = [json.dumps({'error_message': '\n'.join(app_iter)})]
|
||||
state['headers'].append(('Content-Type', 'application/json'))
|
||||
state['headers'].append(('Content-Length', len(body[0])))
|
||||
else:
|
||||
body = app_iter
|
||||
return body
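The net effect for a JSON client is that any non-2xx/3xx body is wrapped in a single-key envelope; a small worked example of the transformation (the error text is made up):

import json

app_iter = ["Node controller-5 could not be found"]      # original error body
wrapped = json.dumps({'error_message': '\n'.join(app_iter)})
# -> '{"error_message": "Node controller-5 could not be found"}'
# Content-Type is rewritten to application/json and Content-Length recomputed.
print(wrapped)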
|
30
service-mgmt-api/sm-api/sm_api/cmd/__init__.py
Normal file
@ -0,0 +1,30 @@
|
||||
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
# TODO(deva): move eventlet imports to sm_api.__init__ once we move to PBR
|
||||
import os
|
||||
|
||||
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'
|
||||
|
||||
import eventlet
|
||||
|
||||
eventlet.monkey_patch(os=False)
|
||||
|
||||
from sm_api.openstack.common import gettextutils
|
||||
gettextutils.install('sm_api')
|
82
service-mgmt-api/sm-api/sm_api/cmd/api.py
Normal file
@ -0,0 +1,82 @@
|
||||
#!/usr/bin/env python
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
#
|
||||
# Copyright 2013 Hewlett-Packard Development Company, L.P.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""The Service Management API."""
|
||||
|
||||
import logging
|
||||
import os.path
|
||||
import sys
|
||||
import time
|
||||
|
||||
|
||||
from oslo_config import cfg
|
||||
from wsgiref import simple_server
|
||||
|
||||
from sm_api.api import app
|
||||
from sm_api.common import service as sm_api_service
|
||||
from sm_api.openstack.common import log
|
||||
|
||||
CONF = cfg.CONF
|
||||
|
||||
|
||||
def get_handler_cls():
|
||||
cls = simple_server.WSGIRequestHandler
|
||||
|
||||
# old-style class doesn't support super
|
||||
class MyHandler(cls, object):
|
||||
def address_string(self):
|
||||
# In the future, we could provide a config option to allow reverse DNS lookup
|
||||
return self.client_address[0]
|
||||
|
||||
return MyHandler
|
||||
|
||||
|
||||
def main():
|
||||
# Parse config file and command line options, then start logging
|
||||
|
||||
# Wait, checking every minute, until the .not_want_sm_config flag file is removed
|
||||
while os.path.exists("/etc/sm/.not_want_sm_config"):
|
||||
time.sleep(60)
|
||||
|
||||
sm_api_service.prepare_service(sys.argv)
|
||||
|
||||
# Build and start the WSGI app
|
||||
# host = CONF.sm_api_api_bind_ip
|
||||
# port = CONF.sm_api_api_port
|
||||
host = 'localhost'
|
||||
port = 7777
|
||||
wsgi = simple_server.make_server(host, port,
|
||||
app.VersionSelectorApplication(),
|
||||
handler_class=get_handler_cls())
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
LOG.info(_("Serving on http://%(host)s:%(port)s") %
|
||||
{'host': host, 'port': port})
|
||||
LOG.info(_("Configuration:"))
|
||||
CONF.log_opt_values(LOG, logging.INFO)
|
||||
|
||||
try:
|
||||
wsgi.serve_forever()
|
||||
except KeyboardInterrupt:
|
||||
pass
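The get_handler_cls() override above exists so request logging reports the raw client IP instead of doing a reverse DNS lookup per request. A minimal stand-alone sketch of the same pattern with wsgiref (the app and port are placeholders):

from wsgiref import simple_server

class NoDNSHandler(simple_server.WSGIRequestHandler, object):
    def address_string(self):
        # Log the raw client IP instead of resolving it.
        return self.client_address[0]

def hello_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['hello\n']

if __name__ == '__main__':
    simple_server.make_server('localhost', 8000, hello_app,
                              handler_class=NoDNSHandler).serve_forever()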
|
5
service-mgmt-api/sm-api/sm_api/common/__init__.py
Normal file
@ -0,0 +1,5 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
34
service-mgmt-api/sm-api/sm_api/common/config.py
Normal file
@ -0,0 +1,34 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
"""
|
||||
Configuration
|
||||
"""
|
||||
|
||||
import ConfigParser
|
||||
|
||||
CONF = None
|
||||
|
||||
|
||||
class Config(ConfigParser.ConfigParser):
|
||||
def as_dict(self):
|
||||
d = dict(self._sections)
|
||||
for key in d:
|
||||
d[key] = dict(self._defaults, **d[key])
|
||||
d[key].pop('__name__', None)
|
||||
return d
|
||||
|
||||
|
||||
def load(config_file):
|
||||
""" Load the power management configuration
|
||||
|
||||
Parameters: configuration file
|
||||
"""
|
||||
global CONF
|
||||
|
||||
config = Config()
|
||||
config.read(config_file)
|
||||
CONF = config.as_dict()
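A small worked example of what load() and Config.as_dict() produce for an INI-style file; the section and option names here are hypothetical:

# Hypothetical /tmp/sm_example.ini:
#   [database]
#   database = /var/lib/sm/sm.db
#   [api]
#   bind_ip = 127.0.0.1
load("/tmp/sm_example.ini")
print(CONF["database"]["database"])   # -> /var/lib/sm/sm.db
print(CONF["api"]["bind_ip"])         # -> 127.0.0.1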
|
45
service-mgmt-api/sm-api/sm_api/common/context.py
Normal file
@ -0,0 +1,45 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
|
||||
from sm_api.openstack.common import context
|
||||
|
||||
|
||||
class RequestContext(context.RequestContext):
|
||||
"""Extends security contexts from the OpenStack common library."""
|
||||
|
||||
def __init__(self, auth_token=None, domain_id=None, domain_name=None,
|
||||
user=None, tenant=None, is_admin=False, is_public_api=False,
|
||||
read_only=False, show_deleted=False, request_id=None):
|
||||
"""Stores several additional request parameters:
|
||||
|
||||
:param domain_id: The ID of the domain.
|
||||
:param domain_name: The name of the domain.
|
||||
:param is_public_api: Specifies whether the request should be processed
|
||||
without authentication.
|
||||
|
||||
"""
|
||||
self.is_public_api = is_public_api
|
||||
self.domain_id = domain_id
|
||||
self.domain_name = domain_name
|
||||
|
||||
super(RequestContext, self).__init__(auth_token=auth_token,
|
||||
user=user, tenant=tenant,
|
||||
is_admin=is_admin,
|
||||
read_only=read_only,
|
||||
show_deleted=show_deleted,
|
||||
request_id=request_id)
|
||||
|
||||
def to_dict(self):
|
||||
result = {'domain_id': self.domain_id,
|
||||
'domain_name': self.domain_name,
|
||||
'is_public_api': self.is_public_api}
|
||||
|
||||
result.update(super(RequestContext, self).to_dict())
|
||||
|
||||
return result
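For completeness, a short in-tree usage sketch of this context class; it assumes the sm_api package is importable and the field values are made up:

from sm_api.common import context

ctx = context.RequestContext(user='operator', tenant='services',
                             domain_name='Default', is_admin=True,
                             is_public_api=False)
d = ctx.to_dict()
# d carries the base openstack-common context fields plus the three extras
# added here: d['domain_id'], d['domain_name'], d['is_public_api'].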
|
401
service-mgmt-api/sm-api/sm_api/common/exception.py
Normal file
@ -0,0 +1,401 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""SmApi base exception handling.
|
||||
|
||||
Includes decorator for re-raising SmApi-type exceptions.
|
||||
|
||||
SHOULD include dedicated exception logging.
|
||||
|
||||
"""
|
||||
|
||||
import functools
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.common import safe_utils
|
||||
from sm_api.openstack.common import excutils
|
||||
from sm_api.openstack.common import log as logging
|
||||
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
exc_log_opts = [
|
||||
cfg.BoolOpt('fatal_exception_format_errors',
|
||||
default=False,
|
||||
help='make exception message format errors fatal'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(exc_log_opts)
|
||||
|
||||
|
||||
class ProcessExecutionError(IOError):
|
||||
def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None,
|
||||
description=None):
|
||||
self.exit_code = exit_code
|
||||
self.stderr = stderr
|
||||
self.stdout = stdout
|
||||
self.cmd = cmd
|
||||
self.description = description
|
||||
|
||||
if description is None:
|
||||
description = _('Unexpected error while running command.')
|
||||
if exit_code is None:
|
||||
exit_code = '-'
|
||||
message = (_('%(description)s\nCommand: %(cmd)s\n'
|
||||
'Exit code: %(exit_code)s\nStdout: %(stdout)r\n'
|
||||
'Stderr: %(stderr)r') %
|
||||
{'description': description, 'cmd': cmd,
|
||||
'exit_code': exit_code, 'stdout': stdout,
|
||||
'stderr': stderr})
|
||||
IOError.__init__(self, message)
|
||||
|
||||
|
||||
def _cleanse_dict(original):
|
||||
"""Strip all admin_password, new_pass, rescue_pass keys from a dict."""
|
||||
return dict((k, v) for k, v in original.iteritems() if "_pass" not in k)
|
||||
|
||||
|
||||
def wrap_exception(notifier=None, publisher_id=None, event_type=None,
|
||||
level=None):
|
||||
"""This decorator wraps a method to catch any exceptions that may
|
||||
get thrown. It logs the exception as well as optionally sending
|
||||
it to the notification system.
|
||||
"""
|
||||
def inner(f):
|
||||
def wrapped(self, context, *args, **kw):
|
||||
# Don't store self or context in the payload, it now seems to
|
||||
# contain confidential information.
|
||||
try:
|
||||
return f(self, context, *args, **kw)
|
||||
except Exception as e:
|
||||
with excutils.save_and_reraise_exception():
|
||||
if notifier:
|
||||
payload = dict(exception=e)
|
||||
call_dict = safe_utils.getcallargs(f, *args, **kw)
|
||||
cleansed = _cleanse_dict(call_dict)
|
||||
payload.update({'args': cleansed})
|
||||
|
||||
# Use temp vars so we don't shadow
|
||||
# our outer definitions.
|
||||
temp_level = level
|
||||
if not temp_level:
|
||||
temp_level = notifier.ERROR
|
||||
|
||||
temp_type = event_type
|
||||
if not temp_type:
|
||||
# If f has multiple decorators, they must use
|
||||
# functools.wraps to ensure the name is
|
||||
# propagated.
|
||||
temp_type = f.__name__
|
||||
|
||||
notifier.notify(context, publisher_id, temp_type,
|
||||
temp_level, payload)
|
||||
|
||||
return functools.wraps(f)(wrapped)
|
||||
return inner
|
||||
|
||||
|
||||
class SmApiException(Exception):
|
||||
"""Base SmApi Exception
|
||||
|
||||
To correctly use this class, inherit from it and define
|
||||
a 'message' property. That message will get printf'd
|
||||
with the keyword arguments provided to the constructor.
|
||||
|
||||
"""
|
||||
message = _("An unknown exception occurred.")
|
||||
code = 500
|
||||
headers = {}
|
||||
safe = False
|
||||
|
||||
def __init__(self, message=None, **kwargs):
|
||||
self.kwargs = kwargs
|
||||
|
||||
if 'code' not in self.kwargs:
|
||||
try:
|
||||
self.kwargs['code'] = self.code
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
if not message:
|
||||
try:
|
||||
message = self.message % kwargs
|
||||
|
||||
except Exception as e:
|
||||
# kwargs doesn't match a variable in the message
|
||||
# log the issue and the kwargs
|
||||
LOG.exception(_('Exception in string format operation'))
|
||||
for name, value in kwargs.iteritems():
|
||||
LOG.error("%s: %s" % (name, value))
|
||||
|
||||
if CONF.fatal_exception_format_errors:
|
||||
raise e
|
||||
else:
|
||||
# at least get the core message out if something happened
|
||||
message = self.message
|
||||
|
||||
super(SmApiException, self).__init__(message)
|
||||
|
||||
def format_message(self):
|
||||
if self.__class__.__name__.endswith('_Remote'):
|
||||
return self.args[0]
|
||||
else:
|
||||
return unicode(self)
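#
# Usage sketch (illustrative only): subclasses define a 'message' template and
# keyword arguments fill it in; when interpolation fails the raw template is
# used instead. The image_id below is a placeholder.
#
#   try:
#       raise ImageNotFound(image_id='3fa85f64')
#   except SmApiException as e:
#       print(e.format_message())   # "Image 3fa85f64 could not be found."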
|
||||
|
||||
|
||||
class NotAuthorized(SmApiException):
|
||||
message = _("Not authorized.")
|
||||
code = 403
|
||||
|
||||
|
||||
class AdminRequired(NotAuthorized):
|
||||
message = _("User does not have admin privileges")
|
||||
|
||||
|
||||
class PolicyNotAuthorized(NotAuthorized):
|
||||
message = _("Policy doesn't allow %(action)s to be performed.")
|
||||
|
||||
|
||||
class OperationNotPermitted(NotAuthorized):
|
||||
message = _("Operation not permitted.")
|
||||
|
||||
|
||||
class Invalid(SmApiException):
|
||||
message = _("Unacceptable parameters.")
|
||||
code = 400
|
||||
|
||||
|
||||
class Conflict(SmApiException):
|
||||
message = _('Conflict.')
|
||||
code = 409
|
||||
|
||||
|
||||
class InvalidCPUInfo(Invalid):
|
||||
message = _("Unacceptable CPU info") + ": %(reason)s"
|
||||
|
||||
|
||||
class InvalidIpAddressError(Invalid):
|
||||
message = _("%(address)s is not a valid IP v4/6 address.")
|
||||
|
||||
|
||||
class InvalidDiskFormat(Invalid):
|
||||
message = _("Disk format %(disk_format)s is not acceptable")
|
||||
|
||||
|
||||
class InvalidUUID(Invalid):
|
||||
message = _("Expected a uuid but received %(uuid)s.")
|
||||
|
||||
|
||||
class InvalidIdentity(Invalid):
|
||||
message = _("Expected an uuid or int but received %(identity)s.")
|
||||
|
||||
|
||||
class PatchError(Invalid):
|
||||
message = _("Couldn't apply patch '%(patch)s'. Reason: %(reason)s")
|
||||
|
||||
|
||||
class InvalidMAC(Invalid):
|
||||
message = _("Expected a MAC address but received %(mac)s.")
|
||||
|
||||
|
||||
class MACAlreadyExists(Conflict):
|
||||
message = _("A Port with MAC address %(mac)s already exists.")
|
||||
|
||||
|
||||
class InstanceDeployFailure(Invalid):
|
||||
message = _("Failed to deploy instance: %(reason)s")
|
||||
|
||||
|
||||
class ImageUnacceptable(Invalid):
|
||||
message = _("Image %(image_id)s is unacceptable: %(reason)s")
|
||||
|
||||
|
||||
class ImageConvertFailed(Invalid):
|
||||
message = _("Image %(image_id)s is unacceptable: %(reason)s")
|
||||
|
||||
|
||||
# Cannot be templated as the error syntax varies.
|
||||
# msg needs to be constructed when raised.
|
||||
class InvalidParameterValue(Invalid):
|
||||
message = _("%(err)s")
|
||||
|
||||
|
||||
class NotFound(SmApiException):
|
||||
message = _("Resource could not be found.")
|
||||
code = 404
|
||||
|
||||
|
||||
class DiskNotFound(NotFound):
|
||||
message = _("No disk at %(location)s")
|
||||
|
||||
|
||||
class DriverNotFound(NotFound):
|
||||
message = _("Failed to load driver %(driver_name)s.")
|
||||
|
||||
|
||||
class ImageNotFound(NotFound):
|
||||
message = _("Image %(image_id)s could not be found.")
|
||||
|
||||
|
||||
class HostNotFound(NotFound):
|
||||
message = _("Host %(host)s could not be found.")
|
||||
|
||||
|
||||
class HostLocked(SmApiException):
|
||||
message = _("Unable to complete the action %(action)s because "
|
||||
"Host %(host)s is in administrative state = unlocked.")
|
||||
|
||||
|
||||
class ConsoleNotFound(NotFound):
|
||||
message = _("Console %(console_id)s could not be found.")
|
||||
|
||||
|
||||
class FileNotFound(NotFound):
|
||||
message = _("File %(file_path)s could not be found.")
|
||||
|
||||
|
||||
class NoValidHost(NotFound):
|
||||
message = _("No valid host was found. %(reason)s")
|
||||
|
||||
|
||||
class InstanceNotFound(NotFound):
|
||||
message = _("Instance %(instance)s could not be found.")
|
||||
|
||||
|
||||
class NodeNotFound(NotFound):
|
||||
message = _("Node %(node)s could not be found.")
|
||||
|
||||
|
||||
class NodeLocked(NotFound):
|
||||
message = _("Node %(node)s is locked by another process.")
|
||||
|
||||
|
||||
class PortNotFound(NotFound):
|
||||
message = _("Port %(port)s could not be found.")
|
||||
|
||||
|
||||
class ChassisNotFound(NotFound):
|
||||
message = _("Chassis %(chassis)s could not be found.")
|
||||
|
||||
|
||||
class ServerNotFound(NotFound):
|
||||
message = _("Server %(server)s could not be found.")
|
||||
|
||||
|
||||
class PowerStateFailure(SmApiException):
|
||||
message = _("Failed to set node power state to %(pstate)s.")
|
||||
|
||||
|
||||
class ExclusiveLockRequired(NotAuthorized):
|
||||
message = _("An exclusive lock is required, "
|
||||
"but the current context has a shared lock.")
|
||||
|
||||
|
||||
class NodeInUse(SmApiException):
|
||||
message = _("Unable to complete the requested action because node "
|
||||
"%(node)s is currently in use by another process.")
|
||||
|
||||
|
||||
class NodeInWrongPowerState(SmApiException):
|
||||
message = _("Can not change instance association while node "
|
||||
"%(node)s is in power state %(pstate)s.")
|
||||
|
||||
|
||||
class NodeNotConfigured(SmApiException):
|
||||
message = _("Can not change power state because node %(node)s "
|
||||
"is not fully configured.")
|
||||
|
||||
|
||||
class ChassisNotEmpty(SmApiException):
|
||||
message = _("Cannot complete the requested action because chassis "
|
||||
"%(chassis)s contains nodes.")
|
||||
|
||||
|
||||
class IPMIFailure(SmApiException):
|
||||
message = _("IPMI call failed: %(cmd)s.")
|
||||
|
||||
|
||||
class SSHConnectFailed(SmApiException):
|
||||
message = _("Failed to establish SSH connection to host %(host)s.")
|
||||
|
||||
|
||||
class UnsupportedObjectError(SmApiException):
|
||||
message = _('Unsupported object type %(objtype)s')
|
||||
|
||||
|
||||
class OrphanedObjectError(SmApiException):
|
||||
message = _('Cannot call %(method)s on orphaned %(objtype)s object')
|
||||
|
||||
|
||||
class IncompatibleObjectVersion(SmApiException):
|
||||
message = _('Version %(objver)s of %(objname)s is not supported')
|
||||
|
||||
|
||||
class GlanceConnectionFailed(SmApiException):
|
||||
message = "Connection to glance host %(host)s:%(port)s failed: %(reason)s"
|
||||
|
||||
|
||||
class ImageNotAuthorized(SmApiException):
|
||||
message = "Not authorized for image %(image_id)s."
|
||||
|
||||
|
||||
class InvalidImageRef(SmApiException):
|
||||
message = "Invalid image href %(image_href)s."
|
||||
code = 400
|
||||
|
||||
|
||||
class ServiceUnavailable(SmApiException):
|
||||
message = "Connection failed"
|
||||
|
||||
|
||||
class Forbidden(SmApiException):
|
||||
message = "Requested OpenStack Images API is forbidden"
|
||||
|
||||
|
||||
class BadRequest(SmApiException):
|
||||
pass
|
||||
|
||||
|
||||
class HTTPException(SmApiException):
|
||||
message = "Requested version of OpenStack Images API is not available."
|
||||
|
||||
|
||||
class InvalidEndpoint(SmApiException):
|
||||
message = "The provided endpoint is invalid"
|
||||
|
||||
|
||||
class CommunicationError(SmApiException):
|
||||
message = "Unable to communicate with the server."
|
||||
|
||||
|
||||
class HTTPForbidden(Forbidden):
|
||||
pass
|
||||
|
||||
|
||||
class Unauthorized(SmApiException):
|
||||
pass
|
||||
|
||||
|
||||
class HTTPNotFound(NotFound):
|
||||
pass
|
77
service-mgmt-api/sm-api/sm_api/common/log.py
Normal file
@ -0,0 +1,77 @@
|
||||
#
|
||||
# Copyright (c) 2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
#
|
||||
|
||||
"""
|
||||
Logging
|
||||
"""
|
||||
|
||||
import logging
|
||||
import logging.handlers
|
||||
|
||||
LOG_FILE_NAME = "sm.log"
|
||||
LOG_MAX_BYTES = 10485760
|
||||
LOG_BACKUP_COUNT = 5
|
||||
|
||||
_log_to_console = False
|
||||
_log_to_syslog = False
|
||||
_log_to_file = False
|
||||
|
||||
_syslog_facility = None
|
||||
|
||||
_loggers = {}
|
||||
|
||||
|
||||
def _setup_logger(logger):
|
||||
formatter = logging.Formatter("%(asctime)s %(threadName)s[%(process)d] "
|
||||
"%(name)s.%(lineno)d - %(levelname)s "
|
||||
"%(message)s")
|
||||
|
||||
if _log_to_console:
|
||||
handler = logging.StreamHandler()
|
||||
handler.setFormatter(formatter)
|
||||
logger.addHandler(handler)
|
||||
|
||||
if _log_to_syslog:
|
||||
handler = logging.handlers.SysLogHandler(address='/dev/log',
|
||||
facility=_syslog_facility)
|
||||
handler.setFormatter(formatter)
|
||||
logger.addHandler(handler)
|
||||
|
||||
if _log_to_file:
|
||||
handler = logging.handlers.RotatingFileHandler(LOG_FILE_NAME, "a+",
|
||||
LOG_MAX_BYTES,
|
||||
LOG_BACKUP_COUNT)
|
||||
handler.setFormatter(formatter)
|
||||
logger.addHandler(handler)
|
||||
|
||||
logger.setLevel(logging.DEBUG)
|
||||
|
||||
|
||||
def get_logger(name):
|
||||
""" Get a logger or create one .
|
||||
"""
|
||||
|
||||
global _loggers
|
||||
|
||||
_loggers[name] = logging.getLogger(name)
|
||||
return _loggers[name]
|
||||
|
||||
|
||||
def configure(conf):
|
||||
""" Setup logging.
|
||||
"""
|
||||
|
||||
global _loggers
|
||||
global _log_to_syslog
|
||||
global _syslog_facility
|
||||
|
||||
if conf['logging']['use_syslog']:
|
||||
_log_to_syslog = True
|
||||
_syslog_facility = conf['logging']['log_facility']
|
||||
|
||||
for logger in _loggers:
|
||||
_setup_logger(_loggers[logger])
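#
# Usage sketch (illustrative only): configure() expects the dict produced by
# sm_api.common.config; the keys below mirror the ones read above and the
# facility value is a placeholder. Loggers must be created via get_logger()
# before configure() is called so that handlers get attached to them.
#
#   from sm_api.common import log
#
#   logger = log.get_logger(__name__)
#   log.configure({'logging': {'use_syslog': True, 'log_facility': 'local1'}})
#   logger.info("sm-api logging configured")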
|
135
service-mgmt-api/sm-api/sm_api/common/policy.py
Normal file
@ -0,0 +1,135 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 OpenStack Foundation
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Policy Engine For Sm_api."""
|
||||
|
||||
import os.path
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.common import exception
|
||||
from sm_api.common import utils
|
||||
from sm_api.openstack.common import policy
|
||||
|
||||
|
||||
policy_opts = [
|
||||
cfg.StrOpt('policy_file',
|
||||
default='policy.json',
|
||||
help=_('JSON file representing policy')),
|
||||
cfg.StrOpt('policy_default_rule',
|
||||
default='default',
|
||||
help=_('Rule checked when requested rule is not found')),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(policy_opts)
|
||||
|
||||
_POLICY_PATH = None
|
||||
_POLICY_CACHE = {}
|
||||
|
||||
|
||||
def reset():
|
||||
global _POLICY_PATH
|
||||
global _POLICY_CACHE
|
||||
_POLICY_PATH = None
|
||||
_POLICY_CACHE = {}
|
||||
policy.reset()
|
||||
|
||||
|
||||
def init():
|
||||
global _POLICY_PATH
|
||||
global _POLICY_CACHE
|
||||
if not _POLICY_PATH:
|
||||
_POLICY_PATH = CONF.policy_file
|
||||
if not os.path.exists(_POLICY_PATH):
|
||||
_POLICY_PATH = CONF.find_file(_POLICY_PATH)
|
||||
if not _POLICY_PATH:
|
||||
raise exception.ConfigNotFound(path=CONF.policy_file)
|
||||
utils.read_cached_file(_POLICY_PATH, _POLICY_CACHE,
|
||||
reload_func=_set_rules)
|
||||
|
||||
|
||||
def _set_rules(data):
|
||||
default_rule = CONF.policy_default_rule
|
||||
policy.set_rules(policy.Rules.load_json(data, default_rule))
|
||||
|
||||
|
||||
def enforce(context, action, target, do_raise=True):
|
||||
"""Verifies that the action is valid on the target in this context.
|
||||
|
||||
:param context: sm_api context
|
||||
:param action: string representing the action to be checked
|
||||
this should be colon separated for clarity.
|
||||
i.e. ``compute:create_instance``,
|
||||
``compute:attach_volume``,
|
||||
``volume:attach_volume``
|
||||
:param target: dictionary representing the object of the action
|
||||
for object creation this should be a dictionary representing the
|
||||
location of the object e.g. ``{'project_id': context.project_id}``
|
||||
:param do_raise: if True (the default), raises PolicyNotAuthorized;
|
||||
if False, returns False
|
||||
|
||||
:raises sm_api.exception.PolicyNotAuthorized: if verification fails
|
||||
and do_raise is True.
|
||||
|
||||
:return: returns a non-False value (not necessarily "True") if
|
||||
authorized, and the exact value False if not authorized and
|
||||
do_raise is False.
|
||||
"""
|
||||
init()
|
||||
|
||||
credentials = context.to_dict()
|
||||
|
||||
# Add the exception arguments if asked to do a raise
|
||||
extra = {}
|
||||
if do_raise:
|
||||
extra.update(exc=exception.PolicyNotAuthorized, action=action)
|
||||
|
||||
return policy.check(action, target, credentials, **extra)
|
||||
|
||||
|
||||
def check_is_admin(context):
|
||||
"""Whether or not role contains 'admin' role according to policy setting.
|
||||
|
||||
"""
|
||||
init()
|
||||
|
||||
credentials = context.to_dict()
|
||||
target = credentials
|
||||
|
||||
return policy.check('context_is_admin', target, credentials)
|
||||
|
||||
|
||||
@policy.register('context_is_admin')
|
||||
class IsAdminCheck(policy.Check):
|
||||
"""An explicit check for is_admin."""
|
||||
|
||||
def __init__(self, kind, match):
|
||||
"""Initialize the check."""
|
||||
|
||||
self.expected = (match.lower() == 'true')
|
||||
|
||||
super(IsAdminCheck, self).__init__(kind, str(self.expected))
|
||||
|
||||
def __call__(self, target, creds):
|
||||
"""Determine whether is_admin matches the requested value."""
|
||||
|
||||
return creds['is_admin'] == self.expected
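#
# Usage sketch (illustrative only): enforce() raises PolicyNotAuthorized when
# the rule rejects the request. The action name below is a hypothetical
# example, not a rule shipped in policy.json by this commit; the target dict
# follows the shape suggested in the enforce() docstring.
#
#   from sm_api.common import policy
#
#   policy.enforce(context, 'sm_api:service:get',
#                  {'project_id': context.tenant})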
|
59
service-mgmt-api/sm-api/sm_api/common/safe_utils.py
Normal file
@ -0,0 +1,59 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2011 Justin Santa Barbara
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Utilities and helper functions that won't produce circular imports."""
|
||||
|
||||
import inspect
|
||||
|
||||
|
||||
def getcallargs(function, *args, **kwargs):
|
||||
"""This is a simplified inspect.getcallargs (2.7+).
|
||||
|
||||
It should be replaced when python >= 2.7 is standard.
|
||||
"""
|
||||
keyed_args = {}
|
||||
argnames, varargs, keywords, defaults = inspect.getargspec(function)
|
||||
|
||||
keyed_args.update(kwargs)
|
||||
|
||||
#NOTE(alaski) the implicit 'self' or 'cls' argument shows up in
|
||||
# argnames but not in args or kwargs. Uses 'in' rather than '==' because
|
||||
# some tests use 'self2'.
|
||||
if 'self' in argnames[0] or 'cls' == argnames[0]:
|
||||
# The function may not actually be a method or have im_self.
|
||||
# Typically seen when it's stubbed with mox.
|
||||
if inspect.ismethod(function) and hasattr(function, 'im_self'):
|
||||
keyed_args[argnames[0]] = function.im_self
|
||||
else:
|
||||
keyed_args[argnames[0]] = None
|
||||
|
||||
remaining_argnames = filter(lambda x: x not in keyed_args, argnames)
|
||||
keyed_args.update(dict(zip(remaining_argnames, args)))
|
||||
|
||||
if defaults:
|
||||
num_defaults = len(defaults)
|
||||
for argname, value in zip(argnames[-num_defaults:], defaults):
|
||||
if argname not in keyed_args:
|
||||
keyed_args[argname] = value
|
||||
|
||||
return keyed_args
|
70
service-mgmt-api/sm-api/sm_api/common/service.py
Normal file
@ -0,0 +1,70 @@
|
||||
#!/usr/bin/env python
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
# Copyright © 2012 eNovance <licensing@enovance.com>
|
||||
#
|
||||
# Author: Julien Danjou <julien@danjou.info>
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import socket
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.openstack.common import context
|
||||
from sm_api.openstack.common import log
|
||||
from sm_api.openstack.common import periodic_task
|
||||
from sm_api.openstack.common import rpc
|
||||
from sm_api.openstack.common.rpc import service as rpc_service
|
||||
|
||||
|
||||
cfg.CONF.register_opts([
|
||||
cfg.IntOpt('periodic_interval',
|
||||
default=60,
|
||||
help='seconds between running periodic tasks'),
|
||||
cfg.StrOpt('host',
|
||||
default=socket.getfqdn(),
|
||||
help='Name of this node. This can be an opaque identifier. '
|
||||
'It is not necessarily a hostname, FQDN, or IP address. '
|
||||
'However, the node name must be valid within '
|
||||
'an AMQP key, and if using ZeroMQ, a valid '
|
||||
'hostname, FQDN, or IP address'),
|
||||
])
|
||||
|
||||
|
||||
class PeriodicService(rpc_service.Service, periodic_task.PeriodicTasks):
|
||||
|
||||
def start(self):
|
||||
super(PeriodicService, self).start()
|
||||
admin_context = context.RequestContext('admin', 'admin', is_admin=True)
|
||||
self.tg.add_timer(cfg.CONF.periodic_interval,
|
||||
self.manager.periodic_tasks,
|
||||
context=admin_context)
|
||||
|
||||
|
||||
def prepare_service(argv=[]):
|
||||
rpc.set_defaults(control_exchange='sm_api')
|
||||
cfg.set_defaults(log.log_opts,
|
||||
default_log_levels=['amqplib=WARN',
|
||||
'qpid.messaging=INFO',
|
||||
'sqlalchemy=WARN',
|
||||
'keystoneclient=INFO',
|
||||
'stevedore=INFO',
|
||||
'eventlet.wsgi.server=WARN'
|
||||
])
|
||||
cfg.CONF(argv[1:], project='sm_api')
|
||||
log.setup('sm_api')
|
678
service-mgmt-api/sm-api/sm_api/common/utils.py
Normal file
@ -0,0 +1,678 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2011 Justin Santa Barbara
|
||||
# Copyright (c) 2012 NTT DOCOMO, INC.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Utilities and helper functions."""
|
||||
|
||||
import contextlib
|
||||
import errno
|
||||
import hashlib
|
||||
import json
|
||||
import os
|
||||
#import paramiko
|
||||
import random
|
||||
import re
|
||||
import shutil
|
||||
import signal
|
||||
import six
|
||||
import socket
|
||||
import tempfile
|
||||
import uuid
|
||||
|
||||
from eventlet.green import subprocess
|
||||
from eventlet import greenthread
|
||||
import netaddr
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.common import exception
|
||||
from sm_api.openstack.common import log as logging
|
||||
|
||||
utils_opts = [
|
||||
cfg.StrOpt('rootwrap_config',
|
||||
default="/etc/sm_api/rootwrap.conf",
|
||||
help='Path to the rootwrap configuration file to use for '
|
||||
'running commands as root'),
|
||||
cfg.StrOpt('tempdir',
|
||||
default=None,
|
||||
help='Explicitly specify the temporary working directory'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(utils_opts)
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
# Used for looking up extensions of text
|
||||
# to their 'multiplied' byte amount
|
||||
BYTE_MULTIPLIERS = {
|
||||
'': 1,
|
||||
't': 1024 ** 4,
|
||||
'g': 1024 ** 3,
|
||||
'm': 1024 ** 2,
|
||||
'k': 1024,
|
||||
}
|
||||
|
||||
|
||||
def _subprocess_setup():
|
||||
# Python installs a SIGPIPE handler by default. This is usually not what
|
||||
# non-Python subprocesses expect.
|
||||
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
|
||||
|
||||
|
||||
def execute(*cmd, **kwargs):
|
||||
"""Helper method to execute command with optional retry.
|
||||
|
||||
If you add a run_as_root=True command, don't forget to add the
|
||||
corresponding filter to etc/sm_api/rootwrap.d !
|
||||
|
||||
:param cmd: Passed to subprocess.Popen.
|
||||
:param process_input: Send to opened process.
|
||||
:param check_exit_code: Single bool, int, or list of allowed exit
|
||||
codes. Defaults to [0]. Raise
|
||||
exception.ProcessExecutionError unless
|
||||
program exits with one of these codes.
|
||||
:param delay_on_retry: True | False. Defaults to True. If set to
|
||||
True, wait a short amount of time
|
||||
before retrying.
|
||||
:param attempts: How many times to retry cmd.
|
||||
:param run_as_root: True | False. Defaults to False. If set to True,
|
||||
the command is run with rootwrap.
|
||||
|
||||
:raises exception.SmApiException: on receiving unknown arguments
|
||||
:raises exception.ProcessExecutionError:
|
||||
|
||||
:returns: a tuple, (stdout, stderr) from the spawned process, or None if
|
||||
the command fails.
|
||||
"""
|
||||
process_input = kwargs.pop('process_input', None)
|
||||
check_exit_code = kwargs.pop('check_exit_code', [0])
|
||||
ignore_exit_code = False
|
||||
if isinstance(check_exit_code, bool):
|
||||
ignore_exit_code = not check_exit_code
|
||||
check_exit_code = [0]
|
||||
elif isinstance(check_exit_code, int):
|
||||
check_exit_code = [check_exit_code]
|
||||
delay_on_retry = kwargs.pop('delay_on_retry', True)
|
||||
attempts = kwargs.pop('attempts', 1)
|
||||
run_as_root = kwargs.pop('run_as_root', False)
|
||||
shell = kwargs.pop('shell', False)
|
||||
|
||||
if len(kwargs):
|
||||
raise exception.SmApiException(_('Got unknown keyword args '
|
||||
'to utils.execute: %r') % kwargs)
|
||||
|
||||
if run_as_root and os.geteuid() != 0:
|
||||
cmd = ['sudo', 'sm_api-rootwrap', CONF.rootwrap_config] + list(cmd)
|
||||
|
||||
cmd = map(str, cmd)
|
||||
|
||||
while attempts > 0:
|
||||
attempts -= 1
|
||||
try:
|
||||
LOG.debug(_('Running cmd (subprocess): %s'), ' '.join(cmd))
|
||||
_PIPE = subprocess.PIPE # pylint: disable=E1101
|
||||
|
||||
if os.name == 'nt':
|
||||
preexec_fn = None
|
||||
close_fds = False
|
||||
else:
|
||||
preexec_fn = _subprocess_setup
|
||||
close_fds = True
|
||||
|
||||
obj = subprocess.Popen(cmd,
|
||||
stdin=_PIPE,
|
||||
stdout=_PIPE,
|
||||
stderr=_PIPE,
|
||||
close_fds=close_fds,
|
||||
preexec_fn=preexec_fn,
|
||||
shell=shell)
|
||||
result = None
|
||||
if process_input is not None:
|
||||
result = obj.communicate(process_input)
|
||||
else:
|
||||
result = obj.communicate()
|
||||
obj.stdin.close() # pylint: disable=E1101
|
||||
_returncode = obj.returncode # pylint: disable=E1101
|
||||
LOG.debug(_('Result was %s') % _returncode)
|
||||
if not ignore_exit_code and _returncode not in check_exit_code:
|
||||
(stdout, stderr) = result
|
||||
raise exception.ProcessExecutionError(
|
||||
exit_code=_returncode,
|
||||
stdout=stdout,
|
||||
stderr=stderr,
|
||||
cmd=' '.join(cmd))
|
||||
return result
|
||||
except exception.ProcessExecutionError:
|
||||
if not attempts:
|
||||
raise
|
||||
else:
|
||||
LOG.debug(_('%r failed. Retrying.'), cmd)
|
||||
if delay_on_retry:
|
||||
greenthread.sleep(random.randint(20, 200) / 100.0)
|
||||
finally:
|
||||
# NOTE(termie): this appears to be necessary to let the subprocess
|
||||
# call clean something up in between calls, without
|
||||
# it, two execute calls in a row hang on the second one
|
||||
greenthread.sleep(0)
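#
# Usage sketch (illustrative only): the command below is a hypothetical
# example; check_exit_code and attempts drive the retry behaviour described
# in the docstring above.
#
#   out, err = execute('ls', '-l', '/etc/sm',
#                      check_exit_code=[0], attempts=3)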
|
||||
|
||||
|
||||
def trycmd(*args, **kwargs):
|
||||
"""A wrapper around execute() to more easily handle warnings and errors.
|
||||
|
||||
Returns an (out, err) tuple of strings containing the output of
|
||||
the command's stdout and stderr. If 'err' is not empty then the
|
||||
command can be considered to have failed.
|
||||
|
||||
:param discard_warnings: True | False. Defaults to False. If set to True,
|
||||
then for succeeding commands, stderr is cleared
|
||||
|
||||
"""
|
||||
discard_warnings = kwargs.pop('discard_warnings', False)
|
||||
|
||||
try:
|
||||
out, err = execute(*args, **kwargs)
|
||||
failed = False
|
||||
except exception.ProcessExecutionError as exn:
|
||||
out, err = '', str(exn)
|
||||
failed = True
|
||||
|
||||
if not failed and discard_warnings and err:
|
||||
# Handle commands that output to stderr but otherwise succeed
|
||||
err = ''
|
||||
|
||||
return out, err
|
||||
|
||||
|
||||
# def ssh_connect(connection):
|
||||
# """Method to connect to a remote system using ssh protocol.
|
||||
#
|
||||
# :param connection: a dict of connection parameters.
|
||||
# :returns: paramiko.SSHClient -- an active ssh connection.
|
||||
# :raises: SSHConnectFailed
|
||||
#
|
||||
# """
|
||||
# try:
|
||||
# ssh = paramiko.SSHClient()
|
||||
# ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
|
||||
# ssh.connect(connection.get('host'),
|
||||
# username=connection.get('username'),
|
||||
# password=connection.get('password', None),
|
||||
# port=connection.get('port', 22),
|
||||
# key_filename=connection.get('key_filename', None),
|
||||
# timeout=connection.get('timeout', 10))
|
||||
#
|
||||
# # send TCP keepalive packets every 20 seconds
|
||||
# ssh.get_transport().set_keepalive(20)
|
||||
# except Exception:
|
||||
# raise exception.SSHConnectFailed(host=connection.get('host'))
|
||||
#
|
||||
# return ssh
|
||||
|
||||
|
||||
def generate_uid(topic, size=8):
|
||||
characters = '01234567890abcdefghijklmnopqrstuvwxyz'
|
||||
choices = [random.choice(characters) for _x in xrange(size)]
|
||||
return '%s-%s' % (topic, ''.join(choices))
|
||||
|
||||
|
||||
def random_alnum(size=32):
|
||||
characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
|
||||
return ''.join(random.choice(characters) for _ in xrange(size))
|
||||
|
||||
|
||||
class LazyPluggable(object):
|
||||
"""A pluggable backend loaded lazily based on some value."""
|
||||
|
||||
def __init__(self, pivot, config_group=None, **backends):
|
||||
self.__backends = backends
|
||||
self.__pivot = pivot
|
||||
self.__backend = None
|
||||
self.__config_group = config_group
|
||||
|
||||
def __get_backend(self):
|
||||
if not self.__backend:
|
||||
if self.__config_group is None:
|
||||
backend_name = CONF[self.__pivot]
|
||||
else:
|
||||
backend_name = CONF[self.__config_group][self.__pivot]
|
||||
if backend_name not in self.__backends:
|
||||
msg = _('Invalid backend: %s') % backend_name
|
||||
raise exception.SmApiException(msg)
|
||||
|
||||
backend = self.__backends[backend_name]
|
||||
if isinstance(backend, tuple):
|
||||
name = backend[0]
|
||||
fromlist = backend[1]
|
||||
else:
|
||||
name = backend
|
||||
fromlist = backend
|
||||
|
||||
self.__backend = __import__(name, None, None, fromlist)
|
||||
return self.__backend
|
||||
|
||||
def __getattr__(self, key):
|
||||
backend = self.__get_backend()
|
||||
return getattr(backend, key)
|
||||
|
||||
|
||||
def delete_if_exists(pathname):
|
||||
"""delete a file, but ignore file not found error."""
|
||||
|
||||
try:
|
||||
os.unlink(pathname)
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
return
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
def is_int_like(val):
|
||||
"""Check if a value looks like an int."""
|
||||
try:
|
||||
return str(int(val)) == str(val)
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def is_valid_boolstr(val):
|
||||
"""Check if the provided string is a valid bool string or not."""
|
||||
boolstrs = ('true', 'false', 'yes', 'no', 'y', 'n', '1', '0')
|
||||
return str(val).lower() in boolstrs
|
||||
|
||||
|
||||
def is_valid_mac(address):
|
||||
"""Verify the format of a MAC addres."""
|
||||
m = "[0-9a-f]{2}([-:])[0-9a-f]{2}(\\1[0-9a-f]{2}){4}$"
|
||||
if isinstance(address, six.string_types) and re.match(m, address.lower()):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def validate_and_normalize_mac(address):
|
||||
"""Validate a MAC address and return normalized form.
|
||||
|
||||
Checks whether the supplied MAC address is formally correct and
|
||||
normalizes it to all lower case.
|
||||
|
||||
:param address: MAC address to be validated and normalized.
|
||||
:returns: Normalized and validated MAC address.
|
||||
:raises: InvalidMAC If the MAC address is not valid.
|
||||
|
||||
"""
|
||||
if not is_valid_mac(address):
|
||||
raise exception.InvalidMAC(mac=address)
|
||||
return address.lower()
|
||||
|
||||
|
||||
def is_valid_ipv4(address):
|
||||
"""Verify that address represents a valid IPv4 address."""
|
||||
try:
|
||||
return netaddr.valid_ipv4(address)
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def is_valid_ipv6(address):
|
||||
try:
|
||||
return netaddr.valid_ipv6(address)
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def is_valid_ipv6_cidr(address):
|
||||
try:
|
||||
str(netaddr.IPNetwork(address, version=6).cidr)
|
||||
return True
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def get_shortened_ipv6(address):
|
||||
addr = netaddr.IPAddress(address, version=6)
|
||||
return str(addr.ipv6())
|
||||
|
||||
|
||||
def get_shortened_ipv6_cidr(address):
|
||||
net = netaddr.IPNetwork(address, version=6)
|
||||
return str(net.cidr)
|
||||
|
||||
|
||||
def is_valid_cidr(address):
|
||||
"""Check if the provided ipv4 or ipv6 address is a valid CIDR address."""
|
||||
try:
|
||||
# Validate the correct CIDR Address
|
||||
netaddr.IPNetwork(address)
|
||||
except netaddr.core.AddrFormatError:
|
||||
return False
|
||||
except UnboundLocalError:
|
||||
# NOTE(MotoKen): work around bug in netaddr 0.7.5 (see detail in
|
||||
# https://github.com/drkjam/netaddr/issues/2)
|
||||
return False
|
||||
|
||||
# Prior validation only partially verifies the /xx part
|
||||
# Verify it here
|
||||
ip_segment = address.split('/')
|
||||
|
||||
if (len(ip_segment) <= 1 or
|
||||
ip_segment[1] == ''):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def get_ip_version(network):
|
||||
"""Returns the IP version of a network (IPv4 or IPv6).
|
||||
|
||||
:raises: AddrFormatError if invalid network.
|
||||
"""
|
||||
if netaddr.IPNetwork(network).version == 6:
|
||||
return "IPv6"
|
||||
elif netaddr.IPNetwork(network).version == 4:
|
||||
return "IPv4"
|
||||
|
||||
|
||||
def convert_to_list_dict(lst, label):
|
||||
"""Convert a value or list into a list of dicts."""
|
||||
if not lst:
|
||||
return None
|
||||
if not isinstance(lst, list):
|
||||
lst = [lst]
|
||||
return [{label: x} for x in lst]
|
||||
|
||||
|
||||
def sanitize_hostname(hostname):
|
||||
"""Return a hostname which conforms to RFC-952 and RFC-1123 specs."""
|
||||
if isinstance(hostname, unicode):
|
||||
hostname = hostname.encode('latin-1', 'ignore')
|
||||
|
||||
hostname = re.sub('[ _]', '-', hostname)
|
||||
hostname = re.sub('[^\w.-]+', '', hostname)
|
||||
hostname = hostname.lower()
|
||||
hostname = hostname.strip('.-')
|
||||
|
||||
return hostname
|
||||
|
||||
|
||||
def read_cached_file(filename, cache_info, reload_func=None):
|
||||
"""Read from a file if it has been modified.
|
||||
|
||||
:param cache_info: dictionary to hold opaque cache.
|
||||
:param reload_func: optional function to be called with data when
|
||||
file is reloaded due to a modification.
|
||||
|
||||
:returns: data from file
|
||||
|
||||
"""
|
||||
mtime = os.path.getmtime(filename)
|
||||
if not cache_info or mtime != cache_info.get('mtime'):
|
||||
LOG.debug(_("Reloading cached file %s") % filename)
|
||||
with open(filename) as fap:
|
||||
cache_info['data'] = fap.read()
|
||||
cache_info['mtime'] = mtime
|
||||
if reload_func:
|
||||
reload_func(cache_info['data'])
|
||||
return cache_info['data']
|
||||
|
||||
|
||||
def file_open(*args, **kwargs):
|
||||
"""Open file
|
||||
|
||||
see built-in file() documentation for more details
|
||||
|
||||
Note: The reason this is kept in a separate module is to easily
|
||||
be able to provide a stub module that doesn't alter system
|
||||
state at all (for unit tests)
|
||||
"""
|
||||
return file(*args, **kwargs)
|
||||
|
||||
|
||||
def hash_file(file_like_object):
|
||||
"""Generate a hash for the contents of a file."""
|
||||
checksum = hashlib.sha1()
|
||||
for chunk in iter(lambda: file_like_object.read(32768), b''):
|
||||
checksum.update(chunk)
|
||||
return checksum.hexdigest()
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def temporary_mutation(obj, **kwargs):
|
||||
"""Temporarily set the attr on a particular object to a given value then
|
||||
revert when finished.
|
||||
|
||||
One use of this is to temporarily set the read_deleted flag on a context
|
||||
object:
|
||||
|
||||
with temporary_mutation(context, read_deleted="yes"):
|
||||
do_something_that_needed_deleted_objects()
|
||||
"""
|
||||
def is_dict_like(thing):
|
||||
return hasattr(thing, 'has_key')
|
||||
|
||||
def get(thing, attr, default):
|
||||
if is_dict_like(thing):
|
||||
return thing.get(attr, default)
|
||||
else:
|
||||
return getattr(thing, attr, default)
|
||||
|
||||
def set_value(thing, attr, val):
|
||||
if is_dict_like(thing):
|
||||
thing[attr] = val
|
||||
else:
|
||||
setattr(thing, attr, val)
|
||||
|
||||
def delete(thing, attr):
|
||||
if is_dict_like(thing):
|
||||
del thing[attr]
|
||||
else:
|
||||
delattr(thing, attr)
|
||||
|
||||
NOT_PRESENT = object()
|
||||
|
||||
old_values = {}
|
||||
for attr, new_value in kwargs.items():
|
||||
old_values[attr] = get(obj, attr, NOT_PRESENT)
|
||||
set_value(obj, attr, new_value)
|
||||
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
for attr, old_value in old_values.items():
|
||||
if old_value is NOT_PRESENT:
|
||||
delete(obj, attr)
|
||||
else:
|
||||
set_value(obj, attr, old_value)
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def tempdir(**kwargs):
|
||||
tempfile.tempdir = CONF.tempdir
|
||||
tmpdir = tempfile.mkdtemp(**kwargs)
|
||||
try:
|
||||
yield tmpdir
|
||||
finally:
|
||||
try:
|
||||
shutil.rmtree(tmpdir)
|
||||
except OSError as e:
|
||||
LOG.error(_('Could not remove tmpdir: %s'), str(e))
|
||||
|
||||
|
||||
def mkfs(fs, path, label=None):
|
||||
"""Format a file or block device
|
||||
|
||||
:param fs: Filesystem type (examples include 'swap', 'ext3', 'ext4'
|
||||
'btrfs', etc.)
|
||||
:param path: Path to file or block device to format
|
||||
:param label: Volume label to use
|
||||
"""
|
||||
if fs == 'swap':
|
||||
args = ['mkswap']
|
||||
else:
|
||||
args = ['mkfs', '-t', fs]
|
||||
# add -F to force non-interactive execution on a non-block device.
|
||||
if fs in ('ext3', 'ext4'):
|
||||
args.extend(['-F'])
|
||||
if label:
|
||||
if fs in ('msdos', 'vfat'):
|
||||
label_opt = '-n'
|
||||
else:
|
||||
label_opt = '-L'
|
||||
args.extend([label_opt, label])
|
||||
args.append(path)
|
||||
execute(*args)
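#
# Usage sketch (illustrative only; the device path and label are placeholders):
#
#   mkfs('ext4', '/dev/vdb1', label='data')
#   # runs: mkfs -t ext4 -F -L data /dev/vdb1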
|
||||
|
||||
|
||||
# TODO(deva): Make these work in SmApi.
|
||||
# Either copy nova/virt/utils (bad),
|
||||
# or reimplement as a common lib,
|
||||
# or make a driver that doesn't need to do this.
|
||||
#
|
||||
#def cache_image(context, target, image_id, user_id, project_id):
|
||||
# if not os.path.exists(target):
|
||||
# libvirt_utils.fetch_image(context, target, image_id,
|
||||
# user_id, project_id)
|
||||
#
|
||||
#
|
||||
#def inject_into_image(image, key, net, metadata, admin_password,
|
||||
# files, partition, use_cow=False):
|
||||
# try:
|
||||
# disk_api.inject_data(image, key, net, metadata, admin_password,
|
||||
# files, partition, use_cow)
|
||||
# except Exception as e:
|
||||
# LOG.warn(_("Failed to inject data into image %(image)s. "
|
||||
# "Error: %(e)s") % locals())
|
||||
|
||||
|
||||
def unlink_without_raise(path):
|
||||
try:
|
||||
os.unlink(path)
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
return
|
||||
else:
|
||||
LOG.warn(_("Failed to unlink %(path)s, error: %(e)s") %
|
||||
{'path': path, 'e': e})
|
||||
|
||||
|
||||
def rmtree_without_raise(path):
|
||||
try:
|
||||
if os.path.isdir(path):
|
||||
shutil.rmtree(path)
|
||||
except OSError as e:
|
||||
LOG.warn(_("Failed to remove dir %(path)s, error: %(e)s") %
|
||||
{'path': path, 'e': e})
|
||||
|
||||
|
||||
def write_to_file(path, contents):
|
||||
with open(path, 'w') as f:
|
||||
f.write(contents)
|
||||
|
||||
|
||||
def create_link_without_raise(source, link):
|
||||
try:
|
||||
os.symlink(source, link)
|
||||
except OSError as e:
|
||||
if e.errno == errno.EEXIST:
|
||||
return
|
||||
else:
|
||||
LOG.warn(_("Failed to create symlink from %(source)s to %(link)s"
|
||||
", error: %(e)s") %
|
||||
{'source': source, 'link': link, 'e': e})
|
||||
|
||||
|
||||
def safe_rstrip(value, chars=None):
|
||||
"""Removes trailing characters from a string if that does not make it empty
|
||||
|
||||
:param value: A string value that will be stripped.
|
||||
:param chars: Characters to remove.
|
||||
:return: Stripped value.
|
||||
|
||||
"""
|
||||
if not isinstance(value, six.string_types):
|
||||
LOG.warn(_("Failed to remove trailing character. Returning original "
|
||||
"object. Supplied object is not a string: %s,") % value)
|
||||
return value
|
||||
|
||||
return value.rstrip(chars) or value
|
||||
|
||||
|
||||
def generate_uuid():
|
||||
return str(uuid.uuid4())
|
||||
|
||||
|
||||
def is_uuid_like(val):
|
||||
"""Returns validation of a value as a UUID.
|
||||
|
||||
For our purposes, a UUID is a canonical form string:
|
||||
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
|
||||
|
||||
"""
|
||||
try:
|
||||
return str(uuid.UUID(val)) == val
|
||||
except (TypeError, ValueError, AttributeError):
|
||||
return False
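#
# Quick illustration of the UUID helpers above:
#
#   new_id = generate_uuid()   # e.g. 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
#   assert is_uuid_like(new_id)
#   assert not is_uuid_like('not-a-uuid')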
|
||||
|
||||
|
||||
def removekey(d, key):
|
||||
r = dict(d)
|
||||
del r[key]
|
||||
return r
|
||||
|
||||
|
||||
def notify_mtc_and_recv(mtc_address, mtc_port, idict):
|
||||
mtc_response_dict = {}
|
||||
mtc_response_dict['status'] = None
|
||||
|
||||
serialized_idict = json.dumps(idict)
|
||||
|
||||
# notify mtc this ihost has been added
|
||||
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
try:
|
||||
s.setblocking(1) # blocking, timeout must be specified
|
||||
s.settimeout(6) # give mtc a few secs to respond
|
||||
s.connect((mtc_address, mtc_port))
|
||||
LOG.warning("Mtc Command : %s" % serialized_idict)
|
||||
s.sendall(serialized_idict)
|
||||
|
||||
mtc_response = s.recv(1024) # check if mtc allows
|
||||
try:
|
||||
mtc_response_dict = json.loads(mtc_response)
|
||||
LOG.warning("Mtc Response: %s" % mtc_response_dict)
|
||||
except:
|
||||
LOG.exception("Mtc Response Error: %s" % mtc_response)
|
||||
pass
|
||||
|
||||
except socket.error as e:
|
||||
LOG.exception(_("Socket Error: %s on %s:%s for %s") % (e,
|
||||
mtc_address, mtc_port, serialized_idict))
|
||||
# if e not in [errno.EWOULDBLOCK, errno.EINTR]:
|
||||
# raise exception.CommunicationError(_(
|
||||
# "Socket error: address=%s port=%s error=%s ") % (
|
||||
# self._mtc_address, self._mtc_port, e))
|
||||
pass
|
||||
|
||||
finally:
|
||||
s.close()
|
||||
|
||||
return mtc_response_dict
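#
# Usage sketch (illustrative only): the maintenance address, port and command
# payload below are hypothetical placeholders.
#
#   response = notify_mtc_and_recv('192.168.204.2', 2112,
#                                  {'hostname': 'controller-0',
#                                   'operation': 'add'})
#   LOG.info("mtc status: %s" % response['status'])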
|
10
service-mgmt-api/sm-api/sm_api/db/__init__.py
Normal file
@ -0,0 +1,10 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# All Rights Reserved.
|
||||
#
|
175
service-mgmt-api/sm-api/sm_api/db/api.py
Normal file
@ -0,0 +1,175 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
#
|
||||
# Copyright 2013 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
"""
|
||||
Base classes for storage engines
|
||||
"""
|
||||
|
||||
import abc
|
||||
|
||||
from sm_api.openstack.common.db import api as db_api
|
||||
|
||||
_BACKEND_MAPPING = {'sqlalchemy': 'sm_api.db.sqlalchemy.api'}
|
||||
IMPL = db_api.DBAPI(backend_mapping=_BACKEND_MAPPING)
|
||||
|
||||
|
||||
def get_instance():
|
||||
"""Return a DB API instance."""
|
||||
return IMPL
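#
# Usage sketch (illustrative only): callers obtain the backend through
# get_instance() and use the abstract interface defined below.
#
#   from sm_api.db import api as db_api
#
#   dbapi = db_api.get_instance()
#   nodes = dbapi.sm_node_get_list(limit=10)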
|
||||
|
||||
|
||||
class Connection(object):
|
||||
"""Base class for storage system connections."""
|
||||
|
||||
__metaclass__ = abc.ABCMeta
|
||||
|
||||
@abc.abstractmethod
|
||||
def __init__(self):
|
||||
"""Constructor."""
|
||||
|
||||
@abc.abstractmethod
|
||||
def iservicegroup_get(self, server):
|
||||
"""Return a servicegroup.
|
||||
|
||||
:param server: The id or uuid of a servicegroup.
|
||||
:returns: An iservicegroup.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def iservicegroup_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
"""Return a list of servicegroupintances.
|
||||
|
||||
:param limit: Maximum number of iServers to return.
|
||||
:param marker: the last item of the previous page; we return the next
|
||||
result set.
|
||||
:param sort_key: Attribute by which results should be sorted.
|
||||
:param sort_dir: direction in which results should be sorted.
|
||||
(asc, desc)
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def iservice_get(self, server):
|
||||
"""Return a service instance.
|
||||
|
||||
:param server: The id or uuid of a server.
|
||||
:returns: A server.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def iservice_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
"""Return a list of serviceintances.
|
||||
|
||||
:param limit: Maximum number of iServers to return.
|
||||
:param marker: the last item of the previous page; we return the next
|
||||
result set.
|
||||
:param sort_key: Attribute by which results should be sorted.
|
||||
:param sort_dir: direction in which results should be sorted.
|
||||
(asc, desc)
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def iservice_get_by_name(self, name):
|
||||
"""Return a list of serviceinstance by service.
|
||||
:param name: The name (service group)
|
||||
:returns: An iservice list.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_sdm_get(self, server, service_group_name):
|
||||
"""get service_domain_member
|
||||
|
||||
:param server: The name of the service_domain.
|
||||
:param service_group_name: The name of the service_domain_member.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_sda_get(self, server):
|
||||
"""get service_domain_assignment
|
||||
|
||||
:param server: The id or uuid of a service_domain_assignment.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_sda_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
"""Return a list of service_domain_assignments.
|
||||
|
||||
:param limit: Maximum number of entries to return.
|
||||
:param marker: the last item of the previous page; we return the next
|
||||
result set.
|
||||
:param sort_key: Attribute by which results should be sorted.
|
||||
:param sort_dir: direction in which results should be sorted.
|
||||
(asc, desc)
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_node_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
"""Return a list of nodes.
|
||||
|
||||
:param limit: Maximum number of nodes to return.
|
||||
:param marker: the last item of the previous page; we return the next
|
||||
result set.
|
||||
:param sort_key: Attribute by which results should be sorted.
|
||||
:param sort_dir: direction in which results should be sorted.
|
||||
(asc, desc)
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_node_get(self, server):
|
||||
"""get node
|
||||
|
||||
:param server: The id or uuid of a node.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_node_get_by_name(self, name):
|
||||
"""Return a list of nodes by name.
|
||||
:param name: The hostname of the node.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_service_get(self, server):
|
||||
"""get service
|
||||
|
||||
:param server: The id or uuid of a service.
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_service_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
"""Return a list of services.
|
||||
|
||||
:param limit: Maximum number of services to return.
|
||||
:param marker: the last item of the previous page; we return the next
|
||||
result set.
|
||||
:param sort_key: Attribute by which results should be sorted.
|
||||
:param sort_dir: direction in which results should be sorted.
|
||||
(asc, desc)
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def sm_service_get_by_name(self, name):
|
||||
"""Return a list of services by name.
|
||||
:param name: The name of the services.
|
||||
"""
|
49
service-mgmt-api/sm-api/sm_api/db/migration.py
Normal file
@ -0,0 +1,49 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Database setup and migration commands."""
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.common import utils
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.import_opt('backend',
|
||||
'sm_api.openstack.common.db.api',
|
||||
group='database')
|
||||
|
||||
IMPL = utils.LazyPluggable(
|
||||
pivot='backend',
|
||||
config_group='database',
|
||||
sqlalchemy='sm_api.db.sqlalchemy.migration')
|
||||
|
||||
INIT_VERSION = 0
|
||||
|
||||
|
||||
def db_sync(version=None):
|
||||
"""Migrate the database to `version` or the most recent version."""
|
||||
return IMPL.db_sync(version=version)
|
||||
|
||||
|
||||
def db_version():
|
||||
"""Display the current database version."""
|
||||
return IMPL.db_version()
|
267
service-mgmt-api/sm-api/sm_api/db/sqlalchemy/api.py
Executable file
@ -0,0 +1,267 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
# Copyright 2013 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""SQLAlchemy storage backend."""
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
# TODO(deva): import MultipleResultsFound and handle it appropriately
|
||||
from sqlalchemy.orm.exc import NoResultFound
|
||||
|
||||
from sm_api.common import exception
|
||||
from sm_api.common import utils
|
||||
from sm_api.db import api
|
||||
from sm_api.db.sqlalchemy import models
|
||||
from sm_api import objects
|
||||
from sm_api.openstack.common.db import exception as db_exc
|
||||
from sm_api.openstack.common.db.sqlalchemy import session as db_session
|
||||
from sm_api.openstack.common.db.sqlalchemy import utils as db_utils
|
||||
from sm_api.openstack.common import log
|
||||
from sm_api.openstack.common import uuidutils
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.import_opt('connection',
|
||||
'sm_api.openstack.common.db.sqlalchemy.session',
|
||||
group='database')
|
||||
|
||||
LOG = log.getLogger(__name__)
|
||||
|
||||
get_engine = db_session.get_engine
|
||||
get_session = db_session.get_session
|
||||
|
||||
|
||||
def _paginate_query(model, limit=None, marker=None, sort_key=None,
|
||||
sort_dir=None, query=None):
|
||||
if not query:
|
||||
query = model_query(model)
|
||||
sort_keys = ['id']
|
||||
if sort_key and sort_key not in sort_keys:
|
||||
sort_keys.insert(0, sort_key)
|
||||
query = db_utils.paginate_query(query, model, limit, sort_keys,
|
||||
marker=marker, sort_dir=sort_dir)
|
||||
return query.all()
|
||||
|
||||
|
||||
def get_backend():
|
||||
"""The backend is this module itself."""
|
||||
return Connection()
|
||||
|
||||
|
||||
def model_query(model, *args, **kwargs):
|
||||
"""Query helper for simpler session usage.
|
||||
|
||||
:param session: if present, the session to use
|
||||
"""
|
||||
|
||||
session = kwargs.get('session') or get_session()
|
||||
query = session.query(model, *args)
|
||||
return query
|
||||
|
||||
|
||||
def add_identity_filter(query, value, use_name=False):
|
||||
"""Adds an identity filter to a query.
|
||||
|
||||
Filters results by ID, if supplied value is a valid integer.
|
||||
Otherwise attempts to filter results by UUID.
|
||||
|
||||
:param query: Initial query to add filter to.
|
||||
:param value: Value for filtering results by.
|
||||
:return: Modified query.
|
||||
"""
|
||||
if utils.is_int_like(value):
|
||||
return query.filter_by(id=value)
|
||||
elif uuidutils.is_uuid_like(value):
|
||||
return query.filter_by(uuid=value)
|
||||
else:
|
||||
if use_name:
|
||||
return query.filter_by(name=value)
|
||||
else:
|
||||
# TODO(jkung): raise an exception here rather than assuming the value is a hostname
|
||||
return query.filter_by(hostname=value)
|
||||
|
||||
|
||||
def add_filter_by_many_identities(query, model, values):
|
||||
"""Adds an identity filter to a query for values list.
|
||||
|
||||
Filters results by ID, if supplied values contain a valid integer.
|
||||
Otherwise attempts to filter results by UUID.
|
||||
|
||||
:param query: Initial query to add filter to.
|
||||
:param model: Model for filter.
|
||||
:param values: Values for filtering results by.
|
||||
:return: tuple (Modified query, filter field name).
|
||||
"""
|
||||
if not values:
|
||||
raise exception.InvalidIdentity(identity=values)
|
||||
value = values[0]
|
||||
if utils.is_int_like(value):
|
||||
return query.filter(getattr(model, 'id').in_(values)), 'id'
|
||||
elif uuidutils.is_uuid_like(value):
|
||||
return query.filter(getattr(model, 'uuid').in_(values)), 'uuid'
|
||||
else:
|
||||
raise exception.InvalidIdentity(identity=value)
|
||||
|
||||
|
||||
class Connection(api.Connection):
|
||||
"""SqlAlchemy connection."""
|
||||
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
@objects.objectify(objects.service_groups)
|
||||
def iservicegroup_get(self, server):
|
||||
query = model_query(models.iservicegroup)
|
||||
query = add_identity_filter(query, server, use_name=True)
|
||||
|
||||
try:
|
||||
result = query.one()
|
||||
except NoResultFound:
|
||||
raise exception.ServerNotFound(server=server)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.service_groups)
|
||||
def iservicegroup_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
return _paginate_query(models.iservicegroup, limit, marker,
|
||||
sort_key, sort_dir)
|
||||
|
||||
@objects.objectify(objects.service)
|
||||
def iservice_get(self, server):
|
||||
# server may be passed as a string. It may be uuid or Int.
|
||||
# server = int(server)
|
||||
query = model_query(models.service)
|
||||
query = add_identity_filter(query, server, use_name=True)
|
||||
|
||||
try:
|
||||
result = query.one()
|
||||
except NoResultFound:
|
||||
raise exception.ServerNotFound(server=server)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.service)
|
||||
def iservice_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
return _paginate_query(models.service, limit, marker,
|
||||
sort_key, sort_dir)
|
||||
|
||||
@objects.objectify(objects.service)
|
||||
def iservice_get_by_name(self, name):
|
||||
result = model_query(models.service, read_deleted="no").\
|
||||
filter_by(name=name)
|
||||
# no .one()/.first(): return the query so the caller gets a list of matches
|
||||
|
||||
if not result:
|
||||
raise exception.NodeNotFound(node=name)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.sm_sdm)
|
||||
def sm_sdm_get(self, server, service_group_name):
|
||||
query = model_query(models.sm_sdm)
|
||||
query = query.filter_by(name=server)
|
||||
query = query.filter_by(service_group_name=service_group_name)
|
||||
|
||||
try:
|
||||
result = query.one()
|
||||
except NoResultFound:
|
||||
raise exception.ServerNotFound(server=server)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.sm_sda)
|
||||
def sm_sda_get(self, server):
|
||||
query = model_query(models.sm_sda)
|
||||
query = add_identity_filter(query, server, use_name=True)
|
||||
|
||||
try:
|
||||
result = query.one()
|
||||
except NoResultFound:
|
||||
raise exception.ServerNotFound(server=server)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.sm_sda)
|
||||
def sm_sda_get_list(self, limit=None, marker=None, sort_key=None,
|
||||
sort_dir=None):
|
||||
return _paginate_query(models.sm_sda, limit, marker,
|
||||
sort_key, sort_dir)
|
||||
|
||||
@objects.objectify(objects.sm_node)
|
||||
def sm_node_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
return _paginate_query(models.sm_node, limit, marker,
|
||||
sort_key, sort_dir)
|
||||
|
||||
@objects.objectify(objects.sm_node)
|
||||
def sm_node_get(self, server):
|
||||
query = model_query(models.sm_node)
|
||||
query = add_identity_filter(query, server, use_name=True)
|
||||
|
||||
try:
|
||||
result = query.one()
|
||||
except NoResultFound:
|
||||
raise exception.ServerNotFound(server=server)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.sm_node)
|
||||
def sm_node_get_by_name(self, name):
|
||||
result = model_query(models.sm_node, read_deleted="no").\
|
||||
filter_by(name=name)
|
||||
# no .one()/.first(): return the query so the caller gets a list of matches
|
||||
|
||||
if not result:
|
||||
raise exception.NodeNotFound(node=name)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.service)
|
||||
def sm_service_get(self, server):
|
||||
# server may be passed as a string. It may be uuid or Int.
|
||||
# server = int(server)
|
||||
query = model_query(models.service)
|
||||
query = add_identity_filter(query, server, use_name=True)
|
||||
|
||||
try:
|
||||
result = query.one()
|
||||
except NoResultFound:
|
||||
raise exception.ServerNotFound(server=server)
|
||||
|
||||
return result
|
||||
|
||||
@objects.objectify(objects.service)
|
||||
def sm_service_get_list(self, limit=None, marker=None,
|
||||
sort_key=None, sort_dir=None):
|
||||
return _paginate_query(models.service, limit, marker,
|
||||
sort_key, sort_dir)
|
||||
|
||||
@objects.objectify(objects.service)
|
||||
def sm_service_get_by_name(self, name):
|
||||
result = model_query(models.service, read_deleted="no").\
|
||||
filter_by(name=name)
|
||||
# no .one()/.first(): return the query so the caller gets a list of matches
|
||||
|
||||
if not result:
|
||||
raise exception.NodeNotFound(node=name)
|
||||
|
||||
return result
|
@ -0,0 +1,26 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# -*- encoding: utf-8 -*-
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


from migrate.versioning.shell import main


if __name__ == '__main__':
    main(debug='False', repository='.')
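This stub hands control to sqlalchemy-migrate's command shell. The same operations can also be driven through the versioning API that sm_api.db.sqlalchemy.migration uses internally; a sketch follows (the repository path and SQLite URL are examples only):

from migrate.versioning import api as versioning_api

repo = 'sm_api/db/sqlalchemy/migrate_repo'    # path to this migrate repository
url = 'sqlite:///sm.db'                       # example database URL
versioning_api.version_control(url, repo, 0)  # start tracking at version 0
versioning_api.upgrade(url, repo)             # apply 001_init.py and any later scripts
print(versioning_api.db_version(url, repo))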
@ -0,0 +1,20 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=sm_api

# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version

# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]
@ -0,0 +1,74 @@
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# vim: tabstop=4 shiftwidth=4 softtabstop=4

#

from sqlalchemy import Column, MetaData, String, Table, UniqueConstraint
from sqlalchemy import Boolean, Integer, Enum, Text, ForeignKey, DateTime
from sqlalchemy import Index
from sqlalchemy.dialects import postgresql

ENGINE = 'InnoDB'
CHARSET = 'utf8'


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    i_ServiceGroup = Table(
        'i_servicegroup',
        meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),

        Column('id', Integer, primary_key=True, nullable=False),
        Column('uuid', String(36), unique=True),

        Column('servicename', String(255), unique=True),
        Column('state', String(255), default="unknown"),

        mysql_engine=ENGINE,
        mysql_charset=CHARSET,
    )
    i_ServiceGroup.create()

    i_Service = Table(
        'i_service',
        meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),

        Column('id', Integer, primary_key=True, nullable=False),  # autoincr
        Column('uuid', String(36), unique=True),

        Column('servicename', String(255)),
        Column('hostname', String(255)),
        Column('forihostid', Integer,
               ForeignKey('i_host.id', ondelete='CASCADE')),

        Column('activity', String),  # active/standby
        Column('state', String),
        Column('reason', Text),  # JSON encoded list of strings

        UniqueConstraint('servicename', 'hostname',
                         name='u_servicehost'),

        mysql_engine=ENGINE,
        mysql_charset=CHARSET,
    )
    i_Service.create()


def downgrade(migrate_engine):
    raise NotImplementedError('Downgrade from Initial is unsupported.')

    # t = Table('i_disk', meta, autoload=True)
    # t.drop()
116
service-mgmt-api/sm-api/sm_api/db/sqlalchemy/migration.py
Normal file
@ -0,0 +1,116 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import distutils.version as dist_version
|
||||
import os
|
||||
|
||||
import migrate
|
||||
from migrate.versioning import util as migrate_util
|
||||
import sqlalchemy
|
||||
|
||||
from sm_api.common import exception
|
||||
from sm_api.db import migration
|
||||
from sm_api.openstack.common.db.sqlalchemy import session as db_session
|
||||
|
||||
|
||||
@migrate_util.decorator
|
||||
def patched_with_engine(f, *a, **kw):
|
||||
url = a[0]
|
||||
engine = migrate_util.construct_engine(url, **kw)
|
||||
|
||||
try:
|
||||
kw['engine'] = engine
|
||||
return f(*a, **kw)
|
||||
finally:
|
||||
if isinstance(engine, migrate_util.Engine) and engine is not url:
|
||||
migrate_util.log.debug('Disposing SQLAlchemy engine %s', engine)
|
||||
engine.dispose()
|
||||
|
||||
|
||||
# TODO(jkoelker) When migrate 0.7.3 is released and nova depends
|
||||
# on that version or higher, this can be removed
|
||||
MIN_PKG_VERSION = dist_version.StrictVersion('0.7.3')
|
||||
if (not hasattr(migrate, '__version__') or
|
||||
dist_version.StrictVersion(migrate.__version__) < MIN_PKG_VERSION):
|
||||
migrate_util.with_engine = patched_with_engine
|
||||
|
||||
|
||||
# NOTE(jkoelker) Delay importing migrate until we are patched
|
||||
from migrate import exceptions as versioning_exceptions
|
||||
from migrate.versioning import api as versioning_api
|
||||
from migrate.versioning.repository import Repository
|
||||
|
||||
_REPOSITORY = None
|
||||
|
||||
get_engine = db_session.get_engine
|
||||
|
||||
|
||||
def db_sync(version=None):
|
||||
if version is not None:
|
||||
try:
|
||||
version = int(version)
|
||||
except ValueError:
|
||||
raise exception.Sm_apiException(_("version should be an integer"))
|
||||
|
||||
current_version = db_version()
|
||||
repository = _find_migrate_repo()
|
||||
if version is None or version > current_version:
|
||||
return versioning_api.upgrade(get_engine(), repository, version)
|
||||
else:
|
||||
return versioning_api.downgrade(get_engine(), repository,
|
||||
version)
|
||||
|
||||
|
||||
def db_version():
|
||||
repository = _find_migrate_repo()
|
||||
try:
|
||||
return versioning_api.db_version(get_engine(), repository)
|
||||
except versioning_exceptions.DatabaseNotControlledError:
|
||||
meta = sqlalchemy.MetaData()
|
||||
engine = get_engine()
|
||||
meta.reflect(bind=engine)
|
||||
tables = meta.tables
|
||||
if len(tables) == 0:
|
||||
db_version_control(migration.INIT_VERSION)
|
||||
return versioning_api.db_version(get_engine(), repository)
|
||||
else:
|
||||
# Some pre-Essex DB's may not be version controlled.
|
||||
# Require them to upgrade using Essex first.
|
||||
raise exception.Sm_apiException(
|
||||
_("Upgrade DB using Essex release first."))
|
||||
|
||||
|
||||
def db_version_control(version=None):
|
||||
repository = _find_migrate_repo()
|
||||
versioning_api.version_control(get_engine(), repository, version)
|
||||
return version
|
||||
|
||||
|
||||
def _find_migrate_repo():
|
||||
"""Get the path for the migrate repository."""
|
||||
global _REPOSITORY
|
||||
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
|
||||
'migrate_repo')
|
||||
assert os.path.exists(path)
|
||||
if _REPOSITORY is None:
|
||||
_REPOSITORY = Repository(path)
|
||||
return _REPOSITORY
|
154
service-mgmt-api/sm-api/sm_api/db/sqlalchemy/models.py
Executable file
@ -0,0 +1,154 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# -*- encoding: utf-8 -*-
|
||||
#
|
||||
# Copyright 2013 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""
|
||||
SQLAlchemy models for sm_api data.
|
||||
"""
|
||||
|
||||
import json
|
||||
import urlparse
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sqlalchemy import Column, ForeignKey, Integer, Boolean
|
||||
from sqlalchemy import Enum, UniqueConstraint, String
|
||||
from sqlalchemy import Index
|
||||
from sqlalchemy.ext.declarative import declarative_base
|
||||
from sqlalchemy.types import TypeDecorator, VARCHAR
|
||||
|
||||
from sm_api.openstack.common.db.sqlalchemy import models
|
||||
|
||||
sql_opts = [
|
||||
cfg.StrOpt('mysql_engine',
|
||||
default='InnoDB',
|
||||
help='MySQL engine')
|
||||
]
|
||||
|
||||
cfg.CONF.register_opts(sql_opts, 'database')
|
||||
|
||||
|
||||
def table_args():
|
||||
engine_name = urlparse.urlparse(cfg.CONF.database_connection).scheme
|
||||
if engine_name == 'mysql':
|
||||
return {'mysql_engine': cfg.CONF.mysql_engine,
|
||||
'mysql_charset': "utf8"}
|
||||
return None
|
||||
|
||||
|
||||
class JSONEncodedDict(TypeDecorator):
|
||||
"""Represents an immutable structure as a json-encoded string."""
|
||||
|
||||
impl = VARCHAR
|
||||
|
||||
def process_bind_param(self, value, dialect):
|
||||
if value is not None:
|
||||
value = json.dumps(value)
|
||||
return value
|
||||
|
||||
def process_result_value(self, value, dialect):
|
||||
if value is not None:
|
||||
value = json.loads(value)
|
||||
return value
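# Illustrative sketch only: a model can declare a JSON-backed column with this
# decorator and assign plain Python structures to it; SQLAlchemy stores the
# encoded text and hands back the decoded value on load. The model below is
# hypothetical and not part of this schema.
#
#     class example(Base):
#         __tablename__ = 'example'
#         id = Column(Integer, primary_key=True)
#         reason = Column(JSONEncodedDict)
#
#     row = example(reason=['degraded', 'failover in progress'])
#     # persisted as '["degraded", "failover in progress"]', read back as a list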
|
||||
|
||||
|
||||
class Sm_apiBase(models.ModelBase): # models.TimestampMixin,
|
||||
|
||||
metadata = None
|
||||
|
||||
def as_dict(self):
|
||||
d = {}
|
||||
for c in self.__table__.columns:
|
||||
d[c.name] = self[c.name]
|
||||
return d
|
||||
|
||||
|
||||
Base = declarative_base(cls=Sm_apiBase)
|
||||
|
||||
|
||||
# table name in models
|
||||
class iservicegroup(Base):
|
||||
__tablename__ = 'service_groups'
|
||||
|
||||
id = Column(Integer, primary_key=True)
|
||||
name = Column(String(255))
|
||||
state = Column(String(255))
|
||||
status = Column(String(255))
|
||||
|
||||
|
||||
class iservice(Base):
|
||||
__tablename__ = 'i_service'
|
||||
id = Column(Integer, primary_key=True)
|
||||
uuid = Column(String(36))
|
||||
|
||||
servicename = Column(String(255), unique=True)
|
||||
hostname = Column(String(36))
|
||||
forihostid = Column(Integer, ForeignKey('i_host.id',
|
||||
ondelete='CASCADE'))
|
||||
|
||||
activity = Column(String(255), default="unknown")
|
||||
state = Column(String(255), default="unknown")
|
||||
reason = Column(JSONEncodedDict)
|
||||
|
||||
|
||||
class service(Base):
|
||||
__tablename__ = 'services'
|
||||
id = Column(Integer, primary_key=True)
|
||||
|
||||
name = Column(String(255))
|
||||
desired_state = Column(String(255))
|
||||
state = Column(String(255))
|
||||
status = Column(String(255))
|
||||
|
||||
|
||||
# sm_service_domain_members
|
||||
class sm_sdm(Base):
|
||||
__tablename__ = 'service_domain_members'
|
||||
|
||||
id = Column(Integer, primary_key=True)
|
||||
name = Column(String(255))
|
||||
service_group_name = Column(String(255))
|
||||
redundancy_model = Column(String(255)) # sm_types.h
|
||||
|
||||
|
||||
# sm_service_domain_assignments
|
||||
class sm_sda(Base):
|
||||
__tablename__ = 'service_domain_assignments'
|
||||
id = Column(Integer, primary_key=True)
|
||||
uuid = Column(String(36))
|
||||
|
||||
name = Column(String(255))
|
||||
node_name = Column(String(255)) # hostname
|
||||
service_group_name = Column(String(255))
|
||||
desired_state = Column(String(255)) # sm_types.h
|
||||
state = Column(String(255)) # sm_types.h
|
||||
status = Column(String(255))
|
||||
condition = Column(String(255))
|
||||
|
||||
|
||||
class sm_node(Base):
|
||||
__tablename__ = 'nodes'
|
||||
id = Column(Integer, primary_key=True)
|
||||
|
||||
name = Column(String(255))
|
||||
administrative_state = Column(String(255)) # sm_types.h
|
||||
operational_state = Column(String(255))
|
||||
availability_status = Column(String(255))
|
||||
ready_state = Column(String(255))
|
56
service-mgmt-api/sm-api/sm_api/objects/__init__.py
Normal file
@ -0,0 +1,56 @@
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


import functools

from sm_api.objects import smo_servicegroup
from sm_api.objects import smo_service
from sm_api.objects import smo_sdm
from sm_api.objects import smo_sda
from sm_api.objects import smo_node


def objectify(klass):
    """Decorator to convert database results into specified objects."""
    def the_decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            try:
                return klass._from_db_object(klass(), result)
            except TypeError:
                # TODO(deva): handle lists of objects better
                # once support for those lands and is imported.
                return [klass._from_db_object(klass(), obj) for obj in result]
        return wrapper
    return the_decorator


service_groups = smo_servicegroup.service_groups
service = smo_service.service
sm_sdm = smo_sdm.sm_sdm
sm_sda = smo_sda.sm_sda
sm_node = smo_node.sm_node

__all__ = (
    service_groups,
    service,
    sm_sdm,
    sm_sda,
    sm_node,
    objectify)
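The objectify decorator is what lets the SQLAlchemy backend return rich objects: a db-layer method yields model rows, and the decorator rebuilds each row as the declared object class, falling back to a list comprehension when the result is iterable. A caller-side sketch (illustrative; assumes a configured database connection):

from sm_api.db import api as db_api

dbapi = db_api.get_instance()
groups = dbapi.iservicegroup_get_list()   # rows come back as service_groups objects
for grp in groups:
    print(grp.name, grp.state, grp.status)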
504
service-mgmt-api/sm-api/sm_api/objects/base.py
Normal file
@ -0,0 +1,504 @@
|
||||
# Copyright 2013 IBM Corp.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Sm common internal object model"""
|
||||
|
||||
import collections
|
||||
|
||||
from sm_api.common import exception
|
||||
from sm_api.objects import utils as obj_utils
|
||||
from sm_api.openstack.common import context
|
||||
from sm_api.openstack.common import log as logging
|
||||
from sm_api.openstack.common.rpc import common as rpc_common
|
||||
from sm_api.openstack.common.rpc import serializer as rpc_serializer
|
||||
|
||||
|
||||
LOG = logging.getLogger('object')
|
||||
|
||||
|
||||
def get_attrname(name):
|
||||
"""Return the mangled name of the attribute's underlying storage."""
|
||||
return '_%s' % name
|
||||
|
||||
|
||||
def make_class_properties(cls):
|
||||
# NOTE(danms): Inherit Sm_apiObject's base fields only
|
||||
cls.fields.update(Sm_apiObject.fields)
|
||||
for name, typefn in cls.fields.iteritems():
|
||||
|
||||
def getter(self, name=name):
|
||||
attrname = get_attrname(name)
|
||||
if not hasattr(self, attrname):
|
||||
self.obj_load_attr(name)
|
||||
return getattr(self, attrname)
|
||||
|
||||
def setter(self, value, name=name, typefn=typefn):
|
||||
self._changed_fields.add(name)
|
||||
try:
|
||||
return setattr(self, get_attrname(name), typefn(value))
|
||||
except Exception:
|
||||
attr = "%s.%s" % (self.obj_name(), name)
|
||||
LOG.exception(_('Error setting %(attr)s') %
|
||||
{'attr': attr})
|
||||
raise
|
||||
|
||||
setattr(cls, name, property(getter, setter))
|
||||
|
||||
|
||||
class Sm_apiObjectMetaclass(type):
|
||||
"""Metaclass that allows tracking of object classes."""
|
||||
|
||||
# NOTE(danms): This is what controls whether object operations are
|
||||
# remoted. If this is not None, use it to remote things over RPC.
|
||||
indirection_api = None
|
||||
|
||||
def __init__(cls, names, bases, dict_):
|
||||
if not hasattr(cls, '_obj_classes'):
|
||||
# This will be set in the 'Sm_apiObject' class.
|
||||
cls._obj_classes = collections.defaultdict(list)
|
||||
else:
|
||||
# Add the subclass to Sm_apiObject._obj_classes
|
||||
make_class_properties(cls)
|
||||
cls._obj_classes[cls.obj_name()].append(cls)
|
||||
|
||||
|
||||
# These are decorators that mark an object's method as remotable.
|
||||
# If the metaclass is configured to forward object methods to an
|
||||
# indirection service, these will result in making an RPC call
|
||||
# instead of directly calling the implementation in the object. Instead,
|
||||
# the object implementation on the remote end will perform the
|
||||
# requested action and the result will be returned here.
|
||||
def remotable_classmethod(fn):
|
||||
"""Decorator for remotable classmethods."""
|
||||
def wrapper(cls, context, *args, **kwargs):
|
||||
if Sm_apiObject.indirection_api:
|
||||
result = Sm_apiObject.indirection_api.object_class_action(
|
||||
context, cls.obj_name(), fn.__name__, cls.version,
|
||||
args, kwargs)
|
||||
else:
|
||||
result = fn(cls, context, *args, **kwargs)
|
||||
if isinstance(result, Sm_apiObject):
|
||||
result._context = context
|
||||
return result
|
||||
return classmethod(wrapper)
|
||||
|
||||
|
||||
# See comment above for remotable_classmethod()
|
||||
#
|
||||
# Note that this will use either the provided context, or the one
|
||||
# stashed in the object. If neither are present, the object is
|
||||
# "orphaned" and remotable methods cannot be called.
|
||||
def remotable(fn):
|
||||
"""Decorator for remotable object methods."""
|
||||
def wrapper(self, *args, **kwargs):
|
||||
ctxt = self._context
|
||||
try:
|
||||
if isinstance(args[0], (context.RequestContext,
|
||||
rpc_common.CommonRpcContext)):
|
||||
ctxt = args[0]
|
||||
args = args[1:]
|
||||
except IndexError:
|
||||
pass
|
||||
if ctxt is None:
|
||||
raise exception.OrphanedObjectError(method=fn.__name__,
|
||||
objtype=self.obj_name())
|
||||
if Sm_apiObject.indirection_api:
|
||||
updates, result = Sm_apiObject.indirection_api.object_action(
|
||||
ctxt, self, fn.__name__, args, kwargs)
|
||||
for key, value in updates.iteritems():
|
||||
if key in self.fields:
|
||||
self[key] = self._attr_from_primitive(key, value)
|
||||
self._changed_fields = set(updates.get('obj_what_changed', []))
|
||||
return result
|
||||
else:
|
||||
return fn(self, ctxt, *args, **kwargs)
|
||||
return wrapper
|
||||
|
||||
|
||||
# Object versioning rules
|
||||
#
|
||||
# Each service has its set of objects, each with a version attached. When
|
||||
# a client attempts to call an object method, the server checks to see if
|
||||
# the version of that object matches (in a compatible way) its object
|
||||
# implementation. If so, cool, and if not, fail.
|
||||
def check_object_version(server, client):
|
||||
try:
|
||||
client_major, _client_minor = client.split('.')
|
||||
server_major, _server_minor = server.split('.')
|
||||
client_minor = int(_client_minor)
|
||||
server_minor = int(_server_minor)
|
||||
except ValueError:
|
||||
raise exception.IncompatibleObjectVersion(
|
||||
_('Invalid version string'))
|
||||
|
||||
if client_major != server_major:
|
||||
raise exception.IncompatibleObjectVersion(
|
||||
dict(client=client_major, server=server_major))
|
||||
if client_minor > server_minor:
|
||||
raise exception.IncompatibleObjectVersion(
|
||||
dict(client=client_minor, server=server_minor))
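# Illustrative examples of the rule above (not part of the original module):
#   check_object_version('1.5', '1.2')   # ok: same major, server minor >= client minor
#   check_object_version('1.2', '1.5')   # raises IncompatibleObjectVersion (client too new)
#   check_object_version('2.0', '1.7')   # raises IncompatibleObjectVersion (major mismatch)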
|
||||
|
||||
|
||||
class Sm_apiObject(object):
|
||||
"""Base class and object factory.
|
||||
|
||||
This forms the base of all objects that can be remoted or instantiated
|
||||
via RPC. Simply defining a class that inherits from this base class
|
||||
will make it remotely instantiatable. Objects should implement the
|
||||
necessary "get" classmethod routines as well as "save" object methods
|
||||
as appropriate.
|
||||
"""
|
||||
__metaclass__ = Sm_apiObjectMetaclass
|
||||
|
||||
# Version of this object (see rules above check_object_version())
|
||||
version = '1.0'
|
||||
|
||||
# The fields present in this object as key:typefn pairs. For example:
|
||||
#
|
||||
# fields = { 'foo': int,
|
||||
# 'bar': str,
|
||||
# 'baz': lambda x: str(x).ljust(8),
|
||||
# }
|
||||
#
|
||||
# NOTE(danms): The base Sm_apiObject class' fields will be inherited
|
||||
# by subclasses, but that is a special case. Objects inheriting from
|
||||
# other objects will not receive this merging of fields contents.
|
||||
fields = {}
|
||||
# JKUNG until avail 'created_at': obj_utils.datetime_or_str_or_none,
|
||||
# 'updated_at': obj_utils.datetime_or_str_or_none,
|
||||
# }
|
||||
obj_extra_fields = []
|
||||
|
||||
def __init__(self):
|
||||
self._changed_fields = set()
|
||||
self._context = None
|
||||
|
||||
@classmethod
|
||||
def obj_name(cls):
|
||||
"""Return a canonical name for this object which will be used over
|
||||
the wire for remote hydration.
|
||||
"""
|
||||
return cls.__name__
|
||||
|
||||
@classmethod
|
||||
def obj_class_from_name(cls, objname, objver):
|
||||
"""Returns a class from the registry based on a name and version."""
|
||||
if objname not in cls._obj_classes:
|
||||
LOG.error(_('Unable to instantiate unregistered object type '
|
||||
'%(objtype)s') % dict(objtype=objname))
|
||||
raise exception.UnsupportedObjectError(objtype=objname)
|
||||
|
||||
compatible_match = None
|
||||
for objclass in cls._obj_classes[objname]:
|
||||
if objclass.version == objver:
|
||||
return objclass
|
||||
try:
|
||||
check_object_version(objclass.version, objver)
|
||||
compatible_match = objclass
|
||||
except exception.IncompatibleObjectVersion:
|
||||
pass
|
||||
|
||||
if compatible_match:
|
||||
return compatible_match
|
||||
|
||||
raise exception.IncompatibleObjectVersion(objname=objname,
|
||||
objver=objver)
|
||||
|
||||
_attr_created_at_from_primitive = obj_utils.dt_deserializer
|
||||
_attr_updated_at_from_primitive = obj_utils.dt_deserializer
|
||||
|
||||
def _attr_from_primitive(self, attribute, value):
|
||||
"""Attribute deserialization dispatcher.
|
||||
|
||||
This calls self._attr_foo_from_primitive(value) for an attribute
|
||||
foo with value, if it exists, otherwise it assumes the value
|
||||
is suitable for the attribute's setter method.
|
||||
"""
|
||||
handler = '_attr_%s_from_primitive' % attribute
|
||||
if hasattr(self, handler):
|
||||
return getattr(self, handler)(value)
|
||||
return value
|
||||
|
||||
@classmethod
|
||||
def obj_from_primitive(cls, primitive, context=None):
|
||||
"""Simple base-case hydration.
|
||||
|
||||
This calls self._attr_from_primitive() for each item in fields.
|
||||
"""
|
||||
if primitive['sm_api_object.namespace'] != 'sm_api':
|
||||
# NOTE(danms): We don't do anything with this now, but it's
|
||||
# there for "the future"
|
||||
raise exception.UnsupportedObjectError(
|
||||
objtype='%s.%s' % (primitive['sm_api_object.namespace'],
|
||||
primitive['sm_api_object.name']))
|
||||
objname = primitive['sm_api_object.name']
|
||||
objver = primitive['sm_api_object.version']
|
||||
objdata = primitive['sm_api_object.data']
|
||||
objclass = cls.obj_class_from_name(objname, objver)
|
||||
self = objclass()
|
||||
self._context = context
|
||||
for name in self.fields:
|
||||
if name in objdata:
|
||||
setattr(self, name,
|
||||
self._attr_from_primitive(name, objdata[name]))
|
||||
changes = primitive.get('sm_api_object.changes', [])
|
||||
self._changed_fields = set([x for x in changes if x in self.fields])
|
||||
return self
|
||||
|
||||
_attr_created_at_to_primitive = obj_utils.dt_serializer('created_at')
|
||||
_attr_updated_at_to_primitive = obj_utils.dt_serializer('updated_at')
|
||||
|
||||
def _attr_to_primitive(self, attribute):
|
||||
"""Attribute serialization dispatcher.
|
||||
|
||||
This calls self._attr_foo_to_primitive() for an attribute foo,
|
||||
if it exists, otherwise it assumes the attribute itself is
|
||||
primitive-enough to be sent over the RPC wire.
|
||||
"""
|
||||
handler = '_attr_%s_to_primitive' % attribute
|
||||
if hasattr(self, handler):
|
||||
return getattr(self, handler)()
|
||||
else:
|
||||
return getattr(self, attribute)
|
||||
|
||||
def obj_to_primitive(self):
|
||||
"""Simple base-case dehydration.
|
||||
|
||||
This calls self._attr_to_primitive() for each item in fields.
|
||||
"""
|
||||
primitive = dict()
|
||||
for name in self.fields:
|
||||
if hasattr(self, get_attrname(name)):
|
||||
primitive[name] = self._attr_to_primitive(name)
|
||||
obj = {'sm_api_object.name': self.obj_name(),
|
||||
'sm_api_object.namespace': 'sm_api',
|
||||
'sm_api_object.version': self.version,
|
||||
'sm_api_object.data': primitive}
|
||||
if self.obj_what_changed():
|
||||
obj['sm_api_object.changes'] = list(self.obj_what_changed())
|
||||
return obj
|
||||
|
||||
def obj_load_attr(self, attrname):
|
||||
"""Load an additional attribute from the real object.
|
||||
|
||||
This should use self._conductor, and cache any data that might
|
||||
be useful for future load operations.
|
||||
"""
|
||||
raise NotImplementedError(
|
||||
_("Cannot load '%(attrname)s' in the base class") %
|
||||
{'attrname': attrname})
|
||||
|
||||
def save(self, context):
|
||||
"""Save the changed fields back to the store.
|
||||
|
||||
This is optional for subclasses, but is presented here in the base
|
||||
class for consistency among those that do.
|
||||
"""
|
||||
raise NotImplementedError('Cannot save anything in the base class')
|
||||
|
||||
def obj_what_changed(self):
|
||||
"""Returns a set of fields that have been modified."""
|
||||
return self._changed_fields
|
||||
|
||||
def obj_reset_changes(self, fields=None):
|
||||
"""Reset the list of fields that have been changed.
|
||||
|
||||
Note that this is NOT "revert to previous values"
|
||||
"""
|
||||
if fields:
|
||||
self._changed_fields -= set(fields)
|
||||
else:
|
||||
self._changed_fields.clear()
|
||||
|
||||
# dictish syntactic sugar
|
||||
def iteritems(self):
|
||||
"""For backwards-compatibility with dict-based objects.
|
||||
|
||||
NOTE(danms): May be removed in the future.
|
||||
"""
|
||||
for name in self.fields.keys() + self.obj_extra_fields:
|
||||
if (hasattr(self, get_attrname(name)) or
|
||||
name in self.obj_extra_fields):
|
||||
yield name, getattr(self, name)
|
||||
|
||||
items = lambda self: list(self.iteritems())
|
||||
|
||||
def __getitem__(self, name):
|
||||
"""For backwards-compatibility with dict-based objects.
|
||||
|
||||
NOTE(danms): May be removed in the future.
|
||||
"""
|
||||
return getattr(self, name)
|
||||
|
||||
def __setitem__(self, name, value):
|
||||
"""For backwards-compatibility with dict-based objects.
|
||||
|
||||
NOTE(danms): May be removed in the future.
|
||||
"""
|
||||
setattr(self, name, value)
|
||||
|
||||
def __contains__(self, name):
|
||||
"""For backwards-compatibility with dict-based objects.
|
||||
|
||||
NOTE(danms): May be removed in the future.
|
||||
"""
|
||||
return hasattr(self, get_attrname(name))
|
||||
|
||||
def get(self, key, value=None):
|
||||
"""For backwards-compatibility with dict-based objects.
|
||||
|
||||
NOTE(danms): May be removed in the future.
|
||||
"""
|
||||
return self[key]
|
||||
|
||||
def update(self, updates):
|
||||
"""For backwards-compatibility with dict-base objects.
|
||||
|
||||
NOTE(danms): May be removed in the future.
|
||||
"""
|
||||
for key, value in updates.items():
|
||||
self[key] = value
|
||||
|
||||
def as_dict(self):
|
||||
return dict((k, getattr(self, k))
|
||||
for k in self.fields
|
||||
if hasattr(self, k))
|
||||
|
||||
@classmethod
|
||||
def get_defaults(cls):
|
||||
"""Return a dict of its fields with their default value."""
|
||||
return dict((k, v(None))
|
||||
for k, v in cls.fields.iteritems()
|
||||
if k != "id" and callable(v))
|
||||
|
||||
|
||||
class ObjectListBase(object):
|
||||
"""Mixin class for lists of objects.
|
||||
|
||||
This mixin class can be added as a base class for an object that
|
||||
is implementing a list of objects. It adds a single field of 'objects',
|
||||
which is the list store, and behaves like a list itself. It supports
|
||||
serialization of the list of objects automatically.
|
||||
"""
|
||||
fields = {
|
||||
'objects': list,
|
||||
}
|
||||
|
||||
def __iter__(self):
|
||||
"""List iterator interface."""
|
||||
return iter(self.objects)
|
||||
|
||||
def __len__(self):
|
||||
"""List length."""
|
||||
return len(self.objects)
|
||||
|
||||
def __getitem__(self, index):
|
||||
"""List index access."""
|
||||
if isinstance(index, slice):
|
||||
new_obj = self.__class__()
|
||||
new_obj.objects = self.objects[index]
|
||||
# NOTE(danms): We must be mixed in with an Sm_apiObject!
|
||||
new_obj.obj_reset_changes()
|
||||
new_obj._context = self._context
|
||||
return new_obj
|
||||
return self.objects[index]
|
||||
|
||||
def __contains__(self, value):
|
||||
"""List membership test."""
|
||||
return value in self.objects
|
||||
|
||||
def count(self, value):
|
||||
"""List count of value occurrences."""
|
||||
return self.objects.count(value)
|
||||
|
||||
def index(self, value):
|
||||
"""List index of value."""
|
||||
return self.objects.index(value)
|
||||
|
||||
def _attr_objects_to_primitive(self):
|
||||
"""Serialization of object list."""
|
||||
return [x.obj_to_primitive() for x in self.objects]
|
||||
|
||||
def _attr_objects_from_primitive(self, value):
|
||||
"""Deserialization of object list."""
|
||||
objects = []
|
||||
for entity in value:
|
||||
obj = Sm_apiObject.obj_from_primitive(entity,
|
||||
context=self._context)
|
||||
objects.append(obj)
|
||||
return objects
|
||||
|
||||
|
||||
class Sm_apiObjectSerializer(rpc_serializer.Serializer):
|
||||
"""A Sm_apiObject-aware Serializer.
|
||||
|
||||
This implements the Oslo Serializer interface and provides the
|
||||
ability to serialize and deserialize Sm_apiObject entities. Any service
|
||||
that needs to accept or return Sm_apiObjects as arguments or result values
|
||||
should pass this to its RpcProxy and RpcDispatcher objects.
|
||||
"""
|
||||
|
||||
def _process_iterable(self, context, action_fn, values):
|
||||
"""Process an iterable, taking an action on each value.
|
||||
:param:context: Request context
|
||||
:param:action_fn: Action to take on each item in values
|
||||
:param:values: Iterable container of things to take action on
|
||||
:returns: A new container of the same type (except set) with
|
||||
items from values having had action applied.
|
||||
"""
|
||||
iterable = values.__class__
|
||||
if iterable == set:
|
||||
# NOTE(danms): A set can't have an unhashable value inside, such as
|
||||
# a dict. Convert sets to tuples, which is fine, since we can't
|
||||
# send them over RPC anyway.
|
||||
iterable = tuple
|
||||
return iterable([action_fn(context, value) for value in values])
|
||||
|
||||
def serialize_entity(self, context, entity):
|
||||
if isinstance(entity, (tuple, list, set)):
|
||||
entity = self._process_iterable(context, self.serialize_entity,
|
||||
entity)
|
||||
elif (hasattr(entity, 'obj_to_primitive') and
|
||||
callable(entity.obj_to_primitive)):
|
||||
entity = entity.obj_to_primitive()
|
||||
return entity
|
||||
|
||||
def deserialize_entity(self, context, entity):
|
||||
if isinstance(entity, dict) and 'sm_api_object.name' in entity:
|
||||
entity = Sm_apiObject.obj_from_primitive(entity, context=context)
|
||||
elif isinstance(entity, (tuple, list, set)):
|
||||
entity = self._process_iterable(context, self.deserialize_entity,
|
||||
entity)
|
||||
return entity
|
||||
|
||||
|
||||
def obj_to_primitive(obj):
|
||||
"""Recursively turn an object into a python primitive.
|
||||
|
||||
An Sm_apiObject becomes a dict, and anything that implements ObjectListBase
|
||||
becomes a list.
|
||||
"""
|
||||
if isinstance(obj, ObjectListBase):
|
||||
return [obj_to_primitive(x) for x in obj]
|
||||
elif isinstance(obj, Sm_apiObject):
|
||||
result = {}
|
||||
for key, value in obj.iteritems():
|
||||
result[key] = obj_to_primitive(value)
|
||||
return result
|
||||
else:
|
||||
return obj
|
72
service-mgmt-api/sm-api/sm_api/objects/smo_node.py
Normal file
@ -0,0 +1,72 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# coding=utf-8
|
||||
#
|
||||
|
||||
from sm_api.db import api as db_api
|
||||
from sm_api.objects import base
|
||||
from sm_api.objects import utils
|
||||
|
||||
class sm_node(base.Sm_apiObject):
|
||||
|
||||
dbapi = db_api.get_instance()
|
||||
|
||||
fields = {
|
||||
'id': int,
|
||||
'name': utils.str_or_none,
|
||||
'administrative_state': utils.str_or_none,
|
||||
'operational_state': utils.str_or_none,
|
||||
'availability_status': utils.str_or_none,
|
||||
'ready_state': utils.str_or_none,
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def _from_db_object(server, db_server):
|
||||
"""Converts a database entity to a formal object."""
|
||||
for field in server.fields:
|
||||
server[field] = db_server[field]
|
||||
|
||||
server.obj_reset_changes()
|
||||
return server
|
||||
|
||||
@base.remotable_classmethod
|
||||
def get_by_uuid(cls, context, uuid):
|
||||
"""Find a server based on uuid and return a Node object.
|
||||
|
||||
:param uuid: the uuid of a server.
|
||||
:returns: a :class:`Node` object.
|
||||
"""
|
||||
db_server = cls.dbapi.sm_node_get(uuid)
|
||||
return sm_node._from_db_object(cls(), db_server)
|
||||
|
||||
# @base.remotable
|
||||
# def save(self, context):
|
||||
# """Save updates to this Node.
|
||||
|
||||
# Column-wise updates will be made based on the result of
|
||||
# self.what_changed(). If target_power_state is provided,
|
||||
# it will be checked against the in-database copy of the
|
||||
# server before updates are made.
|
||||
|
||||
# :param context: Security context
|
||||
# """
|
||||
# updates = {}
|
||||
# changes = self.obj_what_changed()
|
||||
# for field in changes:
|
||||
# updates[field] = self[field]
|
||||
# self.dbapi.sm_node_update(self.uuid, updates)
|
||||
#
|
||||
# self.obj_reset_changes()
|
||||
|
||||
@base.remotable
|
||||
def refresh(self, context):
|
||||
current = self.__class__.get_by_uuid(context, uuid=self.uuid)
|
||||
for field in self.fields:
|
||||
if (hasattr(self, base.get_attrname(field)) and
|
||||
self[field] != current[field]):
|
||||
self[field] = current[field]
|
81
service-mgmt-api/sm-api/sm_api/objects/smo_sda.py
Executable file
@ -0,0 +1,81 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# coding=utf-8
|
||||
#
|
||||
|
||||
from sm_api.db import api as db_api
|
||||
from sm_api.objects import base
|
||||
from sm_api.objects import utils
|
||||
|
||||
|
||||
class sm_sda(base.Sm_apiObject):
|
||||
|
||||
dbapi = db_api.get_instance()
|
||||
|
||||
fields = {
|
||||
'id': int,
|
||||
'uuid': utils.str_or_none,
|
||||
# 'deleted': utils.str_or_none,
|
||||
|
||||
# 'created_at': utils.datetime_str_or_none,
|
||||
# 'updated_at': utils.datetime_str_or_none,
|
||||
'name': utils.str_or_none,
|
||||
'node_name': utils.str_or_none,
|
||||
'service_group_name': utils.str_or_none,
|
||||
'state': utils.str_or_none,
|
||||
'desired_state': utils.str_or_none,
|
||||
'status': utils.str_or_none,
|
||||
'condition': utils.str_or_none,
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def _from_db_object(server, db_server):
|
||||
"""Converts a database entity to a formal object."""
|
||||
for field in server.fields:
|
||||
server[field] = db_server[field]
|
||||
|
||||
server.obj_reset_changes()
|
||||
return server
|
||||
|
||||
@base.remotable_classmethod
|
||||
def get_by_uuid(cls, context, uuid):
|
||||
"""Find a server based on uuid and return a Node object.
|
||||
|
||||
:param uuid: the uuid of a server.
|
||||
:returns: a :class:`Node` object.
|
||||
"""
|
||||
# TODO(deva): enable getting ports for this server
|
||||
db_server = cls.dbapi.sm_sda_get(uuid)
|
||||
return sm_sda._from_db_object(cls(), db_server)
|
||||
|
||||
# @base.remotable
|
||||
# def save(self, context):
|
||||
# """Save updates to this Node.
|
||||
|
||||
# Column-wise updates will be made based on the result of
|
||||
# self.what_changed(). If target_power_state is provided,
|
||||
# it will be checked against the in-database copy of the
|
||||
# server before updates are made.
|
||||
|
||||
# :param context: Security context
|
||||
# """
|
||||
# updates = {}
|
||||
# changes = self.obj_what_changed()
|
||||
# for field in changes:
|
||||
# updates[field] = self[field]
|
||||
# self.dbapi.sm_sda_update(self.uuid, updates)
|
||||
#
|
||||
# self.obj_reset_changes()
|
||||
|
||||
@base.remotable
|
||||
def refresh(self, context):
|
||||
current = self.__class__.get_by_uuid(context, uuid=self.uuid)
|
||||
for field in self.fields:
|
||||
if (hasattr(self, base.get_attrname(field)) and
|
||||
self[field] != current[field]):
|
||||
self[field] = current[field]
|
72
service-mgmt-api/sm-api/sm_api/objects/smo_sdm.py
Normal file
@ -0,0 +1,72 @@
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf-8
#

from sm_api.db import api as db_api
from sm_api.objects import base
from sm_api.objects import utils


class sm_sdm(base.Sm_apiObject):

    dbapi = db_api.get_instance()

    fields = {
        'id': int,
        'name': utils.str_or_none,
        'service_group_name': utils.str_or_none,
        'redundancy_model': utils.str_or_none,
    }

    @staticmethod
    def _from_db_object(server, db_server):
        """Converts a database entity to a formal object."""
        for field in server.fields:
            server[field] = db_server[field]

        server.obj_reset_changes()
        return server

    @base.remotable_classmethod
    def get_by_uuid(cls, context, uuid):
        """Find a server based on uuid and return a Node object.

        :param uuid: the uuid of a server.
        :returns: a :class:`Node` object.
        """
        # NOTE: this lookup mirrors smo_sda; no uuid-based sm_sdm getter exists yet
        db_server = cls.dbapi.sm_sda_get(uuid)
        return sm_sdm._from_db_object(cls(), db_server)

    # @base.remotable
    # def save(self, context):
    #     """Save updates to this Node.

    #     Column-wise updates will be made based on the result of
    #     self.what_changed(). If target_power_state is provided,
    #     it will be checked against the in-database copy of the
    #     server before updates are made.

    #     :param context: Security context
    #     """
    #     updates = {}
    #     changes = self.obj_what_changed()
    #     for field in changes:
    #         updates[field] = self[field]
    #     self.dbapi.sm_sda_update(self.uuid, updates)
    #
    #     self.obj_reset_changes()

    @base.remotable
    def refresh(self, context):
        current = self.__class__.get_by_uuid(context, uuid=self.uuid)
        for field in self.fields:
            if (hasattr(self, base.get_attrname(field)) and
                    self[field] != current[field]):
                self[field] = current[field]
77
service-mgmt-api/sm-api/sm_api/objects/smo_service.py
Normal file
@ -0,0 +1,77 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# coding=utf-8
|
||||
#
|
||||
|
||||
from sm_api.db import api as db_api
|
||||
from sm_api.objects import base
|
||||
from sm_api.objects import utils
|
||||
|
||||
|
||||
class service(base.Sm_apiObject):
|
||||
|
||||
dbapi = db_api.get_instance()
|
||||
|
||||
fields = {
|
||||
'id': int,
|
||||
'name': utils.str_or_none,
|
||||
'desired_state': utils.str_or_none,
|
||||
'state': utils.str_or_none,
|
||||
'status': utils.str_or_none,
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def _from_db_object(server, db_server):
|
||||
"""Converts a database entity to a formal object."""
|
||||
for field in server.fields:
|
||||
server[field] = db_server[field]
|
||||
|
||||
server.obj_reset_changes()
|
||||
return server
|
||||
|
||||
@base.remotable_classmethod
|
||||
def get_by_uuid(cls, context, uuid):
|
||||
"""Find a server based on uuid and return a Node object.
|
||||
|
||||
:param uuid: the uuid of a server.
|
||||
:returns: a :class:`Node` object.
|
||||
"""
|
||||
# TODO(deva): enable getting ports for this server
|
||||
db_server = cls.dbapi.sm_service_get(uuid)
|
||||
return service._from_db_object(cls(), db_server)
|
||||
|
||||
@base.remotable
|
||||
def save(self, context):
|
||||
"""Save updates to this Node.
|
||||
|
||||
Column-wise updates will be made based on the result of
|
||||
self.what_changed(). If target_power_state is provided,
|
||||
it will be checked against the in-database copy of the
|
||||
server before updates are made.
|
||||
|
||||
:param context: Security context
|
||||
"""
|
||||
# TODO(deva): enforce safe limits on what fields may be changed
|
||||
# depending on state. Eg., do not allow changing
|
||||
# instance_uuid of an already-provisioned server.
|
||||
# Raise exception if unsafe to change something.
|
||||
updates = {}
|
||||
changes = self.obj_what_changed()
|
||||
for field in changes:
|
||||
updates[field] = self[field]
|
||||
self.dbapi.sm_service_update(self.uuid, updates)
|
||||
|
||||
self.obj_reset_changes()
|
||||
|
||||
@base.remotable
|
||||
def refresh(self, context):
|
||||
current = self.__class__.get_by_uuid(context, uuid=self.uuid)
|
||||
for field in self.fields:
|
||||
if (hasattr(self, base.get_attrname(field)) and
|
||||
self[field] != current[field]):
|
||||
self[field] = current[field]
|
80
service-mgmt-api/sm-api/sm_api/objects/smo_servicegroup.py
Normal file
@ -0,0 +1,80 @@
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
# coding=utf-8
|
||||
#
|
||||
|
||||
from sm_api.db import api as db_api
|
||||
from sm_api.objects import base
|
||||
from sm_api.objects import utils
|
||||
|
||||
|
||||
class service_groups(base.Sm_apiObject):
|
||||
|
||||
dbapi = db_api.get_instance()
|
||||
|
||||
fields = {
|
||||
'id': utils.int_or_none,
|
||||
# 'uuid': utils.str_or_none,
|
||||
# 'deleted': utils.str_or_none,
|
||||
|
||||
# 'created_at': utils.datetime_str_or_none,
|
||||
# 'updated_at': utils.datetime_str_or_none,
|
||||
'name': utils.str_or_none,
|
||||
'state': utils.str_or_none,
|
||||
'status': utils.str_or_none,
|
||||
}
|
||||
|
||||
@staticmethod
|
||||
def _from_db_object(server, db_server):
|
||||
"""Converts a database entity to a formal object."""
|
||||
for field in server.fields:
|
||||
server[field] = db_server[field]
|
||||
|
||||
server.obj_reset_changes()
|
||||
return server
|
||||
|
||||
@base.remotable_classmethod
|
||||
def get_by_uuid(cls, context, uuid):
|
||||
"""Find a server based on uuid and return a Node object.
|
||||
|
||||
:param uuid: the uuid of a server.
|
||||
:returns: a :class:`Node` object.
|
||||
"""
|
||||
db_server = cls.dbapi.iservicegroup_get(uuid)
|
||||
return service_groups._from_db_object(cls(), db_server)
|
||||
|
||||
@base.remotable
|
||||
def save(self, context):
|
||||
"""Save updates to this Node.
|
||||
|
||||
Column-wise updates will be made based on the result of
|
||||
self.what_changed(). If target_power_state is provided,
|
||||
it will be checked against the in-database copy of the
|
||||
server before updates are made.
|
||||
|
||||
:param context: Security context
|
||||
"""
|
||||
# TODO(deva): enforce safe limits on what fields may be changed
|
||||
# depending on state. Eg., do not allow changing
|
||||
# instance_uuid of an already-provisioned server.
|
||||
# Raise exception if unsafe to change something.
|
||||
updates = {}
|
||||
changes = self.obj_what_changed()
|
||||
for field in changes:
|
||||
updates[field] = self[field]
|
||||
self.dbapi.iservicegroup_update(self.uuid, updates)
|
||||
|
||||
self.obj_reset_changes()
|
||||
|
||||
@base.remotable
|
||||
def refresh(self, context):
|
||||
current = self.__class__.get_by_uuid(context, uuid=self.uuid)
|
||||
for field in self.fields:
|
||||
if (hasattr(self, base.get_attrname(field)) and
|
||||
self[field] != current[field]):
|
||||
self[field] = current[field]
|
129
service-mgmt-api/sm-api/sm_api/objects/utils.py
Normal file
@ -0,0 +1,129 @@
|
||||
# Copyright 2013 IBM Corp.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Utility methods for objects"""
|
||||
|
||||
import ast
|
||||
import datetime
|
||||
import iso8601
|
||||
import netaddr
|
||||
|
||||
from sm_api.openstack.common import timeutils
|
||||
|
||||
|
||||
def datetime_or_none(dt):
|
||||
"""Validate a datetime or None value."""
|
||||
if dt is None:
|
||||
return None
|
||||
elif isinstance(dt, datetime.datetime):
|
||||
if dt.utcoffset() is None:
|
||||
# NOTE(danms): Legacy objects from sqlalchemy are stored in UTC,
|
||||
# but are returned without a timezone attached.
|
||||
# As a transitional aid, assume a tz-naive object is in UTC.
|
||||
return dt.replace(tzinfo=iso8601.iso8601.Utc())
|
||||
else:
|
||||
return dt
|
||||
raise ValueError('A datetime.datetime is required here')
|
||||
|
||||
|
||||
def datetime_or_str_or_none(val):
|
||||
if isinstance(val, basestring):
|
||||
return timeutils.parse_isotime(val)
|
||||
return datetime_or_none(val)
|
||||
|
||||
|
||||
def int_or_none(val):
|
||||
"""Attempt to parse an integer value, or None."""
|
||||
if val is None:
|
||||
return val
|
||||
else:
|
||||
return int(val)
|
||||
|
||||
|
||||
def int_or_zero(val):
|
||||
"""Attempt to parse an integer value, if None return zero."""
|
||||
if val is None:
|
||||
return int(0)
|
||||
else:
|
||||
return int(val)
|
||||
|
||||
|
||||
def str_or_none(val):
|
||||
"""Attempt to stringify a value, or None."""
|
||||
if val is None:
|
||||
return val
|
||||
else:
|
||||
return str(val)
|
||||
|
||||
|
||||
def dict_or_none(val):
|
||||
"""Attempt to dictify a value, or None."""
|
||||
if val is None:
|
||||
return {}
|
||||
elif isinstance(val, str):
|
||||
return dict(ast.literal_eval(val))
|
||||
else:
|
||||
try:
|
||||
return dict(val)
|
||||
except ValueError:
|
||||
return {}
|
||||
|
||||
|
||||
def ip_or_none(version):
|
||||
"""Return a version-specific IP address validator."""
|
||||
def validator(val, version=version):
|
||||
if val is None:
|
||||
return val
|
||||
else:
|
||||
return netaddr.IPAddress(val, version=version)
|
||||
return validator
|
||||
|
||||
|
||||
def nested_object_or_none(objclass):
|
||||
def validator(val, objclass=objclass):
|
||||
if val is None or isinstance(val, objclass):
|
||||
return val
|
||||
raise ValueError('An object of class %s is required here' % objclass)
|
||||
return validator
|
||||
|
||||
|
||||
def dt_serializer(name):
|
||||
"""Return a datetime serializer for a named attribute."""
|
||||
def serializer(self, name=name):
|
||||
if getattr(self, name) is not None:
|
||||
return timeutils.isotime(getattr(self, name))
|
||||
else:
|
||||
return None
|
||||
return serializer
|
||||
|
||||
|
||||
def dt_deserializer(instance, val):
|
||||
"""A deserializer method for datetime attributes."""
|
||||
if val is None:
|
||||
return None
|
||||
else:
|
||||
return timeutils.parse_isotime(val)
|
||||
|
||||
|
||||
def obj_serializer(name):
|
||||
def serializer(self, name=name):
|
||||
if getattr(self, name) is not None:
|
||||
return getattr(self, name).obj_to_primitive()
|
||||
else:
|
||||
return None
|
||||
return serializer
|
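The validators and serializers above are plain callables, so a common pattern is to map field names to validators and apply them on attribute assignment. A minimal sketch, assuming an illustrative ExampleObject class that is not part of this change:

    from sm_api.objects import utils as obj_utils

    class ExampleObject(object):
        # Each value is a validation/coercion callable applied on assignment.
        fields = {
            'id': obj_utils.int_or_none,
            'uuid': obj_utils.str_or_none,
            'created_at': obj_utils.datetime_or_str_or_none,
        }

        def __setattr__(self, name, value):
            if name in self.fields:
                value = self.fields[name](value)
            super(ExampleObject, self).__setattr__(name, value)

    obj = ExampleObject()
    obj.id = '42'      # coerced to the integer 42 by int_or_none
    obj.uuid = None    # passes through unchanged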
67
service-mgmt-api/sm-api/sm_api/openstack/common/cliutils.py
Normal file
@@ -0,0 +1,67 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2012 Red Hat, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import inspect
|
||||
|
||||
|
||||
class MissingArgs(Exception):
|
||||
|
||||
def __init__(self, missing):
|
||||
self.missing = missing
|
||||
|
||||
def __str__(self):
|
||||
if len(self.missing) == 1:
|
||||
return "An argument is missing"
|
||||
else:
|
||||
return ("%(num)d arguments are missing" %
|
||||
dict(num=len(self.missing)))
|
||||
|
||||
|
||||
def validate_args(fn, *args, **kwargs):
|
||||
"""Check that the supplied args are sufficient for calling a function.
|
||||
|
||||
>>> validate_args(lambda a: None)
|
||||
Traceback (most recent call last):
|
||||
...
|
||||
MissingArgs: An argument is missing
|
||||
>>> validate_args(lambda a, b, c, d: None, 0, c=1)
|
||||
Traceback (most recent call last):
|
||||
...
|
||||
MissingArgs: 2 arguments are missing
|
||||
|
||||
:param fn: the function to check
|
||||
:param arg: the positional arguments supplied
|
||||
:param kwargs: the keyword arguments supplied
|
||||
"""
|
||||
argspec = inspect.getargspec(fn)
|
||||
|
||||
num_defaults = len(argspec.defaults or [])
|
||||
required_args = argspec.args[:len(argspec.args) - num_defaults]
|
||||
|
||||
def isbound(method):
|
||||
return getattr(method, 'im_self', None) is not None
|
||||
|
||||
if isbound(fn):
|
||||
required_args.pop(0)
|
||||
|
||||
missing = [arg for arg in required_args if arg not in kwargs]
|
||||
missing = missing[len(args):]
|
||||
if missing:
|
||||
raise MissingArgs(missing)
|
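A sketch of how validate_args() above can guard a CLI dispatch; do_show and its arguments are invented for the example:

    from sm_api.openstack.common import cliutils

    def do_show(service_name, verbose=False):
        pass

    try:
        cliutils.validate_args(do_show, 'service-a', verbose=True)
    except cliutils.MissingArgs as e:
        print("Missing: %s" % e.missing)
    else:
        do_show('service-a', verbose=True)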
258
service-mgmt-api/sm-api/sm_api/openstack/common/config/generator.py
Executable file
@@ -0,0 +1,258 @@
|
||||
#!/usr/bin/env python
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2012 SINA Corporation
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
#
|
||||
# @author: Zhongyue Luo, SINA Corporation.
|
||||
#
|
||||
"""Extracts OpenStack config option info from module(s)."""
|
||||
|
||||
import imp
|
||||
import os
|
||||
import re
|
||||
import socket
|
||||
import sys
|
||||
import textwrap
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.openstack.common import gettextutils
|
||||
from sm_api.openstack.common import importutils
|
||||
|
||||
gettextutils.install('sm_api')
|
||||
|
||||
STROPT = "StrOpt"
|
||||
BOOLOPT = "BoolOpt"
|
||||
INTOPT = "IntOpt"
|
||||
FLOATOPT = "FloatOpt"
|
||||
LISTOPT = "ListOpt"
|
||||
MULTISTROPT = "MultiStrOpt"
|
||||
|
||||
OPT_TYPES = {
|
||||
STROPT: 'string value',
|
||||
BOOLOPT: 'boolean value',
|
||||
INTOPT: 'integer value',
|
||||
FLOATOPT: 'floating point value',
|
||||
LISTOPT: 'list value',
|
||||
MULTISTROPT: 'multi valued',
|
||||
}
|
||||
|
||||
OPTION_COUNT = 0
|
||||
OPTION_REGEX = re.compile(r"(%s)" % "|".join([STROPT, BOOLOPT, INTOPT,
|
||||
FLOATOPT, LISTOPT,
|
||||
MULTISTROPT]))
|
||||
|
||||
PY_EXT = ".py"
|
||||
BASEDIR = os.path.abspath(os.path.join(os.path.dirname(__file__),
|
||||
"../../../../"))
|
||||
WORDWRAP_WIDTH = 60
|
||||
|
||||
|
||||
def generate(srcfiles):
|
||||
mods_by_pkg = dict()
|
||||
for filepath in srcfiles:
|
||||
pkg_name = filepath.split(os.sep)[1]
|
||||
mod_str = '.'.join(['.'.join(filepath.split(os.sep)[:-1]),
|
||||
os.path.basename(filepath).split('.')[0]])
|
||||
mods_by_pkg.setdefault(pkg_name, list()).append(mod_str)
|
||||
# NOTE(lzyeval): place top level modules before packages
|
||||
pkg_names = filter(lambda x: x.endswith(PY_EXT), mods_by_pkg.keys())
|
||||
pkg_names.sort()
|
||||
ext_names = filter(lambda x: x not in pkg_names, mods_by_pkg.keys())
|
||||
ext_names.sort()
|
||||
pkg_names.extend(ext_names)
|
||||
|
||||
# opts_by_group is a mapping of group name to an options list
|
||||
# The options list is a list of (module, options) tuples
|
||||
opts_by_group = {'DEFAULT': []}
|
||||
|
||||
for pkg_name in pkg_names:
|
||||
mods = mods_by_pkg.get(pkg_name)
|
||||
mods.sort()
|
||||
for mod_str in mods:
|
||||
if mod_str.endswith('.__init__'):
|
||||
mod_str = mod_str[:mod_str.rfind(".")]
|
||||
|
||||
mod_obj = _import_module(mod_str)
|
||||
if not mod_obj:
|
||||
continue
|
||||
|
||||
for group, opts in _list_opts(mod_obj):
|
||||
opts_by_group.setdefault(group, []).append((mod_str, opts))
|
||||
|
||||
print_group_opts('DEFAULT', opts_by_group.pop('DEFAULT', []))
|
||||
for group, opts in opts_by_group.items():
|
||||
print_group_opts(group, opts)
|
||||
|
||||
print "# Total option count: %d" % OPTION_COUNT
|
||||
|
||||
|
||||
def _import_module(mod_str):
|
||||
try:
|
||||
if mod_str.startswith('bin.'):
|
||||
imp.load_source(mod_str[4:], os.path.join('bin', mod_str[4:]))
|
||||
return sys.modules[mod_str[4:]]
|
||||
else:
|
||||
return importutils.import_module(mod_str)
|
||||
except ImportError as ie:
|
||||
sys.stderr.write("%s\n" % str(ie))
|
||||
return None
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
|
||||
def _is_in_group(opt, group):
|
||||
"Check if opt is in group."
|
||||
for key, value in group._opts.items():
|
||||
if value['opt'] == opt:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def _guess_groups(opt, mod_obj):
|
||||
# is it in the DEFAULT group?
|
||||
if _is_in_group(opt, cfg.CONF):
|
||||
return 'DEFAULT'
|
||||
|
||||
# what other groups is it in?
|
||||
for key, value in cfg.CONF.items():
|
||||
if isinstance(value, cfg.CONF.GroupAttr):
|
||||
if _is_in_group(opt, value._group):
|
||||
return value._group.name
|
||||
|
||||
raise RuntimeError(
|
||||
"Unable to find group for option %s, "
|
||||
"maybe it's defined twice in the same group?"
|
||||
% opt.name
|
||||
)
|
||||
|
||||
|
||||
def _list_opts(obj):
|
||||
def is_opt(o):
|
||||
return (isinstance(o, cfg.Opt) and
|
||||
not isinstance(o, cfg.SubCommandOpt))
|
||||
|
||||
opts = list()
|
||||
for attr_str in dir(obj):
|
||||
attr_obj = getattr(obj, attr_str)
|
||||
if is_opt(attr_obj):
|
||||
opts.append(attr_obj)
|
||||
elif (isinstance(attr_obj, list) and
|
||||
all(map(lambda x: is_opt(x), attr_obj))):
|
||||
opts.extend(attr_obj)
|
||||
|
||||
ret = {}
|
||||
for opt in opts:
|
||||
ret.setdefault(_guess_groups(opt, obj), []).append(opt)
|
||||
return ret.items()
|
||||
|
||||
|
||||
def print_group_opts(group, opts_by_module):
|
||||
print "[%s]" % group
|
||||
print
|
||||
global OPTION_COUNT
|
||||
for mod, opts in opts_by_module:
|
||||
OPTION_COUNT += len(opts)
|
||||
print '#'
|
||||
print '# Options defined in %s' % mod
|
||||
print '#'
|
||||
print
|
||||
for opt in opts:
|
||||
_print_opt(opt)
|
||||
print
|
||||
|
||||
|
||||
def _get_my_ip():
|
||||
try:
|
||||
csock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
|
||||
csock.connect(('8.8.8.8', 80))
|
||||
(addr, port) = csock.getsockname()
|
||||
csock.close()
|
||||
return addr
|
||||
except socket.error:
|
||||
return None
|
||||
|
||||
|
||||
def _sanitize_default(s):
|
||||
"""Set up a reasonably sensible default for pybasedir, my_ip and host."""
|
||||
if s.startswith(BASEDIR):
|
||||
return s.replace(BASEDIR, '/usr/lib/python/site-packages')
|
||||
elif BASEDIR in s:
|
||||
return s.replace(BASEDIR, '')
|
||||
elif s == _get_my_ip():
|
||||
return '10.0.0.1'
|
||||
elif s == socket.gethostname():
|
||||
return 'sm_api'
|
||||
elif s.strip() != s:
|
||||
return '"%s"' % s
|
||||
return s
|
||||
|
||||
|
||||
def _print_opt(opt):
|
||||
opt_name, opt_default, opt_help = opt.dest, opt.default, opt.help
|
||||
if not opt_help:
|
||||
sys.stderr.write('WARNING: "%s" is missing help string.\n' % opt_name)
|
||||
opt_type = None
|
||||
try:
|
||||
opt_type = OPTION_REGEX.search(str(type(opt))).group(0)
|
||||
except (ValueError, AttributeError) as err:
|
||||
sys.stderr.write("%s\n" % str(err))
|
||||
sys.exit(1)
|
||||
opt_help += ' (' + OPT_TYPES[opt_type] + ')'
|
||||
print '#', "\n# ".join(textwrap.wrap(opt_help, WORDWRAP_WIDTH))
|
||||
try:
|
||||
if opt_default is None:
|
||||
print '#%s=<None>' % opt_name
|
||||
elif opt_type == STROPT:
|
||||
assert(isinstance(opt_default, basestring))
|
||||
print '#%s=%s' % (opt_name, _sanitize_default(opt_default))
|
||||
elif opt_type == BOOLOPT:
|
||||
assert(isinstance(opt_default, bool))
|
||||
print '#%s=%s' % (opt_name, str(opt_default).lower())
|
||||
elif opt_type == INTOPT:
|
||||
assert(isinstance(opt_default, int) and
|
||||
not isinstance(opt_default, bool))
|
||||
print '#%s=%s' % (opt_name, opt_default)
|
||||
elif opt_type == FLOATOPT:
|
||||
assert(isinstance(opt_default, float))
|
||||
print '#%s=%s' % (opt_name, opt_default)
|
||||
elif opt_type == LISTOPT:
|
||||
assert(isinstance(opt_default, list))
|
||||
print '#%s=%s' % (opt_name, ','.join(opt_default))
|
||||
elif opt_type == MULTISTROPT:
|
||||
assert(isinstance(opt_default, list))
|
||||
if not opt_default:
|
||||
opt_default = ['']
|
||||
for default in opt_default:
|
||||
print '#%s=%s' % (opt_name, default)
|
||||
print
|
||||
except Exception:
|
||||
sys.stderr.write('Error in option "%s"\n' % opt_name)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def main():
|
||||
if len(sys.argv) < 2:
|
||||
print "usage: %s [srcfile]...\n" % sys.argv[0]
|
||||
sys.exit(0)
|
||||
generate(sys.argv[1:])
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
86
service-mgmt-api/sm-api/sm_api/openstack/common/context.py
Normal file
@@ -0,0 +1,86 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""
|
||||
Simple class that stores security context information in the web request.
|
||||
|
||||
Projects should subclass this class if they wish to enhance the request
|
||||
context or provide additional information in their specific WSGI pipeline.
|
||||
"""
|
||||
|
||||
import itertools
|
||||
|
||||
from sm_api.openstack.common import uuidutils
|
||||
|
||||
|
||||
def generate_request_id():
|
||||
return 'req-%s' % uuidutils.generate_uuid()
|
||||
|
||||
|
||||
class RequestContext(object):
|
||||
|
||||
"""
|
||||
Stores information about the security context under which the user
|
||||
accesses the system, as well as additional request information.
|
||||
"""
|
||||
|
||||
def __init__(self, auth_token=None, user=None, tenant=None, is_admin=False,
|
||||
read_only=False, show_deleted=False, request_id=None):
|
||||
self.auth_token = auth_token
|
||||
self.user = user
|
||||
self.tenant = tenant
|
||||
self.is_admin = is_admin
|
||||
self.read_only = read_only
|
||||
self.show_deleted = show_deleted
|
||||
if not request_id:
|
||||
request_id = generate_request_id()
|
||||
self.request_id = request_id
|
||||
|
||||
def to_dict(self):
|
||||
return {'user': self.user,
|
||||
'tenant': self.tenant,
|
||||
'is_admin': self.is_admin,
|
||||
'read_only': self.read_only,
|
||||
'show_deleted': self.show_deleted,
|
||||
'auth_token': self.auth_token,
|
||||
'request_id': self.request_id}
|
||||
|
||||
|
||||
def get_admin_context(show_deleted="no"):
|
||||
context = RequestContext(None,
|
||||
tenant=None,
|
||||
is_admin=True,
|
||||
show_deleted=show_deleted)
|
||||
return context
|
||||
|
||||
|
||||
def get_context_from_function_and_args(function, args, kwargs):
|
||||
"""Find an arg of type RequestContext and return it.
|
||||
|
||||
This is useful in a couple of decorators where we don't
|
||||
know much about the function we're wrapping.
|
||||
"""
|
||||
|
||||
for arg in itertools.chain(kwargs.values(), args):
|
||||
if isinstance(arg, RequestContext):
|
||||
return arg
|
||||
|
||||
return None
|
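A short usage sketch for the context module above; the token, user and tenant values are invented for illustration:

    from sm_api.openstack.common import context

    ctxt = context.RequestContext(auth_token='abc123', user='admin',
                                  tenant='services', is_admin=True)
    payload = ctxt.to_dict()          # includes a generated 'request_id'

    admin_ctxt = context.get_admin_context()
    assert admin_ctxt.is_admin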
@@ -0,0 +1,20 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2012 Cloudscaling Group, Inc
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
110
service-mgmt-api/sm-api/sm_api/openstack/common/db/api.py
Normal file
@@ -0,0 +1,110 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2013 Rackspace Hosting
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Multiple DB API backend support.
|
||||
|
||||
Supported configuration options:
|
||||
|
||||
The following two parameters are in the 'database' group:
|
||||
`backend`: DB backend name or full module path to DB backend module.
|
||||
`use_tpool`: Enable thread pooling of DB API calls.
|
||||
|
||||
A DB backend module should implement a method named 'get_backend' which
|
||||
takes no arguments. The method can return any object that implements DB
|
||||
API methods.
|
||||
|
||||
*NOTE*: There are bugs in eventlet when using tpool combined with
|
||||
threading locks. The python logging module happens to use such locks. To
|
||||
work around this issue, be sure to specify thread=False with
|
||||
eventlet.monkey_patch().
|
||||
|
||||
A bug for eventlet has been filed here:
|
||||
|
||||
https://bitbucket.org/eventlet/eventlet/issue/137/
|
||||
"""
|
||||
import functools
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.openstack.common import importutils
|
||||
from sm_api.openstack.common import lockutils
|
||||
|
||||
|
||||
db_opts = [
|
||||
cfg.StrOpt('backend',
|
||||
default='sqlalchemy',
|
||||
deprecated_name='db_backend',
|
||||
deprecated_group='DEFAULT',
|
||||
help='The backend to use for db'),
|
||||
cfg.BoolOpt('use_tpool',
|
||||
default=False,
|
||||
deprecated_name='dbapi_use_tpool',
|
||||
deprecated_group='DEFAULT',
|
||||
help='Enable the experimental use of thread pooling for '
|
||||
'all DB API calls')
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(db_opts, 'database')
|
||||
|
||||
|
||||
class DBAPI(object):
|
||||
def __init__(self, backend_mapping=None):
|
||||
if backend_mapping is None:
|
||||
backend_mapping = {}
|
||||
self.__backend = None
|
||||
self.__backend_mapping = backend_mapping
|
||||
|
||||
@lockutils.synchronized('dbapi_backend', 'sm_api-')
|
||||
def __get_backend(self):
|
||||
"""Get the actual backend. May be a module or an instance of
|
||||
a class. Doesn't matter to us. We do this synchronized as it's
|
||||
possible that multiple greenthreads, started very quickly, are trying to do
|
||||
DB calls and eventlet can switch threads before self.__backend gets
|
||||
assigned.
|
||||
"""
|
||||
if self.__backend:
|
||||
# Another thread assigned it
|
||||
return self.__backend
|
||||
backend_name = CONF.database.backend
|
||||
self.__use_tpool = CONF.database.use_tpool
|
||||
if self.__use_tpool:
|
||||
from eventlet import tpool
|
||||
self.__tpool = tpool
|
||||
# Import the untranslated name if we don't have a
|
||||
# mapping.
|
||||
backend_path = self.__backend_mapping.get(backend_name,
|
||||
backend_name)
|
||||
backend_mod = importutils.import_module(backend_path)
|
||||
self.__backend = backend_mod.get_backend()
|
||||
return self.__backend
|
||||
|
||||
def __getattr__(self, key):
|
||||
backend = self.__backend or self.__get_backend()
|
||||
attr = getattr(backend, key)
|
||||
if not self.__use_tpool or not hasattr(attr, '__call__'):
|
||||
return attr
|
||||
|
||||
def tpool_wrapper(*args, **kwargs):
|
||||
return self.__tpool.execute(attr, *args, **kwargs)
|
||||
|
||||
functools.update_wrapper(tpool_wrapper, attr)
|
||||
return tpool_wrapper
|
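A sketch of how the DBAPI proxy above is typically instantiated; the backend module path in the mapping is an assumption for illustration only:

    from sm_api.openstack.common.db import api as db_api

    IMPL = db_api.DBAPI(backend_mapping={
        'sqlalchemy': 'sm_api.db.sqlalchemy.api'})

    # Attribute access is forwarded to whatever get_backend() returns, and is
    # wrapped in an eventlet tpool call when database.use_tpool is True,
    # e.g. IMPL.some_backend_method(...)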
@@ -0,0 +1,58 @@
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""DB related custom exceptions."""
|
||||
|
||||
from sm_api.openstack.common.gettextutils import _ # noqa
|
||||
|
||||
|
||||
class DBError(Exception):
|
||||
"""Wraps an implementation specific exception."""
|
||||
def __init__(self, inner_exception=None):
|
||||
self.inner_exception = inner_exception
|
||||
super(DBError, self).__init__(str(inner_exception))
|
||||
|
||||
|
||||
class DBDuplicateEntry(DBError):
|
||||
"""Wraps an implementation specific exception."""
|
||||
def __init__(self, columns=[], inner_exception=None):
|
||||
self.columns = columns
|
||||
super(DBDuplicateEntry, self).__init__(inner_exception)
|
||||
|
||||
|
||||
class DBDeadlock(DBError):
|
||||
def __init__(self, inner_exception=None):
|
||||
super(DBDeadlock, self).__init__(inner_exception)
|
||||
|
||||
|
||||
class DBInvalidUnicodeParameter(Exception):
|
||||
message = _("Invalid Parameter: "
|
||||
"Unicode is not supported by the current database.")
|
||||
|
||||
|
||||
class DbMigrationError(DBError):
|
||||
"""Wraps migration specific exception."""
|
||||
def __init__(self, message=None):
|
||||
super(DbMigrationError, self).__init__(str(message))
|
||||
|
||||
|
||||
class DBConnectionError(DBError):
|
||||
"""Wraps connection specific exception."""
|
||||
pass
|
@@ -0,0 +1,20 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2012 Cloudscaling Group, Inc
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
@@ -0,0 +1,109 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2011 Piston Cloud Computing, Inc.
|
||||
# Copyright 2012 Cloudscaling Group, Inc.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
"""
|
||||
SQLAlchemy models.
|
||||
"""
|
||||
|
||||
from sqlalchemy import Column, Integer
|
||||
from sqlalchemy import DateTime
|
||||
from sqlalchemy.orm import object_mapper
|
||||
|
||||
from sm_api.openstack.common.db.sqlalchemy.session import get_session
|
||||
from sm_api.openstack.common import timeutils
|
||||
|
||||
|
||||
class ModelBase(object):
|
||||
"""Base class for models."""
|
||||
__table_initialized__ = False
|
||||
|
||||
    def save(self, session=None):
        """Save this object."""
        if not session:
            session = get_session()
        # NOTE(boris-42): This part of code should look like:
        #                   session.add(self)
        #                   session.flush()
        #                 But there is a bug in sqlalchemy and eventlet that
        #                 raises NoneType exception if there is no running
        #                 transaction and rollback is called. As long as
        #                 sqlalchemy has this bug we have to create transaction
        #                 explicitly.
        with session.begin(subtransactions=True):
            session.add(self)
            session.flush()
|
||||
|
||||
def __setitem__(self, key, value):
|
||||
setattr(self, key, value)
|
||||
|
||||
def __getitem__(self, key):
|
||||
return getattr(self, key)
|
||||
|
||||
def get(self, key, default=None):
|
||||
return getattr(self, key, default)
|
||||
|
||||
def __iter__(self):
|
||||
columns = dict(object_mapper(self).columns).keys()
|
||||
# NOTE(russellb): Allow models to specify other keys that can be looked
|
||||
# up, beyond the actual db columns. An example would be the 'name'
|
||||
# property for an Instance.
|
||||
if hasattr(self, '_extra_keys'):
|
||||
columns.extend(self._extra_keys())
|
||||
self._i = iter(columns)
|
||||
return self
|
||||
|
||||
def next(self):
|
||||
n = self._i.next()
|
||||
return n, getattr(self, n)
|
||||
|
||||
def update(self, values):
|
||||
"""Make the model object behave like a dict."""
|
||||
for k, v in values.iteritems():
|
||||
setattr(self, k, v)
|
||||
|
||||
def iteritems(self):
|
||||
"""Make the model object behave like a dict.
|
||||
|
||||
Includes attributes from joins."""
|
||||
local = dict(self)
|
||||
joined = dict([(k, v) for k, v in self.__dict__.iteritems()
|
||||
if not k[0] == '_'])
|
||||
local.update(joined)
|
||||
return local.iteritems()
|
||||
|
||||
|
||||
class TimestampMixin(object):
|
||||
created_at = Column(DateTime, default=timeutils.utcnow)
|
||||
updated_at = Column(DateTime, onupdate=timeutils.utcnow)
|
||||
|
||||
|
||||
class SoftDeleteMixin(object):
|
||||
deleted_at = Column(DateTime)
|
||||
deleted = Column(Integer, default=0)
|
||||
|
||||
def soft_delete(self, session=None):
|
||||
"""Mark this object as deleted."""
|
||||
self.deleted = self.id
|
||||
self.deleted_at = timeutils.utcnow()
|
||||
self.save(session=session)
|
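A minimal sketch of a declarative model combining the mixins above; the table and column names are illustrative and not taken from this change:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    from sm_api.openstack.common.db.sqlalchemy import models

    Base = declarative_base()

    class ExampleService(models.SoftDeleteMixin, models.TimestampMixin,
                         models.ModelBase, Base):
        __tablename__ = 'example_services'
        id = Column(Integer, primary_key=True)   # used by soft_delete()
        name = Column(String(255))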
@@ -0,0 +1,700 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Session Handling for SQLAlchemy backend.
|
||||
|
||||
Initializing:
|
||||
|
||||
* Call set_defaults with at least the following kwargs:
|
||||
sql_connection, sqlite_db
|
||||
|
||||
Example:
|
||||
|
||||
session.set_defaults(
|
||||
sql_connection="sqlite:///var/lib/sm_api/sqlite.db",
|
||||
sqlite_db="/var/lib/sm_api/sqlite.db")
|
||||
|
||||
Recommended ways to use sessions within this framework:
|
||||
|
||||
* Don't use them explicitly; this is like running with AUTOCOMMIT=1.
|
||||
model_query() will implicitly use a session when called without one
|
||||
supplied. This is the ideal situation because it will allow queries
|
||||
to be automatically retried if the database connection is interrupted.
|
||||
|
||||
Note: Automatic retry will be enabled in a future patch.
|
||||
|
||||
It is generally fine to issue several queries in a row like this. Even though
|
||||
they may be run in separate transactions and/or separate sessions, each one
|
||||
will see the data from the prior calls. If needed, undo- or rollback-like
|
||||
functionality should be handled at a logical level. For an example, look at
|
||||
the code around quotas and reservation_rollback().
|
||||
|
||||
Examples:
|
||||
|
||||
def get_foo(context, foo):
|
||||
return model_query(context, models.Foo).\
|
||||
filter_by(foo=foo).\
|
||||
first()
|
||||
|
||||
def update_foo(context, id, newfoo):
|
||||
model_query(context, models.Foo).\
|
||||
filter_by(id=id).\
|
||||
update({'foo': newfoo})
|
||||
|
||||
def create_foo(context, values):
|
||||
foo_ref = models.Foo()
|
||||
foo_ref.update(values)
|
||||
foo_ref.save()
|
||||
return foo_ref
|
||||
|
||||
|
||||
* Within the scope of a single method, keeping all the reads and writes within
|
||||
the context managed by a single session. In this way, the session's __exit__
|
||||
handler will take care of calling flush() and commit() for you.
|
||||
If using this approach, you should not explicitly call flush() or commit().
|
||||
Any error within the context of the session will cause the session to emit
|
||||
a ROLLBACK. If the connection is dropped before this is possible, the
|
||||
database will implicitly rollback the transaction.
|
||||
|
||||
Note: statements in the session scope will not be automatically retried.
|
||||
|
||||
If you create models within the session, they need to be added, but you
|
||||
do not need to call model.save()
|
||||
|
||||
def create_many_foo(context, foos):
|
||||
session = get_session()
|
||||
with session.begin():
|
||||
for foo in foos:
|
||||
foo_ref = models.Foo()
|
||||
foo_ref.update(foo)
|
||||
session.add(foo_ref)
|
||||
|
||||
def update_bar(context, foo_id, newbar):
|
||||
session = get_session()
|
||||
with session.begin():
|
||||
foo_ref = model_query(context, models.Foo, session).\
|
||||
filter_by(id=foo_id).\
|
||||
first()
|
||||
model_query(context, models.Bar, session).\
|
||||
filter_by(id=foo_ref['bar_id']).\
|
||||
update({'bar': newbar})
|
||||
|
||||
Note: update_bar is a trivially simple example of using "with session.begin".
|
||||
Whereas create_many_foo is a good example of when a transaction is needed,
|
||||
it is always best to use as few queries as possible. The two queries in
|
||||
update_bar can be better expressed using a single query which avoids
|
||||
the need for an explicit transaction. It can be expressed like so:
|
||||
|
||||
def update_bar(context, foo_id, newbar):
|
||||
subq = model_query(context, models.Foo.id).\
|
||||
filter_by(id=foo_id).\
|
||||
limit(1).\
|
||||
subquery()
|
||||
model_query(context, models.Bar).\
|
||||
filter_by(id=subq.as_scalar()).\
|
||||
update({'bar': newbar})
|
||||
|
||||
For reference, this emits approximately the following SQL statement:
|
||||
|
||||
UPDATE bar SET bar = ${newbar}
|
||||
WHERE id=(SELECT bar_id FROM foo WHERE id = ${foo_id} LIMIT 1);
|
||||
|
||||
* Passing an active session between methods. Sessions should only be passed
|
||||
to private methods. The private method must use a subtransaction; otherwise
|
||||
SQLAlchemy will throw an error when you call session.begin() on an existing
|
||||
transaction. Public methods should not accept a session parameter and should
|
||||
not be involved in sessions within the caller's scope.
|
||||
|
||||
Note that this incurs more overhead in SQLAlchemy than the above means
|
||||
due to nesting transactions, and it is not possible to implicitly retry
|
||||
failed database operations when using this approach.
|
||||
|
||||
This also makes code somewhat more difficult to read and debug, because a
|
||||
single database transaction spans more than one method. Error handling
|
||||
becomes less clear in this situation. When this is needed for code clarity,
|
||||
it should be clearly documented.
|
||||
|
||||
def myfunc(foo):
|
||||
session = get_session()
|
||||
with session.begin():
|
||||
# do some database things
|
||||
bar = _private_func(foo, session)
|
||||
return bar
|
||||
|
||||
def _private_func(foo, session=None):
|
||||
if not session:
|
||||
session = get_session()
|
||||
with session.begin(subtransaction=True):
|
||||
# do some other database things
|
||||
return bar
|
||||
|
||||
|
||||
There are some things which it is best to avoid:
|
||||
|
||||
* Don't keep a transaction open any longer than necessary.
|
||||
|
||||
This means that your "with session.begin()" block should be as short
|
||||
as possible, while still containing all the related calls for that
|
||||
transaction.
|
||||
|
||||
* Avoid "with_lockmode('UPDATE')" when possible.
|
||||
|
||||
In MySQL/InnoDB, when a "SELECT ... FOR UPDATE" query does not match
|
||||
any rows, it will take a gap-lock. This is a form of write-lock on the
|
||||
"gap" where no rows exist, and prevents any other writes to that space.
|
||||
This can effectively prevent any INSERT into a table by locking the gap
|
||||
at the end of the index. Similar problems will occur if the SELECT FOR UPDATE
|
||||
has an overly broad WHERE clause, or doesn't properly use an index.
|
||||
|
||||
One idea proposed at ODS Fall '12 was to use a normal SELECT to test the
|
||||
number of rows matching a query, and if only one row is returned,
|
||||
then issue the SELECT FOR UPDATE.
|
||||
|
||||
The better long-term solution is to use INSERT .. ON DUPLICATE KEY UPDATE.
|
||||
However, this can not be done until the "deleted" columns are removed and
|
||||
proper UNIQUE constraints are added to the tables.
|
||||
|
||||
|
||||
Enabling soft deletes:
|
||||
|
||||
* To use/enable soft-deletes, the SoftDeleteMixin must be added
|
||||
to your model class. For example:
|
||||
|
||||
class NovaBase(models.SoftDeleteMixin, models.ModelBase):
|
||||
pass
|
||||
|
||||
|
||||
Efficient use of soft deletes:
|
||||
|
||||
* There are two possible ways to mark a record as deleted:
|
||||
model.soft_delete() and query.soft_delete().
|
||||
|
||||
The model.soft_delete() method works with a single, already fetched entry.
|
||||
query.soft_delete() makes only one db request for all entries that correspond
|
||||
to query.
|
||||
|
||||
* In almost all cases you should use query.soft_delete(). Some examples:
|
||||
|
||||
def soft_delete_bar():
|
||||
count = model_query(BarModel).find(some_condition).soft_delete()
|
||||
if count == 0:
|
||||
raise Exception("0 entries were soft deleted")
|
||||
|
||||
def complex_soft_delete_with_synchronization_bar(session=None):
|
||||
if session is None:
|
||||
session = get_session()
|
||||
with session.begin(subtransactions=True):
|
||||
count = model_query(BarModel).\
|
||||
find(some_condition).\
|
||||
soft_delete(synchronize_session=True)
|
||||
# Here synchronize_session is required, because we
|
||||
# don't know what is going on in outer session.
|
||||
if count == 0:
|
||||
raise Exception("0 entries were soft deleted")
|
||||
|
||||
* There is only one situation where model.soft_delete() is appropriate: when
|
||||
you fetch a single record, work with it, and mark it as deleted in the same
|
||||
transaction.
|
||||
|
||||
def soft_delete_bar_model():
|
||||
session = get_session()
|
||||
with session.begin():
|
||||
bar_ref = model_query(BarModel).find(some_condition).first()
|
||||
# Work with bar_ref
|
||||
bar_ref.soft_delete(session=session)
|
||||
|
||||
However, if you need to work with all entries that correspond to the query
and then soft delete them, you should use the query.soft_delete() method:
|
||||
|
||||
def soft_delete_multi_models():
|
||||
session = get_session()
|
||||
with session.begin():
|
||||
query = model_query(BarModel, session=session).\
|
||||
find(some_condition)
|
||||
model_refs = query.all()
|
||||
# Work with model_refs
|
||||
query.soft_delete(synchronize_session=False)
|
||||
# synchronize_session=False should be set if there is no outer
|
||||
# session and these entries are not used after this.
|
||||
|
||||
When working with many rows, it is very important to use query.soft_delete,
|
||||
which issues a single query. Using model.soft_delete(), as in the following
|
||||
example, is very inefficient.
|
||||
|
||||
for bar_ref in bar_refs:
|
||||
bar_ref.soft_delete(session=session)
|
||||
# This will produce count(bar_refs) db requests.
|
||||
|
||||
"""
|
||||
|
||||
import os.path
|
||||
import re
|
||||
import time
|
||||
|
||||
from eventlet import greenthread
|
||||
from oslo_config import cfg
|
||||
import six
|
||||
from sqlalchemy import exc as sqla_exc
|
||||
import sqlalchemy.interfaces
|
||||
from sqlalchemy.interfaces import PoolListener
|
||||
import sqlalchemy.orm
|
||||
from sqlalchemy.pool import NullPool, StaticPool
|
||||
from sqlalchemy.sql.expression import literal_column
|
||||
|
||||
from sm_api.openstack.common.db import exception
|
||||
from sm_api.openstack.common import log as logging
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
from sm_api.openstack.common import timeutils
|
||||
|
||||
DEFAULT = 'DEFAULT'
|
||||
|
||||
sqlite_db_opts = [
|
||||
cfg.StrOpt('sqlite_db',
|
||||
default='sm.db',
|
||||
help='the filename to use with sqlite'),
|
||||
cfg.BoolOpt('sqlite_synchronous',
|
||||
default=True,
|
||||
help='If true, use synchronous mode for sqlite'),
|
||||
]
|
||||
|
||||
database_opts = [
|
||||
cfg.StrOpt('connection',
|
||||
default='sqlite:////var/run/sm/sm.db',
|
||||
help='The SQLAlchemy connection string used to connect to the '
|
||||
'database',
|
||||
deprecated_name='sql_connection',
|
||||
deprecated_group=DEFAULT,
|
||||
secret=True),
|
||||
cfg.IntOpt('idle_timeout',
|
||||
default=3600,
|
||||
deprecated_name='sql_idle_timeout',
|
||||
deprecated_group=DEFAULT,
|
||||
help='timeout before idle sql connections are reaped'),
|
||||
cfg.IntOpt('min_pool_size',
|
||||
default=1,
|
||||
deprecated_name='sql_min_pool_size',
|
||||
deprecated_group=DEFAULT,
|
||||
help='Minimum number of SQL connections to keep open in a '
|
||||
'pool'),
|
||||
cfg.IntOpt('max_pool_size',
|
||||
default=5,
|
||||
deprecated_name='sql_max_pool_size',
|
||||
deprecated_group=DEFAULT,
|
||||
help='Maximum number of SQL connections to keep open in a '
|
||||
'pool'),
|
||||
cfg.IntOpt('max_retries',
|
||||
default=10,
|
||||
deprecated_name='sql_max_retries',
|
||||
deprecated_group=DEFAULT,
|
||||
help='maximum db connection retries during startup. '
|
||||
'(setting -1 implies an infinite retry count)'),
|
||||
cfg.IntOpt('retry_interval',
|
||||
default=10,
|
||||
deprecated_name='sql_retry_interval',
|
||||
deprecated_group=DEFAULT,
|
||||
help='interval between retries of opening a sql connection'),
|
||||
cfg.IntOpt('max_overflow',
|
||||
default=None,
|
||||
deprecated_name='sql_max_overflow',
|
||||
deprecated_group=DEFAULT,
|
||||
help='If set, use this value for max_overflow with sqlalchemy'),
|
||||
cfg.IntOpt('connection_debug',
|
||||
default=0,
|
||||
deprecated_name='sql_connection_debug',
|
||||
deprecated_group=DEFAULT,
|
||||
help='Verbosity of SQL debugging information. 0=None, '
|
||||
'100=Everything'),
|
||||
cfg.BoolOpt('connection_trace',
|
||||
default=False,
|
||||
deprecated_name='sql_connection_trace',
|
||||
deprecated_group=DEFAULT,
|
||||
help='Add python stack traces to SQL as comment strings'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(sqlite_db_opts)
|
||||
CONF.register_opts(database_opts, 'database')
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
_ENGINE = None
|
||||
_MAKER = None
|
||||
|
||||
|
||||
def set_defaults(sql_connection, sqlite_db):
|
||||
"""Set defaults for configuration variables."""
|
||||
cfg.set_defaults(database_opts,
|
||||
connection=sql_connection)
|
||||
cfg.set_defaults(sqlite_db_opts,
|
||||
sqlite_db=sqlite_db)
|
||||
|
||||
|
||||
def cleanup():
|
||||
global _ENGINE, _MAKER
|
||||
|
||||
if _MAKER:
|
||||
_MAKER.close_all()
|
||||
_MAKER = None
|
||||
if _ENGINE:
|
||||
_ENGINE.dispose()
|
||||
_ENGINE = None
|
||||
|
||||
|
||||
class SqliteForeignKeysListener(PoolListener):
|
||||
"""
|
||||
Ensures that the foreign key constraints are enforced in SQLite.
|
||||
|
||||
The foreign key constraints are disabled by default in SQLite,
|
||||
so the foreign key constraints will be enabled here for every
|
||||
database connection
|
||||
"""
|
||||
def connect(self, dbapi_con, con_record):
|
||||
dbapi_con.execute('pragma foreign_keys=ON')
|
||||
|
||||
|
||||
def get_session(autocommit=True, expire_on_commit=False,
|
||||
sqlite_fk=False):
|
||||
"""Return a SQLAlchemy session."""
|
||||
global _MAKER
|
||||
|
||||
if _MAKER is None:
|
||||
engine = get_engine(sqlite_fk=sqlite_fk)
|
||||
_MAKER = get_maker(engine, autocommit, expire_on_commit)
|
||||
|
||||
session = _MAKER()
|
||||
return session
|
||||
|
||||
|
||||
# note(boris-42): In current versions of DB backends unique constraint
|
||||
# violation messages follow the structure:
|
||||
#
|
||||
# sqlite:
|
||||
# 1 column - (IntegrityError) column c1 is not unique
|
||||
# N columns - (IntegrityError) column c1, c2, ..., N are not unique
|
||||
#
|
||||
# postgres:
|
||||
# 1 column - (IntegrityError) duplicate key value violates unique
|
||||
# constraint "users_c1_key"
|
||||
# N columns - (IntegrityError) duplicate key value violates unique
|
||||
# constraint "name_of_our_constraint"
|
||||
#
|
||||
# mysql:
|
||||
# 1 column - (IntegrityError) (1062, "Duplicate entry 'value_of_c1' for key
|
||||
# 'c1'")
|
||||
# N columns - (IntegrityError) (1062, "Duplicate entry 'values joined
|
||||
# with -' for key 'name_of_our_constraint'")
|
||||
_DUP_KEY_RE_DB = {
|
||||
"sqlite": re.compile(r"^.*columns?([^)]+)(is|are)\s+not\s+unique$"),
|
||||
"postgresql": re.compile(r"^.*duplicate\s+key.*\"([^\"]+)\"\s*\n.*$"),
|
||||
"mysql": re.compile(r"^.*\(1062,.*'([^\']+)'\"\)$")
|
||||
}
|
||||
|
||||
|
||||
def _raise_if_duplicate_entry_error(integrity_error, engine_name):
|
||||
"""
|
||||
Raise a DBDuplicateEntry exception if the integrity
error wraps a unique constraint violation.
|
||||
"""
|
||||
|
||||
def get_columns_from_uniq_cons_or_name(columns):
|
||||
# note(boris-42): UniqueConstraint name convention: "uniq_c1_x_c2_x_c3"
|
||||
# means that columns c1, c2, c3 are in UniqueConstraint.
|
||||
uniqbase = "uniq_"
|
||||
if not columns.startswith(uniqbase):
|
||||
if engine_name == "postgresql":
|
||||
return [columns[columns.index("_") + 1:columns.rindex("_")]]
|
||||
return [columns]
|
||||
return columns[len(uniqbase):].split("_x_")
|
||||
|
||||
if engine_name not in ["mysql", "sqlite", "postgresql"]:
|
||||
return
|
||||
|
||||
m = _DUP_KEY_RE_DB[engine_name].match(integrity_error.message)
|
||||
if not m:
|
||||
return
|
||||
columns = m.group(1)
|
||||
|
||||
if engine_name == "sqlite":
|
||||
columns = columns.strip().split(", ")
|
||||
else:
|
||||
columns = get_columns_from_uniq_cons_or_name(columns)
|
||||
raise exception.DBDuplicateEntry(columns, integrity_error)
|
||||
|
||||
|
||||
# NOTE(comstud): In current versions of DB backends, Deadlock violation
|
||||
# messages follow the structure:
|
||||
#
|
||||
# mysql:
|
||||
# (OperationalError) (1213, 'Deadlock found when trying to get lock; try '
|
||||
# 'restarting transaction') <query_str> <query_args>
|
||||
_DEADLOCK_RE_DB = {
|
||||
"mysql": re.compile(r"^.*\(1213, 'Deadlock.*")
|
||||
}
|
||||
|
||||
|
||||
def _raise_if_deadlock_error(operational_error, engine_name):
|
||||
"""
|
||||
Raise DBDeadlock exception if OperationalError contains a Deadlock
|
||||
condition.
|
||||
"""
|
||||
re = _DEADLOCK_RE_DB.get(engine_name)
|
||||
if re is None:
|
||||
return
|
||||
m = re.match(operational_error.message)
|
||||
if not m:
|
||||
return
|
||||
raise exception.DBDeadlock(operational_error)
|
||||
|
||||
|
||||
def _wrap_db_error(f):
|
||||
def _wrap(*args, **kwargs):
|
||||
try:
|
||||
return f(*args, **kwargs)
|
||||
except UnicodeEncodeError:
|
||||
raise exception.DBInvalidUnicodeParameter()
|
||||
# note(boris-42): We should catch unique constraint violation and
|
||||
# wrap it by our own DBDuplicateEntry exception. Unique constraint
|
||||
# violation is wrapped by IntegrityError.
|
||||
except sqla_exc.OperationalError as e:
|
||||
_raise_if_deadlock_error(e, get_engine().name)
|
||||
# NOTE(comstud): A lot of code is checking for OperationalError
|
||||
# so let's not wrap it for now.
|
||||
raise
|
||||
except sqla_exc.IntegrityError as e:
|
||||
# note(boris-42): SqlAlchemy doesn't unify errors from different
|
||||
# DBs so we must do this. Also in some tables (for example
|
||||
# instance_types) there are more than one unique constraint. This
|
||||
# means we should get names of columns, which values violate
|
||||
# unique constraint, from error message.
|
||||
_raise_if_duplicate_entry_error(e, get_engine().name)
|
||||
raise exception.DBError(e)
|
||||
except Exception as e:
|
||||
LOG.exception(_('DB exception wrapped.'))
|
||||
raise exception.DBError(e)
|
||||
_wrap.func_name = f.func_name
|
||||
return _wrap
|
||||
|
||||
|
||||
def get_engine(sqlite_fk=False):
|
||||
"""Return a SQLAlchemy engine."""
|
||||
global _ENGINE
|
||||
if _ENGINE is None:
|
||||
_ENGINE = create_engine(CONF.database.connection,
|
||||
sqlite_fk=sqlite_fk)
|
||||
return _ENGINE
|
||||
|
||||
|
||||
def _synchronous_switch_listener(dbapi_conn, connection_rec):
|
||||
"""Switch sqlite connections to non-synchronous mode."""
|
||||
dbapi_conn.execute("PRAGMA synchronous = OFF")
|
||||
|
||||
|
||||
def _add_regexp_listener(dbapi_con, con_record):
|
||||
"""Add REGEXP function to sqlite connections."""
|
||||
|
||||
def regexp(expr, item):
|
||||
reg = re.compile(expr)
|
||||
return reg.search(six.text_type(item)) is not None
|
||||
dbapi_con.create_function('regexp', 2, regexp)
|
||||
|
||||
|
||||
def _greenthread_yield(dbapi_con, con_record):
|
||||
"""
|
||||
Ensure other greenthreads get a chance to execute by forcing a context
|
||||
switch. With common database backends (eg MySQLdb and sqlite), there is
|
||||
no implicit yield caused by network I/O since they are implemented by
|
||||
C libraries that eventlet cannot monkey patch.
|
||||
"""
|
||||
greenthread.sleep(0)
|
||||
|
||||
|
||||
def _ping_listener(dbapi_conn, connection_rec, connection_proxy):
|
||||
"""
|
||||
Ensures that MySQL connections checked out of the
|
||||
pool are alive.
|
||||
|
||||
Borrowed from:
|
||||
http://groups.google.com/group/sqlalchemy/msg/a4ce563d802c929f
|
||||
"""
|
||||
try:
|
||||
dbapi_conn.cursor().execute('select 1')
|
||||
except dbapi_conn.OperationalError as ex:
|
||||
if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
|
||||
LOG.warn(_('Got mysql server has gone away: %s'), ex)
|
||||
raise sqla_exc.DisconnectionError("Database server went away")
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
def _is_db_connection_error(args):
|
||||
"""Return True if error in connecting to db."""
|
||||
# NOTE(adam_g): This is currently MySQL specific and needs to be extended
|
||||
# to support Postgres and others.
|
||||
conn_err_codes = ('2002', '2003', '2006')
|
||||
for err_code in conn_err_codes:
|
||||
if args.find(err_code) != -1:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def create_engine(sql_connection, sqlite_fk=False):
|
||||
"""Return a new SQLAlchemy engine."""
|
||||
connection_dict = sqlalchemy.engine.url.make_url(sql_connection)
|
||||
|
||||
engine_args = {
|
||||
"pool_recycle": CONF.database.idle_timeout,
|
||||
"echo": False,
|
||||
'convert_unicode': True,
|
||||
}
|
||||
|
||||
# Map our SQL debug level to SQLAlchemy's options
|
||||
if CONF.database.connection_debug >= 100:
|
||||
engine_args['echo'] = 'debug'
|
||||
elif CONF.database.connection_debug >= 50:
|
||||
engine_args['echo'] = True
|
||||
|
||||
if "sqlite" in connection_dict.drivername:
|
||||
if sqlite_fk:
|
||||
engine_args["listeners"] = [SqliteForeignKeysListener()]
|
||||
engine_args["poolclass"] = NullPool
|
||||
|
||||
if CONF.database.connection == "sqlite://":
|
||||
engine_args["poolclass"] = StaticPool
|
||||
engine_args["connect_args"] = {'check_same_thread': False}
|
||||
else:
|
||||
engine_args['pool_size'] = CONF.database.max_pool_size
|
||||
if CONF.database.max_overflow is not None:
|
||||
engine_args['max_overflow'] = CONF.database.max_overflow
|
||||
|
||||
engine = sqlalchemy.create_engine(sql_connection, **engine_args)
|
||||
|
||||
sqlalchemy.event.listen(engine, 'checkin', _greenthread_yield)
|
||||
|
||||
if 'mysql' in connection_dict.drivername:
|
||||
sqlalchemy.event.listen(engine, 'checkout', _ping_listener)
|
||||
elif 'sqlite' in connection_dict.drivername:
|
||||
if not CONF.sqlite_synchronous:
|
||||
sqlalchemy.event.listen(engine, 'connect',
|
||||
_synchronous_switch_listener)
|
||||
sqlalchemy.event.listen(engine, 'connect', _add_regexp_listener)
|
||||
|
||||
if (CONF.database.connection_trace and
|
||||
engine.dialect.dbapi.__name__ == 'MySQLdb'):
|
||||
_patch_mysqldb_with_stacktrace_comments()
|
||||
|
||||
try:
|
||||
engine.connect()
|
||||
except sqla_exc.OperationalError as e:
|
||||
if not _is_db_connection_error(e.args[0]):
|
||||
raise
|
||||
|
||||
remaining = CONF.database.max_retries
|
||||
if remaining == -1:
|
||||
remaining = 'infinite'
|
||||
while True:
|
||||
msg = _('SQL connection failed. %s attempts left.')
|
||||
LOG.warn(msg % remaining)
|
||||
if remaining != 'infinite':
|
||||
remaining -= 1
|
||||
time.sleep(CONF.database.retry_interval)
|
||||
try:
|
||||
engine.connect()
|
||||
break
|
||||
except sqla_exc.OperationalError as e:
|
||||
if (remaining != 'infinite' and remaining == 0) or \
|
||||
not _is_db_connection_error(e.args[0]):
|
||||
raise
|
||||
return engine
|
||||
|
||||
|
||||
class Query(sqlalchemy.orm.query.Query):
|
||||
"""Subclass of sqlalchemy.query with soft_delete() method."""
|
||||
def soft_delete(self, synchronize_session='evaluate'):
|
||||
return self.update({'deleted': literal_column('id'),
|
||||
'updated_at': literal_column('updated_at'),
|
||||
'deleted_at': timeutils.utcnow()},
|
||||
synchronize_session=synchronize_session)
|
||||
|
||||
|
||||
class Session(sqlalchemy.orm.session.Session):
|
||||
"""Custom Session class to avoid SqlAlchemy Session monkey patching."""
|
||||
@_wrap_db_error
|
||||
def query(self, *args, **kwargs):
|
||||
return super(Session, self).query(*args, **kwargs)
|
||||
|
||||
@_wrap_db_error
|
||||
def flush(self, *args, **kwargs):
|
||||
return super(Session, self).flush(*args, **kwargs)
|
||||
|
||||
@_wrap_db_error
|
||||
def execute(self, *args, **kwargs):
|
||||
return super(Session, self).execute(*args, **kwargs)
|
||||
|
||||
|
||||
def get_maker(engine, autocommit=True, expire_on_commit=False):
|
||||
"""Return a SQLAlchemy sessionmaker using the given engine."""
|
||||
return sqlalchemy.orm.sessionmaker(bind=engine,
|
||||
class_=Session,
|
||||
autocommit=autocommit,
|
||||
expire_on_commit=expire_on_commit,
|
||||
query_cls=Query)
|
||||
|
||||
|
||||
def _patch_mysqldb_with_stacktrace_comments():
|
||||
"""Adds current stack trace as a comment in queries by patching
|
||||
MySQLdb.cursors.BaseCursor._do_query.
|
||||
"""
|
||||
import MySQLdb.cursors
|
||||
import traceback
|
||||
|
||||
old_mysql_do_query = MySQLdb.cursors.BaseCursor._do_query
|
||||
|
||||
def _do_query(self, q):
|
||||
stack = ''
|
||||
for file, line, method, function in traceback.extract_stack():
|
||||
# exclude various common things from trace
|
||||
if file.endswith('session.py') and method == '_do_query':
|
||||
continue
|
||||
if file.endswith('api.py') and method == 'wrapper':
|
||||
continue
|
||||
if file.endswith('utils.py') and method == '_inner':
|
||||
continue
|
||||
if file.endswith('exception.py') and method == '_wrap':
|
||||
continue
|
||||
# db/api is just a wrapper around db/sqlalchemy/api
|
||||
if file.endswith('db/api.py'):
|
||||
continue
|
||||
# only trace inside sm_api
|
||||
index = file.rfind('sm_api')
|
||||
if index == -1:
|
||||
continue
|
||||
stack += "File:%s:%s Method:%s() Line:%s | " \
|
||||
% (file[index:], line, method, function)
|
||||
|
||||
# strip trailing " | " from stack
|
||||
if stack:
|
||||
stack = stack[:-3]
|
||||
qq = "%s /* %s */" % (q, stack)
|
||||
else:
|
||||
qq = q
|
||||
old_mysql_do_query(self, qq)
|
||||
|
||||
setattr(MySQLdb.cursors.BaseCursor, '_do_query', _do_query)
|
@@ -0,0 +1,150 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2010-2011 OpenStack Foundation.
|
||||
# Copyright 2012 Justin Santa Barbara
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Implementation of paginate query."""
|
||||
|
||||
import sqlalchemy
|
||||
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
from sm_api.openstack.common import log as logging
|
||||
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class InvalidSortKey(Exception):
|
||||
message = _("Sort key supplied was not valid.")
|
||||
|
||||
|
||||
# copy from glance/db/sqlalchemy/api.py
|
||||
def paginate_query(query, model, limit, sort_keys, marker=None,
|
||||
sort_dir=None, sort_dirs=None):
|
||||
"""Returns a query with sorting / pagination criteria added.
|
||||
|
||||
Pagination works by requiring a unique sort_key, specified by sort_keys.
|
||||
(If sort_keys is not unique, then we risk looping through values.)
|
||||
We use the last row in the previous page as the 'marker' for pagination.
|
||||
So we must return values that follow the passed marker in the order.
|
||||
With a single-valued sort_key, this would be easy: sort_key > X.
|
||||
With a compound-values sort_key, (k1, k2, k3) we must do this to repeat
|
||||
the lexicographical ordering:
|
||||
(k1 > X1) or (k1 == X1 && k2 > X2) or (k1 == X1 && k2 == X2 && k3 > X3)
|
||||
|
||||
We also have to cope with different sort_directions.
|
||||
|
||||
Typically, the id of the last row is used as the client-facing pagination
|
||||
marker, then the actual marker object must be fetched from the db and
|
||||
passed in to us as marker.
|
||||
|
||||
:param query: the query object to which we should add paging/sorting
|
||||
:param model: the ORM model class
|
||||
:param limit: maximum number of items to return
|
||||
:param sort_keys: array of attributes by which results should be sorted
|
||||
:param marker: the last item of the previous page; we return the next
|
||||
results after this value.
|
||||
:param sort_dir: direction in which results should be sorted (asc, desc)
|
||||
:param sort_dirs: per-column array of sort_dirs, corresponding to sort_keys
|
||||
|
||||
:rtype: sqlalchemy.orm.query.Query
|
||||
:return: The query with sorting/pagination added.
|
||||
"""
|
||||
|
||||
if 'id' not in sort_keys:
|
||||
# TODO(justinsb): If this ever gives a false-positive, check
|
||||
# the actual primary key, rather than assuming it is 'id'
|
||||
LOG.warn(_('id not in sort_keys; is sort_keys unique?'))
|
||||
|
||||
assert(not (sort_dir and sort_dirs))
|
||||
|
||||
# Default the sort direction to ascending
|
||||
if sort_dirs is None and sort_dir is None:
|
||||
sort_dir = 'asc'
|
||||
|
||||
# Ensure a per-column sort direction
|
||||
if sort_dirs is None:
|
||||
sort_dirs = [sort_dir for _sort_key in sort_keys]
|
||||
|
||||
assert(len(sort_dirs) == len(sort_keys))
|
||||
|
||||
# Add sorting
|
||||
for current_sort_key, current_sort_dir in zip(sort_keys, sort_dirs):
|
||||
sort_dir_func = {
|
||||
'asc': sqlalchemy.asc,
|
||||
'desc': sqlalchemy.desc,
|
||||
}[current_sort_dir]
|
||||
|
||||
if (current_sort_key == 'id') and ('name' in sort_keys):
|
||||
continue  # don't double sort when 'name' is also a sort key
|
||||
|
||||
try:
|
||||
sort_key_attr = getattr(model, current_sort_key)
|
||||
except AttributeError:
|
||||
raise InvalidSortKey()
|
||||
query = query.order_by(sort_dir_func(sort_key_attr))
|
||||
|
||||
# Add pagination
|
||||
if marker is not None:
|
||||
marker_values = []
|
||||
for sort_key in sort_keys:
|
||||
v = getattr(marker, sort_key)
|
||||
marker_values.append(v)
|
||||
|
||||
# Build up an array of sort criteria as in the docstring
|
||||
criteria_list = []
|
||||
for i in range(0, len(sort_keys)):
|
||||
crit_attrs = []
|
||||
for j in range(0, i):
|
||||
model_attr = getattr(model, sort_keys[j])
|
||||
crit_attrs.append((model_attr == marker_values[j]))
|
||||
|
||||
model_attr = getattr(model, sort_keys[i])
|
||||
if sort_dirs[i] == 'desc':
|
||||
crit_attrs.append((model_attr < marker_values[i]))
|
||||
elif sort_dirs[i] == 'asc':
|
||||
crit_attrs.append((model_attr > marker_values[i]))
|
||||
else:
|
||||
raise ValueError(_("Unknown sort direction, "
|
||||
"must be 'desc' or 'asc'"))
|
||||
|
||||
criteria = sqlalchemy.sql.and_(*crit_attrs)
|
||||
criteria_list.append(criteria)
|
||||
|
||||
f = sqlalchemy.sql.or_(*criteria_list)
|
||||
query = query.filter(f)
|
||||
|
||||
if limit is not None:
|
||||
query = query.limit(limit)
|
||||
|
||||
return query
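
A rough usage sketch follows; the ServiceGroup model and the SQLAlchemy session are hypothetical stand-ins, not part of this module.

# Hypothetical caller: page through service groups 50 rows at a time,
# ordered by (name, id), resuming from a client-supplied marker id.
def list_service_groups(session, limit=50, marker_id=None):
    query = session.query(ServiceGroup)
    marker = None
    if marker_id is not None:
        # The client-facing marker is an id; fetch the actual row first,
        # as the docstring above requires.
        marker = session.query(ServiceGroup).get(marker_id)
    return paginate_query(query, ServiceGroup, limit,
                          sort_keys=['name', 'id'],
                          marker=marker, sort_dir='asc').all()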
|
||||
|
||||
|
||||
def get_table(engine, name):
|
||||
"""Returns an sqlalchemy table dynamically from db.
|
||||
|
||||
Needed because the models don't work for us in migrations
|
||||
as models will be far out of sync with the current data.
|
||||
"""
|
||||
metadata = sqlalchemy.MetaData()
|
||||
metadata.bind = engine
|
||||
return sqlalchemy.Table(name, metadata, autoload=True)
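
A minimal sketch of get_table() during a migration step; the engine URL and table name are illustrative assumptions.

import sqlalchemy

engine = sqlalchemy.create_engine('sqlite:////tmp/sm.db')
services = get_table(engine, 'services')   # reflected from the db, not a model
print(services.columns.keys())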
|
@ -0,0 +1,93 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2012 OpenStack Foundation.
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import gc
|
||||
import pprint
|
||||
import sys
|
||||
import traceback
|
||||
|
||||
import eventlet
|
||||
import eventlet.backdoor
|
||||
import greenlet
|
||||
from oslo_config import cfg
|
||||
|
||||
eventlet_backdoor_opts = [
|
||||
cfg.IntOpt('backdoor_port',
|
||||
default=None,
|
||||
help='port for eventlet backdoor to listen')
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(eventlet_backdoor_opts)
|
||||
|
||||
|
||||
def _dont_use_this():
|
||||
print("Don't use this, just disconnect instead")
|
||||
|
||||
|
||||
def _find_objects(t):
|
||||
return filter(lambda o: isinstance(o, t), gc.get_objects())
|
||||
|
||||
|
||||
def _print_greenthreads():
|
||||
for i, gt in enumerate(_find_objects(greenlet.greenlet)):
|
||||
print(i, gt)
|
||||
traceback.print_stack(gt.gr_frame)
|
||||
print()
|
||||
|
||||
|
||||
def _print_nativethreads():
|
||||
for threadId, stack in sys._current_frames().items():
|
||||
print(threadId)
|
||||
traceback.print_stack(stack)
|
||||
print()
|
||||
|
||||
|
||||
def initialize_if_enabled():
|
||||
backdoor_locals = {
|
||||
'exit': _dont_use_this, # So we don't exit the entire process
|
||||
'quit': _dont_use_this, # So we don't exit the entire process
|
||||
'fo': _find_objects,
|
||||
'pgt': _print_greenthreads,
|
||||
'pnt': _print_nativethreads,
|
||||
}
|
||||
|
||||
if CONF.backdoor_port is None:
|
||||
return None
|
||||
|
||||
# NOTE(johannes): The standard sys.displayhook will print the value of
|
||||
# the last expression and set it to __builtin__._, which overwrites
|
||||
# the __builtin__._ that gettext sets. Let's switch to using pprint
|
||||
# since it won't interact poorly with gettext, and it's easier to
|
||||
# read the output too.
|
||||
def displayhook(val):
|
||||
if val is not None:
|
||||
pprint.pprint(val)
|
||||
sys.displayhook = displayhook
|
||||
|
||||
sock = eventlet.listen(('localhost', CONF.backdoor_port))
|
||||
port = sock.getsockname()[1]
|
||||
eventlet.spawn_n(eventlet.backdoor.backdoor_server, sock,
|
||||
locals=backdoor_locals)
|
||||
return port
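
A sketch of how the backdoor might be reached once a service has called initialize_if_enabled(); the port value and session below are illustrative.

# In the service's config file (illustrative value):
#   backdoor_port = 9999
#
# Then from a shell on the same host:
#   $ telnet localhost 9999
#   >>> pgt()          # dump all greenthread stacks
#   >>> pnt()          # dump all native thread stacks
#   >>> fo(dict)[:3]   # first few dict objects reported by gc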
|
55
service-mgmt-api/sm-api/sm_api/openstack/common/excutils.py
Normal file
@ -0,0 +1,55 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# Copyright 2012, Red Hat, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""
|
||||
Exception related utilities.
|
||||
"""
|
||||
|
||||
import contextlib
|
||||
import logging
|
||||
import sys
|
||||
import traceback
|
||||
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def save_and_reraise_exception():
|
||||
"""Save current exception, run some code and then re-raise.
|
||||
|
||||
In some cases the exception context can be cleared, resulting in None
|
||||
being re-raised after an exception handler is run. This
|
||||
can happen when eventlet switches greenthreads or when running an
|
||||
exception handler in which code raises and catches an exception. In both
|
||||
cases the exception context will be cleared.
|
||||
|
||||
To work around this, we save the exception state, run handler code, and
|
||||
then re-raise the original exception. If another exception occurs, the
|
||||
saved exception is logged and the new exception is re-raised.
|
||||
"""
|
||||
type_, value, tb = sys.exc_info()
|
||||
try:
|
||||
yield
|
||||
except Exception:
|
||||
logging.error(_('Original exception being dropped: %s'),
|
||||
traceback.format_exception(type_, value, tb))
|
||||
raise
|
||||
raise type_, value, tb
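
A hedged usage sketch; 'resource' and its methods are hypothetical, but the pattern is the one the context manager above supports.

def provision(resource):
    try:
        resource.activate()
    except Exception:
        with save_and_reraise_exception():
            # If rollback() itself raises, the original activate() failure
            # is logged and the rollback error propagates; otherwise the
            # original exception is re-raised after cleanup.
            resource.rollback()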
|
114
service-mgmt-api/sm-api/sm_api/openstack/common/fileutils.py
Normal file
@ -0,0 +1,114 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
|
||||
import contextlib
|
||||
import errno
|
||||
import os
|
||||
|
||||
from sm_api.openstack.common import excutils
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
from sm_api.openstack.common import log as logging
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
_FILE_CACHE = {}
|
||||
|
||||
|
||||
def ensure_tree(path):
|
||||
"""Create a directory (and any ancestor directories required)
|
||||
|
||||
:param path: Directory to create
|
||||
"""
|
||||
try:
|
||||
os.makedirs(path)
|
||||
except OSError as exc:
|
||||
if exc.errno == errno.EEXIST:
|
||||
if not os.path.isdir(path):
|
||||
raise
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
def read_cached_file(filename, force_reload=False):
|
||||
"""Read from a file if it has been modified.
|
||||
|
||||
:param force_reload: Whether to reload the file.
|
||||
:returns: A tuple of (reloaded, data); reloaded is a boolean specifying
|
||||
whether the file was re-read from disk.
|
||||
"""
|
||||
global _FILE_CACHE
|
||||
|
||||
if force_reload and filename in _FILE_CACHE:
|
||||
del _FILE_CACHE[filename]
|
||||
|
||||
reloaded = False
|
||||
mtime = os.path.getmtime(filename)
|
||||
cache_info = _FILE_CACHE.setdefault(filename, {})
|
||||
|
||||
if not cache_info or mtime > cache_info.get('mtime', 0):
|
||||
LOG.debug(_("Reloading cached file %s") % filename)
|
||||
with open(filename) as fap:
|
||||
cache_info['data'] = fap.read()
|
||||
cache_info['mtime'] = mtime
|
||||
reloaded = True
|
||||
return (reloaded, cache_info['data'])
|
||||
|
||||
|
||||
def delete_if_exists(path):
|
||||
"""Delete a file, but ignore file not found error.
|
||||
|
||||
:param path: File to delete
|
||||
"""
|
||||
|
||||
try:
|
||||
os.unlink(path)
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
return
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def remove_path_on_error(path):
|
||||
"""Protect code that wants to operate on PATH atomically.
|
||||
Any exception will cause PATH to be removed.
|
||||
|
||||
:param path: File to work with
|
||||
"""
|
||||
try:
|
||||
yield
|
||||
except Exception:
|
||||
with excutils.save_and_reraise_exception():
|
||||
delete_if_exists(path)
|
||||
|
||||
|
||||
def file_open(*args, **kwargs):
|
||||
"""Open file
|
||||
|
||||
see built-in file() documentation for more details
|
||||
|
||||
Note: The reason this is kept in a separate module is to easily
|
||||
be able to provide a stub module that doesn't alter system
|
||||
state at all (for unit tests)
|
||||
"""
|
||||
return file(*args, **kwargs)
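
A short sketch combining the helpers above; the path handling is illustrative.

def write_state(path, data):
    ensure_tree(os.path.dirname(path))
    with remove_path_on_error(path):
        # Any exception raised here removes the partially written file
        # before being re-raised.
        with file_open(path, 'w') as f:
            f.write(data)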
|
@ -0,0 +1,55 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2013 Hewlett-Packard Development Company, L.P.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import fixtures
|
||||
import mock
|
||||
|
||||
|
||||
class PatchObject(fixtures.Fixture):
|
||||
"""Deal with code around mock."""
|
||||
|
||||
def __init__(self, obj, attr, **kwargs):
|
||||
self.obj = obj
|
||||
self.attr = attr
|
||||
self.kwargs = kwargs
|
||||
|
||||
def setUp(self):
|
||||
super(PatchObject, self).setUp()
|
||||
_p = mock.patch.object(self.obj, self.attr, **self.kwargs)
|
||||
self.mock = _p.start()
|
||||
self.addCleanup(_p.stop)
|
||||
|
||||
|
||||
class Patch(fixtures.Fixture):
|
||||
|
||||
"""Deal with code around mock.patch."""
|
||||
|
||||
def __init__(self, obj, **kwargs):
|
||||
self.obj = obj
|
||||
self.kwargs = kwargs
|
||||
|
||||
def setUp(self):
|
||||
super(Patch, self).setUp()
|
||||
_p = mock.patch(self.obj, **self.kwargs)
|
||||
self.mock = _p.start()
|
||||
self.addCleanup(_p.stop)
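
A sketch of these fixtures in a testtools-style test case; the patch target 'sm_client.request' is hypothetical.

import testtools


class ClientTest(testtools.TestCase):
    def test_request_is_stubbed(self):
        fake = self.useFixture(Patch('sm_client.request')).mock
        fake.return_value = {'status': 'ok'}
        self.assertEqual({'status': 'ok'}, fake('/v1/services'))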
|
@ -0,0 +1,41 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2013 Hewlett-Packard Development Company, L.P.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
import fixtures
|
||||
import mox
|
||||
import stubout
|
||||
|
||||
|
||||
class MoxStubout(fixtures.Fixture):
|
||||
"""Deal with code around mox and stubout as a fixture."""
|
||||
|
||||
def setUp(self):
|
||||
super(MoxStubout, self).setUp()
|
||||
# emulate some of the mox stuff, we can't use the metaclass
|
||||
# because it screws with our generators
|
||||
self.mox = mox.Mox()
|
||||
self.stubs = stubout.StubOutForTesting()
|
||||
self.addCleanup(self.mox.UnsetStubs)
|
||||
self.addCleanup(self.stubs.UnsetAll)
|
||||
self.addCleanup(self.stubs.SmartUnsetAll)
|
||||
self.addCleanup(self.mox.VerifyAll)
|
@ -0,0 +1,54 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2012 Red Hat, Inc.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""
|
||||
gettext for openstack-common modules.
|
||||
|
||||
Usual usage in an openstack.common module:
|
||||
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
"""
|
||||
|
||||
import gettext
|
||||
import os
|
||||
|
||||
_localedir = os.environ.get('sm_api'.upper() + '_LOCALEDIR')
|
||||
_t = gettext.translation('sm_api', localedir=_localedir, fallback=True)
|
||||
|
||||
|
||||
def _(msg):
|
||||
return _t.ugettext(msg)
|
||||
|
||||
|
||||
def install(domain):
|
||||
"""Install a _() function using the given translation domain.
|
||||
|
||||
Given a translation domain, install a _() function using gettext's
|
||||
install() function.
|
||||
|
||||
The main difference from gettext.install() is that we allow
|
||||
overriding the default localedir (e.g. /usr/share/locale) using
|
||||
a translation-domain-specific environment variable (e.g.
|
||||
NOVA_LOCALEDIR).
|
||||
"""
|
||||
gettext.install(domain,
|
||||
localedir=os.environ.get(domain.upper() + '_LOCALEDIR'),
|
||||
unicode=True)
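
A minimal sketch of the intended use, matching the module docstring above.

from sm_api.openstack.common.gettextutils import _


def describe(count):
    # The string is looked up in the 'sm_api' translation domain.
    return _("found %d stale service entries") % count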
|
@ -0,0 +1,71 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""
|
||||
Import related utilities and helper functions.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import traceback
|
||||
|
||||
|
||||
def import_class(import_str):
|
||||
"""Returns a class from a string including module and class"""
|
||||
mod_str, _sep, class_str = import_str.rpartition('.')
|
||||
try:
|
||||
__import__(mod_str)
|
||||
return getattr(sys.modules[mod_str], class_str)
|
||||
except (ValueError, AttributeError):
|
||||
raise ImportError('Class %s cannot be found (%s)' %
|
||||
(class_str,
|
||||
traceback.format_exception(*sys.exc_info())))
|
||||
|
||||
|
||||
def import_object(import_str, *args, **kwargs):
|
||||
"""Import a class and return an instance of it."""
|
||||
return import_class(import_str)(*args, **kwargs)
|
||||
|
||||
|
||||
def import_object_ns(name_space, import_str, *args, **kwargs):
|
||||
"""
|
||||
Import a class and return an instance of it, first by trying
|
||||
to find the class in a default namespace, then falling back to
|
||||
a full path if not found in the default namespace.
|
||||
"""
|
||||
import_value = "%s.%s" % (name_space, import_str)
|
||||
try:
|
||||
return import_class(import_value)(*args, **kwargs)
|
||||
except ImportError:
|
||||
return import_class(import_str)(*args, **kwargs)
|
||||
|
||||
|
||||
def import_module(import_str):
|
||||
"""Import a module."""
|
||||
__import__(import_str)
|
||||
return sys.modules[import_str]
|
||||
|
||||
|
||||
def try_import(import_str, default=None):
|
||||
"""Try to import a module and if it fails return default."""
|
||||
try:
|
||||
return import_module(import_str)
|
||||
except ImportError:
|
||||
return default
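
A short sketch; the dotted class path points at a module in this tree, while try_import falls back to the default for anything unavailable.

cls = import_class('sm_api.openstack.common.local.WeakLocal')
store = cls()

profiler = try_import('cProfile')   # None if the module cannot be imported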
|
173
service-mgmt-api/sm-api/sm_api/openstack/common/jsonutils.py
Normal file
@ -0,0 +1,173 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2011 Justin Santa Barbara
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
'''
|
||||
JSON related utilities.
|
||||
|
||||
This module provides a few things:
|
||||
|
||||
1) A handy function for getting an object down to something that can be
|
||||
JSON serialized. See to_primitive().
|
||||
|
||||
2) Wrappers around loads() and dumps(). The dumps() wrapper will
|
||||
automatically use to_primitive() for you if needed.
|
||||
|
||||
3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson
|
||||
is available.
|
||||
'''
|
||||
|
||||
|
||||
import datetime
|
||||
import functools
|
||||
import inspect
|
||||
import itertools
|
||||
import json
|
||||
import types
|
||||
import xmlrpclib
|
||||
|
||||
import six
|
||||
|
||||
from sm_api.openstack.common import timeutils
|
||||
|
||||
|
||||
_nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod,
|
||||
inspect.isfunction, inspect.isgeneratorfunction,
|
||||
inspect.isgenerator, inspect.istraceback, inspect.isframe,
|
||||
inspect.iscode, inspect.isbuiltin, inspect.isroutine,
|
||||
inspect.isabstract]
|
||||
|
||||
_simple_types = (types.NoneType, int, basestring, bool, float, long)
|
||||
|
||||
|
||||
def to_primitive(value, convert_instances=False, convert_datetime=True,
|
||||
level=0, max_depth=3):
|
||||
"""Convert a complex object into primitives.
|
||||
|
||||
Handy for JSON serialization. We can optionally handle instances,
|
||||
but since this is a recursive function, we could have cyclical
|
||||
data structures.
|
||||
|
||||
To handle cyclical data structures we could track the actual objects
|
||||
visited in a set, but not all objects are hashable. Instead we just
|
||||
track the depth of the object inspections and don't go too deep.
|
||||
|
||||
Therefore, convert_instances=True is lossy ... be aware.
|
||||
|
||||
"""
|
||||
# handle obvious types first - order of basic types determined by running
|
||||
# full tests on nova project, resulting in the following counts:
|
||||
# 572754 <type 'NoneType'>
|
||||
# 460353 <type 'int'>
|
||||
# 379632 <type 'unicode'>
|
||||
# 274610 <type 'str'>
|
||||
# 199918 <type 'dict'>
|
||||
# 114200 <type 'datetime.datetime'>
|
||||
# 51817 <type 'bool'>
|
||||
# 26164 <type 'list'>
|
||||
# 6491 <type 'float'>
|
||||
# 283 <type 'tuple'>
|
||||
# 19 <type 'long'>
|
||||
if isinstance(value, _simple_types):
|
||||
return value
|
||||
|
||||
if isinstance(value, datetime.datetime):
|
||||
if convert_datetime:
|
||||
return timeutils.strtime(value)
|
||||
else:
|
||||
return value
|
||||
|
||||
# value of itertools.count doesn't get caught by nasty_type_tests
|
||||
# and results in infinite loop when list(value) is called.
|
||||
if type(value) == itertools.count:
|
||||
return six.text_type(value)
|
||||
|
||||
# FIXME(vish): Workaround for LP bug 852095. Without this workaround,
|
||||
# tests that raise an exception in a mocked method that
|
||||
# has a @wrap_exception with a notifier will fail. If
|
||||
# we up the dependency to 0.5.4 (when it is released) we
|
||||
# can remove this workaround.
|
||||
if getattr(value, '__module__', None) == 'mox':
|
||||
return 'mock'
|
||||
|
||||
if level > max_depth:
|
||||
return '?'
|
||||
|
||||
# The try block may not be necessary after the class check above,
|
||||
# but just in case ...
|
||||
try:
|
||||
recursive = functools.partial(to_primitive,
|
||||
convert_instances=convert_instances,
|
||||
convert_datetime=convert_datetime,
|
||||
level=level,
|
||||
max_depth=max_depth)
|
||||
if isinstance(value, dict):
|
||||
return dict((k, recursive(v)) for k, v in value.iteritems())
|
||||
elif isinstance(value, (list, tuple)):
|
||||
return [recursive(lv) for lv in value]
|
||||
|
||||
# It's not clear why xmlrpclib created their own DateTime type, but
|
||||
# for our purposes, make it a datetime type which is explicitly
|
||||
# handled
|
||||
if isinstance(value, xmlrpclib.DateTime):
|
||||
value = datetime.datetime(*tuple(value.timetuple())[:6])
|
||||
|
||||
if convert_datetime and isinstance(value, datetime.datetime):
|
||||
return timeutils.strtime(value)
|
||||
elif hasattr(value, 'iteritems'):
|
||||
return recursive(dict(value.iteritems()), level=level + 1)
|
||||
elif hasattr(value, '__iter__'):
|
||||
return recursive(list(value))
|
||||
elif convert_instances and hasattr(value, '__dict__'):
|
||||
# Likely an instance of something. Watch for cycles.
|
||||
# Ignore class member vars.
|
||||
return recursive(value.__dict__, level=level + 1)
|
||||
else:
|
||||
if any(test(value) for test in _nasty_type_tests):
|
||||
return six.text_type(value)
|
||||
return value
|
||||
except TypeError:
|
||||
# Class objects are tricky since they may define something like
|
||||
# __iter__ defined but it isn't callable as list().
|
||||
return six.text_type(value)
|
||||
|
||||
|
||||
def dumps(value, default=to_primitive, **kwargs):
|
||||
return json.dumps(value, default=default, **kwargs)
|
||||
|
||||
|
||||
def loads(s):
|
||||
return json.loads(s)
|
||||
|
||||
|
||||
def load(s):
|
||||
return json.load(s)
|
||||
|
||||
|
||||
try:
|
||||
import anyjson
|
||||
except ImportError:
|
||||
pass
|
||||
else:
|
||||
anyjson._modules.append((__name__, 'dumps', TypeError,
|
||||
'loads', ValueError, 'load'))
|
||||
anyjson.force_implementation(__name__)
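
A small sketch of dumps() handling a value the json module cannot serialize on its own.

import datetime

record = {'host': 'controller-0', 'booted_at': datetime.datetime.utcnow()}
# The datetime is converted by to_primitive() via timeutils.strtime().
print(dumps(record))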
|
52
service-mgmt-api/sm-api/sm_api/openstack/common/local.py
Normal file
@ -0,0 +1,52 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Greenthread local storage of variables using weak references"""
|
||||
|
||||
import weakref
|
||||
|
||||
from eventlet import corolocal
|
||||
|
||||
|
||||
class WeakLocal(corolocal.local):
|
||||
def __getattribute__(self, attr):
|
||||
rval = corolocal.local.__getattribute__(self, attr)
|
||||
if rval:
|
||||
# NOTE(mikal): this bit is confusing. What is stored is a weak
|
||||
# reference, not the value itself. We therefore need to look up
|
||||
# the weak reference and return the inner value here.
|
||||
rval = rval()
|
||||
return rval
|
||||
|
||||
def __setattr__(self, attr, value):
|
||||
value = weakref.ref(value)
|
||||
return corolocal.local.__setattr__(self, attr, value)
|
||||
|
||||
|
||||
# NOTE(mikal): the name "store" should be deprecated in the future
|
||||
store = WeakLocal()
|
||||
|
||||
# A "weak" store uses weak references and allows an object to fall out of scope
|
||||
# when it falls out of scope in the code that uses the thread local storage. A
|
||||
# "strong" store will hold a reference to the object so that it never falls out
|
||||
# of scope.
|
||||
weak_store = WeakLocal()
|
||||
strong_store = corolocal.local
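
A sketch of the difference between the two stores; the attribute names are arbitrary.

class _Context(object):
    pass

ctx = _Context()
weak_store.context = ctx            # weakly referenced: reading it back
                                    # returns None once 'ctx' is dropped
strong_store.locks_held = ['x']     # normal (strong) reference: stays set
                                    # until explicitly removed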
|
282
service-mgmt-api/sm-api/sm_api/openstack/common/lockutils.py
Normal file
@ -0,0 +1,282 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
|
||||
import errno
|
||||
import functools
|
||||
import os
|
||||
import shutil
|
||||
import tempfile
|
||||
import time
|
||||
import weakref
|
||||
|
||||
from eventlet import semaphore
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.openstack.common import fileutils
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
from sm_api.openstack.common import local
|
||||
from sm_api.openstack.common import log as logging
|
||||
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
|
||||
util_opts = [
|
||||
cfg.BoolOpt('disable_process_locking', default=False,
|
||||
help='Whether to disable inter-process locks'),
|
||||
cfg.StrOpt('lock_path',
|
||||
help=('Directory to use for lock files. Defaults to a '
|
||||
'temp directory'))
|
||||
]
|
||||
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(util_opts)
|
||||
|
||||
|
||||
def set_defaults(lock_path):
|
||||
cfg.set_defaults(util_opts, lock_path=lock_path)
|
||||
|
||||
|
||||
class _InterProcessLock(object):
|
||||
"""Lock implementation which allows multiple locks, working around
|
||||
issues like bugs.debian.org/cgi-bin/bugreport.cgi?bug=632857 and does
|
||||
not require any cleanup. Since the lock is always held on a file
|
||||
descriptor rather than outside of the process, the lock gets dropped
|
||||
automatically if the process crashes, even if __exit__ is not executed.
|
||||
|
||||
There are no guarantees regarding usage by multiple green threads in a
|
||||
single process here. This lock works only between processes. Exclusive
|
||||
access between local threads should be achieved using the semaphores
|
||||
in the @synchronized decorator.
|
||||
|
||||
Note these locks are released when the descriptor is closed, so it's not
|
||||
safe to close the file descriptor while another green thread holds the
|
||||
lock. Just opening and closing the lock file can break synchronisation,
|
||||
so lock files must be accessed only using this abstraction.
|
||||
"""
|
||||
|
||||
def __init__(self, name):
|
||||
self.lockfile = None
|
||||
self.fname = name
|
||||
|
||||
def __enter__(self):
|
||||
self.lockfile = open(self.fname, 'w')
|
||||
|
||||
while True:
|
||||
try:
|
||||
# Using non-blocking locks since green threads are not
|
||||
# patched to deal with blocking locking calls.
|
||||
# Also upon reading the MSDN docs for locking(), it seems
|
||||
# to have a laughable 10 attempts "blocking" mechanism.
|
||||
self.trylock()
|
||||
return self
|
||||
except IOError as e:
|
||||
if e.errno in (errno.EACCES, errno.EAGAIN):
|
||||
# external locks synchronise things like iptables
|
||||
# updates - give it some time to prevent busy spinning
|
||||
time.sleep(0.01)
|
||||
else:
|
||||
raise
|
||||
|
||||
def __exit__(self, exc_type, exc_val, exc_tb):
|
||||
try:
|
||||
self.unlock()
|
||||
self.lockfile.close()
|
||||
except IOError:
|
||||
LOG.exception(_("Could not release the acquired lock `%s`"),
|
||||
self.fname)
|
||||
|
||||
def trylock(self):
|
||||
raise NotImplementedError()
|
||||
|
||||
def unlock(self):
|
||||
raise NotImplementedError()
|
||||
|
||||
|
||||
class _WindowsLock(_InterProcessLock):
|
||||
def trylock(self):
|
||||
msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_NBLCK, 1)
|
||||
|
||||
def unlock(self):
|
||||
msvcrt.locking(self.lockfile.fileno(), msvcrt.LK_UNLCK, 1)
|
||||
|
||||
|
||||
class _PosixLock(_InterProcessLock):
|
||||
def trylock(self):
|
||||
fcntl.lockf(self.lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
|
||||
|
||||
def unlock(self):
|
||||
fcntl.lockf(self.lockfile, fcntl.LOCK_UN)
|
||||
|
||||
|
||||
if os.name == 'nt':
|
||||
import msvcrt
|
||||
InterProcessLock = _WindowsLock
|
||||
else:
|
||||
import fcntl
|
||||
InterProcessLock = _PosixLock
|
||||
|
||||
_semaphores = weakref.WeakValueDictionary()
|
||||
|
||||
|
||||
def synchronized(name, lock_file_prefix, external=False, lock_path=None):
|
||||
"""Synchronization decorator.
|
||||
|
||||
Decorating a method like so::
|
||||
|
||||
@synchronized('mylock')
|
||||
def foo(self, *args):
|
||||
...
|
||||
|
||||
ensures that only one thread will execute the foo method at a time.
|
||||
|
||||
Different methods can share the same lock::
|
||||
|
||||
@synchronized('mylock')
|
||||
def foo(self, *args):
|
||||
...
|
||||
|
||||
@synchronized('mylock')
|
||||
def bar(self, *args):
|
||||
...
|
||||
|
||||
This way only one of either foo or bar can be executing at a time.
|
||||
|
||||
The lock_file_prefix argument is used to provide lock files on disk with a
|
||||
meaningful prefix. The prefix should end with a hyphen ('-') if specified.
|
||||
|
||||
The external keyword argument denotes whether this lock should work across
|
||||
multiple processes. This means that if two different workers both run
|
||||
a method decorated with @synchronized('mylock', external=True), only one
|
||||
of them will execute at a time.
|
||||
|
||||
The lock_path keyword argument is used to specify a special location for
|
||||
external lock files to live. If nothing is set, then CONF.lock_path is
|
||||
used as a default.
|
||||
"""
|
||||
|
||||
def wrap(f):
|
||||
@functools.wraps(f)
|
||||
def inner(*args, **kwargs):
|
||||
# NOTE(soren): If we ever go natively threaded, this will be racy.
|
||||
# See http://stackoverflow.com/questions/5390569/dyn
|
||||
# amically-allocating-and-destroying-mutexes
|
||||
sem = _semaphores.get(name, semaphore.Semaphore())
|
||||
if name not in _semaphores:
|
||||
# this check is not racy - we're already holding ref locally
|
||||
# so GC won't remove the item and there was no IO switch
|
||||
# (only valid in greenthreads)
|
||||
_semaphores[name] = sem
|
||||
|
||||
with sem:
|
||||
LOG.debug(_('Got semaphore "%(lock)s" for method '
|
||||
'"%(method)s"...'), {'lock': name,
|
||||
'method': f.__name__})
|
||||
|
||||
# NOTE(mikal): I know this looks odd
|
||||
if not hasattr(local.strong_store, 'locks_held'):
|
||||
local.strong_store.locks_held = []
|
||||
local.strong_store.locks_held.append(name)
|
||||
|
||||
try:
|
||||
if external and not CONF.disable_process_locking:
|
||||
LOG.debug(_('Attempting to grab file lock "%(lock)s" '
|
||||
'for method "%(method)s"...'),
|
||||
{'lock': name, 'method': f.__name__})
|
||||
cleanup_dir = False
|
||||
|
||||
# We need a copy of lock_path because it is non-local
|
||||
local_lock_path = lock_path
|
||||
if not local_lock_path:
|
||||
local_lock_path = CONF.lock_path
|
||||
|
||||
if not local_lock_path:
|
||||
cleanup_dir = True
|
||||
local_lock_path = tempfile.mkdtemp()
|
||||
|
||||
if not os.path.exists(local_lock_path):
|
||||
fileutils.ensure_tree(local_lock_path)
|
||||
|
||||
# NOTE(mikal): the lock name cannot contain directory
|
||||
# separators
|
||||
safe_name = name.replace(os.sep, '_')
|
||||
lock_file_name = '%s%s' % (lock_file_prefix, safe_name)
|
||||
lock_file_path = os.path.join(local_lock_path,
|
||||
lock_file_name)
|
||||
|
||||
try:
|
||||
lock = InterProcessLock(lock_file_path)
|
||||
with lock:
|
||||
LOG.debug(_('Got file lock "%(lock)s" at '
|
||||
'%(path)s for method '
|
||||
'"%(method)s"...'),
|
||||
{'lock': name,
|
||||
'path': lock_file_path,
|
||||
'method': f.__name__})
|
||||
retval = f(*args, **kwargs)
|
||||
finally:
|
||||
LOG.debug(_('Released file lock "%(lock)s" at '
|
||||
'%(path)s for method "%(method)s"...'),
|
||||
{'lock': name,
|
||||
'path': lock_file_path,
|
||||
'method': f.__name__})
|
||||
# NOTE(vish): This removes the tempdir if we needed
|
||||
# to create one. This is used to
|
||||
# cleanup the locks left behind by unit
|
||||
# tests.
|
||||
if cleanup_dir:
|
||||
shutil.rmtree(local_lock_path)
|
||||
else:
|
||||
retval = f(*args, **kwargs)
|
||||
|
||||
finally:
|
||||
local.strong_store.locks_held.remove(name)
|
||||
|
||||
return retval
|
||||
return inner
|
||||
return wrap
|
||||
|
||||
|
||||
def synchronized_with_prefix(lock_file_prefix):
|
||||
"""Partial object generator for the synchronization decorator.
|
||||
|
||||
Redefine @synchronized in each project like so::
|
||||
|
||||
(in nova/utils.py)
|
||||
from nova.openstack.common import lockutils
|
||||
|
||||
synchronized = lockutils.synchronized_with_prefix('nova-')
|
||||
|
||||
|
||||
(in nova/foo.py)
|
||||
from nova import utils
|
||||
|
||||
@utils.synchronized('mylock')
|
||||
def bar(self, *args):
|
||||
...
|
||||
|
||||
The lock_file_prefix argument is used to provide lock files on disk with a
|
||||
meaningful prefix. The prefix should end with a hyphen ('-') if specified.
|
||||
"""
|
||||
|
||||
return functools.partial(synchronized, lock_file_prefix=lock_file_prefix)
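
A sketch in the spirit of the docstring above; the prefix, lock name, and lock_path are illustrative.

sm_synchronized = synchronized_with_prefix('sm-api-')


@sm_synchronized('state-file', external=True, lock_path='/var/lock/sm-api')
def update_state():
    # Only one process (and one greenthread) runs this body at a time.
    pass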
|
562
service-mgmt-api/sm-api/sm_api/openstack/common/log.py
Normal file
@ -0,0 +1,562 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright 2011 OpenStack Foundation.
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
#
|
||||
# Copyright (c) 2013-2014 Wind River Systems, Inc.
|
||||
#
|
||||
|
||||
|
||||
"""Openstack logging handler.
|
||||
|
||||
This module adds to logging functionality by adding the option to specify
|
||||
a context object when calling the various log methods. If the context object
|
||||
is not specified, default formatting is used. Additionally, an instance uuid
|
||||
may be passed as part of the log message, which is intended to make it easier
|
||||
for admins to find messages related to a specific instance.
|
||||
|
||||
It also allows setting of formatting information through conf.
|
||||
|
||||
"""
|
||||
|
||||
import ConfigParser
|
||||
import cStringIO
|
||||
import inspect
|
||||
import itertools
|
||||
import logging
|
||||
import logging.config
|
||||
import logging.handlers
|
||||
import os
|
||||
import sys
|
||||
import traceback
|
||||
|
||||
from oslo_config import cfg
|
||||
|
||||
from sm_api.openstack.common.gettextutils import _
|
||||
from sm_api.openstack.common import importutils
|
||||
from sm_api.openstack.common import jsonutils
|
||||
from sm_api.openstack.common import local
|
||||
|
||||
|
||||
_DEFAULT_LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S"
|
||||
|
||||
common_cli_opts = [
|
||||
cfg.BoolOpt('debug',
|
||||
short='d',
|
||||
default=False,
|
||||
help='Print debugging output (set logging level to '
|
||||
'DEBUG instead of default WARNING level).'),
|
||||
cfg.BoolOpt('verbose',
|
||||
short='v',
|
||||
default=False,
|
||||
help='Print more verbose output (set logging level to '
|
||||
'INFO instead of default WARNING level).'),
|
||||
]
|
||||
|
||||
logging_cli_opts = [
|
||||
cfg.StrOpt('log-config',
|
||||
metavar='PATH',
|
||||
help='If this option is specified, the logging configuration '
|
||||
'file specified is used and overrides any other logging '
|
||||
'options specified. Please see the Python logging module '
|
||||
'documentation for details on logging configuration '
|
||||
'files.'),
|
||||
cfg.StrOpt('log-format',
|
||||
default=None,
|
||||
metavar='FORMAT',
|
||||
help='A logging.Formatter log message format string which may '
|
||||
'use any of the available logging.LogRecord attributes. '
|
||||
'This option is deprecated. Please use '
|
||||
'logging_context_format_string and '
|
||||
'logging_default_format_string instead.'),
|
||||
cfg.StrOpt('log-date-format',
|
||||
default=_DEFAULT_LOG_DATE_FORMAT,
|
||||
metavar='DATE_FORMAT',
|
||||
help='Format string for %%(asctime)s in log records. '
|
||||
'Default: %(default)s'),
|
||||
cfg.StrOpt('log-file',
|
||||
metavar='PATH',
|
||||
deprecated_name='logfile',
|
||||
help='(Optional) Name of log file to output to. '
|
||||
'If no default is set, logging will go to stdout.'),
|
||||
cfg.StrOpt('log-dir',
|
||||
deprecated_name='logdir',
|
||||
help='(Optional) The base directory used for relative '
|
||||
'--log-file paths'),
|
||||
cfg.BoolOpt('use-syslog',
|
||||
default=False,
|
||||
help='Use syslog for logging.'),
|
||||
cfg.StrOpt('syslog-log-facility',
|
||||
default='LOG_USER',
|
||||
help='syslog facility to receive log lines')
|
||||
]
|
||||
|
||||
generic_log_opts = [
|
||||
cfg.BoolOpt('use_stderr',
|
||||
default=True,
|
||||
help='Log output to standard error')
|
||||
]
|
||||
|
||||
log_opts = [
|
||||
cfg.StrOpt('logging_context_format_string',
|
||||
default='sm-api %(process)d '
|
||||
'%(name)s [%(request_id)s %(user)s %(tenant)s] '
|
||||
'%(instance)s%(message)s',
|
||||
help='format string to use for log messages with context'),
|
||||
cfg.StrOpt('logging_default_format_string',
|
||||
default='sm-api %(process)d '
|
||||
'%(name)s [-] %(instance)s%(message)s',
|
||||
help='format string to use for log messages without context'),
|
||||
cfg.StrOpt('logging_debug_format_suffix',
|
||||
default='%(funcName)s %(pathname)s:%(lineno)d',
|
||||
help='data to append to log format when level is DEBUG'),
|
||||
cfg.StrOpt('logging_exception_prefix',
|
||||
default='%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s '
|
||||
'%(instance)s',
|
||||
help='prefix each line of exception output with this format'),
|
||||
cfg.ListOpt('default_log_levels',
|
||||
default=[
|
||||
'amqplib=WARN',
|
||||
'sqlalchemy=WARN',
|
||||
'boto=WARN',
|
||||
'suds=INFO',
|
||||
'keystone=INFO',
|
||||
'eventlet.wsgi.server=WARN'
|
||||
],
|
||||
help='list of logger=LEVEL pairs'),
|
||||
cfg.BoolOpt('publish_errors',
|
||||
default=False,
|
||||
help='publish error events'),
|
||||
cfg.BoolOpt('fatal_deprecations',
|
||||
default=False,
|
||||
help='make deprecations fatal'),
|
||||
|
||||
# NOTE(mikal): there are two options here because sometimes we are handed
|
||||
# a full instance (and could include more information), and other times we
|
||||
# are just handed a UUID for the instance.
|
||||
cfg.StrOpt('instance_format',
|
||||
default='[instance: %(uuid)s] ',
|
||||
help='If an instance is passed with the log message, format '
|
||||
'it like this'),
|
||||
cfg.StrOpt('instance_uuid_format',
|
||||
default='[instance: %(uuid)s] ',
|
||||
help='If an instance UUID is passed with the log message, '
|
||||
'format it like this'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_cli_opts(common_cli_opts)
|
||||
CONF.register_cli_opts(logging_cli_opts)
|
||||
CONF.register_opts(generic_log_opts)
|
||||
CONF.register_opts(log_opts)
|
||||
|
||||
# our new audit level
|
||||
# NOTE(jkoelker) Since we synthesized an audit level, make the logging
|
||||
# module aware of it so it acts like other levels.
|
||||
logging.AUDIT = logging.INFO + 1
|
||||
logging.addLevelName(logging.AUDIT, 'AUDIT')
|
||||
|
||||
|
||||
try:
|
||||
NullHandler = logging.NullHandler
|
||||
except AttributeError: # NOTE(jkoelker) NullHandler added in Python 2.7
|
||||
class NullHandler(logging.Handler):
|
||||
def handle(self, record):
|
||||
pass
|
||||
|
||||
def emit(self, record):
|
||||
pass
|
||||
|
||||
def createLock(self):
|
||||
self.lock = None
|
||||
|
||||
|
||||
def _dictify_context(context):
|
||||
if context is None:
|
||||
return None
|
||||
if not isinstance(context, dict) and getattr(context, 'to_dict', None):
|
||||
context = context.to_dict()
|
||||
return context
|
||||
|
||||
|
||||
def _get_binary_name():
|
||||
return os.path.basename(inspect.stack()[-1][1])
|
||||
|
||||
|
||||
def _get_log_file_path(binary=None):
|
||||
logfile = CONF.log_file
|
||||
logdir = CONF.log_dir
|
||||
|
||||
if logfile and not logdir:
|
||||
return logfile
|
||||
|
||||
if logfile and logdir:
|
||||
return os.path.join(logdir, logfile)
|
||||
|
||||
if logdir:
|
||||
binary = binary or _get_binary_name()
|
||||
return '%s.log' % (os.path.join(logdir, binary),)
|
||||
|
||||
|
||||
class BaseLoggerAdapter(logging.LoggerAdapter):
|
||||
|
||||
def audit(self, msg, *args, **kwargs):
|
||||
self.log(logging.AUDIT, msg, *args, **kwargs)
|
||||
|
||||
|
||||
class LazyAdapter(BaseLoggerAdapter):
|
||||
def __init__(self, name='unknown', version='unknown'):
|
||||
self._logger = None
|
||||
self.extra = {}
|
||||
self.name = name
|
||||
self.version = version
|
||||
|
||||
@property
|
||||
def logger(self):
|
||||
if not self._logger:
|
||||
self._logger = getLogger(self.name, self.version)
|
||||
return self._logger
|
||||
|
||||
|
||||
class ContextAdapter(BaseLoggerAdapter):
|
||||
warn = logging.LoggerAdapter.warning
|
||||
|
||||
def __init__(self, logger, project_name, version_string):
|
||||
self.logger = logger
|
||||
self.project = project_name
|
||||
self.version = version_string
|
||||
|
||||
@property
|
||||
def handlers(self):
|
||||
return self.logger.handlers
|
||||
|
||||
def deprecated(self, msg, *args, **kwargs):
|
||||
stdmsg = _("Deprecated: %s") % msg
|
||||
if CONF.fatal_deprecations:
|
||||
self.critical(stdmsg, *args, **kwargs)
|
||||
raise DeprecatedConfig(msg=stdmsg)
|
||||
else:
|
||||
self.warn(stdmsg, *args, **kwargs)
|
||||
|
||||
def process(self, msg, kwargs):
|
||||
if 'extra' not in kwargs:
|
||||
kwargs['extra'] = {}
|
||||
extra = kwargs['extra']
|
||||
|
||||
context = kwargs.pop('context', None)
|
||||
if not context:
|
||||
context = getattr(local.store, 'context', None)
|
||||
if context:
|
||||
extra.update(_dictify_context(context))
|
||||
|
||||
instance = kwargs.pop('instance', None)
|
||||
instance_extra = ''
|
||||
if instance:
|
||||
instance_extra = CONF.instance_format % instance
|
||||
else:
|
||||
instance_uuid = kwargs.pop('instance_uuid', None)
|
||||
if instance_uuid:
|
||||
instance_extra = (CONF.instance_uuid_format
|
||||
% {'uuid': instance_uuid})
|
||||
extra.update({'instance': instance_extra})
|
||||
|
||||
extra.update({"project": self.project})
|
||||
extra.update({"version": self.version})
|
||||
extra['extra'] = extra.copy()
|
||||
return msg, kwargs
|
||||
|
||||
|
||||
class JSONFormatter(logging.Formatter):
|
||||
def __init__(self, fmt=None, datefmt=None):
|
||||
# NOTE(jkoelker) we ignore the fmt argument, but its still there
|
||||
# since logging.config.fileConfig passes it.
|
||||
self.datefmt = datefmt
|
||||
|
||||
def formatException(self, ei, strip_newlines=True):
|
||||
lines = traceback.format_exception(*ei)
|
||||
if strip_newlines:
|
||||
lines = [itertools.ifilter(
|
||||
lambda x: x,
|
||||
line.rstrip().splitlines()) for line in lines]
|
||||
lines = list(itertools.chain(*lines))
|
||||
return lines
|
||||
|
||||
def format(self, record):
|
||||
message = {'message': record.getMessage(),
|
||||
'asctime': self.formatTime(record, self.datefmt),
|
||||
'name': record.name,
|
||||
'msg': record.msg,
|
||||
'args': record.args,
|
||||
'levelname': record.levelname,
|
||||
'levelno': record.levelno,
|
||||
'pathname': record.pathname,
|
||||
'filename': record.filename,
|
||||
'module': record.module,
|
||||
'lineno': record.lineno,
|
||||
'funcname': record.funcName,
|
||||
'created': record.created,
|
||||
'msecs': record.msecs,
|
||||
'relative_created': record.relativeCreated,
|
||||
'thread': record.thread,
|
||||
'thread_name': record.threadName,
|
||||
'process_name': record.processName,
|
||||
'process': record.process,
|
||||
'traceback': None}
|
||||
|
||||
if hasattr(record, 'extra'):
|
||||
message['extra'] = record.extra
|
||||
|
||||
if record.exc_info:
|
||||
message['traceback'] = self.formatException(record.exc_info)
|
||||
|
||||
return jsonutils.dumps(message)
|
||||
|
||||
|
||||
def _create_logging_excepthook(product_name):
|
||||
def logging_excepthook(type, value, tb):
|
||||
extra = {}
|
||||
if CONF.verbose:
|
||||
extra['exc_info'] = (type, value, tb)
|
||||
getLogger(product_name).critical(str(value), **extra)
|
||||
return logging_excepthook
|
||||
|
||||
|
||||
class LogConfigError(Exception):
|
||||
|
||||
message = _('Error loading logging config %(log_config)s: %(err_msg)s')
|
||||
|
||||
def __init__(self, log_config, err_msg):
|
||||
self.log_config = log_config
|
||||
self.err_msg = err_msg
|
||||
|
||||
def __str__(self):
|
||||
return self.message % dict(log_config=self.log_config,
|
||||
err_msg=self.err_msg)
|
||||
|
||||
|
||||
def _load_log_config(log_config):
|
||||
try:
|
||||
logging.config.fileConfig(log_config)
|
||||
except ConfigParser.Error as exc:
|
||||
raise LogConfigError(log_config, str(exc))
|
||||
|
||||
|
||||
def setup(product_name):
|
||||
"""Setup logging."""
|
||||
if CONF.log_config:
|
||||
_load_log_config(CONF.log_config)
|
||||
else:
|
||||
_setup_logging_from_conf()
|
||||
sys.excepthook = _create_logging_excepthook(product_name)
|
||||
|
||||
|
||||
def set_defaults(logging_context_format_string):
|
||||
cfg.set_defaults(log_opts,
|
||||
logging_context_format_string=
|
||||
logging_context_format_string)
|
||||
|
||||
|
||||
def _find_facility_from_conf():
|
||||
facility_names = logging.handlers.SysLogHandler.facility_names
|
||||
facility = getattr(logging.handlers.SysLogHandler,
|
||||
CONF.syslog_log_facility,
|
||||
None)
|
||||
|
||||
if facility is None and CONF.syslog_log_facility in facility_names:
|
||||
facility = facility_names.get(CONF.syslog_log_facility)
|
||||
|
||||
if facility is None:
|
||||
valid_facilities = facility_names.keys()
|
||||
consts = ['LOG_AUTH', 'LOG_AUTHPRIV', 'LOG_CRON', 'LOG_DAEMON',
|
||||
'LOG_FTP', 'LOG_KERN', 'LOG_LPR', 'LOG_MAIL', 'LOG_NEWS',
|
||||
'LOG_AUTH', 'LOG_SYSLOG', 'LOG_USER', 'LOG_UUCP',
|
||||
'LOG_LOCAL0', 'LOG_LOCAL1', 'LOG_LOCAL2', 'LOG_LOCAL3',
|
||||
'LOG_LOCAL4', 'LOG_LOCAL5', 'LOG_LOCAL6', 'LOG_LOCAL7']
|
||||
valid_facilities.extend(consts)
|
||||
raise TypeError(_('syslog facility must be one of: %s') %
|
||||
', '.join("'%s'" % fac
|
||||
for fac in valid_facilities))
|
||||
|
||||
return facility
|
||||
|
||||
|
||||
def _setup_logging_from_conf():
|
||||
log_root = getLogger(None).logger
|
||||
for handler in log_root.handlers:
|
||||
log_root.removeHandler(handler)
|
||||
|
||||
if CONF.use_syslog:
|
||||
facility = _find_facility_from_conf()
|
||||
syslog = logging.handlers.SysLogHandler(address='/dev/log',
|
||||
facility=facility)
|
||||
log_root.addHandler(syslog)
|
||||
|
||||
logpath = _get_log_file_path()
|
||||
if logpath:
|
||||
filelog = logging.handlers.WatchedFileHandler(logpath)
|
||||
log_root.addHandler(filelog)
|
||||
|
||||
if CONF.use_stderr:
|
||||
streamlog = ColorHandler()
|
||||
log_root.addHandler(streamlog)
|
||||
|
||||
elif not CONF.log_file:
|
||||
# pass sys.stdout as a positional argument
|
||||
# python2.6 calls the argument strm, in 2.7 it's stream
|
||||
streamlog = logging.StreamHandler(sys.stdout)
|
||||
log_root.addHandler(streamlog)
|
||||
|
||||
if CONF.publish_errors:
|
||||
handler = importutils.import_object(
|
||||
"sm_api.openstack.common.log_handler.PublishErrorsHandler",
|
||||
logging.ERROR)
|
||||
log_root.addHandler(handler)
|
||||
|
||||
datefmt = CONF.log_date_format
|
||||
for handler in log_root.handlers:
|
||||
# NOTE(alaski): CONF.log_format overrides everything currently. This
|
||||
# should be deprecated in favor of context aware formatting.
|
||||
if CONF.log_format:
|
||||
handler.setFormatter(logging.Formatter(fmt=CONF.log_format,
|
||||
datefmt=datefmt))
|
||||
log_root.info('Deprecated: log_format is now deprecated and will '
|
||||
'be removed in the next release')
|
||||
else:
|
||||
handler.setFormatter(ContextFormatter(datefmt=datefmt))
|
||||
|
||||
if CONF.debug:
|
||||
log_root.setLevel(logging.DEBUG)
|
||||
elif CONF.verbose:
|
||||
log_root.setLevel(logging.INFO)
|
||||
else:
|
||||
log_root.setLevel(logging.WARNING)
|
||||
|
||||
for pair in CONF.default_log_levels:
|
||||
mod, _sep, level_name = pair.partition('=')
|
||||
level = logging.getLevelName(level_name)
|
||||
logger = logging.getLogger(mod)
|
||||
logger.setLevel(level)
|
||||
|
||||
_loggers = {}
|
||||
|
||||
|
||||
def getLogger(name='unknown', version='unknown'):
|
||||
if name not in _loggers:
|
||||
_loggers[name] = ContextAdapter(logging.getLogger(name),
|
||||
name,
|
||||
version)
|
||||
return _loggers[name]
|
||||
|
||||
|
||||
def getLazyLogger(name='unknown', version='unknown'):
|
||||
"""
|
||||
create a pass-through logger that does not create the real logger
|
||||
until it is really needed and delegates all calls to the real logger
|
||||
once it is created
|
||||
"""
|
||||
return LazyAdapter(name, version)
|
||||
|
||||
|
||||
class WritableLogger(object):
|
||||
"""A thin wrapper that responds to `write` and logs."""
|
||||
|
||||
def __init__(self, logger, level=logging.INFO):
|
||||
self.logger = logger
|
||||
self.level = level
|
||||
|
||||
def write(self, msg):
|
||||
self.logger.log(self.level, msg)
|
||||
|
||||
|
||||
class ContextFormatter(logging.Formatter):
|
||||
"""A context.RequestContext aware formatter configured through flags.
|
||||
|
||||
The flags used to set format strings are: logging_context_format_string
|
||||
and logging_default_format_string. You can also specify
|
||||
logging_debug_format_suffix to append extra formatting if the log level is
|
||||
debug.
|
||||
|
||||
For information about what variables are available for the formatter see:
|
||||
http://docs.python.org/library/logging.html#formatter
|
||||
|
||||
"""
|
||||
|
||||
def format(self, record):
|
||||
"""Uses contextstring if request_id is set, otherwise default."""
|
||||
# NOTE(sdague): default the fancier formatting params
|
||||
# to an empty string so we don't throw an exception if
|
||||
# they get used
|
||||
for key in ('instance', 'color'):
|
||||
if key not in record.__dict__:
|
||||
record.__dict__[key] = ''
|
||||
|
||||
if record.__dict__.get('request_id', None):
|
||||
self._fmt = CONF.logging_context_format_string
|
||||
else:
|
||||
self._fmt = CONF.logging_default_format_string
|
||||
|
||||
if (record.levelno == logging.DEBUG and
|
||||
CONF.logging_debug_format_suffix):
|
||||
self._fmt += " " + CONF.logging_debug_format_suffix
|
||||
|
||||
# Cache this on the record, Logger will respect our formatted copy
|
||||
if record.exc_info:
|
||||
record.exc_text = self.formatException(record.exc_info, record)
|
||||
return logging.Formatter.format(self, record)
|
||||
|
||||
    def formatException(self, exc_info, record=None):
        """Format exception output with CONF.logging_exception_prefix."""
        if not record:
            return logging.Formatter.formatException(self, exc_info)

        stringbuffer = cStringIO.StringIO()
        traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
                                  None, stringbuffer)
        lines = stringbuffer.getvalue().split('\n')
        stringbuffer.close()

        if CONF.logging_exception_prefix.find('%(asctime)') != -1:
            record.asctime = self.formatTime(record, self.datefmt)

        formatted_lines = []
        for line in lines:
            pl = CONF.logging_exception_prefix % record.__dict__
            fl = '%s%s' % (pl, line)
            formatted_lines.append(fl)
        return '\n'.join(formatted_lines)


class ColorHandler(logging.StreamHandler):
    LEVEL_COLORS = {
        logging.DEBUG: '\033[00;32m',  # GREEN
        logging.INFO: '\033[00;36m',  # CYAN
        logging.AUDIT: '\033[01;36m',  # BOLD CYAN
        logging.WARN: '\033[01;33m',  # BOLD YELLOW
        logging.ERROR: '\033[01;31m',  # BOLD RED
        logging.CRITICAL: '\033[01;31m',  # BOLD RED
    }

    def format(self, record):
        record.color = self.LEVEL_COLORS[record.levelno]
        return logging.StreamHandler.format(self, record)


class DeprecatedConfig(Exception):
    message = _("Fatal call to deprecated config: %(msg)s")

    def __init__(self, msg):
        super(Exception, self).__init__(self.message % dict(msg=msg))
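A quick, hedged usage sketch (the object names below are illustrative, not part of this commit): WritableLogger lets one of these loggers stand in for a file-like object, for example as the access-log stream of a WSGI server that only knows how to call write().

import logging

from sm_api.openstack.common import log

LOG = log.getLogger(__name__)
# Anything written to this object is re-emitted as a DEBUG log record.
debug_stream = log.WritableLogger(LOG, logging.DEBUG)
debug_stream.write('raw output captured from a write()-only consumer')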
@ -0,0 +1,35 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#

import logging

from sm_api.openstack.common import notifier

from oslo_config import cfg


class PublishErrorsHandler(logging.Handler):
    def emit(self, record):
        if ('sm_api.openstack.common.notifier.log_notifier' in
                cfg.CONF.notification_driver):
            return
        notifier.api.notify(None, 'error.publisher',
                            'error_notification',
                            notifier.api.ERROR,
                            dict(error=record.msg))
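Illustrative wiring for PublishErrorsHandler (the import path below is an assumption, since this hunk does not show its file name): attached to the root logger, every ERROR record is forwarded through notifier.api, unless the log_notifier driver is configured, in which case emit() returns early to avoid a feedback loop.

import logging

# Assumed module path for the handler defined above.
from sm_api.openstack.common.log_handler import PublishErrorsHandler

root = logging.getLogger()
# Only records at ERROR and above reach emit() and become notifications.
root.addHandler(PublishErrorsHandler(logging.ERROR))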
151
service-mgmt-api/sm-api/sm_api/openstack/common/loopingcall.py
Normal file
@ -0,0 +1,151 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


import sys

from eventlet import event
from eventlet import greenthread

from sm_api.openstack.common.gettextutils import _
from sm_api.openstack.common import log as logging
from sm_api.openstack.common import timeutils

LOG = logging.getLogger(__name__)


class LoopingCallDone(Exception):
    """Exception to break out and stop a LoopingCall.

    The poll-function passed to LoopingCall can raise this exception to
    break out of the loop normally. This is somewhat analogous to
    StopIteration.

    An optional return-value can be included as the argument to the exception;
    this return-value will be returned by LoopingCall.wait()

    """

    def __init__(self, retvalue=True):
        """:param retvalue: Value that LoopingCall.wait() should return."""
        self.retvalue = retvalue


class LoopingCallBase(object):
    def __init__(self, f=None, *args, **kw):
        self.args = args
        self.kw = kw
        self.f = f
        self._running = False
        self.done = None

    def stop(self):
        self._running = False

    def wait(self):
        return self.done.wait()


class FixedIntervalLoopingCall(LoopingCallBase):
    """A fixed interval looping call."""

    def start(self, interval, initial_delay=None):
        self._running = True
        done = event.Event()

        def _inner():
            if initial_delay:
                greenthread.sleep(initial_delay)

            try:
                while self._running:
                    start = timeutils.utcnow()
                    self.f(*self.args, **self.kw)
                    end = timeutils.utcnow()
                    if not self._running:
                        break
                    delay = interval - timeutils.delta_seconds(start, end)
                    if delay <= 0:
                        LOG.warn(_('task run outlasted interval by %s sec') %
                                 -delay)
                    greenthread.sleep(delay if delay > 0 else 0)
            except LoopingCallDone as e:
                self.stop()
                done.send(e.retvalue)
            except Exception:
                LOG.exception(_('in fixed duration looping call'))
                done.send_exception(*sys.exc_info())
                return
            else:
                done.send(True)

        self.done = done

        greenthread.spawn_n(_inner)
        return self.done


# TODO(mikal): this class name is deprecated in Havana and should be removed
# in the I release
LoopingCall = FixedIntervalLoopingCall


class DynamicLoopingCall(LoopingCallBase):
    """A looping call which sleeps until the next known event.

    The function called should return how long to sleep for before being
    called again.
    """

    def start(self, initial_delay=None, periodic_interval_max=None):
        self._running = True
        done = event.Event()

        def _inner():
            if initial_delay:
                greenthread.sleep(initial_delay)

            try:
                while self._running:
                    idle = self.f(*self.args, **self.kw)
                    if not self._running:
                        break

                    if periodic_interval_max is not None:
                        idle = min(idle, periodic_interval_max)
                    LOG.debug(_('Dynamic looping call sleeping for %.02f '
                                'seconds'), idle)
                    greenthread.sleep(idle)
            except LoopingCallDone as e:
                self.stop()
                done.send(e.retvalue)
            except Exception:
                LOG.exception(_('in dynamic looping call'))
                done.send_exception(*sys.exc_info())
                return
            else:
                done.send(True)

        self.done = done

        greenthread.spawn(_inner)
        return self.done
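A minimal usage sketch for FixedIntervalLoopingCall, assuming an eventlet-based service like the rest of this module; the task function below is hypothetical:

from sm_api.openstack.common import loopingcall

def _audit():
    # Periodic work goes here; raise loopingcall.LoopingCallDone(retvalue)
    # to stop the loop cleanly from inside the task itself.
    pass

timer = loopingcall.FixedIntervalLoopingCall(_audit)
timer.start(interval=10, initial_delay=5)
# ...later, from another greenthread:
timer.stop()
timer.wait()  # returns True, or the retvalue passed to LoopingCallDone

DynamicLoopingCall is started the same way, except the task itself returns the number of seconds to sleep before the next call.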
@ -0,0 +1,73 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


"""
Network-related utilities and helper functions.
"""

from sm_api.openstack.common import log as logging


LOG = logging.getLogger(__name__)


def parse_host_port(address, default_port=None):
    """
    Interpret a string as a host:port pair.
    An IPv6 address MUST be escaped if accompanied by a port,
    because otherwise ambiguity ensues: 2001:db8:85a3::8a2e:370:7334
    means both [2001:db8:85a3::8a2e:370:7334] and
    [2001:db8:85a3::8a2e:370]:7334.

    >>> parse_host_port('server01:80')
    ('server01', 80)
    >>> parse_host_port('server01')
    ('server01', None)
    >>> parse_host_port('server01', default_port=1234)
    ('server01', 1234)
    >>> parse_host_port('[::1]:80')
    ('::1', 80)
    >>> parse_host_port('[::1]')
    ('::1', None)
    >>> parse_host_port('[::1]', default_port=1234)
    ('::1', 1234)
    >>> parse_host_port('2001:db8:85a3::8a2e:370:7334', default_port=1234)
    ('2001:db8:85a3::8a2e:370:7334', 1234)

    """
    if address[0] == '[':
        # Escaped ipv6
        _host, _port = address[1:].split(']')
        host = _host
        if ':' in _port:
            port = _port.split(':')[1]
        else:
            port = default_port
    else:
        if address.count(':') == 1:
            host, port = address.split(':')
        else:
            # 0 means ipv4, >1 means ipv6.
            # We prohibit unescaped ipv6 addresses with port.
            host = address
            port = default_port

    return (host, None if port is None else int(port))
@ -0,0 +1,18 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#
186
service-mgmt-api/sm-api/sm_api/openstack/common/notifier/api.py
Normal file
@ -0,0 +1,186 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


import uuid

from oslo_config import cfg

from sm_api.openstack.common import context
from sm_api.openstack.common.gettextutils import _
from sm_api.openstack.common import importutils
from sm_api.openstack.common import jsonutils
from sm_api.openstack.common import log as logging
from sm_api.openstack.common import timeutils


LOG = logging.getLogger(__name__)

notifier_opts = [
    cfg.MultiStrOpt('notification_driver',
                    default=[],
                    help='Driver or drivers to handle sending notifications'),
    cfg.StrOpt('default_notification_level',
               default='INFO',
               help='Default notification level for outgoing notifications'),
    cfg.StrOpt('default_publisher_id',
               default='$host',
               help='Default publisher_id for outgoing notifications'),
]

CONF = cfg.CONF
CONF.register_opts(notifier_opts)

WARN = 'WARN'
INFO = 'INFO'
ERROR = 'ERROR'
CRITICAL = 'CRITICAL'
DEBUG = 'DEBUG'

log_levels = (DEBUG, WARN, INFO, ERROR, CRITICAL)


class BadPriorityException(Exception):
    pass


def notify_decorator(name, fn):
    """Decorator for notify, used from utils.monkey_patch().

    :param name: name of the function
    :param fn: object of the function
    :returns: function -- decorated function

    """
    def wrapped_func(*args, **kwarg):
        body = {}
        body['args'] = []
        body['kwarg'] = {}
        for arg in args:
            body['args'].append(arg)
        for key in kwarg:
            body['kwarg'][key] = kwarg[key]

        ctxt = context.get_context_from_function_and_args(fn, args, kwarg)
        notify(ctxt,
               CONF.default_publisher_id,
               name,
               CONF.default_notification_level,
               body)
        return fn(*args, **kwarg)
    return wrapped_func


def publisher_id(service, host=None):
    if not host:
        host = CONF.host
    return "%s.%s" % (service, host)


def notify(context, publisher_id, event_type, priority, payload):
    """Sends a notification using the specified driver

    :param publisher_id: the source worker_type.host of the message
    :param event_type: the literal type of event (ex. Instance Creation)
    :param priority: patterned after the enumeration of Python logging
                     levels in the set (DEBUG, WARN, INFO, ERROR, CRITICAL)
    :param payload: A python dictionary of attributes

    Outgoing message format includes the above parameters, and appends the
    following:

    message_id
        a UUID representing the id for this notification

    timestamp
        the GMT timestamp the notification was sent at

    The composite message will be constructed as a dictionary of the above
    attributes, which will then be sent via the transport mechanism defined
    by the driver.

    Message example::

        {'message_id': str(uuid.uuid4()),
         'publisher_id': 'compute.host1',
         'timestamp': timeutils.utcnow(),
         'priority': 'WARN',
         'event_type': 'compute.create_instance',
         'payload': {'instance_id': 12, ... }}

    """
    if priority not in log_levels:
        raise BadPriorityException(
            _('%s not in valid priorities') % priority)

    # Ensure everything is JSON serializable.
    payload = jsonutils.to_primitive(payload, convert_instances=True)

    msg = dict(message_id=str(uuid.uuid4()),
               publisher_id=publisher_id,
               event_type=event_type,
               priority=priority,
               payload=payload,
               timestamp=str(timeutils.utcnow()))

    for driver in _get_drivers():
        try:
            driver.notify(context, msg)
        except Exception as e:
            LOG.exception(_("Problem '%(e)s' attempting to "
                            "send to notification system. "
                            "Payload=%(payload)s")
                          % dict(e=e, payload=payload))


_drivers = None


def _get_drivers():
    """Instantiate, cache, and return drivers based on the CONF."""
    global _drivers
    if _drivers is None:
        _drivers = {}
        for notification_driver in CONF.notification_driver:
            add_driver(notification_driver)

    return _drivers.values()


def add_driver(notification_driver):
    """Add a notification driver at runtime."""
    # Make sure the driver list is initialized.
    _get_drivers()
    if isinstance(notification_driver, basestring):
        # Load and add
        try:
            driver = importutils.import_module(notification_driver)
            _drivers[notification_driver] = driver
        except ImportError:
            LOG.exception(_("Failed to load notifier %s. "
                            "These notifications will not be sent.") %
                          notification_driver)
    else:
        # Driver is already loaded; just add the object.
        _drivers[notification_driver] = notification_driver


def _reset_drivers():
    """Used by unit tests to reset the drivers."""
    global _drivers
    _drivers = None
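A hedged example of emitting a notification through whichever drivers are configured; the publisher id, event type, and payload below are made up for illustration (publisher_id() could be used instead when CONF.host is registered):

from sm_api.openstack.common import context
from sm_api.openstack.common.notifier import api as notifier_api

ctxt = context.get_admin_context()
notifier_api.notify(ctxt,
                    'sm-api.controller-0',       # illustrative publisher_id
                    'sm_api.service.heartbeat',  # hypothetical event_type
                    notifier_api.INFO,
                    {'service': 'sm-api', 'detail': 'example payload'})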
@ -0,0 +1,39 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


from oslo_config import cfg

from sm_api.openstack.common import jsonutils
from sm_api.openstack.common import log as logging


CONF = cfg.CONF


def notify(_context, message):
    """Notifies the recipient of the desired event given the model.
    Log notifications using openstack's default logging system"""

    priority = message.get('priority',
                           CONF.default_notification_level)
    priority = priority.lower()
    logger = logging.getLogger(
        'sm_api.openstack.common.notification.%s' %
        message['event_type'])
    getattr(logger, priority)(jsonutils.dumps(message))
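To route notifications through this log-based driver, its module can be listed in the notification_driver option or registered at runtime via the add_driver() helper shown earlier; a small hedged sketch:

from sm_api.openstack.common.notifier import api as notifier_api

# Equivalent to listing the module in CONF.notification_driver; every
# subsequent notify() call is also written to the standard log.
notifier_api.add_driver('sm_api.openstack.common.notifier.log_notifier')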
@ -0,0 +1,23 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


def notify(_context, message):
    """Notifies the recipient of the desired event given the model"""
    pass
@ -0,0 +1,50 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


from oslo_config import cfg

from sm_api.openstack.common import context as req_context
from sm_api.openstack.common.gettextutils import _
from sm_api.openstack.common import log as logging
from sm_api.openstack.common import rpc

LOG = logging.getLogger(__name__)

notification_topic_opt = cfg.ListOpt(
    'notification_topics', default=['notifications', ],
    help='AMQP topic used for openstack notifications')

CONF = cfg.CONF
CONF.register_opt(notification_topic_opt)


def notify(context, message):
    """Sends a notification via RPC"""
    if not context:
        context = req_context.get_admin_context()
    priority = message.get('priority',
                           CONF.default_notification_level)
    priority = priority.lower()
    for topic in CONF.notification_topics:
        topic = '%s.%s' % (topic, priority)
        try:
            rpc.notify(context, topic, message)
        except Exception:
            LOG.exception(_("Could not send notification to %(topic)s. "
                            "Payload=%(message)s"), locals())
@ -0,0 +1,56 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2013-2014 Wind River Systems, Inc.
#


'''messaging based notification driver, with message envelopes'''

from oslo_config import cfg

from sm_api.openstack.common import context as req_context
from sm_api.openstack.common.gettextutils import _
from sm_api.openstack.common import log as logging
from sm_api.openstack.common import rpc

LOG = logging.getLogger(__name__)

notification_topic_opt = cfg.ListOpt(
    'topics', default=['notifications', ],
    help='AMQP topic(s) used for openstack notifications')

opt_group = cfg.OptGroup(name='rpc_notifier2',
                         title='Options for rpc_notifier2')

CONF = cfg.CONF
CONF.register_group(opt_group)
CONF.register_opt(notification_topic_opt, opt_group)


def notify(context, message):
    """Sends a notification via RPC"""
    if not context:
        context = req_context.get_admin_context()
    priority = message.get('priority',
                           CONF.default_notification_level)
    priority = priority.lower()
    for topic in CONF.rpc_notifier2.topics:
        topic = '%s.%s' % (topic, priority)
        try:
            rpc.notify(context, topic, message, envelope=True)
        except Exception:
            LOG.exception(_("Could not send notification to %(topic)s. "
                            "Payload=%(message)s"), locals())