StarlingX open source release updates

Signed-off-by: Dean Troyer <dtroyer@gmail.com>

Author: Dean Troyer, 2018-05-30 16:16:51 -07:00
parent 48eda4de1e
commit 9d3ca49387
497 changed files with 56138 additions and 0 deletions

CONTRIBUTORS.wrs (new file, 7 lines)

@@ -0,0 +1,7 @@
The following contributors from Wind River have developed the seed code in this
repository. We look forward to community collaboration and contributions for
additional features, enhancements, and refactoring.
Contributors:
=============
Wind River Titanium Cloud Team

LICENSE (new file, 202 lines)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.rst (new file, 5 lines)

@@ -0,0 +1,5 @@
============
stx-upstream
============
StarlingX Upstream Packages

ceph-manager/.gitignore (vendored, new file, 6 lines)

@@ -0,0 +1,6 @@
!.distro
.distro/centos7/rpmbuild/RPMS
.distro/centos7/rpmbuild/SRPMS
.distro/centos7/rpmbuild/BUILD
.distro/centos7/rpmbuild/BUILDROOT
.distro/centos7/rpmbuild/SOURCES/ceph-manager*tar.gz

ceph-manager/LICENSE (new file, 202 lines)

@@ -0,0 +1,202 @@
(Apache License, Version 2.0; text identical to the top-level LICENSE above.)

ceph-manager/PKG-INFO (new file, 13 lines)

@@ -0,0 +1,13 @@
Metadata-Version: 1.1
Name: ceph-manager
Version: 1.0
Summary: Handle Ceph API calls and provide status updates via alarms
Home-page:
Author: Windriver
Author-email: info@windriver.com
License: Apache-2.0
Description: Handle Ceph API calls and provide status updates via alarms
Platform: UNKNOWN
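The PKG-INFO block above follows the Metadata-Version 1.1 core-metadata format, which stores fields as RFC 822 style "Key: value" headers. As a quick illustration (with a few of the fields above inlined, no packaging tooling assumed), Python's standard email parser can read it directly:

```python
# PKG-INFO uses RFC 822 style headers, so the standard library email
# parser can extract the metadata fields without any packaging tools.
from email.parser import HeaderParser

# A trimmed copy of the ceph-manager PKG-INFO shown above.
pkg_info = """\
Metadata-Version: 1.1
Name: ceph-manager
Version: 1.0
Summary: Handle Ceph API calls and provide status updates via alarms
License: Apache-2.0
"""

meta = HeaderParser().parsestr(pkg_info)
print(meta["Name"])     # ceph-manager
print(meta["Version"])  # 1.0
print(meta["License"])  # Apache-2.0
```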

(new file; filename not shown in this view)

@@ -0,0 +1,3 @@
SRC_DIR="ceph-manager"
COPY_LIST_TO_TAR="files scripts"
TIS_PATCH_VER=4

(new file; filename not shown in this view)

@@ -0,0 +1,70 @@
Summary: Handle Ceph API calls and provide status updates via alarms
Name: ceph-manager
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz
BuildRequires: python-setuptools
BuildRequires: systemd-units
BuildRequires: systemd-devel
Requires: sysinv
%description
Handle Ceph API calls and provide status updates via alarms.
Handle sysinv RPC calls for long running Ceph API operations:
- cache tiering enable
- cache tiering disable
%define local_bindir /usr/bin/
%define local_etc_initd /etc/init.d/
%define local_etc_logrotated /etc/logrotate.d/
%define pythonroot /usr/lib64/python2.7/site-packages
%define debug_package %{nil}
%prep
%setup
%build
%{__python} setup.py build
%install
%{__python} setup.py install --root=$RPM_BUILD_ROOT \
--install-lib=%{pythonroot} \
--prefix=/usr \
--install-data=/usr/share \
--single-version-externally-managed
install -d -m 755 %{buildroot}%{local_etc_initd}
install -p -D -m 700 scripts/init.d/ceph-manager %{buildroot}%{local_etc_initd}/ceph-manager
install -d -m 755 %{buildroot}%{local_bindir}
install -p -D -m 700 scripts/bin/ceph-manager %{buildroot}%{local_bindir}/ceph-manager
install -d -m 755 %{buildroot}%{local_etc_logrotated}
install -p -D -m 644 files/ceph-manager.logrotate %{buildroot}%{local_etc_logrotated}/ceph-manager.logrotate
install -d -m 755 %{buildroot}%{_unitdir}
install -m 644 -p -D files/%{name}.service %{buildroot}%{_unitdir}/%{name}.service
%clean
rm -rf $RPM_BUILD_ROOT
# Note: The package name is ceph-manager but the import name is ceph_manager so
# can't use '%{name}'.
%files
%defattr(-,root,root,-)
%doc LICENSE
%{local_bindir}/*
%{local_etc_initd}/*
%{_unitdir}/%{name}.service
%dir %{local_etc_logrotated}
%{local_etc_logrotated}/*
%dir %{pythonroot}/ceph_manager
%{pythonroot}/ceph_manager/*
%dir %{pythonroot}/ceph_manager-%{version}.0-py2.7.egg-info
%{pythonroot}/ceph_manager-%{version}.0-py2.7.egg-info/*

(new file; filename not shown in this view)
@@ -0,0 +1,202 @@
(Apache License, Version 2.0; text identical to the top-level LICENSE above, truncated here.)
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
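Applied to a Python source file, the boilerplate described in the appendix above looks like the following; the year and owner here are illustrative placeholders, not values taken from this repository:

```python
# Copyright 2018 Example Company, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```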


@ -0,0 +1,5 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


@ -0,0 +1,705 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import copy
import contextlib
import functools
import math
import subprocess
import time
import traceback
# noinspection PyUnresolvedReferences
import eventlet
# noinspection PyUnresolvedReferences
from eventlet.semaphore import Semaphore
# noinspection PyUnresolvedReferences
from oslo_log import log as logging
# noinspection PyUnresolvedReferences
from sysinv.conductor.cache_tiering_service_config import ServiceConfig
from i18n import _LI, _LW, _LE
import constants
import exception
import ceph
LOG = logging.getLogger(__name__)
CEPH_POOLS = copy.deepcopy(constants.CEPH_POOLS)
MAX_WAIT = constants.CACHE_FLUSH_MAX_WAIT_OBJ_COUNT_DECREASE_SEC
MIN_WAIT = constants.CACHE_FLUSH_MIN_WAIT_OBJ_COUNT_DECREASE_SEC
class LockOwnership(object):
def __init__(self, sem):
self.sem = sem
@contextlib.contextmanager
def __call__(self):
try:
yield
finally:
if self.sem:
self.sem.release()
def transfer(self):
new_lo = LockOwnership(self.sem)
self.sem = None
return new_lo
class Lock(object):
def __init__(self):
self.sem = Semaphore(value=1)
def try_lock(self):
result = self.sem.acquire(blocking=False)
if result:
return LockOwnership(self.sem)
class CacheTiering(object):
def __init__(self, service):
self.service = service
self.lock = Lock()
# will be unlocked by set_initial_config()
self._init_config_lock = self.lock.try_lock()
self.config = None
self.config_desired = None
self.config_applied = None
self.target_max_bytes = {}
def set_initial_config(self, config):
with self._init_config_lock():
LOG.info("Setting Ceph cache tiering initial configuration")
self.config = ServiceConfig.from_dict(
config.get(constants.CACHE_TIERING, {})) or \
ServiceConfig()
self.config_desired = ServiceConfig.from_dict(
config.get(constants.CACHE_TIERING_DESIRED, {})) or \
ServiceConfig()
self.config_applied = ServiceConfig.from_dict(
config.get(constants.CACHE_TIERING_APPLIED, {})) or \
ServiceConfig()
if self.config_desired:
LOG.debug("set_initial_config config_desired %s " %
self.config_desired.to_dict())
if self.config_applied:
LOG.debug("set_initial_config config_applied %s " %
self.config_applied.to_dict())
# Check that previous caching tier operation completed
# successfully or perform recovery
if (self.config_desired and
self.config_applied and
(self.config_desired.cache_enabled !=
self.config_applied.cache_enabled)):
if self.config_desired.cache_enabled:
self.enable_cache(self.config_desired.to_dict(),
self.config_applied.to_dict(),
self._init_config_lock.transfer())
else:
self.disable_cache(self.config_desired.to_dict(),
self.config_applied.to_dict(),
self._init_config_lock.transfer())
def is_locked(self):
lock_ownership = self.lock.try_lock()
if not lock_ownership:
return True
with lock_ownership():
return False
def update_pools_info(self):
global CEPH_POOLS
cfg = self.service.sysinv_conductor.call(
{}, 'get_ceph_pools_config')
CEPH_POOLS = copy.deepcopy(cfg)
LOG.info(_LI("update_pools_info: pools: {}").format(CEPH_POOLS))
def enable_cache(self, new_config, applied_config, lock_ownership=None):
new_config = ServiceConfig.from_dict(new_config)
applied_config = ServiceConfig.from_dict(applied_config)
if not lock_ownership:
lock_ownership = self.lock.try_lock()
if not lock_ownership:
raise exception.CephCacheEnableFailure()
with lock_ownership():
eventlet.spawn(self.do_enable_cache,
new_config, applied_config,
lock_ownership.transfer())
def do_enable_cache(self, new_config, applied_config, lock_ownership):
LOG.info(_LI("cache_tiering_enable_cache: "
"new_config={}, applied_config={}").format(
new_config.to_dict(), applied_config.to_dict()))
_unwind_actions = []
with lock_ownership():
success = False
_exception = None
try:
self.config_desired.cache_enabled = True
self.update_pools_info()
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
self.cache_pool_create(pool)
_unwind_actions.append(
functools.partial(self.cache_pool_delete, pool))
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
self.cache_tier_add(pool)
_unwind_actions.append(
functools.partial(self.cache_tier_remove, pool))
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
self.cache_mode_set(pool, 'writeback')
self.cache_pool_set_config(pool, new_config)
self.cache_overlay_create(pool)
success = True
except Exception as e:
LOG.error(_LE('Failed to enable cache: reason=%s') %
traceback.format_exc())
for action in reversed(_unwind_actions):
try:
action()
except Exception:
LOG.warn(_LW('Failed cache enable '
'unwind action: reason=%s') %
traceback.format_exc())
success = False
_exception = str(e)
finally:
self.service.monitor.monitor_check_cache_tier(success)
if success:
self.config_applied.cache_enabled = True
self.service.sysinv_conductor.call(
{}, 'cache_tiering_enable_cache_complete',
success=success, exception=_exception,
new_config=new_config.to_dict(),
applied_config=applied_config.to_dict())
# Run first update of periodic target_max_bytes
self.update_cache_target_max_bytes()
@contextlib.contextmanager
def ignore_ceph_failure(self):
try:
yield
except exception.CephManagerException:
pass
def disable_cache(self, new_config, applied_config, lock_ownership=None):
new_config = ServiceConfig.from_dict(new_config)
applied_config = ServiceConfig.from_dict(applied_config)
if not lock_ownership:
lock_ownership = self.lock.try_lock()
if not lock_ownership:
raise exception.CephCacheDisableFailure()
with lock_ownership():
eventlet.spawn(self.do_disable_cache,
new_config, applied_config,
lock_ownership.transfer())
def do_disable_cache(self, new_config, applied_config, lock_ownership):
        LOG.info(_LI("cache_tiering_disable_cache: "
                     "new_config={}, applied_config={}").format(
                         new_config.to_dict(), applied_config.to_dict()))
with lock_ownership():
success = False
_exception = None
try:
self.config_desired.cache_enabled = False
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
with self.ignore_ceph_failure():
self.cache_mode_set(
pool, 'forward')
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
retries_left = 3
while True:
try:
self.cache_flush(pool)
break
except exception.CephCacheFlushFailure:
retries_left -= 1
if not retries_left:
# give up
break
else:
time.sleep(1)
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
with self.ignore_ceph_failure():
self.cache_overlay_delete(pool)
self.cache_tier_remove(pool)
for pool in CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = \
self.service.monitor._get_object_pool_name()
pool['pool_name'] = object_pool_name
with self.ignore_ceph_failure():
self.cache_pool_delete(pool)
success = True
except Exception as e:
                LOG.warn(_LW('Failed to disable cache: reason=%s') %
traceback.format_exc())
_exception = str(e)
finally:
self.service.monitor.monitor_check_cache_tier(False)
if success:
self.config_desired.cache_enabled = False
self.config_applied.cache_enabled = False
self.service.sysinv_conductor.call(
{}, 'cache_tiering_disable_cache_complete',
success=success, exception=_exception,
new_config=new_config.to_dict(),
applied_config=applied_config.to_dict())
def get_pool_pg_num(self, pool_name):
return self.service.sysinv_conductor.call(
{}, 'get_pool_pg_num',
pool_name=pool_name)
def cache_pool_create(self, pool):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
pg_num = self.get_pool_pg_num(cache_pool)
if not ceph.osd_pool_exists(self.service.ceph_api, cache_pool):
ceph.osd_pool_create(
self.service.ceph_api, cache_pool,
pg_num, pg_num)
def cache_pool_delete(self, pool):
cache_pool = pool['pool_name'] + '-cache'
ceph.osd_pool_delete(
self.service.ceph_api, cache_pool)
def cache_tier_add(self, pool):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
response, body = self.service.ceph_api.osd_tier_add(
backing_pool, cache_pool,
force_nonempty="--force-nonempty",
body='json')
if response.ok:
LOG.info(_LI("Added OSD tier: "
"backing_pool={}, cache_pool={}").format(
backing_pool, cache_pool))
else:
e = exception.CephPoolAddTierFailure(
backing_pool=backing_pool,
cache_pool=cache_pool,
response_status_code=response.status_code,
response_reason=response.reason,
status=body.get('status'),
output=body.get('output'))
LOG.warn(e)
raise e
def cache_tier_remove(self, pool):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
response, body = self.service.ceph_api.osd_tier_remove(
backing_pool, cache_pool, body='json')
if response.ok:
LOG.info(_LI("Removed OSD tier: "
"backing_pool={}, cache_pool={}").format(
backing_pool, cache_pool))
else:
e = exception.CephPoolRemoveTierFailure(
backing_pool=backing_pool,
cache_pool=cache_pool,
response_status_code=response.status_code,
response_reason=response.reason,
status=body.get('status'),
output=body.get('output'))
LOG.warn(e)
raise e
def cache_mode_set(self, pool, mode):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
response, body = self.service.ceph_api.osd_tier_cachemode(
cache_pool, mode, body='json')
if response.ok:
LOG.info(_LI("Set OSD tier cache mode: "
"cache_pool={}, mode={}").format(cache_pool, mode))
else:
e = exception.CephCacheSetModeFailure(
cache_pool=cache_pool,
mode=mode,
response_status_code=response.status_code,
response_reason=response.reason,
status=body.get('status'),
output=body.get('output'))
LOG.warn(e)
raise e
def cache_pool_set_config(self, pool, config):
for name, value in config.params.iteritems():
self.cache_pool_set_param(pool, name, value)
def cache_pool_set_param(self, pool, name, value):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
ceph.osd_set_pool_param(
self.service.ceph_api, cache_pool, name, value)
def cache_overlay_create(self, pool):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
response, body = self.service.ceph_api.osd_tier_set_overlay(
backing_pool, cache_pool, body='json')
if response.ok:
LOG.info(_LI("Set OSD tier overlay: "
"backing_pool={}, cache_pool={}").format(
backing_pool, cache_pool))
else:
e = exception.CephCacheCreateOverlayFailure(
backing_pool=backing_pool,
cache_pool=cache_pool,
response_status_code=response.status_code,
response_reason=response.reason,
status=body.get('status'),
output=body.get('output'))
LOG.warn(e)
raise e
def cache_overlay_delete(self, pool):
backing_pool = pool['pool_name']
        cache_pool = backing_pool + '-cache'
response, body = self.service.ceph_api.osd_tier_remove_overlay(
backing_pool, body='json')
if response.ok:
LOG.info(_LI("Removed OSD tier overlay: "
"backing_pool={}").format(backing_pool))
else:
e = exception.CephCacheDeleteOverlayFailure(
backing_pool=backing_pool,
cache_pool=cache_pool,
response_status_code=response.status_code,
response_reason=response.reason,
status=body.get('status'),
output=body.get('output'))
LOG.warn(e)
raise e
@staticmethod
def rados_cache_flush_evict_all(pool):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
try:
subprocess.check_call(
['/usr/bin/rados', '-p', cache_pool, 'cache-flush-evict-all'])
            LOG.info(_LI("Flushed OSD cache pool: "
"cache_pool={}").format(cache_pool))
except subprocess.CalledProcessError as e:
_e = exception.CephCacheFlushFailure(
cache_pool=cache_pool,
return_code=str(e.returncode),
cmd=" ".join(e.cmd),
output=e.output)
LOG.warn(_e)
raise _e
def cache_flush(self, pool):
backing_pool = pool['pool_name']
cache_pool = backing_pool + '-cache'
try:
# set target_max_objects to a small value to force evacuation of
# objects from cache before we use rados cache-flush-evict-all
# WARNING: assuming cache_pool will be deleted after flush so
# we don't have to save/restore the value of target_max_objects
#
self.cache_pool_set_param(pool, 'target_max_objects', 1)
prev_object_count = None
wait_interval = MIN_WAIT
while True:
response, body = self.service.ceph_api.df(body='json')
if not response.ok:
LOG.warn(_LW(
"Failed to retrieve cluster free space stats: "
"status_code=%d, reason=%s") % (
response.status_code, response.reason))
break
stats = None
for s in body['output']['pools']:
if s['name'] == cache_pool:
stats = s['stats']
break
if not stats:
LOG.warn(_LW("Missing pool free space stats: "
"cache_pool=%s") % cache_pool)
break
object_count = stats['objects']
if object_count < constants.CACHE_FLUSH_OBJECTS_THRESHOLD:
break
if prev_object_count is not None:
delta_objects = object_count - prev_object_count
if delta_objects > 0:
LOG.warn(_LW("Unexpected increase in number "
"of objects in cache pool: "
"cache_pool=%s, prev_object_count=%d, "
"object_count=%d") % (
cache_pool, prev_object_count,
object_count))
break
if delta_objects == 0:
wait_interval *= 2
if wait_interval > MAX_WAIT:
LOG.warn(_LW(
"Cache pool number of objects did not "
"decrease: cache_pool=%s, object_count=%d, "
"wait_interval=%d") % (
cache_pool, object_count, wait_interval))
break
else:
wait_interval = MIN_WAIT
time.sleep(wait_interval)
prev_object_count = object_count
except exception.CephPoolSetParamFailure as e:
LOG.warn(e)
finally:
self.rados_cache_flush_evict_all(pool)
def update_cache_target_max_bytes(self):
        """Dynamically compute target_max_bytes of caching pools."""
# Only compute if cache tiering is enabled
if self.config_applied and self.config_desired:
if (not self.config_desired.cache_enabled or
not self.config_applied.cache_enabled):
LOG.debug("Cache tiering disabled, no need to update "
"target_max_bytes.")
return
LOG.debug("Updating target_max_bytes")
# Get available space
response, body = self.service.ceph_api.osd_df(body='json',
output_method='tree')
if not response.ok:
LOG.warn(_LW(
"Failed to retrieve cluster free space stats: "
"status_code=%d, reason=%s") % (
response.status_code, response.reason))
return
storage_tier_size = 0
cache_tier_size = 0
replication = constants.CEPH_REPLICATION_FACTOR
for node in body['output']['nodes']:
if node['name'] == 'storage-tier':
storage_tier_size = node['kb']*1024/replication
elif node['name'] == 'cache-tier':
cache_tier_size = node['kb']*1024/replication
if storage_tier_size == 0 or cache_tier_size == 0:
LOG.info("Failed to get cluster size "
                     "(storage_tier_size=%s, cache_tier_size=%s), "
"retrying on next cycle" %
(storage_tier_size, cache_tier_size))
return
# Get available pools
response, body = self.service.ceph_api.osd_lspools(body='json')
if not response.ok:
LOG.warn(_LW(
"Failed to retrieve available pools: "
"status_code=%d, reason=%s") % (
response.status_code, response.reason))
return
pools = [p['poolname'] for p in body['output']]
# Separate backing from caching for easy iteration
backing_pools = []
caching_pools = []
for p in pools:
if p.endswith('-cache'):
caching_pools.append(p)
else:
backing_pools.append(p)
LOG.debug("Pools: caching: %s, backing: %s" % (caching_pools,
backing_pools))
        if not caching_pools:
            # We do not have caching pools created yet
            return
# Get quota from backing pools that are cached
stats = {}
for p in caching_pools:
backing_name = p.replace('-cache', '')
stats[backing_name] = {}
try:
quota = ceph.osd_pool_get_quota(self.service.ceph_api,
backing_name)
except exception.CephPoolGetQuotaFailure as e:
LOG.warn(_LW(
"Failed to retrieve quota: "
"exception: %s") % str(e))
return
stats[backing_name]['quota'] = quota['max_bytes']
stats[backing_name]['quota_pt'] = (quota['max_bytes']*100.0 /
storage_tier_size)
LOG.debug("Quota for pool: %s "
"is: %s B representing %s pt" %
(backing_name,
quota['max_bytes'],
stats[backing_name]['quota_pt']))
        # target_max_bytes logic:
        # - when computing target_max_bytes, cache_tier_size must be equal
        #   to the sum of target_max_bytes of each caching pool
        # - target_max_bytes for each caching pool is computed as the
        #   percentage of quota in the corresponding backing pool
        # - the caching tier has to work at full capacity, so if the sum of
        #   all quotas in the backing tier differs from 100% we need to
        #   normalize
        # - if the quota is zero for any pool we add CACHE_TIERING_MIN_QUOTA
        #   by default *after* normalization so that we have a real minimum
        # We compute the real percentage that needs to be normalized after
        # ensuring that we have CACHE_TIERING_MIN_QUOTA for each pool with
        # a quota of 0
        real_100pt = 90.0  # start from the max and decrease it for each 0 pool
        # Note: we must avoid reaching 100% at all costs, and
        # cache_target_full_ratio, the Ceph parameter that is supposed to
        # protect the cluster against this, does not work in Ceph v0.94.6.
        # Therefore a value of 90% is better suited here.
for p in caching_pools:
backing_name = p.replace('-cache', '')
if stats[backing_name]['quota_pt'] == 0:
real_100pt -= constants.CACHE_TIERING_MIN_QUOTA
LOG.debug("Quota before normalization for %s is: %s pt" %
(p, stats[backing_name]['quota_pt']))
# Compute total percentage of quotas for all backing pools.
# Should be 100% if correctly configured
total_quota_pt = 0
for p in caching_pools:
backing_name = p.replace('-cache', '')
total_quota_pt += stats[backing_name]['quota_pt']
LOG.debug("Total quota pt is: %s" % total_quota_pt)
# Normalize quota pt to 100% (or real_100pt)
if total_quota_pt != 0: # to avoid divide by zero
for p in caching_pools:
backing_name = p.replace('-cache', '')
stats[backing_name]['quota_pt'] = \
(stats[backing_name]['quota_pt'] *
(real_100pt / total_quota_pt))
# Do not allow quota to be 0 for any pool
total = 0
for p in caching_pools:
backing_name = p.replace('-cache', '')
if stats[backing_name]['quota_pt'] == 0:
stats[backing_name]['quota_pt'] = \
constants.CACHE_TIERING_MIN_QUOTA
total += stats[backing_name]['quota_pt']
LOG.debug("Quota after normalization for %s is: %s:" %
(p, stats[backing_name]['quota_pt']))
if total > 100:
# Supplementary protection, we really have to avoid going above
# 100%. Note that real_100pt is less than 100% but we still got
# more than 100!
LOG.warn("Total sum of quotas should not go above 100% "
"but is: %s, recalculating in next cycle" % total)
return
LOG.debug("Total sum of quotas is %s pt" % total)
# Get current target_max_bytes. We cache it to reduce requests
# to ceph-rest-api. We are the ones changing it, so not an issue.
for p in caching_pools:
if p not in self.target_max_bytes:
try:
value = ceph.osd_get_pool_param(self.service.ceph_api, p,
constants.TARGET_MAX_BYTES)
except exception.CephPoolGetParamFailure as e:
LOG.warn(e)
return
self.target_max_bytes[p] = value
LOG.debug("Existing target_max_bytes got from "
"Ceph: %s" % self.target_max_bytes)
# Set TARGET_MAX_BYTES
LOG.debug("storage_tier_size: %s "
"cache_tier_size: %s" % (storage_tier_size,
cache_tier_size))
for p in caching_pools:
backing_name = p.replace('-cache', '')
s = stats[backing_name]
target_max_bytes = math.floor(s['quota_pt'] * cache_tier_size /
100.0)
target_max_bytes = int(target_max_bytes)
LOG.debug("New Target max bytes of pool: %s is: %s B" % (
p, target_max_bytes))
# Set the new target_max_bytes only if it changed
if self.target_max_bytes.get(p) == target_max_bytes:
LOG.debug("Target max bytes of pool: %s "
"is already updated" % p)
continue
try:
ceph.osd_set_pool_param(self.service.ceph_api, p,
constants.TARGET_MAX_BYTES,
target_max_bytes)
self.target_max_bytes[p] = target_max_bytes
except exception.CephPoolSetParamFailure as e:
LOG.warn(e)
continue
return
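The quota normalization performed in update_cache_target_max_bytes above can be summarized as a standalone sketch. `normalize_quota_pt` is an illustrative name; this simplified version mirrors only the arithmetic (shrink the 90% budget for each zero-quota pool, scale the percentages to that budget, then apply the minimum), not the surrounding Ceph calls:

```python
def normalize_quota_pt(quota_pt, real_100pt=90.0, min_quota=5.0):
    """Normalize backing-pool quota percentages for the cache tier.

    quota_pt maps backing pool name -> percentage of the storage tier
    reserved by that pool's quota (a simplified stand-in for the
    stats[...]['quota_pt'] values computed above).
    """
    # shrink the budget for each pool that has no quota configured,
    # so those pools can later receive the minimum without overshooting
    for pt in quota_pt.values():
        if pt == 0:
            real_100pt -= min_quota
    total_quota_pt = sum(quota_pt.values())
    normalized = {}
    for name, pt in quota_pt.items():
        if total_quota_pt != 0:  # avoid divide by zero
            pt = pt * (real_100pt / total_quota_pt)
        # do not allow any caching pool to end up with a zero quota
        normalized[name] = pt if pt != 0 else min_quota
    return normalized
```

Each normalized percentage would then be multiplied by cache_tier_size to obtain the pool's target_max_bytes, as the loop at the end of the method does.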


@ -0,0 +1,164 @@
#
# Copyright (c) 2016-2018 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import exception
from i18n import _LI
# noinspection PyUnresolvedReferences
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def osd_pool_set_quota(ceph_api, pool_name, max_bytes=0, max_objects=0):
    """Set the quota for an OSD pool
    Setting max_bytes or max_objects to 0 disables that quota parameter
    :param pool_name: OSD pool name
    :param max_bytes: maximum bytes for the OSD pool
    :param max_objects: maximum objects for the OSD pool
    """
# Update quota if needed
prev_quota = osd_pool_get_quota(ceph_api, pool_name)
if prev_quota["max_bytes"] != max_bytes:
resp, b = ceph_api.osd_set_pool_quota(pool_name, 'max_bytes',
max_bytes, body='json')
if resp.ok:
            LOG.info(_LI("Set OSD pool quota: "
"pool_name={}, max_bytes={}").format(
pool_name, max_bytes))
else:
e = exception.CephPoolSetQuotaFailure(
pool=pool_name, name='max_bytes',
value=max_bytes, reason=resp.reason)
LOG.error(e)
raise e
if prev_quota["max_objects"] != max_objects:
resp, b = ceph_api.osd_set_pool_quota(pool_name, 'max_objects',
max_objects,
body='json')
if resp.ok:
            LOG.info(_LI("Set OSD pool quota: "
"pool_name={}, max_objects={}").format(
pool_name, max_objects))
else:
e = exception.CephPoolSetQuotaFailure(
pool=pool_name, name='max_objects',
value=max_objects, reason=resp.reason)
LOG.error(e)
raise e
def osd_pool_get_quota(ceph_api, pool_name):
resp, quota = ceph_api.osd_get_pool_quota(pool_name, body='json')
if not resp.ok:
e = exception.CephPoolGetQuotaFailure(
pool=pool_name, reason=resp.reason)
LOG.error(e)
raise e
else:
return {"max_objects": quota["output"]["quota_max_objects"],
"max_bytes": quota["output"]["quota_max_bytes"]}
def osd_pool_exists(ceph_api, pool_name):
response, body = ceph_api.osd_pool_get(
pool_name, "pg_num", body='json')
if response.ok:
return True
return False
def osd_pool_create(ceph_api, pool_name, pg_num, pgp_num):
if pool_name.endswith("-cache"):
        # ruleset 1 is the ruleset for the cache tier
        # name: cache_tier_ruleset
        ruleset = 1
    else:
        # ruleset 0 is the default ruleset if no crushmap is loaded, or
        # the ruleset for the backing tier if one is loaded
        # name: storage_tier_ruleset
        ruleset = 0
response, body = ceph_api.osd_pool_create(
pool_name, pg_num, pgp_num, pool_type="replicated",
ruleset=ruleset, body='json')
if response.ok:
LOG.info(_LI("Created OSD pool: "
"pool_name={}, pg_num={}, pgp_num={}, "
"pool_type=replicated, ruleset={}").format(
pool_name, pg_num, pgp_num, ruleset))
else:
e = exception.CephPoolCreateFailure(
name=pool_name, reason=response.reason)
LOG.error(e)
raise e
# Explicitly assign the ruleset to the pool on creation since it is
# ignored in the create call
response, body = ceph_api.osd_set_pool_param(
pool_name, "crush_ruleset", ruleset, body='json')
if response.ok:
        LOG.info(_LI("Assigned crush ruleset to OSD pool: "
"pool_name={}, ruleset={}").format(
pool_name, ruleset))
else:
e = exception.CephPoolRulesetFailure(
name=pool_name, reason=response.reason)
LOG.error(e)
ceph_api.osd_pool_delete(
pool_name, pool_name,
sure='--yes-i-really-really-mean-it',
body='json')
raise e
def osd_pool_delete(ceph_api, pool_name):
"""Delete an osd pool
:param pool_name: pool name
"""
response, body = ceph_api.osd_pool_delete(
pool_name, pool_name,
sure='--yes-i-really-really-mean-it',
body='json')
if response.ok:
LOG.info(_LI("Deleted OSD pool {}").format(pool_name))
else:
e = exception.CephPoolDeleteFailure(
name=pool_name, reason=response.reason)
LOG.warn(e)
raise e
def osd_set_pool_param(ceph_api, pool_name, param, value):
response, body = ceph_api.osd_set_pool_param(
pool_name, param, value,
force=None, body='json')
if response.ok:
LOG.info('OSD set pool param: '
'pool={}, name={}, value={}'.format(
pool_name, param, value))
else:
raise exception.CephPoolSetParamFailure(
pool_name=pool_name,
param=param,
value=str(value),
reason=response.reason)
return response, body
def osd_get_pool_param(ceph_api, pool_name, param):
response, body = ceph_api.osd_get_pool_param(
pool_name, param, body='json')
if response.ok:
LOG.debug('OSD get pool param: '
'pool={}, name={}, value={}'.format(
pool_name, param, body['output'][param]))
else:
raise exception.CephPoolGetParamFailure(
pool_name=pool_name,
param=param,
reason=response.reason)
return body['output'][param]
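The quota helper above only issues a set call when the value actually changes; a minimal standalone sketch of that pattern follows. `FakeCephApi` and its call bookkeeping are illustrative stand-ins for the ceph-rest-api client, not part of the module:

```python
class FakeCephApi:
    """Stand-in for the ceph-rest-api client used by the helpers above."""
    def __init__(self):
        self.quota = {"max_bytes": 0, "max_objects": 0}
        self.calls = []  # record every set call that was actually issued

    def osd_get_pool_quota(self, pool_name):
        return dict(self.quota)

    def osd_set_pool_quota(self, pool_name, name, value):
        self.calls.append((pool_name, name, value))
        self.quota[name] = value


def set_quota_if_changed(api, pool_name, max_bytes=0, max_objects=0):
    # mirror osd_pool_set_quota: read the current quota, write only deltas
    prev = api.osd_get_pool_quota(pool_name)
    if prev["max_bytes"] != max_bytes:
        api.osd_set_pool_quota(pool_name, "max_bytes", max_bytes)
    if prev["max_objects"] != max_objects:
        api.osd_set_pool_quota(pool_name, "max_objects", max_objects)
```

Skipping redundant writes keeps configuration churn on the monitors down when the caller re-applies the same quota on every cycle.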


@ -0,0 +1,107 @@
#
# Copyright (c) 2016-2018 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from i18n import _
# noinspection PyUnresolvedReferences
from sysinv.common import constants as sysinv_constants
CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL = \
sysinv_constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL
CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER = \
sysinv_constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER
CEPH_POOLS = sysinv_constants.BACKING_POOLS
CEPH_REPLICATION_FACTOR = sysinv_constants.CEPH_REPLICATION_FACTOR_DEFAULT
SERVICE_PARAM_CEPH_CACHE_HIT_SET_TYPE_BLOOM = \
sysinv_constants.SERVICE_PARAM_CEPH_CACHE_HIT_SET_TYPE_BLOOM
CACHE_TIERING_DEFAULTS = sysinv_constants.CACHE_TIERING_DEFAULTS
TARGET_MAX_BYTES = \
sysinv_constants.SERVICE_PARAM_CEPH_CACHE_TIER_TARGET_MAX_BYTES
# Cache tiering section shortener
CACHE_TIERING = \
sysinv_constants.SERVICE_PARAM_SECTION_CEPH_CACHE_TIER
CACHE_TIERING_DESIRED = \
sysinv_constants.SERVICE_PARAM_SECTION_CEPH_CACHE_TIER_DESIRED
CACHE_TIERING_APPLIED = \
sysinv_constants.SERVICE_PARAM_SECTION_CEPH_CACHE_TIER_APPLIED
CACHE_TIERING_SECTIONS = \
[CACHE_TIERING, CACHE_TIERING_DESIRED, CACHE_TIERING_APPLIED]
# Cache flush parameters
CACHE_FLUSH_OBJECTS_THRESHOLD = 1000
CACHE_FLUSH_MIN_WAIT_OBJ_COUNT_DECREASE_SEC = 1
CACHE_FLUSH_MAX_WAIT_OBJ_COUNT_DECREASE_SEC = 128
CACHE_TIERING_MIN_QUOTA = 5
FM_ALARM_REASON_MAX_SIZE = 256
# TODO this will later change based on parsed health
# clock skew is a VM malfunction; a mon or osd fault is an equipment malfunction
ALARM_CAUSE = 'equipment-malfunction'
ALARM_TYPE = 'equipment'
# Ceph health check interval (in seconds)
CEPH_HEALTH_CHECK_INTERVAL = 60
# Ceph health statuses
CEPH_HEALTH_OK = 'HEALTH_OK'
CEPH_HEALTH_WARN = 'HEALTH_WARN'
CEPH_HEALTH_ERR = 'HEALTH_ERR'
CEPH_HEALTH_DOWN = 'CEPH_DOWN'
# Statuses not reported by Ceph
CEPH_STATUS_CUSTOM = [CEPH_HEALTH_DOWN]
SEVERITY = {CEPH_HEALTH_DOWN: 'critical',
CEPH_HEALTH_ERR: 'critical',
CEPH_HEALTH_WARN: 'warning'}
SERVICE_AFFECTING = {CEPH_HEALTH_DOWN: True,
CEPH_HEALTH_ERR: True,
CEPH_HEALTH_WARN: False}
# TODO this will later change based on parsed health
ALARM_REASON_NO_OSD = _('no OSDs')
ALARM_REASON_OSDS_DOWN = _('OSDs are down')
ALARM_REASON_OSDS_OUT = _('OSDs are out')
ALARM_REASON_OSDS_DOWN_OUT = _('OSDs are down/out')
ALARM_REASON_PEER_HOST_DOWN = _('peer host down')
REPAIR_ACTION_MAJOR_CRITICAL_ALARM = _(
    'Ensure storage hosts from replication group are unlocked and available. '
    'Check if OSDs of each storage host are up and running. '
    'If problem persists, contact next level of support.')
REPAIR_ACTION = _('If problem persists, contact next level of support.')
SYSINV_CONDUCTOR_TOPIC = 'sysinv.conductor_manager'
CEPH_MANAGER_TOPIC = 'sysinv.ceph_manager'
SYSINV_CONFIG_FILE = '/etc/sysinv/sysinv.conf'
# Titanium Cloud version strings
TITANIUM_SERVER_VERSION_16_10 = '16.10'
CEPH_HEALTH_WARN_REQUIRE_JEWEL_OSDS_NOT_SET = (
"all OSDs are running jewel or later but the "
"'require_jewel_osds' osdmap flag is not set")
UPGRADE_COMPLETED = \
sysinv_constants.UPGRADE_COMPLETED
UPGRADE_ABORTING = \
sysinv_constants.UPGRADE_ABORTING
UPGRADE_ABORT_COMPLETING = \
sysinv_constants.UPGRADE_ABORT_COMPLETING
UPGRADE_ABORTING_ROLLBACK = \
sysinv_constants.UPGRADE_ABORTING_ROLLBACK
CEPH_FLAG_REQUIRE_JEWEL_OSDS = 'require_jewel_osds'
# Tiers
CEPH_CRUSH_TIER_SUFFIX = sysinv_constants.CEPH_CRUSH_TIER_SUFFIX
SB_TIER_TYPE_CEPH = sysinv_constants.SB_TIER_TYPE_CEPH
SB_TIER_SUPPORTED = sysinv_constants.SB_TIER_SUPPORTED
SB_TIER_DEFAULT_NAMES = sysinv_constants.SB_TIER_DEFAULT_NAMES
SB_TIER_CEPH_POOLS = sysinv_constants.SB_TIER_CEPH_POOLS


@ -0,0 +1,130 @@
#
# Copyright (c) 2016-2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# noinspection PyUnresolvedReferences
from i18n import _, _LW
# noinspection PyUnresolvedReferences
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class CephManagerException(Exception):
message = _("An unknown exception occurred.")
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if not message:
try:
message = self.message % kwargs
except TypeError:
LOG.warn(_LW('Exception in string format operation'))
for name, value in kwargs.iteritems():
LOG.error("%s: %s" % (name, value))
# at least get the core message out if something happened
message = self.message
super(CephManagerException, self).__init__(message)
class CephPoolSetQuotaFailure(CephManagerException):
    message = _("Error setting the OSD pool "
"quota %(name)s for %(pool)s to %(value)s") \
+ ": %(reason)s"
class CephPoolGetQuotaFailure(CephManagerException):
    message = _("Error getting the OSD pool quota for %(pool)s") \
+ ": %(reason)s"
class CephPoolCreateFailure(CephManagerException):
message = _("Creating OSD pool %(name)s failed: %(reason)s")
class CephPoolDeleteFailure(CephManagerException):
message = _("Deleting OSD pool %(name)s failed: %(reason)s")
class CephPoolRulesetFailure(CephManagerException):
message = _("Assigning crush ruleset to OSD "
"pool %(name)s failed: %(reason)s")
class CephPoolAddTierFailure(CephManagerException):
message = _("Failed to add OSD tier: "
"backing_pool=%(backing_pool)s, cache_pool=%(cache_pool)s, "
"response=%(response_status_code)s:%(response_reason)s, "
"status=%(status)s, output=%(output)s")
class CephPoolRemoveTierFailure(CephManagerException):
message = _("Failed to remove tier: "
"backing_pool=%(backing_pool)s, cache_pool=%(cache_pool)s, "
"response=%(response_status_code)s:%(response_reason)s, "
"status=%(status)s, output=%(output)s")
class CephCacheSetModeFailure(CephManagerException):
message = _("Failed to set OSD tier cache mode: "
"cache_pool=%(cache_pool)s, mode=%(mode)s, "
"response=%(response_status_code)s:%(response_reason)s, "
"status=%(status)s, output=%(output)s")
class CephPoolSetParamFailure(CephManagerException):
message = _("Cannot set Ceph OSD pool parameter: "
"pool_name=%(pool_name)s, param=%(param)s, value=%(value)s. "
"Reason: %(reason)s")
class CephPoolGetParamFailure(CephManagerException):
message = _("Cannot get Ceph OSD pool parameter: "
"pool_name=%(pool_name)s, param=%(param)s. "
"Reason: %(reason)s")
class CephCacheCreateOverlayFailure(CephManagerException):
message = _("Failed to create overlay: "
"backing_pool=%(backing_pool)s, cache_pool=%(cache_pool)s, "
"response=%(response_status_code)s:%(response_reason)s, "
"status=%(status)s, output=%(output)s")
class CephCacheDeleteOverlayFailure(CephManagerException):
message = _("Failed to delete overlay: "
"backing_pool=%(backing_pool)s, cache_pool=%(cache_pool)s, "
"response=%(response_status_code)s:%(response_reason)s, "
"status=%(status)s, output=%(output)s")
class CephCacheFlushFailure(CephManagerException):
message = _("Failed to flush cache pool: "
"cache_pool=%(cache_pool)s, "
"return_code=%(return_code)s, "
"cmd=%(cmd)s, output=%(output)s")
class CephCacheEnableFailure(CephManagerException):
message = _("Cannot enable Ceph cache tier. "
"Reason: cache tiering operation in progress.")
class CephCacheDisableFailure(CephManagerException):
message = _("Cannot disable Ceph cache tier. "
"Reason: cache tiering operation in progress.")
class CephSetKeyFailure(CephManagerException):
message = _("Error setting the Ceph flag "
"'%(flag)s' %(extra)s: "
"response=%(response_status_code)s:%(response_reason)s, "
"status=%(status)s, output=%(output)s")
class CephApiFailure(CephManagerException):
message = _("API failure: "
"call=%(call)s, reason=%(reason)s")


@ -0,0 +1,15 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import oslo_i18n
DOMAIN = 'ceph-manager'
_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)
_ = _translators.primary
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error


@ -0,0 +1,893 @@
#
# Copyright (c) 2013-2018 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import time
# noinspection PyUnresolvedReferences
from fm_api import fm_api
# noinspection PyUnresolvedReferences
from fm_api import constants as fm_constants
# noinspection PyUnresolvedReferences
from oslo_log import log as logging
from sysinv.conductor.cache_tiering_service_config import ServiceConfig
# noinspection PyProtectedMember
from i18n import _, _LI, _LW, _LE
import constants
import exception
LOG = logging.getLogger(__name__)
# When upgrading from 16.10 to 17.x Ceph moves from the Hammer
# release to the Jewel release. After all storage nodes are
# upgraded to 17.x the cluster stays in HEALTH_WARN until the
# administrator explicitly enables the require_jewel_osds flag,
# which signals Ceph that it can safely complete the transition
# from Hammer to Jewel.
#
# This class is needed only when upgrading from 16.10 to 17.x
# TODO: remove it after the first 17.x release
#
class HandleUpgradesMixin(object):
def __init__(self, service):
self.service = service
self.surpress_require_jewel_osds_warning = False
def setup(self, config):
self._set_upgrade(self.service.retry_get_software_upgrade_status())
def _set_upgrade(self, upgrade):
state = upgrade.get('state')
from_version = upgrade.get('from_version')
if (state
and state != constants.UPGRADE_COMPLETED
and from_version == constants.TITANIUM_SERVER_VERSION_16_10):
            LOG.info(_LI("Suppress require_jewel_osds health warning"))
self.surpress_require_jewel_osds_warning = True
def set_flag_require_jewel_osds(self):
try:
response, body = self.service.ceph_api.osd_set_key(
constants.CEPH_FLAG_REQUIRE_JEWEL_OSDS,
body='json')
LOG.info(_LI("Set require_jewel_osds flag"))
except IOError as e:
raise exception.CephApiFailure(
call="osd_set_key",
reason=e.message)
else:
if not response.ok:
raise exception.CephSetKeyFailure(
flag=constants.CEPH_FLAG_REQUIRE_JEWEL_OSDS,
extra=_("needed to complete upgrade to Jewel"),
response_status_code=response.status_code,
response_reason=response.reason,
status=body.get('status'),
output=body.get('output'))
def filter_health_status(self, health):
health = self.auto_heal(health)
# filter out require_jewel_osds warning
#
if not self.surpress_require_jewel_osds_warning:
return health
if health['health'] != constants.CEPH_HEALTH_WARN:
return health
if (constants.CEPH_HEALTH_WARN_REQUIRE_JEWEL_OSDS_NOT_SET
not in health['detail']):
return health
return self._remove_require_jewel_osds_warning(health)
def _remove_require_jewel_osds_warning(self, health):
reasons_list = []
for reason in health['detail'].split(';'):
reason = reason.strip()
if len(reason) == 0:
continue
if constants.CEPH_HEALTH_WARN_REQUIRE_JEWEL_OSDS_NOT_SET in reason:
continue
reasons_list.append(reason)
if len(reasons_list) == 0:
health = {
'health': constants.CEPH_HEALTH_OK,
'detail': ''}
else:
health['detail'] = '; '.join(reasons_list)
return health
def auto_heal(self, health):
if (health['health'] == constants.CEPH_HEALTH_WARN
and (constants.CEPH_HEALTH_WARN_REQUIRE_JEWEL_OSDS_NOT_SET
in health['detail'])):
try:
upgrade = self.service.get_software_upgrade_status()
except Exception as ex:
                LOG.warning(_LW(
                    "Getting software upgrade status failed "
                    "with: %s. Skip auto-heal attempt "
                    "(will retry on next ceph status poll).") % str(ex))
                return health
state = upgrade.get('state')
            # suppress require_jewel_osds in case an upgrade is
            # in progress but not completed or aborting
if (not self.surpress_require_jewel_osds_warning
and (upgrade.get('from_version')
== constants.TITANIUM_SERVER_VERSION_16_10)
and state not in [
None,
constants.UPGRADE_COMPLETED,
constants.UPGRADE_ABORTING,
constants.UPGRADE_ABORT_COMPLETING,
constants.UPGRADE_ABORTING_ROLLBACK]):
                LOG.info(_LI("Suppress require_jewel_osds health warning"))
self.surpress_require_jewel_osds_warning = True
            # set require_jewel_osds when no upgrade is in progress
            # or the upgrade has completed
            if (state in [None, constants.UPGRADE_COMPLETED]):
                LOG.warning(_LW(
                    "No upgrade in progress or upgrade completed "
                    "but the require_jewel_osds health warning is raised. "
                    "Setting the require_jewel_osds flag."))
self.set_flag_require_jewel_osds()
health = self._remove_require_jewel_osds_warning(health)
                LOG.info(_LI("Unsuppress require_jewel_osds health warning"))
                self.surpress_require_jewel_osds_warning = False
            # unsuppress require_jewel_osds in case the upgrade
            # is aborting
if (self.surpress_require_jewel_osds_warning
and state in [
constants.UPGRADE_ABORTING,
constants.UPGRADE_ABORT_COMPLETING,
constants.UPGRADE_ABORTING_ROLLBACK]):
                LOG.info(_LI("Unsuppress require_jewel_osds health warning"))
self.surpress_require_jewel_osds_warning = False
return health
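The warning-filtering step above reduces to a small string transformation. A standalone sketch (the function name `remove_jewel_warning` is illustrative, and the inlined health strings mirror the constants defined earlier in this module):

```python
# Minimal sketch of _remove_require_jewel_osds_warning: drop the
# require_jewel_osds reason from the semicolon-separated detail string
# and report HEALTH_OK when nothing else remains.
JEWEL_WARNING = ("all OSDs are running jewel or later but the "
                 "'require_jewel_osds' osdmap flag is not set")


def remove_jewel_warning(health):
    reasons = [r.strip() for r in health['detail'].split(';')
               if r.strip() and JEWEL_WARNING not in r]
    if not reasons:
        return {'health': 'HEALTH_OK', 'detail': ''}
    return {'health': health['health'], 'detail': '; '.join(reasons)}
```

Any other warning reasons in the detail string are preserved verbatim; only the upgrade-related one is suppressed.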
class Monitor(HandleUpgradesMixin):
def __init__(self, service):
self.service = service
        self.current_ceph_health = ""
        self.detailed_health_reason = ""
        self.cache_enabled = False
self.tiers_size = {}
self.known_object_pool_name = None
self.primary_tier_name = constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH] + constants.CEPH_CRUSH_TIER_SUFFIX
self.cluster_is_up = False
super(Monitor, self).__init__(service)
def setup(self, config):
self.set_caching_tier_config(config)
super(Monitor, self).setup(config)
def set_caching_tier_config(self, config):
conf = ServiceConfig().from_dict(
config.get(constants.CACHE_TIERING_APPLIED))
if conf:
self.cache_enabled = conf.cache_enabled
def monitor_check_cache_tier(self, enable_flag):
LOG.info(_LI("monitor_check_cache_tier: "
"enable_flag={}".format(enable_flag)))
self.cache_enabled = enable_flag
def run(self):
# Wait until Ceph cluster is up and we can get the fsid
while True:
self.ceph_get_fsid()
if self.service.entity_instance_id:
break
time.sleep(constants.CEPH_HEALTH_CHECK_INTERVAL)
# Start monitoring ceph status
while True:
self.ceph_poll_status()
self.ceph_poll_quotas()
time.sleep(constants.CEPH_HEALTH_CHECK_INTERVAL)
def ceph_get_fsid(self):
# Check whether an alarm has already been raised
self._get_current_alarms()
if self.current_health_alarm:
LOG.info(_LI("Current alarm: %s") %
str(self.current_health_alarm.__dict__))
fsid = self._get_fsid()
if not fsid:
# Raise alarm - it will not have an entity_instance_id
self._report_fault({'health': constants.CEPH_HEALTH_DOWN,
'detail': 'Ceph cluster is down.'},
fm_constants.FM_ALARM_ID_STORAGE_CEPH)
else:
# Clear alarm with no entity_instance_id
self._clear_fault(fm_constants.FM_ALARM_ID_STORAGE_CEPH)
self.service.entity_instance_id = 'cluster=%s' % fsid
def ceph_poll_status(self):
# get previous data every time in case:
# * daemon restarted
# * alarm was cleared manually but stored as raised in daemon
self._get_current_alarms()
if self.current_health_alarm:
LOG.info(_LI("Current alarm: %s") %
str(self.current_health_alarm.__dict__))
# get ceph health
health = self._get_health()
LOG.info(_LI("Current Ceph health: "
"%(health)s detail: %(detail)s") % health)
health = self.filter_health_status(health)
if health['health'] != constants.CEPH_HEALTH_OK:
self._report_fault(health, fm_constants.FM_ALARM_ID_STORAGE_CEPH)
self._report_alarm_osds_health()
else:
self._clear_fault(fm_constants.FM_ALARM_ID_STORAGE_CEPH)
self.clear_all_major_critical()
def filter_health_status(self, health):
return super(Monitor, self).filter_health_status(health)
def ceph_poll_quotas(self):
self._get_current_alarms()
if self.current_quota_alarms:
LOG.info(_LI("Current quota alarms %s") %
self.current_quota_alarms)
        # Get the current size of each tier
previous_tiers_size = self.tiers_size
self.tiers_size = self._get_tiers_size()
# Make sure any removed tiers have the alarms cleared
for t in (set(previous_tiers_size)-set(self.tiers_size)):
self._clear_fault(fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE,
"{0}.tier={1}".format(
self.service.entity_instance_id,
t[:-len(constants.CEPH_CRUSH_TIER_SUFFIX)]))
# Check the quotas on each tier
for tier in self.tiers_size:
# TODO(rchurch): For R6 remove the tier from the default crushmap
# and remove this check. No longer supporting this tier in R5
if tier == 'cache-tier':
continue
# Extract the tier name from the crush equivalent
tier_name = tier[:-len(constants.CEPH_CRUSH_TIER_SUFFIX)]
if self.tiers_size[tier] == 0:
LOG.info(_LI("'%s' tier cluster size not yet available")
% tier_name)
continue
pools_quota_sum = 0
if tier == self.primary_tier_name:
for pool in constants.CEPH_POOLS:
if (pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL or
pool['pool_name'] ==
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER):
object_pool_name = self._get_object_pool_name()
if object_pool_name is None:
LOG.error("Rados gateway object data pool does "
"not exist.")
else:
pools_quota_sum += \
self._get_osd_pool_quota(object_pool_name)
else:
pools_quota_sum += self._get_osd_pool_quota(
pool['pool_name'])
else:
for pool in constants.SB_TIER_CEPH_POOLS:
pool_name = "{0}-{1}".format(pool['pool_name'], tier_name)
pools_quota_sum += self._get_osd_pool_quota(pool_name)
            # Currently, there is only one pool on the additional tier(s),
            # therefore allow a quota of 0
if (pools_quota_sum != self.tiers_size[tier] and
pools_quota_sum != 0):
self._report_fault(
{'tier_name': tier_name,
'tier_eid': "{0}.tier={1}".format(
self.service.entity_instance_id,
tier_name)},
fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE)
else:
self._clear_fault(
fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE,
"{0}.tier={1}".format(self.service.entity_instance_id,
tier_name))
# CEPH HELPERS
def _get_fsid(self):
try:
response, fsid = self.service.ceph_api.fsid(
body='text', timeout=30)
except IOError as e:
LOG.warning(_LW("ceph_api.fsid failed: %s") % str(e.message))
self.cluster_is_up = False
return None
if not response.ok:
LOG.warning(_LW("Get fsid failed: %s") % response.reason)
self.cluster_is_up = False
return None
self.cluster_is_up = True
return fsid.strip()
def _get_health(self):
try:
# we use text since it has all info
response, body = self.service.ceph_api.health(
body='text', timeout=30)
except IOError as e:
LOG.warning(_LW("ceph_api.health failed: %s") % str(e.message))
self.cluster_is_up = False
return {'health': constants.CEPH_HEALTH_DOWN,
'detail': 'Ceph cluster is down.'}
if not response.ok:
LOG.warning(_LW("CEPH health check failed: %s") % response.reason)
health_info = [constants.CEPH_HEALTH_DOWN, response.reason]
self.cluster_is_up = False
else:
health_info = body.split(' ', 1)
self.cluster_is_up = True
health = health_info[0]
if len(health_info) > 1:
detail = health_info[1]
else:
detail = health_info[0]
return {'health': health.strip(),
'detail': detail.strip()}
def _get_object_pool_name(self):
if self.known_object_pool_name is None:
response, body = self.service.ceph_api.osd_pool_get(
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL,
"pg_num",
body='json')
if response.ok:
self.known_object_pool_name = \
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_JEWEL
return self.known_object_pool_name
response, body = self.service.ceph_api.osd_pool_get(
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER,
"pg_num",
body='json')
if response.ok:
self.known_object_pool_name = \
constants.CEPH_POOL_OBJECT_GATEWAY_NAME_HAMMER
return self.known_object_pool_name
return self.known_object_pool_name
def _get_osd_pool_quota(self, pool_name):
try:
resp, quota = self.service.ceph_api.osd_get_pool_quota(
pool_name, body='json')
except IOError:
return 0
if not resp.ok:
LOG.error(_LE("Getting the quota for "
"%(name)s pool failed:%(reason)s)") %
{"name": pool_name, "reason": resp.reason})
return 0
else:
            try:
                quota_gib = int(quota["output"]["quota_max_bytes"])/(1024**3)
                return quota_gib
            except (KeyError, TypeError, ValueError):
                return 0
    # We have two root nodes, 'cache-tier' and 'storage-tier'.
    # To calculate the space used by the pools we must only
    # look at 'storage-tier'.
    # This function determines whether a given node is under a
    # given tree.
    def host_is_in_root(self, search_tree, node, root_name):
        if node['type'] == 'root':
            return node['name'] == root_name
        return self.host_is_in_root(search_tree,
                                    search_tree[node['parent']],
                                    root_name)
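The recursion above walks parent links until it reaches a root node. A hand-built tree illustrates the lookup (the ids and names below are made up, and a `parent` field is assumed on every non-root node, as the method requires):

```python
# Each node is keyed by id and records its parent, as host_is_in_root expects.
sample_tree = {
    -1: {'id': -1, 'name': 'storage-tier', 'type': 'root', 'parent': None},
    -2: {'id': -2, 'name': 'group-0', 'type': 'chassis', 'parent': -1},
    -3: {'id': -3, 'name': 'storage-0', 'type': 'host', 'parent': -2},
}


def is_in_root(tree, node, root_name):
    # Walk up the parent chain until a root node is reached.
    if node['type'] == 'root':
        return node['name'] == root_name
    return is_in_root(tree, tree[node['parent']], root_name)
```

So a host under `group-0` is reported as belonging to `storage-tier`, and not to any other root.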
# The information received from ceph is not properly
# structured for efficient parsing and searching, so
# it must be processed and transformed into a more
# structured form.
#
# Input received from ceph is an array of nodes with the
# following structure:
# [{'id':<node_id>, 'children':<array_of_children_ids>, ....},
# ...]
#
    # We process this array and transform it into a dictionary
    # (for efficient access). The transformed "search tree" is a
    # dictionary with the following structure:
    # {<node_id>: {'children': <array_of_children_ids>, ...}, ...}
def _get_tiers_size(self):
try:
resp, body = self.service.ceph_api.osd_df(
body='json',
output_method='tree')
        except IOError:
            return {}
if not resp.ok:
LOG.error(_LE("Getting the cluster usage "
"information failed: %(reason)s - "
"%(body)s") % {"reason": resp.reason,
"body": body})
return {}
# A node is a crushmap element: root, chassis, host, osd. Create a
# dictionary for the nodes with the key as the id used for efficient
# searching through nodes.
#
# For example: storage-0's node has one child node => OSD 0
# {
# "id": -4,
# "name": "storage-0",
# "type": "host",
# "type_id": 1,
# "reweight": -1.000000,
# "kb": 51354096,
# "kb_used": 1510348,
# "kb_avail": 49843748,
# "utilization": 2.941047,
# "var": 1.480470,
# "pgs": 0,
# "children": [
# 0
# ]
# },
search_tree = {}
for node in body['output']['nodes']:
search_tree[node['id']] = node
# Extract the tiers as we will return a dict for the size of each tier
tiers = {k: v for k, v in search_tree.items() if v['type'] == 'root'}
        # For each tier, traverse the hierarchy root->chassis->host and
        # sum the chassis sizes (smallest host in each chassis) to
        # determine the overall size of the tier
tier_sizes = {}
for tier in tiers.values():
tier_size = 0
for chassis_id in tier['children']:
chassis_size = 0
chassis = search_tree[chassis_id]
for host_id in chassis['children']:
host = search_tree[host_id]
if (chassis_size == 0 or
chassis_size > host['kb']):
chassis_size = host['kb']
tier_size += chassis_size/(1024 ** 2)
tier_sizes[tier['name']] = tier_size
return tier_sizes
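The sizing pass can be condensed into a standalone sketch: per chassis, the usable size is that of its smallest member host (replicas must fit on every peer), converted from KB to GB and summed per tier. The node layout and numbers below are illustrative only:

```python
def tier_sizes_from(search_tree):
    # Compute {tier_name: size_in_GB} from an id-keyed node dictionary.
    sizes = {}
    for node in search_tree.values():
        if node['type'] != 'root':
            continue
        total = 0
        for chassis_id in node['children']:
            host_kb = [search_tree[h]['kb']
                       for h in search_tree[chassis_id]['children']]
            if host_kb:
                total += min(host_kb) // (1024 ** 2)  # KB -> GB
        sizes[node['name']] = total
    return sizes
```

With a 4 GB host and a 6 GB host in one chassis, the tier reports 4 GB: the smaller peer bounds the replicated capacity.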
# ALARM HELPERS
@staticmethod
def _check_storage_group(osd_tree, group_id,
hosts, osds, fn_report_alarm):
reasons = set()
degraded_hosts = set()
severity = fm_constants.FM_ALARM_SEVERITY_CRITICAL
for host_id in hosts:
if len(osds[host_id]) == 0:
reasons.add(constants.ALARM_REASON_NO_OSD)
degraded_hosts.add(host_id)
else:
for osd_id in osds[host_id]:
if osd_tree[osd_id]['status'] == 'up':
if osd_tree[osd_id]['reweight'] == 0.0:
reasons.add(constants.ALARM_REASON_OSDS_OUT)
degraded_hosts.add(host_id)
else:
severity = fm_constants.FM_ALARM_SEVERITY_MAJOR
elif osd_tree[osd_id]['status'] == 'down':
reasons.add(constants.ALARM_REASON_OSDS_DOWN)
degraded_hosts.add(host_id)
if constants.ALARM_REASON_OSDS_OUT in reasons \
and constants.ALARM_REASON_OSDS_DOWN in reasons:
reasons.add(constants.ALARM_REASON_OSDS_DOWN_OUT)
reasons.remove(constants.ALARM_REASON_OSDS_OUT)
if constants.ALARM_REASON_OSDS_DOWN in reasons \
and constants.ALARM_REASON_OSDS_DOWN_OUT in reasons:
reasons.remove(constants.ALARM_REASON_OSDS_DOWN)
reason = "/".join(list(reasons))
if severity == fm_constants.FM_ALARM_SEVERITY_CRITICAL:
reason = "{} {}: {}".format(
fm_constants.ALARM_CRITICAL_REPLICATION,
osd_tree[group_id]['name'],
reason)
elif severity == fm_constants.FM_ALARM_SEVERITY_MAJOR:
reason = "{} {}: {}".format(
fm_constants.ALARM_MAJOR_REPLICATION,
osd_tree[group_id]['name'],
reason)
if len(degraded_hosts) == 0:
if len(hosts) < 2:
fn_report_alarm(
osd_tree[group_id]['name'],
"{} {}: {}".format(
fm_constants.ALARM_MAJOR_REPLICATION,
osd_tree[group_id]['name'],
constants.ALARM_REASON_PEER_HOST_DOWN),
fm_constants.FM_ALARM_SEVERITY_MAJOR)
elif len(degraded_hosts) == 1:
fn_report_alarm(
"{}.host={}".format(
osd_tree[group_id]['name'],
osd_tree[list(degraded_hosts)[0]]['name']),
reason, severity)
else:
fn_report_alarm(
osd_tree[group_id]['name'],
reason, severity)
def _check_storage_tier(self, osd_tree, tier_name, fn_report_alarm):
for tier_id in osd_tree:
if osd_tree[tier_id]['type'] != 'root':
continue
if osd_tree[tier_id]['name'] != tier_name:
continue
for group_id in osd_tree[tier_id]['children']:
if osd_tree[group_id]['type'] != 'chassis':
continue
if not osd_tree[group_id]['name'].startswith('group-'):
continue
hosts = []
osds = {}
for host_id in osd_tree[group_id]['children']:
if osd_tree[host_id]['type'] != 'host':
continue
hosts.append(host_id)
osds[host_id] = []
for osd_id in osd_tree[host_id]['children']:
if osd_tree[osd_id]['type'] == 'osd':
osds[host_id].append(osd_id)
self._check_storage_group(osd_tree, group_id, hosts,
osds, fn_report_alarm)
break
def _current_health_alarm_equals(self, reason, severity):
if not self.current_health_alarm:
return False
if getattr(self.current_health_alarm, 'severity', None) != severity:
return False
if getattr(self.current_health_alarm, 'reason_text', None) != reason:
return False
return True
def _report_alarm_osds_health(self):
response, osd_tree = self.service.ceph_api.osd_tree(body='json')
if not response.ok:
LOG.error(_LE("Failed to retrieve Ceph OSD tree: "
"status_code: %(status_code)s, reason: %(reason)s") %
{"status_code": response.status_code,
"reason": response.reason})
return
osd_tree = dict([(n['id'], n) for n in osd_tree['output']['nodes']])
alarms = []
self._check_storage_tier(osd_tree, "storage-tier",
lambda *args: alarms.append(args))
if self.cache_enabled:
self._check_storage_tier(osd_tree, "cache-tier",
lambda *args: alarms.append(args))
old_alarms = {}
for alarm_id in [
fm_constants.FM_ALARM_ID_STORAGE_CEPH_MAJOR,
fm_constants.FM_ALARM_ID_STORAGE_CEPH_CRITICAL]:
alarm_list = self.service.fm_api.get_faults_by_id(alarm_id)
if not alarm_list:
continue
for alarm in alarm_list:
if alarm.entity_instance_id not in old_alarms:
old_alarms[alarm.entity_instance_id] = []
old_alarms[alarm.entity_instance_id].append(
(alarm.alarm_id, alarm.reason_text))
for peer_group, reason, severity in alarms:
if self._current_health_alarm_equals(reason, severity):
continue
alarm_critical_major = fm_constants.FM_ALARM_ID_STORAGE_CEPH_MAJOR
if severity == fm_constants.FM_ALARM_SEVERITY_CRITICAL:
alarm_critical_major = (
fm_constants.FM_ALARM_ID_STORAGE_CEPH_CRITICAL)
entity_instance_id = (
self.service.entity_instance_id + '.peergroup=' + peer_group)
alarm_already_exists = False
if entity_instance_id in old_alarms:
for alarm_id, old_reason in old_alarms[entity_instance_id]:
if (reason == old_reason and
alarm_id == alarm_critical_major):
# if the alarm is exactly the same, we don't need
# to recreate it
old_alarms[entity_instance_id].remove(
(alarm_id, old_reason))
alarm_already_exists = True
elif (alarm_id == alarm_critical_major):
# if we change just the reason, then we just remove the
# alarm from the list so we don't remove it at the
# end of the function
old_alarms[entity_instance_id].remove(
(alarm_id, old_reason))
if (len(old_alarms[entity_instance_id]) == 0):
del old_alarms[entity_instance_id]
            # in case the alarm is exactly the same, we skip the alarm set
            if alarm_already_exists:
                continue
major_repair_action = constants.REPAIR_ACTION_MAJOR_CRITICAL_ALARM
fault = fm_api.Fault(
alarm_id=alarm_critical_major,
alarm_type=fm_constants.FM_ALARM_TYPE_4,
alarm_state=fm_constants.FM_ALARM_STATE_SET,
entity_type_id=fm_constants.FM_ENTITY_TYPE_CLUSTER,
entity_instance_id=entity_instance_id,
severity=severity,
reason_text=reason,
probable_cause=fm_constants.ALARM_PROBABLE_CAUSE_15,
proposed_repair_action=major_repair_action,
service_affecting=constants.SERVICE_AFFECTING['HEALTH_WARN'])
alarm_uuid = self.service.fm_api.set_fault(fault)
if alarm_uuid:
LOG.info(_LI(
"Created storage alarm %(alarm_uuid)s - "
"severity: %(severity)s, reason: %(reason)s, "
"service_affecting: %(service_affecting)s") % {
"alarm_uuid": str(alarm_uuid),
"severity": str(severity),
"reason": reason,
"service_affecting": str(
constants.SERVICE_AFFECTING['HEALTH_WARN'])})
else:
LOG.error(_LE(
"Failed to create storage alarm - "
"severity: %(severity)s, reason: %(reason)s, "
"service_affecting: %(service_affecting)s") % {
"severity": str(severity),
"reason": reason,
"service_affecting": str(
constants.SERVICE_AFFECTING['HEALTH_WARN'])})
for entity_instance_id in old_alarms:
for alarm_id, old_reason in old_alarms[entity_instance_id]:
self.service.fm_api.clear_fault(alarm_id, entity_instance_id)
@staticmethod
def _parse_reason(health):
""" Parse reason strings received from Ceph """
if health['health'] in constants.CEPH_STATUS_CUSTOM:
# Don't parse reason messages that we added
return "Storage Alarm Condition: %(health)s. %(detail)s" % health
reasons_lst = health['detail'].split(';')
parsed_reasons_text = ""
# Check if PGs have issues - we can't safely store the entire message
# as it tends to be long
for reason in reasons_lst:
if "pgs" in reason:
parsed_reasons_text += "PGs are degraded/stuck or undersized"
break
# Extract recovery status
parsed_reasons = [r.strip() for r in reasons_lst if 'recovery' in r]
if parsed_reasons:
parsed_reasons_text += ";" + ";".join(parsed_reasons)
# We need to keep the most important parts of the messages when storing
# them to fm alarms, therefore text between [] brackets is truncated if
# max size is reached.
# Add brackets, if needed
if len(parsed_reasons_text):
lbracket = " ["
rbracket = "]"
else:
lbracket = ""
rbracket = ""
msg = {"head": "Storage Alarm Condition: ",
"tail": ". Please check 'ceph -s' for more details."}
max_size = constants.FM_ALARM_REASON_MAX_SIZE - \
len(msg["head"]) - len(msg["tail"])
return (
msg['head'] +
(health['health'] + lbracket + parsed_reasons_text)[:max_size-1] +
rbracket + msg['tail'])
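The truncation applied above keeps the fixed head and tail intact and clips only the bracketed reason text. A standalone sketch, where `MAX_SIZE` stands in for `constants.FM_ALARM_REASON_MAX_SIZE` (whose real value is not shown in this chunk):

```python
MAX_SIZE = 256  # assumed limit, for illustration only


def build_reason(health, parsed_text):
    # Fit head + health + [parsed_text] + tail into MAX_SIZE characters,
    # truncating only the variable middle portion.
    head = "Storage Alarm Condition: "
    tail = ". Please check 'ceph -s' for more details."
    lbracket, rbracket = (" [", "]") if parsed_text else ("", "")
    budget = MAX_SIZE - len(head) - len(tail)
    return (head +
            (health + lbracket + parsed_text)[:budget - 1] +
            rbracket + tail)
```

However long the parsed reason grows, the closing bracket and the repair hint at the tail always survive the cut.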
def _report_fault(self, health, alarm_id):
if alarm_id == fm_constants.FM_ALARM_ID_STORAGE_CEPH:
new_severity = constants.SEVERITY[health['health']]
new_reason_text = self._parse_reason(health)
new_service_affecting = \
constants.SERVICE_AFFECTING[health['health']]
# Raise or update alarm if necessary
if ((not self.current_health_alarm) or
(self.current_health_alarm.__dict__['severity'] !=
new_severity) or
(self.current_health_alarm.__dict__['reason_text'] !=
new_reason_text) or
(self.current_health_alarm.__dict__['service_affecting'] !=
str(new_service_affecting))):
fault = fm_api.Fault(
alarm_id=fm_constants.FM_ALARM_ID_STORAGE_CEPH,
alarm_type=fm_constants.FM_ALARM_TYPE_4,
alarm_state=fm_constants.FM_ALARM_STATE_SET,
entity_type_id=fm_constants.FM_ENTITY_TYPE_CLUSTER,
entity_instance_id=self.service.entity_instance_id,
severity=new_severity,
reason_text=new_reason_text,
probable_cause=fm_constants.ALARM_PROBABLE_CAUSE_15,
proposed_repair_action=constants.REPAIR_ACTION,
service_affecting=new_service_affecting)
alarm_uuid = self.service.fm_api.set_fault(fault)
if alarm_uuid:
LOG.info(_LI(
"Created storage alarm %(alarm_uuid)s - "
"severity: %(severity)s, reason: %(reason)s, "
"service_affecting: %(service_affecting)s") % {
"alarm_uuid": alarm_uuid,
"severity": new_severity,
"reason": new_reason_text,
"service_affecting": new_service_affecting})
else:
LOG.error(_LE(
"Failed to create storage alarm - "
"severity: %(severity)s, reason: %(reason)s "
"service_affecting: %(service_affecting)s") % {
"severity": new_severity,
"reason": new_reason_text,
"service_affecting": new_service_affecting})
# Log detailed reason for later analysis
if (self.current_ceph_health != health['health'] or
self.detailed_health_reason != health['detail']):
LOG.info(_LI("Ceph status changed: %(health)s "
"detailed reason: %(detail)s") % health)
self.current_ceph_health = health['health']
self.detailed_health_reason = health['detail']
        elif (alarm_id == fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE and
              health['tier_eid'] not in self.current_quota_alarms):
quota_reason_text = ("Quota/Space mismatch for the %s tier. The "
"sum of Ceph pool quotas does not match the "
"tier size." % health['tier_name'])
fault = fm_api.Fault(
alarm_id=fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE,
alarm_state=fm_constants.FM_ALARM_STATE_SET,
entity_type_id=fm_constants.FM_ENTITY_TYPE_CLUSTER,
entity_instance_id=health['tier_eid'],
severity=fm_constants.FM_ALARM_SEVERITY_MINOR,
reason_text=quota_reason_text,
alarm_type=fm_constants.FM_ALARM_TYPE_7,
probable_cause=fm_constants.ALARM_PROBABLE_CAUSE_75,
proposed_repair_action=(
"Update ceph storage pool quotas to use all available "
"cluster space for the %s tier." % health['tier_name']),
service_affecting=False)
alarm_uuid = self.service.fm_api.set_fault(fault)
if alarm_uuid:
LOG.info(_LI(
"Created storage quota storage alarm %(alarm_uuid)s. "
"Reason: %(reason)s") % {
"alarm_uuid": alarm_uuid, "reason": quota_reason_text})
else:
LOG.error(_LE("Failed to create quota "
"storage alarm. Reason: %s") % quota_reason_text)
def _clear_fault(self, alarm_id, entity_instance_id=None):
# Only clear alarm if there is one already raised
if (alarm_id == fm_constants.FM_ALARM_ID_STORAGE_CEPH and
self.current_health_alarm):
LOG.info(_LI("Clearing health alarm"))
self.service.fm_api.clear_fault(
fm_constants.FM_ALARM_ID_STORAGE_CEPH,
self.service.entity_instance_id)
elif (alarm_id == fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE and
entity_instance_id in self.current_quota_alarms):
LOG.info(_LI("Clearing quota alarm with entity_instance_id %s")
% entity_instance_id)
self.service.fm_api.clear_fault(
fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE,
entity_instance_id)
    def clear_critical_alarm(self, group_name):
        alarm_list = self.service.fm_api.get_faults_by_id(
            fm_constants.FM_ALARM_ID_STORAGE_CEPH_CRITICAL)
        if alarm_list:
            for alarm in alarm_list:
                group_id = alarm.entity_instance_id.find("group-")
                group_instance_name = (
                    "group-" +
                    alarm.entity_instance_id[group_id + 6])
                if group_name == group_instance_name:
                    self.service.fm_api.clear_fault(
                        fm_constants.FM_ALARM_ID_STORAGE_CEPH_CRITICAL,
                        alarm.entity_instance_id)
    def clear_all_major_critical(self, group_name=None):
        # clear major and critical alarms
        for alarm_id in [fm_constants.FM_ALARM_ID_STORAGE_CEPH_MAJOR,
                         fm_constants.FM_ALARM_ID_STORAGE_CEPH_CRITICAL]:
            alarm_list = self.service.fm_api.get_faults_by_id(alarm_id)
            if not alarm_list:
                continue
            for alarm in alarm_list:
                if group_name is not None:
                    group_id = alarm.entity_instance_id.find("group-")
                    group_instance_name = (
                        "group-" +
                        alarm.entity_instance_id[group_id + 6])
                    if group_name == group_instance_name:
                        self.service.fm_api.clear_fault(
                            alarm_id, alarm.entity_instance_id)
                else:
                    self.service.fm_api.clear_fault(
                        alarm_id, alarm.entity_instance_id)
def _get_current_alarms(self):
""" Retrieve currently raised alarm """
self.current_health_alarm = self.service.fm_api.get_fault(
fm_constants.FM_ALARM_ID_STORAGE_CEPH,
self.service.entity_instance_id)
quota_faults = self.service.fm_api.get_faults_by_id(
fm_constants.FM_ALARM_ID_STORAGE_CEPH_FREE_SPACE)
if quota_faults:
self.current_quota_alarms = [f.entity_instance_id
for f in quota_faults]
else:
self.current_quota_alarms = []


@ -0,0 +1,249 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2016-2018 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# https://chrigl.de/posts/2014/08/27/oslo-messaging-example.html
# http://docs.openstack.org/developer/oslo.messaging/server.html
import sys
# noinspection PyUnresolvedReferences
import eventlet
# noinspection PyUnresolvedReferences
import oslo_messaging as messaging
# noinspection PyUnresolvedReferences
from fm_api import fm_api
# noinspection PyUnresolvedReferences
from oslo_config import cfg
# noinspection PyUnresolvedReferences
from oslo_log import log as logging
# noinspection PyUnresolvedReferences
from oslo_service import service
# noinspection PyUnresolvedReferences
from oslo_service.periodic_task import PeriodicTasks
# noinspection PyUnresolvedReferences
from oslo_service import loopingcall
from sysinv.conductor.cache_tiering_service_config import ServiceConfig
# noinspection PyUnresolvedReferences
from cephclient import wrapper
from monitor import Monitor
from cache_tiering import CacheTiering
import exception
import constants
from i18n import _LI, _LW
from retrying import retry
eventlet.monkey_patch(all=True)
CONF = cfg.CONF
CONF.register_opts([
cfg.StrOpt('sysinv_api_bind_ip',
default='0.0.0.0',
help='IP for the Ceph Manager server to bind to')])
CONF.logging_default_format_string = (
'%(asctime)s.%(msecs)03d %(process)d '
'%(levelname)s %(name)s [-] %(message)s')
logging.register_options(CONF)
logging.setup(CONF, __name__)
LOG = logging.getLogger(__name__)
CONF.rpc_backend = 'rabbit'
class RpcEndpoint(PeriodicTasks):
def __init__(self, service=None):
self.service = service
def cache_tiering_enable_cache(self, _, new_config, applied_config):
LOG.info(_LI("Enabling cache"))
try:
self.service.cache_tiering.enable_cache(
new_config, applied_config)
except exception.CephManagerException as e:
self.service.sysinv_conductor.call(
{}, 'cache_tiering_enable_cache_complete',
success=False, exception=str(e.message),
new_config=new_config, applied_config=applied_config)
def cache_tiering_disable_cache(self, _, new_config, applied_config):
LOG.info(_LI("Disabling cache"))
try:
self.service.cache_tiering.disable_cache(
new_config, applied_config)
except exception.CephManagerException as e:
self.service.sysinv_conductor.call(
{}, 'cache_tiering_disable_cache_complete',
success=False, exception=str(e.message),
new_config=new_config, applied_config=applied_config)
def cache_tiering_operation_in_progress(self, _):
is_locked = self.service.cache_tiering.is_locked()
LOG.info(_LI("Cache tiering operation "
"is in progress: %s") % str(is_locked).lower())
return is_locked
def get_primary_tier_size(self, _):
"""Get the ceph size for the primary tier.
returns: an int for the size (in GB) of the tier
"""
tiers_size = self.service.monitor.tiers_size
primary_tier_size = tiers_size.get(
self.service.monitor.primary_tier_name, 0)
LOG.debug(_LI("Ceph cluster primary tier size: %s GB") %
str(primary_tier_size))
return primary_tier_size
def get_tiers_size(self, _):
"""Get the ceph cluster tier sizes.
returns: a dict of sizes (in GB) by tier name
"""
tiers_size = self.service.monitor.tiers_size
LOG.debug(_LI("Ceph cluster tiers (size in GB): %s") %
str(tiers_size))
return tiers_size
def is_cluster_up(self, _):
"""Report if the last health check was successful.
This is an independent view of the cluster accessibility that can be
used by the sysinv conductor to gate ceph API calls which would timeout
and potentially block other operations.
        This view is only updated at the rate the monitor polls for the
        cluster uuid or runs a health check (CEPH_HEALTH_CHECK_INTERVAL).
        returns: boolean True if last health check was successful else False
"""
return self.service.monitor.cluster_is_up
# This class is needed only when upgrading from 16.10 to 17.x
# TODO: remove it after 1st 17.x release
#
class SysinvConductorUpgradeApi(object):
def __init__(self):
self.sysinv_conductor = None
super(SysinvConductorUpgradeApi, self).__init__()
def get_software_upgrade_status(self):
LOG.info(_LI("Getting software upgrade status from sysinv"))
cctxt = self.sysinv_conductor.prepare(timeout=2)
upgrade = cctxt.call({}, 'get_software_upgrade_status')
LOG.info(_LI("Software upgrade status: %s") % str(upgrade))
return upgrade
@retry(wait_fixed=1000,
retry_on_exception=lambda exception:
LOG.warn(_LW(
"Getting software upgrade status failed "
"with: %s. Retrying... ") % str(exception)) or True)
def retry_get_software_upgrade_status(self):
return self.get_software_upgrade_status()
class Service(SysinvConductorUpgradeApi, service.Service):
def __init__(self, conf):
super(Service, self).__init__()
self.conf = conf
self.rpc_server = None
self.sysinv_conductor = None
self.ceph_api = None
self.entity_instance_id = ''
self.fm_api = fm_api.FaultAPIs()
self.monitor = Monitor(self)
self.cache_tiering = CacheTiering(self)
self.config = None
self.config_desired = None
self.config_applied = None
def start(self):
super(Service, self).start()
transport = messaging.get_transport(self.conf)
self.sysinv_conductor = messaging.RPCClient(
transport,
messaging.Target(
topic=constants.SYSINV_CONDUCTOR_TOPIC))
self.ceph_api = wrapper.CephWrapper(
endpoint='http://localhost:5001/api/v0.1/')
# Get initial config from sysinv and send it to
# services that need it before starting them
config = self.get_caching_tier_config()
self.monitor.setup(config)
self.rpc_server = messaging.get_rpc_server(
transport,
messaging.Target(topic=constants.CEPH_MANAGER_TOPIC,
server=self.conf.sysinv_api_bind_ip),
[RpcEndpoint(self)],
executor='eventlet')
self.rpc_server.start()
self.cache_tiering.set_initial_config(config)
eventlet.spawn_n(self.monitor.run)
periodic = loopingcall.FixedIntervalLoopingCall(
self.update_ceph_target_max_bytes)
periodic.start(interval=300)
def get_caching_tier_config(self):
LOG.info("Getting cache tiering configuration from sysinv")
while True:
# Get initial configuration from sysinv,
# retry until sysinv starts
try:
cctxt = self.sysinv_conductor.prepare(timeout=2)
config = cctxt.call({}, 'cache_tiering_get_config')
for section in config:
if section == constants.CACHE_TIERING:
self.config = ServiceConfig().from_dict(
config[section])
elif section == constants.CACHE_TIERING_DESIRED:
self.config_desired = ServiceConfig().from_dict(
config[section])
elif section == constants.CACHE_TIERING_APPLIED:
self.config_applied = ServiceConfig().from_dict(
config[section])
LOG.info("Cache tiering configs: {}".format(config))
return config
except Exception as ex:
# In production we should retry on every error until connection
# is reestablished.
LOG.warn("Getting cache tiering configuration failed "
"with: {}. Retrying... ".format(str(ex)))
def stop(self):
try:
self.rpc_server.stop()
self.rpc_server.wait()
except Exception:
pass
super(Service, self).stop()
def update_ceph_target_max_bytes(self):
try:
self.cache_tiering.update_cache_target_max_bytes()
except Exception as ex:
LOG.exception("Updating Ceph target max bytes failed "
"with: {} retrying on next cycle.".format(str(ex)))
def run_service():
CONF(sys.argv[1:])
logging.setup(CONF, "ceph-manager")
launcher = service.launch(CONF, Service(CONF), workers=1)
launcher.wait()
if __name__ == "__main__":
run_service()
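The `@retry` decorator on `retry_get_software_upgrade_status` above retries forever at a fixed interval, using the `retry_on_exception` callback only to log each failure (the trailing `or True` makes the callback always request another attempt). A rough stdlib-only equivalent of that behavior might look like the sketch below; `retry_call` and the added `max_attempts` cap are hypothetical, not part of the `retrying` library's API:

```python
import time


def retry_call(fn, wait_fixed=1.0, max_attempts=5, log=None):
    """Call fn until it succeeds, logging each failure and sleeping
    wait_fixed seconds between attempts; re-raise on the final failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:
            if log is not None:
                log("call failed with: %s. Retrying..." % exc)
            if attempt == max_attempts:
                raise
            time.sleep(wait_fixed)
```

With `wait_fixed=1000` milliseconds and no attempt cap, this matches the behavior the service relies on: keep polling sysinv until it answers.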

@@ -0,0 +1,309 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import unittest
import mock
import subprocess
import math
from ..cache_tiering import CacheTiering
from ..cache_tiering import LOG as CT_LOG
from ..constants import CACHE_FLUSH_OBJECTS_THRESHOLD
from ..constants import CACHE_FLUSH_MIN_WAIT_OBJ_COUNT_DECREASE_SEC as MIN_WAIT
from ..constants import CACHE_FLUSH_MAX_WAIT_OBJ_COUNT_DECREASE_SEC as MAX_WAIT
from ..exception import CephCacheFlushFailure
class TestCacheFlush(unittest.TestCase):
def setUp(self):
self.service = mock.Mock()
self.ceph_api = mock.Mock()
self.service.ceph_api = self.ceph_api
self.cache_tiering = CacheTiering(self.service)
@mock.patch('subprocess.check_call')
def test_set_param_fail(self, mock_proc_call):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=False, status_code=500, reason='denied'),
{})
self.cache_tiering.cache_flush({'pool_name': 'test'})
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('subprocess.check_call')
def test_df_fail(self, mock_proc_call):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.return_value = (
mock.Mock(ok=False, status_code=500, reason='denied'),
{})
self.cache_tiering.cache_flush({'pool_name': 'test'})
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('subprocess.check_call')
def test_rados_evict_fail_raises(self, mock_proc_call):
mock_proc_call.side_effect = subprocess.CalledProcessError(1, ['cmd'])
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=False, status_code=500, reason='denied'),
{})
self.assertRaises(CephCacheFlushFailure,
self.cache_tiering.cache_flush,
{'pool_name': 'test'})
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('subprocess.check_call')
def test_df_missing_pool(self, mock_proc_call):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'rbd',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects': 0}}]},
'status': 'OK'})
with mock.patch.object(CT_LOG, 'warn') as mock_lw:
self.cache_tiering.cache_flush({'pool_name': 'test'})
self.ceph_api.df.assert_called_once_with(body='json')
for c in mock_lw.call_args_list:
if 'Missing pool free space' in c[0][0]:
break
else:
self.fail('expected log warning')
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('subprocess.check_call')
def test_df_objects_empty(self, mock_proc_call):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects': 0}}]},
'status': 'OK'})
self.cache_tiering.cache_flush({'pool_name': 'test'})
self.ceph_api.df.assert_called_once_with(body='json')
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('time.sleep')
@mock.patch('subprocess.check_call')
def test_df_objects_above_threshold(self, mock_proc_call, mock_time_sleep):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.side_effect = [
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects': CACHE_FLUSH_OBJECTS_THRESHOLD}}]},
'status': 'OK'}),
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD - 1}}]},
'status': 'OK'})]
self.cache_tiering.cache_flush({'pool_name': 'test'})
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
self.ceph_api.df.assert_called_with(body='json')
mock_time_sleep.assert_called_once_with(MIN_WAIT)
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('time.sleep')
@mock.patch('subprocess.check_call')
def test_df_objects_interval_increase(self, mock_proc_call,
mock_time_sleep):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.side_effect = [
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD + 1}}]},
'status': 'OK'}),
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD + 1}}]},
'status': 'OK'}),
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD + 1}}]},
'status': 'OK'}),
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD - 1}}]},
'status': 'OK'})]
self.cache_tiering.cache_flush({'pool_name': 'test'})
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
self.ceph_api.df.assert_called_with(body='json')
self.assertEqual([c[0][0] for c in mock_time_sleep.call_args_list],
[MIN_WAIT,
MIN_WAIT * 2,
MIN_WAIT * 4])
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('time.sleep')
@mock.patch('subprocess.check_call')
    def test_df_objects_always_over_threshold(self, mock_proc_call,
                                              mock_time_sleep):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD + 1}}]},
'status': 'OK'})
# noinspection PyTypeChecker
mock_time_sleep.side_effect = \
[None]*int(math.ceil(math.log(float(MAX_WAIT)/MIN_WAIT, 2)) + 1) \
+ [Exception('too many sleeps')]
self.cache_tiering.cache_flush({'pool_name': 'test'})
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
self.ceph_api.df.assert_called_with(body='json')
expected_sleep = []
interval = MIN_WAIT
while interval <= MAX_WAIT:
expected_sleep.append(interval)
interval *= 2
self.assertEqual([c[0][0] for c in mock_time_sleep.call_args_list],
expected_sleep)
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
@mock.patch('time.sleep')
@mock.patch('subprocess.check_call')
def test_df_objects_increase(self, mock_proc_call, mock_time_sleep):
self.ceph_api.osd_set_pool_param = mock.Mock()
self.ceph_api.osd_set_pool_param.return_value = (
mock.Mock(ok=True, status_code=200, reason='OK'),
{})
self.ceph_api.df = mock.Mock()
self.ceph_api.df.side_effect = [
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD + 1}}]},
'status': 'OK'}),
(mock.Mock(ok=True, status_code=200, reason='OK'),
{'output': {
'pools': [
{'id': 0,
'name': 'test-cache',
'stats': {'bytes_used': 0,
'kb_used': 0,
'max_avail': 9588428800,
'objects':
CACHE_FLUSH_OBJECTS_THRESHOLD + 2}}]},
'status': 'OK'})]
with mock.patch.object(CT_LOG, 'warn') as mock_lw:
self.cache_tiering.cache_flush({'pool_name': 'test'})
for c in mock_lw.call_args_list:
if 'Unexpected increase' in c[0][0]:
break
else:
self.fail('expected log warning')
self.ceph_api.df.assert_called_with(body='json')
mock_time_sleep.assert_called_once_with(MIN_WAIT)
self.ceph_api.osd_set_pool_param.assert_called_once_with(
'test-cache', 'target_max_objects', 1, force=None, body='json')
mock_proc_call.assert_called_with(
['/usr/bin/rados', '-p', 'test-cache', 'cache-flush-evict-all'])
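The sleep sequences asserted in these tests imply an exponential backoff inside `cache_flush()`: the wait starts at CACHE_FLUSH_MIN_WAIT_OBJ_COUNT_DECREASE_SEC and doubles while the cached object count stays above the threshold, stopping once it would exceed the MAX value. A minimal sketch of that interval schedule, mirroring the `expected_sleep` loop in the test above (helper name hypothetical):

```python
def flush_wait_intervals(min_wait, max_wait):
    """Yield the doubling sleep intervals the cache-flush tests expect:
    start at min_wait and double until the interval exceeds max_wait."""
    interval = min_wait
    while interval <= max_wait:
        yield interval
        interval *= 2
```

For example, with a minimum of 1s and a maximum of 4s the schedule is 1, 2, 4 — exactly the `[MIN_WAIT, MIN_WAIT * 2, MIN_WAIT * 4]` sequence asserted in `test_df_objects_interval_increase`.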

@@ -0,0 +1,19 @@
#!/usr/bin/env python
#
# Copyright (c) 2013-2014, 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import setuptools
setuptools.setup(
name='ceph_manager',
version='1.0.0',
description='CEPH manager',
license='Apache-2.0',
packages=['ceph_manager'],
entry_points={
}
)

@@ -0,0 +1,10 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
mock
flake8
eventlet
pytest
oslo.log
oslo.i18n

@@ -0,0 +1,29 @@
# adapted from glance tox.ini
[tox]
minversion = 1.6
envlist = py27,pep8
skipsdist = True
# tox does not work if the path to the workdir is too long, so move it to /tmp
toxworkdir = /tmp/{env:USER}_ceph_manager_tox
[testenv]
setenv = VIRTUAL_ENV={envdir}
usedevelop = True
install_command = pip install --no-use-wheel -U --force-reinstall {opts} {packages}
deps = -r{toxinidir}/test-requirements.txt
commands = py.test {posargs}
whitelist_externals = bash
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
[testenv:py27]
basepython = python2.7
setenv =
PYTHONPATH={toxinidir}/../../../../sysinv/recipes-common/sysinv/sysinv:{toxinidir}/../../../../config/recipes-common/tsconfig/tsconfig
[testenv:pep8]
commands =
flake8 {posargs}
[flake8]
exclude = .venv,.git,.tox,dist,doc,etc,*glance/locale*,*lib/python*,*egg,build

@@ -0,0 +1,11 @@
/var/log/ceph-manager.log {
nodateext
size 10M
start 1
rotate 10
missingok
notifempty
compress
delaycompress
copytruncate
}

@@ -0,0 +1,17 @@
[Unit]
Description=Handle Ceph API calls and provide status updates via alarms
After=ceph.target
[Service]
Type=forking
Restart=no
KillMode=process
RemainAfterExit=yes
ExecStart=/etc/rc.d/init.d/ceph-manager start
ExecStop=/etc/rc.d/init.d/ceph-manager stop
ExecReload=/etc/rc.d/init.d/ceph-manager reload
PIDFile=/var/run/ceph/ceph-manager.pid
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,17 @@
#!/usr/bin/env python
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import sys
try:
from ceph_manager.server import run_service
except EnvironmentError as e:
print >> sys.stderr, "Error importing ceph_manager: ", str(e)
sys.exit(1)
run_service()

@@ -0,0 +1,103 @@
#!/bin/sh
#
# Copyright (c) 2013-2014, 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
### BEGIN INIT INFO
# Provides: ceph-manager
# Required-Start: $ceph
# Required-Stop: $ceph
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Daemon for polling ceph status
# Description: Daemon for polling ceph status
### END INIT INFO
DESC="ceph-manager"
DAEMON="/usr/bin/ceph-manager"
RUNDIR="/var/run/ceph"
PIDFILE=$RUNDIR/$DESC.pid
CONFIGFILE="/etc/sysinv/sysinv.conf"
LOGFILE="/var/log/ceph-manager.log"
start()
{
if [ -e $PIDFILE ]; then
        PIDDIR=/proc/$(cat $PIDFILE)
        if [ -d ${PIDDIR} ]; then
echo "$DESC already running."
exit 0
else
echo "Removing stale PID file $PIDFILE"
rm -f $PIDFILE
fi
fi
echo -n "Starting $DESC..."
mkdir -p $RUNDIR
start-stop-daemon --start --quiet \
--pidfile ${PIDFILE} --exec ${DAEMON} \
--make-pidfile --background \
-- --log-file=$LOGFILE --config-file=$CONFIGFILE
if [ $? -eq 0 ]; then
echo "done."
else
echo "failed."
exit 1
fi
}
stop()
{
echo -n "Stopping $DESC..."
start-stop-daemon --stop --quiet --pidfile $PIDFILE --retry 60
if [ $? -eq 0 ]; then
echo "done."
else
echo "failed."
fi
rm -f $PIDFILE
}
status()
{
pid=`cat $PIDFILE 2>/dev/null`
if [ -n "$pid" ]; then
        if ps -p $pid > /dev/null 2>&1 ; then
echo "$DESC is running"
exit 0
else
echo "$DESC is not running but has pid file"
exit 1
fi
fi
echo "$DESC is not running"
exit 3
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart|force-reload|reload)
stop
start
;;
status)
status
;;
*)
echo "Usage: $0 {start|stop|force-reload|restart|reload|status}"
exit 1
;;
esac
exit 0

@@ -0,0 +1,5 @@
SRC_DIR="$CGCS_BASE/git/ceph"
TIS_BASE_SRCREV=fc689aa5ded5941b8ae86374c7124c7d91782973
TIS_PATCH_VER=GITREVCOUNT
BUILD_IS_BIG=40
BUILD_IS_SLOW=26

ceph/centos/ceph.spec Symbolic link
@@ -0,0 +1 @@
../../../git/ceph/ceph.spec

@@ -0,0 +1,326 @@
#!/usr/bin/python
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import ast
import os
import os.path
import re
import subprocess
import sys
#########
# Utils #
#########
def command(arguments, **kwargs):
""" Execute e command and capture stdout, stderr & return code """
process = subprocess.Popen(
arguments,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**kwargs)
out, err = process.communicate()
return out, err, process.returncode
def get_input(arg, valid_keys):
"""Convert the input to a dict and perform basic validation"""
json_string = arg.replace("\\n", "\n")
try:
input_dict = ast.literal_eval(json_string)
if not all(k in input_dict for k in valid_keys):
return None
except Exception:
return None
return input_dict
def get_partition_uuid(dev):
output, _, _ = command(['blkid', dev])
try:
return re.search('PARTUUID=\"(.+?)\"', output).group(1)
except AttributeError:
return None
def device_path_to_device_node(device_path):
try:
output, _, _ = command(["udevadm", "settle", "-E", device_path])
out, err, retcode = command(["readlink", "-f", device_path])
out = out.rstrip()
except Exception as e:
return None
return out
###########################################
# Manage Journal Disk Partitioning Scheme #
###########################################
DISK_BY_PARTUUID = "/dev/disk/by-partuuid/"
JOURNAL_UUID = '45b0969e-9b03-4f30-b4c6-b4b80ceff106'  # journal partition type GUID
def is_partitioning_correct(disk_path, partition_sizes):
""" Validate the existence and size of journal partitions"""
# Obtain the device node from the device path.
disk_node = device_path_to_device_node(disk_path)
# Check that partition table format is GPT
output, _, _ = command(["udevadm", "settle", "-E", disk_node])
output, _, _ = command(["parted", "-s", disk_node, "print"])
if not re.search('Partition Table: gpt', output):
print "Format of disk node %s is not GPT, zapping disk" % disk_node
return False
# Check each partition size
partition_index = 1
for size in partition_sizes:
# Check that each partition size matches the one in input
partition_node = disk_node + str(partition_index)
output, _, _ = command(["udevadm", "settle", "-E", partition_node])
cmd = ["parted", "-s", partition_node, "unit", "MiB", "print"]
output, _, _ = command(cmd)
regex = ("^Disk " + str(partition_node) + ":\\s*" +
str(size) + "[\\.0]*MiB")
if not re.search(regex, output, re.MULTILINE):
print ("Journal partition %(node)s size is not %(size)s, "
"zapping disk" % {"node": partition_node, "size": size})
return False
partition_index += 1
output, _, _ = command(["udevadm", "settle", "-t", "10"])
return True
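is_partitioning_correct() above matches the output of `parted -s <node> unit MiB print` against a regex that tolerates trailing zero decimals, since parted reports sizes like `1024.00MiB`. Extracted as a standalone check for illustration (helper name hypothetical; `re.escape` on the node is an added safety the original omits):

```python
import re


def size_matches(parted_output, partition_node, size_mib):
    """True if parted's 'unit MiB print' output reports partition_node
    with exactly size_mib MiB (allowing a '.00'-style decimal tail)."""
    regex = ("^Disk " + re.escape(str(partition_node)) + ":\\s*" +
             str(size_mib) + "[\\.0]*MiB")
    return re.search(regex, parted_output, re.MULTILINE) is not None
```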
def create_partitions(disk_path, partition_sizes):
""" Recreate partitions """
# Obtain the device node from the device path.
disk_node = device_path_to_device_node(disk_path)
# Issue: After creating a new partition table on a device, Udev does not
# always remove old symlinks (i.e. to previous partitions on that device).
# Also, even if links are erased before zapping the disk, some of them will
# be recreated even though there is no partition to back them!
# Therefore, we have to remove the links AFTER we erase the partition table
# Issue: DISK_BY_PARTUUID directory is not present at all if there are no
# GPT partitions on the storage node so nothing to remove in this case
links = []
if os.path.isdir(DISK_BY_PARTUUID):
        links = [os.path.join(DISK_BY_PARTUUID, l)
                 for l in os.listdir(DISK_BY_PARTUUID)
                 if os.path.islink(os.path.join(DISK_BY_PARTUUID, l))]
# Erase all partitions on current node by creating a new GPT table
_, err, ret = command(["parted", "-s", disk_node, "mktable", "gpt"])
if ret:
print ("Error erasing partition table of %(node)s\n"
"Return code: %(ret)s reason: %(reason)s" %
{"node": disk_node, "ret": ret, "reason": err})
exit(1)
# Erase old symlinks
for l in links:
if disk_node in os.path.realpath(l):
os.remove(l)
# Create partitions in order
used_space_mib = 1 # leave 1 MB at the beginning of the disk
num = 1
for size in partition_sizes:
cmd = ['parted', '-s', disk_node, 'unit', 'mib',
'mkpart', 'primary',
str(used_space_mib), str(used_space_mib + size)]
_, err, ret = command(cmd)
parms = {"disk_node": disk_node,
"start": used_space_mib,
"end": used_space_mib + size,
"reason": err}
print ("Created partition from start=%(start)s MiB to end=%(end)s MiB"
" on %(disk_node)s" % parms)
if ret:
print ("Failed to create partition with "
"start=%(start)s, end=%(end)s "
"on %(disk_node)s reason: %(reason)s" % parms)
exit(1)
# Set partition type to ceph journal
# noncritical operation, it makes 'ceph-disk list' output correct info
cmd = ['sgdisk',
'--change-name={num}:ceph journal'.format(num=num),
'--typecode={num}:{uuid}'.format(
num=num,
uuid=JOURNAL_UUID,
),
disk_node]
_, err, ret = command(cmd)
if ret:
print ("WARNINIG: Failed to set partition name and typecode")
used_space_mib += size
num += 1
###########################
# Manage Journal Location #
###########################
OSD_PATH = "/var/lib/ceph/osd/"
def mount_data_partition(data_path, osdid):
""" Mount an OSD data partition and return the mounted path """
# Obtain the device node from the device path.
data_node = device_path_to_device_node(data_path)
mount_path = OSD_PATH + "ceph-" + str(osdid)
output, _, _ = command(['mount'])
regex = "^" + data_node + ".*" + mount_path
if not re.search(regex, output, re.MULTILINE):
cmd = ['mount', '-t', 'xfs', data_node, mount_path]
_, _, ret = command(cmd)
params = {"node": data_node, "path": mount_path}
if ret:
print "Failed to mount %(node)s to %(path), aborting" % params
exit(1)
else:
print "Mounted %(node)s to %(path)s" % params
return mount_path
def is_location_correct(path, journal_path, osdid):
""" Check if location points to the correct device """
# Obtain the device node from the device path.
journal_node = device_path_to_device_node(journal_path)
cur_node = os.path.realpath(path + "/journal")
if cur_node == journal_node:
return True
else:
return False
def fix_location(mount_point, journal_path, osdid):
""" Move the journal to the new partition """
# Obtain the device node from the device path.
journal_node = device_path_to_device_node(journal_path)
# Fix symlink
path = mount_point + "/journal" # 'journal' symlink path used by ceph-osd
journal_uuid = get_partition_uuid(journal_node)
new_target = DISK_BY_PARTUUID + journal_uuid
params = {"path": path, "target": new_target}
try:
if os.path.lexists(path):
os.unlink(path) # delete the old symlink
os.symlink(new_target, path)
print "Symlink created: %(path)s -> %(target)s" % params
except:
print "Failed to create symlink: %(path)s -> %(target)s" % params
exit(1)
# Fix journal_uuid
path = mount_point + "/journal_uuid"
try:
with open(path, 'w') as f:
f.write(journal_uuid)
except Exception as ex:
# The operation is noncritical, it only makes 'ceph-disk list'
# display complete output. We log and continue.
params = {"path": path, "uuid": journal_uuid}
print "WARNING: Failed to set uuid of %(path)s to %(uuid)s" % params
    # Clean the journal partition: even after erasing the partition table,
    # an old journal present here would otherwise be reused. Journals are
    # always bigger than 100MB.
command(['dd', 'if=/dev/zero', 'of=%s' % journal_node,
'bs=1M', 'count=100'])
# Format the journal
cmd = ['/usr/bin/ceph-osd', '-i', str(osdid),
'--pid-file', '/var/run/ceph/osd.%s.pid' % osdid,
'-c', '/etc/ceph/ceph.conf',
'--cluster', 'ceph',
'--mkjournal']
out, err, ret = command(cmd)
params = {"journal_node": journal_node,
"osdid": osdid,
"ret": ret,
"reason": err}
if not ret:
print ("Prepared new journal partition: %(journal_node)s "
"for osd id: %(osdid)s") % params
else:
print ("Error initializing journal node: "
"%(journal_node)s for osd id: %(osdid)s "
"ceph-osd return code: %(ret)s reason: %(reason)s" % params)
########
# Main #
########
def main(argv):
# parse and validate arguments
err = False
partitions = None
location = None
if len(argv) != 2:
err = True
elif argv[0] == "partitions":
valid_keys = ['disk_path', 'journals']
partitions = get_input(argv[1], valid_keys)
if not partitions:
err = True
elif not isinstance(partitions['journals'], list):
err = True
elif argv[0] == "location":
valid_keys = ['data_path', 'journal_path', 'osdid']
location = get_input(argv[1], valid_keys)
if not location:
err = True
elif not isinstance(location['osdid'], int):
err = True
else:
err = True
if err:
print "Command intended for internal use only"
exit(-1)
if partitions:
# Recreate partitions only if the existing ones don't match input
if not is_partitioning_correct(partitions['disk_path'],
partitions['journals']):
create_partitions(partitions['disk_path'], partitions['journals'])
else:
print ("Partition table for %s is correct, "
"no need to repartition" %
device_path_to_device_node(partitions['disk_path']))
elif location:
        # We need the data partition mounted and we can leave it mounted
mount_point = mount_data_partition(location['data_path'],
location['osdid'])
        # Update journal location only if the link points to another partition
if not is_location_correct(mount_point,
location['journal_path'],
location['osdid']):
print ("Fixing journal location for "
"OSD id: %(id)s" % {"node": location['data_path'],
"id": location['osdid']})
fix_location(mount_point,
location['journal_path'],
location['osdid'])
else:
print ("Journal location for %s is correct,"
"no need to change it" % location['data_path'])
if __name__ == "__main__":
    main(sys.argv[1:])
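main() above is driven by a Python dict literal passed as a single command-line argument; get_input() accepts it only if it parses via `ast.literal_eval` and contains every expected key. A standalone sketch of that validation (hypothetical `parse_arg` name, same logic):

```python
import ast


def parse_arg(arg, valid_keys):
    """Parse a Python-literal dict from a CLI argument and require all
    expected keys, returning None on any parse or validation error."""
    json_string = arg.replace("\\n", "\n")
    try:
        input_dict = ast.literal_eval(json_string)
        if not all(k in input_dict for k in valid_keys):
            return None
    except Exception:
        return None
    return input_dict
```

Anything malformed (bad syntax, a non-dict literal, or a missing key) yields None, which main() treats as a usage error.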

mwa-perian.map Normal file
@@ -0,0 +1,3 @@
cgcs/middleware/ceph/recipes-common/ceph-manager|ceph-manager
cgcs/openstack/recipes-base|openstack
cgcs/recipes-extended/ceph|ceph

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,5 @@
TAR_NAME="distributedcloud-client"
SRC_DIR="$CGCS_BASE/git/distributedcloud-client"
TIS_BASE_SRCREV=078b0eed0d9e9de5d5b0f5d82b3f13e7bcfb5d10
TIS_PATCH_VER=1
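The build data file above drives the StarlingX source-rpm tooling: the variables are sourced by the build scripts, and TIS_PATCH_VER feeds the %{tis_patch_ver} macro used in the spec's Release: tag. A minimal sketch of how such a file is consumed (the temp path and the CGCS_BASE value are assumptions for illustration, not the actual build script):

```shell
#!/bin/sh
# Sketch: source a build_srpm.data-style file and echo the values the
# spec file consumes. CGCS_BASE is normally exported by the build env.
CGCS_BASE=/opt/cgcs   # hypothetical value

cat > /tmp/build_srpm.data <<'EOF'
TAR_NAME="distributedcloud-client"
SRC_DIR="$CGCS_BASE/git/distributedcloud-client"
TIS_BASE_SRCREV=078b0eed0d9e9de5d5b0f5d82b3f13e7bcfb5d10
TIS_PATCH_VER=1
EOF

. /tmp/build_srpm.data
echo "tarball name:   ${TAR_NAME}"
echo "source dir:     ${SRC_DIR}"
echo "release suffix: .${TIS_PATCH_VER}"   # ends up in Release: 1%{?_tis_dist}.%{tis_patch_ver}
```

Note that $CGCS_BASE inside the single-quoted heredoc is expanded when the file is sourced, not when it is written, so the build environment controls where SRC_DIR resolves.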


@ -0,0 +1,81 @@
%global pypi_name distributedcloud-client
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%if 0%{?fedora}
%global with_python3 1
%{!?python3_shortver: %global python3_shortver %(%{__python3} -c 'import sys; print(str(sys.version_info.major) + "." + str(sys.version_info.minor))')}
%endif
Name: %{pypi_name}
Version: 1.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: Client Library for Distributed Cloud Services
License: ASL 2.0
URL: unknown
Source0: %{pypi_name}-%{version}.tar.gz
BuildArch: noarch
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-jsonschema >= 2.0.0
BuildRequires: python-keystonemiddleware
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-config
BuildRequires: python-oslo-context
BuildRequires: python-oslo-db
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-rootwrap
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python-pbr >= 1.8
BuildRequires: python-routes >= 1.12.3
BuildRequires: python-sphinx
BuildRequires: python-sphinxcontrib-httpdomain
BuildRequires: pyOpenSSL
BuildRequires: systemd
# Required to compile translation files
BuildRequires: python-babel
%description
Client library for Distributed Cloud, built on the Distributed Cloud API. It
provides a command-line tool (dcmanager). Distributed Cloud provides
configuration and management of distributed clouds.
# DC Manager
%package dcmanagerclient
Summary: DC Manager Client
%description dcmanagerclient
Distributed Cloud Manager Client
%prep
%autosetup -n %{pypi_name}-%{version}
# Remove the requirements file so that pbr hooks don't add it
# to distutils requires_dist config
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install --skip-build --root %{buildroot}
%files dcmanagerclient
%license LICENSE
%{python2_sitelib}/dcmanagerclient*
%{python2_sitelib}/distributedcloud_client-*.egg-info
%exclude %{python2_sitelib}/dcmanagerclient/tests
%{_bindir}/dcmanager*


@ -0,0 +1,6 @@
TAR_NAME="distributedcloud"
SRC_DIR="$CGCS_BASE/git/distributedcloud"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=ea7caa8567120384a0b6a7abbb567fcc7d22188b
TIS_PATCH_VER=7


@ -0,0 +1,168 @@
%global pypi_name distributedcloud
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%if 0%{?fedora}
%global with_python3 1
%{!?python3_shortver: %global python3_shortver %(%{__python3} -c 'import sys; print(str(sys.version_info.major) + "." + str(sys.version_info.minor))')}
%endif
Name: %{pypi_name}
Version: 1.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: Distributed Cloud Services
License: ASL 2.0
URL: unknown
Source0: %{pypi_name}-%{version}.tar.gz
Source1: dcmanager-api.service
Source2: dcmanager-manager.service
Source3: dcorch-api.service
Source4: dcorch-engine.service
Source5: dcorch-nova-api-proxy.service
Source6: dcorch-sysinv-api-proxy.service
Source7: dcorch-snmp.service
Source8: dcorch-cinder-api-proxy.service
Source9: dcorch-neutron-api-proxy.service
BuildArch: noarch
BuildRequires: python-crypto
BuildRequires: python-cryptography
BuildRequires: python2-devel
BuildRequires: python-eventlet
BuildRequires: python-setuptools
BuildRequires: python-jsonschema >= 2.0.0
BuildRequires: python-keyring
BuildRequires: python-keystonemiddleware
BuildRequires: python-keystoneauth1 >= 3.1.0
BuildRequires: python-netaddr
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-config
BuildRequires: python-oslo-context
BuildRequires: python-oslo-db
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-rootwrap
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python-pbr >= 1.8
BuildRequires: python-pecan >= 1.0.0
BuildRequires: python-routes >= 1.12.3
BuildRequires: python-sphinx
BuildRequires: python-sphinxcontrib-httpdomain
BuildRequires: pyOpenSSL
BuildRequires: systemd
# Required to compile translation files
BuildRequires: python-babel
%description
Distributed Cloud provides configuration and management of distributed clouds.
# DC Manager
%package dcmanager
Summary: DC Manager
%description dcmanager
Distributed Cloud Manager
%package dcorch
Summary: DC Orchestrator
# TODO(John): should we add Requires lines?
%description dcorch
Distributed Cloud Orchestrator
%prep
%autosetup -n %{pypi_name}-%{version}
# Remove the requirements file so that pbr hooks don't add it
# to distutils requires_dist config
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate sample configs; add the current directory to PYTHONPATH so
# oslo-config-generator doesn't skip this project's entry points.
PYTHONPATH=. oslo-config-generator --config-file=./dcmanager/config-generator.conf
PYTHONPATH=. oslo-config-generator --config-file=./dcorch/config-generator.conf
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root %{buildroot} \
--single-version-externally-managed
mkdir -p %{buildroot}/var/log/dcmanager
mkdir -p %{buildroot}/var/cache/dcmanager
mkdir -p %{buildroot}/var/run/dcmanager
mkdir -p %{buildroot}/etc/dcmanager/
# install systemd unit files
install -p -D -m 644 %{SOURCE1} %{buildroot}%{_unitdir}/dcmanager-api.service
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}/dcmanager-manager.service
# install default config files
cd %{_builddir}/%{pypi_name}-%{version} && oslo-config-generator --config-file ./dcmanager/config-generator.conf --output-file %{_builddir}/%{pypi_name}-%{version}/etc/dcmanager/dcmanager.conf.sample
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{version}/etc/dcmanager/dcmanager.conf.sample %{buildroot}%{_sysconfdir}/dcmanager/dcmanager.conf
mkdir -p %{buildroot}/var/log/dcorch
mkdir -p %{buildroot}/var/cache/dcorch
mkdir -p %{buildroot}/var/run/dcorch
mkdir -p %{buildroot}/etc/dcorch/
# install systemd unit files
install -p -D -m 644 %{SOURCE3} %{buildroot}%{_unitdir}/dcorch-api.service
install -p -D -m 644 %{SOURCE4} %{buildroot}%{_unitdir}/dcorch-engine.service
install -p -D -m 644 %{SOURCE5} %{buildroot}%{_unitdir}/dcorch-nova-api-proxy.service
install -p -D -m 644 %{SOURCE6} %{buildroot}%{_unitdir}/dcorch-sysinv-api-proxy.service
install -p -D -m 644 %{SOURCE7} %{buildroot}%{_unitdir}/dcorch-snmp.service
install -p -D -m 644 %{SOURCE8} %{buildroot}%{_unitdir}/dcorch-cinder-api-proxy.service
install -p -D -m 644 %{SOURCE9} %{buildroot}%{_unitdir}/dcorch-neutron-api-proxy.service
# install default config files
cd %{_builddir}/%{pypi_name}-%{version} && oslo-config-generator --config-file ./dcorch/config-generator.conf --output-file %{_builddir}/%{pypi_name}-%{version}/etc/dcorch/dcorch.conf.sample
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{version}/etc/dcorch/dcorch.conf.sample %{buildroot}%{_sysconfdir}/dcorch/dcorch.conf
%files dcmanager
%license LICENSE
%{python2_sitelib}/dcmanager*
%{python2_sitelib}/distributedcloud-*.egg-info
%exclude %{python2_sitelib}/dcmanager/tests
%{_bindir}/dcmanager-api
%{_unitdir}/dcmanager-api.service
%{_bindir}/dcmanager-manager
%{_unitdir}/dcmanager-manager.service
%{_bindir}/dcmanager-manage
%dir %attr(0755,root,root) %{_localstatedir}/log/dcmanager
%dir %attr(0755,root,root) %{_localstatedir}/run/dcmanager
%dir %attr(0755,root,root) %{_localstatedir}/cache/dcmanager
%dir %attr(0755,root,root) %{_sysconfdir}/dcmanager
%config(noreplace) %attr(-, root, root) %{_sysconfdir}/dcmanager/dcmanager.conf
%files dcorch
%license LICENSE
%{python2_sitelib}/dcorch*
%{python2_sitelib}/distributedcloud-*.egg-info
%exclude %{python2_sitelib}/dcorch/tests
%{_bindir}/dcorch-api
%{_unitdir}/dcorch-api.service
%{_bindir}/dcorch-engine
%{_unitdir}/dcorch-engine.service
%{_bindir}/dcorch-api-proxy
%{_unitdir}/dcorch-nova-api-proxy.service
%{_unitdir}/dcorch-sysinv-api-proxy.service
%{_unitdir}/dcorch-cinder-api-proxy.service
%{_unitdir}/dcorch-neutron-api-proxy.service
%{_bindir}/dcorch-manage
%{_bindir}/dcorch-snmp
%{_unitdir}/dcorch-snmp.service
%dir %attr(0755,root,root) %{_localstatedir}/log/dcorch
%dir %attr(0755,root,root) %{_localstatedir}/run/dcorch
%dir %attr(0755,root,root) %{_localstatedir}/cache/dcorch
%dir %attr(0755,root,root) %{_sysconfdir}/dcorch
%config(noreplace) %attr(-, root, root) %{_sysconfdir}/dcorch/dcorch.conf
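The %build and %install sections above generate sample config files with oslo-config-generator; each --config-file argument points at a small ini file listing the option namespaces to scan. A typical shape for such a generator config (the namespace names below are illustrative assumptions, not taken from this repository):

```ini
[DEFAULT]
output_file = etc/dcmanager/dcmanager.conf.sample
wrap_width = 79
namespace = dcmanager.common.config
namespace = oslo.log
namespace = oslo.messaging
```

The PYTHONPATH=. prefix in the %build invocation makes the project's own namespace entry points importable from the unpacked source tree before the package is installed.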


@ -0,0 +1,13 @@
[Unit]
Description=DC Manager API Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcmanager-api --config-file /etc/dcmanager/dcmanager.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target
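The unit files in this package all share the simple shape above. A quick sanity-check sketch that recreates the unit and extracts the fields the packaging cares about (the temp path and checks are illustrative only):

```shell
#!/bin/sh
# Sketch: write out the unit shown above, then pull User= and ExecStart=
# back out with sed, the way a review script might.
cat > /tmp/dcmanager-api.service <<'EOF'
[Unit]
Description=DC Manager API Service
After=syslog.target network.target mysqld.service

[Service]
Type=simple
User=root
ExecStart=/usr/bin/dcmanager-api --config-file /etc/dcmanager/dcmanager.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

unit_user=$(sed -n 's/^User=//p' /tmp/dcmanager-api.service)
unit_exec=$(sed -n 's/^ExecStart=//p' /tmp/dcmanager-api.service)
echo "runs as: ${unit_user}"
echo "command: ${unit_exec}"
```

On a target system the unit would be activated with `systemctl daemon-reload` followed by `systemctl enable --now dcmanager-api.service`; the TODO in the unit flags the root User= setting as provisional.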


@ -0,0 +1,13 @@
[Unit]
Description=DC Manager Service
After=syslog.target network.target mysqld.service openstack-keystone.service
[Service]
Type=simple
# TODO(Bart): What user?
User=root
ExecStart=/usr/bin/dcmanager-manager --config-file /etc/dcmanager/dcmanager.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,13 @@
[Unit]
Description=DC Orchestrator API Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcorch-api --config-file /etc/dcorch/dcorch.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,13 @@
[Unit]
Description=DC Orchestrator Cinder API Proxy Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcorch-api-proxy --config-file /etc/dcorch/dcorch.conf --type volume
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,13 @@
[Unit]
Description=DC Orchestrator Engine Service
After=syslog.target network.target mysqld.service openstack-keystone.service
[Service]
Type=simple
# TODO(Bart): What user?
User=root
ExecStart=/usr/bin/dcorch-engine --config-file /etc/dcorch/dcorch.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,13 @@
[Unit]
Description=DC Orchestrator Neutron API Proxy Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcorch-api-proxy --config-file /etc/dcorch/dcorch.conf --type network
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,13 @@
[Unit]
Description=DC Orchestrator Nova API Proxy Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcorch-api-proxy --config-file /etc/dcorch/dcorch.conf --type compute
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,14 @@
[Unit]
Description=DC Orchestrator SNMP Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcorch-snmp --config-file /etc/dcorch/dcorch.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1,13 @@
[Unit]
Description=DC Orchestrator Sysinv API Proxy Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
# TODO(Bart): what user to use?
User=root
ExecStart=/usr/bin/dcorch-api-proxy --config-file /etc/dcorch/dcorch.conf --type platform
Restart=on-failure
[Install]
WantedBy=multi-user.target


@ -0,0 +1 @@
TIS_PATCH_VER=6


@ -0,0 +1,245 @@
From 7662bc5ed71f6704ffc90c7ad8ea040e6872e190 Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH 1/5] WRS:
0001-Modify-service-files-and-create-expirer-cron-script.patch
Conflicts:
SPECS/openstack-aodh.spec
---
SOURCES/aodh-expirer-active | 28 ++++++++++++++++++++++++++++
SOURCES/openstack-aodh-api.service | 5 ++---
SOURCES/openstack-aodh-evaluator.service | 5 ++---
SOURCES/openstack-aodh-expirer.service | 5 ++---
SOURCES/openstack-aodh-listener.service | 5 ++---
SOURCES/openstack-aodh-notifier.service | 5 ++---
SPECS/openstack-aodh.spec | 25 +++++++++++--------------
7 files changed, 49 insertions(+), 29 deletions(-)
create mode 100644 SOURCES/aodh-expirer-active
diff --git a/SOURCES/aodh-expirer-active b/SOURCES/aodh-expirer-active
new file mode 100644
index 0000000..373fa5d
--- /dev/null
+++ b/SOURCES/aodh-expirer-active
@@ -0,0 +1,61 @@
+#!/bin/bash
+
+#
+# Wrapper script to run aodh-expirer when on active controller only
+#
+AODH_EXPIRER_INFO="/var/run/aodh-expirer.info"
+AODH_EXPIRER_CMD="/usr/bin/nice -n 2 /usr/bin/aodh-expirer"
+
+function is_active_pgserver()
+{
+ # Determine whether we're running on the same controller as the service.
+ local service=postgres
+ local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
+ if [ "x$enabledactive" == "x" ]
+ then
+ # enabled-active not found for that service on this controller
+ return 1
+ else
+ # enabled-active found for that resource
+ return 0
+ fi
+}
+
+if is_active_pgserver
+then
+ if [ ! -f ${AODH_EXPIRER_INFO} ]
+ then
+ echo delay_count=0 > ${AODH_EXPIRER_INFO}
+ fi
+
+ source ${AODH_EXPIRER_INFO}
+ sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
+ if [ $? -eq 0 ]
+ then
+ source /etc/platform/platform.conf
+ if [ "${system_type}" = "All-in-one" ]
+ then
+ source /etc/init.d/task_affinity_functions.sh
+ idle_core=$(get_most_idle_core)
+ if [ "$idle_core" -ne "0" ]
+ then
+ sh -c "exec taskset -c $idle_core ${AODH_EXPIRER_CMD}"
+ sed -i "/delay_count/s/=.*/=0/" ${AODH_EXPIRER_INFO}
+ exit 0
+ fi
+ fi
+
+ if [ "$delay_count" -lt "3" ]
+ then
+ newval=$(($delay_count+1))
+ sed -i "/delay_count/s/=.*/=$newval/" ${AODH_EXPIRER_INFO}
+ (sleep 3600; /usr/bin/aodh-expirer-active) &
+ exit 0
+ fi
+ fi
+
+ eval ${AODH_EXPIRER_CMD}
+ sed -i "/delay_count/s/=.*/=0/" ${AODH_EXPIRER_INFO}
+fi
+
+exit 0
diff --git a/SOURCES/openstack-aodh-api.service b/SOURCES/openstack-aodh-api.service
index 2224261..b8b2921 100644
--- a/SOURCES/openstack-aodh-api.service
+++ b/SOURCES/openstack-aodh-api.service
@@ -4,9 +4,8 @@ After=syslog.target network.target
[Service]
Type=simple
-User=aodh
-ExecStart=/usr/bin/aodh-api --logfile /var/log/aodh/api.log
-Restart=on-failure
+User=root
+ExecStart=/usr/bin/aodh-api
[Install]
WantedBy=multi-user.target
diff --git a/SOURCES/openstack-aodh-evaluator.service b/SOURCES/openstack-aodh-evaluator.service
index 4f70431..795ef0c 100644
--- a/SOURCES/openstack-aodh-evaluator.service
+++ b/SOURCES/openstack-aodh-evaluator.service
@@ -4,9 +4,8 @@ After=syslog.target network.target
[Service]
Type=simple
-User=aodh
-ExecStart=/usr/bin/aodh-evaluator --logfile /var/log/aodh/evaluator.log
-Restart=on-failure
+User=root
+ExecStart=/usr/bin/aodh-evaluator
[Install]
WantedBy=multi-user.target
diff --git a/SOURCES/openstack-aodh-expirer.service b/SOURCES/openstack-aodh-expirer.service
index cc68b1b..0185d63 100644
--- a/SOURCES/openstack-aodh-expirer.service
+++ b/SOURCES/openstack-aodh-expirer.service
@@ -4,9 +4,8 @@ After=syslog.target network.target
[Service]
Type=simple
-User=aodh
-ExecStart=/usr/bin/aodh-expirer --logfile /var/log/aodh/expirer.log
-Restart=on-failure
+User=root
+ExecStart=/usr/bin/aodh-expirer
[Install]
WantedBy=multi-user.target
diff --git a/SOURCES/openstack-aodh-listener.service b/SOURCES/openstack-aodh-listener.service
index a024fe3..40e20d2 100644
--- a/SOURCES/openstack-aodh-listener.service
+++ b/SOURCES/openstack-aodh-listener.service
@@ -4,9 +4,8 @@ After=syslog.target network.target
[Service]
Type=simple
-User=aodh
-ExecStart=/usr/bin/aodh-listener --logfile /var/log/aodh/listener.log
-Restart=on-failure
+User=root
+ExecStart=/usr/bin/aodh-listener
[Install]
WantedBy=multi-user.target
diff --git a/SOURCES/openstack-aodh-notifier.service b/SOURCES/openstack-aodh-notifier.service
index d6135d7..68a96dd 100644
--- a/SOURCES/openstack-aodh-notifier.service
+++ b/SOURCES/openstack-aodh-notifier.service
@@ -4,9 +4,8 @@ After=syslog.target network.target
[Service]
Type=simple
-User=aodh
-ExecStart=/usr/bin/aodh-notifier --logfile /var/log/aodh/notifier.log
-Restart=on-failure
+User=root
+ExecStart=/usr/bin/aodh-notifier
[Install]
WantedBy=multi-user.target
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index 203f2f0..5d0dedd 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -18,8 +18,13 @@ Source12: %{name}-notifier.service
Source13: %{name}-expirer.service
Source14: %{name}-listener.service
+#WRS
+Source20: aodh-expirer-active
+
BuildArch: noarch
+
+
BuildRequires: python-setuptools
BuildRequires: python2-devel
BuildRequires: systemd
@@ -263,7 +268,7 @@ install -p -D -m 640 etc/aodh/api_paste.ini %{buildroot}%{_sysconfdir}/aodh/api_
# Setup directories
install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh
install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh/tmp
-install -d -m 750 %{buildroot}%{_localstatedir}/log/aodh
+install -d -m 755 %{buildroot}%{_localstatedir}/log/aodh
# Install logrotate
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_sysconfdir}/logrotate.d/%{name}
@@ -284,6 +289,9 @@ mv %{buildroot}%{python2_sitelib}/%{pypi_name}/locale %{buildroot}%{_datadir}/lo
# Find language files
%find_lang %{pypi_name} --all-name
+# WRS
+install -p -D -m 750 %{SOURCE20} %{buildroot}%{_bindir}/aodh-expirer-active
+
# Remove unused files
rm -fr %{buildroot}/usr/etc
@@ -346,13 +354,13 @@ exit 0
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/api_paste.ini
%config(noreplace) %{_sysconfdir}/logrotate.d/%{name}
%dir %attr(0755, aodh, root) %{_localstatedir}/log/aodh
-%{_bindir}/aodh-dbsync
%defattr(-, aodh, aodh, -)
%dir %{_sharedstatedir}/aodh
%dir %{_sharedstatedir}/aodh/tmp
%files api
+%{_bindir}/aodh-dbsync
%{_bindir}/aodh-api
%{_bindir}/aodh-data-migration
%{_bindir}/aodh-combination-alarm-conversion
@@ -373,22 +381,11 @@ exit 0
%files expirer
%{_bindir}/aodh-expirer
+%{_bindir}/aodh-expirer-active
%{_unitdir}/%{name}-expirer.service
%changelog
-* Mon Aug 28 2017 rdo-trunk <javier.pena@redhat.com> 3.0.4-1
-- Update to 3.0.4
-
-* Mon Jul 24 2017 Pradeep Kilambi <pkilambi@redhat.com> 3.0.3-2
-- Move aodh-dbsync to openstack-aodh-common
-
-* Thu Jul 13 2017 Mehdi Abaakouk <sileht@redhat.com> 3.0.3-1
-- Update to 3.0.3
-
-* Tue Feb 28 2017 Alfredo Moralejo <amoralej@redhat.com> 3.0.2-1
-- Update to 3.0.2
-
* Thu Oct 06 2016 Haikel Guemar <hguemar@fedoraproject.org> 3.0.0-1
- Update to 3.0.0
--
1.9.1
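The aodh-expirer-active wrapper introduced by this patch gates the expirer on whether the local controller hosts the active postgres service. Its sm-query check reduces to a grep for "enabled-active"; a standalone sketch with sm-query stubbed out (the real binary and its exact output format are assumptions here):

```shell
#!/bin/sh
# Stub of `sm-query service postgres` output for illustration; on a real
# controller the line contains "enabled-active" only on the active node.
sm_query() {
    echo "postgres enabled-active"
}

is_active_pgserver() {
    # Equivalent to the patch's grep + string-emptiness test, collapsed
    # into grep -q, whose exit status becomes the function's return value.
    sm_query | grep -q enabled-active
}

if is_active_pgserver; then
    echo "active controller: run aodh-expirer"
else
    echo "standby controller: skip"
fi
```

The wrapper then adds two refinements on top of this check: on All-in-one systems it pins the expirer to the most idle core via taskset, and it defers up to three times (an hour each) while a host alarm matching the local hostname is present.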


@ -0,0 +1,27 @@
From 4639ba8ff40214558ac25394ff2a3f4aaebe437a Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH 2/5] WRS: 0001-Update-package-versioning-for-TIS-format.patch
Conflicts:
SPECS/openstack-aodh.spec
---
SPECS/openstack-aodh.spec | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index 5d0dedd..c844a28 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -4,7 +4,7 @@
Name: openstack-aodh
Version: 3.0.4
-Release: 1%{?dist}
+Release: 1.el7%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Telemetry Alarming
License: ASL 2.0
URL: https://github.com/openstack/aodh.git
--
1.9.1


@ -0,0 +1,72 @@
From c4f387dbc34568caedd13e6c782a601cbdfcf707 Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH 4/5] WRS: 0001-meta-modify-aodh-api.patch
Conflicts:
SPECS/openstack-aodh.spec
---
SOURCES/openstack-aodh-api.service | 2 +-
SPECS/openstack-aodh.spec | 11 +++++++++--
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/SOURCES/openstack-aodh-api.service b/SOURCES/openstack-aodh-api.service
index b8b2921..06bcd12 100644
--- a/SOURCES/openstack-aodh-api.service
+++ b/SOURCES/openstack-aodh-api.service
@@ -5,7 +5,7 @@ After=syslog.target network.target
[Service]
Type=simple
User=root
-ExecStart=/usr/bin/aodh-api
+ExecStart=/bin/python /usr/bin/gunicorn --bind 192.168.204.2:8042 --pythonpath /usr/share/aodh aodh-api
[Install]
WantedBy=multi-user.target
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index b52931c..217dd14 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -20,9 +20,10 @@ Source14: %{name}-listener.service
#WRS
Source20: aodh-expirer-active
-BuildArch: noarch
-
+#WRS: Include patches here:
+Patch1: 0001-modify-aodh-api.patch
+BuildArch: noarch
BuildRequires: python-setuptools
BuildRequires: python2-devel
@@ -221,6 +222,9 @@ This package contains the Aodh test files.
%prep
%setup -q -n %{pypi_name}-%{upstream_version}
+#WRS: Apply patches here
+%patch1 -p1
+
find . \( -name .gitignore -o -name .placeholder \) -delete
find aodh -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} +
@@ -263,6 +267,8 @@ install -p -D -m 640 %{SOURCE1} %{buildroot}%{_datadir}/aodh/aodh-dist.conf
install -p -D -m 640 etc/aodh/aodh.conf %{buildroot}%{_sysconfdir}/aodh/aodh.conf
install -p -D -m 640 etc/aodh/policy.json %{buildroot}%{_sysconfdir}/aodh/policy.json
install -p -D -m 640 etc/aodh/api_paste.ini %{buildroot}%{_sysconfdir}/aodh/api_paste.ini
+#WRS
+install -p -D -m 640 aodh/api/aodh-api.py %{buildroot}%{_datadir}/aodh/aodh-api.py
# Setup directories
install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh
@@ -344,6 +350,7 @@ exit 0
%files common -f %{pypi_name}.lang
%doc README.rst
%dir %{_sysconfdir}/aodh
+%{_datadir}/aodh/aodh-api.*
%attr(-, root, aodh) %{_datadir}/aodh/aodh-dist.conf
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/aodh.conf
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/policy.json
--
1.9.1


@ -0,0 +1,25 @@
From 98503ae07f4a3b6753c9c1dfc1cf7ed6573ca8e8 Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH 5/5] WRS: 0001-meta-pass-aodh-api-config.patch
---
SOURCES/openstack-aodh-api.service | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/SOURCES/openstack-aodh-api.service b/SOURCES/openstack-aodh-api.service
index 06bcd12..a78eb32 100644
--- a/SOURCES/openstack-aodh-api.service
+++ b/SOURCES/openstack-aodh-api.service
@@ -5,7 +5,7 @@ After=syslog.target network.target
[Service]
Type=simple
User=root
-ExecStart=/bin/python /usr/bin/gunicorn --bind 192.168.204.2:8042 --pythonpath /usr/share/aodh aodh-api
+ExecStart=/bin/python /usr/bin/gunicorn --config /usr/share/aodh/aodh-api.conf --pythonpath /usr/share/aodh aodh-api
[Install]
WantedBy=multi-user.target
--
1.9.1


@ -0,0 +1,32 @@
From 0563ba710bf274a50ee16df75017dd0092cd2d31 Mon Sep 17 00:00:00 2001
From: Al Bailey <Al.Bailey@windriver.com>
Date: Thu, 21 Dec 2017 14:15:48 -0600
Subject: [PATCH] add drivername for postgresql
---
SPECS/openstack-aodh.spec | 2 ++
1 file changed, 2 insertions(+)
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index 217dd14..2fa77d0 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -22,6 +22,7 @@ Source20: aodh-expirer-active
#WRS: Include patches here:
Patch1: 0001-modify-aodh-api.patch
+Patch2: 0002-Add-drivername-support-for-postgresql-connection-set.patch
BuildArch: noarch
@@ -224,6 +225,7 @@ This package contains the Aodh test files.
#WRS: Apply patches here
%patch1 -p1
+%patch2 -p1
find . \( -name .gitignore -o -name .placeholder \) -delete
--
1.8.3.1


@ -0,0 +1,6 @@
0001-Modify-service-files-and-create-expirer-cron-script.patch
0001-Update-package-versioning-for-TIS-format.patch
meta-remove-default-logrotate.patch
0001-meta-modify-aodh-api.patch
0001-meta-pass-aodh-api-config.patch
0006-add-drivername-for-postgresql.patch


@ -0,0 +1,42 @@
From bc92f8743ede901522e6b19af208a4e5d038fa2f Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH 3/5] WRS: meta-remove-default-logrotate.patch
---
SPECS/openstack-aodh.spec | 5 -----
1 file changed, 5 deletions(-)
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index c844a28..b52931c 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -11,7 +11,6 @@ URL: https://github.com/openstack/aodh.git
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
Source1: %{pypi_name}-dist.conf
-Source2: %{pypi_name}.logrotate
Source10: %{name}-api.service
Source11: %{name}-evaluator.service
Source12: %{name}-notifier.service
@@ -270,9 +269,6 @@ install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh
install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh/tmp
install -d -m 755 %{buildroot}%{_localstatedir}/log/aodh
-# Install logrotate
-install -p -D -m 644 %{SOURCE2} %{buildroot}%{_sysconfdir}/logrotate.d/%{name}
-
# Install systemd unit services
install -p -D -m 644 %{SOURCE10} %{buildroot}%{_unitdir}/%{name}-api.service
install -p -D -m 644 %{SOURCE11} %{buildroot}%{_unitdir}/%{name}-evaluator.service
@@ -352,7 +348,6 @@ exit 0
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/aodh.conf
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/policy.json
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/api_paste.ini
-%config(noreplace) %{_sysconfdir}/logrotate.d/%{name}
%dir %attr(0755, aodh, root) %{_localstatedir}/log/aodh
%defattr(-, aodh, aodh, -)
--
1.9.1

@@ -0,0 +1,65 @@
From ea7f6013ffd1eb525943f4d7ae1bfdef6ecf6c22 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Wed, 15 Feb 2017 15:59:26 -0500
Subject: [PATCH 1/1] modify-aodh-api
---
aodh/api/aodh-api.py | 7 +++++++
aodh/api/app.py | 14 ++++++++++----
2 files changed, 17 insertions(+), 4 deletions(-)
create mode 100644 aodh/api/aodh-api.py
diff --git a/aodh/api/aodh-api.py b/aodh/api/aodh-api.py
new file mode 100644
index 0000000..565f2e3
--- /dev/null
+++ b/aodh/api/aodh-api.py
@@ -0,0 +1,7 @@
+from aodh.api import app as build_wsgi_app
+import sys
+
+sys.argv = sys.argv[:1]
+args = {'config_file' : 'etc/aodh/aodh.conf', }
+application = build_wsgi_app.build_wsgi_app(None, args)
+
diff --git a/aodh/api/app.py b/aodh/api/app.py
index 5cecb83..652856e 100644
--- a/aodh/api/app.py
+++ b/aodh/api/app.py
@@ -60,7 +60,7 @@ def setup_app(pecan_config=PECAN_CONFIG, conf=None):
return app
-def load_app(conf):
+def load_app(conf, args):
# Build the WSGI app
cfg_file = None
cfg_path = conf.api.paste_config
@@ -68,15 +68,21 @@ def load_app(conf):
cfg_file = conf.find_file(cfg_path)
elif os.path.exists(cfg_path):
cfg_file = cfg_path
-
if not cfg_file:
raise cfg.ConfigFilesNotFoundError([conf.api.paste_config])
+
+ config = dict([(key, value) for key, value in args.iteritems()
+ if key in conf and value is not None])
+ for key, value in config.iteritems():
+ if key == 'config_file':
+ conf.config_file = value
+
LOG.info(_LI("Full WSGI config used: %s"), cfg_file)
return deploy.loadapp("config:" + cfg_file)
-def build_wsgi_app(argv=None):
- return load_app(service.prepare_service(argv=argv))
+def build_wsgi_app(argv=None, args=None):
+ return load_app(service.prepare_service(argv=argv), args)
def _app():
--
1.8.3.1
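The patch above threads an extra `args` dict into `load_app()` so the WSGI entry point can force a `config_file`. Its filtering step (keep only keys known to `conf` whose value is set) can be sketched in isolation; here a plain set stands in for the real oslo.config object, and the names are illustrative:

```python
def filter_overrides(conf_keys, args):
    """Keep only overrides whose key is known to conf and whose value is
    set, mirroring the dict comprehension the patch adds to load_app()."""
    return {key: value for key, value in args.items()
            if key in conf_keys and value is not None}

overrides = filter_overrides(
    conf_keys={"config_file", "paste_config"},
    args={"config_file": "etc/aodh/aodh.conf",
          "paste_config": None,     # dropped: value is None
          "unrelated": "x"})        # dropped: not a conf key
```

The patched code itself uses `iteritems()` because it targets Python 2; the sketch uses `items()` for portability.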

@@ -0,0 +1,65 @@
From c8afec630be24345ccae50db739949f964e9c580 Mon Sep 17 00:00:00 2001
From: Al Bailey <Al.Bailey@windriver.com>
Date: Thu, 21 Dec 2017 13:38:09 -0600
Subject: [PATCH] Add drivername support for postgresql connection settings
---
aodh/api/aodh-api.py | 3 +--
aodh/cmd/data_migration.py | 2 +-
aodh/storage/__init__.py | 2 +-
setup.cfg | 1 +
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/aodh/api/aodh-api.py b/aodh/api/aodh-api.py
index 565f2e3..7c413d6 100644
--- a/aodh/api/aodh-api.py
+++ b/aodh/api/aodh-api.py
@@ -2,6 +2,5 @@ from aodh.api import app as build_wsgi_app
import sys
sys.argv = sys.argv[:1]
-args = {'config_file' : 'etc/aodh/aodh.conf', }
+args = {'config_file': 'etc/aodh/aodh.conf', }
application = build_wsgi_app.build_wsgi_app(None, args)
-
diff --git a/aodh/cmd/data_migration.py b/aodh/cmd/data_migration.py
index 6a9ea49..1a8df28 100644
--- a/aodh/cmd/data_migration.py
+++ b/aodh/cmd/data_migration.py
@@ -94,7 +94,7 @@ def _validate_conn_options(args):
), nosql_scheme)
sys.exit(1)
if sql_scheme not in ('mysql', 'mysql+pymysql', 'postgresql',
- 'sqlite'):
+ 'postgresql+psycopg2', 'sqlite'):
root_logger.error(_LE('Invalid destination DB type %s, the destination'
' database connection should be one of: '
'[mysql, postgresql, sqlite]'), sql_scheme)
diff --git a/aodh/storage/__init__.py b/aodh/storage/__init__.py
index e1d1048..d8fcd54 100644
--- a/aodh/storage/__init__.py
+++ b/aodh/storage/__init__.py
@@ -59,7 +59,7 @@ def get_connection_from_config(conf):
url = conf.database.connection
connection_scheme = urlparse.urlparse(url).scheme
if connection_scheme not in ('mysql', 'mysql+pymysql', 'postgresql',
- 'sqlite'):
+ 'postgresql+psycopg2', 'sqlite'):
msg = ('Storage backend %s is deprecated, and all the NoSQL backends '
'will be removed in Aodh 4.0, please use SQL backend.' %
connection_scheme)
diff --git a/setup.cfg b/setup.cfg
index 76f5362..ca67a16 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -80,6 +80,7 @@ aodh.storage =
mysql = aodh.storage.impl_sqlalchemy:Connection
mysql+pymysql = aodh.storage.impl_sqlalchemy:Connection
postgresql = aodh.storage.impl_sqlalchemy:Connection
+ postgresql+psycopg2 = aodh.storage.impl_sqlalchemy:Connection
sqlite = aodh.storage.impl_sqlalchemy:Connection
hbase = aodh.storage.impl_hbase:Connection
aodh.alarm.rule =
--
1.8.3.1
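Both hunks above extend a whitelist that is matched against the scheme of the database URL. The scheme is everything before the first `://`, so `postgresql+psycopg2` is a distinct scheme from `postgresql` and needs its own entry. A small illustration (the DSN is made up):

```python
from urllib.parse import urlparse  # the py2 code uses six.moves.urllib.parse

# Whitelist as extended by the patch
ALLOWED = ('mysql', 'mysql+pymysql', 'postgresql',
           'postgresql+psycopg2', 'sqlite')

# Hypothetical connection string; only the scheme part matters here
url = "postgresql+psycopg2://aodh:secret@127.0.0.1/aodh"
scheme = urlparse(url).scheme
```

Without the extra entry, a `postgresql+psycopg2://` connection string would fail the scheme check even though SQLAlchemy handles that driver spelling natively.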

@@ -0,0 +1 @@
mirror:Source/openstack-aodh-3.0.4-1.el7.src.rpm

@@ -0,0 +1,6 @@
TAR_NAME="ironic"
SRC_DIR="$CGCS_BASE/git/ironic"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=47179d9fca337f32324f8e8a68541358fdac8649
TIS_PATCH_VER=GITREVCOUNT

@@ -0,0 +1,4 @@
[DEFAULT]
log_dir = /var/log/ironic
state_path = /var/lib/ironic
use_stderr = True

@@ -0,0 +1,2 @@
Defaults:ironic !requiretty
ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap /etc/ironic/rootwrap.conf *

@@ -0,0 +1,12 @@
[Unit]
Description=OpenStack Ironic API service
After=syslog.target network.target
[Service]
Type=simple
User=ironic
ExecStart=/usr/bin/ironic-api
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,12 @@
[Unit]
Description=OpenStack Ironic Conductor service
After=syslog.target network.target
[Service]
Type=simple
User=ironic
ExecStart=/usr/bin/ironic-conductor
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,284 @@
%global full_release ironic-%{version}
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: openstack-ironic
# Liberty semver reset
# https://review.openstack.org/#/q/I1a161b2c1d1e27268065b6b4be24c8f7a5315afb,n,z
Epoch: 1
Summary: OpenStack Baremetal Hypervisor API (ironic)
Version: 9.1.2
Release: 0%{?_tis_dist}.%{tis_patch_ver}
License: ASL 2.0
URL: http://www.openstack.org
Source0: https://tarballs.openstack.org/ironic/ironic-%{version}.tar.gz
Source1: openstack-ironic-api.service
Source2: openstack-ironic-conductor.service
Source3: ironic-rootwrap-sudoers
Source4: ironic-dist.conf
BuildArch: noarch
BuildRequires: openstack-macros
BuildRequires: python-setuptools
BuildRequires: python2-devel
BuildRequires: python-pbr
BuildRequires: openssl-devel
BuildRequires: libxml2-devel
BuildRequires: libxslt-devel
BuildRequires: gmp-devel
BuildRequires: python-sphinx
BuildRequires: systemd
# Required to compile translation files
BuildRequires: python-babel
# Required to run unit tests
BuildRequires: pysendfile
BuildRequires: python-alembic
BuildRequires: python-automaton
BuildRequires: python-cinderclient
BuildRequires: python-dracclient
BuildRequires: python-eventlet
BuildRequires: python-futurist
BuildRequires: python-glanceclient
BuildRequires: python-ironic-inspector-client
BuildRequires: python-ironic-lib
BuildRequires: python-jinja2
BuildRequires: python-jsonpatch
BuildRequires: python-jsonschema
BuildRequires: python-keystoneauth1
BuildRequires: python-keystonemiddleware
BuildRequires: python-mock
BuildRequires: python-neutronclient
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-config
BuildRequires: python-oslo-context
BuildRequires: python-oslo-db
BuildRequires: python-oslo-db-tests
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-reports
BuildRequires: python-oslo-rootwrap
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python-oslotest
BuildRequires: python-osprofiler
BuildRequires: python-os-testr
BuildRequires: python-pbr
BuildRequires: python-pecan
BuildRequires: python-proliantutils
BuildRequires: python-psutil
BuildRequires: python-requests
BuildRequires: python-retrying
BuildRequires: python-scciclient
BuildRequires: python-six
BuildRequires: python-sqlalchemy
BuildRequires: python-stevedore
BuildRequires: python-sushy
BuildRequires: python-swiftclient
BuildRequires: python-testresources
BuildRequires: python-tooz
BuildRequires: python-UcsSdk
BuildRequires: python-webob
BuildRequires: python-wsme
BuildRequires: pysnmp
BuildRequires: pytz
%prep
%setup -q -n ironic-%{upstream_version}
rm requirements.txt test-requirements.txt
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate i18n files
%{__python2} setup.py compile_catalog -d build/lib/ironic/locale
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root=%{buildroot}
# Create fake egg-info for the tempest plugin
# TODO switch to %{service} everywhere as in openstack-example.spec
%global service ironic
%py2_entrypoint %{service} %{service}
# install systemd scripts
mkdir -p %{buildroot}%{_unitdir}
install -p -D -m 644 %{SOURCE1} %{buildroot}%{_unitdir}
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}
# install sudoers file
mkdir -p %{buildroot}%{_sysconfdir}/sudoers.d
install -p -D -m 440 %{SOURCE3} %{buildroot}%{_sysconfdir}/sudoers.d/ironic
mkdir -p %{buildroot}%{_sharedstatedir}/ironic/
mkdir -p %{buildroot}%{_localstatedir}/log/ironic/
mkdir -p %{buildroot}%{_sysconfdir}/ironic/rootwrap.d
#Populate the conf dir
install -p -D -m 640 etc/ironic/ironic.conf.sample %{buildroot}/%{_sysconfdir}/ironic/ironic.conf
install -p -D -m 640 etc/ironic/policy.json %{buildroot}/%{_sysconfdir}/ironic/policy.json
install -p -D -m 640 etc/ironic/rootwrap.conf %{buildroot}/%{_sysconfdir}/ironic/rootwrap.conf
install -p -D -m 640 etc/ironic/rootwrap.d/* %{buildroot}/%{_sysconfdir}/ironic/rootwrap.d/
# Install distribution config
install -p -D -m 640 %{SOURCE4} %{buildroot}/%{_datadir}/ironic/ironic-dist.conf
# Install i18n .mo files (.po and .pot are not required)
install -d -m 755 %{buildroot}%{_datadir}
rm -f %{buildroot}%{python2_sitelib}/ironic/locale/*/LC_*/ironic*po
rm -f %{buildroot}%{python2_sitelib}/ironic/locale/*pot
mv %{buildroot}%{python2_sitelib}/ironic/locale %{buildroot}%{_datadir}/locale
# Find language files
%find_lang ironic --all-name
%description
Ironic provides an API for management and provisioning of physical machines
%package common
Summary: Ironic common
Requires: ipmitool
Requires: pysendfile
Requires: python-alembic
Requires: python-automaton >= 0.5.0
Requires: python-cinderclient >= 3.1.0
Requires: python-dracclient >= 1.3.0
Requires: python-eventlet
Requires: python-futurist >= 0.11.0
Requires: python-glanceclient >= 1:2.7.0
Requires: python-ironic-inspector-client >= 1.5.0
Requires: python-ironic-lib >= 2.5.0
Requires: python-jinja2
Requires: python-jsonpatch
Requires: python-jsonschema
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-neutronclient >= 6.3.0
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-middleware >= 3.27.0
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-reports >= 0.6.0
Requires: python-oslo-rootwrap >= 5.0.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-oslo-versionedobjects >= 1.17.0
Requires: python-osprofiler >= 1.4.0
Requires: python-pbr
Requires: python-pecan
Requires: python-proliantutils >= 2.4.0
Requires: python-psutil
Requires: python-requests
Requires: python-retrying
Requires: python-rfc3986 >= 0.3.1
Requires: python-scciclient >= 0.5.0
Requires: python-six
Requires: python-sqlalchemy
Requires: python-stevedore >= 1.20.0
Requires: python-sushy
Requires: python-swiftclient >= 3.2.0
Requires: python-tooz >= 1.47.0
Requires: python-UcsSdk >= 0.8.2.2
Requires: python-webob >= 1.7.1
Requires: python-wsme
Requires: pysnmp
Requires: pytz
Requires(pre): shadow-utils
%description common
Components common to all OpenStack Ironic services
%files common -f ironic.lang
%doc README.rst
%license LICENSE
%{_bindir}/ironic-dbsync
%{_bindir}/ironic-rootwrap
%{python2_sitelib}/ironic
%{python2_sitelib}/ironic-*.egg-info
%exclude %{python2_sitelib}/ironic/tests
%exclude %{python2_sitelib}/ironic_tempest_plugin
%{_sysconfdir}/sudoers.d/ironic
%config(noreplace) %attr(-,root,ironic) %{_sysconfdir}/ironic
%attr(-,ironic,ironic) %{_sharedstatedir}/ironic
%attr(0755,ironic,ironic) %{_localstatedir}/log/ironic
%attr(-, root, ironic) %{_datadir}/ironic/ironic-dist.conf
%exclude %{python2_sitelib}/ironic_tests.egg_info
%package api
Summary: The Ironic API
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description api
Ironic API for management and provisioning of physical machines
%files api
%{_bindir}/ironic-api
%{_unitdir}/openstack-ironic-api.service
%package conductor
Summary: The Ironic Conductor
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description conductor
Ironic Conductor for management and provisioning of physical machines
%files conductor
%{_bindir}/ironic-conductor
%{_unitdir}/openstack-ironic-conductor.service
%package -n python-ironic-tests
Summary: Ironic tests
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: python-mock
Requires: python-oslotest
Requires: python-os-testr
Requires: python-testresources
%description -n python-ironic-tests
This package contains the Ironic test files.
%files -n python-ironic-tests
%{python2_sitelib}/ironic/tests
%{python2_sitelib}/ironic_tempest_plugin
%{python2_sitelib}/%{service}_tests.egg-info
%changelog
* Fri Nov 03 2017 RDO <dev@lists.rdoproject.org> 1:9.1.2-1
- Update to 9.1.2
* Mon Sep 25 2017 rdo-trunk <javier.pena@redhat.com> 1:9.1.1-1
- Update to 9.1.1
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 1:9.1.0-1
- Update to 9.1.0

@@ -0,0 +1,6 @@
TAR_NAME="magnum-ui"
SRC_DIR="$CGCS_BASE/git/magnum-ui"
TIS_BASE_SRCREV=0b9fc50aada1a3e214acaad1204b48c96a549e5f
TIS_PATCH_VER=1

@@ -0,0 +1,93 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global library magnum-ui
%global module magnum_ui
Name: openstack-%{library}
Version: 3.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Magnum UI Horizon plugin
License: ASL 2.0
URL: http://launchpad.net/%{library}/
Source0: https://tarballs.openstack.org/%{library}/%{library}-%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: python2-devel
BuildRequires: python-pbr
BuildRequires: python-setuptools
BuildRequires: git
Requires: python-pbr
Requires: python-babel
Requires: python-magnumclient >= 2.0.0
Requires: openstack-dashboard >= 8.0.0
Requires: python-django >= 1.8
Requires: python-django-babel
Requires: python-django-compressor >= 2.0
Requires: python-django-openstack-auth >= 3.5.0
Requires: python-django-pyscss >= 2.0.2
%description
OpenStack Magnum UI Horizon plugin
# Documentation package
%package -n python-%{library}-doc
Summary: OpenStack example library documentation
BuildRequires: python-sphinx
BuildRequires: python-django
BuildRequires: python-django-nose
BuildRequires: openstack-dashboard
BuildRequires: python-openstackdocstheme
BuildRequires: python-magnumclient
BuildRequires: python-mock
BuildRequires: python-mox3
%description -n python-%{library}-doc
OpenStack Magnum UI Horizon plugin documentation
This package contains the documentation.
%prep
%autosetup -n %{library}-%{upstream_version} -S git
# Let's handle dependencies ourselves
rm -f *requirements.txt
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# generate html docs
export PYTHONPATH=/usr/share/openstack-dashboard
#%{__python2} setup.py build_sphinx -b html
# remove the sphinx-build leftovers
#rm -rf doc/build/html/.{doctrees,buildinfo}
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install --skip-build --root %{buildroot}
# Move config to horizon
install -p -D -m 640 %{module}/enabled/_1370_project_container_infra_panel_group.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_1370_project_container_infra_panel_group.py
install -p -D -m 640 %{module}/enabled/_1371_project_container_infra_clusters_panel.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_1371_project_container_infra_clusters_panel.py
install -p -D -m 640 %{module}/enabled/_1372_project_container_infra_cluster_templates_panel.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_1372_project_container_infra_cluster_templates_panel.py
%files
%license LICENSE
%{python2_sitelib}/%{module}
%{python2_sitelib}/*.egg-info
%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_137*
%files -n python-%{library}-doc
%license LICENSE
#%doc doc/build/html README.rst
%changelog
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 3.0.0-1
- Update to 3.0.0

@@ -0,0 +1,6 @@
TAR_NAME="magnum"
SRC_DIR="$CGCS_BASE/git/magnum"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=ca4b29087a4af00060870519e5897348ccc61161
TIS_PATCH_VER=1

@@ -0,0 +1,15 @@
[Unit]
Description=OpenStack Magnum API Service
After=syslog.target network.target
[Service]
Type=simple
User=magnum
ExecStart=/usr/bin/magnum-api
PrivateTmp=true
NotifyAccess=all
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,15 @@
[Unit]
Description=OpenStack Magnum Conductor Service
After=syslog.target network.target qpidd.service mysqld.service tgtd.service
[Service]
Type=simple
User=magnum
ExecStart=/usr/bin/magnum-conductor
PrivateTmp=true
NotifyAccess=all
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,325 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%global service magnum
Name: openstack-%{service}
Summary: Container Management project for OpenStack
Version: 5.0.1
Release: 1%{?_tis_dist}.%{tis_patch_ver}
License: ASL 2.0
URL: https://github.com/openstack/magnum.git
Source0: https://tarballs.openstack.org/%{service}/%{service}-%{version}.tar.gz
Source2: %{name}-api.service
Source3: %{name}-conductor.service
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-pbr
BuildRequires: python-setuptools
BuildRequires: python-werkzeug
BuildRequires: systemd-units
# Required for config file generation
BuildRequires: python-pycadf
BuildRequires: python-osprofiler
Requires: %{name}-common = %{version}-%{release}
Requires: %{name}-conductor = %{version}-%{release}
Requires: %{name}-api = %{version}-%{release}
%description
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
%package -n python-%{service}
Summary: Magnum Python libraries
Requires: python-pbr
Requires: python-babel
Requires: PyYAML
Requires: python-sqlalchemy
Requires: python-wsme
Requires: python-webob
Requires: python-alembic
Requires: python-decorator
Requires: python-docker >= 2.0.0
Requires: python-enum34
Requires: python-eventlet
Requires: python-iso8601
Requires: python-jsonpatch
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-netaddr
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-middleware >= 3.27.0
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-oslo-versionedobjects >= 1.17.0
Requires: python-oslo-reports >= 0.6.0
Requires: python-osprofiler
Requires: python-pycadf
Requires: python-pecan
Requires: python-barbicanclient >= 4.0.0
Requires: python-glanceclient >= 1:2.8.0
Requires: python-heatclient >= 1.6.1
Requires: python-neutronclient >= 6.3.0
Requires: python-novaclient >= 1:9.0.0
Requires: python-kubernetes
Requires: python-keystoneclient >= 1:3.8.0
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-cliff >= 2.8.0
Requires: python-requests
Requires: python-six
Requires: python-stevedore >= 1.20.0
Requires: python-taskflow
Requires: python-cryptography
Requires: python-werkzeug
Requires: python-marathon
%description -n python-%{service}
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
%package common
Summary: Magnum common
Requires: python-%{service} = %{version}-%{release}
Requires(pre): shadow-utils
%description common
Components common to all OpenStack Magnum services
%package conductor
Summary: The Magnum conductor
Requires: %{name}-common = %{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description conductor
OpenStack Magnum Conductor
%package api
Summary: The Magnum API
Requires: %{name}-common = %{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description api
OpenStack-native ReST API to the Magnum Engine
%if 0%{?with_doc}
%package -n %{name}-doc
Summary: Documentation for OpenStack Magnum
Requires: python-%{service} = %{version}-%{release}
BuildRequires: python-sphinx
BuildRequires: python-openstackdocstheme
BuildRequires: python-stevedore
BuildRequires: graphviz
%description -n %{name}-doc
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
This package contains documentation files for Magnum.
%endif
# tests
%package -n python-%{service}-tests
Summary: Tests for OpenStack Magnum
Requires: python-%{service} = %{version}-%{release}
BuildRequires: python-fixtures
BuildRequires: python-hacking
BuildRequires: python-mock
BuildRequires: python-oslotest
BuildRequires: python-os-testr
BuildRequires: python-subunit
BuildRequires: python-testrepository
BuildRequires: python-testscenarios
BuildRequires: python-testtools
BuildRequires: python-tempest
BuildRequires: openstack-macros
# copy-paste from runtime Requires
BuildRequires: python-babel
BuildRequires: PyYAML
BuildRequires: python-sqlalchemy
BuildRequires: python-wsme
BuildRequires: python-webob
BuildRequires: python-alembic
BuildRequires: python-decorator
BuildRequires: python-docker >= 2.0.0
BuildRequires: python-enum34
BuildRequires: python-eventlet
BuildRequires: python-iso8601
BuildRequires: python-jsonpatch
BuildRequires: python-keystonemiddleware
BuildRequires: python-netaddr
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-config
BuildRequires: python-oslo-context
BuildRequires: python-oslo-db
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python2-oslo-versionedobjects-tests
BuildRequires: python-oslo-reports
BuildRequires: python-pecan
BuildRequires: python-barbicanclient
BuildRequires: python-glanceclient
BuildRequires: python-heatclient
BuildRequires: python-neutronclient
BuildRequires: python-novaclient
BuildRequires: python-kubernetes
BuildRequires: python-keystoneclient
BuildRequires: python-requests
BuildRequires: python-six
BuildRequires: python-stevedore
BuildRequires: python-taskflow
BuildRequires: python-cryptography
BuildRequires: python-marathon
%description -n python-%{service}-tests
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
%prep
%autosetup -n %{service}-%{upstream_version} -S git
# Let's handle dependencies ourselves
rm -rf {test-,}requirements{-bandit,}.txt tools/{pip,test}-requires
# Remove tests in contrib
find contrib -name tests -type d | xargs rm -rf
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root=%{buildroot}
# Create fake egg-info for the tempest plugin
%py2_entrypoint %{service} %{service}
# docs generation requires everything to be installed first
%if 0%{?with_doc}
%{__python2} setup.py build_sphinx -b html
# Fix hidden-file-or-dir warnings
rm -fr doc/build/html/.doctrees doc/build/html/.buildinfo
%endif
mkdir -p %{buildroot}%{_localstatedir}/log/%{service}/
mkdir -p %{buildroot}%{_localstatedir}/run/%{service}/
# install systemd unit files
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}/%{name}-api.service
install -p -D -m 644 %{SOURCE3} %{buildroot}%{_unitdir}/%{name}-conductor.service
mkdir -p %{buildroot}%{_sharedstatedir}/%{service}/
mkdir -p %{buildroot}%{_sharedstatedir}/%{service}/certificates/
mkdir -p %{buildroot}%{_sysconfdir}/%{service}/
oslo-config-generator --config-file etc/magnum/magnum-config-generator.conf --output-file %{buildroot}%{_sysconfdir}/%{service}/magnum.conf
chmod 640 %{buildroot}%{_sysconfdir}/%{service}/magnum.conf
install -p -D -m 640 etc/magnum/policy.json %{buildroot}%{_sysconfdir}/%{service}
install -p -D -m 640 etc/magnum/api-paste.ini %{buildroot}%{_sysconfdir}/%{service}
%check
%{__python2} setup.py test || true
%files -n python-%{service}
%license LICENSE
%{python2_sitelib}/%{service}
%{python2_sitelib}/%{service}-*.egg-info
%exclude %{python2_sitelib}/%{service}/tests
%files common
%{_bindir}/magnum-db-manage
%{_bindir}/magnum-driver-manage
%license LICENSE
%dir %attr(0750,%{service},root) %{_localstatedir}/log/%{service}
%dir %attr(0755,%{service},root) %{_localstatedir}/run/%{service}
%dir %attr(0755,%{service},root) %{_sharedstatedir}/%{service}
%dir %attr(0755,%{service},root) %{_sharedstatedir}/%{service}/certificates
%dir %attr(0755,%{service},root) %{_sysconfdir}/%{service}
%config(noreplace) %attr(-, root, %{service}) %{_sysconfdir}/%{service}/magnum.conf
%config(noreplace) %attr(-, root, %{service}) %{_sysconfdir}/%{service}/policy.json
%config(noreplace) %attr(-, root, %{service}) %{_sysconfdir}/%{service}/api-paste.ini
%pre common
# 1870:1870 for magnum - rhbz#845078
getent group %{service} >/dev/null || groupadd -r --gid 1870 %{service}
getent passwd %{service} >/dev/null || \
useradd --uid 1870 -r -g %{service} -d %{_sharedstatedir}/%{service} -s /sbin/nologin \
-c "OpenStack Magnum Daemons" %{service}
exit 0
%files conductor
%doc README.rst
%license LICENSE
%{_bindir}/magnum-conductor
%{_unitdir}/%{name}-conductor.service
%files api
%doc README.rst
%license LICENSE
%{_bindir}/magnum-api
%{_unitdir}/%{name}-api.service
%if 0%{?with_doc}
%files -n %{name}-doc
%license LICENSE
%doc doc/build/html
%endif
%files -n python-%{service}-tests
%license LICENSE
%{python2_sitelib}/%{service}/tests
%{python2_sitelib}/%{service}_tests.egg-info
%changelog
* Mon Aug 28 2017 rdo-trunk <javier.pena@redhat.com> 5.0.1-1
- Update to 5.0.1
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 5.0.0-1
- Update to 5.0.0
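The spec's `%pre common` scriptlet above does a getent-guarded `groupadd`/`useradd` with the fixed 1870:1870 IDs. The decision logic can be sketched in Python; the function below is illustrative only (it returns the commands the scriptlet would run instead of executing them):

```python
import grp
import pwd

def service_account_commands(name, uid, gid):
    """Return the groupadd/useradd commands the %pre scriptlet would run,
    skipping anything that already exists (the getent checks)."""
    cmds = []
    try:
        grp.getgrnam(name)          # getent group <name>
    except KeyError:
        cmds.append(["groupadd", "-r", "--gid", str(gid), name])
    try:
        pwd.getpwnam(name)          # getent passwd <name>
    except KeyError:
        cmds.append(["useradd", "--uid", str(uid), "-r", "-g", name,
                     "-d", "/var/lib/" + name, "-s", "/sbin/nologin",
                     "-c", "OpenStack Magnum Daemons", name])
    return cmds
```

The guards make the scriptlet idempotent across package upgrades, and the fixed IDs keep ownership stable across nodes.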

@@ -0,0 +1,5 @@
TAR_NAME="murano-dashboard"
SRC_DIR="$CGCS_BASE/git/murano-dashboard"
TIS_BASE_SRCREV=c950e248c2dfdc7a040d6984d84ed19c82a04e7d
TIS_PATCH_VER=1

@@ -0,0 +1,147 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global pypi_name murano-dashboard
%global mod_name muranodashboard
Name: openstack-murano-ui
Version: 4.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: The UI component for the OpenStack murano service
Group: Applications/Communications
License: ASL 2.0
URL: https://github.com/openstack/%{pypi_name}
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
#
BuildRequires: gettext
BuildRequires: git
BuildRequires: openstack-dashboard
BuildRequires: python-beautifulsoup4
BuildRequires: python-castellan
BuildRequires: python-devel
BuildRequires: python-django-formtools
BuildRequires: python-django-nose
BuildRequires: python-mock
BuildRequires: python-mox3
BuildRequires: python-muranoclient
BuildRequires: python-nose
BuildRequires: python-openstack-nose-plugin
BuildRequires: python-oslo-config >= 2:3.14.0
BuildRequires: python-pbr >= 1.6
BuildRequires: python-semantic-version
BuildRequires: python-setuptools
BuildRequires: python-testtools
BuildRequires: python-yaql >= 1.1.0
BuildRequires: tsconfig
Requires: openstack-dashboard
Requires: PyYAML >= 3.10
Requires: python-babel >= 2.3.4
Requires: python-beautifulsoup4
Requires: python-castellan >= 0.7.0
Requires: python-django >= 1.8
Requires: python-django-babel
Requires: python-django-formtools
Requires: python-iso8601 >= 0.1.11
Requires: python-muranoclient >= 0.8.2
Requires: python-oslo-log >= 3.22.0
Requires: python-pbr
Requires: python-semantic-version
Requires: python-six >= 1.9.0
Requires: python-yaql >= 1.1.0
Requires: pytz
BuildArch: noarch
%description
Murano Dashboard
System package - murano-dashboard
Python package - murano-dashboard
Murano Dashboard is an extension for OpenStack Dashboard that provides a UI
for Murano. With murano-dashboard, a user is able to easily manage and control
an application catalog, running applications and created environments alongside
with all other OpenStack resources.
%package doc
Summary: Documentation for OpenStack murano dashboard
BuildRequires: python-sphinx
BuildRequires: python-openstackdocstheme
BuildRequires: python-reno
%description doc
Murano Dashboard is an extension for OpenStack Dashboard that provides a UI
for Murano. With murano-dashboard, a user is able to easily manage and control
an application catalog, running applications and created environments alongside
with all other OpenStack resources.
This package contains the documentation.
%prep
%autosetup -n %{pypi_name}-%{upstream_version} -S git
# Let RPM handle the dependencies
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
# Disable warning-is-error: this project uses intersphinx in its docs,
# so some warnings are generated in network-isolated build environments
# such as koji
sed -i 's/^warning-is-error.*/warning-is-error = 0/g' setup.cfg
%build
export PBR_VERSION=%{version}
%py2_build
# Generate i18n files
pushd build/lib/%{mod_name}
django-admin compilemessages
popd
# generate html docs
export OSLO_PACKAGE_VERSION=%{upstream_version}
%{__python2} setup.py build_sphinx -b html
# remove the sphinx-build leftovers
rm -rf doc/build/html/.{doctrees,buildinfo}
%install
export PBR_VERSION=%{version}
%py2_install
mkdir -p %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled
mkdir -p %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/local_settings.d
mkdir -p %{buildroot}/var/cache/murano-dashboard
# Enable Horizon plugin for murano-dashboard
cp %{_builddir}/%{pypi_name}-%{upstream_version}/muranodashboard/local/local_settings.d/_50_murano.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/local_settings.d/
cp %{_builddir}/%{pypi_name}-%{upstream_version}/muranodashboard/local/enabled/_*.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/
# Install the policy file; without it the Horizon sidebar disappears
# (a page refresh restores it, but that is annoying).
install -p -D -m 644 muranodashboard/conf/murano_policy.json %{buildroot}%{_sysconfdir}/openstack-dashboard/murano_policy.json
%check
export PYTHONPATH="%{_datadir}/openstack-dashboard:%{python2_sitearch}:%{python2_sitelib}:%{buildroot}%{python2_sitelib}"
#%{__python2} manage.py test muranodashboard --settings=muranodashboard.tests.settings
%post
HORIZON_SETTINGS='/etc/openstack-dashboard/local_settings'
if grep -Eq '^METADATA_CACHE_DIR=' $HORIZON_SETTINGS; then
sed -i '/^METADATA_CACHE_DIR=/{s#.*#METADATA_CACHE_DIR="/var/cache/murano-dashboard"#}' $HORIZON_SETTINGS
else
sed -i '$aMETADATA_CACHE_DIR="/var/cache/murano-dashboard"' $HORIZON_SETTINGS
fi
%systemd_postun_with_restart httpd.service
%postun
%systemd_postun_with_restart httpd.service
%files
%license LICENSE
%doc README.rst
%{python2_sitelib}/muranodashboard
%{python2_sitelib}/murano_dashboard*.egg-info
%{_datadir}/openstack-dashboard/openstack_dashboard/local/local_settings.d/*
%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/*
%dir %attr(755, apache, apache) /var/cache/murano-dashboard
%{_sysconfdir}/openstack-dashboard/murano_policy.json
%files doc
%license LICENSE
%doc doc/build/html
%changelog
* Wed Aug 30 2017 rdo-trunk <javier.pena@redhat.com> 4.0.0-1
- Update to 4.0.0
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 4.0.0-0.1.0rc2
- Update to 4.0.0.0rc2

@@ -0,0 +1,6 @@
TAR_NAME="murano"
SRC_DIR="$CGCS_BASE/git/murano"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=de53ba8f9a97ad30c492063d9cc497ca56093e38
TIS_PATCH_VER=1

@@ -0,0 +1,12 @@
[Unit]
Description=OpenStack Murano API Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
User=murano
ExecStart=/usr/bin/murano-api --config-file /etc/murano/murano.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,12 @@
[Unit]
Description=OpenStack Murano Cloud Foundry API Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
User=murano
ExecStart=/usr/bin/murano-cfapi --config-file /etc/murano/murano.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,12 @@
[Unit]
Description=OpenStack Murano Engine Service
After=syslog.target network.target mysqld.service openstack-keystone.service
[Service]
Type=simple
User=murano
ExecStart=/usr/bin/murano-engine --config-file /etc/murano/murano.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,290 @@
%global pypi_name murano
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%if 0%{?fedora}
%global with_python3 1
%{!?python3_shortver: %global python3_shortver %(%{__python3} -c 'import sys; print(str(sys.version_info.major) + "." + str(sys.version_info.minor))')}
%endif
Name: openstack-%{pypi_name}
Version: 4.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Murano Service
License: ASL 2.0
URL: https://pypi.python.org/pypi/murano
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
#
Source1: openstack-murano-api.service
Source2: openstack-murano-engine.service
Source4: openstack-murano-cf-api.service
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python-jsonschema >= 2.0.0
BuildRequires: python-keystonemiddleware
BuildRequires: python-oslo-config
BuildRequires: python-oslo-db
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-openstackdocstheme
BuildRequires: python-pbr >= 2.0.0
BuildRequires: python-routes >= 2.3.1
BuildRequires: python-sphinx
BuildRequires: python-sphinxcontrib-httpdomain
BuildRequires: python-castellan
BuildRequires: pyOpenSSL
BuildRequires: systemd
BuildRequires: openstack-macros
# Required to compile translation files
BuildRequires: python-babel
%description
The Murano project introduces an application catalog service for OpenStack.
# MURANO-COMMON
%package common
Summary: Murano common
Requires: python-alembic >= 0.8.7
Requires: python-babel >= 2.3.4
Requires: python-debtcollector >= 1.2.0
Requires: python-eventlet >= 0.18.2
Requires: python-iso8601 >= 0.1.9
Requires: python-jsonpatch >= 1.1
Requires: python-jsonschema >= 2.0.0
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-kombu >= 1:4.0.0
Requires: python-netaddr >= 0.7.13
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-middleware >= 3.27.0
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-paste
Requires: python-paste-deploy >= 1.5.0
Requires: python-pbr >= 2.0.0
Requires: python-psutil >= 3.2.2
Requires: python-congressclient >= 1.3.0
Requires: python-heatclient >= 1.6.1
Requires: python-keystoneclient >= 1:3.8.0
Requires: python-mistralclient >= 3.1.0
Requires: python-muranoclient >= 0.8.2
Requires: python-neutronclient >= 6.3.0
Requires: PyYAML >= 3.10
Requires: python-routes >= 2.3.1
Requires: python-semantic_version >= 2.3.1
Requires: python-six >= 1.9.0
Requires: python-stevedore >= 1.20.0
Requires: python-sqlalchemy >= 1.0.10
Requires: python-tenacity >= 3.2.1
Requires: python-webob >= 1.7.1
Requires: python-yaql >= 1.1.0
Requires: python-castellan >= 0.7.0
Requires: %{name}-doc = %{version}-%{release}
%description common
Components common to all OpenStack Murano services
# MURANO-ENGINE
%package engine
Summary: The Murano engine
Group: Applications/System
Requires: %{name}-common = %{version}-%{release}
%description engine
OpenStack Murano Engine daemon
# MURANO-API
%package api
Summary: The Murano API
Group: Applications/System
Requires: %{name}-common = %{version}-%{release}
%description api
OpenStack REST API to the Murano engine
# MURANO-CF-API
%package cf-api
Summary: The Murano Cloud Foundry API
Group: System Environment/Base
Requires: %{name}-common = %{version}-%{release}
%description cf-api
OpenStack REST API exposing Murano to Cloud Foundry
%if 0%{?with_doc}
%package doc
Summary: Documentation for OpenStack Murano services
%description doc
This package contains documentation files for Murano.
%endif
%package -n python-murano-tests
Summary: Murano tests
Requires: %{name}-common = %{version}-%{release}
%description -n python-murano-tests
This package contains the murano test files.
%prep
%autosetup -S git -n %{pypi_name}-%{upstream_version}
# Remove the requirements file so that pbr hooks don't add it
# to distutils requires_dist config
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate i18n files
%{__python2} setup.py compile_catalog -d build/lib/%{pypi_name}/locale
# Generate sample config and add the current directory to PYTHONPATH so
# oslo-config-generator doesn't skip murano's entry points.
PYTHONPATH=. oslo-config-generator --config-file=./etc/oslo-config-generator/murano.conf
PYTHONPATH=. oslo-config-generator --config-file=./etc/oslo-config-generator/murano-cfapi.conf
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root %{buildroot}
# Create fake egg-info for the tempest plugin
# TODO switch to %{service} everywhere as in openstack-example.spec
%global service murano
%py2_entrypoint %{service} %{service}
# DOCs
pushd doc
%if 0%{?with_doc}
SPHINX_DEBUG=1 sphinx-build -b html source build/html
# Fix hidden-file-or-dir warnings
rm -fr build/html/.doctrees build/html/.buildinfo
%endif
popd
mkdir -p %{buildroot}/var/log/murano
mkdir -p %{buildroot}/var/run/murano
mkdir -p %{buildroot}/var/cache/murano/meta
mkdir -p %{buildroot}/etc/murano/
# install systemd unit files
install -p -D -m 644 %{SOURCE1} %{buildroot}%{_unitdir}/murano-api.service
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}/murano-engine.service
install -p -D -m 644 %{SOURCE4} %{buildroot}%{_unitdir}/murano-cf-api.service
# install default config files
cd %{_builddir}/%{pypi_name}-%{upstream_version} && oslo-config-generator --config-file ./etc/oslo-config-generator/murano.conf --output-file %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano.conf.sample
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano.conf.sample %{buildroot}%{_sysconfdir}/murano/murano.conf
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/netconfig.yaml.sample %{buildroot}%{_sysconfdir}/murano/netconfig.yaml.sample
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano-paste.ini %{buildroot}%{_sysconfdir}/murano/murano-paste.ini
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/logging.conf.sample %{buildroot}%{_sysconfdir}/murano/logging.conf
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano-cfapi.conf.sample %{buildroot}%{_sysconfdir}/murano/murano-cfapi.conf
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano-cfapi-paste.ini %{buildroot}%{_sysconfdir}/murano/murano-cfapi-paste.ini
# Create the murano core library archive (murano meta packages written in
# MuranoPL, with the minimal execution-plan logic)
pushd meta/io.murano
zip -r %{buildroot}%{_localstatedir}/cache/murano/meta/io.murano.zip .
popd
# Create the murano applications library archive (murano meta packages
# written in MuranoPL)
pushd meta/io.murano.applications
zip -r %{buildroot}%{_localstatedir}/cache/murano/meta/io.murano.applications.zip .
popd
# Install i18n .mo files (.po and .pot are not required)
install -d -m 755 %{buildroot}%{_datadir}
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*/LC_*/%{pypi_name}*po
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*pot
mv %{buildroot}%{python2_sitelib}/%{pypi_name}/locale %{buildroot}%{_datadir}/locale
# Find language files
%find_lang %{pypi_name} --all-name
%files common -f %{pypi_name}.lang
%license LICENSE
%{python2_sitelib}/murano
%{python2_sitelib}/murano-*.egg-info
%exclude %{python2_sitelib}/murano/tests
%exclude %{python2_sitelib}/murano_tempest_tests
%exclude %{python2_sitelib}/%{service}_tests.egg-info
%{_bindir}/murano-manage
%{_bindir}/murano-db-manage
%{_bindir}/murano-test-runner
%{_bindir}/murano-cfapi-db-manage
%dir %attr(0750,murano,root) %{_localstatedir}/log/murano
%dir %attr(0755,murano,root) %{_localstatedir}/run/murano
%dir %attr(0755,murano,root) %{_localstatedir}/cache/murano
%dir %attr(0755,murano,root) %{_sysconfdir}/murano
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano.conf
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano-paste.ini
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/netconfig.yaml.sample
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/logging.conf
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano-cfapi.conf
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano-cfapi-paste.ini
%files engine
%doc README.rst
%license LICENSE
%{_bindir}/murano-engine
%{_unitdir}/murano-engine.service
%post engine
%systemd_post murano-engine.service
%preun engine
%systemd_preun murano-engine.service
%postun engine
%systemd_postun_with_restart murano-engine.service
%files api
%doc README.rst
%license LICENSE
%{_localstatedir}/cache/murano/*
%{_bindir}/murano-api
%{_bindir}/murano-wsgi-api
%{_unitdir}/murano-api.service
%files cf-api
%doc README.rst
%license LICENSE
%{_bindir}/murano-cfapi
%{_unitdir}/murano-cf-api.service
%files doc
%doc doc/build/html
%files -n python-murano-tests
%license LICENSE
%{python2_sitelib}/murano/tests
%{python2_sitelib}/murano_tempest_tests
%{python2_sitelib}/%{service}_tests.egg-info
%changelog
* Wed Aug 30 2017 rdo-trunk <javier.pena@redhat.com> 4.0.0-1
- Update to 4.0.0
* Fri Aug 25 2017 Alfredo Moralejo <amoralej@redhat.com> 4.0.0-0.2.0rc2
- Update to 4.0.0.0rc2
* Mon Aug 21 2017 Alfredo Moralejo <amoralej@redhat.com> 4.0.0-0.1.0rc1
- Update to 4.0.0.0rc1

@@ -0,0 +1 @@
TIS_PATCH_VER=5

@@ -0,0 +1,171 @@
From 39121ea596ec8137f2d56b8a35ebba73feb6b5c8 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Fri, 20 Oct 2017 10:07:03 -0400
Subject: [PATCH 1/1] panko config
---
SOURCES/panko-dist.conf | 2 +-
SOURCES/panko-expirer-active | 27 +++++++++++++++++++++++++++
SPECS/openstack-panko.spec | 22 +++++++++++++++++-----
3 files changed, 45 insertions(+), 6 deletions(-)
create mode 100644 SOURCES/panko-expirer-active
diff --git a/SOURCES/panko-dist.conf b/SOURCES/panko-dist.conf
index c33a2ee..ac6f79f 100644
--- a/SOURCES/panko-dist.conf
+++ b/SOURCES/panko-dist.conf
@@ -1,4 +1,4 @@
[DEFAULT]
-log_dir = /var/log/panko
+#log_dir = /var/log/panko
use_stderr = False
diff --git a/SOURCES/panko-expirer-active b/SOURCES/panko-expirer-active
new file mode 100644
index 0000000..7d526e0
--- /dev/null
+++ b/SOURCES/panko-expirer-active
@@ -0,0 +1,60 @@
+#!/bin/bash
+
+#
+# Wrapper script to run panko-expirer when on active controller only
+#
+PANKO_EXPIRER_INFO="/var/run/panko-expirer.info"
+PANKO_EXPIRER_CMD="/usr/bin/nice -n 2 /usr/bin/panko-expirer"
+
+function is_active_pgserver()
+{
+ # Determine whether we're running on the same controller as the service.
+ local service=postgres
+ local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
+ if [ "x$enabledactive" == "x" ]
+ then
+ # enabled-active not found for that service on this controller
+ return 1
+ else
+ # enabled-active found for that resource
+ return 0
+ fi
+}
+
+if is_active_pgserver
+then
+ if [ ! -f ${PANKO_EXPIRER_INFO} ]
+ then
+ echo skip_count=0 > ${PANKO_EXPIRER_INFO}
+ fi
+
+ source ${PANKO_EXPIRER_INFO}
+ sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
+ if [ $? -eq 0 ]
+ then
+ source /etc/platform/platform.conf
+ if [ "${system_type}" = "All-in-one" ]
+ then
+ source /etc/init.d/task_affinity_functions.sh
+ idle_core=$(get_most_idle_core)
+ if [ "$idle_core" -ne "0" ]
+ then
+ sh -c "exec taskset -c $idle_core ${PANKO_EXPIRER_CMD}"
+ sed -i "/skip_count/s/=.*/=0/" ${PANKO_EXPIRER_INFO}
+ exit 0
+ fi
+ fi
+
+ if [ "$skip_count" -lt "3" ]
+ then
+ newval=$(($skip_count+1))
+ sed -i "/skip_count/s/=.*/=$newval/" ${PANKO_EXPIRER_INFO}
+ exit 0
+ fi
+ fi
+
+ eval ${PANKO_EXPIRER_CMD}
+ sed -i "/skip_count/s/=.*/=0/" ${PANKO_EXPIRER_INFO}
+fi
+
+exit 0
diff --git a/SPECS/openstack-panko.spec b/SPECS/openstack-panko.spec
index d12da57..90471d9 100644
--- a/SPECS/openstack-panko.spec
+++ b/SPECS/openstack-panko.spec
@@ -4,20 +4,26 @@
Name: openstack-panko
Version: 3.0.0
-Release: 1%{?dist}
+Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: Panko provides Event storage and REST API
License: ASL 2.0
URL: http://github.com/openstack/panko
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
Source1: %{pypi_name}-dist.conf
-Source2: %{pypi_name}.logrotate
+# WRS
+Source2: panko-expirer-active
+
+# WRS: Include patches here
+Patch1: 0001-modify-panko-api.patch
+
BuildArch: noarch
BuildRequires: python-setuptools
BuildRequires: python-pbr
BuildRequires: python2-devel
BuildRequires: openstack-macros
+BuildRequires: python-tenacity >= 3.1.0
%description
HTTP API to store events.
@@ -116,6 +122,9 @@ This package contains documentation files for panko.
%prep
%setup -q -n %{pypi_name}-%{upstream_version}
+# WRS: Apply patches here
+%patch1 -p1
+
find . \( -name .gitignore -o -name .placeholder \) -delete
find panko -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} +
@@ -158,6 +167,8 @@ mkdir -p %{buildroot}/%{_var}/log/%{name}
install -p -D -m 640 %{SOURCE1} %{buildroot}%{_datadir}/panko/panko-dist.conf
install -p -D -m 640 etc/panko/panko.conf %{buildroot}%{_sysconfdir}/panko/panko.conf
install -p -D -m 640 etc/panko/api_paste.ini %{buildroot}%{_sysconfdir}/panko/api_paste.ini
+# WRS
+install -p -D -m 640 panko/api/panko-api.py %{buildroot}%{_datadir}/panko/panko-api.py
#TODO(prad): build the docs at run time, once the we get rid of postgres setup dependency
@@ -169,8 +180,8 @@ install -d -m 755 %{buildroot}%{_sharedstatedir}/panko
install -d -m 755 %{buildroot}%{_sharedstatedir}/panko/tmp
install -d -m 755 %{buildroot}%{_localstatedir}/log/panko
-# Install logrotate
-install -p -D -m 644 %{SOURCE2} %{buildroot}%{_sysconfdir}/logrotate.d/%{name}
+# WRS
+install -p -D -m 755 %{SOURCE2} %{buildroot}%{_bindir}/panko-expirer-active
# Remove all of the conf files that are included in the buildroot/usr/etc dir since we installed them above
rm -f %{buildroot}/usr/etc/panko/*
@@ -201,14 +212,15 @@ exit 0
%{_bindir}/panko-api
%{_bindir}/panko-dbsync
%{_bindir}/panko-expirer
+%{_bindir}/panko-expirer-active
%files common
%dir %{_sysconfdir}/panko
+%{_datadir}/panko/panko-api.*
%attr(-, root, panko) %{_datadir}/panko/panko-dist.conf
%config(noreplace) %attr(-, root, panko) %{_sysconfdir}/panko/policy.json
%config(noreplace) %attr(-, root, panko) %{_sysconfdir}/panko/panko.conf
%config(noreplace) %attr(-, root, panko) %{_sysconfdir}/panko/api_paste.ini
-%config(noreplace) %attr(-, root, panko) %{_sysconfdir}/logrotate.d/%{name}
%dir %attr(0755, panko, root) %{_localstatedir}/log/panko
%defattr(-, panko, panko, -)
--
1.8.3.1

@@ -0,0 +1,32 @@
From 4e791be412662ae1f97cfd4ff5a90ea6337e49a4 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Thu, 16 Nov 2017 15:25:08 -0500
Subject: [PATCH 1/1] spec change event list descending
---
SPECS/openstack-panko.spec | 2 ++
1 file changed, 2 insertions(+)
diff --git a/SPECS/openstack-panko.spec b/SPECS/openstack-panko.spec
index 90471d9..95497b4 100644
--- a/SPECS/openstack-panko.spec
+++ b/SPECS/openstack-panko.spec
@@ -16,6 +16,7 @@ Source2: panko-expirer-active
# WRS: Include patches here
Patch1: 0001-modify-panko-api.patch
+Patch2: 0002-Change-event-list-descending.patch
BuildArch: noarch
@@ -124,6 +125,7 @@ This package contains documentation files for panko.
# WRS: Apply patches here
%patch1 -p1
+%patch2 -p1
find . \( -name .gitignore -o -name .placeholder \) -delete
--
1.8.3.1

@@ -0,0 +1,32 @@
From aad89aa79de1e9f0b35afa1ba587c10591a889e0 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Mon, 11 Dec 2017 16:29:23 -0500
Subject: [PATCH 1/1] spec fix event query to sqlalchemy with non admin user
---
SPECS/openstack-panko.spec | 2 ++
1 file changed, 2 insertions(+)
diff --git a/SPECS/openstack-panko.spec b/SPECS/openstack-panko.spec
index 95497b4..87a6a5a 100644
--- a/SPECS/openstack-panko.spec
+++ b/SPECS/openstack-panko.spec
@@ -17,6 +17,7 @@ Source2: panko-expirer-active
# WRS: Include patches here
Patch1: 0001-modify-panko-api.patch
Patch2: 0002-Change-event-list-descending.patch
+Patch3: 0003-Fix-event-query-to-sqlalchemy-with-non-admin-user.patch
BuildArch: noarch
@@ -126,6 +127,7 @@ This package contains documentation files for panko.
# WRS: Apply patches here
%patch1 -p1
%patch2 -p1
+%patch3 -p1
find . \( -name .gitignore -o -name .placeholder \) -delete
--
1.8.3.1

@@ -0,0 +1,3 @@
0001-panko-config.patch
0002-spec-change-event-list-descending.patch
0003-spec-fix-event-query-to-sqlalchemy-with-non-admin-us.patch

@@ -0,0 +1,63 @@
From 3583e2afbae8748f05dc12c51eefc4983358759c Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Mon, 6 Nov 2017 17:32:46 -0500
Subject: [PATCH 1/1] modify panko api
---
panko/api/app.py | 12 +++++++++---
panko/api/panko-api.py | 6 ++++++
2 files changed, 15 insertions(+), 3 deletions(-)
create mode 100644 panko/api/panko-api.py
diff --git a/panko/api/app.py b/panko/api/app.py
index 9867e18..4eedaea 100644
--- a/panko/api/app.py
+++ b/panko/api/app.py
@@ -51,7 +51,7 @@ global APPCONFIGS
APPCONFIGS = {}
-def load_app(conf, appname='panko+keystone'):
+def load_app(conf, args, appname='panko+keystone'):
global APPCONFIGS
# Build the WSGI app
@@ -62,6 +62,12 @@ def load_app(conf, appname='panko+keystone'):
if cfg_path is None or not os.path.exists(cfg_path):
raise cfg.ConfigFilesNotFoundError([conf.api_paste_config])
+ config_args = dict([(key, value) for key, value in args.iteritems()
+ if key in conf and value is not None])
+ for key, value in config_args.iteritems():
+ if key == 'config_file':
+ conf.config_file = value
+
config = dict(conf=conf)
configkey = str(uuid.uuid4())
APPCONFIGS[configkey] = config
@@ -71,8 +77,8 @@ def load_app(conf, appname='panko+keystone'):
global_conf={'configkey': configkey})
-def build_wsgi_app(argv=None):
- return load_app(service.prepare_service(argv=argv))
+def build_wsgi_app(argv=None, args=None):
+ return load_app(service.prepare_service(argv=argv), args)
def app_factory(global_config, **local_conf):
diff --git a/panko/api/panko-api.py b/panko/api/panko-api.py
new file mode 100644
index 0000000..87d917d
--- /dev/null
+++ b/panko/api/panko-api.py
@@ -0,0 +1,6 @@
+from panko.api import app as build_wsgi_app
+import sys
+
+sys.argv = sys.argv[:1]
+args = {'config_file' : 'etc/panko/panko.conf', }
+application = build_wsgi_app.build_wsgi_app(args=args)
--
1.8.3.1

@@ -0,0 +1,27 @@
From 05b89c2f78357ad39b0cd9eb74903e14d1f56758 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Thu, 16 Nov 2017 15:14:17 -0500
Subject: [PATCH 1/1] Change event list descending
---
panko/storage/models.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/panko/storage/models.py b/panko/storage/models.py
index 9c578c8..ed4c9a8 100644
--- a/panko/storage/models.py
+++ b/panko/storage/models.py
@@ -35,8 +35,8 @@ class Event(base.Model):
SUPPORT_DIRS = ('asc', 'desc')
SUPPORT_SORT_KEYS = ('message_id', 'generated')
- DEFAULT_DIR = 'asc'
- DEFAULT_SORT = [('generated', 'asc'), ('message_id', 'asc')]
+ DEFAULT_DIR = 'desc'
+ DEFAULT_SORT = [('generated', 'desc'), ('message_id', 'desc')]
PRIMARY_KEY = 'message_id'
def __init__(self, message_id, event_type, generated, traits, raw):
--
1.8.3.1

@@ -0,0 +1,101 @@
From c390a3bc6920728806f581b85d46f02d75eb651c Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Mon, 11 Dec 2017 16:21:42 -0500
Subject: [PATCH 1/1] Fix event query to sqlalchemy with non admin user
This is an upstream fix.
https://github.com/openstack/panko/commit/99d591df950271594ee049caa3ff22304437a228
Do not port this patch in the next panko rebase.
---
panko/storage/impl_sqlalchemy.py | 34 +++++++++++++++-------
.../functional/storage/test_storage_scenarios.py | 4 +--
2 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/panko/storage/impl_sqlalchemy.py b/panko/storage/impl_sqlalchemy.py
index 670c8d7..29b5b97 100644
--- a/panko/storage/impl_sqlalchemy.py
+++ b/panko/storage/impl_sqlalchemy.py
@@ -24,6 +24,7 @@ from oslo_log import log
from oslo_utils import timeutils
import sqlalchemy as sa
from sqlalchemy.engine import url as sqlalchemy_url
+from sqlalchemy.orm import aliased
from panko import storage
from panko.storage import base
@@ -61,8 +62,8 @@ trait_models_dict = {'string': models.TraitText,
'float': models.TraitFloat}
-def _build_trait_query(session, trait_type, key, value, op='eq'):
- trait_model = trait_models_dict[trait_type]
+def _get_model_and_conditions(trait_type, key, value, op='eq'):
+ trait_model = aliased(trait_models_dict[trait_type])
op_dict = {'eq': (trait_model.value == value),
'lt': (trait_model.value < value),
'le': (trait_model.value <= value),
@@ -70,8 +71,7 @@ def _build_trait_query(session, trait_type, key, value, op='eq'):
'ge': (trait_model.value >= value),
'ne': (trait_model.value != value)}
conditions = [trait_model.key == key, op_dict[op]]
- return (session.query(trait_model.event_id.label('ev_id'))
- .filter(*conditions))
+ return (trait_model, conditions)
class Connection(base.Connection):
@@ -274,16 +274,28 @@ class Connection(base.Connection):
key = trait_filter.pop('key')
op = trait_filter.pop('op', 'eq')
trait_type, value = list(trait_filter.items())[0]
- trait_subq = _build_trait_query(session, trait_type,
- key, value, op)
- for trait_filter in filters:
+
+ trait_model, conditions = _get_model_and_conditions(
+ trait_type, key, value, op)
+ trait_subq = (session
+ .query(trait_model.event_id.label('ev_id'))
+ .filter(*conditions))
+
+ first_model = trait_model
+ for label_num, trait_filter in enumerate(filters):
key = trait_filter.pop('key')
op = trait_filter.pop('op', 'eq')
trait_type, value = list(trait_filter.items())[0]
- q = _build_trait_query(session, trait_type,
- key, value, op)
- trait_subq = trait_subq.filter(
- trait_subq.subquery().c.ev_id == q.subquery().c.ev_id)
+ trait_model, conditions = _get_model_and_conditions(
+ trait_type, key, value, op)
+ trait_subq = (
+ trait_subq
+ .add_columns(
+ trait_model.event_id.label('l%d' % label_num))
+ .filter(
+ first_model.event_id == trait_model.event_id,
+ *conditions))
+
trait_subq = trait_subq.subquery()
query = (session.query(models.Event.id)
diff --git a/panko/tests/functional/storage/test_storage_scenarios.py b/panko/tests/functional/storage/test_storage_scenarios.py
index 3af76b4..9af75c8 100644
--- a/panko/tests/functional/storage/test_storage_scenarios.py
+++ b/panko/tests/functional/storage/test_storage_scenarios.py
@@ -340,8 +340,8 @@ class GetEventTest(EventTestBase):
def test_get_event_multiple_trait_filter(self):
trait_filters = [{'key': 'trait_B', 'integer': 1},
- {'key': 'trait_A', 'string': 'my_Foo_text'},
- {'key': 'trait_C', 'float': 0.123456}]
+ {'key': 'trait_C', 'float': 0.123456},
+ {'key': 'trait_A', 'string': 'my_Foo_text'}]
event_filter = storage.EventFilter(self.start, self.end,
traits_filter=trait_filters)
events = [event for event in self.conn.get_events(event_filter)]
--
1.8.3.1

@@ -0,0 +1 @@
mirror:Source/openstack-panko-3.0.0-1.el7.src.rpm

@@ -0,0 +1,5 @@
SRC_DIR="$CGCS_BASE/git/openstack-ras"
#COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=a54e652dd2f404de8e125370445a1225b3678894
TIS_PATCH_VER=GITREVCOUNT

@@ -0,0 +1,80 @@
%define local_dir /usr/local
Summary: openstack-ras
Name: openstack-ras
Version: 1.0.0
Release: 0%{?_tis_dist}.%{tis_patch_ver}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: https://github.com/madkiss/openstack-resource-agents/tree/stable-grizzly
# Note: when upgrading, new upstream URL will be:
# https://git.openstack.org/cgit/openstack/openstack-resource-agents
Requires: /usr/bin/env
Requires: /bin/sh
Source: %{name}-%{version}.tar.gz
%description
OpenStack Resource Agents from Madkiss
%prep
%autosetup -p 1
%install
%make_install
rm -rf ${RPM_BUILD_ROOT}/usr/lib/ocf/resource.d/openstack/ceilometer-agent-central
rm -rf ${RPM_BUILD_ROOT}/usr/lib/ocf/resource.d/openstack/ceilometer-alarm-evaluator
rm -rf ${RPM_BUILD_ROOT}/usr/lib/ocf/resource.d/openstack/ceilometer-alarm-notifier
%files
%defattr(-,root,root,-)
%dir "/usr/lib/ocf/resource.d/openstack"
"/usr/lib/ocf/resource.d/openstack/aodh-api"
"/usr/lib/ocf/resource.d/openstack/aodh-evaluator"
"/usr/lib/ocf/resource.d/openstack/aodh-listener"
"/usr/lib/ocf/resource.d/openstack/aodh-notifier"
"/usr/lib/ocf/resource.d/openstack/murano-engine"
"/usr/lib/ocf/resource.d/openstack/murano-api"
"/usr/lib/ocf/resource.d/openstack/magnum-conductor"
"/usr/lib/ocf/resource.d/openstack/magnum-api"
"/usr/lib/ocf/resource.d/openstack/ironic-conductor"
"/usr/lib/ocf/resource.d/openstack/ironic-api"
"/usr/lib/ocf/resource.d/openstack/nova-compute"
"/usr/lib/ocf/resource.d/openstack/heat-api"
"/usr/lib/ocf/resource.d/openstack/glance-registry"
"/usr/lib/ocf/resource.d/openstack/nova-network"
"/usr/lib/ocf/resource.d/openstack/keystone"
"/usr/lib/ocf/resource.d/openstack/heat-engine"
"/usr/lib/ocf/resource.d/openstack/nova-novnc"
"/usr/lib/ocf/resource.d/openstack/nova-serialproxy"
"/usr/lib/ocf/resource.d/openstack/heat-api-cfn"
"/usr/lib/ocf/resource.d/openstack/cinder-api"
"/usr/lib/ocf/resource.d/openstack/neutron-agent-dhcp"
"/usr/lib/ocf/resource.d/openstack/cinder-volume"
"/usr/lib/ocf/resource.d/openstack/neutron-agent-l3"
"/usr/lib/ocf/resource.d/openstack/cinder-schedule"
"/usr/lib/ocf/resource.d/openstack/nova-consoleauth"
"/usr/lib/ocf/resource.d/openstack/ceilometer-api"
"/usr/lib/ocf/resource.d/openstack/nova-scheduler"
"/usr/lib/ocf/resource.d/openstack/nova-conductor"
"/usr/lib/ocf/resource.d/openstack/neutron-server"
"/usr/lib/ocf/resource.d/openstack/validation"
"/usr/lib/ocf/resource.d/openstack/heat-api-cloudwatch"
"/usr/lib/ocf/resource.d/openstack/ceilometer-agent-notification"
"/usr/lib/ocf/resource.d/openstack/glance-api"
"/usr/lib/ocf/resource.d/openstack/nova-api"
"/usr/lib/ocf/resource.d/openstack/neutron-metadata-agent"
"/usr/lib/ocf/resource.d/openstack/ceilometer-collector"
"/usr/lib/ocf/resource.d/openstack/panko-api"
"/usr/lib/ocf/resource.d/openstack/nova-placement-api"
"/usr/lib/ocf/resource.d/openstack/dcorch-snmp"
"/usr/lib/ocf/resource.d/openstack/dcmanager-manager"
"/usr/lib/ocf/resource.d/openstack/dcorch-nova-api-proxy"
"/usr/lib/ocf/resource.d/openstack/dcorch-sysinv-api-proxy"
"/usr/lib/ocf/resource.d/openstack/dcmanager-api"
"/usr/lib/ocf/resource.d/openstack/dcorch-engine"
"/usr/lib/ocf/resource.d/openstack/dcorch-neutron-api-proxy"
"/usr/lib/ocf/resource.d/openstack/dcorch-cinder-api-proxy"
"/usr/lib/ocf/resource.d/openstack/dcorch-patch-api-proxy"

@@ -0,0 +1,221 @@
Index: git/ocf/cinder-api
===================================================================
--- git.orig/ocf/cinder-api 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/cinder-api 2014-09-23 10:22:33.294302829 -0400
@@ -244,18 +244,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Cinder API (cinder-api) monitor succeeded"
Index: git/ocf/glance-api
===================================================================
--- git.orig/ocf/glance-api 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/glance-api 2014-09-23 10:16:35.903826295 -0400
@@ -236,11 +236,9 @@
fi
# Monitor the RA by retrieving the image list
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
--os_username "$OCF_RESKEY_os_username" \
- --os_password "$OCF_RESKEY_os_password" \
--os_tenant_name "$OCF_RESKEY_os_tenant_name" \
--os_auth_url "$OCF_RESKEY_os_auth_url" \
index > /dev/null 2>&1
Index: git/ocf/glance-registry
===================================================================
--- git.orig/ocf/glance-registry 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/glance-registry 2014-09-23 10:22:58.078475044 -0400
@@ -246,18 +246,27 @@
# Check whether we are supposed to monitor by logging into glance-registry
# and do it if that's the case.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack ImageService (glance-registry) monitor succeeded"
Index: git/ocf/keystone
===================================================================
--- git.orig/ocf/keystone 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/keystone 2014-09-23 10:18:30.736618732 -0400
@@ -237,12 +237,10 @@
# Check whether we are supposed to monitor by logging into Keystone
# and do it if that's the case.
- if [ -n "$OCF_RESKEY_client_binary" ] && [ -n "$OCF_RESKEY_os_username" ] \
- && [ -n "$OCF_RESKEY_os_password" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] \
+ if [ -n "$OCF_RESKEY_client_binary" ] && [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] \
&& [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
--os-username "$OCF_RESKEY_os_username" \
- --os-password "$OCF_RESKEY_os_password" \
--os-tenant-name "$OCF_RESKEY_os_tenant_name" \
--os-auth-url "$OCF_RESKEY_os_auth_url" \
user-list > /dev/null 2>&1
Index: git/ocf/neutron-server
===================================================================
--- git.orig/ocf/neutron-server 2014-09-17 13:13:13.872502871 -0400
+++ git/ocf/neutron-server 2014-09-23 10:23:39.358761926 -0400
@@ -256,18 +256,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Neutron Server (neutron-server) monitor succeeded"
Index: git/ocf/nova-api
===================================================================
--- git.orig/ocf/nova-api 2014-09-17 13:13:15.240513478 -0400
+++ git/ocf/nova-api 2014-09-23 10:23:20.454630543 -0400
@@ -244,18 +244,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Nova API (nova-api) monitor succeeded"
Index: git/ocf/validation
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ git/ocf/validation 2014-09-23 10:06:37.011706573 -0400
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+
+from keystoneclient import probe
+
+probe.main()
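The new `validation` helper imports `keystoneclient.probe`, which is not part of stock python-keystoneclient and is presumably shipped alongside these agents. A rough sketch of what such a probe has to do — build the v2.0 passwordCredentials payload the old curl pipeline sent, then pull the token id out of the JSON response instead of the fragile `tr`/`grep`/`cut`/`head` chain — might look like this (everything beyond the curl payload shown in the hunks above is an assumption):

```python
import json


def build_token_request(username, password, tenant_name):
    # v2.0 passwordCredentials body, equivalent to the `curl -d` payload
    # in the commented-out monitor code above.
    return json.dumps({
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant_name,
        }
    })


def extract_token_id(response_body):
    # Replace the tr/grep/cut/head pipeline with real JSON parsing:
    # a v2.0 tokens response carries the id at access.token.id.
    return json.loads(response_body)["access"]["token"]["id"]
```

Parsing the response as JSON also avoids the old pipeline's failure mode of grabbing the first `"id"` field it happens to see.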

Index: git/ocf/ceilometer-mem-db
===================================================================
--- /dev/null
+++ git/ocf/ceilometer-mem-db
@@ -0,0 +1,369 @@
+#!/bin/sh
+#
+#
+# OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+#
+# Description: Manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
+#
+# Authors: Emilien Macchi
+# Mainly inspired by the Nova Scheduler resource agent written by Sebastien Han
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+# Copyright (c) 2014 Wind River Systems, Inc.
+# SPDX-License-Identifier: Apache-2.0
+#
+#
+#
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_amqp_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="ceilometer-mem-db"
+OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_user_default="root"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_amqp_server_port_default="5672"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
+
+ The 'start' operation starts the Mem DB service.
+ The 'stop' operation stops the Mem DB service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the Mem DB service is running
+ The 'monitor' operation reports whether the Mem DB service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="ceilometer-mem-db">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+May manage a ceilometer-mem-db instance or a clone set that
+creates a distributed ceilometer-mem-db cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB (ceilometer-mem-db) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="amqp_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">AMQP listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
+</parameter>
+
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">Additional parameters for ceilometer-mem-db</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+ceilometer_mem_db_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+ceilometer_mem_db_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ ceilometer_mem_db_check_port $OCF_RESKEY_amqp_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+ceilometer_mem_db_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
+ rm -f $OCF_RESKEY_pid
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+ceilometer_mem_db_monitor() {
+ local rc
+ local pid
+ local scheduler_amqp_check
+
+ ceilometer_mem_db_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the connections according to the PID.
+ # We are sure to hit the Mem DB process and not another Ceilometer process with the same connection behavior (for example ceilometer-api)
+ pid=`cat $OCF_RESKEY_pid`
+ scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Mem DB is not connected to the AMQP server: $rc"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+ceilometer_mem_db_start() {
+ local rc
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual ceilometer-mem-db daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ ceilometer_mem_db_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) started"
+ return $OCF_SUCCESS
+}
+
+ceilometer_mem_db_confirm_stop() {
+ local my_binary
+ local my_processes
+
+ my_binary=`which ${OCF_RESKEY_binary}`
+ my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
+
+ if [ -n "${my_processes}" ]
+ then
+ ocf_log info "About to SIGKILL the following: ${my_processes}"
+ pkill -KILL -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"
+ fi
+}
+
+ceilometer_mem_db_stop() {
+ local rc
+ local pid
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already stopped"
+ ceilometer_mem_db_confirm_stop
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) couldn't be stopped"
+ ceilometer_mem_db_confirm_stop
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) still hasn't stopped yet. Waiting ..."
+ done
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+ ceilometer_mem_db_confirm_stop
+
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+ceilometer_mem_db_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) ceilometer_mem_db_start;;
+ stop) ceilometer_mem_db_stop;;
+ status) ceilometer_mem_db_status;;
+ monitor) ceilometer_mem_db_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
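The `ceilometer_mem_db_check_port` helper above accepts only strings that both match the port regex and are exactly four characters long, so `"1080"` and `"0080"` pass while `"80"` and `"1080bad"` fail. A sketch of the same rule in Python (mirroring the shell logic, not replacing it) makes one quirk visible: the header comment lists `"0000"` as invalid, but the check as written accepts it.

```python
import re

# Same pattern the shell passes to `egrep -qx`, anchored for Python's re.
_PORT_RE = re.compile(r'[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*\Z')


def is_valid_port_string(value):
    # Mirrors ceilometer_mem_db_check_port: regex match AND exactly 4 chars.
    # Note: as written, the shell check accepts "0000" even though the
    # header comment lists it as invalid.
    return bool(_PORT_RE.match(value)) and len(value) == 4
```

The four-character requirement means real one- to three-digit ports must be zero-padded (e.g. `"0080"`), which is worth keeping in mind when configuring `amqp_server_port`.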

Index: git/ocf/ceilometer-collector
===================================================================
--- git.orig/ocf/ceilometer-collector 2014-08-07 21:08:46.637211162 -0400
+++ git/ocf/ceilometer-collector 2014-08-07 21:09:24.893475317 -0400
@@ -223,15 +223,16 @@
return $rc
fi
- # Check the connections according to the PID.
- # We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
- pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc=$?
- if [ $rc -ne 0 ]; then
+ # Check the connections according to the PID of the child process since
+ # the parent is not the one with the AMQP connection
+ ppid=`cat $OCF_RESKEY_pid`
+ pid=`pgrep -P $ppid`
+ scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc=$?
+ if [ $rc -ne 0 ]; then
ocf_log err "Collector is not connected to the AMQP server : $rc"
return $OCF_NOT_RUNNING
- fi
+ fi
ocf_log debug "OpenStack Ceilometer Collector (ceilometer-collector) monitor succeeded"
return $OCF_SUCCESS
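The collector fix above reads the parent PID from the pidfile, resolves the worker with `pgrep -P`, then greps `netstat -punt` output for an ESTABLISHED connection on the AMQP port owned by that PID. The matching step can be sketched as a pure function over netstat-style text (the sample line format in the test is an assumption; real `netstat` output varies by platform):

```python
def amqp_connection_established(netstat_output, port, pid):
    # Mirrors: netstat -punt | grep "$port" | grep "$pid" | grep -qs ESTABLISHED
    port_s, pid_s = str(port), str(pid)
    return any(
        port_s in line and pid_s in line and "ESTABLISHED" in line
        for line in netstat_output.splitlines()
    )
```

Like the shell pipeline, this is a plain substring match, so a PID or port that happens to appear elsewhere on the line (e.g. inside an address) would also match; the original grep chain has the same limitation.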

Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -183,7 +183,7 @@ ceilometer_api_validate() {
local rc
check_binary $OCF_RESKEY_binary
- check_binary netstat
+ check_binary lsof
ceilometer_api_check_port $OCF_RESKEY_api_listen_port
# A config file on shared storage that is not available
@@ -244,7 +244,7 @@ ceilometer_api_monitor() {
# Check the connections according to the PID.
# We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -apunt | grep -s "$OCF_RESKEY_api_listen_port" | grep -s "$pid" | grep -qs "LISTEN"`
+ scheduler_amqp_check=`lsof -nPp ${pid} | grep -s ":${OCF_RESKEY_api_listen_port}\s\+(LISTEN)"`
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "API is not listening for connections: $rc"
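The ceilometer-api patch swaps `netstat` for `lsof` and matches `:PORT (LISTEN)` in `lsof -nPp $pid` output, which scopes the check to the monitored process instead of grepping a global socket table. The grep pattern can be sketched and exercised against a canned lsof-style line (the line format is assumed from typical `lsof -nP` output):

```python
import re


def is_listening(lsof_output, port):
    # Mirrors: lsof -nPp $pid | grep ":${port}\s\+(LISTEN)"
    pattern = re.compile(r':%d\s+\(LISTEN\)' % int(port))
    return any(pattern.search(line) for line in lsof_output.splitlines())
```

Because the pattern requires whitespace after the port digits, port 877 does not false-match a socket listening on 8777.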

Index: git/ocf/ceilometer-agent-central
===================================================================
--- git.orig/ocf/ceilometer-agent-central
+++ git/ocf/ceilometer-agent-central
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-agent-central"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
Index: git/ocf/ceilometer-agent-notification
===================================================================
--- git.orig/ocf/ceilometer-agent-notification
+++ git/ocf/ceilometer-agent-notification
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-agent-notification"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-api"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_api_listen_port_default="8777"

Index: git/ocf/ceilometer-agent-central
===================================================================
--- git.orig/ocf/ceilometer-agent-central
+++ git/ocf/ceilometer-agent-central
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-agent-central"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer Cen
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Central Agent Service (ceilometer-agent-central) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Central Agent (ceilometer-agent-central registry) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer Central Agent Service (ceilometer-agent-central)
@@ -247,6 +258,7 @@ ceilometer_agent_central_start() {
# run the actual ceilometer-agent-central daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
Index: git/ocf/ceilometer-agent-notification
===================================================================
--- git.orig/ocf/ceilometer-agent-notification
+++ git/ocf/ceilometer-agent-notification
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-agent-notification"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer Cen
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Notification Agent Service (ceilometer-agent-notification) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Notification Agent (ceilometer-agent-notification) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer Central Agent Service (ceilometer-agent-notification)
@@ -247,6 +258,7 @@ ceilometer_agent_notification_start() {
# run the actual ceilometer-agent-notification daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-api"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_api_listen_port_default="8777"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_api_listen_port=${OCF_RESKEY_api_listen_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer API
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer API Service (ceilometer-api) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer API (ceilometer-api) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer API Service (ceilometer-api)
@@ -257,6 +268,7 @@ ceilometer_api_start() {
# run the actual ceilometer-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
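The `: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}` lines added above rely on a standard OCF defaulting idiom. A minimal standalone sketch (variable values here are taken from the patch; nothing else is assumed):

```shell
#!/bin/sh
# ':' is a no-op command, and ${var=default} assigns only when var is
# unset, so a value supplied by the cluster manager wins over the
# built-in default.
OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"

: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}

echo "$OCF_RESKEY_pipeline"   # prints the default, since nothing set it
```

If the cluster manager had exported `OCF_RESKEY_pipeline` before the agent ran, the `: ${...=...}` line would leave that value untouched.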

--- a/ocf/cinder-volume
+++ b/ocf/cinder-volume
@@ -221,10 +221,73 @@ cinder_volume_status() {
fi
}
+cinder_volume_get_service_status() {
+ source /etc/nova/openrc
+ python - <<'EOF'
+from __future__ import print_function
+
+from cinderclient import client as cinder_client
+import keyring
+from keystoneclient import session as keystone_session
+from keystoneclient.auth.identity import v3
+import os
+import sys
+
+DEFAULT_OS_VOLUME_API_VERSION = 2
+CINDER_CLIENT_TIMEOUT_SEC = 3
+
+def create_cinder_client():
+ password = keyring.get_password('CGCS', os.environ['OS_USERNAME'])
+ auth = v3.Password(
+ user_domain_name=os.environ['OS_USER_DOMAIN_NAME'],
+ username = os.environ['OS_USERNAME'],
+ password = password,
+ project_domain_name = os.environ['OS_PROJECT_DOMAIN_NAME'],
+ project_name = os.environ['OS_PROJECT_NAME'],
+ auth_url = os.environ['OS_AUTH_URL'])
+ session = keystone_session.Session(auth=auth)
+ return cinder_client.Client(
+ DEFAULT_OS_VOLUME_API_VERSION,
+ username = os.environ['OS_USERNAME'],
+ auth_url = os.environ['OS_AUTH_URL'],
+ region_name=os.environ['OS_REGION_NAME'],
+ session = session, timeout = CINDER_CLIENT_TIMEOUT_SEC)
+
+def service_is_up(s):
+ return s.state == 'up'
+
+def cinder_volume_service_status(cc):
+ services = cc.services.list(
+ host='controller',
+ binary='cinder-volume')
+ if not len(services):
+ return (False, False)
+ exists, is_up = (True, service_is_up(services[0]))
+ for s in services[1:]:
+ # attempt to merge statuses
+ if is_up != service_is_up(s):
+ raise Exception(('Found multiple cinder-volume '
+ 'services with different '
+ 'statuses: {}').format(
+ [s.to_dict() for s in services]))
+ return (exists, is_up)
+
+try:
+ status = cinder_volume_service_status(
+ create_cinder_client())
+ print(('exists={0[0]}\n'
+ 'is_up={0[1]}').format(status))
+except Exception as e:
+ print(str(e), file=sys.stderr)
+ sys.exit(1)
+EOF
+}
+
cinder_volume_monitor() {
local rc
local pid
local volume_amqp_check
+ local check_service_status=$1; shift
cinder_volume_status
rc=$?
@@ -279,6 +342,46 @@ cinder_volume_monitor() {
touch $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ if [ $check_service_status == "check-service-status" ]; then
+ local retries_left
+ local retry_interval
+
+ retries_left=3
+ retry_interval=3
+ while [ $retries_left -gt 0 ]; do
+ retries_left=`expr $retries_left - 1`
+ status=$(cinder_volume_get_service_status)
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Unable to get Cinder Volume status"
+ if [ $retries_left -gt 0 ]; then
+ sleep $retry_interval
+ continue
+ else
+ return $OCF_ERR_GENERIC
+ fi
+ fi
+
+ local exists
+ local is_up
+ eval $status
+
+ if [ "$exists" == "True" ] && [ "$is_up" == "False" ]; then
+ ocf_log err "Cinder Volume service status is down"
+ if [ $retries_left -gt 0 ]; then
+ sleep $retry_interval
+ continue
+ else
+ ocf_log info "Trigger Cinder Volume guru meditation report"
+ ocf_run kill -s USR2 $pid
+ return $OCF_ERR_GENERIC
+ fi
+ fi
+
+ break
+ done
+ fi
+
ocf_log debug "OpenStack Cinder Volume (cinder-volume) monitor succeeded"
return $OCF_SUCCESS
}
@@ -386,7 +489,7 @@ cinder_volume_stop() {
# SIGTERM didn't help either, try SIGKILL
ocf_log info "OpenStack Cinder Volume (cinder-volume) failed to stop after ${shutdown_timeout}s \
using SIGTERM. Trying SIGKILL ..."
- ocf_run kill -s KILL $pid
+ ocf_run kill -s KILL -$pid
fi
cinder_volume_confirm_stop
@@ -414,7 +517,7 @@ case "$1" in
start) cinder_volume_start;;
stop) cinder_volume_stop;;
status) cinder_volume_status;;
- monitor) cinder_volume_monitor;;
+ monitor) cinder_volume_monitor "check-service-status";;
validate-all) ;;
*) usage
exit $OCF_ERR_UNIMPLEMENTED;;
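The Python heredoc above hands its result back to the shell monitor by printing `exists=...` / `is_up=...` assignments, which the caller turns into variables with `eval $status`. A minimal standalone sketch of that handoff, with a stub standing in for `cinder_volume_get_service_status`:

```shell
#!/bin/sh
# The helper prints shell assignments on stdout; the caller evals them.
# get_service_status is a hypothetical stub for the real Python heredoc.
get_service_status() {
    printf 'exists=%s\nis_up=%s\n' "True" "False"
}

status=$(get_service_status)
rc=$?
if [ $rc -ne 0 ]; then
    echo "unable to get status"
    exit 1
fi

eval "$status"

if [ "$exists" = "True" ] && [ "$is_up" = "False" ]; then
    echo "service exists but is down"
fi
```

This is why the Python side prints only `key=value` pairs on stdout and sends its error text to stderr: anything else on stdout would be evaluated as shell code.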

Index: git/ocf/cinder-volume
===================================================================
--- git.orig/ocf/cinder-volume
+++ git/ocf/cinder-volume
@@ -224,6 +224,13 @@ cinder_volume_monitor() {
pid=`cat $OCF_RESKEY_pid`
if ocf_is_true "$OCF_RESKEY_multibackend"; then
+ pids=`ps -o pid --no-headers --ppid $pid`
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "No child processes from Cinder Volume (yet...): $rc"
+ return $OCF_NOT_RUNNING
+ fi
+
# Grab the child's PIDs
for i in `ps -o pid --no-headers --ppid $pid`
do

Index: git/ocf/cinder-volume
===================================================================
--- git.orig/ocf/cinder-volume
+++ git/ocf/cinder-volume
@@ -55,6 +55,20 @@ OCF_RESKEY_multibackend_default="false"
#######################################################################
+#######################################################################
+
+#
+# The following file is used to determine if Cinder-Volume should be
+# failed if the AMQP check does not pass. Cinder-Volume initializes
+# its backend before connecting to Rabbit. In Ceph configurations,
+# Cinder-Volume will not connect to Rabbit until the storage blades
+# are provisioned (this can take a long time, no need to restart the
+# process over and over again).
+VOLUME_FAIL_ON_AMQP_CHECK_FILE="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.fail_on_amqp_check"
+
+#######################################################################
+
+
usage() {
cat <<UEND
usage: $0 (start|stop|validate-all|meta-data|status|monitor)
@@ -237,8 +251,13 @@ cinder_volume_monitor() {
volume_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$i" | grep -qs "ESTABLISHED"`
rc=$?
if [ $rc -ne 0 ]; then
- ocf_log err "This child process from Cinder Volume is not connected to the AMQP server: $rc"
- return $OCF_NOT_RUNNING
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ]; then
+ ocf_log err "This child process from Cinder Volume is not connected to the AMQP server: $rc"
+ return $OCF_NOT_RUNNING
+ else
+ ocf_log info "Cinder Volume initializing, child process is not connected to the AMQP server: $rc"
+ return $OCF_SUCCESS
+ fi
fi
done
else
@@ -248,11 +267,18 @@ cinder_volume_monitor() {
volume_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
rc=$?
if [ $rc -ne 0 ]; then
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ]; then
ocf_log err "Cinder Volume is not connected to the AMQP server: $rc"
return $OCF_NOT_RUNNING
+ else
+ ocf_log info "Cinder Volume initializing, not connected to the AMQP server: $rc"
+ return $OCF_SUCCESS
+ fi
fi
fi
+ touch $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+
ocf_log debug "OpenStack Cinder Volume (cinder-volume) monitor succeeded"
return $OCF_SUCCESS
}
@@ -260,6 +286,10 @@ cinder_volume_monitor() {
cinder_volume_start() {
local rc
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
cinder_volume_status
rc=$?
if [ $rc -eq $OCF_SUCCESS ]; then
@@ -293,6 +323,10 @@ cinder_volume_confirm_stop() {
local my_bin
local my_processes
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
my_binary=`which ${OCF_RESKEY_binary}`
my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
@@ -307,6 +341,10 @@ cinder_volume_stop() {
local rc
local pid
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
cinder_volume_status
rc=$?
if [ $rc -eq $OCF_NOT_RUNNING ]; then
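The flag-file mechanism in the patch above is a latch: the monitor only treats a missing AMQP connection as a failure after at least one monitor pass has succeeded, so a backend that is still initializing is not restarted pointlessly. A standalone sketch of the same pattern (the flag path and the `amqp_connected` stub are hypothetical, not from the agent):

```shell
#!/bin/sh
# Fail-on-check latch: tolerate a failing check until the first success,
# then treat any later failure as fatal.
FLAG="${TMPDIR:-/tmp}/amqp_check_passed.$$"

amqp_connected() {
    # hypothetical stand-in for the netstat ESTABLISHED check
    [ "${AMQP_UP:-0}" = "1" ]
}

monitor() {
    if ! amqp_connected; then
        if [ -e "$FLAG" ]; then
            echo "fail: lost AMQP connection"
            return 1
        fi
        echo "ok: still initializing"
        return 0
    fi
    touch "$FLAG"
    echo "ok: connected"
    return 0
}

AMQP_UP=0 monitor     # before first success: tolerated
AMQP_UP=1 monitor     # first success latches the flag file
AMQP_UP=0 monitor     # after a success: now reported as a failure
rm -f "$FLAG"
```

Note that the agent removes the flag in start, stop, and confirm-stop so each service lifetime begins unlatched, mirroring the `rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE` hunks above.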

From 3ba260dbc2d69a797c8deb55ff0871e752dddebd Mon Sep 17 00:00:00 2001
From: Chris Friesen <chris.friesen@windriver.com>
Date: Tue, 11 Aug 2015 18:48:45 -0400
Subject: [PATCH] CGTS-1851: enable multiple nova-conductor workers
Enable multiple nova-conductor workers by properly handling
the fact that when there are multiple workers the first one just
coordinates the others and doesn't itself connect to AMQP or the DB.
This also fixes up a bunch of whitespace issues, replacing a number
of hard tabs with spaces to make it easier to follow the code.
---
ocf/nova-conductor | 58 ++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 41 insertions(+), 17 deletions(-)
diff --git a/ocf/nova-conductor b/ocf/nova-conductor
index aa1ee2a..25e5f8f 100644
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -239,6 +239,18 @@ nova_conductor_status() {
fi
}
+check_port() {
+ local port=$1
+ local pid=$2
+ netstat -punt | grep -s "$port" | grep -s "$pid" | grep -qs "ESTABLISHED"
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return 0
+ else
+ return 1
+ fi
+}
+
nova_conductor_monitor() {
local rc
local pid
@@ -258,24 +270,36 @@ nova_conductor_monitor() {
# Check the connections according to the PID.
# We are sure to hit the conductor process and not other nova process with the same connection behavior (for example nova-cert)
if ocf_is_true "$OCF_RESKEY_zeromq"; then
- pid=`cat $OCF_RESKEY_pid`
- conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_db=$?
- if [ $rc_db -ne 0 ]; then
- ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
- return $OCF_NOT_RUNNING
- fi
- else
pid=`cat $OCF_RESKEY_pid`
- conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_db=$?
- conductor_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_amqp=$?
- if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
- ocf_log err "Nova Conductor is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
- return $OCF_NOT_RUNNING
- fi
- fi
+        check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ else
+ pid=`cat $OCF_RESKEY_pid`
+        check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+        check_port $OCF_RESKEY_amqp_server_port $pid; rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ # may have multiple workers, in which case $pid is the parent and we want to check the children
+ # If there are no children or at least one child is not connected to both DB and AMQP then we fail.
+ KIDPIDS=`pgrep -P $pid -f nova-conductor`
+ if [ ! -z "$KIDPIDS" ]; then
+ for pid in $KIDPIDS
+ do
+                    check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+                    check_port $OCF_RESKEY_amqp_server_port $pid; rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor pid $pid is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ done
+ else
+ ocf_log err "Nova Conductor pid $pid is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ fi
+ fi
ocf_log debug "OpenStack Nova Conductor (nova-conductor) monitor succeeded"
return $OCF_SUCCESS
--
1.9.1
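One subtlety in shell monitors like `check_port` above: a helper that reports pass/fail via `return` must have its result read from `$?`, because command substitution captures only stdout. A minimal standalone demonstration (the `is_established` stub is hypothetical):

```shell
#!/bin/sh
# Command substitution captures a function's stdout, not its exit
# status, so a silent check read via $(...) always yields an empty
# string regardless of whether it passed.
is_established() {
    # hypothetical stand-in for:
    #   netstat -punt | grep -s "$port" | grep -s "$pid" | grep -qs ESTABLISHED
    return 1
}

out=$(is_established)      # captures stdout: empty, the function prints nothing
is_established; rc=$?      # captures the exit status we actually want: 1

echo "out='$out' rc=$rc"   # prints: out='' rc=1
```

An empty string in a later `[ $rc -ne 0 ]` test would make the comparison itself error out rather than detect the failure, which is why the distinction matters in a monitor path.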

---
ocf/glance-api | 3 +++
1 file changed, 3 insertions(+)
--- a/ocf/glance-api
+++ b/ocf/glance-api
@@ -243,6 +243,9 @@ glance_api_monitor() {
return $rc
fi
+ ### DPENNEY: Bypass monitor until keyring functionality is ported
+ return $OCF_SUCCESS
+
# Monitor the RA by retrieving the image list
if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \

Index: git/ocf/glance-api
===================================================================
--- git.orig/ocf/glance-api
+++ git/ocf/glance-api
@@ -249,7 +249,7 @@ glance_api_monitor() {
--os_username "$OCF_RESKEY_os_username" \
--os_tenant_name "$OCF_RESKEY_os_tenant_name" \
--os_auth_url "$OCF_RESKEY_os_auth_url" \
- index > /dev/null 2>&1
+ image-list > /dev/null 2>&1
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "Failed to connect to the OpenStack ImageService (glance-api): $rc"

Index: git/ocf/heat-api-cloudwatch
===================================================================
--- /dev/null
+++ git/ocf/heat-api-cloudwatch
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api-cloudwatch) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api-cloudwatch"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8000"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api-cloudwatch) process as an HA resource
+
+ The 'start' operation starts the heat-api-cloudwatch service.
+ The 'stop' operation stops the heat-api-cloudwatch service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api-cloudwatch service is running
+ The 'monitor' operation reports whether the heat-api-cloudwatch service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api-cloudwatch">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+May manage a heat-api-cloudwatch instance or a clone set that
+creates a distributed heat-api-cloudwatch cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api-cloudwatch)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api-cloudwatch)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api-cloudwatch) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api-cloudwatch) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cloudwatch) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api-cloudwatch) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cloudwatch) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api-cloudwatch server.
+
+</longdesc>
+<shortdesc lang="en">heat-api-cloudwatch listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api-cloudwatch</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_cloudwatch_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+heat_api_cloudwatch_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_cloudwatch_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+heat_api_cloudwatch_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api-cloudwatch) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+heat_api_cloudwatch_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_cloudwatch_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+    engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+    rc_db=$?
+    if [ $rc_db -ne 0 ]; then
+        ocf_log err "heat-api-cloudwatch is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cloudwatch) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_cloudwatch_start() {
+ local rc
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api-cloudwatch daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_cloudwatch_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cloudwatch) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) started"
+ return $OCF_SUCCESS
+}
+
+heat_api_cloudwatch_stop() {
+ local rc
+ local pid
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cloudwatch) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cloudwatch) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_cloudwatch_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_cloudwatch_start;;
+ stop) heat_api_cloudwatch_stop;;
+ status) heat_api_cloudwatch_status;;
+ monitor) heat_api_cloudwatch_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
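The shutdown path above derives its SIGTERM grace period from the cluster manager's stop timeout: `$OCF_RESKEY_CRM_meta_timeout` arrives in milliseconds, and the agent keeps a 5-second margin so it can still escalate to SIGKILL before the CRM times the stop action out. A standalone sketch of the arithmetic (the 20000 ms value is a hypothetical example, not from the agent):

```shell
#!/bin/sh
# Derive a SIGTERM wait from the CRM stop timeout, leaving a 5s margin
# for the SIGKILL escalation. Falls back to 15s when the CRM supplies
# no timeout.
OCF_RESKEY_CRM_meta_timeout=20000   # hypothetical 20s stop timeout, in ms

shutdown_timeout=15
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
    shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi

echo "$shutdown_timeout"   # prints: 15  (20000 ms -> 20 s, minus the 5 s margin)
```

Keeping the margin smaller than the CRM timeout is what guarantees the agent, not the cluster manager, decides how the process dies.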
