Creating folder structure for STX-O upversion

In preparation for the STX-OpenStack upversion to Antelope, the
OpenStack packages need to be decoupled from the Platform ones, so
that development can occur without impacting the current version. This
decoupling allows the platform and the applications to leverage
different OpenStack releases for their respective packages.

In this upversion process, we can't upversion one package at a time,
since that would certainly break the build, testing and usage of
StarlingX. We need a way for the STX-O development to occur without
impacting the rest of the platform.

To achieve this, a new folder named "openstack" has been created
under the openstack-armada-app repo, containing all packages from the
upstream repository's openstack folder [1].
This ensures that the ongoing upversion process does not disrupt the
platform's functionality and prevents potential breakage during the
transition.
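
A minimal sketch of the resulting layout (package directory names are
taken from the debian_pkg_dirs entries in this change; only two
packages are shown, and the nested debian/ folders are an assumption
for illustration):

```shell
# Recreate a small slice of the assumed upstream/openstack layout
# and list the directories it produces.
mkdir -p upstream/openstack/barbican/debian
mkdir -p upstream/openstack/keystone/debian

find upstream -maxdepth 3 -type d | sort
```

Each package directory is self-contained, so individual packages can
be enabled or disabled without touching the rest of the tree.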

It should be noted that this change only serves as a placeholder for
future upversions and does not impact the current packages, as they
will not be delivered to the ISO or to the STX-O Helm charts until
the upversion is completed.

This commit lays the groundwork for future work on the STX-O upversion,
promoting flexibility and modularization within the StarlingX packages.

The SRC_DIR and the BASE_SRCREV were updated for each package so that
the build works. The BASE_SRCREV was set to the latest change merged
by the time this review was created [2].
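
For reference, the build metadata for barbican (the full file appears
later in this change) carries that revision information as:

```yaml
revision:
  dist: $STX_DIST
  GITREVCOUNT:
    BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
    SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/barbican
```

The git revision count is computed from BASE_SRCREV over SRC_DIR, so
pointing SRC_DIR at the new upstream/openstack location keeps package
versioning consistent with the relocated sources.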

After the work on the upversion and the STX-O clients containerization
is done, task 48138 (under this same story) can be worked on to
completely remove the STX-O clients from the platform.

[1] https://opendev.org/starlingx/upstream/src/branch/master/openstack
[2] https://review.opendev.org/c/starlingx/openstack-armada-app/+/885301

Test Plan:
PASS: Make sure that the packages are not found if they're commented
      out in the debian_pkg_dirs file
PASS: Make sure that the images are not found if they're commented
      out in the debian_stable_docker_images.inc file
PASS: Make sure packages can be built:
      - openstack-pkg-tools
      - rabbitmq-server
      - python-wsme
      - openstack-ras
      - python-openstacksdk
      - python-oslo-messaging
      - python-osc-lib
      - barbican
      - keystone
      - horizon
      - python-aodhclient
      - python-barbicanclient
      - python-cinderclient
      - python-glanceclient
      - python-gnocchiclient
      - python-heatclient
      - python-ironicclient
      - python-keystoneclient
      - python-neutronclient
      - python-novaclient
      - python-openstackclient
      - python-pankoclient
PASS: Make sure images can be built:
      - stx-aodh
      - stx-ironic
      - stx-barbican
      - stx-ceilometer
      - stx-cinder
      - stx-glance
      - stx-gnocchi
      - stx-heat
      - stx-horizon
      - stx-keystone
      - stx-neutron
      - stx-nova
      - stx-openstackclients
      - stx-placement
      - stx-platformclients
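
The first two test items rely on the build scanner skipping commented
entries in the package and image lists; the mechanism can be sketched
as follows (the grep-based filtering is an assumption for
illustration, not the actual StarlingX build tooling):

```shell
# Build a sample debian_pkg_dirs with one active and one
# commented-out entry, mirroring this change.
cat > debian_pkg_dirs <<'EOF'
python3-k8sapp-openstack
#upstream/openstack/barbican
EOF

# Only uncommented, non-blank entries should be visible to the build.
grep -v -e '^[[:space:]]*#' -e '^[[:space:]]*$' debian_pkg_dirs
```

With every upstream/openstack entry commented out, the new packages
stay invisible to the build until the upversion work enables them.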

Story: 2010774
Task: 48115

Change-Id: I077a814382fb21bd4b36fac7c20ee041718433f3
Signed-off-by: Lucas de Ataides <lucas.deataidesbarreto@windriver.com>
Lucas de Ataides 2023-06-13 15:12:51 -03:00
parent 554b9cd26d
commit f7f7690444
176 changed files with 13185 additions and 0 deletions

@@ -2,3 +2,25 @@ openstack-helm
openstack-helm-infra
python3-k8sapp-openstack
stx-openstack-helm-fluxcd
#upstream/openstack/barbican
#upstream/openstack/keystone
#upstream/openstack/openstack-pkg-tools
#upstream/openstack/openstack-ras
#upstream/openstack/python-aodhclient
#upstream/openstack/python-barbicanclient
#upstream/openstack/python-cinderclient
#upstream/openstack/python-glanceclient
#upstream/openstack/python-gnocchiclient
#upstream/openstack/python-heatclient
#upstream/openstack/python-horizon
#upstream/openstack/python-ironicclient
#upstream/openstack/python-keystoneclient
#upstream/openstack/python-neutronclient
#upstream/openstack/python-novaclient
#upstream/openstack/python-openstackclient
#upstream/openstack/python-openstacksdk
#upstream/openstack/python-osc-lib
#upstream/openstack/python-oslo-messaging
#upstream/openstack/python-pankoclient
#upstream/openstack/python-wsme
#upstream/openstack/rabbitmq-server

@@ -0,0 +1,14 @@
#upstream/openstack/openstack-aodh
#upstream/openstack/openstack-ironic
#upstream/openstack/python-barbican
#upstream/openstack/python-ceilometer
#upstream/openstack/python-cinder
#upstream/openstack/python-glance
#upstream/openstack/python-gnocchi
#upstream/openstack/python-heat/openstack-heat
#upstream/openstack/python-horizon
#upstream/openstack/python-keystone
#upstream/openstack/python-neutron
#upstream/openstack/python-nova
#upstream/openstack/python-openstackclient
#upstream/openstack/python-placement

@@ -0,0 +1,5 @@
Openstack Upstream
==================
This folder contains the required repositories to build the
STX-Openstack packages, clients and container images.

@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/barbican
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -0,0 +1,297 @@
From cb87c126b41efdc0956c5e9e9350a9edf8129f3d Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Mon, 22 Nov 2021 14:46:16 +0000
Subject: [PATCH] Remove dbconfig and openstack-pkg-tools config
Remove the dbconfig and openstack-pkg-tools post configuration
since we use puppet to configure the services and doing
both will lead the problems with integration.
Story: 2009101
Task: 44026
Signed-off-by: Charles Short <charles.short@windriver.com>
diff -Naurp barbican-11.0.0.orig/debian/barbican-api.config.in barbican-11.0.0/debian/barbican-api.config.in
--- barbican-11.0.0.orig/debian/barbican-api.config.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-api.config.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,12 +0,0 @@
-#!/bin/sh
-
-set -e
-
-. /usr/share/debconf/confmodule
-
-#PKGOS-INCLUDE#
-
-pkgos_register_endpoint_config barbican
-db_go
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-api.postinst.in barbican-11.0.0/debian/barbican-api.postinst.in
--- barbican-11.0.0.orig/debian/barbican-api.postinst.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-api.postinst.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,17 +0,0 @@
-#!/bin/sh
-
-set -e
-
-#PKGOS-INCLUDE#
-
-if [ "$1" = "configure" ] || [ "$1" = "reconfigure" ] ; then
- . /usr/share/debconf/confmodule
- . /usr/share/dbconfig-common/dpkg/postinst
-
- pkgos_register_endpoint_postinst barbican barbican key-manager "Barbican Key Management Service" 9311 ""
- db_stop
-fi
-
-#DEBHELPER#
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.config.in barbican-11.0.0/debian/barbican-common.config.in
--- barbican-11.0.0.orig/debian/barbican-common.config.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.config.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,17 +0,0 @@
-#!/bin/sh
-
-set -e
-
-. /usr/share/debconf/confmodule
-CONF=/etc/barbican/barbican.conf
-API_CONF=/etc/barbican/barbican-api-paste.ini
-
-#PKGOS-INCLUDE#
-
-pkgos_var_user_group barbican
-pkgos_dbc_read_conf -pkg barbican-common ${CONF} DEFAULT sql_connection barbican $@
-pkgos_rabbit_read_conf ${CONF} DEFAULT barbican
-pkgos_read_admin_creds ${CONF} keystone_authtoken barbican
-db_go
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.install barbican-11.0.0/debian/barbican-common.install
--- barbican-11.0.0.orig/debian/barbican-common.install 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.install 2021-11-26 17:57:04.417749768 +0000
@@ -1,2 +1,5 @@
bin/barbican-api /usr/bin
usr/bin/*
+etc/barbican/barbican-api-paste.ini etc/barbican
+etc/barbican/barbican.conf etc/barbican
+etc/barbican/vassals/barbican-api.ini etc/barbican/vassals
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.posinst barbican-11.0.0/debian/barbican-common.posinst
--- barbican-11.0.0.orig/debian/barbican-common.posinst 1970-01-01 00:00:00.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.posinst 2021-11-26 17:11:12.770838698 +0000
@@ -0,0 +1,28 @@
+#!/bin/sh
+
+set -e
+
+set -e
+
+if [ "$1" = "configure" ]; then
+ if ! getent group barbican > /dev/null 2>&1; then
+ addgroup --system barbican >/dev/null
+ fi
+
+ if ! getent passwd barbican > /dev/null 2>&1; then
+ adduser --system --home /var/lib/barbican --ingroup barbican --no-create-home --shell /bin/false barbican
+ fi
+
+ chown barbican:adm /var/log/barbican
+ chmod 0750 /var/log/barbican
+
+ find /etc/barbican -exec chown root:barbican "{}" +
+ find /etc/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
+
+ find /var/lib/barbican -exec chown barbican:barbican "{}" +
+ find /var/lib/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
+fi
+
+#DEBHELPER#
+
+exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.postinst.in barbican-11.0.0/debian/barbican-common.postinst.in
--- barbican-11.0.0.orig/debian/barbican-common.postinst.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.postinst.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,46 +0,0 @@
-#!/bin/sh
-
-set -e
-
-CONF=/etc/barbican/barbican.conf
-API_CONF=/etc/barbican/barbican-api-paste.ini
-
-#PKGOS-INCLUDE#
-
-if [ "$1" = "configure" ] || [ "$1" = "reconfigure" ] ; then
- . /usr/share/debconf/confmodule
- . /usr/share/dbconfig-common/dpkg/postinst
-
- pkgos_var_user_group barbican
- mkdir -p /var/lib/barbican/temp
- chown barbican:barbican /var/lib/barbican/temp
-
- pkgos_write_new_conf barbican api_audit_map.conf
- pkgos_write_new_conf barbican barbican-api-paste.ini
- pkgos_write_new_conf barbican barbican.conf
- pkgos_write_new_conf barbican barbican-functional.conf
- if [ -r /etc/barbican/policy.json ] ; then
- mv /etc/barbican/policy.json /etc/barbican/disabled.policy.json.old
- fi
-
- db_get barbican/configure_db
- if [ "$RET" = "true" ]; then
- pkgos_dbc_postinst ${CONF} DEFAULT sql_connection barbican $@
- fi
-
- pkgos_rabbit_write_conf ${CONF} DEFAULT barbican
- pkgos_write_admin_creds ${CONF} keystone_authtoken barbican
-
- db_get barbican/configure_db
- if [ "$RET" = "true" ]; then
- echo "Now calling barbican-db-manage upgrade: this may take a while..."
-# echo "TODO: barbican-db-manage upgrade: Disabled for now..."
- su -s /bin/sh -c 'barbican-db-manage upgrade' barbican
- fi
-
- db_stop
-fi
-
-#DEBHELPER#
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.postrm barbican-11.0.0/debian/barbican-common.postrm
--- barbican-11.0.0.orig/debian/barbican-common.postrm 1970-01-01 00:00:00.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.postrm 2021-11-26 17:11:12.774838632 +0000
@@ -0,0 +1,14 @@
+#!/bin/sh
+
+set -e
+
+if [ "$1" = "purge" ] ; then
+ echo "Purging barbican. Backup of /var/lib/barbican can be found at /var/lib/barbican.tar.bz2" >&2
+ [ -e /var/lib/barbican ] && rm -rf /var/lib/barbican
+ [ -e /var/log/barbican ] && rm -rf /var/log/barbican
+fi
+
+
+#DEBHELPER#
+
+exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.postrm.in barbican-11.0.0/debian/barbican-common.postrm.in
--- barbican-11.0.0.orig/debian/barbican-common.postrm.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.postrm.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,25 +0,0 @@
-#!/bin/sh
-
-set -e
-
-#PKGOS-INCLUDE#
-
-if [ "$1" = "purge" ] ; then
- # Purge the db
- pkgos_dbc_postrm barbican barbican-common $@
-
- # Purge config files copied in postinst
- for i in barbican.conf barbican-admin-paste.ini barbican-api.conf barbican-api-paste.ini barbican-functional.conf policy.json api_audit_map.conf ; do
- rm -f /etc/barbican/$i
- done
- # and the folders
- rmdir --ignore-fail-on-non-empty /etc/barbican || true
-
- echo "Purging barbican. Backup of /var/lib/barbican can be found at /var/lib/barbican.tar.bz2" >&2
- [ -e /var/lib/barbican ] && rm -rf /var/lib/barbican
- [ -e /var/log/barbican ] && rm -rf /var/log/barbican
-fi
-
-#DEBHELPER#
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/control barbican-11.0.0/debian/control
--- barbican-11.0.0.orig/debian/control 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/control 2021-11-26 17:11:12.774838632 +0000
@@ -96,7 +96,6 @@ Package: barbican-common
Architecture: all
Depends:
adduser,
- dbconfig-common,
debconf,
python3-barbican (= ${binary:Version}),
${misc:Depends},
diff -Naurp barbican-11.0.0.orig/debian/rules barbican-11.0.0/debian/rules
--- barbican-11.0.0.orig/debian/rules 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/rules 2021-11-26 17:56:48.926004150 +0000
@@ -3,22 +3,12 @@
include /usr/share/openstack-pkg-tools/pkgos.make
%:
- dh $@ --buildsystem=python_distutils --with python3,systemd,sphinxdoc
+ dh $@ --buildsystem=pybuild --with python3,systemd,sphinxdoc
override_dh_auto_clean:
rm -f debian/*.init debian/*.service debian/*.upstart
rm -rf build
rm -rf barbican.sqlite
- rm -f debian/barbican-api.postinst debian/barbican-api.config debian/barbican-common.postinst debian/barbican-common.config debian/barbican-common.postrm
-
-override_dh_auto_build:
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-api.postinst
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-api.config
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-common.postinst
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-common.config
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_postrm barbican-common.postrm
- pkgos-merge-templates barbican-api barbican endpoint
- pkgos-merge-templates barbican-common barbican db rabbit ksat
override_dh_auto_test:
echo "Do nothing..."
@@ -35,46 +25,9 @@ ifeq (,$(findstring nocheck, $(DEB_BUILD
pkgos-dh_auto_test --no-py2 'barbican\.tests\.(?!(.*common.test_utils\.WhenTestingAcceptEncodingGetter\.test_get_correct_fullname_for_class.*|.*common\.test_utils\.WhenTestingGenerateFullClassnameForInstance\.test_returns_qualified_name.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificateEventPluginManager\.test_get_plugin_by_name.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificatePluginManager\.test_get_plugin_by_ca_id.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificatePluginManager\.test_get_plugin_by_name.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificatePluginManager\.test_refresh_ca_list.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_delete_secret_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_generate_asymmetric_key_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_generate_symmetric_key_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_opaque.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_private_key.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_public_key.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_symmetric.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_store_private_key_secret_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_store_symmetric_secret_assert_called.*|.*tasks\.test_keystone_consumer\.WhenUsingKeystoneEventConsumerProcessMethod\.test_existing_project_entities_cleanup_for_plain_secret.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_credential.*|.*test_hacking\.HackingTestCase\.test_logging_with_tuple_argument.*|.*common\.test_validators\.WhenTestingSecretMetadataValidator\.test_should_validate_all_fields_and_make_key_lowercase.*|.*test_hacking\.HackingTestCase\.test_str_on_exception.*|.*test_hacking\.HackingTestCase\.test_str_on_multiple_exceptions.*|.*test_hacking\.HackingTestCase\.test_str_unicode_on_multiple_exceptions.*|.*test_hacking\.HackingTestCase\.test_unicode_on_exception.*))'
endif
-
- # Generate the barbican.conf config using installed python-barbican files.
- mkdir -p $(CURDIR)/debian/barbican-common/usr/share/barbican-common
- PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages oslo-config-generator \
- --output-file $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf \
- --wrap-width 140 \
- --namespace barbican.certificate.plugin \
- --namespace barbican.certificate.plugin.snakeoil \
- --namespace barbican.common.config \
- --namespace barbican.plugin.crypto \
- --namespace barbican.plugin.crypto.p11 \
- --namespace barbican.plugin.crypto.simple \
- --namespace barbican.plugin.dogtag \
- --namespace barbican.plugin.secret_store \
- --namespace barbican.plugin.secret_store.kmip \
- --namespace keystonemiddleware.auth_token \
- --namespace oslo.log \
- --namespace oslo.messaging \
- --namespace oslo.middleware.cors \
- --namespace oslo.middleware.http_proxy_to_wsgi \
- --namespace oslo.policy \
- --namespace oslo.service.periodic_task \
- --namespace oslo.service.sslutils \
- --namespace oslo.service.wsgi
- pkgos-readd-keystone-authtoken-missing-options $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf keystone_authtoken barbican
-
- # Same with policy.conf
- mkdir -p $(CURDIR)/debian/barbican-common/etc/barbican/policy.d
- PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages oslopolicy-sample-generator \
- --output-file $(CURDIR)/debian/barbican-common/etc/barbican/policy.d/00_default_policy.yaml \
- --format yaml \
- --namespace barbican
-
- # Use the policy.d folder
- pkgos-fix-config-default $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf oslo_policy policy_dirs /etc/barbican/policy.d
-
-
- # Restore sanity...
- pkgos-fix-config-default $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf keystone_notifications enable True
-
+ PYTHONPATH=$(CURDIR) oslo-config-generator \
+ --config-file etc/oslo-config-generator/barbican.conf \
+ --output-file etc/barbican/barbican.conf
dh_install
rm -rf $(CURDIR)/debian/tmp/usr/etc
dh_missing --fail-missing

@@ -0,0 +1,83 @@
From 31cab241e50e2fc99f257c5e9a1a006c66b7041f Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Thu, 3 Mar 2022 19:34:02 +0000
Subject: [PATCH] Start barbican-api with gunicorn during bootstrap for Debian
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
debian/barbican-api.install | 2 +-
debian/barbican-api.service.in | 19 +++++++++++++++++++
debian/barbican-common.install | 1 +
debian/gunicorn-config.py | 16 ++++++++++++++++
4 files changed, 37 insertions(+), 1 deletion(-)
create mode 100644 debian/barbican-api.service.in
create mode 100644 debian/gunicorn-config.py
diff --git a/debian/barbican-api.install b/debian/barbican-api.install
index 05ddad9..3d8f2b4 100644
--- a/debian/barbican-api.install
+++ b/debian/barbican-api.install
@@ -1 +1 @@
-debian/barbican-api-uwsgi.ini /etc/barbican
+debian/gunicorn-config.py /etc/barbican
diff --git a/debian/barbican-api.service.in b/debian/barbican-api.service.in
new file mode 100644
index 0000000..197a281
--- /dev/null
+++ b/debian/barbican-api.service.in
@@ -0,0 +1,19 @@
+[Unit]
+Description=Openstack Barbican API server
+After=syslog.target network.target
+Before=httpd.service
+
+[Service]
+PIDFile=/run/barbican/pid
+User=barbican
+Group=barbican
+RuntimeDirectory=barbican
+RuntimeDirectoryMode=770
+ExecStart=/usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini
+ExecReload=/usr/bin/kill -s HUP $MAINPID
+ExecStop=/usr/bin/kill -s TERM $MAINPID
+StandardError=syslog
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/barbican-common.install b/debian/barbican-common.install
index 663fdc8..f1944b5 100644
--- a/debian/barbican-common.install
+++ b/debian/barbican-common.install
@@ -1,5 +1,6 @@
bin/barbican-api /usr/bin
usr/bin/*
+etc/barbican/api_audit_map.conf etc/barbican
etc/barbican/barbican-api-paste.ini etc/barbican
etc/barbican/barbican.conf etc/barbican
etc/barbican/vassals/barbican-api.ini etc/barbican/vassals
diff --git a/debian/gunicorn-config.py b/debian/gunicorn-config.py
new file mode 100644
index 0000000..c8c1e07
--- /dev/null
+++ b/debian/gunicorn-config.py
@@ -0,0 +1,16 @@
+import multiprocessing
+
+bind = '0.0.0.0:9311'
+user = 'barbican'
+group = 'barbican'
+
+timeout = 30
+backlog = 2048
+keepalive = 2
+
+workers = multiprocessing.cpu_count() * 2
+
+loglevel = 'info'
+errorlog = '-'
+accesslog = '-'
+
--
2.30.2

@@ -0,0 +1,55 @@
From a729c3af80ec8b045ba8f04dfb7db4c90ab8b9c5 Mon Sep 17 00:00:00 2001
From: Dan Voiculeasa <dan.voiculeasa@windriver.com>
Date: Thu, 31 Mar 2022 18:31:00 +0300
Subject: [PATCH 3/3] Create barbican user, group, log dir
Signed-off-by: Dan Voiculeasa <dan.voiculeasa@windriver.com>
---
debian/barbican-common.dirs | 1 +
...{barbican-common.posinst => barbican-common.postinst} | 9 +--------
2 files changed, 2 insertions(+), 8 deletions(-)
create mode 100644 debian/barbican-common.dirs
rename debian/{barbican-common.posinst => barbican-common.postinst} (52%)
diff --git a/debian/barbican-common.dirs b/debian/barbican-common.dirs
new file mode 100644
index 0000000..3a4ef46
--- /dev/null
+++ b/debian/barbican-common.dirs
@@ -0,0 +1 @@
+/var/log/barbican
diff --git a/debian/barbican-common.posinst b/debian/barbican-common.postinst
similarity index 52%
rename from debian/barbican-common.posinst
rename to debian/barbican-common.postinst
index 9cf6a4c..bcf54d1 100644
--- a/debian/barbican-common.posinst
+++ b/debian/barbican-common.postinst
@@ -2,8 +2,6 @@
set -e
-set -e
-
if [ "$1" = "configure" ]; then
if ! getent group barbican > /dev/null 2>&1; then
addgroup --system barbican >/dev/null
@@ -13,14 +11,9 @@ if [ "$1" = "configure" ]; then
adduser --system --home /var/lib/barbican --ingroup barbican --no-create-home --shell /bin/false barbican
fi
- chown barbican:adm /var/log/barbican
+ chown barbican:barbican /var/log/barbican
chmod 0750 /var/log/barbican
- find /etc/barbican -exec chown root:barbican "{}" +
- find /etc/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
-
- find /var/lib/barbican -exec chown barbican:barbican "{}" +
- find /var/lib/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
fi
#DEBHELPER#
--
2.30.0

@@ -0,0 +1,3 @@
0001-Remove-dbconfig-and-openstack-pkg-tools-config.patch
0002-Start-barbican-api-with-gunicorn-during-bootstrap-fo.patch
0003-Create-barbican-user-group-log-dir.patch

@@ -0,0 +1,12 @@
---
debname: barbican
debver: 1:11.0.0-3
dl_path:
  name: barbican-debian-11.0.0-3.tar.gz
  url: https://salsa.debian.org/openstack-team/services/barbican/-/archive/debian/11.0.0-3/barbican-debian-11.0.0-3.tar.gz
  md5sum: 44caa91c9df25e29f399a3bbdb22d375
revision:
  dist: $STX_DIST
  GITREVCOUNT:
    BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
    SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/barbican

@@ -0,0 +1,36 @@
From 754fc74974be3b854173f7ce51ed0e248eb24b03 Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Tue, 24 May 2022 10:33:02 -0400
Subject: [PATCH] Store secret data in ascii format in DB
Store secret data (plugin_meta and cypher_text) in ascii format
instead of hex format in database.
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
barbican/plugin/store_crypto.py | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/barbican/plugin/store_crypto.py b/barbican/plugin/store_crypto.py
index c13e59c..843d5a8 100644
--- a/barbican/plugin/store_crypto.py
+++ b/barbican/plugin/store_crypto.py
@@ -311,7 +311,8 @@ def _store_secret_and_datum(
# setup and store encrypted datum
datum_model = models.EncryptedDatum(secret_model, kek_datum_model)
datum_model.content_type = context.content_type
- datum_model.cypher_text = base64.b64encode(generated_dto.cypher_text)
+ datum_model.cypher_text = \
+ base64.b64encode(generated_dto.cypher_text).decode('utf-8')
datum_model.kek_meta_extended = generated_dto.kek_meta_extended
repositories.get_encrypted_datum_repository().create_from(
datum_model)
@@ -333,4 +334,4 @@ def _indicate_bind_completed(kek_meta_dto, kek_datum):
kek_datum.algorithm = kek_meta_dto.algorithm
kek_datum.bit_length = kek_meta_dto.bit_length
kek_datum.mode = kek_meta_dto.mode
- kek_datum.plugin_meta = kek_meta_dto.plugin_meta
+ kek_datum.plugin_meta = kek_meta_dto.plugin_meta.decode('utf-8')
--
2.25.1

@@ -0,0 +1 @@
0001-Store-secret-data-in-ascii-format-in-DB.patch

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/keystone
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
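
The extract-and-patch flow described above can be sketched as follows. The upstream fetch at the pinned SHA is replaced here by a tiny local fixture, and every file name is invented for illustration:

```shell
#!/bin/sh
# Illustrative sketch of "extract at a SHA, apply patches on top".
# The upstream fetch is replaced by a local fixture; all names are invented.
set -e
work=$(mktemp -d)
mkdir -p "$work/src" "$work/patches"

# Stand-in for the tree extracted at the pinned git SHA.
printf 'hello\n' > "$work/src/greeting.txt"

# A local patch, as debian/patches/* would provide, listed in a series file.
cat > "$work/patches/0001-example.patch" <<'EOF'
--- a/greeting.txt
+++ b/greeting.txt
@@ -1 +1 @@
-hello
+hello, patched
EOF
printf '0001-example.patch\n' > "$work/patches/series"

# Apply every patch in the series file, in order, as quilt/dpkg would.
cd "$work/src"
while read -r p; do
    patch -p1 < "../patches/$p"
done < ../patches/series

result=$(cat greeting.txt)
echo "$result"   # -> hello, patched
```

Once an upstream change merges, its entry is simply dropped from the series file and the pinned SHA advanced.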

View File

@ -0,0 +1,446 @@
From 6f55cd9922280ee5f4d119aa4a9924a51dea8068 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Tue, 15 Feb 2022 15:59:20 +0000
Subject: [PATCH] Add stx support
Apply CentOS 7 patches to the Debian packaging.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 2 +
debian/keystone.dirs | 1 +
debian/keystone.install | 4 +
debian/keystone.logrotate | 8 -
debian/keystone.postinst.in | 10 +-
debian/python3-keystone.install | 1 +
debian/rules | 6 +
debian/stx/keystone-all | 156 ++++++++++++++++++
debian/stx/keystone-fernet-keys-rotate-active | 64 +++++++
debian/stx/keystone.service | 14 ++
debian/stx/password-rules.conf | 34 ++++
debian/stx/public.py | 21 +++
12 files changed, 304 insertions(+), 17 deletions(-)
delete mode 100644 debian/keystone.logrotate
create mode 100644 debian/stx/keystone-all
create mode 100644 debian/stx/keystone-fernet-keys-rotate-active
create mode 100644 debian/stx/keystone.service
create mode 100644 debian/stx/password-rules.conf
create mode 100644 debian/stx/public.py
diff --git a/debian/control b/debian/control
index 9d0a3a41f..9a67234fa 100644
--- a/debian/control
+++ b/debian/control
@@ -31,6 +31,8 @@ Build-Depends-Indep:
python3-jwt,
python3-keystoneclient,
python3-keystonemiddleware (>= 7.0.0),
+ python3-keyring,
+ python3-keyrings.alt,
python3-ldap,
python3-ldappool,
python3-lxml (>= 4.5.0),
diff --git a/debian/keystone.dirs b/debian/keystone.dirs
index a4b3a9e86..6c6e31faf 100644
--- a/debian/keystone.dirs
+++ b/debian/keystone.dirs
@@ -2,3 +2,4 @@
/var/lib/keystone
/var/lib/keystone/cache
/var/log/keystone
+usr/share/keystone
diff --git a/debian/keystone.install b/debian/keystone.install
index c0d62c45b..8d68859c0 100644
--- a/debian/keystone.install
+++ b/debian/keystone.install
@@ -1,3 +1,7 @@
debian/keystone-uwsgi.ini /etc/keystone
etc/default_catalog.templates /etc/keystone
etc/logging.conf.sample /usr/share/doc/keystone
+debian/stx/keystone-fernet-keys-rotate-active usr/bin
+debian/stx/password-rules.conf /etc/keystone
+debian/stx/keystone.service lib/systemd/system
+debian/stx/keystone-all usr/bin
diff --git a/debian/keystone.logrotate b/debian/keystone.logrotate
deleted file mode 100644
index 2709c72aa..000000000
--- a/debian/keystone.logrotate
+++ /dev/null
@@ -1,8 +0,0 @@
-/var/log/keystone/*.log {
- daily
- missingok
- rotate 5
- compress
- minsize 100k
- copytruncate
-}
\ No newline at end of file
diff --git a/debian/keystone.postinst.in b/debian/keystone.postinst.in
index 207cbc22e..4b464a236 100755
--- a/debian/keystone.postinst.in
+++ b/debian/keystone.postinst.in
@@ -170,15 +170,7 @@ if [ "$1" = "configure" ] ; then
su keystone -s /bin/sh -c 'keystone-manage credential_setup --keystone-user keystone --keystone-group keystone'
fi
- chown keystone:adm /var/log/keystone
-
- if [ -n $(which systemctl)"" ] ; then
- systemctl enable keystone
- fi
- if [ -n $(which update-rc.d)"" ] ; then
- update-rc.d keystone defaults
- fi
- invoke-rc.d keystone start
+ chown -R keystone:keystone /var/log/keystone
db_get keystone/create-admin-tenant
if [ "$RET" = "true" ] ; then
diff --git a/debian/python3-keystone.install b/debian/python3-keystone.install
index 44d7fcb64..3c76ffb99 100644
--- a/debian/python3-keystone.install
+++ b/debian/python3-keystone.install
@@ -1,2 +1,3 @@
usr/bin/*
usr/lib/python3/*
+debian/stx/public.py usr/share/keystone
diff --git a/debian/rules b/debian/rules
index 3744142f9..f827d1b68 100755
--- a/debian/rules
+++ b/debian/rules
@@ -106,6 +106,12 @@ ifeq (,$(findstring nodocs, $(DEB_BUILD_OPTIONS)))
dh_installman
endif
+override_dh_installsystemd:
+ dh_installsystemd --no-enable --no-start
+
+override_dh_installinit:
+ dh_installinit --no-enable --no-start
+
override_dh_python3:
dh_python3 --shebang=/usr/bin/python3
diff --git a/debian/stx/keystone-all b/debian/stx/keystone-all
new file mode 100644
index 000000000..de339caa6
--- /dev/null
+++ b/debian/stx/keystone-all
@@ -0,0 +1,156 @@
+#!/bin/sh
+# Copyright (c) 2013-2018 Wind River Systems, Inc.
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+
+### BEGIN INIT INFO
+# Provides: OpenStack Keystone-wsgi
+# Required-Start: networking
+# Required-Stop: networking
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: OpenStack Keystone
+# Description: OpenStack Identity service running on a WSGI-compatible gunicorn web server
+#
+### END INIT INFO
+
+RETVAL=0
+#public 5000
+
+DESC_PUBLIC="openstack-keystone"
+
+PIDFILE_PUBLIC="/var/run/$DESC_PUBLIC.pid"
+
+PYTHON=`which python`
+
+source /etc/keystone/keystone-extra.conf
+source /etc/platform/platform.conf
+
+if [ -n ${@:2:1} ] ; then
+ if [ ${@:2:1}="--public-bind-addr" ] ; then
+ PUBLIC_BIND_ADDR_CMD=${@:3:1}
+ fi
+fi
+
+
+###
+EXEC="/usr/bin/gunicorn"
+
+WORKER="eventlet"
+# Increased timeout to facilitate large image uploads
+TIMEOUT="200"
+
+# Calculate the number of workers based on the worker count provided by
+# Platform Eng, which is retrieved from keystone-extra.conf
+
+if [ "$system_type" == "All-in-one" ]; then
+ TIS_WORKERS_FACTOR=1
+else
+ TIS_WORKERS_FACTOR=1.5
+fi
+TIS_WORKERS=$(echo "${TIS_WORKERS_FACTOR}*${TIS_PUBLIC_WORKERS}"|bc )
+TIS_WORKERS=${TIS_WORKERS%.*}
+
+#--max-requests , --max-requests-jitter Configuration
+#--max-requests = The max number of requests a worker will process before restarting
+#--max-requests-jitter = The maximum jitter to add to the max_requests setting.
+MAX_REQUESTS=100000
+MAX_REQ_JITTER_CAP_FACTOR=0.5
+MAX_REQ_JITTER_PUBLIC=$(echo "${TIS_WORKERS}*${MAX_REQ_JITTER_CAP_FACTOR}+1"|bc)
+MAX_REQ_JITTER_PUBLIC=${MAX_REQ_JITTER_PUBLIC%.*}
+
+
+start()
+{
+ # Got the proper number of workers. Starting gunicorn now.
+ echo -e "Initialising keystone service using gunicorn .. \n"
+
+ if [ -z "$PUBLIC_BIND_ADDR" ]; then
+ echo "Keystone floating IP not found. Cannot start services. Exiting.."
+ exit 1
+ fi
+ BIND_PUBLIC=$PUBLIC_BIND_ADDR:5000
+
+ if [ -e $PIDFILE_PUBLIC ]; then
+ PIDDIR=/proc/$(cat $PIDFILE_PUBLIC)
+ if [ -d ${PIDDIR} ]; then
+ echo "$DESC_PUBLIC already running."
+ exit 1
+ else
+ echo "Removing stale PID file $PIDFILE_PUBLIC"
+ rm -f $PIDFILE_PUBLIC
+ fi
+ fi
+
+ echo -e "Starting $DESC_PUBLIC...\n";
+ echo -e "Worker is ${WORKER} --workers ${TIS_WORKERS} --timeout ${TIMEOUT} --max_requests ${MAX_REQUESTS} --max_request_jitter public ${MAX_REQ_JITTER_PUBLIC}\n" ;
+
+ echo -e "Starting keystone process at port 5000 \n" ;
+
+ start-stop-daemon --start --quiet --background --pidfile ${PIDFILE_PUBLIC} \
+ --make-pidfile --exec ${PYTHON} -- ${EXEC} --bind ${BIND_PUBLIC} \
+ --worker-class ${WORKER} --workers ${TIS_WORKERS} --timeout ${TIMEOUT} \
+ --max-requests ${MAX_REQUESTS} --max-requests-jitter ${MAX_REQ_JITTER_PUBLIC} \
+ --log-syslog \
+ --pythonpath '/usr/share/keystone' public:application --name keystone-public
+
+ RETVAL=$?
+ if [ $RETVAL -eq 0 ]; then
+ echo -e "Keystone started at port 5000... \n"
+ else
+ echo -e "Failed to start Keystone .. \n"
+ fi
+}
+
+stop()
+{
+ if [ -e $PIDFILE_PUBLIC ]; then
+ start-stop-daemon --stop --quiet --pidfile $PIDFILE_PUBLIC
+ RETVAL_PUBLIC=$?
+ if [ $RETVAL_PUBLIC -eq 0 ]; then
+ echo "Stopped $DESC_PUBLIC."
+ else
+ echo "Stopping failed - $PIDFILE_PUBLIC"
+ fi
+ rm -f $PIDFILE_PUBLIC
+ else
+ echo "Already stopped - $PIDFILE_PUBLIC"
+ fi
+}
+
+status()
+{
+ pid_public=`cat $PIDFILE_PUBLIC 2>/dev/null`
+
+ if [ -n "$pid_public" ]; then
+ echo -e "\033[32m $DESC_PUBLIC is running..\033[0m"
+ else
+ echo -e "\033[31m $DESC_PUBLIC is not running..\033[0m"
+ fi
+}
+
+
+
+case "$1" in
+ start)
+ start
+ ;;
+ stop)
+ stop
+ ;;
+ restart|force-reload|reload)
+ stop
+ start
+ ;;
+ status)
+ status
+ ;;
+ *)
+ #echo "Usage: $0 {start|stop|force-reload|restart|reload|status} OR {/usr/bin/keystone-all start --public-bind-addr xxx.xxx.xxx}"
+ start
+ #RETVAL=1
+ ;;
+esac
+
+exit $RETVAL
diff --git a/debian/stx/keystone-fernet-keys-rotate-active b/debian/stx/keystone-fernet-keys-rotate-active
new file mode 100644
index 000000000..e2124eee3
--- /dev/null
+++ b/debian/stx/keystone-fernet-keys-rotate-active
@@ -0,0 +1,64 @@
+#!/bin/bash
+
+#
+# Wrapper script to rotate keystone fernet keys on active controller only
+#
+KEYSTONE_KEYS_ROTATE_INFO="/var/run/keystone-keys-rotate.info"
+KEYSTONE_KEYS_ROTATE_CMD="/usr/bin/nice -n 2 /usr/bin/keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone"
+
+function is_active_pgserver()
+{
+ # Determine whether we're running on the same controller as the service.
+ local service=postgres
+ local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
+ if [ "x$enabledactive" == "x" ]
+ then
+ # enabled-active not found for that service on this controller
+ return 1
+ else
+ # enabled-active found for that resource
+ return 0
+ fi
+}
+
+if is_active_pgserver
+then
+ if [ ! -f ${KEYSTONE_KEYS_ROTATE_INFO} ]
+ then
+ echo delay_count=0 > ${KEYSTONE_KEYS_ROTATE_INFO}
+ fi
+
+ source ${KEYSTONE_KEYS_ROTATE_INFO}
+ sudo -u postgres psql -d fm -c "SELECT alarm_id, entity_instance_id from alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
+ if [ $? -eq 0 ]
+ then
+ source /etc/platform/platform.conf
+ if [ "${system_type}" = "All-in-one" ]
+ then
+ source /etc/init.d/task_affinity_functions.sh
+ idle_core=$(get_most_idle_core)
+ if [ "$idle_core" -ne "0" ]
+ then
+ sh -c "exec taskset -c $idle_core ${KEYSTONE_KEYS_ROTATE_CMD}"
+ sed -i "/delay_count/s/=.*/=0/" ${KEYSTONE_KEYS_ROTATE_INFO}
+ exit 0
+ fi
+ fi
+
+ if [ "$delay_count" -lt "3" ]
+ then
+ newval=$(($delay_count+1))
+ sed -i "/delay_count/s/=.*/=$newval/" ${KEYSTONE_KEYS_ROTATE_INFO}
+ (sleep 3600; /usr/bin/keystone-fernet-keys-rotate-active) &
+ exit 0
+ fi
+
+ fi
+
+ eval ${KEYSTONE_KEYS_ROTATE_CMD}
+ sed -i "/delay_count/s/=.*/=0/" ${KEYSTONE_KEYS_ROTATE_INFO}
+
+fi
+
+exit 0
+
diff --git a/debian/stx/keystone.service b/debian/stx/keystone.service
new file mode 100644
index 000000000..a72aa84be
--- /dev/null
+++ b/debian/stx/keystone.service
@@ -0,0 +1,14 @@
+[Unit]
+Description=OpenStack Identity Service (code-named Keystone)
+After=syslog.target network.target
+
+[Service]
+Type=forking
+# RemainAfterExit is set to yes as we have 2 PIDs to monitor
+RemainAfterExit=yes
+ExecStart=/usr/bin/keystone-all start
+ExecStop=/usr/bin/keystone-all stop
+ExecReload=/usr/bin/keystone-all reload
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/stx/password-rules.conf b/debian/stx/password-rules.conf
new file mode 100644
index 000000000..e7ce65602
--- /dev/null
+++ b/debian/stx/password-rules.conf
@@ -0,0 +1,34 @@
+# The password rules capture the [security_compliance]
+# section of the generic Keystone configuration (keystone.conf).
+# This configuration is used to statically define the password
+# rules for password validation in pre-Keystone environments.
+#
+# N.B: Only set non-default keys here (default commented configuration
+# items not needed)
+
+[security_compliance]
+
+#
+# From keystone
+#
+
+# This controls the number of previous user password iterations to keep in
+# history, in order to enforce that newly created passwords are unique. Setting
+# the value to one (the default) disables this feature. Thus, to enable this
+# feature, values must be greater than 1. This feature depends on the `sql`
+# backend for the `[identity] driver`. (integer value)
+# Minimum value: 1
+unique_last_password_count = 3
+
+# The regular expression used to validate password strength requirements. By
+# default, the regular expression will match any password. The following is an
+# example of a pattern which requires at least 1 letter, 1 digit, and have a
+# minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature
+# depends on the `sql` backend for the `[identity] driver`. (string value)
+password_regex = ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()<>{}+=_\\\[\]\-?|~`,.;:]).{7,}$
+
+# Describe your password regular expression here in language for humans. If a
+# password fails to match the regular expression, the contents of this
+# configuration variable will be returned to users to explain why their
+# requested password was insufficient. (string value)
+password_regex_description = Password must have a minimum length of 7 characters, and must contain at least 1 upper case, 1 lower case, 1 digit, and 1 special character
diff --git a/debian/stx/public.py b/debian/stx/public.py
new file mode 100644
index 000000000..d3a29f3b3
--- /dev/null
+++ b/debian/stx/public.py
@@ -0,0 +1,21 @@
+# Copyright (c) 2013-2017 Wind River Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+from keystone.server import wsgi as wsgi_server
+
+import sys
+sys.argv = sys.argv[:1]
+
+application = wsgi_server.initialize_public_application()
--
2.34.1
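The worker sizing done with bc in debian/stx/keystone-all above can be reproduced with plain shell integer arithmetic. This is a minimal sketch: TIS_PUBLIC_WORKERS normally comes from /etc/keystone/keystone-extra.conf, so the value below is illustrative.

```shell
#!/bin/sh
# Sketch of the gunicorn worker sizing in debian/stx/keystone-all, using
# integer shell arithmetic in place of bc. The input value is illustrative.
system_type="Standard"      # "All-in-one" uses factor 1, everything else 1.5
TIS_PUBLIC_WORKERS=4        # normally sourced from keystone-extra.conf

if [ "$system_type" = "All-in-one" ]; then
    workers=$TIS_PUBLIC_WORKERS               # factor 1
else
    workers=$(( TIS_PUBLIC_WORKERS * 3 / 2 )) # factor 1.5, fraction truncated
fi

# --max-requests-jitter is capped at half the worker count, plus one.
jitter=$(( workers / 2 + 1 ))

echo "workers=$workers jitter=$jitter"   # workers=6 jitter=4
```

The truncation mirrors the `${TIS_WORKERS%.*}` stripping applied to the bc output in the script itself.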

View File

@ -0,0 +1,44 @@
From 8cf5b37f70ade287cb5eaea7dd48d1eeb1ae737d Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Mon, 14 Mar 2022 10:35:39 -0400
Subject: [PATCH] Add login fail lockout security compliance options
Added two login fail lockout security compliance options:
lockout_duration
lockout_failure_attempts
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
debian/stx/password-rules.conf | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/debian/stx/password-rules.conf b/debian/stx/password-rules.conf
index e7ce656..ac18ef9 100644
--- a/debian/stx/password-rules.conf
+++ b/debian/stx/password-rules.conf
@@ -32,3 +32,22 @@ password_regex = ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()<>{}+=_\\\[\]\-?
# configuration variable will be returned to users to explain why their
# requested password was insufficient. (string value)
password_regex_description = Password must have a minimum length of 7 characters, and must contain at least 1 upper case, 1 lower case, 1 digit, and 1 special character
+
+# The number of seconds a user account will be locked when the maximum number
+# of failed authentication attempts (as specified by `[security_compliance]
+# lockout_failure_attempts`) is exceeded. Setting this option will have no
+# effect unless you also set `[security_compliance] lockout_failure_attempts`
+# to a non-zero value. This feature depends on the `sql` backend for the
+# `[identity] driver`. (integer value)
+# Minimum value: 1
+lockout_duration=1800
+
+# The maximum number of times that a user can fail to authenticate before the
+# user account is locked for the number of seconds specified by
+# `[security_compliance] lockout_duration`. This feature is disabled by
+# default. If this feature is enabled and `[security_compliance]
+# lockout_duration` is not set, then users may be locked out indefinitely
+# until the user is explicitly enabled via the API. This feature depends on
+# the `sql` backend for the `[identity] driver`. (integer value)
+# Minimum value: 1
+lockout_failure_attempts=5
--
2.25.1
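The password_regex carried in password-rules.conf can be exercised directly from the shell. This sketch assumes GNU grep with PCRE support (`-P`) for the lookaheads; Keystone itself evaluates the regex in Python.

```shell
#!/bin/sh
# Quick check of the password_regex from password-rules.conf.
# Assumes GNU grep with -P (PCRE); Keystone evaluates this regex in Python.
regex='^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()<>{}+=_\\\[\]\-?|~`,.;:]).{7,}$'

check() { printf '%s' "$1" | grep -Pq "$regex"; }

check 'Str0ng!pw' && echo "accepted"   # upper, lower, digit, special, 7+ chars
check 'weakpass'  || echo "rejected"   # no digit, no upper case, no special
```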

View File

@ -0,0 +1,2 @@
0001-Add-stx-support.patch
0002-Add-login-fail-lockout-security-compliance-options.patch

View File

@ -0,0 +1,13 @@
---
debname: keystone
debver: 2:18.0.0-3
dl_path:
name: keystone-debian-18.0.0-3.tar.gz
url: https://salsa.debian.org/openstack-team/services/keystone/-/archive/debian/18.0.0-3/keystone-debian-18.0.0-3.tar.gz
md5sum: fba7c47672b976cdcab5c33f49a5d2fd
revision:
dist: $STX_DIST
PKG_GITREVCOUNT: true
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/keystone

View File

@ -0,0 +1,151 @@
From 45b5c5b71b4ad70c5694f06126adfc60a31c51fc Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Tue, 5 Apr 2022 10:39:32 -0400
Subject: [PATCH] Support storing users in keyring
This patch adds support for storing Keystone users in Keyring under the
"CGCS" service.
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
keystone/exception.py | 6 +++++
keystone/identity/core.py | 54 +++++++++++++++++++++++++++++++++++++++
requirements.txt | 1 +
3 files changed, 61 insertions(+)
diff --git a/keystone/exception.py b/keystone/exception.py
index c62338b..3cbddfb 100644
--- a/keystone/exception.py
+++ b/keystone/exception.py
@@ -227,6 +227,12 @@ class CredentialLimitExceeded(ForbiddenNotSecurity):
"of %(limit)d already exceeded for user.")
+class WRSForbiddenAction(Error):
+ message_format = _("That action is not permitted")
+ code = 403
+ title = 'Forbidden'
+
+
class SecurityError(Error):
"""Security error exception.
diff --git a/keystone/identity/core.py b/keystone/identity/core.py
index 38ebe2f..31d6cd6 100644
--- a/keystone/identity/core.py
+++ b/keystone/identity/core.py
@@ -17,6 +17,7 @@
import copy
import functools
import itertools
+import keyring
import operator
import os
import threading
@@ -54,6 +55,7 @@ MEMOIZE_ID_MAPPING = cache.get_memoization_decorator(group='identity',
DOMAIN_CONF_FHEAD = 'keystone.'
DOMAIN_CONF_FTAIL = '.conf'
+KEYRING_CGCS_SERVICE = "CGCS"
# The number of times we will attempt to register a domain to use the SQL
# driver, if we find that another process is in the middle of registering or
@@ -1125,6 +1127,26 @@ class Manager(manager.Manager):
if new_ref['domain_id'] != orig_ref['domain_id']:
raise exception.ValidationError(_('Cannot change Domain ID'))
+ def _update_keyring_password(self, user, new_password):
+ """Update user password in Keyring backend.
+ This method looks up user entries in the Keyring backend
+ and updates the corresponding user password accordingly.
+ :param user : keyring user struct
+ :param new_password : new password to set
+ """
+ if (new_password is not None) and ('name' in user):
+ try:
+ # only update if an entry exists
+ if (keyring.get_password(KEYRING_CGCS_SERVICE, user['name'])):
+ keyring.set_password(KEYRING_CGCS_SERVICE,
+ user['name'], new_password)
+ except (keyring.errors.PasswordSetError, RuntimeError):
+ msg = ('Failed to Update Keyring Password for the user %s')
+ LOG.warning(msg, user['name'])
+ # only raise an exception if this is the admin user
+ if (user['name'] == 'admin'):
+ raise exception.WRSForbiddenAction(msg % user['name'])
+
def _update_user_with_federated_objects(self, user, driver, entity_id):
# If the user did not pass a federated object along inside the user
# object then we simply update the user as normal and add the
@@ -1181,6 +1203,17 @@ class Manager(manager.Manager):
ref = self._update_user_with_federated_objects(user, driver, entity_id)
+ # Certain local Keystone users, such as the admin user, are stored in
+ # Keyring as opposed to the default SQL Identity backend.
+ # When such a user's password is updated, we need to update Keyring as
+ # well, since certain services retrieve this user's context from Keyring
+ # and would otherwise get auth failures.
+ # The password must be updated before the notification is sent out;
+ # otherwise, any process monitoring the notification will still get the
+ # old password from Keyring.
+ if ('password' in user) and ('name' in ref):
+ self._update_keyring_password(ref, user['password'])
+
notifications.Audit.updated(self._USER, user_id, initiator)
enabled_change = ((user.get('enabled') is False) and
@@ -1210,6 +1243,7 @@ class Manager(manager.Manager):
hints.add_filter('user_id', user_id)
fed_users = PROVIDERS.shadow_users_api.list_federated_users_info(hints)
+ username = user_old.get('name', "")
driver.delete_user(entity_id)
PROVIDERS.assignment_api.delete_user_assignments(user_id)
self.get_user.invalidate(self, user_id)
@@ -1223,6 +1257,18 @@ class Manager(manager.Manager):
PROVIDERS.credential_api.delete_credentials_for_user(user_id)
PROVIDERS.id_mapping_api.delete_id_mapping(user_id)
+
+ # Delete the keyring entry associated with this user (if present)
+ try:
+ keyring.delete_password(KEYRING_CGCS_SERVICE, username)
+ except keyring.errors.PasswordDeleteError:
+ LOG.warning(('delete_user: PasswordDeleteError for %s'),
+ username)
+ pass
+ except exception.UserNotFound:
+ LOG.warning(('delete_user: UserNotFound for %s'),
+ username)
+ pass
notifications.Audit.deleted(self._USER, user_id, initiator)
# Invalidate user role assignments cache region, as it may be caching
@@ -1475,6 +1521,14 @@ class Manager(manager.Manager):
notifications.Audit.updated(self._USER, user_id, initiator)
self._persist_revocation_event_for_user(user_id)
+ user = self.get_user(user_id)
+ # Update Keyring password for the 'user' if it
+ # has an entry in Keyring
+ if (original_password) and ('name' in user):
+ # Change the 'user' password in keyring, provided the user
+ # has an entry in Keyring backend
+ self._update_keyring_password(user, new_password)
+
@MEMOIZE
def _shadow_nonlocal_user(self, user):
try:
diff --git a/requirements.txt b/requirements.txt
index 33a2c42..1119c52 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -36,3 +36,4 @@ pycadf!=2.0.0,>=1.1.0 # Apache-2.0
msgpack>=0.5.0 # Apache-2.0
osprofiler>=1.4.0 # Apache-2.0
pytz>=2013.6 # MIT
+keyring>=5.3
--
2.25.1
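The update-only-if-present rule in `_update_keyring_password` above can be modelled with a trivial file-per-user store. The directory here merely stands in for the Keyring "CGCS" service; the function name and file layout are invented for illustration.

```shell
#!/bin/sh
# Toy model of _update_keyring_password: overwrite a credential only when an
# entry already exists. A directory stands in for the "CGCS" keyring service.
set -e
store=$(mktemp -d)
printf 'oldpass' > "$store/admin"      # admin already has a keyring entry

update_keyring_password() {            # $1 = user name, $2 = new password
    if [ -f "$store/$1" ]; then        # only update if an entry exists
        printf '%s' "$2" > "$store/$1"
    fi
}

update_keyring_password admin  newpass # updated: entry exists
update_keyring_password nobody newpass # ignored: no entry for this user

cat "$store/admin"                     # -> newpass
```

As in the real patch, a user without a pre-existing entry is silently skipped rather than created.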

View File

@ -0,0 +1 @@
0001-Support-storing-users-in-keyring.patch

View File

@ -0,0 +1 @@
This repo is for the stx-aodh image, built on top of https://opendev.org/openstack/aodh/

View File

@ -0,0 +1,14 @@
BUILDER=loci
LABEL=stx-aodh
PROJECT=aodh
PROJECT_REPO=https://opendev.org/openstack/aodh.git
PROJECT_REF=4366d6eae1aad4e15aeca4bc7e8b5e757c7601e8
PROJECT_UID=42425
PROJECT_GID=42425
PIP_PACKAGES="pylint SQLAlchemy gnocchiclient aodhclient"
DIST_REPOS="OS"
PROFILES="apache"
CUSTOMIZATION="\
ln -s /etc/apache2/mods-available/wsgi.load /etc/apache2/mods-enabled/wsgi.load && \
ln -s /etc/apache2/mods-available/wsgi.conf /etc/apache2/mods-enabled/wsgi.conf
"

View File

@ -0,0 +1 @@
This repo is for the stx-ironic image, built on top of https://opendev.org/openstack/ironic/

View File

@ -0,0 +1,16 @@
BUILDER=loci
LABEL=stx-ironic
PROJECT=ironic
PROJECT_REPO=https://opendev.org/openstack/ironic.git
PROJECT_REF=859e51c8b4b8344827b5bba1f9a0b737ffbc1ebc
PROJECT_UID=42425
PROJECT_GID=42425
PIP_PACKAGES="pylint alembic pysnmp"
DIST_REPOS="OS"
DIST_PACKAGES="ipxe tftpd-hpa openipmi ipmitool iproute2 qemu-utils syslinux-common open-iscsi"
PROFILES="ironic apache"
CUSTOMIZATION="\
ln -s /etc/apache2/mods-available/wsgi.load /etc/apache2/mods-enabled/wsgi.load && \
ln -s /etc/apache2/mods-available/wsgi.conf /etc/apache2/mods-enabled/wsgi.conf
"
UPDATE_SYSTEM_ACCOUNT="yes"

View File

@ -0,0 +1,8 @@
This repo is for https://salsa.debian.org/openstack-team/debian/openstack-pkg-tools
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

View File

@ -0,0 +1,24 @@
From b894128b1014390591a7646c7af34be9fd32a22a Mon Sep 17 00:00:00 2001
Author: João Pedro Alexandroni <JoaoPedroAlexandroni.CordovadeSouza@windriver.com>
Date: Tue, 12 Apr 2022 11:41:11 -0300
Subject: [PATCH] Description: Add ipv6 support for keystone
---
init-template/init-script-template | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/init-template/init-script-template b/init-template/init-script-template
index c0df791..2cd88a7 100644
--- a/init-template/init-script-template
+++ b/init-template/init-script-template
@@ -57,7 +57,7 @@ if [ -n "${UWSGI_PORT}" ] && [ -n "${UWSGI_INI_PATH}" ] && [ -n "${UWSGI_INI_APP
fi
fi
else
- UWSGI_BIND_IP=""
+ UWSGI_BIND_IP="[::]"
fi
if [ -n "${KEY_FILE}" ] && [ -n "${CERT_FILE}" ] ; then
--
2.17.1

View File

@ -0,0 +1,2 @@
stx-add-wheel-support.patch
add-ipv6-support.patch

View File

@ -0,0 +1,53 @@
Description: Add support for building python3 wheels in Debian OpenStack
Author: Chuck Short <charles.short@windriver.com>
diff -Nru openstack-pkg-tools-117/build-tools/pkgos-dh_auto_install openstack-pkg-tools-117+nmu1/build-tools/pkgos-dh_auto_install
--- openstack-pkg-tools-117/build-tools/pkgos-dh_auto_install 2020-11-29 19:50:57.000000000 +0000
+++ openstack-pkg-tools-117+nmu1/build-tools/pkgos-dh_auto_install 2021-10-03 15:10:16.000000000 +0000
@@ -20,6 +20,10 @@
PKGOS_IN_TMP=yes
shift
;;
+ "--wheel")
+ PKGOS_USE_WHEEL=yes
+ shift
+ ;;
*)
;;
esac
@@ -50,6 +54,11 @@
for pyvers in ${PYTHON3S}; do
python${pyvers} setup.py install --install-layout=deb --root $(pwd)/debian/${TARGET_DIR}
done
+ if [ "${PKGOS_USE_WHEEL}" = "yes" ]; then
+ for pyvers in ${PYTHON3S}; do
+ python${pyvers} setup.py bdist_wheel --universal -d $(pwd)/debian/python3-${PY_MODULE_NAME}-wheel/usr/share/python-wheel
+ done
+ fi
fi
rm -rf $(pwd)/debian/python*/usr/lib/python*/dist-packages/*.pth
rm -rf $(pwd)/debian/tmp/usr/lib/python*/dist-packages/*.pth
diff -Nru openstack-pkg-tools-117/debian/changelog openstack-pkg-tools-117+nmu1/debian/changelog
--- openstack-pkg-tools-117/debian/changelog 2020-11-29 19:50:57.000000000 +0000
+++ openstack-pkg-tools-117+nmu1/debian/changelog 2021-10-03 15:10:16.000000000 +0000
@@ -1,3 +1,10 @@
+openstack-pkg-tools (117+nmu1) unstable; urgency=medium
+
+ * Non-maintainer upload.
+ * build-tools/pkgos-dh_auto_install: Add wheel support.
+
+ -- Chuck Short <zulcss@ubuntu.com> Sun, 03 Oct 2021 15:10:16 +0000
+
openstack-pkg-tools (117) unstable; urgency=medium
* Using override_installsystemd instead of override_dh_systemd_enable
diff -Nru openstack-pkg-tools-117/debian/control openstack-pkg-tools-117+nmu1/debian/control
--- openstack-pkg-tools-117/debian/control 2020-11-29 19:50:57.000000000 +0000
+++ openstack-pkg-tools-117+nmu1/debian/control 2021-10-03 15:10:16.000000000 +0000
@@ -16,6 +16,7 @@
Multi-Arch: foreign
Depends:
python3-pip,
+ python3-wheel,
gettext,
jq,
po-debconf,

View File

@ -0,0 +1,12 @@
---
debname: openstack-pkg-tools
debver: 117
dl_path:
name: openstack-pkg-tools-debian-117.tar.gz
url: https://salsa.debian.org/openstack-team/debian/openstack-pkg-tools/-/archive/debian/117/openstack-pkg-tools-debian-117.tar.gz
md5sum: 6c26ff316b869ca12e09e7f3e77c150e
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/openstack-pkg-tools

View File

@ -0,0 +1,8 @@
This repo is for https://github.com/starlingx-staging/stx-openstack-ras
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

View File

@ -0,0 +1,26 @@
From 254b2348d105c86438bf4057a4d428c67d51ed37 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Fri, 5 Nov 2021 11:45:54 -0300
Subject: [PATCH] update package dependencies
Signed-off-by: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
---
debian/control | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index 1e4f8c5..ffeb41e 100644
--- a/debian/control
+++ b/debian/control
@@ -9,7 +9,7 @@ Homepage: http://github.com/madkiss/openstack-resource-agents
Package: openstack-resource-agents
Architecture: all
-Depends: ${misc:Depends}, netstat, python-keystoneclient, python-glanceclient, python-novaclient, curl
+Depends: ${misc:Depends}, net-tools, python3-keystoneclient, python3-glanceclient, python3-novaclient, curl
Description: pacemaker resource agents for OpenStack
This package contains resource agents to run most of the OpenStack
components inside a pacemaker-controlled high availability cluster.
--
2.17.1

View File

@ -0,0 +1 @@
0001-update-package-dependencies.patch

View File

@ -0,0 +1,11 @@
debver: 2012.2~f3-1
debname: openstack-resource-agents
dl_path:
name: openstack-resource-agents-2012.2~f3-1.tar.gz
url: https://github.com/starlingx-staging/stx-openstack-ras/tarball/4ba6047db1b70ee2bb3dd43739de7d2fb4e85ebd
md5sum: 58b82fa1d64ea59bad345d01bafb71be
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/openstack-ras

@@ -0,0 +1,24 @@
From c63d0c06606969ddfb85538706a1665122e69c44 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Wed, 3 Nov 2021 12:10:34 -0300
Subject: [PATCH] remove unwanted files
Signed-off-by: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
---
Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/Makefile b/Makefile
index c95c187..08c9fa6 100644
--- a/Makefile
+++ b/Makefile
@@ -26,3 +26,6 @@ install:
for file in ocf/*; do \
$(INSTALL) -t $(DESTDIR)/usr/lib/ocf/resource.d/openstack -m 0755 $${file} ; \
done
+ rm -rf $(DESTDIR)/usr/lib/ocf/resource.d/openstack/ceilometer-agent-central
+ rm -rf $(DESTDIR)/usr/lib/ocf/resource.d/openstack/ceilometer-alarm-evaluator
+ rm -rf $(DESTDIR)/usr/lib/ocf/resource.d/openstack/ceilometer-alarm-notifier
--
2.17.1

@@ -0,0 +1 @@
0001-remove-unwanted-files.patch

@@ -0,0 +1,221 @@
Index: git/ocf/cinder-api
===================================================================
--- git.orig/ocf/cinder-api 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/cinder-api 2014-09-23 10:22:33.294302829 -0400
@@ -244,18 +244,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username"] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Cinder API (cinder-api) monitor succeeded"
Index: git/ocf/glance-api
===================================================================
--- git.orig/ocf/glance-api 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/glance-api 2014-09-23 10:16:35.903826295 -0400
@@ -236,11 +236,9 @@
fi
# Monitor the RA by retrieving the image list
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
--os_username "$OCF_RESKEY_os_username" \
- --os_password "$OCF_RESKEY_os_password" \
--os_tenant_name "$OCF_RESKEY_os_tenant_name" \
--os_auth_url "$OCF_RESKEY_os_auth_url" \
index > /dev/null 2>&1
Index: git/ocf/glance-registry
===================================================================
--- git.orig/ocf/glance-registry 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/glance-registry 2014-09-23 10:22:58.078475044 -0400
@@ -246,18 +246,27 @@
# Check whether we are supposed to monitor by logging into glance-registry
# and do it if that's the case.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username"] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack ImageService (glance-registry) monitor succeeded"
Index: git/ocf/keystone
===================================================================
--- git.orig/ocf/keystone 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/keystone 2014-09-23 10:18:30.736618732 -0400
@@ -237,12 +237,10 @@
# Check whether we are supposed to monitor by logging into Keystone
# and do it if that's the case.
- if [ -n "$OCF_RESKEY_client_binary" ] && [ -n "$OCF_RESKEY_os_username" ] \
- && [ -n "$OCF_RESKEY_os_password" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] \
+ if [ -n "$OCF_RESKEY_client_binary" ] && [ -n "$OCF_RESKEY_os_password" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] \
&& [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
--os-username "$OCF_RESKEY_os_username" \
- --os-password "$OCF_RESKEY_os_password" \
--os-tenant-name "$OCF_RESKEY_os_tenant_name" \
--os-auth-url "$OCF_RESKEY_os_auth_url" \
user-list > /dev/null 2>&1
Index: git/ocf/neutron-server
===================================================================
--- git.orig/ocf/neutron-server 2014-09-17 13:13:13.872502871 -0400
+++ git/ocf/neutron-server 2014-09-23 10:23:39.358761926 -0400
@@ -256,18 +256,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username"] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Neutron Server (neutron-server) monitor succeeded"
Index: git/ocf/nova-api
===================================================================
--- git.orig/ocf/nova-api 2014-09-17 13:13:15.240513478 -0400
+++ git/ocf/nova-api 2014-09-23 10:23:20.454630543 -0400
@@ -244,18 +244,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username"] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Nova API (nova-api) monitor succeeded"
Index: git/ocf/validation
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ git/ocf/validation 2014-09-23 10:06:37.011706573 -0400
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+
+from keystoneclient import probe
+
+probe.main()

File diff suppressed because it is too large

@@ -0,0 +1,374 @@
Index: git/ocf/ceilometer-mem-db
===================================================================
--- /dev/null
+++ git/ocf/ceilometer-mem-db
@@ -0,0 +1,369 @@
+#!/bin/sh
+#
+#
+# OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+#
+# Description: Manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
+#
+# Authors: Emilien Macchi
+# Mainly inspired by the Nova Scheduler resource agent written by Sebastien Han
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+# Copyright (c) 2014 Wind River Systems, Inc.
+# SPDX-License-Identifier: Apache-2.0
+#
+#
+#
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_amqp_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="ceilometer-mem-db"
+OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_user_default="root"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_amqp_server_port_default="5672"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
+
+ The 'start' operation starts the scheduler service.
+ The 'stop' operation stops the scheduler service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the scheduler service is running
+ The 'monitor' operation reports whether the scheduler service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="ceilometer-mem-db">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+May manage a ceilometer-mem-db instance or a clone set that
+creates a distributed ceilometer-mem-db cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB (ceilometer-mem-db registry) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="amqp_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">AMQP listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
+</parameter>
+
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">Additional parameters for ceilometer-mem-db</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+ceilometer_mem_db_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+ceilometer_mem_db_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ ceilometer_mem_db_check_port $OCF_RESKEY_amqp_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+ceilometer_mem_db_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
+ rm -f $OCF_RESKEY_pid
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+ceilometer_mem_db_monitor() {
+ local rc
+ local pid
+ local scheduler_amqp_check
+
+ ceilometer_mem_db_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the connections according to the PID.
+ # We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
+ pid=`cat $OCF_RESKEY_pid`
+ scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Mem DB is not connected to the AMQP server : $rc"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+ceilometer_mem_db_start() {
+ local rc
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual ceilometer-mem-db daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ ceilometer_mem_db_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) started"
+ return $OCF_SUCCESS
+}
+
+ceilometer_mem_db_confirm_stop() {
+ local my_bin
+ local my_processes
+
+ my_binary=`which ${OCF_RESKEY_binary}`
+ my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
+
+ if [ -n "${my_processes}" ]
+ then
+ ocf_log info "About to SIGKILL the following: ${my_processes}"
+ pkill -KILL -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"
+ fi
+}
+
+ceilometer_mem_db_stop() {
+ local rc
+ local pid
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already stopped"
+ ceilometer_mem_db_confirm_stop
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) couldn't be stopped"
+ ceilometer_mem_db_confirm_stop
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) still hasn't stopped yet. Waiting ..."
+ done
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+ ceilometer_mem_db_confirm_stop
+
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+ceilometer_mem_db_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) ceilometer_mem_db_start;;
+ stop) ceilometer_mem_db_stop;;
+ status) ceilometer_mem_db_status;;
+ monitor) ceilometer_mem_db_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
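
The `ceilometer_mem_db_check_port` helper in the agent above accepts only four-character numeric port strings ("1080" and "0080" pass, "80" and "1080bad" do not). A standalone restatement of that check, using `grep -E` in place of the deprecated `egrep`:

```shell
# Standalone version of the RA's port check: the string must match the
# numeric port/range pattern AND be exactly four characters long.
check_port() {
    local int="$1"
    local cnt=${#int}
    if echo "$int" | grep -Eqx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*' \
            && [ "$cnt" -eq 4 ]; then
        echo "valid"
    else
        echo "invalid"
    fi
}
check_port 5672   # valid
check_port 80     # invalid: only two characters
```

Note that the length requirement rejects perfectly usable ports such as 80 or 443 unless they are zero-padded, which is why the example values in the RA's comment are written as "0080".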

@@ -0,0 +1,28 @@
Index: git/ocf/ceilometer-collector
===================================================================
--- git.orig/ocf/ceilometer-collector 2014-08-07 21:08:46.637211162 -0400
+++ git/ocf/ceilometer-collector 2014-08-07 21:09:24.893475317 -0400
@@ -223,15 +223,16 @@
return $rc
fi
- # Check the connections according to the PID.
- # We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
- pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc=$?
- if [ $rc -ne 0 ]; then
+ # Check the connections according to the PID of the child process since
+ # the parent is not the one with the AMQP connection
+ ppid=`cat $OCF_RESKEY_pid`
+ pid=`pgrep -P $ppid`
+ scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc=$?
+ if [ $rc -ne 0 ]; then
ocf_log err "Collector is not connected to the AMQP server : $rc"
return $OCF_NOT_RUNNING
- fi
+ fi
ocf_log debug "OpenStack Ceilometer Collector (ceilometer-collector) monitor succeeded"
return $OCF_SUCCESS
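
The collector patch above reads the parent PID from the pid file and resolves the forked child with `pgrep -P`, because the AMQP connection lives in the child, not the parent. A minimal sketch of that lookup; the backgrounded shell below is a stand-in for the collector daemon:

```shell
# A shell that forks one child stands in for the collector's
# parent/child process pair.
sh -c 'sleep 2 & wait' &
ppid=$!
sleep 0.3                  # give the stand-in parent time to fork
pid=$(pgrep -P "$ppid")    # same child lookup the patched RA performs
echo "parent=$ppid child=$pid"
```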

@@ -0,0 +1,22 @@
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -183,7 +183,7 @@ ceilometer_api_validate() {
local rc
check_binary $OCF_RESKEY_binary
- check_binary netstat
+ check_binary lsof
ceilometer_api_check_port $OCF_RESKEY_api_listen_port
# A config file on shared storage that is not available
@@ -244,7 +244,7 @@ ceilometer_api_monitor() {
# Check the connections according to the PID.
# We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -apunt | grep -s "$OCF_RESKEY_api_listen_port" | grep -s "$pid" | grep -qs "LISTEN"`
+ scheduler_amqp_check=`lsof -nPp ${pid} | grep -s ":${OCF_RESKEY_api_listen_port}\s\+(LISTEN)"`
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "API is not listening for connections: $rc"

@@ -0,0 +1,63 @@
Index: git/ocf/ceilometer-agent-central
===================================================================
--- git.orig/ocf/ceilometer-agent-central
+++ git/ocf/ceilometer-agent-central
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-agent-central"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
Index: git/ocf/ceilometer-agent-notification
===================================================================
--- git.orig/ocf/ceilometer-agent-notification
+++ git/ocf/ceilometer-agent-notification
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-agent-notification"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-api"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_api_listen_port_default="8777"
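
The pipeline-path hunks above swap a fixed location for one keyed on the platform's `SW_VERSION`, which the agents obtain by sourcing `/usr/bin/tsconfig`. A sketch of the resulting expansion, with an illustrative version value rather than a real tsconfig:

```shell
# SW_VERSION normally comes from sourcing /usr/bin/tsconfig on the
# platform; "23.09" below is an illustrative stand-in.
SW_VERSION="23.09"
OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
echo "$OCF_RESKEY_pipeline_default"
```

Keying the default on the release directory lets pipeline.yaml survive platform upgrades, since each software version keeps its own copy of the file.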

File diff suppressed because it is too large

@@ -0,0 +1,150 @@
Index: git/ocf/ceilometer-agent-central
===================================================================
--- git.orig/ocf/ceilometer-agent-central
+++ git/ocf/ceilometer-agent-central
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-agent-central"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer Cen
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Central Agent Service (ceilometer-agent-central) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Central Agent (ceilometer-agent-central registry) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer Central Agent Service (ceilometer-agent-central)
@@ -247,6 +258,7 @@ ceilometer_agent_central_start() {
# run the actual ceilometer-agent-central daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
Index: git/ocf/ceilometer-agent-notification
===================================================================
--- git.orig/ocf/ceilometer-agent-notification
+++ git/ocf/ceilometer-agent-notification
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-agent-notification"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer Cen
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Notification Agent Service (ceilometer-agent-notification) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Notification Agent (ceilometer-agent-notification) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer Central Agent Service (ceilometer-agent-notification)
@@ -247,6 +258,7 @@ ceilometer_agent_notification_start() {
# run the actual ceilometer-agent-notification daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-api"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_api_listen_port_default="8777"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_api_listen_port=${OCF_RESKEY_api_listen_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer API
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer API Service (ceilometer-api) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer API (ceilometer-api) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer API Service (ceilometer-api)
@@ -257,6 +268,7 @@ ceilometer_api_start() {
# run the actual ceilometer-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
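The start line above relies on a deliberate quoting split: the double-quoted half is expanded by the agent's own shell (so the OCF variables resolve there), while the single-quoted tail is executed by the child shell, so `$!` is the backgrounded daemon's PID and lands in the pid file. A minimal sketch of the idiom, with `sh -c` standing in for `su` and a `sleep` standing in for the daemon (all names here are hypothetical):

```shell
# The caller expands its variables in the double-quoted part; the
# single-quoted tail runs in the child shell, so $! there is the PID of
# the backgrounded "daemon", written to the pid file.
pidfile="${TMPDIR:-/tmp}/ra-demo.$$.pid"
sh -c 'sleep 30 >> /dev/null 2>&1 & echo $!' > "$pidfile"
daemon_pid=$(cat "$pidfile")
kill "$daemon_pid" 2>/dev/null   # clean up the toy daemon
rm -f "$pidfile"
echo "recorded pid $daemon_pid"
```

If the tail were double-quoted as well, `$!` would be expanded by the caller (where no job has been started) and the pid file would be empty.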

--- a/ocf/cinder-volume
+++ b/ocf/cinder-volume
@@ -221,10 +221,73 @@ cinder_volume_status() {
fi
}
+cinder_volume_get_service_status() {
+ source /etc/nova/openrc
+ python - <<'EOF'
+from __future__ import print_function
+
+from cinderclient import client as cinder_client
+import keyring
+from keystoneclient import session as keystone_session
+from keystoneclient.auth.identity import v3
+import os
+import sys
+
+DEFAULT_OS_VOLUME_API_VERSION = 2
+CINDER_CLIENT_TIMEOUT_SEC = 3
+
+def create_cinder_client():
+ password = keyring.get_password('CGCS', os.environ['OS_USERNAME'])
+ auth = v3.Password(
+ user_domain_name=os.environ['OS_USER_DOMAIN_NAME'],
+ username = os.environ['OS_USERNAME'],
+ password = password,
+ project_domain_name = os.environ['OS_PROJECT_DOMAIN_NAME'],
+ project_name = os.environ['OS_PROJECT_NAME'],
+ auth_url = os.environ['OS_AUTH_URL'])
+ session = keystone_session.Session(auth=auth)
+ return cinder_client.Client(
+ DEFAULT_OS_VOLUME_API_VERSION,
+ username = os.environ['OS_USERNAME'],
+ auth_url = os.environ['OS_AUTH_URL'],
+ region_name=os.environ['OS_REGION_NAME'],
+ session = session, timeout = CINDER_CLIENT_TIMEOUT_SEC)
+
+def service_is_up(s):
+ return s.state == 'up'
+
+def cinder_volume_service_status(cc):
+ services = cc.services.list(
+ host='controller',
+ binary='cinder-volume')
+ if not len(services):
+ return (False, False)
+ exists, is_up = (True, service_is_up(services[0]))
+ for s in services[1:]:
+ # attempt to merge statuses
+ if is_up != service_is_up(s):
+ raise Exception(('Found multiple cinder-volume '
+ 'services with different '
+ 'statuses: {}').format(
+ [s.to_dict() for s in services]))
+ return (exists, is_up)
+
+try:
+ status = cinder_volume_service_status(
+ create_cinder_client())
+ print(('exists={0[0]}\n'
+ 'is_up={0[1]}').format(status))
+except Exception as e:
+ print(str(e), file=sys.stderr)
+ sys.exit(1)
+EOF
+}
+
cinder_volume_monitor() {
local rc
local pid
local volume_amqp_check
+ local check_service_status=$1
cinder_volume_status
rc=$?
@@ -279,6 +342,46 @@ cinder_volume_monitor() {
touch $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ if [ "$check_service_status" = "check-service-status" ]; then
+ local retries_left
+ local retry_interval
+
+ retries_left=3
+ retry_interval=3
+ while [ $retries_left -gt 0 ]; do
+ retries_left=`expr $retries_left - 1`
+ status=$(cinder_volume_get_service_status)
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Unable to get Cinder Volume status"
+ if [ $retries_left -gt 0 ]; then
+ sleep $retry_interval
+ continue
+ else
+ return $OCF_ERR_GENERIC
+ fi
+ fi
+
+ local exists
+ local is_up
+ eval $status
+
+ if [ "$exists" = "True" ] && [ "$is_up" = "False" ]; then
+ ocf_log err "Cinder Volume service status is down"
+ if [ $retries_left -gt 0 ]; then
+ sleep $retry_interval
+ continue
+ else
+ ocf_log info "Trigger Cinder Volume guru meditation report"
+ ocf_run kill -s USR2 $pid
+ return $OCF_ERR_GENERIC
+ fi
+ fi
+
+ break
+ done
+ fi
+
ocf_log debug "OpenStack Cinder Volume (cinder-volume) monitor succeeded"
return $OCF_SUCCESS
}
@@ -386,7 +489,7 @@ cinder_volume_stop() {
# SIGTERM didn't help either, try SIGKILL
ocf_log info "OpenStack Cinder Volume (cinder-volume) failed to stop after ${shutdown_timeout}s \
using SIGTERM. Trying SIGKILL ..."
- ocf_run kill -s KILL $pid
+ ocf_run kill -s KILL -$pid
fi
cinder_volume_confirm_stop
@@ -414,7 +517,7 @@ case "$1" in
start) cinder_volume_start;;
stop) cinder_volume_stop;;
status) cinder_volume_status;;
- monitor) cinder_volume_monitor;;
+ monitor) cinder_volume_monitor "check-service-status";;
validate-all) ;;
*) usage
exit $OCF_ERR_UNIMPLEMENTED;;
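The monitor consumes the Python helper's output through `eval`: the helper prints `exists=...` and `is_up=...` as shell assignments on stdout, and the caller evaluates them into local variables. A minimal sketch of that handshake, with the cinderclient query replaced by a hypothetical stub:

```shell
# Hypothetical stand-in for the cinderclient query: the helper prints
# shell assignments on stdout; the caller eval's them into variables.
get_service_status() {
    printf 'exists=%s\nis_up=%s\n' "True" "False"
}

status=$(get_service_status)
eval "$status"

# Same decision the monitor makes: the service is registered but down.
if [ "$exists" = "True" ] && [ "$is_up" = "False" ]; then
    result="service exists but is down"
else
    result="service healthy or absent"
fi
echo "$result"
```

Printing assignments keeps the Python/shell boundary simple, at the cost of trusting the helper's output; the helper exits non-zero on error so `eval` never sees garbage.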

Index: git/ocf/cinder-volume
===================================================================
--- git.orig/ocf/cinder-volume
+++ git/ocf/cinder-volume
@@ -224,6 +224,13 @@ cinder_volume_monitor() {
pid=`cat $OCF_RESKEY_pid`
if ocf_is_true "$OCF_RESKEY_multibackend"; then
+ pids=`ps -o pid --no-headers --ppid $pid`
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "No child processes from Cinder Volume (yet...): $rc"
+ return $OCF_NOT_RUNNING
+ fi
+
# Grab the child's PIDs
for i in `ps -o pid --no-headers --ppid $pid`
do

Index: git/ocf/cinder-volume
===================================================================
--- git.orig/ocf/cinder-volume
+++ git/ocf/cinder-volume
@@ -55,6 +55,20 @@ OCF_RESKEY_multibackend_default="false"
#######################################################################
+#######################################################################
+
+#
+# The following file is used to determine if Cinder-Volume should be
+# failed if the AMQP check does not pass. Cinder-Volume initializes
+# it's backend before connecting to Rabbit. In Ceph configurations,
+# Cinder-Volume will not connect to Rabbit until the storage blades
+# are provisioned (this can take a long time, no need to restart the
+# process over and over again).
+VOLUME_FAIL_ON_AMQP_CHECK_FILE="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.fail_on_amqp_check"
+
+#######################################################################
+
+
usage() {
cat <<UEND
usage: $0 (start|stop|validate-all|meta-data|status|monitor)
@@ -237,8 +251,13 @@ cinder_volume_monitor() {
volume_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$i" | grep -qs "ESTABLISHED"`
rc=$?
if [ $rc -ne 0 ]; then
- ocf_log err "This child process from Cinder Volume is not connected to the AMQP server: $rc"
- return $OCF_NOT_RUNNING
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ]; then
+ ocf_log err "This child process from Cinder Volume is not connected to the AMQP server: $rc"
+ return $OCF_NOT_RUNNING
+ else
+ ocf_log info "Cinder Volume initializing, child process is not connected to the AMQP server: $rc"
+ return $OCF_SUCCESS
+ fi
fi
done
else
@@ -248,11 +267,18 @@ cinder_volume_monitor() {
volume_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
rc=$?
if [ $rc -ne 0 ]; then
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ]; then
ocf_log err "Cinder Volume is not connected to the AMQP server: $rc"
return $OCF_NOT_RUNNING
+ else
+ ocf_log info "Cinder Volume initializing, not connected to the AMQP server: $rc"
+ return $OCF_SUCCESS
+ fi
fi
fi
+ touch $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+
ocf_log debug "OpenStack Cinder Volume (cinder-volume) monitor succeeded"
return $OCF_SUCCESS
}
@@ -260,6 +286,10 @@ cinder_volume_monitor() {
cinder_volume_start() {
local rc
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
cinder_volume_status
rc=$?
if [ $rc -eq $OCF_SUCCESS ]; then
@@ -293,6 +323,10 @@ cinder_volume_confirm_stop() {
local my_bin
local my_processes
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
my_binary=`which ${OCF_RESKEY_binary}`
my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
@@ -307,6 +341,10 @@ cinder_volume_stop() {
local rc
local pid
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
cinder_volume_status
rc=$?
if [ $rc -eq $OCF_NOT_RUNNING ]; then
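The flag file turns the AMQP check into a one-way latch: while the file is absent, a failed connection check is tolerated (the backend may still be initializing); once a monitor pass has completed, the flag exists and any later failure is treated as real. A small sketch of that gating, with the netstat pipeline replaced by a pretend result:

```shell
# Sketch of the fail-on-AMQP-check latch. Flag path is a toy stand-in for
# $VOLUME_FAIL_ON_AMQP_CHECK_FILE.
FLAG="${TMPDIR:-/tmp}/amqp-flag-demo.$$"
rm -f "$FLAG"

amqp_check_rc=1   # pretend the connection check found no ESTABLISHED socket

if [ $amqp_check_rc -ne 0 ]; then
    if [ -e "$FLAG" ]; then
        verdict="fail"          # flag set: a genuine failure, restart
    else
        verdict="initializing"  # no flag yet: keep waiting, report success
    fi
fi

touch "$FLAG"                   # after one full pass, failures become fatal
echo "$verdict"
rm -f "$FLAG"
```

start/stop remove the flag so every service lifecycle begins in the tolerant state again.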

From 3ba260dbc2d69a797c8deb55ff0871e752dddebd Mon Sep 17 00:00:00 2001
From: Chris Friesen <chris.friesen@windriver.com>
Date: Tue, 11 Aug 2015 18:48:45 -0400
Subject: [PATCH] CGTS-1851: enable multiple nova-conductor workers
Enable multiple nova-conductor workers by properly handling
the fact that when there are multiple workers the first one just
coordinates the others and doesn't itself connect to AMQP or the DB.
This also fixes up a bunch of whitespace issues, replacing a number
of hard tabs with spaces to make it easier to follow the code.
---
ocf/nova-conductor | 58 ++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 41 insertions(+), 17 deletions(-)
diff --git a/ocf/nova-conductor b/ocf/nova-conductor
index aa1ee2a..25e5f8f 100644
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -239,6 +239,18 @@ nova_conductor_status() {
fi
}
+check_port() {
+ local port=$1
+ local pid=$2
+ netstat -punt | grep -s "$port" | grep -s "$pid" | grep -qs "ESTABLISHED"
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return 0
+ else
+ return 1
+ fi
+}
+
nova_conductor_monitor() {
local rc
local pid
@@ -258,24 +270,36 @@ nova_conductor_monitor() {
# Check the connections according to the PID.
# We are sure to hit the conductor process and not other nova process with the same connection behavior (for example nova-cert)
if ocf_is_true "$OCF_RESKEY_zeromq"; then
- pid=`cat $OCF_RESKEY_pid`
- conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_db=$?
- if [ $rc_db -ne 0 ]; then
- ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
- return $OCF_NOT_RUNNING
- fi
- else
pid=`cat $OCF_RESKEY_pid`
- conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_db=$?
- conductor_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_amqp=$?
- if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
- ocf_log err "Nova Conductor is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
- return $OCF_NOT_RUNNING
- fi
- fi
+ check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ check_port $OCF_RESKEY_amqp_server_port $pid; rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ # may have multiple workers, in which case $pid is the parent and we want to check the children
+ # If there are no children or at least one child is not connected to both DB and AMQP then we fail.
+ KIDPIDS=`pgrep -P $pid -f nova-conductor`
+ if [ ! -z "$KIDPIDS" ]; then
+ for pid in $KIDPIDS
+ do
+ check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ check_port $OCF_RESKEY_amqp_server_port $pid; rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor pid $pid is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ done
+ else
+ ocf_log err "Nova Conductor pid $pid is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ fi
+ fi
ocf_log debug "OpenStack Nova Conductor (nova-conductor) monitor succeeded"
return $OCF_SUCCESS
--
1.9.1
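`check_port` reports its result through the exit status and prints nothing, so callers must read `$?` immediately after the call; capturing the call with backticks yields an empty string, not the return code. A minimal sketch of the difference (toy function, not the real netstat check):

```shell
# Toy stand-in: reports "not connected" via its exit status, prints nothing.
check_port() {
    return 1
}

# Wrong: command substitution captures stdout, which is empty here.
captured=`check_port`

# Right: call the function, then read $? before anything else runs.
check_port
rc=$?

echo "captured='$captured' rc=$rc"
```

This is why a status-returning helper should either be called directly with `$?` read afterwards, or print its result on stdout if it is meant to be captured.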

---
ocf/glance-api | 3 +++
1 file changed, 3 insertions(+)
--- a/ocf/glance-api
+++ b/ocf/glance-api
@@ -243,6 +243,9 @@ glance_api_monitor() {
return $rc
fi
+ ### DPENNEY: Bypass monitor until keyring functionality is ported
+ return $OCF_SUCCESS
+
# Monitor the RA by retrieving the image list
if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \

Index: git/ocf/glance-api
===================================================================
--- git.orig/ocf/glance-api
+++ git/ocf/glance-api
@@ -249,7 +249,7 @@ glance_api_monitor() {
--os_username "$OCF_RESKEY_os_username" \
--os_tenant_name "$OCF_RESKEY_os_tenant_name" \
--os_auth_url "$OCF_RESKEY_os_auth_url" \
- index > /dev/null 2>&1
+ image-list > /dev/null 2>&1
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "Failed to connect to the OpenStack ImageService (glance-api): $rc"

Index: git/ocf/heat-api-cloudwatch
===================================================================
--- /dev/null
+++ git/ocf/heat-api-cloudwatch
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api-cloudwatch) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api-cloudwatch"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8000"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api-cloudwatch) process as an HA resource
+
+ The 'start' operation starts the heat-api-cloudwatch service.
+ The 'stop' operation stops the heat-api-cloudwatch service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api-cloudwatch service is running
+ The 'monitor' operation reports whether the heat-api-cloudwatch service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api-cloudwatch">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+May manage a heat-api-cloudwatch instance or a clone set that
+creates a distributed heat-api-cloudwatch cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api-cloudwatch)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api-cloudwatch)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api-cloudwatch) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api-cloudwatch) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cloudwatch) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api-cloudwatch) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cloudwatch) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api-cloudwatch server.
+
+</longdesc>
+<shortdesc lang="en">heat-api-cloudwatch listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api-cloudwatch</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_cloudwatch_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+heat_api_cloudwatch_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_cloudwatch_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+heat_api_cloudwatch_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api-cloudwatch) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+heat_api_cloudwatch_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_cloudwatch_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+ engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "heat-api-cloudwatch is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cloudwatch) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_cloudwatch_start() {
+ local rc
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api-cloudwatch daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_cloudwatch_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cloudwatch) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) started"
+ return $OCF_SUCCESS
+}
+
+heat_api_cloudwatch_stop() {
+ local rc
+ local pid
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cloudwatch) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cloudwatch) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_cloudwatch_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_cloudwatch_start;;
+ stop) heat_api_cloudwatch_stop;;
+ status) heat_api_cloudwatch_status;;
+ monitor) heat_api_cloudwatch_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
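The stop path above sizes its graceful-shutdown wait from the CRM meta timeout, which Pacemaker passes in milliseconds, keeping 5 seconds of slack so the SIGKILL fallback still fits inside the operation window. A small sketch of that derivation with a hypothetical 20-second stop timeout:

```shell
# CRM passes the operation timeout in milliseconds; keep 5s of slack so
# the SIGKILL escalation still happens inside the operation window.
OCF_RESKEY_CRM_meta_timeout=20000   # hypothetical: a 20s stop timeout

shutdown_timeout=15                 # default when no meta timeout is set
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
    shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
echo "will wait ${shutdown_timeout}s before escalating to SIGKILL"
```

With a 20000 ms meta timeout this yields a 15 s SIGTERM wait, matching the hard-coded default.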

---
ocf/heat-engine | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
--- a/ocf/heat-engine
+++ b/ocf/heat-engine
@@ -238,6 +238,24 @@ heat_engine_status() {
fi
}
+# Function to check a process for port usage, as well as children
+check_port() {
+ local port=$1
+ local pid=$2
+
+ local children=`ps -ef | awk -v ppid=$pid '$3 == ppid { print $2}'`
+
+ for p in $pid $children
+ do
+ netstat -punt | grep -s "$port" | grep -s "$p" | grep -qs "ESTABLISHED"
+ if [ $? -eq 0 ]
+ then
+ return 0
+ fi
+ done
+ return 1
+}
+
heat_engine_monitor() {
local rc
local pid
@@ -258,7 +276,7 @@ heat_engine_monitor() {
# We are sure to hit the heat-engine process and not other heat process with the same connection behavior (for example heat-api)
if ocf_is_true "$OCF_RESKEY_zeromq"; then
pid=`cat $OCF_RESKEY_pid`
- engine_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ engine_db_check=`check_port "$OCF_RESKEY_database_server_port" "$pid"`
rc_db=$?
if [ $rc_db -ne 0 ]; then
ocf_log err "heat-engine is not connected to the database server: $rc_db"
@@ -266,9 +284,9 @@ heat_engine_monitor() {
fi
else
pid=`cat $OCF_RESKEY_pid`
- engine_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ engine_db_check=`check_port "$OCF_RESKEY_database_server_port" "$pid"`
rc_db=$?
- engine_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ engine_amqp_check=`check_port "$OCF_RESKEY_amqp_server_port" "$pid"`
rc_amqp=$?
if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
ocf_log err "Heat Engine is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
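heat-engine's `check_port` succeeds if the parent or any of its children holds the connection, which is what makes the check work once the service forks workers. A sketch of that any-of loop, with the `netstat | grep` pipeline replaced by a hypothetical predicate:

```shell
# Hypothetical stand-in for the netstat pipeline: only PID 1234 counts as
# connected in this toy.
is_connected() {
    [ "$1" = "1234" ]
}

# Succeed if any of the given PIDs (parent first, then children) is connected.
check_port_any() {
    for p in "$@"; do
        if is_connected "$p"; then
            return 0
        fi
    done
    return 1
}

check_port_any 1111 1234; rc_ok=$?
check_port_any 1111 2222; rc_bad=$?
echo "rc_ok=$rc_ok rc_bad=$rc_bad"
```

One connected PID is enough for success; only when the parent and every child fail the predicate does the monitor report the service as down.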

Index: git/ocf/heat-api
===================================================================
--- /dev/null
+++ git/ocf/heat-api
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8004"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api) process as an HA resource
+
+ The 'start' operation starts the heat-api service.
+ The 'stop' operation stops the heat-api service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api service is running
+ The 'monitor' operation reports whether the heat-api service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api)
+May manage a heat-api instance or a clone set that
+creates a distributed heat-api cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api server.
+
+</longdesc>
+<shortdesc lang="en">heat-api listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+heat_api_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+heat_api_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+heat_api_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+ engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "heat-api is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_start() {
+ local rc
+
+ heat_api_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api) started"
+ return $OCF_SUCCESS
+}
+
+heat_api_stop() {
+ local rc
+ local pid
+
+ heat_api_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_start;;
+ stop) heat_api_stop;;
+ status) heat_api_status;;
+ monitor) heat_api_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
Index: git/ocf/heat-api-cfn
===================================================================
--- /dev/null
+++ git/ocf/heat-api-cfn
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api-cfn)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api-cfn) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api-cfn"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8000"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api-cfn) process as an HA resource
+
+ The 'start' operation starts the heat-api-cfn service.
+ The 'stop' operation stops the heat-api-cfn service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api-cfn service is running
+ The 'monitor' operation reports whether the heat-api-cfn service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api-cfn">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api-cfn)
+May manage a heat-api-cfn instance or a clone set that
+creates a distributed heat-api-cfn cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api-cfn)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api-cfn)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api-cfn)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api-cfn) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api-cfn) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api-cfn)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cfn) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api-cfn) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cfn) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api-cfn server.
+
+</longdesc>
+<shortdesc lang="en">heat-api-cfn listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api-cfn)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api-cfn</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_cfn_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+heat_api_cfn_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_cfn_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+heat_api_cfn_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api-cfn) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+heat_api_cfn_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_cfn_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+ engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "heat-api-cfn is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cfn) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_cfn_start() {
+ local rc
+
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api-cfn daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_cfn_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cfn) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) started"
+ return $OCF_SUCCESS
+}
+
+heat_api_cfn_stop() {
+ local rc
+ local pid
+
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cfn) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cfn) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_cfn_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_cfn_start;;
+ stop) heat_api_cfn_stop;;
+ status) heat_api_cfn_status;;
+ monitor) heat_api_cfn_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+

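The `*_check_port` helper repeated across these agents (heat-api, heat-api-cfn, nova-conductor) pairs a numeric range/list regex with a strict 4-character length test. A simplified standalone sketch (illustrative, not the shipped code) of its behavior:

```shell
# Simplified re-implementation of the *_check_port helpers above.
# Quirk: the regex accepts any digit string, but the separate length
# test rejects anything not exactly 4 characters long, so "80" and
# "65535" both fail while zero-padded "0080" passes.
check_port() {
    int="$1"
    cnt=${#int}
    if echo "$int" | grep -Eqx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*' \
            && [ "$cnt" -eq 4 ]; then
        echo "valid: $int"
    else
        echo "invalid: $int"
    fi
}

check_port 8000
check_port 80
```

This explains why the default ports above (8000, 8004, 3306, 5672) all happen to be four digits: any other length would abort the agent with OCF_ERR_CONFIGURED.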
---
ocf/neutron-server | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/ocf/neutron-server
+++ b/ocf/neutron-server
@@ -288,7 +288,7 @@ neutron_server_start() {
# Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- --config-file=$OCF_RESKEY_plugin_config --log-file=/var/log/neutron/server.log $OCF_RESKEY_additional_parameters"' >> \
+ --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
/dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.

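The `*_stop` functions in these agents size their graceful-shutdown window from Pacemaker's operation timeout before falling back to SIGKILL. A minimal sketch of that calculation (function name is illustrative):

```shell
# Mirrors the shutdown_timeout computation in the *_stop functions:
# OCF_RESKEY_CRM_meta_timeout arrives in milliseconds; convert it to
# seconds and keep 5s of headroom for the SIGKILL fallback, defaulting
# to 15s when Pacemaker supplies no timeout.
compute_shutdown_timeout() {
    meta_timeout_ms="$1"
    if [ -n "$meta_timeout_ms" ]; then
        echo $(( (meta_timeout_ms / 1000) - 5 ))
    else
        echo 15
    fi
}

compute_shutdown_timeout 30000   # 25
compute_shutdown_timeout         # 15
```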
Index: openstack-resource-agents-git-64e633d/ocf/neutron-server
===================================================================
--- openstack-resource-agents-git-64e633d.orig/ocf/neutron-server 2016-08-09 19:09:49.981633000 -0400
+++ openstack-resource-agents-git-64e633d/ocf/neutron-server 2016-08-10 09:31:41.221558000 -0400
@@ -25,6 +25,7 @@
# OCF_RESKEY_binary
# OCF_RESKEY_config
# OCF_RESKEY_plugin_config
+# OCF_RESKEY_sriov_plugin_config
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_os_username
@@ -45,6 +46,7 @@
OCF_RESKEY_binary_default="neutron-server"
OCF_RESKEY_config_default="/etc/neutron/neutron.conf"
OCF_RESKEY_plugin_config_default="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
+OCF_RESKEY_sriov_plugin_config_default="/etc/neutron/plugins/ml2/ml2_conf_sriov.ini"
OCF_RESKEY_user_default="neutron"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_url_default="http://127.0.0.1:9696"
@@ -53,6 +55,7 @@
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
: ${OCF_RESKEY_plugin_config=${OCF_RESKEY_plugin_config_default}}
+: ${OCF_RESKEY_sriov_plugin_config=${OCF_RESKEY_sriov_plugin_config_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_url=${OCF_RESKEY_url_default}}
@@ -115,6 +118,14 @@
<content type="string" default="${OCF_RESKEY_plugin_config_default}" />
</parameter>
+<parameter name="sriov_plugin_config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack sriov plugin configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack neutron sriov config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_sriov_plugin_config_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Neutron Server (neutron-server)
@@ -288,7 +299,7 @@
# Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
+ --config-file=$OCF_RESKEY_plugin_config --config-file=$OCF_RESKEY_sriov_plugin_config $OCF_RESKEY_additional_parameters"' >> \
/dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.

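The start paths in these agents all rely on the same quoting trick: the double-quoted segment of the `-c` argument expands the OCF_RESKEY_* variables in the agent's own shell, while the appended single-quoted segment is passed verbatim, so `echo $!` is evaluated by the child shell and captures the backgrounded daemon's PID for the pid file. A minimal sketch (using `sh -c` in place of `su`, which may require privileges; names are illustrative):

```shell
# Two quoting contexts concatenated into one -c argument:
#   "...": expanded now (binary, config path, extra args)
#   '...': passed literally; $! is evaluated by the child shell,
#          yielding the backgrounded daemon's PID.
launch_daemon() {
    cmd="$1"
    pidfile="$2"
    sh -c "$cmd"' >> /dev/null 2>&1 & echo $!' > "$pidfile"
}

pidfile="$(mktemp)"
launch_daemon "sleep 5" "$pidfile"
```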
---
ocf/nova-novnc | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -139,7 +139,7 @@ Additional parameters to pass on to the
<actions>
<action name="start" timeout="10" />
-<action name="stop" timeout="10" />
+<action name="stop" timeout="15" />
<action name="status" timeout="10" />
<action name="monitor" timeout="5" interval="10" />
<action name="validate-all" timeout="5" />
@@ -260,6 +260,23 @@ nova_vnc_console_start() {
return $OCF_SUCCESS
}
+nova_vnc_console_stop_all() {
+ # Make sure nova-novncproxy and all the children are stopped.
+ for sig in TERM KILL
+ do
+ for pid in $(ps -eo pid,cmd | grep python |\
+ grep "nova-novncproxy" | \
+ grep -v grep | awk '{print $1}')
+ do
+ ocf_log info "Manually killing $pid with $sig"
+ kill -$sig $pid
+ done
+ sleep 1
+ done
+
+ return $OCF_SUCCESS
+}
+
nova_vnc_console_stop() {
local rc
local pid
@@ -268,6 +285,7 @@ nova_vnc_console_stop() {
rc=$?
if [ $rc -eq $OCF_NOT_RUNNING ]; then
ocf_log info "OpenStack Nova VNC Console (nova-novncproxy) already stopped"
+ nova_vnc_console_stop_all
return $OCF_SUCCESS
fi
@@ -277,6 +295,7 @@ nova_vnc_console_stop() {
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "OpenStack Nova VNC Console (nova-novncproxy) couldn't be stopped"
+ nova_vnc_console_stop_all
exit $OCF_ERR_GENERIC
fi
@@ -310,6 +329,8 @@ nova_vnc_console_stop() {
rm -f $OCF_RESKEY_pid
+ nova_vnc_console_stop_all
+
return $OCF_SUCCESS
}

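The `nova_vnc_console_stop_all` helper above escalates from SIGTERM to SIGKILL across every matching process, catching children that outlive the pid in the pid file. A generic sketch of the pattern (using `pgrep -f` in place of the RA's `ps | grep | awk` pipeline; names are illustrative):

```shell
# TERM-then-KILL escalation: ask each matching process to exit,
# wait a second, then force-kill any survivors on the second pass.
kill_by_pattern() {
    pattern="$1"
    for sig in TERM KILL; do
        for pid in $(pgrep -f "$pattern"); do
            kill -"$sig" "$pid" 2>/dev/null
        done
        sleep 1
    done
}
```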
diff --git a/ocf/nova-api b/ocf/nova-api
index 5764adc..b67c4e5 100644
--- a/ocf/nova-api
+++ b/ocf/nova-api
@@ -275,6 +275,9 @@ nova_api_start() {
# Change the working dir to /, to be sure it's accesible
cd /
+ # Run the pre-start hooks. This can be used to trigger a nova database sync, for example.
+ /usr/bin/nova-controller-runhooks
+
# run the actual nova-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
diff --git a/ocf/nova-conductor b/ocf/nova-conductor
index dfcff97..aa1ee2a 100644
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -294,6 +294,9 @@ nova_conductor_start() {
# Change the working dir to /, to be sure it's accesible
cd /
+ # Run the pre-start hooks. This can be used to trigger a nova database sync, for example.
+ /usr/bin/nova-controller-runhooks
+
# run the actual nova-conductor daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
diff --git a/ocf/nova-scheduler b/ocf/nova-scheduler
index afaf8e9..45378ca 100644
--- a/ocf/nova-scheduler
+++ b/ocf/nova-scheduler
@@ -294,6 +294,9 @@ nova_scheduler_start() {
# Change the working dir to /, to be sure it's accesible
cd /
+ # Run the pre-start hooks. This can be used to trigger a nova database sync, for example.
+ /usr/bin/nova-controller-runhooks
+
# run the actual nova-scheduler daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \

---
ocf/nova-api | 3 +++
ocf/nova-cert | 3 +++
ocf/nova-conductor | 3 +++
ocf/nova-consoleauth | 3 +++
ocf/nova-network | 3 +++
ocf/nova-novnc | 3 +++
ocf/nova-scheduler | 3 +++
7 files changed, 21 insertions(+)
--- a/ocf/nova-api
+++ b/ocf/nova-api
@@ -272,6 +272,9 @@ nova_api_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-cert
+++ b/ocf/nova-cert
@@ -285,6 +285,9 @@ nova_cert_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-cert daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -284,6 +284,9 @@ nova_conductor_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-conductor daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-consoleauth
+++ b/ocf/nova-consoleauth
@@ -285,6 +285,9 @@ nova_consoleauth_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-consoleauth daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-network
+++ b/ocf/nova-network
@@ -264,6 +264,9 @@ nova_network_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-network daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -235,6 +235,9 @@ nova_vnc_console_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-novncproxy daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config --web /usr/share/novnc/ \
--- a/ocf/nova-scheduler
+++ b/ocf/nova-scheduler
@@ -284,6 +284,9 @@ nova_scheduler_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accesible
+ cd /
+
# run the actual nova-scheduler daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \

---
ocf/nova-conductor | 383 +++++++++++++++++++++++++++++++++++++++++++++++++++++
ocf/nova-novnc | 5
2 files changed, 387 insertions(+), 1 deletion(-)
--- /dev/null
+++ b/ocf/nova-conductor
@@ -0,0 +1,383 @@
+#!/bin/sh
+#
+#
+# OpenStack Conductor Service (nova-conductor)
+#
+# Description: Manages an OpenStack Conductor Service (nova-conductor) process as an HA resource
+#
+# Authors: Sébastien Han
+# Mainly inspired by the Glance API resource agent written by Martin Gerhard Loschwitz from Hastexo: http://goo.gl/whLpr
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_database_server_port
+# OCF_RESKEY_amqp_server_port
+# OCF_RESKEY_zeromq
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="nova-conductor"
+OCF_RESKEY_config_default="/etc/nova/nova.conf"
+OCF_RESKEY_user_default="nova"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_database_server_port_default="3306"
+OCF_RESKEY_amqp_server_port_default="5672"
+OCF_RESKEY_zeromq_default="false"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_database_server_port=${OCF_RESKEY_database_server_port_default}}
+: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
+: ${OCF_RESKEY_zeromq=${OCF_RESKEY_zeromq_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack ConductorService (nova-conductor) process as an HA resource
+
+ The 'start' operation starts the conductor service.
+ The 'stop' operation stops the conductor service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the conductor service is running
+ The 'monitor' operation reports whether the conductor service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="nova-conductor">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Nova Conductor Service (nova-conductor)
+May manage a nova-conductor instance or a clone set that
+creates a distributed nova-conductor cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Conductor Service (nova-conductor)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Nova Conductor server binary (nova-conductor)
+</longdesc>
+<shortdesc lang="en">OpenStack Nova Conductor server binary (nova-conductor)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Conductor Service (nova-conductor) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Nova Conductor (nova-conductor) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Conductor Service (nova-conductor)
+</longdesc>
+<shortdesc lang="en">OpenStack Conductor Service (nova-conductor) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Conductor Service (nova-conductor) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Conductor Service (nova-conductor) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="database_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the database server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">Database listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_database_server_port_default}" />
+</parameter>
+
+<parameter name="amqp_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">AMQP listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
+</parameter>
+
+<parameter name="zeromq" unique="0" required="0">
+<longdesc lang="en">
+If zeromq is used, this will disable the connection test to the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">Zero-MQ usage</shortdesc>
+<content type="boolean" default="${OCF_RESKEY_zeromq_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Conductor Service (nova-conductor)
+</longdesc>
+<shortdesc lang="en">Additional parameters for nova-conductor</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+nova_conductor_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+nova_conductor_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ nova_conductor_check_port $OCF_RESKEY_database_server_port
+ nova_conductor_check_port $OCF_RESKEY_amqp_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+nova_conductor_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Nova Conductor (nova-conductor) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+nova_conductor_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local rc_amqp
+ local conductor_db_check
+ local conductor_amqp_check
+
+ nova_conductor_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the connections according to the PID.
+ # We are sure to hit the conductor process and not other nova process with the same connection behavior (for example nova-cert)
+ if ocf_is_true "$OCF_RESKEY_zeromq"; then
+ pid=`cat $OCF_RESKEY_pid`
+ conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc_db=$?
+ conductor_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ fi
+
+ ocf_log debug "OpenStack Nova Conductor (nova-conductor) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+nova_conductor_start() {
+ local rc
+
+ nova_conductor_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual nova-conductor daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ nova_conductor_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Nova Conductor (nova-conductor) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) started"
+ return $OCF_SUCCESS
+}
+
+nova_conductor_stop() {
+ local rc
+ local pid
+
+ nova_conductor_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Nova Conductor (nova-conductor) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ nova_conductor_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Nova Conductor (nova-conductor) still hasn't stopped yet. Waiting ..."
+ done
+
+ nova_conductor_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+nova_conductor_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) nova_conductor_start;;
+ stop) nova_conductor_stop;;
+ status) nova_conductor_status;;
+ monitor) nova_conductor_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
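The stop path above budgets its SIGTERM wait from the cluster manager's operation timeout, which Pacemaker exports in milliseconds via `OCF_RESKEY_CRM_meta_timeout`, reserving a 5-second margin for the SIGKILL fallback. A standalone sketch of that arithmetic (the function name is illustrative, not part of the resource agent):

```shell
# Derive the stop-wait budget in seconds from the cluster manager's
# operation timeout (milliseconds), leaving 5 s of headroom so the
# SIGKILL fallback and PID-file cleanup still fit inside the action
# timeout enforced by Pacemaker.
shutdown_timeout() {
    local meta_timeout_ms=$1
    echo $(( (meta_timeout_ms / 1000) - 5 ))
}

# With a 20 s stop-operation timeout this yields a 15 s SIGTERM wait.
shutdown_timeout 20000
```

If no meta timeout is set, the agent falls back to a fixed 15-second wait, matching the same margin.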
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -214,7 +214,10 @@ nova_vnc_console_monitor() {
# Check whether we are supposed to monitor by logging into nova-novncproxy
# and do it if that's the case.
vnc_list_check=`netstat -a | grep -s "$OCF_RESKEY_console_port" | grep -qs "LISTEN"`
- rc=$?
+ #rc=$?
+ # not sure why grep is returning 1 .. should root cause at some point.
+ # return success for now since service and port are both up
+ rc=0
if [ $rc -ne 0 ]; then
ocf_log err "Nova VNC Console doesn't seem to listen on his default port: $rc"
return $OCF_NOT_RUNNING



@@ -0,0 +1,57 @@
---
ocf/nova-novnc | 8 +++-----
ocf/neutron-agent-dhcp | 2 +-
ocf/neutron-agent-l3 | 2 +-
ocf/neutron-server | 2 +-
4 files changed, 6 insertions(+), 8 deletions(-)
--- a/ocf/neutron-agent-dhcp
+++ b/ocf/neutron-agent-dhcp
@@ -95,7 +95,7 @@ Location of the OpenStack Quantum Servic
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
-<parameter name="plugin config" unique="0" required="0">
+<parameter name="plugin_config" unique="0" required="0">
<longdesc lang="en">
Location of the OpenStack DHCP Service (neutron-dhcp-agent) configuration file
</longdesc>
--- a/ocf/neutron-agent-l3
+++ b/ocf/neutron-agent-l3
@@ -95,7 +95,7 @@ Location of the OpenStack Quantum Servic
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
-<parameter name="plugin config" unique="0" required="0">
+<parameter name="plugin_config" unique="0" required="0">
<longdesc lang="en">
Location of the OpenStack L3 Service (neutron-l3-agent) configuration file
</longdesc>
--- a/ocf/neutron-server
+++ b/ocf/neutron-server
@@ -101,7 +101,7 @@ Location of the OpenStack Quantum Server
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
-<parameter name="plugin config" unique="0" required="0">
+<parameter name="plugin_config" unique="0" required="0">
<longdesc lang="en">
Location of the OpenStack Default Plugin (Open-vSwitch) configuration file
</longdesc>
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -213,11 +213,9 @@ nova_vnc_console_monitor() {
# Check whether we are supposed to monitor by logging into nova-novncproxy
# and do it if that's the case.
- vnc_list_check=`netstat -a | grep -s "$OCF_RESKEY_console_port" | grep -qs "LISTEN"`
- #rc=$?
- # not sure why grep is returning 1 .. should root cause at some point.
- # return success for now since service and port are both up
- rc=0
+ # Adding -n to netstat so that dns delays will not impact this.
+ vnc_list_check=`netstat -an | grep -s "$OCF_RESKEY_console_port" | grep -qs "LISTEN"`
+ rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "Nova VNC Console doesn't seem to listen on his default port: $rc"
return $OCF_NOT_RUNNING


@@ -0,0 +1,20 @@
---
ocf/neutron-server | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
--- a/ocf/neutron-server
+++ b/ocf/neutron-server
@@ -287,8 +287,11 @@ neutron_server_start() {
# run the actual neutron-server daemon with correct configurations files (server + plugin)
# Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
- su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
+ ## DPENNEY: Removing plugin ref
+ ##su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ ## --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
+ ## /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config"' >> \
/dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.


@@ -0,0 +1,388 @@
From daaf82a9e83f28e1e1072fc6d77ca57d4eb22c5d Mon Sep 17 00:00:00 2001
From: Angie Wang <Angie.Wang@windriver.com>
Date: Mon, 14 Nov 2016 13:58:27 -0500
Subject: [PATCH] remove-ceilometer-mem-db
---
ocf/ceilometer-mem-db | 369 --------------------------------------------------
1 file changed, 369 deletions(-)
delete mode 100644 ocf/ceilometer-mem-db
diff --git a/ocf/ceilometer-mem-db b/ocf/ceilometer-mem-db
deleted file mode 100644
index d7112d8..0000000
--- a/ocf/ceilometer-mem-db
+++ /dev/null
@@ -1,369 +0,0 @@
-#!/bin/sh
-#
-#
-# OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-#
-# Description: Manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
-#
-# Authors: Emilien Macchi
-# Mainly inspired by the Nova Scheduler resource agent written by Sebastien Han
-#
-# Support: openstack@lists.launchpad.net
-# License: Apache Software License (ASL) 2.0
-#
-# Copyright (c) 2014-2016 Wind River Systems, Inc.
-# SPDX-License-Identifier: Apache-2.0
-#
-#
-#
-#
-#
-# See usage() function below for more details ...
-#
-# OCF instance parameters:
-# OCF_RESKEY_binary
-# OCF_RESKEY_config
-# OCF_RESKEY_user
-# OCF_RESKEY_pid
-# OCF_RESKEY_monitor_binary
-# OCF_RESKEY_amqp_server_port
-# OCF_RESKEY_additional_parameters
-#######################################################################
-# Initialization:
-
-: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
-. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
-
-#######################################################################
-
-# Fill in some defaults if no values are specified
-
-OCF_RESKEY_binary_default="ceilometer-mem-db"
-OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_user_default="root"
-OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
-OCF_RESKEY_amqp_server_port_default="5672"
-
-: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
-: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
-: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
-: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
-: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
-
-#######################################################################
-
-usage() {
- cat <<UEND
- usage: $0 (start|stop|validate-all|meta-data|status|monitor)
-
- $0 manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
-
- The 'start' operation starts the scheduler service.
- The 'stop' operation stops the scheduler service.
- The 'validate-all' operation reports whether the parameters are valid
- The 'meta-data' operation reports this RA's meta-data information
- The 'status' operation reports whether the scheduler service is running
- The 'monitor' operation reports whether the scheduler service seems to be working
-
-UEND
-}
-
-meta_data() {
- cat <<END
-<?xml version="1.0"?>
-<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
-<resource-agent name="ceilometer-mem-db">
-<version>1.0</version>
-
-<longdesc lang="en">
-Resource agent for the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-May manage a ceilometer-mem-db instance or a clone set that
-creates a distributed ceilometer-mem-db cluster.
-</longdesc>
-<shortdesc lang="en">Manages the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)</shortdesc>
-<parameters>
-
-<parameter name="binary" unique="0" required="0">
-<longdesc lang="en">
-Location of the OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)</shortdesc>
-<content type="string" default="${OCF_RESKEY_binary_default}" />
-</parameter>
-
-<parameter name="config" unique="0" required="0">
-<longdesc lang="en">
-Location of the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) configuration file
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB (ceilometer-mem-db registry) config file</shortdesc>
-<content type="string" default="${OCF_RESKEY_config_default}" />
-</parameter>
-
-<parameter name="user" unique="0" required="0">
-<longdesc lang="en">
-User running OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) user</shortdesc>
-<content type="string" default="${OCF_RESKEY_user_default}" />
-</parameter>
-
-<parameter name="pid" unique="0" required="0">
-<longdesc lang="en">
-The pid file to use for this OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) instance
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) pid file</shortdesc>
-<content type="string" default="${OCF_RESKEY_pid_default}" />
-</parameter>
-
-<parameter name="amqp_server_port" unique="0" required="0">
-<longdesc lang="en">
-The listening port number of the AMQP server. Use for monitoring purposes
-</longdesc>
-<shortdesc lang="en">AMQP listening port</shortdesc>
-<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
-</parameter>
-
-
-<parameter name="additional_parameters" unique="0" required="0">
-<longdesc lang="en">
-Additional parameters to pass on to the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-</longdesc>
-<shortdesc lang="en">Additional parameters for ceilometer-mem-db</shortdesc>
-<content type="string" />
-</parameter>
-
-</parameters>
-
-<actions>
-<action name="start" timeout="20" />
-<action name="stop" timeout="20" />
-<action name="status" timeout="20" />
-<action name="monitor" timeout="30" interval="20" />
-<action name="validate-all" timeout="5" />
-<action name="meta-data" timeout="5" />
-</actions>
-</resource-agent>
-END
-}
-
-#######################################################################
-# Functions invoked by resource manager actions
-
-ceilometer_mem_db_check_port() {
-# This function has been taken from the squid RA and improved a bit
-# The length of the integer must be 4
-# Examples of valid port: "1080", "0080"
-# Examples of invalid port: "1080bad", "0", "0000", ""
-
- local int
- local cnt
-
- int="$1"
- cnt=${#int}
- echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
-
- if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
- ocf_log err "Invalid port number: $1"
- exit $OCF_ERR_CONFIGURED
- fi
-}
-
-ceilometer_mem_db_validate() {
- local rc
-
- check_binary $OCF_RESKEY_binary
- check_binary netstat
- ceilometer_mem_db_check_port $OCF_RESKEY_amqp_server_port
-
- # A config file on shared storage that is not available
- # during probes is OK.
- if [ ! -f $OCF_RESKEY_config ]; then
- if ! ocf_is_probe; then
- ocf_log err "Config $OCF_RESKEY_config doesn't exist"
- return $OCF_ERR_INSTALLED
- fi
- ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
- fi
-
- getent passwd $OCF_RESKEY_user >/dev/null 2>&1
- rc=$?
- if [ $rc -ne 0 ]; then
- ocf_log err "User $OCF_RESKEY_user doesn't exist"
- return $OCF_ERR_INSTALLED
- fi
-
- true
-}
-
-ceilometer_mem_db_status() {
- local pid
- local rc
-
- if [ ! -f $OCF_RESKEY_pid ]; then
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
- return $OCF_NOT_RUNNING
- else
- pid=`cat $OCF_RESKEY_pid`
- fi
-
- ocf_run -warn kill -s 0 $pid
- rc=$?
- if [ $rc -eq 0 ]; then
- return $OCF_SUCCESS
- else
- ocf_log info "Old PID file found, but OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
- rm -f $OCF_RESKEY_pid
- return $OCF_NOT_RUNNING
- fi
-}
-
-ceilometer_mem_db_monitor() {
- local rc
- local pid
- local scheduler_amqp_check
-
- ceilometer_mem_db_status
- rc=$?
-
- # If status returned anything but success, return that immediately
- if [ $rc -ne $OCF_SUCCESS ]; then
- return $rc
- fi
-
- # Check the connections according to the PID.
- # We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
- pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc=$?
- if [ $rc -ne 0 ]; then
- ocf_log err "Mem DB is not connected to the AMQP server : $rc"
- return $OCF_NOT_RUNNING
- fi
-
- ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) monitor succeeded"
- return $OCF_SUCCESS
-}
-
-ceilometer_mem_db_start() {
- local rc
-
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -eq $OCF_SUCCESS ]; then
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already running"
- return $OCF_SUCCESS
- fi
-
- # run the actual ceilometer-mem-db daemon. Don't use ocf_run as we're sending the tool's output
- # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
- su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
-
- # Spin waiting for the server to come up.
- while true; do
- ceilometer_mem_db_monitor
- rc=$?
- [ $rc -eq $OCF_SUCCESS ] && break
- if [ $rc -ne $OCF_NOT_RUNNING ]; then
- ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) start failed"
- exit $OCF_ERR_GENERIC
- fi
- sleep 1
- done
-
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) started"
- return $OCF_SUCCESS
-}
-
-ceilometer_mem_db_confirm_stop() {
- local my_bin
- local my_processes
-
- my_binary=`which ${OCF_RESKEY_binary}`
- my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
-
- if [ -n "${my_processes}" ]
- then
- ocf_log info "About to SIGKILL the following: ${my_processes}"
- pkill -KILL -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"
- fi
-}
-
-ceilometer_mem_db_stop() {
- local rc
- local pid
-
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -eq $OCF_NOT_RUNNING ]; then
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already stopped"
- ceilometer_mem_db_confirm_stop
- return $OCF_SUCCESS
- fi
-
- # Try SIGTERM
- pid=`cat $OCF_RESKEY_pid`
- ocf_run kill -s TERM $pid
- rc=$?
- if [ $rc -ne 0 ]; then
- ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) couldn't be stopped"
- ceilometer_mem_db_confirm_stop
- exit $OCF_ERR_GENERIC
- fi
-
- # stop waiting
- shutdown_timeout=2
- if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
- shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
- fi
- count=0
- while [ $count -lt $shutdown_timeout ]; do
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -eq $OCF_NOT_RUNNING ]; then
- break
- fi
- count=`expr $count + 1`
- sleep 1
- ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) still hasn't stopped yet. Waiting ..."
- done
-
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -ne $OCF_NOT_RUNNING ]; then
- # SIGTERM didn't help either, try SIGKILL
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) failed to stop after ${shutdown_timeout}s \
- using SIGTERM. Trying SIGKILL ..."
- ocf_run kill -s KILL $pid
- fi
- ceilometer_mem_db_confirm_stop
-
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) stopped"
-
- rm -f $OCF_RESKEY_pid
-
- return $OCF_SUCCESS
-}
-
-#######################################################################
-
-case "$1" in
- meta-data) meta_data
- exit $OCF_SUCCESS;;
- usage|help) usage
- exit $OCF_SUCCESS;;
-esac
-
-# Anything except meta-data and help must pass validation
-ceilometer_mem_db_validate || exit $?
-
-# What kind of method was invoked?
-case "$1" in
- start) ceilometer_mem_db_start;;
- stop) ceilometer_mem_db_stop;;
- status) ceilometer_mem_db_status;;
- monitor) ceilometer_mem_db_monitor;;
- validate-all) ;;
- *) usage
- exit $OCF_ERR_UNIMPLEMENTED;;
-esac
--
1.8.3.1


@@ -0,0 +1,87 @@
---
ocf/ceilometer-agent-notification | 4 ++--
ocf/ceilometer-api | 4 ++--
ocf/ceilometer-collector | 4 ++--
ocf/ceilometer-mem-db | 4 ++--
4 files changed, 8 insertions(+), 8 deletions(-)
--- a/ocf/ceilometer-api
+++ b/ocf/ceilometer-api
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -324,7 +324,7 @@ ceilometer_api_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
--- a/ocf/ceilometer-agent-notification
+++ b/ocf/ceilometer-agent-notification
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -314,7 +314,7 @@ ceilometer_agent_notification_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
--- a/ocf/ceilometer-collector
+++ b/ocf/ceilometer-collector
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -313,7 +313,7 @@ ceilometer_collector_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
--- a/ocf/ceilometer-mem-db
+++ b/ocf/ceilometer-mem-db
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -312,7 +312,7 @@ ceilometer_mem_db_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi


@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/python-aodhclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
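The extract-at-a-SHA workflow described here pairs with a quilt-style `series` file (one patch filename per line, as in the `deb_patches` directories elsewhere in this change). A minimal sketch of walking such a file — the actual application step is handled by the StarlingX build tooling and is left commented out; names and paths are illustrative:

```shell
# Apply every patch listed in a quilt-style series file, in order.
# The series file lists one patch filename per line; blank lines are
# skipped. The real build tooling performs the application itself.
apply_series() {
    local series_file=$1 patch_dir=$2
    local patch
    while IFS= read -r patch; do
        [ -n "$patch" ] || continue          # skip blank lines
        echo "applying $patch_dir/$patch"
        # git apply "$patch_dir/$patch"      # actual application step
    done < "$series_file"
}
```

Once a patch merges upstream, it is removed from the series file and the extraction SHA is bumped, keeping the local delta as small as possible.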


@@ -0,0 +1,53 @@
From 59078d1ddd3f5f58007973615a67b6f136831823 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Wed, 27 Oct 2021 16:46:26 +0000
Subject: [PATCH] Add wheel package
Add python3-aodhclient-wheel.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 16 ++++++++++++++++
debian/rules | 2 +-
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index df1f5e0..0ae5339 100644
--- a/debian/control
+++ b/debian/control
@@ -78,3 +78,19 @@ Description: OpenStack Alarming as a Service - Python 3.x client
for more than 10 min.
.
This package contains the Python 3.x module.
+
+Package: python3-aodhclient-wheel
+Architecture: all
+Depends:
+ python3-wheel,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: OpenStack Alarming as a Service - Python 3.x client
+ Aodh provides alarming for OpenStack. The alarming component of Aodh, first
+ delivered in the Havana version, allows you to set alarms based on threshold
+ evaluation for a collection of samples. An alarm can be set on a single meter,
+ or on a combination. For example, you may want to trigger an alarm when the
+ memory consumption reaches 70% on a given instance if the instance has been up
+ for more than 10 min.
+ .
+ This package contains the Python wheel.
diff --git a/debian/rules b/debian/rules
index 42e437f..3795caf 100755
--- a/debian/rules
+++ b/debian/rules
@@ -13,7 +13,7 @@ override_dh_auto_build:
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2
+ pkgos-dh_auto_install --no-py2 --wheel
override_dh_auto_test:
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
--
2.30.2


@@ -0,0 +1 @@
0001-Add-wheel-package.patch


@@ -0,0 +1,12 @@
---
debname: python-aodhclient
debver: 2.1.1-1
dl_path:
name: python-aodhclient-debian-2.1.1-1.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-aodhclient/-/archive/debian/2.1.1-1/python-aodhclient-debian-2.1.1-1.tar.gz
md5sum: 86ee75ba3dec6529b48c816c7ddd317e
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-aodhclient
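The `md5sum` field in this meta_data.yaml guards the `dl_path` tarball: the download is verified against the recorded checksum before it is unpacked. A sketch of that check (the function name and error message are illustrative; the file name and checksum come from the YAML):

```shell
# Verify a downloaded source tarball against the md5sum recorded in
# meta_data.yaml before unpacking it. Returns non-zero on mismatch so
# a corrupted or tampered download fails the build early.
verify_md5() {
    local file=$1 expected=$2
    local actual
    actual=$(md5sum "$file" | awk '{print $1}')
    if [ "$actual" != "$expected" ]; then
        echo "checksum mismatch for $file: got $actual, want $expected" >&2
        return 1
    fi
}
```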


@@ -0,0 +1 @@
This repo is for the stx-barbican image, built on top of https://opendev.org/openstack/barbican/


@@ -0,0 +1,9 @@
BUILDER=loci
LABEL=stx-barbican
PROJECT=barbican
DIST_REPOS="OS"
PROJECT_REPO=https://opendev.org/openstack/barbican.git
NON_UNIQUE_SYSTEM_ACCOUNT="yes"
PROJECT_REF=cc076f24e55c24a6fc8e57ca606130090fb6369b
PIP_PACKAGES="pycryptodomex"
PROFILES="fluent"


@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/python-barbicanclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.


@@ -0,0 +1,29 @@
From 599df369e9077f94a3dead25f0c3852222e13f0d Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Mon, 29 Nov 2021 20:50:16 +0000
Subject: [PATCH] Remove openstackclient
Remove build-Depends-Indep for python-openstackclient as it is
not being used and it is causing problems with the build-pkgs
tool
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 1 -
1 file changed, 1 deletion(-)
diff --git a/debian/control b/debian/control
index 73963d9..467dee1 100644
--- a/debian/control
+++ b/debian/control
@@ -19,7 +19,6 @@ Build-Depends-Indep:
python3-hacking,
python3-keystoneauth1,
python3-nose,
- python3-openstackclient,
python3-openstackdocstheme <!nodoc>,
python3-oslo.config,
python3-oslo.i18n,
--
2.30.2


@@ -0,0 +1,2 @@
stx-add-wheel-support.patch
remove-openstackclient.patch


@@ -0,0 +1,46 @@
diff -Nru python-barbicanclient-5.0.1/debian/changelog python-barbicanclient-5.0.1/debian/changelog
--- python-barbicanclient-5.0.1/debian/changelog 2020-10-16 08:42:06.000000000 +0000
+++ python-barbicanclient-5.0.1/debian/changelog 2021-10-03 18:30:48.000000000 +0000
@@ -1,3 +1,10 @@
+python-barbicanclient (5.0.1-2.1) unstable; urgency=medium
+
+ * Non-maintainer upload.
+ * debian/control, debian/rules: Add wheels support.
+
+ -- Chuck Short <charles.short@windriver.com> Sun, 03 Oct 2021 18:30:48 +0000
+
python-barbicanclient (5.0.1-2) unstable; urgency=medium
* Uploading to unstable.
diff -Nru python-barbicanclient-5.0.1/debian/control python-barbicanclient-5.0.1/debian/control
--- python-barbicanclient-5.0.1/debian/control 2020-10-16 08:42:06.000000000 +0000
+++ python-barbicanclient-5.0.1/debian/control 2021-10-03 18:30:42.000000000 +0000
@@ -57,3 +57,16 @@
command-line script (barbican).
.
This package contains the Python 3.x module.
+
+Package: python3-barbicanclient-wheel
+Architecture: all
+Depends:
+ python3-wheels,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: OpenStack Key Management API client - Python 3.x
+ This is a client for the Barbican Key Management API. This package includes a
+ Python library for accessing the API (the barbicanclient module), and a
+ command-line script (barbican).
+ .
+ This package contains the Python 3.x wheel.
diff -Nru python-barbicanclient-5.0.1/debian/rules python-barbicanclient-5.0.1/debian/rules
--- python-barbicanclient-5.0.1/debian/rules 2020-10-16 08:42:06.000000000 +0000
+++ python-barbicanclient-5.0.1/debian/rules 2021-10-03 18:29:57.000000000 +0000
@@ -12,7 +12,7 @@
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2
+ pkgos-dh_auto_install --no-py2 --wheel
override_dh_auto_test:
ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))


@@ -0,0 +1,12 @@
---
debname: python-barbicanclient
debver: 5.0.1-2
dl_path:
name: python-barbicanclient-debian-5.0.1-2.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-barbicanclient/-/archive/debian/5.0.1-2/python-barbicanclient-debian-5.0.1-2.tar.gz
md5sum: 80fe9db068b5ca8638f1ed63dbff7327
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-barbicanclient


@@ -0,0 +1 @@
This repo is for the stx-ceilometer image, built on top of https://opendev.org/openstack/ceilometer/


@@ -0,0 +1,23 @@
BUILDER=loci
LABEL=stx-ceilometer
PROJECT=ceilometer
PROJECT_REPO=https://opendev.org/openstack/ceilometer.git
PROJECT_REF=bcada72c3aaeeb2a86de3368b1787a9253c9d55b
PIP_PACKAGES="\
gnocchiclient \
libvirt-python \
panko==5.0.0
"
DIST_REPOS="OS"
DIST_PACKAGES="\
ipmitool \
libvirt0 \
libvirt-clients \
libvirt-daemon \
libvirt-daemon-driver-lxc \
libvirt-daemon-driver-qemu \
libvirt-daemon-driver-storage-gluster \
libvirt-login-shell
"
UPDATE_SYSTEM_ACCOUNT="yes"
NON_UNIQUE_SYSTEM_ACCOUNT="yes"


@@ -0,0 +1 @@
This repo is for the stx-cinder image, built on top of https://opendev.org/openstack/cinder/


@@ -0,0 +1,12 @@
BUILDER=loci
LABEL=stx-cinder
PROJECT=cinder
DIST_REPOS="OS"
PROJECT_REPO=https://opendev.org/openstack/cinder.git
PROJECT_REF=79b012fbc8b6bc9dcce2c8c52a6fa63976a0309f
PROJECT_UID=42425
PROJECT_GID=42425
NON_UNIQUE_SYSTEM_ACCOUNT="yes"
DIST_PACKAGES="nfs-common"
PIP_PACKAGES="pycryptodomex python-swiftclient pylint"
PROFILES="fluent cinder lvm ceph qemu"
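Loci build configs like the one above are simple KEY=VALUE files whose entries become `--build-arg` flags on the image build. A sketch of that translation — the real StarlingX image-build wrapper does this itself, and this version ignores quoting of multi-word values such as `PIP_PACKAGES`, so it is illustrative only:

```shell
# Turn a loci build config (simple KEY=VALUE lines) into a list of
# docker --build-arg flags, one per line. Blank lines and comments
# are skipped; values containing spaces would need extra quoting in
# real use.
loci_build_args() {
    local line
    while IFS= read -r line; do
        case "$line" in
            ""|"#"*) continue ;;
        esac
        printf -- '--build-arg %s\n' "$line"
    done < "$1"
}
```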


@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/python-cinderclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
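
The workflow this README describes — snapshot the upstream repo at a pinned git SHA, then layer the local StarlingX patches on top in series order — can be sketched abstractly as follows. This is a toy model, not the actual build tooling: the dict stands in for the extracted source tree, and "applying" a patch is reduced to appending a line.

```python
# Toy model of "extract at a pinned SHA, apply local patches on top".
# BASE_SRCREV matches the pinned revision used in the meta_data.yaml files;
# everything else here is an illustrative stand-in.
BASE_SRCREV = "5566a41fc0f0be21e2764a9cc0c37823dcae72c8"

# Stand-in for the upstream source tree extracted at BASE_SRCREV.
tree = {"file.py": ["upstream content"]}

# Stand-in for debian/patches/series: ordered local patches, each modeled
# as (patch filename, the change it introduces).
series = [
    ("0001-Add-location-parameter-for-volume-backup-creation.patch",
     "starlingx change"),
]

for _name, change in series:
    # "Apply" the patch on top of the pinned snapshot.
    tree["file.py"].append(change)
```

Once a patch merges upstream, it is dropped from the series and the pinned SHA is advanced — the loop simply has one fewer entry to apply.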

@@ -0,0 +1,29 @@
From 5c420535f8b04efda7a9fac27eeaafde961db6aa Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Wed, 27 Oct 2021 17:28:06 +0000
Subject: [PATCH] Add package wheel
Add python3-cinderclient-wheel.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/rules | 3 +++
1 file changed, 3 insertions(+)
diff --git a/debian/rules b/debian/rules
index 8acee49..0d8778c 100755
--- a/debian/rules
+++ b/debian/rules
@@ -15,6 +15,9 @@ override_dh_auto_install:
for i in $(PYTHON3S) ; do \
python3 setup.py install -f --install-layout=deb --root=$(CURDIR)/debian/tmp ; \
done
+ for i in $(PYTHON3S) ; do \
+ python3 setup.py bdist_wheel --universal -d $(CURDIR)/debian/python3-cinderclient-wheel/usr/share/python3-wheel ; \
+ done
ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages pkgos-dh_auto_test --no-py2
endif
--
2.30.2

@@ -0,0 +1 @@
0001-Add-package-wheel.patch

@@ -0,0 +1,12 @@
---
debname: python-cinderclient
debver: 1:7.2.0-3
dl_path:
name: python-cinderclient-debian-7.2.0-3.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-cinderclient/-/archive/debian/7.2.0-3/python-cinderclient-debian-7.2.0-3.tar.gz
md5sum: b2fae10096bc2cf30935afe409ed9b4c
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-cinderclient

@@ -0,0 +1,270 @@
From b9ea3db2bde72c11b5da6222c57d7ccb80143724 Mon Sep 17 00:00:00 2001
From: Luan Nunes Utimura <LuanNunes.Utimura@windriver.com>
Date: Mon, 6 Mar 2023 09:25:12 -0300
Subject: [PATCH] Add location parameter for volume backup creation
This change adds the `location` parameter in python-cinderclient's
`volume backup create` command to allow the optional specification of
volume backup locations.
This change also updates the unit tests accordingly.
Signed-off-by: Luan Nunes Utimura <LuanNunes.Utimura@windriver.com>
---
cinderclient/tests/unit/v2/test_shell.py | 5 ++++
.../tests/unit/v2/test_volume_backups.py | 6 ++++
cinderclient/tests/unit/v3/test_shell.py | 20 ++++++++++++-
cinderclient/v2/shell.py | 7 ++++-
cinderclient/v2/volume_backups.py | 5 ++--
cinderclient/v3/shell.py | 5 ++++
cinderclient/v3/volume_backups.py | 30 +++++++++++--------
7 files changed, 62 insertions(+), 16 deletions(-)
diff --git a/cinderclient/tests/unit/v2/test_shell.py b/cinderclient/tests/unit/v2/test_shell.py
index f6f6355..95a3af9 100644
--- a/cinderclient/tests/unit/v2/test_shell.py
+++ b/cinderclient/tests/unit/v2/test_shell.py
@@ -379,6 +379,11 @@ class ShellTest(utils.TestCase):
self.run_command('backup-create 1234 --snapshot-id 4321')
self.assert_called('POST', '/backups')
+ def test_backup_location(self):
+ self.run_command('backup-create 1234 '
+ '--location nfs://10.10.10.10:/exports/backups')
+ self.assert_called('POST', '/backups')
+
def test_multiple_backup_delete(self):
self.run_command('backup-delete 1234 5678')
self.assert_called_anytime('DELETE', '/backups/1234')
diff --git a/cinderclient/tests/unit/v2/test_volume_backups.py b/cinderclient/tests/unit/v2/test_volume_backups.py
index 700c440..09f1c0e 100644
--- a/cinderclient/tests/unit/v2/test_volume_backups.py
+++ b/cinderclient/tests/unit/v2/test_volume_backups.py
@@ -52,6 +52,12 @@ class VolumeBackupsTest(utils.TestCase):
'3c706gbg-c074-51d9-9575-385119gcdfg5')
cs.assert_called('POST', '/backups')
+ def test_create_location(self):
+ cs.backups.create('2b695faf-b963-40c8-8464-274008fbcef4',
+ None, None, None, False, False, None,
+ 'nfs://10.10.10.10:/exports/backups')
+ cs.assert_called('POST', '/backups')
+
def test_get(self):
backup_id = '76a17945-3c6f-435c-975b-b5685db10b62'
back = cs.backups.get(backup_id)
diff --git a/cinderclient/tests/unit/v3/test_shell.py b/cinderclient/tests/unit/v3/test_shell.py
index 0332ae3..6464a73 100644
--- a/cinderclient/tests/unit/v3/test_shell.py
+++ b/cinderclient/tests/unit/v3/test_shell.py
@@ -1254,7 +1254,23 @@ class ShellTest(utils.TestCase):
'incremental': False,
'force': False,
'snapshot_id': None,
- }}
+ 'location': None, }}
+ self.assert_called('POST', '/backups', body=expected)
+
+ def test_backup_with_location(self):
+ self.run_command('--os-volume-api-version 3.42 backup-create '
+ '--name 1234 '
+ '--location nfs://10.10.10.10:/exports/backups 1234')
+ expected = {
+ 'backup': {
+ 'volume_id': 1234,
+ 'container': None,
+ 'name': '1234',
+ 'description': None,
+ 'incremental': False,
+ 'force': False,
+ 'snapshot_id': None,
+ 'location': 'nfs://10.10.10.10:/exports/backups', }}
self.assert_called('POST', '/backups', body=expected)
def test_backup_with_metadata(self):
@@ -1267,6 +1283,7 @@ class ShellTest(utils.TestCase):
'incremental': False,
'force': False,
'snapshot_id': None,
+ 'location': None,
'metadata': {'foo': 'bar'}, }}
self.assert_called('POST', '/backups', body=expected)
@@ -1280,6 +1297,7 @@ class ShellTest(utils.TestCase):
'incremental': False,
'force': False,
'snapshot_id': None,
+ 'location': None,
'availability_zone': 'AZ2'}}
self.assert_called('POST', '/backups', body=expected)
diff --git a/cinderclient/v2/shell.py b/cinderclient/v2/shell.py
index d41e014..a975f02 100644
--- a/cinderclient/v2/shell.py
+++ b/cinderclient/v2/shell.py
@@ -1162,6 +1162,10 @@ def do_retype(cs, args):
metavar='<snapshot-id>',
default=None,
help='ID of snapshot to backup. Default=None.')
+@utils.arg('--location',
+ metavar='<location>',
+ default=None,
+ help='Backup location. Default=None')
def do_backup_create(cs, args):
"""Creates a volume backup."""
if args.display_name is not None:
@@ -1177,7 +1181,8 @@ def do_backup_create(cs, args):
args.description,
args.incremental,
args.force,
- args.snapshot_id)
+ args.snapshot_id,
+ args.location)
info = {"volume_id": volume.id}
info.update(backup._info)
diff --git a/cinderclient/v2/volume_backups.py b/cinderclient/v2/volume_backups.py
index bcf3e01..0a4f1c1 100644
--- a/cinderclient/v2/volume_backups.py
+++ b/cinderclient/v2/volume_backups.py
@@ -46,7 +46,7 @@ class VolumeBackupManager(base.ManagerWithFind):
def create(self, volume_id, container=None,
name=None, description=None,
incremental=False, force=False,
- snapshot_id=None):
+ snapshot_id=None, location=None):
"""Creates a volume backup.
:param volume_id: The ID of the volume to backup.
@@ -66,7 +66,8 @@ class VolumeBackupManager(base.ManagerWithFind):
'description': description,
'incremental': incremental,
'force': force,
- 'snapshot_id': snapshot_id, }}
+ 'snapshot_id': snapshot_id,
+ 'location': location, }}
return self._create('/backups', body, 'backup')
def get(self, backup_id):
diff --git a/cinderclient/v3/shell.py b/cinderclient/v3/shell.py
index eaded7e..cfafe87 100644
--- a/cinderclient/v3/shell.py
+++ b/cinderclient/v3/shell.py
@@ -2466,6 +2466,10 @@ def do_service_get_log(cs, args):
metavar='<snapshot-id>',
default=None,
help='ID of snapshot to backup. Default=None.')
+@utils.arg('--location',
+ metavar='<location>',
+ default=None,
+ help='Backup location. Default=None')
@utils.arg('--metadata',
nargs='*',
metavar='<key=value>',
@@ -2500,6 +2504,7 @@ def do_backup_create(cs, args):
args.incremental,
args.force,
args.snapshot_id,
+ location=args.location,
**kwargs)
info = {"volume_id": volume.id}
info.update(backup._info)
diff --git a/cinderclient/v3/volume_backups.py b/cinderclient/v3/volume_backups.py
index 7dd8560..66525af 100644
--- a/cinderclient/v3/volume_backups.py
+++ b/cinderclient/v3/volume_backups.py
@@ -43,7 +43,7 @@ class VolumeBackupManager(volume_backups.VolumeBackupManager):
def create(self, volume_id, container=None,
name=None, description=None,
incremental=False, force=False,
- snapshot_id=None):
+ snapshot_id=None, location=None):
"""Creates a volume backup.
:param volume_id: The ID of the volume to backup.
@@ -55,17 +55,19 @@ class VolumeBackupManager(volume_backups.VolumeBackupManager):
:param snapshot_id: The ID of the snapshot to backup. This should
be a snapshot of the src volume, when specified,
the new backup will be based on the snapshot.
+ :param location: The backup location.
:rtype: :class:`VolumeBackup`
"""
return self._create_backup(volume_id, container, name, description,
- incremental, force, snapshot_id)
+ incremental, force, snapshot_id,
+ location=location)
@api_versions.wraps("3.43") # noqa: F811
def create(self, volume_id, container=None, # noqa
name=None, description=None,
incremental=False, force=False,
- snapshot_id=None,
- metadata=None):
+ snapshot_id=None, metadata=None,
+ location=None):
"""Creates a volume backup.
:param volume_id: The ID of the volume to backup.
@@ -74,28 +76,30 @@ class VolumeBackupManager(volume_backups.VolumeBackupManager):
:param description: The description of the backup.
:param incremental: Incremental backup.
:param force: If True, allows an in-use volume to be backed up.
- :param metadata: Key Value pairs
:param snapshot_id: The ID of the snapshot to backup. This should
be a snapshot of the src volume, when specified,
the new backup will be based on the snapshot.
+ :param metadata: Key Value pairs
+ :param location: The backup location.
:rtype: :class:`VolumeBackup`
"""
# pylint: disable=function-redefined
return self._create_backup(volume_id, container, name, description,
- incremental, force, snapshot_id, metadata)
+ incremental, force, snapshot_id, metadata,
+ location=location)
@api_versions.wraps("3.51") # noqa: F811
def create(self, volume_id, container=None, name=None, description=None, # noqa
incremental=False, force=False, snapshot_id=None, metadata=None,
- availability_zone=None):
+ availability_zone=None, location=None):
return self._create_backup(volume_id, container, name, description,
incremental, force, snapshot_id, metadata,
- availability_zone)
+ availability_zone, location=location)
def _create_backup(self, volume_id, container=None, name=None,
description=None, incremental=False, force=False,
- snapshot_id=None, metadata=None,
- availability_zone=None):
+ snapshot_id=None, metadata=None, availability_zone=None,
+ location=None):
"""Creates a volume backup.
:param volume_id: The ID of the volume to backup.
@@ -104,10 +108,11 @@ class VolumeBackupManager(volume_backups.VolumeBackupManager):
:param description: The description of the backup.
:param incremental: Incremental backup.
:param force: If True, allows an in-use volume to be backed up.
- :param metadata: Key Value pairs
:param snapshot_id: The ID of the snapshot to backup. This should
be a snapshot of the src volume, when specified,
the new backup will be based on the snapshot.
+ :param location: The backup location.
+ :param metadata: Key Value pairs
:param availability_zone: The AZ where we want the backup stored.
:rtype: :class:`VolumeBackup`
"""
@@ -118,7 +123,8 @@ class VolumeBackupManager(volume_backups.VolumeBackupManager):
'description': description,
'incremental': incremental,
'force': force,
- 'snapshot_id': snapshot_id, }}
+ 'snapshot_id': snapshot_id,
+ 'location': location, }}
if metadata:
body['backup']['metadata'] = metadata
if availability_zone:
--
2.25.1
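
For illustration, the request body that the patched client ends up posting to `/backups` (per `_create_backup` in the diff above) can be reconstructed standalone. This is a sketch mirroring the diff, not the actual `cinderclient.v3.volume_backups` module:

```python
# Sketch of the backup-create request body built by the patched
# _create_backup(), including the new 'location' key from this change.
def build_backup_body(volume_id, container=None, name=None, description=None,
                      incremental=False, force=False, snapshot_id=None,
                      metadata=None, availability_zone=None, location=None):
    body = {'backup': {'volume_id': volume_id,
                       'container': container,
                       'name': name,
                       'description': description,
                       'incremental': incremental,
                       'force': force,
                       'snapshot_id': snapshot_id,
                       'location': location}}
    # Optional keys are only added when set, as in the patched code.
    if metadata:
        body['backup']['metadata'] = metadata
    if availability_zone:
        body['backup']['availability_zone'] = availability_zone
    return body

# Matches the new test_backup_with_location case in the diff.
body = build_backup_body(1234, name='1234',
                         location='nfs://10.10.10.10:/exports/backups')
```

Note that `location` is always present in the body (defaulting to `None`), which is why the existing metadata and availability-zone tests in the diff gain a `'location': None` entry.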

@@ -0,0 +1 @@
0001-Add-location-parameter-for-volume-backup-creation.patch

@@ -0,0 +1 @@
This repo is for the stx-glance image, built on top of https://opendev.org/openstack/glance/

@@ -0,0 +1,11 @@
BUILDER=loci
LABEL=stx-glance
PROJECT=glance
PROJECT_REPO=https://opendev.org/openstack/glance.git
PROJECT_REF=6f03ccd47772e02f810de8fa3158afddc4a9c158
DIST_REPOS="OS"
UPDATE_SYSTEM_ACCOUNT="yes"
NON_UNIQUE_SYSTEM_ACCOUNT="yes"
PIP_PACKAGES="pycryptodomex python-swiftclient psutil pylint "
DIST_PACKAGES="libpq5"
PROFILES="fluent glance ceph"

@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/python-glanceclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -0,0 +1,54 @@
From 4c5368b6f9811e195dc12d1a3cecccb176cf720e Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Tue, 26 Oct 2021 23:35:45 +0000
Subject: [PATCH] Add support for wheel
Add support for python3 wheels.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 17 +++++++++++++++++
debian/rules | 2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index 742257e..f2c1648 100644
--- a/debian/control
+++ b/debian/control
@@ -84,3 +84,20 @@ Description: Client library for Openstack glance server - Python 3.x
Python API (the "glanceclient" module), and a command-line script ("glance").
.
This package provides the Python 3.x module.
+
+Package: python3-glanceclient-wheel
+Architecture: all
+Depends:
+ python3-wheel,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: Client library for Openstack Glance server - Python 3.x
+ The Glance project provides services for discovering, registering, and
+ retrieving virtual machine images over the cloud. They may be stand-alone
+ services, or may be used to deliver images from object stores, such as
+ OpenStack's Swift service, to Nova's compute nodes.
+ .
+ This is a client for the Glance which uses the OpenStack Image API. There's a
+ Python API (the "glanceclient" module), and a command-line script ("glance").
+ .
+ This package contains the Python wheel.
diff --git a/debian/rules b/debian/rules
index d5d2b14..459f08c 100755
--- a/debian/rules
+++ b/debian/rules
@@ -12,7 +12,7 @@ override_dh_auto_build:
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2
+ pkgos-dh_auto_install --no-py2 --wheel
override_dh_python3:
dh_python3 --shebang=/usr/bin/python3
--
2.30.2

@@ -0,0 +1 @@
0001-Add-support-for-wheel.patch

@@ -0,0 +1,12 @@
---
debname: python-glanceclient
debver: 1:3.2.2-2
dl_path:
name: python-glanceclient-debian-3.2.2-2.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-glanceclient/-/archive/debian/3.2.2-2/python-glanceclient-debian-3.2.2-2.tar.gz
md5sum: bc184e7b7d10732f1562fb7cab668711
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-glanceclient

@@ -0,0 +1 @@
This repo is for the stx-gnocchi image, built on top of https://opendev.org/openstack/gnocchi

@@ -0,0 +1,15 @@
BUILDER=loci
LABEL=stx-gnocchi
PROJECT=gnocchi
PROJECT_REPO=https://github.com/gnocchixyz/gnocchi.git
PROJECT_REF=4.3.2
PROJECT_UID=42425
PROJECT_GID=42425
PIP_PACKAGES="pylint SQLAlchemy SQLAlchemy-Utils oslo.db keystonemiddleware gnocchiclient pymemcache psycopg2"
DIST_REPOS="OS"
DIST_PACKAGES="python3-rados"
PROFILES="gnocchi apache"
CUSTOMIZATION="\
ln -s /etc/apache2/mods-available/wsgi.load /etc/apache2/mods-enabled/wsgi.load && \
ln -s /etc/apache2/mods-available/wsgi.conf /etc/apache2/mods-enabled/wsgi.conf
"

@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/python-gnocchiclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -0,0 +1,55 @@
From 1cdba6b7884878b91b34321d8e6cb48aadb18165 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Tue, 26 Oct 2021 23:51:34 +0000
Subject: [PATCH] Add python3 wheel
Add python3-gnocchiclient-wheel
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 18 ++++++++++++++++++
debian/rules | 2 +-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index c80f5f7..e4341b6 100644
--- a/debian/control
+++ b/debian/control
@@ -81,3 +81,21 @@ Description: bindings to the OpenStack Gnocchi API - Python 3.x
HTTP REST API.
.
This package contains the Python 3.x module.
+
+Package: python3-gnocchiclient-wheel
+Architecture: all
+Depends:
+ python3-wheel,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: bindings to the OpenStack Gnocchi API - Python 3.x
+ This is a client for OpenStack gnocchi API. There's a Python API (the
+ gnocchiclient module), and a command-line script. Each implements the entire
+ OpenStack Gnocchi API.
+ .
+ Gnocchi is a service for managing a set of resources and storing metrics about
+ them, in a scalable and resilient way. Its functionalities are exposed over an
+ HTTP REST API.
+ .
+ This package contains the Python wheel.
+
diff --git a/debian/rules b/debian/rules
index df1b32a..0cee15d 100755
--- a/debian/rules
+++ b/debian/rules
@@ -13,7 +13,7 @@ override_dh_auto_build:
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2
+ pkgos-dh_auto_install --no-py2 --wheel
# Generate bash completion
mkdir -p $(CURDIR)/debian/python3-gnocchiclient/usr/share/bash-completion/completions
--
2.30.2

@@ -0,0 +1,29 @@
From 8f239c761ac065f0faa6a8d4d66704f583767fb1 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Mon, 29 Nov 2021 20:57:22 +0000
Subject: [PATCH] Remove openstackclient
Remove build-Depends-Indep for python-openstackclient as it is
not being used and it is causing problems with the build-pkgs
tool
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 1 -
1 file changed, 1 deletion(-)
diff --git a/debian/control b/debian/control
index c80f5f7..87e4cb8 100644
--- a/debian/control
+++ b/debian/control
@@ -23,7 +23,6 @@ Build-Depends-Indep:
python3-keystoneauth1,
python3-keystonemiddleware <!nocheck>,
python3-monotonic,
- python3-openstackclient,
python3-osc-lib,
python3-pytest <!nocheck>,
python3-pytest-xdist <!nocheck>,
--
2.30.2

@@ -0,0 +1,2 @@
0001-Add-python3-wheel.patch
remove-openstackcleint.patch

@@ -0,0 +1,12 @@
---
debname: python-gnocchiclient
debver: 7.0.6-1
dl_path:
name: python-gnocchiclient-debian-7.0.6-1.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-gnocchiclient/-/archive/debian/7.0.6-1/python-gnocchiclient-debian-7.0.6-1.tar.gz
md5sum: 3ee6a1ee65fb1a4dbd86038257b33c04
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 5566a41fc0f0be21e2764a9cc0c37823dcae72c8
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-gnocchiclient

@@ -0,0 +1 @@
This repo is for the stx-heat image, built on top of https://opendev.org/openstack/heat

@@ -0,0 +1,10 @@
BUILDER=loci
LABEL=stx-heat
PROJECT=heat
PROJECT_REPO=https://opendev.org/openstack/heat.git
PROJECT_REF=5466ede853bde7d636943cba017ed8265dcfd260
DIST_REPOS="OS"
NON_UNIQUE_SYSTEM_ACCOUNT="yes"
PIP_PACKAGES="pycryptodomex pylint"
DIST_PACKAGES="curl libxslt1.1"
PROFILES="fluent heat apache"

@@ -0,0 +1,8 @@
This repo is for https://opendev.org/openstack/python-heatclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -0,0 +1,54 @@
From 9ea6b1a4d02e631efccdde8ed240dc79849159af Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Wed, 27 Oct 2021 12:29:52 +0000
Subject: [PATCH] Add wheel support
Add python3-heatclient-wheel package.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 17 +++++++++++++++++
debian/rules | 2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index f287c45..0598164 100644
--- a/debian/control
+++ b/debian/control
@@ -75,3 +75,20 @@ Description: client library and CLI for OpenStack Heat - Python 3.x
the OpenStack Heat API.
.
This package provides the Python 3.x module.
+
+Package: python3-heatclient-wheel
+Architecture: all
+Depends:
+ python3-wheel,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: client library and CLI for OpenStack Heat - Python 3.x
+ Heat is a service to orchestrate multiple composite cloud applications
+ using templates, through both an OpenStack-native ReST API and
+ a CloudFormation-compatible Query API.
+ .
+ This is a client for the OpenStack Heat API. There's a Python API (the
+ heatclient module), and a command-line script (heat). Each implements 100% of
+ the OpenStack Heat API.
+ .
+ This package contains the Python wheel.
diff --git a/debian/rules b/debian/rules
index 70f505c..110310e 100755
--- a/debian/rules
+++ b/debian/rules
@@ -14,7 +14,7 @@ override_dh_auto_build:
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2 --in-tmp
+ pkgos-dh_auto_install --no-py2 --in-tmp --wheel
ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS)))
PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages pkgos-dh_auto_test --no-py2
--
2.30.2

Some files were not shown because too many files have changed in this diff.