Enable helm/armada plugin delivery with the application

This creates a new package spec called python-k8sapp-openstack that will
hold all the stevedore plugins needed to support the application. This
spec will build two packages: python-k8sapp-openstack and
python-k8sapp-openstack-wheels.

These packages are included in the build dependencies of the
stx-openstack-helm application package; its build includes the wheels
file in the application tarball.

The helm and armada plugins have been relocated to this repo and are
provided in a k8sapp_openstack python module. This module is extracted
from the wheels and installed on the platform via the sysinv
application framework, and is made available when the application is
enabled.

Change-Id: I342308fbff23d29bfdf64a07dbded4bae01b79fd
Depends-On: https://review.opendev.org/#/c/688191/
Story: 2006537
Task: 36978
Signed-off-by: Robert Church <robert.church@windriver.com>
Robert Church 2019-09-22 17:34:28 -04:00
parent 8d3452a5e8
commit 949dd5aa77
57 changed files with 5359 additions and 5 deletions

@@ -1,3 +1,4 @@
openstack-helm
openstack-helm-infra
stx-openstack-helm
python-k8sapp-openstack

@@ -0,0 +1,9 @@
SRC_DIR="k8sapp_openstack"
# Bump the version by the previous version value prior to decoupling as this
# will align the GITREVCOUNT value to increment the version by one. Remove this
# (i.e. reset to 0) on the next major version change, when TIS_BASE_SRCREV
# changes. This version should align with the version of the helm charts in
# stx-openstack-helm
TIS_BASE_SRCREV=8d3452a5e864339101590e542c24c375bb3808fb
TIS_PATCH_VER=GITREVCOUNT+20
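As the comment above notes, the resulting patch version is the commit count since TIS_BASE_SRCREV plus a fixed offset. A minimal sketch of that arithmetic (the function name and inputs are illustrative, not part of the build system):

```python
# Illustrative sketch of how TIS_PATCH_VER=GITREVCOUNT+20 is evaluated.
# GITREVCOUNT is the number of commits since TIS_BASE_SRCREV; the +20
# offset preserves continuity with the pre-decoupling version numbering.
def tis_patch_ver(commits_since_base_srcrev, offset=20):
    return commits_since_base_srcrev + offset


print(tis_patch_ver(0))  # first build after decoupling: 20
print(tis_patch_ver(3))  # three commits later: 23
```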

@@ -0,0 +1,65 @@
%global pypi_name k8sapp-openstack
%global sname k8sapp_openstack
Name: python-%{pypi_name}
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
Summary: StarlingX sysinv extensions: Openstack K8S app
License: Apache-2.0
Source0: %{name}-%{version}.tar.gz
BuildArch: noarch
BuildRequires: python-setuptools
BuildRequires: python-pbr
BuildRequires: python2-pip
BuildRequires: python2-wheel
%description
StarlingX sysinv extensions: Openstack K8S app
%package -n python2-%{pypi_name}
Summary: StarlingX sysinv extensions: Openstack K8S app
Requires: python-pbr >= 2.0.0
Requires: sysinv >= 1.0
%description -n python2-%{pypi_name}
StarlingX sysinv extensions: Openstack K8S app
%prep
%setup
# Remove bundled egg-info
rm -rf %{pypi_name}.egg-info
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
%py2_build_wheel
%install
export PBR_VERSION=%{version}.%{tis_patch_ver}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
mkdir -p ${RPM_BUILD_ROOT}/plugins
install -m 644 dist/*.whl ${RPM_BUILD_ROOT}/plugins/
%files
%{python2_sitelib}/%{sname}
%{python2_sitelib}/%{sname}-*.egg-info
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/plugins/*
%changelog
* Fri Sep 20 2019 Robert Church <robert.church@windriver.com>
- Initial version

@@ -0,0 +1,35 @@
# Compiled files
*.py[co]
*.a
*.o
*.so
# Sphinx
_build
doc/source/api/
# Packages/installer info
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
# Other
*.DS_Store
.stestr
.testrepository
.tox
.venv
.*.swp
.coverage
bandit.xml
cover
AUTHORS
ChangeLog
*.sqlite

@@ -0,0 +1,4 @@
[DEFAULT]
test_path=./k8sapp_openstack/tests
top_dir=./k8sapp_openstack
#parallel_class=True

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2019 Wind River Systems, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -0,0 +1,7 @@
k8sapp-openstack
================
This project contains StarlingX Kubernetes application-specific Python
plugins for the OpenStack application. These plugins are required to
integrate the OpenStack application into the StarlingX application
framework and to support the various StarlingX deployments.
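Plugins like these are typically advertised to the StarlingX application framework through stevedore entry points declared in the package's setup.cfg. A hypothetical fragment (the namespace and plugin names here are illustrative only; the actual namespaces are defined by sysinv, not by this README):

```ini
# Hypothetical setup.cfg fragment -- entry-point namespaces are assumptions.
[entry_points]
systemconfig.helm_plugins.stx_openstack =
    aodh = k8sapp_openstack.helm.aodh:AodhHelm
    barbican = k8sapp_openstack.helm.barbican:BarbicanHelm

systemconfig.armada.manifest_ops =
    stx-openstack = k8sapp_openstack.armada.manifest_openstack:OpenstackArmadaManifestOperator
```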

@@ -0,0 +1,5 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

@@ -0,0 +1,19 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import yaml
class quoted_str(str):
    pass
# force strings to be single-quoted to avoid interpretation as numeric values
def quoted_presenter(dumper, data):
return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")
yaml.add_representer(quoted_str, quoted_presenter)
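The effect of registering this representer can be seen by dumping a value wrapped in quoted_str: the wrapped string is always emitted with single quotes, so downstream YAML consumers cannot reinterpret it as a number. A small usage sketch (assumes PyYAML is installed):

```python
import yaml


class quoted_str(str):
    pass


# force strings to be single-quoted to avoid interpretation as numeric values
def quoted_presenter(dumper, data):
    return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")


yaml.add_representer(quoted_str, quoted_presenter)

# The wrapped value is emitted single-quoted in the YAML document.
doc = yaml.dump({'port': quoted_str('8080')}, default_flow_style=False)
print(doc)  # port: '8080'
```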

@@ -0,0 +1,187 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# All Rights Reserved.
#
""" System inventory Armada manifest operator."""
from oslo_log import log as logging
from k8sapp_openstack.helm.aodh import AodhHelm
from k8sapp_openstack.helm.barbican import BarbicanHelm
from k8sapp_openstack.helm.ceilometer import CeilometerHelm
from k8sapp_openstack.helm.cinder import CinderHelm
from k8sapp_openstack.helm.dcdbsync import DcdbsyncHelm
from k8sapp_openstack.helm.fm_rest_api import FmRestApiHelm
from k8sapp_openstack.helm.garbd import GarbdHelm
from k8sapp_openstack.helm.glance import GlanceHelm
from k8sapp_openstack.helm.gnocchi import GnocchiHelm
from k8sapp_openstack.helm.heat import HeatHelm
from k8sapp_openstack.helm.horizon import HorizonHelm
from k8sapp_openstack.helm.ingress import IngressHelm
from k8sapp_openstack.helm.ironic import IronicHelm
from k8sapp_openstack.helm.keystone import KeystoneHelm
from k8sapp_openstack.helm.keystone_api_proxy import KeystoneApiProxyHelm
from k8sapp_openstack.helm.libvirt import LibvirtHelm
from k8sapp_openstack.helm.magnum import MagnumHelm
from k8sapp_openstack.helm.mariadb import MariadbHelm
from k8sapp_openstack.helm.memcached import MemcachedHelm
from k8sapp_openstack.helm.neutron import NeutronHelm
from k8sapp_openstack.helm.nginx_ports_control import NginxPortsControlHelm
from k8sapp_openstack.helm.nova import NovaHelm
from k8sapp_openstack.helm.nova_api_proxy import NovaApiProxyHelm
from k8sapp_openstack.helm.openvswitch import OpenvswitchHelm
from k8sapp_openstack.helm.panko import PankoHelm
from k8sapp_openstack.helm.placement import PlacementHelm
from k8sapp_openstack.helm.rabbitmq import RabbitmqHelm
from k8sapp_openstack.helm.swift import SwiftHelm
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import manifest_base as base
LOG = logging.getLogger(__name__)
class OpenstackArmadaManifestOperator(base.ArmadaManifestOperator):

    APP = constants.HELM_APP_OPENSTACK
    ARMADA_MANIFEST = 'armada-manifest'

    CHART_INGRESS_KS = CHART_GROUP_INGRESS_KS = 'kube-system-ingress'
    CHART_GROUP_INGRESS_OS = 'openstack-ingress'
    CHART_GROUP_MAGNUM = 'openstack-magnum'
    CHART_GROUP_MARIADB = 'openstack-mariadb'
    CHART_GROUP_MEMCACHED = 'openstack-memcached'
    CHART_GROUP_RABBITMQ = 'openstack-rabbitmq'
    CHART_GROUP_KEYSTONE = 'openstack-keystone'
    CHART_GROUP_KS_API_PROXY = 'openstack-keystone-api-proxy'
    CHART_GROUP_BARBICAN = 'openstack-barbican'
    CHART_GROUP_GLANCE = 'openstack-glance'
    CHART_GROUP_SWIFT = 'openstack-ceph-rgw'
    CHART_GROUP_CINDER = 'openstack-cinder'
    CHART_GROUP_FM_REST_API = 'openstack-fm-rest-api'
    CHART_GROUP_COMPUTE_KIT = 'openstack-compute-kit'
    CHART_GROUP_HEAT = 'openstack-heat'
    CHART_GROUP_HORIZON = 'openstack-horizon'
    CHART_GROUP_TELEMETRY = 'openstack-telemetry'
    CHART_GROUP_DCDBSYNC = 'openstack-dcdbsync'

    CHART_GROUPS_LUT = {
        AodhHelm.CHART: CHART_GROUP_TELEMETRY,
        BarbicanHelm.CHART: CHART_GROUP_BARBICAN,
        CeilometerHelm.CHART: CHART_GROUP_TELEMETRY,
        CinderHelm.CHART: CHART_GROUP_CINDER,
        FmRestApiHelm.CHART: CHART_GROUP_FM_REST_API,
        GarbdHelm.CHART: CHART_GROUP_MARIADB,
        GlanceHelm.CHART: CHART_GROUP_GLANCE,
        GnocchiHelm.CHART: CHART_GROUP_TELEMETRY,
        HeatHelm.CHART: CHART_GROUP_HEAT,
        HorizonHelm.CHART: CHART_GROUP_HORIZON,
        IngressHelm.CHART: CHART_GROUP_INGRESS_OS,
        IronicHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        KeystoneHelm.CHART: CHART_GROUP_KEYSTONE,
        KeystoneApiProxyHelm.CHART: CHART_GROUP_KS_API_PROXY,
        LibvirtHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        MagnumHelm.CHART: CHART_GROUP_MAGNUM,
        MariadbHelm.CHART: CHART_GROUP_MARIADB,
        MemcachedHelm.CHART: CHART_GROUP_MEMCACHED,
        NeutronHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        NginxPortsControlHelm.CHART: CHART_GROUP_INGRESS_OS,
        NovaHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        NovaApiProxyHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        OpenvswitchHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        PankoHelm.CHART: CHART_GROUP_TELEMETRY,
        PlacementHelm.CHART: CHART_GROUP_COMPUTE_KIT,
        RabbitmqHelm.CHART: CHART_GROUP_RABBITMQ,
        SwiftHelm.CHART: CHART_GROUP_SWIFT,
        DcdbsyncHelm.CHART: CHART_GROUP_DCDBSYNC,
    }

    CHARTS_LUT = {
        AodhHelm.CHART: 'openstack-aodh',
        BarbicanHelm.CHART: 'openstack-barbican',
        CeilometerHelm.CHART: 'openstack-ceilometer',
        CinderHelm.CHART: 'openstack-cinder',
        GarbdHelm.CHART: 'openstack-garbd',
        FmRestApiHelm.CHART: 'openstack-fm-rest-api',
        GlanceHelm.CHART: 'openstack-glance',
        GnocchiHelm.CHART: 'openstack-gnocchi',
        HeatHelm.CHART: 'openstack-heat',
        HorizonHelm.CHART: 'openstack-horizon',
        IngressHelm.CHART: 'openstack-ingress',
        IronicHelm.CHART: 'openstack-ironic',
        KeystoneHelm.CHART: 'openstack-keystone',
        KeystoneApiProxyHelm.CHART: 'openstack-keystone-api-proxy',
        LibvirtHelm.CHART: 'openstack-libvirt',
        MagnumHelm.CHART: 'openstack-magnum',
        MariadbHelm.CHART: 'openstack-mariadb',
        MemcachedHelm.CHART: 'openstack-memcached',
        NeutronHelm.CHART: 'openstack-neutron',
        NginxPortsControlHelm.CHART: 'openstack-nginx-ports-control',
        NovaHelm.CHART: 'openstack-nova',
        NovaApiProxyHelm.CHART: 'openstack-nova-api-proxy',
        OpenvswitchHelm.CHART: 'openstack-openvswitch',
        PankoHelm.CHART: 'openstack-panko',
        PlacementHelm.CHART: 'openstack-placement',
        RabbitmqHelm.CHART: 'openstack-rabbitmq',
        SwiftHelm.CHART: 'openstack-ceph-rgw',
        DcdbsyncHelm.CHART: 'openstack-dcdbsync',
    }

    def platform_mode_manifest_updates(self, dbapi, mode):
        """Update the application manifest based on the platform.

        This is used to adjust which chart groups are applied based on the
        platform configuration and the mode of the current application apply.

        :param dbapi: DB api object
        :param mode: mode to control how to apply the application manifest
        """
        if mode == constants.OPENSTACK_RESTORE_DB:
            # During application restore, first bring up
            # the MariaDB service.
            self.manifest_chart_groups_set(
                self.ARMADA_MANIFEST,
                [self.CHART_GROUP_INGRESS_KS,
                 self.CHART_GROUP_INGRESS_OS,
                 self.CHART_GROUP_MARIADB])
        elif mode == constants.OPENSTACK_RESTORE_STORAGE:
            # After MariaDB data is restored, restore Keystone,
            # Glance and Cinder.
            self.manifest_chart_groups_set(
                self.ARMADA_MANIFEST,
                [self.CHART_GROUP_INGRESS_KS,
                 self.CHART_GROUP_INGRESS_OS,
                 self.CHART_GROUP_MARIADB,
                 self.CHART_GROUP_MEMCACHED,
                 self.CHART_GROUP_RABBITMQ,
                 self.CHART_GROUP_KEYSTONE,
                 self.CHART_GROUP_GLANCE,
                 self.CHART_GROUP_CINDER])
        else:
            # When mode is OPENSTACK_RESTORE_NORMAL or None,
            # bring up all the openstack services.
            try:
                system = dbapi.isystem_get_one()
            except exception.NotFound:
                LOG.exception("System not found.")
                raise

            if (system.distributed_cloud_role ==
                    constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER):
                # remove the chart_groups not needed in this configuration
                self.manifest_chart_groups_delete(
                    self.ARMADA_MANIFEST, self.CHART_GROUP_SWIFT)
                self.manifest_chart_groups_delete(
                    self.ARMADA_MANIFEST, self.CHART_GROUP_COMPUTE_KIT)
                self.manifest_chart_groups_delete(
                    self.ARMADA_MANIFEST, self.CHART_GROUP_HEAT)
                self.manifest_chart_groups_delete(
                    self.ARMADA_MANIFEST, self.CHART_GROUP_TELEMETRY)

@@ -0,0 +1,5 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

@@ -0,0 +1,37 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Helm: Supported charts:
# These values match the names in the chart package's Chart.yaml
HELM_CHART_AODH = 'aodh'
HELM_CHART_BARBICAN = 'barbican'
HELM_CHART_CEILOMETER = 'ceilometer'
HELM_CHART_CINDER = 'cinder'
HELM_CHART_FM_REST_API = 'fm-rest-api'
HELM_CHART_GARBD = 'garbd'
HELM_CHART_GLANCE = 'glance'
HELM_CHART_GNOCCHI = 'gnocchi'
HELM_CHART_HEAT = 'heat'
HELM_CHART_HELM_TOOLKIT = 'helm-toolkit'
HELM_CHART_HORIZON = 'horizon'
HELM_CHART_INGRESS = 'ingress'
HELM_CHART_IRONIC = 'ironic'
HELM_CHART_KEYSTONE = 'keystone'
HELM_CHART_KEYSTONE_API_PROXY = 'keystone-api-proxy'
HELM_CHART_LIBVIRT = 'libvirt'
HELM_CHART_MAGNUM = 'magnum'
HELM_CHART_MARIADB = 'mariadb'
HELM_CHART_MEMCACHED = 'memcached'
HELM_CHART_NEUTRON = 'neutron'
HELM_CHART_NGINX_PORTS_CONTROL = "nginx-ports-control"
HELM_CHART_NOVA = 'nova'
HELM_CHART_NOVA_API_PROXY = 'nova-api-proxy'
HELM_CHART_OPENVSWITCH = 'openvswitch'
HELM_CHART_PANKO = 'panko'
HELM_CHART_PLACEMENT = 'placement'
HELM_CHART_RABBITMQ = 'rabbitmq'
HELM_CHART_SWIFT = 'ceph-rgw'
HELM_CHART_DCDBSYNC = 'dcdbsync'

@@ -0,0 +1,19 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import yaml
class quoted_str(str):
    pass
# force strings to be single-quoted to avoid interpretation as numeric values
def quoted_presenter(dumper, data):
return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")
yaml.add_representer(quoted_str, quoted_presenter)

@@ -0,0 +1,86 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class AodhHelm(openstack.OpenstackBaseHelm):
    """Class to encapsulate helm operations for the aodh chart"""

    CHART = app_constants.HELM_CHART_AODH
    SERVICE_NAME = app_constants.HELM_CHART_AODH
    AUTH_USERS = ['aodh']

    def get_overrides(self, namespace=None):
        overrides = {
            common.HELM_NS_OPENSTACK: {
                'pod': self._get_pod_overrides(),
                'conf': self._get_conf_overrides(),
                'endpoints': self._get_endpoints_overrides()
            }
        }

        if namespace in self.SUPPORTED_NAMESPACES:
            return overrides[namespace]
        elif namespace:
            raise exception.InvalidHelmNamespace(chart=self.CHART,
                                                 namespace=namespace)
        else:
            return overrides

    def _get_pod_overrides(self):
        overrides = {
            'replicas': {
                'api': self._num_controllers(),
                'evaluator': self._num_controllers(),
                'listener': self._num_controllers(),
                'notifier': self._num_controllers()
            }
        }
        return overrides

    def _get_conf_overrides(self):
        return {
            'aodh': {
                'service_credentials': {
                    'region_name': self._region_name()
                }
            }
        }

    def _get_endpoints_overrides(self):
        return {
            'identity': {
                'auth': self._get_endpoints_identity_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS),
            },
            'alarming': {
                'host_fqdn_override':
                    self._get_endpoints_host_fqdn_overrides(
                        self.SERVICE_NAME),
                'port': self._get_endpoints_port_api_public_overrides(),
                'scheme': self._get_endpoints_scheme_public_overrides(),
            },
            'oslo_cache': {
                'auth': {
                    'memcache_secret_key':
                        self._get_common_password('auth_memcache_key')
                }
            },
            'oslo_db': {
                'auth': self._get_endpoints_oslo_db_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS)
            },
            'oslo_messaging': {
                'auth': self._get_endpoints_oslo_messaging_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS)
            },
        }

@@ -0,0 +1,68 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class BarbicanHelm(openstack.OpenstackBaseHelm):
    """Class to encapsulate helm operations for the barbican chart"""

    CHART = app_constants.HELM_CHART_BARBICAN
    AUTH_USERS = ['barbican']
    SERVICE_NAME = app_constants.HELM_CHART_BARBICAN

    def get_overrides(self, namespace=None):
        overrides = {
            common.HELM_NS_OPENSTACK: {
                'pod': {
                    'replicas': {
                        'api': self._num_controllers()
                    }
                },
                'endpoints': self._get_endpoints_overrides(),
            }
        }

        if namespace in self.SUPPORTED_NAMESPACES:
            return overrides[namespace]
        elif namespace:
            raise exception.InvalidHelmNamespace(chart=self.CHART,
                                                 namespace=namespace)
        else:
            return overrides

    def _get_endpoints_overrides(self):
        return {
            'identity': {
                'auth': self._get_endpoints_identity_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS),
            },
            'key_manager': {
                'host_fqdn_override':
                    self._get_endpoints_host_fqdn_overrides(
                        self.SERVICE_NAME),
                'port': self._get_endpoints_port_api_public_overrides(),
                'scheme': self._get_endpoints_scheme_public_overrides(),
            },
            'oslo_db': {
                'auth': self._get_endpoints_oslo_db_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS)
            },
            'oslo_cache': {
                'auth': {
                    'memcache_secret_key':
                        self._get_common_password('auth_memcache_key')
                }
            },
            'oslo_messaging': {
                'auth': self._get_endpoints_oslo_messaging_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS)
            },
        }

@@ -0,0 +1,101 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.common import utils
from sysinv.helm import common
class CeilometerHelm(openstack.OpenstackBaseHelm):
    """Class to encapsulate helm operations for the ceilometer chart"""

    CHART = app_constants.HELM_CHART_CEILOMETER
    SERVICE_NAME = app_constants.HELM_CHART_CEILOMETER
    AUTH_USERS = ['ceilometer']

    def get_overrides(self, namespace=None):
        overrides = {
            common.HELM_NS_OPENSTACK: {
                'pod': self._get_pod_overrides(),
                'conf': self._get_conf_overrides(),
                'manifests': self._get_manifests_overrides(),
                'endpoints': self._get_endpoints_overrides(),
            }
        }

        if namespace in self.SUPPORTED_NAMESPACES:
            return overrides[namespace]
        elif namespace:
            raise exception.InvalidHelmNamespace(chart=self.CHART,
                                                 namespace=namespace)
        else:
            return overrides

    def _get_pod_overrides(self):
        return {
            'replicas': {
                'central': self._num_controllers(),
                'notification': self._num_controllers()
            }
        }

    def _get_manifests_overrides(self):
        manifests_overrides = {}
        if utils.is_virtual():
            manifests_overrides.update({'daemonset_ipmi': False})
        return manifests_overrides

    def _get_conf_overrides(self):
        return {
            'ceilometer': {
                'notification': {
                    'messaging_urls': {
                        'values': self._get_notification_messaging_urls()
                    }
                },
                'meter': {
                    'meter_definitions_dirs': '/etc/ceilometer/meters.d'
                }
            }
        }

    def _get_notification_messaging_urls(self):
        rabbit_user = 'rabbitmq-admin'
        rabbit_pass = self._get_common_password(rabbit_user)
        rabbit_paths = ['/ceilometer', '/cinder', '/glance', '/nova',
                        '/keystone', '/neutron', '/heat']

        messaging_urls = []
        for rabbit_path in rabbit_paths:
            messaging_urls += \
                ['rabbit://%s:%s@rabbitmq.openstack.svc.cluster.local:5672%s' %
                 (rabbit_user, rabbit_pass, rabbit_path)]
        return messaging_urls

    def _get_endpoints_overrides(self):
        return {
            'identity': {
                'auth': self._get_endpoints_identity_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS),
            },
            'oslo_cache': {
                'auth': {
                    'memcache_secret_key':
                        self._get_common_password('auth_memcache_key')
                }
            },
            'oslo_messaging': {
                'auth': self._get_endpoints_oslo_messaging_overrides(
                    self.SERVICE_NAME, self.AUTH_USERS)
            },
        }

    def get_region_name(self):
        return self._get_service_region_name(self.SERVICE_NAME)

@@ -0,0 +1,263 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
import tsconfig.tsconfig as tsc
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common.storage_backend_conf import StorageBackendConfig
from sysinv.helm import common
class CinderHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the cinder chart"""
CHART = app_constants.HELM_CHART_CINDER
SERVICE_NAME = app_constants.HELM_CHART_CINDER
SERVICE_TYPE = 'volume'
AUTH_USERS = ['cinder']
def _get_mount_overrides(self):
overrides = {
'volumes': [],
'volumeMounts': []
}
overrides['volumes'].append({
'name': 'newvolume',
'hostPath': {'path': tsc.IMAGE_CONVERSION_PATH}
})
overrides['volumeMounts'].append({
'name': 'newvolume',
'mountPath': tsc.IMAGE_CONVERSION_PATH
})
return overrides
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'mounts': {
'cinder_volume': {
'cinder_volume': self._get_mount_overrides()
}
},
'replicas': {
'api': self._num_controllers(),
'volume': self._num_controllers(),
'scheduler': self._num_controllers(),
'backup': self._num_controllers()
}
},
'conf': {
'cinder': self._get_conf_cinder_overrides(),
'ceph': self._get_conf_ceph_overrides(),
'backends': self._get_conf_backends_overrides(),
},
'endpoints': self._get_endpoints_overrides(),
'ceph_client': self._get_ceph_client_overrides()
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_conf_ceph_overrides(self):
ceph_backend = self._get_primary_ceph_backend()
if not ceph_backend:
return {}
primary_tier_name =\
constants.SB_TIER_DEFAULT_NAMES[constants.SB_TIER_TYPE_CEPH]
replication, min_replication =\
StorageBackendConfig.get_ceph_pool_replication(self.dbapi)
pools = {}
for backend in self.dbapi.storage_ceph_get_list():
if backend.tier_name == primary_tier_name:
pool_name = constants.CEPH_POOL_VOLUMES_NAME
else:
pool_name = "%s-%s" % (constants.CEPH_POOL_VOLUMES_NAME,
backend.tier_name)
rule_name = "{0}{1}{2}".format(
backend.tier_name, constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
pool = {
'replication': replication,
'crush_rule': rule_name.encode('utf8', 'strict'),
'chunk_size': constants.CEPH_POOL_VOLUMES_CHUNK_SIZE,
'app_name': constants.CEPH_POOL_VOLUMES_APP_NAME
}
pools[pool_name.encode('utf8', 'strict')] = pool
if backend.name == constants.SB_DEFAULT_NAMES[constants.SB_TYPE_CEPH]:
# Backup uses the same replication and crush rule as
# the default storage backend
pools['backup'] = dict(pool)
return {
'monitors': self._get_formatted_ceph_monitor_ips(),
'admin_keyring': 'null',
'pools': pools
}
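The crush rule name built above is derived purely by string manipulation. A standalone sketch of that derivation (the suffix value here is an assumption for illustration; the real constant comes from sysinv.common.constants):

```python
# Standalone sketch of the crush rule name derivation used above.
# CEPH_CRUSH_TIER_SUFFIX's value is assumed for illustration only.
CEPH_CRUSH_TIER_SUFFIX = '-tier'

def crush_rule_name(tier_name):
    # e.g. 'storage' -> 'storage-tier-ruleset' -> 'storage_tier_ruleset'
    return "{0}{1}{2}".format(
        tier_name, CEPH_CRUSH_TIER_SUFFIX, "-ruleset").replace('-', '_')

print(crush_rule_name('storage'))  # storage_tier_ruleset
```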
def _get_conf_cinder_overrides(self):
# Get all the internal CEPH backends.
backends = self.dbapi.storage_backend_get_list_by_type(
backend_type=constants.SB_TYPE_CEPH)
conf_cinder = {
'DEFAULT': {
'enabled_backends': ','.join(
str(b.name.encode('utf8', 'strict').decode('utf-8')) for b in backends)
},
}
current_host_fs_list = self.dbapi.host_fs_get_list()
chosts = self.dbapi.ihost_get_by_personality(constants.CONTROLLER)
chosts_fs = [fs for fs in current_host_fs_list
if fs['name'] == constants.FILESYSTEM_NAME_IMAGE_CONVERSION]
# Conversion overrides should be generated only if every configured
# controller node has the conversion partition added
if len(chosts) == len(chosts_fs):
conf_cinder['DEFAULT']['image_conversion_dir'] = \
tsc.IMAGE_CONVERSION_PATH
# Always set the default_volume_type to the volume type associated with the
# primary Ceph backend/tier which is available on all StarlingX platform
# configurations. This will guarantee that any Cinder API requests for
# this value will be fulfilled as part of determining a safe volume type to
# use during volume creation. This can be overridden by the user when/if
# additional backends are added to the platform.
default = next(
(b.name for b in backends
if b.name == constants.SB_DEFAULT_NAMES[constants.SB_TYPE_CEPH]), None)
if default:
conf_cinder['DEFAULT']['default_volume_type'] = \
default.encode('utf8', 'strict')
return conf_cinder
def _get_conf_backends_overrides(self):
conf_backends = {}
# We don't use the chart's default backends.
conf_backends['rbd1'] = ""
# Get tier info.
tiers = self.dbapi.storage_tier_get_list()
primary_tier_name =\
constants.SB_TIER_DEFAULT_NAMES[constants.SB_TIER_TYPE_CEPH]
# We support primary and secondary CEPH tiers.
backends = self.dbapi.storage_backend_get_list_by_type(
backend_type=constants.SB_TYPE_CEPH)
# No data if there are no CEPH backends.
if not backends:
return {}
for bk in backends:
bk_name = bk.name.encode('utf8', 'strict')
tier = next((t for t in tiers if t.forbackendid == bk.id), None)
if not tier:
raise Exception("No tier present for backend %s" % bk_name)
if tier.name == primary_tier_name:
rbd_pool = constants.CEPH_POOL_VOLUMES_NAME
else:
rbd_pool = "%s-%s" % (constants.CEPH_POOL_VOLUMES_NAME,
tier.name)
conf_backends[bk_name] = {
'image_volume_cache_enabled': 'True',
'volume_backend_name': bk_name,
'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver',
'rbd_pool': rbd_pool.encode('utf8', 'strict'),
'rbd_user': 'cinder',
'rbd_ceph_conf':
(constants.CEPH_CONF_PATH +
constants.SB_TYPE_CEPH_CONF_FILENAME),
}
return conf_backends
def _get_endpoints_overrides(self):
return {
'identity': {
'auth':
self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
'volume': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'volumev2': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'volumev3': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
}
def _get_primary_ceph_backend(self):
try:
backend = self.dbapi.storage_backend_get_by_name(
constants.SB_DEFAULT_NAMES[constants.SB_TYPE_CEPH])
except Exception:
backend = None
return backend
def get_region_name(self):
return self._get_service_region_name(self.SERVICE_NAME)
def get_service_name_v2(self):
return self._get_configured_service_name(self.SERVICE_NAME, 'v2')
def get_service_type_v2(self):
service_type = self._get_configured_service_type(
self.SERVICE_NAME, 'v2')
if service_type is None:
return self.SERVICE_TYPE + 'v2'
else:
return service_type
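Every plugin in this module ends `get_overrides()` with the same three-way namespace dispatch. A minimal standalone sketch of that pattern (the namespace constant and exception class are stand-ins for the sysinv ones):

```python
HELM_NS_OPENSTACK = 'openstack'  # stand-in for sysinv.helm.common value
SUPPORTED_NAMESPACES = [HELM_NS_OPENSTACK]

class InvalidHelmNamespace(Exception):
    """Stand-in for sysinv.common.exception.InvalidHelmNamespace."""

def dispatch(overrides, namespace=None):
    # Known namespace: return just its overrides.
    if namespace in SUPPORTED_NAMESPACES:
        return overrides[namespace]
    # Unknown but non-empty namespace: reject it.
    elif namespace:
        raise InvalidHelmNamespace(namespace)
    # No namespace given: return everything.
    return overrides

overrides = {HELM_NS_OPENSTACK: {'pod': {'replicas': {'api': 2}}}}
print(dispatch(overrides, 'openstack'))  # the 'openstack' sub-dict
```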

@@ -0,0 +1,68 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
class DcdbsyncHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the dcdbsync chart"""
CHART = app_constants.HELM_CHART_DCDBSYNC
AUTH_USERS = ['dcdbsync']
SERVICE_NAME = app_constants.HELM_CHART_DCDBSYNC
def _is_enabled(self, app_name, chart_name, namespace):
# First, see if this chart is enabled by the user, then adjust based on
# system conditions.
enabled = super(DcdbsyncHelm, self)._is_enabled(
app_name, chart_name, namespace)
if enabled \
and (self._distributed_cloud_role() !=
constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER) \
and (self._distributed_cloud_role() !=
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD):
enabled = False
return enabled
def execute_manifest_updates(self, operator):
if self._is_enabled(operator.APP,
self.CHART, common.HELM_NS_OPENSTACK):
operator.manifest_chart_groups_insert(
operator.ARMADA_MANIFEST,
operator.CHART_GROUPS_LUT[self.CHART])
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'endpoints': self._get_endpoints_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'dcorch_dbsync': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
},
}
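The `_is_enabled()` gate above reads as: respect the user's setting, but force the chart off on systems that are neither a distributed cloud system controller nor a subcloud. A sketch with assumed role strings:

```python
# Role values are assumed placeholders for the sysinv constants.
ROLE_SYSTEMCONTROLLER = 'systemcontroller'
ROLE_SUBCLOUD = 'subcloud'

def dcdbsync_enabled(user_enabled, dc_role):
    # dcdbsync is only meaningful on distributed cloud systems.
    if user_enabled and dc_role not in (ROLE_SYSTEMCONTROLLER, ROLE_SUBCLOUD):
        return False
    return user_enabled

print(dcdbsync_enabled(True, None))        # False: standalone system
print(dcdbsync_enabled(True, 'subcloud'))  # True
```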

@@ -0,0 +1,57 @@
#
# SPDX-License-Identifier: Apache-2.0
#
from oslo_log import log as logging
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
LOG = logging.getLogger(__name__)
class FmRestApiHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the fm-rest-api chart"""
CHART = app_constants.HELM_CHART_FM_REST_API
SERVICE_NAME = app_constants.HELM_CHART_FM_REST_API
AUTH_USERS = ['fm']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'endpoints': self._get_endpoints_overrides(),
'pod': {
'replicas': {
'api': self._num_controllers()
},
},
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
fm_service_name = self._operator.chart_operators[
app_constants.HELM_CHART_FM_REST_API].SERVICE_NAME
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
fm_service_name, self.AUTH_USERS),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
fm_service_name, self.AUTH_USERS)
},
}

@@ -0,0 +1,67 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import utils
from sysinv.helm import common
from sysinv.helm import base
class GarbdHelm(base.BaseHelm):
"""Class to encapsulate helm operations for the galera arbitrator chart"""
# The service name is used to build the standard docker image location.
# It is intentionally "mariadb" and not "garbd" as they both use the
# same docker image.
SERVICE_NAME = app_constants.HELM_CHART_MARIADB
CHART = app_constants.HELM_CHART_GARBD
SUPPORTED_NAMESPACES = \
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
SUPPORTED_APP_NAMESPACES = {
constants.HELM_APP_OPENSTACK:
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
}
def _is_enabled(self, app_name, chart_name, namespace):
# First, see if this chart is enabled by the user, then adjust based on
# system conditions.
enabled = super(GarbdHelm, self)._is_enabled(
app_name, chart_name, namespace)
# If there are fewer than two controllers, we are on AIO-DX, or we are a
# distributed cloud system controller, a single mariadb server is used,
# so garbd should not run.
if enabled and (self._num_controllers() < 2 or
utils.is_aio_duplex_system(self.dbapi) or
(self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER)):
enabled = False
return enabled
def execute_manifest_updates(self, operator):
# On application load this chart is enabled in the mariadb chart group
if not self._is_enabled(operator.APP,
self.CHART, common.HELM_NS_OPENSTACK):
operator.chart_group_chart_delete(
operator.CHART_GROUPS_LUT[self.CHART],
operator.CHARTS_LUT[self.CHART])
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides

@@ -0,0 +1,180 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common.storage_backend_conf import StorageBackendConfig
from sysinv.helm import common
# Info used in the Glance Helm chart.
RBD_STORE_USER = 'images'
class GlanceHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the glance chart"""
CHART = app_constants.HELM_CHART_GLANCE
SERVICE_NAME = app_constants.HELM_CHART_GLANCE
SERVICE_TYPE = 'image'
AUTH_USERS = ['glance']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': self._get_pod_overrides(),
'endpoints': self._get_endpoints_overrides(),
'storage': self._get_storage_overrides(),
'conf': self._get_conf_overrides(),
'bootstrap': self._get_bootstrap_overrides(),
'ceph_client': self._get_ceph_client_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_pod_overrides(self):
replicas_count = 1
ceph_backend = self._get_primary_ceph_backend()
if ceph_backend:
replicas_count = self._num_controllers()
return {
'replicas': {
'api': replicas_count,
}
}
def _get_endpoints_overrides(self):
return {
'image': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
app_constants.HELM_CHART_GLANCE),
'scheme': self._get_endpoints_scheme_public_overrides(),
'port': self._get_endpoints_port_api_public_overrides(),
},
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
}
def _get_storage_overrides(self):
ceph_backend = self._get_primary_ceph_backend()
if not ceph_backend:
return 'pvc'
return constants.GLANCE_BACKEND_RBD  # radosgw | rbd | swift | pvc
def _get_ceph_overrides(self):
conf_ceph = {
'admin_keyring': self._get_ceph_password(
self.SERVICE_NAME, 'admin_keyring'
),
'monitors': self._get_formatted_ceph_monitor_ips()
}
return conf_ceph
def _get_conf_overrides(self):
ceph_backend = self._get_primary_ceph_backend()
if not ceph_backend:
rbd_store_pool = ""
rbd_store_user = ""
replication = 1
else:
rbd_store_pool = constants.CEPH_POOL_IMAGES_NAME
rbd_store_user = RBD_STORE_USER
replication, min_replication = \
StorageBackendConfig.get_ceph_pool_replication(self.dbapi)
# Only the primary Ceph tier is used for the glance images pool
rule_name = "{0}{1}{2}".format(
constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH],
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
conf = {
'glance': {
'DEFAULT': {
'graceful_shutdown': True,
'show_image_direct_url': True,
},
'glance_store': {
'filesystem_store_datadir': constants.GLANCE_IMAGE_PATH,
'rbd_store_pool': rbd_store_pool,
'rbd_store_user': rbd_store_user,
'rbd_store_replication': replication,
'rbd_store_crush_rule': rule_name,
}
}
}
if ceph_backend:
conf['ceph'] = self._get_ceph_overrides()
return conf
def _get_bootstrap_overrides(self):
# By default, prevent the download and creation of the Cirros image.
# TODO: Remove if/when pulling from external registries is supported.
bootstrap = {
'enabled': False
}
return bootstrap
def _get_primary_ceph_backend(self):
try:
backend = self.dbapi.storage_backend_get_by_name(
constants.SB_DEFAULT_NAMES[constants.SB_TYPE_CEPH])
except exception.StorageBackendNotFoundByName:
backend = None
return backend
def get_region_name(self):
return self._get_service_region_name(self.SERVICE_NAME)
def get_service_name(self):
return self._get_configured_service_name(self.SERVICE_NAME)
def get_service_type(self):
service_type = self._get_configured_service_type(self.SERVICE_NAME)
if service_type is None:
return self.SERVICE_TYPE
else:
return service_type
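Two of the glance decisions above hinge on the same check, whether a primary Ceph backend exists: storage falls back to 'pvc' with a single API replica, otherwise RBD is used with one replica per controller. A sketch (the RBD constant value is an assumption):

```python
GLANCE_BACKEND_RBD = 'rbd'  # assumed value of constants.GLANCE_BACKEND_RBD

def storage_and_replicas(ceph_backend, num_controllers):
    # No ceph backend: PVC-backed store, single API replica.
    if not ceph_backend:
        return 'pvc', 1
    # Ceph present: RBD store, one API replica per controller.
    return GLANCE_BACKEND_RBD, num_controllers

print(storage_and_replicas(None, 2))      # ('pvc', 1)
print(storage_and_replicas(object(), 2))  # ('rbd', 2)
```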

@@ -0,0 +1,69 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class GnocchiHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the gnocchi chart"""
CHART = app_constants.HELM_CHART_GNOCCHI
SERVICE_NAME = app_constants.HELM_CHART_GNOCCHI
AUTH_USERS = ['gnocchi']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': self._get_pod_overrides(),
'endpoints': self._get_endpoints_overrides(),
'ceph_client': self._get_ceph_client_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_pod_overrides(self):
return {
'replicas': {
'api': self._num_controllers()
}
}
def _get_endpoints_overrides(self):
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'metric': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
}

@@ -0,0 +1,79 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class HeatHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the heat chart"""
CHART = app_constants.HELM_CHART_HEAT
SERVICE_NAME = app_constants.HELM_CHART_HEAT
AUTH_USERS = ['heat', 'heat_trustee', 'heat_stack_user']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': self._get_pod_overrides(),
'endpoints': self._get_endpoints_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_pod_overrides(self):
return {
'replicas': {
'api': self._num_controllers(),
'cfn': self._num_controllers(),
'cloudwatch': self._num_controllers(),
'engine': self._num_controllers()
}
}
def _get_endpoints_overrides(self):
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'cloudformation': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
'cloudformation'),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'orchestration': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
}
def get_region_name(self):
return self._get_service_region_name(self.SERVICE_NAME)

@@ -0,0 +1,35 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from sysinv.common import exception
from sysinv.helm import common
from sysinv.helm import base
class HelmToolkitHelm(base.BaseHelm):
"""Class to encapsulate helm operations for the helm toolkit"""
CHART = app_constants.HELM_CHART_HELM_TOOLKIT
SUPPORTED_NAMESPACES = [
common.HELM_NS_HELM_TOOLKIT,
]
def get_namespaces(self):
return self.SUPPORTED_NAMESPACES
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_HELM_TOOLKIT: {}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides

@@ -0,0 +1,128 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
class HorizonHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the horizon chart"""
CHART = app_constants.HELM_CHART_HORIZON
SERVICE_NAME = app_constants.HELM_CHART_HORIZON
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'conf': {
'horizon': {
'local_settings': {
'config': self._get_local_settings_config_overrides(),
}
}
},
'endpoints': self._get_endpoints_overrides(),
'network': {
'node_port': {
'enabled': self._get_network_node_port_overrides()
}
}
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
return {
'dashboard': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
app_constants.HELM_CHART_HORIZON),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
}
def _get_local_settings_config_overrides(self):
local_settings_config = {
'horizon_secret_key': self._get_or_generate_password(
self.SERVICE_NAME, common.HELM_NS_OPENSTACK,
'horizon_secret_key'),
'system_region_name': self._region_name()
}
# Basic region config additions
if self._region_config():
openstack_host = 'controller' # TODO(tsmith) must evaluate region functionality
region_name = self._region_name()
local_settings_config.update({
'openstack_keystone_url': "http://%s:5000/v3" % openstack_host,
'region_name': region_name,
'available_regions': [("http://%s:5000/v3" % openstack_host, region_name), ],
'ss_enabled': 'True',
})
# Distributed cloud additions
if self._distributed_cloud_role() in [
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD,
constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER]:
local_settings_config.update({
'dc_mode': 'True',
})
# Https & security settings
if self._https_enabled():
local_settings_config.update({
'https_enabled': 'True',
})
lockout_retries = self._get_service_parameter('horizon', 'auth', 'lockout_retries')
lockout_seconds = self._get_service_parameter('horizon', 'auth', 'lockout_seconds')
if lockout_retries is not None and lockout_seconds is not None:
local_settings_config.update({
'lockout_retries_num': str(lockout_retries.value),
'lockout_period_sec': str(lockout_seconds.value),
})
return local_settings_config
def _region_config(self):
# A wrapper over the Base region_config check.
if (self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD):
return False
else:
return super(HorizonHelm, self)._region_config()
def _get_network_node_port_overrides(self):
# If openstack endpoint FQDN is configured, disable node_port 31000
# which will enable the Ingress for the horizon service
endpoint_fqdn = self._get_service_parameter(
constants.SERVICE_TYPE_OPENSTACK,
constants.SERVICE_PARAM_SECTION_OPENSTACK_HELM,
constants.SERVICE_PARAM_NAME_ENDPOINT_DOMAIN)
if endpoint_fqdn:
return False
else:
return True
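The node-port decision above is a single inversion: configuring an openstack endpoint FQDN disables NodePort 31000 so the Ingress serves Horizon instead. As a sketch:

```python
def node_port_enabled(endpoint_fqdn):
    # FQDN configured -> Ingress handles dashboard traffic, NodePort off.
    return not endpoint_fqdn

print(node_port_enabled(None))                   # True: keep NodePort 31000
print(node_port_enabled('horizon.example.com'))  # False: use Ingress
```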

@@ -0,0 +1,93 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
from sysinv.helm import base
class IngressHelm(base.BaseHelm):
"""Class to encapsulate helm operations for the ingress chart"""
CHART = app_constants.HELM_CHART_INGRESS
SUPPORTED_NAMESPACES = base.BaseHelm.SUPPORTED_NAMESPACES + [
common.HELM_NS_KUBE_SYSTEM,
common.HELM_NS_OPENSTACK
]
SUPPORTED_APP_NAMESPACES = {
constants.HELM_APP_OPENSTACK:
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_KUBE_SYSTEM,
common.HELM_NS_OPENSTACK]
}
def get_overrides(self, namespace=None):
limit_enabled, limit_cpus, limit_mem_mib = self._get_platform_res_limit()
overrides = {
common.HELM_NS_KUBE_SYSTEM: {
'pod': {
'replicas': {
'error_page': self._num_controllers()
},
'resources': {
'enabled': limit_enabled,
'ingress': {
'limits': {
'cpu': "%d000m" % (limit_cpus),
'memory': "%dMi" % (limit_mem_mib)
}
},
'error_pages': {
'limits': {
'cpu': "%d000m" % (limit_cpus),
'memory': "%dMi" % (limit_mem_mib)
}
}
}
},
'deployment': {
'mode': 'cluster',
'type': 'DaemonSet'
},
'network': {
'host_namespace': 'true'
},
},
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'ingress': self._num_controllers(),
'error_page': self._num_controllers()
},
'resources': {
'enabled': limit_enabled,
'ingress': {
'limits': {
'cpu': "%d000m" % (limit_cpus),
'memory': "%dMi" % (limit_mem_mib)
}
},
'error_pages': {
'limits': {
'cpu': "%d000m" % (limit_cpus),
'memory': "%dMi" % (limit_mem_mib)
}
}
}
}
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
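The limit strings built above encode whole CPUs as Kubernetes millicores purely by suffixing '000m', and memory MiB with 'Mi'. A standalone sketch of the formatting:

```python
def resource_limits(limit_cpus, limit_mem_mib):
    # 2 CPUs -> '2000m' (kubernetes millicores); 4096 MiB -> '4096Mi'
    return {
        'cpu': "%d000m" % limit_cpus,
        'memory': "%dMi" % limit_mem_mib,
    }

print(resource_limits(2, 4096))  # {'cpu': '2000m', 'memory': '4096Mi'}
```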

@@ -0,0 +1,168 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
class IronicHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the ironic chart"""
CHART = app_constants.HELM_CHART_IRONIC
SERVICE_NAME = app_constants.HELM_CHART_IRONIC
SERVICE_USERS = ['glance']
AUTH_USERS = ['ironic']
def _is_enabled(self, app_name, chart_name, namespace):
# First, see if this chart is enabled by the user, then adjust based on
# system conditions.
enabled = super(IronicHelm, self)._is_enabled(app_name,
chart_name, namespace)
if enabled and self._num_controllers() < 2:
enabled = False
return enabled
def execute_manifest_updates(self, operator):
# On application load, this chart is disabled in the metadata. Insert as
# needed.
if self._is_enabled(operator.APP,
self.CHART, common.HELM_NS_OPENSTACK):
operator.chart_group_chart_insert(
operator.CHART_GROUPS_LUT[self.CHART],
operator.CHARTS_LUT[self.CHART])
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'api': self._num_controllers(),
'conductor': self._num_controllers()
}
},
'network': self._get_network_overrides(),
'endpoints': self._get_endpoints_overrides()
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
overrides = {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
}
# Service user passwords already exist in other chart overrides
for user in self.SERVICE_USERS:
overrides['identity']['auth'].update({
user: {
'region_name': self._region_name(),
'password': self._get_or_generate_password(
user, common.HELM_NS_OPENSTACK, user)
}
})
return overrides
def _get_interface_port_name(self, iface):
"""
Determine the port name of the underlying device.
"""
assert iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET
port = self.dbapi.port_get_by_interface(iface.id)
if port:
return port[0]['name']
def _get_ironic_port(self):
ironic_port = ''
if self.dbapi is None:
return ironic_port
# find the first interface with ironic network type
interfaces = self.dbapi.iinterface_get_all()
for iface in interfaces:
for net_type in iface.networktypelist:
if net_type == constants.NETWORK_TYPE_IRONIC:
ironic_port = self._get_interface_port_name(iface)
break
return ironic_port
def _get_ironic_addrpool(self):
ironic_addrpool = {}
if self.dbapi is None:
return ironic_addrpool
networks = self.dbapi.networks_get_by_type(
constants.NETWORK_TYPE_IRONIC)
for network in networks:
addrpool = self.dbapi.address_pool_get(network.pool_uuid)
if addrpool:
ironic_addrpool['cidr'] = str(addrpool.network) + \
'/' + str(addrpool.prefix)
ironic_addrpool['gateway'] = str(addrpool.gateway_address)
ironic_addrpool['start'] = str(addrpool.ranges[0][0])
ironic_addrpool['end'] = str(addrpool.ranges[0][1])
break
return ironic_addrpool
# Retrieve ironic network settings from address pools, the ironic
# ethernet port name from interfaces, and the ironic provider network
# from data networks.
#
# NOTE: A different ethernet port name per ironic conductor is not
# supported. Currently the ironic port name must be the same on all
# controllers to support HA; otherwise the initialization of
# ironic-conductor-pxe fails. This is a limitation of
# openstack-helm/ironic, whose conductors share the same configuration
# file for init.
def _get_network_overrides(self):
ironic_addrpool = self._get_ironic_addrpool()
gateway = ironic_addrpool.get('gateway', '')
cidr = ironic_addrpool.get('cidr', '')
start = ironic_addrpool.get('start', '')
end = ironic_addrpool.get('end', '')
overrides = {
'pxe': {
'device': str(self._get_ironic_port()),
# The user can define their own tenant network name via
# 'system helm-override-update' to update this value
'neutron_provider_network': 'ironic',
'neutron_subnet_gateway': str(gateway),
'neutron_subnet_cidr': str(cidr),
'neutron_subnet_alloc_start': str(start),
'neutron_subnet_alloc_end': str(end)
},
}
return overrides
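The address-pool flattening in `_get_ironic_addrpool()` and `_get_network_overrides()` reduces to simple string assembly. A sketch against a faked addrpool record:

```python
class FakeAddrPool(object):
    """Minimal stand-in for a sysinv address pool record."""
    network = '10.10.10.0'
    prefix = 24
    gateway_address = '10.10.10.1'
    ranges = [['10.10.10.2', '10.10.10.254']]

def ironic_addrpool(pool):
    # Mirrors the cidr/gateway/start/end fields built above.
    return {
        'cidr': str(pool.network) + '/' + str(pool.prefix),
        'gateway': str(pool.gateway_address),
        'start': str(pool.ranges[0][0]),
        'end': str(pool.ranges[0][1]),
    }

print(ironic_addrpool(FakeAddrPool())['cidr'])  # 10.10.10.0/24
```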

@@ -0,0 +1,297 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from six.moves import configparser
import os
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
OPENSTACK_PASSWORD_RULES_FILE = '/etc/keystone/password-rules.conf'
class KeystoneHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the keystone chart"""
CHART = app_constants.HELM_CHART_KEYSTONE
SERVICE_NAME = app_constants.HELM_CHART_KEYSTONE
SERVICE_PATH = '/v3'
DEFAULT_DOMAIN_NAME = 'default'
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': self._get_pod_overrides(),
'conf': self._get_conf_overrides(),
'endpoints': self._get_endpoints_overrides(),
'network': self._get_network_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_pod_overrides(self):
overrides = {
'replicas': {
'api': self._num_controllers()
},
'lifecycle': {
'termination_grace_period': {
'api': {
'timeout': 60
}
}
}
}
return overrides
def _get_conf_keystone_default_overrides(self):
return {
'max_token_size': 255, # static controller.yaml => chart default
'debug': False, # static controller.yaml => chart default
'use_syslog': True, # static controller.yaml
'syslog_log_facility': 'local2', # static controller.yaml
'log_file': '/dev/null', # static controller.yaml
# 'admin_token': self._generate_random_password(length=32)
}
def _get_conf_keystone_database_overrides(self):
return {
'idle_timeout': 60, # static controller.yaml
'max_pool_size': 1, # static controller.yaml
'max_overflow': 50, # static controller.yaml
}
def _get_conf_keystone_oslo_middleware_overrides(self):
return {
'enable_proxy_headers_parsing': True # static controller.yaml
}
def _get_conf_keystone_token_overrides(self):
return {
'provider': 'fernet' # static controller.yaml => chart default
}
def _get_conf_keystone_identity_overrides(self):
return {
'driver': 'sql' # static controller.yaml
}
def _get_conf_keystone_assignment_overrides(self):
return {
'driver': 'sql' # static controller.yaml
}
@staticmethod
def _extract_openstack_password_rules_from_file(
rules_file, section="security_compliance"):
try:
config = configparser.RawConfigParser()
parsed_config = config.read(rules_file)
if not parsed_config:
msg = ("Cannot parse rules file: %s" % rules_file)
raise Exception(msg)
if not config.has_section(section):
msg = ("Required section '%s' not found in rules file" % section)
raise Exception(msg)
rules = config.items(section)
if not rules:
msg = ("section '%s' contains no configuration options" % section)
raise Exception(msg)
return dict(rules)
except Exception:
raise Exception("Failed to extract password rules from file")
def _get_password_rule(self):
password_rule = {}
if os.path.isfile(OPENSTACK_PASSWORD_RULES_FILE):
try:
passwd_rules = \
KeystoneHelm._extract_openstack_password_rules_from_file(
OPENSTACK_PASSWORD_RULES_FILE)
password_rule.update({
'unique_last_password_count':
int(passwd_rules['unique_last_password_count']),
'password_regex':
self.quoted_str(passwd_rules['password_regex']),
'password_regex_description':
self.quoted_str(
passwd_rules['password_regex_description'])
})
except Exception:
pass
return password_rule
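The rules-file parsing above can be exercised without touching /etc/keystone by feeding configparser an in-memory file. A sketch using the stdlib configparser directly (the diff imports it via six.moves for py2/py3 compatibility):

```python
import configparser
import io

RULES = u"""[security_compliance]
unique_last_password_count = 2
password_regex = ^.{7,}$
password_regex_description = minimum 7 characters
"""

config = configparser.RawConfigParser()
config.read_file(io.StringIO(RULES))
rules = dict(config.items('security_compliance'))
print(rules['unique_last_password_count'])  # 2 (values come back as strings)
```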
def _get_conf_keystone_security_compliance_overrides(self):
rgx = '^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()<>{}+=_\\\[\]\-?|~`,.;:]).{7,}$'
overrides = {
'unique_last_password_count': 2, # static controller.yaml
'password_regex': self.quoted_str(rgx),
'password_regex_description':
self.quoted_str('Password must have a minimum length of 7'
' characters, and must contain at least 1'
' upper case, 1 lower case, 1 digit, and 1'
' special character'),
}
overrides.update(self._get_password_rule())
return overrides
def _get_conf_keystone_overrides(self):
return {
'DEFAULT': self._get_conf_keystone_default_overrides(),
'database': self._get_conf_keystone_database_overrides(),
'oslo_middleware': self._get_conf_keystone_oslo_middleware_overrides(),
'token': self._get_conf_keystone_token_overrides(),
'identity': self._get_conf_keystone_identity_overrides(),
'assignment': self._get_conf_keystone_assignment_overrides(),
'security_compliance': self._get_conf_keystone_security_compliance_overrides(),
}
def _get_conf_policy_overrides(self):
return {
"admin_required": "role:admin or is_admin:1",
"service_role": "role:service",
"service_or_admin": "rule:admin_required or rule:service_role",
"owner": "user_id:%(user_id)s",
"admin_or_owner": "rule:admin_required or rule:owner",
"token_subject": "user_id:%(target.token.user_id)s",
"admin_or_token_subject": "rule:admin_required or rule:token_subject",
"service_admin_or_token_subject":
"rule:service_or_admin or rule:token_subject",
"protected_domains":
"'heat':%(target.domain.name)s or 'magnum':%(target.domain.name)s",
"protected_projects":
"'admin':%(target.project.name)s or 'services':%(target.project.name)s",
"protected_admins":
"'admin':%(target.user.name)s or 'heat_admin':%(target.user.name)s"
" or 'dcmanager':%(target.user.name)s",
"protected_roles":
"'admin':%(target.role.name)s or 'heat_admin':%(target.user.name)s",
"protected_services": [
["'aodh':%(target.user.name)s"],
["'barbican':%(target.user.name)s"],
["'ceilometer':%(target.user.name)s"],
["'cinder':%(target.user.name)s"],
["'glance':%(target.user.name)s"],
["'heat':%(target.user.name)s"],
["'neutron':%(target.user.name)s"],
["'nova':%(target.user.name)s"],
["'patching':%(target.user.name)s"],
["'sysinv':%(target.user.name)s"],
["'mtce':%(target.user.name)s"],
["'magnum':%(target.user.name)s"],
["'panko':%(target.user.name)s"],
["'gnocchi':%(target.user.name)s"]
],
"identity:delete_service": "rule:admin_required and not rule:protected_services",
"identity:delete_domain": "rule:admin_required and not rule:protected_domains",
"identity:delete_project": "rule:admin_required and not rule:protected_projects",
"identity:delete_user":
"rule:admin_required and not (rule:protected_admins or rule:protected_services)",
"identity:change_password": "rule:admin_or_owner and not rule:protected_services",
"identity:delete_role": "rule:admin_required and not rule:protected_roles",
}
def _get_conf_overrides(self):
return {
'keystone': self._get_conf_keystone_overrides(),
'policy': self._get_conf_policy_overrides()
}
def _region_config(self):
# A wrapper over the Base region_config check.
if (self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD):
return False
else:
return super(KeystoneHelm, self)._region_config()
def _get_endpoints_overrides(self):
overrides = {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, []),
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
}
admin_endpoint_override = \
self._get_endpoints_hosts_admin_overrides(self.SERVICE_NAME)
if admin_endpoint_override:
overrides['identity']['hosts'] = admin_endpoint_override
return overrides
def _get_network_overrides(self):
overrides = {
'api': {
'ingress': self._get_network_api_ingress_overrides(),
}
}
return overrides
def get_admin_user_name(self):
if self._region_config():
service_config = self._get_service_config(self.SERVICE_NAME)
if service_config is not None:
return service_config.capabilities.get('admin_user_name')
return common.USER_ADMIN
def get_admin_user_domain(self):
if self._region_config():
service_config = self._get_service_config(self.SERVICE_NAME)
if service_config is not None:
return service_config.capabilities.get('admin_user_domain')
return self.DEFAULT_DOMAIN_NAME
def get_admin_project_name(self):
if self._region_config():
service_config = self._get_service_config(self.SERVICE_NAME)
if service_config is not None:
return service_config.capabilities.get('admin_project_name')
return common.USER_ADMIN
def get_admin_project_domain(self):
if self._region_config():
service_config = self._get_service_config(self.SERVICE_NAME)
if service_config is not None:
return service_config.capabilities.get('admin_project_domain')
return self.DEFAULT_DOMAIN_NAME
def get_admin_password(self):
o_user = self.get_admin_user_name()
o_service = common.SERVICE_ADMIN
return self._get_identity_password(o_service, o_user)
def get_region_name(self):
return self._get_service_region_name(self.SERVICE_NAME)


@@ -0,0 +1,107 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
class KeystoneApiProxyHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the keystone api proxy chart"""
CHART = app_constants.HELM_CHART_KEYSTONE_API_PROXY
SERVICE_NAME = app_constants.HELM_CHART_KEYSTONE_API_PROXY
DCORCH_SERVICE_NAME = 'dcorch'
def _is_enabled(self, app_name, chart_name, namespace):
        # First, see if this chart is enabled by the user, then adjust based
        # on system conditions
enabled = super(KeystoneApiProxyHelm, self)._is_enabled(
app_name, chart_name, namespace)
if enabled and (self._distributed_cloud_role() !=
constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER):
enabled = False
return enabled
def execute_manifest_updates(self, operator):
# This chart group is not included by default in the manifest. Insert as
# needed.
if self._is_enabled(operator.APP,
self.CHART, common.HELM_NS_OPENSTACK):
operator.manifest_chart_groups_insert(
operator.ARMADA_MANIFEST,
operator.CHART_GROUPS_LUT[self.CHART])
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'user': {
'keystone_api_proxy': {
'uid': 0
}
}
},
'conf': {
'keystone_api_proxy': {
'DEFAULT': {
'transport_url': self._get_transport_url()
},
'database': {
'connection': self._get_database_connection()
},
'identity': {
'remote_host': self._get_keystone_endpoint(),
}
}
},
'endpoints': self._get_endpoints_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, [])
},
'keystone_api_proxy': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
app_constants.HELM_CHART_KEYSTONE_API_PROXY),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
}
}
def _get_transport_url(self):
host_url = self._format_url_address(self._get_management_address())
auth_password = self._get_keyring_password('amqp', 'rabbit')
transport_url = "rabbit://guest:%s@%s:5672" % (auth_password, host_url)
return transport_url
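# The transport URL built above must bracket an IPv6 management address before
# appending the port, which is what _format_url_address() provides. A
# standalone sketch under that assumption ('guest' and port 5672 are the
# literals used by the code above):
#
# ```python
# def format_url_address(address):
#     # Bracket IPv6 literals so a :port suffix can follow (RFC 3986 form).
#     return "[%s]" % address if ":" in address else address
#
# def transport_url(password, address):
#     return "rabbit://guest:%s@%s:5672" % (password, format_url_address(address))
#
# assert transport_url("secret", "192.168.204.2") == \
#     "rabbit://guest:secret@192.168.204.2:5672"
# assert transport_url("secret", "fd00::1") == \
#     "rabbit://guest:secret@[fd00::1]:5672"
# ```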
def _get_database_connection(self):
host_url = self._format_url_address(self._get_management_address())
auth_password = self._get_keyring_password(
self.DCORCH_SERVICE_NAME, 'database')
connection = "postgresql+psycopg2://admin-dcorch:%s@%s/dcorch" %\
(auth_password, host_url)
return connection
def _get_keystone_endpoint(self):
return 'keystone-api.openstack.svc.cluster.local'


@@ -0,0 +1,52 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class LibvirtHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the libvirt chart"""
CHART = app_constants.HELM_CHART_LIBVIRT
SERVICE_NAME = app_constants.HELM_CHART_LIBVIRT
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'conf': {
'libvirt': {
'listen_addr': '0.0.0.0'
},
'qemu': {
'user': "root",
'group': "root",
'cgroup_controllers': ["cpu", "cpuacct", "cpuset", "freezer", "net_cls", "perf_event"],
'namespaces': [],
'clear_emulator_capabilities': 0
}
},
'pod': {
'mounts': {
'libvirt': {
'libvirt': self._get_mount_uefi_overrides()
}
}
}
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides


@@ -0,0 +1,40 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class MagnumHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the magnum chart"""
CHART = app_constants.HELM_CHART_MAGNUM
SERVICE_NAME = app_constants.HELM_CHART_MAGNUM
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'api': self._num_controllers(),
'conductor': self._num_controllers()
}
}
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides


@@ -0,0 +1,66 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class MariadbHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the mariadb chart"""
CHART = app_constants.HELM_CHART_MARIADB
def _num_server_replicas(self):
return self._num_controllers()
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'server': self._num_server_replicas(),
'ingress': self._num_controllers()
}
},
'endpoints': self._get_endpoints_overrides(),
'conf': {
'database': {
'config_override': self._get_database_config_override()
}
}
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_database_config_override(self):
listen_host = "0.0.0.0"
if self._is_ipv6_cluster_service():
listen_host = "::"
return "[mysqld]\n" \
"bind_address=::\n" \
"wsrep_provider_options=\"evs.suspect_timeout=PT30S; " \
"gmcast.peer_timeout=PT15S; " \
"gmcast.listen_addr=tcp://%s:{{ tuple \"oslo_db\" " \
"\"direct\" \"wsrep\" . | " \
"include \"helm-toolkit.endpoints.endpoint_port_lookup\" }}\"" % listen_host
def _get_endpoints_overrides(self):
return {
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.CHART, [])
}
}


@@ -0,0 +1,37 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from sysinv.common import constants
from sysinv.common import exception
from sysinv.helm import common
from sysinv.helm import base
class MemcachedHelm(base.BaseHelm):
"""Class to encapsulate helm operations for the memcached chart"""
CHART = app_constants.HELM_CHART_MEMCACHED
SUPPORTED_NAMESPACES = \
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
SUPPORTED_APP_NAMESPACES = {
constants.HELM_APP_OPENSTACK:
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
}
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides


@@ -0,0 +1,338 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from oslo_log import log as logging
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import utils
from sysinv.helm import common
LOG = logging.getLogger(__name__)
DATA_NETWORK_TYPES = [constants.NETWORK_TYPE_DATA]
SRIOV_NETWORK_TYPES = [constants.NETWORK_TYPE_PCI_SRIOV]
class NeutronHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the neutron chart"""
CHART = app_constants.HELM_CHART_NEUTRON
SERVICE_NAME = app_constants.HELM_CHART_NEUTRON
AUTH_USERS = ['neutron']
SERVICE_USERS = ['nova']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'server': self._num_controllers()
},
},
'conf': {
'plugins': {
'ml2_conf': self._get_neutron_ml2_config()
},
'overrides': {
'neutron_ovs-agent': {
'hosts': self._get_per_host_overrides()
},
'neutron_dhcp-agent': {
'hosts': self._get_per_host_overrides()
},
'neutron_l3-agent': {
'hosts': self._get_per_host_overrides()
},
'neutron_metadata-agent': {
'hosts': self._get_per_host_overrides()
},
'neutron_sriov-agent': {
'hosts': self._get_per_host_overrides()
},
},
'paste': {
'app:neutronversions': {
'paste.app_factory':
'neutron.pecan_wsgi.app:versions_factory'
},
},
},
'endpoints': self._get_endpoints_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_per_host_overrides(self):
host_list = []
hosts = self.dbapi.ihost_get_list()
for host in hosts:
host_labels = self.dbapi.label_get_by_host(host.id)
if (host.invprovision in [constants.PROVISIONED,
constants.PROVISIONING] or
host.ihost_action in [constants.UNLOCK_ACTION,
constants.FORCE_UNLOCK_ACTION]):
if (constants.WORKER in utils.get_personalities(host) and
utils.has_openstack_compute(host_labels)):
hostname = str(host.hostname)
host_neutron = {
'name': hostname,
'conf': {
'plugins': {
'openvswitch_agent': self._get_dynamic_ovs_agent_config(host),
'sriov_agent': self._get_dynamic_sriov_agent_config(host),
}
}
}
# if ovs runs on host, auto bridge add is covered by sysinv
if utils.get_vswitch_type(self.dbapi) == constants.VSWITCH_TYPE_NONE:
host_neutron['conf'].update({
'auto_bridge_add': self._get_host_bridges(host)})
host_list.append(host_neutron)
return host_list
def _interface_sort_key(self, iface):
"""
Sort interfaces by interface type placing ethernet interfaces ahead of
aggregated ethernet interfaces, and vlan interfaces last.
"""
if iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET:
return 0, iface['ifname']
elif iface['iftype'] == constants.INTERFACE_TYPE_AE:
return 1, iface['ifname']
else: # if iface['iftype'] == constants.INTERFACE_TYPE_VLAN:
return 2, iface['ifname']
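# The sort key returns a (rank, name) tuple so Python's tuple ordering does
# the work: ethernet first, aggregated-ethernet next, vlan last, names
# alphabetical within a rank. A self-contained sketch with made-up interface
# names:
#
# ```python
# RANK = {"ethernet": 0, "ae": 1, "vlan": 2}
#
# ifaces = [
#     {"iftype": "vlan", "ifname": "vlan100"},
#     {"iftype": "ethernet", "ifname": "enp0s3"},
#     {"iftype": "ae", "ifname": "bond0"},
# ]
# ordered = sorted(ifaces, key=lambda i: (RANK[i["iftype"]], i["ifname"]))
# assert [i["ifname"] for i in ordered] == ["enp0s3", "bond0", "vlan100"]
# ```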
def _get_datapath_type(self):
if (utils.get_vswitch_type(self.dbapi) ==
constants.VSWITCH_TYPE_OVS_DPDK):
return "netdev"
else:
return "system"
def _get_host_bridges(self, host):
bridges = {}
index = 0
for iface in sorted(self.dbapi.iinterface_get_by_ihost(host.id),
key=self._interface_sort_key):
if self._is_data_network_type(iface):
if any(dn.datanetwork_network_type in
[constants.DATANETWORK_TYPE_FLAT,
constants.DATANETWORK_TYPE_VLAN] for dn in
self._get_interface_datanets(iface)):
# obtain the assigned bridge for interface
brname = 'br-phy%d' % index
port_name = self._get_interface_port_name(iface)
bridges[brname] = port_name.encode('utf8', 'strict')
index += 1
return bridges
def _get_dynamic_ovs_agent_config(self, host):
local_ip = None
tunnel_types = None
bridge_mappings = ""
index = 0
for iface in sorted(self.dbapi.iinterface_get_by_ihost(host.id),
key=self._interface_sort_key):
if self._is_data_network_type(iface):
# obtain the assigned bridge for interface
brname = 'br-phy%d' % index
if brname:
datanets = self._get_interface_datanets(iface)
for datanet in datanets:
dn_name = datanet['datanetwork_name'].strip()
LOG.debug('_get_dynamic_ovs_agent_config '
'host=%s datanet=%s', host.hostname, dn_name)
if (datanet.datanetwork_network_type ==
constants.DATANETWORK_TYPE_VXLAN):
local_ip = self._get_interface_primary_address(
self.context, host, iface)
tunnel_types = constants.DATANETWORK_TYPE_VXLAN
elif (datanet.datanetwork_network_type in
[constants.DATANETWORK_TYPE_FLAT,
constants.DATANETWORK_TYPE_VLAN]):
bridge_mappings += ('%s:%s,' % (dn_name, brname))
index += 1
agent = {}
ovs = {
'integration_bridge': 'br-int',
'datapath_type': self._get_datapath_type(),
'vhostuser_socket_dir': '/var/run/openvswitch',
}
if tunnel_types:
agent['tunnel_types'] = tunnel_types
if local_ip:
ovs['local_ip'] = local_ip
if bridge_mappings:
ovs['bridge_mappings'] = str(bridge_mappings)
# https://access.redhat.com/documentation/en-us/
# red_hat_enterprise_linux_openstack_platform/7/html/
# networking_guide/bridge-mappings
# required for vlan, not flat, vxlan:
# ovs['network_vlan_ranges'] = physnet1:10:20,physnet2:21:25
return {
'agent': agent,
'ovs': ovs,
}
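# The bridge_mappings value accumulated above pairs each flat/vlan data
# network with its generated br-phyN bridge, keeping the trailing comma. A
# simplified sketch with made-up networks; note the real loop increments the
# index once per data interface, not per data network as this flattened
# version does:
#
# ```python
# datanets = [("physnet0", "vlan"), ("tenant-vxlan", "vxlan"), ("physnet1", "flat")]
#
# bridge_mappings = ""
# index = 0
# for name, net_type in datanets:
#     if net_type in ("flat", "vlan"):
#         bridge_mappings += "%s:br-phy%d," % (name, index)
#     index += 1
#
# assert bridge_mappings == "physnet0:br-phy0,physnet1:br-phy2,"
# ```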
def _get_dynamic_sriov_agent_config(self, host):
physical_device_mappings = ""
for iface in sorted(self.dbapi.iinterface_get_by_ihost(host.id),
key=self._interface_sort_key):
if self._is_sriov_network_type(iface):
# obtain the assigned datanets for interface
datanets = self._get_interface_datanets(iface)
port_name = self._get_interface_port_name(iface)
for datanet in datanets:
dn_name = datanet['datanetwork_name'].strip()
physical_device_mappings += ('%s:%s,' % (dn_name, port_name))
sriov_nic = {
'physical_device_mappings': str(physical_device_mappings),
}
return {
'securitygroup': {
'firewall_driver': 'noop',
},
# Mitigate host OS memory leak of cgroup session-*scope files
# and kernel slab resources. The leak is triggered using 'sudo'
# which utilizes the host dbus-daemon. The sriov agent frequently
# polls devices via 'ip link show' using run_as_root=True, but
# does not actually require 'sudo'.
'agent': {
'root_helper': '',
},
'sriov_nic': sriov_nic,
}
def _get_ml2_physical_network_mtus(self):
ml2_physical_network_mtus = []
datanetworks = self.dbapi.datanetworks_get_all()
for datanetwork in datanetworks:
dn_str = str(datanetwork.name) + ":" + str(datanetwork.mtu)
ml2_physical_network_mtus.append(dn_str)
return ",".join(ml2_physical_network_mtus)
def _get_flat_networks(self):
flat_nets = []
datanetworks = self.dbapi.datanetworks_get_all()
for datanetwork in datanetworks:
if datanetwork.network_type == constants.DATANETWORK_TYPE_FLAT:
flat_nets.append(str(datanetwork.name))
return ",".join(flat_nets)
def _get_neutron_ml2_config(self):
ml2_config = {
'ml2': {
'physical_network_mtus': self._get_ml2_physical_network_mtus()
},
'ml2_type_flat': {
'flat_networks': self._get_flat_networks()
}
}
LOG.info("_get_neutron_ml2_config=%s" % ml2_config)
return ml2_config
def _is_data_network_type(self, iface):
return iface.ifclass == constants.INTERFACE_CLASS_DATA
def _is_sriov_network_type(self, iface):
return iface.ifclass == constants.INTERFACE_CLASS_PCI_SRIOV
def _get_interface_datanets(self, iface):
"""
Return the data networks of the supplied interface as a list.
"""
ifdatanets = self.dbapi.interface_datanetwork_get_by_interface(
iface.uuid)
return ifdatanets
def _get_interface_port_name(self, iface):
"""
Determine the port name of the underlying device.
"""
if (iface['iftype'] == constants.INTERFACE_TYPE_VF and iface['uses']):
lower_iface = self.dbapi.iinterface_get(iface['uses'][0])
if lower_iface:
return self._get_interface_port_name(lower_iface)
assert iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET
port = self.dbapi.port_get_by_interface(iface.id)
if port:
return port[0]['name']
def _get_interface_primary_address(self, context, host, iface):
"""
Determine the primary IP address on an interface (if any). If multiple
addresses exist then the first address is returned.
"""
for address in self.dbapi.addresses_get_by_host(host.id):
if address.ifname == iface.ifname:
return address.address
return None
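# As the docstring says, the first matching address wins when an interface
# carries several. A standalone sketch with made-up host addresses:
#
# ```python
# addresses = [
#     {"ifname": "data0", "address": "192.168.1.10"},
#     {"ifname": "data0", "address": "192.168.1.11"},
#     {"ifname": "data1", "address": "192.168.2.10"},
# ]
#
# def primary_address(ifname):
#     for addr in addresses:
#         if addr["ifname"] == ifname:
#             return addr["address"]
#     return None
#
# assert primary_address("data0") == "192.168.1.10"  # first match returned
# assert primary_address("data9") is None
# ```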
def _get_endpoints_overrides(self):
overrides = {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'network': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
}
# Service user passwords already exist in other chart overrides
for user in self.SERVICE_USERS:
overrides['identity']['auth'].update({
user: {
'region_name': self._region_name(),
'password': self._get_or_generate_password(
user, common.HELM_NS_OPENSTACK, user)
}
})
return overrides
def get_region_name(self):
return self._get_service_region_name(self.SERVICE_NAME)


@@ -0,0 +1,33 @@
#
# Copyright (c) 2019 Intel, Inc.
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from sysinv.common import exception
from sysinv.helm import common
from sysinv.helm import base
class NginxPortsControlHelm(base.BaseHelm):
"""Class to encapsulate helm operations for nginx-ports-control chart"""
CHART = app_constants.HELM_CHART_NGINX_PORTS_CONTROL
SUPPORTED_NAMESPACES = \
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides


@@ -0,0 +1,602 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import copy
import os
from oslo_log import log as logging
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import interface
from sysinv.common import utils
from sysinv.common.storage_backend_conf import StorageBackendConfig
from sysinv.helm import common
LOG = logging.getLogger(__name__)
# Align ephemeral rbd_user with the cinder rbd_user so that the same libvirt
# secret can be used for accessing both pools. This also aligns with the
# behavior defined in nova/virt/libvirt/volume/net.py:_set_auth_config_rbd()
RBD_POOL_USER = "cinder"
DEFAULT_NOVA_PCI_ALIAS = [
{"vendor_id": constants.NOVA_PCI_ALIAS_QAT_PF_VENDOR,
"product_id": constants.NOVA_PCI_ALIAS_QAT_DH895XCC_PF_DEVICE,
"name": constants.NOVA_PCI_ALIAS_QAT_DH895XCC_PF_NAME},
{"vendor_id": constants.NOVA_PCI_ALIAS_QAT_VF_VENDOR,
"product_id": constants.NOVA_PCI_ALIAS_QAT_DH895XCC_VF_DEVICE,
"name": constants.NOVA_PCI_ALIAS_QAT_DH895XCC_VF_NAME},
{"vendor_id": constants.NOVA_PCI_ALIAS_QAT_PF_VENDOR,
"product_id": constants.NOVA_PCI_ALIAS_QAT_C62X_PF_DEVICE,
"name": constants.NOVA_PCI_ALIAS_QAT_C62X_PF_NAME},
{"vendor_id": constants.NOVA_PCI_ALIAS_QAT_VF_VENDOR,
"product_id": constants.NOVA_PCI_ALIAS_QAT_C62X_VF_DEVICE,
"name": constants.NOVA_PCI_ALIAS_QAT_C62X_VF_NAME},
{"name": constants.NOVA_PCI_ALIAS_GPU_NAME}
]
class NovaHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the nova chart"""
CHART = app_constants.HELM_CHART_NOVA
SERVICE_NAME = app_constants.HELM_CHART_NOVA
AUTH_USERS = ['nova', ]
SERVICE_USERS = ['neutron', 'ironic', 'placement']
NOVNCPROXY_SERVICE_NAME = 'novncproxy'
NOVNCPROXY_NODE_PORT = '30680'
def get_overrides(self, namespace=None):
ssh_privatekey, ssh_publickey = \
self._get_or_generate_ssh_keys(self.SERVICE_NAME, common.HELM_NS_OPENSTACK)
overrides = {
common.HELM_NS_OPENSTACK: {
'manifests': self._get_compute_ironic_manifests(),
'pod': {
'mounts': {
'nova_compute': {
'nova_compute': self._get_mount_overrides()
}
},
'replicas': {
'api_metadata': self._num_controllers(),
'placement': self._num_controllers(),
'osapi': self._num_controllers(),
'conductor': self._num_controllers(),
'consoleauth': self._num_controllers(),
'scheduler': self._num_controllers(),
'novncproxy': self._num_controllers()
}
},
'conf': {
'ceph': {
'ephemeral_storage': self._get_rbd_ephemeral_storage()
},
'nova': {
'libvirt': {
'virt_type': self._get_virt_type(),
},
'vnc': {
'novncproxy_base_url': self._get_novncproxy_base_url(),
},
'pci': self._get_pci_alias(),
},
'overrides': {
'nova_compute': {
'hosts': self._get_per_host_overrides()
}
},
'ssh_private': ssh_privatekey,
'ssh_public': ssh_publickey,
},
'endpoints': self._get_endpoints_overrides(),
'network': {
'sshd': {
'from_subnet': self._get_ssh_subnet(),
},
'novncproxy': {
'node_port': {
'enabled': self._get_network_node_port_overrides()
}
}
},
'ceph_client': self._get_ceph_client_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_mount_overrides(self):
overrides = self._get_mount_uefi_overrides()
# mount /dev/pts in order to get console log
overrides['volumes'].append({
'name': 'dev-pts',
'hostPath': {'path': '/dev/pts'}
})
overrides['volumeMounts'].append({
'name': 'dev-pts',
'mountPath': '/dev/pts'
})
return overrides
def _get_compute_ironic_manifests(self):
ironic_operator = self._operator.chart_operators[
app_constants.HELM_CHART_IRONIC]
enabled = ironic_operator._is_enabled(constants.HELM_APP_OPENSTACK,
app_constants.HELM_CHART_IRONIC, common.HELM_NS_OPENSTACK)
return {
'statefulset_compute_ironic': enabled
}
def _get_endpoints_overrides(self):
overrides = {
'identity': {
'name': 'keystone',
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'compute': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'compute_novnc_proxy': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.NOVNCPROXY_SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
'oslo_messaging': {
'auth': self._get_endpoints_oslo_messaging_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
}
db_passwords = {'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])}
overrides.update({
'oslo_db': db_passwords,
'oslo_db_api': copy.deepcopy(db_passwords),
'oslo_db_cell0': copy.deepcopy(db_passwords),
})
# Service user passwords already exist in other chart overrides
for user in self.SERVICE_USERS:
overrides['identity']['auth'].update({
user: {
'region_name': self._region_name(),
'password': self._get_or_generate_password(
user, common.HELM_NS_OPENSTACK, user)
}
})
return overrides
def _get_novncproxy_base_url(self):
# Get the openstack endpoint public domain name
endpoint_domain = self._get_service_parameter(
constants.SERVICE_TYPE_OPENSTACK,
constants.SERVICE_PARAM_SECTION_OPENSTACK_HELM,
constants.SERVICE_PARAM_NAME_ENDPOINT_DOMAIN)
if endpoint_domain is not None:
location = "%s.%s" % (self.NOVNCPROXY_SERVICE_NAME,
str(endpoint_domain.value).lower())
else:
if self._is_ipv6_cluster_service():
location = "[%s]:%s" % (self._get_oam_address(),
self.NOVNCPROXY_NODE_PORT)
else:
location = "%s:%s" % (self._get_oam_address(),
self.NOVNCPROXY_NODE_PORT)
url = "%s://%s/vnc_auto.html" % (self._get_public_protocol(),
location)
return url
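# The fallback branch above must bracket an IPv6 OAM address before appending
# the node port. A standalone sketch of that URL assembly (addresses are made
# up; the path and port literal come from the code above):
#
# ```python
# def novncproxy_base_url(scheme, oam_address, node_port, ipv6=False):
#     if ipv6:
#         location = "[%s]:%s" % (oam_address, node_port)
#     else:
#         location = "%s:%s" % (oam_address, node_port)
#     return "%s://%s/vnc_auto.html" % (scheme, location)
#
# assert novncproxy_base_url("https", "10.10.10.2", "30680") == \
#     "https://10.10.10.2:30680/vnc_auto.html"
# assert novncproxy_base_url("http", "fd00::2", "30680", ipv6=True) == \
#     "http://[fd00::2]:30680/vnc_auto.html"
# ```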
def _get_virt_type(self):
if utils.is_virtual():
return 'qemu'
else:
return 'kvm'
def _update_host_cpu_maps(self, host, default_config):
host_cpus = self._get_host_cpu_list(host, threads=True)
if host_cpus:
vm_cpus = self._get_host_cpu_list(
host, function=constants.APPLICATION_FUNCTION, threads=True)
vm_cpu_list = [c.cpu for c in vm_cpus]
vm_cpu_fmt = "\"%s\"" % utils.format_range_set(vm_cpu_list)
default_config.update({'vcpu_pin_set': vm_cpu_fmt})
shared_cpus = self._get_host_cpu_list(
host, function=constants.SHARED_FUNCTION, threads=True)
shared_cpu_map = {c.numa_node: c.cpu for c in shared_cpus}
shared_cpu_fmt = "\"%s\"" % ','.join(
"%r:%r" % (node, cpu) for node, cpu in shared_cpu_map.items())
default_config.update({'shared_pcpu_map': shared_cpu_fmt})
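# The shared_pcpu_map string built above is a quoted, comma-joined list of
# numa_node:cpu pairs. A sketch with a made-up map; sorted() here only makes
# the example deterministic, the real code relies on dict iteration order:
#
# ```python
# shared_cpu_map = {0: 3, 1: 19}  # NUMA node -> shared physical CPU id
# shared_pcpu_map = '"%s"' % ",".join(
#     "%r:%r" % (node, cpu) for node, cpu in sorted(shared_cpu_map.items()))
# assert shared_pcpu_map == '"0:3,1:19"'
# ```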
def _get_pci_pt_whitelist(self, host, iface_context):
# Process all configured PCI passthrough interfaces and add them to
# the list of devices to whitelist
devices = []
for iface in iface_context['interfaces'].values():
if iface['ifclass'] in [constants.INTERFACE_CLASS_PCI_PASSTHROUGH]:
port = interface.get_interface_port(iface_context, iface)
dnames = interface._get_datanetwork_names(iface_context, iface)
device = {
'address': port['pciaddr'],
'physical_network': dnames,
}
LOG.debug('_get_pci_pt_whitelist '
'host=%s, device=%s', host.hostname, device)
devices.append(device)
# Process all enabled PCI devices configured for PT and SRIOV and
# add them to the list of devices to whitelist.
# Since we are now properly initializing the qat driver and
# restarting sysinv, we need to add VF devices to the regular
# whitelist instead of the sriov whitelist
pci_devices = self.dbapi.pci_device_get_by_host(host.id)
for pci_device in pci_devices:
if pci_device.enabled:
device = {
'address': pci_device.pciaddr,
}
LOG.debug('_get_pci_pt_whitelist '
'host=%s, device=%s', host.hostname, device)
devices.append(device)
return devices
def _get_pci_sriov_whitelist(self, host, iface_context):
# Process all configured SRIOV interfaces and add each VF
# to the list of devices to whitelist
devices = []
for iface in iface_context['interfaces'].values():
if iface['ifclass'] in [constants.INTERFACE_CLASS_PCI_SRIOV]:
port = interface.get_sriov_interface_port(iface_context, iface)
dnames = interface._get_datanetwork_names(iface_context, iface)
vf_addrs = port['sriov_vfs_pci_address'].split(",")
vf_addrs = interface.get_sriov_interface_vf_addrs(iface_context, iface, vf_addrs)
if vf_addrs:
for vf_addr in vf_addrs:
device = {
'address': vf_addr,
'physical_network': dnames,
}
LOG.debug('_get_pci_sriov_whitelist '
'host=%s, device=%s', host.hostname, device)
devices.append(device)
return devices
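# Each SR-IOV VF becomes one whitelist entry carrying the interface's data
# networks. A flattened sketch with made-up PCI addresses (the real code
# additionally filters the VF list through get_sriov_interface_vf_addrs):
#
# ```python
# sriov_vfs_pci_address = "0000:81:10.0,0000:81:10.2"
# physical_network = ["physnet0"]
#
# devices = [{"address": addr, "physical_network": physical_network}
#            for addr in sriov_vfs_pci_address.split(",")]
#
# assert len(devices) == 2
# assert devices[0]["address"] == "0000:81:10.0"
# ```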
def _get_pci_alias(self):
"""
Generate multistring values containing global PCI alias
configuration for QAT and GPU devices.
The multistring type with list of JSON string values is used
to generate one-line-per-entry formatting, since JSON list of
dict is not supported by nova.
"""
alias_config = DEFAULT_NOVA_PCI_ALIAS[:]
LOG.debug('_get_pci_alias: aliases = %s', alias_config)
multistring = self._oslo_multistring_override(
name='alias', values=alias_config)
return multistring
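# As the docstring notes, nova's 'alias' option takes one JSON string per
# entry rather than a JSON list. A rough sketch of that encoding; the exact
# dict shape _oslo_multistring_override returns is an assumption here, and
# the alias entries are illustrative only:
#
# ```python
# import json
#
# aliases = [
#     {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"},
#     {"name": "qemu-gpu"},
# ]
#
# # One JSON string per alias gives the one-line-per-entry multistring form.
# values = [json.dumps(a, sort_keys=True) for a in aliases]
# pci_alias_override = {"alias": values}  # assumed override shape
#
# assert json.loads(pci_alias_override["alias"][0])["name"] == "qat-dh895xcc-pf"
# ```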
def _update_host_pci_whitelist(self, host, pci_config):
"""
Generate multistring values containing PCI passthrough
and SR-IOV devices.
The multistring type with list of JSON string values is used
to generate one-line-per-entry pretty formatting.
"""
# obtain interface information specific to this host
iface_context = {
'ports': interface._get_port_interface_id_index(
self.dbapi, host),
'interfaces': interface._get_interface_name_index(
self.dbapi, host),
'interfaces_datanets': interface._get_interface_name_datanets(
self.dbapi, host),
'addresses': interface._get_address_interface_name_index(
self.dbapi, host),
}
# This host's list of PCI passthrough and SR-IOV device dictionaries
devices = []
devices.extend(self._get_pci_pt_whitelist(host, iface_context))
devices.extend(self._get_pci_sriov_whitelist(host, iface_context))
if not devices:
return
# Convert device list into passthrough_whitelist multistring
multistring = self._oslo_multistring_override(
name='passthrough_whitelist', values=devices)
if multistring is not None:
pci_config.update(multistring)
def _update_host_storage(self, host, default_config, libvirt_config):
remote_storage = False
labels = self.dbapi.label_get_all(host.id)
for label in labels:
if (label.label_key == common.LABEL_REMOTE_STORAGE and
label.label_value == common.LABEL_VALUE_ENABLED):
remote_storage = True
break
rbd_pool = constants.CEPH_POOL_EPHEMERAL_NAME
rbd_ceph_conf = os.path.join(constants.CEPH_CONF_PATH,
constants.SB_TYPE_CEPH_CONF_FILENAME)
# If NOVA is a service on a ceph-external backend, use the ephemeral_pool
# and ceph_conf file that are stored in that DB entry.
# If NOVA is not on any ceph-external backend, it must be on the internal
# ceph backend with the default "ephemeral" pool and the default
# "/etc/ceph/ceph.conf" config file.
sb_list = self.dbapi.storage_backend_get_list_by_type(
backend_type=constants.SB_TYPE_CEPH_EXTERNAL)
if sb_list:
for sb in sb_list:
if constants.SB_SVC_NOVA in sb.services:
ceph_ext_obj = self.dbapi.storage_ceph_external_get(sb.id)
rbd_pool = sb.capabilities.get('ephemeral_pool')
rbd_ceph_conf = \
constants.CEPH_CONF_PATH + os.path.basename(ceph_ext_obj.ceph_conf)
if remote_storage:
libvirt_config.update({'images_type': 'rbd',
'images_rbd_pool': rbd_pool,
'images_rbd_ceph_conf': rbd_ceph_conf})
else:
libvirt_config.update({'images_type': 'default'})
def _update_host_addresses(self, host, default_config, vnc_config, libvirt_config):
interfaces = self.dbapi.iinterface_get_by_ihost(host.id)
addresses = self.dbapi.addresses_get_by_host(host.id)
cluster_host_network = self.dbapi.network_get_by_type(
constants.NETWORK_TYPE_CLUSTER_HOST)
cluster_host_iface = None
for iface in interfaces:
interface_network = {'interface_id': iface.id,
'network_id': cluster_host_network.id}
try:
self.dbapi.interface_network_query(interface_network)
cluster_host_iface = iface
except exception.InterfaceNetworkNotFoundByHostInterfaceNetwork:
pass
if cluster_host_iface is None:
return
cluster_host_ip = None
ip_family = None
for addr in addresses:
if addr.interface_id == cluster_host_iface.id:
cluster_host_ip = addr.address
ip_family = addr.family
default_config.update({'my_ip': cluster_host_ip})
if ip_family == 4:
vnc_config.update({'vncserver_listen': '0.0.0.0'})
elif ip_family == 6:
vnc_config.update({'vncserver_listen': '::0'})
libvirt_config.update({'live_migration_inbound_addr': cluster_host_ip})
vnc_config.update({'vncserver_proxyclient_address': cluster_host_ip})
def _get_ssh_subnet(self):
cluster_host_network = self.dbapi.network_get_by_type(
constants.NETWORK_TYPE_CLUSTER_HOST)
address_pool = self.dbapi.address_pool_get(cluster_host_network.pool_uuid)
return '%s/%s' % (str(address_pool.network), str(address_pool.prefix))
def _update_reserved_memory(self, host, default_config):
host_memory = self.dbapi.imemory_get_by_ihost(host.id)
reserved_pages = []
reserved_host_memory = 0
for cell in host_memory:
reserved_4K_pages = 'node:%d,size:4,count:%d' % (
cell.numa_node,
cell.platform_reserved_mib * constants.NUM_4K_PER_MiB)
reserved_pages.append(reserved_4K_pages)
# vswitch pages will be either 2M or 1G
reserved_vswitch_pages = 'node:%d,size:%d,count:%d' % (cell.numa_node,
cell.vswitch_hugepages_size_mib * constants.Ki,
cell.vswitch_hugepages_nr)
reserved_pages.append(reserved_vswitch_pages)
reserved_host_memory += cell.platform_reserved_mib
reserved_host_memory += cell.vswitch_hugepages_size_mib * cell.vswitch_hugepages_nr
multistring = self._oslo_multistring_override(
name='reserved_huge_pages', values=reserved_pages)
if multistring is not None:
default_config.update(multistring)
default_config.update({'reserved_host_memory_mb': reserved_host_memory})
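A standalone sketch of the `reserved_huge_pages` string format built by `_update_reserved_memory` above. The `NumaCell` tuple and the `NUM_4K_PER_MIB`/`KI` values are assumptions standing in for the sysinv memory records and constants:

```python
# Sketch of the per-NUMA-cell reserved page accounting. Hugepage sizes are
# recorded in MiB but emitted in KiB for the oslo reserved_huge_pages option.
from collections import namedtuple

NUM_4K_PER_MIB = 256   # 1 MiB / 4 KiB (assumed value of NUM_4K_PER_MiB)
KI = 1024              # MiB -> KiB multiplier (assumed value of Ki)

NumaCell = namedtuple(
    'NumaCell',
    ['numa_node', 'platform_reserved_mib',
     'vswitch_hugepages_size_mib', 'vswitch_hugepages_nr'])

def reserved_pages(cells):
    pages = []
    reserved_mb = 0
    for cell in cells:
        # 4K pages reserved for the platform on this NUMA node
        pages.append('node:%d,size:4,count:%d' % (
            cell.numa_node, cell.platform_reserved_mib * NUM_4K_PER_MIB))
        # vswitch hugepages (2M or 1G), size expressed in KiB
        pages.append('node:%d,size:%d,count:%d' % (
            cell.numa_node,
            cell.vswitch_hugepages_size_mib * KI,
            cell.vswitch_hugepages_nr))
        reserved_mb += cell.platform_reserved_mib
        reserved_mb += cell.vswitch_hugepages_size_mib * cell.vswitch_hugepages_nr
    return pages, reserved_mb

pages, reserved = reserved_pages(
    [NumaCell(numa_node=0, platform_reserved_mib=6000,
              vswitch_hugepages_size_mib=1024, vswitch_hugepages_nr=1)])
# pages -> ['node:0,size:4,count:1536000', 'node:0,size:1048576,count:1']
```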
def _get_interface_numa_nodes(self, context):
# Process all ethernet interfaces with a physical port and record each
# port's numa_node in the interface_numa_nodes dict
interface_numa_nodes = {}
# Update the numa_node of this interface and all of its used_by interfaces
def update_iface_numa_node(iface, numa_node):
if iface['ifname'] in interface_numa_nodes:
interface_numa_nodes[iface['ifname']].add(numa_node)
else:
interface_numa_nodes[iface['ifname']] = set([numa_node])
upper_ifnames = iface['used_by'] or []
for upper_ifname in upper_ifnames:
upper_iface = context['interfaces'][upper_ifname]
update_iface_numa_node(upper_iface, numa_node)
for iface in context['interfaces'].values():
if iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET:
port = context['ports'][iface['id']]
if port and port.numa_node >= 0:
update_iface_numa_node(iface, port.numa_node)
return interface_numa_nodes
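A minimal sketch of the `used_by` propagation in `_get_interface_numa_nodes`: an ethernet port's NUMA node is attributed to the interface itself and, recursively, to every upper interface (bond/VLAN) stacked on it. The interface dicts below are assumptions modeled loosely on the `iface_context` shape, not the real sysinv records:

```python
# Propagate each ethernet port's numa_node up through the used_by chain.
def interface_numa_nodes(interfaces, port_numa):
    result = {}

    def update(ifname, numa_node):
        # Record the node for this interface, then recurse into uppers
        result.setdefault(ifname, set()).add(numa_node)
        for upper in interfaces[ifname].get('used_by') or []:
            update(upper, numa_node)

    for ifname, iface in interfaces.items():
        if iface['iftype'] == 'ethernet' and port_numa.get(ifname, -1) >= 0:
            update(ifname, port_numa[ifname])
    return result

ifaces = {
    'eth0': {'iftype': 'ethernet', 'used_by': ['bond0']},
    'eth1': {'iftype': 'ethernet', 'used_by': ['bond0']},
    'bond0': {'iftype': 'ae', 'used_by': []},
}
# bond0 inherits the NUMA nodes of both member ports
nodes = interface_numa_nodes(ifaces, {'eth0': 0, 'eth1': 1})
```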
def _update_host_neutron_physnet(self, host, neutron_config, per_physnet_numa_config):
'''
Generate physnets configuration option and dynamically-generate
configuration groups to enable nova feature numa-aware-vswitches.
'''
# obtain interface information specific to this host
iface_context = {
'ports': interface._get_port_interface_id_index(
self.dbapi, host),
'interfaces': interface._get_interface_name_index(
self.dbapi, host),
'interfaces_datanets': interface._get_interface_name_datanets(
self.dbapi, host),
}
# find the numa_nodes of the ports that each physnet (datanetwork) is bound to
physnet_numa_nodes = {}
tunneled_net_numa_nodes = set()
interface_numa_nodes = self._get_interface_numa_nodes(iface_context)
for iface in iface_context['interfaces'].values():
if iface['ifname'] not in interface_numa_nodes:
continue
# Only the physnets with a valid numa_node can be inserted into
# physnet_numa_nodes or tunneled_net_numa_nodes
if_numa_nodes = interface_numa_nodes[iface['ifname']]
for datanet in interface.get_interface_datanets(iface_context, iface):
if datanet['network_type'] in [constants.DATANETWORK_TYPE_FLAT,
constants.DATANETWORK_TYPE_VLAN]:
dname = str(datanet['name'])
if dname in physnet_numa_nodes:
physnet_numa_nodes[dname] = if_numa_nodes | physnet_numa_nodes[dname]
else:
physnet_numa_nodes[dname] = if_numa_nodes
elif datanet['network_type'] in [constants.DATANETWORK_TYPE_VXLAN]:
tunneled_net_numa_nodes = if_numa_nodes | tunneled_net_numa_nodes
if physnet_numa_nodes:
physnet_names = ','.join(physnet_numa_nodes.keys())
neutron_config.update({'physnets': physnet_names})
# For L2-type networks, the configuration group name must be 'neutron_physnet_{datanet.name}'
# For L3-type networks, the configuration group name must be 'neutron_tunneled'
for dname in physnet_numa_nodes.keys():
group_name = 'neutron_physnet_' + dname
numa_nodes = ','.join('%s' % n for n in physnet_numa_nodes[dname])
per_physnet_numa_config.update({group_name: {'numa_nodes': numa_nodes}})
if tunneled_net_numa_nodes:
numa_nodes = ','.join('%s' % n for n in tunneled_net_numa_nodes)
per_physnet_numa_config.update({'neutron_tunneled': {'numa_nodes': numa_nodes}})
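A sketch of the overrides shape produced for nova's numa-aware-vswitches feature: each L2 datanetwork gets a `neutron_physnet_<name>` group, and all VXLAN networks collapse into one `neutron_tunneled` group. The inputs are assumed values rather than data queried from sysinv; the sketch sorts node sets for deterministic output, which the original does not:

```python
# Build the 'physnets' option plus per-physnet NUMA configuration groups.
def physnet_overrides(physnet_numa_nodes, tunneled_numa_nodes):
    neutron_config = {}
    per_physnet = {}
    if physnet_numa_nodes:
        neutron_config['physnets'] = ','.join(physnet_numa_nodes.keys())
        for dname, nodes in physnet_numa_nodes.items():
            per_physnet['neutron_physnet_' + dname] = {
                'numa_nodes': ','.join('%s' % n for n in sorted(nodes))}
    if tunneled_numa_nodes:
        per_physnet['neutron_tunneled'] = {
            'numa_nodes': ','.join('%s' % n for n in sorted(tunneled_numa_nodes))}
    return neutron_config, per_physnet

neutron_config, groups = physnet_overrides({'physnet0': {0, 1}}, {1})
```

These groups are merged into the per-host nova conf alongside `DEFAULT`, `vnc`, etc., which is why `_get_per_host_overrides` updates `host_nova['conf']['nova']` with `per_physnet_numa_config` directly.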
def _get_per_host_overrides(self):
host_list = []
hosts = self.dbapi.ihost_get_list()
for host in hosts:
host_labels = self.dbapi.label_get_by_host(host.id)
if (host.invprovision in [constants.PROVISIONED,
constants.PROVISIONING] or
host.ihost_action in [constants.UNLOCK_ACTION,
constants.FORCE_UNLOCK_ACTION]):
if (constants.WORKER in utils.get_personalities(host) and
utils.has_openstack_compute(host_labels)):
hostname = str(host.hostname)
default_config = {}
vnc_config = {}
libvirt_config = {}
pci_config = {}
neutron_config = {}
per_physnet_numa_config = {}
self._update_host_cpu_maps(host, default_config)
self._update_host_storage(host, default_config, libvirt_config)
self._update_host_addresses(host, default_config, vnc_config,
libvirt_config)
self._update_host_pci_whitelist(host, pci_config)
self._update_reserved_memory(host, default_config)
self._update_host_neutron_physnet(host, neutron_config, per_physnet_numa_config)
host_nova = {
'name': hostname,
'conf': {
'nova': {
'DEFAULT': default_config,
'vnc': vnc_config,
'libvirt': libvirt_config,
'pci': pci_config if pci_config else None,
'neutron': neutron_config
}
}
}
host_nova['conf']['nova'].update(per_physnet_numa_config)
host_list.append(host_nova)
return host_list
def get_region_name(self):
return self._get_service_region_name(self.SERVICE_NAME)
def _get_rbd_ephemeral_storage(self):
ephemeral_storage_conf = {}
ephemeral_pools = []
# Get the values for replication and min replication from the storage
# backend attributes.
replication, min_replication = \
StorageBackendConfig.get_ceph_pool_replication(self.dbapi)
# For now, the ephemeral pool will only be on the primary Ceph tier
rule_name = "{0}{1}{2}".format(
constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH],
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
# Form the dictionary with the info for the ephemeral pool.
# If needed, multiple pools can be specified.
ephemeral_pool = {
'rbd_pool_name': constants.CEPH_POOL_EPHEMERAL_NAME,
'rbd_user': RBD_POOL_USER,
'rbd_crush_rule': rule_name,
'rbd_replication': replication,
'rbd_chunk_size': constants.CEPH_POOL_EPHEMERAL_PG_NUM
}
ephemeral_pools.append(ephemeral_pool)
ephemeral_storage_conf = {
'type': 'rbd',
'rbd_pools': ephemeral_pools
}
return ephemeral_storage_conf
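The crush rule name above is derived by string concatenation plus a `'-'` to `'_'` normalization. A small sketch, where the tier name and suffix are assumed example values (the real ones come from `SB_TIER_DEFAULT_NAMES` and `CEPH_CRUSH_TIER_SUFFIX`):

```python
# Derive a crush ruleset identifier from a tier name, mirroring the
# "{tier}{suffix}-ruleset".replace('-', '_') pattern used above.
def crush_rule_name(tier_name, suffix='-tier'):
    return "{0}{1}{2}".format(tier_name, suffix, "-ruleset").replace('-', '_')

name = crush_rule_name('storage')  # -> 'storage_tier_ruleset'
```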
def _get_network_node_port_overrides(self):
# If openstack endpoint FQDN is configured, disable node_port 30680
# which will enable the Ingress for the novncproxy service
endpoint_fqdn = self._get_service_parameter(
constants.SERVICE_TYPE_OPENSTACK,
constants.SERVICE_PARAM_SECTION_OPENSTACK_HELM,
constants.SERVICE_PARAM_NAME_ENDPOINT_DOMAIN)
if endpoint_fqdn:
return False
else:
return True

View File

@ -0,0 +1,71 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class NovaApiProxyHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the nova chart"""
CHART = app_constants.HELM_CHART_NOVA_API_PROXY
SERVICE_NAME = app_constants.HELM_CHART_NOVA_API_PROXY
AUTH_USERS = ['nova']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'user': {
'nova_api_proxy': {
'uid': 0
}
},
'replicas': {
'proxy': self._num_controllers()
}
},
'conf': {
'nova_api_proxy': {
'DEFAULT': {
'nfvi_compute_listen': self._get_management_address()
},
}
},
'endpoints': self._get_endpoints_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
nova_service_name = self._operator.chart_operators[
app_constants.HELM_CHART_NOVA].SERVICE_NAME
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
nova_service_name, self.AUTH_USERS),
},
'compute': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
app_constants.HELM_CHART_NOVA),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
}

View File

@ -0,0 +1,546 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from eventlet.green import subprocess
import keyring
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from k8sapp_openstack.common import constants as app_constants
from oslo_log import log
from oslo_serialization import jsonutils
from sqlalchemy.orm.exc import NoResultFound
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common.storage_backend_conf import K8RbdProvisioner
from sysinv.helm import base
from sysinv.helm import common
LOG = log.getLogger(__name__)
class OpenstackBaseHelm(base.BaseHelm):
"""Class to encapsulate Openstack service operations for helm"""
SUPPORTED_NAMESPACES = \
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
SUPPORTED_APP_NAMESPACES = {
constants.HELM_APP_OPENSTACK:
base.BaseHelm.SUPPORTED_NAMESPACES + [common.HELM_NS_OPENSTACK]
}
SYSTEM_CONTROLLER_SERVICES = [
app_constants.HELM_CHART_KEYSTONE_API_PROXY,
]
@property
def CHART(self):
# subclasses must define the property: CHART='name of chart'
# if an author of a new chart forgets this, NotImplementedError is raised
raise NotImplementedError
def _get_service_config(self, service):
configs = self.context.setdefault('_service_configs', {})
if service not in configs:
configs[service] = self._get_service(service)
return configs[service]
def _get_service_parameters(self, service=None):
service_parameters = []
if self.dbapi is None:
return service_parameters
try:
service_parameters = self.dbapi.service_parameter_get_all(
service=service)
# the service parameter has not been added
except NoResultFound:
pass
return service_parameters
def _get_service_parameter_configs(self, service):
configs = self.context.setdefault('_service_params', {})
if service not in configs:
params = self._get_service_parameters(service)
if params:
configs[service] = params
else:
return None
return configs[service]
@staticmethod
def _service_parameter_lookup_one(service_parameters, section, name,
default):
for param in service_parameters:
if param['section'] == section and param['name'] == name:
return param['value']
return default
def _get_admin_user_name(self):
keystone_operator = self._operator.chart_operators[
app_constants.HELM_CHART_KEYSTONE]
return keystone_operator.get_admin_user_name()
def _get_identity_password(self, service, user):
passwords = self.context.setdefault('_service_passwords', {})
if service not in passwords:
passwords[service] = {}
if user not in passwords[service]:
passwords[service][user] = self._get_keyring_password(service, user)
return passwords[service][user]
def _get_database_username(self, service):
return 'admin-%s' % service
def _get_keyring_password(self, service, user, pw_format=None):
password = keyring.get_password(service, user)
if not password:
if pw_format == common.PASSWORD_FORMAT_CEPH:
try:
cmd = ['ceph-authtool', '--gen-print-key']
password = subprocess.check_output(cmd).strip()
except subprocess.CalledProcessError:
raise exception.SysinvException(
'Failed to generate ceph key')
else:
password = self._generate_random_password()
keyring.set_password(service, user, password)
# get_password() returns in unicode format, which leads to YAML
# that Armada doesn't like. Converting to UTF-8 is safe because
# we generated the password originally.
return password.encode('utf8', 'strict')
def _get_service_region_name(self, service):
if self._region_config():
service_config = self._get_service_config(service)
if (service_config is not None and
service_config.region_name is not None):
return service_config.region_name.encode('utf8', 'strict')
if (self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER and
service in self.SYSTEM_CONTROLLER_SERVICES):
return constants.SYSTEM_CONTROLLER_REGION
return self._region_name()
def _get_configured_service_name(self, service, version=None):
if self._region_config():
service_config = self._get_service_config(service)
if service_config is not None:
name = 'service_name'
if version is not None:
name = version + '_' + name
service_name = service_config.capabilities.get(name)
if service_name is not None:
return service_name
elif version is not None:
return service + version
else:
return service
def _get_configured_service_type(self, service, version=None):
if self._region_config():
service_config = self._get_service_config(service)
if service_config is not None:
stype = 'service_type'
if version is not None:
stype = version + '_' + stype
return service_config.capabilities.get(stype)
return None
def _get_or_generate_password(self, chart, namespace, field):
# Get password from the db for the specified chart overrides
if not self.dbapi:
return None
try:
app = self.dbapi.kube_app_get(constants.HELM_APP_OPENSTACK)
override = self.dbapi.helm_override_get(app_id=app.id,
name=chart,
namespace=namespace)
except exception.HelmOverrideNotFound:
# Override for this chart not found, so create one
try:
values = {
'name': chart,
'namespace': namespace,
'app_id': app.id,
}
override = self.dbapi.helm_override_create(values=values)
except Exception as e:
LOG.exception(e)
return None
password = override.system_overrides.get(field, None)
if password:
return password.encode('utf8', 'strict')
# The password is not present, dump from inactive app if available,
# otherwise generate one and store it to the override
try:
inactive_apps = self.dbapi.kube_app_get_inactive(
constants.HELM_APP_OPENSTACK)
app_override = self.dbapi.helm_override_get(app_id=inactive_apps[0].id,
name=chart,
namespace=namespace)
password = app_override.system_overrides.get(field, None)
except (IndexError, exception.HelmOverrideNotFound):
# No inactive app or no overrides for the inactive app
pass
if not password:
password = self._generate_random_password()
values = {'system_overrides': override.system_overrides}
values['system_overrides'].update({
field: password,
})
try:
self.dbapi.helm_override_update(
app_id=app.id, name=chart, namespace=namespace, values=values)
except Exception as e:
LOG.exception(e)
return password.encode('utf8', 'strict')
def _get_endpoints_identity_overrides(self, service_name, users):
# Returns overrides for admin and individual users
overrides = {}
overrides.update(self._get_common_users_overrides(service_name))
for user in users:
overrides.update({
user: {
'region_name': self._region_name(),
'password': self._get_or_generate_password(
service_name, common.HELM_NS_OPENSTACK, user)
}
})
return overrides
def _get_file_content(self, filename):
file_contents = ''
with open(filename) as f:
file_contents = f.read()
return file_contents
def _get_endpoint_public_tls(self):
overrides = {}
if (os.path.exists(constants.OPENSTACK_CERT_FILE) and
os.path.exists(constants.OPENSTACK_CERT_KEY_FILE)):
overrides.update({
'crt': self._get_file_content(constants.OPENSTACK_CERT_FILE),
'key': self._get_file_content(
constants.OPENSTACK_CERT_KEY_FILE),
})
if os.path.exists(constants.OPENSTACK_CERT_CA_FILE):
overrides.update({
'ca': self._get_file_content(constants.OPENSTACK_CERT_CA_FILE),
})
return overrides
def _get_endpoints_host_fqdn_overrides(self, service_name):
overrides = {'public': {}}
endpoint_domain = self._get_service_parameter(
constants.SERVICE_TYPE_OPENSTACK,
constants.SERVICE_PARAM_SECTION_OPENSTACK_HELM,
constants.SERVICE_PARAM_NAME_ENDPOINT_DOMAIN)
if endpoint_domain is not None:
overrides['public'].update({
'host': service_name + '.' + str(endpoint_domain.value).lower()
})
if (self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD):
admin_endpoint_domain = 'openstack.svc.cluster.%s' \
% self._region_name()
overrides['admin'] = {
'host': service_name + '-admin' + '.' + admin_endpoint_domain
}
# Get TLS certificate files if installed
cert = None
try:
cert = self.dbapi.certificate_get_by_certtype(
constants.CERT_MODE_OPENSTACK)
except exception.CertificateTypeNotFound:
pass
if cert is not None:
tls_overrides = self._get_endpoint_public_tls()
if tls_overrides:
overrides['public'].update({
'tls': tls_overrides
})
return overrides
def _get_endpoints_hosts_admin_overrides(self, service_name):
overrides = {}
if (self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD):
overrides['admin'] = service_name + '-' + 'admin'
return overrides
def _get_network_api_ingress_overrides(self):
overrides = {'admin': False}
if (self._distributed_cloud_role() ==
constants.DISTRIBUTED_CLOUD_ROLE_SUBCLOUD):
overrides['admin'] = True
return overrides
def _get_endpoints_scheme_public_overrides(self):
overrides = {}
if self._https_enabled():
overrides = {
'public': 'https'
}
return overrides
def _get_endpoints_port_api_public_overrides(self):
overrides = {}
if self._https_enabled():
overrides = {
'api': {
'public': 443
}
}
return overrides
def _get_endpoints_oslo_db_overrides(self, service_name, users):
overrides = {
'admin': {
'password': self._get_common_password('admin_db'),
}
}
for user in users:
overrides.update({
user: {
'password': self._get_or_generate_password(
service_name, common.HELM_NS_OPENSTACK,
user + '_db'),
}
})
return overrides
def _get_endpoints_oslo_messaging_overrides(self, service_name, users):
overrides = {
'admin': {
'username': 'rabbitmq-admin',
'password': self._get_common_password('rabbitmq-admin')
}
}
for user in users:
overrides.update({
user: {
'username': user + '-rabbitmq-user',
'password': self._get_or_generate_password(
service_name, common.HELM_NS_OPENSTACK,
user + '_rabbit')
}
})
return overrides
def _get_common_password(self, name):
# Admin passwords are stored on keystone's helm override entry
return self._get_or_generate_password(
'keystone', common.HELM_NS_OPENSTACK, name)
def _get_common_users_overrides(self, service):
overrides = {}
for user in common.USERS:
if user == common.USER_ADMIN:
o_user = self._get_admin_user_name()
o_service = common.SERVICE_ADMIN
else:
o_user = user
o_service = service
overrides.update({
user: {
'region_name': self._region_name(),
'username': o_user,
'password': self._get_identity_password(o_service, o_user)
}
})
return overrides
def _get_ceph_password(self, service, user):
passwords = self.context.setdefault('_ceph_passwords', {})
if service not in passwords:
passwords[service] = {}
if user not in passwords[service]:
passwords[service][user] = self._get_keyring_password(
service, user, pw_format=common.PASSWORD_FORMAT_CEPH)
return passwords[service][user]
def _get_or_generate_ssh_keys(self, chart, namespace):
try:
app = self.dbapi.kube_app_get(constants.HELM_APP_OPENSTACK)
override = self.dbapi.helm_override_get(app_id=app.id,
name=chart,
namespace=namespace)
except exception.HelmOverrideNotFound:
# Override for this chart not found, so create one
values = {
'name': chart,
'namespace': namespace,
'app_id': app.id
}
override = self.dbapi.helm_override_create(values=values)
privatekey = override.system_overrides.get('privatekey', None)
publickey = override.system_overrides.get('publickey', None)
if privatekey and publickey:
return str(privatekey), str(publickey)
# ssh keys are not set, dump from inactive app if available,
# otherwise generate them and store in overrides
newprivatekey = None
newpublickey = None
try:
inactive_apps = self.dbapi.kube_app_get_inactive(
constants.HELM_APP_OPENSTACK)
app_override = self.dbapi.helm_override_get(app_id=inactive_apps[0].id,
name=chart,
namespace=namespace)
# Guard against a missing key: str(None) would yield the truthy
# string 'None' and skip the regeneration below.
newprivatekey = app_override.system_overrides.get('privatekey', None)
newpublickey = app_override.system_overrides.get('publickey', None)
except (IndexError, exception.HelmOverrideNotFound):
# No inactive app or no overrides for the inactive app
pass
if not newprivatekey or not newpublickey:
private_key = rsa.generate_private_key(public_exponent=65537,
key_size=2048,
backend=default_backend())
public_key = private_key.public_key()
newprivatekey = str(private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption()).decode('utf-8'))
newpublickey = str(public_key.public_bytes(
serialization.Encoding.OpenSSH,
serialization.PublicFormat.OpenSSH).decode('utf-8'))
values = {'system_overrides': override.system_overrides}
values['system_overrides'].update({'privatekey': newprivatekey,
'publickey': newpublickey})
self.dbapi.helm_override_update(
app_id=app.id, name=chart, namespace=namespace, values=values)
return newprivatekey, newpublickey
def _oslo_multistring_override(self, name=None, values=None):
"""
Generate helm multistring dictionary override for specified option
name with multiple values.
This generates oslo_config.MultiStringOpt() compatible config
with multiple input values. This routine JSON encodes each value for
complex types (eg, dict, list, set).
Return a multistring type formatted dictionary override.
"""
override = None
if name is None or not values:
return override
mvalues = []
for value in values:
if isinstance(value, (dict, list, set)):
mvalues.append(jsonutils.dumps(value))
else:
mvalues.append(value)
override = {
name: {'type': 'multistring',
'values': mvalues,
}
}
return override
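A standalone sketch of `_oslo_multistring_override`, using the stdlib `json` module in place of `oslo_serialization.jsonutils` (jsonutils also handles sets, which stdlib `json` does not, so this sketch covers only dict and list values). Complex values are JSON encoded so each entry renders on its own line as an oslo multistring option:

```python
import json

# Build a {name: {'type': 'multistring', 'values': [...]}} override dict,
# JSON-encoding complex values; sort_keys keeps the output deterministic.
def oslo_multistring_override(name=None, values=None):
    if name is None or not values:
        return None
    mvalues = [json.dumps(v, sort_keys=True) if isinstance(v, (dict, list))
               else v for v in values]
    return {name: {'type': 'multistring', 'values': mvalues}}

override = oslo_multistring_override(
    name='passthrough_whitelist',
    values=[{'address': '0000:81:00.0', 'physical_network': 'physnet0'}])
```

This is the shape consumed by `_update_host_pci_whitelist` and `_update_reserved_memory` when they `update()` their config dicts.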
def _get_public_protocol(self):
return 'https' if self._https_enabled() else 'http'
def _get_service_default_dns_name(self, service):
return "{}.{}.svc.{}".format(service, common.HELM_NS_OPENSTACK,
constants.DEFAULT_DNS_SERVICE_DOMAIN)
def _get_mount_uefi_overrides(self):
# This path depends on OVMF packages and for starlingx
# we don't care about aarch64.
# This path will be used by nova-compute and libvirt pods.
uefi_loader_path = "/usr/share/OVMF"
uefi_config = {
'volumes': [
{
'name': 'ovmf',
'hostPath': {
'path': uefi_loader_path
}
}
],
'volumeMounts': [
{
'name': 'ovmf',
'mountPath': uefi_loader_path
},
]
}
return uefi_config
def _get_ceph_client_overrides(self):
# A secret is required by the chart for ceph client access. Use the
# secret for the kube-rbd pool associated with the primary ceph tier
return {
'user_secret_name':
K8RbdProvisioner.get_user_secret_name({
'name': constants.SB_DEFAULT_NAMES[constants.SB_TYPE_CEPH]})
}
def execute_manifest_updates(self, operator):
"""
Update the elements of the armada manifest.
This allows a helm chart plugin to use the ArmadaManifestOperator to
make dynamic structural changes to the application manifest based on the
current conditions in the platform.
Changes include updates to manifest documents for the following schemas:
armada/Manifest/v1, armada/ChartGroup/v1, armada/Chart/v1.
:param operator: an instance of the ArmadaManifestOperator
"""
if not self._is_enabled(operator.APP, self.CHART,
common.HELM_NS_OPENSTACK):
operator.chart_group_chart_delete(
operator.CHART_GROUPS_LUT[self.CHART],
operator.CHARTS_LUT[self.CHART])
def _is_enabled(self, app_name, chart_name, namespace):
"""
Check if the chart is enabled at a system level
:param app_name: Application name
:param chart_name: Chart supplied with the application
:param namespace: Namespace where the chart will be executed
Returns True by default if an exception occurs, as most charts are
enabled.
"""
return super(OpenstackBaseHelm, self)._is_enabled(
app_name, chart_name, namespace)

View File

@ -0,0 +1,52 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import utils
from sysinv.helm import common
class OpenvswitchHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the openvswitch chart"""
CHART = app_constants.HELM_CHART_OPENVSWITCH
def _is_enabled(self, app_name, chart_name, namespace):
# First, see if this chart is enabled by the user, then adjust based on
# system conditions
enabled = super(OpenvswitchHelm, self)._is_enabled(
app_name, chart_name, namespace)
if enabled and (utils.get_vswitch_type(self.dbapi) !=
constants.VSWITCH_TYPE_NONE):
enabled = False
return enabled
def execute_manifest_updates(self, operator):
# On application load, this chart is not included in the compute-kit
# chart group. Insert as needed.
if self._is_enabled(operator.APP,
self.CHART, common.HELM_NS_OPENSTACK):
operator.chart_group_chart_insert(
operator.CHART_GROUPS_LUT[self.CHART],
operator.CHARTS_LUT[self.CHART],
before_chart=operator.CHARTS_LUT[app_constants.HELM_CHART_NOVA])
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides

View File

@ -0,0 +1,70 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class PankoHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the panko chart"""
CHART = app_constants.HELM_CHART_PANKO
SERVICE_NAME = app_constants.HELM_CHART_PANKO
AUTH_USERS = ['panko']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': self._get_pod_overrides(),
'endpoints': self._get_endpoints_overrides()
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_pod_overrides(self):
overrides = {
'replicas': {
'api': self._num_controllers()
}
}
return overrides
def _get_endpoints_overrides(self):
return {
'identity': {
'auth':
self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'event': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, self.AUTH_USERS)
},
'oslo_cache': {
'auth': {
'memcache_secret_key':
self._get_common_password('auth_memcache_key')
}
},
}

View File

@ -0,0 +1,64 @@
#
# Copyright (c) 2019 StarlingX.
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class PlacementHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the placement chart"""
CHART = app_constants.HELM_CHART_PLACEMENT
SERVICE_NAME = app_constants.HELM_CHART_PLACEMENT
AUTH_USERS = ['placement']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'api': self._num_controllers()
}
},
'endpoints': self._get_endpoints_overrides()
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
overrides = {
'identity': {
'name': 'keystone',
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'oslo_db': {
'auth': self._get_endpoints_oslo_db_overrides(
self.SERVICE_NAME, [self.SERVICE_NAME])
},
'placement': {
'host_fqdn_override':
self._get_endpoints_host_fqdn_overrides(
self.SERVICE_NAME),
'port': self._get_endpoints_port_api_public_overrides(),
'scheme': self._get_endpoints_scheme_public_overrides(),
},
}
return overrides

View File

@ -0,0 +1,80 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class RabbitmqHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the rabbitmq chart"""
CHART = app_constants.HELM_CHART_RABBITMQ
def get_overrides(self, namespace=None):
limit_enabled, limit_cpus, limit_mem_mib = self._get_platform_res_limit()
# Refer to: https://github.com/rabbitmq/rabbitmq-common/commit/4f9ef33cf9ba52197ff210ffcdf6629c1b7a6e9e
io_thread_pool_size = limit_cpus * 16
if io_thread_pool_size < 64:
io_thread_pool_size = 64
elif io_thread_pool_size > 1024:
io_thread_pool_size = 1024
overrides = {
common.HELM_NS_OPENSTACK: {
'pod': {
'replicas': {
'server': self._num_controllers()
},
'resources': {
'enabled': limit_enabled,
'prometheus_rabbitmq_exporter': {
'limits': {
'cpu': "%d000m" % (limit_cpus),
'memory': "%dMi" % (limit_mem_mib)
}
},
'server': {
'limits': {
'cpu': "%d000m" % (limit_cpus),
'memory': "%dMi" % (limit_mem_mib)
}
}
}
},
'io_thread_pool': {
'enabled': limit_enabled,
'size': "%d" % (io_thread_pool_size)
},
'endpoints': self._get_endpoints_overrides(),
'manifests': {
'config_ipv6': self._is_ipv6_cluster_service()
}
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_endpoints_overrides(self):
credentials = self._get_endpoints_oslo_messaging_overrides(
self.CHART, [])
overrides = {
'oslo_messaging': {
'auth': {
'user': credentials['admin']
}
},
}
return overrides
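The io_thread_pool sizing in `RabbitmqHelm.get_overrides()` above amounts to a simple clamp: 16 threads per limit CPU, bounded to the range [64, 1024]. A standalone sketch of that rule:

```python
# Equivalent formulation of the io_thread_pool sizing rule above,
# using min/max instead of the if/elif chain.
def io_thread_pool_size(limit_cpus):
    return min(max(limit_cpus * 16, 64), 1024)
```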

View File

@ -0,0 +1,55 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm import openstack
from sysinv.common import exception
from sysinv.helm import common
class SwiftHelm(openstack.OpenstackBaseHelm):
"""Class to encapsulate helm operations for the swift chart"""
CHART = app_constants.HELM_CHART_SWIFT
SERVICE_NAME = 'swift'
SERVICE_TYPE = 'object-store'
AUTH_USERS = ['swift']
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_OPENSTACK: {
'endpoints': self._get_endpoints_overrides(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_object_store_overrides(self):
return {
'hosts': {
'default': 'null',
'admin': self._get_management_address(),
'internal': self._get_management_address(),
'public': self._get_oam_address()
},
}
def _get_endpoints_overrides(self):
return {
'identity': {
'auth': self._get_endpoints_identity_overrides(
self.SERVICE_NAME, self.AUTH_USERS),
},
'object_store': self._get_object_store_overrides(),
}

View File

@ -0,0 +1,5 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

View File

@ -0,0 +1,38 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.tests import test_plugins
import tsconfig.tsconfig as tsc
from sysinv.helm import common
from sysinv.tests.db import base as dbbase
from sysinv.tests.db import utils as dbutils
from sysinv.tests.helm import base
class CinderConversionTestCase(test_plugins.K8SAppOpenstackAppMixin,
base.HelmTestCaseMixin):
def setUp(self):
super(CinderConversionTestCase, self).setUp()
self.app = dbutils.create_test_app(name=self.app_name)
class CinderGetOverrideTest(CinderConversionTestCase,
dbbase.ControllerHostTestCase):
def test_cinder_overrides(self):
dbutils.create_test_host_fs(name='image-conversion',
forihostid=self.host.id)
overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_CINDER,
cnamespace=common.HELM_NS_OPENSTACK)
self.assertOverridesParameters(overrides, {
'conf': {
'cinder': {
'DEFAULT': {
'image_conversion_dir': tsc.IMAGE_CONVERSION_PATH}}}
})

View File

@ -0,0 +1,35 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.helm import nova
from sysinv.helm import helm
from sysinv.common import constants
from sysinv.tests.db import base as dbbase
class NovaGetOverrideTest(dbbase.ControllerHostTestCase):
def setUp(self):
super(NovaGetOverrideTest, self).setUp()
self.operator = helm.HelmOperator(self.dbapi)
self.nova = nova.NovaHelm(self.operator)
self.worker = self._create_test_host(
personality=constants.WORKER,
administrative=constants.ADMIN_LOCKED)
self.ifaces = self._create_test_host_platform_interface(self.worker)
self.dbapi.address_create({
'name': 'test',
'family': self.oam_subnet.version,
'prefix': self.oam_subnet.prefixlen,
'address': str(self.oam_subnet[24]),
'interface_id': self.ifaces[0].id,
'enable_dad': self.oam_subnet.version == 6
})
def test_update_host_addresses(self):
self.nova._update_host_addresses(self.worker, {}, {}, {})

View File

@ -0,0 +1,49 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.tests import test_plugins
from sysinv.helm import common
from sysinv.tests.db import base as dbbase
from sysinv.tests.db import utils as dbutils
from sysinv.tests.helm import base
class NovaApiProxyTestCase(test_plugins.K8SAppOpenstackAppMixin,
base.HelmTestCaseMixin):
def setUp(self):
super(NovaApiProxyTestCase, self).setUp()
self.app = dbutils.create_test_app(name=self.app_name)
class NovaApiProxyIPv4ControllerHostTestCase(NovaApiProxyTestCase,
dbbase.ControllerHostTestCase):
def test_replicas(self):
overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_NOVA_API_PROXY,
cnamespace=common.HELM_NS_OPENSTACK)
self.assertOverridesParameters(overrides, {
# Only one replica for a single controller
'pod': {'replicas': {'proxy': 1}}
})
class NovaApiProxyIPv4AIODuplexSystemTestCase(NovaApiProxyTestCase,
dbbase.AIODuplexSystemTestCase):
def test_replicas(self):
overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_NOVA_API_PROXY,
cnamespace=common.HELM_NS_OPENSTACK)
self.assertOverridesParameters(overrides, {
# Expect two replicas because there are two controllers
'pod': {'replicas': {'proxy': 2}}
})

View File

@ -0,0 +1,54 @@
#
# SPDX-License-Identifier: Apache-2.0
#
from sysinv.common import constants
from sysinv.helm import common
from sysinv.tests.db import base as dbbase
from sysinv.tests.db import utils as dbutils
from sysinv.tests.helm.test_helm import HelmOperatorTestSuiteMixin
class K8SAppOpenstackAppMixin(object):
app_name = constants.HELM_APP_OPENSTACK
path_name = app_name + '.tgz'
def setUp(self):
super(K8SAppOpenstackAppMixin, self).setUp()
# Label hosts with appropriate labels
for host in self.hosts:
if host.personality == constants.CONTROLLER:
dbutils.create_test_label(
host_id=host.id,
label_key=common.LABEL_CONTROLLER,
label_value=common.LABEL_VALUE_ENABLED)
elif host.personality == constants.WORKER:
dbutils.create_test_label(
host_id=host.id,
label_key=common.LABEL_COMPUTE_LABEL,
label_value=common.LABEL_VALUE_ENABLED)
# Test Configuration:
# - Controller
# - IPv6
# - Ceph Storage
# - stx-openstack app
class K8SAppOpenstackControllerTestCase(K8SAppOpenstackAppMixin,
dbbase.BaseIPv6Mixin,
dbbase.BaseCephStorageBackendMixin,
HelmOperatorTestSuiteMixin,
dbbase.ControllerHostTestCase):
pass
# Test Configuration:
# - AIO
# - IPv4
# - Ceph Storage
# - stx-openstack app
class K8SAppOpenstackAIOTestCase(K8SAppOpenstackAppMixin,
dbbase.BaseCephStorageBackendMixin,
HelmOperatorTestSuiteMixin,
dbbase.AIOSimplexHostTestCase):
pass

View File

@ -0,0 +1,287 @@
[MASTER]
# Specify a configuration file.
rcfile=pylint.rc
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Add files or directories to the blacklist. Should be base names, not paths.
ignore=tests
# Pickle collected data for later comparisons.
persistent=yes
# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=
# Use multiple processes to speed up Pylint.
jobs=4
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code
extension-pkg-whitelist=lxml.etree,greenlet
[MESSAGES CONTROL]
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time.
#enable=
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once).
# See "Messages Control" section of
# https://pylint.readthedocs.io/en/latest/user_guide
# We are disabling (C)onvention
# We are disabling (R)efactor
# We are selectively disabling (W)arning
# We are not disabling (F)atal, (E)rror
# The following warnings should be fixed:
# fixme (todo, xxx, fixme)
# W0101: unreachable
# W0105: pointless-string-statement
# W0106: expression-not-assigned
# W0107: unnecessary-pass
# W0108: unnecessary-lambda
# W0110: deprecated-lambda
# W0123: eval-used
# W0150: lost-exception
# W0201: attribute-defined-outside-init
# W0211: bad-staticmethod-argument
# W0212: protected-access
# W0221: arguments-differ
# W0223: abstract-method
# W0231: super-init-not-called
# W0235: useless-super-delegation
# W0311: bad-indentation
# W0402: deprecated-module
# W0403: relative-import
# W0404: reimported
# W0603: global-statement
# W0612: unused-variable
# W0613: unused-argument
# W0621: redefined-outer-name
# W0622: redefined-builtin
# W0631: undefined-loop-variable
# W0632: unbalanced-tuple-unpacking
# W0701: raising-string
# W0703: broad-except
# W1113: keyword-arg-before-vararg
# W1201: logging-not-lazy
# W1401: anomalous-backslash-in-string
# W1505: deprecated-method
# All these errors should be fixed:
# E0213: no-self-argument
# E0401: import-error
# E0604: invalid-all-object
# E0633: unpacking-non-sequence
# E0701: bad-except-order
# E1102: not-callable
# E1120: no-value-for-parameter
# E1121: too-many-function-args
disable=C, R, fixme, W0101, W0105, W0106, W0107, W0108, W0110, W0123, W0150,
W0201, W0211, W0212, W0221, W0223, W0231, W0235, W0311, W0402, W0403,
W0404, W0603, W0612, W0613, W0621, W0622, W0631, W0632, W0701, W0703,
W1113, W1201, W1401, W1505,
E0213, E0401, E0604, E0633, E0701, E1102, E1120, E1121
[REPORTS]
# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html
output-format=text
# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no
# Tells whether to display a full report or only the messages
reports=yes
# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables error, warning, refactor, convention,
# and statement, which respectively contain the number of messages in each
# category and the total number of statements analyzed. This is used by the
# global evaluation report (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
[SIMILARITIES]
# Minimum lines number of a similarity.
min-similarity-lines=4
# Ignore comments when computing similarities.
ignore-comments=yes
# Ignore docstrings when computing similarities.
ignore-docstrings=yes
[FORMAT]
# Maximum number of characters on a single line.
max-line-length=85
# Maximum number of lines in a module
max-module-lines=1000
# String used as indentation unit. This is usually 4 spaces or "\t" (1 tab).
indent-string=' '
[TYPECHECK]
# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes
# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis
ignored-modules=distutils,eventlet.green.subprocess,six,six.moves
# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
# pylint is confused by sqlalchemy Table, as well as sqlalchemy Enum types
# ie: (unprovisioned, identity)
# LookupDict in requests library confuses pylint
ignored-classes=SQLObject, optparse.Values, thread._local, _thread._local,
Table, unprovisioned, identity, LookupDict
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent
[BASIC]
# List of builtins function names that should not be used, separated by a comma
bad-functions=map,filter,apply,input
# Regular expression which should only match correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Regular expression which should only match correct module level names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
# Regular expression which should only match correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$
# Regular expression which should only match correct function names
function-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct method names
method-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct instance attribute names
attr-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct argument names
argument-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct variable names
variable-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct list comprehension /
# generator expression variable names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
# Good variable names which should always be accepted, separated by a comma
good-names=i,j,k,ex,Run,_
# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata
# Regular expression which should only match functions or classes name which do
# not require a docstring
no-docstring-rgx=__.*__
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
[VARIABLES]
# Tells whether we should check for unused import in __init__ files.
init-import=no
# A regular expression matching the beginning of the name of dummy variables
# (i.e. not used).
dummy-variables-rgx=_|dummy
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=
[IMPORTS]
# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,string,TERMIOS,Bastion,rexec
# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=
# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=
# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=
[DESIGN]
# Maximum number of arguments for function / method
max-args=5
# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*
# Maximum number of locals for function / method body
max-locals=15
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of branch for function / method body
max-branchs=12
# Maximum number of statements in function / method body
max-statements=50
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
[CLASSES]
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
[EXCEPTIONS]
# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception
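The evaluation expression in the [REPORTS] section above can be sanity-checked with a quick computation (a worked example, not part of the configuration):

```python
# Worked example of the pylint evaluation expression: errors are
# weighted 5x, all message counts are normalized by statement count.
def pylint_score(error, warning, refactor, convention, statement):
    return 10.0 - ((float(5 * error + warning + refactor + convention)
                    / statement) * 10)

# 2 errors and 10 warnings over 400 statements score 9.5 out of 10.
score = pylint_score(error=2, warning=10, refactor=0, convention=0,
                     statement=400)
```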

View File

@ -0,0 +1,2 @@
pbr>=2.0.0
PyYAML==3.10

View File

@ -0,0 +1,69 @@
[metadata]
name = k8sapp-openstack
summary = StarlingX sysinv extensions for stx-openstack
long_description = file: README.rst
long_description_content_type = text/x-rst
license = Apache 2.0
author = StarlingX
author-email = starlingx-discuss@lists.starlingx.io
home-page = https://www.starlingx.io/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
Programming Language :: Python :: 3.5
[files]
packages =
k8sapp_openstack
[global]
setup-hooks =
pbr.hooks.setup_hook
[entry_points]
systemconfig.helm_applications =
stx-openstack = systemconfig.helm_plugins.stx_openstack
systemconfig.helm_plugins.stx_openstack =
001_ingress = k8sapp_openstack.helm.ingress:IngressHelm
002_mariadb = k8sapp_openstack.helm.mariadb:MariadbHelm
003_garbd = k8sapp_openstack.helm.garbd:GarbdHelm
004_rabbitmq = k8sapp_openstack.helm.rabbitmq:RabbitmqHelm
005_memcached = k8sapp_openstack.helm.memcached:MemcachedHelm
006_keystone = k8sapp_openstack.helm.keystone:KeystoneHelm
007_heat = k8sapp_openstack.helm.heat:HeatHelm
008_horizon = k8sapp_openstack.helm.horizon:HorizonHelm
009_glance = k8sapp_openstack.helm.glance:GlanceHelm
010_openvswitch = k8sapp_openstack.helm.openvswitch:OpenvswitchHelm
011_libvirt = k8sapp_openstack.helm.libvirt:LibvirtHelm
012_neutron = k8sapp_openstack.helm.neutron:NeutronHelm
013_nova = k8sapp_openstack.helm.nova:NovaHelm
014_nova-api-proxy = k8sapp_openstack.helm.nova_api_proxy:NovaApiProxyHelm
015_cinder = k8sapp_openstack.helm.cinder:CinderHelm
016_gnocchi = k8sapp_openstack.helm.gnocchi:GnocchiHelm
017_ceilometer = k8sapp_openstack.helm.ceilometer:CeilometerHelm
018_panko = k8sapp_openstack.helm.panko:PankoHelm
019_aodh = k8sapp_openstack.helm.aodh:AodhHelm
020_helm-toolkit = k8sapp_openstack.helm.helm_toolkit:HelmToolkitHelm
021_barbican = k8sapp_openstack.helm.barbican:BarbicanHelm
022_keystone-api-proxy = k8sapp_openstack.helm.keystone_api_proxy:KeystoneApiProxyHelm
023_ceph-rgw = k8sapp_openstack.helm.swift:SwiftHelm
024_ironic = k8sapp_openstack.helm.ironic:IronicHelm
025_placement = k8sapp_openstack.helm.placement:PlacementHelm
026_nginx-ports-control = k8sapp_openstack.helm.nginx_ports_control:NginxPortsControlHelm
027_fm-rest-api = k8sapp_openstack.helm.fm_rest_api:FmRestApiHelm
028_dcdbsync = k8sapp_openstack.helm.dcdbsync:DcdbsyncHelm
systemconfig.armada.manifest_ops =
stx-openstack = k8sapp_openstack.armada.manifest_openstack:OpenstackArmadaManifestOperator
[wheel]
universal = 1
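The zero-padded numeric prefixes on the plugin names registered above suggest the sysinv plugin loader orders plugins lexically by entry-point name; that loader behavior is an assumption here, not shown in this change. A hypothetical sketch of parsing and ordering these entries:

```python
# Parse "NNN_name = module:Class" entry-point lines and sort them by
# name, so the numeric prefix determines plugin application order.
entry_points = [
    "004_rabbitmq = k8sapp_openstack.helm.rabbitmq:RabbitmqHelm",
    "001_ingress = k8sapp_openstack.helm.ingress:IngressHelm",
    "023_ceph-rgw = k8sapp_openstack.helm.swift:SwiftHelm",
]

def parse_entry_point(line):
    name, target = (part.strip() for part in line.split("=", 1))
    module, attr = target.split(":", 1)
    return name, module, attr

ordered = sorted(parse_entry_point(line) for line in entry_points)
```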

View File

@ -0,0 +1,12 @@
#
# Copyright (c) 2019-2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import setuptools
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)

View File

@ -0,0 +1,28 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
flake8<3.8.0
pycodestyle<2.6.0 # MIT License
hacking>=1.1.0,<=2.0.0 # Apache-2.0
coverage>=3.6
discover
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=2.0.0 # BSD
passlib>=1.7.0
psycopg2-binary
python-barbicanclient<3.1.0,>=3.0.1
python-subunit>=0.0.18
requests-mock>=0.6.0 # Apache-2.0
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
oslosphinx<2.6.0,>=2.5.0 # Apache-2.0
oslotest>=3.2.0 # Apache-2.0
stestr>=1.0.0 # Apache-2.0
testrepository>=0.0.18
testtools!=1.2.0,>=0.9.36
tempest-lib<0.5.0,>=0.4.0
ipaddr
pytest
pyudev
migrate
python-ldap>=3.1.0
markupsafe

View File

@ -0,0 +1,200 @@
[tox]
envlist = flake8,py27,py36,pylint
minversion = 1.6
# skipsdist = True
#,pip-missing-reqs
# tox does not work if the path to the workdir is too long, so move it to /tmp
toxworkdir = /tmp/{env:USER}_k8sopenstacktox
stxdir = {toxinidir}/../../..
distshare={toxworkdir}/.tox/distshare
[testenv]
# usedevelop = True
# enabling usedevelop results in py27 develop-inst:
# Exception: Versioning for this project requires either an sdist tarball,
# or access to an upstream git repository.
# Note. site-packages is true and rpm-python must be yum installed on your dev machine.
sitepackages = True
# tox is silly... these need to be separated by a newline....
whitelist_externals = bash
find
install_command = pip install \
-v -v -v \
-c{toxinidir}/upper-constraints.txt \
-c{env:UPPER_CONSTRAINTS_FILE:https://opendev.org/openstack/requirements/raw/branch/stable/stein/upper-constraints.txt} \
{opts} {packages}
# Note the hash seed is set to 0 until can be tested with a
# random hash seed successfully.
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
PYTHONDONTWRITEBYTECODE=1
OS_TEST_PATH=./k8sapp_openstack/tests
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=C
EVENTS_YAML=./k8sapp_openstack/tests/events_for_testing.yaml
SYSINV_TEST_ENV=True
TOX_WORK_DIR={toxworkdir}
PYLINTHOME={toxworkdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
-e{[tox]stxdir}/config/controllerconfig/controllerconfig
-e{[tox]stxdir}/config/sysinv/sysinv/sysinv
-e{[tox]stxdir}/config/tsconfig/tsconfig
-e{[tox]stxdir}/fault/fm-api
-e{[tox]stxdir}/fault/python-fmclient/fmclient
-e{[tox]stxdir}/update/cgcs-patch/cgcs-patch
-e{[tox]stxdir}/utilities/ceph/python-cephclient/python-cephclient
commands =
find . -type f -name "*.pyc" -delete
[flake8]
# H series are hacking
# H101 is TODO
# H102 is apache license
# H104 file contains only comments (ie: license)
# H105 author tags
# H306 imports not in alphabetical order
# H401 docstring should not start with a space
# H403 multi line docstrings should end on a new line
# H404 multi line docstring should start without a leading new line
# H405 multi line docstring summary not separated with an empty line
# H701 Empty localization string
# H702 Formatting operation should be outside of localization method call
# H703 Multiple positional placeholders
# B series are bugbear
# B006 Do not use mutable data structures for argument defaults. Needs to be FIXED.
# B007 Loop control variable not used within the loop body.
# B009 Do not call getattr with a constant attribute value
# B010 Do not call setattr with a constant attribute value
# B012 return/continue/break inside finally blocks cause exceptions to be silenced
# B014 Redundant exception types
# B301 Python 3 does not include `.iter*` methods on dictionaries. (this should be suppressed on a per line basis)
# B306 `BaseException.message` has been deprecated. Needs to be FIXED
# W series are warnings
# W503 line break before binary operator
# W504 line break after binary operator
# W605 invalid escape sequence
# E series are pep8
# E117 over-indented
# E126 continuation line over-indented for hanging indent
# E127 continuation line over-indented for visual indent
# E128 continuation line under-indented for visual indent
# E402 module level import not at top of file
ignore = H101,H102,H104,H105,H306,H401,H403,H404,H405,H701,H702,H703,
B006,B007,B009,B010,B012,B014,B301,B306,
W503,W504,W605,
E117,E126,E127,E128,E402
exclude = build,dist,tools,.eggs
max-line-length=120
[testenv:flake8]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
flake8-bugbear
commands =
flake8 {posargs} .
[testenv:py27]
basepython = python2.7
commands =
{[testenv]commands}
stestr run {posargs}
stestr slowest
[testenv:py36]
basepython = python3.6
commands =
{[testenv]commands}
stestr run {posargs}
stestr slowest
[testenv:pep8]
# testenv:flake8 clone
basepython = {[testenv:flake8]basepython}
deps = {[testenv:flake8]deps}
commands = {[testenv:flake8]commands}
[testenv:venv]
commands = {posargs}
[bandit]
# The following bandit tests are being skipped:
# B101: Test for use of assert
# B103: Test for setting permissive file permissions
# B104: Test for binding to all interfaces
# B105: Test for use of hard-coded password strings
# B108: Test for insecure usage of tmp file/directory
# B110: Try, Except, Pass detected.
# B303: Use of insecure MD2, MD4, MD5, or SHA1 hash function.
# B307: Blacklisted call to eval.
# B310: Audit url open for permitted schemes
# B311: Standard pseudo-random generators are not suitable for security/cryptographic purposes
# B314: Blacklisted calls to xml.etree.ElementTree
# B318: Blacklisted calls to xml.dom.minidom
# B320: Blacklisted calls to lxml.etree
# B404: Import of subprocess module
# B405: import xml.etree
# B408: import xml.minidom
# B410: import lxml
# B506: Test for use of yaml load
# B602: Test for use of popen with shell equals true
# B603: Test for use of subprocess without shell equals true
# B604: Test for any function with shell equals true
# B605: Test for starting a process with a shell
# B607: Test for starting a process with a partial path
#
# Note: 'skips' entry cannot be split across multiple lines
#
skips = B101,B103,B104,B105,B108,B110,B303,B307,B310,B311,B314,B318,B320,B404,B405,B408,B410,B506,B602,B603,B604,B605,B607
exclude = tests
[testenv:bandit]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
bandit
commands = bandit --ini tox.ini -n 5 -r k8sapp_openstack
[testenv:pylint]
basepython = python2.7
sitepackages = False
deps = {[testenv]deps}
pylint
commands =
pylint {posargs} k8sapp_openstack --rcfile=./pylint.rc
[testenv:cover]
basepython = python2.7
deps = {[testenv]deps}
setenv = {[testenv]setenv}
PYTHON=coverage run --parallel-mode
commands =
{[testenv]commands}
coverage erase
stestr run {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
coverage report
[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
-rrequirements.txt
commands=pip-missing-reqs -d --ignore-file=/k8sapp_openstack/tests k8sapp_openstack

View File

@ -0,0 +1,5 @@
# Override upstream constraints based on StarlingX load
iso8601==0.1.11
openstacksdk==0.25.0
os-client-config==1.28.0
paramiko==2.1.1

View File

@ -3,8 +3,10 @@ COPY_LIST_TO_TAR="\
$STX_BASE/helm-charts/fm-rest-api/fm-rest-api/helm-charts \
"
# This version is used as a component of the stx-openstack application
# version. Any change to this version must also be reflected in the
# SUPPORTED_VERSIONS list in sysinv/helm/openstack_version_check.py
#
TIS_PATCH_VER=20
# Bump the version by the previous version value prior to decoupling as this
# will align the GITREVCOUNT value to increment the version by one. Remove this
# (i.e. reset to 0) on the next major version change when TIS_BASE_SRCREV
# changes. This version should align with the version of the helm charts in
# python-k8sapp-openstack
TIS_BASE_SRCREV=8d3452a5e864339101590e542c24c375bb3808fb
TIS_PATCH_VER=GITREVCOUNT+20

View File

@ -21,6 +21,7 @@ BuildRequires: helm
BuildRequires: openstack-helm-infra
Requires: openstack-helm-infra
Requires: openstack-helm
Requires: python-k8sapp-openstack-wheels
%description
StarlingX Openstack Application Helm charts