Remove Tiller

For now we leave the Tiller status endpoint in place, until
Shipyard has had a release to stop depending on it [0].

[0]: https://review.opendev.org/c/airship/shipyard/+/802718

Signed-off-by: Sean Eagan <seaneagan1@gmail.com>
Change-Id: If8a02d7118f6840fdbbe088b4086aee9a18ababb
Sean Eagan 2021-07-23 14:36:56 -05:00 committed by Sean Eagan
parent 2efb96eea0
commit a5730f8db8
77 changed files with 706 additions and 5457 deletions

View File

@@ -52,7 +52,7 @@
- job:
name: armada-chart-build-gate
description: |
Builds Armada and Tiller charts using pinned Helm toolkit.
Builds charts using pinned Helm toolkit.
timeout: 900
run: tools/gate/playbooks/build-charts.yaml
nodeset: armada-single-node
@@ -60,7 +60,7 @@
- job:
name: armada-chart-build-latest-htk
description: |
Builds Armada and Tiller charts using latest Helm toolkit.
Builds charts using latest Helm toolkit.
timeout: 900
voting: false
run: tools/gate/playbooks/build-charts.yaml

View File

@@ -120,10 +120,6 @@ ifeq ($(PUSH_IMAGE), true)
docker push $(IMAGE)
endif
# make tools
protoc:
@tools/helm-hapi.sh
clean:
rm -rf build
rm -rf doc/build

View File

@@ -13,7 +13,7 @@ Overview
--------
The Armada Python library and command line tool provide a way to
synchronize a Helm (Tiller) target with an operator's intended state,
synchronize a Helm target with an operator's intended state,
consisting of several charts, dependencies, and overrides using a single file
or directory with a collection of files. This allows operators to define many
charts, potentially with different namespaces for those releases, and their
@@ -39,13 +39,13 @@ Components
Armada consists of two separate but complementary components:
#. CLI component (**mandatory**) which interfaces directly with `Tiller`_.
#. CLI component (**mandatory**) which interfaces directly with `Helm`_.
#. API component (**optional**) which services user requests through a wsgi
server (which in turn communicates with the `Tiller`_ server) and provides
server (which in turn communicates with the `Helm`_ CLI) and provides
the following additional functionality:
* Role-Based Access Control.
* Limiting projects to specific `Tiller`_ functionality by leveraging
* Limiting projects to specific functionality by leveraging
project-scoping provided by `Keystone`_.
Installation
@@ -96,7 +96,7 @@ Integration Points
Armada CLI component has the following integration points:
* `Tiller`_ manages Armada chart installations.
* `Helm`_ manages Armada chart installations.
* `Deckhand`_ is one of the supported control document sources for Armada.
* `Prometheus`_ exporter is provided for metric data related to application
of charts and collections of charts. See `metrics`_.
@@ -115,7 +115,7 @@ Further Reading
.. _Armada Quickstart: https://docs.airshipit.org/armada/operations/guide-use-armada.html
.. _metrics: https://docs.airshipit.org/armada/operations/metrics.html#metrics
.. _kubectl: https://kubernetes.io/docs/user-guide/kubectl/kubectl_config/
.. _Tiller: https://docs.helm.sh/using_helm/#easy-in-cluster-installation
.. _Helm: https://docs.helm.sh
.. _Deckhand: https://opendev.org/airship/deckhand
.. _Prometheus: https://prometheus.io
.. _Keystone: https://github.com/openstack/keystone

View File

@@ -23,7 +23,6 @@ from oslo_utils import excutils
import yaml
from armada.handlers.helm import Helm
from armada.handlers.tiller import Tiller
CONF = cfg.CONF
@@ -118,9 +117,6 @@ class BaseResource(object):
def error(self, ctx, msg):
self.log_error(ctx, log.ERROR, msg)
def get_tiller(self, req, resp):
return Tiller()
def get_helm(self, req, resp):
return Helm()

View File

@@ -11,7 +11,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import falcon
@@ -20,7 +19,6 @@ from oslo_log import log as logging
from armada import api
from armada.common import policy
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
@@ -32,27 +30,15 @@ class Status(api.BaseResource):
get tiller status
'''
try:
with self.get_tiller(req, resp) as tiller:
message = self.handle(tiller)
resp.status = falcon.HTTP_200
resp.text = json.dumps(message)
resp.content_type = 'application/json'
message = self.handle()
resp.status = falcon.HTTP_200
resp.text = json.dumps(message)
resp.content_type = 'application/json'
except Exception as e:
err_message = 'Failed to get Tiller Status: {}'.format(e)
self.error(req.context, err_message)
self.return_error(resp, falcon.HTTP_500, message=err_message)
def handle(self, tiller):
LOG.debug(
'Tiller (Status) at: %s:%s, namespace=%s, '
'timeout=%s', tiller.tiller_host, tiller.tiller_port,
tiller.tiller_namespace, tiller.timeout)
message = {
'tiller': {
'state': tiller.tiller_status(),
'version': tiller.tiller_version()
}
}
def handle(self):
message = {'tiller': {'state': True, 'version': "v1.2.3"}}
return message
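The rewritten handler above no longer queries Tiller; it returns a fixed payload so Shipyard keeps working until its dependency is dropped. A minimal sketch of that stubbed response, with the values taken verbatim from the new handle() method (not from any live service):

```python
import json


# Sketch of the hardcoded status payload the stubbed endpoint now
# serves; 'state' and 'version' are fixed placeholders, not real data.
def stub_status():
    return {'tiller': {'state': True, 'version': 'v1.2.3'}}


print(json.dumps(stub_status()))
```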

View File

@@ -63,6 +63,8 @@ def create(enable_middleware=CONF.middleware):
(HEALTH_PATH, Health()),
('apply', Apply()),
('releases', Releases()),
# TODO: Remove this in a follow-on release after Shipyard has
# been updated to no longer depend on it.
('status', Status()),
('tests', TestReleasesManifestController()),
('test/{namespace}/{release}', TestReleasesReleaseNameController()),

View File

@@ -73,11 +73,11 @@ SHORT_DESC = "Command installs manifest charts."
@click.option('--api', help="Contacts service endpoint.", is_flag=True)
@click.option(
'--disable-update-post',
help="Disable post-update Tiller operations.",
help="Disable post-update Helm operations.",
is_flag=True)
@click.option(
'--disable-update-pre',
help="Disable pre-update Tiller operations.",
help="Disable pre-update Helm operations.",
is_flag=True)
@click.option(
'--enable-chart-cleanup', help="Clean up unmanaged charts.", is_flag=True)
@@ -117,7 +117,7 @@ SHORT_DESC = "Command installs manifest charts."
@click.option(
'--wait',
help=(
"Force Tiller to wait until all charts are deployed, "
"Force Helm to wait until all charts are deployed, "
"rather than using each chart's specified wait policy. "
"This is equivalent to sequenced chartgroups."),
is_flag=True)

View File

@@ -1,130 +0,0 @@
# Copyright 2017 The Armada Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import click
from oslo_config import cfg
from armada.cli import CliAction
from armada.handlers.tiller import Tiller
CONF = cfg.CONF
@click.group()
def tiller():
""" Tiller Services actions
"""
DESC = """
This command gets Tiller information
The tiller command uses flags to obtain information from Tiller services
To obtain Armada deployed releases:
$ armada tiller --releases
To obtain Tiller service status/information:
$ armada tiller --status
"""
SHORT_DESC = "Command gets Tiller information."
@tiller.command(name='tiller', help=DESC, short_help=SHORT_DESC)
@click.option('--tiller-host', help="Tiller host IP.", default=None)
@click.option(
'--tiller-port', help="Tiller host port.", type=int, default=None)
@click.option(
'--tiller-namespace',
'-tn',
help="Tiller namespace.",
type=str,
default=None)
@click.option('--releases', help="List of deployed releases.", is_flag=True)
@click.option('--status', help="Status of Tiller services.", is_flag=True)
@click.option('--bearer-token', help="User bearer token.", default=None)
@click.option('--debug', help="Enable debug logging.", is_flag=True)
@click.pass_context
def tiller_service(
ctx, tiller_host, tiller_port, tiller_namespace, releases, status,
bearer_token, debug):
CONF.debug = debug
TillerServices(
ctx, tiller_host, tiller_port, tiller_namespace, releases, status,
bearer_token).safe_invoke()
class TillerServices(CliAction):
def __init__(
self, ctx, tiller_host, tiller_port, tiller_namespace, releases,
status, bearer_token):
super(TillerServices, self).__init__()
self.ctx = ctx
self.tiller_host = tiller_host
self.tiller_port = tiller_port
self.tiller_namespace = tiller_namespace
self.releases = releases
self.status = status
self.bearer_token = bearer_token
def invoke(self):
with Tiller(tiller_host=self.tiller_host, tiller_port=self.tiller_port,
tiller_namespace=self.tiller_namespace,
bearer_token=self.bearer_token) as tiller:
self.handle(tiller)
def handle(self, tiller):
if self.status:
if not self.ctx.obj.get('api', False):
self.logger.info('Tiller Service: %s', tiller.tiller_status())
self.logger.info('Tiller Version: %s', tiller.tiller_version())
else:
client = self.ctx.obj.get('CLIENT')
query = {
'tiller_host': self.tiller_host,
'tiller_port': self.tiller_port,
'tiller_namespace': self.tiller_namespace
}
resp = client.get_status(query=query)
tiller_status = resp.get('tiller').get('state', False)
tiller_version = resp.get('tiller').get('version')
self.logger.info("Tiller Service: %s", tiller_status)
self.logger.info("Tiller Version: %s", tiller_version)
if self.releases:
if not self.ctx.obj.get('api', False):
for release in tiller.list_releases():
self.logger.info(
"Release %s in namespace: %s", release.name,
release.namespace)
else:
client = self.ctx.obj.get('CLIENT')
query = {
'tiller_host': self.tiller_host,
'tiller_port': self.tiller_port,
'tiller_namespace': self.tiller_namespace
}
resp = client.get_releases(query=query)
for namespace in resp.get('releases'):
for release in resp.get('releases').get(namespace):
self.logger.info(
'Release %s in namespace: %s', release, namespace)

View File

@@ -65,26 +65,6 @@ The Keystone project domain name used for authentication.
"""Optional path to an SSH private key used for
authenticating against a Git source repository. The path must be an absolute
path to the private key that includes the name of the key itself.""")),
cfg.StrOpt(
'tiller_pod_labels',
default='app=helm,name=tiller',
help=utils.fmt('Labels for the Tiller pod.')),
cfg.StrOpt(
'tiller_namespace',
default='kube-system',
help=utils.fmt('Namespace for the Tiller pod.')),
cfg.StrOpt(
'tiller_host',
default=None,
help=utils.fmt('IP/hostname of the Tiller pod.')),
cfg.IntOpt(
'tiller_port',
default=44134,
help=utils.fmt('Port for the Tiller pod.')),
cfg.ListOpt(
'tiller_release_roles',
default=['admin'],
help=utils.fmt('IDs of approved API access roles.')),
cfg.IntOpt(
'lock_acquire_timeout',
default=60,

View File

@@ -23,22 +23,8 @@ KEYWORD_RELEASE = 'release'
DEFAULT_CHART_TIMEOUT = 900
DEFAULT_TEST_TIMEOUT = 300
# Tiller
DEFAULT_TILLER_TIMEOUT = 300
DEFAULT_DELETE_TIMEOUT = DEFAULT_TILLER_TIMEOUT
STATUS_UNKNOWN = 'UNKNOWN'
STATUS_DEPLOYED = 'DEPLOYED'
STATUS_DELETED = 'DELETED'
STATUS_DELETING = 'DELETING'
STATUS_FAILED = 'FAILED'
STATUS_PENDING_INSTALL = 'PENDING_INSTALL'
STATUS_PENDING_UPGRADE = 'PENDING_UPGRADE'
STATUS_PENDING_ROLLBACK = 'PENDING_ROLLBACK'
STATUS_ALL = [
STATUS_UNKNOWN, STATUS_DEPLOYED, STATUS_DELETED, STATUS_DELETING,
STATUS_FAILED, STATUS_PENDING_INSTALL, STATUS_PENDING_UPGRADE,
STATUS_PENDING_ROLLBACK
]
# Helm
DEFAULT_DELETE_TIMEOUT = 300
# Kubernetes
DEFAULT_K8S_TIMEOUT = 300

View File

@@ -1,156 +0,0 @@
# Copyright 2017 The Armada Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from armada.exceptions.base_exception import ArmadaBaseException as ex
class TillerException(ex):
'''Base class for Tiller exceptions and error handling.'''
message = 'An unknown Tiller error occurred.'
class ChartCleanupException(TillerException):
'''Exception that occurs during chart cleanup.'''
def __init__(self, chart_name):
message = 'An error occurred during cleanup while removing {}'.format(
chart_name)
super(ChartCleanupException, self).__init__(message)
class ListChartsException(TillerException):
'''Exception that occurs when listing charts'''
message = 'There was an error listing the Helm chart releases.'
class ReleaseException(TillerException):
'''
Exception that occurs when a release fails to install, upgrade, delete,
or test.
**Troubleshoot:**
*Coming Soon*
'''
def __init__(self, name, status, action):
til_msg = getattr(status.info, 'Description').encode()
message = 'Failed to {} release: {} - Tiller Message: {}'.format(
action, name, til_msg)
super(ReleaseException, self).__init__(message)
class TestFailedException(TillerException):
'''
Exception that occurs when a release test fails.
**Troubleshoot:**
*Coming Soon*
'''
def __init__(self, release):
message = 'Test failed for release: {}'.format(release)
super(TestFailedException, self).__init__(message)
class ChannelException(TillerException):
'''
Exception that occurs during a failed gRPC channel creation
**Troubleshoot:**
*Coming Soon*
'''
message = 'Failed to create gRPC channel.'
class GetReleaseStatusException(TillerException):
'''
Exception that occurs during a failed Release Testing.
**Troubleshoot:**
*Coming Soon*
'''
def __init__(self, release, version):
message = 'Failed to get {} status {} version'.format(release, version)
super(GetReleaseStatusException, self).__init__(message)
class GetReleaseContentException(TillerException):
'''Exception that occurs during a failed Release Testing'''
def __init__(self, release, version):
message = 'Failed to get {} content {} version'.format(
release, version)
super(GetReleaseContentException, self).__init__(message)
class TillerPodNotFoundException(TillerException):
'''
Exception that occurs when a tiller pod cannot be found using the labels
specified in the Armada config.
**Troubleshoot:**
*Coming Soon*
'''
def __init__(self, labels):
message = 'Could not find Tiller pod with labels "{}"'.format(labels)
super(TillerPodNotFoundException, self).__init__(message)
class TillerPodNotRunningException(TillerException):
'''
Exception that occurs when no tiller pod is found in a running state.
**Troubleshoot:**
*Coming Soon*
'''
message = 'No Tiller pods found in running state'
class TillerVersionException(TillerException):
'''
Exception that occurs during a failed Release Testing
**Troubleshoot:**
*Coming Soon*
'''
message = 'Failed to get Tiller Version'
class TillerListReleasesPagingException(TillerException):
'''
Exception that occurs when paging through releases listed by tiller
and the total releases changes between pages.
This could occur as tiller does not use anything like continue tokens for
paging as seen in the kubernetes api for example.
**Troubleshoot:**
*Coming Soon*
'''
message = (
'Failed to page through tiller releases, possibly due to '
'releases being added between pages')

View File

@@ -14,9 +14,9 @@
from oslo_log import log as logging
from armada import const
from armada.conf import get_current_chart
from armada.exceptions import armada_exceptions as ex
from armada.handlers import helm
from armada.handlers import schema
from armada.utils.release import label_selectors
@@ -78,7 +78,7 @@ class PreUpdateActions():
resource_labels,
namespace,
wait=False,
timeout=const.DEFAULT_TILLER_TIMEOUT):
timeout=helm.DEFAULT_HELM_TIMEOUT):
'''
Delete resources matching provided resource type, labels, and
namespace.
@@ -159,7 +159,7 @@ class PreUpdateActions():
chart,
disable_hooks,
values,
timeout=const.DEFAULT_TILLER_TIMEOUT):
timeout=helm.DEFAULT_HELM_TIMEOUT):
'''
update statefulsets (daemon, stateful)
'''

View File

@@ -1,583 +0,0 @@
# Copyright 2017 The Armada Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import grpc
from hapi.chart.config_pb2 import Config
from hapi.services.tiller_pb2 import GetReleaseContentRequest
from hapi.services.tiller_pb2 import GetReleaseStatusRequest
from hapi.services.tiller_pb2 import GetVersionRequest
from hapi.services.tiller_pb2 import InstallReleaseRequest
from hapi.services.tiller_pb2 import ListReleasesRequest
from hapi.services.tiller_pb2_grpc import ReleaseServiceStub
from hapi.services.tiller_pb2 import TestReleaseRequest
from hapi.services.tiller_pb2 import UninstallReleaseRequest
from hapi.services.tiller_pb2 import UpdateReleaseRequest
from oslo_config import cfg
from oslo_log import log as logging
import yaml
from armada import const
from armada.exceptions import tiller_exceptions as ex
from armada.handlers.k8s import K8s
from armada.utils import helm
TILLER_VERSION = b'2.16.9'
GRPC_EPSILON = 60
LIST_RELEASES_PAGE_SIZE = 32
LIST_RELEASES_ATTEMPTS = 3
# NOTE(seaneagan): This has no effect on the message size limit that tiller
# sets for itself which can be seen here:
# https://github.com/helm/helm/blob/2d77db11fa47005150e682fb13c3cf49eab98fbb/pkg/tiller/server.go#L34
MAX_MESSAGE_LENGTH = 429496729
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class CommonEqualityMixin(object):
def __eq__(self, other):
return (
isinstance(other, self.__class__)
and self.__dict__ == other.__dict__)
def __ne__(self, other):
return not self.__eq__(other)
class TillerResult(CommonEqualityMixin):
'''Object to hold Tiller results for Armada.'''
def __init__(self, release, namespace, status, description, version):
self.release = release
self.namespace = namespace
self.status = status
self.description = description
self.version = version
class Tiller(object):
'''
The Tiller class supports communication and requests to the Tiller Helm
service over gRPC
'''
def __init__(
self,
tiller_host=None,
tiller_port=None,
tiller_namespace=None,
bearer_token=None):
self.tiller_host = tiller_host or CONF.tiller_host
self.tiller_port = tiller_port or CONF.tiller_port
self.tiller_namespace = tiller_namespace or CONF.tiller_namespace
self.bearer_token = bearer_token
# init k8s connectivity
self.k8s = K8s(bearer_token=self.bearer_token)
# init Tiller channel
self.channel = self.get_channel()
# init timeout for all requests
# and assume eventually this will
# be fed at runtime as an override
self.timeout = const.DEFAULT_TILLER_TIMEOUT
LOG.debug(
'Armada is using Tiller at: %s:%s, namespace=%s, timeout=%s',
self.tiller_host, self.tiller_port, self.tiller_namespace,
self.timeout)
@property
def metadata(self):
'''
Return Tiller metadata for requests
'''
return [(b'x-helm-api-client', TILLER_VERSION)]
def get_channel(self):
'''
Return a Tiller channel
'''
tiller_ip = self._get_tiller_ip()
tiller_port = self._get_tiller_port()
try:
LOG.debug(
'Tiller getting gRPC insecure channel at %s:%s '
'with options: [grpc.max_send_message_length=%s, '
'grpc.max_receive_message_length=%s]', tiller_ip, tiller_port,
MAX_MESSAGE_LENGTH, MAX_MESSAGE_LENGTH)
return grpc.insecure_channel(
'%s:%s' % (tiller_ip, tiller_port),
options=[
('grpc.max_send_message_length', MAX_MESSAGE_LENGTH),
('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH)
])
except Exception:
LOG.exception('Failed to initialize grpc channel to tiller.')
raise ex.ChannelException()
def _get_tiller_pod(self):
'''
Returns Tiller pod using the Tiller pod labels specified in the Armada
config
'''
pods = None
namespace = self._get_tiller_namespace()
pods = self.k8s.get_namespace_pod(
namespace, label_selector=CONF.tiller_pod_labels).items
# No Tiller pods found
if not pods:
raise ex.TillerPodNotFoundException(CONF.tiller_pod_labels)
# Return first Tiller pod in running state
for pod in pods:
if pod.status.phase == 'Running':
LOG.debug('Found at least one Running Tiller pod.')
return pod
# No Tiller pod found in running state
raise ex.TillerPodNotRunningException()
def _get_tiller_ip(self):
'''
Returns the Tiller pod's IP address by searching all namespaces
'''
if self.tiller_host:
LOG.debug('Using Tiller host IP: %s', self.tiller_host)
return self.tiller_host
else:
pod = self._get_tiller_pod()
LOG.debug('Using Tiller pod IP: %s', pod.status.pod_ip)
return pod.status.pod_ip
def _get_tiller_port(self):
'''Stub method to support arbitrary ports in the future'''
LOG.debug('Using Tiller host port: %s', self.tiller_port)
return self.tiller_port
def _get_tiller_namespace(self):
LOG.debug('Using Tiller namespace: %s', self.tiller_namespace)
return self.tiller_namespace
def tiller_status(self):
'''
return if Tiller exist or not
'''
if self._get_tiller_ip():
LOG.debug('Getting Tiller Status: Tiller exists')
return True
LOG.debug('Getting Tiller Status: Tiller does not exist')
return False
def list_releases(self):
'''
List Helm Releases
'''
# TODO(MarshM) possibly combine list_releases() with list_charts()
# since they do the same thing, grouping output differently
stub = ReleaseServiceStub(self.channel)
# NOTE(seaneagan): Paging through releases to prevent hitting the
# maximum message size limit that tiller sets for its responses.
def get_results():
releases = []
done = False
next_release_expected = ""
initial_total = None
while not done:
req = ListReleasesRequest(
offset=next_release_expected,
limit=LIST_RELEASES_PAGE_SIZE,
status_codes=const.STATUS_ALL)
LOG.debug(
'Tiller ListReleases() with timeout=%s, request=%s',
self.timeout, req)
response = stub.ListReleases(
req, self.timeout, metadata=self.metadata)
found_message = False
for message in response:
found_message = True
page = message.releases
if initial_total:
if message.total != initial_total:
LOG.warning(
'Total releases changed between '
'pages from (%s) to (%s)', initial_total,
message.count)
raise ex.TillerListReleasesPagingException()
else:
initial_total = message.total
# Add page to results.
releases.extend(page)
if message.next:
next_release_expected = message.next
else:
done = True
# Ensure we break out if no message was found, which
# is seen when there are no releases in tiller.
if not found_message:
done = True
return releases
for index in range(LIST_RELEASES_ATTEMPTS):
attempt = index + 1
try:
releases = get_results()
except ex.TillerListReleasesPagingException:
LOG.warning(
'List releases paging failed on attempt %s/%s', attempt,
LIST_RELEASES_ATTEMPTS)
if attempt == LIST_RELEASES_ATTEMPTS:
raise
else:
# Filter out old releases, similar to helm cli:
# https://github.com/helm/helm/blob/1e26b5300b5166fabb90002535aacd2f9cc7d787/cmd/helm/list.go#L196
latest_versions = {}
for r in releases:
max = latest_versions.get(r.name)
if max is not None:
if max > r.version:
continue
latest_versions[r.name] = r.version
latest_releases = []
for r in releases:
if latest_versions[r.name] == r.version:
LOG.debug(
'Found release %s, version %s, status: %s', r.name,
r.version, r.info.status)
latest_releases.append(r)
return latest_releases
def get_chart_templates(
self, template_name, name, release_name, namespace, chart,
disable_hooks, values):
# returns some info
LOG.info("Template( %s ) : %s ", template_name, name)
stub = ReleaseServiceStub(self.channel)
release_request = InstallReleaseRequest(
chart=chart,
values=values,
name=name,
namespace=namespace,
wait=False)
templates = stub.InstallRelease(
release_request, self.timeout, metadata=self.metadata)
for template in yaml.load_all(getattr(templates.release, 'manifest',
[])):
if template_name == template.get('metadata', None).get('name',
None):
LOG.info(template_name)
return template
def list_charts(self):
'''
List Helm Charts from Latest Releases
Returns a list of tuples in the form:
(name, version, chart, values, status)
'''
LOG.debug('Getting known releases from Tiller...')
charts = []
for latest_release in self.list_releases():
try:
release = (
latest_release.name, latest_release.version,
latest_release.chart, latest_release.config.raw,
latest_release.info.status.Code.Name(
latest_release.info.status.code))
charts.append(release)
except (AttributeError, IndexError) as e:
LOG.debug(
'%s while getting releases: %s, ex=%s',
e.__class__.__name__, latest_release, e)
continue
return charts
def update_release(
self,
chart,
release,
namespace,
disable_hooks=False,
values=None,
wait=False,
timeout=None,
force=False,
recreate_pods=False):
'''
Update a Helm Release
'''
timeout = self._check_timeout(wait, timeout)
LOG.info(
'Helm update release: wait=%s, timeout=%s, force=%s, '
'recreate_pods=%s', wait, timeout, force, recreate_pods)
if values is None:
values = Config(raw='')
else:
values = Config(raw=values)
update_msg = None
# build release install request
try:
stub = ReleaseServiceStub(self.channel)
release_request = UpdateReleaseRequest(
chart=chart,
disable_hooks=disable_hooks,
values=values,
name=release,
wait=wait,
timeout=timeout,
force=force,
recreate=recreate_pods)
update_msg = stub.UpdateRelease(
release_request,
timeout + GRPC_EPSILON,
metadata=self.metadata)
except Exception:
LOG.exception('Error while updating release %s', release)
status = self.get_release_status(release)
raise ex.ReleaseException(release, status, 'Upgrade')
tiller_result = TillerResult(
update_msg.release.name, update_msg.release.namespace,
update_msg.release.info.status.Code.Name(
update_msg.release.info.status.code),
update_msg.release.info.Description, update_msg.release.version)
return tiller_result
def install_release(
self, chart, release, namespace, values=None, wait=False,
timeout=None):
'''
Create a Helm Release
'''
timeout = self._check_timeout(wait, timeout)
LOG.info('Helm install release: wait=%s, timeout=%s', wait, timeout)
if values is None:
values = Config(raw='')
else:
values = Config(raw=values)
# build release install request
try:
stub = ReleaseServiceStub(self.channel)
release_request = InstallReleaseRequest(
chart=chart,
values=values,
name=release,
namespace=namespace,
wait=wait,
timeout=timeout)
install_msg = stub.InstallRelease(
release_request,
timeout + GRPC_EPSILON,
metadata=self.metadata)
tiller_result = TillerResult(
install_msg.release.name, install_msg.release.namespace,
install_msg.release.info.status.Code.Name(
install_msg.release.info.status.code),
install_msg.release.info.Description,
install_msg.release.version)
return tiller_result
except Exception:
LOG.exception('Error while installing release %s', release)
status = self.get_release_status(release)
raise ex.ReleaseException(release, status, 'Install')
def test_release(
self, release, timeout=const.DEFAULT_TILLER_TIMEOUT,
cleanup=False):
'''
:param release: name of release to test
:param timeout: runtime before exiting
:param cleanup: removes testing pod created
:returns: test suite run object
'''
LOG.info("Running Helm test: release=%s, timeout=%s", release, timeout)
try:
stub = ReleaseServiceStub(self.channel)
# TODO: This timeout is redundant since we already have the grpc
# timeout below, and it's actually used by tiller for individual
# k8s operations not the overall request, should we:
# 1. Remove this timeout
# 2. Add `k8s_timeout=const.DEFAULT_K8S_TIMEOUT` arg and use
release_request = TestReleaseRequest(
name=release, timeout=timeout, cleanup=cleanup)
test_message_stream = stub.RunReleaseTest(
release_request, timeout, metadata=self.metadata)
failed = 0
for test_message in test_message_stream:
if test_message.status == helm.TESTRUN_STATUS_FAILURE:
failed += 1
LOG.info(test_message.msg)
if failed:
LOG.info('{} test(s) failed'.format(failed))
status = self.get_release_status(release)
return status.info.status.last_test_suite_run
except Exception:
LOG.exception('Error while testing release %s', release)
status = self.get_release_status(release)
raise ex.ReleaseException(release, status, 'Test')
def get_release_status(self, release, version=0):
'''
:param release: name of release to test
:param version: version of release status
'''
LOG.debug(
'Helm getting release status for release=%s, version=%s', release,
version)
try:
stub = ReleaseServiceStub(self.channel)
status_request = GetReleaseStatusRequest(
name=release, version=version)
release_status = stub.GetReleaseStatus(
status_request, self.timeout, metadata=self.metadata)
LOG.debug('GetReleaseStatus= %s', release_status)
return release_status
except Exception:
LOG.exception('Cannot get tiller release status.')
raise ex.GetReleaseStatusException(release, version)
def get_release_content(self, release, version=0):
'''
:param release: name of release to test
:param version: version of release status
'''
LOG.debug(
'Helm getting release content for release=%s, version=%s', release,
version)
try:
stub = ReleaseServiceStub(self.channel)
status_request = GetReleaseContentRequest(
name=release, version=version)
release_content = stub.GetReleaseContent(
status_request, self.timeout, metadata=self.metadata)
LOG.debug('GetReleaseContent= %s', release_content)
return release_content
except Exception:
LOG.exception('Cannot get tiller release content.')
raise ex.GetReleaseContentException(release, version)
def tiller_version(self):
'''
:returns: Tiller version
'''
try:
stub = ReleaseServiceStub(self.channel)
release_request = GetVersionRequest()
LOG.debug('Getting Tiller version, with timeout=%s', self.timeout)
tiller_version = stub.GetVersion(
release_request, self.timeout, metadata=self.metadata)
tiller_version = getattr(tiller_version.Version, 'sem_ver', None)
LOG.debug('Got Tiller version %s', tiller_version)
return tiller_version
except Exception:
LOG.exception('Failed to get Tiller version.')
raise ex.TillerVersionException()
def uninstall_release(
self, release, disable_hooks=False, purge=True, timeout=None):
'''
:param: release - Helm chart release name
:param: purge - deep delete of chart
:param: timeout - timeout for the tiller call
Deletes a Helm chart from Tiller
'''
if timeout is None:
timeout = const.DEFAULT_DELETE_TIMEOUT
# build release uninstall request
try:
stub = ReleaseServiceStub(self.channel)
LOG.info(
"Delete %s release with disable_hooks=%s, "
"purge=%s, timeout=%s flags", release, disable_hooks, purge,
timeout)
release_request = UninstallReleaseRequest(
name=release, disable_hooks=disable_hooks, purge=purge)
return stub.UninstallRelease(
release_request, timeout, metadata=self.metadata)
except Exception:
LOG.exception('Error while deleting release %s', release)
status = self.get_release_status(release)
raise ex.ReleaseException(release, status, 'Delete')
def _check_timeout(self, wait, timeout):
if timeout is None or timeout <= 0:
if wait:
LOG.warn(
'Tiller timeout is invalid or unspecified, '
'using default %ss.', self.timeout)
timeout = self.timeout
return timeout
def close(self):
# Ensure channel was actually initialized before closing
if getattr(self, 'channel', None):
self.channel.close()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
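One detail of the removed list_releases() worth noting: after paging, it deduplicates results by keeping only the highest version per release name, mirroring the helm CLI filtering it links to. A standalone sketch of that filtering step, using illustrative (name, version) tuples in place of Tiller release messages:

```python
# Keep only the latest version of each release, as the removed
# list_releases() did; releases are illustrative (name, version) tuples.
def filter_latest(releases):
    latest = {}
    for name, version in releases:
        if latest.get(name, -1) < version:
            latest[name] = version
    # Preserve input order, emitting only the max-version entry per name.
    return [(n, v) for n, v in releases if latest[n] == v]


print(filter_latest([('a', 1), ('a', 2), ('b', 1)]))
```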

View File

@@ -20,7 +20,6 @@ from oslo_log import log
from armada.cli.apply import apply_create
from armada.cli.test import test_charts
from armada.cli.tiller import tiller_service
from armada.cli.validate import validate_manifest
from armada.common.client import ArmadaClient
from armada.common.session import ArmadaSession
@@ -48,7 +47,6 @@ def main(ctx, debug, api, url, token):
\b
$ armada apply
$ armada test
$ armada tiller
$ armada validate
Environment:
@@ -56,8 +54,6 @@ def main(ctx, debug, api, url, token):
\b
$TOKEN set auth token
$HOST set armada service host endpoint
This tool will communicate with deployed Tiller in your Kubernetes cluster.
"""
if not ctx.obj:
@@ -85,5 +81,4 @@ def main(ctx, debug, api, url, token):
main.add_command(apply_create)
main.add_command(test_charts)
main.add_command(tiller_service)
main.add_command(validate_manifest)

View File

@@ -12,10 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
from armada import api
from armada.common.policies import base as policy_base
from armada.tests import test_utils
from armada.tests.unit.api import base
@@ -24,54 +22,28 @@ CONF = cfg.CONF
class TillerControllerTest(base.BaseControllerTest):
@mock.patch.object(api, 'Tiller')
def test_get_tiller_status(self, mock_tiller):
def test_get_tiller_status(self):
"""Tests GET /api/v1.0/status endpoint."""
rules = {'tiller:get_status': '@'}
self.policy.set_rules(rules)
m_tiller = mock_tiller.return_value
m_tiller.__enter__.return_value = m_tiller
m_tiller.tiller_status.return_value = 'fake_status'
m_tiller.tiller_version.return_value = 'fake_version'
result = self.app.simulate_get('/api/v1.0/status')
expected = {
'tiller': {
'version': 'fake_version',
'state': 'fake_status'
}
}
expected = {'tiller': {'state': True, 'version': "v1.2.3"}}
self.assertEqual(expected, result.json)
self.assertEqual('application/json', result.headers['content-type'])
mock_tiller.assert_called_once()
m_tiller.__exit__.assert_called()
@mock.patch.object(api, 'Tiller')
def test_get_tiller_status_with_params(self, mock_tiller):
def test_get_tiller_status_with_params(self):
"""Tests GET /api/v1.0/status endpoint with query parameters."""
rules = {'tiller:get_status': '@'}
self.policy.set_rules(rules)
m_tiller = mock_tiller.return_value
m_tiller.__enter__.return_value = m_tiller
m_tiller.tiller_status.return_value = 'fake_status'
m_tiller.tiller_version.return_value = 'fake_version'
result = self.app.simulate_get(
'/api/v1.0/status', params_csv=False, params={})
expected = {
'tiller': {
'version': 'fake_version',
'state': 'fake_status'
}
}
expected = {'tiller': {'state': True, 'version': "v1.2.3"}}
self.assertEqual(expected, result.json)
self.assertEqual('application/json', result.headers['content-type'])
mock_tiller.assert_called_once()
m_tiller.__exit__.assert_called()
class TillerControllerNegativeRbacTest(base.BaseControllerTest):

View File

@@ -1,546 +0,0 @@
# Copyright 2017 The Armada Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from armada.exceptions import tiller_exceptions as ex
from armada.handlers import tiller
from armada.utils import helm
from armada.tests.unit import base
from armada.tests.test_utils import AttrDict
class TillerTestCase(base.ArmadaTestCase):
@mock.patch.object(tiller.Tiller, '_get_tiller_ip')
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch('armada.handlers.tiller.Config')
@mock.patch('armada.handlers.tiller.InstallReleaseRequest')
@mock.patch('armada.handlers.tiller.ReleaseServiceStub')
def test_install_release(
self, mock_stub, mock_install_request, mock_config, mock_grpc,
mock_k8s, mock_ip):
# instantiate Tiller object
mock_grpc.insecure_channel.return_value = mock.Mock()
mock_ip.return_value = '0.0.0.0'
tiller_obj = tiller.Tiller()
assert tiller_obj._get_tiller_ip() == '0.0.0.0'
# set params
chart = mock.Mock()
name = None
namespace = None
initial_values = None
updated_values = mock_config(raw=initial_values)
wait = False
timeout = 3600
tiller_obj.install_release(
chart,
name,
namespace,
values=initial_values,
wait=wait,
timeout=timeout)
mock_stub.assert_called_with(tiller_obj.channel)
release_request = mock_install_request(
chart=chart,
values=updated_values,
release=name,
namespace=namespace,
wait=wait,
timeout=timeout)
(
mock_stub(tiller_obj.channel).InstallRelease.assert_called_with(
release_request, timeout + 60, metadata=tiller_obj.metadata))
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch.object(tiller.Tiller, '_get_tiller_port', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_channel(self, mock_grpc, mock_port, mock_ip, _):
mock_port.return_value = mock.sentinel.port
mock_ip.return_value = mock.sentinel.ip
mock_channel = mock.Mock()
# instantiate Tiller object
mock_grpc.insecure_channel.return_value = mock_channel
tiller_obj = tiller.Tiller()
self.assertIsNotNone(tiller_obj.channel)
self.assertEqual(mock_channel, tiller_obj.channel)
mock_grpc.insecure_channel.assert_called_once_with(
'%s:%s' % (str(mock.sentinel.ip), str(mock.sentinel.port)),
options=[
('grpc.max_send_message_length', tiller.MAX_MESSAGE_LENGTH),
('grpc.max_receive_message_length', tiller.MAX_MESSAGE_LENGTH)
])
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_ip_with_host_provided(self, mock_grpc, _):
tiller_obj = tiller.Tiller('1.1.1.1')
self.assertIsNotNone(tiller_obj._get_tiller_ip())
self.assertEqual('1.1.1.1', tiller_obj._get_tiller_ip())
@mock.patch.object(tiller.Tiller, '_get_tiller_pod', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_ip_with_mocked_pod(
self, mock_grpc, mock_k8s, mock_pod):
status = mock.Mock(pod_ip='1.1.1.1')
mock_pod.return_value.status = status
tiller_obj = tiller.Tiller()
self.assertEqual('1.1.1.1', tiller_obj._get_tiller_ip())
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_pod_throws_exception(
self, mock_grpc, mock_k8s, mock_ip):
mock_k8s.get_namespace_pod.return_value.items = []
tiller_obj = tiller.Tiller()
mock_grpc.insecure_channel.side_effect = ex.ChannelException()
self.assertRaises(
ex.TillerPodNotRunningException, tiller_obj._get_tiller_pod)
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_port(self, mock_grpc, _, mock_ip):
# instantiate Tiller object
tiller_obj = tiller.Tiller(None, '8080', None)
self.assertEqual('8080', tiller_obj._get_tiller_port())
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_namespace(self, mock_grpc, _, mock_ip):
# verifies namespace set via instantiation
tiller_obj = tiller.Tiller(None, None, 'test_namespace2')
self.assertEqual('test_namespace2', tiller_obj._get_tiller_namespace())
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_status_with_ip_provided(self, mock_grpc, _, mock_ip):
# instantiate Tiller object
tiller_obj = tiller.Tiller(None, '8080', None)
self.assertTrue(tiller_obj.tiller_status())
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
def test_get_tiller_status_no_ip(self, mock_grpc, _, mock_ip):
mock_ip.return_value = ''
# instantiate Tiller object
tiller_obj = tiller.Tiller()
self.assertFalse(tiller_obj.tiller_status())
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
@mock.patch('armada.handlers.tiller.ReleaseServiceStub')
def test_list_releases_empty(self, mock_stub, _, __, mock_ip):
message_mock = mock.Mock(count=0, total=5, next='', releases=[])
mock_stub.return_value.ListReleases.return_value = [message_mock]
# instantiate Tiller object
tiller_obj = tiller.Tiller()
self.assertEqual([], tiller_obj.list_releases())
@mock.patch.object(tiller.Tiller, '_get_tiller_ip', autospec=True)
@mock.patch('armada.handlers.tiller.K8s', autospec=True)
@mock.patch('armada.handlers.tiller.grpc', autospec=True)
@mock.patch('armada.handlers.tiller.ReleaseServiceStub')
def test_list_charts_empty(self, mock_stub, _, __, mock_ip):
message_mock = mock.Mock(count=0, total=5, next='', releases=[])
mock_stub.return_value.ListReleases.return_value = [message_mock]
# instantiate Tiller object
tiller_obj = tiller.Tiller()
self.assertEqual([], tiller_obj.list_charts())
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'ListReleasesRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_list_releases_single_page(
self, mock_stub, mock_list_releases_request, mock_grpc, _):
releases = [mock.Mock(), mock.Mock()]
mock_stub.return_value.ListReleases.return_value = [
mock.Mock(
next='',
count=len(releases),
total=len(releases),
releases=releases)
]
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual(releases, tiller_obj.list_releases())
mock_stub.assert_called_once_with(tiller_obj.channel)
mock_stub.return_value.ListReleases.assert_called_once_with(
mock_list_releases_request.return_value,
tiller_obj.timeout,
metadata=tiller_obj.metadata)
mock_list_releases_request.assert_called_once_with(
offset="",
limit=tiller.LIST_RELEASES_PAGE_SIZE,
status_codes=tiller.const.STATUS_ALL)
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'ListReleasesRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_list_releases_returns_latest_only(
self, mock_stub, mock_list_releases_request, mock_grpc, _):
latest = mock.Mock(version=3)
releases = [mock.Mock(version=2), latest, mock.Mock(version=1)]
for r in releases:
r.name = 'test'
mock_stub.return_value.ListReleases.return_value = [
mock.Mock(
next='',
count=len(releases),
total=len(releases),
releases=releases)
]
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual([latest], tiller_obj.list_releases())
mock_stub.assert_called_once_with(tiller_obj.channel)
mock_stub.return_value.ListReleases.assert_called_once_with(
mock_list_releases_request.return_value,
tiller_obj.timeout,
metadata=tiller_obj.metadata)
mock_list_releases_request.assert_called_once_with(
offset="",
limit=tiller.LIST_RELEASES_PAGE_SIZE,
status_codes=tiller.const.STATUS_ALL)
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'ListReleasesRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_list_releases_paged(
self, mock_stub, mock_list_releases_request, mock_grpc, _):
page_count = 3
release_count = tiller.LIST_RELEASES_PAGE_SIZE * page_count
releases = [mock.Mock() for i in range(release_count)]
for i, release in enumerate(releases):
release.name = mock.PropertyMock(return_value=str(i))
pages = [
[
mock.Mock(
count=release_count,
total=release_count + 5,
next='' if i == page_count - 1 else str(
(tiller.LIST_RELEASES_PAGE_SIZE * (i + 1))),
releases=releases[tiller.LIST_RELEASES_PAGE_SIZE
* i:tiller.LIST_RELEASES_PAGE_SIZE
* (i + 1)])
] for i in range(page_count)
]
mock_stub.return_value.ListReleases.side_effect = pages
mock_list_releases_side_effect = [
mock.Mock() for i in range(page_count)
]
mock_list_releases_request.side_effect = mock_list_releases_side_effect
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual(releases, tiller_obj.list_releases())
mock_stub.assert_called_once_with(tiller_obj.channel)
list_releases_calls = [
mock.call(
mock_list_releases_side_effect[i],
tiller_obj.timeout,
metadata=tiller_obj.metadata) for i in range(page_count)
]
mock_stub.return_value.ListReleases.assert_has_calls(
list_releases_calls)
list_release_request_calls = [
mock.call(
offset='' if i == 0 else str(
tiller.LIST_RELEASES_PAGE_SIZE * i),
limit=tiller.LIST_RELEASES_PAGE_SIZE,
status_codes=tiller.const.STATUS_ALL)
for i in range(page_count)
]
mock_list_releases_request.assert_has_calls(list_release_request_calls)
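The paged test above exercises offset-based `ListReleases` paging: each response carries a `next` offset, and an empty offset marks the last page. The loop the test assumes can be sketched as follows (`fetch_page` and the page shapes are hypothetical stand-ins for the gRPC call, not the Armada implementation):

```python
# Hedged sketch of offset-based paging as exercised by the test above.
# fetch_page(offset) stands in for the ListReleases call: it returns
# (releases, next_offset), where next_offset is '' on the last page.

def list_all(fetch_page):
    releases = []
    offset = ''
    while True:
        page, next_offset = fetch_page(offset)
        releases.extend(page)
        if not next_offset:
            return releases
        offset = next_offset


# Example: three fake pages keyed by the offset that requests them.
def fake_fetch(offset):
    pages = {'': (['a', 'b'], '2'), '2': (['c', 'd'], '4'), '4': (['e'], '')}
    return pages[offset]
```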
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'GetReleaseContentRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_get_release_content(
self, mock_release_service_stub, mock_release_content_request,
mock_grpc, _):
mock_release_service_stub.return_value.GetReleaseContent\
.return_value = {}
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual({}, tiller_obj.get_release_content('release'))
get_release_content_stub = mock_release_service_stub. \
return_value.GetReleaseContent
get_release_content_stub.assert_called_once_with(
mock_release_content_request.return_value,
tiller_obj.timeout,
metadata=tiller_obj.metadata)
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'GetVersionRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_tiller_version(
self, mock_release_service_stub, mock_version_request, mock_grpc,
_):
mock_version = mock.Mock()
mock_version.Version.sem_ver = mock.sentinel.sem_ver
mock_release_service_stub.return_value.GetVersion\
.return_value = mock_version
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual(mock.sentinel.sem_ver, tiller_obj.tiller_version())
mock_release_service_stub.assert_called_once_with(tiller_obj.channel)
get_version_stub = mock_release_service_stub.return_value.GetVersion
get_version_stub.assert_called_once_with(
mock_version_request.return_value,
tiller_obj.timeout,
metadata=tiller_obj.metadata)
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'GetVersionRequest')
@mock.patch.object(tiller, 'GetReleaseStatusRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_get_release_status(
self, mock_release_service_stub, mock_rel_status_request,
mock_version_request, mock_grpc, _):
mock_release_service_stub.return_value.GetReleaseStatus. \
return_value = {}
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual({}, tiller_obj.get_release_status('release'))
mock_release_service_stub.assert_called_once_with(tiller_obj.channel)
get_release_status_stub = mock_release_service_stub.return_value. \
GetReleaseStatus
get_release_status_stub.assert_called_once_with(
mock_rel_status_request.return_value,
tiller_obj.timeout,
metadata=tiller_obj.metadata)
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch.object(tiller, 'UninstallReleaseRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_uninstall_release(
self, mock_release_service_stub, mock_uninstall_release_request,
mock_grpc, _):
mock_release_service_stub.return_value.UninstallRelease\
.return_value = {}
tiller_obj = tiller.Tiller('host', '8080', None)
self.assertEqual({}, tiller_obj.uninstall_release('release'))
mock_release_service_stub.assert_called_once_with(tiller_obj.channel)
uninstall_release_stub = mock_release_service_stub.return_value. \
UninstallRelease
uninstall_release_stub.assert_called_once_with(
mock_uninstall_release_request.return_value,
tiller_obj.timeout,
metadata=tiller_obj.metadata)
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch('armada.handlers.tiller.Config')
@mock.patch.object(tiller, 'UpdateReleaseRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def test_update_release(
self, mock_release_service_stub, mock_update_release_request,
mock_config, _, __):
release = 'release'
chart = {}
namespace = 'namespace'
code = 0
status = 'DEPLOYED'
description = 'desc'
version = 2
values = mock_config(raw=None)
mock_release_service_stub.return_value.UpdateRelease.return_value =\
AttrDict(**{
'release': AttrDict(**{
'name': release,
'namespace': namespace,
'info': AttrDict(**{
'status': AttrDict(**{
'Code': AttrDict(**{
'Name': lambda c:
status if c == code else None
}),
'code': code
}),
'Description': description
}),
'version': version
})
})
tiller_obj = tiller.Tiller('host', '8080', None)
disable_hooks = False
wait = True
timeout = 123
force = True
recreate_pods = True
result = tiller_obj.update_release(
chart,
release,
namespace,
disable_hooks=disable_hooks,
values=values,
wait=wait,
timeout=timeout,
force=force,
recreate_pods=recreate_pods)
mock_update_release_request.assert_called_once_with(
chart=chart,
name=release,
disable_hooks=False,
values=values,
wait=wait,
timeout=timeout,
force=force,
recreate=recreate_pods)
mock_release_service_stub.assert_called_once_with(tiller_obj.channel)
update_release_stub = mock_release_service_stub.return_value. \
UpdateRelease
update_release_stub.assert_called_once_with(
mock_update_release_request.return_value,
timeout + tiller.GRPC_EPSILON,
metadata=tiller_obj.metadata)
expected_result = tiller.TillerResult(
release, namespace, status, description, version)
self.assertEqual(expected_result, result)
def _test_test_release(self, grpc_response_mock):
@mock.patch('armada.handlers.tiller.K8s')
@mock.patch('armada.handlers.tiller.grpc')
@mock.patch('armada.handlers.tiller.Config')
@mock.patch.object(tiller, 'TestReleaseRequest')
@mock.patch.object(tiller, 'ReleaseServiceStub')
def do_test(
self, mock_release_service_stub, mock_test_release_request,
mock_config, _, __):
tiller_obj = tiller.Tiller('host', '8080', None)
release = 'release'
test_suite_run = {}
mock_release_service_stub.return_value.RunReleaseTest\
.return_value = grpc_response_mock
tiller_obj.get_release_status = mock.Mock()
tiller_obj.get_release_status.return_value = AttrDict(
**{
'info': AttrDict(
**{
'status': AttrDict(
**{'last_test_suite_run': test_suite_run}),
'Description': 'Failed'
})
})
result = tiller_obj.test_release(release)
self.assertEqual(test_suite_run, result)
do_test(self)
def test_test_release_no_tests(self):
self._test_test_release(
[
AttrDict(
**{
'msg': 'No Tests Found',
'status': helm.TESTRUN_STATUS_UNKNOWN
})
])
def test_test_release_success(self):
self._test_test_release(
[
AttrDict(
**{
'msg': 'RUNNING: ...',
'status': helm.TESTRUN_STATUS_RUNNING
}),
AttrDict(
**{
'msg': 'SUCCESS: ...',
'status': helm.TESTRUN_STATUS_SUCCESS
})
])
def test_test_release_failure(self):
self._test_test_release(
[
AttrDict(
**{
'msg': 'RUNNING: ...',
'status': helm.TESTRUN_STATUS_RUNNING
}),
AttrDict(
**{
'msg': 'FAILURE: ...',
'status': helm.TESTRUN_STATUS_FAILURE
})
])
def test_test_release_failure_to_run(self):
class Iterator:
def __iter__(self):
return self
def __next__(self):
raise Exception
def test():
self._test_test_release(Iterator())
self.assertRaises(ex.ReleaseException, test)
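The release-test cases above simulate a stream of test-run messages in which the final status decides the outcome. The consumption pattern can be sketched like this (the dict message shape and status strings are simplified stand-ins for the `TESTRUN_STATUS_*` constants, not Armada code):

```python
# Hedged sketch of consuming a streamed sequence of test-run messages.
STATUS_RUNNING = 'running'
STATUS_SUCCESS = 'success'
STATUS_FAILURE = 'failure'


def run_test_stream(messages):
    """Consume status messages in order; the last one decides the outcome."""
    last_status = None
    for msg in messages:
        last_status = msg['status']
    return last_status == STATUS_SUCCESS


success = run_test_stream([
    {'msg': 'RUNNING: ...', 'status': STATUS_RUNNING},
    {'msg': 'SUCCESS: ...', 'status': STATUS_SUCCESS},
])
```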

View File

@@ -41,11 +41,6 @@ limitations under the License.
{{- if empty .Values.conf.armada.keystone_authtoken.password -}}
{{- set .Values.conf.armada.keystone_authtoken "password" $userIdentity.password | quote | trunc 0 -}}
{{- end -}}
{{- if .Values.conf.tiller.enabled }}
{{- set .Values.conf.armada.DEFAULT "tiller_host" "127.0.0.1" | quote | trunc 0 -}}
{{- set .Values.conf.armada.DEFAULT "tiller_port" .Values.conf.tiller.port | quote | trunc 0 -}}
{{- end }}
---
apiVersion: v1
kind: ConfigMap

View File

@@ -20,26 +20,11 @@ httpGet:
port: {{ tuple "armada" "internal" "api" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
{{- end }}
{{- define "tillerReadinessProbeTemplate" }}
httpGet:
path: /readiness
port: {{ .Values.conf.tiller.probe_port }}
scheme: HTTP
{{- end }}
{{- define "tillerLivenessProbeTemplate" }}
httpGet:
path: /liveness
port: {{ .Values.conf.tiller.probe_port }}
scheme: HTTP
{{- end }}
{{- if .Values.manifests.deployment_api }}
{{- $envAll := . }}
{{- $labels := tuple $envAll "armada" "api" | include "helm-toolkit.snippets.kubernetes_metadata_labels" -}}
{{- $mounts_armada_api := .Values.pod.mounts.armada_api.armada_api }}
{{- $mounts_armada_api_init := .Values.pod.mounts.armada_api.init_container }}
{{- $mounts_armada_api_tiller := .Values.pod.mounts.armada_api.tiller }}
{{- $prometheus_annotations := $envAll.Values.monitoring.prometheus.armada }}
{{- $serviceAccountName := "armada-api" }}
{{ tuple $envAll "api" $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }}
@@ -108,7 +93,7 @@ spec:
{{ $labels | indent 8 }}
annotations:
{{ tuple $envAll | include "helm-toolkit.snippets.release_uuid" | indent 8 }}
{{ dict "envAll" $envAll "podName" "armada-api" "containerNames" (list "init" "armada-api" "tiller") | include "helm-toolkit.snippets.kubernetes_mandatory_access_control_annotation" | indent 8 }}
{{ dict "envAll" $envAll "podName" "armada-api" "containerNames" (list "init" "armada-api") | include "helm-toolkit.snippets.kubernetes_mandatory_access_control_annotation" | indent 8 }}
configmap-bin-hash: {{ tuple "configmap-bin.yaml" . | include "helm-toolkit.utils.hash" }}
configmap-etc-hash: {{ tuple "configmap-etc.yaml" . | include "helm-toolkit.utils.hash" }}
{{ tuple $prometheus_annotations | include "helm-toolkit.snippets.prometheus_pod_annotations" | indent 8 }}
@@ -158,54 +143,6 @@ spec:
subPath: policy.yaml
readOnly: true
{{ if $mounts_armada_api.volumeMounts }}{{ toYaml $mounts_armada_api.volumeMounts | indent 12 }}{{ end }}
{{- if .Values.conf.tiller.enabled }}
- name: tiller
{{ tuple $envAll "tiller" | include "helm-toolkit.snippets.image" | indent 10 }}
{{ tuple $envAll $envAll.Values.pod.resources.tiller | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
{{ dict "envAll" $envAll "application" "armada" "container" "tiller" | include "helm-toolkit.snippets.kubernetes_container_security_context" | indent 10 }}
env:
- name: TILLER_NAMESPACE
value: {{ .Values.conf.tiller.namespace | quote }}
- name: TILLER_HISTORY_MAX
value: {{ .Values.conf.tiller.history_max | quote }}
command:
- /tiller
{{- if .Values.conf.tiller.storage }}
- --storage={{ .Values.conf.tiller.storage }}
{{- if and (eq .Values.conf.tiller.storage "sql") (.Values.conf.tiller.sql_dialect) (.Values.conf.tiller.sql_connection) }}
- --sql-dialect={{ .Values.conf.tiller.sql_dialect }}
- --sql-connection-string={{ .Values.conf.tiller.sql_connection }}
{{- end }}
{{- end }}
- -listen
- "{{ if not .Values.conf.tiller.listen_on_any }}127.0.0.1{{ end }}:{{ .Values.conf.tiller.port }}"
- -probe-listen
- ":{{ .Values.conf.tiller.probe_port }}"
- -logtostderr
- -v
- {{ .Values.conf.tiller.verbosity | quote }}
{{- if .Values.conf.tiller.trace }}
- -trace
{{- end }}
lifecycle:
preStop:
exec:
command:
# Delay tiller termination so that it has a chance to finish
# deploying releases including marking them with
# DEPLOYED/FAILED status, otherwise they can get stuck in
# PENDING_*** status.
- sleep
- "{{ .Values.conf.tiller.prestop_sleep }}"
ports:
- name: tiller
containerPort: {{ .Values.conf.tiller.port }}
protocol: TCP
{{ dict "envAll" $envAll "component" "armada" "container" "tiller" "type" "readiness" "probeTemplate" (include "tillerReadinessProbeTemplate" $envAll | fromYaml) | include "helm-toolkit.snippets.kubernetes_probe" | trim | indent 10 }}
{{ dict "envAll" $envAll "component" "armada" "container" "tiller" "type" "liveness" "probeTemplate" (include "tillerLivenessProbeTemplate" $envAll | fromYaml) | include "helm-toolkit.snippets.kubernetes_probe" | trim | indent 10 }}
volumeMounts:
{{ if $mounts_armada_api_tiller.volumeMounts }}{{ toYaml $mounts_armada_api_tiller.volumeMounts | indent 12 }}{{ end }}
{{- end }}
volumes:
- name: pod-tmp
emptyDir: {}
@@ -220,7 +157,4 @@ spec:
name: armada-etc
defaultMode: 0444
{{ if $mounts_armada_api.volumes }}{{ toYaml $mounts_armada_api.volumes | indent 8 }}{{ end }}
{{- if .Values.conf.tiller.enabled }}
{{ if $mounts_armada_api_tiller.volumes }}{{ toYaml $mounts_armada_api_tiller.volumes | indent 8 }}{{ end }}
{{- end }}
{{- end }}

View File

@@ -34,14 +34,12 @@ images:
ks_service: 'docker.io/openstackhelm/heat:newton'
ks_user: 'docker.io/openstackhelm/heat:newton'
image_repo_sync: docker.io/docker:17.07.0
tiller: gcr.io/kubernetes-helm/tiller:v2.16.9
pull_policy: "IfNotPresent"
local_registry:
active: false
exclude:
- dep_check
- image_repo_sync
- tiller
network:
api:
@@ -174,8 +172,6 @@ secrets:
conf:
armada:
DEFAULT: {}
# When .conf.tiller.enabled is true `tiller_host` and `tiller_port` will
# be overridden by 127.0.0.1 and `.conf.tiller.port` respectively
armada_api:
bind_port: 8000
keystone_authtoken:
@@ -202,30 +198,6 @@ conf:
'armada:validate_manifest': 'rule:admin_viewer'
'armada:get_release': 'rule:admin_viewer'
'tiller:get_status': 'rule:admin_viewer'
tiller:
# If set to false then some form of Tiller needs to be provided
enabled: true
# To have Tiller bind to all interfaces, allowing direct connections from
# the Helm client to pod_ip:port, set 'listen_on_any: true'.
# The default setting 'listen_on_any: false' binds Tiller to 127.0.0.1.
# The Armada container talks directly to Tiller via 127.0.0.1, so the
# default value is appropriate for normal operation.
listen_on_any: false
port: 24134
probe_port: 24135
verbosity: 5
trace: false
storage: null
# Only postgres is supported so far
sql_dialect: postgres
sql_connection: null
namespace: kube-system
# Limit the maximum number of revisions saved per release. 0 for no limit.
history_max: 0
# Note: Defaulting to the (default) kubernetes grace period, as anything
# greater than that will have no effect.
prestop_sleep: 30
monitoring:
prometheus:
armada:
@@ -239,7 +211,6 @@ pod:
armada-api:
init: runtime/default
armada-api: runtime/default
tiller: runtime/default
armada-api-test:
armada-api-test: runtime/default
probes:
@@ -255,23 +226,6 @@ pod:
params:
initialDelaySeconds: 15
periodSeconds: 10
tiller:
readiness:
enabled: true
params:
failureThreshold: 3
initialDelaySeconds: 1
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
liveness:
enabled: true
params:
failureThreshold: 3
initialDelaySeconds: 1
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
security_context:
armada:
pod:
@@ -283,10 +237,6 @@ pod:
armada_api:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
tiller:
runAsUser: 65534
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
api_test:
pod:
runAsUser: 1000
@@ -326,15 +276,6 @@ pod:
armada_api:
volumes: []
volumeMounts: []
tiller:
volumes:
- name: kubernetes-client-cache
emptyDir: {}
volumeMounts:
- name: kubernetes-client-cache
# Should be the `$HOME/.kube` of the `runAsUser` above
# as this is where tiller's kubernetes client roots its cache dir.
mountPath: /tmp/.kube
affinity:
anti:
type:
@@ -366,13 +307,6 @@ pod:
requests:
memory: "128Mi"
cpu: "100m"
tiller:
limits:
memory: "128Mi"
cpu: "100m"
requests:
memory: "128Mi"
cpu: "100m"
jobs:
ks_user:
limits:

View File

@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@@ -1,24 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
description: A Helm chart for Tiller
name: tiller
version: 0.1.0
keywords:
- tiller
home: https://docs.helm.sh
sources:
- https://github.com/kubernetes/helm
engine: gotpl

View File

@@ -1,18 +0,0 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
dependencies:
- name: helm-toolkit
repository: file://../deps/helm-toolkit
version: ">= 0.1.0"

View File

@@ -1,135 +0,0 @@
{{/*
Copyright 2017 AT&T Intellectual Property. All other rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/}}
{{- define "tillerReadinessProbeTemplate" }}
httpGet:
scheme: HTTP
path: /readiness
port: {{ .Values.conf.tiller.probe_port }}
{{- end }}
{{- define "tillerLivenessProbeTemplate" }}
httpGet:
scheme: HTTP
path: /liveness
port: {{ .Values.conf.tiller.probe_port }}
{{- end }}
{{- if .Values.manifests.deployment_tiller }}
{{- $envAll := . }}
{{- $serviceAccountName := "tiller-deploy" }}
{{- $mounts_tiller := .Values.pod.mounts.tiller.tiller }}
{{ tuple $envAll "tiller_deploy" $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: run-tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: {{ $serviceAccountName }}
namespace: {{ .Release.Namespace }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: helm
name: tiller
name: tiller-deploy
annotations:
{{ tuple $envAll | include "helm-toolkit.snippets.release_uuid" | indent 4 }}
spec:
replicas: 1
selector:
matchLabels:
app: helm
name: tiller
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: helm
name: tiller
{{ tuple $envAll "tiller" "deploy" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
annotations:
{{ tuple $envAll | include "helm-toolkit.snippets.release_uuid" | indent 8 }}
{{ dict "envAll" $envAll "podName" "tiller" "containerNames" (list "tiller") | include "helm-toolkit.snippets.kubernetes_mandatory_access_control_annotation" | indent 8 }}
spec:
{{ dict "envAll" $envAll "application" "tiller" | include "helm-toolkit.snippets.kubernetes_pod_security_context" | indent 6 }}
serviceAccountName: {{ $serviceAccountName }}
nodeSelector:
{{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
containers:
- name: tiller
{{ tuple $envAll "tiller" | include "helm-toolkit.snippets.image" | indent 10 }}
{{ tuple $envAll $envAll.Values.pod.resources.tiller | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
{{ dict "envAll" $envAll "application" "tiller" "container" "tiller" | include "helm-toolkit.snippets.kubernetes_container_security_context" | indent 10 }}
env:
- name: TILLER_NAMESPACE
value: {{ .Values.Name }}
- name: TILLER_HISTORY_MAX
value: {{ .Values.deployment.tiller_history | quote }}
volumeMounts:
{{ toYaml $mounts_tiller.volumeMounts | indent 12 }}
command:
- /tiller
{{- if .Values.conf.tiller.storage }}
- --storage={{ .Values.conf.tiller.storage }}
{{- if and (eq .Values.conf.tiller.storage "sql") (.Values.conf.tiller.sql_dialect) (.Values.conf.tiller.sql_connection) }}
- --sql-dialect={{ .Values.conf.tiller.sql_dialect }}
- --sql-connection-string={{ .Values.conf.tiller.sql_connection }}
{{- end }}
{{- end }}
- -listen
- "{{ if not .Values.conf.tiller.listen_on_any }}127.0.0.1{{ end }}:{{ .Values.conf.tiller.port }}"
- -probe-listen
- ":{{ .Values.conf.tiller.probe_port }}"
- -logtostderr
- -v
- {{ .Values.conf.tiller.verbosity | quote }}
{{- if .Values.conf.tiller.trace }}
- -trace
{{- end }}
lifecycle:
preStop:
exec:
command:
# Delay tiller termination so that it has a chance to finish
# deploying releases including marking them with
# DEPLOYED/FAILED status, otherwise they can get stuck in
# PENDING_*** status.
- sleep
- "{{ .Values.conf.tiller.prestop_sleep }}"
ports:
- name: tiller
containerPort: {{ .Values.conf.tiller.port }}
protocol: TCP
{{ dict "envAll" $envAll "component" "tiller" "container" "tiller" "type" "readiness" "probeTemplate" (include "tillerReadinessProbeTemplate" $envAll | fromYaml) | include "helm-toolkit.snippets.kubernetes_probe" | trim | indent 10 }}
{{ dict "envAll" $envAll "component" "tiller" "container" "tiller" "type" "liveness" "probeTemplate" (include "tillerLivenessProbeTemplate" $envAll | fromYaml) | include "helm-toolkit.snippets.kubernetes_probe" | trim | indent 10 }}
volumes:
{{ toYaml $mounts_tiller.volumes | indent 8 }}
status: {}
{{- end }}

@@ -1,18 +0,0 @@
# Copyright 2017-2018 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{{- if .Values.manifests.network_policy -}}
{{- $netpol_opts := dict "envAll" . "name" "application" "label" "tiller" -}}
{{ $netpol_opts | include "helm-toolkit.manifests.kubernetes_network_policy" }}
{{- end -}}

@@ -1,126 +0,0 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# namespace: "kube-system"
labels:
node_selector_key: ucp-control-plane
node_selector_value: enabled
dependencies:
static:
tiller_deploy:
images:
tags:
tiller: gcr.io/kubernetes-helm/tiller:v2.16.9
pull_policy: "IfNotPresent"
local_registry:
# NOTE(portdirect): this tiller chart does not support image pulling
active: false
exclude:
- tiller
deployment:
# NOTE: Current replica is hard-coded to 1. This is a placeholder variable
# for future usage. Updates will be made to the chart when we know that
# tiller is stable with multiple instances.
replicas: 1
# The number of release revisions tiller retains. 0 means that there is
# no limit.
tiller_history: 0
conf:
tiller:
verbosity: 5
storage: null
# Only postgres is supported so far
sql_dialect: postgres
sql_connection: null
trace: false
# Note: Defaulting to the (default) kubernetes grace period, as anything
# greater than that will have no effect.
prestop_sleep: 30
# To have Tiller bind to all interfaces, allowing direct connections from
# the Helm client to pod_ip:port, set 'listen_on_any: true'.
# The default setting 'listen_on_any: false' binds Tiller to 127.0.0.1.
# Helm clients with Kubernetes API access dynamically set up a portforward
# into the pod, which works with the default setting.
listen_on_any: false
port: 44134
probe_port: 44135
pod:
mandatory_access_control:
type: apparmor
tiller:
tiller: runtime/default
security_context:
tiller:
pod:
runAsUser: 65534
container:
tiller:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
probes:
tiller:
tiller:
readiness:
enabled: true
params:
failureThreshold: 3
initialDelaySeconds: 1
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
liveness:
enabled: true
params:
failureThreshold: 3
initialDelaySeconds: 1
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
enabled: false
tiller:
limits:
memory: "128Mi"
cpu: "100m"
requests:
memory: "128Mi"
cpu: "100m"
mounts:
tiller:
tiller:
volumes:
- name: kubernetes-client-cache
emptyDir: {}
volumeMounts:
- name: kubernetes-client-cache
# Should be the `$HOME/.kube` of the `runAsUser` above
# as this is where tiller's kubernetes client roots its cache dir.
mountPath: /tmp/.kube
network_policy:
tiller:
ingress:
- {}
egress:
- {}
manifests:
deployment_tiller: true
service_tiller_deploy: true
network_policy: false
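For reference, the SQL release-storage backend described in the `conf.tiller` keys above could have been enabled with an override like the following. This is an illustrative sketch against the now-removed chart; the connection string and credentials are placeholders, not real endpoints:

```yaml
# Illustrative override for the removed tiller chart: switch release
# storage from the default (configmaps) to PostgreSQL and bind tiller
# to all interfaces. Connection string values are placeholders.
conf:
  tiller:
    storage: sql
    sql_dialect: postgres
    sql_connection: postgresql://tiller:password@postgres.kube-system.svc:5432/helm?sslmode=disable
    listen_on_any: true
```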

@@ -0,0 +1,621 @@
[DEFAULT]
#
# From armada.conf
#
# IDs of approved API access roles. (list value)
#armada_apply_roles = admin
# The default Keystone authentication url. (string value)
#auth_url = http://0.0.0.0/v3
# Absolute path to the certificate file to use for chart registries (string
# value)
#certs = <None>
# Path to Kubernetes configurations. (string value)
#kubernetes_config_path = /home/user/.kube/
# Enables or disables Keystone authentication middleware. (boolean value)
#middleware = true
# The Keystone project domain name used for authentication. (string value)
#project_domain_name = default
# The Keystone project name used for authentication. (string value)
#project_name = admin
# Optional path to an SSH private key used for authenticating against a Git
# source repository. The path must be an absolute path to the private key that
# includes the name of the key itself. (string value)
#ssh_key_path = /home/user/.ssh/
# Time in seconds of how long armada will attempt to acquire a lock
# before an exception is raised (integer value)
# Minimum value: 0
#lock_acquire_timeout = 60
# Time in seconds of how long to wait between attempts to acquire a lock
# (integer value)
# Minimum value: 0
#lock_acquire_delay = 5
# Time in seconds of how often armada will update the lock while it is
# continuing to do work (integer value)
# Minimum value: 0
#lock_update_interval = 60
# Time in seconds of how much time needs to pass since the last update
# of an existing lock before armada forcibly removes it and tries to
# acquire its own lock (integer value)
# Minimum value: 0
#lock_expiration = 600
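The four lock options above describe an acquire-retry-renew cycle. A minimal sketch of the acquisition side, assuming a `try_acquire` callable — the names here are illustrative and not Armada's actual internals:

```python
import time

def acquire_lock(try_acquire, lock_acquire_timeout=60, lock_acquire_delay=5):
    """Retry try_acquire() until it succeeds or lock_acquire_timeout elapses.

    Mirrors the semantics of the lock_acquire_timeout / lock_acquire_delay
    options above: wait lock_acquire_delay seconds between attempts and
    raise once the overall deadline is exceeded.
    """
    deadline = time.monotonic() + lock_acquire_timeout
    while True:
        if try_acquire():
            return True
        if time.monotonic() + lock_acquire_delay > deadline:
            raise TimeoutError("could not acquire lock before timeout")
        time.sleep(lock_acquire_delay)
```

A running Armada would additionally refresh the lock every `lock_update_interval` seconds, and forcibly remove a stale peer lock once `lock_expiration` has passed.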
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, log-date-format). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Enable journald for logging. If running in a systemd environment you may wish
# to enable journal support. Doing so will use the journal native protocol
# which includes structured metadata in addition to log messages.This option is
# ignored if log_config_append is set. (boolean value)
#use_journal = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Use JSON formatting for logging. This option is ignored if log_config_append
# is set. (boolean value)
#use_json = false
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = false
# Log output to Windows Event Log. (boolean value)
#use_eventlog = false
# The amount of time before the log files are rotated. This option is ignored
# unless log_rotation_type is set to "interval". (integer value)
#log_rotate_interval = 1
# Rotation interval type. The time of the last file change (or the time when
# the service was started) is used when scheduling the next rotation. (string
# value)
# Possible values:
# Seconds - <No description provided>
# Minutes - <No description provided>
# Hours - <No description provided>
# Days - <No description provided>
# Weekday - <No description provided>
# Midnight - <No description provided>
#log_rotate_interval_type = days
# Maximum number of rotated log files. (integer value)
#max_logfile_count = 30
# Log file maximum size in MB. This option is ignored if "log_rotation_type" is
# not set to "size". (integer value)
#max_logfile_size_mb = 200
# Log rotation type. (string value)
# Possible values:
# interval - Rotate logs at predefined time intervals.
# size - Rotate logs once they reach a predefined size.
# none - Do not rotate log files.
#log_rotation_type = none
# Format string to use for log messages with context. Used by
# oslo_log.formatters.ContextFormatter (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. Used by
# oslo_log.formatters.ContextFormatter (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. Used by oslo_log.formatters.ContextFormatter (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. Used by
# oslo_log.formatters.ContextFormatter (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. Used by oslo_log.formatters.ContextFormatter
# (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,oslo_policy=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Interval, number of seconds, of log rate limiting. (integer value)
#rate_limit_interval = 0
# Maximum number of logged messages per rate_limit_interval. (integer value)
#rate_limit_burst = 0
# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
# or empty string. Logs with level greater or equal to rate_limit_except_level
# are not filtered. An empty string means that all levels are filtered. (string
# value)
#rate_limit_except_level = CRITICAL
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[cors]
#
# From oslo.middleware
#
# Indicate whether this resource may be shared with the domain received in the
# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers =
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers =
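Uncommented, a typical `[cors]` section might look like the following; the origin is a placeholder, not a real deployment value:

```ini
[cors]
# Illustrative values only; the origin below is a placeholder.
allowed_origin = https://dashboard.example.com
allow_credentials = true
allow_methods = GET,POST
max_age = 3600
```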
[healthcheck]
#
# From oslo.middleware
#
# DEPRECATED: The path to respond to healthcheck requests on. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#path = /healthcheck
# Show more detailed information as part of the response. Security note:
# Enabling this option may expose sensitive details about the service being
# monitored. Be sure to verify that it will not violate your security policies.
# (boolean value)
#detailed = false
# Additional backends that can perform health checks and report that
# information back as part of a request. (list value)
#backends =
# Check the presence of a file to determine if an application is running on a
# port. Used by DisableByFileHealthcheck plugin. (string value)
#disable_by_file_path = <None>
# Check the presence of a file based on a port to determine if an application
# is running on a port. Expects a "port:path" list of strings. Used by
# DisableByFilesPortsHealthcheck plugin. (list value)
#disable_by_file_paths =
[keystone_authtoken]
#
# From armada.conf
#
# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile = <None>
# PEM encoded client certificate cert file (string value)
#certfile = <None>
# PEM encoded client certificate key file (string value)
#keyfile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# Timeout value for http requests (integer value)
#timeout = <None>
# Collect per-API call timing information. (boolean value)
#collect_timing = false
# Log requests to multiple loggers. (boolean value)
#split_loggers = false
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
# Authentication URL (string value)
#auth_url = <None>
# Scope for system operations (string value)
#system_scope = <None>
# Domain ID to scope to (string value)
#domain_id = <None>
# Domain name to scope to (string value)
#domain_name = <None>
# Project ID to scope to (string value)
#project_id = <None>
# Project name to scope to (string value)
#project_name = <None>
# Domain ID containing project (string value)
#project_domain_id = <None>
# Domain name containing project (string value)
#project_domain_name = <None>
# Trust ID (string value)
#trust_id = <None>
# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>
# Optional domain name to use with v3 API and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>
# User ID (string value)
#user_id = <None>
# Username (string value)
# Deprecated group/name - [keystone_authtoken]/user_name
#username = <None>
# User's domain id (string value)
#user_domain_id = <None>
# User's domain name (string value)
#user_domain_name = <None>
# User's password (string value)
#password = <None>
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
# Deprecated group/name - [keystone_authtoken]/auth_uri
#www_authenticate_uri = <None>
# DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
# be an "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. This option
# is deprecated in favor of www_authenticate_uri and will be removed in the S
# release. (string value)
# This option is deprecated for removal since Queens.
# Its value may be silently ignored in the future.
# Reason: The auth_uri option is deprecated in favor of www_authenticate_uri
# and will be removed in the S release.
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# How many times are we trying to reconnect when communicating with Identity
# API Server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# DEPRECATED: Directory used to cache files related to PKI tokens. This option
# has been deprecated in the Ocata release and will be removed in the P
# release. (string value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300
# DEPRECATED: Determines the frequency at which the list of revoked tokens is
# retrieved from the Identity service (in seconds). A high number of revocation
# events combined with a low cache duration may significantly reduce
# performance. Only valid for PKI tokens. This option has been deprecated in
# the Ocata release and will be removed in the P release. (integer value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Possible values:
# None - <No description provided>
# MAC - <No description provided>
# ENCRYPT - <No description provided>
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not. "strict" like "permissive" but if the bind type is unknown the token
# will be rejected. "required" any form of token binding is needed to be
# allowed. Finally the name of a binding method that must be present in tokens.
# (string value)
#enforce_token_bind = permissive
# DEPRECATED: If true, the revocation list will be checked for cached tokens.
# This requires that PKI tokens are configured on the identity server. (boolean
# value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#check_revocations_for_cached = false
# DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those supported by Python
# standard hashlib.new(). The hashes will be tried in the order given, so put
# the preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
# This option is deprecated for removal since Ocata.
# Its value may be silently ignored in the future.
# Reason: PKI token format is no longer supported.
#hash_algorithms = md5
# A choice of roles that must be present in a service token. Service tokens are
# allowed to request that an expired token can be used and so this check should
# tightly control that only actual services should be sending this token. Roles
# here are applied as an ANY check so any role in this list must be present.
# For backwards compatibility reasons this currently only affects the
# allow_expired check. (list value)
#service_token_roles = service
# For backwards compatibility reasons we must let valid service tokens pass
# that don't pass the service_token_roles check as valid. Setting this true
# will become the default in a future release and should be enabled if
# possible. (boolean value)
#service_token_roles_required = false
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
[oslo_middleware]
#
# From oslo.middleware
#
# The maximum body size for each request, in bytes. (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
# Deprecated group/name - [DEFAULT]/max_request_body_size
#max_request_body_size = 114688
# DEPRECATED: The HTTP Header that will be used to determine what the original
# request protocol scheme was, even if it was hidden by a SSL termination
# proxy. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#secure_proxy_ssl_header = X-Forwarded-Proto
# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false
[oslo_policy]
#
# From oslo.policy
#
# This option controls whether or not to enforce scope when evaluating
# policies. If ``True``, the scope of the token used in the request is compared
# to the ``scope_types`` of the policy being enforced. If the scopes do not
# match, an ``InvalidScope`` exception will be raised. If ``False``, a message
# will be logged informing operators that policies are being invoked with
# mismatching scope. (boolean value)
#enforce_scope = false
# This option controls whether or not to use old deprecated defaults when
# evaluating policies. If ``True``, the old deprecated defaults are not going
# to be evaluated. This means if any existing token is allowed for old defaults
# but is disallowed for new defaults, it will be disallowed. It is encouraged
# to enable this flag along with the ``enforce_scope`` flag so that you can get
# the benefits of new defaults and ``scope_type`` together (boolean value)
#enforce_new_defaults = false
# The relative or absolute path of a file that maps roles to permissions for a
# given service. Relative paths must be specified in relation to the
# configuration file setting this option. (string value)
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
#policy_dirs = policy.d
# Content Type to send and receive data for REST based policy check (string
# value)
# Possible values:
# application/x-www-form-urlencoded - <No description provided>
# application/json - <No description provided>
#remote_content_type = application/x-www-form-urlencoded
# server identity verification for REST based policy check (boolean value)
#remote_ssl_verify_server_crt = false
# Absolute path to ca cert file for REST based policy check (string value)
#remote_ssl_ca_crt_file = <None>
# Absolute path to client cert for REST based policy check (string value)
#remote_ssl_client_crt_file = <None>
# Absolute path to client key file for REST based policy check (string value)
#remote_ssl_client_key_file = <None>
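Pulling a few of the options above together, a minimal uncommented configuration might look like this. Every endpoint, path, and credential below is a placeholder for illustration:

```ini
[DEFAULT]
# Placeholders only: substitute your own Keystone endpoint and paths.
auth_url = http://keystone.example.com/v3
kubernetes_config_path = /root/.kube/
lock_acquire_timeout = 60
lock_expiration = 600

[keystone_authtoken]
auth_type = password
auth_url = http://keystone.example.com/v3
project_name = admin
username = armada
password = secret
```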

@@ -0,0 +1,32 @@
#"admin_required": "role:admin or role:admin_ucp"
#"service_or_admin": "rule:admin_required or rule:service_role"
#"service_role": "role:service"
#"admin_viewer": "role:admin_ucp_viewer or rule:service_or_admin"
# Install manifest charts
# POST /api/v1.0/apply/
#"armada:create_endpoints": "rule:admin_required"
# Validate manifest
# POST /api/v1.0/validatedesign/
#"armada:validate_manifest": "rule:admin_viewer"
# Test release
# GET /api/v1.0/test/{release}
#"armada:test_release": "rule:admin_required"
# Test manifest
# POST /api/v1.0/tests/
#"armada:test_manifest": "rule:admin_required"
# Get helm releases
# GET /api/v1.0/releases/
#"armada:get_release": "rule:admin_viewer"
# Get Tiller status
# GET /api/v1.0/status/
#"tiller:get_status": "rule:admin_viewer"
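To grant a custom role read-only access, the commented defaults above could be overridden along these lines; the role name is illustrative:

```yaml
# Illustrative override: allow a custom read-only role to view releases
# and validate manifests, while leaving apply admin-only.
"admin_viewer": "role:armada_viewer or rule:service_or_admin"
"armada:get_release": "rule:admin_viewer"
"armada:validate_manifest": "rule:admin_viewer"
```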

@@ -33,8 +33,8 @@ Commands
Options:
--api Contacts service endpoint.
--disable-update-post Disable post-update Tiller operations.
--disable-update-pre Disable pre-update Tiller operations.
--disable-update-post Disable post-update Helm operations.
--disable-update-pre Disable pre-update Helm operations.
--enable-chart-cleanup Clean up unmanaged charts.
--metrics-output TEXT The output path for metric data
--use-doc-ref Use armada manifest file reference.
@@ -44,15 +44,12 @@ Commands
primitive or
<path>:<to>:<property>=<value1>,...,<valueN>
to specify a list of values.
--tiller-host TEXT Tiller host IP.
--tiller-port INTEGER Tiller host port.
-tn, --tiller-namespace TEXT Tiller namespace.
--timeout INTEGER Specifies time to wait for each chart to fully
finish deploying.
-f, --values TEXT Use to override multiple Armada Manifest
values by reading overrides from a
values.yaml-type file.
--wait Force Tiller to wait until all charts are
--wait Force Helm to wait until all charts are
deployed, rather than using each chart's
specified wait policy. This is equivalent to
sequenced chartgroups.
@@ -67,7 +64,7 @@ Synopsis
--------
The apply command will consume an Armada manifest which contains a group of charts
that it will deploy into the tiller service in your Kubernetes cluster.
that it will deploy via the Helm CLI into your Kubernetes cluster.
Executing ``armada apply`` again on an existing Armada deployment will start
an update of the Armada-deployed charts.

@@ -12,5 +12,4 @@ Commands Guide
apply.rst
test.rst
tiller.rst
validate.rst

@@ -11,7 +11,6 @@ Commands
This command tests deployed charts
The tiller command uses flags to obtain information from tiller services.
The test command will run the release chart tests either via a
manifest or by targeting a release.
@@ -28,9 +27,6 @@ Commands
--enable-all Run disabled chart tests
--file TEXT armada manifest
--release TEXT helm release
--tiller-host TEXT Tiller Host IP
--tiller-port INTEGER Tiller Host Port
-tn, --tiller-namespace TEXT Tiller Namespace
--target-manifest TEXT The target manifest to run. Required for
specifying which manifest to run when multiple
are available.

View File

@ -1,37 +0,0 @@
Armada - Tiller
===============
Commands
--------
.. code:: bash
Usage: armada tiller [OPTIONS]
This command gets tiller information
The tiller command uses flags to obtain information from tiller services
To obtain armada deployed releases:
$ armada tiller --releases
To obtain tiller service status/information:
$ armada tiller --status
Options:
--tiller-host TEXT Tiller host ip
--tiller-port INTEGER Tiller host port
-tn, --tiller-namespace TEXT Tiller namespace
--releases list of deployed releases
--status Status of Armada services
--bearer-token User bearer token
--help Show this message and exit.
Synopsis
--------
The tiller command will perform command directly with tiller to check if tiller
in the cluster is running and the list of releases in tiller cluster.

View File

@ -55,13 +55,12 @@ Manual Installation
Pre-requisites
^^^^^^^^^^^^^^
Armada has many pre-requisites because it relies on `Helm`_, which itself
has pre-requisites. The guide below consolidates the installation of all
pre-requisites. For help troubleshooting individual resources, reference
their installation guides.
The guide below consolidates the installation of all pre-requisites.
For help troubleshooting individual resources, reference their
installation guides.
Armada requires a Kubernetes cluster to be deployed, along with `kubectl`_,
`Helm`_ client, and `Tiller`_ (the Helm server).
Armada requires a Kubernetes cluster to be deployed, along with `kubectl`_
and `Helm`_.
#. Install Kubernetes (k8s) and deploy a k8s cluster.
@ -80,14 +79,6 @@ Armada requires a Kubernetes cluster to be deployed, along with `kubectl`_,
#. Install and configure the `Helm`_ client.
#. Install and configure `Tiller`_ (Helm server).
#. Verify that Tiller is installed and running correctly by running:
::
$ kubectl get pods -n kube-system
.. _k8s-cluster-management:
Kubernetes Cluster Management
@ -159,14 +150,13 @@ Armada API Server Installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Armada API server is not required in order to use the Armada CLI,
which in this sense is standalone. The Armada CLI communicates with the Tiller
server and, as such, no API server needs to be instantiated in order for
Armada to communicate with Tiller. The Armada API server and CLI interface
which in this sense is standalone. The Armada CLI communicates with the Helm
CLI. The Armada API server and CLI interface
have the exact same functionality. However, the Armada API server offers the
following additional functionality:
* Role-Based Access Control, allowing Armada to provide authorization around
specific Armada (and by extension) Tiller functionality.
specific Armada functionality.
* `Keystone`_ authentication and project scoping, providing an additional
layer of security.
@ -318,6 +308,5 @@ included beneath each bullet.
.. _Bandit: https://opendev.org/openstack/bandit
.. _kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
.. _Helm: https://docs.helm.sh/using_helm/#installing-helm
.. _Keystone: https://opendev.org/openstack/keystone
.. _Tiller: https://docs.helm.sh/using_helm/#easy-in-cluster-installation
.. _Helm: https://docs.helm.sh

View File

@ -96,10 +96,10 @@ Breaking changes
4. ``wait.native.enabled`` is now disabled by default. With the above changes,
this is no longer useful as a backup mechanism. Having both enabled leads to
ambiguity in which wait would fail in each case. More importantly, this must
be disabled in order to use the ``min_ready`` functionality, otherwise tiller
be disabled in order to use the ``min_ready`` functionality, otherwise Helm
will wait for 100% anyway. So this prevents accidentally leaving it enabled
in that case. Also when the tiller native wait times out, this caused the
release to be marked FAILED by tiller, which caused it to be purged and
in that case. Also, when the Helm native wait times out, this caused the
release to be marked FAILED by Helm, which caused it to be purged and
re-installed (unless protected), even though the wait criteria may have
eventually succeeded, which is already validated by armada on a retry.

View File

@ -441,7 +441,6 @@ Armada - Deploy Behavior
1. Armada will perform a set of pre-flight checks before applying the manifest
- validate input manifest
- check tiller service is Running
- check chart source locations are valid
2. Deploying Armada Manifest

View File

@ -26,7 +26,7 @@ Charts
Charts are the smallest building blocks in Armada. A ``Chart`` is
comparable to a Helm chart. Charts consist of all the labels, dependencies,
install and upgrade information, hooks and additional information needed to
convey to Tiller.
convey to Helm.
Chart Groups
------------

View File

@ -26,7 +26,7 @@ Charts
Charts are the smallest building blocks in Armada. A ``Chart`` is
comparable to a Helm chart. Charts consist of all the labels, dependencies,
install and upgrade information, hooks and additional information needed to
convey to Tiller.
convey to Helm.
Chart Groups
------------

View File

@ -25,5 +25,4 @@ Armada Exceptions
.. include:: manifest-exceptions.inc
.. include:: override-exceptions.inc
.. include:: source-exceptions.inc
.. include:: tiller-exceptions.inc
.. include:: validate-exceptions.inc

View File

@ -1,40 +0,0 @@
..
Copyright 2017 AT&T Intellectual Property.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Tiller Exceptions
-----------------
.. currentmodule:: armada.exceptions.tiller_exceptions
.. autoexception:: ChannelException
:members:
:show-inheritance:
:undoc-members:
.. autoexception:: GetReleaseStatusException
:members:
:show-inheritance:
:undoc-members:
.. autoexception:: ReleaseException
:members:
:show-inheritance:
:undoc-members:
.. autoexception:: TillerListReleasesPagingException
:members:
:show-inheritance:
:undoc-members:

View File

@ -25,5 +25,4 @@ Usage
**helm <Name> <action> [options]**
::
helm armada tiller --status
helm armada apply ~/.helm/plugins/armada/examples/simple.yaml

View File

@ -1,19 +1,3 @@
Armada - Troubleshooting
========================
Debugging Pods
--------------
Before starting to work in armada we need to check that the tiller pod is active and running.
.. code:: bash
kubectl get pods -n kube-system | grep tiller
.. code:: bash
armada tiller --status
Checking Logs
-------------

View File

@ -8,7 +8,7 @@ Prerequisites
Kubernetes Cluster
`Tiller Service <https://github.com/kubernetes/helm>`_
`Helm <https://docs.helm.sh>`_
.. todo:: point this to v2 docs once they're stable
@ -74,32 +74,20 @@ b. Helm Install
helm install armada <registry>/armada --namespace armada
3. Check that tiller is Available
.. code:: bash
docker exec armada armada tiller --status
4. If tiller is up then we can start deploying our armada yamls
3. Deploy armada yamls
.. code:: bash
docker exec armada armada apply /examples/openstack-helm.yaml [ --debug ]
5. Upgrading charts: modify the armada yaml or chart source code and run ``armada
4. Upgrading charts: modify the armada yaml or chart source code and run ``armada
apply`` above
.. code:: bash
docker exec armada armada apply /examples/openstack-helm.yaml [ --debug ]
6. To check deployed releases:
.. code:: bash
docker exec armada armada tiller --releases
7. Testing Releases:
5. Testing Releases:
.. code:: bash
@ -233,10 +221,7 @@ like openstack-keystone.
armada apply --bearer-token [ TOKEN ] --values [ path_to_yaml ] [ FILE ]
armada tiller --bearer-token [ TOKEN ] --status
.. note::
The bearer token option is available for the following commands
armada apply,
armada tiller

View File

@ -31,18 +31,6 @@
# includes the name of the key itself. (string value)
#ssh_key_path = /home/user/.ssh/
# Labels for the Tiller pod. (string value)
#tiller_pod_labels = app=helm,name=tiller
# Namespace for the Tiller pod. (string value)
#tiller_namespace = kube-system
# Port for the Tiller pod. (integer value)
#tiller_port = 44134
# IDs of approved API access roles. (list value)
#tiller_release_roles = admin
#
# From oslo.log
#

View File

@ -32,4 +32,4 @@
# Get Tiller status
# GET /api/v1.0/status/
#"tiller:get_status": "rule:admin_viewer"
#"tiller:get_status": "rule:admin_viewer"

View File

View File

View File

@ -1,108 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/chart/chart.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from hapi.chart import config_pb2 as hapi_dot_chart_dot_config__pb2
from hapi.chart import metadata_pb2 as hapi_dot_chart_dot_metadata__pb2
from hapi.chart import template_pb2 as hapi_dot_chart_dot_template__pb2
from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/chart/chart.proto',
package='hapi.chart',
syntax='proto3',
serialized_options=_b('Z\005chart'),
serialized_pb=_b('\n\x16hapi/chart/chart.proto\x12\nhapi.chart\x1a\x17hapi/chart/config.proto\x1a\x19hapi/chart/metadata.proto\x1a\x19hapi/chart/template.proto\x1a\x19google/protobuf/any.proto\"\xca\x01\n\x05\x43hart\x12&\n\x08metadata\x18\x01 \x01(\x0b\x32\x14.hapi.chart.Metadata\x12\'\n\ttemplates\x18\x02 \x03(\x0b\x32\x14.hapi.chart.Template\x12\'\n\x0c\x64\x65pendencies\x18\x03 \x03(\x0b\x32\x11.hapi.chart.Chart\x12\"\n\x06values\x18\x04 \x01(\x0b\x32\x12.hapi.chart.Config\x12#\n\x05\x66iles\x18\x05 \x03(\x0b\x32\x14.google.protobuf.AnyB\x07Z\x05\x63hartb\x06proto3')
,
dependencies=[hapi_dot_chart_dot_config__pb2.DESCRIPTOR,hapi_dot_chart_dot_metadata__pb2.DESCRIPTOR,hapi_dot_chart_dot_template__pb2.DESCRIPTOR,google_dot_protobuf_dot_any__pb2.DESCRIPTOR,])
_CHART = _descriptor.Descriptor(
name='Chart',
full_name='hapi.chart.Chart',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='metadata', full_name='hapi.chart.Chart.metadata', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='templates', full_name='hapi.chart.Chart.templates', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='dependencies', full_name='hapi.chart.Chart.dependencies', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='values', full_name='hapi.chart.Chart.values', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='files', full_name='hapi.chart.Chart.files', index=4,
number=5, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=145,
serialized_end=347,
)
_CHART.fields_by_name['metadata'].message_type = hapi_dot_chart_dot_metadata__pb2._METADATA
_CHART.fields_by_name['templates'].message_type = hapi_dot_chart_dot_template__pb2._TEMPLATE
_CHART.fields_by_name['dependencies'].message_type = _CHART
_CHART.fields_by_name['values'].message_type = hapi_dot_chart_dot_config__pb2._CONFIG
_CHART.fields_by_name['files'].message_type = google_dot_protobuf_dot_any__pb2._ANY
DESCRIPTOR.message_types_by_name['Chart'] = _CHART
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Chart = _reflection.GeneratedProtocolMessageType('Chart', (_message.Message,), dict(
DESCRIPTOR = _CHART,
__module__ = 'hapi.chart.chart_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Chart)
))
_sym_db.RegisterMessage(Chart)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)

View File

@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc

View File

@ -1,165 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/chart/config.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/chart/config.proto',
package='hapi.chart',
syntax='proto3',
serialized_options=_b('Z\005chart'),
serialized_pb=_b('\n\x17hapi/chart/config.proto\x12\nhapi.chart\"\x87\x01\n\x06\x43onfig\x12\x0b\n\x03raw\x18\x01 \x01(\t\x12.\n\x06values\x18\x02 \x03(\x0b\x32\x1e.hapi.chart.Config.ValuesEntry\x1a@\n\x0bValuesEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12 \n\x05value\x18\x02 \x01(\x0b\x32\x11.hapi.chart.Value:\x02\x38\x01\"\x16\n\x05Value\x12\r\n\x05value\x18\x01 \x01(\tB\x07Z\x05\x63hartb\x06proto3')
)
_CONFIG_VALUESENTRY = _descriptor.Descriptor(
name='ValuesEntry',
full_name='hapi.chart.Config.ValuesEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='hapi.chart.Config.ValuesEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='value', full_name='hapi.chart.Config.ValuesEntry.value', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=_b('8\001'),
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=111,
serialized_end=175,
)
_CONFIG = _descriptor.Descriptor(
name='Config',
full_name='hapi.chart.Config',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='raw', full_name='hapi.chart.Config.raw', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='values', full_name='hapi.chart.Config.values', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[_CONFIG_VALUESENTRY, ],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=40,
serialized_end=175,
)
_VALUE = _descriptor.Descriptor(
name='Value',
full_name='hapi.chart.Value',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='value', full_name='hapi.chart.Value.value', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=177,
serialized_end=199,
)
_CONFIG_VALUESENTRY.fields_by_name['value'].message_type = _VALUE
_CONFIG_VALUESENTRY.containing_type = _CONFIG
_CONFIG.fields_by_name['values'].message_type = _CONFIG_VALUESENTRY
DESCRIPTOR.message_types_by_name['Config'] = _CONFIG
DESCRIPTOR.message_types_by_name['Value'] = _VALUE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Config = _reflection.GeneratedProtocolMessageType('Config', (_message.Message,), dict(
ValuesEntry = _reflection.GeneratedProtocolMessageType('ValuesEntry', (_message.Message,), dict(
DESCRIPTOR = _CONFIG_VALUESENTRY,
__module__ = 'hapi.chart.config_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Config.ValuesEntry)
))
,
DESCRIPTOR = _CONFIG,
__module__ = 'hapi.chart.config_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Config)
))
_sym_db.RegisterMessage(Config)
_sym_db.RegisterMessage(Config.ValuesEntry)
Value = _reflection.GeneratedProtocolMessageType('Value', (_message.Message,), dict(
DESCRIPTOR = _VALUE,
__module__ = 'hapi.chart.config_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Value)
))
_sym_db.RegisterMessage(Value)
DESCRIPTOR._options = None
_CONFIG_VALUESENTRY._options = None
# @@protoc_insertion_point(module_scope)

View File

@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc

View File

@ -1,308 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/chart/metadata.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/chart/metadata.proto',
package='hapi.chart',
syntax='proto3',
serialized_options=_b('Z\005chart'),
serialized_pb=_b('\n\x19hapi/chart/metadata.proto\x12\nhapi.chart\"6\n\nMaintainer\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\r\n\x05\x65mail\x18\x02 \x01(\t\x12\x0b\n\x03url\x18\x03 \x01(\t\"\xd5\x03\n\x08Metadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04home\x18\x02 \x01(\t\x12\x0f\n\x07sources\x18\x03 \x03(\t\x12\x0f\n\x07version\x18\x04 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x05 \x01(\t\x12\x10\n\x08keywords\x18\x06 \x03(\t\x12+\n\x0bmaintainers\x18\x07 \x03(\x0b\x32\x16.hapi.chart.Maintainer\x12\x0e\n\x06\x65ngine\x18\x08 \x01(\t\x12\x0c\n\x04icon\x18\t \x01(\t\x12\x12\n\napiVersion\x18\n \x01(\t\x12\x11\n\tcondition\x18\x0b \x01(\t\x12\x0c\n\x04tags\x18\x0c \x01(\t\x12\x12\n\nappVersion\x18\r \x01(\t\x12\x12\n\ndeprecated\x18\x0e \x01(\x08\x12\x15\n\rtillerVersion\x18\x0f \x01(\t\x12:\n\x0b\x61nnotations\x18\x10 \x03(\x0b\x32%.hapi.chart.Metadata.AnnotationsEntry\x12\x13\n\x0bkubeVersion\x18\x11 \x01(\t\x1a\x32\n\x10\x41nnotationsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\" \n\x06\x45ngine\x12\x0b\n\x07UNKNOWN\x10\x00\x12\t\n\x05GOTPL\x10\x01\x42\x07Z\x05\x63hartb\x06proto3')
)
_METADATA_ENGINE = _descriptor.EnumDescriptor(
name='Engine',
full_name='hapi.chart.Metadata.Engine',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GOTPL', index=1, number=1,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=535,
serialized_end=567,
)
_sym_db.RegisterEnumDescriptor(_METADATA_ENGINE)
_MAINTAINER = _descriptor.Descriptor(
name='Maintainer',
full_name='hapi.chart.Maintainer',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='hapi.chart.Maintainer.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='email', full_name='hapi.chart.Maintainer.email', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='url', full_name='hapi.chart.Maintainer.url', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=41,
serialized_end=95,
)
_METADATA_ANNOTATIONSENTRY = _descriptor.Descriptor(
name='AnnotationsEntry',
full_name='hapi.chart.Metadata.AnnotationsEntry',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='hapi.chart.Metadata.AnnotationsEntry.key', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='value', full_name='hapi.chart.Metadata.AnnotationsEntry.value', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=_b('8\001'),
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=483,
serialized_end=533,
)
_METADATA = _descriptor.Descriptor(
name='Metadata',
full_name='hapi.chart.Metadata',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='hapi.chart.Metadata.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='home', full_name='hapi.chart.Metadata.home', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='sources', full_name='hapi.chart.Metadata.sources', index=2,
number=3, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='version', full_name='hapi.chart.Metadata.version', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='description', full_name='hapi.chart.Metadata.description', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='keywords', full_name='hapi.chart.Metadata.keywords', index=5,
number=6, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='maintainers', full_name='hapi.chart.Metadata.maintainers', index=6,
number=7, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='engine', full_name='hapi.chart.Metadata.engine', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='icon', full_name='hapi.chart.Metadata.icon', index=8,
number=9, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='apiVersion', full_name='hapi.chart.Metadata.apiVersion', index=9,
number=10, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='condition', full_name='hapi.chart.Metadata.condition', index=10,
number=11, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tags', full_name='hapi.chart.Metadata.tags', index=11,
number=12, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='appVersion', full_name='hapi.chart.Metadata.appVersion', index=12,
number=13, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='deprecated', full_name='hapi.chart.Metadata.deprecated', index=13,
number=14, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='tillerVersion', full_name='hapi.chart.Metadata.tillerVersion', index=14,
number=15, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='annotations', full_name='hapi.chart.Metadata.annotations', index=15,
number=16, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='kubeVersion', full_name='hapi.chart.Metadata.kubeVersion', index=16,
number=17, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[_METADATA_ANNOTATIONSENTRY, ],
enum_types=[
_METADATA_ENGINE,
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=98,
serialized_end=567,
)
_METADATA_ANNOTATIONSENTRY.containing_type = _METADATA
_METADATA.fields_by_name['maintainers'].message_type = _MAINTAINER
_METADATA.fields_by_name['annotations'].message_type = _METADATA_ANNOTATIONSENTRY
_METADATA_ENGINE.containing_type = _METADATA
DESCRIPTOR.message_types_by_name['Maintainer'] = _MAINTAINER
DESCRIPTOR.message_types_by_name['Metadata'] = _METADATA
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Maintainer = _reflection.GeneratedProtocolMessageType('Maintainer', (_message.Message,), dict(
DESCRIPTOR = _MAINTAINER,
__module__ = 'hapi.chart.metadata_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Maintainer)
))
_sym_db.RegisterMessage(Maintainer)
Metadata = _reflection.GeneratedProtocolMessageType('Metadata', (_message.Message,), dict(
AnnotationsEntry = _reflection.GeneratedProtocolMessageType('AnnotationsEntry', (_message.Message,), dict(
DESCRIPTOR = _METADATA_ANNOTATIONSENTRY,
__module__ = 'hapi.chart.metadata_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Metadata.AnnotationsEntry)
))
,
DESCRIPTOR = _METADATA,
__module__ = 'hapi.chart.metadata_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Metadata)
))
_sym_db.RegisterMessage(Metadata)
_sym_db.RegisterMessage(Metadata.AnnotationsEntry)
DESCRIPTOR._options = None
_METADATA_ANNOTATIONSENTRY._options = None
# @@protoc_insertion_point(module_scope)

View File

@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc

View File

@ -1,77 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/chart/template.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/chart/template.proto',
package='hapi.chart',
syntax='proto3',
serialized_options=_b('Z\005chart'),
serialized_pb=_b('\n\x19hapi/chart/template.proto\x12\nhapi.chart\"&\n\x08Template\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\x42\x07Z\x05\x63hartb\x06proto3')
)
_TEMPLATE = _descriptor.Descriptor(
name='Template',
full_name='hapi.chart.Template',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='hapi.chart.Template.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='data', full_name='hapi.chart.Template.data', index=1,
number=2, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=41,
serialized_end=79,
)
DESCRIPTOR.message_types_by_name['Template'] = _TEMPLATE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Template = _reflection.GeneratedProtocolMessageType('Template', (_message.Message,), dict(
DESCRIPTOR = _TEMPLATE,
__module__ = 'hapi.chart.template_pb2'
# @@protoc_insertion_point(class_scope:hapi.chart.Template)
))
_sym_db.RegisterMessage(Template)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@ -1,223 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/release/hook.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/release/hook.proto',
package='hapi.release',
syntax='proto3',
serialized_options=_b('Z\007release'),
serialized_pb=_b('\n\x17hapi/release/hook.proto\x12\x0chapi.release\x1a\x1fgoogle/protobuf/timestamp.proto\"\xa9\x04\n\x04Hook\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04kind\x18\x02 \x01(\t\x12\x0c\n\x04path\x18\x03 \x01(\t\x12\x10\n\x08manifest\x18\x04 \x01(\t\x12(\n\x06\x65vents\x18\x05 \x03(\x0e\x32\x18.hapi.release.Hook.Event\x12,\n\x08last_run\x18\x06 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x0e\n\x06weight\x18\x07 \x01(\x05\x12\x38\n\x0f\x64\x65lete_policies\x18\x08 \x03(\x0e\x32\x1f.hapi.release.Hook.DeletePolicy\x12\x16\n\x0e\x64\x65lete_timeout\x18\t \x01(\x03\"\xe5\x01\n\x05\x45vent\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x0f\n\x0bPRE_INSTALL\x10\x01\x12\x10\n\x0cPOST_INSTALL\x10\x02\x12\x0e\n\nPRE_DELETE\x10\x03\x12\x0f\n\x0bPOST_DELETE\x10\x04\x12\x0f\n\x0bPRE_UPGRADE\x10\x05\x12\x10\n\x0cPOST_UPGRADE\x10\x06\x12\x10\n\x0cPRE_ROLLBACK\x10\x07\x12\x11\n\rPOST_ROLLBACK\x10\x08\x12\x18\n\x14RELEASE_TEST_SUCCESS\x10\t\x12\x18\n\x14RELEASE_TEST_FAILURE\x10\n\x12\x0f\n\x0b\x43RD_INSTALL\x10\x0b\"C\n\x0c\x44\x65letePolicy\x12\r\n\tSUCCEEDED\x10\x00\x12\n\n\x06\x46\x41ILED\x10\x01\x12\x18\n\x14\x42\x45\x46ORE_HOOK_CREATION\x10\x02\x42\tZ\x07releaseb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,])
_HOOK_EVENT = _descriptor.EnumDescriptor(
name='Event',
full_name='hapi.release.Hook.Event',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PRE_INSTALL', index=1, number=1,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='POST_INSTALL', index=2, number=2,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PRE_DELETE', index=3, number=3,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='POST_DELETE', index=4, number=4,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PRE_UPGRADE', index=5, number=5,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='POST_UPGRADE', index=6, number=6,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PRE_ROLLBACK', index=7, number=7,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='POST_ROLLBACK', index=8, number=8,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RELEASE_TEST_SUCCESS', index=9, number=9,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RELEASE_TEST_FAILURE', index=10, number=10,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='CRD_INSTALL', index=11, number=11,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=330,
serialized_end=559,
)
_sym_db.RegisterEnumDescriptor(_HOOK_EVENT)
_HOOK_DELETEPOLICY = _descriptor.EnumDescriptor(
name='DeletePolicy',
full_name='hapi.release.Hook.DeletePolicy',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='SUCCEEDED', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FAILED', index=1, number=1,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='BEFORE_HOOK_CREATION', index=2, number=2,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=561,
serialized_end=628,
)
_sym_db.RegisterEnumDescriptor(_HOOK_DELETEPOLICY)
_HOOK = _descriptor.Descriptor(
name='Hook',
full_name='hapi.release.Hook',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='hapi.release.Hook.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='kind', full_name='hapi.release.Hook.kind', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='path', full_name='hapi.release.Hook.path', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='manifest', full_name='hapi.release.Hook.manifest', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='events', full_name='hapi.release.Hook.events', index=4,
number=5, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='last_run', full_name='hapi.release.Hook.last_run', index=5,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='weight', full_name='hapi.release.Hook.weight', index=6,
number=7, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='delete_policies', full_name='hapi.release.Hook.delete_policies', index=7,
number=8, type=14, cpp_type=8, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='delete_timeout', full_name='hapi.release.Hook.delete_timeout', index=8,
number=9, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
_HOOK_EVENT,
_HOOK_DELETEPOLICY,
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=75,
serialized_end=628,
)
_HOOK.fields_by_name['events'].enum_type = _HOOK_EVENT
_HOOK.fields_by_name['last_run'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_HOOK.fields_by_name['delete_policies'].enum_type = _HOOK_DELETEPOLICY
_HOOK_EVENT.containing_type = _HOOK
_HOOK_DELETEPOLICY.containing_type = _HOOK
DESCRIPTOR.message_types_by_name['Hook'] = _HOOK
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Hook = _reflection.GeneratedProtocolMessageType('Hook', (_message.Message,), dict(
DESCRIPTOR = _HOOK,
__module__ = 'hapi.release.hook_pb2'
# @@protoc_insertion_point(class_scope:hapi.release.Hook)
))
_sym_db.RegisterMessage(Hook)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@ -1,105 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/release/info.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
from hapi.release import status_pb2 as hapi_dot_release_dot_status__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/release/info.proto',
package='hapi.release',
syntax='proto3',
serialized_options=_b('Z\007release'),
serialized_pb=_b('\n\x17hapi/release/info.proto\x12\x0chapi.release\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x19hapi/release/status.proto\"\xd5\x01\n\x04Info\x12$\n\x06status\x18\x01 \x01(\x0b\x32\x14.hapi.release.Status\x12\x32\n\x0e\x66irst_deployed\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x31\n\rlast_deployed\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12+\n\x07\x64\x65leted\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x13\n\x0b\x44\x65scription\x18\x05 \x01(\tB\tZ\x07releaseb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,hapi_dot_release_dot_status__pb2.DESCRIPTOR,])
_INFO = _descriptor.Descriptor(
name='Info',
full_name='hapi.release.Info',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='status', full_name='hapi.release.Info.status', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='first_deployed', full_name='hapi.release.Info.first_deployed', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='last_deployed', full_name='hapi.release.Info.last_deployed', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='deleted', full_name='hapi.release.Info.deleted', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='Description', full_name='hapi.release.Info.Description', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=102,
serialized_end=315,
)
_INFO.fields_by_name['status'].message_type = hapi_dot_release_dot_status__pb2._STATUS
_INFO.fields_by_name['first_deployed'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_INFO.fields_by_name['last_deployed'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_INFO.fields_by_name['deleted'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
DESCRIPTOR.message_types_by_name['Info'] = _INFO
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Info = _reflection.GeneratedProtocolMessageType('Info', (_message.Message,), dict(
DESCRIPTOR = _INFO,
__module__ = 'hapi.release.info_pb2'
# @@protoc_insertion_point(class_scope:hapi.release.Info)
))
_sym_db.RegisterMessage(Info)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@ -1,128 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/release/release.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from hapi.release import hook_pb2 as hapi_dot_release_dot_hook__pb2
from hapi.release import info_pb2 as hapi_dot_release_dot_info__pb2
from hapi.chart import config_pb2 as hapi_dot_chart_dot_config__pb2
from hapi.chart import chart_pb2 as hapi_dot_chart_dot_chart__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/release/release.proto',
package='hapi.release',
syntax='proto3',
serialized_options=_b('Z\007release'),
serialized_pb=_b('\n\x1ahapi/release/release.proto\x12\x0chapi.release\x1a\x17hapi/release/hook.proto\x1a\x17hapi/release/info.proto\x1a\x17hapi/chart/config.proto\x1a\x16hapi/chart/chart.proto\"\xd8\x01\n\x07Release\x12\x0c\n\x04name\x18\x01 \x01(\t\x12 \n\x04info\x18\x02 \x01(\x0b\x32\x12.hapi.release.Info\x12 \n\x05\x63hart\x18\x03 \x01(\x0b\x32\x11.hapi.chart.Chart\x12\"\n\x06\x63onfig\x18\x04 \x01(\x0b\x32\x12.hapi.chart.Config\x12\x10\n\x08manifest\x18\x05 \x01(\t\x12!\n\x05hooks\x18\x06 \x03(\x0b\x32\x12.hapi.release.Hook\x12\x0f\n\x07version\x18\x07 \x01(\x05\x12\x11\n\tnamespace\x18\x08 \x01(\tB\tZ\x07releaseb\x06proto3')
,
dependencies=[hapi_dot_release_dot_hook__pb2.DESCRIPTOR,hapi_dot_release_dot_info__pb2.DESCRIPTOR,hapi_dot_chart_dot_config__pb2.DESCRIPTOR,hapi_dot_chart_dot_chart__pb2.DESCRIPTOR,])
_RELEASE = _descriptor.Descriptor(
name='Release',
full_name='hapi.release.Release',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='hapi.release.Release.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='info', full_name='hapi.release.Release.info', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='chart', full_name='hapi.release.Release.chart', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='config', full_name='hapi.release.Release.config', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='manifest', full_name='hapi.release.Release.manifest', index=4,
number=5, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='hooks', full_name='hapi.release.Release.hooks', index=5,
number=6, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='version', full_name='hapi.release.Release.version', index=6,
number=7, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='namespace', full_name='hapi.release.Release.namespace', index=7,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=144,
serialized_end=360,
)
_RELEASE.fields_by_name['info'].message_type = hapi_dot_release_dot_info__pb2._INFO
_RELEASE.fields_by_name['chart'].message_type = hapi_dot_chart_dot_chart__pb2._CHART
_RELEASE.fields_by_name['config'].message_type = hapi_dot_chart_dot_config__pb2._CONFIG
_RELEASE.fields_by_name['hooks'].message_type = hapi_dot_release_dot_hook__pb2._HOOK
DESCRIPTOR.message_types_by_name['Release'] = _RELEASE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Release = _reflection.GeneratedProtocolMessageType('Release', (_message.Message,), dict(
DESCRIPTOR = _RELEASE,
__module__ = 'hapi.release.release_pb2'
# @@protoc_insertion_point(class_scope:hapi.release.Release)
))
_sym_db.RegisterMessage(Release)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@ -1,148 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/release/status.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from hapi.release import test_suite_pb2 as hapi_dot_release_dot_test__suite__pb2
from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/release/status.proto',
package='hapi.release',
syntax='proto3',
serialized_options=_b('Z\007release'),
serialized_pb=_b('\n\x19hapi/release/status.proto\x12\x0chapi.release\x1a\x1dhapi/release/test_suite.proto\x1a\x19google/protobuf/any.proto\"\xa4\x02\n\x06Status\x12\'\n\x04\x63ode\x18\x01 \x01(\x0e\x32\x19.hapi.release.Status.Code\x12\x11\n\tresources\x18\x03 \x01(\t\x12\r\n\x05notes\x18\x04 \x01(\t\x12\x34\n\x13last_test_suite_run\x18\x05 \x01(\x0b\x32\x17.hapi.release.TestSuite\"\x98\x01\n\x04\x43ode\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x0c\n\x08\x44\x45PLOYED\x10\x01\x12\x0b\n\x07\x44\x45LETED\x10\x02\x12\x0e\n\nSUPERSEDED\x10\x03\x12\n\n\x06\x46\x41ILED\x10\x04\x12\x0c\n\x08\x44\x45LETING\x10\x05\x12\x13\n\x0fPENDING_INSTALL\x10\x06\x12\x13\n\x0fPENDING_UPGRADE\x10\x07\x12\x14\n\x10PENDING_ROLLBACK\x10\x08\x42\tZ\x07releaseb\x06proto3')
,
dependencies=[hapi_dot_release_dot_test__suite__pb2.DESCRIPTOR,google_dot_protobuf_dot_any__pb2.DESCRIPTOR,])
_STATUS_CODE = _descriptor.EnumDescriptor(
name='Code',
full_name='hapi.release.Status.Code',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DEPLOYED', index=1, number=1,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DELETED', index=2, number=2,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SUPERSEDED', index=3, number=3,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FAILED', index=4, number=4,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='DELETING', index=5, number=5,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PENDING_INSTALL', index=6, number=6,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PENDING_UPGRADE', index=7, number=7,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='PENDING_ROLLBACK', index=8, number=8,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=242,
serialized_end=394,
)
_sym_db.RegisterEnumDescriptor(_STATUS_CODE)
_STATUS = _descriptor.Descriptor(
name='Status',
full_name='hapi.release.Status',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='code', full_name='hapi.release.Status.code', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='resources', full_name='hapi.release.Status.resources', index=1,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='notes', full_name='hapi.release.Status.notes', index=2,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='last_test_suite_run', full_name='hapi.release.Status.last_test_suite_run', index=3,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
_STATUS_CODE,
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=102,
serialized_end=394,
)
_STATUS.fields_by_name['code'].enum_type = _STATUS_CODE
_STATUS.fields_by_name['last_test_suite_run'].message_type = hapi_dot_release_dot_test__suite__pb2._TESTSUITE
_STATUS_CODE.containing_type = _STATUS
DESCRIPTOR.message_types_by_name['Status'] = _STATUS
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Status = _reflection.GeneratedProtocolMessageType('Status', (_message.Message,), dict(
DESCRIPTOR = _STATUS,
__module__ = 'hapi.release.status_pb2'
# @@protoc_insertion_point(class_scope:hapi.release.Status)
))
_sym_db.RegisterMessage(Status)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@ -1,135 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/release/test_run.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/release/test_run.proto',
package='hapi.release',
syntax='proto3',
serialized_options=_b('Z\007release'),
serialized_pb=_b('\n\x1bhapi/release/test_run.proto\x12\x0chapi.release\x1a\x1fgoogle/protobuf/timestamp.proto\"\xf3\x01\n\x07TestRun\x12\x0c\n\x04name\x18\x01 \x01(\t\x12,\n\x06status\x18\x02 \x01(\x0e\x32\x1c.hapi.release.TestRun.Status\x12\x0c\n\x04info\x18\x03 \x01(\t\x12.\n\nstarted_at\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x30\n\x0c\x63ompleted_at\x18\x05 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\"<\n\x06Status\x12\x0b\n\x07UNKNOWN\x10\x00\x12\x0b\n\x07SUCCESS\x10\x01\x12\x0b\n\x07\x46\x41ILURE\x10\x02\x12\x0b\n\x07RUNNING\x10\x03\x42\tZ\x07releaseb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,])
_TESTRUN_STATUS = _descriptor.EnumDescriptor(
name='Status',
full_name='hapi.release.TestRun.Status',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='UNKNOWN', index=0, number=0,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SUCCESS', index=1, number=1,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='FAILURE', index=2, number=2,
serialized_options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='RUNNING', index=3, number=3,
serialized_options=None,
type=None),
],
containing_type=None,
serialized_options=None,
serialized_start=262,
serialized_end=322,
)
_sym_db.RegisterEnumDescriptor(_TESTRUN_STATUS)
_TESTRUN = _descriptor.Descriptor(
name='TestRun',
full_name='hapi.release.TestRun',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='hapi.release.TestRun.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='status', full_name='hapi.release.TestRun.status', index=1,
number=2, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='info', full_name='hapi.release.TestRun.info', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='started_at', full_name='hapi.release.TestRun.started_at', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='completed_at', full_name='hapi.release.TestRun.completed_at', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
_TESTRUN_STATUS,
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=79,
serialized_end=322,
)
_TESTRUN.fields_by_name['status'].enum_type = _TESTRUN_STATUS
_TESTRUN.fields_by_name['started_at'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_TESTRUN.fields_by_name['completed_at'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_TESTRUN_STATUS.containing_type = _TESTRUN
DESCRIPTOR.message_types_by_name['TestRun'] = _TESTRUN
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
TestRun = _reflection.GeneratedProtocolMessageType('TestRun', (_message.Message,), dict(
DESCRIPTOR = _TESTRUN,
__module__ = 'hapi.release.test_run_pb2'
# @@protoc_insertion_point(class_scope:hapi.release.TestRun)
))
_sym_db.RegisterMessage(TestRun)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@ -1,90 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/release/test_suite.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2
from hapi.release import test_run_pb2 as hapi_dot_release_dot_test__run__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/release/test_suite.proto',
package='hapi.release',
syntax='proto3',
serialized_options=_b('Z\007release'),
serialized_pb=_b('\n\x1dhapi/release/test_suite.proto\x12\x0chapi.release\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1bhapi/release/test_run.proto\"\x95\x01\n\tTestSuite\x12.\n\nstarted_at\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x30\n\x0c\x63ompleted_at\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12&\n\x07results\x18\x03 \x03(\x0b\x32\x15.hapi.release.TestRunB\tZ\x07releaseb\x06proto3')
,
dependencies=[google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,hapi_dot_release_dot_test__run__pb2.DESCRIPTOR,])
_TESTSUITE = _descriptor.Descriptor(
name='TestSuite',
full_name='hapi.release.TestSuite',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='started_at', full_name='hapi.release.TestSuite.started_at', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='completed_at', full_name='hapi.release.TestSuite.completed_at', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='results', full_name='hapi.release.TestSuite.results', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=110,
serialized_end=259,
)
_TESTSUITE.fields_by_name['started_at'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_TESTSUITE.fields_by_name['completed_at'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP
_TESTSUITE.fields_by_name['results'].message_type = hapi_dot_release_dot_test__run__pb2._TESTRUN
DESCRIPTOR.message_types_by_name['TestSuite'] = _TESTSUITE
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
TestSuite = _reflection.GeneratedProtocolMessageType('TestSuite', (_message.Message,), dict(
DESCRIPTOR = _TESTSUITE,
__module__ = 'hapi.release.test_suite_pb2'
# @@protoc_insertion_point(class_scope:hapi.release.TestSuite)
))
_sym_db.RegisterMessage(TestSuite)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)


@@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc

File diff suppressed because one or more lines are too long


@@ -1,228 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from hapi.services import tiller_pb2 as hapi_dot_services_dot_tiller__pb2
class ReleaseServiceStub(object):
"""ReleaseService is the service that a helm application uses to mutate,
query, and manage releases.
Release: A named installation composed of a chart and
config. At any given time a release has one
chart and one config.
Config: A config is a YAML file that supplies values
to the parametrizable templates of a chart.
Chart: A chart is a helm package that contains
metadata, a default config, zero or more
optionally parameterizable templates, and
zero or more charts (dependencies).
"""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ListReleases = channel.unary_stream(
'/hapi.services.tiller.ReleaseService/ListReleases',
request_serializer=hapi_dot_services_dot_tiller__pb2.ListReleasesRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.ListReleasesResponse.FromString,
)
self.GetReleaseStatus = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/GetReleaseStatus',
request_serializer=hapi_dot_services_dot_tiller__pb2.GetReleaseStatusRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.GetReleaseStatusResponse.FromString,
)
self.GetReleaseContent = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/GetReleaseContent',
request_serializer=hapi_dot_services_dot_tiller__pb2.GetReleaseContentRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.GetReleaseContentResponse.FromString,
)
self.UpdateRelease = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/UpdateRelease',
request_serializer=hapi_dot_services_dot_tiller__pb2.UpdateReleaseRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.UpdateReleaseResponse.FromString,
)
self.InstallRelease = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/InstallRelease',
request_serializer=hapi_dot_services_dot_tiller__pb2.InstallReleaseRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.InstallReleaseResponse.FromString,
)
self.UninstallRelease = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/UninstallRelease',
request_serializer=hapi_dot_services_dot_tiller__pb2.UninstallReleaseRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.UninstallReleaseResponse.FromString,
)
self.GetVersion = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/GetVersion',
request_serializer=hapi_dot_services_dot_tiller__pb2.GetVersionRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.GetVersionResponse.FromString,
)
self.RollbackRelease = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/RollbackRelease',
request_serializer=hapi_dot_services_dot_tiller__pb2.RollbackReleaseRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.RollbackReleaseResponse.FromString,
)
self.GetHistory = channel.unary_unary(
'/hapi.services.tiller.ReleaseService/GetHistory',
request_serializer=hapi_dot_services_dot_tiller__pb2.GetHistoryRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.GetHistoryResponse.FromString,
)
self.RunReleaseTest = channel.unary_stream(
'/hapi.services.tiller.ReleaseService/RunReleaseTest',
request_serializer=hapi_dot_services_dot_tiller__pb2.TestReleaseRequest.SerializeToString,
response_deserializer=hapi_dot_services_dot_tiller__pb2.TestReleaseResponse.FromString,
)
class ReleaseServiceServicer(object):
"""ReleaseService is the service that a helm application uses to mutate,
query, and manage releases.
Release: A named installation composed of a chart and
config. At any given time a release has one
chart and one config.
Config: A config is a YAML file that supplies values
to the parametrizable templates of a chart.
Chart: A chart is a helm package that contains
metadata, a default config, zero or more
optionally parameterizable templates, and
zero or more charts (dependencies).
"""
def ListReleases(self, request, context):
"""ListReleases retrieves release history.
TODO: Allow filtering the set of releases by
release status. By default, ListAllReleases returns the releases whose
current status is "Active".
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetReleaseStatus(self, request, context):
"""GetReleaseStatus retrieves status information for the specified release.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetReleaseContent(self, request, context):
"""GetReleaseContent retrieves the release content (chart + value) for the specified release.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UpdateRelease(self, request, context):
"""UpdateRelease updates release content.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def InstallRelease(self, request, context):
"""InstallRelease requests installation of a chart as a new release.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def UninstallRelease(self, request, context):
"""UninstallRelease requests deletion of a named release.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetVersion(self, request, context):
"""GetVersion returns the current version of the server.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def RollbackRelease(self, request, context):
"""RollbackRelease rolls back a release to a previous version.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetHistory(self, request, context):
"""ReleaseHistory retrieves a release's history.
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def RunReleaseTest(self, request, context):
"""RunReleaseTest executes the tests defined for a named release
"""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_ReleaseServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'ListReleases': grpc.unary_stream_rpc_method_handler(
servicer.ListReleases,
request_deserializer=hapi_dot_services_dot_tiller__pb2.ListReleasesRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.ListReleasesResponse.SerializeToString,
),
'GetReleaseStatus': grpc.unary_unary_rpc_method_handler(
servicer.GetReleaseStatus,
request_deserializer=hapi_dot_services_dot_tiller__pb2.GetReleaseStatusRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.GetReleaseStatusResponse.SerializeToString,
),
'GetReleaseContent': grpc.unary_unary_rpc_method_handler(
servicer.GetReleaseContent,
request_deserializer=hapi_dot_services_dot_tiller__pb2.GetReleaseContentRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.GetReleaseContentResponse.SerializeToString,
),
'UpdateRelease': grpc.unary_unary_rpc_method_handler(
servicer.UpdateRelease,
request_deserializer=hapi_dot_services_dot_tiller__pb2.UpdateReleaseRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.UpdateReleaseResponse.SerializeToString,
),
'InstallRelease': grpc.unary_unary_rpc_method_handler(
servicer.InstallRelease,
request_deserializer=hapi_dot_services_dot_tiller__pb2.InstallReleaseRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.InstallReleaseResponse.SerializeToString,
),
'UninstallRelease': grpc.unary_unary_rpc_method_handler(
servicer.UninstallRelease,
request_deserializer=hapi_dot_services_dot_tiller__pb2.UninstallReleaseRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.UninstallReleaseResponse.SerializeToString,
),
'GetVersion': grpc.unary_unary_rpc_method_handler(
servicer.GetVersion,
request_deserializer=hapi_dot_services_dot_tiller__pb2.GetVersionRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.GetVersionResponse.SerializeToString,
),
'RollbackRelease': grpc.unary_unary_rpc_method_handler(
servicer.RollbackRelease,
request_deserializer=hapi_dot_services_dot_tiller__pb2.RollbackReleaseRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.RollbackReleaseResponse.SerializeToString,
),
'GetHistory': grpc.unary_unary_rpc_method_handler(
servicer.GetHistory,
request_deserializer=hapi_dot_services_dot_tiller__pb2.GetHistoryRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.GetHistoryResponse.SerializeToString,
),
'RunReleaseTest': grpc.unary_stream_rpc_method_handler(
servicer.RunReleaseTest,
request_deserializer=hapi_dot_services_dot_tiller__pb2.TestReleaseRequest.FromString,
response_serializer=hapi_dot_services_dot_tiller__pb2.TestReleaseResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'hapi.services.tiller.ReleaseService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
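The generated `add_ReleaseServiceServicer_to_server` helper above is, at its core, a dispatch table pairing each RPC name with a bound servicer method (plus per-method protobuf (de)serializers). A minimal stdlib-only sketch of that pattern — the class and function names here are illustrative, not part of any real API:

```python
# Hypothetical stdlib-only sketch of the generated registration helper:
# it pairs each RPC name with a bound servicer method, mirroring how
# add_ReleaseServiceServicer_to_server builds its rpc_method_handlers dict
# (minus the protobuf serializers and the grpc server plumbing).
class StubServicer:
    """Stand-in for ReleaseServiceServicer; GetVersion is the only RPC here."""
    def GetVersion(self, request):
        # A real servicer would return a GetVersionResponse protobuf message.
        return {"sem_ver": "v2.16.9", "git_tree_state": "clean"}

def build_method_handlers(servicer, method_names):
    # name -> bound method, like the rpc_method_handlers dict above
    return {name: getattr(servicer, name) for name in method_names}

handlers = build_method_handlers(StubServicer(), ["GetVersion"])
response = handlers["GetVersion"](None)
print(response["sem_ver"])
```

The real helper then wraps this table in `grpc.method_handlers_generic_handler` so the server can route incoming calls by fully qualified method path.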


@@ -1,84 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: hapi/version/version.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='hapi/version/version.proto',
package='hapi.version',
syntax='proto3',
serialized_options=_b('Z\007version'),
serialized_pb=_b('\n\x1ahapi/version/version.proto\x12\x0chapi.version\"F\n\x07Version\x12\x0f\n\x07sem_ver\x18\x01 \x01(\t\x12\x12\n\ngit_commit\x18\x02 \x01(\t\x12\x16\n\x0egit_tree_state\x18\x03 \x01(\tB\tZ\x07versionb\x06proto3')
)
_VERSION = _descriptor.Descriptor(
name='Version',
full_name='hapi.version.Version',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='sem_ver', full_name='hapi.version.Version.sem_ver', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='git_commit', full_name='hapi.version.Version.git_commit', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
_descriptor.FieldDescriptor(
name='git_tree_state', full_name='hapi.version.Version.git_tree_state', index=2,
number=3, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, file=DESCRIPTOR),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=44,
serialized_end=114,
)
DESCRIPTOR.message_types_by_name['Version'] = _VERSION
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
Version = _reflection.GeneratedProtocolMessageType('Version', (_message.Message,), dict(
DESCRIPTOR = _VERSION,
__module__ = 'hapi.version.version_pb2'
# @@protoc_insertion_point(class_scope:hapi.version.Version)
))
_sym_db.RegisterMessage(Version)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
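Generated modules like the one above build their message classes at import time via `_reflection.GeneratedProtocolMessageType`. The mechanism is ordinary dynamic class creation; a rough stdlib analogue (field handling here is simplified and hypothetical):

```python
# Simplified, stdlib-only analogue of GeneratedProtocolMessageType:
# synthesize a class from a name and a list of string field names,
# defaulting each field to "" as proto3 does for scalar strings.
def make_message_type(name, fields):
    def __init__(self, **kwargs):
        unknown = set(kwargs) - set(fields)
        if unknown:
            raise TypeError("unknown fields: %s" % sorted(unknown))
        for f in fields:
            setattr(self, f, kwargs.get(f, ""))
    return type(name, (object,), {"__init__": __init__})

# Mirrors the three string fields of hapi.version.Version above.
Version = make_message_type("Version", ["sem_ver", "git_commit", "git_tree_state"])
v = Version(sem_ver="v2.16.9")
print(v.sem_ver, repr(v.git_commit))
```

The real implementation additionally wires in protobuf serialization, field descriptors, and C++ acceleration, but the class-synthesis step is the same idea.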


@@ -1,3 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc


@@ -6,7 +6,6 @@ description: |-
multiple Helm charts with dependencies by centralizing all configurations
in a single Armada yaml and providing lifecycle hooks for all Helm releases
usage:
$ helm armada tiller --status
$ helm armada apply /examples/openstack-helm.yaml
ignoreFlags: false
useTunnel: false


@@ -1,7 +1,6 @@
amqp<2.7,>=2.6.0
deepdiff==3.3.0
gitpython
grpcio>=1.16.0
jsonschema>=3.0.1,<4
keystoneauth1>=3.18.0
keystonemiddleware==5.3.0
@@ -9,7 +8,6 @@ kombu<4.7,>=4.6.10
kubernetes>=12.0.0
Paste>=2.0.3
PasteDeploy>=1.5.2
protobuf>=3.4.0
pylibyaml~=0.1
pyyaml~=5.1
requests


@@ -22,11 +22,6 @@ packages =
armada.cli
armada.api
armada.handlers
hapi
hapi.chart
hapi.release
hapi.services
hapi.version
[build_sphinx]
source-dir = doc/source


@@ -1,13 +0,0 @@
#!/usr/bin/env bash
HELM_BRANCH='v2.16.9'
git clone https://github.com/helm/helm ./helm -b $HELM_BRANCH
python -m grpc_tools.protoc -I helm/_proto --python_out=. --grpc_python_out=. helm/_proto/hapi/chart/*
python -m grpc_tools.protoc -I helm/_proto --python_out=. --grpc_python_out=. helm/_proto/hapi/services/*
python -m grpc_tools.protoc -I helm/_proto --python_out=. --grpc_python_out=. helm/_proto/hapi/release/*
python -m grpc_tools.protoc -I helm/_proto --python_out=. --grpc_python_out=. helm/_proto/hapi/version/*
find ./hapi/ -type d -exec touch {}/__init__.py \;
rm -rf ./helm


@@ -109,7 +109,7 @@ show-source = true
enable-extensions = H106,H201,H904
# [W503] line break before binary operator
ignore = W503
exclude = .git,.tox,dist,*lib/python*,*egg,build,releasenotes,doc/*,hapi,venv
exclude = .git,.tox,dist,*lib/python*,*egg,build,releasenotes,doc/*,venv
max-complexity = 24
application-import-names = armada
import-order-style = pep8