First cut

Liam Young 2023-08-09 14:42:56 +00:00
commit 2a4f472f70
41 changed files with 3697 additions and 0 deletions

11
.gitignore vendored Normal file

@ -0,0 +1,11 @@
venv/
build/
*.charm
.tox/
.coverage
__pycache__/
*.py[cod]
.idea
.vscode/
*.swp
.stestr/

5
.gitreview Normal file

@ -0,0 +1,5 @@
[gerrit]
host=review.opendev.org
port=29418
project=openstack/charm-aodh-k8s.git
defaultbranch=main

3
.stestr.conf Normal file

@ -0,0 +1,3 @@
[DEFAULT]
test_path=./tests/unit
top_dir=./tests

33
CONTRIBUTING.md Normal file

@ -0,0 +1,33 @@
# Contributing
To make contributions to this charm, you'll need a working [development setup](https://juju.is/docs/sdk/dev-setup).
You can use the environments created by `tox` for development:
```shell
tox --notest -e unit
source .tox/unit/bin/activate
```
## Testing
This project uses `tox` for managing test environments. There are some pre-configured environments
that can be used for linting and formatting code when you're preparing contributions to the charm:
```shell
tox -e fmt # update your code according to linting rules
tox -e lint # code style
tox -e unit # unit tests
tox -e integration # integration tests
tox # runs 'lint' and 'unit' environments
```
## Build the charm
Build the charm in this git repository using:
```shell
charmcraft pack
```
<!-- You may want to include any contribution/style guidelines in this document. -->

202
LICENSE Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2023 liam
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

26
README.md Normal file

@ -0,0 +1,26 @@
<!--
Avoid using this README file for information that is maintained or published elsewhere, e.g.:
* metadata.yaml > published on Charmhub
* documentation > published on (or linked to from) Charmhub
* detailed contribution guide > documentation or CONTRIBUTING.md
Use links instead.
-->
# aodh-k8s
Charmhub package name: aodh-k8s
More information: https://charmhub.io/aodh-k8s
This charm deploys Aodh, the OpenStack alarming service, on Kubernetes.
## Other resources
<!-- If your charm is documented somewhere else other than Charmhub, provide a link separately. -->
- [Read more](https://example.com)
- [Contributing](CONTRIBUTING.md) <!-- or link to other contribution documentation -->
- See the [Juju SDK documentation](https://juju.is/docs/sdk) for more information about developing and improving charms.

2
actions.yaml Normal file

@ -0,0 +1,2 @@
# NOTE: no actions yet!
{ }

30
charmcraft.yaml Normal file

@ -0,0 +1,30 @@
type: "charm"
bases:
- build-on:
- name: "ubuntu"
channel: "22.04"
run-on:
- name: "ubuntu"
channel: "22.04"
parts:
update-certificates:
plugin: nil
override-build: |
apt update
apt install -y ca-certificates
update-ca-certificates
charm:
after: [update-certificates]
build-packages:
- git
- libffi-dev
- libssl-dev
- rustc
- cargo
- pkg-config
charm-binary-python-packages:
- cryptography
- jsonschema
- jinja2
- git+https://opendev.org/openstack/charm-ops-sunbeam#egg=ops_sunbeam

39
config.yaml Normal file

@ -0,0 +1,39 @@
options:
debug:
default: False
description: Enable debug logging.
type: boolean
os-admin-hostname:
default: glance.juju
description: |
The hostname or address of the admin endpoints that should be advertised
for the aodh alarming service.
type: string
os-internal-hostname:
default: glance.juju
description: |
The hostname or address of the internal endpoints that should be advertised
for the aodh alarming service.
type: string
os-public-hostname:
default: glance.juju
description: |
The hostname or address of the public endpoints that should be advertised
for the aodh alarming service.
type: string
region:
default: RegionOne
description: Space delimited list of OpenStack regions
type: string
alarm-history-time-to-live:
default: -1
description: |
Number of seconds that alarm histories are kept in the database for (<= 0
means forever)
type: int
alarm-histories-delete-batch-size:
default: 0
description: |
Number of alarm histories to be deleted in one iteration from the database (0
means all). (integer value)
type: int
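
As a usage note for the options above, the sketch below shows how a charm typically reads them through the `ops` framework's `self.model.config` at `config-changed` time. It is a minimal, hypothetical skeleton: the `AodhOperatorCharm` class name and the `_render_aodh_conf` helper are illustrative and not part of this commit.

```python
from ops.charm import CharmBase
from ops.main import main


class AodhOperatorCharm(CharmBase):
    """Illustrative skeleton showing how the config.yaml options are consumed."""

    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        # Every option declared in config.yaml is available via self.model.config,
        # already coerced to its declared type (boolean, string, int).
        cfg = self.model.config
        context = {
            "debug": cfg["debug"],
            "region": cfg["region"],
            "alarm_history_ttl": cfg["alarm-history-time-to-live"],
            "alarm_histories_delete_batch_size": cfg["alarm-histories-delete-batch-size"],
        }
        self._render_aodh_conf(context)  # hypothetical renderer for aodh.conf

    def _render_aodh_conf(self, context):
        # A real charm would render aodh.conf into the workload container here.
        pass


if __name__ == "__main__":
    main(AodhOperatorCharm)
```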

7
fetch-libs.sh Executable file

@ -0,0 +1,7 @@
#!/bin/bash
echo "INFO: Fetching libs from charmhub."
# charmcraft fetch-lib charms.data_platform_libs.v0.database_requires
# charmcraft fetch-lib charms.keystone_k8s.v1.identity_service
# charmcraft fetch-lib charms.rabbitmq_k8s.v0.rabbitmq
# charmcraft fetch-lib charms.traefik_k8s.v1.ingress

View File

@ -0,0 +1,537 @@
# Copyright 2023 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""[DEPRECATED] Relation 'requires' side abstraction for database relation.
This library is a uniform interface to a selection of common database
metadata, with added custom events that add convenience to database management,
and methods to consume the application related data.
The following is an example of using the DatabaseCreatedEvent, in the context of the
application charm code:
```python
from charms.data_platform_libs.v0.database_requires import (
DatabaseCreatedEvent,
DatabaseRequires,
)
class ApplicationCharm(CharmBase):
# Application charm that connects to database charms.
def __init__(self, *args):
super().__init__(*args)
# Charm events defined in the database requires charm library.
self.database = DatabaseRequires(self, relation_name="database", database_name="database")
self.framework.observe(self.database.on.database_created, self._on_database_created)
def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
# Handle the created database
# Create configuration file for app
config_file = self._render_app_config_file(
event.username,
event.password,
event.endpoints,
)
# Start application with rendered configuration
self._start_application(config_file)
# Set active status
self.unit.status = ActiveStatus("received database credentials")
```
As shown above, the library provides some custom events to handle specific situations,
which are listed below:
database_created: event emitted when the requested database is created.
endpoints_changed: event emitted when the read/write endpoints of the database have changed.
read_only_endpoints_changed: event emitted when the read-only endpoints of the database
have changed. Event is not triggered if read/write endpoints changed too.
If you need to connect multiple database clusters to the same relation endpoint,
the application charm can implement the same code as if it were connecting to only
one database cluster (as in the code example above).
To differentiate multiple clusters connected to the same relation endpoint
the application charm can use the name of the remote application:
```python
def _on_database_created(self, event: DatabaseCreatedEvent) -> None:
# Get the remote app name of the cluster that triggered this event
cluster = event.relation.app.name
```
It is also possible to provide an alias for each different database cluster/relation.
So, it is possible to differentiate the clusters in two ways.
The first is to use the remote application name, i.e., `event.relation.app.name`, as above.
The second is to use different event handlers to handle each cluster's events.
The implementation would be something like the following code:
```python
from charms.data_platform_libs.v0.database_requires import (
DatabaseCreatedEvent,
DatabaseRequires,
)
class ApplicationCharm(CharmBase):
# Application charm that connects to database charms.
def __init__(self, *args):
super().__init__(*args)
# Define the cluster aliases and one handler for each cluster database created event.
self.database = DatabaseRequires(
self,
relation_name="database",
database_name="database",
relations_aliases = ["cluster1", "cluster2"],
)
self.framework.observe(
self.database.on.cluster1_database_created, self._on_cluster1_database_created
)
self.framework.observe(
self.database.on.cluster2_database_created, self._on_cluster2_database_created
)
def _on_cluster1_database_created(self, event: DatabaseCreatedEvent) -> None:
# Handle the created database on the cluster named cluster1
# Create configuration file for app
config_file = self._render_app_config_file(
event.username,
event.password,
event.endpoints,
)
...
def _on_cluster2_database_created(self, event: DatabaseCreatedEvent) -> None:
# Handle the created database on the cluster named cluster2
# Create configuration file for app
config_file = self._render_app_config_file(
event.username,
event.password,
event.endpoints,
)
...
```
"""
import json
import logging
from collections import namedtuple
from datetime import datetime
from typing import List, Optional
from ops.charm import (
CharmEvents,
RelationChangedEvent,
RelationEvent,
RelationJoinedEvent,
)
from ops.framework import EventSource, Object
from ops.model import Relation
# The unique Charmhub library identifier, never change it
LIBID = "0241e088ffa9440fb4e3126349b2fb62"
# Increment this major API version when introducing breaking changes
LIBAPI = 0
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version.
LIBPATCH = 6
logger = logging.getLogger(__name__)
class DatabaseEvent(RelationEvent):
"""Base class for database events."""
@property
def endpoints(self) -> Optional[str]:
"""Returns a comma separated list of read/write endpoints."""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("endpoints")
@property
def password(self) -> Optional[str]:
"""Returns the password for the created user."""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("password")
@property
def read_only_endpoints(self) -> Optional[str]:
"""Returns a comma separated list of read only endpoints."""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("read-only-endpoints")
@property
def replset(self) -> Optional[str]:
"""Returns the replicaset name.
MongoDB only.
"""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("replset")
@property
def tls(self) -> Optional[str]:
"""Returns whether TLS is configured."""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("tls")
@property
def tls_ca(self) -> Optional[str]:
"""Returns TLS CA."""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("tls-ca")
@property
def uris(self) -> Optional[str]:
"""Returns the connection URIs.
MongoDB, Redis, OpenSearch and Kafka only.
"""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("uris")
@property
def username(self) -> Optional[str]:
"""Returns the created username."""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("username")
@property
def version(self) -> Optional[str]:
"""Returns the version of the database.
Version as informed by the database daemon.
"""
if not self.relation.app:
return None
return self.relation.data[self.relation.app].get("version")
class DatabaseCreatedEvent(DatabaseEvent):
"""Event emitted when a new database is created for use on this relation."""
class DatabaseEndpointsChangedEvent(DatabaseEvent):
"""Event emitted when the read/write endpoints are changed."""
class DatabaseReadOnlyEndpointsChangedEvent(DatabaseEvent):
"""Event emitted when the read only endpoints are changed."""
class DatabaseEvents(CharmEvents):
"""Database events.
This class defines the events that the database can emit.
"""
database_created = EventSource(DatabaseCreatedEvent)
endpoints_changed = EventSource(DatabaseEndpointsChangedEvent)
read_only_endpoints_changed = EventSource(DatabaseReadOnlyEndpointsChangedEvent)
Diff = namedtuple("Diff", "added changed deleted")
Diff.__doc__ = """
A tuple for storing the diff between two data mappings.
added keys that were added.
changed keys that still exist but have new values.
deleted keys that were deleted.
"""
class DatabaseRequires(Object):
"""Requires-side of the database relation."""
on = DatabaseEvents() # pyright: ignore [reportGeneralTypeIssues]
def __init__(
self,
charm,
relation_name: str,
database_name: str,
extra_user_roles: Optional[str] = None,
relations_aliases: Optional[List[str]] = None,
):
"""Manager of database client relations."""
super().__init__(charm, relation_name)
self.charm = charm
self.database = database_name
self.extra_user_roles = extra_user_roles
self.local_app = self.charm.model.app
self.local_unit = self.charm.unit
self.relation_name = relation_name
self.relations_aliases = relations_aliases
self.framework.observe(
self.charm.on[relation_name].relation_joined, self._on_relation_joined_event
)
self.framework.observe(
self.charm.on[relation_name].relation_changed, self._on_relation_changed_event
)
# Define custom event names for each alias.
if relations_aliases:
# Ensure the number of aliases does not exceed the maximum
# of connections allowed in the specific relation.
relation_connection_limit = self.charm.meta.requires[relation_name].limit
if len(relations_aliases) != relation_connection_limit:
raise ValueError(
f"The number of aliases must match the maximum number of connections allowed in the relation. "
f"Expected {relation_connection_limit}, got {len(relations_aliases)}"
)
for relation_alias in relations_aliases:
self.on.define_event(f"{relation_alias}_database_created", DatabaseCreatedEvent)
self.on.define_event(
f"{relation_alias}_endpoints_changed", DatabaseEndpointsChangedEvent
)
self.on.define_event(
f"{relation_alias}_read_only_endpoints_changed",
DatabaseReadOnlyEndpointsChangedEvent,
)
def _assign_relation_alias(self, relation_id: int) -> None:
"""Assigns an alias to a relation.
This function writes in the unit data bag.
Args:
relation_id: the identifier for a particular relation.
"""
# If no aliases were provided, return immediately.
if not self.relations_aliases:
return
# Return if an alias was already assigned to this relation
# (like when there are more than one unit joining the relation).
if (
self.charm.model.get_relation(self.relation_name, relation_id)
.data[self.local_unit]
.get("alias")
):
return
# Retrieve the available aliases (the ones that weren't assigned to any relation).
available_aliases = self.relations_aliases[:]
for relation in self.charm.model.relations[self.relation_name]:
alias = relation.data[self.local_unit].get("alias")
if alias:
logger.debug("Alias %s was already assigned to relation %d", alias, relation.id)
available_aliases.remove(alias)
# Set the alias in the unit relation databag of the specific relation.
relation = self.charm.model.get_relation(self.relation_name, relation_id)
relation.data[self.local_unit].update({"alias": available_aliases[0]})
def _diff(self, event: RelationChangedEvent) -> Diff:
"""Retrieves the diff of the data in the relation changed databag.
Args:
event: relation changed event.
Returns:
a Diff instance containing the added, deleted and changed
keys from the event relation databag.
"""
# Retrieve the old data from the data key in the local unit relation databag.
old_data = json.loads(event.relation.data[self.local_unit].get("data", "{}"))
# Retrieve the new data from the event relation databag.
new_data = (
{key: value for key, value in event.relation.data[event.app].items() if key != "data"}
if event.app
else {}
)
# These are the keys that were added to the databag and triggered this event.
added = new_data.keys() - old_data.keys()
# These are the keys that were removed from the databag and triggered this event.
deleted = old_data.keys() - new_data.keys()
# These are the keys that already existed in the databag,
# but had their values changed.
changed = {
key for key in old_data.keys() & new_data.keys() if old_data[key] != new_data[key]
}
# TODO: evaluate the possibility of losing the diff if some error
# happens in the charm before the diff is completely checked (DPE-412).
# Convert the new_data to a serializable format and save it for a next diff check.
event.relation.data[self.local_unit].update({"data": json.dumps(new_data)})
# Return the diff with all possible changes.
return Diff(added, changed, deleted)
def _emit_aliased_event(self, event: RelationChangedEvent, event_name: str) -> None:
"""Emit an aliased event to a particular relation if it has an alias.
Args:
event: the relation changed event that was received.
event_name: the name of the event to emit.
"""
alias = self._get_relation_alias(event.relation.id)
if alias:
getattr(self.on, f"{alias}_{event_name}").emit(
event.relation, app=event.app, unit=event.unit
)
def _get_relation_alias(self, relation_id: int) -> Optional[str]:
"""Returns the relation alias.
Args:
relation_id: the identifier for a particular relation.
Returns:
the relation alias or None if the relation was not found.
"""
for relation in self.charm.model.relations[self.relation_name]:
if relation.id == relation_id:
return relation.data[self.local_unit].get("alias")
return None
def fetch_relation_data(self) -> dict:
"""Retrieves data from relation.
This function can be used to retrieve data from a relation
in the charm code when outside an event callback.
Returns:
a dict of the values stored in the relation data bag
for all relation instances (indexed by the relation ID).
"""
data = {}
for relation in self.relations:
data[relation.id] = (
{key: value for key, value in relation.data[relation.app].items() if key != "data"}
if relation.app
else {}
)
return data
def _update_relation_data(self, relation_id: int, data: dict) -> None:
"""Updates a set of key-value pairs in the relation.
This function writes in the application data bag, therefore,
only the leader unit can call it.
Args:
relation_id: the identifier for a particular relation.
data: dict containing the key-value pairs
that should be updated in the relation.
"""
if self.local_unit.is_leader():
relation = self.charm.model.get_relation(self.relation_name, relation_id)
relation.data[self.local_app].update(data)
def _on_relation_joined_event(self, event: RelationJoinedEvent) -> None:
"""Event emitted when the application joins the database relation."""
# If relations aliases were provided, assign one to the relation.
self._assign_relation_alias(event.relation.id)
# Sets both database and extra user roles in the relation
# if the roles are provided. Otherwise, sets only the database.
if self.extra_user_roles:
self._update_relation_data(
event.relation.id,
{
"database": self.database,
"extra-user-roles": self.extra_user_roles,
},
)
else:
self._update_relation_data(event.relation.id, {"database": self.database})
def _on_relation_changed_event(self, event: RelationChangedEvent) -> None:
"""Event emitted when the database relation has changed."""
# Check which data has changed to emit customs events.
diff = self._diff(event)
# Check if the database is created
# (the database charm shared the credentials).
if "username" in diff.added and "password" in diff.added:
# Emit the default event (the one without an alias).
logger.info("database created at %s", datetime.now())
getattr(self.on, "database_created").emit(
event.relation, app=event.app, unit=event.unit
)
# Emit the aliased event (if any).
self._emit_aliased_event(event, "database_created")
# To avoid unnecessary application restarts do not trigger
# “endpoints_changed“ event if “database_created“ is triggered.
return
# Emit an endpoints changed event if the database
# added or changed this info in the relation databag.
if "endpoints" in diff.added or "endpoints" in diff.changed:
# Emit the default event (the one without an alias).
logger.info("endpoints changed on %s", datetime.now())
getattr(self.on, "endpoints_changed").emit(
event.relation, app=event.app, unit=event.unit
)
# Emit the aliased event (if any).
self._emit_aliased_event(event, "endpoints_changed")
# To avoid unnecessary application restarts do not trigger
# “read_only_endpoints_changed“ event if “endpoints_changed“ is triggered.
return
# Emit a read only endpoints changed event if the database
# added or changed this info in the relation databag.
if "read-only-endpoints" in diff.added or "read-only-endpoints" in diff.changed:
# Emit the default event (the one without an alias).
logger.info("read-only-endpoints changed on %s", datetime.now())
getattr(self.on, "read_only_endpoints_changed").emit(
event.relation, app=event.app, unit=event.unit
)
# Emit the aliased event (if any).
self._emit_aliased_event(event, "read_only_endpoints_changed")
@property
def relations(self) -> List[Relation]:
"""The list of Relation instances associated with this relation_name."""
return list(self.charm.model.relations[self.relation_name])
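
As the docstring of `fetch_relation_data` above notes, the relation data can also be read outside an event callback. A minimal, hypothetical sketch of that usage (the relation and database names mirror the library docstring; the `_configure_database` helper is illustrative):

```python
from charms.data_platform_libs.v0.database_requires import DatabaseRequires
from ops.charm import CharmBase


class ApplicationCharm(CharmBase):
    """Illustrative requirer charm reading relation data outside the created event."""

    def __init__(self, *args):
        super().__init__(*args)
        self.database = DatabaseRequires(self, relation_name="database", database_name="database")
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        # fetch_relation_data() returns {relation_id: {key: value, ...}, ...}
        # for every established "database" relation, so configuration can be
        # (re)rendered without waiting for another database_created event.
        for relation_id, data in self.database.fetch_relation_data().items():
            endpoints = data.get("endpoints")
            username = data.get("username")
            if endpoints and username:
                self._configure_database(relation_id, endpoints, username)  # hypothetical helper

    def _configure_database(self, relation_id, endpoints, username):
        pass
```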

View File

@ -0,0 +1,525 @@
"""IdentityServiceProvides and Requires module.
This library contains the Requires and Provides classes for handling
the identity_service interface.
Import `IdentityServiceRequires` in your charm, with the charm object and the
relation name:
- self
- "identity_service"
Also provide two additional parameters to the charm object:
- service_endpoints
- region
Three events are also available to respond to:
- connected
- ready
- goneaway
A basic example showing the usage of this relation follows:
```
from charms.keystone_k8s.v1.identity_service import IdentityServiceRequires
class IdentityServiceClientCharm(CharmBase):
def __init__(self, *args):
super().__init__(*args)
# IdentityService Requires
self.identity_service = IdentityServiceRequires(
self, "identity_service",
service_endpoints=[{
"service_name": "my-service",
"internal_url": "http://internal-url",
"public_url": "http://public-url",
"admin_url": "http://admin-url",
}],
region="region",
)
self.framework.observe(
self.identity_service.on.connected, self._on_identity_service_connected)
self.framework.observe(
self.identity_service.on.ready, self._on_identity_service_ready)
self.framework.observe(
self.identity_service.on.goneaway, self._on_identity_service_goneaway)
def _on_identity_service_connected(self, event):
'''React to the IdentityService connected event.
This event happens when an IdentityService relation is added to the
model, before credentials etc. have been provided.
'''
# Do something before the relation is complete
pass
def _on_identity_service_ready(self, event):
'''React to the IdentityService ready event.
The IdentityService interface will use the provided config for the
request to the identity server.
'''
# IdentityService Relation is ready. Do something with the completed relation.
pass
def _on_identity_service_goneaway(self, event):
'''React to the IdentityService goneaway event.
This event happens when an IdentityService relation is removed.
'''
# The IdentityService relation has gone away; shut down services or similar.
pass
```
"""
import json
import logging
from ops.framework import (
StoredState,
EventBase,
ObjectEvents,
EventSource,
Object,
)
from ops.model import (
Relation,
SecretNotFoundError,
)
logger = logging.getLogger(__name__)
# The unique Charmhub library identifier, never change it
LIBID = "0fa7fe7236c14c6e9624acf232b9a3b0"
# Increment this major API version when introducing breaking changes
LIBAPI = 1
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version
LIBPATCH = 1
logger = logging.getLogger(__name__)
class IdentityServiceConnectedEvent(EventBase):
"""IdentityService connected Event."""
pass
class IdentityServiceReadyEvent(EventBase):
"""IdentityService ready for use Event."""
pass
class IdentityServiceGoneAwayEvent(EventBase):
"""IdentityService relation has gone-away Event"""
pass
class IdentityServiceServerEvents(ObjectEvents):
"""Events class for `on`"""
connected = EventSource(IdentityServiceConnectedEvent)
ready = EventSource(IdentityServiceReadyEvent)
goneaway = EventSource(IdentityServiceGoneAwayEvent)
class IdentityServiceRequires(Object):
"""
IdentityServiceRequires class
"""
on = IdentityServiceServerEvents()
_stored = StoredState()
def __init__(self, charm, relation_name: str, service_endpoints: dict,
region: str):
super().__init__(charm, relation_name)
self.charm = charm
self.relation_name = relation_name
self.service_endpoints = service_endpoints
self.region = region
self.framework.observe(
self.charm.on[relation_name].relation_joined,
self._on_identity_service_relation_joined,
)
self.framework.observe(
self.charm.on[relation_name].relation_changed,
self._on_identity_service_relation_changed,
)
self.framework.observe(
self.charm.on[relation_name].relation_departed,
self._on_identity_service_relation_changed,
)
self.framework.observe(
self.charm.on[relation_name].relation_broken,
self._on_identity_service_relation_broken,
)
def _on_identity_service_relation_joined(self, event):
"""IdentityService relation joined."""
logging.debug("IdentityService on_joined")
self.on.connected.emit()
self.register_services(
self.service_endpoints,
self.region)
def _on_identity_service_relation_changed(self, event):
"""IdentityService relation changed."""
logging.debug("IdentityService on_changed")
try:
self.service_password
self.on.ready.emit()
except (AttributeError, KeyError):
pass
def _on_identity_service_relation_broken(self, event):
"""IdentityService relation broken."""
logging.debug("IdentityService on_broken")
self.on.goneaway.emit()
@property
def _identity_service_rel(self) -> Relation:
"""The IdentityService relation."""
return self.framework.model.get_relation(self.relation_name)
def get_remote_app_data(self, key: str) -> str:
"""Return the value for the given key from remote app data."""
data = self._identity_service_rel.data[self._identity_service_rel.app]
return data.get(key)
@property
def api_version(self) -> str:
"""Return the api_version."""
return self.get_remote_app_data('api-version')
@property
def auth_host(self) -> str:
"""Return the auth_host."""
return self.get_remote_app_data('auth-host')
@property
def auth_port(self) -> str:
"""Return the auth_port."""
return self.get_remote_app_data('auth-port')
@property
def auth_protocol(self) -> str:
"""Return the auth_protocol."""
return self.get_remote_app_data('auth-protocol')
@property
def internal_host(self) -> str:
"""Return the internal_host."""
return self.get_remote_app_data('internal-host')
@property
def internal_port(self) -> str:
"""Return the internal_port."""
return self.get_remote_app_data('internal-port')
@property
def internal_protocol(self) -> str:
"""Return the internal_protocol."""
return self.get_remote_app_data('internal-protocol')
@property
def admin_domain_name(self) -> str:
"""Return the admin_domain_name."""
return self.get_remote_app_data('admin-domain-name')
@property
def admin_domain_id(self) -> str:
"""Return the admin_domain_id."""
return self.get_remote_app_data('admin-domain-id')
@property
def admin_project_name(self) -> str:
"""Return the admin_project_name."""
return self.get_remote_app_data('admin-project-name')
@property
def admin_project_id(self) -> str:
"""Return the admin_project_id."""
return self.get_remote_app_data('admin-project-id')
@property
def admin_user_name(self) -> str:
"""Return the admin_user_name."""
return self.get_remote_app_data('admin-user-name')
@property
def admin_user_id(self) -> str:
"""Return the admin_user_id."""
return self.get_remote_app_data('admin-user-id')
@property
def service_domain_name(self) -> str:
"""Return the service_domain_name."""
return self.get_remote_app_data('service-domain-name')
@property
def service_domain_id(self) -> str:
"""Return the service_domain_id."""
return self.get_remote_app_data('service-domain-id')
@property
def service_host(self) -> str:
"""Return the service_host."""
return self.get_remote_app_data('service-host')
@property
def service_credentials(self) -> str:
"""Return the service_credentials secret."""
return self.get_remote_app_data('service-credentials')
@property
def service_password(self) -> str:
"""Return the service_password."""
credentials_id = self.get_remote_app_data('service-credentials')
if not credentials_id:
return None
try:
credentials = self.charm.model.get_secret(id=credentials_id)
return credentials.get_content().get("password")
except SecretNotFoundError:
logger.warning(f"Secret {credentials_id} not found")
return None
@property
def service_port(self) -> str:
"""Return the service_port."""
return self.get_remote_app_data('service-port')
@property
def service_protocol(self) -> str:
"""Return the service_protocol."""
return self.get_remote_app_data('service-protocol')
@property
def service_project_name(self) -> str:
"""Return the service_project_name."""
return self.get_remote_app_data('service-project-name')
@property
def service_project_id(self) -> str:
"""Return the service_project_id."""
return self.get_remote_app_data('service-project-id')
@property
def service_user_name(self) -> str:
"""Return the service_user_name."""
credentials_id = self.get_remote_app_data('service-credentials')
if not credentials_id:
return None
try:
credentials = self.charm.model.get_secret(id=credentials_id)
return credentials.get_content().get("username")
except SecretNotFoundError:
logger.warning(f"Secret {credentials_id} not found")
return None
@property
def service_user_id(self) -> str:
"""Return the service_user_id."""
return self.get_remote_app_data('service-user-id')
@property
def internal_auth_url(self) -> str:
"""Return the internal_auth_url."""
return self.get_remote_app_data('internal-auth-url')
@property
def admin_auth_url(self) -> str:
"""Return the admin_auth_url."""
return self.get_remote_app_data('admin-auth-url')
@property
def public_auth_url(self) -> str:
"""Return the public_auth_url."""
return self.get_remote_app_data('public-auth-url')
@property
def admin_role(self) -> str:
"""Return the admin_role."""
return self.get_remote_app_data('admin-role')
def register_services(self, service_endpoints: dict,
region: str) -> None:
"""Request access to the IdentityService server."""
if self.model.unit.is_leader():
logging.debug("Requesting service registration")
app_data = self._identity_service_rel.data[self.charm.app]
app_data["service-endpoints"] = json.dumps(
service_endpoints, sort_keys=True
)
app_data["region"] = region
class HasIdentityServiceClientsEvent(EventBase):
"""Has IdentityServiceClients Event."""
pass
class ReadyIdentityServiceClientsEvent(EventBase):
"""IdentityServiceClients Ready Event."""
def __init__(self, handle, relation_id, relation_name, service_endpoints,
region, client_app_name):
super().__init__(handle)
self.relation_id = relation_id
self.relation_name = relation_name
self.service_endpoints = service_endpoints
self.region = region
self.client_app_name = client_app_name
def snapshot(self):
return {
"relation_id": self.relation_id,
"relation_name": self.relation_name,
"service_endpoints": self.service_endpoints,
"client_app_name": self.client_app_name,
"region": self.region}
def restore(self, snapshot):
super().restore(snapshot)
self.relation_id = snapshot["relation_id"]
self.relation_name = snapshot["relation_name"]
self.service_endpoints = snapshot["service_endpoints"]
self.region = snapshot["region"]
self.client_app_name = snapshot["client_app_name"]
class IdentityServiceClientEvents(ObjectEvents):
"""Events class for `on`"""
has_identity_service_clients = EventSource(HasIdentityServiceClientsEvent)
ready_identity_service_clients = EventSource(ReadyIdentityServiceClientsEvent)
class IdentityServiceProvides(Object):
"""
IdentityServiceProvides class
"""
on = IdentityServiceClientEvents()
_stored = StoredState()
def __init__(self, charm, relation_name):
super().__init__(charm, relation_name)
self.charm = charm
self.relation_name = relation_name
self.framework.observe(
self.charm.on[relation_name].relation_joined,
self._on_identity_service_relation_joined,
)
self.framework.observe(
self.charm.on[relation_name].relation_changed,
self._on_identity_service_relation_changed,
)
self.framework.observe(
self.charm.on[relation_name].relation_broken,
self._on_identity_service_relation_broken,
)
def _on_identity_service_relation_joined(self, event):
"""Handle IdentityService joined."""
logging.debug("IdentityService on_joined")
self.on.has_identity_service_clients.emit()
def _on_identity_service_relation_changed(self, event):
"""Handle IdentityService changed."""
logging.debug("IdentityService on_changed")
REQUIRED_KEYS = [
'service-endpoints',
'region']
values = [
event.relation.data[event.relation.app].get(k)
for k in REQUIRED_KEYS
]
# Validate data on the relation
if all(values):
service_eps = json.loads(
event.relation.data[event.relation.app]['service-endpoints'])
self.on.ready_identity_service_clients.emit(
event.relation.id,
event.relation.name,
service_eps,
event.relation.data[event.relation.app]['region'],
event.relation.app.name)
def _on_identity_service_relation_broken(self, event):
"""Handle IdentityService broken."""
logging.debug("IdentityServiceProvides on_departed")
# TODO clear data on the relation
def set_identity_service_credentials(self, relation_name: str,
relation_id: int,
api_version: str,
auth_host: str,
auth_port: str,
auth_protocol: str,
internal_host: str,
internal_port: str,
internal_protocol: str,
service_host: str,
service_port: str,
service_protocol: str,
admin_domain: str,
admin_project: str,
admin_user: str,
service_domain: str,
service_project: str,
service_user: str,
internal_auth_url: str,
admin_auth_url: str,
public_auth_url: str,
service_credentials: str,
admin_role: str):
logging.debug("Setting identity_service connection information.")
_identity_service_rel = None
for relation in self.framework.model.relations[relation_name]:
if relation.id == relation_id:
_identity_service_rel = relation
if not _identity_service_rel:
# Relation has disappeared so skip send of data
return
app_data = _identity_service_rel.data[self.charm.app]
app_data["api-version"] = api_version
app_data["auth-host"] = auth_host
app_data["auth-port"] = str(auth_port)
app_data["auth-protocol"] = auth_protocol
app_data["internal-host"] = internal_host
app_data["internal-port"] = str(internal_port)
app_data["internal-protocol"] = internal_protocol
app_data["service-host"] = service_host
app_data["service-port"] = str(service_port)
app_data["service-protocol"] = service_protocol
app_data["admin-domain-name"] = admin_domain.name
app_data["admin-domain-id"] = admin_domain.id
app_data["admin-project-name"] = admin_project.name
app_data["admin-project-id"] = admin_project.id
app_data["admin-user-name"] = admin_user.name
app_data["admin-user-id"] = admin_user.id
app_data["service-domain-name"] = service_domain.name
app_data["service-domain-id"] = service_domain.id
app_data["service-project-name"] = service_project.name
app_data["service-project-id"] = service_project.id
app_data["service-user-id"] = service_user.id
app_data["internal-auth-url"] = internal_auth_url
app_data["admin-auth-url"] = admin_auth_url
app_data["public-auth-url"] = public_auth_url
app_data["service-credentials"] = service_credentials
app_data["admin-role"] = admin_role

View File

@ -0,0 +1,408 @@
# Copyright 2023 Canonical Ltd.
# Licensed under the Apache 2.0 license; see the LICENSE file in the charm source for details.
"""Library for the ingress relation.
This library contains the Requires and Provides classes for handling
the ingress interface.
Import `IngressRequires` in your charm, with two required options:
- "self" (the charm itself)
- config_dict
`config_dict` accepts the following keys:
- additional-hostnames
- backend-protocol
- limit-rps
- limit-whitelist
- max-body-size
- owasp-modsecurity-crs
- owasp-modsecurity-custom-rules
- path-routes
- retry-errors
- rewrite-enabled
- rewrite-target
- service-hostname (required)
- service-name (required)
- service-namespace
- service-port (required)
- session-cookie-max-age
- tls-secret-name
See [the config section](https://charmhub.io/nginx-ingress-integrator/configure) for descriptions
of each, along with the required type.
As an example, add the following to `src/charm.py`:
```
from charms.nginx_ingress_integrator.v0.ingress import IngressRequires
# In your charm's `__init__` method.
self.ingress = IngressRequires(self, {
"service-hostname": self.config["external_hostname"],
"service-name": self.app.name,
"service-port": 80,
}
)
# In your charm's `config-changed` handler.
self.ingress.update_config({"service-hostname": self.config["external_hostname"]})
```
And then add the following to `metadata.yaml`:
```
requires:
ingress:
interface: ingress
```
You _must_ register the IngressRequires class as part of the `__init__` method
rather than, for instance, a config-changed event handler, for the relation
changed event to be properly handled.
"""
import copy
import logging
from typing import Dict
from ops.charm import CharmBase, CharmEvents, RelationBrokenEvent, RelationChangedEvent
from ops.framework import EventBase, EventSource, Object
from ops.model import BlockedStatus
INGRESS_RELATION_NAME = "ingress"
INGRESS_PROXY_RELATION_NAME = "ingress-proxy"
# The unique Charmhub library identifier, never change it
LIBID = "db0af4367506491c91663468fb5caa4c"
# Increment this major API version when introducing breaking changes
LIBAPI = 0
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version
LIBPATCH = 16
LOGGER = logging.getLogger(__name__)
REQUIRED_INGRESS_RELATION_FIELDS = {"service-hostname", "service-name", "service-port"}
OPTIONAL_INGRESS_RELATION_FIELDS = {
"additional-hostnames",
"backend-protocol",
"limit-rps",
"limit-whitelist",
"max-body-size",
"owasp-modsecurity-crs",
"owasp-modsecurity-custom-rules",
"path-routes",
"retry-errors",
"rewrite-target",
"rewrite-enabled",
"service-namespace",
"session-cookie-max-age",
"tls-secret-name",
}
RELATION_INTERFACES_MAPPINGS = {
"service-hostname": "host",
"service-name": "name",
"service-namespace": "model",
"service-port": "port",
}
RELATION_INTERFACES_MAPPINGS_VALUES = set(RELATION_INTERFACES_MAPPINGS.values())
class IngressAvailableEvent(EventBase):
"""IngressAvailableEvent custom event.
This event indicates the Ingress provider is available.
"""
class IngressProxyAvailableEvent(EventBase):
"""IngressProxyAvailableEvent custom event.
This event indicates the IngressProxy provider is available.
"""
class IngressBrokenEvent(RelationBrokenEvent):
"""IngressBrokenEvent custom event.
This event indicates the Ingress provider is broken.
"""
class IngressCharmEvents(CharmEvents):
"""Custom charm events.
Attrs:
ingress_available: Event to indicate that Ingress is available.
ingress_proxy_available: Event to indicate that IngressProxy is available.
ingress_broken: Event to indicate that Ingress is broken.
"""
ingress_available = EventSource(IngressAvailableEvent)
ingress_proxy_available = EventSource(IngressProxyAvailableEvent)
ingress_broken = EventSource(IngressBrokenEvent)
class IngressRequires(Object):
"""This class defines the functionality for the 'requires' side of the 'ingress' relation.
Hook events observed:
- relation-changed
Attrs:
model: Juju model where the charm is deployed.
config_dict: Contains all the configuration options for Ingress.
"""
def __init__(self, charm: CharmBase, config_dict: Dict) -> None:
"""Init function for the IngressRequires class.
Args:
charm: The charm that requires the ingress relation.
config_dict: Contains all the configuration options for Ingress.
"""
super().__init__(charm, INGRESS_RELATION_NAME)
self.framework.observe(
charm.on[INGRESS_RELATION_NAME].relation_changed, self._on_relation_changed
)
# Set default values.
default_relation_fields = {
"service-namespace": self.model.name,
}
config_dict.update(
(key, value)
for key, value in default_relation_fields.items()
if key not in config_dict or not config_dict[key]
)
self.config_dict = self._convert_to_relation_interface(config_dict)
@staticmethod
def _convert_to_relation_interface(config_dict: Dict) -> Dict:
"""Create a new relation dict that conforms with charm-relation-interfaces.
Args:
config_dict: Ingress configuration that doesn't conform with charm-relation-interfaces.
Returns:
The Ingress configuration conforming with charm-relation-interfaces.
"""
config_dict = copy.copy(config_dict)
config_dict.update(
(key, config_dict[old_key])
for old_key, key in RELATION_INTERFACES_MAPPINGS.items()
if old_key in config_dict and config_dict[old_key]
)
return config_dict
def _config_dict_errors(self, config_dict: Dict, update_only: bool = False) -> bool:
"""Check our config dict for errors.
Args:
config_dict: Contains all the configuration options for Ingress.
update_only: If the charm needs to update only existing keys.
Returns:
If we need to update the config dict or not.
"""
blocked_message = "Error in ingress relation, check `juju debug-log`"
unknown = [
config_key
for config_key in config_dict
if config_key
not in REQUIRED_INGRESS_RELATION_FIELDS
| OPTIONAL_INGRESS_RELATION_FIELDS
| RELATION_INTERFACES_MAPPINGS_VALUES
]
if unknown:
LOGGER.error(
"Ingress relation error, unknown key(s) in config dictionary found: %s",
", ".join(unknown),
)
self.model.unit.status = BlockedStatus(blocked_message)
return True
if not update_only:
missing = tuple(
config_key
for config_key in REQUIRED_INGRESS_RELATION_FIELDS
if config_key not in self.config_dict
)
if missing:
LOGGER.error(
"Ingress relation error, missing required key(s) in config dictionary: %s",
", ".join(sorted(missing)),
)
self.model.unit.status = BlockedStatus(blocked_message)
return True
return False
def _on_relation_changed(self, event: RelationChangedEvent) -> None:
"""Handle the relation-changed event.
Args:
event: Event triggering the relation-changed hook for the relation.
"""
# `self.unit` isn't available here, so use `self.model.unit`.
if self.model.unit.is_leader():
if self._config_dict_errors(config_dict=self.config_dict):
return
event.relation.data[self.model.app].update(
(key, str(self.config_dict[key])) for key in self.config_dict
)
def update_config(self, config_dict: Dict) -> None:
"""Allow for updates to relation.
Args:
config_dict: Contains all the configuration options for Ingress.
Attrs:
config_dict: Contains all the configuration options for Ingress.
"""
if self.model.unit.is_leader():
self.config_dict = self._convert_to_relation_interface(config_dict)
if self._config_dict_errors(self.config_dict, update_only=True):
return
relation = self.model.get_relation(INGRESS_RELATION_NAME)
if relation:
for key in self.config_dict:
relation.data[self.model.app][key] = str(self.config_dict[key])
class IngressBaseProvides(Object):
"""Parent class for IngressProvides and IngressProxyProvides.
Attrs:
model: Juju model where the charm is deployed.
"""
def __init__(self, charm: CharmBase, relation_name: str) -> None:
"""Init function for the IngressProxyProvides class.
Args:
charm: The charm that provides the ingress-proxy relation.
"""
super().__init__(charm, relation_name)
self.charm = charm
def _on_relation_changed(self, event: RelationChangedEvent) -> None:
"""Handle a change to the ingress/ingress-proxy relation.
Confirm we have the fields we expect to receive.
Args:
event: Event triggering the relation-changed hook for the relation.
"""
# `self.unit` isn't available here, so use `self.model.unit`.
if not self.model.unit.is_leader():
return
relation_name = event.relation.name
assert event.app is not None # nosec
if not event.relation.data[event.app]:
LOGGER.info(
"%s hasn't finished configuring, waiting until relation is changed again.",
relation_name,
)
return
ingress_data = {
field: event.relation.data[event.app].get(field)
for field in REQUIRED_INGRESS_RELATION_FIELDS | OPTIONAL_INGRESS_RELATION_FIELDS
}
missing_fields = sorted(
field for field in REQUIRED_INGRESS_RELATION_FIELDS if ingress_data.get(field) is None
)
if missing_fields:
LOGGER.warning(
"Missing required data fields for %s relation: %s",
relation_name,
", ".join(missing_fields),
)
self.model.unit.status = BlockedStatus(
f"Missing fields for {relation_name}: {', '.join(missing_fields)}"
)
if relation_name == INGRESS_RELATION_NAME:
# Conform to charm-relation-interfaces.
if "name" in ingress_data and "port" in ingress_data:
name = ingress_data["name"]
port = ingress_data["port"]
else:
name = ingress_data["service-name"]
port = ingress_data["service-port"]
event.relation.data[self.model.app]["url"] = f"http://{name}:{port}/"
# Create an event that our charm can use to decide it's okay to
# configure the ingress.
self.charm.on.ingress_available.emit()
elif relation_name == INGRESS_PROXY_RELATION_NAME:
self.charm.on.ingress_proxy_available.emit()
class IngressProvides(IngressBaseProvides):
"""Class containing the functionality for the 'provides' side of the 'ingress' relation.
Attrs:
charm: The charm that provides the ingress relation.
Hook events observed:
- relation-changed
"""
def __init__(self, charm: CharmBase) -> None:
"""Init function for the IngressProvides class.
Args:
charm: The charm that provides the ingress relation.
"""
super().__init__(charm, INGRESS_RELATION_NAME)
# Observe the relation-changed hook event and bind
# self.on_relation_changed() to handle the event.
self.framework.observe(
charm.on[INGRESS_RELATION_NAME].relation_changed, self._on_relation_changed
)
self.framework.observe(
charm.on[INGRESS_RELATION_NAME].relation_broken, self._on_relation_broken
)
def _on_relation_broken(self, event: RelationBrokenEvent) -> None:
"""Handle a relation-broken event in the ingress relation.
Args:
event: Event triggering the relation-broken hook for the relation.
"""
if not self.model.unit.is_leader():
return
# Create an event that our charm can use to remove the ingress resource.
self.charm.on.ingress_broken.emit(event.relation)
class IngressProxyProvides(IngressBaseProvides):
"""Class containing the functionality for the 'provides' side of the 'ingress-proxy' relation.
Attrs:
charm: The charm that provides the ingress-proxy relation.
Hook events observed:
- relation-changed
"""
def __init__(self, charm: CharmBase) -> None:
"""Init function for the IngressProxyProvides class.
Args:
charm: The charm that provides the ingress-proxy relation.
"""
super().__init__(charm, INGRESS_PROXY_RELATION_NAME)
# Observe the relation-changed hook event and bind
# self.on_relation_changed() to handle the event.
self.framework.observe(
charm.on[INGRESS_PROXY_RELATION_NAME].relation_changed, self._on_relation_changed
)
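
A brief, hypothetical sketch of the providing side of this library (the pattern an nginx-ingress-integrator-style charm would follow): the charm exposes `IngressCharmEvents` as its event namespace so the `ingress_available`/`ingress_broken` events emitted by `IngressProvides` above can be observed. The helper methods are illustrative only.

```python
import logging

from charms.nginx_ingress_integrator.v0.ingress import (
    IngressCharmEvents,
    IngressProvides,
)
from ops.charm import CharmBase

logger = logging.getLogger(__name__)


class IngressProviderCharm(CharmBase):
    """Illustrative provider-side charm skeleton."""

    # IngressProvides emits its custom events on the charm, so the charm must
    # use IngressCharmEvents as its event namespace.
    on = IngressCharmEvents()

    def __init__(self, *args):
        super().__init__(*args)
        self.ingress = IngressProvides(self)
        self.framework.observe(self.on.ingress_available, self._on_ingress_available)
        self.framework.observe(self.on.ingress_broken, self._on_ingress_broken)

    def _on_ingress_available(self, event):
        # The requirer's data (service-hostname, service-name, service-port, ...)
        # has been validated by IngressProvides; create the ingress resource.
        logger.info("Ingress data available, creating the ingress resource")
        self._create_ingress()  # hypothetical helper

    def _on_ingress_broken(self, event):
        logger.info("Ingress relation broken, removing the ingress resource")
        self._remove_ingress()  # hypothetical helper

    def _create_ingress(self):
        pass

    def _remove_ingress(self):
        pass
```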

View File

@ -0,0 +1,286 @@
"""RabbitMQProvides and Requires module.
This library contains the Requires and Provides classes for handling
the rabbitmq interface.
Import `RabbitMQRequires` in your charm, with the charm object and the
relation name:
- self
- "amqp"
Also provide two additional parameters to the RabbitMQRequires constructor:
- username
- vhost
Three events are also available to respond to:
- connected
- ready
- goneaway
A basic example showing the usage of this relation follows:
```
from charms.rabbitmq_k8s.v0.rabbitmq import RabbitMQRequires
class RabbitMQClientCharm(CharmBase):
def __init__(self, *args):
super().__init__(*args)
# RabbitMQ Requires
self.amqp = RabbitMQRequires(
self, "amqp",
username="myusername",
vhost="vhostname"
)
self.framework.observe(
self.amqp.on.connected, self._on_amqp_connected)
self.framework.observe(
self.amqp.on.ready, self._on_amqp_ready)
self.framework.observe(
self.amqp.on.goneaway, self._on_amqp_goneaway)
def _on_amqp_connected(self, event):
'''React to the RabbitMQ connected event.
This event happens when a RabbitMQ relation is added to the
model before credentials etc. have been provided.
'''
# Do something before the relation is complete
pass
def _on_amqp_ready(self, event):
'''React to the RabbitMQ ready event.
The RabbitMQ interface will use the provided username and vhost for the
request to the rabbitmq server.
'''
# RabbitMQ Relation is ready. Do something with the completed relation.
pass
def _on_amqp_goneaway(self, event):
'''React to the RabbitMQ goneaway event.
This event happens when a RabbitMQ relation is removed.
'''
# The RabbitMQ relation has gone away. Shut down services or similar.
pass
```
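The provides side (the RabbitMQ server charm) is wired up in a similar way.
The following is a minimal sketch; the handler names and the body of the
callback are illustrative only. The callback is invoked on the leader with
the event plus the requested username and vhost:
```
from ops.charm import CharmBase

from charms.rabbitmq_k8s.v0.rabbitmq import RabbitMQProvides

class RabbitMQServerCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        # RabbitMQ Provides
        self.amqp_provider = RabbitMQProvides(
            self, "amqp", self._on_amqp_request)
        self.framework.observe(
            self.amqp_provider.on.has_amqp_clients,
            self._on_has_amqp_clients)

    def _on_amqp_request(self, event, username, vhost):
        '''Create the requested user and vhost and publish credentials.

        Called on the leader once a client has provided both 'username'
        and 'vhost' in its application data bag.
        '''
        # e.g. create the user/vhost on the broker, then publish:
        # event.relation.data[self.app]["password"] = password
        pass

    def _on_has_amqp_clients(self, event):
        '''React to a client joining the amqp relation.'''
        pass
```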
"""
# The unique Charmhub library identifier, never change it
LIBID = "45622352791142fd9cf87232e3bd6f2a"
# Increment this major API version when introducing breaking changes
LIBAPI = 0
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version
LIBPATCH = 1
import logging
from ops.framework import (
StoredState,
EventBase,
ObjectEvents,
EventSource,
Object,
)
from ops.model import Relation
from typing import List
logger = logging.getLogger(__name__)
class RabbitMQConnectedEvent(EventBase):
"""RabbitMQ connected Event."""
pass
class RabbitMQReadyEvent(EventBase):
"""RabbitMQ ready for use Event."""
pass
class RabbitMQGoneAwayEvent(EventBase):
"""RabbitMQ relation has gone-away Event"""
pass
class RabbitMQServerEvents(ObjectEvents):
"""Events class for `on`"""
connected = EventSource(RabbitMQConnectedEvent)
ready = EventSource(RabbitMQReadyEvent)
goneaway = EventSource(RabbitMQGoneAwayEvent)
class RabbitMQRequires(Object):
"""
RabbitMQRequires class
"""
on = RabbitMQServerEvents()
def __init__(self, charm, relation_name: str, username: str, vhost: str):
super().__init__(charm, relation_name)
self.charm = charm
self.relation_name = relation_name
self.username = username
self.vhost = vhost
self.framework.observe(
self.charm.on[relation_name].relation_joined,
self._on_amqp_relation_joined,
)
self.framework.observe(
self.charm.on[relation_name].relation_changed,
self._on_amqp_relation_changed,
)
self.framework.observe(
self.charm.on[relation_name].relation_departed,
self._on_amqp_relation_changed,
)
self.framework.observe(
self.charm.on[relation_name].relation_broken,
self._on_amqp_relation_broken,
)
def _on_amqp_relation_joined(self, event):
"""RabbitMQ relation joined."""
logging.debug("RabbitMQRequires on_joined")
self.on.connected.emit()
self.request_access(self.username, self.vhost)
def _on_amqp_relation_changed(self, event):
"""RabbitMQ relation changed."""
logging.debug("RabbitMQRequires on_changed/departed")
if self.password:
self.on.ready.emit()
def _on_amqp_relation_broken(self, event):
"""RabbitMQ relation broken."""
logging.debug("RabbitMQRequires on_broken")
self.on.goneaway.emit()
@property
def _amqp_rel(self) -> Relation:
"""The RabbitMQ relation."""
return self.framework.model.get_relation(self.relation_name)
@property
def password(self) -> str:
"""Return the RabbitMQ password from the server side of the relation."""
return self._amqp_rel.data[self._amqp_rel.app].get("password")
@property
def hostname(self) -> str:
"""Return the hostname from the RabbitMQ relation"""
return self._amqp_rel.data[self._amqp_rel.app].get("hostname")
@property
def ssl_port(self) -> str:
"""Return the SSL port from the RabbitMQ relation"""
return self._amqp_rel.data[self._amqp_rel.app].get("ssl_port")
@property
def ssl_ca(self) -> str:
"""Return the SSL CA from the RabbitMQ relation"""
return self._amqp_rel.data[self._amqp_rel.app].get("ssl_ca")
@property
def hostnames(self) -> List[str]:
"""Return a list of remote RMQ hosts from the RabbitMQ relation"""
_hosts = []
for unit in self._amqp_rel.units:
_hosts.append(self._amqp_rel.data[unit].get("ingress-address"))
return _hosts
def request_access(self, username: str, vhost: str) -> None:
"""Request access to the RabbitMQ server."""
if self.model.unit.is_leader():
logging.debug("Requesting RabbitMQ user and vhost")
self._amqp_rel.data[self.charm.app]["username"] = username
self._amqp_rel.data[self.charm.app]["vhost"] = vhost
class HasRabbitMQClientsEvent(EventBase):
"""Has RabbitMQClients Event."""
pass
class ReadyRabbitMQClientsEvent(EventBase):
"""RabbitMQClients Ready Event."""
pass
class RabbitMQClientEvents(ObjectEvents):
"""Events class for `on`"""
has_amqp_clients = EventSource(HasRabbitMQClientsEvent)
ready_amqp_clients = EventSource(ReadyRabbitMQClientsEvent)
class RabbitMQProvides(Object):
"""
RabbitMQProvides class
"""
on = RabbitMQClientEvents()
def __init__(self, charm, relation_name, callback):
super().__init__(charm, relation_name)
self.charm = charm
self.relation_name = relation_name
self.callback = callback
self.framework.observe(
self.charm.on[relation_name].relation_joined,
self._on_amqp_relation_joined,
)
self.framework.observe(
self.charm.on[relation_name].relation_changed,
self._on_amqp_relation_changed,
)
self.framework.observe(
self.charm.on[relation_name].relation_broken,
self._on_amqp_relation_broken,
)
def _on_amqp_relation_joined(self, event):
"""Handle RabbitMQ joined."""
logging.debug("RabbitMQProvides on_joined data={}"
.format(event.relation.data[event.relation.app]))
self.on.has_amqp_clients.emit()
def _on_amqp_relation_changed(self, event):
"""Handle RabbitMQ changed."""
logging.debug("RabbitMQProvides on_changed data={}"
.format(event.relation.data[event.relation.app]))
# Validate data on the relation
if self.username(event) and self.vhost(event):
self.on.ready_amqp_clients.emit()
if self.charm.unit.is_leader():
self.callback(event, self.username(event), self.vhost(event))
else:
logging.warning("Received RabbitMQ changed event without the "
"expected keys ('username', 'vhost') in the "
"application data bag. Incompatible charm in "
"other end of relation?")
def _on_amqp_relation_broken(self, event):
"""Handle RabbitMQ broken."""
logging.debug("RabbitMQProvides on_broken")
# TODO clear data on the relation
def username(self, event):
"""Return the RabbitMQ username from the client side of the relation."""
return event.relation.data[event.relation.app].get("username")
def vhost(self, event):
"""Return the RabbitMQ vhost from the client side of the relation."""
return event.relation.data[event.relation.app].get("vhost")

View File

@ -0,0 +1,579 @@
# Copyright 2022 Canonical Ltd.
# See LICENSE file for licensing details.
r"""# Interface Library for ingress.
This library wraps relation endpoints using the `ingress` interface
and provides a Python API for both requesting and providing per-application
ingress, with load-balancing occurring across all units.
## Getting Started
To get started using the library, you just need to fetch the library using `charmcraft`.
```shell
cd some-charm
charmcraft fetch-lib charms.traefik_k8s.v1.ingress
```
In the `metadata.yaml` of the charm, add the following:
```yaml
requires:
ingress:
interface: ingress
limit: 1
```
Then, to initialise the library:
```python
from charms.traefik_k8s.v1.ingress import (IngressPerAppRequirer,
IngressPerAppReadyEvent, IngressPerAppRevokedEvent)
class SomeCharm(CharmBase):
def __init__(self, *args):
# ...
self.ingress = IngressPerAppRequirer(self, port=80)
# The following event is triggered when the ingress URL to be used
# by this deployment of the `SomeCharm` is ready (or changes).
self.framework.observe(
self.ingress.on.ready, self._on_ingress_ready
)
self.framework.observe(
self.ingress.on.revoked, self._on_ingress_revoked
)
def _on_ingress_ready(self, event: IngressPerAppReadyEvent):
logger.info("This app's ingress URL: %s", event.url)
def _on_ingress_revoked(self, event: IngressPerAppRevokedEvent):
logger.info("This app no longer has ingress")
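```

On the provider side (for example a Traefik-like charm) the flow is roughly
as follows. This is a minimal sketch: `external_hostname`, the handler names
and the URL scheme are illustrative assumptions, not part of this library.

```python
from ops.charm import CharmBase

from charms.traefik_k8s.v1.ingress import (IngressPerAppProvider,
    IngressPerAppDataProvidedEvent, IngressPerAppDataRemovedEvent)

class SomeProviderCharm(CharmBase):
    def __init__(self, *args):
        # ...
        self.ingress_per_app = IngressPerAppProvider(self)
        self.framework.observe(
            self.ingress_per_app.on.data_provided, self._on_ingress_data_provided
        )
        self.framework.observe(
            self.ingress_per_app.on.data_removed, self._on_ingress_data_removed
        )

    def _on_ingress_data_provided(self, event: IngressPerAppDataProvidedEvent):
        # Only the leader may publish the URL back on the relation.
        if not self.unit.is_leader():
            return
        # Configure routing for the requirer however this charm does it,
        # then publish the resulting URL.
        url = f"http://{self.config['external_hostname']}/{event.model}-{event.name}"
        self.ingress_per_app.publish_url(event.relation, url)

    def _on_ingress_data_removed(self, event: IngressPerAppDataRemovedEvent):
        # Clean up any routing configuration for the departed application.
        pass
```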
"""
import logging
import socket
import typing
from typing import Any, Dict, Optional, Tuple, Union
import yaml
from ops.charm import CharmBase, RelationBrokenEvent, RelationEvent
from ops.framework import EventSource, Object, ObjectEvents, StoredState
from ops.model import ModelError, Relation
# The unique Charmhub library identifier, never change it
LIBID = "e6de2a5cd5b34422a204668f3b8f90d2"
# Increment this major API version when introducing breaking changes
LIBAPI = 1
# Increment this PATCH version before using `charmcraft publish-lib` or reset
# to 0 if you are raising the major API version
LIBPATCH = 15
DEFAULT_RELATION_NAME = "ingress"
RELATION_INTERFACE = "ingress"
log = logging.getLogger(__name__)
try:
import jsonschema
DO_VALIDATION = True
except ModuleNotFoundError:
log.warning(
"The `ingress` library needs the `jsonschema` package to be able "
"to do runtime data validation; without it, it will still work but validation "
"will be disabled. \n"
"It is recommended to add `jsonschema` to the 'requirements.txt' of your charm, "
"which will enable this feature."
)
DO_VALIDATION = False
INGRESS_REQUIRES_APP_SCHEMA = {
"type": "object",
"properties": {
"model": {"type": "string"},
"name": {"type": "string"},
"host": {"type": "string"},
"port": {"type": "string"},
"strip-prefix": {"type": "string"},
"redirect-https": {"type": "string"},
},
"required": ["model", "name", "host", "port"],
}
INGRESS_PROVIDES_APP_SCHEMA = {
"type": "object",
"properties": {
"ingress": {"type": "object", "properties": {"url": {"type": "string"}}},
},
"required": ["ingress"],
}
try:
from typing import TypedDict
except ImportError:
from typing_extensions import TypedDict # py35 compatibility
# Model of the data a unit implementing the requirer will need to provide.
RequirerData = TypedDict(
"RequirerData",
{
"model": str,
"name": str,
"host": str,
"port": int,
"strip-prefix": bool,
"redirect-https": bool,
},
total=False,
)
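# For illustration only (hypothetical values): a requirer databag as returned
# by `_get_requirer_data` below, after validation and type conversion. On the
# wire all of these values are strings.
#
#     {
#         "model": "my-model",
#         "name": "my-app",
#         "host": "my-app.my-model.svc.cluster.local",
#         "port": 8042,
#         "strip-prefix": True,
#         "redirect-https": False,
#     }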
# Provider ingress data model.
ProviderIngressData = TypedDict("ProviderIngressData", {"url": str})
# Provider application databag model.
ProviderApplicationData = TypedDict("ProviderApplicationData", {"ingress": ProviderIngressData}) # type: ignore
def _validate_data(data, schema):
"""Checks whether `data` matches `schema`.
Will raise DataValidationError if the data is not valid, else return None.
"""
if not DO_VALIDATION:
return
try:
jsonschema.validate(instance=data, schema=schema)
except jsonschema.ValidationError as e:
raise DataValidationError(data, schema) from e
class DataValidationError(RuntimeError):
"""Raised when data validation fails on IPU relation data."""
class _IngressPerAppBase(Object):
"""Base class for IngressPerApp interface classes."""
def __init__(self, charm: CharmBase, relation_name: str = DEFAULT_RELATION_NAME):
super().__init__(charm, relation_name + "_V1")
self.charm: CharmBase = charm
self.relation_name = relation_name
self.app = self.charm.app
self.unit = self.charm.unit
observe = self.framework.observe
rel_events = charm.on[relation_name]
observe(rel_events.relation_created, self._handle_relation)
observe(rel_events.relation_joined, self._handle_relation)
observe(rel_events.relation_changed, self._handle_relation)
observe(rel_events.relation_broken, self._handle_relation_broken)
observe(charm.on.leader_elected, self._handle_upgrade_or_leader) # type: ignore
observe(charm.on.upgrade_charm, self._handle_upgrade_or_leader) # type: ignore
@property
def relations(self):
"""The list of Relation instances associated with this endpoint."""
return list(self.charm.model.relations[self.relation_name])
def _handle_relation(self, event):
"""Subclasses should implement this method to handle a relation update."""
pass
def _handle_relation_broken(self, event):
"""Subclasses should implement this method to handle a relation breaking."""
pass
def _handle_upgrade_or_leader(self, event):
"""Subclasses should implement this method to handle upgrades or leadership change."""
pass
class _IPAEvent(RelationEvent):
__args__: Tuple[str, ...] = ()
__optional_kwargs__: Dict[str, Any] = {}
@classmethod
def __attrs__(cls):
return cls.__args__ + tuple(cls.__optional_kwargs__.keys())
def __init__(self, handle, relation, *args, **kwargs):
super().__init__(handle, relation)
if not len(self.__args__) == len(args):
raise TypeError("expected {} args, got {}".format(len(self.__args__), len(args)))
for attr, obj in zip(self.__args__, args):
setattr(self, attr, obj)
for attr, default in self.__optional_kwargs__.items():
obj = kwargs.get(attr, default)
setattr(self, attr, obj)
def snapshot(self):
dct = super().snapshot()
for attr in self.__attrs__():
obj = getattr(self, attr)
try:
dct[attr] = obj
except ValueError as e:
raise ValueError(
"cannot automagically serialize {}: "
"override this method and do it "
"manually.".format(obj)
) from e
return dct
def restore(self, snapshot) -> None:
super().restore(snapshot)
for attr, obj in snapshot.items():
setattr(self, attr, obj)
class IngressPerAppDataProvidedEvent(_IPAEvent):
"""Event representing that ingress data has been provided for an app."""
__args__ = ("name", "model", "port", "host", "strip_prefix", "redirect_https")
if typing.TYPE_CHECKING:
name: Optional[str] = None
model: Optional[str] = None
port: Optional[str] = None
host: Optional[str] = None
strip_prefix: bool = False
redirect_https: bool = False
class IngressPerAppDataRemovedEvent(RelationEvent):
"""Event representing that ingress data has been removed for an app."""
class IngressPerAppProviderEvents(ObjectEvents):
"""Container for IPA Provider events."""
data_provided = EventSource(IngressPerAppDataProvidedEvent)
data_removed = EventSource(IngressPerAppDataRemovedEvent)
class IngressPerAppProvider(_IngressPerAppBase):
"""Implementation of the provider of ingress."""
on = IngressPerAppProviderEvents() # type: ignore
def __init__(self, charm: CharmBase, relation_name: str = DEFAULT_RELATION_NAME):
"""Constructor for IngressPerAppProvider.
Args:
charm: The charm that is instantiating the instance.
relation_name: The name of the relation endpoint to bind to
(defaults to "ingress").
"""
super().__init__(charm, relation_name)
def _handle_relation(self, event):
# created, joined or changed: if remote side has sent the required data:
# notify listeners.
if self.is_ready(event.relation):
data = self._get_requirer_data(event.relation)
self.on.data_provided.emit( # type: ignore
event.relation,
data["name"],
data["model"],
data["port"],
data["host"],
data.get("strip-prefix", False),
data.get("redirect-https", False),
)
def _handle_relation_broken(self, event):
self.on.data_removed.emit(event.relation) # type: ignore
def wipe_ingress_data(self, relation: Relation):
"""Clear ingress data from relation."""
assert self.unit.is_leader(), "only leaders can do this"
try:
relation.data
except ModelError as e:
log.warning(
"error {} accessing relation data for {!r}. "
"Probably a ghost of a dead relation is still "
"lingering around.".format(e, relation.name)
)
return
del relation.data[self.app]["ingress"]
def _get_requirer_data(self, relation: Relation) -> RequirerData: # type: ignore
"""Fetch and validate the requirer's app databag.
For convenience, we convert 'port' to integer.
"""
if not relation.app or not relation.app.name: # type: ignore
# Handle edge case where remote app name can be missing, e.g.,
# relation_broken events.
# FIXME https://github.com/canonical/traefik-k8s-operator/issues/34
return {}
databag = relation.data[relation.app]
remote_data: Dict[str, Union[int, str]] = {}
for k in ("port", "host", "model", "name", "mode", "strip-prefix", "redirect-https"):
v = databag.get(k)
if v is not None:
remote_data[k] = v
_validate_data(remote_data, INGRESS_REQUIRES_APP_SCHEMA)
remote_data["port"] = int(remote_data["port"])
remote_data["strip-prefix"] = bool(remote_data.get("strip-prefix", "false") == "true")
remote_data["redirect-https"] = bool(remote_data.get("redirect-https", "false") == "true")
return typing.cast(RequirerData, remote_data)
def get_data(self, relation: Relation) -> RequirerData: # type: ignore
"""Fetch the remote app's databag, i.e. the requirer data."""
return self._get_requirer_data(relation)
def is_ready(self, relation: Optional[Relation] = None):
"""The Provider is ready if the requirer has sent valid data."""
if not relation:
return any(map(self.is_ready, self.relations))
try:
return bool(self._get_requirer_data(relation))
except DataValidationError as e:
log.warning("Requirer not ready; validation error encountered: %s" % str(e))
return False
def _provided_url(self, relation: Relation) -> ProviderIngressData: # type: ignore
"""Fetch and validate this app databag; return the ingress url."""
if not relation.app or not relation.app.name or not self.unit.is_leader(): # type: ignore
# Handle edge case where remote app name can be missing, e.g.,
# relation_broken events.
# Also, only leader units can read own app databags.
# FIXME https://github.com/canonical/traefik-k8s-operator/issues/34
return typing.cast(ProviderIngressData, {}) # noqa
# fetch the provider's app databag
raw_data = relation.data[self.app].get("ingress")
if not raw_data:
raise RuntimeError("This application did not `publish_url` yet.")
ingress: ProviderIngressData = yaml.safe_load(raw_data)
_validate_data({"ingress": ingress}, INGRESS_PROVIDES_APP_SCHEMA)
return ingress
def publish_url(self, relation: Relation, url: str):
"""Publish to the app databag the ingress url."""
ingress = {"url": url}
ingress_data = {"ingress": ingress}
_validate_data(ingress_data, INGRESS_PROVIDES_APP_SCHEMA)
relation.data[self.app]["ingress"] = yaml.safe_dump(ingress)
@property
def proxied_endpoints(self):
"""Returns the ingress settings provided to applications by this IngressPerAppProvider.
For example, when this IngressPerAppProvider has provided the
`http://foo.bar/my-model.my-app` URL to the my-app application, the returned dictionary
will be:
```
{
"my-app": {
"url": "http://foo.bar/my-model.my-app"
}
}
```
"""
results = {}
for ingress_relation in self.relations:
assert (
ingress_relation.app
), "no app in relation (shouldn't happen)" # for type checker
results[ingress_relation.app.name] = self._provided_url(ingress_relation)
return results
class IngressPerAppReadyEvent(_IPAEvent):
"""Event representing that ingress for an app is ready."""
__args__ = ("url",)
if typing.TYPE_CHECKING:
url: Optional[str] = None
class IngressPerAppRevokedEvent(RelationEvent):
"""Event representing that ingress for an app has been revoked."""
class IngressPerAppRequirerEvents(ObjectEvents):
"""Container for IPA Requirer events."""
ready = EventSource(IngressPerAppReadyEvent)
revoked = EventSource(IngressPerAppRevokedEvent)
class IngressPerAppRequirer(_IngressPerAppBase):
"""Implementation of the requirer of the ingress relation."""
on = IngressPerAppRequirerEvents() # type: ignore
# used to prevent spurious urls to be sent out if the event we're currently
# handling is a relation-broken one.
_stored = StoredState()
def __init__(
self,
charm: CharmBase,
relation_name: str = DEFAULT_RELATION_NAME,
*,
host: Optional[str] = None,
port: Optional[int] = None,
strip_prefix: bool = False,
redirect_https: bool = False,
):
"""Constructor for IngressRequirer.
The request args can be used to specify the ingress properties when the
instance is created. If any are set, at least `port` is required, and
they will be sent to the ingress provider as soon as it is available.
All request args must be given as keyword args.
Args:
charm: the charm that is instantiating the library.
relation_name: the name of the relation endpoint to bind to (defaults to `ingress`);
relation must be of interface type `ingress` and have "limit: 1".
host: Hostname to be used by the ingress provider to address the requiring
application; if unspecified, the default Kubernetes service name will be used.
strip_prefix: configure Traefik to strip the path prefix.
redirect_https: redirect incoming requests to HTTPS.
Request Args:
port: the port of the service
"""
super().__init__(charm, relation_name)
self.charm: CharmBase = charm
self.relation_name = relation_name
self._strip_prefix = strip_prefix
self._redirect_https = redirect_https
self._stored.set_default(current_url=None) # type: ignore
# if instantiated with a port, and we are related, then
# we immediately publish our ingress data to speed up the process.
if port:
self._auto_data = host, port
else:
self._auto_data = None
def _handle_relation(self, event):
# created, joined or changed: if we have auto data: publish it
self._publish_auto_data(event.relation)
if self.is_ready():
# Avoid spurious events, emit only when there is a NEW URL available
new_url = (
None
if isinstance(event, RelationBrokenEvent)
else self._get_url_from_relation_data()
)
if self._stored.current_url != new_url: # type: ignore
self._stored.current_url = new_url # type: ignore
self.on.ready.emit(event.relation, new_url) # type: ignore
def _handle_relation_broken(self, event):
self._stored.current_url = None # type: ignore
self.on.revoked.emit(event.relation) # type: ignore
def _handle_upgrade_or_leader(self, event):
"""On upgrade/leadership change: ensure we publish the data we have."""
for relation in self.relations:
self._publish_auto_data(relation)
def is_ready(self):
"""The Requirer is ready if the Provider has sent valid data."""
try:
return bool(self._get_url_from_relation_data())
except DataValidationError as e:
log.warning("Requirer not ready; validation error encountered: %s" % str(e))
return False
def _publish_auto_data(self, relation: Relation):
if self._auto_data and self.unit.is_leader():
host, port = self._auto_data
self.provide_ingress_requirements(host=host, port=port)
def provide_ingress_requirements(self, *, host: Optional[str] = None, port: int):
"""Publishes the data that Traefik needs to provide ingress.
NB only the leader unit is supposed to do this.
Args:
host: Hostname to be used by the ingress provider to address the
requirer unit; if unspecified, FQDN will be used instead
port: the port of the service (required)
"""
# get only the leader to publish the data since we only
# require one unit to publish it -- it will not differ between units,
# unlike in ingress-per-unit.
assert self.unit.is_leader(), "only leaders should do this."
assert self.relation, "no relation"
if not host:
host = socket.getfqdn()
data = {
"model": self.model.name,
"name": self.app.name,
"host": host,
"port": str(port),
}
if self._strip_prefix:
data["strip-prefix"] = "true"
if self._redirect_https:
data["redirect-https"] = "true"
_validate_data(data, INGRESS_REQUIRES_APP_SCHEMA)
self.relation.data[self.app].update(data)
@property
def relation(self):
"""The established Relation instance, or None."""
return self.relations[0] if self.relations else None
def _get_url_from_relation_data(self) -> Optional[str]:
"""The full ingress URL to reach the current unit.
Returns None if the URL isn't available yet.
"""
relation = self.relation
if not relation or not relation.app:
return None
# fetch the provider's app databag
try:
raw = relation.data.get(relation.app, {}).get("ingress")
except ModelError as e:
log.debug(
f"Error {e} attempting to read remote app data; "
f"probably we are in a relation_departed hook"
)
return None
if not raw:
return None
ingress: ProviderIngressData = yaml.safe_load(raw)
_validate_data({"ingress": ingress}, INGRESS_PROVIDES_APP_SCHEMA)
return ingress["url"]
@property
def url(self) -> Optional[str]:
"""The full ingress URL to reach the current unit.
Returns None if the URL isn't available yet.
"""
data = self._stored.current_url or self._get_url_from_relation_data() # type: ignore
assert isinstance(data, (str, type(None))) # for static checker
return data

77
metadata.yaml Normal file
View File

@ -0,0 +1,77 @@
name: aodh-k8s
summary: OpenStack aodh service
maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
description: |
Aodh is the alarming service for OpenStack Telemetry. It provides an API
for creating and managing alarms, and evaluator, notifier and listener
services that trigger actions when alarm rules on metrics or events are met.
.
version: 3
bases:
- name: ubuntu
channel: 22.04/stable
assumes:
- k8s-api
- juju >= 3.2
tags:
- openstack
source: https://opendev.org/openstack/charm-aodh-k8s
issues: https://bugs.launchpad.net/charm-aodh-k8s
containers:
aodh-api:
resource: aodh-api-image
aodh-evaluator:
resource: aodh-evaluator-image
aodh-notifier:
resource: aodh-notifier-image
aodh-listener:
resource: aodh-listener-image
aodh-expirer:
resource: aodh-expirer-image
resources:
aodh-api-image:
type: oci-image
description: OCI image for OpenStack aodh
upstream-source: kolla/ubuntu-binary-aodh-api:yoga
aodh-evaluator-image:
type: oci-image
description: OCI image for OpenStack aodh
upstream-source: kolla/ubuntu-binary-aodh-evaluator:yoga
aodh-notifier-image:
type: oci-image
description: OCI image for OpenStack aodh
upstream-source: kolla/ubuntu-binary-aodh-notifier:yoga
aodh-listener-image:
type: oci-image
description: OCI image for OpenStack aodh
upstream-source: kolla/ubuntu-binary-aodh-listener:yoga
aodh-expirer-image:
type: oci-image
description: OCI image for OpenStack aodh
upstream-source: kolla/ubuntu-binary-aodh-expirer:yoga
requires:
database:
interface: mysql_client
limit: 1
identity-service:
interface: keystone
ingress-internal:
interface: ingress
optional: true
limit: 1
ingress-public:
interface: ingress
limit: 1
amqp:
interface: rabbitmq
provides:
aodh:
interface: aodh
peers:
peers:
interface: aodh-peer

10
osci.yaml Normal file
View File

@ -0,0 +1,10 @@
- project:
templates:
- charm-publish-jobs
vars:
needs_charm_build: true
charm_build_name: aodh-k8s
build_type: charmcraft
publish_charm: true
charmcraft_channel: 2.0/stable
publish_channel: 2023.1/edge

33
pyproject.toml Normal file
View File

@ -0,0 +1,33 @@
# Testing tools configuration
[tool.coverage.run]
branch = true
[tool.coverage.report]
show_missing = true
[tool.pytest.ini_options]
minversion = "6.0"
log_cli_level = "INFO"
# Formatting tools configuration
[tool.black]
line-length = 99
target-version = ["py38"]
[tool.isort]
line_length = 99
profile = "black"
# Linting tools configuration
[tool.flake8]
max-line-length = 99
max-doc-length = 99
max-complexity = 10
exclude = [".git", "__pycache__", ".tox", "build", "dist", "*.egg_info", "venv"]
select = ["E", "W", "F", "C", "N", "R", "D", "H"]
# Ignore W503, E501 because using black creates errors with this
# Ignore D107 Missing docstring in __init__
ignore = ["W503", "E501", "D107"]
# D100, D101, D102, D103: Ignore missing docstrings in tests
per-file-ignores = ["tests/*:D100,D101,D102,D103,D104"]
docstring-convention = "google"

13
rename.sh Executable file
View File

@ -0,0 +1,13 @@
#!/bin/bash
charm=$(grep "charm_build_name" osci.yaml | awk '{print $2}')
echo "renaming ${charm}_*.charm to ${charm}.charm"
echo -n "pwd: "
pwd
ls -al
echo "Removing any previously built ${charm}.charm"
if [[ -e "${charm}.charm" ]];
then
rm "${charm}.charm"
fi
echo "Renaming charm here."
mv ${charm}_*.charm ${charm}.charm

8
requirements.txt Normal file
View File

@ -0,0 +1,8 @@
ops
jinja2
git+https://github.com/openstack/charm-ops-sunbeam#egg=ops_sunbeam
lightkube
# Uncomment below if charm relates to ceph
# git+https://github.com/openstack/charm-ops-interface-ceph-client#egg=interface_ceph_client
# git+https://github.com/juju/charm-helpers.git#egg=charmhelpers

280
src/charm.py Executable file
View File

@ -0,0 +1,280 @@
#!/usr/bin/env python3
"""Aodh Operator Charm.
This charm provides Aodh services as part of an OpenStack deployment.
"""
import logging
from typing import List
import ops_sunbeam.charm as sunbeam_charm
import ops_sunbeam.container_handlers as sunbeam_chandlers
import ops_sunbeam.core as sunbeam_core
from ops.framework import StoredState
from ops.main import main
logger = logging.getLogger(__name__)
AODH_WSGI_CONTAINER = "aodh-api"
AODH_EVALUATOR_CONTAINER = "aodh-evaluator"
AODH_NOTIFIER_CONTAINER = "aodh-notifier"
AODH_LISTENER_CONTAINER = "aodh-listener"
AODH_EXPIRER_CONTAINER = "aodh-expirer"
class AODHEvaluatorPebbleHandler(sunbeam_chandlers.ServicePebbleHandler):
"""Pebble handler for AODH Evaluator."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def get_layer(self) -> dict:
"""AODH Evaluator service layer.
:returns: pebble layer configuration for the aodh-evaluator service
:rtype: dict
"""
return {
"summary": "aodh evaluator layer",
"description": "pebble configuration for aodh-evaluator service",
"services": {
"aodh-evaluator": {
"override": "replace",
"summary": "AODH Evaluator",
"command": "aodh-evaluator",
"startup": "enabled",
"user": "aodh",
"group": "aodh",
}
},
}
def default_container_configs(
self,
) -> List[sunbeam_core.ContainerConfigFile]:
"""Container configurations for handler."""
return [
sunbeam_core.ContainerConfigFile(
"/etc/aodh/aodh.conf",
"root",
"aodh",
0o640,
)
]
class AODHNotifierPebbleHandler(sunbeam_chandlers.ServicePebbleHandler):
"""Pebble handler for AODH Notifier container."""
def get_layer(self):
"""AODH Notifier service.
:returns: pebble service layer configuration for aodh-notifier service
:rtype: dict
"""
return {
"summary": "aodh notifier layer",
"description": "pebble configuration for aodh-notifier service",
"services": {
"aodh-notifier": {
"override": "replace",
"summary": "AODH Notifier",
"command": "aodh-notifier",
"startup": "enabled",
"user": "aodh",
"group": "aodh",
}
},
}
def default_container_configs(
self,
) -> List[sunbeam_core.ContainerConfigFile]:
"""Container configurations for handler."""
return [
sunbeam_core.ContainerConfigFile(
"/etc/aodh/aodh.conf",
"root",
"aodh",
0o640,
)
]
class AODHListenerPebbleHandler(sunbeam_chandlers.ServicePebbleHandler):
"""Pebble handler for AODH Listener container."""
def get_layer(self):
"""AODH Listener service.
:returns: pebble service layer configuration for aodh-listener service
:rtype: dict
"""
return {
"summary": "aodh listener layer",
"description": "pebble configuration for AODH Listener service",
"services": {
"aodh-listener": {
"override": "replace",
"summary": "AODH Listener",
"command": "aodh-listener",
"startup": "enabled",
"user": "aodh",
"group": "aodh",
}
},
}
def default_container_configs(
self,
) -> List[sunbeam_core.ContainerConfigFile]:
"""Container configurations for handler."""
return [
sunbeam_core.ContainerConfigFile(
"/etc/aodh/aodh.conf",
"root",
"aodh",
0o640,
)
]
class AODHExpirerPebbleHandler(sunbeam_chandlers.ServicePebbleHandler):
"""Pebble handler for AODH Expirer container."""
def get_layer(self):
"""AODH Expirer service.
:returns: pebble service layer configuration for aodh-expirer service
:rtype: dict
"""
return {
"summary": "aodh expirer layer",
"description": "pebble configuration for AODH Expirer service",
"services": {
"aodh-expirer": {
"override": "replace",
"summary": "AODH Expirer",
"command": ('/bin/bash -c "while true; do aodh-expirer; sleep 60; done"'),
"startup": "enabled",
"user": "aodh",
"group": "aodh",
}
},
}
def default_container_configs(
self,
) -> List[sunbeam_core.ContainerConfigFile]:
"""Container configurations for handler."""
return [
sunbeam_core.ContainerConfigFile(
"/etc/aodh/aodh.conf",
"root",
"aodh",
0o640,
)
]
class AodhOperatorCharm(sunbeam_charm.OSBaseOperatorAPICharm):
"""Charm the service."""
_state = StoredState()
service_name = "aodh-api"
wsgi_admin_script = "/usr/share/aodh/app.wsgi"
wsgi_public_script = "/usr/share/aodh/app.wsgi"
db_sync_cmds = [["aodh-dbsync"]]
@property
def service_conf(self) -> str:
"""Service default configuration file."""
return "/etc/aodh/aodh.conf"
@property
def service_user(self) -> str:
"""Service user file and directory ownership."""
return "aodh"
@property
def service_group(self) -> str:
"""Service group file and directory ownership."""
return "aodh"
@property
def service_endpoints(self):
"""Return service endpoints for the service."""
return [
{
"service_name": "aodh",
"type": "aodh",
"description": "OpenStack Aodh API",
"internal_url": f"{self.internal_url}",
"public_url": f"{self.public_url}",
"admin_url": f"{self.admin_url}",
}
]
@property
def default_public_ingress_port(self):
"""Ingress Port for API service."""
return 8042
def get_pebble_handlers(
self,
) -> List[sunbeam_chandlers.ServicePebbleHandler]:
"""Pebble handlers for operator."""
# if self.config.get("alarm-history-time-to-live") > 0:
# enable_expirer = True
# else:
# enable_expirer = False
pebble_handlers = [
sunbeam_chandlers.WSGIPebbleHandler(
self,
AODH_WSGI_CONTAINER,
self.service_name,
self.container_configs,
self.template_dir,
self.configure_charm,
f"wsgi-{self.service_name}",
),
AODHEvaluatorPebbleHandler(
self,
AODH_EVALUATOR_CONTAINER,
"aodh-evaluator",
[],
self.template_dir,
self.configure_charm,
),
AODHNotifierPebbleHandler(
self,
AODH_NOTIFIER_CONTAINER,
"aodh-notifier",
[],
self.template_dir,
self.configure_charm,
),
AODHListenerPebbleHandler(
self,
AODH_LISTENER_CONTAINER,
"aodh-listener",
[],
self.template_dir,
self.configure_charm,
),
AODHExpirerPebbleHandler(
self,
AODH_EXPIRER_CONTAINER,
"aodh-expirer",
[],
self.template_dir,
self.configure_charm,
# enable_expirer,
),
]
return pebble_handlers
if __name__ == "__main__":
main(AodhOperatorCharm)

View File

@ -0,0 +1,16 @@
[DEFAULT]
debug = {{ options.debug }}
gnocchi_external_project_owner = services
{% if identity_service.service_domain -%}
gnocchi_external_domain_name = {{ identity_service.service_domain }}
{% endif %}
{% include "parts/section-database" %}
alarm_history_time_to_live = {{ options.alarm_history_time_to_live }}
alarm_histories_delete_batch_size = {{ options.alarm_histories_delete_batch_size }}
{% include "parts/section-identity" %}
{% include "parts/section-service-user" %}

View File

@ -0,0 +1,22 @@
###############################################################################
# [ WARNING ]
# ceph configuration file maintained in aso
# local changes may be overwritten.
###############################################################################
[global]
{% if ceph.auth -%}
auth_supported = {{ ceph.auth }}
mon host = {{ ceph.mon_hosts }}
{% endif -%}
keyring = /etc/ceph/$cluster.$name.keyring
log to syslog = false
err to syslog = false
clog to syslog = false
{% if ceph.rbd_features %}
rbd default features = {{ ceph.rbd_features }}
{% endif %}
[client]
{% if ceph_config.rbd_default_data_pool -%}
rbd default data pool = {{ ceph_config.rbd_default_data_pool }}
{% endif %}

View File

@ -0,0 +1,3 @@
{% if database.connection -%}
connection = {{ database.connection }}
{% endif -%}

View File

@ -0,0 +1,23 @@
{% if identity_service.admin_auth_url -%}
auth_url = {{ identity_service.admin_auth_url }}
interface = admin
{% elif identity_service.internal_auth_url -%}
auth_url = {{ identity_service.internal_auth_url }}
interface = internal
{% elif identity_service.internal_host -%}
auth_url = {{ identity_service.internal_protocol }}://{{ identity_service.internal_host }}:{{ identity_service.internal_port }}
interface = internal
{% endif -%}
{% if identity_service.public_auth_url -%}
www_authenticate_uri = {{ identity_service.public_auth_url }}
{% elif identity_service.internal_host -%}
www_authenticate_uri = {{ identity_service.internal_protocol }}://{{ identity_service.internal_host }}:{{ identity_service.internal_port }}
{% endif -%}
auth_type = password
project_domain_name = {{ identity_service.service_domain_name }}
user_domain_name = {{ identity_service.service_domain_name }}
project_name = {{ identity_service.service_project_name }}
username = {{ identity_service.service_user_name }}
password = {{ identity_service.service_password }}
service_token_roles = {{ identity_service.admin_role }}
service_token_roles_required = True

View File

@ -0,0 +1,3 @@
[database]
{% include "parts/database-connection" %}
connection_recycle_time = 200

View File

@ -0,0 +1,10 @@
{% if trusted_dashboards %}
[federation]
{% for dashboard_url in trusted_dashboards -%}
trusted_dashboard = {{ dashboard_url }}
{% endfor -%}
{% endif %}
{% for sp in fid_sps -%}
[{{ sp['protocol-name'] }}]
remote_id_attribute = {{ sp['remote-id-attribute'] }}
{% endfor -%}

View File

@ -0,0 +1,2 @@
[keystone_authtoken]
{% include "parts/identity-data" %}

View File

@ -0,0 +1,6 @@
{% for section in sections -%}
[{{section}}]
{% for key, value in sections[section].items() -%}
{{ key }} = {{ value }}
{% endfor %}
{%- endfor %}

View File

@ -0,0 +1,15 @@
{% if identity_service.service_domain_id -%}
[service_user]
{% if identity_service.internal_auth_url -%}
auth_url = {{ identity_service.internal_auth_url }}
{% elif identity_service.internal_host -%}
auth_url = {{ identity_service.internal_protocol }}://{{ identity_service.internal_host }}:{{ identity_service.internal_port }}
{% endif -%}
send_service_user_token = true
auth_type = password
project_domain_id = {{ identity_service.service_domain_id }}
user_domain_id = {{ identity_service.service_domain_id }}
project_name = {{ identity_service.service_project_name }}
username = {{ identity_service.service_user_name }}
password = {{ identity_service.service_password }}
{% endif -%}

View File

@ -0,0 +1,15 @@
{% if enable_signing -%}
[signing]
{% if certfile -%}
certfile = {{ certfile }}
{% endif -%}
{% if keyfile -%}
keyfile = {{ keyfile }}
{% endif -%}
{% if ca_certs -%}
ca_certs = {{ ca_certs }}
{% endif -%}
{% if ca_key -%}
ca_key = {{ ca_key }}
{% endif -%}
{% endif -%}

View File

@ -0,0 +1,28 @@
Listen {{ wsgi_config.public_port }}
<VirtualHost *:{{ wsgi_config.public_port }}>
WSGIDaemonProcess {{ wsgi_config.group }} processes=3 threads=1 user={{ wsgi_config.user }} group={{ wsgi_config.group }} \
display-name=%{GROUP}
WSGIProcessGroup {{ wsgi_config.group }}
{% if ingress_public.ingress_path -%}
WSGIScriptAlias {{ ingress_public.ingress_path }} {{ wsgi_config.wsgi_public_script }}
{% endif -%}
WSGIScriptAlias / {{ wsgi_config.wsgi_public_script }}
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog {{ wsgi_config.error_log }}
CustomLog {{ wsgi_config.custom_log }} combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>

View File

@ -0,0 +1,28 @@
Listen {{ wsgi_config.public_port }}
<VirtualHost *:{{ wsgi_config.public_port }}>
WSGIDaemonProcess {{ wsgi_config.group }} processes=3 threads=1 user={{ wsgi_config.user }} group={{ wsgi_config.group }} \
display-name=%{GROUP}
WSGIProcessGroup {{ wsgi_config.group }}
{% if ingress_public.ingress_path -%}
WSGIScriptAlias {{ ingress_public.ingress_path }} {{ wsgi_config.wsgi_public_script }}
{% endif -%}
WSGIScriptAlias / {{ wsgi_config.wsgi_public_script }}
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog {{ wsgi_config.error_log }}
CustomLog {{ wsgi_config.custom_log }} combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>

9
test-requirements.txt Normal file
View File

@ -0,0 +1,9 @@
# This file is managed centrally. If you find the need to modify this as a
# one-off, please don't. Instead, consult #openstack-charms and ask about
# requirements management in charms via bot-control. Thank you.
coverage
mock
flake8
stestr
ops

74
tests/bundles/smoke.yaml Normal file
View File

@ -0,0 +1,74 @@
bundle: kubernetes
applications:
mysql:
charm: ch:mysql-k8s
channel: 8.0/stable
scale: 1
trust: false
# Traefik is currently required to provide ingress/networking.
# If it isn't present, the units will hang at "installing agent".
traefik:
charm: ch:traefik-k8s
channel: 1.0/stable
scale: 1
trust: true
traefik-public:
charm: ch:traefik-k8s
channel: 1.0/stable
scale: 1
trust: true
options:
kubernetes-service-annotations: metallb.universe.tf/address-pool=public
# required for nova
rabbitmq:
charm: ch:rabbitmq-k8s
channel: 3.9/edge
scale: 1
trust: true
keystone:
charm: ch:keystone-k8s
channel: 2023.1/edge
scale: 1
trust: true
options:
admin-role: admin
storage:
fernet-keys: 5M
credential-keys: 5M
aodh:
charm: ../../aodh-k8s.charm
scale: 1
trust: true
resources:
aodh-api-image: kolla/ubuntu-binary-aodh-api:yoga
aodh-evaluator-image: kolla/ubuntu-binary-aodh-evaluator:yoga
aodh-notifier-image: kolla/ubuntu-binary-aodh-notifier:yoga
aodh-listener-image: kolla/ubuntu-binary-aodh-listener:yoga
aodh-expirer-image: kolla/ubuntu-binary-aodh-expirer:yoga
relations:
- - traefik:ingress
- keystone:ingress-internal
- - traefik-public:ingress
- keystone:ingress-public
- - mysql:database
- keystone:database
- - mysql:database
- aodh:database
- - rabbitmq:amqp
- aodh:amqp
- - keystone:identity-service
- aodh:identity-service
- - traefik:ingress
- aodh:ingress-internal
- - traefik-public:ingress
- aodh:ingress-public

1
tests/config.yaml Symbolic link
View File

@ -0,0 +1 @@
../config.yaml

View File

@ -0,0 +1,35 @@
#!/usr/bin/env python3
# Copyright 2023 Canonical Ltd.
# See LICENSE file for licensing details.
import asyncio
import logging
from pathlib import Path
import pytest
import yaml
from pytest_operator.plugin import OpsTest
logger = logging.getLogger(__name__)
METADATA = yaml.safe_load(Path("./metadata.yaml").read_text())
APP_NAME = METADATA["name"]
@pytest.mark.abort_on_fail
async def test_build_and_deploy(ops_test: OpsTest):
"""Build the charm-under-test and deploy it together with related charms.
Assert on the unit status before any relations/configurations take place.
"""
# Build and deploy charm from local source folder
charm = await ops_test.build_charm(".")
resources = {name: meta["upstream-source"] for name, meta in METADATA["resources"].items()}
# Deploy the charm and wait for active/idle status
await asyncio.gather(
ops_test.model.deploy(charm, resources=resources, application_name=APP_NAME),
ops_test.model.wait_for_idle(
apps=[APP_NAME], status="active", raise_on_blocked=True, timeout=1000
),
)

18
tests/tests.yaml Normal file
View File

@ -0,0 +1,18 @@
gate_bundles:
- smoke
smoke_bundles:
- smoke
configure:
- zaza.openstack.charm_tests.keystone.setup.add_tempest_roles
tests: []
tests_options:
trust:
- smoke
ignore_hard_deploy_errors:
- smoke
tempest:
default:
smoke: True
target_deploy_status: []

17
tests/unit/__init__.py Normal file
View File

@ -0,0 +1,17 @@
#!/usr/bin/env python3
# Copyright 2023 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for AODH operator."""

97
tests/unit/test_charm.py Normal file
View File

@ -0,0 +1,97 @@
#!/usr/bin/env python3
# Copyright 2021 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the aodh charm."""
import ops_sunbeam.test_utils as test_utils
import charm
class _AodhOperatorCharm(charm.AodhOperatorCharm):
def __init__(self, framework):
self.seen_events = []
super().__init__(framework)
def _log_event(self, event):
self.seen_events.append(type(event).__name__)
def configure_charm(self, event):
super().configure_charm(event)
self._log_event(event)
@property
def public_ingress_address(self):
return "aodh.juju"
class TestAodhOperatorCharm(test_utils.CharmTestCase):
"""Class for testing the aodh charm."""
PATCHES = []
def setUp(self):
"""Run setup for unit tests."""
super().setUp(charm, self.PATCHES)
self.harness = test_utils.get_harness(
_AodhOperatorCharm, container_calls=self.container_calls
)
# clean up events that were dynamically defined,
# otherwise we get issues because they'll be redefined,
# which is not allowed.
from charms.data_platform_libs.v0.database_requires import DatabaseEvents
for attr in (
"database_database_created",
"database_endpoints_changed",
"database_read_only_endpoints_changed",
):
try:
delattr(DatabaseEvents, attr)
except AttributeError:
pass
self.addCleanup(self.harness.cleanup)
test_utils.add_complete_ingress_relation(self.harness)
def test_pebble_ready_handler(self):
"""Test Pebble ready event is captured."""
self.harness.begin()
self.assertEqual(self.harness.charm.seen_events, [])
test_utils.set_all_pebbles_ready(self.harness)
self.assertEqual(len(self.harness.charm.seen_events), 5)
def test_all_relations(self):
"""Test all the charms relations."""
self.harness.begin_with_initial_hooks()
test_utils.add_db_relation_credentials(
self.harness, test_utils.add_base_db_relation(self.harness)
)
test_utils.add_identity_service_relation_response(
self.harness,
test_utils.add_base_identity_service_relation(self.harness),
)
self.harness.set_leader()
test_utils.set_all_pebbles_ready(self.harness)
app_setup_cmds = [["a2ensite", "wsgi-aodh-api"], ["aodh-dbsync"]]
for cmd in app_setup_cmds:
self.assertIn(cmd, self.container_calls.execute["aodh-api"])
for c in ["aodh-api", "aodh-evaluator", "aodh-notifier", "aodh-listener", "aodh-expirer"]:
self.check_file(c, "/etc/aodh/aodh.conf")

161
tox.ini Normal file
View File

@ -0,0 +1,161 @@
# Operator charm (with zaza): tox.ini
[tox]
skipsdist = True
envlist = pep8,py3
sitepackages = False
skip_missing_interpreters = False
minversion = 3.18.0
[vars]
src_path = {toxinidir}/src/
tst_path = {toxinidir}/tests/
lib_path = {toxinidir}/lib/
pyproject_toml = {toxinidir}/pyproject.toml
all_path = {[vars]src_path} {[vars]tst_path}
[testenv]
basepython = python3
setenv =
PYTHONPATH = {toxinidir}:{[vars]lib_path}:{[vars]src_path}
passenv =
HOME
PYTHONPATH
install_command =
pip install {opts} {packages}
commands = stestr run --slowest {posargs}
allowlist_externals =
git
charmcraft
{toxinidir}/fetch-libs.sh
{toxinidir}/rename.sh
deps =
-r{toxinidir}/test-requirements.txt
[testenv:fmt]
description = Apply coding style standards to code
deps =
black
isort
commands =
isort {[vars]all_path} --skip-glob {[vars]lib_path} --skip {toxinidir}/.tox
black --config {[vars]pyproject_toml} {[vars]all_path} --exclude {[vars]lib_path}
[testenv:build]
basepython = python3
deps =
commands =
charmcraft -v pack
{toxinidir}/rename.sh
[testenv:fetch]
basepython = python3
deps =
commands =
{toxinidir}/fetch-libs.sh
[testenv:py3]
basepython = python3
deps =
{[testenv]deps}
-r{toxinidir}/requirements.txt
[testenv:py38]
basepython = python3.8
deps = {[testenv:py3]deps}
[testenv:py39]
basepython = python3.9
deps = {[testenv:py3]deps}
[testenv:py310]
basepython = python3.10
deps = {[testenv:py3]deps}
[testenv:cover]
basepython = python3
deps = {[testenv:py3]deps}
setenv =
{[testenv]setenv}
PYTHON=coverage run
commands =
coverage erase
stestr run --slowest {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
coverage report
[testenv:pep8]
description = Alias for lint
deps = {[testenv:lint]deps}
commands = {[testenv:lint]commands}
[testenv:lint]
description = Check code against coding style standards
deps =
black
flake8<6 # Pin version until https://github.com/savoirfairelinux/flake8-copyright/issues/19 is merged
flake8-docstrings
flake8-copyright
flake8-builtins
pyproject-flake8
pep8-naming
isort
codespell
commands =
codespell {[vars]all_path}
# pflake8 wrapper supports config from pyproject.toml
pflake8 --exclude {[vars]lib_path} --config {toxinidir}/pyproject.toml {[vars]all_path}
isort --check-only --diff {[vars]all_path} --skip-glob {[vars]lib_path}
black --config {[vars]pyproject_toml} --check --diff {[vars]all_path} --exclude {[vars]lib_path}
[testenv:func-noop]
basepython = python3
deps =
git+https://github.com/openstack-charmers/zaza.git@libjuju-3.1#egg=zaza
git+https://github.com/openstack-charmers/zaza-openstack-tests.git#egg=zaza.openstack
git+https://opendev.org/openstack/tempest.git#egg=tempest
commands =
functest-run-suite --help
[testenv:func]
basepython = python3
deps = {[testenv:func-noop]deps}
commands =
functest-run-suite --keep-model
[testenv:func-smoke]
basepython = python3
deps = {[testenv:func-noop]deps}
setenv =
TEST_MODEL_SETTINGS = automatically-retry-hooks=true
TEST_MAX_RESOLVE_COUNT = 5
commands =
functest-run-suite --keep-model --smoke
[testenv:func-dev]
basepython = python3
deps = {[testenv:func-noop]deps}
commands =
functest-run-suite --keep-model --dev
[testenv:func-target]
basepython = python3
deps = {[testenv:func-noop]deps}
commands =
functest-run-suite --keep-model --bundle {posargs}
[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source =
.
omit =
.tox/*
tests/*
src/templates/*
[flake8]
ignore=E226,W504