# sunbeam-charms/charms/glance-k8s/charmcraft.yaml
#
# Commit e911599abe (2024-10-08): Migrate to unified charmcraft.yaml
# Charmcraft 3 moves towards a single charmcraft.yaml; this is needed for
# the 24.04 migration.
#
# Change-Id: I743712752aaf37bf68730b64bd6c147dfad370e2
# Signed-off-by: Guillaume Boutry <guillaume.boutry@canonical.com>

type: charm
name: glance-k8s
summary: OpenStack Image Registry and Delivery Service
description: |
  The Glance project provides an image registration and discovery service
  and an image delivery service. These services are used in conjunction
  by Nova to deliver images from object stores, such as OpenStack's Swift
  service, to Nova's compute nodes.
assumes:
  - k8s-api
  - juju >= 3.1
links:
  source:
    - https://opendev.org/openstack/charm-glance-k8s
  issues:
    - https://bugs.launchpad.net/charm-glance-k8s

base: ubuntu@22.04
platforms:
  amd64:

config:
  options:
    debug:
      default: false
      description: Enable debug logging.
      type: boolean
    region:
      default: RegionOne
      description: Name of the OpenStack region.
      type: string
    ceph-osd-replication-count:
      default: 3
      type: int
      description: |
        This value dictates the number of replicas ceph must make of any
        object it stores within the rbd pool used by this charm. Of course,
        this only applies if using Ceph as a backend store. Note that once
        the rbd pool has been created, changing this value will not have any
        effect (although it can be changed in ceph by manually configuring
        your ceph cluster).
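    # A hedged usage sketch (assuming the application is deployed under the
    # name "glance"): with three replicas, every image object in the backing
    # pool is stored on three separate OSDs:
    #   juju config glance ceph-osd-replication-count=3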
    ceph-pool-weight:
      type: int
      default: 40
      description: |
        Defines a relative weighting of the pool as a percentage of the total
        amount of data in the Ceph cluster. This effectively weights the number
        of placement groups for the pool created to be appropriately portioned
        to the amount of data expected. For example, if the ephemeral volumes
        for the OpenStack compute instances are expected to take up 20% of the
        overall configuration then this value would be specified as 20. Note -
        it is important to choose an appropriate value for the pool weight as
        this directly affects the number of placement groups which will be
        created for the pool. The number of placement groups for a pool can
        only be increased, never decreased - so it is important to identify the
        percent of data that will likely reside in the pool.
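    # Worked example (a sketch, assuming the application name "glance"): if
    # images are expected to consume roughly 20% of the cluster's capacity,
    # weight the pool to 20 so its placement-group count is sized to match:
    #   juju config glance ceph-pool-weight=20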
    volume-backend-name:
      default: null
      type: string
      description: |
        Volume backend name for the backend. The default value is the
        application name in the Juju model, e.g. "cinder-ceph-mybackend"
        if it's deployed as `juju deploy cinder-ceph cinder-ceph-mybackend`.
        A common backend name can be set to multiple backends with the
        same characters so that those can be treated as a single virtual
        backend associated with a single volume type.
    backend-availability-zone:
      default: null
      type: string
      description: |
        Availability zone name of this volume backend. If set, it will
        override the default availability zone. Supported for Pike or
        newer releases.
    restrict-ceph-pools:
      default: false
      type: boolean
      description: |
        Optionally restrict Ceph key permissions to access pools as required.
    rbd-pool-name:
      default: null
      type: string
      description: |
        Optionally specify an existing rbd pool that the charm should map to.
    rbd-flatten-volume-from-snapshot:
      default: false
      type: boolean
      description: |
        Flatten volumes created from snapshots to remove dependency from
        volume to snapshot. Supported on Queens+.
    rbd-mirroring-mode:
      type: string
      default: pool
      description: |
        The RBD mirroring mode used for the Ceph pool. This option is only used
        with 'replicated' pool type, as it's not supported for 'erasure-coded'
        pool type - valid values: 'pool' and 'image'.
    pool-type:
      type: string
      default: replicated
      description: |
        Ceph pool type to use for storage - valid values include `replicated`
        and `erasure-coded`.
    ec-profile-name:
      type: string
      default: null
      description: |
        Name for the EC profile to be created for the EC pools. If not defined
        a profile name will be generated based on the name of the pool used by
        the application.
    ec-rbd-metadata-pool:
      type: string
      default: null
      description: |
        Name of the metadata pool to be created (for RBD use-cases). If not
        defined a metadata pool name will be generated based on the name of
        the data pool used by the application. The metadata pool is always
        replicated, not erasure coded.
    ec-profile-k:
      type: int
      default: 1
      description: |
        Number of data chunks that will be used for EC data pool. K+M factors
        should never be greater than the number of available zones (or hosts)
        for balancing.
    ec-profile-m:
      type: int
      default: 2
      description: |
        Number of coding chunks that will be used for EC data pool. K+M factors
        should never be greater than the number of available zones (or hosts)
        for balancing.
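    # Worked example (a sketch): with k=4 and m=2 each object is split into
    # 4 data chunks plus 2 coding chunks, so the pool tolerates the loss of
    # any 2 chunks but needs at least 6 zones (or hosts) to place them on.
    # Assuming the application name "glance":
    #   juju config glance pool-type=erasure-coded ec-profile-k=4 ec-profile-m=2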
    ec-profile-locality:
      type: int
      default: null
      description: |
        (lrc plugin - l) Group the coding and data chunks into sets of size l.
        For instance, for k=4 and m=2, when l=3 two groups of three are created.
        Each set can be recovered without reading chunks from another set. Note
        that using the lrc plugin does incur more raw storage usage than isa or
        jerasure in order to reduce the cost of recovery operations.
    ec-profile-crush-locality:
      type: string
      default: null
      description: |
        (lrc plugin) The type of the crush bucket in which each set of chunks
        defined by l will be stored. For instance, if it is set to rack, each
        group of l chunks will be placed in a different rack. It is used to
        create a CRUSH rule step such as step choose rack. If it is not set,
        no such grouping is done.
    ec-profile-durability-estimator:
      type: int
      default: null
      description: |
        (shec plugin - c) The number of parity chunks each of which includes
        each data chunk in its calculation range. The number is used as a
        durability estimator. For instance, if c=2, 2 OSDs can be down
        without losing data.
    ec-profile-helper-chunks:
      type: int
      default: null
      description: |
        (clay plugin - d) Number of OSDs requested to send data during
        recovery of a single chunk. d needs to be chosen such that
        k+1 <= d <= k+m-1. The larger d is, the better the savings.
    ec-profile-scalar-mds:
      type: string
      default: null
      description: |
        (clay plugin) Specifies the plugin that is used as a building
        block in the layered construction. It can be one of jerasure,
        isa or shec (defaults to jerasure).
    ec-profile-plugin:
      type: string
      default: jerasure
      description: |
        EC plugin to use for this application's pool. The following plugins
        are acceptable: jerasure, lrc, isa, shec, clay.
    ec-profile-technique:
      type: string
      default: null
      description: |
        EC profile technique used for this application's pool - it will be
        validated against the plugin configured via ec-profile-plugin.
        Supported techniques are `reed_sol_van`, `reed_sol_r6_op`,
        `cauchy_orig`, `cauchy_good`, `liber8tion` for jerasure,
        `reed_sol_van`, `cauchy` for isa and `single`, `multiple`
        for shec.
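    # A hedged sketch (assuming the application name "glance"): the technique
    # must match the chosen plugin, e.g. reed_sol_van is a jerasure technique:
    #   juju config glance ec-profile-plugin=jerasure ec-profile-technique=reed_sol_van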
    ec-profile-device-class:
      type: string
      default: null
      description: |
        Device class from CRUSH map to use for placement groups for
        erasure profile - valid values: ssd, hdd or nvme (or leave
        unset to not use a device class).
    bluestore-compression-algorithm:
      type: string
      default: null
      description: |
        Compressor to use (if any) for pools requested by this charm.
        .
        NOTE: The ceph-osd charm sets a global default for this value (defaults
        to 'lz4' unless configured by the end user) which will be used unless
        specified for individual pools.
    bluestore-compression-mode:
      type: string
      default: null
      description: |
        Policy for using compression on pools requested by this charm.
        .
        'none' means never use compression.
        'passive' means use compression when clients hint that data is
        compressible.
        'aggressive' means use compression unless clients hint that
        data is not compressible.
        'force' means use compression under all circumstances even if the clients
        hint that the data is not compressible.
    bluestore-compression-required-ratio:
      type: float
      default: null
      description: |
        The ratio of the size of the data chunk after compression relative to the
        original size must be at least this small in order to store the
        compressed version on pools requested by this charm.
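    # Worked example (a sketch, assuming the application name "glance"): with
    # a required ratio of 0.875, a 64 KiB chunk is only stored compressed if
    # it shrinks to at most 56 KiB (64 * 0.875); otherwise the original is
    # kept:
    #   juju config glance bluestore-compression-mode=aggressive
    #   juju config glance bluestore-compression-required-ratio=0.875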
    bluestore-compression-min-blob-size:
      type: int
      default: null
      description: |
        Chunks smaller than this are never compressed on pools requested by
        this charm.
    bluestore-compression-min-blob-size-hdd:
      type: int
      default: null
      description: |
        Value of bluestore compression min blob size for rotational media on
        pools requested by this charm.
    bluestore-compression-min-blob-size-ssd:
      type: int
      default: null
      description: |
        Value of bluestore compression min blob size for solid state media on
        pools requested by this charm.
    bluestore-compression-max-blob-size:
      type: int
      default: null
      description: |
        Chunks larger than this are broken into smaller blobs sizing bluestore
        compression max blob size before being compressed on pools requested by
        this charm.
    bluestore-compression-max-blob-size-hdd:
      type: int
      default: null
      description: |
        Value of bluestore compression max blob size for rotational media on
        pools requested by this charm.
    bluestore-compression-max-blob-size-ssd:
      type: int
      default: null
      description: |
        Value of bluestore compression max blob size for solid state media on
        pools requested by this charm.
    enable-telemetry-notifications:
      type: boolean
      default: false
      description: Enable notifications to send to telemetry.
    image-size-cap:
      type: string
      default: 5GB
      description: |
        Maximum size of image a user can upload. Defaults to 5GB
        (5368709120 bytes). Example values: 500M, 500MB, 5G, 5TB.
        Valid units: K, KB, M, MB, G, GB, T, TB, P, PB. If no units provided,
        bytes are assumed.
        .
        WARNING: this value should only be increased after careful consideration
        and must be set to a value under 8EB (9223372036854775808 bytes).
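    # A hedged sketch (assuming the application name "glance"): raising the
    # cap to 10 GiB, i.e. 10737418240 bytes:
    #   juju config glance image-size-cap=10G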
    image-conversion:
      type: boolean
      default: false
      description: |
        Enable conversion of all images to raw format during image import.
        This only works on imported images (for example using 'openstack
        image create --import'). It does not work on regular image uploads
        (like 'openstack image create').
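    # A hedged sketch of the import path this option applies to (the image
    # name and source file are illustrative):
    #   openstack image create --disk-format qcow2 --file jammy.qcow2 \
    #     --import jammy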
actions:
  describe-status:
    description: |
      See an expanded view of the compound status.
      For a neat human readable summary:
      juju run glance/0 describe-status --format=json | jq -r '.[].results.output'
containers:
  glance-api:
    resource: glance-api-image
    mounts:
      - storage: local-repository
        location: /var/lib/glance/images
resources:
  glance-api-image:
    type: oci-image
    description: OCI image for OpenStack Glance
    upstream-source: ghcr.io/canonical/glance-api:2024.1
storage:
  local-repository:
    type: filesystem
    minimum-size: 10GiB
    description: |
      A local filesystem storage repository that glance images are saved to.
      Note, this must be shared storage in order to support a highly
      available glance image registry.
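# A hedged deployment sketch (storage size is illustrative): the storage
# constraint must resolve to shared storage for a highly available registry:
#   juju deploy glance-k8s --storage local-repository=20G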
requires:
  database:
    interface: mysql_client
    limit: 1
  ingress-internal:
    interface: ingress
    optional: true
    limit: 1
  ingress-public:
    interface: ingress
    limit: 1
  identity-service:
    interface: keystone
    limit: 1
  amqp:
    interface: rabbitmq
    optional: true
  ceph:
    interface: ceph-client
    optional: true
  receive-ca-cert:
    interface: certificate_transfer
    optional: true
  logging:
    interface: loki_push_api
    optional: true
  tracing:
    interface: tracing
    optional: true
    limit: 1
provides:
  image-service:
    interface: glance
peers:
  peers:
    interface: glance-peer
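# A hedged integration sketch (peer application names are illustrative; only
# the required endpoints are shown):
#   juju integrate glance-k8s:database mysql-k8s
#   juju integrate glance-k8s:identity-service keystone-k8s
#   juju integrate glance-k8s:ingress-public traefik-k8s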
parts:
  update-certificates:
    plugin: nil
    override-build: |
      apt update
      apt install -y ca-certificates
      update-ca-certificates

  charm:
    after:
      - update-certificates
    build-packages:
      - git
      - libffi-dev
      - libssl-dev
      - rustc
      - cargo
      - pkg-config
    charm-binary-python-packages:
      - cryptography
      - jsonschema
      - pydantic
      - jinja2
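# A hedged build sketch: with Charmcraft 3, the unified file above is all
# charmcraft needs to build the charm from this directory:
#   charmcraft pack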