Create default rbd pool

Create a default replicated or erasure-coded pool for iSCSI targets.
Omitting the pool name when running the create-target action will
result in the target being backed by the default pool.

Change-Id: I1c27fbbe281763ba5bdb369df92ca82b87f70891
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/415
Liam Young 2020-09-18 10:58:39 +00:00
parent 3a28374013
commit 226cd7e0ff
8 changed files with 370 additions and 26 deletions


@@ -58,7 +58,6 @@ create-target:
type: string
description: "The CHAPs password to be created for the client"
required:
- rbd-pool-name
- image-size
- image-name
- client-initiatorname
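With rbd-pool-name dropped from the required list, a create-target call only
needs the remaining parameters and the target falls back to the charm's default
pool. A hypothetical parameter set for illustration (only the keys come from the
schema above; every value below is made up):

    # Hypothetical create-target parameters; rbd-pool-name is now optional.
    params = {
        'image-size': '5G',
        'image-name': 'disk_1',
        'client-initiatorname': 'iqn.2020-09.example:client01',
        # 'rbd-pool-name' omitted: the target is backed by the default pool.
    }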


@@ -35,7 +35,7 @@ options:
192.168.0.0/24).
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.
rbd-metadata-pool:
gateway-metadata-pool:
type: string
default: iscsi
description: |
@@ -52,3 +52,131 @@ options:
order for this charm to function correctly, the privacy extension must be
disabled and a non-temporary address must be configured/available on
your network interface.
ceph-osd-replication-count:
type: int
default: 3
description: |
This value dictates the number of replicas Ceph must make of any
object it stores within the RBD pool. Of course, this only applies
if using Ceph as a backend store. Note that once the RBD pool has
been created, changing this value will not have any effect (although
it can be changed in Ceph by manually configuring your Ceph cluster).
ceph-pool-weight:
type: int
default: 5
description: |
Defines a relative weighting of the pool as a percentage of the total
amount of data in the Ceph cluster. This effectively weights the number
of placement groups for the pool created to be appropriately
proportioned to the amount of data expected. For example, if the images
created by this charm are expected to take up 20% of the overall data
in the Ceph cluster then this value would be specified as 20. Note -
it is important to choose an appropriate value for the pool weight as
this directly affects the number of placement groups which will be
created for the pool. The number of placement groups for a pool can
only be increased, never decreased - so it is important to identify the
percent of data that will likely reside in the pool.
rbd-pool-name:
default:
type: string
description: |
Optionally specify an existing pool that the gateway should map to.
pool-type:
type: string
default: replicated
description: |
Ceph pool type to use for storage - valid values include replicated
and erasure-coded.
ec-profile-name:
type: string
default:
description: |
Name for the EC profile to be created for the EC pools. If not defined,
a profile name will be generated based on the name of the pool used by
the application.
ec-rbd-metadata-pool:
type: string
default:
description: |
Name of the metadata pool to be created (for RBD use-cases). If not
defined, a metadata pool name will be generated based on the name of
the data pool used by the application. The metadata pool is always
replicated, not erasure coded.
ec-profile-k:
type: int
default: 1
description: |
Number of data chunks that will be used for the EC data pool. K+M factors
should never be greater than the number of available zones (or hosts)
for balancing.
ec-profile-m:
type: int
default: 2
description: |
Number of coding chunks that will be used for the EC data pool. K+M factors
should never be greater than the number of available zones (or hosts)
for balancing.
ec-profile-locality:
type: int
default:
description: |
(lrc plugin - l) Group the coding and data chunks into sets of size l.
For instance, for k=4 and m=2, when l=3, two groups of three are created.
Each set can be recovered without reading chunks from another set. Note
that using the lrc plugin does incur more raw storage usage than isa or
jerasure in order to reduce the cost of recovery operations.
ec-profile-crush-locality:
type: string
default:
description: |
(lrc plugin) The type of the crush bucket in which each set of chunks
defined by l will be stored. For instance, if it is set to rack, each
group of l chunks will be placed in a different rack. It is used to
create a CRUSH rule step such as step choose rack. If it is not set,
no such grouping is done.
ec-profile-durability-estimator:
type: int
default:
description: |
(shec plugin - c) The number of parity chunks each of which includes
each data chunk in its calculation range. The number is used as a
durability estimator. For instance, if c=2, 2 OSDs can be down
without losing data.
ec-profile-helper-chunks:
type: int
default:
description: |
(clay plugin - d) Number of OSDs requested to send data during
recovery of a single chunk. d needs to be chosen such that
k+1 <= d <= k+m-1. The larger the d, the better the savings.
ec-profile-scalar-mds:
type: string
default:
description: |
(clay plugin) Specifies the plugin that is used as a building
block in the layered construction. It can be one of jerasure,
isa, shec (defaults to jerasure).
ec-profile-plugin:
type: string
default: jerasure
description: |
EC plugin to use for this application's pool. The following plugins
are acceptable - jerasure, lrc, isa, shec, clay.
ec-profile-technique:
type: string
default:
description: |
EC profile technique used for this application's pool - will be
validated based on the plugin configured via ec-profile-plugin.
Supported techniques are reed_sol_van, reed_sol_r6_op, cauchy_orig,
cauchy_good and liber8tion for jerasure; reed_sol_van and cauchy for
isa; and single and multiple for shec.
ec-profile-device-class:
type: string
default:
description: |
Device class from CRUSH map to use for placement groups for
erasure profile - valid values: ssd, hdd or nvme (or leave
unset to not use a device class).
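The EC profile options above interact: k and m together should not exceed the
number of available zones or hosts, the lrc locality l splits the k+m chunks
into groups of size l, and the clay helper-chunk count d must satisfy
k+1 <= d <= k+m-1. A minimal standalone sketch of those checks, not part of
this commit (the requirement that l divide k+m evenly is an assumption drawn
from the grouping example above):

    def check_ec_profile(k, m, plugin='jerasure', locality=None,
                         helper_chunks=None, zones=None):
        """Return a list of problems with a proposed erasure-coding profile."""
        problems = []
        if zones is not None and k + m > zones:
            # K+M should never exceed the number of zones/hosts for balancing.
            problems.append('k+m={} exceeds zones ({})'.format(k + m, zones))
        if plugin == 'lrc' and locality:
            # l groups the k+m chunks into sets of size l
            # (e.g. k=4, m=2, l=3 gives two groups of three).
            if (k + m) % locality:
                problems.append('l={} does not divide k+m={}'.format(
                    locality, k + m))
        if plugin == 'clay' and helper_chunks is not None:
            # clay requires k+1 <= d <= k+m-1.
            if not (k + 1 <= helper_chunks <= k + m - 1):
                problems.append('d={} outside [{}, {}]'.format(
                    helper_chunks, k + 1, k + m - 1))
        return problems

    print(check_ec_profile(4, 2, plugin='lrc', locality=3, zones=6))  # -> []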


@@ -237,11 +237,100 @@ class CephISCSIGatewayCharmBase(ops_openstack.core.OSBaseCharm):
password = ''.join(secrets.choice(alphabet) for i in range(8))
self.peers.set_admin_password(password)
def config_get(self, key):
"""Retrieve config option.
:returns: Value of the corresponding config option or None.
:rtype: Any
"""
return self.model.config.get(key)
@property
def data_pool_name(self):
"""The name of the default rbd data pool to be used by targets.
:returns: Data pool name.
:rtype: str
"""
return self.config_get('rbd-pool-name') or self.app.name
@property
def metadata_pool_name(self):
"""The name of the default rbd metadata pool to be used by targets.
:returns: Metadata pool name.
:rtype: str
"""
return (self.config_get('ec-rbd-metadata-pool') or
"{}-metadata".format(self.app.name))
def request_ceph_pool(self, event):
"""Request pools from Ceph cluster."""
logging.info("Requesting replicated pool")
self.ceph_client.create_replicated_pool(
self.model.config['rbd-metadata-pool'])
self.config_get('gateway-metadata-pool'))
weight = self.config_get('ceph-pool-weight')
replicas = self.config_get('ceph-osd-replication-count')
if self.config_get('pool-type') == 'erasure-coded':
# General EC plugin config
plugin = self.config_get('ec-profile-plugin')
technique = self.config_get('ec-profile-technique')
device_class = self.config_get('ec-profile-device-class')
bdm_k = self.config_get('ec-profile-k')
bdm_m = self.config_get('ec-profile-m')
# LRC plugin config
bdm_l = self.config_get('ec-profile-locality')
crush_locality = self.config_get('ec-profile-crush-locality')
# SHEC plugin config
bdm_c = self.config_get('ec-profile-durability-estimator')
# CLAY plugin config
bdm_d = self.config_get('ec-profile-helper-chunks')
scalar_mds = self.config_get('ec-profile-scalar-mds')
# Profile name
profile_name = (
self.config_get('ec-profile-name') or
"{}-profile".format(self.app.name)
)
# Metadata sizing is approximately 1% of overall data weight
# but is in effect driven by the number of rbd's rather than
# their size - so it can be very lightweight.
metadata_weight = weight * 0.01
# Resize data pool weight to accommodate metadata weight
weight = weight - metadata_weight
# Create erasure profile
self.ceph_client.create_erasure_profile(
name=profile_name,
k=bdm_k, m=bdm_m,
lrc_locality=bdm_l,
lrc_crush_locality=crush_locality,
shec_durability_estimator=bdm_c,
clay_helper_chunks=bdm_d,
clay_scalar_mds=scalar_mds,
device_class=device_class,
erasure_type=plugin,
erasure_technique=technique
)
# Create EC data pool
self.ceph_client.create_erasure_pool(
name=self.data_pool_name,
erasure_profile=profile_name,
weight=weight,
allow_ec_overwrites=True
)
self.ceph_client.create_replicated_pool(
name=self.metadata_pool_name,
weight=metadata_weight
)
else:
self.ceph_client.create_replicated_pool(
name=self.data_pool_name,
replicas=replicas,
weight=weight)
logging.info("Requesting permissions")
self.ceph_client.request_ceph_permissions(
'ceph-iscsi',
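As a worked example of the metadata sizing above (a standalone sketch, not
part of the commit), with the default ceph-pool-weight of 5 the split comes
out as:

    weight = 5                          # ceph-pool-weight default from config.yaml
    metadata_weight = weight * 0.01     # 0.05 -- metadata pool gets ~1% of the weight
    weight = weight - metadata_weight   # 4.95 -- remainder goes to the EC data pool
    print(metadata_weight, weight)      # 0.05 4.95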
@@ -358,6 +447,22 @@ class CephISCSIGatewayCharmBase(ops_openstack.core.OSBaseCharm):
else:
event.fail("Action must be run on leader")
def calculate_target_pools(self, event):
if event.params['ec-rbd-metadata-pool']:
ec_rbd_metadata_pool = event.params['ec-rbd-metadata-pool']
rbd_pool_name = event.params['rbd-pool-name']
elif event.params['rbd-pool-name']:
ec_rbd_metadata_pool = None
rbd_pool_name = event.params['rbd-pool-name']
# Action did not specify pools, so derive them from charm config.
elif self.model.config['pool-type'] == 'erasure-coded':
ec_rbd_metadata_pool = self.metadata_pool_name
rbd_pool_name = self.data_pool_name
else:
ec_rbd_metadata_pool = None
rbd_pool_name = self.data_pool_name
return rbd_pool_name, ec_rbd_metadata_pool
def on_create_target_action(self, event):
"""Create an iSCSI target."""
gw_client = gwcli_client.GatewayClient()
@@ -365,7 +470,9 @@ class CephISCSIGatewayCharmBase(ops_openstack.core.OSBaseCharm):
gateway_units = event.params.get(
'gateway-units',
[u for u in self.peers.ready_peer_details.keys()])
if event.params['ec-rbd-metadata-pool']:
rbd_pool_name, ec_rbd_metadata_pool = self.calculate_target_pools(
event)
if ec_rbd_metadata_pool:
# When using erasure-coded pools the image needs to be pre-created
# as the gwcli does not currently handle the creation.
cmd = [
@@ -375,14 +482,14 @@ class CephISCSIGatewayCharmBase(ops_openstack.core.OSBaseCharm):
'create',
'--size', event.params['image-size'],
'{}/{}'.format(
event.params['ec-rbd-metadata-pool'],
ec_rbd_metadata_pool,
event.params['image-name']),
'--data-pool', event.params['rbd-pool-name']]
'--data-pool', rbd_pool_name]
logging.info(cmd)
subprocess.check_call(cmd)
target_pool = event.params['ec-rbd-metadata-pool']
target_pool = ec_rbd_metadata_pool
else:
target_pool = event.params['rbd-pool-name']
target_pool = rbd_pool_name
gw_client.create_target(target)
for gw_unit, gw_config in self.peers.ready_peer_details.items():
added_gateways = []
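The selection order implemented by calculate_target_pools is: an explicit
ec-rbd-metadata-pool action parameter wins, then an explicit rbd-pool-name,
then the charm-level defaults derived from pool-type. A loose standalone
restatement for illustration, not part of the commit (the pool names in the
usage line are hypothetical):

    def pick_pools(params, pool_type, default_data, default_metadata):
        """Mirror the calculate_target_pools priority, for illustration only."""
        if params.get('ec-rbd-metadata-pool'):
            return params.get('rbd-pool-name'), params['ec-rbd-metadata-pool']
        if params.get('rbd-pool-name'):
            return params['rbd-pool-name'], None
        if pool_type == 'erasure-coded':
            return default_data, default_metadata
        return default_data, None

    # Omitting the pool name falls through to the application defaults:
    print(pick_pools({}, 'replicated', 'ceph-iscsi', 'ceph-iscsi-metadata'))
    # -> ('ceph-iscsi', None)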


@@ -2,7 +2,7 @@
logger_level = DEBUG
cluster_name = ceph
cluster_client_name = client.ceph-iscsi
pool = {{ options.rbd_metadata_pool }}
pool = {{ options.gateway_metadata_pool }}
gateway_keyring = ceph.client.ceph-iscsi.keyring
ceph_config_dir = /etc/ceph/iscsi


@@ -0,0 +1,89 @@
local_overlay_enabled: False
series: focal
machines:
'0':
'1':
'2':
'3':
'4':
'5':
'6':
'7':
'8':
constraints: mem=3072M
'9':
constraints: mem=3072M
'10':
constraints: mem=3072M
'11':
'12':
'13':
'14':
'15':
applications:
ubuntu:
charm: cs:ubuntu
num_units: 3
to:
- '7'
- '14'
- '15'
ceph-iscsi:
charm: ../../ceph-iscsi.charm
num_units: 2
options:
gateway-metadata-pool: tmbtil
pool-type: erasure-coded
ec-profile-k: 4
ec-profile-m: 2
to:
- '0'
- '1'
ceph-osd:
charm: cs:~openstack-charmers-next/ceph-osd
num_units: 6
storage:
osd-devices: 'cinder,10G'
options:
osd-devices: '/dev/test-non-existent'
to:
- '0'
- '1'
- '2'
- '11'
- '12'
- '13'
ceph-mon:
charm: cs:~openstack-charmers-next/ceph-mon
num_units: 3
options:
monitor-count: '3'
to:
- '3'
- '4'
- '5'
vault:
num_units: 1
charm: cs:~openstack-charmers-next/vault
to:
- '6'
mysql-innodb-cluster:
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
num_units: 3
to:
- '8'
- '9'
- '10'
vault-mysql-router:
charm: cs:~openstack-charmers-next/mysql-router
relations:
- - 'ceph-mon:client'
- 'ceph-iscsi:ceph-client'
- - 'vault:certificates'
- 'ceph-iscsi:certificates'
- - 'ceph-osd:mon'
- 'ceph-mon:osd'
- - 'vault:shared-db'
- 'vault-mysql-router:shared-db'
- - 'vault-mysql-router:db-router'
- 'mysql-innodb-cluster:db-router'
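A quick sanity check on this bundle (illustrative only, not part of the
commit): the erasure-coding profile of k=4 and m=2 needs at least six failure
domains, which matches the six ceph-osd units deployed above.

    # k + m from the ceph-iscsi options vs. the ceph-osd unit count in the bundle.
    k, m, ceph_osd_units = 4, 2, 6
    assert k + m <= ceph_osd_units   # 6 <= 6, so the profile can be balanced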


@@ -19,18 +19,20 @@ machines:
'12':
'13':
'14':
'15':
applications:
ubuntu:
charm: cs:ubuntu
num_units: 2
num_units: 3
to:
- '7'
- '14'
- '15'
ceph-iscsi:
charm: ../../ceph-iscsi.charm
num_units: 2
options:
rbd-metadata-pool: tmbtil
gateway-metadata-pool: tmbtil
to:
- '0'
- '1'


@@ -1,5 +1,6 @@
charm_name: ceph-iscsi
gate_bundles:
- focal-ec
- focal
smoke_bundles:
- focal


@@ -338,7 +338,7 @@ class TestCephISCSIGatewayCharmBase(CharmTestCase):
self.maxDiff = None
rel_id = self.harness.add_relation('ceph-client', 'ceph-mon')
self.harness.update_config(
key_values={'rbd-metadata-pool': 'iscsi-pool'})
key_values={'gateway-metadata-pool': 'iscsi-pool'})
self.harness.begin()
self.harness.add_relation_unit(
rel_id,
@@ -366,25 +366,43 @@ class TestCephISCSIGatewayCharmBase(CharmTestCase):
'compression-mode': None,
'compression-required-ratio': None,
'app-name': None,
'op': 'create-pool',
'name': 'iscsi-pool',
'replicas': 3,
'pg_num': None,
'weight': None,
'group': None,
'group-namespace': None,
'app-name': None,
'max-bytes': None,
'max-objects': None,
'name': 'iscsi-pool',
'max-objects': None},
{
'compression-algorithm': None,
'compression-max-blob-size': None,
'compression-max-blob-size-hdd': None,
'compression-max-blob-size-ssd': None,
'compression-min-blob-size': None,
'compression-min-blob-size-hdd': None,
'compression-min-blob-size-ssd': None,
'compression-mode': None,
'compression-required-ratio': None,
'op': 'create-pool',
'name': 'ceph-iscsi',
'replicas': None,
'pg_num': None,
'replicas': 3,
'weight': None},
{
'client': 'ceph-iscsi',
'op': 'set-key-permissions',
'permissions': [
'osd',
'allow *',
'mon',
'allow *',
'mgr',
'allow r']}])
'weight': None,
'group': None,
'group-namespace': None,
'app-name': None,
'max-bytes': None,
'max-objects': None},
{
'op': 'set-key-permissions',
'permissions': [
'osd', 'allow *',
'mon', 'allow *',
'mgr', 'allow r'],
'client': 'ceph-iscsi'}])
def test_on_pools_available(self):
self.os.path.exists.return_value = False