Curate the cinder-lvm charm

This patchset does the following things:
- Implements the unit tests.
- Standardizes the charm, adding the needed files and moving
  pre-existing ones into the correct directories.
- Implements the bundles for functional tests.
- Documents the charm's functionality.
- Updates the requirements for tests and other targets.

Change-Id: I23e2882486a96c0c07cd8393745ffa2b244191a1
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/626
Luciano Lo Giudice 2021-09-13 16:59:45 -03:00
parent 6a7fed56f3
commit 14fa102fe9
34 changed files with 1248 additions and 263 deletions

.gitignore vendored

@@ -6,3 +6,4 @@ interfaces
.stestr
*__pycache__*
*.pyc
*.swp

.gitreview Normal file

@@ -0,0 +1,5 @@
[gerrit]
host=review.opendev.org
port=29418
project=openstack/charm-cinder-lvm.git
defaultbranch=master

.zuul.yaml Normal file

@@ -0,0 +1,5 @@
- project:
templates:
- python35-charm-jobs
- openstack-python3-ussuri-jobs
- openstack-cover-jobs

README.md Symbolic link

@@ -0,0 +1 @@
src/README.md

copyright Normal file

@@ -0,0 +1,16 @@
Format: http://dep.debian.net/deps/dep5/
Files: *
Copyright: Copyright 2021, Canonical Ltd., All Rights Reserved.
License: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

metadata.yaml Symbolic link

@@ -0,0 +1 @@
src/metadata.yaml

osci.yaml Normal file

@@ -0,0 +1,22 @@
- project:
templates:
- charm-unit-jobs
check:
jobs:
- impish-xena:
voting: false
- hirsute-wallaby
- groovy-victoria
- focal-xena:
voting: false
- focal-wallaby
- focal-victoria
- focal-ussuri
- bionic-ussuri
- bionic-queens
- bionic-stein
- bionic-train
- bionic-rocky
vars:
needs_charm_build: true
charm_build_name: cinder-lvm

pip.sh Executable file

@@ -0,0 +1,18 @@
#!/usr/bin/env bash
#
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos. See the 'global' dir contents for available
# choices of tox.ini for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
#
# setuptools 58.0 dropped the support for use_2to3=true which is needed to
# install blessings (an indirect dependency of charm-tools).
#
# More details on the behavior of tox and virtualenv creation can be found at
# https://github.com/tox-dev/tox/issues/448
#
# This script is a wrapper to force the use of the pinned versions early in the
# process when the virtualenv was created and upgraded before installing the
# dependencies declared in the target.
pip install 'pip<20.3' 'setuptools<50.0.0'
pip "$@"


@@ -1,7 +1,22 @@
# This file is managed centrally. If you find the need to modify this as a
# one-off, please don't. Instead, consult #openstack-charms and ask about
# requirements management in charms via bot-control. Thank you.
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos. See the 'global' dir contents for available
# choices of *requirements.txt files for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
#
# NOTE(lourot): This might look like a duplication of test-requirements.txt but
# some tox targets use only test-requirements.txt whereas charm-build uses only
# requirements.txt
setuptools<50.0.0 # https://github.com/pypa/setuptools/commit/04e3df22df840c6bb244e9b27bc56750c44b7c85
# Build requirements
charm-tools>=2.4.4
charm-tools==2.8.3
simplejson
# Newer versions use keywords that didn't exist in python 3.5 yet (e.g.
# "ModuleNotFoundError")
# NOTE(lourot): This might look like a duplication of test-requirements.txt but
# some tox targets use only test-requirements.txt whereas charm-build uses only
# requirements.txt
importlib-metadata<3.0.0; python_version < '3.6'
importlib-resources<3.0.0; python_version < '3.6'


@@ -1,140 +1,55 @@
lvm Storage Backend for Cinder
-------------------------------
# Overview
Overview
========
The cinder-lvm charm provides an LVM backend for Cinder, the core OpenStack block storage (volume) service. It is a subordinate charm that is used in conjunction with the cinder charm.
This charm provides an LVM storage backend for use with the Cinder
charm. It is intended to be used as a subordinate to the main cinder charm.
> **Note**: The cinder-lvm charm is supported starting with OpenStack Queens.
To use:
# Usage
juju deploy cinder
juju deploy cinder-lvm
juju add-relation cinder-lvm cinder
## Configuration
It will prepare (format) the devices and then talk to the main cinder
charm to pass on configuration, which the cinder charm will inject into its
own configuration file. After that, it does nothing except watch for
config changes and reconfigure cinder.
This section covers common and/or important configuration options. See file `config.yaml` for the full list of options, along with their descriptions and default values. See the [Juju documentation][juju-docs-config-apps] for details on configuring applications.
The configuration is passed over to cinder using a juju relation.
Although cinder has a few different services, it is the cinder-volume
service that will make use of the configuration added.
### `allocation-type`
Note: The devices must be local to the cinder-volume service, so you will
probably want to deploy this application on the compute hosts, since the
cinder-volume service running on the controller nodes will not
have access to any physical device (it is normally deployed in LXD).
Refers to the volume provisioning type. Values can be 'thin', 'thick', 'auto' (resolves to 'thin' if supported), and 'default' (resolves to 'thick'). The default value is 'default'.
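The value resolution described for `allocation-type` can be sketched as follows. This is a hypothetical helper written for illustration, not the charm's actual code; the `thin_supported` flag stands in for whatever runtime capability check the backend performs:

```python
# Hypothetical sketch of how the 'allocation-type' option values described
# above might resolve to a concrete provisioning mode. Not the charm's code.
def resolve_allocation_type(value, thin_supported=True):
    """Map an 'allocation-type' option value to 'thin' or 'thick'."""
    if value == 'auto':
        # 'auto' resolves to 'thin' when thin provisioning is supported
        return 'thin' if thin_supported else 'thick'
    if value == 'default':
        # 'default' resolves to 'thick'
        return 'thick'
    if value in ('thin', 'thick'):
        return value
    raise ValueError("unknown allocation-type: {}".format(value))

print(resolve_allocation_type('auto'))     # 'thin' (when supported)
print(resolve_allocation_type('default'))  # 'thick'
```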
A more complete example, using a bundle, would be the following.
### `block-device`
Your normal cinder deployment on the controllers, with all services running:
Specifies a space-separated list of devices to use for LVM physical volumes. This is a mandatory option. Value types include:
hacluster-cinder:
charm: cs:hacluster
cinder:
charm: cs:cinder
num_units: 3
constraints: *combi-access-constr
bindings:
"": *oam-space
public: *public-space
admin: *admin-space
internal: *internal-space
shared-db: *internal-space
options:
worker-multiplier: *worker-multiplier
openstack-origin: *openstack-origin
block-device: None
glance-api-version: 2
vip: *cinder-vip
use-internal-endpoints: True
region: *openstack-region
to:
- lxd:1003
- lxd:1004
- lxd:1005
* block devices (e.g. 'sdb' or '/dev/sdb')
* a path to a local file with the size appended after a pipe (e.g. '/path/to/file|10G'). The file will be created if necessary and be mapped to a loopback device. This is intended for development and testing purposes. The default size is 5G.
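A simplified sketch of how such `block-device` entries might be parsed, loosely modeled on the charm's `_parse_block_device` helper (the return convention here, `None` for real devices versus a size string for file-backed entries, is an illustrative simplification):

```python
# Simplified sketch of parsing one 'block-device' entry as described above;
# loosely modeled on the charm's _parse_block_device, not its exact code.
DEFAULT_LOOP_SIZE = '5G'  # default size for file-backed loopback devices

def parse_block_device(entry):
    """Return (path, loop_size) for one block-device entry.

    loop_size is None for real block devices, or a size string for
    file-backed entries of the form '/path/to/file|10G'.
    """
    if '|' in entry:
        path, size = entry.split('|', 1)
        return path, size or DEFAULT_LOOP_SIZE
    if entry.startswith('/'):
        return entry, None            # absolute device path, e.g. /dev/sdb
    return '/dev/' + entry, None      # bare device name, e.g. 'sdb'

print(parse_block_device('sdb'))            # ('/dev/sdb', None)
print(parse_block_device('/tmp/vol1|4G'))   # ('/tmp/vol1', '4G')
```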
Extra cinder-volume-only services running on the compute nodes (basically the same as above but with "enabled-services: volume"). Take care to leave "block-device: None" because we do not want to use the internal LVM functionality of the cinder charm, and will instead make the cinder-lvm charm do that:
To prevent potential data loss, an already formatted device (or one containing LVM metadata) cannot be used unless the `overwrite` configuration option is set to 'true'.
cinder-volume:
charm: cs:cinder
num_units: 9
constraints: *combi-access-constr
bindings:
"": *oam-space
public: *public-space
admin: *admin-space
internal: *internal-space
shared-db: *internal-space
options:
worker-multiplier: *worker-multiplier
openstack-origin: *openstack-origin
enabled-services: volume
block-device: None
glance-api-version: 2
use-internal-endpoints: True
region: *openstack-region
to:
- 1000
- 1001
- 1002
- 1003
- 1004
- 1005
- 1006
- 1007
- 1008
### `config-flags`
And then the cinder-lvm charm (as a subordinate charm):
Comma-separated list of key=value config flags. These values will be added to standard options when injecting config into `cinder.conf`.
cinder-lvm-fast:
charm: cs:cinder-lvm
num_units: 0
options:
alias: fast
block-device: /dev/nvme0n1
allocation-type: default
erase-size: '50'
unique-backend: true
cinder-lvm-slow:
charm: cs:cinder-lvm
num_units: 0
options:
alias: slow
block-device: /dev/sdb /dev/sdc /dev/sdd
allocation-type: default
erase-size: '50'
unique-backend: true
### `overwrite`
And then the extra relations for cinder-volume and cinder-lvm-[foo]:
Permits ('true') the charm to attempt to overwrite storage devices (specified by the `block-device` option) if they contain pre-existing filesystems or LVM metadata. The default is 'false'. A device in use on the host will never be overwritten.
- [ cinder-volume, mysql ]
- [ "cinder-volume:amqp", "rabbitmq-server:amqp" ]
- [ "cinder-lvm-fast", "cinder-volume" ]
- [ "cinder-lvm-slow", "cinder-volume" ]
## Deployment
To deploy, add a relation to the cinder charm:
Configuration
=============
juju add-relation cinder-lvm:storage-backend cinder:storage-backend
See config.yaml for details of configuration options.
# Documentation
One or more block devices (local to the charm unit) are used as LVM
physical volumes, on which a volume group is created. A logical volume
is created ('openstack volume create') and exported to a cloud instance
via iSCSI ('openstack server add volume').
The OpenStack Charms project maintains two documentation guides:
**Note**: It is not recommended to use the LVM storage method for
anything other than testing or for small non-production deployments.
* [OpenStack Charm Guide][cg]: for project information, including development
and support notes
* [OpenStack Charms Deployment Guide][cdg]: for charm usage information
**Important**: Make sure the designated block devices exist and are not
in use (formatted as physical volumes or with other filesystems), unless they
already contain the desired volume group (in which case it will be used
instead of creating a new one).
# Bugs
This charm only prepares devices for LVM and configures cinder; it does
not execute any active function, therefore there is no need for
high availability.
Please report bugs on [Launchpad][lp-bugs-charm-cinder-lvm].
[cg]: https://docs.openstack.org/charm-guide
[cdg]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide
[lp-bugs-charm-cinder-lvm]: https://bugs.launchpad.net/charm-cinder-lvm/+filebug


@@ -6,7 +6,9 @@ config:
- use-syslog
- use-internal-endpoints
- ssl_ca
- ssl_cert
- ssl_key
options:
basic:
use_venv: True
repo: https://github.com/openstack-charmers/cinder-storage-backend-template
repo: https://opendev.org/openstack/charm-cinder-lvm


@@ -1,3 +1,17 @@
# Copyright 2021 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
import socket
@@ -87,7 +101,7 @@ def configure_lvm_storage(block_devices, volume_group, overwrite=False,
remove_missing=False, remove_missing_force=False):
''' Configure LVM storage on the list of block devices provided
:param block_devices: list: List of whitelisted block devices to detect
:param block_devices: list: List of allow-listed block devices to detect
and use if found
:param overwrite: bool: Scrub any existing block data if block device is
not already in-use
@@ -123,14 +137,12 @@ def configure_lvm_storage(block_devices, volume_group, overwrite=False,
if overwrite is True or not has_partition_table(device):
prepare_volume(device)
new_devices.append(device)
elif (is_lvm_physical_volume(device)
and list_lvm_volume_group(device) != volume_group):
elif list_lvm_volume_group(device) != volume_group:
# Existing LVM but not part of required VG or new device
if overwrite is True:
prepare_volume(device)
new_devices.append(device)
elif (is_lvm_physical_volume(device)
and list_lvm_volume_group(device) == volume_group):
else:
# Mark vg as found
juju_log('Found volume-group already created on {}'.format(
device))
@@ -141,7 +153,7 @@ def configure_lvm_storage(block_devices, volume_group, overwrite=False,
juju_log('LVM info mid preparation')
log_lvm_info()
if vg_found is False and len(new_devices) > 0:
if not vg_found and new_devices:
if overwrite:
ensure_lvm_volume_group_non_existent(volume_group)
@@ -161,12 +173,12 @@ def configure_lvm_storage(block_devices, volume_group, overwrite=False,
" LVM may not be fully configured yet. Error was: '{}'."
.format(str(e)))
if len(new_devices) > 0:
if new_devices:
# Extend the volume group as required
for new_device in new_devices:
extend_lvm_volume_group(volume_group, new_device)
thin_pools = list_thin_logical_volume_pools(path_mode=True)
if len(thin_pools) == 0:
if not thin_pools:
juju_log("No thin pools found")
elif len(thin_pools) == 1:
juju_log("Thin pool {} found, extending with {}".format(
@@ -325,11 +337,11 @@ def _parse_block_device(block_device):
return ('/dev/{}'.format(block_device), 0)
class CinderlvmCharm(
class CinderLVMCharm(
charms_openstack.charm.CinderStoragePluginCharm):
name = 'cinder_lvm'
release = 'ocata'
release = 'queens'
packages = []
release_pkg = 'cinder-common'
version_package = 'cinder-volume'
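The refactored device-classification branching in `configure_lvm_storage` above can be modeled with a small, self-contained sketch. The dictionary fields here are hypothetical stand-ins for the charm-helpers calls (`is_lvm_physical_volume`, `has_partition_table`, `list_lvm_volume_group`); this models the intent of the diff, not its exact code:

```python
# Simplified model of the device-classification logic shown in the diff
# above. The dict fields stand in for the charm-helpers probe functions.
def classify_device(device, volume_group, overwrite=False):
    """Return 'prepare', 'found-vg', or 'skip' for one candidate device."""
    if not device['is_lvm_pv']:
        # Not an LVM PV yet: prepare it unless it holds data we must keep
        if overwrite or not device['has_partition_table']:
            return 'prepare'
        return 'skip'
    if device['vg'] != volume_group:
        # Existing LVM but not part of the required volume group
        return 'prepare' if overwrite else 'skip'
    # Already part of the required volume group: mark it as found
    return 'found-vg'

dev = {'is_lvm_pv': True, 'vg': 'cinder-vg', 'has_partition_table': False}
print(classify_device(dev, 'cinder-vg'))  # 'found-vg'
```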


@@ -1,18 +1,21 @@
name: cinder-lvm
summary: lvm integration for OpenStack Block Storage
summary: LVM integration for OpenStack Block Storage
maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
description: |
Cinder is the block storage service for the Openstack project.
.
This charm provides a lvm backend for Cinder.
This charm provides an LVM backend for Cinder.
tags:
- openstack
- storage
- file-servers
- misc
series:
- xenial
- bionic
- focal
- groovy
- hirsute
- impish
subordinate: true
provides:
storage-backend:


@@ -1,3 +1,9 @@
# zaza
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos. See the 'global' dir contents for available
# choices of *requirements.txt files for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
#
# Functional Test Requirements (let Zaza's dependencies solve all dependencies here!)
git+https://github.com/openstack-charmers/zaza.git#egg=zaza
git+https://github.com/openstack-charmers/zaza-openstack-tests.git#egg=zaza.openstack


@@ -1,4 +1,4 @@
series: xenial
series: bionic
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
@@ -6,6 +6,7 @@ machines:
constraints: mem=3072M
'1':
'2':
constraints: mem=4G root-disk=16G
'3':
relations:
- - keystone:shared-db
@@ -27,22 +28,22 @@ applications:
keystone:
charm: cs:~openstack-charmers-next/keystone
num_units: 1
options:
openstack-origin: cloud:xenial-ocata
to:
- '1'
cinder:
charm: cs:~openstack-charmers-next/cinder
num_units: 1
options:
openstack-origin: cloud:xenial-ocata
block-device: /dev/vdb
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '2'
cinder-lvm:
series: xenial
charm: cinder-lvm
options:
# Add config options here
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1


@@ -0,0 +1,53 @@
series: bionic
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
'2':
'3':
relations:
- - keystone:shared-db
- mysql:shared-db
- - cinder:shared-db
- mysql:shared-db
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
to:
- '0'
keystone:
charm: cs:~openstack-charmers-next/keystone
options:
openstack-origin: cloud:bionic-rocky
num_units: 1
to:
- '1'
cinder:
charm: cs:~openstack-charmers-next/cinder
num_units: 1
options:
openstack-origin: cloud:bionic-rocky
block-device: /dev/vdb
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '2'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
to:
- '3'


@@ -0,0 +1,53 @@
series: bionic
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
'2':
'3':
relations:
- - keystone:shared-db
- mysql:shared-db
- - cinder:shared-db
- mysql:shared-db
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
to:
- '0'
keystone:
charm: cs:~openstack-charmers-next/keystone
options:
openstack-origin: cloud:bionic-stein
num_units: 1
to:
- '1'
cinder:
charm: cs:~openstack-charmers-next/cinder
num_units: 1
options:
openstack-origin: cloud:bionic-stein
block-device: /dev/vdb
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '2'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
to:
- '3'


@@ -0,0 +1,56 @@
series: bionic
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
'2':
constraints: mem=4G root-disk=16G
'3':
relations:
- - keystone:shared-db
- mysql:shared-db
- - cinder:shared-db
- mysql:shared-db
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
to:
- '0'
keystone:
charm: cs:~openstack-charmers-next/keystone
options:
openstack-origin: cloud:bionic-train
num_units: 1
to:
- '1'
cinder:
charm: cs:~openstack-charmers-next/cinder
num_units: 1
storage:
block-devices: '40G'
options:
openstack-origin: cloud:bionic-train
block-device: None
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '2'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
to:
- '3'


@@ -0,0 +1,56 @@
series: bionic
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
'2':
constraints: mem=4G root-disk=16G
'3':
relations:
- - keystone:shared-db
- mysql:shared-db
- - cinder:shared-db
- mysql:shared-db
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql:
charm: cs:~openstack-charmers-next/percona-cluster
num_units: 1
to:
- '0'
keystone:
charm: cs:~openstack-charmers-next/keystone
options:
openstack-origin: cloud:bionic-ussuri
num_units: 1
to:
- '1'
cinder:
charm: cs:~openstack-charmers-next/cinder
num_units: 1
storage:
block-devices: '40G'
options:
openstack-origin: cloud:bionic-ussuri
block-device: None
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '2'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
to:
- '3'


@@ -0,0 +1,76 @@
series: focal
variables:
openstack-origin: &openstack-origin distro
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers/mysql-router
cinder:
charm: cs:~openstack-charmers/cinder
num_units: 1
storage:
block-devices: '40G'
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers/mysql-router


@@ -0,0 +1,76 @@
series: focal
variables:
openstack-origin: &openstack-origin cloud:focal-victoria
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers/mysql-router
cinder:
charm: cs:~openstack-charmers/cinder
storage:
block-devices: '40G'
num_units: 1
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers/mysql-router


@@ -0,0 +1,76 @@
series: focal
variables:
openstack-origin: &openstack-origin cloud:focal-wallaby
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers/mysql-router
cinder:
charm: cs:~openstack-charmers/cinder
storage:
block-devices: '40G'
num_units: 1
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers/mysql-router


@@ -0,0 +1,76 @@
series: focal
variables:
openstack-origin: &openstack-origin cloud:focal-xena
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers/mysql-router
cinder:
charm: cs:~openstack-charmers/cinder
storage:
block-devices: '40G'
num_units: 1
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
ephemeral-unmount: /mnt
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers/mysql-router


@@ -0,0 +1,75 @@
series: groovy
variables:
openstack-origin: &openstack-origin distro
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers/mysql-router
cinder:
charm: cs:~openstack-charmers/cinder
storage:
block-devices: '40G'
num_units: 1
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers/mysql-router


@@ -0,0 +1,75 @@
series: hirsute
variables:
openstack-origin: &openstack-origin distro
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers-next/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers-next/mysql-router
cinder:
charm: cs:~openstack-charmers-next/cinder
storage:
block-devices: '40G'
num_units: 1
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers-next/mysql-router


@@ -0,0 +1,75 @@
series: impish
variables:
openstack-origin: &openstack-origin distro
comment:
- 'machines section to decide order of deployment. database sooner = faster'
machines:
'0':
constraints: mem=3072M
'1':
constraints: mem=3072M
'2':
constraints: mem=3072M
'3':
'4':
'5':
constraints: mem=4G root-disk=16G
relations:
- - keystone:shared-db
- keystone-mysql-router:shared-db
- - keystone-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:shared-db
- cinder-mysql-router:shared-db
- - cinder-mysql-router:db-router
- mysql-innodb-cluster:db-router
- - cinder:identity-service
- keystone:identity-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:storage-backend
- cinder-lvm:storage-backend
applications:
mysql-innodb-cluster:
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
num_units: 3
options:
source: *openstack-origin
to:
- '0'
- '1'
- '2'
rabbitmq-server:
charm: cs:~openstack-charmers-next/rabbitmq-server
num_units: 1
options:
source: *openstack-origin
to:
- '3'
keystone:
charm: cs:~openstack-charmers-next/keystone
options:
openstack-origin: *openstack-origin
num_units: 1
to:
- '4'
keystone-mysql-router:
charm: cs:~openstack-charmers-next/mysql-router
cinder:
charm: cs:~openstack-charmers-next/cinder
storage:
block-devices: '40G'
num_units: 1
options:
openstack-origin: *openstack-origin
block-device: None
overwrite: "true"
to:
- '5'
cinder-lvm:
charm: cinder-lvm
options:
block-device: '/tmp/vol1|4G'
alias: zaza-lvm
cinder-mysql-router:
charm: cs:~openstack-charmers-next/mysql-router


@@ -1,9 +1,24 @@
charm_name: cinder-lvm
tests:
- tests.tests_cinder_lvm.CinderlvmTest
- zaza.openstack.charm_tests.cinder_lvm.tests.CinderLVMTest
configure:
- zaza.openstack.charm_tests.keystone.setup.add_demo_user
gate_bundles:
- xenial-ocata
- bionic-queens
- bionic-rocky
- bionic-stein
- bionic-train
- bionic-ussuri
- focal-ussuri
- focal-victoria
- focal-wallaby
- focal-xena
- groovy-victoria
- hirsute-wallaby
smoke_bundles:
- xenial-ocata
- bionic-ussuri
dev_bundles:
- impish-xena
test_options:
force_deploy:
- impish-xena


@@ -1,70 +0,0 @@
#!/usr/bin/env python3
# Copyright 2019 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Encapsulate cinder-lvm testing."""
import logging
import uuid
import zaza.model
import zaza.openstack.charm_tests.test_utils as test_utils
import zaza.openstack.utilities.openstack as openstack_utils
class CinderlvmTest(test_utils.OpenStackBaseTest):
"""Encapsulate lvm tests."""
@classmethod
def setUpClass(cls):
"""Run class setup for running tests."""
super(CinderlvmTest, cls).setUpClass()
cls.keystone_session = openstack_utils.get_overcloud_keystone_session()
cls.model_name = zaza.model.get_juju_model()
cls.cinder_client = openstack_utils.get_cinder_session_client(
cls.keystone_session)
def test_cinder_config(self):
logging.info('lvm')
expected_contents = {
'cinder-lvm': {
'iscsi_helper': ['tgtadm'],
'volume_dd_blocksize': ['512']}}
zaza.model.run_on_leader(
'cinder',
'sudo cp /etc/cinder/cinder.conf /tmp/',
model_name=self.model_name)
zaza.model.block_until_oslo_config_entries_match(
'cinder',
'/tmp/cinder.conf',
expected_contents,
model_name=self.model_name,
timeout=2)
def test_create_volume(self):
test_vol_name = "zaza{}".format(uuid.uuid1().fields[0])
vol_new = self.cinder_client.volumes.create(
name=test_vol_name,
size=2)
openstack_utils.resource_reaches_status(
self.cinder_client.volumes,
vol_new.id,
expected_status='available')
test_vol = self.cinder_client.volumes.find(name=test_vol_name)
self.assertEqual(
getattr(test_vol, 'os-vol-host-attr:host').split('#')[0],
'cinder@cinder-lvm')
self.cinder_client.volumes.delete(vol_new)
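The volume name in `test_create_volume` above uses `uuid.uuid1().fields[0]`, the 32-bit `time_low` field of a version-1 UUID, as a short per-run suffix. A standalone sketch of that naming pattern:

```python
import uuid

# fields[0] is time_low: a 32-bit integer derived from the current
# timestamp, so repeated runs get distinct (though not guaranteed-unique)
# volume names such as 'zaza1234567890'.
test_vol_name = "zaza{}".format(uuid.uuid1().fields[0])
print(test_vol_name)
```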


@ -1,25 +1,46 @@
# Source charm (with zaza): ./src/tox.ini
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos. See the 'global' dir contents for available
# choices of tox.ini for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
[tox]
envlist = pep8
skipsdist = True
# NOTE: Avoid build/test env pollution by not enabling sitepackages.
sitepackages = False
# NOTE: Avoid false positives by not skipping missing interpreters.
skip_missing_interpreters = False
# NOTES:
# * We avoid the new dependency resolver by pinning pip < 20.3, see
# https://github.com/pypa/pip/issues/9187
# * Pinning dependencies requires tox >= 3.2.0, see
# https://tox.readthedocs.io/en/latest/config.html#conf-requires
# * It is also necessary to pin virtualenv as a newer virtualenv would still
# lead to fetching the latest pip in the func* tox targets, see
# https://stackoverflow.com/a/38133283
requires = pip < 20.3
virtualenv < 20.0
# NOTE: https://wiki.canonical.com/engineering/OpenStack/InstallLatestToxOnOsci
minversion = 3.18.0
[testenv]
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
whitelist_externals = juju
passenv = HOME TERM CS_API_* OS_* AMULET_*
allowlist_externals = juju
passenv = HOME TERM CS_* OS_* TEST_*
deps = -r{toxinidir}/test-requirements.txt
install_command =
pip install {opts} {packages}
[testenv:pep8]
basepython = python3
deps=charm-tools
commands = charm-proof
[testenv:func-noop]
basepython = python3
commands =
true
functest-run-suite --help
[testenv:func]
basepython = python3
@ -31,5 +52,10 @@ basepython = python3
commands =
functest-run-suite --keep-model --smoke
[testenv:func-target]
basepython = python3
commands =
functest-run-suite --keep-model --bundle {posargs}
[testenv:venv]
commands = {posargs}


@ -1,2 +1,3 @@
#layer-basic uses wheelhouse to install python dependencies
psutil
psutil
git+https://github.com/juju/charm-helpers.git#egg=charmhelpers


@ -5,3 +5,6 @@ charms.reactive
mock>=1.2
coverage>=3.6
git+https://github.com/openstack/charms.openstack.git#egg=charms-openstack
git+https://github.com/juju/charm-helpers.git#egg=charmhelpers
netifaces
psutil

tox.ini

@ -1,10 +1,31 @@
# Source charm: ./tox.ini
# This file is managed centrally by release-tools and should not be modified
# within individual charm repos.
# within individual charm repos. See the 'global' dir contents for available
# choices of tox.ini for OpenStack Charms:
# https://github.com/openstack-charmers/release-tools
[tox]
skipsdist = True
envlist = pep8,py34,py35,py36
skip_missing_interpreters = True
envlist = pep8,py3
# NOTE: Avoid build/test env pollution by not enabling sitepackages.
sitepackages = False
# NOTE: Avoid false positives by not skipping missing interpreters.
skip_missing_interpreters = False
# NOTES:
# * We avoid the new dependency resolver by pinning pip < 20.3, see
# https://github.com/pypa/pip/issues/9187
# * Pinning dependencies requires tox >= 3.2.0, see
# https://tox.readthedocs.io/en/latest/config.html#conf-requires
# * It is also necessary to pin virtualenv as a newer virtualenv would still
# lead to fetching the latest pip in the func* tox targets, see
# https://stackoverflow.com/a/38133283
requires =
pip < 20.3
virtualenv < 20.0
setuptools<50.0.0
# NOTE: https://wiki.canonical.com/engineering/OpenStack/InstallLatestToxOnOsci
minversion = 3.18.0
[testenv]
setenv = VIRTUAL_ENV={envdir}
@ -13,49 +34,85 @@ setenv = VIRTUAL_ENV={envdir}
LAYER_PATH={toxinidir}/layers
INTERFACE_PATH={toxinidir}/interfaces
JUJU_REPOSITORY={toxinidir}/build
passenv = http_proxy https_proxy
passenv = http_proxy https_proxy INTERFACE_PATH LAYER_PATH JUJU_REPOSITORY
install_command =
pip install {opts} {packages}
{toxinidir}/pip.sh install {opts} {packages}
deps =
-r{toxinidir}/requirements.txt
[testenv:build]
basepython = python3
commands =
charm-build --log-level DEBUG -o {toxinidir}/build src {posargs}
charm-build --log-level DEBUG --use-lock-file-branches -o {toxinidir}/build/builds src {posargs}
[testenv:py27]
basepython = python2.7
# Reactive source charms are Python3-only, but a py27 unit test target
# is required by OpenStack Governance. Remove this shim as soon as
# permitted. https://governance.openstack.org/tc/reference/cti/python_cti.html
whitelist_externals = true
commands = true
[testenv:add-build-lock-file]
basepython = python3
commands =
charm-build --log-level DEBUG --write-lock-file -o {toxinidir}/build/builds src {posargs}
[testenv:py34]
basepython = python3.4
[testenv:py3]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
commands = stestr run --slowest {posargs}
[testenv:py35]
basepython = python3.5
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
commands = stestr run --slowest {posargs}
[testenv:py36]
basepython = python3.6
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
commands = stestr run --slowest {posargs}
[testenv:py37]
basepython = python3.7
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run --slowest {posargs}
[testenv:py38]
basepython = python3.8
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run --slowest {posargs}
[testenv:pep8]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
deps = flake8==3.9.2
charm-tools==2.8.3
commands = flake8 {posargs} src unit_tests
[testenv:cover]
# Technique based heavily upon
# https://github.com/openstack/nova/blob/master/tox.ini
basepython = python3
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
setenv =
{[testenv]setenv}
PYTHON=coverage run
commands =
coverage erase
stestr run --slowest {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
coverage report
[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source =
.
omit =
.tox/*
*/charmhelpers/*
unit_tests/*
[testenv:venv]
basepython = python3
commands = {posargs}
[flake8]
# E402 ignore necessary for path append before sys module import in actions
extend-ignore = E402
ignore = E402,W503,W504


@ -1,4 +1,4 @@
# Copyright 2016 Canonical Ltd
# Copyright 2021 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -16,7 +16,3 @@ import sys
sys.path.append('src')
sys.path.append('src/lib')
# Mock out charmhelpers so that we can test without it.
import charms_openstack.test_mocks # noqa
charms_openstack.test_mocks.mock_charmhelpers()


@ -1,4 +1,4 @@
# Copyright 2016 Canonical Ltd
# Copyright 2021 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -12,38 +12,225 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import print_function
import charmhelpers
import charm.openstack.cinder_lvm as cinder_lvm
import charms_openstack.test_utils as test_utils
class TestCinderlvmCharm(test_utils.PatchHelper):
class MockDevice:
def __init__(self, path, **kwargs):
self.path = path
kwargs.setdefault('size', 0)
self.attrs = kwargs
def _patch_config_and_charm(self, config):
self.patch_object(charmhelpers.core.hookenv, 'config')
@property
def size(self):
return self.attrs['size']
def is_block(self):
return self.attrs.get('block', False)
def is_loop(self):
return self.attrs.get('loop', False)
def has_partition_table(self):
return self.attrs.get('partition-table')
class MockLVM:
def __init__(self):
self.vgroups = {}
self.devices = []
self.mount_points = {} # Maps device paths to device objects.
def reduce(self, group):
self.vgroups.pop(group, None)
def extend(self, group, device=None):
dev_group = self.vgroups.setdefault(group, set())
if device:
dev_group.add(device)
def exists(self, group):
return group in self.vgroups
def remove(self, group):
self.vgroups.pop(group)
def ensure_non_existent(self, group):
self.vgroups.pop(group, None)
    def list_vgroup(self, device):
        # vgroups maps a group name to the set of devices backing it,
        # so look the device up by membership rather than equality.
        for group, devs in self.vgroups.items():
            if device in devs:
                return group
        return ''
def fs_mounted(self, fs):
return fs in self.mount_points
def mounts(self):
return list((v.path, k) for k, v in self.mount_points.items())
def umount(self, fs):
for k, v in self.mount_points.items():
if fs == v.path:
del self.mount_points[k]
return
def find_device(self, path):
for dev in self.devices:
if dev.path == path:
return dev
def add_device(self, device, **kwargs):
dev = MockDevice(device, **kwargs)
self.devices.append(dev)
return dev
def is_block_device(self, path):
dev = self.find_device(path)
return dev is not None and dev.is_block()
def ensure_loopback_dev(self, path, size):
dev = self.find_device(path)
if dev is not None:
dev.attrs['size'] = size
dev.attrs['loop'] = True
else:
self.devices.append(MockDevice(path, loop=True, size=size))
        # Return the path directly: when the device did not already
        # exist, 'dev' is still None here.
        return path
def is_device_mounted(self, path):
return any(x.path == path for x in self.mount_points.values())
def mount_path(self, path, device, **kwargs):
dev = self.find_device(device)
        if dev is not None:
            dev.attrs.update(kwargs)
            self.mount_points[path] = dev
        else:
            self.mount_points[path] = self.add_device(device, **kwargs)
def has_partition_table(self, device):
dev = self.find_device(device)
return dev is not None and dev.has_partition_table()
def reset(self):
self.vgroups.clear()
self.devices.clear()
self.mount_points.clear()
class TestCinderLVMCharm(test_utils.PatchHelper):
@classmethod
def setUpClass(cls):
cls.DEFAULT_CONFIG = {'overwrite': False,
'remove-missing': False, 'alias': 'test-alias',
'remove-missing-force': False}
cls.LVM = MockLVM()
def setUp(self):
super().setUp()
self._config = self.DEFAULT_CONFIG.copy()
lvm = self.LVM
def cf(key=None):
if key is not None:
return config[key]
return config
if key is None:
return self._config
return self._config.get(key)
self.patch_object(charmhelpers.core.hookenv, 'config')
self.patch_object(cinder_lvm, 'mounts')
self.patch_object(cinder_lvm, 'umount')
self.patch_object(cinder_lvm, 'is_block_device')
self.patch_object(cinder_lvm, 'zap_disk')
self.patch_object(cinder_lvm, 'is_device_mounted')
self.patch_object(cinder_lvm, 'ensure_loopback_device')
self.patch_object(cinder_lvm, 'create_lvm_physical_volume')
self.patch_object(cinder_lvm, 'create_lvm_volume_group')
self.patch_object(cinder_lvm, 'deactivate_lvm_volume_group')
self.patch_object(cinder_lvm, 'is_lvm_physical_volume')
self.patch_object(cinder_lvm, 'list_lvm_volume_group')
self.patch_object(cinder_lvm, 'list_thin_logical_volume_pools')
self.patch_object(cinder_lvm, 'filesystem_mounted')
self.patch_object(cinder_lvm, 'lvm_volume_group_exists')
self.patch_object(cinder_lvm, 'remove_lvm_volume_group')
self.patch_object(cinder_lvm, 'ensure_lvm_volume_group_non_existent')
self.patch_object(cinder_lvm, 'log_lvm_info')
self.patch_object(cinder_lvm, 'has_partition_table')
self.patch_object(cinder_lvm, 'reduce_lvm_volume_group_missing')
self.patch_object(cinder_lvm, 'extend_lvm_volume_group')
self.config.side_effect = cf
c = cinder_lvm.CinderlvmCharm()
return c
cinder_lvm.mounts.side_effect = lvm.mounts
cinder_lvm.umount.side_effect = lvm.umount
cinder_lvm.is_block_device.side_effect = lvm.is_block_device
cinder_lvm.is_device_mounted.side_effect = lvm.is_device_mounted
cinder_lvm.ensure_loopback_device.side_effect = lvm.ensure_loopback_dev
cinder_lvm.create_lvm_volume_group.side_effect = lvm.extend
cinder_lvm.is_lvm_physical_volume.return_value = False
cinder_lvm.list_lvm_volume_group.side_effect = lvm.list_vgroup
cinder_lvm.list_thin_logical_volume_pools.return_value = []
cinder_lvm.filesystem_mounted.side_effect = lvm.fs_mounted
cinder_lvm.lvm_volume_group_exists.side_effect = lvm.exists
cinder_lvm.remove_lvm_volume_group.side_effect = lvm.remove
cinder_lvm.ensure_lvm_volume_group_non_existent.side_effect = \
lvm.ensure_non_existent
cinder_lvm.has_partition_table.side_effect = lvm.has_partition_table
cinder_lvm.extend_lvm_volume_group.side_effect = lvm.extend
self._config['block-device'] = '/dev/sdb'
def tearDown(self):
super().tearDown()
self.LVM.reset()
def _patch_config_and_charm(self, config):
self._config.update(config)
return cinder_lvm.CinderLVMCharm()
def test_cinder_base(self):
charm = self._patch_config_and_charm({})
self.assertEqual(charm.name, 'cinder_lvm')
self.assertEqual(charm.version_package, 'cinder-volume')
self.assertEqual(charm.packages, ['cinder-volume'])
self.assertEqual(charm.packages, [])
def test_cinder_configuration(self):
charm = self._patch_config_and_charm({'a': 'b'})
config = charm.cinder_configuration() # noqa
# Add check here that configuration is as expected.
# self.assertEqual(config, {})
charm = self._patch_config_and_charm(
{'a': 'b', 'config-flags': 'val=3'})
config = charm.cinder_configuration()
self.assertEqual(config[-1][1], '3')
self.assertNotIn('a', list(x[0] for x in config))
def test_cinder_lvm_ephemeral_mount(self):
ephemeral_path, ephemeral_dev = 'somepath', '/dev/sdc'
charm = self._patch_config_and_charm(
{'ephemeral-unmount': ephemeral_path})
self.LVM.mount_path(ephemeral_path, ephemeral_dev)
charm.cinder_configuration()
cinder_lvm.filesystem_mounted.assert_called()
cinder_lvm.umount.assert_called()
self.assertFalse(cinder_lvm.is_device_mounted(ephemeral_path))
def test_cinder_lvm_block_dev_none(self):
charm = self._patch_config_and_charm({'block-device': 'none'})
charm.cinder_configuration()
self.assertFalse(self.LVM.mount_points)
def test_cinder_lvm_single_vg(self):
self.LVM.add_device(self._config['block-device'], block=True)
charm = self._patch_config_and_charm({})
charm.cinder_configuration()
cinder_lvm.is_device_mounted.assert_called()
cinder_lvm.zap_disk.assert_called()
self.assertTrue(self.LVM.exists(cinder_lvm.get_volume_group_name()))
def test_cinder_lvm_loopback_dev(self):
loop_dev = '/sys/loop0'
self.LVM.add_device(loop_dev, loop=True)
charm = self._patch_config_and_charm(
{'block-device': loop_dev + '|100'})
charm.cinder_configuration()
dev = self.LVM.find_device(loop_dev)
self.assertTrue(dev)
self.assertEqual(dev.size, '100')