Initial commit of charm

James Page 2020-03-04 10:10:40 +00:00
commit d1c0f4a62c
25 changed files with 1337 additions and 0 deletions

.gitignore

@@ -0,0 +1,11 @@
bin
.coverage
.testrepository
.tox
*.sw[nop]
*.pyc
.unit-state.db
.stestr
__pycache__
func-results.json
tests/id_rsa_zaza

.stestr.conf

@@ -0,0 +1,3 @@
[DEFAULT]
test_path=./unit_tests
top_dir=./

README.md

@@ -0,0 +1,67 @@
# Overview
The TrilioVault Data Mover charm deploys the TrilioVault Datamover service
on each compute node.
# Usage
TrilioVault Data Mover relies on the nova-compute and rabbitmq-server services.
Steps to deploy the charm:
juju deploy trilio-data-mover --config user-config.yaml
juju deploy nova-compute
juju deploy rabbitmq-server
juju add-relation trilio-data-mover rabbitmq-server
juju add-relation trilio-data-mover nova-compute
# Configuration
Provide the configuration options below using a config file:
python-version: "OpenStack base Python version (2 or 3)"
NOTE - The default value is "3". Make sure this matches the Python version of the deployment, since installing
Python 3 packages on a Python 2 based setup may have unexpected results.
backup-target-type: Backup target type, e.g. nfs or s3
For an NFS backup target:
nfs-shares: NFS share address (used only for the nfs backup target)
For Amazon S3 backup target:
tv-s3-secret-key: S3 secret access key
tv-s3-access-key: S3 access key
tv-s3-region-name: S3 region name
tv-s3-bucket: S3 bucket name
For non-AWS S3 backup target:
tv-s3-secret-key: S3 secret access key
tv-s3-access-key: S3 access key
tv-s3-endpoint-url: S3 endpoint URL
tv-s3-region-name: S3 region name
tv-s3-bucket: S3 bucket name
Update these options according to the requirements of your S3 target; parameters that are not needed can be omitted.
TrilioVault packages are downloaded from the repository set in the config parameter below. Change this only if you wish to download
TrilioVault packages from a different source.
triliovault-pkg-source: Repository address of triliovault packages
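For reference, a minimal user-config.yaml for an NFS backup target could look like the sketch below (keyed by the application name; the NFS export shown is a placeholder for your own share):
trilio-data-mover:
  python-version: 3
  backup-target-type: nfs
  nfs-shares: 192.168.10.5:/srv/triliovault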
# Contact Information
Trilio Support <support@trilio.com>

copyright

@@ -0,0 +1,16 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0
Files: *
Copyright: 2018, Trilio
License: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

requirements.txt

@@ -0,0 +1,2 @@
# Requirements to build the charm
charm-tools

src/README.md

@@ -0,0 +1,67 @@
# Overview
The TrilioVault Data Mover charm deploys the TrilioVault Datamover service
on each compute node.
# Usage
TrilioVault Data Mover relies on the nova-compute and rabbitmq-server services.
Steps to deploy the charm:
juju deploy trilio-data-mover --config user-config.yaml
juju deploy nova-compute
juju deploy rabbitmq-server
juju add-relation trilio-data-mover rabbitmq-server
juju add-relation trilio-data-mover nova-compute
# Configuration
Provide the configuration options below using a config file:
python-version: "OpenStack base Python version (2 or 3)"
NOTE - The default value is "3". Make sure this matches the Python version of the deployment, since installing
Python 3 packages on a Python 2 based setup may have unexpected results.
backup-target-type: Backup target type, e.g. nfs or s3
For an NFS backup target:
nfs-shares: NFS share address (used only for the nfs backup target)
For Amazon S3 backup target:
tv-s3-secret-key: S3 secret access key
tv-s3-access-key: S3 access key
tv-s3-region-name: S3 region name
tv-s3-bucket: S3 bucket name
For non-AWS S3 backup target:
tv-s3-secret-key: S3 secret access key
tv-s3-access-key: S3 access key
tv-s3-endpoint-url: S3 endpoint URL
tv-s3-region-name: S3 region name
tv-s3-bucket: S3 bucket name
Update these options according to the requirements of your S3 target; parameters that are not needed can be omitted.
TrilioVault packages are downloaded from the repository set in the config parameter below. Change this only if you wish to download
TrilioVault packages from a different source.
triliovault-pkg-source: Repository address of triliovault packages
# Contact Information
Trilio Support <support@trilio.com>

src/config.yaml

@@ -0,0 +1,102 @@
---
options:
python-version:
type: int
default: 3
    description: OpenStack base Python version (2 or 3)
triliovault-pkg-source:
type: string
default: "deb [trusted=yes] https://apt.fury.io/triliodata-3-4/ /"
description: Repository address of triliovault packages
tvault-datamover-virtenv-url:
type: string
default:
description: Downloadable URL of triliovault contego virtual environment
tvault-datamover-ext-usr:
type: string
default: nova
description: nova service user name
tvault-datamover-ext-group:
type: string
default: nova
description: nova service group name
tvault-datamover-virtenv:
type: string
default: /home/tvault
description: Trilio Vault home directory
tvault-datamover-virtenv-path:
type: string
default: /home/tvault/.virtenv
description: Trilio Vault Datamover virtual env
tv-datamover-conf:
type: string
default: /etc/tvault-contego/tvault-contego.conf
description: Trilio Vault Datamover config file location
nova-config:
type: string
default: /etc/nova/nova.conf
description: Nova default configuration file location
backup-target-type:
type: string
default: nfs
description: |
Type of backup target.
Valid types are-
- nfs
- s3
nfs-shares:
type: string
default:
description: NFS Shares mount source path
nfs-options:
type: string
default: nolock,soft,timeo=180,intr,lookupcache=none
description: NFS Options
tv-data-dir:
type: string
default: /var/triliovault-mounts
description: TrilioVault data mount point
tv-data-dir-old:
type: string
default: /var/triliovault
description: Old TrilioVault data dir
tv-s3-secret-key:
type: string
default: sample_s3_secret_key
description: S3 secret access key
tv-s3-access-key:
type: string
default: sample_s3_access_key
description: S3 access key
tv-s3-region-name:
type: string
default:
description: S3 region name
tv-s3-bucket:
type: string
default: sample_s3_bucket_name
description: S3 bucket name
tv-s3-endpoint-url:
type: string
default:
description: S3 endpoint URL
tv-datamover-debug:
type: boolean
default: False
description: debug parameter value in /etc/tvault-contego/tvault-contego.conf
tv-datamover-verbose:
type: boolean
default: True
description: verbose parameter value in /etc/tvault-contego/tvault-contego.conf
tv-datamover-max-uploads-pending:
type: int
default: 3
description: max_uploads_pending parameter value in /etc/tvault-contego/tvault-contego.conf
tv-datamover-max-commit-pending:
type: int
default: 3
description: max_commit_pending parameter value in /etc/tvault-contego/tvault-contego.conf
tv-datamover-qemu-agent-ping-timeout:
type: int
default: 600
description: qemu_agent_ping_timeout parameter value in /etc/tvault-contego/tvault-contego.conf
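To illustrate how these options fit together, the following hypothetical bundle fragment (relations to nova-compute and rabbitmq-server omitted) deploys the charm against an S3 backup target; the bucket, region and credentials are placeholders:
applications:
  trilio-data-mover:
    charm: trilio-data-mover
    options:
      backup-target-type: s3
      tv-s3-access-key: AKIAEXAMPLEKEY
      tv-s3-secret-key: example-secret-key
      tv-s3-bucket: tvault-backups
      tv-s3-region-name: us-east-1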

src/copyright

@@ -0,0 +1,16 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0
Files: *
Copyright: 2018, Trilio
License: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

@@ -0,0 +1,22 @@
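# Discover which nova configuration files are in effect for nova-compute and
# print them as a "--config-file=..." argument string.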
from oslo_config import cfg
from nova import config as nova_conf
import sys
CONF = cfg.CONF
default_config_files = sys.argv[1].split(',') if len(
sys.argv) > 1 else ['/etc/nova/nova.conf']
nova_conf.parse_args(["/usr/bin/nova-compute"])
if not ('config_file' in CONF.keys() and CONF['config_file']):
try:
nova_conf.parse_args(
["/usr/bin/nova-compute"],
default_config_files=default_config_files)
except cfg.ConfigFilesNotFoundError:
raise
except BaseException:
pass
config_files = " --config-file=".join([""] + CONF['config_file']).strip()
print(config_files)

@@ -0,0 +1,2 @@
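# Print the site-packages directory of the invoking Python interpreter.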
from distutils.sysconfig import get_python_lib
print(get_python_lib())

@@ -0,0 +1,5 @@
[Filters]
# mount, umount and qemu-img command filters
mount: CommandFilter, mount, root
umount: CommandFilter, umount, root
qemu-img: CommandFilter, qemu-img, root

@@ -0,0 +1 @@
nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *

@@ -0,0 +1,9 @@
/var/log/nova/tvault-contego.log {
daily
missingok
notifempty
copytruncate
size=25M
rotate 3
compress
}

@@ -0,0 +1,84 @@
#!/usr/bin/python
import os
import boto3
import botocore
import argparse
from urllib.parse import urlparse
def validate_s3_credentials(s3_access_key_id, s3_secret_access_key,
s3_endpoint, s3_region, s3_bucket,
use_ssl, s3_signature_version):
""" Validate the S3 credentials.
Validate all of the S3 credentials by attempting to get
some bucket information.
Returns:
        Success will be returned; otherwise error 403, 404, or
        500 will be returned with any relevant information.
"""
s3_config_object = None
if s3_signature_version != 'default' and s3_signature_version != '':
s3_config_object = botocore.client.Config(
signature_version=s3_signature_version)
s3_client = boto3.client('s3',
region_name=s3_region,
use_ssl=use_ssl,
aws_access_key_id=s3_access_key_id,
aws_secret_access_key=s3_secret_access_key,
endpoint_url=s3_endpoint,
config=s3_config_object)
s3_client.head_bucket(Bucket=s3_bucket)
# Add a check to see if the current object store will support
# our path length.
long_key = os.path.join(
'tvault_config/',
'workload_f5190be6-7f80-4856-8c24-149cb40500c5/',
'snapshot_f2e5c6a7-3c21-4b7f-969c-915bb408c64f/',
'vm_id_e81d1ac8-b49a-4ccf-9d92-5f1ef358f1be/',
'vm_res_id_72477d99-c475-4a5d-90ae-2560f5f3b319_vda/',
'deac2b8a-dca9-4415-adc1-f3c6598204ed-segments/',
'0000000000000000.00000000')
s3_client.put_object(
Bucket=s3_bucket, Key=long_key, Body='Test Data')
s3_client.delete_object(Bucket=s3_bucket, Key=long_key)
return {'status': 'Success'}
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-a', '--access-key', required=True)
parser.add_argument('-s', '--secret-key', required=True)
parser.add_argument('-e', '--endpoint-url', default=None)
parser.add_argument('-b', '--bucket-name', required=True)
parser.add_argument('-r', '--region-name', default='us-east-2')
parser.add_argument('-v', '--signature-version', default='default')
args = parser.parse_args()
s3_access_key_id = args.access_key
s3_secret_access_key = args.secret_key
s3_endpoint = args.endpoint_url if args.endpoint_url else None
use_ssl = True if (s3_endpoint and
urlparse(s3_endpoint).scheme == 'https') else False
s3_region = args.region_name if args.region_name else None
s3_bucket = args.bucket_name
s3_signature_version = args.signature_version
try:
validate_s3_credentials(s3_access_key_id, s3_secret_access_key,
s3_endpoint, s3_region, s3_bucket,
use_ssl, s3_signature_version)
except Exception:
raise
main()

src/icon.svg

@@ -0,0 +1,23 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 22.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 1000 1000" style="enable-background:new 0 0 1000 1000;" xml:space="preserve">
<style type="text/css">
.st0{fill:#77BC1F;}
.st1{fill:#FFFFFF;}
</style>
<circle class="st0" cx="500" cy="500.9" r="492.5"/>
<g>
<path class="st1" d="M500.1,799.9c79.9,0,153.9-30.2,208.3-85c55-55.4,85-131.1,84.5-213.1c0-167.8-128.6-299.2-292.7-299.2h-33.6
v23.1h33.6c151.2,0,269.5,121.3,269.5,276.2c0.6,75.8-27,145.7-77.7,196.8c-50.1,50.3-118.2,78-191.9,78
c-153.7,0-269.5-118.3-269.5-275.1c0-68.6,23.7-133.8,66.9-183.7l-16.6-16.5c-47.4,54.4-73.5,125.4-73.5,200.3
C207.4,671.7,333.2,799.9,500.1,799.9"/>
<path class="st1" d="M500.1,846c92.2,0,177.5-34.8,240.4-98.1c63.4-63.9,98-151.3,97.3-246c0-192.8-147.7-344.4-336.1-345.4h-12.5
V48.3L332.2,215l157.1,166.7v-110h10.8c125.8,0,224.4,101,224.4,230c0.4,63.2-22.5,121.5-64.7,163.9c-41.6,41.9-98.4,65-159.7,65
c-128,0-224.4-98.4-224.4-229c0-56.2,19.2-109.8,54.2-151.3L313.3,334c-39.3,45.8-60.8,105.3-60.8,167.7
c0,143.7,106.4,252.1,247.6,252.1c67.6,0,130.2-25.5,176.1-71.9c46.5-46.9,71.9-110.8,71.4-180.2c0-141.9-108.7-253-247.5-253
h-33.9v74.9L364,215l102.2-108.5v73.1h33.9c176.5,0,314.7,141.6,314.7,322.4c0.7,88.6-31.6,170.1-90.7,229.7
c-58.4,58.8-137.9,91.2-224,91.2c-179.4,0-314.7-138-314.7-321c0-80.9,28.2-157.6,79.5-216.3l-16.5-16.5
c-55.6,63-86.2,145.6-86.2,232.8C162.2,698.1,307.5,846,500.1,846"/>
</g>
</svg>


src/layer.yaml

@@ -0,0 +1,2 @@
---
includes: ['layer:basic'] # if you use any interfaces, add them here

src/metadata.yaml

@@ -0,0 +1,25 @@
---
name: trilio-data-mover
summary: Trilio-Data-Mover plugin
maintainer: Trilio Support <support@trilio.io>
description: |
Trilio-Data-Mover plugin installation on all compute nodes
subordinate: true
tags:
- openstack
- storage
- backup
- TVMv3.4
series:
- xenial
- bionic
requires:
amqp:
interface: rabbitmq
juju-info:
interface: juju-info
scope: container
provides:
data-mover:
interface: data-mover
scope: container

@@ -0,0 +1,610 @@
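# Reactive handlers that install, configure and manage the TrilioVault
# Datamover (tvault-contego) service on nova-compute hosts.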
import os
import re
import configparser
import time
from subprocess import (
check_output,
call,
)
from charms.reactive import (
when,
when_not,
set_flag,
clear_flag,
hook,
remove_state,
set_state,
)
from charmhelpers.core.hookenv import (
status_set,
config,
log,
application_version_set,
)
from charmhelpers.fetch import (
apt_install,
apt_update,
apt_purge,
filter_missing_packages,
)
from charmhelpers.core.host import (
service_restart,
service_stop,
service_running,
write_file,
mount,
umount,
mounts,
add_user_to_group,
symlink,
mkdir,
chownr,
)
VALID_BACKUP_TARGETS = [
'nfs',
's3'
]
def get_new_version(pkg_name):
"""
    Get the latest package version available from the apt repository.
"""
apt_cmd = "apt list {}".format(pkg_name)
pkg = check_output(apt_cmd.split()).decode('utf-8')
new_ver = re.search(r'\s([\d.]+)', pkg).group().strip()
return new_ver
def check_presence(tv_file):
"""
    Just a wrapper around the 'ls' command.
"""
if os.system('ls {}'.format(tv_file)):
return False
return True
def validate_nfs():
"""
Validate the nfs mount device
"""
usr = config('tvault-datamover-ext-usr')
grp = config('tvault-datamover-ext-group')
data_dir = config('tv-data-dir')
device = config('nfs-shares')
nfs_options = config('nfs-options')
# install nfs-common package
if not filter_missing_packages(['nfs-common']):
log("'nfs-common' package not found, installing the package...")
apt_install(['nfs-common'], fatal=True)
if not device:
log("NFS mount device can not be empty."
"Check 'nfs-shares' value in config")
return False
# Ensure mount directory exists
mkdir(data_dir, owner=usr, group=grp, perms=501, force=True)
# check for mountable device
if not mount(device, data_dir, options=nfs_options, filesystem='nfs'):
log("Unable to mount, please enter valid mount device")
return False
log("Device mounted successfully")
umount(data_dir)
log("Device unmounted successfully")
return True
def validate_s3():
"""
Validate S3 backup target
"""
s3_access_key = config('tv-s3-access-key')
s3_secret_key = config('tv-s3-secret-key')
s3_endpoint = config('tv-s3-endpoint-url')
s3_bucket = config('tv-s3-bucket')
s3_region = config('tv-s3-region-name')
if not s3_access_key or not s3_secret_key:
log("Empty values provided!")
return False
if not s3_endpoint:
s3_endpoint = ''
if not s3_region:
s3_region = ''
cmd = ['python', 'files/trilio/validate_s3.py',
'-a', s3_access_key,
'-s', s3_secret_key,
'-e', s3_endpoint,
'-b', s3_bucket,
'-r', s3_region]
if not call(cmd):
log("Valid S3 credentials")
return True
log("Invalid S3 credentials")
return False
def validate_backup():
"""
    Forward to the respective validator according to the backup target type.
"""
bkp_type = config('backup-target-type').lower()
if bkp_type not in VALID_BACKUP_TARGETS:
log("Not a valid backup target type")
return False
if bkp_type == 'nfs':
return validate_nfs()
elif bkp_type == 's3':
return validate_s3()
def add_users():
"""
    Add passwordless sudo for the nova user and add it to required groups.
"""
usr = config('tvault-datamover-ext-usr')
path = '/etc/sudoers.d/tvault-nova'
source = '/usr/lib'
destination = '/usr/lib64'
content = '{} ALL=(ALL) NOPASSWD: ALL'.format(usr)
try:
write_file(path, content, owner='root', group='root', perms=501)
# Adding nova user to system groups
add_user_to_group(usr, 'kvm')
add_user_to_group(usr, 'disk')
# create symlink /usr/lib64/
symlink(source, destination)
except Exception as e:
log("Failed while adding user with msg: {}".format(e))
return False
return True
def create_virt_env(pkg_name):
"""
    Check whether the latest Datamover version is already installed;
    if not, install the package and copy the sudoers/filters files.
"""
usr = config('tvault-datamover-ext-usr')
grp = config('tvault-datamover-ext-group')
path = config('tvault-datamover-virtenv')
dm_ver = None
# create virtenv dir(/home/tvault) if it does not exist
mkdir(path, owner=usr, group=grp, perms=501, force=True)
latest_dm_ver = get_new_version(pkg_name)
if dm_ver == latest_dm_ver:
log("Latest TrilioVault DataMover package is already installed,"
" exiting")
return True
# Install TrilioVault Datamover package
if not install_plugin(pkg_name):
return False
# change virtenv dir(/home/tvault) users to nova
chownr(path, usr, grp)
# Copy Trilio sudoers and filters files
os.system(
'cp files/trilio/trilio_sudoers /etc/sudoers.d/')
os.system(
'cp files/trilio/trilio.filters /etc/nova/rootwrap.d/')
return True
def ensure_files():
"""
Ensures all the required files or directories
are present before it starts the datamover service.
"""
usr = config('tvault-datamover-ext-usr')
grp = config('tvault-datamover-ext-group')
dm_bin = '/usr/bin/tvault-contego'
log_path = '/var/log/nova'
log_file = '{}/tvault-contego.log'.format(log_path)
conf_path = '/etc/tvault-contego'
    # Create the log directory if it doesn't exist
mkdir(log_path, owner=usr, group=grp, perms=501, force=True)
write_file(log_file, '', owner=usr, group=grp, perms=501)
if not check_presence(dm_bin):
log("TrilioVault Datamover binary is not present")
return False
    # Create the conf directory if it doesn't exist
mkdir(conf_path, owner=usr, group=grp, perms=501, force=True)
return True
def create_conf():
"""
Creates datamover config file.
"""
nfs_share = config('nfs-shares')
nfs_options = config('nfs-options')
tv_data_dir_old = config('tv-data-dir-old')
tv_data_dir = config('tv-data-dir')
bkp_type = config('backup-target-type')
tv_config = configparser.RawConfigParser()
if bkp_type == 'nfs':
tv_config.set('DEFAULT', 'vault_storage_nfs_export', nfs_share)
tv_config.set('DEFAULT', 'vault_storage_nfs_options', nfs_options)
elif bkp_type == 's3':
tv_config.set('DEFAULT', 'vault_storage_nfs_export', 'TrilioVault')
tv_config.set('DEFAULT', 'vault_s3_auth_version', 'DEFAULT')
tv_config.set('DEFAULT', 'vault_s3_access_key_id',
config('tv-s3-access-key'))
tv_config.set('DEFAULT', 'vault_s3_secret_access_key',
config('tv-s3-secret-key'))
tv_config.set('DEFAULT', 'vault_s3_region_name',
config('tv-s3-region-name') or '')
tv_config.set('DEFAULT', 'vault_s3_bucket', config('tv-s3-bucket'))
tv_config.set('DEFAULT', 'vault_s3_endpoint_url',
config('tv-s3-endpoint-url') or '')
tv_config.set('DEFAULT', 'vault_storage_type', bkp_type)
tv_config.set('DEFAULT', 'vault_data_directory_old', tv_data_dir_old)
tv_config.set('DEFAULT', 'vault_data_directory', tv_data_dir)
tv_config.set('DEFAULT', 'log_file', '/var/log/nova/tvault-contego.log')
tv_config.set('DEFAULT', 'debug', config('tv-datamover-debug'))
tv_config.set('DEFAULT', 'verbose', config('tv-datamover-verbose'))
tv_config.set('DEFAULT', 'max_uploads_pending',
config('tv-datamover-max-uploads-pending'))
tv_config.set('DEFAULT', 'max_commit_pending',
config('tv-datamover-max-commit-pending'))
tv_config.set('DEFAULT', 'qemu_agent_ping_timeout',
config('tv-datamover-qemu-agent-ping-timeout'))
tv_config.add_section('contego_sys_admin')
tv_config.set('contego_sys_admin', 'helper_command',
'sudo /usr/bin/privsep-helper')
tv_config.add_section('conductor')
tv_config.set('conductor', 'use_local', True)
with open(config('tv-datamover-conf'), 'w') as cf:
tv_config.write(cf)
return True
def ensure_data_dir():
"""
Ensures all the required directories are present
and have appropriate permissions.
"""
usr = config('tvault-datamover-ext-usr')
grp = config('tvault-datamover-ext-group')
data_dir = config('tv-data-dir')
data_dir_old = config('tv-data-dir-old')
# ensure that data_dir is present
mkdir(data_dir, owner=usr, group=grp, perms=501, force=True)
# remove data_dir_old
os.system('rm -rf {}'.format(data_dir_old))
# recreate the data_dir_old
mkdir(data_dir_old, owner=usr, group=grp, perms=501, force=True)
# create logrotate file for tvault-contego.log
src = 'files/trilio/tvault-contego'
dest = '/etc/logrotate.d/tvault-contego'
os.system('cp {} {}'.format(src, dest))
return True
def create_service_file():
"""
Creates datamover service file.
"""
usr = config('tvault-datamover-ext-usr')
grp = config('tvault-datamover-ext-group')
usr_nova_conf = config('nova-config')
if not os.path.isfile(usr_nova_conf):
log("Try providing the correct path of nova.conf in config param")
status_set(
'blocked',
            'Failed to find nova.conf file')
return False
config_files = '--config-file={} --config-file={}'.format(
usr_nova_conf, config('tv-datamover-conf'))
if check_presence('/etc/nova/nova.conf.d'):
config_files = '{} --config-dir=/etc/nova/nova.conf.d'.format(
config_files)
# create service file
exec_start = '/usr/bin/python{} /usr/bin/tvault-contego {}\
'.format(config('python-version'), config_files)
tv_config = configparser.RawConfigParser()
tv_config.optionxform = str
tv_config.add_section('Unit')
tv_config.add_section('Service')
tv_config.add_section('Install')
tv_config.set('Unit', 'Description', 'TrilioVault DataMover')
tv_config.set('Unit', 'After', 'openstack-nova-compute.service')
tv_config.set('Service', 'User', usr)
tv_config.set('Service', 'Group', grp)
tv_config.set('Service', 'Type', 'simple')
tv_config.set('Service', 'ExecStart', exec_start)
tv_config.set('Service', 'MemoryMax', '10G')
tv_config.set('Service', 'TimeoutStopSec', 20)
tv_config.set('Service', 'KillMode', 'process')
tv_config.set('Service', 'Restart', 'always')
tv_config.set('Install', 'WantedBy', 'multi-user.target')
with open('/etc/systemd/system/tvault-contego.service', 'w') as cf:
tv_config.write(cf)
return True
def create_object_storage_service():
"""
Creates object storage service file.
"""
usr = config('tvault-datamover-ext-usr')
grp = config('tvault-datamover-ext-group')
venv_path = config('tvault-datamover-virtenv-path')
# Get dependent libraries paths
try:
cmd = ['/usr/bin/python{}'.format(config('python-version')),
'files/trilio/get_pkgs.py']
contego_path = check_output(cmd).decode('utf-8').strip()
except Exception as e:
log("Failed to get the dependent packages--{}".format(e))
return False
storage_path = '{}/contego/nova/extension/driver/s3vaultfuse.py'\
.format(contego_path)
config_file = config('tv-datamover-conf')
# create service file
exec_start = '{}/bin/python {} --config-file={}'\
.format(venv_path, storage_path, config_file)
tv_config = configparser.RawConfigParser()
tv_config.optionxform = str
tv_config.add_section('Unit')
tv_config.add_section('Service')
tv_config.add_section('Install')
tv_config.set('Unit', 'Description', 'TrilioVault Object Store')
tv_config.set('Unit', 'After', 'tvault-contego.service')
tv_config.set('Service', 'User', usr)
tv_config.set('Service', 'Group', grp)
tv_config.set('Service', 'Type', 'simple')
tv_config.set('Service', 'LimitNOFILE', 500000)
tv_config.set('Service', 'LimitNPROC', 500000)
tv_config.set('Service', 'ExecStart', exec_start)
tv_config.set('Service', 'TimeoutStopSec', 20)
tv_config.set('Service', 'KillMode', 'process')
tv_config.set('Service', 'Restart', 'on-failure')
tv_config.set('Install', 'WantedBy', 'multi-user.target')
with open('/etc/systemd/system/tvault-object-store.service', 'w') as cf:
tv_config.write(cf)
return True
def install_plugin(pkg_name):
"""
Install TrilioVault DataMover package
"""
try:
apt_install([pkg_name], ['--no-install-recommends'], fatal=True)
log("TrilioVault DataMover package installation passed")
status_set('maintenance', 'Starting...')
return True
except Exception as e:
# Datamover package installation failed
log("TrilioVault Datamover package installation failed")
log("With exception --{}".format(e))
return False
def uninstall_plugin(pkg_name):
"""
Uninstall TrilioVault DataMover packages
"""
retry_count = 0
bkp_type = config('backup-target-type')
try:
service_stop('tvault-contego')
os.system('sudo systemctl disable tvault-contego')
os.system('rm -rf /etc/systemd/system/tvault-contego.service')
if bkp_type == 's3':
service_stop('tvault-object-store')
os.system('systemctl disable tvault-object-store')
os.system('rm -rf /etc/systemd/system/tvault-object-store.service')
os.system('sudo systemctl daemon-reload')
os.system('rm -rf /etc/logrotate.d/tvault-contego')
os.system('rm -rf {}'.format(config('tv-datamover-conf')))
os.system('rm -rf /var/log/nova/tvault-contego.log')
# Get the mount points and un-mount tvault's mount points.
mount_points = mounts()
sorted_list = [mp[0] for mp in mount_points
if config('tv-data-dir') in mp[0]]
# stopping the tvault-object-store service may take time
while service_running('tvault-object-store') and retry_count < 3:
log('Waiting for tvault-object-store service to stop')
retry_count += 1
time.sleep(5)
for sl in sorted_list:
umount(sl)
# Uninstall tvault-contego package
apt_purge([pkg_name, 'contego'])
log("TrilioVault Datamover package uninstalled successfully")
return True
except Exception as e:
# package uninstallation failed
log("TrilioVault Datamover package un-installation failed:"
" {}".format(e))
return False
@when_not('tvault-contego.installed')
def install_tvault_contego_plugin():
status_set('maintenance', 'Installing...')
# Read config parameters
bkp_type = config('backup-target-type')
if config('python-version') == 2:
pkg_name = 'tvault-contego'
else:
pkg_name = 'python3-tvault-contego'
# add triliovault package repo
os.system('sudo echo "{}" > '
'/etc/apt/sources.list.d/trilio-gemfury-sources.list'.format(
config('triliovault-pkg-source')))
apt_update()
    # Validate the backup target
if not validate_backup():
log("Failed while validating backup")
status_set(
'blocked',
'Invalid Backup target info, please provide valid info')
return
    # Backup target is valid; proceed with user setup
if not add_users():
log("Failed while adding Users")
status_set('blocked', 'Failed while adding Users')
return
pkg_loc = create_virt_env(pkg_name)
if not pkg_loc:
log("Failed while Creating Virtual Env")
status_set('blocked', 'Failed while Creating Virtual Env')
return
if not ensure_files():
log("Failed while ensuring files")
status_set('blocked', 'Failed while ensuring files')
return
if not create_conf():
log("Failed while creating conf files")
status_set('blocked', 'Failed while creating conf files')
return
if not ensure_data_dir():
log("Failed while ensuring datat directories")
status_set('blocked', 'Failed while ensuring datat directories')
return
if not create_service_file():
log("Failed while creating DataMover service file")
status_set('blocked', 'Failed while creating DataMover service file')
return
if bkp_type == 's3' and not create_object_storage_service():
log("Failed while creating Object Store service file")
status_set('blocked', 'Failed while creating ObjectStore service file')
return
os.system('sudo systemctl daemon-reload')
# Enable and start the object-store service
if bkp_type == 's3':
os.system('sudo systemctl enable tvault-object-store')
service_restart('tvault-object-store')
# Enable and start the datamover service
os.system('sudo systemctl enable tvault-contego')
service_restart('tvault-contego')
# Install was successful
status_set('active', 'Ready...')
# Add the flag "installed" since it's done
application_version_set(get_new_version(pkg_name))
set_flag('tvault-contego.installed')
@hook('stop')
def stop_handler():
# Set the user defined "stopping" state when this hook event occurs.
set_state('tvault-contego.stopping')
@when('tvault-contego.stopping')
def stop_tvault_contego_plugin():
status_set('maintenance', 'Stopping...')
if config('python-version') == 2:
pkg_name = 'tvault-contego'
else:
pkg_name = 'python3-tvault-contego'
    # Stop and uninstall the TrilioVault Datamover
uninst_ret = uninstall_plugin(pkg_name)
if uninst_ret:
# Uninstall was successful
# Remove the state "stopping" since it's done
remove_state('tvault-contego.stopping')
@hook('upgrade-charm')
def upgrade_charm():
# check if installed contego pkg is python 2 or 3
if os.system('dpkg -s python3-tvault-contego | grep Status') == 0:
pkg_name = 'python3-tvault-contego'
else:
pkg_name = 'tvault-contego'
    # Stop and uninstall the TrilioVault Datamover
uninst_ret = uninstall_plugin(pkg_name)
if uninst_ret:
# Uninstall was successful, clear flag to re-install
clear_flag('tvault-contego.installed')
@hook('config-changed')
def reconfig_charm():
bkp_type = config('backup-target-type')
    # Validate the backup target
if not validate_backup():
log("Failed while validating backup")
status_set(
'blocked',
'Invalid Backup target info, please provide valid info')
return
if not create_conf():
log("Failed while creating conf files")
status_set('blocked', 'Failed while creating conf files')
return
# Re-start the object-store service
if bkp_type == 's3':
service_restart('tvault-object-store')
# Re-start the datamover service
service_restart('tvault-contego')
# Reconfig successful
status_set('active', 'Ready...')
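For orientation, with an NFS backup target and the charm's default option values, create_conf() above would produce a /etc/tvault-contego/tvault-contego.conf roughly along these lines (a sketch assembled from the defaults, not captured output; the NFS export is a placeholder):
[DEFAULT]
vault_storage_nfs_export = 192.168.10.5:/srv/triliovault
vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none
vault_storage_type = nfs
vault_data_directory_old = /var/triliovault
vault_data_directory = /var/triliovault-mounts
log_file = /var/log/nova/tvault-contego.log
debug = False
verbose = True
max_uploads_pending = 3
max_commit_pending = 3
qemu_agent_ping_timeout = 600

[contego_sys_admin]
helper_command = sudo /usr/bin/privsep-helper

[conductor]
use_local = True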

@@ -0,0 +1,7 @@
# Unit test requirements
flake8>=2.2.4,<=2.4.1
os-testr>=0.4.1
charms.reactive
mock>=1.2
coverage>=3.6
git+https://github.com/openstack/charms.openstack#egg=charms.openstack

src/tox.ini

@@ -0,0 +1,21 @@
# tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
skipsdist = True
envlist = pep8
[testenv]
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
TERM=linux
INTERFACE_PATH={toxinidir}/interfaces
LAYER_PATH={toxinidir}/layers
JUJU_REPOSITORY={toxinidir}/build
[testenv:pep8]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = flake8 {posargs} reactive

src/wheelhouse.txt

@@ -0,0 +1,2 @@
boto3
botocore

test-requirements.txt

@@ -0,0 +1,7 @@
# Unit test requirements
flake8>=2.2.4,<=2.4.1
os-testr>=0.4.1
charms.reactive
mock>=1.2
coverage>=3.6
git+https://github.com/openstack/charms.openstack#egg=charms.openstack

tox.ini

@@ -0,0 +1,40 @@
# tox (https://tox.readthedocs.io/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
# test suite on all supported python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
[tox]
skipsdist = True
envlist = pep8, py27, py3
[testenv]
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
TERM=linux
INTERFACE_PATH={toxinidir}/interfaces
LAYER_PATH={toxinidir}/layers
JUJU_REPOSITORY={toxinidir}/build
install_command =
pip install {opts} {packages}
deps =
-r{toxinidir}/requirements.txt
[testenv:build]
basepython = python3
commands =
charm-build --log-level DEBUG -o {toxinidir}/build src {posargs}
[testenv:py27]
basepython = python2.7
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
[testenv:py3]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
[testenv:pep8]
basepython = python3
deps = -r{toxinidir}/test-requirements.txt
commands = flake8 {posargs} src

unit_tests/__init__.py

@@ -0,0 +1,4 @@
import sys
sys.path.append('src')
sys.path.append('src/reactive')

@@ -0,0 +1,189 @@
import mock
import unittest
import trilio_data_mover as datamover
_when_args = {}
_when_not_args = {}
def mock_hook_factory(d):
def mock_hook(*args, **kwargs):
def inner(f):
# remember what we were passed. Note that we can't actually
# determine the class we're attached to, as the decorator only gets
# the function.
try:
d[f.__name__].append(dict(args=args, kwargs=kwargs))
except KeyError:
d[f.__name__] = [dict(args=args, kwargs=kwargs)]
return f
return inner
return mock_hook
class Test(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls._patched_when = mock.patch('charms.reactive.when',
mock_hook_factory(_when_args))
cls._patched_when_started = cls._patched_when.start()
cls._patched_when_not = mock.patch('charms.reactive.when_not',
mock_hook_factory(_when_not_args))
cls._patched_when_not_started = cls._patched_when_not.start()
# force requires to rerun the mock_hook decorator:
# try except is Python2/Python3 compatibility as Python3 has moved
# reload to importlib.
try:
reload(datamover)
except NameError:
import importlib
importlib.reload(datamover)
@classmethod
def tearDownClass(cls):
cls._patched_when.stop()
cls._patched_when_started = None
cls._patched_when = None
cls._patched_when_not.stop()
cls._patched_when_not_started = None
cls._patched_when_not = None
# and fix any breakage we did to the module
try:
reload(datamover)
except NameError:
import importlib
importlib.reload(datamover)
def setUp(self):
self._patches = {}
self._patches_start = {}
def tearDown(self):
for k, v in self._patches.items():
v.stop()
setattr(self, k, None)
self._patches = None
self._patches_start = None
def patch(self, obj, attr, return_value=None, side_effect=None):
mocked = mock.patch.object(obj, attr)
self._patches[attr] = mocked
started = mocked.start()
started.return_value = return_value
started.side_effect = side_effect
self._patches_start[attr] = started
setattr(self, attr, started)
def test_registered_hooks(self):
        # test that the handlers registered the reactive states that are
        # meaningful for this charm: this is to handle regressions.
# The keys are the function names that the hook attaches to.
when_patterns = {
'stop_tvault_contego_plugin': ('tvault-contego.stopping', ),
}
when_not_patterns = {
'install_tvault_contego_plugin': (
'tvault-contego.installed', ), }
# check the when hooks are attached to the expected functions
for t, p in [(_when_args, when_patterns),
(_when_not_args, when_not_patterns)]:
for f, args in t.items():
# check that function is in patterns
self.assertTrue(f in p.keys(),
"{} not found".format(f))
# check that the lists are equal
lists = []
for a in args:
lists += a['args'][:]
self.assertEqual(sorted(lists), sorted(p[f]),
"{}: incorrect state registration".format(f))
def test_install_plugin(self):
self.patch(datamover, 'install_plugin')
datamover.install_plugin('pkg_name')
self.install_plugin.assert_called_once_with('pkg_name')
def test_uninstall_plugin(self):
self.patch(datamover, 'uninstall_plugin')
datamover.uninstall_plugin()
self.uninstall_plugin.assert_called_once_with()
def test_install_tvault_contego_plugin(self):
self.patch(datamover, 'install_tvault_contego_plugin')
datamover.install_tvault_contego_plugin()
self.install_tvault_contego_plugin.assert_called_once_with()
def test_stop_tvault_contego_plugin(self):
self.patch(datamover, 'config')
self.patch(datamover, 'status_set')
self.patch(datamover, 'remove_state')
self.patch(datamover, 'uninstall_plugin')
self.uninstall_plugin.return_value = True
datamover.stop_tvault_contego_plugin()
self.status_set.assert_called_with(
'maintenance', 'Stopping...')
self.remove_state.assert_called_with('tvault-contego.stopping')
def test_s3_object_storage_fail(self):
self.patch(datamover, 'config')
self.config.return_value = 's3'
self.patch(datamover, 'apt_update')
self.patch(datamover, 'status_set')
self.patch(datamover, 'validate_backup')
self.validate_backup.return_value = True
self.patch(datamover, 'add_users')
self.add_users.return_value = True
self.patch(datamover, 'create_virt_env')
self.create_virt_env.return_value = True
self.patch(datamover, 'ensure_files')
self.ensure_files.return_value = True
self.patch(datamover, 'create_conf')
self.create_conf.return_value = True
self.patch(datamover, 'ensure_data_dir')
self.ensure_data_dir.return_value = True
self.patch(datamover, 'create_service_file')
self.create_service_file.return_value = True
self.patch(datamover, 'create_object_storage_service')
self.create_object_storage_service.return_value = False
self.patch(datamover.os, 'system')
self.patch(datamover, 'log')
datamover.install_tvault_contego_plugin()
self.status_set.assert_called_with(
'blocked',
'Failed while creating ObjectStore service file')
def test_s3_object_storage_pass(self):
self.patch(datamover, 'config')
self.patch(datamover, 'apt_update')
self.patch(datamover, 'status_set')
self.patch(datamover, 'validate_backup')
self.validate_backup.return_value = True
self.patch(datamover, 'add_users')
self.add_users.return_value = True
self.patch(datamover, 'create_virt_env')
self.create_virt_env.return_value = True
self.patch(datamover, 'ensure_files')
self.ensure_files.return_value = True
self.patch(datamover, 'create_conf')
self.create_conf.return_value = True
self.patch(datamover, 'ensure_data_dir')
self.ensure_data_dir.return_value = True
self.patch(datamover, 'create_service_file')
self.create_service_file.return_value = True
self.patch(datamover, 'create_object_storage_service')
self.create_object_storage_service.return_value = True
self.patch(datamover, 'service_restart')
self.patch(datamover, 'set_flag')
self.patch(datamover, 'application_version_set')
self.patch(datamover, 'get_new_version')
self.patch(datamover.os, 'system')
datamover.install_tvault_contego_plugin()
self.service_restart.assert_called_with(
'tvault-contego')
self.status_set.assert_called_with(
'active', 'Ready...')
self.application_version_set.assert_called_once()
self.set_flag.assert_called_with(
'tvault-contego.installed')