Implementation of nova instance backups

Implements blueprint: nova-vm-backup

Change-Id: I37d93af8567afa2dcc4f96fd5bf4d4ef202b38e2
eldar nugaev 2015-05-25 17:34:31 +01:00 committed by Fausto Marzi
parent 5b2a0ebab3
commit 4a863ca7f3
20 changed files with 603 additions and 500 deletions


@ -217,6 +217,22 @@ Execute a mysql backup with cinder::
--backup-name mysql-ops002
--volume-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
Nova backups
To make a nova backup you should provide the instance-id parameter in the arguments.
Freezer doesn't perform any additional checks and assumes that a backup
of that instance will be sufficient to restore your data in the future.
Execute a nova backup::
$ freezerc --instance-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
Execute a mysql backup with nova::
$ freezerc --mysql-conf /root/.freezer/freezer-mysql.conf
--container freezer_mysql-backup-prod --mode mysql
--backup-name mysql-ops002
--instance-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
All the freezerc activities are logged into /var/log/freezer.log.
Restore
@ -284,6 +300,12 @@ existing content run next command:
Execute a cinder restore::
$ freezerc --action restore --volume-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
Nova restore currently creates a new instance with the content of the saved one,
but the IP address and the ID of the VM will be different.
Execute a nova restore::
$ freezerc --action restore --instance-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
Architecture
============
@ -441,6 +463,7 @@ Miscellanea
Available options::
$ freezerc
usage: freezerc [-h] [--config CONFIG] [--action {backup,restore,info,admin}]
                [-F PATH_TO_BACKUP] [-N BACKUP_NAME] [-m MODE] [-C CONTAINER]
                [-L] [-l] [-o GET_OBJECT] [-d DST_FILE]
@ -460,14 +483,15 @@ Available options::
                [--restore-from-date RESTORE_FROM_DATE] [--max-priority] [-V]
                [-q] [--insecure] [--os-auth-ver {1,2,3}] [--proxy PROXY]
                [--dry-run] [--upload-limit UPLOAD_LIMIT]
                [--volume-id VOLUME_ID] [--instance-id INSTANCE_ID]
                [--download-limit DOWNLOAD_LIMIT]
                [--sql-server-conf SQL_SERVER_CONF] [--volume VOLUME]
optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       Config file abs path. Option arguments are provided
                        from config file. When config file is used any option
                        provided from command line takes precedence.
  --action {backup,restore,info,admin}
                        Set the action to be taken. backup and restore are
                        self explanatory, info is used to retrieve info from
@ -558,6 +582,8 @@ Available options::
  --mysql-conf MYSQL_CONF
                        Set the MySQL configuration file where freezer
                        retrieve important information as db_name, user,
                        password, host, port. Following is an example of
                        config file: # cat ~/.freezer/backup_mysql_conf host =
                        <db-host> user = <mysqluser> password = <mysqlpass>
                        port = <db-port>
  --log-file LOG_FILE   Set log file. By default logs to
@ -611,6 +637,8 @@ Available options::
                        invoked with dimensions (10K, 120M, 10G).
  --volume-id VOLUME_ID
                        Id of cinder volume for backup
  --instance-id INSTANCE_ID
                        Id of nova instance for backup
  --download-limit DOWNLOAD_LIMIT
                        Download bandwidth limit in Bytes per sec. Can be
                        invoked with dimensions (10K, 120M, 10G).
@ -619,4 +647,3 @@ Available options::
                        retrieve the sql server instance. Following is an
                        example of config file: instance = <db-instance>
  --volume VOLUME       Create a snapshot of the selected volume


@ -356,6 +356,11 @@ def backup_arguments(args_dict={}):
        help='Id of cinder volume for backup',
        dest="volume_id",
        default='')
    arg_parser.add_argument(
        "--instance-id", action='store',
        help='Id of nova instance for backup',
        dest="instance_id",
        default='')
    arg_parser.add_argument(
        '--download-limit', action='store',
        help='''Download bandwidth limit in Bytes per sec.
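The new `--instance-id` option can be exercised in isolation with a minimal stand-alone parser. This is a sketch mirroring only the argument added above, not the full freezerc parser:

```python
import argparse

# Minimal parser reproducing just the new option from backup_arguments();
# the real freezerc parser defines many more arguments.
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument(
    "--instance-id", action='store',
    help='Id of nova instance for backup',
    dest="instance_id",
    default='')

args = arg_parser.parse_args(
    ["--instance-id", "3ad7a62f-217a-48cd-a861-43ec0a04a78b"])
print(args.instance_id)
```

With no `--instance-id` on the command line, `args.instance_id` falls back to the empty-string default, which is what lets `backup_mode_fs` treat it as a simple truthiness check.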


@ -20,15 +20,16 @@ Hudson (tjh@cryptsoft.com).
Freezer Backup modes related functions
"""
import multiprocessing
import logging
import os
from os.path import expanduser
import time
from freezer.lvm import lvm_snap, lvm_snap_remove, get_lvm_info
from freezer.osclients import ClientManager
from freezer.tar import tar_backup, gen_tar_command
from freezer.swift import add_object, manifest_upload
from freezer.utils import gen_manifest_meta, add_host_name_ts_level
from freezer.vss import vss_create_shadow_copy
from freezer.vss import vss_delete_shadow_copy
@ -36,10 +37,6 @@ from freezer.vss import start_sql_server
from freezer.vss import stop_sql_server
from freezer.winutils import use_shadow
from freezer.winutils import is_windows
from freezer.cinder import provide_snapshot, do_copy_volume, make_glance_image
from freezer.cinder import download_image, clean_snapshot
from freezer.glance import glance
from freezer.cinder import cinder
from freezer import swift

home = expanduser("~")
@ -147,7 +144,42 @@ def backup_mode_mongo(backup_opt_dict, time_stamp, manifest_meta_dict):
    return True
def backup_nova(backup_dict, time_stamp):
    """
    Implement nova backup
    :param backup_dict: backup configuration dictionary
    :param time_stamp: timestamp of backup
    :return:
    """
    instance_id = backup_dict.instance_id
    client_manager = backup_dict.client_manager
    nova = client_manager.get_nova()
    instance = nova.servers.get(instance_id)
    glance = client_manager.get_glance()

    if instance.__dict__['OS-EXT-STS:task_state']:
        time.sleep(5)
        instance = nova.servers.get(instance)

    image = instance.create_image("snapshot_of_%s" % instance_id)
    image = glance.images.get(image)
    while image.status != 'active':
        time.sleep(5)
        image = glance.images.get(image)

    stream = client_manager.download_image(image)
    package = "{0}/{1}".format(instance_id, time_stamp)
    logging.info("[*] Uploading image to swift")
    headers = {"x-object-meta-name": instance._info['name'],
               "x-object-meta-tenant_id": instance._info['tenant_id']}
    swift.add_stream(backup_dict.client_manager,
                     backup_dict.container_segments,
                     backup_dict.container, stream, package, headers)
    logging.info("[*] Deleting temporary image")
    glance.images.delete(image)
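`backup_nova` waits for the snapshot image to become `active` by polling glance every five seconds, with no upper bound. A generic wait helper with a timeout is one way to sketch that polling pattern; the `wait_until` helper and the fake `refresh` callable below are illustrative stand-ins, not part of freezer:

```python
import time

def wait_until(predicate, interval=5, timeout=300,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns True, or raise after `timeout` s."""
    deadline = clock() + timeout
    while not predicate():
        if clock() >= deadline:
            raise RuntimeError("timed out waiting for resource")
        sleep(interval)

# Fake image whose status becomes 'active' after two refreshes,
# standing in for glance.images.get(image).status:
statuses = iter(['queued', 'saving', 'active'])
current = {'status': 'queued'}

def refresh():
    current['status'] = next(statuses, current['status'])
    return current['status'] == 'active'

# interval=0 and a no-op sleep keep the sketch instantaneous.
wait_until(refresh, interval=0, sleep=lambda s: None)
```

Injecting `clock` and `sleep` keeps the helper testable without real delays; the production loop would use the defaults.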
def backup_cinder(backup_dict, time_stamp):
    """
    Implements cinder backup:
        1) Gets a stream of the image from glance
@ -155,33 +187,33 @@ def backup_cinder(backup_dict, time_stamp, create_clients=True):
    :param backup_dict: global dict with variables
    :param time_stamp: timestamp of snapshot
    """
    client_manager = backup_dict.client_manager
    cinder = client_manager.get_cinder()
    glance = client_manager.get_glance()

    volume_id = backup_dict.volume_id
    volume = cinder.volumes.get(volume_id)
    logging.info("[*] Creation temporary snapshot")
    snapshot = client_manager.provide_snapshot(
        volume, "backup_snapshot_for_volume_%s" % volume_id)
    logging.info("[*] Creation temporary volume")
    copied_volume = client_manager.do_copy_volume(snapshot)
    logging.info("[*] Creation temporary glance image")
    image = client_manager.make_glance_image("name", copied_volume)
    stream = client_manager.download_image(image)
    package = "{0}/{1}".format(volume_id, time_stamp)
    logging.info("[*] Uploading image to swift")
    headers = {}
    swift.add_stream(backup_dict.client_manager,
                     backup_dict.container_segments,
                     backup_dict.container, stream, package, headers=headers)
    logging.info("[*] Deleting temporary snapshot")
    client_manager.clean_snapshot(snapshot)
    logging.info("[*] Deleting temporary volume")
    cinder.volumes.delete(copied_volume)
    logging.info("[*] Deleting temporary image")
    glance.images.delete(image)
def backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict):
@ -194,7 +226,12 @@ def backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict):
    if backup_opt_dict.volume_id:
        logging.info('[*] Detected volume_id parameter')
        logging.info('[*] Executing cinder snapshot')
        backup_cinder(backup_opt_dict, time_stamp)
        return
    if backup_opt_dict.instance_id:
        logging.info('[*] Detected instance_id parameter')
        logging.info('[*] Executing nova snapshot')
        backup_nova(backup_opt_dict, time_stamp)
        return

    try:
@ -269,14 +306,21 @@ def backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict):
    if backup_opt_dict.upload:
        # Request a new auth client in case the current token
        # is expired before uploading tar meta data or the swift manifest
        backup_opt_dict.client_manager = ClientManager(
            backup_opt_dict.options,
            backup_opt_dict.insecure,
            backup_opt_dict.download_limit,
            backup_opt_dict.upload_limit,
            backup_opt_dict.os_auth_ver,
            backup_opt_dict.dry_run
        )

        if not backup_opt_dict.no_incremental:
            # Upload tar incremental meta data file and remove it
            logging.info('[*] Uploading tar meta data file: {0}'.format(
                tar_meta_to_upload))
            with open(meta_data_abs_path, 'r') as meta_fd:
                backup_opt_dict.client_manager.get_swift().put_object(
                    backup_opt_dict.container, tar_meta_to_upload, meta_fd)
            # Removing tar meta data file, so we have only one
            # authoritative version on swift


@ -51,7 +51,7 @@ def monkeypatch_bandwidth(download_bytes_per_sec, upload_bytes_per_sec):
    Monkey patch socket to ensure that all
    new sockets created are throttled.
    """
    if upload_bytes_per_sec > -1 or download_bytes_per_sec > -1:
        def make_throttled_socket(*args, **kwargs):
            return ThrottledSocket(
                download_bytes_per_sec,
@ -65,6 +65,4 @@ def monkeypatch_bandwidth(download_bytes_per_sec, upload_bytes_per_sec):
def monkeypatch_socket_bandwidth(backup_opt_dict):
    download_limit = backup_opt_dict.download_limit
    upload_limit = backup_opt_dict.upload_limit
    monkeypatch_bandwidth(download_limit, upload_limit)
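This hunk moves the "is throttling enabled?" guard from `monkeypatch_socket_bandwidth` into `monkeypatch_bandwidth` itself, so every caller gets the same semantics: `-1` on both limits means "leave the socket factory untouched". The guard in isolation (a stand-alone sketch, not freezer's code):

```python
def should_patch(download_limit, upload_limit):
    # Mirrors the condition the commit relocates into
    # monkeypatch_bandwidth: -1 is the "unlimited" sentinel,
    # and patching happens only when at least one limit is set.
    return upload_limit > -1 or download_limit > -1

print(should_patch(-1, -1))   # both unlimited: no patching
print(should_patch(100, -1))  # upload throttled: patch sockets
```

Centralising the check means a future caller (here, `ClientManager._monkey_patch`) cannot accidentally throttle everything by forgetting the sentinel test.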


@ -1,134 +0,0 @@
"""
Copyright 2014 Hewlett-Packard
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This product includes cryptographic software written by Eric Young
(eay@cryptsoft.com). This product includes software written by Tim
Hudson (tjh@cryptsoft.com).
========================================================================
Freezer functions to interact with OpenStack Swift client and server
"""
from cinderclient.v1 import client as ciclient
import time
from glance import ReSizeStream
import logging
from freezer.bandwidth import monkeypatch_socket_bandwidth
def cinder(backup_opt_dict):
    """
    Creates cinder client and attaches it to the dictionary
    :param backup_opt_dict: Dictionary with configuration
    :return: Dictionary with attached cinder client
    """
    options = backup_opt_dict.options
    monkeypatch_socket_bandwidth(backup_opt_dict)
    backup_opt_dict.cinder = ciclient.Client(
        username=options.user_name,
        api_key=options.password,
        project_id=options.tenant_name,
        auth_url=options.auth_url,
        region_name=options.region_name,
        insecure=backup_opt_dict.insecure,
        service_type="volume")
    return backup_opt_dict


def provide_snapshot(backup_dict, volume, snapshot_name):
    """
    Creates snapshot for cinder volume with --force parameter
    :param backup_dict: Dictionary with configuration
    :param volume: volume object for snapshotting
    :param snapshot_name: name of snapshot
    :return: snapshot object
    """
    volume_snapshots = backup_dict.cinder.volume_snapshots
    snapshot = volume_snapshots.create(volume_id=volume.id,
                                       display_name=snapshot_name,
                                       force=True)
    while snapshot.status != "available":
        try:
            logging.info("[*] Snapshot status: " + snapshot.status)
            snapshot = volume_snapshots.get(snapshot.id)
            if snapshot.status == "error":
                logging.error("snapshot has error state")
                exit(1)
            time.sleep(5)
        except Exception as e:
            logging.info(e)
    return snapshot


def do_copy_volume(backup_dict, snapshot):
    """
    Creates new volume from a snapshot
    :param backup_dict: Configuration dictionary
    :param snapshot: provided snapshot
    :return: created volume
    """
    volume = backup_dict.cinder.volumes.create(
        size=snapshot.size,
        snapshot_id=snapshot.id)
    while volume.status != "available":
        try:
            logging.info("[*] Volume copy status: " + volume.status)
            volume = backup_dict.cinder.volumes.get(volume.id)
            time.sleep(5)
        except Exception as e:
            logging.info(e)
            logging.info("[*] Exception getting volume status")
    return volume


def make_glance_image(backup_dict, image_volume_name, copy_volume):
    """
    Creates a glance image from volume
    :param backup_dict: Configuration dictionary
    :param image_volume_name: Name of image
    :param copy_volume: volume to make an image
    :return: Glance image object
    """
    volumes = backup_dict.cinder.volumes
    return volumes.upload_to_image(volume=copy_volume,
                                   force=True,
                                   image_name=image_volume_name,
                                   container_format="bare",
                                   disk_format="raw")


def clean_snapshot(backup_dict, snapshot):
    """
    Deletes snapshot
    :param backup_dict: Configuration dictionary
    :param snapshot: snapshot object
    """
    logging.info("[*] Deleting existing snapshot: " + snapshot.id)
    backup_dict.cinder.volume_snapshots.delete(snapshot)


def download_image(backup_dict, image):
    """
    Creates a stream for image data
    :param backup_dict: Configuration dictionary
    :param image: Image object for downloading
    :return: stream of image data
    """
    stream = backup_dict.glance.images.data(image)
    return ReSizeStream(stream, len(stream), 1000000)


@ -1,109 +0,0 @@
"""
Copyright 2014 Hewlett-Packard
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This product includes cryptographic software written by Eric Young
(eay@cryptsoft.com). This product includes software written by Tim
Hudson (tjh@cryptsoft.com).
========================================================================
Freezer functions to interact with OpenStack Swift client and server
"""
import logging
from glanceclient.v1 import client as glclient
from glanceclient.shell import OpenStackImagesShell
from freezer.bandwidth import monkeypatch_socket_bandwidth
class Bunch:
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

    def __getattr__(self, item):
        return self.__dict__.get(item)


def glance(backup_opt_dict):
    """
    Creates glance client and attaches it to the dictionary
    :param backup_opt_dict: Dictionary with configuration
    :return: Dictionary with attached glance client
    """
    options = backup_opt_dict.options
    monkeypatch_socket_bandwidth(backup_opt_dict)
    endpoint, token = OpenStackImagesShell()._get_endpoint_and_token(
        Bunch(os_username=options.user_name,
              os_password=options.password,
              os_tenant_name=options.tenant_name,
              os_auth_url=options.auth_url,
              os_region_name=options.region_name,
              force_auth=True))
    backup_opt_dict.glance = glclient.Client(endpoint=endpoint, token=token)
    return backup_opt_dict


class ReSizeStream:
    """
    Iterator/File-like object for changing size of chunk in stream
    """
    def __init__(self, stream, length, chunk_size):
        self.stream = stream
        self.length = length
        self.chunk_size = chunk_size
        self.reminder = ""
        self.transmitted = 0

    def __len__(self):
        return self.length

    def __iter__(self):
        return self

    def next(self):
        logging.info("Transmitted (%s) of (%s)" % (self.transmitted,
                                                   self.length))
        chunk_size = self.chunk_size
        if len(self.reminder) > chunk_size:
            result = self.reminder[:chunk_size]
            self.reminder = self.reminder[chunk_size:]
            self.transmitted += len(result)
            return result
        else:
            stop = False
            while not stop and len(self.reminder) < chunk_size:
                try:
                    self.reminder += next(self.stream)
                except StopIteration:
                    stop = True
            if stop:
                result = self.reminder
                if len(self.reminder) == 0:
                    raise StopIteration()
                self.reminder = []
                self.transmitted += len(result)
                return result
            else:
                result = self.reminder[:chunk_size]
                self.reminder = self.reminder[chunk_size:]
                self.transmitted += len(result)
                return result

    def read(self, chunk_size):
        self.chunk_size = chunk_size
        return self.next()
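`ReSizeStream`'s core job is re-chunking an iterator of arbitrarily sized byte strings into fixed-size pieces. That behaviour can be sketched with a small generator; `rechunk` below is a simplified stand-in for illustration, not the class being moved:

```python
def rechunk(stream, chunk_size):
    """Yield fixed-size chunks from an iterator of arbitrarily sized
    string chunks; the final chunk may be shorter."""
    buf = ""
    for piece in stream:
        buf += piece
        # Emit as many full chunks as the buffer now holds.
        while len(buf) >= chunk_size:
            yield buf[:chunk_size]
            buf = buf[chunk_size:]
    if buf:  # trailing partial chunk
        yield buf

# 10 characters arriving as 5+2+3 are regrouped into 4+4+2:
chunks = list(rechunk(iter(["abcde", "fg", "hij"]), 4))
print(chunks)
```

Unlike this sketch, the real class also tracks `transmitted` bytes for progress logging and exposes a file-like `read()` so glance can consume it as an upload body.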


@ -23,8 +23,10 @@ from freezer import swift
from freezer import utils
from freezer import backup
from freezer import restore
from freezer.osclients import ClientManager
import logging
from freezer.restore import RestoreOs
class Job:
@ -42,7 +44,15 @@ class Job:
        logging.info('[*] Job execution Started at: {0}'.
                     format(self.start_time))

        if not hasattr(self.conf, 'client_manager'):
            self.conf.client_manager = ClientManager(
                self.conf.options,
                self.conf.insecure,
                self.conf.download_limit,
                self.conf.upload_limit,
                self.conf.os_auth_ver,
                self.conf.dry_run
            )
        self.conf = swift.get_containers_list(self.conf)
        retval = func(self)
@ -136,8 +146,14 @@ class RestoreJob(Job):
        # Get the object list of the remote containers and store it in the
        # same dict passed as argument under the dict.remote_obj_list namespace
        self.conf = swift.get_container_content(self.conf)

        if self.conf.volume_id or self.conf.instance_id:
            res = RestoreOs(self.conf.client_manager, self.conf.container)
            if self.conf.volume_id:
                res.restore_cinder(self.conf.restore_from_date,
                                   self.conf.volume_id)
            else:
                res.restore_nova(self.conf.restore_from_date,
                                 self.conf.instance_id)
        else:
            restore.restore_fs(self.conf)

freezer/osclients.py (new file)

@ -0,0 +1,232 @@
from bandwidth import monkeypatch_bandwidth
from utils import Bunch
import logging
import time
from utils import ReSizeStream


class ClientManager:
    def __init__(self, options, insecure,
                 download_bytes_per_sec, upload_bytes_per_sec,
                 swift_auth_version, dry_run):
        """
        Creates manager of connections to swift, nova, glance and cinder
        :param options: OpenstackOptions
        :param insecure:
        :param download_bytes_per_sec: information about bandwidth throttling
        :param upload_bytes_per_sec: information about bandwidth throttling
        :param swift_auth_version:
        :param dry_run:
        :return:
        """
        self.options = options
        self.download_bytes_per_sec = download_bytes_per_sec
        self.upload_bytes_per_sec = upload_bytes_per_sec
        self.insecure = insecure
        self.swift_auth_version = swift_auth_version
        self.dry_run = dry_run
        self.cinder = None
        self.swift = None
        self.glance = None
        self.nova = None

    def _monkey_patch(self):
        monkeypatch_bandwidth(self.download_bytes_per_sec,
                              self.upload_bytes_per_sec)

    def get_cinder(self):
        if not self.cinder:
            self.create_cinder()
        return self.cinder

    def get_swift(self):
        if not self.swift:
            self.create_swift()
        return self.swift

    def get_glance(self):
        if not self.glance:
            self.create_glance()
        return self.glance

    def get_nova(self):
        if not self.nova:
            self.create_nova()
        return self.nova

    def create_cinder(self):
        """
        Creates client for cinder and caches it
        :return:
        """
        from cinderclient.v1 import client
        self._monkey_patch()
        options = self.options
        logging.info("[*] Creation of cinder client")
        self.cinder = client.Client(
            username=options.user_name,
            api_key=options.password,
            project_id=options.tenant_name,
            auth_url=options.auth_url,
            region_name=options.region_name,
            insecure=self.insecure,
            service_type="volume")
        return self.cinder

    def create_swift(self):
        """
        Creates client for swift and caches it
        :return:
        """
        import swiftclient
        self._monkey_patch()
        options = self.options
        logging.info("[*] Creation of swift client")
        self.swift = swiftclient.client.Connection(
            authurl=options.auth_url,
            user=options.user_name, key=options.password,
            tenant_name=options.tenant_name,
            os_options=options.os_options,
            auth_version=self.swift_auth_version,
            insecure=self.insecure, retries=6)
        if self.dry_run:
            self.swift = DryRunSwiftclientConnectionWrapper(self.swift)
        return self.swift

    def create_glance(self):
        """
        Creates client for glance and caches it
        :return:
        """
        from glanceclient.v1 import client
        from glanceclient.shell import OpenStackImagesShell
        self._monkey_patch()
        options = self.options
        logging.info("[*] Creation of glance client")
        endpoint, token = OpenStackImagesShell()._get_endpoint_and_token(
            Bunch(os_username=options.user_name,
                  os_password=options.password,
                  os_tenant_name=options.tenant_name,
                  os_auth_url=options.auth_url,
                  os_region_name=options.region_name,
                  force_auth=False))
        self.glance = client.Client(endpoint=endpoint, token=token)
        return self.glance

    def create_nova(self):
        """
        Creates client for nova and caches it
        :return:
        """
        from novaclient.v2 import client
        self._monkey_patch()
        options = self.options
        logging.info("[*] Creation of nova client")
        self.nova = client.Client(
            username=options.user_name,
            api_key=options.password,
            project_id=options.tenant_name,
            auth_url=options.auth_url,
            region_name=options.region_name,
            insecure=self.insecure)
        return self.nova

    def provide_snapshot(self, volume, snapshot_name):
        """
        Creates snapshot for cinder volume with --force parameter
        :param volume: volume object for snapshotting
        :param snapshot_name: name of snapshot
        :return: snapshot object
        """
        snapshot = self.get_cinder().volume_snapshots.create(
            volume_id=volume.id,
            display_name=snapshot_name,
            force=True)
        while snapshot.status != "available":
            try:
                logging.info("[*] Snapshot status: " + snapshot.status)
                snapshot = self.get_cinder().volume_snapshots.get(snapshot.id)
                if snapshot.status == "error":
                    logging.error("snapshot has error state")
                    exit(1)
                time.sleep(5)
            except Exception as e:
                logging.info(e)
        return snapshot

    def do_copy_volume(self, snapshot):
        """
        Creates new volume from a snapshot
        :param snapshot: provided snapshot
        :return: created volume
        """
        volume = self.get_cinder().volumes.create(
            size=snapshot.size,
            snapshot_id=snapshot.id)
        while volume.status != "available":
            try:
                logging.info("[*] Volume copy status: " + volume.status)
                volume = self.get_cinder().volumes.get(volume.id)
                time.sleep(5)
            except Exception as e:
                logging.info(e)
                logging.info("[*] Exception getting volume status")
        return volume

    def make_glance_image(self, image_volume_name, copy_volume):
        """
        Creates a glance image from volume
        :param image_volume_name: Name of image
        :param copy_volume: volume to make an image
        :return: Glance image object
        """
        return self.get_cinder().volumes.upload_to_image(
            volume=copy_volume,
            force=True,
            image_name=image_volume_name,
            container_format="bare",
            disk_format="raw")

    def clean_snapshot(self, snapshot):
        """
        Deletes snapshot
        :param snapshot: snapshot object
        """
        logging.info("[*] Deleting existing snapshot: " + snapshot.id)
        self.get_cinder().volume_snapshots.delete(snapshot)

    def download_image(self, image):
        """
        Creates a stream for image data
        :param image: Image object for downloading
        :return: stream of image data
        """
        stream = self.get_glance().images.data(image)
        return ReSizeStream(stream, len(stream), 1000000)


class DryRunSwiftclientConnectionWrapper:
    def __init__(self, sw_connector):
        self.sw_connector = sw_connector
        self.get_object = sw_connector.get_object
        self.get_account = sw_connector.get_account
        self.get_container = sw_connector.get_container
        self.head_object = sw_connector.head_object
        self.put_object = self.dummy
        self.put_container = self.dummy
        self.delete_object = self.dummy

    def dummy(self, *args, **kwargs):
        pass
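The `get_*`/`create_*` pairs in `ClientManager` implement simple memoisation: the first `get_nova()` authenticates and builds a client, later calls reuse the cached one. The pattern in isolation, with a toy factory standing in for the real OpenStack client constructors:

```python
class LazyClients:
    """Toy version of ClientManager's caching: build on first use only."""
    def __init__(self, factory):
        self.factory = factory
        self.nova = None

    def get_nova(self):
        if not self.nova:
            self.nova = self.factory()  # expensive auth happens once
        return self.nova

calls = []
manager = LazyClients(lambda: calls.append("auth") or object())

a = manager.get_nova()
b = manager.get_nova()
print(a is b, len(calls))
```

This is why `RestoreJob` can hand `self.conf.client_manager` around freely: grabbing the swift, glance, and nova clients repeatedly costs one authentication each at most.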


@ -29,11 +29,8 @@ import datetime
from freezer.tar import tar_restore
from freezer.swift import object_to_stream
from freezer.utils import (validate_all_args, get_match_backup,
                           sort_backup_list, date_to_timestamp, ReSizeStream)
def restore_fs(backup_opt_dict):
@ -160,50 +157,68 @@ def restore_fs_sort_obj(backup_opt_dict):
        backup_opt_dict.restore_abs_path))
def restore_cinder(backup_opt_dict, create_clients=True): class RestoreOs:
def __init__(self, client_manager, container):
self.client_manager = client_manager
self.container = container
def _get_backups(self, path, restore_from_date):
timestamp = date_to_timestamp(restore_from_date)
swift = self.client_manager.get_swift()
info, backups = swift.get_container(self.container, path=path)
backups = sorted(map(lambda x: int(x["name"].rsplit("/", 1)[-1]),
backups))
backups = filter(lambda x: x >= timestamp, backups)
if not backups:
msg = "Cannot find backups for path: %s" % path
logging.error(msg)
raise BaseException(msg)
return backups[-1]
def _create_image(self, path, restore_from_date):
swift = self.client_manager.get_swift()
glance = self.client_manager.get_glance()
backup = self._get_backups(path, restore_from_date)
stream = swift.get_object(
self.container, "%s/%s" % (path, backup), resp_chunk_size=10000000)
length = int(stream[0]["x-object-meta-length"])
logging.info("[*] Creation glance image")
image = glance.images.create(
data=ReSizeStream(stream[1], length, 1),
container_format="bare",
disk_format="raw")
return stream[0], image
    def restore_cinder(self, restore_from_date, volume_id):
        """
        1) Define swift directory
        2) Download and upload to glance
        3) Create volume from glance
        4) Delete
-       :param backup_opt_dict: global dictionary with params
-       :param create_clients: if set to True -
-           recreates cinder and glance clients,
-           False - uses existing from backup_opt_dict
+       :param restore_from_date - date in format '%Y-%m-%dT%H:%M:%S'
+       :param volume_id - id of attached cinder volume
        """
-       timestamp = date_to_timestamp(backup_opt_dict.restore_from_date)
-       if create_clients:
-           backup_opt_dict = cinder(backup_opt_dict)
-           backup_opt_dict = glance(backup_opt_dict)
-       volume_id = backup_opt_dict.volume_id
-       container = backup_opt_dict.container
-       connector = backup_opt_dict.sw_connector
-       info, backups = connector.get_container(container, path=volume_id)
-       backups = sorted(map(lambda x: int(x["name"].rsplit("/", 1)[-1]), backups))
-       backups = filter(lambda x: x >= timestamp, backups)
-       if not backups:
-           msg = "Cannot find backups for volume: %s" % volume_id
-           logging.error(msg)
-           raise BaseException(msg)
-       backup = backups[-1]
-       stream = connector.get_object(
-           backup_opt_dict.container, "%s/%s" % (volume_id, backup),
-           resp_chunk_size=10000000)
-       length = int(stream[0]["x-object-meta-length"])
-       stream = stream[1]
-       images = backup_opt_dict.glance.images
-       logging.info("[*] Creation glance image")
-       image = images.create(data=ReSizeStream(stream, length, 1),
-                             container_format="bare",
-                             disk_format="raw")
+       (info, image) = self._create_image(volume_id, restore_from_date)
+       length = int(info["x-object-meta-length"])
        gb = 1073741824
        size = length / gb
        if length % gb > 0:
            size += 1
        logging.info("[*] Creation volume from image")
-       backup_opt_dict.cinder.volumes.create(size, imageRef=image.id)
+       self.client_manager.get_cinder().volumes.create(size,
+                                                       imageRef=image.id)
        logging.info("[*] Deleting temporary image")
-       images.delete(image)
+       self.client_manager.get_glance().images.delete(image)
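restore_cinder rounds the image length up to whole gigabytes before creating the volume, since Cinder sizes volumes in GB. The same ceiling division as a standalone Python 3 sketch (the function name is illustrative, not part of the commit):

```python
def bytes_to_gb(length):
    # Cinder volumes are sized in whole gigabytes, so round the image
    # length in bytes up to the next gibibyte boundary.
    gb = 1073741824
    size = length // gb
    if length % gb > 0:
        size += 1
    return size

print(bytes_to_gb(1))            # 1
print(bytes_to_gb(1073741824))   # 1
print(bytes_to_gb(1073741825))   # 2
```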
    def restore_nova(self, restore_from_date, instance_id):
        """
        :param restore_from_date: date in format '%Y-%m-%dT%H:%M:%S'
        :param instance_id: id of attached nova instance
        :return:
        """
        (info, image) = self._create_image(instance_id, restore_from_date)
        nova = self.client_manager.get_nova()
        flavor = nova.flavors.get(info['x-object-meta-tenant-id'])
        logging.info("[*] Creation an instance")
        nova.servers.create(info['x-object-meta-name'], image, flavor)


@@ -24,9 +24,7 @@ Freezer functions to interact with OpenStack Swift client and server
 from freezer.utils import (
     validate_all_args, get_match_backup,
     sort_backup_list, DateTime)
-from freezer.bandwidth import monkeypatch_socket_bandwidth
 import os
-import swiftclient
 import json
 import re
 from copy import deepcopy
@@ -49,13 +47,14 @@ def create_containers(backup_opt):
     # Create backup container
     logging.warning(
         "[*] Creating container {0}".format(backup_opt.container))
-    backup_opt.sw_connector.put_container(backup_opt.container)
+    sw_connector = backup_opt.client_manager.get_swift()
+    sw_connector.put_container(backup_opt.container)

     # Create segments container
     logging.warning(
         "[*] Creating container segments: {0}".format(
             backup_opt.container_segments))
-    backup_opt.sw_connector.put_container(backup_opt.container_segments)
+    sw_connector.put_container(backup_opt.container_segments)

     return True
@@ -199,16 +198,17 @@ def remove_obj_older_than(backup_opt_dict):
             (obj_name_match.group(3) != '0')
     else:
+        sw_connector = backup_opt_dict.client_manager.get_swift()
         if match_object.startswith('tar_meta'):
             if not tar_meta_incremental_dep_flag:
-                remove_object(backup_opt_dict.sw_connector,
+                remove_object(sw_connector,
                               backup_opt_dict.container, match_object)
             else:
                 if obj_name_match.group(3) == '0':
                     tar_meta_incremental_dep_flag = False
         else:
             if not incremental_dep_flag:
-                remove_object(backup_opt_dict.sw_connector,
+                remove_object(sw_connector,
                               backup_opt_dict.container, match_object)
             else:
                 if obj_name_match.group(3) == '0':
@@ -224,7 +224,7 @@ def get_container_content(backup_opt_dict):
     if not backup_opt_dict.container:
         raise Exception('please provide a valid container name')

-    sw_connector = backup_opt_dict.sw_connector
+    sw_connector = backup_opt_dict.client_manager.get_swift()
     try:
         backup_opt_dict.remote_obj_list = \
             sw_connector.get_container(backup_opt_dict.container)[1]
@@ -249,7 +249,7 @@ def check_container_existance(backup_opt_dict):
     logging.info(
         "[*] Retrieving container {0}".format(backup_opt_dict.container))

-    sw_connector = backup_opt_dict.sw_connector
+    sw_connector = backup_opt_dict.client_manager.get_swift()
     containers_list = sw_connector.get_account()[1]
     match_container = [
@@ -282,46 +282,6 @@ def check_container_existance(backup_opt_dict):
     return containers

-class DryRunSwiftclientConnectionWrapper:
-    def __init__(self, sw_connector):
-        self.sw_connector = sw_connector
-        self.get_object = sw_connector.get_object
-        self.get_account = sw_connector.get_account
-        self.get_container = sw_connector.get_container
-        self.head_object = sw_connector.head_object
-        self.put_object = self.dummy
-        self.put_container = self.dummy
-        self.delete_object = self.dummy
-
-    def dummy(self, *args, **kwargs):
-        pass
-
-def get_client(backup_opt_dict):
-    """
-    Initialize a swift client object and return it in
-    backup_opt_dict
-    """
-    options = backup_opt_dict.options
-
-    monkeypatch_socket_bandwidth(backup_opt_dict)
-
-    backup_opt_dict.sw_connector = swiftclient.client.Connection(
-        authurl=options.auth_url,
-        user=options.user_name, key=options.password,
-        tenant_name=options.tenant_name,
-        os_options=options.os_options,
-        auth_version=backup_opt_dict.os_auth_ver,
-        insecure=backup_opt_dict.insecure, retries=6)
-
-    if backup_opt_dict.dry_run:
-        backup_opt_dict.sw_connector = \
-            DryRunSwiftclientConnectionWrapper(backup_opt_dict.sw_connector)
-    return backup_opt_dict
 def manifest_upload(
         manifest_file, backup_opt_dict, file_prefix, manifest_meta_dict):
     """
@@ -331,7 +291,7 @@ def manifest_upload(
     if not manifest_meta_dict:
         raise Exception('Manifest Meta dictionary not available')

-    sw_connector = backup_opt_dict.sw_connector
+    sw_connector = backup_opt_dict.client_manager.get_swift()
     tmp_manifest_meta = dict()
     for key, value in manifest_meta_dict.items():
         if key.startswith('x-object-meta'):
@@ -346,39 +306,36 @@ def manifest_upload(
     logging.info('[*] Manifest successfully uploaded!')

-def add_stream(backup_opt_dict, stream, package_name):
-    max_len = len(str(len(stream))) or 10
-
-    def format_chunk(number):
-        str_repr = str(number)
-        return "0" * (max_len - len(str_repr)) + str_repr
+def add_stream(client_manager, container_segments, container, stream,
+               package_name, headers=None):
     i = 0
     for el in stream:
-        add_chunk(backup_opt_dict,
-                  "{0}/{1}".format(package_name, format_chunk(i)), el)
+        add_chunk(client_manager, container_segments,
+                  "{0}/{1}".format(package_name, "%08d" % i), el)
         i += 1
-    headers = {'X-Object-Manifest': u'{0}/{1}/'.format(
-        backup_opt_dict.container_segments, package_name),
-        'x-object-meta-length': len(stream)}
-    backup_opt_dict.sw_connector.put_object(
-        backup_opt_dict.container, package_name, "", headers=headers)
+    if not headers:
+        headers = {}
+    headers['X-Object-Manifest'] = u'{0}/{1}/'.format(
+        container_segments, package_name)
+    headers['x-object-meta-length'] = len(stream)
+    swift = client_manager.get_swift()
+    swift.put_object(container, package_name, "", headers=headers)
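The rewritten add_stream follows Swift's Dynamic Large Object convention: each chunk is uploaded under `<package_name>/<zero-padded index>` in the segments container, then a zero-byte manifest object is written whose `X-Object-Manifest` header names that prefix. A sketch of how the names and headers line up (container and package names here are made up for illustration):

```python
container_segments = "freezer_segments"
package_name = "instance_1234/1417649003"

# Chunk object names use the "%08d" zero-padding from add_stream.
chunk_names = ["{0}/{1}".format(package_name, "%08d" % i) for i in range(2)]

# The manifest object body is empty; Swift concatenates every object
# under the prefix named by X-Object-Manifest when it is downloaded.
headers = {
    "X-Object-Manifest": "{0}/{1}/".format(container_segments, package_name),
    "x-object-meta-length": 2,
}
print(chunk_names)
print(headers["X-Object-Manifest"])
```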
-def add_chunk(backup_opt_dict, package_name, content):
+def add_chunk(client_manager, container_segments, package_name, content):
     # If for some reason the swift client object is not available anymore
     # an exception is generated and a new client object is initialized/
     # If the exception happens for 10 consecutive times for a total of
     # 1 hour, then the program will exit with an Exception.
-    sw_connector = backup_opt_dict.sw_connector
     count = 0
     while True:
         try:
             logging.info(
                 '[*] Uploading file chunk index: {0}'.format(
                     package_name))
-            sw_connector.put_object(
-                backup_opt_dict.container_segments,
+            client_manager.get_swift().put_object(
+                container_segments,
                 package_name, content,
                 content_type='application/octet-stream',
                 content_length=len(content))
@@ -389,7 +346,7 @@ def add_chunk(backup_opt_dict, package_name, content):
             logging.info('[*] Retrying to upload file chunk index: {0}'.format(
                 package_name))
             time.sleep(60)
-            backup_opt_dict = get_client(backup_opt_dict)
+            client_manager.create_swift()
             count += 1
             if count == 10:
                 logging.critical('[*] Error: add_object: {0}'
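The retry policy in add_chunk reduces to a reusable pattern: attempt the upload, and on failure sleep, recreate the client, and retry, giving up after ten consecutive errors. A stripped-down sketch with the upload and client recreation stubbed out (all names here are illustrative, not freezer's API):

```python
def upload_with_retry(do_upload, recreate_client, retries=10,
                      sleep=lambda s: None):
    # Retry the upload, recreating the client between attempts; give up
    # with an exception after `retries` consecutive failures.
    count = 0
    while True:
        try:
            return do_upload()
        except Exception as err:
            sleep(60)
            recreate_client()
            count += 1
            if count == retries:
                raise Exception("add_object: {0}".format(err))

attempts = []

def flaky_upload():
    # Fails twice, then succeeds, exercising the retry loop.
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("connection reset")
    return "uploaded"

print(upload_with_retry(flaky_upload, recreate_client=lambda: None))
```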
@@ -424,7 +381,9 @@ def add_object(
         package_name = u'{0}/{1}/{2}/{3}'.format(
             package_name, time_stamp,
             backup_opt_dict.max_segment_size, file_chunk_index)
-        add_chunk(backup_opt_dict, package_name, file_chunk)
+        add_chunk(backup_opt_dict.client_manager,
+                  backup_opt_dict.container_segment,
+                  package_name, file_chunk)
 def get_containers_list(backup_opt_dict):
@@ -433,7 +392,7 @@ def get_containers_list(backup_opt_dict):
     """
     try:
-        sw_connector = backup_opt_dict.sw_connector
+        sw_connector = backup_opt_dict.client_manager.get_swift()
         backup_opt_dict.containers_list = sw_connector.get_account()[1]
         return backup_opt_dict
     except Exception as error:
@@ -454,7 +413,7 @@ def object_to_file(backup_opt_dict, file_name_abs_path):
         raise ValueError('Error in object_to_file(): Please provide ALL the '
                          'following arguments: --container file_name_abs_path')

-    sw_connector = backup_opt_dict.sw_connector
+    sw_connector = backup_opt_dict.client_manager.get_swift()
     file_name = file_name_abs_path.split('/')[-1]
     logging.info('[*] Downloading object {0} on {1}'.format(
         file_name, file_name_abs_path))
@@ -487,7 +446,7 @@ def object_to_stream(backup_opt_dict, write_pipe, read_pipe, obj_name):
         raise ValueError('Error in object_to_stream(): Please provide '
                          'ALL the following argument: --container')

-    backup_opt_dict = get_client(backup_opt_dict)
+    sw_connector = backup_opt_dict.client_manager.get_swift()
     logging.info('[*] Downloading data stream...')

     # Close the read pipe in this child as it is unneeded
@@ -495,7 +454,7 @@ def object_to_stream(backup_opt_dict, write_pipe, read_pipe, obj_name):
     # Chunk size is set by RESP_CHUNK_SIZE and sent to che write
     # pipe
     read_pipe.close()
-    for obj_chunk in backup_opt_dict.sw_connector.get_object(
+    for obj_chunk in sw_connector.get_object(
             backup_opt_dict.container, obj_name,
             resp_chunk_size=RESP_CHUNK_SIZE)[1]:
         write_pipe.send_bytes(obj_chunk)


@@ -29,7 +29,16 @@ import re
 import subprocess

-class OpenstackOptions(object):
+class OpenstackOptions:
+    def __init__(self, user_name, tenant_name, auth_url, password,
+                 tenant_id=None, region_name=None):
+        self.user_name = user_name
+        self.tenant_name = tenant_name
+        self.auth_url = auth_url
+        self.password = password
+        self.tenant_id = tenant_id
+        self.region_name = region_name

     @property
     def os_options(self):
@@ -44,18 +53,18 @@ class OpenstackOptions(object):

     @staticmethod
     def create_from_dict(src_dict):
-        options = OpenstackOptions()
         try:
-            options.user_name = src_dict['OS_USERNAME']
-            options.tenant_name = src_dict['OS_TENANT_NAME']
-            options.auth_url = src_dict['OS_AUTH_URL']
-            options.password = src_dict['OS_PASSWORD']
-            options.tenant_id = src_dict.get('OS_TENANT_ID', None)
-            options.region_name = src_dict.get('OS_REGION_NAME', None)
+            return OpenstackOptions(
+                user_name=src_dict['OS_USERNAME'],
+                tenant_name=src_dict['OS_TENANT_NAME'],
+                auth_url=src_dict['OS_AUTH_URL'],
+                password=src_dict['OS_PASSWORD'],
+                tenant_id=src_dict.get('OS_TENANT_ID', None),
+                region_name=src_dict.get('OS_REGION_NAME', None)
+            )
         except Exception as e:
             raise Exception('Missing Openstack connection parameter: {0}'
                             .format(e))
-        return options
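create_from_dict maps the standard OS_* keys onto the new constructor and converts a missing required key into a connection-parameter error. The behaviour can be sketched standalone as follows (this dict-based helper is an illustrative replica, not the freezer class itself):

```python
def options_from_env(src_dict):
    # OS_USERNAME, OS_TENANT_NAME, OS_AUTH_URL and OS_PASSWORD are
    # required; tenant id and region fall back to None.
    try:
        return {
            "user_name": src_dict["OS_USERNAME"],
            "tenant_name": src_dict["OS_TENANT_NAME"],
            "auth_url": src_dict["OS_AUTH_URL"],
            "password": src_dict["OS_PASSWORD"],
            "tenant_id": src_dict.get("OS_TENANT_ID"),
            "region_name": src_dict.get("OS_REGION_NAME"),
        }
    except KeyError as e:
        raise Exception("Missing Openstack connection parameter: {0}".format(e))

env = {"OS_USERNAME": "user", "OS_TENANT_NAME": "tenant",
       "OS_AUTH_URL": "http://keystone:5000/v2.0", "OS_PASSWORD": "secret"}
opts = options_from_env(env)
print(opts["region_name"])  # None when OS_REGION_NAME is unset
```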
 def gen_manifest_meta(
@@ -529,10 +538,10 @@ def check_backup_and_tar_meta_existence(backup_opt_dict):
     backup_opt_dict = get_newest_backup(backup_opt_dict)
     if backup_opt_dict.remote_newest_backup:
-        sw_connector = backup_opt_dict.sw_connector
+        swift = backup_opt_dict.client_manager.get_swift()
         logging.info("[*] Backup {0} found!".format(
             backup_opt_dict.backup_name))
-        backup_match = sw_connector.head_object(
+        backup_match = swift.head_object(
             backup_opt_dict.container, backup_opt_dict.remote_newest_backup)

         return backup_match
@@ -643,3 +652,62 @@ def date_to_timestamp(date):
     fmt = '%Y-%m-%dT%H:%M:%S'
     opt_backup_date = datetime.datetime.strptime(date, fmt)
     return int(time.mktime(opt_backup_date.timetuple()))
class Bunch:
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

    def __getattr__(self, item):
        return self.__dict__.get(item)


class ReSizeStream:
    """
    Iterator/File-like object for changing size of chunk in stream
    """
    def __init__(self, stream, length, chunk_size):
        self.stream = stream
        self.length = length
        self.chunk_size = chunk_size
        self.reminder = ""
        self.transmitted = 0

    def __len__(self):
        return self.length

    def __iter__(self):
        return self

    def next(self):
        logging.info("Transmitted (%s) of (%s)" % (self.transmitted,
                                                   self.length))
        chunk_size = self.chunk_size
        if len(self.reminder) > chunk_size:
            result = self.reminder[:chunk_size]
            self.reminder = self.reminder[chunk_size:]
            self.transmitted += len(result)
            return result
        else:
            stop = False
            while not stop and len(self.reminder) < chunk_size:
                try:
                    self.reminder += next(self.stream)
                except StopIteration:
                    stop = True
            if stop:
                result = self.reminder
                if len(self.reminder) == 0:
                    raise StopIteration()
                self.reminder = []
                self.transmitted += len(result)
                return result
            else:
                result = self.reminder[:chunk_size]
                self.reminder = self.reminder[chunk_size:]
                self.transmitted += len(result)
                return result

    def read(self, chunk_size):
        self.chunk_size = chunk_size
        return self.next()
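ReSizeStream buffers incoming chunks and re-emits them at whatever chunk size the consumer requests, which is how a swift download stream (10 MB chunks) can feed glance's file-like read() interface. The same rechunking idea as a Python 3 generator (the class above targets Python 2, hence its `next` method; this sketch is illustrative only):

```python
def rechunk(stream, chunk_size):
    # Buffer input chunks and re-emit fixed-size pieces, flushing any
    # remainder once the input stream is exhausted.
    buf = ""
    for piece in stream:
        buf += piece
        while len(buf) >= chunk_size:
            yield buf[:chunk_size]
            buf = buf[chunk_size:]
    if buf:
        yield buf

print(list(rechunk(iter(["ab", "cde", "f"]), 4)))  # ['abcd', 'ef']
```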


@@ -2,6 +2,7 @@ python-swiftclient>=1.6.0
 python-keystoneclient>=0.8.0
 python-cinderclient
 python-glanceclient
+python-novaclient
 docutils>=0.8.1
 pymysql
@@ -10,3 +11,4 @@ pymongo
 [testing]
 pytest
 flake8
+mock


@@ -83,6 +83,7 @@ setup(
         'python-keystoneclient>=0.7.0',
         'python-cinderclient',
         'python-glanceclient',
+        'python-novaclient',
         'pymysql',
         'pymongo',
         'docutils>=0.8.1'],


@@ -1,4 +1,5 @@
 #!/usr/bin/env python
+from mock import MagicMock

 from freezer.backup import backup_mode_mysql, backup_mode_fs, backup_mode_mongo
 import freezer
@@ -11,7 +12,6 @@ import pymysql as MySQLdb
 import pymongo
 import re
 from collections import OrderedDict
-import __builtin__

 from glanceclient.common.utils import IterableWithLength
 from freezer.utils import OpenstackOptions
@@ -590,11 +590,11 @@ class FakeGlanceClient:

 class FakeSwiftClient:
     def __init__(self):
-        return None
+        pass

     class client:
         def __init__(self):
-            return None
+            pass

         class Connection:
             def __init__(self, key=True, os_options=True, auth_version=True, user=True, authurl=True, tenant_name=True, retries=True, insecure=True):
@@ -637,21 +637,23 @@ class FakeSwiftClient:
                 return True, [{'name': 'test-container'}, {'name': 'test-container-segments'}]

             def get_object(self, *args, **kwargs):
-                return [{'x-object-meta-length': "123"}, "abc"]
+                return [{'x-object-meta-length': "123",
+                         'x-object-meta-tenant-id': "12",
+                         'x-object-meta-name': "name"}, "abc"]
 class FakeSwiftClient1:
     def __init__(self):
-        return None
+        pass

     class client:
         def __init__(self):
-            return None
+            pass

         class Connection:
             def __init__(self, key=True, os_options=True, os_auth_ver=True, user=True, authurl=True, tenant_name=True, retries=True, insecure=True):
-                return None
+                pass

             def put_object(self, opt1=True, opt2=True, opt3=True, opt4=True, opt5=True, headers=True, content_length=True, content_type=True):
                 raise Exception
@@ -793,13 +795,24 @@ class BackupOpt1:
         self.download_limit = -1
         self.sql_server_instance = 'Sql Server'
         self.volume_id = ''
+        self.instance_id = ''
         self.options = OpenstackOptions.create_from_dict(os.environ)
+        from freezer.osclients import ClientManager
+        from mock import Mock
+        self.client_manager = ClientManager(None, False, -1, -1, 2, False)
+        self.client_manager.get_swift = Mock(
+            return_value=FakeSwiftClient().client.Connection())
+        self.client_manager.get_glance = Mock(return_value=FakeGlanceClient())
+        self.client_manager.get_cinder = Mock(return_value=FakeCinderClient())
+        nova_client = MagicMock()
+        self.client_manager.get_nova = Mock(return_value=nova_client)
 class FakeMySQLdb:
     def __init__(self):
-        return None
+        pass

     def __call__(self, *args, **kwargs):
         return self
@@ -1016,9 +1029,6 @@ class FakeSwift:
         backup_opt.list_objects = None
         return backup_opt

-    def fake_get_client(self, backup_opt):
-        return backup_opt
-
     def fake_show_containers(self, backup_opt):
         return True


@@ -7,7 +7,6 @@ import sys
 import os
 import pytest
 import distutils.spawn as distspawn
-import __builtin__


 class TestArguments(object):
@@ -53,8 +52,10 @@ class TestArguments(object):
         platform = sys.platform
         assert backup_arguments() is not False

+        if sys.__dict__['platform'] != 'darwin':
             sys.__dict__['platform'] = 'darwin'
             pytest.raises(Exception, backup_arguments)
+        sys.__dict__['platform'] = 'darwin'
         monkeypatch.setattr(
             distspawn, 'find_executable', fakedistutilsspawn.find_executable)
         assert backup_arguments() is not False


@@ -3,8 +3,6 @@
 from freezer.backup import backup_mode_mysql, backup_mode_fs, backup_mode_mongo
 from freezer.backup import backup_cinder
 import freezer
-from freezer import cinder
-from freezer import glance
 import swiftclient
 import multiprocessing
 import subprocess
@@ -16,8 +14,6 @@ import re
 import pytest
 from commons import *
-import __builtin__

 class TestBackUP:
@@ -193,13 +189,7 @@ class TestBackUP:
         assert backup_mode_mongo(
             backup_opt, 123456789, test_meta) is True

-    def test_backup_cinder(self, monkeypatch):
+    def test_backup_cinder(self):
         backup_opt = BackupOpt1()
         backup_opt.volume_id = 34
-        backup_opt.glance = FakeGlanceClient()
-        backup_opt.cinder = FakeCinderClient()
-        fakeswiftclient = FakeSwiftClient()
-        monkeypatch.setattr(swiftclient, 'client', fakeswiftclient.client)
-        backup_cinder(backup_opt, 1417649003, False)
+        backup_cinder(backup_opt, 1417649003)


@@ -42,7 +42,6 @@ class TestJob:
         monkeypatch.setattr(logging, 'warning', fakelogging.warning)
         monkeypatch.setattr(logging, 'exception', fakelogging.exception)
         monkeypatch.setattr(logging, 'error', fakelogging.error)
-        monkeypatch.setattr(swift, 'get_client', fakeswift.fake_get_client)
         monkeypatch.setattr(swift, 'get_containers_list', fakeswift.fake_get_containers_list1)

     def test_execute(self, monkeypatch):

tests/test_osclients.py Normal file

@@ -0,0 +1,23 @@
+import unittest
+from freezer.osclients import ClientManager
+from freezer.utils import OpenstackOptions
+
+
+class TestOsClients(unittest.TestCase):
+
+    fake_options = OpenstackOptions("user", "tenant", "url", "password")
+
+    def test_init(self):
+        ClientManager(self.fake_options, None, None, None, None, None)
+
+    def test_create_cinder(self):
+        client = ClientManager(self.fake_options, None, None, None, None, None)
+        client.create_cinder()
+
+    def test_create_swift(self):
+        client = ClientManager(self.fake_options, None, None, None, None, None)
+        client.create_swift()
+
+    def test_create_nova(self):
+        client = ClientManager(self.fake_options, None, None, None, None, None)
+        client.create_nova()


@@ -23,7 +23,7 @@ Hudson (tjh@cryptsoft.com).
 from commons import *
 from freezer.restore import (
-    restore_fs, restore_fs_sort_obj, restore_cinder)
+    restore_fs, restore_fs_sort_obj, RestoreOs)
 import freezer
 import logging
 import pytest
@@ -82,13 +82,12 @@ class TestRestore:
         backup_opt.backup_name = 'abcdtest'
         pytest.raises(Exception, restore_fs_sort_obj, backup_opt)

-    def test_restore_cinder(self, monkeypatch):
+    def test_restore_cinder(self):
         backup_opt = BackupOpt1()
-        backup_opt.volume_id = 34
-        backup_opt.glance = FakeGlanceClient()
-        backup_opt.cinder = FakeCinderClient()
-        fakeswiftclient = FakeSwiftClient()
-        monkeypatch.setattr(swiftclient, 'client', fakeswiftclient.client)
-        restore_cinder(backup_opt, False)
+        ros = RestoreOs(backup_opt.client_manager, backup_opt.container)
+        ros.restore_cinder(backup_opt.restore_from_date, 34)
+
+    def test_restore_nova(self):
+        backup_opt = BackupOpt1()
+        ros = RestoreOs(backup_opt.client_manager, backup_opt.container)
+        ros.restore_nova(backup_opt.restore_from_date, 34)


@@ -25,11 +25,10 @@ from commons import *
 from freezer.swift import (create_containers, show_containers,
     show_objects, remove_obj_older_than, get_container_content,
     check_container_existance,
-    get_client, manifest_upload, add_object, get_containers_list,
+    manifest_upload, add_object, get_containers_list,
     object_to_file, object_to_stream, _remove_object, remove_object)
 import os
 import logging
-import subprocess
 import pytest
 import time
@@ -182,12 +181,6 @@ class TestSwift:
         backup_opt.container = False
         pytest.raises(Exception, get_container_content, backup_opt)
-        fakeclient = FakeSwiftClient1()
-        fakeconnector = fakeclient.client()
-        fakeswclient = fakeconnector.Connection()
-        backup_opt = BackupOpt1()
-        backup_opt.sw_connector = fakeswclient
-        pytest.raises(Exception, get_container_content, backup_opt)

     def test_check_container_existance(self, monkeypatch):
@@ -210,18 +203,6 @@ class TestSwift:
         backup_opt.container_segments = 'test-abcd-segments'
         assert type(check_container_existance(backup_opt)) is dict
-    def test_get_client(self, monkeypatch):
-        backup_opt = BackupOpt1()
-        fakelogging = FakeLogging()
-        monkeypatch.setattr(logging, 'critical', fakelogging.critical)
-        monkeypatch.setattr(logging, 'warning', fakelogging.warning)
-        monkeypatch.setattr(logging, 'exception', fakelogging.exception)
-        monkeypatch.setattr(logging, 'error', fakelogging.error)
-        assert isinstance(get_client(backup_opt), BackupOpt1) is True

     def test_manifest_upload(self, monkeypatch):
         backup_opt = BackupOpt1()
@@ -268,20 +249,6 @@ class TestSwift:
         pytest.raises(SystemExit, add_object, backup_opt, backup_queue,
                       absolute_file_path, time_stamp)
-        fakeclient = FakeSwiftClient1()
-        fakeconnector = fakeclient.client()
-        fakeswclient = fakeconnector.Connection()
-        backup_opt = BackupOpt1()
-        backup_opt.sw_connector = fakeswclient
-        pytest.raises(SystemExit, add_object, backup_opt, backup_queue,
-                      absolute_file_path, time_stamp)
-        backup_opt = BackupOpt1()
-        absolute_file_path = None
-        backup_queue = None
-        pytest.raises(SystemExit, add_object, backup_opt, backup_queue,
-                      absolute_file_path, time_stamp)

     def test_get_containers_list(self, monkeypatch):
         backup_opt = BackupOpt1()
@@ -294,13 +261,6 @@ class TestSwift:
         assert isinstance(get_containers_list(backup_opt), BackupOpt1) is True
-        fakeclient = FakeSwiftClient1()
-        fakeconnector = fakeclient.client()
-        fakeswclient = fakeconnector.Connection()
-        backup_opt = BackupOpt1()
-        backup_opt.sw_connector = fakeswclient
-        pytest.raises(Exception, get_containers_list, backup_opt)

     def test_object_to_file(self, monkeypatch):
@@ -325,14 +285,11 @@ class TestSwift:
         backup_opt = BackupOpt1()
         fakelogging = FakeLogging()
-        fakeclient = FakeSwiftClient()
-        fakeconnector = fakeclient.client
         monkeypatch.setattr(logging, 'critical', fakelogging.critical)
         monkeypatch.setattr(logging, 'warning', fakelogging.warning)
         monkeypatch.setattr(logging, 'exception', fakelogging.exception)
         monkeypatch.setattr(logging, 'error', fakelogging.error)
-        monkeypatch.setattr(swiftclient, 'client', fakeconnector)
         obj_name = 'test-obj-name'
         fakemultiprocessing = FakeMultiProcessing1()