Implementation of Cinder backup compatible mode

Implements blueprint: cinder-backup-compatible-mode

Change-Id: I2b388dd40b8605cbf488d9c7976d8a9c12b144de
This commit is contained in: parent eaf8d10599, commit 465683e609
@@ -19,6 +19,7 @@ Contributors
 - Coleman Corrigan
 - Guillermo Ramirez Garcia
 - Zahari Zahariev
+- Eldar Nugaev

 Credits
 =======
README.rst (75 lines changed)
@@ -203,35 +203,36 @@ Execute a MySQL backup using lvm snapshot::

 Cinder backups

-To make a cinder backup you should provide volume-id parameter in arguments.
-Freezer doesn't do any additional checks and assumes that making backup
-of that image will be sufficient to restore your data in future.
+To make a cinder backup you should provide the cinder-vol-id or cindernative-vol-id
+parameter in the command line arguments. Freezer doesn't do any additional
+checks and assumes that a backup of that volume will be sufficient to restore
+your data in the future.

 Execute a cinder backup::

-    $ freezerc --volume-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+    $ freezerc --cinder-vol-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b

 Execute a mysql backup with cinder::

     $ freezerc --mysql-conf /root/.freezer/freezer-mysql.conf
       --container freezer_mysql-backup-prod --mode mysql
       --backup-name mysql-ops002
-      --volume-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+      --cinder-vol-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b

 Nova backups

-To make a nova backup you should provide instance-id parameter in arguments.
+To make a nova backup you should provide the nova-inst-id parameter in arguments.
 Freezer doesn't do any additional checks and assumes that making backup
 of that instance will be sufficient to restore your data in future.

 Execute a nova backup::

-    $ freezerc --instance-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+    $ freezerc --nova-inst-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b

 Execute a mysql backup with nova::

     $ freezerc --mysql-conf /root/.freezer/freezer-mysql.conf
       --container freezer_mysql-backup-prod --mode mysql
       --backup-name mysql-ops002
-      --instance-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+      --nova-inst-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b

 All the freezerc activities are logged into /var/log/freezer.log.
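The README section above maps each backup media to its own command-line flag. A minimal Python sketch of that mapping (the helper function is ours, for illustration only, not part of freezer):

```python
def freezerc_command(media, resource_id):
    """Build the freezerc argv for a given backup media.

    Flag names are the ones documented in the README above; the
    precedence/dispatch itself lives inside freezer.
    """
    flags = {
        'cinder': '--cinder-vol-id',              # glance-based volume backup
        'cindernative': '--cindernative-vol-id',  # cinder's own backup API
        'nova': '--nova-inst-id',                 # instance snapshot backup
    }
    return ['freezerc', flags[media], resource_id]

print(freezerc_command('cinder', '3ad7a62f-217a-48cd-a861-43ec0a04a78b'))
```

Unknown media values raise `KeyError`, which is acceptable for a sketch; freezer itself validates its arguments through argparse.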
@@ -298,13 +299,16 @@ vm. You should implement this steps manually. To create new volume from
 existing content run next command:

 Execute a cinder restore::

-    $ freezerc --action restore --volume-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+    $ freezerc --action restore --cinder-vol-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+    $ freezerc --action restore --cindernative-vol-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b

 Nova restore currently creates an instance with the content of the saved one,
 but the ip address of the vm will be different, as will its id.

 Execute a nova restore::

-    $ freezerc --action restore --instance-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b
+    $ freezerc --action restore --nova-inst-id 3ad7a62f-217a-48cd-a861-43ec0a04a78b

 Architecture
 ============
@@ -456,6 +460,39 @@ The hostname of the node where the Freezer perform the backup. This meta
 data is important to identify a backup with a specific node, thus avoid
 possible confusion and associate backup to the wrong node.

+Nova and Cinder Backups
+-----------------------
+
+If our data is stored on a cinder volume or a nova instance disk, we can
+implement file backup using nova snapshots or volume backups.
+
+Nova backups:
+
+If you provide the nova argument in the parameters, freezer assumes that all
+necessary data is located on the instance disk and that it can be successfully
+stored using the nova snapshot mechanism.
+
+For example, if we want to store our mysql data located on an instance disk,
+we will execute the same actions as in the case of lvm or tar snapshots, but
+we will invoke a nova snapshot instead of lvm or tar.
+
+After that we will place the snapshot in a swift container as a dynamic large
+object:
+
+    container/%instance_id%/%timestamp% <- large object with metadata
+    container_segments/%instance_id%/%timestamp%/segments...
+
+Restore will create a snapshot from the stored data and restore an instance
+from this snapshot. The instance will have a different id and the old instance
+should be terminated manually.
+
+Cinder backups:
+
+Cinder has its own mechanism for backups and freezer supports it. But freezer
+also allows creating a glance image from a volume and uploading it to swift.
+
+To use standard cinder backups please provide the --cindernative-vol-id argument.

 Miscellanea
 -----------
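The swift object layout described above can be sketched as a small helper (hypothetical names; freezer composes these paths inline when uploading the stream):

```python
def swift_object_paths(container, instance_id, timestamp):
    """Return (manifest_path, segments_prefix) for an instance backup,
    following the container/%instance_id%/%timestamp% layout above.
    The segments container is the backup container plus a suffix."""
    manifest = "{0}/{1}/{2}".format(container, instance_id, timestamp)
    segments = "{0}_segments/{1}/{2}/segments".format(
        container, instance_id, timestamp)
    return manifest, segments

print(swift_object_paths('freezer_backups', 'inst-1', 1234))
```

The manifest is stored as a dynamic large object whose metadata points at the segment prefix, so swift reassembles the image on download.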
@@ -464,7 +501,7 @@ Available options::

     $ freezerc

-    usage: freezerc [-h] [--config CONFIG] [--action {backup,restore,info,admin}]
+    usage: freezerc [-h] [--config CONFIG] [--action {backup,restore,info,admin}]
                     [-F PATH_TO_BACKUP] [-N BACKUP_NAME] [-m MODE] [-C CONTAINER]
                     [-L] [-l] [-o GET_OBJECT] [-d DST_FILE]
                     [--lvm-auto-snap LVM_AUTO_SNAP] [--lvm-srcvol LVM_SRCVOL]
@@ -483,9 +520,10 @@ Available options::
                     [--restore-from-date RESTORE_FROM_DATE] [--max-priority] [-V]
                     [-q] [--insecure] [--os-auth-ver {1,2,3}] [--proxy PROXY]
                     [--dry-run] [--upload-limit UPLOAD_LIMIT]
-                    [--volume-id VOLUME_ID] [--instance-id INSTANCE_ID]
+                    [--cinder-vol-id CINDER_VOL_ID] [--nova-inst-id NOVA_INST_ID]
+                    [--cindernative-vol-id CINDERNATIVE_VOL_ID]
                     [--download-limit DOWNLOAD_LIMIT]
-                    [--sql-server-conf SQL_SERVER_CONF] [--volume VOLUME]
+                    [--sql-server-conf SQL_SERVER_CONF] [--vssadmin VSSADMIN]

     optional arguments:
       -h, --help            show this help message and exit
@@ -635,10 +673,12 @@ Available options::
       --upload-limit UPLOAD_LIMIT
                             Upload bandwidth limit in Bytes per sec. Can be
                             invoked with dimensions (10K, 120M, 10G).
-      --volume-id VOLUME_ID
+      --cinder-vol-id CINDER_VOL_ID
                             Id of cinder volume for backup
-      --instance-id INSTANCE_ID
+      --nova-inst-id NOVA_INST_ID
                             Id of nova instance for backup
+      --cindernative-vol-id CINDERNATIVE_VOL_ID
+                            Id of cinder volume for native backup
       --download-limit DOWNLOAD_LIMIT
                             Download bandwidth limit in Bytes per sec. Can be
                             invoked with dimensions (10K, 120M, 10G).
@@ -646,6 +686,5 @@ Available options::
                             Set the SQL Server configuration file where freezer
                             retrieve the sql server instance. Following is an
                             example of config file: instance = <db-instance>
-      --vssadmin VSSADMIN   Create a backup using a snapshot on windows
-                            using vssadmin. Options are: True and False,
-                            default is True
+      --vssadmin VSSADMIN   Create a backup using a snapshot on windows using
+                            vssadmin. Options are: True and False, default is True
@@ -24,6 +24,7 @@ Arguments and general parameters definitions

 import sys
+import os
 import argparse

 try:
     import configparser
 except ImportError:
@@ -38,10 +39,35 @@ from freezer.utils import OpenstackOptions
 from freezer.winutils import is_windows
 from os.path import expanduser


 home = expanduser("~")


+DEFAULT_PARAMS = {
+    'os_auth_ver': 2, 'list_objects': False, 'get_object': False,
+    'lvm_auto_snap': False, 'lvm_volgroup': False,
+    'exclude': False, 'sql_server_conf': False,
+    'backup_name': False, 'quiet': False,
+    'container': 'freezer_backups', 'no_incremental': False,
+    'max_segment_size': 67108864, 'lvm_srcvol': False,
+    'download_limit': -1, 'hostname': False, 'remove_from_date': False,
+    'restart_always_level': False, 'lvm_dirmount': False,
+    'dst_file': False, 'dereference_symlink': 'none',
+    'restore_from_host': False, 'config': False, 'mysql_conf': False,
+    'insecure': False, 'lvm_snapname': False, 'max_priority': False,
+    'max_level': False, 'path_to_backup': False,
+    'encrypt_pass_file': False, 'volume': False, 'proxy': False,
+    'cinder_vol_id': '', 'cindernative_vol_id': '',
+    'nova_inst_id': '', 'list_containers': False,
+    'remove_older_than': None, 'restore_from_date': False,
+    'upload_limit': -1, 'always_level': False, 'version': False,
+    'dry_run': False, 'lvm_snapsize': False,
+    'restore_abs_path': False, 'log_file': None,
+    'upload': True, 'mode': 'fs', 'action': 'backup',
+    'vssadmin': True, 'shadow': '', 'shadow_path': '',
+    'windows_volume': ''
+}
+
+
 def alter_proxy(args_dict):
     """
     Read proxy option from dictionary and alter the HTTP_PROXY and/or
@@ -85,7 +111,7 @@ def backup_arguments(args_dict={}):
         "from config file. When config file is used any option "
         "from command line provided take precedence."))

-    defaults = {}
+    defaults = DEFAULT_PARAMS.copy()
     args, remaining_argv = conf_parser.parse_known_args()
     if args.config:
         config = configparser.SafeConfigParser()
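The change from an empty dict to `DEFAULT_PARAMS.copy()` means per-run overrides never mutate the shared module-level dict. A minimal sketch of the copy-then-override pattern (names simplified, not freezer's actual code):

```python
# A trimmed-down stand-in for the module-level defaults dict.
DEFAULT_PARAMS = {'container': 'freezer_backups', 'mode': 'fs',
                  'cinder_vol_id': ''}


def build_defaults(config_overrides=None):
    """Start from a copy of the shared defaults, then layer config-file
    values on top, so the module-level dict stays pristine across runs."""
    defaults = DEFAULT_PARAMS.copy()
    defaults.update(config_overrides or {})
    return defaults


d = build_defaults({'cinder_vol_id': 'vol-1'})
print(d['cinder_vol_id'], DEFAULT_PARAMS['cinder_vol_id'])
```

Without the `.copy()`, a config-file override in one invocation would leak into every later call that reads the defaults.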
@@ -96,30 +122,6 @@ def backup_arguments(args_dict={}):
             if option_value in ('False', 'None'):
                 option_value = False
             defaults[option] = option_value
-    else:
-        defaults = {
-            'os_auth_ver': 2, 'list_objects': False, 'get_object': False,
-            'lvm_auto_snap': False, 'lvm_volgroup': False,
-            'exclude': False, 'sql_server_conf': False,
-            'backup_name': False, 'quiet': False,
-            'container': 'freezer_backups', 'no_incremental': False,
-            'max_segment_size': 67108864, 'lvm_srcvol': False,
-            'download_limit': -1, 'hostname': False, 'remove_from_date': False,
-            'restart_always_level': False, 'lvm_dirmount': False,
-            'dst_file': False, 'dereference_symlink': 'none',
-            'restore_from_host': False, 'config': False, 'mysql_conf': False,
-            'insecure': False, 'lvm_snapname': False, 'max_priority': False,
-            'max_level': False, 'path_to_backup': False,
-            'encrypt_pass_file': False, 'volume': False, 'proxy': False,
-            'volume_id': '', 'list_containers': False,
-            'remove_older_than': None, 'restore_from_date': False,
-            'upload_limit': -1, 'always_level': False, 'version': False,
-            'dry_run': False, 'lvm_snapsize': False,
-            'restore_abs_path': False, 'log_file': None,
-            'upload': True, 'mode': 'fs', 'action': 'backup',
-            'vssadmin': True, 'shadow': '', 'shadow_path': '',
-            'windows_volume': ''
-        }

     # Generate a new argparse istance and inherit options from config parse
     arg_parser = argparse.ArgumentParser(
@@ -355,14 +357,19 @@ def backup_arguments(args_dict={}):
         type=utils.human2bytes,
         default=-1)
     arg_parser.add_argument(
-        "--volume-id", action='store',
+        "--cinder-vol-id", action='store',
         help='Id of cinder volume for backup',
-        dest="volume_id",
+        dest="cinder_vol_id",
         default='')
     arg_parser.add_argument(
-        "--instance-id", action='store',
+        "--nova-inst-id", action='store',
         help='Id of nova instance for backup',
-        dest="instance_id",
+        dest="nova_inst_id",
         default='')
     arg_parser.add_argument(
+        "--cindernative-vol-id", action='store',
+        help='Id of cinder volume for native backup',
+        dest="cindernative_vol_id",
+        default='')
+    arg_parser.add_argument(
         '--download-limit', action='store',
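The renamed options can be reproduced in a self-contained argparse sketch (a reduced parser for illustration, not freezer's full one):

```python
import argparse

# Only the three backup-media options from the hunk above.
parser = argparse.ArgumentParser(prog='freezerc')
parser.add_argument("--cinder-vol-id", action='store',
                    help='Id of cinder volume for backup',
                    dest="cinder_vol_id", default='')
parser.add_argument("--nova-inst-id", action='store',
                    help='Id of nova instance for backup',
                    dest="nova_inst_id", default='')
parser.add_argument("--cindernative-vol-id", action='store',
                    help='Id of cinder volume for native backup',
                    dest="cindernative_vol_id", default='')

args = parser.parse_args(["--cinder-vol-id", "3ad7a62f"])
print(args.cinder_vol_id)
```

`dest` maps each dashed flag onto the underscore attribute name used everywhere else in the codebase, and the `''` defaults make "flag not given" a simple falsy check.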
@@ -423,7 +430,7 @@ def backup_arguments(args_dict={}):
     backup_args.__dict__['curr_backup_level'] = ''
     backup_args.__dict__['manifest_meta_dict'] = ''
     if is_windows():
-        backup_args.__dict__['tar_path'] = '{0}\\bin\\tar.exe'.\
+        backup_args.__dict__['tar_path'] = '{0}\\bin\\tar.exe'. \
             format(path_to_binaries)
     else:
         backup_args.__dict__['tar_path'] = distspawn.find_executable('tar')
|
@ -482,7 +489,17 @@ def backup_arguments(args_dict={}):
|
|||
# Freezer version
|
||||
backup_args.__dict__['__version__'] = '1.1.3'
|
||||
|
||||
backup_args.__dict__['options'] = \
|
||||
OpenstackOptions.create_from_dict(os.environ)
|
||||
backup_args.__dict__['options'] = OpenstackOptions.create_from_env()
|
||||
|
||||
# todo(enugaev) move it to new command line param backup_media
|
||||
backup_media = 'fs'
|
||||
if backup_args.cinder_vol_id:
|
||||
backup_media = 'cinder'
|
||||
elif backup_args.cindernative_vol_id:
|
||||
backup_media = 'cindernative'
|
||||
elif backup_args.nova_inst_id:
|
||||
backup_media = 'nova'
|
||||
|
||||
backup_args.__dict__['backup_media'] = backup_media
|
||||
|
||||
return backup_args, arg_parser
|
||||
|
|
|
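The backup_media selection added above is a simple precedence chain: cinder wins over cindernative, which wins over nova, with filesystem as the default. A standalone sketch (the function name is ours):

```python
def choose_backup_media(cinder_vol_id='', cindernative_vol_id='',
                        nova_inst_id=''):
    """Mirror the precedence used in backup_arguments: the first
    non-empty id decides the media; no id means filesystem backup."""
    if cinder_vol_id:
        return 'cinder'
    elif cindernative_vol_id:
        return 'cindernative'
    elif nova_inst_id:
        return 'nova'
    return 'fs'

print(choose_backup_media(nova_inst_id='3ad7a62f'))
```

Because the ids default to `''`, a plain truthiness test is enough; if two ids were ever passed together, the chain silently picks the higher-precedence one, which is worth keeping in mind when scripting.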
@@ -27,7 +27,6 @@ from os.path import expanduser
 import time

 from freezer.lvm import lvm_snap, lvm_snap_remove, get_lvm_info
-from freezer.osclients import ClientManager
 from freezer.tar import tar_backup, gen_tar_command
 from freezer.swift import add_object, manifest_upload
 from freezer.utils import gen_manifest_meta, add_host_name_ts_level
@@ -115,8 +114,8 @@ def backup_mode_mysql(backup_opt_dict, time_stamp, manifest_meta_dict):
     except Exception as error:
         raise Exception('[*] MySQL: {0}'.format(error))

-    # Execute LVM backup
-    backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict)
+    # Execute backup
+    backup(backup_opt_dict, time_stamp, manifest_meta_dict)


 def backup_mode_mongo(backup_opt_dict, time_stamp, manifest_meta_dict):
@@ -140,83 +139,113 @@ def backup_mode_mongo(backup_opt_dict, time_stamp, manifest_meta_dict):
     mongo_primary = master_dict['primary']

     if mongo_me == mongo_primary:
-        backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict)
+        backup(backup_opt_dict, time_stamp, manifest_meta_dict)
     else:
         logging.warning('[*] localhost {0} is not Master/Primary,\
         exiting...'.format(local_hostname))
         return True


-def backup_nova(backup_dict, time_stamp):
-    """
-    Implement nova backup
-    :param backup_dict: backup configuration dictionary
-    :param time_stamp: timestamp of backup
-    :return:
-    """
-    instance_id = backup_dict.instance_id
-    client_manager = backup_dict.client_manager
-    nova = client_manager.get_nova()
-    instance = nova.servers.get(instance_id)
-    glance = client_manager.get_glance()
-
-    if instance.__dict__['OS-EXT-STS:task_state']:
-        time.sleep(5)
-        instance = nova.servers.get(instance)
-
-    image = instance.create_image("snapshot_of_%s" % instance_id)
-    image = glance.images.get(image)
-    while image.status != 'active':
-        time.sleep(5)
-        image = glance.images.get(image)
-
-    stream = client_manager.download_image(image)
-    package = "{0}/{1}".format(instance_id, time_stamp)
-    logging.info("[*] Uploading image to swift")
-    headers = {"x-object-meta-name": instance._info['name'],
-               "x-object-meta-tenant_id": instance._info['tenant_id']}
-    swift.add_stream(backup_dict.client_manager,
-                     backup_dict.container_segments,
-                     backup_dict.container, stream, package, headers)
-    logging.info("[*] Deleting temporary image")
-    glance.images.delete(image)
-
-
-def backup_cinder(backup_dict, time_stamp):
-    """
-    Implements cinder backup:
-        1) Gets a stream of the image from glance
-        2) Stores resulted image to the swift as multipart object
-
-    :param backup_dict: global dict with variables
-    :param time_stamp: timestamp of snapshot
-    """
-    client_manager = backup_dict.client_manager
-    cinder = client_manager.get_cinder()
-    glance = client_manager.get_glance()
-
-    volume_id = backup_dict.volume_id
-    volume = cinder.volumes.get(volume_id)
-    logging.info("[*] Creation temporary snapshot")
-    snapshot = client_manager.provide_snapshot(
-        volume, "backup_snapshot_for_volume_%s" % volume_id)
-    logging.info("[*] Creation temporary volume")
-    copied_volume = client_manager.do_copy_volume(snapshot)
-    logging.info("[*] Creation temporary glance image")
-    image = client_manager.make_glance_image("name", copied_volume)
-    stream = client_manager.download_image(image)
-    package = "{0}/{1}".format(volume_id, time_stamp)
-    logging.info("[*] Uploading image to swift")
-    headers = {}
-    swift.add_stream(backup_dict.client_manager,
-                     backup_dict.container_segments,
-                     backup_dict.container, stream, package, headers=headers)
-    logging.info("[*] Deleting temporary snapshot")
-    client_manager.clean_snapshot(snapshot)
-    logging.info("[*] Deleting temporary volume")
-    cinder.volumes.delete(copied_volume)
-    logging.info("[*] Deleting temporary image")
-    glance.images.delete(image)
+class BackupOs:
+
+    def __init__(self, client_manager, container, container_segments):
+        self.client_manager = client_manager
+        self.container = container
+        self.container_segments = container_segments
+
+    def backup_nova(self, instance_id, time_stamp):
+        """
+        Implement nova backup
+        :param instance_id: Id of the instance for backup
+        :param time_stamp: timestamp of backup
+        :return:
+        """
+        client_manager = self.client_manager
+        nova = client_manager.get_nova()
+        instance = nova.servers.get(instance_id)
+        glance = client_manager.get_glance()
+
+        if instance.__dict__['OS-EXT-STS:task_state']:
+            time.sleep(5)
+            instance = nova.servers.get(instance)
+
+        image = instance.create_image("snapshot_of_%s" % instance_id)
+        image = glance.images.get(image)
+        while image.status != 'active':
+            time.sleep(5)
+            image = glance.images.get(image)
+
+        stream = client_manager.download_image(image)
+        package = "{0}/{1}".format(instance_id, time_stamp)
+        logging.info("[*] Uploading image to swift")
+        headers = {"x-object-meta-name": instance._info['name'],
+                   "x-object-meta-tenant_id": instance._info['tenant_id']}
+        swift.add_stream(client_manager,
+                         self.container_segments,
+                         self.container, stream, package, headers)
+        logging.info("[*] Deleting temporary image")
+        glance.images.delete(image)
+
+    def backup_cinder_by_glance(self, volume_id, time_stamp):
+        """
+        Implements cinder backup:
+            1) Gets a stream of the image from glance
+            2) Stores resulted image to the swift as multipart object
+
+        :param volume_id: id of volume for backup
+        :param time_stamp: timestamp of snapshot
+        """
+        client_manager = self.client_manager
+        cinder = client_manager.get_cinder()
+
+        volume = cinder.volumes.get(volume_id)
+        logging.info("[*] Creating temporary snapshot")
+        snapshot = client_manager.provide_snapshot(
+            volume, "backup_snapshot_for_volume_%s" % volume_id)
+        logging.info("[*] Creating temporary volume")
+        copied_volume = client_manager.do_copy_volume(snapshot)
+        logging.info("[*] Creating temporary glance image")
+        image = client_manager.make_glance_image("name", copied_volume)
+        stream = client_manager.download_image(image)
+        package = "{0}/{1}".format(volume_id, time_stamp)
+        logging.info("[*] Uploading image to swift")
+        headers = {}
+        swift.add_stream(self.client_manager,
+                         self.container_segments,
+                         self.container, stream, package, headers=headers)
+        logging.info("[*] Deleting temporary snapshot")
+        client_manager.clean_snapshot(snapshot)
+        logging.info("[*] Deleting temporary volume")
+        cinder.volumes.delete(copied_volume)
+        logging.info("[*] Deleting temporary image")
+        client_manager.get_glance().images.delete(image)
+
+    def backup_cinder(self, volume_id, name=None, description=None):
+        client_manager = self.client_manager
+        cinder = client_manager.get_cinder()
+        cinder.backups.create(volume_id, self.container, name, description)
+
+
+def backup(backup_opt_dict, time_stamp, manifest_meta_dict):
+    backup_media = backup_opt_dict.backup_media
+    backup_os = BackupOs(backup_opt_dict.client_manager,
+                         backup_opt_dict.container,
+                         backup_opt_dict.container_segments)
+    if backup_media == 'fs':
+        backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict)
+    elif backup_media == 'nova':
+        logging.info('[*] Executing nova backup')
+        backup_os.backup_nova(backup_opt_dict.nova_inst_id, time_stamp)
+    elif backup_media == 'cindernative':
+        logging.info('[*] Executing cinder backup')
+        backup_os.backup_cinder(backup_opt_dict.cindernative_vol_id)
+    elif backup_media == 'cinder':
+        logging.info('[*] Executing cinder snapshot')
+        backup_os.backup_cinder_by_glance(backup_opt_dict.cinder_vol_id,
+                                          time_stamp)
+    else:
+        raise Exception('unknown parameter backup_media %s' % backup_media)


 def backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict):
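backup_nova polls glance until the snapshot image goes active before downloading it. A standalone sketch of that polling loop (the function name is ours, and the poll interval is parameterized so the pattern can be exercised without real sleeps):

```python
import time


def wait_until_active(glance, image_id, poll=5):
    """Re-fetch the image from glance until it reports 'active',
    mirroring the while-loop in BackupOs.backup_nova."""
    image = glance.images.get(image_id)
    while image.status != 'active':
        time.sleep(poll)
        image = glance.images.get(image_id)
    return image
```

In production this loop has no timeout, so a snapshot stuck in 'error' would spin forever; a real hardening pass would add a deadline and check for failure states.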
@@ -225,18 +254,6 @@ def backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict):
     """

     logging.info('[*] File System backup is being executed...')

-    if backup_opt_dict.volume_id:
-        logging.info('[*] Detected volume_id parameter')
-        logging.info('[*] Executing cinder snapshot')
-        backup_cinder(backup_opt_dict, time_stamp)
-        return
-    if backup_opt_dict.instance_id:
-        logging.info('[*] Detected instance_id parameter')
-        logging.info('[*] Executing nova snapshot')
-        backup_nova(backup_opt_dict, time_stamp)
-        return
-
     try:

         if is_windows():
@@ -309,25 +326,19 @@ def backup_mode_fs(backup_opt_dict, time_stamp, manifest_meta_dict):
             meta_data_abs_path = os.path.join(backup_opt_dict.workdir,
                                               tar_meta_prev)

+        client_manager = backup_opt_dict.client_manager
         # Upload swift manifest for segments
         if backup_opt_dict.upload:
-            # Request a new auth client in case the current token
-            # is expired before uploading tar meta data or the swift manifest
-            backup_opt_dict.client_manger = ClientManager(
-                backup_opt_dict.options,
-                backup_opt_dict.insecure,
-                backup_opt_dict.download_limit,
-                backup_opt_dict.upload_limit,
-                backup_opt_dict.os_auth_ver,
-                backup_opt_dict.dry_run
-            )
             client_manager.create_swift()

             if not backup_opt_dict.no_incremental:
                 # Upload tar incremental meta data file and remove it
                 logging.info('[*] Uploading tar meta data file: {0}'.format(
                     tar_meta_to_upload))
                 with open(meta_data_abs_path, 'r') as meta_fd:
-                    backup_opt_dict.client_manger.get_swift().put_object(
+                    client_manager.get_swift().put_object(
                         backup_opt_dict.container, tar_meta_to_upload, meta_fd)
                 # Removing tar meta data file, so we have only one
                 # authoritative version on swift
@@ -116,7 +116,7 @@ class BackupJob(Job):

         self.conf.manifest_meta_dict = manifest_meta_dict
         if self.conf.mode == 'fs':
-            backup.backup_mode_fs(
+            backup.backup(
                 self.conf, self.start_time.timestamp, manifest_meta_dict)
         elif self.conf.mode == 'mongo':
             backup.backup_mode_mongo(
@@ -146,16 +146,19 @@ class RestoreJob(Job):
         # Get the object list of the remote containers and store it in the
         # same dict passes as argument under the dict.remote_obj_list namespace
         self.conf = swift.get_container_content(self.conf)
-        if self.conf.volume_id or self.conf.instance_id:
-            res = RestoreOs(self.conf.client_manager, self.conf.container)
-            if self.conf.volume_id:
-                res.restore_cinder(self.conf.restore_from_date,
-                                   self.conf.volume_id)
-            else:
-                res.restore_nova(self.conf.restore_from_date,
-                                 self.conf.instance_id)
-        else:
+        res = RestoreOs(self.conf.client_manager, self.conf.container)
+        restore_from_date = self.conf.restore_from_date
+        backup_media = self.conf.backup_media
+        if backup_media == 'fs':
             restore.restore_fs(self.conf)
+        elif backup_media == 'nova':
+            res.restore_nova(restore_from_date, self.conf.nova_inst_id)
+        elif backup_media == 'cinder':
+            res.restore_cinder_by_glance(restore_from_date,
+                                         self.conf.cinder_vol_id)
+        elif backup_media == 'cindernative':
+            res.restore_cinder(restore_from_date,
+                               self.conf.cindernative_vol_id)
+        else:
+            raise Exception("unknown backup type: %s" % backup_media)


 class AdminJob(Job):
@@ -6,9 +6,9 @@ from utils import ReSizeStream


 class ClientManager:
-    def __init__(self, options, insecure,
-                 download_bytes_per_sec, upload_bytes_per_sec,
-                 swift_auth_version, dry_run):
+    def __init__(self, options, insecure=True,
+                 download_bytes_per_sec=-1, upload_bytes_per_sec=-1,
+                 swift_auth_version=2, dry_run=False):
         """
         Creates manager of connections to swift, nova, glance and cinder
         :param options: OpenstackOptions
@@ -143,7 +143,6 @@ class ClientManager:
     def provide_snapshot(self, volume, snapshot_name):
         """
         Creates snapshot for cinder volume with --force parameter
-        :param client_manager: Manager os clients
         :param volume: volume object for snapshoting
         :param snapshot_name: name of snapshot
         :return: snapshot object
@@ -191,6 +191,22 @@ class RestoreOs:
         return stream[0], image

+    def restore_cinder(self, restore_from_date, volume_id):
+        """
+        Restores a cinder volume using cinder's native backups
+        :param restore_from_date: restore from the first backup taken
+            at or after this date
+        :param volume_id: id of the volume to restore
+        """
+        cinder = self.client_manager.get_cinder()
+        backups = cinder.backups.findall(volume_id=volume_id,
+                                         status='available')
+        backups = [x for x in backups if x.created_at >= restore_from_date]
+        if not backups:
+            logging.error("no available backups for cinder volume")
+        else:
+            backup = min(backups, key=lambda x: x.created_at)
+            cinder.restores.restore(backup_id=backup.id)
+
     def restore_cinder_by_glance(self, restore_from_date, volume_id):
         """
         1) Define swift directory
         2) Download and upload to glance
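restore_cinder filters the available backups to those created at or after the requested date and restores the oldest of them. That selection can be sketched on its own (the helper name and dict shape are ours; cinder returns backup objects rather than dicts):

```python
def pick_backup(backups, restore_from_date):
    """Return the oldest backup created at or after restore_from_date,
    mirroring the min()-over-filtered-list selection in
    RestoreOs.restore_cinder. Dates are ISO-8601 strings, which compare
    lexicographically in chronological order."""
    candidates = [b for b in backups if b['created_at'] >= restore_from_date]
    if not candidates:
        return None
    return min(candidates, key=lambda b: b['created_at'])
```

Taking the minimum gives the first backup after the cutoff, i.e. the state closest to the requested date rather than the newest backup overall.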
@@ -56,8 +56,6 @@ def create_containers(backup_opt):
                      backup_opt.container_segments))
     sw_connector.put_container(backup_opt.container_segments)

-    return True
-

 def show_containers(backup_opt_dict):
     """
@@ -430,8 +428,6 @@ def object_to_file(backup_opt_dict, file_name_abs_path):
             resp_chunk_size=16000000)[1]:
             obj_fd.write(obj_chunk)

-    return True
-

 def object_to_stream(backup_opt_dict, write_pipe, read_pipe, obj_name):
     """
@@ -1,4 +1,4 @@
-'''
+"""
 Copyright 2014 Hewlett-Packard

 Licensed under the Apache License, Version 2.0 (the "License");

@@ -19,7 +19,7 @@ Hudson (tjh@cryptsoft.com).
 ========================================================================

 Freezer general utils functions
-'''
+"""

 import logging
 import os
@@ -30,7 +30,13 @@ import subprocess


 class OpenstackOptions:
-
+    """
+    Stores credentials for OpenStack API.
+    Can be created using
+    >> create_from_env()
+    or
+    >> create_from_dict(dict)
+    """
     def __init__(self, user_name, tenant_name, auth_url, password,
                  tenant_id=None, region_name=None):
         self.user_name = user_name
@@ -51,6 +57,10 @@ class OpenstackOptions:
             'tenant_name': self.tenant_name,
             'region_name': self.region_name}

+    @staticmethod
+    def create_from_env():
+        return OpenstackOptions.create_from_dict(os.environ)
+
     @staticmethod
     def create_from_dict(src_dict):
         try:
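create_from_env simply delegates to create_from_dict with os.environ, so the same parsing code serves both the environment and any plain mapping (handy in tests). A minimal stand-in sketch of that factory pattern (class and key names simplified, not freezer's full OpenstackOptions):

```python
import os


class Options:
    """Dict-based factory: credentials can come from os.environ or any
    mapping, so tests never have to touch the real environment."""

    def __init__(self, user_name, auth_url):
        self.user_name = user_name
        self.auth_url = auth_url

    @staticmethod
    def create_from_dict(src):
        # src is any mapping with OS_* style keys.
        return Options(src.get('OS_USERNAME'), src.get('OS_AUTH_URL'))

    @staticmethod
    def create_from_env():
        return Options.create_from_dict(os.environ)
```

Routing the env-based constructor through the dict-based one keeps a single place where key names and validation live.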
@@ -170,25 +180,6 @@ def validate_all_args(required_list):
     return True


-def validate_any_args(required_list):
-    '''
-    Ensure ANY of the elements of required_list are True. raise Exception
-    Exception otherwise
-    '''
-
-    try:
-        for element in required_list:
-            if element:
-                return True
-    except Exception as error:
-        err = "[*] Error: validate_any_args: {0} {1}".format(
-            required_list, error)
-        logging.exception(err)
-        raise Exception(err)
-
-    return False
-
-
 def sort_backup_list(backup_opt_dict):
     """
     Sort the backups by timestamp. The provided list contains strings in the
@@ -326,41 +317,6 @@ def get_rel_oldest_backup(backup_opt_dict):
     return backup_opt_dict


-def get_abs_oldest_backup(backup_opt_dict):
-    '''
-    Return from swift, the absolute oldest backup matching the provided
-    backup name and hostname of the node where freezer is executed.
-    The absolute oldest backup correspond the oldest available level 0 backup.
-    '''
-    if not backup_opt_dict.backup_name:
-        err = "[*] Error: please provide a valid backup name in \
-        backup_opt_dict.backup_name"
-        logging.exception(err)
-        raise Exception(err)
-
-    backup_opt_dict.remote_abs_oldest = u''
-    if len(backup_opt_dict.remote_match_backup) == 0:
-        return backup_opt_dict
-
-    backup_timestamp = 0
-    hostname = backup_opt_dict.hostname
-    for remote_obj in backup_opt_dict.remote_match_backup:
-        object_name = remote_obj.get('name', '')
-        obj_name_match = re.search(r'{0}_({1})_(\d+)_(\d+?)$'.format(
-            hostname, backup_opt_dict.backup_name), object_name.lower(), re.I)
-        if not obj_name_match:
-            continue
-        remote_obj_timestamp = int(obj_name_match.group(2))
-        if backup_timestamp == 0:
-            backup_timestamp = remote_obj_timestamp
-
-        if remote_obj_timestamp <= backup_timestamp:
-            backup_timestamp = remote_obj_timestamp
-            backup_opt_dict.remote_abs_oldest = object_name
-
-    return backup_opt_dict
-
-
 def eval_restart_backup(backup_opt_dict):
     '''
     Restart backup level if the first backup execute with always_level
@@ -1,6 +1,6 @@
 python-swiftclient>=1.6.0
 python-keystoneclient>=0.8.0
-python-cinderclient
+python-cinderclient>=1.2.1
 python-glanceclient
 python-novaclient>=2.21.0
setup.py (2 lines changed)
@@ -81,7 +81,7 @@ setup(
     install_requires=[
         'python-swiftclient>=1.6.0',
         'python-keystoneclient>=0.7.0',
-        'python-cinderclient',
+        'python-cinderclient>=1.2.1',
         'python-glanceclient',
         'python-novaclient>=2.21.0',
         'pymysql',
@@ -32,18 +32,6 @@ class FakeTime:
         return True


-class FakeValidate:
-
-    def __init__(self):
-        return None
-
-    def validate_all_args_false(self, *args, **kwargs):
-        return False
-
-    def validate_any_args_false(self, *args, **kwargs):
-        return False
-
-
 class FakeLogging:

     def __init__(self):
@@ -165,6 +153,9 @@ class FakeArgparse:

    @classmethod
    def set_defaults(self, *args, **kwargs):
        for k, v in kwargs.iteritems():
            if k not in self.__dict__:
                self.__dict__[k] = v
        return self
@@ -510,7 +501,7 @@ class FakeSubProcess6:

 class Lvm:
     def __init__(self):
-        return None
+        pass

     def lvm_snap_remove(self, opt1=True):
         return True
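The one-line change above (return None to pass in __init__) is purely stylistic: an __init__ that falls off the end returns None implicitly, so both fakes construct identically. A quick illustration with throwaway classes:

```python
class WithReturn:
    def __init__(self):
        return None  # explicit, but equivalent to falling off the end

class WithPass:
    def __init__(self):
        pass  # implicit return None

# Both construct fine; __init__ must return None either way.
print(type(WithReturn()).__name__, type(WithPass()).__name__)
```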
@@ -525,12 +516,24 @@ class FakeIdObject:
         self.status = "available"
         self.size = 10
         self.min_disk = 10
         self.created_at = 1234


 class FakeCinderClient:
     def __init__(self):
         self.volumes = FakeCinderClient.Volumes()
         self.volume_snapshots = FakeCinderClient.VolumeSnapshot
+        self.backups = FakeCinderClient.Backups()
+
+    class Backups:
+        def __init__(self):
+            pass
+
+        def create(self, id, container, name, desription):
+            pass
+
+        def findall(self, **kwargs):
+            return [FakeIdObject(4)]

     class Volumes:
         def __init__(self):
@@ -715,6 +718,7 @@ class BackupOpt1:
         fakeconnector = fakeclient.client()
         fakeswclient = fakeconnector.Connection()
         self.mysql_conf = '/tmp/freezer-test-conf-file'
+        self.backup_media = 'fs'
         self.mysql_db_inst = FakeMySQLdb()
         self.lvm_auto_snap = '/dev/null'
         self.lvm_volgroup = 'testgroup'
@@ -794,14 +798,16 @@ class BackupOpt1:
         self.upload_limit = -1
         self.download_limit = -1
         self.sql_server_instance = 'Sql Server'
-        self.volume_id = ''
-        self.instance_id = ''
+        self.cinder_vol_id = ''
+        self.cindernative_vol_id = ''
+        self.nova_inst_id = ''
         self.options = OpenstackOptions.create_from_dict(os.environ)
         from freezer.osclients import ClientManager
         from mock import Mock
         self.client_manager = ClientManager(None, False, -1, -1, 2, False)
         self.client_manager.get_swift = Mock(
             return_value=FakeSwiftClient().client.Connection())
         self.client_manager.create_swift = self.client_manager.get_swift
         self.client_manager.get_glance = Mock(return_value=FakeGlanceClient())
+        self.client_manager.get_cinder = Mock(return_value=FakeCinderClient())
         nova_client = MagicMock()
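The hunk above stubs the ClientManager accessors with Mock objects that return fake clients, so the tests never reach a real OpenStack endpoint. The same pattern in isolation; FakeCinder, FakeBackup, and the nested Backups class here are stand-ins written for this sketch, not freezer code:

```python
from unittest import mock  # freezer's own tests use the external 'mock' package

class FakeBackup:
    def __init__(self, id):
        self.id = id

class FakeCinder:
    class Backups:
        def findall(self, **kwargs):
            # Always report one existing backup, like FakeIdObject(4) above
            return [FakeBackup(4)]

    def __init__(self):
        self.backups = FakeCinder.Backups()

client_manager = mock.Mock()
client_manager.get_cinder.return_value = FakeCinder()

# Code under test asks the manager for a cinder client and gets the fake
backups = client_manager.get_cinder().backups.findall(volume_id=34)
print(backups[0].id)  # 4
```

Because the accessor is a Mock attribute rather than a patched module, each test can swap in a different fake without monkeypatching the real client libraries.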
@@ -1,7 +1,7 @@
 #!/usr/bin/env python

 from freezer.backup import backup_mode_mysql, backup_mode_fs, backup_mode_mongo
-from freezer.backup import backup_cinder
+from freezer.backup import BackupOs
 import freezer
 import swiftclient
 import multiprocessing
@@ -189,7 +189,18 @@ class TestBackUP:
         assert backup_mode_mongo(
             backup_opt, 123456789, test_meta) is True

+    def test_backup_cinder_by_glance(self):
+        backup_opt = BackupOpt1()
+        backup_opt.volume_id = 34
+        BackupOs(backup_opt.client_manager,
+                 backup_opt.container,
+                 backup_opt.container_segments).backup_cinder_by_glance(
+            backup_opt, 1417649003)
+
     def test_backup_cinder(self):
         backup_opt = BackupOpt1()
         backup_opt.volume_id = 34
-        backup_cinder(backup_opt, 1417649003)
+        BackupOs(backup_opt.client_manager,
+                 backup_opt.container,
+                 backup_opt.container_segments).backup_cinder(
+            backup_opt, 1417649003)
@@ -82,12 +82,20 @@ class TestRestore:
         backup_opt.backup_name = 'abcdtest'
         pytest.raises(Exception, restore_fs_sort_obj, backup_opt)

     def test_restore_cinder_by_glance(self):
         backup_opt = BackupOpt1()
         ros = RestoreOs(backup_opt.client_manager, backup_opt.container)
         ros.restore_cinder_by_glance(backup_opt.restore_from_date, 34)

     def test_restore_cinder(self):
         backup_opt = BackupOpt1()
         ros = RestoreOs(backup_opt.client_manager, backup_opt.container)
         ros.restore_cinder(backup_opt.restore_from_date, 34)

     def test_restore_nova(self):
         backup_opt = BackupOpt1()
         ros = RestoreOs(backup_opt.client_manager, backup_opt.container)
         ros.restore_nova(backup_opt.restore_from_date, 34)
@@ -45,7 +45,7 @@ class TestSwift:
         monkeypatch.setattr(logging, 'exception', fakelogging.exception)
         monkeypatch.setattr(logging, 'error', fakelogging.error)

-        assert create_containers(backup_opt) is True
+        create_containers(backup_opt)

     def test_show_containers(self, monkeypatch):
@@ -82,7 +82,6 @@ class TestSwift:
         backup_opt.__dict__['list_objects'] = False
         assert show_objects(backup_opt) is False

-
     def test__remove_object(self, monkeypatch):
         backup_opt = BackupOpt1()
         fakelogging = FakeLogging()
@@ -106,7 +105,6 @@ class TestSwift:
         fakeswclient.num_try = 60
         pytest.raises(Exception, _remove_object, fakeclient, 'container', 'obj_name')

-
     def test_remove_object(self, monkeypatch):
         backup_opt = BackupOpt1()
         fakelogging = FakeLogging()
@@ -181,7 +179,6 @@ class TestSwift:
         backup_opt.container = False
         pytest.raises(Exception, get_container_content, backup_opt)

-
     def test_check_container_existance(self, monkeypatch):

         backup_opt = BackupOpt1()
@@ -261,7 +258,6 @@ class TestSwift:

         assert isinstance(get_containers_list(backup_opt), BackupOpt1) is True

-
     def test_object_to_file(self, monkeypatch):

         backup_opt = BackupOpt1()
@@ -273,7 +269,7 @@ class TestSwift:
         monkeypatch.setattr(logging, 'error', fakelogging.error)

         file_name_abs_path = '/tmp/test-abs-file-path'
-        assert object_to_file(backup_opt, file_name_abs_path) is True
+        object_to_file(backup_opt, file_name_abs_path)

         backup_opt = BackupOpt1()
         backup_opt.container = None
@@ -1,9 +1,9 @@
 #!/usr/bin/env python

 from freezer.utils import (
-    gen_manifest_meta, validate_all_args, validate_any_args,
+    gen_manifest_meta, validate_all_args,
     sort_backup_list, create_dir, get_match_backup,
-    get_newest_backup, get_rel_oldest_backup, get_abs_oldest_backup,
+    get_newest_backup, get_rel_oldest_backup,
     eval_restart_backup, set_backup_level,
     get_vol_fs_type, check_backup_and_tar_meta_existence,
     add_host_name_ts_level, get_mount_from_path, human2bytes, DateTime,
@@ -50,16 +50,6 @@ class TestUtils:
         assert validate_all_args(elements2) is False
         pytest.raises(Exception, validate_all_args, elements3)

-    def test_validate_any_args(self):
-
-        elements1 = ['test1', 'test2', 'test3']
-        elements2 = [None, None, False, None]
-        elements3 = None
-
-        assert validate_any_args(elements1) is True
-        assert validate_any_args(elements2) is False
-        pytest.raises(Exception, validate_any_args, elements3)
-
     def test_sort_backup_list(self):

         sorted_backups = sort_backup_list(BackupOpt1())
@@ -77,7 +67,6 @@ class TestUtils:
                 max_time = backup_time
                 max_level = level

-
     def test_create_dir(self, monkeypatch):

         dir1 = '/tmp'
@@ -128,22 +117,6 @@ class TestUtils:
         backup_opt.__dict__['backup_name'] = ''
         pytest.raises(Exception, get_rel_oldest_backup, backup_opt)

-    def test_get_abs_oldest_backup(self):
-
-        backup_opt = BackupOpt1()
-        backup_opt.__dict__['remote_match_backup'] = []
-        backup_opt = get_abs_oldest_backup(backup_opt)
-        assert len(backup_opt.remote_abs_oldest) == 0
-
-        backup_opt = BackupOpt1()
-        backup_opt.__dict__['remote_match_backup'] = backup_opt.remote_obj_list
-        backup_opt = get_abs_oldest_backup(backup_opt)
-        assert len(backup_opt.remote_abs_oldest) > 0
-
-        backup_opt = BackupOpt1()
-        backup_opt.__dict__['backup_name'] = ''
-        pytest.raises(Exception, get_abs_oldest_backup, backup_opt)
-
     def test_eval_restart_backup(self, monkeypatch):

         backup_opt = BackupOpt1()
@@ -165,7 +138,6 @@ class TestUtils:
         assert eval_restart_backup(backup_opt) is not None
         #pytest.raises(Exception, eval_restart_backup, backup_opt)

-
     def test_set_backup_level(self):

         manifest_meta = dict()