Fixed based on the reviewer's comments.

1. DB schema change
   vcpu/memory/hdd info used to be stored in the Service table,
   but the reviewer pointed out that creating a new table is better,
   since the Service table already has too many columns.

2. Unified method for querying the Service table
   Several different methods were used for the same purpose of querying
   the compute-node record from the DB. Changed them to use a single method.

3. Removed an unnecessary operation
   FixedIp no longer has a host column. I had not noticed that, so the
   now-unnecessary operation has been removed from post_live_migration.

4. Test code
   Modified the test code to follow the above changes.
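The split described in item 1 can be sketched with plain SQL. The table and column names below follow the `compute_service` records referenced later in the scheduler diffs, but the exact DDL is illustrative, not the migration shipped in this commit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Before: resource columns lived directly on the (already wide) services
# table. After: they move to a separate compute_services table keyed by
# service_id, mirroring the compute_service relation used by the scheduler.
conn.executescript("""
CREATE TABLE services (
    id INTEGER PRIMARY KEY,
    host TEXT,
    topic TEXT
);
CREATE TABLE compute_services (
    id INTEGER PRIMARY KEY,
    service_id INTEGER REFERENCES services(id),
    vcpus INTEGER, memory_mb INTEGER, local_gb INTEGER,
    vcpus_used INTEGER, memory_mb_used INTEGER, local_gb_used INTEGER
);
""")

conn.execute("INSERT INTO services (id, host, topic)"
             " VALUES (1, 'node1', 'compute')")
conn.execute("INSERT INTO compute_services (service_id, vcpus, memory_mb,"
             " local_gb, vcpus_used, memory_mb_used, local_gb_used)"
             " VALUES (1, 16, 32768, 500, 4, 8192, 100)")

# A service_get_all_compute_by_host-style lookup then joins the two tables.
row = conn.execute(
    "SELECT s.host, c.vcpus, c.memory_mb FROM services s"
    " JOIN compute_services c ON c.service_id = s.id"
    " WHERE s.host = ? AND s.topic = 'compute'", ("node1",)).fetchone()
print(row)
```

The join keeps the Service table narrow while letting compute-only callers fetch the resource columns in one query.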
Kei Masumoto
2011-02-22 13:16:52 +09:00
21 changed files with 842 additions and 808 deletions


@@ -1,36 +1,43 @@
 # Format is:
-# <preferred e-mail> <other e-mail>
-<code@term.ie> <github@anarkystic.com>
-<code@term.ie> <termie@preciousroy.local>
-<Armando.Migliaccio@eu.citrix.com> <armando.migliaccio@citrix.com>
-<matt.dietz@rackspace.com> <matthewdietz@Matthew-Dietzs-MacBook-Pro.local>
-<matt.dietz@rackspace.com> <mdietz@openstack>
-<cbehrens@codestud.com> <chris.behrens@rackspace.com>
-<devin.carlen@gmail.com> <devcamcar@illian.local>
-<ewan.mellor@citrix.com> <emellor@silver>
-<jaypipes@gmail.com> <jpipes@serialcoder>
+# <preferred e-mail> <other e-mail 1>
+# <preferred e-mail> <other e-mail 2>
 <anotherjesse@gmail.com> <jesse@dancelamb>
 <anotherjesse@gmail.com> <jesse@gigantor.local>
 <anotherjesse@gmail.com> <jesse@ubuntu>
-<jmckenty@gmail.com> <jmckenty@yyj-dhcp171.corp.flock.com>
+<ant@openstack.org> <amesserl@rackspace.com>
+<Armando.Migliaccio@eu.citrix.com> <armando.migliaccio@citrix.com>
+<brian.lamar@rackspace.com> <brian.lamar@gmail.com>
+<bschott@isi.edu> <bfschott@gmail.com>
+<cbehrens@codestud.com> <chris.behrens@rackspace.com>
+<chiradeep@cloud.com> <chiradeep@chiradeep-lt2>
+<code@term.ie> <github@anarkystic.com>
+<code@term.ie> <termie@preciousroy.local>
+<corywright@gmail.com> <cory.wright@rackspace.com>
+<devin.carlen@gmail.com> <devcamcar@illian.local>
+<ewan.mellor@citrix.com> <emellor@silver>
+<jaypipes@gmail.com> <jpipes@serialcoder>
 <jmckenty@gmail.com> <jmckenty@joshua-mckentys-macbook-pro.local>
+<jmckenty@gmail.com> <jmckenty@yyj-dhcp171.corp.flock.com>
 <jmckenty@gmail.com> <joshua.mckenty@nasa.gov>
 <justin@fathomdb.com> <justinsb@justinsb-desktop>
-<masumotok@nttdata.co.jp> <root@openstack2-api>
+<justin@fathomdb.com> <superstack@superstack.org>
 <masumotok@nttdata.co.jp> Masumoto<masumotok@nttdata.co.jp>
+<masumotok@nttdata.co.jp> <root@openstack2-api>
+<matt.dietz@rackspace.com> <matthewdietz@Matthew-Dietzs-MacBook-Pro.local>
+<matt.dietz@rackspace.com> <mdietz@openstack>
 <mordred@inaugust.com> <mordred@hudson>
-<paul@openstack.org> <pvoccio@castor.local>
 <paul@openstack.org> <paul.voccio@rackspace.com>
+<paul@openstack.org> <pvoccio@castor.local>
+<rconradharris@gmail.com> <rick.harris@rackspace.com>
+<rlane@wikimedia.org> <laner@controller>
+<sleepsonthefloor@gmail.com> <root@tonbuntu>
 <soren.hansen@rackspace.com> <soren@linux2go.dk>
 <todd@ansolabs.com> <todd@lapex>
 <todd@ansolabs.com> <todd@rubidine.com>
-<vishvananda@gmail.com> <vishvananda@yahoo.com>
+<tushar.vitthal.patil@gmail.com> <tpatil@vertex.co.in>
+<ueno.nachi@lab.ntt.co.jp> <nati.ueno@gmail.com>
+<ueno.nachi@lab.ntt.co.jp> <nova@u4>
+<ueno.nachi@lab.ntt.co.jp> <openstack@lab.ntt.co.jp>
 <vishvananda@gmail.com> <root@mirror.nasanebula.net>
 <vishvananda@gmail.com> <root@ubuntu>
-<sleepsonthefloor@gmail.com> <root@tonbuntu>
-<rlane@wikimedia.org> <laner@controller>
-<rconradharris@gmail.com> <rick.harris@rackspace.com>
-<corywright@gmail.com> <cory.wright@rackspace.com>
-<ant@openstack.org> <amesserl@rackspace.com>
-<chiradeep@cloud.com> <chiradeep@chiradeep-lt2>
-<justin@fathomdb.com> <superstack@superstack.org>
+<vishvananda@gmail.com> <vishvananda@yahoo.com>

Authors

@@ -4,13 +4,16 @@ Anthony Young <sleepsonthefloor@gmail.com>
 Antony Messerli <ant@openstack.org>
 Armando Migliaccio <Armando.Migliaccio@eu.citrix.com>
 Bilal Akhtar <bilalakhtar@ubuntu.com>
+Brian Lamar <brian.lamar@rackspace.com>
+Brian Schott <bschott@isi.edu>
+Brian Waldon <brian.waldon@rackspace.com>
 Chiradeep Vittal <chiradeep@cloud.com>
 Chmouel Boudjnah <chmouel@chmouel.com>
 Chris Behrens <cbehrens@codestud.com>
 Christian Berendt <berendt@b1-systems.de>
 Cory Wright <corywright@gmail.com>
-David Pravec <David.Pravec@danix.org>
 Dan Prince <dan.prince@rackspace.com>
+David Pravec <David.Pravec@danix.org>
 Dean Troyer <dtroyer@gmail.com>
 Devin Carlen <devin.carlen@gmail.com>
 Ed Leafe <ed@leafe.com>
@@ -41,7 +44,8 @@ Monsyne Dragon <mdragon@rackspace.com>
 Monty Taylor <mordred@inaugust.com>
 MORITA Kazutaka <morita.kazutaka@gmail.com>
 Muneyuki Noguchi <noguchimn@nttdata.co.jp>
-Nachi Ueno <ueno.nachi@lab.ntt.co.jp> <openstack@lab.ntt.co.jp> <nati.ueno@gmail.com> <nova@u4>
+Nachi Ueno <ueno.nachi@lab.ntt.co.jp>
+Naveed Massjouni <naveed.massjouni@rackspace.com>
 Paul Voccio <paul@openstack.org>
 Ricardo Carrillo Cruz <emaildericky@gmail.com>
 Rick Clark <rick@openstack.org>
@@ -55,7 +59,8 @@ Soren Hansen <soren.hansen@rackspace.com>
 Thierry Carrez <thierry@openstack.org>
 Todd Willey <todd@ansolabs.com>
 Trey Morris <trey.morris@rackspace.com>
-Tushar Patil <tushar.vitthal.patil@gmail.com> <tpatil@vertex.co.in>
+Tushar Patil <tushar.vitthal.patil@gmail.com>
+Vasiliy Shlykov <vash@vasiliyshlykov.org>
 Vishvananda Ishaya <vishvananda@gmail.com>
 Youcef Laribi <Youcef.Laribi@eu.citrix.com>
 Zhixue Wu <Zhixue.Wu@citrix.com>

HACKING

@@ -47,3 +47,22 @@ Human Alphabetical Order Examples
   from nova.auth import users
   from nova.endpoint import api
   from nova.endpoint import cloud
+
+
+Docstrings
+----------
+  """Summary of the function, class or method, less than 80 characters.
+
+  New paragraph after newline that explains in more detail any general
+  information about the function, class or method. After this, if defining
+  parameters and return types use the Sphinx format. After that an extra
+  newline then close the quotations.
+
+  When writing the docstring for a class, an extra line should be placed
+  after the closing quotations. For more in-depth explanations for these
+  decisions see http://www.python.org/dev/peps/pep-0257/
+
+  :param foo: the foo parameter
+  :param bar: the bar parameter
+  :returns: description of the return value
+  """


@@ -6,14 +6,23 @@ graft doc
 graft smoketests
 graft tools
 graft etc
+graft bzrplugins
+graft contrib
+graft po
+graft plugins
 include nova/api/openstack/notes.txt
+include nova/auth/*.schema
 include nova/auth/novarc.template
+include nova/auth/opendj.sh
 include nova/auth/slap.sh
 include nova/cloudpipe/bootscript.sh
 include nova/cloudpipe/client.ovpn.template
+include nova/cloudpipe/bootscript.template
 include nova/compute/fakevirtinstance.xml
 include nova/compute/interfaces.template
+include nova/console/xvp.conf.template
 include nova/db/sqlalchemy/migrate_repo/migrate.cfg
+include nova/db/sqlalchemy/migrate_repo/README
 include nova/virt/interfaces.template
 include nova/virt/libvirt*.xml.template
 include nova/tests/CA/
@@ -25,6 +34,7 @@ include nova/tests/bundle/1mb.manifest.xml
 include nova/tests/bundle/1mb.no_kernel_or_ramdisk.manifest.xml
 include nova/tests/bundle/1mb.part.0
 include nova/tests/bundle/1mb.part.1
+include nova/tests/db/nova.austin.sqlite
 include plugins/xenapi/README
 include plugins/xenapi/etc/xapi.d/plugins/objectstore
 include plugins/xenapi/etc/xapi.d/plugins/pluginlib_nova.py


@@ -433,6 +433,37 @@ class ProjectCommands(object):
                   "nova-api server on this host.")


+class FixedIpCommands(object):
+    """Class for managing fixed ip."""
+
+    def list(self, host=None):
+        """Lists all fixed ips (optionally by host) arguments: [host]"""
+        ctxt = context.get_admin_context()
+        if host == None:
+            fixed_ips = db.fixed_ip_get_all(ctxt)
+        else:
+            fixed_ips = db.fixed_ip_get_all_by_host(ctxt, host)
+        print "%-18s\t%-15s\t%-17s\t%-15s\t%s" % (_('network'),
+                                                  _('IP address'),
+                                                  _('MAC address'),
+                                                  _('hostname'),
+                                                  _('host'))
+        for fixed_ip in fixed_ips:
+            hostname = None
+            host = None
+            mac_address = None
+            if fixed_ip['instance']:
+                instance = fixed_ip['instance']
+                hostname = instance['hostname']
+                host = instance['host']
+                mac_address = instance['mac_address']
+            print "%-18s\t%-15s\t%-17s\t%-15s\t%s" % (
+                    fixed_ip['network']['cidr'],
+                    fixed_ip['address'],
+                    mac_address, hostname, host)
+

 class FloatingIpCommands(object):
     """Class for managing floating ip."""

@@ -472,8 +503,8 @@ class NetworkCommands(object):
     """Class for managing networks."""

     def create(self, fixed_range=None, num_networks=None,
-               network_size=None, vlan_start=None, vpn_start=None,
-               fixed_range_v6=None):
+               network_size=None, vlan_start=None,
+               vpn_start=None, fixed_range_v6=None, label='public'):
         """Creates fixed ips for host by range
            arguments: [fixed_range=FLAG], [num_networks=FLAG],
            [network_size=FLAG], [vlan_start=FLAG],
@@ -495,16 +526,29 @@ class NetworkCommands(object):
                               cidr=fixed_range,
                               num_networks=int(num_networks),
                               network_size=int(network_size),
-                              cidr_v6=fixed_range_v6,
                               vlan_start=int(vlan_start),
-                              vpn_start=int(vpn_start))
+                              vpn_start=int(vpn_start),
+                              cidr_v6=fixed_range_v6,
+                              label=label)
+
+    def list(self):
+        """List all created networks"""
+        print "%-18s\t%-15s\t%-15s\t%-15s" % (_('network'),
+                                              _('netmask'),
+                                              _('start address'),
+                                              'DNS')
+        for network in db.network_get_all(context.get_admin_context()):
+            print "%-18s\t%-15s\t%-15s\t%-15s" % (network.cidr,
+                                                  network.netmask,
+                                                  network.dhcp_start,
+                                                  network.dns)

 class InstanceCommands(object):
     """Class for mangaging VM instances."""

     def live_migration(self, ec2_id, dest):
-        """live_migration"""
+        """Migrates a running instance to a new machine."""

         ctxt = context.get_admin_context()
         instance_id = ec2_id_to_id(ec2_id)
@@ -513,8 +557,8 @@ class InstanceCommands(object):
             msg = _('Only KVM is supported for now. Sorry!')
             raise exception.Error(msg)

-        if FLAGS.volume_driver != 'nova.volume.driver.AOEDriver' and \
-           FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver':
+        if (FLAGS.volume_driver != 'nova.volume.driver.AOEDriver' and \
+            FLAGS.volume_driver != 'nova.volume.driver.ISCSIDriver'):
             msg = _("Support only AOEDriver and ISCSIDriver. Sorry!")
             raise exception.Error(msg)

@@ -586,15 +630,14 @@ class ServiceCommands(object):
         # when this feture is included in API.
         if type(result) != dict:
             print 'Unexpected error occurs'
-        elif not result['ret']:
-            print '%s' % result['msg']
+            print '[Result]', result
         else:
-            cpu = result['phy_resource']['vcpus']
-            mem = result['phy_resource']['memory_mb']
-            hdd = result['phy_resource']['local_gb']
-            cpu_u = result['phy_resource']['vcpus_used']
-            mem_u = result['phy_resource']['memory_mb_used']
-            hdd_u = result['phy_resource']['local_gb_used']
+            cpu = result['resource']['vcpus']
+            mem = result['resource']['memory_mb']
+            hdd = result['resource']['local_gb']
+            cpu_u = result['resource']['vcpus_used']
+            mem_u = result['resource']['memory_mb_used']
+            hdd_u = result['resource']['local_gb_used']

             print 'HOST\t\t\tPROJECT\t\tcpu\tmem(mb)\tdisk(gb)'
             print '%s(total)\t\t\t%s\t%s\t%s' % (host, cpu, mem, hdd)
@@ -657,6 +700,13 @@ class VolumeCommands(object):
         ctxt = context.get_admin_context()
         volume = db.volume_get(ctxt, param2id(volume_id))
         host = volume['host']
+
+        if not host:
+            print "Volume not yet assigned to host."
+            print "Deleting volume from database and skipping rpc."
+            db.volume_destroy(ctxt, param2id(volume_id))
+            return
+
         if volume['status'] == 'in-use':
             print "Volume is in-use."
             print "Detach volume from instance and then try again."
@@ -693,6 +743,7 @@ CATEGORIES = [
     ('role', RoleCommands),
     ('shell', ShellCommands),
     ('vpn', VpnCommands),
+    ('fixed', FixedIpCommands),
     ('floating', FloatingIpCommands),
     ('network', NetworkCommands),
     ('instance', InstanceCommands),


@@ -1826,7 +1826,7 @@ msgstr ""
 #: nova/virt/xenapi/vm_utils.py:290
 #, python-format
-msgid "PV Kernel in VDI:%d"
+msgid "PV Kernel in VDI:%s"
 msgstr ""

 #: nova/virt/xenapi/vm_utils.py:318


@@ -74,6 +74,25 @@ LOG = logging.getLogger("nova.ldapdriver")
 # in which we may want to change the interface a bit more.

+
+def _clean(attr):
+    """Clean attr for insertion into ldap"""
+    if attr is None:
+        return None
+    if type(attr) is unicode:
+        return str(attr)
+    return attr
+
+
+def sanitize(fn):
+    """Decorator to sanitize all args"""
+    def _wrapped(self, *args, **kwargs):
+        args = [_clean(x) for x in args]
+        kwargs = dict((k, _clean(v)) for (k, v) in kwargs)
+        return fn(self, *args, **kwargs)
+    _wrapped.func_name = fn.func_name
+    return _wrapped
+

 class LdapDriver(object):
     """Ldap Auth driver

@@ -106,23 +125,27 @@ class LdapDriver(object):
         self.conn.unbind_s()
         return False

+    @sanitize
     def get_user(self, uid):
         """Retrieve user by id"""
         attr = self.__get_ldap_user(uid)
         return self.__to_user(attr)

+    @sanitize
     def get_user_from_access_key(self, access):
         """Retrieve user by access key"""
         query = '(accessKey=%s)' % access
         dn = FLAGS.ldap_user_subtree
         return self.__to_user(self.__find_object(dn, query))

+    @sanitize
     def get_project(self, pid):
         """Retrieve project by id"""
         dn = self.__project_to_dn(pid)
         attr = self.__find_object(dn, LdapDriver.project_pattern)
         return self.__to_project(attr)

+    @sanitize
     def get_users(self):
         """Retrieve list of users"""
         attrs = self.__find_objects(FLAGS.ldap_user_subtree,
@@ -134,6 +157,7 @@ class LdapDriver(object):
                 users.append(user)
         return users

+    @sanitize
     def get_projects(self, uid=None):
         """Retrieve list of projects"""
         pattern = LdapDriver.project_pattern
@@ -143,6 +167,7 @@ class LdapDriver(object):
                                     pattern)
         return [self.__to_project(attr) for attr in attrs]

+    @sanitize
     def create_user(self, name, access_key, secret_key, is_admin):
         """Create a user"""
         if self.__user_exists(name):
@@ -196,6 +221,7 @@ class LdapDriver(object):
         self.conn.add_s(self.__uid_to_dn(name), attr)
         return self.__to_user(dict(attr))

+    @sanitize
     def create_project(self, name, manager_uid,
                        description=None, member_uids=None):
         """Create a project"""
@@ -231,6 +257,7 @@ class LdapDriver(object):
         self.conn.add_s(dn, attr)
         return self.__to_project(dict(attr))

+    @sanitize
     def modify_project(self, project_id, manager_uid=None, description=None):
         """Modify an existing project"""
         if not manager_uid and not description:
@@ -249,21 +276,25 @@ class LdapDriver(object):
         dn = self.__project_to_dn(project_id)
         self.conn.modify_s(dn, attr)

+    @sanitize
     def add_to_project(self, uid, project_id):
         """Add user to project"""
         dn = self.__project_to_dn(project_id)
         return self.__add_to_group(uid, dn)

+    @sanitize
     def remove_from_project(self, uid, project_id):
         """Remove user from project"""
         dn = self.__project_to_dn(project_id)
         return self.__remove_from_group(uid, dn)

+    @sanitize
     def is_in_project(self, uid, project_id):
         """Check if user is in project"""
         dn = self.__project_to_dn(project_id)
         return self.__is_in_group(uid, dn)

+    @sanitize
     def has_role(self, uid, role, project_id=None):
         """Check if user has role

@@ -273,6 +304,7 @@ class LdapDriver(object):
         role_dn = self.__role_to_dn(role, project_id)
         return self.__is_in_group(uid, role_dn)

+    @sanitize
     def add_role(self, uid, role, project_id=None):
         """Add role for user (or user and project)"""
         role_dn = self.__role_to_dn(role, project_id)
@@ -283,11 +315,13 @@ class LdapDriver(object):
         else:
             return self.__add_to_group(uid, role_dn)

+    @sanitize
     def remove_role(self, uid, role, project_id=None):
         """Remove role for user (or user and project)"""
         role_dn = self.__role_to_dn(role, project_id)
         return self.__remove_from_group(uid, role_dn)

+    @sanitize
     def get_user_roles(self, uid, project_id=None):
         """Retrieve list of roles for user (or user and project)"""
         if project_id is None:
@@ -307,6 +341,7 @@ class LdapDriver(object):
             roles = self.__find_objects(project_dn, query)
             return [role['cn'][0] for role in roles]

+    @sanitize
     def delete_user(self, uid):
         """Delete a user"""
         if not self.__user_exists(uid):
@@ -332,12 +367,14 @@ class LdapDriver(object):
         # Delete entry
         self.conn.delete_s(self.__uid_to_dn(uid))

+    @sanitize
     def delete_project(self, project_id):
         """Delete a project"""
         project_dn = self.__project_to_dn(project_id)
         self.__delete_roles(project_dn)
         self.__delete_group(project_dn)

+    @sanitize
     def modify_user(self, uid, access_key=None, secret_key=None, admin=None):
         """Modify an existing user"""
         if not access_key and not secret_key and admin is None:

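The `_clean`/`sanitize` pattern added above targets Python 2, where `unicode` arguments must be coerced to `str` before reaching the LDAP layer. A Python 3 sketch of the same idea (with `bytes` standing in for the type that needs cleaning, and `functools.wraps` replacing the manual `func_name` copy) looks like this:

```python
import functools


def _clean(attr):
    """Normalize one attribute before it is handed to the backend."""
    if attr is None:
        return None
    if isinstance(attr, bytes):
        return attr.decode('utf-8')
    return attr


def sanitize(fn):
    """Decorator that runs _clean over every positional and keyword arg."""
    @functools.wraps(fn)
    def _wrapped(self, *args, **kwargs):
        args = [_clean(x) for x in args]
        kwargs = {k: _clean(v) for k, v in kwargs.items()}
        return fn(self, *args, **kwargs)
    return _wrapped


class Driver:
    @sanitize
    def get_user(self, uid):
        # With the decorator in place, the method body never sees bytes.
        return uid


print(Driver().get_user(b'alice'))
```

Centralizing the coercion in one decorator means each driver method stays free of per-argument type checks.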

@@ -10,7 +10,6 @@ export NOVA_CERT=${NOVA_KEY_DIR}/%(nova)s
 export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
 alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
 alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
-export CLOUD_SERVERS_API_KEY="%(access)s"
-export CLOUD_SERVERS_USERNAME="%(user)s"
-export CLOUD_SERVERS_URL="%(os)s"
+export NOVA_API_KEY="%(access)s"
+export NOVA_USERNAME="%(user)s"
+export NOVA_URL="%(os)s"


@@ -282,6 +282,8 @@ DEFINE_integer('auth_token_ttl', 3600, 'Seconds for auth tokens to linger')
 DEFINE_string('state_path', os.path.join(os.path.dirname(__file__), '../'),
               "Top-level directory for maintaining nova's state")
+DEFINE_string('logdir', None, 'output to a per-service log file in named '
+                              'directory')

 DEFINE_string('sql_connection',
               'sqlite:///$state_path/nova.sqlite',


@@ -28,9 +28,11 @@ It also allows setting of formatting information through flags.
 import cStringIO
+import inspect
 import json
 import logging
 import logging.handlers
+import os
 import sys
 import traceback

@@ -92,7 +94,7 @@
 log = logging.log

 # handlers
 StreamHandler = logging.StreamHandler
-FileHandler = logging.FileHandler
+WatchedFileHandler = logging.handlers.WatchedFileHandler
 # logging.SysLogHandler is nicer than logging.logging.handler.SysLogHandler.
 SysLogHandler = logging.handlers.SysLogHandler

@@ -111,6 +113,18 @@ def _dictify_context(context):
     return context


+def _get_binary_name():
+    return os.path.basename(inspect.stack()[-1][1])
+
+
+def get_log_file_path(binary=None):
+    if FLAGS.logfile:
+        return FLAGS.logfile
+    if FLAGS.logdir:
+        binary = binary or _get_binary_name()
+        return '%s.log' % (os.path.join(FLAGS.logdir, binary),)
+
+
 def basicConfig():
     logging.basicConfig()
     for handler in logging.root.handlers:
@@ -123,8 +137,9 @@ def basicConfig():
         syslog = SysLogHandler(address='/dev/log')
         syslog.setFormatter(_formatter)
         logging.root.addHandler(syslog)
-    if FLAGS.logfile:
-        logfile = FileHandler(FLAGS.logfile)
+    logpath = get_log_file_path()
+    if logpath:
+        logfile = WatchedFileHandler(logpath)
         logfile.setFormatter(_formatter)
         logging.root.addHandler(logfile)
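The path logic added above can be exercised standalone. This sketch replaces the `FLAGS` lookups with plain arguments, so the names here are a simplification rather than the real nova interface:

```python
import os


def get_log_file_path(logfile=None, logdir=None, binary=None):
    # Mirrors the helper in the diff: an explicit logfile wins outright;
    # otherwise logdir produces a per-service "<binary>.log" path.
    if logfile:
        return logfile
    if logdir:
        return '%s.log' % (os.path.join(logdir, binary),)


print(get_log_file_path(logfile='/var/log/nova/api.log'))
print(get_log_file_path(logdir='/var/log/nova', binary='nova-compute'))
```

The switch from `FileHandler` to `WatchedFileHandler` in the same hunk matters for rotation: `WatchedFileHandler` notices when the file on disk is replaced (e.g. by logrotate) and reopens it, so the service keeps logging to the new file without a restart.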


@@ -29,6 +29,7 @@ import uuid
 from carrot import connection as carrot_connection
 from carrot import messaging
+from eventlet import greenpool
 from eventlet import greenthread

 from nova import context
@@ -42,6 +43,8 @@ from nova import utils
 FLAGS = flags.FLAGS
 LOG = logging.getLogger('nova.rpc')

+flags.DEFINE_integer('rpc_thread_pool_size', 1024, 'Size of RPC thread pool')
+

 class Connection(carrot_connection.BrokerConnection):
     """Connection instance object"""
@@ -155,11 +158,15 @@ class AdapterConsumer(TopicConsumer):
     def __init__(self, connection=None, topic="broadcast", proxy=None):
         LOG.debug(_('Initing the Adapter Consumer for %s') % topic)
         self.proxy = proxy
+        self.pool = greenpool.GreenPool(FLAGS.rpc_thread_pool_size)
         super(AdapterConsumer, self).__init__(connection=connection,
                                               topic=topic)

+    def receive(self, *args, **kwargs):
+        self.pool.spawn_n(self._receive, *args, **kwargs)
+
     @exception.wrap_exception
-    def receive(self, message_data, message):
+    def _receive(self, message_data, message):
         """Magically looks for a method on the proxy object and calls it

         Message data should be a dictionary with two keys:

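The consumer change above splits `receive` into a cheap scheduling step and a `_receive` worker run on a bounded pool, so one slow RPC handler cannot block the consumer loop. Nova uses an eventlet `GreenPool`; this standalone sketch substitutes the stdlib `ThreadPoolExecutor` for it, so treat it as an analogue rather than the actual nova.rpc code:

```python
from concurrent.futures import ThreadPoolExecutor


class AdapterConsumer:
    def __init__(self, pool_size=4):
        # Bounded pool, playing the role of GreenPool(FLAGS.rpc_thread_pool_size).
        self.pool = ThreadPoolExecutor(max_workers=pool_size)
        self.handled = []

    def receive(self, message):
        # Cheap: just hand the message to the pool and return immediately.
        return self.pool.submit(self._receive, message)

    def _receive(self, message):
        # The (potentially slow) dispatch that previously ran inline.
        self.handled.append(message)
        return message


consumer = AdapterConsumer()
futures = [consumer.receive(n) for n in range(5)]
results = sorted(f.result() for f in futures)
print(results)
```

The pool size caps concurrency: once `pool_size` handlers are in flight, further messages queue instead of spawning unbounded work.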

@@ -119,11 +119,12 @@ class Scheduler(object):
msg = _('volume node is not alive(time synchronize problem?)') msg = _('volume node is not alive(time synchronize problem?)')
raise exception.Invalid(msg) raise exception.Invalid(msg)
# Checking src host is alive. # Checking src host exists and compute node
src = instance_ref['host'] src = instance_ref['host']
services = db.service_get_all_by_topic(context, 'compute') services = db.service_get_all_compute_by_host(context, src)
services = [service for service in services if service.host == src]
if len(services) < 1 or not self.service_is_up(services[0]): # Checking src host is alive.
if not self.service_is_up(services[0]):
msg = _('%s is not alive(time synchronize problem?)') msg = _('%s is not alive(time synchronize problem?)')
raise exception.Invalid(msg % src) raise exception.Invalid(msg % src)
@@ -131,15 +132,8 @@ class Scheduler(object):
"""Live migration check routine (for destination host)""" """Live migration check routine (for destination host)"""
# Checking dest exists and compute node. # Checking dest exists and compute node.
dservice_refs = db.service_get_all_by_host(context, dest) dservice_refs = db.service_get_all_compute_by_host(context, dest)
if len(dservice_refs) <= 0:
msg = _('%s does not exists.')
raise exception.Invalid(msg % dest)
dservice_ref = dservice_refs[0] dservice_ref = dservice_refs[0]
if dservice_ref['topic'] != 'compute':
msg = _('%s must be compute node')
raise exception.Invalid(msg % dest)
# Checking dest host is alive. # Checking dest host is alive.
if not self.service_is_up(dservice_ref): if not self.service_is_up(dservice_ref):
@@ -169,18 +163,18 @@ class Scheduler(object):
         self.mounted_on_same_shared_storage(context, instance_ref, dest)

         # Checking dest exists.
-        dservice_refs = db.service_get_all_by_host(context, dest)
-        if len(dservice_refs) <= 0:
-            raise exception.Invalid(_('%s does not exists.') % dest)
-        dservice_ref = dservice_refs[0]
+        dservice_refs = db.service_get_all_compute_by_host(context, dest)
+        dservice_ref = dservice_refs[0]['compute_service'][0]

         # Checking original host( where instance was launched at) exists.
-        oservice_refs = db.service_get_all_by_host(context,
-                                                   instance_ref['launched_on'])
-        if len(oservice_refs) <= 0:
+        try:
+            oservice_refs = \
+                db.service_get_all_compute_by_host(context,
+                                                   instance_ref['launched_on'])
+        except exception.NotFound:
             msg = _('%s(where instance was launched at) does not exists.')
             raise exception.Invalid(msg % instance_ref['launched_on'])
-        oservice_ref = oservice_refs[0]
+        oservice_ref = oservice_refs[0]['compute_service'][0]

         # Checking hypervisor is same.
         o = oservice_ref['hypervisor_type']

@@ -223,13 +217,11 @@ class Scheduler(object):
         ec2_id = instance_ref['hostname']

         # Getting host information
-        service_refs = db.service_get_all_by_host(context, dest)
-        if len(service_refs) <= 0:
-            raise exception.Invalid(_('%s does not exists.') % dest)
-        service_ref = service_refs[0]
+        service_refs = db.service_get_all_compute_by_host(context, dest)
+        compute_service_ref = service_refs[0]['compute_service'][0]

-        mem_total = int(service_ref['memory_mb'])
-        mem_used = int(service_ref['memory_mb_used'])
+        mem_total = int(compute_service_ref['memory_mb'])
+        mem_used = int(compute_service_ref['memory_mb_used'])
         mem_avail = mem_total - mem_used
         mem_inst = instance_ref['memory_mb']
         if mem_avail <= mem_inst:
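With the new compute_service child table, the resource columns hang off the service row rather than living on `Service` itself. A minimal sketch of the memory check above, using a hypothetical plain-dict row in place of the SQLAlchemy objects:

```python
# Hypothetical stand-in for a row returned by
# db.service_get_all_compute_by_host(context, dest): resource columns
# now live on the child 'compute_service' record, not on the service row.
service_refs = [{'host': 'desthost',
                 'compute_service': [{'memory_mb': 4096,
                                      'memory_mb_used': 1024}]}]

compute_service_ref = service_refs[0]['compute_service'][0]

mem_total = int(compute_service_ref['memory_mb'])
mem_used = int(compute_service_ref['memory_mb_used'])
mem_avail = mem_total - mem_used

# An instance asking for more than mem_avail would be rejected.
instance_mem = 2048
assert mem_avail > instance_mem
```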


@@ -74,30 +74,26 @@ class SchedulerManager(manager.Manager):
     def show_host_resource(self, context, host, *args):
         """show the physical/usage resource given by hosts."""
-        compute_refs = db.service_get_all_compute_sorted(context)
-        compute_refs = [s for s, v in compute_refs if s['host'] == host]
-        if 0 == len(compute_refs):
-            return {'ret': False, 'msg': 'No such Host or not compute node.'}
+        compute_ref = db.service_get_all_compute_by_host(context, host)
+        compute_ref = compute_ref[0]

         # Getting physical resource information
-        h_resource = {'vcpus': compute_refs[0]['vcpus'],
-                      'memory_mb': compute_refs[0]['memory_mb'],
-                      'local_gb': compute_refs[0]['local_gb'],
-                      'vcpus_used': compute_refs[0]['vcpus_used'],
-                      'memory_mb_used': compute_refs[0]['memory_mb_used'],
-                      'local_gb_used': compute_refs[0]['local_gb_used']}
+        compute_service_ref = compute_ref['compute_service'][0]
+        resource = {'vcpus': compute_service_ref['vcpus'],
+                    'memory_mb': compute_service_ref['memory_mb'],
+                    'local_gb': compute_service_ref['local_gb'],
+                    'vcpus_used': compute_service_ref['vcpus_used'],
+                    'memory_mb_used': compute_service_ref['memory_mb_used'],
+                    'local_gb_used': compute_service_ref['local_gb_used']}

         # Getting usage resource information
-        u_resource = {}
-        instances_refs = db.instance_get_all_by_host(context,
-                                                     compute_refs[0]['host'])
-        if 0 == len(instances_refs):
-            return {'ret': True,
-                    'phy_resource': h_resource,
-                    'usage': u_resource}
-        project_ids = [i['project_id'] for i in instances_refs]
+        usage = {}
+        instance_refs = db.instance_get_all_by_host(context,
+                                                    compute_ref['host'])
+        if 0 == len(instance_refs):
+            return {'resource': resource, 'usage': usage}
+        project_ids = [i['project_id'] for i in instance_refs]
         project_ids = list(set(project_ids))
         for i in project_ids:
             vcpus = db.instance_get_vcpu_sum_by_host_and_project(context,
@@ -109,8 +105,8 @@ class SchedulerManager(manager.Manager):
             hdd = db.instance_get_disk_sum_by_host_and_project(context,
                                                                host,
                                                                i)
-            u_resource[i] = {'vcpus': int(vcpus),
-                             'memory_mb': int(mem),
-                             'local_gb': int(hdd)}
+            usage[i] = {'vcpus': int(vcpus),
+                        'memory_mb': int(mem),
+                        'local_gb': int(hdd)}
-        return {'ret': True, 'phy_resource': h_resource, 'usage': u_resource}
+        return {'resource': resource, 'usage': usage}
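`show_host_resource` now returns a flat `{'resource': ..., 'usage': ...}` dict, with per-project usage summed over the host's instances. A sketch of that aggregation with plain dicts standing in for DB rows (all values here are illustrative):

```python
# Fake instance rows for one host; real code sums these via the
# db.instance_get_*_sum_by_host_and_project helpers.
instance_refs = [
    {'project_id': 'p1', 'vcpus': 1, 'memory_mb': 512, 'local_gb': 10},
    {'project_id': 'p1', 'vcpus': 2, 'memory_mb': 1024, 'local_gb': 20},
    {'project_id': 'p2', 'vcpus': 4, 'memory_mb': 2048, 'local_gb': 40},
]

usage = {}
for project_id in set(i['project_id'] for i in instance_refs):
    mine = [i for i in instance_refs if i['project_id'] == project_id]
    usage[project_id] = {'vcpus': sum(i['vcpus'] for i in mine),
                         'memory_mb': sum(i['memory_mb'] for i in mine),
                         'local_gb': sum(i['local_gb'] for i in mine)}

# Mirrors the new return shape: {'resource': ..., 'usage': ...}
result = {'resource': {'vcpus': 8, 'memory_mb': 8192, 'local_gb': 100},
          'usage': usage}
```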


@@ -248,11 +248,9 @@ class ApiEc2TestCase(test.TestCase):
         self.mox.ReplayAll()
         rv = self.ec2.get_all_security_groups()
-        # I don't bother checkng that we actually find it here,
-        # because the create/delete unit test further up should
-        # be good enough for that.
-        for group in rv:
-            if group.name == security_group_name:
-                self.assertEquals(len(group.rules), 1)
-                self.assertEquals(int(group.rules[0].from_port), 80)
-                self.assertEquals(int(group.rules[0].to_port), 81)
+        group = [grp for grp in rv if grp.name == security_group_name][0]
+        self.assertEquals(len(group.rules), 1)
+        self.assertEquals(int(group.rules[0].from_port), 80)
+        self.assertEquals(int(group.rules[0].to_port), 81)

@@ -314,11 +312,8 @@ class ApiEc2TestCase(test.TestCase):
         self.mox.ReplayAll()
         rv = self.ec2.get_all_security_groups()
-        # I don't bother checkng that we actually find it here,
-        # because the create/delete unit test further up should
-        # be good enough for that.
-        for group in rv:
-            if group.name == security_group_name:
-                self.assertEquals(len(group.rules), 1)
-                self.assertEquals(int(group.rules[0].from_port), 80)
-                self.assertEquals(int(group.rules[0].to_port), 81)
+        group = [grp for grp in rv if grp.name == security_group_name][0]
+        self.assertEquals(len(group.rules), 1)
+        self.assertEquals(int(group.rules[0].from_port), 80)
+        self.assertEquals(int(group.rules[0].to_port), 81)
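The comprehension-plus-index pattern is stricter than the old loop: if the group is missing, indexing the empty list raises IndexError and the test fails, where the old loop would silently skip its assertions. A small sketch of the difference:

```python
groups = [{'name': 'default'}, {'name': 'testgrp'}]

# Old pattern: if the name is absent, the body never runs and the
# surrounding test silently passes.
found = None
for group in groups:
    if group['name'] == 'testgrp':
        found = group

# New pattern: an absent name raises IndexError, failing the test loudly.
group = [g for g in groups if g['name'] == 'testgrp'][0]
assert group is not None
```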


@@ -243,6 +243,14 @@ class ComputeTestCase(test.TestCase):
         self.compute.set_admin_password(self.context, instance_id)
         self.compute.terminate_instance(self.context, instance_id)

+    def test_inject_file(self):
+        """Ensure we can write a file to an instance"""
+        instance_id = self._create_instance()
+        self.compute.run_instance(self.context, instance_id)
+        self.compute.inject_file(self.context, instance_id, "/tmp/test",
+                                 "File Contents")
+        self.compute.terminate_instance(self.context, instance_id)
+
     def test_snapshot(self):
         """Ensure instance can be snapshotted"""
         instance_id = self._create_instance()

@@ -476,7 +484,8 @@ class ComputeTestCase(test.TestCase):
                                            'state': power_state.RUNNING,
                                            'host': i_ref['host']})
         for v in i_ref['volumes']:
-            dbmock.volume_update(c, v['id'], {'status': 'in-use'})
+            dbmock.volume_update(c, v['id'], {'status': 'in-use',
+                                              'host': i_ref['host']})

         self.compute.db = dbmock
         self.mox.ReplayAll()

@@ -541,14 +550,25 @@ class ComputeTestCase(test.TestCase):
     def test_post_live_migration_working_correctly(self):
         """post_live_migration works as expected correctly """
-        i_ref = self._get_dummy_instance()
-        fixed_ip_ref = {'id': 1, 'address': '1.1.1.1'}
-        floating_ip_ref = {'id': 1, 'address': '2.2.2.2'}
-        c = context.get_admin_context()
-        dbmock = self.mox.CreateMock(db)
-        dbmock.volume_get_all_by_instance(c, i_ref['id']).\
-                                          AndReturn(i_ref['volumes'])
+        dest = 'desthost'
+        flo_addr = '1.2.1.2'
+
+        # Preparing datas
+        c = context.get_admin_context()
+        instance_id = self._create_instance()
+        i_ref = db.instance_get(c, instance_id)
+        db.instance_update(c, i_ref['id'], {'state_description': 'migrating',
+                                            'state': power_state.PAUSED})
+        v_ref = db.volume_create(c, {'size': 1, 'instance_id': instance_id})
+        fix_addr = db.fixed_ip_create(c, {'address': '1.1.1.1',
+                                          'instance_id': instance_id})
+        fix_ref = db.fixed_ip_get_by_address(c, fix_addr)
+        flo_ref = db.floating_ip_create(c, {'address': flo_addr,
+                                            'fixed_ip_id': fix_ref['id']})
+        # reload is necessary before setting mocks
+        i_ref = db.instance_get(c, instance_id)

+        # Preparing mocks
         self.mox.StubOutWithMock(self.compute.volume_manager,
                                  'remove_compute_volume')
         for v in i_ref['volumes']:

@@ -556,102 +576,22 @@ class ComputeTestCase(test.TestCase):
         self.mox.StubOutWithMock(self.compute.driver, 'unfilter_instance')
         self.compute.driver.unfilter_instance(i_ref)

-        fixed_ip = fixed_ip_ref['address']
-        dbmock.instance_get_fixed_address(c, i_ref['id']).AndReturn(fixed_ip)
-        dbmock.fixed_ip_update(c, fixed_ip, {'host': i_ref['host']})
-        fl_ip = floating_ip_ref['address']
-        dbmock.instance_get_floating_address(c, i_ref['id']).AndReturn(fl_ip)
-        dbmock.floating_ip_get_by_address(c, fl_ip).AndReturn(floating_ip_ref)
-        dbmock.floating_ip_update(c, floating_ip_ref['address'],
-                                  {'host': i_ref['host']})
-        dbmock.instance_update(c, i_ref['id'],
-                               {'state_description': 'running',
-                                'state': power_state.RUNNING,
-                                'host': i_ref['host']})
-        for v in i_ref['volumes']:
-            dbmock.volume_update(c, v['id'], {'status': 'in-use'})
-        self.compute.db = dbmock
+        # executing
         self.mox.ReplayAll()
-        ret = self.compute.post_live_migration(c, i_ref, i_ref['host'])
-        self.assertEqual(ret, None)
-        self.mox.ResetAll()
+        ret = self.compute.post_live_migration(c, i_ref, dest)
+        self.mox.UnsetStubs()

-    def test_post_live_migration_no_floating_ip(self):
-        """
-        post_live_migration works as expected correctly
-        (in case instance doesnt have floaitng ip)
-        """
-        i_ref = self._get_dummy_instance()
-        i_ref.__setitem__('volumes', [])
-        fixed_ip_ref = {'id': 1, 'address': '1.1.1.1'}
-        floating_ip_ref = {'id': 1, 'address': '1.1.1.1'}
-        c = context.get_admin_context()
-        dbmock = self.mox.CreateMock(db)
-        dbmock.volume_get_all_by_instance(c, i_ref['id']).AndReturn([])
-        self.mox.StubOutWithMock(self.compute.driver, 'unfilter_instance')
-        self.compute.driver.unfilter_instance(i_ref)
-        fixed_ip = fixed_ip_ref['address']
-        dbmock.instance_get_fixed_address(c, i_ref['id']).AndReturn(fixed_ip)
-        dbmock.fixed_ip_update(c, fixed_ip, {'host': i_ref['host']})
-        dbmock.instance_get_floating_address(c, i_ref['id']).AndReturn(None)
-        dbmock.instance_update(c, i_ref['id'],
-                               {'state_description': 'running',
-                                'state': power_state.RUNNING,
-                                'host': i_ref['host']})
-        for v in i_ref['volumes']:
-            dbmock.volume_update(c, v['id'], {'status': 'in-use'})
-        self.compute.db = dbmock
-        self.mox.ReplayAll()
-        ret = self.compute.post_live_migration(c, i_ref, i_ref['host'])
-        self.assertEqual(ret, None)
-        self.mox.ResetAll()
+        # make sure every data is rewritten to dest
+        i_ref = db.instance_get(c, i_ref['id'])
+        c1 = (i_ref['host'] == dest)
+        v_ref = db.volume_get(c, v_ref['id'])
+        c2 = (v_ref['host'] == dest)
+        c3 = False
+        flo_refs = db.floating_ip_get_all_by_host(c, dest)
+        c3 = (len(flo_refs) != 0 and flo_refs[0]['address'] == flo_addr)

-    def test_post_live_migration_no_floating_ip_with_exception(self):
-        """
-        post_live_migration works as expected correctly
-        (in case instance doesnt have floaitng ip, and raise exception)
-        """
-        i_ref = self._get_dummy_instance()
-        i_ref.__setitem__('volumes', [])
-        fixed_ip_ref = {'id': 1, 'address': '1.1.1.1'}
-        floating_ip_ref = {'id': 1, 'address': '1.1.1.1'}
-        c = context.get_admin_context()
-        dbmock = self.mox.CreateMock(db)
-        dbmock.volume_get_all_by_instance(c, i_ref['id']).AndReturn([])
-        self.mox.StubOutWithMock(self.compute.driver, 'unfilter_instance')
-        self.compute.driver.unfilter_instance(i_ref)
-        fixed_ip = fixed_ip_ref['address']
-        dbmock.instance_get_fixed_address(c, i_ref['id']).AndReturn(fixed_ip)
-        dbmock.fixed_ip_update(c, fixed_ip, {'host': i_ref['host']})
-        dbmock.instance_get_floating_address(c, i_ref['id']).\
-                                             AndRaise(exception.NotFound())
-        self.mox.StubOutWithMock(compute_manager.LOG, 'info')
-        compute_manager.LOG.info(_('post_live_migration() is started..'))
-        compute_manager.LOG.info(_('floating_ip is not found for %s'),
-                                 i_ref.name)
-        # first 2 messages are checked.
-        compute_manager.LOG.info(mox.IgnoreArg())
-        compute_manager.LOG.info(mox.IgnoreArg())
-        self.mox.StubOutWithMock(db, 'instance_update')
-        dbmock.instance_update(c, i_ref['id'], {'state_description': 'running',
-                                                'state': power_state.RUNNING,
-                                                'host': i_ref['host']})
-        self.mox.StubOutWithMock(db, 'volume_update')
-        for v in i_ref['volumes']:
-            dbmock.volume_update(c, v['id'], {'status': 'in-use'})
-        self.compute.db = dbmock
-        self.mox.ReplayAll()
-        ret = self.compute.post_live_migration(c, i_ref, i_ref['host'])
-        self.assertEqual(ret, None)
-        self.mox.ResetAll()
+        # post operaton
+        self.assertTrue(c1 and c2 and c3)
+        db.instance_destroy(c, instance_id)
+        db.volume_destroy(c, v_ref['id'])
+        db.floating_ip_destroy(c, flo_addr)
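The rewritten test drops the scripted `dbmock` expectations and instead checks post-conditions against real DB rows: after `post_live_migration` runs, the instance, its volume, and the floating IP must all carry the destination host. The shape of that check, sketched with an in-memory stand-in for the database (every name here is illustrative, not Nova's API):

```python
# Tiny in-memory stand-in for the rows post_live_migration rewrites;
# the real test reads them back through nova.db after the call.
dest = 'desthost'
instance = {'id': 1, 'host': 'srchost'}
volume = {'id': 1, 'host': 'srchost'}
floating_ips_by_host = {}

def fake_post_live_migration():
    # post_live_migration is expected to point every host column at dest.
    instance['host'] = dest
    volume['host'] = dest
    floating_ips_by_host.setdefault(dest, []).append('1.2.1.2')

fake_post_live_migration()

# Same three conditions the test asserts (c1 and c2 and c3).
c1 = instance['host'] == dest
c2 = volume['host'] == dest
flo_refs = floating_ips_by_host.get(dest, [])
c3 = len(flo_refs) != 0 and flo_refs[0] == '1.2.1.2'
assert c1 and c2 and c3
```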


@@ -46,6 +46,27 @@ class RootLoggerTestCase(test.TestCase):
         self.assert_(True)  # didn't raise exception

+
+class LogHandlerTestCase(test.TestCase):
+    def test_log_path_logdir(self):
+        self.flags(logdir='/some/path')
+        self.assertEquals(log.get_log_file_path(binary='foo-bar'),
+                          '/some/path/foo-bar.log')
+
+    def test_log_path_logfile(self):
+        self.flags(logfile='/some/path/foo-bar.log')
+        self.assertEquals(log.get_log_file_path(binary='foo-bar'),
+                          '/some/path/foo-bar.log')
+
+    def test_log_path_none(self):
+        self.assertTrue(log.get_log_file_path(binary='foo-bar') is None)
+
+    def test_log_path_logfile_overrides_logdir(self):
+        self.flags(logdir='/some/other/path',
+                   logfile='/some/path/foo-bar.log')
+        self.assertEquals(log.get_log_file_path(binary='foo-bar'),
+                          '/some/path/foo-bar.log')
+
 class NovaFormatterTestCase(test.TestCase):
     def setUp(self):
         super(NovaFormatterTestCase, self).setUp()
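The four new tests pin down a precedence rule for `log.get_log_file_path`: an explicit `logfile` wins over `logdir`, and with neither flag set the result is None. A minimal sketch of a helper with exactly that behavior (a hypothetical re-implementation covering only what the tests assert, not Nova's actual code):

```python
import os

def get_log_file_path(binary, logfile=None, logdir=None):
    # Precedence the tests pin down: an explicit logfile wins;
    # otherwise logdir + '<binary>.log'; with neither set, None.
    if logfile:
        return logfile
    if logdir:
        return os.path.join(logdir, binary + '.log')
    return None

assert get_log_file_path('foo-bar', logdir='/some/path') == \
    '/some/path/foo-bar.log'
assert get_log_file_path('foo-bar', logfile='/some/path/foo-bar.log',
                         logdir='/some/other/path') == \
    '/some/path/foo-bar.log'
assert get_log_file_path('foo-bar') is None
```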

File diff suppressed because it is too large.


@@ -32,6 +32,7 @@ from nova.virt import xenapi_conn
 from nova.virt.xenapi import fake as xenapi_fake
 from nova.virt.xenapi import volume_utils
 from nova.virt.xenapi.vmops import SimpleDH
+from nova.virt.xenapi.vmops import VMOps
 from nova.tests.db import fakes as db_fakes
 from nova.tests.xenapi import stubs
 from nova.tests.glance import stubs as glance_stubs

@@ -141,6 +142,10 @@ class XenAPIVolumeTestCase(test.TestCase):
         self.stubs.UnsetAll()

+def reset_network(*args):
+    pass
+
+
 class XenAPIVMTestCase(test.TestCase):
     """
     Unit tests for VM operations

@@ -162,6 +167,7 @@ class XenAPIVMTestCase(test.TestCase):
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
         stubs.stubout_get_this_vm_uuid(self.stubs)
         stubs.stubout_stream_disk(self.stubs)
+        self.stubs.Set(VMOps, 'reset_network', reset_network)
         glance_stubs.stubout_glance_client(self.stubs,
                                            glance_stubs.FakeGlance)
         self.conn = xenapi_conn.get_connection(False)

@@ -243,7 +249,8 @@ class XenAPIVMTestCase(test.TestCase):
         # Check that the VM is running according to XenAPI.
         self.assertEquals(vm['power_state'], 'Running')

-    def _test_spawn(self, image_id, kernel_id, ramdisk_id):
+    def _test_spawn(self, image_id, kernel_id, ramdisk_id,
+                    instance_type="m1.large"):
         stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
         values = {'name': 1,
                   'id': 1,

@@ -252,7 +259,7 @@ class XenAPIVMTestCase(test.TestCase):
                   'image_id': image_id,
                   'kernel_id': kernel_id,
                   'ramdisk_id': ramdisk_id,
-                  'instance_type': 'm1.large',
+                  'instance_type': instance_type,
                   'mac_address': 'aa:bb:cc:dd:ee:ff',
                   }
         conn = xenapi_conn.get_connection(False)

@@ -260,6 +267,12 @@ class XenAPIVMTestCase(test.TestCase):
         conn.spawn(instance)
         self.check_vm_record(conn)

+    def test_spawn_not_enough_memory(self):
+        FLAGS.xenapi_image_service = 'glance'
+        self.assertRaises(Exception,
+                          self._test_spawn,
+                          1, 2, 3, "m1.xlarge")
+
     def test_spawn_raw_objectstore(self):
         FLAGS.xenapi_image_service = 'objectstore'
         self._test_spawn(1, None, None)


@@ -43,8 +43,6 @@ else:

 FLAGS = flags.FLAGS
-flags.DEFINE_string('logdir', None, 'directory to keep log files in '
-                                    '(will be prepended to $logfile)')


 class TwistdServerOptions(ServerOptions):


@@ -20,13 +20,14 @@
 System-level utilities and helper functions.
 """

+import base64
 import datetime
 import inspect
 import json
 import os
 import random
-import subprocess
 import socket
+import string
 import struct
 import sys
 import time

@@ -36,6 +37,7 @@ import netaddr

 from eventlet import event
 from eventlet import greenthread
+from eventlet.green import subprocess

 from nova import exception
 from nova.exception import ProcessExecutionError

@@ -235,6 +237,15 @@ def generate_mac():
     return ':'.join(map(lambda x: "%02x" % x, mac))


+def generate_password(length=20):
+    """Generate a random sequence of letters and digits
+    to be used as a password. Note that this is not intended
+    to represent the ultimate in security.
+    """
+    chrs = string.letters + string.digits
+    return "".join([random.choice(chrs) for i in xrange(length)])
+
+
 def last_octet(address):
     return int(address.split(".")[-1])

@@ -476,3 +487,15 @@ def dumps(value):

 def loads(s):
     return json.loads(s)
+
+
+def ensure_b64_encoding(val):
+    """Safety method to ensure that values expected to be base64-encoded
+    actually are. If they are, the value is returned unchanged. Otherwise,
+    the encoded value is returned.
+    """
+    try:
+        dummy = base64.decode(val)
+        return val
+    except TypeError:
+        return base64.b64encode(val)
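The two helpers added above lean on Python 2 APIs: `string.letters`, `xrange`, and `base64.decode`, which reads from a file object rather than decoding a string in place. A modern-Python sketch of the same intent, substituting `string.ascii_letters` and `b64decode(..., validate=True)` (names kept, implementation adapted):

```python
import base64
import random
import string

def generate_password(length=20):
    """Random letters and digits; not intended to be
    cryptographically strong (mirrors the original docstring)."""
    chrs = string.ascii_letters + string.digits
    return "".join(random.choice(chrs) for _ in range(length))

def ensure_b64_encoding(val):
    """Return val (bytes) unchanged if it already decodes as base64,
    otherwise return it base64-encoded."""
    try:
        base64.b64decode(val, validate=True)
        return val
    except (TypeError, ValueError):
        return base64.b64encode(val)
```

Note the inherent ambiguity, present in the original as well: input that happens to be valid base64 is passed through unchanged rather than encoded again.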


@@ -85,9 +85,13 @@ setup(name='nova',
       packages=find_packages(exclude=['bin', 'smoketests']),
       include_package_data=True,
       test_suite='nose.collector',
-      scripts=['bin/nova-api',
+      scripts=['bin/nova-ajax-console-proxy',
+               'bin/nova-api',
+               'bin/nova-combined',
                'bin/nova-compute',
+               'bin/nova-console',
                'bin/nova-dhcpbridge',
+               'bin/nova-direct-api',
                'bin/nova-import-canonical-imagestore',
                'bin/nova-instancemonitor',
                'bin/nova-logspool',
@@ -96,5 +100,6 @@ setup(name='nova',
                'bin/nova-objectstore',
                'bin/nova-scheduler',
                'bin/nova-spoolsentry',
+               'bin/stack',
                'bin/nova-volume',
                'tools/nova-debug'])