Migrate the 'compass' repo to 'compass-core'; leave 'compass' to host the project homepage

syang 2014-01-08 19:18:04 -08:00
parent b116f83b2d
commit cf67d3ad47
348 changed files with 296730 additions and 4 deletions
.gitignore
LICENSE
README.md
bin
compass

4
.gitignore vendored

@@ -1,4 +1,7 @@
*.py[cod]
*~
*.swp
# C extensions
*.so
@@ -10,7 +13,6 @@ dist
build
eggs
parts
bin
var
sdist
develop-eggs

202
LICENSE

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md

@@ -1,4 +1,59 @@
compass-core
============
Compass
=======
A Deployment Automation System (https://wiki.openstack.org/wiki/Compass)
A Deployment Automation System. See the Wiki page at https://wiki.openstack.org/wiki/Compass.
How to install Compass?
-----------------------
1. Run `git clone https://github.com/huawei-cloud/compass`
2. Run `cd compass` to enter the Compass project directory.
3. Run `./install/install.sh` to set up the Compass environment. Note that before you execute `install.sh`, you may set your environment variables in `install/install.conf`; explanations and examples of those variables can be found in `install.conf`.
4. Run `source /etc/profile` to set up the Compass profile.
5. Run `./bin/refresh.sh` to initialize the database.
6. Run `service compassd start` to start compass daemon services.
FAQ
---
* Why doesn't celery start? What should I do if I get a `celery died but pid file exists` message after running `service compassd status`?
1. Remove the celery pid file (`/var/run/celery.pid`).
2. Try running `export C_FORCE_ROOT=1`.
3. Restart Compass daemon.
* How to restart the Compass services?
1. Run `service compassd restart`.
2. Run `service httpd restart` to restart the web service.
* How to check if the Compass services are running properly?
1. Run `service compassd status` to check the compass services' status.
2. Run `service httpd status` to check the web service status.
* How to troubleshoot if `compassd` cannot start the services?
1. Try removing `/var/run/celeryd.pid` to release the celeryd lock.
2. Try removing `/var/run/progress_update.pid` to release the progress_update lock.
* How to use compass to install distributed systems?
Access http://<server_ip>/ods/ods.html. The current version only supports OpenStack deployment with a simplified configuration. Follow the simple wizard in the Web UI.
* How to run unittest?
`COMPASS_SETTING=<your own compass setting> python -m discover -s compass/tests`
* Where to find the log file?
1. `/var/log/compass/compass.log` is the compass web log.
2. `/var/log/compass/celery.log` is the celery log.
3. The redirected celeryd stdout/stderr is at `/tmp/celeryd.log`.
4. The redirected progress_update.py stdout/stderr is at `/tmp/progress_update.log`.
5. The web server (httpd) log files are under `/var/log/httpd/`.
* Where to find the compass config file?
1. The compass setting file is at `/etc/compass/setting`.
2. The default global config file for installing distributed systems is at `/etc/compass/setting`.
3. The default celery config file is at `/etc/compass/celeryconfig`.
* Where is the default database file?
It is at `/opt/compass/db/app.db`.
* Where are the utility scripts for compass?
They are at `/opt/compass/bin/`.

0
bin/__init__.py Normal file

10
bin/chef/addcookbooks.py Normal file

@@ -0,0 +1,10 @@
#!/usr/bin/env python
"""Upload all chef cookbooks under /var/chef/cookbooks to the chef server."""
import os
cookbook_dir = '/var/chef/cookbooks/'
cmd = "knife cookbook upload --all --cookbook-path %s" % cookbook_dir
os.system(cmd)

21
bin/chef/adddatabags.py Normal file

@@ -0,0 +1,21 @@
#!/usr/bin/env python
"""Create chef data bags and their items from files under /var/chef/databags."""
import os
import os.path
databags = []
databag_dir = '/var/chef/databags'
for item in os.listdir(databag_dir):
    databags.append(item)
for databag in databags:
    cmd = "knife data bag create %s" % databag
    os.system(cmd)
    databag_items = []
    databagitem_dir = os.path.join(databag_dir, databag)
    for item in os.listdir(databagitem_dir):
        databag_items.append(os.path.join(databagitem_dir, item))
    for databag_item in databag_items:
        cmd = 'knife data bag from file %s %s' % (databag, databag_item)
        os.system(cmd)

15
bin/chef/addroles.py Executable file

@@ -0,0 +1,15 @@
#!/usr/bin/env python
"""Upload all chef roles under /var/chef/roles to the chef server."""
import os
import os.path
rolelist = []
role_dir = '/var/chef/roles'
for item in os.listdir(role_dir):
    f = os.path.join(role_dir, item)
    rolelist.append(f)
for role in rolelist:
    cmd = "knife role from file %s" % role
    os.system(cmd)

421
bin/manage_db.py Executable file

@@ -0,0 +1,421 @@
#!/usr/bin/python
import logging
import os
import os.path
import shutil
import sys
from flask.ext.script import Manager
from compass.api import app
from compass.config_management.utils import config_manager
from compass.db import database
from compass.db.model import Adapter, Role, Switch, Machine, HostState, ClusterState, Cluster, ClusterHost, LogProgressingHistory
from compass.utils import flags
from compass.utils import logsetting
from compass.utils import setting_wrapper as setting
flags.add('table_name',
help='table name',
default='')
flags.add('clusters',
help=(
'clusters to clean, the format is as '
'clusterid:hostname1,hostname2,...;...'),
default='')
manager = Manager(app, usage="Perform database operations")
TABLE_MAPPING = {
'role': Role,
'adapter': Adapter,
'switch': Switch,
'machine': Machine,
'hoststate': HostState,
'clusterstate': ClusterState,
'cluster': Cluster,
'clusterhost': ClusterHost,
'logprogressinghistory': LogProgressingHistory,
}
@manager.command
def list_config():
"List the configuration"
for key, value in app.config.items():
print key, value
@manager.command
def createdb():
"Creates database from sqlalchemy models"
if setting.DATABASE_TYPE == 'sqlite':
if os.path.exists(setting.DATABASE_FILE):
os.remove(setting.DATABASE_FILE)
database.create_db()
if setting.DATABASE_TYPE == 'sqlite':
os.chmod(setting.DATABASE_FILE, 0777)
@manager.command
def dropdb():
"Drops database from sqlalchemy models"
database.drop_db()
@manager.command
def createtable():
"""Create database table by --table_name"""
table_name = flags.OPTIONS.table_name
if table_name and table_name in TABLE_MAPPING:
database.create_table(TABLE_MAPPING[table_name])
else:
print '--table_name should be in %s' % TABLE_MAPPING.keys()
@manager.command
def droptable():
"""Drop database table by --talbe_name"""
table_name = flags.OPTIONS.table_name
if table_name and table_name in TABLE_MAPPING:
database.drop_table(TABLE_MAPPING[table_name])
else:
print '--table_name should be in %s' % TABLE_MAPPING.keys()
@manager.command
def sync_from_installers():
"""set adapters in Adapter table from installers."""
manager = config_manager.ConfigManager()
adapters = manager.get_adapters()
target_systems = set()
roles_per_target_system = {}
for adapter in adapters:
target_systems.add(adapter['target_system'])
for target_system in target_systems:
roles_per_target_system[target_system] = manager.get_roles(
target_system)
with database.session() as session:
session.query(Adapter).delete()
session.query(Role).delete()
for adapter in adapters:
session.add(Adapter(**adapter))
for target_system, roles in roles_per_target_system.items():
for role in roles:
session.add(Role(**role))
def _get_clusters():
clusters = {}
logging.debug('get clusters from flag: %s', flags.OPTIONS.clusters)
for clusterid_and_hostnames in flags.OPTIONS.clusters.split(';'):
if not clusterid_and_hostnames:
continue
if ':' in clusterid_and_hostnames:
clusterid_str, hostnames_str = clusterid_and_hostnames.split(
':', 1)
else:
clusterid_str = clusterid_and_hostnames
hostnames_str = ''
clusterid = int(clusterid_str)
hostnames = [
hostname for hostname in hostnames_str.split(',')
if hostname
]
clusters[clusterid] = hostnames
logging.debug('got clusters from flag: %s', clusters)
with database.session() as session:
clusterids = clusters.keys()
if not clusterids:
cluster_list = session.query(Cluster).all()
clusterids = [cluster.id for cluster in cluster_list]
for clusterid in clusterids:
hostnames = clusters.get(clusterid, [])
if not hostnames:
host_list = session.query(ClusterHost).filter_by(
cluster_id=clusterid).all()
hostids = [host.id for host in host_list]
clusters[clusterid] = hostids
else:
hostids = []
for hostname in hostnames:
host = session.query(ClusterHost).filter_by(
cluster_id=clusterid, hostname=hostname).first()
if host:
hostids.append(host.id)
clusters[clusterid] = hostids
return clusters
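As a hedged illustration of the --clusters flag format parsed above (the cluster ids and hostnames below are hypothetical), the same logic applied to a literal flag value yields:

flag_value = '1;2:host1,host2'  # hypothetical --clusters value
clusters = {}
for clusterid_and_hostnames in flag_value.split(';'):
    if not clusterid_and_hostnames:
        continue
    if ':' in clusterid_and_hostnames:
        clusterid_str, hostnames_str = clusterid_and_hostnames.split(':', 1)
    else:
        clusterid_str, hostnames_str = clusterid_and_hostnames, ''
    clusters[int(clusterid_str)] = [h for h in hostnames_str.split(',') if h]
# clusters == {1: [], 2: ['host1', 'host2']}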
def _clean_clusters(clusters):
manager = config_manager.ConfigManager()
logging.info('clean cluster hosts: %s', clusters)
with database.session() as session:
for clusterid, hostids in clusters.items():
cluster = session.query(Cluster).filter_by(id=clusterid).first()
if not cluster:
continue
all_hostids = [host.id for host in cluster.hosts]
logging.debug('all hosts in cluster %s are: %s',
clusterid, all_hostids)
logging.info('clean hosts %s in cluster %s',
hostids, clusterid)
adapter = cluster.adapter
for hostid in hostids:
host = session.query(ClusterHost).filter_by(id=hostid).first()
if not host:
continue
log_dir = os.path.join(
setting.INSTALLATION_LOGDIR,
'%s.%s' % (host.hostname, clusterid))
logging.info('clean log dir %s', log_dir)
shutil.rmtree(log_dir, True)
session.query(LogProgressingHistory).filter(
LogProgressingHistory.pathname.startswith(
'%s/' % log_dir)).delete(
synchronize_session='fetch')
logging.info('clean host %s', hostid)
manager.clean_host_config(
hostid,
os_version=adapter.os,
target_system=adapter.target_system)
session.query(ClusterHost).filter_by(
id=hostid).delete(synchronize_session='fetch')
session.query(HostState).filter_by(
id=hostid).delete(synchronize_session='fetch')
if set(all_hostids) == set(hostids):
logging.info('clean cluster %s', clusterid)
manager.clean_cluster_config(
clusterid,
os_version=adapter.os,
target_system=adapter.target_system)
session.query(Cluster).filter_by(
id=clusterid).delete(synchronize_session='fetch')
session.query(ClusterState).filter_by(
id=clusterid).delete(synchronize_session='fetch')
manager.sync()
@manager.command
def clean_clusters():
"""delete clusters and hosts.
The clusters and hosts are defined in --clusters.
"""
clusters = _get_clusters()
_clean_clusters(clusters)
os.system('service rsyslog restart')
def _clean_installation_progress(clusters):
logging.info('clean installation progress for cluster hosts: %s',
clusters)
with database.session() as session:
for clusterid, hostids in clusters.items():
cluster = session.query(Cluster).filter_by(
id=clusterid).first()
if not cluster:
continue
logging.info(
'clean installation progress for hosts %s in cluster %s',
hostids, clusterid)
all_hostids = [host.id for host in cluster.hosts]
logging.debug('all hosts in cluster %s are: %s',
clusterid, all_hostids)
for hostid in hostids:
host = session.query(ClusterHost).filter_by(id=hostid).first()
if not host:
continue
log_dir = os.path.join(
setting.INSTALLATION_LOGDIR,
'%s.%s' % (host.hostname, clusterid))
logging.info('clean log dir %s', log_dir)
shutil.rmtree(log_dir, True)
session.query(LogProgressingHistory).filter(
LogProgressingHistory.pathname.startswith(
'%s/' % log_dir)).delete(
synchronize_session='fetch')
logging.info('clean host installation progress for %s',
hostid)
if host.state and host.state.state != 'UNINITIALIZED':
session.query(ClusterHost).filter_by(
id=hostid).update({
'mutable': False
}, synchronize_session='fetch')
session.query(HostState).filter_by(id=hostid).update({
'state': 'INSTALLING',
'progress': 0.0,
'message': '',
'severity': 'INFO'
}, synchronize_session='fetch')
if set(all_hostids) == set(hostids):
logging.info('clean cluster installation progress %s',
clusterid)
if cluster.state and cluster.state != 'UNINITIALIZED':
session.query(Cluster).filter_by(
id=clusterid).update({
'mutable': False
}, synchronize_session='fetch')
session.query(ClusterState).filter_by(
id=clusterid).update({
'state': 'INSTALLING',
'progress': 0.0,
'message': '',
'severity': 'INFO'
}, synchronize_session='fetch')
@manager.command
def clean_installation_progress():
"""Clean clusters and hosts installation progress.
The clusters and hosts are defined in --clusters.
"""
clusters = _get_clusters()
_clean_installation_progress(clusters)
os.system('service rsyslog restart')
def _reinstall_hosts(clusters):
logging.info('reinstall cluster hosts: %s', clusters)
manager = config_manager.ConfigManager()
with database.session() as session:
for clusterid, hostids in clusters.items():
cluster = session.query(Cluster).filter_by(id=clusterid).first()
if not cluster:
continue
all_hostids = [host.id for host in cluster.hosts]
logging.debug('all hosts in cluster %s are: %s',
clusterid, all_hostids)
logging.info('reinstall hosts %s in cluster %s',
hostids, clusterid)
adapter = cluster.adapter
for hostid in hostids:
host = session.query(ClusterHost).filter_by(id=hostid).first()
if not host:
continue
log_dir = os.path.join(
setting.INSTALLATION_LOGDIR,
'%s.%s' % (host.hostname, clusterid))
logging.info('clean log dir %s', log_dir)
shutil.rmtree(log_dir, True)
session.query(LogProgressingHistory).filter(
LogProgressingHistory.pathname.startswith(
'%s/' % log_dir)).delete(
synchronize_session='fetch')
logging.info('reinstall host %s', hostid)
manager.reinstall_host(
hostid,
os_version=adapter.os,
target_system=adapter.target_system)
if host.state and host.state.state != 'UNINITIALIZED':
session.query(ClusterHost).filter_by(
id=hostid).update({
'mutable': False
}, synchronize_session='fetch')
session.query(HostState).filter_by(
id=hostid).update({
'state': 'INSTALLING',
'progress': 0.0,
'message': '',
'severity': 'INFO'
}, synchronize_session='fetch')
if set(all_hostids) == set(hostids):
logging.info('reinstall cluster %s',
clusterid)
if cluster.state and cluster.state != 'UNINITIALIZED':
session.query(Cluster).filter_by(
id=clusterid).update({
'mutable': False
}, synchronize_session='fetch')
session.query(ClusterState).filter_by(
id=clusterid).update({
'state': 'INSTALLING',
'progress': 0.0,
'message': '',
'severity': 'INFO'
}, synchronize_session='fetch')
manager.sync()
@manager.command
def reinstall_hosts():
"""Reinstall hosts in clusters.
The hosts are defined in --clusters.
"""
clusters = _get_clusters()
_reinstall_hosts(clusters)
@manager.command
def set_fake_switch_machine():
"""Set fake switches and machines for test."""
with database.session() as session:
credential = {'version': 'v2c',
'community': 'public',
}
switches = [ {'ip': '192.168.100.250'},
{'ip': '192.168.100.251'},
{'ip': '192.168.100.252'},
]
session.query(Switch).delete()
session.query(Machine).delete()
ip_switch = {}
for item in switches:
logging.info('add switch %s', item)
switch = Switch(ip=item['ip'], vendor_info='huawei',
state='under_monitoring')
switch.credential = credential
session.add(switch)
ip_switch[item['ip']] = switch
session.flush()
machines = [
{'mac': '00:0c:29:32:76:85', 'port':50, 'vlan':1, 'switch_ip':'192.168.100.250'},
{'mac': '00:0c:29:fa:cb:72', 'port':51, 'vlan':1, 'switch_ip':'192.168.100.250'},
{'mac': '28:6e:d4:64:c7:4a', 'port':1, 'vlan':1, 'switch_ip':'192.168.100.251'},
{'mac': '28:6e:d4:64:c7:4c', 'port':2, 'vlan':1, 'switch_ip':'192.168.100.251'},
{'mac': '28:6e:d4:46:c4:25', 'port': 40, 'vlan': 1, 'switch_ip': '192.168.100.252'},
{'mac': '26:6e:d4:4d:c6:be', 'port': 41, 'vlan': 1, 'switch_ip': '192.168.100.252'},
{'mac': '28:6e:d4:62:da:38', 'port': 42, 'vlan': 1, 'switch_ip': '192.168.100.252'},
{'mac': '28:6e:d4:62:db:76', 'port': 43, 'vlan': 1, 'switch_ip': '192.168.100.252'},
]
for item in machines:
logging.info('add machine %s', item)
machine = Machine(mac=item['mac'], port=item['port'],
vlan=item['vlan'],
switch_id=ip_switch[item['switch_ip']].id)
session.add(machine)
if __name__ == "__main__":
flags.init()
logsetting.init()
manager.run()

113
bin/poll_switch.py Executable file

@@ -0,0 +1,113 @@
#!/usr/bin/python
"""main script to poll machines which is connected to the switches."""
import daemon
import lockfile
import logging
import sys
import signal
import time
from compass.actions import poll_switch
from compass.db import database
from compass.db.model import Switch
from compass.tasks.client import celery
from compass.utils import flags
from compass.utils import logsetting
from compass.utils import setting_wrapper as setting
flags.add('switchids',
help='comma separated switch ids',
default='')
flags.add_bool('async',
help='run in async mode',
default=True)
flags.add_bool('once',
help='run once or forever',
default=False)
flags.add('run_interval',
help='run interval in seconds',
default=setting.POLLSWITCH_INTERVAL)
flags.add_bool('daemonize',
help='run as daemon',
default=False)
BUSY = False
KILLED = False
def handle_term(signum, frame):
global BUSY
global KILLED
logging.info('Caught signal %s', signum)
KILLED = True
if not BUSY:
sys.exit(0)
def main(argv):
global BUSY
global KILLED
switchids = [int(switchid) for switchid in flags.OPTIONS.switchids.split(',') if switchid]
signal.signal(signal.SIGTERM, handle_term)
signal.signal(signal.SIGHUP, handle_term)
while True:
BUSY = True
with database.session() as session:
switch_ips = {}
switches = session.query(Switch).all()
for switch in switches:
switch_ips[switch.id] = switch.ip
if not switchids:
poll_switchids = [switch.id for switch in switches]
else:
poll_switchids = switchids
logging.info('poll switches to get machines mac: %s',
poll_switchids)
for switchid in poll_switchids:
if switchid not in switch_ips:
logging.error('there is no switch ip for switch %s',
switchid)
continue
if flags.OPTIONS.async:
celery.send_task('compass.tasks.pollswitch',
(switch_ips[switchid],))
else:
try:
poll_switch.poll_switch(switch_ips[switchid])
except Exception as error:
logging.error('failed to poll switch %s',
switch_ips[switchid])
BUSY = False
if KILLED:
logging.info('exit poll switch loop')
break
if flags.OPTIONS.once:
logging.info('finish poll switch')
break
if flags.OPTIONS.run_interval > 0:
logging.info('will rerun poll switch after %s seconds',
flags.OPTIONS.run_interval)
time.sleep(flags.OPTIONS.run_interval)
else:
logging.info('rerun poll switch immediately')
if __name__ == '__main__':
flags.init()
logsetting.init()
logging.info('run poll_switch: %s', sys.argv)
if flags.OPTIONS.daemonize:
with daemon.DaemonContext(
pidfile=lockfile.FileLock('/var/run/poll_switch.pid'),
stderr=open('/tmp/poll_switch_err.log', 'w+'),
stdout=open('/tmp/poll_switch_out.log', 'w+')
):
logging.info('run poll switch as daemon')
main(sys.argv)
else:
main(sys.argv)

109
bin/progress_update.py Executable file

@@ -0,0 +1,109 @@
#!/usr/bin/python
"""main script to run as service to update hosts installing progress."""
import logging
import signal
import sys
import time
import daemon
import lockfile
from compass.actions import progress_update
from compass.db import database
from compass.db.model import Cluster
from compass.tasks.client import celery
from compass.utils import flags
from compass.utils import logsetting
from compass.utils import setting_wrapper as setting
flags.add('clusterids',
help='comma separated cluster ids',
default='')
flags.add_bool('async',
help='run in async mode',
default=True)
flags.add_bool('once',
help='run once or forever',
default=False)
flags.add('run_interval',
help='run interval in seconds',
default=setting.PROGRESS_UPDATE_INTERVAL)
flags.add_bool('daemonize',
help='run as daemon',
default=False)
BUSY = False
KILLED = False
def handle_term(signum, frame):
global BUSY
global KILLED
logging.info('Caught signal %s', signum)
KILLED = True
if not BUSY:
sys.exit(0)
def main(argv):
"""entry function."""
global BUSY
global KILLED
clusterids = [
int(clusterid) for clusterid in flags.OPTIONS.clusterids.split(',')
if clusterid
]
signal.signal(signal.SIGINT, handle_term)
while True:
BUSY = True
with database.session() as session:
if not clusterids:
clusters = session.query(Cluster).all()
update_clusterids = [cluster.id for cluster in clusters]
else:
update_clusterids = clusterids
logging.info('update progress for clusters: %s', update_clusterids)
for clusterid in update_clusterids:
if flags.OPTIONS.async:
celery.send_task('compass.tasks.progress_update', (clusterid,))
else:
try:
progress_update.update_progress(clusterid)
except Exception as error:
logging.error('failed to update progress for cluster %s',
clusterid)
logging.exception(error)
BUSY = False
if KILLED:
logging.info('exit progress update loop')
break
if flags.OPTIONS.once:
logging.info('progress update finished')
break
if flags.OPTIONS.run_interval > 0:
logging.info('will rerun progress update after %s seconds',
flags.OPTIONS.run_interval)
time.sleep(flags.OPTIONS.run_interval)
else:
logging.info('rerun progress update immediately')
if __name__ == '__main__':
flags.init()
logsetting.init()
logging.info('run progress update: %s', sys.argv)
if flags.OPTIONS.daemonize:
with daemon.DaemonContext(
pidfile=lockfile.FileLock('/var/run/progress_update.pid'),
stderr=open('/tmp/progress_update_err.log', 'w+'),
stdout=open('/tmp/progress_update_out.log', 'w+')
):
logging.info('run progress update as daemon')
main(sys.argv)
else:
main(sys.argv)

16
bin/refresh.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
let initial_run=0
while [ $# -gt 0 ]; do
    case "$1" in
        -i|--init) let initial_run=1; shift ;;
        *) shift ;;
    esac
done
if [ $initial_run -eq 0 ]; then
    /opt/compass/bin/manage_db.py clean_clusters
fi
/opt/compass/bin/manage_db.py createdb
/opt/compass/bin/manage_db.py sync_from_installers
service compassd restart
service httpd restart
service rsyslog restart

1
bin/run_celery.sh Executable file

@@ -0,0 +1 @@
CELERY_CONFIG_MODULE=compass.utils.celeryconfig_wrapper celeryd

3
bin/runserver.py Normal file

@@ -0,0 +1,3 @@
#!/usr/bin/python
from compass.api import app
app.run(host='0.0.0.0', debug=True)

46
bin/trigger_install.py Executable file

@@ -0,0 +1,46 @@
#!/usr/bin/python
import logging
import sys
from compass.db import database
from compass.db.model import Cluster
from compass.tasks.client import celery
from compass.utils import flags
from compass.utils import logsetting
from compass.actions import trigger_install
flags.add('clusterids',
help='comma separated cluster ids',
default='')
flags.add_bool('async',
help='run in async mode')
def main(argv):
flags.init()
logsetting.init()
clusterids = [
int(clusterid) for clusterid in flags.OPTIONS.clusterids.split(',')
if clusterid
]
with database.session() as session:
if not clusterids:
clusters = session.query(Cluster).all()
trigger_clusterids = [cluster.id for cluster in clusters]
else:
trigger_clusterids = clusterids
logging.info('trigger installer for clusters: %s',
trigger_clusterids)
for clusterid in trigger_clusterids:
if flags.OPTIONS.async:
celery.send_task('compass.tasks.trigger_install',
(clusterid,))
else:
trigger_install.trigger_install(clusterid)
if __name__ == '__main__':
main(sys.argv)

0
compass/__init__.py Normal file

76
compass/actions/poll_switch.py Normal file

@@ -0,0 +1,76 @@
"""Module to provider function to poll switch."""
import logging
from compass.db import database
from compass.db.model import Switch, Machine
from compass.hdsdiscovery.hdmanager import HDManager
def poll_switch(ip_addr, req_obj='mac', oper="SCAN"):
"""Query switch and return expected result
.. note::
When polling switch succeeds, for each mac it got from polling switch,
A Machine record associated with the switch is added to the database.
:param ip_addr: switch ip address.
:type ip_addr: str
:param req_obj: the object requested to query from switch.
:type req_obj: str
:param oper: the operation to query the switch.
:type oper: str, should be one of ['SCAN', 'GET', 'SET']
.. note::
The function should be called inside database session scope.
"""
if not ip_addr:
logging.error('No switch IP address is provided!')
return
#Retrieve vendor info from switch table
session = database.current_session()
switch = session.query(Switch).filter_by(ip=ip_addr).first()
logging.info("pollswitch: %s", switch)
if not switch:
logging.error('no switch found for %s', ip_addr)
return
credential = switch.credential
logging.error("pollswitch: credential %r", credential)
vendor = switch.vendor
hdmanager = HDManager()
if not vendor or not hdmanager.is_valid_vendor(ip_addr,
credential, vendor):
# No vendor found or vendor doesn't match queried switch.
logging.debug('no vendor found or vendor has changed for switch %s',
switch)
vendor = hdmanager.get_vendor(ip_addr, credential)
logging.debug('[pollswitch] credential %r', credential)
if not vendor:
logging.error('no vendor found or vendor does not match switch %s', switch)
return
switch.vendor = vendor
# Start polling the switch's MAC address table.
logging.debug('hdmanager learn switch from %s %s %s %s %s',
ip_addr, credential, vendor, req_obj, oper)
results = hdmanager.learn(ip_addr, credential, vendor, req_obj, oper)
logging.info("pollswitch %s result: %s", switch, results)
if not results:
logging.error('no result learned from %s %s %s %s %s',
ip_addr, credential, vendor, req_obj, oper)
return
for entry in results:
mac = entry['mac']
machine = session.query(Machine).filter_by(mac=mac).first()
if not machine:
machine = Machine(mac=mac)
machine.port = entry['port']
machine.vlan = entry['vlan']
machine.switch = switch
logging.debug('update switch %s state to under monitoring', switch)
switch.state = 'under_monitoring'
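A minimal usage sketch (the switch IP below is hypothetical and must already exist in the Switch table); per the note above, the call is wrapped in a database session:

from compass.actions import poll_switch
from compass.db import database

with database.session():
    poll_switch.poll_switch('192.168.100.250')  # hypothetical switch IP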

61
compass/actions/progress_update.py Normal file

@@ -0,0 +1,61 @@
"""Module to update status and installing progress of the given cluster.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.db import database
from compass.db.model import Cluster
from compass.log_analyzor import progress_calculator
from compass.utils import setting_wrapper as setting
def update_progress(clusterid):
"""Update status and installing progress of the given cluster.
:param clusterid: the id of the cluster to get the progress.
:type clusterid: int
.. note::
The function should be called out of the database session scope.
In the function, it will update the database cluster_state and
host_state table for the deploying cluster and hosts.
The function will also query the log_progressing_history table to get
the latest installing progress and the position in the log it has
processed in the last run. The function uses this information to
avoid recalculating the progress from the beginning of the log file.
After the progress is updated, this information is stored back
to log_progressing_history for the next run.
"""
os_version = ''
target_system = ''
hostids = []
with database.session() as session:
cluster = session.query(Cluster).filter_by(id=clusterid).first()
if not cluster:
logging.error('no cluster found for %s', clusterid)
return
if not cluster.adapter:
logging.error('there is no adapter for cluster %s', clusterid)
return
os_version = cluster.adapter.os
target_system = cluster.adapter.target_system
if not cluster.state:
logging.error('there is no state for cluster %s', clusterid)
return
if cluster.state.state != 'INSTALLING':
logging.error('the state %s is not in installing for cluster %s',
cluster.state.state, clusterid)
return
hostids = [host.id for host in cluster.hosts]
progress_calculator.update_progress(setting.OS_INSTALLER,
os_version,
setting.PACKAGE_INSTALLER,
target_system,
clusterid, hostids)
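A minimal usage sketch, assuming a cluster with the hypothetical id 1 exists and is in the INSTALLING state; per the note above, the call is made outside any database session scope:

from compass.actions import progress_update

progress_update.update_progress(1)  # hypothetical cluster id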

58
compass/actions/trigger_install.py Normal file

@@ -0,0 +1,58 @@
"""Module to deploy a given cluster
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.db import database
from compass.db.model import Cluster, ClusterState, HostState
from compass.config_management.utils.config_manager import ConfigManager
def trigger_install(clusterid):
"""Deploy a given cluster.
:param clusterid: the id of the cluster to deploy.
:type clusterid: int
.. note::
The function should be called in database session.
"""
session = database.current_session()
cluster = session.query(Cluster).filter_by(id=clusterid).first()
if not cluster:
logging.error('no cluster found for %s', clusterid)
return
adapter = cluster.adapter
if not adapter:
logging.error('no proper adapter found for cluster %s', cluster.id)
return
if not cluster.state:
cluster.state = ClusterState()
if cluster.state.state and cluster.state.state != 'UNINITIALIZED':
logging.error('ignore installing cluster %s since the state is %s',
cluster.id, cluster.state)
return
cluster.state.state = 'INSTALLING'
hostids = [host.id for host in cluster.hosts]
update_hostids = []
for host in cluster.hosts:
if not host.state:
host.state = HostState()
elif host.state.state and host.state.state != 'UNINITIALIZED':
logging.info('ignore installing host %s since the state is %s',
host.id, host.state)
continue
host.state.state = 'INSTALLING'
update_hostids.append(host.id)
manager = ConfigManager()
manager.update_cluster_and_host_configs(
clusterid, hostids, update_hostids,
adapter.os, adapter.target_system)
manager.sync()
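A minimal usage sketch (the cluster id is hypothetical); per the note above, the call runs inside a database session:

from compass.actions import trigger_install
from compass.db import database

with database.session():
    trigger_install.trigger_install(1)  # hypothetical cluster id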

7
compass/api/__init__.py Normal file

@@ -0,0 +1,7 @@
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.debug = True
import compass.api.api

1148
compass/api/api.py Normal file

File diff suppressed because it is too large

83
compass/api/errors.py Normal file

@@ -0,0 +1,83 @@
"""Exception and its handler"""
from compass.api import app
from compass.api import util
class ObjectDoesNotExist(Exception):
"""Define the exception for referring non-existing object"""
pass
class UserInvalidUsage(Exception):
"""Define the exception for fault usage of users"""
pass
class ObjectDuplicateError(Exception):
"""Define the duplicated object exception"""
pass
class InputMissingError(Exception):
"""Define the insufficient input exception"""
pass
class MethodNotAllowed(Exception):
"""Define the exception which invalid method is called"""
pass
@app.errorhandler(ObjectDoesNotExist)
def handle_not_exist(error, failed_objs=None):
"""Handler of ObjectDoesNotExist Exception"""
message = {'status': 'Not Found',
'message': error.message}
if failed_objs and isinstance(failed_objs, dict):
message.update(failed_objs)
return util.make_json_response(404, message)
@app.errorhandler(UserInvalidUsage)
def handle_invalid_usage(error):
"""Handler of UserInvalidUsage Exception"""
message = {'status': 'Invalid parameters',
'message': error.message}
return util.make_json_response(400, message)
@app.errorhandler(InputMissingError)
def handle_missing_input(error):
"""Handler of InputMissingError Exception"""
message = {'status': 'Insufficient data',
'message': error.message}
return util.make_json_response(400, message)
@app.errorhandler(ObjectDuplicateError)
def handle_duplicate_object(error, failed_objs=None):
"""Handler of ObjectDuplicateError Exception"""
message = {'status': 'Conflict Error',
'message': error.message}
if failed_objs and isinstance(failed_objs, dict):
message.update(failed_objs)
return util.make_json_response(409, message)
@app.errorhandler(MethodNotAllowed)
def handle_not_allowed_method(error):
"""Handler of MethodNotAllowed Exception"""
message = {"status": "Method Not Allowed",
"message": "The method is not allowed to use"}
return util.make_json_response(405, message)
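A hedged sketch of how these handlers fire: an API view raises one of the exceptions above, and Flask routes it to the registered handler, which builds the JSON response. The helper below is hypothetical, not part of the module:

from compass.api import errors
from compass.db.model import Switch

def get_switch_or_404(session, switch_id):
    # Hypothetical helper: raise ObjectDoesNotExist so that
    # handle_not_exist returns a 404 JSON response.
    switch = session.query(Switch).filter_by(id=switch_id).first()
    if not switch:
        raise errors.ObjectDoesNotExist('switch %s does not exist' % switch_id)
    return switch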

310
compass/api/util.py Normal file

@@ -0,0 +1,310 @@
"""Utils for API usage"""
import logging
from flask import make_response
from flask.ext.restful import Api
import re
from netaddr import IPAddress
import simplejson as json
from compass.api import app
api = Api(app)
def make_json_response(status_code, data):
"""Wrap json format to the reponse object"""
result = json.dumps(data, indent=4)
resp = make_response(result, status_code)
resp.headers['Content-type'] = 'application/json'
return resp
def add_resource(*args, **kwargs):
"""Add resource"""
api.add_resource(*args, **kwargs)
def is_valid_ip(ip_address):
"""Valid the format of an Ip address"""
if not ip_address:
return False
regex = ('^(([0-9]|[1-9][0-9]|1[0-9]{2}|[1-2][0-4][0-9]|25[0-5])\.)'
'{3}'
'([0-9]|[1-9][0-9]|1[0-9]{2}|[1-2][0-4][0-9]|25[0-5])')
if re.match(regex, ip_address):
return True
return False
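For instance, under the regex above:

assert is_valid_ip('192.168.100.250')
assert not is_valid_ip('256.1.1.1')
assert not is_valid_ip('')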
def is_valid_ipnetowrk(ip_network):
"""Valid the format of an Ip network"""
if not ip_network:
return False
regex = ('^(([0-9]|[1-9][0-9]|1[0-9]{2}|[1-2][0-4][0-9]|25[0-5])\.)'
'{3}'
'([0-9]|[1-9][0-9]|1[0-9]{2}|[1-2][0-4][0-9]|25[0-5])'
'((\/[0-9]|\/[1-2][0-9]|\/[1-3][0-2]))$')
if re.match(regex, ip_network):
return True
return False
def is_valid_netmask(ip_addr):
"""Valid the format of a netmask"""
try:
ip_address = IPAddress(ip_addr)
return ip_address.is_netmask()
except Exception:
return False
def is_valid_gateway(ip_addr):
"""Valid the format of gateway"""
invalid_ip_prefix = ['0', '224', '169', '127']
try:
# Check if ip_addr is an IP address and not start with 0
ip_addr_prefix = ip_addr.split('.')[0]
if is_valid_ip(ip_addr) and ip_addr_prefix not in invalid_ip_prefix:
ip_address = IPAddress(ip_addr)
if not ip_address.is_multicast():
# Check if ip_addr is not multicast and reserved IP
return True
return False
except Exception:
return False
def is_valid_security_config(config):
"""Valid the format of security section in config"""
security_keys = ['server_credentials', 'service_credentials',
'console_credentials']
fields = ['username', 'password']
logging.debug('config: %s', config)
for key in security_keys:
try:
content = config[key]
except KeyError:
error_msg = "Missing '%s' in security config!" % key
logging.error(error_msg)
raise KeyError(error_msg)
for k in fields:
try:
value = content[k]
if not value:
return False, '%s in %s cannot be null!' % (k, key)
except KeyError:
error_msg = ("Missing '%s' in '%s' section of security config"
% (k, key))
logging.error(error_msg)
raise KeyError(error_msg)
return True, 'valid!'
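For reference, a config shaped like the following passes the check above (the credential values are hypothetical):

security = {
    'server_credentials': {'username': 'root', 'password': 'root'},
    'service_credentials': {'username': 'service', 'password': 'service'},
    'console_credentials': {'username': 'console', 'password': 'console'},
}
# is_valid_security_config(security) == (True, 'valid!')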
def is_valid_networking_config(config):
"""Valid the format of networking config"""
networking = ['interfaces', 'global']
def _is_valid_interfaces_config(interfaces_config):
"""Valid the format of interfaces section in config"""
expected_keys = ['management', 'tenant', 'public', 'storage']
required_fields = ['nic', 'promisc']
normal_fields = ['ip_start', 'ip_end', 'netmask']
other_fields = ['gateway', 'vlan']
interfaces_keys = interfaces_config.keys()
for key in expected_keys:
if key not in interfaces_keys:
error_msg = "Missing '%s' in interfaces config!" % key
return False, error_msg
content = interfaces_config[key]
for field in required_fields:
if field not in content:
error_msg = "Keyword '%s' in interface %s cannot be None!"\
% (field, key)
return False, error_msg
value = content[field]
if value is None:
error_msg = ("The value of '%s' in '%s' "
'config cannot be None!' %
(field, key))
return False, error_msg
if field == 'promisc':
valid_values = [0, 1]
if int(value) not in valid_values:
return (
False,
('The value of promisc for interface %s can '
'only be 0/1.' % key)
)
elif field == 'nic':
if not value.startswith('eth'):
return (
False,
('The value of nic for interface %s should start '
'with eth' % key)
)
if not content['promisc']:
for field in normal_fields:
value = content[field]
if field == 'netmask' and not is_valid_netmask(value):
return (False, "Invalid netmask format for interface "
" %s: '%s'!" % (key, value))
elif not is_valid_ip(value):
return (False,
"Invalid Ip format for interface %s: '%s'"
% (key, value))
for field in other_fields:
if field in content and field == 'gateway':
value = content[field]
if value and not is_valid_gateway(value):
return False, "Invalid gateway format '%s'" % value
return True, 'Valid!'
def _is_valid_global_config(global_config):
"""Valid the format of 'global' section in config"""
required_fields = ['nameservers', 'search_path', 'gateway']
global_keys = global_config.keys()
for key in required_fields:
if key not in global_keys:
error_msg = ("Missing %s in global config of networking config"
% key)
return False, error_msg
value = global_config[key]
if not value:
error_msg = ("Value of %s in global config cannot be None!" %
key)
return False, error_msg
if key == 'nameservers':
nameservers = [nameserver for nameserver in value.split(',')
if nameserver]
for nameserver in nameservers:
if not is_valid_ip(nameserver):
return (
False,
"The nameserver format is invalid! '%s'" % value
)
elif key == 'gateway' and not is_valid_gateway(value):
return False, "The gateway format is invalid! '%s'" % value
return True, 'Valid!'
#networking_keys = networking.keys()
is_valid = False
msg = None
for nkey in networking:
if nkey in config:
content = config[nkey]
if nkey == 'interfaces':
is_valid, msg = _is_valid_interfaces_config(content)
elif nkey == 'global':
is_valid, msg = _is_valid_global_config(content)
if not is_valid:
return is_valid, msg
else:
error_msg = "Missing '%s' in networking config!" % nkey
return False, error_msg
return True, 'valid!'
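For reference, a networking config shaped like the following passes the checks above (all addresses and NIC names are hypothetical; 'public' sets promisc to 1, so its IP range is not required):

networking = {
    'global': {'nameservers': '10.145.88.210',
               'search_path': 'ods.com',
               'gateway': '10.145.88.1'},
    'interfaces': {
        'management': {'nic': 'eth0', 'promisc': 0,
                       'ip_start': '10.145.88.100', 'ip_end': '10.145.88.200',
                       'netmask': '255.255.255.0'},
        'tenant': {'nic': 'eth0', 'promisc': 0,
                   'ip_start': '192.168.100.100', 'ip_end': '192.168.100.200',
                   'netmask': '255.255.255.0'},
        'public': {'nic': 'eth1', 'promisc': 1},
        'storage': {'nic': 'eth0', 'promisc': 0,
                    'ip_start': '172.16.100.100', 'ip_end': '172.16.100.200',
                    'netmask': '255.255.255.0'},
    },
}
# is_valid_networking_config(networking) == (True, 'valid!')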
def is_valid_partition_config(config):
"""Valid the configuration format"""
if not config:
return False, 'partition config cannot be null!'
return True, 'valid!'
def valid_host_config(config):
""" valid_format is used to check if the input config is qualified
the required fields and format.
The key is the required field and format of the input config
The value is the validator function name of the config value
"""
from compass.api import errors
valid_format = {"/networking/interfaces/management/ip": "is_valid_ip",
"/networking/global/gateway": "is_valid_gateway",
"/networking/global/nameserver": "",
"/networking/global/search_path": "",
"/roles": ""}
flat_config = {}
flatten_dict(config, flat_config)
config_keys = flat_config.keys()
for key in config_keys:
validator = None
try:
validator = valid_format[key]
except KeyError:
error_msg = ("Cannot find the path '%s'. Please check the keywords"
% key)
raise errors.UserInvalidUsage(error_msg)
else:
value = flat_config[key]
if validator:
is_valid_format = globals()[validator](value)
if not is_valid_format:
error_msg = "The format '%s' is incorrect!" % value
raise errors.UserInvalidUsage(error_msg)
def flatten_dict(dictionary, output, flat_key=""):
"""This function will convert the dictionary into a list
For example:
dict = {'a':{'b': 'c'}, 'd': 'e'} ==>
list = ['a/b/c', 'd/e']
"""
keywords = dictionary.keys()
for key in keywords:
tmp = '/'.join((flat_key, key))
if isinstance(dictionary[key], dict):
flatten_dict(dictionary[key], output, tmp)
else:
output[tmp] = dictionary[key]
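For example:

config = {'networking': {'global': {'gateway': '10.0.0.1'}}, 'roles': []}
flat = {}
flatten_dict(config, flat)
# flat == {'/networking/global/gateway': '10.0.0.1', '/roles': []}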
def update_dict_value(searchkey, newvalue, dictionary):
"""Update dictionary value"""
keywords = dictionary.keys()
for key in keywords:
if key == searchkey:
dictionary[key] = newvalue
elif isinstance(dictionary[key], dict):
update_dict_value(searchkey, newvalue, dictionary[key])
else:
continue
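For example:

creds = {'security': {'server_credentials': {'username': 'root', 'password': 'old'}}}
update_dict_value('password', 'new', creds)
# creds['security']['server_credentials']['password'] == 'new'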

232
compass/apiclient/example.py Executable file

@@ -0,0 +1,232 @@
#!/usr/bin/python
"""Example code to deploy a cluster by compass client api."""
import sys
import time
from compass.apiclient.restful import Client
COMPASS_SERVER_URL = 'http://10.145.88.210:8080'
SWITCH_IP = '10.145.88.1'
SWITCH_SNMP_VERSION = 'v2c'
SWITCH_SNMP_COMMUNITY = 'public'
MACHINES_TO_ADD = ['00:0c:29:c3:40:7c', '00:0c:29:e9:f6:a6']
CLUSTER_NAME = 'cluster'
HOST_NAME_PREFIX = 'host'
SERVER_USERNAME = 'root'
SERVER_PASSWORD = 'root'
SERVICE_USERNAME = 'service'
SERVICE_PASSWORD = 'service'
CONSOLE_USERNAME = 'console'
CONSOLE_PASSWORD = 'console'
NAMESERVERS = '10.145.88.210'
SEARCH_PATH = 'ods.com'
GATEWAY = '10.145.88.1'
PROXY = 'http://10.145.88.210:3128'
NTP_SERVER = '10.145.88.210'
MANAGEMENT_IP_START = '10.145.88.130'
MANAGEMENT_IP_END = '10.145.88.255'
MANAGEMENT_NETMASK = '255.255.255.0'
MANAGEMENT_NIC = 'eth0'
MANAGEMENT_PROMISC = 0
TENANT_IP_START = '192.168.100.100'
TENANT_IP_END = '192.168.100.255'
TENANT_NETMASK = '255.255.255.0'
TENANT_NIC = 'eth0'
TENANT_PROMISC = 0
PUBLIC_IP_START = '12.234.32.100'
PUBLIC_IP_END = '12.234.32.255'
PUBLIC_NETMASK = '255.255.255.0'
PUBLIC_NIC = 'eth1'
PUBLIC_PROMISC = 1
STORAGE_IP_START = '172.16.100.100'
STORAGE_IP_END = '172.16.100.255'
STORAGE_NETMASK = '255.255.255.0'
STORAGE_NIC = 'eth0'
STORAGE_PROMISC = 0
HOME_PERCENTAGE = 40
TMP_PERCENTAGE = 10
VAR_PERCENTAGE = 15
ROLES_LIST = [[], ['os-single-controller']]
# get apiclient object.
client = Client(COMPASS_SERVER_URL)
# get all switches.
status, resp = client.get_switches()
print 'get all switches status: %s resp: %s' % (status, resp)
# add a switch.
status, resp = client.add_switch(
SWITCH_IP, version=SWITCH_SNMP_VERSION,
community=SWITCH_SNMP_COMMUNITY)
print 'add a switch status: %s resp: %s' % (status, resp)
if status < 400:
switch = resp['switch']
else:
status, resp = client.get_switches()
print 'get all switches status: %s resp: %s' % (status, resp)
switch = None
for switch in resp['switches']:
if switch['ip'] == SWITCH_IP:
break
switch_id = switch['id']
switch_ip = switch['ip']
# if the switch is not in under_monitoring, wait for the poll switch task to
# update the switch information and change the switch state.
while switch['state'] != 'under_monitoring':
print 'waiting for the switch to enter under_monitoring'
status, resp = client.get_switch(switch_id)
print 'get switch %s status: %s, resp: %s' % (switch_id, status, resp)
switch = resp['switch']
time.sleep(10)
# get machines connected to the switch.
status, resp = client.get_machines(switch_id=switch_id)
print 'get all machines under switch %s status: %s, resp: %s' % (
switch_id, status, resp)
machines = {}
for machine in resp['machines']:
mac = machine['mac']
if mac in MACHINES_TO_ADD:
machines[machine['id']] = mac
print 'machine to add: %s' % machines
if set(machines.values()) != set(MACHINES_TO_ADD):
print 'only found macs %s while the expected ones are %s' % (
machines.values(), MACHINES_TO_ADD)
sys.exit(1)
# get adapters.
status, resp = client.get_adapters()
print 'get all adapters status: %s, resp: %s' % (status, resp)
adapter_ids = []
for adapter in resp['adapters']:
adapter_ids.append(adapter['id'])
adapter_id = adapter_ids[0]
print 'adapter for deploying a cluster: %s' % adapter_id
# add a cluster.
status, resp = client.add_cluster(
cluster_name=CLUSTER_NAME, adapter_id=adapter_id)
print 'add cluster %s status: %s, resp: %s' % (CLUSTER_NAME, status, resp)
cluster = resp['cluster']
cluster_id = cluster['id']
# add hosts to the cluster.
status, resp = client.add_hosts(
cluster_id=cluster_id,
machine_ids=machines.keys())
print 'add hosts to cluster %s status: %s, resp: %s' % (
cluster_id, status, resp)
host_ids = []
for host in resp['cluster_hosts']:
host_ids.append(host['id'])
print 'added hosts: %s' % host_ids
# set cluster security
status, resp = client.set_security(
cluster_id, server_username=SERVER_USERNAME,
server_password=SERVER_PASSWORD,
service_username=SERVICE_USERNAME,
service_password=SERVICE_PASSWORD,
console_username=CONSOLE_USERNAME,
console_password=CONSOLE_PASSWORD)
print 'set security config to cluster %s status: %s, resp: %s' % (
cluster_id, status, resp)
# set cluster networking
status, resp = client.set_networking(
cluster_id,
nameservers=NAMESERVERS,
search_path=SEARCH_PATH,
gateway=GATEWAY,
proxy=PROXY,
ntp_server=NTP_SERVER,
management_ip_start=MANAGEMENT_IP_START,
management_ip_end=MANAGEMENT_IP_END,
management_netmask=MANAGEMENT_NETMASK,
management_nic=MANAGEMENT_NIC,
management_promisc=MANAGEMENT_PROMISC,
tenant_ip_start=TENANT_IP_START,
tenant_ip_end=TENANT_IP_END,
tenant_netmask=TENANT_NETMASK,
tenant_nic=TENANT_NIC,
tenant_promisc=TENANT_PROMISC,
public_ip_start=PUBLIC_IP_START,
public_ip_end=PUBLIC_IP_END,
public_netmask=PUBLIC_NETMASK,
public_nic=PUBLIC_NIC,
public_promisc=PUBLIC_PROMISC,
storage_ip_start=STORAGE_IP_START,
storage_ip_end=STORAGE_IP_END,
storage_netmask=STORAGE_NETMASK,
storage_nic=STORAGE_NIC,
storage_promisc=STORAGE_PROMISC)
print 'set networking config to cluster %s status: %s, resp: %s' % (
cluster_id, status, resp)
# set partiton of each host in cluster
status, resp = client.set_partition(cluster_id,
home_percentage=HOME_PERCENTAGE,
tmp_partition_percentage=TMP_PERCENTAGE,
var_partition_percentage=VAR_PERCENTAGE)
print 'set partition config to cluster %s status: %s, resp: %s' % (
cluster_id, status, resp)
# set each host config in cluster.
for host_id in host_ids:
if ROLES_LIST:
roles = ROLES_LIST.pop(0)
else:
roles = []
status, resp = client.update_host_config(
host_id, hostname='%s%s' % (HOST_NAME_PREFIX, host_id),
roles=roles)
print 'set roles to host %s status: %s, resp: %s' % (
host_id, status, resp)
# deploy cluster.
status, resp = client.deploy_hosts(cluster_id)
print 'deploy cluster %s status: %s, resp: %s' % (cluster_id, status, resp)
# get installing progress.
while True:
status, resp = client.get_cluster_installing_progress(cluster_id)
print 'get cluster %s installing progress status: %s, resp: %s' % (
cluster_id, status, resp)
progress = resp['progress']
if (progress['state'] not in ['UNINITIALIZED', 'INSTALLING'] or
progress['percentage'] >= 1.0):
break
for host_id in host_ids:
status, resp = client.get_host_installing_progress(host_id)
print 'get host %s installing progress status: %s, resp: %s' % (
host_id, status, resp)
time.sleep(10)
status, resp = client.get_dashboard_links(cluster_id)
print 'get cluster %s dashboardlinks status: %s, resp: %s' % (
cluster_id, status, resp)

610
compass/apiclient/restful.py Normal file

@@ -0,0 +1,610 @@
"""Compass api client library.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
import json
import requests
class Client(object):
"""wrapper for compass restful api.
.. note::
Every api client method returns (status as int, resp as dict).
If the api succeeds, the status is 2xx, the resp includes
{'status': 'OK'} and other keys depend on method.
If the api fails, the status is 4xx, the resp includes {
'status': '...', 'message': '...'}
"""
def __init__(self, url, headers=None, proxies=None, stream=None):
"""Restful api client initialization.
:param url: url to the compass web service.
:type url: str.
:param headers: http header sent in each restful request.
:type headers: dict of header name (str) to header value (str).
:param proxies: the proxy address for each protocol.
:type proxies: dict of protocol (str) to proxy url (str).
:param stream: whether the restful response should be streamed.
:type stream: bool.
"""
self.url_ = url
self.session_ = requests.Session()
if headers:
self.session_.headers = headers
if proxies is not None:
self.session_.proxies = proxies
if stream is not None:
self.session_.stream = stream
def __del__(self):
self.session_.close()
@classmethod
def _get_response(cls, resp):
"""decapsulate the resp to status code and python formatted data."""
resp_obj = {}
try:
resp_obj = resp.json()
except Exception as error:
logging.error('failed to load object from %s: %s',
resp.url, resp.content)
logging.exception(error)
resp_obj['status'] = 'Json Parsing Failure'
resp_obj['message'] = resp.content
return resp.status_code, resp_obj
def _get(self, relative_url, params=None):
"""encapsulate get method."""
url = '%s%s' % (self.url_, relative_url)
if params:
resp = self.session_.get(url, params=params)
else:
resp = self.session_.get(url)
return self._get_response(resp)
def _post(self, relative_url, data=None):
"""encapsulate post method."""
url = '%s%s' % (self.url_, relative_url)
if data:
resp = self.session_.post(url, json.dumps(data))
else:
resp = self.session_.post(url)
return self._get_response(resp)
def _put(self, relative_url, data=None):
"""encapsulate put method."""
url = '%s%s' % (self.url_, relative_url)
if data:
resp = self.session_.put(url, json.dumps(data))
else:
resp = self.session_.put(url)
return self._get_response(resp)
def _delete(self, relative_url):
"""encapsulate delete method."""
url = '%s%s' % (self.url_, relative_url)
return self._get_response(self.session_.delete(url))
def get_switches(self, switch_ips=None, switch_networks=None, limit=None):
"""List details for switches.
.. note::
The switches can be filtered by switch_ips, switch_networks and
limit. These params can be None or missing. If the param is None
or missing, that filter will be ignored.
:param switch_ips: Filter switch(es) with IP(s).
:type switch_ips: list of str. Each is as 'xxx.xxx.xxx.xxx'.
:param switch_networks: Filter switch(es) with network(s).
:type switch_networks: list of str. Each is as 'xxx.xxx.xxx.xxx/xx'.
:param limit: the maximum number of switches to return.
:type limit: int. 0 means unlimited.
"""
params = {}
if switch_ips:
params['switchIp'] = switch_ips
if switch_networks:
params['switchIpNetwork'] = switch_networks
if limit:
params['limit'] = limit
return self._get('/api/switches', params=params)
def get_switch(self, switch_id):
"""Lists details for a specified switch.
:param switch_id: switch id.
:type switch_id: int.
"""
return self._get('/api/switches/%s' % switch_id)
def add_switch(self, switch_ip, version=None, community=None,
username=None, password=None):
"""Create a switch with specified details.
.. note::
It will trigger switch polling if successful. During
the polling, MAC address of the devices connected to the
switch will be learned by SNMP or SSH.
:param switch_ip: the switch IP address.
:type switch_ip: str, as xxx.xxx.xxx.xxx.
:param version: SNMP version when using SNMP to poll switch.
:type version: str, one in ['v1', 'v2c', 'v3']
:param community: SNMP community when using SNMP to poll switch.
:type community: str, usually 'public'.
:param username: SSH username when using SSH to poll switch.
:type username: str.
:param password: SSH password when using SSH to poll switch.
:type password: str.
"""
data = {}
data['switch'] = {}
data['switch']['ip'] = switch_ip
data['switch']['credential'] = {}
if version:
data['switch']['credential']['version'] = version
if community:
data['switch']['credential']['community'] = community
if username:
data['switch']['credential']['username'] = username
if password:
data['switch']['credential']['password'] = password
return self._post('/api/switches', data=data)
def update_switch(self, switch_id, ip_addr=None,
version=None, community=None,
username=None, password=None):
"""Updates a switch with specified details.
.. note::
It will trigger switch polling if successful. During
the polling, MAC address of the devices connected to the
switch will be learned by SNMP or SSH.
:param switch_id: switch id
:type switch_id: int.
:param ip_addr: the switch ip address.
:type ip_addr: str, as 'xxx.xxx.xxx.xxx' format.
:param version: SNMP version when using SNMP to poll switch.
:type version: str, one in ['v1', 'v2c', 'v3'].
:param community: SNMP community when using SNMP to poll switch.
:type community: str, usually 'public'.
:param username: username when using SSH to poll switch.
:type username: str.
:param password: password when using SSH to poll switch.
"""
data = {}
data['switch'] = {}
if ip_addr:
data['switch']['ip'] = ip_addr
data['switch']['credential'] = {}
if version:
data['switch']['credential']['version'] = version
if community:
data['switch']['credential']['community'] = community
if username:
data['switch']['credential']['username'] = username
if password:
data['switch']['credential']['password'] = password
return self._put('/api/switches/%s' % switch_id, data=data)
def delete_switch(self, switch_id):
"""Not implemented in api."""
return self._delete('/api/switches/%s' % switch_id)
def get_machines(self, switch_id=None, vlan_id=None,
port=None, limit=None):
"""Get the details of machines.
.. note::
The machines can be filtered by switch_id, vlan_id, port
and limit. These params can be None or missing. If the param
is None or missing, the filter will be ignored.
:param switch_id: Return machine(s) connected to the switch.
:type switch_id: int.
:param vlan_id: Return machine(s) belonging to the vlan.
:type vlan_id: int.
:param port: Return machine(s) connect to the port.
:type port: int.
:param limit: the maximum number of machines will be returned.
:type limit: int. 0 means no limit.
"""
params = {}
if switch_id:
params['switchId'] = switch_id
if vlan_id:
params['vlanId'] = vlan_id
if port:
params['port'] = port
if limit:
params['limit'] = limit
return self._get('/api/machines', params=params)
def get_machine(self, machine_id):
"""Lists the details for a specified machine.
:param machine_id: Return machine with the id.
:type machine_id: int.
"""
return self._get('/api/machines/%s' % machine_id)
def get_clusters(self):
"""Lists the details for all clusters.
"""
return self._get('/api/clusters')
def get_cluster(self, cluster_id):
"""Lists the details of the specified cluster.
:param cluster_id: cluster id.
:type cluster_id: int.
"""
return self._get('/api/clusters/%s' % cluster_id)
def add_cluster(self, cluster_name, adapter_id):
"""Creates a cluster by specified name and given adapter id.
:param cluster_name: cluster name.
:type cluster_name: str.
:param adapter_id: adapter id.
:type adapter_id: int.
"""
data = {}
data['cluster'] = {}
data['cluster']['name'] = cluster_name
data['cluster']['adapter_id'] = adapter_id
return self._post('/api/clusters', data=data)
def add_hosts(self, cluster_id, machine_ids):
"""add the specified machine(s) as the host(s) to the cluster.
:param cluster_id: cluster id.
:type cluster_id: int.
:param machine_ids: machine ids to add to cluster.
:type machine_ids: list of int, each is the id of one machine.
"""
data = {}
data['addHosts'] = machine_ids
return self._post('/api/clusters/%s/action' % cluster_id, data=data)
def remove_hosts(self, cluster_id, host_ids):
"""remove the specified host(s) from the cluster.
:param cluster_id: cluster id.
:type cluster_id: int.
:param host_ids: host ids to remove from cluster.
:type host_ids: list of int, each is the id of one host.
"""
data = {}
data['removeHosts'] = host_ids
return self._post('/api/clusters/%s/action' % cluster_id, data=data)
def replace_hosts(self, cluster_id, machine_ids):
"""replace the cluster hosts with the specified machine(s).
:param cluster_id: int, The unique identifier of the cluster.
:type cluster_id: int.
:param machine_ids: the machine ids to replace the hosts in cluster.
:type machine_ids: list of int, each is the id of one machine.
"""
data = {}
data['replaceAllHosts'] = machine_ids
return self._post('/api/clusters/%s/action' % cluster_id, data=data)
def deploy_hosts(self, cluster_id):
"""Deploy the cluster.
:param cluster_id: The unique identifier of the cluster
:type cluster_id: int.
"""
data = {}
data['deploy'] = {}
return self._post('/api/clusters/%s/action' % cluster_id, data=data)
@classmethod
def parse_security(cls, kwargs):
"""parse the arguments to security data."""
data = {}
for key, value in kwargs.items():
if '_' not in key:
continue
key_name, key_value = key.split('_', 1)
data.setdefault(
'%s_credentials' % key_name, {})[key_value] = value
return data
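# A worked example of the mapping above (values are illustrative):
# parse_security({'server_username': 'root', 'server_password': 'pass'})
# returns {'server_credentials': {'username': 'root', 'password': 'pass'}}.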
def set_security(self, cluster_id, **kwargs):
"""Update the cluster security configuration.
:param cluster_id: cluster id.
:type cluster_id: int.
:param <security_name>_username: username of the security name.
:type <security_name>_username: str.
:param <security_name>_password: password of the security name.
:type <security_name>_password: str.
.. note::
security_name should be one of ['server', 'service', 'console'].
"""
data = {}
data['security'] = self.parse_security(kwargs)
return self._put('/api/clusters/%s/security' % cluster_id, data=data)
@classmethod
def parse_networking(cls, kwargs):
"""parse arguments to network data."""
data = {}
possible_keys = [
'nameservers', 'search_path', 'gateway', 'proxy', 'ntp_server']
for key, value in kwargs.items():
if key in possible_keys:
data.setdefault('global', {})[key] = value
else:
if '_' not in key:
continue
key_name, key_value = key.split('_', 1)
data.setdefault(
'interfaces', {}).setdefault(
key_name, {})[key_value] = value
return data
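# A worked example of the mapping above (values are illustrative):
# parse_networking({'nameservers': '8.8.8.8',
# 'management_ip_start': '10.0.0.100'})
# returns {'global': {'nameservers': '8.8.8.8'},
# 'interfaces': {'management': {'ip_start': '10.0.0.100'}}}.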
def set_networking(self, cluster_id, **kwargs):
"""Update the cluster network configuration.
:param cluster_id: cluster id.
:type cluster_id: int.
:param nameservers: comma separated nameserver ip addresses.
:type nameservers: str.
:param search_path: comma separated dns name search path.
:type search_path: str.
:param gateway: gateway ip address for routing to outside.
:type gateway: str.
:param proxy: proxy url for downloading packages.
:type proxy: str.
:param ntp_server: ntp server ip address to sync timestamp.
:type ntp_server: str.
:param <interface>_ip_start: start ip address to host's interface.
:type <interface>_ip_start: str.
:param <interface>_ip_end: end ip address to host's interface.
:type <interface>_ip_end: str.
:param <interface>_netmask: netmask to host's interface.
:type <interface>_netmask: str.
:param <interface>_nic: host physical interface name.
:type <interface>_nic: str.
:param <interface>_promisc: whether the interface is in promiscuous mode.
:type <interface>_promisc: int, 0 or 1.
.. note::
interface should be one of ['management', 'tenant',
'public', 'storage'].
"""
data = {}
data['networking'] = self.parse_networking(kwargs)
return self._put('/api/clusters/%s/networking' % cluster_id, data=data)
@classmethod
def parse_partition(cls, kwargs):
"""parse arguments to partition data."""
data = {}
for key, value in kwargs.items():
if key.endswith('_percentage'):
key_name = key[:-len('_percentage')]
data[key_name] = '%s%%' % value
elif key.endswith('_mbytes'):
key_name = key[:-len('_mbytes')]
data[key_name] = str(value)
return ';'.join([
'/%s %s' % (key, value) for key, value in data.items()
])
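# A worked example of the encoding above (values are illustrative):
# parse_partition({'home_percentage': 40, 'tmp_mbytes': 2048}) returns a
# string like '/home 40%;/tmp 2048' (entry order follows dict iteration).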
def set_partition(self, cluster_id, **kwargs):
"""Update the cluster partition configuration.
:param cluster_id: cluster id.
:type cluster_id: int.
:param <partition>_percentage: the partition percentage.
:type <partition>_percentage: float between 0 and 100.
:param <partition>_mbytes: the partition mbytes.
:type <partition>_mbytes: int.
.. note::
partition should be one of ['home', 'var', 'tmp'].
"""
data = {}
data['partition'] = self.parse_partition(kwargs)
return self._put('/api/clusters/%s/partition' % cluster_id, data=data)
def get_hosts(self, hostname=None, clustername=None):
"""Lists the details of hosts.
.. note::
The hosts can be filtered by hostname, clustername.
These params can be None or missing. If the param
is None or missing, the filter will be ignored.
:param hostname: The name of a host.
:type hostname: str.
:param clustername: The name of a cluster.
:type clustername: str.
"""
params = {}
if hostname:
params['hostname'] = hostname
if clustername:
params['clustername'] = clustername
return self._get('/api/clusterhosts', params=params)
def get_host(self, host_id):
"""Lists the details for the specified host.
:param host_id: host id.
:type host_id: int.
"""
return self._get('/api/clusterhosts/%s' % host_id)
def get_host_config(self, host_id):
"""Lists the details of the config for the specified host.
:param host_id: host id.
:type host_id: int.
"""
return self._get('/api/clusterhosts/%s/config' % host_id)
def update_host_config(self, host_id, hostname=None,
roles=None, **kwargs):
"""Updates config for the host.
:param host_id: host id.
:type host_id: int.
:param hostname: host name.
:type hostname: str.
:param security_<security>_username: username of the security name.
:type security_<security>_username: str.
:param security_<security>_password: password of the security name.
:type security_<security>_password: str.
:param networking_nameservers: comma separated nameserver ip addresses.
:type networking_nameservers: str.
:param networking_search_path: comma separated dns name search path.
:type networking_search_path: str.
:param networking_gateway: gateway ip address for routing to outside.
:type networking_gateway: str.
:param networking_proxy: proxy url for downloading packages.
:type networking_proxy: str.
:param networking_ntp_server: ntp server ip address to sync timestamp.
:type networking_ntp_server: str.
:param networking_<interface>_ip: ip address to host interface.
:type networking_<interface>_ip: str.
:param networking_<interface>_netmask: netmask to host's interface.
:type networking_<interface>_netmask: str.
:param networking_<interface>_nic: host physical interface name.
:type networking_<interface>_nic: str.
:param networking_<interface>_promisc: whether the interface is promiscuous.
:type networking_<interface>_promisc: int, 0 or 1.
:param partition_<partition>_percentage: the partition percentage.
:type partition_<partition>_percentage: float between 0 and 100.
:param partition_<partition>_mbytes: the partition mbytes.
:type partition_<partition>_mbytes: int.
:param roles: host assigned roles in the cluster.
:type roles: list of str.
"""
data = {}
if hostname:
data['hostname'] = hostname
sub_kwargs = {}
for key, value in kwargs.items():
key_name, key_value = key.split('_', 1)
sub_kwargs.setdefault(key_name, {})[key_value] = value
if 'security' in sub_kwargs:
data['security'] = self.parse_security(sub_kwargs['security'])
if 'networking' in sub_kwargs:
data['networking'] = self.parse_networking(
sub_kwargs['networking'])
if 'partition' in sub_kwargs:
data['partition'] = self.parse_partition(sub_kwargs['partition'])
if roles:
data['roles'] = roles
return self._put('/api/clusterhosts/%s/config' % host_id, data)
def delete_from_host_config(self, host_id, delete_key):
"""Deletes one key in config for the host.
:param host_id: host id.
:type host_id: int.
:param delete_key: the key in host config to be deleted.
:type delete_key: str.
"""
return self._delete('/api/clusterhosts/%s/config/%s' % (
host_id, delete_key))
def get_adapters(self, name=None):
"""Lists details of adapters.
.. note::
the adapters can be filtered by name if name is given and not None.
:param name: adapter name.
:type name: str.
"""
params = {}
if name:
params['name'] = name
return self._get('/api/adapters', params=params)
def get_adapter(self, adapter_id):
"""Lists details for the specified adapter.
:param adapter_id: adapter id.
:type adapter_id: int.
"""
return self._get('/api/adapters/%s' % adapter_id)
def get_adapter_roles(self, adapter_id):
"""Lists roles to assign to hosts for the specified adapter.
:param adapter_id: adapter id.
:type adapter_id: int.
"""
return self._get('/api/adapters/%s/roles' % adapter_id)
def get_host_installing_progress(self, host_id):
"""Lists progress details for the specified host.
:param host_id: host id.
:type host_id: int.
"""
return self._get('/api/clusterhosts/%s/progress' % host_id)
def get_cluster_installing_progress(self, cluster_id):
"""Lists progress details for the specified cluster.
:param cluster_id: cluster id.
:type cluster_id: int.
"""
return self._get('/api/clusters/%s/progress' % cluster_id)
def get_dashboard_links(self, cluster_id):
"""Lists links for dashboards of deployed cluster.
:param cluster_id: cluster id.
:type cluster_id: int.
"""
params = {}
params['cluster_id'] = cluster_id
return self._get('/api/dashboardlinks', params)
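# A minimal usage sketch of the client above (the server url and switch
# details are illustrative assumptions, not defaults):
#
#     client = Client('http://127.0.0.1')
#     status, resp = client.add_switch(
#         '10.145.8.10', version='v2c', community='public')
#     if status < 400:
#         print 'added switch: %s' % resp
#     else:
#         print 'failed to add switch: %s' % resp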

@ -0,0 +1,3 @@
from compass.config_management.installers.plugins import chefhandler
from compass.config_management.installers.plugins import cobbler

@ -0,0 +1,125 @@
"""Module to provider installer interface.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
class Installer(object):
"""Interface for installer."""
NAME = 'installer'
def __init__(self):
raise NotImplementedError(
'%s is not implemented' % self.__class__.__name__)
def __repr__(self):
return '%s[%s]' % (self.__class__.__name__, self.NAME)
def sync(self, **kwargs):
"""virtual method to sync installer."""
pass
def reinstall_host(self, hostid, config, **kwargs):
"""virtual method to reinstall specific host."""
pass
def get_global_config(self, **kwargs):
"""virtual method to get global config."""
return {}
def clean_cluster_config(self, clusterid, config, **kwargs):
"""virtual method to clean cluster config.
:param clusterid: the id of the cluster to cleanup.
:type clusterid: int
:param config: cluster configuration to cleanup.
:type config: dict
"""
pass
def get_cluster_config(self, clusterid, **kwargs):
"""virtual method to get cluster config.
:param clusterid: the id of the cluster to get configuration.
:type clusterid: int
:returns: cluster configuration as dict.
"""
return {}
def clean_host_config(self, hostid, config, **kwargs):
"""virtual method to clean host config.
:param hostid: the id of the host to cleanup.
:type hostid: int
:param config: host configuration to cleanup.
:type config: dict
"""
pass
def get_host_config(self, hostid, **kwargs):
"""virtual method to get host config.
:param hostid: the id of host to get configuration.
:type hostid: int
:returns: host configuration as dict.
"""
return {}
def clean_host_configs(self, host_configs, **kwargs):
"""Wrapper method to clean hosts' configs.
:param host_configs: dict of host id to host configuration as dict
"""
for hostid, host_config in host_configs.items():
self.clean_host_config(hostid, host_config, **kwargs)
def get_host_configs(self, hostids, **kwargs):
"""Wrapper method get hosts' configs.
:param hostids: ids of the hosts' configuration.
:type hostids: list of int
:returns: dict of host id to host configuration as dict.
"""
host_configs = {}
for hostid in hostids:
host_configs[hostid] = self.get_host_config(hostid, **kwargs)
return host_configs
def update_global_config(self, config, **kwargs):
"""virtual method to update global config.
:param config: global configuration.
:type config: dict
"""
pass
def update_cluster_config(self, clusterid, config, **kwargs):
"""virtual method to update cluster config.
:param clusterid: the id of the cluster to update the configuration.
:type clusterid: int
:param config: cluster configuration to update.
:type config: dict
"""
pass
def update_host_config(self, hostid, config, **kwargs):
"""virtual method to update host config.
:param hostid: the id of host to update host configuration.
:type hostid: int
:param config: host configuration to update.
:type config: dict
"""
pass
def update_host_configs(self, host_configs, **kwargs):
"""Wrapper method to updaet hosts' configs.
:param host_configs: dict of host id to host configuration as dict
"""
for hostid, config in host_configs.items():
self.update_host_config(hostid, config, **kwargs)
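# A minimal sketch of a concrete installer against the interface above
# (the subclass name and behavior are hypothetical, for illustration only):
#
#     class NoopInstaller(Installer):
#         NAME = 'noop'
#         def __init__(self):
#             pass
#         def get_host_config(self, hostid, **kwargs):
#             return {'hostid': hostid}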

@ -0,0 +1,65 @@
"""Module for interface of os installer.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.config_management.installers import installer
from compass.utils import setting_wrapper as setting
class Installer(installer.Installer):
"""Interface for os installer."""
NAME = 'os_installer'
def get_oses(self):
"""virtual method to get supported oses.
:returns: list of str, each is the supported os version.
"""
return []
INSTALLERS = {}
def get_installer_by_name(name, package_installer):
"""Get os installer by name.
:param name: os installer name.
:type name: str
:param package_installer: package installer instance.
:returns: instance of subclass of :class:`Installer`
:raises: KeyError
"""
if name not in INSTALLERS:
logging.error('os installer name %s is not in os installers %s',
name, INSTALLERS)
raise KeyError('os installer name %s is not in os INSTALLERS' % name)
os_installer = INSTALLERS[name](package_installer)
logging.debug('got os installer %s', os_installer)
return os_installer
def register(os_installer):
"""Register os installer.
:param os_installer: subclass of :class:`Installer`
:raises: KeyError
"""
if os_installer.NAME in INSTALLERS:
logging.error(
'os installer %s is already registered in INSTALLERS %s',
os_installer, INSTALLERS)
raise KeyError(
'os installer %s is already registered' % os_installer)
logging.debug('register os installer %s', os_installer)
INSTALLERS[os_installer.NAME] = os_installer
def get_installer(package_installer):
"""Get default os installer from compass setting."""
return get_installer_by_name(setting.OS_INSTALLER, package_installer)
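# A registration sketch under the scheme above (hypothetical subclass and
# os version; real plugins such as cobbler register themselves on import):
#
#     class DummyInstaller(Installer):
#         NAME = 'dummy'
#         def __init__(self, package_installer):
#             self.package_installer_ = package_installer
#         def get_oses(self):
#             return ['CentOS-6.3']
#
#     register(DummyInstaller)
#     installer = get_installer_by_name('dummy', package_installer)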

@ -0,0 +1,87 @@
"""Module to provider interface for package installer.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.config_management.installers import installer
from compass.utils import setting_wrapper as setting
class Installer(installer.Installer):
"""Interface for package installer."""
NAME = 'package_installer'
def get_target_systems(self, oses):
"""virtual method to get available target_systems for each os.
:param oses: supported os versions.
:type oses: list of str
:returns: dict of os_version to target systems as list of str.
"""
return {}
def get_roles(self, target_system):
"""virtual method to get all roles of given target system.
:param target_system: target distributed system such as openstack.
:type target_system: str
:returns: dict of role to role description as str.
"""
return {}
def os_installer_config(self, config, **kwargs):
"""virtual method to get os installer related config.
:param config: os installer host configuration
:type config: dict
:returns: package related configuration for os installer.
"""
return {}
INSTALLERS = {}
def get_installer_by_name(name):
"""Get package installer by name.
:param name: package installer name.
:type name: str
:returns: instance of subclass of :class:`Installer`
:raises: KeyError
"""
if name not in INSTALLERS:
logging.error('installer name %s is not in package installers %s',
name, INSTALLERS)
raise KeyError('installer name %s is not in package INSTALLERS' % name)
package_installer = INSTALLERS[name]()
logging.debug('got package installer %s', package_installer)
return package_installer
def register(package_installer):
"""Register package installer.
:param package_installer: subclass of :class:`Installer`
:raises: KeyError
"""
if package_installer.NAME in INSTALLERS:
logging.error(
'package installer %s is already in INSTALLERS %s',
package_installer, INSTALLERS)
raise KeyError(
'package installer %s already registered' % package_installer)
logging.debug('register package installer: %s', package_installer)
INSTALLERS[package_installer.NAME] = package_installer
def get_installer():
"""get default package installer from comapss setting."""
return get_installer_by_name(setting.PACKAGE_INSTALLER)

@ -0,0 +1,310 @@
"""package instaler chef plugin.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@gmail.com>
"""
import fnmatch
import logging
from compass.utils import util
from compass.config_management.installers import package_installer
from compass.config_management.utils.config_translator import ConfigTranslator
from compass.config_management.utils.config_translator import KeyTranslator
from compass.config_management.utils import config_translator_callbacks
from compass.utils import setting_wrapper as setting
TO_CLUSTER_TRANSLATORS = {
'openstack': ConfigTranslator(
mapping={
'/security/console_credentials': [KeyTranslator(
translated_keys=['credential/identity/users/admin'],
)],
'/security/service_credentials': [KeyTranslator(
translated_keys=[
'/credential/identity/users/compute',
'/credential/identity/users/image',
'/credential/identity/users/metering',
'/credential/identity/users/network',
'/credential/identity/users/object-store',
'/credential/identity/users/volume',
'/credential/mysql/compute',
'/credential/mysql/dashboard',
'/credential/mysql/identity',
'/credential/mysql/image',
'/credential/mysql/metering',
'/credential/mysql/network',
'/credential/mysql/super',
'/credential/mysql/volume',
]
)],
'/networking/interfaces/management/nic': [KeyTranslator(
translated_keys=['/networking/control/interface'],
)],
'/networking/global/ntp_server': [KeyTranslator(
translated_keys=['/ntp/ntpserver']
)],
'/networking/interfaces/storage/nic': [KeyTranslator(
translated_keys=['/networking/storage/interface']
)],
'/networking/interfaces/public/nic': [KeyTranslator(
translated_keys=['/networking/public/interface']
)],
'/networking/interfaces/tenant/nic': [KeyTranslator(
translated_keys=['/networking/tenant/interface']
)],
}
),
}
FROM_CLUSTER_TRANSLATORS = {
'openstack': ConfigTranslator(
mapping={
'/role_assign_policy': [KeyTranslator(
translated_keys=['/role_assign_policy']
)],
'/dashboard_roles': [KeyTranslator(
translated_keys=['/dashboard_roles']
)],
}
),
}
TO_HOST_TRANSLATORS = {
'openstack': ConfigTranslator(
mapping={
'/networking/interfaces/management/ip': [KeyTranslator(
translated_keys=[
'/db/mysql/bind_address',
'/mq/rabbitmq/bind_address',
'/endpoints/compute/metadata/host',
'/endpoints/compute/novnc/host',
'/endpoints/compute/service/host',
'/endpoints/compute/xvpvnc/host',
'/endpoints/ec2/admin/host',
'/endpoints/ec2/service/host',
'/endpoints/identity/admin/host',
'/endpoints/identity/service/host',
'/endpoints/image/registry/host',
'/endpoints/image/service/host',
'/endpoints/metering/service/host',
'/endpoints/network/service/host',
'/endpoints/volume/service/host',
],
translated_value=config_translator_callbacks.get_value_if,
from_values={'condition': '/has_dashboard_roles'}
)],
}
),
}
class Installer(package_installer.Installer):
"""chef package installer."""
NAME = 'chef'
def __init__(self):
import chef
self.installer_url_ = setting.CHEF_INSTALLER_URL
self.global_databag_name_ = setting.CHEF_GLOBAL_DATABAG_NAME
self.api_ = chef.autoconfigure()
logging.debug('%s instance created', self)
def __repr__(self):
return '%s[name=%s,installer_url=%s,global_databag_name=%s]' % (
self.__class__.__name__, self.NAME,
self.installer_url_, self.global_databag_name_)
@classmethod
def _cluster_databag_name(cls, clusterid, target_system):
"""get cluster databag name"""
return '%s_%s' % (target_system, str(clusterid))
@classmethod
def _get_client_name(cls, hostname, clusterid, target_system):
"""get client name"""
return cls._get_node_name(hostname, clusterid, target_system)
@classmethod
def _get_node_name(cls, hostname, clusterid, target_system):
"""get node name"""
return '%s_%s_%s' % (hostname, target_system, clusterid)
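# For example, with hostname 'host1', clusterid 2 and target_system
# 'openstack', the databag item is named 'openstack_2' and the chef
# client and node are both named 'host1_openstack_2'.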
def os_installer_config(self, config, target_system, **kwargs):
"""get os installer config."""
clusterid = config['clusterid']
roles = config['roles']
return {
'%s_url' % self.NAME: self.installer_url_,
'run_list': ','.join(
['"role[%s]"' % role for role in roles if role]),
'cluster_databag': self._cluster_databag_name(
clusterid, target_system),
'chef_client_name': self._get_client_name(
config['hostname'], config['clusterid'],
target_system),
'chef_node_name': self._get_node_name(
config['hostname'], config['clusterid'],
target_system)
}
def get_target_systems(self, oses):
"""get target systems."""
from chef import DataBag
databags = DataBag.list(api=self.api_)
target_systems = {}
for os_version in oses:
target_systems[os_version] = []
for databag in databags:
target_system = databag
global_databag_item = self._get_global_databag_item(
self._get_databag(target_system))
support_oses = global_databag_item['support_oses']
for os_version in oses:
for support_os in support_oses:
if fnmatch.fnmatch(os_version, support_os):
target_systems[os_version].append(target_system)
break
return target_systems
def get_roles(self, target_system):
"""get supported roles."""
global_databag_item = self._get_global_databag_item(
self._get_databag(target_system))
return global_databag_item['all_roles']
def _get_databag(self, target_system):
"""get databag."""
from chef import DataBag
return DataBag(target_system, api=self.api_)
def _get_databag_item(self, bag, bag_item_name):
"""get databag item."""
from chef import DataBagItem
return DataBagItem(bag, bag_item_name, api=self.api_)
def _get_global_databag_item(self, bag):
"""get global databag item."""
return self._get_databag_item(
bag, self.global_databag_name_)
def _get_cluster_databag_item(self, bag, clusterid, target_system):
"""get cluster databag item."""
return self._get_databag_item(
bag, self._cluster_databag_name(clusterid, target_system))
def get_cluster_config(self, clusterid, target_system, **kwargs):
"""get cluster config."""
bag = self._get_databag(target_system)
global_bag_item = dict(self._get_global_databag_item(bag))
bag_item = dict(self._get_cluster_databag_item(
bag, clusterid, target_system))
util.merge_dict(bag_item, global_bag_item, False)
return FROM_CLUSTER_TRANSLATORS[target_system].translate(bag_item)
def clean_cluster_config(self, clusterid, config,
target_system, **kwargs):
"""clean cluster config."""
try:
bag = self._get_databag(target_system)
bag_item = self._get_cluster_databag_item(
bag, clusterid, target_system)
bag_item.delete()
logging.debug('databag item is removed for cluster %s '
'config %s target_system %s',
clusterid, config, target_system)
except Exception as error:
logging.debug('no databag item to delete for cluster %s '
'config %s target_system %s',
clusterid, config, target_system)
def update_cluster_config(self, clusterid, config,
target_system, **kwargs):
"""update cluster config."""
self.clean_cluster_config(clusterid, config,
target_system, **kwargs)
bag = self._get_databag(target_system)
global_bag_item = dict(self._get_global_databag_item(bag))
bag_item = self._get_cluster_databag_item(
bag, clusterid, target_system)
bag_item_dict = dict(bag_item)
util.merge_dict(bag_item_dict, global_bag_item, False)
translated_config = TO_CLUSTER_TRANSLATORS[target_system].translate(
config)
util.merge_dict(bag_item_dict, translated_config)
for key, value in bag_item_dict.items():
bag_item[key] = value
bag_item.save()
def _clean_client(self, hostid, config, target_system, **kwargs):
"""clean client"""
from chef import Client
try:
client = Client(
self._get_client_name(
config['hostname'], config['clusterid'], target_system),
api=self.api_)
client.delete()
logging.debug('client is removed for host %s '
'config %s target_system %s',
hostid, config, target_system)
except Exception as error:
logging.debug('no client to delete for host %s '
'config %s target_system %s',
hostid, config, target_system)
def _clean_node(self, hostid, config, target_system, **kwargs):
"""clean node"""
from chef import Node
try:
node = Node(
self._get_node_name(
config['hostname'], config['clusterid'], target_system),
api=self.api_
)
node.delete()
logging.debug('node is removed for host %s '
'config %s target_system %s',
hostid, config, target_system)
except Exception as error:
logging.debug('no node to delete for host %s '
'config %s target_system %s',
hostid, config, target_system)
def clean_host_config(self, hostid, config, target_system, **kwargs):
"""clean host config."""
self._clean_client(hostid, config, target_system, **kwargs)
self._clean_node(hostid, config, target_system, **kwargs)
def reinstall_host(self, hostid, config, target_system, **kwargs):
"""reinstall host."""
self._clean_client(hostid, config, target_system, **kwargs)
self._clean_node(hostid, config, target_system, **kwargs)
def update_host_config(self, hostid, config, target_system, **kwargs):
"""update host cnfig."""
self.clean_host_config(hostid, config,
target_system=target_system, **kwargs)
clusterid = config['clusterid']
bag = self._get_databag(target_system)
global_bag_item = dict(self._get_global_databag_item(bag))
bag_item = self._get_cluster_databag_item(bag, clusterid, target_system)
bag_item_dict = dict(bag_item)
util.merge_dict(bag_item_dict, global_bag_item, False)
translated_config = TO_HOST_TRANSLATORS[target_system].translate(
config)
util.merge_dict(bag_item_dict, translated_config)
for key, value in bag_item_dict.items():
bag_item[key] = value
bag_item.save()
package_installer.register(Installer)

@ -0,0 +1,250 @@
"""os installer cobbler plugin"""
import functools
import logging
import xmlrpclib
from compass.config_management.installers import os_installer
from compass.config_management.utils.config_translator import ConfigTranslator
from compass.config_management.utils.config_translator import KeyTranslator
from compass.config_management.utils import config_translator_callbacks
from compass.utils import setting_wrapper as setting
from compass.utils import util
TO_HOST_TRANSLATOR = ConfigTranslator(
mapping={
'/networking/global/gateway': [KeyTranslator(
translated_keys=['/gateway']
)],
'/networking/global/nameservers': [KeyTranslator(
translated_keys=['/name_servers']
)],
'/networking/global/search_path': [KeyTranslator(
translated_keys=['/name_servers_search']
)],
'/networking/global/proxy': [KeyTranslator(
translated_keys=['/ksmeta/proxy']
)],
'/networking/global/ignore_proxy': [KeyTranslator(
translated_keys=['/ksmeta/ignore_proxy']
)],
'/networking/global/ntp_server': [KeyTranslator(
translated_keys=['/ksmeta/ntp_server']
)],
'/security/server_credentials/username': [KeyTranslator(
translated_keys=['/ksmeta/username']
)],
'/security/server_credentials/password': [KeyTranslator(
translated_keys=['/ksmeta/password'],
translated_value=config_translator_callbacks.get_encrypted_value
)],
'/partition': [KeyTranslator(
translated_keys=['/ksmeta/partition']
)],
'/networking/interfaces/*/mac': [KeyTranslator(
translated_keys=[functools.partial(
config_translator_callbacks.get_key_from_pattern,
to_pattern='/modify_interface/macaddress-%(nic)s')],
from_keys={'nic': '../nic'},
override=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management')
)],
'/networking/interfaces/*/ip': [KeyTranslator(
translated_keys=[functools.partial(
config_translator_callbacks.get_key_from_pattern,
to_pattern='/modify_interface/ipaddress-%(nic)s')],
from_keys={'nic': '../nic'},
override=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management')
)],
'/networking/interfaces/*/netmask': [KeyTranslator(
translated_keys=[functools.partial(
config_translator_callbacks.get_key_from_pattern,
to_pattern='/modify_interface/netmask-%(nic)s')],
from_keys={'nic': '../nic'},
override=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management')
)],
'/networking/interfaces/*/dns_alias': [KeyTranslator(
translated_keys=[functools.partial(
config_translator_callbacks.get_key_from_pattern,
to_pattern='/modify_interface/dnsname-%(nic)s')],
from_keys={'nic': '../nic'},
override=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management')
)],
'/networking/interfaces/*/nic': [KeyTranslator(
translated_keys=[functools.partial(
config_translator_callbacks.get_key_from_pattern,
to_pattern='/modify_interface/static-%(nic)s')],
from_keys={'nic': '../nic'},
translated_value=True,
override=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management'),
), KeyTranslator(
translated_keys=[functools.partial(
config_translator_callbacks.get_key_from_pattern,
to_pattern='/modify_interface/management-%(nic)s')],
from_keys={'nic': '../nic'},
translated_value=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management'),
override=functools.partial(
config_translator_callbacks.override_path_has,
should_exist='management')
), KeyTranslator(
translated_keys=['/ksmeta/promisc_nics'],
from_values={'condition': '../promisc'},
translated_value=config_translator_callbacks.add_value,
override=True,
)],
}
)
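# Roughly, the translator above maps host config paths to cobbler system
# fields; e.g. a host config carrying
# {'networking': {'global': {'gateway': '10.0.0.1'}}} is expected to
# translate to {'gateway': '10.0.0.1'} in the system config (a sketch of
# the intent; exact behavior depends on ConfigTranslator, defined elsewhere).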
class Installer(os_installer.Installer):
"""cobbler installer"""
NAME = 'cobbler'
def __init__(self, package_installer):
# the connection is created when cobbler installer is initialized.
self.remote_ = xmlrpclib.Server(
setting.COBBLER_INSTALLER_URL,
allow_none=True)
self.token_ = self.remote_.login(
*setting.COBBLER_INSTALLER_TOKEN)
# cobbler tries to get package related config from package installer.
self.package_installer_ = package_installer
logging.debug('%s instance created', self)
def __repr__(self):
return '%s[name=%s,remote=%s,token=%s]' % (
self.__class__.__name__, self.NAME,
self.remote_, self.token_)
def get_oses(self):
"""get supported os versions.
:returns: list of os version.
.. note::
In cobbler, we treat profile name as the indicator
of os version. It is just a simple indicator
and not accurate.
"""
profiles = self.remote_.get_profiles()
oses = []
for profile in profiles:
oses.append(profile['name'])
return oses
def sync(self):
"""Sync cobbler to catch up the latest update config."""
logging.debug('sync %s', self)
self.remote_.sync(self.token_)
def _get_modify_system(self, profile, config, **kwargs):
"""get modified system config."""
system_config = {
'name': self._get_system_name(config),
'hostname': config['hostname'],
'profile': profile,
}
translated_config = TO_HOST_TRANSLATOR.translate(config)
util.merge_dict(system_config, translated_config)
ksmeta = system_config.setdefault('ksmeta', {})
package_config = {'tool': self.package_installer_.NAME}
util.merge_dict(
package_config,
self.package_installer_.os_installer_config(
config, **kwargs))
util.merge_dict(ksmeta, package_config)
return system_config
def _get_profile(self, os_version, **_kwargs):
"""get profile name."""
profile_found = self.remote_.find_profile(
{'name': os_version})
return profile_found[0]
def _get_system_name(self, config):
return '%s.%s' % (
config['hostname'], config['clusterid'])
def _get_system(self, config, create_if_not_exists=True):
"""get system reference id."""
sys_name = self._get_system_name(config)
try:
sys_id = self.remote_.get_system_handle(
sys_name, self.token_)
logging.debug('using existing system %s for %s',
sys_id, sys_name)
except Exception as e:
if create_if_not_exists:
sys_id = self.remote_.new_system(self.token_)
logging.debug('create new system %s for %s',
sys_id, sys_name)
else:
sys_id = None
return sys_id
def _clean_system(self, config):
"""clean system."""
sys_name = self._get_system_name(config)
try:
self.remote_.remove_system(sys_name, self.token_)
logging.debug('system %s is removed', sys_name)
except Exception as error:
logging.debug('no system %s found to remove', sys_name)
def _save_system(self, sys_id):
"""save system config update."""
self.remote_.save_system(sys_id, self.token_)
def _update_modify_system(self, sys_id, system_config):
"""update modify system"""
for key, value in system_config.items():
self.remote_.modify_system(
sys_id, key, value, self.token_)
def _netboot_enabled(self, sys_id):
"""enable netboot"""
self.remote_.modify_system(
sys_id, 'netboot_enabled', True, self.token_)
def clean_host_config(self, hostid, config, **kwargs):
"""clean host config."""
self._clean_system(config)
def reinstall_host(self, hostid, config, **kwargs):
"""reinstall host."""
sys_id = self._get_system(config, False)
if sys_id:
self._netboot_enabled(sys_id)
def update_host_config(self, hostid, config, **kwargs):
"""update host config."""
self.clean_host_config(hostid, config, **kwargs)
profile = self._get_profile(**kwargs)
sys_id = self._get_system(config)
system_config = self._get_modify_system(
profile, config, **kwargs)
logging.debug('%s system config to update: %s',
hostid, system_config)
self._update_modify_system(sys_id, system_config)
self._save_system(sys_id)
os_installer.register(Installer)

@ -0,0 +1,3 @@
from compass.config_management.providers.plugins import db_config_provider
from compass.config_management.providers.plugins import file_config_provider
from compass.config_management.providers.plugins import mix_config_provider

@ -0,0 +1,138 @@
"""Module to provide interface to read/update global/cluster/host config.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.utils import setting_wrapper as setting
class ConfigProvider(object):
"""Interface for config provider"""
NAME = 'config_provider'
def __init__(self):
raise NotImplementedError('%s is not implemented' % self)
def __repr__(self):
return '%s[%s]' % (self.__class__.__name__, self.NAME)
def get_global_config(self):
"""Virtual method to get global config.
:returns: global configuration as dict.
"""
return {}
def get_cluster_config(self, clusterid):
"""Virtual method to get cluster config.
:param clusterid: id of the cluster to get configuration.
:type clusterid: int
:returns: cluster configuration as dict.
"""
return {}
def get_host_config(self, hostid):
"""Virtual method to get host config.
:param hostid: id of the host to get configuration.
:type hostid: int
:returns: host configuration as dict.
"""
return {}
def get_host_configs(self, hostids):
"""Wrapper method to get hosts' configs.
:param hostids: ids of the hosts to get configuration.
:type hostids: list of int
:returns: dict mapping each hostid to host configuration as dict.
"""
configs = {}
for hostid in hostids:
configs[hostid] = self.get_host_config(hostid)
return configs
def update_global_config(self, config):
"""Virtual method to update global config.
:param config: global configuration.
:type config: dict
"""
pass
def update_cluster_config(self, clusterid, config):
"""Virtual method to update cluster config.
:param clusterid: the id of the cluster to update configuration.
:type clusterid: int
:param config: cluster configuration.
:type config: dict
"""
pass
def update_host_config(self, hostid, config):
"""Virtual method to update host config.
:param hostid: the id of the host to update configuration.
:type hostid: int
:param config: host configuration.
:type config: dict
"""
pass
def update_host_configs(self, configs):
"""Wrapper method to update host configs.
:param configs: dict mapping host id to host configuration as dict.
:type configs: dict of (int, dict)
"""
for hostid, config in configs.items():
self.update_host_config(hostid, config)
PROVIDERS = {}
def get_provider():
"""get default provider from compass setting."""
return get_provider_by_name(setting.PROVIDER_NAME)
def get_provider_by_name(name):
"""get provider by provider name.
:param name: provider name.
:type name: str
:returns: instance of subclass of :class:`ConfigProvider`.
:raises: KeyError
"""
if name not in PROVIDERS:
logging.error('provider name %s is not found in providers %s',
name, PROVIDERS)
raise KeyError('provider %s is not found in PROVIDERS' % name)
provider = PROVIDERS[name]()
logging.debug('got provider %s', provider)
return provider
def register_provider(provider):
"""register provider.
:param provider: class inherited from :class:`ConfigProvider`
:raises: KeyError
"""
if provider.NAME in PROVIDERS:
logging.error('provider %s name %s is already registered in %s',
provider, provider.NAME, PROVIDERS)
raise KeyError('provider %s is already registered in PROVIDERS' %
provider.NAME)
logging.debug('register provider %s', provider.NAME)
PROVIDERS[provider.NAME] = provider

@ -0,0 +1,61 @@
"""Module to provide ConfigProvider that reads config from db.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
from compass.config_management.providers import config_provider
from compass.config_management.utils import config_filter
from compass.db import database
from compass.db.model import Cluster, ClusterHost
CLUSTER_ALLOWS = ['*']
CLUSTER_DENIES = []
HOST_ALLOWS = ['*']
HOST_DENIES = []
class DBProvider(config_provider.ConfigProvider):
"""config provider which reads config from db.
.. note::
All method of this class should be called inside database
session scope.
"""
NAME = 'db'
CLUSTER_FILTER = config_filter.ConfigFilter(
CLUSTER_ALLOWS, CLUSTER_DENIES)
HOST_FILTER = config_filter.ConfigFilter(
HOST_ALLOWS, HOST_DENIES)
def __init__(self):
pass
def get_cluster_config(self, clusterid):
"""Get cluster config from db."""
session = database.current_session()
cluster = session.query(Cluster).filter_by(id=clusterid).first()
if cluster:
return cluster.config
else:
return {}
def get_host_config(self, hostid):
"""Get host config from db."""
session = database.current_session()
host = session.query(ClusterHost).filter_by(id=hostid).first()
if host:
return host.config
else:
return {}
def update_host_config(self, hostid, config):
"""Update hsot config to db."""
session = database.current_session()
host = session.query(ClusterHost).filter_by(id=hostid).first()
if not host:
return
filtered_config = self.HOST_FILTER.filter(config)
host.config = filtered_config
config_provider.register_provider(DBProvider)

@ -0,0 +1,83 @@
"""config provider read config from file.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import json
import logging
from compass.config_management.providers import config_provider
from compass.utils import setting_wrapper as setting
class FileProvider(config_provider.ConfigProvider):
"""config provider which reads config from file."""
NAME = 'file'
def __init__(self):
self.config_dir_ = setting.CONFIG_DIR
self.global_config_filename_ = setting.GLOBAL_CONFIG_FILENAME
self.config_file_format_ = setting.CONFIG_FILE_FORMAT
def _global_config_filename(self):
"""Get global config file name."""
return '%s/%s' % (
self.config_dir_, self.global_config_filename_)
def _config_format(self):
"""Get config file format."""
return self.config_file_format_
@classmethod
def _config_format_python(cls, config_format):
"""Check if config file is stored as python formatted."""
if config_format == 'python':
return True
return False
@classmethod
def _config_format_json(cls, config_format):
"""Check if config file is stored as json formatted."""
if config_format == 'json':
return True
return False
@classmethod
def _read_config_from_file(cls, filename, config_format):
"""read config from file."""
config_globals = {}
config_locals = {}
content = ''
try:
with open(filename) as file_handler:
content = file_handler.read()
except Exception as error:
logging.error('failed to read file %s', filename)
logging.exception(error)
return {}
if cls._config_format_python(config_format):
try:
exec(content, config_globals, config_locals)
except Exception as error:
logging.error('failed to exec %s', content)
logging.exception(error)
return {}
elif cls._config_format_json(config_format):
try:
config_locals = json.loads(content)
except Exception as error:
logging.error('failed to load json data %s', content)
logging.exception(error)
return {}
return config_locals
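# For example (an assumed file layout): with config_format 'python', a
# global config file containing the line
#     PROVIDER_NAME = 'mix'
# is exec'ed and yields {'PROVIDER_NAME': 'mix'}; with 'json', a file
# body '{"PROVIDER_NAME": "mix"}' is json-decoded to the same dict.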
def get_global_config(self):
"""read global config from file."""
return self._read_config_from_file(
self._global_config_filename(),
self._config_format())
config_provider.register_provider(FileProvider)

@ -0,0 +1,47 @@
"""Mix provider which read config from different other providers.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
from compass.config_management.providers import config_provider
from compass.utils import setting_wrapper as setting
class MixProvider(config_provider.ConfigProvider):
"""mix provider which read config from different other providers."""
NAME = 'mix'
def __init__(self):
self.global_provider_ = config_provider.get_provider_by_name(
setting.GLOBAL_CONFIG_PROVIDER)
self.cluster_provider_ = config_provider.get_provider_by_name(
setting.CLUSTER_CONFIG_PROVIDER)
self.host_provider_ = config_provider.get_provider_by_name(
setting.HOST_CONFIG_PROVIDER)
def get_global_config(self):
"""get global config."""
return self.global_provider_.get_global_config()
def get_cluster_config(self, clusterid):
"""get cluster config."""
return self.cluster_provider_.get_cluster_config(clusterid)
def get_host_config(self, hostid):
"""get host config."""
return self.host_provider_.get_host_config(hostid)
def update_global_config(self, config):
"""update global config."""
self.global_provider_.update_global_config(config)
def update_cluster_config(self, clusterid, config):
"""update cluster config."""
self.cluster_provider_.update_cluster_config(
clusterid, config)
def update_host_config(self, hostid, config):
"""update host config."""
self.host_provider_.update_host_config(hostid, config)
config_provider.register_provider(MixProvider)

@ -0,0 +1,93 @@
"""Module to filter configuration when upddating.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.config_management.utils import config_reference
class ConfigFilter(object):
"""config filter based on allows and denies rules"""
def __init__(self, allows=['*'], denies=[]):
"""Constructor
:param allows: glob path to copy to the filtered configuration.
:type allows: list of str
:param denies: glob path to remove from the filtered configuration.
:type denies: list of str
"""
self.allows_ = allows
self.denies_ = denies
self._is_valid()
def __repr__(self):
return '%s[allows=%s,denies=%s]' % (
self.__class__.__name__, self.allows_, self.denies_)
def _is_allows_valid(self):
"""Check if allows are valid"""
if not isinstance(self.allows_, list):
raise TypeError(
'allows type is %s but expected type is list: %s' % (
type(self.allows_), self.allows_))
for i, allow in enumerate(self.allows_):
if not isinstance(allow, str):
raise TypeError(
'allows[%s] type is %s but expected type is str: %s' % (
i, type(allow), allow))
def _is_denies_valid(self):
"""Check if denies are valid."""
if not isinstance(self.denies_, list):
raise TypeError(
'denies type is %s but expected type is list: %s' % (
type(self.denies_), self.denies_))
for i, deny in enumerate(self.denies_):
if not isinstance(deny, str):
raise TypeError(
'denies[%s] type is %s but expected type is str: %s' % (
i, type(deny), deny))
def _is_valid(self):
"""Check if config filter is valid."""
self._is_allows_valid()
self._is_denies_valid()
def filter(self, config):
"""Filter config
:param config: configuration to filter.
:type config: dict
:returns: filtered configuration as dict
"""
ref = config_reference.ConfigReference(config)
filtered_ref = config_reference.ConfigReference({})
self._filter_allows(ref, filtered_ref)
self._filter_denies(filtered_ref)
filtered_config = config_reference.get_clean_config(
filtered_ref.config)
logging.debug('filter config %s to %s', config, filtered_config)
return filtered_config
def _filter_allows(self, ref, filtered_ref):
"""copy ref config with the allows to filtered ref."""
for allow in self.allows_:
if not allow:
continue
for sub_key, sub_ref in ref.ref_items(allow):
filtered_ref.setdefault(sub_key).update(sub_ref.config)
def _filter_denies(self, filtered_ref):
"""remove config from filter_ref by denies."""
for deny in self.denies_:
if not deny:
continue
for ref_key in filtered_ref.ref_keys(deny):
del filtered_ref[ref_key]
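# A sketch of the intended behavior (exact glob semantics come from
# ConfigReference, defined elsewhere): with allows=['/networking/*'] and
# denies=['/networking/global'], filtering
# {'networking': {'global': {...}, 'interfaces': {...}}, 'security': {...}}
# keeps '/networking/interfaces' while dropping '/networking/global' and
# '/security'.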

@ -0,0 +1,335 @@
"""
Module to get configs from provider and installers and update
them to provider and installers.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import functools
import logging
from compass.config_management.installers import os_installer
from compass.config_management.installers import package_installer
from compass.config_management.providers import config_provider
from compass.config_management.utils import config_merger_callbacks
from compass.config_management.utils.config_merger import ConfigMapping
from compass.config_management.utils.config_merger import ConfigMerger
from compass.utils import util
CLUSTER_HOST_MERGER = ConfigMerger(
mappings=[
ConfigMapping(
path_list=['/networking/interfaces/*'],
from_upper_keys={'ip_start': 'ip_start', 'ip_end': 'ip_end'},
to_key='ip',
value=config_merger_callbacks.assign_ips
),
ConfigMapping(
path_list=['/role_assign_policy'],
from_upper_keys={
'policy_by_host_numbers': 'policy_by_host_numbers',
'default': 'default'},
to_key='/roles',
value=config_merger_callbacks.assign_roles_by_host_numbers,
override=config_merger_callbacks.override_if_empty
),
ConfigMapping(
path_list=['/dashboard_roles'],
from_lower_keys={'lower_values': '/roles'},
to_key='/has_dashboard_roles',
value=config_merger_callbacks.has_intersection
),
ConfigMapping(
path_list=[
'/networking/global',
'/networking/interfaces/*/netmask',
'/networking/interfaces/*/nic',
'/networking/interfaces/*/promisc',
'/security/*',
'/partition',
]
),
ConfigMapping(
path_list=['/networking/interfaces/*'],
from_upper_keys={'pattern': 'dns_pattern',
'clusterid': '/clusterid',
'search_path': '/networking/global/search_path'},
from_lower_keys={'hostname': '/hostname'},
to_key='dns_alias',
value=functools.partial(config_merger_callbacks.assign_from_pattern,
upper_keys=['search_path', 'clusterid'],
lower_keys=['hostname'])
),
ConfigMapping(
path_list=['/networking/global'],
from_upper_keys={'default': 'default_no_proxy'},
from_lower_keys={'hostnames': '/hostname',
'ips': '/networking/interfaces/management/ip'},
to_key='ignore_proxy',
value=config_merger_callbacks.assign_noproxy
)])
class ConfigManager(object):
"""
Class to get global/cluster/host configs from provider,
os installer, package installer, process them, and
update them to provider, os installer, package installer.
"""
def __init__(self):
self.config_provider_ = config_provider.get_provider()
logging.debug('got config provider: %s', self.config_provider_)
self.package_installer_ = package_installer.get_installer()
logging.debug('got package installer: %s', self.package_installer_)
self.os_installer_ = os_installer.get_installer(
self.package_installer_)
logging.debug('got os installer: %s', self.os_installer_)
def get_adapters(self):
"""Get adapter information from os installer and package installer.
:returns: list of adapter information.
.. note::
For each adapter, the information is of the form
{'name': '...', 'os': '...', 'target_system': '...'}
"""
oses = self.os_installer_.get_oses()
target_systems_per_os = self.package_installer_.get_target_systems(
oses)
adapters = []
for os_version, target_systems in target_systems_per_os.items():
for target_system in target_systems:
adapters.append({
'name': '%s/%s' % (os_version, target_system),
'os': os_version,
'target_system': target_system})
logging.debug('got adapters: %s', adapters)
return adapters
def get_roles(self, target_system):
"""Get all roles of the target system from package installer.
:param target_system: the target distributed system to deploy.
:type target_system: str
:returns: list of role information.
.. note::
For each role, the information is of the form:
{'name': '...', 'description': '...', 'target_system': '...'}
"""
roles = self.package_installer_.get_roles(target_system)
return [
{
'name': role,
'description': description,
'target_system': target_system
} for role, description in roles.items()
]
def get_global_config(self, os_version, target_system):
"""Get global config."""
config = self.config_provider_.get_global_config()
logging.debug('got global provider config from %s: %s',
self.config_provider_, config)
os_config = self.os_installer_.get_global_config(
os_version=os_version, target_system=target_system)
logging.debug('got global os config from %s: %s',
self.os_installer_, os_config)
package_config = self.package_installer_.get_global_config(
os_version=os_version,
target_system=target_system)
logging.debug('got global package config from %s: %s',
self.package_installer_, package_config)
util.merge_dict(config, os_config)
util.merge_dict(config, package_config)
return config
def update_global_config(self, config, os_version, target_system):
"""update global config."""
logging.debug('update global config: %s', config)
self.config_provider_.update_global_config(config)
self.os_installer_.update_global_config(
config, os_version=os_version, target_system=target_system)
self.package_installer_.update_global_config(
config, os_version=os_version, target_system=target_system)
def get_cluster_config(self, clusterid, os_version, target_system):
"""get cluster config."""
config = self.config_provider_.get_cluster_config(clusterid)
logging.debug('got cluster %s config from %s: %s',
clusterid, self.config_provider_, config)
os_config = self.os_installer_.get_cluster_config(
clusterid, os_version=os_version,
target_system=target_system)
logging.debug('got cluster %s config from %s: %s',
clusterid, self.os_installer_, os_config)
package_config = self.package_installer_.get_cluster_config(
clusterid, os_version=os_version,
target_system=target_system)
logging.debug('got cluster %s config from %s: %s',
clusterid, self.package_installer_, package_config)
util.merge_dict(config, os_config)
util.merge_dict(config, package_config)
return config
def clean_cluster_config(self, clusterid, os_version, target_system):
"""clean cluster config."""
config = self.config_provider_.get_cluster_config(clusterid)
logging.debug('got cluster %s config from %s: %s',
clusterid, self.config_provider_, config)
self.os_installer_.clean_cluster_config(
clusterid, config, os_version=os_version,
target_system=target_system)
logging.debug('clean cluster %s config in %s',
clusterid, self.os_installer_)
self.package_installer_.clean_cluster_config(
clusterid, config, os_version=os_version,
target_system=target_system)
logging.debug('clean cluster %s config in %s',
clusterid, self.package_installer_)
def update_cluster_config(self, clusterid, config,
os_version, target_system):
"""update cluster config."""
logging.debug('update cluster %s config: %s', clusterid, config)
self.config_provider_.update_cluster_config(clusterid, config)
self.os_installer_.update_cluster_config(
clusterid, config, os_version=os_version,
target_system=target_system)
self.package_installer_.update_cluster_config(
clusterid, config, os_version=os_version,
target_system=target_system)
def get_host_config(self, hostid, os_version, target_system):
"""get host config."""
config = self.config_provider_.get_host_config(hostid)
logging.debug('got host %s config from %s: %s',
hostid, self.config_provider_, config)
os_config = self.os_installer_.get_host_config(
hostid, os_version=os_version,
target_system=target_system)
logging.debug('got host %s config from %s: %s',
hostid, self.os_installer_, os_config)
package_config = self.package_installer_.get_host_config(
hostid, os_version=os_version,
target_system=target_system)
logging.debug('got host %s config from %s: %s',
hostid, self.package_installer_, package_config)
util.merge_dict(config, os_config)
util.merge_dict(config, package_config)
return config
def get_host_configs(self, hostids, os_version, target_system):
"""get hosts' configs."""
host_configs = {}
for hostid in hostids:
host_configs[hostid] = self.get_host_config(
hostid, os_version, target_system)
return host_configs
def clean_host_config(self, hostid, os_version, target_system):
"""clean host config."""
config = self.config_provider_.get_host_config(hostid)
logging.debug('got host %s config from %s: %s',
hostid, self.config_provider_, config)
self.os_installer_.clean_host_config(
hostid, config, os_version=os_version,
target_system=target_system)
logging.debug('clean host %s config in %s',
hostid, self.os_installer_)
self.package_installer_.clean_host_config(
hostid, config, os_version=os_version,
target_system=target_system)
logging.debug('clean host %s config in %s',
hostid, self.package_installer_)
def clean_host_configs(self, hostids, os_version, target_system):
"""clean hosts' configs."""
for hostid in hostids:
self.clean_host_config(hostid, os_version, target_system)
def reinstall_host(self, hostid, os_version, target_system):
"""reinstall host."""
config = self.config_provider_.get_host_config(hostid)
logging.debug('got host %s config from %s: %s',
hostid, self.config_provider_, config)
self.os_installer_.reinstall_host(
hostid, config, os_version=os_version,
target_system=target_system)
logging.debug('reinstall host %s in %s',
hostid, self.os_installer_)
self.package_installer_.reinstall_host(
hostid, config, os_version=os_version,
target_system=target_system)
logging.debug('clean host %s in %s',
hostid, self.package_installer_)
def reinstall_hosts(self, hostids, os_version, target_system):
for hostid in hostids:
self.reinstall_host(hostid, os_version, target_system)
def update_host_config(self, hostid, config, os_version, target_system):
"""update host config."""
logging.debug('update host %s config: %s', hostid, config)
self.config_provider_.update_host_config(hostid, config)
self.os_installer_.update_host_config(
hostid, config, os_version=os_version,
target_system=target_system)
self.package_installer_.update_host_config(
hostid, config, os_version=os_version,
target_system=target_system)
def update_host_configs(self, host_configs, os_version, target_system):
"""update host configs."""
for hostid, host_config in host_configs.items():
self.update_host_config(
hostid, host_config, os_version, target_system)
def update_cluster_and_host_configs(self,
clusterid,
hostids,
update_hostids,
os_version,
target_system):
"""update cluster/host configs."""
logging.debug('update cluster %s with all hosts %s and update: %s',
clusterid, hostids, update_hostids)
global_config = self.get_global_config(os_version, target_system)
self.update_global_config(global_config, os_version=os_version,
target_system=target_system)
cluster_config = self.get_cluster_config(
clusterid, os_version=os_version, target_system=target_system)
util.merge_dict(cluster_config, global_config, False)
self.update_cluster_config(
clusterid, cluster_config, os_version=os_version,
target_system=target_system)
host_configs = self.get_host_configs(
hostids, os_version=os_version,
target_system=target_system)
CLUSTER_HOST_MERGER.merge(cluster_config, host_configs)
update_host_configs = dict(
[(hostid, host_config)
for hostid, host_config in host_configs.items()
if hostid in update_hostids])
self.update_host_configs(
update_host_configs, os_version=os_version,
target_system=target_system)
def sync(self):
"""sync os installer and package installer."""
self.os_installer_.sync()
self.package_installer_.sync()
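# Editor's note (not part of the original commit): the typical call flow
# for the methods above is update_cluster_and_host_configs() first, which
# pulls the global, cluster and host configs, merges them through
# CLUSTER_HOST_MERGER, and pushes the results to the config provider and
# both installers; sync() is then called so the os and package installers
# persist the new state.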

@ -0,0 +1,285 @@
"""Module to set the hosts configs from cluster config.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from copy import deepcopy
from compass.config_management.utils import config_reference
from compass.utils import util
class ConfigMapping(object):
"""Class to merge cluster config ref to host config ref by path list."""
def __init__(self, path_list, from_upper_keys={},
from_lower_keys={}, to_key='.',
override=False, override_conditions={},
value=None):
"""Constructor
:param path_list: list of path to merge from cluster ref to host refs
:type path_list: list of str
:param from_upper_keys: kwargs from cluster ref for value callback.
:type from_upper_keys: dict of kwargs name to path in cluster ref
:param from_lower_keys: kwargs from host refs for value callback.
:type from_lower_keys: dict of kwargs name to path in host refs.
:param to_key: the path in host refs to be merged to.
:type to_key: str
:param override: if the path in host ref can be overridden.
:type override: callback or bool
:param override_conditions: kwargs from host ref for override callback
:type override_conditions: dict of kwargs name to path in host ref
:param value: the value to be set in host refs.
:type value: callback or any type
"""
self.path_list_ = path_list
self.from_upper_keys_ = from_upper_keys
self.from_lower_keys_ = from_lower_keys
self.to_key_ = to_key
self.override_ = override
self.override_conditions_ = override_conditions
self.value_ = value
def __repr__(self):
return (
'%s[path_list=%s,from_upper_keys=%s,'
'from_lower_keys=%s,to_key=%s,override=%s,'
'override_conditions=%s,value=%s]'
) % (
self.__class__.__name__,
self.path_list_, self.from_upper_keys_,
self.from_lower_keys_, self.to_key_,
self.override_, self.override_conditions_,
self.value_)
def _is_valid_path_list(self):
"""Check path_list are valid."""
for i, path in enumerate(self.path_list_):
if not isinstance(path, str):
raise TypeError(
'path_list[%d] type is %s while '
'expected type is str: %s' % (
i, type(path), path))
def _is_valid_from_upper_keys(self):
"""Check from_upper_keys are valid."""
for mapping_key, from_upper_key in self.from_upper_keys_.items():
if not isinstance(from_upper_key, str):
raise TypeError(
'from_upper_keys[%s] type is %s '
'while expected type is str: %s' % (
mapping_key, type(from_upper_key), from_upper_key))
if '*' in from_upper_key:
raise KeyError(
'from_upper_keys[%s] %s contains *' % (
mapping_key, from_upper_key))
def _is_valid_from_lower_keys(self):
"""Check from_lower_keys are valid."""
for mapping_key, from_lower_key in self.from_lower_keys_.items():
if not isinstance(from_lower_key, str):
raise TypeError(
'from_lower_keys[%s] type '
'is %s while expected type is str: %s' % (
mapping_key, type(from_lower_key), from_lower_key))
if '*' in from_lower_key:
raise KeyError(
'from_lower_keys[%s] %s contains *' % (
mapping_key, from_lower_key))
def _is_valid_from_keys(self):
"""Check from keys are valid."""
self._is_valid_from_upper_keys()
self._is_valid_from_lower_keys()
upper_keys = set(self.from_upper_keys_.keys())
lower_keys = set(self.from_lower_keys_.keys())
intersection = upper_keys.intersection(lower_keys)
if intersection:
raise KeyError(
'there is intersection between from_upper_keys %s'
' and from_lower_keys %s: %s' % (
upper_keys, lower_keys, intersection))
def _is_valid_to_key(self):
"""Check to_key is valid."""
if '*' in self.to_key_:
raise KeyError('to_key %s contains *' % self.to_key_)
def _is_valid_override_conditions(self):
"""Check override conditions are valid."""
override_items = self.override_conditions_.items()
for mapping_key, override_condition in override_items:
if not util.is_instance(override_condition, [str, unicode]):
raise TypeError(
'override_conditions[%s] type is %s '
'while expected type is [str, unicode]: %s' % (
mapping_key, type(override_condition),
override_condition))
if '*' in override_condition:
raise KeyError(
'override_conditions[%s] %s contains *' % (
mapping_key, override_condition))
def _is_valid(self):
"""Check ConfigMapping instance is valid."""
self._is_valid_path_list()
self._is_valid_from_keys()
self._is_valid_to_key()
self._is_valid_override_conditions()
def _get_upper_sub_refs(self, upper_ref):
"""get sub_refs from upper_ref."""
upper_refs = []
for path in self.path_list_:
upper_refs.extend(upper_ref.ref_items(path))
return upper_refs
def _get_mapping_from_upper_keys(self, ref_key, sub_ref):
"""Get upper config mapping from from_upper_keys."""
sub_configs = {}
for mapping_key, from_upper_key in self.from_upper_keys_.items():
if from_upper_key in sub_ref:
sub_configs[mapping_key] = sub_ref[from_upper_key]
else:
logging.info('%s ignore from_upper_key %s in %s',
self, from_upper_key, ref_key)
return sub_configs
def _get_mapping_from_lower_keys(self, ref_key, lower_sub_refs):
"""Get lower config mapping from from_lower_keys."""
sub_configs = {}
for mapping_key, from_lower_key in self.from_lower_keys_.items():
sub_configs[mapping_key] = {}
for lower_key, lower_sub_ref in lower_sub_refs.items():
for mapping_key, from_lower_key in self.from_lower_keys_.items():
if from_lower_key in lower_sub_ref:
sub_configs[mapping_key][lower_key] = (
lower_sub_ref[from_lower_key])
else:
logging.error(
'%s ignore from_lower_key %s in %s lower_key %s',
self, from_lower_key, ref_key, lower_key)
return sub_configs
def _get_values(self, ref_key, sub_ref, lower_sub_refs, sub_configs):
"""Get values to set to lower configs."""
if self.value_ is None:
lower_values = {}
for lower_key in lower_sub_refs.keys():
lower_values[lower_key] = deepcopy(sub_ref.config)
return lower_values
if not callable(self.value_):
lower_values = {}
for lower_key in lower_sub_refs.keys():
lower_values[lower_key] = deepcopy(self.value_)
return lower_values
return self.value_(sub_ref, ref_key, lower_sub_refs,
self.to_key_, **sub_configs)
def _get_override(self, ref_key, sub_ref):
"""Get override from ref_key, ref from ref_key."""
if not callable(self.override_):
return bool(self.override_)
override_condition_configs = {}
override_items = self.override_conditions_.items()
for mapping_key, override_condition in override_items:
if override_condition in sub_ref:
override_condition_configs[mapping_key] = \
sub_ref[override_condition]
else:
logging.info('%s no override condition %s in %s',
self, override_condition, ref_key)
return self.override_(sub_ref, ref_key,
**override_condition_configs)
def merge(self, upper_ref, lower_refs):
"""merge upper config to lower configs."""
upper_sub_refs = self._get_upper_sub_refs(upper_ref)
for ref_key, sub_ref in upper_sub_refs:
sub_configs = self._get_mapping_from_upper_keys(ref_key, sub_ref)
lower_sub_refs = {}
for lower_key, lower_ref in lower_refs.items():
lower_sub_refs[lower_key] = lower_ref.setdefault(ref_key)
lower_sub_configs = self._get_mapping_from_lower_keys(
ref_key, lower_sub_refs)
util.merge_dict(sub_configs, lower_sub_configs)
values = self._get_values(
ref_key, sub_ref, lower_sub_refs, sub_configs)
logging.debug('%s set values %s to %s',
ref_key, self.to_key_, values)
for lower_key, lower_sub_ref in lower_sub_refs.items():
if lower_key not in values:
logging.error('no key %s in %s', lower_key, values)
continue
value = values[lower_key]
lower_to_ref = lower_sub_ref.setdefault(self.to_key_)
override = self._get_override(self.to_key_, lower_to_ref)
lower_to_ref.update(value, override)
class ConfigMerger(object):
"""Class to merge clsuter config to host configs."""
def __init__(self, mappings):
"""Constructor
:param mappings: list of :class:`ConfigMapping` instance
"""
self.mappings_ = mappings
self._is_valid()
def __repr__(self):
return '%s[mappings=%s]' % (self.__class__.__name__, self.mappings_)
def _is_valid(self):
"""Check ConfigMerger instance is valid."""
if not isinstance(self.mappings_, list):
raise TypeError(
'%s mapping type is %s while expect type is list: %s' % (
self.__class__.__name__, type(self.mappings_),
self.mappings_))
def merge(self, upper_config, lower_configs):
"""Merge cluster config to host configs.
:param upper_config: cluster configuration to merge from.
:type upper_config: dict
:param lower_configs: host configurations to merge to.
:type lower_configs: dict of host id to host config as dict
"""
upper_ref = config_reference.ConfigReference(upper_config)
lower_refs = {}
for lower_key, lower_config in lower_configs.items():
lower_refs[lower_key] = config_reference.ConfigReference(
lower_config)
for mapping in self.mappings_:
logging.debug('apply merging from the rule %s', mapping)
mapping.merge(upper_ref, lower_refs)
for lower_key, lower_config in lower_configs.items():
lower_configs[lower_key] = config_reference.get_clean_config(
lower_config)
logging.debug('merged upper config\n%s\nto lower configs:\n%s',
upper_config, lower_configs)
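# Editor's sketch (not in the original commit): a minimal, self-contained
# use of the ConfigMapping/ConfigMerger classes above. It copies one
# cluster-level subtree down to every host that does not already have it;
# the keys and addresses below are invented for illustration only.
if __name__ == '__main__':
    merger = ConfigMerger([
        ConfigMapping(path_list=['/networking/global'])])
    cluster_config = {'networking': {'global': {'gateway': '10.0.0.1'}}}
    host_configs = {
        1: {'networking': {'global': {'gateway': '10.0.0.254'}}},
        2: {},
    }
    merger.merge(cluster_config, host_configs)
    # host 2 inherits the cluster gateway; host 1 keeps its own value
    # because override defaults to False.
    print(host_configs)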

@ -0,0 +1,372 @@
"""ConfigMerger Callbacks module.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import itertools
import logging
from copy import deepcopy
from netaddr import IPSet, IPRange
from compass.utils import util
def _get_role_bundle_mapping(roles, bundles):
"""Get role bundles.
"""
bundle_mapping = {}
for role in roles:
bundle_mapping[role] = role
for bundle in bundles:
bundled_role = None
for role in bundle:
if role not in roles:
continue
while role != bundle_mapping[role]:
role = bundle_mapping[role]
if not bundled_role:
bundled_role = role
else:
bundle_mapping[role] = bundled_role
role_bundles = {}
for role in roles:
bundled_role = role
while bundled_role != bundle_mapping[bundled_role]:
bundled_role = bundle_mapping[bundled_role]
bundle_mapping[role] = bundled_role
role_bundles.setdefault(bundled_role, set()).add(role)
logging.debug('bundle_mapping is %s', bundle_mapping)
logging.debug('role_bundles is %s', role_bundles)
return bundle_mapping, role_bundles
def _get_bundled_exclusives(exclusives, bundle_mapping):
"""Get bundled exclusives."""
bundled_exclusives = set()
for exclusive in exclusives:
if exclusive not in bundle_mapping:
logging.error(
'exclusive role %s not found in roles %s',
exclusive, bundle_mapping.keys())
continue
bundled_exclusives.add(bundle_mapping[exclusive])
logging.debug('bundled exclusives: %s', bundled_exclusives)
return bundled_exclusives
def _get_max(lhs, rhs):
"""Get max value"""
if lhs < 0:
return lhs
if rhs < 0:
return rhs
return max(lhs, rhs)
def _get_min(lhs, rhs):
"""Get min value"""
if lhs < 0:
return rhs
if rhs < 0:
return lhs
return min(lhs, rhs)
def _get_bundled_max_mins(maxs, mins, default_max, default_min, role_bundles):
"""Get max and mins for each bundled role."""
bundled_maxs = {}
bundled_mins = {}
default_min = max(default_min, 0)
default_max = _get_max(default_max, default_min)
for bundled_role, roles in role_bundles.items():
bundled_min = None
bundled_max = None
for role in roles:
new_max = maxs.get(role, default_max)
new_min = mins.get(role, default_min)
if bundled_min is None:
bundled_min = new_min
else:
bundled_min = min(bundled_min, max(new_min, 0))
if bundled_max is None:
bundled_max = new_max
else:
bundled_max = _get_min(
bundled_max, _get_max(new_max, bundled_min))
if bundled_min is None:
bundled_min = default_min
if bundled_max is None:
bundled_max = max(default_max, bundled_min)
bundled_mins[bundled_role] = bundled_min
bundled_maxs[bundled_role] = bundled_max
logging.debug('bundled_maxs are %s', bundled_maxs)
logging.debug('bundled_mins are %s', bundled_mins)
return bundled_maxs, bundled_mins
def _update_assigned_roles(lower_refs, to_key, bundle_mapping,
role_bundles, bundled_maxs, bundled_mins):
"""
Update bundled maxs/mins and get assign roles to each host,
unassigned host.
"""
lower_roles = {}
unassigned_hosts = []
for lower_key, lower_ref in lower_refs.items():
roles_per_host = lower_ref.get(to_key, [])
roles = set()
bundled_roles = set()
for role in roles_per_host:
if role in bundle_mapping:
bundled_role = bundle_mapping[role]
bundled_roles.add(bundled_role)
roles |= set(role_bundles[bundled_role])
for bundled_role in bundled_roles:
bundled_maxs[bundled_role] -= 1
bundled_mins[bundled_role] -= 1
lower_roles[lower_key] = list(roles)
if not roles:
unassigned_hosts.append(lower_key)
logging.debug('assigned roles: %s', lower_roles)
logging.debug('unassigned_hosts: %s', unassigned_hosts)
logging.debug('bundled maxs for unassigned hosts: %s', bundled_maxs)
logging.debug('bundled mins for unassigned hosts: %s', bundled_mins)
return lower_roles, unassigned_hosts
def _update_exclusive_roles(bundled_exclusives, lower_roles,
unassigned_hosts, bundled_maxs,
bundled_mins, role_bundles):
"""Assign exclusive roles to hosts."""
for bundled_exclusive in bundled_exclusives:
while bundled_mins[bundled_exclusive] > 0:
if not unassigned_hosts:
raise ValueError('not enough unassigned hosts for exclusive %s'
% bundled_exclusive)
host = unassigned_hosts.pop(0)
bundled_mins[bundled_exclusive] -= 1
bundled_maxs[bundled_exclusive] -= 1
lower_roles[host] = list(role_bundles[bundled_exclusive])
del role_bundles[bundled_exclusive]
logging.debug('assigned roles after assigning exclusives: %s', lower_roles)
logging.debug('unassigned_hosts after assigning exclusives: %s',
unassigned_hosts)
logging.debug('bundled maxs after assigning exclusives: %s', bundled_maxs)
logging.debug('bundled mins after assigning exclusives: %s', bundled_mins)
def _assign_roles_by_mins(role_bundles, lower_roles, unassigned_hosts,
bundled_maxs, bundled_mins):
"""Assign roles to hosts by min restriction."""
available_hosts = deepcopy(unassigned_hosts)
for bundled_role, roles in role_bundles.items():
while bundled_mins[bundled_role] > 0:
if not available_hosts:
raise ValueError('not enough available hosts to assign to %s'
% bundled_role)
host = available_hosts.pop(0)
available_hosts.append(host)
if host in unassigned_hosts:
unassigned_hosts.remove(host)
bundled_mins[bundled_role] -= 1
bundled_maxs[bundled_role] -= 1
lower_roles[host] = list(roles)
logging.debug('assigned roles after assigning mins: %s', lower_roles)
logging.debug('unassigned_hosts after assigning mins: %s',
unassigned_hosts)
logging.debug('bundled maxs after assigning mins: %s', bundled_maxs)
def _assign_roles_by_maxs(role_bundles, lower_roles, unassigned_hosts,
bundled_maxs):
"""Assign roles to host by max restriction."""
available_lists = []
default_roles = []
for bundled_role in role_bundles.keys():
if bundled_maxs[bundled_role] > 0:
available_lists.append(
[bundled_role]*bundled_maxs[bundled_role])
else:
default_roles.append(bundled_role)
available_list = util.flat_lists_with_possibility(available_lists)
for bundled_role in available_list:
if not unassigned_hosts:
break
host = unassigned_hosts.pop(0)
lower_roles[host] = list(role_bundles[bundled_role])
logging.debug('assigned roles after assigning max: %s', lower_roles)
logging.debug('unassigned_hosts after assigning maxs: %s',
unassigned_hosts)
if default_roles:
default_iter = itertools.cycle(default_roles)
while unassigned_hosts:
host = unassigned_hosts.pop(0)
bundled_role = default_iter.next()
lower_roles[host] = list(role_bundles[bundled_role])
logging.debug('assigned roles are %s', lower_roles)
logging.debug('unassigned hosts: %s', unassigned_hosts)
def assign_roles(_upper_ref, _from_key, lower_refs, to_key,
roles=[], maxs={}, mins={}, default_max=-1,
default_min=0, exclusives=[], bundles=[], **_kwargs):
"""Assign roles to lower configs."""
logging.debug(
'assign_roles with roles=%s, maxs=%s, mins=%s, '
'default_max=%s, default_min=%s, exclusives=%s, bundles=%s',
roles, maxs, mins, default_max,
default_min, exclusives, bundles)
bundle_mapping, role_bundles = _get_role_bundle_mapping(roles, bundles)
bundled_exclusives = _get_bundled_exclusives(exclusives, bundle_mapping)
bundled_maxs, bundled_mins = _get_bundled_max_mins(
maxs, mins, default_max, default_min, role_bundles)
lower_roles, unassigned_hosts = _update_assigned_roles(
lower_refs, to_key, bundle_mapping, role_bundles,
bundled_maxs, bundled_mins)
_update_exclusive_roles(bundled_exclusives, lower_roles, unassigned_hosts,
bundled_maxs, bundled_mins, role_bundles)
_assign_roles_by_mins(
role_bundles, lower_roles, unassigned_hosts,
bundled_maxs, bundled_mins)
_assign_roles_by_maxs(
role_bundles, lower_roles, unassigned_hosts,
bundled_maxs)
return lower_roles
def assign_roles_by_host_numbers(upper_ref, from_key, lower_refs, to_key,
policy_by_host_numbers={}, default={},
**_kwargs):
"""Assign roles by role assign policy."""
host_numbers = str(len(lower_refs))
policy_kwargs = deepcopy(default)
if host_numbers in policy_by_host_numbers:
util.merge_dict(policy_kwargs, policy_by_host_numbers[host_numbers])
return assign_roles(upper_ref, from_key, lower_refs,
to_key, **policy_kwargs)
def has_intersection(upper_ref, from_key, _lower_refs, _to_key,
lower_values={}, **_kwargs):
"""Check if upper config has intersection with lower values."""
has = {}
for lower_key, lower_value in lower_values.items():
values = set(lower_value)
intersection = values.intersection(set(upper_ref.config))
logging.debug(
'lower_key %s values %s intersection '
'with from_key %s value %s: %s',
lower_key, values, from_key, upper_ref.config, intersection)
if intersection:
has[lower_key] = True
else:
has[lower_key] = False
return has
def assign_ips(_upper_ref, _from_key, lower_refs, to_key,
ip_start='192.168.0.1', ip_end='192.168.0.254',
**_kwargs):
"""Assign ips to hosts' configurations."""
if not ip_start or not ip_end:
return {}
host_ips = {}
unassigned_hosts = []
ips = IPSet(IPRange(ip_start, ip_end))
for lower_key, lower_ref in lower_refs.items():
ip_addr = lower_ref.get(to_key, '')
if ip_addr:
host_ips[lower_key] = ip_addr
ips.remove(ip_addr)
else:
unassigned_hosts.append(lower_key)
for ip_addr in ips:
if not unassigned_hosts:
break
host = unassigned_hosts.pop(0)
host_ips[host] = str(ip_addr)
logging.debug('assign %s: %s', to_key, host_ips)
return host_ips
def assign_from_pattern(_upper_ref, _from_key, lower_refs, to_key,
upper_keys=[], lower_keys=[], pattern='', **kwargs):
"""assign to_key by pattern."""
host_values = {}
upper_configs = {}
for key in upper_keys:
upper_configs[key] = kwargs[key]
for lower_key, _ in lower_refs.items():
group = deepcopy(upper_configs)
for key in lower_keys:
group[key] = kwargs[key][lower_key]
try:
host_values[lower_key] = pattern % group
except Exception as error:
logging.error('failed to assign %s[%s] = %s %% %s',
lower_key, to_key, pattern, group)
raise error
return host_values
def assign_noproxy(_upper_ref, _from_key, lower_refs,
to_key, default=[], hostnames={}, ips={},
**_kwargs):
"""Assign no proxy to hosts."""
no_proxy_list = deepcopy(default)
for _, hostname in hostnames.items():
no_proxy_list.append(hostname)
for _, ip_addr in ips.items():
no_proxy_list.append(ip_addr)
no_proxy = ','.join(no_proxy_list)
host_no_proxy = {}
for lower_key, _ in lower_refs.items():
host_no_proxy[lower_key] = no_proxy
return host_no_proxy
def override_if_empty(lower_ref, _ref_key):
"""Override if the configuration value is empty."""
if not lower_ref.config:
return True
return False
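# Editor's sketch (not in the original commit): exercising assign_ips
# above with a minimal stand-in for the host config references; only the
# get() method is used by assign_ips, and the hosts and addresses are
# invented for illustration only.
if __name__ == '__main__':
    class _FakeRef(object):
        """Tiny stand-in exposing the get() method assign_ips needs."""
        def __init__(self, config):
            self.config = config

        def get(self, key, default=None):
            return self.config.get(key, default)

    refs = {1: _FakeRef({}), 2: _FakeRef({'ip': '192.168.0.2'})}
    # host 2 keeps its preset ip; host 1 gets the first free address.
    print(assign_ips(None, None, refs, 'ip',
                     ip_start='192.168.0.1', ip_end='192.168.0.3'))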

@ -0,0 +1,294 @@
"""Module to provide util class to access item in nested dict easily.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import fnmatch
import os.path
from copy import deepcopy
from compass.utils import util
def get_clean_config(config):
"""Get cleaned config from original config.
:param config: configuration to be cleaned.
:returns: clean configuration without keys referring to None or empty dicts.
"""
if config is None:
return None
if isinstance(config, dict):
extracted_config = {}
for key, value in config.items():
sub_config = get_clean_config(value)
if sub_config is not None:
extracted_config[key] = sub_config
if not extracted_config:
return None
return extracted_config
else:
return config
class ConfigReference(object):
"""Helper class to acess item in nested dict."""
def __init__(self, config, parent=None, parent_key=None):
"""Construct ConfigReference from configuration.
:param config: configuration to build the ConfigReference instance.
:type config: dict
:param parent: parent ConfigReference instance.
:param parent_key: the key refers to the config in parent.
:type parent_key: str
:raises: TypeError
"""
if parent and not isinstance(parent, self.__class__):
raise TypeError('parent %s type should be %s'
% (parent, self.__class__.__name__))
if parent_key and not util.is_instance(parent_key, [str, unicode]):
raise TypeError('parent_key %s type should be [str, unicode]'
% parent_key)
self.config = config
self.refs_ = {'.': self}
self.parent_ = parent
self.parent_key_ = parent_key
if parent is not None:
self.refs_['..'] = parent
self.refs_['/'] = parent.refs_['/']
parent.refs_[parent_key] = self
if parent.config is None or not isinstance(parent.config, dict):
parent.__init__({}, parent=parent.parent_,
parent_key=parent.parent_key_)
parent.config[parent_key] = self.config
else:
self.refs_['..'] = self
self.refs_['/'] = self
if config and isinstance(config, dict):
for key, value in config.items():
if not util.is_instance(key, [str, unicode]):
msg = 'key type is %s while expected is [str, unicode]: %s'
raise TypeError(msg % (type(key), key))
ConfigReference(value, self, key)
def items(self, prefix=''):
"""Return key value pair of all nested items.
:param prefix: iterate key value pair under prefix.
:type prefix: str
:returns: list of (key, value)
"""
to_list = []
for key, ref in self.refs_.items():
if not self._special_path(key):
key_prefix = os.path.join(prefix, key)
to_list.append((key_prefix, ref.config))
to_list.extend(ref.items(key_prefix))
return to_list
def keys(self):
"""Return keys of :func:`ConfigReference.items`."""
return [key for key, _ in self.items()]
def values(self):
"""Return values of :func:`ConfigReference.items`."""
return [ref for _, ref in self.items()]
def __nonzero__(self):
return bool(self.config)
def __iter__(self):
return iter(self.keys())
def __len__(self):
return len(self.keys())
@classmethod
def _special_path(cls, path):
"""Check if path is special."""
return path in ['/', '.', '..']
def ref_items(self, path):
"""Return the refs matching the path glob.
:param path: glob pattern to match the path to the ref.
:type path: str
:returns: list of (key, :class:`ConfigReference`) tuples.
:raises: KeyError
"""
if not path:
raise KeyError('key %s is empty' % path)
parts = []
if util.is_instance(path, [str, unicode]):
parts = path.split('/')
else:
parts = path
if not parts[0]:
parts = parts[1:]
refs = [('/', self.refs_['/'])]
else:
refs = [('', self)]
for part in parts:
if not part:
continue
next_refs = []
for prefix, ref in refs:
if self._special_path(part):
sub_prefix = os.path.join(prefix, part)
next_refs.append((sub_prefix, ref.refs_[part]))
continue
for sub_key, sub_ref in ref.refs_.items():
if self._special_path(sub_key):
continue
matched = fnmatch.fnmatch(sub_key, part)
if not matched:
continue
sub_prefix = os.path.join(prefix, sub_key)
next_refs.append((sub_prefix, sub_ref))
refs = next_refs
return refs
def ref_keys(self, path):
"""Return keys of :func:`ConfigReference.ref_items`."""
return [key for key, _ in self.ref_items(path)]
def ref_values(self, path):
"""Return values of :func:`ConfigReference.ref_items`."""
return [ref for _, ref in self.ref_items(path)]
def ref(self, path, create_if_not_exist=False):
"""Get ref of the path.
:param path: str. The path to the ref.
:type path: str
:param create_if_not_exist: create the ref if it does not exist on the path.
:type create_if_not_exist: bool
:returns: :class:`ConfigReference` instance to the path.
:raises: KeyError, TypeError
"""
if not path:
raise KeyError('key %s is empty' % path)
if '*' in path or '?' in path:
raise TypeError('key %s should not contain * or ?' % path)
parts = []
if isinstance(path, list):
parts = path
else:
parts = path.split('/')
if not parts[0]:
ref = self.refs_['/']
parts = parts[1:]
else:
ref = self
for part in parts:
if not part:
continue
if part in ref.refs_:
ref = ref.refs_[part]
elif create_if_not_exist:
ref = ConfigReference(None, ref, part)
else:
raise KeyError('key %s does not exist' % path)
return ref
def __repr__(self):
return '<ConfigReference: config=%r, refs[%s], parent=%s>' % (
self.config, self.refs_.keys(), self.parent_)
def __getitem__(self, path):
return self.ref(path).config
def __contains__(self, path):
try:
self.ref(path)
return True
except KeyError:
return False
def __setitem__(self, path, value):
ref = self.ref(path, True)
ref.__init__(value, ref.parent_, ref.parent_key_)
return ref.config
def __delitem__(self, path):
ref = self.ref(path)
if ref.parent_:
del ref.parent_.refs_[ref.parent_key_]
del ref.parent_.config[ref.parent_key_]
ref.__init__(None)
def update(self, config, override=True):
"""Update with config.
:param config: config to update.
:param override: if the instance config should be overridden.
:type override: bool
"""
if (self.config is not None and
isinstance(self.config, dict) and
isinstance(config, dict)):
util.merge_dict(self.config, config, override)
elif self.config is None or override:
self.config = deepcopy(config)
else:
return
self.__init__(self.config, self.parent_, self.parent_key_)
def get(self, path, default=None):
"""Get config of the path or default if does not exist.
:param path: path to the item
:type path: str
:param default: default value to return
:returns: item in path or default.
"""
try:
return self[path]
except KeyError:
return default
def setdefault(self, path, value=None):
"""Set default value to path.
:param path: path to the item.
:type path: str
:param value: the default value to set to the path.
:returns: the :class:`ConfigReference` to path
"""
ref = self.ref(path, True)
if ref.config is None:
ref.__init__(value, ref.parent_, ref.parent_key_)
return ref
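# Editor's sketch (not in the original commit): basic path-style access
# with the ConfigReference class above; the keys and values are invented
# for illustration only.
if __name__ == '__main__':
    ref = ConfigReference({'networking': {'interfaces': {
        'management': {'ip': '10.1.1.1'},
        'tenant': {'ip': '10.2.2.2'}}}})
    print(ref['networking/interfaces/management/ip'])   # nested lookup
    print(ref.ref_keys('networking/interfaces/*'))      # glob matching
    ref['networking/interfaces/public/ip'] = '10.3.3.3' # creates the path
    print(ref.get('no/such/path', 'fallback'))          # default on miss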

@ -0,0 +1,256 @@
"""Config Translator module to translate orign config to dest config.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.config_management.utils import config_reference
from compass.utils import util
class KeyTranslator(object):
"""Class to translate origin ref to dest ref."""
def __init__(self, translated_keys=[], from_keys={}, translated_value=None,
from_values={}, override=False, override_conditions={}):
"""Constructor
:param translated_keys: keys in dest ref to be translated to.
:type translated_keys: list of str
:param from_keys: extra kwargs passed to the translated key callback.
:type from_keys: dict mapping name of kwargs to path in origin ref
:param translated_value: value or callback to get translated value.
:type translated_value: callback or any type
:param from_values: extra kwargs passed to the translated value callback.
:type from_values: dict mapping name of kwargs to path in origin ref.
:param override: if the translated value can be overridden.
:type override: callback or bool
:param override_conditions: extra kwargs passed to the override callback.
:type override_conditions: dict of kwargs name to origin ref path.
"""
self.translated_keys_ = translated_keys
self.from_keys_ = from_keys
self.translated_value_ = translated_value
self.from_values_ = from_values
self.override_ = override
self.override_conditions_ = override_conditions
self._is_valid()
def __repr__(self):
return (
'%s[translated_keys=%s,from_keys=%s,translated_value=%s,'
'from_values=%s,override=%s,override_conditions=%s]'
) % (
self.__class__.__name__, self.translated_keys_,
self.from_keys_, self.translated_value_, self.from_values_,
self.override_, self.override_conditions_
)
def _is_valid_translated_keys(self):
"""Check translated keys are valid."""
for i, translated_key in enumerate(self.translated_keys_):
if util.is_instance(translated_key, [str, unicode]):
if '*' in translated_key:
raise KeyError(
'translated_keys[%d] %s should not contain *' % (
i, translated_key))
elif not callable(translated_key):
raise TypeError(
'translated_keys[%d] type is %s while expected '
'types are str or callable: %s' % (
i, type(translated_key), translated_key))
def _is_valid_from_keys(self):
"""Check from keys are valid."""
for mapping_key, from_key in self.from_keys_.items():
if not util.is_instance(from_key, [str, unicode]):
raise TypeError(
'from_keys[%s] type is %s while '
'expected type is [str, unicode]: %s' % (
mapping_key, type(from_key), from_key))
if '*' in from_key:
raise KeyError(
'from_keys[%s] %s contains *' % (
mapping_key, from_key))
def _is_valid_from_values(self):
"""Check from values are valid."""
for mapping_key, from_value in self.from_values_.items():
if not util.is_instance(from_value, [str, unicode]):
raise TypeError(
'from_values[%s] type is %s while '
'expected type is [str, unicode]: %s' % (
mapping_key, type(from_value), from_value))
if '*' in from_value:
raise KeyError(
'from_values[%s] %s contains *' % (
mapping_key, from_value))
def _is_valid_override_conditions(self):
"""Check override conditions are valid."""
override_items = self.override_conditions_.items()
for mapping_key, override_condition in override_items:
if not util.is_instance(override_condition, [str, unicode]):
raise TypeError(
'override_conditions[%s] type is %s '
'while expected type is [str, unicode]: %s' % (
mapping_key, type(override_condition),
override_condition))
if '*' in override_condition:
raise KeyError(
'override_conditions[%s] %s contains *' % (
mapping_key, override_condition))
def _is_valid(self):
"""Check key translator is valid."""
self._is_valid_translated_keys()
self._is_valid_from_keys()
self._is_valid_from_values()
self._is_valid_override_conditions()
def _get_translated_keys(self, ref_key, sub_ref):
"""Get translated keys."""
key_configs = {}
for mapping_key, from_key in self.from_keys_.items():
if from_key in sub_ref:
key_configs[mapping_key] = sub_ref[from_key]
else:
logging.error('%s from_key %s missing in %s',
self, from_key, sub_ref)
translated_keys = []
for translated_key in self.translated_keys_:
if callable(translated_key):
translated_key = translated_key(
sub_ref, ref_key, **key_configs)
if not translated_key:
logging.debug('%s ignore empty translated key', self)
continue
if not util.is_instance(translated_key, [str, unicode]):
logging.error(
'%s translated key %s should be [str, unicode]',
self, translated_key)
continue
translated_keys.append(translated_key)
return translated_keys
def _get_translated_value(self, ref_key, sub_ref,
translated_key, translated_sub_ref):
"""Get translated value."""
if self.translated_value_ is None:
return sub_ref.config
elif not callable(self.translated_value_):
return self.translated_value_
value_configs = {}
for mapping_key, from_value in self.from_values_.items():
if from_value in sub_ref:
value_configs[mapping_key] = sub_ref[from_value]
else:
logging.info('%s ignore from value %s for key %s',
self, from_value, ref_key)
return self.translated_value_(
sub_ref, ref_key, translated_sub_ref,
translated_key, **value_configs)
def _get_override(self, ref_key, sub_ref,
translated_key, translated_sub_ref):
"""Get override."""
if not callable(self.override_):
return self.override_
override_condition_configs = {}
override_items = self.override_conditions_.items()
for mapping_key, override_condition in override_items:
if override_condition in sub_ref:
override_condition_configs[mapping_key] = (
sub_ref[override_condition])
else:
logging.error('%s no override condition %s in %s',
self, override_condition, ref_key)
return self.override_(sub_ref, ref_key,
translated_sub_ref,
translated_key,
**override_condition_configs)
def translate(self, ref, key, translated_ref):
"""Translate content in ref[key] to translated_ref."""
for ref_key, sub_ref in ref.ref_items(key):
translated_keys = self._get_translated_keys(ref_key, sub_ref)
for translated_key in translated_keys:
translated_sub_ref = translated_ref.setdefault(
translated_key)
translated_value = self._get_translated_value(
ref_key, sub_ref, translated_key, translated_sub_ref)
if translated_value is None:
continue
override = self._get_override(
ref_key, sub_ref, translated_key, translated_sub_ref)
logging.debug('%s translate to %s value %s', ref_key,
translated_key, translated_value)
translated_sub_ref.update(translated_value, override)
class ConfigTranslator(object):
"""Class to translate origin config to expected dest config."""
def __init__(self, mapping):
"""Constructor
:param mapping: dict of config path to :class:`KeyTranslator` instance
"""
self.mapping_ = mapping
self._is_valid()
def __repr__(self):
return '%s[mapping=%s]' % (self.__class__.__name__, self.mapping_)
def _is_valid(self):
"""Check if ConfigTranslator is valid."""
if not isinstance(self.mapping_, dict):
raise TypeError(
'mapping type is %s while expected type is dict: %s' % (
type(self.mapping_), self.mapping_))
for key, values in self.mapping_.items():
if not isinstance(values, list):
msg = 'mapping[%s] type is %s while expected type is list: %s'
raise TypeError(msg % (key, type(values), values))
for i, value in enumerate(values):
if not isinstance(value, KeyTranslator):
msg = (
'mapping[%s][%d] type is %s '
'while expected type is KeyTranslator: %s')
raise TypeError(msg % (key, i, type(value), value))
def translate(self, config):
"""Translate config.
:param config: configuration to translate.
:returns: the translated configuration.
"""
ref = config_reference.ConfigReference(config)
translated_ref = config_reference.ConfigReference({})
for key, values in self.mapping_.items():
for value in values:
value.translate(ref, key, translated_ref)
translated_config = config_reference.get_clean_config(
translated_ref.config)
logging.debug('translate config\n%s\nto\n%s',
config, translated_config)
return translated_config
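# Editor's sketch (not in the original commit): translating one origin
# path into one dest path with the classes above; the paths and values
# are invented for illustration only.
if __name__ == '__main__':
    translator = ConfigTranslator(mapping={
        '/security/server_credentials': [
            KeyTranslator(translated_keys=['/credentials/server'])],
    })
    print(translator.translate(
        {'security': {'server_credentials': {'username': 'root'}}}))
    # -> {'credentials': {'server': {'username': 'root'}}}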

@ -0,0 +1,72 @@
"""callback lib for config translator callbacks."""
import crypt
import logging
import re
from compass.utils import util
def get_key_from_pattern(
_ref, path, from_pattern='.*',
to_pattern='', **kwargs):
"""Get translated key from pattern"""
match = re.match(from_pattern, path)
if not match:
return None
group = match.groupdict()
util.merge_dict(group, kwargs)
try:
translated_key = to_pattern % group
except Exception as error:
logging.error('failed to get translated key from %s %% %s',
to_pattern, group)
raise error
return translated_key
def get_encrypted_value(ref, _path, _translated_ref, _translated_path,
crypt_method=None, **_kwargs):
"""Get encrypted value."""
if not crypt_method:
crypt_method = crypt.METHOD_MD5
return crypt.crypt(ref.config, crypt_method)
def get_value_if(ref, _path, _translated_ref, _translated_path,
condition=False, **_kwargs):
"""Get value if condition is true."""
if not condition:
return None
return ref.config
def add_value(ref, _path, translated_ref,
_translated_path, condition='', **_kwargs):
"""Append value into translated config if condition."""
if not translated_ref.config:
value_list = []
else:
value_list = [
value for value in translated_ref.config.split(',') if value
]
if condition and ref.config not in value_list:
value_list.append(ref.config)
return ','.join(value_list)
def override_if_any(_ref, _path, _translated_ref, _translated_path, **kwargs):
"""override if any kwargs is True"""
return any(kwargs.values())
def override_if_all(_ref, _path, _translated_ref, _translated_path, **kwargs):
"""override if all kwargs are True"""
return all(kwargs.values())
def override_path_has(_ref, path, _translated_ref, _translated_path,
should_exist='', **_kwargs):
"""override if expect part exists in path."""
return should_exist in path.split('/')
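# Editor's sketch (not in the original commit): get_key_from_pattern
# above turning a config path into a flat key; the patterns below are
# invented for illustration only.
if __name__ == '__main__':
    print(get_key_from_pattern(
        None, '/networking/interfaces/management/ip',
        from_pattern=r'/networking/interfaces/(?P<nic>[^/]+)/ip',
        to_pattern='%(nic)s_ip'))
    # -> management_ip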

0
compass/db/__init__.py Normal file

94
compass/db/database.py Normal file

@ -0,0 +1,94 @@
"""Provider interface to manipulate database."""
import logging
from threading import local
from contextlib import contextmanager
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from compass.utils import setting_wrapper as setting
from compass.db import model
ENGINE = create_engine(setting.SQLALCHEMY_DATABASE_URI, convert_unicode=True)
SESSION = sessionmaker(autocommit=False, autoflush=False)
SESSION.configure(bind=ENGINE)
SCOPED_SESSION = scoped_session(SESSION)
SESSION_HOLDER = local()
def init(database_url):
"""Initialize database.
:param database_url: string, database url.
"""
global ENGINE
global SCOPED_SESSION
ENGINE = create_engine(database_url, convert_unicode=True)
SESSION.configure(bind=ENGINE)
SCOPED_SESSION = scoped_session(SESSION)
@contextmanager
def session():
"""
Database session scope. Database operations should be performed
within this session scope.
"""
if hasattr(SESSION_HOLDER, 'session'):
logging.error('we are already in session')
new_session = SESSION_HOLDER.session
else:
new_session = SCOPED_SESSION()
try:
SESSION_HOLDER.session = new_session
yield new_session
new_session.commit()
except Exception as error:
new_session.rollback()
logging.error('failed to commit session')
logging.exception(error)
raise error
finally:
new_session.close()
SCOPED_SESSION.remove()
del SESSION_HOLDER.session
def current_session():
"""Get the current session scope when it is called.
:return: database session.
"""
try:
return SESSION_HOLDER.session
except Exception as error:
logging.error('It is not in the session scope')
logging.exception(error)
raise error
def create_db():
"""Create database"""
model.BASE.metadata.create_all(bind=ENGINE)
def drop_db():
"""Drop database."""
model.BASE.metadata.drop_all(bind=ENGINE)
def create_table(table):
"""Create table.
:param table: Class of the Table defined in the model.
"""
table.__table__.create(bind=ENGINE, checkfirst=True)
def drop_table(table):
"""Drop table.
:param table: Class of the Table defined in the model.
"""
table.__table__.drop(bind=ENGINE, checkfirst=True)
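# Editor's sketch (not in the original commit): pointing the module at a
# throwaway in-memory sqlite database and exercising the session scope
# defined above. Assumes the compass package and its settings are
# importable and that sqlite accepts the model's column types.
if __name__ == '__main__':
    init('sqlite://')
    create_db()
    with session() as db_session:
        # inside the scope, current_session() returns the same session
        print(db_session is current_session())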

567
compass/db/model.py Normal file

@ -0,0 +1,567 @@
"""database model."""
from datetime import datetime
import simplejson as json
import logging
import uuid
from sqlalchemy import Column, ColumnDefault, Integer, String
from sqlalchemy import Float, Enum, DateTime, ForeignKey, Text, Boolean
from sqlalchemy import UniqueConstraint
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base
from compass.utils import util
BASE = declarative_base()
class Switch(BASE):
"""Switch table.
:param id: the unique identifier of the switch. int as primary key.
:param ip: the IP address of the switch.
:param vendor_info: the name of the vendor
:param credential_data: used for accessing and retrieving information
from the switch. Store json format as string.
:param state: Enum. 'not_reached': polling the switch failed or has not
finished learning all MAC addresses of devices connected to the switch;
'under_monitoring': all MAC addresses successfully learned.
:param machines: refer to list of Machine connected to the switch.
"""
__tablename__ = 'switch'
id = Column(Integer, primary_key=True)
ip = Column(String(80), unique=True)
credential_data = Column(Text)
vendor_info = Column(String(256), nullable=True)
state = Column(Enum('not_reached', 'under_monitoring',
name='switch_state'))
def __init__(self, **kwargs):
self.state = 'not_reached'
super(Switch, self).__init__(**kwargs)
def __repr__(self):
return '<Switch ip: %r, credential: %r, vendor: %r, state: %s>'\
% (self.ip, self.credential, self.vendor, self.state)
@property
def vendor(self):
"""vendor property getter"""
return self.vendor_info
@vendor.setter
def vendor(self, value):
"""vendor property setter"""
self.vendor_info = value
@property
def credential(self):
"""credential data getter.
:returns: python primitive dictionary object.
"""
if self.credential_data:
try:
credential = json.loads(self.credential_data)
credential = dict(
[(str(k).title(), str(v)) for k, v in credential.items()])
return credential
except Exception as error:
logging.error('failed to load credential data %s: %s',
self.id, self.credential_data)
logging.exception(error)
return {}
else:
return {}
@credential.setter
def credential(self, value):
"""credential property setter
:param value: dict of configuration data needed to update.
"""
if value:
try:
credential = {}
if self.credential_data:
credential = json.loads(self.credential_data)
credential.update(value)
self.credential_data = json.dumps(credential)
except Exception as error:
logging.error('failed to dump credential data %s: %s',
self.id, value)
logging.exception(error)
else:
self.credential_data = json.dumps({})
logging.debug('switch now is %s', self)
class Machine(BASE):
"""
Machine table. Note: currently we only take care of the management
plane, so we assume one machine is connected to one switch.
:param id: int, identity as primary key
:param mac: string, the MAC address of the machine.
:param switch_id: the id of the switch this machine is connected to.
:param port: the port of the switch this machine is connected to.
:param vlan: the vlan id this machine is connected on.
:param update_timestamp: last time this entry got updated.
:param switch: refer to the Switch the machine connects to.
"""
__tablename__ = 'machine'
id = Column(Integer, primary_key=True)
mac = Column(String(24), unique=True)
port = Column(Integer)
vlan = Column(Integer)
update_timestamp = Column(DateTime, default=datetime.now,
onupdate=datetime.now)
switch_id = Column(Integer, ForeignKey('switch.id',
onupdate='CASCADE',
ondelete='SET NULL'))
switch = relationship('Switch', backref=backref('machines',
lazy='dynamic'))
def __init__(self, **kwargs):
super(Machine, self).__init__(**kwargs)
def __repr__(self):
return '<Machine %r: port=%r vlan=%r switch=%r>'\
% (self.mac, self.port, self.vlan, self.switch)
class HostState(BASE):
"""The state of the ClusterHost.
:param id: int, identity as primary key.
:param state: Enum. 'UNINITIALIZED': the host is ready to setup.
'INSTALLING': the host is being installed.
'READY': the host is setup.
'ERROR': the host has error.
:param progress: float, the installing progress from 0 to 1.
:param message: the latest installing message.
:param severity: Enum, the installing message severity.
('INFO', 'WARNING', 'ERROR')
:param update_timestamp: the latest timestamp the entry got updated.
:param host: refer to ClusterHost.
"""
__tablename__ = "host_state"
id = Column(Integer, ForeignKey('cluster_host.id',
onupdate='CASCADE',
ondelete='CASCADE'),
primary_key=True)
state = Column(Enum('UNINITIALIZED', 'INSTALLING', 'READY', 'ERROR'),
ColumnDefault('UNINITIALIZED'))
progress = Column(Float, ColumnDefault(0.0))
message = Column(String)
severity = Column(Enum('INFO', 'WARNING', 'ERROR'), ColumnDefault('INFO'))
update_timestamp = Column(DateTime, default=datetime.now,
onupdate=datetime.now)
host = relationship('ClusterHost', backref=backref('state',
uselist=False))
def __init__(self, **kwargs):
super(HostState, self).__init__(**kwargs)
@property
def hostname(self):
"""hostname getter"""
return self.host.hostname
def __repr__(self):
return ('<HostState %r: state=%r, progress=%s, '
'message=%s, severity=%s>') % (
self.hostname, self.state, self.progress,
self.message, self.severity)
class ClusterState(BASE):
"""The state of the Cluster.
:param id: int, identity as primary key.
:param state: Enum, 'UNINITIALIZED': the cluster is ready to setup.
'INSTALLING': the cluster is being installed.
'READY': the cluster is setup.
'ERROR': the cluster has error.
:param progress: float, the installing progress from 0 to 1.
:param message: the latest installing message.
:param severity: Enum, the installing message severity.
('INFO', 'WARNING', 'ERROR').
:param update_timestamp: the latest timestamp the entry got updated.
:param cluster: refer to Cluster.
"""
__tablename__ = 'cluster_state'
id = Column(Integer, ForeignKey('cluster.id',
onupdate='CASCADE',
ondelete='CASCADE'),
primary_key=True)
state = Column(Enum('UNINITIALIZED', 'INSTALLING', 'READY', 'ERROR'),
ColumnDefault('UNINITIALIZED'))
progress = Column(Float, ColumnDefault(0.0))
message = Column(String)
severity = Column(Enum('INFO', 'WARNING', 'ERROR'), ColumnDefault('INFO'))
update_timestamp = Column(DateTime, default=datetime.now,
onupdate=datetime.now)
cluster = relationship('Cluster', backref=backref('state',
uselist=False))
def __init__(self, **kwargs):
super(ClusterState, self).__init__(**kwargs)
@property
def clustername(self):
'clustername getter'
return self.cluster.name
def __repr__(self):
return ('<ClusterState %r: state=%r, progress=%s, '
'message=%s, severity=%s>') % (
self.clustername, self.state, self.progress,
self.message, self.severity)
class Cluster(BASE):
"""Cluster configuration information.
:param id: int, identity as primary key.
:param name: str, cluster name.
:param mutable: bool, if the Cluster is mutable.
:param security_config: str stores json formatted security information.
:param networking_config: str stores json formatted networking information.
:param partition_config: str stores json formatted partition information.
:param adapter_id: the refer id in the Adapter table.
:param raw_config: str stores json formatted other cluster information.
:param adapter: refer to the Adapter.
:param state: refer to the ClusterState.
"""
__tablename__ = 'cluster'
id = Column(Integer, primary_key=True)
name = Column(String, unique=True)
mutable = Column(Boolean, default=True)
security_config = Column(Text)
networking_config = Column(Text)
partition_config = Column(Text)
adapter_id = Column(Integer, ForeignKey('adapter.id'))
raw_config = Column(Text)
adapter = relationship("Adapter", backref=backref('clusters',
lazy='dynamic'))
def __init__(self, **kwargs):
if 'name' not in kwargs or not kwargs['name']:
self.name = str(uuid.uuid4())
if 'name' in kwargs:
del kwargs['name']
super(Cluster, self).__init__(**kwargs)
def __repr__(self):
return '<Cluster %r: config=%r>' % (self.name, self.config)
@property
def partition(self):
"""partition getter"""
if self.partition_config:
try:
return json.loads(self.partition_config)
except Exception as error:
logging.error('failed to load partition config %s: %s',
self.id, self.partition_config)
logging.exception(error)
return {}
else:
return {}
@partition.setter
def partition(self, value):
"""partition setter"""
logging.debug('cluster %s set partition %s', self.id, value)
if value:
try:
self.partition_config = json.dumps(value)
except Exception as error:
logging.error('failed to dump partition config %s: %s',
self.id, value)
logging.exception(error)
else:
self.partition_config = None
@property
def security(self):
"""security getter"""
if self.security_config:
try:
return json.loads(self.security_config)
except Exception as error:
logging.error('failed to load security config %s: %s',
self.id, self.security_config)
logging.exception(error)
return {}
else:
return {}
@security.setter
def security(self, value):
"""security setter"""
logging.debug('cluster %s set security %s', self.id, value)
if value:
try:
self.security_config = json.dumps(value)
except Exception as error:
logging.error('failed to dump security config %s: %s',
self.id, value)
logging.exception(error)
else:
self.security_config = None
@property
def networking(self):
"""networking getter"""
if self.networking_config:
try:
return json.loads(self.networking_config)
except Exception as error:
logging.error('failed to load networking config %s: %s',
self.id, self.networking_config)
logging.exception(error)
return {}
else:
return {}
@networking.setter
def networking(self, value):
"""networking setter"""
logging.debug('cluster %s set networking %s', self.id, value)
if value:
try:
self.networking_config = json.dumps(value)
except Exception as error:
logging.error('failed to dump networking config %s: %s',
self.id, value)
logging.exception(error)
else:
self.networking_config = None
@property
def config(self):
"""get config from security, networking, partition"""
config = {}
if self.raw_config:
try:
config = json.loads(self.raw_config)
except Exception as error:
logging.error('failed to load raw config %s: %s',
self.id, self.raw_config)
logging.exception(error)
util.merge_dict(config, {'security': self.security})
util.merge_dict(config, {'networking': self.networking})
util.merge_dict(config, {'partition': self.partition})
util.merge_dict(config, {'clusterid': self.id,
'clustername': self.name})
return config
@config.setter
def config(self, value):
"""set config to security, networking, partition."""
logging.debug('cluster %s set config %s', self.id, value)
if not value:
self.security = None
self.networking = None
self.partition = None
self.raw_config = None
return
self.security = value.get('security')
self.networking = value.get('networking')
self.partition = value.get('partition')
try:
self.raw_config = json.dumps(value)
except Exception as error:
logging.error('failed to dump raw config %s: %s',
self.id, value)
logging.exception(error)
class ClusterHost(BASE):
"""ClusterHost information.
:param id: int, identity as primary key.
:param machine_id: int, the id of the Machine.
:param cluster_id: int, the id of the Cluster.
:param mutable: if the ClusterHost information is mutable.
:param hostname: str, host name.
:param config_data: string, json formatted config data.
:param cluster: refer to Cluster the host in.
:param machine: refer to the Machine the host on.
:param state: refer to HostState indicates the host state.
"""
__tablename__ = 'cluster_host'
id = Column(Integer, primary_key=True)
machine_id = Column(Integer, ForeignKey('machine.id',
onupdate='CASCADE',
ondelete='CASCADE'),
nullable=True, unique=True)
cluster_id = Column(Integer, ForeignKey('cluster.id',
onupdate='CASCADE',
ondelete='SET NULL'),
nullable=True)
hostname = Column(String)
UniqueConstraint('cluster_id', 'hostname', name='unique_1')
config_data = Column(Text)
mutable = Column(Boolean, default=True)
cluster = relationship("Cluster", backref=backref('hosts', lazy='dynamic'))
machine = relationship("Machine", backref=backref('host', uselist=False))
def __init__(self, **kwargs):
if 'hostname' not in kwargs or not kwargs['hostname']:
self.hostname = str(uuid.uuid4())
if 'hostname' in kwargs:
del kwargs['hostname']
super(ClusterHost, self).__init__(**kwargs)
def __repr__(self):
return '<ClusterHost %r: cluster=%r machine=%r>'\
% (self.hostname, self.cluster, self.machine)
@property
def config(self):
"""config getter."""
config = {}
if self.config_data:
try:
config.update(json.loads(self.config_data))
config.update({'hostid': self.id, 'hostname': self.hostname})
if self.cluster:
config.update({'clusterid': self.cluster.id,
'clustername': self.cluster.name})
if self.machine:
util.merge_dict(
config, {
'networking': {
'interfaces': {
'management': {
'mac': self.machine.mac
}
}
}
})
except Exception as error:
logging.error('failed to load config %s: %s',
self.hostname, self.config_data)
logging.exception(error)
return config
@config.setter
def config(self, value):
"""config setter"""
if not self.config_data:
config = {}
self.config_data = json.dumps(config)
if value:
try:
config = json.loads(self.config_data)
util.merge_dict(config, value)
self.config_data = json.dumps(config)
except Exception as error:
logging.error('failed to dump config %s: %s',
self.hostname, value)
logging.exception(error)
class LogProgressingHistory(BASE):
"""host installing log history for each file.
:param id: int, identity as primary key.
:param pathname: str, the full path of the installing log file. unique.
:param position: int, the position in the log file that has been processed.
:param partial_line: str, partial line of the log.
:param progress: float, the installing progress between 0 and 1.
:param message: str, the installing message.
:param severity: Enum, the installing message severity.
('ERROR', 'WARNING', 'INFO')
:param line_matcher_name: str, the line matcher name of the log processor.
:param update_timestamp: datetime, the latest timestamp the entry updated.
"""
__tablename__ = 'log_progressing_history'
id = Column(Integer, primary_key=True)
pathname = Column(String, unique=True)
position = Column(Integer, ColumnDefault(0))
partial_line = Column(Text)
progress = Column(Float, ColumnDefault(0.0))
message = Column(Text)
severity = Column(Enum('ERROR', 'WARNING', 'INFO'), ColumnDefault('INFO'))
line_matcher_name = Column(String, ColumnDefault('start'))
update_timestamp = Column(DateTime, default=datetime.now,
onupdate=datetime.now)
def __init__(self, **kwargs):
super(LogProgressingHistory, self).__init__(**kwargs)
def __repr__(self):
return ('LogProgressingHistory[%r: position %r,'
'partial_line %r,progress %r,message %r,'
'severity %r]') % (
self.pathname, self.position,
self.partial_line,
self.progress,
self.message,
self.severity)
class Adapter(BASE):
"""Table stores ClusterHost installing Adapter information.
:param id: int, identity as primary key.
:param name: string, adapter name, unique.
:param os: string, os name for installing the host.
:param target_system: string, target system to be installed on the host.
:param clusters: refer to the list of Cluster.
"""
__tablename__ = 'adapter'
id = Column(Integer, primary_key=True)
name = Column(String, unique=True)
os = Column(String)
target_system = Column(String)
def __init__(self, **kwargs):
super(Adapter, self).__init__(**kwargs)
def __repr__(self):
return '<Adapter %r: os %r, target_system %r>' % (
self.name, self.os, self.target_system)
class Role(BASE):
"""
    The Role table stores available roles of one target system,
    where a host can be deployed to one or several roles in the cluster.
:param id: int, identity as primary key.
:param name: role name.
:param target_system: str, the target_system.
:param description: str, the description of the role.
"""
__tablename__ = 'role'
id = Column(Integer, primary_key=True)
name = Column(String, unique=True)
target_system = Column(String)
description = Column(Text)
def __init__(self, **kwargs):
super(Role, self).__init__(**kwargs)
def __repr__(self):
return '<Role %r : target_system %r, description:%r>' % (
self.name, self.target_system, self.description)

@ -0,0 +1,33 @@
Install & Config Prerequisite Packages:
1. Net-Snmp:
a. #apt-get install -y snmpd snmp libsnmp-python
b. #apt-get install -y snmp-mibs-downloader
For Centos:
# yum install net-snmp net-snmp-utils
c. create the vendor's mibs directory (for example):
- #mkdir -p /root/.snmp/mibs/huawei
- #vim /etc/snmp/snmp.conf (create snmp.conf if it does not exist)
* add the vendor's mibs directory:
mibdirs +/root/.snmp/mibs/huawei
* comment the line:
#mibs:
d. copy vendor's mibs to that directory
e. #vim /etc/default/snmpd
* modify the directive from
TRAPDRUN=no --> TRAPDRUN=yes
For Centos:
# vim /etc/sysconfig/snmpd
* modify or add the directive:
TRAPDRUN=yes
f. #vim /etc/snmp/snmpd.conf
* add the following line, where $ip is the ip address of manager machine:
com2sec mynetwork $ip/24 public
g. #service snmpd restart
Note: run net-snmp-config to see the default configuration
2. paramiko:
#apt-get install python-paramiko
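3. Verify setup (optional):
A quick sanity check, assuming snmpd runs locally with community
'public' (adjust host and community to your environment), using the
net-snmp python bindings installed above:
#python
>>> import netsnmp
>>> netsnmp.snmpget(netsnmp.Varbind('sysDescr.0'),
... Version=2, DestHost='localhost', Community='public')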

@ -0,0 +1,42 @@
"""
Base class extended by specific vendors in the vendors directory.
A vendor needs to implement the abstract methods of the base class.
"""
class BaseVendor(object):
"""Basic Vendor object"""
def is_this_vendor(self, *args, **kwargs):
"""Determine if the host is associated with this vendor.
        This function must be implemented by each vendor.
"""
raise NotImplementedError
class BasePlugin(object):
"""Extended by vendor's plugin, which processes request and
retrieve info directly from the switch.
"""
def process_data(self, *args, **kwargs):
"""Each vendors will have some plugins to do some operations.
Plugin will process request data and return expected result.
:param args: arguments
:param kwargs: key-value pairs of arguments
"""
raise NotImplementedError
# At least one of these three functions below must be implemented.
def scan(self, *args, **kwargs):
"""Get multiple records at once"""
pass
def set(self, *args, **kwargs):
"""Set value to desired variable"""
pass
def get(self, *args, **kwargs):
"""Get one record from a host"""
pass
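# A minimal sketch of a vendor implementation (hypothetical 'Foo'
# vendor; real vendors live under compass/hdsdiscovery/vendors and
# also expose a CLASS_NAME constant so the loader can find them):
#
#     CLASS_NAME = 'Foo'
#
#     class Foo(BaseVendor):
#         def is_this_vendor(self, host, credential):
#             # e.g. match the vendor name in the snmp sysDescr output
#             return False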

@ -0,0 +1,87 @@
"""Manage hdsdiscovery functionalities"""
import os
import re
import logging
from compass.hdsdiscovery import utils
class HDManager:
"""Process a request."""
def __init__(self):
base_dir = os.path.dirname(os.path.realpath(__file__))
self.vendors_dir = os.path.join(base_dir, 'vendors')
self.vendor_plugins_dir = os.path.join(self.vendors_dir, '?/plugins')
def learn(self, host, credential, vendor, req_obj, oper="SCAN", **kwargs):
"""Insert/update record of switch_info. Get expected results from
switch according to sepcific operation.
:param req_obj: the object of a machine
:param host: switch IP address
:param credientials: credientials to access switch
:param oper: operations of the plugin (SCAN, GETONE, SET)
:param kwargs(optional): key-value pairs
"""
plugin_dir = self.vendor_plugins_dir.replace('?', vendor)
if not os.path.exists(plugin_dir):
logging.error('No such directory: %s', plugin_dir)
return None
plugin = utils.load_module(req_obj, plugin_dir, host, credential)
if not plugin:
# No plugin found!
            #TODO: add more code to catch exceptions or unexpected states
logging.error('no plugin %s to load from %s', req_obj, plugin_dir)
return None
return plugin.process_data(oper)
def is_valid_vendor(self, host, credential, vendor):
""" Check if vendor is associated with this host and credential
:param host: switch ip
:param credential: credential to access switch
:param vendor: the vendor of switch
"""
vendor_dir = os.path.join(self.vendors_dir, vendor)
if not os.path.exists(vendor_dir):
logging.error('no such directory: %s', vendor_dir)
return False
vendor_instance = utils.load_module(vendor, vendor_dir)
        #TODO: add more code to catch exceptions or unexpected states
if not vendor_instance:
            # Cannot find the vendor in the directory!
logging.error('no vendor instance %s load from %s',
vendor, vendor_dir)
return False
return vendor_instance.is_this_vendor(host, credential)
def get_vendor(self, host, credential):
""" Check and get vendor of the switch.
        :param host: switch ip
:param credential: credential to access switch
"""
        # List all non-hidden vendor directories under self.vendors_dir
all_vendors = [o for o in os.listdir(self.vendors_dir)
if os.path.isdir(os.path.join(self.vendors_dir, o))
and re.match(r'^[^\.]', o)]
logging.debug("[get_vendor]: %s ", all_vendors)
for vname in all_vendors:
vpath = os.path.join(self.vendors_dir, vname)
instance = utils.load_module(vname, vpath)
            #TODO: add more code to catch exceptions or unexpected states
if not instance:
logging.error('no instance %s load from %s', vname, vpath)
continue
if instance.is_this_vendor(host, credential):
return vname
return None
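# A minimal usage sketch (hypothetical switch address and credential;
# 'mac' refers to a plugin module under the matched vendor's plugins
# directory):
#
#     manager = HDManager()
#     credential = {'Version': 'v2c', 'Community': 'public'}
#     vendor = manager.get_vendor('10.1.1.1', credential)
#     if vendor:
#         mac_info = manager.learn('10.1.1.1', credential,
#                                  vendor, 'mac', oper='SCAN')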

@ -0,0 +1,160 @@
"""Utility functions
Including functions of get/getbulk/walk/set of snmp for three versions
"""
import imp
import re
import logging
def load_module(mod_name, path, host=None, credential=None):
""" Load a module instance.
:param str mod_name: module name
:param str path: directory of the module
:param str host: switch ip address
:param str credential: credential used to access switch
"""
instance = None
try:
        mod_file, mod_path, descr = imp.find_module(mod_name, [path])
        if mod_file:
            mod = imp.load_module(mod_name, mod_file, mod_path, descr)
if host and credential:
instance = getattr(mod, mod.CLASS_NAME)(host, credential)
else:
instance = getattr(mod, mod.CLASS_NAME)()
except ImportError as exc:
logging.error('No such plugin : %s', mod_name)
logging.exception(exc)
finally:
return instance
def ssh_remote_execute(host, username, password, cmd, *args):
"""SSH to execute script on remote machine
:param host: ip of the remote machine
:param username: username to access the remote machine
:param password: password to access the remote machine
:param cmd: command to execute
"""
    client = None
    try:
        import paramiko
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username, password=password)
        stdin, stdout, stderr = client.exec_command(cmd)
        return stdout.readlines()
except ImportError as exc:
logging.error("[hdsdiscovery][utils][ssh_remote_execute] failed to"
"load module 'paramiko', donnot exist!")
logging.exception(exc)
return None
except Exception as exc:
logging.error("[hdsdiscovery][utils][ssh_remote_execute] failed: %s",
cmd)
logging.exception(exc)
return None
    finally:
        if client:
            client.close()
def valid_ip_format(ip_address):
    """Validate the format of an IP address."""
    if not re.match(r'^((([0-2]?\d{0,2}\.){3}([0-2]?\d{0,2}))'
                    r'|(([\da-fA-F]{1,4}:){7}([\da-fA-F]{1,4})))$',
                    ip_address):
        # the IP format matches neither IPv4 nor IPv6
        return False
return True
#################################################################
# Implement snmpwalk and snmpget functionality
# The structure of the returned dictionary will be tag/iid/value/type
#################################################################
AUTH_VERSIONS = {'v1': 1,
'v2c': 2,
'v3': 3}
def snmp_walk(host, credential, *args):
"""Impelmentation of snmpwalk functionality
:param host: switch ip
:param credential: credential to access switch
:param args: OIDs
"""
try:
import netsnmp
except ImportError:
logging.error("Module 'netsnmp' do not exist! Please install it first")
return None
    if 'Version' not in credential or 'Community' not in credential:
        logging.error("[utils] missing 'Version' or 'Community' in %s",
                      credential)
return None
if credential['Version'] in AUTH_VERSIONS:
version = AUTH_VERSIONS[credential['Version']]
credential['Version'] = version
varbind_list = []
for arg in args:
varbind = netsnmp.Varbind(arg)
varbind_list.append(varbind)
var_list = netsnmp.VarList(*varbind_list)
    netsnmp.snmpwalk(var_list, DestHost=host, **credential)
result = []
for var in var_list:
response = {}
response['elem_name'] = var.tag
response['iid'] = var.iid
response['value'] = var.val
response['type'] = var.type
result.append(response)
return result
def snmp_get(host, credential, object_type):
"""Impelmentation of snmp get functionality
:param object_type: mib object
:param host: switch ip
:param credential: the dict of credential to access switch
"""
try:
import netsnmp
except ImportError:
logging.error("Module 'netsnmp' do not exist! Please install it first")
return None
if 'Version' not in credential or 'Community' not in credential:
        logging.error('[utils][snmp_get] missing keywords in %s for %s',
credential, host)
return None
if credential['Version'] in AUTH_VERSIONS:
version = AUTH_VERSIONS[credential['Version']]
credential['Version'] = version
varbind = netsnmp.Varbind(object_type)
res = netsnmp.snmpget(varbind, DestHost=host, **credential)
if not res:
logging.error('no result found for %s %s', host, credential)
return None
return res[0]
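# A minimal usage sketch (hypothetical switch ip; the credential dict
# carries the snmp version name and community string expected by
# AUTH_VERSIONS above):
#
#     credential = {'Version': 'v2c', 'Community': 'public'}
#     sys_descr = snmp_get('10.1.1.1', credential, 'sysDescr.0')
#     fdb_ports = snmp_walk('10.1.1.1', credential,
#                           'BRIDGE-MIB::dot1dTpFdbPort')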

51
compass/hdsdiscovery/vendors/hp/hp.py vendored Normal file

@ -0,0 +1,51 @@
"""Vendor: HP"""
import re
import logging
from compass.hdsdiscovery import base
from compass.hdsdiscovery import utils
#Vendor_loader will load vendor instance by CLASS_NAME
CLASS_NAME = 'Hp'
class Hp(base.BaseVendor):
"""Hp switch object"""
def __init__(self):
        # names of switch models belonging to the Hewlett-Packard (HP) vendor
self.names = ['hp', 'procurve']
def is_this_vendor(self, host, credential):
"""
        Determine if the host is associated with this vendor.
        This implementation uses the snmp sysDescr OID and a regex to
        extract the vendor's name, then compares it with self.names.
:param host: switch's IP address
:param credential: credential to access switch
"""
if "Version" not in credential or "Community" not in credential:
# The format of credential is incompatible with this vendor
err_msg = "[Hp]Missing keyword 'Version' or 'Community' in %r"
logging.error(err_msg, credential)
return False
sys_info = utils.snmp_get(host, credential, "sysDescr.0")
if not sys_info:
logging.info("Dismatch vendor information")
return False
sys_info = sys_info.lower()
for name in self.names:
if re.search(r"\b" + re.escape(name) + r"\b", sys_info):
return True
return False
@property
def name(self):
"""Get 'name' proptery"""
return 'hp'

@ -0,0 +1,79 @@
"""HP Switch Mac module"""
from compass.hdsdiscovery import utils
from compass.hdsdiscovery import base
CLASS_NAME = 'Mac'
class Mac(base.BasePlugin):
"""Process MAC address by HP switch"""
def __init__(self, host, credential):
self.host = host
self.credential = credential
def process_data(self, oper='SCAN'):
"""Dynamically call the function according 'oper'
:param oper: operation of data processing
"""
func_name = oper.lower()
return getattr(self, func_name)()
def scan(self):
"""
        Implements the scan method in the BasePlugin class. In this mac
        module, MAC addresses are retrieved via the snmpwalk python lib.
"""
walk_result = utils.snmp_walk(self.host, self.credential,
"BRIDGE-MIB::dot1dTpFdbPort")
if not walk_result:
return None
mac_list = []
for result in walk_result:
if not result or result['value'] == str(0):
continue
temp = {}
mac_numbers = result['iid'].split('.')
temp['mac'] = self._get_mac_address(mac_numbers)
temp['port'] = self._get_port(result['value'])
temp['vlan'] = self._get_vlan_id(temp['port'])
mac_list.append(temp)
return mac_list
def _get_vlan_id(self, port):
"""Get vlan Id"""
oid = '.'.join(('Q-BRIDGE-MIB::dot1qPvid', port))
vlan_id = utils.snmp_get(self.host, self.credential, oid).strip()
return vlan_id
def _get_port(self, if_index):
"""Get port number"""
if_name = '.'.join(('ifName', if_index))
port = utils.snmp_get(self.host, self.credential, if_name).strip()
return port
def _convert_to_hex(self, integer):
"""Convert the integer from decimal to hex"""
hex_string = str(hex(int(integer)))[2:]
length = len(hex_string)
if length == 1:
hex_string = str(0) + hex_string
return hex_string
def _get_mac_address(self, mac_numbers):
"""Assemble mac address from the list"""
mac = ""
for num in mac_numbers:
num = self._convert_to_hex(num)
mac = ':'.join((mac, num))
mac = mac[1:]
return mac
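# For example, a walk result iid of '0.12.41.170.187.204' is assembled
# by _get_mac_address into the mac address '00:0c:29:aa:bb:cc': each
# decimal number becomes a two-digit hex octet.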

@ -0,0 +1,51 @@
"""Huawei Switch"""
import re
import logging
from compass.hdsdiscovery import base
from compass.hdsdiscovery import utils
#Vendor_loader will load vendor instance by CLASS_NAME
CLASS_NAME = "Huawei"
class Huawei(base.BaseVendor):
"""Huawei switch"""
def __init__(self):
self.__name = "huawei"
def is_this_vendor(self, host, credential):
"""
        Determine if the host is associated with this vendor.
        This implementation uses the snmp sysDescr OID and a regex to
        extract the vendor's name, then compares it with self.name.
        :param host: switch's IP address
:param credential: credential to access switch
"""
if not utils.valid_ip_format(host):
#invalid ip address
return False
if "Version" not in credential or "Community" not in credential:
# The format of credential is incompatible with this vendor
error_msg = "[huawei]Missing 'Version' or 'Community' in %r"
logging.error(error_msg, credential)
return False
sys_info = utils.snmp_get(host, credential, "sysDescr.0")
if not sys_info:
return False
if re.search(r"\b" + re.escape(self.__name) + r"\b", sys_info.lower()):
return True
return False
@property
def name(self):
"""Return switch name"""
return self.__name

@ -0,0 +1,111 @@
import subprocess
from compass.hdsdiscovery import utils
from compass.hdsdiscovery import base
CLASS_NAME = "Mac"
class Mac(base.BasePlugin):
"""Processes MAC address"""
def __init__(self, host, credential):
self.mac_mib_obj = 'HUAWEI-L2MAM-MIB::hwDynFdbPort'
self.host = host
self.credential = credential
def process_data(self, oper="SCAN"):
"""
        Dynamically call the function according to 'oper'
:param oper: operation of data processing
"""
func_name = oper.lower()
return getattr(self, func_name)()
def scan(self):
"""
        Implements the scan method in the BasePlugin class. In this mac
        module, MAC addresses are retrieved via the snmpwalk command line.
"""
version = self.credential['Version']
community = self.credential['Community']
if version == 2:
# Command accepts 1|2c|3 as version arg
version = '2c'
cmd = 'snmpwalk -v%s -Cc -c %s -O b %s %s' % \
(version, community, self.host, self.mac_mib_obj)
try:
sub_p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
result = []
for line in sub_p.stdout.readlines():
if not line or line == '\n':
continue
temp = {}
arr = line.split(" ")
temp['iid'] = arr[0].split('.', 1)[-1]
temp['value'] = arr[-1]
result.append(temp)
return self._process_mac(result)
        except Exception:
return None
def _process_mac(self, walk_result):
"""Get mac addresses from snmpwalk result"""
mac_list = []
        for entity in walk_result:
            iid = entity['iid']
            if_index = entity['value']
            numbers = iid.split('.')
            mac = self._get_mac_address(numbers, 6)
            vlan = numbers[6]
            port = self._get_port(if_index)
attri_dict_temp = {}
attri_dict_temp['port'] = port
attri_dict_temp['mac'] = mac
attri_dict_temp['vlan'] = vlan
mac_list.append(attri_dict_temp)
return mac_list
def _get_port(self, if_index):
"""Get port number by using snmpget and OID 'IfName'
:param int if_index:the index of 'IfName'
"""
if_name = '.'.join(('ifName', if_index))
result = utils.snmp_get(self.host, self.credential, if_name)
"""result variable will be like: GigabitEthernet0/0/23"""
port = result.split("/")[2]
return port
def _convert_to_hex(self, integer):
"""Convert the integer from decimal to hex"""
hex_string = str(hex(int(integer)))[2:]
length = len(hex_string)
if length == 1:
hex_string = str(0) + hex_string
return hex_string
    # Get MAC address: the first 6 numbers in the list
def _get_mac_address(self, iid_numbers, length):
"""Assemble mac address from the list"""
mac = ""
for index in range(length):
num = self._convert_to_hex(iid_numbers[index])
mac = ':'.join((mac, num))
mac = mac[1:]
return mac
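# For example, an iid of '0.12.41.170.187.204.10' yields the mac
# '00:0c:29:aa:bb:cc' (first six numbers) and vlan '10' (the seventh
# number); the port is then resolved from the walked ifIndex value
# via snmpget on 'ifName'.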

@ -0,0 +1,59 @@
"""Open Vswitch module"""
import re
import logging
from compass.hdsdiscovery import base
from compass.hdsdiscovery import utils
#Vendor_loader will load vendor instance by CLASS_NAME
CLASS_NAME = "OVSwitch"
class OVSwitch(base.BaseVendor):
"""Open Vswitch"""
def __init__(self):
self.__name = "Open vSwitch"
def is_this_vendor(self, host, credential):
"""Determine if the hostname is accociated witH this vendor.
:param host: swtich's IP address
:param credential: credential to access switch
"""
if "username" in credential and "password" in credential:
user = credential['username']
pwd = credential['password']
else:
logging.error('either username or password key is not in %s',
credential)
return False
cmd = "ovs-vsctl -V"
result = None
try:
result = utils.ssh_remote_execute(host, user, pwd, cmd)
logging.debug('%s result for %s is %s', cmd, host, result)
if not result:
return False
except Exception as exc:
logging.error("vendor incorrect or connection failed to run %s",
cmd)
logging.exception(exc)
return False
if isinstance(result, str):
result = [result]
for line in result:
if not line:
continue
if re.search(r"\b" + re.escape(self.__name) + r"\b", line):
return True
return False
@property
def name(self):
"""Open Vswitch name"""
return self.__name

@ -0,0 +1,68 @@
"""Open Vswitch Mac address module"""
import logging
from compass.hdsdiscovery import utils
from compass.hdsdiscovery import base
CLASS_NAME = "Mac"
class Mac(base.BasePlugin):
"""Open Vswitch MAC address module"""
def __init__(self, host, credential):
self.host = host
self.credential = credential
def process_data(self, oper="SCAN"):
"""Dynamically call the function according 'oper'
:param oper: operation of data processing
"""
func_name = oper.lower()
return getattr(self, func_name)()
def scan(self):
"""
        Implements the scan method in the BasePlugin class. In this
        module, MAC addresses are retrieved via ssh.
"""
try:
user = self.credential['username']
pwd = self.credential['password']
except KeyError:
logging.error("Cannot find username and password in credential")
return None
cmd = ("BRIDGES=$(ovs-vsctl show |grep Bridge |cut -f 2 -d '\"');"
"for br in $BRIDGES; do"
"PORTS=$(ovs-ofctl show $br |grep addr |cut -f 1 -d ':' "
"|egrep -v 'eth|wlan|LOCAL'|awk -F '(' '{print $1}');"
"for port in $PORTS; do"
"RESULT=$(ovs-appctl fdb/show $br |"
"awk '$1 == '$port' {print $1" "$2" "$3}');"
"echo '$RESULT'"
"done;"
"done;")
output = None
try:
output = utils.ssh_remote_execute(self.host, user, pwd, cmd)
        except Exception:
return None
logging.debug("[scan][output] output is %s", output)
if not output:
return None
fields_arr = ['port', 'vlan', 'mac']
result = []
for line in output:
if not line or line == '\n':
continue
values_arr = line.split()
temp = {}
for field, value in zip(fields_arr, values_arr):
temp[field] = value
result.append(temp.copy())
return result

@ -0,0 +1,374 @@
"""Module to provider installing progress calculation for the adapter.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
import re
from compass.db import database
from compass.db.model import Cluster, ClusterHost
from compass.log_analyzor.line_matcher import Progress
class AdapterItemMatcher(object):
"""Progress matcher for the os installing or package installing."""
def __init__(self, file_matchers):
self.file_matchers_ = file_matchers
self.min_progress_ = 0.0
self.max_progress_ = 1.0
def update_progress_range(self, min_progress, max_progress):
"""update min_progress and max_progress."""
self.min_progress_ = min_progress
self.max_progress_ = max_progress
for file_matcher in self.file_matchers_:
file_matcher.update_absolute_progress_range(
self.min_progress_, self.max_progress_)
def __str__(self):
return '%s[file_matchers: %s, min_progress: %s, max_progress: %s]' % (
self.__class__.__name__, self.file_matchers_,
self.min_progress_, self.max_progress_)
def update_progress(self, hostname, clusterid, progress):
"""Update progress.
:param hostname: the hostname of the installing host.
:type hostname: str
:param clusterid: the cluster id of the installing host.
:type clusterid: int
:param progress: Progress instance to update.
"""
for file_matcher in self.file_matchers_:
file_matcher.update_progress(hostname, clusterid, progress)
class OSMatcher(object):
"""Progress matcher for os installer."""
def __init__(self, os_installer_name, os_pattern,
item_matcher, min_progress, max_progress):
if not (0.0 <= min_progress <= max_progress <= 1.0):
            raise IndexError('%s restriction not met: '
'0.0 <= min_progress(%s) '
'<= max_progress(%s) <= 1.0' % (
self.__class__.__name__,
min_progress, max_progress))
self.name_ = os_installer_name
self.os_regex_ = re.compile(os_pattern)
self.matcher_ = item_matcher
self.matcher_.update_progress_range(min_progress, max_progress)
def __repr__(self):
return '%s[name:%s, os_pattern:%s, matcher:%s]' % (
self.__class__.__name__, self.name_,
self.os_regex_.pattern, self.matcher_)
def match(self, os_installer_name, os_name):
"""Check if the os matcher is acceptable."""
return all([
self.name_ == os_installer_name,
self.os_regex_.match(os_name)])
def update_progress(self, hostname, clusterid, progress):
"""Update progress."""
self.matcher_.update_progress(hostname, clusterid, progress)
class PackageMatcher(object):
"""Progress matcher for package installer."""
def __init__(self, package_installer_name, target_system,
item_matcher, min_progress, max_progress):
if not (0.0 <= min_progress <= max_progress <= 1.0):
            raise IndexError('%s restriction not met: '
'0.0 <= min_progress(%s) '
'<= max_progress(%s) <= 1.0' % (
self.__class__.__name__,
min_progress, max_progress))
self.name_ = package_installer_name
self.target_system_ = target_system
self.matcher_ = item_matcher
self.matcher_.update_progress_range(min_progress, max_progress)
def __repr__(self):
return '%s[name:%s, target_system:%s, matcher:%s]' % (
self.__class__.__name__, self.name_,
self.target_system_, self.matcher_)
def match(self, package_installer_name, target_system):
"""Check if the package matcher is acceptable."""
return all([
self.name_ == package_installer_name,
self.target_system_ == target_system])
def update_progress(self, hostname, clusterid, progress):
"""Update progress."""
self.matcher_.update_progress(hostname, clusterid, progress)
class AdapterMatcher(object):
"""Adapter matcher to update adapter installing progress."""
def __init__(self, os_matcher, package_matcher):
self.os_matcher_ = os_matcher
self.package_matcher_ = package_matcher
def match(self, os_installer_name, os_name,
package_installer_name, target_system):
"""Check if the adapter matcher is acceptable.
:param os_installer_name: the os installer name.
:type os_installer_name: str
:param os_name: the os name.
:type os_name: str
:param package_installer_name: the package installer name.
:type package_installer_name: str
:param target_system: the target system to deploy
:type target_system: str
:returns: bool
.. note::
Return True if the AdapterMatcher can process the log files
generated from the os installation and package installation.
"""
return all([
self.os_matcher_.match(os_installer_name, os_name),
self.package_matcher_.match(
package_installer_name, target_system)])
def __str__(self):
return '%s[os_matcher:%s, package_matcher:%s]' % (
self.__class__.__name__,
self.os_matcher_, self.package_matcher_)
@classmethod
def _get_host_progress(cls, hostid):
"""Get Host Progress from database.
        .. note::
            The function should be called outside of a database session.
        """
with database.session() as session:
host = session.query(
ClusterHost).filter_by(
id=hostid).first()
if not host:
logging.error(
'there is no host for %s in ClusterHost', hostid)
return None, None, None
if not host.state:
logging.error('there is no related HostState for %s',
hostid)
return host.hostname, None, None
return (
host.hostname,
host.state.state,
Progress(host.state.progress,
host.state.message,
host.state.severity))
@classmethod
def _update_host_progress(cls, hostid, progress):
"""Update host progress to database.
.. note::
            The function should be called outside of a database session.
"""
with database.session() as session:
host = session.query(
ClusterHost).filter_by(id=hostid).first()
if not host:
logging.error(
'there is no host for %s in ClusterHost', hostid)
return
if not host.state:
logging.error(
'there is no related HostState for %s', hostid)
return
if host.state.state != 'INSTALLING':
logging.error(
'host %s is not in INSTALLING state',
hostid)
return
if host.state.progress > progress.progress:
logging.error(
'host %s progress is not increased '
'from %s to %s',
hostid, host.state, progress)
return
if (host.state.progress == progress.progress and
host.state.message == progress.message):
logging.info(
'ignore update host %s progress %s to %s',
hostid, progress, host.state)
return
if progress.progress >= 1.0:
host.state.state = 'READY'
host.state.progress = progress.progress
host.state.message = progress.message
if progress.severity:
host.state.severity = progress.severity
if progress.severity == 'ERROR':
host.state.state = 'ERROR'
if host.state.state != 'INSTALLING':
host.mutable = True
logging.debug(
'update host %s state %s',
hostid, host.state)
@classmethod
def _get_cluster_progress(cls, clusterid):
"""Get cluster progress from database.
        .. note::
            The function should be called outside of a database session.
"""
with database.session() as session:
cluster = session.query(Cluster).filter_by(id=clusterid).first()
if not cluster:
logging.error('there is no Cluster for %s', clusterid)
return None, None
if not cluster.state:
logging.error('there is no ClusterState for %s', clusterid)
return None, None
return (
cluster.state.state,
Progress(cluster.state.progress,
cluster.state.message,
cluster.state.severity))
@classmethod
def _update_cluster_progress(cls, clusterid, progress):
"""Update cluster installing progress to database.
.. note::
            The function should be called outside of a database session.
"""
with database.session() as session:
cluster = session.query(
Cluster).filter_by(id=clusterid).first()
if not cluster:
logging.error(
'there is no cluster for %s in Cluster',
clusterid)
return
            if not cluster.state:
                logging.error(
                    'there is no ClusterState for %s',
                    clusterid)
                return
if cluster.state.state != 'INSTALLING':
logging.error('cluster %s is not in INSTALLING state',
clusterid)
return
if progress.progress >= 1.0:
cluster.state.state = 'READY'
cluster.state.progress = progress.progress
cluster.state.message = progress.message
if progress.severity:
cluster.state.severity = progress.severity
if progress.severity == 'ERROR':
cluster.state.state = 'ERROR'
if cluster.state.state != 'INSTALLING':
cluster.mutable = True
logging.debug(
'update cluster %s state %s',
clusterid, cluster.state)
def update_progress(self, clusterid, hostids):
"""Update cluster progress and hosts progresses.
:param clusterid: the cluster id.
:type clusterid: int
:param hostids: the host ids.
:type hostids: list of int
"""
cluster_state, cluster_progress = self._get_cluster_progress(
clusterid)
if not cluster_progress:
logging.error(
'nothing to update cluster %s => state %s progress %s',
clusterid, cluster_state, cluster_progress)
return
logging.debug('got cluster %s state %s progress %s',
clusterid, cluster_state, cluster_progress)
host_progresses = {}
for hostid in hostids:
hostname, host_state, host_progress = self._get_host_progress(
hostid)
if not hostname or not host_progress:
logging.error(
'nothing to update host %s => hostname %s '
'state %s progress %s',
hostid, hostname, host_state, host_progress)
continue
logging.debug('got host %s hostname %s state %s progress %s',
hostid, hostname, host_state, host_progress)
host_progresses[hostid] = (hostname, host_state, host_progress)
for hostid, host_value in host_progresses.items():
hostname, host_state, host_progress = host_value
if host_state == 'INSTALLING' and host_progress.progress < 1.0:
self.os_matcher_.update_progress(
hostname, clusterid, host_progress)
self.package_matcher_.update_progress(
hostname, clusterid, host_progress)
self._update_host_progress(hostid, host_progress)
else:
logging.error(
'there is no need to update host %s '
'progress: hostname %s state %s progress %s',
hostid, hostname, host_state, host_progress)
cluster_progress_data = 0.0
for _, _, host_progress in host_progresses.values():
cluster_progress_data += host_progress.progress
cluster_progress.progress = cluster_progress_data / len(hostids)
messages = []
for _, _, host_progress in host_progresses.values():
if host_progress.message:
messages.append(host_progress.message)
if messages:
cluster_progress.message = '\n'.join(messages)
for severity in ['ERROR', 'WARNING', 'INFO']:
cluster_severity = None
for _, _, host_progress in host_progresses.values():
if host_progress.severity == severity:
cluster_severity = severity
break
if cluster_severity:
cluster_progress.severity = cluster_severity
break
self._update_cluster_progress(clusterid, cluster_progress)
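# A worked example of the aggregation above (hypothetical numbers):
# with three hosts at progress 0.2, 0.5 and 0.8, the cluster progress
# becomes (0.2 + 0.5 + 0.8) / 3 = 0.5; host messages are joined with
# newlines, and the cluster severity is the most severe host severity
# in the order ERROR > WARNING > INFO.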

@ -0,0 +1,333 @@
"""Module to update intalling progress by processing log file.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
import os.path
from compass.db import database
from compass.db.model import LogProgressingHistory
from compass.log_analyzor.line_matcher import Progress
from compass.utils import setting_wrapper as setting
class FileFilter(object):
"""base class to filter log file."""
def __repr__(self):
return self.__class__.__name__
def filter(self, pathname):
"""Filter log file.
:param pathname: the absolute path name to the log file.
"""
raise NotImplementedError(str(self))
class CompositeFileFilter(FileFilter):
"""filter log file based on the list of filters"""
def __init__(self, filters):
self.filters_ = filters
def __str__(self):
return 'CompositeFileFilter[%s]' % self.filters_
def append_filter(self, file_filter):
"""append filter."""
self.filters_.append(file_filter)
def filter(self, pathname):
"""filter log file."""
for file_filter in self.filters_:
if not file_filter.filter(pathname):
return False
return True
class FilterFileExist(FileFilter):
"""filter log file if not exists."""
def filter(self, pathname):
"""filter log file."""
file_exist = os.path.isfile(pathname)
if not file_exist:
logging.error("%s is not exist", pathname)
return file_exist
def get_file_filter():
"""get file filter"""
composite_filter = CompositeFileFilter([FilterFileExist()])
return composite_filter
class FileReader(object):
"""Class to read log file.
    The class provides support to read a log file from the position
    it reached last time, and to update the position when it finishes
    reading the log.
"""
def __init__(self, pathname):
self.pathname_ = pathname
self.position_ = 0
self.partial_line_ = ''
def __repr__(self):
return (
'%s[pathname:%s, position:%s, partial_line:%s]' % (
self.__class__.__name__, self.pathname_, self.position_,
self.partial_line_
)
)
def get_history(self):
"""Get log file read history from database.
        :returns: (line_matcher_name, progress)
.. note::
            The function should be called outside of a database session.
            It reads the log_progressing_history table to get the
            position in the log file it reached in the last run,
            the partial line of the log, the line matcher name,
            the progress, the message and the severity recorded
            in the last run.
"""
with database.session() as session:
history = session.query(
LogProgressingHistory).filter_by(
pathname=self.pathname_).first()
if history:
self.position_ = history.position
self.partial_line_ = history.partial_line
line_matcher_name = history.line_matcher_name
progress = Progress(history.progress,
history.message,
history.severity)
else:
line_matcher_name = 'start'
progress = Progress(0.0, '', None)
return line_matcher_name, progress
def update_history(self, line_matcher_name, progress):
"""Update log_progressing_history table.
:param line_matcher_name: the line matcher name.
:param progress: Progress instance to record the installing progress.
.. note::
            The function should be called outside of a database session.
            It updates the log_progressing_history table.
"""
with database.session() as session:
history = session.query(LogProgressingHistory).filter_by(
pathname=self.pathname_).first()
if history:
if history.position >= self.position_:
logging.error(
                    '%s history position %s is ahead of current '
'position %s',
self.pathname_,
history.position,
self.position_)
return
history.position = self.position_
history.partial_line = self.partial_line_
history.line_matcher_name = line_matcher_name
history.progress = progress.progress
history.message = progress.message
history.severity = progress.severity
else:
history = LogProgressingHistory(
pathname=self.pathname_, position=self.position_,
partial_line=self.partial_line_,
line_matcher_name=line_matcher_name,
progress=progress.progress,
message=progress.message,
severity=progress.severity)
session.merge(history)
logging.debug('update file %s to history %s',
self.pathname_, history)
def readline(self):
"""Generate each line of the log file."""
old_position = self.position_
try:
with open(self.pathname_) as logfile:
logfile.seek(self.position_)
while True:
line = logfile.readline()
self.partial_line_ += line
position = logfile.tell()
if position > self.position_:
self.position_ = position
if self.partial_line_.endswith('\n'):
yield_line = self.partial_line_
self.partial_line_ = ''
yield yield_line
else:
break
if self.partial_line_:
yield self.partial_line_
except Exception as error:
            logging.error('failed to process file %s', self.pathname_)
raise error
logging.debug(
'processing file %s log %s bytes to position %s',
self.pathname_, self.position_ - old_position,
self.position_)
class FileReaderFactory(object):
"""factory class to create FileReader instance."""
def __init__(self, logdir, filefilter):
self.logdir_ = logdir
self.filefilter_ = filefilter
def __str__(self):
return '%s[logdir: %s filefilter: %s]' % (
self.__class__.__name__, self.logdir_, self.filefilter_)
def get_file_reader(self, hostname, clusterid, filename):
"""Get FileReader instance.
:param hostname: hostname of installing host.
:param clusterid: cluster id of the installing host.
:param filename: the filename of the log file.
:returns: :class:`FileReader` instance if it is not filtered.
"""
pathname = os.path.join(
self.logdir_, '%s.%s' % (hostname, clusterid),
filename)
logging.debug('get FileReader from %s', pathname)
if not self.filefilter_.filter(pathname):
logging.error('%s is filtered', pathname)
return None
return FileReader(pathname)
FILE_READER_FACTORY = FileReaderFactory(
setting.INSTALLATION_LOGDIR, get_file_filter())
class FileMatcher(object):
"""
    File matcher to get the latest installing progress
    from the log file.
"""
def __init__(self, line_matchers, min_progress, max_progress, filename):
if not 0.0 <= min_progress <= max_progress <= 1.0:
raise IndexError(
                '%s restriction is not met: 0.0 <= min_progress'
'(%s) <= max_progress(%s) <= 1.0' % (
self.__class__.__name__,
min_progress,
max_progress))
self.line_matchers_ = line_matchers
self.min_progress_ = min_progress
self.max_progress_ = max_progress
self.absolute_min_progress_ = 0.0
self.absolute_max_progress_ = 1.0
self.absolute_progress_diff_ = 1.0
self.filename_ = filename
def update_absolute_progress_range(self, min_progress, max_progress):
"""update the min progress and max progress the log file indicates."""
progress_diff = max_progress - min_progress
self.absolute_min_progress_ = (
min_progress + self.min_progress_ * progress_diff)
self.absolute_max_progress_ = (
min_progress + self.max_progress_ * progress_diff)
self.absolute_progress_diff_ = (
self.absolute_max_progress_ - self.absolute_min_progress_)
def __str__(self):
return (
'%s[ filename: %s, progress range: [%s:%s], '
'line_matchers: %s]' % (
self.__class__.__name__, self.filename_,
self.absolute_min_progress_,
self.absolute_max_progress_, self.line_matchers_)
)
def update_total_progress(self, file_progress, total_progress):
"""Get the total progress from file progress."""
if not file_progress.message:
logging.info(
'ignore update file %s progress %s to total progress',
self.filename_, file_progress)
return
total_progress_data = min(
self.absolute_min_progress_
+
file_progress.progress * self.absolute_progress_diff_,
self.absolute_max_progress_)
# total progress should only be updated when the new calculated
        # progress is greater than the recorded total progress or the
# progress to update is the same but the message is different.
if (total_progress.progress < total_progress_data or
(total_progress.progress == total_progress_data and
total_progress.message != file_progress.message)):
total_progress.progress = total_progress_data
total_progress.message = file_progress.message
total_progress.severity = file_progress.severity
logging.debug('update file %s total progress %s',
self.filename_, total_progress)
else:
logging.info(
'ignore update file %s progress %s to total progress %s',
self.filename_, file_progress, total_progress)
def update_progress(self, hostname, clusterid, total_progress):
"""update progress from file.
:param hostname: the hostname of the installing host.
:type hostname: str
:param clusterid: the cluster id of the installing host.
:type clusterid: int
:param total_progress: Progress instance to update.
        The function updates installing progress by reading the log file.
        It contains a list of line matchers; when a log line matches
        the current line matcher, the installing progress is updated
        and the current line matcher advances.
        Note: some lines may be processed multiple times. This happens
        when the last line of the log file is processed in one run and
        reprocessed at the beginning of the next run, because there is
        no line-end indicator for the last line of the file.
"""
file_reader = FILE_READER_FACTORY.get_file_reader(
hostname, clusterid, self.filename_)
if not file_reader:
return
line_matcher_name, file_progress = file_reader.get_history()
for line in file_reader.readline():
if line_matcher_name not in self.line_matchers_:
logging.debug('early exit at\n%s\nbecause %s is not in %s',
line, line_matcher_name, self.line_matchers_)
break
index = line_matcher_name
while index in self.line_matchers_:
line_matcher = self.line_matchers_[index]
index, line_matcher_name = line_matcher.update_progress(
line, file_progress)
file_reader.update_history(line_matcher_name, file_progress)
self.update_total_progress(file_progress, total_progress)

@ -0,0 +1,209 @@
"""Module to get the progress when found match with a line of the log."""
import logging
import re
from compass.utils import util
class Progress(object):
"""Progress object to store installing progress and message."""
def __init__(self, progress, message, severity):
"""Constructor
:param progress: installing progress between 0 to 1.
:param message: installing message.
:param severity: installing message severity.
"""
self.progress = progress
self.message = message
self.severity = severity
def __repr__(self):
return '%s[progress:%s, message:%s, severity:%s]' % (
self.__class__.__name__,
self.progress,
self.message,
self.severity)
class ProgressCalculator(object):
"""base class to generate progress."""
def __init__(self):
raise NotImplementedError(str(self))
@classmethod
def update_progress(cls, progress_data, message,
severity, progress):
"""
Update progress with the given progress_data,
message and severity.
:param progress_data: installing progress.
:type progress_data: float between 0 to 1.
:param message: installing progress message.
:param severity: installing message severity.
:param progress: :class:`Progress` instance to update
"""
# the progress is only updated when the new progress
# is greater than the stored progress or the progress
# to update is the same but the message is different.
if (progress_data > progress.progress or
(progress_data == progress.progress and
message != progress.message)):
progress.progress = progress_data
if message:
progress.message = message
if severity:
progress.severity = severity
logging.debug('update progress to %s', progress)
else:
logging.info('ignore update progress %s to %s',
progress_data, progress)
def update(self, message, severity, progress):
"""vritual method to update progress by message and severity.
:param message: installing message.
:param severity: installing severity.
"""
raise NotImplementedError(str(self))
def __repr__(self):
return self.__class__.__name__
class IncrementalProgress(ProgressCalculator):
"""Class to increment the progress."""
def __init__(self, min_progress,
max_progress, incremental_ratio):
if not 0.0 <= min_progress <= max_progress <= 1.0:
raise IndexError(
                '%s restriction is not met: 0.0 <= min_progress(%s)'
' <= max_progress(%s) <= 1.0' % (
self.__class__.__name__, min_progress, max_progress))
if not 0.0 <= incremental_ratio <= 1.0:
raise IndexError(
                '%s restriction is not met: '
'0.0 <= incremental_ratio(%s) <= 1.0' % (
self.__class__.__name__, incremental_ratio))
self.min_progress_ = min_progress
self.max_progress_ = max_progress
self.incremental_progress_ = (
incremental_ratio * (max_progress - min_progress))
def __str__(self):
return '%s[%s:%s:%s]' % (
self.__class__.__name__,
self.min_progress_,
self.max_progress_,
self.incremental_progress_
)
def update(self, message, severity, progress):
"""update progress from message and severity."""
progress_data = max(
self.min_progress_,
min(
self.max_progress_,
progress.progress + self.incremental_progress_
)
)
self.update_progress(progress_data,
message, severity, progress)
class RelativeProgress(ProgressCalculator):
"""class to update progress to the given relative progress."""
def __init__(self, progress):
if not 0.0 <= progress <= 1.0:
raise IndexError(
                '%s restriction is not met: 0.0 <= progress(%s) <= 1.0' % (
self.__class__.__name__, progress))
self.progress_ = progress
def __str__(self):
return '%s[%s]' % (self.__class__.__name__, self.progress_)
def update(self, message, severity, progress):
"""update progress from message and severity."""
self.update_progress(
self.progress_, message, severity, progress)
class SameProgress(ProgressCalculator):
"""class to update message and severity for progress."""
def update(self, message, severity, progress):
"""update progress from the message and severity."""
self.update_progress(progress.progress, message,
severity, progress)
class LineMatcher(object):
"""Progress matcher for each line."""
def __init__(self, pattern, progress=None,
message_template='', severity=None,
unmatch_sameline_next_matcher_name='',
unmatch_nextline_next_matcher_name='',
match_sameline_next_matcher_name='',
match_nextline_next_matcher_name=''):
self.regex_ = re.compile(pattern)
if not progress:
self.progress_ = SameProgress()
elif isinstance(progress, ProgressCalculator):
self.progress_ = progress
elif util.is_instance(progress, [int, float]):
self.progress_ = RelativeProgress(progress)
else:
raise TypeError(
                'unsupported progress type %s: %s' % (
type(progress), progress))
self.message_template_ = message_template
self.severity_ = severity
self.unmatch_sameline_ = unmatch_sameline_next_matcher_name
self.unmatch_nextline_ = unmatch_nextline_next_matcher_name
self.match_sameline_ = match_sameline_next_matcher_name
self.match_nextline_ = match_nextline_next_matcher_name
def __str__(self):
return '%s[pattern:%r, message_template:%r, severity:%r]' % (
self.__class__.__name__, self.regex_.pattern,
self.message_template_, self.severity_)
def update_progress(self, line, progress):
"""Update progress by the line.
:param line: one line in log file to indicate the installing progress.
.. note::
The line may be partial if the latest line of the log file is
not the whole line. But the whole line may be resent
in the next run.
        :param progress: the :class:`Progress` instance to update.
"""
mat = self.regex_.search(line)
if not mat:
return (
self.unmatch_sameline_,
self.unmatch_nextline_)
try:
message = self.message_template_ % mat.groupdict()
except Exception as error:
logging.error('failed to get message %s %% %s in line matcher %s',
self.message_template_, mat.groupdict(), self)
raise error
self.progress_.update(message, self.severity_, progress)
return (
self.match_sameline_,
self.match_nextline_)
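# A minimal usage sketch (hypothetical pattern and log line): feed a
# line to a matcher and follow the returned matcher names to pick the
# matcher for the same line and for the next line.
#
#     matcher = LineMatcher(
#         pattern=r'Installing (?P<package>.*)',
#         progress=IncrementalProgress(0.0, 1.0, 0.1),
#         message_template='Installing %(package)s',
#         match_nextline_next_matcher_name='start')
#     progress = Progress(0.0, '', None)
#     same_line, next_line = matcher.update_progress(
#         'Installing openssh\n', progress)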

@ -0,0 +1,278 @@
"""module to provide updating installing process function.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
import logging
from compass.log_analyzor.line_matcher import LineMatcher, IncrementalProgress
from compass.log_analyzor.file_matcher import FileMatcher
from compass.log_analyzor.adapter_matcher import AdapterMatcher
from compass.log_analyzor.adapter_matcher import AdapterItemMatcher
from compass.log_analyzor.adapter_matcher import OSMatcher
from compass.log_analyzor.adapter_matcher import PackageMatcher
# TODO(weidong): reconsider initialization method for the following.
OS_INSTALLER_CONFIGURATIONS = {
'CentOS': AdapterItemMatcher(
file_matchers=[
FileMatcher(
filename='sys.log',
min_progress=0.0,
max_progress=0.1,
line_matchers={
'start': LineMatcher(
pattern=r'NOTICE (?P<message>.*)',
progress=IncrementalProgress(.1, .9, .1),
message_template='%(message)s',
unmatch_nextline_next_matcher_name='start',
match_nextline_next_matcher_name='exit'
),
}
),
FileMatcher(
filename='anaconda.log',
min_progress=0.1,
max_progress=1.0,
line_matchers={
'start': LineMatcher(
pattern=r'setting.*up.*kickstart',
progress=.1,
message_template=(
'Setting up kickstart configurations'),
unmatch_nextline_next_matcher_name='start',
match_nextline_next_matcher_name='STEP_STAGE2'
),
'STEP_STAGE2': LineMatcher(
pattern=r'starting.*STEP_STAGE2',
progress=.15,
message_template=(
'Downloading installation '
'images from server'),
unmatch_nextline_next_matcher_name='STEP_STAGE2',
match_nextline_next_matcher_name='start_anaconda'
),
'start_anaconda': LineMatcher(
pattern=r'Running.*anaconda.*script',
progress=.2,
unmatch_nextline_next_matcher_name=(
'start_anaconda'),
match_nextline_next_matcher_name=(
'start_kickstart_pre')
),
'start_kickstart_pre': LineMatcher(
pattern=r'Running.*kickstart.*pre.*script',
progress=.25,
unmatch_nextline_next_matcher_name=(
'start_kickstart_pre'),
match_nextline_next_matcher_name=(
'kickstart_pre_done')
),
'kickstart_pre_done': LineMatcher(
pattern=(
r'All.*kickstart.*pre.*script.*have.*been.*run'),
progress=.3,
unmatch_nextline_next_matcher_name=(
'kickstart_pre_done'),
match_nextline_next_matcher_name=(
'start_enablefilesystem')
),
'start_enablefilesystem': LineMatcher(
pattern=r'moving.*step.*enablefilesystems',
progress=0.3,
message_template=(
'Performing hard-disk partitioning and '
'enabling filesystems'),
unmatch_nextline_next_matcher_name=(
'start_enablefilesystem'),
match_nextline_next_matcher_name=(
'enablefilesystem_done')
),
'enablefilesystem_done': LineMatcher(
pattern=r'leaving.*step.*enablefilesystems',
progress=.35,
message_template='Filesystems are enabled',
unmatch_nextline_next_matcher_name=(
'enablefilesystem_done'),
match_nextline_next_matcher_name=(
'setup_repositories')
),
'setup_repositories': LineMatcher(
pattern=r'moving.*step.*reposetup',
progress=0.35,
message_template=(
'Setting up Customized Repositories'),
unmatch_nextline_next_matcher_name=(
'setup_repositories'),
match_nextline_next_matcher_name=(
'repositories_ready')
),
'repositories_ready': LineMatcher(
pattern=r'leaving.*step.*reposetup',
progress=0.4,
message_template=(
'Customized Repositories setting up are done'),
unmatch_nextline_next_matcher_name=(
'repositories_ready'),
match_nextline_next_matcher_name='checking_dud'
),
'checking_dud': LineMatcher(
pattern=r'moving.*step.*postselection',
progress=0.4,
message_template='Checking DUD modules',
unmatch_nextline_next_matcher_name='checking_dud',
match_nextline_next_matcher_name='dud_checked'
),
'dud_checked': LineMatcher(
pattern=r'leaving.*step.*postselection',
progress=0.5,
message_template='Checking DUD modules are done',
unmatch_nextline_next_matcher_name='dud_checked',
match_nextline_next_matcher_name='installing_packages'
),
'installing_packages': LineMatcher(
pattern=r'moving.*step.*installpackages',
progress=0.5,
message_template='Installing packages',
unmatch_nextline_next_matcher_name=(
'installing_packages'),
match_nextline_next_matcher_name=(
'packages_installed')
),
'packages_installed': LineMatcher(
pattern=r'leaving.*step.*installpackages',
progress=0.8,
message_template='Packages are installed',
unmatch_nextline_next_matcher_name=(
'packages_installed'),
match_nextline_next_matcher_name=(
'installing_bootloader')
),
'installing_bootloader': LineMatcher(
pattern=r'moving.*step.*instbootloader',
progress=0.9,
message_template='Installing bootloaders',
unmatch_nextline_next_matcher_name=(
'installing_bootloader'),
match_nextline_next_matcher_name=(
'bootloader_installed'),
),
'bootloader_installed': LineMatcher(
pattern=r'leaving.*step.*instbootloader',
progress=1.0,
message_template='bootloaders is installed',
unmatch_nextline_next_matcher_name=(
'bootloader_installed'),
match_nextline_next_matcher_name='exit'
),
}
),
FileMatcher(
filename='install.log',
min_progress=0.56,
max_progress=0.80,
line_matchers={
'start': LineMatcher(
pattern=r'Installing (?P<package>.*)',
progress=IncrementalProgress(0.0, 0.99, 0.005),
message_template='Installing %(package)s',
unmatch_sameline_next_matcher_name='package_complete',
unmatch_nextline_next_matcher_name='start',
match_nextline_next_matcher_name='start'
),
'package_complete': LineMatcher(
pattern='FINISHED.*INSTALLING.*PACKAGES',
progress=1.0,
message_template='installing packages finished',
unmatch_nextline_next_matcher_name='start',
match_nextline_next_matcher_name='exit'
),
}
),
]
),
}
PACKAGE_INSTALLER_CONFIGURATIONS = {
'openstack': AdapterItemMatcher(
file_matchers=[
FileMatcher(
filename='chef-client.log',
min_progress=0.1,
max_progress=1.0,
line_matchers={
'start': LineMatcher(
pattern=(
r'Processing\s*(?P<install_type>.*)'
r'\[(?P<package>.*)\].*'),
progress=IncrementalProgress(0.0, .90, 0.005),
message_template=(
'Processing %(install_type)s %(package)s'),
unmatch_sameline_next_matcher_name=(
'chef_complete'),
unmatch_nextline_next_matcher_name='start',
match_nextline_next_matcher_name='start'
),
'chef_complete': LineMatcher(
pattern=r'Chef.*Run.*complete',
progress=1.0,
message_template='Chef run complete',
unmatch_nextline_next_matcher_name='start',
match_nextline_next_matcher_name='exit'
),
}
),
]
),
}
ADAPTER_CONFIGURATIONS = [
AdapterMatcher(
os_matcher=OSMatcher(
os_installer_name='cobbler',
os_pattern='CentOS.*',
item_matcher=OS_INSTALLER_CONFIGURATIONS['CentOS'],
min_progress=0.0,
max_progress=0.6),
package_matcher=PackageMatcher(
package_installer_name='chef',
target_system='openstack',
item_matcher=PACKAGE_INSTALLER_CONFIGURATIONS['openstack'],
min_progress=0.6,
max_progress=1.0)
)
]
def _get_adapter_matcher(os_installer, os_name,
package_installer, target_system):
"""Get adapter matcher by os name and package installer name."""
for configuration in ADAPTER_CONFIGURATIONS:
if configuration.match(os_installer, os_name,
package_installer, target_system):
return configuration
logging.error('No configuration found with os installer %s os %s '
'package_installer %s, target_system %s',
os_installer, os_name, package_installer, target_system)
return None
def update_progress(os_installer, os_name, package_installer, target_system,
clusterid, hostids):
"""Update adapter installing progress.
:param os_installer: os installer name
:param os_name: os name.
:param package_installer: package installer name.
:param clusterid: cluster id.
:param hostids: hosts ids.
"""
adapter = _get_adapter_matcher(os_installer, os_name,
package_installer, target_system)
if not adapter:
return
adapter.update_progress(clusterid, hostids)
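# A minimal usage sketch (hypothetical ids; the installer and system
# names must match an entry in ADAPTER_CONFIGURATIONS above):
#
#     update_progress('cobbler', 'CentOS-6.4', 'chef', 'openstack',
#                     clusterid=1, hostids=[1, 2])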

19
compass/tasks/client.py Normal file

@ -0,0 +1,19 @@
"""Module to setup celery client.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
.. note::
If CELERY_CONFIG_MODULE is set in environment, load celery config from
the filename declared in CELERY_CONFIG_MODULE.
"""
import os
from celery import Celery
celery = Celery(__name__)
if 'CELERY_CONFIG_MODULE' in os.environ:
celery.config_from_envvar('CELERY_CONFIG_MODULE')
else:
from compass.utils import celeryconfig_wrapper as celeryconfig
celery.config_from_object(celeryconfig)

60
compass/tasks/tasks.py Normal file

@ -0,0 +1,60 @@
"""Module to define celery tasks.
.. moduleauthor:: Xiaodong Wang <xiaodongwang@huawei.com>
"""
from celery.signals import setup_logging
from compass.actions import poll_switch
from compass.actions import trigger_install
from compass.actions import progress_update
from compass.db import database
from compass.tasks.client import celery
from compass.utils import flags
from compass.utils import logsetting
from compass.utils import setting_wrapper as setting
def tasks_setup_logging(**_):
"""Setup logging options from compass setting."""
flags.init()
flags.OPTIONS.logfile = setting.CELERY_LOGFILE
logsetting.init()
setup_logging.connect(tasks_setup_logging)
@celery.task(name="compass.tasks.pollswitch")
def pollswitch(ip_addr, req_obj='mac', oper="SCAN"):
"""Query switch and return expected result.
:param ip_addr: switch ip address.
:type ip_addr: str
    :param req_obj: the object requested to query from switch.
    :type req_obj: str
:param oper: the operation to query the switch (SCAN, GET, SET).
:type oper: str
"""
with database.session():
        poll_switch.poll_switch(ip_addr, req_obj=req_obj, oper=oper)
@celery.task(name="compass.tasks.trigger_install")
def triggerinstall(clusterid):
"""Deploy the given cluster.
:param clusterid: the id of the cluster to deploy.
:type clusterid: int
"""
with database.session():
trigger_install.trigger_install(clusterid)
@celery.task(name="compass.tasks.progress_update")
def progressupdate(clusterid):
"""Calculate the installing progress of the given cluster.
    :param clusterid: the id of the cluster to get the installing progress.
:type clusterid: int
"""
progress_update.update_progress(clusterid)
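# A minimal dispatch sketch (assuming a running broker and worker; the
# task names match the @celery.task registrations above):
#
#     from celery import current_app
#     current_app.send_task('compass.tasks.pollswitch', ('10.1.1.1',))
#     current_app.send_task('compass.tasks.trigger_install', (1,))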

@ -0,0 +1,754 @@
import logging
import simplejson as json
from copy import deepcopy
from celery import current_app
from mock import Mock
import unittest2
from compass.api import app
from compass.db import database
from compass.db.model import Switch
from compass.db.model import Machine
from compass.db.model import Cluster
from compass.db.model import ClusterHost
from compass.db.model import HostState
from compass.db.model import Adapter
from compass.db.model import Role
class ApiTestCase(unittest2.TestCase):
CLUSTER_NAME = "Test1"
SWITCH_IP_ADDRESS1 = '10.10.10.1'
SWITCH_CREDENTIAL = {'version': 'xxx',
'community': 'xxx'}
DATABASE_URL = 'sqlite://'
def setUp(self):
super(ApiTestCase, self).setUp()
database.init(self.DATABASE_URL)
database.create_db()
self.app = app.test_client()
# We do not want to send a real task as our test environment
        # does not have an AMQP system set up. TODO(): any better way?
current_app.send_task = Mock()
def tearDown(self):
database.drop_db()
super(ApiTestCase, self).tearDown()
class TestSwitchMachineAPI(ApiTestCase):
SWITCH_RESP_TPL = {"state": "not_reached",
"ip": "",
"link": {"href": "",
"rel": "self"},
"id": ""}
def setUp(self):
        super(TestSwitchMachineAPI, self).setUp()
# Create one switch in database
with database.session() as session:
test_switch = Switch(ip=self.SWITCH_IP_ADDRESS1)
test_switch.credential = self.SWITCH_CREDENTIAL
session.add(test_switch)
def tearDown(self):
        super(TestSwitchMachineAPI, self).tearDown()
def test_get_switchList(self):
# Prepare testing data
with database.session() as session:
switches = [Switch(ip='192.168.1.1',
credential=self.SWITCH_CREDENTIAL),
Switch(ip='192.168.1.2',
credential=self.SWITCH_CREDENTIAL),
Switch(ip='192.1.192.1',
credential=self.SWITCH_CREDENTIAL),
Switch(ip='192.1.192.2',
credential=self.SWITCH_CREDENTIAL),
Switch(ip='192.1.195.3',
credential=self.SWITCH_CREDENTIAL),
Switch(ip='192.2.192.4',
credential=self.SWITCH_CREDENTIAL)]
session.add_all(switches)
# Start to query switches
# a. query multiple switches with ip
# b. query switches with only switchIpNetwork
# c. query only with limit
        # d. query switches with switchIpNetwork and limit number
# e. query switches with all conditions
        # f. Invalid switch ip format
# g. Invalid switch ip network format
testList = [{'url': ('/switches?switchIp=192.168.1.1'
'&switchIp=192.168.1.2'),
'expected_code': 200, 'expected_count': 2},
{'url': '/switches?switchIpNetwork=192.1.192.0/22',
'expected_code': 200, 'expected_count': 3},
{'url': '/switches?limit=3', 'expected_code': 200,
'expected_count': 3},
{'url': '/switches?limit=-1', 'expected_code': 400},
{'url': ('/switches?switchIpNetwork=192.1.192.0/22'
'&limit=1'),
'expected_code': 200, 'expected_count': 1},
{'url': ('/switches?switchIp=192.168.1.1'
'&switchIpNetwork=192.1.192.0/22&limit=3'),
'expected_code': 400},
{'url': '/switches?switchIp=192.168.1.xx',
'expected_code': 400},
{'url': '/switches?switchIpNetwork=192.168.1.x',
'expected_code': 400}]
for test in testList:
url = test['url']
rv = self.app.get(url)
data = json.loads(rv.get_data())
expected_code = test['expected_code']
self.assertEqual(rv.status_code, expected_code)
if 'expected_count' in test:
expected_count = test['expected_count']
switch_count = len(data['switches'])
self.assertEqual(switch_count, expected_count)
def test_post_switchList(self):
# Test SwitchList POST method
url = '/switches'
# a. post a new switch
data = {'switch': {
'ip': '10.10.10.2',
'credential': self.SWITCH_CREDENTIAL}}
rv = self.app.post(url, data=json.dumps(data))
self.assertEqual(rv.status_code, 202)
with database.session() as session:
switch = session.query(Switch).filter_by(ip='10.10.10.2').first()
self.assertEqual(switch.ip, '10.10.10.2')
# b. Post Conflict switch Ip
rv = self.app.post(url, data=json.dumps(data))
self.assertEqual(rv.status_code, 409)
data = json.loads(rv.get_data())
self.assertEqual("IP address '10.10.10.2' already exists",
data['message'])
self.assertEqual(2, data['failedSwitch'])
# c. Invalid Ip format
data = {'switch': {
'ip': '192.543.1.1',
'credential': self.SWITCH_CREDENTIAL}}
rv = self.app.post(url, data=json.dumps(data))
self.assertEqual(rv.status_code, 400)
def test_get_switch_by_id(self):
# Test Get /switches/{id}
# Non-exist switch id
url = '/switches/1000'
rv = self.app.get(url)
logging.info('[test_get_switch_by_id] url %s', url)
self.assertEqual(rv.status_code, 404)
correct_url = '/switches/1'
rv = self.app.get(correct_url)
data = json.loads(rv.get_data())
expected_switch_resp = deepcopy(self.SWITCH_RESP_TPL)
expected_switch_resp['link']['href'] = correct_url
expected_switch_resp['id'] = 1
expected_switch_resp['ip'] = "10.10.10.1"
self.assertEqual(rv.status_code, 200)
self.assertEqual(data["status"], "OK")
self.assertDictEqual(data["switch"], expected_switch_resp)
def test_put_switch_by_id(self):
# Test put a switch by id
url = '/switches/1000'
# Put a non-existing switch
data = {'switch': {'credential': self.SWITCH_CREDENTIAL}}
rv = self.app.put(url, data=json.dumps(data))
self.assertEqual(rv.status_code, 404)
# Put successfully
url = '/switches/1'
credential = deepcopy(self.SWITCH_CREDENTIAL)
credential['version'] = '1v'
data = {'switch': {'credential': credential}}
rv = self.app.put(url, data=json.dumps(data))
self.assertEqual(rv.status_code, 202)
def test_delete_switch(self):
url = '/switches/1'
rv = self.app.delete(url)
self.assertEqual(rv.status_code, 405)
def test_get_machine_by_id(self):
# Test get a machine by id
# Prepare testing data
with database.session() as session:
machine = Machine(mac='00:27:88:0c:a6', port='1', vlan='1',
switch_id=1)
session.add(machine)
# machine id exists in Machine table
url = '/machines/1'
rv = self.app.get(url)
self.assertEqual(rv.status_code, 200)
# machine id doesn't exist
url = '/machines/1000'
rv = self.app.get(url)
self.assertEqual(rv.status_code, 404)
def test_get_machineList(self):
# Prepare testing data
with database.session() as session:
machines = [Machine(mac='00:27:88:0c:01', port='1', vlan='1',
switch_id=1),
Machine(mac='00:27:88:0c:02', port='2', vlan='1',
switch_id=1),
Machine(mac='00:27:88:0c:03', port='3', vlan='1',
switch_id=1),
Machine(mac='00:27:88:0c:04', port='3', vlan='1',
switch_id=2),
Machine(mac='00:27:88:0c:05', port='4', vlan='2',
switch_id=2),
Machine(mac='00:27:88:0c:06', port='5', vlan='3',
switch_id=3)]
session.add_all(machines)
testList = [{'url': '/machines', 'expected': 6},
{'url': '/machines?limit=3', 'expected': 3},
{'url': '/machines?limit=50', 'expected': 6},
{'url': '/machines?switchId=1&vlanId=1&port=2',
'expected': 1},
{'url': '/machines?switchId=1&vlanId=1&limit=2',
'expected': 2},
{'url': '/machines?switchId=4', 'expected': 0}]
for test in testList:
url = test['url']
expected = test['expected']
rv = self.app.get(url)
data = json.loads(rv.get_data())
count = len(data['machines'])
self.assertEqual(rv.status_code, 200)
self.assertEqual(count, expected)
class TestClusterAPI(ApiTestCase):
SECURITY_CONFIG = {
'server_credentials': {
'username': 'root',
'password': 'huawei123'},
'service_credentials': {
'username': 'admin',
'password': 'huawei123'},
'console_credentials': {
'username': 'admin',
'password': 'huawei123'}}
NETWORKING_CONFIG = {
"interfaces": {
"management": {
"ip_start": "192.168.1.100",
"ip_end": "192.168.1.200",
"netmask": "255.255.255.0",
"gateway": "192.168.1.1",
"vlan": "",
"nic": "eth0",
"promisc": 1},
"tenant": {
"ip_start": "192.168.1.100",
"ip_end": "192.168.1.200",
"netmask": "255.255.255.0",
"nic": "eth1",
"promisc": 0},
"public": {
"ip_start": "192.168.1.100",
"ip_end": "192.168.1.200",
"netmask": "255.255.255.0",
"nic": "eth3",
"promisc": 1},
"storage": {
"ip_start": "192.168.1.100",
"ip_end": "192.168.1.200",
"netmask": "255.255.255.0",
"nic": "eth3",
"promisc": 1}},
"global": {
"gateway": "192.168.1.1",
"proxy": "",
"ntp_sever": "",
"nameservers": "8.8.8.8",
"search_path": "ods.com,ods1.com"}}
def setUp(self):
super(TestClusterAPI, self).setUp()
# Prepare testing data
with database.session() as session:
cluster = Cluster(name='cluster_01')
session.add(cluster)
session.flush()
def tearDown(self):
super(TestClusterAPI, self).tearDown()
def test_get_cluster_by_id(self):
# a. Get an existing cluster
# b. Get a non-existing cluster, return 404
testList = [{'url': '/clusters/1', 'expected_code': 200,
'expected': {'clusterName': 'cluster_01',
'href': '/clusters/1'}},
{'url': '/clusters/1000', 'expected_code': 404}]
for test in testList:
url = test['url']
rv = self.app.get(url)
data = json.loads(rv.get_data())
self.assertEqual(rv.status_code, test['expected_code'])
if 'expected' in test:
expected_name = test['expected']['clusterName']
expected_href = test['expected']['href']
self.assertEqual(data['cluster']['clusterName'], expected_name)
self.assertEqual(data['cluster']['link']['href'],
expected_href)
# Create a cluster
def test_post_cluster(self):
# a. Post a new cluster
cluster_req = {'cluster': {'name': 'cluster_02',
'adapter_id': 1}}
url = '/clusters'
rv = self.app.post(url, data=json.dumps(cluster_req))
data = json.loads(rv.get_data())
self.assertEqual(rv.status_code, 200)
self.assertEqual(data['cluster']['id'], 2)
self.assertEqual(data['cluster']['name'], 'cluster_02')
# b. Post an existing cluster, return 409
rv = self.app.post(url, data=json.dumps(cluster_req))
self.assertEqual(rv.status_code, 409)
# c. Post a new cluster without providing a name
cluster_req['cluster']['name'] = ''
rv = self.app.post(url, data=json.dumps(cluster_req))
data = json.loads(rv.get_data())
self.assertEqual(data['cluster']['id'], 3)
def test_get_clusters(self):
# Insert more clusters in db
with database.session() as session:
clusters_list = [
Cluster(name="cluster_02"),
Cluster(name="cluster_03"),
Cluster(name="cluster_04")]
session.add_all(clusters_list)
session.flush()
url = "/clusters"
rv = self.app.get(url)
data = json.loads(rv.get_data())
self.assertEqual(len(data['clusters']), 4)
def test_put_cluster_security_resource(self):
# Prepare testing data
security = {'security': self.SECURITY_CONFIG}
# a. Update cluster's security config
url = '/clusters/1/security'
rv = self.app.put(url, data=json.dumps(security))
self.assertEqual(rv.status_code, 200)
# b. Update a non-existing cluster's resource
url = '/clusters/1000/security'
rv = self.app.put(url, data=json.dumps(security))
self.assertEqual(rv.status_code, 404)
# c. Update invalid cluster config item
url = '/clusters/1/xxx'
rv = self.app.put(url, data=json.dumps(security))
self.assertEqual(rv.status_code, 400)
# d. Security config is invalid -- some required field is null
url = '/clusters/1/security'
security['security']['server_credentials']['username'] = None
rv = self.app.put(url, data=json.dumps(security))
self.assertEqual(rv.status_code, 400)
# e. Security config is invalid -- keyword is incorrect
security['security']['xxxx'] = {'xxx': 'xxx'}
rv = self.app.put(url, data=json.dumps(security))
self.assertEqual(rv.status_code, 400)
def test_put_cluster_networking_resource(self):
networking = {"networking" : self.NETWORKING_CONFIG}
url = "/clusters/1/networking"
rv = self.app.put(url, data=json.dumps(networking))
self.assertEqual(rv.status_code, 200)
def test_get_cluster_resource(self):
# Test only one resource - security as an example
with database.session() as session:
cluster = session.query(Cluster).filter_by(id=1).first()
cluster.security = self.SECURITY_CONFIG
# a. query security config by cluster id
url = '/clusters/1/security'
rv = self.app.get(url)
data = json.loads(rv.get_data())
self.assertEqual(rv.status_code, 200)
self.assertDictEqual(data['security'], self.SECURITY_CONFIG)
# b. query a nonsupported resource, return 400
url = '/clusters/1/xxx'
rv = self.app.get(url)
data = json.loads(rv.get_data())
self.assertEqual(rv.status_code, 400)
expected_err_msg = "Invalid resource name 'xxx'!"
self.assertEqual(data['message'], expected_err_msg)
def test_cluster_action(self):
from sqlalchemy import func
# Prepare testing data: create machines and clusters in the database.
# The first three machines will belong to cluster_01; the last one
# belongs to cluster_02.
with database.session() as session:
machines = [Machine(mac='00:27:88:0c:01'),
Machine(mac='00:27:88:0c:02'),
Machine(mac='00:27:88:0c:03'),
Machine(mac='00:27:88:0c:04')]
clusters = [Cluster(name='cluster_02')]
session.add_all(machines)
session.add_all(clusters)
# add a host using machine '00:27:88:0c:04' to cluster_02
host = ClusterHost(cluster_id=2, machine_id=4,
hostname='host_c2_01')
session.add(host)
# Do an action to a non-existing cluster
url = '/clusters/1000/action'
request = {'addHosts': [10, 20, 30]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 404)
# Test 'addHosts' action on cluster_01
# 1. add hosts with non-existing machines
url = '/clusters/1/action'
request = {'addHosts': [1, 1000, 1001]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 404)
# ClusterHost table should not have any records.
with database.session() as session:
hosts_num = session.query(func.count(ClusterHost.id))\
.filter_by(cluster_id=1).scalar()
self.assertEqual(hosts_num, 0)
# 2. add a host with an already-installed machine
request = {'addHosts': [1, 4]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 409)
data = json.loads(rv.get_data())
self.assertEqual(len(data['failedMachines']), 1)
# 3. add hosts to cluster_01
request = {'addHosts': [1, 2, 3]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 200)
data = json.loads(rv.get_data())
self.assertEqual(len(data['cluster_hosts']), 3)
# 4. try to remove some hosts which do not exist
request = {'removeHosts': [1, 1000, 1001]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 404)
data = json.loads(rv.get_data())
self.assertEqual(len(data['failedHosts']), 2)
# 5. successfully remove requested hosts
request = {'removeHosts': [1, 2]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 200)
data = json.loads(rv.get_data())
self.assertEqual(len(data['cluster_hosts']), 2)
# 6. Test 'replaceAllHosts' action on cluster_01
request = {'replaceAllHosts': [1, 2, 3]}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 200)
data = json.loads(rv.get_data())
self.assertEqual(len(data['cluster_hosts']), 3)
# 7. Test 'deploy' action on cluster_01
request = {'deploy': {}}
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 202)
# 8. Test deploy cluster_01 the second time
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 400)
# 9. Try to deploy cluster_02, which has no hosts
url = '/clusters/2/action'
with database.session() as session:
session.query(ClusterHost).filter_by(cluster_id=2)\
.delete(synchronize_session=False)
rv = self.app.post(url, data=json.dumps(request))
self.assertEqual(rv.status_code, 404)
class ClusterHostAPITest(ApiTestCase):
def setUp(self):
super(ClusterHostAPITest, self).setUp()
self.test_config_data = {
"networking": {
"interfaces": {
"management": {
"ip": "192.168.1.1"}},
"global": {}},
"roles": ""}
# Insert a host into database for testing
with database.session() as session:
clusters_list = [Cluster(name='cluster_01'),
Cluster(name='cluster_02')]
session.add_all(clusters_list)
hosts_list = [ClusterHost(hostname='host_02', cluster_id=1),
ClusterHost(hostname='host_03', cluster_id=1),
ClusterHost(hostname='host_04', cluster_id=2)]
host = ClusterHost(hostname='host_01', cluster_id=1)
host.config_data = json.dumps(self.test_config_data)
session.add(host)
session.add_all(hosts_list)
def tearDown(self):
super(ClusterHostAPITest, self).tearDown()
def test_clusterHost_get_config(self):
# 1. Try to get a config of the cluster host which does not exist
url = '/clusterhosts/1000/config'
rv = self.app.get(url)
self.assertEqual(404, rv.status_code)
# 2. Get a config of a cluster host successfully
test_config_data = deepcopy(self.test_config_data)
test_config_data['hostname'] = 'host_01'
url = '/clusterhosts/1/config'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
config = json.loads(rv.get_data())['config']
expected_config = deepcopy(test_config_data)
expected_config['hostid'] = 1
expected_config['hostname'] = 'host_01'
expected_config['clusterid'] = 1
expected_config['clustername'] = 'cluster_01'
self.assertDictEqual(config, expected_config)
def test_clusterHost_put_config(self):
config = deepcopy(self.test_config_data)
config['roles'] = ['base']
# 1. Try to put a config of the cluster host which does not exist
url = '/clusterhosts/1000/config'
rv = self.app.put(url, data=json.dumps(config))
self.assertEqual(404, rv.status_code)
# 2. Config with incorrect ip format
url = '/clusterhosts/1/config'
config2 = deepcopy(self.test_config_data)
config2['hostname'] = 'host_01_01'
config2['networking']['interfaces']['management']['ip'] = 'xxx'
rv = self.app.put(url, data=json.dumps(config2))
self.assertEqual(400, rv.status_code)
# 3. Config put successfully
rv = self.app.put(url, data=json.dumps(config))
self.assertEqual(200, rv.status_code)
with database.session() as session:
config_db = session.query(ClusterHost.config_data)\
.filter_by(id=1).first()[0]
self.assertDictEqual(config, json.loads(config_db))
def test_clusterHost_delete_subkey(self):
# 1. Try to delete an unqualified subkey of config
url = '/clusterhosts/1/config/gateway'
rv = self.app.delete(url)
self.assertEqual(400, rv.status_code)
# 2. Try to delete a subkey successfully
url = '/clusterhosts/1/config/ip'
rv = self.app.delete(url)
self.assertEqual(200, rv.status_code)
expected_config = deepcopy(self.test_config_data)
expected_config['networking']['interfaces']['management']['ip'] = ''
with database.session() as session:
config_db = session.query(ClusterHost.config_data).filter_by(id=1)\
.first()[0]
self.assertDictEqual(expected_config, json.loads(config_db))
# 3. Try to delete a subkey of a config belonging to an immutable host
with database.session() as session:
session.query(ClusterHost).filter_by(id=1)\
.update({'mutable': False})
url = '/clusterhosts/1/config/ip'
rv = self.app.delete(url)
self.assertEqual(400, rv.status_code)
def test_clusterHost_get_by_id(self):
# 1. Get host successfully
url = '/clusterhosts/1'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
hostname = json.loads(rv.get_data())['cluster_host']['hostname']
self.assertEqual('host_01', hostname)
# 2. Get a non-existing host
url = '/clusterhosts/1000'
rv = self.app.get(url)
self.assertEqual(404, rv.status_code)
def test_list_clusterhosts(self):
# 1. list the cluster host whose hostname is host_02
url = '/clusterhosts?hostname=host_02'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
hostname = json.loads(rv.get_data())['cluster_hosts'][0]['hostname']
self.assertEqual('host_02', hostname)
# 2. list cluster hosts whose cluster name is cluster_01
url = '/clusterhosts?clustername=cluster_01'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
hosts_num = len(json.loads(rv.get_data())['cluster_hosts'])
self.assertEqual(3, hosts_num)
# 3. list the host whose name is host_03 and cluster name is cluster_01
url = '/clusterhosts?hostname=host_03&clustername=cluster_01'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
hostname = json.loads(rv.get_data())['cluster_hosts'][0]['hostname']
self.assertEqual('host_03', hostname)
# 4. list all hosts
url = '/clusterhosts'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
hosts_num = len(json.loads(rv.get_data())['cluster_hosts'])
self.assertEqual(4, hosts_num)
# 5. No hosts found under cluster name cluster_1000
url = '/clusterhosts?clustername=cluster_1000'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
hosts_result = json.loads(rv.get_data())['cluster_hosts']
self.assertListEqual([], hosts_result)
def test_host_installing_progress(self):
# 1. Get progress of a non-existing host
url = '/clusterhosts/1000/progress'
rv = self.app.get(url)
self.assertEqual(404, rv.status_code)
# 2. Get progress of a host without state
url = '/clusterhosts/1/progress'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
# 3. Get progress which is in UNINITIALIZED state
with database.session() as session:
host = session.query(ClusterHost).filter_by(id=1).first()
host.state = HostState()
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
data = json.loads(rv.get_data())
self.assertEqual('UNINITIALIZED', data['progress']['state'])
self.assertEqual(0, data['progress']['percentage'])
# 4. Get progress which is in INSTALLING state
with database.session() as session:
host = session.query(ClusterHost).filter_by(id=1).first()
host.state.state = 'INSTALLING'
session.query(HostState).filter_by(id=1)\
.update({'progress': 0.3,
'message': 'Configuring...',
'severity': 'INFO'})
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
data = json.loads(rv.get_data())
self.assertEqual('INSTALLING', data['progress']['state'])
self.assertEqual(0.3, data['progress']['percentage'])
class TestAdapterAPI(ApiTestCase):
def setUp(self):
super(TestAdapterAPI, self).setUp()
with database.session() as session:
adapters = [Adapter(name='Centos_openstack', os='Centos',
target_system='openstack'),
Adapter(name='Ubuntu_openstack', os='Ubuntu',
target_system='openstack')]
session.add_all(adapters)
roles = [Role(name='Control', target_system='openstack'),
Role(name='Compute', target_system='openstack'),
Role(name='Master', target_system='hadoop')]
session.add_all(roles)
def tearDown(self):
super(TestAdapterAPI, self).tearDown()
def test_list_adapter_by_id(self):
url = '/adapters/1'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
data = json.loads(rv.get_data())
self.assertEqual('Centos_openstack', data['adapter']['name'])
def test_list_adapter_roles(self):
url = '/adapters/1/roles'
rv = self.app.get(url)
self.assertEqual(200, rv.status_code)
data = json.loads(rv.get_data())
self.assertEqual(2, len(data['roles']))
def test_list_adapters(self):
url = '/adapters?name=Centos_openstack'
rv = self.app.get(url)
data = json.loads(rv.get_data())
self.assertEqual(200, rv.status_code)
execpted_result = {"name": "Centos_openstack",
"os": "Centos",
"target_system": "openstack",
"id": 1,
"link": {
"href": "/adapters/1",
"rel": "self"}
}
self.assertDictEqual(expected_result, data['adapters'][0])
url = '/adapters'
rv = self.app.get(url)
data = json.loads(rv.get_data())
self.assertEqual(200, rv.status_code)
self.assertEqual(2, len(data['adapters']))
if __name__ == '__main__':
unittest2.main()

@ -0,0 +1,42 @@
import unittest2
from compass.config_management.installers import os_installer
class DummyInstaller(os_installer.Installer):
NAME = 'dummy'
def __init__(self):
pass
class Dummy2Installer(os_installer.Installer):
NAME = 'dummy'
def __init__(self):
pass
class TestInstallerFunctions(unittest2.TestCase):
def setUp(self):
os_installer.INSTALLERS = {}
def tearDown(self):
os_installer.INSTALLERS = {}
def test_found_installer(self):
os_installer.register(DummyInstaller)
installer = os_installer.get_installer_by_name(DummyInstaller.NAME)
self.assertIsInstance(installer, DummyInstaller)
def test_notfound_unregistered_installer(self):
self.assertRaises(KeyError, os_installer.get_installer_by_name,
DummyInstaller.NAME)
def test_multi_registered_installer(self):
os_installer.register(DummyInstaller)
self.assertRaises(KeyError, os_installer.register, Dummy2Installer)
if __name__ == '__main__':
unittest2.main()
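These cases pin down the registry contract: register() keys an installer class by its NAME and raises KeyError on a duplicate name, while get_installer_by_name() returns an instance of the registered class and lets KeyError propagate for unknown names. A minimal sketch of a registry that satisfies them, as an illustration rather than the compass implementation:

INSTALLERS = {}

def register(installer_cls):
    """Register an installer class, keyed by its NAME."""
    if installer_cls.NAME in INSTALLERS:
        raise KeyError('installer %s already registered' % installer_cls.NAME)
    INSTALLERS[installer_cls.NAME] = installer_cls

def get_installer_by_name(name):
    """Instantiate the installer registered under name (KeyError if absent)."""
    return INSTALLERS[name]()

The package_installer and config_provider tests below exercise the same registration pattern.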

@ -0,0 +1,44 @@
import unittest2
from compass.config_management.installers import package_installer
class DummyInstaller(package_installer.Installer):
NAME = 'dummy'
def __init__(self):
pass
class Dummy2Installer(package_installer.Installer):
NAME = 'dummy'
def __init__(self):
pass
class TestInstallerFunctions(unittest2.TestCase):
def setUp(self):
package_installer.INSTALLERS = {}
def tearDown(self):
package_installer.INSTALLERS = {}
def test_found_installer(self):
package_installer.register(DummyInstaller)
installer = package_installer.get_installer_by_name(
DummyInstaller.NAME)
self.assertIsInstance(installer, DummyInstaller)
def test_notfound_unregistered_installer(self):
self.assertRaises(KeyError, package_installer.get_installer_by_name,
DummyInstaller.NAME)
def test_multi_registered_installer(self):
package_installer.register(DummyInstaller)
self.assertRaises(KeyError, package_installer.register,
Dummy2Installer)
if __name__ == '__main__':
unittest2.main()

@ -0,0 +1,44 @@
import unittest2
from compass.config_management.providers import config_provider
class DummyProvider(config_provider.ConfigProvider):
NAME = 'dummy'
def __init__(self):
pass
class Dummy2Provider(config_provider.ConfigProvider):
NAME = 'dummy'
def __init__(self):
pass
class TestProviderRegisterFunctions(unittest2.TestCase):
def setUp(self):
config_provider.PROVIDERS = {}
def tearDown(self):
config_provider.PROVIDERS = {}
def test_found_provider(self):
config_provider.register_provider(DummyProvider)
provider = config_provider.get_provider_by_name(
DummyProvider.NAME)
self.assertIsInstance(provider, DummyProvider)
def test_notfound_unregistered_provider(self):
self.assertRaises(KeyError, config_provider.get_provider_by_name,
DummyProvider.NAME)
def test_multi_registered_provider(self):
config_provider.register_provider(DummyProvider)
self.assertRaises(KeyError, config_provider.register_provider,
Dummy2Provider)
if __name__ == '__main__':
unittest2.main()

@ -0,0 +1,52 @@
import unittest2
from compass.config_management.utils import config_filter
class TestConfigFilter(unittest2.TestCase):
def test_allows(self):
config = {'1': '1',
'2': {'22': '22',
'33': {'333': '333',
'44': '444'}},
'3': {'33': '44'}}
allows = ['*', '3', '5']
cfg_filter = config_filter.ConfigFilter(allows)
filtered_config = cfg_filter.filter(config)
self.assertEqual(filtered_config, config)
allows = ['/1', '2/22', '5']
expected_config = {'1': '1', '2': {'22': '22'}}
cfg_filter = config_filter.ConfigFilter(allows)
filtered_config = cfg_filter.filter(config)
self.assertEqual(filtered_config, expected_config)
allows = ['*/33']
expected_config = {'2': {'33': {'333': '333',
'44': '444'}},
'3': {'33': '44'}}
cfg_filter = config_filter.ConfigFilter(allows)
filtered_config = cfg_filter.filter(config)
self.assertEqual(filtered_config, expected_config)
def test_denies(self):
config = {'1': '1', '2': {'22': '22',
'33': {'333': '333',
'44': '444'}},
'3': {'33': '44'}}
denies = ['/1', '2/22', '2/33/333', '5']
expected_config = {'2': {'33': {'44': '444'}}, '3': {'33': '44'}}
cfg_filter = config_filter.ConfigFilter(denies=denies)
filtered_config = cfg_filter.filter(config)
self.assertEqual(filtered_config, expected_config)
denies = ['*']
cfg_filter = config_filter.ConfigFilter(denies=denies)
filtered_config = cfg_filter.filter(config)
self.assertIsNone(filtered_config)
denies = ['*/33']
expected_config = {'1': '1', '2': {'22': '22'}}
cfg_filter = config_filter.ConfigFilter(denies=denies)
filtered_config = cfg_filter.filter(config)
self.assertEqual(filtered_config, expected_config)
if __name__ == '__main__':
unittest2.main()
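The assertions above fix the path semantics: patterns are '/'-separated key paths, '*' matches any key at its level, allows keep only the matched subtrees, and denies remove them, pruning parents emptied along the way and yielding None when everything is denied. The following standalone sketch reproduces these cases; it illustrates the semantics and is not the compass ConfigFilter itself:

from copy import deepcopy

def _match(config, parts):
    """Yield (path, subtree) pairs in config matching the pattern parts."""
    if not parts:
        yield [], config
        return
    if not isinstance(config, dict):
        return
    head, rest = parts[0], parts[1:]
    for key, value in config.items():
        if head in ('*', key):
            for path, sub in _match(value, rest):
                yield [key] + path, sub

def _prune(config):
    """Drop dict entries whose subtrees became empty after filtering."""
    if not isinstance(config, dict):
        return config
    return {key: _prune(value) for key, value in config.items()
            if not (isinstance(value, dict) and not _prune(value))}

def filter_allows(config, allows):
    """Keep only subtrees matched by the allow patterns (subtrees are shared, not copied)."""
    result = {}
    for pattern in allows:
        for path, sub in _match(config, pattern.strip('/').split('/')):
            node = result
            for key in path[:-1]:
                node = node.setdefault(key, {})
            node[path[-1]] = sub
    return result

def filter_denies(config, denies):
    """Drop subtrees matched by the deny patterns; None if nothing survives."""
    result = deepcopy(config)
    for pattern in denies:
        for path, _ in list(_match(result, pattern.strip('/').split('/'))):
            node = result
            for key in path[:-1]:
                node = node[key]
            del node[path[-1]]
    return _prune(result) or None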

@ -0,0 +1,166 @@
import functools
import unittest2
from compass.config_management.utils import config_merger
from compass.config_management.utils import config_merger_callbacks
from compass.config_management.utils import config_reference
class TestConfigMerger(unittest2.TestCase):
def test_merge(self):
upper_config = {
'networking': {
'interfaces': {
'management': {
'ip_start': '192.168.1.1',
'ip_end': '192.168.1.100',
'netmask': '255.255.255.0',
'dns_pattern': '%(hostname)s.%(clustername)s.%(search_path)s',
},
'floating': {
'ip_start': '172.16.0.1',
'ip_end': '172.16.0.100',
'netmask': '0.0.0.0',
'dns_pattern': 'public-%(hostname)s.%(clustername)s.%(search_path)s',
},
},
'global': {
'search_path': 'ods.com',
'default_no_proxy': ['127.0.0.1', 'localhost'],
},
},
'clustername': 'cluster1',
'dashboard_roles': ['os-single-controller'],
'role_assign_policy': {
'policy_by_host_numbers': {},
'default': {
'roles': ['os-single-controller', 'os-network',
'os-compute-worker'],
'default_min': 1,
},
},
}
lower_configs = {
1: {
'hostname': 'host1',
},
2: {
'hostname': 'host2',
'networking': {
'interfaces': {
'management': {
'ip': '192.168.1.50',
},
},
},
'roles': ['os-single-controller', 'os-network'],
}
}
expected_lower_configs = {
1: {
'networking': {
'interfaces': {
'floating': {
'ip': '172.16.0.1',
'netmask': '0.0.0.0',
'dns_alias': 'public-host1.cluster1.ods.com'
},
'management': {
'ip': '192.168.1.1',
'netmask': '255.255.255.0',
'dns_alias': 'host1.cluster1.ods.com'
}
},
'global': {
'search_path': 'ods.com',
'default_no_proxy': ['127.0.0.1', 'localhost'],
'ignore_proxy': '127.0.0.1,localhost,host1,192.168.1.1,host2,192.168.1.50'
}
},
'hostname': 'host1',
'has_dashboard_roles': False,
'roles': ['os-compute-worker']
},
2: {
'networking': {
'interfaces': {
'floating': {
'ip': '172.16.0.2',
'netmask': '0.0.0.0',
'dns_alias': 'public-host2.cluster1.ods.com'
},
'management': {
'ip': '192.168.1.50',
'netmask': '255.255.255.0',
'dns_alias': 'host2.cluster1.ods.com'
}
},
'global': {
'search_path': 'ods.com',
'default_no_proxy': ['127.0.0.1', 'localhost'],
'ignore_proxy': '127.0.0.1,localhost,host1,192.168.1.1,host2,192.168.1.50'
}
},
'hostname': 'host2',
'has_dashboard_roles': True,
'roles': ['os-single-controller', 'os-network']
}
}
mappings = [
config_merger.ConfigMapping(
path_list=['/networking/interfaces/*'],
from_upper_keys={'ip_start': 'ip_start', 'ip_end': 'ip_end'},
to_key='ip',
value=config_merger_callbacks.assign_ips
),
config_merger.ConfigMapping(
path_list=['/role_assign_policy'],
from_upper_keys={
'policy_by_host_numbers': 'policy_by_host_numbers',
'default': 'default'},
to_key='/roles',
value=config_merger_callbacks.assign_roles_by_host_numbers
),
config_merger.ConfigMapping(
path_list=['/dashboard_roles'],
from_lower_keys={'lower_values': '/roles'},
to_key='/has_dashboard_roles',
value=config_merger_callbacks.has_intersection
),
config_merger.ConfigMapping(
path_list=[
'/networking/global',
'/networking/interfaces/*/netmask',
'/networking/interfaces/*/nic',
'/networking/interfaces/*/promisc',
'/security/*',
'/partition',
]
),
config_merger.ConfigMapping(
path_list=['/networking/interfaces/*'],
from_upper_keys={'pattern': 'dns_pattern',
'clustername': '/clustername',
'search_path': '/networking/global/search_path'},
from_lower_keys={'hostname': '/hostname'},
to_key='dns_alias',
value=functools.partial(config_merger_callbacks.assign_from_pattern,
upper_keys=['search_path', 'clustername'],
lower_keys=['hostname'])
),
config_merger.ConfigMapping(
path_list=['/networking/global'],
from_upper_keys={'default': 'default_no_proxy'},
from_lower_keys={'hostnames': '/hostname',
'ips': '/networking/interfaces/management/ip'},
to_key='ignore_proxy',
value=config_merger_callbacks.assign_noproxy
)
]
merger = config_merger.ConfigMerger(mappings)
merger.merge(upper_config, lower_configs)
self.assertEqual(lower_configs, expected_lower_configs)
if __name__ == '__main__':
unittest2.main()
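The dns_alias values in the expected output are plain %-style expansions of the upper-level dns_pattern with cluster-level and per-host keys, which is what the assign_from_pattern callback in the mapping above presumably performs:

pattern = '%(hostname)s.%(clustername)s.%(search_path)s'
print(pattern % {'hostname': 'host1',
                 'clustername': 'cluster1',
                 'search_path': 'ods.com'})   # -> host1.cluster1.ods.com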

Some files were not shown because too many files have changed in this diff