Initial commit

Tanvir Talukder 2016-12-12 08:50:24 -06:00
commit ae593ca663
246 changed files with 23749 additions and 0 deletions

.coveragerc Normal file

@@ -0,0 +1,8 @@
[run]
branch = True
source = valet
omit = valet/tests/*
cover_pylib = True
[report]
ignore_errors = True

.gitignore vendored Normal file

@@ -0,0 +1,109 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# Ignore thumbnails created by Windows
Thumbs.db
# Ignore files built by Visual Studio
*.obj
*.exe
*.pdb
*.user
*.aps
*.pch
*.vspscc
*_i.c
*_p.c
*.ncb
*.suo
*.tlb
*.tlh
*.bak
*.cache
*.ilk
*.log
# C extensions
*.so
*.pid
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.eggs/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
ostro-daemon.pid
.project
.pydevproject
.testrepository
.settings
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Ignore files built by Visual Studio
[Bb]in
[Dd]ebug*/
*.lib
*.sbr
obj/
[Rr]elease*/
_ReSharper*/
[Tt]est[Rr]esult*
.idea/*

.testr.conf Normal file

@@ -0,0 +1,7 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-1000} \
             ${PYTHON:-python} -m subunit.run discover ${OS_TEST_PATH:-./valet/tests/unit} -t . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
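Each `${VAR:-default}` expansion above lets the environment override a setting while falling back to a default when the variable is unset or empty. A minimal Python sketch of the same defaulting rule (function name is illustrative):

```python
import os

def env_default(name, default):
    """Mimic shell ${NAME:-default}: use the variable when it is set
    and non-empty, otherwise fall back to the default."""
    value = os.environ.get(name)
    return value if value else default

# With OS_TEST_TIMEOUT unset, the testr command falls back to 1000.
timeout = env_default('OS_TEST_TIMEOUT', '1000')
```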

doc/LICENSE Normal file

@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

etc/valet/api/app.apache2 Normal file

@@ -0,0 +1,29 @@
# valet user/group required (or substitute as needed).
# Place in /opt/apache2/sites-available, symlink from
# /opt/apache2/sites-enabled, and run 'apachectl restart' as root.
# Optional: Append python-path=PATH_TO_VENV_PACKAGES to WSGIDaemonProcess
Listen 8090
ServerName valet
<VirtualHost *:8090>
    ServerName valet
    WSGIDaemonProcess valet user=m04060 group=m04060 threads=5
    WSGIScriptAlias / /var/www/valet/app.wsgi
    SetEnv APACHE_RUN_USER m04060
    SetEnv APACHE_RUN_GROUP m04060
    WSGIProcessGroup valet
    <Directory /var/www/valet/>
        WSGIProcessGroup valet
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
    ErrorLog /var/log/valet/api.log
    LogLevel warn
    CustomLog /var/log/valet/access.log combined
</VirtualHost>

etc/valet/api/app.wsgi Normal file

@@ -0,0 +1,4 @@
# /var/www/valet/app.wsgi
from valet.api.app import load_app
application = load_app(config_file='/var/www/valet/config.py')

etc/valet/api/config.py Normal file

@@ -0,0 +1,102 @@
from oslo_config import cfg
from pecan.hooks import TransactionHook
from valet.api.db import models
from valet.api.common.hooks import NotFoundHook, MessageNotificationHook
CONF = cfg.CONF
# Server Specific Configurations
server = {
    'port': CONF.server.port,
    'host': CONF.server.host
}

# Pecan Application Configurations
app = {
    'root': 'valet.api.v1.controllers.root.RootController',
    'modules': ['valet.api'],
    'default_renderer': 'json',
    'force_canonical': False,
    'debug': False,
    'hooks': [
        TransactionHook(
            models.start,
            models.start_read_only,
            models.commit,
            models.rollback,
            models.clear
        ),
        NotFoundHook(),
        MessageNotificationHook(),
    ],
}

logging = {
    'root': {'level': 'INFO', 'handlers': ['console']},
    'loggers': {
        'api': {
            'level': 'DEBUG', 'handlers': ['console'], 'propagate': False
        },
        'api.models': {
            'level': 'INFO', 'handlers': ['console'], 'propagate': False
        },
        'api.common': {
            'level': 'INFO', 'handlers': ['console'], 'propagate': False
        },
        'pecan': {
            'level': 'DEBUG', 'handlers': ['console'], 'propagate': False
        },
        'py.warnings': {'handlers': ['console']},
        '__force_dict__': True
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'color'
        }
    },
    'formatters': {
        'simple': {
            'format': ('%(asctime)s %(levelname)-5.5s [%(name)s]'
                       '[%(threadName)s] %(message)s')
        },
        'color': {
            '()': 'pecan.log.ColorFormatter',
            'format': ('%(asctime)s [%(padded_color_levelname)s] [%(name)s]'
                       '[%(threadName)s] %(message)s'),
            '__force_dict__': True
        }
    }
}

ostro = {
    'tries': CONF.music.tries,
    'interval': CONF.music.interval,
}

messaging = {
    'config': {
        'transport_url': 'rabbit://' + CONF.messaging.username + ':' +
                         CONF.messaging.password + '@' + CONF.messaging.host +
                         ':' + str(CONF.messaging.port) + '/'
    }
}

identity = {
    'config': {
        'username': CONF.identity.username,
        'password': CONF.identity.password,
        'project_name': CONF.identity.project_name,
        'auth_url': CONF.identity.auth_url,
        'interface': CONF.identity.interface,
    }
}

music = {
    'host': CONF.music.host,
    'port': CONF.music.port,
    'keyspace': CONF.music.keyspace,
    'replication_factor': CONF.music.replication_factor,
}
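The `transport_url` in config.py is built by plain string concatenation, which breaks if the password contains characters such as `@` or `/`. A hedged sketch (hypothetical values) that also percent-encodes the credentials:

```python
try:
    from urllib.parse import quote  # Python 3
except ImportError:
    from urllib import quote  # Python 2

def rabbit_url(username, password, host, port):
    """Build an oslo.messaging-style transport URL; credentials are
    percent-encoded so characters like '@' or ':' stay safe."""
    return 'rabbit://%s:%s@%s:%d/' % (
        quote(username, safe=''), quote(password, safe=''), host, port)

# e.g. rabbit_url('guest', 'p@ss', 'rabbit01', 5672)
```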

@@ -0,0 +1,24 @@
import json
from oslo_config import cfg
import oslo_messaging

class NotificationEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print('recv notification:')
        print(json.dumps(payload, indent=4))

    def warn(self, ctxt, publisher_id, event_type, payload, metadata):
        pass

    def error(self, ctxt, publisher_id, event_type, payload, metadata):
        pass

transport = oslo_messaging.get_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
endpoints = [NotificationEndpoint()]
server = oslo_messaging.get_notification_listener(transport, targets, endpoints)
server.start()
server.wait()
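The endpoint above receives notifications at three priorities (info/warn/error), and the listener routes each message to the method matching its priority. A self-contained sketch of that dispatch idea, without oslo.messaging (class and message shape are illustrative):

```python
class Endpoint(object):
    """Toy endpoint recording which handler each message reached."""
    def __init__(self):
        self.seen = []
    def info(self, payload):
        self.seen.append(('info', payload))
    def warn(self, payload):
        self.seen.append(('warn', payload))
    def error(self, payload):
        self.seen.append(('error', payload))

def dispatch(endpoint, priority, payload):
    """Route a message to the endpoint method named after its priority,
    as a notification listener does internally."""
    if priority not in ('info', 'warn', 'error'):
        raise ValueError('unsupported priority: %s' % priority)
    getattr(endpoint, priority)(payload)
```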

etc/valet/valet.conf Normal file

@@ -0,0 +1,136 @@
# __
# /_\ |__| |
# / \ | |
#
[server]
host = 0.0.0.0
port = 8090
[messaging]
username = rabbitmq_username
password = rabbitmq_psw
host = rabbitmq_host
port = rabbitmq_port
[identity]
project_name = project_name
username = project_username
password = project_username_password
auth_url = http://keystone_host:5000/v2.0
# interface = admin
# _ _
# | \ |_\
# |_/ |_/
#
[music]
host = music_host
port = 8080
keyspace = valet_keyspace
replication_factor = 3
# tries = 10
# interval = 1
# request_table = placement_requests
# response_table = placement_results
# event_table = oslo_messages
# resource_table = resource_status
# app_table = app
# resource_index_table = resource_log_index
# app_index_table = app_log_index
# uuid_table = uuid_map
# __ __ __
# |__ |\ | | | |\ | |__
# |__ | \| |__T | | \| |__
#
[engine]
# Set the location of daemon process id
pid = /var/run/valet/ostro-daemon.pid
# Set IP of this Ostro
# ip = localhost
# Used for Ostro active/passive selection
priority = 1
#------------------------------------------------------------------------------------------------------------
# Logging configuration
#------------------------------------------------------------------------------------------------------------
# Set logging parameters
# logger_name = test
# logging level = [debug|info]
# logging_level = debug
# Set the directory to locate the log file
# logging_dir = /var/log/valet/engine/
# Set the maximum size of the main logger as Byte
# max_main_log_size = 5000000
# Set the maximum logfile size as Byte for time-series log files
# max_log_size = 1000000
# Set the maximum number of time-series log files
# max_num_of_logs = 20
#------------------------------------------------------------------------------------------------------------
# Management configuration
#------------------------------------------------------------------------------------------------------------
# Set the name of the datacenter (region name) where Valet/Ostro is deployed.
# datacenter_name = bigsite
# Set the naming convention rules.
# Currently, 3 chars of CLLI + region number + 'r' + rack id number + 1 char of node type + node id number.
# For example, pdk15r05c001 indicates the first KVM compute server (i.e., 'c001') in the fifth rack
# (i.e., 'r05') in the fifteenth DeKalb-Peachtree Airport Region (i.e., 'pdk15').
# Set the number of chars that indicates the region code. In the above example, 'pdk' is the region code.
# num_of_region_chars = 3
# Set 1 char of rack indicator. This should be 'r'.
# rack_code_list = r
# Set all chars, each of which indicates a node type.
# Currently, 'a' = network, 'c' = KVM compute, 'u' = ESXi compute, 'f' = ?, 'o' = operation, 'p' = power,
# 's' = storage.
# node_code_list = a,c,u,f,o,p,s
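The convention above can be checked mechanically. A hedged sketch of a parser for names like `pdk15r05c001` (field names are my own; assumes the defaults `num_of_region_chars = 3`, rack code `r`, and the node codes listed):

```python
import re

# a=network, c=KVM compute, u=ESXi compute, f=?, o=operation, p=power, s=storage
NODE_CODES = 'acufops'

def parse_host_name(name, region_chars=3):
    """Split a host name into region code, region number, rack number,
    node type and node number per the documented convention."""
    pattern = r'^([a-z]{%d})(\d+)r(\d+)([%s])(\d+)$' % (region_chars, NODE_CODES)
    match = re.match(pattern, name)
    if not match:
        return None
    region, region_num, rack, node_type, node_num = match.groups()
    return {'region': region, 'region_num': int(region_num),
            'rack': int(rack), 'node_type': node_type,
            'node_num': int(node_num)}
```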
# Set trigger time or frequency for checking compute hosting server status (i.e., call Nova)
# Note that currently, compute (Nova) should be triggered first then trigger topology.
# compute_trigger_time = 01:00
# compute_trigger_frequency = 3600
# Set trigger time or frequency for checking datacenter topology
# topology_trigger_time = 02:00
# topology_trigger_frequency = 3600
# Set default overbooking ratios. Note that each compute node can have its own ratios.
# default_cpu_allocation_ratio = 16
# default_ram_allocation_ratio = 1.5
# default_disk_allocation_ratio = 1
# Set static unused percentages of resources (i.e., standby) that are set aside for applications' workload spikes.
# static_cpu_standby_ratio = 20
# static_mem_standby_ratio = 20
# static_local_disk_standby_ratio = 20
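Combined, the overbooking ratios and standby percentages determine the capacity the scheduler can actually place against. An illustrative calculation (helper is my own, using the default values above):

```python
def effective_vcpus(physical_cores, allocation_ratio=16, standby_pct=20):
    """Capacity exposed to placement: physical cores, overbooked by the
    CPU allocation ratio, minus the static standby percentage held back
    for workload spikes."""
    return physical_cores * allocation_ratio * (1 - standby_pct / 100.0)

# A 32-core host with the defaults exposes 32 * 16 * 0.8 vCPUs.
```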
# Set Ostro execution mode
# mode = [live|sim], sim will let Ostro simulate datacenter, while live will let it handle a real datacenter
# mode = live
# Set the location of the simulation configuration file (i.e., ostro_sim.cfg).
# This is used only in simulation mode.
# sim_cfg_loc = /etc/valet/engine/ostro_sim.cfg
# Indicate whether a network controller (i.e., Tegu) has been deployed.
# If so, set its API; otherwise ignore these parameters.
# network_control = no
# network_control_api = 29444/tegu/api
# Set RPC server ip and port if used. Otherwise, ignore these parameters
# rpc_server_ip = localhost
# rpc_server_port = 8002

requirements.txt Normal file

@@ -0,0 +1,14 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pip
pecan>=1.1.1
pecan-notario<=0.0.3
simplejson<=3.3.1
#pymysql
#sqlalchemy
pika<=0.10.0
python-daemon<=2.1.1
#oslo.messaging!=1.17.0,!=1.17.1,!=2.6.0,!=2.6.1,!=2.7.0,!=2.8.0,!=2.8.1,!=2.9.0,!=3.1.0,>=1.16.0 # Apache-2.0
#oslo.messaging==1.8.3

run_all_tests.sh Normal file

@@ -0,0 +1 @@
sudo tox

run_examples.sh Normal file

@@ -0,0 +1,14 @@
# run specific tests:
# sudo tox -epy27 -- '(TestAffinity|TestDiversity)'
# isolate
# sudo tox -- --isolated
# run all tests until failure
# sudo tox -- --until-failure
# run serially (no parallelism)
# sudo tox -epy27 -- '--concurrency=1'
# use commands = ostestr --slowest '{posargs}' in file tox.ini
# http://docs.openstack.org/developer/os-testr/ostestr.html#running-tests

run_test.sh Normal file

@@ -0,0 +1,8 @@
sudo tox -epy27 -- '--concurrency=1' "$@"
# EXAMPLE:
# ./run_test.sh '(TestAffinity)'
# run specific tests:
# sudo tox -epy27 -- '(TestAffinity|TestDiversity)'

run_until_fail.sh Normal file

@@ -0,0 +1,4 @@
# run all tests in a loop until failure
sudo tox -- --until-failure

setup.cfg Normal file

@@ -0,0 +1,31 @@
[metadata]
name = valet
summary = Valet Placement Service API
version = 0.1
# description-file = README.md
author = AT&T
author-email = jdandrea@research.att.com
homepage = https://github.com/att-comdev/valet
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7

[global]
setup-hooks =
    pbr.hooks.setup_hook

[files]
packages = valet
data_files = etc/valet/ = etc/*

[entry_points]
pecan.command =
    populate = valet.api.v1.commands.populate:PopulateCommand
tempest.test_plugins =
    valet_tests = valet.tests.tempest.plugin:ValetTempestPlugin

setup.py Normal file

@@ -0,0 +1,33 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Setup'''
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa # pylint: disable=W0611,C0411
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=1.8'],
    pbr=True)

test-requirements.txt Normal file

@@ -0,0 +1,29 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.0
os-testr<=0.7.0
markupsafe<=0.23
pecan<=0.8.2
notario<=0.0.11
coverage>=3.6
python-subunit>=0.0.18
mock>=1.2
oslotest>=1.10.0 # Apache-2.0
oslo.config>=1.9.0
testrepository>=0.0.18
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
testscenarios>=0.4
testtools>=1.4.0
oslo.i18n<=3.8.0
oslo.log>=1.0.0
pytz
python-keystoneclient<=3.4.0
python-novaclient<=4.0.0
python-heatclient<=1.2.0
oslo.messaging==1.8.3
#tempest<=12.1.0 ---------- needs to be installed on Jenkins, no output when using tox
#tempest-lib>=0.8.0

@@ -0,0 +1,6 @@
[program:HAValet]
command=python /usr/local/lib/python2.7/dist-packages/valet/ha/ha_valet.py
autostart=true
autorestart=true
stderr_logfile=/var/log/HAValet.err.log
stdout_logfile=/var/log/HAValet.out.log

tools/conf.d/music.conf Normal file

@@ -0,0 +1,15 @@
[program:cassandra]
command=/bin/bash -c '/opt/app/apache-cassandra-2.1.1/bin/cassandra -f'
autostart=true
autorestart=true
stopsignal=KILL
stderr_logfile=/var/log/cassandra.err.log
stdout_logfile=/var/log/cassandra.out.log
[program:Zookeeper]
command=/opt/app/zookeeper-3.4.6/bin/zkServer.sh start-foreground
autostart=true
autorestart=true
stopsignal=KILL
stderr_logfile=/var/log/zookeeper.err.log
stdout_logfile=/var/log/zookeeper.out.log

tools/utils/cleandb.sh Normal file

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
# drop keyspace
echo "drop valet keyspace"
/opt/app/apache-cassandra-2.1.1/bin/cqlsh -e "DROP KEYSPACE valet_test;"
sleep 5
# populate tables
echo "populate valet tables"
# /opt/app/apache-cassandra-2.1.1/bin/cqlsh -f ./populate.cql
pecan populate /var/www/valet/config.py
/opt/app/apache-cassandra-2.1.1/bin/cqlsh -e "DESCRIBE KEYSPACE valet_test;"
echo "Done populating"

tools/utils/populate.cql Normal file

@@ -0,0 +1,23 @@
CREATE KEYSPACE IF NOT EXISTS valet_test WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor': '3' } AND durable_writes = true;
CREATE TABLE IF NOT EXISTS valet_test.placements(id text PRIMARY KEY, name text, orchestration_id text, resource_id text, location text, reserved boolean, plan_id text);
CREATE TABLE IF NOT EXISTS valet_test.groups(id text PRIMARY KEY, name text, description text, type text, members text);
CREATE TABLE IF NOT EXISTS valet_test.placement_requests(stack_id text PRIMARY KEY, request text);
CREATE TABLE IF NOT EXISTS valet_test.placement_results(stack_id text PRIMARY KEY, placement text);
CREATE TABLE IF NOT EXISTS valet_test.oslo_messages ("timestamp" text PRIMARY KEY, args text, exchange text, method text);
CREATE TABLE IF NOT EXISTS valet_test.plans (id text PRIMARY KEY, name text, stack_id text);
CREATE TABLE IF NOT EXISTS valet_test.uuid_map (uuid text PRIMARY KEY, h_uuid text, s_uuid text);
CREATE TABLE IF NOT EXISTS valet_test.app (stack_id text PRIMARY KEY, app text);
CREATE TABLE IF NOT EXISTS valet_test.resource_status (site_name text PRIMARY KEY, resource text);
CREATE TABLE IF NOT EXISTS valet_test.resource_log_index (site_name text PRIMARY KEY, resource_log_index text);
CREATE TABLE IF NOT EXISTS valet_test.app_log_index ( site_name text PRIMARY KEY, app_log_index text);

tox.ini Normal file

@@ -0,0 +1,66 @@
[tox]
#minversion = 2.0
envlist = py27
#py27-constraints, pep8-constraints
#py34-constraints,py27-constraints,pypy-constraints,pep8-constraints
#skipsdist = True
[testenv]
usedevelop = True
install_command =
    pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
         OS_TEST_PATH=valet/tests/unit
#commands = python setup.py testr --slowest --testr-args='{posargs}'
commands =
    find . -type f -name "*.pyc" -delete
    ostestr --slowest '{posargs}'
deps = -r{toxinidir}/test-requirements.txt
whitelist_externals =
    bash
    find
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:tempest]
setenv = VIRTUAL_ENV={envdir}
         OS_TEST_PATH=valet/tests/tempest
commands = python setup.py testr --slowest --testr-args='{posargs}'
# python setup.py testr --testr-args='{posargs}' | subunit-trace --no-failure-debug -f
[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
         OS_TEST_PATH=valet/tests/unit/
commands =
    coverage erase
    python setup.py test --slowest --coverage --coverage-package-name 'valet' --testr-args='{posargs}'
    coverage html
    coverage report
[testenv:docs]
commands = python setup.py build_sphinx
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125,E501,H401,H105,H301
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build

valet/__init__.py Normal file (empty)

valet/api/PKG-INFO Normal file

@@ -0,0 +1,4 @@
Metadata-Version: 1.2
Name: api
Version: 0.1.0
Author-email: jdandrea@research.att.com

valet/api/__init__.py Normal file (empty)

valet/api/app.py Normal file

@@ -0,0 +1,44 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Application'''
from pecan.deploy import deploy
from pecan import make_app
from valet.api.common import identity, messaging
from valet.api.conf import register_conf, set_domain
from valet.api.db import models
def setup_app(config):
    """App Setup"""
    identity.init_identity()
    messaging.init_messaging()
    models.init_model()
    app_conf = dict(config.app)
    return make_app(
        app_conf.pop('root'),
        logging=getattr(config, 'logging', {}), **app_conf)

# entry point for apache2
def load_app(config_file):
    register_conf()
    set_domain(project='valet')
    return deploy(config_file)
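`setup_app` pops `'root'` out of the config dict before splatting the rest as keyword arguments, so the root controller is not passed twice. The pattern in isolation (the stub stands in for `pecan.make_app`):

```python
def make_app_stub(root, **kwargs):
    """Stand-in for pecan.make_app: a required root controller plus
    arbitrary config keywords."""
    return root, kwargs

app_conf = {'root': 'RootController', 'default_renderer': 'json', 'debug': False}
root = app_conf.pop('root')            # remove 'root' first...
app = make_app_stub(root, **app_conf)  # ...so **app_conf no longer contains it
```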

@@ -0,0 +1,22 @@
import ctypes

def terminate_thread(thread):
    """Terminates a python thread from another thread.

    :param thread: a threading.Thread instance
    """
    if not thread.isAlive():
        return
    print('valet watcher thread: notifier thread is alive... - kill it...')
    exc = ctypes.py_object(SystemExit)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident), exc)
    if res == 0:
        raise ValueError("nonexistent thread id")
    elif res > 1:
        # If it returns a number greater than one, you're in trouble,
        # and you should call it again with exc=NULL to revert the effect.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")
    print('valet watcher thread exits')
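Raising an exception into a thread with `PyThreadState_SetAsyncExc` is a last resort; it only lands when the target next executes Python bytecode. A safer cooperative pattern, sketched with the stdlib (illustrative, not what this module does):

```python
import threading

def worker(stop_event):
    """Loop until asked to stop, instead of being killed externally."""
    while not stop_event.wait(0.01):
        pass  # do one bounded unit of work per iteration

stop = threading.Event()
t = threading.Thread(target=worker, args=(stop,))
t.start()
stop.set()   # request shutdown...
t.join(1.0)  # ...and wait briefly for the thread to exit
```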

@@ -0,0 +1,32 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Compute helper library'''
from novaclient import client
from pecan import conf
# Nova API v2
VERSION = 2
def nova_client():
    '''Returns a nova client'''
    sess = conf.identity.engine.session
    nova = client.Client(VERSION, session=sess)
    return nova

valet/api/common/hooks.py Normal file

@@ -0,0 +1,111 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Hooks'''
import json
import logging
from valet.api.common.i18n import _
from valet.api.common import terminate_thread
from valet.api.v1.controllers import error
from pecan import conf
from pecan.hooks import PecanHook
import threading
import webob
LOG = logging.getLogger(__name__)
class MessageNotificationHook(PecanHook):
'''Send API request/responses out as Oslo msg notifications.'''
def after(self, state):
self.dummy = True
LOG.info('sending notification')
notifier = conf.messaging.notifier
status_code = state.response.status_code
status = webob.exc.status_map.get(status_code)
if issubclass(status, webob.exc.HTTPOk):
notifier_fn = notifier.info
else:
notifier_fn = notifier.error
ctxt = {} # Not using this just yet.
request_path = state.request.path
event_type_parts = ['api']
api_version = state.request.path_info_pop()
if api_version:
event_type_parts.append(api_version)
api_subject = state.request.path_info_pop()
if api_subject:
event_type_parts.append(api_subject)
event_type = '.'.join(event_type_parts)
request_method = state.request.method
try:
request_body = json.loads(state.request.body)
except ValueError:
request_body = None
try:
response_body = json.loads(state.response.body)
except ValueError:
response_body = state.response.body
tenant_id = state.request.context.get('tenant_id', None)
user_id = state.request.context.get('user_id', None)
payload = {
'context': {
'tenant_id': tenant_id,
'user_id': user_id,
},
'request': {
'method': request_method,
'path': request_path,
'body': request_body,
},
'response': {
'status_code': status_code,
'body': response_body,
}
}
# notifier_fn blocks while RabbitMQ is down, which prevents the Valet API
# from returning its response, so send the notification in a separate thread.
notifier_thread = threading.Thread(target=notifier_fn, args=(ctxt, event_type, payload))
notifier_thread.start()
# Launch a timer to verify no hung threads are left behind;
# when the timeout expires, kill the notifier thread if it is still alive.
watcher = threading.Timer(conf.messaging.timeout, terminate_thread, args=[notifier_thread])
watcher.start()
LOG.info('valet notification hook - end')
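The event type assembled in `after()` is built from "api" plus any non-empty path segments popped from the request; a minimal Python 3 sketch of that assembly (the helper name and sample segments are illustrative):

```python
def event_type_from_path(path_parts):
    # Mirrors MessageNotificationHook.after(): 'api' plus any non-empty
    # path segments (API version, subject), joined with dots.
    return '.'.join(['api'] + [part for part in path_parts if part])

print(event_type_from_path(['v1', 'plans']))  # api.v1.plans
```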
class NotFoundHook(PecanHook):
'''Catchall 'not found' hook for API'''
def on_error(self, state, exc):
'''Redirects to the app-specific not_found endpoint on 404 errors only'''
self.dummy = True
if isinstance(exc, webob.exc.WSGIHTTPException) and exc.code == 404:
message = _('The resource could not be found.')
error('/errors/not_found', message)

23
valet/api/common/i18n.py Normal file

@ -0,0 +1,23 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
"""i18n library"""
import gettext
_ = gettext.gettext


@ -0,0 +1,155 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Identity helper library'''
from datetime import datetime
import iso8601
# https://github.com/openstack/python-keystoneclient/blob/
# master/keystoneclient/v2_0/client.py
# import keystoneauth1.exceptions
from keystoneauth1.identity import v2
from keystoneauth1 import session
from keystoneclient.v2_0 import client
import logging
from pecan import conf
import pytz
LOG = logging.getLogger(__name__)
def utcnow():
'''Returns the time (UTC)'''
return datetime.now(tz=pytz.utc)
class Identity(object):
'''Convenience library for all identity service-related queries.'''
_args = None
_client = None
_interface = None
_session = None
@classmethod
def is_token_admin(cls, token):
'''Returns true if decoded token has an admin role'''
for role in token.user.get('roles', []):
if role.get('name') == 'admin':
return True
return False
@classmethod
def tenant_from_token(cls, token):
'''Returns tenant id from decoded token'''
return token.tenant.get('id', None)
@classmethod
def user_from_token(cls, token):
'''Returns user id from decoded token'''
return token.user.get('id', None)
def __init__(self, interface='admin', **kwargs):
'''Initializer.'''
self._interface = interface
self._args = kwargs
self._client = None
self._session = None
@property
def _client_expired(self):
'''Returns True if cached client's token is expired.'''
# NOTE: Keystone may auto-regen the client now (v2? v3?)
# If so, this trip may no longer be necessary. Doesn't
# hurt to keep it around for the time being.
if not self._client or not self._client.auth_ref:
return True
token = self._client.auth_ref.get('token')
if not token:
return True
timestamp = token.get('expires')
if not timestamp:
return True
return iso8601.parse_date(timestamp) <= utcnow()
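`_client_expired` treats a cached token as expired once its `expires` timestamp is at or before the current UTC time. A Python 3 sketch of just that comparison, using the stdlib's `datetime.fromisoformat` as a stand-in for `iso8601.parse_date`:

```python
from datetime import datetime, timezone

def token_expired(expires_iso):
    # Expired when the token's 'expires' timestamp is at or before
    # now (UTC), matching Identity._client_expired above.
    return datetime.fromisoformat(expires_iso) <= datetime.now(timezone.utc)

print(token_expired('2000-01-01T00:00:00+00:00'))  # True (long past)
```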
@property
def client(self):
'''Returns an identity client.'''
if not self._client or self._client_expired:
auth = v2.Password(**self._args)
self._session = session.Session(auth=auth)
self._client = client.Client(session=self._session,
interface=self._interface)
return self._client
@property
def session(self):
'''Read-only access to the session.'''
return self._session
def validate_token(self, auth_token):
'''Returns validated token or None if invalid'''
kwargs = {
'token': auth_token,
}
try:
return self.client.tokens.validate(**kwargs)
except Exception as ex:
LOG.error("Identity.validate_token: " + ex.message)
return None
def is_tenant_list_valid(self, tenant_list):
'''Returns true if tenant list contains valid tenant IDs'''
tenants = self.client.tenants.list()
if isinstance(tenant_list, list):
found = False
for tenant_id in tenant_list:
found = is_tenant_in_tenants(tenant_id, tenants)
if found:
break
return found
return False
def is_tenant_in_tenants(tenant_id, tenants):
for tenant in tenants:
if tenant_id == tenant.id:
return True
return False
def _identity_engine_from_config(config):
'''Initialize the identity engine based on supplied config.'''
# Using tenant_name instead of project name due to keystone v2
kwargs = {
'username': config.get('username'),
'password': config.get('password'),
'tenant_name': config.get('project_name'),
'auth_url': config.get('auth_url'),
}
interface = config.get('interface')
engine = Identity(interface, **kwargs)
return engine
def init_identity():
'''Initialize the identity engine and place in the config.'''
config = conf.identity.config
engine = _identity_engine_from_config(config)
conf.identity.engine = engine


@ -0,0 +1,147 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Identity helper library'''
from datetime import datetime
import iso8601
# https://github.com/openstack/python-keystoneclient/blob/
# master/keystoneclient/v2_0/client.py
import keystoneauth1.exceptions
from keystoneauth1.identity import v2
from keystoneauth1 import session
from keystoneclient.v2_0 import client
from pecan import conf
import pytz
def utcnow():
'''Returns the time (UTC)'''
return datetime.now(tz=pytz.utc)
class Identity(object):
'''Convenience library for all identity service-related queries.'''
_args = None
_client = None
_interface = None
_session = None
@classmethod
def is_token_admin(cls, token):
'''Returns true if decoded token has an admin role'''
for role in token.user.get('roles', []):
if role.get('name') == 'admin':
return True
return False
@classmethod
def tenant_from_token(cls, token):
'''Returns tenant id from decoded token'''
return token.tenant.get('id', None)
@classmethod
def user_from_token(cls, token):
'''Returns user id from decoded token'''
return token.user.get('id', None)
def __init__(self, interface='admin', **kwargs):
'''Initializer.'''
self._interface = interface
self._args = kwargs
self._client = None
self._session = None
@property
def _client_expired(self):
'''Returns True if cached client's token is expired.'''
# NOTE: Keystone may auto-regen the client now (v2? v3?)
# If so, this trip may no longer be necessary. Doesn't
# hurt to keep it around for the time being.
if not self._client or not self._client.auth_ref:
return True
token = self._client.auth_ref.get('token')
if not token:
return True
timestamp = token.get('expires')
if not timestamp:
return True
return iso8601.parse_date(timestamp) <= utcnow()
@property
def client(self):
'''Returns an identity client.'''
if not self._client or self._client_expired:
auth = v2.Password(**self._args)
self._session = session.Session(auth=auth)
self._client = client.Client(session=self._session,
interface=self._interface)
return self._client
@property
def session(self):
'''Read-only access to the session.'''
return self._session
def validate_token(self, auth_token):
'''Returns validated token or None if invalid'''
kwargs = {
'token': auth_token,
}
try:
return self.client.tokens.validate(**kwargs)
except keystoneauth1.exceptions.http.NotFound:
# FIXME: Return a 404 or at least an auth required?
pass
return None
def is_tenant_list_valid(self, tenant_list):
'''Returns true if tenant list contains valid tenant IDs'''
tenants = self.client.tenants.list()
if isinstance(tenant_list, list):
for tenant_id in tenant_list:
found = False
for tenant in tenants:
if tenant_id == tenant.id:
found = True
break
if not found:
return False
return True
return False
def _identity_engine_from_config(config):
'''Initialize the identity engine based on supplied config.'''
# Using tenant_name instead of project name due to keystone v2
kwargs = {
'username': config.get('username'),
'password': config.get('password'),
'tenant_name': config.get('project_name'),
'auth_url': config.get('auth_url'),
}
interface = config.get('interface')
engine = Identity(interface, **kwargs)
return engine
def init_identity():
'''Initialize the identity engine and place in the config.'''
config = conf.identity.config
engine = _identity_engine_from_config(config)
conf.identity.engine = engine


@ -0,0 +1,43 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Messaging helper library'''
from oslo_config import cfg
import oslo_messaging as messaging
from pecan import conf
from valet.api.conf import set_domain, DOMAIN
def _messaging_notifier_from_config(config):
'''Initialize the messaging engine based on supplied config.'''
transport_url = config.get('transport_url')
transport = messaging.get_transport(cfg.CONF, transport_url)
notifier = messaging.Notifier(transport, driver='messaging',
publisher_id='valet',
topic='notifications', retry=10)
return notifier
def init_messaging():
'''Initialize the messaging engine and place in the config.'''
set_domain(DOMAIN)
config = conf.messaging.config
notifier = _messaging_notifier_from_config(config)
conf.messaging.notifier = notifier
conf.messaging.timeout = cfg.CONF.messaging.timeout


@ -0,0 +1,315 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Ostro helper library'''
import json
import logging
from pecan import conf
import time
import uuid
from valet.api.common.i18n import _
from valet.api.db.models import Group
from valet.api.db.models import PlacementRequest
from valet.api.db.models import PlacementResult
from valet.api.db.models import Query
LOG = logging.getLogger(__name__)
SERVICEABLE_RESOURCES = [
'OS::Nova::Server'
]
GROUP_ASSIGNMENT = 'ATT::Valet::GroupAssignment'
GROUP_TYPE = 'group_type'
GROUP_NAME = 'group_name'
AFFINITY = 'affinity'
DIVERSITY = 'diversity'
EXCLUSIVITY = 'exclusivity'
def _log(text, title="Ostro"):
'''Log helper'''
log_text = "%s: %s" % (title, text)
LOG.debug(log_text)
class Ostro(object):
'''Ostro optimization engine helper class.'''
args = None
request = None
response = None
error_uri = None
tenant_id = None
tries = None # Number of times to poll for placement.
interval = None # Interval in seconds to poll for placement.
@classmethod
def _build_error(cls, message):
'''Build an Ostro-style error message'''
if not message:
message = _("Unknown error")
error = {
'status': {
'type': 'error',
'message': message,
}
}
return error
@classmethod
def _build_uuid_map(cls, resources):
'''Build a dict mapping names to UUIDs.'''
mapping = {}
for key in resources.iterkeys():
if 'name' in resources[key]:
name = resources[key]['name']
mapping[name] = key
return mapping
@classmethod
def _sanitize_resources(cls, resources):
'''Ensure lowercase keys at the top level of each resource.'''
for res in resources.itervalues():
for key in list(res.keys()):
if not key.islower():
res[key.lower()] = res.pop(key)
return resources
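`_sanitize_resources` lowercases only the top-level keys of each resource dict; a standalone Python 3 sketch of the same logic:

```python
def sanitize_resources(resources):
    # Lowercase only the top-level keys of each resource dict,
    # as _sanitize_resources does above.
    for res in resources.values():
        for key in list(res.keys()):
            if not key.islower():
                res[key.lower()] = res.pop(key)
    return resources

resources = {'r1': {'Type': 'OS::Nova::Server', 'name': 'web'}}
sanitize_resources(resources)
```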
def __init__(self):
'''Initializer'''
self.tries = conf.music.get('tries', 10)
self.interval = conf.music.get('interval', 1)
def _map_names_to_uuids(self, mapping, data):
'''Map resource names to their UUID equivalents.'''
if isinstance(data, dict):
for key in data.iterkeys():
if key != 'name':
data[key] = self._map_names_to_uuids(mapping, data[key])
elif isinstance(data, list):
for key, value in enumerate(data):
data[key] = self._map_names_to_uuids(mapping, value)
elif isinstance(data, basestring) and data in mapping:
return mapping[data]
return data
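`_build_uuid_map` and `_map_names_to_uuids` together rewrite Heat resource name references into orchestration UUIDs while leaving `name` values themselves intact. A condensed Python 3 sketch (using `str` where the original, Python 2 code checks `basestring`; the sample resources are illustrative):

```python
def build_uuid_map(resources):
    # name -> orchestration UUID, as _build_uuid_map does.
    return {res['name']: key for key, res in resources.items() if 'name' in res}

def map_names_to_uuids(mapping, data):
    # Recursively replace name references with UUIDs, skipping
    # 'name' values themselves (cf. _map_names_to_uuids above).
    if isinstance(data, dict):
        return {key: (value if key == 'name' else map_names_to_uuids(mapping, value))
                for key, value in data.items()}
    if isinstance(data, list):
        return [map_names_to_uuids(mapping, item) for item in data]
    if isinstance(data, str) and data in mapping:
        return mapping[data]
    return data

resources = {'uuid-1': {'name': 'db'},
             'uuid-2': {'name': 'web', 'depends_on': ['db']}}
mapped = map_names_to_uuids(build_uuid_map(resources), resources)
```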
def _prepare_resources(self, resources):
''' Pre-digests resource data for use by Ostro.
Maps Heat resource names to Orchestration UUIDs.
Ensures exclusivity groups exist and have tenant_id as a member.
'''
mapping = self._build_uuid_map(resources)
ostro_resources = self._map_names_to_uuids(mapping, resources)
self._sanitize_resources(ostro_resources)
verify_error = self._verify_groups(ostro_resources, self.tenant_id)
if isinstance(verify_error, dict):
return verify_error
return {'resources': ostro_resources}
# TODO(JD): This really belongs in valet-engine once it exists.
def _send(self, stack_id, request):
'''Send request.'''
# Creating the placement request effectively enqueues it.
PlacementRequest(stack_id=stack_id, request=request) # pylint: disable=W0612
# Wait for a response.
# TODO(JD): This is a blocking operation at the moment.
for __ in range(self.tries, 0, -1): # pylint: disable=W0612
query = Query(PlacementResult)
placement_result = query.filter_by(stack_id=stack_id).first()
if placement_result:
placement = placement_result.placement
placement_result.delete()
return placement
else:
time.sleep(self.interval)
self.error_uri = '/errors/server_error'
message = "Timed out waiting for a response."
response = self._build_error(message)
return json.dumps(response)
def _verify_groups(self, resources, tenant_id):
''' Verifies group settings. Returns an error status dict if the
group type is invalid, if a group name is used when the type
is affinity or diversity, if a nonexistent exclusivity group
is found, or if the tenant is not a group member.
Returns None if ok.
'''
message = None
for res in resources.itervalues():
res_type = res.get('type')
if res_type == GROUP_ASSIGNMENT:
properties = res.get('properties')
group_type = properties.get(GROUP_TYPE, '').lower()
group_name = properties.get(GROUP_NAME, '').lower()
if group_type == AFFINITY or \
group_type == DIVERSITY:
if group_name:
self.error_uri = '/errors/conflict'
message = _("{0} must not be used when {1} is '{2}'.").format(GROUP_NAME, GROUP_TYPE, group_type)
break
elif group_type == EXCLUSIVITY:
message = self._verify_exclusivity(group_name, tenant_id)
else:
self.error_uri = '/errors/invalid'
message = _("{0} '{1}' is invalid.").format(GROUP_TYPE, group_type)
break
if message:
return self._build_error(message)
def _verify_exclusivity(self, group_name, tenant_id):
return_message = None
if not group_name:
self.error_uri = '/errors/invalid'
return _("{0} must be used when {1} is '{2}'.").format(GROUP_NAME, GROUP_TYPE, EXCLUSIVITY)
group = Group.query.filter_by( # pylint: disable=E1101
name=group_name).first()
if not group:
self.error_uri = '/errors/not_found'
return_message = "%s '%s' not found" % (GROUP_NAME, group_name)
elif group and tenant_id not in group.members:
self.error_uri = '/errors/conflict'
return_message = _("Tenant ID {0} not a member of {1} '{2}' ({3})").format(self.tenant_id, GROUP_NAME, group.name, group.id)
return return_message
def build_request(self, **kwargs):
''' Build an Ostro request. If False is returned,
the response attribute contains status as to the error.
'''
# TODO(JD): Refactor this into create and update methods?
self.args = kwargs.get('args')
self.tenant_id = kwargs.get('tenant_id')
self.response = None
self.error_uri = None
resources = self.args['resources']
if 'resources_update' in self.args:
action = 'update'
resources_update = self.args['resources_update']
else:
action = 'create'
resources_update = None
# If we get any status in the response, it's an error. Bail.
self.response = self._prepare_resources(resources)
if 'status' in self.response:
return False
self.request = {
"action": action,
"resources": self.response['resources'],
"stack_id": self.args['stack_id'],
}
if resources_update:
# If we get any status in the response, it's an error. Bail.
self.response = self._prepare_resources(resources_update)
if 'status' in self.response:
return False
self.request['resources_update'] = self.response['resources']
return True
def is_request_serviceable(self):
''' Returns true if the request has at least one serviceable resource. '''
# TODO(JD): Ostro should return no placements vs throw an error.
resources = self.request.get('resources', {})
for res in resources.itervalues():
res_type = res.get('type')
if res_type and res_type in SERVICEABLE_RESOURCES:
return True
return False
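`is_request_serviceable` scans resource types against the `SERVICEABLE_RESOURCES` list; the check boils down to this Python 3 sketch:

```python
SERVICEABLE_RESOURCES = ['OS::Nova::Server']

def is_request_serviceable(request):
    # True if at least one resource has a serviceable type,
    # mirroring Ostro.is_request_serviceable above.
    return any(res.get('type') in SERVICEABLE_RESOURCES
               for res in request.get('resources', {}).values())

print(is_request_serviceable({'resources': {'r1': {'type': 'OS::Nova::Server'}}}))
```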
def ping(self):
'''Send a ping request and obtain a response.'''
stack_id = str(uuid.uuid4())
self.args = {'stack_id': stack_id}
self.response = None
self.error_uri = None
self.request = {
"action": "ping",
"stack_id": stack_id,
}
def replan(self, **kwargs):
'''Replan a placement.'''
self.args = kwargs.get('args')
self.response = None
self.error_uri = None
self.request = {
"action": "replan",
"stack_id": self.args['stack_id'],
"locations": self.args['locations'],
"orchestration_id": self.args['orchestration_id'],
"exclusions": self.args['exclusions'],
}
def migrate(self, **kwargs):
'''Replan the placement for an existing resource.'''
self.args = kwargs.get('args')
self.response = None
self.error_uri = None
self.request = {
"action": "migrate",
"stack_id": self.args['stack_id'],
"excluded_hosts": self.args['excluded_hosts'],
"orchestration_id": self.args['orchestration_id'],
}
def query(self, **kwargs):
'''Send a query.'''
stack_id = str(uuid.uuid4())
self.args = kwargs.get('args')
self.args['stack_id'] = stack_id
self.response = None
self.error_uri = None
self.request = {
"action": "query",
"stack_id": self.args['stack_id'],
"type": self.args['type'],
"parameters": self.args['parameters'],
}
def send(self):
'''Send the request and obtain a response.'''
request_json = json.dumps([self.request])
# TODO(JD): Pass timeout value?
_log(request_json, 'Ostro Request')
result = self._send(self.args['stack_id'], request_json)
_log(result, 'Ostro Response')
self.response = json.loads(result)
status_type = self.response['status']['type']
if status_type != 'ok':
self.error_uri = '/errors/server_error'
return self.response

69
valet/api/conf.py Normal file

@ -0,0 +1,69 @@
from oslo_config import cfg
DOMAIN = 'valet'
CONF = cfg.CONF
server_group = cfg.OptGroup(name='server', title='Valet API Server conf')
server_opts = [
cfg.StrOpt('host', default='0.0.0.0'),
cfg.StrOpt('port', default='8090'),
]
messaging_group = cfg.OptGroup(name='messaging', title='Valet Messaging conf')
messaging_opts = [
cfg.StrOpt('username'),
cfg.StrOpt('password'),
cfg.StrOpt('host'),
cfg.IntOpt('port', default=5672),
cfg.IntOpt('timeout', default=3),
]
identity_group = cfg.OptGroup(name='identity', title='Valet identity conf')
identity_opts = [
cfg.StrOpt('username'),
cfg.StrOpt('password'),
cfg.StrOpt('project_name'),
cfg.StrOpt('auth_url', default='http://controller:5000/v2.0'),
cfg.StrOpt('interface', default='admin'),
]
music_group = cfg.OptGroup(name='music', title='Valet Persistence conf')
music_opts = [
cfg.StrOpt('host', default='0.0.0.0'),
cfg.IntOpt('port', default=8080),
cfg.StrOpt('keyspace', default='valet'),
cfg.IntOpt('replication_factor', default=3),
cfg.IntOpt('tries', default=10),
cfg.IntOpt('interval', default=1),
cfg.StrOpt('request_table', default='placement_requests'),
cfg.StrOpt('response_table', default='placement_results'),
cfg.StrOpt('event_table', default='oslo_messages'),
cfg.StrOpt('resource_table', default='resource_status'),
cfg.StrOpt('app_table', default='app'),
cfg.StrOpt('resource_index_table', default='resource_log_index'),
cfg.StrOpt('app_index_table', default='app_log_index'),
cfg.StrOpt('uuid_table', default='uuid_map'),
cfg.StrOpt('db_host', default='localhost'),
# cfg.ListOpt('db_hosts', default='valet1,valet2,valet3')
]
def set_domain(project=DOMAIN):
CONF([], project)
def register_conf():
CONF.register_group(server_group)
CONF.register_opts(server_opts, server_group)
CONF.register_group(music_group)
CONF.register_opts(music_opts, music_group)
CONF.register_group(identity_group)
CONF.register_opts(identity_opts, identity_group)
CONF.register_group(messaging_group)
CONF.register_opts(messaging_opts, messaging_group)

0
valet/api/db/__init__.py Normal file


@ -0,0 +1,23 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=W0401
# Leave this here. We will eventually bring back sqlalchemy.
# When that happens, this needs to become a config option.
from .music import * # noqa


@ -0,0 +1,303 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Music ORM - Common Methods'''
from abc import ABCMeta, abstractmethod
import inspect
from pecan import conf
import six
import uuid
from valet.api.common.i18n import _
from valet.api.db.models.music.music import Music
def get_class(kls):
'''Returns a class given a fully qualified class name'''
parts = kls.split('.')
module = ".".join(parts[:-1])
mod = __import__(module)
for comp in parts[1:]:
mod = getattr(mod, comp)
return mod
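`get_class` resolves a dotted class path via `__import__` followed by an attribute walk; the same function works on any importable dotted name, e.g. a stdlib class:

```python
def get_class(kls):
    # Same approach as get_class above: import the module,
    # then walk the remaining dotted components via getattr.
    parts = kls.split('.')
    mod = __import__('.'.join(parts[:-1]))
    for comp in parts[1:]:
        mod = getattr(mod, comp)
    return mod

cls = get_class('collections.OrderedDict')
```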
class abstractclassmethod(classmethod): # pylint: disable=C0103,R0903
'''Abstract Class Method from Python 3.3's abc module'''
__isabstractmethod__ = True
def __init__(self, callable): # pylint: disable=W0622
callable.__isabstractmethod__ = True
super(abstractclassmethod, self).__init__(callable)
class ClassPropertyDescriptor(object): # pylint: disable=R0903
'''Supports the notion of a class property'''
def __init__(self, fget, fset=None):
'''Initializer'''
self.fget = fget
self.fset = fset
def __get__(self, obj, klass=None):
'''Get attribute'''
if klass is None:
klass = type(obj)
return self.fget.__get__(obj, klass)()
def __set__(self, obj, value):
'''Set attribute'''
if not self.fset:
raise AttributeError(_("Can't set attribute"))
type_ = type(obj)
return self.fset.__get__(obj, type_)(value)
def setter(self, func):
'''Setter'''
if not isinstance(func, (classmethod, staticmethod)):
func = classmethod(func)
self.fset = func
return self
def classproperty(func):
'''Class Property decorator'''
if not isinstance(func, (classmethod, staticmethod)):
func = classmethod(func)
return ClassPropertyDescriptor(func)
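`classproperty` lets attributes like `Base.query` be computed per class without instantiation. A condensed, self-contained Python 3 version covering only the read path (the `Model` class is a stand-in for `Base` subclasses):

```python
class ClassPropertyDescriptor:
    '''Read-only class property (condensed from the full version above).'''
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, klass=None):
        if klass is None:
            klass = type(obj)
        return self.fget.__get__(obj, klass)()

def classproperty(func):
    '''Class Property decorator'''
    if not isinstance(func, (classmethod, staticmethod)):
        func = classmethod(func)
    return ClassPropertyDescriptor(func)

class Model:
    __tablename__ = 'groups'

    @classproperty
    def table(cls):
        return cls.__tablename__

print(Model.table)  # accessed on the class, no instance needed
```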
class Results(list):
'''Query results'''
def __init__(self, *args, **kwargs): # pylint: disable=W0613
'''Initializer'''
super(Results, self).__init__(args[0])
def all(self):
'''Return all'''
return self
def first(self):
'''Return first'''
if len(self) > 0:
return self[0]
@six.add_metaclass(ABCMeta)
class Base(object):
''' A custom declarative base that provides some Elixir-inspired shortcuts. '''
__tablename__ = None
@classproperty
def query(cls): # pylint: disable=E0213
'''Return a query object a la sqlalchemy'''
return Query(cls)
@classmethod
def __kwargs(cls):
'''Return common keyword args'''
keyspace = conf.music.get('keyspace')
kwargs = {
'keyspace': keyspace,
'table': cls.__tablename__,
}
return kwargs
@classmethod
def create_table(cls):
'''Create table'''
kwargs = cls.__kwargs()
kwargs['schema'] = cls.schema()
conf.music.engine.create_table(**kwargs)
@abstractclassmethod
def schema(cls):
'''Return schema'''
return cls()
@abstractclassmethod
def pk_name(cls):
'''Primary key name'''
return cls()
@abstractmethod
def pk_value(self):
'''Primary key value'''
pass
@abstractmethod
def values(self):
'''Values'''
pass
def insert(self):
'''Insert row'''
kwargs = self.__kwargs()
kwargs['values'] = self.values()
pk_name = self.pk_name()
if pk_name not in kwargs['values']:
the_id = str(uuid.uuid4())
kwargs['values'][pk_name] = the_id
setattr(self, pk_name, the_id)
conf.music.engine.create_row(**kwargs)
def update(self):
'''Update row'''
kwargs = self.__kwargs()
kwargs['pk_name'] = self.pk_name()
kwargs['pk_value'] = self.pk_value()
kwargs['values'] = self.values()
conf.music.engine.update_row_eventually(**kwargs)
def delete(self):
'''Delete row'''
kwargs = self.__kwargs()
kwargs['pk_name'] = self.pk_name()
kwargs['pk_value'] = self.pk_value()
conf.music.engine.delete_row_eventually(**kwargs)
@classmethod
def filter_by(cls, **kwargs):
'''Filter objects'''
return cls.query.filter_by(**kwargs) # pylint: disable=E1101
def flush(self, *args, **kwargs):
'''Flush changes to storage'''
# TODO(JD): Implement in music? May be a no-op
pass
def as_dict(self):
'''Return object representation as a dictionary'''
return dict((k, v) for k, v in self.__dict__.items()
if not k.startswith('_'))
class Query(object):
'''Data Query'''
model = None
def __init__(self, model):
'''Initializer'''
if inspect.isclass(model):
self.model = model
elif isinstance(model, basestring):
self.model = get_class('valet.api.db.models.' + model)
assert inspect.isclass(self.model)
def __kwargs(self):
'''Return common keyword args'''
keyspace = conf.music.get('keyspace')
kwargs = {
'keyspace': keyspace,
'table': self.model.__tablename__, # pylint: disable=E1101
}
return kwargs
def __rows_to_objects(self, rows):
'''Convert query response rows to objects'''
results = []
pk_name = self.model.pk_name() # pylint: disable=E1101
for __, row in rows.iteritems(): # pylint: disable=W0612
the_id = row.pop(pk_name)
result = self.model(_insert=False, **row)
setattr(result, pk_name, the_id)
results.append(result)
return Results(results)
def all(self):
'''Return all objects'''
kwargs = self.__kwargs()
rows = conf.music.engine.read_all_rows(**kwargs)
return self.__rows_to_objects(rows)
def filter_by(self, **kwargs):
'''Filter objects'''
# Music doesn't allow filtering on anything but the primary key.
# We need to get all items and then go looking for what we want.
all_items = self.all()
filtered_items = Results([])
# For every candidate ...
for item in all_items:
passes = True
# All filters are AND-ed.
for key, value in kwargs.items():
if getattr(item, key) != value:
passes = False
break
if passes:
filtered_items.append(item)
return filtered_items
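Because Music filters only on the primary key, `Query.filter_by` fetches everything and AND-filters client-side. The semantics in a standalone Python 3 sketch (the `Row` class is a stand-in for model objects):

```python
class Row:
    def __init__(self, **fields):
        self.__dict__.update(fields)

def filter_by(items, **kwargs):
    # All filters are AND-ed, as in Query.filter_by above.
    return [item for item in items
            if all(getattr(item, key) == value
                   for key, value in kwargs.items())]

rows = [Row(stack_id='s1', status='ok'),
        Row(stack_id='s1', status='error'),
        Row(stack_id='s2', status='ok')]
hits = filter_by(rows, stack_id='s1', status='ok')
```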
def init_model():
'''Data Store Initialization'''
conf.music.engine = _engine_from_config(conf.music)
keyspace = conf.music.get('keyspace')
conf.music.engine.create_keyspace(keyspace)
def _engine_from_config(configuration):
'''Create database engine object based on configuration'''
configuration = dict(configuration)
kwargs = {
'host': configuration.get('host'),
'port': configuration.get('port'),
'replication_factor': configuration.get('replication_factor'),
}
return Music(**kwargs)
def start():
'''Start transaction'''
pass
def start_read_only():
'''Start read-only transaction'''
start()
def commit():
'''Commit transaction'''
pass
def rollback():
'''Rollback transaction'''
pass
def clear():
'''Clear transaction'''
pass
def flush():
'''Flush to disk'''
pass
from groups import Group # noqa
from ostro import PlacementRequest, PlacementResult, Event # noqa
from placements import Placement # noqa
from plans import Plan # noqa


@ -0,0 +1,94 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Group Model'''
from . import Base
import simplejson
class Group(Base):
'''Group model'''
__tablename__ = 'groups'
id = None # pylint: disable=C0103
name = None
description = None
type = None # pylint: disable=W0622
members = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'id': 'text',
'name': 'text',
'description': 'text',
'type': 'text',
'members': 'text',
'PRIMARY KEY': '(id)',
}
return schema
@classmethod
def pk_name(cls):
'''Primary key name'''
return 'id'
def pk_value(self):
'''Primary key value'''
return self.id
def values(self):
'''Values'''
# TODO(JD): Support lists in Music
# Lists aren't directly supported in Music, so we have to
# convert to/from json on the way out/in.
return {
'name': self.name,
'description': self.description,
'type': self.type,
'members': simplejson.dumps(self.members),
}
def __init__(self, name, description, type, members, _insert=True):
'''Initializer'''
super(Group, self).__init__()
self.name = name
self.description = description or ""
self.type = type
if _insert:
self.members = [] # members ignored at init time
self.insert()
else:
# TODO(JD): Support lists in Music
self.members = simplejson.loads(members)
def __repr__(self):
'''Object representation'''
return '<Group %r>' % self.name
def __json__(self):
'''JSON representation'''
json_ = {}
json_['id'] = self.id
json_['name'] = self.name
json_['description'] = self.description
json_['type'] = self.type
json_['members'] = self.members
return json_
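Since Music has no native list type, `Group` stores `members` as JSON text (see `values()` and `__init__`). A minimal sketch of that round trip, using the stdlib `json` in place of `simplejson` and made-up tenant IDs:

```python
import json

# Serialize on write, as Group.values() does for the 'members' text column.
members = ["tenant-a", "tenant-b"]
stored = json.dumps(members)
assert isinstance(stored, str)

# Parse on read, as Group.__init__ does when _insert=False.
restored = json.loads(stored)
assert restored == members
```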


@@ -0,0 +1,335 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Music Data Store API'''
import json
import logging
import time
from valet.api.common.i18n import _
import requests
LOG = logging.getLogger(__name__)
class REST(object):
'''Helper class for REST operations.'''
hosts = None
port = None
path = None
timeout = None
_urls = None
def __init__(self, hosts, port, path='/', timeout='10'):
'''Initializer. Accepts target host list, port, and path.'''
self.hosts = hosts # List of IP or FQDNs
self.port = port # Port Number
self.path = path # Path starting with /
self.timeout = float(timeout) # REST request timeout in seconds
@property
def urls(self):
'''Returns list of URLs using each host, plus the port/path.'''
if not self._urls:
urls = []
for host in self.hosts:
# Must end without a slash
urls.append('http://%(host)s:%(port)s%(path)s' % {
'host': host,
'port': self.port,
'path': self.path,
})
self._urls = urls
return self._urls
@staticmethod
def __headers(content_type='application/json'):
'''Returns HTTP request headers.'''
headers = {
'accept': content_type,
'content-type': content_type,
}
return headers
def request(self, method='get', content_type='application/json', path='/', data=None):
''' Performs HTTP request '''
if method not in ('post', 'get', 'put', 'delete'):
raise KeyError(_("Method must be one of post, get, put, or delete."))
method_fn = getattr(requests, method)
response = None
for url in self.urls:
# Try each url in turn. First one to succeed wins.
full_url = url + path
try:
data_json = json.dumps(data) if data else None
LOG.debug("Music Request: %s %s%s", method.upper(), full_url,
data_json if data else '')
response = method_fn(full_url, data=data_json,
headers=self.__headers(content_type),
timeout=self.timeout)
response.raise_for_status()
return response
except requests.exceptions.Timeout as err:
response = requests.Response()
response.status_code = 408
response.url = full_url
LOG.debug("Music: %s", err.message)
except requests.exceptions.RequestException as err:
response = requests.Response()
response.status_code = 400
response.url = full_url
LOG.debug("Music: %s", err.message)
# If we get here, an exception was raised for every url,
# but we passed so we could try each endpoint. Raise status
# for the last attempt (for now) so that we report something.
if response:
response.raise_for_status()
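The `urls` property above builds one endpoint per host from the configured port and path. A standalone sketch of the same expansion (the hosts and path here are illustrative, not taken from a real deployment):

```python
def build_urls(hosts, port, path):
    # Mirrors REST.urls: 'http://<host>:<port><path>', one entry per host,
    # with no trailing slash appended.
    return ['http://%(host)s:%(port)s%(path)s' % {
        'host': host, 'port': port, 'path': path} for host in hosts]

urls = build_urls(['10.0.0.1', 'music.example.com'], '8080', '/MUSIC/rest')
assert urls == ['http://10.0.0.1:8080/MUSIC/rest',
                'http://music.example.com:8080/MUSIC/rest']
```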
class Music(object):
'''Wrapper for Music API'''
lock_names = None # Cache of lock names created during session
lock_timeout = None # Maximum time in seconds to acquire a lock
rest = None # API Endpoint
replication_factor = None # Number of Music nodes to replicate across
def __init__(self, host=None, hosts=None, # pylint: disable=R0913
port='8080', lock_timeout=10, replication_factor=3):
'''Initializer. Accepts a lock_timeout for atomic operations.'''
# If one host is provided, that overrides the list
if not hosts:
hosts = ['localhost']
if host:
hosts = [host]
kwargs = {
'hosts': hosts,
'port': port,
'path': '/MUSIC/rest',
}
self.rest = REST(**kwargs)
self.lock_names = []
self.lock_timeout = lock_timeout
self.replication_factor = replication_factor
def create_keyspace(self, keyspace):
'''Creates a keyspace.'''
data = {
'replicationInfo': {
# 'class': 'NetworkTopologyStrategy',
# 'dc1': self.replication_factor,
'class': 'SimpleStrategy',
'replication_factor': self.replication_factor,
},
'durabilityOfWrites': True,
'consistencyInfo': {
'type': 'eventual',
},
}
path = '/keyspaces/%s' % keyspace
response = self.rest.request(method='post', path=path, data=data)
return response.ok
def create_table(self, keyspace, table, schema):
'''Creates a table.'''
data = {
'fields': schema,
'consistencyInfo': {
'type': 'eventual',
},
}
path = '/keyspaces/%(keyspace)s/tables/%(table)s/' % {
'keyspace': keyspace,
'table': table,
}
response = self.rest.request(method='post', path=path, data=data)
return response.ok
def version(self):
'''Returns version string.'''
path = '/version'
response = self.rest.request(method='get',
content_type='text/plain', path=path)
return response.text
def create_row(self, keyspace, table, values):
'''Create a row.'''
data = {
'values': values,
'consistencyInfo': {
'type': 'eventual',
},
}
path = '/keyspaces/%(keyspace)s/tables/%(table)s/rows' % {
'keyspace': keyspace,
'table': table,
}
response = self.rest.request(method='post', path=path, data=data)
return response.ok
def create_lock(self, lock_name):
'''Returns the lock id. Use for acquiring and releasing.'''
path = '/locks/create/%s' % lock_name
response = self.rest.request(method='post',
content_type='text/plain', path=path)
return response.text
def acquire_lock(self, lock_id):
'''Acquire a lock.'''
path = '/locks/acquire/%s' % lock_id
response = self.rest.request(method='get',
content_type='text/plain', path=path)
return response.text.lower() == 'true'
def release_lock(self, lock_id):
'''Release a lock.'''
path = '/locks/release/%s' % lock_id
response = self.rest.request(method='delete',
content_type='text/plain', path=path)
return response.ok
@staticmethod
def __row_url_path(keyspace, table, pk_name, pk_value):
'''Returns a Music-compliant row URL path.'''
path = '/keyspaces/%(keyspace)s/tables/%(table)s/rows' % {
'keyspace': keyspace,
'table': table,
}
if pk_name and pk_value:
path += '?%s=%s' % (pk_name, pk_value)
return path
def update_row_eventually(self, keyspace, table, # pylint: disable=R0913
pk_name, pk_value, values):
'''Update a row. Not atomic.'''
data = {
'values': values,
'consistencyInfo': {
'type': 'eventual',
},
}
path = self.__row_url_path(keyspace, table, pk_name, pk_value)
response = self.rest.request(method='put', path=path, data=data)
return response.ok
def update_row_atomically(self, keyspace, table, # pylint: disable=R0913
pk_name, pk_value, values):
'''Update a row atomically.'''
# Create lock for the candidate. The Music API dictates that the
# lock name must be of the form keyspace.table.primary_key
lock_name = '%(keyspace)s.%(table)s.%(primary_key)s' % {
'keyspace': keyspace,
'table': table,
'primary_key': pk_value,
}
self.lock_names.append(lock_name)
lock_id = self.create_lock(lock_name)
time_now = time.time()
while not self.acquire_lock(lock_id):
if time.time() - time_now > self.lock_timeout:
raise IndexError(_('Lock acquire timeout: %s') % lock_name)
# Update entry now that we have the lock.
data = {
'values': values,
'consistencyInfo': {
'type': 'atomic',
'lockId': lock_id,
},
}
path = self.__row_url_path(keyspace, table, pk_name, pk_value)
response = self.rest.request(method='put', path=path, data=data)
# Release lock now that the operation is done.
self.release_lock(lock_id)
# FIXME: Wouldn't we delete the lock at this point?
return response.ok
def delete_row_eventually(self, keyspace, table, pk_name, pk_value):
'''Delete a row. Not atomic.'''
data = {
'consistencyInfo': {
'type': 'eventual',
},
}
path = self.__row_url_path(keyspace, table, pk_name, pk_value)
response = self.rest.request(method='delete', path=path, data=data)
return response.ok
def read_row(self, keyspace, table, pk_name, pk_value, log=None):
'''Read one row based on a primary key name/value.'''
path = self.__row_url_path(keyspace, table, pk_name, pk_value)
response = self.rest.request(path=path)
if log:
log.debug("response is %s, path is %s" % (response, path))
return response.json()
def read_all_rows(self, keyspace, table):
'''Read all rows.'''
return self.read_row(keyspace, table, pk_name=None, pk_value=None)
def drop_keyspace(self, keyspace):
'''Drops a keyspace.'''
data = {
'consistencyInfo': {
'type': 'eventual',
},
}
path = '/keyspaces/%s' % keyspace
response = self.rest.request(method='delete', path=path, data=data)
return response.ok
def delete_lock(self, lock_name):
'''Deletes a lock by name.'''
path = '/locks/delete/%s' % lock_name
response = self.rest.request(content_type='text/plain',
method='delete', path=path)
return response.ok
def delete_all_locks(self):
'''Delete all locks created during the lifetime of this object.'''
# TODO(JD): Shouldn't this really be part of internal cleanup?
# FIXME: It can be several API calls. Any way to do in one fell swoop?
for lock_name in self.lock_names:
self.delete_lock(lock_name)
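`update_row_atomically` retries `acquire_lock` in a tight loop until `lock_timeout` seconds pass. The retry pattern in isolation, with a stub standing in for the Music lock call (note that the real loop, like this sketch, has no back-off sleep between attempts):

```python
import time

def acquire_with_timeout(try_acquire, timeout):
    # Mirrors the loop in Music.update_row_atomically: retry until
    # try_acquire() returns True or `timeout` seconds have elapsed.
    start = time.time()
    while not try_acquire():
        if time.time() - start > timeout:
            raise IndexError('Lock acquire timeout')
    return True

# Stub lock that fails twice before succeeding.
attempts = {'n': 0}
def fake_acquire():
    attempts['n'] += 1
    return attempts['n'] >= 3

assert acquire_with_timeout(fake_acquire, timeout=5) is True
assert attempts['n'] == 3
```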


@@ -0,0 +1,180 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Ostro Models'''
from . import Base
class PlacementRequest(Base):
'''Placement Request Model'''
__tablename__ = 'placement_requests'
stack_id = None
request = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'stack_id': 'text',
'request': 'text',
'PRIMARY KEY': '(stack_id)',
}
return schema
@classmethod
def pk_name(cls):
'''Primary key name'''
return 'stack_id'
def pk_value(self):
'''Primary key value'''
return self.stack_id
def values(self):
'''Values'''
return {
'stack_id': self.stack_id,
'request': self.request,
}
def __init__(self, request, stack_id=None, _insert=True):
'''Initializer'''
super(PlacementRequest, self).__init__()
self.stack_id = stack_id
self.request = request
if _insert:
self.insert()
def __repr__(self):
'''Object representation'''
return '<PlacementRequest %r>' % self.stack_id
def __json__(self):
'''JSON representation'''
json_ = {}
json_['stack_id'] = self.stack_id
json_['request'] = self.request
return json_
class PlacementResult(Base):
'''Placement Result Model'''
__tablename__ = 'placement_results'
stack_id = None
placement = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'stack_id': 'text',
'placement': 'text',
'PRIMARY KEY': '(stack_id)',
}
return schema
@classmethod
def pk_name(cls):
'''Primary key name'''
return 'stack_id'
def pk_value(self):
'''Primary key value'''
return self.stack_id
def values(self):
'''Values'''
return {
'stack_id': self.stack_id,
'placement': self.placement,
}
def __init__(self, placement, stack_id=None, _insert=True):
'''Initializer'''
super(PlacementResult, self).__init__()
self.stack_id = stack_id
self.placement = placement
if _insert:
self.insert()
def __repr__(self):
'''Object representation'''
return '<PlacementResult %r>' % self.stack_id
def __json__(self):
'''JSON representation'''
json_ = {}
json_['stack_id'] = self.stack_id
json_['placement'] = self.placement
return json_
class Event(Base):
'''Event Model'''
__tablename__ = 'events'
event_id = None
event = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'event_id': 'text',
'event': 'text',
'PRIMARY KEY': '(event_id)',
}
return schema
@classmethod
def pk_name(cls):
'''Primary key name'''
return 'event_id'
def pk_value(self):
'''Primary key value'''
return self.event_id
def values(self):
'''Values'''
return {
'event_id': self.event_id,
'event': self.event,
}
def __init__(self, event, event_id=None, _insert=True):
'''Initializer'''
super(Event, self).__init__()
self.event_id = event_id
self.event = event
if _insert:
self.insert()
def __repr__(self):
'''Object representation'''
return '<Event %r>' % self.event_id
def __json__(self):
'''JSON representation'''
json_ = {}
json_['event_id'] = self.event_id
json_['event'] = self.event
return json_


@@ -0,0 +1,101 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Placement Model'''
from . import Base, Query
class Placement(Base):
'''Placement Model'''
__tablename__ = 'placements'
id = None # pylint: disable=C0103
name = None
orchestration_id = None
resource_id = None
location = None
reserved = None
plan_id = None
plan = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'id': 'text',
'name': 'text',
'orchestration_id': 'text',
'resource_id': 'text',
'location': 'text',
'reserved': 'boolean',
'plan_id': 'text',
'PRIMARY KEY': '(id)',
}
return schema
@classmethod
def pk_name(cls):
'''Primary key name'''
return 'id'
def pk_value(self):
'''Primary key value'''
return self.id
def values(self):
'''Values'''
return {
'name': self.name,
'orchestration_id': self.orchestration_id,
'resource_id': self.resource_id,
'location': self.location,
'reserved': self.reserved,
'plan_id': self.plan_id,
}
def __init__(self, name, orchestration_id, resource_id=None, plan=None,
plan_id=None, location=None, reserved=False, _insert=True):
'''Initializer'''
super(Placement, self).__init__()
self.name = name
self.orchestration_id = orchestration_id
self.resource_id = resource_id
if plan_id:
plan = Query("Plan").filter_by(id=plan_id).first()
self.plan = plan
self.plan_id = plan.id
self.location = location
self.reserved = reserved
if _insert:
self.insert()
def __repr__(self):
'''Object representation'''
return '<Placement %r>' % self.name
def __json__(self):
'''JSON representation'''
json_ = {}
json_['id'] = self.id
json_['name'] = self.name
json_['orchestration_id'] = self.orchestration_id
json_['resource_id'] = self.resource_id
json_['location'] = self.location
json_['reserved'] = self.reserved
json_['plan_id'] = self.plan_id
return json_


@@ -0,0 +1,98 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Plan Model'''
from . import Base, Query
class Plan(Base):
'''Plan model'''
__tablename__ = 'plans'
id = None # pylint: disable=C0103
name = None
stack_id = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'id': 'text',
'name': 'text',
'stack_id': 'text',
'PRIMARY KEY': '(id)',
}
return schema
@classmethod
def pk_name(cls):
'''Primary key name'''
return 'id'
def pk_value(self):
'''Primary key value'''
return self.id
def values(self):
'''Values'''
return {
'name': self.name,
'stack_id': self.stack_id,
}
def __init__(self, name, stack_id, _insert=True):
'''Initializer'''
super(Plan, self).__init__()
self.name = name
self.stack_id = stack_id
if _insert:
self.insert()
def placements(self):
'''Return list of placements'''
# TODO(JD): Make this a property?
all_results = Query("Placement").all()
results = []
for placement in all_results:
if placement.plan_id == self.id:
results.append(placement)
return results
@property
def orchestration_ids(self):
'''Return list of orchestration IDs'''
return list(set([p.orchestration_id for p in self.placements()]))
def __repr__(self):
'''Object representation'''
return '<Plan %r>' % self.name
def __json__(self):
'''JSON representation'''
json_ = {}
json_['id'] = self.id
json_['stack_id'] = self.stack_id
json_['name'] = self.name
json_['placements'] = {}
for placement in self.placements():
json_['placements'][placement.orchestration_id] = dict(
name=placement.name,
location=placement.location)
return json_
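`Plan.orchestration_ids` deduplicates with `list(set(...))`, which also discards ordering. The same operation sketched with hypothetical placement records:

```python
# Hypothetical placements; only orchestration_id matters for this property.
placements = [
    {'orchestration_id': 'oid-1'},
    {'orchestration_id': 'oid-2'},
    {'orchestration_id': 'oid-1'},  # duplicate, dropped by set()
]

orchestration_ids = list(set(p['orchestration_id'] for p in placements))
assert sorted(orchestration_ids) == ['oid-1', 'oid-2']  # order not guaranteed
```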

valet/api/v1/__init__.py Normal file


@@ -0,0 +1,72 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Populate command'''
from pecan.commands.base import BaseCommand
# from pecan import conf
from valet.api.common.i18n import _
from valet.api.conf import register_conf, set_domain
from valet.api.db import models
from valet.api.db.models import Event
from valet.api.db.models import Group
from valet.api.db.models import Placement
from valet.api.db.models import PlacementRequest
from valet.api.db.models import PlacementResult
from valet.api.db.models import Plan
def out(string):
'''Output helper'''
print("==> %s" % string)
class PopulateCommand(BaseCommand):
'''Load a pecan environment and initialize the database.'''
def run(self, args):
super(PopulateCommand, self).run(args)
out(_("Loading environment"))
register_conf()
set_domain()
self.load_app()
out(_("Building schema"))
try:
out(_("Starting a transaction..."))
models.start()
# FIXME: There's no create_all equivalent for Music.
# models.Base.metadata.create_all(conf.sqlalchemy.engine)
# Valet
Group.create_table()
Placement.create_table()
Plan.create_table()
# Ostro
Event.create_table()
PlacementRequest.create_table()
PlacementResult.create_table()
except Exception:
models.rollback()
out(_("Rolling back..."))
raise
else:
out(_("Committing."))
models.commit()


@@ -0,0 +1,128 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Controllers Package'''
import logging
from notario.decorators import instance_of
from notario import ensure
from os import path
from pecan import redirect, request
import string
from valet.api.common.i18n import _
from valet.api.db.models import Placement
LOG = logging.getLogger(__name__)
#
# Notario Helpers
#
def valid_group_name(value):
'''Validator for group name type.'''
if not value or not set(value) <= set(string.letters + string.digits + "-._~"):
LOG.error("group name is not valid")
LOG.error("group name must contain only uppercase and lowercase letters, "
"decimal digits, hyphens, periods, underscores, and tildes "
"[RFC 3986, Section 2.3]")
# Raise so that notario actually rejects the value; logging alone
# would let an invalid name pass validation.
raise AssertionError(_("group name is not valid"))
@instance_of((list, dict))
def valid_plan_resources(value):
'''Validator for plan resources.'''
ensure(len(value) > 0)
def valid_plan_update_action(value):
'''Validator for plan update action.'''
assert value in ['update', 'migrate'], _("must be update or migrate")
#
# Placement Helpers
#
def set_placements(plan, resources, placements):
'''Set placements'''
for uuid in placements.iterkeys():
name = resources[uuid]['name']
properties = placements[uuid]['properties']
location = properties['host']
Placement(name, uuid, plan=plan, location=location) # pylint: disable=W0612
return plan
def reserve_placement(placement, resource_id=None, reserve=True, update=True):
''' Reserve placement. Can optionally set the physical resource id.
Set reserve=False to unreserve. Set update=False to not update
the data store (if the update will be made later).
'''
if placement:
LOG.info(_('%(rsrv)s placement of %(orch_id)s in %(loc)s.'),
{'rsrv': _("Reserving") if reserve else _("Unreserving"),
'orch_id': placement.orchestration_id,
'loc': placement.location})
placement.reserved = reserve
if resource_id:
LOG.info(_('Associating resource id %(res_id)s with '
'orchestration id %(orch_id)s.'),
{'res_id': resource_id,
'orch_id': placement.orchestration_id})
placement.resource_id = resource_id
if update:
placement.update()
def update_placements(placements, reserve_id=None, unlock_all=False):
'''Update placements. Optionally reserve one placement.'''
for uuid in placements.iterkeys():
placement = Placement.query.filter_by( # pylint: disable=E1101
orchestration_id=uuid).first()
if placement:
properties = placements[uuid]['properties']
location = properties['host']
if placement.location != location:
LOG.info(_('Changing placement of %(orch_id)s '
'from %(old_loc)s to %(new_loc)s.'),
{'orch_id': placement.orchestration_id,
'old_loc': placement.location,
'new_loc': location})
placement.location = location
if unlock_all:
reserve_placement(placement, reserve=False, update=False)
elif reserve_id and placement.orchestration_id == reserve_id:
reserve_placement(placement, reserve=True, update=False)
placement.update()
return
#
# Error Helpers
#
def error(url, msg=None, **kwargs):
'''Error handler'''
if msg:
request.context['error_message'] = msg
if kwargs:
request.context['kwargs'] = kwargs
url = path.join(url, '?error_message=%s' % msg)
redirect(url, internal=True)
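`valid_group_name` accepts only the RFC 3986 "unreserved" characters. The same check as a standalone predicate (written for Python 3, so `string.ascii_letters` replaces the Python 2-only `string.letters`; the sample names are invented):

```python
import string

# Unreserved characters per RFC 3986, Section 2.3.
ALLOWED = set(string.ascii_letters + string.digits + "-._~")

def is_valid_group_name(value):
    # Empty names and names with any character outside ALLOWED are rejected.
    return bool(value) and set(value) <= ALLOWED

assert is_valid_group_name("prod-web_tier.v2~a")
assert not is_valid_group_name("bad name!")  # space and '!' are reserved
assert not is_valid_group_name("")
```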


@@ -0,0 +1,140 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Errors'''
import logging
from pecan import expose, request, response
from valet.api.common.i18n import _
from webob.exc import status_map
LOG = logging.getLogger(__name__)
# pylint: disable=R0201
def error_wrapper(func):
'''Error decorator.'''
def func_wrapper(self, **kw):
'''Wrapper.'''
kwargs = func(self, **kw)
status = status_map.get(response.status_code)
message = getattr(status, 'explanation', '')
explanation = request.context.get('error_message', message)
error_type = status.__name__
title = status.title
traceback = (kwargs or {}).get('traceback', None)  # kwargs is a dict (or None); getattr never finds dict keys
LOG.error(explanation)
# Modeled after Heat's format
return {
"explanation": explanation,
"code": response.status_code,
"error": {
"message": message,
"traceback": traceback,
"type": error_type,
},
"title": title,
}
return func_wrapper
# pylint: disable=W0613
class ErrorsController(object):
''' Errors Controller /errors/{error_name} '''
@expose('json')
@error_wrapper
def schema(self, **kw):
'''400'''
request.context['error_message'] = str(request.validation_error)
response.status = 400
return request.context.get('kwargs')
@expose('json')
@error_wrapper
def invalid(self, **kw):
'''400'''
response.status = 400
return request.context.get('kwargs')
@expose()
def unauthorized(self, **kw):
'''401'''
# This error is terse and opaque on purpose.
# Don't give any clues to help AuthN along.
response.status = 401
response.content_type = 'text/plain'
LOG.error('unauthorized')
import traceback
traceback.print_stack()
LOG.error(self.__class__)
LOG.error(kw)
response.body = _('Authentication required')
LOG.error(response.body)
return response
@expose('json')
@error_wrapper
def forbidden(self, **kw):
'''403'''
response.status = 403
return request.context.get('kwargs')
@expose('json')
@error_wrapper
def not_found(self, **kw):
'''404'''
response.status = 404
return request.context.get('kwargs')
@expose('json')
@error_wrapper
def not_allowed(self, **kw):
'''405'''
kwargs = request.context.get('kwargs')
if kwargs:
allow = kwargs.get('allow', None)
if allow:
response.headers['Allow'] = allow
response.status = 405
return kwargs
@expose('json')
@error_wrapper
def conflict(self, **kw):
'''409'''
response.status = 409
return request.context.get('kwargs')
@expose('json')
@error_wrapper
def server_error(self, **kw):
'''500'''
response.status = 500
return request.context.get('kwargs')
@expose('json')
@error_wrapper
def unavailable(self, **kw):
'''503'''
response.status = 503
return request.context.get('kwargs')


@@ -0,0 +1,321 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Groups'''
import logging
from notario import decorators
from notario.validators import types
from pecan import conf, expose, request, response
from pecan_notario import validate
from valet.api.common.compute import nova_client
from valet.api.common.i18n import _
from valet.api.common.ostro_helper import Ostro
from valet.api.db.models import Group
from valet.api.v1.controllers import error, valid_group_name
LOG = logging.getLogger(__name__)
GROUPS_SCHEMA = (
(decorators.optional('description'), types.string),
('name', valid_group_name),
('type', types.string)
)
UPDATE_GROUPS_SCHEMA = (
(decorators.optional('description'), types.string)
)
MEMBERS_SCHEMA = (
('members', types.array)
)
# pylint: disable=R0201
def server_list_for_group(group):
'''Returns a list of VMs associated with a member/group.'''
args = {
"type": "group_vms",
"parameters": {
"group_name": group.name,
},
}
ostro_kwargs = {
"args": args,
}
ostro = Ostro()
ostro.query(**ostro_kwargs)
ostro.send()
status_type = ostro.response['status']['type']
if status_type != 'ok':
message = ostro.response['status']['message']
error(ostro.error_uri, _('Ostro error: %s') % message)
resources = ostro.response['resources']
return resources or []
def tenant_servers_in_group(tenant_id, group):
''' Returns a list of servers the current tenant has in group_name '''
servers = []
server_list = server_list_for_group(group)
nova = nova_client()
for server_id in server_list:
try:
server = nova.servers.get(server_id)
if server.tenant_id == tenant_id:
servers.append(server_id)
except Exception as ex: # TODO(JD): update DB
LOG.error("Instance %s could not be found" % server_id)
LOG.error(ex)
# Return the list unconditionally; an implicit None return for the empty
# case would force every caller to guard against both None and [].
return servers
def no_tenant_servers_in_group(tenant_id, group):
''' Verify no servers from tenant_id are in group.
Throws a 409 Conflict if any are found.
'''
server_list = tenant_servers_in_group(tenant_id, group)
if server_list:
error('/errors/conflict', _('Tenant Member {0} has servers in group "{1}": {2}').format(tenant_id, group.name, server_list))
class MembersItemController(object):
''' Members Item Controller /v1/groups/{group_id}/members/{member_id} '''
def __init__(self, member_id):
'''Initialize group member'''
group = request.context['group']
if member_id not in group.members:
error('/errors/not_found', _('Member not found in group'))
request.context['member_id'] = member_id
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET,DELETE'
@expose(generic=True, template='json')
def index(self):
'''Catch all for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Verify group member'''
response.status = 204
@index.when(method='DELETE', template='json')
def index_delete(self):
'''Delete group member'''
group = request.context['group']
member_id = request.context['member_id']
# Can't delete a member if it has associated VMs.
no_tenant_servers_in_group(member_id, group)
group.members.remove(member_id)
group.update()
response.status = 204
class MembersController(object):
''' Members Controller /v1/groups/{group_id}/members '''
@classmethod
def allow(cls):
'''Allowed methods'''
return 'PUT,DELETE'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='PUT', template='json')
@validate(MEMBERS_SCHEMA, '/errors/schema')
def index_put(self, **kwargs):
'''Add one or more members to a group'''
new_members = kwargs.get('members', None)
if not conf.identity.engine.is_tenant_list_valid(new_members):
error('/errors/conflict', _('Member list contains invalid tenant IDs'))
group = request.context['group']
group.members = list(set(group.members + new_members))
group.update()
response.status = 201
# Flush so that the DB is current.
group.flush()
return group
@index.when(method='DELETE', template='json')
def index_delete(self):
'''Delete all group members'''
group = request.context['group']
# Can't delete a member if it has associated VMs.
for member_id in group.members:
no_tenant_servers_in_group(member_id, group)
group.members = []
group.update()
response.status = 204
@expose()
def _lookup(self, member_id, *remainder):
'''Pecan subcontroller routing callback'''
return MembersItemController(member_id), remainder
class GroupsItemController(object):
''' Groups Item Controller /v1/groups/{group_id} '''
members = MembersController()
def __init__(self, group_id):
'''Initialize group'''
group = Group.query.filter_by(id=group_id).first() # pylint: disable=E1101
if not group:
error('/errors/not_found', _('Group not found'))
request.context['group'] = group
@classmethod
def allow(cls):
''' Allowed methods '''
return 'GET,PUT,DELETE'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Display a group'''
return {"group": request.context['group']}
@index.when(method='PUT', template='json')
@validate(UPDATE_GROUPS_SCHEMA, '/errors/schema')
def index_put(self, **kwargs):
'''Update a group'''
# Name and type are immutable.
# Group Members are updated in MembersController.
group = request.context['group']
group.description = kwargs.get('description', group.description)
group.update()
response.status = 201
# Flush so that the DB is current.
group.flush()
return group
@index.when(method='DELETE', template='json')
def index_delete(self):
'''Delete a group'''
group = request.context['group']
if isinstance(group.members, list) and len(group.members) > 0:
error('/errors/conflict', _('Unable to delete a Group with members.'))
group.delete()
response.status = 204
class GroupsController(object):
''' Groups Controller /v1/groups '''
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET,POST'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''List groups'''
groups_array = []
for group in Group.query.all(): # pylint: disable=E1101
groups_array.append(group)
return {'groups': groups_array}
@index.when(method='POST', template='json')
@validate(GROUPS_SCHEMA, '/errors/schema')
def index_post(self, **kwargs):
'''Create a group'''
group_name = kwargs.get('name', None)
description = kwargs.get('description', None)
group_type = kwargs.get('type', None)
members = [] # Use /v1/groups/members endpoint to add members
try:
group = Group(group_name, description, group_type, members)
if group:
response.status = 201
# Flush so that the DB is current.
group.flush()
return group
except Exception as e:
error('/errors/server_error', _('Unable to create Group. %s') % e)
@expose()
def _lookup(self, group_id, *remainder):
'''Pecan subcontroller routing callback'''
return GroupsItemController(group_id), remainder


@ -0,0 +1,196 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Placements'''
import logging
from pecan import expose, request, response
from valet.api.common.i18n import _
from valet.api.common.ostro_helper import Ostro
from valet.api.db.models import Placement, Plan
from valet.api.v1.controllers import error
from valet.api.v1.controllers import reserve_placement
from valet.api.v1.controllers import update_placements
LOG = logging.getLogger(__name__)
# pylint: disable=R0201
class PlacementsItemController(object):
''' Placements Item Controller /v1/placements/{placement_id} '''
def __init__(self, uuid4):
'''Initializer.'''
self.uuid = uuid4
self.placement = Placement.query.filter_by(id=self.uuid).first() # pylint: disable=E1101
if not self.placement:
self.placement = Placement.query.filter_by(orchestration_id=self.uuid).first() # pylint: disable=E1101
if not self.placement:
error('/errors/not_found', _('Placement not found'))
request.context['placement_id'] = self.placement.id
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET,POST,DELETE'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
''' Inspect a placement.
Use POST for reserving placements made by a scheduler.
'''
return {"placement": self.placement}
@index.when(method='POST', template='json')
def index_post(self, **kwargs):
''' Reserve a placement. This and other placements may be replanned.
Once reserved, the location effectively becomes immutable.
'''
res_id = kwargs.get('resource_id')
LOG.info(_('Placement reservation request for resource id '
'%(res_id)s, orchestration id %(orch_id)s.'),
{'res_id': res_id, 'orch_id': self.placement.orchestration_id})
locations = kwargs.get('locations', [])
locations_str = ', '.join(locations)
LOG.info(_('Candidate locations: %s'), locations_str)
if self.placement.location in locations:
# Ostro's placement is in the list of candidates. Good!
# Reserve it. Remember the resource id too.
kwargs = {'resource_id': res_id}
reserve_placement(self.placement, **kwargs)
response.status = 201
else:
# Ostro's placement is NOT in the list of candidates.
# Time for Plan B.
LOG.info(_('Placement of resource id %(res_id)s, '
'orchestration id %(orch_id)s in %(loc)s '
'not allowed. Replanning.'),
{'res_id': res_id,
'orch_id': self.placement.orchestration_id,
'loc': self.placement.location})
# Unreserve the placement. Remember the resource id too.
kwargs = {'resource_id': res_id, 'reserve': False}
reserve_placement(self.placement, **kwargs)
# Find all the reserved placements for the related plan.
reserved = Placement.query.filter_by( # pylint: disable=E1101
plan_id=self.placement.plan_id, reserved=True)
# Keep this placement's orchestration ID handy.
orchestration_id = self.placement.orchestration_id
# Extract all the orchestration IDs.
exclusions = [x.orchestration_id for x in reserved]
if exclusions:
exclusions_str = ', '.join(exclusions)
LOG.info(_('Excluded orchestration IDs: %s'), exclusions_str)
else:
LOG.info(_('No excluded orchestration IDs.'))
# Ask Ostro to try again with new constraints.
# We may get one or more updated placements in return.
# One of those will be the original placement
# we are trying to reserve.
plan = Plan.query.filter_by(id=self.placement.plan_id).first() # pylint: disable=E1101
args = {
"stack_id": plan.stack_id,
"locations": locations,
"orchestration_id": orchestration_id,
"exclusions": exclusions,
}
ostro_kwargs = {"args": args, }
ostro = Ostro()
ostro.replan(**ostro_kwargs)
ostro.send()
status_type = ostro.response['status']['type']
if status_type != 'ok':
message = ostro.response['status']['message']
error(ostro.error_uri, _('Ostro error: %s') % message)
# Update all affected placements. Reserve the original one.
placements = ostro.response['resources']
update_placements(placements, reserve_id=orchestration_id)
response.status = 201
placement = Placement.query.filter_by( # pylint: disable=E1101
orchestration_id=self.placement.orchestration_id).first()
return {"placement": placement}
@index.when(method='DELETE', template='json')
def index_delete(self):
'''Delete a Placement'''
orch_id = self.placement.orchestration_id
self.placement.delete()
LOG.info(_('Placement with orchestration id %s deleted.'), orch_id)
response.status = 204
class PlacementsController(object):
''' Placements Controller /v1/placements '''
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Get placements.'''
placements_array = []
for placement in Placement.query.all(): # pylint: disable=E1101
placements_array.append(placement)
return {"placements": placements_array}
@expose()
def _lookup(self, uuid4, *remainder):
'''Pecan subcontroller routing callback'''
return PlacementsItemController(uuid4), remainder


@ -0,0 +1,284 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Plans'''
import logging
from notario import decorators
from notario.validators import types
from pecan import expose, request, response
from pecan_notario import validate
from valet.api.common.i18n import _
from valet.api.common.ostro_helper import Ostro
from valet.api.db.models import Placement, Plan
from valet.api.v1.controllers import error
from valet.api.v1.controllers import set_placements
from valet.api.v1.controllers import update_placements
from valet.api.v1.controllers import valid_plan_update_action
LOG = logging.getLogger(__name__)
CREATE_SCHEMA = (
('plan_name', types.string),
('resources', types.dictionary),
('stack_id', types.string),
(decorators.optional('timeout'), types.string)
)
UPDATE_SCHEMA = (
('action', valid_plan_update_action),
(decorators.optional('excluded_hosts'), types.array),
(decorators.optional('plan_name'), types.string),
# FIXME: resources needs to work against valid_plan_resources
('resources', types.array),
(decorators.optional('timeout'), types.string)
)
# pylint: disable=R0201
class PlansItemController(object):
''' Plans Item Controller /v1/plans/{plan_id} '''
def __init__(self, uuid4):
'''Initializer.'''
self.uuid = uuid4
self.plan = Plan.query.filter_by(id=self.uuid).first() # pylint: disable=E1101
if not self.plan:
self.plan = Plan.query.filter_by(stack_id=self.uuid).first() # pylint: disable=E1101
if not self.plan:
error('/errors/not_found', _('Plan not found'))
request.context['plan_id'] = self.plan.id
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET,PUT,DELETE'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Get plan'''
return {"plan": self.plan}
@index.when(method='PUT', template='json')
@validate(UPDATE_SCHEMA, '/errors/schema')
def index_put(self, **kwargs):
'''Update a Plan'''
action = kwargs.get('action')
if action == 'migrate':
# Replan the placement of an existing resource.
excluded_hosts = kwargs.get('excluded_hosts', [])
resources = kwargs.get('resources', [])
# TODO(JD): Support replan of more than one existing resource
if not isinstance(resources, list) or len(resources) != 1:
error('/errors/invalid', _('resources must be a list of length 1.'))
# We either got a resource or orchestration id.
the_id = resources[0]
placement = Placement.query.filter_by(resource_id=the_id).first() # pylint: disable=E1101
if not placement:
placement = Placement.query.filter_by(orchestration_id=the_id).first() # pylint: disable=E1101
if not placement:
error('/errors/invalid', _('Unknown resource or orchestration id: %s') % the_id)
LOG.info(_('Migration request for resource id %(res_id)s, '
'orchestration id %(orch_id)s.'),
{'res_id': placement.resource_id,
'orch_id': placement.orchestration_id})
args = {
"stack_id": self.plan.stack_id,
"excluded_hosts": excluded_hosts,
"orchestration_id": placement.orchestration_id,
}
ostro_kwargs = {
"args": args,
}
ostro = Ostro()
ostro.migrate(**ostro_kwargs)
ostro.send()
status_type = ostro.response['status']['type']
if status_type != 'ok':
message = ostro.response['status']['message']
error(ostro.error_uri, _('Ostro error: %s') % message)
placements = ostro.response['resources']
update_placements(placements, unlock_all=True)
response.status = 201
# Flush so that the DB is current.
self.plan.flush()
self.plan = Plan.query.filter_by(stack_id=self.plan.stack_id).first() # pylint: disable=E1101
LOG.info(_('Plan with stack id %s updated.'), self.plan.stack_id)
return {"plan": self.plan}
# TODO(JD): Throw unimplemented error?
# pylint: disable=W0612
'''
# FIXME: This is broken. Save for Valet 1.1
# New placements are not being seen in the response, so
# set_placements is currently failing as a result.
ostro = Ostro()
args = request.json
kwargs = {
'tenant_id': request.context['tenant_id'],
'args': args
}
# Prepare the request. If request prep fails,
# an error message will be in the response.
# Though the Ostro helper reports the error,
# we cite it as a Valet error.
if not ostro.build_request(**kwargs):
message = ostro.response['status']['message']
error(ostro.error_uri, _('Valet error: %s') % message)
ostro.send()
status_type = ostro.response['status']['type']
if status_type != 'ok':
message = ostro.response['status']['message']
error(ostro.error_uri, _('Ostro error: %s') % message)
# TODO(JD): Keep. See if we will eventually need these for Ostro.
#plan_name = args['plan_name']
#stack_id = args['stack_id']
resources = ostro.request['resources_update']
placements = ostro.response['resources']
set_placements(self.plan, resources, placements)
response.status = 201
# Flush so that the DB is current.
self.plan.flush()
return self.plan
'''
# pylint: enable=W0612
@index.when(method='DELETE', template='json')
def index_delete(self):
'''Delete a Plan'''
for placement in self.plan.placements():
placement.delete()
stack_id = self.plan.stack_id
self.plan.delete()
LOG.info(_('Plan with stack id %s deleted.'), stack_id)
response.status = 204
class PlansController(object):
''' Plans Controller /v1/plans '''
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET,POST'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Get all the plans'''
plans_array = []
for plan in Plan.query.all(): # pylint: disable=E1101
plans_array.append(plan)
return {"plans": plans_array}
@index.when(method='POST', template='json')
@validate(CREATE_SCHEMA, '/errors/schema')
def index_post(self):
'''Create a Plan'''
ostro = Ostro()
args = request.json
kwargs = {
'tenant_id': request.context['tenant_id'],
'args': args
}
# Prepare the request. If request prep fails,
# an error message will be in the response.
# Though the Ostro helper reports the error,
# we cite it as a Valet error.
if not ostro.build_request(**kwargs):
message = ostro.response['status']['message']
error(ostro.error_uri, _('Valet error: %s') % message)
# If there are no serviceable resources, bail. Not an error.
# Treat it as if an "empty plan" was created.
# FIXME: Ostro should likely handle this and not error out.
if not ostro.is_request_serviceable():
LOG.info(_('Plan has no serviceable resources. Skipping.'))
response.status = 201
return {"plan": {}}
ostro.send()
status_type = ostro.response['status']['type']
if status_type != 'ok':
message = ostro.response['status']['message']
error(ostro.error_uri, _('Ostro error: %s') % message)
plan_name = args['plan_name']
stack_id = args['stack_id']
resources = ostro.request['resources']
placements = ostro.response['resources']
plan = Plan(plan_name, stack_id)
if plan:
set_placements(plan, resources, placements)
response.status = 201
# Flush so that the DB is current.
plan.flush()
LOG.info(_('Plan with stack id %s created.'), plan.stack_id)
return {"plan": plan}
else:
error('/errors/server_error', _('Unable to create Plan.'))
@expose()
def _lookup(self, uuid4, *remainder):
'''Pecan subcontroller routing callback'''
return PlansItemController(uuid4), remainder


@ -0,0 +1,90 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Root'''
import logging
from pecan import expose, request, response
from valet.api.common.i18n import _
from valet.api.v1.controllers import error
from valet.api.v1.controllers.errors import ErrorsController, error_wrapper
from valet.api.v1.controllers.v1 import V1Controller
from webob.exc import status_map
LOG = logging.getLogger(__name__)
# pylint: disable=R0201
class RootController(object):
''' Root Controller / '''
errors = ErrorsController()
v1 = V1Controller() # pylint: disable=C0103
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Get canonical URL for each version'''
ver = {
"versions":
[
{
"status": "CURRENT",
"id": "v1.0",
"links":
[
{
"href": request.application_url + "/v1/",
"rel": "self"
}
]
}
]
}
return ver
@error_wrapper
def error(self, status):
'''Error handler'''
try:
status = int(status)
except ValueError: # pragma: no cover
status = 500
message = getattr(status_map.get(status), 'explanation', '')
return dict(status=status, message=message)
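The error handler above coerces the incoming status to an integer (falling back to 500) before looking up an explanation string. A minimal standalone sketch of the same idea, using the stdlib `http.client.responses` table in place of webob's `status_map`:

```python
# Sketch only: mirrors the coerce-then-lookup pattern of the error()
# handler above, but uses http.client.responses instead of webob.
from http.client import responses

def error_summary(status):
    '''Return a status/message dict, defaulting to 500 on bad input.'''
    try:
        status = int(status)
    except ValueError:
        status = 500
    return dict(status=status, message=responses.get(status, ''))

print(error_summary('404'))    # {'status': 404, 'message': 'Not Found'}
print(error_summary('bogus'))  # {'status': 500, 'message': 'Internal Server Error'}
```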


@ -0,0 +1,90 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''Status'''
import logging
from pecan import expose, request, response
from valet.api.common.i18n import _
from valet.api.common.ostro_helper import Ostro
from valet.api.v1.controllers import error
LOG = logging.getLogger(__name__)
# pylint: disable=R0201
class StatusController(object):
''' Status Controller /v1/status '''
@classmethod
def _ping_ostro(cls):
'''Ping Ostro'''
ostro = Ostro()
ostro.ping()
ostro.send()
return ostro.response
@classmethod
def _ping(cls):
'''Ping each subsystem.'''
ostro_response = StatusController._ping_ostro()
# TODO(JD): Ping Music plus any others.
# music_response = StatusController._ping_music()
response = {
"status": {
"ostro": ostro_response,
# "music": music_response,
}
}
return response
@classmethod
def allow(cls):
'''Allowed methods'''
return 'HEAD,GET'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='HEAD', template='json')
def index_head(self):
'''Ping each subsystem and return summary response'''
self._ping() # pylint: disable=W0612
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Ping each subsystem and return detailed response'''
_response = self._ping()
response.status = 200
return _response


@ -0,0 +1,130 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''v1'''
import logging
from pecan import conf, expose, request, response
from pecan.secure import SecureController
from valet.api.common.i18n import _
from valet.api.v1.controllers import error
from valet.api.v1.controllers.groups import GroupsController
from valet.api.v1.controllers.placements import PlacementsController
from valet.api.v1.controllers.plans import PlansController
from valet.api.v1.controllers.status import StatusController
LOG = logging.getLogger(__name__)
# pylint: disable=R0201
class V1Controller(SecureController):
''' v1 Controller /v1 '''
groups = GroupsController()
placements = PlacementsController()
plans = PlansController()
status = StatusController()
# Update this whenever a new endpoint is made.
endpoints = ["groups", "placements", "plans", "status"]
@classmethod
def check_permissions(cls):
'''SecureController permission check callback'''
token = None
auth_token = request.headers.get('X-Auth-Token')
msg = "Unauthorized - No auth token"
if auth_token:
msg = "Unauthorized - invalid token"
# The token must have an admin role
# and be associated with a tenant.
token = conf.identity.engine.validate_token(auth_token)
if token:
LOG.debug("Checking token permissions")
msg = "Unauthorized - Permission was not granted"
if V1Controller._permission_granted(request, token):
tenant_id = conf.identity.engine.tenant_from_token(token)
LOG.info("tenant_id - " + str(tenant_id))
if tenant_id:
request.context['tenant_id'] = tenant_id
user_id = conf.identity.engine.user_from_token(token)
request.context['user_id'] = user_id
return True
error('/errors/unauthorized', msg)
@classmethod
def _action_is_migrate(cls, request):
return "plan" in request.path and hasattr(request, "json") and "action" in request.json and request.json["action"] == "migrate"
@classmethod
def _permission_granted(cls, request, token):
return not ("group" in request.path or
V1Controller._action_is_migrate(request)) or\
(conf.identity.engine.is_token_admin(token))
@classmethod
def allow(cls):
'''Allowed methods'''
return 'GET'
@expose(generic=True, template='json')
def index(self):
'''Catchall for unallowed methods'''
message = _('The %s method is not allowed.') % request.method
kwargs = {'allow': self.allow()}
error('/errors/not_allowed', message, **kwargs)
@index.when(method='OPTIONS', template='json')
def index_options(self):
'''Options'''
response.headers['Allow'] = self.allow()
response.status = 204
@index.when(method='GET', template='json')
def index_get(self):
'''Get canonical URL for each endpoint'''
links = []
for endpoint in V1Controller.endpoints:
links.append({
"href": "%(url)s/v1/%(endpoint)s/" %
{
'url': request.application_url,
'endpoint': endpoint
},
"rel": "self"
})
ver = {
"versions":
[
{
"status": "CURRENT",
"id": "v1.0",
"links": links
}
]
}
return ver

57
valet/api/wsgi.py Normal file

@ -0,0 +1,57 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''WSGI Wrapper'''
from valet.api.common.i18n import _
import os
from pecan.deploy import deploy
def config_file(file_name=None):
"""Returns absolute location of the config file"""
file_name = file_name or 'config.py'
_file = os.path.abspath(__file__)
def dirname(x):
return os.path.dirname(x)
parent_dir = dirname(_file)
return os.path.join(parent_dir, file_name)
def application(environ, start_response):
"""Returns a WSGI app object"""
wsgi_app = deploy(config_file('prod.py'))
return wsgi_app(environ, start_response)
# TODO(JD): Integrate this with a python entry point
# This way we can run valet-api from the command line in a pinch.
if __name__ == '__main__':
from wsgiref.simple_server import make_server # pylint: disable=C0411,C0413
# TODO(JD): At some point, it would be nice to use pecan_mount
# import pecan_mount
# HTTPD = make_server('', 8090, pecan_mount.tree)
from valet.api.conf import register_conf, set_domain
register_conf()
set_domain()
HTTPD = make_server('', 8090, deploy(config_file('/var/www/valet/config.py')))
print(_("Serving HTTP on port 8090..."))
# Respond to requests until process is killed
HTTPD.serve_forever()
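`config_file()` above resolves a file name relative to the module's own directory. A hedged sketch of that resolution, with the module path passed in explicitly so it can run anywhere (the paths below are illustrative, not from a real deployment):

```python
# Illustrative sketch of config_file()'s path resolution; the real helper
# derives module_path from its own __file__ instead of a parameter.
import os

def resolve_config(module_path, file_name=None):
    file_name = file_name or 'config.py'
    parent_dir = os.path.dirname(os.path.abspath(module_path))
    return os.path.join(parent_dir, file_name)

print(resolve_config('/opt/valet/api/wsgi.py'))             # /opt/valet/api/config.py
print(resolve_config('/opt/valet/api/wsgi.py', 'prod.py'))  # /opt/valet/api/prod.py
```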

0
valet/cli/__init__.py Normal file

187
valet/cli/groupcli.py Normal file

@ -0,0 +1,187 @@
#!/usr/bin/python
import argparse
import json
from oslo_config import cfg
import requests
from valet.api.conf import register_conf, set_domain
CONF = cfg.CONF
class ResponseError(Exception):
pass
class ConnectionError(Exception):
pass
def print_verbose(verbose, url, headers, body, rest_cmd, timeout):
if verbose:
print("Sending Request:\nurl: %s\nheaders: %s\nbody: %s\ncmd: %s\ntimeout: %d\n"
% (url, headers, body, rest_cmd.__name__ if rest_cmd is not None else None, timeout))
def pretty_print_json(json_thing, sort=True, indents=4):
if type(json_thing) is str:
print(json.dumps(json.loads(json_thing), sort_keys=sort, indent=indents))
else:
print(json.dumps(json_thing, sort_keys=sort, indent=indents))
return None
def add_to_parser(service_sub):
parser = service_sub.add_parser('group', help='Group Management',
formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=30,
width=120))
parser.add_argument('--version', action='version', version='%(prog)s 1.1')
parser.add_argument('--timeout', type=int, help='Set request timeout in seconds (default: 10)')
parser.add_argument('--host', type=str, help='Hostname or ip of valet server')
parser.add_argument('--port', type=str, help='Port number of valet server')
parser.add_argument('--os-tenant-name', type=str, help='Tenant name')
parser.add_argument('--os-user-name', dest='os_username', type=str, help='Username')
parser.add_argument('--os-password', type=str, help="User's password")
parser.add_argument('--verbose', '-v', help='Show details', action="store_true")
subparsers = parser.add_subparsers(dest='subcmd', metavar='<subcommand>')
# create group
parser_create_group = subparsers.add_parser('create', help='Create new group.')
parser_create_group.add_argument('name', type=str, help='<GROUP_NAME>')
parser_create_group.add_argument('type', type=str, help='<GROUP_TYPE> (exclusivity)')
parser_create_group.add_argument('--description', type=str, help='<GROUP_DESCRIPTION>')
# delete group
parser_delete_group = subparsers.add_parser('delete', help='Delete specified group.')
parser_delete_group.add_argument('groupid', type=str, help='<GROUP_ID>')
# delete group member
parser_delete_group_member = subparsers.add_parser('delete-member', help='Delete members from specified group.')
parser_delete_group_member.add_argument('groupid', type=str, help='<GROUP_ID>')
parser_delete_group_member.add_argument('memberid', type=str, help='<MEMBER_ID>')
# delete all group members
parser_delete_all_group_members = subparsers.add_parser('delete-all-members', help='Delete all members from '
'specified group.')
parser_delete_all_group_members.add_argument('groupid', type=str, help='<GROUP_ID>')
# list group
subparsers.add_parser('list', help='List all groups.')
# show group details
parser_show_group_details = subparsers.add_parser('show', help='Show details about the given group.')
parser_show_group_details.add_argument('groupid', type=str, help='<GROUP_ID>')
# update group
parser_update_group = subparsers.add_parser('update', help='Update group description.')
parser_update_group.add_argument('groupid', type=str, help='<GROUP_ID>')
parser_update_group.add_argument('--description', type=str, help='<GROUP_DESCRIPTION>')
parser_update_group_members = subparsers.add_parser('update-member', help='Update group members.')
parser_update_group_members.add_argument('groupid', type=str, help='<GROUP_ID>')
parser_update_group_members.add_argument('members', type=str, help='<MEMBER_ID>')
return parser
def cmd_details(args):
if args.subcmd == 'create':
return requests.post, ''
elif args.subcmd == 'update':
return requests.put, '/%s' % args.groupid
elif args.subcmd == 'update-member':
return requests.put, '/%s/members' % args.groupid
elif args.subcmd == 'delete':
return requests.delete, '/%s' % (args.groupid)
elif args.subcmd == 'delete-all-members':
return requests.delete, '/%s/members' % (args.groupid)
elif args.subcmd == 'delete-member':
return requests.delete, '/%s/members/%s' % (args.groupid, args.memberid)
elif args.subcmd == 'show':
return requests.get, '/%s' % (args.groupid)
elif args.subcmd == 'list':
return requests.get, ''
def get_token(timeout, args):
# tenant_name = args.os_tenant_name if args.os_tenant_name else os.environ.get('OS_TENANT_NAME')
tenant_name = args.os_tenant_name if args.os_tenant_name else CONF.identity.project_name
auth_name = args.os_username if args.os_username else CONF.identity.username
password = args.os_password if args.os_password else CONF.identity.password
headers = {
'Content-Type': 'application/json',
}
url = '%s/tokens' % CONF.identity.auth_url
data = '''
{
"auth": {
"tenantName": "%s",
"passwordCredentials": {
"username": "%s",
"password": "%s"
}
}
}''' % (tenant_name, auth_name, password)
print_verbose(args.verbose, url, headers, data, None, timeout)
try:
resp = requests.post(url, timeout=timeout, data=data, headers=headers)
if resp.status_code != 200:
raise ResponseError(
'Failed in get_token: status code received {}'.format(
resp.status_code))
return resp.json()['access']['token']['id']
except Exception as e:
message = 'Failed in get_token'
# logger.log_exception(message, str(e))
print(e)
raise ConnectionError(message)
def populate_args_request_body(args):
body_args_list = ['name', 'type', 'description', 'members']
# assign values to the dictionary (if the value exists); members will be assigned as a list
body_dict = {}
for body_arg in body_args_list:
if hasattr(args, body_arg):
body_dict[body_arg] = getattr(args, body_arg) if body_arg != 'members' else [getattr(args, body_arg)]
# remove keys without values
filtered_body_dict = dict((k, v) for k, v in body_dict.iteritems() if v is not None)
# if the dictionary is not empty, convert it to JSON; otherwise return None
return json.dumps(filtered_body_dict) if bool(filtered_body_dict) else None
def run(args):
register_conf()
set_domain(project='valet')
args.host = args.host or CONF.server.host
args.port = args.port or CONF.server.port
args.timeout = args.timeout or 10
rest_cmd, cmd_url = cmd_details(args)
args.url = 'http://%s:%s/v1/groups' % (args.host, args.port) + cmd_url
auth_token = get_token(args.timeout, args)
args.headers = {
'content-type': 'application/json',
'X-Auth-Token': auth_token
}
args.body = populate_args_request_body(args)
try:
print_verbose(args.verbose, args.url, args.headers, args.body, rest_cmd, args.timeout)
if args.body:
resp = rest_cmd(args.url, timeout=args.timeout, data=args.body, headers=args.headers)
else:
resp = rest_cmd(args.url, timeout=args.timeout, headers=args.headers)
except Exception as e:
print(e)
exit(1)
if not 200 <= resp.status_code < 300:
content = resp.json() if resp.status_code == 500 else ''
print('API error: %s %s (Reason: %d)\n%s' % (rest_cmd.func_name.upper(), args.url, resp.status_code, content))
exit(1)
try:
if resp.content:
rj = resp.json()
pretty_print_json(rj)
except Exception as e:
print(e)
exit(1)

valet/cli/valetcli.py Executable file
@@ -0,0 +1,37 @@
#!/usr/bin/python
import argparse
import sys
import valet.cli.groupcli as groupcli
# import logging
class Cli(object):
def __init__(self):
self.args = None
self.submod = None
self.parser = None
def create_parser(self):
self.parser = argparse.ArgumentParser(prog='valet', description='VALET REST CLI')
service_sub = self.parser.add_subparsers(dest='service', metavar='<service>')
self.submod = {'group': groupcli}
for s in self.submod.values():
s.add_to_parser(service_sub)
def parse(self, argv=sys.argv):
sys.argv = argv
self.args = self.parser.parse_args()
def logic(self):
self.submod[self.args.service].run(self.args)
def main(argv):
cli = Cli()
cli.create_parser()
cli.parse(argv)
cli.logic()
if __name__ == "__main__":
main(sys.argv)
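A minimal sketch of the subparser wiring that Cli.create_parser relies on, with a stub standing in for a service module such as valet.cli.groupcli (the stub and its arguments are illustrative):

```python
import argparse

class StubService(object):
    """Stand-in for a service module exposing add_to_parser(service_sub)."""
    @staticmethod
    def add_to_parser(service_sub):
        # Each service registers its own sub-parser under <service>,
        # mirroring groupcli.add_to_parser above.
        parser = service_sub.add_parser('group', help='group operations')
        parser.add_argument('subcmd', choices=['list', 'create'])

parser = argparse.ArgumentParser(prog='valet', description='VALET REST CLI')
service_sub = parser.add_subparsers(dest='service', metavar='<service>')
StubService.add_to_parser(service_sub)
args = parser.parse_args(['group', 'list'])
```

Cli.logic() then dispatches on `args.service` to the matching module's run().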

valet/engine/__init__.py Normal file
valet/engine/conf.py Normal file
@@ -0,0 +1,82 @@
from oslo_config import cfg
from valet.api import conf as api
CONF = cfg.CONF
ostro_cli_opts = [
cfg.StrOpt('command',
short='c',
default='status',
help='engine command.'),
]
engine_group = cfg.OptGroup(name='engine', title='Valet Engine conf')
engine_opts = [
cfg.StrOpt('pid', default='/var/run/valet/ostro-daemon.pid'),
cfg.StrOpt('mode', default='live',
help='sim lets Ostro simulate a datacenter, while live lets it handle a real datacenter'),
cfg.StrOpt('sim_cfg_loc', default='/etc/valet/engine/ostro_sim.cfg'),
cfg.BoolOpt('network_control', default=False, help='whether network controller (i.e., Tegu) has been deployed'),
cfg.StrOpt('network_control_url', default='http://network_control:29444/tegu/api'),
cfg.StrOpt('ip', default='localhost'),
cfg.IntOpt('priority', default=1, help='this instance priority (master=1)'),
cfg.StrOpt('rpc_server_ip', default='localhost',
help='Set RPC server ip and port if used. Otherwise, ignore these parameters'),
cfg.StrOpt('rpc_server_port', default='8002'),
cfg.StrOpt('logger_name', default='engine.log'),
cfg.StrOpt('logging_level', default='debug'),
cfg.StrOpt('logging_dir', default='/var/log/valet/'),
cfg.IntOpt('max_main_log_size', default=5000000),
cfg.IntOpt('max_log_size', default=1000000),
cfg.IntOpt('max_num_of_logs', default=20),
cfg.StrOpt('datacenter_name', default='bigsite',
help='Inform the name of datacenter (region name), where Valet/Ostro is deployed.'),
cfg.IntOpt('num_of_region_chars', default=3, help='number of chars that indicates the region code'),
cfg.StrOpt('rack_code_list', default='r', help='rack indicator.'),
cfg.ListOpt('node_code_list', default='a,c,u,f,o,p,s',
help='indicates the node type. a: network, c: KVM compute, u: ESXi compute, f: ?, o: operation, '
'p: power, s: storage.'),
cfg.StrOpt('compute_trigger_time', default='1:00',
help='trigger time or frequency for checking compute hosting server status (i.e., call Nova)'),
cfg.IntOpt('compute_trigger_frequency', default=3600,
help='trigger time or frequency for checking compute hosting server status (i.e., call Nova)'),
cfg.StrOpt('topology_trigger_time', default='2:00',
help='Set trigger time or frequency for checking datacenter topology'),
cfg.IntOpt('topology_trigger_frequency', default=3600,
help='Set trigger time or frequency for checking datacenter topology'),
cfg.IntOpt('default_cpu_allocation_ratio', default=16, help='Set default overbooking ratios. '
'Note that each compute node can have its own ratios'),
cfg.FloatOpt('default_ram_allocation_ratio', default=1.5, help='Set default overbooking ratios. '
'Note that each compute node can have its own ratios'),
cfg.IntOpt('default_disk_allocation_ratio', default=1, help='Set default overbooking ratios. '
'Note that each compute node can have its own ratios'),
cfg.IntOpt('static_cpu_standby_ratio', default=20, help='unused percentages of resources (i.e. standby) '
'that are set aside for applications workload spikes.'),
cfg.IntOpt('static_mem_standby_ratio', default=20, help='unused percentages of resources (i.e. standby) '
'that are set aside for applications workload spikes.'),
cfg.IntOpt('static_local_disk_standby_ratio', default=20, help='unused percentages of resources (i.e. standby) '
'that are set aside for applications workload spikes.'),
]
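Taken together, the allocation-ratio and standby-ratio options above imply an effective schedulable capacity. How the engine actually combines them is not shown in this file, but a plausible sketch of the arithmetic the help text describes:

```python
def effective_capacity(physical, allocation_ratio, standby_ratio_pct):
    # Overbook the physical capacity by the allocation ratio, then set
    # aside the standby percentage for application workload spikes.
    return physical * allocation_ratio * (1.0 - standby_ratio_pct / 100.0)

# With the defaults above: 16x CPU overbooking, 20% CPU standby.
cpus = effective_capacity(32, 16, 20)     # 32 physical cores
mem = effective_capacity(256, 1.5, 20)    # 256 GB RAM, 1.5x overbooking
```

This is a sketch under the stated assumptions; per-node ratios can override the defaults, as the help text notes.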
listener_group = cfg.OptGroup(name='events_listener', title='Valet Engine listener')
listener_opts = [
cfg.StrOpt('exchange', default='nova'),
cfg.StrOpt('exchange_type', default='topic'),
cfg.BoolOpt('auto_delete', default=False),
cfg.StrOpt('output_format', default='dict'),
cfg.BoolOpt('store', default=True),
cfg.StrOpt('logging_level', default='debug'),
cfg.StrOpt('logging_loc', default='/var/log/valet/'),
cfg.StrOpt('logger_name', default='ostro_listener.log'),
cfg.IntOpt('max_main_log_size', default=5000000),
]
def register_conf():
api.register_conf()
CONF.register_group(engine_group)
CONF.register_opts(engine_opts, engine_group)
CONF.register_group(listener_group)
CONF.register_opts(listener_opts, listener_group)
CONF.register_cli_opts(ostro_cli_opts)

@@ -0,0 +1,4 @@
Metadata-Version: 1.2
Name: ostro-listener
Version: 0.1.0
Author-email: jdandrea@research.att.com

@@ -0,0 +1,165 @@
'''
Created on Nov 29, 2016
@author: stack
'''
from datetime import datetime
import json
import pika
import pprint
import threading
import traceback
from valet.api.db.models.music import Music
from valet.engine.listener.oslo_messages import OsloMessage
from valet.engine.optimizer.util.util import init_logger
import yaml
class ListenerManager(threading.Thread):
def __init__(self, _t_id, _t_name, _config):
threading.Thread.__init__(self)
self.thread_id = _t_id
self.thread_name = _t_name
self.config = _config
self.listener_logger = init_logger(self.config.events_listener)
self.MUSIC = None
def run(self):
'''Entry point
Connect to localhost rabbitmq servers, use username:password@ipaddress:port.
The port is typically 5672, and the default username and password are guest and guest.
credentials = pika.PlainCredentials("guest", "PASSWORD")
'''
try:
self.listener_logger.info("ListenerManager: start " + self.thread_name + " ......")
if self.config.events_listener.store:
kwargs = {
'host': self.config.music.host,
'port': self.config.music.port,
'replication_factor': self.config.music.replication_factor,
}
engine = Music(**kwargs)
engine.create_keyspace(self.config.music.keyspace)
self.MUSIC = {'engine': engine, 'keyspace': self.config.music.keyspace}
self.listener_logger.debug('Storing in music on %s, keyspace %s' % (self.config.music.host, self.config.music.keyspace))
self.listener_logger.debug('Connecting to %s, with %s' % (self.config.messaging.host, self.config.messaging.username))
credentials = pika.PlainCredentials(self.config.messaging.username, self.config.messaging.password)
parameters = pika.ConnectionParameters(self.config.messaging.host, self.config.messaging.port, '/', credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
# Select the exchange we want our queue to connect to
exchange_name = self.config.events_listener.exchange
exchange_type = self.config.events_listener.exchange_type
auto_delete = self.config.events_listener.auto_delete
# Use the binding key to select what type of messages you want
# to receive. '#' is a wild card -- meaning receive all messages
binding_key = "#"
# Check whether or not an exchange with the given name and type exists.
# Make sure that the exchange is multicast "fanout" or "topic" type
# otherwise our queue will consume the messages intended for other queues
channel.exchange_declare(exchange=exchange_name,
exchange_type=exchange_type,
auto_delete=auto_delete)
# Create an empty queue
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
# Bind the queue to the selected exchange
channel.queue_bind(exchange=exchange_name, queue=queue_name, routing_key=binding_key)
self.listener_logger.info('Channel is bound, listening on %s exchange %s', self.config.messaging.host, self.config.events_listener.exchange)
# Start consuming messages
channel.basic_consume(self.on_message, queue_name)
except Exception:
self.listener_logger.error(traceback.format_exc())
return
try:
channel.start_consuming()
except KeyboardInterrupt:
channel.stop_consuming()
# Close the channel on keyboard interrupt
channel.close()
connection.close()
def on_message(self, channel, method_frame, _, body): # pylint: disable=W0613
'''Specify the action to be taken on a message received'''
message = yaml.load(body)
try:
if 'oslo.message' in message.keys():
message = yaml.load(message['oslo.message'])
if self.is_message_wanted(message):
if self.MUSIC and self.MUSIC.get('engine'):
self.store_message(message)
else:
return
self.listener_logger.debug("\nMessage No: %s\n", method_frame.delivery_tag)
message_obj = yaml.load(body)
if 'oslo.message' in message_obj.keys():
message_obj = yaml.load(message_obj['oslo.message'])
if self.config.events_listener.output_format == 'json':
self.listener_logger.debug(json.dumps(message_obj, sort_keys=True, indent=2))
elif self.config.events_listener.output_format == 'yaml':
self.listener_logger.debug(yaml.dump(message_obj))
else:
self.listener_logger.debug(pprint.pformat(message_obj))
channel.basic_ack(delivery_tag=method_frame.delivery_tag)
except Exception:
self.listener_logger.error(traceback.format_exc())
return
def is_message_wanted(self, message):
''' Based on markers from Ostro, determine if this is a wanted message. '''
method = message.get('method', None)
args = message.get('args', None)
nova_props = {'nova_object.changes', 'nova_object.data', 'nova_object.name'}
args_props = {'filter_properties', 'instance'}
is_data = method and args
is_nova = is_data and 'objinst' in args and nova_props.issubset(args['objinst'])
action_instance = is_nova and method == 'object_action' and self.is_nova_name(args) and self.is_nova_state(args)
action_compute = is_nova and self.is_compute_name(args)
create_instance = is_data and method == 'build_and_run_instance' and args_props.issubset(args) and 'nova_object.data' in args['instance']
return action_instance or action_compute or create_instance
def store_message(self, message):
'''Store message in Music'''
timestamp = datetime.now().isoformat()
args = json.dumps(message.get('args', None))
exchange = self.config.events_listener.exchange
method = message.get('method', None)
kwargs = {
'timestamp': timestamp,
'args': args,
'exchange': exchange,
'method': method,
'database': self.MUSIC,
}
OsloMessage(**kwargs) # pylint: disable=W0612
def is_nova_name(self, args):
return args['objinst']['nova_object.name'] == 'Instance'
def is_nova_state(self, args):
return args['objinst']['nova_object.data']['vm_state'] in ['deleted', 'active']
def is_compute_name(self, args):
return args['objinst']['nova_object.name'] == 'ComputeNode'
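For illustration, the filtering in is_message_wanted can be exercised standalone. The helper below restates the predicate with the same markers (the `nova_object.*` keys and the `object_action` / `build_and_run_instance` methods) so sample payloads can be tested without RabbitMQ:

```python
NOVA_PROPS = {'nova_object.changes', 'nova_object.data', 'nova_object.name'}
ARGS_PROPS = {'filter_properties', 'instance'}

def is_message_wanted(message):
    # Standalone restatement of ListenerManager.is_message_wanted.
    method = message.get('method')
    args = message.get('args')
    is_data = bool(method and args)
    is_nova = is_data and 'objinst' in args and NOVA_PROPS.issubset(args['objinst'])
    action_instance = (is_nova and method == 'object_action'
                       and args['objinst']['nova_object.name'] == 'Instance'
                       and args['objinst']['nova_object.data']['vm_state'] in ('deleted', 'active'))
    action_compute = is_nova and args['objinst']['nova_object.name'] == 'ComputeNode'
    create_instance = (is_data and method == 'build_and_run_instance'
                       and ARGS_PROPS.issubset(args)
                       and 'nova_object.data' in args['instance'])
    return action_instance or action_compute or create_instance
```

An `object_action` message for an active Instance passes; a message with an unknown method or empty args does not.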

@@ -0,0 +1,95 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2016 AT&T
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
'''OsloMessage Database Model'''
# This is based on Music models used in Valet.
import uuid
class OsloMessage(object):
__tablename__ = 'oslo_messages'
_database = None
timestamp = None
args = None
exchange = None
method = None
@classmethod
def schema(cls):
'''Return schema.'''
schema = {
'timestamp': 'text',
'args': 'text',
'exchange': 'text',
'method': 'text',
'PRIMARY KEY': '(timestamp)'
}
return schema
@classmethod
def pk_name(cls):
return 'timestamp'
def pk_value(self):
return self.timestamp
def insert(self):
'''Insert row.'''
keyspace = self._database.get('keyspace')
kwargs = {
'keyspace': keyspace,
'table': self.__tablename__,
'values': self.values()
}
pk_name = self.pk_name()
if pk_name not in kwargs['values']:
the_id = str(uuid.uuid4())
kwargs['values'][pk_name] = the_id
setattr(self, pk_name, the_id)
engine = self._database.get('engine')
engine.create_row(**kwargs)
def values(self):
return {
'timestamp': self.timestamp,
'args': self.args,
'exchange': self.exchange,
'method': self.method,
}
def __init__(self, timestamp, args, exchange,
method, database, _insert=True):
self._database = database
self.timestamp = timestamp
self.args = args
self.exchange = exchange
self.method = method
if _insert:
self.insert()
def __json__(self):
json_ = {}
json_['timestamp'] = self.timestamp
json_['args'] = self.args
json_['exchange'] = self.exchange
json_['method'] = self.method
return json_
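A sketch of how insert() interacts with the injected database dict, using a fake engine in place of Music (FakeEngine and insert_message are illustrative stand-ins, not part of the module):

```python
import uuid

class FakeEngine(object):
    """Minimal stand-in for the Music engine used by OsloMessage.insert()."""
    def __init__(self):
        self.rows = []
    def create_row(self, **kwargs):
        self.rows.append(kwargs)

def insert_message(database, tablename, values, pk_name='timestamp'):
    # Mirror insert(): pull engine and keyspace out of the database dict,
    # fill in a uuid4 primary key if the values lack one, then create the row.
    kwargs = {'keyspace': database.get('keyspace'),
              'table': tablename,
              'values': dict(values)}
    if pk_name not in kwargs['values']:
        kwargs['values'][pk_name] = str(uuid.uuid4())
    database.get('engine').create_row(**kwargs)

engine = FakeEngine()
music = {'engine': engine, 'keyspace': 'valet'}
insert_message(music, 'oslo_messages',
               {'timestamp': '2016-12-12T08:50:24', 'args': '{}',
                'exchange': 'nova', 'method': 'object_action'})
```

The `music` dict here has the same shape as `self.MUSIC` built in ListenerManager.run() above.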

@@ -0,0 +1,285 @@
#!/bin/python
# Modified: Sep. 27, 2016
import json
from valet.engine.optimizer.app_manager.app_topology import AppTopology
from valet.engine.optimizer.app_manager.app_topology_base import VM
from valet.engine.optimizer.app_manager.application import App
from valet.engine.optimizer.util import util as util
class AppHandler(object):
def __init__(self, _resource, _db, _config, _logger):
self.resource = _resource
self.db = _db
self.config = _config
self.logger = _logger
''' current app requested, a temporary copy '''
self.apps = {}
self.last_log_index = 0
self.status = "success"
def add_app(self, _app_data):
self.apps.clear()
app_topology = AppTopology(self.resource, self.logger)
for app in _app_data:
self.logger.debug("AppHandler: parse app")
stack_id = None
if "stack_id" in app.keys():
stack_id = app["stack_id"]
else:
stack_id = "none"
application_name = None
if "application_name" in app.keys():
application_name = app["application_name"]
else:
application_name = "none"
action = app["action"]
if action == "ping":
self.logger.debug("AppHandler: got ping")
elif action == "replan" or action == "migrate":
re_app = self._regenerate_app_topology(stack_id, app, app_topology, action)
if re_app is None:
self.apps[stack_id] = None
self.status = "cannot locate the original plan for stack = " + stack_id
return None
if action == "replan":
self.logger.debug("AppHandler: got replan: " + stack_id)
elif action == "migrate":
self.logger.debug("AppHandler: got migration: " + stack_id)
app_id = app_topology.set_app_topology(re_app)
if app_id is None:
self.logger.error("AppHandler: " + app_topology.status)
self.status = app_topology.status
self.apps[stack_id] = None
return None
else:
app_id = app_topology.set_app_topology(app)
if app_id is None:
self.logger.error("AppHandler: " + app_topology.status)
self.status = app_topology.status
self.apps[stack_id] = None
return None
new_app = App(stack_id, application_name, action)
self.apps[stack_id] = new_app
return app_topology
def add_placement(self, _placement_map, _timestamp):
for v in _placement_map.keys():
if self.apps[v.app_uuid].status == "requested":
self.apps[v.app_uuid].status = "scheduled"
self.apps[v.app_uuid].timestamp_scheduled = _timestamp
if isinstance(v, VM):
self.apps[v.app_uuid].add_vm(v, _placement_map[v])
# elif isinstance(v, Volume):
# self.apps[v.app_uuid].add_volume(v, _placement_map[v])
else:
if _placement_map[v] in self.resource.hosts.keys():
host = self.resource.hosts[_placement_map[v]]
if v.level == "host":
self.apps[v.app_uuid].add_vgroup(v, host.name)
else:
hg = self.resource.host_groups[_placement_map[v]]
if v.level == hg.host_type:
self.apps[v.app_uuid].add_vgroup(v, hg.name)
if self._store_app_placements() is False:
# NOTE: ignore?
pass
def _store_app_placements(self):
(app_logfile, last_index, mode) = util.get_last_logfile(
self.config.app_log_loc, self.config.max_log_size, self.config.max_num_of_logs,
self.resource.datacenter.name, self.last_log_index)
self.last_log_index = last_index
# TODO(GJ): error handling
log_file = open(self.config.app_log_loc + app_logfile, mode)
for appk, app in self.apps.iteritems():
json_log = app.log_in_info()
log_data = json.dumps(json_log)
log_file.write(log_data)
log_file.write("\n")
log_file.close()
self.logger.info("AppHandler: log app in " + app_logfile)
if self.db is not None:
for appk, app in self.apps.iteritems():
json_info = app.get_json_info()
if self.db.add_app(appk, json_info) is False:
return False
if self.db.update_app_log_index(self.resource.datacenter.name, self.last_log_index) is False:
return False
return True
def remove_placement(self):
if self.db is not None:
for appk, _ in self.apps.iteritems():
if self.db.add_app(appk, None) is False:
self.logger.error("AppHandler: error while adding app info to MUSIC")
# NOTE: ignore?
def get_vm_info(self, _s_uuid, _h_uuid, _host):
vm_info = {}
if _h_uuid is not None and _h_uuid != "none" and \
_s_uuid is not None and _s_uuid != "none":
vm_info = self.db.get_vm_info(_s_uuid, _h_uuid, _host)
return vm_info
def update_vm_info(self, _s_uuid, _h_uuid):
s_uuid_exist = bool(_s_uuid is not None and _s_uuid != "none")
h_uuid_exist = bool(_h_uuid is not None and _h_uuid != "none")
if s_uuid_exist and h_uuid_exist:
return self.db.update_vm_info(_s_uuid, _h_uuid)
return True
def _regenerate_app_topology(self, _stack_id, _app, _app_topology, _action):
re_app = {}
old_app = self.db.get_app_info(_stack_id)
if old_app is None:
self.status = "error while getting old_app from MUSIC"
self.logger.error("AppHandler: " + self.status)
return None
elif len(old_app) == 0:
self.status = "cannot find the old app in MUSIC"
self.logger.error("AppHandler: " + self.status)
return None
re_app["action"] = "create"
re_app["stack_id"] = _stack_id
resources = {}
diversity_groups = {}
exclusivity_groups = {}
if "VMs" in old_app.keys():
for vmk, vm in old_app["VMs"].iteritems():
resources[vmk] = {}
resources[vmk]["name"] = vm["name"]
resources[vmk]["type"] = "OS::Nova::Server"
properties = {}
properties["flavor"] = vm["flavor"]
if vm["availability_zones"] != "none":
properties["availability_zone"] = vm["availability_zones"]
resources[vmk]["properties"] = properties
if len(vm["diversity_groups"]) > 0:
for divk, level_name in vm["diversity_groups"].iteritems():
div_id = divk + ":" + level_name
if div_id not in diversity_groups.keys():
diversity_groups[div_id] = []
diversity_groups[div_id].append(vmk)
if len(vm["exclusivity_groups"]) > 0:
for exk, level_name in vm["exclusivity_groups"].iteritems():
ex_id = exk + ":" + level_name
if ex_id not in exclusivity_groups.keys():
exclusivity_groups[ex_id] = []
exclusivity_groups[ex_id].append(vmk)
if _action == "replan":
if vmk == _app["orchestration_id"]:
_app_topology.candidate_list_map[vmk] = _app["locations"]
self.logger.debug("AppHandler: re-requested vm = " + vm["name"] + " in")
for hk in _app["locations"]:
self.logger.debug(" " + hk)
elif vmk in _app["exclusions"]:
_app_topology.planned_vm_map[vmk] = vm["host"]
self.logger.debug("AppHandler: exception from replan = " + vm["name"])
elif _action == "migrate":
if vmk == _app["orchestration_id"]:
_app_topology.exclusion_list_map[vmk] = _app["excluded_hosts"]
if vm["host"] not in _app["excluded_hosts"]:
_app_topology.exclusion_list_map[vmk].append(vm["host"])
else:
_app_topology.planned_vm_map[vmk] = vm["host"]
_app_topology.old_vm_map[vmk] = (vm["host"], vm["cpus"], vm["mem"], vm["local_volume"])
if "VGroups" in old_app.keys():
for gk, affinity in old_app["VGroups"].iteritems():
resources[gk] = {}
resources[gk]["type"] = "ATT::Valet::GroupAssignment"
properties = {}
properties["group_type"] = "affinity"
properties["group_name"] = affinity["name"]
properties["level"] = affinity["level"]
properties["resources"] = []
for r in affinity["subvgroup_list"]:
properties["resources"].append(r)
resources[gk]["properties"] = properties
if len(affinity["diversity_groups"]) > 0:
for divk, level_name in affinity["diversity_groups"].iteritems():
div_id = divk + ":" + level_name
if div_id not in diversity_groups.keys():
diversity_groups[div_id] = []
diversity_groups[div_id].append(gk)
if len(affinity["exclusivity_groups"]) > 0:
for exk, level_name in affinity["exclusivity_groups"].iteritems():
ex_id = exk + ":" + level_name
if ex_id not in exclusivity_groups.keys():
exclusivity_groups[ex_id] = []
exclusivity_groups[ex_id].append(gk)
# NOTE: skip pipes in this version
for div_id, resource_list in diversity_groups.iteritems():
divk_level_name = div_id.split(":")
resources[divk_level_name[0]] = {}
resources[divk_level_name[0]]["type"] = "ATT::Valet::GroupAssignment"
properties = {}
properties["group_type"] = "diversity"
properties["group_name"] = divk_level_name[2]
properties["level"] = divk_level_name[1]
properties["resources"] = resource_list
resources[divk_level_name[0]]["properties"] = properties
for ex_id, resource_list in exclusivity_groups.iteritems():
exk_level_name = ex_id.split(":")
resources[exk_level_name[0]] = {}
resources[exk_level_name[0]]["type"] = "ATT::Valet::GroupAssignment"
properties = {}
properties["group_type"] = "exclusivity"
properties["group_name"] = exk_level_name[2]
properties["level"] = exk_level_name[1]
properties["resources"] = resource_list
resources[exk_level_name[0]]["properties"] = properties
re_app["resources"] = resources
return re_app
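The group keys built in _regenerate_app_topology concatenate a group uuid with a level_name that itself appears to carry a colon (the split result is indexed up to [2], so the stored value looks like "level:name"). A small decoding sketch, under that assumption:

```python
def decode_group_id(group_id):
    # Assumed encoding: "<group_uuid>:<level>:<group_name>", where the
    # per-VM value was already "<level>:<group_name>" before the uuid
    # prefix was added.  maxsplit=2 keeps any further colons in the name.
    uuid_, level, name = group_id.split(':', 2)
    return uuid_, level, name
```

Decoding "g1:rack:my_div" yields the uuid "g1", level "rack", and group name "my_div", matching how the indices [0], [1], [2] are used above.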

@@ -0,0 +1,219 @@
#!/bin/python
# Modified: Sep. 22, 2016
from valet.engine.optimizer.app_manager.app_topology_base import VM, VGroup
from valet.engine.optimizer.app_manager.app_topology_parser import Parser
class AppTopology(object):
def __init__(self, _resource, _logger):
self.vgroups = {}
self.vms = {}
self.volumes = {}
''' for replan '''
self.old_vm_map = {}
self.planned_vm_map = {}
self.candidate_list_map = {}
''' for migration-tip '''
self.exclusion_list_map = {}
self.resource = _resource
self.logger = _logger
''' restriction of host naming convention '''
high_level_allowed = True
if "none" in self.resource.datacenter.region_code_list:
high_level_allowed = False
self.parser = Parser(high_level_allowed, self.logger)
self.total_nw_bandwidth = 0
self.total_CPU = 0
self.total_mem = 0
self.total_local_vol = 0
self.total_vols = {}
self.optimization_priority = None
self.status = "success"
''' parse and set each app '''
def set_app_topology(self, _app_graph):
(vgroups, vms, volumes) = self.parser.set_topology(_app_graph)
if len(vgroups) == 0 and len(vms) == 0 and len(volumes) == 0:
self.status = self.parser.status
return None
''' cumulate virtual resources '''
for _, vgroup in vgroups.iteritems():
self.vgroups[vgroup.uuid] = vgroup
for _, vm in vms.iteritems():
self.vms[vm.uuid] = vm
for _, vol in volumes.iteritems():
self.volumes[vol.uuid] = vol
return self.parser.stack_id, self.parser.application_name, self.parser.action
def set_weight(self):
for _, vm in self.vms.iteritems():
self._set_vm_weight(vm)
for _, vg in self.vgroups.iteritems():
self._set_vm_weight(vg)
for _, vg in self.vgroups.iteritems():
self._set_vgroup_resource(vg)
for _, vg in self.vgroups.iteritems():
self._set_vgroup_weight(vg)
def _set_vm_weight(self, _v):
if isinstance(_v, VGroup):
for _, sg in _v.subvgroups.iteritems():
self._set_vm_weight(sg)
else:
if self.resource.CPU_avail > 0:
_v.vCPU_weight = float(_v.vCPUs) / float(self.resource.CPU_avail)
else:
_v.vCPU_weight = 1.0
self.total_CPU += _v.vCPUs
if self.resource.mem_avail > 0:
_v.mem_weight = float(_v.mem) / float(self.resource.mem_avail)
else:
_v.mem_weight = 1.0
self.total_mem += _v.mem
if self.resource.local_disk_avail > 0:
_v.local_volume_weight = float(_v.local_volume_size) / float(self.resource.local_disk_avail)
else:
if _v.local_volume_size > 0:
_v.local_volume_weight = 1.0
else:
_v.local_volume_weight = 0.0
self.total_local_vol += _v.local_volume_size
bandwidth = _v.nw_bandwidth + _v.io_bandwidth
if self.resource.nw_bandwidth_avail > 0:
_v.bandwidth_weight = float(bandwidth) / float(self.resource.nw_bandwidth_avail)
else:
if bandwidth > 0:
_v.bandwidth_weight = 1.0
else:
_v.bandwidth_weight = 0.0
self.total_nw_bandwidth += bandwidth
def _set_vgroup_resource(self, _vg):
if isinstance(_vg, VM):
return
for _, sg in _vg.subvgroups.iteritems():
self._set_vgroup_resource(sg)
_vg.vCPUs += sg.vCPUs
_vg.mem += sg.mem
_vg.local_volume_size += sg.local_volume_size
def _set_vgroup_weight(self, _vgroup):
if self.resource.CPU_avail > 0:
_vgroup.vCPU_weight = float(_vgroup.vCPUs) / float(self.resource.CPU_avail)
else:
if _vgroup.vCPUs > 0:
_vgroup.vCPU_weight = 1.0
else:
_vgroup.vCPU_weight = 0.0
if self.resource.mem_avail > 0:
_vgroup.mem_weight = float(_vgroup.mem) / float(self.resource.mem_avail)
else:
if _vgroup.mem > 0:
_vgroup.mem_weight = 1.0
else:
_vgroup.mem_weight = 0.0
if self.resource.local_disk_avail > 0:
_vgroup.local_volume_weight = float(_vgroup.local_volume_size) / float(self.resource.local_disk_avail)
else:
if _vgroup.local_volume_size > 0:
_vgroup.local_volume_weight = 1.0
else:
_vgroup.local_volume_weight = 0.0
bandwidth = _vgroup.nw_bandwidth + _vgroup.io_bandwidth
if self.resource.nw_bandwidth_avail > 0:
_vgroup.bandwidth_weight = float(bandwidth) / float(self.resource.nw_bandwidth_avail)
else:
if bandwidth > 0:
_vgroup.bandwidth_weight = 1.0
else:
_vgroup.bandwidth_weight = 0.0
for _, svg in _vgroup.subvgroups.iteritems():
if isinstance(svg, VGroup):
self._set_vgroup_weight(svg)
def set_optimization_priority(self):
if len(self.vgroups) == 0 and len(self.vms) == 0 and len(self.volumes) == 0:
return
app_nw_bandwidth_weight = -1
if self.resource.nw_bandwidth_avail > 0:
app_nw_bandwidth_weight = float(self.total_nw_bandwidth) / float(self.resource.nw_bandwidth_avail)
else:
if self.total_nw_bandwidth > 0:
app_nw_bandwidth_weight = 1.0
else:
app_nw_bandwidth_weight = 0.0
app_CPU_weight = -1
if self.resource.CPU_avail > 0:
app_CPU_weight = float(self.total_CPU) / float(self.resource.CPU_avail)
else:
if self.total_CPU > 0:
app_CPU_weight = 1.0
else:
app_CPU_weight = 0.0
app_mem_weight = -1
if self.resource.mem_avail > 0:
app_mem_weight = float(self.total_mem) / float(self.resource.mem_avail)
else:
if self.total_mem > 0:
app_mem_weight = 1.0
else:
app_mem_weight = 0.0
app_local_vol_weight = -1
if self.resource.local_disk_avail > 0:
app_local_vol_weight = float(self.total_local_vol) / float(self.resource.local_disk_avail)
else:
if self.total_local_vol > 0:
app_local_vol_weight = 1.0
else:
app_local_vol_weight = 0.0
total_vol_list = []
for vol_class in self.total_vols.keys():
total_vol_list.append(self.total_vols[vol_class])
app_vol_weight = -1
if self.resource.disk_avail > 0:
app_vol_weight = float(sum(total_vol_list)) / float(self.resource.disk_avail)
else:
if sum(total_vol_list) > 0:
app_vol_weight = 1.0
else:
app_vol_weight = 0.0
opt = [("bw", app_nw_bandwidth_weight),
("cpu", app_CPU_weight),
("mem", app_mem_weight),
("lvol", app_local_vol_weight),
("vol", app_vol_weight)]
self.optimization_priority = sorted(opt, key=lambda resource: resource[1], reverse=True)
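The weight normalization used throughout _set_vm_weight and set_optimization_priority follows one pattern: demand over availability, saturating at 1.0 when capacity is exhausted but demand exists. A standalone sketch:

```python
def weight(demand, avail):
    # Normalize demand against available capacity; when nothing is
    # available, saturate at 1.0 if there is demand, else 0.0.
    if avail > 0:
        return float(demand) / float(avail)
    return 1.0 if demand > 0 else 0.0

# Illustrative totals: the priority list is the weights sorted descending,
# as in set_optimization_priority above.
opt = [("bw", weight(100, 1000)),
       ("cpu", weight(8, 64)),
       ("mem", weight(4096, 8192)),
       ("lvol", weight(0, 0)),
       ("vol", weight(10, 0))]
priority = sorted(opt, key=lambda resource: resource[1], reverse=True)
```

Here the exhausted volume pool (demand with zero availability) sorts first, so the scarcest resource drives the placement search.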

@@ -0,0 +1,257 @@
#!/bin/python
# Modified: Sep. 22, 2016
LEVELS = ["host", "rack", "cluster"]
class VGroup(object):
def __init__(self, _app_uuid, _uuid):
self.app_uuid = _app_uuid
self.uuid = _uuid
self.name = None
self.status = "requested"
self.vgroup_type = "AFF" # Support Affinity group at this version
self.level = None # host, rack, or cluster
self.survgroup = None # where this vgroup belongs to
self.subvgroups = {} # child vgroups
self.vgroup_list = [] # a list of links to VMs or Volumes
self.diversity_groups = {} # cumulative diversity/exclusivity group
self.exclusivity_groups = {} # over this level. key=name, value=level
self.availability_zone_list = []
# self.host_aggregates = {} # cumulative aggregates
self.extra_specs_list = [] # cumulative extra_specs
self.vCPUs = 0
self.mem = 0 # MB
self.local_volume_size = 0 # GB
self.volume_sizes = {} # key = volume_class_name, value = size
self.nw_bandwidth = 0 # Mbps
self.io_bandwidth = 0 # Mbps
self.vCPU_weight = -1
self.mem_weight = -1
self.local_volume_weight = -1
self.volume_weight = -1 # average of all storage classes
self.bandwidth_weight = -1
self.host = None
def get_json_info(self):
survgroup_id = None
if self.survgroup is None:
survgroup_id = "none"
else:
survgroup_id = self.survgroup.uuid
subvgroup_list = []
for vk in self.subvgroups.keys():
subvgroup_list.append(vk)
link_list = []
for l in self.vgroup_list:
link_list.append(l.get_json_info())
return {'name': self.name,
'status': self.status,
'vgroup_type': self.vgroup_type,
'level': self.level,
'survgroup': survgroup_id,
'subvgroup_list': subvgroup_list,
'link_list': link_list,
'diversity_groups': self.diversity_groups,
'exclusivity_groups': self.exclusivity_groups,
'availability_zones': self.availability_zone_list,
# 'host_aggregates':host_aggregates,
'extra_specs_list': self.extra_specs_list,
'cpus': self.vCPUs,
'mem': self.mem,
'local_volume': self.local_volume_size,
'volumes': self.volume_sizes,
'nw_bandwidth': self.nw_bandwidth,
'io_bandwidth': self.io_bandwidth,
'cpu_weight': self.vCPU_weight,
'mem_weight': self.mem_weight,
'local_volume_weight': self.local_volume_weight,
'volume_weight': self.volume_weight,
'bandwidth_weight': self.bandwidth_weight,
'host': self.host}
class VM(object):
def __init__(self, _app_uuid, _uuid):
self.app_uuid = _app_uuid
self.uuid = _uuid
self.name = None
self.status = "requested"
self.survgroup = None # VGroup where this vm belongs to
self.volume_list = [] # a list of links to Volumes
self.vm_list = [] # a list of links to VMs
self.diversity_groups = {}
self.exclusivity_groups = {}
self.availability_zone = None
# self.host_aggregates = {}
self.extra_specs_list = []
self.flavor = None
self.vCPUs = 0
self.mem = 0 # MB
self.local_volume_size = 0 # GB
self.nw_bandwidth = 0
self.io_bandwidth = 0
self.vCPU_weight = -1
self.mem_weight = -1
self.local_volume_weight = -1
self.bandwidth_weight = -1
self.host = None # where this vm is placed
def get_json_info(self):
survgroup_id = None
if self.survgroup is None:
survgroup_id = "none"
else:
survgroup_id = self.survgroup.uuid
vm_list = []
for vml in self.vm_list:
vm_list.append(vml.get_json_info())
vol_list = []
for voll in self.volume_list:
vol_list.append(voll.get_json_info())
availability_zone = None
if self.availability_zone is None:
availability_zone = "none"
else:
availability_zone = self.availability_zone
return {'name': self.name,
'status': self.status,
'survgroup': survgroup_id,
'vm_list': vm_list,
'volume_list': vol_list,
'diversity_groups': self.diversity_groups,
'exclusivity_groups': self.exclusivity_groups,
'availability_zones': availability_zone,
# 'host_aggregates':host_aggregates,
'extra_specs_list': self.extra_specs_list,
'flavor': self.flavor,
'cpus': self.vCPUs,
'mem': self.mem,
'local_volume': self.local_volume_size,
'nw_bandwidth': self.nw_bandwidth,
'io_bandwidth': self.io_bandwidth,
'cpu_weight': self.vCPU_weight,
'mem_weight': self.mem_weight,
'local_volume_weight': self.local_volume_weight,
'bandwidth_weight': self.bandwidth_weight,
'host': self.host}
class Volume(object):
def __init__(self, _app_uuid, _uuid):
self.app_uuid = _app_uuid
self.uuid = _uuid
self.name = None
self.status = "requested"
self.volume_class = None
self.survgroup = None # where this vm belongs to
self.vm_list = [] # a list of links to VMs
self.diversity_groups = {}
self.exclusivity_groups = {}
self.volume_size = 0 # GB
self.io_bandwidth = 0
self.volume_weight = -1
self.bandwidth_weight = -1
self.storage_host = None
def get_json_info(self):
survgroup_id = None
if self.survgroup is None:
survgroup_id = "none"
else:
survgroup_id = self.survgroup.uuid
volume_class = None
if self.volume_class is None:
volume_class = "none"
else:
volume_class = self.volume_class
vm_list = []
for vml in self.vm_list:
vm_list.append(vml.get_json_info())
return {'name': self.name,
'status': self.status,
'class': volume_class,
'survgroup': survgroup_id,
'vm_list': vm_list,
'diversity_groups': self.diversity_groups,
'exclusivity_groups': self.exclusivity_groups,
'volume': self.volume_size,
'io_bandwidth': self.io_bandwidth,
'volume_weight': self.volume_weight,
'bandwidth_weight': self.bandwidth_weight,
'host': self.storage_host}
class VGroupLink(object):
def __init__(self, _n):
self.node = _n # target VM or Volume
self.nw_bandwidth = 0
self.io_bandwidth = 0
def get_json_info(self):
return {'target': self.node.uuid,
'nw_bandwidth': self.nw_bandwidth,
'io_bandwidth': self.io_bandwidth}
class VMLink(object):
def __init__(self, _n):
self.node = _n # target VM
self.nw_bandwidth = 0 # Mbps
def get_json_info(self):
return {'target': self.node.uuid,
'nw_bandwidth': self.nw_bandwidth}
class VolumeLink(object):
def __init__(self, _n):
self.node = _n # target Volume
self.io_bandwidth = 0 # Mbps
def get_json_info(self):
return {'target': self.node.uuid,
'io_bandwidth': self.io_bandwidth}

View File

@ -0,0 +1,641 @@
#!/bin/python
# Modified: Sep. 27, 2016
from valet.engine.optimizer.app_manager.app_topology_base import VGroup, VGroupLink, VM, VMLink, LEVELS
'''
- Restricted nested groups (not allowed): EX in EX, EX in DIV, DIV in EX, DIV in DIV
- A VM/group cannot exist in multiple EX groups
- A nested group's level cannot be higher than its nesting group's
- The following Heat components are not supported:
OS::Nova::ServerGroup
OS::Heat::AutoScalingGroup
OS::Heat::Stack
OS::Heat::ResourceGroup
'''
class Parser(object):
def __init__(self, _high_level_allowed, _logger):
self.logger = _logger
self.high_level_allowed = _high_level_allowed
self.format_version = None
self.stack_id = None # used as application id
self.application_name = None
self.action = None # [create|update|ping]
self.status = "success"
def set_topology(self, _graph):
if "version" in _graph.keys():
self.format_version = _graph["version"]
else:
self.format_version = "0.0"
if "stack_id" in _graph.keys():
self.stack_id = _graph["stack_id"]
else:
self.stack_id = "none"
if "application_name" in _graph.keys():
self.application_name = _graph["application_name"]
else:
self.application_name = "none"
if "action" in _graph.keys():
self.action = _graph["action"]
else:
self.action = "any"
return self._set_topology(_graph["resources"])
def _set_topology(self, _elements):
vgroups = {}
vgroup_captured = False
vms = {}
''' empty at this version '''
volumes = {}
for rk, r in _elements.iteritems():
if r["type"] == "OS::Nova::Server":
vm = VM(self.stack_id, rk)
if "name" in r.keys():
vm.name = r["name"]
else:
vm.name = vm.uuid
vm.flavor = r["properties"]["flavor"]
if "availability_zone" in r["properties"].keys():
az = r["properties"]["availability_zone"]
# NOTE: specifying a particular host name is not allowed
vm.availability_zone = az.split(":")[0]
vms[vm.uuid] = vm
self.logger.debug("Parser: get a vm = " + vm.name)
elif r["type"] == "OS::Cinder::Volume":
self.logger.warn("Parser: do nothing for volume at this version")
elif r["type"] == "ATT::Valet::GroupAssignment":
vgroup = VGroup(self.stack_id, rk)
vgroup.vgroup_type = None
if "group_type" in r["properties"].keys():
if r["properties"]["group_type"] == "affinity":
vgroup.vgroup_type = "AFF"
elif r["properties"]["group_type"] == "diversity":
vgroup.vgroup_type = "DIV"
elif r["properties"]["group_type"] == "exclusivity":
vgroup.vgroup_type = "EX"
else:
self.status = "unknown group = " + r["properties"]["group_type"]
return {}, {}, {}
else:
self.status = "no group type"
return {}, {}, {}
if "group_name" in r["properties"].keys():
vgroup.name = r["properties"]["group_name"]
else:
if vgroup.vgroup_type == "EX":
self.status = "no exclusivity group identifier"
return {}, {}, {}
else:
vgroup.name = "any"
if "level" in r["properties"].keys():
vgroup.level = r["properties"]["level"]
if vgroup.level != "host":
if self.high_level_allowed is False:
self.status = "only host level of affinity group allowed " + \
"due to the mismatch of host naming convention"
return {}, {}, {}
else:
self.status = "no grouping level"
return {}, {}, {}
vgroups[vgroup.uuid] = vgroup
self.logger.debug("Parser: get a group = " + vgroup.name)
vgroup_captured = True
self._set_vm_links(_elements, vms)
if self._set_volume_links(_elements, vms, volumes) is False:
return {}, {}, {}
self._set_total_link_capacities(vms, volumes)
self.logger.debug("Parser: all vms parsed")
if self._merge_diversity_groups(_elements, vgroups, vms, volumes) is False:
return {}, {}, {}
if self._merge_exclusivity_groups(_elements, vgroups, vms, volumes) is False:
return {}, {}, {}
if self._merge_affinity_groups(_elements, vgroups, vms, volumes) is False:
return {}, {}, {}
''' delete all EX and DIV vgroups after merging '''
for vgk in vgroups.keys():
vg = vgroups[vgk]
if vg.vgroup_type == "DIV" or vg.vgroup_type == "EX":
del vgroups[vgk]
for vgk in vgroups.keys():
vgroup = vgroups[vgk]
self._set_vgroup_links(vgroup, vgroups, vms, volumes)
if vgroup_captured is True:
self.logger.debug("Parser: all groups resolved")
return vgroups, vms, volumes
def _set_vm_links(self, _elements, _vms):
for _, r in _elements.iteritems():
if r["type"] == "ATT::CloudQoS::Pipe":
resources = r["properties"]["resources"]
for vk1 in resources:
if vk1 in _vms.keys():
vm = _vms[vk1]
for vk2 in resources:
if vk2 != vk1:
if vk2 in _vms.keys():
link = VMLink(_vms[vk2])
if "bandwidth" in r["properties"].keys():
link.nw_bandwidth = r["properties"]["bandwidth"]["min"]
vm.vm_list.append(link)
def _set_volume_links(self, _elements, _vms, _volumes):
for rk, r in _elements.iteritems():
if r["type"] == "OS::Cinder::VolumeAttachment":
self.logger.warn("Parser: do nothing for volume attachment at this version")
return True
def _set_total_link_capacities(self, _vms, _volumes):
for _, vm in _vms.iteritems():
for vl in vm.vm_list:
vm.nw_bandwidth += vl.nw_bandwidth
for voll in vm.volume_list:
vm.io_bandwidth += voll.io_bandwidth
for _, volume in _volumes.iteritems():
for vl in volume.vm_list:
volume.io_bandwidth += vl.io_bandwidth
def _merge_diversity_groups(self, _elements, _vgroups, _vms, _volumes):
for level in LEVELS:
for rk, r in _elements.iteritems():
if r["type"] == "ATT::Valet::GroupAssignment" and \
r["properties"]["group_type"] == "diversity" and \
r["properties"]["level"] == level:
vgroup = _vgroups[rk]
for vk in r["properties"]["resources"]:
if vk in _vms.keys():
vgroup.subvgroups[vk] = _vms[vk]
_vms[vk].diversity_groups[rk] = vgroup.level + ":" + vgroup.name
elif vk in _volumes.keys():
vgroup.subvgroups[vk] = _volumes[vk]
_volumes[vk].diversity_groups[rk] = vgroup.level + ":" + vgroup.name
elif vk in _vgroups.keys():
vg = _vgroups[vk]
if LEVELS.index(vg.level) > LEVELS.index(level):
self.status = "grouping scope: nested group's level is higher"
return False
if vg.vgroup_type == "DIV" or vg.vgroup_type == "EX":
self.status = "group type (" + vg.vgroup_type + ") not allowed to be nested in diversity group at this version"
return False
vgroup.subvgroups[vk] = vg
vg.diversity_groups[rk] = vgroup.level + ":" + vgroup.name
else:
self.status = "invalid resource = " + vk
return False
return True
def _merge_exclusivity_groups(self, _elements, _vgroups, _vms, _volumes):
for level in LEVELS:
for rk, r in _elements.iteritems():
if r["type"] == "ATT::Valet::GroupAssignment" and \
r["properties"]["group_type"] == "exclusivity" and \
r["properties"]["level"] == level:
vgroup = _vgroups[rk]
for vk in r["properties"]["resources"]:
if vk in _vms.keys():
vgroup.subvgroups[vk] = _vms[vk]
_vms[vk].exclusivity_groups[rk] = vgroup.level + ":" + vgroup.name
elif vk in _volumes.keys():
vgroup.subvgroups[vk] = _volumes[vk]
_volumes[vk].exclusivity_groups[rk] = vgroup.level + ":" + vgroup.name
elif vk in _vgroups.keys():
vg = _vgroups[vk]
if LEVELS.index(vg.level) > LEVELS.index(level):
self.status = "grouping scope: nested group's level is higher"
return False
if vg.vgroup_type == "DIV" or vg.vgroup_type == "EX":
self.status = "group type (" + vg.vgroup_type + ") not allowed to be nested in exclusivity group at this version"
return False
vgroup.subvgroups[vk] = vg
vg.exclusivity_groups[rk] = vgroup.level + ":" + vgroup.name
else:
self.status = "invalid resource = " + vk
return False
return True
def _merge_affinity_groups(self, _elements, _vgroups, _vms, _volumes):
affinity_map = {} # key is uuid of vm, volume, or vgroup & value is its parent vgroup
for level in LEVELS:
for rk, r in _elements.iteritems():
if r["type"] == "ATT::Valet::GroupAssignment" and \
r["properties"]["group_type"] == "affinity" and \
r["properties"]["level"] == level:
vgroup = None
if rk in _vgroups.keys():
vgroup = _vgroups[rk]
else:
continue
self.logger.debug("Parser: merge for affinity = " + vgroup.name)
for vk in r["properties"]["resources"]:
if vk in _vms.keys():
vgroup.subvgroups[vk] = _vms[vk]
_vms[vk].survgroup = vgroup
affinity_map[vk] = vgroup
self._add_implicit_diversity_groups(vgroup, _vms[vk].diversity_groups)
self._add_implicit_exclusivity_groups(vgroup, _vms[vk].exclusivity_groups)
self._add_memberships(vgroup, _vms[vk])
del _vms[vk]
elif vk in _volumes.keys():
vgroup.subvgroups[vk] = _volumes[vk]
_volumes[vk].survgroup = vgroup
affinity_map[vk] = vgroup
self._add_implicit_diversity_groups(vgroup, _volumes[vk].diversity_groups)
self._add_implicit_exclusivity_groups(vgroup, _volumes[vk].exclusivity_groups)
self._add_memberships(vgroup, _volumes[vk])
del _volumes[vk]
elif vk in _vgroups.keys():
vg = _vgroups[vk]
if LEVELS.index(vg.level) > LEVELS.index(level):
self.status = "grouping scope: nested group's level is higher"
return False
if vg.vgroup_type == "DIV" or vg.vgroup_type == "EX":
if self._merge_subgroups(vgroup, vg.subvgroups, _vms, _volumes, _vgroups,
_elements, affinity_map) is False:
return False
del _vgroups[vk]
else:
if self._exist_in_subgroups(vk, vgroup) is None:
if self._get_subgroups(vg, _elements,
_vgroups, _vms, _volumes,
affinity_map) is False:
return False
vgroup.subvgroups[vk] = vg
vg.survgroup = vgroup
affinity_map[vk] = vgroup
self._add_implicit_diversity_groups(vgroup, vg.diversity_groups)
self._add_implicit_exclusivity_groups(vgroup, vg.exclusivity_groups)
self._add_memberships(vgroup, vg)
del _vgroups[vk]
else: # vk already belongs to another vgroup or refers to an invalid resource
if vk not in affinity_map.keys():
self.status = "invalid resource = " + vk
return False
if affinity_map[vk].uuid != vgroup.uuid:
if self._exist_in_subgroups(vk, vgroup) is None:
self._set_implicit_grouping(vk, vgroup, affinity_map, _vgroups)
return True
def _merge_subgroups(self, _vgroup, _subgroups, _vms, _volumes, _vgroups, _elements, _affinity_map):
for vk, _ in _subgroups.iteritems():
if vk in _vms.keys():
_vgroup.subvgroups[vk] = _vms[vk]
_vms[vk].survgroup = _vgroup
_affinity_map[vk] = _vgroup
self._add_implicit_diversity_groups(_vgroup, _vms[vk].diversity_groups)
self._add_implicit_exclusivity_groups(_vgroup, _vms[vk].exclusivity_groups)
self._add_memberships(_vgroup, _vms[vk])
del _vms[vk]
elif vk in _volumes.keys():
_vgroup.subvgroups[vk] = _volumes[vk]
_volumes[vk].survgroup = _vgroup
_affinity_map[vk] = _vgroup
self._add_implicit_diversity_groups(_vgroup, _volumes[vk].diversity_groups)
self._add_implicit_exclusivity_groups(_vgroup, _volumes[vk].exclusivity_groups)
self._add_memberships(_vgroup, _volumes[vk])
del _volumes[vk]
elif vk in _vgroups.keys():
vg = _vgroups[vk]
if LEVELS.index(vg.level) > LEVELS.index(_vgroup.level):
self.status = "grouping scope: nested group's level is higher"
return False
if vg.vgroup_type == "DIV" or vg.vgroup_type == "EX":
if self._merge_subgroups(_vgroup, vg.subvgroups,
_vms, _volumes, _vgroups,
_elements, _affinity_map) is False:
return False
del _vgroups[vk]
else:
if self._exist_in_subgroups(vk, _vgroup) is None:
if self._get_subgroups(vg, _elements, _vgroups, _vms, _volumes, _affinity_map) is False:
return False
_vgroup.subvgroups[vk] = vg
vg.survgroup = _vgroup
_affinity_map[vk] = _vgroup
self._add_implicit_diversity_groups(_vgroup, vg.diversity_groups)
self._add_implicit_exclusivity_groups(_vgroup, vg.exclusivity_groups)
self._add_memberships(_vgroup, vg)
del _vgroups[vk]
else: # vk already belongs to another vgroup or refers to an invalid resource
if vk not in _affinity_map.keys():
self.status = "invalid resource = " + vk
return False
if _affinity_map[vk].uuid != _vgroup.uuid:
if self._exist_in_subgroups(vk, _vgroup) is None:
self._set_implicit_grouping(vk, _vgroup, _affinity_map, _vgroups)
return True
def _get_subgroups(self, _vgroup, _elements, _vgroups, _vms, _volumes, _affinity_map):
for vk in _elements[_vgroup.uuid]["properties"]["resources"]:
if vk in _vms.keys():
_vgroup.subvgroups[vk] = _vms[vk]
_vms[vk].survgroup = _vgroup
_affinity_map[vk] = _vgroup
self._add_implicit_diversity_groups(_vgroup, _vms[vk].diversity_groups)
self._add_implicit_exclusivity_groups(_vgroup, _vms[vk].exclusivity_groups)
self._add_memberships(_vgroup, _vms[vk])
del _vms[vk]
elif vk in _volumes.keys():
_vgroup.subvgroups[vk] = _volumes[vk]
_volumes[vk].survgroup = _vgroup
_affinity_map[vk] = _vgroup
self._add_implicit_diversity_groups(_vgroup, _volumes[vk].diversity_groups)
self._add_implicit_exclusivity_groups(_vgroup, _volumes[vk].exclusivity_groups)
self._add_memberships(_vgroup, _volumes[vk])
del _volumes[vk]
elif vk in _vgroups.keys():
vg = _vgroups[vk]
if LEVELS.index(vg.level) > LEVELS.index(_vgroup.level):
self.status = "grouping scope: nested group's level is higher"
return False
if vg.vgroup_type == "DIV" or vg.vgroup_type == "EX":
if self._merge_subgroups(_vgroup, vg.subvgroups,
_vms, _volumes, _vgroups,
_elements, _affinity_map) is False:
return False
del _vgroups[vk]
else:
if self._exist_in_subgroups(vk, _vgroup) is None:
if self._get_subgroups(vg, _elements, _vgroups, _vms, _volumes, _affinity_map) is False:
return False
_vgroup.subvgroups[vk] = vg
vg.survgroup = _vgroup
_affinity_map[vk] = _vgroup
self._add_implicit_diversity_groups(_vgroup, vg.diversity_groups)
self._add_implicit_exclusivity_groups(_vgroup, vg.exclusivity_groups)
self._add_memberships(_vgroup, vg)
del _vgroups[vk]
else:
if vk not in _affinity_map.keys():
self.status = "invalid resource = " + vk
return False
if _affinity_map[vk].uuid != _vgroup.uuid:
if self._exist_in_subgroups(vk, _vgroup) is None:
self._set_implicit_grouping(vk, _vgroup, _affinity_map, _vgroups)
return True
def _add_implicit_diversity_groups(self, _vgroup, _diversity_groups):
for dz, level in _diversity_groups.iteritems():
l = level.split(":", 1)[0]
if LEVELS.index(l) >= LEVELS.index(_vgroup.level):
_vgroup.diversity_groups[dz] = level
def _add_implicit_exclusivity_groups(self, _vgroup, _exclusivity_groups):
for ex, level in _exclusivity_groups.iteritems():
l = level.split(":", 1)[0]
if LEVELS.index(l) >= LEVELS.index(_vgroup.level):
_vgroup.exclusivity_groups[ex] = level
def _add_memberships(self, _vgroup, _v):
if isinstance(_v, VM) or isinstance(_v, VGroup):
for extra_specs in _v.extra_specs_list:
_vgroup.extra_specs_list.append(extra_specs)
if isinstance(_v, VM) and _v.availability_zone is not None:
if _v.availability_zone not in _vgroup.availability_zone_list:
_vgroup.availability_zone_list.append(_v.availability_zone)
if isinstance(_v, VGroup):
for az in _v.availability_zone_list:
if az not in _vgroup.availability_zone_list:
_vgroup.availability_zone_list.append(az)
'''
for hgk, hg in _v.host_aggregates.iteritems():
_vgroup.host_aggregates[hgk] = hg
'''
''' take vk's topmost parent as a child vgroup of _s_vg '''
def _set_implicit_grouping(self, _vk, _s_vg, _affinity_map, _vgroups):
t_vg = _affinity_map[_vk] # where _vk currently belongs to
if t_vg.uuid in _affinity_map.keys(): # if the parent belongs to the other parent vgroup
self._set_implicit_grouping(t_vg.uuid, _s_vg, _affinity_map, _vgroups)
else:
if LEVELS.index(t_vg.level) > LEVELS.index(_s_vg.level):
t_vg.level = _s_vg.level
'''
self.status = "Grouping scope: sub-group's level is larger"
return False
'''
if self._exist_in_subgroups(t_vg.uuid, _s_vg) is None:
_s_vg.subvgroups[t_vg.uuid] = t_vg
t_vg.survgroup = _s_vg
_affinity_map[t_vg.uuid] = _s_vg
self._add_implicit_diversity_groups(_s_vg, t_vg.diversity_groups)
self._add_implicit_exclusivity_groups(_s_vg, t_vg.exclusivity_groups)
self._add_memberships(_s_vg, t_vg)
del _vgroups[t_vg.uuid]
def _exist_in_subgroups(self, _vk, _vg):
containing_vg_uuid = None
for vk, v in _vg.subvgroups.iteritems():
if vk == _vk:
containing_vg_uuid = _vg.uuid
break
else:
if isinstance(v, VGroup):
containing_vg_uuid = self._exist_in_subgroups(_vk, v)
if containing_vg_uuid is not None:
break
return containing_vg_uuid
def _set_vgroup_links(self, _vgroup, _vgroups, _vms, _volumes):
for _, svg in _vgroup.subvgroups.iteritems(): # currently, a vgroup itself is not defined in a pipe
if isinstance(svg, VM):
for vml in svg.vm_list:
found = False
for _, tvgroup in _vgroups.iteritems():
containing_vg_uuid = self._exist_in_subgroups(vml.node.uuid, tvgroup)
if containing_vg_uuid is not None:
found = True
if containing_vg_uuid != _vgroup.uuid and \
self._exist_in_subgroups(containing_vg_uuid, _vgroup) is None:
self._add_nw_link(vml, _vgroup)
break
if found is False:
for tvk in _vms.keys():
if tvk == vml.node.uuid:
self._add_nw_link(vml, _vgroup)
break
for voll in svg.volume_list:
found = False
for _, tvgroup in _vgroups.iteritems():
containing_vg_uuid = self._exist_in_subgroups(voll.node.uuid, tvgroup)
if containing_vg_uuid is not None:
found = True
if containing_vg_uuid != _vgroup.uuid and \
self._exist_in_subgroups(containing_vg_uuid, _vgroup) is None:
self._add_io_link(voll, _vgroup)
break
if found is False:
for tvk in _volumes.keys():
if tvk == voll.node.uuid:
self._add_io_link(voll, _vgroup)
break
# elif isinstance(svg, Volume):
# for vml in svg.vm_list:
# found = False
# for _, tvgroup in _vgroups.iteritems():
# containing_vg_uuid = self._exist_in_subgroups(vml.node.uuid, tvgroup)
# if containing_vg_uuid is not None:
# found = True
# if containing_vg_uuid != _vgroup.uuid and \
# self._exist_in_subgroups(containing_vg_uuid, _vgroup) is None:
# self._add_io_link(vml, _vgroup)
# break
# if found is False:
# for tvk in _vms.keys():
# if tvk == vml.node.uuid:
# self._add_io_link(vml, _vgroup)
# break
elif isinstance(svg, VGroup):
self._set_vgroup_links(svg, _vgroups, _vms, _volumes)
for svgl in svg.vgroup_list: # svgl is a link to VM or Volume
if self._exist_in_subgroups(svgl.node.uuid, _vgroup) is None:
self._add_nw_link(svgl, _vgroup)
self._add_io_link(svgl, _vgroup)
def _add_nw_link(self, _link, _vgroup):
_vgroup.nw_bandwidth += _link.nw_bandwidth
vgroup_link = self._get_vgroup_link(_link, _vgroup.vgroup_list)
if vgroup_link is not None:
vgroup_link.nw_bandwidth += _link.nw_bandwidth
else:
link = VGroupLink(_link.node) # _link.node is VM
link.nw_bandwidth = _link.nw_bandwidth
_vgroup.vgroup_list.append(link)
def _add_io_link(self, _link, _vgroup):
_vgroup.io_bandwidth += _link.io_bandwidth
vgroup_link = self._get_vgroup_link(_link, _vgroup.vgroup_list)
if vgroup_link is not None:
vgroup_link.io_bandwidth += _link.io_bandwidth
else:
link = VGroupLink(_link.node)
link.io_bandwidth = _link.io_bandwidth
_vgroup.vgroup_list.append(link)
def _get_vgroup_link(self, _link, _vgroup_link_list):
vgroup_link = None
for vgl in _vgroup_link_list:
if vgl.node.uuid == _link.node.uuid:
vgroup_link = vgl
break
return vgroup_link
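The recursive membership test in `_exist_in_subgroups` above is the backbone of the group-merging logic: it returns the uuid of the group that directly contains a resource, searching transitively through nested groups. A minimal standalone sketch (with a hypothetical `Node` class and made-up group names; the real code walks `VGroup.subvgroups` keyed by uuid and only recurses into `VGroup` instances):

```python
# Standalone sketch of the recursive subgroup lookup. Node, the group
# names, and recursing into every child are simplifications of the
# original, which distinguishes VM/Volume leaves from VGroup nodes.

class Node(object):
    def __init__(self, uuid, subgroups=None):
        self.uuid = uuid
        self.subvgroups = subgroups or {}

def find_containing_group(target_uuid, group):
    """Return the uuid of the group directly containing target_uuid,
    searching nested groups depth-first; None if absent."""
    for uuid, child in group.subvgroups.items():
        if uuid == target_uuid:
            return group.uuid
        found = find_containing_group(target_uuid, child)
        if found is not None:
            return found
    return None

vm = Node("vm-1")
inner = Node("aff-inner", {"vm-1": vm})
outer = Node("aff-outer", {"aff-inner": inner})
print(find_containing_group("vm-1", outer))   # aff-inner
print(find_containing_group("vm-9", outer))   # None
```

Because the lookup reports the *direct* container, callers such as `_set_vgroup_links` can tell whether two resources share an enclosing group before adding a cross-group bandwidth link.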

View File

@ -0,0 +1,62 @@
#!/bin/python
# Modified: Feb. 9, 2016
class App(object):
def __init__(self, _app_id, _app_name, _action):
self.app_id = _app_id
self.app_name = _app_name
self.request_type = _action # create, update, or ping
self.timestamp_scheduled = 0
self.vgroups = {}
self.vms = {}
self.volumes = {}
self.status = 'requested' # Moved to "scheduled" (and then "placed")
def add_vm(self, _vm, _host_name):
self.vms[_vm.uuid] = _vm
self.vms[_vm.uuid].status = "scheduled"
self.vms[_vm.uuid].host = _host_name
def add_volume(self, _vol, _host_name):
self.volumes[_vol.uuid] = _vol
self.volumes[_vol.uuid].status = "scheduled"
self.volumes[_vol.uuid].storage_host = _host_name
def add_vgroup(self, _vg, _host_name):
self.vgroups[_vg.uuid] = _vg
self.vgroups[_vg.uuid].status = "scheduled"
self.vgroups[_vg.uuid].host = _host_name
def get_json_info(self):
vms = {}
for vmk, vm in self.vms.iteritems():
vms[vmk] = vm.get_json_info()
vols = {}
for volk, vol in self.volumes.iteritems():
vols[volk] = vol.get_json_info()
vgs = {}
for vgk, vg in self.vgroups.iteritems():
vgs[vgk] = vg.get_json_info()
return {'action': self.request_type,
'timestamp': self.timestamp_scheduled,
'stack_id': self.app_id,
'name': self.app_name,
'VMs': vms,
'Volumes': vols,
'VGroups': vgs}
def log_in_info(self):
return {'action': self.request_type,
'timestamp': self.timestamp_scheduled,
'stack_id': self.app_id,
'name': self.app_name}

View File

@ -0,0 +1,17 @@
# Version 2.0.2: Feb. 9, 2016
# Set database keyspace
db_keyspace=valet_test
db_request_table=placement_requests
db_response_table=placement_results
db_event_table=oslo_messages
db_resource_table=resource_status
db_resource_index_table=resource_log_index
db_app_index_table=app_log_index
db_app_table=app
db_uuid_table=uuid_map
#replication_factor=3

View File

@ -0,0 +1,73 @@
#!/bin/python
#################################################################################################################
# Author: Gueyoung Jung
# Contact: gjung@research.att.com
# Version 2.0.2: Feb. 9, 2016
#
# Functions
#
#################################################################################################################
import sys
class Config(object):
def __init__(self):
self.mode = None
self.db_keyspace = None
self.db_request_table = None
self.db_response_table = None
self.db_event_table = None
self.db_resource_table = None
self.db_app_table = None
self.db_resource_index_table = None
self.db_app_index_table = None
self.db_uuid_table = None
def configure(self):
try:
f = open("./client.cfg", "r")
line = f.readline()
while line:
if line.startswith("#") or line.startswith(" ") or line == "\n":
line = f.readline()
continue
(rk, v) = line.split("=")
k = rk.strip()
if k == "db_keyspace":
self.db_keyspace = v.strip()
elif k == "db_request_table":
self.db_request_table = v.strip()
elif k == "db_response_table":
self.db_response_table = v.strip()
elif k == "db_event_table":
self.db_event_table = v.strip()
elif k == "db_resource_table":
self.db_resource_table = v.strip()
elif k == "db_app_table":
self.db_app_table = v.strip()
elif k == "db_resource_index_table":
self.db_resource_index_table = v.strip()
elif k == "db_app_index_table":
self.db_app_index_table = v.strip()
elif k == "db_uuid_table":
self.db_uuid_table = v.strip()
line = f.readline()
f.close()
return "success"
except IOError as e:
return "I/O error({}): {}".format(e.errno, e.strerror)
except Exception:
return "Unexpected error: {}".format(sys.exc_info()[0])
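`Config.configure` reads `./client.cfg` line by line, skipping comments and blanks and splitting each remaining line on `=`. The same parsing can be sketched compactly against an in-memory string (the sample keys mirror `client.cfg`; collecting into a dict instead of attributes is a simplification):

```python
# Compact sketch of the key=value parsing done in Config.configure,
# operating on a string rather than the ./client.cfg file.

def parse_config(text):
    """Parse simple key=value lines, skipping comments and blank lines."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

sample = """
# Set database keyspace
db_keyspace=valet_test
db_request_table=placement_requests
#replication_factor=3
"""
conf = parse_config(sample)
print(conf["db_keyspace"])   # valet_test
```

Note the commented-out `#replication_factor=3` is skipped, matching how the original treats `#`-prefixed lines.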

View File

@ -0,0 +1,150 @@
#!/bin/python
# Modified: Feb. 9, 2016
import json
class Event(object):
def __init__(self, _id):
self.event_id = _id
self.exchange = None
self.method = None
self.args = {}
# For object_action event
self.change_list = []
self.change_data = {}
self.object_name = None
# For object_action and Instance object
self.vm_state = None
# For object_action and ComputeNode object
self.status = "enabled"
self.vcpus_used = -1
self.free_mem = -1
self.free_local_disk = -1
self.disk_available_least = -1
self.numa_cell_list = []
# Common between Instance and ComputeNode
self.host = None
self.vcpus = -1
self.mem = -1
self.local_disk = 0
# For build_and_run_instance
self.heat_resource_name = None
self.heat_resource_uuid = None
self.heat_root_stack_id = None
self.heat_stack_name = None
# Common data
self.uuid = None
def set_data(self):
if self.method == 'object_action':
self.change_list = self.args['objinst']['nova_object.changes']
self.change_data = self.args['objinst']['nova_object.data']
self.object_name = self.args['objinst']['nova_object.name']
if self.object_name == 'Instance':
if 'uuid' in self.change_data.keys():
self.uuid = self.change_data['uuid']
if 'host' in self.change_data.keys():
self.host = self.change_data['host']
if 'vcpus' in self.change_data.keys():
self.vcpus = float(self.change_data['vcpus'])
if 'memory_mb' in self.change_data.keys():
self.mem = float(self.change_data['memory_mb'])
root = -1
ephemeral = -1
swap = -1
if 'root_gb' in self.change_data.keys():
root = float(self.change_data['root_gb'])
if 'ephemeral_gb' in self.change_data.keys():
ephemeral = float(self.change_data['ephemeral_gb'])
if 'flavor' in self.change_data.keys():
flavor = self.change_data['flavor']
if 'nova_object.data' in flavor.keys():
flavor_data = flavor['nova_object.data']
if 'swap' in flavor_data.keys():
swap = float(flavor_data['swap'])
if root != -1:
self.local_disk += root
if ephemeral != -1:
self.local_disk += ephemeral
if swap != -1:
self.local_disk += swap / float(1024)
self.vm_state = self.change_data['vm_state']
elif self.object_name == 'ComputeNode':
if 'host' in self.change_data.keys():
self.host = self.change_data['host']
if 'deleted' in self.change_list and 'deleted' in self.change_data.keys():
if self.change_data['deleted'] == "true" or self.change_data['deleted'] is True:
self.status = "disabled"
if 'vcpus' in self.change_list and 'vcpus' in self.change_data.keys():
self.vcpus = self.change_data['vcpus']
if 'vcpus_used' in self.change_list and 'vcpus_used' in self.change_data.keys():
self.vcpus_used = self.change_data['vcpus_used']
if 'memory_mb' in self.change_list and 'memory_mb' in self.change_data.keys():
self.mem = self.change_data['memory_mb']
if 'free_ram_mb' in self.change_list and 'free_ram_mb' in self.change_data.keys():
self.free_mem = self.change_data['free_ram_mb']
if 'local_gb' in self.change_list and 'local_gb' in self.change_data.keys():
self.local_disk = self.change_data['local_gb']
if 'free_disk_gb' in self.change_list and 'free_disk_gb' in self.change_data.keys():
self.free_local_disk = self.change_data['free_disk_gb']
if 'disk_available_least' in self.change_list and \
'disk_available_least' in self.change_data.keys():
self.disk_available_least = self.change_data['disk_available_least']
if 'numa_topology' in self.change_list and 'numa_topology' in self.change_data.keys():
str_numa_topology = self.change_data['numa_topology']
try:
numa_topology = json.loads(str_numa_topology)
# print json.dumps(numa_topology, indent=4)
if 'nova_object.data' in numa_topology.keys():
if 'cells' in numa_topology['nova_object.data']:
for cell in numa_topology['nova_object.data']['cells']:
self.numa_cell_list.append(cell)
except (ValueError, KeyError, TypeError):
pass
# print "error while parsing numa_topology"
elif self.method == 'build_and_run_instance':
if 'scheduler_hints' in self.args['filter_properties'].keys():
scheduler_hints = self.args['filter_properties']['scheduler_hints']
if 'heat_resource_name' in scheduler_hints.keys():
self.heat_resource_name = scheduler_hints['heat_resource_name']
if 'heat_resource_uuid' in scheduler_hints.keys():
self.heat_resource_uuid = scheduler_hints['heat_resource_uuid']
if 'heat_root_stack_id' in scheduler_hints.keys():
self.heat_root_stack_id = scheduler_hints['heat_root_stack_id']
if 'heat_stack_name' in scheduler_hints.keys():
self.heat_stack_name = scheduler_hints['heat_stack_name']
if 'uuid' in self.args['instance']['nova_object.data'].keys():
self.uuid = self.args['instance']['nova_object.data']['uuid']
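The local-disk sizing in `Event.set_data` adds the instance's `root_gb` and `ephemeral_gb` (both in GB) to the flavor's `swap`, which Nova reports in MB and therefore divides by 1024. A hedged sketch of just that arithmetic, using a made-up `change_data` payload shaped like the nova_object data above:

```python
# Sketch of the local-disk computation in Event.set_data: root and
# ephemeral are GB; swap comes from the flavor in MB and is converted.

def local_disk_gb(change_data):
    """Sum root, ephemeral, and swap (MB -> GB) disk for an instance."""
    disk = 0.0
    if "root_gb" in change_data:
        disk += float(change_data["root_gb"])
    if "ephemeral_gb" in change_data:
        disk += float(change_data["ephemeral_gb"])
    flavor = change_data.get("flavor", {}).get("nova_object.data", {})
    if "swap" in flavor:
        disk += float(flavor["swap"]) / 1024.0
    return disk

sample = {"root_gb": 20, "ephemeral_gb": 10,
          "flavor": {"nova_object.data": {"swap": 2048}}}
print(local_disk_gb(sample))   # 32.0
```

Unlike the original, which uses -1 sentinels for missing fields, this sketch simply skips absent keys; the resulting total is the same when all three fields are present.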

View File

@ -0,0 +1,702 @@
#!/bin/python
# Modified: Sep. 27, 2016
import json
import operator
from valet.api.db.models.music import Music
from valet.engine.optimizer.db_connect.event import Event
class MusicHandler(object):
def __init__(self, _config, _logger):
self.config = _config
self.logger = _logger
self.music = None
self.logger.debug("MusicHandler.__init__: mode = " + self.config.mode)
if self.config.mode.startswith("sim"):
self.music = Music()
elif self.config.mode.startswith("live"):
self.music = Music(hosts=self.config.db_hosts, replication_factor=self.config.replication_factor)
def init_db(self):
self.logger.info("MusicHandler.init_db: create keyspace")
try:
self.music.create_keyspace(self.config.db_keyspace)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
self.logger.info("MusicHandler.init_db: create table")
schema = {
'stack_id': 'text',
'request': 'text',
'PRIMARY KEY': '(stack_id)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_request_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'stack_id': 'text',
'placement': 'text',
'PRIMARY KEY': '(stack_id)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_response_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'timestamp': 'text',
'exchange': 'text',
'method': 'text',
'args': 'text',
'PRIMARY KEY': '(timestamp)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_event_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'site_name': 'text',
'resource': 'text',
'PRIMARY KEY': '(site_name)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_resource_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'stack_id': 'text',
'app': 'text',
'PRIMARY KEY': '(stack_id)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_app_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'site_name': 'text',
'app_log_index': 'text',
'PRIMARY KEY': '(site_name)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_app_index_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'site_name': 'text',
'resource_log_index': 'text',
'PRIMARY KEY': '(site_name)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_resource_index_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
schema = {
'uuid': 'text',
'h_uuid': 'text',
's_uuid': 'text',
'PRIMARY KEY': '(uuid)'
}
try:
self.music.create_table(self.config.db_keyspace, self.config.db_uuid_table, schema)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
return True
def get_events(self):
event_list = []
events = {}
try:
events = self.music.read_all_rows(self.config.db_keyspace, self.config.db_event_table)
except Exception as e:
self.logger.error("MUSIC error while reading events: " + str(e))
return None
if len(events) > 0:
for _, row in events.iteritems():
event_id = row['timestamp']
exchange = row['exchange']
method = row['method']
args_data = row['args']
self.logger.debug("MusicHandler.get_events: event (" + event_id + ") is entered")
if exchange != "nova":
if self.delete_event(event_id) is False:
return None
self.logger.debug("MusicHandler.get_events: event exchange (" + exchange + ") is not supported")
continue
if method != 'object_action' and method != 'build_and_run_instance':
if self.delete_event(event_id) is False:
return None
self.logger.debug("MusicHandler.get_events: event method (" + method + ") is not considered")
continue
if len(args_data) == 0:
if self.delete_event(event_id) is False:
return None
self.logger.debug("MusicHandler.get_events: event does not have args")
continue
try:
args = json.loads(args_data)
except (ValueError, KeyError, TypeError):
self.logger.warn("MusicHandler.get_events: error while decoding JSON for event = " + method + ":" + event_id)
continue
if method == 'object_action':
if 'objinst' in args.keys():
objinst = args['objinst']
if 'nova_object.name' in objinst.keys():
nova_object_name = objinst['nova_object.name']
if nova_object_name == 'Instance':
if 'nova_object.changes' in objinst.keys() and \
'nova_object.data' in objinst.keys():
change_list = objinst['nova_object.changes']
change_data = objinst['nova_object.data']
if 'vm_state' in change_list and \
'vm_state' in change_data.keys():
if change_data['vm_state'] == 'deleted' or \
change_data['vm_state'] == 'active':
e = Event(event_id)
e.exchange = exchange
e.method = method
e.args = args
event_list.append(e)
else:
if self.delete_event(event_id) is False:
return None
else:
if self.delete_event(event_id) is False:
return None
else:
if self.delete_event(event_id) is False:
return None
elif nova_object_name == 'ComputeNode':
if 'nova_object.changes' in objinst.keys() and \
'nova_object.data' in objinst.keys():
e = Event(event_id)
e.exchange = exchange
e.method = method
e.args = args
event_list.append(e)
else:
if self.delete_event(event_id) is False:
return None
else:
if self.delete_event(event_id) is False:
return None
else:
if self.delete_event(event_id) is False:
return None
else:
if self.delete_event(event_id) is False:
return None
elif method == 'build_and_run_instance':
if 'filter_properties' not in args.keys():
if self.delete_event(event_id) is False:
return None
continue
'''
else:
filter_properties = args['filter_properties']
if 'scheduler_hints' not in filter_properties.keys():
self.delete_event(event_id)
continue
'''
if 'instance' not in args.keys():
if self.delete_event(event_id) is False:
return None
continue
else:
instance = args['instance']
if 'nova_object.data' not in instance.keys():
if self.delete_event(event_id) is False:
return None
continue
e = Event(event_id)
e.exchange = exchange
e.method = method
e.args = args
event_list.append(e)
error_event_list = []
for e in event_list:
e.set_data()
self.logger.debug("MusicHandler.get_events: event (" + e.event_id + ") is parsed")
if e.method == "object_action":
if e.object_name == 'Instance':
if e.uuid is None or e.uuid == "none" or \
e.host is None or e.host == "none" or \
e.vcpus == -1 or e.mem == -1:
error_event_list.append(e)
self.logger.warn("MusicHandler.get_events: data missing in instance object event")
elif e.object_name == 'ComputeNode':
if e.host is None or e.host == "none":
error_event_list.append(e)
self.logger.warn("MusicHandler.get_events: data missing in compute object event")
elif e.method == "build_and_run_instance":
'''
if e.heat_resource_name == None or e.heat_resource_name == "none" or \
e.heat_resource_uuid == None or e.heat_resource_uuid == "none" or \
e.heat_root_stack_id == None or e.heat_root_stack_id == "none" or \
e.heat_stack_name == None or e.heat_stack_name == "none" or \
e.uuid == None or e.uuid == "none":
'''
if e.uuid is None or e.uuid == "none":
error_event_list.append(e)
self.logger.warn("MusicHandler.get_events: data missing in build event")
if len(error_event_list) > 0:
event_list[:] = [e for e in event_list if e not in error_event_list]
if len(event_list) > 0:
event_list.sort(key=operator.attrgetter('event_id'))
return event_list
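The nested `object_action` filtering above reduces to a small predicate: an Instance event is kept only when its `vm_state` changed to `active` or `deleted`. A condensed standalone sketch of that rule (a hypothetical helper for illustration, not part of MusicHandler):

```python
# Hypothetical condensed restatement of the Instance-event filter in
# get_events() above: keep an 'object_action' event only when the
# vm_state field changed and its new value is 'active' or 'deleted'.
def keep_instance_event(args):
    objinst = args.get('objinst', {})
    if objinst.get('nova_object.name') != 'Instance':
        return False
    changes = objinst.get('nova_object.changes', [])
    data = objinst.get('nova_object.data', {})
    if 'vm_state' not in changes or 'vm_state' not in data:
        return False
    return data['vm_state'] in ('deleted', 'active')
```

Events that fail this predicate are deleted from the event table and skipped, which is what the cascade of `else: self.delete_event(...)` branches implements.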
def delete_event(self, _event_id):
try:
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_event_table,
'timestamp', _event_id)
except Exception as e:
self.logger.error("MUSIC error while deleting event: " + str(e))
return False
return True
def get_uuid(self, _uuid):
h_uuid = "none"
s_uuid = "none"
row = {}
try:
row = self.music.read_row(self.config.db_keyspace, self.config.db_uuid_table, 'uuid', _uuid)
except Exception as e:
self.logger.error("MUSIC error while reading uuid: " + str(e))
return None
if len(row) > 0:
h_uuid = row[row.keys()[0]]['h_uuid']
s_uuid = row[row.keys()[0]]['s_uuid']
self.logger.info("MusicHandler.get_uuid: get heat uuid (" + h_uuid + ") for uuid = " + _uuid)
else:
self.logger.debug("MusicHandler.get_uuid: heat uuid not found")
return h_uuid, s_uuid
def put_uuid(self, _e):
heat_resource_uuid = "none"
heat_root_stack_id = "none"
if _e.heat_resource_uuid is not None and _e.heat_resource_uuid != "none":
heat_resource_uuid = _e.heat_resource_uuid
if _e.heat_root_stack_id is not None and _e.heat_root_stack_id != "none":
heat_root_stack_id = _e.heat_root_stack_id
data = {
'uuid': _e.uuid,
'h_uuid': heat_resource_uuid,
's_uuid': heat_root_stack_id
}
try:
self.music.create_row(self.config.db_keyspace, self.config.db_uuid_table, data)
except Exception as e:
self.logger.error("MUSIC error while inserting uuid: " + str(e))
return False
self.logger.info("MusicHandler.put_uuid: uuid (" + _e.uuid + ") added")
'''
self.delete_event(_e.event_id)
self.logger.info("db: build event (" + _e.event_id + ") deleted")
'''
return True
def delete_uuid(self, _k):
try:
self.music.delete_row_eventually(self.config.db_keyspace, self.config.db_uuid_table, 'uuid', _k)
except Exception as e:
self.logger.error("MUSIC error while deleting uuid: " + str(e))
return False
return True
def get_requests(self):
request_list = []
requests = {}
try:
requests = self.music.read_all_rows(self.config.db_keyspace, self.config.db_request_table)
except Exception as e:
self.logger.error("MUSIC error while reading requests: " + str(e))
return None
if len(requests) > 0:
self.logger.info("MusicHandler.get_requests: placement request arrived")
for _, row in requests.iteritems():
self.logger.info(" request_id = " + row['stack_id'])
r_list = json.loads(row['request'])
for r in r_list:
request_list.append(r)
return request_list
def put_result(self, _result):
for appk, app_placement in _result.iteritems():
data = {
'stack_id': appk,
'placement': json.dumps(app_placement)
}
try:
self.music.create_row(self.config.db_keyspace, self.config.db_response_table, data)
except Exception as e:
self.logger.error("MUSIC error while putting placement result: " + str(e))
return False
self.logger.info("MusicHandler.put_result: " + appk + " placement result added")
for appk in _result.keys():
try:
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_request_table,
'stack_id', appk)
except Exception as e:
self.logger.error("MUSIC error while deleting handled request: " + str(e))
return False
self.logger.info("MusicHandler.put_result: " + appk + " placement request deleted")
return True
def get_resource_status(self, _k):
json_resource = {}
row = {}
try:
row = self.music.read_row(self.config.db_keyspace, self.config.db_resource_table, 'site_name', _k, self.logger)
except Exception as e:
self.logger.error("MUSIC error while reading resource status: " + str(e))
return None
if len(row) > 0:
str_resource = row[row.keys()[0]]['resource']
json_resource = json.loads(str_resource)
self.logger.info("MusicHandler.get_resource_status: get resource status")
return json_resource
def update_resource_status(self, _k, _status):
row = {}
try:
row = self.music.read_row(self.config.db_keyspace, self.config.db_resource_table, 'site_name', _k)
except Exception as e:
self.logger.error("MUSIC error while reading resource status: " + str(e))
return False
json_resource = {}
if len(row) > 0:
str_resource = row[row.keys()[0]]['resource']
json_resource = json.loads(str_resource)
if 'flavors' in _status.keys():
flavors = _status['flavors']
for fk, f in flavors.iteritems():
if fk in json_resource['flavors'].keys():
del json_resource['flavors'][fk]
json_resource['flavors'][fk] = f
if 'logical_groups' in _status.keys():
logical_groups = _status['logical_groups']
for lgk, lg in logical_groups.iteritems():
if lgk in json_resource['logical_groups'].keys():
del json_resource['logical_groups'][lgk]
json_resource['logical_groups'][lgk] = lg
if 'storages' in _status.keys():
storages = _status['storages']
for stk, st in storages.iteritems():
if stk in json_resource['storages'].keys():
del json_resource['storages'][stk]
json_resource['storages'][stk] = st
if 'switches' in _status.keys():
switches = _status['switches']
for sk, s in switches.iteritems():
if sk in json_resource['switches'].keys():
del json_resource['switches'][sk]
json_resource['switches'][sk] = s
if 'hosts' in _status.keys():
hosts = _status['hosts']
for hk, h in hosts.iteritems():
if hk in json_resource['hosts'].keys():
del json_resource['hosts'][hk]
json_resource['hosts'][hk] = h
if 'host_groups' in _status.keys():
host_groups = _status['host_groups']
for hgk, hg in host_groups.iteritems():
if hgk in json_resource['host_groups'].keys():
del json_resource['host_groups'][hgk]
json_resource['host_groups'][hgk] = hg
if 'datacenter' in _status.keys():
datacenter = _status['datacenter']
del json_resource['datacenter']
json_resource['datacenter'] = datacenter
json_resource['timestamp'] = _status['timestamp']
try:
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_resource_table,
'site_name', _k)
except Exception as e:
self.logger.error("MUSIC error while deleting resource status: " + str(e))
return False
else:
json_resource = _status
data = {
'site_name': _k,
'resource': json.dumps(json_resource)
}
try:
self.music.create_row(self.config.db_keyspace, self.config.db_resource_table, data)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
self.logger.info("MusicHandler.update_resource_status: resource status updated")
return True
def update_resource_log_index(self, _k, _index):
data = {
'site_name': _k,
'resource_log_index': str(_index)
}
try:
self.music.update_row_eventually(self.config.db_keyspace,
self.config.db_resource_index_table,
'site_name', _k, data)
except Exception as e:
self.logger.error("MUSIC error while updating resource log index: " + str(e))
return False
self.logger.info("MusicHandler.update_resource_log_index: resource log index updated")
return True
def update_app_log_index(self, _k, _index):
data = {
'site_name': _k,
'app_log_index': str(_index)
}
try:
self.music.update_row_eventually(self.config.db_keyspace,
self.config.db_app_index_table,
'site_name', _k, data)
except Exception as e:
self.logger.error("MUSIC error while updating app log index: " + str(e))
return False
self.logger.info("MusicHandler.update_app_log_index: app log index updated")
return True
def add_app(self, _k, _app_data):
try:
self.music.delete_row_eventually(self.config.db_keyspace, self.config.db_app_table, 'stack_id', _k)
except Exception as e:
self.logger.error("MUSIC error while deleting app: " + str(e))
return False
self.logger.info("MusicHandler.add_app: app deleted")
if _app_data is not None:
data = {
'stack_id': _k,
'app': json.dumps(_app_data)
}
try:
self.music.create_row(self.config.db_keyspace, self.config.db_app_table, data)
except Exception as e:
self.logger.error("MUSIC error while inserting app: " + str(e))
return False
self.logger.info("MusicHandler.add_app: app added")
return True
def get_app_info(self, _s_uuid):
json_app = {}
row = {}
try:
row = self.music.read_row(self.config.db_keyspace, self.config.db_app_table, 'stack_id', _s_uuid)
except Exception as e:
self.logger.error("MUSIC error while reading app info: " + str(e))
return None
if len(row) > 0:
str_app = row[row.keys()[0]]['app']
json_app = json.loads(str_app)
return json_app
# TODO(GY): get all other VMs related to this VM
def get_vm_info(self, _s_uuid, _h_uuid, _host):
updated = False
json_app = {}
vm_info = {}
row = {}
try:
row = self.music.read_row(self.config.db_keyspace, self.config.db_app_table, 'stack_id', _s_uuid)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return None
if len(row) > 0:
str_app = row[row.keys()[0]]['app']
json_app = json.loads(str_app)
vms = json_app["VMs"]
for vmk, vm in vms.iteritems():
if vmk == _h_uuid:
if vm["status"] != "deleted":
if vm["host"] != _host:
vm["planned_host"] = vm["host"]
vm["host"] = _host
self.logger.warn("db: conflicting placement decision from Ostro")
# TODO(GY): affinity, diversity, exclusivity validation check
updated = True
else:
self.logger.debug("db: placement as expected")
else:
vm["status"] = "scheduled"
self.logger.warn("db: vm was deleted")
updated = True
vm_info = vm
break
else:
self.logger.error("MusicHandler.get_vm_info: vm is missing from stack")
else:
self.logger.warn("MusicHandler.get_vm_info: stack not found for update = " + _s_uuid)
if updated is True:
if self.add_app(_s_uuid, json_app) is False:
return None
return vm_info
def update_vm_info(self, _s_uuid, _h_uuid):
updated = False
json_app = {}
row = {}
try:
row = self.music.read_row(self.config.db_keyspace, self.config.db_app_table, 'stack_id', _s_uuid)
except Exception as e:
self.logger.error("MUSIC error: " + str(e))
return False
if len(row) > 0:
str_app = row[row.keys()[0]]['app']
json_app = json.loads(str_app)
vms = json_app["VMs"]
for vmk, vm in vms.iteritems():
if vmk == _h_uuid:
if vm["status"] != "deleted":
vm["status"] = "deleted"
self.logger.debug("db: marked as deleted")
updated = True
else:
self.logger.warn("db: vm was already deleted")
break
else:
self.logger.error("MusicHandler.update_vm_info: vm is missing from stack")
else:
self.logger.warn("MusicHandler.update_vm_info: stack not found for update = " + _s_uuid)
if updated is True:
if self.add_app(_s_uuid, json_app) is False:
return False
return True
# Unit test
'''
if __name__ == '__main__':
config = Config()
config_status = config.configure()
if config_status != "success":
print "Error while configuring Client: " + config_status
sys.exit(2)
mh = MusicHandler(config, None)
event_list = mh.get_events()
for e in event_list:
print "event id = ", e.event_id
print "host = ", e.host
print "least disk = ", e.disk_available_least
print "disk = ", e.local_disk
for nc in e.numa_cell_list:
print "numa cell = ", nc
'''

@@ -0,0 +1,554 @@
#!/usr/bin/env python
# Modified: Sep. 27, 2016
from valet.engine.optimizer.app_manager.app_topology_base import VGroup, VM, LEVELS
from valet.engine.optimizer.ostro.openstack_filters import AggregateInstanceExtraSpecsFilter
from valet.engine.optimizer.ostro.openstack_filters import AvailabilityZoneFilter
from valet.engine.optimizer.ostro.openstack_filters import CoreFilter
from valet.engine.optimizer.ostro.openstack_filters import DiskFilter
from valet.engine.optimizer.ostro.openstack_filters import RamFilter
class ConstraintSolver(object):
def __init__(self, _logger):
self.logger = _logger
self.openstack_AZ = AvailabilityZoneFilter(self.logger)
self.openstack_AIES = AggregateInstanceExtraSpecsFilter(self.logger)
self.openstack_R = RamFilter(self.logger)
self.openstack_C = CoreFilter(self.logger)
self.openstack_D = DiskFilter(self.logger)
self.status = "success"
def compute_candidate_list(self, _level, _n, _node_placements, _avail_resources, _avail_logical_groups):
candidate_list = []
''' when replanning '''
if _n.node.host is not None and len(_n.node.host) > 0:
self.logger.debug("ConstraintSolver: reconsider with given candidates")
for hk in _n.node.host:
for ark, ar in _avail_resources.iteritems():
if hk == ark:
candidate_list.append(ar)
else:
for _, r in _avail_resources.iteritems():
candidate_list.append(r)
if len(candidate_list) == 0:
self.status = "no candidate for node = " + _n.node.name
self.logger.warn("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: num of candidates = " + str(len(candidate_list)))
''' availability zone constraint '''
if isinstance(_n.node, VGroup) or isinstance(_n.node, VM):
if (isinstance(_n.node, VM) and _n.node.availability_zone is not None) or \
(isinstance(_n.node, VGroup) and len(_n.node.availability_zone_list) > 0):
self._constrain_availability_zone(_level, _n, candidate_list)
if len(candidate_list) == 0:
self.status = "violate availability zone constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done availability_zone constraint")
''' host aggregate constraint '''
if isinstance(_n.node, VGroup) or isinstance(_n.node, VM):
if len(_n.node.extra_specs_list) > 0:
self._constrain_host_aggregates(_level, _n, candidate_list)
if len(candidate_list) == 0:
self.status = "violate host aggregate constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done host_aggregate constraint")
''' cpu capacity constraint '''
if isinstance(_n.node, VGroup) or isinstance(_n.node, VM):
self._constrain_cpu_capacity(_level, _n, candidate_list)
if len(candidate_list) == 0:
self.status = "violate cpu capacity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done cpu capacity constraint")
''' memory capacity constraint '''
if isinstance(_n.node, VGroup) or isinstance(_n.node, VM):
self._constrain_mem_capacity(_level, _n, candidate_list)
if len(candidate_list) == 0:
self.status = "violate memory capacity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done memory capacity constraint")
''' local disk capacity constraint '''
if isinstance(_n.node, VGroup) or isinstance(_n.node, VM):
self._constrain_local_disk_capacity(_level, _n, candidate_list)
if len(candidate_list) == 0:
self.status = "violate local disk capacity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done local disk capacity constraint")
''' network bandwidth constraint '''
self._constrain_nw_bandwidth_capacity(_level, _n, _node_placements, candidate_list)
if len(candidate_list) == 0:
self.status = "violate nw bandwidth capacity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done bandwidth capacity constraint")
''' diversity constraint '''
if len(_n.node.diversity_groups) > 0:
for _, diversity_id in _n.node.diversity_groups.iteritems():
if diversity_id.split(":")[0] == _level:
if diversity_id in _avail_logical_groups.keys():
self._constrain_diversity_with_others(_level, diversity_id, candidate_list)
if len(candidate_list) == 0:
break
if len(candidate_list) == 0:
self.status = "violate diversity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self._constrain_diversity(_level, _n, _node_placements, candidate_list)
if len(candidate_list) == 0:
self.status = "violate diversity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done diversity_group constraint")
''' exclusivity constraint '''
exclusivities = self.get_exclusivities(_n.node.exclusivity_groups, _level)
if len(exclusivities) > 1:
self.status = "violate exclusivity constraint (more than one exclusivity) for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return []
else:
if len(exclusivities) == 1:
exclusivity_id = exclusivities[exclusivities.keys()[0]]
if exclusivity_id.split(":")[0] == _level:
self._constrain_exclusivity(_level, exclusivity_id, candidate_list)
if len(candidate_list) == 0:
self.status = "violate exclusivity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done exclusivity_group constraint")
else:
self._constrain_non_exclusivity(_level, candidate_list)
if len(candidate_list) == 0:
self.status = "violate non-exclusivity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done non-exclusivity_group constraint")
''' affinity constraint '''
affinity_id = _n.get_affinity_id() # level:name, except name == "any"
if affinity_id is not None:
if affinity_id.split(":")[0] == _level:
if affinity_id in _avail_logical_groups.keys():
self._constrain_affinity(_level, affinity_id, candidate_list)
if len(candidate_list) == 0:
self.status = "violate affinity constraint for node = " + _n.node.name
self.logger.error("ConstraintSolver: " + self.status)
return candidate_list
else:
self.logger.debug("ConstraintSolver: done affinity_group constraint")
return candidate_list
'''
constraint modules
'''
def _constrain_affinity(self, _level, _affinity_id, _candidate_list):
conflict_list = []
for r in _candidate_list:
if self.exist_group(_level, _affinity_id, "AFF", r) is False:
if r not in conflict_list:
conflict_list.append(r)
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: not exist affinity in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def _constrain_diversity_with_others(self, _level, _diversity_id, _candidate_list):
conflict_list = []
for r in _candidate_list:
if self.exist_group(_level, _diversity_id, "DIV", r) is True:
if r not in conflict_list:
conflict_list.append(r)
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: conflict diversity in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def exist_group(self, _level, _id, _group_type, _candidate):
match = False
memberships = _candidate.get_memberships(_level)
for lgk, lgr in memberships.iteritems():
if lgr.group_type == _group_type and lgk == _id:
match = True
break
return match
def _constrain_diversity(self, _level, _n, _node_placements, _candidate_list):
conflict_list = []
for r in _candidate_list:
if self.conflict_diversity(_level, _n, _node_placements, r) is True:
if r not in conflict_list:
conflict_list.append(r)
resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: conflict the diversity in resource = " + resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def conflict_diversity(self, _level, _n, _node_placements, _candidate):
conflict = False
for v in _node_placements.keys():
diversity_level = _n.get_common_diversity(v.diversity_groups)
if diversity_level != "ANY" and LEVELS.index(diversity_level) >= LEVELS.index(_level):
if diversity_level == "host":
if _candidate.cluster_name == _node_placements[v].cluster_name and \
_candidate.rack_name == _node_placements[v].rack_name and \
_candidate.host_name == _node_placements[v].host_name:
conflict = True
break
elif diversity_level == "rack":
if _candidate.cluster_name == _node_placements[v].cluster_name and \
_candidate.rack_name == _node_placements[v].rack_name:
conflict = True
break
elif diversity_level == "cluster":
if _candidate.cluster_name == _node_placements[v].cluster_name:
conflict = True
break
return conflict
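The location comparison in `conflict_diversity` above can be restated compactly, assuming the usual `LEVELS = ["host", "rack", "cluster"]` ordering: two placements conflict at a diversity level when they agree on every location component from the cluster down to that level. A hypothetical standalone sketch:

```python
# Hypothetical restatement of the diversity conflict rule above.
# a and b are (cluster, rack, host) location tuples; at a given level,
# a conflict means the locations match down to that level.
def placements_conflict(level, a, b):
    if level == "cluster":
        return a[0] == b[0]          # same cluster
    if level == "rack":
        return a[:2] == b[:2]        # same cluster and rack
    return a == b                    # "host": same cluster, rack, and host
```

This is why host-level diversity is the strictest comparison (all three components) while cluster-level diversity only compares the first.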
def _constrain_non_exclusivity(self, _level, _candidate_list):
conflict_list = []
for r in _candidate_list:
if self.conflict_exclusivity(_level, r) is True:
if r not in conflict_list:
conflict_list.append(r)
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: exclusivity defined in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def conflict_exclusivity(self, _level, _candidate):
conflict = False
memberships = _candidate.get_memberships(_level)
for mk in memberships.keys():
if memberships[mk].group_type == "EX" and mk.split(":")[0] == _level:
conflict = True
return conflict
def get_exclusivities(self, _exclusivity_groups, _level):
exclusivities = {}
for exk, level in _exclusivity_groups.iteritems():
if level.split(":")[0] == _level:
exclusivities[exk] = level
return exclusivities
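`get_exclusivities` is a simple filter over `level:name` group ids; a minimal standalone restatement (hypothetical helper name, for illustration only):

```python
# Hypothetical standalone version of get_exclusivities() above: keep only
# the exclusivity groups whose "level:name" id matches the requested level.
def exclusivities_at_level(groups, level):
    return {k: v for k, v in groups.items() if v.split(":")[0] == level}
```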
def _constrain_exclusivity(self, _level, _exclusivity_id, _candidate_list):
candidate_list = self._get_exclusive_candidates(_level, _exclusivity_id, _candidate_list)
if len(candidate_list) == 0:
candidate_list = self._get_hibernated_candidates(_level, _candidate_list)
_candidate_list[:] = [x for x in _candidate_list if x in candidate_list]
else:
_candidate_list[:] = [x for x in _candidate_list if x in candidate_list]
def _get_exclusive_candidates(self, _level, _exclusivity_id, _candidate_list):
candidate_list = []
for r in _candidate_list:
if self.exist_group(_level, _exclusivity_id, "EX", r) is True:
if r not in candidate_list:
candidate_list.append(r)
else:
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: exclusivity not exist in resource = " + debug_resource_name)
return candidate_list
def _get_hibernated_candidates(self, _level, _candidate_list):
candidate_list = []
for r in _candidate_list:
if self.check_hibernated(_level, r) is True:
if r not in candidate_list:
candidate_list.append(r)
else:
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: exclusivity not allowed in resource = " + debug_resource_name)
return candidate_list
def check_hibernated(self, _level, _candidate):
match = False
num_of_placed_vms = _candidate.get_num_of_placed_vms(_level)
if num_of_placed_vms == 0:
match = True
return match
def _constrain_host_aggregates(self, _level, _n, _candidate_list):
conflict_list = []
for r in _candidate_list:
if self.check_host_aggregates(_level, r, _n.node) is False:
if r not in conflict_list:
conflict_list.append(r)
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: not meet aggregate in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_host_aggregates(self, _level, _candidate, _v):
return self.openstack_AIES.host_passes(_level, _candidate, _v)
def _constrain_availability_zone(self, _level, _n, _candidate_list):
conflict_list = []
for r in _candidate_list:
if self.check_availability_zone(_level, r, _n.node) is False:
if r not in conflict_list:
conflict_list.append(r)
debug_resource_name = r.get_resource_name(_level)
self.logger.debug("ConstraintSolver: not meet az in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_availability_zone(self, _level, _candidate, _v):
return self.openstack_AZ.host_passes(_level, _candidate, _v)
def _constrain_cpu_capacity(self, _level, _n, _candidate_list):
conflict_list = []
for ch in _candidate_list:
if self.check_cpu_capacity(_level, _n.node, ch) is False:
conflict_list.append(ch)
debug_resource_name = ch.get_resource_name(_level)
self.logger.debug("ConstraintSolver: lack of cpu in " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_cpu_capacity(self, _level, _v, _candidate):
return self.openstack_C.host_passes(_level, _candidate, _v)
def _constrain_mem_capacity(self, _level, _n, _candidate_list):
conflict_list = []
for ch in _candidate_list:
if self.check_mem_capacity(_level, _n.node, ch) is False:
conflict_list.append(ch)
debug_resource_name = ch.get_resource_name(_level)
self.logger.debug("ConstraintSolver: lack of mem in " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_mem_capacity(self, _level, _v, _candidate):
return self.openstack_R.host_passes(_level, _candidate, _v)
def _constrain_local_disk_capacity(self, _level, _n, _candidate_list):
conflict_list = []
for ch in _candidate_list:
if self.check_local_disk_capacity(_level, _n.node, ch) is False:
conflict_list.append(ch)
debug_resource_name = ch.get_resource_name(_level)
self.logger.debug("ConstraintSolver: lack of local disk in " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_local_disk_capacity(self, _level, _v, _candidate):
return self.openstack_D.host_passes(_level, _candidate, _v)
def _constrain_storage_capacity(self, _level, _n, _candidate_list):
conflict_list = []
for ch in _candidate_list:
if self.check_storage_availability(_level, _n.node, ch) is False:
conflict_list.append(ch)
debug_resource_name = ch.get_resource_name(_level)
avail_storages = ch.get_avail_storages(_level)
avail_disks = []
volume_classes = []
volume_sizes = []
if isinstance(_n.node, VGroup):
for vck in _n.node.volume_sizes.keys():
volume_classes.append(vck)
volume_sizes.append(_n.node.volume_sizes[vck])
else:
volume_classes.append(_n.node.volume_class)
volume_sizes.append(_n.node.volume_size)
for vc in volume_classes:
for _, s in avail_storages.iteritems():
if vc == "any" or s.storage_class == vc:
avail_disks.append(s.storage_avail_disk)
self.logger.debug("ConstraintSolver: storage constrained in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_storage_availability(self, _level, _v, _ch):
available = False
volume_sizes = []
if isinstance(_v, VGroup):
for vck in _v.volume_sizes.keys():
volume_sizes.append((vck, _v.volume_sizes[vck]))
else:
volume_sizes.append((_v.volume_class, _v.volume_size))
avail_storages = _ch.get_avail_storages(_level)
for vc, vs in volume_sizes:
for _, s in avail_storages.iteritems():
if vc == "any" or s.storage_class == vc:
if s.storage_avail_disk >= vs:
available = True
break
else:
available = False
if available is False:
break
return available
def _constrain_nw_bandwidth_capacity(self, _level, _n, _node_placements, _candidate_list):
conflict_list = []
for cr in _candidate_list:
if self.check_nw_bandwidth_availability(_level, _n, _node_placements, cr) is False:
if cr not in conflict_list:
conflict_list.append(cr)
debug_resource_name = cr.get_resource_name(_level)
self.logger.debug("ConstraintSolver: bw constrained in resource = " + debug_resource_name)
_candidate_list[:] = [c for c in _candidate_list if c not in conflict_list]
def check_nw_bandwidth_availability(self, _level, _n, _node_placements, _cr):
# NOTE: the 3rd entry is for a special node requiring out-going bandwidth from the spine switch
total_req_bandwidths = [0, 0, 0]
link_list = _n.get_all_links()
for vl in link_list:
bandwidth = _n.get_bandwidth_of_link(vl)
placement_level = None
if vl.node in _node_placements.keys(): # vl.node is VM or Volume
placement_level = _node_placements[vl.node].get_common_placement(_cr)
else: # in the open list
placement_level = _n.get_common_diversity(vl.node.diversity_groups)
if placement_level == "ANY":
implicit_diversity = self.get_implicit_diversity(_n.node, link_list, vl.node, _level)
if implicit_diversity[0] is not None:
placement_level = implicit_diversity[1]
self.get_req_bandwidths(_level, placement_level, bandwidth, total_req_bandwidths)
return self._check_nw_bandwidth_availability(_level, total_req_bandwidths, _cr)
# to find any implicit diversity relation caused by the other links of _v
# (i.e., intersection between _v and _target_v)
def get_implicit_diversity(self, _v, _link_list, _target_v, _level):
max_implicit_diversity = (None, 0)
for vl in _link_list:
diversity_level = _v.get_common_diversity(vl.node.diversity_groups)
if diversity_level != "ANY" and LEVELS.index(diversity_level) >= LEVELS.index(_level):
for dk, dl in vl.node.diversity_groups.iteritems():
if LEVELS.index(dl) > LEVELS.index(diversity_level):
if _target_v.uuid != vl.node.uuid:
if dk in _target_v.diversity_groups.keys():
if LEVELS.index(dl) > max_implicit_diversity[1]:
max_implicit_diversity = (dk, dl)
return max_implicit_diversity
def get_req_bandwidths(self, _level, _placement_level, _bandwidth, _total_req_bandwidths):
if _level == "cluster" or _level == "rack":
if _placement_level == "cluster" or _placement_level == "rack":
_total_req_bandwidths[1] += _bandwidth
elif _level == "host":
if _placement_level == "cluster" or _placement_level == "rack":
_total_req_bandwidths[1] += _bandwidth
_total_req_bandwidths[0] += _bandwidth
elif _placement_level == "host":
_total_req_bandwidths[0] += _bandwidth
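The accumulation rules in `get_req_bandwidths` above can be sketched as a standalone function (hypothetical helper; index 0 holds host-level demand and index 1 holds rack/cluster-level demand, mirroring the method's behavior):

```python
# Hypothetical restatement of get_req_bandwidths() above. totals is a
# 3-entry list: [host-level demand, rack/cluster-level demand, spine demand].
def req_bandwidths(level, placement_level, bw, totals):
    if level in ("cluster", "rack"):
        if placement_level in ("cluster", "rack"):
            totals[1] += bw
    elif level == "host":
        if placement_level in ("cluster", "rack"):
            totals[1] += bw      # traffic crosses the rack uplink...
            totals[0] += bw      # ...and also consumes the host link
        elif placement_level == "host":
            totals[0] += bw      # host-local traffic only
```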
def _check_nw_bandwidth_availability(self, _level, _req_bandwidths, _candidate_resource):
available = True
if _level == "cluster":
cluster_avail_bandwidths = []
for _, sr in _candidate_resource.cluster_avail_switches.iteritems():
cluster_avail_bandwidths.append(max(sr.avail_bandwidths))
if max(cluster_avail_bandwidths) < _req_bandwidths[1]:
available = False
elif _level == "rack":
rack_avail_bandwidths = []
for _, sr in _candidate_resource.rack_avail_switches.iteritems():
rack_avail_bandwidths.append(max(sr.avail_bandwidths))
if max(rack_avail_bandwidths) < _req_bandwidths[1]:
available = False
elif _level == "host":
host_avail_bandwidths = []
for _, sr in _candidate_resource.host_avail_switches.iteritems():
host_avail_bandwidths.append(max(sr.avail_bandwidths))
if max(host_avail_bandwidths) < _req_bandwidths[0]:
available = False
rack_avail_bandwidths = []
for _, sr in _candidate_resource.rack_avail_switches.iteritems():
rack_avail_bandwidths.append(max(sr.avail_bandwidths))
avail_bandwidth = min(max(host_avail_bandwidths), max(rack_avail_bandwidths))
if avail_bandwidth < _req_bandwidths[1]:
available = False
return available
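At the host level, the check above bounds usable bandwidth by both the best host switch and the best rack switch. A minimal numeric sketch, with hypothetical switch tables standing in for `host_avail_switches` and `rack_avail_switches`:

```python
# hypothetical switch -> avail_bandwidths tables
host_avail = {'h-sw1': [500, 800], 'h-sw2': [600]}
rack_avail = {'r-sw1': [700]}

best_host = max(max(bws) for bws in host_avail.values())  # 800
best_rack = max(max(bws) for bws in rack_avail.values())  # 700

def host_level_available(req_host, req_rack):
    # the host uplink must carry req_host; traffic leaving the rack is
    # further limited by the narrower of the two best switches
    return best_host >= req_host and min(best_host, best_rack) >= req_rack

print(host_level_available(550, 650))  # True
print(host_level_available(550, 750))  # False: rack switch tops out at 700
```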

@@ -0,0 +1,246 @@
#!/usr/bin/env python
# Modified: Mar. 15, 2016
import openstack_utils
import six
from valet.engine.optimizer.app_manager.app_topology_base import VM
_SCOPE = 'aggregate_instance_extra_specs'
class AggregateInstanceExtraSpecsFilter(object):
"""AggregateInstanceExtraSpecsFilter works with InstanceType records."""
# Aggregate data and instance type do not change within a request
run_filter_once_per_request = True
def __init__(self, _logger):
self.logger = _logger
def host_passes(self, _level, _host, _v):
"""Return True if the host satisfies the instance type's extra specs.
Check that the extra specs associated with the instance type match
the metadata provided by aggregates. If not present return False.
"""
# Collect extra_specs that have not yet been resolved into
# host_aggregates; if none remain, there is nothing to check.
extra_specs_list = []
for extra_specs in _v.extra_specs_list:
if "host_aggregates" not in extra_specs.keys():
extra_specs_list.append(extra_specs)
if len(extra_specs_list) == 0:
return True
metadatas = openstack_utils.aggregate_metadata_get_by_host(_level, _host)
matched_logical_group_list = []
for extra_specs in extra_specs_list:
for lgk, metadata in metadatas.iteritems():
if self._match_metadata(_host.get_resource_name(_level), lgk, extra_specs, metadata) is True:
matched_logical_group_list.append(lgk)
break
else:
return False
for extra_specs in _v.extra_specs_list:
if "host_aggregates" in extra_specs.keys():
extra_specs["host_aggregates"] = matched_logical_group_list
break
else:
host_aggregate_extra_specs = {}
host_aggregate_extra_specs["host_aggregates"] = matched_logical_group_list
_v.extra_specs_list.append(host_aggregate_extra_specs)
return True
def _match_metadata(self, _h_name, _lg_name, _extra_specs, _metadata):
for key, req in six.iteritems(_extra_specs):
# Either not scope format, or aggregate_instance_extra_specs scope
scope = key.split(':', 1)
if len(scope) > 1:
if scope[0] != _SCOPE:
continue
else:
del scope[0]
key = scope[0]
if key == "host_aggregates":
continue
aggregate_vals = _metadata.get(key, None)
if not aggregate_vals:
self.logger.debug("key (" + key + ") does not exist in logical_group (" + _lg_name + ") of host (" + _h_name + ")")
return False
for aggregate_val in aggregate_vals:
if openstack_utils.match(aggregate_val, req):
break
else:
self.logger.debug("key (" + key + ")'s value (" + req + ") does not exist in logical_group (" + _lg_name + ") of host (" + _h_name + ")")
return False
return True
# NOTE: originally, OpenStack used the metadata of host_aggregate
class AvailabilityZoneFilter(object):
""" Filters Hosts by availability zone.
Works with aggregate metadata availability zones, using the key
'availability_zone'
Note: in theory a compute node can be part of multiple availability_zones
"""
# Availability zones do not change within a request
run_filter_once_per_request = True
def __init__(self, _logger):
self.logger = _logger
def host_passes(self, _level, _host, _v):
az_request_list = []
if isinstance(_v, VM):
az_request_list.append(_v.availability_zone)
else:
for az in _v.availability_zone_list:
az_request_list.append(az)
if len(az_request_list) == 0:
return True
# metadatas = openstack_utils.aggregate_metadata_get_by_host(_level, _host, key='availability_zone')
availability_zone_list = openstack_utils.availability_zone_get_by_host(_level, _host)
for azr in az_request_list:
if azr not in availability_zone_list:
self.logger.debug("AZ (" + azr + ") does not exist in host (" + _host.get_resource_name(_level) + ")")
return False
return True
''' if 'availability_zone' in metadata:
hosts_passes = availability_zone in metadata['availability_zone']
host_az = metadata['availability_zone']
else:
hosts_passes = availability_zone == CONF.default_availability_zone
host_az = CONF.default_availability_zone
if not hosts_passes:
LOG.debug("Availability Zone '%(az)s' requested. "
"%(host_state)s has AZs: %(host_az)s",
{'host_state': host_state,
'az': availability_zone,
'host_az': host_az})
return hosts_passes
'''
class RamFilter(object):
def __init__(self, _logger):
self.logger = _logger
def host_passes(self, _level, _host, _v):
"""Only return hosts with sufficient available RAM."""
requested_ram = _v.mem # MB
# free_ram_mb = host_state.free_ram_mb
# total_usable_ram_mb = host_state.total_usable_ram_mb
(total_ram, usable_ram) = _host.get_mem(_level)
# Do not allow an instance to overcommit against itself, only against other instances.
if not total_ram >= requested_ram:
self.logger.debug("requested mem (" + str(requested_ram) + ") more than total mem (" +
str(total_ram) + ") in host (" + _host.get_resource_name(_level) + ")")
return False
# ram_allocation_ratio = self._get_ram_allocation_ratio(host_state, spec_obj)
# memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio
# used_ram_mb = total_usable_ram_mb - free_ram_mb
# usable_ram = memory_mb_limit - used_ram_mb
if not usable_ram >= requested_ram:
self.logger.debug("requested mem (" + str(requested_ram) + ") more than avail mem (" +
str(usable_ram) + ") in host (" + _host.get_resource_name(_level) + ")")
return False
# save oversubscription limit for compute node to test against:
# host_state.limits['memory_mb'] = memory_mb_limit
return True
class CoreFilter(object):
def __init__(self, _logger):
self.logger = _logger
def host_passes(self, _level, _host, _v):
"""Return True if host has sufficient CPU cores."""
(vCPUs, avail_vCPUs) = _host.get_vCPUs(_level)
''' if avail_vcpus == 0:
Fail safe
LOG.warning(_LW("VCPUs not set; assuming CPU collection broken"))
return True
'''
instance_vCPUs = _v.vCPUs
# cpu_allocation_ratio = self._get_cpu_allocation_ratio(host_state, spec_obj)
# vcpus_total = host_state.vcpus_total * cpu_allocation_ratio
# Only provide a VCPU limit to compute if the virt driver is reporting
# an accurate count of installed VCPUs. (XenServer driver does not)
'''
if vcpus_total > 0:
host_state.limits['vcpu'] = vcpus_total
'''
# Do not allow an instance to overcommit against itself, only against other instances.
if instance_vCPUs > vCPUs:
self.logger.debug("requested vCPUs (" + str(instance_vCPUs) + ") more than total vCPUs (" +
str(vCPUs) + ") in host (" + _host.get_resource_name(_level) + ")")
return False
# free_vcpus = vcpus_total - host_state.vcpus_used
if avail_vCPUs < instance_vCPUs:
self.logger.debug("requested vCPUs (" + str(instance_vCPUs) + ") more than avail vCPUs (" +
str(avail_vCPUs) + ") in host (" + _host.get_resource_name(_level) + ")")
return False
return True
class DiskFilter(object):
def __init__(self, _logger):
self.logger = _logger
def host_passes(self, _level, _host, _v):
"""Filter based on disk usage."""
# requested_disk = (1024 * (spec_obj.root_gb + spec_obj.ephemeral_gb) + spec_obj.swap)
requested_disk = _v.local_volume_size
(_, usable_disk) = _host.get_local_disk(_level)
# free_disk_mb = host_state.free_disk_mb
# total_usable_disk_mb = host_state.total_usable_disk_gb * 1024
# disk_allocation_ratio = self._get_disk_allocation_ratio(host_state, spec_obj)
# disk_mb_limit = total_usable_disk_mb * disk_allocation_ratio
# used_disk_mb = total_usable_disk_mb - free_disk_mb
# usable_disk_mb = disk_mb_limit - used_disk_mb
if not usable_disk >= requested_disk:
self.logger.debug("requested disk (" + str(requested_disk) + ") more than avail disk (" +
str(usable_disk) + ") in host (" + _host.get_resource_name(_level) + ")")
return False
# disk_gb_limit = disk_mb_limit / 1024
# host_state.limits['disk_gb'] = disk_gb_limit
return True

@@ -0,0 +1,90 @@
#!/usr/bin/env python
# Modified: Mar. 15, 2016
import collections
import operator
# 1. The following operations are supported:
#    =, s==, s!=, s>=, s>, s<=, s<, <in>, <all-in>, <or>, ==, !=, >=, <=
# 2. Note that <or> is handled in a different way below.
# 3. If the first word in the extra_specs is not one of the operators,
#    the whole spec string is compared to the value for simple equality.
op_methods = {'=': lambda x, y: float(x) >= float(y),
              '<in>': lambda x, y: y in x,
              '<all-in>': lambda x, y: all(val in x for val in y),
              '==': lambda x, y: float(x) == float(y),
              '!=': lambda x, y: float(x) != float(y),
              '>=': lambda x, y: float(x) >= float(y),
              '<=': lambda x, y: float(x) <= float(y),
              's==': operator.eq,
              's!=': operator.ne,
              's<': operator.lt,
              's<=': operator.le,
              's>': operator.gt,
              's>=': operator.ge}
def match(value, req):
    words = req.split()
    op = method = None
    if words:
        op = words.pop(0)
        method = op_methods.get(op)
    if op != '<or>' and not method:
        return value == req
    if value is None:
        return False
    if op == '<or>':  # Ex: <or> v1 <or> v2 <or> v3
        while True:
            if words.pop(0) == value:
                return True
            if not words:
                break
            words.pop(0)  # remove a keyword <or>
            if not words:
                break
        return False
    if words:
        if op == '<all-in>':  # requires a list not a string
            return method(value, words)
        return method(value, words[0])
    return False
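A condensed, self-contained sketch of these matching semantics (only a few operators from the table are kept, and the `<or>` branch is simplified to an equivalent slice over well-formed expressions):

```python
import operator

# trimmed operator table: '=' means "at least", 's==' is string equality
op_methods = {'=': lambda x, y: float(x) >= float(y),
              '<in>': lambda x, y: y in x,
              '==': lambda x, y: float(x) == float(y),
              's==': operator.eq}

def match(value, req):
    words = req.split()
    op = words.pop(0) if words else None
    method = op_methods.get(op)
    if op != '<or>' and not method:
        return value == req          # no operator: plain string equality
    if value is None:
        return False
    if op == '<or>':                 # e.g. "<or> kvm <or> qemu"
        return value in words[::2]   # every other token is a candidate
    return method(value, words[0]) if words else False

print(match('8', '= 4'))                   # True: '=' means "at least"
print(match('kvm', '<or> kvm <or> qemu'))  # True
print(match('ssd', 'hdd'))                 # False
```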
def aggregate_metadata_get_by_host(_level, _host, _key=None):
    """Return a dict of aggregate metadata for a specific host.

    If a metadata key is given, only logical groups containing that
    key are included; otherwise all aggregate metadata is returned.
    """
    metadatas = {}
    logical_groups = _host.get_memberships(_level)
    for lgk, lg in logical_groups.iteritems():
        if lg.group_type == "AGGR":
            if _key is None or _key in lg.metadata:
                metadata = collections.defaultdict(set)
                for k, v in lg.metadata.items():
                    metadata[k].update(x.strip() for x in v.split(','))
                metadatas[lgk] = metadata
    return metadatas
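The normalization above turns each aggregate's comma-separated metadata strings into sets of stripped values. A tiny standalone sketch, with made-up metadata in place of a real logical group:

```python
import collections

# hypothetical raw aggregate metadata: key -> comma-separated string
raw_metadata = {'cpu_arch': 'x86_64, aarch64', 'storage': 'ssd'}

metadata = collections.defaultdict(set)
for k, v in raw_metadata.items():
    metadata[k].update(x.strip() for x in v.split(','))

print(sorted(metadata['cpu_arch']))  # ['aarch64', 'x86_64']
```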
# NOTE: this function does not exist in OpenStack
def availability_zone_get_by_host(_level, _host):
    availability_zone_list = []
    logical_groups = _host.get_memberships(_level)
    for lgk, lg in logical_groups.iteritems():
        if lg.group_type == "AZ":
            availability_zone_list.append(lgk)
    return availability_zone_list

@@ -0,0 +1,196 @@
#!/usr/bin/env python
# Modified: Sep. 27, 2016
import time
from valet.engine.optimizer.app_manager.app_topology_base import VGroup, VM, Volume
from valet.engine.optimizer.ostro.search import Search
class Optimizer(object):
def __init__(self, _resource, _logger):
self.resource = _resource
self.logger = _logger
self.search = Search(self.logger)
self.status = "success"
def place(self, _app_topology):
success = False
uuid_map = None
place_type = None
start_ts = time.time()
if len(_app_topology.candidate_list_map) > 0:
place_type = "replan"
elif len(_app_topology.exclusion_list_map) > 0:
place_type = "migration"
else:
place_type = "create"
if place_type == "migration":
vm_id = _app_topology.exclusion_list_map.keys()[0]
candidate_host_list = []
for hk in self.resource.hosts.keys():
if hk not in _app_topology.exclusion_list_map[vm_id]:
candidate_host_list.append(hk)
_app_topology.candidate_list_map[vm_id] = candidate_host_list
if place_type == "replan" or place_type == "migration":
success = self.search.re_place_nodes(_app_topology, self.resource)
if success is True:
if len(_app_topology.old_vm_map) > 0:
uuid_map = self._delete_old_vms(_app_topology.old_vm_map)
self.resource.update_topology(store=False)
self.logger.debug("Optimizer: remove old placements for replan")
else:
success = self.search.place_nodes(_app_topology, self.resource)
end_ts = time.time()
if success is True:
self.logger.debug("Optimizer: search running time = " + str(end_ts - start_ts) + " sec")
self.logger.debug("Optimizer: total bandwidth = " + str(self.search.bandwidth_usage))
self.logger.debug("Optimizer: total number of hosts = " + str(self.search.num_of_hosts))
placement_map = {}
for v in self.search.node_placements.keys():
if isinstance(v, VM):
placement_map[v] = self.search.node_placements[v].host_name
elif isinstance(v, Volume):
placement_map[v] = self.search.node_placements[v].host_name + "@"
placement_map[v] += self.search.node_placements[v].storage.storage_name
elif isinstance(v, VGroup):
if v.level == "host":
placement_map[v] = self.search.node_placements[v].host_name
elif v.level == "rack":
placement_map[v] = self.search.node_placements[v].rack_name
elif v.level == "cluster":
placement_map[v] = self.search.node_placements[v].cluster_name
self.logger.debug(" " + v.name + " placed in " + placement_map[v])
self._update_resource_status(uuid_map)
return placement_map
else:
self.status = self.search.status
return None
def _delete_old_vms(self, _old_vm_map):
uuid_map = {}
for h_uuid, info in _old_vm_map.iteritems():
uuid = self.resource.get_uuid(h_uuid, info[0])
if uuid is not None:
uuid_map[h_uuid] = uuid
self.resource.remove_vm_by_h_uuid_from_host(info[0], h_uuid, info[1], info[2], info[3])
self.resource.update_host_time(info[0])
host = self.resource.hosts[info[0]]
self.resource.remove_vm_by_h_uuid_from_logical_groups(host, h_uuid)
return uuid_map
def _update_resource_status(self, _uuid_map):
for v, np in self.search.node_placements.iteritems():
if isinstance(v, VM):
uuid = "none"
if _uuid_map is not None:
if v.uuid in _uuid_map.keys():
uuid = _uuid_map[v.uuid]
self.resource.add_vm_to_host(np.host_name,
(v.uuid, v.name, uuid),
v.vCPUs, v.mem, v.local_volume_size)
for vl in v.vm_list:
tnp = self.search.node_placements[vl.node]
placement_level = np.get_common_placement(tnp)
self.resource.deduct_bandwidth(np.host_name, placement_level, vl.nw_bandwidth)
for voll in v.volume_list:
tnp = self.search.node_placements[voll.node]
placement_level = np.get_common_placement(tnp)
self.resource.deduct_bandwidth(np.host_name, placement_level, voll.io_bandwidth)
self._update_logical_grouping(v, self.search.avail_hosts[np.host_name], uuid)
self.resource.update_host_time(np.host_name)
elif isinstance(v, Volume):
self.resource.add_vol_to_host(np.host_name, np.storage.storage_name, v.name, v.volume_size)
for vl in v.vm_list:
tnp = self.search.node_placements[vl.node]
placement_level = np.get_common_placement(tnp)
self.resource.deduct_bandwidth(np.host_name, placement_level, vl.io_bandwidth)
self.resource.update_storage_time(np.storage.storage_name)
def _update_logical_grouping(self, _v, _avail_host, _uuid):
for lgk, lg in _avail_host.host_memberships.iteritems():
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
lg_name = lgk.split(":")
if lg_name[0] == "host" and lg_name[1] != "any":
self.resource.add_logical_group(_avail_host.host_name, lgk, lg.group_type)
if _avail_host.rack_name != "any":
for lgk, lg in _avail_host.rack_memberships.iteritems():
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
lg_name = lgk.split(":")
if lg_name[0] == "rack" and lg_name[1] != "any":
self.resource.add_logical_group(_avail_host.rack_name, lgk, lg.group_type)
if _avail_host.cluster_name != "any":
for lgk, lg in _avail_host.cluster_memberships.iteritems():
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
lg_name = lgk.split(":")
if lg_name[0] == "cluster" and lg_name[1] != "any":
self.resource.add_logical_group(_avail_host.cluster_name, lgk, lg.group_type)
vm_logical_groups = []
self._collect_logical_groups_of_vm(_v, vm_logical_groups)
host = self.resource.hosts[_avail_host.host_name]
self.resource.add_vm_to_logical_groups(host, (_v.uuid, _v.name, _uuid), vm_logical_groups)
def _collect_logical_groups_of_vm(self, _v, _vm_logical_groups):
if isinstance(_v, VM):
for es in _v.extra_specs_list:
if "host_aggregates" in es.keys():
lg_list = es["host_aggregates"]
for lgk in lg_list:
if lgk not in _vm_logical_groups:
_vm_logical_groups.append(lgk)
if _v.availability_zone is not None:
az = _v.availability_zone.split(":")[0]
if az not in _vm_logical_groups:
_vm_logical_groups.append(az)
for _, level in _v.exclusivity_groups.iteritems():
if level not in _vm_logical_groups:
_vm_logical_groups.append(level)
for _, level in _v.diversity_groups.iteritems():
if level not in _vm_logical_groups:
_vm_logical_groups.append(level)
if isinstance(_v, VGroup):
name = _v.level + ":" + _v.name
if name not in _vm_logical_groups:
_vm_logical_groups.append(name)
if _v.survgroup is not None:
self._collect_logical_groups_of_vm(_v.survgroup, _vm_logical_groups)

@@ -0,0 +1,633 @@
#!/usr/bin/env python
# Modified: Oct. 1, 2016
from oslo_config import cfg
import threading
import time
import traceback
from valet.engine.listener.listener_manager import ListenerManager
from valet.engine.optimizer.app_manager.app_handler import AppHandler
from valet.engine.optimizer.app_manager.app_topology_base import VM, Volume
from valet.engine.optimizer.db_connect.music_handler import MusicHandler
from valet.engine.optimizer.ostro.optimizer import Optimizer
from valet.engine.resource_manager.compute_manager import ComputeManager
from valet.engine.resource_manager.resource import Resource
from valet.engine.resource_manager.topology_manager import TopologyManager
CONF = cfg.CONF
class Ostro(object):
def __init__(self, _config, _logger):
self.config = _config
self.logger = _logger
self.db = MusicHandler(self.config, self.logger)
if self.db.init_db() is False:
self.logger.error("Ostro.__init__: error while initializing MUSIC database")
else:
self.logger.debug("Ostro.__init__: done init music")
self.resource = Resource(self.db, self.config, self.logger)
self.logger.debug("done init resource")
self.app_handler = AppHandler(self.resource, self.db, self.config, self.logger)
self.logger.debug("done init apphandler")
self.optimizer = Optimizer(self.resource, self.logger)
self.logger.debug("done init optimizer")
self.data_lock = threading.Lock()
self.thread_list = []
self.topology = TopologyManager(1, "Topology", self.resource, self.data_lock, self.config, self.logger)
self.logger.debug("done init topology")
self.compute = ComputeManager(2, "Compute", self.resource, self.data_lock, self.config, self.logger)
self.logger.debug("done init compute")
self.listener = ListenerManager(3, "Listener", CONF)
self.logger.debug("done init listener")
self.status = "success"
self.end_of_process = False
def run_ostro(self):
self.logger.info("Ostro.run_ostro: start Ostro ......")
self.topology.start()
self.compute.start()
self.listener.start()
self.thread_list.append(self.topology)
self.thread_list.append(self.compute)
self.thread_list.append(self.listener)
''' for monitoring test '''
# duration = 30.0
# expired = time.time() + duration
while self.end_of_process is False:
time.sleep(1)
event_list = self.db.get_events()
if event_list is None:
break
if len(event_list) > 0:
if self.handle_events(event_list) is False:
break
request_list = self.db.get_requests()
if request_list is None:
break
if len(request_list) > 0:
if self.place_app(request_list) is False:
break
''' for monitoring test '''
# current = time.time()
# if current > expired:
# self.logger.debug("test: ostro running ......")
# expired = current + duration
self.topology.end_of_process = True
self.compute.end_of_process = True
for t in self.thread_list:
t.join()
self.logger.info("Ostro.run_ostro: exit Ostro")
def stop_ostro(self):
self.end_of_process = True
while len(self.thread_list) > 0:
time.sleep(1)
for t in self.thread_list:
if not t.is_alive():
self.thread_list.remove(t)
def bootstrap(self):
self.logger.info("Ostro.bootstrap: start bootstrap")
try:
resource_status = self.db.get_resource_status(self.resource.datacenter.name)
if resource_status is None:
return False
if len(resource_status) > 0:
self.logger.info("Ostro.bootstrap: bootstrap from db")
if self.resource.bootstrap_from_db(resource_status) is False:
return False
else:
self.logger.info("bootstrap from OpenStack")
if self._set_hosts() is False:
self.logger.error('_set_hosts is false')
return False
if self._set_flavors() is False:
self.logger.info("_set_flavors is false")
return False
if self._set_topology() is False:
self.logger.error("_set_topology is false")
return False
self.resource.update_topology()
except Exception:
self.logger.critical("Ostro.bootstrap failed: " + traceback.format_exc())
self.logger.info("Ostro.bootstrap: done bootstrap")
return True
def _set_topology(self):
if self.topology.set_topology() is False:
self.status = "datacenter configuration error"
return False
self.logger.debug("done topology bootstrap")
return True
def _set_hosts(self):
if self.compute.set_hosts() is False:
self.status = "OpenStack (Nova) internal error"
return False
self.logger.debug("done hosts & groups bootstrap")
return True
def _set_flavors(self):
self.logger.debug("start flavors bootstrap")
if self.compute.set_flavors() is False:
self.status = "OpenStack (Nova) internal error"
return False
self.logger.debug("done flavors bootstrap")
return True
def place_app(self, _app_data):
self.data_lock.acquire()
start_time = time.time()
query_request_list = []
placement_request_list = []
for req in _app_data:
if req["action"] == "query":
query_request_list.append(req)
else:
placement_request_list.append(req)
if len(query_request_list) > 0:
self.logger.info("Ostro.place_app: start query")
query_results = self._query(query_request_list)
result = self._get_json_results("query", "ok", self.status, query_results)
if self.db.put_result(result) is False:
self.data_lock.release()
return False
self.logger.info("Ostro.place_app: done query")
if len(placement_request_list) > 0:
self.logger.info("Ostro.place_app: start app placement")
result = None
placement_map = self._place_app(placement_request_list)
if placement_map is None:
result = self._get_json_results("placement", "error", self.status, placement_map)
else:
result = self._get_json_results("placement", "ok", "success", placement_map)
if self.db.put_result(result) is False:
self.data_lock.release()
return False
self.logger.info("Ostro.place_app: done app placement")
end_time = time.time()
self.logger.info("Ostro.place_app: total decision delay of request = " + str(end_time - start_time) + " sec")
self.data_lock.release()
return True
def _query(self, _query_list):
query_results = {}
for q in _query_list:
if "type" in q.keys():
if q["type"] == "group_vms":
if "parameters" in q.keys():
params = q["parameters"]
if "group_name" in params.keys():
vm_list = self._get_vms_from_logical_group(params["group_name"])
query_results[q["stack_id"]] = vm_list
else:
self.status = "unknown parameter in query"
self.logger.warn("Ostro._query: " + self.status)
query_results[q["stack_id"]] = None
else:
self.status = "no parameters in query"
self.logger.warn("Ostro._query: " + self.status)
query_results[q["stack_id"]] = None
elif q["type"] == "all_groups":
query_results[q["stack_id"]] = self._get_logical_groups()
else:
self.status = "unknown query type"
self.logger.warn("Ostro._query: " + self.status)
query_results[q["stack_id"]] = None
else:
self.status = "no type in query"
self.logger.warn("Ostro._query: " + self.status)
query_results[q["stack_id"]] = None
return query_results
def _get_vms_from_logical_group(self, _group_name):
vm_list = []
vm_id_list = []
for lgk, lg in self.resource.logical_groups.iteritems():
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
lg_id = lgk.split(":")
if lg_id[1] == _group_name:
vm_id_list = lg.vm_list
break
for vm_id in vm_id_list:
if vm_id[2] != "none": # if physical_uuid != 'none'
vm_list.append(vm_id[2])
return vm_list
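The filtering above relies on each vm_id being a (h_uuid, name, physical_uuid) triple, with "none" marking a vm not yet known to Nova; only real physical uuids are reported. A sketch with made-up ids:

```python
# hypothetical vm_id triples: (h_uuid, name, physical_uuid)
vm_id_list = [('h1', 'web-0', 'uuid-111'),
              ('h2', 'web-1', 'none'),    # not yet active in Nova
              ('h3', 'web-2', 'uuid-333')]

# keep only vms that already have a physical uuid
vm_list = [vm_id[2] for vm_id in vm_id_list if vm_id[2] != 'none']
print(vm_list)  # ['uuid-111', 'uuid-333']
```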
def _get_logical_groups(self):
logical_groups = {}
for lgk, lg in self.resource.logical_groups.iteritems():
logical_groups[lgk] = lg.get_json_info()
return logical_groups
def _place_app(self, _app_data):
''' set application topology '''
app_topology = self.app_handler.add_app(_app_data)
if app_topology is None:
self.status = self.app_handler.status
self.logger.debug("Ostro._place_app: error while register requested apps: " + self.status)
return None
''' check and set vm flavor information '''
for _, vm in app_topology.vms.iteritems():
if self._set_vm_flavor_information(vm) is False:
self.status = "fail to set flavor information"
self.logger.error("Ostro._place_app: " + self.status)
return None
for _, vg in app_topology.vgroups.iteritems():
if self._set_vm_flavor_information(vg) is False:
self.status = "fail to set flavor information in a group"
self.logger.error("Ostro._place_app: " + self.status)
return None
''' set weights for optimization '''
app_topology.set_weight()
app_topology.set_optimization_priority()
''' perform search for optimal placement of app topology '''
placement_map = self.optimizer.place(app_topology)
if placement_map is None:
self.status = self.optimizer.status
self.logger.debug("Ostro._place_app: error while optimizing app placement: " + self.status)
return None
''' update resource and app information '''
if len(placement_map) > 0:
self.resource.update_topology()
self.app_handler.add_placement(placement_map, self.resource.current_timestamp)
if len(app_topology.exclusion_list_map) > 0 and len(app_topology.planned_vm_map) > 0:
for vk in app_topology.planned_vm_map.keys():
if vk in placement_map.keys():
del placement_map[vk]
return placement_map
def _set_vm_flavor_information(self, _v):
    if isinstance(_v, VM):
        if self._set_vm_flavor_properties(_v) is False:
            return False
    else:  # affinity group
        for _, sg in _v.subvgroups.iteritems():
            if self._set_vm_flavor_information(sg) is False:
                return False
    return True
def _set_vm_flavor_properties(self, _vm):
flavor = self.resource.get_flavor(_vm.flavor)
if flavor is None:
self.logger.warn("Ostro._set_vm_flavor_properties: flavor (" + _vm.flavor + ") does not exist; trying to refetch")
''' reset flavor resource and try again '''
if self._set_flavors() is False:
return False
self.resource.update_topology()
flavor = self.resource.get_flavor(_vm.flavor)
if flavor is None:
return False
_vm.vCPUs = flavor.vCPUs
_vm.mem = flavor.mem_cap
_vm.local_volume_size = flavor.disk_cap
if len(flavor.extra_specs) > 0:
extra_specs = {}
for mk, mv in flavor.extra_specs.iteritems():
extra_specs[mk] = mv
_vm.extra_specs_list.append(extra_specs)
return True
def handle_events(self, _event_list):
self.data_lock.acquire()
resource_updated = False
for e in _event_list:
if e.host is not None and e.host != "none":
if self._check_host(e.host) is False:
self.logger.warn("Ostro.handle_events: host (" + e.host + ") related to this event does not exist")
continue
if e.method == "build_and_run_instance": # VM is created (from stack)
self.logger.debug("Ostro.handle_events: got build_and_run event")
if self.db.put_uuid(e) is False:
self.data_lock.release()
return False
elif e.method == "object_action":
if e.object_name == 'Instance': # VM became active or deleted
orch_id = self.db.get_uuid(e.uuid)
if orch_id is None:
self.data_lock.release()
return False
if e.vm_state == "active":
self.logger.debug("Ostro.handle_events: got instance_active event")
vm_info = self.app_handler.get_vm_info(orch_id[1], orch_id[0], e.host)
if vm_info is None:
self.logger.error("Ostro.handle_events: error while getting app info from MUSIC")
self.data_lock.release()
return False
if len(vm_info) == 0:
'''
h_uuid is None or "none" because the vm was not created by a stack,
or the stack was not found because the vm was created by another stack
'''
self.logger.warn("Ostro.handle_events: no vm_info found in app placement record")
self._add_vm_to_host(e.uuid, orch_id[0], e.host, e.vcpus, e.mem, e.local_disk)
else:
if "planned_host" in vm_info.keys() and vm_info["planned_host"] != e.host:
'''
vm was activated in a different host
'''
self.logger.warn("Ostro.handle_events: vm activated in a different host")
self._add_vm_to_host(e.uuid, orch_id[0], e.host, e.vcpus, e.mem, e.local_disk)
self._remove_vm_from_host(e.uuid, orch_id[0],
vm_info["planned_host"],
float(vm_info["cpus"]),
float(vm_info["mem"]),
float(vm_info["local_volume"]))
self._remove_vm_from_logical_groups(e.uuid, orch_id[0], vm_info["planned_host"])
else:
'''
found the vm in the planned host;
the vm may have been deleted from the host during batch cleanup
'''
if self._check_h_uuid(orch_id[0], e.host) is False:
self.logger.debug("Ostro.handle_events: planned vm was deleted")
if self._check_uuid(e.uuid, e.host) is True:
self._update_h_uuid_in_host(orch_id[0], e.uuid, e.host)
self._update_h_uuid_in_logical_groups(orch_id[0], e.uuid, e.host)
else:
self.logger.debug("Ostro.handle_events: vm activated as planned")
self._update_uuid_in_host(orch_id[0], e.uuid, e.host)
self._update_uuid_in_logical_groups(orch_id[0], e.uuid, e.host)
resource_updated = True
elif e.vm_state == "deleted":
self.logger.debug("Ostro.handle_events: got instance_delete event")
self._remove_vm_from_host(e.uuid, orch_id[0], e.host, e.vcpus, e.mem, e.local_disk)
self._remove_vm_from_logical_groups(e.uuid, orch_id[0], e.host)
if self.app_handler.update_vm_info(orch_id[1], orch_id[0]) is False:
self.logger.error("Ostro.handle_events: error while updating app in MUSIC")
self.data_lock.release()
return False
resource_updated = True
else:
self.logger.warn("Ostro.handle_events: unknown vm_state = " + e.vm_state)
elif e.object_name == 'ComputeNode': # Host resource is updated
self.logger.debug("Ostro.handle_events: got compute event")
# NOTE: what if host is disabled?
if self.resource.update_host_resources(e.host, e.status,
e.vcpus, e.vcpus_used,
e.mem, e.free_mem,
e.local_disk, e.free_local_disk,
e.disk_available_least) is True:
self.resource.update_host_time(e.host)
resource_updated = True
else:
self.logger.warn("Ostro.handle_events: unknown object_name = " + e.object_name)
else:
self.logger.warn("Ostro.handle_events: unknown event method = " + e.method)
if resource_updated is True:
self.resource.update_topology()
for e in _event_list:
if self.db.delete_event(e.event_id) is False:
self.data_lock.release()
return False
if e.method == "object_action":
if e.object_name == 'Instance':
if e.vm_state == "deleted":
if self.db.delete_uuid(e.uuid) is False:
self.data_lock.release()
return False
self.data_lock.release()
return True
def _add_vm_to_host(self, _uuid, _h_uuid, _host_name, _vcpus, _mem, _local_disk):
vm_id = None
if _h_uuid is None:
vm_id = ("none", "none", _uuid)
else:
vm_id = (_h_uuid, "none", _uuid)
self.resource.add_vm_to_host(_host_name, vm_id, _vcpus, _mem, _local_disk)
self.resource.update_host_time(_host_name)
def _remove_vm_from_host(self, _uuid, _h_uuid, _host_name, _vcpus, _mem, _local_disk):
if self._check_h_uuid(_h_uuid, _host_name) is True:
self.resource.remove_vm_by_h_uuid_from_host(_host_name, _h_uuid, _vcpus, _mem, _local_disk)
self.resource.update_host_time(_host_name)
else:
if self._check_uuid(_uuid, _host_name) is True:
self.resource.remove_vm_by_uuid_from_host(_host_name, _uuid, _vcpus, _mem, _local_disk)
self.resource.update_host_time(_host_name)
def _remove_vm_from_logical_groups(self, _uuid, _h_uuid, _host_name):
host = self.resource.hosts[_host_name]
if _h_uuid is not None and _h_uuid != "none":
self.resource.remove_vm_by_h_uuid_from_logical_groups(host, _h_uuid)
else:
self.resource.remove_vm_by_uuid_from_logical_groups(host, _uuid)
def _check_host(self, _host_name):
exist = False
for hk in self.resource.hosts.keys():
if hk == _host_name:
exist = True
break
return exist
def _check_h_uuid(self, _h_uuid, _host_name):
if _h_uuid is None or _h_uuid == "none":
return False
host = self.resource.hosts[_host_name]
return host.exist_vm_by_h_uuid(_h_uuid)
def _check_uuid(self, _uuid, _host_name):
if _uuid is None or _uuid == "none":
return False
host = self.resource.hosts[_host_name]
return host.exist_vm_by_uuid(_uuid)
def _update_uuid_in_host(self, _h_uuid, _uuid, _host_name):
host = self.resource.hosts[_host_name]
if host.update_uuid(_h_uuid, _uuid) is True:
self.resource.update_host_time(_host_name)
else:
self.logger.warn("Ostro._update_uuid_in_host: failed to update uuid in host = " + host.name)
def _update_h_uuid_in_host(self, _h_uuid, _uuid, _host_name):
host = self.resource.hosts[_host_name]
if host.update_h_uuid(_h_uuid, _uuid) is True:
self.resource.update_host_time(_host_name)
def _update_uuid_in_logical_groups(self, _h_uuid, _uuid, _host_name):
host = self.resource.hosts[_host_name]
self.resource.update_uuid_in_logical_groups(_h_uuid, _uuid, host)
def _update_h_uuid_in_logical_groups(self, _h_uuid, _uuid, _host_name):
host = self.resource.hosts[_host_name]
self.resource.update_h_uuid_in_logical_groups(_h_uuid, _uuid, host)
def _get_json_results(self, _request_type, _status_type, _status_message, _map):
result = {}
if _request_type == "query":
for qk, qr in _map.iteritems():
query_result = {}
query_status = {}
if qr is None:
query_status['type'] = "error"
query_status['message'] = _status_message
else:
query_status['type'] = "ok"
query_status['message'] = "success"
query_result['status'] = query_status
if qr is not None:
query_result['resources'] = qr
result[qk] = query_result
else:
if _status_type != "error":
applications = {}
for v in _map.keys():
if isinstance(v, VM) or isinstance(v, Volume):
resources = None
if v.app_uuid in applications.keys():
resources = applications[v.app_uuid]
else:
resources = {}
applications[v.app_uuid] = resources
host = _map[v]
resource_property = {"host": host}
properties = {"properties": resource_property}
resources[v.uuid] = properties
for appk, app_resources in applications.iteritems():
app_result = {}
app_status = {}
app_status['type'] = _status_type
app_status['message'] = _status_message
app_result['status'] = app_status
app_result['resources'] = app_resources
result[appk] = app_result
for appk, app in self.app_handler.apps.iteritems():
if app.request_type == "ping":
app_result = {}
app_status = {}
app_status['type'] = _status_type
app_status['message'] = "ping"
app_result['status'] = app_status
app_result['resources'] = {"ip": self.config.ip}
result[appk] = app_result
else:
for appk in self.app_handler.apps.keys():
app_result = {}
app_status = {}
app_status['type'] = _status_type
app_status['message'] = _status_message
app_result['status'] = app_status
app_result['resources'] = {}
result[appk] = app_result
return result


@@ -0,0 +1,300 @@
#!/bin/python
# Modified: Sep. 22, 2016
from valet.engine.optimizer.app_manager.app_topology_base import VGroup, VM, Volume, LEVELS
class Resource(object):
def __init__(self):
self.level = None # level of placement
self.host_name = None
self.host_memberships = {} # all mapped logical groups to host
self.host_vCPUs = 0 # original total vCPUs before overcommit
self.host_avail_vCPUs = 0 # remaining vCPUs after overcommit
self.host_mem = 0 # original total mem cap before overcommit
self.host_avail_mem = 0 # remaining mem cap after overcommit
self.host_local_disk = 0 # original total local disk cap before overcommit
self.host_avail_local_disk = 0 # remaining local disk cap after overcommit
self.host_avail_switches = {} # all mapped switches to host
self.host_avail_storages = {} # all mapped storage_resources to host
self.host_num_of_placed_vms = 0 # the number of vms currently placed in this host
self.rack_name = None # where this host is located
self.rack_memberships = {}
self.rack_vCPUs = 0
self.rack_avail_vCPUs = 0
self.rack_mem = 0
self.rack_avail_mem = 0
self.rack_local_disk = 0
self.rack_avail_local_disk = 0
self.rack_avail_switches = {} # all mapped switches to rack
self.rack_avail_storages = {} # all mapped storage_resources to rack
self.rack_num_of_placed_vms = 0
self.cluster_name = None # where this host and rack are located
self.cluster_memberships = {}
self.cluster_vCPUs = 0
self.cluster_avail_vCPUs = 0
self.cluster_mem = 0
self.cluster_avail_mem = 0
self.cluster_local_disk = 0
self.cluster_avail_local_disk = 0
self.cluster_avail_switches = {} # all mapped switches to cluster
self.cluster_avail_storages = {} # all mapped storage_resources to cluster
self.cluster_num_of_placed_vms = 0
self.storage = None # selected best storage for volume among host_avail_storages
self.sort_base = 0 # order to place
def get_common_placement(self, _resource):
level = None
if self.cluster_name != _resource.cluster_name:
level = "cluster"
else:
if self.rack_name != _resource.rack_name:
level = "rack"
else:
if self.host_name != _resource.host_name:
level = "host"
else:
level = "ANY"
return level
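The level-resolution logic above compares two placements top-down: cluster first, then rack, then host. A minimal standalone sketch of the same rule, using plain (cluster, rack, host) tuples as a stand-in for the engine's Resource objects:

```python
def common_placement(a, b):
    """Return the first hierarchy level at which two placements diverge,
    or 'ANY' when they share cluster, rack, and host.
    a and b are (cluster_name, rack_name, host_name) tuples."""
    for level, (x, y) in zip(("cluster", "rack", "host"), zip(a, b)):
        if x != y:
            return level
    return "ANY"

print(common_placement(("c1", "r1", "h1"), ("c1", "r1", "h2")))  # host
print(common_placement(("c1", "r1", "h1"), ("c2", "r1", "h1")))  # cluster
```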
def get_resource_name(self, _level):
name = "unknown"
if _level == "cluster":
name = self.cluster_name
elif _level == "rack":
name = self.rack_name
elif _level == "host":
name = self.host_name
return name
def get_memberships(self, _level):
memberships = None
if _level == "cluster":
memberships = self.cluster_memberships
elif _level == "rack":
memberships = self.rack_memberships
elif _level == "host":
memberships = self.host_memberships
return memberships
def get_num_of_placed_vms(self, _level):
num_of_vms = 0
if _level == "cluster":
num_of_vms = self.cluster_num_of_placed_vms
elif _level == "rack":
num_of_vms = self.rack_num_of_placed_vms
elif _level == "host":
num_of_vms = self.host_num_of_placed_vms
return num_of_vms
def get_avail_resources(self, _level):
avail_vCPUs = 0
avail_mem = 0
avail_local_disk = 0
if _level == "cluster":
avail_vCPUs = self.cluster_avail_vCPUs
avail_mem = self.cluster_avail_mem
avail_local_disk = self.cluster_avail_local_disk
elif _level == "rack":
avail_vCPUs = self.rack_avail_vCPUs
avail_mem = self.rack_avail_mem
avail_local_disk = self.rack_avail_local_disk
elif _level == "host":
avail_vCPUs = self.host_avail_vCPUs
avail_mem = self.host_avail_mem
avail_local_disk = self.host_avail_local_disk
return (avail_vCPUs, avail_mem, avail_local_disk)
def get_local_disk(self, _level):
local_disk = 0
avail_local_disk = 0
if _level == "cluster":
local_disk = self.cluster_local_disk
avail_local_disk = self.cluster_avail_local_disk
elif _level == "rack":
local_disk = self.rack_local_disk
avail_local_disk = self.rack_avail_local_disk
elif _level == "host":
local_disk = self.host_local_disk
avail_local_disk = self.host_avail_local_disk
return (local_disk, avail_local_disk)
def get_vCPUs(self, _level):
vCPUs = 0
avail_vCPUs = 0
if _level == "cluster":
vCPUs = self.cluster_vCPUs
avail_vCPUs = self.cluster_avail_vCPUs
elif _level == "rack":
vCPUs = self.rack_vCPUs
avail_vCPUs = self.rack_avail_vCPUs
elif _level == "host":
vCPUs = self.host_vCPUs
avail_vCPUs = self.host_avail_vCPUs
return (vCPUs, avail_vCPUs)
def get_mem(self, _level):
mem = 0
avail_mem = 0
if _level == "cluster":
mem = self.cluster_mem
avail_mem = self.cluster_avail_mem
elif _level == "rack":
mem = self.rack_mem
avail_mem = self.rack_avail_mem
elif _level == "host":
mem = self.host_mem
avail_mem = self.host_avail_mem
return (mem, avail_mem)
def get_avail_storages(self, _level):
avail_storages = None
if _level == "cluster":
avail_storages = self.cluster_avail_storages
elif _level == "rack":
avail_storages = self.rack_avail_storages
elif _level == "host":
avail_storages = self.host_avail_storages
return avail_storages
def get_avail_switches(self, _level):
avail_switches = None
if _level == "cluster":
avail_switches = self.cluster_avail_switches
elif _level == "rack":
avail_switches = self.rack_avail_switches
elif _level == "host":
avail_switches = self.host_avail_switches
return avail_switches
class LogicalGroupResource(object):
def __init__(self):
self.name = None
self.group_type = "AGGR"
self.metadata = {}
self.num_of_placed_vms = 0
self.num_of_placed_vms_per_host = {} # key = host (i.e., id of host or rack), value = num_of_placed_vms
class StorageResource(object):
def __init__(self):
self.storage_name = None
self.storage_class = None
self.storage_avail_disk = 0
self.sort_base = 0
class SwitchResource(object):
def __init__(self):
self.switch_name = None
self.switch_type = None
self.avail_bandwidths = [] # out-bound bandwidths
self.sort_base = 0
class Node(object):
def __init__(self):
self.node = None # VM, Volume, or VGroup
self.sort_base = -1
def get_all_links(self):
link_list = []
if isinstance(self.node, VM):
for vml in self.node.vm_list:
link_list.append(vml)
for voll in self.node.volume_list:
link_list.append(voll)
elif isinstance(self.node, Volume):
for vml in self.node.vm_list: # vml is VolumeLink
link_list.append(vml)
elif isinstance(self.node, VGroup):
for vgl in self.node.vgroup_list:
link_list.append(vgl)
return link_list
def get_bandwidth_of_link(self, _link):
bandwidth = 0
if isinstance(self.node, VGroup) or isinstance(self.node, VM):
if isinstance(_link.node, VM):
bandwidth = _link.nw_bandwidth
elif isinstance(_link.node, Volume):
bandwidth = _link.io_bandwidth
else:
bandwidth = _link.io_bandwidth
return bandwidth
def get_common_diversity(self, _diversity_groups):
common_level = "ANY"
for dk in self.node.diversity_groups.keys():
if dk in _diversity_groups.keys():
level = self.node.diversity_groups[dk].split(":")[0]
if common_level != "ANY":
if LEVELS.index(level) > LEVELS.index(common_level):
common_level = level
else:
common_level = level
return common_level
def get_affinity_id(self):
aff_id = None
if isinstance(self.node, VGroup) and self.node.vgroup_type == "AFF" and \
self.node.name != "any":
aff_id = self.node.level + ":" + self.node.name
return aff_id
def compute_reservation(_level, _placement_level, _bandwidth):
reservation = 0
if _placement_level != "ANY":
diff = LEVELS.index(_placement_level) - LEVELS.index(_level) + 1
if diff > 0:
reservation = _bandwidth * diff * 2
return reservation


@@ -0,0 +1,269 @@
#!/bin/python
#################################################################################################################
# Author: Gueyoung Jung
# Contact: gjung@research.att.com
# Version 2.0.2: Feb. 9, 2016
# Modified: Sep. 16, 2016
#
# Functions
# - Set all configurations to run Ostro
#
#################################################################################################################
import os
from oslo_config import cfg
from valet.engine.conf import register_conf
CONF = cfg.CONF
class Config(object):
def __init__(self, *default_config_files):
register_conf()
if default_config_files:
CONF(default_config_files=default_config_files)
else:
CONF(project='valet')
# System parameters
self.root_loc = os.path.dirname(CONF.default_config_files[0])
self.mode = None
self.command = 'status'
self.process = None
self.control_loc = None
self.api_protocol = 'http://'
self.network_control = False
self.network_control_url = None
self.db_keyspace = None
self.db_request_table = None
self.db_response_table = None
self.db_event_table = None
self.db_resource_table = None
self.db_app_table = None
self.db_resource_index_table = None
self.db_app_index_table = None
self.db_uuid_table = None
self.replication_factor = 3
self.db_host = None
self.ip = None
self.priority = 0
self.rpc_server_ip = None
self.rpc_server_port = 0
# Logging parameters
self.logger_name = None
self.logging_level = None
self.logging_loc = None
self.resource_log_loc = None
self.app_log_loc = None
self.max_main_log_size = 0
self.max_log_size = 0
self.max_num_of_logs = 0
# Management parameters
self.datacenter_name = None
self.num_of_region_chars = 0
self.rack_code_list = []
self.node_code_list = []
self.topology_trigger_time = None
self.topology_trigger_freq = 0
self.compute_trigger_time = None
self.compute_trigger_freq = 0
self.default_cpu_allocation_ratio = 1
self.default_ram_allocation_ratio = 1
self.default_disk_allocation_ratio = 1
self.static_cpu_standby_ratio = 0
self.static_mem_standby_ratio = 0
self.static_local_disk_standby_ratio = 0
# Authentication parameters
self.project_name = None
self.user_name = None
self.pw = None
# Simulation parameters
self.sim_cfg_loc = None
self.num_of_hosts_per_rack = 0
self.num_of_racks = 0
self.num_of_spine_switches = 0
self.num_of_aggregates = 0
self.aggregated_ratio = 0
self.cpus_per_host = 0
self.mem_per_host = 0
self.disk_per_host = 0
self.bandwidth_of_spine = 0
self.bandwidth_of_rack = 0
self.bandwidth_of_host = 0
self.num_of_basic_flavors = 0
self.base_flavor_cpus = 0
self.base_flavor_mem = 0
self.base_flavor_disk = 0
def configure(self):
status = self._init_system()
if status != "success":
return status
self.sim_cfg_loc = self.root_loc + self.sim_cfg_loc
self.resource_log_loc = self.logging_loc
self.app_log_loc = self.logging_loc
self.eval_log_loc = self.logging_loc
if self.mode.startswith("live") is False:
status = self._set_simulation()
if status != "success":
return status
return "success"
def _init_system(self):
self.command = CONF.command
self.mode = CONF.engine.mode
self.priority = CONF.engine.priority
self.logger_name = CONF.engine.logger_name
self.logging_level = CONF.engine.logging_level
self.logging_loc = CONF.engine.logging_dir
self.resource_log_loc = CONF.engine.logging_dir + 'resources'
self.app_log_loc = CONF.engine.logging_dir + 'app'
self.eval_log_loc = CONF.engine.logging_dir
self.max_log_size = CONF.engine.max_log_size
self.max_num_of_logs = CONF.engine.max_num_of_logs
self.process = CONF.engine.pid
self.rpc_server_ip = CONF.engine.rpc_server_ip
self.rpc_server_port = CONF.engine.rpc_server_port
self.datacenter_name = CONF.engine.datacenter_name
self.network_control = CONF.engine.network_control
self.network_control_url = CONF.engine.network_control_url
self.default_cpu_allocation_ratio = CONF.engine.default_cpu_allocation_ratio
self.default_ram_allocation_ratio = CONF.engine.default_ram_allocation_ratio
self.default_disk_allocation_ratio = CONF.engine.default_disk_allocation_ratio
self.static_cpu_standby_ratio = CONF.engine.static_cpu_standby_ratio
self.static_mem_standby_ratio = CONF.engine.static_mem_standby_ratio
self.static_local_disk_standby_ratio = CONF.engine.static_local_disk_standby_ratio
self.topology_trigger_time = CONF.engine.topology_trigger_time
self.topology_trigger_freq = CONF.engine.topology_trigger_frequency
self.compute_trigger_time = CONF.engine.compute_trigger_time
self.compute_trigger_freq = CONF.engine.compute_trigger_frequency
self.db_keyspace = CONF.music.keyspace
self.db_request_table = CONF.music.request_table
self.db_response_table = CONF.music.response_table
self.db_event_table = CONF.music.event_table
self.db_resource_table = CONF.music.resource_table
self.db_app_table = CONF.music.app_table
self.db_resource_index_table = CONF.music.resource_index_table
self.db_app_index_table = CONF.music.app_index_table
self.db_uuid_table = CONF.music.uuid_table
self.replication_factor = CONF.music.replication_factor
self.db_host = CONF.music.host
self.ip = CONF.engine.ip
self.num_of_region_chars = CONF.engine.num_of_region_chars
self.rack_code_list = CONF.engine.rack_code_list
self.node_code_list = CONF.engine.node_code_list
self.sim_cfg_loc = CONF.engine.sim_cfg_loc
self.project_name = CONF.identity.project_name
self.user_name = CONF.identity.username
self.pw = CONF.identity.password
return "success"
def _set_simulation(self):
self.num_of_spine_switches = CONF.engine.num_of_spine_switches
self.num_of_hosts_per_rack = CONF.engine.num_of_hosts_per_rack
self.num_of_racks = CONF.engine.num_of_racks
self.num_of_aggregates = CONF.engine.num_of_aggregates
self.aggregated_ratio = CONF.engine.aggregated_ratio
self.cpus_per_host = CONF.engine.cpus_per_host
self.mem_per_host = CONF.engine.mem_per_host
self.disk_per_host = CONF.engine.disk_per_host
self.bandwidth_of_spine = CONF.engine.bandwidth_of_spine
self.bandwidth_of_rack = CONF.engine.bandwidth_of_rack
self.bandwidth_of_host = CONF.engine.bandwidth_of_host
self.num_of_basic_flavors = CONF.engine.num_of_basic_flavors
self.base_flavor_cpus = CONF.engine.base_flavor_cpus
self.base_flavor_mem = CONF.engine.base_flavor_mem
self.base_flavor_disk = CONF.engine.base_flavor_disk
return "success"


@@ -0,0 +1,163 @@
#!/usr/bin/env python
# Modified: Mar. 1, 2016
import atexit
import os
from signal import SIGTERM
import sys
import time
class Daemon(object):
""" A generic daemon class.
Usage: subclass the Daemon class and override the run() method
"""
def __init__(self, priority, pidfile, logger, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
self.stdin = stdin
self.stdout = stdout
self.stderr = stderr
self.pidfile = pidfile
self.priority = priority
self.logger = logger
def daemonize(self):
""" Do the UNIX double-fork magic, see Stevens' "Advanced
Programming in the UNIX Environment" for details (ISBN 0201563177)
http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
"""
try:
pid = os.fork()
if pid > 0:
# exit first parent
sys.exit(0)
except OSError as e:
self.logger.error("Daemon error at step1: " + e.strerror)
sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
sys.exit(1)
# decouple from parent environment
os.chdir("/")
os.setsid()
os.umask(0)
# do second fork
try:
pid = os.fork()
if pid > 0:
# exit from second parent
sys.exit(0)
except OSError as e:
self.logger.error("Daemon error at step2: " + e.strerror)
sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
sys.exit(1)
# redirect standard file descriptors
sys.stdout.flush()
sys.stderr.flush()
si = file(self.stdin, 'r')
so = file(self.stdout, 'a+')
se = file(self.stderr, 'a+', 0)
os.dup2(si.fileno(), sys.stdin.fileno())
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())
# write pidfile
atexit.register(self.delpid)
pid = str(os.getpid())
file(self.pidfile, 'w+').write("%s\n" % pid)
def delpid(self):
os.remove(self.pidfile)
def getpid(self):
"""returns the content of pidfile or None."""
try:
pf = file(self.pidfile, 'r')
pid = int(pf.read().strip())
pf.close()
except IOError:
pid = None
return pid
def checkpid(self, pid):
""" Check For the existence of a unix pid. """
if pid is None:
return False
try:
os.kill(pid, 0)
except OSError:
self.delpid()
return False
else:
return True
def start(self):
"""Start the daemon"""
# Check for a pidfile to see if the daemon already runs
pid = self.getpid()
if pid:
message = "pidfile %s already exists. Daemon already running?\n"
sys.stderr.write(message % self.pidfile)
sys.exit(1)
# Start the daemon
self.daemonize()
self.run()
def stop(self):
"""Stop the daemon"""
# Get the pid from the pidfile
pid = self.getpid()
if not pid:
message = "pidfile %s does not exist. Daemon not running?\n"
sys.stderr.write(message % self.pidfile)
return # not an error in a restart
# Try killing the daemon process
try:
while 1:
os.kill(pid, SIGTERM)
time.sleep(0.1)
except OSError as err:
err = str(err)
if err.find("No such process") > 0:
if os.path.exists(self.pidfile):
os.remove(self.pidfile)
else:
# print str(err)
sys.exit(1)
def restart(self):
"""Restart the daemon"""
self.stop()
self.start()
def status(self):
""" returns instance's priority """
# Check for a pidfile to see if the daemon already runs
pid = self.getpid()
status = 0
if self.checkpid(pid):
message = "status: pidfile %s exists. Daemon is running\n"
status = self.priority
else:
message = "status: pidfile %s does not exist. Daemon is not running\n"
sys.stderr.write(message % self.pidfile)
return status
def run(self):
""" You should override this method when you subclass Daemon.
It will be called after the process has been daemonized by start() or restart().
"""


@@ -0,0 +1,151 @@
#!/bin/python
#################################################################################################################
# Author: Gueyoung Jung
# Contact: gjung@research.att.com
# Version 2.0.2: Feb. 9, 2016
#
# Functions
# - Handle user requests
#
#################################################################################################################
import sys
from configuration import Config
from valet.api.db.models.music import Music
class DBCleaner(object):
def __init__(self, _config):
self.config = _config
self.music = Music()
def clean_db_tables(self):
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_resource_table)
if len(results) > 0:
print("resource table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace, self.config.db_resource_table, 'site_name', row['site_name'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_request_table)
if len(results) > 0:
print("request table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_request_table,
'stack_id', row['stack_id'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_response_table)
if len(results) > 0:
print("response table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_response_table,
'stack_id', row['stack_id'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_event_table)
if len(results) > 0:
print("event table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_event_table,
'timestamp', row['timestamp'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_resource_index_table)
if len(results) > 0:
print("resource_index table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_resource_index_table,
'site_name', row['site_name'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_app_index_table)
if len(results) > 0:
print("app_index table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_app_index_table,
'site_name', row['site_name'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_app_table)
if len(results) > 0:
print("app table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_app_table,
'stack_id', row['stack_id'])
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_uuid_table)
if len(results) > 0:
print("uuid table result = ", len(results))
for _, row in results.iteritems():
self.music.delete_row_eventually(self.config.db_keyspace,
self.config.db_uuid_table,
'uuid', row['uuid'])
def check_db_tables(self):
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_resource_table)
if len(results) > 0:
print("resource table not cleaned ")
else:
print("resource table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_request_table)
if len(results) > 0:
print("request table not cleaned ")
else:
print("request table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_response_table)
if len(results) > 0:
print("response table not cleaned ")
else:
print("response table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_event_table)
if len(results) > 0:
print("event table not cleaned ")
else:
print("event table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_resource_index_table)
if len(results) > 0:
print("resource log index table not cleaned ")
else:
print("resource log index table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_app_index_table)
if len(results) > 0:
print("app log index table not cleaned ")
else:
print("app log index table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_app_table)
if len(results) > 0:
print("app log table not cleaned ")
else:
print("app log table cleaned")
results = self.music.read_all_rows(self.config.db_keyspace, self.config.db_uuid_table)
if len(results) > 0:
print("uuid table not cleaned ")
else:
print("uuid table cleaned")
if __name__ == '__main__':
config = Config()
config_status = config.configure()
if config_status != "success":
print("Error while configuring Ostro: " + config_status)
sys.exit(2)
c = DBCleaner(config)
c.clean_db_tables()
c.check_db_tables()


@@ -0,0 +1,75 @@
#!/bin/python
# Modified: Sep. 22, 2016
import os
import sys
import traceback
from valet.engine.optimizer.ostro.ostro import Ostro
from valet.engine.optimizer.ostro_server.configuration import Config
from valet.engine.optimizer.ostro_server.daemon import Daemon # implemented for Python v2.7
from valet.engine.optimizer.util.util import init_logger
class OstroDaemon(Daemon):
def run(self):
self.logger.info("##### Valet Engine is launched #####")
try:
ostro = Ostro(config, self.logger)
except Exception:
self.logger.error(traceback.format_exc())
sys.exit(2)
if ostro.bootstrap() is False:
self.logger.error("ostro bootstrap failed")
sys.exit(2)
ostro.run_ostro()
def verify_dirs(list_of_dirs):
for d in list_of_dirs:
try:
if not os.path.exists(d):
os.makedirs(d)
except OSError:
print("Error while verifying: " + d)
sys.exit(2)
if __name__ == "__main__":
# Configuration
try:
config = Config()
config_status = config.configure()
if config_status != "success":
print(config_status)
sys.exit(2)
# Verify directories
dirs_list = [config.logging_loc, config.resource_log_loc, config.app_log_loc, os.path.dirname(config.process)]
verify_dirs(dirs_list)
# Logger
logger = init_logger(config)
# Start daemon process
daemon = OstroDaemon(config.priority, config.process, logger)
logger.info("%s ostro ..." % config.command)
# switch case
exit_code = {
'start': daemon.start,
'stop': daemon.stop,
'restart': daemon.restart,
'status': daemon.status,
}[config.command]()
exit_code = exit_code or 0
except Exception:
logger.error(traceback.format_exc())
exit_code = 2
sys.exit(int(exit_code))


@@ -0,0 +1,25 @@
# Version 2.0.2: Feb. 9, 2016
# Set simulation parameters
num_of_spine_switches=0
#num_of_racks=1
num_of_racks=2
#num_of_hosts_per_rack=8
num_of_hosts_per_rack=2
bandwidth_of_spine=40000
bandwidth_of_rack=40000
bandwidth_of_host=10000
num_of_aggregates=1
aggregated_ratio=5
cpus_per_host=16
mem_per_host=16000
disk_per_host=1000
num_of_basic_flavors=3
base_flavor_cpus=1
base_flavor_mem=2000
base_flavor_disk=40


@@ -0,0 +1,89 @@
#!/bin/python
# Modified: Feb. 9, 2016
from os import listdir, stat
from os.path import isfile, join
import logging
from logging.handlers import RotatingFileHandler
def get_logfile(_loc, _max_log_size, _name):
files = [f for f in listdir(_loc) if isfile(join(_loc, f))]
logfile_index = 0
for f in files:
f_name_list = f.split(".")
f_type = f_name_list[len(f_name_list) - 1]
if f_type == "log":
f_id_list = f.split("_")
temp_f_id = f_id_list[len(f_id_list) - 1]
f_id = temp_f_id.split(".")[0]
f_index = int(f_id)
if f_index > logfile_index:
logfile_index = f_index
last_logfile = _name + "_" + str(logfile_index) + ".log"
mode = None
if isfile(_loc + last_logfile) is True:
statinfo = stat(_loc + last_logfile)
if statinfo.st_size > _max_log_size:
last_logfile = _name + "_" + str(logfile_index + 1) + ".log"
mode = 'w'
else:
mode = 'a'
else:
mode = 'w'
return (last_logfile, mode)
def get_last_logfile(_loc, _max_log_size, _max_num_of_logs, _name, _last_index):
last_logfile = _name + "_" + str(_last_index) + ".log"
mode = None
if isfile(_loc + last_logfile) is True:
statinfo = stat(_loc + last_logfile)
if statinfo.st_size > _max_log_size:
if (_last_index + 1) < _max_num_of_logs:
_last_index = _last_index + 1
else:
_last_index = 0
last_logfile = _name + "_" + str(_last_index) + ".log"
mode = 'w'
else:
mode = 'a'
else:
mode = 'w'
return (last_logfile, _last_index, mode)
def adjust_json_string(_data):
_data = _data.replace("None", '"none"')
_data = _data.replace("False", '"false"')
_data = _data.replace("True", '"true"')
_data = _data.replace('_"none"', "_none")
_data = _data.replace('_"false"', "_false")
_data = _data.replace('_"true"', "_true")
return _data
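adjust_json_string patches up a str()-ed Python structure so it parses as JSON: the bare literals None/False/True are quoted, then occurrences that followed an underscore (identifier fragments such as a hypothetical host_None) are un-quoted back to host_none. A self-contained check of that round trip:

```python
import json

def adjust_json_string(data):
    # quote bare Python literals so the string parses as JSON
    data = data.replace("None", '"none"')
    data = data.replace("False", '"false"')
    data = data.replace("True", '"true"')
    # undo the substitution inside identifier fragments like "_None"
    data = data.replace('_"none"', "_none")
    data = data.replace('_"false"', "_false")
    data = data.replace('_"true"', "_true")
    return data

raw = '{"status": None, "enabled": True, "tag": "rack_none"}'
print(json.loads(adjust_json_string(raw)))
```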
def init_logger(config):
log_formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
log_handler = RotatingFileHandler(config.logging_loc + config.logger_name,
mode='a',
maxBytes=config.max_main_log_size,
backupCount=2,
encoding=None,
delay=0)
log_handler.setFormatter(log_formatter)
logger = logging.getLogger(config.logger_name)
logger.setLevel(logging.DEBUG if config.logging_level == "debug" else logging.INFO)
logger.addHandler(log_handler)
return logger


@@ -0,0 +1,335 @@
#!/bin/python
# Modified: Sep. 27, 2016
from novaclient import client as nova_client
from oslo_config import cfg
from resource_base import Host, LogicalGroup, Flavor
import traceback
# Nova API v2
VERSION = 2
CONF = cfg.CONF
class Compute(object):
def __init__(self, _logger):
self.logger = _logger
self.nova = None
def set_hosts(self, _hosts, _logical_groups):
self._get_nova_client()
status = self._set_availability_zones(_hosts, _logical_groups)
if status != "success":
self.logger.error('_set_availability_zones failed')
return status
status = self._set_aggregates(_hosts, _logical_groups)
if status != "success":
self.logger.error('_set_aggregates failed')
return status
status = self._set_placed_vms(_hosts, _logical_groups)
if status != "success":
self.logger.error('_set_placed_vms failed')
return status
status = self._set_resources(_hosts)
if status != "success":
self.logger.error('_set_resources failed')
return status
return "success"
def _get_nova_client(self):
'''Returns a nova client'''
self.nova = nova_client.Client(VERSION,
CONF.identity.username,
CONF.identity.password,
CONF.identity.project_name,
CONF.identity.auth_url)
def _set_availability_zones(self, _hosts, _logical_groups):
try:
hosts_list = self.nova.hosts.list()
try:
for h in hosts_list:
if h.service == "compute":
host = Host(h.host_name)
host.tag.append("nova")
logical_group = None
if h.zone not in _logical_groups.keys():
logical_group = LogicalGroup(h.zone)
logical_group.group_type = "AZ"
_logical_groups[logical_group.name] = logical_group
else:
logical_group = _logical_groups[h.zone]
host.memberships[logical_group.name] = logical_group
if host.name not in logical_group.vms_per_host.keys():
logical_group.vms_per_host[host.name] = []
self.logger.info("adding Host LogicalGroup: " + str(host.__dict__))
_hosts[host.name] = host
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while setting host zones from Nova"
except Exception:
self.logger.critical(traceback.format_exc())
return "success"
def _set_aggregates(self, _hosts, _logical_groups):
aggregate_list = self.nova.aggregates.list()
try:
for a in aggregate_list:
aggregate = LogicalGroup(a.name)
aggregate.group_type = "AGGR"
if a.deleted is not False:
aggregate.status = "disabled"
metadata = {}
for mk in a.metadata.keys():
metadata[mk] = a.metadata.get(mk)
aggregate.metadata = metadata
self.logger.info("adding aggregate LogicalGroup: " + str(aggregate.__dict__))
_logical_groups[aggregate.name] = aggregate
for hn in a.hosts:
host = _hosts[hn]
host.memberships[aggregate.name] = aggregate
aggregate.vms_per_host[host.name] = []
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while setting host aggregates from Nova"
return "success"
# NOTE: do not set any info in _logical_groups
def _set_placed_vms(self, _hosts, _logical_groups):
error_status = None
for hk in _hosts.keys():
vm_uuid_list = []
result_status = self._get_vms_of_host(hk, vm_uuid_list)
if result_status == "success":
for vm_uuid in vm_uuid_list:
vm_detail = [] # (vm_name, az, metadata, status)
result_status_detail = self._get_vm_detail(vm_uuid, vm_detail)
if result_status_detail == "success":
# if vm_detail[3] != "SHUTOFF": # status == "ACTIVE" or "SUSPENDED"
vm_id = ("none", vm_detail[0], vm_uuid)
_hosts[hk].vm_list.append(vm_id)
# _logical_groups[vm_detail[1]].vm_list.append(vm_id)
# _logical_groups[vm_detail[1]].vms_per_host[hk].append(vm_id)
else:
error_status = result_status_detail
break
else:
error_status = result_status
if error_status is not None:
break
if error_status is None:
return "success"
else:
return error_status
def _get_vms_of_host(self, _hk, _vm_list):
hypervisor_list = self.nova.hypervisors.search(hypervisor_match=_hk, servers=True)
try:
for hv in hypervisor_list:
if hasattr(hv, 'servers'):
server_list = hv.__getattr__('servers')
for s in server_list:
_vm_list.append(s.uuid)
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while getting existing vms"
return "success"
def _get_vm_detail(self, _vm_uuid, _vm_detail):
server = self.nova.servers.get(_vm_uuid)
try:
vm_name = server.name
_vm_detail.append(vm_name)
        az = getattr(server, "OS-EXT-AZ:availability_zone")
_vm_detail.append(az)
metadata = server.metadata
_vm_detail.append(metadata)
status = server.status
_vm_detail.append(status)
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while getting vm detail"
return "success"
def _set_resources(self, _hosts):
# Returns Hypervisor list
host_list = self.nova.hypervisors.list()
try:
for hv in host_list:
if hv.service['host'] in _hosts.keys():
host = _hosts[hv.service['host']]
host.status = hv.status
host.state = hv.state
host.original_vCPUs = float(hv.vcpus)
host.vCPUs_used = float(hv.vcpus_used)
host.original_mem_cap = float(hv.memory_mb)
host.free_mem_mb = float(hv.free_ram_mb)
host.original_local_disk_cap = float(hv.local_gb)
host.free_disk_gb = float(hv.free_disk_gb)
host.disk_available_least = float(hv.disk_available_least)
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while setting host resources from Nova"
return "success"
def set_flavors(self, _flavors):
error_status = None
self._get_nova_client()
result_status = self._set_flavors(_flavors)
if result_status == "success":
for _, f in _flavors.iteritems():
result_status_detail = self._set_extra_specs(f)
if result_status_detail != "success":
error_status = result_status_detail
break
else:
error_status = result_status
if error_status is None:
return "success"
else:
return error_status
def _set_flavors(self, _flavors):
# Get a list of all flavors
flavor_list = self.nova.flavors.list()
try:
for f in flavor_list:
flavor = Flavor(f.name)
flavor.flavor_id = f.id
if hasattr(f, "OS-FLV-DISABLED:disabled"):
if getattr(f, "OS-FLV-DISABLED:disabled"):
flavor.status = "disabled"
flavor.vCPUs = float(f.vcpus)
flavor.mem_cap = float(f.ram)
root_gb = float(f.disk)
ephemeral_gb = 0.0
if hasattr(f, "OS-FLV-EXT-DATA:ephemeral"):
ephemeral_gb = float(getattr(f, "OS-FLV-EXT-DATA:ephemeral"))
swap_mb = 0.0
if hasattr(f, "swap"):
sw = getattr(f, "swap")
if sw != '':
swap_mb = float(sw)
flavor.disk_cap = root_gb + ephemeral_gb + swap_mb / float(1024)
self.logger.info("adding flavor " + str(flavor.__dict__))
_flavors[flavor.name] = flavor
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while getting flavors"
return "success"
def _set_extra_specs(self, _flavor):
try:
# Get a list of all flavors
flavors_list = self.nova.flavors.list()
# Get flavor from flavor_list
for flavor in flavors_list:
if flavor.id == _flavor.flavor_id:
extra_specs = flavor.get_keys()
for sk, sv in extra_specs.iteritems():
_flavor.extra_specs[sk] = sv
break
except (ValueError, KeyError, TypeError):
self.logger.error(traceback.format_exc())
return "Error while getting flavor extra spec"
return "success"
# Unit test
'''
if __name__ == '__main__':
config = Config()
config_status = config.configure()
if config_status != "success":
print "Error while configuring Ostro: " + config_status
sys.exit(2)
auth = Authentication()
admin_token = auth.get_tenant_token(config)
if admin_token is None:
print "Error while getting admin_token"
sys.exit(2)
else:
print "admin_token=",admin_token
project_token = auth.get_project_token(config, admin_token)
if project_token is None:
print "Error while getting project_token"
sys.exit(2)
else:
print "project_token=",project_token
c = Compute(config, admin_token, project_token)
hosts = {}
logical_groups = {}
flavors = {}
#c._set_availability_zones(hosts, logical_groups)
#c._set_aggregates(None, logical_groups)
#c._set_placed_vms(hosts, logical_groups)
#c._get_vms_of_host("qos101", None)
#c._get_vm_detail("20b2890b-81bb-4942-94bf-c6bee29630bb", None)
c._set_resources(hosts)
#c._set_flavors(flavors)
'''
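The flavor disk capacity above folds root, ephemeral, and swap into a single GB figure, converting swap from MB before summing. A minimal, self-contained sketch of that arithmetic (the helper name and standalone form are illustrative, not part of the module):

```python
def total_disk_gb(root_gb, ephemeral_gb=0.0, swap_mb=0.0):
    """Combine root, ephemeral, and swap into one disk capacity in GB.

    Mirrors the computation in Compute._set_flavors: swap is reported by
    Nova in MB, so it is divided by 1024 before being added to the GB terms.
    """
    return float(root_gb) + float(ephemeral_gb) + float(swap_mb) / 1024.0
```

For example, a flavor with a 10 GB root disk, 20 GB ephemeral disk, and 512 MB swap yields 30.5 GB.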

#!/usr/bin/env python
# Modified: Sep. 22, 2016
import threading
import time
from copy import deepcopy
from valet.engine.resource_manager.compute import Compute
from valet.engine.resource_manager.compute_simulator import SimCompute
from valet.engine.resource_manager.resource_base import Host
class ComputeManager(threading.Thread):
def __init__(self, _t_id, _t_name, _rsc, _data_lock, _config, _logger):
threading.Thread.__init__(self)
self.thread_id = _t_id
self.thread_name = _t_name
self.data_lock = _data_lock
self.end_of_process = False
self.resource = _rsc
self.config = _config
self.logger = _logger
# self.auth = Authentication(_logger)
self.admin_token = None
self.project_token = None
def run(self):
self.logger.info("ComputeManager: start " + self.thread_name + " ......")
if self.config.compute_trigger_freq > 0:
period_end = time.time() + self.config.compute_trigger_freq
while self.end_of_process is False:
time.sleep(60)
if time.time() > period_end:
self._run()
period_end = time.time() + self.config.compute_trigger_freq
else:
(alarm_HH, alarm_MM) = self.config.compute_trigger_time.split(':')
now = time.localtime()
timeout = True
last_trigger_year = now.tm_year
last_trigger_mon = now.tm_mon
last_trigger_mday = now.tm_mday
while self.end_of_process is False:
time.sleep(60)
now = time.localtime()
if now.tm_year > last_trigger_year or now.tm_mon > last_trigger_mon or now.tm_mday > last_trigger_mday:
timeout = False
                if timeout is False and \
                        (now.tm_hour, now.tm_min) >= (int(alarm_HH), int(alarm_MM)):
self._run()
timeout = True
last_trigger_year = now.tm_year
last_trigger_mon = now.tm_mon
last_trigger_mday = now.tm_mday
self.logger.info("ComputeManager: exit " + self.thread_name)
def _run(self):
self.logger.info("ComputeManager: --- start compute_nodes status update ---")
self.data_lock.acquire()
try:
triggered_host_updates = self.set_hosts()
triggered_flavor_updates = self.set_flavors()
if triggered_host_updates is True and triggered_flavor_updates is True:
if self.resource.update_topology() is False:
# TODO(GY): error in MUSIC. ignore?
pass
else:
# TODO(GY): error handling, e.g., 3 times failure then stop Ostro?
pass
finally:
self.data_lock.release()
self.logger.info("ComputeManager: --- done compute_nodes status update ---")
return True
# def _set_admin_token(self):
# self.admin_token = self.auth.get_tenant_token(self.config)
# if self.admin_token is None:
# self.logger.error("ComputeManager: " + self.auth.status)
# return False
#
# return True
# def _set_project_token(self):
# self.project_token = self.auth.get_project_token(self.config, self.admin_token)
# if self.project_token is None:
# self.logger.error("ComputeManager: " + self.auth.status)
# return False
#
# return True
def set_hosts(self):
hosts = {}
logical_groups = {}
compute = None
if self.config.mode.startswith("sim") is True or \
self.config.mode.startswith("test") is True:
compute = SimCompute(self.config)
else:
compute = Compute(self.logger)
status = compute.set_hosts(hosts, logical_groups)
if status != "success":
self.logger.error("ComputeManager: " + status)
return False
self._compute_avail_host_resources(hosts)
self._check_logical_group_update(logical_groups)
self._check_host_update(hosts)
return True
def _compute_avail_host_resources(self, _hosts):
for hk, host in _hosts.iteritems():
self.resource.compute_avail_resources(hk, host)
def _check_logical_group_update(self, _logical_groups):
for lk in _logical_groups.keys():
if lk not in self.resource.logical_groups.keys():
self.resource.logical_groups[lk] = deepcopy(_logical_groups[lk])
self.resource.logical_groups[lk].last_update = time.time()
self.logger.warn("ComputeManager: new logical group (" + lk + ") added")
for rlk in self.resource.logical_groups.keys():
rl = self.resource.logical_groups[rlk]
if rl.group_type != "EX" and rl.group_type != "AFF" and rl.group_type != "DIV":
if rlk not in _logical_groups.keys():
self.resource.logical_groups[rlk].status = "disabled"
self.resource.logical_groups[rlk].last_update = time.time()
self.logger.warn("ComputeManager: logical group (" + rlk + ") removed")
for lk in _logical_groups.keys():
lg = _logical_groups[lk]
rlg = self.resource.logical_groups[lk]
if lg.group_type != "EX" and lg.group_type != "AFF" and lg.group_type != "DIV":
if self._check_logical_group_metadata_update(lg, rlg) is True:
rlg.last_update = time.time()
self.logger.warn("ComputeManager: logical group (" + lk + ") updated")
    def _check_logical_group_metadata_update(self, _lg, _rlg):
        updated = False
        if _lg.status != _rlg.status:
            _rlg.status = _lg.status
            updated = True
        for mdk in _lg.metadata.keys():
            if mdk not in _rlg.metadata.keys():
                _rlg.metadata[mdk] = _lg.metadata[mdk]
                updated = True
        for rmdk in _rlg.metadata.keys():
            if rmdk not in _lg.metadata.keys():
                del _rlg.metadata[rmdk]
                updated = True
        for hk in _lg.vms_per_host.keys():
            if hk not in _rlg.vms_per_host.keys():
                _rlg.vms_per_host[hk] = deepcopy(_lg.vms_per_host[hk])
                updated = True
        for rhk in _rlg.vms_per_host.keys():
            if rhk not in _lg.vms_per_host.keys():
                del _rlg.vms_per_host[rhk]
                updated = True
        return updated
def _check_host_update(self, _hosts):
for hk in _hosts.keys():
if hk not in self.resource.hosts.keys():
new_host = Host(hk)
self.resource.hosts[new_host.name] = new_host
new_host.last_update = time.time()
self.logger.warn("ComputeManager: new host (" + new_host.name + ") added")
for rhk, rhost in self.resource.hosts.iteritems():
if rhk not in _hosts.keys():
if "nova" in rhost.tag:
rhost.tag.remove("nova")
rhost.last_update = time.time()
self.logger.warn("ComputeManager: host (" + rhost.name + ") disabled")
for hk in _hosts.keys():
host = _hosts[hk]
rhost = self.resource.hosts[hk]
if self._check_host_config_update(host, rhost) is True:
rhost.last_update = time.time()
for hk, h in self.resource.hosts.iteritems():
if h.clean_memberships() is True:
h.last_update = time.time()
self.logger.warn("ComputeManager: host (" + h.name + ") updated (delete EX/AFF/DIV membership)")
for hk, host in self.resource.hosts.iteritems():
if host.last_update > self.resource.current_timestamp:
self.resource.update_rack_resource(host)
    def _check_host_config_update(self, _host, _rhost):
        # Each check must run unconditionally, so accumulate the flag with
        # "or" rather than overwriting it with the last check's result.
        topology_updated = False
        topology_updated = self._check_host_status(_host, _rhost) or topology_updated
        topology_updated = self._check_host_resources(_host, _rhost) or topology_updated
        topology_updated = self._check_host_memberships(_host, _rhost) or topology_updated
        topology_updated = self._check_host_vms(_host, _rhost) or topology_updated
        return topology_updated
def _check_host_status(self, _host, _rhost):
topology_updated = False
if "nova" not in _rhost.tag:
_rhost.tag.append("nova")
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (tag added)")
if _host.status != _rhost.status:
_rhost.status = _host.status
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (status changed)")
if _host.state != _rhost.state:
_rhost.state = _host.state
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (state changed)")
return topology_updated
def _check_host_resources(self, _host, _rhost):
topology_updated = False
if _host.vCPUs != _rhost.vCPUs or \
_host.original_vCPUs != _rhost.original_vCPUs or \
_host.avail_vCPUs != _rhost.avail_vCPUs:
_rhost.vCPUs = _host.vCPUs
_rhost.original_vCPUs = _host.original_vCPUs
_rhost.avail_vCPUs = _host.avail_vCPUs
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (CPU updated)")
if _host.mem_cap != _rhost.mem_cap or \
_host.original_mem_cap != _rhost.original_mem_cap or \
_host.avail_mem_cap != _rhost.avail_mem_cap:
_rhost.mem_cap = _host.mem_cap
_rhost.original_mem_cap = _host.original_mem_cap
_rhost.avail_mem_cap = _host.avail_mem_cap
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (mem updated)")
if _host.local_disk_cap != _rhost.local_disk_cap or \
_host.original_local_disk_cap != _rhost.original_local_disk_cap or \
_host.avail_local_disk_cap != _rhost.avail_local_disk_cap:
_rhost.local_disk_cap = _host.local_disk_cap
_rhost.original_local_disk_cap = _host.original_local_disk_cap
_rhost.avail_local_disk_cap = _host.avail_local_disk_cap
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (local disk space updated)")
if _host.vCPUs_used != _rhost.vCPUs_used or \
_host.free_mem_mb != _rhost.free_mem_mb or \
_host.free_disk_gb != _rhost.free_disk_gb or \
_host.disk_available_least != _rhost.disk_available_least:
_rhost.vCPUs_used = _host.vCPUs_used
_rhost.free_mem_mb = _host.free_mem_mb
_rhost.free_disk_gb = _host.free_disk_gb
_rhost.disk_available_least = _host.disk_available_least
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (other resource numbers)")
return topology_updated
def _check_host_memberships(self, _host, _rhost):
topology_updated = False
for mk in _host.memberships.keys():
if mk not in _rhost.memberships.keys():
_rhost.memberships[mk] = self.resource.logical_groups[mk]
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (new membership)")
for mk in _rhost.memberships.keys():
m = _rhost.memberships[mk]
if m.group_type != "EX" and m.group_type != "AFF" and m.group_type != "DIV":
if mk not in _host.memberships.keys():
del _rhost.memberships[mk]
topology_updated = True
self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (delete membership)")
return topology_updated
    def _check_host_vms(self, _host, _rhost):
        topology_updated = False
        ''' clean up VMs '''
        # Iterate over a copy: removing items from a list while iterating
        # it directly would skip elements.
        for rvm_id in _rhost.vm_list[:]:
            if rvm_id[2] == "none":
                _rhost.vm_list.remove(rvm_id)
                topology_updated = True
                self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (none vm removed)")
        self.resource.clean_none_vms_from_logical_groups(_rhost)
        for vm_id in _host.vm_list:
            if _rhost.exist_vm_by_uuid(vm_id[2]) is False:
                _rhost.vm_list.append(vm_id)
                topology_updated = True
                self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (new vm placed)")
        for rvm_id in _rhost.vm_list[:]:
            if _host.exist_vm_by_uuid(rvm_id[2]) is False:
                _rhost.vm_list.remove(rvm_id)
                self.resource.remove_vm_by_uuid_from_logical_groups(_rhost, rvm_id[2])
                topology_updated = True
                self.logger.warn("ComputeManager: host (" + _rhost.name + ") updated (vm removed)")
        return topology_updated
def set_flavors(self):
flavors = {}
compute = None
if self.config.mode.startswith("sim") is True or \
self.config.mode.startswith("test") is True:
compute = SimCompute(self.config)
else:
compute = Compute(self.logger)
status = compute.set_flavors(flavors)
if status != "success":
self.logger.error("ComputeManager: " + status)
return False
self._check_flavor_update(flavors)
return True
def _check_flavor_update(self, _flavors):
for fk in _flavors.keys():
if fk not in self.resource.flavors.keys():
self.resource.flavors[fk] = deepcopy(_flavors[fk])
self.resource.flavors[fk].last_update = time.time()
self.logger.warn("ComputeManager: new flavor (" + fk + ") added")
for rfk in self.resource.flavors.keys():
if rfk not in _flavors.keys():
self.resource.flavors[rfk].status = "disabled"
self.resource.flavors[rfk].last_update = time.time()
self.logger.warn("ComputeManager: flavor (" + rfk + ") removed")
for fk in _flavors.keys():
f = _flavors[fk]
rf = self.resource.flavors[fk]
if self._check_flavor_spec_update(f, rf) is True:
rf.last_update = time.time()
self.logger.warn("ComputeManager: flavor (" + fk + ") spec updated")
def _check_flavor_spec_update(self, _f, _rf):
spec_updated = False
if _f.status != _rf.status:
_rf.status = _f.status
spec_updated = True
if _f.vCPUs != _rf.vCPUs or _f.mem_cap != _rf.mem_cap or _f.disk_cap != _rf.disk_cap:
_rf.vCPUs = _f.vCPUs
_rf.mem_cap = _f.mem_cap
_rf.disk_cap = _f.disk_cap
spec_updated = True
for sk in _f.extra_specs.keys():
if sk not in _rf.extra_specs.keys():
_rf.extra_specs[sk] = _f.extra_specs[sk]
spec_updated = True
for rsk in _rf.extra_specs.keys():
if rsk not in _f.extra_specs.keys():
del _rf.extra_specs[rsk]
spec_updated = True
return spec_updated
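The `_check_flavor_spec_update` method above follows a delta-sync pattern: copy keys the cache is missing, drop keys the source no longer has, and report whether anything changed. A minimal standalone sketch of that pattern (the function name is hypothetical; this slightly generalizes the original by also refreshing values that changed in place):

```python
def sync_extra_specs(new_specs, cached_specs):
    """Mirror new_specs into cached_specs; return True if anything changed."""
    updated = False
    # Add missing keys and refresh stale values.
    for k, v in new_specs.items():
        if k not in cached_specs or cached_specs[k] != v:
            cached_specs[k] = v
            updated = True
    # Drop keys that no longer exist upstream (iterate over a snapshot,
    # since we delete while walking the dict).
    for k in list(cached_specs.keys()):
        if k not in new_specs:
            del cached_specs[k]
            updated = True
    return updated
```

Calling it twice with the same source is idempotent: the first call reports the delta, the second reports no change.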

#!/usr/bin/env python
# Modified: Sep. 4, 2016
from valet.engine.resource_manager.resource_base import Host, LogicalGroup, Flavor
class SimCompute(object):
def __init__(self, _config):
self.config = _config
self.datacenter_name = "sim"
def set_hosts(self, _hosts, _logical_groups):
self._set_availability_zones(_hosts, _logical_groups)
self._set_aggregates(_hosts, _logical_groups)
self._set_placed_vms(_hosts, _logical_groups)
self._set_resources(_hosts)
return "success"
def _set_availability_zones(self, _hosts, _logical_groups):
logical_group = LogicalGroup("nova")
logical_group.group_type = "AZ"
_logical_groups[logical_group.name] = logical_group
for r_num in range(0, self.config.num_of_racks):
for h_num in range(0, self.config.num_of_hosts_per_rack):
host = Host(self.datacenter_name + "0r" + str(r_num) + "c" + str(h_num))
host.tag.append("nova")
host.memberships["nova"] = logical_group
logical_group.vms_per_host[host.name] = []
_hosts[host.name] = host
def _set_aggregates(self, _hosts, _logical_groups):
for a_num in range(0, self.config.num_of_aggregates):
metadata = {}
metadata["cpu_allocation_ratio"] = "0.5"
aggregate = LogicalGroup("aggregate" + str(a_num))
aggregate.group_type = "AGGR"
aggregate.metadata = metadata
_logical_groups[aggregate.name] = aggregate
for a_num in range(0, self.config.num_of_aggregates):
aggregate = _logical_groups["aggregate" + str(a_num)]
for r_num in range(0, self.config.num_of_racks):
for h_num in range(0, self.config.num_of_hosts_per_rack):
host_name = self.datacenter_name + "0r" + str(r_num) + "c" + str(h_num)
if host_name in _hosts.keys():
if (h_num % (self.config.aggregated_ratio + a_num)) == 0:
host = _hosts[host_name]
host.memberships[aggregate.name] = aggregate
aggregate.vms_per_host[host.name] = []
def _set_placed_vms(self, _hosts, _logical_groups):
pass
def _set_resources(self, _hosts):
for r_num in range(0, self.config.num_of_racks):
for h_num in range(0, self.config.num_of_hosts_per_rack):
host_name = self.datacenter_name + "0r" + str(r_num) + "c" + str(h_num)
if host_name in _hosts.keys():
host = _hosts[host_name]
host.original_vCPUs = float(self.config.cpus_per_host)
host.vCPUs_used = 0.0
host.original_mem_cap = float(self.config.mem_per_host)
host.free_mem_mb = host.original_mem_cap
host.original_local_disk_cap = float(self.config.disk_per_host)
host.free_disk_gb = host.original_local_disk_cap
host.disk_available_least = host.original_local_disk_cap
def set_flavors(self, _flavors):
for f_num in range(0, self.config.num_of_basic_flavors):
flavor = Flavor("bflavor" + str(f_num))
flavor.vCPUs = float(self.config.base_flavor_cpus * (f_num + 1))
flavor.mem_cap = float(self.config.base_flavor_mem * (f_num + 1))
flavor.disk_cap = float(self.config.base_flavor_disk * (f_num + 1)) + 10.0 + 20.0 / 1024.0
_flavors[flavor.name] = flavor
for a_num in range(0, self.config.num_of_aggregates):
flavor = Flavor("sflavor" + str(a_num))
flavor.vCPUs = self.config.base_flavor_cpus * (a_num + 1)
flavor.mem_cap = self.config.base_flavor_mem * (a_num + 1)
flavor.disk_cap = self.config.base_flavor_disk * (a_num + 1)
# flavor.extra_specs["availability_zone"] = "nova"
flavor.extra_specs["cpu_allocation_ratio"] = "0.5"
_flavors[flavor.name] = flavor
return "success"
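`SimCompute` builds its host grid from a fixed naming scheme, `<datacenter>0r<rack>c<host>`, over `num_of_racks` × `num_of_hosts_per_rack`. A small sketch of that enumeration (the helper name and defaults are illustrative):

```python
def sim_host_names(datacenter="sim", racks=2, hosts_per_rack=2):
    """Enumerate the host names SimCompute generates: <dc>0r<rack>c<host>."""
    return [datacenter + "0r" + str(r) + "c" + str(h)
            for r in range(racks)
            for h in range(hosts_per_rack)]
```

With one rack of two hosts this yields `["sim0r0c0", "sim0r0c1"]`, matching the keys used throughout `_set_aggregates` and `_set_resources`.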

#!/usr/bin/env python
# Modified: Sep. 27, 2016
import json
import sys
import time
import traceback
from valet.engine.optimizer.app_manager.app_topology_base import LEVELS
from valet.engine.optimizer.util import util as util
from valet.engine.resource_manager.resource_base import Datacenter, HostGroup, Host, LogicalGroup
from valet.engine.resource_manager.resource_base import Flavor, Switch, Link
class Resource(object):
def __init__(self, _db, _config, _logger):
self.db = _db
self.config = _config
self.logger = _logger
''' resource data '''
self.datacenter = Datacenter(self.config.datacenter_name)
self.host_groups = {}
self.hosts = {}
self.switches = {}
self.storage_hosts = {}
''' metadata '''
self.logical_groups = {}
self.flavors = {}
self.current_timestamp = 0
self.last_log_index = 0
''' resource status aggregation '''
self.CPU_avail = 0
self.mem_avail = 0
self.local_disk_avail = 0
self.disk_avail = 0
self.nw_bandwidth_avail = 0
def bootstrap_from_db(self, _resource_status):
try:
logical_groups = _resource_status.get("logical_groups")
if logical_groups:
for lgk, lg in logical_groups.iteritems():
logical_group = LogicalGroup(lgk)
logical_group.group_type = lg.get("group_type")
logical_group.status = lg.get("status")
logical_group.metadata = lg.get("metadata")
logical_group.vm_list = lg.get("vm_list")
logical_group.volume_list = lg.get("volume_list", [])
logical_group.vms_per_host = lg.get("vms_per_host")
self.logical_groups[lgk] = logical_group
if len(self.logical_groups) > 0:
self.logger.debug("Resource.bootstrap_from_db: logical_groups loaded")
else:
self.logger.warn("Resource.bootstrap_from_db: no logical_groups")
flavors = _resource_status.get("flavors")
if flavors:
for fk, f in flavors.iteritems():
flavor = Flavor(fk)
flavor.flavor_id = f.get("flavor_id")
flavor.status = f.get("status")
flavor.vCPUs = f.get("vCPUs")
flavor.mem_cap = f.get("mem")
flavor.disk_cap = f.get("disk")
flavor.extra_specs = f.get("extra_specs")
self.flavors[fk] = flavor
if len(self.flavors) > 0:
self.logger.debug("Resource.bootstrap_from_db: flavors loaded")
else:
self.logger.error("Resource.bootstrap_from_db: fail loading flavors")
# return False
switches = _resource_status.get("switches")
if switches:
for sk, s in switches.iteritems():
switch = Switch(sk)
switch.switch_type = s.get("switch_type")
switch.status = s.get("status")
self.switches[sk] = switch
if len(self.switches) > 0:
self.logger.debug("Resource.bootstrap_from_db: switches loaded")
for sk, s in switches.iteritems():
switch = self.switches[sk]
up_links = {}
uls = s.get("up_links")
for ulk, ul in uls.iteritems():
ulink = Link(ulk)
ulink.resource = self.switches[ul.get("resource")]
ulink.nw_bandwidth = ul.get("bandwidth")
ulink.avail_nw_bandwidth = ul.get("avail_bandwidth")
up_links[ulk] = ulink
switch.up_links = up_links
peer_links = {}
pls = s.get("peer_links")
for plk, pl in pls.iteritems():
plink = Link(plk)
plink.resource = self.switches[pl.get("resource")]
plink.nw_bandwidth = pl.get("bandwidth")
plink.avail_nw_bandwidth = pl.get("avail_bandwidth")
peer_links[plk] = plink
switch.peer_links = peer_links
self.logger.debug("Resource.bootstrap_from_db: switch links loaded")
else:
self.logger.error("Resource.bootstrap_from_db: fail loading switches")
# return False
# storage_hosts
hosts = _resource_status.get("hosts")
if hosts:
for hk, h in hosts.iteritems():
host = Host(hk)
host.tag = h.get("tag")
host.status = h.get("status")
host.state = h.get("state")
host.vCPUs = h.get("vCPUs")
host.original_vCPUs = h.get("original_vCPUs")
host.avail_vCPUs = h.get("avail_vCPUs")
host.mem_cap = h.get("mem")
host.original_mem_cap = h.get("original_mem")
host.avail_mem_cap = h.get("avail_mem")
host.local_disk_cap = h.get("local_disk")
host.original_local_disk_cap = h.get("original_local_disk")
host.avail_local_disk_cap = h.get("avail_local_disk")
host.vCPUs_used = h.get("vCPUs_used")
host.free_mem_mb = h.get("free_mem_mb")
host.free_disk_gb = h.get("free_disk_gb")
host.disk_available_least = h.get("disk_available_least")
host.vm_list = h.get("vm_list")
host.volume_list = h.get("volume_list", [])
for lgk in h["membership_list"]:
host.memberships[lgk] = self.logical_groups[lgk]
for sk in h.get("switch_list", []):
host.switches[sk] = self.switches[sk]
# host.storages
self.hosts[hk] = host
if len(self.hosts) > 0:
self.logger.debug("Resource.bootstrap_from_db: hosts loaded")
else:
self.logger.error("Resource.bootstrap_from_db: fail loading hosts")
# return False
host_groups = _resource_status.get("host_groups")
if host_groups:
for hgk, hg in host_groups.iteritems():
host_group = HostGroup(hgk)
host_group.host_type = hg.get("host_type")
host_group.status = hg.get("status")
host_group.vCPUs = hg.get("vCPUs")
host_group.original_vCPUs = hg.get("original_vCPUs")
host_group.avail_vCPUs = hg.get("avail_vCPUs")
host_group.mem_cap = hg.get("mem")
host_group.original_mem_cap = hg.get("original_mem")
host_group.avail_mem_cap = hg.get("avail_mem")
host_group.local_disk_cap = hg.get("local_disk")
host_group.original_local_disk_cap = hg.get("original_local_disk")
host_group.avail_local_disk_cap = hg.get("avail_local_disk")
host_group.vm_list = hg.get("vm_list")
host_group.volume_list = hg.get("volume_list", [])
for lgk in hg.get("membership_list"):
host_group.memberships[lgk] = self.logical_groups[lgk]
for sk in hg.get("switch_list", []):
host_group.switches[sk] = self.switches[sk]
# host.storages
self.host_groups[hgk] = host_group
if len(self.host_groups) > 0:
self.logger.debug("Resource.bootstrap_from_db: host_groups loaded")
else:
self.logger.error("Resource.bootstrap_from_db: fail loading host_groups")
# return False
dc = _resource_status.get("datacenter")
if dc:
self.datacenter.name = dc.get("name")
self.datacenter.region_code_list = dc.get("region_code_list")
self.datacenter.status = dc.get("status")
self.datacenter.vCPUs = dc.get("vCPUs")
self.datacenter.original_vCPUs = dc.get("original_vCPUs")
self.datacenter.avail_vCPUs = dc.get("avail_vCPUs")
self.datacenter.mem_cap = dc.get("mem")
self.datacenter.original_mem_cap = dc.get("original_mem")
self.datacenter.avail_mem_cap = dc.get("avail_mem")
self.datacenter.local_disk_cap = dc.get("local_disk")
self.datacenter.original_local_disk_cap = dc.get("original_local_disk")
self.datacenter.avail_local_disk_cap = dc.get("avail_local_disk")
self.datacenter.vm_list = dc.get("vm_list")
self.datacenter.volume_list = dc.get("volume_list", [])
for lgk in dc.get("membership_list"):
self.datacenter.memberships[lgk] = self.logical_groups[lgk]
for sk in dc.get("switch_list", []):
self.datacenter.root_switches[sk] = self.switches[sk]
# host.storages
for ck in dc.get("children"):
if ck in self.host_groups.keys():
self.datacenter.resources[ck] = self.host_groups[ck]
elif ck in self.hosts.keys():
self.datacenter.resources[ck] = self.hosts[ck]
if len(self.datacenter.resources) > 0:
self.logger.debug("Resource.bootstrap_from_db: datacenter loaded")
else:
self.logger.error("Resource.bootstrap_from_db: fail loading datacenter")
# return False
hgs = _resource_status.get("host_groups")
if hgs:
for hgk, hg in hgs.iteritems():
host_group = self.host_groups[hgk]
pk = hg.get("parent")
if pk == self.datacenter.name:
host_group.parent_resource = self.datacenter
elif pk in self.host_groups.keys():
host_group.parent_resource = self.host_groups[pk]
for ck in hg.get("children"):
if ck in self.hosts.keys():
host_group.child_resources[ck] = self.hosts[ck]
elif ck in self.host_groups.keys():
host_group.child_resources[ck] = self.host_groups[ck]
                self.logger.debug("Resource.bootstrap_from_db: host_groups' layout loaded")
hs = _resource_status.get("hosts")
if hs:
for hk, h in hs.iteritems():
host = self.hosts[hk]
pk = h.get("parent")
if pk == self.datacenter.name:
host.host_group = self.datacenter
elif pk in self.host_groups.keys():
host.host_group = self.host_groups[pk]
                self.logger.debug("Resource.bootstrap_from_db: hosts' layout loaded")
self._update_compute_avail()
self._update_storage_avail()
self._update_nw_bandwidth_avail()
self.logger.debug("Resource.bootstrap_from_db: resource availability updated")
except Exception:
self.logger.error("Resource.bootstrap_from_db - FAILED:" + traceback.format_exc())
return True
def update_topology(self, store=True):
self._update_topology()
self._update_compute_avail()
self._update_storage_avail()
self._update_nw_bandwidth_avail()
if store is False:
return True
ct = self._store_topology_updates()
if ct is None:
return False
else:
self.current_timestamp = ct
return True
def _update_topology(self):
for level in LEVELS:
for _, host_group in self.host_groups.iteritems():
if host_group.host_type == level and host_group.check_availability() is True:
if host_group.last_update > self.current_timestamp:
self._update_host_group_topology(host_group)
if self.datacenter.last_update > self.current_timestamp:
self._update_datacenter_topology()
def _update_host_group_topology(self, _host_group):
_host_group.init_resources()
del _host_group.vm_list[:]
del _host_group.volume_list[:]
_host_group.storages.clear()
for _, host in _host_group.child_resources.iteritems():
if host.check_availability() is True:
_host_group.vCPUs += host.vCPUs
_host_group.original_vCPUs += host.original_vCPUs
_host_group.avail_vCPUs += host.avail_vCPUs
_host_group.mem_cap += host.mem_cap
_host_group.original_mem_cap += host.original_mem_cap
_host_group.avail_mem_cap += host.avail_mem_cap
_host_group.local_disk_cap += host.local_disk_cap
_host_group.original_local_disk_cap += host.original_local_disk_cap
_host_group.avail_local_disk_cap += host.avail_local_disk_cap
for shk, storage_host in host.storages.iteritems():
if storage_host.status == "enabled":
_host_group.storages[shk] = storage_host
for vm_id in host.vm_list:
_host_group.vm_list.append(vm_id)
for vol_name in host.volume_list:
_host_group.volume_list.append(vol_name)
_host_group.init_memberships()
for _, host in _host_group.child_resources.iteritems():
if host.check_availability() is True:
for mk in host.memberships.keys():
_host_group.memberships[mk] = host.memberships[mk]
def _update_datacenter_topology(self):
self.datacenter.init_resources()
del self.datacenter.vm_list[:]
del self.datacenter.volume_list[:]
self.datacenter.storages.clear()
self.datacenter.memberships.clear()
for _, resource in self.datacenter.resources.iteritems():
if resource.check_availability() is True:
self.datacenter.vCPUs += resource.vCPUs
self.datacenter.original_vCPUs += resource.original_vCPUs
self.datacenter.avail_vCPUs += resource.avail_vCPUs
self.datacenter.mem_cap += resource.mem_cap
self.datacenter.original_mem_cap += resource.original_mem_cap
self.datacenter.avail_mem_cap += resource.avail_mem_cap
self.datacenter.local_disk_cap += resource.local_disk_cap
self.datacenter.original_local_disk_cap += resource.original_local_disk_cap
self.datacenter.avail_local_disk_cap += resource.avail_local_disk_cap
for shk, storage_host in resource.storages.iteritems():
if storage_host.status == "enabled":
self.datacenter.storages[shk] = storage_host
for vm_name in resource.vm_list:
self.datacenter.vm_list.append(vm_name)
for vol_name in resource.volume_list:
self.datacenter.volume_list.append(vol_name)
for mk in resource.memberships.keys():
self.datacenter.memberships[mk] = resource.memberships[mk]
def _update_compute_avail(self):
self.CPU_avail = self.datacenter.avail_vCPUs
self.mem_avail = self.datacenter.avail_mem_cap
self.local_disk_avail = self.datacenter.avail_local_disk_cap
def _update_storage_avail(self):
self.disk_avail = 0
for _, storage_host in self.storage_hosts.iteritems():
if storage_host.status == "enabled":
self.disk_avail += storage_host.avail_disk_cap
def _update_nw_bandwidth_avail(self):
self.nw_bandwidth_avail = 0
level = "leaf"
for _, s in self.switches.iteritems():
if s.status == "enabled":
if level == "leaf":
if s.switch_type == "ToR" or s.switch_type == "spine":
level = s.switch_type
elif level == "ToR":
if s.switch_type == "spine":
level = s.switch_type
if level == "leaf":
self.nw_bandwidth_avail = sys.maxint
elif level == "ToR":
for _, h in self.hosts.iteritems():
if h.status == "enabled" and h.state == "up" and \
("nova" in h.tag) and ("infra" in h.tag):
avail_nw_bandwidth_list = [sys.maxint]
for sk, s in h.switches.iteritems():
if s.status == "enabled":
for ulk, ul in s.up_links.iteritems():
avail_nw_bandwidth_list.append(ul.avail_nw_bandwidth)
self.nw_bandwidth_avail += min(avail_nw_bandwidth_list)
elif level == "spine":
for _, hg in self.host_groups.iteritems():
if hg.host_type == "rack" and hg.status == "enabled":
avail_nw_bandwidth_list = [sys.maxint]
for _, s in hg.switches.iteritems():
if s.status == "enabled":
for _, ul in s.up_links.iteritems():
avail_nw_bandwidth_list.append(ul.avail_nw_bandwidth)
# NOTE: peer links?
self.nw_bandwidth_avail += min(avail_nw_bandwidth_list)
def _store_topology_updates(self):
last_update_time = self.current_timestamp
flavor_updates = {}
logical_group_updates = {}
storage_updates = {}
switch_updates = {}
host_updates = {}
host_group_updates = {}
datacenter_update = None
for fk, flavor in self.flavors.iteritems():
if flavor.last_update > self.current_timestamp:
flavor_updates[fk] = flavor.get_json_info()
last_update_time = flavor.last_update
for lgk, lg in self.logical_groups.iteritems():
if lg.last_update > self.current_timestamp:
logical_group_updates[lgk] = lg.get_json_info()
last_update_time = lg.last_update
for shk, storage_host in self.storage_hosts.iteritems():
if storage_host.last_update > self.current_timestamp or \
storage_host.last_cap_update > self.current_timestamp:
storage_updates[shk] = storage_host.get_json_info()
            if storage_host.last_update > self.current_timestamp:
last_update_time = storage_host.last_update
if storage_host.last_cap_update > self.current_timestamp:
last_update_time = storage_host.last_cap_update
for sk, s in self.switches.iteritems():
if s.last_update > self.current_timestamp:
switch_updates[sk] = s.get_json_info()
last_update_time = s.last_update
for hk, host in self.hosts.iteritems():
if host.last_update > self.current_timestamp or host.last_link_update > self.current_timestamp:
host_updates[hk] = host.get_json_info()
if host.last_update > self.current_timestamp:
last_update_time = host.last_update
if host.last_link_update > self.current_timestamp:
last_update_time = host.last_link_update
for hgk, host_group in self.host_groups.iteritems():
if host_group.last_update > self.current_timestamp or \
host_group.last_link_update > self.current_timestamp:
host_group_updates[hgk] = host_group.get_json_info()
if host_group.last_update > self.current_timestamp:
last_update_time = host_group.last_update
if host_group.last_link_update > self.current_timestamp:
last_update_time = host_group.last_link_update
if self.datacenter.last_update > self.current_timestamp or \
self.datacenter.last_link_update > self.current_timestamp:
datacenter_update = self.datacenter.get_json_info()
if self.datacenter.last_update > self.current_timestamp:
last_update_time = self.datacenter.last_update
if self.datacenter.last_link_update > self.current_timestamp:
last_update_time = self.datacenter.last_link_update
(resource_logfile, last_index, mode) = util.get_last_logfile(self.config.resource_log_loc,
self.config.max_log_size,
self.config.max_num_of_logs,
self.datacenter.name,
self.last_log_index)
self.last_log_index = last_index
logging = open(self.config.resource_log_loc + resource_logfile, mode)
json_logging = {}
json_logging['timestamp'] = last_update_time
if len(flavor_updates) > 0:
json_logging['flavors'] = flavor_updates
if len(logical_group_updates) > 0:
json_logging['logical_groups'] = logical_group_updates
if len(storage_updates) > 0:
json_logging['storages'] = storage_updates
if len(switch_updates) > 0:
json_logging['switches'] = switch_updates
if len(host_updates) > 0:
json_logging['hosts'] = host_updates
if len(host_group_updates) > 0:
json_logging['host_groups'] = host_group_updates
if datacenter_update is not None:
json_logging['datacenter'] = datacenter_update
logged_data = json.dumps(json_logging)
logging.write(logged_data)
logging.write("\n")
logging.close()
self.logger.info("Resource._store_topology_updates: log resource status in " + resource_logfile)
if self.db is not None:
if self.db.update_resource_status(self.datacenter.name, json_logging) is False:
return None
if self.db.update_resource_log_index(self.datacenter.name, self.last_log_index) is False:
return None
return last_update_time
def update_rack_resource(self, _host):
rack = _host.host_group
if rack is not None:
rack.last_update = time.time()
if isinstance(rack, HostGroup):
self.update_cluster_resource(rack)
def update_cluster_resource(self, _rack):
cluster = _rack.parent_resource
if cluster is not None:
cluster.last_update = time.time()
if isinstance(cluster, HostGroup):
self.datacenter.last_update = time.time()
def get_uuid(self, _h_uuid, _host_name):
host = self.hosts[_host_name]
return host.get_uuid(_h_uuid)
def add_vm_to_host(self, _host_name, _vm_id, _vcpus, _mem, _ldisk):
host = self.hosts[_host_name]
host.vm_list.append(_vm_id)
host.avail_vCPUs -= _vcpus
host.avail_mem_cap -= _mem
host.avail_local_disk_cap -= _ldisk
host.vCPUs_used += _vcpus
host.free_mem_mb -= _mem
host.free_disk_gb -= _ldisk
host.disk_available_least -= _ldisk
def remove_vm_by_h_uuid_from_host(self, _host_name, _h_uuid, _vcpus, _mem, _ldisk):
host = self.hosts[_host_name]
host.remove_vm_by_h_uuid(_h_uuid)
host.avail_vCPUs += _vcpus
host.avail_mem_cap += _mem
host.avail_local_disk_cap += _ldisk
host.vCPUs_used -= _vcpus
host.free_mem_mb += _mem
host.free_disk_gb += _ldisk
host.disk_available_least += _ldisk
def remove_vm_by_uuid_from_host(self, _host_name, _uuid, _vcpus, _mem, _ldisk):
host = self.hosts[_host_name]
host.remove_vm_by_uuid(_uuid)
host.avail_vCPUs += _vcpus
host.avail_mem_cap += _mem
host.avail_local_disk_cap += _ldisk
host.vCPUs_used -= _vcpus
host.free_mem_mb += _mem
host.free_disk_gb += _ldisk
host.disk_available_least += _ldisk
def add_vol_to_host(self, _host_name, _storage_name, _v_id, _disk):
host = self.hosts[_host_name]
host.volume_list.append(_v_id)
storage_host = self.storage_hosts[_storage_name]
storage_host.volume_list.append(_v_id)
storage_host.avail_disk_cap -= _disk
# NOTE: Assume the up-link of spine switch is not used except out-going from datacenter
# NOTE: What about peer-switches?
def deduct_bandwidth(self, _host_name, _placement_level, _bandwidth):
host = self.hosts[_host_name]
if _placement_level == "host":
self._deduct_host_bandwidth(host, _bandwidth)
elif _placement_level == "rack":
self._deduct_host_bandwidth(host, _bandwidth)
rack = host.host_group
if not isinstance(rack, Datacenter):
self._deduct_host_bandwidth(rack, _bandwidth)
elif _placement_level == "cluster":
self._deduct_host_bandwidth(host, _bandwidth)
rack = host.host_group
self._deduct_host_bandwidth(rack, _bandwidth)
cluster = rack.parent_resource
for _, s in cluster.switches.iteritems():
if s.switch_type == "spine":
for _, ul in s.up_links.iteritems():
ul.avail_nw_bandwidth -= _bandwidth
s.last_update = time.time()
def _deduct_host_bandwidth(self, _host, _bandwidth):
for _, hs in _host.switches.iteritems():
for _, ul in hs.up_links.iteritems():
ul.avail_nw_bandwidth -= _bandwidth
hs.last_update = time.time()
def update_host_resources(self, _hn, _st, _vcpus, _vcpus_used, _mem, _fmem, _ldisk, _fldisk, _avail_least):
updated = False
host = self.hosts[_hn]
if host.status != _st:
host.status = _st
self.logger.debug("Resource.update_host_resources: host status changed")
updated = True
if host.original_vCPUs != _vcpus or \
host.vCPUs_used != _vcpus_used:
self.logger.debug("Resource.update_host_resources: host cpu changed")
host.original_vCPUs = _vcpus
host.vCPUs_used = _vcpus_used
updated = True
if host.free_mem_mb != _fmem or \
host.original_mem_cap != _mem:
self.logger.debug("Resource.update_host_resources: host mem changed")
host.free_mem_mb = _fmem
host.original_mem_cap = _mem
updated = True
if host.free_disk_gb != _fldisk or \
host.original_local_disk_cap != _ldisk or \
host.disk_available_least != _avail_least:
self.logger.debug("Resource.update_host_resources: host disk changed")
host.free_disk_gb = _fldisk
host.original_local_disk_cap = _ldisk
host.disk_available_least = _avail_least
updated = True
if updated is True:
self.compute_avail_resources(_hn, host)
return updated
def update_host_time(self, _host_name):
host = self.hosts[_host_name]
host.last_update = time.time()
self.update_rack_resource(host)
def update_storage_time(self, _storage_name):
storage_host = self.storage_hosts[_storage_name]
storage_host.last_cap_update = time.time()
def add_logical_group(self, _host_name, _lg_name, _lg_type):
host = None
if _host_name in self.hosts.keys():
host = self.hosts[_host_name]
else:
host = self.host_groups[_host_name]
if host is not None:
if _lg_name not in self.logical_groups.keys():
logical_group = LogicalGroup(_lg_name)
logical_group.group_type = _lg_type
logical_group.last_update = time.time()
self.logical_groups[_lg_name] = logical_group
if _lg_name not in host.memberships.keys():
host.memberships[_lg_name] = self.logical_groups[_lg_name]
if isinstance(host, HostGroup):
host.last_update = time.time()
self.update_cluster_resource(host)
def add_vm_to_logical_groups(self, _host, _vm_id, _logical_groups_of_vm):
for lgk in _host.memberships.keys():
if lgk in _logical_groups_of_vm:
lg = self.logical_groups[lgk]
if isinstance(_host, Host):
if lg.add_vm_by_h_uuid(_vm_id, _host.name) is True:
lg.last_update = time.time()
elif isinstance(_host, HostGroup):
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if lgk.split(":")[0] == _host.host_type:
if lg.add_vm_by_h_uuid(_vm_id, _host.name) is True:
lg.last_update = time.time()
if isinstance(_host, Host) and _host.host_group is not None:
self.add_vm_to_logical_groups(_host.host_group, _vm_id, _logical_groups_of_vm)
elif isinstance(_host, HostGroup) and _host.parent_resource is not None:
self.add_vm_to_logical_groups(_host.parent_resource, _vm_id, _logical_groups_of_vm)
def remove_vm_by_h_uuid_from_logical_groups(self, _host, _h_uuid):
for lgk in _host.memberships.keys():
if lgk not in self.logical_groups.keys():
continue
lg = self.logical_groups[lgk]
if isinstance(_host, Host):
if lg.remove_vm_by_h_uuid(_h_uuid, _host.name) is True:
lg.last_update = time.time()
if _host.remove_membership(lg) is True:
_host.last_update = time.time()
elif isinstance(_host, HostGroup):
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if lgk.split(":")[0] == _host.host_type:
if lg.remove_vm_by_h_uuid(_h_uuid, _host.name) is True:
lg.last_update = time.time()
if _host.remove_membership(lg) is True:
_host.last_update = time.time()
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if len(lg.vm_list) == 0:
del self.logical_groups[lgk]
if isinstance(_host, Host) and _host.host_group is not None:
self.remove_vm_by_h_uuid_from_logical_groups(_host.host_group, _h_uuid)
elif isinstance(_host, HostGroup) and _host.parent_resource is not None:
self.remove_vm_by_h_uuid_from_logical_groups(_host.parent_resource, _h_uuid)
def remove_vm_by_uuid_from_logical_groups(self, _host, _uuid):
for lgk in _host.memberships.keys():
if lgk not in self.logical_groups.keys():
continue
lg = self.logical_groups[lgk]
if isinstance(_host, Host):
if lg.remove_vm_by_uuid(_uuid, _host.name) is True:
lg.last_update = time.time()
if _host.remove_membership(lg) is True:
_host.last_update = time.time()
elif isinstance(_host, HostGroup):
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if lgk.split(":")[0] == _host.host_type:
if lg.remove_vm_by_uuid(_uuid, _host.name) is True:
lg.last_update = time.time()
if _host.remove_membership(lg) is True:
_host.last_update = time.time()
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if len(lg.vm_list) == 0:
del self.logical_groups[lgk]
if isinstance(_host, Host) and _host.host_group is not None:
self.remove_vm_by_uuid_from_logical_groups(_host.host_group, _uuid)
elif isinstance(_host, HostGroup) and _host.parent_resource is not None:
self.remove_vm_by_uuid_from_logical_groups(_host.parent_resource, _uuid)
def clean_none_vms_from_logical_groups(self, _host):
for lgk in _host.memberships.keys():
if lgk not in self.logical_groups.keys():
continue
lg = self.logical_groups[lgk]
if isinstance(_host, Host):
if lg.clean_none_vms(_host.name) is True:
lg.last_update = time.time()
if _host.remove_membership(lg) is True:
_host.last_update = time.time()
elif isinstance(_host, HostGroup):
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if lgk.split(":")[0] == _host.host_type:
if lg.clean_none_vms(_host.name) is True:
lg.last_update = time.time()
if _host.remove_membership(lg) is True:
_host.last_update = time.time()
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if len(lg.vm_list) == 0:
del self.logical_groups[lgk]
if isinstance(_host, Host) and _host.host_group is not None:
self.clean_none_vms_from_logical_groups(_host.host_group)
elif isinstance(_host, HostGroup) and _host.parent_resource is not None:
self.clean_none_vms_from_logical_groups(_host.parent_resource)
def update_uuid_in_logical_groups(self, _h_uuid, _uuid, _host):
for lgk in _host.memberships.keys():
lg = self.logical_groups[lgk]
if isinstance(_host, Host):
if lg.update_uuid(_h_uuid, _uuid, _host.name) is True:
lg.last_update = time.time()
elif isinstance(_host, HostGroup):
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if lgk.split(":")[0] == _host.host_type:
if lg.update_uuid(_h_uuid, _uuid, _host.name) is True:
lg.last_update = time.time()
if isinstance(_host, Host) and _host.host_group is not None:
self.update_uuid_in_logical_groups(_h_uuid, _uuid, _host.host_group)
elif isinstance(_host, HostGroup) and _host.parent_resource is not None:
self.update_uuid_in_logical_groups(_h_uuid, _uuid, _host.parent_resource)
def update_h_uuid_in_logical_groups(self, _h_uuid, _uuid, _host):
for lgk in _host.memberships.keys():
lg = self.logical_groups[lgk]
if isinstance(_host, Host):
if lg.update_h_uuid(_h_uuid, _uuid, _host.name) is True:
lg.last_update = time.time()
elif isinstance(_host, HostGroup):
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
if lgk.split(":")[0] == _host.host_type:
if lg.update_h_uuid(_h_uuid, _uuid, _host.name) is True:
lg.last_update = time.time()
if isinstance(_host, Host) and _host.host_group is not None:
self.update_h_uuid_in_logical_groups(_h_uuid, _uuid, _host.host_group)
elif isinstance(_host, HostGroup) and _host.parent_resource is not None:
self.update_h_uuid_in_logical_groups(_h_uuid, _uuid, _host.parent_resource)
def compute_avail_resources(self, hk, host):
ram_allocation_ratio_list = []
cpu_allocation_ratio_list = []
disk_allocation_ratio_list = []
for _, lg in host.memberships.iteritems():
if lg.group_type == "AGGR":
if "ram_allocation_ratio" in lg.metadata.keys():
ram_allocation_ratio_list.append(float(lg.metadata["ram_allocation_ratio"]))
if "cpu_allocation_ratio" in lg.metadata.keys():
cpu_allocation_ratio_list.append(float(lg.metadata["cpu_allocation_ratio"]))
if "disk_allocation_ratio" in lg.metadata.keys():
disk_allocation_ratio_list.append(float(lg.metadata["disk_allocation_ratio"]))
ram_allocation_ratio = 1.0
if len(ram_allocation_ratio_list) > 0:
ram_allocation_ratio = min(ram_allocation_ratio_list)
else:
if self.config.default_ram_allocation_ratio > 0:
ram_allocation_ratio = self.config.default_ram_allocation_ratio
static_ram_standby_ratio = 0
if self.config.static_mem_standby_ratio > 0:
static_ram_standby_ratio = float(self.config.static_mem_standby_ratio) / float(100)
host.compute_avail_mem(ram_allocation_ratio, static_ram_standby_ratio)
self.logger.debug("Resource.compute_avail_resources: host (" + hk + ")'s total_mem = " +
str(host.mem_cap) + ", avail_mem = " + str(host.avail_mem_cap))
cpu_allocation_ratio = 1.0
if len(cpu_allocation_ratio_list) > 0:
cpu_allocation_ratio = min(cpu_allocation_ratio_list)
else:
if self.config.default_cpu_allocation_ratio > 0:
cpu_allocation_ratio = self.config.default_cpu_allocation_ratio
static_cpu_standby_ratio = 0
if self.config.static_cpu_standby_ratio > 0:
static_cpu_standby_ratio = float(self.config.static_cpu_standby_ratio) / float(100)
host.compute_avail_vCPUs(cpu_allocation_ratio, static_cpu_standby_ratio)
self.logger.debug("Resource.compute_avail_resources: host (" + hk + ")'s total_vCPUs = " +
str(host.vCPUs) + ", avail_vCPUs = " + str(host.avail_vCPUs))
disk_allocation_ratio = 1.0
if len(disk_allocation_ratio_list) > 0:
disk_allocation_ratio = min(disk_allocation_ratio_list)
else:
if self.config.default_disk_allocation_ratio > 0:
disk_allocation_ratio = self.config.default_disk_allocation_ratio
static_disk_standby_ratio = 0
if self.config.static_local_disk_standby_ratio > 0:
static_disk_standby_ratio = float(self.config.static_local_disk_standby_ratio) / float(100)
host.compute_avail_disk(disk_allocation_ratio, static_disk_standby_ratio)
self.logger.debug("Resource.compute_avail_resources: host (" + hk + ")'s total_local_disk = " +
str(host.local_disk_cap) + ", avail_local_disk = " + str(host.avail_local_disk_cap))
def get_flavor(self, _name):
flavor = None
if _name in self.flavors.keys():
if self.flavors[_name].status == "enabled":
flavor = self.flavors[_name]
return flavor
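The allocation-ratio handling in `compute_avail_resources` and the `Host.compute_avail_*` methods reduces to one formula: effective capacity = original capacity × overcommit ratio × (1 − standby ratio), and the available amount is what remains after current usage. A minimal standalone sketch of that math (function names are illustrative, not part of the codebase):

```python
def effective_capacity(original, overcommit_ratio, standby_ratio):
    # Capacity after applying the overcommit ratio and reserving a
    # standby fraction, as in Host.compute_avail_vCPUs/mem/disk.
    return original * overcommit_ratio * (1.0 - standby_ratio)


def available(original, used, overcommit_ratio=1.0, standby_ratio=0.0):
    # Remaining capacity after subtracting current usage.
    return effective_capacity(original, overcommit_ratio, standby_ratio) - used


# e.g. 32 physical vCPUs, 16x CPU overcommit, 10% standby, 100 vCPUs in use
print(available(32, 100, overcommit_ratio=16.0, standby_ratio=0.10))  # 360.8
```

When several aggregates define a ratio, the code above takes the minimum of the candidate ratios before applying this formula, falling back to the configured defaults.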


@ -0,0 +1,684 @@
#!/bin/python
# Modified: Sep. 27, 2016
from valet.engine.optimizer.app_manager.app_topology_base import LEVELS
class Datacenter(object):
def __init__(self, _name):
self.name = _name
self.region_code_list = []
self.status = "enabled"
self.memberships = {} # all available logical groups (e.g., aggregate) in the datacenter
self.vCPUs = 0
self.original_vCPUs = 0
self.avail_vCPUs = 0
self.mem_cap = 0 # MB
self.original_mem_cap = 0
self.avail_mem_cap = 0
self.local_disk_cap = 0 # GB, ephemeral
self.original_local_disk_cap = 0
self.avail_local_disk_cap = 0
self.root_switches = {}
self.storages = {}
self.resources = {}
        self.vm_list = []  # a list of placed vms, (orchestration_uuid, vm_name, physical_uuid)
self.volume_list = [] # a list of placed volumes
self.last_update = 0
self.last_link_update = 0
def init_resources(self):
self.vCPUs = 0
self.original_vCPUs = 0
self.avail_vCPUs = 0
self.mem_cap = 0 # MB
self.original_mem_cap = 0
self.avail_mem_cap = 0
self.local_disk_cap = 0 # GB, ephemeral
self.original_local_disk_cap = 0
self.avail_local_disk_cap = 0
def get_json_info(self):
membership_list = []
for lgk in self.memberships.keys():
membership_list.append(lgk)
switch_list = []
for sk in self.root_switches.keys():
switch_list.append(sk)
storage_list = []
for shk in self.storages.keys():
storage_list.append(shk)
child_list = []
for ck in self.resources.keys():
child_list.append(ck)
return {'status': self.status,
'name': self.name,
'region_code_list': self.region_code_list,
'membership_list': membership_list,
'vCPUs': self.vCPUs,
'original_vCPUs': self.original_vCPUs,
'avail_vCPUs': self.avail_vCPUs,
'mem': self.mem_cap,
'original_mem': self.original_mem_cap,
'avail_mem': self.avail_mem_cap,
'local_disk': self.local_disk_cap,
'original_local_disk': self.original_local_disk_cap,
'avail_local_disk': self.avail_local_disk_cap,
'switch_list': switch_list,
'storage_list': storage_list,
'children': child_list,
'vm_list': self.vm_list,
'volume_list': self.volume_list,
'last_update': self.last_update,
'last_link_update': self.last_link_update}
# data container for rack or cluster
class HostGroup(object):
def __init__(self, _id):
self.name = _id
        self.host_type = "rack"  # rack or cluster (e.g., power domain, zone)
self.status = "enabled"
self.memberships = {} # all available logical groups (e.g., aggregate) in this group
self.vCPUs = 0
self.original_vCPUs = 0
self.avail_vCPUs = 0
self.mem_cap = 0 # MB
self.original_mem_cap = 0
self.avail_mem_cap = 0
self.local_disk_cap = 0 # GB, ephemeral
self.original_local_disk_cap = 0
self.avail_local_disk_cap = 0
self.switches = {} # ToRs
self.storages = {}
self.parent_resource = None # e.g., datacenter
self.child_resources = {} # e.g., hosting servers
        self.vm_list = []  # a list of placed vms, (orchestration_uuid, vm_name, physical_uuid)
self.volume_list = [] # a list of placed volumes
self.last_update = 0
self.last_link_update = 0
def init_resources(self):
self.vCPUs = 0
self.original_vCPUs = 0
self.avail_vCPUs = 0
self.mem_cap = 0 # MB
self.original_mem_cap = 0
self.avail_mem_cap = 0
self.local_disk_cap = 0 # GB, ephemeral
self.original_local_disk_cap = 0
self.avail_local_disk_cap = 0
def init_memberships(self):
for lgk in self.memberships.keys():
lg = self.memberships[lgk]
if lg.group_type == "EX" or lg.group_type == "AFF" or lg.group_type == "DIV":
level = lg.name.split(":")[0]
if LEVELS.index(level) < LEVELS.index(self.host_type) or self.name not in lg.vms_per_host.keys():
del self.memberships[lgk]
else:
del self.memberships[lgk]
def remove_membership(self, _lg):
cleaned = False
if _lg.group_type == "EX" or _lg.group_type == "AFF" or _lg.group_type == "DIV":
if self.name not in _lg.vms_per_host.keys():
del self.memberships[_lg.name]
cleaned = True
return cleaned
def check_availability(self):
if self.status == "enabled":
return True
else:
return False
def get_json_info(self):
membership_list = []
for lgk in self.memberships.keys():
membership_list.append(lgk)
switch_list = []
for sk in self.switches.keys():
switch_list.append(sk)
storage_list = []
for shk in self.storages.keys():
storage_list.append(shk)
child_list = []
for ck in self.child_resources.keys():
child_list.append(ck)
return {'status': self.status,
'host_type': self.host_type,
'membership_list': membership_list,
'vCPUs': self.vCPUs,
'original_vCPUs': self.original_vCPUs,
'avail_vCPUs': self.avail_vCPUs,
'mem': self.mem_cap,
'original_mem': self.original_mem_cap,
'avail_mem': self.avail_mem_cap,
'local_disk': self.local_disk_cap,
'original_local_disk': self.original_local_disk_cap,
'avail_local_disk': self.avail_local_disk_cap,
'switch_list': switch_list,
'storage_list': storage_list,
'parent': self.parent_resource.name,
'children': child_list,
'vm_list': self.vm_list,
'volume_list': self.volume_list,
'last_update': self.last_update,
'last_link_update': self.last_link_update}
class Host(object):
def __init__(self, _name):
self.name = _name
        self.tag = []  # marks whether this host is synced by multiple sources
self.status = "enabled"
self.state = "up"
self.memberships = {} # logical group (e.g., aggregate) this hosting server is involved in
self.vCPUs = 0
self.original_vCPUs = 0
self.avail_vCPUs = 0
self.mem_cap = 0 # MB
self.original_mem_cap = 0
self.avail_mem_cap = 0
self.local_disk_cap = 0 # GB, ephemeral
self.original_local_disk_cap = 0
self.avail_local_disk_cap = 0
self.vCPUs_used = 0
self.free_mem_mb = 0
self.free_disk_gb = 0
self.disk_available_least = 0
self.switches = {} # leaf
self.storages = {}
self.host_group = None # e.g., rack
        self.vm_list = []  # a list of placed vms, (orchestration_uuid, vm_name, physical_uuid)
self.volume_list = [] # a list of placed volumes
self.last_update = 0
self.last_link_update = 0
def clean_memberships(self):
cleaned = False
for lgk in self.memberships.keys():
lg = self.memberships[lgk]
if self.name not in lg.vms_per_host.keys():
del self.memberships[lgk]
cleaned = True
return cleaned
def remove_membership(self, _lg):
cleaned = False
if _lg.group_type == "EX" or _lg.group_type == "AFF" or _lg.group_type == "DIV":
if self.name not in _lg.vms_per_host.keys():
del self.memberships[_lg.name]
cleaned = True
return cleaned
def check_availability(self):
if self.status == "enabled" and self.state == "up" and ("nova" in self.tag) and ("infra" in self.tag):
return True
else:
return False
def get_uuid(self, _h_uuid):
uuid = None
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
uuid = vm_id[2]
break
return uuid
def exist_vm_by_h_uuid(self, _h_uuid):
exist = False
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
exist = True
break
return exist
def exist_vm_by_uuid(self, _uuid):
exist = False
for vm_id in self.vm_list:
if vm_id[2] == _uuid:
exist = True
break
return exist
def remove_vm_by_h_uuid(self, _h_uuid):
success = False
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
self.vm_list.remove(vm_id)
success = True
break
return success
def remove_vm_by_uuid(self, _uuid):
success = False
for vm_id in self.vm_list:
if vm_id[2] == _uuid:
self.vm_list.remove(vm_id)
success = True
break
return success
def update_uuid(self, _h_uuid, _uuid):
success = False
vm_name = "none"
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
vm_name = vm_id[1]
self.vm_list.remove(vm_id)
success = True
break
if success is True:
vm_id = (_h_uuid, vm_name, _uuid)
self.vm_list.append(vm_id)
return success
def update_h_uuid(self, _h_uuid, _uuid):
success = False
vm_name = "none"
for vm_id in self.vm_list:
if vm_id[2] == _uuid:
vm_name = vm_id[1]
self.vm_list.remove(vm_id)
success = True
break
if success is True:
vm_id = (_h_uuid, vm_name, _uuid)
self.vm_list.append(vm_id)
return success
def compute_avail_vCPUs(self, _overcommit_ratio, _standby_ratio):
self.vCPUs = self.original_vCPUs * _overcommit_ratio * (1.0 - _standby_ratio)
self.avail_vCPUs = self.vCPUs - self.vCPUs_used
def compute_avail_mem(self, _overcommit_ratio, _standby_ratio):
self.mem_cap = self.original_mem_cap * _overcommit_ratio * (1.0 - _standby_ratio)
used_mem_mb = self.original_mem_cap - self.free_mem_mb
self.avail_mem_cap = self.mem_cap - used_mem_mb
def compute_avail_disk(self, _overcommit_ratio, _standby_ratio):
self.local_disk_cap = self.original_local_disk_cap * _overcommit_ratio * (1.0 - _standby_ratio)
free_disk_cap = self.free_disk_gb
if self.disk_available_least > 0:
free_disk_cap = min(self.free_disk_gb, self.disk_available_least)
used_disk_cap = self.original_local_disk_cap - free_disk_cap
self.avail_local_disk_cap = self.local_disk_cap - used_disk_cap
def get_json_info(self):
membership_list = []
for lgk in self.memberships.keys():
membership_list.append(lgk)
switch_list = []
for sk in self.switches.keys():
switch_list.append(sk)
storage_list = []
for shk in self.storages.keys():
storage_list.append(shk)
return {'tag': self.tag, 'status': self.status, 'state': self.state,
'membership_list': membership_list,
'vCPUs': self.vCPUs,
'original_vCPUs': self.original_vCPUs,
'avail_vCPUs': self.avail_vCPUs,
'mem': self.mem_cap,
'original_mem': self.original_mem_cap,
'avail_mem': self.avail_mem_cap,
'local_disk': self.local_disk_cap,
'original_local_disk': self.original_local_disk_cap,
'avail_local_disk': self.avail_local_disk_cap,
'vCPUs_used': self.vCPUs_used,
'free_mem_mb': self.free_mem_mb,
'free_disk_gb': self.free_disk_gb,
'disk_available_least': self.disk_available_least,
'switch_list': switch_list,
'storage_list': storage_list,
'parent': self.host_group.name,
'vm_list': self.vm_list,
'volume_list': self.volume_list,
'last_update': self.last_update,
'last_link_update': self.last_link_update}
class LogicalGroup(object):
def __init__(self, _name):
self.name = _name
self.group_type = "AGGR" # AGGR, AZ, INTG, EX, DIV, or AFF
self.status = "enabled"
self.metadata = {} # any metadata to be matched when placing nodes
        self.vm_list = []  # a list of placed vms, (orchestration_uuid, vm_name, physical_uuid)
self.volume_list = [] # a list of placed volumes
self.vms_per_host = {} # key = host_id, value = a list of placed vms
self.last_update = 0
def exist_vm_by_h_uuid(self, _h_uuid):
exist = False
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
exist = True
break
return exist
def exist_vm_by_uuid(self, _uuid):
exist = False
for vm_id in self.vm_list:
if vm_id[2] == _uuid:
exist = True
break
return exist
def update_uuid(self, _h_uuid, _uuid, _host_id):
success = False
vm_name = "none"
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
vm_name = vm_id[1]
self.vm_list.remove(vm_id)
success = True
break
if _host_id in self.vms_per_host.keys():
for host_vm_id in self.vms_per_host[_host_id]:
if host_vm_id[0] == _h_uuid:
self.vms_per_host[_host_id].remove(host_vm_id)
success = True
break
if success is True:
vm_id = (_h_uuid, vm_name, _uuid)
self.vm_list.append(vm_id)
if _host_id in self.vms_per_host.keys():
self.vms_per_host[_host_id].append(vm_id)
return success
def update_h_uuid(self, _h_uuid, _uuid, _host_id):
success = False
vm_name = "none"
for vm_id in self.vm_list:
if vm_id[2] == _uuid:
vm_name = vm_id[1]
self.vm_list.remove(vm_id)
success = True
break
if _host_id in self.vms_per_host.keys():
for host_vm_id in self.vms_per_host[_host_id]:
if host_vm_id[2] == _uuid:
self.vms_per_host[_host_id].remove(host_vm_id)
success = True
break
if success is True:
vm_id = (_h_uuid, vm_name, _uuid)
self.vm_list.append(vm_id)
if _host_id in self.vms_per_host.keys():
self.vms_per_host[_host_id].append(vm_id)
return success
def add_vm_by_h_uuid(self, _vm_id, _host_id):
success = False
if self.exist_vm_by_h_uuid(_vm_id[0]) is False:
self.vm_list.append(_vm_id)
if self.group_type == "EX" or self.group_type == "AFF" or self.group_type == "DIV":
if _host_id not in self.vms_per_host.keys():
self.vms_per_host[_host_id] = []
self.vms_per_host[_host_id].append(_vm_id)
success = True
return success
def remove_vm_by_h_uuid(self, _h_uuid, _host_id):
success = False
for vm_id in self.vm_list:
if vm_id[0] == _h_uuid:
self.vm_list.remove(vm_id)
success = True
break
if _host_id in self.vms_per_host.keys():
for host_vm_id in self.vms_per_host[_host_id]:
if host_vm_id[0] == _h_uuid:
self.vms_per_host[_host_id].remove(host_vm_id)
success = True
break
if self.group_type == "EX" or self.group_type == "AFF" or self.group_type == "DIV":
if (_host_id in self.vms_per_host.keys()) and len(self.vms_per_host[_host_id]) == 0:
del self.vms_per_host[_host_id]
return success
def remove_vm_by_uuid(self, _uuid, _host_id):
success = False
for vm_id in self.vm_list:
if vm_id[2] == _uuid:
self.vm_list.remove(vm_id)
success = True
break
if _host_id in self.vms_per_host.keys():
for host_vm_id in self.vms_per_host[_host_id]:
if host_vm_id[2] == _uuid:
self.vms_per_host[_host_id].remove(host_vm_id)
success = True
break
if self.group_type == "EX" or self.group_type == "AFF" or self.group_type == "DIV":
if (_host_id in self.vms_per_host.keys()) and len(self.vms_per_host[_host_id]) == 0:
del self.vms_per_host[_host_id]
return success
def clean_none_vms(self, _host_id):
success = False
for vm_id in self.vm_list:
if vm_id[2] == "none":
self.vm_list.remove(vm_id)
success = True
if _host_id in self.vms_per_host.keys():
for vm_id in self.vms_per_host[_host_id]:
if vm_id[2] == "none":
self.vms_per_host[_host_id].remove(vm_id)
success = True
if self.group_type == "EX" or self.group_type == "AFF" or self.group_type == "DIV":
if (_host_id in self.vms_per_host.keys()) and len(self.vms_per_host[_host_id]) == 0:
del self.vms_per_host[_host_id]
return success
def get_json_info(self):
return {'status': self.status,
'group_type': self.group_type,
'metadata': self.metadata,
'vm_list': self.vm_list,
'volume_list': self.volume_list,
'vms_per_host': self.vms_per_host,
'last_update': self.last_update}
class Switch(object):
def __init__(self, _switch_id):
self.name = _switch_id
self.switch_type = "ToR" # root, spine, ToR, or leaf
self.status = "enabled"
self.up_links = {}
self.down_links = {} # currently, not used
self.peer_links = {}
self.last_update = 0
def get_json_info(self):
ulinks = {}
for ulk, ul in self.up_links.iteritems():
ulinks[ulk] = ul.get_json_info()
plinks = {}
for plk, pl in self.peer_links.iteritems():
plinks[plk] = pl.get_json_info()
return {'status': self.status,
'switch_type': self.switch_type,
'up_links': ulinks,
'peer_links': plinks,
'last_update': self.last_update}
class Link(object):
def __init__(self, _name):
self.name = _name # format: source + "-" + target
        self.resource = None  # the switch being connected to
self.nw_bandwidth = 0 # Mbps
self.avail_nw_bandwidth = 0
def get_json_info(self):
return {'resource': self.resource.name,
'bandwidth': self.nw_bandwidth,
'avail_bandwidth': self.avail_nw_bandwidth}
class StorageHost(object):
def __init__(self, _name):
self.name = _name
self.storage_class = None # tiering, e.g., platinum, gold, silver
self.status = "enabled"
self.host_list = []
self.disk_cap = 0 # GB
self.avail_disk_cap = 0
self.volume_list = [] # list of volume names placed in this host
self.last_update = 0
self.last_cap_update = 0
def get_json_info(self):
return {'status': self.status,
'class': self.storage_class,
'host_list': self.host_list,
'disk': self.disk_cap,
'avail_disk': self.avail_disk_cap,
'volume_list': self.volume_list,
'last_update': self.last_update,
'last_cap_update': self.last_cap_update}
class Flavor(object):
def __init__(self, _name):
self.name = _name
self.flavor_id = None
self.status = "enabled"
self.vCPUs = 0
self.mem_cap = 0 # MB
self.disk_cap = 0 # including ephemeral (GB) and swap (MB)
self.extra_specs = {}
self.last_update = 0
def get_json_info(self):
return {'status': self.status,
'flavor_id': self.flavor_id,
'vCPUs': self.vCPUs,
'mem': self.mem_cap,
'disk': self.disk_cap,
'extra_specs': self.extra_specs,
'last_update': self.last_update}
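Placements in the classes above are tracked as 3-tuples `(orchestration_uuid, vm_name, physical_uuid)`, and the various `update_uuid`/`update_h_uuid` methods rewrite one element of a matching tuple while leaving the rest intact. A standalone sketch of that bookkeeping (the helper name is hypothetical):

```python
def update_physical_uuid(vm_list, h_uuid, uuid):
    # Find the entry keyed by the orchestration uuid and replace its
    # physical uuid, mirroring Host.update_uuid / LogicalGroup.update_uuid.
    for vm_id in vm_list:
        if vm_id[0] == h_uuid:
            vm_list.remove(vm_id)
            vm_list.append((h_uuid, vm_id[1], uuid))
            return True
    return False


vms = [("h-1", "web-0", "none")]
print(update_physical_uuid(vms, "h-1", "phys-123"))  # True
print(vms)
```

This reflects the pattern above where the physical uuid starts as `"none"` and is filled in once the compute layer reports it, with the orchestration uuid serving as the stable key.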
Some files were not shown because too many files have changed in this diff