Retire the Congress project

The TC recently worked out the criteria for determining when an
OpenStack project should be retired.  When no PTL candidate came
forward for the Congress project, that triggered a TC review of the
project's health per [1], and the TC determined [2] that development
work on the project has ceased.  This decision was announced on the
openstack-discuss mailing list in April 2020 [3].

This commit retires the repository per the process for governance
removal in the Victoria cycle as specified in the Mandatory Repository
Retirement resolution [4] and detailed in the infra manual [5].

Should interest in developing Congress as part of OpenStack revive,
please revert this commit to have the project rejoin the list of active
projects.

The community wishes to express its thanks and appreciation to all
who have contributed to the Congress project over the years.

[1] https://governance.openstack.org/tc/reference/dropping-projects.html
[2] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2020-04-20T15:36:59
[3] http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014292.html
[4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html
[5] https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#retiring-a-project

Change-Id: I21c9ab9820f78cf76adf11c5f0591c60f76372a8
Nate Johnston 2020-04-21 17:03:31 -04:00 committed by Andreas Jaeger
parent 85243abf63
commit bba805af02
748 changed files with 8 additions and 146934 deletions

.coveragerc

@@ -1,7 +0,0 @@
[run]
branch = True
source = congress
omit = congress/tests/*
[report]
ignore_errors = True

.gitignore

@@ -1,66 +0,0 @@
# Congress build/runtime artifacts
Congress.tokens
subunit.log
congress/tests/policy_engines/snapshot/test
congress/tests/policy/snapshot/test
/doc/html
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
/lib
/lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
.stestr/
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
doc/source/_static/
doc/source/api/
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.sw?
# IDEs
.idea
# Files generated by tests
congress/tests/etc/keys

.mailmap

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

.stestr.conf

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./congress/tests
top_dir=./

.zuul.yaml

@@ -1,163 +0,0 @@
- job:
    name: congress-tempest-base
    parent: devstack-tempest
    description: |
      Congress devstack tempest tests job
    irrelevant-files: &base_irrelevant_files
      - ^.*\.rst$
      - ^doc/.*$
      - ^congress/tests/.*$
      - ^releasenotes/.*$
    required-projects: &base_required_projects
      - name: openstack/devstack-gate
      - name: openstack/aodh
      - name: openstack/python-aodhclient
      - name: openstack/congress
      - name: openstack/congress-dashboard
      - name: openstack/congress-tempest-plugin
      - name: openstack/python-congressclient
      - name: openstack/murano
      - name: openstack/murano-dashboard
      - name: openstack/python-muranoclient
      - name: openstack/mistral
      - name: openstack/python-mistralclient
      - name: openstack/heat
      - name: openstack/python-heatclient
      - name: openstack/monasca-agent
      - name: openstack/monasca-api
      - name: openstack/monasca-common
      - name: openstack/monasca-grafana-datasource
      - name: openstack/monasca-notification
      - name: openstack/monasca-persister
      - name: openstack/monasca-statsd
      - name: openstack/monasca-thresh
      - name: openstack/monasca-ui
      - name: openstack/python-monascaclient
    timeout: 6000
    vars: &base_vars
      devstack_plugins:
        congress: https://opendev.org/openstack/congress
        heat: https://opendev.org/openstack/heat
        neutron: https://opendev.org/openstack/neutron
      devstack_services:
        tempest: true
        neutron-qos: true
        horizon: false
      tempest_concurrency: 1
      tox_envlist: all
      tempest_test_regex: congress_tempest_plugin.*
      devstack_localrc:
        LIBS_FROM_GIT: python-congressclient
        SERVICE_TIMEOUT: 120  # default too short for this job
        TEMPEST_PLUGINS: '"/opt/stack/congress-tempest-plugin"'
        CONGRESS_MULTIPROCESS_DEPLOYMENT: true
        CONGRESS_EXPOSE_ENCRYPTION_KEY_FOR_TEST: true
        ENABLE_CONGRESS_Z3: true
        USE_Z3_RELEASE: 4.7.1
        USE_PYTHON3: True

- job:
    name: congress-tempest-ipv6-only
    parent: devstack-tempest-ipv6
    description: |
      Congress devstack tempest tests job for IPv6-only deployment
    irrelevant-files: *base_irrelevant_files
    required-projects: *base_required_projects
    timeout: 6000
    vars:
      <<: *base_vars
      tempest_test_regex: '(^congress_tempest_plugin.*)(\[.*\bsmoke\b.*\])'

- job:
    name: congress-tempest-py3
    parent: congress-tempest-base
    vars:
      devstack_plugins:
        murano: https://opendev.org/openstack/murano
      devstack_localrc:
        USE_PYTHON3: true

- job:
    name: congress-tempest-replicated
    parent: congress-tempest-base
    voting: false
    vars:
      devstack_plugins:
        murano: https://opendev.org/openstack/murano
      devstack_localrc:
        CONGRESS_REPLICATED: true

- job:
    name: congress-tempest-replicated-mysql
    parent: congress-tempest-replicated
    vars:
      database: mysql

- job:
    name: congress-tempest-replicated-postgresql
    parent: congress-tempest-replicated
    voting: false
    vars:
      devstack_services:
        mysql: false
        postgresql: true

- job:
    name: congress-tempest-py3-mysql
    parent: congress-tempest-py3
    vars:
      database: mysql

- job:
    name: congress-tempest-py3-JsonIngester
    parent: congress-tempest-base
    voting: false
    vars:
      devstack_localrc:
        ENABLE_CONGRESS_JSON: true

- job:
    name: congress-tempest-py3-postgresql
    parent: congress-tempest-base
    voting: false
    vars:
      devstack_services:
        mysql: false
        postgresql: true

- project:
    templates:
      - check-requirements
      - openstack-cover-jobs
      - openstack-lower-constraints-jobs
      - openstack-python3-ussuri-jobs
      - release-notes-jobs-python3
      - publish-openstack-docs-pti
      - periodic-stable-jobs
    check:
      jobs:
        - congress-tempest-py3-mysql
        - congress-tempest-replicated-postgresql
        - congress-tempest-py3-JsonIngester
        - congress-tempest-ipv6-only
        # Note: the above jobs most likely provides sufficient coverage
        # - congress-tempest-py2-postgresql
        # - congress-tempest-py3-postgresql
        # - congress-tempest-replicated-mysql
        # TripleO jobs that deploy Congress.
        # Note we don't use a project-template here, so it's easier
        # to disable voting on one specific job if things go wrong.
        # tripleo-ci-centos-7-scenario001-multinode-oooq will only
        # run on stable/pike while the -container will run in Queens
        # and beyond.
        # If you need any support to debug these jobs in case of
        # failures, please reach us on #tripleo IRC channel.
        # temporarily disable tripleO check until faster single-node job
        # is available
        # - tripleo-ci-centos-7-scenario007-multinode-oooq-container:
        #     voting: false
    gate:
      queue: congress
      jobs:
        - congress-tempest-py3-mysql
        - congress-tempest-ipv6-only

CONTRIBUTING.rst

@@ -1,21 +0,0 @@
============
Contributing
============

The Congress wiki page is the authoritative starting point.

   https://wiki.openstack.org/wiki/Congress

If you would like to contribute to the development of any OpenStack
project including Congress, you must follow the steps in this page:

   https://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/congress

HACKING.rst

@@ -1,5 +0,0 @@
===========================
Congress style commandments
===========================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

Makefile

@@ -1,12 +0,0 @@
TOPDIR=$(CURDIR)
SRCDIR=$(TOPDIR)/congress

all: docs

clean:
	find . -name '*.pyc' -exec rm -f {} \;
	rm -Rf $(TOPDIR)/doc/html/*

docs: $(TOPDIR)/doc/source/*.rst
	sphinx-build -b html $(TOPDIR)/doc/source $(TOPDIR)/doc/html

README.rst

@@ -2,32 +2,13 @@
Welcome to Congress
===================

Congress is an open policy framework for the cloud.  With Congress, a
cloud operator can declare, monitor, enforce, and audit "policy" in a
heterogeneous cloud environment.  Congress gets inputs from a cloud's
various cloud services; for example in OpenStack, Congress fetches
information about VMs from Nova, and network state from Neutron, etc.
Congress then feeds input data from those services into its policy engine
where Congress verifies that the cloud's actual state abides by the cloud
operator's policies.  Congress is designed to work with **any policy** and
**any cloud service**.

This project is no longer maintained.

* Free software: Apache license
* Documentation: https://docs.openstack.org/congress/latest/
* Wiki: https://wiki.openstack.org/wiki/Congress
* Source: https://github.com/openstack/Congress
* Bugs: https://bugs.launchpad.net/congress
* Blueprints: https://blueprints.launchpad.net/congress
* Release notes: https://docs.openstack.org/releasenotes/congress
* Admin guide: https://docs.openstack.org/congress/latest/admin/index.html
* Contributing: https://docs.openstack.org/congress/latest/contributor/index.html
* REST Client: https://opendev.org/openstack/python-congressclient

The contents of this repository are still available in the Git
source code management system.  To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

.. image:: https://governance.openstack.org/tc/badges/congress.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

Installing Congress
===================

Please refer to the
`installation guide <https://docs.openstack.org/congress/latest/install/>`_

For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

(symlink to the bundled antlr3 Python runtime)

@@ -1 +0,0 @@
../../thirdparty/antlr3-antlr-3.5/runtime/Python/antlr3/

(symlink to the bundled antlr3 Python3 runtime)

@@ -1 +0,0 @@
../../thirdparty/antlr3-antlr-3.5/runtime/Python3/antlr3/

babel.cfg

@@ -1 +0,0 @@
[python: **.py]

bin/congress-server

@@ -1,38 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import os
import sys

# If ../congress/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(__file__),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir,
                               'congress',
                               '__init__.py')):
    sys.path.insert(0, possible_topdir)

# set command line config options
from congress.common import config
config.init(sys.argv[1:])

from congress.server import congress_server

if __name__ == '__main__':
    congress_server.main()

bindep.txt

@@ -1,14 +0,0 @@
python-all-dev
python3-all-dev
libvirt-dev
libxml2-dev
libxslt1-dev
# libmysqlclient-dev
libpq-dev [platform:dpkg]
libsqlite3-dev
libffi-dev
# mysql-client
# mysql-server
# postgresql
# postgresql-client
rabbitmq-server

congress/__init__.py

@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

import gettext

import pbr.version

gettext.install('congress')

__version__ = pbr.version.VersionInfo(
    'openstack-congress').version_string()

congress/api/action_model.py

@@ -1,50 +0,0 @@
# Copyright (c) 2015 Intel, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception


class ActionsModel(base.APIModel):
    """Model for handling API requests about Actions."""

    # Note(dse2): blocking function
    def get_items(self, params, context=None):
        """Retrieve items from this model.

        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: context: Key-values providing frame of reference of request
        :returns: A dict containing at least a 'actions' key whose value is a
                  list of items in this model.
        """
        # Note: blocking call
        caller, source_id = api_utils.get_id_from_context(context)
        try:
            rpc_args = {'source_id': source_id}
            # Note(dse2): blocking call
            return self.invoke_rpc(caller, 'get_actions', rpc_args)
        except exception.CongressException as e:
            raise webservice.DataModelException(
                exception.NotFound.code, str(e),
                http_status_code=exception.NotFound.code)

congress/api/api_utils.py

@@ -1,52 +0,0 @@
# Copyright (c) 2015 NTT All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from oslo_log import log as logging

from congress.api import base
from congress.api import webservice
from congress.db import datasources as db_datasources

LOG = logging.getLogger(__name__)


def create_table_dict(tablename, schema):
    cols = [{'name': x['name'], 'description': x['desc']}
            if isinstance(x, dict)
            else {'name': x, 'description': 'None'}
            for x in schema[tablename]]
    return {'table_id': tablename,
            'columns': cols}


# Note(thread-safety): blocking function
def get_id_from_context(context):
    if 'ds_id' in context:
        # Note(thread-safety): blocking call
        ds_name = db_datasources.get_datasource_name(context.get('ds_id'))
        return ds_name, context.get('ds_id')
    elif 'policy_id' in context:
        return base.ENGINE_SERVICE_ID, context.get('policy_id')
    else:
        msg = ("Internal error: context %s should have included "
               "either ds_id or policy_id" % str(context))
        try:  # Py3: ensure LOG.exception is inside except
            raise webservice.DataModelException('404', msg)
        except webservice.DataModelException:
            LOG.exception(msg)
            raise
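
The context dispatch above reduces to a small standalone function.  A
minimal sketch, with a hypothetical lookup_datasource_name() standing in
for the db_datasources call:

    ENGINE_SERVICE_ID = '__engine'  # same constant as congress.api.base

    def lookup_datasource_name(ds_id):
        # hypothetical stand-in for db_datasources.get_datasource_name()
        return 'nova' if ds_id == 'ds-123' else None

    def get_id_from_context(context):
        if 'ds_id' in context:
            return lookup_datasource_name(context['ds_id']), context['ds_id']
        if 'policy_id' in context:
            return ENGINE_SERVICE_ID, context['policy_id']
        raise ValueError('context must include ds_id or policy_id')

    assert get_id_from_context({'ds_id': 'ds-123'}) == ('nova', 'ds-123')
    assert get_id_from_context({'policy_id': 'p1'}) == ('__engine', 'p1')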

congress/api/application.py

@@ -1,107 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

import traceback

from oslo_log import log as logging
import webob
import webob.dec

from congress.api import webservice
from congress.dse2 import data_service

LOG = logging.getLogger(__name__)

API_SERVICE_NAME = '__api'


class ApiApplication(object):
    """An API web application that binds REST resources to a wsgi server.

    This indirection between the wsgi server and REST resources facilitates
    binding the same resource tree to multiple endpoints (e.g. HTTP/HTTPS).
    """

    def __init__(self, resource_mgr):
        self.resource_mgr = resource_mgr

    @webob.dec.wsgify(RequestClass=webob.Request)
    def __call__(self, request):
        try:
            handler = self.resource_mgr.get_handler(request)
            if handler:
                msg = _("Handling request '%(meth)s %(path)s' with %(hndlr)s")
                LOG.info(msg, {"meth": request.method, "path": request.path,
                               "hndlr": handler})
                # TODO(pballand): validation
                response = handler.handle_request(request)
            else:
                response = webservice.NOT_FOUND_RESPONSE
        except webservice.DataModelException as e:
            # Error raised based on invalid user input
            LOG.exception("ApiApplication: found DataModelException")
            response = e.rest_response()
        except Exception as e:
            # Unexpected error raised by API framework or data model
            msg = _("Exception caught for request: %s")
            LOG.error(msg, request)
            LOG.error(traceback.format_exc())
            response = webservice.INTERNAL_ERROR_RESPONSE
        return response


class ResourceManager(data_service.DataService):
    """A container for REST API resources.

    This container is meant to be called from one or more wsgi servers/ports.

    Attributes:
        handlers: An array of API resource handlers for registered resources.
    """

    def __init__(self):
        self.handlers = []
        super(ResourceManager, self).__init__(API_SERVICE_NAME)

    def register_handler(self, handler, search_index=None):
        """Register a new resource handler.

        :param: handler: The resource handler to register.
        :param: search_index: Priority of resource handler to resolve cases
                where a request matches multiple handlers.
        """
        if search_index is not None:
            self.handlers.insert(search_index, handler)
        else:
            self.handlers.append(handler)
        msg = _("Registered API handler: %s")
        LOG.info(msg, handler)

    def get_handler(self, request):
        """Find a handler for a REST request.

        :param: request: A webob request object.
        :returns: A handler instance or None.
        """
        for h in self.handlers:
            if h.handles_request(request):
                return h
        return None
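
get_handler is a first-match scan over self.handlers, so register_handler's
search_index is effectively a priority slot.  A minimal sketch of that
dispatch, using a hypothetical prefix-matching handler in place of webob
and the congress handler classes:

    class PrefixHandler(object):
        # hypothetical stand-in for congress's Element/CollectionHandler
        def __init__(self, prefix):
            self.prefix = prefix

        def handles_request(self, path):
            return path.startswith(self.prefix)

    handlers = []

    def register_handler(handler, search_index=None):
        # lower search_index means earlier match, as in ResourceManager
        if search_index is not None:
            handlers.insert(search_index, handler)
        else:
            handlers.append(handler)

    def get_handler(path):
        return next((h for h in handlers if h.handles_request(path)), None)

    register_handler(PrefixHandler('/v1/policies'))
    register_handler(PrefixHandler('/v1'), search_index=0)
    assert get_handler('/v1/policies/p1').prefix == '/v1'  # index 0 wins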

congress/api/base.py

@@ -1,43 +0,0 @@
# Copyright (c) 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
""" Base class for all API models."""
from __future__ import absolute_import
from oslo_config import cfg
ENGINE_SERVICE_ID = '__engine'
LIBRARY_SERVICE_ID = '__library'
DS_MANAGER_SERVICE_ID = '_ds_manager'
JSON_DS_SERVICE_PREFIX = '__json__'
class APIModel(object):
"""Base Class for handling API requests."""
def __init__(self, name, bus=None):
self.name = name
self.dse_long_timeout = cfg.CONF.dse.long_timeout
self.action_retry_timeout = cfg.CONF.dse.execute_action_retry_timeout
self.bus = bus
# Note(thread-safety): blocking function
def invoke_rpc(self, caller, name, kwargs, timeout=None):
local = (caller is ENGINE_SERVICE_ID and
self.bus.node.service_object(
ENGINE_SERVICE_ID) is not None)
return self.bus.rpc(
caller, name, kwargs, timeout=timeout, local=local)
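
The local flag in invoke_rpc bypasses the message bus whenever the caller
is the policy engine and the engine is hosted in the same process.  A rough
sketch of that decision, with hypothetical bus and service stand-ins:

    ENGINE_SERVICE_ID = '__engine'

    class FakeBus(object):
        # hypothetical stand-in for the DSE2 bus held by APIModel
        def __init__(self, local_services):
            self.local_services = local_services

        def service_object(self, name):
            return self.local_services.get(name)

        def rpc(self, caller, name, kwargs, local=False):
            route = 'in-process' if local else 'message bus'
            return '%s: %s.%s(%r)' % (route, caller, name, kwargs)

    bus = FakeBus({ENGINE_SERVICE_ID: object()})

    def invoke_rpc(caller, name, kwargs):
        local = (caller is ENGINE_SERVICE_ID and
                 bus.service_object(ENGINE_SERVICE_ID) is not None)
        return bus.rpc(caller, name, kwargs, local=local)

    print(invoke_rpc(ENGINE_SERVICE_ID, 'simulate', {}))  # in-process
    print(invoke_rpc('nova', 'get_actions', {}))          # message bus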

congress/api/datasource_model.py

@@ -1,165 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from oslo_log import log as logging
from oslo_serialization import jsonutils as json

from congress.api import api_utils
from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception

LOG = logging.getLogger(__name__)


class DatasourceModel(base.APIModel):
    """Model for handling API requests about Datasources."""

    # Note(thread-safety): blocking function
    def get_items(self, params, context=None):
        """Get items in model.

        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: context: Key-values providing frame of reference of request
        :returns: A dict containing at least a 'results' key whose value is
                  a list of items in the model. Additional keys set in the
                  dict will also be rendered for the user.
        """
        # Note(thread-safety): blocking call
        results = self.bus.get_datasources(filter_secret=True)
        # Check that running datasources match the datasources in the
        # database since this is going to tell the client about those
        # datasources, and the running datasources should match the
        # datasources we show the client.
        return {"results": results}

    def get_item(self, id_, params, context=None):
        """Get datasource corresponding to id\_ in model."""
        try:
            datasource = self.bus.get_datasource(id_)
            return datasource
        except exception.DatasourceNotFound as e:
            LOG.debug("Datasource '%s' not found", id_)
            raise webservice.DataModelException(e.code, str(e),
                                                http_status_code=e.code)

    # Note(thread-safety): blocking function
    def add_item(self, item, params, id_=None, context=None):
        """Add item to model.

        :param: item: The item to add to the model
        :param: id\_: The ID of the item, or None if an ID should be generated
        :param: context: Key-values providing frame of reference of request
        :returns: Tuple of (ID, newly_created_item)
        :raises KeyError: ID already exists.
        """
        obj = None
        try:
            # Note(thread-safety): blocking call
            obj = self.invoke_rpc(base.DS_MANAGER_SERVICE_ID,
                                  'add_datasource',
                                  {'items': item},
                                  timeout=self.dse_long_timeout)
            # Let PE synchronizer take care of creating the policy.
        except (exception.BadConfig,
                exception.DatasourceNameInUse,
                exception.DriverNotFound,
                exception.DatasourceCreationError) as e:
            LOG.debug(_("Datasource creation failed."))
            raise webservice.DataModelException(
                e.code, webservice.original_msg(e), http_status_code=e.code)
        except exception.RpcTargetNotFound as e:
            LOG.debug("Datasource creation failed.")
            LOG.warning(webservice.original_msg(e))
            raise webservice.DataModelException(
                e.code, webservice.original_msg(e), http_status_code=503)
        return (obj['id'], obj)

    # Note(thread-safety): blocking function
    def delete_item(self, id_, params, context=None):
        ds_id = context.get('ds_id')
        try:
            # Note(thread-safety): blocking call
            datasource = self.bus.get_datasource(ds_id)
            # FIXME(thread-safety):
            #   by the time greenthread resumes, the
            #   returned datasource name could refer to a totally different
            #   datasource, causing the rest of this code to unintentionally
            #   delete a different datasource
            #   Fix: check UUID of datasource before operating.
            #   Abort if mismatch
            self.invoke_rpc(base.DS_MANAGER_SERVICE_ID,
                            'delete_datasource',
                            {'datasource': datasource},
                            timeout=self.dse_long_timeout)
            # Let PE synchronizer takes care of deleting policy
        except (exception.DatasourceNotFound,
                exception.DanglingReference) as e:
            LOG.debug("Datasource deletion failed.")
            raise webservice.DataModelException(e.code, str(e))
        except exception.RpcTargetNotFound as e:
            LOG.debug("Datasource deletion failed.")
            LOG.warning(webservice.original_msg(e))
            raise webservice.DataModelException(
                e.code, webservice.original_msg(e), http_status_code=503)

    # Note(thread-safety): blocking function
    def request_refresh_action(self, params, context=None, request=None):
        caller, source_id = api_utils.get_id_from_context(context)
        try:
            args = {'source_id': source_id}
            # Note(thread-safety): blocking call
            self.invoke_rpc(caller, 'request_refresh', args)
        except exception.CongressException as e:
            LOG.debug(e)
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def execute_action(self, params, context=None, request=None):
        "Execute the action."
        service = context.get('ds_id')
        body = json.loads(request.body)
        action = body.get('name')
        action_args = body.get('args', {})
        if (not isinstance(action_args, dict)):
            (num, desc) = error_codes.get('execute_action_args_syntax')
            raise webservice.DataModelException(num, desc)
        try:
            args = {'service_name': service, 'action': action,
                    'action_args': action_args}
            # TODO(ekcs): perhaps keep execution synchronous when explicitly
            #   called via API
            # Note(thread-safety): blocking call
            self.invoke_rpc(base.ENGINE_SERVICE_ID, 'execute_action', args)
        except exception.PolicyException as e:
            (num, desc) = error_codes.get('execute_error')
            raise webservice.DataModelException(num, desc + "::" + str(e))
        return {}
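
execute_action reads the action name and its arguments from the JSON
request body and insists that 'args' be a dict; per the
execute_action_args_syntax error text in error_codes.py, the expected
shape is positional plus named arguments.  A plausible body, with
hypothetical action and ID values:

    import json

    body = {
        "name": "disconnectNetwork",  # hypothetical driver action
        "args": {
            "positional": ["server-uuid-1234"],
            "named": {"network_id": "net-uuid-5678"},
        },
    }
    print(json.dumps(body, indent=2))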

congress/api/error_codes.py

@@ -1,123 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

try:
    # For Python 3
    import http.client as httplib
except ImportError:
    import httplib

# TODO(thinrichs): move this out of api directory.  Could go into
# the exceptions.py file.  The HTTP error codes may make these errors
# look like they are only useful for the API, but actually they are
# just encoding the classification of the error using http codes.
# To make this more explicit, we could have 2 dictionaries where
# one maps an error name (readable for programmers) to an error number
# and another dictionary that maps an error name/number to the HTTP
# classification.  But then it would be easy for a programmer when
# adding a new error to forget one or the other.

# name of unknown error
UNKNOWN = 'unknown'

# dict mapping error name to (<error id>, <description>, <http error code>)
errors = {}
errors[UNKNOWN] = (
    1000, "Unknown error", httplib.BAD_REQUEST)
errors['add_item_id'] = (
    1001, "Add item does not support user-chosen ID", httplib.BAD_REQUEST)
errors['rule_syntax'] = (
    1002, "Syntax error for rule", httplib.BAD_REQUEST)
errors['multiple_rules'] = (
    1003, "Received string representing more than 1 rule",
    httplib.BAD_REQUEST)
errors['incomplete_simulate_args'] = (
    1004, "Simulate requires parameters: query, sequence, action_policy",
    httplib.BAD_REQUEST)
errors['simulate_without_policy'] = (
    1005, "Simulate must be told which policy evaluate the query on",
    httplib.BAD_REQUEST)
errors['sequence_syntax'] = (
    1006, "Syntax error in sequence", httplib.BAD_REQUEST)
errors['simulate_error'] = (
    1007, "Error in simulate procedure", httplib.INTERNAL_SERVER_ERROR)
errors['rule_already_exists'] = (
    1008, "Rule already exists", httplib.CONFLICT)
errors['schema_get_item_id'] = (
    1009, "Get item for schema does not support user-chosen ID",
    httplib.BAD_REQUEST)
errors['policy_name_must_be_provided'] = (
    1010, "A name must be provided when creating a policy",
    httplib.BAD_REQUEST)
errors['no_policy_update_owner'] = (
    1012, "The policy owner_id cannot be updated",
    httplib.BAD_REQUEST)
errors['no_policy_update_kind'] = (
    1013, "The policy kind cannot be updated",
    httplib.BAD_REQUEST)
errors['failed_to_create_policy'] = (
    1014, "A new policy could not be created",
    httplib.INTERNAL_SERVER_ERROR)
errors['policy_id_must_not_be_provided'] = (
    1015, "An ID may not be provided when creating a policy",
    httplib.BAD_REQUEST)
errors['execute_error'] = (
    1016, "Error in execution procedure", httplib.INTERNAL_SERVER_ERROR)
errors['service_action_syntax'] = (
    1017, "Incorrect action syntax. Requires: <service>:<action>",
    httplib.BAD_REQUEST)
errors['execute_action_args_syntax'] = (
    1018, "Incorrect argument syntax. "
    "Requires: {'positional': [<args>], 'named': {<key>:<value>,}}",
    httplib.BAD_REQUEST)
errors['rule_not_permitted'] = (
    1019, "Rules not permitted on non persisted policies.",
    httplib.BAD_REQUEST)
errors['policy_not_exist'] = (
    1020, "The specified policy does not exist.", httplib.NOT_FOUND)
errors['policy_rule_insertion_failure'] = (
    1021, "The policy rule could not be inserted.", httplib.BAD_REQUEST)
errors['policy_abbreviation_error'] = (
    1022, "The policy abbreviation must be a string and the length of the "
    "string must be equal to or less than 5 characters.",
    httplib.BAD_REQUEST)


def get(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][:2]


def get_num(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][0]


def get_desc(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][1]


def get_http(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][2]
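
Each table entry pairs a programmer-readable name with a stable error
number, a description, and an HTTP classification; get() returns just the
first two and silently falls back to UNKNOWN.  A quick illustration over a
two-entry excerpt of the table:

    BAD_REQUEST = 400  # stands in for httplib.BAD_REQUEST
    UNKNOWN = 'unknown'
    errors = {
        UNKNOWN: (1000, "Unknown error", BAD_REQUEST),
        'rule_syntax': (1002, "Syntax error for rule", BAD_REQUEST),
    }

    def get(name):
        if name not in errors:
            name = UNKNOWN
        return errors[name][:2]

    assert get('rule_syntax') == (1002, "Syntax error for rule")
    assert get('no_such_error') == (1000, "Unknown error")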

congress/api/library_policy_model.py

@@ -1,160 +0,0 @@
# Copyright (c) 2017 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from oslo_log import log as logging

from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception

LOG = logging.getLogger(__name__)


class LibraryPolicyModel(base.APIModel):
    """Model for handling API requests about Library Policies."""

    # Note(thread-safety): blocking function
    def get_items(self, params, context=None):
        """Get items in model.

        :param: params: A dict-like object containing parameters
                    from the request query string and body.
                    The name parameter filters results by policy name.
        :param: context: Key-values providing frame of reference of request
        :returns: A dict containing at least a 'results' key whose value is
                  a list of items in the model. Additional keys set in the
                  dict will also be rendered for the user.
        """
        include_rules = True
        if params.get('include_rules', 'true').lower() == 'false':
            include_rules = False
        try:
            # Note: name is included as a filtering parameter in get_items
            # rather than a key in get_item because the API does not commit to
            # library policy name being unique.
            if 'name' in params:
                # Note(thread-safety): blocking call
                try:
                    policy = self.invoke_rpc(
                        base.LIBRARY_SERVICE_ID, 'get_policy_by_name',
                        {'name': params['name'],
                         'include_rules': include_rules})
                    return {"results": [policy]}
                except KeyError:  # not found
                    return {"results": []}
            else:
                # Note(thread-safety): blocking call
                return {"results": self.invoke_rpc(
                    base.LIBRARY_SERVICE_ID,
                    'get_policies', {'include_rules': include_rules})}
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def get_item(self, id_, params, context=None):
        """Retrieve item with id from model.

        :param: id\_: The id of the item to retrieve
        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: context: Key-values providing frame of reference of request
        :returns: The matching item or None if no item with id exists.
        """
        try:
            # Note(thread-safety): blocking call
            include_rules = True
            if params.get('include_rules', 'true').lower() == 'false':
                include_rules = False
            return self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                                   'get_policy',
                                   {'id_': id_,
                                    'include_rules': include_rules})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def add_item(self, item, params, id_=None, context=None):
        """Add item to model.

        :param: item: The item to add to the model
        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: id\_: The unique name of the item
        :param: context: Key-values providing frame of reference of request
        :returns: Tuple of (ID, newly_created_item)
        :raises KeyError: ID already exists.
        :raises DataModelException: Addition cannot be performed.
        """
        if id_ is not None:
            (num, desc) = error_codes.get('policy_id_must_not_be_provided')
            raise webservice.DataModelException(num, desc)
        try:
            # Note(thread-safety): blocking call
            policy_metadata = self.invoke_rpc(
                base.LIBRARY_SERVICE_ID, 'create_policy',
                {'policy_dict': item})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)
        return (policy_metadata['id'], policy_metadata)

    # Note(thread-safety): blocking function
    def delete_item(self, id_, params, context=None):
        """Remove item from model.

        :param: id\_: The unique name of the item to be removed
        :param: params:
        :param: context: Key-values providing frame of reference of request
        :returns: The removed item.
        :raises KeyError: Item with specified id\_ not present.
        """
        # Note(thread-safety): blocking call
        return self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                               'delete_policy',
                               {'id_': id_})

    def replace_item(self, id_, item, params, context=None):
        """Replace item with id\_ with new data.

        :param: id\_: The ID of the item to be replaced
        :param: item: The new item
        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: context: Key-values providing frame of reference of request
        :returns: The new item after replacement.
        :raises KeyError: Item with specified id\_ not present.
        """
        # Note(thread-safety): blocking call
        try:
            return self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                                   'replace_policy',
                                   {'id_': id_,
                                    'policy_dict': item})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)
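
Both lookups honor an include_rules query parameter that defaults to true
and is disabled only by the literal string 'false' (case-insensitive).
The parsing distilled into a helper:

    def include_rules_from(params):
        # mirrors the checks in get_items/get_item above
        return params.get('include_rules', 'true').lower() != 'false'

    assert include_rules_from({}) is True
    assert include_rules_from({'include_rules': 'FALSE'}) is False
    assert include_rules_from({'include_rules': '0'}) is True  # not 'false'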

congress/api/policy_model.py

@@ -1,254 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

import re

from oslo_serialization import jsonutils as json
import six

from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception
from congress.library_service import library_service


class PolicyModel(base.APIModel):
    """Model for handling API requests about Policies."""

    # Note(thread-safety): blocking function
    def get_items(self, params, context=None):
        """Get items in model.

        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: context: Key-values providing frame of reference of request
        :returns: A dict containing at least a 'results' key whose value is
                  a list of items in the model. Additional keys set in the
                  dict will also be rendered for the user.
        """
        try:
            # Note(thread-safety): blocking call
            return {"results": self.invoke_rpc(base.ENGINE_SERVICE_ID,
                                               'persistent_get_policies',
                                               {})}
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def get_item(self, id_, params, context=None):
        """Retrieve item with id id\_ from model.

        :param: id\_: The ID of the item to retrieve
        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: context: Key-values providing frame of reference of request
        :returns: The matching item or None if id\_ does not exist.
        """
        try:
            # Note(thread-safety): blocking call
            return self.invoke_rpc(base.ENGINE_SERVICE_ID,
                                   'persistent_get_policy',
                                   {'id_': id_})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def add_item(self, item, params, id_=None, context=None):
        """Add item to model.

        :param: item: The item to add to the model
        :param: params: A dict-like object containing parameters
                    from the request query string and body.
        :param: id\_: The ID of the item, or None if an ID should be generated
        :param: context: Key-values providing frame of reference of request
        :returns: Tuple of (ID, newly_created_item)
        :raises KeyError: ID already exists.
        :raises DataModelException: Addition cannot be performed.
        :raises BadRequest: library_policy parameter and request body both
            present
        """
        if id_ is not None:
            (num, desc) = error_codes.get('policy_id_must_not_be_provided')
            raise webservice.DataModelException(num, desc)

        # case 1: parameter gives library policy UUID
        if 'library_policy' in params:
            if item:
                raise exception.BadRequest(
                    'Policy creation request with `library_policy` parameter '
                    'must not have non-empty body.')
            try:
                # Note(thread-safety): blocking call
                library_policy_object = self.invoke_rpc(
                    base.LIBRARY_SERVICE_ID,
                    'get_policy', {'id_': params['library_policy']})
                policy_metadata = self.invoke_rpc(
                    base.ENGINE_SERVICE_ID,
                    'persistent_create_policy_with_rules',
                    {'policy_rules_obj': library_policy_object},
                    timeout=self.dse_long_timeout)
            except exception.CongressException as e:
                raise webservice.DataModelException.create(e)
            return (policy_metadata['id'], policy_metadata)

        # case 2: item contains rules
        if 'rules' in item:
            self._check_create_policy_item(item)
            try:
                library_service.validate_policy_item(item)
                # Note(thread-safety): blocking call
                policy_metadata = self.invoke_rpc(
                    base.ENGINE_SERVICE_ID,
                    'persistent_create_policy_with_rules',
                    {'policy_rules_obj': item}, timeout=self.dse_long_timeout)
            except exception.CongressException as e:
                raise webservice.DataModelException.create(e)
            return (policy_metadata['id'], policy_metadata)

        # case 3: item does not contain rules
        self._check_create_policy_item(item)
        name = item['name']
        try:
            # Note(thread-safety): blocking call
            policy_metadata = self.invoke_rpc(
                base.ENGINE_SERVICE_ID, 'persistent_create_policy',
                {'name': name,
                 'abbr': item.get('abbreviation'),
                 'kind': item.get('kind'),
                 'desc': item.get('description')})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)
        return (policy_metadata['id'], policy_metadata)

    def _check_create_policy_item(self, item):
        if 'name' not in item:
            (num, desc) = error_codes.get('policy_name_must_be_provided')
            raise webservice.DataModelException(num, desc)
        abbr = item.get('abbreviation')
        if abbr:
            # the length of abbreviation column is 5 chars in policy DB table,
            # check it in API layer and raise exception if it's too long.
            if not isinstance(abbr, six.string_types) or len(abbr) > 5:
                (num, desc) = error_codes.get('policy_abbreviation_error')
                raise webservice.DataModelException(num, desc)

    # Note(thread-safety): blocking function
    def delete_item(self, id_, params, context=None):
        """Remove item from model.

        :param: id\_: The ID or name of the item to be removed
        :param: params:
        :param: context: Key-values providing frame of reference of request
        :returns: The removed item.
        :raises KeyError: Item with specified id\_ not present.
        """
        # Note(thread-safety): blocking call
        return self.invoke_rpc(base.ENGINE_SERVICE_ID,
                               'persistent_delete_policy',
                               {'name_or_id': id_},
                               timeout=self.dse_long_timeout)

    def _get_boolean_param(self, key, params):
        if key not in params:
            return False
        value = params[key]
        return value.lower() == "true" or value == "1"

    # Note: It's confusing to figure out how this method is called.
    # It is called via user supplied string in the `action` method of
    # api/webservice.py:ElementHandler
    # Note(thread-safety): blocking function
    def simulate_action(self, params, context=None, request=None):
        """Simulate the effects of executing a sequence of updates.

        :returns: the result of a query.
        """
        # grab string arguments
        theory = context.get('policy_id') or params.get('policy')
        if theory is None:
            (num, desc) = error_codes.get('simulate_without_policy')
            raise webservice.DataModelException(num, desc)
        body = json.loads(request.body)
        query = body.get('query')
        sequence = body.get('sequence')
        actions = body.get('action_policy')
        delta = self._get_boolean_param('delta', params)
        trace = self._get_boolean_param('trace', params)
        if query is None or sequence is None or actions is None:
            (num, desc) = error_codes.get('incomplete_simulate_args')
            raise webservice.DataModelException(num, desc)
        try:
            args = {'query': query, 'theory': theory, 'sequence': sequence,
                    'action_theory': actions, 'delta': delta,
                    'trace': trace, 'as_list': True}
            # Note(thread-safety): blocking call
            result = self.invoke_rpc(base.ENGINE_SERVICE_ID, 'simulate',
                                     args, timeout=self.dse_long_timeout)
        except exception.PolicyException as e:
            (num, desc) = error_codes.get('simulate_error')
            raise webservice.DataModelException(num, desc + "::" + str(e))
        # always return dict
        if trace:
            return {'result': result[0],
                    'trace': result[1]}
        return {'result': result}

    # Note(thread-safety): blocking function
    def execute_action(self, params, context=None, request=None):
        """Execute the action."""
        body = json.loads(request.body)
        # e.g. name = 'nova:disconnectNetwork'
        items = re.split(':', body.get('name'))
        if len(items) != 2:
            (num, desc) = error_codes.get('service_action_syntax')
            raise webservice.DataModelException(num, desc)
        service = items[0].strip()
        action = items[1].strip()
        action_args = body.get('args', {})
        if (not isinstance(action_args, dict)):
            (num, desc) = error_codes.get('execute_action_args_syntax')
            raise webservice.DataModelException(num, desc)
        try:
            args = {'service_name': service,
                    'action': action,
                    'action_args': action_args}
            # Note(thread-safety): blocking call
            self.invoke_rpc(base.ENGINE_SERVICE_ID, 'execute_action', args,
                            timeout=self.action_retry_timeout)
        except exception.PolicyException as e:
            (num, desc) = error_codes.get('execute_error')
            raise webservice.DataModelException(num, desc + "::" + str(e))
        return {}
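
simulate_action requires three keys in the JSON body (query, sequence,
action_policy), while delta and trace arrive as query-string flags parsed
by _get_boolean_param.  A plausible request, with hypothetical Datalog
content and policy names:

    import json

    body = {
        "query": "error(x)",                  # hypothetical query
        "sequence": "p+(x) :- q(x)",          # hypothetical update sequence
        "action_policy": "my_action_policy",  # hypothetical policy name
    }
    params = {"delta": "true", "trace": "0"}  # trace stays False: not 'true'/'1'
    print(json.dumps(body))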

congress/api/router.py

@@ -1,178 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_config import cfg
from congress.api import versions
from congress.api import webservice
class APIRouterV1(object):
def __init__(self, resource_mgr, process_dict):
"""Bootstrap data models and handlers for the API definition."""
# Setup /v1/
version_v1_handler = versions.VersionV1Handler(r'/v1[/]?')
resource_mgr.register_handler(version_v1_handler)
policies = process_dict['api-policy']
policy_collection_handler = webservice.CollectionHandler(
r'/v1/policies',
policies)
resource_mgr.register_handler(policy_collection_handler)
policy_path = r'/v1/policies/(?P<policy_id>[^/]+)'
policy_element_handler = webservice.ElementHandler(
policy_path,
policies,
policy_collection_handler,
allow_update=False,
allow_replace=False)
resource_mgr.register_handler(policy_element_handler)
library_policies = process_dict['api-library-policy']
library_policy_collection_handler = webservice.CollectionHandler(
r'/v1/librarypolicies',
library_policies)
resource_mgr.register_handler(library_policy_collection_handler)
library_policy_path = r'/v1/librarypolicies/(?P<policy_id>[^/]+)'
library_policy_element_handler = webservice.ElementHandler(
library_policy_path,
library_policies,
library_policy_collection_handler,
allow_update=False,
allow_replace=True)
resource_mgr.register_handler(library_policy_element_handler)
policy_rules = process_dict['api-rule']
rule_collection_handler = webservice.CollectionHandler(
r'/v1/policies/(?P<policy_id>[^/]+)/rules',
policy_rules,
"{policy_id}")
resource_mgr.register_handler(rule_collection_handler)
rule_path = (r'/v1/policies/(?P<policy_id>[^/]+)' +
r'/rules/(?P<rule_id>[^/]+)')
rule_element_handler = webservice.ElementHandler(
rule_path,
policy_rules,
"{policy_id}")
resource_mgr.register_handler(rule_element_handler)
# Setup /v1/data-sources
data_sources = process_dict['api-datasource']
ds_collection_handler = webservice.CollectionHandler(
r'/v1/data-sources',
data_sources)
resource_mgr.register_handler(ds_collection_handler)
# Setup /v1/data-sources/<ds_id>
ds_path = r'/v1/data-sources/(?P<ds_id>[^/]+)'
ds_element_handler = webservice.ElementHandler(ds_path, data_sources)
resource_mgr.register_handler(ds_element_handler)
# Setup /v1/data-sources/<ds_id>/schema
schema = process_dict['api-schema']
schema_path = "%s/schema" % ds_path
schema_element_handler = webservice.ElementHandler(schema_path, schema)
resource_mgr.register_handler(schema_element_handler)
# Setup /v1/data-sources/<ds_id>/tables/<table_id>/spec
table_schema_path = "%s/tables/(?P<table_id>[^/]+)/spec" % ds_path
table_schema_element_handler = webservice.ElementHandler(
table_schema_path,
schema)
resource_mgr.register_handler(table_schema_element_handler)
# Setup action handlers
actions = process_dict['api-action']
ds_actions_path = "%s/actions" % ds_path
ds_actions_collection_handler = webservice.CollectionHandler(
ds_actions_path, actions)
resource_mgr.register_handler(ds_actions_collection_handler)
# Setup status handlers
statuses = process_dict['api-status']
ds_status_path = "%s/status" % ds_path
ds_status_element_handler = webservice.ElementHandler(ds_status_path,
statuses)
resource_mgr.register_handler(ds_status_element_handler)
policy_status_path = "%s/status" % policy_path
policy_status_element_handler = webservice.ElementHandler(
policy_status_path,
statuses)
resource_mgr.register_handler(policy_status_element_handler)
rule_status_path = "%s/status" % rule_path
rule_status_element_handler = webservice.ElementHandler(
rule_status_path,
statuses)
resource_mgr.register_handler(rule_status_element_handler)
tables = process_dict['api-table']
tables_path = "(%s|%s)/tables" % (ds_path, policy_path)
table_collection_handler = webservice.CollectionHandler(
tables_path,
tables)
resource_mgr.register_handler(table_collection_handler)
table_path = "%s/(?P<table_id>[^/]+)" % tables_path
table_element_handler = webservice.ElementHandler(table_path, tables)
resource_mgr.register_handler(table_element_handler)
table_rows = process_dict['api-row']
rows_path = "%s/rows" % table_path
row_collection_handler = webservice.CollectionHandler(
rows_path,
table_rows, allow_replace=True)
resource_mgr.register_handler(row_collection_handler)
row_path = "%s/(?P<row_id>[^/]+)" % rows_path
row_element_handler = webservice.ElementHandler(row_path, table_rows)
resource_mgr.register_handler(row_element_handler)
# Setup /v1/data-sources/<ds_id>/webhook
webhook = process_dict['api-webhook']
webhook_path = "%s/webhook" % ds_path
webhook_collection_handler = webservice.CollectionHandler(
webhook_path,
webhook)
resource_mgr.register_handler(webhook_collection_handler)
# Setup /v1/data-sources/<ds_id>/tables/<table_name>/webhook
if cfg.CONF.json_ingester.enable:
json_ingester_webhook_path = \
"%s/tables/(?P<table_name>[^/]+)/webhook" % ds_path
json_ingester_webhook_collection_handler = \
webservice.CollectionHandler(json_ingester_webhook_path,
webhook)
resource_mgr.register_handler(
json_ingester_webhook_collection_handler)
# Setup /v1/system/datasource-drivers
system = process_dict['api-system']
        # NOTE(arosen): start the url out with datasource-drivers since we
        # don't implement /v1/system/ itself yet.
system_collection_handler = webservice.CollectionHandler(
r'/v1/system/drivers',
system)
resource_mgr.register_handler(system_collection_handler)
# Setup /v1/system/datasource-drivers/<driver_id>
driver_path = r'/v1/system/drivers/(?P<driver_id>[^/]+)'
driver_element_handler = webservice.ElementHandler(driver_path, system)
resource_mgr.register_handler(driver_element_handler)
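# A minimal standalone sketch (illustrative, not part of the original
# router): the handlers registered above recover IDs from the request path
# through named regex groups. Checking the /v1/data-sources/<ds_id> pattern:
import re

_ds_path = r'/v1/data-sources/(?P<ds_id>[^/]+)'
_m = re.match(_ds_path + '$', '/v1/data-sources/42ab')
assert _m is not None
assert _m.groupdict() == {'ds_id': '42ab'}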


@ -1,197 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
LOG = logging.getLogger(__name__)
class RowModel(base.APIModel):
"""Model for handling API requests about Rows."""
# TODO(thinrichs): No rows have IDs right now. Maybe eventually
# could make ID the hash of the row, but then might as well
# just make the ID a string repr of the row. No use case
# for it as of now since all rows are read-only.
# def get_item(self, id_, context=None):
# """Retrieve item with id id\_ from model.
# Args:
# id_: The ID of the item to retrieve
# context: Key-values providing frame of reference of request
# Returns:
# The matching item or None if item with id\_ does not exist.
# """
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
LOG.info("get_items(context=%s)", context)
gen_trace = False
if 'trace' in params and params['trace'].lower() == 'true':
gen_trace = True
# Get the caller, it should be either policy or datasource
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name. But the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # It would have saved us if table_id were a UUID rather than a name,
        # but it appears that table_id is just another word for tablename.
        # Fix: check UUID of datasource before operating. Abort if mismatch
table_id = context['table_id']
try:
args = {'table_id': table_id, 'source_id': source_id,
'trace': gen_trace}
if caller is base.ENGINE_SERVICE_ID:
# allow extra time for row policy engine query
# Note(thread-safety): blocking call
result = self.invoke_rpc(
caller, 'get_row_data', args,
timeout=self.dse_long_timeout)
else:
# Note(thread-safety): blocking call
result = self.invoke_rpc(caller, 'get_row_data', args)
except exception.CongressException as e:
m = ("Error occurred while processing source_id '%s' for row "
"data of the table '%s'" % (source_id, table_id))
LOG.debug(m)
raise webservice.DataModelException.create(e)
if gen_trace and caller is base.ENGINE_SERVICE_ID:
# DSE2 returns lists instead of tuples, so correct that.
results = [{'data': tuple(x['data'])} for x in result[0]]
return {'results': results,
'trace': result[1] or "Not available"}
else:
result = [{'data': tuple(x['data'])} for x in result]
return {'results': result}
# Note(thread-safety): blocking function
def replace_items(self, items, params, context=None):
"""Replaces all data in a table.
:param: id\_: A table id for replacing all row
:param: items: A data for new rows
:param: params: A dict-like object containing parameters from
request query
:param: context: Key-values providing frame of reference of request
:returns: None
:raises KeyError: table id doesn't exist
:raises DataModelException: any error occurs during replacing rows.
"""
LOG.info("replace_items(context=%s)", context)
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name. But the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # It would have saved us if table_id were a UUID rather than a name,
        # but it appears that table_id is just another word for tablename.
        # Fix: check UUID of datasource before operating. Abort if mismatch
table_id = context['table_id']
try:
args = {'table_id': table_id, 'source_id': source_id,
'objs': items}
# Note(thread-safety): blocking call
self.invoke_rpc(caller, 'replace_entire_table_data', args)
except exception.CongressException as e:
LOG.debug("Error occurred while processing updating rows "
"for source_id '%s' and table_id '%s'",
source_id, table_id, exc_info=True)
raise webservice.DataModelException.create(e)
LOG.info("finish replace_items(context=%s)", context)
LOG.debug("replaced table %s with row items: %s",
table_id, str(items))
# TODO(thinrichs): It makes sense to sometimes allow users to create
# a new row for internal data sources. But since we don't have
# those yet all tuples are read-only from the API.
# def add_item(self, item, id_=None, context=None):
# """Add item to model.
# Args:
# item: The item to add to the model
# id_: The ID of the item, or None if an ID should be generated
# context: Key-values providing frame of reference of request
# Returns:
# Tuple of (ID, newly_created_item)
# Raises:
# KeyError: ID already exists.
# """
# TODO(thinrichs): once we have internal data sources,
# add the ability to update a row. (Or maybe not and implement
# via add+delete.)
# def update_item(self, id_, item, context=None):
# """Update item with id\_ with new data.
# Args:
# id_: The ID of the item to be updated
# item: The new item
# context: Key-values providing frame of reference of request
# Returns:
# The updated item.
# Raises:
# KeyError: Item with specified id\_ not present.
# """
# # currently a noop since the owner_id cannot be changed
# if id_ not in self.items:
# raise KeyError("Cannot update item with ID '%s': "
# "ID does not exist")
# return item
# TODO(thinrichs): once we can create, we should be able to delete
# def delete_item(self, id_, context=None):
# """Remove item from model.
# Args:
# id_: The ID of the item to be removed
# context: Key-values providing frame of reference of request
# Returns:
# The removed item.
# Raises:
# KeyError: Item with specified id\_ not present.
# """


@ -1,124 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception
class RuleModel(base.APIModel):
"""Model for handling API requests about policy Rules."""
def policy_name(self, context):
if 'ds_id' in context:
return context['ds_id']
elif 'policy_id' in context:
# Note: policy_id is actually policy name
return context['policy_id']
def get_item(self, id_, params, context=None):
"""Retrieve item with id id\_ from model.
:param: id\_: The ID of the item to retrieve
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The matching item or None if item with id\_ does not exist.
"""
try:
args = {'id_': id_, 'policy_name': self.policy_name(context)}
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_get_rule', args)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
try:
args = {'policy_name': self.policy_name(context)}
# Note(thread-safety): blocking call
rules = self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_get_rules', args)
return {'results': rules}
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def add_item(self, item, params, id_=None, context=None):
"""Add item to model.
:param: item: The item to add to the model
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: id\_: The ID of the item, or None if an ID should be generated
:param: context: Key-values providing frame of reference of request
:returns: Tuple of (ID, newly_created_item)
:raises KeyError: ID already exists.
"""
if id_ is not None:
raise webservice.DataModelException(
*error_codes.get('add_item_id'))
try:
args = {'policy_name': self.policy_name(context),
'str_rule': item.get('rule'),
'rule_name': item.get('name'),
'comment': item.get('comment')}
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_insert_rule', args,
timeout=self.dse_long_timeout)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def delete_item(self, id_, params, context=None):
"""Remove item from model.
:param: id\_: The ID of the item to be removed
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The removed item.
:raises KeyError: Item with specified id\_ not present.
"""
try:
args = {'id_': id_, 'policy_name_or_id': self.policy_name(context)}
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_delete_rule', args,
timeout=self.dse_long_timeout)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
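# A sketch of the POST body consumed by RuleModel.add_item() above; the rule
# text, policy name, and rule name are hypothetical and not validated here:
_item = {
    'rule': 'error(x) :- server(x, "down")',
    'name': 'flag-down-servers',
    'comment': 'Flag servers reported as down',
}
_args = {'policy_name': 'classification',
         'str_rule': _item.get('rule'),
         'rule_name': _item.get('name'),
         'comment': _item.get('comment')}
assert _args['str_rule'].startswith('error(x)')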


@ -1,67 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class SchemaModel(base.APIModel):
"""Model for handling API requests about Schemas."""
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id\_ from model.
:param: id\_: The ID of the item to retrieve
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The matching item or None if item with id\_ does not exist.
"""
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name. But the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating. Abort if mismatch
table = context.get('table_id')
args = {'source_id': source_id}
try:
# Note(thread-safety): blocking call
schema = self.invoke_rpc(caller, 'get_datasource_schema', args)
except exception.CongressException as e:
raise webservice.DataModelException(e.code, str(e),
http_status_code=e.code)
# request to see the schema for one table
if table:
if table not in schema:
raise webservice.DataModelException(
404, ("Table '{}' for datasource '{}' has no "
"schema ".format(id_, source_id)),
http_status_code=404)
return api_utils.create_table_dict(table, schema)
tables = [api_utils.create_table_dict(table_, schema)
for table_ in schema]
return {'tables': tables}
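# A sketch of the two lookups served by SchemaModel.get_item() above, under
# the assumption (for illustration only) that the RPC returns a dict mapping
# table names to column descriptions:
_schema = {'servers': ['id', 'status'], 'flavors': ['id', 'ram']}
_table = 'servers'
if _table:                      # /tables/<table_id>/spec: one table
    assert _table in _schema
else:                           # /schema: every table of the datasource
    assert sorted(_schema) == ['flavors', 'servers']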


@ -1,57 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class StatusModel(base.APIModel):
"""Model for handling API requests about Statuses."""
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id\_ from model.
:param: id\_: The ID of the item to retrieve
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The matching item or None if item with id\_ does not exist.
"""
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name. But the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating. Abort if mismatch
try:
rpc_args = {'params': context, 'source_id': source_id}
# Note(thread-safety): blocking call
status = self.invoke_rpc(caller, 'get_status', rpc_args)
except exception.CongressException as e:
raise webservice.DataModelException(
exception.NotFound.code, str(e),
http_status_code=exception.NotFound.code)
return status


@ -1,68 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class DatasourceDriverModel(base.APIModel):
"""Model for handling API requests about DatasourceDriver."""
def get_items(self, params, context=None):
"""Get items in model.
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
drivers = self.bus.get_drivers_info()
fields = ['id', 'description']
results = [self.bus.make_datasource_dict(
driver, fields=fields)
for driver in drivers]
return {"results": results}
def get_item(self, id_, params, context=None):
"""Retrieve item with id id\_ from model.
:param: id\_: The ID of the item to retrieve
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The matching item or None if item with id\_ does not exist.
"""
datasource = context.get('driver_id')
try:
driver = self.bus.get_driver_info(datasource)
schema = self.bus.get_driver_schema(datasource)
except exception.DriverNotFound as e:
raise webservice.DataModelException(e.code, str(e),
http_status_code=e.code)
tables = [api_utils.create_table_dict(table_, schema)
for table_ in schema]
driver['tables'] = tables
return driver
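# A sketch of the field filtering performed via make_datasource_dict() in
# get_items() above, assuming driver info dicts may carry extra keys that
# the listing should not expose (sample data is hypothetical):
_drivers = [{'id': 'nova', 'description': 'Nova datasource', 'config': {}}]
_fields = ['id', 'description']
_results = [{f: d[f] for f in _fields} for d in _drivers]
assert _results == [{'id': 'nova', 'description': 'Nova datasource'}]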


@ -1,151 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
LOG = logging.getLogger(__name__)
class TableModel(base.APIModel):
"""Model for handling API requests about Tables."""
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id\_ from model.
:param: id\_: The ID of the item to retrieve
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The matching item or None if item with id\_ does not exist.
"""
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name. But the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating. Abort if mismatch
args = {'source_id': source_id, 'table_id': id_}
try:
# Note(thread-safety): blocking call
tablename = self.invoke_rpc(caller, 'get_tablename', args)
except exception.CongressException as e:
LOG.debug("Exception occurred while retrieving table %s"
"from datasource %s", id_, source_id)
raise webservice.DataModelException.create(e)
if tablename:
return {'id': tablename}
LOG.info('table id %s is not found in datasource %s', id_, source_id)
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
LOG.info('get_items has context %s', context)
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name. But the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating. Abort if mismatch
try:
# Note(thread-safety): blocking call
tablenames = self.invoke_rpc(caller, 'get_tablenames',
{'source_id': source_id})
except exception.CongressException as e:
LOG.debug("Exception occurred while retrieving tables"
"from datasource %s", source_id)
raise webservice.DataModelException.create(e)
# when the source_id doesn't have any table, 'tablenames' is set([])
        if isinstance(tablenames, (set, list)):
return {'results': [{'id': x} for x in tablenames]}
# Tables can only be created/updated/deleted by writing policy
# or by adding new data sources. Once we have internal data sources
# we need to implement all of these.
# def add_item(self, item, id_=None, context=None):
# """Add item to model.
# Args:
# item: The item to add to the model
# id_: The ID of the item, or None if an ID should be generated
# context: Key-values providing frame of reference of request
# Returns:
# Tuple of (ID, newly_created_item)
# Raises:
# KeyError: ID already exists.
# """
# def update_item(self, id_, item, context=None):
# """Update item with id\_ with new data.
# Args:
# id_: The ID of the item to be updated
# item: The new item
# context: Key-values providing frame of reference of request
# Returns:
# The updated item.
# Raises:
# KeyError: Item with specified id\_ not present.
# """
# # currently a noop since the owner_id cannot be changed
# if id_ not in self.items:
# raise KeyError("Cannot update item with ID '%s': "
# "ID does not exist")
# return item
# def delete_item(self, id_, context=None):
# """Remove item from model.
# Args:
# id_: The ID of the item to be removed
# context: Key-values providing frame of reference of request
# Returns:
# The removed item.
# Raises:
# KeyError: Item with specified id\_ not present.
# """


@ -1,146 +0,0 @@
# Copyright 2015 Huawei.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import copy
import json
import os
from six.moves import http_client
import webob
import webob.dec
from congress.api import webservice
VERSIONS = {
"v1": {
"id": "v1",
"status": "CURRENT",
"updated": "2013-08-12T17:42:13Z",
"links": [
{
"rel": "describedby",
"type": "text/html",
"href": "http://congress.readthedocs.org/",
},
],
},
}
def _get_view_builder(request):
base_url = request.application_url
return ViewBuilder(base_url)
class ViewBuilder(object):
def __init__(self, base_url):
""":param base_url: url of the root wsgi application."""
self.base_url = base_url
def build_choices(self, versions, request):
version_objs = []
for version in sorted(versions.keys()):
version = versions[version]
version_objs.append({
"id": version['id'],
"status": version['status'],
"updated": version['updated'],
"links": self._build_links(version, request.path),
})
return dict(choices=version_objs)
def build_versions(self, versions):
version_objs = []
for version in sorted(versions.keys()):
version = versions[version]
version_objs.append({
"id": version['id'],
"status": version['status'],
"updated": version['updated'],
"links": self._build_links(version),
})
return dict(versions=version_objs)
def build_version(self, version):
reval = copy.deepcopy(version)
reval['links'].insert(0, {
"rel": "self",
"href": self.base_url.rstrip('/') + '/',
})
return dict(version=reval)
def _build_links(self, version_data, path=None):
"""Generate a container of links that refer to the provided version."""
href = self._generate_href(version_data['id'], path)
links = [
{
"rel": "self",
"href": href,
},
]
return links
def _generate_href(self, version, path=None):
"""Create an url that refers to a specific version."""
if path:
path = path.strip('/')
return os.path.join(self.base_url, version, path)
else:
return os.path.join(self.base_url, version) + '/'
class Versions(object):
@classmethod
def factory(cls, global_config, **local_config):
return cls()
@webob.dec.wsgify(RequestClass=webob.Request)
def __call__(self, request):
"""Respond to a request for all Congress API versions."""
builder = _get_view_builder(request)
if request.path == '/':
body = builder.build_versions(VERSIONS)
status = http_client.OK
else:
body = builder.build_choices(VERSIONS, request)
status = http_client.MULTIPLE_CHOICES
return webob.Response(body="%s\n" % json.dumps(body),
status=status,
content_type='application/json',
charset='UTF-8')
class VersionV1Handler(webservice.AbstractApiHandler):
def handle_request(self, request):
builder = _get_view_builder(request)
body = builder.build_version(VERSIONS['v1'])
return webob.Response(body="%s\n" % json.dumps(body),
status=http_client.OK,
content_type='application/json',
charset='UTF-8')
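# A quick sketch of the discovery documents built above; the base URL is
# hypothetical, and with this module importable the snippet runs as-is:
_builder = ViewBuilder('http://localhost:1789')
_versions = _builder.build_versions(VERSIONS)
assert _versions['versions'][0]['id'] == 'v1'
_version = _builder.build_version(VERSIONS['v1'])
assert _version['version']['links'][0]['rel'] == 'self'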


@ -1,52 +0,0 @@
# Copyright (c) 2018 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class WebhookModel(base.APIModel):
"""Model for handling webhook notifications."""
def add_item(self, item, params, id_=None, context=None):
"""POST webhook notification.
:param item: The webhook payload
:param params: A dict-like object containing parameters
from the request query string and body.
:param id_: not used in this case; should be None
:param context: Key-values providing frame of reference of request
"""
caller, source_id = api_utils.get_id_from_context(context)
table_name = context.get('table_name')
try:
if table_name: # json ingester case
args = {'table_name': table_name,
'body': item}
# Note(thread-safety): blocking call
self.invoke_rpc(base.JSON_DS_SERVICE_PREFIX + caller,
'json_ingester_webhook_handler', args)
else:
args = {'payload': item}
# Note(thread-safety): blocking call
self.invoke_rpc(caller, 'process_webhook_notification', args)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
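# A minimal sketch of the dispatch in add_item() above: the presence of
# 'table_name' in the matched-path context selects the JSON-ingester RPC
# rather than the plain datasource webhook RPC:
def _route(context):
    return ('json_ingester_webhook_handler' if context.get('table_name')
            else 'process_webhook_notification')

assert _route({'ds_id': 'd1'}) == 'process_webhook_notification'
assert _route({'ds_id': 'd1',
               'table_name': 't1'}) == 'json_ingester_webhook_handler'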


@ -1,635 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
try:
# For Python 3
import http.client as httplib
except ImportError:
import httplib
import re
from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
from oslo_utils import uuidutils
import six
import webob
import webob.dec
from congress.api import error_codes
from congress.common import policy
from congress import exception
LOG = logging.getLogger(__name__)
def error_response(status, error_code, description, data=None):
"""Construct and return an error response.
Args:
status: The HTTP status code of the response.
error_code: The application-specific error code.
description: Friendly G11N-enabled string corresponding to error_code.
data: Additional data (not G11N-enabled) for the API consumer.
"""
raw_body = {'error': {
'message': description,
'error_code': error_code,
'error_data': data
}
}
body = '%s\n' % json.dumps(raw_body)
return webob.Response(body=body, status=status,
content_type='application/json',
charset='UTF-8')
NOT_FOUND_RESPONSE = error_response(httplib.NOT_FOUND,
httplib.NOT_FOUND,
"The resource could not be found.")
NOT_SUPPORTED_RESPONSE = error_response(httplib.NOT_IMPLEMENTED,
httplib.NOT_IMPLEMENTED,
"Method not supported")
INTERNAL_ERROR_RESPONSE = error_response(httplib.INTERNAL_SERVER_ERROR,
httplib.INTERNAL_SERVER_ERROR,
"Internal server error")
def original_msg(e):
'''Undo oslo-messaging added traceback to return original exception msg'''
msg = e.args[0].split('\nTraceback (most recent call last):')[0]
if len(msg) != len(e.args[0]):
if len(msg) > 0 and msg[-1] in ("'", '"'):
msg = msg[:-1]
if len(msg) > 1 and msg[0:2] in ('u"', "u'"):
msg = msg[2:]
elif len(msg) > 0 and msg[0] in ("'", '"'):
msg = msg[1:]
return msg
    else:  # return untouched message if format not as expected
return e.args[0]
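# A quick check of the unwrapping above on a synthetic oslo.messaging-style
# message (illustrative only, not part of the original module):
_e = Exception("u'policy rule not found'\nTraceback (most recent call last):"
               "\n  ...")
assert original_msg(_e) == 'policy rule not found'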
class DataModelException(Exception):
"""Congress API Data Model Exception
Custom exception raised by API Data Model methods to communicate errors to
the API framework.
"""
def __init__(self, error_code, description, data=None,
http_status_code=httplib.BAD_REQUEST):
super(DataModelException, self).__init__(description)
self.error_code = error_code
self.description = description
self.data = data
self.http_status_code = http_status_code
@classmethod
def create(cls, error):
"""Generate a DataModelException from an existing CongressException.
:param: error: has a 'name' field corresponding to an error_codes
error-name. It may also have a 'data' field.
:returns: a DataModelException properly populated.
"""
name = getattr(error, "name", None)
if name:
error_code = error_codes.get_num(name)
description = error_codes.get_desc(name)
http_status_code = error_codes.get_http(name)
else:
# Check if it's default http error or else return 'Unknown error'
error_code = error.code or httplib.BAD_REQUEST
if error_code not in httplib.responses:
error_code = httplib.BAD_REQUEST
description = httplib.responses.get(error_code, "Unknown error")
http_status_code = error_code
if str(error):
description += "::" + original_msg(error)
return cls(error_code=error_code,
description=description,
data=getattr(error, 'data', None),
http_status_code=http_status_code)
def rest_response(self):
return error_response(self.http_status_code, self.error_code,
self.description, self.data)
class AbstractApiHandler(object):
"""Abstract handler for API requests.
Attributes:
path_regex: The regular expression matching paths supported by this
handler.
"""
def __init__(self, path_regex):
if path_regex[-1] != '$':
path_regex += "$"
# we only use 'match' so no need to mark the beginning of string
self.path_regex = path_regex
self.path_re = re.compile(path_regex)
def __str__(self):
return "%s(%s)" % (self.__class__.__name__, self.path_re.pattern)
def _get_context(self, request):
"""Return dict of variables in request path."""
m = self.path_re.match(request.path)
# remove all the None values before returning
return dict([(k, v) for k, v in m.groupdict().items()
if v is not None])
def _parse_json_body(self, request):
content_type = (request.content_type or "application/json").lower()
if content_type != 'application/json':
raise DataModelException(
400, "Unsupported Content-Type; must be 'application/json'")
if request.charset != 'UTF-8':
raise DataModelException(
400, "Unsupported charset: must be 'UTF-8'")
try:
request.parsed_body = json.loads(request.body.decode('utf-8'))
except ValueError as e:
msg = "Failed to parse body as %s: %s" % (content_type, e)
raise DataModelException(400, msg)
return request.parsed_body
def handles_request(self, request):
"""Return true iff handler supports the request."""
m = self.path_re.match(request.path)
return m is not None
def handle_request(self, request):
"""Handle a REST request.
:param: request: A webob request object.
:returns: A webob response object.
"""
return NOT_SUPPORTED_RESPONSE
class ElementHandler(AbstractApiHandler):
"""API handler for REST element resources.
REST elements represent individual entities in the data model, and often
support the following operations:
- Read a representation of the element
- Update (replace) the entire element with a new version
- Update (patch) parts of the element with new values
- Delete the element
    Elements may also exhibit 'controller' semantics for RPC-style method
    invocation; however, this is not currently supported.
"""
def __init__(self, path_regex, model,
collection_handler=None, allow_read=True, allow_actions=True,
allow_replace=True, allow_update=True, allow_delete=True):
"""Initialize an element handler.
:param: path_regex: A regular expression that matches the full path
to the element. If multiple handlers match a request path,
the handler with the highest registration search_index wins.
:param: model: A resource data model instance
:param: collection_handler: The collection handler this element
is a member of or None if the element is not a member of a
collection. (Used for named creation of elements)
        :param: allow_read: True if element supports read
        :param: allow_actions: True if element supports action invocation
        :param: allow_replace: True if element supports replace
        :param: allow_update: True if element supports update
        :param: allow_delete: True if element supports delete
"""
super(ElementHandler, self).__init__(path_regex)
self.model = model
self.collection_handler = collection_handler
self.allow_read = allow_read
self.allow_actions = allow_actions
self.allow_replace = allow_replace
self.allow_update = allow_update
self.allow_delete = allow_delete
def _get_element_id(self, request):
m = self.path_re.match(request.path)
if m.groups():
return m.groups()[-1] # TODO(pballand): make robust
return None
def handle_request(self, request):
"""Handle a REST request.
:param: request: A webob request object.
:returns: A webob response object.
"""
try:
if request.method == 'GET' and self.allow_read:
return self.read(request)
elif request.method == 'POST' and self.allow_actions:
return self.action(request)
elif request.method == 'PUT' and self.allow_replace:
return self.replace(request)
elif request.method == 'PATCH' and self.allow_update:
return self.update(request)
elif request.method == 'DELETE' and self.allow_delete:
return self.delete(request)
return NOT_SUPPORTED_RESPONSE
except db_exc.DBError:
LOG.exception('Database backend experienced an unknown error.')
raise exception.DatabaseError
def read(self, request):
if not hasattr(self.model, 'get_item'):
return NOT_SUPPORTED_RESPONSE
id_ = self._get_element_id(request)
item = self.model.get_item(id_, request.params,
context=self._get_context(request))
if item is None:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def action(self, request):
# Non-CRUD operations must specify an 'action' parameter
action = request.params.getall('action')
if len(action) != 1:
if len(action) > 1:
errstr = "Action parameter may not be provided multiple times."
else:
errstr = "Missing required action parameter."
return error_response(httplib.BAD_REQUEST, 400, errstr)
model_method = "%s_action" % action[0].replace('-', '_')
f = getattr(self.model, model_method, None)
if f is None:
return NOT_SUPPORTED_RESPONSE
try:
response = f(request.params, context=self._get_context(request),
request=request)
if isinstance(response, webob.Response):
return response
return webob.Response(body="%s\n" % json.dumps(response),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
except TypeError:
LOG.exception("Error occurred")
return NOT_SUPPORTED_RESPONSE
def replace(self, request):
        if not hasattr(self.model, 'replace_item'):
return NOT_SUPPORTED_RESPONSE
id_ = self._get_element_id(request)
try:
item = self._parse_json_body(request)
self.model.replace_item(id_, item, request.params,
context=self._get_context(request))
except KeyError as e:
if (self.collection_handler and
getattr(self.collection_handler, 'allow_named_create',
False)):
return self.collection_handler.create_member(request, id_=id_)
return error_response(httplib.NOT_FOUND, 404,
original_msg(e) or 'Not found')
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def update(self, request):
        if not (hasattr(self.model, 'get_item') and
                hasattr(self.model, 'replace_item')):
return NOT_SUPPORTED_RESPONSE
context = self._get_context(request)
id_ = self._get_element_id(request)
item = self.model.get_item(id_, request.params, context=context)
if item is None:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
updates = self._parse_json_body(request)
item.update(updates)
self.model.replace_item(id_, item, request.params, context=context)
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def delete(self, request):
if not hasattr(self.model, 'delete_item'):
return NOT_SUPPORTED_RESPONSE
id_ = self._get_element_id(request)
try:
item = self.model.delete_item(
id_, request.params, context=self._get_context(request))
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
except KeyError as e:
LOG.exception("Error occurred")
return error_response(httplib.NOT_FOUND, 404,
original_msg(e) or 'Not found')
class CollectionHandler(AbstractApiHandler):
"""API handler for REST collection resources.
REST collections represent collections of entities in the data model, and
often support the following operations:
- List elements in the collection
- Create new element in the collection
The following less-common collection operations are NOT SUPPORTED:
- Replace all elements in the collection
- Delete all elements in the collection
"""
def __init__(self, path_regex, model,
allow_named_create=True, allow_list=True, allow_create=True,
allow_replace=False):
"""Initialize a collection handler.
:param: path_regex: A regular expression matching the collection base
path.
:param: model: A resource data model instance
        :param: allow_named_create: True if caller can specify ID of new items
        :param: allow_list: True if collection supports listing elements
        :param: allow_create: True if collection supports creating elements
        :param: allow_replace: True if collection supports replacing all
            elements at once
"""
super(CollectionHandler, self).__init__(path_regex)
self.model = model
self.allow_named_create = allow_named_create
self.allow_list = allow_list
self.allow_create = allow_create
self.allow_replace = allow_replace
def handle_request(self, request):
"""Handle a REST request.
:param: request: A webob request object.
:returns: A webob response object.
"""
# NOTE(arosen): only do policy.json if keystone is used for now.
if cfg.CONF.auth_strategy == "keystone":
context = request.environ['congress.context']
target = {
'project_id': context.project_id,
'user_id': context.user_id
}
# NOTE(arosen): today congress only enforces API policy on which
# API calls we allow tenants to make with their given roles.
action_type = self._get_action_type(request.method)
# FIXME(arosen): There should be a cleaner way to do this.
model_name = self.path_regex.split('/')[1]
action = "%s_%s" % (action_type, model_name)
# TODO(arosen): we should handle serializing the
# response in one place
try:
policy.enforce(context, action, target)
except exception.PolicyNotAuthorized as e:
LOG.info(e)
return webob.Response(body=six.text_type(e), status=e.code,
content_type='application/json',
charset='UTF-8')
if request.method == 'GET' and self.allow_list:
return self.list_members(request)
elif request.method == 'POST' and self.allow_create:
return self.create_member(request)
elif request.method == 'PUT' and self.allow_replace:
return self.replace_members(request)
return NOT_SUPPORTED_RESPONSE
def _get_action_type(self, method):
if method == 'GET':
return 'get'
elif method == 'POST':
return 'create'
elif method == 'DELETE':
return 'delete'
elif method == 'PUT' or method == 'PATCH':
return 'update'
else:
# should never get here but just in case ;)
# FIXME(arosen) raise NotImplemented instead and
# make sure we return that as an http code.
raise TypeError("Invalid HTTP Method")
def list_members(self, request):
if not hasattr(self.model, 'get_items'):
return NOT_SUPPORTED_RESPONSE
items = self.model.get_items(request.params,
context=self._get_context(request))
if items is None:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
elif 'results' not in items:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
body = "%s\n" % json.dumps(items, indent=2)
return webob.Response(body=body, status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def create_member(self, request, id_=None):
if not hasattr(self.model, 'add_item'):
return NOT_SUPPORTED_RESPONSE
item = self._parse_json_body(request)
context = self._get_context(request)
try:
model_return_value = self.model.add_item(
item, request.params, id_, context=context)
except KeyError as e:
LOG.exception("Error occurred")
return error_response(httplib.CONFLICT, httplib.CONFLICT,
original_msg(e) or 'Element already exists')
if model_return_value is None: # webhook request
            return webob.Response(body='{}',
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
else:
id_, item = model_return_value
item['id'] = id_
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.CREATED,
content_type='application/json',
location="%s/%s" % (request.path, id_),
charset='UTF-8')
def replace_members(self, request):
if not hasattr(self.model, 'replace_items'):
return NOT_SUPPORTED_RESPONSE
items = self._parse_json_body(request)
context = self._get_context(request)
try:
self.model.replace_items(items, request.params, context)
except KeyError as e:
LOG.exception("Error occurred")
return error_response(httplib.BAD_REQUEST, httplib.BAD_REQUEST,
original_msg(e) or
'Update %s Failed' % context['table_id'])
return webob.Response(body="", status=httplib.OK,
content_type='application/json',
charset='UTF-8')
class SimpleDataModel(object):
"""A container providing access to a single type of data."""
def __init__(self, model_name):
self.model_name = model_name
self.items = {}
@staticmethod
def _context_str(context):
context = context or {}
return ".".join(
["%s:%s" % (k, context[k]) for k in sorted(context.keys())])
def get_items(self, params, context=None):
"""Get items in model.
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
cstr = self._context_str(context)
results = list(self.items.setdefault(cstr, {}).values())
return {'results': results}
def add_item(self, item, params, id_=None, context=None):
"""Add item to model.
:param: item: The item to add to the model
:param: params: A dict-like object containing parameters
from the request query string and body.
        :param: id\_: The ID of the item, or None if an ID should be generated
:param: context: Key-values providing frame of reference of request
:returns: Tuple of (ID, newly_created_item)
:raises KeyError: ID already exists.
:raises DataModelException: Addition cannot be performed.
"""
cstr = self._context_str(context)
if id_ is None:
id_ = uuidutils.generate_uuid()
if id_ in self.items.setdefault(cstr, {}):
raise KeyError("Cannot create item with ID '%s': "
"ID already exists" % id_)
self.items[cstr][id_] = item
return (id_, item)
def get_item(self, id_, params, context=None):
"""Retrieve item with id id\_ from model.
:param: id\_: The ID of the item to retrieve
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The matching item or None if item with id\_ does not exist.
"""
cstr = self._context_str(context)
return self.items.setdefault(cstr, {}).get(id_)
def update_item(self, id_, item, params, context=None):
"""Update item with id\_ with new data.
:param: id\_: The ID of the item to be updated
        :param: item: The new item
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The updated item.
:raises KeyError: Item with specified id\_ not present.
:raises DataModelException: Update cannot be performed.
"""
cstr = self._context_str(context)
if id_ not in self.items.setdefault(cstr, {}):
raise KeyError("Cannot update item with ID '%s': "
"ID does not exist" % id_)
self.items.setdefault(cstr, {})[id_] = item
return item
def replace_item(self, id_, item, params, context=None):
"""Replace item with id\_ with new data.
:param: id\_: The ID of the item to be replaced
        :param: item: The new item
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The new item after replacement.
:raises KeyError: Item with specified id\_ not present.
:raises DataModelException: Replacement cannot be performed.
"""
cstr = self._context_str(context)
if id_ not in self.items.setdefault(cstr, {}):
raise KeyError("Cannot replace item with ID '%s': "
"ID does not exist" % id_)
self.items.setdefault(cstr, {})[id_] = item
return item
def delete_item(self, id_, params, context=None):
"""Remove item from model.
:param: id\_: The ID of the item to be removed
:param: params: A dict-like object containing parameters
from the request query string and body.
:param: context: Key-values providing frame of reference of request
:returns: The removed item.
:raises KeyError: Item with specified id\_ not present.
"""
cstr = self._context_str(context)
ret = self.items.setdefault(cstr, {})[id_]
del self.items[cstr][id_]
return ret
def replace_items(self, items, params, context=None):
"""Replace items in the model.
:param: items: A dict-like object containing new data
:param: params: A dict-like object containing parameters
:param: context: Key-values providing frame of reference of request
:returns: None
"""
self.items = items
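# A usage sketch: SimpleDataModel implements the minimal contract the
# handlers above probe for via hasattr(), so it can back a CollectionHandler
# or ElementHandler in tests:
_model = SimpleDataModel('notes')
_id, _ = _model.add_item({'text': 'hello'}, params={})
assert _model.get_item(_id, params={}) == {'text': 'hello'}
assert _model.get_items(params={}) == {'results': [{'text': 'hello'}]}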


@ -1,79 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_config import cfg
from oslo_log import log as logging
from oslo_middleware import request_id
import webob.dec
import webob.exc
from congress.common import config
from congress.common import wsgi
from congress import context
LOG = logging.getLogger(__name__)
class CongressKeystoneContext(wsgi.Middleware):
"""Make a request context from keystone headers."""
@webob.dec.wsgify
def __call__(self, req):
# Determine the user ID
user_id = req.headers.get('X_USER_ID')
if not user_id:
LOG.debug("X_USER_ID is not found in request")
return webob.exc.HTTPUnauthorized()
# Determine the tenant
tenant_id = req.headers.get('X_PROJECT_ID')
# Suck out the roles
roles = [r.strip() for r in req.headers.get('X_ROLES', '').split(',')]
# Human-friendly names
tenant_name = req.headers.get('X_PROJECT_NAME')
user_name = req.headers.get('X_USER_NAME')
# Use request_id if already set
req_id = req.environ.get(request_id.ENV_REQUEST_ID)
# Create a context with the authentication data
ctx = context.RequestContext(user_id, tenant_id, roles=roles,
user_name=user_name,
tenant_name=tenant_name,
request_id=req_id)
# Inject the context...
req.environ['congress.context'] = ctx
return self.application
def pipeline_factory(loader, global_conf, **local_conf):
"""Create a paste pipeline based on the 'auth_strategy' config option."""
config.set_config_defaults()
pipeline = local_conf[cfg.CONF.auth_strategy]
pipeline = pipeline.split()
filters = [loader.get_filter(n) for n in pipeline[:-1]]
app = loader.get_app(pipeline[-1])
filters.reverse()
    for filter_ in filters:
        app = filter_(app)
return app
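# A toy sketch of the composition in pipeline_factory() above: filters wrap
# the app right-to-left, so the first name in the pipeline ends up outermost:
def _f1(app):
    return lambda req: 'f1(%s)' % app(req)

def _f2(app):
    return lambda req: 'f2(%s)' % app(req)

def _app(req):
    return 'app'

_filters = [_f1, _f2]  # pipeline order: f1 f2 app
_filters.reverse()
for _flt in _filters:
    _app = _flt(_app)
assert _app('req') == 'f1(f2(app))'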


@ -1,9 +0,0 @@
# NOTE(ekcs): monkey_patch upfront to ensure all imports get patched modules
import eventlet
# NOTE(ekcs): get_hub() before monkey_patch() to workaround issue with
# import cycles in eventlet < 0.22.0;
# Based on the worked-around in eventlet with patch
# https://github.com/eventlet/eventlet/commit/b756447bab51046dfc6f1e0e299cc997ab343701
# For details please check https://bugs.launchpad.net/congress/+bug/1746136
eventlet.hubs.get_hub()
eventlet.monkey_patch()


@ -1,434 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Agent is the main entry point for the configuration validator agent.
The agent is executed on the different nodes of the cloud and sends back
configuration values and metadata to the configuration validator datasource
driver.
"""
import json
import os
import sys
from oslo_config import cfg
from oslo_config import generator
from oslo_log import log as logging
from oslo_service import service
import six
from congress.common import config
from congress.cfg_validator.agent import generator as validator_generator
from congress.cfg_validator.agent import opts as validator_opts
from congress.cfg_validator.agent import rpc
from congress.cfg_validator import parsing
from congress.cfg_validator import utils
LOG = logging.getLogger(__name__)
class Config(object):
"""Encapsulates a configuration file and its meta-data.
Attributes:
:ivar path: Path to the configuration on the local file system.
:ivar template: A Template object to use for parsing the configuration.
:ivar data: The normalized Namespace loaded by oslo-config, contains
the parsed values.
:ivar hash: Hash of the configuration file, salted with the hostname
and the template hash
        :ivar service: The associated service name
"""
# pylint: disable=protected-access
def __init__(self, path, template, service_name):
self.path = path
self.template = template
self.data = None
self.hash = None
self.service = service_name
def parse(self, host):
"""Parses the config at the path given. Updates data and hash.
        :param host: the name of the host where the config is; used for
            building a unique hash.
"""
namespaces_data = [ns.data for ns in self.template.namespaces]
conf = parsing.parse_config_file(namespaces_data, self.path)
Config.sanitize_config(conf)
self.data = conf._namespace._normalized
self.hash = utils.compute_hash(host, self.template.hash,
json.dumps(self.data, sort_keys=True))
@staticmethod
def sanitize_config(conf):
"""Sanitizes some cfg.ConfigOpts values, given its options meta-data.
:param conf: A cfg.ConfigOpts object, pre-loaded with its options
meta-data and with its configurations values.
"""
normalized = getattr(conf._namespace,
'_normalized', None)
if not normalized:
return
normalized = normalized[0]
# Obfuscate values of options declared secret
def _sanitize(opt, group_name='DEFAULT'):
if not opt.secret:
return
if group_name not in normalized:
return
if opt.name in normalized[group_name]:
normalized[group_name][opt.name] = ['*' * 4]
for option in six.itervalues(conf._opts):
_sanitize(option['opt'])
for group_name, group in six.iteritems(conf._groups):
for option in six.itervalues(group._opts):
_sanitize(option['opt'], group_name)
def get_info(self):
"""Information on the configuration file.
:return: a quadruple made of:
* the hash of the template,
* the path to the file,
            * the content,
* the service name.
"""
return {'template': self.template.hash, 'path': self.path,
'data': self.data, 'service': self.service}
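# An illustration of the masking rule implemented by _sanitize() above:
# options declared secret have their normalized values replaced by '****'
# (the option objects below are simplified stand-ins for oslo.config opts):
_normalized = {'DEFAULT': {'password': ['s3cret'], 'host': ['db1']}}

class _Opt(object):
    def __init__(self, name, secret):
        self.name, self.secret = name, secret

for _opt in (_Opt('password', True), _Opt('host', False)):
    if _opt.secret and _opt.name in _normalized['DEFAULT']:
        _normalized['DEFAULT'][_opt.name] = ['*' * 4]
assert _normalized['DEFAULT'] == {'password': ['****'], 'host': ['db1']}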
class Namespace(object):
"""Encapsulates a namespace, as defined by oslo-config-generator.
It contains the actual meta-data of the options. The data is loaded from
the service source code, by means of oslo-config-generator.
Attributes:
        :ivar name: The name, as used by oslo-config-generator.
        :ivar data: The meta-data of the configuration options.
        :ivar hash: Hash of the namespace.
"""
def __init__(self, name):
self.name = name
self.data = None
self.hash = None
@staticmethod
def load(name):
"""Loads a namespace from disk
:param name: the name of namespace to load.
:return: a fully configured namespace.
"""
namespace = Namespace(name)
saved_conf = cfg.CONF
cfg.CONF = cfg.ConfigOpts()
try:
json_data = validator_generator.generate_ns_data(name)
finally:
cfg.CONF = saved_conf
namespace.hash = utils.compute_hash(json_data)
namespace.data = json.loads(json_data)
return namespace
def get_info(self):
"""Information on the namespace
:return: a tuple containing
* data: the content of the namespace
* name: the name of the namespace
"""
return {'data': self.data, 'name': self.name}
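# Illustrative usage sketch (the namespace name is an example; any namespace
# registered with oslo-config-generator works):
#
#   ns = Namespace.load('oslo.log')
#   print(ns.hash, sorted(ns.data.keys()))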
class Template(object):
"""Describes a template, as defined by oslo-config-generator.
Attributes:
:ivar name: The name, as used by oslo-config-generator.
:ivar path: The path to the template configuration file, as defined by
oslo-config-generator, on the local file system.
:ivar output_file: The default output path for this template.
:ivar namespaces: A set of Namespace objects, which make up this
template.
"""
# pylint: disable=protected-access
def __init__(self, path, output_file):
self.path = path
self.output_file = output_file
self.namespaces = []
self.hash = None
name = os.path.basename(output_file)
self.name = os.path.splitext(name)[0] if name.endswith('.sample') \
else name
@staticmethod
def _parse_template_conf(template_path):
"""Parses a template configuration file"""
conf = cfg.ConfigOpts()
conf.register_opts(generator._generator_opts)
conf(['--config-file', template_path])
return conf.namespace, conf.output_file
@staticmethod
def load(template_path):
"""Loads a template configuration file
:param template_path: path to the template
:return: a fully configured Template object.
"""
namespaces, output_file = Template._parse_template_conf(template_path)
template = Template(template_path, output_file)
for namespace in namespaces:
template.namespaces.append(Namespace.load(namespace))
template.hash = utils.compute_hash(
sorted([ns.hash for ns in template.namespaces]))
return template
def get_info(self):
"""Info on the template
:return: a quadruple made of:
* path: the path to the template path
* name: the name of the template
* output_fle:
* namespaces: an array of namespace hashes.
"""
return {'path': self.path, 'name': self.name,
'output_file': self.output_file,
'namespaces': [ns.hash for ns in self.namespaces]}
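# Illustrative content of a template file consumed by Template.load; this is
# the standard oslo-config-generator format (paths and namespaces below are
# examples):
#
#   [DEFAULT]
#   output_file = etc/congress.conf.sample
#   namespace = congress
#   namespace = oslo.log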
class ConfigManager(object):
"""Manages the services configuration files on a node and their meta-data.
Attributes:
:ivar host: A hostname.
:ivar configs: A dict mapping config hashes to their associated Config
object.
:ivar templates: A dict mapping template hashes to their associated
Template object.
:ivar namespaces: A dict mapping namespace hashes to their associated
Namespace object.
"""
def __init__(self, host, services_files):
self.host = host
self.configs = {}
self.templates = {}
self.namespaces = {}
for service_name, files in six.iteritems(services_files):
self.register_service(service_name, files)
def get_template_by_path(self, template_path):
"""Given a path finds the corresponding template if it is registered
:param template_path: the path of the searched template
:return: None or the template
"""
for template in six.itervalues(self.templates):
if template.path == template_path:
return template
def add_template(self, template_path):
"""Adds a new template (loads it from path).
:param template_path: a valid path to the template file.
"""
template = Template.load(template_path)
self.templates[template.hash] = template
self.namespaces.update({ns.hash: ns for ns in template.namespaces})
return template
def register_config(self, config_path, template_path, service_name):
"""Register a configuration file and its associated template.
Template and config are actually parsed and loaded.
:param config_path: a valid path to the config file.
        :param template_path: a valid path to the template file.
        :param service_name: the name of the service owning the config.
        """
template = (self.get_template_by_path(template_path)
or self.add_template(template_path))
conf = Config(config_path, template, service_name)
conf.parse(self.host)
self.configs[conf.hash] = conf
        LOG.info('{hash: %s, path: %s}', conf.hash, conf.path)
def register_service(self, service_name, files):
"""Register all configs for an identified service.
Inaccessible files are ignored and files registration pursues.
:param service_name: The name of the service
:param files: A dict, mapping a configuration path to
its associated template path
"""
for config_path, template_path in six.iteritems(files):
try:
self.register_config(config_path, template_path, service_name)
            except (IOError, cfg.ConfigFilesNotFoundError, BaseException):
                LOG.error('Error while registering config %s with template'
                          ' %s for service %s',
                          config_path, template_path, service_name)
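# Illustrative structure of the services_files mapping expected by
# ConfigManager (service names and paths are hypothetical):
#
#   services_files = {
#       'nova': {'/etc/nova/nova.conf': '/opt/templates/nova.conf'},
#       'neutron': {'/etc/neutron/neutron.conf': '/opt/templates/neutron.conf'},
#   }
#   manager = ConfigManager('node-1', services_files)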
class ValidatorAgentEndpoint(object):
"""Validator Agent.
It is used as an RPC endpoint.
Attributes:
config_manager: ConfigManager object.
driver_api: RPC client to communicate with the driver.
"""
# pylint: disable=unused-argument,too-many-instance-attributes
def __init__(self, conf=None):
self.conf = conf or cfg.CONF
validator_conf = self.conf.agent
self.host = validator_conf.host
self.version = validator_conf.version
self.max_delay = validator_conf.max_delay
self.driver_api = rpc.ValidatorDriverClient()
self.services = list(validator_conf.services.keys())
service_files = validator_conf.services
self.config_manager = ConfigManager(self.host, service_files)
def publish_configs_hashes(self, context, **kwargs):
""""Sends back all configuration hashes"""
LOG.info('Sending config hashes')
conf = set(self.config_manager.configs)
self.driver_api.process_configs_hashes({}, conf, self.host)
def publish_templates_hashes(self, context, **kwargs):
""""Sends back all template hashes"""
LOG.info('Sending template hashes')
tpl = set(self.config_manager.templates)
self.driver_api.process_templates_hashes({}, tpl, self.host)
def get_namespace(self, context, **kwargs):
""""Sends back a namespace
:param context: the RPC context
:param hash: the hash of the namespace to send
:return: the namespace or None if not found
"""
ns_hash = kwargs['ns_hash']
LOG.info('Sending namespace %s' % ns_hash)
namespace = self.config_manager.namespaces.get(ns_hash, None)
if namespace is None:
return None
ret = namespace.get_info()
ret['version'] = self.version
return ret
def get_template(self, context, **kwargs):
""""Sends back a template
:param context: the RPC context
:param hash: the hash of the template to send
:return: the template or None if not found
"""
template_hash = kwargs['tpl_hash']
LOG.info('Sending template %s' % template_hash)
template = self.config_manager.templates.get(template_hash, None)
if template is None:
return None
ret = template.get_info()
ret['version'] = self.version
return ret
def get_config(self, context, **kwargs):
""""Sends back a config
:param context: the RPC context
:param hash: the hash of the config to send
:return: the config or None if not found
"""
config_hash = kwargs['cfg_hash']
LOG.info('Sending config %s' % config_hash)
conf = self.config_manager.configs.get(config_hash, None)
if conf is None:
return None
ret = conf.get_info()
ret['version'] = self.version
return ret
def main():
"""Agent entry point"""
validator_opts.register_validator_agent_opts(cfg.CONF)
config.init(sys.argv[1:])
config.setup_logging()
if not cfg.CONF.config_file:
sys.exit("ERROR: Unable to find configuration file via default "
"search paths ~/.congress/, ~/, /etc/congress/, /etc/) and "
"the '--config-file' option!")
agent = ValidatorAgentEndpoint()
server = rpc.AgentService(utils.AGENT_TOPIC, [agent])
service.launch(agent.conf, server).wait()
if __name__ == '__main__':
main()
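# Illustrative [agent] section for the configuration file read by main()
# (values are examples; the options themselves are declared in
# congress/cfg_validator/agent/opts.py):
#
#   [agent]
#   host = node-1
#   version = pike
#   services = nova: { /etc/nova/nova.conf: /opt/templates/nova.conf }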

View File

@ -1,136 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
""" Generation of JSON from oslo config options (marshalling) """
import collections
import json
import logging
from oslo_config import cfg
from oslo_config import generator
from oslo_config import types
LOG = logging.getLogger(__name__)
class OptionJsonEncoder(json.JSONEncoder):
"""Json encoder used to give a unique representation to namespaces"""
# pylint: disable=protected-access,method-hidden,too-many-branches
def default(self, o):
if isinstance(o, cfg.Opt):
return {
'kind': type(o).__name__,
'deprecated_for_removal': o.deprecated_for_removal,
'short': o.short,
'name': o.name,
'dest': o.dest,
'deprecated_since': o.deprecated_since,
'required': o.required,
'sample_default': o.sample_default,
'deprecated_opts': o.deprecated_opts,
'positional': o.positional,
'default': o.default,
'secret': o.secret,
'deprecated_reason': o.deprecated_reason,
'mutable': o.mutable,
'type': o.type,
'metavar': o.metavar,
'advanced': o.advanced,
'help': o.help
}
elif isinstance(o, (types.ConfigType, types.HostAddress)):
res = {
'type': type(o).__name__,
}
if isinstance(o, types.Number):
res['max'] = o.max
res['min'] = o.min
# When we build back the type in parsing, we can directly use
# the list of tuples from choices and it will be in a
# canonical order (not sorted but the order elements were
# added)
if isinstance(o.choices, collections.OrderedDict):
res['choices'] = list(o.choices.keys())
else:
res['choices'] = o.choices
if isinstance(o, types.Range):
res['max'] = o.max
res['min'] = o.min
if isinstance(o, types.String):
if o.regex and hasattr(o.regex, 'pattern'):
res['regex'] = o.regex.pattern
else:
res['regex'] = o.regex
res['max_length'] = o.max_length
res['quotes'] = o.quotes
res['ignore_case'] = o.ignore_case
if isinstance(o.choices, collections.OrderedDict):
res['choices'] = list(o.choices.keys())
else:
res['choices'] = o.choices
if isinstance(o, types.List):
res['item_type'] = o.item_type
res['bounds'] = o.bounds
if isinstance(o, types.Dict):
res['value_type'] = o.value_type
res['bounds'] = o.bounds
if isinstance(o, types.URI):
res['schemes'] = o.schemes
res['max_length'] = o.max_length
if isinstance(o, types.IPAddress):
if o.version_checker == o._check_ipv4:
res['version'] = 4
elif o.version_checker == o._check_ipv6:
res['version'] = 6
# Remove unused fields
remove = [k for k, v in res.items() if not v]
for k in remove:
del res[k]
return res
elif isinstance(o, cfg.DeprecatedOpt):
return {
'name': o.name,
'group': o.group
}
elif isinstance(o, cfg.OptGroup):
return {
'title': o.title,
'help': o.help
}
# TODO(vmatt): some options (auth_type, auth_section) from
# keystoneauth1, loaded by keystonemiddleware.auth,
# are not defined conventionally (stable/ocata).
elif isinstance(o, type):
return {
'type': 'String'
}
else:
return {
'type': repr(o)
}
# pylint: disable=protected-access
def generate_ns_data(namespace):
"""Generate a json string containing the namespace"""
groups = generator._get_groups(generator._list_opts([namespace]))
return OptionJsonEncoder(sort_keys=True).encode(groups)
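# Illustrative sketch of the encoder output for a single option (the field
# list is abbreviated here; the actual output contains every attribute
# handled in default() above):
#
#   enc = OptionJsonEncoder(sort_keys=True)
#   enc.encode(cfg.StrOpt('host', help='hostname'))
#   # -> '{"advanced": false, ..., "kind": "StrOpt", "name": "host", ...}'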

View File

@ -1,58 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Options for the config validator agent"""
from oslo_config import cfg
from oslo_config import types
from oslo_log import log as logging
GROUP = cfg.OptGroup(
name='agent',
title='Congress agent options for config datasource')
AGT_OPTS = [
cfg.StrOpt('host', required=True),
cfg.StrOpt('version', required=True, help='OpenStack version'),
    cfg.IntOpt('max_delay', default=10,
               help='The maximum delay an agent will wait before sending '
                    'its files. The smaller the value, the more likely '
                    'congestion is to happen.'),
cfg.Opt(
'services',
help='Services activated on this node and configuration files',
default={},
sample_default=(
'nova: { /etc/nova/nova.conf:/path1.conf }, '
            'neutron: { /etc/neutron/neutron.conf:/path2.conf },'),
type=types.Dict(
bounds=False,
value_type=types.Dict(bounds=True, value_type=types.String()))),
]
def register_validator_agent_opts(conf):
"""Register the options of the agent in the config object"""
conf.register_group(GROUP)
conf.register_opts(AGT_OPTS, group=GROUP)
logging.register_options(conf)
def list_opts():
"""List agent options"""
return [(GROUP, AGT_OPTS)]
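# Usage sketch, mirroring what the agent entry point does (the config path
# is hypothetical):
#
#   conf = cfg.ConfigOpts()
#   register_validator_agent_opts(conf)
#   conf(['--config-file', '/etc/congress/agent.conf'])
#   print(conf.agent.host, conf.agent.services)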

View File

@ -1,89 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Handling of RPC
Communication with the datasource driver on the config validator agent
"""
from oslo_config import cfg
import oslo_messaging as messaging
from oslo_service import service
from congress.dse2 import dse_node as dse
DRIVER_TOPIC = (dse.DseNode.SERVICE_TOPIC_PREFIX + 'config' + '-'
+ cfg.CONF.dse.bus_id)
class AgentService(service.Service):
"""Definition of the agent service implemented as an RPC endpoint."""
def __init__(self, topic, endpoints, conf=None):
super(AgentService, self).__init__()
self.conf = conf or cfg.CONF
self.host = self.conf.agent.host
self.topic = topic
self.endpoints = endpoints
self.transport = messaging.get_transport(self.conf)
self.target = messaging.Target(exchange=dse.DseNode.EXCHANGE,
topic=self.topic,
version=dse.DseNode.RPC_VERSION,
server=self.host)
self.server = messaging.get_rpc_server(self.transport,
self.target,
self.endpoints,
executor='eventlet')
def start(self):
super(AgentService, self).start()
self.server.start()
def stop(self, graceful=False):
self.server.stop()
super(AgentService, self).stop(graceful)
class ValidatorDriverClient(object):
"""RPC Proxy used by the agent to access the driver."""
def __init__(self, topic=DRIVER_TOPIC):
transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(exchange=dse.DseNode.EXCHANGE,
topic=topic,
version=dse.DseNode.RPC_VERSION)
self.client = messaging.RPCClient(transport, target)
# block calling thread
def process_templates_hashes(self, context, hashes, host):
"""Sends a list of template hashes to the driver for processing
:param hashes: the array of hashes
:param host: the host they come from.
"""
cctx = self.client.prepare()
return cctx.call(context, 'process_templates_hashes', hashes=hashes,
host=host)
# block calling thread
def process_configs_hashes(self, context, hashes, host):
"""Sends a list of config files hashes to the driver for processing
:param hashes: the array of hashes
:param host: the host they come from.
"""
cctx = self.client.prepare()
return cctx.call(context, 'process_configs_hashes',
hashes=hashes, host=host)
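# Illustrative call-flow sketch: the agent pushes the set of hashes it holds;
# the driver then asks back, through the agent endpoints, for any hash it does
# not know yet (the hash value and host below are examples):
#
#   client = ValidatorDriverClient()
#   client.process_configs_hashes({}, {'some-config-hash'}, 'node-1')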

View File

@ -1,242 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Unmarshaling of options sent by the agent."""
import inspect
import sys
from oslo_config import cfg
from oslo_config import types
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
import six
from congress.cfg_validator import utils
LOG = logging.getLogger(__name__)
# pylint: disable=too-few-public-methods
class IdentifiedOpt(cfg.Opt):
"""A subclass of option that adds a unique id and a namespace id
ids are based on hashes
"""
def __init__(self, id_, ns_id, **kwargs):
super(IdentifiedOpt, self).__init__(**kwargs)
self.id_ = id_
self.ns_id = ns_id
def parse_value(cfgtype, value):
"""Parse and validate a value's type, raising error if check fails.
:raises: ValueError, TypeError
"""
return cfgtype(value)
def make_type(type_descr):
"""Declares a new type
:param type_descr: a type description read from json.
:return: an oslo config type
"""
type_name = type_descr['type']
type_descr = dict(type_descr)
del type_descr['type']
if 'item_type' in type_descr:
item_type = make_type(type_descr['item_type'])
type_descr['item_type'] = item_type
if 'value_type' in type_descr:
value_type = make_type(type_descr['value_type'])
type_descr['value_type'] = value_type
try:
return_obj = getattr(types, type_name)(**type_descr)
except AttributeError:
LOG.warning('Custom type %s is not defined in oslo_config.types and '
'thus cannot be reconstructed. The type constraints will '
'not be enforced.', type_name)
        # use the identity function as the type param to oslo_config.cfg.Opt,
        # not enforcing any type constraints
return_obj = lambda x: x
return return_obj
# This function must never fail even if the content/metadata
# of the option were weird.
# pylint: disable=broad-except
def make_opt(option, opt_hash, ns_hash):
"""Declares a new group
:param name: an option retrieved from json.
:param opt_hash: the option hash
:param ns_hash: the hash of the namespace defining it.
:return: an oslo config option representation augmented with the hashes.
"""
name = option.get('name', None)
deprecateds = []
if option.get('deprecated_opts', None):
for depr_descr in option.get('deprecated_opts', {}):
depr_name = depr_descr.get('name', None)
if depr_name is None:
depr_name = name
depr_opt = cfg.DeprecatedOpt(depr_name,
depr_descr.get('group', None))
deprecateds.append(depr_opt)
if 'type' in option:
cfgtype = make_type(option['type'])
else:
cfgtype = None
default = option.get('default', None)
if default and cfgtype:
try:
default = cfgtype(default)
except Exception:
_, err, _ = sys.exc_info()
            LOG.error('Invalid default value (%s, %s): %s',
                      name, default, err)
try:
cfgopt = IdentifiedOpt(
id_=opt_hash,
ns_id=ns_hash,
name=name,
type=cfgtype,
dest=option.get('dest', None),
default=default,
positional=option.get('positional', None),
help=option.get('help', None),
secret=option.get('secret', None),
required=option.get('required', None),
sample_default=option.get('sample_default', None),
deprecated_for_removal=option.get('deprecated_for_removal', None),
deprecated_reason=option.get('deprecated_reason', None),
deprecated_opts=deprecateds,
mutable=option.get('mutable', None))
except Exception:
cfgopt = None
_, err, _ = sys.exc_info()
        LOG.error('Invalid option definition (%s in %s): %s',
                  name, ns_hash, err)
return cfgopt
def make_group(name, title, help_msg):
"""Declares a new group
:param name: group name
:param title: group title
:param help_msg: descriptive help message
:return: an oslo config group representation or None for default.
"""
if name == 'DEFAULT':
return None
return cfg.OptGroup(name=name, title=title, help=help_msg)
def add_namespace(conf, ns_dict, ns_hash):
"""Add options from a kind to an already existing config"""
for group_name, group in six.iteritems(ns_dict):
try:
title = group['object'].get('title', None)
help_msg = group['object'].get('help', None)
except AttributeError:
title = help_msg = None
cfggroup = make_group(group_name, title, help_msg)
# Get back the instance already stored or register the group.
if cfggroup is not None:
# pylint: disable=protected-access
cfggroup = conf._get_group(cfggroup, autocreate=True)
for namespace in group['namespaces']:
for option in namespace[1]:
opt_hash = utils.compute_hash(ns_hash, group_name,
option['name'])
cfgopt = make_opt(option, opt_hash, ns_hash)
conf.register_opt(cfgopt, cfggroup)
def construct_conf_manager(namespaces):
"""Construct a config manager from a list of namespaces data.
Register options of given namespaces into a cfg.ConfigOpts object.
A namespaces dict is typically cfg_validator.generator output. Options are
    provided a hash as an extra field.
:param namespaces: A list of dict, containing options metadata.
:return: A cfg.ConfigOpts.
"""
conf = cfg.ConfigOpts()
for ns_dict in namespaces:
ns_hash = utils.compute_hash(json.dumps(ns_dict, sort_keys=True))
add_namespace(conf, ns_dict, ns_hash)
return conf
def add_parsed_conf(conf, normalized):
"""Add a normalized values container to a config manager.
:param conf: A cfg.ConfigOpts object.
:param normalized: A normalized values container, as introduced by oslo
cfg._Namespace.
"""
if conf:
# pylint: disable=protected-access
conf._namespace = cfg._Namespace(conf)
        # oslo.config version 6.0.1 added an extra arg to
        # _add_parsed_config_file; we determine the number of args required
        # in order to call it appropriately
if six.PY2:
_add_parsed_config_file_args_len = len(inspect.getargspec(
conf._namespace._add_parsed_config_file).args) - 1
# - 1 to not count the first param self
else:
_add_parsed_config_file_args_len = len(inspect.signature(
conf._namespace._add_parsed_config_file).parameters)
if _add_parsed_config_file_args_len == 3: # oslo.config>=6.0.1
conf._namespace._add_parsed_config_file(
'<memory>', [], normalized[0])
else:
conf._namespace._add_parsed_config_file([], normalized[0])
def parse_config_file(namespaces, path):
"""Parse a config file from its pre-loaded namespaces.
:param namespaces: A list of dict, containing namespaces data.
:param path: Path to the configuration file to parse.
    :return: a cfg.ConfigOpts object loaded with the parsed values.
"""
conf = construct_conf_manager(namespaces)
# pylint: disable=protected-access
conf._namespace = cfg._Namespace(conf)
cfg.ConfigParser._parse_file(path, conf._namespace)
return conf
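# Illustrative sketch: rebuilding option meta-data on the driver side and
# parsing a local file against it (the namespace name and path are examples):
#
#   from congress.cfg_validator.agent import generator as validator_generator
#   ns = json.loads(validator_generator.generate_ns_data('oslo.log'))
#   conf = parse_config_file([ns], '/etc/congress/congress.conf')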

View File

@ -1,67 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Support functions for cfg_validator"""
import uuid
from oslo_log import log as logging
from congress.api import base
from congress import exception
from congress import utils
LOG = logging.getLogger(__name__)
#: Topic for RPC between cfg validator driver (client) and the agents (server)
AGENT_TOPIC = 'congress-validator-agent'
NAMESPACE_CONGRESS = uuid.uuid3(
uuid.NAMESPACE_URL,
'http://openstack.org/congress/agent')
def compute_hash(*args):
"""computes a hash from the arguments. Not cryptographically strong."""
inputs = ''.join([str(arg) for arg in args])
return str(uuid.uuid3(NAMESPACE_CONGRESS, inputs))
def cfg_value_to_congress(value):
"""Sanitize values for congress
values of log formatting options typically contains
'%s' etc, which should not be put in datalog
"""
if isinstance(value, str):
value = value.replace('%', '')
if value is None:
return ''
return utils.value_to_congress(value)
def add_rule(bus, policy_name, rules):
"Adds a policy and rules to the engine"
try:
policy_metadata = bus.rpc(
base.ENGINE_SERVICE_ID,
'persistent_create_policy_with_rules',
{'policy_rules_obj': {
"name": policy_name,
"kind": "nonrecursive",
"rules": rules}})
return policy_metadata
except exception.CongressException as err:
LOG.error(err)
return None
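# Examples of the sanitization performed by cfg_value_to_congress (assuming
# utils.value_to_congress passes plain strings through unchanged):
#
#   cfg_value_to_congress('%(asctime)s %(message)s')  # -> '(asctime)s (message)s'
#   cfg_value_to_congress(None)                       # -> ''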

View File

@ -1,67 +0,0 @@
# Copyright (c) 2018 NEC, Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from oslo_config import cfg
from oslo_upgradecheck import upgradecheck
from congress.db import api as db
CONF = cfg.CONF
class Checks(upgradecheck.UpgradeCommands):
"""Contains upgrade checks
Various upgrade checks should be added as separate methods in this class
and added to _upgrade_checks tuple.
"""
def _check_monasca_webhook_driver(self):
"""Check existence of monasca webhook datasource"""
session = db.get_session()
result = session.execute(
"SELECT count(*) FROM datasources WHERE driver = 'monasca_webhook'"
).scalar()
if result == 0:
return upgradecheck.Result(
upgradecheck.Code.SUCCESS,
'No currently configured data source uses the Monasca Webhook '
'data source driver, which contains backward-incompatible '
'schema changes.')
else:
return upgradecheck.Result(
upgradecheck.Code.WARNING,
                'There are currently {} configured data sources which use the '
'Monasca Webhook data source driver. Because this version of '
'Congress includes backward-incompatible schema changes to '
'the driver, Congress policies referring to Monasca Webhook '
'data may need to be adapted to the new schema.'.format(
result))
_upgrade_checks = (
('Monasca Webhook Driver', _check_monasca_webhook_driver),
)
def main():
return upgradecheck.main(
CONF, project='congress', upgrade_command=Checks())
if __name__ == '__main__':
sys.exit(main())
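# Illustrative sketch of what the check returns (assumes a configured and
# populated database; Code.WARNING is produced whenever at least one
# monasca_webhook datasource exists):
#
#   result = Checks()._check_monasca_webhook_driver()
#   assert result.code in (upgradecheck.Code.SUCCESS, upgradecheck.Code.WARNING)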

View File

@ -1,197 +0,0 @@
# Copyright 2014 VMware
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import os
import socket
from oslo_config import cfg
from oslo_db import options as db_options
from oslo_log import log as logging
from oslo_middleware import cors
from congress import version
LOG = logging.getLogger(__name__)
core_opts = [
# TODO(ipv6): consider changing default to '::' for ipv6, breaks ipv4-only
cfg.HostAddressOpt('bind_host', default='0.0.0.0',
help="The host IP to bind to"),
cfg.PortOpt('bind_port', default=1789,
help="The port to bind to"),
cfg.IntOpt('max_simultaneous_requests', default=1024,
help="Thread pool size for eventlet."),
cfg.BoolOpt('tcp_keepalive', default=False,
                help='Set this to true to enable TCP_KEEPALIVE socket option '
'on connections received by the API server.'),
cfg.IntOpt('tcp_keepidle',
default=600,
help='Sets the value of TCP_KEEPIDLE in seconds for each '
'server socket. Only applies if tcp_keepalive is '
'true. Not supported on OS X.'),
cfg.IntOpt('api_workers', default=1,
help='The number of worker processes to serve the congress '
'API application.'),
cfg.StrOpt('api_paste_config', default='api-paste.ini',
help=_('The API paste config file to use')),
cfg.StrOpt('auth_strategy', default='keystone',
help=_('The type of authentication to use')),
cfg.ListOpt('drivers',
default=[],
deprecated_for_removal=True,
deprecated_reason='automatically loads all configured drivers',
help=_('List of driver class paths to import.')),
cfg.ListOpt('disabled_drivers',
default=[],
help=_('List of driver names to be disabled. For example, '
'disabled_drivers=nova, plexxi')),
cfg.ListOpt('custom_driver_endpoints',
default=[],
help=_("List of third party endpoints to be loaded seperated "
"by comma. For example custom_driver_endpoints = "
"'test=congress.datasources.test_driver:TestDriver',")),
cfg.IntOpt('datasource_sync_period', default=60,
help='The number of seconds to wait between synchronizing '
'datasource config from the database'),
cfg.BoolOpt('enable_execute_action', default=True,
help='Set the flag to False if you don\'t want Congress '
'to execute actions.'),
cfg.BoolOpt('replicated_policy_engine', default=False,
help='Set the flag to use congress with replicated policy '
'engines.'),
cfg.StrOpt('policy_library_path', default='/etc/congress/library',
help=_('The directory containing library policy files.')),
cfg.StrOpt('encryption_key_path', default='/etc/congress/keys',
help=_('The directory containing encryption keys.')),
]
# Register the configuration options
cfg.CONF.register_opts(core_opts)
dse_opts = [
cfg.StrOpt('bus_id', default='bus',
help='Unique ID of this DSE bus'),
cfg.IntOpt('ping_timeout', default=5,
help='RPC short timeout in seconds; used to ping destination'),
cfg.IntOpt('long_timeout', default=120,
help='RPC long timeout in seconds; used on potentially long '
'running requests such as datasource action and PE row '
'query'),
cfg.IntOpt('time_to_resub', default=10,
help='Time in seconds which a subscriber will wait for missing '
'update before attempting to resubscribe from publisher'),
cfg.BoolOpt('execute_action_retry', default=False,
help='Set the flag to True to make Congress retry execute '
'actions; may cause duplicate executions.'),
cfg.IntOpt('execute_action_retry_timeout', default=600,
help='The number of seconds to retry execute action before '
'giving up. Zero or negative value means never give up.'),
]
# Register dse opts
cfg.CONF.register_opts(dse_opts, group='dse')
# json ingester opts
json_opts = [
cfg.BoolOpt('enable', default=False,
                help='Set the flag to True to enable the experimental JSON '
                     'ingester feature.'),
cfg.StrOpt('config_path', default='/etc/congress/json_ingesters',
help=_('The directory for JSON ingester config files.')),
cfg.StrOpt('config_reusables_path',
default='/etc/congress/config_reusables.yaml',
help=_('The path to reusables YAML file for JSON '
'ingesters config.')),
cfg.StrOpt('db_connection',
help='The PostgreSQL connection string to use to connect to '
'the database.',
secret=True),
]
# Register json ingester opts
cfg.CONF.register_opts(json_opts, group='json_ingester')
logging.register_options(cfg.CONF)
_SQL_CONNECTION_DEFAULT = 'sqlite://'
# Update the default QueuePool parameters. These can be tweaked by the
# configuration variables - max_pool_size, max_overflow and pool_timeout
db_options.set_defaults(cfg.CONF,
connection=_SQL_CONNECTION_DEFAULT,
max_pool_size=10, max_overflow=20, pool_timeout=10)
# Command line options
cli_opts = [
cfg.BoolOpt('datasources', default=False,
help='Use this option to deploy the datasources.'),
cfg.BoolOpt('api', default=False,
help='Use this option to deploy API service'),
cfg.BoolOpt('policy-engine', default=False,
help='Use this option to deploy policy engine service.'),
cfg.StrOpt('node-id', default=socket.gethostname(),
help='A unique ID for this node. Must be unique across all '
'nodes with the same bus_id.'),
cfg.BoolOpt('delete-missing-driver-datasources', default=False,
help='Use this option to delete datasources with missing '
'drivers from DB')
]
cfg.CONF.register_cli_opts(cli_opts)
def init(args, **kwargs):
cfg.CONF(args=args, project='congress',
version='%%(prog)s %s' % version.version_info.release_string(),
**kwargs)
def setup_logging():
"""Sets up logging for the congress package."""
logging.setup(cfg.CONF, 'congress')
def find_paste_config():
config_path = cfg.CONF.find_file(cfg.CONF.api_paste_config)
if not config_path:
raise cfg.ConfigFilesNotFoundError(
config_files=[cfg.CONF.api_paste_config])
config_path = os.path.abspath(config_path)
LOG.info(("Config paste file: %s"), config_path)
return config_path
def set_config_defaults():
"""This method updates all configuration default values."""
cors.set_defaults(
allow_headers=['X-Auth-Token',
'X-OpenStack-Request-ID',
'X-Identity-Status',
'X-Roles',
'X-Service-Catalog',
'X-User-Id',
'X-Tenant-Id'],
expose_headers=['X-Auth-Token',
'X-OpenStack-Request-ID',
'X-Subject-Token',
'X-Service-Token'],
allow_methods=['GET',
'PUT',
'POST',
'DELETE',
'PATCH']
)
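# Illustrative minimal congress.conf exercising the options registered above
# (values shown are the documented defaults, except bind_host):
#
#   [DEFAULT]
#   bind_host = 127.0.0.1
#   bind_port = 1789
#   auth_strategy = keystone
#
#   [dse]
#   bus_id = bus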

View File

@ -1,225 +0,0 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import errno
import re
import socket
import ssl
import sys
import eventlet
import eventlet.wsgi
import greenlet
import json
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import service
from paste import deploy
from congress.dse2 import dse_node
from congress import exception
LOG = logging.getLogger(__name__)
class EventletFilteringLogger(object):
# NOTE(morganfainberg): This logger is designed to filter out specific
# Tracebacks to limit the amount of data that eventlet can log. In the
# case of broken sockets (EPIPE and ECONNRESET), we are seeing a huge
# volume of data being written to the logs due to ~14 lines+ per traceback.
# The traceback in these cases are, at best, useful for limited debugging
# cases.
def __init__(self, logger):
self.logger = logger
self.level = logger.logger.level
self.regex = re.compile(r'errno (%d|%d)' %
(errno.EPIPE, errno.ECONNRESET), re.IGNORECASE)
def write(self, msg):
m = self.regex.search(msg)
if m:
self.logger.log(logging.logging.DEBUG,
'Error(%s) writing to socket.',
m.group(1))
else:
self.logger.log(self.level, msg.rstrip())
class Server(service.Service):
"""Server class to Data Service Node without API services."""
def __init__(self, name, bus_id=None):
super(Server, self).__init__()
self.name = name
self.node = dse_node.DseNode(cfg.CONF, self.name, [],
partition_id=bus_id)
def start(self):
self.node.start()
def stop(self):
self.node.stop()
class APIServer(service.ServiceBase):
"""Server class to Data Service Node with API services.
This server has All API services in itself.
"""
def __init__(self, app_conf, name, host=None, port=None, threads=1000,
keepalive=False, keepidle=None, bus_id=None, **kwargs):
self.app_conf = app_conf
self.name = name
self.application = None
self.host = host or '0.0.0.0'
self.port = port or 0
self.pool = eventlet.GreenPool(threads)
self.socket_info = {}
self.greenthread = None
self.do_ssl = False
self.cert_required = False
self.keepalive = keepalive
self.keepidle = keepidle
self.socket = None
self.bus_id = bus_id
# store API, policy-engine, datasource flags; for use in start()
self.flags = kwargs
# TODO(masa): To support Active-Active HA with DseNode on any
# driver of oslo.messaging, make sure to use same partition_id
# among multi DseNodes sharing same message topic namespace.
def start(self, key=None, backlog=128):
"""Run a WSGI server with the given application."""
if self.socket is None:
self.listen(key=key, backlog=backlog)
try:
kwargs = {'global_conf':
{'node_id': self.name,
'bus_id': self.bus_id,
'flags': json.dumps(self.flags)}}
self.application = deploy.loadapp('config:%s' % self.app_conf,
name='congress', **kwargs)
except Exception:
            LOG.exception('Failed to start %s server', self.name)
            raise exception.CongressException(
                'Failed to initialize %s server' % self.name)
self.greenthread = self.pool.spawn(self._run,
self.application,
self.socket)
def listen(self, key=None, backlog=128):
"""Create and start listening on socket.
Call before forking worker processes.
Raises Exception if this has already been called.
"""
if self.socket is not None:
raise Exception(_('Server can only listen once.'))
        LOG.info('Starting %(arg0)s on %(host)s:%(port)s',
{'arg0': sys.argv[0],
'host': self.host,
'port': self.port})
# TODO(dims): eventlet's green dns/socket module does not actually
# support IPv6 in getaddrinfo(). We need to get around this in the
# future or monitor upstream for a fix
info = socket.getaddrinfo(self.host,
self.port,
socket.AF_UNSPEC,
socket.SOCK_STREAM)[0]
_socket = eventlet.listen(info[-1],
family=info[0],
backlog=backlog)
if key:
self.socket_info[key] = _socket.getsockname()
# SSL is enabled
if self.do_ssl:
if self.cert_required:
cert_reqs = ssl.CERT_REQUIRED
else:
cert_reqs = ssl.CERT_NONE
sslsocket = eventlet.wrap_ssl(_socket, certfile=self.certfile,
keyfile=self.keyfile,
server_side=True,
cert_reqs=cert_reqs,
ca_certs=self.ca_certs)
_socket = sslsocket
# Optionally enable keepalive on the wsgi socket.
if self.keepalive:
_socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# This option isn't available in the OS X version of eventlet
if hasattr(socket, 'TCP_KEEPIDLE') and self.keepidle is not None:
_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
self.keepidle)
self.socket = _socket
def set_ssl(self, certfile, keyfile=None, ca_certs=None,
cert_required=True):
self.certfile = certfile
self.keyfile = keyfile
self.ca_certs = ca_certs
self.cert_required = cert_required
self.do_ssl = True
def kill(self):
if self.greenthread is not None:
self.greenthread.kill()
def stop(self):
self.kill()
# We're not able to stop the DseNode in this case. Is there a need to
# stop the ApiServer without also exiting the process?
def reset(self):
LOG.info("reset() not implemented yet")
def wait(self):
"""Wait until all servers have completed running."""
try:
self.pool.waitall()
except KeyboardInterrupt:
pass
except greenlet.GreenletExit:
pass
def _run(self, application, socket):
"""Start a WSGI server in a new green thread."""
logger = logging.getLogger('eventlet.wsgi.server')
try:
eventlet.wsgi.server(socket, application, max_size=1000,
log=EventletFilteringLogger(logger),
debug=False)
except greenlet.GreenletExit:
# Wait until all servers have completed running
pass
except Exception:
LOG.exception(_('Server error'))
raise
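# Illustrative startup sketch for the API server (the paste config path and
# the flag set are examples):
#
#   server = APIServer('/etc/congress/api-paste.ini', 'congress-api',
#                      host='0.0.0.0', port=1789, api=True)
#   server.start()
#   server.wait()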

View File

@ -1,17 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from congress.common.policies import base
def list_rules():
return base.list_rules()

View File

@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault(
name='context_is_admin',
check_str='role:admin'
),
policy.RuleDefault(
name='admin_only',
check_str='rule:context_is_admin'
),
policy.RuleDefault(
name='regular_user',
check_str='',
description='The policy rule defining who is a regular user. This '
'rule can be overridden by, for example, a role check.'
),
policy.RuleDefault(
name='default',
check_str='rule:admin_only',
description='The default policy rule to apply when enforcing API '
'permissions. By default, all APIs are admin only. '
'This rule can be overridden (say by rule:regular_user) '
'to allow non-admins to access Congress APIs.'
)
]
def list_rules():
return rules
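# Illustrative oslo.policy override opening read access to non-admins; the
# JSON below uses the standard oslo.policy file syntax and would go in the
# deployment's policy file:
#
#   {
#       "default": "rule:regular_user"
#   }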

View File

@ -1,140 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Policy Engine For Auth on API calls."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_config import cfg
from oslo_policy import policy
from congress.common import policies
from congress import exception
_ENFORCER = None
def reset():
global _ENFORCER
if _ENFORCER:
_ENFORCER.clear()
_ENFORCER = None
def init(policy_file=None, rules=None, default_rule=None, use_conf=True):
"""Init an Enforcer class.
:param: policy_file: Custom policy file to use, if none is specified,
`CONF.policy_file` will be used.
:param: rules: Default dictionary / Rules to use. It will be
considered just in the first instantiation.
:param: default_rule: Default rule to use, CONF.default_rule will
be used if none is specified.
:param: use_conf: Whether to load rules from config file.
"""
global _ENFORCER
if not _ENFORCER:
_ENFORCER = policy.Enforcer(cfg.CONF, policy_file=policy_file,
rules=rules,
default_rule=default_rule,
use_conf=use_conf)
register_rules(_ENFORCER)
def register_rules(enforcer):
enforcer.register_defaults(policies.list_rules())
def set_rules(rules, overwrite=True, use_conf=False):
"""Set rules based on the provided dict of rules.
:param: rules: New rules to use. It should be an instance of dict.
:param: overwrite: Whether to overwrite current rules or update them
with the new rules.
:param: use_conf: Whether to reload rules from config file.
"""
init(use_conf=False)
_ENFORCER.set_rules(rules, overwrite, use_conf)
def get_enforcer():
cfg.CONF([], project='congress')
init()
return _ENFORCER
def enforce(context, action, target, do_raise=True, exc=None):
"""Verifies that the action is valid on the target in this context.
:param: context: congress context
:param: action: string representing the action to be checked
this should be colon separated for clarity.
i.e. ``compute:create_instance``,
``compute:attach_volume``,
``volume:attach_volume``
:param: target: dictionary representing the object of the action
for object creation this should be a dictionary representing the
location of the object e.g. ``{'project_id': context.project_id}``
:param: do_raise: if True (the default), raises PolicyNotAuthorized;
if False, returns False
:raises congress.exception.PolicyNotAuthorized: if verification fails
and do_raise is True.
:return: returns a non-False value (not necessarily "True") if
authorized, and the exact value False if not authorized and
do_raise is False.
"""
init()
credentials = context.to_dict()
if not exc:
exc = exception.PolicyNotAuthorized
return _ENFORCER.enforce(action, target, credentials, do_raise=do_raise,
exc=exc, action=action)
def check_is_admin(context):
"""Whether or not roles contains 'admin' role according to policy setting.
"""
init()
# the target is user-self
credentials = context.to_dict()
target = credentials
return _ENFORCER.enforce('context_is_admin', target, credentials)
@policy.register('is_admin')
class IsAdminCheck(policy.Check):
"""An explicit check for is_admin."""
def __init__(self, kind, match):
"""Initialize the check."""
self.expected = (match.lower() == 'true')
super(IsAdminCheck, self).__init__(kind, str(self.expected))
def __call__(self, target, creds, enforcer):
"""Determine whether is_admin matches the requested value."""
return creds['is_admin'] == self.expected
def get_rules():
if _ENFORCER:
return _ENFORCER.rules
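# Illustrative enforcement sketch (the action name is an example, not a
# documented Congress policy action, and the context module path is assumed):
#
#   from congress import context as congress_context
#   ctx = congress_context.get_admin_context()
#   enforce(ctx, 'congress:create_policy', {'project_id': ctx.project_id})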

View File

@ -1,253 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utility methods for working with WSGI servers."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import sys
import routes.middleware
import webob.dec
import webob.exc
class Request(webob.Request):
pass
class Application(object):
"""Base WSGI application wrapper. Subclasses need to implement __call__."""
@classmethod
def factory(cls, global_config, **local_config):
"""Used for paste app factories in paste.deploy config files.
Any local configuration (that is, values under the [app:APPNAME]
section of the paste config) will be passed into the `__init__` method
as kwargs.
A hypothetical configuration would look like:
[app:wadl]
latest_version = 1.3
paste.app_factory = nova.api.fancy_api:Wadl.factory
which would result in a call to the `Wadl` class as
import nova.api.fancy_api
fancy_api.Wadl(latest_version='1.3')
You could of course re-implement the `factory` method in subclasses,
but using the kwarg passing it shouldn't be necessary.
"""
return cls(**local_config)
def __call__(self, environ, start_response):
r"""Subclasses will probably want to implement __call__ like this:
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
# Any of the following objects work as responses:
# Option 1: simple string
res = 'message\n'
# Option 2: a nicely formatted HTTP exception page
res = exc.HTTPForbidden(explanation='Nice try')
# Option 3: a webob Response object (in case you need to play with
# headers, or you want to be treated like an iterable, or or or)
res = Response();
res.app_iter = open('somefile')
# Option 4: any wsgi app to be run next
res = self.application
# Option 5: you can get a Response object for a wsgi app, too, to
# play with headers etc
res = req.get_response(self.application)
# You can then just return your response...
return res
# ... or set req.response and return None.
req.response = res
See the end of http://pythonpaste.org/webob/modules/dec.html
for more info.
"""
raise NotImplementedError(_('You must implement __call__'))
class Middleware(Application):
"""Base WSGI middleware.
These classes require an application to be
initialized that will be called next. By default the middleware will
simply call its wrapped app, or you can override __call__ to customize its
behavior.
"""
@classmethod
def factory(cls, global_config, **local_config):
"""Used for paste app factories in paste.deploy config files.
Any local configuration (that is, values under the [filter:APPNAME]
section of the paste config) will be passed into the `__init__` method
as kwargs.
A hypothetical configuration would look like:
[filter:analytics]
redis_host = 127.0.0.1
paste.filter_factory = nova.api.analytics:Analytics.factory
which would result in a call to the `Analytics` class as
import nova.api.analytics
analytics.Analytics(app_from_paste, redis_host='127.0.0.1')
You could of course re-implement the `factory` method in subclasses,
but using the kwarg passing it shouldn't be necessary.
"""
def _factory(app):
return cls(app, **local_config)
return _factory
def __init__(self, application):
self.application = application
def process_request(self, req):
"""Called on each request.
If this returns None, the next application down the stack will be
executed. If it returns a response then that response will be returned
and execution will stop here.
"""
return None
def process_response(self, response):
"""Do whatever you'd like to the response."""
return response
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
response = self.process_request(req)
if response:
return response
response = req.get_response(self.application)
return self.process_response(response)
class Debug(Middleware):
"""Helper class for debugging a WSGI application.
Can be inserted into any WSGI application chain to get information
about the request and response.
"""
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
print(('*' * 40) + ' REQUEST ENVIRON')
for key, value in req.environ.items():
print(key, '=', value)
print()
resp = req.get_response(self.application)
print(('*' * 40) + ' RESPONSE HEADERS')
for (key, value) in resp.headers.items():
print(key, '=', value)
print()
resp.app_iter = self.print_generator(resp.app_iter)
return resp
@staticmethod
def print_generator(app_iter):
"""Iterator that prints the contents of a wrapper string."""
print(('*' * 40) + ' BODY')
for part in app_iter:
sys.stdout.write(part)
sys.stdout.flush()
yield part
print()
class Router(object):
"""WSGI middleware that maps incoming requests to WSGI apps."""
def __init__(self, mapper):
"""Create a router for the given routes.Mapper.
Each route in `mapper` must specify a 'controller', which is a
WSGI app to call. You'll probably want to specify an 'action' as
well and have your controller be an object that can route
the request to the action-specific method.
Examples:
mapper = routes.Mapper()
sc = ServerController()
# Explicit mapping of one route to a controller+action
mapper.connect(None, '/svrlist', controller=sc, action='list')
# Actions are all implicitly defined
mapper.resource('server', 'servers', controller=sc)
# Pointing to an arbitrary WSGI app. You can specify the
# {path_info:.*} parameter so the target app can be handed just that
# section of the URL.
mapper.connect(None, '/v1.0/{path_info:.*}', controller=BlogApp())
"""
self.map = mapper
self._router = routes.middleware.RoutesMiddleware(self._dispatch,
self.map)
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
"""Route the incoming request to a controller based on self.map.
If no match, return a 404.
"""
return self._router
@staticmethod
@webob.dec.wsgify(RequestClass=Request)
def _dispatch(req):
"""Dispatch the request to the appropriate controller.
Called by self._router after matching the incoming request to a route
and putting the information into req.environ. Either returns 404
or the routed WSGI app's response.
"""
match = req.environ['wsgiorg.routing_args'][1]
if not match:
return webob.exc.HTTPNotFound()
app = match['controller']
return app

View File

@ -1,149 +0,0 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""RequestContext: context for requests that persist through congress."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import copy
import datetime
from oslo_context import context as common_context
from oslo_log import log as logging
from congress.common import policy
LOG = logging.getLogger(__name__)
class RequestContext(common_context.RequestContext):
"""Security context and request information.
Represents the user taking a given action within the system.
"""
FROM_DICT_EXTRA_KEYS = [
'user_id', 'tenant_id', 'project_id', 'read_deleted', 'timestamp',
'tenant_name', 'project_name', 'user_name',
]
def __init__(self, user_id, tenant_id, is_admin=None, read_deleted="no",
roles=None, timestamp=None, load_admin_roles=True,
request_id=None, tenant_name=None, user_name=None,
overwrite=True, **kwargs):
"""Object initialization.
:param: read_deleted: 'no' indicates deleted records are hidden, 'yes'
indicates deleted records are visible, 'only' indicates that
*only* deleted records are visible.
:param: overwrite: Set to False to ensure that the greenthread local
copy of the index is not overwritten.
:param: kwargs: Extra arguments that might be present, but we ignore
because they possibly came in from older rpc messages.
"""
super(RequestContext, self).__init__(user=user_id, tenant=tenant_id,
is_admin=is_admin,
request_id=request_id,
overwrite=overwrite,
roles=roles)
self.user_name = user_name
self.tenant_name = tenant_name
self.read_deleted = read_deleted
if not timestamp:
timestamp = datetime.datetime.utcnow()
self.timestamp = timestamp
self._session = None
if self.is_admin is None:
self.is_admin = policy.check_is_admin(self)
# Log only once the context has been configured to prevent
# format errors.
if kwargs:
LOG.debug(('Arguments dropped when creating '
'context: %s'), kwargs)
@property
def project_id(self):
return self.tenant
@property
def tenant_id(self):
return self.tenant
@tenant_id.setter
def tenant_id(self, tenant_id):
self.tenant = tenant_id
@property
def user_id(self):
return self.user
@user_id.setter
def user_id(self, user_id):
self.user = user_id
def _get_read_deleted(self):
return self._read_deleted
def _set_read_deleted(self, read_deleted):
if read_deleted not in ('no', 'yes', 'only'):
raise ValueError(_("read_deleted can only be one of 'no', "
"'yes' or 'only', not %r") % read_deleted)
self._read_deleted = read_deleted
def _del_read_deleted(self):
del self._read_deleted
read_deleted = property(_get_read_deleted, _set_read_deleted,
_del_read_deleted)
def to_dict(self):
ret = super(RequestContext, self).to_dict()
ret.update({'user_id': self.user_id,
'tenant_id': self.tenant_id,
'project_id': self.project_id,
'read_deleted': self.read_deleted,
'timestamp': str(self.timestamp),
'tenant_name': self.tenant_name,
'project_name': self.tenant_name,
'user_name': self.user_name})
return ret
def elevated(self, read_deleted=None):
"""Return a version of this context with admin flag set."""
context = copy.copy(self)
context.is_admin = True
if 'admin' not in [x.lower() for x in context.roles]:
context.roles.append('admin')
if read_deleted is not None:
context.read_deleted = read_deleted
return context
def get_admin_context(read_deleted="no", load_admin_roles=True):
return RequestContext(user_id=None,
tenant_id=None,
is_admin=True,
read_deleted=read_deleted,
load_admin_roles=load_admin_roles,
overwrite=False)
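# Illustrative usage: building an admin context for internal operations:
#
#   ctx = get_admin_context()
#   assert ctx.is_admin and ctx.read_deleted == 'no'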

View File

@ -1,298 +0,0 @@
# Copyright (c) 2018 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import ipaddress
import json
from oslo_utils import uuidutils
import six
TypeNullabilityTuple = collections.namedtuple(
'TypeNullabilityTuple', 'type nullable')
def nullable(marshal):
'''decorator to make marshal function accept None value'''
def func(cls, value):
if value is None:
return None
else:
return marshal(cls, value)
return func
class UnqualifiedNameStr(abc.ABCMeta):
    '''metaclass to make str(Type) return the unqualified class name'''
def __str__(self):
return self.__name__
@six.add_metaclass(UnqualifiedNameStr)
class CongressDataType(object):
@classmethod
@abc.abstractmethod
def marshal(cls, value):
'''Validate a value as valid for this type.
:Raises ValueError: if the value is not valid for this type
'''
raise NotImplementedError
@classmethod
def least_ancestor(cls, target_types):
'''Find this type's least ancestor among target_types
This method helps a data consumer find the least common ancestor of
this type among the types the data consumer supports.
:param target_types: iterable collection of types
:returns: the subclass of CongressDataType which is the least ancestor
'''
target_types = frozenset(target_types)
current_class = cls
try:
while current_class not in target_types:
current_class = current_class._get_parent()
return current_class
except cls.CongressDataTypeNoParent:
return None
@classmethod
def convert_to_ancestor(cls, value, ancestor_type):
'''Convert this type's exchange value to ancestor_type's exchange value
Generally there is no actual conversion because descendant type value
is directly interpretable as ancestor type value. The only exception
is the conversion from non-string descendants to string. This
conversion is needed because the Agnostic engine does not support booleans.
.. warning:: undefined behavior if ancestor_type is not an ancestor of
this type.
'''
if ancestor_type == Str:
return json.dumps(value)
else:
if cls.least_ancestor([ancestor_type]) is None:
raise cls.CongressDataTypeHierarchyError
else:
return value
@classmethod
def _get_parent(cls):
congress_parents = [parent for parent in cls.__bases__
if issubclass(parent, CongressDataType)]
if len(congress_parents) == 1:
return congress_parents[0]
elif len(congress_parents) == 0:
raise cls.CongressDataTypeNoParent(
'No parent type found for {0}'.format(cls))
else:
raise cls.CongressDataTypeHierarchyError(
'More than one parent type found for {0}: {1}'
.format(cls, congress_parents))
class CongressDataTypeNoParent(TypeError):
pass
class CongressDataTypeHierarchyError(TypeError):
pass
class Scalar(CongressDataType):
'''Most general type, encompassing all JSON scalar values'''
ACCEPTED_VALUE_TYPES = [
six.string_types, six.text_type, six.integer_types, float, bool]
@classmethod
@nullable
def marshal(cls, value):
for accepted_type in cls.ACCEPTED_VALUE_TYPES:
if isinstance(value, accepted_type):
return value
raise ValueError('Input value (%s) is of %s instead of one of the '
'expected types %s'
% (value, type(value), cls.ACCEPTED_VALUE_TYPES))
class Str(Scalar):
@classmethod
@nullable
def marshal(cls, value):
if not isinstance(value, six.string_types):
raise ValueError('Input value (%s) is of %s instead of expected %s'
% (value, type(value), six.string_types))
return value
class Bool(Scalar):
@classmethod
@nullable
def marshal(cls, value):
if not isinstance(value, bool):
raise ValueError('Input value (%s) is of %s instead of expected %s'
% (value, type(value), bool))
return value
class Int(Scalar):
@classmethod
@nullable
def marshal(cls, value):
if isinstance(value, int):
return value
elif isinstance(value, float) and value.is_integer():
return int(value)
else:
raise ValueError('Input value (%s) is of %s instead of expected %s'
' or %s' % (value, type(value), int, float))
class Float(Scalar):
@classmethod
@nullable
def marshal(cls, value):
if isinstance(value, float):
return value
elif isinstance(value, int):
return float(value)
else:
raise ValueError('Input value (%s) is of %s instead of expected %s'
' or %s' % (value, type(value), int, float))
class UUID(Str):
@classmethod
@nullable
def marshal(cls, value):
if uuidutils.is_uuid_like(value):
return value
else:
raise ValueError('Input value (%s) is not a UUID' % value)
class IPAddress(Str):
@classmethod
@nullable
def marshal(cls, value):
try:
return str(ipaddress.IPv4Address(six.text_type(value)))
except ipaddress.AddressValueError:
try:
ipv6 = ipaddress.IPv6Address(six.text_type(value))
if ipv6.ipv4_mapped:
return str(ipv6.ipv4_mapped)
else:
return str(ipv6)
except ipaddress.AddressValueError:
raise ValueError('Input value (%s) is not interpretable '
'as an IP address' % value)
class IPNetwork(Str):
@classmethod
@nullable
def marshal(cls, value):
try:
return str(ipaddress.ip_network(six.text_type(value)))
except ValueError:
raise ValueError('Input value (%s) is not interpretable '
'as an IP network' % value)
@six.add_metaclass(abc.ABCMeta)
class CongressTypeFiniteDomain(object):
'''Abstract base class for a Congress type of bounded domain.
Each type inheriting from this class must have a class variable DOMAIN
which is a frozenset of the set of values allowed in the type.
'''
pass
def create_congress_enum_type(class_name, enum_items, base_type,
catch_all_default_value=None):
'''Return a sub-type of base_type
representing a value of type base_type from a fixed, finite domain.
:param enum_items: collection of items forming the domain
:param catch_all_default_value: value to use for any value outside the
domain. Defaults to None, which disallows any value outside the domain.
'''
domain = set(enum_items)
if catch_all_default_value is not None:
domain.add(catch_all_default_value)
for item in domain:
if not base_type.marshal(item) == item:
raise ValueError
class NewType(base_type, CongressTypeFiniteDomain):
DOMAIN = domain
CATCH_ALL_DEFAULT_VALUE = catch_all_default_value
@classmethod
@nullable
def marshal(cls, value):
if value not in cls.DOMAIN:
if cls.CATCH_ALL_DEFAULT_VALUE is None:
raise ValueError(
'Input value (%s) is not in the expected domain of '
'values %s' % (value, cls.DOMAIN))
else:
return cls.CATCH_ALL_DEFAULT_VALUE
return value
NewType.__name__ = class_name
return NewType
class TypesRegistry(object):
_type_name_to_type_class = {}
@classmethod
def register(cls, type_class):
# skip if type already registered
if not issubclass(type_class, Scalar):
raise TypeError('Attempted to register a type which is not a '
'subclass of the top type %s.' % Scalar)
elif str(type_class) in cls._type_name_to_type_class:
if type_class == cls._type_name_to_type_class[str(type_class)]:
pass # type already registered
else: # conflicting types with same name
raise Exception('Attempted to register new type with the same '
'name \'%s\' as previously registered type.' %
type_class)
else: # register new type
cls._type_name_to_type_class[str(type_class)] = type_class
@classmethod
def type_class(cls, type_name):
return cls._type_name_to_type_class[type_name]
TYPES = [Scalar, Str, Bool, Int, Float, IPAddress, IPNetwork]
for type_class in TYPES:
TypesRegistry.register(type_class)
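
A few illustrative checks against the type classes above (a sketch; runnable only with this module and its six/oslo.utils dependencies on the path):

# Integral floats coerce to int; IPv4-mapped IPv6 collapses to IPv4.
assert Int.marshal(3.0) == 3
assert IPAddress.marshal("::ffff:10.0.0.1") == "10.0.0.1"

# UUID subclasses Str, so a consumer that only understands Str can use it.
assert UUID.least_ancestor([Str]) is Str

# Enum types clamp out-of-domain values to the catch-all value, when given.
Color = create_congress_enum_type('Color', ['red', 'blue'], Str,
                                  catch_all_default_value='other')
assert Color.marshal('green') == 'other'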


@ -1,353 +0,0 @@
// Copyright (c) 2013 VMware, Inc. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
//
grammar Congress;
options {
language=Python;
output=AST;
ASTLabelType=CommonTree;
}
tokens {
PROG;
COMMA=',';
COLONMINUS=':-';
LPAREN='(';
RPAREN=')';
RBRACKET=']';
LBRACKET='[';
// Structure
THEORY;
STRUCTURED_NAME;
// Kinds of Formulas
EVENT;
RULE;
LITERAL;
MODAL;
ATOM;
NOT;
AND;
// Terms
NAMED_PARAM;
COLUMN_NAME;
COLUMN_NUMBER;
VARIABLE;
STRING_OBJ;
INTEGER_OBJ;
FLOAT_OBJ;
SYMBOL_OBJ;
}
// a program can be one or more statements or empty
prog
: statement+ EOF -> ^(THEORY statement+)
| EOF
;
// a statement is either a formula or a comment
// let the lexer handle comments directly for efficiency
statement
: formula formula_terminator? -> formula
| COMMENT
;
formula
: rule
| fact
| event
;
// An Event represents the insertion/deletion of policy statements.
// Events always include :-. This is to avoid ambiguity in the grammar
// for the case of insert[p(1)]. Without the requirement that an event
// includes a :-, insert[p(1)] could either represent the event where p(1)
// is inserted or simply a policy statement with an empty body and the modal
// 'insert' in the head.
// This means that to represent the event where p(1) is inserted, you must write
// insert[p(1) :- true]. To represent the query that asks if insert[p(1)] is true
// you write insert[p(1)].
event
: event_op LBRACKET rule (formula_terminator STRING)? RBRACKET -> ^(EVENT event_op rule STRING?)
;
event_op
: 'insert'
| 'delete'
;
formula_terminator
: ';'
| '.'
;
rule
: literal_list COLONMINUS literal_list -> ^(RULE literal_list literal_list)
;
literal_list
: literal (COMMA literal)* -> ^(AND literal+)
;
literal
: fact -> fact
| NEGATION fact -> ^(NOT fact)
;
// Note: if we replace modal_op with ID, it tries to force statements
// like insert[p(x)] :- q(x) to be events instead of rules. Bug?
fact
: atom
| modal_op LBRACKET atom RBRACKET -> ^(MODAL modal_op atom)
;
modal_op
: 'execute'
| 'insert'
| 'delete'
;
atom
: relation_constant (LPAREN parameter_list? RPAREN)? -> ^(ATOM relation_constant parameter_list?)
;
parameter_list
: parameter (COMMA parameter)* -> parameter+
;
parameter
: term -> term
| column_ref EQUAL term -> ^(NAMED_PARAM column_ref term)
;
column_ref
: ID -> ^(COLUMN_NAME ID)
| INT -> ^(COLUMN_NUMBER INT)
;
term
: object_constant
| variable
;
object_constant
: INT -> ^(INTEGER_OBJ INT)
| FLOAT -> ^(FLOAT_OBJ FLOAT)
| STRING -> ^(STRING_OBJ STRING)
;
variable
: ID -> ^(VARIABLE ID)
;
relation_constant
: ID (':' ID)* SIGN? -> ^(STRUCTURED_NAME ID+ SIGN?)
;
// start of the lexer
// first, define keywords to ensure they have lexical priority
NEGATION
: 'not'
| 'NOT'
| '!'
;
EQUAL
: '='
;
SIGN
: '+' | '-'
;
// Python integers, conformant to 3.4.2 spec
// Note that leading zeros in a non-zero decimal number are not allowed
// This is taken care of by the first and second alternatives
INT
: '1'..'9' ('0'..'9')*
| '0'+
| '0' ('o' | 'O') ('0'..'7')+
| '0' ('x' | 'X') (HEX_DIGIT)+
| '0' ('b' | 'B') ('0' | '1')+
;
// Python floating point literals, conformant to 3.4.2 spec
// The integer and exponent parts are always interpreted using radix 10
FLOAT
: FLOAT_NO_EXP
| FLOAT_EXP
;
// String literals according to Python 3.4.2 grammar
// THIS VERSION IMPLEMENTS STRING AND BYTE LITERALS
// AS WELL AS TRIPLE QUOTED STRINGS
// Python strings:
// - can be enclosed in matching single quotes (') or double quotes (")
// - can be enclosed in matching groups of three single or double quotes
// - a backslash (\) character is used to escape characters that otherwise
// have a special meaning (e.g., newline, backslash, or a quote)
// - can be prefixed with a u to simplify maintenance of 2.x and 3.x code
// - 'ur' is NOT allowed
// - unescaped newlines and quotes are allowed in triple-quoted literal
// EXCEPT that three unescaped contiguous quotes terminate the literal
//
// Byte String Literals according to Python 3.4.2 grammar
// Bytes are always prefixed with 'b' or 'B', and can only contain ASCII
// Any byte with a numeric value of >= 128 must be escaped
//
// Also implemented code refactoring to reduce runtime size of parser
STRING
: (STRPREFIX)? (SLSTRING)+
| (BYTESTRPREFIX) (SLBYTESTRING)+
;
// moved this rule so we could differentiate between .123 and .1aa
// (i.e., relying on lexical priority)
ID
: ('a'..'z'|'A'..'Z'|'_'|'.') ('a'..'z'|'A'..'Z'|'0'..'9'|'_'|'.')*
;
// added Pythonesque comments
COMMENT
: '//' ~('\n'|'\r')* '\r'? '\n' {$channel=HIDDEN;}
| '/*' ( options {greedy=false;} : . )* '*/' {$channel=HIDDEN;}
| '#' ~('\n'|'\r')* '\r'? '\n' {$channel=HIDDEN;}
;
WS
: ( ' '
| '\t'
| '\r'
| '\n'
) {$channel=HIDDEN;}
;
// fragment rules
// these are helper rules that are used by other lexical rules
// they do NOT generate tokens
fragment
EXPONENT
: ('e'|'E') ('+'|'-')? ('0'..'9')+
;
fragment
HEX_DIGIT
: ('0'..'9'|'a'..'f'|'A'..'F')
;
fragment
DIGIT
: ('0'..'9')
;
fragment
FLOAT_NO_EXP
: INT_PART? FRAC_PART
| INT_PART '.'
;
fragment
FLOAT_EXP
: ( INT_PART | FLOAT_NO_EXP ) EXPONENT
;
fragment
INT_PART
: DIGIT+
;
fragment
FRAC_PART
: '.' DIGIT+
;
// The following fragments are for string handling
// any form of 'ur' is illegal
fragment
STRPREFIX
: 'r' | 'R' | 'u' | 'U'
;
fragment
STRING_ESC
: '\\' .
;
// The first two are single-line string with single- and double-quotes
// The second two are multi-line strings with single- and double quotes
fragment
SLSTRING
: '\'' (STRING_ESC | ~('\\' | '\r' | '\n' | '\'') )* '\''
| '"' (STRING_ESC | ~('\\' | '\r' | '\n' | '"') )* '"'
| '\'\'\'' (STRING_ESC | ~('\\') )* '\'\'\''
| '"""' (STRING_ESC | ~('\\') )* '"""'
;
// Python Byte Literals
// Each byte within a byte literal can be an ASCII character or an
// encoded hex number from \x00 to \xff (i.e., 0-255)
// EXCEPT the backslash, newline, or quote
fragment
BYTESTRPREFIX
: 'b' | 'B' | 'br' | 'Br' | 'bR' | 'BR' | 'rb' | 'rB' | 'Rb' | 'RB'
;
fragment
SLBYTESTRING
: '\'' (BYTES_CHAR_SQ | BYTES_ESC)* '\''
| '"' (BYTES_CHAR_DQ | BYTES_ESC)* '"'
| '\'\'\'' (BYTES_CHAR_SQ | BYTES_TESC)* '\'\'\''
| '"""' (BYTES_CHAR_DQ | BYTES_TESC)* '"""'
;
fragment
BYTES_CHAR_SQ
: '\u0000'..'\u0009'
| '\u000B'..'\u000C'
| '\u000E'..'\u0026'
| '\u0028'..'\u005B'
| '\u005D'..'\u007F'
;
fragment
BYTES_CHAR_DQ
: '\u0000'..'\u0009'
| '\u000B'..'\u000C'
| '\u000E'..'\u0021'
| '\u0023'..'\u005B'
| '\u005D'..'\u007F'
;
fragment
BYTES_ESC
: '\\' '\u0000'..'\u007F'
;
fragment
BYTES_TESC
: '\u0000'..'\u005B'
| '\u005D'..'\u007F'
;

Four file diffs suppressed because they are too large.


@ -1,11 +0,0 @@
If you modify the congress/datalog/Congress.g file, you need to use antlr3
to re-generate the CongressLexer.py and CongressParser.py files with
the following steps:
1. Make sure a recent version of Java is installed. http://java.com/
2. Download ANTLR 3.5.2 or another compatible version from http://www.antlr3.org/download/antlr-3.5.2-complete.jar
3. Execute the following commands in shell
$ cd path/to/congress_repo/congress/datalog
$ java -jar path/to/antlr-3.5.2-complete.jar Congress.g -o Python2 -language Python
$ java -jar path/to/antlr-3.5.2-complete.jar Congress.g -o Python3 -language Python3
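
A quick smoke test after regeneration might look like this (a hypothetical sketch; the module paths mirror the Python2/Python3 output directories above, and the antlr3 Python runtime must be installed):

import antlr3
from congress.datalog.Python3 import CongressLexer, CongressParser

# Lex and parse a single Datalog rule, then print the resulting AST.
char_stream = antlr3.ANTLRStringStream('p(x) :- q(x)')
lexer = CongressLexer.CongressLexer(char_stream)
tokens = antlr3.CommonTokenStream(lexer)
parser = CongressParser.CongressParser(tokens)
print(parser.prog().tree.toStringTree())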


@ -1,104 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
# TODO(thinrichs): move algorithms from compile.py that do analysis
# into this file.
import copy
class ModalIndex(object):
def __init__(self):
# Dict mapping modal name to a ref-counted list of tablenames
# Refcounted list of tablenames is a dict from tablename to count
self.index = {}
def add(self, modal, tablename):
if modal not in self.index:
self.index[modal] = {}
if tablename not in self.index[modal]:
self.index[modal][tablename] = 0
self.index[modal][tablename] += 1
def remove(self, modal, tablename):
if modal not in self.index:
raise KeyError("Modal %s has no entries" % modal)
if tablename not in self.index[modal]:
raise KeyError("Tablename %s for modal %s does not exist" %
(tablename, modal))
self.index[modal][tablename] -= 1
self._clean_up(modal, tablename)
def modals(self):
return self.index.keys()
def tables(self, modal):
if modal not in self.index:
return []
return self.index[modal].keys()
def __isub__(self, other):
changes = []
for modal in self.index:
if modal not in other.index:
continue
for table in self.index[modal]:
if table not in other.index[modal]:
continue
self.index[modal][table] -= other.index[modal][table]
changes.append((modal, table))
for (modal, table) in changes:
self._clean_up(modal, table)
return self
def __iadd__(self, other):
for modal in other.index:
if modal not in self.index:
self.index[modal] = other.index[modal]
continue
for table in other.index[modal]:
if table not in self.index[modal]:
self.index[modal][table] = other.index[modal][table]
continue
self.index[modal][table] += other.index[modal][table]
return self
def _clean_up(self, modal, table):
if self.index[modal][table] <= 0:
del self.index[modal][table]
if not len(self.index[modal]):
del self.index[modal]
def __eq__(self, other):
return self.index == other.index
def __neq__(self, other):
return not self.__eq__(other)
def __copy__(self):
new = ModalIndex()
new.index = copy.deepcopy(self.index)
return new
def __str__(self):
return str(self.index)
def __contains__(self, modal):
return modal in self.index
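
A short sketch of the ref-counting behavior above (illustrative only):

idx = ModalIndex()
idx.add('execute', 'nova:servers.pause')
idx.add('execute', 'nova:servers.pause')     # refcount -> 2
idx.remove('execute', 'nova:servers.pause')  # refcount -> 1
assert 'execute' in idx
idx.remove('execute', 'nova:servers.pause')  # refcount -> 0, entry removed
assert 'execute' not in idx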


@ -1,647 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
import pulp
import six
from congress import exception
from functools import reduce
LOG = logging.getLogger(__name__)
class LpLang(object):
"""Represent (mostly) linear programs generated from Datalog."""
MIN_THRESHOLD = .00001 # for converting <= to <
class Expression(object):
def __init__(self, *args, **meta):
self.args = args
self.meta = meta
def __ne__(self, other):
return not self.__eq__(other)
def __eq__(self, other):
if not isinstance(other, LpLang.Expression):
return False
if len(self.args) != len(other.args):
return False
if self.args[0] in ['AND', 'OR']:
return set(self.args) == set(other.args)
comm = ['plus', 'times']
if self.args[0] == 'ARITH' and self.args[1].lower() in comm:
return set(self.args) == set(other.args)
if self.args[0] in ['EQ', 'NOTEQ']:
return ((self.args[1] == other.args[1] and
self.args[2] == other.args[2]) or
(self.args[1] == other.args[2] and
self.args[2] == other.args[1]))
return self.args == other.args
def __str__(self):
return "(" + ", ".join(str(x) for x in self.args) + ")"
def __repr__(self):
args = ", ".join(repr(x) for x in self.args)
meta = str(self.meta)
return "<args=%s, meta=%s>" % (args, meta)
def __hash__(self):
return hash(tuple([hash(x) for x in self.args]))
def operator(self):
return self.args[0]
def arguments(self):
return self.args[1:]
def tuple(self):
return tuple(self.args)
@classmethod
def makeVariable(cls, *args, **meta):
return cls.Expression("VAR", *args, **meta)
@classmethod
def makeBoolVariable(cls, *args, **meta):
meta['type'] = 'bool'
return cls.Expression("VAR", *args, **meta)
@classmethod
def makeIntVariable(cls, *args, **meta):
meta['type'] = 'int'
return cls.Expression("VAR", *args, **meta)
@classmethod
def makeOr(cls, *args, **meta):
if len(args) == 1:
return args[0]
return cls.Expression("OR", *args, **meta)
@classmethod
def makeAnd(cls, *args, **meta):
if len(args) == 1:
return args[0]
return cls.Expression("AND", *args, **meta)
@classmethod
def makeEqual(cls, arg1, arg2, **meta):
return cls.Expression("EQ", arg1, arg2, **meta)
@classmethod
def makeNotEqual(cls, arg1, arg2, **meta):
return cls.Expression("NOTEQ", arg1, arg2, **meta)
@classmethod
def makeArith(cls, *args, **meta):
return cls.Expression("ARITH", *args, **meta)
@classmethod
def makeExpr(cls, obj):
if isinstance(obj, six.string_types):
return obj
if isinstance(obj, (float, six.integer_types)):
return obj
op = obj[0].upper()
if op == 'VAR':
return cls.makeVariable(*obj[1:])
if op in ['EQ', 'NOTEQ', 'AND', 'OR']:
args = [cls.makeExpr(x) for x in obj[1:]]
if op == 'EQ':
return cls.makeEqual(*args)
if op == 'NOTEQ':
return cls.makeNotEqual(*args)
if op == 'AND':
return cls.makeAnd(*args)
if op == 'OR':
return cls.makeOr(*args)
raise cls.LpConversionFailure('should never happen')
args = [cls.makeExpr(x) for x in obj[1:]]
return cls.makeArith(obj[0], *args)
@classmethod
def isConstant(cls, thing):
return (isinstance(thing, six.string_types) or
isinstance(thing, (float, six.integer_types)))
@classmethod
def isVariable(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'VAR'
@classmethod
def isEqual(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'EQ'
@classmethod
def isOr(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'OR'
@classmethod
def isAnd(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'AND'
@classmethod
def isNotEqual(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'NOTEQ'
@classmethod
def isArith(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'ARITH'
@classmethod
def isBoolArith(cls, thing):
return (cls.isArith(thing) and
thing.args[1].lower() in ['lteq', 'lt', 'gteq', 'gt', 'equal'])
@classmethod
def variables(cls, exp):
if cls.isConstant(exp):
return set()
elif cls.isVariable(exp):
return set([exp])
else:
variables = set()
for arg in exp.arguments():
variables |= cls.variables(arg)
return variables
def __init__(self):
# instance variable so tests can be run in parallel
self.fresh_var_counter = 0 # for creating new variables
def pure_lp(self, exp, bounds):
"""Rewrite EXP to a pure LP problem.
:param exp: is an Expression of the form
var = (arith11 ^ ... ^ arith1n) | ... | (arithk1 ^ ... ^ arithkn)
where the degenerate cases are permitted as well.
:returns: a collection of expressions each of the form:
a1*x1 + ... + an*xn [<=, ==, >=] b.
"""
flat, support = self.flatten(exp, indicator=False)
flats = support
flats.append(flat)
result = []
for flat in flats:
# LOG.info("flat: %s", flat)
no_and_or = self.remove_and_or(flat)
# LOG.info(" without and/or: %s", no_and_or)
no_indicator = self.indicator_to_pure_lp(no_and_or, bounds)
# LOG.info(" without indicator: %s",
# ";".join(str(x) for x in no_indicator))
result.extend(no_indicator)
return result
def pure_lp_term(self, exp, bounds):
"""Rewrite term exp to a pure LP term.
:param exp: is an Expression of the form
(arith11 ^ ... ^ arith1n) | ... | (arithk1 ^ ... ^ arithkn)
where the degenerate cases are permitted as well.
:returns: (new-exp, support) where new-exp is a term, and support is
a collection of expressions of the following form.
a1*x1 + ... + an*xn [<=, ==, >=] b.
"""
flat, support = self.flatten(exp, indicator=False)
flat_no_andor = self.remove_and_or_term(flat)
results = []
for s in support:
results.extend(self.pure_lp(s, bounds))
return flat_no_andor, results
def remove_and_or(self, exp):
"""Translate and/or operators into times/plus arithmetic.
:param exp: is an Expression that takes one of the following forms.
var [!]= term1 ^ ... ^ termn
var [!]= term1 | ... | termn
var [!]= term1
where termi is an indicator variable.
:returns: an expression equivalent to exp but without any ands/ors.
"""
if self.isConstant(exp) or self.isVariable(exp):
return exp
op = exp.operator().lower()
if op in ['and', 'or']:
return self.remove_and_or_term(exp)
newargs = [self.remove_and_or(arg) for arg in exp.arguments()]
constructor = self.operator_to_constructor(exp.operator())
return constructor(*newargs)
def remove_and_or_term(self, exp):
if exp.operator().lower() == 'and':
op = 'times'
else:
op = 'plus'
return self.makeArith(op, *exp.arguments())
def indicator_to_pure_lp(self, exp, bounds):
"""Translate exp into LP constraints without indicator variable.
:param exp: is an Expression of the form var = arith
:param bounds: is a dictionary from variable to its upper bound
:returns: [EXP] if it is of the wrong form. Otherwise, translates
into the form y = x < 0, and then returns two constraints where
upper(x) is the upper bound of the expression x::
-x <= y * upper(x)
x < (1 - y) * upper(x)
Taken from section 7.4 of
http://www.aimms.com/aimms/download/manuals/
aimms3om_integerprogrammingtricks.pdf
"""
# return exp unchanged if exp not of the form <var> = <arith>
# and figure out whether it's <var> = <arith> or <arith> = <var>
if (self.isConstant(exp) or self.isVariable(exp) or
not self.isEqual(exp)):
return [exp]
args = exp.arguments()
lhs = args[0]
rhs = args[1]
if self.isVariable(lhs) and self.isArith(rhs):
var = lhs
arith = rhs
elif self.isVariable(rhs) and self.isArith(lhs):
var = rhs
arith = lhs
else:
return [exp]
# if arithmetic side is not an inequality, not an indicator var
if not self.isBoolArith(arith):
return [exp]
# Do the transformation.
x = self.arith_to_lt_zero(arith).arguments()[1]
y = var
LOG.info(" x: %s", x)
upper_x = self.upper_bound(x, bounds) + 1
LOG.info(" bounds(x): %s", upper_x)
# -x <= y * upper(x)
c1 = self.makeArith(
'lteq',
self.makeArith('times', -1, x),
self.makeArith('times', y, upper_x))
# x < (1 - y) * upper(x)
c2 = self.makeArith(
'lt',
x,
self.makeArith('times', self.makeArith('minus', 1, y), upper_x))
return [c1, c2]
def arith_to_lt_zero(self, expr):
"""Returns Arith expression equivalent to expr but of the form A < 0.
:param expr: is an Expression
:returns: an expression equivalent to expr but of the form A < 0.
"""
if not self.isArith(expr):
raise self.LpConversionFailure(
"arith_to_lt_zero takes Arith expr but received %s", expr)
args = expr.arguments()
op = args[0].lower()
lhs = args[1]
rhs = args[2]
if op == 'lt':
return LpLang.makeArith(
'lt', LpLang.makeArith('minus', lhs, rhs), 0)
elif op == 'lteq':
return LpLang.makeArith(
'lt',
LpLang.makeArith(
'minus',
LpLang.makeArith('minus', lhs, rhs),
self.MIN_THRESHOLD),
0)
elif op == 'gt':
return LpLang.makeArith(
'lt', LpLang.makeArith('minus', rhs, lhs), 0)
elif op == 'gteq':
return LpLang.makeArith(
'lt',
LpLang.makeArith(
'minus',
LpLang.makeArith('minus', rhs, lhs),
self.MIN_THRESHOLD),
0)
else:
raise self.LpConversionFailure(
"unhandled operator %s in %s" % (op, expr))
def upper_bound(self, expr, bounds):
"""Returns number giving an upper bound on the given expr.
:param expr: is an Expression
:param bounds: is a dictionary from tuple versions of variables
to the size of their upper bound.
"""
if self.isConstant(expr):
return expr
if self.isVariable(expr):
t = expr.tuple()
if t not in bounds:
raise self.LpConversionFailure("not bound given for %s" % expr)
return bounds[expr.tuple()]
if not self.isArith(expr):
raise self.LpConversionFailure(
"expression has no bound: %s" % expr)
args = expr.arguments()
op = args[0].lower()
exps = args[1:]
if op == 'times':
f = lambda x, y: x * y
return reduce(f, [self.upper_bound(x, bounds) for x in exps], 1)
if op == 'plus':
f = lambda x, y: x + y
return reduce(f, [self.upper_bound(x, bounds) for x in exps], 0)
if op == 'minus':
return self.upper_bound(exps[0], bounds)
if op == 'div':
raise self.LpConversionFailure("No bound on division %s" % expr)
raise self.LpConversionFailure("Unknown operator for bound: %s" % expr)
def flatten(self, exp, indicator=True):
"""Remove toplevel embedded and/ors by creating new equalities.
:param exp: is an Expression of the form
var = (arith11 ^ ... ^ arith1n) | ... | (arithk1 ^ ... ^ arithkn)
where arithij is either a variable or an arithmetic expression
where the degenerate cases are permitted as well.
:param indicator: controls whether the method returns a single
variable (with supporting expressions) or an expression whose
operator has (flat) arguments
:returns: (new-expression, supporting-expressions), where each
supporting expression takes one of the following forms:
var1 = var2 * ... * varn
var1 = var2 + ... + varn
var1 = arith
"""
if self.isConstant(exp) or self.isVariable(exp):
return exp, []
new_args = []
extras = []
new_indicator = not (exp.operator().lower() in ['eq', 'noteq'])
for e in exp.arguments():
newe, extra = self.flatten(e, indicator=new_indicator)
new_args.append(newe)
extras.extend(extra)
constructor = self.operator_to_constructor(exp.operator())
new_exp = constructor(*new_args)
if indicator:
indic, extra = self.create_intermediate(new_exp)
return indic, extra + extras
return new_exp, extras
def operator_to_constructor(self, operator):
"""Given the operator, return the corresponding constructor."""
op = operator.lower()
if op == 'eq':
return self.makeEqual
if op == 'noteq':
return self.makeNotEqual
if op == 'var':
return self.makeVariable
if op == 'and':
return self.makeAnd
if op == 'or':
return self.makeOr
if op == 'arith':
return self.makeArith
raise self.LpConversionFailure("Unknown operator: %s" % operator)
def create_intermediate(self, exp):
"""Given expression, create var = expr and return (var, var=expr)."""
if self.isBoolArith(exp) or self.isAnd(exp) or self.isOr(exp):
var = self.freshVar(type='bool')
else:
var = self.freshVar()
equality = self.makeEqual(var, exp)
return var, [equality]
def freshVar(self, **meta):
var = self.makeVariable('internal', self.fresh_var_counter, **meta)
self.fresh_var_counter += 1
return var
class LpConversionFailure(exception.CongressException):
pass
class PulpLpLang(LpLang):
"""Algorithms for translating LpLang into PuLP library problems."""
MIN_THRESHOLD = .00001
def __init__(self):
# instance variable so tests can be run in parallel
super(PulpLpLang, self).__init__()
self.value_counter = 0
def problem(self, optimization, constraints, bounds):
"""Return PuLP problem for given optimization and constraints.
:param: optimization is an LpLang.Expression that is either a sum
or product to minimize.
:param: constraints is a collection of LpLang.Expression that
each evaluate to true/false (typically equalities)
:param: bounds: is a dictionary mapping LpLang.Expression variable
tuples to their upper bounds.
Returns a pulp.LpProblem.
"""
# translate constraints to pure LP
optimization, hard = self.pure_lp_term(optimization, bounds)
for c in constraints:
hard.extend(self.pure_lp(c, bounds))
LOG.info("* Converted DatalogLP to PureLP *")
LOG.info("optimization: %s", optimization)
LOG.info("constraints: \n%s", "\n".join(str(x) for x in hard))
# translate optimization and constraints into PuLP equivalents
variables = {}
values = {}
optimization = self.pulpify(optimization, variables, values)
hard = [self.pulpify(c, variables, values) for c in hard]
# add them to the problem.
prob = pulp.LpProblem("VM re-assignment", pulp.LpMinimize)
prob += optimization
for c in hard:
prob += c
# invert values
return prob, {value: key for key, value in values.items()}
def pulpify(self, expr, variables, values):
"""Return PuLP version of expr.
:param: expr is an Expression of one of the following forms.
arith
arith = arith
arith <= arith
arith >= arith
:param: vars is a dictionary from Expression variables to PuLP
variables
Returns a PuLP representation of expr.
"""
# LOG.info("pulpify(%s, %s)", expr, variables)
if self.isConstant(expr):
return expr
elif self.isVariable(expr):
return self._pulpify_variable(expr, variables, values)
elif self.isArith(expr):
args = expr.arguments()
op = args[0]
args = [self.pulpify(arg, variables, values) for arg in args[1:]]
if op == 'times':
return reduce(lambda x, y: x * y, args)
elif op == 'plus':
return reduce(lambda x, y: x + y, args)
elif op == 'div':
return reduce(lambda x, y: x / y, args)
elif op == 'minus':
return reduce(lambda x, y: x - y, args)
elif op == 'lteq':
return (args[0] <= args[1])
elif op == 'gteq':
return (args[0] >= args[1])
elif op == 'gt': # pulp makes MIN_THRESHOLD 1
return (args[0] >= args[1] + self.MIN_THRESHOLD)
elif op == 'lt': # pulp makes MIN_THRESHOLD 1
return (args[0] + self.MIN_THRESHOLD <= args[1])
else:
raise self.LpConversionFailure(
"Found unsupported operator %s in %s" % (op, expr))
else:
args = [self.pulpify(arg, variables, values)
for arg in expr.arguments()]
op = expr.operator().lower()
if op == 'eq':
return (args[0] == args[1])
elif op == 'noteq':
return (args[0] != args[1])
else:
raise self.LpConversionFailure(
"Found unsupported operator: %s" % expr)
def _new_value(self, old, values):
"""Create a new value for old and store values[old] = new."""
if old in values:
return values[old]
new = self.value_counter
self.value_counter += 1
values[old] = new
return new
def _pulpify_variable(self, expr, variables, values):
"""Translate DatalogLp variable expr into PuLP variable.
:param: expr is an instance of Expression
:param: variables is a dictionary from Expressions to pulp variables
:param: values is a 1-1 dictionary from strings/floats to integers
representing a mapping of non-integer arguments to variable
names to their integer equivalents.
"""
# pulp mangles variable names that contain certain characters.
# Replace actual args with integers when constructing
# variable names. Integer args are mapped too, to avoid
# namespace collision problems.
oldargs = expr.arguments()
args = [oldargs[0]]
for arg in oldargs[1:]:
newarg = self._new_value(arg, values)
args.append(newarg)
# name
name = "_".join([str(x) for x in args])
# type
typ = expr.meta.get('type', None)
if typ == 'bool':
cat = pulp.LpBinary
elif typ == 'int':
cat = pulp.LpInteger
else:
cat = pulp.LpContinuous
# set bounds
lowbound = expr.meta.get('lowbound', None)
upbound = expr.meta.get('upbound', None)
var = pulp.LpVariable(
name=name, cat=cat, lowBound=lowbound, upBound=upbound)
# merge with existing variable, if any
if expr in variables:
newvar = self._resolve_var_conflicts(variables[expr], var)
oldvar = variables[expr]
oldvar.cat = newvar.cat
oldvar.lowBound = newvar.lowBound
oldvar.upBound = newvar.upBound
else:
variables[expr] = var
return variables[expr]
def _resolve_var_conflicts(self, var1, var2):
"""Returns variable that combines information from var1 and var2.
:param: var1 is a pulp.LpVariable
:param: var2 is a pulp.LpVariable
Returns new pulp.LpVariable representing the conjunction of constraints
from var1 and var2.
Raises LpConversionFailure if the names of var1 and var2 differ.
"""
def type_lessthan(x, y):
return ((x == pulp.LpBinary and y == pulp.LpInteger) or
(x == pulp.LpBinary and y == pulp.LpContinuous) or
(x == pulp.LpInteger and y == pulp.LpContinuous))
if var1.name != var2.name:
raise self.LpConversionFailure(
"Can't resolve variable name conflict: %s and %s" % (
var1, var2))
name = var1.name
if type_lessthan(var1.cat, var2.cat):
cat = var1.cat
else:
cat = var2.cat
if var1.lowBound is None:
lowbound = var2.lowBound
elif var2.lowBound is None:
lowbound = var1.lowBound
else:
lowbound = max(var1.lowBound, var2.lowBound)
if var1.upBound is None:
upbound = var2.upBound
elif var2.upBound is None:
upbound = var1.upBound
else:
upbound = min(var1.upBound, var2.upBound)
return pulp.LpVariable(
name=name, lowBound=lowbound, upBound=upbound, cat=cat)
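
A sketch of the indicator-elimination step in isolation (hypothetical usage; assumes this module and pulp are importable). It encodes y = (x < 10) and prints the two pure-LP inequalities that replace the indicator equality:

lang = PulpLpLang()
x = LpLang.makeIntVariable('x')
y = LpLang.makeBoolVariable('y')
indicator = LpLang.makeEqual(y, LpLang.makeArith('lt', x, 10))
bounds = {x.tuple(): 100}  # upper bound on x, used to linearize
for constraint in lang.pure_lp(indicator, bounds):
    print(constraint)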


@ -1,249 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import collections
from oslo_log import log as logging
import six
from congress import exception
LOG = logging.getLogger(__name__)
DATABASE_POLICY_TYPE = 'database'
NONRECURSIVE_POLICY_TYPE = 'nonrecursive'
ACTION_POLICY_TYPE = 'action'
MATERIALIZED_POLICY_TYPE = 'materialized'
DELTA_POLICY_TYPE = 'delta'
DATASOURCE_POLICY_TYPE = 'datasource'
Z3_POLICY_TYPE = 'z3'
class Tracer(object):
def __init__(self):
self.expressions = []
self.funcs = [LOG.debug] # functions to call to trace
def trace(self, table):
self.expressions.append(table)
def is_traced(self, table):
return table in self.expressions or '*' in self.expressions
def log(self, table, msg, *args, **kwargs):
depth = kwargs.pop("depth", 0)
if kwargs:
raise TypeError("Unexpected keyword arguments: %s" % kwargs)
if self.is_traced(table):
for func in self.funcs:
func(("| " * depth) + msg, *args)
class StringTracer(Tracer):
def __init__(self):
super(StringTracer, self).__init__()
self.stream = six.moves.StringIO()
self.funcs.append(self.string_output)
def string_output(self, msg, *args):
self.stream.write((msg % args) + "\n")
def get_value(self):
return self.stream.getvalue()
##############################################################################
# Logical Building Blocks
##############################################################################
class Proof(object):
"""A single proof.
Differs semantically from Database's
Proof in that this version represents a proof that spans rules,
instead of just a proof for a single rule.
"""
def __init__(self, root, children):
self.root = root
self.children = children
def __str__(self):
return self.str_tree(0)
def str_tree(self, depth):
s = " " * depth
s += str(self.root)
s += "\n"
for child in self.children:
s += child.str_tree(depth + 1)
return s
def leaves(self):
if len(self.children) == 0:
return [self.root]
result = []
for child in self.children:
result.extend(child.leaves())
return result
##############################################################################
# Events
##############################################################################
class EventQueue(object):
def __init__(self):
self.queue = collections.deque()
def enqueue(self, event):
self.queue.append(event)
def dequeue(self):
return self.queue.popleft()
def __len__(self):
return len(self.queue)
def __str__(self):
return "[" + ",".join([str(x) for x in self.queue]) + "]"
##############################################################################
# Abstract Theories
##############################################################################
class Theory(object):
def __init__(self, name=None, abbr=None, schema=None, theories=None,
id=None, desc=None, owner=None, kind=None):
self.schema = schema
self.theories = theories
self.kind = kind
self.id = id
self.desc = desc
self.owner = owner
self.tracer = Tracer()
if name is None:
self.name = repr(self)
else:
self.name = name
if abbr is None:
self.abbr = "th"
else:
self.abbr = abbr
maxlength = 6
if len(self.abbr) > maxlength:
self.trace_prefix = self.abbr[0:maxlength]
else:
self.trace_prefix = self.abbr + " " * (maxlength - len(self.abbr))
def set_id(self, id):
self.id = id
def initialize_tables(self, tablenames, facts):
"""initialize_tables
Event handler for (re)initializing a collection of tables. Clears
tables before assigning the new table content.
@facts must be an iterable containing compile.Fact objects.
"""
raise NotImplementedError
def actual_events(self, events):
"""Returns subset of EVENTS that are not noops."""
actual = []
for event in events:
if event.insert:
if event.formula not in self:
actual.append(event)
else:
if event.formula in self:
actual.append(event)
return actual
def debug_mode(self):
tr = Tracer()
tr.trace('*')
self.set_tracer(tr)
def set_tracer(self, tracer):
self.tracer = tracer
def get_tracer(self):
return self.tracer
def log(self, table, msg, *args, **kwargs):
msg = self.trace_prefix + ": " + msg
self.tracer.log(table, msg, *args, **kwargs)
def policy(self):
"""Return a list of the policy statements in this theory."""
raise NotImplementedError()
def content(self):
"""Return a list of the contents of this theory.
May be rules and/or data. Note: do not change the name to CONTENTS, as this
is reserved for a dictionary of stuff used by TopDownTheory.
"""
raise NotImplementedError()
def tablenames(self, body_only=False, include_builtin=False,
include_modal=True, include_facts=False):
tablenames = set()
for rule in self.policy():
tablenames |= rule.tablenames(
body_only=body_only, include_builtin=include_builtin,
include_modal=include_modal)
# also include tables in facts
# FIXME: need to conform with intended abstractions
if include_facts and hasattr(self, 'rules'):
tablenames |= set(self.rules.facts.keys())
return tablenames
def __str__(self):
return "Theory %s" % self.name
def content_string(self):
return '\n'.join([str(p) for p in self.content()]) + '\n'
def get_rule(self, ident):
for p in self.policy():
if hasattr(p, 'id') and str(p.id) == str(ident):
return p
raise exception.NotFound('rule_id %s is not found.' % ident)
def arity(self, tablename, modal=None):
"""Return the number of columns for the given tablename.
TABLENAME is of the form <policy>:<table> or <table>.
MODAL is the value of the modal operator.
"""
raise NotImplementedError
def get_attr_dict(self):
'''return dict containing the basic attributes of this theory'''
d = {'id': self.id,
'name': self.name,
'abbreviation': self.abbr,
'description': self.desc,
'owner_id': self.owner,
'kind': self.kind}
return d
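
The tracer classes above are easiest to see in isolation; a small sketch (illustrative only):

tracer = StringTracer()
tracer.trace('p')                  # trace table p only; '*' traces everything
tracer.log('p', "evaluating %s", "p(x)", depth=1)
tracer.log('q', "ignored %s", "q(x)")  # 'q' is not traced, so no output
print(tracer.get_value())          # -> "| evaluating p(x)"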


@ -1,488 +0,0 @@
#! /usr/bin/python
#
# Copyright (c) 2014 IBM, Corp. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import datetime
import netaddr
import sys
import six
from six.moves import range
from dateutil import parser as datetime_parser
from oslo_config import types
BUILTIN_NAMESPACE = 'builtin'
class DatetimeBuiltins(object):
# casting operators (used internally)
@classmethod
def to_timedelta(cls, x):
if isinstance(x, six.string_types):
fields = x.split(":")
num_fields = len(fields)
args = {}
keys = ['seconds', 'minutes', 'hours', 'days', 'weeks']
for i in range(0, len(fields)):
args[keys[i]] = int(fields[num_fields - 1 - i])
return datetime.timedelta(**args)
else:
return datetime.timedelta(seconds=x)
@classmethod
def to_datetime(cls, x):
return datetime_parser.parse(x, ignoretz=True)
# current time
@classmethod
def now(cls):
return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
# extraction and creation of datetimes
@classmethod
def unpack_time(cls, x):
x = cls.to_datetime(x)
return (x.hour, x.minute, x.second)
@classmethod
def unpack_date(cls, x):
x = cls.to_datetime(x)
return (x.year, x.month, x.day)
@classmethod
def unpack_datetime(cls, x):
x = cls.to_datetime(x)
return (x.year, x.month, x.day, x.hour, x.minute, x.second)
@classmethod
def pack_time(cls, hour, minute, second):
return "{}:{}:{}".format(hour, minute, second)
@classmethod
def pack_date(cls, year, month, day):
return "{}-{}-{}".format(year, month, day)
@classmethod
def pack_datetime(cls, year, month, day, hour, minute, second):
return "{}-{}-{} {}:{}:{}".format(
year, month, day, hour, minute, second)
# extraction/creation convenience function
@classmethod
def extract_date(cls, x):
return str(cls.to_datetime(x).date())
@classmethod
def extract_time(cls, x):
return str(cls.to_datetime(x).time())
# conversion to seconds
@classmethod
def datetime_to_seconds(cls, x):
since1900 = cls.to_datetime(x) - datetime.datetime(year=1900,
month=1,
day=1)
return int(since1900.total_seconds())
# native operations on datetime
@classmethod
def datetime_plus(cls, x, y):
return str(cls.to_datetime(x) + cls.to_timedelta(y))
@classmethod
def datetime_minus(cls, x, y):
return str(cls.to_datetime(x) - cls.to_timedelta(y))
@classmethod
def datetime_lessthan(cls, x, y):
return cls.to_datetime(x) < cls.to_datetime(y)
@classmethod
def datetime_lessthanequal(cls, x, y):
return cls.to_datetime(x) <= cls.to_datetime(y)
@classmethod
def datetime_greaterthan(cls, x, y):
return cls.to_datetime(x) > cls.to_datetime(y)
@classmethod
def datetime_greaterthanequal(cls, x, y):
return cls.to_datetime(x) >= cls.to_datetime(y)
@classmethod
def datetime_equal(cls, x, y):
return cls.to_datetime(x) == cls.to_datetime(y)
class NetworkAddressBuiltins(object):
@classmethod
def ips_equal(cls, ip1, ip2):
return netaddr.IPAddress(ip1) == netaddr.IPAddress(ip2)
@classmethod
def ips_lessthan(cls, ip1, ip2):
return netaddr.IPAddress(ip1) < netaddr.IPAddress(ip2)
@classmethod
def ips_lessthan_equal(cls, ip1, ip2):
return netaddr.IPAddress(ip1) <= netaddr.IPAddress(ip2)
@classmethod
def ips_greaterthan(cls, ip1, ip2):
return netaddr.IPAddress(ip1) > netaddr.IPAddress(ip2)
@classmethod
def ips_greaterthan_equal(cls, ip1, ip2):
return netaddr.IPAddress(ip1) >= netaddr.IPAddress(ip2)
@classmethod
def networks_equal(cls, cidr1, cidr2):
return netaddr.IPNetwork(cidr1) == netaddr.IPNetwork(cidr2)
@classmethod
def networks_overlap(cls, cidr1, cidr2):
cidr1_obj = netaddr.IPNetwork(cidr1)
cidr2_obj = netaddr.IPNetwork(cidr2)
# standard interval-overlap test; also covers cidr1 nested inside cidr2
return (cidr1_obj.first <= cidr2_obj.last and
cidr2_obj.first <= cidr1_obj.last)
@classmethod
def ip_in_network(cls, ip, cidr):
cidr_obj = netaddr.IPNetwork(cidr)
ip_obj = netaddr.IPAddress(ip)
return ip_obj in cidr_obj
class OptTypeBuiltins(object):
"""Builtins to validate option values for config validator.
It leverages oslog_config types module to check values.
"""
@classmethod
def validate_int(cls, minv, maxv, value):
"""Check that the value is indeed an integer
Optionally checks that the integer is between the given bounds, if provided.
:param minv: minimal value or empty string
:param maxv: maximal value or empty string
:param value: value to check
:return: an empty string if ok or an error string.
"""
maxv = None if maxv == '' else maxv
minv = None if minv == '' else minv
try:
types.Integer(min=minv, max=maxv)(value)
except (ValueError, TypeError):
_, err, _ = sys.exc_info()
return str(err)
return ''
@classmethod
def validate_float(cls, minv, maxv, value):
"""Check that the value is a float
Optionally checks that the float is between the given bounds, if provided.
:param minv: minimal value or empty string
:param maxv: maximal value or empty string
:param value: value to check
:return: an empty string if ok or an error string.
"""
maxv = None if maxv == '' else maxv
minv = None if minv == '' else minv
try:
types.Float(min=minv, max=maxv)(value)
except (ValueError, TypeError):
_, err, _ = sys.exc_info()
return str(err)
return ''
@classmethod
def validate_string(cls, regex, max_length, quotes, ignore_case, value):
"""Check that the value is a string
Optionally checks the string against typical requirements.
:param regex: a regular expression the value should follow or empty
:param max_length: an integer bound on the size of the string or empty
:param quotes: whether to include quotes or not
:param ignore_case: whether to ignore case or not
:param value: the value to check
:return: an empty string if ok or an error string.
"""
regex = None if regex == '' else regex
try:
types.String(regex=regex, max_length=max_length, quotes=quotes,
ignore_case=ignore_case)(value)
except (ValueError, TypeError):
_, err, _ = sys.exc_info()
return str(err)
return ''
# the registry for builtins
_builtin_map = {
'comparison': [
{'func': 'lt(x,y)', 'num_inputs': 2, 'code': lambda x, y: x < y},
{'func': 'lteq(x,y)', 'num_inputs': 2, 'code': lambda x, y: x <= y},
{'func': 'equal(x,y)', 'num_inputs': 2, 'code': lambda x, y: x == y},
{'func': 'gt(x,y)', 'num_inputs': 2, 'code': lambda x, y: x > y},
{'func': 'gteq(x,y)', 'num_inputs': 2, 'code': lambda x, y: x >= y},
{'func': 'max(x,y,z)', 'num_inputs': 2,
'code': lambda x, y: max(x, y)}],
'arithmetic': [
{'func': 'plus(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x + y},
{'func': 'minus(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x - y},
{'func': 'mul(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x * y},
{'func': 'div(x,y,z)', 'num_inputs': 2, 'code': lambda x, y:
((x // y) if (type(x) == int and type(y) == int) else (x / y))},
{'func': 'float(x,y)', 'num_inputs': 1, 'code': lambda x: float(x)},
{'func': 'int(x,y)', 'num_inputs': 1, 'code': lambda x: int(x)}],
'string': [
{'func': 'concat(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x + y},
{'func': 'len(x, y)', 'num_inputs': 1, 'code': lambda x: len(x)}],
'datetime': [
{'func': 'now(x)', 'num_inputs': 0,
'code': DatetimeBuiltins.now},
{'func': 'unpack_date(x, year, month, day)', 'num_inputs': 1,
'code': DatetimeBuiltins.unpack_date},
{'func': 'unpack_time(x, hours, minutes, seconds)', 'num_inputs': 1,
'code': DatetimeBuiltins.unpack_time},
{'func': 'unpack_datetime(x, y, m, d, h, i, s)', 'num_inputs': 1,
'code': DatetimeBuiltins.unpack_datetime},
{'func': 'pack_time(hours, minutes, seconds, result)', 'num_inputs': 3,
'code': DatetimeBuiltins.pack_time},
{'func': 'pack_date(year, month, day, result)', 'num_inputs': 3,
'code': DatetimeBuiltins.pack_date},
{'func': 'pack_datetime(y, m, d, h, i, s, result)', 'num_inputs': 6,
'code': DatetimeBuiltins.pack_datetime},
{'func': 'extract_date(x, y)', 'num_inputs': 1,
'code': DatetimeBuiltins.extract_date},
{'func': 'extract_time(x, y)', 'num_inputs': 1,
'code': DatetimeBuiltins.extract_time},
{'func': 'datetime_to_seconds(x, y)', 'num_inputs': 1,
'code': DatetimeBuiltins.datetime_to_seconds},
{'func': 'datetime_plus(x,y,z)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_plus},
{'func': 'datetime_minus(x,y,z)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_minus},
{'func': 'datetime_lt(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_lessthan},
{'func': 'datetime_lteq(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_lessthanequal},
{'func': 'datetime_gt(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_greaterthan},
{'func': 'datetime_gteq(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_greaterthanequal},
{'func': 'datetime_equal(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_equal}],
'netaddr': [
{'func': 'ips_equal(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_equal},
{'func': 'ips_lt(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_lessthan},
{'func': 'ips_lteq(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_lessthan_equal},
{'func': 'ips_gt(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_greaterthan},
{'func': 'ips_gteq(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_greaterthan_equal},
{'func': 'networks_equal(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.networks_equal},
{'func': 'networks_overlap(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.networks_overlap},
{'func': 'ip_in_network(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ip_in_network}],
'type': [
{'func': 'validate_int(min, max, value, result)',
'num_inputs': 3, 'code': OptTypeBuiltins.validate_int},
{'func': 'validate_float(min, max, value, result)',
'num_inputs': 3, 'code': OptTypeBuiltins.validate_float},
{'func': 'validate_string(regex, max_length, quotes, ignore_case,'
' value, result)',
'num_inputs': 5, 'code': OptTypeBuiltins.validate_string}],
}
class CongressBuiltinPred(object):
def __init__(self, name, arglist, num_inputs, code):
self.predname = name
self.predargs = arglist
self.num_inputs = num_inputs
self.code = code
self.num_outputs = len(arglist) - num_inputs
def string_to_pred(self, predstring):
try:
self.predname = predstring.split('(')[0]
self.predargs = predstring.split('(')[1].split(')')[0].split(',')
except Exception:
print("Unexpected error in parsing predicate string")
def __str__(self):
return self.predname + '(' + ",".join(self.predargs) + ')'
class CongressBuiltinCategoryMap(object):
def __init__(self, start_builtin_map):
self.categorydict = dict()
self.preddict = dict()
for key, value in start_builtin_map.items():
self.categorydict[key] = []
for predtriple in value:
pred = self.dict_predtriple_to_pred(predtriple)
self.categorydict[key].append(pred)
self.sync_with_predlist(pred.predname, pred, key, 'add')
def mapequal(self, othercbc):
return self.categorydict == othercbc.categorydict
def dict_predtriple_to_pred(self, predtriple):
ncode = predtriple['code']
ninputs = predtriple['num_inputs']
nfunc = predtriple['func']
nfunc_pred = nfunc.split("(")[0]
nfunc_arglist = nfunc.split("(")[1].split(")")[0].split(",")
pred = CongressBuiltinPred(nfunc_pred, nfunc_arglist, ninputs, ncode)
return pred
def add_map(self, newmap):
for key, value in newmap.items():
if key not in self.categorydict:
self.categorydict[key] = []
for predtriple in value:
pred = self.dict_predtriple_to_pred(predtriple)
if not self.builtin_is_registered(pred):
self.categorydict[key].append(pred)
self.sync_with_predlist(pred.predname, pred, key, 'add')
def delete_map(self, newmap):
for key, value in newmap.items():
for predtriple in value:
predtotest = self.dict_predtriple_to_pred(predtriple)
for pred in self.categorydict[key]:
if pred.predname == predtotest.predname:
if pred.num_inputs == predtotest.num_inputs:
self.categorydict[key].remove(pred)
self.sync_with_predlist(pred.predname,
pred, key, 'del')
if self.categorydict[key] == []:
del self.categorydict[key]
def sync_with_predlist(self, predname, pred, category, operation):
if operation == 'add':
self.preddict[predname] = [pred, category]
if operation == 'del':
if predname in self.preddict:
del self.preddict[predname]
def delete_builtin(self, category, name, inputs):
if category not in self.categorydict:
self.categorydict[category] = []
for pred in self.categorydict[category]:
if pred.num_inputs == inputs and pred.predname == name:
self.categorydict[category].remove(pred)
self.sync_with_predlist(name, pred, category, 'del')
def get_category_name(self, predname, predinputs):
if predname in self.preddict:
if self.preddict[predname][0].num_inputs == predinputs:
return self.preddict[predname][1]
return None
def exists_category(self, category):
return category in self.categorydict
def insert_category(self, category):
self.categorydict[category] = []
def delete_category(self, category):
if category in self.categorydict:
categorypreds = self.categorydict[category]
for pred in categorypreds:
self.sync_with_predlist(pred.predname, pred, category, 'del')
del self.categorydict[category]
def insert_to_category(self, category, pred):
if category in self.categorydict:
self.categorydict[category].append(pred)
self.sync_with_predlist(pred.predname, pred, category, 'add')
else:
assert("Category does not exist")
def delete_from_category(self, category, pred):
if category in self.categorydict:
self.categorydict[category].remove(pred)
self.sync_with_predlist(pred.predname, pred, category, 'del')
else:
assert("Category does not exist")
def delete_all_in_category(self, category):
if category in self.categorydict:
categorypreds = self.categorydict[category]
for pred in categorypreds:
self.sync_with_predlist(pred.predname, pred, category, 'del')
self.categorydict[category] = []
else:
assert("Category does not exist")
def builtin_is_registered(self, predtotest):
"""Given a CongressBuiltinPred, check if it has been registered."""
pname = predtotest.predname
if pname in self.preddict:
if self.preddict[pname][0].num_inputs == predtotest.num_inputs:
return True
return False
def is_builtin(self, table, arity=None):
"""Given a Tablename and arity, check if it is a builtin."""
# Note: for now we grandfather in old builtin tablenames but will
# deprecate those tablenames in favor of builtin:tablename
if ((table.service == BUILTIN_NAMESPACE and
table.table in self.preddict) or
table.table in self.preddict): # grandfather
if not arity:
return True
if len(self.preddict[table.table][0].predargs) == arity:
return True
return False
def builtin(self, table):
"""Return a CongressBuiltinPred for given Tablename or None."""
if not isinstance(table, six.string_types):
table = table.table
if table in self.preddict:
return self.preddict[table][0]
return None
def list_available_builtins(self):
"""Print out the list of builtins, by category."""
for key, value in self.categorydict.items():
predlist = self.categorydict[key]
for pred in predlist:
print(str(pred))
# a Singleton that serves as the entry point for builtin functionality
builtin_registry = CongressBuiltinCategoryMap(_builtin_map)
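For orientation, a minimal sketch of how this registry is typically consulted follows. It is illustrative only: the import path matches the "from congress.datalog import builtin" used elsewhere in this diff, and the two-argument 'lt' comparison builtin is an assumption based on the usual _builtin_map contents.

# Hedged usage sketch; not part of the retired module.
from congress.datalog import builtin

reg = builtin.builtin_registry             # the singleton defined above
pred = reg.builtin('lt')                   # look up a builtin by table name
if pred is not None:
    print(pred.predname, pred.num_inputs)  # e.g. "lt 2"
print(reg.get_category_name('lt', 2))      # its category, e.g. "comparison"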

File diff suppressed because it is too large


@@ -1,413 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from six.moves import range
from congress.datalog import base
from congress.datalog import compile
from congress.datalog import topdown
from congress.datalog import unify
from congress.datalog import utility
from congress import exception
##############################################################################
# Concrete Theory: Database
##############################################################################
class Database(topdown.TopDownTheory):
class Proof(object):
def __init__(self, binding, rule):
self.binding = binding
self.rule = rule
def __str__(self):
return "apply({}, {})".format(str(self.binding), str(self.rule))
def __eq__(self, other):
result = (self.binding == other.binding and
self.rule == other.rule)
# LOG.debug("Pf: Comparing %s and %s: %s", self, other, result)
# LOG.debug("Pf: %s == %s is %s",
# self.binding, other.binding, self.binding == other.binding)
# LOG.debug("Pf: %s == %s is %s",
# self.rule, other.rule, self.rule == other.rule)
return result
def __ne__(self, other):
return not self.__eq__(other)
class ProofCollection(object):
def __init__(self, proofs):
self.contents = list(proofs)
def __str__(self):
return '{' + ",".join(str(x) for x in self.contents) + '}'
def __isub__(self, other):
if other is None:
return self  # augmented assignment must return self
# LOG.debug("PC: Subtracting %s and %s", self, other)
remaining = []
for proof in self.contents:
if proof not in other.contents:
remaining.append(proof)
self.contents = remaining
return self
def __ior__(self, other):
if other is None:
return self  # augmented assignment must return self
# LOG.debug("PC: Unioning %s and %s", self, other)
for proof in other.contents:
# LOG.debug("PC: Considering %s", proof)
if proof not in self.contents:
self.contents.append(proof)
return self
def __getitem__(self, key):
return self.contents[key]
def __len__(self):
return len(self.contents)
def __ge__(self, iterable):
for proof in iterable:
if proof not in self.contents:
# LOG.debug("Proof %s makes %s not >= %s",
# proof, self, iterstr(iterable))
return False
return True
def __le__(self, iterable):
for proof in self.contents:
if proof not in iterable:
# LOG.debug("Proof %s makes %s not <= %s",
# proof, self, iterstr(iterable))
return False
return True
def __eq__(self, other):
return self <= other and other <= self
def __ne__(self, other):
return not self.__eq__(other)
class DBTuple(object):
def __init__(self, iterable, proofs=None):
self.tuple = tuple(iterable)
if proofs is None:
proofs = []
self.proofs = Database.ProofCollection(proofs)
def __eq__(self, other):
return self.tuple == other.tuple
def __ne__(self, other):
return not self.__eq__(other)
def __str__(self):
return str(self.tuple) + str(self.proofs)
def __len__(self):
return len(self.tuple)
def __getitem__(self, index):
return self.tuple[index]
def __setitem__(self, index, value):
# tuples are immutable; rebuild the tuple with the new value
self.tuple = self.tuple[:index] + (value,) + self.tuple[index + 1:]
def match(self, atom, unifier):
# LOG.debug("DBTuple matching %s against atom %s in %s",
# self, iterstr(atom.arguments), unifier)
if len(self.tuple) != len(atom.arguments):
return None
changes = []
for i in range(0, len(atom.arguments)):
val, binding = unifier.apply_full(atom.arguments[i])
# LOG.debug("val(%s)=%s at %s; comparing to object %s",
# atom.arguments[i], val, binding, self.tuple[i])
if val.is_variable():
changes.append(binding.add(
val, compile.Term.create_from_python(self.tuple[i]),
None))
else:
if val.name != self.tuple[i]:
unify.undo_all(changes)
return None
return changes
def __init__(self, name=None, abbr=None, theories=None, schema=None,
desc=None, owner=None):
super(Database, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
self.data = {}
self.kind = base.DATABASE_POLICY_TYPE
def str2(self):
def hash2str(h):
s = "{"
s += ", ".join(["{} : {}".format(str(key), str(h[key]))
for key in h])
return s
def hashlist2str(h):
strings = []
for key in h:
s = "{} : ".format(key)
s += '['
s += ', '.join([str(val) for val in h[key]])
s += ']'
strings.append(s)
return '{' + ", ".join(strings) + '}'
return hashlist2str(self.data)
def __eq__(self, other):
return self.data == other.data
def __ne__(self, other):
return not self.__eq__(other)
def __sub__(self, other):
def add_tuple(table, dbtuple):
new = [table]
new.extend(dbtuple.tuple)
results.append(new)
results = []
for table in self.data:
if table not in other.data:
for dbtuple in self.data[table]:
add_tuple(table, dbtuple)
else:
for dbtuple in self.data[table]:
if dbtuple not in other.data[table]:
add_tuple(table, dbtuple)
return results
def __or__(self, other):
def add_db(db):
for table in db.data:
for dbtuple in db.data[table]:
result.insert(compile.Literal.create_from_table_tuple(
table, dbtuple.tuple), proofs=dbtuple.proofs)
result = Database()
add_db(self)
add_db(other)
return result
def __getitem__(self, key):
# KEY must be a tablename
return self.data[key]
def content(self, tablenames=None):
"""Return a sequence of Literals representing all the table data."""
results = []
if tablenames is None:
tablenames = self.data.keys()
for table in tablenames:
if table not in self.data:
continue
for dbtuple in self.data[table]:
results.append(compile.Literal.create_from_table_tuple(
table, dbtuple.tuple))
return results
def is_noop(self, event):
"""Returns T if EVENT is a noop on the database."""
# insert/delete same code but with flipped return values
# Code below is written as insert, except noop initialization.
if event.is_insert():
noop = True
else:
noop = False
if event.formula.table.table not in self.data:
return not noop
event_data = self.data[event.formula.table.table]
raw_tuple = tuple(event.formula.argument_names())
for dbtuple in event_data:
if dbtuple.tuple == raw_tuple:
if event.proofs <= dbtuple.proofs:
return noop
return not noop
def __contains__(self, formula):
if not compile.is_atom(formula):
return False
if formula.table.table not in self.data:
return False
event_data = self.data[formula.table.table]
raw_tuple = tuple(formula.argument_names())
return any((dbtuple.tuple == raw_tuple for dbtuple in event_data))
def explain(self, atom):
if atom.table.table not in self.data or not atom.is_ground():
return self.ProofCollection([])
args = tuple([x.name for x in atom.arguments])
for dbtuple in self.data[atom.table.table]:
if dbtuple.tuple == args:
return dbtuple.proofs
def tablenames(self, body_only=False, include_builtin=False,
include_modal=True):
"""Return all table names occurring in this theory."""
if body_only:
return []
return self.data.keys()
# overloads for TopDownTheory so we can properly use the
# top_down_evaluation routines
def defined_tablenames(self):
return self.data.keys()
def head_index(self, table, match_literal=None):
if table not in self.data:
return []
return self.data[table]
def head(self, thing):
return thing
def body(self, thing):
return []
def bi_unify(self, dbtuple, unifier1, atom, unifier2, theoryname):
"""DBTUPLE is always a ground DBTuple and ATOM is always an atom."""
return dbtuple.match(atom, unifier2)
def atom_to_internal(self, atom, proofs=None):
return atom.table.table, self.DBTuple(atom.argument_names(), proofs)
def insert(self, atom, proofs=None):
"""Inserts ATOM into the DB. Returns changes."""
return self.modify(compile.Event(formula=atom, insert=True,
proofs=proofs))
def delete(self, atom, proofs=None):
"""Deletes ATOM from the DB. Returns changes."""
return self.modify(compile.Event(formula=atom, insert=False,
proofs=proofs))
def update(self, events):
"""Applies all of EVENTS to the DB.
Each event is either an insert or a delete.
"""
changes = []
for event in events:
changes.extend(self.modify(event))
return changes
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
Return a list of PolicyException if we were
to apply the events EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
for event in events:
if not compile.is_atom(event.formula):
errors.append(exception.PolicyException(
"Non-atomic formula is not permitted: {}".format(
str(event.formula))))
else:
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
return errors
def modify(self, event):
"""Insert/Delete atom.
Inserts/deletes ATOM and returns a list of changes that
were caused. That list contains either 0 or 1 Event.
"""
assert compile.is_atom(event.formula), "Modify requires Atom"
atom = event.formula
self.log(atom.table.table, "Modify: %s", atom)
if self.is_noop(event):
self.log(atom.table.table, "Event %s is a noop", event)
return []
if event.insert:
self.insert_actual(atom, proofs=event.proofs)
else:
self.delete_actual(atom, proofs=event.proofs)
return [event]
def insert_actual(self, atom, proofs=None):
"""Workhorse for inserting ATOM into the DB.
Along with proofs explaining how ATOM was computed from other tables.
"""
assert compile.is_atom(atom), "Insert requires Atom"
table, dbtuple = self.atom_to_internal(atom, proofs)
self.log(table, "Insert: %s", atom)
if table not in self.data:
self.data[table] = [dbtuple]
self.log(atom.table.table, "First tuple in table %s", table)
return
else:
for existingtuple in self.data[table]:
assert existingtuple.proofs is not None
if existingtuple.tuple == dbtuple.tuple:
assert existingtuple.proofs is not None
existingtuple.proofs |= dbtuple.proofs
assert existingtuple.proofs is not None
return
self.data[table].append(dbtuple)
def delete_actual(self, atom, proofs=None):
"""Workhorse for deleting ATOM from the DB.
Along with the proofs that are no longer true.
"""
assert compile.is_atom(atom), "Delete requires Atom"
self.log(atom.table.table, "Delete: %s", atom)
table, dbtuple = self.atom_to_internal(atom, proofs)
if table not in self.data:
return
for i in range(0, len(self.data[table])):
existingtuple = self.data[table][i]
if existingtuple.tuple == dbtuple.tuple:
existingtuple.proofs -= dbtuple.proofs
if len(existingtuple.proofs) == 0:
del self.data[table][i]
return
def policy(self):
"""Return the policy for this theory.
No policy in this theory; only data.
"""
return []
def get_arity_self(self, tablename):
if tablename not in self.data:
return None
if len(self.data[tablename]) == 0:
return None
return len(self.data[tablename][0].tuple)
def content_string(self):
s = ""
for lit in self.content():
s += str(lit) + '\n'
return s + '\n'
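To make the Database semantics concrete, here is a small hedged sketch. It assumes compile.parse1 parses a single atom, the helper Congress code conventionally uses; treat the exact helper name as an assumption.

# Hedged usage sketch; not part of the retired module.
from congress.datalog import compile, database

db = database.Database(name='data')
atom = compile.parse1('p(1, 2)')            # a ground atom
print(db.insert(atom))                      # -> [event]: the table changed
print(db.insert(atom))                      # -> []: is_noop() filtered it out
print([str(lit) for lit in db.content()])   # roughly ['p(1, 2)']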


@@ -1,171 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.datalog import utility
class FactSet(object):
"""FactSet
Maintains a set of facts, and provides indexing for efficient iteration,
given a partial or full match. Expects that all facts are the same width.
"""
def __init__(self):
self._facts = utility.OrderedSet()
# key is a sorted tuple of column indices, values are dict mapping a
# specific value for the key to a set of Facts.
self._indicies = {}
def __contains__(self, fact):
return fact in self._facts
def __len__(self):
return len(self._facts)
def __iter__(self):
return self._facts.__iter__()
def add(self, fact):
"""Add a fact to the FactSet
Returns True if the fact is absent from this FactSet and adds the
fact, otherwise returns False.
"""
assert isinstance(fact, tuple)
changed = self._facts.add(fact)
if changed:
# Add the fact to the indicies
try:
for index in self._indicies.keys():
self._add_fact_to_index(fact, index)
except Exception:
self._facts.discard(fact)
raise
return changed
def remove(self, fact):
"""Remove a fact from the FactSet
Returns True if the fact is in this FactSet and removes the fact,
otherwise returns False.
"""
changed = self._facts.discard(fact)
if changed:
# Remove from indices
try:
for index in self._indicies.keys():
self._remove_fact_from_index(fact, index)
except Exception:
self._facts.add(fact)
raise
return changed
def create_index(self, columns):
"""Create an index
@columns is a tuple of column indices that index into the facts in
self. @columns must be sorted in ascending order, and each column
index must be less than the width of a fact in self. If the index
exists, do nothing.
"""
assert sorted(columns) == list(columns)
assert len(columns)
if columns in self._indicies:
return
for f in self._facts:
self._add_fact_to_index(f, columns)
def remove_index(self, columns):
"""Remove an index
@columns is a tuple of column indices that index into the facts in
self. @columns must be sorted in ascending order, and each column
index must be less than the width of a fact in self. If the index
does not exist, do nothing.
"""
assert sorted(columns) == list(columns)
if columns in self._indicies:
del self._indicies[columns]
def has_index(self, columns):
"""Returns True if the index exists."""
return columns in self._indicies
def find(self, partial_fact, iterations=None):
"""Find Facts given a partial fact
@partial_fact is a tuple of pair tuples. The first item in each
pair tuple is an index into a fact, and the second item is a value to
match against self._facts. Expects the pairs to be sorted by index in
ascending order.
@iterations is either an empty list or None. If @iterations is an
empty list, then find() will append the number of iterations find()
used to compute the return value (this is useful for testing indexing).
Returns matching Facts.
"""
index = tuple([i for i, v in partial_fact])
k = tuple([v for i, v in partial_fact])
if index in self._indicies:
if iterations is not None:
iterations.append(1)
if k in self._indicies[index]:
return self._indicies[index][k]
else:
return set()
# There is no index, so iterate.
matches = set()
for f in self._facts:
match = True
for i, v in partial_fact:
if f[i] != v:
match = False
break
if match:
matches.add(f)
if iterations is not None:
iterations.append(len(self._facts))
return matches
def _compute_key(self, columns, fact):
# assumes that @columns is sorted in ascending order.
return tuple([fact[i] for i in columns])
def _add_fact_to_index(self, fact, index):
if index not in self._indicies:
self._indicies[index] = {}
k = self._compute_key(index, fact)
if k not in self._indicies[index]:
self._indicies[index][k] = set((fact,))
else:
self._indicies[index][k].add(fact)
def _remove_fact_from_index(self, fact, index):
k = self._compute_key(index, fact)
self._indicies[index][k].remove(fact)
if not len(self._indicies[index][k]):
del self._indicies[index][k]
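The indexing contract described in the docstrings above can be exercised as below; a sketch assuming only that the module is importable as congress.datalog.factset.

# Hedged usage sketch; not part of the retired module.
from congress.datalog import factset

fs = factset.FactSet()
fs.add(('alice', 'eng', 30))
fs.add(('bob', 'eng', 40))
fs.add(('carol', 'ops', 50))
fs.create_index((1,))                       # index on column 1
iterations = []
print(fs.find(((1, 'eng'),), iterations))   # the two 'eng' facts
print(iterations)                           # [1]: answered from the index, no full scan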


@@ -1,621 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from six.moves import range
from congress.datalog import base
from congress.datalog import compile
from congress.datalog import database
from congress.datalog import topdown
from congress.datalog import utility
LOG = logging.getLogger(__name__)
class DeltaRule(object):
"""Rule describing how updates to data sources change a table."""
def __init__(self, trigger, head, body, original):
self.trigger = trigger # atom
self.head = head # atom
# list of literals, sorted for order-insensitive comparison
self.body = (
sorted([lit for lit in body if not lit.is_builtin()]) +
sorted([lit for lit in body if lit.is_builtin()]))
self.original = original # Rule from which SELF was derived
def __str__(self):
return "<trigger: {}, head: {}, body: {}>".format(
str(self.trigger), str(self.head), [str(lit) for lit in self.body])
def __eq__(self, other):
return (self.trigger == other.trigger and
self.head == other.head and
len(self.body) == len(other.body) and
all(self.body[i] == other.body[i]
for i in range(0, len(self.body))))
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash((self.trigger, self.head, tuple(self.body)))
def variables(self):
"""Return the set of variables occurring in this delta rule."""
vs = self.trigger.variables()
vs |= self.head.variables()
for atom in self.body:
vs |= atom.variables()
return vs
def tablenames(self, body_only=False, include_builtin=False,
include_modal=True):
"""Return the set of tablenames occurring in this delta rule."""
tables = set()
if not body_only:
tables.add(self.head.tablename())
tables.add(self.trigger.tablename())
for atom in self.body:
tables.add(atom.tablename())
return tables
class DeltaRuleTheory(base.Theory):
"""A collection of DeltaRules. Not useful by itself as a policy."""
def __init__(self, name=None, abbr=None, theories=None):
super(DeltaRuleTheory, self).__init__(
name=name, abbr=abbr, theories=theories)
# dictionary from table name to list of rules with that table as
# trigger
self.rules = {}
# set of the original rules from which the delta rules were derived
self.originals = set()
# dictionary from table name to number of rules with that table in
# head
self.views = {}
# all tables
self.all_tables = {}
self.kind = base.DELTA_POLICY_TYPE
def modify(self, event):
"""Insert/delete the compile.Rule RULE into the theory.
Return list of changes (either the empty list or
a list including just RULE).
"""
self.log(None, "DeltaRuleTheory.modify %s", event.formula)
self.log(None, "originals: %s", utility.iterstr(self.originals))
if event.insert:
if self.insert(event.formula):
return [event]
else:
if self.delete(event.formula):
return [event]
return []
def insert(self, rule):
"""Insert a compile.Rule into the theory.
Return True iff the theory changed.
"""
assert compile.is_regular_rule(rule), (
"DeltaRuleTheory only takes rules")
self.log(rule.tablename(), "Insert: %s", rule)
if rule in self.originals:
self.log(None, utility.iterstr(self.originals))
return False
self.log(rule.tablename(), "Insert 2: %s", rule)
for delta in self.compute_delta_rules([rule]):
self.insert_delta(delta)
self.originals.add(rule)
return True
def insert_delta(self, delta):
"""Insert a delta rule."""
self.log(None, "Inserting delta rule %s", delta)
# views (tables occurring in head)
if delta.head.table.table in self.views:
self.views[delta.head.table.table] += 1
else:
self.views[delta.head.table.table] = 1
# tables
for table in delta.tablenames():
if table in self.all_tables:
self.all_tables[table] += 1
else:
self.all_tables[table] = 1
# contents
if delta.trigger.table.table not in self.rules:
self.rules[delta.trigger.table.table] = utility.OrderedSet()
self.rules[delta.trigger.table.table].add(delta)
def delete(self, rule):
"""Delete a compile.Rule from theory.
Assumes that COMPUTE_DELTA_RULES is deterministic.
Returns True iff the theory changed.
"""
self.log(rule.tablename(), "Delete: %s", rule)
if rule not in self.originals:
return False
for delta in self.compute_delta_rules([rule]):
self.delete_delta(delta)
self.originals.remove(rule)
return True
def delete_delta(self, delta):
"""Delete the DeltaRule DELTA from the theory."""
# views
if delta.head.table.table in self.views:
self.views[delta.head.table.table] -= 1
if self.views[delta.head.table.table] == 0:
del self.views[delta.head.table.table]
# tables
for table in delta.tablenames():
if table in self.all_tables:
self.all_tables[table] -= 1
if self.all_tables[table] == 0:
del self.all_tables[table]
# contents
self.rules[delta.trigger.table.table].discard(delta)
if not len(self.rules[delta.trigger.table.table]):
del self.rules[delta.trigger.table.table]
def policy(self):
return self.originals
def get_arity_self(self, tablename):
for p in self.originals:
if p.head.table.table == tablename:
return len(p.head.arguments)
return None
def __contains__(self, formula):
return formula in self.originals
def __str__(self):
return str(self.rules)
def rules_with_trigger(self, table):
"""Return the list of DeltaRules that trigger on the given TABLE."""
if table in self.rules:
return self.rules[table]
else:
return []
def is_view(self, x):
return x in self.views
def is_known(self, x):
return x in self.all_tables
def base_tables(self):
base = []
for table in self.all_tables:
if table not in self.views:
base.append(table)
return base
@classmethod
def eliminate_self_joins(cls, formulas):
"""Remove self joins.
Return new list of formulas that is equivalent to
the list of formulas FORMULAS except that there
are no self-joins.
"""
def new_table_name(name, arity, index):
return "___{}_{}_{}".format(name, arity, index)
def n_variables(n):
vars = []
for i in range(0, n):
vars.append("x" + str(i))
return vars
# dict from (table name, arity) tuple to
# max num of occurrences of self-joins in any rule
global_self_joins = {}
# remove self-joins from rules
results = []
for rule in formulas:
if rule.is_atom():
results.append(rule)
continue
LOG.debug("eliminating self joins from %s", rule)
occurrences = {} # for just this rule
for atom in rule.body:
table = atom.tablename()
arity = len(atom.arguments)
tablearity = (table, arity)
if tablearity not in occurrences:
occurrences[tablearity] = 1
else:
# change name of atom
atom.table.table = new_table_name(table, arity,
occurrences[tablearity])
# update our counters
occurrences[tablearity] += 1
if tablearity not in global_self_joins:
global_self_joins[tablearity] = 1
else:
global_self_joins[tablearity] = (
max(occurrences[tablearity] - 1,
global_self_joins[tablearity]))
results.append(rule)
LOG.debug("final rule: %s", rule)
# add definitions for new tables
for tablearity in global_self_joins:
table = tablearity[0]
arity = tablearity[1]
for i in range(1, global_self_joins[tablearity] + 1):
newtable = new_table_name(table, arity, i)
args = [compile.Variable(var) for var in n_variables(arity)]
head = compile.Literal(newtable, args)
body = [compile.Literal(table, args)]
results.append(compile.Rule(head, body))
LOG.debug("Adding rule %s", results[-1])
return results
@classmethod
def compute_delta_rules(cls, formulas):
"""Return list of DeltaRules computed from formulas.
Assuming FORMULAS has no self-joins, return a list of DeltaRules
derived from those FORMULAS.
"""
# Should do the following for correctness, but it needs to be
# done elsewhere so that we can properly maintain the tables
# that are generated.
# formulas = cls.eliminate_self_joins(formulas)
delta_rules = []
for rule in formulas:
if rule.is_atom():
continue
rule = compile.reorder_for_safety(rule)
for literal in rule.body:
if literal.is_builtin():
continue
newbody = [lit for lit in rule.body if lit is not literal]
delta_rules.append(
DeltaRule(literal, rule.head, newbody, rule))
return delta_rules
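# Illustrative note, not part of the retired module: for the rule
#   p(x) :- q(x), r(x)
# compute_delta_rules() yields one DeltaRule per non-builtin body literal:
#   <trigger: q(x), head: p(x), body: [r(x)]>
#   <trigger: r(x), head: p(x), body: [q(x)]>
# so an insert or delete event on q or r can be propagated to p by
# unifying the trigger with the event and evaluating the remaining body.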
class MaterializedViewTheory(topdown.TopDownTheory):
"""A theory that stores the table contents of views explicitly.
Relies on included theories to define the contents of those
tables not defined by the rules of the theory.
Recursive rules are allowed.
"""
def __init__(self, name=None, abbr=None, theories=None, schema=None,
desc=None, owner=None):
super(MaterializedViewTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
# queue of events left to process
self.queue = base.EventQueue()
# data storage
db_name = None
db_abbr = None
delta_name = None
delta_abbr = None
if name is not None:
db_name = name + "Database"
delta_name = name + "Delta"
if abbr is not None:
db_abbr = abbr + "DB"
delta_abbr = abbr + "Dlta"
self.database = database.Database(name=db_name, abbr=db_abbr)
# rules that dictate how database changes in response to events
self.delta_rules = DeltaRuleTheory(name=delta_name, abbr=delta_abbr)
self.kind = base.MATERIALIZED_POLICY_TYPE
def set_tracer(self, tracer):
if isinstance(tracer, base.Tracer):
self.tracer = tracer
self.database.tracer = tracer
self.delta_rules.tracer = tracer
else:
self.tracer = tracer['self']
self.database.tracer = tracer['database']
self.delta_rules.tracer = tracer['delta_rules']
def get_tracer(self):
return {'self': self.tracer,
'database': self.database.tracer,
'delta_rules': self.delta_rules.tracer}
# External Interface
# SELECT is handled by TopDownTheory
def insert(self, formula):
return self.update([compile.Event(formula=formula, insert=True)])
def delete(self, formula):
return self.update([compile.Event(formula=formula, insert=False)])
def update(self, events):
"""Apply inserts/deletes described by EVENTS and return changes.
Does not check if EVENTS would cause errors.
"""
for event in events:
assert compile.is_datalog(event.formula), (
"Non-formula not allowed: {}".format(str(event.formula)))
self.enqueue_any(event)
changes = self.process_queue()
return changes
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
Return a list of PolicyException if we were
to apply the events EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
# compute new rule set
for event in events:
assert compile.is_datalog(event.formula), (
"update_would_cause_errors operates only on objects")
self.log(None, "Updating %s", event.formula)
if event.formula.is_atom():
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
else:
errors.extend(compile.rule_errors(
event.formula, self.theories, self.name))
return errors
def explain(self, query, tablenames, find_all):
"""Return a list of proofs if QUERY is true, or None otherwise."""
assert compile.is_atom(query), "Explain requires an atom"
# ignoring TABLENAMES and FIND_ALL
# except that we return the proper type.
proof = self.explain_aux(query, 0)
if proof is None:
return None
else:
return [proof]
def policy(self):
return self.delta_rules.policy()
def get_arity_self(self, tablename):
result = self.database.get_arity_self(tablename)
if result:
return result
return self.delta_rules.get_arity_self(tablename)
# Interface implementation
def explain_aux(self, query, depth):
self.log(query.table.table, "Explaining %s", query, depth=depth)
# Bail out on negated literals. Need different
# algorithm b/c we need to introduce quantifiers.
if query.is_negated():
return base.Proof(query, [])
# grab first local proof, since they're all equally good
localproofs = self.database.explain(query)
if localproofs is None:
return None
if len(localproofs) == 0: # base fact
return base.Proof(query, [])
localproof = localproofs[0]
rule_instance = localproof.rule.plug(localproof.binding)
subproofs = []
for lit in rule_instance.body:
subproof = self.explain_aux(lit, depth + 1)
if subproof is None:
return None
subproofs.append(subproof)
return base.Proof(query, subproofs)
def modify(self, event):
"""Modifies contents of theory to insert/delete FORMULA.
Returns True iff the theory changed.
"""
self.log(None, "Materialized.modify")
self.enqueue_any(event)
changes = self.process_queue()
self.log(event.formula.tablename(),
"modify returns %s", utility.iterstr(changes))
return changes
def enqueue_any(self, event):
"""Enqueue event.
Processing rules is a bit different than processing atoms
in that they generate additional events that we want
to process either before the rule is deleted or after
it is inserted. PROCESS_QUEUE is similar but assumes
that only the data will cause propagations (and ignores
included theories).
"""
# Note: all included theories must define MODIFY
formula = event.formula
if formula.is_atom():
self.log(formula.tablename(), "compute/enq: atom %s", formula)
assert not self.is_view(formula.table.table), (
"Cannot directly modify tables" +
" computed from other tables")
# self.log(formula.table, "%s: %s", text, formula)
self.enqueue(event)
return []
else:
# rules do not need to talk to included theories because they
# only generate events for views
# need to eliminate self-joins here so that we fill all
# the tables introduced by self-join elimination.
for rule in DeltaRuleTheory.eliminate_self_joins([formula]):
new_event = compile.Event(formula=rule, insert=event.insert,
target=event.target)
self.enqueue(new_event)
return []
def enqueue(self, event):
self.log(event.tablename(), "Enqueueing: %s", event)
self.queue.enqueue(event)
def process_queue(self):
"""Data and rule propagation routine.
Returns list of events that were not noops
"""
self.log(None, "Processing queue")
history = []
while len(self.queue) > 0:
event = self.queue.dequeue()
self.log(event.tablename(), "Dequeued %s", event)
if compile.is_regular_rule(event.formula):
changes = self.delta_rules.modify(event)
if len(changes) > 0:
history.extend(changes)
bindings = self.top_down_evaluation(
event.formula.variables(), event.formula.body)
self.log(event.formula.tablename(),
"new bindings after top-down: %s",
utility.iterstr(bindings))
self.process_new_bindings(bindings, event.formula.head,
event.insert, event.formula)
else:
self.propagate(event)
history.extend(self.database.modify(event))
self.log(event.tablename(), "History: %s",
utility.iterstr(history))
return history
def propagate(self, event):
"""Propagate event.
Computes and enqueue events generated by EVENT and the DELTA_RULES.
"""
self.log(event.formula.table.table, "Processing event: %s", event)
applicable_rules = self.delta_rules.rules_with_trigger(
event.formula.table.table)
if len(applicable_rules) == 0:
self.log(event.formula.table.table, "No applicable delta rule")
for delta_rule in applicable_rules:
self.propagate_rule(event, delta_rule)
def propagate_rule(self, event, delta_rule):
"""Propagate event and delta_rule.
Compute and enqueue new events generated by EVENT and DELTA_RULE.
"""
self.log(event.formula.table.table, "Processing event %s with rule %s",
event, delta_rule)
# compute tuples generated by event (either for insert or delete)
# print "event: {}, event.tuple: {},
# event.tuple.rawtuple(): {}".format(
# str(event), str(event.tuple), str(event.tuple.raw_tuple()))
# binding_list is dictionary
# Save binding for delta_rule.trigger; throw away binding for event
# since event is ground.
binding = self.new_bi_unifier()
assert compile.is_literal(delta_rule.trigger)
assert compile.is_literal(event.formula)
undo = self.bi_unify(delta_rule.trigger, binding,
event.formula, self.new_bi_unifier(), self.name)
if undo is None:
return
self.log(event.formula.table.table,
"binding list for event and delta-rule trigger: %s", binding)
bindings = self.top_down_evaluation(
delta_rule.variables(), delta_rule.body, binding)
self.log(event.formula.table.table, "new bindings after top-down: %s",
",".join([str(x) for x in bindings]))
if delta_rule.trigger.is_negated():
insert_delete = not event.insert
else:
insert_delete = event.insert
self.process_new_bindings(bindings, delta_rule.head,
insert_delete, delta_rule.original)
def process_new_bindings(self, bindings, atom, insert, original_rule):
"""Process new bindings.
For each of BINDINGS, apply to ATOM, and enqueue it as an insert if
INSERT is True and as a delete otherwise.
"""
# for each binding, compute generated tuple and group bindings
# by the tuple they generated
new_atoms = {}
for binding in bindings:
new_atom = atom.plug(binding)
if new_atom not in new_atoms:
new_atoms[new_atom] = []
new_atoms[new_atom].append(database.Database.Proof(
binding, original_rule))
self.log(atom.table.table, "new tuples generated: %s",
utility.iterstr(new_atoms))
# enqueue each distinct generated tuple, recording appropriate bindings
for new_atom in new_atoms:
# self.log(event.table, "new_tuple %s: %s", new_tuple,
# new_tuples[new_tuple])
# Only enqueue if new data.
# Putting the check here is necessary to support recursion.
self.enqueue(compile.Event(formula=new_atom,
proofs=new_atoms[new_atom],
insert=insert))
def is_view(self, x):
"""Return True if the table X is defined by the theory."""
return self.delta_rules.is_view(x)
def is_known(self, x):
"""Return True if this theory has any rule mentioning table X."""
return self.delta_rules.is_known(x)
def base_tables(self):
"""Get base tables.
Return the list of tables that are mentioned in the rules but
for which there are no rules with those tables in the head.
"""
return self.delta_rules.base_tables()
def _top_down_th(self, context, caller):
return self.database._top_down_th(context, caller)
def content(self, tablenames=None):
return self.database.content(tablenames=tablenames)
def __contains__(self, formula):
# TODO(thinrichs): if formula is a rule, we need to check
# self.delta_rules; if formula is an atom, we need to check
# self.database, but only if the table for that atom is
# not defined by rules. As it stands, for atoms, we are
# conflating membership with evaluation.
return (formula in self.database or formula in self.delta_rules)
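Putting the pieces together, the materialized pipeline behaves roughly as below; a sketch under the same compile.parse1 assumption as earlier, not a verbatim test from the tree.

# Hedged usage sketch; not part of the retired module.
from congress.datalog import compile, materialized

mat = materialized.MaterializedViewTheory(name='m')
mat.insert(compile.parse1('p(x) :- q(x)'))   # rule -> compiled into delta rules
mat.insert(compile.parse1('q(1)'))           # data -> propagated into view p
print([str(x) for x in mat.select(compile.parse1('p(x)'))])  # roughly ['p(1)']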


@@ -1,406 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from congress.datalog import base
from congress.datalog import compile
from congress.datalog import ruleset
from congress.datalog import topdown
from congress.datalog import utility
from congress import exception
LOG = logging.getLogger(__name__)
class RuleHandlingMixin(object):
# External Interface
def initialize_tables(self, tablenames, facts):
"""Event handler for (re)initializing a collection of tables
@facts must be an iterable containing compile.Fact objects.
"""
LOG.info("initialize_tables")
cleared_tables = set(tablenames)
for t in tablenames:
self.rules.clear_table(t)
count = 0
extra_tables = set()
ignored_facts = 0
for f in facts:
if f.table not in cleared_tables:
extra_tables.add(f.table)
ignored_facts += 1
else:
self.rules.add_rule(f.table, f)
count += 1
if self.schema:
self.schema.update(f, True)
if ignored_facts > 0:
LOG.error("initialize_tables ignored %d facts for tables "
"%s not included in the list of tablenames %s",
ignored_facts, extra_tables, cleared_tables)
LOG.info("initialized %d tables with %d facts",
len(cleared_tables), count)
def insert(self, rule):
changes = self.update([compile.Event(formula=rule, insert=True)])
return [event.formula for event in changes]
def delete(self, rule):
changes = self.update([compile.Event(formula=rule, insert=False)])
return [event.formula for event in changes]
def _update_lit_schema(self, lit, is_insert):
if self.schema is None:
raise exception.PolicyException(
"Cannot update schema because theory %s doesn't have "
"schema." % self.name)
if self.schema.complete:
# complete means the schema is pre-built and shouldn't be updated
return None
return self.schema.update(lit, is_insert)
def update_rule_schema(self, rule, is_insert):
schema_changes = []
if self.schema is None or not self.theories or self.schema.complete:
# complete means the schema is pre-built, as for datasources'
return schema_changes
if isinstance(rule, compile.Fact) or isinstance(rule, compile.Literal):
schema_changes.append(self._update_lit_schema(rule, is_insert))
return schema_changes
schema_changes.append(self._update_lit_schema(rule.head, is_insert))
for lit in rule.body:
if lit.is_builtin():
continue
active_theory = lit.table.service or self.name
if active_theory not in self.theories:
continue
schema_changes.append(
self.theories[active_theory]._update_lit_schema(lit,
is_insert))
return schema_changes
def revert_schema(self, schema_changes):
if not self.theories:
return
for change in schema_changes:
if not change:
continue
active_theory = change[3]
if not active_theory:
self.schema.revert(change)
else:
self.theories[active_theory].schema.revert(change)
def update(self, events):
"""Apply EVENTS.
And return the list of EVENTS that actually
changed the theory. Each event is the insert or delete of
a policy statement.
"""
changes = []
self.log(None, "Update %s", utility.iterstr(events))
try:
for event in events:
schema_changes = self.update_rule_schema(
event.formula, event.insert)
formula = compile.reorder_for_safety(event.formula)
if event.insert:
if self._insert_actual(formula):
changes.append(event)
else:
self.revert_schema(schema_changes)
else:
if self._delete_actual(formula):
changes.append(event)
else:
self.revert_schema(schema_changes)
except Exception:
LOG.exception("runtime caught an exception")
raise
return changes
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
Return a list of PolicyException if we were
to apply the insert/deletes of policy statements dictated by
EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
for event in events:
if not compile.is_datalog(event.formula):
errors.append(exception.PolicyException(
"Non-formula found: {}".format(
str(event.formula))))
else:
if event.formula.is_atom():
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
else:
errors.extend(compile.rule_errors(
event.formula, self.theories, self.name))
# Would also check that rules are non-recursive, but that
# is currently being handled by Runtime. The current implementation
# disallows recursion in all theories.
return errors
def define(self, rules):
"""Empties and then inserts RULES."""
self.empty()
return self.update([compile.Event(formula=rule, insert=True)
for rule in rules])
def empty(self, tablenames=None, invert=False):
"""Deletes contents of theory.
If provided, TABLENAMES causes only the removal of all rules
that help define one of the tables in TABLENAMES.
If INVERT is true, all rules defining anything other than a
table in TABLENAMES are deleted.
"""
if tablenames is None:
self.rules.clear()
return
if invert:
to_clear = set(self.defined_tablenames()) - set(tablenames)
else:
to_clear = tablenames
for table in to_clear:
self.rules.clear_table(table)
def policy(self):
# eliminate all rules with empty bodies
return [p for p in self.content() if len(p.body) > 0]
def __contains__(self, formula):
if compile.is_atom(formula):
return self.rules.contains(formula.table.table, formula)
else:
return self.rules.contains(formula.head.table.table, formula)
# Internal Interface
def _insert_actual(self, rule):
"""Insert RULE and return True if there was a change."""
self.dirty = True
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Insert: %s", repr(rule))
return self.rules.add_rule(rule.head.table.table, rule)
def _delete_actual(self, rule):
"""Delete RULE and return True if there was a change."""
self.dirty = True
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Delete: %s", rule)
return self.rules.discard_rule(rule.head.table.table, rule)
def content(self, tablenames=None):
if tablenames is None:
tablenames = self.rules.keys()
results = []
for table in tablenames:
if table in self.rules:
results.extend(self.rules.get_rules(table))
return results
class NonrecursiveRuleTheory(RuleHandlingMixin, topdown.TopDownTheory):
"""A non-recursive collection of Rules."""
def __init__(self, name=None, abbr=None,
schema=None, theories=None, desc=None, owner=None):
super(NonrecursiveRuleTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
# dictionary from table name to list of rules with that table in head
self.rules = ruleset.RuleSet()
self.kind = base.NONRECURSIVE_POLICY_TYPE
if schema is None:
self.schema = compile.Schema()
# Indicates that a rule was added/removed
# Used by the compiler to know if a theory should be recompiled.
self.dirty = False
# SELECT implemented by TopDownTheory
def head_index(self, table, match_literal=None):
"""Return head index.
This routine must return all the formulas pertinent for
top-down evaluation when a literal with TABLE is at the top
of the stack.
"""
if table in self.rules:
return self.rules.get_rules(table, match_literal)
return []
def arity(self, table, modal=None):
"""Return the number of arguments TABLE takes.
:param table: can be either a string or a Tablename
:returns: None if arity is unknown (if it does not occur in
the head of a rule).
"""
if isinstance(table, compile.Tablename):
service = table.service
name = table.table
fullname = table.name()
else:
fullname = table
service, name = compile.Tablename.parse_service_table(table)
# check if schema knows the answer
if self.schema:
if service is None or service == self.name:
arity = self.schema.arity(name)
else:
arity = self.schema.arity(fullname)
if arity is not None:
return arity
# assuming a single arity for all tables
formulas = self.head_index(fullname) or self.head_index(name)
try:
first = next(f for f in formulas
if f.head.table.matches(service, name, modal))
except StopIteration:
return None
# should probably have an overridable function for computing
# the arguments of a head. Instead we assume heads have .arguments
return len(self.head(first).arguments)
def defined_tablenames(self):
"""Returns list of table names defined in/written to this theory."""
return self.rules.keys()
def head(self, formula):
"""Given the output from head_index(), return the formula head.
Given a FORMULA, return the thing to unify against.
Usually, FORMULA is a compile.Rule, but it could be anything
returned by HEAD_INDEX.
"""
return formula.head
def body(self, formula):
"""Return formula body.
Given a FORMULA, return a list of things to push onto the
top-down eval stack.
"""
return formula.body
class ActionTheory(NonrecursiveRuleTheory):
"""ActionTheory object.
Same as NonrecursiveRuleTheory except it has fewer constraints
on the permitted rules. Still working out the details.
"""
def __init__(self, name=None, abbr=None,
schema=None, theories=None, desc=None, owner=None):
super(ActionTheory, self).__init__(name=name, abbr=abbr,
schema=schema, theories=theories,
desc=desc, owner=owner)
self.kind = base.ACTION_POLICY_TYPE
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
Return a list of PolicyException if we were
to apply the events EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
for event in events:
if not compile.is_datalog(event.formula):
errors.append(exception.PolicyException(
"Non-formula found: {}".format(
str(event.formula))))
else:
if event.formula.is_atom():
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
else:
errors.extend(compile.rule_head_has_no_theory(
event.formula,
permit_head=lambda lit: lit.is_update()))
# Should put this back in place, but there are some
# exceptions that we don't handle right now.
# Would like to mark some tables as only being defined
# for certain bound/free arguments and take that into
# account when doing error checking.
# errors.extend(compile.rule_negation_safety(event.formula))
return errors
class MultiModuleNonrecursiveRuleTheory(NonrecursiveRuleTheory):
"""MultiModuleNonrecursiveRuleTheory object.
Same as NonrecursiveRuleTheory, except we allow rules with theories
in the head. Intended for use with TopDownTheory's INSTANCES method.
"""
def _insert_actual(self, rule):
"""Insert RULE and return True if there was a change."""
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Insert: %s", rule)
return self.rules.add_rule(rule.head.table.table, rule)
def _delete_actual(self, rule):
"""Delete RULE and return True if there was a change."""
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Delete: %s", rule)
return self.rules.discard_rule(rule.head.table.table, rule)
# def update_would_cause_errors(self, events):
# return []
class DatasourcePolicyTheory(NonrecursiveRuleTheory):
"""DatasourcePolicyTheory
DatasourcePolicyTheory is identical to NonrecursiveRuleTheory, except that
self.kind is base.DATASOURCE_POLICY_TYPE instead of
base.NONRECURSIVE_POLICY_TYPE. DatasourcePolicyTheory uses a different
self.kind so that the synchronizer knows not to synchronize policies of
kind DatasourcePolicyTheory with the database listing of policies.
"""
def __init__(self, name=None, abbr=None,
schema=None, theories=None, desc=None, owner=None):
super(DatasourcePolicyTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
self.kind = base.DATASOURCE_POLICY_TYPE
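A compact sketch of the non-recursive theory in action; the table names are invented for illustration and compile.parse1 is assumed as before.

# Hedged usage sketch; not part of the retired module.
from congress.datalog import compile, nonrecursive

th = nonrecursive.NonrecursiveRuleTheory(name='policy')
th.insert(compile.parse1('error(x) :- server(x), insecure(x)'))
th.insert(compile.parse1('server(1)'))
th.insert(compile.parse1('insecure(1)'))
print([str(x) for x in th.select(compile.parse1('error(x)'))])  # roughly ['error(1)']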


@@ -1,176 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.datalog import compile
from congress.datalog import factset
from congress.datalog import utility
class RuleSet(object):
"""RuleSet
Keeps track of all rules for all tables.
"""
# Internally:
# An index_name looks like this: (p, (2, 4)) which means this index is
# on table 'p' and it specifies columns 2 and 4.
#
# An index_key looks like this: (p, (2, 'abc'), (4, 'def'))
def __init__(self):
self.rules = {}
self.facts = {}
def __str__(self):
return str(self.rules) + " " + str(self.facts)
def add_rule(self, key, rule):
"""Add a rule to the Ruleset
@rule can be a Rule or a Fact. Returns True if add_rule() changes the
RuleSet.
"""
if isinstance(rule, compile.Fact):
# If the rule is a Fact, then add it to self.facts.
if key not in self.facts:
self.facts[key] = factset.FactSet()
return self.facts[key].add(rule)
elif len(rule.body) == 0 and not rule.head.is_negated():
# If the rule is a Rule with no body, then it's a Fact, so
# convert the Rule to a Fact and add it to self.facts.
f = compile.Fact(key, (a.name for a in rule.head.arguments))
if key not in self.facts:
self.facts[key] = factset.FactSet()
return self.facts[key].add(f)
else:
# else the rule is a regular rule, so add it to self.rules.
if key in self.rules:
return self.rules[key].add(rule)
else:
self.rules[key] = utility.OrderedSet([rule])
return True
def discard_rule(self, key, rule):
"""Remove a rule from the Ruleset
@rule can be a Rule or a Fact. Returns True if discard_rule() changes
the RuleSet.
"""
if isinstance(rule, compile.Fact):
# rule is a Fact, so remove from self.facts
if key in self.facts:
changed = self.facts[key].remove(rule)
if len(self.facts[key]) == 0:
del self.facts[key]
return changed
return False
elif not len(rule.body):
# rule is a Rule, but without a body so it will be in self.facts.
if key in self.facts:
fact = compile.Fact(key, [a.name for a in rule.head.arguments])
changed = self.facts[key].remove(fact)
if len(self.facts[key]) == 0:
del self.facts[key]
return changed
return False
else:
# rule is a Rule with a body, so remove from self.rules.
if key in self.rules:
changed = self.rules[key].discard(rule)
if len(self.rules[key]) == 0:
del self.rules[key]
return changed
return False
def keys(self):
return list(self.facts.keys()) + list(self.rules.keys())
def __contains__(self, key):
return key in self.facts or key in self.rules
def contains(self, key, rule):
if isinstance(rule, compile.Fact):
return key in self.facts and rule in self.facts[key]
elif isinstance(rule, compile.Literal):
if key not in self.facts:
return False
fact = compile.Fact(key, [a.name for a in rule.arguments])
return fact in self.facts[key]
elif not len(rule.body):
if key not in self.facts:
return False
fact = compile.Fact(key, [a.name for a in rule.head.arguments])
return fact in self.facts[key]
else:
return key in self.rules and rule in self.rules[key]
def get_rules(self, key, match_literal=None):
facts = []
if (match_literal and not match_literal.is_negated() and
key in self.facts):
# If the caller supplies a literal to match against, then use an
# index to find the matching rules.
bound_arguments = tuple([i for i, arg
in enumerate(match_literal.arguments)
if not arg.is_variable()])
if (bound_arguments and
not self.facts[key].has_index(bound_arguments)):
# The index does not exist, so create it.
self.facts[key].create_index(bound_arguments)
partial_fact = tuple(
[(i, arg.name)
for i, arg in enumerate(match_literal.arguments)
if not arg.is_variable()])
facts = list(self.facts[key].find(partial_fact))
else:
# There is no usable match_literal, so get all facts for the
# table.
facts = list(self.facts.get(key, ()))
# Convert native tuples to Rule objects.
# TODO(alex): This is inefficient because it creates Literal and Rule
# objects. It would be more efficient to change the TopDownTheory and
# unifier to handle Facts natively.
fact_rules = []
for fact in facts:
# Setting use_modules=False so we don't split up tablenames.
# This allows us to choose at compile-time whether to split
# the tablename up.
literal = compile.Literal(
key, [compile.Term.create_from_python(x) for x in fact],
use_modules=False)
fact_rules.append(compile.Rule(literal, ()))
return fact_rules + list(self.rules.get(key, ()))
def clear(self):
self.rules = {}
self.facts = {}
def clear_table(self, table):
self.rules[table] = utility.OrderedSet()
self.facts[table] = factset.FactSet()
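The Fact/Rule split above can be observed directly; a sketch assuming compile.Fact takes a table name and a tuple of values, as the add_rule() branches suggest.

# Hedged usage sketch; not part of the retired module.
from congress.datalog import compile, ruleset

rs = ruleset.RuleSet()
rs.add_rule('p', compile.Fact('p', (1, 2)))           # stored in a FactSet
rs.add_rule('q', compile.parse1('q(x) :- p(x, y)'))   # stored as a rule
for r in rs.get_rules('p'):
    print(r)  # facts come back wrapped as body-less Rules, e.g. p(1, 2)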


@@ -1,642 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
import six
from six.moves import range
from congress.datalog import base
from congress.datalog import builtin
from congress.datalog import compile
from congress.datalog import unify
from congress.datalog import utility
LOG = logging.getLogger(__name__)
class TopDownTheory(base.Theory):
"""Class that holds the Top-Down evaluation routines.
Classes will inherit from this class if they want to import and specialize
those routines.
"""
class TopDownContext(object):
"""Struct for storing the search state of top-down evaluation."""
def __init__(self, literals, literal_index, binding, context, theory,
depth):
self.literals = literals
self.literal_index = literal_index
self.binding = binding
self.previous = context
self.theory = theory # a theory object, not just its name
self.depth = depth
def __str__(self):
return (
"TopDownContext<literals={}, literal_index={}, binding={}, "
"previous={}, theory={}, depth={}>").format(
"[" + ",".join([str(x) for x in self.literals]) + "]",
str(self.literal_index), str(self.binding),
str(self.previous), self.theory.name, str(self.depth))
class TopDownResult(object):
"""Stores a single result for top-down-evaluation."""
def __init__(self, binding, support):
self.binding = binding
self.support = support # for abduction
def __str__(self):
return "TopDownResult(binding={}, support={})".format(
unify.binding_str(self.binding), utility.iterstr(self.support))
class TopDownCaller(object):
"""Struct for info about the original caller of top-down evaluation.
VARIABLES is the list of variables (from the initial query)
that we want bindings for.
BINDING is the initially empty BiUnifier.
FIND_ALL controls whether just the first or all answers are found.
ANSWERS is populated by top-down evaluation: it is the list of
VARIABLES instances that the search process proved true.
"""
def __init__(self, variables, binding, theory,
find_all=True, save=None):
# an iterable of variable objects
self.variables = variables
# a bi-unifier
self.binding = binding
# the top-level theory (for included theories)
self.theory = theory
# a boolean
self.find_all = find_all
# The results of top-down-eval: a list of TopDownResults
self.results = []
# a Function that takes a compile.Literal and a unifier and
# returns T iff that literal under the unifier should be
# saved as part of an abductive explanation
self.save = save
# A variable used to store explanations as they are constructed
self.support = []
def __str__(self):
return (
"TopDownCaller<variables={}, binding={}, find_all={}, "
"results={}, save={}, support={}>".format(
utility.iterstr(self.variables), str(self.binding),
str(self.find_all), utility.iterstr(self.results),
repr(self.save), utility.iterstr(self.support)))
#########################################
# External interface
def __init__(self, name=None, abbr=None, theories=None, schema=None,
desc=None, owner=None):
super(TopDownTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
self.includes = []
def select(self, query, find_all=True):
"""Return list of instances of QUERY that are true.
If FIND_ALL is False, the return list has at most 1 element.
"""
assert compile.is_datalog(query), "Query must be atom/rule"
if compile.is_atom(query):
literals = [query]
else:
literals = query.body
# Because our output is instances of QUERY, need all the variables
# in QUERY.
bindings = self.top_down_evaluation(query.variables(), literals,
find_all=find_all)
# LOG.debug("Top_down_evaluation returned: %s", bindings)
if len(bindings) > 0:
self.log(query.tablename(), "Found answer %s",
"[" + ",".join([str(query.plug(x))
for x in bindings]) + "]")
return [query.plug(x) for x in bindings]
def explain(self, query, tablenames, find_all=True):
"""Return list of instances of QUERY that are true.
Same as select except stores instances of TABLENAMES
that participated in each proof. If QUERY is an atom,
returns list of rules with QUERY in the head and
the stored instances of TABLENAMES in the body; if QUERY is
a rule, the rules returned have QUERY's head in the head
and the stored instances of TABLENAMES in the body.
"""
# This is different than abduction because instead of replacing
# a proof attempt with saving a literal, we want to save a literal
# after a successful proof attempt.
assert False, "Not yet implemented"
def abduce(self, query, tablenames, find_all=True):
"""Compute additional literals.
Computes additional literals that if true would make
(some instance of) QUERY true. Returns a list of rules
where the head represents an instance of the QUERY and
the body is the collection of literals that must be true
in order to make that instance true. If QUERY is a rule,
each result is an instance of the head of that rule, and
the computed literals if true make the body of that rule
(and hence the head) true. If FIND_ALL is False, the
return list has at most one element.
Limitation: every negative literal relevant to a proof of
QUERY is treated as unconditionally true, i.e. no literals are
saved when proving that a negative literal is true.
"""
assert compile.is_datalog(query), "abduce requires a formula"
if compile.is_atom(query):
literals = [query]
output = query
else:
literals = query.body
output = query.head
# We need all the variables we will be using in the output, which
# here is just the head of QUERY (or QUERY itself if it is an atom)
abductions = self.top_down_abduction(
output.variables(), literals, find_all=find_all,
save=lambda lit, binding: lit.tablename() in tablenames)
results = [compile.Rule(output.plug(abd.binding), abd.support)
for abd in abductions]
self.log(query.tablename(), "abduction result:")
self.log(query.tablename(), "\n".join([str(x) for x in results]))
return results
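# Illustrative sketch (same assumptions as the select() example above):
# abduction asks which instances of the named tables would complete a
# proof, rather than whether a proof already exists.
#
#   th = nonrecursive.NonrecursiveRuleTheory()
#   th.insert(compile.parse1('p(x) :- q(x), r(x)'))
#   th.insert(compile.parse1('q(1)'))
#   th.abduce(compile.parse1('p(x)'), ['r'])
#   # -> [p(1) :- r(1)]   q(1) is already provable; r(1) is the saved
#   #                     literal that would finish the proof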
def consequences(self, filter=None, table_theories=None):
"""Return all the true instances of any table in this theory."""
# find all table, theory pairs defined in this theory
if table_theories is None:
table_theories = set()
for key in self.rules.keys():
table_theories |= set([(rule.head.table.table,
rule.head.table.service)
for rule in self.rules.get_rules(key)])
results = set()
# create queries: need table names and arities
# TODO(thinrichs): arity computation will need to ignore
# modals once we start using insert[p(x)] instead of p+(x)
for (table, theory) in table_theories:
if filter is None or filter(table):
tablename = compile.Tablename(table, theory)
arity = self.arity(tablename)
vs = []
for i in range(0, arity):
vs.append("x" + str(i))
vs = [compile.Variable(var) for var in vs]
tablename = table
if theory:
tablename = theory + ":" + tablename
query = compile.Literal(tablename, vs)
results |= set(self.select(query))
return results
def top_down_evaluation(self, variables, literals,
binding=None, find_all=True):
"""Compute bindings.
Compute all bindings of VARIABLES that make LITERALS
true according to the theory (after applying the unifier BINDING).
If FIND_ALL is False, stops after finding one such binding.
Returns a list of dictionary bindings.
"""
# LOG.debug("CALL: top_down_evaluation(vars=%s, literals=%s, "
# "binding=%s)",
# ";".join(str(x) for x in variables),
# ";".join(str(x) for x in literals),
# str(binding))
results = self.top_down_abduction(variables, literals,
binding=binding, find_all=find_all,
save=None)
# LOG.debug("EXIT: top_down_evaluation(vars=%s, literals=%s, "
# "binding=%s) returned %s",
# iterstr(variables), iterstr(literals),
# str(binding), iterstr(results))
return [x.binding for x in results]
def top_down_abduction(self, variables, literals, binding=None,
find_all=True, save=None):
"""Compute bindings.
Compute all bindings of VARIABLES that make LITERALS
true according to the theory (after applying the
unifier BINDING), if we add some number of additional
literals. Note: will not save any literals that are
needed to prove a negated literal since the results
would not make sense. Returns a list of TopDownResults.
"""
if binding is None:
binding = self.new_bi_unifier()
caller = self.TopDownCaller(variables, binding, self,
find_all=find_all, save=save)
if len(literals) == 0:
self._top_down_finish(None, caller)
else:
# Note: must use same unifier in CALLER and CONTEXT
context = self.TopDownContext(literals, 0, binding, None, self, 0)
self._top_down_eval(context, caller)
return list(set(caller.results))
#########################################
# Internal implementation
def _top_down_eval(self, context, caller):
"""Compute instances.
Compute all instances of LITERALS (from LITERAL_INDEX and above)
that are true according to the theory (after applying the
unifier BINDING to LITERALS).
Returns True if done searching and False otherwise.
"""
# Note: rules must be nonrecursive; this style of algorithm does not
# terminate on recursive rules.
lit = context.literals[context.literal_index]
# LOG.debug("CALL: %s._top_down_eval(%s, %s)",
# self.name, context, caller)
# abduction
if caller.save is not None and caller.save(lit, context.binding):
self._print_call(lit, context.binding, context.depth)
# save lit and binding--binding may not be fully flushed out
# when we save (or ever for that matter)
caller.support.append((lit, context.binding))
self._print_save(lit, context.binding, context.depth)
success = self._top_down_finish(context, caller)
caller.support.pop() # pop in either case
if success:
return True
else:
self._print_fail(lit, context.binding, context.depth)
return False
# regular processing
if lit.is_negated():
# LOG.debug("%s is negated", lit)
# recurse on the negation of the literal
plugged = lit.plug(context.binding)
assert plugged.is_ground(), (
"Negated literal not ground when evaluated: " +
str(plugged))
self._print_call(lit, context.binding, context.depth)
new_context = self.TopDownContext(
[lit.complement()], 0, context.binding, None,
self, context.depth + 1)
new_caller = self.TopDownCaller(caller.variables, caller.binding,
caller.theory, find_all=False,
save=None)
# Make sure new_caller has find_all=False, so we stop as soon
# as we can.
# Ensure save=None so that abduction does not save anything.
# Saving while performing NAF makes no sense.
self._top_down_eval(new_context, new_caller)
if len(new_caller.results) > 0:
self._print_fail(lit, context.binding, context.depth)
return False # not done searching, b/c we failed
else:
# don't need bindings b/c LIT must be ground
return self._top_down_finish(context, caller, redo=False)
elif lit.tablename() == 'true':
self._print_call(lit, context.binding, context.depth)
return self._top_down_finish(context, caller, redo=False)
elif lit.tablename() == 'false':
self._print_fail(lit, context.binding, context.depth)
return False
elif lit.is_builtin():
return self._top_down_builtin(context, caller)
elif (self.theories is not None and
lit.table.service is not None and
lit.table.modal is None and # not a modal
lit.table.service != self.name and
not lit.is_update()): # not a pseudo-modal
return self._top_down_module(context, caller)
else:
return self._top_down_truth(context, caller)
def _top_down_builtin(self, context, caller):
"""Evaluate a table with a builtin semantics.
Returns True if done searching and False otherwise.
"""
lit = context.literals[context.literal_index]
self._print_call(lit, context.binding, context.depth)
built = builtin.builtin_registry.builtin(lit.table)
# copy arguments into variables
# PLUGGED is an instance of compile.Literal
plugged = lit.plug(context.binding)
# PLUGGED.arguments is a list of compile.Term
# create args for function
args = []
for i in range(0, built.num_inputs):
# save builtins with unbound vars during evaluation
if not plugged.arguments[i].is_object() and caller.save:
# save lit and binding--binding may not be fully flushed out
# when we save (or ever for that matter)
caller.support.append((lit, context.binding))
self._print_save(lit, context.binding, context.depth)
success = self._top_down_finish(context, caller)
caller.support.pop() # pop in either case
if success:
return True
else:
self._print_fail(lit, context.binding, context.depth)
return False
assert plugged.arguments[i].is_object(), (
("Builtins must be evaluated only after their "
"inputs are ground: {} with num-inputs {}".format(
str(plugged), built.num_inputs)))
args.append(plugged.arguments[i].name)
# evaluate builtin: must return number, string, or iterable
# of numbers/strings
try:
result = built.code(*args)
except Exception as e:
errmsg = "Error in builtin: " + str(e)
self._print_note(lit, context.binding, context.depth, errmsg)
self._print_fail(lit, context.binding, context.depth)
return False
# self._print_note(lit, context.binding, context.depth,
# "Result: " + str(result))
success = None
undo = []
if built.num_outputs > 0:
# with return values, local success means we can bind
# the results to the return value arguments
if (isinstance(result,
(six.integer_types, float, six.string_types))):
result = [result]
# Turn result into normal objects
result = [compile.Term.create_from_python(x) for x in result]
# adjust binding list
unifier = self.new_bi_unifier()
undo = unify.bi_unify_lists(result,
unifier,
lit.arguments[built.num_inputs:],
context.binding)
success = undo is not None
else:
# without return values, local success means
# result was True according to Python
success = bool(result)
if not success:
self._print_fail(lit, context.binding, context.depth)
unify.undo_all(undo)
return False
# otherwise, try to finish proof. If success, return True
if self._top_down_finish(context, caller, redo=False):
unify.undo_all(undo)
return True
# if fail, return False.
else:
unify.undo_all(undo)
self._print_fail(lit, context.binding, context.depth)
return False
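# For intuition (illustrative Datalog only, assuming a 'plus' builtin
# with two inputs and one output, as in the standard registry):
#     p(z) :- q(x), plus(x, 1, z)
# Evaluating the body left to right grounds x via q(x) before plus
# runs; the computed sum is then bi-unified against the output
# argument z. With q(2) true, the proof binds z = 3 and derives p(3).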
def _top_down_module(self, context, caller):
"""Move to another theory and continue evaluation."""
# LOG.debug("%s._top_down_module(%s)", self.name, context)
lit = context.literals[context.literal_index]
if lit.table.service not in self.theories:
self._print_call(lit, context.binding, context.depth)
errmsg = "No such policy: %s" % lit.table.service
self._print_note(lit, context.binding, context.depth, errmsg)
self._print_fail(lit, context.binding, context.depth)
return False
return self.theories[lit.table.service]._top_down_eval(context, caller)
def _top_down_truth(self, context, caller):
"""Top down evaluation.
Do top-down evaluation over the root theory at which
the call was made and all the included theories.
"""
# return self._top_down_th(context, caller)
return self._top_down_includes(context, caller)
def _top_down_includes(self, context, caller):
"""Top-down evaluation of all the theories included in this theory."""
is_true = self._top_down_th(context, caller)
if is_true and not caller.find_all:
return True
for th in self.includes:
is_true = th._top_down_includes(context, caller)
if is_true and not caller.find_all:
return True
return False
def _top_down_th(self, context, caller):
"""Top-down evaluation for the rules in self."""
# LOG.debug("%s._top_down_th(%s)", self.name, context)
lit = context.literals[context.literal_index]
self._print_call(lit, context.binding, context.depth)
for rule in self.head_index(lit.table.table,
lit.plug(context.binding)):
unifier = self.new_bi_unifier()
self._print_note(lit, context.binding, context.depth,
"Trying %s" % rule)
# Prefer to bind vars in rule head
undo = self.bi_unify(self.head(rule), unifier, lit,
context.binding, self.name)
if undo is None: # no unifier
continue
if len(self.body(rule)) == 0:
if self._top_down_finish(context, caller):
unify.undo_all(undo)
if not caller.find_all:
return True
else:
unify.undo_all(undo)
else:
new_context = self.TopDownContext(
rule.body, 0, unifier, context, self, context.depth + 1)
if self._top_down_eval(new_context, caller):
unify.undo_all(undo)
if not caller.find_all:
return True
else:
unify.undo_all(undo)
self._print_fail(lit, context.binding, context.depth)
return False
def _top_down_finish(self, context, caller, redo=True):
"""Helper function.
This is called once top_down successfully completes
a proof for a literal. Handles (i) continuing search
for those literals still requiring proofs within CONTEXT,
(ii) adding solutions to CALLER once all needed proofs have
been found, and (iii) printing out Redo/Exit during tracing.
Returns True if the search is finished and False otherwise.
Temporary, transparent modification of CONTEXT.
"""
if context is None:
# Found an answer; now store it
if caller is not None:
# flatten bindings and store before we undo
# copy caller.support and store before we undo
binding = {}
for var in caller.variables:
binding[var] = caller.binding.apply(var)
result = self.TopDownResult(
binding, [support[0].plug(support[1], caller=caller)
for support in caller.support])
caller.results.append(result)
return True
else:
self._print_exit(context.literals[context.literal_index],
context.binding, context.depth)
# continue the search
if context.literal_index < len(context.literals) - 1:
context.literal_index += 1
finished = context.theory._top_down_eval(context, caller)
context.literal_index -= 1 # in case answer is False
else:
finished = self._top_down_finish(context.previous, caller)
# return search result (after printing a Redo if failure)
if redo and (not finished or caller.find_all):
self._print_redo(context.literals[context.literal_index],
context.binding, context.depth)
return finished
def _print_call(self, literal, binding, depth):
msg = "{}Call: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
def _print_exit(self, literal, binding, depth):
msg = "{}Exit: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
def _print_save(self, literal, binding, depth):
msg = "{}Save: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
def _print_fail(self, literal, binding, depth):
msg = "{}Fail: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
return False
def _print_redo(self, literal, binding, depth):
msg = "{}Redo: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
return False
def _print_note(self, literal, binding, depth, msg):
self.log(literal.tablename(), "{}Note: {}".format("| " * depth,
msg))
#########################################
# Routines for specialization
@classmethod
def new_bi_unifier(cls, dictionary=None):
"""Return a unifier compatible with unify.bi_unify."""
return unify.BiUnifier(dictionary=dictionary)
# lambda (index):
# compile.Variable("x" + str(index)), dictionary=dictionary)
def defined_tablenames(self):
"""Returns list of table names defined in/written to this theory."""
raise NotImplementedError
def head_index(self, table, match_literal=None):
"""Return head index.
This routine must return all the formulas pertinent for
top-down evaluation when a literal with TABLE is at the top
of the stack.
"""
raise NotImplementedError
def head(self, formula):
"""Given the output from head_index(), return the formula head.
Given a FORMULA, return the thing to unify against.
Usually, FORMULA is a compile.Rule, but it could be anything
returned by HEAD_INDEX.
"""
raise NotImplementedError
def body(self, formula):
"""Return formula body.
Given a FORMULA, return a list of things to push onto the
top-down eval stack.
"""
raise NotImplementedError
def bi_unify(self, head, unifier1, body_element, unifier2, theoryname):
"""Unify atoms.
Given something returned by self.head HEAD and an element in
the return of self.body BODY_ELEMENT, modify UNIFIER1 and UNIFIER2
so that HEAD.plug(UNIFIER1) == BODY_ELEMENT.plug(UNIFIER2).
Returns changes that can be undone via unify.undo_all.
THEORYNAME is the name of the theory for HEAD.
"""
return unify.bi_unify_atoms(head, unifier1, body_element, unifier2,
theoryname)
#########################################
# Routines for unknowns
def instances(self, rule, possibilities=None):
results = set([])
possibilities = possibilities or []
self._instances(rule, 0, self.new_bi_unifier(), results, possibilities)
return results
def _instances(self, rule, index, binding, results, possibilities):
"""Return all instances of the given RULE without evaluating builtins.
Assumes self.head_index returns rules with empty bodies.
"""
if index >= len(rule.body):
results.add(rule.plug(binding))
return
lit = rule.body[index]
self._print_call(lit, binding, 0)
# if already ground or a builtin, go to the next literal
if (lit.is_ground() or lit.is_builtin()):
self._instances(rule, index + 1, binding, results, possibilities)
return
# Otherwise, find instances in this theory
if lit.tablename() in possibilities:
options = possibilities[lit.tablename()]
else:
options = self.head_index(lit.tablename(), lit.plug(binding))
for data in options:
self._print_note(lit, binding, 0, "Trying: %s" % repr(data))
undo = unify.match_atoms(lit, binding, self.head(data))
if undo is None: # no unifier
continue
self._print_exit(lit, binding, 0)
# recurse on the rest of the literals in the rule
self._instances(rule, index + 1, binding, results, possibilities)
if undo is not None:
unify.undo_all(undo)
self._print_redo(lit, binding, 0)
self._print_fail(lit, binding, 0)


@ -1,526 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from oslo_utils import uuidutils
from six.moves import range
from congress.datalog import compile
LOG = logging.getLogger(__name__)
# A unifier designed for the bi_unify_atoms routine
# which is used by a backward-chaining style datalog implementation.
# Main goal: minimize memory allocation by manipulating only unifiers
# to keep variable namespaces separate.
class BiUnifier(object):
"""A unifier designed for bi_unify_atoms.
Recursive datastructure. When adding a binding variable u to
variable v, keeps a reference to the unifier for v.
A variable's identity is its name plus its unification context.
This enables a variable with the same name but from two
different atoms to be treated as different variables.
"""
class Value(object):
def __init__(self, value, unifier):
# actual value
self.value = value
# unifier context
self.unifier = unifier
def __str__(self):
return "<{},{}>".format(
str(self.value), repr(self.unifier))
def recur_str(self):
if self.unifier is None:
recur = str(self.unifier)
else:
recur = self.unifier.recur_str()
return "<{},{}>".format(
str(self.value), recur)
def __eq__(self, other):
return self.value == other.value and self.unifier == other.unifier
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "Value(value={}, unifier={})".format(
repr(self.value), repr(self.unifier))
class Undo(object):
def __init__(self, var, unifier):
self.var = var
self.unifier = unifier
def __str__(self):
return "<var: {}, unifier: {}>".format(
str(self.var), str(self.unifier))
def __eq__(self, other):
return self.var == other.var and self.unifier == other.unifier
def __ne__(self, other):
return not self.__eq__(other)
def __init__(self, dictionary=None):
# each value is a Value
self.contents = {}
if dictionary is not None:
for var, value in dictionary.items():
self.add(var, value, None)
def add(self, var, value, unifier):
value = self.Value(value, unifier)
# LOG.debug("Adding %s -> %s to unifier %s", var, value, self)
self.contents[var] = value
return self.Undo(var, self)
def delete(self, var):
if var in self.contents:
del self.contents[var]
def value(self, term):
if term in self.contents:
return self.contents[term]
else:
return None
def apply(self, term, caller=None):
return self.apply_full(term, caller=caller)[0]
def apply_full(self, term, caller=None):
"""Recursively apply unifiers to TERM.
Return (i) the final value and (ii) the final unifier.
If the final value is a variable, instantiate it as a fresh
variable unless it is one of CALLER's top-level variables.
"""
# LOG.debug("apply_full(%s, %s)", term, self)
val = self.value(term)
if val is None:
# If result is a variable and this variable is not one of those
# in the top-most calling context, then create a new variable
# name based on this Binding.
# This process avoids improper variable capture.
# Outputting the same variable with the same binding twice will
# generate the same output, but outputting the same variable with
# different bindings will generate different outputs.
# Note that this variable name mangling
# is not done for the top-most variables,
# which makes output a bit easier to read.
# Unfortunately, the process is non-deterministic from one run
# to the next, which makes testing difficult.
if (caller is not None and term.is_variable() and
not (term in caller.variables and caller.binding is self)):
return (compile.Variable(term.name + str(id(self))), self)
else:
return (term, self)
elif val.unifier is None or not val.value.is_variable():
return (val.value, val.unifier)
else:
return val.unifier.apply_full(val.value)
def is_one_to_one(self):
image = set() # set of all things mapped TO
for x in self.contents:
val = self.apply(x)
if val in image:
return False
image.add(val)
return True
def __str__(self):
s = repr(self)
s += "={"
s += ",".join(["{}:{}".format(str(var), str(val))
for var, val in self.contents.items()])
s += "}"
return s
def recur_str(self):
s = repr(self)
s += "={"
s += ",".join(["{}:{}".format(var, val.recur_str())
for var, val in self.contents.items()])
s += "}"
return s
def __eq__(self, other):
return self.contents == other.contents
def __ne__(self, other):
return not self.__eq__(other)
def binding_str(binding):
"""Handles string conversion of either dictionary or Unifier."""
if isinstance(binding, dict):
s = ",".join(["{}: {}".format(str(var), str(val))
for var, val in binding.items()])
return '{' + s + '}'
else:
return str(binding)
def undo_all(changes):
"""Undo all the changes in CHANGES."""
# LOG.debug("undo_all(%s)",
# "[" + ",".join([str(x) for x in changes]) + "]")
if changes is None:
return
for change in changes:
if change.unifier is not None:
change.unifier.delete(change.var)
def same_schema(atom1, atom2, theoryname=None):
"""Return True if ATOM1 and ATOM2 have the same schema.
THEORYNAME is the default theory name.
"""
if not atom1.table.same(atom2.table, theoryname):
return False
if len(atom1.arguments) != len(atom2.arguments):
return False
return True
def bi_unify_atoms(atom1, unifier1, atom2, unifier2, theoryname=None):
"""Unify atoms.
If possible, modify BiUnifier UNIFIER1 and BiUnifier UNIFIER2 so that
ATOM1.plug(UNIFIER1) == ATOM2.plug(UNIFIER2).
Returns None if not possible; otherwise, returns
a list of changes to unifiers that can be undone
with undo_all. May alter unifiers besides UNIFIER1 and UNIFIER2.
THEORYNAME is the default theory name.
"""
# logging.debug("Unifying %s under %s and %s under %s",
# atom1, unifier1, atom2, unifier2)
if not same_schema(atom1, atom2, theoryname):
return None
return bi_unify_lists(atom1.arguments, unifier1,
atom2.arguments, unifier2)
def bi_unify_lists(iter1, unifier1, iter2, unifier2):
"""Unify lists.
If possible, modify BiUnifier UNIFIER1 and BiUnifier UNIFIER2 such that
iter1.plug(UNIFIER1) == iter2.plug(UNIFIER2), assuming PLUG is defined
over lists. Returns None if not possible; otherwise, returns
a list of changes to unifiers that can be undone
with undo_all. May alter unifiers besides UNIFIER1 and UNIFIER2.
"""
if len(iter1) != len(iter2):
return None
changes = []
for i in range(0, len(iter1)):
assert isinstance(iter1[i], compile.Term)
assert isinstance(iter2[i], compile.Term)
# grab values for args
val1, binding1 = unifier1.apply_full(iter1[i])
val2, binding2 = unifier2.apply_full(iter2[i])
# logging.debug("val(%s)=%s at %s, val(%s)=%s at %s",
# atom1.arguments[i], val1, binding1,
# atom2.arguments[i], val2, binding2)
# assign variable (if necessary) or fail
if val1.is_variable() and val2.is_variable():
# logging.debug("1 and 2 are variables")
if bi_var_equal(val1, binding1, val2, binding2):
continue
else:
changes.append(binding1.add(val1, val2, binding2))
elif val1.is_variable() and not val2.is_variable():
# logging.debug("Left arg is a variable")
changes.append(binding1.add(val1, val2, binding2))
elif not val1.is_variable() and val2.is_variable():
# logging.debug("Right arg is a variable")
changes.append(binding2.add(val2, val1, binding1))
elif val1 == val2:
continue
else:
# logging.debug("Unify failure: undoing")
undo_all(changes)
return None
return changes
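# A minimal sketch of why two unifiers are used, written against this
# module (it assumes compile.parse1, which parses a single formula):
def _bi_unify_demo():
    a1 = compile.parse1('p(x, y)')   # e.g. a query atom
    a2 = compile.parse1('p(1, x)')   # e.g. a rule head; its 'x' is distinct
    u1, u2 = BiUnifier(), BiUnifier()
    changes = bi_unify_atoms(a1, u1, a2, u2)
    assert changes is not None
    # a1.plug(u1) and a2.plug(u2) now denote the same atom p(1, _),
    # even though both source atoms reuse the variable name 'x'
    undo_all(changes)                # roll both unifiers back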
# def plug(atom, binding, withtable=False):
# """ Returns a tuple representing the arguments to ATOM after having
# applied BINDING to the variables in ATOM. """
# if withtable is True:
# result = [atom.table]
# else:
# result = []
# for i in range(0, len(atom.arguments)):
# if (atom.arguments[i].is_variable() and
# atom.arguments[i].name in binding):
# result.append(binding[atom.arguments[i].name])
# else:
# result.append(atom.arguments[i].name)
# return tuple(result)
def match_tuple_atom(tupl, atom):
"""Get bindings.
Returns a binding dictionary that when applied to ATOM's arguments
gives exactly TUPLE, or returns None if no such binding exists.
"""
if len(tupl) != len(atom.arguments):
return None
binding = {}
for i in range(0, len(tupl)):
arg = atom.arguments[i]
if arg.is_variable():
if arg.name in binding:
oldval = binding[arg.name]
if oldval != tupl[i]:
return None
else:
binding[arg.name] = tupl[i]
return binding
def match_atoms(atom1, unifier, atom2):
"""Modify UNIFIER so that ATOM1.plug(UNIFIER) == ATOM2.
ATOM2 is assumed to be ground.
UNIFIER is assumed to be a BiUnifier.
Return the changes to UNIFIER or None if matching is impossible.
Matching is a special case of instance-checking since ATOM2
in this case must be ground, whereas there is no such limitation
for instance-checking. This makes the code significantly simpler
and faster.
"""
if not same_schema(atom1, atom2):
return None
changes = []
for i in range(0, len(atom1.arguments)):
val, binding = unifier.apply_full(atom1.arguments[i])
# LOG.debug("val(%s)=%s at %s; comparing to object %s",
# atom1.arguments[i], val, binding, atom2.arguments[i])
if val.is_variable():
changes.append(binding.add(val, atom2.arguments[i], None))
else:
if val.name != atom2.arguments[i].name:
undo_all(changes)
return None
return changes
def bi_var_equal(var1, unifier1, var2, unifier2):
"""Check var equality.
Returns True iff variable VAR1 in unifier UNIFIER1 is the same
variable as VAR2 in UNIFIER2.
"""
return (var1 == var2 and unifier1 is unifier2)
def same(formula1, formula2):
"""Check formulas are the same.
Determine if FORMULA1 and FORMULA2 are the same up to a variable
renaming. Treats FORMULA1 and FORMULA2 as having different
variable namespaces. Returns None or the pair of unifiers.
"""
if isinstance(formula1, compile.Literal):
if isinstance(formula2, compile.Rule):
return None
elif formula1.is_negated() != formula2.is_negated():
return None
else:
u1 = BiUnifier()
u2 = BiUnifier()
if same_atoms(formula1, u1, formula2, u2, set()) is not None:
return (u1, u2)
return None
elif isinstance(formula1, compile.Rule):
if isinstance(formula2, compile.Literal):
return None
else:
if len(formula1.body) != len(formula2.body):
return None
u1 = BiUnifier()
u2 = BiUnifier()
bound2 = set()
result = same_atoms(formula1.head, u1, formula2.head, u2, bound2)
if result is None:
return None
for i in range(0, len(formula1.body)):
result = same_atoms(
formula1.body[i], u1, formula2.body[i], u2, bound2)
if result is None:
return None
return (u1, u2)
else:
return None
def same_atoms(atom1, unifier1, atom2, unifier2, bound2):
"""Check whether atoms are identical.
Modifies UNIFIER1 and UNIFIER2 to demonstrate
that ATOM1 and ATOM2 are identical up to a variable renaming.
Returns None if not possible or the list of changes if it is.
BOUND2 is the set of variables already bound in UNIFIER2
"""
def die():
undo_all(changes)
return None
LOG.debug("same_atoms(%s, %s)", atom1, atom2)
if not same_schema(atom1, atom2):
return None
changes = []
# LOG.debug("same_atoms entering loop")
for i in range(0, len(atom1.arguments)):
val1, binding1 = unifier1.apply_full(atom1.arguments[i])
val2, binding2 = unifier2.apply_full(atom2.arguments[i])
# LOG.debug("val1: %s at %s; val2: %s at %s",
# val1, binding1, val2, binding2)
if val1.is_variable() and val2.is_variable():
if bi_var_equal(val1, binding1, val2, binding2):
continue
# if we already bound either of these variables, not SAME
if not bi_var_equal(val1, binding1, atom1.arguments[i], unifier1):
# LOG.debug("same_atoms: arg1 already bound")
return die()
if not bi_var_equal(val2, binding2, atom2.arguments[i], unifier2):
# LOG.debug("same_atoms: arg2 already bound")
return die()
if val2 in bound2:
# LOG.debug("same_atoms: binding is not 1-1")
return die()
changes.append(binding1.add(val1, val2, binding2))
bound2.add(val2)
elif val1.is_variable():
# LOG.debug("val1 is a variable")
return die()
elif val2.is_variable():
# LOG.debug("val2 is a variable")
return die()
elif val1 != val2:
# unmatching object constants (the variable cases were handled above)
# LOG.debug("val1 != val2")
return die()
return changes
def instance(formula1, formula2):
"""Determine if FORMULA1 is an instance of FORMULA2.
That is, whether there is some binding that, when applied to
FORMULA2, yields FORMULA1. Returns None or a unifier.
"""
LOG.debug("instance(%s, %s)", formula1, formula2)
if isinstance(formula1, compile.Literal):
if isinstance(formula2, compile.Rule):
return None
elif formula1.is_negated() != formula2.is_negated():
return None
else:
u = BiUnifier()
if instance_atoms(formula1, formula2, u) is not None:
return u
return None
elif isinstance(formula1, compile.Rule):
if isinstance(formula2, compile.Literal):
return None
else:
if len(formula1.body) != len(formula2.body):
return None
u = BiUnifier()
result = instance_atoms(formula1.head, formula2.head, u)
if result is None:
return None
for i in range(0, len(formula1.body)):
result = instance_atoms(
formula1.body[i], formula2.body[i], u)
if result is None:
return None
return u
else:
return None
def instance_atoms(atom1, atom2, unifier2):
"""Check atoms equality by adding bindings.
Adds bindings to UNIFIER2 to make ATOM1 equal to ATOM2
after applying UNIFIER2 to ATOM2 only. Returns None if
no such bindings make equality hold.
"""
def die():
undo_all(changes)
return None
LOG.debug("instance_atoms(%s, %s)", atom1, atom2)
if not same_schema(atom1, atom2):
return None
unifier1 = BiUnifier()
changes = []
for i in range(0, len(atom1.arguments)):
val1, binding1 = unifier1.apply_full(atom1.arguments[i])
val2, binding2 = unifier2.apply_full(atom2.arguments[i])
# LOG.debug("val1: %s at %s; val2: %s at %s",
# val1, binding1, val2, binding2)
if val1.is_variable() and val2.is_variable():
if bi_var_equal(val1, binding1, val2, binding2):
continue
# if we already bound either of these variables, not INSTANCE
if not bi_var_equal(val1, binding1, atom1.arguments[i], unifier1):
# LOG.debug("instance_atoms: arg1 already bound")
return die()
if not bi_var_equal(val2, binding2, atom2.arguments[i], unifier2):
# LOG.debug("instance_atoms: arg2 already bound")
return die()
# add binding to UNIFIER2
changes.append(binding2.add(val2, val1, binding1))
elif val1.is_variable():
return die()
elif val2.is_variable():
changes.append(binding2.add(val2, val1, binding1))
# LOG.debug("var2 is a variable")
elif val1 != val2:
# unmatching object constants
# LOG.debug("val1 != val2")
return die()
return changes
def skolemize(formulas):
"""Instantiate all variables consistently with UUIDs in the formulas."""
# create binding then plug it in.
variables = set()
for formula in formulas:
variables |= formula.variables()
binding = {}
for var in variables:
binding[var] = compile.Term.create_from_python(
uuidutils.generate_uuid())
return [formula.plug(binding) for formula in formulas]
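# A small sketch of skolemize in action (same parse1 assumption as the
# demo above): every variable is replaced by a fresh UUID constant,
# consistently across all the given formulas.
def _skolemize_demo():
    rule = compile.parse1('p(x) :- q(x, y)')
    (ground_rule,) = skolemize([rule])
    # x and y are now UUID string constants, and both occurrences of x
    # received the same UUID, e.g.
    #   p("6f1a...") :- q("6f1a...", "83c2...")
    return ground_rule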


@ -1,536 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import collections
from functools import reduce
class Graph(object):
"""A standard graph data structure.
With routines applicable to analysis of policy.
"""
class dfs_data(object):
"""Data for each node in graph during depth-first-search."""
def __init__(self, begin=None, end=None):
self.begin = begin
self.end = end
def __str__(self):
return "<begin: %s, end: %s>" % (self.begin, self.end)
class edge_data(object):
"""Data for each edge in graph."""
def __init__(self, node=None, label=None):
self.node = node
self.label = label
def __str__(self):
return "<Label:%s, Node:%s>" % (self.label, self.node)
def __eq__(self, other):
return self.node == other.node and self.label == other.label
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(str(self))
def __init__(self, graph=None):
self.edges = {} # dict from node to list of nodes
self.nodes = {} # dict from node to info about node
self._cycles = None
def __or__(self, other):
# do this the simple way so that subclasses get this code for free
g = self.__class__()
for node in self.nodes:
g.add_node(node)
for node in other.nodes:
g.add_node(node)
for name in self.edges:
for edge in self.edges[name]:
g.add_edge(name, edge.node, label=edge.label)
for name in other.edges:
for edge in other.edges[name]:
g.add_edge(name, edge.node, label=edge.label)
return g
def __ior__(self, other):
if len(other) == 0:
# no changes if other is empty
return self
self._cycles = None
for name in other.nodes:
self.add_node(name)
for name in other.edges:
for edge in other.edges[name]:
self.add_edge(name, edge.node, label=edge.label)
return self
def __len__(self):
return (len(self.nodes) +
reduce(lambda x, y: x+y,
(len(x) for x in self.edges.values()),
0))
def add_node(self, val):
"""Add node VAL to graph."""
if val not in self.nodes: # preserve old node info
self.nodes[val] = None
return True
return False
def delete_node(self, val):
"""Delete node VAL from graph and all edges."""
try:
del self.nodes[val]
del self.edges[val]
except KeyError:
assert val not in self.edges
def add_edge(self, val1, val2, label=None):
"""Add edge from VAL1 to VAL2 with label LABEL to graph.
Also adds the nodes.
"""
self._cycles = None # so that has_cycles knows it needs to rerun
self.add_node(val1)
self.add_node(val2)
val = self.edge_data(node=val2, label=label)
try:
self.edges[val1].add(val)
except KeyError:
self.edges[val1] = set([val])
def delete_edge(self, val1, val2, label=None):
"""Delete edge from VAL1 to VAL2 with label LABEL.
LABEL must match (even if None). Does not delete nodes.
"""
try:
edge = self.edge_data(node=val2, label=label)
self.edges[val1].remove(edge)
except KeyError:
# KeyError either because val1 or edge
return
self._cycles = None
def node_in(self, val):
return val in self.nodes
def edge_in(self, val1, val2, label=None):
return (val1 in self.edges and
self.edge_data(val2, label) in self.edges[val1])
def reset_nodes(self):
for node in self.nodes:
self.nodes[node] = None
def depth_first_search(self, roots=None):
"""Run depth first search on the graph.
Also modifies self.nodes, self.counter, and self._cycles.
Use all nodes if the @roots param is None or unspecified.
"""
self.reset()
if roots is None:
roots = self.nodes
for node in roots:
if node in self.nodes and self.nodes[node].begin is None:
self.dfs(node)
def _enumerate_cycles(self):
self.reset()
for node in self.nodes.keys():
self._reset_dfs_data()
self.dfs(node, target=node)
for path in self.__target_paths:
self._cycles.add(Cycle(path))
def reset(self, roots=None):
"""Return nodes to pristine state."""
self._reset_dfs_data()
roots = roots or self.nodes
self._cycles = set()
def _reset_dfs_data(self):
for node in self.nodes.keys():
self.nodes[node] = self.dfs_data()
self.counter = 0
self.__target_paths = []
def dfs(self, node, target=None, dfs_stack=None):
"""DFS implementation.
Assumes data structures have been properly prepared.
Sets begin/end times on nodes.
Adds paths from node to target to self.__target_paths
"""
if dfs_stack is None:
dfs_stack = []
dfs_stack.append(node)
if (target is not None and node == target and
len(dfs_stack) > 1): # non-trivial path to target found
self.__target_paths.append(list(dfs_stack)) # record
if self.nodes[node].begin is None:
self.nodes[node].begin = self.next_counter()
if node in self.edges:
for edge in self.edges[node]:
self.dfs(edge.node, target=target, dfs_stack=dfs_stack)
self.nodes[node].end = self.next_counter()
dfs_stack.pop()
def stratification(self, labels):
"""Return the stratification result.
Return mapping of node name to integer indicating the
stratum to which that node is assigned. LABELS is the list
of edge labels that dictate a change in strata.
"""
stratum = {}
for node in self.nodes:
stratum[node] = 1
changes = True
while changes:
changes = False
for node in self.edges:
for edge in self.edges[node]:
oldp = stratum[node]
if edge.label in labels:
stratum[node] = max(stratum[node],
1 + stratum[edge.node])
else:
stratum[node] = max(stratum[node],
stratum[edge.node])
if oldp != stratum[node]:
changes = True
if stratum[node] > len(self.nodes):
return None
return stratum
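# A small worked example (illustrative only): a 'neg' label marks
# edges that must cross strata.
#
#   g = Graph()
#   g.add_edge('p', 'q', label='neg')   # p depends negatively on q
#   g.add_edge('q', 'r')                # ordinary positive dependency
#   g.stratification(['neg'])           # -> {'p': 2, 'q': 1, 'r': 1}
#
# p lands in a higher stratum than q, so q (and r) can be fully
# computed before any rule for p is evaluated.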
def roots(self):
"""Return list of nodes with no incoming edges."""
possible_roots = set(self.nodes)
for node in self.edges:
for edge in self.edges[node]:
if edge.node in possible_roots:
possible_roots.remove(edge.node)
return possible_roots
def has_cycle(self):
"""Checks if there are cycles.
Run depth_first_search only if it has not already been run.
"""
if self._cycles is None:
self._enumerate_cycles()
return len(self._cycles) > 0
def cycles(self):
"""Return list of cycles. None indicates unknown. """
if self._cycles is None:
self._enumerate_cycles()
cycles_list = []
for cycle_graph in self._cycles:
cycles_list.append(cycle_graph.list_repr())
return cycles_list
def dependencies(self, node):
"""Returns collection of node names reachable from NODE.
If NODE does not exist in graph, returns None.
"""
if node not in self.nodes:
return None
self.reset()
node_obj = self.nodes[node]
if node_obj is None or node_obj.begin is None or node_obj.end is None:
self.depth_first_search([node])
node_obj = self.nodes[node]
return set([n for n, dfs_obj in self.nodes.items()
if dfs_obj.begin is not None])
def next_counter(self):
"""Return next counter value and increment the counter."""
self.counter += 1
return self.counter - 1
def __str__(self):
s = "{"
for node in self.nodes:
s += "(" + str(node) + " : ["
if node in self.edges:
s += ", ".join([str(x) for x in self.edges[node]])
s += "],\n"
s += "}"
return s
def _inverted_edge_graph(self):
"""create a shallow copy of self with the edges inverted"""
newGraph = Graph()
newGraph.nodes = self.nodes
for source_node in self.edges:
for edge in self.edges[source_node]:
try:
newGraph.edges[edge.node].add(Graph.edge_data(source_node))
except KeyError:
newGraph.edges[edge.node] = set(
[Graph.edge_data(source_node)])
return newGraph
def find_dependent_nodes(self, nodes):
"""Return all nodes dependent on @nodes.
Node T is dependent on node T.
Node T is dependent on node R if there is an edge from node S to T,
and S is dependent on R.
Note that node T is dependent on node T even if T is not in the graph
"""
return (self._inverted_edge_graph().find_reachable_nodes(nodes)
| set(nodes))
def find_reachable_nodes(self, roots):
"""Return all nodes reachable from @roots."""
if len(roots) == 0:
return set()
self.depth_first_search(roots)
result = [x for x in self.nodes if self.nodes[x].begin is not None]
self.reset_nodes()
return set(result)
class Cycle(frozenset):
"""An immutable set of 2-tuples to represent a directed cycle
Extends frozenset, adding a list_repr method to represent a cycle as an
ordered list of nodes.
The set representation facilitates identity of cycles regardless of order.
The list representation is much more readable.
"""
def __new__(cls, cycle):
edge_list = []
for i in range(1, len(cycle)):
edge_list.append((cycle[i - 1], cycle[i]))
new_obj = super(Cycle, cls).__new__(cls, edge_list)
new_obj.__list_repr = list(cycle) # save copy as list_repr
return new_obj
def list_repr(self):
"""Return list-of-nodes representation of cycle"""
return self.__list_repr
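# A quick sketch of cycle reporting with the classes above: a two-node
# loop is reported once, whichever node the search starts from.
def _cycle_demo():
    g = Graph()
    g.add_edge('a', 'b')
    g.add_edge('b', 'a')
    assert g.has_cycle()
    # one cycle, e.g. [['a', 'b', 'a']]: Cycle's set-of-edges identity
    # collapses ['a','b','a'] and ['b','a','b'] into a single entry
    return g.cycles()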
class BagGraph(Graph):
"""A graph data structure with bag semantics for nodes and edges.
Keeps track of the number of times each node/edge has been inserted.
A node/edge is removed from the graph only once it has been deleted
the same number of times it was inserted. Deletions when no node/edge
already exist are ignored.
"""
def __init__(self, graph=None):
super(BagGraph, self).__init__(graph)
self._node_refcounts = {} # dict from node to counter
self._edge_refcounts = {} # dict from edge to counter
def add_node(self, val):
"""Add node VAL to graph."""
super(BagGraph, self).add_node(val)
if val in self._node_refcounts:
self._node_refcounts[val] += 1
else:
self._node_refcounts[val] = 1
def delete_node(self, val):
"""Delete node VAL from graph (but leave all edges)."""
if val not in self._node_refcounts:
return
self._node_refcounts[val] -= 1
if self._node_refcounts[val] == 0:
super(BagGraph, self).delete_node(val)
del self._node_refcounts[val]
def add_edge(self, val1, val2, label=None):
"""Add edge from VAL1 to VAL2 with label LABEL to graph.
Also adds the nodes VAL1 and VAL2 (important for refcounting).
"""
super(BagGraph, self).add_edge(val1, val2, label=label)
edge = (val1, val2, label)
if edge in self._edge_refcounts:
self._edge_refcounts[edge] += 1
else:
self._edge_refcounts[edge] = 1
def delete_edge(self, val1, val2, label=None):
"""Delete edge from VAL1 to VAL2 with label LABEL.
LABEL must match (even if None). Also deletes nodes
whenever the edge exists.
"""
edge = (val1, val2, label)
if edge not in self._edge_refcounts:
return
self.delete_node(val1)
self.delete_node(val2)
self._edge_refcounts[edge] -= 1
if self._edge_refcounts[edge] == 0:
super(BagGraph, self).delete_edge(val1, val2, label=label)
del self._edge_refcounts[edge]
def node_in(self, val):
return val in self._node_refcounts
def edge_in(self, val1, val2, label=None):
return (val1, val2, label) in self._edge_refcounts
def node_count(self, node):
if node in self._node_refcounts:
return self._node_refcounts[node]
else:
return 0
def edge_count(self, val1, val2, label=None):
edge = (val1, val2, label)
if edge in self._edge_refcounts:
return self._edge_refcounts[edge]
else:
return 0
def __len__(self):
return (reduce(lambda x, y: x+y, self._node_refcounts.values(), 0) +
reduce(lambda x, y: x+y, self._edge_refcounts.values(), 0))
def __str__(self):
s = "{"
for node in self.nodes:
s += "(%s *%s: [" % (str(node), self._node_refcounts[node])
if node in self.edges:
s += ", ".join(
["%s *%d" %
(str(x), self.edge_count(node, x.node, x.label))
for x in self.edges[node]])
s += "],\n"
s += "}"
return s
class OrderedSet(collections.MutableSet):
"""Provide sequence capabilities with rapid membership checks.
Mostly lifted from the activestate recipe[1] linked at Python's collections
documentation[2]. Some modifications, such as returning True or False from
add(key) and discard(key) if a change is made.
[1] - http://code.activestate.com/recipes/576694/
[2] - https://docs.python.org/2/library/collections.html
"""
def __init__(self, iterable=None):
self.end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.map = {} # key --> [key, prev, next]
if iterable is not None:
self |= iterable
def __len__(self):
return len(self.map)
def __contains__(self, key):
return key in self.map
def add(self, key):
if key not in self.map:
end = self.end
curr = end[1]
curr[2] = end[1] = self.map[key] = [key, curr, end]
return True
return False
def discard(self, key):
if key in self.map:
key, prev, next = self.map.pop(key)
prev[2] = next
next[1] = prev
return True
return False
def __iter__(self):
end = self.end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def pop(self, last=True):
if not self:
raise KeyError('pop from an empty set')
key = self.end[1][0] if last else self.end[2][0]
self.discard(key)
return key
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, list(self))
def __eq__(self, other):
if isinstance(other, OrderedSet):
return len(self) == len(other) and list(self) == list(other)
else:
return False
def __ne__(self, other):
return not self.__eq__(other)
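# Usage sketch: a set that remembers insertion order, where add() and
# discard() report whether they changed anything.
def _ordered_set_demo():
    s = OrderedSet([3, 1, 2])
    assert list(s) == [3, 1, 2]   # insertion order preserved
    assert s.add(1) is False      # already present, no change
    assert s.discard(3) is True   # removed
    assert s.pop() == 2           # pops from the end by default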
class iterstr(object):
"""Lazily provides informal string representation of iterables.
Calling __str__ directly on iterables returns a string containing the
formal representation of the elements. This class wraps the iterable and
instead returns the informal representation of the elements.
"""
def __init__(self, iterable):
self.iterable = iterable
self._str_interp = None
self._repr_interp = None
def __str__(self):
if self._str_interp is None:
self._str_interp = "[" + ";".join(map(str, self.iterable)) + "]"
return self._str_interp
def __repr__(self):
if self._repr_interp is None:
self._repr_interp = "[" + ";".join(map(repr, self.iterable)) + "]"
return self._repr_interp
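# The point is lazy logging: the join only runs if the message is
# actually emitted. For example:
def _iterstr_demo():
    items = iterstr([1, 'a'])
    assert str(items) == '[1;a]'      # informal str() of each element
    assert repr(items) == "[1;'a']"   # formal repr() of each element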


@ -1,109 +0,0 @@
# Copyright (c) 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from aodhclient import client as aodh_client
from oslo_log import log as logging
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class AodhDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
ALARMS = "alarms"
value_trans = {'translation-type': 'VALUE'}
# TODO(ramineni): enable ALARM_RULES translator
alarms_translator = {
'translation-type': 'HDICT',
'table-name': ALARMS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'alarm_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'type', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'time_constraints', 'translator': value_trans},
{'fieldname': 'user_id', 'translator': value_trans},
{'fieldname': 'project_id', 'translator': value_trans},
{'fieldname': 'alarm_actions', 'translator': value_trans},
{'fieldname': 'ok_actions', 'translator': value_trans},
{'fieldname': 'insufficient_data_actions', 'translator':
value_trans},
{'fieldname': 'repeat_actions', 'translator': value_trans},
{'fieldname': 'timestamp', 'translator': value_trans},
{'fieldname': 'state_timestamp', 'translator': value_trans},
)}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['resource_id']
except KeyError:
return str(x)
TRANSLATORS = [alarms_translator]
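# Hedged sketch of what this translator yields: convert_objs
# (inherited from datasource_driver) flattens each alarm dict into one
# tuple in the ALARMS table, with columns in field-translator order,
# e.g. (values hypothetical):
#   {'alarm_id': 'a-1', 'name': 'cpu_high', 'state': 'alarm', ...}
# becomes a row whose first three columns are
#   ('a-1', 'cpu_high', 'alarm', ...)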
def __init__(self, name='', args=None):
super(AodhDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
session = ds_utils.get_keystone_session(args)
endpoint = session.get_endpoint(service_type='alarming',
interface='publicURL')
self.aodh_client = aodh_client.Client(version='2', session=session,
endpoint_override=endpoint)
self.add_executable_client_methods(self.aodh_client, 'aodhclient.v2.')
self.initialize_update_method()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'aodh'
result['description'] = ('Datasource driver that interfaces with '
'aodh.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_method(self):
alarms_method = lambda: self._translate_alarms(
self.aodh_client.alarm.list())
self.add_update_method(alarms_method, self.alarms_translator)
@ds_utils.update_state_on_changed(ALARMS)
def _translate_alarms(self, obj):
"""Translate the alarms represented by OBJ into tables."""
LOG.debug("ALARMS: %s", str(obj))
row_data = AodhDriver.convert_objs(obj, self.alarms_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.aodh_client, action, action_args)

View File

@ -1,66 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from six.moves import range
from congress.datasources import datasource_driver
def d6service(name, keys, inbox, datapath, args):
"""Create a dataservice instance.
This method is called by d6cage to create a dataservice
instance. There are a couple of parameters we found useful
to add to that call, so we included them here instead of
modifying d6cage (and all the d6cage.createservice calls).
"""
return BenchmarkDriver(name, keys, inbox, datapath, args)
class BenchmarkDriver(datasource_driver.PollingDataSourceDriver):
BENCHTABLE = 'benchtable'
value_trans = {'translation-type': 'VALUE'}
translator = {
'translation-type': 'HDICT',
'table-name': BENCHTABLE,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'field1', 'translator': value_trans},
{'fieldname': 'field2', 'translator': value_trans})}
TRANSLATORS = [translator]
def __init__(self, name='', keys='', inbox=None, datapath=None, args=None):
super(BenchmarkDriver, self).__init__(name, keys,
inbox, datapath, args)
# used by update_from_datasources to manufacture data. Default small.
self.datarows = 10
self._init_end_start_poll()
def update_from_datasource(self):
self.state = {}
# TODO(sh): using self.convert_objs() takes about 10x the time. Needs
# optimization efforts.
row_data = tuple((self.BENCHTABLE, ('val1_%d' % i, 'val2_%d' % i))
for i in range(self.datarows))
for table, row in row_data:
if table not in self.state:
self.state[table] = set()
self.state[table].add(row)
def get_credentials(self, *args, **kwargs):
return {}
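# Sketch of the manufactured state (derived from the code above):
# with self.datarows = 3, update_from_datasource() leaves
#   self.state == {'benchtable': {('val1_0', 'val2_0'),
#                                 ('val1_1', 'val2_1'),
#                                 ('val1_2', 'val2_2')}}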


@ -1,634 +0,0 @@
#
# Copyright (c) 2017 Orange.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Datasource for configuration options"""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from collections import OrderedDict
import datetime
import os
import six
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_config import types
from oslo_log import log as logging
import oslo_messaging as messaging
from congress.cfg_validator import parsing
from congress.cfg_validator import utils
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.dse2 import dse_node as dse
LOG = logging.getLogger(__name__)
FILE = u'file'
VALUE = u'binding'
OPTION = u'option'
OPTION_INFO = u'option_info'
INT_TYPE = u'int_type'
FLOAT_TYPE = u'float_type'
STR_TYPE = u'string_type'
LIST_TYPE = u'list_type'
RANGE_TYPE = u'range_type'
URI_TYPE = u'uri_type'
IPADDR_TYPE = u'ipaddr_type'
SERVICE = u'service'
HOST = u'host'
MODULE = u'module'
TEMPLATE = u'template'
TEMPLATE_NS = u'template_ns'
NAMESPACE = u'namespace'
class ValidatorDriver(datasource_driver.PollingDataSourceDriver):
"""Driver for the Configuration validation datasource"""
# pylint: disable=too-many-instance-attributes
DS_NAME = u'config'
def __init__(self, name=None, args=None):
super(ValidatorDriver, self).__init__(self.DS_NAME, args)
# { template_hash -> {name, namespaces} }
self.known_templates = {}
# { namespace_hash -> namespace_name }
self.known_namespaces = {}
# set(config_hash)
self.known_configs = set()
# { template_hash -> (conf_hash, conf)[] }
self.templates_awaited_by_config = {}
self.agent_api = ValidatorAgentClient()
self.rule_added = False
if hasattr(self, 'add_rpc_endpoint'):
self.add_rpc_endpoint(ValidatorDriverEndpoints(self))
self._init_end_start_poll()
# pylint: disable=no-self-use
def get_context(self):
"""context for RPC. To define"""
return {}
@staticmethod
def get_datasource_info():
"""Gives back a standardized description of the datasource"""
result = {}
result['id'] = 'config'
result['description'] = (
'Datasource driver that allows OS configs retrieval.')
result['config'] = {
'poll_time': constants.OPTIONAL,
'lazy_tables': constants.OPTIONAL}
return result
@classmethod
def get_schema(cls):
sch = {
# option value
VALUE: [
{'name': 'option_id', 'desc': 'The represented option'},
{'name': 'file_id',
'desc': 'The file containing the assignment'},
{'name': 'val', 'desc': 'Actual value'}],
OPTION: [
{'name': 'id', 'desc': 'Id'},
{'name': 'namespace', 'desc': ''},
{'name': 'group', 'desc': ''},
{'name': 'name', 'desc': ''}, ],
# option metadata (omitted: dest)
OPTION_INFO: [
{'name': 'option_id', 'desc': 'Option id'},
{'name': 'type', 'desc': ''},
{'name': 'default', 'desc': ''},
{'name': 'deprecated', 'desc': ''},
{'name': 'deprecated_reason', 'desc': ''},
{'name': 'mutable', 'desc': ''},
{'name': 'positional', 'desc': ''},
{'name': 'required', 'desc': ''},
{'name': 'sample_default', 'desc': ''},
{'name': 'secret', 'desc': ''},
{'name': 'help', 'desc': ''}],
HOST: [
{'name': 'id', 'desc': 'Id'},
{'name': 'name', 'desc': 'Arbitrary host name'}],
FILE: [
{'name': 'id', 'desc': 'Id'},
{'name': 'host_id', 'desc': 'File\'s host'},
{'name': 'template',
'desc': 'Template specifying the content of the file'},
{'name': 'name', 'desc': ''}],
MODULE: [
{'name': 'id', 'desc': 'Id'},
{'name': 'base_dir', 'desc': ''},
{'name': 'module', 'desc': ''}],
SERVICE: [
{'name': 'service', 'desc': ''},
{'name': 'host', 'desc': ''},
{'name': 'version', 'desc': ''}],
TEMPLATE: [
{'name': 'id', 'desc': ''},
{'name': 'name', 'desc': ''}, ],
TEMPLATE_NS: [
{'name': 'template', 'desc': 'hash'},
{'name': 'namespace', 'desc': 'hash'}],
NAMESPACE: [
{'name': 'id', 'desc': ''},
{'name': 'name', 'desc': ''}],
INT_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'min', 'desc': ''},
{'name': 'max', 'desc': ''},
{'name': 'choices', 'desc': ''}, ],
FLOAT_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'min', 'desc': ''},
{'name': 'max', 'desc': ''}, ],
STR_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'regex', 'desc': ''},
{'name': 'max_length', 'desc': ''},
{'name': 'quotes', 'desc': ''},
{'name': 'ignore_case', 'desc': ''},
{'name': 'choices', 'desc': ''}, ],
LIST_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'item_type', 'desc': ''},
{'name': 'bounds', 'desc': ''}, ],
IPADDR_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'version', 'desc': ''}, ],
URI_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'max_length', 'desc': ''},
{'name': 'schemes', 'desc': ''}, ],
RANGE_TYPE: [
{'name': 'option_id', 'desc': ''},
{'name': 'min', 'desc': ''},
{'name': 'max', 'desc': ''}, ],
}
return sch
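# Hypothetical rows only, to show how the tables join on option_id:
#   option  : ('opt-1', 'oslo.log', 'DEFAULT', 'debug')
#   binding : ('opt-1', 'file-1', 'True')
# i.e. option 'debug' in group DEFAULT, declared by the 'oslo.log'
# namespace, is set to True in the file whose id is 'file-1'.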
def poll(self):
LOG.info("%s:: polling", self.name)
# Initialize published state to a sensible empty state.
# Avoids races with queries.
if self.number_of_updates == 0:
for tablename in set(self.get_schema()):
self.state[tablename] = set()
self.publish(tablename, self.state[tablename],
use_snapshot=False)
self.agent_api.publish_templates_hashes(self.get_context())
self.agent_api.publish_configs_hashes(self.get_context())
self.last_updated_time = datetime.datetime.now()
self.number_of_updates += 1
def process_config_hashes(self, hashes, host):
"""Handles a list of config files hashes and their retrieval.
If the driver can process the parsing and translation of the config,
it registers the configs to the driver.
:param hashes: A list of config files hashes
:param host: Name of the node hosting theses config files
"""
LOG.debug('Received configs list from %s' % host)
for cfg_hash in set(hashes) - self.known_configs:
config = self.agent_api.get_config(self.get_context(),
cfg_hash, host)
if self.process_config(cfg_hash, config, host):
self.known_configs.add(cfg_hash)
LOG.debug('Config %s from %s registered' % (cfg_hash, host))
@lockutils.synchronized('validator_process_template_hashes')
def process_template_hashes(self, hashes, host):
"""Handles a list of template hashes and their retrieval.
Uses lock to avoid multiple sending of the same data.
:param hashes: A list of templates hashes
:param host: Name of the node hosting theses config files
"""
LOG.debug('Process template hashes from %s' % host)
for t_h in set(hashes) - set(self.known_templates):
LOG.debug('Treating template hash %s' % t_h)
template = self.agent_api.get_template(self.get_context(), t_h,
host)
ns_hashes = template['namespaces']
for ns_hash in set(ns_hashes) - set(self.known_namespaces):
namespace = self.agent_api.get_namespace(
self.get_context(), ns_hash, host)
self.known_namespaces[ns_hash] = namespace
self.known_templates[t_h] = template
for (c_h, config) in self.templates_awaited_by_config.pop(t_h, []):
if self.process_config(c_h, config, host):
self.known_configs.add(c_h)
LOG.debug('Config %s from %s registered (late)' %
(c_h, host))
return True
def translate_service(self, host_id, service, version):
"""Translates a service infos to SERVICE table.
:param host_id: Host ID, should reference HOST.ID
:param service: A service name
:param version: A version name, can be None
"""
if not host_id or not service:
return
service_row = tuple(
map(utils.cfg_value_to_congress, (service, host_id, version)))
self.state[SERVICE].add(service_row)
def translate_host(self, host_id, host_name):
"""Translates a host infos to HOST table.
:param host_id: Host ID
:param host_name: A host name
"""
if not host_id:
return
host_row = tuple(
map(utils.cfg_value_to_congress, (host_id, host_name)))
self.state[HOST].add(host_row)
def translate_file(self, file_id, host_id, template_id, file_name):
"""Translates a file infos to FILE table.
:param file_id: File ID
:param host_id: Host ID, should reference HOST.ID
:param template_id: Template ID, should reference TEMPLATE.ID
"""
if not file_id or not host_id:
return
file_row = tuple(
map(utils.cfg_value_to_congress,
(file_id, host_id, template_id, file_name)))
self.state[FILE].add(file_row)
def translate_template_namespace(self, template_id, name, ns_ids):
"""Translates a template infos and its namespaces infos.
Modifies tables : TEMPLATE, NAMESPACE and TEMPLATE_NS
:param template_id: Template ID
:param name: A template name
:param ns_ids: List of namespace IDs, defining this template, should
reference NAMESPACE.ID
"""
if not template_id:
return
template_row = tuple(
map(utils.cfg_value_to_congress, (template_id, name)))
self.state[TEMPLATE].add(template_row)
for ns_h, ns_name in six.iteritems(ns_ids):
if not ns_h:
continue
namespace_row = tuple(map(utils.cfg_value_to_congress,
(ns_h, ns_name)))
self.state[NAMESPACE].add(namespace_row)
tpl_ns_row = tuple(
map(utils.cfg_value_to_congress, (template_id, ns_h)))
self.state[TEMPLATE_NS].add(tpl_ns_row)
# pylint: disable=protected-access,too-many-branches
def translate_type(self, opt_id, cfg_type):
"""Translates a type to the appropriate type table.
:param opt_id: Option ID, should reference OPTION.ID
:param cfg_type: An oslo ConfigType for the referenced option
"""
if not opt_id:
return
if isinstance(cfg_type, types.String):
tablename = STR_TYPE
# oslo.config 5.2 begins to use a different representation of
# choices (OrderedDict). We first convert back to simple list to
# have consistent output regardless of oslo.config version
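# Illustrative: OrderedDict([('a', None), ('b', None)]) becomes ['a', 'b']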
if isinstance(cfg_type.choices, OrderedDict):
choices = list(map(lambda item: item[0],
cfg_type.choices.items()))
else:
choices = cfg_type.choices
row = (cfg_type.regex, cfg_type.max_length, cfg_type.quotes,
cfg_type.ignore_case, choices)
elif isinstance(cfg_type, types.Integer):
tablename = INT_TYPE
# oslo.config 5.2 begins to use a different representation of
# choices (OrderedDict). We first convert back to simple list to
# have consistent output regardless of oslo.config version
if isinstance(cfg_type.choices, OrderedDict):
choices = list(map(lambda item: item[0],
cfg_type.choices.items()))
else:
choices = cfg_type.choices
row = (cfg_type.min, cfg_type.max, choices)
elif isinstance(cfg_type, types.Float):
tablename = FLOAT_TYPE
row = (cfg_type.min, cfg_type.max)
elif isinstance(cfg_type, types.List):
tablename = LIST_TYPE
row = (type(cfg_type.item_type).__name__, cfg_type.bounds)
elif isinstance(cfg_type, types.IPAddress):
tablename = IPADDR_TYPE
if cfg_type.version_checker == cfg_type._check_ipv4:
version = 4
elif cfg_type.version_checker == cfg_type._check_ipv6:
version = 6
else:
version = None
row = (version,)
elif isinstance(cfg_type, types.URI):
tablename = URI_TYPE
row = (cfg_type.max_length, cfg_type.schemes)
elif isinstance(cfg_type, types.Range):
tablename = RANGE_TYPE
row = (cfg_type.min, cfg_type.max)
else:
return
row = (opt_id,) + row
if isinstance(cfg_type, types.List):
self.translate_type(opt_id, cfg_type.item_type)
self.state[tablename].add(
tuple(map(utils.cfg_value_to_congress, row)))
def translate_value(self, file_id, option_id, value):
"""Translates a value to the VALUE table.
If value is a list, a table entry is added for every list item.
If value is a dict, a table entry is added for every key-value pair.
:param file_id: File ID, should reference FILE.ID
:param option_id: Option ID, should reference OPTION.ID
:param value: A value, can be None
"""
if not file_id:
return
if not option_id:
return
if isinstance(value, list):
for v_item in value:
value_row = tuple(
map(utils.cfg_value_to_congress,
(option_id, file_id, v_item)))
self.state[VALUE].add(value_row)
elif isinstance(value, dict):
for v_key, v_item in six.iteritems(value):
value_row = tuple(
map(utils.cfg_value_to_congress,
(option_id, file_id, '%s:%s' % (v_key, v_item))))
self.state[VALUE].add(value_row)
else:
value_row = tuple(
map(utils.cfg_value_to_congress,
(option_id, file_id, value)))
self.state[VALUE].add(value_row)
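# Illustrative examples (not from the original source): for option o in
# file f, the list value ['x', 'y'] yields rows (o, f, 'x') and
# (o, f, 'y'); the dict value {'a': 1} yields the row (o, f, 'a:1').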
def translate_option(self, option, group_name):
"""Translates an option metadata to datasource tables.
Modifies tables : OPTION, OPTION_INFO
:param option: An IdentifiedOpt object
:param group_name: Associated section name
"""
if option is None:
return
if not group_name:
return
option_row = tuple(map(utils.cfg_value_to_congress, (
option.id_, option.ns_id, group_name, option.name)))
self.state[OPTION].add(option_row)
option_info_row = tuple(
map(utils.cfg_value_to_congress, (
option.id_,
type(option.type).__name__,
option.default,
option.deprecated_for_removal,
option.deprecated_reason,
option.mutable,
option.positional,
option.required,
option.sample_default,
option.secret,
option.help)))
self.state[OPTION_INFO].add(option_info_row)
def translate_conf(self, conf, file_id):
"""Translates a config manager to the datasource state.
:param conf: A config manager ConfigOpts, containing the parsed values
and the options metadata to read them
:param file_id: Id of the file, which contains the parsed values
"""
cfg_ns = conf._namespace
def _do_translation(option, group_name='DEFAULT'):
option = option['opt']
# skip options that do not have the required attributes
# avoids processing built-in options included by oslo.config, which
# don't have all the needed IdentifiedOpt attributes.
# see: https://github.com/openstack/oslo.config/commit/5ad89d40210bf5922de62e30b096634cac36da6c#diff-768b817a50237989cacd1a8064b4a8af # noqa
for attribute in ['id_', 'name', 'type', 'ns_id']:
if not hasattr(option, attribute):
return
self.translate_option(option, group_name)
try:
value = option._get_from_namespace(cfg_ns, group_name)
if hasattr(cfg, 'LocationInfo'):
value = value[0]
except KeyError:
# No value parsed for this option
return
self.translate_type(option.id_, option.type)
try:
value = parsing.parse_value(option.type, value)
except (ValueError, TypeError):
LOG.warning('Value for option %s is not valid: %s' % (
option.name, value))
self.translate_value(file_id, option.id_, value)
for _, option in six.iteritems(conf._opts):
_do_translation(option)
for group_name, identified_group in six.iteritems(conf._groups):
for _, option in six.iteritems(identified_group._opts):
_do_translation(option, group_name)
def process_config(self, file_hash, config, host):
"""Manages all translations related to a config file.
Publishes tables to the policy engine (PE).
:param file_hash: Hash of the configuration file
:param config: object representing the configuration
:param host: Remote host name
:return: True if config was processed
"""
try:
LOG.debug("process_config hash=%s" % file_hash)
template_hash = config['template']
template = self.known_templates.get(template_hash, None)
if template is None:
waiting = (
self.templates_awaited_by_config.get(template_hash, []))
waiting.append((file_hash, config))
self.templates_awaited_by_config[template_hash] = waiting
LOG.debug('Template %s not yet registered' % template_hash)
return False
host_id = utils.compute_hash(host)
namespaces = [self.known_namespaces.get(h, None).get('data', None)
for h in template['namespaces']]
conf = parsing.construct_conf_manager(namespaces)
parsing.add_parsed_conf(conf, config['data'])
for tablename in set(self.get_schema()) - set(self.state):
self.state[tablename] = set()
self.publish(tablename, self.state[tablename],
use_snapshot=False)
self.translate_conf(conf, file_hash)
self.translate_host(host_id, host)
self.translate_service(
host_id, config['service'], config['version'])
file_name = os.path.basename(config['path'])
self.translate_file(file_hash, host_id, template_hash, file_name)
ns_hashes = {h: self.known_namespaces[h]['name']
for h in template['namespaces']}
self.translate_template_namespace(template_hash, template['name'],
ns_hashes)
for tablename in self.state:
self.publish(tablename, self.state[tablename],
use_snapshot=True)
return True
except KeyError:
LOG.error('Config %s from %s NOT registered'
% (file_hash, host))
return False
class ValidatorAgentClient(object):
"""RPC Proxy to access the agent from the datasource."""
def __init__(self, topic=utils.AGENT_TOPIC):
transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(exchange=dse.DseNode.EXCHANGE,
topic=topic,
version=dse.DseNode.RPC_VERSION)
self.client = messaging.RPCClient(transport, target)
def publish_configs_hashes(self, context):
"""Asks for config hashes"""
cctx = self.client.prepare(fanout=True)
return cctx.cast(context, 'publish_configs_hashes')
def publish_templates_hashes(self, context):
"""Asks for template hashes"""
cctx = self.client.prepare(fanout=True)
return cctx.cast(context, 'publish_templates_hashes')
# block calling thread
def get_namespace(self, context, ns_hash, server):
"""Retrieves an explicit namespace from a server given a hash. """
cctx = self.client.prepare(server=server)
return cctx.call(context, 'get_namespace', ns_hash=ns_hash)
# block calling thread
def get_template(self, context, tpl_hash, server):
"""Retrieves an explicit template from a server given a hash"""
cctx = self.client.prepare(server=server)
return cctx.call(context, 'get_template', tpl_hash=tpl_hash)
# block calling thread
def get_config(self, context, cfg_hash, server):
"""Retrieves a config from a server given a hash"""
cctx = self.client.prepare(server=server)
return cctx.call(context, 'get_config', cfg_hash=cfg_hash)
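# Illustrative usage sketch (variable names assumed, not from the original
# source):
#   api = ValidatorAgentClient()
#   api.publish_templates_hashes({})                 # fan-out cast to agents
#   template = api.get_template({}, tpl_hash, 'node1')  # blocking RPC call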
class ValidatorDriverEndpoints(object):
"""RPC endpoint on the datasource driver for use by the agents"""
def __init__(self, driver):
self.driver = driver
# pylint: disable=unused-argument
def process_templates_hashes(self, context, **kwargs):
"""Process the template hashes received from a server"""
LOG.debug(
'Received template hashes from host %s' % kwargs.get('host', ''))
self.driver.process_template_hashes(**kwargs)
# pylint: disable=unused-argument
def process_configs_hashes(self, context, **kwargs):
"""Process the config hashes received from a server"""
LOG.debug(
'Received config hashes from host %s' % kwargs.get('host', ''))
self.driver.process_config_hashes(**kwargs)


@@ -1,180 +0,0 @@
# Copyright (c) 2014 Montavista Software, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Schema version history
version: 2.1
date: 2016-03-27
changes:
- Added columns to the volumes table: encrypted, availability_zone,
replication_status, multiattach, snapshot_id, source_volid,
consistencygroup_id, migration_status
- Added the attachments table for volume attachment information.
version: 2.0
Initial schema version.
"""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import cinderclient.client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class CinderDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
VOLUMES = "volumes"
ATTACHMENTS = "attachments"
SNAPSHOTS = "snapshots"
SERVICES = "services"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
volumes_translator = {
'translation-type': 'HDICT',
'table-name': VOLUMES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'size', 'translator': value_trans},
{'fieldname': 'user_id', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'bootable', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'volume_type', 'translator': value_trans},
{'fieldname': 'encrypted', 'translator': value_trans},
{'fieldname': 'availability_zone', 'translator': value_trans},
{'fieldname': 'replication_status', 'translator': value_trans},
{'fieldname': 'multiattach', 'translator': value_trans},
{'fieldname': 'snapshot_id', 'translator': value_trans},
{'fieldname': 'source_volid', 'translator': value_trans},
{'fieldname': 'consistencygroup_id', 'translator': value_trans},
{'fieldname': 'migration_status', 'translator': value_trans},
{'fieldname': 'attachments',
'translator': {'translation-type': 'HDICT',
'table-name': ATTACHMENTS,
'parent-key': 'id',
'parent-col-name': 'volume_id',
'parent-key-desc': 'UUID of volume',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'server_id',
'translator': value_trans},
{'fieldname': 'attachment_id',
'translator': value_trans},
{'fieldname': 'host_name',
'translator': value_trans},
{'fieldname': 'device',
'translator': value_trans})}}
)}
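# Illustrative: with the nested HDICT translator above, a volume carrying
# two entries in its 'attachments' list yields one 'volumes' row plus two
# 'attachments' rows, each keyed by the parent volume's UUID in the
# 'volume_id' column.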
snapshots_translator = {
'translation-type': 'HDICT',
'table-name': SNAPSHOTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'size', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'volume_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans})}
services_translator = {
'translation-type': 'HDICT',
'table-name': SERVICES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'binary', 'translator': value_trans},
{'fieldname': 'zone', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'host', 'translator': value_trans},
{'fieldname': 'disabled_reason', 'translator': value_trans})}
TRANSLATORS = [volumes_translator, snapshots_translator,
services_translator]
def __init__(self, name='', args=None):
super(CinderDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
session = ds_utils.get_keystone_session(args)
self.cinder_client = cinderclient.client.Client(version='2',
session=session)
self.add_executable_client_methods(self.cinder_client,
'cinderclient.v2.')
self.initialize_update_method()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'cinder'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack cinder.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_method(self):
volumes_method = lambda: self._translate_volumes(
self.cinder_client.volumes.list(detailed=True,
search_opts={'all_tenants': 1}))
self.add_update_method(volumes_method, self.volumes_translator)
snapshots_method = lambda: self._translate_snapshots(
self.cinder_client.volume_snapshots.list(
detailed=True, search_opts={'all_tenants': 1}))
self.add_update_method(snapshots_method, self.snapshots_translator)
services_method = lambda: self._translate_services(
self.cinder_client.services.list(host=None, binary=None))
self.add_update_method(services_method, self.services_translator)
@ds_utils.update_state_on_changed(VOLUMES)
def _translate_volumes(self, obj):
row_data = CinderDriver.convert_objs(obj, self.volumes_translator)
return row_data
@ds_utils.update_state_on_changed(SNAPSHOTS)
def _translate_snapshots(self, obj):
row_data = CinderDriver.convert_objs(obj, self.snapshots_translator)
return row_data
@ds_utils.update_state_on_changed(SERVICES)
def _translate_services(self, obj):
row_data = CinderDriver.convert_objs(obj, self.services_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.cinder_client, action, action_args)


@@ -1,244 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from cloudfoundryclient.v2 import client
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class CloudFoundryV2Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
ORGANIZATIONS = 'organizations'
SERVICE_BINDINGS = 'service_bindings'
APPS = 'apps'
SPACES = 'spaces'
SERVICES = 'services'
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
organizations_translator = {
'translation-type': 'HDICT',
'table-name': ORGANIZATIONS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
service_bindings_translator = {
'translation-type': 'LIST',
'table-name': SERVICE_BINDINGS,
'parent-key': 'guid',
'parent-col-name': 'app_guid',
'val-col': 'service_instance_guid',
'translator': value_trans}
apps_translator = {
'translation-type': 'HDICT',
'table-name': APPS,
'in-list': True,
'parent-key': 'guid',
'parent-col-name': 'space_guid',
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'buildpack', 'translator': value_trans},
{'fieldname': 'command', 'translator': value_trans},
{'fieldname': 'console', 'translator': value_trans},
{'fieldname': 'debug', 'translator': value_trans},
{'fieldname': 'detected_buildpack', 'translator': value_trans},
{'fieldname': 'detected_start_command',
'translator': value_trans},
{'fieldname': 'disk_quota', 'translator': value_trans},
{'fieldname': 'docker_image', 'translator': value_trans},
{'fieldname': 'environment_json', 'translator': value_trans},
{'fieldname': 'health_check_timeout', 'translator': value_trans},
{'fieldname': 'instances', 'translator': value_trans},
{'fieldname': 'memory', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'package_state', 'translator': value_trans},
{'fieldname': 'package_updated_at', 'translator': value_trans},
{'fieldname': 'production', 'translator': value_trans},
{'fieldname': 'staging_failed_reason', 'translator': value_trans},
{'fieldname': 'staging_task_id', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'version', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'service_bindings',
'translator': service_bindings_translator})}
spaces_translator = {
'translation-type': 'HDICT',
'table-name': SPACES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'apps', 'translator': apps_translator})}
services_translator = {
'translation-type': 'HDICT',
'table-name': SERVICES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'space_guid', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'bound_app_count', 'translator': value_trans},
{'fieldname': 'last_operation', 'translator': value_trans},
{'fieldname': 'service_plan_name', 'translator': value_trans})}
TRANSLATORS = [organizations_translator,
spaces_translator, services_translator]
def __init__(self, name='', args=None):
super(CloudFoundryV2Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
self.cloudfoundry = client.Client(username=self.creds['username'],
password=self.creds['password'],
base_url=self.creds['auth_url'])
self.cloudfoundry.login()
self._cached_organizations = []
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'cloudfoundryv2'
result['description'] = ('Datasource driver that interfaces with '
'cloudfoundry')
result['config'] = {'username': constants.REQUIRED,
'password': constants.REQUIRED,
'poll_time': constants.OPTIONAL,
'auth_url': constants.REQUIRED}
result['secret'] = ['password']
return result
def _save_organizations(self, organizations):
temp_organizations = []
for organization in organizations['resources']:
temp_organizations.append(organization['metadata']['guid'])
self._cached_organizations = temp_organizations
def _parse_services(self, services):
data = []
space_guid = services['guid']
for service in services['services']:
data.append(
{'bound_app_count': service['bound_app_count'],
'guid': service['guid'],
'name': service['name'],
'service_plan_name': service['service_plan']['name'],
'space_guid': space_guid})
return data
def _get_app_services_guids(self, service_bindings):
result = []
for service_binding in service_bindings['resources']:
result.append(service_binding['entity']['service_instance_guid'])
return result
def update_from_datasource(self):
LOG.debug("CloudFoundry grabbing Data")
organizations = self.cloudfoundry.get_organizations()
self._translate_organizations(organizations)
self._save_organizations(organizations)
spaces = self._get_spaces()
services = self._get_services_update_spaces(spaces)
self._translate_spaces(spaces)
self._translate_services(services)
def _get_services_update_spaces(self, spaces):
services = []
for space in spaces:
space['apps'] = []
temp_apps = self.cloudfoundry.get_apps_in_space(space['guid'])
for temp_app in temp_apps['resources']:
service_bindings = self.cloudfoundry.get_app_service_bindings(
temp_app['metadata']['guid'])
data = dict(list(temp_app['metadata'].items()) +
list(temp_app['entity'].items()))
app_services = self._get_app_services_guids(service_bindings)
if app_services:
data['service_bindings'] = app_services
space['apps'].append(data)
services.extend(self._parse_services(
self.cloudfoundry.get_spaces_summary(space['guid'])))
return services
def _get_spaces(self):
spaces = []
for org in self._cached_organizations:
temp_spaces = self.cloudfoundry.get_organization_spaces(org)
for temp_space in temp_spaces['resources']:
spaces.append(dict(list(temp_space['metadata'].items()) +
list(temp_space['entity'].items())))
return spaces
@ds_utils.update_state_on_changed(SERVICES)
def _translate_services(self, obj):
LOG.debug("services: %s", obj)
row_data = CloudFoundryV2Driver.convert_objs(
obj, self.services_translator)
return row_data
@ds_utils.update_state_on_changed(ORGANIZATIONS)
def _translate_organizations(self, obj):
LOG.debug("organziations: %s", obj)
# convert_objs needs the data structured a specific way, so we
# do this here. Perhaps we can improve convert_objs later to be
# more flexible.
results = [dict(list(o['metadata'].items()) +
list(o['entity'].items()))
for o in obj['resources']]
row_data = CloudFoundryV2Driver.convert_objs(
results,
self.organizations_translator)
return row_data
@ds_utils.update_state_on_changed(SPACES)
def _translate_spaces(self, obj):
LOG.debug("spaces: %s", obj)
row_data = CloudFoundryV2Driver.convert_objs(
obj,
self.spaces_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.cloudfoundry, action, action_args)


@@ -1,21 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
# datasource config options
REQUIRED = 'required'
OPTIONAL = '(optional)'

File diff suppressed because it is too large


@@ -1,184 +0,0 @@
# Copyright (c) 2013,2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import functools
import inspect
import re
from keystoneauth1 import loading as kaloading
from congress.datasources import constants
def typed_value_trans(type):
return {'translation-type': 'VALUE', 'data-type': type}
def get_openstack_required_config():
return {'auth_url': constants.REQUIRED,
'endpoint': constants.OPTIONAL,
'region': constants.OPTIONAL,
'username': constants.REQUIRED,
'password': constants.REQUIRED,
'user_domain_name': constants.OPTIONAL,
'project_domain_name': constants.OPTIONAL,
'tenant_name': constants.OPTIONAL,
'project_name': constants.REQUIRED,
'poll_time': constants.OPTIONAL}
def update_state_on_changed(root_table_name):
"""Decorator to check raw data before retranslating.
If the raw data is the same as the cached self.raw_state,
skip translation and return an empty list directly.
If the raw data has changed, translate it and update the state.
"""
def outer(f):
@functools.wraps(f)
def inner(self, raw_data, *args, **kw):
if (root_table_name not in self.raw_state or
# TODO(RuiChen): workaround for oslo-incubator bug/1499369,
# enable self.raw_state cache, once the bug is resolved.
raw_data is not self.raw_state[root_table_name]):
result = f(self, raw_data, *args, **kw)
self._update_state(root_table_name, result)
self.raw_state[root_table_name] = raw_data
else:
result = []
return result
return inner
return outer
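# Typical usage, as in the drivers elsewhere in this tree (sketch only;
# 'SomeDriver' is a placeholder name):
#   @update_state_on_changed('volumes')
#   def _translate_volumes(self, obj):
#       return SomeDriver.convert_objs(obj, SomeDriver.volumes_translator)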
def add_column(colname, desc=None, type=None, nullable=True):
"""Adds column in the form of dict."""
col_dict = {'name': colname, 'desc': desc}
if type is not None:
col_dict['type'] = str(type)
if not nullable:
col_dict['nullable'] = False
return col_dict
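# Illustrative: add_column('id', desc='UUID', nullable=False) returns
# {'name': 'id', 'desc': 'UUID', 'nullable': False}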
def inspect_methods(client, api_prefix):
"""Inspect all callable methods from client for congress."""
# some methods are referenced multiple times; track them here
# to avoid an infinite loop
obj_checked = []
method_checked = []
# For depth-first search
obj_stack = []
# save all inspected methods that will be returned
allmethods = []
obj_checked.append(client)
obj_stack.append(client)
while len(obj_stack) > 0:
cur_obj = obj_stack.pop()
# anything that starts with '_' is considered internal only
for f in [f for f in dir(cur_obj) if not f.startswith('_')]:
p = getattr(cur_obj, f, None)
if inspect.ismethod(p):
m_p = {}
# to get a name that can be called by Congress, no need
# to return the full path
m_p['name'] = cur_obj.__module__.replace(api_prefix, '')
if m_p['name'] == '':
m_p['name'] = p.__name__
else:
m_p['name'] = m_p['name'] + '.' + p.__name__
# skip checked methods
if m_p['name'] in method_checked:
continue
m_doc = inspect.getdoc(p)
# not return deprecated methods
if m_doc and "DEPRECATED:" in m_doc:
continue
if m_doc:
m_doc = re.sub(r'\n|\s+', ' ', m_doc)
x = re.split(' :param ', m_doc)
m_p['desc'] = x.pop(0)
y = inspect.getargspec(p)
m_p['args'] = []
while len(y.args) > 0:
m_p_name = y.args.pop(0)
if m_p_name == 'self':
continue
if len(x) > 0:
m_p_desc = x.pop(0)
else:
m_p_desc = "None"
m_p['args'].append({'name': m_p_name,
'desc': m_p_desc})
else:
m_p['args'] = []
m_p['desc'] = ''
allmethods.append(m_p)
method_checked.append(m_p['name'])
elif inspect.isfunction(p):
m_p = {}
m_p['name'] = cur_obj.__module__.replace(api_prefix, '')
if m_p['name'] == '':
m_p['name'] = f
else:
m_p['name'] = m_p['name'] + '.' + f
# TODO(zhenzanz): Never seen a docstring on a plain function yet.
# m_doc = inspect.getdoc(p)
m_p['args'] = []
m_p['desc'] = ''
allmethods.append(m_p)
method_checked.append(m_p['name'])
elif isinstance(p, object) and hasattr(p, '__module__'):
# avoid infinite loop by checking that p not in obj_checked.
# don't use 'in' since that uses ==, and some clients err
if ((not any(p is x for x in obj_checked)) and
(not inspect.isbuiltin(p))):
if re.match(api_prefix, p.__module__):
if (not inspect.isclass(p)):
obj_stack.append(p)
return allmethods
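# Illustrative shape of one returned entry (method and argument names
# are made up for illustration):
#   {'name': 'volumes.list',
#    'desc': 'List all the volumes.',
#    'args': [{'name': 'detailed', 'desc': 'None'}]}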
# Note (thread-safety): blocking function
def get_keystone_session(creds, headers=None):
auth_details = {}
auth_details['auth_url'] = creds['auth_url']
auth_details['username'] = creds['username']
auth_details['password'] = creds['password']
auth_details['project_name'] = (creds.get('project_name') or
creds.get('tenant_name'))
auth_details['tenant_name'] = creds.get('tenant_name')
auth_details['user_domain_name'] = creds.get('user_domain_name', 'Default')
auth_details['project_domain_name'] = creds.get('project_domain_name',
'Default')
loader = kaloading.get_plugin_loader('password')
auth_plugin = loader.load_from_options(**auth_details)
if headers is None:
session = kaloading.session.Session().load_from_options(
auth=auth_plugin)
else:
session = kaloading.session.Session().load_from_options(
auth=auth_plugin, additional_headers=headers)
return session
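# Illustrative usage sketch (credential values assumed):
#   creds = {'auth_url': 'http://keystone:5000/v3', 'username': 'admin',
#            'password': 'secret', 'project_name': 'demo'}
#   session = get_keystone_session(creds)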


@@ -1,106 +0,0 @@
# Copyright (c) 2016 NTT All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
class DoctorDriver(datasource_driver.PushedDataSourceDriver):
"""A DataSource Driver for OPNFV Doctor project.
This driver has a table for Doctor project's Inspector. Please check
https://wiki.opnfv.org/display/doctor/Doctor+Home for the details
about the OPNFV Doctor project.
To update the table, call the update row API.
PUT /v1/data-sources/<the driver id>/tables/<table id>/rows
For updating the 'events' table, the request body should follow the
style below. The request will replace all rows in the table with the
body, which means updating the table with [] will clear the table.
One {} object in the list represents one row of the table.
request body::
[
{
"time": "2016-02-22T11:48:55Z",
"type": "compute.host.down",
"details": {
"hostname": "compute1",
"status": "down",
"monitor": "zabbix1",
"monitor_event_id": "111"
}
},
.....
]
"""
value_trans = {'translation-type': 'VALUE'}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['id']
except Exception:
return str(x)
def flatten_events(row_events):
flatten = []
for event in row_events:
details = event.pop('details')
for k, v in details.items():
event[k] = v
flatten.append(event)
return flatten
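# Illustrative: flatten_events merges each event's 'details' dict into the
# event itself, so {'time': t, 'type': ty,
# 'details': {'hostname': 'compute1', 'status': 'down'}} becomes
# {'time': t, 'type': ty, 'hostname': 'compute1', 'status': 'down'}.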
events_translator = {
'translation-type': 'HDICT',
'table-name': 'events',
'selector-type': 'DICT_SELECTOR',
'objects-extract-fn': flatten_events,
'field-translators':
({'fieldname': 'time', 'translator': value_trans},
{'fieldname': 'type', 'translator': value_trans},
{'fieldname': 'hostname', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'monitor', 'translator': value_trans},
{'fieldname': 'monitor_event_id', 'translator': value_trans},)
}
TRANSLATORS = [events_translator]
def __init__(self, name='', args=None):
super(DoctorDriver, self).__init__(name, args=args)
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'doctor'
result['description'] = ('Datasource driver that allows external '
'systems to push data in accordance with '
'OPNFV Doctor Inspector southbound interface '
'specification.')
result['config'] = {'persist_data': constants.OPTIONAL}
return result


@@ -1,142 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import glanceclient.v2.client as glclient
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class GlanceV2Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
IMAGES = "images"
TAGS = "tags"
value_trans = {'translation-type': 'VALUE'}
images_translator = {
'translation-type': 'HDICT',
'table-name': IMAGES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'UUID of image',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'The image status',
'translator': value_trans},
{'fieldname': 'name',
'desc': 'Image Name', 'translator': value_trans},
{'fieldname': 'container_format',
'desc': 'The container format of image',
'translator': value_trans},
{'fieldname': 'created_at',
'desc': 'The date and time when the resource was created',
'translator': value_trans},
{'fieldname': 'updated_at',
'desc': 'The date and time when the resource was updated.',
'translator': value_trans},
{'fieldname': 'disk_format',
'desc': 'The disk format of the image.',
'translator': value_trans},
{'fieldname': 'owner',
'desc': 'The ID of the owner or tenant of the image',
'translator': value_trans},
{'fieldname': 'protected',
'desc': 'Indicates whether the image can be deleted.',
'translator': value_trans},
{'fieldname': 'min_ram',
'desc': 'minimum amount of RAM in MB required to boot the image',
'translator': value_trans},
{'fieldname': 'min_disk',
'desc': 'minimum disk size in GB required to boot the image',
'translator': value_trans},
{'fieldname': 'checksum', 'desc': 'Hash of the image data used',
'translator': value_trans},
{'fieldname': 'size',
'desc': 'The size of the image data, in bytes.',
'translator': value_trans},
{'fieldname': 'file',
'desc': 'URL for the virtual machine image file',
'translator': value_trans},
{'fieldname': 'kernel_id', 'desc': 'kernel id',
'translator': value_trans},
{'fieldname': 'ramdisk_id', 'desc': 'ramdisk id',
'translator': value_trans},
{'fieldname': 'schema',
'desc': 'URL for schema of the virtual machine image',
'translator': value_trans},
{'fieldname': 'visibility', 'desc': 'The image visibility',
'translator': value_trans},
{'fieldname': 'tags',
'translator': {'translation-type': 'LIST',
'table-name': TAGS,
'val-col': 'tag',
'val-col-desc': 'List of image tags',
'parent-key': 'id',
'parent-col-name': 'image_id',
'parent-key-desc': 'UUID of image',
'translator': value_trans}})}
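# Illustrative: the LIST sub-translator above fans image tags out into the
# 'tags' table, so an image with tags ['prod', 'linux'] yields the rows
# (image_id, 'prod') and (image_id, 'linux').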
TRANSLATORS = [images_translator]
def __init__(self, name='', args=None):
super(GlanceV2Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(self.creds)
self.glance = glclient.Client(session=session)
self.add_executable_client_methods(self.glance, 'glanceclient.v2.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'glancev2'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack Images aka Glance.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
images_method = lambda: self._translate_images(
{'images': self.glance.images.list()})
self.add_update_method(images_method, self.images_translator)
@ds_utils.update_state_on_changed(IMAGES)
def _translate_images(self, obj):
"""Translate the images represented by OBJ into tables."""
LOG.debug("IMAGES: %s", str(dict(obj)))
row_data = GlanceV2Driver.convert_objs(
obj['images'], GlanceV2Driver.images_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.glance, action, action_args)


@@ -1,245 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import heatclient.v1.client as heatclient
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class HeatV1Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
STACKS = "stacks"
STACKS_LINKS = "stacks_links"
DEPLOYMENTS = "deployments"
DEPLOYMENT_OUTPUT_VALUES = "deployment_output_values"
RESOURCES = "resources"
RESOURCES_LINKS = "resources_links"
EVENTS = "events"
EVENTS_LINKS = "events_links"
# TODO(thinrichs): add snapshots
value_trans = {'translation-type': 'VALUE'}
stacks_links_translator = {
'translation-type': 'HDICT',
'table-name': STACKS_LINKS,
'parent-key': 'id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'href', 'translator': value_trans},
{'fieldname': 'rel', 'translator': value_trans})}
stacks_translator = {
'translation-type': 'HDICT',
'table-name': STACKS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'stack_name', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'creation_time', 'translator': value_trans},
{'fieldname': 'updated_time', 'translator': value_trans},
{'fieldname': 'stack_status', 'translator': value_trans},
{'fieldname': 'stack_status_reason', 'translator': value_trans},
{'fieldname': 'stack_owner', 'translator': value_trans},
{'fieldname': 'parent', 'translator': value_trans},
{'fieldname': 'links', 'translator': stacks_links_translator})}
deployments_output_values_translator = {
'translation-type': 'HDICT',
'table-name': DEPLOYMENT_OUTPUT_VALUES,
'parent-key': 'id',
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'deploy_stdout', 'translator': value_trans},
{'fieldname': 'deploy_stderr', 'translator': value_trans},
{'fieldname': 'deploy_status_code', 'translator': value_trans},
{'fieldname': 'result', 'translator': value_trans})}
software_deployment_translator = {
'translation-type': 'HDICT',
'table-name': DEPLOYMENTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'server_id', 'translator': value_trans},
{'fieldname': 'config_id', 'translator': value_trans},
{'fieldname': 'action', 'translator': value_trans},
{'fieldname': 'status_reason', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'output_values',
'translator': deployments_output_values_translator})}
resources_links_translator = {
'translation-type': 'HDICT',
'table-name': RESOURCES_LINKS,
'parent-key': 'physical_resource_id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'href', 'translator': value_trans},
{'fieldname': 'rel', 'translator': value_trans})}
resources_translator = {
'translation-type': 'HDICT',
'table-name': RESOURCES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'physical_resource_id', 'translator': value_trans},
{'fieldname': 'logical_resource_id', 'translator': value_trans},
{'fieldname': 'stack_id', 'translator': value_trans},
{'fieldname': 'resource_name', 'translator': value_trans},
{'fieldname': 'resource_type', 'translator': value_trans},
{'fieldname': 'creation_time', 'translator': value_trans},
{'fieldname': 'updated_time', 'translator': value_trans},
{'fieldname': 'resource_status', 'translator': value_trans},
{'fieldname': 'resource_status_reason', 'translator': value_trans},
{'fieldname': 'links', 'translator': resources_links_translator})}
events_links_translator = {
'translation-type': 'HDICT',
'table-name': EVENTS_LINKS,
'parent-key': 'id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'href', 'translator': value_trans},
{'fieldname': 'rel', 'translator': value_trans})}
events_translator = {
'translation-type': 'HDICT',
'table-name': EVENTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'physical_resource_id', 'translator': value_trans},
{'fieldname': 'logical_resource_id', 'translator': value_trans},
{'fieldname': 'stack_id', 'translator': value_trans},
{'fieldname': 'resource_name', 'translator': value_trans},
{'fieldname': 'event_time', 'translator': value_trans},
{'fieldname': 'resource_status', 'translator': value_trans},
{'fieldname': 'resource_status_reason', 'translator': value_trans},
{'fieldname': 'links', 'translator': events_links_translator})}
TRANSLATORS = [stacks_translator, software_deployment_translator,
resources_translator, events_translator]
def __init__(self, name='', args=None):
super(HeatV1Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(self.creds)
endpoint = session.get_endpoint(service_type='orchestration',
interface='publicURL')
self.heat = heatclient.Client(session=session, endpoint=endpoint)
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'heat'
result['description'] = ('Datasource driver that interfaces with'
' OpenStack orchestration aka heat.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
stacks_method = lambda: self._translate_stacks(
{'stacks': self.heat.stacks.list()})
self.add_update_method(stacks_method, self.stacks_translator)
resources_method = lambda: self._translate_resources(
self._get_resources(self.heat.stacks.list()))
self.add_update_method(resources_method, self.resources_translator)
events_method = lambda: self._translate_events(
self._get_events(self.heat.stacks.list()))
self.add_update_method(events_method, self.events_translator)
deployments_method = lambda: self._translate_software_deployment(
{'deployments': self.heat.software_deployments.list()})
self.add_update_method(deployments_method,
self.software_deployment_translator)
def _get_resources(self, stacks):
rval = []
for stack in stacks:
resources = self.heat.resources.list(stack.id)
for resource in resources:
resource = resource.to_dict()
resource['stack_id'] = stack.id
rval.append(resource)
return {'resources': rval}
def _get_events(self, stacks):
rval = []
for stack in stacks:
events = self.heat.events.list(stack.id)
for event in events:
event = event.to_dict()
event['stack_id'] = stack.id
rval.append(event)
return {'events': rval}
@ds_utils.update_state_on_changed(STACKS)
def _translate_stacks(self, obj):
"""Translate the stacks represented by OBJ into tables."""
LOG.debug("STACKS: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['stacks'], HeatV1Driver.stacks_translator)
return row_data
@ds_utils.update_state_on_changed(DEPLOYMENTS)
def _translate_software_deployment(self, obj):
"""Translate the stacks represented by OBJ into tables."""
LOG.debug("Software Deployments: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['deployments'], HeatV1Driver.software_deployment_translator)
return row_data
@ds_utils.update_state_on_changed(RESOURCES)
def _translate_resources(self, obj):
"""Translate the resources represented by OBJ into tables."""
LOG.debug("Resources: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['resources'], HeatV1Driver.resources_translator)
return row_data
@ds_utils.update_state_on_changed(EVENTS)
def _translate_events(self, obj):
"""Translate the events represented by OBJ into tables."""
LOG.debug("Events: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['events'], HeatV1Driver.events_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.heat, action, action_args)


@@ -1,221 +0,0 @@
# Copyright (c) 2015 Intel Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from ironicclient import client
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class IronicDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
CHASSISES = "chassises"
NODES = "nodes"
NODE_PROPERTIES = "node_properties"
PORTS = "ports"
DRIVERS = "drivers"
ACTIVE_HOSTS = "active_hosts"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['id']
except KeyError:
return str(x)
def safe_port_extra(x):
try:
return x['vif_port_id']
except KeyError:
return ""
chassises_translator = {
'translation-type': 'HDICT',
'table-name': CHASSISES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'uuid', 'col': 'id', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
nodes_translator = {
'translation-type': 'HDICT',
'table-name': NODES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'uuid', 'col': 'id',
'desc': '', 'translator': value_trans},
{'fieldname': 'chassis_uuid', 'desc': '',
'col': 'owner_chassis', 'translator': value_trans},
{'fieldname': 'power_state', 'desc': '',
'translator': value_trans},
{'fieldname': 'maintenance', 'desc': '',
'translator': value_trans},
{'fieldname': 'properties', 'desc': '',
'translator':
{'translation-type': 'HDICT',
'table-name': NODE_PROPERTIES,
'parent-key': 'id',
'parent-col-name': 'properties',
'selector-type': 'DICT_SELECTOR',
'in-list': False,
'field-translators':
({'fieldname': 'memory_mb',
'translator': value_trans},
{'fieldname': 'cpu_arch',
'translator': value_trans},
{'fieldname': 'local_gb',
'translator': value_trans},
{'fieldname': 'cpus',
'translator': value_trans})}},
{'fieldname': 'driver', 'translator': value_trans},
{'fieldname': 'instance_uuid', 'col': 'running_instance',
'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'provision_updated_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
ports_translator = {
'translation-type': 'HDICT',
'table-name': PORTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'uuid', 'col': 'id', 'translator': value_trans},
{'fieldname': 'node_uuid', 'col': 'owner_node',
'translator': value_trans},
{'fieldname': 'address', 'col': 'mac_address',
'translator': value_trans},
{'fieldname': 'extra', 'col': 'vif_port_id', 'translator':
{'translation-type': 'VALUE',
'extract-fn': safe_port_extra}},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
drivers_translator = {
'translation-type': 'HDICT',
'table-name': DRIVERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'hosts', 'translator':
{'translation-type': 'LIST',
'table-name': ACTIVE_HOSTS,
'parent-key': 'name',
'parent-col-name': 'name',
'val-col': 'hosts',
'translator':
{'translation-type': 'VALUE'}}})}
TRANSLATORS = [chassises_translator, nodes_translator, ports_translator,
drivers_translator]
def __init__(self, name='', args=None):
super(IronicDriver, self).__init__(name, args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = self.get_ironic_credentials(args)
session = ds_utils.get_keystone_session(self.creds)
self.ironic_client = client.get_client(
api_version=self.creds.get('api_version', '1'), session=session)
self.add_executable_client_methods(self.ironic_client,
'ironicclient.v1.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'ironic'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack bare metal aka ironic.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def get_ironic_credentials(self, creds):
d = {}
d['api_version'] = '1'
d['insecure'] = False
# save a copy to renew auth token
d['username'] = creds['username']
d['password'] = creds['password']
d['auth_url'] = creds['auth_url']
d['tenant_name'] = creds['tenant_name']
# ironicclient.get_client() uses different names
d['os_username'] = creds['username']
d['os_password'] = creds['password']
d['os_auth_url'] = creds['auth_url']
d['os_tenant_name'] = creds['tenant_name']
return d
def initialize_update_methods(self):
chassises_method = lambda: self._translate_chassises(
self.ironic_client.chassis.list(detail=True, limit=0))
self.add_update_method(chassises_method, self.chassises_translator)
nodes_method = lambda: self._translate_nodes(
self.ironic_client.node.list(detail=True, limit=0))
self.add_update_method(nodes_method, self.nodes_translator)
ports_method = lambda: self._translate_ports(
self.ironic_client.port.list(detail=True, limit=0))
self.add_update_method(ports_method, self.ports_translator)
drivers_method = lambda: self._translate_drivers(
self.ironic_client.driver.list())
self.add_update_method(drivers_method, self.drivers_translator)
@ds_utils.update_state_on_changed(CHASSISES)
def _translate_chassises(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.chassises_translator)
return row_data
@ds_utils.update_state_on_changed(NODES)
def _translate_nodes(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.nodes_translator)
return row_data
@ds_utils.update_state_on_changed(PORTS)
def _translate_ports(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.ports_translator)
return row_data
@ds_utils.update_state_on_changed(DRIVERS)
def _translate_drivers(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.drivers_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.ironic_client, action, action_args)
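
The execute() dispatch above is shared by most drivers in this diff: an action naming a method on the driver itself runs locally, and anything else is forwarded to the backing client. A minimal usage sketch, assuming the {'positional': [...], 'named': {...}} argument convention of ExecutionDriver; the credentials and node UUID are illustrative, not from this diff:

# Hypothetical invocation (all values illustrative):
cloud_credentials = {'username': 'admin', 'password': 'secret',
                     'auth_url': 'http://127.0.0.1/identity',
                     'tenant_name': 'admin'}
driver = IronicDriver(name='ironic', args=cloud_credentials)
# IronicDriver defines no method named 'node.set_power_state', so the
# call is forwarded to self.ironic_client.node.set_power_state(...):
driver.execute('node.set_power_state',
               {'positional': ['node-uuid', 'off'], 'named': {}})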


@ -1,178 +0,0 @@
# Copyright (c) 2019 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import json
import eventlet
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
import psycopg2
import requests
from congress.datasources import datasource_utils
from congress.datasources.json_ingester import sql
from congress import exception
LOG = logging.getLogger(__name__)
class ExecApiManager(object):
def __init__(self, configs):
super(ExecApiManager, self).__init__()
self._exec_api_sessions = {}
self._exec_api_endpoints = {}
# state tracking the most recent state consisting of the union
# of all the rows from all the _exec_api tables
# used to determine which rows are new
self._last_exec_api_state = set([])
for config in configs:
# FIXME(json_ingester): validate config
if config.get('allow_exec_api', False) is True:
auth_config = config.get('authentication')
if auth_config is None:
session = requests.Session()
session.headers.update(
config.get('api_default_headers', {}))
else:
if auth_config['type'] == 'keystone':
session = datasource_utils.get_keystone_session(
config['authentication']['config'],
headers=config.get('api_default_headers', {}))
else:
LOG.error('authentication type %s not supported.',
auth_config.get('type'))
raise exception.BadConfig(
'authentication type {} not '
'supported.'.format(auth_config['type']))
name = config['name']
self._exec_api_endpoints[name] = config['api_endpoint']
self._exec_api_sessions[name] = session
@lockutils.synchronized('congress_json_ingester_exec_api')
def evaluate_and_execute_actions(self):
# FIXME(json_ingester): retry
new_exec_api_state = self._read_all_execute_tables()
new_exec_api_rows = new_exec_api_state - self._last_exec_api_state
LOG.debug('New exec_api rows %s', new_exec_api_rows)
self._execute_exec_api_rows(new_exec_api_rows)
self._last_exec_api_state = new_exec_api_state
def _execute_exec_api_rows(self, rows):
def exec_api(session, kwargs):
LOG.info("Making API request %s.", kwargs)
try:
session.request(**kwargs)
except Exception:
LOG.exception('Exception in making API request %s.', kwargs)
for row in rows:
(endpoint, path, method, body, parameters, headers) = row
if endpoint in self._exec_api_endpoints:
kwargs = {
'endpoint_override': self._exec_api_endpoints[endpoint],
'url': path,
'method': method.upper(),
'connect_retries': 10,
'status_code_retries': 10}
body = json.loads(body)
if body is not None:
kwargs['json'] = body
parameters = json.loads(parameters)
if parameters is not None:
kwargs['params'] = parameters
headers = json.loads(headers)
if headers is not None:
kwargs['headers'] = headers
if cfg.CONF.enable_execute_action:
eventlet.spawn_n(
exec_api, self._exec_api_sessions[endpoint], kwargs)
else:
LOG.info("Simulating API request %s", kwargs)
else:
LOG.warning(
'No configured API endpoint with name %s. '
'Skipping the API request: '
'(endpoint, path, method, body, parameters, headers) '
'= %s.', endpoint, row)
eventlet.sleep(0) # defer to greenthreads running api requests
@staticmethod
def _read_all_execute_tables():
def json_rows_to_str_rows(json_rows):
# FIXME(json_ingester): validate; log and drop invalid rows
return [(endpoint, path, method, json.dumps(body, sort_keys=True),
json.dumps(parameters, sort_keys=True),
json.dumps(headers, sort_keys=True)) for
(endpoint, path, method, body, parameters, headers)
in json_rows]
FIND_ALL_EXEC_VIEWS = """
SELECT table_schema, table_name FROM information_schema.tables
WHERE table_schema NOT LIKE 'pg\_%'
AND table_schema <> 'information_schema'
AND table_name LIKE '\_exec_api';"""
READ_EXEC_VIEW = """
SELECT endpoint, path, method, body, parameters, headers
FROM {}.{};"""
conn = None
try:
conn = psycopg2.connect(cfg.CONF.json_ingester.db_connection)
# repeatable read to make sure all the _exec_api rows from all
# schemas are obtained at the same snapshot
conn.set_session(
isolation_level=psycopg2.extensions.
ISOLATION_LEVEL_REPEATABLE_READ,
readonly=True, autocommit=False)
cur = conn.cursor()
# find all _exec_api tables
cur.execute(sql.SQL(FIND_ALL_EXEC_VIEWS))
all_exec_api_tables = cur.fetchall()
# read each _exec_api_table
all_exec_api_rows = set([])
for (table_schema, table_name) in all_exec_api_tables:
try:
cur.execute(sql.SQL(READ_EXEC_VIEW).format(
sql.Identifier(table_schema),
sql.Identifier(table_name)))
all_rows = cur.fetchall()
all_exec_api_rows.update(
json_rows_to_str_rows(all_rows))
except psycopg2.ProgrammingError:
LOG.warning('The "%s" table in the "%s" schema does not '
'have the right columns for API execution. '
'Its content is ignored for the purpose of '
'API execution. Please check and correct the '
'view definition.',
table_name, table_schema)
conn.commit()
cur.close()
return all_exec_api_rows
except (Exception, psycopg2.Error):
LOG.exception("Error reading from DB")
raise
finally:
if conn is not None:
conn.close()
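
Each row yielded by an _exec_api view becomes one API call in _execute_exec_api_rows() above, with columns (endpoint, path, method, body, parameters, headers); the endpoint value must match the 'name' of a config entry with allow_exec_api set. A hedged sketch of such a view, with every schema, table, and endpoint name illustrative, and jsonb payload columns as assumed by _read_all_execute_tables():

# Hypothetical view definition (all names illustrative); the underlying
# table has the (d jsonb, _key text) shape created by the JSON ingester.
EXAMPLE_EXEC_API_VIEW = """
CREATE VIEW my_schema._exec_api AS
SELECT 'my_endpoint'                  AS endpoint,   -- a configured endpoint name
       '/v1/servers/' || (d ->> 'id') AS path,
       'delete'                       AS method,
       NULL::jsonb                    AS body,
       NULL::jsonb                    AS parameters,
       NULL::jsonb                    AS headers
FROM my_schema.servers
WHERE d ->> 'status' = 'ERROR';
"""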


@ -1,465 +0,0 @@
# Copyright (c) 2018, 2019 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import datetime
import json
import sys
from jsonpath_rw import parser
from oslo_config import cfg
from oslo_log import log as logging
import psycopg2
import requests
from congress.api import base as api_base
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils
from congress.datasources.json_ingester import sql
from congress.dse2 import data_service
from congress import exception
LOG = logging.getLogger(__name__)
class JsonIngester(datasource_driver.PollingDataSourceDriver):
def __init__(self, name, config, exec_manager):
def validate_config(config):
# FIXME: use json schema to validate config
config_tables = config['tables']
poll_tables = [table for table in config_tables
if 'poll' in config_tables[table]]
if len(poll_tables) > 0:
# FIXME: when polling table exists, require configs:
# api_endpoint, authentication
pass
for table_name in config_tables:
if ('poll' in config_tables[table_name]
and 'webhook' in config_tables[table_name]):
raise exception.BadConfig(
'Table ({}) cannot be configured for '
'both poll and webhook.'.format(table_name))
# use prefix to avoid service_id clash with regular data sources
super(JsonIngester, self).__init__(
api_base.JSON_DS_SERVICE_PREFIX + name)
self.exec_manager = exec_manager # ref to global mgr for api exec
self.type = 'json_ingester'
self.name = name # set name back to one without prefix for use here
if 'tables' not in config:
# config w/o table used to define exec_api endpoint
# in this case, no need to create datasource service
return
validate_config(config)
self._config = config
self._create_schema_and_tables()
self.poll_time = self._config.get('poll_interval', 60)
self._setup_table_key_sets()
self._api_endpoint = self._config.get('api_endpoint')
self._initialize_session()
self._initialize_update_methods()
if len(self.update_methods) > 0:
self._init_end_start_poll()
else:
self.initialized = True
# For DSE2. Must go after __init__
if hasattr(self, 'add_rpc_endpoint'):
self.add_rpc_endpoint(JsonIngesterEndpoints(self))
def _setup_table_key_sets(self):
# because postgres cannot directly use the jsonb column d as key,
# the _key column is added as key in order to support performant
# delete of specific rows in delta update to the db table
# for each table, maintain in memory an association between the json
# data and a unique key. The association is maintained using the
# KeyMap class
# Note: The key may change from session to session, which does not
# cause a problem in this case because the db tables
# (along with old keys) are cleared each time congress starts
# { table_name -> KeyMap object}
self.key_sets = {}
for table_name in self._config['tables']:
self.key_sets[table_name] = KeyMap()
def _clear_table_state(self, table_name):
del self.state[table_name]
self.key_sets[table_name].clear()
def publish(self, table, data, use_snapshot=False):
LOG.debug('JSON Ingester "%s" publishing table "%s"', self.name, table)
LOG.trace('publish(self=%s, table=%s, data=%s, use_snapshot=%s)',
self, table, data, use_snapshot)
return self._update_table(
table, new_data=data,
old_data=self.prior_state.get(table, set([])),
use_snapshot=use_snapshot)
def _create_schema_and_tables(self):
create_schema_statement = """CREATE SCHEMA IF NOT EXISTS {};"""
create_table_statement = """
CREATE TABLE IF NOT EXISTS {}.{}
(d jsonb, _key text, primary key (_key));"""
# Note: because postgres cannot directly use the jsonb column d as key,
# the _key column is added as key in order to support performant
# delete of specific rows in delta update to the db table
create_index_statement = """
CREATE INDEX IF NOT EXISTS {index} on {schema}.{table}
USING GIN (d);"""
drop_index_statement = """
DROP INDEX IF EXISTS {schema}.{index};"""
conn = None
try:
conn = psycopg2.connect(cfg.CONF.json_ingester.db_connection)
conn.set_session(
isolation_level=psycopg2.extensions.
ISOLATION_LEVEL_READ_COMMITTED,
readonly=False, autocommit=False)
cur = conn.cursor()
# create schema
cur.execute(
sql.SQL(create_schema_statement).format(
sql.Identifier(self.name)))
for table_name in self._config['tables']:
# create table
cur.execute(sql.SQL(create_table_statement).format(
sql.Identifier(self.name), sql.Identifier(table_name)))
if self._config['tables'][table_name].get('gin_index', True):
cur.execute(sql.SQL(create_index_statement).format(
schema=sql.Identifier(self.name),
table=sql.Identifier(table_name),
index=sql.Identifier(
'__{}_d_gin_idx'.format(table_name))))
else:
cur.execute(sql.SQL(drop_index_statement).format(
schema=sql.Identifier(self.name),
index=sql.Identifier(
'__{}_d_gin_idx'.format(table_name))))
conn.commit()
cur.close()
except (Exception, psycopg2.Error):
if 'table_name' in locals():
LOG.exception("Error creating table %s in schema %s",
table_name, self.name)
else:
LOG.exception("Error creating schema %s", self.name)
raise
finally:
if conn is not None:
conn.close()
def _update_table(
self, table_name, new_data, old_data, use_snapshot):
# return False immediately if no change to update
if new_data == old_data:
return False
insert_statement = """INSERT INTO {}.{}
VALUES(%s, %s);"""
delete_all_statement = """DELETE FROM {}.{};"""
delete_tuple_statement = """
DELETE FROM {}.{} WHERE _key = %s;"""
conn = None
try:
conn = psycopg2.connect(cfg.CONF.json_ingester.db_connection)
conn.set_session(
isolation_level=psycopg2.extensions.
ISOLATION_LEVEL_READ_COMMITTED,
readonly=False, autocommit=False)
cur = conn.cursor()
if use_snapshot:
to_insert = new_data
# delete all existing data from table
cur.execute(sql.SQL(delete_all_statement).format(
sql.Identifier(self.name), sql.Identifier(table_name)))
self.key_sets[table_name].clear()
else:
to_insert = new_data - old_data
to_delete = old_data - new_data
# delete the appropriate rows from table
for d in to_delete:
cur.execute(sql.SQL(delete_tuple_statement).format(
sql.Identifier(self.name),
sql.Identifier(table_name)),
(str(self.key_sets[table_name].remove_and_get_key(d)),)
)
# insert new data into table
for d in to_insert:
cur.execute(sql.SQL(insert_statement).format(
sql.Identifier(self.name),
sql.Identifier(table_name)),
(d, str(self.key_sets[table_name].add_and_get_key(d))))
conn.commit()
cur.close()
return True # return True indicating change made
except (Exception, psycopg2.Error):
LOG.exception("Error writing to DB")
# makes the next update use snapshot
self._clear_table_state(table_name)
return False # return False indicating no change made (rollback)
finally:
if conn is not None:
conn.close()
def add_update_method(self, method, table_name):
if table_name in self.update_methods:
raise exception.Conflict('A method has already registered for '
'the table %s.' %
table_name)
self.update_methods[table_name] = method
def _initialize_session(self):
auth_config = self._config.get('authentication')
if auth_config is None:
self._session = requests.Session()
self._session.headers.update(
self._config.get('api_default_headers', {}))
else:
if auth_config['type'] == 'keystone':
self._session = datasource_utils.get_keystone_session(
self._config['authentication']['config'],
headers=self._config.get('api_default_headers', {}))
else:
LOG.error('authentication type %s not supported.',
auth_config.get('type'))
raise exception.BadConfig(
'authentication type {} not supported.'.format(
auth_config['type']))
def _initialize_update_methods(self):
for table_name in self._config['tables']:
if 'poll' in self._config['tables'][table_name]:
table_info = self._config['tables'][table_name]['poll']
# Note: using default parameters to get early-binding of
# variables in closure
def update_method(
table_name=table_name, table_info=table_info):
try:
full_path = self._api_endpoint.rstrip(
'/') + '/' + table_info['api_path'].lstrip('/')
result = self._session.get(full_path).json()
# FIXME: generalize to other verbs?
jsonpath_expr = parser.parse(table_info['jsonpath'])
ingest_data = [match.value for match in
jsonpath_expr.find(result)]
self.state[table_name] = set(
[json.dumps(item, sort_keys=True)
for item in ingest_data])
except BaseException:
LOG.exception('Exception occurred while updating '
'table %s.%s from: URL %s',
self.name, table_name,
full_path)
self.add_update_method(update_method, table_name)
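
A hedged sketch of the per-source configuration this polling path assumes; the keys (name, api_endpoint, poll_interval, tables, poll, api_path, jsonpath) come from the code above, while every value is illustrative:

# Hypothetical JSON-ingester config (illustrative values):
EXAMPLE_POLL_CONFIG = {
    'name': 'my_source',
    'api_endpoint': 'https://svc.example.com/api',
    'poll_interval': 60,
    'tables': {
        'servers': {
            'poll': {
                'api_path': '/v1/servers',   # joined onto api_endpoint
                'jsonpath': '$.servers[*]',  # one match per ingested row
            },
        },
    },
}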
def update_from_datasource(self):
for table in self.update_methods:
LOG.debug('update table %s.' % table)
self.update_methods[table]()
# Note(thread-safety): blocking function
def poll(self):
"""Periodically called to update new info.
Function called periodically to grab new information, compute
deltas, and publish those deltas.
"""
LOG.info("%s:: polling", self.name)
self.prior_state = dict(self.state) # copying self.state
self.last_error = None # non-None only when last poll errored
try:
self.update_from_datasource() # sets self.state
# publish those tables with polling update methods
overall_change_made = False
for tablename in self.update_methods:
use_snapshot = tablename not in self.prior_state
# Note(thread-safety): blocking call
this_table_change_made = self.publish(
tablename, self.state.get(tablename, set([])),
use_snapshot=use_snapshot)
overall_change_made = (overall_change_made
or this_table_change_made)
if overall_change_made:
self.exec_manager.evaluate_and_execute_actions()
except Exception as e:
self.last_error = e
LOG.exception("Datasource driver raised exception")
self.last_updated_time = datetime.datetime.now()
self.number_of_updates += 1
LOG.info("%s:: finished polling", self.name)
def json_ingester_webhook_handler(self, table_name, body):
def get_exactly_one_jsonpath_match(
jsonpath, jsondata, custom_error_msg):
jsonpath_expr = parser.parse(jsonpath)
matches = jsonpath_expr.find(jsondata)
if len(matches) != 1:
raise exception.BadRequest(
custom_error_msg.format(jsonpath, jsondata))
return matches[0].value
try:
webhook_config = self._config['tables'][table_name]['webhook']
except KeyError:
raise exception.NotFound(
'In JSON Ingester: "{}", the table "{}" either does not exist '
'or is not configured for webhook.'.format(
self.name, table_name))
json_record = get_exactly_one_jsonpath_match(
webhook_config['record_jsonpath'], body,
'In identifying JSON record from webhook body, the configured '
'jsonpath expression "{}" fails to obtain exactly one match on '
'webhook body "{}".')
json_id = get_exactly_one_jsonpath_match(
webhook_config['id_jsonpath'], json_record,
'In identifying ID from JSON record, the configured jsonpath '
'expression "{}" fails to obtain exactly one match on JSON record'
' "{}".')
self._webhook_update_table(table_name, key=json_id, data=json_record)
self.exec_manager.evaluate_and_execute_actions()
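
A matching sketch for the webhook path: record_jsonpath and id_jsonpath are the keys read by the handler above, and each must match exactly once against the webhook body and the extracted record respectively. All values here are illustrative:

# Hypothetical webhook table config (illustrative values):
EXAMPLE_WEBHOOK_CONFIG = {
    'tables': {
        'alarms': {
            'webhook': {
                'record_jsonpath': '$.payload',  # selects the JSON record
                'id_jsonpath': '$.id',           # selects the key in the record
            },
        },
    },
}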
def _webhook_update_table(self, table_name, key, data):
key_string = json.dumps(key, sort_keys=True)
PGSQL_MAX_INDEXABLE_SIZE = 2712
if len(key_string) > PGSQL_MAX_INDEXABLE_SIZE:
raise exception.BadRequest(
'The supplied key ({}) exceeds the max indexable size ({}) in '
'PostgreSQL.'.format(key_string, PGSQL_MAX_INDEXABLE_SIZE))
insert_statement = """INSERT INTO {}.{}
VALUES(%s, %s);"""
delete_tuple_statement = """
DELETE FROM {}.{} WHERE _key = %s;"""
conn = None
try:
conn = psycopg2.connect(cfg.CONF.json_ingester.db_connection)
conn.set_session(
isolation_level=psycopg2.extensions.
ISOLATION_LEVEL_READ_COMMITTED,
readonly=False, autocommit=False)
cur = conn.cursor()
# delete the appropriate row from table
cur.execute(sql.SQL(delete_tuple_statement).format(
sql.Identifier(self.name),
sql.Identifier(table_name)),
(key_string,))
# insert new row into table
cur.execute(sql.SQL(insert_statement).format(
sql.Identifier(self.name),
sql.Identifier(table_name)),
(json.dumps(data), key_string))
conn.commit()
cur.close()
except (Exception, psycopg2.Error):
LOG.exception("Error writing to DB")
finally:
if conn is not None:
conn.close()
def validate_lazy_tables(self):
'''override non-applicable parent method as no-op'''
pass
def initialize_translators(self):
'''override non-applicable parent method as no-op'''
pass
def get_snapshot(self, table_name):
raise NotImplementedError(
'This method should not be called in PollingJsonIngester.')
def get_row_data(self, table_id, *args, **kwargs):
raise NotImplementedError(
'This method should not be called in PollingJsonIngester.')
def register_translator(self, translator):
raise NotImplementedError(
'This method should not be called in PollingJsonIngester.')
def get_translator(self, translator_name):
raise NotImplementedError(
'This method should not be called in PollingJsonIngester.')
def get_translators(self):
raise NotImplementedError(
'This method should not be called in PollingJsonIngester.')
class JsonIngesterEndpoints(data_service.DataServiceEndPoints):
def __init__(self, service):
super(JsonIngesterEndpoints, self).__init__(service)
# Note (thread-safety): blocking function
def json_ingester_webhook_handler(self, context, table_name, body):
# Note (thread-safety): blocking call
return self.service.json_ingester_webhook_handler(table_name, body)
class KeyMap(object):
'''Map associating a unique integer key with each hashable object'''
_PY_MIN_INT = -sys.maxsize - 1 # minimum primitive integer supported
_PGSQL_MIN_BIGINT = -2**63 # minimum BIGINT supported in postgreSQL
# reference: https://www.postgresql.org/docs/9.4/datatype-numeric.html
def __init__(self):
self._key_mapping = {}
self._reclaimed_free_keys = set([])
self._next_incremental_key = max(
self._PY_MIN_INT, self._PGSQL_MIN_BIGINT) # start from least
def add_and_get_key(self, datum):
'''Add a datum and return associated key'''
if datum in self._key_mapping:
return self._key_mapping[datum]
else:
try:
next_key = self._reclaimed_free_keys.pop()
except KeyError:
next_key = self._next_incremental_key
self._next_incremental_key += 1
self._key_mapping[datum] = next_key
return next_key
def remove_and_get_key(self, datum):
'''Remove a datum and return associated key'''
key = self._key_mapping.pop(datum)
self._reclaimed_free_keys.add(key)
return key
def clear(self):
'''Remove all data and keys'''
self.__init__()
def __len__(self):
return len(self._key_mapping)
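
KeyMap backs the _key column described above: it hands out the least available integer (within PostgreSQL BIGINT range) per distinct datum and recycles keys on removal. A minimal usage sketch with illustrative values:

# Illustrative only; any hashable datum works.
km = KeyMap()
k1 = km.add_and_get_key('{"id": 1}')             # least free key assigned
k2 = km.add_and_get_key('{"id": 2}')             # next incremental key
assert km.add_and_get_key('{"id": 1}') == k1     # stable key for a known datum
freed = km.remove_and_get_key('{"id": 1}')       # key returns to the free pool
assert km.add_and_get_key('{"id": 3}') == freed  # reclaimed key is reused
assert len(km) == 2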


@ -1,36 +0,0 @@
# Copyright (c) 2019 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
This module provides a minimal implementation of the psycopg2.sql features
used by Congress. The purpose is to avoid requiring psycopg2>=2.7, which is
not available in CentOS 7.
'''
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import re
def SQL(input_statement):
return input_statement
def Identifier(identifier):
'''Validate and return quoted SQL identifier.'''
if re.search('^[a-zA-Z_][a-zA-Z0-9_]*$', identifier):
return '"' + identifier + '"'
else:
raise Exception('Unacceptable SQL identifier: {}'.format(identifier))
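
The shim mirrors how psycopg2.sql composes statements, so call sites such as sql.SQL(...).format(sql.Identifier(...)) elsewhere in this diff work unchanged. A small sketch with illustrative names:

# Illustrative composition:
stmt = SQL("SELECT d FROM {}.{};").format(
    Identifier("my_schema"), Identifier("my_table"))
# stmt == 'SELECT d FROM "my_schema"."my_table";'
# Identifier() raises on anything that is not a plain identifier,
# e.g. Identifier('bad name; DROP TABLE x').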


@ -1,136 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import keystoneclient.v2_0.client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class KeystoneDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
# Table names
USERS = "users"
ROLES = "roles"
TENANTS = "tenants"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
users_translator = {
'translation-type': 'HDICT',
'table-name': USERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'username', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'tenantId', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'email', 'translator': value_trans})}
roles_translator = {
'translation-type': 'HDICT',
'table-name': ROLES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans})}
tenants_translator = {
'translation-type': 'HDICT',
'table-name': TENANTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans})}
TRANSLATORS = [users_translator, roles_translator, tenants_translator]
def __init__(self, name='', args=None):
super(KeystoneDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = self.get_keystone_credentials_v2(args)
self.client = keystoneclient.v2_0.client.Client(**self.creds)
self.add_executable_client_methods(self.client,
'keystoneclient.v2_0.client')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'keystone'
result['description'] = ('Datasource driver that interfaces with '
'keystone.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def get_keystone_credentials_v2(self, args):
creds = args
d = {}
d['version'] = '2'
d['username'] = creds['username']
d['password'] = creds['password']
d['auth_url'] = creds['auth_url']
d['tenant_name'] = creds['tenant_name']
return d
def initialize_update_methods(self):
users_method = lambda: self._translate_users(self.client.users.list())
self.add_update_method(users_method, self.users_translator)
roles_method = lambda: self._translate_roles(self.client.roles.list())
self.add_update_method(roles_method, self.roles_translator)
tenants_method = lambda: self._translate_tenants(
self.client.tenants.list())
self.add_update_method(tenants_method, self.tenants_translator)
@ds_utils.update_state_on_changed(USERS)
def _translate_users(self, obj):
row_data = KeystoneDriver.convert_objs(obj,
KeystoneDriver.users_translator)
return row_data
@ds_utils.update_state_on_changed(ROLES)
def _translate_roles(self, obj):
row_data = KeystoneDriver.convert_objs(obj,
KeystoneDriver.roles_translator)
return row_data
@ds_utils.update_state_on_changed(TENANTS)
def _translate_tenants(self, obj):
row_data = KeystoneDriver.convert_objs(
obj, KeystoneDriver.tenants_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.client, action, action_args)


@ -1,167 +0,0 @@
# Copyright (c) 2016 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from keystoneclient.v3 import client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class KeystoneV3Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
# Table names
USERS = "users"
ROLES = "roles"
PROJECTS = "projects"
DOMAINS = "domains"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
users_translator = {
'translation-type': 'HDICT',
'table-name': USERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'The ID for the user.',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'username, unique within domain',
'translator': value_trans},
{'fieldname': 'enabled', 'desc': 'user is enabled or not',
'translator': value_trans},
{'fieldname': 'default_project_id',
'desc': 'ID of the default project for the user',
'translator': value_trans},
{'fieldname': 'domain_id',
'desc': 'The ID of the domain for the user.',
'translator': value_trans})}
roles_translator = {
'translation-type': 'HDICT',
'table-name': ROLES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'role ID', 'translator': value_trans},
{'fieldname': 'name', 'desc': 'role name',
'translator': value_trans})}
projects_translator = {
'translation-type': 'HDICT',
'table-name': PROJECTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'enabled', 'desc': 'project is enabled or not',
'translator': value_trans},
{'fieldname': 'description', 'desc': 'project description',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'project name',
'translator': value_trans},
{'fieldname': 'domain_id',
'desc': 'The ID of the domain for the project',
'translator': value_trans},
{'fieldname': 'id', 'desc': 'ID for the project',
'translator': value_trans})}
domains_translator = {
'translation-type': 'HDICT',
'table-name': DOMAINS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'enabled', 'desc': 'domain is enabled or disabled',
'translator': value_trans},
{'fieldname': 'description', 'desc': 'domain description',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'domain name',
'translator': value_trans},
{'fieldname': 'id', 'desc': 'domain ID',
'translator': value_trans})}
TRANSLATORS = [users_translator, roles_translator, projects_translator,
domains_translator]
def __init__(self, name='', args=None):
super(KeystoneV3Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(args)
self.client = client.Client(session=session)
self.add_executable_client_methods(self.client,
'keystoneclient.v3.client')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'keystonev3'
result['description'] = ('Datasource driver that interfaces with '
'keystone.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
users_method = lambda: self._translate_users(self.client.users.list())
self.add_update_method(users_method, self.users_translator)
roles_method = lambda: self._translate_roles(self.client.roles.list())
self.add_update_method(roles_method, self.roles_translator)
projects_method = lambda: self._translate_projects(
self.client.projects.list())
self.add_update_method(projects_method, self.projects_translator)
domains_method = lambda: self._translate_domains(
self.client.domains.list())
self.add_update_method(domains_method, self.domains_translator)
@ds_utils.update_state_on_changed(USERS)
def _translate_users(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.users_translator)
return row_data
@ds_utils.update_state_on_changed(ROLES)
def _translate_roles(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.roles_translator)
return row_data
@ds_utils.update_state_on_changed(PROJECTS)
def _translate_projects(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.projects_translator)
return row_data
@ds_utils.update_state_on_changed(DOMAINS)
def _translate_domains(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.domains_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.client, action, action_args)


@ -1,204 +0,0 @@
# Copyright (c) 2018 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Mistral Driver for Congress
This driver allows the creation of Congress datasources that interface with
the Mistral workflow service. The Congress datasource reflects the Mistral
data on workflows, workflow executions, actions, and action executions as
Congress tables. The datasource also supports triggering Mistral APIs, such
as initiating a workflow or action, which is especially useful for creating
Congress policies that take remedial action.
Datasource creation CLI example:
$ openstack congress datasource create mistral mistral_datasource \
--config username=admin \
--config tenant_name=admin \
--config auth_url=http://127.0.0.1/identity \
--config password=password
"""
from mistralclient.api.v2 import client as mistral_client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class MistralDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
WORKFLOWS = 'workflows'
ACTIONS = 'actions'
WORKFLOW_EXECUTIONS = 'workflow_executions'
ACTION_EXECUTIONS = 'action_executions'
value_trans = {'translation-type': 'VALUE'}
workflows_translator = {
'translation-type': 'HDICT',
'table-name': WORKFLOWS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'scope', 'translator': value_trans},
{'fieldname': 'input', 'translator': value_trans},
{'fieldname': 'namespace', 'translator': value_trans},
{'fieldname': 'project_id', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'definition', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
# TODO(ekcs): maybe enable tags in the future
)}
actions_translator = {
'translation-type': 'HDICT',
'table-name': ACTIONS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'input', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'is_system', 'translator': value_trans},
{'fieldname': 'definition', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'scope', 'translator': value_trans},
# TODO(ekcs): maybe enable tags in the future
)}
workflow_executions_translator = {
'translation-type': 'HDICT',
'table-name': WORKFLOW_EXECUTIONS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'workflow_name', 'translator': value_trans},
{'fieldname': 'input', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'state_info', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'workflow_id', 'translator': value_trans},
{'fieldname': 'workflow_namespace', 'translator': value_trans},
{'fieldname': 'params', 'translator': value_trans},
# TODO(ekcs): maybe add task_execution_ids table
)}
action_executions_translator = {
'translation-type': 'HDICT',
'table-name': ACTION_EXECUTIONS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'state_info', 'translator': value_trans},
{'fieldname': 'workflow_name', 'translator': value_trans},
{'fieldname': 'task_execution_id', 'translator': value_trans},
{'fieldname': 'task_name', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'input', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'accepted', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'workflow_namespace', 'translator': value_trans},
# TODO(ekcs): maybe add action execution tags
)}
TRANSLATORS = [
workflows_translator, actions_translator,
workflow_executions_translator, action_executions_translator]
def __init__(self, name='', args=None):
super(MistralDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
session = ds_utils.get_keystone_session(args)
self.mistral_client = mistral_client.Client(session=session)
self.add_executable_client_methods(
self.mistral_client, 'mistralclient.api.v2.')
self.initialize_update_method()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'mistral'
result['description'] = ('Datasource driver that interfaces with '
'Mistral.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_method(self):
workflows_method = lambda: self._translate_workflows(
self.mistral_client.workflows.list())
self.add_update_method(workflows_method, self.workflows_translator)
workflow_executions_method = (
lambda: self._translate_workflow_executions(
self.mistral_client.executions.list()))
self.add_update_method(workflow_executions_method,
self.workflow_executions_translator)
actions_method = lambda: self._translate_actions(
self.mistral_client.actions.list())
self.add_update_method(actions_method, self.actions_translator)
action_executions_method = lambda: self._translate_action_executions(
self.mistral_client.action_executions.list())
self.add_update_method(action_executions_method,
self.action_executions_translator)
@ds_utils.update_state_on_changed(WORKFLOWS)
def _translate_workflows(self, obj):
"""Translate the workflows represented by OBJ into tables."""
row_data = MistralDriver.convert_objs(obj, self.workflows_translator)
return row_data
@ds_utils.update_state_on_changed(ACTIONS)
def _translate_actions(self, obj):
"""Translate the workflows represented by OBJ into tables."""
row_data = MistralDriver.convert_objs(obj, self.actions_translator)
return row_data
@ds_utils.update_state_on_changed(WORKFLOW_EXECUTIONS)
def _translate_workflow_executions(self, obj):
"""Translate the workflow_executions represented by OBJ into tables."""
row_data = MistralDriver.convert_objs(
obj, self.workflow_executions_translator)
return row_data
@ds_utils.update_state_on_changed(ACTION_EXECUTIONS)
def _translate_action_executions(self, obj):
"""Translate the action_executions represented by OBJ into tables."""
row_data = MistralDriver.convert_objs(
obj, self.action_executions_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.mistral_client, action, action_args)
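
Given the dispatch above, a Mistral workflow can be started directly through the driver. A hedged sketch, again assuming the {'positional': [...], 'named': {...}} argument convention of ExecutionDriver; the workflow name and credentials are illustrative:

# Hypothetical call (all values illustrative):
cloud_credentials = {'username': 'admin', 'password': 'secret',
                     'auth_url': 'http://127.0.0.1/identity',
                     'tenant_name': 'admin'}
driver = MistralDriver(name='mistral', args=cloud_credentials)
# forwarded to self.mistral_client.executions.create(...) since the
# driver has no local method named 'executions.create':
driver.execute('executions.create',
               {'positional': [],
                'named': {'workflow_identifier': 'remediate_vm'}})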


@ -1,298 +0,0 @@
# Copyright (c) 2015 Cisco, 2018 NEC, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from datetime import datetime
from datetime import timedelta
import eventlet
from futurist import periodics
from monascaclient import client as monasca_client
from oslo_concurrency import lockutils
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
DATA = "statistics.data"
DIMENSIONS = "dimensions"
METRICS = "metrics"
NOTIFICATIONS = "alarm_notification"
STATISTICS = "statistics"
value_trans = {'translation-type': 'VALUE'}
# TODO(thinrichs): figure out how to move even more of this boilerplate
# into DataSourceDriver. E.g. change all the classes to Driver instead of
# NeutronDriver, CeilometerDriver, etc. and move the d6instantiate function
# to DataSourceDriver.
class MonascaDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
# TODO(fabiog): add events and logs when fully supported in Monasca
# EVENTS = "events"
# LOGS = "logs"
metric_translator = {
'translation-type': 'HDICT',
'table-name': METRICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'dimensions',
'translator': {'translation-type': 'VDICT',
'table-name': DIMENSIONS,
'id-col': 'id',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}})
}
statistics_translator = {
'translation-type': 'HDICT',
'table-name': STATISTICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'statistics',
'translator': {'translation-type': 'LIST',
'table-name': DATA,
'id-col': 'name',
'val-col': 'value_col',
'translator': value_trans}})
}
TRANSLATORS = [metric_translator, statistics_translator]
def __init__(self, name='', args=None):
super(MonascaDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
if not args.get('project_name'):
args['project_name'] = args['tenant_name']
# set default polling time to 1hr
self.poll_time = int(args.get('poll_time', 3600))
session = ds_utils.get_keystone_session(args)
# if the endpoint is not defined, retrieve it from the keystone catalog
if 'endpoint' not in args:
args['endpoint'] = session.get_endpoint(service_type='monitoring',
interface='publicURL')
self.monasca = monasca_client.Client('2_0', session=session,
endpoint=args['endpoint'])
self.add_executable_client_methods(self.monasca, 'monascaclient.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'monasca'
result['description'] = ('Datasource driver that interfaces with '
'monasca.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
metrics_method = lambda: self._translate_metric(
self.monasca.metrics.list())
self.add_update_method(metrics_method, self.metric_translator)
statistics_method = self.update_statistics
self.add_update_method(statistics_method, self.statistics_translator)
def update_statistics(self):
today = datetime.utcnow()
yesterday = timedelta(hours=24)
start_from = datetime.isoformat(today-yesterday)
for metric in self.monasca.metrics.list_names():
LOG.debug("Monasca statistics for metric %s", metric['name'])
_query_args = dict(
start_time=start_from,
name=metric['name'],
statistics='avg',
period=int(self.poll_time),
merge_metrics='true')
statistics = self.monasca.metrics.list_statistics(
**_query_args)
self._translate_statistics(statistics)
@ds_utils.update_state_on_changed(METRICS)
def _translate_metric(self, obj):
"""Translate the metrics represented by OBJ into tables."""
LOG.debug("METRIC: %s", str(obj))
row_data = MonascaDriver.convert_objs(obj,
self.metric_translator)
return row_data
@ds_utils.update_state_on_changed(STATISTICS)
def _translate_statistics(self, obj):
"""Translate the metrics represented by OBJ into tables."""
LOG.debug("STATISTICS: %s", str(obj))
row_data = MonascaDriver.convert_objs(obj,
self.statistics_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.monasca, action, action_args)
class MonascaWebhookDriver(datasource_driver.PushedDataSourceDriver):
METRICS = 'alarms.' + METRICS
DIMENSIONS = METRICS + '.' + DIMENSIONS
metric_translator = {
'translation-type': 'HDICT',
'table-name': METRICS,
'parent-key': 'alarm_id',
'parent-col-name': 'alarm_id',
'parent-key-desc': 'ALARM id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'dimensions',
'translator': {'translation-type': 'VDICT',
'table-name': DIMENSIONS,
'id-col': 'id',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}})
}
alarm_notification_translator = {
'translation-type': 'HDICT',
'table-name': NOTIFICATIONS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'alarm_id', 'translator': value_trans},
{'fieldname': 'alarm_definition_id', 'translator': value_trans},
{'fieldname': 'alarm_name', 'translator': value_trans},
{'fieldname': 'alarm_description', 'translator': value_trans},
{'fieldname': 'alarm_timestamp', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'old_state', 'translator': value_trans},
{'fieldname': 'message', 'translator': value_trans},
{'fieldname': 'tenant_id', 'translator': value_trans},
{'fieldname': 'metrics', 'translator': metric_translator},)
}
TRANSLATORS = [alarm_notification_translator]
def __init__(self, name='', args=None):
LOG.warning(
'The Monasca webhook driver is classified as having unstable '
'schema. The schema may change in future releases in '
'backwards-incompatible ways.')
super(MonascaWebhookDriver, self).__init__(name, args=args)
if args is None:
args = {}
# set default time to 10 days before deleting an active alarm
self.hours_to_keep_alarm = int(args.get('hours_to_keep_alarm', 240))
self.set_up_periodic_tasks()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'monasca_webhook'
result['description'] = ('Datasource driver that accepts Monasca '
'webhook alarm notifications.')
result['config'] = {'persist_data': constants.OPTIONAL,
'hours_to_keep_alarm': constants.OPTIONAL}
return result
def _delete_rows(self, tablename, column_number, value):
to_remove = [row for row in self.state[tablename]
if row[column_number] == value]
for row in to_remove:
self.state[tablename].discard(row)
def _webhook_handler(self, alarm):
tablenames = [NOTIFICATIONS, self.METRICS, self.DIMENSIONS]
# remove already existing same alarm row from alarm_notification
alarm_id = alarm['alarm_id']
column_index_number_of_alarm_id = 0
self._delete_rows(NOTIFICATIONS, column_index_number_of_alarm_id,
alarm_id)
# remove already existing same metric from metrics
self._delete_rows(self.METRICS, column_index_number_of_alarm_id,
alarm_id)
translator = self.alarm_notification_translator
row_data = MonascaWebhookDriver.convert_objs([alarm], translator)
# add alarm to table
for table, row in row_data:
if table in tablenames:
self.state[table].add(row)
for table in tablenames:
LOG.debug('publish a new state %s in %s',
self.state[table], table)
self.publish(table, self.state[table])
return tablenames
def set_up_periodic_tasks(self):
@lockutils.synchronized('congress_monasca_webhook_ds_data')
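# with the default hours_to_keep_alarm of 240, the spacing below is
# 240 * 3600 / 10 = 86400 seconds, i.e. one cleanup sweep per day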
@periodics.periodic(spacing=max(self.hours_to_keep_alarm * 3600/10, 1))
def delete_old_alarms():
tablename = NOTIFICATIONS
col_index_of_timestamp = 4
# find for removal all alarms at least self.hours_to_keep_alarm old
to_remove = [
row for row in self.state[tablename]
if (datetime.utcnow() -
datetime.utcfromtimestamp(row[col_index_of_timestamp])
>= timedelta(hours=self.hours_to_keep_alarm))]
for row in to_remove:
self.state[tablename].discard(row)
# deletes corresponding metrics table rows
col_index_of_alarm_id = 0
alarm_id = row[col_index_of_alarm_id]
self._delete_rows(self.METRICS, col_index_of_alarm_id,
alarm_id)
periodic_task_callables = [(delete_old_alarms, None, {})]
self.periodic_tasks = periodics.PeriodicWorker(periodic_task_callables)
self.periodic_tasks_thread = eventlet.spawn_n(
self.periodic_tasks.start)
def __del__(self):
if self.periodic_tasks:
self.periodic_tasks.stop()
self.periodic_tasks.wait()
self.periodic_tasks = None
if self.periodic_tasks_thread:
eventlet.greenthread.kill(self.periodic_tasks_thread)
self.periodic_tasks_thread = None


@ -1,150 +0,0 @@
# Copyright (c) 2015 Hewlett-Packard. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
logger = logging.getLogger(__name__)
class IOMuranoObject(object):
name = 'io.murano.Object'
@classmethod
def is_class_type(cls, name):
if name == cls.name:
return True
else:
return False
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
return [cls.name]
class IOMuranoEnvironment(IOMuranoObject):
name = 'io.murano.Environment'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesInstance(IOMuranoObject):
name = 'io.murano.resources.Instance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesLinuxInstance(IOMuranoResourcesInstance):
name = 'io.murano.resources.LinuxInstance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesInstance.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesLinuxMuranoInstance(IOMuranoResourcesLinuxInstance):
name = 'io.murano.resources.LinuxMuranoInstance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesLinuxInstance.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesWindowsInstance(IOMuranoResourcesInstance):
name = 'io.murano.resources.WindowsInstance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesInstance.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesNetwork(IOMuranoObject):
name = 'io.murano.resources.Network'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesNeutronNetwork(IOMuranoResourcesNetwork):
name = 'io.murano.resources.NeutronNetwork'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesNetwork.get_parent_types()
types.append(cls.name)
return types
class IOMuranoApplication(IOMuranoObject):
name = 'io.murano.Application'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoApps(IOMuranoApplication):
# This is a common class for all applications.
# name should be set to the actual app type before use
# (e.g. io.murano.apps.apache.ApacheHttpServer)
name = None
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoApplication.get_parent_types()
types.append(cls.name)
return types

Some files were not shown because too many files have changed in this diff.