Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that notes where ongoing
work can be found and how to recover the repo if needed at
some future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I107bd8eafb21c5a195d483d68521fea055eb6f5b
Tony Breeds 2017-09-12 15:37:44 -06:00
parent c009fdff76
commit b69f046d73
682 changed files with 14 additions and 136013 deletions

.coveragerc
@@ -1,7 +0,0 @@
[run]
branch = True
source = congress
omit = congress/tests/*
[report]
ignore_errors = True

.gitignore
@@ -1,62 +0,0 @@
# Congress build/runtime artifacts
Congress.tokens
subunit.log
congress/tests/policy_engines/snapshot/test
congress/tests/policy/snapshot/test
/doc/html
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
/lib
/lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.sw?
# IDEs
.idea

.gitreview
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/congress.git

.mailmap
@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

.testr.conf
@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ ./congress/tests $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

CONTRIBUTING.rst
@@ -1,25 +0,0 @@
============
Contributing
============
The Congress wiki page is the authoritative starting point.
https://wiki.openstack.org/wiki/Congress
If you would like to contribute to the development of any OpenStack
project, including Congress,
you must follow the steps on this page:
https://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/congress

HACKING.rst
@@ -1,5 +0,0 @@
===========================
Congress style commandments
===========================
Read the OpenStack Style Commandments at https://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

Makefile
@@ -1,12 +0,0 @@
TOPDIR=$(CURDIR)
SRCDIR=$(TOPDIR)/congress

all: docs

clean:
	find . -name '*.pyc' -exec rm -f {} \;
	rm -Rf $(TOPDIR)/doc/html/*

docs: $(TOPDIR)/doc/source/*.rst
	sphinx-build -b html $(TOPDIR)/doc/source $(TOPDIR)/doc/html

README (new file)
@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

README.rst
@@ -1,465 +0,0 @@
========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/badges/congress.svg
:target: https://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
.. _readme:
======================================
Congress Introduction and Installation
======================================
1. What is Congress
===================
Congress is an open policy framework for the cloud. With Congress, a
cloud operator can declare, monitor, enforce, and audit "policy" in a
heterogeneous cloud environment. Congress gets inputs from a cloud's
various services; for example, in OpenStack, Congress fetches
information about VMs from Nova and network state from Neutron.
Congress then feeds input data from those services into its policy engine
where Congress verifies that the cloud's actual state abides by the cloud
operator's policies. Congress is designed to work with **any policy** and
**any cloud service**.
2. Why is Policy Important
==========================
The cloud is a collection of autonomous
services that constantly change the state of the cloud, and it can be
challenging for the cloud operator to know whether the cloud is even
configured correctly. For example,
* The services are often independent from each other and do not
support transactional consistency across services, so a cloud
management system can change one service (create a VM) without also
making a necessary change to another service (attach the VM to a
network). This can lead to incorrect behavior.
* Other times, we have seen a cloud operator allocate cloud resources
and then forget to clean them up when the resources are no longer in
use, effectively leaving garbage around the system and wasting
resources.
* The desired cloud state can also change over time. For example, if
a security vulnerability is discovered in Linux version X, then all
machines with version X that were ok in the past are now in an
undesirable state. A version number policy would detect all the
machines in that undesirable state. This is a trivial example, but
the more complex the policy, the more helpful a policy system
becomes.
Congress's job is to help people manage that plethora of state across
all cloud services with a succinct policy language.
3. Using Congress
=================
Setting up Congress involves writing policies and configuring Congress
to fetch input data from the cloud services. The cloud operator
writes policy in the Congress policy language, which receives input
from the cloud services in the form of tables. The language itself
resembles Datalog. For more detail about the policy language and data
format see :ref:`Policy <policy>`.
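For a first taste of the language, here is an illustrative rule (the
table names assume the Nova and Neutron datasources configured below)
that flags every VM attached to a non-public network:
.. code-block:: text

    error(vm) :- nova:virtual_machine(vm),
        nova:network(vm, network),
        not neutron:public_network(network)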
To add a service as an input data source, the cloud operator configures a Congress
"driver," and the driver queries the service. Congress already
has drivers for several types of service, but if a cloud operator
needs to use an unsupported service, she can write a new driver
without much effort and, ideally, contribute it to the
Congress project so that no one else needs to write the same driver.
Finally, when using Congress, the cloud operator must choose what
Congress should do with the policy it has been given:
* **monitoring**: detect violations of policy and provide a list of those violations
* **proactive enforcement**: prevent violations before they happen (functionality that requires
other services to consult with Congress before making changes)
* **reactive enforcement**: correct violations after they happen (a manual process that
Congress tries to simplify)
In the future, Congress
will also help the cloud operator audit policy (analyze the history
of policy and policy violations).
Congress is free software, licensed under the Apache License.
4. Installing Congress
======================
There are two ways to install Congress.
* As part of DevStack. Get Congress running alongside other OpenStack services like Nova
and Neutron, all on a single machine. This is a great way to try out Congress for the
first time.
* Separate install. Get Congress running alongside an existing OpenStack
deployment
4.1 DevStack install
--------------------
For integrating Congress with DevStack:
1. Download DevStack
.. code-block:: console
$ git clone https://git.openstack.org/openstack-dev/devstack.git
$ cd devstack
2. Configure DevStack to use Congress and any other service you want. To do that, modify
the ``local.conf`` file (inside the DevStack directory). Here is what
our file looks like:
.. code-block:: console
[[local|localrc]]
enable_plugin congress https://git.openstack.org/openstack/congress
enable_plugin heat https://git.openstack.org/openstack/heat
enable_plugin aodh https://git.openstack.org/openstack/aodh
enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
enable_service s-proxy s-object s-container s-account
3. Run ``stack.sh``. The default configuration expects the passwords to be
'password' (without the quotes).
.. code-block:: console
$ ./stack.sh
4.2 Separate install
--------------------
Install the following software, if you haven't already.
* python 2.7: https://www.python.org/download/releases/2.7/
* pip: https://pip.pypa.io/en/latest/installing.html
* java: https://java.com (any reasonably current version should work)
On Ubuntu: ``apt-get install default-jre``
On Fedora: ``yum install jre``
* Additionally
.. code-block:: console
$ sudo apt-get install git gcc python-dev python-antlr3 libxml2 libxslt1-dev libzip-dev build-essential libssl-dev libffi-dev
$ sudo apt install python-setuptools
$ sudo pip install --upgrade pip virtualenv pbr tox
Clone Congress
.. code-block:: console
$ git clone https://github.com/openstack/congress.git
$ cd congress
Install requirements
.. code-block:: console
$ sudo pip install .
Install Source code
.. code-block:: console
$ sudo python setup.py install
Configure Congress (assuming you put config files in /etc/congress)
.. code-block:: console
$ sudo mkdir -p /etc/congress
$ sudo mkdir -p /etc/congress/snapshot
$ sudo cp etc/api-paste.ini /etc/congress
$ sudo cp etc/policy.json /etc/congress
Set up Policy Library [optional]
This step copies the bundled collection of Congress policies into the Congress
policy library for easy activation by an administrator. The policies in the
library do not become active until explicitly activated by an administrator.
The step may be skipped if you do not want to load the bundled policies into
the policy library.
.. code-block:: console
$ sudo cp -r library /etc/congress/.
Generate a configuration file as outlined in the Configuration Options section
of the :ref:`Deployment <deployment>` document. Note: you may have to run the command with sudo.
There are several sections in the congress/etc/congress.conf.sample file you may want to change:
* [DEFAULT] Section
- drivers
- auth_strategy
* "From oslo.log" Section
- log_file
- log_dir (remember to create the directory)
* [database] Section
- connection
Add drivers:
.. code-block:: text
drivers = congress.datasources.neutronv2_driver.NeutronV2Driver,congress.datasources.glancev2_driver.GlanceV2Driver,congress.datasources.nova_driver.NovaDriver,congress.datasources.keystone_driver.KeystoneDriver,congress.datasources.ceilometer_driver.CeilometerDriver,congress.datasources.cinder_driver.CinderDriver,congress.datasources.swift_driver.SwiftDriver,congress.datasources.plexxi_driver.PlexxiDriver,congress.datasources.vCenter_driver.VCenterDriver,congress.datasources.murano_driver.MuranoDriver,congress.datasources.ironic_driver.IronicDriver
The default auth_strategy is keystone. To set Congress to use no authorization strategy:
.. code-block:: text
auth_strategy = noauth
If you use noauth, you might want to delete or comment out the [keystone_authtoken] section.
Set the database connection string in the [database] section (adapt MySQL root password):
.. code-block:: text
connection = mysql+pymysql://root:password@127.0.0.1/congress?charset=utf8
To use RabbitMQ with Congress, set the transport_url in the "From oslo.messaging" section according to your setup:
.. code-block:: text
transport_url = rabbit://$RABBIT_USERID:$RABBIT_PASSWORD@$RABBIT_HOST:5672
A bare-bones congress.conf is as follows:
.. code-block:: text
[DEFAULT]
auth_strategy = noauth
drivers = congress.datasources.neutronv2_driver.NeutronV2Driver,congress.datasources.glancev2_driver.GlanceV2Driver,congress.datasources.nova_driver.NovaDriver,congress.datasources.keystone_driver.KeystoneDriver,congress.datasources.ceilometer_driver.CeilometerDriver,congress.datasources.cinder_driver.CinderDriver,congress.datasources.swift_driver.SwiftDriver,congress.datasources.plexxi_driver.PlexxiDriver,congress.datasources.vCenter_driver.VCenterDriver,congress.datasources.murano_driver.MuranoDriver,congress.datasources.ironic_driver.IronicDriver
log_file=congress.log
log_dir=/var/log/congress
[database]
connection = mysql+pymysql://root:password@127.0.0.1/congress?charset=utf8
When you are finished editing congress.conf.sample, copy it to the /etc/congress directory.
.. code-block:: console
$ sudo cp etc/congress.conf.sample /etc/congress/congress.conf
Create database
.. code-block:: console
$ mysql -u root -p
mysql> CREATE DATABASE congress;
mysql> GRANT ALL PRIVILEGES ON congress.* TO 'congress'@'localhost' IDENTIFIED BY 'CONGRESS_DBPASS';
mysql> GRANT ALL PRIVILEGES ON congress.* TO 'congress'@'%' IDENTIFIED BY 'CONGRESS_DBPASS';
Push down schema
.. code-block:: console
$ sudo congress-db-manage --config-file /etc/congress/congress.conf upgrade head
Set up Congress accounts
Use your OpenStack RC file to set and export required environment variables:
OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME, OS_TENANT_NAME, OS_AUTH_URL.
(Adapt parameters according to your environment)
.. code-block:: console
$ ADMIN_ROLE=$(openstack role list | awk "/ admin / { print \$2 }")
$ SERVICE_TENANT=$(openstack project list | awk "/ service / { print \$2 }")
$ CONGRESS_USER=$(openstack user create --password password --project service --email "congress@example.com" congress | awk "/ id / {print \$4 }")
$ openstack role add $ADMIN_ROLE --user $CONGRESS_USER --project $SERVICE_TENANT
$ CONGRESS_SERVICE=$(openstack service create policy --name congress --description "Congress Service" | awk "/ id / { print \$4 }")
Create the Congress Service Endpoint
Endpoint creation differs based upon the Identity version. Please see the `endpoint <https://docs.openstack.org/developer/python-openstackclient/command-objects/endpoint.html>`_ documentation for details.
.. code-block:: console
Identity v2:
$ openstack endpoint create $CONGRESS_SERVICE --region RegionOne --publicurl https://127.0.0.1:1789/ --adminurl https://127.0.0.1:1789/ --internalurl https://127.0.0.1:1789/
.. code-block:: console
Identity v3:
$ openstack endpoint create --region $OS_REGION_NAME $CONGRESS_SERVICE public https://$SERVICE_HOST:1789
$ openstack endpoint create --region $OS_REGION_NAME $CONGRESS_SERVICE admin https://$SERVICE_HOST:1789
$ openstack endpoint create --region $OS_REGION_NAME $CONGRESS_SERVICE internal https://$SERVICE_HOST:1789
Start Congress
The default behavior is to start the Congress API, Policy Engine, and
Datasource in a single node. For HAHT deployment options, please see the
:ref:`HA Overview <ha_overview>` document.
.. code-block:: console
$ sudo /usr/local/bin/congress-server --debug
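As a quick sanity check that the server is up (assuming the default
port 1789 and ``auth_strategy = noauth``), you can list policies
straight from the API:
.. code-block:: console

    $ curl http://127.0.0.1:1789/v1/policies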
Install the Congress Client
The command line interface (CLI) for Congress resides in a project called python-congressclient.
Follow the installation instructions on the `GitHub page <https://github.com/openstack/python-congressclient>`_.
Configure datasource drivers
For this you must have the Congress CLI installed. Run this command for every
service that Congress will poll for data.
Please note that the service name $SERVICE should match the ID of the
datasource driver, e.g. "neutronv2" for Neutron and "glancev2" for Glance;
$OS_USERNAME, $OS_TENANT_NAME, $OS_PASSWORD and $SERVICE_HOST are used to
configure the related datasource driver so that Congress knows how to
talk with the service.
.. code-block:: console
$ openstack congress datasource create $SERVICE "$SERVICE" \
    --config username=$OS_USERNAME \
    --config tenant_name=$OS_TENANT_NAME \
    --config password=$OS_PASSWORD \
    --config auth_url=https://$SERVICE_HOST:5000/v2.0
Install the Congress Dashboard in Horizon
Clone the congress-dashboard repo from https://github.com/openstack/congress-dashboard
and follow the installation instructions in its README:
https://github.com/openstack/congress-dashboard/blob/master/README.rst
Note: After you install the Congress Dashboard and restart apache, the OpenStack
Dashboard may throw a "You have offline compression enabled..." error; follow the
instructions in the error message.
You may have to:
.. code-block:: console
$ cd /opt/stack/horizon
$ python manage.py compress
$ sudo service apache2 restart
Read the HTML documentation
Install python-sphinx and the oslosphinx extension if missing and build the docs.
After building, open congress/doc/html/index.html in a browser.
.. code-block:: console
$ sudo pip install sphinx
$ sudo pip install oslosphinx
$ make docs
Test Using the Congress CLI
If you are not familiar with using the OpenStack command-line clients, please read the `OpenStack documentation <https://docs.openstack.org/user-guide/cli.html>`_ before proceeding.
Once you have set up or obtained credentials to use the OpenStack command-line clients, you may begin testing Congress. During installation a number of policies are created.
To view policies: $ openstack congress policy list
To view installed datasources: $ openstack congress datasource list
To list available commands: $ openstack congress --help
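Beyond listing, you can exercise the policy engine end-to-end from the
CLI; the policy name and rule below are illustrative only:
.. code-block:: console

    $ openstack congress policy create test_policy
    $ openstack congress policy rule create test_policy "p(x) :- q(x)"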
4.3 Unit Tests
--------------
Run unit tests in the Congress directory
.. code-block:: console
$ tox -epy27
In order to break into the debugger from a unit test, insert
a breakpoint into the code:
.. code-block:: python
import pdb; pdb.set_trace()
Then run ``tox`` with the debug environment as one of the following::
tox -e debug
tox -e debug test_file_name.TestClass.test_name
For more information see the `oslotest documentation
<https://docs.openstack.org/developer/oslotest/features.html#debugging-with-oslo-debug-helper>`_.
4.4 Upgrade
-----------
Here are the instructions for upgrading to a new release of the
Congress server.
1. Stop the Congress server.
2. Update the Congress git repo
.. code-block:: console
$ cd /path/to/congress
$ git fetch origin
3. Check out the release you are interested in, say Mitaka. Note that this
step will not succeed if you have any uncommitted changes in the repo.
.. code-block:: console
$ git checkout origin/stable/mitaka
If you have changes committed locally that are not merged into the public
repository, you now need to cherry-pick those changes onto the new
branch.
4. Install dependencies
.. code-block:: console
$ sudo pip install
5. Install source code
.. code-block:: console
$ sudo python setup.py install
6. Migrate the database schema
.. code-block:: console
$ sudo congress-db-manage --config-file /etc/congress/congress.conf upgrade head
7. (optional) Check if the configuration options you are currently using are
still supported and whether there are any new configuration options you
would like to use. To see the current list of configuration options,
use the following command, which will create a sample configuration file
in ``etc/congress.conf.sample`` for you to examine.
.. code-block:: console
$ tox -egenconfig
8. Restart Congress, e.g.
.. code-block:: console
$ sudo /usr/local/bin/congress-server --debug

antlr3runtime/Python/antlr3 (symlink)
@@ -1 +0,0 @@
../../thirdparty/antlr3-antlr-3.5/runtime/Python/antlr3/

antlr3runtime/Python3/antlr3 (symlink)
@@ -1 +0,0 @@
../../thirdparty/antlr3-antlr-3.5/runtime/Python3/antlr3/

babel.cfg
@@ -1 +0,0 @@
[python: **.py]

bin/congress-server
@@ -1,36 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import os
import sys

# If ../congress/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(__file__),
                                                os.pardir,
                                                os.pardir))
if os.path.exists(os.path.join(possible_topdir,
                               'congress',
                               '__init__.py')):
    sys.path.insert(0, possible_topdir)

from congress.server import congress_server

if __name__ == '__main__':
    congress_server.main()

bindep.txt
@@ -1,14 +0,0 @@
python-all-dev
python3-all-dev
libvirt-dev
libxml2-dev
libxslt1-dev
# libmysqlclient-dev
# libpq-dev
libsqlite3-dev
libffi-dev
# mysql-client
# mysql-server
# postgresql
# postgresql-client
rabbitmq-server

congress/__init__.py
@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

import gettext

import pbr.version

gettext.install('congress')

__version__ = pbr.version.VersionInfo(
    'congress').version_string()

congress/api/action_model.py
@@ -1,52 +0,0 @@
# Copyright (c) 2015 Intel, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception


class ActionsModel(base.APIModel):
    """Model for handling API requests about Actions."""

    # Note(dse2): blocking function
    def get_items(self, params, context=None):
        """Retrieve items from this model.

        Args:
            params: A dict-like object containing parameters
                    from the request query string and body.
            context: Key-values providing frame of reference of request

        Returns:
            A dict containing at least an 'actions' key whose value is a
            list of items in this model.
        """
        # Note: blocking call
        caller, source_id = api_utils.get_id_from_context(context)
        try:
            rpc_args = {'source_id': source_id}
            # Note(dse2): blocking call
            return self.invoke_rpc(caller, 'get_actions', rpc_args)
        except exception.CongressException as e:
            raise webservice.DataModelException(
                exception.NotFound.code, str(e),
                http_status_code=exception.NotFound.code)

congress/api/api_utils.py
@@ -1,52 +0,0 @@
# Copyright (c) 2015 NTT All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from oslo_log import log as logging

from congress.api import base
from congress.api import webservice
from congress.db import datasources as db_datasources

LOG = logging.getLogger(__name__)


def create_table_dict(tablename, schema):
    cols = [{'name': x['name'], 'description': x['desc']}
            if isinstance(x, dict)
            else {'name': x, 'description': 'None'}
            for x in schema[tablename]]
    return {'table_id': tablename,
            'columns': cols}


# Note(thread-safety): blocking function
def get_id_from_context(context):
    if 'ds_id' in context:
        # Note(thread-safety): blocking call
        ds_name = db_datasources.get_datasource_name(context.get('ds_id'))
        return ds_name, context.get('ds_id')
    elif 'policy_id' in context:
        return base.ENGINE_SERVICE_ID, context.get('policy_id')
    else:
        msg = ("Internal error: context %s should have included "
               "either ds_id or policy_id" % str(context))
        try:  # Py3: ensure LOG.exception is inside except
            raise webservice.DataModelException('404', msg)
        except webservice.DataModelException:
            LOG.exception(msg)
            raise
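For reference, a minimal sketch of what create_table_dict produces, given
a schema mapping of the kind the datasource drivers supply (the table and
column names below are invented for illustration):
.. code-block:: python

    schema = {'servers': ['id', {'name': 'name', 'desc': 'server name'}]}
    create_table_dict('servers', schema)
    # => {'table_id': 'servers',
    #     'columns': [{'name': 'id', 'description': 'None'},
    #                 {'name': 'name', 'description': 'server name'}]}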

congress/api/application.py
@@ -1,109 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

import traceback

from oslo_log import log as logging
import webob
import webob.dec

from congress.api import webservice
from congress.dse2 import data_service

LOG = logging.getLogger(__name__)

API_SERVICE_NAME = '__api'


class ApiApplication(object):
    """An API web application that binds REST resources to a wsgi server.

    This indirection between the wsgi server and REST resources facilitates
    binding the same resource tree to multiple endpoints (e.g. HTTP/HTTPS).
    """

    def __init__(self, resource_mgr):
        self.resource_mgr = resource_mgr

    @webob.dec.wsgify(RequestClass=webob.Request)
    def __call__(self, request):
        try:
            handler = self.resource_mgr.get_handler(request)
            if handler:
                msg = _("Handling request '%(meth)s %(path)s' with %(hndlr)s")
                LOG.info(msg, {"meth": request.method, "path": request.path,
                               "hndlr": handler})
                # TODO(pballand): validation
                response = handler.handle_request(request)
            else:
                response = webservice.NOT_FOUND_RESPONSE
        except webservice.DataModelException as e:
            # Error raised based on invalid user input
            LOG.exception("ApiApplication: found DataModelException")
            response = e.rest_response()
        except Exception as e:
            # Unexpected error raised by API framework or data model
            msg = _("Exception caught for request: %s")
            LOG.error(msg, request)
            LOG.error(traceback.format_exc())
            response = webservice.INTERNAL_ERROR_RESPONSE
        return response


class ResourceManager(data_service.DataService):
    """A container for REST API resources.

    This container is meant to be called from one or more wsgi servers/ports.

    Attributes:
        handlers: An array of API resource handlers for registered resources.
    """

    def __init__(self):
        self.handlers = []
        super(ResourceManager, self).__init__(API_SERVICE_NAME)

    def register_handler(self, handler, search_index=None):
        """Register a new resource handler.

        Args:
            handler: The resource handler to register.
            search_index: Priority of resource handler to resolve cases where
                          a request matches multiple handlers.
        """
        if search_index is not None:
            self.handlers.insert(search_index, handler)
        else:
            self.handlers.append(handler)
        msg = _("Registered API handler: %s")
        LOG.info(msg, handler)

    def get_handler(self, request):
        """Find a handler for a REST request.

        Args:
            request: A webob request object.

        Returns:
            A handler instance or None.
        """
        for h in self.handlers:
            if h.handles_request(request):
                return h
        return None

congress/api/base.py
@@ -1,41 +0,0 @@
# Copyright (c) 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
""" Base class for all API models."""
from __future__ import absolute_import

from oslo_config import cfg

ENGINE_SERVICE_ID = '__engine'
LIBRARY_SERVICE_ID = '__library'
DS_MANAGER_SERVICE_ID = '_ds_manager'


class APIModel(object):
    """Base Class for handling API requests."""

    def __init__(self, name, bus=None):
        self.name = name
        self.dse_long_timeout = cfg.CONF.dse.long_timeout
        self.bus = bus

    # Note(thread-safety): blocking function
    def invoke_rpc(self, caller, name, kwargs, timeout=None):
        # Invoke in-process when the caller is the policy engine and the
        # engine service is hosted on this node; otherwise the RPC goes
        # over the DSE message bus.
        local = (caller is ENGINE_SERVICE_ID and
                 self.bus.node.service_object(
                     ENGINE_SERVICE_ID) is not None)
        return self.bus.rpc(
            caller, name, kwargs, timeout=timeout, local=local)

congress/api/datasource_model.py
@@ -1,158 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

import json

from oslo_log import log as logging

from congress.api import api_utils
from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception

LOG = logging.getLogger(__name__)


class DatasourceModel(base.APIModel):
    """Model for handling API requests about Datasources."""

    # Note(thread-safety): blocking function
    def get_items(self, params, context=None):
        """Get items in model.

        Args:
            params: A dict-like object containing parameters
                    from the request query string and body.
            context: Key-values providing frame of reference of request

        Returns: A dict containing at least a 'results' key whose value is
                 a list of items in the model.  Additional keys set in the
                 dict will also be rendered for the user.
        """
        # Note(thread-safety): blocking call
        results = self.bus.get_datasources(filter_secret=True)

        # Check that running datasources match the datasources in the
        # database since this is going to tell the client about those
        # datasources, and the running datasources should match the
        # datasources we show the client.
        return {"results": results}

    def get_item(self, id_, params, context=None):
        """Get datasource corresponding to id_ in model."""
        try:
            datasource = self.bus.get_datasource(id_)
            return datasource
        except exception.DatasourceNotFound as e:
            LOG.exception("Datasource '%s' not found", id_)
            raise webservice.DataModelException(e.code, str(e),
                                                http_status_code=e.code)

    # Note(thread-safety): blocking function
    def add_item(self, item, params, id_=None, context=None):
        """Add item to model.

        Args:
            item: The item to add to the model
            id_: The ID of the item, or None if an ID should be generated
            context: Key-values providing frame of reference of request

        Returns:
            Tuple of (ID, newly_created_item)

        Raises:
            KeyError: ID already exists.
        """
        obj = None
        try:
            # Note(thread-safety): blocking call
            obj = self.invoke_rpc(base.DS_MANAGER_SERVICE_ID,
                                  'add_datasource',
                                  {'items': item},
                                  timeout=self.dse_long_timeout)
            # Let PE synchronizer take care of creating the policy.
        except (exception.BadConfig,
                exception.DatasourceNameInUse,
                exception.DriverNotFound,
                exception.DatasourceCreationError) as e:
            LOG.exception(_("Datasource creation failed."))
            raise webservice.DataModelException(e.code, str(e),
                                                http_status_code=e.code)

        return (obj['id'], obj)

    # Note(thread-safety): blocking function
    def delete_item(self, id_, params, context=None):
        ds_id = context.get('ds_id')
        try:
            # Note(thread-safety): blocking call
            datasource = self.bus.get_datasource(ds_id)
            # FIXME(thread-safety):
            #   by the time greenthread resumes, the
            #   returned datasource name could refer to a totally different
            #   datasource, causing the rest of this code to unintentionally
            #   delete a different datasource
            #   Fix: check UUID of datasource before operating.
            #   Abort if mismatch
            self.invoke_rpc(base.DS_MANAGER_SERVICE_ID,
                            'delete_datasource',
                            {'datasource': datasource},
                            timeout=self.dse_long_timeout)
            # Let PE synchronizer take care of deleting policy
        except (exception.DatasourceNotFound,
                exception.DanglingReference) as e:
            raise webservice.DataModelException(e.code, str(e))

    # Note(thread-safety): blocking function
    def request_refresh_action(self, params, context=None, request=None):
        caller, source_id = api_utils.get_id_from_context(context)
        try:
            args = {'source_id': source_id}
            # Note(thread-safety): blocking call
            self.invoke_rpc(caller, 'request_refresh', args)
        except exception.CongressException as e:
            LOG.exception(e)
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def execute_action(self, params, context=None, request=None):
        """Execute the action."""
        service = context.get('ds_id')
        body = json.loads(request.body)
        action = body.get('name')
        action_args = body.get('args', {})
        if not isinstance(action_args, dict):
            (num, desc) = error_codes.get('execute_action_args_syntax')
            raise webservice.DataModelException(num, desc)

        try:
            args = {'service_name': service, 'action': action,
                    'action_args': action_args}
            # TODO(ekcs): perhaps keep execution synchronous when explicitly
            # called via API
            # Note(thread-safety): blocking call
            self.invoke_rpc(base.ENGINE_SERVICE_ID, 'execute_action', args)
        except exception.PolicyException as e:
            (num, desc) = error_codes.get('execute_error')
            raise webservice.DataModelException(num, desc + "::" + str(e))
        return {}

congress/api/error_codes.py
@@ -1,123 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
try:
# For Python 3
import http.client as httplib
except ImportError:
import httplib
# TODO(thinrichs): move this out of api directory. Could go into
# the exceptions.py file. The HTTP error codes may make these errors
# look like they are only useful for the API, but actually they are
# just encoding the classification of the error using http codes.
# To make this more explicit, we could have 2 dictionaries where
# one maps an error name (readable for programmers) to an error number
# and another dictionary that maps an error name/number to the HTTP
# classification. But then it would be easy for a programmer when
# adding a new error to forget one or the other.
# name of unknown error
UNKNOWN = 'unknown'

# dict mapping error name to (<error id>, <description>, <http error code>)
errors = {}
errors[UNKNOWN] = (
    1000, "Unknown error", httplib.BAD_REQUEST)
errors['add_item_id'] = (
    1001, "Add item does not support user-chosen ID", httplib.BAD_REQUEST)
errors['rule_syntax'] = (
    1002, "Syntax error for rule", httplib.BAD_REQUEST)
errors['multiple_rules'] = (
    1003, "Received string representing more than 1 rule",
    httplib.BAD_REQUEST)
errors['incomplete_simulate_args'] = (
    1004, "Simulate requires parameters: query, sequence, action_policy",
    httplib.BAD_REQUEST)
errors['simulate_without_policy'] = (
    1005, "Simulate must be told which policy to evaluate the query on",
    httplib.BAD_REQUEST)
errors['sequence_syntax'] = (
    1006, "Syntax error in sequence", httplib.BAD_REQUEST)
errors['simulate_error'] = (
    1007, "Error in simulate procedure", httplib.INTERNAL_SERVER_ERROR)
errors['rule_already_exists'] = (
    1008, "Rule already exists", httplib.CONFLICT)
errors['schema_get_item_id'] = (
    1009, "Get item for schema does not support user-chosen ID",
    httplib.BAD_REQUEST)
errors['policy_name_must_be_provided'] = (
    1010, "A name must be provided when creating a policy",
    httplib.BAD_REQUEST)
errors['no_policy_update_owner'] = (
    1012, "The policy owner_id cannot be updated",
    httplib.BAD_REQUEST)
errors['no_policy_update_kind'] = (
    1013, "The policy kind cannot be updated",
    httplib.BAD_REQUEST)
errors['failed_to_create_policy'] = (
    1014, "A new policy could not be created",
    httplib.INTERNAL_SERVER_ERROR)
errors['policy_id_must_not_be_provided'] = (
    1015, "An ID may not be provided when creating a policy",
    httplib.BAD_REQUEST)
errors['execute_error'] = (
    1016, "Error in execution procedure", httplib.INTERNAL_SERVER_ERROR)
errors['service_action_syntax'] = (
    1017, "Incorrect action syntax. Requires: <service>:<action>",
    httplib.BAD_REQUEST)
errors['execute_action_args_syntax'] = (
    1018, "Incorrect argument syntax. "
    "Requires: {'positional': [<args>], 'named': {<key>:<value>,}}",
    httplib.BAD_REQUEST)
errors['rule_not_permitted'] = (
    1019, "Rules not permitted on non persisted policies.",
    httplib.BAD_REQUEST)
errors['policy_not_exist'] = (
    1020, "The specified policy does not exist.", httplib.NOT_FOUND)
errors['policy_rule_insertion_failure'] = (
    1021, "The policy rule could not be inserted.", httplib.BAD_REQUEST)
errors['policy_abbreviation_error'] = (
    1022, "The policy abbreviation must be a string and the length of the "
    "string must be equal to or less than 5 characters.",
    httplib.BAD_REQUEST)


def get(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][:2]


def get_num(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][0]


def get_desc(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][1]


def get_http(name):
    if name not in errors:
        name = UNKNOWN
    return errors[name][2]
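For context, a typical consumer of these helpers (mirroring the API
models later in this commit, e.g. policy_model.py) looks like:
.. code-block:: python

    # Map a symbolic error name to (id, description) and raise it as an
    # API-level error:
    (num, desc) = error_codes.get('simulate_without_policy')
    raise webservice.DataModelException(num, desc)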

congress/api/library_policy_model.py
@@ -1,150 +0,0 @@
# Copyright (c) 2017 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

from oslo_log import log as logging

from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception

LOG = logging.getLogger(__name__)


class LibraryPolicyModel(base.APIModel):
    """Model for handling API requests about Library Policies."""

    # Note(thread-safety): blocking function
    def get_items(self, params, context=None):
        """Get items in model.

        Args:
            params: A dict-like object containing parameters
                    from the request query string and body.
            context: Key-values providing frame of reference of request

        Returns: A dict containing at least a 'results' key whose value is
                 a list of items in the model.  Additional keys set in the
                 dict will also be rendered for the user.
        """
        try:
            # Note(thread-safety): blocking call
            return {"results": self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                                               'get_policies',
                                               {})}
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def get_item(self, id_, params, context=None):
        """Retrieve item with id id_ from model.

        Args:
            id_: The unique id of the item to retrieve
            params: A dict-like object containing parameters
                    from the request query string and body.
            context: Key-values providing frame of reference of request

        Returns:
            The matching item or None if no item with id id_ exists.
        """
        try:
            # Note(thread-safety): blocking call
            return self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                                   'get_policy',
                                   {'id_': id_, 'include_rules': True})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

    # Note(thread-safety): blocking function
    def add_item(self, item, params, id_=None, context=None):
        """Add item to model.

        Args:
            item: The item to add to the model
            params: A dict-like object containing parameters
                    from the request query string and body.
            id_: The unique name of the item
            context: Key-values providing frame of reference of request

        Returns:
            Tuple of (ID, newly_created_item)

        Raises:
            KeyError: ID already exists.
            DataModelException: Addition cannot be performed.
        """
        if id_ is not None:
            (num, desc) = error_codes.get('policy_id_must_not_be_provided')
            raise webservice.DataModelException(num, desc)
        try:
            # Note(thread-safety): blocking call
            policy_metadata = self.invoke_rpc(
                base.LIBRARY_SERVICE_ID, 'create_policy',
                {'policy_dict': item})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

        return (policy_metadata['id'], policy_metadata)

    # Note(thread-safety): blocking function
    def delete_item(self, id_, params, context=None):
        """Remove item from model.

        Args:
            id_: The unique name of the item to be removed
            params:
            context: Key-values providing frame of reference of request

        Returns:
            The removed item.

        Raises:
            KeyError: Item with specified id_ not present.
        """
        # Note(thread-safety): blocking call
        return self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                               'delete_policy',
                               {'id_': id_})

    def update_item(self, id_, item, params, context=None):
        """Update item with id_ with new data.

        Args:
            id_: The ID of the item to be updated
            item: The new item
            params: A dict-like object containing parameters
                    from the request query string and body.
            context: Key-values providing frame of reference of request

        Returns:
            The updated item.

        Raises:
            KeyError: Item with specified id_ not present.
        """
        # Note(thread-safety): blocking call
        try:
            return self.invoke_rpc(base.LIBRARY_SERVICE_ID,
                                   'replace_policy',
                                   {'id_': id_,
                                    'policy_dict': item})
        except exception.CongressException as e:
            raise webservice.DataModelException.create(e)

congress/api/policy_model.py
@@ -1,253 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import json
import re
import six
from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception
from congress.library_service import library_service
class PolicyModel(base.APIModel):
"""Model for handling API requests about Policies."""
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
Args:
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
try:
# Note(thread-safety): blocking call
return {"results": self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_get_policies',
{})}
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if id_ does not exist.
"""
try:
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_get_policy',
{'id_': id_})
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def add_item(self, item, params, id_=None, context=None):
"""Add item to model.
Args:
item: The item to add to the model
params: A dict-like object containing parameters
from the request query string and body.
id_: The ID of the item, or None if an ID should be generated
context: Key-values providing frame of reference of request
Returns:
Tuple of (ID, newly_created_item)
Raises:
KeyError: ID already exists.
DataModelException: Addition cannot be performed.
BadRequest: library_policy parameter and request body both present
"""
# case 1: parameter gives library policy UUID
if 'library_policy' in params:
if item is not None:
raise exception.BadRequest(
'Policy creation reqest with `library_policy` parameter '
'must not have body.')
try:
# Note(thread-safety): blocking call
library_policy_object = self.invoke_rpc(
base.LIBRARY_SERVICE_ID,
'get_policy', {'id_': params['library_policy']})
policy_metadata = self.invoke_rpc(
base.ENGINE_SERVICE_ID,
'persistent_create_policy_with_rules',
{'policy_rules_obj': library_policy_object})
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
return (policy_metadata['id'], policy_metadata)
# case 2: item contains rules
if 'rules' in item:
try:
library_service.validate_policy_item(item)
# Note(thread-safety): blocking call
policy_metadata = self.invoke_rpc(
base.ENGINE_SERVICE_ID,
'persistent_create_policy_with_rules',
{'policy_rules_obj': item})
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
return (policy_metadata['id'], policy_metadata)
# case 3: item does not contain rules
self._check_create_policy(id_, item)
name = item['name']
try:
# Note(thread-safety): blocking call
policy_metadata = self.invoke_rpc(
base.ENGINE_SERVICE_ID, 'persistent_create_policy',
{'name': name,
'abbr': item.get('abbreviation'),
'kind': item.get('kind'),
'desc': item.get('description')})
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
return (policy_metadata['id'], policy_metadata)
def _check_create_policy(self, id_, item):
if id_ is not None:
(num, desc) = error_codes.get('policy_id_must_not_be_provided')
raise webservice.DataModelException(num, desc)
if 'name' not in item:
(num, desc) = error_codes.get('policy_name_must_be_provided')
raise webservice.DataModelException(num, desc)
abbr = item.get('abbreviation')
if abbr:
            # the abbreviation column is limited to 5 chars in the policy DB
            # table, so check in the API layer and raise an exception if the
            # value is too long.
if not isinstance(abbr, six.string_types) or len(abbr) > 5:
(num, desc) = error_codes.get('policy_abbreviation_error')
raise webservice.DataModelException(num, desc)
# Note(thread-safety): blocking function
def delete_item(self, id_, params, context=None):
"""Remove item from model.
Args:
id_: The ID or name of the item to be removed
            params: A dict-like object containing parameters
                    from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The removed item.
Raises:
KeyError: Item with specified id_ not present.
"""
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_delete_policy',
{'name_or_id': id_})
def _get_boolean_param(self, key, params):
if key not in params:
return False
value = params[key]
return value.lower() == "true" or value == "1"
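    # Illustrative behavior (comment added for clarity): with
    # params={'trace': 'True'} or params={'trace': '1'} this returns True;
    # with params={'trace': '0'} or the key absent it returns False.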
# Note(thread-safety): blocking function
def simulate_action(self, params, context=None, request=None):
"""Simulate the effects of executing a sequence of updates.
:returns: the result of a query.
"""
# grab string arguments
theory = context.get('policy_id') or params.get('policy')
if theory is None:
(num, desc) = error_codes.get('simulate_without_policy')
raise webservice.DataModelException(num, desc)
body = json.loads(request.body)
query = body.get('query')
sequence = body.get('sequence')
actions = body.get('action_policy')
delta = self._get_boolean_param('delta', params)
trace = self._get_boolean_param('trace', params)
if query is None or sequence is None or actions is None:
(num, desc) = error_codes.get('incomplete_simulate_args')
raise webservice.DataModelException(num, desc)
try:
args = {'query': query, 'theory': theory, 'sequence': sequence,
'action_theory': actions, 'delta': delta,
'trace': trace, 'as_list': True}
# Note(thread-safety): blocking call
result = self.invoke_rpc(base.ENGINE_SERVICE_ID, 'simulate',
args, timeout=self.dse_long_timeout)
except exception.PolicyException as e:
(num, desc) = error_codes.get('simulate_error')
raise webservice.DataModelException(num, desc + "::" + str(e))
# always return dict
if trace:
return {'result': result[0],
'trace': result[1]}
return {'result': result}
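    # Illustrative request sketch (an assumption, not from the original
    # source): simulating a fact insertion against a policy named
    # 'classification' with an action description policy named 'action':
    #   POST /v1/policies/classification?action=simulate&delta=true
    #   {"query": "error(x)",
    #    "sequence": "q+(1)",
    #    "action_policy": "action"}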
# Note(thread-safety): blocking function
def execute_action(self, params, context=None, request=None):
"""Execute the action."""
body = json.loads(request.body)
# e.g. name = 'nova:disconnectNetwork'
        items = re.split(':', body.get('name') or '')
if len(items) != 2:
(num, desc) = error_codes.get('service_action_syntax')
raise webservice.DataModelException(num, desc)
service = items[0].strip()
action = items[1].strip()
action_args = body.get('args', {})
        if not isinstance(action_args, dict):
(num, desc) = error_codes.get('execute_action_args_syntax')
raise webservice.DataModelException(num, desc)
try:
args = {'service_name': service,
'action': action,
'action_args': action_args}
# Note(thread-safety): blocking call
self.invoke_rpc(base.ENGINE_SERVICE_ID, 'execute_action', args)
except exception.PolicyException as e:
(num, desc) = error_codes.get('execute_error')
raise webservice.DataModelException(num, desc + "::" + str(e))
return {}
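    # Illustrative request sketch (names assumed; the exact args schema
    # depends on the datasource driver):
    #   POST ...?action=execute
    #   {"name": "nova:disconnectNetwork",
    #    "args": {"positional": ["server-uuid"], "named": {}}}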

View File

@ -1,158 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import versions
from congress.api import webservice
class APIRouterV1(object):
def __init__(self, resource_mgr, process_dict):
"""Bootstrap data models and handlers for the API definition."""
# Setup /v1/
version_v1_handler = versions.VersionV1Handler(r'/v1[/]?')
resource_mgr.register_handler(version_v1_handler)
policies = process_dict['api-policy']
policy_collection_handler = webservice.CollectionHandler(
r'/v1/policies',
policies)
resource_mgr.register_handler(policy_collection_handler)
policy_path = r'/v1/policies/(?P<policy_id>[^/]+)'
policy_element_handler = webservice.ElementHandler(
policy_path,
policies,
policy_collection_handler,
allow_update=False,
allow_replace=False)
resource_mgr.register_handler(policy_element_handler)
library_policies = process_dict['api-library-policy']
library_policy_collection_handler = webservice.CollectionHandler(
r'/v1/librarypolicies',
library_policies)
resource_mgr.register_handler(library_policy_collection_handler)
library_policy_path = r'/v1/librarypolicies/(?P<policy_id>[^/]+)'
library_policy_element_handler = webservice.ElementHandler(
library_policy_path,
library_policies,
library_policy_collection_handler,
allow_update=False,
allow_replace=True)
resource_mgr.register_handler(library_policy_element_handler)
policy_rules = process_dict['api-rule']
rule_collection_handler = webservice.CollectionHandler(
r'/v1/policies/(?P<policy_id>[^/]+)/rules',
policy_rules,
"{policy_id}")
resource_mgr.register_handler(rule_collection_handler)
rule_path = (r'/v1/policies/(?P<policy_id>[^/]+)' +
r'/rules/(?P<rule_id>[^/]+)')
rule_element_handler = webservice.ElementHandler(
rule_path,
policy_rules,
"{policy_id}")
resource_mgr.register_handler(rule_element_handler)
# Setup /v1/data-sources
data_sources = process_dict['api-datasource']
ds_collection_handler = webservice.CollectionHandler(
r'/v1/data-sources',
data_sources)
resource_mgr.register_handler(ds_collection_handler)
# Setup /v1/data-sources/<ds_id>
ds_path = r'/v1/data-sources/(?P<ds_id>[^/]+)'
ds_element_handler = webservice.ElementHandler(ds_path, data_sources)
resource_mgr.register_handler(ds_element_handler)
# Setup /v1/data-sources/<ds_id>/schema
schema = process_dict['api-schema']
schema_path = "%s/schema" % ds_path
schema_element_handler = webservice.ElementHandler(schema_path, schema)
resource_mgr.register_handler(schema_element_handler)
# Setup /v1/data-sources/<ds_id>/tables/<table_id>/spec
table_schema_path = "%s/tables/(?P<table_id>[^/]+)/spec" % ds_path
table_schema_element_handler = webservice.ElementHandler(
table_schema_path,
schema)
resource_mgr.register_handler(table_schema_element_handler)
# Setup action handlers
actions = process_dict['api-action']
ds_actions_path = "%s/actions" % ds_path
ds_actions_collection_handler = webservice.CollectionHandler(
ds_actions_path, actions)
resource_mgr.register_handler(ds_actions_collection_handler)
# Setup status handlers
statuses = process_dict['api-status']
ds_status_path = "%s/status" % ds_path
ds_status_element_handler = webservice.ElementHandler(ds_status_path,
statuses)
resource_mgr.register_handler(ds_status_element_handler)
policy_status_path = "%s/status" % policy_path
policy_status_element_handler = webservice.ElementHandler(
policy_status_path,
statuses)
resource_mgr.register_handler(policy_status_element_handler)
rule_status_path = "%s/status" % rule_path
rule_status_element_handler = webservice.ElementHandler(
rule_status_path,
statuses)
resource_mgr.register_handler(rule_status_element_handler)
tables = process_dict['api-table']
tables_path = "(%s|%s)/tables" % (ds_path, policy_path)
table_collection_handler = webservice.CollectionHandler(
tables_path,
tables)
resource_mgr.register_handler(table_collection_handler)
table_path = "%s/(?P<table_id>[^/]+)" % tables_path
table_element_handler = webservice.ElementHandler(table_path, tables)
resource_mgr.register_handler(table_element_handler)
table_rows = process_dict['api-row']
rows_path = "%s/rows" % table_path
row_collection_handler = webservice.CollectionHandler(
rows_path,
table_rows, allow_update=True)
resource_mgr.register_handler(row_collection_handler)
row_path = "%s/(?P<row_id>[^/]+)" % rows_path
row_element_handler = webservice.ElementHandler(row_path, table_rows)
resource_mgr.register_handler(row_element_handler)
# Setup /v1/system/datasource-drivers
system = process_dict['api-system']
# NOTE(arosen): start url out with datasource-drivers since we don't
# yet implement /v1/system/ yet.
system_collection_handler = webservice.CollectionHandler(
r'/v1/system/drivers',
system)
resource_mgr.register_handler(system_collection_handler)
# Setup /v1/system/datasource-drivers/<driver_id>
driver_path = r'/v1/system/drivers/(?P<driver_id>[^/]+)'
driver_element_handler = webservice.ElementHandler(driver_path, system)
resource_mgr.register_handler(driver_element_handler)
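    # Descriptive summary of the routes registered above (added comment,
    # not in the original source):
    #   /v1/                                              version info
    #   /v1/policies[/<policy_id>[/rules[/<rule_id>]]]    policies and rules
    #   /v1/librarypolicies[/<policy_id>]                 policy library
    #   /v1/data-sources[/<ds_id>]                        datasources
    #   .../<ds_id>/schema | /tables/<table_id>/spec | /actions | /status
    #   (<ds_path>|<policy_path>)/tables[/<table_id>[/rows[/<row_id>]]]
    #   /v1/policies/<policy_id>/status, .../rules/<rule_id>/status
    #   /v1/system/drivers[/<driver_id>]                  driver info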

View File

@ -1,200 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
LOG = logging.getLogger(__name__)
class RowModel(base.APIModel):
"""Model for handling API requests about Rows."""
# TODO(thinrichs): No rows have IDs right now. Maybe eventually
# could make ID the hash of the row, but then might as well
# just make the ID a string repr of the row. No use case
# for it as of now since all rows are read-only.
# def get_item(self, id_, context=None):
# """Retrieve item with id id_ from model.
# Args:
# id_: The ID of the item to retrieve
# context: Key-values providing frame of reference of request
# Returns:
# The matching item or None if item with id_ does not exist.
# """
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
Args:
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
LOG.info("get_items(context=%s)", context)
gen_trace = False
if 'trace' in params and params['trace'].lower() == 'true':
gen_trace = True
# Get the caller, it should be either policy or datasource
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name, but the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # It would have saved us if table_id were a UUID rather than a name,
        # but it appears that table_id is just another word for tablename.
        # Fix: check UUID of datasource before operating; abort on mismatch.
table_id = context['table_id']
try:
args = {'table_id': table_id, 'source_id': source_id,
'trace': gen_trace}
            if caller == base.ENGINE_SERVICE_ID:
# allow extra time for row policy engine query
# Note(thread-safety): blocking call
result = self.invoke_rpc(
caller, 'get_row_data', args,
timeout=self.dse_long_timeout)
else:
# Note(thread-safety): blocking call
result = self.invoke_rpc(caller, 'get_row_data', args)
except exception.CongressException as e:
m = ("Error occurred while processing source_id '%s' for row "
"data of the table '%s'" % (source_id, table_id))
LOG.exception(m)
raise webservice.DataModelException.create(e)
        if gen_trace and caller == base.ENGINE_SERVICE_ID:
# DSE2 returns lists instead of tuples, so correct that.
results = [{'data': tuple(x['data'])} for x in result[0]]
return {'results': results,
'trace': result[1] or "Not available"}
else:
result = [{'data': tuple(x['data'])} for x in result]
return {'results': result}
# Note(thread-safety): blocking function
def update_items(self, items, params, context=None):
"""Updates all data in a table.
Args:
            items: The new rows that replace all data in the table
params: A dict-like object containing parameters from
request query
context: Key-values providing frame of reference of request
Returns: None
Raises:
            KeyError: The table id does not exist.
            DataModelException: An error occurred while replacing rows.
"""
LOG.info("update_items(context=%s)", context)
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name, but the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # It would have saved us if table_id were a UUID rather than a name,
        # but it appears that table_id is just another word for tablename.
        # Fix: check UUID of datasource before operating; abort on mismatch.
table_id = context['table_id']
try:
args = {'table_id': table_id, 'source_id': source_id,
'objs': items}
# Note(thread-safety): blocking call
self.invoke_rpc(caller, 'update_entire_data', args)
except exception.CongressException as e:
LOG.exception("Error occurred while processing updating rows "
"for source_id '%s' and table_id '%s'",
source_id, table_id)
raise webservice.DataModelException.create(e)
LOG.info("finish update_items(context=%s)", context)
LOG.debug("updated table %s with row items: %s",
table_id, str(items))
# TODO(thinrichs): It makes sense to sometimes allow users to create
# a new row for internal data sources. But since we don't have
# those yet all tuples are read-only from the API.
# def add_item(self, item, id_=None, context=None):
# """Add item to model.
# Args:
# item: The item to add to the model
# id_: The ID of the item, or None if an ID should be generated
# context: Key-values providing frame of reference of request
# Returns:
# Tuple of (ID, newly_created_item)
# Raises:
# KeyError: ID already exists.
# """
# TODO(thinrichs): once we have internal data sources,
# add the ability to update a row. (Or maybe not and implement
# via add+delete.)
# def update_item(self, id_, item, context=None):
# """Update item with id_ with new data.
# Args:
# id_: The ID of the item to be updated
# item: The new item
# context: Key-values providing frame of reference of request
# Returns:
# The updated item.
# Raises:
# KeyError: Item with specified id_ not present.
# """
# # currently a noop since the owner_id cannot be changed
# if id_ not in self.items:
# raise KeyError("Cannot update item with ID '%s': "
# "ID does not exist")
# return item
# TODO(thinrichs): once we can create, we should be able to delete
# def delete_item(self, id_, context=None):
# """Remove item from model.
# Args:
# id_: The ID of the item to be removed
# context: Key-values providing frame of reference of request
# Returns:
# The removed item.
# Raises:
# KeyError: Item with specified id_ not present.
# """

View File

@ -1,133 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import base
from congress.api import error_codes
from congress.api import webservice
from congress import exception
class RuleModel(base.APIModel):
"""Model for handling API requests about policy Rules."""
def policy_name(self, context):
if 'ds_id' in context:
return context['ds_id']
elif 'policy_id' in context:
# Note: policy_id is actually policy name
return context['policy_id']
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if item with id_ does not exist.
"""
try:
args = {'id_': id_, 'policy_name': self.policy_name(context)}
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_get_rule', args)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
Args:
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
try:
args = {'policy_name': self.policy_name(context)}
# Note(thread-safety): blocking call
rules = self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_get_rules', args)
return {'results': rules}
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def add_item(self, item, params, id_=None, context=None):
"""Add item to model.
Args:
item: The item to add to the model
params: A dict-like object containing parameters
from the request query string and body.
id_: The ID of the item, or None if an ID should be generated
context: Key-values providing frame of reference of request
Returns:
Tuple of (ID, newly_created_item)
Raises:
KeyError: ID already exists.
"""
if id_ is not None:
raise webservice.DataModelException(
*error_codes.get('add_item_id'))
try:
args = {'policy_name': self.policy_name(context),
'str_rule': item.get('rule'),
'rule_name': item.get('name'),
'comment': item.get('comment')}
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_insert_rule', args,
timeout=self.dse_long_timeout)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)
# Note(thread-safety): blocking function
def delete_item(self, id_, params, context=None):
"""Remove item from model.
Args:
id_: The ID of the item to be removed
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The removed item.
Raises:
KeyError: Item with specified id_ not present.
"""
try:
args = {'id_': id_, 'policy_name_or_id': self.policy_name(context)}
# Note(thread-safety): blocking call
return self.invoke_rpc(base.ENGINE_SERVICE_ID,
'persistent_delete_rule', args,
timeout=self.dse_long_timeout)
except exception.CongressException as e:
raise webservice.DataModelException.create(e)

View File

@ -1,69 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class SchemaModel(base.APIModel):
"""Model for handling API requests about Schemas."""
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if item with id_ does not exist.
"""
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name, but the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating; abort on mismatch.
table = context.get('table_id')
args = {'source_id': source_id}
try:
# Note(thread-safety): blocking call
schema = self.invoke_rpc(caller, 'get_datasource_schema', args)
except exception.CongressException as e:
raise webservice.DataModelException(e.code, str(e),
http_status_code=e.code)
# request to see the schema for one table
if table:
if table not in schema:
raise webservice.DataModelException(
404, ("Table '{}' for datasource '{}' has no "
"schema ".format(id_, source_id)),
http_status_code=404)
return api_utils.create_table_dict(table, schema)
tables = [api_utils.create_table_dict(table_, schema)
for table_ in schema]
return {'tables': tables}
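    # Illustrative response shape (assuming api_utils.create_table_dict
    # yields {'table_id': ..., 'columns': [...]}, which is an assumption
    # about that helper):
    #   {'tables': [{'table_id': 'servers',
    #                'columns': [{'name': 'id', ...}, ...]}]}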

View File

@ -1,59 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class StatusModel(base.APIModel):
"""Model for handling API requests about Statuses."""
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if item with id_ does not exist.
"""
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name, but the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating; abort on mismatch.
try:
rpc_args = {'params': context, 'source_id': source_id}
# Note(thread-safety): blocking call
status = self.invoke_rpc(caller, 'get_status', rpc_args)
except exception.CongressException as e:
raise webservice.DataModelException(
exception.NotFound.code, str(e),
http_status_code=exception.NotFound.code)
return status

View File

@ -1,71 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
class DatasourceDriverModel(base.APIModel):
"""Model for handling API requests about DatasourceDriver."""
def get_items(self, params, context=None):
"""Get items in model.
Args:
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
drivers = self.bus.get_drivers_info()
fields = ['id', 'description']
results = [self.bus.make_datasource_dict(
drivers[driver], fields=fields)
for driver in drivers]
return {"results": results}
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if item with id_ does not exist.
"""
datasource = context.get('driver_id')
try:
driver = self.bus.get_driver_info(datasource)
schema = self.bus.get_driver_schema(datasource)
except exception.DriverNotFound as e:
raise webservice.DataModelException(e.code, str(e),
http_status_code=e.code)
tables = [api_utils.create_table_dict(table_, schema)
for table_ in schema]
driver['tables'] = tables
return driver
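    # Illustrative return sketch (field names beyond 'tables' depend on
    # bus.get_driver_info and are assumptions here):
    #   {'id': 'nova', 'description': '...',
    #    'tables': [{'table_id': 'servers', 'columns': [...]}, ...]}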

View File

@ -1,154 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from congress.api import api_utils
from congress.api import base
from congress.api import webservice
from congress import exception
LOG = logging.getLogger(__name__)
class TableModel(base.APIModel):
"""Model for handling API requests about Tables."""
# Note(thread-safety): blocking function
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if item with id_ does not exist.
"""
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name, but the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating; abort on mismatch.
args = {'source_id': source_id, 'table_id': id_}
try:
# Note(thread-safety): blocking call
tablename = self.invoke_rpc(caller, 'get_tablename', args)
except exception.CongressException as e:
LOG.exception("Exception occurred while retrieving table %s"
"from datasource %s", id_, source_id)
raise webservice.DataModelException.create(e)
if tablename:
return {'id': tablename}
LOG.info('table id %s is not found in datasource %s', id_, source_id)
# Note(thread-safety): blocking function
def get_items(self, params, context=None):
"""Get items in model.
Args:
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
LOG.info('get_items has context %s', context)
# Note(thread-safety): blocking call
caller, source_id = api_utils.get_id_from_context(context)
        # FIXME(thread-safety): in DSE2, the returned caller can be a
        # datasource name, but the datasource name may now refer to a new,
        # unrelated datasource, causing the rest of this code to operate on
        # an unintended datasource.
        # Fix: check UUID of datasource before operating; abort on mismatch.
try:
# Note(thread-safety): blocking call
tablenames = self.invoke_rpc(caller, 'get_tablenames',
{'source_id': source_id})
except exception.CongressException as e:
LOG.exception("Exception occurred while retrieving tables"
"from datasource %s", source_id)
raise webservice.DataModelException.create(e)
# when the source_id doesn't have any table, 'tablenames' is set([])
        if isinstance(tablenames, (set, list)):
return {'results': [{'id': x} for x in tablenames]}
# Tables can only be created/updated/deleted by writing policy
# or by adding new data sources. Once we have internal data sources
# we need to implement all of these.
# def add_item(self, item, id_=None, context=None):
# """Add item to model.
# Args:
# item: The item to add to the model
# id_: The ID of the item, or None if an ID should be generated
# context: Key-values providing frame of reference of request
# Returns:
# Tuple of (ID, newly_created_item)
# Raises:
# KeyError: ID already exists.
# """
# def update_item(self, id_, item, context=None):
# """Update item with id_ with new data.
# Args:
# id_: The ID of the item to be updated
# item: The new item
# context: Key-values providing frame of reference of request
# Returns:
# The updated item.
# Raises:
# KeyError: Item with specified id_ not present.
# """
# # currently a noop since the owner_id cannot be changed
# if id_ not in self.items:
# raise KeyError("Cannot update item with ID '%s': "
# "ID does not exist")
# return item
# def delete_item(self, id_, context=None):
# """Remove item from model.
# Args:
# id_: The ID of the item to be removed
# context: Key-values providing frame of reference of request
# Returns:
# The removed item.
# Raises:
# KeyError: Item with specified id_ not present.
# """

View File

@ -1,146 +0,0 @@
# Copyright 2015 Huawei.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import copy
import json
import os
from six.moves import http_client
import webob
import webob.dec
from congress.api import webservice
VERSIONS = {
"v1": {
"id": "v1",
"status": "CURRENT",
"updated": "2013-08-12T17:42:13Z",
"links": [
{
"rel": "describedby",
"type": "text/html",
"href": "http://congress.readthedocs.org/",
},
],
},
}
def _get_view_builder(request):
base_url = request.application_url
return ViewBuilder(base_url)
class ViewBuilder(object):
def __init__(self, base_url):
""":param base_url: url of the root wsgi application."""
self.base_url = base_url
def build_choices(self, versions, request):
version_objs = []
        for key in sorted(versions.keys()):
            version = versions[key]
version_objs.append({
"id": version['id'],
"status": version['status'],
"updated": version['updated'],
"links": self._build_links(version, request.path),
})
return dict(choices=version_objs)
def build_versions(self, versions):
version_objs = []
        for key in sorted(versions.keys()):
            version = versions[key]
version_objs.append({
"id": version['id'],
"status": version['status'],
"updated": version['updated'],
"links": self._build_links(version),
})
return dict(versions=version_objs)
def build_version(self, version):
        retval = copy.deepcopy(version)
        retval['links'].insert(0, {
            "rel": "self",
            "href": self.base_url.rstrip('/') + '/',
        })
        return dict(version=retval)
def _build_links(self, version_data, path=None):
"""Generate a container of links that refer to the provided version."""
href = self._generate_href(version_data['id'], path)
links = [
{
"rel": "self",
"href": href,
},
]
return links
def _generate_href(self, version, path=None):
"""Create an url that refers to a specific version."""
if path:
path = path.strip('/')
return os.path.join(self.base_url, version, path)
else:
return os.path.join(self.base_url, version) + '/'
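    # Illustrative results (assuming base_url='http://localhost:1789'):
    #   _generate_href('v1')          -> 'http://localhost:1789/v1/'
    #   _generate_href('v1', '/x/')   -> 'http://localhost:1789/v1/x'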
class Versions(object):
@classmethod
def factory(cls, global_config, **local_config):
return cls()
@webob.dec.wsgify(RequestClass=webob.Request)
def __call__(self, request):
"""Respond to a request for all Congress API versions."""
builder = _get_view_builder(request)
if request.path == '/':
body = builder.build_versions(VERSIONS)
status = http_client.OK
else:
body = builder.build_choices(VERSIONS, request)
status = http_client.MULTIPLE_CHOICES
return webob.Response(body="%s\n" % json.dumps(body),
status=status,
content_type='application/json',
charset='UTF-8')
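    # Illustrative response for GET / (shape follows VERSIONS and
    # _build_links above):
    #   {"versions": [{"id": "v1", "status": "CURRENT",
    #                  "updated": "2013-08-12T17:42:13Z",
    #                  "links": [{"rel": "self",
    #                             "href": "http://<host>/v1/"}]}]}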
class VersionV1Handler(webservice.AbstractApiHandler):
def handle_request(self, request):
builder = _get_view_builder(request)
body = builder.build_version(VERSIONS['v1'])
return webob.Response(body="%s\n" % json.dumps(body),
status=http_client.OK,
content_type='application/json',
charset='UTF-8')

View File

@ -1,629 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
try:
# For Python 3
import http.client as httplib
except ImportError:
import httplib
import json
import re
from oslo_config import cfg
from oslo_db import exception as db_exc
from oslo_log import log as logging
from oslo_utils import uuidutils
import six
import webob
import webob.dec
from congress.api import error_codes
from congress.common import policy
from congress import exception
LOG = logging.getLogger(__name__)
def error_response(status, error_code, description, data=None):
"""Construct and return an error response.
Args:
status: The HTTP status code of the response.
error_code: The application-specific error code.
description: Friendly G11N-enabled string corresponding to error_code.
data: Additional data (not G11N-enabled) for the API consumer.
"""
raw_body = {'error': {
'message': description,
'error_code': error_code,
'error_data': data
}
}
body = '%s\n' % json.dumps(raw_body)
return webob.Response(body=body, status=status,
content_type='application/json',
charset='UTF-8')
NOT_FOUND_RESPONSE = error_response(httplib.NOT_FOUND,
httplib.NOT_FOUND,
"The resouce could not be found.")
NOT_SUPPORTED_RESPONSE = error_response(httplib.NOT_IMPLEMENTED,
httplib.NOT_IMPLEMENTED,
"Method not supported")
INTERNAL_ERROR_RESPONSE = error_response(httplib.INTERNAL_SERVER_ERROR,
httplib.INTERNAL_SERVER_ERROR,
"Internal server error")
def original_msg(e):
    '''Strip the traceback oslo.messaging appends, returning the original exception msg'''
msg = e.args[0].split('\nTraceback (most recent call last):')[0]
if len(msg) != len(e.args[0]):
if len(msg) > 0 and msg[-1] in ("'", '"'):
msg = msg[:-1]
if len(msg) > 1 and msg[0:2] in ('u"', "u'"):
msg = msg[2:]
elif len(msg) > 0 and msg[0] in ("'", '"'):
msg = msg[1:]
return msg
    else:  # return untouched message if format not as expected
return e.args[0]
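# Illustrative example (the quoted wire format is an assumption): given
#   e.args[0] == "u'Rule already exists'\nTraceback (most recent call last): ..."
# original_msg(e) returns "Rule already exists".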
class DataModelException(Exception):
"""Congress API Data Model Exception
Custom exception raised by API Data Model methods to communicate errors to
the API framework.
"""
def __init__(self, error_code, description, data=None,
http_status_code=httplib.BAD_REQUEST):
super(DataModelException, self).__init__(description)
self.error_code = error_code
self.description = description
self.data = data
self.http_status_code = http_status_code
@classmethod
def create(cls, error):
"""Generate a DataModelException from an existing CongressException.
        :param error: has a 'name' field corresponding to an error_codes
            error-name. It may also have a 'data' field.
Returns a DataModelException properly populated.
"""
name = getattr(error, "name", None)
if name:
error_code = error_codes.get_num(name)
description = error_codes.get_desc(name)
http_status_code = error_codes.get_http(name)
else:
# Check if it's default http error or else return 'Unknown error'
error_code = error.code or httplib.BAD_REQUEST
if error_code not in httplib.responses:
error_code = httplib.BAD_REQUEST
description = httplib.responses.get(error_code, "Unknown error")
http_status_code = error_code
if str(error):
description += "::" + original_msg(error)
return cls(error_code=error_code,
description=description,
data=getattr(error, 'data', None),
http_status_code=http_status_code)
def rest_response(self):
return error_response(self.http_status_code, self.error_code,
self.description, self.data)
class AbstractApiHandler(object):
"""Abstract handler for API requests.
Attributes:
path_regex: The regular expression matching paths supported by this
handler.
"""
def __init__(self, path_regex):
if path_regex[-1] != '$':
path_regex += "$"
# we only use 'match' so no need to mark the beginning of string
self.path_regex = path_regex
self.path_re = re.compile(path_regex)
def __str__(self):
return "%s(%s)" % (self.__class__.__name__, self.path_re.pattern)
def _get_context(self, request):
"""Return dict of variables in request path."""
m = self.path_re.match(request.path)
# remove all the None values before returning
return dict([(k, v) for k, v in m.groupdict().items()
if v is not None])
def _parse_json_body(self, request):
content_type = (request.content_type or "application/json").lower()
if content_type != 'application/json':
raise DataModelException(
400, "Unsupported Content-Type; must be 'application/json'")
if request.charset != 'UTF-8':
raise DataModelException(
400, "Unsupported charset: must be 'UTF-8'")
try:
request.parsed_body = json.loads(request.body.decode('utf-8'))
except ValueError as e:
msg = "Failed to parse body as %s: %s" % (content_type, e)
raise DataModelException(400, msg)
return request.parsed_body
def handles_request(self, request):
"""Return true iff handler supports the request."""
m = self.path_re.match(request.path)
return m is not None
def handle_request(self, request):
"""Handle a REST request.
Args:
request: A webob request object.
Returns:
A webob response object.
"""
return NOT_SUPPORTED_RESPONSE
class ElementHandler(AbstractApiHandler):
"""API handler for REST element resources.
REST elements represent individual entities in the data model, and often
support the following operations:
- Read a representation of the element
- Update (replace) the entire element with a new version
- Update (patch) parts of the element with new values
- Delete the element
Elements may also exhibit 'controller' semantics for RPC-style method
    invocation; however, this is not currently supported.
"""
def __init__(self, path_regex, model,
collection_handler=None, allow_read=True, allow_actions=True,
allow_replace=True, allow_update=True, allow_delete=True):
"""Initialize an element handler.
Args:
path_regex: A regular expression that matches the full path
to the element. If multiple handlers match a request path,
the handler with the highest registration search_index wins.
model: A resource data model instance
collection_handler: The collection handler this element
is a member of or None if the element is not a member of a
collection. (Used for named creation of elements)
            allow_read: True if element supports read
            allow_actions: True if element supports POST-style actions
allow_replace: True if element supports replace
allow_update: True if element supports update
allow_delete: True if element supports delete
"""
super(ElementHandler, self).__init__(path_regex)
self.model = model
self.collection_handler = collection_handler
self.allow_read = allow_read
self.allow_actions = allow_actions
self.allow_replace = allow_replace
self.allow_update = allow_update
self.allow_delete = allow_delete
def _get_element_id(self, request):
m = self.path_re.match(request.path)
if m.groups():
return m.groups()[-1] # TODO(pballand): make robust
return None
def handle_request(self, request):
"""Handle a REST request.
Args:
request: A webob request object.
Returns:
A webob response object.
"""
try:
if request.method == 'GET' and self.allow_read:
return self.read(request)
elif request.method == 'POST' and self.allow_actions:
return self.action(request)
elif request.method == 'PUT' and self.allow_replace:
return self.replace(request)
elif request.method == 'PATCH' and self.allow_update:
return self.update(request)
elif request.method == 'DELETE' and self.allow_delete:
return self.delete(request)
return NOT_SUPPORTED_RESPONSE
except db_exc.DBError:
LOG.exception('Database backend experienced an unknown error.')
raise exception.DatabaseError
def read(self, request):
if not hasattr(self.model, 'get_item'):
return NOT_SUPPORTED_RESPONSE
id_ = self._get_element_id(request)
item = self.model.get_item(id_, request.params,
context=self._get_context(request))
if item is None:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def action(self, request):
# Non-CRUD operations must specify an 'action' parameter
action = request.params.getall('action')
if len(action) != 1:
if len(action) > 1:
errstr = "Action parameter may not be provided multiple times."
else:
errstr = "Missing required action parameter."
return error_response(httplib.BAD_REQUEST, 400, errstr)
model_method = "%s_action" % action[0].replace('-', '_')
f = getattr(self.model, model_method, None)
if f is None:
return NOT_SUPPORTED_RESPONSE
try:
response = f(request.params, context=self._get_context(request),
request=request)
if isinstance(response, webob.Response):
return response
return webob.Response(body="%s\n" % json.dumps(response),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
except TypeError:
LOG.exception("Error occurred")
return NOT_SUPPORTED_RESPONSE
def replace(self, request):
if not hasattr(self.model, 'update_item'):
return NOT_SUPPORTED_RESPONSE
id_ = self._get_element_id(request)
try:
item = self._parse_json_body(request)
self.model.update_item(id_, item, request.params,
context=self._get_context(request))
except KeyError as e:
if (self.collection_handler and
getattr(self.collection_handler, 'allow_named_create',
False)):
return self.collection_handler.create_member(request, id_=id_)
return error_response(httplib.NOT_FOUND, 404,
original_msg(e) or 'Not found')
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def update(self, request):
if not (hasattr(self.model, 'update_item') or
hasattr(self.model, 'get_item')):
return NOT_SUPPORTED_RESPONSE
context = self._get_context(request)
id_ = self._get_element_id(request)
item = self.model.get_item(id_, request.params, context=context)
if item is None:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
updates = self._parse_json_body(request)
item.update(updates)
self.model.update_item(id_, item, request.params, context=context)
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def delete(self, request):
if not hasattr(self.model, 'delete_item'):
return NOT_SUPPORTED_RESPONSE
id_ = self._get_element_id(request)
try:
item = self.model.delete_item(
id_, request.params, context=self._get_context(request))
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.OK,
content_type='application/json',
charset='UTF-8')
except KeyError as e:
LOG.exception("Error occurred")
return error_response(httplib.NOT_FOUND, 404,
original_msg(e) or 'Not found')
class CollectionHandler(AbstractApiHandler):
"""API handler for REST collection resources.
REST collections represent collections of entities in the data model, and
often support the following operations:
- List elements in the collection
- Create new element in the collection
The following less-common collection operations are NOT SUPPORTED:
- Replace all elements in the collection
- Delete all elements in the collection
"""
def __init__(self, path_regex, model,
allow_named_create=True, allow_list=True, allow_create=True,
allow_update=False):
"""Initialize a collection handler.
Args:
path_regex: A regular expression matching the collection base path.
model: A resource data model instance
allow_named_create: True if caller can specify ID of new items.
allow_list: True if collection supports listing elements.
allow_create: True if collection supports creating elements.
"""
super(CollectionHandler, self).__init__(path_regex)
self.model = model
self.allow_named_create = allow_named_create
self.allow_list = allow_list
self.allow_create = allow_create
self.allow_update = allow_update
def handle_request(self, request):
"""Handle a REST request.
Args:
request: A webob request object.
Returns:
A webob response object.
"""
# NOTE(arosen): only do policy.json if keystone is used for now.
if cfg.CONF.auth_strategy == "keystone":
context = request.environ['congress.context']
target = {
'project_id': context.project_id,
'user_id': context.user_id
}
# NOTE(arosen): today congress only enforces API policy on which
# API calls we allow tenants to make with their given roles.
action_type = self._get_action_type(request.method)
# FIXME(arosen): There should be a cleaner way to do this.
model_name = self.path_regex.split('/')[1]
action = "%s_%s" % (action_type, model_name)
# TODO(arosen): we should handle serializing the
# response in one place
try:
policy.enforce(context, action, target)
except exception.PolicyNotAuthorized as e:
LOG.info(e)
return webob.Response(body=six.text_type(e), status=e.code,
content_type='application/json',
charset='UTF-8')
if request.method == 'GET' and self.allow_list:
return self.list_members(request)
elif request.method == 'POST' and self.allow_create:
return self.create_member(request)
elif request.method == 'PUT' and self.allow_update:
return self.update_members(request)
return NOT_SUPPORTED_RESPONSE
def _get_action_type(self, method):
if method == 'GET':
return 'get'
elif method == 'POST':
return 'create'
elif method == 'DELETE':
return 'delete'
elif method == 'PUT' or method == 'PATCH':
return 'update'
else:
# should never get here but just in case ;)
            # FIXME(arosen): raise NotImplementedError instead and
# make sure we return that as an http code.
raise TypeError("Invalid HTTP Method")
def list_members(self, request):
if not hasattr(self.model, 'get_items'):
return NOT_SUPPORTED_RESPONSE
items = self.model.get_items(request.params,
context=self._get_context(request))
if items is None:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
elif 'results' not in items:
return error_response(httplib.NOT_FOUND, 404, 'Not found')
body = "%s\n" % json.dumps(items, indent=2)
return webob.Response(body=body, status=httplib.OK,
content_type='application/json',
charset='UTF-8')
def create_member(self, request, id_=None):
if not hasattr(self.model, 'add_item'):
return NOT_SUPPORTED_RESPONSE
item = self._parse_json_body(request)
context = self._get_context(request)
try:
id_, item = self.model.add_item(
item, request.params, id_, context=context)
except KeyError as e:
LOG.exception("Error occurred")
return error_response(httplib.CONFLICT, httplib.CONFLICT,
original_msg(e) or 'Element already exists')
item['id'] = id_
return webob.Response(body="%s\n" % json.dumps(item),
status=httplib.CREATED,
content_type='application/json',
location="%s/%s" % (request.path, id_),
charset='UTF-8')
def update_members(self, request):
if not hasattr(self.model, 'update_items'):
return NOT_SUPPORTED_RESPONSE
items = self._parse_json_body(request)
context = self._get_context(request)
try:
self.model.update_items(items, request.params, context)
except KeyError as e:
LOG.exception("Error occurred")
return error_response(httplib.BAD_REQUEST, httplib.BAD_REQUEST,
original_msg(e) or
'Update %s Failed' % context['table_id'])
return webob.Response(body="", status=httplib.OK,
content_type='application/json',
charset='UTF-8')
class SimpleDataModel(object):
"""A container providing access to a single type of data."""
def __init__(self, model_name):
self.model_name = model_name
self.items = {}
@staticmethod
def _context_str(context):
context = context or {}
return ".".join(
["%s:%s" % (k, context[k]) for k in sorted(context.keys())])
def get_items(self, params, context=None):
"""Get items in model.
Args:
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns: A dict containing at least a 'results' key whose value is
a list of items in the model. Additional keys set in the
dict will also be rendered for the user.
"""
cstr = self._context_str(context)
results = list(self.items.setdefault(cstr, {}).values())
return {'results': results}
def add_item(self, item, params, id_=None, context=None):
"""Add item to model.
Args:
item: The item to add to the model
params: A dict-like object containing parameters
from the request query string and body.
id_: The ID of the item, or None if an ID should be generated
context: Key-values providing frame of reference of request
Returns:
Tuple of (ID, newly_created_item)
Raises:
KeyError: ID already exists.
DataModelException: Addition cannot be performed.
"""
cstr = self._context_str(context)
if id_ is None:
id_ = uuidutils.generate_uuid()
if id_ in self.items.setdefault(cstr, {}):
raise KeyError("Cannot create item with ID '%s': "
"ID already exists" % id_)
self.items[cstr][id_] = item
return (id_, item)
def get_item(self, id_, params, context=None):
"""Retrieve item with id id_ from model.
Args:
id_: The ID of the item to retrieve
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The matching item or None if item with id_ does not exist.
"""
cstr = self._context_str(context)
return self.items.setdefault(cstr, {}).get(id_)
def update_item(self, id_, item, params, context=None):
"""Update item with id_ with new data.
Args:
id_: The ID of the item to be updated
item: The new item
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The updated item.
Raises:
KeyError: Item with specified id_ not present.
DataModelException: Update cannot be performed.
"""
cstr = self._context_str(context)
if id_ not in self.items.setdefault(cstr, {}):
raise KeyError("Cannot update item with ID '%s': "
"ID does not exist" % id_)
self.items.setdefault(cstr, {})[id_] = item
return item
def delete_item(self, id_, params, context=None):
"""Remove item from model.
Args:
id_: The ID of the item to be removed
params: A dict-like object containing parameters
from the request query string and body.
context: Key-values providing frame of reference of request
Returns:
The removed item.
Raises:
KeyError: Item with specified id_ not present.
"""
cstr = self._context_str(context)
ret = self.items.setdefault(cstr, {})[id_]
del self.items[cstr][id_]
return ret
def update_items(self, items, params, context=None):
"""Update items in the model.
Args:
items: A dict-like object containing new data
params: A dict-like object containing parameters
context: Key-values providing frame of reference of request
Returns:
None.
"""
self.items = items
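    # Minimal usage sketch (illustrative, not from the original source):
    #   model = SimpleDataModel('notes')
    #   id_, item = model.add_item({'msg': 'hi'}, params={})
    #   model.get_item(id_, params={})    # -> {'msg': 'hi'}
    #   model.get_items(params={})        # -> {'results': [{'msg': 'hi'}]}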

View File

@ -1,79 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_config import cfg
from oslo_log import log as logging
from oslo_middleware import request_id
import webob.dec
import webob.exc
from congress.common import config
from congress.common import wsgi
from congress import context
LOG = logging.getLogger(__name__)
class CongressKeystoneContext(wsgi.Middleware):
"""Make a request context from keystone headers."""
@webob.dec.wsgify
def __call__(self, req):
# Determine the user ID
user_id = req.headers.get('X_USER_ID')
if not user_id:
LOG.debug("X_USER_ID is not found in request")
return webob.exc.HTTPUnauthorized()
# Determine the tenant
tenant_id = req.headers.get('X_PROJECT_ID')
# Suck out the roles
roles = [r.strip() for r in req.headers.get('X_ROLES', '').split(',')]
# Human-friendly names
tenant_name = req.headers.get('X_PROJECT_NAME')
user_name = req.headers.get('X_USER_NAME')
# Use request_id if already set
req_id = req.environ.get(request_id.ENV_REQUEST_ID)
# Create a context with the authentication data
ctx = context.RequestContext(user_id, tenant_id, roles=roles,
user_name=user_name,
tenant_name=tenant_name,
request_id=req_id)
# Inject the context...
req.environ['congress.context'] = ctx
return self.application
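    # Illustrative mapping (header values assumed; webob treats '-' and '_'
    # in header names interchangeably): a request carrying
    #   X-User-Id: u1, X-Project-Id: t1, X-Roles: admin,member
    # (as set by keystonemiddleware) yields a RequestContext with
    # user_id='u1', tenant_id='t1', roles=['admin', 'member'].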
def pipeline_factory(loader, global_conf, **local_conf):
"""Create a paste pipeline based on the 'auth_strategy' config option."""
config.set_config_defaults()
pipeline = local_conf[cfg.CONF.auth_strategy]
pipeline = pipeline.split()
filters = [loader.get_filter(n) for n in pipeline[:-1]]
app = loader.get_app(pipeline[-1])
filters.reverse()
    for filter_ in filters:
        app = filter_(app)
return app
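# Illustrative api-paste.ini fragment consumed by pipeline_factory (a
# sketch; the shipped file may differ). The key matching
# cfg.CONF.auth_strategy lists the filters to apply; the last name is
# the app:
#   [composite:congress_api_v1]
#   paste.composite_factory = congress.auth:pipeline_factory
#   keystone = cors request_id authtoken keystonecontext congress_api
#   noauth = cors request_id congress_api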

View File

@ -1,184 +0,0 @@
# Copyright 2014 VMware
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import os
import socket
from oslo_config import cfg
from oslo_db import options as db_options
from oslo_log import log as logging
from oslo_middleware import cors
from oslo_policy import opts as policy_opts
from congress import version
LOG = logging.getLogger(__name__)
core_opts = [
cfg.HostAddressOpt('bind_host', default='0.0.0.0',
help="The host IP to bind to"),
cfg.PortOpt('bind_port', default=1789,
help="The port to bind to"),
cfg.IntOpt('max_simultaneous_requests', default=1024,
help="Thread pool size for eventlet."),
cfg.BoolOpt('tcp_keepalive', default=False,
help='Set this to true to enable TCP_KEEPALIVE socket option '
'on connections received by the API server.'),
cfg.IntOpt('tcp_keepidle',
default=600,
help='Sets the value of TCP_KEEPIDLE in seconds for each '
'server socket. Only applies if tcp_keepalive is '
'true. Not supported on OS X.'),
cfg.StrOpt('policy_path',
help="The path to the latest policy dump",
deprecated_for_removal=True,
deprecated_reason='No longer used'),
cfg.StrOpt('datasource_file',
deprecated_for_removal=True,
help="The file containing datasource configuration"),
cfg.StrOpt('root_path',
deprecated_for_removal=True,
deprecated_reason='the path is now calculated automatically '
'during initialization.',
help="The absolute path to the congress repo"),
cfg.IntOpt('api_workers', default=1,
help='The number of worker processes to serve the congress '
'API application.'),
cfg.StrOpt('api_paste_config', default='api-paste.ini',
help=_('The API paste config file to use')),
cfg.StrOpt('auth_strategy', default='keystone',
help=_('The type of authentication to use')),
cfg.ListOpt('drivers',
default=[],
help=_('List of driver class paths to import.')),
cfg.IntOpt('datasource_sync_period', default=60,
help='The number of seconds to wait between synchronizing '
'datasource config from the database'),
cfg.BoolOpt('enable_execute_action', default=True,
help='Set the flag to False if you don\'t want Congress '
'to execute actions.'),
cfg.BoolOpt('replicated_policy_engine', default=False,
help='Set the flag to use congress with replicated policy '
'engines.'),
cfg.StrOpt('policy_library_path', default='/etc/congress/library',
help=_('The directory containing library policy files.')),
cfg.BoolOpt('distributed_architecture',
deprecated_for_removal=True,
deprecated_reason='distributed architecture is now the only '
'supported configuration.',
help="Set the flag to use congress distributed architecture."),
]
# Register the configuration options
cfg.CONF.register_opts(core_opts)
dse_opts = [
cfg.StrOpt('bus_id', default='bus',
help='Unique ID of this DSE bus'),
cfg.IntOpt('ping_timeout', default=5,
help='RPC short timeout in seconds; used to ping destination'),
cfg.IntOpt('long_timeout', default=120,
help='RPC long timeout in seconds; used on potentially long '
'running requests such as datasource action and PE row '
'query'),
cfg.IntOpt('time_to_resub', default=10,
help='Time in seconds a subscriber will wait for a missing '
'update before attempting to resubscribe to the publisher'),
cfg.BoolOpt('execute_action_retry', default=False,
help='Set the flag to True to make Congress retry execute '
'actions; may cause duplicate executions.'),
cfg.IntOpt('execute_action_retry_timeout', default=600,
help='The number of seconds to retry execute action before '
'giving up. Zero or negative value means never give up.'),
]
# Register dse opts
cfg.CONF.register_opts(dse_opts, group='dse')
policy_opts.set_defaults(cfg.CONF, 'policy.json')
logging.register_options(cfg.CONF)
_SQL_CONNECTION_DEFAULT = 'sqlite://'
# Update the default QueuePool parameters. These can be tweaked by the
# configuration variables - max_pool_size, max_overflow and pool_timeout
db_options.set_defaults(cfg.CONF,
connection=_SQL_CONNECTION_DEFAULT,
max_pool_size=10, max_overflow=20, pool_timeout=10)
# Command line options
cli_opts = [
cfg.BoolOpt('datasources', default=False,
help='Use this option to deploy the datasources.'),
cfg.BoolOpt('api', default=False,
help='Use this option to deploy API service'),
cfg.BoolOpt('policy-engine', default=False,
help='Use this option to deploy policy engine service.'),
cfg.StrOpt('node-id', default=socket.gethostname(),
help='A unique ID for this node. Must be unique across all '
'nodes with the same bus_id.'),
cfg.BoolOpt('delete-missing-driver-datasources', default=False,
help='Use this option to delete datasources with missing '
'drivers from DB')
]
cfg.CONF.register_cli_opts(cli_opts)
def init(args, **kwargs):
cfg.CONF(args=args, project='congress',
version='%%(prog)s %s' % version.version_info.release_string(),
**kwargs)
def setup_logging():
"""Sets up logging for the congress package."""
logging.setup(cfg.CONF, 'congress')
def find_paste_config():
config_path = cfg.CONF.find_file(cfg.CONF.api_paste_config)
if not config_path:
raise cfg.ConfigFilesNotFoundError(
config_files=[cfg.CONF.api_paste_config])
config_path = os.path.abspath(config_path)
LOG.info(("Config paste file: %s"), config_path)
return config_path
def set_config_defaults():
"""This method updates all configuration default values."""
# CORS Defaults
# TODO(krotscheck): Update with https://review.openstack.org/#/c/285368/
cfg.set_defaults(cors.CORS_OPTS,
allow_headers=['X-Auth-Token',
'X-OpenStack-Request-ID',
'X-Identity-Status',
'X-Roles',
'X-Service-Catalog',
'X-User-Id',
'X-Tenant-Id'],
expose_headers=['X-Auth-Token',
'X-OpenStack-Request-ID',
'X-Subject-Token',
'X-Service-Token'],
allow_methods=['GET',
'PUT',
'POST',
'DELETE',
'PATCH']
)
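A minimal sketch of how an entry point might drive the helpers defined above, assuming this module is importable as congress.common.config.

import sys

from congress.common import config

def main():
    config.init(sys.argv[1:])      # parse CLI args and config files
    config.setup_logging()         # configure oslo logging for congress
    config.set_config_defaults()   # apply the CORS defaults above
    print('paste config:', config.find_paste_config())

if __name__ == '__main__':
    main()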

View File

@ -1,225 +0,0 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import errno
import re
import socket
import ssl
import sys
import eventlet
import eventlet.wsgi
import greenlet
import json
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import service
from paste import deploy
from congress.dse2 import dse_node
from congress import exception
LOG = logging.getLogger(__name__)
class EventletFilteringLogger(object):
# NOTE(morganfainberg): This logger is designed to filter out specific
# Tracebacks to limit the amount of data that eventlet can log. In the
# case of broken sockets (EPIPE and ECONNRESET), we are seeing a huge
# volume of data being written to the logs due to ~14 lines+ per traceback.
# The traceback in these cases are, at best, useful for limited debugging
# cases.
def __init__(self, logger):
self.logger = logger
self.level = logger.logger.level
self.regex = re.compile(r'errno (%d|%d)' %
(errno.EPIPE, errno.ECONNRESET), re.IGNORECASE)
def write(self, msg):
m = self.regex.search(msg)
if m:
self.logger.log(logging.logging.DEBUG,
'Error(%s) writing to socket.',
m.group(1))
else:
self.logger.log(self.level, msg.rstrip())
class Server(service.Service):
"""Server class to Data Service Node without API services."""
def __init__(self, name, bus_id=None):
super(Server, self).__init__()
self.name = name
self.node = dse_node.DseNode(cfg.CONF, self.name, [],
partition_id=bus_id)
def start(self):
self.node.start()
def stop(self):
self.node.stop()
class APIServer(service.ServiceBase):
"""Server class to Data Service Node with API services.
This server has All API services in itself.
"""
def __init__(self, app_conf, name, host=None, port=None, threads=1000,
keepalive=False, keepidle=None, bus_id=None, **kwargs):
self.app_conf = app_conf
self.name = name
self.application = None
self.host = host or '0.0.0.0'
self.port = port or 0
self.pool = eventlet.GreenPool(threads)
self.socket_info = {}
self.greenthread = None
self.do_ssl = False
self.cert_required = False
self.keepalive = keepalive
self.keepidle = keepidle
self.socket = None
self.bus_id = bus_id
# store API, policy-engine, datasource flags; for use in start()
self.flags = kwargs
# TODO(masa): To support Active-Active HA with DseNode on any
# driver of oslo.messaging, make sure to use same partition_id
# among multi DseNodes sharing same message topic namespace.
def start(self, key=None, backlog=128):
"""Run a WSGI server with the given application."""
if self.socket is None:
self.listen(key=key, backlog=backlog)
try:
kwargs = {'global_conf':
{'node_id': self.name,
'bus_id': self.bus_id,
'flags': json.dumps(self.flags)}}
self.application = deploy.loadapp('config:%s' % self.app_conf,
name='congress', **kwargs)
except Exception:
LOG.exception('Failed to start %s server', self.name)
raise exception.CongressException(
'Failed to initialize %s server' % self.name)
self.greenthread = self.pool.spawn(self._run,
self.application,
self.socket)
def listen(self, key=None, backlog=128):
"""Create and start listening on socket.
Call before forking worker processes.
Raises Exception if this has already been called.
"""
if self.socket is not None:
raise Exception(_('Server can only listen once.'))
LOG.info(('Starting %(arg0)s on %(host)s:%(port)s'),
{'arg0': sys.argv[0],
'host': self.host,
'port': self.port})
# TODO(dims): eventlet's green dns/socket module does not actually
# support IPv6 in getaddrinfo(). We need to get around this in the
# future or monitor upstream for a fix
info = socket.getaddrinfo(self.host,
self.port,
socket.AF_UNSPEC,
socket.SOCK_STREAM)[0]
_socket = eventlet.listen(info[-1],
family=info[0],
backlog=backlog)
if key:
self.socket_info[key] = _socket.getsockname()
# SSL is enabled
if self.do_ssl:
if self.cert_required:
cert_reqs = ssl.CERT_REQUIRED
else:
cert_reqs = ssl.CERT_NONE
sslsocket = eventlet.wrap_ssl(_socket, certfile=self.certfile,
keyfile=self.keyfile,
server_side=True,
cert_reqs=cert_reqs,
ca_certs=self.ca_certs)
_socket = sslsocket
# Optionally enable keepalive on the wsgi socket.
if self.keepalive:
_socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# This option isn't available in the OS X version of eventlet
if hasattr(socket, 'TCP_KEEPIDLE') and self.keepidle is not None:
_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
self.keepidle)
self.socket = _socket
def set_ssl(self, certfile, keyfile=None, ca_certs=None,
cert_required=True):
self.certfile = certfile
self.keyfile = keyfile
self.ca_certs = ca_certs
self.cert_required = cert_required
self.do_ssl = True
def kill(self):
if self.greenthread is not None:
self.greenthread.kill()
def stop(self):
self.kill()
# We're not able to stop the DseNode in this case. Is there a need to
# stop the ApiServer without also exiting the process?
def reset(self):
LOG.info("reset() not implemented yet")
def wait(self):
"""Wait until all servers have completed running."""
try:
self.pool.waitall()
except KeyboardInterrupt:
pass
except greenlet.GreenletExit:
pass
def _run(self, application, socket):
"""Start a WSGI server in a new green thread."""
logger = logging.getLogger('eventlet.wsgi.server')
try:
eventlet.wsgi.server(socket, application, max_size=1000,
log=EventletFilteringLogger(logger),
debug=False)
except greenlet.GreenletExit:
# Wait until all servers have completed running
pass
except Exception:
LOG.exception(_('Server error'))
raise
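A hedged sketch of wiring up the APIServer above; the node name, address, and service flags are illustrative, and it assumes congress.common.config has already been initialized.

from congress.common import config

server = APIServer(app_conf=config.find_paste_config(),
                   name='congress-api',
                   host='127.0.0.1', port=1789,
                   api=True, policy_engine=True, datasources=False)
server.start()   # load the paste app and spawn the wsgi greenthread
server.wait()    # block until the green pool drains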

View File

@ -1,128 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Policy Engine For Auth on API calls."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_config import cfg
from oslo_policy import policy
from congress import exception
_ENFORCER = None
def reset():
global _ENFORCER
if _ENFORCER:
_ENFORCER.clear()
_ENFORCER = None
def init(policy_file=None, rules=None, default_rule=None, use_conf=True):
"""Init an Enforcer class.
:param policy_file: Custom policy file to use, if none is specified,
`CONF.policy_file` will be used.
:param rules: Default dictionary / Rules to use. It will be
considered just in the first instantiation.
:param default_rule: Default rule to use, CONF.default_rule will
be used if none is specified.
:param use_conf: Whether to load rules from config file.
"""
global _ENFORCER
if not _ENFORCER:
_ENFORCER = policy.Enforcer(cfg.CONF, policy_file=policy_file,
rules=rules,
default_rule=default_rule,
use_conf=use_conf)
def set_rules(rules, overwrite=True, use_conf=False):
"""Set rules based on the provided dict of rules.
:param rules: New rules to use. It should be an instance of dict.
:param overwrite: Whether to overwrite current rules or update them
with the new rules.
:param use_conf: Whether to reload rules from config file.
"""
init(use_conf=False)
_ENFORCER.set_rules(rules, overwrite, use_conf)
def enforce(context, action, target, do_raise=True, exc=None):
"""Verifies that the action is valid on the target in this context.
:param context: congress context
:param action: string representing the action to be checked
this should be colon separated for clarity.
i.e. ``compute:create_instance``,
``compute:attach_volume``,
``volume:attach_volume``
:param target: dictionary representing the object of the action
for object creation this should be a dictionary representing the
location of the object e.g. ``{'project_id': context.project_id}``
:param do_raise: if True (the default), raises PolicyNotAuthorized;
if False, returns False
:raises congress.exception.PolicyNotAuthorized: if verification fails
and do_raise is True.
:return: returns a non-False value (not necessarily "True") if
authorized, and the exact value False if not authorized and
do_raise is False.
"""
init()
credentials = context.to_dict()
if not exc:
exc = exception.PolicyNotAuthorized
return _ENFORCER.enforce(action, target, credentials, do_raise=do_raise,
exc=exc, action=action)
def check_is_admin(context):
"""Whether or not roles contains 'admin' role according to policy setting.
"""
init()
# the target is user-self
credentials = context.to_dict()
target = credentials
return _ENFORCER.enforce('context_is_admin', target, credentials)
@policy.register('is_admin')
class IsAdminCheck(policy.Check):
"""An explicit check for is_admin."""
def __init__(self, kind, match):
"""Initialize the check."""
self.expected = (match.lower() == 'true')
super(IsAdminCheck, self).__init__(kind, str(self.expected))
def __call__(self, target, creds, enforcer):
"""Determine whether is_admin matches the requested value."""
return creds['is_admin'] == self.expected
def get_rules():
if _ENFORCER:
return _ENFORCER.rules
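Illustrative usage of the enforce() helper above; the action name is a made-up example, and is_admin is passed explicitly so the sketch does not depend on policy configuration.

from congress import context as congress_context

ctx = congress_context.RequestContext(user_id='u1', tenant_id='t1',
                                      roles=['admin'], is_admin=True)
target = {'project_id': ctx.project_id}
# Raises PolicyNotAuthorized on failure since do_raise defaults to True.
enforce(ctx, 'policy:create_rule', target)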

View File

@ -1,253 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utility methods for working with WSGI servers."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import sys
import routes.middleware
import webob.dec
import webob.exc
class Request(webob.Request):
pass
class Application(object):
"""Base WSGI application wrapper. Subclasses need to implement __call__."""
@classmethod
def factory(cls, global_config, **local_config):
"""Used for paste app factories in paste.deploy config files.
Any local configuration (that is, values under the [app:APPNAME]
section of the paste config) will be passed into the `__init__` method
as kwargs.
A hypothetical configuration would look like:
[app:wadl]
latest_version = 1.3
paste.app_factory = nova.api.fancy_api:Wadl.factory
which would result in a call to the `Wadl` class as
import nova.api.fancy_api
fancy_api.Wadl(latest_version='1.3')
You could of course re-implement the `factory` method in subclasses,
but using the kwarg passing it shouldn't be necessary.
"""
return cls(**local_config)
def __call__(self, environ, start_response):
r"""Subclasses will probably want to implement __call__ like this:
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
# Any of the following objects work as responses:
# Option 1: simple string
res = 'message\n'
# Option 2: a nicely formatted HTTP exception page
res = exc.HTTPForbidden(explanation='Nice try')
# Option 3: a webob Response object (in case you need to play with
# headers, or you want to be treated like an iterable, and so on)
res = Response();
res.app_iter = open('somefile')
# Option 4: any wsgi app to be run next
res = self.application
# Option 5: you can get a Response object for a wsgi app, too, to
# play with headers etc
res = req.get_response(self.application)
# You can then just return your response...
return res
# ... or set req.response and return None.
req.response = res
See the end of http://pythonpaste.org/webob/modules/dec.html
for more info.
"""
raise NotImplementedError(_('You must implement __call__'))
class Middleware(Application):
"""Base WSGI middleware.
These classes require an application to be
initialized that will be called next. By default the middleware will
simply call its wrapped app, or you can override __call__ to customize its
behavior.
"""
@classmethod
def factory(cls, global_config, **local_config):
"""Used for paste app factories in paste.deploy config files.
Any local configuration (that is, values under the [filter:APPNAME]
section of the paste config) will be passed into the `__init__` method
as kwargs.
A hypothetical configuration would look like:
[filter:analytics]
redis_host = 127.0.0.1
paste.filter_factory = nova.api.analytics:Analytics.factory
which would result in a call to the `Analytics` class as
import nova.api.analytics
analytics.Analytics(app_from_paste, redis_host='127.0.0.1')
You could of course re-implement the `factory` method in subclasses,
but using the kwarg passing it shouldn't be necessary.
"""
def _factory(app):
return cls(app, **local_config)
return _factory
def __init__(self, application):
self.application = application
def process_request(self, req):
"""Called on each request.
If this returns None, the next application down the stack will be
executed. If it returns a response then that response will be returned
and execution will stop here.
"""
return None
def process_response(self, response):
"""Do whatever you'd like to the response."""
return response
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
response = self.process_request(req)
if response:
return response
response = req.get_response(self.application)
return self.process_response(response)
class Debug(Middleware):
"""Helper class for debugging a WSGI application.
Can be inserted into any WSGI application chain to get information
about the request and response.
"""
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
print(('*' * 40) + ' REQUEST ENVIRON')
for key, value in req.environ.items():
print(key, '=', value)
print()
resp = req.get_response(self.application)
print(('*' * 40) + ' RESPONSE HEADERS')
for (key, value) in resp.headers.items():
print(key, '=', value)
print()
resp.app_iter = self.print_generator(resp.app_iter)
return resp
@staticmethod
def print_generator(app_iter):
"""Iterator that prints the contents of a wrapper string."""
print(('*' * 40) + ' BODY')
for part in app_iter:
sys.stdout.write(part)
sys.stdout.flush()
yield part
print()
class Router(object):
"""WSGI middleware that maps incoming requests to WSGI apps."""
def __init__(self, mapper):
"""Create a router for the given routes.Mapper.
Each route in `mapper` must specify a 'controller', which is a
WSGI app to call. You'll probably want to specify an 'action' as
well and have your controller be an object that can route
the request to the action-specific method.
Examples:
mapper = routes.Mapper()
sc = ServerController()
# Explicit mapping of one route to a controller+action
mapper.connect(None, '/svrlist', controller=sc, action='list')
# Actions are all implicitly defined
mapper.resource('server', 'servers', controller=sc)
# Pointing to an arbitrary WSGI app. You can specify the
# {path_info:.*} parameter so the target app can be handed just that
# section of the URL.
mapper.connect(None, '/v1.0/{path_info:.*}', controller=BlogApp())
"""
self.map = mapper
self._router = routes.middleware.RoutesMiddleware(self._dispatch,
self.map)
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
"""Route the incoming request to a controller based on self.map.
If no match, return a 404.
"""
return self._router
@staticmethod
@webob.dec.wsgify(RequestClass=Request)
def _dispatch(req):
"""Dispatch the request to the appropriate controller.
Called by self._router after matching the incoming request to a route
and putting the information into req.environ. Either returns 404
or the routed WSGI app's response.
"""
match = req.environ['wsgiorg.routing_args'][1]
if not match:
return webob.exc.HTTPNotFound()
app = match['controller']
return app
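A self-contained sketch of the Router pattern documented above; the controller and route are illustrative.

import routes

class _EchoController(Application):
    @webob.dec.wsgify(RequestClass=Request)
    def __call__(self, req):
        return 'server list\n'

mapper = routes.Mapper()
mapper.connect(None, '/svrlist', controller=_EchoController())
app = Router(mapper)   # GET /svrlist -> 'server list', anything else -> 404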

View File

@ -1,149 +0,0 @@
# Copyright 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""RequestContext: context for requests that persist through congress."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import copy
import datetime
from oslo_context import context as common_context
from oslo_log import log as logging
from congress.common import policy
LOG = logging.getLogger(__name__)
class RequestContext(common_context.RequestContext):
"""Security context and request information.
Represents the user taking a given action within the system.
"""
def __init__(self, user_id, tenant_id, is_admin=None, read_deleted="no",
roles=None, timestamp=None, load_admin_roles=True,
request_id=None, tenant_name=None, user_name=None,
overwrite=True, **kwargs):
"""Object initialization.
:param read_deleted: 'no' indicates deleted records are hidden, 'yes'
indicates deleted records are visible, 'only' indicates that
*only* deleted records are visible.
:param overwrite: Set to False to ensure that the greenthread local
copy of the index is not overwritten.
:param kwargs: Extra arguments that might be present, but we ignore
because they possibly came in from older rpc messages.
"""
super(RequestContext, self).__init__(user=user_id, tenant=tenant_id,
is_admin=is_admin,
request_id=request_id,
overwrite=overwrite,
roles=roles)
self.user_name = user_name
self.tenant_name = tenant_name
self.read_deleted = read_deleted
if not timestamp:
timestamp = datetime.datetime.utcnow()
self.timestamp = timestamp
self._session = None
if self.is_admin is None:
self.is_admin = policy.check_is_admin(self)
# Log only once the context has been configured to prevent
# format errors.
if kwargs:
LOG.debug(('Arguments dropped when creating '
'context: %s'), kwargs)
@property
def project_id(self):
return self.tenant
@property
def tenant_id(self):
return self.tenant
@tenant_id.setter
def tenant_id(self, tenant_id):
self.tenant = tenant_id
@property
def user_id(self):
return self.user
@user_id.setter
def user_id(self, user_id):
self.user = user_id
def _get_read_deleted(self):
return self._read_deleted
def _set_read_deleted(self, read_deleted):
if read_deleted not in ('no', 'yes', 'only'):
raise ValueError(_("read_deleted can only be one of 'no', "
"'yes' or 'only', not %r") % read_deleted)
self._read_deleted = read_deleted
def _del_read_deleted(self):
del self._read_deleted
read_deleted = property(_get_read_deleted, _set_read_deleted,
_del_read_deleted)
def to_dict(self):
ret = super(RequestContext, self).to_dict()
ret.update({'user_id': self.user_id,
'tenant_id': self.tenant_id,
'project_id': self.project_id,
'read_deleted': self.read_deleted,
'timestamp': str(self.timestamp),
'tenant_name': self.tenant_name,
'project_name': self.tenant_name,
'user_name': self.user_name})
return ret
@classmethod
def from_dict(cls, values):
return cls(**values)
def elevated(self, read_deleted=None):
"""Return a version of this context with admin flag set."""
context = copy.copy(self)
context.is_admin = True
if 'admin' not in [x.lower() for x in context.roles]:
context.roles.append('admin')
if read_deleted is not None:
context.read_deleted = read_deleted
return context
def get_admin_context(read_deleted="no", load_admin_roles=True):
return RequestContext(user_id=None,
tenant_id=None,
is_admin=True,
read_deleted=read_deleted,
load_admin_roles=load_admin_roles,
overwrite=False)
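A short illustration of the context lifecycle above; is_admin is passed explicitly so the sketch does not depend on policy configuration.

ctx = RequestContext(user_id='alice', tenant_id='proj-1',
                     roles=['member'], is_admin=False)
admin_ctx = ctx.elevated()
assert admin_ctx.is_admin and 'admin' in admin_ctx.roles
assert ctx.to_dict()['project_id'] == 'proj-1'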

View File

@ -1,353 +0,0 @@
// Copyright (c) 2013 VMware, Inc. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
//
grammar Congress;
options {
language=Python;
output=AST;
ASTLabelType=CommonTree;
}
tokens {
PROG;
COMMA=',';
COLONMINUS=':-';
LPAREN='(';
RPAREN=')';
RBRACKET=']';
LBRACKET='[';
// Structure
THEORY;
STRUCTURED_NAME;
// Kinds of Formulas
EVENT;
RULE;
LITERAL;
MODAL;
ATOM;
NOT;
AND;
// Terms
NAMED_PARAM;
COLUMN_NAME;
COLUMN_NUMBER;
VARIABLE;
STRING_OBJ;
INTEGER_OBJ;
FLOAT_OBJ;
SYMBOL_OBJ;
}
// a program can be one or more statements or empty
prog
: statement+ EOF -> ^(THEORY statement+)
| EOF
;
// a statement is either a formula or a comment
// let the lexer handle comments directly for efficiency
statement
: formula formula_terminator? -> formula
| COMMENT
;
formula
: rule
| fact
| event
;
// An Event represents the insertion/deletion of policy statements.
// Events always include :-. This is to avoid ambiguity in the grammar
// for the case of insert[p(1)]. Without the requirement that an event
// includes a :-, insert[p(1)] could either represent the event where p(1)
// is inserted or simply a policy statement with an empty body and the modal
// 'insert' in the head.
// This means that to represent the event where p(1) is inserted, you must write
// insert[p(1) :- true]. To represent the query that asks if insert[p(1)] is true
// you write insert[p(1)].
event
: event_op LBRACKET rule (formula_terminator STRING)? RBRACKET -> ^(EVENT event_op rule STRING?)
;
event_op
: 'insert'
| 'delete'
;
formula_terminator
: ';'
| '.'
;
rule
: literal_list COLONMINUS literal_list -> ^(RULE literal_list literal_list)
;
literal_list
: literal (COMMA literal)* -> ^(AND literal+)
;
literal
: fact -> fact
| NEGATION fact -> ^(NOT fact)
;
// Note: if we replace modal_op with ID, it tries to force statements
// like insert[p(x)] :- q(x) to be events instead of rules. Bug?
fact
: atom
| modal_op LBRACKET atom RBRACKET -> ^(MODAL modal_op atom)
;
modal_op
: 'execute'
| 'insert'
| 'delete'
;
atom
: relation_constant (LPAREN parameter_list? RPAREN)? -> ^(ATOM relation_constant parameter_list?)
;
parameter_list
: parameter (COMMA parameter)* -> parameter+
;
parameter
: term -> term
| column_ref EQUAL term -> ^(NAMED_PARAM column_ref term)
;
column_ref
: ID -> ^(COLUMN_NAME ID)
| INT -> ^(COLUMN_NUMBER INT)
;
term
: object_constant
| variable
;
object_constant
: INT -> ^(INTEGER_OBJ INT)
| FLOAT -> ^(FLOAT_OBJ FLOAT)
| STRING -> ^(STRING_OBJ STRING)
;
variable
: ID -> ^(VARIABLE ID)
;
relation_constant
: ID (':' ID)* SIGN? -> ^(STRUCTURED_NAME ID+ SIGN?)
;
// start of the lexer
// first, define keywords to ensure they have lexical priority
NEGATION
: 'not'
| 'NOT'
| '!'
;
EQUAL
: '='
;
SIGN
: '+' | '-'
;
// Python integers, conformant to 3.4.2 spec
// Note that leading zeros in a non-zero decimal number are not allowed
// This is taken care of by the first and second alternatives
INT
: '1'..'9' ('0'..'9')*
| '0'+
| '0' ('o' | 'O') ('0'..'7')+
| '0' ('x' | 'X') (HEX_DIGIT)+
| '0' ('b' | 'B') ('0' | '1')+
;
// Python floating point literals, conformant to 3.4.2 spec
// The integer and exponent parts are always interpreted using radix 10
FLOAT
: FLOAT_NO_EXP
| FLOAT_EXP
;
// String literals according to Python 3.4.2 grammar
// THIS VERSION IMPLEMENTS STRING AND BYTE LITERALS
// AS WELL AS TRIPLE QUOTED STRINGS
// Python strings:
// - can be enclosed in matching single quotes (') or double quotes (")
// - can be enclosed in matching groups of three single or double quotes
// - a backslash (\) character is used to escape characters that otherwise
// have a special meaning (e.g., newline, backslash, or a quote)
// - can be prefixed with a u to simplify maintenance of 2.x and 3.x code
// - 'ur' is NOT allowed
// - unescaped newlines and quotes are allowed in triple-quoted literals
// EXCEPT that three unescaped contiguous quotes terminate the literal
//
// Byte String Literals according to Python 3.4.2 grammar
// Bytes are always prefixed with 'b' or 'B', and can only contain ASCII
// Any byte with a numeric value of >= 128 must be escaped
//
// The rules have also been refactored to reduce the runtime size of the parser
STRING
: (STRPREFIX)? (SLSTRING)+
| (BYTESTRPREFIX) (SLBYTESTRING)+
;
// moved this rule so we could differentiate between .123 and .1aa
// (i.e., relying on lexical priority)
ID
: ('a'..'z'|'A'..'Z'|'_'|'.') ('a'..'z'|'A'..'Z'|'0'..'9'|'_'|'.')*
;
// added Pythonesque comments
COMMENT
: '//' ~('\n'|'\r')* '\r'? '\n' {$channel=HIDDEN;}
| '/*' ( options {greedy=false;} : . )* '*/' {$channel=HIDDEN;}
| '#' ~('\n'|'\r')* '\r'? '\n' {$channel=HIDDEN;}
;
WS
: ( ' '
| '\t'
| '\r'
| '\n'
) {$channel=HIDDEN;}
;
// fragment rules
// these are helper rules that are used by other lexical rules
// they do NOT generate tokens
fragment
EXPONENT
: ('e'|'E') ('+'|'-')? ('0'..'9')+
;
fragment
HEX_DIGIT
: ('0'..'9'|'a'..'f'|'A'..'F')
;
fragment
DIGIT
: ('0'..'9')
;
fragment
FLOAT_NO_EXP
: INT_PART? FRAC_PART
| INT_PART '.'
;
fragment
FLOAT_EXP
: ( INT_PART | FLOAT_NO_EXP ) EXPONENT
;
fragment
INT_PART
: DIGIT+
;
fragment
FRAC_PART
: '.' DIGIT+
;
// The following fragments are for string handling
// any form of 'ur' is illegal
fragment
STRPREFIX
: 'r' | 'R' | 'u' | 'U'
;
fragment
STRING_ESC
: '\\' .
;
// The first two are single-line string with single- and double-quotes
// The second two are multi-line strings with single- and double quotes
fragment
SLSTRING
: '\'' (STRING_ESC | ~('\\' | '\r' | '\n' | '\'') )* '\''
| '"' (STRING_ESC | ~('\\' | '\r' | '\n' | '"') )* '"'
| '\'\'\'' (STRING_ESC | ~('\\') )* '\'\'\''
| '"""' (STRING_ESC | ~('\\') )* '"""'
;
// Python Byte Literals
// Each byte within a byte literal can be an ASCII character or an
// encoded hex number from \x00 to \xff (i.e., 0-255)
// EXCEPT the backslash, newline, or quote
fragment
BYTESTRPREFIX
: 'b' | 'B' | 'br' | 'Br' | 'bR' | 'BR' | 'rb' | 'rB' | 'Rb' | 'RB'
;
fragment
SLBYTESTRING
: '\'' (BYTES_CHAR_SQ | BYTES_ESC)* '\''
| '"' (BYTES_CHAR_DQ | BYTES_ESC)* '"'
| '\'\'\'' (BYTES_CHAR_SQ | BYTES_TESC)* '\'\'\''
| '"""' (BYTES_CHAR_DQ | BYTES_TESC)* '"""'
;
fragment
BYTES_CHAR_SQ
: '\u0000'..'\u0009'
| '\u000B'..'\u000C'
| '\u000E'..'\u0026'
| '\u0028'..'\u005B'
| '\u005D'..'\u007F'
;
fragment
BYTES_CHAR_DQ
: '\u0000'..'\u0009'
| '\u000B'..'\u000C'
| '\u000E'..'\u0021'
| '\u0023'..'\u005B'
| '\u005D'..'\u007F'
;
fragment
BYTES_ESC
: '\\' '\u0000'..'\u007F'
;
fragment
BYTES_TESC
: '\u0000'..'\u005B'
| '\u005D'..'\u007F'
;
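Illustrative inputs accepted by this grammar (the datasource and table names are invented); shown as Python string literals to keep a single example language.

examples = [
    'error(x) :- nova:vm(x), not allowed(x)',   # rule with negation
    'allowed("vm1")',                           # fact
    'execute[nova:disconnect(x)] :- error(x)',  # modal in the rule head
    'insert[p(1) :- true]',                     # event; note the required :-
]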

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,11 +0,0 @@
If you modify the congress/datalog/Congress.g file, you need to use antlr3
to re-generate the CongressLexer.py and CongressParser.py files with
the following steps:
1. Make sure a recent version of Java is installed. http://java.com/
2. Download ANTLR 3.5.2 or another compatible version from http://www.antlr3.org/download/antlr-3.5.2-complete.jar
3. Execute the following commands in shell
$ cd path/to/congress_repo/congress/datalog
$ java -jar path/to/antlr-3.5.2-complete.jar Congress.g -o Python2 -language Python
$ java -jar path/to/antlr-3.5.2-complete.jar Congress.g -o Python3 -language Python3

View File

@ -1,104 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
# TODO(thinrichs): move algorithms from compile.py that do analysis
# into this file.
import copy
class ModalIndex(object):
def __init__(self):
# Dict mapping modal name to a ref-counted list of tablenames
# Refcounted list of tablenames is a dict from tablename to count
self.index = {}
def add(self, modal, tablename):
if modal not in self.index:
self.index[modal] = {}
if tablename not in self.index[modal]:
self.index[modal][tablename] = 0
self.index[modal][tablename] += 1
def remove(self, modal, tablename):
if modal not in self.index:
raise KeyError("Modal %s has no entries" % modal)
if tablename not in self.index[modal]:
raise KeyError("Tablename %s for modal %s does not exist" %
(tablename, modal))
self.index[modal][tablename] -= 1
self._clean_up(modal, tablename)
def modals(self):
return self.index.keys()
def tables(self, modal):
if modal not in self.index:
return []
return self.index[modal].keys()
def __isub__(self, other):
changes = []
for modal in self.index:
if modal not in other.index:
continue
for table in self.index[modal]:
if table not in other.index[modal]:
continue
self.index[modal][table] -= other.index[modal][table]
changes.append((modal, table))
for (modal, table) in changes:
self._clean_up(modal, table)
return self
def __iadd__(self, other):
for modal in other.index:
if modal not in self.index:
self.index[modal] = other.index[modal]
continue
for table in other.index[modal]:
if table not in self.index[modal]:
self.index[modal][table] = other.index[modal][table]
continue
self.index[modal][table] += other.index[modal][table]
return self
def _clean_up(self, modal, table):
if self.index[modal][table] <= 0:
del self.index[modal][table]
if not len(self.index[modal]):
del self.index[modal]
def __eq__(self, other):
return self.index == other.index
def __ne__(self, other):
return not self.__eq__(other)
def __copy__(self):
new = ModalIndex()
new.index = copy.deepcopy(self.index)
return new
def __str__(self):
return str(self.index)
def __contains__(self, modal):
return modal in self.index
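A quick illustration of the ref-counting behavior above.

idx = ModalIndex()
idx.add('execute', 'nova:servers')
idx.add('execute', 'nova:servers')    # refcount is now 2
idx.remove('execute', 'nova:servers')
assert list(idx.tables('execute')) == ['nova:servers']  # count still 1
idx.remove('execute', 'nova:servers')
assert 'execute' not in idx           # cleaned up once the count hits 0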

View File

@ -1,644 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
import pulp
import six
from congress import exception
from functools import reduce
LOG = logging.getLogger(__name__)
class LpLang(object):
"""Represent (mostly) linear programs generated from Datalog."""
MIN_THRESHOLD = .00001 # for converting <= to <
class Expression(object):
def __init__(self, *args, **meta):
self.args = args
self.meta = meta
def __ne__(self, other):
return not self.__eq__(other)
def __eq__(self, other):
if not isinstance(other, LpLang.Expression):
return False
if len(self.args) != len(other.args):
return False
if self.args[0] in ['AND', 'OR']:
return set(self.args) == set(other.args)
comm = ['plus', 'times']
if self.args[0] == 'ARITH' and self.args[1].lower() in comm:
return set(self.args) == set(other.args)
if self.args[0] in ['EQ', 'NOTEQ']:
return ((self.args[1] == other.args[1] and
self.args[2] == other.args[2]) or
(self.args[1] == other.args[2] and
self.args[2] == other.args[1]))
return self.args == other.args
def __str__(self):
return "(" + ", ".join(str(x) for x in self.args) + ")"
def __repr__(self):
args = ", ".join(repr(x) for x in self.args)
meta = str(self.meta)
return "<args=%s, meta=%s>" % (args, meta)
def __hash__(self):
return hash(tuple([hash(x) for x in self.args]))
def operator(self):
return self.args[0]
def arguments(self):
return self.args[1:]
def tuple(self):
return tuple(self.args)
@classmethod
def makeVariable(cls, *args, **meta):
return cls.Expression("VAR", *args, **meta)
@classmethod
def makeBoolVariable(cls, *args, **meta):
meta['type'] = 'bool'
return cls.Expression("VAR", *args, **meta)
@classmethod
def makeIntVariable(cls, *args, **meta):
meta['type'] = 'int'
return cls.Expression("VAR", *args, **meta)
@classmethod
def makeOr(cls, *args, **meta):
if len(args) == 1:
return args[0]
return cls.Expression("OR", *args, **meta)
@classmethod
def makeAnd(cls, *args, **meta):
if len(args) == 1:
return args[0]
return cls.Expression("AND", *args, **meta)
@classmethod
def makeEqual(cls, arg1, arg2, **meta):
return cls.Expression("EQ", arg1, arg2, **meta)
@classmethod
def makeNotEqual(cls, arg1, arg2, **meta):
return cls.Expression("NOTEQ", arg1, arg2, **meta)
@classmethod
def makeArith(cls, *args, **meta):
return cls.Expression("ARITH", *args, **meta)
@classmethod
def makeExpr(cls, obj):
if isinstance(obj, six.string_types):
return obj
if isinstance(obj, (float, six.integer_types)):
return obj
op = obj[0].upper()
if op == 'VAR':
return cls.makeVariable(*obj[1:])
if op in ['EQ', 'NOTEQ', 'AND', 'OR']:
args = [cls.makeExpr(x) for x in obj[1:]]
if op == 'EQ':
return cls.makeEqual(*args)
if op == 'NOTEQ':
return cls.makeNotEqual(*args)
if op == 'AND':
return cls.makeAnd(*args)
if op == 'OR':
return cls.makeOr(*args)
raise cls.LpConversionFailure('should never happen')
args = [cls.makeExpr(x) for x in obj[1:]]
return cls.makeArith(obj[0], *args)
@classmethod
def isConstant(cls, thing):
return (isinstance(thing, six.string_types) or
isinstance(thing, (float, six.integer_types)))
@classmethod
def isVariable(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'VAR'
@classmethod
def isEqual(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'EQ'
@classmethod
def isOr(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'OR'
@classmethod
def isAnd(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'AND'
@classmethod
def isNotEqual(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'NOTEQ'
@classmethod
def isArith(cls, thing):
return isinstance(thing, cls.Expression) and thing.args[0] == 'ARITH'
@classmethod
def isBoolArith(cls, thing):
return (cls.isArith(thing) and
thing.args[1].lower() in ['lteq', 'lt', 'gteq', 'gt', 'equal'])
@classmethod
def variables(cls, exp):
if cls.isConstant(exp):
return set()
elif cls.isVariable(exp):
return set([exp])
else:
variables = set()
for arg in exp.arguments():
variables |= cls.variables(arg)
return variables
def __init__(self):
# instance variable so tests can be run in parallel
self.fresh_var_counter = 0 # for creating new variables
def pure_lp(self, exp, bounds):
"""Rewrite EXP to a pure LP problem.
:param exp is an Expression of the form
var = (arith11 ^ ... ^ arith1n) | ... | (arithk1 ^ ... ^ arithkn)
where the degenerate cases are permitted as well.
Returns a collection of expressions each of the form:
a1*x1 + ... + an*xn [<=, ==, >=] b.
"""
flat, support = self.flatten(exp, indicator=False)
flats = support
flats.append(flat)
result = []
for flat in flats:
# LOG.info("flat: %s", flat)
no_and_or = self.remove_and_or(flat)
# LOG.info(" without and/or: %s", no_and_or)
no_indicator = self.indicator_to_pure_lp(no_and_or, bounds)
# LOG.info(" without indicator: %s",
# ";".join(str(x) for x in no_indicator))
result.extend(no_indicator)
return result
def pure_lp_term(self, exp, bounds):
"""Rewrite term exp to a pure LP term.
:param exp is an Expression of the form
(arith11 ^ ... ^ arith1n) | ... | (arithk1 ^ ... ^ arithkn)
where the degenerate cases are permitted as well.
Returns (new-exp, support) where new-exp is a term, and support is
a expressions of the following form.
a1*x1 + ... + an*xn [<=, ==, >=] b.
"""
flat, support = self.flatten(exp, indicator=False)
flat_no_andor = self.remove_and_or_term(flat)
results = []
for s in support:
results.extend(self.pure_lp(s, bounds))
return flat_no_andor, results
def remove_and_or(self, exp):
"""Translate and/or operators into times/plus arithmetic.
:param exp is an Expression that takes one of the following forms.
var [!]= term1 ^ ... ^ termn
var [!]= term1 | ... | termn
var [!]= term1
where termi is an indicator variable.
Returns an expression equivalent to exp but without any ands/ors.
"""
if self.isConstant(exp) or self.isVariable(exp):
return exp
op = exp.operator().lower()
if op in ['and', 'or']:
return self.remove_and_or_term(exp)
newargs = [self.remove_and_or(arg) for arg in exp.arguments()]
constructor = self.operator_to_constructor(exp.operator())
return constructor(*newargs)
def remove_and_or_term(self, exp):
if exp.operator().lower() == 'and':
op = 'times'
else:
op = 'plus'
return self.makeArith(op, *exp.arguments())
def indicator_to_pure_lp(self, exp, bounds):
"""Translate exp into LP constraints without indicator variable.
:param exp is an Expression of the form var = arith
:param bounds is a dictionary from variable to its upper bound
Returns [EXP] if it is of the wrong form. Otherwise, translates
into the form y = x < 0, and then returns two constraints where
upper(x) is the upper bound of the expression x:
-x <= y * upper(x)
x < (1 - y) * upper(x)
Taken from section 7.4 of
http://www.aimms.com/aimms/download/manuals/
aimms3om_integerprogrammingtricks.pdf
"""
# return exp unchanged if exp not of the form <var> = <arith>
# and figure out whether it's <var> = <arith> or <arith> = <var>
if (self.isConstant(exp) or self.isVariable(exp) or
not self.isEqual(exp)):
return [exp]
args = exp.arguments()
lhs = args[0]
rhs = args[1]
if self.isVariable(lhs) and self.isArith(rhs):
var = lhs
arith = rhs
elif self.isVariable(rhs) and self.isArith(lhs):
var = rhs
arith = lhs
else:
return [exp]
# if arithmetic side is not an inequality, not an indicator var
if not self.isBoolArith(arith):
return [exp]
# Do the transformation.
x = self.arith_to_lt_zero(arith).arguments()[1]
y = var
LOG.info(" x: %s", x)
upper_x = self.upper_bound(x, bounds) + 1
LOG.info(" bounds(x): %s", upper_x)
# -x <= y * upper(x)
c1 = self.makeArith(
'lteq',
self.makeArith('times', -1, x),
self.makeArith('times', y, upper_x))
# x < (1 - y) * upper(x)
c2 = self.makeArith(
'lt',
x,
self.makeArith('times', self.makeArith('minus', 1, y), upper_x))
return [c1, c2]
def arith_to_lt_zero(self, expr):
"""Returns Arith expression equivalent to expr but of the form A < 0.
:param expr is an Expression
Returns an expression equivalent to expr but of the form A < 0.
"""
if not self.isArith(expr):
raise self.LpConversionFailure(
"arith_to_lt_zero takes an Arith expr but received %s" % expr)
args = expr.arguments()
op = args[0].lower()
lhs = args[1]
rhs = args[2]
if op == 'lt':
return LpLang.makeArith(
'lt', LpLang.makeArith('minus', lhs, rhs), 0)
elif op == 'lteq':
return LpLang.makeArith(
'lt',
LpLang.makeArith(
'minus',
LpLang.makeArith('minus', lhs, rhs),
self.MIN_THRESHOLD),
0)
elif op == 'gt':
return LpLang.makeArith(
'lt', LpLang.makeArith('minus', rhs, lhs), 0)
elif op == 'gteq':
return LpLang.makeArith(
'lt',
LpLang.makeArith(
'minus',
LpLang.makeArith('minus', rhs, lhs),
self.MIN_THRESHOLD),
0)
else:
raise self.LpConversionFailure(
"unhandled operator %s in %s" % (op, expr))
def upper_bound(self, expr, bounds):
"""Returns number giving an upper bound on the given expr.
:param expr is an Expression
:param bounds is a dictionary from tuple versions of variables
to the size of their upper bound.
"""
if self.isConstant(expr):
return expr
if self.isVariable(expr):
t = expr.tuple()
if t not in bounds:
raise self.LpConversionFailure("not bound given for %s" % expr)
return bounds[expr.tuple()]
if not self.isArith(expr):
raise self.LpConversionFailure(
"expression has no bound: %s" % expr)
args = expr.arguments()
op = args[0].lower()
exps = args[1:]
if op == 'times':
f = lambda x, y: x * y
return reduce(f, [self.upper_bound(x, bounds) for x in exps], 1)
if op == 'plus':
f = lambda x, y: x + y
return reduce(f, [self.upper_bound(x, bounds) for x in exps], 0)
if op == 'minus':
return self.upper_bound(exps[0], bounds)
if op == 'div':
raise self.LpConversionFailure("No bound on division %s" % expr)
raise self.LpConversionFailure("Unknown operator for bound: %s" % expr)
def flatten(self, exp, indicator=True):
"""Remove toplevel embedded and/ors by creating new equalities.
:param exp is an Expression of the form
var = (arith11 ^ ... ^ arith1n) | ... | (arithk1 ^ ... ^ arithkn)
where arithij is either a variable or an arithmetic expression
where the degenerate cases are permitted as well.
:param indicator controls whether the method returns
a single variable (with supporting expressions) or
an expression whose operator has (flat) arguments
Returns a collection of expressions each of one of the following
forms:
var1 = var2 * ... * varn
var1 = var2 + ... + varn
var1 = arith
Returns (new-expression, supporting-expressions)
"""
if self.isConstant(exp) or self.isVariable(exp):
return exp, []
new_args = []
extras = []
new_indicator = not (exp.operator().lower() in ['eq', 'noteq'])
for e in exp.arguments():
newe, extra = self.flatten(e, indicator=new_indicator)
new_args.append(newe)
extras.extend(extra)
constructor = self.operator_to_constructor(exp.operator())
new_exp = constructor(*new_args)
if indicator:
indic, extra = self.create_intermediate(new_exp)
return indic, extra + extras
return new_exp, extras
def operator_to_constructor(self, operator):
"""Given the operator, return the corresponding constructor."""
op = operator.lower()
if op == 'eq':
return self.makeEqual
if op == 'noteq':
return self.makeNotEqual
if op == 'var':
return self.makeVariable
if op == 'and':
return self.makeAnd
if op == 'or':
return self.makeOr
if op == 'arith':
return self.makeArith
raise self.LpConversionFailure("Unknown operator: %s" % operator)
def create_intermediate(self, exp):
"""Given expression, create var = expr and return (var, var=expr)."""
if self.isBoolArith(exp) or self.isAnd(exp) or self.isOr(exp):
var = self.freshVar(type='bool')
else:
var = self.freshVar()
equality = self.makeEqual(var, exp)
return var, [equality]
def freshVar(self, **meta):
var = self.makeVariable('internal', self.fresh_var_counter, **meta)
self.fresh_var_counter += 1
return var
class LpConversionFailure(exception.CongressException):
pass
class PulpLpLang(LpLang):
"""Algorithms for translating LpLang into PuLP library problems."""
MIN_THRESHOLD = .00001
def __init__(self):
# instance variable so tests can be run in parallel
super(PulpLpLang, self).__init__()
self.value_counter = 0
def problem(self, optimization, constraints, bounds):
"""Return PuLP problem for given optimization and constraints.
:param optimization is an LpLang.Expression that is either a sum
or product to minimize.
:param constraints is a collection of LpLang.Expression that
each evaluate to true/false (typically equalities)
:param bounds is a dictionary mapping LpLang.Expression variable
tuples to their upper bounds.
Returns a pulp.LpProblem.
"""
# translate constraints to pure LP
optimization, hard = self.pure_lp_term(optimization, bounds)
for c in constraints:
hard.extend(self.pure_lp(c, bounds))
LOG.info("* Converted DatalogLP to PureLP *")
LOG.info("optimization: %s", optimization)
LOG.info("constraints: \n%s", "\n".join(str(x) for x in hard))
# translate optimization and constraints into PuLP equivalents
variables = {}
values = {}
optimization = self.pulpify(optimization, variables, values)
hard = [self.pulpify(c, variables, values) for c in hard]
# add them to the problem.
prob = pulp.LpProblem("VM re-assignment", pulp.LpMinimize)
prob += optimization
for c in hard:
prob += c
# invert values
return prob, {value: key for key, value in values.items()}
def pulpify(self, expr, variables, values):
"""Return PuLP version of expr.
:param expr is an Expression of one of the following forms.
arith
arith = arith
arith <= arith
arith >= arith
:param variables is a dictionary from Expression variables to PuLP variables
Returns a PuLP representation of expr.
"""
# LOG.info("pulpify(%s, %s)", expr, variables)
if self.isConstant(expr):
return expr
elif self.isVariable(expr):
return self._pulpify_variable(expr, variables, values)
elif self.isArith(expr):
args = expr.arguments()
op = args[0]
args = [self.pulpify(arg, variables, values) for arg in args[1:]]
if op == 'times':
return reduce(lambda x, y: x * y, args)
elif op == 'plus':
return reduce(lambda x, y: x + y, args)
elif op == 'div':
return reduce(lambda x, y: x / y, args)
elif op == 'minus':
return reduce(lambda x, y: x - y, args)
elif op == 'lteq':
return (args[0] <= args[1])
elif op == 'gteq':
return (args[0] >= args[1])
elif op == 'gt': # pulp makes MIN_THRESHOLD 1
return (args[0] >= args[1] + self.MIN_THRESHOLD)
elif op == 'lt': # pulp makes MIN_THRESHOLD 1
return (args[0] + self.MIN_THRESHOLD <= args[1])
else:
raise self.LpConversionFailure(
"Found unsupported operator %s in %s" % (op, expr))
else:
args = [self.pulpify(arg, variables, values)
for arg in expr.arguments()]
op = expr.operator().lower()
if op == 'eq':
return (args[0] == args[1])
elif op == 'noteq':
return (args[0] != args[1])
else:
raise self.LpConversionFailure(
"Found unsupported operator: %s" % expr)
def _new_value(self, old, values):
"""Create a new value for old and store values[old] = new."""
if old in values:
return values[old]
new = self.value_counter
self.value_counter += 1
values[old] = new
return new
def _pulpify_variable(self, expr, variables, values):
"""Translate DatalogLp variable expr into PuLP variable.
:param expr is an instance of Expression
:param variables is a dictionary from Expressions to pulp variables
:param values is a 1-1 dictionary from strings/floats to integers
representing a mapping of non-integer arguments to variable
names to their integer equivalents.
"""
# pulp mangles variable names that contain certain characters.
# Replace actual args with integers when constructing
# variable names. Includes integers since we don't want to
# have namespace collision problems.
oldargs = expr.arguments()
args = [oldargs[0]]
for arg in oldargs[1:]:
newarg = self._new_value(arg, values)
args.append(newarg)
# name
name = "_".join([str(x) for x in args])
# type
typ = expr.meta.get('type', None)
if typ == 'bool':
cat = pulp.LpBinary
elif typ == 'int':
cat = pulp.LpInteger
else:
cat = pulp.LpContinuous
# set bounds
lowbound = expr.meta.get('lowbound', None)
upbound = expr.meta.get('upbound', None)
var = pulp.LpVariable(
name=name, cat=cat, lowBound=lowbound, upBound=upbound)
# merge with existing variable, if any
if expr in variables:
newvar = self._resolve_var_conflicts(variables[expr], var)
oldvar = variables[expr]
oldvar.cat = newvar.cat
oldvar.lowBound = newvar.lowBound
oldvar.upBound = newvar.upBound
else:
variables[expr] = var
return variables[expr]
def _resolve_var_conflicts(self, var1, var2):
"""Returns variable that combines information from var1 and var2.
:param var1 is a pulp.LpVariable
:param var2 is a pulp.LpVariable
Returns new pulp.LpVariable representing the conjunction of constraints
from var1 and var2.
Raises LpConversionFailure if the names of var1 and var2 differ.
"""
def type_lessthan(x, y):
return ((x == pulp.LpBinary and y == pulp.LpInteger) or
(x == pulp.LpBinary and y == pulp.LpContinuous) or
(x == pulp.LpInteger and y == pulp.LpContinuous))
if var1.name != var2.name:
raise self.LpConversionFailure(
"Can't resolve variable name conflict: %s and %s" % (
var1, var2))
name = var1.name
if type_lessthan(var1.cat, var2.cat):
cat = var1.cat
else:
cat = var2.cat
if var1.lowBound is None:
lowbound = var2.lowBound
elif var2.lowBound is None:
lowbound = var1.lowBound
else:
lowbound = max(var1.lowBound, var2.lowBound)
if var1.upBound is None:
upbound = var2.upBound
elif var2.upBound is None:
upbound = var1.upBound
else:
upbound = min(var1.upBound, var2.upBound)
return pulp.LpVariable(
name=name, lowBound=lowbound, upBound=upbound, cat=cat)
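A minimal end-to-end sketch, assuming the classes above are importable and the pulp package is installed; the variable names and bound are invented.

x = LpLang.makeIntVariable('hours')
y = LpLang.makeBoolVariable('within_limit')
lang = PulpLpLang()
# y = (hours <= 8), rewritten into two pure-LP constraints as described
# in indicator_to_pure_lp above; the bound maps x's tuple to its maximum.
indicator = LpLang.makeEqual(y, LpLang.makeArith('lteq', x, 8))
for constraint in lang.pure_lp(indicator, {x.tuple(): 10}):
    print(constraint)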

View File

@ -1,248 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import collections
from oslo_log import log as logging
import six
from congress import exception
LOG = logging.getLogger(__name__)
DATABASE_POLICY_TYPE = 'database'
NONRECURSIVE_POLICY_TYPE = 'nonrecursive'
ACTION_POLICY_TYPE = 'action'
MATERIALIZED_POLICY_TYPE = 'materialized'
DELTA_POLICY_TYPE = 'delta'
DATASOURCE_POLICY_TYPE = 'datasource'
class Tracer(object):
def __init__(self):
self.expressions = []
self.funcs = [LOG.debug] # functions to call to trace
def trace(self, table):
self.expressions.append(table)
def is_traced(self, table):
return table in self.expressions or '*' in self.expressions
def log(self, table, msg, *args, **kwargs):
depth = kwargs.pop("depth", 0)
if kwargs:
raise TypeError("Unexpected keyword arguments: %s" % kwargs)
if self.is_traced(table):
for func in self.funcs:
func(("| " * depth) + msg, *args)
class StringTracer(Tracer):
def __init__(self):
super(StringTracer, self).__init__()
self.stream = six.moves.StringIO()
self.funcs.append(self.string_output)
def string_output(self, msg, *args):
self.stream.write((msg % args) + "\n")
def get_value(self):
return self.stream.getvalue()
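A small illustration of the tracing hooks above.

tr = StringTracer()
tr.trace('p')                          # trace only table 'p'
tr.log('p', 'evaluating %s', 'p(x)', depth=1)
tr.log('q', 'not traced, so dropped')
assert tr.get_value() == '| evaluating p(x)\n'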
##############################################################################
# Logical Building Blocks
##############################################################################
class Proof(object):
"""A single proof.
Differs semantically from Database's
Proof in that this version represents a proof that spans rules,
instead of just a proof for a single rule.
"""
def __init__(self, root, children):
self.root = root
self.children = children
def __str__(self):
return self.str_tree(0)
def str_tree(self, depth):
s = " " * depth
s += str(self.root)
s += "\n"
for child in self.children:
s += child.str_tree(depth + 1)
return s
def leaves(self):
if len(self.children) == 0:
return [self.root]
result = []
for child in self.children:
result.extend(child.leaves())
return result
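# Editor's note: an illustrative sketch, not part of the original
# source. A Proof with no children is a leaf; leaves() collects the
# roots of all childless subproofs, and str() renders the tree with
# one space of indentation per level.
def _example_proof_tree():
    leaf1 = Proof('q(1)', [])
    leaf2 = Proof('r(1)', [])
    root = Proof('p(1)', [leaf1, leaf2])
    assert root.leaves() == ['q(1)', 'r(1)']
    return str(root)   # "p(1)\n q(1)\n r(1)\n"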
##############################################################################
# Events
##############################################################################
class EventQueue(object):
def __init__(self):
self.queue = collections.deque()
def enqueue(self, event):
self.queue.append(event)
def dequeue(self):
return self.queue.popleft()
def __len__(self):
return len(self.queue)
def __str__(self):
return "[" + ",".join([str(x) for x in self.queue]) + "]"
##############################################################################
# Abstract Theories
##############################################################################
class Theory(object):
def __init__(self, name=None, abbr=None, schema=None, theories=None,
id=None, desc=None, owner=None, kind=None):
self.schema = schema
self.theories = theories
self.kind = kind
self.id = id
self.desc = desc
self.owner = owner
self.tracer = Tracer()
if name is None:
self.name = repr(self)
else:
self.name = name
if abbr is None:
self.abbr = "th"
else:
self.abbr = abbr
maxlength = 6
if len(self.abbr) > maxlength:
self.trace_prefix = self.abbr[0:maxlength]
else:
self.trace_prefix = self.abbr + " " * (maxlength - len(self.abbr))
def set_id(self, id):
self.id = id
def initialize_tables(self, tablenames, facts):
"""initialize_tables
Event handler for (re)initializing a collection of tables. Clears
the tables before assigning the new table content.
@facts must be an iterable containing compile.Fact objects.
"""
raise NotImplementedError
def actual_events(self, events):
"""Returns subset of EVENTS that are not noops."""
actual = []
for event in events:
if event.insert:
if event.formula not in self:
actual.append(event)
else:
if event.formula in self:
actual.append(event)
return actual
def debug_mode(self):
tr = Tracer()
tr.trace('*')
self.set_tracer(tr)
def set_tracer(self, tracer):
self.tracer = tracer
def get_tracer(self):
return self.tracer
def log(self, table, msg, *args, **kwargs):
msg = self.trace_prefix + ": " + msg
self.tracer.log(table, msg, *args, **kwargs)
def policy(self):
"""Return a list of the policy statements in this theory."""
raise NotImplementedError()
def content(self):
"""Return a list of the contents of this theory.
May be rules and/or data. Note: do not change name to CONTENTS, as this
is reserved for a dictionary of stuff used by TopDownTheory.
"""
raise NotImplementedError()
def tablenames(self, body_only=False, include_builtin=False,
include_modal=True, include_facts=False):
tablenames = set()
for rule in self.policy():
tablenames |= rule.tablenames(
body_only=body_only, include_builtin=include_builtin,
include_modal=include_modal)
# also include tables in facts
# FIXME: need to conform with intended abstractions
if include_facts and hasattr(self, 'rules'):
tablenames |= set(self.rules.facts.keys())
return tablenames
def __str__(self):
return "Theory %s" % self.name
def content_string(self):
return '\n'.join([str(p) for p in self.content()]) + '\n'
def get_rule(self, ident):
for p in self.policy():
if hasattr(p, 'id') and str(p.id) == str(ident):
return p
raise exception.NotFound('rule_id %s is not found.' % ident)
def arity(self, tablename, modal=None):
"""Return the number of columns for the given tablename.
TABLENAME is of the form <policy>:<table> or <table>.
MODAL is the value of the modal operator.
"""
raise NotImplementedError
def get_attr_dict(self):
'''return dict containing the basic attributes of this theory'''
d = {'id': self.id,
'name': self.name,
'abbreviation': self.abbr,
'description': self.desc,
'owner_id': self.owner,
'kind': self.kind}
return d


@ -1,412 +0,0 @@
#! /usr/bin/python
#
# Copyright (c) 2014 IBM, Corp. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import datetime
import netaddr
import six
from six.moves import range
from dateutil import parser as datetime_parser
BUILTIN_NAMESPACE = 'builtin'
class DatetimeBuiltins(object):
# casting operators (used internally)
@classmethod
def to_timedelta(cls, x):
if isinstance(x, six.string_types):
fields = x.split(":")
num_fields = len(fields)
args = {}
keys = ['seconds', 'minutes', 'hours', 'days', 'weeks']
for i in range(0, len(fields)):
args[keys[i]] = int(fields[num_fields - 1 - i])
return datetime.timedelta(**args)
else:
return datetime.timedelta(seconds=x)
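# Editor's note (not in the original source): the colon format is read
# right-to-left as seconds, minutes, hours, days, weeks, so
# to_timedelta("1:30:15") == datetime.timedelta(hours=1, minutes=30,
# seconds=15), while a bare number such as to_timedelta(90) is taken
# as 90 seconds.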
@classmethod
def to_datetime(cls, x):
return datetime_parser.parse(x, ignoretz=True)
# current time
@classmethod
def now(cls):
return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
# extraction and creation of datetimes
@classmethod
def unpack_time(cls, x):
x = cls.to_datetime(x)
return (x.hour, x.minute, x.second)
@classmethod
def unpack_date(cls, x):
x = cls.to_datetime(x)
return (x.year, x.month, x.day)
@classmethod
def unpack_datetime(cls, x):
x = cls.to_datetime(x)
return (x.year, x.month, x.day, x.hour, x.minute, x.second)
@classmethod
def pack_time(cls, hour, minute, second):
return "{}:{}:{}".format(hour, minute, second)
@classmethod
def pack_date(cls, year, month, day):
return "{}-{}-{}".format(year, month, day)
@classmethod
def pack_datetime(cls, year, month, day, hour, minute, second):
return "{}-{}-{} {}:{}:{}".format(
year, month, day, hour, minute, second)
# extraction/creation convenience function
@classmethod
def extract_date(cls, x):
return str(cls.to_datetime(x).date())
@classmethod
def extract_time(cls, x):
return str(cls.to_datetime(x).time())
# conversion to seconds
@classmethod
def datetime_to_seconds(cls, x):
since1900 = cls.to_datetime(x) - datetime.datetime(year=1900,
month=1,
day=1)
return int(since1900.total_seconds())
# native operations on datetime
@classmethod
def datetime_plus(cls, x, y):
return str(cls.to_datetime(x) + cls.to_timedelta(y))
@classmethod
def datetime_minus(cls, x, y):
return str(cls.to_datetime(x) - cls.to_timedelta(y))
@classmethod
def datetime_lessthan(cls, x, y):
return cls.to_datetime(x) < cls.to_datetime(y)
@classmethod
def datetime_lessthanequal(cls, x, y):
return cls.to_datetime(x) <= cls.to_datetime(y)
@classmethod
def datetime_greaterthan(cls, x, y):
return cls.to_datetime(x) > cls.to_datetime(y)
@classmethod
def datetime_greaterthanequal(cls, x, y):
return cls.to_datetime(x) >= cls.to_datetime(y)
@classmethod
def datetime_equal(cls, x, y):
return cls.to_datetime(x) == cls.to_datetime(y)
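# Editor's note: a small usage sketch, not part of the original source.
# All datetime builtins exchange plain strings, so values round-trip
# through the dateutil parser on every call.
def _example_datetime_builtins():
    stamp = DatetimeBuiltins.pack_datetime(2015, 6, 1, 12, 0, 0)
    assert DatetimeBuiltins.unpack_date(stamp) == (2015, 6, 1)
    later = DatetimeBuiltins.datetime_plus(stamp, '0:30:0')   # add 30 min
    assert DatetimeBuiltins.datetime_lessthan(stamp, later)
    return later   # '2015-06-01 12:30:00'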
class NetworkAddressBuiltins(object):
@classmethod
def ips_equal(cls, ip1, ip2):
return netaddr.IPAddress(ip1) == netaddr.IPAddress(ip2)
@classmethod
def ips_lessthan(cls, ip1, ip2):
return netaddr.IPAddress(ip1) < netaddr.IPAddress(ip2)
@classmethod
def ips_lessthan_equal(cls, ip1, ip2):
return netaddr.IPAddress(ip1) <= netaddr.IPAddress(ip2)
@classmethod
def ips_greaterthan(cls, ip1, ip2):
return netaddr.IPAddress(ip1) > netaddr.IPAddress(ip2)
@classmethod
def ips_greaterthan_equal(cls, ip1, ip2):
return netaddr.IPAddress(ip1) >= netaddr.IPAddress(ip2)
@classmethod
def networks_equal(cls, cidr1, cidr2):
return netaddr.IPNetwork(cidr1) == netaddr.IPNetwork(cidr2)
@classmethod
def networks_overlap(cls, cidr1, cidr2):
cidr1_obj = netaddr.IPNetwork(cidr1)
cidr2_obj = netaddr.IPNetwork(cidr2)
# two address ranges overlap iff each begins at or before the other
# ends; the original one-sided check missed the case where cidr2
# strictly contains cidr1
return (cidr1_obj.first <= cidr2_obj.last and
        cidr2_obj.first <= cidr1_obj.last)
@classmethod
def ip_in_network(cls, ip, cidr):
cidr_obj = netaddr.IPNetwork(cidr)
ip_obj = netaddr.IPAddress(ip)
return ip_obj in cidr_obj
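# Editor's note: a brief usage sketch, not part of the original source;
# it assumes only the netaddr package already imported above.
def _example_netaddr_builtins():
    assert NetworkAddressBuiltins.ips_equal('10.0.0.1', '10.0.0.1')
    assert NetworkAddressBuiltins.ip_in_network('10.0.0.7', '10.0.0.0/24')
    assert NetworkAddressBuiltins.networks_overlap('10.0.0.0/24',
                                                   '10.0.0.128/25')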
# the registry for builtins
_builtin_map = {
'comparison': [
{'func': 'lt(x,y)', 'num_inputs': 2, 'code': lambda x, y: x < y},
{'func': 'lteq(x,y)', 'num_inputs': 2, 'code': lambda x, y: x <= y},
{'func': 'equal(x,y)', 'num_inputs': 2, 'code': lambda x, y: x == y},
{'func': 'gt(x,y)', 'num_inputs': 2, 'code': lambda x, y: x > y},
{'func': 'gteq(x,y)', 'num_inputs': 2, 'code': lambda x, y: x >= y},
{'func': 'max(x,y,z)', 'num_inputs': 2,
'code': lambda x, y: max(x, y)}],
'arithmetic': [
{'func': 'plus(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x + y},
{'func': 'minus(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x - y},
{'func': 'mul(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x * y},
{'func': 'div(x,y,z)', 'num_inputs': 2, 'code': lambda x, y:
((x // y) if (type(x) == int and type(y) == int) else (x / y))},
{'func': 'float(x,y)', 'num_inputs': 1, 'code': lambda x: float(x)},
{'func': 'int(x,y)', 'num_inputs': 1, 'code': lambda x: int(x)}],
'string': [
{'func': 'concat(x,y,z)', 'num_inputs': 2, 'code': lambda x, y: x + y},
{'func': 'len(x, y)', 'num_inputs': 1, 'code': lambda x: len(x)}],
'datetime': [
{'func': 'now(x)', 'num_inputs': 0,
'code': DatetimeBuiltins.now},
{'func': 'unpack_date(x, year, month, day)', 'num_inputs': 1,
'code': DatetimeBuiltins.unpack_date},
{'func': 'unpack_time(x, hours, minutes, seconds)', 'num_inputs': 1,
'code': DatetimeBuiltins.unpack_time},
{'func': 'unpack_datetime(x, y, m, d, h, i, s)', 'num_inputs': 1,
'code': DatetimeBuiltins.unpack_datetime},
{'func': 'pack_time(hours, minutes, seconds, result)', 'num_inputs': 3,
'code': DatetimeBuiltins.pack_time},
{'func': 'pack_date(year, month, day, result)', 'num_inputs': 3,
'code': DatetimeBuiltins.pack_date},
{'func': 'pack_datetime(y, m, d, h, i, s, result)', 'num_inputs': 6,
'code': DatetimeBuiltins.pack_datetime},
{'func': 'extract_date(x, y)', 'num_inputs': 1,
'code': DatetimeBuiltins.extract_date},
{'func': 'extract_time(x, y)', 'num_inputs': 1,
'code': DatetimeBuiltins.extract_time},
{'func': 'datetime_to_seconds(x, y)', 'num_inputs': 1,
'code': DatetimeBuiltins.datetime_to_seconds},
{'func': 'datetime_plus(x,y,z)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_plus},
{'func': 'datetime_minus(x,y,z)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_minus},
{'func': 'datetime_lt(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_lessthan},
{'func': 'datetime_lteq(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_lessthanequal},
{'func': 'datetime_gt(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_greaterthan},
{'func': 'datetime_gteq(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_greaterthanequal},
{'func': 'datetime_equal(x,y)', 'num_inputs': 2,
'code': DatetimeBuiltins.datetime_equal}],
'netaddr': [
{'func': 'ips_equal(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_equal},
{'func': 'ips_lt(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_lessthan},
{'func': 'ips_lteq(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_lessthan_equal},
{'func': 'ips_gt(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_greaterthan},
{'func': 'ips_gteq(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ips_greaterthan_equal},
{'func': 'networks_equal(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.networks_equal},
{'func': 'networks_overlap(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.networks_overlap},
{'func': 'ip_in_network(x,y)', 'num_inputs': 2,
'code': NetworkAddressBuiltins.ip_in_network}]
}
class CongressBuiltinPred(object):
def __init__(self, name, arglist, num_inputs, code):
self.predname = name
self.predargs = arglist
self.num_inputs = num_inputs
self.code = code
self.num_outputs = len(arglist) - num_inputs
def string_to_pred(self, predstring):
try:
self.predname = predstring.split('(')[0]
self.predargs = predstring.split('(')[1].split(')')[0].split(',')
except Exception:
print("Unexpected error in parsing predicate string")
def __str__(self):
return self.predname + '(' + ",".join(self.predargs) + ')'
class CongressBuiltinCategoryMap(object):
def __init__(self, start_builtin_map):
self.categorydict = dict()
self.preddict = dict()
for key, value in start_builtin_map.items():
self.categorydict[key] = []
for predtriple in value:
pred = self.dict_predtriple_to_pred(predtriple)
self.categorydict[key].append(pred)
self.sync_with_predlist(pred.predname, pred, key, 'add')
def mapequal(self, othercbc):
    return self.categorydict == othercbc.categorydict
def dict_predtriple_to_pred(self, predtriple):
ncode = predtriple['code']
ninputs = predtriple['num_inputs']
nfunc = predtriple['func']
nfunc_pred = nfunc.split("(")[0]
nfunc_arglist = nfunc.split("(")[1].split(")")[0].split(",")
pred = CongressBuiltinPred(nfunc_pred, nfunc_arglist, ninputs, ncode)
return pred
def add_map(self, newmap):
for key, value in newmap.items():
if key not in self.categorydict:
self.categorydict[key] = []
for predtriple in value:
pred = self.dict_predtriple_to_pred(predtriple)
if not self.builtin_is_registered(pred):
self.categorydict[key].append(pred)
self.sync_with_predlist(pred.predname, pred, key, 'add')
def delete_map(self, newmap):
for key, value in newmap.items():
for predtriple in value:
predtotest = self.dict_predtriple_to_pred(predtriple)
for pred in self.categorydict[key]:
if pred.predname == predtotest.predname:
if pred.num_inputs == predtotest.num_inputs:
self.categorydict[key].remove(pred)
self.sync_with_predlist(pred.predname,
pred, key, 'del')
if self.categorydict[key] == []:
del self.categorydict[key]
def sync_with_predlist(self, predname, pred, category, operation):
if operation == 'add':
self.preddict[predname] = [pred, category]
if operation == 'del':
if predname in self.preddict:
del self.preddict[predname]
def delete_builtin(self, category, name, inputs):
if category not in self.categorydict:
self.categorydict[category] = []
for pred in self.categorydict[category]:
if pred.num_inputs == inputs and pred.predname == name:
self.categorydict[category].remove(pred)
self.sync_with_predlist(name, pred, category, 'del')
def get_category_name(self, predname, predinputs):
if predname in self.preddict:
if self.preddict[predname][0].num_inputs == predinputs:
return self.preddict[predname][1]
return None
def exists_category(self, category):
return category in self.categorydict
def insert_category(self, category):
self.categorydict[category] = []
def delete_category(self, category):
if category in self.categorydict:
categorypreds = self.categorydict[category]
for pred in categorypreds:
self.sync_with_predlist(pred.predname, pred, category, 'del')
del self.categorydict[category]
def insert_to_category(self, category, pred):
if category in self.categorydict:
self.categorydict[category].append(pred)
self.sync_with_predlist(pred.predname, pred, category, 'add')
else:
assert("Category does not exist")
def delete_from_category(self, category, pred):
if category in self.categorydict:
self.categorydict[category].remove(pred)
self.sync_with_predlist(pred.predname, pred, category, 'del')
else:
assert("Category does not exist")
def delete_all_in_category(self, category):
if category in self.categorydict:
categorypreds = self.categorydict[category]
for pred in categorypreds:
self.sync_with_predlist(pred.predname, pred, category, 'del')
self.categorydict[category] = []
else:
assert("Category does not exist")
def builtin_is_registered(self, predtotest):
"""Given a CongressBuiltinPred, check if it has been registered."""
pname = predtotest.predname
if pname in self.preddict:
if self.preddict[pname][0].num_inputs == predtotest.num_inputs:
return True
return False
def is_builtin(self, table, arity=None):
"""Given a Tablename and arity, check if it is a builtin."""
# Note: for now we grandfather in old builtin tablenames but will
# deprecate those tablenames in favor of builtin:tablename
if ((table.service == BUILTIN_NAMESPACE and
table.table in self.preddict) or
table.table in self.preddict): # grandfather
if not arity:
return True
if len(self.preddict[table.table][0].predargs) == arity:
return True
return False
def builtin(self, table):
"""Return a CongressBuiltinPred for given Tablename or None."""
if not isinstance(table, six.string_types):
table = table.table
if table in self.preddict:
return self.preddict[table][0]
return None
def list_available_builtins(self):
"""Print out the list of builtins, by category."""
for key, value in self.categorydict.items():
predlist = self.categorydict[key]
for pred in predlist:
print(str(pred))
# a Singleton that serves as the entry point for builtin functionality
builtin_registry = CongressBuiltinCategoryMap(_builtin_map)
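# Editor's note: an illustrative sketch, not part of the original
# source, of looking up and evaluating a builtin through the registry.
# For a builtin such as plus(x,y,z), num_inputs counts the leading
# input columns and the remaining columns are outputs computed by
# `code`.
def _example_builtin_lookup():
    pred = builtin_registry.builtin('plus')
    assert pred.num_inputs == 2 and pred.num_outputs == 1
    return pred.code(2, 3)   # 5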

File diff suppressed because it is too large


@ -1,413 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from six.moves import range
from congress.datalog import base
from congress.datalog import compile
from congress.datalog import topdown
from congress.datalog import unify
from congress.datalog import utility
from congress import exception
##############################################################################
# Concrete Theory: Database
##############################################################################
class Database(topdown.TopDownTheory):
class Proof(object):
def __init__(self, binding, rule):
self.binding = binding
self.rule = rule
def __str__(self):
return "apply({}, {})".format(str(self.binding), str(self.rule))
def __eq__(self, other):
result = (self.binding == other.binding and
self.rule == other.rule)
# LOG.debug("Pf: Comparing %s and %s: %s", self, other, result)
# LOG.debug("Pf: %s == %s is %s",
# self.binding, other.binding, self.binding == other.binding)
# LOG.debug("Pf: %s == %s is %s",
# self.rule, other.rule, self.rule == other.rule)
return result
def __ne__(self, other):
return not self.__eq__(other)
class ProofCollection(object):
def __init__(self, proofs):
self.contents = list(proofs)
def __str__(self):
return '{' + ",".join(str(x) for x in self.contents) + '}'
def __isub__(self, other):
if other is None:
return self
# LOG.debug("PC: Subtracting %s and %s", self, other)
remaining = []
for proof in self.contents:
if proof not in other.contents:
remaining.append(proof)
self.contents = remaining
return self
def __ior__(self, other):
if other is None:
return self
# LOG.debug("PC: Unioning %s and %s", self, other)
for proof in other.contents:
# LOG.debug("PC: Considering %s", proof)
if proof not in self.contents:
self.contents.append(proof)
return self
def __getitem__(self, key):
return self.contents[key]
def __len__(self):
return len(self.contents)
def __ge__(self, iterable):
for proof in iterable:
if proof not in self.contents:
# LOG.debug("Proof %s makes %s not >= %s",
# proof, self, iterstr(iterable))
return False
return True
def __le__(self, iterable):
for proof in self.contents:
if proof not in iterable:
# LOG.debug("Proof %s makes %s not <= %s",
# proof, self, iterstr(iterable))
return False
return True
def __eq__(self, other):
return self <= other and other <= self
def __ne__(self, other):
return not self.__eq__(other)
class DBTuple(object):
def __init__(self, iterable, proofs=None):
self.tuple = tuple(iterable)
if proofs is None:
proofs = []
self.proofs = Database.ProofCollection(proofs)
def __eq__(self, other):
return self.tuple == other.tuple
def __ne__(self, other):
return not self.__eq__(other)
def __str__(self):
return str(self.tuple) + str(self.proofs)
def __len__(self):
return len(self.tuple)
def __getitem__(self, index):
return self.tuple[index]
def __setitem__(self, index, value):
# tuples are immutable, so rebuild the stored tuple
self.tuple = self.tuple[:index] + (value,) + self.tuple[index + 1:]
def match(self, atom, unifier):
# LOG.debug("DBTuple matching %s against atom %s in %s",
# self, iterstr(atom.arguments), unifier)
if len(self.tuple) != len(atom.arguments):
return None
changes = []
for i in range(0, len(atom.arguments)):
val, binding = unifier.apply_full(atom.arguments[i])
# LOG.debug("val(%s)=%s at %s; comparing to object %s",
# atom.arguments[i], val, binding, self.tuple[i])
if val.is_variable():
changes.append(binding.add(
val, compile.Term.create_from_python(self.tuple[i]),
None))
else:
if val.name != self.tuple[i]:
unify.undo_all(changes)
return None
return changes
def __init__(self, name=None, abbr=None, theories=None, schema=None,
desc=None, owner=None):
super(Database, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
self.data = {}
self.kind = base.DATABASE_POLICY_TYPE
def str2(self):
def hash2str(h):
s = "{"
s += ", ".join(["{} : {}".format(str(key), str(h[key]))
for key in h])
return s
def hashlist2str(h):
strings = []
for key in h:
s = "{} : ".format(key)
s += '['
s += ', '.join([str(val) for val in h[key]])
s += ']'
strings.append(s)
return '{' + ", ".join(strings) + '}'
return hashlist2str(self.data)
def __eq__(self, other):
return self.data == other.data
def __ne__(self, other):
return not self.__eq__(other)
def __sub__(self, other):
def add_tuple(table, dbtuple):
new = [table]
new.extend(dbtuple.tuple)
results.append(new)
results = []
for table in self.data:
if table not in other.data:
for dbtuple in self.data[table]:
add_tuple(table, dbtuple)
else:
for dbtuple in self.data[table]:
if dbtuple not in other.data[table]:
add_tuple(table, dbtuple)
return results
def __or__(self, other):
def add_db(db):
for table in db.data:
for dbtuple in db.data[table]:
result.insert(compile.Literal.create_from_table_tuple(
table, dbtuple.tuple), proofs=dbtuple.proofs)
result = Database()
add_db(self)
add_db(other)
return result
def __getitem__(self, key):
# KEY must be a tablename
return self.data[key]
def content(self, tablenames=None):
"""Return a sequence of Literals representing all the table data."""
results = []
if tablenames is None:
tablenames = self.data.keys()
for table in tablenames:
if table not in self.data:
continue
for dbtuple in self.data[table]:
results.append(compile.Literal.create_from_table_tuple(
table, dbtuple.tuple))
return results
def is_noop(self, event):
"""Returns T if EVENT is a noop on the database."""
# insert/delete same code but with flipped return values
# Code below is written as insert, except noop initialization.
if event.is_insert():
noop = True
else:
noop = False
if event.formula.table.table not in self.data:
return not noop
event_data = self.data[event.formula.table.table]
raw_tuple = tuple(event.formula.argument_names())
for dbtuple in event_data:
if dbtuple.tuple == raw_tuple:
if event.proofs <= dbtuple.proofs:
return noop
return not noop
def __contains__(self, formula):
if not compile.is_atom(formula):
return False
if formula.table.table not in self.data:
return False
event_data = self.data[formula.table.table]
raw_tuple = tuple(formula.argument_names())
return any((dbtuple.tuple == raw_tuple for dbtuple in event_data))
def explain(self, atom):
if atom.table.table not in self.data or not atom.is_ground():
return self.ProofCollection([])
args = tuple([x.name for x in atom.arguments])
for dbtuple in self.data[atom.table.table]:
if dbtuple.tuple == args:
return dbtuple.proofs
def tablenames(self, body_only=False, include_builtin=False,
include_modal=True):
"""Return all table names occurring in this theory."""
if body_only:
return []
return self.data.keys()
# overloads for TopDownTheory so we can properly use the
# top_down_evaluation routines
def defined_tablenames(self):
return self.data.keys()
def head_index(self, table, match_literal=None):
if table not in self.data:
return []
return self.data[table]
def head(self, thing):
return thing
def body(self, thing):
return []
def bi_unify(self, dbtuple, unifier1, atom, unifier2, theoryname):
"""THING1 is always a ground DBTuple and THING2 is always an ATOM."""
return dbtuple.match(atom, unifier2)
def atom_to_internal(self, atom, proofs=None):
return atom.table.table, self.DBTuple(atom.argument_names(), proofs)
def insert(self, atom, proofs=None):
"""Inserts ATOM into the DB. Returns changes."""
return self.modify(compile.Event(formula=atom, insert=True,
proofs=proofs))
def delete(self, atom, proofs=None):
"""Deletes ATOM from the DB. Returns changes."""
return self.modify(compile.Event(formula=atom, insert=False,
proofs=proofs))
def update(self, events):
"""Applies all of EVENTS to the DB.
Each event is either an insert or a delete.
"""
changes = []
for event in events:
changes.extend(self.modify(event))
return changes
def update_would_cause_errors(self, events):
"""Return a list of Policyxception.
Return a list of PolicyException if we were
to apply the events EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
for event in events:
if not compile.is_atom(event.formula):
errors.append(exception.PolicyException(
"Non-atomic formula is not permitted: {}".format(
str(event.formula))))
else:
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
return errors
def modify(self, event):
"""Insert/Delete atom.
Inserts/deletes ATOM and returns a list of changes that
were caused. That list contains either 0 or 1 Event.
"""
assert compile.is_atom(event.formula), "Modify requires Atom"
atom = event.formula
self.log(atom.table.table, "Modify: %s", atom)
if self.is_noop(event):
self.log(atom.table.table, "Event %s is a noop", event)
return []
if event.insert:
self.insert_actual(atom, proofs=event.proofs)
else:
self.delete_actual(atom, proofs=event.proofs)
return [event]
def insert_actual(self, atom, proofs=None):
"""Workhorse for inserting ATOM into the DB.
Along with proofs explaining how ATOM was computed from other tables.
"""
assert compile.is_atom(atom), "Insert requires Atom"
table, dbtuple = self.atom_to_internal(atom, proofs)
self.log(table, "Insert: %s", atom)
if table not in self.data:
self.data[table] = [dbtuple]
self.log(atom.table.table, "First tuple in table %s", table)
return
else:
for existingtuple in self.data[table]:
assert existingtuple.proofs is not None
if existingtuple.tuple == dbtuple.tuple:
assert existingtuple.proofs is not None
existingtuple.proofs |= dbtuple.proofs
assert existingtuple.proofs is not None
return
self.data[table].append(dbtuple)
def delete_actual(self, atom, proofs=None):
"""Workhorse for deleting ATOM from the DB.
Along with the proofs that are no longer true.
"""
assert compile.is_atom(atom), "Delete requires Atom"
self.log(atom.table.table, "Delete: %s", atom)
table, dbtuple = self.atom_to_internal(atom, proofs)
if table not in self.data:
return
for i in range(0, len(self.data[table])):
existingtuple = self.data[table][i]
if existingtuple.tuple == dbtuple.tuple:
existingtuple.proofs -= dbtuple.proofs
if len(existingtuple.proofs) == 0:
del self.data[table][i]
return
def policy(self):
"""Return the policy for this theory.
No policy in this theory; only data.
"""
return []
def get_arity_self(self, tablename):
if tablename not in self.data:
return None
if len(self.data[tablename]) == 0:
return None
return len(self.data[tablename][0].tuple)
def content_string(self):
s = ""
for lit in self.content():
s += str(lit) + '\n'
return s + '\n'
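# Editor's note: a minimal usage sketch, not part of the original
# source (it assumes compile.parse1, used elsewhere in Congress, to
# build atoms). Facts are stored per table as DBTuples, each carrying
# the proofs that justify it; a fact disappears only when its last
# proof is deleted.
def _example_database():
    from congress.datalog import compile
    db = Database()
    db.insert(compile.parse1('p(1)'))
    db.insert(compile.parse1('p(2)'))
    db.delete(compile.parse1('p(1)'))
    return db.content()   # just the literal p(2)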


@ -1,171 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.datalog import utility
class FactSet(object):
"""FactSet
Maintains a set of facts, and provides indexing for efficient iteration,
given a partial or full match. Expects that all facts are the same width.
"""
def __init__(self):
self._facts = utility.OrderedSet()
# key is a sorted tuple of column indices, values are dict mapping a
# specific value for the key to a set of Facts.
self._indicies = {}
def __contains__(self, fact):
return fact in self._facts
def __len__(self):
return len(self._facts)
def __iter__(self):
return self._facts.__iter__()
def add(self, fact):
"""Add a fact to the FactSet
Returns True if the fact is absent from this FactSet and adds the
fact, otherwise returns False.
"""
assert isinstance(fact, tuple)
changed = self._facts.add(fact)
if changed:
# Add the fact to the indices
try:
for index in self._indicies.keys():
self._add_fact_to_index(fact, index)
except Exception:
self._facts.discard(fact)
raise
return changed
def remove(self, fact):
"""Remove a fact from the FactSet
Returns True if the fact is in this FactSet and removes the fact,
otherwise returns False.
"""
changed = self._facts.discard(fact)
if changed:
# Remove from indices
try:
for index in self._indicies.keys():
self._remove_fact_from_index(fact, index)
except Exception:
self._facts.add(fact)
raise
return changed
def create_index(self, columns):
"""Create an index
@columns is a tuple of column indices that index into the facts in
self. @columns must be sorted in ascending order, and each column
index must be less than the width of a fact in self. If the index
exists, do nothing.
"""
assert sorted(columns) == list(columns)
assert len(columns)
if columns in self._indicies:
return
for f in self._facts:
self._add_fact_to_index(f, columns)
def remove_index(self, columns):
"""Remove an index
@columns is a tuple of column indices that index into the facts in
self. @columns must be sorted in ascending order, and each column
index must be less than the width of a fact in self. If the index
does not exist, do nothing.
"""
assert sorted(columns) == list(columns)
if columns in self._indicies:
del self._indicies[columns]
def has_index(self, columns):
"""Returns True if the index exists."""
return columns in self._indicies
def find(self, partial_fact, iterations=None):
"""Find Facts given a partial fact
@partial_fact is a tuple of pair tuples. The first item in each
pair tuple is an index into a fact, and the second item is a value to
match again self._facts. Expects the pairs to be sorted by index in
ascending order.
@iterations is either an empty list or None. If @iterations is an
empty list, then find() will append the number of iterations find()
used to compute the return value (this is useful for testing indexing).
Returns matching Facts.
"""
index = tuple([i for i, v in partial_fact])
k = tuple([v for i, v in partial_fact])
if index in self._indicies:
if iterations is not None:
iterations.append(1)
if k in self._indicies[index]:
return self._indicies[index][k]
else:
return set()
# There is no index, so iterate.
matches = set()
for f in self._facts:
match = True
for i, v in partial_fact:
if f[i] != v:
match = False
break
if match:
matches.add(f)
if iterations is not None:
iterations.append(len(self._facts))
return matches
def _compute_key(self, columns, fact):
# assumes that @columns is sorted in ascending order.
return tuple([fact[i] for i in columns])
def _add_fact_to_index(self, fact, index):
if index not in self._indicies:
self._indicies[index] = {}
k = self._compute_key(index, fact)
if k not in self._indicies[index]:
self._indicies[index][k] = set((fact,))
else:
self._indicies[index][k].add(fact)
def _remove_fact_from_index(self, fact, index):
k = self._compute_key(index, fact)
self._indicies[index][k].remove(fact)
if not len(self._indicies[index][k]):
del self._indicies[index][k]
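# Editor's note: an illustrative sketch, not part of the original
# source. With an index on column 0, find() answers an equality lookup
# in a single step instead of scanning every fact; the `iterations`
# list records which path was taken.
def _example_factset():
    fs = FactSet()
    fs.add(('alice', 25))
    fs.add(('bob', 30))
    fs.create_index((0,))
    iterations = []
    matches = fs.find(((0, 'alice'),), iterations)
    assert matches == set([('alice', 25)]) and iterations == [1]
    return matches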


@ -1,621 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from six.moves import range
from congress.datalog import base
from congress.datalog import compile
from congress.datalog import database
from congress.datalog import topdown
from congress.datalog import utility
LOG = logging.getLogger(__name__)
class DeltaRule(object):
"""Rule describing how updates to data sources change table."""
def __init__(self, trigger, head, body, original):
self.trigger = trigger # atom
self.head = head # atom
# list of literals, sorted for order-insensitive comparison
self.body = (
sorted([lit for lit in body if not lit.is_builtin()]) +
sorted([lit for lit in body if lit.is_builtin()]))
self.original = original # Rule from which SELF was derived
def __str__(self):
return "<trigger: {}, head: {}, body: {}>".format(
str(self.trigger), str(self.head), [str(lit) for lit in self.body])
def __eq__(self, other):
return (self.trigger == other.trigger and
self.head == other.head and
len(self.body) == len(other.body) and
all(self.body[i] == other.body[i]
for i in range(0, len(self.body))))
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash((self.trigger, self.head, tuple(self.body)))
def variables(self):
"""Return the set of variables occurring in this delta rule."""
vs = self.trigger.variables()
vs |= self.head.variables()
for atom in self.body:
vs |= atom.variables()
return vs
def tablenames(self, body_only=False, include_builtin=False,
include_modal=True):
"""Return the set of tablenames occurring in this delta rule."""
tables = set()
if not body_only:
tables.add(self.head.tablename())
tables.add(self.trigger.tablename())
for atom in self.body:
tables.add(atom.tablename())
return tables
class DeltaRuleTheory(base.Theory):
"""A collection of DeltaRules. Not useful by itself as a policy."""
def __init__(self, name=None, abbr=None, theories=None):
super(DeltaRuleTheory, self).__init__(
name=name, abbr=abbr, theories=theories)
# dictionary from table name to list of rules with that table as
# trigger
self.rules = {}
# set of the original rules from which the delta rules were derived
self.originals = set()
# dictionary from table name to number of rules with that table in
# head
self.views = {}
# all tables
self.all_tables = {}
self.kind = base.DELTA_POLICY_TYPE
def modify(self, event):
"""Insert/delete the compile.Rule RULE into the theory.
Return list of changes (either the empty list or
a list including just RULE).
"""
self.log(None, "DeltaRuleTheory.modify %s", event.formula)
self.log(None, "originals: %s", utility.iterstr(self.originals))
if event.insert:
if self.insert(event.formula):
return [event]
else:
if self.delete(event.formula):
return [event]
return []
def insert(self, rule):
"""Insert a compile.Rule into the theory.
Return True iff the theory changed.
"""
assert compile.is_regular_rule(rule), (
"DeltaRuleTheory only takes rules")
self.log(rule.tablename(), "Insert: %s", rule)
if rule in self.originals:
self.log(None, utility.iterstr(self.originals))
return False
self.log(rule.tablename(), "Insert 2: %s", rule)
for delta in self.compute_delta_rules([rule]):
self.insert_delta(delta)
self.originals.add(rule)
return True
def insert_delta(self, delta):
"""Insert a delta rule."""
self.log(None, "Inserting delta rule %s", delta)
# views (tables occurring in head)
if delta.head.table.table in self.views:
self.views[delta.head.table.table] += 1
else:
self.views[delta.head.table.table] = 1
# tables
for table in delta.tablenames():
if table in self.all_tables:
self.all_tables[table] += 1
else:
self.all_tables[table] = 1
# contents
if delta.trigger.table.table not in self.rules:
self.rules[delta.trigger.table.table] = utility.OrderedSet()
self.rules[delta.trigger.table.table].add(delta)
def delete(self, rule):
"""Delete a compile.Rule from theory.
Assumes that COMPUTE_DELTA_RULES is deterministic.
Returns True iff the theory changed.
"""
self.log(rule.tablename(), "Delete: %s", rule)
if rule not in self.originals:
return False
for delta in self.compute_delta_rules([rule]):
self.delete_delta(delta)
self.originals.remove(rule)
return True
def delete_delta(self, delta):
"""Delete the DeltaRule DELTA from the theory."""
# views
if delta.head.table.table in self.views:
self.views[delta.head.table.table] -= 1
if self.views[delta.head.table.table] == 0:
del self.views[delta.head.table.table]
# tables
for table in delta.tablenames():
if table in self.all_tables:
self.all_tables[table] -= 1
if self.all_tables[table] == 0:
del self.all_tables[table]
# contents
self.rules[delta.trigger.table.table].discard(delta)
if not len(self.rules[delta.trigger.table.table]):
del self.rules[delta.trigger.table.table]
def policy(self):
return self.originals
def get_arity_self(self, tablename):
for p in self.originals:
if p.head.table.table == tablename:
return len(p.head.arguments)
return None
def __contains__(self, formula):
return formula in self.originals
def __str__(self):
return str(self.rules)
def rules_with_trigger(self, table):
"""Return the list of DeltaRules that trigger on the given TABLE."""
if table in self.rules:
return self.rules[table]
else:
return []
def is_view(self, x):
return x in self.views
def is_known(self, x):
return x in self.all_tables
def base_tables(self):
base = []
for table in self.all_tables:
if table not in self.views:
base.append(table)
return base
@classmethod
def eliminate_self_joins(cls, formulas):
"""Remove self joins.
Return new list of formulas that is equivalent to
the list of formulas FORMULAS except that there
are no self-joins.
"""
def new_table_name(name, arity, index):
return "___{}_{}_{}".format(name, arity, index)
def n_variables(n):
vars = []
for i in range(0, n):
vars.append("x" + str(i))
return vars
# dict from (table name, arity) tuple to
# max num of occurrences of self-joins in any rule
global_self_joins = {}
# remove self-joins from rules
results = []
for rule in formulas:
if rule.is_atom():
results.append(rule)
continue
LOG.debug("eliminating self joins from %s", rule)
occurrences = {} # for just this rule
for atom in rule.body:
table = atom.tablename()
arity = len(atom.arguments)
tablearity = (table, arity)
if tablearity not in occurrences:
occurrences[tablearity] = 1
else:
# change name of atom
atom.table.table = new_table_name(table, arity,
occurrences[tablearity])
# update our counters
occurrences[tablearity] += 1
if tablearity not in global_self_joins:
global_self_joins[tablearity] = 1
else:
global_self_joins[tablearity] = (
max(occurrences[tablearity] - 1,
global_self_joins[tablearity]))
results.append(rule)
LOG.debug("final rule: %s", rule)
# add definitions for new tables
for tablearity in global_self_joins:
table = tablearity[0]
arity = tablearity[1]
for i in range(1, global_self_joins[tablearity] + 1):
newtable = new_table_name(table, arity, i)
args = [compile.Variable(var) for var in n_variables(arity)]
head = compile.Literal(newtable, args)
body = [compile.Literal(table, args)]
results.append(compile.Rule(head, body))
LOG.debug("Adding rule %s", results[-1])
return results
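# Editor's note (not in the original source): an example of the
# rewrite. A rule that references table q twice, such as
#     p(x, y) :- q(x, z), q(z, y)
# becomes
#     p(x, y) :- q(x, z), ___q_2_1(z, y)
# together with the defining rule ___q_2_1(x0, x1) :- q(x0, x1), so
# the delta-rule machinery never sees the same trigger table twice in
# one body.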
@classmethod
def compute_delta_rules(cls, formulas):
"""Return list of DeltaRules computed from formulas.
Assuming FORMULAS has no self-joins, return a list of DeltaRules
derived from those FORMULAS.
"""
# Should do the following for correctness, but it needs to be
# done elsewhere so that we can properly maintain the tables
# that are generated.
# formulas = cls.eliminate_self_joins(formulas)
delta_rules = []
for rule in formulas:
if rule.is_atom():
continue
rule = compile.reorder_for_safety(rule)
for literal in rule.body:
if literal.is_builtin():
continue
newbody = [lit for lit in rule.body if lit is not literal]
delta_rules.append(
DeltaRule(literal, rule.head, newbody, rule))
return delta_rules
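# Editor's note: an illustrative sketch, not part of the original
# source (compile.parse1 is assumed from elsewhere in Congress). For
# p(x) :- q(x), r(x), one DeltaRule is produced per non-builtin body
# literal:
#     <trigger: q(x), head: p(x), body: [r(x)]>
#     <trigger: r(x), head: p(x), body: [q(x)]>
# so an insert/delete event on q is joined against the current
# contents of r to compute the change to p, and vice versa.
def _example_compute_delta_rules():
    from congress.datalog import compile
    rule = compile.parse1('p(x) :- q(x), r(x)')
    return DeltaRuleTheory.compute_delta_rules([rule])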
class MaterializedViewTheory(topdown.TopDownTheory):
"""A theory that stores the table contents of views explicitly.
Relies on included theories to define the contents of those
tables not defined by the rules of the theory.
Recursive rules are allowed.
"""
def __init__(self, name=None, abbr=None, theories=None, schema=None,
desc=None, owner=None):
super(MaterializedViewTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
# queue of events left to process
self.queue = base.EventQueue()
# data storage
db_name = None
db_abbr = None
delta_name = None
delta_abbr = None
if name is not None:
db_name = name + "Database"
delta_name = name + "Delta"
if abbr is not None:
db_abbr = abbr + "DB"
delta_abbr = abbr + "Dlta"
self.database = database.Database(name=db_name, abbr=db_abbr)
# rules that dictate how database changes in response to events
self.delta_rules = DeltaRuleTheory(name=delta_name, abbr=delta_abbr)
self.kind = base.MATERIALIZED_POLICY_TYPE
def set_tracer(self, tracer):
if isinstance(tracer, base.Tracer):
self.tracer = tracer
self.database.tracer = tracer
self.delta_rules.tracer = tracer
else:
self.tracer = tracer['self']
self.database.tracer = tracer['database']
self.delta_rules.tracer = tracer['delta_rules']
def get_tracer(self):
return {'self': self.tracer,
'database': self.database.tracer,
'delta_rules': self.delta_rules.tracer}
# External Interface
# SELECT is handled by TopDownTheory
def insert(self, formula):
return self.update([compile.Event(formula=formula, insert=True)])
def delete(self, formula):
return self.update([compile.Event(formula=formula, insert=False)])
def update(self, events):
"""Apply inserts/deletes described by EVENTS and return changes.
Does not check if EVENTS would cause errors.
"""
for event in events:
assert compile.is_datalog(event.formula), (
"Non-formula not allowed: {}".format(str(event.formula)))
self.enqueue_any(event)
changes = self.process_queue()
return changes
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
Return a list of PolicyException if we were
to apply the events EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
# compute new rule set
for event in events:
assert compile.is_datalog(event.formula), (
"update_would_cause_errors operates only on objects")
self.log(None, "Updating %s", event.formula)
if event.formula.is_atom():
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
else:
errors.extend(compile.rule_errors(
event.formula, self.theories, self.name))
return errors
def explain(self, query, tablenames, find_all):
"""Returns a list of proofs if QUERY is true or None if else."""
assert compile.is_atom(query), "Explain requires an atom"
# ignoring TABLENAMES and FIND_ALL
# except that we return the proper type.
proof = self.explain_aux(query, 0)
if proof is None:
return None
else:
return [proof]
def policy(self):
return self.delta_rules.policy()
def get_arity_self(self, tablename):
result = self.database.get_arity_self(tablename)
if result:
return result
return self.delta_rules.get_arity_self(tablename)
# Interface implementation
def explain_aux(self, query, depth):
self.log(query.table.table, "Explaining %s", query, depth=depth)
# Bail out on negated literals. Need different
# algorithm b/c we need to introduce quantifiers.
if query.is_negated():
return base.Proof(query, [])
# grab first local proof, since they're all equally good
localproofs = self.database.explain(query)
if localproofs is None:
return None
if len(localproofs) == 0: # base fact
return base.Proof(query, [])
localproof = localproofs[0]
rule_instance = localproof.rule.plug(localproof.binding)
subproofs = []
for lit in rule_instance.body:
subproof = self.explain_aux(lit, depth + 1)
if subproof is None:
return None
subproofs.append(subproof)
return base.Proof(query, subproofs)
def modify(self, event):
"""Modifies contents of theory to insert/delete FORMULA.
Returns True iff the theory changed.
"""
self.log(None, "Materialized.modify")
self.enqueue_any(event)
changes = self.process_queue()
self.log(event.formula.tablename(),
"modify returns %s", utility.iterstr(changes))
return changes
def enqueue_any(self, event):
"""Enqueue event.
Processing rules is a bit different than processing atoms
in that they generate additional events that we want
to process either before the rule is deleted or after
it is inserted. PROCESS_QUEUE is similar but assumes
that only the data will cause propagations (and ignores
included theories).
"""
# Note: all included theories must define MODIFY
formula = event.formula
if formula.is_atom():
self.log(formula.tablename(), "compute/enq: atom %s", formula)
assert not self.is_view(formula.table.table), (
"Cannot directly modify tables" +
" computed from other tables")
# self.log(formula.table, "%s: %s", text, formula)
self.enqueue(event)
return []
else:
# rules do not need to talk to included theories because they
# only generate events for views
# need to eliminate self-joins here so that we fill all
# the tables introduced by self-join elimination.
for rule in DeltaRuleTheory.eliminate_self_joins([formula]):
new_event = compile.Event(formula=rule, insert=event.insert,
target=event.target)
self.enqueue(new_event)
return []
def enqueue(self, event):
self.log(event.tablename(), "Enqueueing: %s", event)
self.queue.enqueue(event)
def process_queue(self):
"""Data and rule propagation routine.
Returns list of events that were not noops
"""
self.log(None, "Processing queue")
history = []
while len(self.queue) > 0:
event = self.queue.dequeue()
self.log(event.tablename(), "Dequeued %s", event)
if compile.is_regular_rule(event.formula):
changes = self.delta_rules.modify(event)
if len(changes) > 0:
history.extend(changes)
bindings = self.top_down_evaluation(
event.formula.variables(), event.formula.body)
self.log(event.formula.tablename(),
"new bindings after top-down: %s",
utility.iterstr(bindings))
self.process_new_bindings(bindings, event.formula.head,
event.insert, event.formula)
else:
self.propagate(event)
history.extend(self.database.modify(event))
self.log(event.tablename(), "History: %s",
utility.iterstr(history))
return history
def propagate(self, event):
"""Propagate event.
Computes and enqueue events generated by EVENT and the DELTA_RULES.
"""
self.log(event.formula.table.table, "Processing event: %s", event)
applicable_rules = self.delta_rules.rules_with_trigger(
event.formula.table.table)
if len(applicable_rules) == 0:
self.log(event.formula.table.table, "No applicable delta rule")
for delta_rule in applicable_rules:
self.propagate_rule(event, delta_rule)
def propagate_rule(self, event, delta_rule):
"""Propagate event and delta_rule.
Compute and enqueue new events generated by EVENT and DELTA_RULE.
"""
self.log(event.formula.table.table, "Processing event %s with rule %s",
event, delta_rule)
# compute tuples generated by event (either for insert or delete)
# print "event: {}, event.tuple: {},
# event.tuple.rawtuple(): {}".format(
# str(event), str(event.tuple), str(event.tuple.raw_tuple()))
# binding_list is dictionary
# Save binding for delta_rule.trigger; throw away binding for event
# since event is ground.
binding = self.new_bi_unifier()
assert compile.is_literal(delta_rule.trigger)
assert compile.is_literal(event.formula)
undo = self.bi_unify(delta_rule.trigger, binding,
event.formula, self.new_bi_unifier(), self.name)
if undo is None:
return
self.log(event.formula.table.table,
"binding list for event and delta-rule trigger: %s", binding)
bindings = self.top_down_evaluation(
delta_rule.variables(), delta_rule.body, binding)
self.log(event.formula.table.table, "new bindings after top-down: %s",
",".join([str(x) for x in bindings]))
if delta_rule.trigger.is_negated():
insert_delete = not event.insert
else:
insert_delete = event.insert
self.process_new_bindings(bindings, delta_rule.head,
insert_delete, delta_rule.original)
def process_new_bindings(self, bindings, atom, insert, original_rule):
"""Process new bindings.
For each of BINDINGS, apply to ATOM, and enqueue it as an insert if
INSERT is True and as a delete otherwise.
"""
# for each binding, compute generated tuple and group bindings
# by the tuple they generated
new_atoms = {}
for binding in bindings:
new_atom = atom.plug(binding)
if new_atom not in new_atoms:
new_atoms[new_atom] = []
new_atoms[new_atom].append(database.Database.Proof(
binding, original_rule))
self.log(atom.table.table, "new tuples generated: %s",
utility.iterstr(new_atoms))
# enqueue each distinct generated tuple, recording appropriate bindings
for new_atom in new_atoms:
# self.log(event.table, "new_tuple %s: %s", new_tuple,
# new_tuples[new_tuple])
# Only enqueue if new data.
# Putting the check here is necessary to support recursion.
self.enqueue(compile.Event(formula=new_atom,
proofs=new_atoms[new_atom],
insert=insert))
def is_view(self, x):
"""Return True if the table X is defined by the theory."""
return self.delta_rules.is_view(x)
def is_known(self, x):
"""Return True if this theory has any rule mentioning table X."""
return self.delta_rules.is_known(x)
def base_tables(self):
"""Get base tables.
Return the list of tables that are mentioned in the rules but
for which there are no rules with those tables in the head.
"""
return self.delta_rules.base_tables()
def _top_down_th(self, context, caller):
return self.database._top_down_th(context, caller)
def content(self, tablenames=None):
return self.database.content(tablenames=tablenames)
def __contains__(self, formula):
# TODO(thinrichs): if formula is a rule, we need to check
# self.delta_rules; if formula is an atom, we need to check
# self.database, but only if the table for that atom is
# not defined by rules. As it stands, for atoms, we are
# conflating membership with evaluation.
return (formula in self.database or formula in self.delta_rules)
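# Editor's note: a hedged usage sketch, not part of the original source
# (compile.parse1 is assumed from elsewhere in Congress). Inserting a
# rule and then a matching fact drives the event queue, and the derived
# table is materialized in the embedded Database.
def _example_materialized_view():
    from congress.datalog import compile
    th = MaterializedViewTheory(name='mat')
    th.insert(compile.parse1('p(x) :- q(x)'))
    th.insert(compile.parse1('q(1)'))
    return th.content()   # includes q(1) and the derived p(1)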


@ -1,398 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from congress.datalog import base
from congress.datalog import compile
from congress.datalog import ruleset
from congress.datalog import topdown
from congress.datalog import utility
from congress import exception
LOG = logging.getLogger(__name__)
class NonrecursiveRuleTheory(topdown.TopDownTheory):
"""A non-recursive collection of Rules."""
def __init__(self, name=None, abbr=None,
schema=None, theories=None, desc=None, owner=None):
super(NonrecursiveRuleTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
# dictionary from table name to list of rules with that table in head
self.rules = ruleset.RuleSet()
self.kind = base.NONRECURSIVE_POLICY_TYPE
if schema is None:
self.schema = compile.Schema()
# External Interface
# SELECT implemented by TopDownTheory
def initialize_tables(self, tablenames, facts):
"""Event handler for (re)initializing a collection of tables
@facts must be an iterable containing compile.Fact objects.
"""
LOG.info("initialize_tables")
cleared_tables = set(tablenames)
for t in tablenames:
self.rules.clear_table(t)
count = 0
extra_tables = set()
ignored_facts = 0
for f in facts:
if f.table not in cleared_tables:
extra_tables.add(f.table)
ignored_facts += 1
else:
self.rules.add_rule(f.table, f)
count += 1
if self.schema:
self.schema.update(f, True)
if ignored_facts > 0:
LOG.error("initialize_tables ignored %d facts for tables "
"%s not included in the list of tablenames %s",
ignored_facts, extra_tables, cleared_tables)
LOG.info("initialized %d tables with %d facts",
len(cleared_tables), count)
def insert(self, rule):
changes = self.update([compile.Event(formula=rule, insert=True)])
return [event.formula for event in changes]
def delete(self, rule):
changes = self.update([compile.Event(formula=rule, insert=False)])
return [event.formula for event in changes]
def _update_lit_schema(self, lit, is_insert):
if self.schema is None:
raise exception.PolicyException(
"Cannot update schema because theory %s doesn't have "
"schema." % self.name)
if self.schema.complete:
# complete means the schema is pre-built and shouldn't be updated
return None
return self.schema.update(lit, is_insert)
def update_rule_schema(self, rule, is_insert):
schema_changes = []
if self.schema is None or not self.theories or self.schema.complete:
# complete means the schema is pre-built, like a datasource's
return schema_changes
if isinstance(rule, compile.Fact) or isinstance(rule, compile.Literal):
schema_changes.append(self._update_lit_schema(rule, is_insert))
return schema_changes
schema_changes.append(self._update_lit_schema(rule.head, is_insert))
for lit in rule.body:
if lit.is_builtin():
continue
active_theory = lit.table.service or self.name
if active_theory not in self.theories:
continue
schema_changes.append(
self.theories[active_theory]._update_lit_schema(lit,
is_insert))
return schema_changes
def revert_schema(self, schema_changes):
if not self.theories:
return
for change in schema_changes:
if not change:
continue
active_theory = change[3]
if not active_theory:
self.schema.revert(change)
else:
self.theories[active_theory].schema.revert(change)
def update(self, events):
"""Apply EVENTS.
And return the list of EVENTS that actually
changed the theory. Each event is the insert or delete of
a policy statement.
"""
changes = []
self.log(None, "Update %s", utility.iterstr(events))
try:
for event in events:
schema_changes = self.update_rule_schema(
event.formula, event.insert)
formula = compile.reorder_for_safety(event.formula)
if event.insert:
if self._insert_actual(formula):
changes.append(event)
else:
self.revert_schema(schema_changes)
else:
if self._delete_actual(formula):
changes.append(event)
else:
self.revert_schema(schema_changes)
except Exception:
LOG.exception("runtime caught an exception")
raise
return changes
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
Return a list of PolicyException if we were
to apply the insert/deletes of policy statements dictated by
EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
for event in events:
if not compile.is_datalog(event.formula):
errors.append(exception.PolicyException(
"Non-formula found: {}".format(
str(event.formula))))
else:
if event.formula.is_atom():
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
else:
errors.extend(compile.rule_errors(
event.formula, self.theories, self.name))
# Would also check that rules are non-recursive, but that
# is currently being handled by Runtime. The current implementation
# disallows recursion in all theories.
return errors
def define(self, rules):
"""Empties and then inserts RULES."""
self.empty()
return self.update([compile.Event(formula=rule, insert=True)
for rule in rules])
def empty(self, tablenames=None, invert=False):
"""Deletes contents of theory.
If provided, TABLENAMES causes only the removal of all rules
that help define one of the tables in TABLENAMES.
If INVERT is true, all rules defining anything other than a
table in TABLENAMES are deleted.
"""
if tablenames is None:
self.rules.clear()
return
if invert:
to_clear = set(self.defined_tablenames()) - set(tablenames)
else:
to_clear = tablenames
for table in to_clear:
self.rules.clear_table(table)
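    # For illustration (hypothetical tables p, q, r defined in the theory):
    #
    #   theory.empty(['p'])               # remove only rules defining p
    #   theory.empty(['p'], invert=True)  # keep p's rules; drop q's and r's
    #   theory.empty()                    # remove everything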
def policy(self):
# eliminate all rules with empty bodies
return [p for p in self.content() if len(p.body) > 0]
def __contains__(self, formula):
if compile.is_atom(formula):
return self.rules.contains(formula.table.table, formula)
else:
return self.rules.contains(formula.head.table.table, formula)
# Internal Interface
def _insert_actual(self, rule):
"""Insert RULE and return True if there was a change."""
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Insert: %s", repr(rule))
return self.rules.add_rule(rule.head.table.table, rule)
def _delete_actual(self, rule):
"""Delete RULE and return True if there was a change."""
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Delete: %s", rule)
return self.rules.discard_rule(rule.head.table.table, rule)
def content(self, tablenames=None):
if tablenames is None:
tablenames = self.rules.keys()
results = []
for table in tablenames:
if table in self.rules:
results.extend(self.rules.get_rules(table))
return results
def head_index(self, table, match_literal=None):
"""Return head index.
This routine must return all the formulas pertinent for
top-down evaluation when a literal with TABLE is at the top
of the stack.
"""
if table in self.rules:
return self.rules.get_rules(table, match_literal)
return []
def arity(self, table, modal=None):
"""Return the number of arguments TABLENAME takes.
:param table can be either a string or a Tablename
Returns None if arity is unknown (if it does not occur in
the head of a rule).
"""
if isinstance(table, compile.Tablename):
service = table.service
name = table.table
fullname = table.name()
else:
fullname = table
service, name = compile.Tablename.parse_service_table(table)
# check if schema knows the answer
if self.schema:
if service is None or service == self.name:
arity = self.schema.arity(name)
else:
arity = self.schema.arity(fullname)
if arity is not None:
return arity
# assuming a single arity for all tables
formulas = self.head_index(fullname) or self.head_index(name)
try:
first = next(f for f in formulas
if f.head.table.matches(service, name, modal))
except StopIteration:
return None
# should probably have an overridable function for computing
# the arguments of a head. Instead we assume heads have .arguments
return len(self.head(first).arguments)
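    # Sketch of the lookup order above (hypothetical tables): the schema is
    # consulted first, by short name for local tables and by full name
    # otherwise; only if it is silent do we inspect rule heads.
    #
    #   theory.arity('p')          # e.g. 2 if some rule head is p(x, y)
    #   theory.arity('nova:p')     # schema lookup under the full name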
def defined_tablenames(self):
"""Returns list of table names defined in/written to this theory."""
return self.rules.keys()
def head(self, formula):
"""Given the output from head_index(), return the formula head.
Given a FORMULA, return the thing to unify against.
Usually, FORMULA is a compile.Rule, but it could be anything
returned by HEAD_INDEX.
"""
return formula.head
def body(self, formula):
"""Return formula body.
Given a FORMULA, return a list of things to push onto the
top-down eval stack.
"""
return formula.body
class ActionTheory(NonrecursiveRuleTheory):
"""ActionTheory object.
Same as NonrecursiveRuleTheory except it has fewer constraints
on the permitted rules. Still working out the details.
"""
def __init__(self, name=None, abbr=None,
schema=None, theories=None, desc=None, owner=None):
super(ActionTheory, self).__init__(name=name, abbr=abbr,
schema=schema, theories=theories,
desc=desc, owner=owner)
self.kind = base.ACTION_POLICY_TYPE
def update_would_cause_errors(self, events):
"""Return a list of PolicyException.
        These are the errors that would arise if we were
        to apply the events EVENTS to the current policy.
"""
self.log(None, "update_would_cause_errors %s", utility.iterstr(events))
errors = []
for event in events:
if not compile.is_datalog(event.formula):
errors.append(exception.PolicyException(
"Non-formula found: {}".format(
str(event.formula))))
else:
if event.formula.is_atom():
errors.extend(compile.fact_errors(
event.formula, self.theories, self.name))
else:
errors.extend(compile.rule_head_has_no_theory(
event.formula,
permit_head=lambda lit: lit.is_update()))
# Should put this back in place, but there are some
# exceptions that we don't handle right now.
# Would like to mark some tables as only being defined
# for certain bound/free arguments and take that into
# account when doing error checking.
# errors.extend(compile.rule_negation_safety(event.formula))
return errors
class MultiModuleNonrecursiveRuleTheory(NonrecursiveRuleTheory):
"""MultiModuleNonrecursiveRuleTheory object.
Same as NonrecursiveRuleTheory, except we allow rules with theories
in the head. Intended for use with TopDownTheory's INSTANCES method.
"""
def _insert_actual(self, rule):
"""Insert RULE and return True if there was a change."""
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Insert: %s", rule)
return self.rules.add_rule(rule.head.table.table, rule)
def _delete_actual(self, rule):
"""Delete RULE and return True if there was a change."""
if compile.is_atom(rule):
rule = compile.Rule(rule, [], rule.location)
self.log(rule.head.table.table, "Delete: %s", rule)
return self.rules.discard_rule(rule.head.table.table, rule)
# def update_would_cause_errors(self, events):
# return []
class DatasourcePolicyTheory(NonrecursiveRuleTheory):
"""DatasourcePolicyTheory
DatasourcePolicyTheory is identical to NonrecursiveRuleTheory, except that
self.kind is base.DATASOURCE_POLICY_TYPE instead of
base.NONRECURSIVE_POLICY_TYPE. DatasourcePolicyTheory uses a different
self.kind so that the synchronizer knows not to synchronize policies of
kind DatasourcePolicyTheory with the database listing of policies.
"""
def __init__(self, name=None, abbr=None,
schema=None, theories=None, desc=None, owner=None):
super(DatasourcePolicyTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
self.kind = base.DATASOURCE_POLICY_TYPE


@ -1,176 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from congress.datalog import compile
from congress.datalog import factset
from congress.datalog import utility
class RuleSet(object):
"""RuleSet
Keeps track of all rules for all tables.
"""
# Internally:
# An index_name looks like this: (p, (2, 4)) which means this index is
# on table 'p' and it specifies columns 2 and 4.
#
# An index_key looks like this: (p, (2, 'abc'), (4, 'def'))
def __init__(self):
self.rules = {}
self.facts = {}
def __str__(self):
return str(self.rules) + " " + str(self.facts)
def add_rule(self, key, rule):
"""Add a rule to the Ruleset
@rule can be a Rule or a Fact. Returns True if add_rule() changes the
RuleSet.
"""
if isinstance(rule, compile.Fact):
# If the rule is a Fact, then add it to self.facts.
if key not in self.facts:
self.facts[key] = factset.FactSet()
return self.facts[key].add(rule)
elif len(rule.body) == 0 and not rule.head.is_negated():
# If the rule is a Rule, with no body, then it's a Fact, so
            # convert the Rule to a Fact and add it to self.facts.
f = compile.Fact(key, (a.name for a in rule.head.arguments))
if key not in self.facts:
self.facts[key] = factset.FactSet()
return self.facts[key].add(f)
else:
# else the rule is a regular rule, so add it to self.rules.
if key in self.rules:
return self.rules[key].add(rule)
else:
self.rules[key] = utility.OrderedSet([rule])
return True
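    # A minimal sketch of the routing above (hypothetical values): facts
    # and bodiless, non-negated rules land in self.facts; everything else
    # lands in self.rules.
    #
    #   rs = RuleSet()
    #   rs.add_rule('p', compile.Fact('p', (1, 2)))      # -> self.facts
    #   rs.add_rule('q', rule_with_nonempty_body)        # -> self.rules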
def discard_rule(self, key, rule):
"""Remove a rule from the Ruleset
@rule can be a Rule or a Fact. Returns True if discard_rule() changes
the RuleSet.
"""
if isinstance(rule, compile.Fact):
# rule is a Fact, so remove from self.facts
if key in self.facts:
changed = self.facts[key].remove(rule)
if len(self.facts[key]) == 0:
del self.facts[key]
return changed
return False
elif not len(rule.body):
# rule is a Rule, but without a body so it will be in self.facts.
if key in self.facts:
fact = compile.Fact(key, [a.name for a in rule.head.arguments])
changed = self.facts[key].remove(fact)
if len(self.facts[key]) == 0:
del self.facts[key]
return changed
return False
else:
# rule is a Rule with a body, so remove from self.rules.
if key in self.rules:
changed = self.rules[key].discard(rule)
if len(self.rules[key]) == 0:
del self.rules[key]
return changed
return False
def keys(self):
return list(self.facts.keys()) + list(self.rules.keys())
def __contains__(self, key):
return key in self.facts or key in self.rules
def contains(self, key, rule):
if isinstance(rule, compile.Fact):
return key in self.facts and rule in self.facts[key]
elif isinstance(rule, compile.Literal):
if key not in self.facts:
return False
fact = compile.Fact(key, [a.name for a in rule.arguments])
return fact in self.facts[key]
elif not len(rule.body):
if key not in self.facts:
return False
fact = compile.Fact(key, [a.name for a in rule.head.arguments])
return fact in self.facts[key]
else:
return key in self.rules and rule in self.rules[key]
def get_rules(self, key, match_literal=None):
facts = []
if (match_literal and not match_literal.is_negated() and
key in self.facts):
# If the caller supplies a literal to match against, then use an
# index to find the matching rules.
bound_arguments = tuple([i for i, arg
in enumerate(match_literal.arguments)
if not arg.is_variable()])
if (bound_arguments and
not self.facts[key].has_index(bound_arguments)):
# The index does not exist, so create it.
self.facts[key].create_index(bound_arguments)
partial_fact = tuple(
[(i, arg.name)
for i, arg in enumerate(match_literal.arguments)
if not arg.is_variable()])
facts = list(self.facts[key].find(partial_fact))
else:
# There is no usable match_literal, so get all facts for the
# table.
facts = list(self.facts.get(key, ()))
# Convert native tuples to Rule objects.
# TODO(alex): This is inefficient because it creates Literal and Rule
# objects. It would be more efficient to change the TopDownTheory and
# unifier to handle Facts natively.
fact_rules = []
for fact in facts:
# Setting use_modules=False so we don't split up tablenames.
# This allows us to choose at compile-time whether to split
# the tablename up.
literal = compile.Literal(
key, [compile.Term.create_from_python(x) for x in fact],
use_modules=False)
fact_rules.append(compile.Rule(literal, ()))
return fact_rules + list(self.rules.get(key, ()))
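    # Index creation above is lazy and driven by the query shape. For a
    # (hypothetical) match_literal p(1, y, 3), the bound columns are
    # (0, 2), so an index on those columns is built on first use and then
    # probed with the partial fact ((0, 1), (2, 3)); the variable y is
    # left free and bound later by unification.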
def clear(self):
self.rules = {}
self.facts = {}
def clear_table(self, table):
self.rules[table] = utility.OrderedSet()
self.facts[table] = factset.FactSet()


@ -1,639 +0,0 @@
# Copyright (c) 2015 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
import six
from six.moves import range
from congress.datalog import base
from congress.datalog import builtin
from congress.datalog import compile
from congress.datalog import unify
from congress.datalog import utility
LOG = logging.getLogger(__name__)
class TopDownTheory(base.Theory):
"""Class that holds the Top-Down evaluation routines.
Classes will inherit from this class if they want to import and specialize
those routines.
"""
class TopDownContext(object):
"""Struct for storing the search state of top-down evaluation."""
def __init__(self, literals, literal_index, binding, context, theory,
depth):
self.literals = literals
self.literal_index = literal_index
self.binding = binding
self.previous = context
self.theory = theory # a theory object, not just its name
self.depth = depth
def __str__(self):
return (
"TopDownContext<literals={}, literal_index={}, binding={}, "
"previous={}, theory={}, depth={}>").format(
"[" + ",".join([str(x) for x in self.literals]) + "]",
str(self.literal_index), str(self.binding),
str(self.previous), self.theory.name, str(self.depth))
class TopDownResult(object):
"""Stores a single result for top-down-evaluation."""
def __init__(self, binding, support):
self.binding = binding
self.support = support # for abduction
def __str__(self):
return "TopDownResult(binding={}, support={})".format(
unify.binding_str(self.binding), utility.iterstr(self.support))
class TopDownCaller(object):
"""Struct for info about the original caller of top-down evaluation.
VARIABLES is the list of variables (from the initial query)
that we want bindings for.
BINDING is the initially empty BiUnifier.
FIND_ALL controls whether just the first or all answers are found.
ANSWERS is populated by top-down evaluation: it is the list of
VARIABLES instances that the search process proved true.
"""
def __init__(self, variables, binding, theory,
find_all=True, save=None):
# an iterable of variable objects
self.variables = variables
# a bi-unifier
self.binding = binding
# the top-level theory (for included theories)
self.theory = theory
# a boolean
self.find_all = find_all
# The results of top-down-eval: a list of TopDownResults
self.results = []
# a Function that takes a compile.Literal and a unifier and
# returns T iff that literal under the unifier should be
# saved as part of an abductive explanation
self.save = save
# A variable used to store explanations as they are constructed
self.support = []
def __str__(self):
return (
"TopDownCaller<variables={}, binding={}, find_all={}, "
"results={}, save={}, support={}>".format(
utility.iterstr(self.variables), str(self.binding),
str(self.find_all), utility.iterstr(self.results),
repr(self.save), utility.iterstr(self.support)))
#########################################
# External interface
def __init__(self, name=None, abbr=None, theories=None, schema=None,
desc=None, owner=None):
super(TopDownTheory, self).__init__(
name=name, abbr=abbr, theories=theories, schema=schema,
desc=desc, owner=owner)
self.includes = []
def select(self, query, find_all=True):
"""Return list of instances of QUERY that are true.
If FIND_ALL is False, the return list has at most 1 element.
"""
assert compile.is_datalog(query), "Query must be atom/rule"
if compile.is_atom(query):
literals = [query]
else:
literals = query.body
# Because our output is instances of QUERY, need all the variables
# in QUERY.
bindings = self.top_down_evaluation(query.variables(), literals,
find_all=find_all)
# LOG.debug("Top_down_evaluation returned: %s", bindings)
if len(bindings) > 0:
self.log(query.tablename(), "Found answer %s",
"[" + ",".join([str(query.plug(x))
for x in bindings]) + "]")
return [query.plug(x) for x in bindings]
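    # Typical use (hypothetical policy): with the rule p(x) :- q(x) and
    # facts q(1), q(2),
    #
    #   theory.select(compile.parse1('p(x)'))  # -> [p(1), p(2)]
    #
    # while find_all=False stops after the first proof is found.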
def explain(self, query, tablenames, find_all=True):
"""Return list of instances of QUERY that are true.
Same as select except stores instances of TABLENAMES
that participated in each proof. If QUERY is an atom,
returns list of rules with QUERY in the head and
the stored instances of TABLENAMES in the body; if QUERY is
a rule, the rules returned have QUERY's head in the head
and the stored instances of TABLENAMES in the body.
"""
# This is different than abduction because instead of replacing
# a proof attempt with saving a literal, we want to save a literal
# after a successful proof attempt.
assert False, "Not yet implemented"
def abduce(self, query, tablenames, find_all=True):
"""Compute additional literals.
Computes additional literals that if true would make
(some instance of) QUERY true. Returns a list of rules
where the head represents an instance of the QUERY and
the body is the collection of literals that must be true
in order to make that instance true. If QUERY is a rule,
each result is an instance of the head of that rule, and
the computed literals if true make the body of that rule
        (and hence the head) true. If FIND_ALL is false, the
        return list has at most one element.
Limitation: every negative literal relevant to a proof of
QUERY is unconditionally true, i.e. no literals are saved
when proving a negative literal is true.
"""
assert compile.is_datalog(query), "abduce requires a formula"
if compile.is_atom(query):
literals = [query]
output = query
else:
literals = query.body
output = query.head
# We need all the variables we will be using in the output, which
# here is just the head of QUERY (or QUERY itself if it is an atom)
abductions = self.top_down_abduction(
output.variables(), literals, find_all=find_all,
save=lambda lit, binding: lit.tablename() in tablenames)
results = [compile.Rule(output.plug(abd.binding), abd.support)
for abd in abductions]
self.log(query.tablename(), "abduction result:")
self.log(query.tablename(), "\n".join([str(x) for x in results]))
return results
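    # Worked sketch (hypothetical policy): given the rule
    #   error(x) :- server(x), unsafe(x)
    # with server(1) a known fact and 'unsafe' listed in TABLENAMES,
    # abducing error(x) yields the rule
    #   error(1) :- unsafe(1)
    # i.e. the literal that, if added, would make error(1) true.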
def consequences(self, filter=None, table_theories=None):
"""Return all the true instances of any table in this theory."""
# find all table, theory pairs defined in this theory
if table_theories is None:
table_theories = set()
for key in self.rules.keys():
table_theories |= set([(rule.head.table.table,
rule.head.table.service)
for rule in self.rules.get_rules(key)])
results = set()
# create queries: need table names and arities
# TODO(thinrichs): arity computation will need to ignore
# modals once we start using insert[p(x)] instead of p+(x)
for (table, theory) in table_theories:
if filter is None or filter(table):
tablename = compile.Tablename(table, theory)
arity = self.arity(tablename)
vs = []
for i in range(0, arity):
vs.append("x" + str(i))
vs = [compile.Variable(var) for var in vs]
tablename = table
if theory:
tablename = theory + ":" + tablename
query = compile.Literal(tablename, vs)
results |= set(self.select(query))
return results
def top_down_evaluation(self, variables, literals,
binding=None, find_all=True):
"""Compute bindings.
Compute all bindings of VARIABLES that make LITERALS
true according to the theory (after applying the unifier BINDING).
If FIND_ALL is False, stops after finding one such binding.
Returns a list of dictionary bindings.
"""
# LOG.debug("CALL: top_down_evaluation(vars=%s, literals=%s, "
# "binding=%s)",
# ";".join(str(x) for x in variables),
# ";".join(str(x) for x in literals),
# str(binding))
results = self.top_down_abduction(variables, literals,
binding=binding, find_all=find_all,
save=None)
# LOG.debug("EXIT: top_down_evaluation(vars=%s, literals=%s, "
# "binding=%s) returned %s",
# iterstr(variables), iterstr(literals),
# str(binding), iterstr(results))
return [x.binding for x in results]
def top_down_abduction(self, variables, literals, binding=None,
find_all=True, save=None):
"""Compute bindings.
Compute all bindings of VARIABLES that make LITERALS
true according to the theory (after applying the
unifier BINDING), if we add some number of additional
literals. Note: will not save any literals that are
needed to prove a negated literal since the results
would not make sense. Returns a list of TopDownResults.
"""
if binding is None:
binding = self.new_bi_unifier()
caller = self.TopDownCaller(variables, binding, self,
find_all=find_all, save=save)
if len(literals) == 0:
self._top_down_finish(None, caller)
else:
# Note: must use same unifier in CALLER and CONTEXT
context = self.TopDownContext(literals, 0, binding, None, self, 0)
self._top_down_eval(context, caller)
return list(set(caller.results))
#########################################
# Internal implementation
def _top_down_eval(self, context, caller):
"""Compute instances.
Compute all instances of LITERALS (from LITERAL_INDEX and above)
that are true according to the theory (after applying the
unifier BINDING to LITERALS).
Returns True if done searching and False otherwise.
"""
# no recursive rules, ever; this style of algorithm will not terminate
lit = context.literals[context.literal_index]
# LOG.debug("CALL: %s._top_down_eval(%s, %s)",
# self.name, context, caller)
# abduction
if caller.save is not None and caller.save(lit, context.binding):
self._print_call(lit, context.binding, context.depth)
# save lit and binding--binding may not be fully flushed out
# when we save (or ever for that matter)
caller.support.append((lit, context.binding))
self._print_save(lit, context.binding, context.depth)
success = self._top_down_finish(context, caller)
caller.support.pop() # pop in either case
if success:
return True
else:
self._print_fail(lit, context.binding, context.depth)
return False
# regular processing
if lit.is_negated():
# LOG.debug("%s is negated", lit)
# recurse on the negation of the literal
plugged = lit.plug(context.binding)
assert plugged.is_ground(), (
"Negated literal not ground when evaluated: " +
str(plugged))
self._print_call(lit, context.binding, context.depth)
new_context = self.TopDownContext(
[lit.complement()], 0, context.binding, None,
self, context.depth + 1)
new_caller = self.TopDownCaller(caller.variables, caller.binding,
caller.theory, find_all=False,
save=None)
# Make sure new_caller has find_all=False, so we stop as soon
# as we can.
# Ensure save=None so that abduction does not save anything.
# Saving while performing NAF makes no sense.
self._top_down_eval(new_context, new_caller)
if len(new_caller.results) > 0:
self._print_fail(lit, context.binding, context.depth)
return False # not done searching, b/c we failed
else:
# don't need bindings b/c LIT must be ground
return self._top_down_finish(context, caller, redo=False)
elif lit.tablename() == 'true':
self._print_call(lit, context.binding, context.depth)
return self._top_down_finish(context, caller, redo=False)
elif lit.tablename() == 'false':
self._print_fail(lit, context.binding, context.depth)
return False
elif lit.is_builtin():
return self._top_down_builtin(context, caller)
elif (self.theories is not None and
lit.table.service is not None and
lit.table.modal is None and # not a modal
lit.table.service != self.name and
not lit.is_update()): # not a pseudo-modal
return self._top_down_module(context, caller)
else:
return self._top_down_truth(context, caller)
def _top_down_builtin(self, context, caller):
"""Evaluate a table with a builtin semantics.
Returns True if done searching and False otherwise.
"""
lit = context.literals[context.literal_index]
self._print_call(lit, context.binding, context.depth)
built = builtin.builtin_registry.builtin(lit.table)
# copy arguments into variables
# PLUGGED is an instance of compile.Literal
plugged = lit.plug(context.binding)
# PLUGGED.arguments is a list of compile.Term
# create args for function
args = []
for i in range(0, built.num_inputs):
# save builtins with unbound vars during evaluation
if not plugged.arguments[i].is_object() and caller.save:
# save lit and binding--binding may not be fully flushed out
# when we save (or ever for that matter)
caller.support.append((lit, context.binding))
self._print_save(lit, context.binding, context.depth)
success = self._top_down_finish(context, caller)
caller.support.pop() # pop in either case
if success:
return True
else:
self._print_fail(lit, context.binding, context.depth)
return False
assert plugged.arguments[i].is_object(), (
("Builtins must be evaluated only after their "
"inputs are ground: {} with num-inputs {}".format(
                     str(plugged), built.num_inputs)))
args.append(plugged.arguments[i].name)
# evaluate builtin: must return number, string, or iterable
# of numbers/strings
try:
result = built.code(*args)
except Exception as e:
errmsg = "Error in builtin: " + str(e)
self._print_note(lit, context.binding, context.depth, errmsg)
self._print_fail(lit, context.binding, context.depth)
return False
# self._print_note(lit, context.binding, context.depth,
# "Result: " + str(result))
success = None
undo = []
if built.num_outputs > 0:
# with return values, local success means we can bind
# the results to the return value arguments
if (isinstance(result,
(six.integer_types, float, six.string_types))):
result = [result]
# Turn result into normal objects
result = [compile.Term.create_from_python(x) for x in result]
# adjust binding list
unifier = self.new_bi_unifier()
undo = unify.bi_unify_lists(result,
unifier,
lit.arguments[built.num_inputs:],
context.binding)
success = undo is not None
else:
# without return values, local success means
# result was True according to Python
success = bool(result)
if not success:
self._print_fail(lit, context.binding, context.depth)
unify.undo_all(undo)
return False
# otherwise, try to finish proof. If success, return True
if self._top_down_finish(context, caller, redo=False):
unify.undo_all(undo)
return True
# if fail, return False.
else:
unify.undo_all(undo)
self._print_fail(lit, context.binding, context.depth)
return False
def _top_down_module(self, context, caller):
"""Move to another theory and continue evaluation."""
# LOG.debug("%s._top_down_module(%s)", self.name, context)
lit = context.literals[context.literal_index]
if lit.table.service not in self.theories:
self._print_call(lit, context.binding, context.depth)
errmsg = "No such policy: %s" % lit.table.service
self._print_note(lit, context.binding, context.depth, errmsg)
self._print_fail(lit, context.binding, context.depth)
return False
return self.theories[lit.table.service]._top_down_eval(context, caller)
def _top_down_truth(self, context, caller):
"""Top down evaluation.
Do top-down evaluation over the root theory at which
the call was made and all the included theories.
"""
# return self._top_down_th(context, caller)
return self._top_down_includes(context, caller)
def _top_down_includes(self, context, caller):
"""Top-down evaluation of all the theories included in this theory."""
is_true = self._top_down_th(context, caller)
if is_true and not caller.find_all:
return True
for th in self.includes:
is_true = th._top_down_includes(context, caller)
if is_true and not caller.find_all:
return True
return False
def _top_down_th(self, context, caller):
"""Top-down evaluation for the rules in self."""
# LOG.debug("%s._top_down_th(%s)", self.name, context)
lit = context.literals[context.literal_index]
self._print_call(lit, context.binding, context.depth)
for rule in self.head_index(lit.table.table,
lit.plug(context.binding)):
unifier = self.new_bi_unifier()
self._print_note(lit, context.binding, context.depth,
"Trying %s" % rule)
# Prefer to bind vars in rule head
undo = self.bi_unify(self.head(rule), unifier, lit,
context.binding, self.name)
if undo is None: # no unifier
continue
if len(self.body(rule)) == 0:
if self._top_down_finish(context, caller):
unify.undo_all(undo)
if not caller.find_all:
return True
else:
unify.undo_all(undo)
else:
new_context = self.TopDownContext(
rule.body, 0, unifier, context, self, context.depth + 1)
if self._top_down_eval(new_context, caller):
unify.undo_all(undo)
if not caller.find_all:
return True
else:
unify.undo_all(undo)
self._print_fail(lit, context.binding, context.depth)
return False
def _top_down_finish(self, context, caller, redo=True):
"""Helper function.
This is called once top_down successfully completes
a proof for a literal. Handles (i) continuing search
for those literals still requiring proofs within CONTEXT,
(ii) adding solutions to CALLER once all needed proofs have
been found, and (iii) printing out Redo/Exit during tracing.
Returns True if the search is finished and False otherwise.
Temporary, transparent modification of CONTEXT.
"""
if context is None:
# Found an answer; now store it
if caller is not None:
# flatten bindings and store before we undo
# copy caller.support and store before we undo
binding = {}
for var in caller.variables:
binding[var] = caller.binding.apply(var)
result = self.TopDownResult(
binding, [support[0].plug(support[1], caller=caller)
for support in caller.support])
caller.results.append(result)
return True
else:
self._print_exit(context.literals[context.literal_index],
context.binding, context.depth)
# continue the search
if context.literal_index < len(context.literals) - 1:
context.literal_index += 1
finished = context.theory._top_down_eval(context, caller)
context.literal_index -= 1 # in case answer is False
else:
finished = self._top_down_finish(context.previous, caller)
# return search result (after printing a Redo if failure)
if redo and (not finished or caller.find_all):
self._print_redo(context.literals[context.literal_index],
context.binding, context.depth)
return finished
def _print_call(self, literal, binding, depth):
msg = "{}Call: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
def _print_exit(self, literal, binding, depth):
msg = "{}Exit: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
def _print_save(self, literal, binding, depth):
msg = "{}Save: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
def _print_fail(self, literal, binding, depth):
msg = "{}Fail: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
return False
def _print_redo(self, literal, binding, depth):
msg = "{}Redo: %s".format("| " * depth)
self.log(literal.tablename(), msg, literal.plug(binding))
return False
def _print_note(self, literal, binding, depth, msg):
self.log(literal.tablename(), "{}Note: {}".format("| " * depth,
msg))
#########################################
# Routines for specialization
@classmethod
def new_bi_unifier(cls, dictionary=None):
"""Return a unifier compatible with unify.bi_unify."""
return unify.BiUnifier(dictionary=dictionary)
# lambda (index):
# compile.Variable("x" + str(index)), dictionary=dictionary)
def defined_tablenames(self):
"""Returns list of table names defined in/written to this theory."""
raise NotImplementedError
def head_index(self, table, match_literal=None):
"""Return head index.
This routine must return all the formulas pertinent for
top-down evaluation when a literal with TABLE is at the top
of the stack.
"""
raise NotImplementedError
def head(self, formula):
"""Given the output from head_index(), return the formula head.
Given a FORMULA, return the thing to unify against.
Usually, FORMULA is a compile.Rule, but it could be anything
returned by HEAD_INDEX.
"""
raise NotImplementedError
def body(self, formula):
"""Return formula body.
Given a FORMULA, return a list of things to push onto the
top-down eval stack.
"""
raise NotImplementedError
def bi_unify(self, head, unifier1, body_element, unifier2, theoryname):
"""Unify atoms.
Given something returned by self.head HEAD and an element in
the return of self.body BODY_ELEMENT, modify UNIFIER1 and UNIFIER2
so that HEAD.plug(UNIFIER1) == BODY_ELEMENT.plug(UNIFIER2).
Returns changes that can be undone via unify.undo-all.
THEORYNAME is the name of the theory for HEAD.
"""
return unify.bi_unify_atoms(head, unifier1, body_element, unifier2,
theoryname)
#########################################
# Routines for unknowns
def instances(self, rule, possibilities=None):
results = set([])
possibilities = possibilities or []
self._instances(rule, 0, self.new_bi_unifier(), results, possibilities)
return results
def _instances(self, rule, index, binding, results, possibilities):
"""Return all instances of the given RULE without evaluating builtins.
Assumes self.head_index returns rules with empty bodies.
"""
if index >= len(rule.body):
results.add(rule.plug(binding))
return
lit = rule.body[index]
self._print_call(lit, binding, 0)
# if already ground or a builtin, go to the next literal
if (lit.is_ground() or lit.is_builtin()):
self._instances(rule, index + 1, binding, results, possibilities)
return
# Otherwise, find instances in this theory
if lit.tablename() in possibilities:
options = possibilities[lit.tablename()]
else:
options = self.head_index(lit.tablename(), lit.plug(binding))
for data in options:
self._print_note(lit, binding, 0, "Trying: %s" % repr(data))
undo = unify.match_atoms(lit, binding, self.head(data))
if undo is None: # no unifier
continue
self._print_exit(lit, binding, 0)
# recurse on the rest of the literals in the rule
self._instances(rule, index + 1, binding, results, possibilities)
if undo is not None:
unify.undo_all(undo)
self._print_redo(lit, binding, 0)
self._print_fail(lit, binding, 0)


@ -1,526 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from oslo_utils import uuidutils
from six.moves import range
from congress.datalog import compile
LOG = logging.getLogger(__name__)
# A unifier designed for the bi_unify_atoms routine
# which is used by a backward-chaining style datalog implementation.
# Main goal: minimize memory allocation by manipulating only unifiers
# to keep variable namespaces separate.
class BiUnifier(object):
"""A unifier designed for bi_unify_atoms.
Recursive datastructure. When adding a binding variable u to
variable v, keeps a reference to the unifier for v.
A variable's identity is its name plus its unification context.
This enables a variable with the same name but from two
different atoms to be treated as different variables.
"""
class Value(object):
def __init__(self, value, unifier):
# actual value
self.value = value
# unifier context
self.unifier = unifier
def __str__(self):
return "<{},{}>".format(
str(self.value), repr(self.unifier))
def recur_str(self):
if self.unifier is None:
recur = str(self.unifier)
else:
recur = self.unifier.recur_str()
return "<{},{}>".format(
str(self.value), recur)
def __eq__(self, other):
            return self.value == other.value and self.unifier == other.unifier
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
return "Value(value={}, unifier={})".format(
repr(self.value), repr(self.unifier))
class Undo(object):
def __init__(self, var, unifier):
self.var = var
self.unifier = unifier
def __str__(self):
return "<var: {}, unifier: {}>".format(
str(self.var), str(self.unifier))
def __eq__(self, other):
return self.var == other.var and self.unifier == other.unifier
def __ne__(self, other):
return not self.__eq__(other)
def __init__(self, dictionary=None):
# each value is a Value
self.contents = {}
if dictionary is not None:
for var, value in dictionary.items():
self.add(var, value, None)
def add(self, var, value, unifier):
value = self.Value(value, unifier)
# LOG.debug("Adding %s -> %s to unifier %s", var, value, self)
self.contents[var] = value
return self.Undo(var, self)
def delete(self, var):
if var in self.contents:
del self.contents[var]
def value(self, term):
if term in self.contents:
return self.contents[term]
else:
return None
def apply(self, term, caller=None):
return self.apply_full(term, caller=caller)[0]
def apply_full(self, term, caller=None):
"""Recursively apply unifiers to TERM.
Return (i) the final value and (ii) the final unifier.
        If the final value is a variable, instantiate it with a fresh
        variable unless it is one of the caller's top-level variables.
"""
# LOG.debug("apply_full(%s, %s)", term, self)
val = self.value(term)
if val is None:
# If result is a variable and this variable is not one of those
# in the top-most calling context, then create a new variable
# name based on this Binding.
# This process avoids improper variable capture.
# Outputting the same variable with the same binding twice will
# generate the same output, but outputting the same variable with
# different bindings will generate different outputs.
# Note that this variable name mangling
# is not done for the top-most variables,
# which makes output a bit easier to read.
# Unfortunately, the process is non-deterministic from one run
# to the next, which makes testing difficult.
if (caller is not None and term.is_variable() and
not (term in caller.variables and caller.binding is self)):
return (compile.Variable(term.name + str(id(self))), self)
else:
return (term, self)
elif val.unifier is None or not val.value.is_variable():
return (val.value, val.unifier)
else:
return val.unifier.apply_full(val.value)
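    # Chasing a chain of bindings (hypothetical unifiers u1, u2): if
    # u1 maps x -> <y, u2> and u2 maps y -> <1, None>, then
    # u1.apply_full(x) follows x to y in u2, then y to the constant 1,
    # and returns (1, None). A dead-end variable is renamed using the
    # unifier's id() to avoid capture, unless it is one of the caller's
    # top-level variables.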
def is_one_to_one(self):
image = set() # set of all things mapped TO
for x in self.contents:
val = self.apply(x)
if val in image:
return False
image.add(val)
return True
def __str__(self):
s = repr(self)
s += "={"
s += ",".join(["{}:{}".format(str(var), str(val))
for var, val in self.contents.items()])
s += "}"
return s
def recur_str(self):
s = repr(self)
s += "={"
s += ",".join(["{}:{}".format(var, val.recur_str())
for var, val in self.contents.items()])
s += "}"
return s
def __eq__(self, other):
return self.contents == other.contents
def __ne__(self, other):
return not self.__eq__(other)
def binding_str(binding):
"""Handles string conversion of either dictionary or Unifier."""
if isinstance(binding, dict):
s = ",".join(["{}: {}".format(str(var), str(val))
for var, val in binding.items()])
return '{' + s + '}'
else:
return str(binding)
def undo_all(changes):
"""Undo all the changes in CHANGES."""
# LOG.debug("undo_all(%s)",
# "[" + ",".join([str(x) for x in changes]) + "]")
if changes is None:
return
for change in changes:
if change.unifier is not None:
change.unifier.delete(change.var)
def same_schema(atom1, atom2, theoryname=None):
"""Return True if ATOM1 and ATOM2 have the same schema.
THEORYNAME is the default theory name.
"""
if not atom1.table.same(atom2.table, theoryname):
return False
if len(atom1.arguments) != len(atom2.arguments):
return False
return True
def bi_unify_atoms(atom1, unifier1, atom2, unifier2, theoryname=None):
"""Unify atoms.
If possible, modify BiUnifier UNIFIER1 and BiUnifier UNIFIER2 so that
ATOM1.plug(UNIFIER1) == ATOM2.plug(UNIFIER2).
Returns None if not possible; otherwise, returns
a list of changes to unifiers that can be undone
with undo-all. May alter unifiers besides UNIFIER1 and UNIFIER2.
THEORYNAME is the default theory name.
"""
# logging.debug("Unifying %s under %s and %s under %s",
# atom1, unifier1, atom2, unifier2)
if not same_schema(atom1, atom2, theoryname):
return None
return bi_unify_lists(atom1.arguments, unifier1,
atom2.arguments, unifier2)
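# A small sketch (hypothetical atoms): unifying p(x, 1) under u1 with
# p(2, y) under u2 adds x -> <2, u2> to u1 and y -> <1, u1> to u2, then
# returns the corresponding Undo records; undo_all(changes) reverses
# both. Same-named variables in the two atoms stay distinct because a
# variable's identity is its name plus its unifier.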
def bi_unify_lists(iter1, unifier1, iter2, unifier2):
"""Unify lists.
If possible, modify BiUnifier UNIFIER1 and BiUnifier UNIFIER2 such that
iter1.plug(UNIFIER1) == iter2.plug(UNIFIER2), assuming PLUG is defined
over lists. Returns None if not possible; otherwise, returns
a list of changes to unifiers that can be undone
with undo-all. May alter unifiers besides UNIFIER1 and UNIFIER2.
"""
if len(iter1) != len(iter2):
return None
changes = []
for i in range(0, len(iter1)):
assert isinstance(iter1[i], compile.Term)
assert isinstance(iter2[i], compile.Term)
# grab values for args
val1, binding1 = unifier1.apply_full(iter1[i])
val2, binding2 = unifier2.apply_full(iter2[i])
# logging.debug("val(%s)=%s at %s, val(%s)=%s at %s",
# atom1.arguments[i], val1, binding1,
# atom2.arguments[i], val2, binding2)
# assign variable (if necessary) or fail
if val1.is_variable() and val2.is_variable():
# logging.debug("1 and 2 are variables")
if bi_var_equal(val1, binding1, val2, binding2):
continue
else:
changes.append(binding1.add(val1, val2, binding2))
elif val1.is_variable() and not val2.is_variable():
# logging.debug("Left arg is a variable")
changes.append(binding1.add(val1, val2, binding2))
elif not val1.is_variable() and val2.is_variable():
# logging.debug("Right arg is a variable")
changes.append(binding2.add(val2, val1, binding1))
elif val1 == val2:
continue
else:
# logging.debug("Unify failure: undoing")
undo_all(changes)
return None
return changes
# def plug(atom, binding, withtable=False):
# """ Returns a tuple representing the arguments to ATOM after having
# applied BINDING to the variables in ATOM. """
# if withtable is True:
# result = [atom.table]
# else:
# result = []
# for i in range(0, len(atom.arguments)):
# if (atom.arguments[i].is_variable() and
# atom.arguments[i].name in binding):
# result.append(binding[atom.arguments[i].name])
# else:
# result.append(atom.arguments[i].name)
# return tuple(result)
def match_tuple_atom(tupl, atom):
"""Get bindings.
Returns a binding dictionary that when applied to ATOM's arguments
    gives exactly TUPL, or returns None if no such binding exists.
"""
if len(tupl) != len(atom.arguments):
return None
binding = {}
for i in range(0, len(tupl)):
arg = atom.arguments[i]
if arg.is_variable():
if arg.name in binding:
oldval = binding[arg.name]
if oldval != tupl[i]:
return None
else:
                binding[arg.name] = tupl[i]
return binding
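# For example (hypothetical atom): match_tuple_atom((1, 2, 1), p(x, y, x))
# yields {'x': 1, 'y': 2}, whereas match_tuple_atom((1, 2, 3), p(x, y, x))
# yields None, since x cannot be bound to both 1 and 3.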
def match_atoms(atom1, unifier, atom2):
"""Modify UNIFIER so that ATOM1.plug(UNIFIER) == ATOM2.
ATOM2 is assumed to be ground.
UNIFIER is assumed to be a BiUnifier.
Return the changes to UNIFIER or None if matching is impossible.
Matching is a special case of instance-checking since ATOM2
in this case must be ground, whereas there is no such limitation
for instance-checking. This makes the code significantly simpler
and faster.
"""
if not same_schema(atom1, atom2):
return None
changes = []
for i in range(0, len(atom1.arguments)):
val, binding = unifier.apply_full(atom1.arguments[i])
# LOG.debug("val(%s)=%s at %s; comparing to object %s",
# atom1.arguments[i], val, binding, atom2.arguments[i])
if val.is_variable():
changes.append(binding.add(val, atom2.arguments[i], None))
else:
if val.name != atom2.arguments[i].name:
undo_all(changes)
return None
return changes
def bi_var_equal(var1, unifier1, var2, unifier2):
"""Check var equality.
Returns True iff variable VAR1 in unifier UNIFIER1 is the same
variable as VAR2 in UNIFIER2.
"""
return (var1 == var2 and unifier1 is unifier2)
def same(formula1, formula2):
"""Check formulas are the same.
Determine if FORMULA1 and FORMULA2 are the same up to a variable
renaming. Treats FORMULA1 and FORMULA2 as having different
variable namespaces. Returns None or the pair of unifiers.
"""
if isinstance(formula1, compile.Literal):
if isinstance(formula2, compile.Rule):
return None
elif formula1.is_negated() != formula2.is_negated():
return None
else:
u1 = BiUnifier()
u2 = BiUnifier()
if same_atoms(formula1, u1, formula2, u2, set()) is not None:
return (u1, u2)
return None
elif isinstance(formula1, compile.Rule):
if isinstance(formula2, compile.Literal):
return None
else:
if len(formula1.body) != len(formula2.body):
return None
u1 = BiUnifier()
u2 = BiUnifier()
bound2 = set()
result = same_atoms(formula1.head, u1, formula2.head, u2, bound2)
if result is None:
return None
for i in range(0, len(formula1.body)):
result = same_atoms(
formula1.body[i], u1, formula2.body[i], u2, bound2)
if result is None:
return None
return (u1, u2)
else:
return None
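# For instance (hypothetical formulas): same(p(x, y), p(a, b)) succeeds,
# since a 1-1 variable renaming exists, and returns the pair of unifiers;
# same(p(x, x), p(a, b)) fails because x would have to rename to both
# a and b.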
def same_atoms(atom1, unifier1, atom2, unifier2, bound2):
"""Check whether atoms are identical.
Modifies UNIFIER1 and UNIFIER2 to demonstrate
that ATOM1 and ATOM2 are identical up to a variable renaming.
Returns None if not possible or the list of changes if it is.
BOUND2 is the set of variables already bound in UNIFIER2
"""
def die():
undo_all(changes)
return None
LOG.debug("same_atoms(%s, %s)", atom1, atom2)
if not same_schema(atom1, atom2):
return None
changes = []
# LOG.debug("same_atoms entering loop")
for i in range(0, len(atom1.arguments)):
val1, binding1 = unifier1.apply_full(atom1.arguments[i])
val2, binding2 = unifier2.apply_full(atom2.arguments[i])
# LOG.debug("val1: %s at %s; val2: %s at %s",
# val1, binding1, val2, binding2)
if val1.is_variable() and val2.is_variable():
if bi_var_equal(val1, binding1, val2, binding2):
continue
# if we already bound either of these variables, not SAME
if not bi_var_equal(val1, binding1, atom1.arguments[i], unifier1):
# LOG.debug("same_atoms: arg1 already bound")
return die()
if not bi_var_equal(val2, binding2, atom2.arguments[i], unifier2):
# LOG.debug("same_atoms: arg2 already bound")
return die()
if val2 in bound2:
# LOG.debug("same_atoms: binding is not 1-1")
return die()
changes.append(binding1.add(val1, val2, binding2))
bound2.add(val2)
elif val1.is_variable():
# LOG.debug("val1 is a variable")
return die()
elif val2.is_variable():
# LOG.debug("val2 is a variable")
return die()
elif val1 != val2:
# one is a variable and one is not or unmatching object constants
# LOG.debug("val1 != val2")
return die()
return changes
def instance(formula1, formula2):
"""Determine if FORMULA1 is an instance of FORMULA2.
If there is some binding that when applied to FORMULA1 results
in FORMULA2. Returns None or a unifier.
"""
LOG.debug("instance(%s, %s)", formula1, formula2)
if isinstance(formula1, compile.Literal):
if isinstance(formula2, compile.Rule):
return None
elif formula1.is_negated() != formula2.is_negated():
return None
else:
u = BiUnifier()
if instance_atoms(formula1, formula2, u) is not None:
return u
return None
elif isinstance(formula1, compile.Rule):
if isinstance(formula2, compile.Literal):
return None
else:
if len(formula1.body) != len(formula2.body):
return None
u = BiUnifier()
result = instance_atoms(formula1.head, formula2.head, u)
if result is None:
return None
for i in range(0, len(formula1.body)):
                result = instance_atoms(
                    formula1.body[i], formula2.body[i], u)
if result is None:
return None
return u
else:
return None
def instance_atoms(atom1, atom2, unifier2):
"""Check atoms equality by adding bindings.
Adds bindings to UNIFIER2 to make ATOM1 equal to ATOM2
after applying UNIFIER2 to ATOM2 only. Returns None if
no such bindings make equality hold.
"""
def die():
undo_all(changes)
return None
LOG.debug("instance_atoms(%s, %s)", atom1, atom2)
if not same_schema(atom1, atom2):
return None
unifier1 = BiUnifier()
changes = []
for i in range(0, len(atom1.arguments)):
val1, binding1 = unifier1.apply_full(atom1.arguments[i])
val2, binding2 = unifier2.apply_full(atom2.arguments[i])
# LOG.debug("val1: %s at %s; val2: %s at %s",
# val1, binding1, val2, binding2)
if val1.is_variable() and val2.is_variable():
if bi_var_equal(val1, binding1, val2, binding2):
continue
# if we already bound either of these variables, not INSTANCE
if not bi_var_equal(val1, binding1, atom1.arguments[i], unifier1):
# LOG.debug("instance_atoms: arg1 already bound")
return die()
if not bi_var_equal(val2, binding2, atom2.arguments[i], unifier2):
# LOG.debug("instance_atoms: arg2 already bound")
return die()
# add binding to UNIFIER2
changes.append(binding2.add(val2, val1, binding1))
elif val1.is_variable():
return die()
elif val2.is_variable():
changes.append(binding2.add(val2, val1, binding1))
# LOG.debug("var2 is a variable")
elif val1 != val2:
# unmatching object constants
# LOG.debug("val1 != val2")
return die()
return changes
def skolemize(formulas):
"""Instantiate all variables consistently with UUIDs in the formulas."""
# create binding then plug it in.
variables = set()
for formula in formulas:
variables |= formula.variables()
binding = {}
for var in variables:
binding[var] = compile.Term.create_from_python(
uuidutils.generate_uuid())
return [formula.plug(binding) for formula in formulas]
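# Sketch (hypothetical formulas): skolemizing [p(x, y), q(y)] replaces
# each distinct variable with one fresh UUID constant, consistently
# across all the formulas, e.g. p("6f1c...", "0d2e...") and q("0d2e...").
# Shared variables therefore remain shared after instantiation.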


@ -1,536 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import collections
from functools import reduce
class Graph(object):
"""A standard graph data structure.
    Includes routines applicable to the analysis of policy.
"""
class dfs_data(object):
"""Data for each node in graph during depth-first-search."""
def __init__(self, begin=None, end=None):
self.begin = begin
self.end = end
def __str__(self):
return "<begin: %s, end: %s>" % (self.begin, self.end)
class edge_data(object):
"""Data for each edge in graph."""
def __init__(self, node=None, label=None):
self.node = node
self.label = label
def __str__(self):
return "<Label:%s, Node:%s>" % (self.label, self.node)
def __eq__(self, other):
return self.node == other.node and self.label == other.label
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(str(self))
def __init__(self, graph=None):
self.edges = {} # dict from node to list of nodes
self.nodes = {} # dict from node to info about node
self._cycles = None
def __or__(self, other):
# do this the simple way so that subclasses get this code for free
g = self.__class__()
for node in self.nodes:
g.add_node(node)
for node in other.nodes:
g.add_node(node)
for name in self.edges:
for edge in self.edges[name]:
g.add_edge(name, edge.node, label=edge.label)
for name in other.edges:
for edge in other.edges[name]:
g.add_edge(name, edge.node, label=edge.label)
return g
def __ior__(self, other):
if len(other) == 0:
# no changes if other is empty
return self
self._cycles = None
for name in other.nodes:
self.add_node(name)
for name in other.edges:
for edge in other.edges[name]:
self.add_edge(name, edge.node, label=edge.label)
return self
def __len__(self):
return (len(self.nodes) +
reduce(lambda x, y: x+y,
(len(x) for x in self.edges.values()),
0))
def add_node(self, val):
"""Add node VAL to graph."""
if val not in self.nodes: # preserve old node info
self.nodes[val] = None
return True
return False
def delete_node(self, val):
"""Delete node VAL from graph and all edges."""
try:
del self.nodes[val]
del self.edges[val]
except KeyError:
assert val not in self.edges
def add_edge(self, val1, val2, label=None):
"""Add edge from VAL1 to VAL2 with label LABEL to graph.
Also adds the nodes.
"""
self._cycles = None # so that has_cycles knows it needs to rerun
self.add_node(val1)
self.add_node(val2)
val = self.edge_data(node=val2, label=label)
try:
self.edges[val1].add(val)
except KeyError:
self.edges[val1] = set([val])
def delete_edge(self, val1, val2, label=None):
"""Delete edge from VAL1 to VAL2 with label LABEL.
LABEL must match (even if None). Does not delete nodes.
"""
try:
edge = self.edge_data(node=val2, label=label)
self.edges[val1].remove(edge)
except KeyError:
# KeyError either because val1 or edge
return
self._cycles = None
def node_in(self, val):
return val in self.nodes
def edge_in(self, val1, val2, label=None):
return (val1 in self.edges and
self.edge_data(val2, label) in self.edges[val1])
def reset_nodes(self):
for node in self.nodes:
self.nodes[node] = None
def depth_first_search(self, roots=None):
"""Run depth first search on the graph.
Also modify self.nodes, self.counter, and self.cycle.
Use all nodes if @roots param is None or unspecified
"""
self.reset()
if roots is None:
roots = self.nodes
for node in roots:
if node in self.nodes and self.nodes[node].begin is None:
self.dfs(node)
def _enumerate_cycles(self):
self.reset()
for node in self.nodes.keys():
self._reset_dfs_data()
self.dfs(node, target=node)
for path in self.__target_paths:
self._cycles.add(Cycle(path))
def reset(self, roots=None):
"""Return nodes to pristine state."""
self._reset_dfs_data()
roots = roots or self.nodes
self._cycles = set()
def _reset_dfs_data(self):
for node in self.nodes.keys():
self.nodes[node] = self.dfs_data()
self.counter = 0
self.__target_paths = []
def dfs(self, node, target=None, dfs_stack=None):
"""DFS implementation.
Assumes data structures have been properly prepared.
Creates start/begin times on nodes.
Adds paths from node to target to self.__target_paths
"""
if dfs_stack is None:
dfs_stack = []
dfs_stack.append(node)
if (target is not None and node == target and
                len(dfs_stack) > 1):  # non-trivial path to target found
self.__target_paths.append(list(dfs_stack)) # record
if self.nodes[node].begin is None:
self.nodes[node].begin = self.next_counter()
if node in self.edges:
for edge in self.edges[node]:
self.dfs(edge.node, target=target, dfs_stack=dfs_stack)
self.nodes[node].end = self.next_counter()
dfs_stack.pop()
def stratification(self, labels):
"""Return the stratification result.
Return mapping of node name to integer indicating the
stratum to which that node is assigned. LABELS is the list
of edge labels that dictate a change in strata.
"""
stratum = {}
for node in self.nodes:
stratum[node] = 1
changes = True
while changes:
changes = False
for node in self.edges:
for edge in self.edges[node]:
oldp = stratum[node]
if edge.label in labels:
stratum[node] = max(stratum[node],
1 + stratum[edge.node])
else:
stratum[node] = max(stratum[node],
stratum[edge.node])
if oldp != stratum[node]:
changes = True
if stratum[node] > len(self.nodes):
return None
return stratum
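    # Worked example (hypothetical nodes): with edges p -[neg]-> q and
    # q -> r, and labels=['neg'], the fixpoint gives stratum[r] = 1,
    # stratum[q] = 1, stratum[p] = 2: only labeled edges force a strictly
    # higher stratum. A labeled edge inside a cycle pushes some stratum
    # past len(self.nodes), so None is returned.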
def roots(self):
"""Return list of nodes with no incoming edges."""
possible_roots = set(self.nodes)
for node in self.edges:
for edge in self.edges[node]:
if edge.node in possible_roots:
possible_roots.remove(edge.node)
return possible_roots
def has_cycle(self):
"""Checks if there are cycles.
Run depth_first_search only if it has not already been run.
"""
if self._cycles is None:
self._enumerate_cycles()
return len(self._cycles) > 0
def cycles(self):
"""Return list of cycles. None indicates unknown. """
if self._cycles is None:
self._enumerate_cycles()
cycles_list = []
for cycle_graph in self._cycles:
cycles_list.append(cycle_graph.list_repr())
return cycles_list
def dependencies(self, node):
"""Returns collection of node names reachable from NODE.
If NODE does not exist in graph, returns None.
"""
if node not in self.nodes:
return None
self.reset()
node_obj = self.nodes[node]
if node_obj is None or node_obj.begin is None or node_obj.end is None:
self.depth_first_search([node])
node_obj = self.nodes[node]
return set([n for n, dfs_obj in self.nodes.items()
if dfs_obj.begin is not None])
def next_counter(self):
"""Return next counter value and increment the counter."""
self.counter += 1
return self.counter - 1
def __str__(self):
s = "{"
for node in self.nodes:
s += "(" + str(node) + " : ["
if node in self.edges:
s += ", ".join([str(x) for x in self.edges[node]])
s += "],\n"
s += "}"
return s
def _inverted_edge_graph(self):
"""create a shallow copy of self with the edges inverted"""
newGraph = Graph()
newGraph.nodes = self.nodes
for source_node in self.edges:
for edge in self.edges[source_node]:
try:
newGraph.edges[edge.node].add(Graph.edge_data(source_node))
except KeyError:
newGraph.edges[edge.node] = set(
[Graph.edge_data(source_node)])
return newGraph
def find_dependent_nodes(self, nodes):
"""Return all nodes dependent on @nodes.
Node T is dependent on node T.
        Node T is dependent on node R if there is an edge from node T to
        some node S, and S is dependent on R.
Note that node T is dependent on node T even if T is not in the graph
"""
return (self._inverted_edge_graph().find_reachable_nodes(nodes)
| set(nodes))
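    # For example (hypothetical graph): with edges a -> b and b -> c,
    # find_dependent_nodes(['c']) returns {'a', 'b', 'c'}: dependence runs
    # against the arrows, which is why the inverted-edge copy is searched.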
def find_reachable_nodes(self, roots):
"""Return all nodes reachable from @roots."""
if len(roots) == 0:
return set()
self.depth_first_search(roots)
result = [x for x in self.nodes if self.nodes[x].begin is not None]
self.reset_nodes()
return set(result)
class Cycle(frozenset):
"""An immutable set of 2-tuples to represent a directed cycle
Extends frozenset, adding a list_repr method to represent a cycle as an
ordered list of nodes.
The set representation facilicates identity of cycles regardless of order.
The list representation is much more readable.
"""
def __new__(cls, cycle):
edge_list = []
for i in range(1, len(cycle)):
edge_list.append((cycle[i - 1], cycle[i]))
new_obj = super(Cycle, cls).__new__(cls, edge_list)
new_obj.__list_repr = list(cycle) # save copy as list_repr
return new_obj
def list_repr(self):
"""Return list-of-nodes representation of cycle"""
return self.__list_repr
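# Illustrative: Cycle(['a', 'b', 'a']) is the edge set
# {('a', 'b'), ('b', 'a')}, so Cycle(['b', 'a', 'b']) compares equal to it,
# while list_repr() still yields the readable ['a', 'b', 'a'] form.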
class BagGraph(Graph):
"""A graph data structure with bag semantics for nodes and edges.
Keeps track of the number of times each node/edge has been inserted.
A node/edge is removed from the graph only once it has been deleted
    the same number of times it was inserted. Deleting a node/edge that
    does not exist is ignored.
"""
def __init__(self, graph=None):
super(BagGraph, self).__init__(graph)
self._node_refcounts = {} # dict from node to counter
self._edge_refcounts = {} # dict from edge to counter
def add_node(self, val):
"""Add node VAL to graph."""
super(BagGraph, self).add_node(val)
if val in self._node_refcounts:
self._node_refcounts[val] += 1
else:
self._node_refcounts[val] = 1
def delete_node(self, val):
"""Delete node VAL from graph (but leave all edges)."""
if val not in self._node_refcounts:
return
self._node_refcounts[val] -= 1
if self._node_refcounts[val] == 0:
super(BagGraph, self).delete_node(val)
del self._node_refcounts[val]
def add_edge(self, val1, val2, label=None):
"""Add edge from VAL1 to VAL2 with label LABEL to graph.
Also adds the nodes VAL1 and VAL2 (important for refcounting).
"""
super(BagGraph, self).add_edge(val1, val2, label=label)
edge = (val1, val2, label)
if edge in self._edge_refcounts:
self._edge_refcounts[edge] += 1
else:
self._edge_refcounts[edge] = 1
def delete_edge(self, val1, val2, label=None):
"""Delete edge from VAL1 to VAL2 with label LABEL.
        LABEL must match (even if None). Whenever the edge exists, the
        node refcounts for VAL1 and VAL2 are decremented as well
        (mirroring add_edge, which increments them).
"""
edge = (val1, val2, label)
if edge not in self._edge_refcounts:
return
self.delete_node(val1)
self.delete_node(val2)
self._edge_refcounts[edge] -= 1
if self._edge_refcounts[edge] == 0:
super(BagGraph, self).delete_edge(val1, val2, label=label)
del self._edge_refcounts[edge]
def node_in(self, val):
return val in self._node_refcounts
def edge_in(self, val1, val2, label=None):
return (val1, val2, label) in self._edge_refcounts
def node_count(self, node):
if node in self._node_refcounts:
return self._node_refcounts[node]
else:
return 0
def edge_count(self, val1, val2, label=None):
edge = (val1, val2, label)
if edge in self._edge_refcounts:
return self._edge_refcounts[edge]
else:
return 0
def __len__(self):
        # total number of insertions across all nodes and edges
        return (sum(self._node_refcounts.values()) +
                sum(self._edge_refcounts.values()))
def __str__(self):
s = "{"
for node in self.nodes:
s += "(%s *%s: [" % (str(node), self._node_refcounts[node])
if node in self.edges:
s += ", ".join(
["%s *%d" %
(str(x), self.edge_count(node, x.node, x.label))
for x in self.edges[node]])
s += "],\n"
s += "}"
return s
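# Bag-semantics sketch (illustrative):
#
#     g = BagGraph()
#     g.add_node('a')
#     g.add_node('a')      # refcount for 'a' is now 2
#     g.delete_node('a')
#     g.node_in('a')       # True: one insertion still outstanding
#     g.delete_node('a')
#     g.node_in('a')       # False: refcount reached zero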
class OrderedSet(collections.MutableSet):
"""Provide sequence capabilities with rapid membership checks.
Mostly lifted from the activestate recipe[1] linked at Python's collections
documentation[2]. Some modifications, such as returning True or False from
add(key) and discard(key) if a change is made.
[1] - http://code.activestate.com/recipes/576694/
[2] - https://docs.python.org/2/library/collections.html
"""
def __init__(self, iterable=None):
self.end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.map = {} # key --> [key, prev, next]
if iterable is not None:
self |= iterable
def __len__(self):
return len(self.map)
def __contains__(self, key):
return key in self.map
def add(self, key):
if key not in self.map:
end = self.end
curr = end[1]
curr[2] = end[1] = self.map[key] = [key, curr, end]
return True
return False
def discard(self, key):
if key in self.map:
key, prev, next = self.map.pop(key)
prev[2] = next
next[1] = prev
return True
return False
def __iter__(self):
end = self.end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def pop(self, last=True):
if not self:
raise KeyError('pop from an empty set')
key = self.end[1][0] if last else self.end[2][0]
self.discard(key)
return key
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, list(self))
def __eq__(self, other):
if isinstance(other, OrderedSet):
return len(self) == len(other) and list(self) == list(other)
else:
return False
def __ne__(self, other):
return not self.__eq__(other)
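# Usage sketch (illustrative):
#
#     s = OrderedSet(['b', 'a', 'c'])
#     list(s)          # ['b', 'a', 'c'] -- insertion order is preserved
#     s.add('a')       # False: already present, no change
#     s.discard('b')   # True: removed
#     s.pop()          # 'c' (pops from the end by default)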
class iterstr(object):
"""Lazily provides informal string representation of iterables.
Calling __str__ directly on iterables returns a string containing the
formal representation of the elements. This class wraps the iterable and
instead returns the informal representation of the elements.
"""
def __init__(self, iterable):
self.iterable = iterable
self._str_interp = None
self._repr_interp = None
def __str__(self):
if self._str_interp is None:
self._str_interp = "[" + ";".join(map(str, self.iterable)) + "]"
return self._str_interp
def __repr__(self):
if self._repr_interp is None:
self._repr_interp = "[" + ";".join(map(repr, self.iterable)) + "]"
return self._repr_interp
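# Illustrative: str() on a list shows the formal repr of the elements,
# whereas the wrapper gives the informal one:
#
#     str(iterstr(['a', 'b']))    # '[a;b]'
#     repr(iterstr(['a', 'b']))   # "['a';'b']"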


@@ -1,115 +0,0 @@
# Copyright (c) 2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from aodhclient import client as aodh_client
from oslo_log import log as logging
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class AodhDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
ALARMS = "alarms"
ALARM_THRESHOLD_RULE = "alarms.threshold_rule"
value_trans = {'translation-type': 'VALUE'}
alarms_translator = {
'translation-type': 'HDICT',
'table-name': ALARMS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'alarm_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'threshold_rule', 'col': 'threshold_rule_id',
'translator': {'translation-type': 'VDICT',
'table-name': ALARM_THRESHOLD_RULE,
'id-col': 'threshold_rule_id',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}},
{'fieldname': 'type', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'time_constraints', 'translator': value_trans},
{'fieldname': 'user_id', 'translator': value_trans},
{'fieldname': 'project_id', 'translator': value_trans},
{'fieldname': 'alarm_actions', 'translator': value_trans},
{'fieldname': 'ok_actions', 'translator': value_trans},
{'fieldname': 'insufficient_data_actions', 'translator':
value_trans},
{'fieldname': 'repeat_actions', 'translator': value_trans},
{'fieldname': 'timestamp', 'translator': value_trans},
{'fieldname': 'state_timestamp', 'translator': value_trans},
)}
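    # Illustrative flattening (hypothetical alarm; the rule id is generated
    # by the translator): an alarm whose threshold_rule is
    # {'meter_name': 'cpu_util', 'threshold': 70.0} yields one 'alarms' row
    # whose threshold_rule_id column holds the generated id, plus one row
    # per key in 'alarms.threshold_rule':
    # (id, 'meter_name', 'cpu_util') and (id, 'threshold', 70.0).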
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['resource_id']
except KeyError:
return str(x)
TRANSLATORS = [alarms_translator]
def __init__(self, name='', args=None):
super(AodhDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
session = ds_utils.get_keystone_session(args)
endpoint = session.get_endpoint(service_type='alarming',
interface='publicURL')
self.aodh_client = aodh_client.Client(version='2', session=session,
endpoint_override=endpoint)
self.add_executable_client_methods(self.aodh_client, 'aodhclient.v2.')
self.initialize_update_method()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'aodh'
result['description'] = ('Datasource driver that interfaces with '
'aodh.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_method(self):
alarms_method = lambda: self._translate_alarms(
self.aodh_client.alarm.list())
self.add_update_method(alarms_method, self.alarms_translator)
@ds_utils.update_state_on_changed(ALARMS)
def _translate_alarms(self, obj):
"""Translate the alarms represented by OBJ into tables."""
LOG.debug("ALARMS: %s", str(obj))
row_data = AodhDriver.convert_objs(obj, self.alarms_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.aodh_client, action, action_args)


@@ -1,66 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from six.moves import range
from congress.datasources import datasource_driver
def d6service(name, keys, inbox, datapath, args):
"""Create a dataservice instance.
This method is called by d6cage to create a dataservice
instance. There are a couple of parameters we found useful
to add to that call, so we included them here instead of
modifying d6cage (and all the d6cage.createservice calls).
"""
return BenchmarkDriver(name, keys, inbox, datapath, args)
class BenchmarkDriver(datasource_driver.PollingDataSourceDriver):
BENCHTABLE = 'benchtable'
value_trans = {'translation-type': 'VALUE'}
translator = {
'translation-type': 'HDICT',
'table-name': BENCHTABLE,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'field1', 'translator': value_trans},
{'fieldname': 'field2', 'translator': value_trans})}
TRANSLATORS = [translator]
def __init__(self, name='', keys='', inbox=None, datapath=None, args=None):
super(BenchmarkDriver, self).__init__(name, keys,
inbox, datapath, args)
        # used by update_from_datasource to manufacture data; default is small.
self.datarows = 10
self._init_end_start_poll()
def update_from_datasource(self):
self.state = {}
# TODO(sh): using self.convert_objs() takes about 10x the time. Needs
# optimization efforts.
row_data = tuple((self.BENCHTABLE, ('val1_%d' % i, 'val2_%d' % i))
for i in range(self.datarows))
for table, row in row_data:
if table not in self.state:
self.state[table] = set()
self.state[table].add(row)
def get_credentials(self, *args, **kwargs):
return {}


@@ -1,284 +0,0 @@
# Copyright (c) 2014 Montavista Software, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import copy
import ceilometerclient
import ceilometerclient.client as cc
from keystoneauth1 import exceptions
from oslo_log import log as logging
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
# TODO(thinrichs): figure out how to move even more of this boilerplate
# into DataSourceDriver. E.g. change all the classes to Driver instead of
# NeutronDriver, CeilometerDriver, etc. and move the d6instantiate function
# to DataSourceDriver.
class CeilometerDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
METERS = "meters"
ALARMS = "alarms"
EVENTS = "events"
EVENT_TRAITS = "events.traits"
ALARM_THRESHOLD_RULE = "alarms.threshold_rule"
STATISTICS = "statistics"
value_trans = {'translation-type': 'VALUE'}
meters_translator = {
'translation-type': 'HDICT',
'table-name': METERS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'meter_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'type', 'translator': value_trans},
{'fieldname': 'unit', 'translator': value_trans},
{'fieldname': 'source', 'translator': value_trans},
{'fieldname': 'resource_id', 'translator': value_trans},
{'fieldname': 'user_id', 'translator': value_trans},
{'fieldname': 'project_id', 'translator': value_trans})}
alarms_translator = {
'translation-type': 'HDICT',
'table-name': ALARMS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'alarm_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'threshold_rule', 'col': 'threshold_rule_id',
'translator': {'translation-type': 'VDICT',
'table-name': ALARM_THRESHOLD_RULE,
'id-col': 'threshold_rule_id',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}},
{'fieldname': 'type', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'time_constraints', 'translator': value_trans},
{'fieldname': 'user_id', 'translator': value_trans},
{'fieldname': 'project_id', 'translator': value_trans},
{'fieldname': 'alarm_actions', 'translator': value_trans},
{'fieldname': 'ok_actions', 'translator': value_trans},
{'fieldname': 'insufficient_data_actions', 'translator':
value_trans},
{'fieldname': 'repeat_actions', 'translator': value_trans},
{'fieldname': 'timestamp', 'translator': value_trans},
{'fieldname': 'state_timestamp', 'translator': value_trans},
)}
events_translator = {
'translation-type': 'HDICT',
'table-name': EVENTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'message_id', 'translator': value_trans},
{'fieldname': 'event_type', 'translator': value_trans},
{'fieldname': 'generated', 'translator': value_trans},
{'fieldname': 'traits',
'translator': {'translation-type': 'HDICT',
'table-name': EVENT_TRAITS,
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'parent-key': 'message_id',
'parent-col-name': 'event_message_id',
'field-translators':
({'fieldname': 'name',
'translator': value_trans},
{'fieldname': 'type',
'translator': value_trans},
{'fieldname': 'value',
'translator': value_trans}
)}}
)}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['resource_id']
except KeyError:
return str(x)
statistics_translator = {
'translation-type': 'HDICT',
'table-name': STATISTICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'meter_name', 'translator': value_trans},
{'fieldname': 'groupby', 'col': 'resource_id',
'translator': {'translation-type': 'VALUE',
'extract-fn': safe_id}},
{'fieldname': 'avg', 'translator': value_trans},
{'fieldname': 'count', 'translator': value_trans},
{'fieldname': 'duration', 'translator': value_trans},
{'fieldname': 'duration_start', 'translator': value_trans},
{'fieldname': 'duration_end', 'translator': value_trans},
{'fieldname': 'max', 'translator': value_trans},
{'fieldname': 'min', 'translator': value_trans},
{'fieldname': 'period', 'translator': value_trans},
{'fieldname': 'period_end', 'translator': value_trans},
{'fieldname': 'period_start', 'translator': value_trans},
{'fieldname': 'sum', 'translator': value_trans},
{'fieldname': 'unit', 'translator': value_trans})}
TRANSLATORS = [meters_translator, alarms_translator, events_translator,
statistics_translator]
def __init__(self, name='', args=None):
super(CeilometerDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
session = ds_utils.get_keystone_session(args)
self.ceilometer_client = cc.get_client(version='2', session=session)
self.add_executable_client_methods(self.ceilometer_client,
'ceilometerclient.v2.')
self.initialize_update_method()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'ceilometer'
result['description'] = ('Datasource driver that interfaces with '
'ceilometer.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_method(self):
meters_method = lambda: self._translate_meters(
self.ceilometer_client.meters.list())
self.add_update_method(meters_method, self.meters_translator)
def alarms_list_suppress_no_aodh_error(ceilometer_client):
'''Return alarms.list(), suppressing error due to Aodh absence
Requires python-ceilometerclient >= 2.6.2
'''
try:
                return ceilometer_client.alarms.list()
except ceilometerclient.exc.HTTPException as e:
if 'alarms URLs is unavailable when Aodh is disabled or ' \
'unavailable' in str(e):
LOG.info('alarms not available because Aodh is '
'disabled or unavailable. '
'Empty alarms list reported instead.')
return []
else:
raise
except exceptions.ConnectFailure:
LOG.info('Unable to connect to Aodh service, not up '
'or configured')
return []
alarms_method = lambda: self._translate_alarms(
alarms_list_suppress_no_aodh_error(self.ceilometer_client))
self.add_update_method(alarms_method, self.alarms_translator)
events_method = lambda: self._translate_events(self._events_list())
self.add_update_method(events_method, self.events_translator)
statistics_method = lambda: self._translate_statistics(
self._get_statistics(self.ceilometer_client.meters.list()))
self.add_update_method(statistics_method, self.statistics_translator)
def _events_list(self):
try:
return self.ceilometer_client.events.list()
except (ceilometerclient.exc.HTTPException,
exceptions.ConnectFailure):
LOG.info('events list not available because Panko is disabled or '
'unavailable. Empty list reported instead')
return []
def _get_statistics(self, meters):
statistics = []
names = set()
for m in meters:
LOG.debug("Adding meter %s", m.name)
names.add(m.name)
for meter_name in names:
LOG.debug("Getting all Resource ID for meter: %s",
meter_name)
stat_list = self.ceilometer_client.statistics.list(
meter_name, groupby=['resource_id'])
LOG.debug("Statistics List: %s", stat_list)
            if stat_list:
for temp in stat_list:
temp_dict = copy.copy(temp.to_dict())
temp_dict['meter_name'] = meter_name
statistics.append(temp_dict)
return statistics
@ds_utils.update_state_on_changed(METERS)
def _translate_meters(self, obj):
"""Translate the meters represented by OBJ into tables."""
meters = [o.to_dict() for o in obj]
LOG.debug("METERS: %s", str(meters))
row_data = CeilometerDriver.convert_objs(meters,
self.meters_translator)
return row_data
@ds_utils.update_state_on_changed(ALARMS)
def _translate_alarms(self, obj):
"""Translate the alarms represented by OBJ into tables."""
alarms = [o.to_dict() for o in obj]
LOG.debug("ALARMS: %s", str(alarms))
row_data = CeilometerDriver.convert_objs(alarms,
self.alarms_translator)
return row_data
@ds_utils.update_state_on_changed(EVENTS)
def _translate_events(self, obj):
"""Translate the events represented by OBJ into tables."""
events = [o.to_dict() for o in obj]
LOG.debug("EVENTS: %s", str(events))
row_data = CeilometerDriver.convert_objs(events,
self.events_translator)
return row_data
@ds_utils.update_state_on_changed(STATISTICS)
def _translate_statistics(self, obj):
"""Translate the statistics represented by OBJ into tables."""
LOG.debug("STATISTICS: %s", str(obj))
row_data = CeilometerDriver.convert_objs(obj,
self.statistics_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.ceilometer_client, action, action_args)


@@ -1,171 +0,0 @@
# Copyright (c) 2014 Montavista Software, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Schema version history
version: 2.1
date: 2016-03-27
changes:
- Added columns to the volumes table: encrypted, availability_zone,
replication_status, multiattach, snapshot_id, source_volid,
consistencygroup_id, migration_status
- Added the attachments table for volume attachment information.
version: 2.0
Initial schema version.
"""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import cinderclient.client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class CinderDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
VOLUMES = "volumes"
ATTACHMENTS = "attachments"
SNAPSHOTS = "snapshots"
SERVICES = "services"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
volumes_translator = {
'translation-type': 'HDICT',
'table-name': VOLUMES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'size', 'translator': value_trans},
{'fieldname': 'user_id', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'bootable', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'volume_type', 'translator': value_trans},
{'fieldname': 'encrypted', 'translator': value_trans},
{'fieldname': 'availability_zone', 'translator': value_trans},
{'fieldname': 'replication_status', 'translator': value_trans},
{'fieldname': 'multiattach', 'translator': value_trans},
{'fieldname': 'snapshot_id', 'translator': value_trans},
{'fieldname': 'source_volid', 'translator': value_trans},
{'fieldname': 'consistencygroup_id', 'translator': value_trans},
{'fieldname': 'migration_status', 'translator': value_trans},
{'fieldname': 'attachments',
'translator': {'translation-type': 'LIST',
'table-name': ATTACHMENTS,
'val-col': 'attachment',
'val-col-desc': 'List of attachments',
'parent-key': 'id',
'parent-col-name': 'volume_id',
'parent-key-desc': 'UUID of volume',
'translator': value_trans}},
)}
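    # Illustrative (hypothetical volume): a volume with id 'v1' and
    # attachments [a1, a2] yields one 'volumes' row plus the rows
    # ('v1', a1) and ('v1', a2) in 'attachments', linked through the
    # parent key 'id' exposed as the 'volume_id' column.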
snapshots_translator = {
'translation-type': 'HDICT',
'table-name': SNAPSHOTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'size', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'volume_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans})}
services_translator = {
'translation-type': 'HDICT',
'table-name': SERVICES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'binary', 'translator': value_trans},
{'fieldname': 'zone', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'host', 'translator': value_trans},
{'fieldname': 'disabled_reason', 'translator': value_trans})}
TRANSLATORS = [volumes_translator, snapshots_translator,
services_translator]
def __init__(self, name='', args=None):
super(CinderDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
session = ds_utils.get_keystone_session(args)
self.cinder_client = cinderclient.client.Client(version='2',
session=session)
self.add_executable_client_methods(self.cinder_client,
'cinderclient.v2.')
self.initialize_update_method()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'cinder'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack cinder.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_method(self):
volumes_method = lambda: self._translate_volumes(
self.cinder_client.volumes.list(detailed=True,
search_opts={'all_tenants': 1}))
self.add_update_method(volumes_method, self.volumes_translator)
snapshots_method = lambda: self._translate_snapshots(
self.cinder_client.volume_snapshots.list(
detailed=True, search_opts={'all_tenants': 1}))
self.add_update_method(snapshots_method, self.snapshots_translator)
services_method = lambda: self._translate_services(
self.cinder_client.services.list(host=None, binary=None))
self.add_update_method(services_method, self.services_translator)
@ds_utils.update_state_on_changed(VOLUMES)
def _translate_volumes(self, obj):
row_data = CinderDriver.convert_objs(obj, self.volumes_translator)
return row_data
@ds_utils.update_state_on_changed(SNAPSHOTS)
def _translate_snapshots(self, obj):
row_data = CinderDriver.convert_objs(obj, self.snapshots_translator)
return row_data
@ds_utils.update_state_on_changed(SERVICES)
def _translate_services(self, obj):
row_data = CinderDriver.convert_objs(obj, self.services_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.cinder_client, action, action_args)


@@ -1,244 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from cloudfoundryclient.v2 import client
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class CloudFoundryV2Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
ORGANIZATIONS = 'organizations'
SERVICE_BINDINGS = 'service_bindings'
APPS = 'apps'
SPACES = 'spaces'
SERVICES = 'services'
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
organizations_translator = {
'translation-type': 'HDICT',
'table-name': ORGANIZATIONS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
service_bindings_translator = {
'translation-type': 'LIST',
'table-name': SERVICE_BINDINGS,
'parent-key': 'guid',
'parent-col-name': 'app_guid',
'val-col': 'service_instance_guid',
'translator': value_trans}
apps_translator = {
'translation-type': 'HDICT',
'table-name': APPS,
'in-list': True,
'parent-key': 'guid',
'parent-col-name': 'space_guid',
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'buildpack', 'translator': value_trans},
{'fieldname': 'command', 'translator': value_trans},
{'fieldname': 'console', 'translator': value_trans},
{'fieldname': 'debug', 'translator': value_trans},
{'fieldname': 'detected_buildpack', 'translator': value_trans},
{'fieldname': 'detected_start_command',
'translator': value_trans},
{'fieldname': 'disk_quota', 'translator': value_trans},
{'fieldname': 'docker_image', 'translator': value_trans},
{'fieldname': 'environment_json', 'translator': value_trans},
{'fieldname': 'health_check_timeout', 'translator': value_trans},
{'fieldname': 'instances', 'translator': value_trans},
{'fieldname': 'memory', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'package_state', 'translator': value_trans},
{'fieldname': 'package_updated_at', 'translator': value_trans},
{'fieldname': 'production', 'translator': value_trans},
{'fieldname': 'staging_failed_reason', 'translator': value_trans},
{'fieldname': 'staging_task_id', 'translator': value_trans},
{'fieldname': 'state', 'translator': value_trans},
{'fieldname': 'version', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'service_bindings',
'translator': service_bindings_translator})}
spaces_translator = {
'translation-type': 'HDICT',
'table-name': SPACES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans},
{'fieldname': 'apps', 'translator': apps_translator})}
services_translator = {
'translation-type': 'HDICT',
'table-name': SERVICES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'guid', 'translator': value_trans},
{'fieldname': 'space_guid', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'bound_app_count', 'translator': value_trans},
{'fieldname': 'last_operation', 'translator': value_trans},
{'fieldname': 'service_plan_name', 'translator': value_trans})}
TRANSLATORS = [organizations_translator,
spaces_translator, services_translator]
def __init__(self, name='', args=None):
super(CloudFoundryV2Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
self.cloudfoundry = client.Client(username=self.creds['username'],
password=self.creds['password'],
base_url=self.creds['auth_url'])
self.cloudfoundry.login()
self._cached_organizations = []
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'cloudfoundryv2'
result['description'] = ('Datasource driver that interfaces with '
                                 'cloudfoundry.')
result['config'] = {'username': constants.REQUIRED,
'password': constants.REQUIRED,
'poll_time': constants.OPTIONAL,
'auth_url': constants.REQUIRED}
result['secret'] = ['password']
return result
def _save_organizations(self, organizations):
temp_organizations = []
for organization in organizations['resources']:
temp_organizations.append(organization['metadata']['guid'])
self._cached_organizations = temp_organizations
def _parse_services(self, services):
data = []
space_guid = services['guid']
for service in services['services']:
data.append(
{'bound_app_count': service['bound_app_count'],
'guid': service['guid'],
'name': service['name'],
'service_plan_name': service['service_plan']['name'],
'space_guid': space_guid})
return data
def _get_app_services_guids(self, service_bindings):
result = []
for service_binding in service_bindings['resources']:
result.append(service_binding['entity']['service_instance_guid'])
return result
def update_from_datasource(self):
LOG.debug("CloudFoundry grabbing Data")
organizations = self.cloudfoundry.get_organizations()
self._translate_organizations(organizations)
self._save_organizations(organizations)
spaces = self._get_spaces()
services = self._get_services_update_spaces(spaces)
self._translate_spaces(spaces)
self._translate_services(services)
def _get_services_update_spaces(self, spaces):
services = []
for space in spaces:
space['apps'] = []
temp_apps = self.cloudfoundry.get_apps_in_space(space['guid'])
for temp_app in temp_apps['resources']:
service_bindings = self.cloudfoundry.get_app_service_bindings(
temp_app['metadata']['guid'])
data = dict(list(temp_app['metadata'].items()) +
list(temp_app['entity'].items()))
app_services = self._get_app_services_guids(service_bindings)
if app_services:
data['service_bindings'] = app_services
space['apps'].append(data)
services.extend(self._parse_services(
self.cloudfoundry.get_spaces_summary(space['guid'])))
return services
def _get_spaces(self):
spaces = []
for org in self._cached_organizations:
temp_spaces = self.cloudfoundry.get_organization_spaces(org)
for temp_space in temp_spaces['resources']:
spaces.append(dict(list(temp_space['metadata'].items()) +
list(temp_space['entity'].items())))
return spaces
@ds_utils.update_state_on_changed(SERVICES)
def _translate_services(self, obj):
LOG.debug("services: %s", obj)
row_data = CloudFoundryV2Driver.convert_objs(
obj, self.services_translator)
return row_data
@ds_utils.update_state_on_changed(ORGANIZATIONS)
def _translate_organizations(self, obj):
LOG.debug("organziations: %s", obj)
# convert_objs needs the data structured a specific way so we
# do this here. Perhaps we can improve convert_objs later to be
        # more flexible.
results = [dict(list(o['metadata'].items()) +
list(o['entity'].items()))
for o in obj['resources']]
row_data = CloudFoundryV2Driver.convert_objs(
results,
self.organizations_translator)
return row_data
@ds_utils.update_state_on_changed(SPACES)
def _translate_spaces(self, obj):
LOG.debug("spaces: %s", obj)
row_data = CloudFoundryV2Driver.convert_objs(
obj,
self.spaces_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.cloudfoundry, action, action_args)


@@ -1,21 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
# datasource config options
REQUIRED = 'required'
OPTIONAL = '(optional)'

File diff suppressed because it is too large


@@ -1,191 +0,0 @@
# Copyright (c) 2013,2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import functools
import inspect
import re
# from six.moves.urllib import parse as urlparse
# import keystoneauth1.identity.v2 as v2
# import keystoneauth1.identity.v3 as v3
from keystoneauth1 import loading as kaloading
# import keystoneauth1.session as kssession
from congress.datasources import constants
def get_openstack_required_config():
return {'auth_url': constants.REQUIRED,
'endpoint': constants.OPTIONAL,
'region': constants.OPTIONAL,
'username': constants.REQUIRED,
'password': constants.REQUIRED,
'tenant_name': constants.REQUIRED,
'project_name': constants.OPTIONAL,
'poll_time': constants.OPTIONAL}
def update_state_on_changed(root_table_name):
"""Decorator to check raw data before retranslating.
If raw data is same with cached self.raw_state,
don't translate data, return empty list directly.
If raw data is changed, translate it and update state.
"""
def outer(f):
@functools.wraps(f)
def inner(self, raw_data, *args, **kw):
if (root_table_name not in self.raw_state or
# TODO(RuiChen): workaround for oslo-incubator bug/1499369,
# enable self.raw_state cache, once the bug is resolved.
raw_data is not self.raw_state[root_table_name]):
result = f(self, raw_data, *args, **kw)
self._update_state(root_table_name, result)
self.raw_state[root_table_name] = raw_data
else:
result = []
return result
return inner
return outer
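# Usage sketch (illustrative; FooDriver and its translator are placeholders):
#
#     @update_state_on_changed('foo')
#     def _translate_foo(self, raw_data):
#         return FooDriver.convert_objs(raw_data, FooDriver.foo_translator)
#
# The first call translates and caches raw_data; a later call passing the
# very same object (identity, not equality, per the workaround above)
# skips translation and returns [].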
def add_column(colname, desc=None):
"""Adds column in the form of dict."""
return {'name': colname, 'desc': desc}
def inspect_methods(client, api_prefix):
"""Inspect all callable methods from client for congress."""
# some methods are referred multiple times, we should
# save them here to avoid infinite loop
obj_checked = []
method_checked = []
# For depth-first search
obj_stack = []
# save all inspected methods that will be returned
allmethods = []
obj_checked.append(client)
obj_stack.append(client)
while len(obj_stack) > 0:
cur_obj = obj_stack.pop()
# everything starts with '_' are considered as internal only
for f in [f for f in dir(cur_obj) if not f.startswith('_')]:
p = getattr(cur_obj, f, None)
if inspect.ismethod(p):
m_p = {}
# to get a name that can be called by Congress, no need
# to return the full path
m_p['name'] = cur_obj.__module__.replace(api_prefix, '')
if m_p['name'] == '':
m_p['name'] = p.__name__
else:
m_p['name'] = m_p['name'] + '.' + p.__name__
# skip checked methods
if m_p['name'] in method_checked:
continue
m_doc = inspect.getdoc(p)
# not return deprecated methods
if m_doc and "DEPRECATED:" in m_doc:
continue
if m_doc:
                    m_doc = re.sub(r'\s+', ' ', m_doc)
x = re.split(' :param ', m_doc)
m_p['desc'] = x.pop(0)
y = inspect.getargspec(p)
m_p['args'] = []
while len(y.args) > 0:
m_p_name = y.args.pop(0)
if m_p_name == 'self':
continue
if len(x) > 0:
m_p_desc = x.pop(0)
else:
m_p_desc = "None"
m_p['args'].append({'name': m_p_name,
'desc': m_p_desc})
else:
m_p['args'] = []
m_p['desc'] = ''
allmethods.append(m_p)
method_checked.append(m_p['name'])
elif inspect.isfunction(p):
m_p = {}
m_p['name'] = cur_obj.__module__.replace(api_prefix, '')
if m_p['name'] == '':
m_p['name'] = f
else:
m_p['name'] = m_p['name'] + '.' + f
# TODO(zhenzanz): Never see doc for function yet.
# m_doc = inspect.getdoc(p)
m_p['args'] = []
m_p['desc'] = ''
allmethods.append(m_p)
method_checked.append(m_p['name'])
elif isinstance(p, object) and hasattr(p, '__module__'):
# avoid infinite loop by checking that p not in obj_checked.
# don't use 'in' since that uses ==, and some clients err
if ((not any(p is x for x in obj_checked)) and
(not inspect.isbuiltin(p))):
if re.match(api_prefix, p.__module__):
if (not inspect.isclass(p)):
obj_stack.append(p)
return allmethods
# Note (thread-safety): blocking function
def get_keystone_session(creds):
auth_details = {}
auth_details['auth_url'] = creds['auth_url']
auth_details['username'] = creds['username']
auth_details['password'] = creds['password']
auth_details['project_name'] = (creds.get('project_name') or
creds.get('tenant_name'))
auth_details['tenant_name'] = creds.get('tenant_name')
auth_details['user_domain_name'] = creds.get('user_domain_name', 'Default')
auth_details['project_domain_name'] = creds.get('project_domain_name',
'Default')
loader = kaloading.get_plugin_loader('password')
auth_plugin = loader.load_from_options(**auth_details)
session = kaloading.session.Session().load_from_options(
auth=auth_plugin)
# auth = v3.Password(
# auth_url=creds['auth_url'],
# username=creds['username'],
# password=creds['password'],
# project_name=creds.get('project_name') or creds.get('tenant_name'),
# user_domain_name=creds.get('user_domain_name', 'Default'),
# project_domain_name=creds.get('project_domain_name', 'Default'))
#
# else:
# # Use v2 plugin
# # Note (thread-safety): blocking call
# auth = v2.Password(auth_url=creds['auth_url'],
# username=creds['username'],
# password=creds['password'],
# tenant_name=creds['tenant_name'])
#
# # Note (thread-safety): blocking call?
# session = kssession.Session(auth=auth)
return session
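# Usage sketch (illustrative credentials; values are placeholders):
#
#     creds = {'auth_url': 'http://keystone.example.com:5000/v3',
#              'username': 'admin',
#              'password': 'secret',
#              'tenant_name': 'admin'}
#     session = get_keystone_session(creds)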


@@ -1,105 +0,0 @@
# Copyright (c) 2016 NTT All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
class DoctorDriver(datasource_driver.PushedDataSourceDriver):
"""A DataSource Driver for OPNFV Doctor project.
    This driver has a table for the Doctor project's Inspector. See
    https://wiki.opnfv.org/display/doctor/Doctor+Home for details
    about the OPNFV Doctor project.
    To update the table, call the update-rows API:
    PUT /v1/data-sources/<the driver id>/tables/<table id>/rows
    To update the 'events' table, the request body should follow the
    style below. The request replaces all rows in the table with the
    body, so updating the table with [] clears it.
    One {} object in the list represents one row of the table.
request body:
[
{
"time": "2016-02-22T11:48:55Z",
"type": "compute.host.down",
"details": {
"hostname": "compute1",
"status": "down",
"monitor": "zabbix1",
"monitor_event_id": "111"
}
},
.....
]
"""
value_trans = {'translation-type': 'VALUE'}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['id']
except Exception:
return str(x)
def flatten_events(row_events):
flatten = []
for event in row_events:
details = event.pop('details')
for k, v in details.items():
event[k] = v
flatten.append(event)
return flatten
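    # Illustrative: flatten_events turns
    #     [{'time': ..., 'type': ..., 'details': {'hostname': 'compute1',
    #                                             'status': 'down'}}]
    # into
    #     [{'time': ..., 'type': ..., 'hostname': 'compute1',
    #       'status': 'down'}]
    # so the HDICT translator below can select the detail fields directly.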
events_translator = {
'translation-type': 'HDICT',
'table-name': 'events',
'selector-type': 'DICT_SELECTOR',
'objects-extract-fn': flatten_events,
'field-translators':
({'fieldname': 'time', 'translator': value_trans},
{'fieldname': 'type', 'translator': value_trans},
{'fieldname': 'hostname', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'monitor', 'translator': value_trans},
{'fieldname': 'monitor_event_id', 'translator': value_trans},)
}
TRANSLATORS = [events_translator]
def __init__(self, name='', args=None):
super(DoctorDriver, self).__init__(name, args=args)
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'doctor'
result['description'] = ('Datasource driver that allows external '
'systems to push data in accordance with '
'OPNFV Doctor Inspector southbound interface '
'specification.')
result['config'] = {'persist_data': constants.OPTIONAL}
return result
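    # Push sketch (illustrative; host, port and driver id are placeholders,
    # auth headers omitted, row body taken from the class docstring above):
    #
    #     import requests
    #     rows = [{"time": "2016-02-22T11:48:55Z",
    #              "type": "compute.host.down",
    #              "details": {"hostname": "compute1",
    #                          "status": "down",
    #                          "monitor": "zabbix1",
    #                          "monitor_event_id": "111"}}]
    #     requests.put('http://congress.example.com:1789/v1/data-sources/'
    #                  '<driver id>/tables/events/rows', json=rows)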


@@ -1,142 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import glanceclient.v2.client as glclient
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class GlanceV2Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
IMAGES = "images"
TAGS = "tags"
value_trans = {'translation-type': 'VALUE'}
images_translator = {
'translation-type': 'HDICT',
'table-name': IMAGES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'UUID of image',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'The image status',
'translator': value_trans},
{'fieldname': 'name',
'desc': 'Image Name', 'translator': value_trans},
{'fieldname': 'container_format',
'desc': 'The container format of image',
'translator': value_trans},
{'fieldname': 'created_at',
'desc': 'The date and time when the resource was created',
'translator': value_trans},
{'fieldname': 'updated_at',
'desc': 'The date and time when the resource was updated.',
'translator': value_trans},
{'fieldname': 'disk_format',
'desc': 'The disk format of the image.',
'translator': value_trans},
{'fieldname': 'owner',
'desc': 'The ID of the owner or tenant of the image',
'translator': value_trans},
{'fieldname': 'protected',
'desc': 'Indicates whether the image can be deleted.',
'translator': value_trans},
{'fieldname': 'min_ram',
'desc': 'minimum amount of RAM in MB required to boot the image',
'translator': value_trans},
{'fieldname': 'min_disk',
'desc': 'minimum disk size in GB required to boot the image',
'translator': value_trans},
{'fieldname': 'checksum', 'desc': 'Hash of the image data used',
'translator': value_trans},
{'fieldname': 'size',
'desc': 'The size of the image data, in bytes.',
'translator': value_trans},
{'fieldname': 'file',
'desc': 'URL for the virtual machine image file',
'translator': value_trans},
{'fieldname': 'kernel_id', 'desc': 'kernel id',
'translator': value_trans},
{'fieldname': 'ramdisk_id', 'desc': 'ramdisk id',
'translator': value_trans},
{'fieldname': 'schema',
'desc': 'URL for schema of the virtual machine image',
'translator': value_trans},
{'fieldname': 'visibility', 'desc': 'The image visibility',
'translator': value_trans},
{'fieldname': 'tags',
'translator': {'translation-type': 'LIST',
'table-name': TAGS,
'val-col': 'tag',
'val-col-desc': 'List of image tags',
'parent-key': 'id',
'parent-col-name': 'image_id',
'parent-key-desc': 'UUID of image',
'translator': value_trans}})}
TRANSLATORS = [images_translator]
def __init__(self, name='', args=None):
super(GlanceV2Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(self.creds)
self.glance = glclient.Client(session=session)
self.add_executable_client_methods(self.glance, 'glanceclient.v2.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'glancev2'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack Images aka Glance.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
images_method = lambda: self._translate_images(
{'images': self.glance.images.list()})
self.add_update_method(images_method, self.images_translator)
@ds_utils.update_state_on_changed(IMAGES)
def _translate_images(self, obj):
"""Translate the images represented by OBJ into tables."""
LOG.debug("IMAGES: %s", str(dict(obj)))
row_data = GlanceV2Driver.convert_objs(
obj['images'], GlanceV2Driver.images_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.glance, action, action_args)


@@ -1,245 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import heatclient.v1.client as heatclient
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class HeatV1Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
STACKS = "stacks"
STACKS_LINKS = "stacks_links"
DEPLOYMENTS = "deployments"
DEPLOYMENT_OUTPUT_VALUES = "deployment_output_values"
RESOURCES = "resources"
RESOURCES_LINKS = "resources_links"
EVENTS = "events"
EVENTS_LINKS = "events_links"
# TODO(thinrichs): add snapshots
value_trans = {'translation-type': 'VALUE'}
stacks_links_translator = {
'translation-type': 'HDICT',
'table-name': STACKS_LINKS,
'parent-key': 'id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'href', 'translator': value_trans},
{'fieldname': 'rel', 'translator': value_trans})}
stacks_translator = {
'translation-type': 'HDICT',
'table-name': STACKS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'stack_name', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'creation_time', 'translator': value_trans},
{'fieldname': 'updated_time', 'translator': value_trans},
{'fieldname': 'stack_status', 'translator': value_trans},
{'fieldname': 'stack_status_reason', 'translator': value_trans},
{'fieldname': 'stack_owner', 'translator': value_trans},
{'fieldname': 'parent', 'translator': value_trans},
{'fieldname': 'links', 'translator': stacks_links_translator})}
deployments_output_values_translator = {
'translation-type': 'HDICT',
'table-name': DEPLOYMENT_OUTPUT_VALUES,
'parent-key': 'id',
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'deploy_stdout', 'translator': value_trans},
{'fieldname': 'deploy_stderr', 'translator': value_trans},
{'fieldname': 'deploy_status_code', 'translator': value_trans},
{'fieldname': 'result', 'translator': value_trans})}
software_deployment_translator = {
'translation-type': 'HDICT',
'table-name': DEPLOYMENTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'server_id', 'translator': value_trans},
{'fieldname': 'config_id', 'translator': value_trans},
{'fieldname': 'action', 'translator': value_trans},
{'fieldname': 'status_reason', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'output_values',
'translator': deployments_output_values_translator})}
resources_links_translator = {
'translation-type': 'HDICT',
'table-name': RESOURCES_LINKS,
'parent-key': 'physical_resource_id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'href', 'translator': value_trans},
{'fieldname': 'rel', 'translator': value_trans})}
resources_translator = {
'translation-type': 'HDICT',
'table-name': RESOURCES,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'physical_resource_id', 'translator': value_trans},
{'fieldname': 'logical_resource_id', 'translator': value_trans},
{'fieldname': 'stack_id', 'translator': value_trans},
{'fieldname': 'resource_name', 'translator': value_trans},
{'fieldname': 'resource_type', 'translator': value_trans},
{'fieldname': 'creation_time', 'translator': value_trans},
{'fieldname': 'updated_time', 'translator': value_trans},
{'fieldname': 'resource_status', 'translator': value_trans},
{'fieldname': 'resource_status_reason', 'translator': value_trans},
{'fieldname': 'links', 'translator': resources_links_translator})}
events_links_translator = {
'translation-type': 'HDICT',
'table-name': EVENTS_LINKS,
'parent-key': 'id',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'href', 'translator': value_trans},
{'fieldname': 'rel', 'translator': value_trans})}
events_translator = {
'translation-type': 'HDICT',
'table-name': EVENTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'physical_resource_id', 'translator': value_trans},
{'fieldname': 'logical_resource_id', 'translator': value_trans},
{'fieldname': 'stack_id', 'translator': value_trans},
{'fieldname': 'resource_name', 'translator': value_trans},
{'fieldname': 'event_time', 'translator': value_trans},
{'fieldname': 'resource_status', 'translator': value_trans},
{'fieldname': 'resource_status_reason', 'translator': value_trans},
{'fieldname': 'links', 'translator': events_links_translator})}
TRANSLATORS = [stacks_translator, software_deployment_translator,
resources_translator, events_translator]
def __init__(self, name='', args=None):
super(HeatV1Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(self.creds)
endpoint = session.get_endpoint(service_type='orchestration',
interface='publicURL')
self.heat = heatclient.Client(session=session, endpoint=endpoint)
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'heat'
result['description'] = ('Datasource driver that interfaces with'
' OpenStack orchestration aka heat.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
stacks_method = lambda: self._translate_stacks(
{'stacks': self.heat.stacks.list()})
self.add_update_method(stacks_method, self.stacks_translator)
resources_method = lambda: self._translate_resources(
self._get_resources(self.heat.stacks.list()))
self.add_update_method(resources_method, self.resources_translator)
events_method = lambda: self._translate_events(
self._get_events(self.heat.stacks.list()))
self.add_update_method(events_method, self.events_translator)
deployments_method = lambda: self._translate_software_deployment(
{'deployments': self.heat.software_deployments.list()})
self.add_update_method(deployments_method,
self.software_deployment_translator)
def _get_resources(self, stacks):
rval = []
for stack in stacks:
resources = self.heat.resources.list(stack.id)
for resource in resources:
resource = resource.to_dict()
resource['stack_id'] = stack.id
rval.append(resource)
return {'resources': rval}
def _get_events(self, stacks):
rval = []
for stack in stacks:
events = self.heat.events.list(stack.id)
for event in events:
event = event.to_dict()
event['stack_id'] = stack.id
rval.append(event)
return {'events': rval}
@ds_utils.update_state_on_changed(STACKS)
def _translate_stacks(self, obj):
"""Translate the stacks represented by OBJ into tables."""
LOG.debug("STACKS: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['stacks'], HeatV1Driver.stacks_translator)
return row_data
@ds_utils.update_state_on_changed(DEPLOYMENTS)
def _translate_software_deployment(self, obj):
"""Translate the stacks represented by OBJ into tables."""
LOG.debug("Software Deployments: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['deployments'], HeatV1Driver.software_deployment_translator)
return row_data
@ds_utils.update_state_on_changed(RESOURCES)
def _translate_resources(self, obj):
"""Translate the resources represented by OBJ into tables."""
LOG.debug("Resources: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['resources'], HeatV1Driver.resources_translator)
return row_data
@ds_utils.update_state_on_changed(EVENTS)
def _translate_events(self, obj):
"""Translate the events represented by OBJ into tables."""
LOG.debug("Events: %s", str(dict(obj)))
row_data = HeatV1Driver.convert_objs(
obj['events'], HeatV1Driver.events_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.heat, action, action_args)
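# A minimal standalone sketch (no congress imports; sample data is
# hypothetical) of what events_translator plus its child
# events_links_translator produce: scalar event fields become one 'events'
# row, and each entry of the in-list 'links' field becomes an 'events.links'
# row keyed by the parent 'id'.
def flatten_event(event):
    scalar_fields = ('id', 'physical_resource_id', 'logical_resource_id',
                     'stack_id', 'resource_name', 'event_time',
                     'resource_status', 'resource_status_reason')
    event_row = tuple(event.get(f) for f in scalar_fields)
    link_rows = [(event['id'], link.get('href'), link.get('rel'))
                 for link in event.get('links', [])]
    return event_row, link_rows

event_row, link_rows = flatten_event(
    {'id': 'ev-1', 'stack_id': 'st-1', 'resource_name': 'server',
     'event_time': '2017-01-01T00:00:00',
     'resource_status': 'CREATE_COMPLETE',
     'links': [{'href': 'http://heat-api/events/ev-1', 'rel': 'self'}]})
# link_rows == [('ev-1', 'http://heat-api/events/ev-1', 'self')]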


@@ -1,221 +0,0 @@
# Copyright (c) 2015 Intel Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from ironicclient import client
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class IronicDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
CHASSISES = "chassises"
NODES = "nodes"
NODE_PROPERTIES = "node_properties"
PORTS = "ports"
DRIVERS = "drivers"
ACTIVE_HOSTS = "active_hosts"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['id']
except KeyError:
return str(x)
def safe_port_extra(x):
try:
return x['vif_port_id']
except KeyError:
return ""
chassises_translator = {
'translation-type': 'HDICT',
'table-name': CHASSISES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'uuid', 'col': 'id', 'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
nodes_translator = {
'translation-type': 'HDICT',
'table-name': NODES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'uuid', 'col': 'id',
'desc': '', 'translator': value_trans},
{'fieldname': 'chassis_uuid', 'desc': '',
'col': 'owner_chassis', 'translator': value_trans},
{'fieldname': 'power_state', 'desc': '',
'translator': value_trans},
{'fieldname': 'maintenance', 'desc': '',
'translator': value_trans},
{'fieldname': 'properties', 'desc': '',
'translator':
{'translation-type': 'HDICT',
'table-name': NODE_PROPERTIES,
'parent-key': 'id',
'parent-col-name': 'properties',
'selector-type': 'DICT_SELECTOR',
'in-list': False,
'field-translators':
({'fieldname': 'memory_mb',
'translator': value_trans},
{'fieldname': 'cpu_arch',
'translator': value_trans},
{'fieldname': 'local_gb',
'translator': value_trans},
{'fieldname': 'cpus',
'translator': value_trans})}},
{'fieldname': 'driver', 'translator': value_trans},
{'fieldname': 'instance_uuid', 'col': 'running_instance',
'translator': value_trans},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'provision_updated_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
ports_translator = {
'translation-type': 'HDICT',
'table-name': PORTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'uuid', 'col': 'id', 'translator': value_trans},
{'fieldname': 'node_uuid', 'col': 'owner_node',
'translator': value_trans},
{'fieldname': 'address', 'col': 'mac_address',
'translator': value_trans},
{'fieldname': 'extra', 'col': 'vif_port_id', 'translator':
{'translation-type': 'VALUE',
'extract-fn': safe_port_extra}},
{'fieldname': 'created_at', 'translator': value_trans},
{'fieldname': 'updated_at', 'translator': value_trans})}
drivers_translator = {
'translation-type': 'HDICT',
'table-name': DRIVERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'hosts', 'translator':
{'translation-type': 'LIST',
'table-name': ACTIVE_HOSTS,
'parent-key': 'name',
'parent-col-name': 'name',
'val-col': 'hosts',
'translator':
{'translation-type': 'VALUE'}}})}
TRANSLATORS = [chassises_translator, nodes_translator, ports_translator,
drivers_translator]
def __init__(self, name='', args=None):
super(IronicDriver, self).__init__(name, args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = self.get_ironic_credentials(args)
session = ds_utils.get_keystone_session(self.creds)
self.ironic_client = client.get_client(
api_version=self.creds.get('api_version', '1'), session=session)
self.add_executable_client_methods(self.ironic_client,
'ironicclient.v1.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'ironic'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack bare metal aka ironic.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def get_ironic_credentials(self, creds):
d = {}
d['api_version'] = '1'
d['insecure'] = False
# save a copy to renew auth token
d['username'] = creds['username']
d['password'] = creds['password']
d['auth_url'] = creds['auth_url']
d['tenant_name'] = creds['tenant_name']
# ironicclient.get_client() uses different names
d['os_username'] = creds['username']
d['os_password'] = creds['password']
d['os_auth_url'] = creds['auth_url']
d['os_tenant_name'] = creds['tenant_name']
return d
def initialize_update_methods(self):
chassises_method = lambda: self._translate_chassises(
self.ironic_client.chassis.list(detail=True, limit=0))
self.add_update_method(chassises_method, self.chassises_translator)
nodes_method = lambda: self._translate_nodes(
self.ironic_client.node.list(detail=True, limit=0))
self.add_update_method(nodes_method, self.nodes_translator)
ports_method = lambda: self._translate_ports(
self.ironic_client.port.list(detail=True, limit=0))
self.add_update_method(ports_method, self.ports_translator)
drivers_method = lambda: self._translate_drivers(
self.ironic_client.driver.list())
self.add_update_method(drivers_method, self.drivers_translator)
@ds_utils.update_state_on_changed(CHASSISES)
def _translate_chassises(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.chassises_translator)
return row_data
@ds_utils.update_state_on_changed(NODES)
def _translate_nodes(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.nodes_translator)
return row_data
@ds_utils.update_state_on_changed(PORTS)
def _translate_ports(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.ports_translator)
return row_data
@ds_utils.update_state_on_changed(DRIVERS)
def _translate_drivers(self, obj):
row_data = IronicDriver.convert_objs(obj,
IronicDriver.drivers_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.ironic_client, action, action_args)
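# A hedged sketch (hypothetical sample data, no congress imports) of the
# flattening encoded by drivers_translator: the nested LIST translator turns
# one driver record into a 'drivers' row plus one 'active_hosts' row per
# host, keyed by the parent 'name' column.
def flatten_driver(driver):
    driver_row = (driver['name'],)
    host_rows = [(driver['name'], host) for host in driver.get('hosts', [])]
    return driver_row, host_rows

assert flatten_driver({'name': 'pxe_ipmitool',
                       'hosts': ['cond-1', 'cond-2']}) == (
    ('pxe_ipmitool',),
    [('pxe_ipmitool', 'cond-1'), ('pxe_ipmitool', 'cond-2')])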


@@ -1,136 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import keystoneclient.v2_0.client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class KeystoneDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
# Table names
USERS = "users"
ROLES = "roles"
TENANTS = "tenants"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
users_translator = {
'translation-type': 'HDICT',
'table-name': USERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'username', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'tenantId', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'email', 'translator': value_trans})}
roles_translator = {
'translation-type': 'HDICT',
'table-name': ROLES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans})}
tenants_translator = {
'translation-type': 'HDICT',
'table-name': TENANTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'enabled', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans})}
TRANSLATORS = [users_translator, roles_translator, tenants_translator]
def __init__(self, name='', args=None):
super(KeystoneDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = self.get_keystone_credentials_v2(args)
self.client = keystoneclient.v2_0.client.Client(**self.creds)
self.add_executable_client_methods(self.client,
'keystoneclient.v2_0.client')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'keystone'
result['description'] = ('Datasource driver that interfaces with '
'keystone.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def get_keystone_credentials_v2(self, args):
creds = args
d = {}
d['version'] = '2'
d['username'] = creds['username']
d['password'] = creds['password']
d['auth_url'] = creds['auth_url']
d['tenant_name'] = creds['tenant_name']
return d
def initialize_update_methods(self):
users_method = lambda: self._translate_users(self.client.users.list())
self.add_update_method(users_method, self.users_translator)
roles_method = lambda: self._translate_roles(self.client.roles.list())
self.add_update_method(roles_method, self.roles_translator)
tenants_method = lambda: self._translate_tenants(
self.client.tenants.list())
self.add_update_method(tenants_method, self.tenants_translator)
@ds_utils.update_state_on_changed(USERS)
def _translate_users(self, obj):
row_data = KeystoneDriver.convert_objs(obj,
KeystoneDriver.users_translator)
return row_data
@ds_utils.update_state_on_changed(ROLES)
def _translate_roles(self, obj):
row_data = KeystoneDriver.convert_objs(obj,
KeystoneDriver.roles_translator)
return row_data
@ds_utils.update_state_on_changed(TENANTS)
def _translate_tenants(self, obj):
row_data = KeystoneDriver.convert_objs(
obj, KeystoneDriver.tenants_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.client, action, action_args)
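# A usage sketch of the credential filtering done by
# get_keystone_credentials_v2 above: only the four keys the v2 client needs
# survive; everything else in args is ignored. Values here are placeholders,
# not real credentials.
args = {'username': 'admin', 'password': 'secret',
        'auth_url': 'http://keystone:5000/v2.0', 'tenant_name': 'demo',
        'region': 'ignored-by-this-driver'}
creds = {k: args[k]
         for k in ('username', 'password', 'auth_url', 'tenant_name')}
creds['version'] = '2'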


@@ -1,167 +0,0 @@
# Copyright (c) 2016 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from keystoneclient.v3 import client
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
class KeystoneV3Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
# Table names
USERS = "users"
ROLES = "roles"
PROJECTS = "projects"
DOMAINS = "domains"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
users_translator = {
'translation-type': 'HDICT',
'table-name': USERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'The ID for the user.',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'username, unique within domain',
'translator': value_trans},
{'fieldname': 'enabled', 'desc': 'user is enabled or not',
'translator': value_trans},
{'fieldname': 'default_project_id',
'desc': 'ID of the default project for the user',
'translator': value_trans},
{'fieldname': 'domain_id',
'desc': 'The ID of the domain for the user.',
'translator': value_trans})}
roles_translator = {
'translation-type': 'HDICT',
'table-name': ROLES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'role ID', 'translator': value_trans},
{'fieldname': 'name', 'desc': 'role name',
'translator': value_trans})}
projects_translator = {
'translation-type': 'HDICT',
'table-name': PROJECTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'enabled', 'desc': 'project is enabled or not',
'translator': value_trans},
{'fieldname': 'description', 'desc': 'project description',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'project name',
'translator': value_trans},
{'fieldname': 'domain_id',
'desc': 'The ID of the domain for the project',
'translator': value_trans},
{'fieldname': 'id', 'desc': 'ID for the project',
'translator': value_trans})}
domains_translator = {
'translation-type': 'HDICT',
'table-name': DOMAINS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'enabled', 'desc': 'domain is enabled or disabled',
'translator': value_trans},
{'fieldname': 'description', 'desc': 'domain description',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'domain name',
'translator': value_trans},
{'fieldname': 'id', 'desc': 'domain ID',
'translator': value_trans})}
TRANSLATORS = [users_translator, roles_translator, projects_translator,
domains_translator]
def __init__(self, name='', args=None):
super(KeystoneV3Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(args)
self.client = client.Client(session=session)
self.add_executable_client_methods(self.client,
'keystoneclient.v3.client')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'keystonev3'
result['description'] = ('Datasource driver that interfaces with '
'keystone.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
users_method = lambda: self._translate_users(self.client.users.list())
self.add_update_method(users_method, self.users_translator)
roles_method = lambda: self._translate_roles(self.client.roles.list())
self.add_update_method(roles_method, self.roles_translator)
projects_method = lambda: self._translate_projects(
self.client.projects.list())
self.add_update_method(projects_method, self.projects_translator)
domains_method = lambda: self._translate_domains(
self.client.domains.list())
self.add_update_method(domains_method, self.domains_translator)
@ds_utils.update_state_on_changed(USERS)
def _translate_users(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.users_translator)
return row_data
@ds_utils.update_state_on_changed(ROLES)
def _translate_roles(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.roles_translator)
return row_data
@ds_utils.update_state_on_changed(PROJECTS)
def _translate_projects(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.projects_translator)
return row_data
@ds_utils.update_state_on_changed(DOMAINS)
def _translate_domains(self, obj):
row_data = KeystoneV3Driver.convert_objs(
obj, KeystoneV3Driver.domains_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.client, action, action_args)
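# A standalone sketch of the execute() dispatch pattern shared by these
# drivers: prefer a method defined on the driver itself, otherwise forward
# the named call to the service client. The Dispatcher class and its client
# are hypothetical stand-ins, not congress APIs.
class Dispatcher(object):
    def __init__(self, client):
        self.client = client

    def execute(self, action, action_args):
        func = getattr(self, action, None)
        if callable(func):
            func(action_args)
        else:
            getattr(self.client, action)(**action_args)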


@@ -1,165 +0,0 @@
# Copyright (c) 2015 Cisco.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import datetime
import keystoneclient.v3.client as ksclient
from monascaclient import client as monasca_client
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
# TODO(thinrichs): figure out how to move even more of this boilerplate
# into DataSourceDriver. E.g. change all the classes to Driver instead of
# NeutronDriver, CeilometerDriver, etc. and move the d6instantiate function
# to DataSourceDriver.
class MonascaDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
METRICS = "metrics"
DIMENSIONS = "dimensions"
STATISTICS = "statistics"
DATA = "statistics.data"
# TODO(fabiog): add events and logs when fully supported in Monasca
# EVENTS = "events"
# LOGS = "logs"
value_trans = {'translation-type': 'VALUE'}
metric_translator = {
'translation-type': 'HDICT',
'table-name': METRICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'dimensions',
'translator': {'translation-type': 'VDICT',
'table-name': DIMENSIONS,
'id-col': 'id',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}})
}
statistics_translator = {
'translation-type': 'HDICT',
'table-name': STATISTICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'statistics',
'translator': {'translation-type': 'LIST',
'table-name': DATA,
'id-col': 'name',
'val-col': 'value_col',
'translator': value_trans}})
}
TRANSLATORS = [metric_translator, statistics_translator]
def __init__(self, name='', args=None):
super(MonascaDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
if not self.creds.get('project_name'):
self.creds['project_name'] = self.creds['tenant_name']
if not self.creds.get('poll_time'):
# set default polling time to 1hr
self.creds['poll_time'] = 3600
# Monasca uses Keystone V3
self.creds['auth_url'] = self.creds['auth_url'].replace("v2.0", "v3")
self.keystone = ksclient.Client(**self.creds)
self.creds['token'] = self.keystone.auth_token
if not self.creds.get('endpoint'):
# if the endpoint is not defined, retrieve it from the keystone catalog
self.creds['endpoint'] = self.keystone.service_catalog.url_for(
service_type='monitoring', endpoint_type='publicURL')
self.monasca = monasca_client.Client('2_0', **self.creds)
self.add_executable_client_methods(self.monasca, 'monascaclient.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'monasca'
result['description'] = ('Datasource driver that interfaces with '
'monasca.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
metrics_method = lambda: self._translate_metric(
self.monasca.metrics.list())
self.add_update_method(metrics_method, self.metric_translator)
statistics_method = self.update_statistics
self.add_update_method(statistics_method, self.statistics_translator)
def update_statistics(self):
today = datetime.datetime.now()
yesterday = datetime.timedelta(hours=24)
start_from = datetime.datetime.isoformat(today-yesterday)
for metric in self.monasca.metrics.list_names():
LOG.debug("Monasca statistics for metric %s", metric['name'])
_query_args = dict(
start_time=start_from,
name=metric['name'],
statistics='avg',
period=int(self.creds['poll_time']),
merge_metrics='true')
statistics = self.monasca.metrics.list_statistics(
**_query_args)
self._translate_statistics(statistics)
@ds_utils.update_state_on_changed(METRICS)
def _translate_metric(self, obj):
"""Translate the metrics represented by OBJ into tables."""
LOG.debug("METRIC: %s", str(obj))
row_data = MonascaDriver.convert_objs(obj,
self.metric_translator)
return row_data
@ds_utils.update_state_on_changed(STATISTICS)
def _translate_statistics(self, obj):
"""Translate the metrics represented by OBJ into tables."""
LOG.debug("STATISTICS: %s", str(obj))
row_data = MonascaDriver.convert_objs(obj,
self.statistics_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.monasca, action, action_args)
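# A quick standalone check of the timestamp arithmetic used by
# update_statistics above (no Monasca access required): statistics are
# queried from 24 hours before "now", passed as an ISO-8601 string.
import datetime

now = datetime.datetime.now()
start_from = datetime.datetime.isoformat(now - datetime.timedelta(hours=24))
# e.g. '2017-09-11T15:37:44.123456', accepted by Monasca as 'start_time'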


@@ -1,150 +0,0 @@
# Copyright (c) 2015 Hewlett-Packard. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
logger = logging.getLogger(__name__)
class IOMuranoObject(object):
name = 'io.murano.Object'
@classmethod
def is_class_type(cls, name):
    return name == cls.name
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
return [cls.name]
class IOMuranoEnvironment(IOMuranoObject):
name = 'io.murano.Environment'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesInstance(IOMuranoObject):
name = 'io.murano.resources.Instance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesLinuxInstance(IOMuranoResourcesInstance):
name = 'io.murano.resources.LinuxInstance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesInstance.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesLinuxMuranoInstance(IOMuranoResourcesLinuxInstance):
name = 'io.murano.resources.LinuxMuranoInstance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesLinuxInstance.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesWindowsInstance(IOMuranoResourcesInstance):
name = 'io.murano.resources.WindowsInstance'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesInstance.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesNetwork(IOMuranoObject):
name = 'io.murano.resources.Network'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoResourcesNeutronNetwork(IOMuranoResourcesNetwork):
name = 'io.murano.resources.NeutronNetwork'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoResourcesNetwork.get_parent_types()
types.append(cls.name)
return types
class IOMuranoApplication(IOMuranoObject):
name = 'io.murano.Application'
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoObject.get_parent_types()
types.append(cls.name)
return types
class IOMuranoApps(IOMuranoApplication):
# This is a common class for all applications.
# name should be set to the actual app type before use
# (e.g. io.murano.apps.apache.ApacheHttpServer)
name = None
@classmethod
def get_parent_types(cls, class_name=None):
if class_name and not cls.is_class_type(class_name):
return []
types = IOMuranoApplication.get_parent_types()
types.append(cls.name)
return types
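# Illustration, assuming the classes above are used as-is: get_parent_types
# on a leaf class recurses through each ancestor's implementation,
# accumulating every ancestor type plus the class's own name.
types = IOMuranoResourcesLinuxMuranoInstance.get_parent_types()
assert types == ['io.murano.Object',
                 'io.murano.resources.Instance',
                 'io.murano.resources.LinuxInstance',
                 'io.murano.resources.LinuxMuranoInstance']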


@@ -1,510 +0,0 @@
# Copyright (c) 2015 Hewlett-Packard. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import inspect
import muranoclient.client
from muranoclient.common import exceptions as murano_exceptions
from oslo_log import log as logging
from oslo_utils import uuidutils
import six
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils
from congress.datasources import murano_classes
from congress import utils
logger = logging.getLogger(__name__)
class MuranoDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
OBJECTS = "objects"
PARENT_TYPES = "parent_types"
PROPERTIES = "properties"
RELATIONSHIPS = "relationships"
CONNECTED = "connected"
STATES = "states"
ACTIONS = "actions"
UNUSED_PKG_PROPERTIES = ['id', 'owner_id', 'description']
UNUSED_ENV_PROPERTIES = ['id', 'tenant_id']
APPS_TYPE_PREFIXES = ['io.murano.apps', 'io.murano.databases']
def __init__(self, name='', args=None):
super(MuranoDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = datasource_utils.get_keystone_session(self.creds)
client_version = "1"
self.murano_client = muranoclient.client.Client(
client_version, session=session, endpoint_type='publicURL',
service_type='application-catalog')
self.add_executable_client_methods(
self.murano_client,
'muranoclient.v1.')
logger.debug("Successfully created murano_client")
self.action_call_returns = []
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'murano'
result['description'] = ('Datasource driver that interfaces with '
'murano')
result['config'] = datasource_utils.get_openstack_required_config()
result['secret'] = ['password']
return result
def update_from_datasource(self):
"""Called when it is time to pull new data from this datasource.
Sets self.state[tablename] = <set of tuples of strings/numbers>
for every tablename exported by this datasource.
"""
self.state[self.STATES] = set()
self.state[self.OBJECTS] = set()
self.state[self.PROPERTIES] = set()
self.state[self.PARENT_TYPES] = set()
self.state[self.RELATIONSHIPS] = set()
self.state[self.CONNECTED] = set()
self.state[self.ACTIONS] = dict()
# Workaround for 401 error issue
try:
# Moved _translate_packages ahead of _translate_services so that
# _translate_services can use the properties table it populates
logger.debug("Murano grabbing packages")
packages = self.murano_client.packages.list()
self._translate_packages(packages)
logger.debug("Murano grabbing environments")
environments = self.murano_client.environments.list()
self._translate_environments(environments)
self._translate_services(environments)
self._translate_deployments(environments)
self._translate_connected()
except murano_exceptions.HTTPException:
raise
@classmethod
def get_schema(cls):
"""Returns a dictionary of table schema.
The dictionary mapping tablenames to the list of column names
for that table. Both tablenames and columnnames are strings.
"""
d = {}
d[cls.OBJECTS] = ('object_id', 'owner_id', 'type')
# parent_types include not only the type of object's immediate
# parent but also all of its ancestors and its own type. The
# additional info helps when writing better datalog rules.
d[cls.PARENT_TYPES] = ('id', 'parent_type')
d[cls.PROPERTIES] = ('owner_id', 'name', 'value')
d[cls.RELATIONSHIPS] = ('source_id', 'target_id', 'name')
d[cls.CONNECTED] = ('source_id', 'target_id')
d[cls.STATES] = ('id', 'state')
return d
def _translate_environments(self, environments):
"""Translate the environments into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from environments
"""
logger.debug("_translate_environments: %s", environments)
if not environments:
return
self.state[self.STATES] = set()
if self.OBJECTS not in self.state:
self.state[self.OBJECTS] = set()
if self.PROPERTIES not in self.state:
self.state[self.PROPERTIES] = set()
if self.PARENT_TYPES not in self.state:
self.state[self.PARENT_TYPES] = set()
if self.RELATIONSHIPS not in self.state:
self.state[self.RELATIONSHIPS] = set()
if self.CONNECTED not in self.state:
self.state[self.CONNECTED] = set()
env_type = 'io.murano.Environment'
for env in environments:
self.state[self.OBJECTS].add(
(env.id, env.tenant_id, env_type))
self.state[self.STATES].add((env.id, env.status))
parent_types = self._get_parent_types(env_type)
self._add_parent_types(env.id, parent_types)
for key, value in env.to_dict().items():
if key in self.UNUSED_ENV_PROPERTIES:
continue
self._add_properties(env.id, key, value)
def _translate_services(self, environments):
"""Translate the environment services into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from services
"""
logger.debug("Murano grabbing environments services")
if not environments:
return
for env in environments:
services = self.murano_client.services.list(env.id)
self._translate_environment_services(services, env.id)
def _translate_environment_services(self, services, env_id):
"""Translate the environment services into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from services
"""
# clean actions for given environment
if self.ACTIONS not in self.state:
self.state[self.ACTIONS] = dict()
env_actions = self.state[self.ACTIONS][env_id] = set()
if not services:
return
for s in services:
s_dict = s.to_dict()
s_id = s_dict['?']['id']
s_type = s_dict['?']['type']
self.state[self.OBJECTS].add((s_id, env_id, s_type))
for key, value in s_dict.items():
if key in ['instance', '?']:
continue
self._add_properties(s_id, key, value)
self._add_relationships(s_id, key, value)
parent_types = self._get_parent_types(s_type)
self._add_parent_types(s_id, parent_types)
self._add_relationships(env_id, 'services', s_id)
self._translate_service_action(s_dict, env_actions)
if 'instance' not in s_dict:
continue
# populate service instance
si_dict = s.instance
si_id = si_dict['?']['id']
si_type = si_dict['?']['type']
self.state[self.OBJECTS].add((si_id, s_id, si_type))
for key, value in si_dict.items():
if key in ['?']:
continue
self._add_properties(si_id, key, value)
if key not in ['image']:
# there's no murano image object in the environment,
# therefore glance 'image' relationship is irrelevant
# at this point.
self._add_relationships(si_id, key, value)
# There's a relationship between the service and instance
self._add_relationships(s_id, 'instance', si_id)
parent_types = self._get_parent_types(si_type)
self._add_parent_types(si_id, parent_types)
self._translate_service_action(si_dict, env_actions)
def _translate_service_action(self, obj_dict, env_actions):
"""Translates environment's object actions to env_actions structure.
env_actions: [(obj_id, action_id, action_name, enabled)]
:param obj_dict: object dictionary
:param env_actions: set of environment actions
"""
obj_id = obj_dict['?']['id']
if '_actions' in obj_dict['?']:
o_actions = obj_dict['?']['_actions']
if not o_actions:
return
for action_id, action_value in o_actions.items():
action_name = action_value.get('name', '')
enabled = action_value.get('enabled', False)
action = (obj_id, action_id, action_name, enabled)
env_actions.add(action)
# TODO(tranldt): support action arguments.
# If action arguments are included in '_actions',
# they can be populated into tables.
def _translate_deployments(self, environments):
"""Translate the environment deployments into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from deployments
"""
if not environments:
return
for env in environments:
deployments = self.murano_client.deployments.list(env.id)
self._translate_environment_deployments(deployments, env.id)
def _translate_environment_deployments(self, deployments, env_id):
"""Translate the environment deployments into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from deployments
"""
if not deployments:
return
for d in deployments:
if 'defaultNetworks' not in d.description:
continue
default_networks = d.description['defaultNetworks']
net_id = None
if 'environment' in default_networks:
net_id = default_networks['environment']['?']['id']
net_type = default_networks['environment']['?']['type']
self.state[self.OBJECTS].add((net_id, env_id, net_type))
parent_types = self._get_parent_types(net_type)
self._add_parent_types(net_id, parent_types)
for key, value in default_networks['environment'].items():
if key in ['?']:
continue
self._add_properties(net_id, key, value)
if not net_id:
continue
self._add_relationships(env_id, 'defaultNetworks', net_id)
for key, value in default_networks.items():
if key in ['environment']:
# data from environment already populated
continue
new_key = 'defaultNetworks.' + key
self._add_properties(net_id, new_key, value)
# services from deployment are not of interest because the same
# info is obtained from services API
def _translate_packages(self, packages):
"""Translate the packages into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from packages/applications
"""
# packages is a generator type
if not packages:
return
if self.OBJECTS not in self.state:
self.state[self.OBJECTS] = set()
if self.PROPERTIES not in self.state:
self.state[self.PROPERTIES] = set()
for pkg in packages:
logger.debug("pkg=%s", pkg.to_dict())
pkg_type = pkg.type
if pkg.type == 'Application':
pkg_type = 'io.murano.Application'
self.state[self.OBJECTS].add((pkg.id, pkg.owner_id, pkg_type))
for key, value in pkg.to_dict().items():
if key in self.UNUSED_PKG_PROPERTIES:
continue
self._add_properties(pkg.id, key, value)
def _add_properties(self, obj_id, key, value):
"""Add a set of (obj_id, key, value) to properties table.
:param obj_id: uuid of object
:param key: property name. For the case value is a list, the
same key is used for multiple values.
:param value: property value. If value is a dict, the nested
properties will be mapped using dot notation.
"""
if value is None or value == '':
return
if isinstance(value, dict):
for k, v in value.items():
new_key = key + "." + k
self._add_properties(obj_id, new_key, v)
elif isinstance(value, list):
if len(value) == 0:
return
for item in value:
self.state[self.PROPERTIES].add(
(obj_id, key, utils.value_to_congress(item)))
else:
self.state[self.PROPERTIES].add(
(obj_id, key, utils.value_to_congress(value)))
def _add_relationships(self, obj_id, key, value):
"""Add a set of (obj_id, value, key) to relationships table.
:param obj_id: source uuid
:param key: relationship name
:param value: target uuid
"""
if (not isinstance(value, six.string_types) or
not uuidutils.is_uuid_like(value)):
return
logger.debug("Relationship: source = %s, target = %s, rel_name = %s"
% (obj_id, value, key))
self.state[self.RELATIONSHIPS].add((obj_id, value, key))
def _transitive_closure(self):
"""Computes transitive closure on a directed graph.
In other words computes reachability within the graph.
E.g. {(1, 2), (2, 3)} -> {(1, 2), (2, 3), (1, 3)}
(1, 3) was added because there is path from 1 to 3 in the graph.
"""
closure = self.state[self.CONNECTED]
while True:
# Attempts to discover new transitive relations
# by joining 2 subsequent relations/edges within the graph.
new_relations = {(x, w) for x, y in closure
for q, w in closure if q == y}
# Creates union with already discovered relations.
closure_until_now = closure | new_relations
# If no new relations were discovered in last cycle
# the computation is finished.
if closure_until_now == closure:
self.state[self.CONNECTED] = closure
break
closure = closure_until_now
def _add_connected(self, source_id, target_id):
"""Looks up the target_id in objects and add links to connected table.
Adds sets of (source_id, target_id) to connected table along
with its indirections.
:param source_id: source uuid
:param target_id: target uuid
"""
for row in self.state[self.OBJECTS]:
if row[1] == target_id:
self.state[self.CONNECTED].add((row[1], row[0]))
self.state[self.CONNECTED].add((source_id, row[0]))
self.state[self.CONNECTED].add((source_id, target_id))
def _translate_connected(self):
"""Translates relationships table into connected table."""
for row in self.state[self.RELATIONSHIPS]:
self._add_connected(row[0], row[1])
self._transitive_closure()
def _add_parent_types(self, obj_id, parent_types):
"""Add sets of (obj_id, parent_type) to parent_types table.
:param obj_id: uuid of object
:param parent_types: list of parent type string
"""
if parent_types:
for p_type in parent_types:
self.state[self.PARENT_TYPES].add((obj_id, p_type))
def _get_package_type(self, class_name):
"""Determine whether obj_type is an Application or Library.
:param class_name: <string> service/application class name
e.g. io.murano.apps.linux.Telnet.
:return: - package type (e.g. 'Application') if found.
- None if no package type found.
"""
pkg_type = None
if self.PROPERTIES in self.state:
idx_uuid = 0
idx_value = 2
uuid = None
for row in self.state[self.PROPERTIES]:
if 'class_definitions' in row and class_name in row:
uuid = row[idx_uuid]
break
if uuid:
for row in self.state[self.PROPERTIES]:
if 'type' in row and uuid == row[idx_uuid]:
pkg_type = row[idx_value]
# If the package is removed after deployed, its properties
# are not known and so above search will fail. In that case
# let's check for class_name prefix as the last resort.
if not pkg_type:
for prefix in self.APPS_TYPE_PREFIXES:
if prefix in class_name:
pkg_type = 'Application'
break
return pkg_type
def _get_parent_types(self, obj_type):
"""Get class types of all OBJ_TYPE's parents including itself.
Look up the hierarchy of OBJ_TYPE and return types of all its
ancestor including its own type.
:param obj_type: <string>
"""
class_types = []
g = inspect.getmembers(murano_classes, inspect.isclass)
for name, cls in g:
logger.debug("%s: %s" % (name, cls))
if (cls is murano_classes.IOMuranoApps and
self._get_package_type(obj_type) == 'Application'):
cls.name = obj_type
if 'get_parent_types' in dir(cls):
class_types = cls.get_parent_types(obj_type)
if class_types:
break
return class_types
def _call_murano_action(self, environment_id, object_id, action_name):
"""Invokes action of object in Murano environment.
:param environment_id: uuid
:param object_id: uuid
:param action_name: string
"""
# get action id using object_id, env_id and action name
logger.debug("Requested Murano action invoke %s on %s in %s",
action_name, object_id, environment_id)
if (not self.state[self.ACTIONS] or
environment_id not in self.state[self.ACTIONS]):
logger.warning('Datasource "%s" found no actions for '
'environment "%s"', self.name, environment_id)
return
env_actions = self.state[self.ACTIONS][environment_id]
for env_action in env_actions:
ea_obj_id, ea_action_id, ea_action_name, ea_enabled = env_action
if (object_id == ea_obj_id and action_name == ea_action_name
and ea_enabled):
logger.debug("Invoking Murano action_id = %s, action_name %s",
ea_action_id, ea_action_name)
# TODO(tranldt): support action arguments
task_id = self.murano_client.actions.call(environment_id,
ea_action_id)
logger.debug("Murano action invoked %s - task id %s",
ea_action_id, task_id)
self.action_call_returns.append(task_id)
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
logger.info("%s:: executing %s on %s", self.name, action, action_args)
self.action_call_returns = []
positional_args = action_args.get('positional', [])
logger.debug('Processing action execution: action = %s, '
'positional args = %s', action, positional_args)
try:
env_id = positional_args[0]
obj_id = positional_args[1]
action_name = positional_args[2]
self._call_murano_action(env_id, obj_id, action_name)
except Exception as e:
logger.exception(str(e))
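# A standalone sketch of the fixed-point computation _transitive_closure
# performs on the 'connected' relation (pure Python, no congress state):
def transitive_closure(edges):
    closure = set(edges)
    while True:
        new_edges = {(x, w) for x, y in closure for q, w in closure if q == y}
        if new_edges <= closure:
            return closure
        closure |= new_edges

assert transitive_closure({(1, 2), (2, 3)}) == {(1, 2), (2, 3), (1, 3)}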


@@ -1,338 +0,0 @@
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import neutronclient.v2_0.client
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils
LOG = logging.getLogger(__name__)
class NeutronDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
NETWORKS = "networks"
NETWORKS_SUBNETS = "networks.subnets"
PORTS = "ports"
PORTS_ADDR_PAIRS = "ports.address_pairs"
PORTS_SECURITY_GROUPS = "ports.security_groups"
PORTS_BINDING_CAPABILITIES = "ports.binding_capabilities"
PORTS_FIXED_IPS = "ports.fixed_ips"
PORTS_FIXED_IPS_GROUPS = "ports.fixed_ips_groups"
PORTS_EXTRA_DHCP_OPTS = "ports.extra_dhcp_opts"
ROUTERS = "routers"
ROUTERS_EXTERNAL_GATEWAYS = "routers.external_gateways"
SECURITY_GROUPS = "security_groups"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
networks_translator = {
'translation-type': 'HDICT',
'table-name': NETWORKS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'subnets', 'col': 'subnet_group_id',
'translator': {'translation-type': 'LIST',
'table-name': NETWORKS_SUBNETS,
'id-col': 'subnet_group_id',
'val-col': 'subnet',
'translator': value_trans}},
{'fieldname': 'provider:physical_network',
'translator': value_trans},
{'fieldname': 'admin_state_up', 'translator': value_trans},
{'fieldname': 'tenant_id', 'translator': value_trans},
{'fieldname': 'provider:network_type', 'translator': value_trans},
{'fieldname': 'router:external', 'translator': value_trans},
{'fieldname': 'shared', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'provider:segmentation_id',
'translator': value_trans})}
ports_translator = {
'translation-type': 'HDICT',
'table-name': PORTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'allowed_address_pairs',
'col': 'allowed_address_pairs_id',
'translator': {'translation-type': 'LIST',
'table-name': PORTS_ADDR_PAIRS,
'id-col': 'allowed_address_pairs_id',
'val-col': 'address',
'translator': value_trans}},
{'fieldname': 'security_groups',
'col': 'security_groups_id',
'translator': {'translation-type': 'LIST',
'table-name': PORTS_SECURITY_GROUPS,
'id-col': 'security_groups_id',
'val-col': 'security_group_id',
'translator': value_trans}},
{'fieldname': 'extra_dhcp_opts',
'col': 'extra_dhcp_opt_group_id',
'translator': {'translation-type': 'LIST',
'table-name': PORTS_EXTRA_DHCP_OPTS,
'id-col': 'extra_dhcp_opt_group_id',
'val-col': 'dhcp_opt',
'translator': value_trans}},
{'fieldname': 'binding:capabilities',
'col': 'binding:capabilities_id',
'translator': {'translation-type': 'VDICT',
'table-name': PORTS_BINDING_CAPABILITIES,
'id-col': 'binding:capabilities_id',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'admin_state_up', 'translator': value_trans},
{'fieldname': 'network_id', 'translator': value_trans},
{'fieldname': 'tenant_id', 'translator': value_trans},
{'fieldname': 'binding:vif_type', 'translator': value_trans},
{'fieldname': 'device_owner', 'translator': value_trans},
{'fieldname': 'mac_address', 'translator': value_trans},
{'fieldname': 'fixed_ips',
'col': 'fixed_ips',
'translator': {'translation-type': 'LIST',
'table-name': PORTS_FIXED_IPS_GROUPS,
'id-col': 'fixed_ips_group_id',
'val-col': 'fixed_ip_id',
'translator': {'translation-type': 'VDICT',
'table-name': PORTS_FIXED_IPS,
'id-col': 'fixed_ip_id',
'key-col': 'key',
'val-col': 'value',
'translator': value_trans}}},
{'fieldname': 'id', 'translator': value_trans},
{'fieldname': 'device_id', 'translator': value_trans},
{'fieldname': 'binding:host_id', 'translator': value_trans})}
routers_translator = {
'translation-type': 'HDICT',
'table-name': ROUTERS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'external_gateway_info',
'translator': {'translation-type': 'VDICT',
'table-name': ROUTERS_EXTERNAL_GATEWAYS,
'id-col': 'external_gateway_info',
'key-col': 'key', 'val-col': 'value',
'translator': value_trans}},
{'fieldname': 'networks', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'admin_state_up', 'translator': value_trans},
{'fieldname': 'tenant_id', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans})}
security_groups_translator = {
'translation-type': 'HDICT',
'table-name': SECURITY_GROUPS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'tenant_id', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'description', 'translator': value_trans},
{'fieldname': 'id', 'translator': value_trans})}
TRANSLATORS = [networks_translator, ports_translator, routers_translator,
security_groups_translator]
def __init__(self, name='', args=None):
super(NeutronDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = self.get_neutron_credentials(args)
self.neutron = neutronclient.v2_0.client.Client(**self.creds)
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'neutron'
result['description'] = ('Do not use this driver; it is deprecated')
result['config'] = datasource_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def get_neutron_credentials(self, creds):
d = {}
d['username'] = creds['username']
d['tenant_name'] = creds['tenant_name']
d['password'] = creds['password']
d['auth_url'] = creds['auth_url']
return d
def initialize_update_methods(self):
networks_method = lambda: self._translate_networks(
self.neutron.list_networks())
self.add_update_method(networks_method, self.networks_translator)
ports_method = lambda: self._translate_ports(self.neutron.list_ports())
self.add_update_method(ports_method, self.ports_translator)
routers_method = lambda: self._translate_routers(
self.neutron.list_routers())
self.add_update_method(routers_method, self.routers_translator)
security_method = lambda: self._translate_security_groups(
self.neutron.list_security_groups())
self.add_update_method(security_method,
self.security_groups_translator)
@datasource_utils.update_state_on_changed(NETWORKS)
def _translate_networks(self, obj):
"""Translate the networks represented by OBJ into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from OBJ: NETWORKS, NETWORKS_SUBNETS
"""
LOG.debug("NETWORKS: %s", dict(obj))
row_data = NeutronDriver.convert_objs(obj['networks'],
self.networks_translator)
return row_data
@datasource_utils.update_state_on_changed(PORTS)
def _translate_ports(self, obj):
"""Translate the ports represented by OBJ into tables.
Assigns self.state[tablename] for all those TABLENAMEs
generated from OBJ: PORTS, PORTS_ADDR_PAIRS,
PORTS_SECURITY_GROUPS, PORTS_BINDING_CAPABILITIES,
PORTS_FIXED_IPS, PORTS_FIXED_IPS_GROUPS,
PORTS_EXTRA_DHCP_OPTS.
"""
LOG.debug("PORTS: %s", obj)
row_data = NeutronDriver.convert_objs(obj['ports'],
self.ports_translator)
return row_data
@datasource_utils.update_state_on_changed(ROUTERS)
def _translate_routers(self, obj):
"""Translates the routers represented by OBJ into a single table.
Assigns self.state[ROUTERS] to that table.
"""
LOG.debug("ROUTERS: %s", dict(obj))
row_data = NeutronDriver.convert_objs(obj['routers'],
self.routers_translator)
return row_data
@datasource_utils.update_state_on_changed(SECURITY_GROUPS)
def _translate_security_groups(self, obj):
LOG.debug("SECURITY_GROUPS: %s", dict(obj))
row_data = NeutronDriver.convert_objs(obj['security_groups'],
self.security_groups_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.neutron, action, action_args)
# Sample Mapping
# Network :
# ========
#
# json
# ------
# {u'status': u'ACTIVE', u'subnets':
# [u'4cef03d0-1d02-40bb-8c99-2f442aac6ab0'],
# u'name':u'test-network', u'provider:physical_network': None,
# u'admin_state_up': True,
# u'tenant_id': u'570fe78a1dc54cffa053bd802984ede2',
# u'provider:network_type': u'gre',
# u'router:external': False, u'shared': False, u'id':
# u'240ff9df-df35-43ae-9df5-27fae87f2492',
# u'provider:segmentation_id': 4}
#
# tuple
# -----
#
# Networks : (u'ACTIVE', 'cdca5538-ae2d-11e3-92c1-bcee7bdf8d69',
# u'vova_network', None,
# True, u'570fe78a1dc54cffa053bd802984ede2', u'gre', 'False', 'False',
# u'1e3bc4fe-85c2-4b04-9b7f-ee40239787ef', 7)
#
# Networks and subnets
# ('cdcaa1a0-ae2d-11e3-92c1-bcee7bdf8d69',
# u'4cef03d0-1d02-40bb-8c99-2f442aac6ab0')
#
#
# Ports
# ======
# json
# ----
# {u'status': u'ACTIVE',
# u'binding:host_id': u'havana', u'name': u'',
# u'allowed_address_pairs': [],
# u'admin_state_up': True, u'network_id':
# u'240ff9df-df35-43ae-9df5-27fae87f2492',
# u'tenant_id': u'570fe78a1dc54cffa053bd802984ede2',
# u'extra_dhcp_opts': [],
# u'binding:vif_type': u'ovs', u'device_owner':
# u'network:router_interface',
# u'binding:capabilities': {u'port_filter': True},
# u'mac_address': u'fa:16:3e:ab:90:df',
# u'fixed_ips': [{u'subnet_id':
# u'4cef03d0-1d02-40bb-8c99-2f442aac6ab0',
# u'ip_address': u'90.0.0.1'}], u'id':
# u'0a2ce569-85a8-45ec-abb3-0d4b34ff69ba',u'security_groups': [],
# u'device_id': u'864e4acf-bf8e-4664-8cf7-ad5daa95681e'},
# tuples
# -------
# Ports [(u'ACTIVE', u'havana', u'',
# '6425751e-ae2c-11e3-bba1-bcee7bdf8d69', 'True',
# u'240ff9df-df35-43ae-9df5-27fae87f2492',
# u'570fe78a1dc54cffa053bd802984ede2',
# '642579e2-ae2c-11e3-bba1-bcee7bdf8d69', u'ovs',
# u'network:router_interface', '64257dac-ae2c-11e3-bba1-bcee7bdf8d69',
# u'fa:16:3e:ab:90:df',
# '64258126-ae2c-11e3-bba1-bcee7bdf8d69',
# u'0a2ce569-85a8-45ec-abb3-0d4b34ff69ba',
# '64258496-ae2c-11e3-bba1-bcee7bdf8d69',
# u'864e4acf-bf8e-4664-8cf7-ad5daa95681e')
#
# Ports and Address Pairs
# [('6425751e-ae2c-11e3-bba1-bcee7bdf8d69', '')
# Ports and Security Groups
# [('64258496-ae2c-11e3-bba1-bcee7bdf8d69', '')
# Ports and Binding Capabilities [
# ('64257dac-ae2c-11e3-bba1-bcee7bdf8d69',u'port_filter','True')
# Ports and Fixed IPs [('64258126-ae2c-11e3-bba1-bcee7bdf8d69',
# u'subnet_id',u'4cef03d0-1d02-40bb-8c99-2f442aac6ab0'),
# ('64258126-ae2c-11e3-bba1-bcee7bdf8d69', u'ip_address',
# u'90.0.0.1')
#
# Ports and Extra dhcp opts [
# ('642579e2-ae2c-11e3-bba1-bcee7bdf8d69', '')
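# A hedged sketch of the list flattening illustrated above (UUIDs are
# invented): the LIST translator replaces the 'subnets' list with a
# generated subnet_group_id in the networks row and emits one
# (subnet_group_id, subnet) row per element.
import uuid

def flatten_network_subnets(net):
    group_id = str(uuid.uuid4())
    network_row = (net['status'], group_id, net['name'], net['id'])
    subnet_rows = [(group_id, s) for s in net['subnets']]
    return network_row, subnet_rows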


@@ -1,468 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2014 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import neutronclient.v2_0.client
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class NeutronV2Driver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
NETWORKS = 'networks'
FIXED_IPS = 'fixed_ips'
SECURITY_GROUP_PORT_BINDINGS = 'security_group_port_bindings'
PORTS = 'ports'
ALLOCATION_POOLS = 'allocation_pools'
DNS_NAMESERVERS = 'dns_nameservers'
HOST_ROUTES = 'host_routes'
SUBNETS = 'subnets'
EXTERNAL_FIXED_IPS = 'external_fixed_ips'
EXTERNAL_GATEWAY_INFOS = 'external_gateway_infos'
ROUTERS = 'routers'
SECURITY_GROUP_RULES = 'security_group_rules'
SECURITY_GROUPS = 'security_groups'
FLOATING_IPS = 'floating_ips'
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
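# Illustrative note (not part of the original driver): an HDICT translator
# maps one API dict to one table row by selecting the listed fields in
# order. A minimal sketch against the floating_ips_translator below, with
# hypothetical values:
#   fip = {'id': 'f1', 'router_id': 'r1', 'tenant_id': 't1',
#          'floating_network_id': 'n1', 'fixed_ip_address': '10.0.0.5',
#          'floating_ip_address': '172.24.4.3', 'port_id': 'p1',
#          'status': 'ACTIVE'}
# yields the floating_ips row
#   ('f1', 'r1', 't1', 'n1', '10.0.0.5', '172.24.4.3', 'p1', 'ACTIVE')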
floating_ips_translator = {
'translation-type': 'HDICT',
'table-name': FLOATING_IPS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'The UUID of the floating IP address',
'translator': value_trans},
{'fieldname': 'router_id', 'desc': 'UUID of router',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'Tenant ID',
'translator': value_trans},
{'fieldname': 'floating_network_id',
'desc': 'The UUID of the network associated with floating IP',
'translator': value_trans},
{'fieldname': 'fixed_ip_address',
'desc': 'Fixed IP address associated with floating IP address',
'translator': value_trans},
{'fieldname': 'floating_ip_address',
'desc': 'The floating IP address', 'translator': value_trans},
{'fieldname': 'port_id', 'desc': 'UUID of port',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'The floating IP status',
'translator': value_trans})}
networks_translator = {
'translation-type': 'HDICT',
'table-name': NETWORKS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'Network ID',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'Tenant ID',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'Network name',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'Network status',
'translator': value_trans},
{'fieldname': 'admin_state_up',
'desc': 'Administrative state of the network (true/false)',
'translator': value_trans},
{'fieldname': 'shared',
'desc': 'Indicates if network is shared across all tenants',
'translator': value_trans})}
ports_fixed_ips_translator = {
'translation-type': 'HDICT',
'table-name': FIXED_IPS,
'parent-key': 'id',
'parent-col-name': 'port_id',
'parent-key-desc': 'UUID of Port',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'ip_address',
'desc': 'The IP addresses for the port',
'translator': value_trans},
{'fieldname': 'subnet_id',
'desc': 'The UUID of the subnet to which the port is attached',
'translator': value_trans})}
ports_security_groups_translator = {
'translation-type': 'LIST',
'table-name': SECURITY_GROUP_PORT_BINDINGS,
'parent-key': 'id',
'parent-col-name': 'port_id',
'parent-key-desc': 'UUID of port',
'val-col': 'security_group_id',
'val-col-desc': 'UUID of security group',
'translator': value_trans}
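# Illustrative note: a LIST translator such as the one above emits one child
# row per list element, keyed by the parent. A hypothetical port
#   {'id': 'p1', 'security_groups': ['sg1', 'sg2'], ...}
# yields rows ('p1', 'sg1') and ('p1', 'sg2') in
# security_group_port_bindings.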
ports_translator = {
'translation-type': 'HDICT',
'table-name': PORTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'UUID of port',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'tenant ID',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'port name',
'translator': value_trans},
{'fieldname': 'network_id', 'desc': 'UUID of attached network',
'translator': value_trans},
{'fieldname': 'mac_address', 'desc': 'MAC address of the port',
'translator': value_trans},
{'fieldname': 'admin_state_up',
'desc': 'Administrative state of the port',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'Port status',
'translator': value_trans},
{'fieldname': 'device_id',
'desc': 'The UUID of the device that uses this port',
'translator': value_trans},
{'fieldname': 'device_owner',
'desc': 'The entity that uses this port (e.g. network:router_interface)',
'translator': value_trans},
{'fieldname': 'fixed_ips',
'desc': 'The IP addresses for the port',
'translator': ports_fixed_ips_translator},
{'fieldname': 'security_groups',
'translator': ports_security_groups_translator})}
subnets_allocation_pools_translator = {
'translation-type': 'HDICT',
'table-name': ALLOCATION_POOLS,
'parent-key': 'id',
'parent-col-name': 'subnet_id',
'parent-key-desc': 'UUID of subnet',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'start',
'desc': 'The start address for the allocation pools',
'translator': value_trans},
{'fieldname': 'end',
'desc': 'The end address for the allocation pools',
'translator': value_trans})}
subnets_dns_nameservers_translator = {
'translation-type': 'LIST',
'table-name': DNS_NAMESERVERS,
'parent-key': 'id',
'parent-col-name': 'subnet_id',
'parent-key-desc': 'UUID of subnet',
'val-col': 'dns_nameserver',
'val-col-desc': 'The DNS server',
'translator': value_trans}
subnets_routes_translator = {
'translation-type': 'HDICT',
'table-name': HOST_ROUTES,
'parent-key': 'id',
'parent-col-name': 'subnet_id',
'parent-key-desc': 'UUID of subnet',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'destination',
'desc': 'The destination for static route',
'translator': value_trans},
{'fieldname': 'nexthop',
'desc': 'The next hop for the destination',
'translator': value_trans})}
subnets_translator = {
'translation-type': 'HDICT',
'table-name': SUBNETS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'UUID of subnet',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'tenant ID',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'subnet name',
'translator': value_trans},
{'fieldname': 'network_id', 'desc': 'UUID of attached network',
'translator': value_trans},
{'fieldname': 'ip_version',
'desc': 'The IP version, which is 4 or 6',
'translator': value_trans},
{'fieldname': 'cidr', 'desc': 'The CIDR',
'translator': value_trans},
{'fieldname': 'gateway_ip', 'desc': 'The gateway IP address',
'translator': value_trans},
{'fieldname': 'enable_dhcp', 'desc': 'Whether DHCP is enabled',
'translator': value_trans},
{'fieldname': 'ipv6_ra_mode', 'desc': 'The IPv6 RA mode',
'translator': value_trans},
{'fieldname': 'ipv6_address_mode',
'desc': 'The IPv6 address mode', 'translator': value_trans},
{'fieldname': 'allocation_pools',
'translator': subnets_allocation_pools_translator},
{'fieldname': 'dns_nameservers',
'translator': subnets_dns_nameservers_translator},
{'fieldname': 'host_routes',
'translator': subnets_routes_translator})}
external_fixed_ips_translator = {
'translation-type': 'HDICT',
'table-name': EXTERNAL_FIXED_IPS,
'parent-key': 'router_id',
'parent-col-name': 'router_id',
'parent-key-desc': 'UUID of router',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'subnet_id', 'desc': 'UUID of the subnet',
'translator': value_trans},
{'fieldname': 'ip_address', 'desc': 'IP Address',
'translator': value_trans})}
routers_external_gateway_infos_translator = {
'translation-type': 'HDICT',
'table-name': EXTERNAL_GATEWAY_INFOS,
'parent-key': 'id',
'parent-col-name': 'router_id',
'parent-key-desc': 'UUID of router',
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'network_id', 'desc': 'Network ID',
'translator': value_trans},
{'fieldname': 'enable_snat',
'desc': 'current Source NAT status for router',
'translator': value_trans},
{'fieldname': 'external_fixed_ips',
'translator': external_fixed_ips_translator})}
routers_translator = {
'translation-type': 'HDICT',
'table-name': ROUTERS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'uuid of the router',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'tenant ID',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'router status',
'translator': value_trans},
{'fieldname': 'admin_state_up',
'desc': 'administrative state of router',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'router name',
'translator': value_trans},
{'fieldname': 'distributed',
'desc': "indicates if it's distributed router ",
'translator': value_trans},
{'fieldname': 'external_gateway_info',
'translator': routers_external_gateway_infos_translator})}
security_group_rules_translator = {
'translation-type': 'HDICT',
'table-name': SECURITY_GROUP_RULES,
'parent-key': 'id',
'parent-col-name': 'security_group_id',
'parent-key-desc': 'uuid of security group',
'selector-type': 'DICT_SELECTOR',
'in-list': True,
'field-translators':
({'fieldname': 'id', 'desc': 'The UUID of the security group rule',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'tenant ID',
'translator': value_trans},
{'fieldname': 'remote_group_id',
'desc': 'remote group id to associate with security group rule',
'translator': value_trans},
{'fieldname': 'direction',
'desc': 'Direction in which the security group rule is applied',
'translator': value_trans},
{'fieldname': 'ethertype', 'desc': 'IPv4 or IPv6',
'translator': value_trans},
{'fieldname': 'protocol',
'desc': 'protocol that is matched by the security group rule.',
'translator': value_trans},
{'fieldname': 'port_range_min',
'desc': 'Min port number in the range',
'translator': value_trans},
{'fieldname': 'port_range_max',
'desc': 'Max port number in the range',
'translator': value_trans},
{'fieldname': 'remote_ip_prefix',
'desc': 'Remote IP prefix to be associated',
'translator': value_trans})}
security_group_translator = {
'translation-type': 'HDICT',
'table-name': SECURITY_GROUPS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'The UUID for the security group',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'Tenant ID',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'The security group name',
'translator': value_trans},
{'fieldname': 'description', 'desc': 'security group description',
'translator': value_trans},
{'fieldname': 'security_group_rules',
'translator': security_group_rules_translator})}
TRANSLATORS = [networks_translator, ports_translator, subnets_translator,
routers_translator, security_group_translator,
floating_ips_translator]
def __init__(self, name='', args=None):
super(NeutronV2Driver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(self.creds)
self.neutron = neutronclient.v2_0.client.Client(session=session)
self.add_executable_method('update_resource_attrs',
[{'name': 'resource_type',
'description': 'resource type (e.g. ' +
'port, network, subnet)'},
{'name': 'id',
'description': 'ID of the resource'},
{'name': 'attr1',
'description': 'attribute name to ' +
'update (e.g. admin_state_up)'},
{'name': 'attr1-value',
'description': 'updated attr1 value'},
{'name': 'attrN',
'description': 'attribute name to ' +
'update'},
{'name': 'attrN-value',
'description': 'updated attrN value'}],
"A wrapper for update_<resource_type>()")
self.add_executable_client_methods(self.neutron,
'neutronclient.v2_0.client')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'neutronv2'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack Networking aka Neutron.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
networks_method = lambda: self._translate_networks(
self.neutron.list_networks())
self.add_update_method(networks_method, self.networks_translator)
subnets_method = lambda: self._translate_subnets(
self.neutron.list_subnets())
self.add_update_method(subnets_method, self.subnets_translator)
ports_method = lambda: self._translate_ports(self.neutron.list_ports())
self.add_update_method(ports_method, self.ports_translator)
routers_method = lambda: self._translate_routers(
self.neutron.list_routers())
self.add_update_method(routers_method, self.routers_translator)
security_method = lambda: self._translate_security_groups(
self.neutron.list_security_groups())
self.add_update_method(security_method,
self.security_group_translator)
floatingips_method = lambda: self._translate_floating_ips(
self.neutron.list_floatingips())
self.add_update_method(floatingips_method,
self.floating_ips_translator)
@ds_utils.update_state_on_changed(FLOATING_IPS)
def _translate_floating_ips(self, obj):
LOG.debug("floating_ips: %s", dict(obj))
row_data = NeutronV2Driver.convert_objs(obj['floatingips'],
self.floating_ips_translator)
return row_data
@ds_utils.update_state_on_changed(NETWORKS)
def _translate_networks(self, obj):
LOG.debug("networks: %s", dict(obj))
row_data = NeutronV2Driver.convert_objs(obj['networks'],
self.networks_translator)
return row_data
@ds_utils.update_state_on_changed(PORTS)
def _translate_ports(self, obj):
LOG.debug("ports: %s", obj)
row_data = NeutronV2Driver.convert_objs(obj['ports'],
self.ports_translator)
return row_data
@ds_utils.update_state_on_changed(SUBNETS)
def _translate_subnets(self, obj):
LOG.debug("subnets: %s", obj)
row_data = NeutronV2Driver.convert_objs(obj['subnets'],
self.subnets_translator)
return row_data
@ds_utils.update_state_on_changed(ROUTERS)
def _translate_routers(self, obj):
LOG.debug("routers: %s", obj)
row_data = NeutronV2Driver.convert_objs(obj['routers'],
self.routers_translator)
return row_data
@ds_utils.update_state_on_changed(SECURITY_GROUPS)
def _translate_security_groups(self, obj):
LOG.debug("security_groups: %s", obj)
row_data = NeutronV2Driver.convert_objs(obj['security_groups'],
self.security_group_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.neutron, action, action_args)
def update_resource_attrs(self, args):
positional_args = args.get('positional', [])
if not positional_args or len(positional_args) < 4:
LOG.error('Args for update_resource_attrs() must contain resource '
'type, resource ID and pairs of key-value attributes to '
'update')
return
resource_type = positional_args.pop(0)
resource_id = positional_args.pop(0)
action = 'update_%s' % resource_type
update_attrs = self._convert_args(positional_args)
body = {resource_type: update_attrs}
action_args = {'named': {resource_type: resource_id,
'body': body}}
self._execute_api(self.neutron, action, action_args)
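# Illustrative example (hypothetical policy and ids): a Datalog action such as
#   execute[neutronv2.update_resource_attrs('port', pid, 'admin_state_up',
#                                           'False')] :- error_port(pid)
# arrives here as {'positional': ['port', '<port-id>', 'admin_state_up',
# 'False']} and results in roughly
#   neutron.update_port('<port-id>',
#                       body={'port': {'admin_state_up': 'False'}})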

View File

@ -1,272 +0,0 @@
# Copyright (c) 2013 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import novaclient.client
from oslo_log import log as logging
import six
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class NovaDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
SERVERS = "servers"
FLAVORS = "flavors"
HOSTS = "hosts"
SERVICES = 'services'
AVAILABILITY_ZONES = "availability_zones"
# This is the most common per-value translator, so define it once here.
value_trans = {'translation-type': 'VALUE'}
def safe_id(x):
if isinstance(x, six.string_types):
return x
try:
return x['id']
except Exception:
return str(x)
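# For example: safe_id('abc') == 'abc'; safe_id({'id': 'abc'}) == 'abc';
# anything else falls back to str(x).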
servers_translator = {
'translation-type': 'HDICT',
'table-name': SERVERS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'The UUID for the server',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'Name of the server',
'translator': value_trans},
{'fieldname': 'hostId', 'col': 'host_id',
'desc': 'The UUID for the host', 'translator': value_trans},
{'fieldname': 'status', 'desc': 'The server status',
'translator': value_trans},
{'fieldname': 'tenant_id', 'desc': 'The tenant ID',
'translator': value_trans},
{'fieldname': 'user_id',
'desc': 'The user ID of the user who owns the server',
'translator': value_trans},
{'fieldname': 'image', 'col': 'image_id',
'desc': 'Name or ID of image',
'translator': {'translation-type': 'VALUE',
'extract-fn': safe_id}},
{'fieldname': 'flavor', 'col': 'flavor_id',
'desc': 'ID of the flavor',
'translator': {'translation-type': 'VALUE',
'extract-fn': safe_id}},
{'fieldname': 'OS-EXT-AZ:availability_zone', 'col': 'zone',
'desc': 'The availability zone of host',
'translator': value_trans},
{'fieldname': 'OS-EXT-SRV-ATTR:hypervisor_hostname',
'desc': ('The hostname of hypervisor where the server is '
'running'),
'col': 'host_name', 'translator': value_trans})}
flavors_translator = {
'translation-type': 'HDICT',
'table-name': FLAVORS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'desc': 'ID of the flavor',
'translator': value_trans},
{'fieldname': 'name', 'desc': 'Name of the flavor',
'translator': value_trans},
{'fieldname': 'vcpus', 'desc': 'Number of vcpus',
'translator': value_trans},
{'fieldname': 'ram', 'desc': 'Memory size in MB',
'translator': value_trans},
{'fieldname': 'disk', 'desc': 'Disk size in GB',
'translator': value_trans},
{'fieldname': 'ephemeral', 'desc': 'Ephemeral space size in GB',
'translator': value_trans},
{'fieldname': 'rxtx_factor', 'desc': 'RX/TX factor',
'translator': value_trans})}
hosts_translator = {
'translation-type': 'HDICT',
'table-name': HOSTS,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'host_name', 'desc': 'Name of host',
'translator': value_trans},
{'fieldname': 'service', 'desc': 'Enabled service',
'translator': value_trans},
{'fieldname': 'zone', 'desc': 'The availability zone of host',
'translator': value_trans})}
services_translator = {
'translation-type': 'HDICT',
'table-name': SERVICES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'id', 'col': 'service_id', 'desc': 'Service ID',
'translator': value_trans},
{'fieldname': 'binary', 'desc': 'Service binary',
'translator': value_trans},
{'fieldname': 'host', 'desc': 'Host Name',
'translator': value_trans},
{'fieldname': 'zone', 'desc': 'Availability Zone',
'translator': value_trans},
{'fieldname': 'status', 'desc': 'Status of service',
'translator': value_trans},
{'fieldname': 'state', 'desc': 'State of service',
'translator': value_trans},
{'fieldname': 'updated_at', 'desc': 'Last updated time',
'translator': value_trans},
{'fieldname': 'disabled_reason', 'desc': 'Disabled reason',
'translator': value_trans})}
availability_zones_translator = {
'translation-type': 'HDICT',
'table-name': AVAILABILITY_ZONES,
'selector-type': 'DOT_SELECTOR',
'field-translators':
({'fieldname': 'zoneName', 'col': 'zone',
'desc': 'Availability zone name', 'translator': value_trans},
{'fieldname': 'zoneState', 'col': 'state',
'desc': 'Availability zone state',
'translator': value_trans})}
TRANSLATORS = [servers_translator, flavors_translator, hosts_translator,
services_translator, availability_zones_translator]
def __init__(self, name='', args=None):
super(NovaDriver, self).__init__(name, args)
datasource_driver.ExecutionDriver.__init__(self)
self.creds = args
session = ds_utils.get_keystone_session(self.creds)
self.nova_client = novaclient.client.Client(
version=self.creds.get('api_version', '2'), session=session)
self.add_executable_method('servers_set_meta',
[{'name': 'server',
'description': 'server id'},
{'name': 'meta-key1',
'description': 'meta key 1'},
{'name': 'meta-value1',
'description': 'value for meta key1'},
{'name': 'meta-keyN',
'description': 'meta key N'},
{'name': 'meta-valueN',
'description': 'value for meta keyN'}],
"A wrapper for servers.set_meta()")
self.add_executable_client_methods(self.nova_client, 'novaclient.v2.')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'nova'
result['description'] = ('Datasource driver that interfaces with '
'OpenStack Compute aka nova.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['api_version'] = constants.OPTIONAL
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def initialize_update_methods(self):
servers_method = lambda: self._translate_servers(
self.nova_client.servers.list(
detailed=True, search_opts={"all_tenants": 1}))
self.add_update_method(servers_method, self.servers_translator)
flavors_method = lambda: self._translate_flavors(
self.nova_client.flavors.list())
self.add_update_method(flavors_method, self.flavors_translator)
hosts_method = lambda: self._translate_hosts(
self.nova_client.hosts.list())
self.add_update_method(hosts_method, self.hosts_translator)
services_method = lambda: self._translate_services(
self.nova_client.services.list())
self.add_update_method(services_method, self.services_translator)
az_method = lambda: self._translate_availability_zones(
self.nova_client.availability_zones.list())
self.add_update_method(az_method, self.availability_zones_translator)
@ds_utils.update_state_on_changed(SERVERS)
def _translate_servers(self, obj):
row_data = NovaDriver.convert_objs(obj, NovaDriver.servers_translator)
return row_data
@ds_utils.update_state_on_changed(FLAVORS)
def _translate_flavors(self, obj):
row_data = NovaDriver.convert_objs(obj, NovaDriver.flavors_translator)
return row_data
@ds_utils.update_state_on_changed(HOSTS)
def _translate_hosts(self, obj):
row_data = NovaDriver.convert_objs(obj, NovaDriver.hosts_translator)
return row_data
@ds_utils.update_state_on_changed(SERVICES)
def _translate_services(self, obj):
row_data = NovaDriver.convert_objs(obj, NovaDriver.services_translator)
return row_data
@ds_utils.update_state_on_changed(AVAILABILITY_ZONES)
def _translate_availability_zones(self, obj):
row_data = NovaDriver.convert_objs(
obj,
NovaDriver.availability_zones_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.nova_client, action, action_args)
# "action" methods - to be used with "execute"
def servers_set_meta(self, args):
"""A wrapper for servers.set_meta().
'execute[p(x)]' doesn't take optional args at the moment.
Therefore, this function translates the positional ARGS
to optional args and calls the servers.set_meta() API.
:param <list> args: expected server ID and pairs of meta
data in positional args such as:
{'positional': ['server_id', 'meta1', 'value1', 'meta2', 'value2']}
Usage:
execute[nova.servers_set_meta(svr_id, meta1, val1, meta2, val2)] :-
triggering_table(id)
"""
action = 'servers.set_meta'
positional_args = args.get('positional', [])
if not positional_args:
LOG.error('Args not found for servers_set_meta()')
return
# Strip off the server_id before converting meta data pairs
server_id = positional_args.pop(0)
meta_data = self._convert_args(positional_args)
action_args = {'named': {'server': server_id,
'metadata': meta_data}}
self._execute_api(self.nova_client, action, action_args)
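# Illustrative example (hypothetical values): args of
#   {'positional': ['vm-1', 'owner', 'alice']}
# are translated (via _convert_args pairing the remaining positionals) into
#   nova_client.servers.set_meta(server='vm-1', metadata={'owner': 'alice'})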

View File

@ -1,647 +0,0 @@
# Copyright (c) 2014 Marist SDN Innovation lab Joint with Plexxi Inc.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import json
try:
from plexxi.core.api.binding import AffinityGroup
from plexxi.core.api.binding import Job
from plexxi.core.api.binding import PhysicalPort
from plexxi.core.api.binding import PlexxiSwitch
from plexxi.core.api.binding import VirtualizationHost
from plexxi.core.api.binding import VirtualMachine
from plexxi.core.api.binding import VirtualSwitch
from plexxi.core.api.binding import VmwareVirtualMachine
from plexxi.core.api.session import CoreSession
except ImportError:
pass
from oslo_config import cfg
from oslo_log import log as logging
import requests
from congress.datasources import constants
from congress.datasources import datasource_driver
LOG = logging.getLogger(__name__)
class PlexxiDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
HOSTS = "hosts"
HOST_MACS = HOSTS + '.macs'
HOST_GUESTS = HOSTS + '.guests'
VMS = "vms"
VM_MACS = VMS + '.macs'
AFFINITIES = "affinities"
VSWITCHES = "vswitches"
VSWITCHES_MACS = VSWITCHES + '.macs'
VSWITCHES_HOSTS = VSWITCHES + '.hosts'
PLEXXISWITCHES = "plexxiswitches"
PLEXXISWITCHES_MACS = PLEXXISWITCHES + '.macs'
PORTS = "ports"
NETWORKLINKS = "networklinks"
def __init__(self, name='', args=None, session=None):
super(PlexxiDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
self.exchange = session
self.creds = args
self.raw_state = {}
try:
self.unique_names = self.string_to_bool(args['unique_names'])
except KeyError:
LOG.warning("unique_names has not been configured, "
"defaulting to False.")
self.unique_names = False
port = str(cfg.CONF.bind_port)
host = str(cfg.CONF.bind_host)
self.headers = {'content-type': 'application/json'}
self.name_cooldown = False
self.api_address = "http://" + host + ":" + port + "/v1"
self.name_rule_needed = True
if str(cfg.CONF.auth_strategy) == 'keystone':
if 'keystone_pass' not in args:
LOG.error("Keystone is enabled, but a password was not " +
"provided. All automated API calls are disabled")
self.unique_names = False
self.name_rule_needed = False
elif 'keystone_user' not in args:
LOG.error("Keystone is enabled, but a username was not " +
"provided. All automated API calls are disabled")
self.unique_names = False
self.name_rule_needed = False
else:
self.keystone_url = str(cfg.CONF.keystone_authtoken.auth_uri)
self.keystoneauth()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'plexxi'
result['description'] = ('Datasource driver that interfaces with '
'PlexxiCore.')
result['config'] = {'auth_url': constants.REQUIRED, # PlexxiCore url
'username': constants.REQUIRED,
'password': constants.REQUIRED,
'poll_time': constants.OPTIONAL,
'tenant_name': constants.REQUIRED,
'unique_names': constants.OPTIONAL,
'keystone_pass': constants.OPTIONAL,
'keystone_user': constants.OPTIONAL}
result['secret'] = ['password']
return result
def update_from_datasource(self):
"""Called when it is time to pull new data from this datasource.
Pulls lists of objects from PlexxiCore. If the data does not match
the corresponding table in the driver's raw state or has not yet been
added to the state, the driver calls methods to parse this data.
Once all data has been updated, sets
self.state[tablename] = <list of tuples of strings/numbers>
for every tablename exported by PlexxiCore.
"""
# Initialize instance variables that get set during update
self.hosts = []
self.mac_list = []
self.guest_list = []
self.plexxi_switches = []
self.affinities = []
self.vswitches = []
self.vms = []
self.vm_macs = []
self.ports = []
self.network_links = []
if self.exchange is None:
self.connect_to_plexxi()
# Get host data from PlexxiCore
hosts = VirtualizationHost.getAll(session=self.exchange)
if (self.HOSTS not in self.state or
hosts != self.raw_state[self.HOSTS]):
self._translate_hosts(hosts)
self.raw_state[self.HOSTS] = hosts
else:
self.hosts = self.state[self.HOSTS]
self.mac_list = self.state[self.HOST_MACS]
self.guest_list = self.state[self.HOST_GUESTS]
# Get PlexxiSwitch Data from PlexxiCore
plexxiswitches = PlexxiSwitch.getAll(session=self.exchange)
if (self.PLEXXISWITCHES not in self.state or
plexxiswitches != self.raw_state[self.PLEXXISWITCHES]):
self._translate_pswitches(plexxiswitches)
self.raw_state[self.PLEXXISWITCHES] = plexxiswitches
else:
self.plexxi_switches = self.state[self.PLEXXISWITCHES]
self.ps_macs = self.state[self.PLEXXISWITCHES_MACS]
# Get affinity data from PlexxiCore
affinities = AffinityGroup.getAll(session=self.exchange)
if (self.AFFINITIES not in self.state or
affinities != self.raw_state[self.AFFINITIES]):
if AffinityGroup.getCount(session=self.exchange) == 0:
self.state[self.AFFINITIES] = ['No Affinities found']
else:
self._translate_affinities(affinities)
self.raw_state[self.AFFINITIES] = affinities
else:
self.affinities = self.state[self.AFFINITIES]
# Get vswitch data from PlexxiCore
vswitches = VirtualSwitch.getAll(session=self.exchange)
if (self.VSWITCHES not in self.state or
vswitches != self.raw_state[self.VSWITCHES]):
self._translate_vswitches(vswitches)
self.raw_state[self.VSWITCHES] = vswitches
else:
self.vswitches = self.state[self.VSWITCHES]
self.vswitch_macs = self.state[self.VSWITCHES_MACS]
self.vswitch_hosts = self.state[self.VSWITCHES_HOSTS]
# Get virtual machine data from PlexxiCore
vms = VirtualMachine.getAll(session=self.exchange)
if (self.VMS not in self.state or
vms != self.raw_state[self.VMS]):
self._translate_vms(vms)
self.raw_state[self.VMS] = vms
else:
self.vms = self.state[self.VMS]
self.vm_macs = self.state[self.VM_MACS]
# Get port data from PlexxiCore
ports = PhysicalPort.getAll(session=self.exchange)
if (self.PORTS not in self.state or
ports != self.raw_state[self.PORTS]):
self._translate_ports(ports)
self.raw_state[self.PORTS] = ports
else:
self.ports = self.state[self.PORTS]
self.network_links = self.state[self.NETWORKLINKS]
LOG.debug("Setting Plexxi State")
self.state = {}
self.state[self.HOSTS] = set(self.hosts)
self.state[self.HOST_MACS] = set(self.mac_list)
self.state[self.HOST_GUESTS] = set(self.guest_list)
self.state[self.PLEXXISWITCHES] = set(self.plexxi_switches)
self.state[self.PLEXXISWITCHES_MACS] = set(self.ps_macs)
self.state[self.AFFINITIES] = set(self.affinities)
self.state[self.VSWITCHES] = set(self.vswitches)
self.state[self.VSWITCHES_MACS] = set(self.vswitch_macs)
self.state[self.VSWITCHES_HOSTS] = set(self.vswitch_hosts)
self.state[self.VMS] = set(self.vms)
self.state[self.VM_MACS] = set(self.vm_macs)
self.state[self.PORTS] = set(self.ports)
self.state[self.NETWORKLINKS] = set(self.network_links)
# Create Rules
if self.name_rule_needed is True:
if self.name_rule_check() is True:
self.name_rule_create()
else:
self.name_rule_needed = False
# Act on Policy
if self.unique_names is True:
if not self.name_cooldown:
self.name_response()
else:
self.name_cooldown = False
@classmethod
def get_schema(cls):
"""Creates a table schema for incoming data from PlexxiCore.
Returns a dictionary mapping each table name to the tuple of column
names for that table. Both table names and column names are strings.
"""
d = {}
d[cls.HOSTS] = ("uuid", "name", "mac_count", "vmcount")
d[cls.HOST_MACS] = ("Host_uuid", "Mac_Address")
d[cls.HOST_GUESTS] = ("Host_uuid", "VM_uuid")
d[cls.VMS] = ("uuid", "name", "host_uuid", "ip", "mac_count")
d[cls.VM_MACS] = ("vmID", "Mac_Address")
d[cls.AFFINITIES] = ("uuid", "name")
d[cls.VSWITCHES] = ("uuid", "host_count", "vnic_count")
d[cls.VSWITCHES_MACS] = ("vswitch_uuid", "Mac_Address")
d[cls.VSWITCHES_HOSTS] = ("vswitch_uuid", "hostuuid")
d[cls.PLEXXISWITCHES] = ("uuid", "ip", "status")
d[cls.PLEXXISWITCHES_MACS] = ("Switch_uuid", "Mac_Address")
d[cls.PORTS] = ("uuid", "name")
d[cls.NETWORKLINKS] = ("uuid", "name", "port_uuid", "start_uuid",
"start_name", "stop_uuid", "stop_name")
return d
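# Illustrative note: the _translate_* methods below build each row
# positionally against this schema via get_column_map() (assumed to map a
# column name to its index), so a 'hosts' entry is laid out as the tuple
# (uuid, name, mac_count, vmcount).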
def _translate_hosts(self, hosts):
"""Translates data about Hosts from PlexxiCore for Congress.
Responsible for the states 'hosts', 'hosts.macs' and 'hosts.guests'
"""
row_keys = self.get_column_map(self.HOSTS)
hostlist = []
maclist = []
vm_uuids = []
for host in hosts:
row = ['None'] * (max(row_keys.values()) + 1)
hostID = host.getForeignUuid()
row[row_keys['uuid']] = hostID
row[row_keys['name']] = host.getName()
pnics = host.getPhysicalNetworkInterfaces()
if pnics:
for pnic in pnics:
mac = str(pnic.getMacAddress())
tuple_mac = (hostID, mac)
maclist.append(tuple_mac)
mac_count = len(maclist)
if (mac_count > 0):
row[row_keys['mac_count']] = mac_count
vmCount = host.getVirtualMachineCount()
row[row_keys['vmcount']] = vmCount
if vmCount != 0:
vms = host.getVirtualMachines()
for vm in vms:
tuple_vmid = (hostID, vm.getForeignUuid())
vm_uuids.append(tuple_vmid)
hostlist.append(tuple(row))
self.hosts = hostlist
self.mac_list = maclist
self.guest_list = vm_uuids
def _translate_pswitches(self, plexxi_switches):
"""Translates data on Plexxi Switches from PlexxiCore for Congress.
Responsible for the states 'plexxiswitches' and 'plexxiswitches.macs'
"""
row_keys = self.get_column_map(self.PLEXXISWITCHES)
pslist = []
maclist = []
for switch in plexxi_switches:
row = ['None'] * (max(row_keys.values()) + 1)
psuuid = str(switch.getUuid())
row[row_keys['uuid']] = psuuid
psip = str(switch.getIpAddress())
row[row_keys['ip']] = psip
psstatus = str(switch.getStatus())
row[row_keys['status']] = psstatus
pnics = switch.getPhysicalNetworkInterfaces()
for pnic in pnics:
mac = str(pnic.getMacAddress())
macrow = [psuuid, mac]
maclist.append(tuple(macrow))
pslist.append(tuple(row))
self.plexxi_switches = pslist
self.ps_macs = maclist
def _translate_affinities(self, affinities):
"""Translates data about affinities from PlexxiCore for Congress.
Responsible for the state 'affinities'
"""
row_keys = self.get_column_map(self.AFFINITIES)
affinitylist = []
for affinity in affinities:
row = ['None'] * (max(row_keys.values()) + 1)
uuid = str(affinity.getUuid())
row[row_keys['uuid']] = uuid
row[row_keys['name']] = affinity.getName()
affinitylist.append(tuple(row))
self.affinities = affinitylist
def _translate_vswitches(self, vswitches):
"""Translates data about vswitches from PlexxiCore for Congress.
Responsible for the states 'vswitches', 'vswitches.macs' and 'vswitches.hosts'
"""
# untested
row_keys = self.get_column_map(self.VSWITCHES)
vswitchlist = []
tuple_macs = []
vswitch_host_list = []
for vswitch in vswitches:
row = ['None'] * (max(row_keys.values()) + 1)
vswitchID = vswitch.getForeignUuid()
row[row_keys['uuid']] = vswitchID
vSwitchHosts = vswitch.getVirtualizationHosts()
try:
host_count = len(vSwitchHosts)
except TypeError:
host_count = 0
row[row_keys['host_count']] = host_count
if host_count != 0:
for host in vSwitchHosts:
hostuuid = host.getForeignUuid()
hostrow = [vswitchID, hostuuid]
vswitch_host_list.append(tuple(hostrow))
vswitch_vnics = vswitch.getVirtualNetworkInterfaces()
try:
vnic_count = len(vswitch_vnics)
except TypeError:
vnic_count = 0
row[row_keys['vnic_count']] = vnic_count
if vnic_count != 0:
for vnic in vswitch_vnics:
mac = vnic.getMacAddress()
macrow = [vswitchID, str(mac)]
tuple_macs.append(tuple(macrow))
vswitchlist.append(tuple(row))
self.vswitches = vswitchlist
self.vswitch_macs = tuple_macs
self.vswitch_hosts = vswitch_host_list
def _translate_vms(self, vms):
"""Translate data on VMs from PlexxiCore for Congress.
Responsible for states 'vms' and 'vms.macs'
"""
row_keys = self.get_column_map(self.VMS)
vmlist = []
maclist = []
for vm in vms:
row = ['None'] * (max(row_keys.values()) + 1)
vmID = vm.getForeignUuid()
row[row_keys['uuid']] = vmID
vmName = vm.getName()
row[row_keys['name']] = vmName
try:
vmhost = vm.getVirtualizationHost()
vmhostuuid = vmhost.getForeignUuid()
row[row_keys['host_uuid']] = vmhostuuid
except AttributeError:
LOG.debug("The host for " + vmName + " could not be found")
vmIP = vm.getIpAddress()
if vmIP:
row[row_keys['ip']] = vmIP
vmVnics = vm.getVirtualNetworkInterfaces()
mac_count = 0
for vnic in vmVnics:
mac = str(vnic.getMacAddress())
tuple_mac = (vmID, mac)
maclist.append(tuple_mac)
mac_count += 1
row[row_keys['mac_count']] = mac_count
vmlist.append(tuple(row))
self.vms = vmlist
self.vm_macs = maclist
def _translate_ports(self, ports):
"""Translate data about ports from PlexxiCore for Congress.
Responsible for the states 'ports' and 'networklinks'
"""
row_keys = self.get_column_map(self.PORTS)
link_keys = self.get_column_map(self.NETWORKLINKS)
port_list = []
link_list = []
for port in ports:
row = ['None'] * (max(row_keys.values()) + 1)
portID = str(port.getUuid())
row[row_keys['uuid']] = portID
portName = str(port.getName())
row[row_keys['name']] = portName
links = port.getNetworkLinks()
if links:
link_keys = self.get_column_map(self.NETWORKLINKS)
for link in links:
link_row = self._translate_network_link(link, link_keys,
portID)
link_list.append(tuple(link_row))
port_list.append(tuple(row))
self.ports = port_list
self.network_links = link_list
def _translate_network_link(self, link, row_keys, sourcePortUuid):
"""Translates data about network links from PlexxiCore for Congress.
Subfunction of _translate_ports; each call handles the set of network
links attached to one port. Directly responsible for the state of
'networklinks'
"""
row = ['None'] * (max(row_keys.values()) + 1)
linkID = str(link.getUuid())
row[row_keys['uuid']] = linkID
row[row_keys['port_uuid']] = sourcePortUuid
linkName = str(link.getName())
row[row_keys['name']] = linkName
linkStartObj = link.getStartNetworkInterface()
linkStartName = str(linkStartObj.getName())
row[row_keys['start_name']] = linkStartName
linkStartUuid = str(linkStartObj.getUuid())
row[row_keys['start_uuid']] = linkStartUuid
linkStopObj = link.getStopNetworkInterface()
linkStopUuid = str(linkStopObj.getUuid())
row[row_keys['stop_uuid']] = linkStopUuid
linkStopName = str(linkStopObj.getName())
row[row_keys['stop_name']] = linkStopName
return row
def string_to_bool(self, string):
"""Used for parsing boolean variables stated in datasources.conf."""
string = string.strip()
s = string.lower()
if s in ['true', 'yes', 'on']:
return True
else:
return False
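# For example: string_to_bool(' Yes ') is True, while string_to_bool('0')
# and string_to_bool('off') are False.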
def connect_to_plexxi(self):
"""Connect to PlexxiCore.
Create a CoreSession connecting congress to PlexxiCore using
credentials provided in datasources.conf
"""
if 'auth_url' not in self.creds:
LOG.error("Plexxi url not supplied. Could not start Plexxi" +
"connection driver")
if 'username' not in self.creds:
LOG.error("Plexxi username not supplied. Could not start " +
"Plexxi connection driver")
if 'password' not in self.creds:
LOG.error("Plexxi password not supplied. Could not start " +
"Plexxi connection driver")
try:
self.exchange = CoreSession.connect(
baseUrl=self.creds['auth_url'],
allowUntrusted=True,
username=self.creds['username'],
password=self.creds['password'])
except requests.exceptions.HTTPError as error:
if (int(error.response.status_code) == 401 or
int(error.response.status_code) == 403):
msg = ("Incorrect username/password combination. Passed" +
"in username was " + self.creds['username'])
raise Exception(requests.exceptions.HTTPErrror(msg))
else:
raise Exception(requests.exceptions.HTTPError(error))
except requests.exceptions.ConnectionError:
msg = ("Cannot connect to PlexxiCore at " +
self.creds['auth_url'] + " with the username " +
self.creds['username'])
raise Exception(requests.exceptions.ConnectionError(msg))
def keystoneauth(self):
"""Acquire a keystone auth token for API calls
Called when congress is running with keystone as the authentication
method. This provides the driver with a keystone token that is then placed
in the header of API calls made to congress.
"""
try:
authreq = {
"auth": {
"tenantName": self.creds['tenant_name'],
"passwordCredentials": {
"username": self.creds['keystone_user'],
"password": self.creds['keystone_pass']
}
}
}
headers = {'content-type': 'application/json',
'accept': 'application/json'}
request = requests.post(url=self.keystone_url+'/v2.0/tokens',
data=json.dumps(authreq),
headers=headers)
response = request.json()
token = response['access']['token']['id']
self.headers['X-Auth-Token'] = token
except Exception:
LOG.exception("Could not authenticate with keystone." +
"All automated API calls have been disabled")
self.unique_names = False
self.name_rule_needed = False
def name_rule_check(self):
"""Checks to see if a RepeatedNames rule already exists
This method is used to prevent the driver from recreating additional
RepeatedNames tables each time congress is restarted.
"""
try:
table = requests.get(self.api_address + "/policies/" +
"plexxi/rules",
headers=self.headers)
result = json.loads(table.text)
for entry in result['results']:
if entry['name'] == "RepeatedNames":
return False
return True
except Exception:
LOG.exception("An error has occurred when accessing the " +
"Congress API.All automated API calls have been " +
"disabled.")
self.unique_names = False
self.name_rule_needed = False
return False
def name_rule_create(self):
"""Creates RepeatedName table for unique names policy.
The RepeatedName table contains the name and plexxiUuid of
VMs that have the same name in the Plexxi table and the Nova Table.
"""
try:
datasources = self.node.get_datasources()
for datasource in datasources:
if datasource['driver'] == 'nova':
repeated_name_rule = ('{"rule": "RepeatedName' +
'(vname,pvuuid):-' + self.name +
':vms(0=pvuuid,1=vname),' +
datasource['name'] +
':servers(1=vname)",' +
'"name": "RepeatedNames"}')
requests.post(url=self.api_address +
'/policies/plexxi/rules',
data=repeated_name_rule,
headers=self.headers)
self.name_rule_needed = False
break
except Exception:
LOG.exception("Could not create Repeated Name table")
def name_response(self):
"""Checks for any entries in the RepeatedName table.
For all entries found in the RepeatedName table, the corresponding
VM will be then prefixed with 'conflict-' in PlexxiCore.
"""
vmname = False
vmuuid = False
json_response = []
self.name_cooldown = True
try:
plexxivms = VmwareVirtualMachine.getAll(session=self.exchange)
table = requests.get(self.api_address + "/policies/" +
"plexxi/tables/RepeatedName/rows",
headers=self.headers)
if table.text == "Authentication required":
self.keystoneauth()
table = requests.get(self.api_address + "/policies/" +
"plexxi/tables/RepeatedName/rows",
headers=self.headers)
json_response = json.loads(table.text)
for row in json_response['results']:
vmname = row['data'][0]
vmuuid = row['data'][1]
if vmname and vmuuid:
for plexxivm in plexxivms:
if (plexxivm.getForeignUuid() == vmuuid):
new_vm_name = "Conflict-" + vmname
desc = ("Congress has found a VM with the same " +
"name on the nova network. This vm " +
"will now be renamed to " + new_vm_name)
job_name = (" Congress Driver:Changing virtual" +
"machine, " + vmname + "\'s name")
changenamejob = Job.create(name=job_name,
description=desc + ".",
session=self.exchange)
changenamejob.begin()
plexxivm.setName(new_vm_name)
changenamejob.commit()
LOG.info(desc + " in PlexxiCore.")
except Exception:
LOG.exception("error in name_response")
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
# TODO(aimeeu)
# The 'else' block is where we would execute native Plexxi actions
# (actions implemented in the Plexxi libraries). However, that is
# hard to do because of the rest of the way the driver is written.
# The question for the 'else' block is whether it's worth exposing
# all the native Plexxi actions. See comments in review
# https://review.openstack.org/#/c/335539/

View File

@ -1,79 +0,0 @@
# Copyright (c) 2016 NTT All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import datetime
from oslo_log import log as logging
from congress.datasources import constants
from congress.datasources import datasource_driver
LOG = logging.getLogger(__name__)
class PushDriver(datasource_driver.PushedDataSourceDriver):
"""A DataSource Driver for pushing tuples of data.
To use this driver, run the following API:
PUT /v1/data-sources/<the driver id>/tables/<table id>/rows
Still a work in progress, but the intent is to allow a request body
to be any list of lists where the internal lists all have
the same number of elements.
request body:
[ [1,2], [3,4] ]
"""
def __init__(self, name='', args=None):
super(PushDriver, self).__init__(name, args=args)
self._table_deps['data'] = ['data']
@classmethod
def get_schema(cls):
schema = {}
# Hardcode the tables for now. Later, create the tables on the fly.
# May be as easy as deleting the following line.
schema['data'] = []
return schema
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'push'
result['description'] = ('Datasource driver that allows external '
'systems to push data.')
# TODO(masa): Remove the REQUIRED config once python-congressclient
# is able to retrieve non-dict objects in config fields via the
# $ openstack congress datasource list command
result['config'] = {'description': constants.REQUIRED,
'persist_data': constants.OPTIONAL}
return result
def update_entire_data(self, table_id, objs):
LOG.info('update %s table in %s datasource', table_id, self.name)
tablename = 'data'  # hardcoded for now (see get_schema)
self.prior_state = dict(self.state)
self._update_state(tablename,
[tuple([table_id, tuple(x)]) for x in objs])
LOG.debug('publish a new state %s in %s',
self.state[tablename], tablename)
self.publish(tablename, self.state[tablename])
self.number_of_updates += 1
self.last_updated_time = datetime.datetime.now()
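# Illustrative usage (hypothetical ids and endpoint; adjust host and port to
# your deployment):
#   curl -X PUT http://127.0.0.1:1789/v1/data-sources/<driver-id>/tables/t1/rows \
#        -H 'Content-Type: application/json' -d '[[1, 2], [3, 4]]'
# Each pushed row then lands in the 'data' table as a (table_id, row) pair,
# e.g. ('t1', (1, 2)) and ('t1', (3, 4)).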

View File

@ -1,154 +0,0 @@
# Copyright (c) 2014 Montavista Software, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
import swiftclient.service
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class SwiftDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
CONTAINERS = "containers"
OBJECTS = "objects"
value_trans = {'translation-type': 'VALUE'}
containers_translator = {
'translation-type': 'HDICT',
'table-name': CONTAINERS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'count', 'translator': value_trans},
{'fieldname': 'bytes', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans})}
objects_translator = {
'translation-type': 'HDICT',
'table-name': OBJECTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'bytes', 'translator': value_trans},
{'fieldname': 'last_modified', 'translator': value_trans},
{'fieldname': 'hash', 'translator': value_trans},
{'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'content_type', 'translator': value_trans},
{'fieldname': 'container_name', 'translator': value_trans})}
TRANSLATORS = [containers_translator, objects_translator]
def __init__(self, name='', args=None):
if args is None:
args = self.empty_credentials()
super(SwiftDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
options = self.get_swift_credentials_v1(args)
# TODO(ramineni): Enable v3 support
options['os_auth_url'] = options['os_auth_url'].replace('v3', 'v2.0')
self.swift_service = swiftclient.service.SwiftService(options)
self.add_executable_client_methods(self.swift_service,
'swiftclient.service')
self.initialize_update_methods()
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
# TODO(zhenzanz): This is verified with keystoneauth for swift.
# Do we need to support other Swift auth systems?
# http://docs.openstack.org/developer/swift/overview_auth.html
result = {}
result['id'] = 'swift'
result['description'] = ('Datasource driver that interfaces with '
'swift.')
result['config'] = ds_utils.get_openstack_required_config()
result['config']['lazy_tables'] = constants.OPTIONAL
result['secret'] = ['password']
return result
def get_swift_credentials_v1(self, creds):
# Check swiftclient/service.py _default_global_options for more
# auth options. But these 4 options seem to be enough.
options = {}
options['os_username'] = creds['username']
options['os_password'] = creds['password']
options['os_tenant_name'] = creds['tenant_name']
options['os_auth_url'] = creds['auth_url']
return options
def initialize_update_methods(self):
containers_method = lambda: self._translate_containers(
self._get_containers_and_objects(container=True))
self.add_update_method(containers_method, self.containers_translator)
objects_method = lambda: self._translate_objects(
self._get_containers_and_objects(objects=True))
self.add_update_method(objects_method, self.objects_translator)
def _get_containers_and_objects(self, container=False, objects=False):
container_list = self.swift_service.list()
cont_list = []
object_rows = []
containers = []
LOG.debug("Swift obtaining containers List")
for stats in container_list:
containers = stats['listing']
for item in containers:
cont_list.append(item['name'])
if container:
return containers
LOG.debug("Swift obtaining objects List")
for cont_name in cont_list:
object_list = self.swift_service.list(cont_name)
for items in object_list:
item_list = items['listing']
for obj in item_list:
# Tag each object with its container so the objects
# table can carry the container name.
obj['container_name'] = cont_name
object_rows.append(obj)
if objects:
return object_rows
return containers, object_rows
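# Illustrative example (hypothetical values): a container listing entry such
# as {'name': 'logs', 'count': 12, 'bytes': 4096} becomes a containers row,
# while an object entry is tagged with its container before translation,
# e.g. {'name': 'a.log', 'bytes': 512, 'container_name': 'logs', ...}.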
@ds_utils.update_state_on_changed(CONTAINERS)
def _translate_containers(self, obj):
"""Translate the containers represented by OBJ into tables."""
row_data = SwiftDriver.convert_objs(obj,
self.containers_translator)
return row_data
@ds_utils.update_state_on_changed(OBJECTS)
def _translate_objects(self, obj):
"""Translate the objects represented by OBJ into tables."""
row_data = SwiftDriver.convert_objs(obj,
self.objects_translator)
return row_data
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.swift_service, action, action_args)

View File

@ -1,329 +0,0 @@
# Copyright (c) 2014 Marist SDN Innovation lab Joint with Plexxi Inc.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_log import log as logging
from oslo_vmware import api
from oslo_vmware import vim_util
from congress.datasources import constants
from congress.datasources import datasource_driver
from congress.datasources import datasource_utils as ds_utils
LOG = logging.getLogger(__name__)
class VCenterDriver(datasource_driver.PollingDataSourceDriver,
datasource_driver.ExecutionDriver):
HOSTS = "hosts"
HOST_DNS = "host.DNS_IPs"
HOST_PNICS = "host.PNICs"
HOST_VNICS = "host.VNICs"
VMS = "vms"
value_trans = {'translation-type': 'VALUE'}
vms_translator = {
'translation-type': 'HDICT',
'table-name': VMS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'uuid', 'translator': value_trans},
{'fieldname': 'host_uuid', 'translator': value_trans},
{'fieldname': 'pathName', 'translator': value_trans},
{'fieldname': 'status', 'translator': value_trans},
{'fieldname': 'CpuDemand', 'translator': value_trans},
{'fieldname': 'CpuUsage', 'translator': value_trans},
{'fieldname': 'memorySizeMB', 'translator': value_trans},
{'fieldname': 'MemoryUsage', 'translator': value_trans},
{'fieldname': 'committedStorage', 'translator': value_trans},
{'fieldname': 'uncommittedStorage', 'translator': value_trans},
{'fieldname': 'annotation', 'translator': value_trans})}
pnic_translator = {
'translation-type': 'HDICT',
'table-name': HOST_PNICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'host_uuid', 'translator': value_trans},
{'fieldname': 'device', 'translator': value_trans},
{'fieldname': 'mac', 'translator': value_trans},
{'fieldname': 'ipAddress', 'translator': value_trans},
{'fieldname': 'subnetMask', 'translator': value_trans})}
vnic_translator = {
'translation-type': 'HDICT',
'table-name': HOST_VNICS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'host_uuid', 'translator': value_trans},
{'fieldname': 'device', 'translator': value_trans},
{'fieldname': 'mac', 'translator': value_trans},
{'fieldname': 'portgroup', 'translator': value_trans},
{'fieldname': 'ipAddress', 'translator': value_trans},
{'fieldname': 'subnetMask', 'translator': value_trans})}
hosts_translator = {
'translation-type': 'HDICT',
'table-name': HOSTS,
'selector-type': 'DICT_SELECTOR',
'field-translators':
({'fieldname': 'name', 'translator': value_trans},
{'fieldname': 'uuid', 'translator': value_trans},
{'fieldname': HOST_DNS, 'col': 'Host:DNS_id',
'translator': {'translation-type': 'LIST',
'table-name': HOST_DNS,
'id-col': 'Host:DNS_id',
'val-col': 'DNS_IPs',
'translator': value_trans}})}
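# Illustrative note: the LIST sub-translator above fans each host's DNS list
# out into the 'host.DNS_IPs' child table. A host with DNS_IPs
# ['8.8.8.8', '8.8.4.4'] (hypothetical values) yields one hosts row whose
# 'Host:DNS_id' column joins to child rows (<dns-id>, '8.8.8.8') and
# (<dns-id>, '8.8.4.4').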
TRANSLATORS = [hosts_translator, pnic_translator, vnic_translator,
vms_translator]
def __init__(self, name='', args=None, session=None):
if args is None:
args = self.empty_credentials()
else:
args['tenant_name'] = None
super(VCenterDriver, self).__init__(name, args=args)
datasource_driver.ExecutionDriver.__init__(self)
try:
self.max_VMs = int(args['max_vms'])
except (KeyError, ValueError):
LOG.warning("max_vms has not been configured, "
" defaulting to 999.")
self.max_VMs = 999
try:
self.max_Hosts = int(args['max_hosts'])
except (KeyError, ValueError):
LOG.warning("max_hosts has not been configured, "
"defaulting to 999.")
self.max_Hosts = 999
self.hosts = None
self.creds = args
self.session = session
if session is None:
self.session = api.VMwareAPISession(self.creds['auth_url'],
self.creds['username'],
self.creds['password'],
10, 1,
create_session=True)
self._init_end_start_poll()
@staticmethod
def get_datasource_info():
result = {}
result['id'] = 'vcenter'
result['description'] = ('Datasource driver that interfaces with '
'vcenter')
result['config'] = {'auth_url': constants.REQUIRED,
'username': constants.REQUIRED,
'password': constants.REQUIRED,
'poll_time': constants.OPTIONAL,
'max_vms': constants.OPTIONAL,
'max_hosts': constants.OPTIONAL}
result['secret'] = ['password']
return result
def update_from_datasource(self):
"""Called when it is time to pull new data from this datasource.
Pulls lists of objects from vCenter. If the data does not match
the corresponding table in the driver's raw state or has not yet been
added to the state, the driver calls methods to parse this data.
"""
hosts, pnics, vnics = self._get_hosts_and_nics()
self._translate_hosts(hosts)
self._translate_pnics(pnics)
self._translate_vnics(vnics)
vms = self._get_vms()
self._translate_vms(vms)
@ds_utils.update_state_on_changed(HOSTS)
def _translate_hosts(self, hosts):
"""Translate the host data from vCenter."""
row_data = VCenterDriver.convert_objs(hosts,
VCenterDriver.hosts_translator)
return row_data
@ds_utils.update_state_on_changed(HOST_PNICS)
def _translate_pnics(self, pnics):
"""Translate the host pnics data from vCenter."""
row_data = VCenterDriver.convert_objs(pnics,
VCenterDriver.pnic_translator)
return row_data
@ds_utils.update_state_on_changed(HOST_VNICS)
def _translate_vnics(self, vnics):
"""Translate the host vnics data from vCenter."""
row_data = VCenterDriver.convert_objs(vnics,
VCenterDriver.vnic_translator)
return row_data
def _get_hosts_and_nics(self):
"""Convert vCenter host object to simple format.
First the raw host data acquired from vCenter is parsed and
organized into a simple format that the Congress translators can read.
This creates three lists: hosts, pnics and vnics. These lists are then
parsed by the Congress translators to create tables.
"""
rawhosts = self._get_hosts_from_vcenter()
hosts = []
pnics = []
vnics = []
for host in rawhosts['objects']:
h = {}
h['vCenter_id'] = host.obj['value']
for prop in host['propSet']:
if prop.name == "hardware.systemInfo.uuid":
h['uuid'] = prop.val
break
for prop in host['propSet']:
if prop.name == "name":
h['name'] = prop.val
continue
if prop.name == "config.network.dnsConfig.address":
try:
h[self.HOST_DNS] = prop.val.string
except AttributeError:
h[self.HOST_DNS] = ["No DNS IP adddresses configured"]
continue
if prop.name == "config.network.pnic":
for pnic in prop.val.PhysicalNic:
p = {}
p['host_uuid'] = h['uuid']
p['mac'] = pnic['mac']
p['device'] = pnic['device']
p['ipAddress'] = pnic['spec']['ip']['ipAddress']
p['subnetMask'] = pnic['spec']['ip']['subnetMask']
pnics.append(p)
if prop.name == "config.network.vnic":
for vnic in prop.val.HostVirtualNic:
v = {}
v['host_uuid'] = h['uuid']
v['device'] = vnic['device']
v['portgroup'] = vnic['portgroup']
v['mac'] = vnic['spec']['mac']
v['ipAddress'] = vnic['spec']['ip']['ipAddress']
v['subnetMask'] = vnic['spec']['ip']['subnetMask']
vnics.append(v)
hosts.append(h)
# cache the hosts so _get_vms() can map runtime.host to a host uuid
self.hosts = hosts
return hosts, pnics, vnics
@ds_utils.update_state_on_changed(VMS)
def _translate_vms(self, vms):
"""Translate the VM data from vCenter."""
row_data = VCenterDriver.convert_objs(vms,
VCenterDriver.vms_translator)
return row_data
def _get_vms(self):
rawvms = self._get_vms_from_vcenter()
vms = []
for vm in rawvms['objects']:
v = {}
for prop in vm['propSet']:
if prop.name == "name":
v['name'] = prop.val
continue
if prop.name == "config.uuid":
v['uuid'] = prop.val
continue
if prop.name == "config.annotation":
v['annotation'] = prop.val
continue
if prop.name == "summary.config.vmPathName":
v['pathName'] = prop.val
continue
if prop.name == "summary.config.memorySizeMB":
v['memorySizeMB'] = prop.val
continue
if prop.name == "summary.quickStats":
v['MemoryUsage'] = prop.val['guestMemoryUsage']
v['CpuDemand'] = prop.val['overallCpuDemand']
v['CpuUsage'] = prop.val['overallCpuUsage']
continue
if prop.name == "summary.overallStatus":
v['status'] = prop.val
if prop.name == "summary.storage":
v['committedStorage'] = prop.val['committed']
v['uncommittedStorage'] = prop.val['uncommitted']
continue
if prop.name == 'runtime.host':
for host in self.hosts:
if host['vCenter_id'] == prop.val['value']:
v['host_uuid'] = host['uuid']
continue
continue
vms.append(v)
return vms
def _get_hosts_from_vcenter(self):
"""Called to pull host data from vCenter
"""
dataFields = ['name',
'hardware.systemInfo.uuid',
'config.network.dnsConfig.address',
'config.network.pnic',
'config.network.vnic']
return self.session.invoke_api(vim_util, 'get_objects',
self.session.vim, 'HostSystem',
self.max_Hosts, dataFields)
def _get_vms_from_vcenter(self):
"""Called to pull VM data from vCenter
"""
dataFields = ['name',
'config.uuid',
'config.annotation',
'summary.config.vmPathName',
'runtime.host',
'summary.config.memorySizeMB',
'summary.quickStats',
'summary.overallStatus',
'summary.storage']
return self.session.invoke_api(vim_util, 'get_objects',
self.session.vim, 'VirtualMachine',
self.max_VMs, dataFields)
def execute(self, action, action_args):
"""Overwrite ExecutionDriver.execute()."""
# action can be written as a method or an API call.
func = getattr(self, action, None)
if func and self.is_executable(func):
func(action_args)
else:
self._execute_api(self.session, action, action_args)
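As an aside for readers new to the translator schema at the top of this driver: the sketch below imitates, in a deliberately simplified form, what the HDICT and LIST translators do to one host record. The real work is done by the datasource_driver machinery via convert_objs and the TRANSLATORS list above; the table-name strings and the helper here are assumptions made for illustration only.
# Illustrative only -- a simplified stand-in for the HDICT/LIST
# translation performed by the datasource_driver machinery.
import uuid

def translate_host(host):
    """Flatten one host dict into (table, row) tuples, HDICT-style."""
    rows = []
    dns_id = str(uuid.uuid4())  # plays the role of the 'Host:DNS_id' col
    # LIST sub-translator: one row per DNS IP, keyed by the generated id
    for ip in host.get('dns_ips', []):
        rows.append(('hosts.dns', (dns_id, ip)))
    # HDICT translator: one row holding the selected scalar fields
    rows.append(('hosts', (host['name'], host['uuid'], dns_id)))
    return rows

example = {'name': 'esx-1', 'uuid': 'abc-123',
           'dns_ips': ['10.0.0.2', '10.0.0.3']}
for table, row in translate_host(example):
    print(table, row)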

View File

@ -1,130 +0,0 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_config import cfg
from oslo_db.sqlalchemy import session
_FACADE = None
def _create_facade_lazily():
global _FACADE
if _FACADE is None:
_FACADE = session.EngineFacade.from_config(cfg.CONF, sqlite_fk=True)
return _FACADE
def get_engine():
"""Helper method to grab engine."""
facade = _create_facade_lazily()
return facade.get_engine()
def get_session(autocommit=True, expire_on_commit=False, make_new=False):
"""Helper method to grab session."""
if make_new: # do not reuse existing facade
facade = session.EngineFacade.from_config(cfg.CONF, sqlite_fk=True)
else:
facade = _create_facade_lazily()
return facade.get_session(autocommit=autocommit,
expire_on_commit=expire_on_commit)
def get_locking_session():
"""Obtain db_session that works with table locking
supported backends: MySQL and PostgreSQL
return default session if backend not supported (eg. sqlite)
"""
if is_mysql() or is_postgres():
db_session = get_session(
autocommit=False,
# to prevent implicit new transactions,
# which UNLOCKS in MySQL
expire_on_commit=False, # need to UNLOCK after commit
make_new=True) # brand new facade avoids interference
else: # unsupported backend for locking (eg sqlite), return default
db_session = get_session()
return db_session
def lock_tables(session, tables):
"""Write-lock tables for supported backends: MySQL and PostgreSQL"""
session.begin(subtransactions=True)
if is_mysql(): # Explicitly LOCK TABLES for MySQL
session.execute('SET autocommit=0')
session.execute('LOCK TABLES {}'.format(
','.join([table + ' WRITE' for table in tables])))
elif is_postgres(): # Explicitly LOCK TABLE for Postgres
session.execute('BEGIN TRANSACTION')
for table in tables:
session.execute('LOCK TABLE {} IN EXCLUSIVE MODE'.format(table))
def commit_unlock_tables(session):
"""Commit and unlock tables for supported backends: MySQL and PostgreSQL"""
session.execute('COMMIT') # execute COMMIT on DB backend
session.commit()
# because sqlalchemy session does not guarantee
# exact boundary correspondence to DB backend transactions
# We must guarantee DB commits transaction before UNLOCK
# unlock
if is_mysql():
session.execute('UNLOCK TABLES')
# postgres automatically releases lock at transaction end
def rollback_unlock_tables(session):
"""Rollback and unlock tables
supported backends: MySQL and PostgreSQL
"""
# unlock
if is_mysql():
session.execute('UNLOCK TABLES')
# postgres automatically releases lock at transaction end
session.rollback()
def is_mysql():
"""Return true if and only if database backend is mysql"""
return (cfg.CONF.database.connection is not None and
(cfg.CONF.database.connection.split(':/')[0] == 'mysql' or
cfg.CONF.database.connection.split('+')[0] == 'mysql'))
def is_postgres():
"""Return true if and only if database backend is postgres"""
return (cfg.CONF.database.connection is not None and
(cfg.CONF.database.connection.split(':/')[0] == 'postgresql' or
cfg.CONF.database.connection.split('+')[0] == 'postgresql'))
def is_sqlite():
"""Return true if and only if database backend is sqlite"""
return (cfg.CONF.database.connection is not None and
(cfg.CONF.database.connection.split(':/')[0] == 'sqlite' or
cfg.CONF.database.connection.split('+')[0] == 'sqlite'))
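Taken together, these helpers are intended for a lock / mutate / commit-or-rollback pattern. A minimal usage sketch, assuming oslo.config has already been initialized so that cfg.CONF.database.connection points at a MySQL or PostgreSQL backend:
# Usage sketch only; assumes a configured deployment.
from congress.db import api as db_api

session = db_api.get_locking_session()
db_api.lock_tables(session, ['policies', 'policy_rules'])
try:
    # ... perform writes through `session` here ...
    db_api.commit_unlock_tables(session)    # COMMIT, then UNLOCK (MySQL)
except Exception:
    db_api.rollback_unlock_tables(session)  # UNLOCK, then rollback
    raise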

View File

@ -1,116 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import json
import sqlalchemy as sa
from sqlalchemy.orm import exc as db_exc
from congress.db import api as db
from congress.db import db_ds_table_data as table_data
from congress.db import model_base
class Datasource(model_base.BASE, model_base.HasId):
__tablename__ = 'datasources'
name = sa.Column(sa.String(255), unique=True)
driver = sa.Column(sa.String(255))
config = sa.Column(sa.Text(), nullable=False)
description = sa.Column(sa.Text(), nullable=True)
enabled = sa.Column(sa.Boolean, default=True)
def __init__(self, id_, name, driver, config, description,
enabled=True):
self.id = id_
self.name = name
self.driver = driver
self.config = json.dumps(config)
self.description = description
self.enabled = enabled
def add_datasource(id_, name, driver, config, description,
enabled, session=None):
session = session or db.get_session()
with session.begin(subtransactions=True):
datasource = Datasource(
id_=id_,
name=name,
driver=driver,
config=config,
description=description,
enabled=enabled)
session.add(datasource)
return datasource
def delete_datasource(id_, session=None):
session = session or db.get_session()
return session.query(Datasource).filter(
Datasource.id == id_).delete()
def delete_datasource_with_data(id_, session=None):
session = session or db.get_session()
with session.begin(subtransactions=True):
deleted = delete_datasource(id_, session)
table_data.delete_ds_table_data(id_, session=session)
return deleted
def get_datasource_name(name_or_id, session=None):
session = session or db.get_session()
datasource_obj = get_datasource(name_or_id, session)
if datasource_obj is not None:
return datasource_obj.name
return name_or_id
def get_datasource(name_or_id, session=None):
db_object = (get_datasource_by_name(name_or_id, session) or
get_datasource_by_id(name_or_id, session))
return db_object
def get_datasource_by_id(id_, session=None):
session = session or db.get_session()
try:
return (session.query(Datasource).
filter(Datasource.id == id_).
one())
except db_exc.NoResultFound:
pass
def get_datasource_by_name(name, session=None):
session = session or db.get_session()
try:
return (session.query(Datasource).
filter(Datasource.name == name).
one())
except db_exc.NoResultFound:
pass
def get_datasources(session=None, deleted=False):
session = session or db.get_session()
return (session.query(Datasource).
all())
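A short usage sketch of the lookup helpers; the detail worth noting is that get_datasource tries the name first and falls back to the id. The datasource values below are invented, the module path is assumed, and an initialized database is presumed:
import uuid
from congress.db import datasources  # module path assumed

ds = datasources.add_datasource(
    id_=str(uuid.uuid4()), name='vcenter1', driver='vcenter',
    config={'auth_url': 'https://vc.example.com'},
    description='example datasource', enabled=True)

# Both lookups resolve to the same row:
assert datasources.get_datasource('vcenter1').id == ds.id
assert datasources.get_datasource(ds.id).name == 'vcenter1'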

View File

@ -1,90 +0,0 @@
# Copyright (c) 2016 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import json
import sqlalchemy as sa
from sqlalchemy.orm import exc as db_exc
from congress.db import api as db
from congress.db import model_base
class DSTableData(model_base.BASE):
__tablename__ = 'dstabledata'
ds_id = sa.Column(sa.String(36), nullable=False, primary_key=True)
tablename = sa.Column(sa.String(255), nullable=False, primary_key=True)
# choose long length compatible with MySQL, SQLite, Postgres
tabledata = sa.Column(sa.Text(), nullable=False)
def store_ds_table_data(ds_id, tablename, tabledata, session=None):
session = session or db.get_session()
tabledata = _json_encode_table_data(tabledata)
with session.begin(subtransactions=True):
new_row = session.merge(DSTableData(
ds_id=ds_id,
tablename=tablename,
tabledata=tabledata))
return new_row
def delete_ds_table_data(ds_id, tablename=None, session=None):
session = session or db.get_session()
if tablename is None:
return session.query(DSTableData).filter(
DSTableData.ds_id == ds_id).delete()
else:
return session.query(DSTableData).filter(
DSTableData.ds_id == ds_id,
DSTableData.tablename == tablename).delete()
def get_ds_table_data(ds_id, tablename=None, session=None):
session = session or db.get_session()
try:
if tablename is None:
rows = session.query(DSTableData).filter(
DSTableData.ds_id == ds_id)
return_list = []
for row in rows:
return_list.append(
{'tablename': row.tablename,
'tabledata': _json_decode_table_data(row.tabledata)})
return return_list
else:
return _json_decode_table_data(session.query(DSTableData).filter(
DSTableData.ds_id == ds_id,
DSTableData.tablename == tablename).one().tabledata)
except db_exc.NoResultFound:
pass
def _json_encode_table_data(tabledata):
tabledata = list(tabledata)
for i in range(0, len(tabledata)):
tabledata[i] = list(tabledata[i])
return json.dumps(tabledata)
def _json_decode_table_data(json_tabledata):
tabledata = json.loads(json_tabledata)
for i in range(0, len(tabledata)):
tabledata[i] = tuple(tabledata[i])
return set(tabledata)
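The JSON round-trip above is what lets a set of row tuples live in a Text column. A self-contained behavior sketch of the two pure helpers (no database needed):
import json

def encode(tabledata):       # mirrors _json_encode_table_data
    return json.dumps([list(row) for row in tabledata])

def decode(json_tabledata):  # mirrors _json_decode_table_data
    return set(tuple(row) for row in json.loads(json_tabledata))

rows = {('vm-1', 512), ('vm-2', 1024)}
assert decode(encode(rows)) == rows  # tuples survive the Text column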

View File

@ -1,120 +0,0 @@
# Copyright (c) 2017 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import json
from oslo_db import exception as oslo_db_exc
import sqlalchemy as sa
from sqlalchemy.orm import exc as db_exc
from congress.db import api as db
from congress.db import model_base
class LibraryPolicy(model_base.BASE, model_base.HasId):
__tablename__ = 'library_policies'
name = sa.Column(sa.String(255), nullable=False, unique=True)
abbreviation = sa.Column(sa.String(5), nullable=False)
description = sa.Column(sa.Text(), nullable=False)
kind = sa.Column(sa.Text(), nullable=False)
rules = sa.Column(sa.Text(), nullable=False)
def to_dict(self, include_rules=True, json_rules=False):
"""From a given library policy, return a policy dict.
Args:
include_rules (bool, optional): include policy rules in the
returned dictionary. Defaults to True.
json_rules (bool, optional): if True, leave rules as a JSON
string instead of decoding them. Defaults to False.
"""
if not include_rules:
d = {'id': self.id,
'name': self.name,
'abbreviation': self.abbreviation,
'description': self.description,
'kind': self.kind}
else:
d = {'id': self.id,
'name': self.name,
'abbreviation': self.abbreviation,
'description': self.description,
'kind': self.kind,
'rules': (self.rules if json_rules
else json.loads(self.rules))}
return d
def add_policy(policy_dict, session=None):
session = session or db.get_session()
try:
with session.begin(subtransactions=True):
new_row = LibraryPolicy(
name=policy_dict['name'],
abbreviation=policy_dict['abbreviation'],
description=policy_dict['description'],
kind=policy_dict['kind'],
rules=json.dumps(policy_dict['rules']))
session.add(new_row)
return new_row
except oslo_db_exc.DBDuplicateEntry:
raise KeyError(
"Policy with name %s already exists" % policy_dict['name'])
def replace_policy(id_, policy_dict, session=None):
session = session or db.get_session()
try:
with session.begin(subtransactions=True):
new_row = LibraryPolicy(
id=id_,
name=policy_dict['name'],
abbreviation=policy_dict['abbreviation'],
description=policy_dict['description'],
kind=policy_dict['kind'],
rules=json.dumps(policy_dict['rules']))
session.query(LibraryPolicy).filter(
LibraryPolicy.id == id_).one().update(
new_row.to_dict(include_rules=True, json_rules=True))
return new_row
except db_exc.NoResultFound:
raise KeyError('No policy found with policy id %s' % id_)
def delete_policy(id_, session=None):
session = session or db.get_session()
return session.query(LibraryPolicy).filter(
LibraryPolicy.id == id_).delete()
def delete_policies(session=None):
session = session or db.get_session()
return session.query(LibraryPolicy).delete()
def get_policy(id_, session=None):
session = session or db.get_session()
try:
return session.query(LibraryPolicy).filter(
LibraryPolicy.id == id_).one()
except db_exc.NoResultFound:
raise KeyError('No policy found with policy id %s' % id_)
def get_policies(session=None):
session = session or db.get_session()
return (session.query(LibraryPolicy).all())
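Usage sketch for the library CRUD helpers; the policy content is invented and the module path is assumed. Note that a duplicate name surfaces as KeyError rather than a raw database exception:
from congress.db import db_library_policies as library  # path assumed

policy_dict = {
    'name': 'disallowed_flavors',
    'abbreviation': 'dflav',  # column allows at most 5 characters
    'description': 'flag VMs using disallowed flavors',
    'kind': 'nonrecursive',
    'rules': [{'rule': 'error(x) :- nova:servers(x, "m1.xlarge")'}]}

library.add_policy(policy_dict)
try:
    library.add_policy(policy_dict)  # same name again
except KeyError:
    pass  # duplicates are reported as KeyError, per add_policy above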

View File

@ -1,274 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from oslo_db import exception as oslo_db_exc
import sqlalchemy as sa
from sqlalchemy.orm import exc as db_exc
from congress.db import api as db
from congress.db import model_base
class Policy(model_base.BASE, model_base.HasId, model_base.HasAudit):
__tablename__ = 'policies'
# name is a human-readable string, so it can be referenced in policy
name = sa.Column(sa.String(255), nullable=False, unique=True)
abbreviation = sa.Column(sa.String(5), nullable=False)
description = sa.Column(sa.Text(), nullable=False)
owner = sa.Column(sa.Text(), nullable=False)
kind = sa.Column(sa.Text(), nullable=False)
def __init__(self, id_, name, abbreviation, description, owner, kind,
deleted=False):
self.id = id_
self.name = name
self.abbreviation = abbreviation
self.description = description
self.owner = owner
self.kind = kind
self.deleted = is_soft_deleted(id_, deleted)
def to_dict(self):
"""From a given database policy, return a policy dict."""
d = {'id': self.id,
'name': self.name,
'abbreviation': self.abbreviation,
'description': self.description,
'owner_id': self.owner,
'kind': self.kind}
return d
class PolicyDeleted(model_base.BASE, model_base.HasId, model_base.HasAudit):
__tablename__ = 'policiesdeleted'
# name is a human-readable string, so it can be referenced in policy
name = sa.Column(sa.String(255), nullable=False)
abbreviation = sa.Column(sa.String(5), nullable=False)
description = sa.Column(sa.Text(), nullable=False)
owner = sa.Column(sa.Text(), nullable=False)
kind = sa.Column(sa.Text(), nullable=False)
# overwrite some columns from HasAudit to stop value auto-generation
created_at = sa.Column(sa.DateTime, nullable=False)
updated_at = sa.Column(sa.DateTime, nullable=True)
def __init__(self, policy_obj):
'''Initialize a PolicyDeleted object by copying a Policy object.
Args:
policy_obj: a Policy object
'''
self.id = policy_obj.id
self.name = policy_obj.name
self.abbreviation = policy_obj.abbreviation
self.description = policy_obj.description
self.owner = policy_obj.owner
self.kind = policy_obj.kind
self.deleted = policy_obj.deleted
self.created_at = policy_obj.created_at
self.updated_at = policy_obj.updated_at
def add_policy(id_, name, abbreviation, description, owner, kind,
deleted=False, session=None):
if session:
# IMPORTANT: if session provided, do not interrupt existing transaction
# with BEGIN which can drop db locks and change desired transaction
# boundaries for proper commit and rollback
try:
policy = Policy(id_, name, abbreviation, description, owner,
kind, deleted)
session.add(policy)
return policy
except oslo_db_exc.DBDuplicateEntry:
raise KeyError("Policy with name %s already exists" % name)
# else
session = db.get_session()
try:
with session.begin(subtransactions=True):
policy = Policy(id_, name, abbreviation, description, owner,
kind, deleted)
session.add(policy)
return policy
except oslo_db_exc.DBDuplicateEntry:
raise KeyError("Policy with name %s already exists" % name)
def delete_policy(id_, session=None):
session = session or db.get_session()
with session.begin(subtransactions=True):
# delete all rules for that policy from database
policy = get_policy_by_id(id_, session=session)
for rule in get_policy_rules(policy.name, session=session):
delete_policy_rule(rule.id, session=session)
policy_deleted = PolicyDeleted(policy)
session.add(policy_deleted)
# hard delete policy in Policy table
session.query(Policy).filter(Policy.id == id_).delete()
# soft delete policy in PolicyDeleted table
return session.query(PolicyDeleted).filter(
PolicyDeleted.id == id_).soft_delete()
def get_policy_by_id(id_, session=None, deleted=False):
session = session or db.get_session()
try:
return (session.query(Policy).
filter(Policy.id == id_).
filter(Policy.deleted == is_soft_deleted(id_, deleted)).
one())
except db_exc.NoResultFound:
pass
def get_policy_by_name(name, session=None, deleted=False):
session = session or db.get_session()
try:
return (session.query(Policy).
filter(Policy.name == name).
filter(Policy.deleted == is_soft_deleted(name, deleted)).
one())
except db_exc.NoResultFound:
pass
def get_policy(name_or_id, session=None, deleted=False):
# Try to retrieve policy either by id or name
db_object = (get_policy_by_id(name_or_id, session, deleted) or
get_policy_by_name(name_or_id, session, deleted))
if not db_object:
raise KeyError("Policy Name or ID '%s' does not exist" % (name_or_id))
return db_object
def get_policies(session=None, deleted=False):
session = session or db.get_session()
return (session.query(Policy).
filter(Policy.deleted == '').
all())
def policy_name(name_or_id, session=None):
session = session or db.get_session()
try:
ans = (session.query(Policy).
filter(Policy.deleted == '').
filter(Policy.id == name_or_id).
one())
except db_exc.NoResultFound:
return name_or_id
return ans.name
class PolicyRule(model_base.BASE, model_base.HasId, model_base.HasAudit):
__tablename__ = "policy_rules"
# TODO(thinrichs): change this so instead of storing the policy name
# we store the policy's ID. Nontrivial since we often have the
# policy's name but not the ID; looking up the ID from the name
# outside of this class leads to race conditions, which means
# this class ought to be modified so that add/delete/etc. supports
# either name or ID as input.
rule = sa.Column(sa.Text(), nullable=False)
policy_name = sa.Column(sa.Text(), nullable=False)
comment = sa.Column(sa.String(255), nullable=False)
name = sa.Column(sa.String(255))
def __init__(self, id, policy_name, rule, comment, deleted=False,
rule_name=""):
self.id = id
self.policy_name = policy_name
self.rule = rule
# FIXME(arosen) we should not be passing None for comment here.
self.comment = comment or ""
self.deleted = is_soft_deleted(id, deleted)
self.name = rule_name
def to_dict(self):
d = {'rule': self.rule,
'id': self.id,
'comment': self.comment,
'name': self.name}
return d
def add_policy_rule(id, policy_name, rule, comment, deleted=False,
rule_name="", session=None):
if session:
# IMPORTANT: if session provided, do not interrupt existing transaction
# with BEGIN which can drop db locks and change desired transaction
# boundaries for proper commit and rollback
policy_rule = PolicyRule(id, policy_name, rule, comment,
deleted, rule_name=rule_name)
session.add(policy_rule)
return policy_rule
# else
session = db.get_session()
with session.begin(subtransactions=True):
policy_rule = PolicyRule(id, policy_name, rule, comment,
deleted, rule_name=rule_name)
session.add(policy_rule)
return policy_rule
def delete_policy_rule(id, session=None):
"""Specify either the ID or the NAME, and that policy is deleted."""
session = session or db.get_session()
return session.query(PolicyRule).filter(PolicyRule.id == id).soft_delete()
def get_policy_rule(id, policy_name, session=None, deleted=False):
session = session or db.get_session()
rule_query = (session.query(PolicyRule).
filter(PolicyRule.id == id).
filter(PolicyRule.deleted == is_soft_deleted(id, deleted)))
if policy_name:
rule_query = (rule_query.
filter(PolicyRule.policy_name == policy_name))
try:
return rule_query.one()
except db_exc.NoResultFound:
pass
def get_policy_rules(policy_name=None, session=None,
deleted=False):
session = session or db.get_session()
rule_query = session.query(PolicyRule)
if not deleted:
rule_query = rule_query.filter(PolicyRule.deleted == '')
else:
rule_query = rule_query.filter(PolicyRule.deleted != '')
if policy_name:
rule_query = rule_query.filter(PolicyRule.policy_name == policy_name)
return rule_query.all()
def is_soft_deleted(uuid, deleted):
return '' if deleted is False else uuid
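The soft-delete convention is worth spelling out: the deleted column holds the empty string for live rows and the row's own uuid once deleted, which is why the queries above filter on deleted == '' for live data. (A common motive for storing the uuid rather than a boolean is to let a composite unique constraint tolerate many deleted rows with the same name; this file does not show whether Congress relies on that.) Behavior sketch of the pure helper:
def is_soft_deleted(uuid, deleted):
    return '' if deleted is False else uuid

assert is_soft_deleted('abc-123', False) == ''        # live row
assert is_soft_deleted('abc-123', True) == 'abc-123'  # soft-deleted row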

View File

@ -1,80 +0,0 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
The migrations in the alembic/versions directory contain the changes needed to migrate
from older Congress releases to newer versions. A migration occurs by executing
a script that details the changes needed to upgrade/downgrade the database. The
migration scripts are ordered so that multiple scripts can run sequentially to
update the database. The scripts are executed by Congress's migration wrapper
which uses the Alembic library to manage the migration. Congress supports
migration from Juno or later.
If you are a deployer or developer and want to migrate from Juno to Kilo
or later you must first add version tracking to the database:
$ congress-db-manage --config-file /path/to/congress.conf stamp initial_db
You can then upgrade to the latest database version via:
$ congress-db-manage --config-file /path/to/congress.conf upgrade head
To check the current database version:
$ congress-db-manage --config-file /path/to/congress.conf current
To create a script to run the migration offline:
$ congress-db-manage --config-file /path/to/congress.conf upgrade head --sql
To run the offline migration between specific migration versions:
$ congress-db-manage --config-file /path/to/congress.conf \
upgrade <start version>:<end version> --sql
Upgrade the database incrementally:
$ congress-db-manage --config-file /path/to/congress.conf \
upgrade --delta <# of revs>
Downgrade the database by a certain number of revisions:
$ congress-db-manage --config-file /path/to/congress.conf \
downgrade --delta <# of revs>
DEVELOPERS:
A database migration script is required when you submit a change to Congress
that alters the database model definition. The migration script is a special
python file that includes code to update/downgrade the database to match the
changes in the model definition. Alembic will execute these scripts in order to
provide a linear migration path between revisions. The congress-db-manage command
can be used to generate a migration template for you to complete. The operations
in the template are those supported by the Alembic migration library.
$ congress-db-manage --config-file /path/to/congress.conf \
revision -m "description of revision" --autogenerate
This generates a prepopulated template with the changes needed to match the
database state with the models. You should inspect the autogenerated template
to ensure that the proper models have been altered.
In rare circumstances, you may want to start with an empty migration template
and manually author the changes necessary for an upgrade/downgrade. You can
create a blank file via:
$ congress-db-manage --config-file /path/to/congress.conf \
revision -m "description of revision"
The migration timeline should remain linear so that there is a clear path when
upgrading/downgrading. To verify that the timeline does not branch, you can
run this command:
$ congress-db-manage --config-file /path/to/congress.conf check_migration
If the migration path does branch, you can find the branch point via:
$ congress-db-manage --config-file /path/to/congress.conf history
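For orientation, an autogenerated revision file typically has the shape below; the revision ids, filename and column change are invented for illustration:
# Hypothetical file: alembic/versions/1a2b3c4d5e6f_add_enabled_column.py
"""add enabled column

Revision ID: 1a2b3c4d5e6f
Revises: 0f1e2d3c4b5a
"""
from alembic import op
import sqlalchemy as sa

revision = '1a2b3c4d5e6f'
down_revision = '0f1e2d3c4b5a'

def upgrade():
    op.add_column('datasources',
                  sa.Column('enabled', sa.Boolean(), default=True))

def downgrade():
    op.drop_column('datasources', 'enabled')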

View File

@ -1,128 +0,0 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import functools
from alembic import context
from alembic import op
import sqlalchemy as sa
def skip_if_offline(func):
"""Decorator for skipping migrations in offline mode."""
@functools.wraps(func)
def decorator(*args, **kwargs):
if context.is_offline_mode():
return
return func(*args, **kwargs)
return decorator
def raise_if_offline(func):
"""Decorator for raising if a function is called in offline mode."""
@functools.wraps(func)
def decorator(*args, **kwargs):
if context.is_offline_mode():
raise RuntimeError("%s cannot be called while in offline mode" %
func.__name__)
return func(*args, **kwargs)
return decorator
@raise_if_offline
def schema_has_table(table_name):
"""Check whether the specified table exists in the current schema.
This method cannot be executed in offline mode.
"""
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
return table_name in insp.get_table_names()
@raise_if_offline
def schema_has_column(table_name, column_name):
"""Check whether the specified column exists in the current schema.
This method cannot be executed in offline mode.
"""
bind = op.get_bind()
insp = sa.engine.reflection.Inspector.from_engine(bind)
# first check that the table exists
if not schema_has_table(table_name):
return
# check whether column_name exists in table columns
return column_name in [column['name'] for column in
insp.get_columns(table_name)]
@raise_if_offline
def alter_column_if_exists(table_name, column_name, **kwargs):
"""Alter a column only if it exists in the schema."""
if schema_has_column(table_name, column_name):
op.alter_column(table_name, column_name, **kwargs)
@raise_if_offline
def drop_table_if_exists(table_name):
if schema_has_table(table_name):
op.drop_table(table_name)
@raise_if_offline
def rename_table_if_exists(old_table_name, new_table_name):
if schema_has_table(old_table_name):
op.rename_table(old_table_name, new_table_name)
def alter_enum(table, column, enum_type, nullable):
bind = op.get_bind()
engine = bind.engine
if engine.name == 'postgresql':
values = {'table': table,
'column': column,
'name': enum_type.name}
op.execute("ALTER TYPE %(name)s RENAME TO old_%(name)s" % values)
enum_type.create(bind, checkfirst=False)
op.execute("ALTER TABLE %(table)s RENAME COLUMN %(column)s TO "
"old_%(column)s" % values)
op.add_column(table, sa.Column(column, enum_type, nullable=nullable))
op.execute("UPDATE %(table)s SET %(column)s = "
"old_%(column)s::text::%(name)s" % values)
op.execute("ALTER TABLE %(table)s DROP COLUMN old_%(column)s" % values)
op.execute("DROP TYPE old_%(name)s" % values)
else:
op.alter_column(table, column, type_=enum_type,
existing_nullable=nullable)
def create_table_if_not_exist_psql(table_name, values):
if op.get_bind().engine.dialect.server_version_info < (9, 1, 0):
op.execute("CREATE LANGUAGE plpgsql")
op.execute("CREATE OR REPLACE FUNCTION execute(TEXT) RETURNS VOID AS $$"
"BEGIN EXECUTE $1; END;"
"$$ LANGUAGE plpgsql STRICT;")
op.execute("CREATE OR REPLACE FUNCTION table_exist(TEXT) RETURNS bool as "
"$$ SELECT exists(select 1 from pg_class where relname=$1);"
"$$ language sql STRICT;")
op.execute("SELECT execute($$CREATE TABLE %(name)s %(columns)s $$) "
"WHERE NOT table_exist(%(name)r);" %
{'name': table_name,
'columns': values})
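A hedged example of how alter_enum might be called from a migration's upgrade(); the table, column and enum values are invented:
import sqlalchemy as sa

def upgrade():
    new_states = sa.Enum('enabled', 'disabled', 'error',
                         name='datasource_state')
    # On PostgreSQL this renames the old type, creates the new one and
    # copies the column across; on other backends it is a plain
    # ALTER COLUMN with the new type.
    alter_enum('datasources', 'state', new_states, nullable=False)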

View File

@ -1,52 +0,0 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = %(here)s/alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# default to an empty string because the Congress migration cli will
# extract the correct value and set it programmatically before alembic is fully
# invoked.
sqlalchemy.url =
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

Some files were not shown because too many files have changed in this diff.