Retire astara repo

Retire repository, following
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

Change-Id: I699a2ab0ce552cde94a4eecd85748862f4f64f95
Andreas Jaeger 2018-10-14 12:54:36 +02:00
parent 07e5dfe057
commit f0be3fddf6
47 changed files with 10 additions and 3152 deletions

.gitignore

@ -1,36 +0,0 @@
*.py[co]
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
#Translations
*.mo
#Mr Developer
.mr.developer.cfg
# Packaging output
*.deb
# pbr output
AUTHORS
ChangeLog
test.conf


@ -1,10 +0,0 @@
language: python
python:
- "2.7"
install:
- pip install -r test_requirements.txt --use-mirrors
- pip install flake8 --use-mirrors
- pip install -q . --use-mirrors
before_script:
- flake8 --show-source --ignore=E125 --statistics akanda test setup.py
script: nosetests -d

LICENSE

@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@ -1,7 +0,0 @@
# Astara Neutron
*Part of the [Astara Project](https://github.com/openstack/astara).*
Add-on API extensions for OpenStack Neutron that enable functionality and
integration with the Astara project, notably Astara router appliance
interaction and services for the Astara RUG orchestration service.

README.rst

@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.


@ -1,167 +0,0 @@
====================================================================
Akanda User-facing API implemented as a Neutron Resource Extension
====================================================================
Provides
========
Portforward
-----------
portforward.py, implemented under neutron/extensions, provides the ability
to create port forwarding rules.
Filterrule
----------
filterrule.py, implemented under neutron/extensions, provides the ability
to create firewall rules that are eventually implemented as OpenBSD
PF rules within the Akanda appliance.
AddressBook
-----------
addressbook.py, implemented under neutron/extensions, provides the ability
to administratively manage IP address groups that can be used in filter
rules.
Info
----
This is the home for the REST API that users will be calling directly with
their preferred REST tool (curl, Python wrapper, etc.).
This code could eventually become part of OpenStack Neutron, or serve as a
source of inspiration for code that does. As such, this API should be
constructed entirely with standard OpenStack tools.
Authz
-----
The resource extensions are implemented with the ability to leverage AuthZ.
In order to use AuthZ, update Neutron's policy file for the extension to work
with the following::
"create_portforward": [],
"get_portforward": [["rule:admin_or_owner"]],
"update_portforward": [["rule:admin_or_owner"]],
"delete_portforward": [["rule:admin_or_owner"]]
To use quotas, add to the QUOTAS section of neutron.conf::
quota_portforward = 10
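Under the hood, an ``admin_or_owner`` rule like those above boils down to a simple ownership check; a minimal pure-Python sketch of its semantics (the dict shapes here are illustrative, not Neutron's actual context objects):

```python
# Sketch of the "admin_or_owner" policy rule gating the extension's
# get/update/delete operations: admins may touch any resource, other
# callers only resources owned by their own tenant.
def admin_or_owner(context, resource):
    return context["is_admin"] or context["tenant_id"] == resource["tenant_id"]

owner_ctx = {"is_admin": False, "tenant_id": "t1"}
admin_ctx = {"is_admin": True, "tenant_id": "t2"}
other_ctx = {"is_admin": False, "tenant_id": "t2"}
target = {"tenant_id": "t1"}
```

Real deployments evaluate these rules through Neutron's policy engine rather than hand-rolled code; this only illustrates the decision the rule encodes.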
Installation - DevStack (single node setup)
===========================================
Preliminary Steps
-----------------
1. Create a localrc file under the devstack directory with the following::
MYSQL_PASSWORD=openstack
RABBIT_PASSWORD=openstack
SERVICE_TOKEN=openstack
SERVICE_PASSWORD=openstack
ADMIN_PASSWORD=openstack
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service neutron
enable_service q-l3
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
Q_PLUGIN=openvswitch
NOVA_USE_NEUTRON_API=v2
2. Run ./stack.sh until the stack account and the /opt/stack directory are created.
3. Run ./unstack.sh
Neutron Extensions install
--------------------------
<workdir> = https://github.com/dreamhost/akanda/tree/master/userapi_extensions/akanda/neutron
1. Clone neutron to /opt/stack using ``git clone https://github.com/openstack/neutron.git``
2. Change to the ``userapi_extensions`` dir within the Akanda project
3. Run ``python setup.py develop``
4. Return to devstack directory and replace the following lines::
- Q_PLUGIN_CLASS="neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2"
+ Q_PLUGIN_CLASS="akanda.neutron.plugins.ovs_neutron_plugin.OVSNeutronPluginV2"
5. Add the following line to load the extension right above Q_AUTH_STRATEGY::
+ iniset $Q_CONF_FILE DEFAULT api_extensions_path "extensions:/opt/stack/akanda/userapi_extensions/akanda/neutron/extensions"
6. Run ./stack.sh again to generate the required DB migrations and start the required services.
7. You should see something similar to the following (dhaddressbook in this
example), indicating that an extension loaded successfully; however, the
setup is not complete without quotas::
2012-09-11 09:17:04 INFO [neutron.api.extensions] Initializing extension manager.
2012-09-11 09:17:04 INFO [neutron.api.extensions] Loading extension file: _authzbase.py
2012-09-11 09:17:04 INFO [neutron.api.extensions] Loading extension file: addressbook.py
2012-09-11 09:17:04 DEBUG [neutron.api.extensions] Ext name: addressbook
2012-09-11 09:17:04 DEBUG [neutron.api.extensions] Ext alias: dhaddressbook
2012-09-11 09:17:04 DEBUG [neutron.api.extensions] Ext description: An addressbook extension
2012-09-11 09:17:04 DEBUG [neutron.api.extensions] Ext namespace: http://docs.dreamcompute.com/api/ext/v1.0
8. Switch to q-svc screen and press Ctrl-C
9. To enable quota support, stop q-svc and add the following to the
[QUOTA] section of ``/etc/neutron/neutron.conf``::
quota_portforward = 10
quota_filterrule = 100
quota_addressbook = 5
quota_addressbookgroup = 50
quota_addressbookentry = 250
10. Add the following to /etc/neutron/policy.json to enable policies::
"create_filterrule": [],
"get_filterrule": [["rule:admin_or_owner"]],
"update_filterrule": [["rule:admin_or_owner"]],
"delete_filterrule": [["rule:admin_or_owner"]],
"create_addressbook": [],
"get_addressbook": [["rule:admin_or_owner"]],
"update_addressbook": [["rule:admin_or_owner"]],
"delete_addressbook": [["rule:admin_or_owner"]],
"create_addressbookgroup": [],
"get_addressbookgroup": [["rule:admin_or_owner"]],
"update_addressbookgroup": [["rule:admin_or_owner"]],
"delete_addressbookgroup": [["rule:admin_or_owner"]],
"create_addressbookentry": [],
"get_addressbookentry": [["rule:admin_or_owner"]],
"update_addressbookentry": [["rule:admin_or_owner"]],
"delete_addressbookentry": [["rule:admin_or_owner"]],
"update_routerstatus": [["rule:admin_only"]]
11. Restart q-svc by using the up arrow to retrieve the command from history.
Appendix
--------
To manually start and stop Neutron Services under DevStack:
1. Run 'screen -x'. To show a list of screens, use Ctrl+A+" (double quote char)
2. Select q-svc. In most cases - Ctrl+A+1 should work.
3. Run the following to start Neutron or Ctrl+C to stop::
$ need-command-here
Gotchas
=======
1. There is no Neutron Model validation for source and destination
protocols in FilterRule. I.e., you can create forward rules between
UDP and TCP or anything else. Currently validation happens only in
Horizon. If you use the API directly, you are on your own!
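The missing server-side check described in point 1 could look roughly like the following (a purely illustrative sketch; Astara never shipped this validation, and the function name and protocol set are hypothetical):

```python
# Hypothetical model-level validation for FilterRule: reject unknown
# protocols and mixed-protocol forward rules (e.g. UDP -> TCP), the
# check the README notes is performed only in Horizon.
VALID_PROTOCOLS = {"tcp", "udp", "icmp"}

def validate_filter_rule(src_protocol, dst_protocol):
    if src_protocol not in VALID_PROTOCOLS or dst_protocol not in VALID_PROTOCOLS:
        raise ValueError("unknown protocol: %s/%s" % (src_protocol, dst_protocol))
    if src_protocol != dst_protocol:
        raise ValueError("source and destination protocols must match")
    return True
```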


@ -1,15 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1 +0,0 @@
Generic single-database configuration.


@ -1,85 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from logging import config as logging_config
from alembic import context
from neutron.db import model_base
from oslo_config import cfg
from oslo_db.sqlalchemy import session
import sqlalchemy as sa
from sqlalchemy import event
MYSQL_ENGINE = None
ASTARA_NEUTRON_VERSION_TABLE = 'alembic_version_astara_neutron'
config = context.config
neutron_config = config.neutron_config
logging_config.fileConfig(config.config_file_name)
target_metadata = model_base.BASEV2.metadata
def set_mysql_engine():
try:
mysql_engine = neutron_config.command.mysql_engine
except cfg.NoSuchOptError:
mysql_engine = None
global MYSQL_ENGINE
MYSQL_ENGINE = (mysql_engine or
model_base.BASEV2.__table_args__['mysql_engine'])
def run_migrations_offline():
set_mysql_engine()
kwargs = dict()
if neutron_config.database.connection:
kwargs['url'] = neutron_config.database.connection
else:
kwargs['dialect_name'] = neutron_config.database.engine
kwargs['version_table'] = ASTARA_NEUTRON_VERSION_TABLE
context.configure(**kwargs)
with context.begin_transaction():
context.run_migrations()
@event.listens_for(sa.Table, 'after_parent_attach')
def set_storage_engine(target, parent):
if MYSQL_ENGINE:
target.kwargs['mysql_engine'] = MYSQL_ENGINE
def run_migrations_online():
set_mysql_engine()
engine = session.create_engine(neutron_config.database.connection)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata,
version_table=ASTARA_NEUTRON_VERSION_TABLE
)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
engine.dispose()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()


@ -1,36 +0,0 @@
# Copyright ${create_date.year} <PUT YOUR NAME/COMPANY HERE>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
% if branch_labels:
branch_labels = ${repr(branch_labels)}
%endif
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}


@ -1 +0,0 @@
a999bcf20008


@ -1,44 +0,0 @@
# Copyright 2016 <PUT YOUR NAME/COMPANY HERE>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from alembic import op
import sqlalchemy as sa
"""empty message
Revision ID: a999bcf20008
Revises: start_astara_neutron
Create Date: 2016-03-14 14:09:43.025886
"""
# revision identifiers, used by Alembic.
revision = 'a999bcf20008'
down_revision = 'start_astara_neutron'
def upgrade():
op.create_table(
'astara_byonf',
sa.Column('tenant_id', sa.String(length=255), nullable=False),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('function_type', sa.String(length=255), nullable=False),
sa.Column('driver', sa.String(length=36), nullable=False),
sa.Column('image_uuid', sa.String(length=36), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('tenant_id', 'function_type',
name='uix_tenant_id_function'),
)


@ -1,30 +0,0 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""start astara-neutron chain
Revision ID: start_astara_neutron
Revises: None
Create Date: 2015-03-14 11:06:18.196062
"""
# revision identifiers, used by Alembic.
revision = 'start_astara_neutron'
down_revision = None
def upgrade():
pass


@ -1,28 +0,0 @@
# Copyright (c) 2016 Akanda, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa
from neutron.db import model_base, models_v2
class Byonf(model_base.BASEV2, models_v2.HasId, models_v2.HasTenant):
__tablename__ = 'astara_byonf'
function_type = sa.Column(sa.String(length=255), nullable=False)
driver = sa.Column(sa.String(length=36), nullable=False)
image_uuid = sa.Column(sa.String(length=36), nullable=False)
__table_args__ = (
sa.UniqueConstraint(
'tenant_id', 'function_type', name='uix_tenant_id_function'),
)


@ -1,15 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,177 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from neutron.api.v2 import base
from neutron.api.v2 import resource as api_resource
from neutron.common import exceptions as q_exc
class ResourcePlugin(object):
"""
This is a class does some of what the Neutron plugin does, managing
resources in a way very similar to what Neutron does. It differ from
Neutron is that this provides a base plugin infrastructure, and doesn't
manage any resources.
Neutron doesn't split infrastructure and implementation.
"""
JOINS = ()
def __init__(self, delegate):
# synthesize the hooks because Neutron's base class uses the
# resource name as part of the method name
setattr(self, 'get_%s' % delegate.collection_name,
self._get_collection)
setattr(self, 'get_%s' % delegate.resource_name, self._get_item)
setattr(self, 'update_%s' % delegate.resource_name, self._update_item)
setattr(self, 'create_%s' % delegate.resource_name, self._create_item)
setattr(self, 'delete_%s' % delegate.resource_name, self._delete_item)
self.delegate = delegate
def _get_tenant_id_for_create(self, context, resource):
if context.is_admin and 'tenant_id' in resource:
tenant_id = resource['tenant_id']
elif ('tenant_id' in resource and
resource['tenant_id'] != context.tenant_id):
reason = 'Cannot create resource for another tenant'
raise q_exc.AdminRequired(reason=reason)
else:
tenant_id = context.tenant_id
return tenant_id
def _model_query(self, context):
query = context.session.query(self.delegate.model)
# NOTE(jkoelker) non-admin queries are scoped to their tenant_id
if not context.is_admin and hasattr(self.delegate.model, 'tenant_id'):
query = query.filter(
self.delegate.model.tenant_id == context.tenant_id)
return query
def _apply_filters_to_query(self, query, model, filters):
if filters:
for key, value in filters.iteritems():
column = getattr(model, key, None)
if column:
query = query.filter(column.in_(value))
return query
def _get_collection(self, context, filters=None, fields=None):
collection = self._model_query(context)
collection = self._apply_filters_to_query(collection,
self.delegate.model,
filters)
return [self._fields(self.delegate.make_dict(c), fields) for c in
collection.all()]
def _get_by_id(self, context, id):
query = self._model_query(context)
return query.filter_by(id=id).one()
def _get_item(self, context, id, fields=None):
obj = self._get_by_id(context, id)
return self._fields(self.delegate.make_dict(obj), fields)
def _update_item(self, context, id, **kwargs):
key = self.delegate.resource_name
resource_dict = kwargs[key][key]
obj = self._get_by_id(context, id)
return self.delegate.update(context, obj, resource_dict)
def _create_item(self, context, **kwargs):
key = self.delegate.resource_name
resource_dict = kwargs[key][key]
tenant_id = self._get_tenant_id_for_create(context, resource_dict)
return self.delegate.create(context, tenant_id, resource_dict)
def _delete_item(self, context, id):
obj = self._get_by_id(context, id)
with context.session.begin():
self.delegate.before_delete(obj)
context.session.delete(obj)
def _fields(self, resource, fields):
if fields:
return dict([(key, item) for key, item in resource.iteritems()
if key in fields])
return resource
class ResourceDelegateInterface(object):
"""
An abstract marker class that defines the interface of RESTful resources.
"""
__metaclass__ = abc.ABCMeta
def before_delete(self, resource):
pass
@abc.abstractproperty
def model(self):
pass
@abc.abstractproperty
def resource_name(self):
pass
@abc.abstractproperty
def collection_name(self):
pass
@property
def joins(self):
return ()
@abc.abstractmethod
def update(self, context, resource, body):
pass
@abc.abstractmethod
def create(self, context, tenant_id, body):
pass
@abc.abstractmethod
def make_dict(self, obj):
pass
class ResourceDelegate(ResourceDelegateInterface):
"""
This class partially implements the ResourceDelegateInterface, providing
common code for use by child classes that inherit from it.
"""
def create(self, context, tenant_id, body):
with context.session.begin(subtransactions=True):
item = self.model(**body)
context.session.add(item)
return self.make_dict(item)
def update(self, context, resource, resource_dict):
with context.session.begin(subtransactions=True):
resource.update(resource_dict)
context.session.add(resource)
return self.make_dict(resource)
def create_extension(delegate):
"""
"""
return api_resource.Resource(base.Controller(ResourcePlugin(delegate),
delegate.collection_name,
delegate.resource_name,
delegate.ATTRIBUTE_MAP))
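ResourcePlugin synthesizes its CRUD entry points at runtime because Neutron's generic controller dispatches to methods named after the resource (``get_<resource>``, ``get_<collection>``, and so on). The same setattr pattern in isolation, with purely illustrative names:

```python
# Stand-alone sketch of the setattr pattern ResourcePlugin's constructor
# uses: entry points are generated from the delegate's resource and
# collection names so a generic controller can find them by name.
class MiniPlugin(object):
    def __init__(self, resource_name, collection_name):
        setattr(self, 'get_%s' % collection_name, self._get_collection)
        setattr(self, 'get_%s' % resource_name, self._get_item)

    def _get_collection(self):
        return []

    def _get_item(self, item_id):
        return {'id': item_id}

plugin = MiniPlugin('byonf', 'byonfs')
```

Because ``self._get_collection`` is already bound when assigned, ``plugin.get_byonfs()`` and ``plugin.get_byonf(id)`` behave exactly like ordinarily defined methods.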


@ -1,118 +0,0 @@
# Copyright 2014 DreamHost, LLC
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api import extensions
from neutron.api.v2 import attributes as attr
from astara_neutron.extensions import _authzbase
from astara_neutron.db.models import models
import oslo_db.exception as db_exc
import webob.exc
class ByonfResource(_authzbase.ResourceDelegate):
"""This resource is intended as a private API that allows the rug to chan
the supporting network function.
"""
model = models.Byonf
resource_name = 'byonf'
collection_name = 'byonfs'
ATTRIBUTE_MAP = {
'tenant_id': {
'allow_post': True,
'allow_put': True,
'is_visible': True,
'validate': {'type:string': attr.TENANT_ID_MAX_LEN},
}, 'id': {
'allow_post': False,
'allow_put': False,
'is_visible': True
},
'image_uuid': {
'allow_post': True,
'allow_put': True,
'is_visible': True,
'enforce_policy': True,
'required_by_policy': True,
'validate': {'type:uuid': None}
},
'function_type': {
'allow_post': True,
'allow_put': True,
'is_visible': True,
'enforce_policy': True,
'required_by_policy': True
},
'driver': {
'allow_post': True,
'allow_put': True,
'is_visible': True,
'enforce_policy': True,
'required_by_policy': True
}
}
def create(self, context, tenant_id, resource_dict):
try:
return super(ByonfResource, self).create(
context, tenant_id, resource_dict)
except db_exc.DBDuplicateEntry:
raise webob.exc.HTTPConflict(
'Tenant %s already has driver association for function: %s' %
(resource_dict['tenant_id'], resource_dict['function_type']))
def make_dict(self, byo):
"""
Convert a Byo model object to a dictionary.
"""
return {
'tenant_id': byo['tenant_id'],
'image_uuid': byo['image_uuid'],
'function_type': byo['function_type'],
'driver': byo['driver'],
'id': byo['id']
}
class Byonf(extensions.ExtensionDescriptor):
"""
"""
def get_name(self):
return "byonf"
def get_alias(self):
return "byonf"
def get_description(self):
return "A byonf extension"
def get_namespace(self):
return 'http://docs.openstack.org/api/ext/v1.0'
def get_updated(self):
return "2015-12-07T09:14:43-05:00"
def get_resources(self):
return [extensions.ResourceExtension(
'byonf',
_authzbase.create_extension(ByonfResource()))]
def get_actions(self):
return []
def get_request_extensions(self):
return []
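
The ByonfResource above exposes only a fixed set of columns through `make_dict`. A standalone sketch of that mapping (outside Neutron, with hypothetical field values; the helper and record below are illustrative, not part of the extension):

```python
# Mirror of ByonfResource.make_dict: expose only the public API fields
# and drop any other columns the model row may carry.

def byonf_to_dict(byo):
    """Convert a byonf record (any mapping) to its API dictionary."""
    return {
        'tenant_id': byo['tenant_id'],
        'image_uuid': byo['image_uuid'],
        'function_type': byo['function_type'],
        'driver': byo['driver'],
        'id': byo['id'],
    }

# Hypothetical row, standing in for a models.Byonf instance.
record = {
    'tenant_id': 'tenant-1',
    'image_uuid': '11111111-2222-3333-4444-555555555555',
    'function_type': 'router',
    'driver': 'astara',
    'id': 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
    'internal_column': 'not exposed',  # dropped by the mapping
}

api_view = byonf_to_dict(record)
print(sorted(api_view))
# ['driver', 'function_type', 'id', 'image_uuid', 'tenant_id']
```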


@@ -1,92 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api import extensions
from neutron_lbaas.db.loadbalancer import models
from astara_neutron.extensions import _authzbase
class LoadbalancerstatusResource(_authzbase.ResourceDelegate):
"""This resource is intended as a private API that allows the rug to change
    a load balancer's status (which is normally a read-only attribute)
"""
model = models.LoadBalancer
resource_name = 'loadbalancerstatus'
collection_name = 'loadbalancerstatuses'
ATTRIBUTE_MAP = {
'tenant_id': {
'allow_post': False,
'allow_put': False,
'is_visible': False
},
'operating_status': {
'allow_post': False,
'allow_put': True,
'is_visible': True,
'enforce_policy': True,
'required_by_policy': True
},
'provisioning_status': {
'allow_post': False,
'allow_put': True,
'is_visible': True,
'enforce_policy': True,
'required_by_policy': True
}
}
def make_dict(self, loadbalancer):
"""
Convert a loadbalancer model object to a dictionary.
"""
return {
'tenant_id': loadbalancer['tenant_id'],
'operating_status': loadbalancer['operating_status'],
'provisioning_status': loadbalancer['provisioning_status'],
}
class Loadbalancerstatus(extensions.ExtensionDescriptor):
"""
"""
@classmethod
def get_name(cls):
return "loadbalancerstatus"
@classmethod
def get_alias(cls):
return "akloadbalancerstatus"
@classmethod
def get_description(cls):
return "A loadbalancer-status extension"
@classmethod
def get_namespace(cls):
return 'http://docs.dreamcompute.com/api/ext/v1.0'
@classmethod
def get_updated(cls):
return "2015-10-09T09:14:43-05:00"
@classmethod
def get_resources(cls):
return [extensions.ResourceExtension(
'akloadbalancerstatus',
_authzbase.create_extension(LoadbalancerstatusResource()))]


@@ -1,127 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api import extensions
from neutron.db.l3_db import Router
from neutron.db import models_v2
from astara_neutron.extensions import _authzbase
STATUS_ACTIVE = 'ACTIVE'
STATUS_DOWN = 'DOWN'
class RouterstatusResource(_authzbase.ResourceDelegate):
"""This resource is intended as a private API that allows the rug to change
a router's status (which is normally a read-only attribute)
"""
model = Router
resource_name = 'routerstatus'
collection_name = 'routerstatuses'
ATTRIBUTE_MAP = {
'tenant_id': {
'allow_post': False,
'allow_put': False,
'is_visible': False
},
'status': {
'allow_post': False,
'allow_put': True,
'is_visible': True,
'enforce_policy': True,
'required_by_policy': True
}
}
def make_dict(self, router):
"""
Convert a router model object to a dictionary.
"""
return {
'tenant_id': router['tenant_id'],
'status': router['status']
}
def update(self, context, resource, resource_dict):
with context.session.begin(subtransactions=True):
resource.update(resource_dict)
context.session.add(resource)
# sync logical ports to backing port status
for router_port in resource.attached_ports:
if router_port.port.status != resource.status:
self._update_port_status(
context,
resource,
router_port.port
)
context.session.add(router_port.port)
return self.make_dict(resource)
def _update_port_status(self, context, router, port):
# assume port is down until proven otherwise
next_status = STATUS_DOWN
        # find backing ports; the name pattern works with both ASTARA and AKANDA
partial_name = 'A%%:VRRP:%s' % router.id
query = context.session.query(models_v2.Port)
query = query.filter(
models_v2.Port.network_id == port.network_id,
models_v2.Port.name.like(partial_name)
)
for backing_port in query.all():
if not backing_port.device_owner or not backing_port.device_id:
continue
next_status = backing_port.status
if next_status != STATUS_ACTIVE:
break
port.status = next_status
class Routerstatus(extensions.ExtensionDescriptor):
"""
"""
@classmethod
def get_name(cls):
return "routerstatus"
@classmethod
def get_alias(cls):
return "dhrouterstatus"
@classmethod
def get_description(cls):
return "A router-status extension"
@classmethod
def get_namespace(cls):
return 'http://docs.dreamcompute.com/api/ext/v1.0'
@classmethod
def get_updated(cls):
return "2014-06-04T09:14:43-05:00"
@classmethod
def get_resources(cls):
return [extensions.ResourceExtension(
'dhrouterstatus',
_authzbase.create_extension(RouterstatusResource()))]
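
The `_update_port_status` loop above derives a logical port's status from its backing VRRP ports: the port is DOWN unless every plugged backing port is ACTIVE, and the first non-active backing status wins. A standalone sketch of just that decision (the helper and tuple shape are illustrative, not part of the extension):

```python
# Backing-port status logic from _update_port_status, extracted so it can
# run outside Neutron. Each backing port is (device_owner, device_id, status).

STATUS_ACTIVE = 'ACTIVE'
STATUS_DOWN = 'DOWN'

def derive_port_status(backing_ports):
    """Return the status the logical port should report."""
    # assume the port is down until proven otherwise
    next_status = STATUS_DOWN
    for device_owner, device_id, status in backing_ports:
        if not device_owner or not device_id:
            continue  # unplugged backing port: ignore it
        next_status = status
        if next_status != STATUS_ACTIVE:
            break  # any non-active backing port decides the result
    return next_status

print(derive_port_status([('compute:nova', 'vm-1', 'ACTIVE')]))   # ACTIVE
print(derive_port_status([('compute:nova', 'vm-1', 'ACTIVE'),
                          ('compute:nova', 'vm-2', 'BUILD')]))    # BUILD
print(derive_port_status([('', '', 'ACTIVE')]))                   # DOWN
```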


@@ -1,15 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,341 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import netaddr
import logging
import random
from neutron.api.v2 import attributes
from neutron.common.config import cfg
from neutron.common import exceptions as q_exc
from neutron.db import models_v2 as qmodels
from neutron.db import l3_db
from neutron._i18n import _
from neutron import manager
from neutron.plugins.common import constants
IPV6_ASSIGNMENT_ATTEMPTS = 1000
LOG = logging.getLogger(__name__)
astara_opts = [
cfg.StrOpt(
'astara_ipv6_tenant_range',
default='fdd6:a1fa:cfa8::/48',
help='IPv6 address prefix',
deprecated_opts=[
cfg.DeprecatedOpt('akanda_ipv6_tenant_range')
]),
cfg.IntOpt(
'astara_ipv6_prefix_length',
default=64,
help='Default length of prefix to pre-assign',
deprecated_opts=[
cfg.DeprecatedOpt('akanda_ipv6_prefix_length')
]),
cfg.ListOpt(
'astara_allowed_cidr_ranges',
default=['10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16', 'fc00::/7'],
help='List of allowed subnet cidrs for non-admin users',
deprecated_opts=[
cfg.DeprecatedOpt('akanda_allowed_cidr_ranges')
]),
cfg.BoolOpt(
'astara_auto_add_resources',
default=True,
help='Attempt to auto add resources to speed up network construction'
)
]
cfg.CONF.register_opts(astara_opts)
SUPPORTED_EXTENSIONS = [
'dhrouterstatus',
'byonf'
]
def auto_add_ipv6_subnet(f):
@functools.wraps(f)
def wrapper(self, context, network):
LOG.debug('auto_add_ipv6_subnet')
net = f(self, context, network)
if cfg.CONF.astara_auto_add_resources:
_add_ipv6_subnet(context, net)
return net
return wrapper
def auto_add_subnet_to_router(f):
@functools.wraps(f)
def wrapper(self, context, subnet):
LOG.debug('auto_add_subnet_to_router')
check_subnet_cidr_meets_policy(context, subnet)
subnet = f(self, context, subnet)
if cfg.CONF.astara_auto_add_resources:
_add_subnet_to_router(context, subnet)
return subnet
return wrapper
# NOTE(mark): in Havana gateway_ip cannot be updated; leaving this here in
# case the ability returns in Icehouse.
def sync_subnet_gateway_port(f):
@functools.wraps(f)
def wrapper(self, context, id, subnet):
LOG.debug('sync_subnet_gateway_port')
retval = f(self, context, id, subnet)
_update_internal_gateway_port_ip(context, retval)
return retval
return wrapper
def check_subnet_cidr_meets_policy(context, subnet):
if context.is_admin:
return
elif getattr(context, '_astara_auto_add', None):
return
net = netaddr.IPNetwork(subnet['subnet']['cidr'])
for allowed_cidr in cfg.CONF.astara_allowed_cidr_ranges:
if net in netaddr.IPNetwork(allowed_cidr):
return
else:
reason = _('Cannot create a subnet that is not within the '
'allowed address ranges [%s].' %
cfg.CONF.astara_allowed_cidr_ranges)
raise q_exc.AdminRequired(reason=reason)
def get_special_ipv6_addrs(ips, mac_address):
current_ips = set(ips)
special_ips = set([_generate_ipv6_address('fe80::/64', mac_address)])
astara_ipv6_cidr = netaddr.IPNetwork(cfg.CONF.astara_ipv6_tenant_range)
for ip in current_ips:
if '/' not in ip and netaddr.IPAddress(ip) in astara_ipv6_cidr:
# Calculate the cidr here because the caller does not have access
# to request context, subnet or port_id.
special_ips.add(
'%s/%s' % (
netaddr.IPAddress(
netaddr.IPNetwork(
'%s/%d' % (ip, cfg.CONF.astara_ipv6_prefix_length)
).first
),
cfg.CONF.astara_ipv6_prefix_length
)
)
return special_ips - current_ips
def _add_subnet_to_router(context, subnet):
LOG.debug('_add_subnet_to_router')
if context.is_admin:
# admins can manually add their own interfaces
return
if not subnet.get('gateway_ip'):
return
service_plugin = manager.NeutronManager.get_service_plugins().get(
constants.L3_ROUTER_NAT)
router_q = context.session.query(l3_db.Router)
router_q = router_q.filter_by(tenant_id=context.tenant_id)
router = router_q.first()
if not router:
router_args = {
'tenant_id': subnet['tenant_id'],
'name': 'ak-%s' % subnet['tenant_id'],
'admin_state_up': True
}
router = service_plugin.create_router(context, {'router': router_args})
if not _update_internal_gateway_port_ip(context, router['id'], subnet):
service_plugin.add_router_interface(context.elevated(),
router['id'],
{'subnet_id': subnet['id']})
def _update_internal_gateway_port_ip(context, router_id, subnet):
"""Attempt to update internal gateway port if one already exists."""
LOG.debug(
'setting gateway port IP for router %s on network %s for subnet %s',
router_id,
subnet['network_id'],
subnet['id'],
)
if not subnet.get('gateway_ip'):
LOG.debug('no gateway set for subnet %s, skipping', subnet['id'])
return
q = context.session.query(l3_db.RouterPort)
q = q.join(qmodels.Port)
q = q.filter(
l3_db.RouterPort.router_id == router_id,
l3_db.RouterPort.port_type == l3_db.DEVICE_OWNER_ROUTER_INTF,
qmodels.Port.network_id == subnet['network_id']
)
routerport = q.first()
if not routerport:
LOG.info(
'Unable to find a %s port for router %s on network %s.'
% ('DEVICE_OWNER_ROUTER_INTF', router_id, subnet['network_id'])
)
return
fixed_ips = [
{'subnet_id': ip["subnet_id"], 'ip_address': ip["ip_address"]}
for ip in routerport.port["fixed_ips"]
]
plugin = manager.NeutronManager.get_plugin()
service_plugin = manager.NeutronManager.get_service_plugins().get(
constants.L3_ROUTER_NAT)
for index, ip in enumerate(fixed_ips):
if ip['subnet_id'] == subnet['id']:
if not subnet['gateway_ip']:
del fixed_ips[index]
elif ip['ip_address'] != subnet['gateway_ip']:
ip['ip_address'] = subnet['gateway_ip']
else:
return True # nothing to update
break
else:
try:
service_plugin._check_for_dup_router_subnet(
context,
routerport.router,
subnet['network_id'],
subnet['id'],
subnet['cidr']
)
        except Exception:
LOG.info(
('Subnet %(id)s will not be auto added to router because '
'%(gateway_ip)s is already in use by another attached '
'network attached to this router.'),
subnet
)
return True # nothing to add
fixed_ips.append(
{'subnet_id': subnet['id'], 'ip_address': subnet['gateway_ip']}
)
# we call into the plugin vs updating the db directly because of l3 hooks
# baked into the plugins.
port_dict = {'fixed_ips': fixed_ips}
plugin.update_port(
context.elevated(),
routerport.port['id'],
{'port': port_dict}
)
return True
def _add_ipv6_subnet(context, network):
plugin = manager.NeutronManager.get_plugin()
try:
subnet_generator = _ipv6_subnet_generator(
cfg.CONF.astara_ipv6_tenant_range,
cfg.CONF.astara_ipv6_prefix_length)
    except Exception:
        LOG.exception('Unable to add tenant IPv6 subnet.')
return
remaining = IPV6_ASSIGNMENT_ATTEMPTS
while remaining:
remaining -= 1
        candidate_cidr = next(subnet_generator)
sub_q = context.session.query(qmodels.Subnet)
sub_q = sub_q.filter_by(cidr=str(candidate_cidr))
existing = sub_q.all()
if not existing:
create_args = {
'tenant_id': network['tenant_id'],
'network_id': network['id'],
'name': '',
'cidr': str(candidate_cidr),
'ip_version': candidate_cidr.version,
'enable_dhcp': True,
'ipv6_address_mode': 'slaac',
'ipv6_ra_mode': 'slaac',
'gateway_ip': attributes.ATTR_NOT_SPECIFIED,
'dns_nameservers': attributes.ATTR_NOT_SPECIFIED,
'host_routes': attributes.ATTR_NOT_SPECIFIED,
'allocation_pools': attributes.ATTR_NOT_SPECIFIED
}
context._astara_auto_add = True
plugin.create_subnet(context, {'subnet': create_args})
del context._astara_auto_add
break
else:
LOG.error('Unable to generate a unique tenant subnet cidr')
def _ipv6_subnet_generator(network_range, prefixlen):
# coerce prefixlen to stay within bounds
prefixlen = min(128, prefixlen)
net = netaddr.IPNetwork(network_range)
if net.version != 6:
raise ValueError('Tenant range %s is not a valid IPv6 cidr' %
network_range)
if prefixlen < net.prefixlen:
raise ValueError('Prefixlen (/%d) must be larger than the network '
'range prefixlen (/%s)' % (prefixlen, net.prefixlen))
rand = random.SystemRandom()
max_range = 2 ** (prefixlen - net.prefixlen)
while True:
        rand_bits = rand.randint(0, max_range - 1)
        candidate_cidr = netaddr.IPNetwork(
            netaddr.IPAddress(net.value + (rand_bits << (128 - prefixlen))))
candidate_cidr.prefixlen = prefixlen
yield candidate_cidr
# Note(rods): we need to keep this method until the nsx driver is
# updated to use neutron's native support for slaac
def _generate_ipv6_address(cidr, mac_address):
network = netaddr.IPNetwork(cidr)
tokens = ['%02x' % int(t, 16) for t in mac_address.split(':')]
eui64 = int(''.join(tokens[0:3] + ['ff', 'fe'] + tokens[3:6]), 16)
# the bit inversion is required by the RFC
return str(netaddr.IPAddress(network.value + (eui64 ^ 0x0200000000000000)))
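
The `_generate_ipv6_address` helper above builds a modified EUI-64 interface identifier from a MAC address (insert `ff:fe` in the middle, flip the universal/local bit) and adds it to the network prefix. A sketch of the same derivation using only the standard library `ipaddress` module instead of netaddr, purely for illustration:

```python
# EUI-64 address derivation as in _generate_ipv6_address, reimplemented
# with the stdlib so it can run without netaddr.
import ipaddress

def eui64_address(cidr, mac):
    net = ipaddress.IPv6Network(cidr)
    octets = [int(t, 16) for t in mac.split(':')]
    # insert ff:fe in the middle of the 48-bit MAC to form 64 bits
    eui = octets[0:3] + [0xff, 0xfe] + octets[3:6]
    value = int.from_bytes(bytes(eui), 'big')
    # flip the universal/local bit, as required by RFC 4291
    value ^= 0x0200000000000000
    return str(ipaddress.IPv6Address(int(net.network_address) + value))

print(eui64_address('fe80::/64', '00:16:3e:12:34:56'))
# fe80::216:3eff:fe12:3456
```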


@@ -1,29 +0,0 @@
# Copyright (c) 2015 Akanda, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lbaas.services.loadbalancer import plugin
class LoadBalancerPluginv2(plugin.LoadBalancerPluginv2):
"""
    This allows the loadbalancer status to be updated from Akanda.
    To enable, add the full python path to this class to the service_plugin
    list in neutron.conf. Ensure that the path to astara_neutron/extensions
    has been added to api_extensions_path *as well as* the path to
    neutron-lbaas/neutron_lbaas/extensions.
"""
supported_extension_aliases = (
plugin.LoadBalancerPluginv2.supported_extension_aliases +
['akloadbalancerstatus']
)


@@ -1,176 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
import netaddr
from oslo_config import cfg
from neutron.api.v2 import attributes
from neutron.common import constants as neutron_constants
from neutron.db import l3_db
from neutron.db import models_v2
from neutron.plugins.ml2 import plugin
from neutron.services.l3_router import l3_router_plugin
from astara_neutron.plugins import decorators as astara
AKANDA_PORT_NAME_RE = re.compile(
'^(ASTARA|AKANDA):(MGT|VRRP):[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'
)
class Ml2Plugin(plugin.Ml2Plugin):
_supported_extension_aliases = (
plugin.Ml2Plugin._supported_extension_aliases +
["dhrouterstatus", "byonf"]
)
disabled_extensions = [
neutron_constants.DHCP_AGENT_SCHEDULER_EXT_ALIAS,
neutron_constants.LBAAS_AGENT_SCHEDULER_EXT_ALIAS
]
for ext in disabled_extensions:
try:
_supported_extension_aliases.remove(ext)
except ValueError:
pass
@astara.auto_add_ipv6_subnet
def create_network(self, context, network):
return super(Ml2Plugin, self).create_network(context, network)
@astara.auto_add_subnet_to_router
def create_subnet(self, context, subnet):
return super(Ml2Plugin, self).create_subnet(context, subnet)
@astara.sync_subnet_gateway_port
def update_subnet(self, context, id, subnet):
return super(Ml2Plugin, self).update_subnet(
context, id, subnet)
# Nova is unhappy when the port does not have any IPs, so we're going
# to add the v6 link local dummy data.
# TODO(mark): limit this lie to service user
def _make_port_dict(self, port, fields=None, process_extensions=True):
res = super(Ml2Plugin, self)._make_port_dict(
port,
fields,
process_extensions
)
if not res.get('fixed_ips') and res.get('mac_address'):
v6_link_local = netaddr.EUI(res['mac_address']).ipv6_link_local()
res['fixed_ips'] = [
{
'subnet_id': '00000000-0000-0000-0000-000000000000',
'ip_address': str(v6_link_local)
}
]
return res
def _select_dhcp_ips_for_network_ids(self, context, network_ids):
ips = super(Ml2Plugin, self)._select_dhcp_ips_for_network_ids(
context,
network_ids
)
# allow DHCP replies from router interfaces since they're combined in
# Astara appliances. Minimal impact if another appliance is used.
query = context.session.query(models_v2.Port.mac_address,
models_v2.Port.network_id,
models_v2.IPAllocation.ip_address)
query = query.join(models_v2.IPAllocation)
query = query.filter(models_v2.Port.network_id.in_(network_ids))
owner = neutron_constants.DEVICE_OWNER_ROUTER_INTF
query = query.filter(models_v2.Port.device_owner == owner)
for mac_address, network_id, ip in query:
if (netaddr.IPAddress(ip).version == 6 and not
netaddr.IPAddress(ip).is_link_local()):
ip = str(netaddr.EUI(mac_address).ipv6_link_local())
if ip not in ips[network_id]:
ips[network_id].append(ip)
return ips
# TODO(markmcclain) add upstream ability to remove port-security
# workaround it for now by filtering out Akanda ports
def get_ports_from_devices(self, context, devices):
"this wrapper removes Akanda VRRP ports since they are router ports"
ports = super(Ml2Plugin, self).get_ports_from_devices(context, devices)
return (
port
for port in ports
if port and not AKANDA_PORT_NAME_RE.match(port['name'])
)
class L3RouterPlugin(l3_router_plugin.L3RouterPlugin):
# An issue in neutron is making this class inheriting some
# methods from l3_dvr_db.L3_NAT_with_dvr_db_mixin.As a workaround
# we force it to use the original methods in the
# l3_db.L3_NAT_db_mixin class.
get_sync_data = l3_db.L3_NAT_db_mixin.get_sync_data
add_router_interface = l3_db.L3_NAT_db_mixin.add_router_interface
remove_router_interface = l3_db.L3_NAT_db_mixin.remove_router_interface
# call this directly instead of through class hierarchy, to avoid
# the l3_hamode_db from doing agent-based HA setup and checks
_create_router = l3_db.L3_NAT_dbonly_mixin.create_router
def list_routers_on_l3_agent(self, context, agent_id):
return {
'routers': self.get_routers(context),
}
def list_active_sync_routers_on_active_l3_agent(
self, context, host, router_ids):
# Override L3AgentSchedulerDbMixin method
filters = {}
if router_ids:
filters['id'] = router_ids
routers = self.get_routers(context, filters=filters)
new_router_ids = [r['id'] for r in routers]
if new_router_ids:
return self.get_sync_data(
context,
router_ids=new_router_ids,
active=True,
)
return []
@classmethod
def _is_ha(cls, router):
ha = router.get('ha')
if not attributes.is_attr_set(ha):
ha = cfg.CONF.l3_ha
return ha
def create_router(self, context, router):
router['router']['ha'] = self._is_ha(router['router'])
return self._create_router(context, router)
if neutron_constants.L3_AGENT_SCHEDULER_EXT_ALIAS in \
L3RouterPlugin.supported_extension_aliases:
L3RouterPlugin.supported_extension_aliases.remove(
neutron_constants.L3_AGENT_SCHEDULER_EXT_ALIAS
)
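
`get_ports_from_devices` above filters out Akanda/Astara service ports purely by name: anything matching `AKANDA_PORT_NAME_RE` is treated as a router-owned VRRP or management port. A small sketch of that filter in isolation (the sample port names and UUID are hypothetical):

```python
# The VRRP/MGT port-name filter used by Ml2Plugin.get_ports_from_devices,
# exercised on a hand-built port list.
import re

AKANDA_PORT_NAME_RE = re.compile(
    '^(ASTARA|AKANDA):(MGT|VRRP):[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'
)

ports = [
    {'name': 'ASTARA:VRRP:9f4bb86a-92c6-4e64-bfbb-fc1a1b5b742c'},  # filtered
    {'name': 'AKANDA:MGT:9f4bb86a-92c6-4e64-bfbb-fc1a1b5b742c'},   # filtered
    {'name': 'tap-instance-port'},  # an ordinary VIF port, kept
]

kept = [p for p in ports if not AKANDA_PORT_NAME_RE.match(p['name'])]
print([p['name'] for p in kept])
# ['tap-instance-port']
```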


@@ -1,386 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
from sqlalchemy import exc as sql_exc
from neutron.api.rpc.handlers import dhcp_rpc, l3_rpc
from neutron.common import constants
from neutron.common import exceptions as n_exc
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.db import agents_db
from neutron.db import l3_db
from oslo_log import log as logging
from oslo.db import exception as db_exc
from neutron.i18n import _
from neutron.plugins.vmware.api_client import exception as api_exc
from neutron.plugins.vmware.common import nsx_utils
from neutron.plugins.vmware.common import sync as nsx_sync
from neutron.plugins.vmware.dbexts import db as nsx_db
from neutron.plugins.vmware.nsxlib import switch as switchlib
from neutron.plugins.vmware.plugins import base
from neutron.plugins.vmware.plugins.base import cfg as n_cfg
from astara_neutron.plugins import decorators as astara
LOG = logging.getLogger("NeutronPlugin")
def astara_nvp_ipv6_port_security_wrapper(f):
@functools.wraps(f)
def wrapper(lport_obj, mac_address, fixed_ips, port_security_enabled,
security_profiles, queue_id, mac_learning_enabled,
allowed_address_pairs):
f(lport_obj, mac_address, fixed_ips, port_security_enabled,
security_profiles, queue_id, mac_learning_enabled,
allowed_address_pairs)
        # evaluate the state so that we only override the value when enabled
# otherwise we are preserving the underlying behavior of the NVP plugin
if port_security_enabled:
            # hotfix to enable egress multicast
lport_obj['allow_egress_multicast'] = True
            # TODO(mark): investigate moving away from this and wrapping
# (create|update)_port
# add link-local and subnet cidr for IPv6 temp addresses
special_ipv6_addrs = astara.get_special_ipv6_addrs(
(p['ip_address'] for p in lport_obj['allowed_address_pairs']),
mac_address
)
lport_obj['allowed_address_pairs'].extend(
{'mac_address': mac_address, 'ip_address': addr}
for addr in special_ipv6_addrs
)
return wrapper
base.switchlib._configure_extensions = astara_nvp_ipv6_port_security_wrapper(
base.switchlib._configure_extensions
)
class AstaraNsxSynchronizer(nsx_sync.NsxSynchronizer):
"""
    The NsxSynchronizer class in Neutron runs a synchronization thread to
    sync nvp objects with neutron objects. Since we don't use nvp's routers,
    the sync was failing, making neutron show all the routers as if they
    were in ERROR state. To fix this behaviour we override the two methods
    responsible for router synchronization in the NsxSynchronizer class to
    be no-ops.
"""
def _synchronize_state(self, *args, **kwargs):
"""
        Given the complexity of the NSX synchronization process, there are
about a million ways for it to go wrong. (MySQL connection issues,
transactional race conditions, etc...) In the event that an exception
is thrown, behavior of the upstream implementation is to immediately
report the exception and kill the synchronizer thread.
This makes it very difficult to detect failure (because the thread just
ends) and the problem can only be fixed by completely restarting
neutron.
This implementation changes the behavior to repeatedly fail (and retry)
and log verbosely during failure so that the failure is more obvious
(and so that auto-recovery is a possibility if e.g., the database
comes back to life or a network-related issue becomes resolved).
"""
        try:
            result = nsx_sync.NsxSynchronizer._synchronize_state(
                self, *args, **kwargs
            )
        except Exception:
            LOG.exception("An error occurred while communicating with "
                          "NSX backend. Will retry synchronization "
                          "in %d seconds" % self._sync_backoff)
            self._sync_backoff = min(self._sync_backoff * 2, 64)
            return self._sync_backoff
        else:
            # success: reset the backoff so the next failure starts small
            self._sync_backoff = 1
            return result
def _synchronize_lrouters(self, *args, **kwargs):
pass
def synchronize_router(self, *args, **kwargs):
pass
class NsxPluginV2(base.NsxPluginV2):
"""
NsxPluginV2 is a Neutron plugin that provides L2 Virtual Network
functionality using NSX.
"""
supported_extension_aliases = (
base.NsxPluginV2.supported_extension_aliases +
astara.SUPPORTED_EXTENSIONS
)
def __init__(self):
# In order to force this driver to not sync neutron routers with
# with NSX routers, we need to use our subclass of the
# NsxSynchronizer object. Sadly, the call to the __init__ method
# of the superclass instantiates a non-customizable NsxSynchronizer
        # object which spawns a sync thread that sets the state of all the
# neutron routers to ERROR when neutron starts. To avoid spawning
# that thread, we need to temporarily override the cfg object and
# disable NSX synchronization in the superclass constructor.
actual = {
'state_sync_interval': n_cfg.CONF.NSX_SYNC.state_sync_interval,
'max_random_sync_delay': n_cfg.CONF.NSX_SYNC.max_random_sync_delay,
'min_sync_req_delay': n_cfg.CONF.NSX_SYNC.min_sync_req_delay
}
for key in actual:
n_cfg.CONF.set_override(key, 0, 'NSX_SYNC')
super(NsxPluginV2, self).__init__()
for key, value in actual.items():
n_cfg.CONF.set_override(key, value, 'NSX_SYNC')
# ---------------------------------------------------------------------
# Original code:
# self._port_drivers = {
# 'create': {l3_db.DEVICE_OWNER_ROUTER_GW:
# self._nsx_create_ext_gw_port,
# l3_db.DEVICE_OWNER_FLOATINGIP:
# self._nsx_create_fip_port,
# l3_db.DEVICE_OWNER_ROUTER_INTF:
# self._nsx_create_router_port,
# networkgw_db.DEVICE_OWNER_NET_GW_INTF:
# self._nsx_create_l2_gw_port,
# 'default': self._nsx_create_port},
# 'delete': {l3_db.DEVICE_OWNER_ROUTER_GW:
# self._nsx_delete_ext_gw_port,
# l3_db.DEVICE_OWNER_ROUTER_INTF:
# self._nsx_delete_router_port,
# l3_db.DEVICE_OWNER_FLOATINGIP:
# self._nsx_delete_fip_port,
# networkgw_db.DEVICE_OWNER_NET_GW_INTF:
# self._nsx_delete_port,
# 'default': self._nsx_delete_port}
# }
self._port_drivers = {
'create': {
l3_db.DEVICE_OWNER_FLOATINGIP: self._nsx_create_fip_port,
'default': self._nsx_create_port
},
'delete': {
l3_db.DEVICE_OWNER_FLOATINGIP: self._nsx_delete_fip_port,
'default': self._nsx_delete_port
}
}
# ---------------------------------------------------------------------
# Create a synchronizer instance for backend sync
# ---------------------------------------------------------------------
# Note(rods):
# We added this code with the only purpose to make the nsx driver use
# our subclass of the NsxSynchronizer object.
#
# DHC-2385
#
# Original code:
# self._synchronizer = sync.NsxSynchronizer(
# self, self.cluster,
# self.nsx_sync_opts.state_sync_interval,
# self.nsx_sync_opts.min_sync_req_delay,
# self.nsx_sync_opts.min_chunk_size,
# self.nsx_sync_opts.max_random_sync_delay)
self._synchronizer = AstaraNsxSynchronizer(
self, self.cluster,
self.nsx_sync_opts.state_sync_interval,
self.nsx_sync_opts.min_sync_req_delay,
self.nsx_sync_opts.min_chunk_size,
self.nsx_sync_opts.max_random_sync_delay)
# ---------------------------------------------------------------------
def setup_dhcpmeta_access(self):
        # register the L3 RPC callbacks here alongside the DHCP ones
self.conn = n_rpc.create_connection(new=True)
self.conn.create_consumer(
topics.PLUGIN,
[dhcp_rpc.DhcpRpcCallback(), agents_db.AgentExtRpcCallback()],
fanout=False
)
self.conn.create_consumer(
topics.L3PLUGIN,
[l3_rpc.L3RpcCallback()],
fanout=False
)
# Consume from all consumers in a thread
self.conn.consume_in_threads()
self.handle_network_dhcp_access_delegate = noop
self.handle_port_dhcp_access_delegate = noop
self.handle_port_metadata_access_delegate = noop
self.handle_metadata_access_delegate = noop
@astara.auto_add_ipv6_subnet
def create_network(self, context, network):
return super(NsxPluginV2, self).create_network(context, network)
@astara.auto_add_subnet_to_router
def create_subnet(self, context, subnet):
return super(NsxPluginV2, self).create_subnet(context, subnet)
# we need to use original versions l3_db.L3_NAT_db_mixin mixin and not
# NSX versions that manage NSX's logical router
create_router = l3_db.L3_NAT_db_mixin.create_router
update_router = l3_db.L3_NAT_db_mixin.update_router
delete_router = l3_db.L3_NAT_db_mixin.delete_router
get_router = l3_db.L3_NAT_db_mixin.get_router
get_routers = l3_db.L3_NAT_db_mixin.get_routers
add_router_interface = l3_db.L3_NAT_db_mixin.add_router_interface
remove_router_interface = l3_db.L3_NAT_db_mixin.remove_router_interface
update_floatingip = l3_db.L3_NAT_db_mixin.update_floatingip
delete_floatingip = l3_db.L3_NAT_db_mixin.delete_floatingip
get_floatingip = l3_db.L3_NAT_db_mixin.get_floatingip
get_floatings = l3_db.L3_NAT_db_mixin.get_floatingips
_update_fip_assoc = l3_db.L3_NAT_db_mixin._update_fip_assoc
_update_router_gw_info = l3_db.L3_NAT_db_mixin._update_router_gw_info
disassociate_floatingips = l3_db.L3_NAT_db_mixin.disassociate_floatingips
get_sync_data = l3_db.L3_NAT_db_mixin.get_sync_data
def _ensure_metadata_host_route(self, *args, **kwargs):
""" Astara metadata services are provided by router so make no-op/"""
pass
def _nsx_create_port(self, context, port_data):
"""Driver for creating a logical switch port on NSX platform."""
# FIXME(salvatore-orlando): On the NSX platform we do not really have
        # external networks. So if a user tries to create a "regular" VIF
        # port on an external network we are unable to actually create it.
# However, in order to not break unit tests, we need to still create
# the DB object and return success
# NOTE(rods): Reporting mark's comment on havana version of this patch.
# Astara does want ports for external networks so this method is
        # basically the same, with the external check removed and router
        # ports auto-plugged.
# ---------------------------------------------------------------------
# Note(rods): Remove the check on the external network
#
# Original code:
# if self._network_is_external(context, port_data['network_id']):
# LOG.info(_("NSX plugin does not support regular VIF ports on "
# "external networks. Port %s will be down."),
# port_data['network_id'])
# # No need to actually update the DB state - the default is down
# return port_data
# ---------------------------------------------------------------------
lport = None
selected_lswitch = None
try:
selected_lswitch = self._nsx_find_lswitch_for_port(context,
port_data)
lport = self._nsx_create_port_helper(context.session,
selected_lswitch['uuid'],
port_data,
True)
nsx_db.add_neutron_nsx_port_mapping(
context.session, port_data['id'],
selected_lswitch['uuid'], lport['uuid'])
# -----------------------------------------------------------------
# Note(rods): Auto plug router ports
#
# Original code:
# if port_data['device_owner'] not in self.port_special_owners:
# switchlib.plug_vif_interface(
# self.cluster, selected_lswitch['uuid'],
# lport['uuid'], "VifAttachment", port_data['id'])
switchlib.plug_vif_interface(
self.cluster, selected_lswitch['uuid'],
lport['uuid'], "VifAttachment", port_data['id'])
# -----------------------------------------------------------------
LOG.debug(_("_nsx_create_port completed for port %(name)s "
"on network %(network_id)s. The new port id is "
"%(id)s."), port_data)
except (api_exc.NsxApiException, n_exc.NeutronException):
self._handle_create_port_exception(
context, port_data['id'],
selected_lswitch and selected_lswitch['uuid'],
lport and lport['uuid'])
except db_exc.DBError as e:
if (port_data['device_owner'] == constants.DEVICE_OWNER_DHCP and
isinstance(e.inner_exception, sql_exc.IntegrityError)):
msg = (_("Concurrent network deletion detected; Back-end Port "
"%(nsx_id)s creation to be rolled back for Neutron "
"port: %(neutron_id)s")
% {'nsx_id': lport['uuid'],
'neutron_id': port_data['id']})
LOG.warning(msg)
if selected_lswitch and lport:
try:
switchlib.delete_port(self.cluster,
selected_lswitch['uuid'],
lport['uuid'])
except n_exc.NotFound:
LOG.debug(_("NSX Port %s already gone"), lport['uuid'])
def _nsx_delete_port(self, context, port_data):
# FIXME(salvatore-orlando): On the NSX platform we do not really have
# external networks. So deleting regular ports from external networks
# does not make sense. However we cannot raise as this would break
# unit tests.
# NOTE(rods): Reporting Mark's comment on the Havana version of this patch.
# Astara does want ports on external networks, so this method is
# basically the same, with the external-network check removed.
# ---------------------------------------------------------------------
# Original code:
# if self._network_is_external(context, port_data['network_id']):
# LOG.info(_("NSX plugin does not support regular VIF ports on "
# "external networks. Port %s will be down."),
# port_data['network_id'])
# return
# ---------------------------------------------------------------------
nsx_switch_id, nsx_port_id = nsx_utils.get_nsx_switch_and_port_id(
context.session, self.cluster, port_data['id'])
if not nsx_port_id:
LOG.debug(_("Port '%s' was already deleted on NSX platform"),
port_data['id'])
return
# TODO(bgh): if this is a bridged network and the lswitch we just got
# back will have zero ports after the delete we should garbage collect
# the lswitch.
try:
switchlib.delete_port(self.cluster, nsx_switch_id, nsx_port_id)
LOG.debug(_("_nsx_delete_port completed for port %(port_id)s "
"on network %(net_id)s"),
{'port_id': port_data['id'],
'net_id': port_data['network_id']})
except n_exc.NotFound:
LOG.warning(_("Port %s not found in NSX"), port_data['id'])
def noop(*args, **kwargs):
pass


@ -1,18 +0,0 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('astara_neutron')


@ -1,3 +0,0 @@
---
fixes:
- Bug `266586 <https://bugs.launchpad.net/astara/+bug/266586>`_ \- Always allow DHCP traffic through security groups from router to tenant VMs on the same subnet


@ -1,12 +0,0 @@
---
features:
- Adds a new BYONF API extension to Neutron which allows operators to override
the astara-orchestrator driver used to back a resource as well as the Glance
image id to be used for the virtual appliance, on a per tenant basis. This
requires leveraging the Neutron database to store these associations, so a
migration repository has been added to create the required tables there.
other:
- In order to use the BYONF API, the Neutron database must be migrated to create
the required astara_byonf table. This can be accomplished by running
``neutron-db-manage --subproject astara-neutron upgrade head``.
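The migration step described above can be sketched as a shell one-liner. It is only constructed and echoed here, not executed, since actually applying it requires a deployed Neutron with a configured database:

```shell
# Command from the release note: creates the astara_byonf table by applying
# the alembic migrations registered under the astara-neutron subproject
# (see the neutron.db.alembic_migrations entry point in setup.cfg).
# Echoed rather than run, since it needs a live Neutron deployment.
CMD="neutron-db-manage --subproject astara-neutron upgrade head"
echo "$CMD"
```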


@ -1,3 +0,0 @@
---
prelude: astara-neutron Mitaka Series Release v8.0.0.


@ -1,275 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Astara Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'oslosphinx',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'astara-neutron Release Notes'
copyright = u'2015, Astara Developers'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from astara_neutron.version import version_info as astara_version
# The full version, including alpha/beta/rc tags.
release = astara_version.version_string_with_vcs()
# The short X.Y version.
version = astara_version.canonical_version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'AstaraReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'AstaraReleaseNotes.tex', u'Astara Release Notes Documentation',
u'Astara Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'astarareleasenotes',
u'astara-neutron Release Notes Documentation',
[u'Astara Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'AstaraReleaseNotes',
u'astara-neutron Release Notes Documentation',
u'Astara Developers', 'AstaraNeutronReleaseNotes',
'Release notes for the astara-neutron project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False


@ -1,8 +0,0 @@
==============================
astara-neutron Release Notes
==============================
.. toctree::
:maxdepth: 1
mitaka


@ -1,5 +0,0 @@
==========================================================
astara-neutron mitaka Series Release Notes
==========================================================
.. release-notes::


@ -1,5 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
oslo.log>=1.14.0 # Apache-2.0
oslo.utils>=3.15.0 # Apache-2.0


@ -1,57 +0,0 @@
#!/usr/bin/env python
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Usage: python rabbit.py '#'
#
import pika
import sys
credentials = pika.PlainCredentials('guest', 'yetanothersecret')
connection = pika.BlockingConnection(pika.ConnectionParameters
(host='192.168.57.100',
credentials=credentials))
channel = connection.channel()
channel.exchange_declare(exchange='quantum',
type='topic')
result = channel.queue_declare(exclusive=False)
queue_name = result.method.queue
binding_keys = sys.argv[1:]
if not binding_keys:
print >> sys.stderr, "Usage: %s [binding_key]..." % (sys.argv[0],)
sys.exit(1)
for binding_key in binding_keys:
channel.queue_bind(exchange='quantum',
queue=queue_name,
routing_key=binding_key)
print ' [*] Waiting for logs. To exit press CTRL+C'
def callback(ch, method, properties, body):
print " [x] %r:%r" % (method.routing_key, body,)
channel.basic_consume(callback,
queue=queue_name,
no_ack=True)
channel.start_consuming()


@ -1,43 +0,0 @@
[metadata]
name = astara-neutron
summary = Astara API extensions for OpenStack Neutron
description-file =
README.md
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://github.com/openstack/akanda-neutron
classifier =
Environment :: OpenStack
Intended Audience :: Developers
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
[files]
packages =
astara_neutron
[entry_points]
neutron.db.alembic_migrations =
astara-neutron = astara_neutron.db.migration:alembic_migrations
[global]
setup-hooks =
pbr.hooks.setup_hook
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[nosetests]
where = test
verbosity = 2
detailed-errors = 1
cover-package = akanda


@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)


@ -1,11 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
nose # LGPL
coverage>=3.6 # Apache-2.0
mock>=2.0 # BSD
# Doc requirements
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
reno>=1.8.0 # Apache2


@ -1,183 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from quantumclient.v2_0 import client
from quantumclient.common import exceptions
class AkandaClientWrapper(client.Client):
"""Add client support for Akanda Extensions. """
addressgroup_path = '/dhaddressgroup'
addressentry_path = '/dhaddressentry'
filterrule_path = '/dhfilterrule'
portalias_path = '/dhportalias'
portforward_path = '/dhportforward'
# portalias crud
@client.APIParamsCall
def list_portalias(self, **params):
return self.get(self.portalias_path, params=params)
@client.APIParamsCall
def create_portalias(self, body=None):
return self.post(self.portalias_path, body=body)
@client.APIParamsCall
def show_portalias(self, portforward, **params):
return self.get('%s/%s' % (self.portalias_path, portforward),
params=params)
@client.APIParamsCall
def update_portalias(self, portforward, body=None):
return self.put('%s/%s' % (self.portalias_path, portforward),
body=body)
@client.APIParamsCall
def delete_portalias(self, portforward):
return self.delete('%s/%s' % (self.portalias_path, portforward))
# portforward crud
@client.APIParamsCall
def list_portforwards(self, **params):
return self.get(self.portforward_path, params=params)
@client.APIParamsCall
def create_portforward(self, body=None):
return self.post(self.portforward_path, body=body)
@client.APIParamsCall
def show_portforward(self, portforward, **params):
return self.get('%s/%s' % (self.portforward_path, portforward),
params=params)
@client.APIParamsCall
def update_portforward(self, portforward, body=None):
return self.put('%s/%s' % (self.portforward_path, portforward),
body=body)
@client.APIParamsCall
def delete_portforward(self, portforward):
return self.delete('%s/%s' % (self.portforward_path, portforward))
# filterrule crud
@client.APIParamsCall
def list_filterrules(self, **params):
return self.get(self.filterrule_path, params=params)
@client.APIParamsCall
def create_filterrule(self, body=None):
return self.post(self.filterrule_path, body=body)
@client.APIParamsCall
def show_filterrule(self, filterrule, **params):
return self.get('%s/%s' % (self.filterrule_path, filterrule),
params=params)
@client.APIParamsCall
def update_filterrule(self, filterrule, body=None):
return self.put('%s/%s' % (self.filterrule_path, filterrule),
body=body)
@client.APIParamsCall
def delete_filterrule(self, filterrule):
return self.delete('%s/%s' % (self.filterrule_path, filterrule))
# addressbook group crud
@client.APIParamsCall
def list_addressgroups(self, **params):
return self.get(self.addressgroup_path, params=params)
@client.APIParamsCall
def create_addressgroup(self, body=None):
return self.post(self.addressgroup_path, body=body)
@client.APIParamsCall
def show_addressgroup(self, addressgroup, **params):
return self.get('%s/%s' % (self.addressgroup_path,
addressgroup),
params=params)
@client.APIParamsCall
def update_addressgroup(self, addressgroup, body=None):
return self.put('%s/%s' % (self.addressgroup_path,
addressgroup),
body=body)
@client.APIParamsCall
def delete_addressgroup(self, addressgroup, body=None):
return self.delete('%s/%s' % (self.addressgroup_path,
addressgroup))
# addressbook entries crud
@client.APIParamsCall
def list_addressbookentries(self, **params):
return self.get(self.addressentry_path, params=params)
@client.APIParamsCall
def create_addressentry(self, body=None):
return self.post(self.addressentry_path, body=body)
@client.APIParamsCall
def show_addressentry(self, addressentry, **params):
return self.get('%s/%s' % (self.addressentry_path,
addressentry),
params=params)
@client.APIParamsCall
def update_addressentry(self, addressentry, body=None):
return self.put('%s/%s' % (self.addressentry_path,
addressentry),
body=body)
@client.APIParamsCall
def delete_addressentry(self, addressentry):
return self.delete('%s/%s' % (self.addressentry_path,
addressentry))
if __name__ == '__main__':
# WARNING: This block will delete all objects owned by the
# specified user. It may do too much.
c = AkandaClientWrapper(
username='demo',
password='secrete',
tenant_name='demo',
auth_url='http://localhost:5000/v2.0/',
auth_strategy='keystone',
auth_region='RegionOne')
resources = [
(c.list_portalias, c.delete_portalias, 'portalias'),
(c.list_filterrules, c.delete_filterrule, 'filterrule'),
(c.list_portforwards, c.delete_portforward, 'portforward'),
(c.list_addressbookentries, c.delete_addressentry, 'addressentry'),
(c.list_addressgroups, c.delete_addressgroup, 'addressgroup'),
(c.list_ports, c.delete_port, 'port'),
(c.list_subnets, c.delete_subnet, 'subnet'),
(c.list_networks, c.delete_network, 'network')
]
for lister, deleter, obj_type in resources:
print obj_type
response = lister()
data = response[iter(response).next()]
for o in data:
print repr(o)
try:
deleter(o['id'])
except exceptions.QuantumClientException as err:
print 'ERROR:', err
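The `response[iter(response).next()]` idiom above pulls the value of the only key in a single-key response dict (Python 2 syntax). A minimal modern-Python sketch of the same list-then-delete loop, with hypothetical stub lister/deleter functions standing in for the real client calls:

```python
# Sketch of the cleanup loop above using stub callables (no real Neutron
# client involved). Each lister returns a single-key dict, as the Neutron
# API does; next(iter(response)) grabs that key in Python 3.
def list_ports():
    return {'ports': [{'id': 'p1'}, {'id': 'p2'}]}

deleted = []

def delete_port(obj_id):
    deleted.append(obj_id)

resources = [(list_ports, delete_port, 'port')]

for lister, deleter, obj_type in resources:
    response = lister()
    data = response[next(iter(response))]  # value of the dict's only key
    for o in data:
        deleter(o['id'])

print(deleted)  # ['p1', 'p2']
```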


@ -1,329 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
from quantumclient.v2_0 import client
class AkandaClientWrapper(client.Client):
"""Add client support for Akanda Extensions. """
addressgroup_path = '/dhaddressgroup'
addressentry_path = '/dhaddressentry'
filterrule_path = '/dhfilterrule'
portalias_path = '/dhportalias'
portforward_path = '/dhportforward'
# portalias crud
@client.APIParamsCall
def list_portalias(self, **params):
return self.get(self.portalias_path, params=params)
@client.APIParamsCall
def create_portalias(self, body=None):
return self.post(self.portalias_path, body=body)
@client.APIParamsCall
def show_portalias(self, portforward, **params):
return self.get('%s/%s' % (self.portalias_path, portforward),
params=params)
@client.APIParamsCall
def update_portalias(self, portforward, body=None):
return self.put('%s/%s' % (self.portalias_path, portforward),
body=body)
@client.APIParamsCall
def delete_portalias(self, portforward):
return self.delete('%s/%s' % (self.portalias_path, portforward))
# portforward crud
@client.APIParamsCall
def list_portforwards(self, **params):
return self.get(self.portforward_path, params=params)
@client.APIParamsCall
def create_portforward(self, body=None):
return self.post(self.portforward_path, body=body)
@client.APIParamsCall
def show_portforward(self, portforward, **params):
return self.get('%s/%s' % (self.portforward_path, portforward),
params=params)
@client.APIParamsCall
def update_portforward(self, portforward, body=None):
return self.put('%s/%s' % (self.portforward_path, portforward),
body=body)
@client.APIParamsCall
def delete_portforward(self, portforward):
return self.delete('%s/%s' % (self.portforward_path, portforward))
# filterrule crud
@client.APIParamsCall
def list_filterrules(self, **params):
return self.get(self.filterrule_path, params=params)
@client.APIParamsCall
def create_filterrule(self, body=None):
return self.post(self.filterrule_path, body=body)
@client.APIParamsCall
def show_filterrule(self, filterrule, **params):
return self.get('%s/%s' % (self.filterrule_path, filterrule),
params=params)
@client.APIParamsCall
def update_filterrule(self, filterrule, body=None):
return self.put('%s/%s' % (self.filterrule_path, filterrule),
body=body)
@client.APIParamsCall
def delete_filterrule(self, filterrule):
return self.delete('%s/%s' % (self.filterrule_path, filterrule))
# addressbook group crud
@client.APIParamsCall
def list_addressgroups(self, **params):
return self.get(self.addressgroup_path, params=params)
@client.APIParamsCall
def create_addressgroup(self, body=None):
return self.post(self.addressgroup_path, body=body)
@client.APIParamsCall
def show_addressgroup(self, addressgroup, **params):
return self.get('%s/%s' % (self.addressgroup_path,
addressgroup),
params=params)
@client.APIParamsCall
def update_addressgroup(self, addressgroup, body=None):
return self.put('%s/%s' % (self.addressgroup_path,
addressgroup),
body=body)
@client.APIParamsCall
def delete_addressgroup(self, addressgroup, body=None):
return self.delete('%s/%s' % (self.addressgroup_path,
addressgroup))
# addressbook entries crud
@client.APIParamsCall
def list_addressbookentries(self, **params):
return self.get(self.addressentry_path, params=params)
@client.APIParamsCall
def create_addressentry(self, body=None):
return self.post(self.addressentry_path, body=body)
@client.APIParamsCall
def show_addressentry(self, addressentry, **params):
return self.get('%s/%s' % (self.addressentry_path,
addressentry),
params=params)
@client.APIParamsCall
def update_addressentry(self, addressentry, body=None):
return self.put('%s/%s' % (self.addressentry_path,
addressentry),
body=body)
@client.APIParamsCall
def delete_addressentry(self, addressentry):
return self.delete('%s/%s' % (self.addressentry_path,
addressentry))
class VisibilityTest(unittest.TestCase):
def setUp(self):
###
import random
c = AkandaClientWrapper(
username='demo',
password='secrete',
tenant_name='demo',
auth_url='http://localhost:5000/v2.0/',
auth_strategy='keystone',
auth_region='RegionOne')
self.group = c.create_addressgroup(
body={'addressgroup': {'name': 'group1'}})
self.entry_args = dict(name='entry1',
group_id=self.group['addressgroup']['id'],
cidr='192.168.1.1/24')
self.addr_entry = c.create_addressentry(
body=dict(addressentry=self.entry_args))
# port forward
self.network = c.create_network(
body=dict(network=dict(name='test_net')))
subnet_args = dict(network_id=self.network['network']['id'],
ip_version=4,
cidr='10.%d.%d.0/24' % (random.randint(0, 255),
random.randint(0, 255)))
self.subnet = c.create_subnet(body=dict(subnet=subnet_args))
port_args = dict(network_id=self.network['network']['id'],
device_owner='test')
self.port = c.create_port(body=dict(port=port_args))
pf_args = dict(name='rule1',
protocol='udp',
public_port=53,
private_port=53,
port_id=self.port['port']['id'])
self.forward = c.create_portforward(body=dict(portforward=pf_args))
rule_args = dict(action='pass',
protocol='tcp',
destination_id=self.group['addressgroup']['id'],
destination_port=80)
self.rule = c.create_filterrule(body=dict(filterrule=rule_args))
alias_args = dict(name='ssh', protocol='tcp', port=22)
self.port_alias = c.create_portalias(body=dict(portalias=alias_args))
def tearDown(self):
c = AkandaClientWrapper(
username='demo',
password='secrete',
tenant_name='demo',
auth_url='http://localhost:5000/v2.0/',
auth_strategy='keystone',
auth_region='RegionOne')
c.delete_portalias(self.port_alias['portalias']['id'])
        c.delete_filterrule(self.rule['filterrule']['id'])
        c.delete_portforward(self.forward['portforward']['id'])
        c.delete_addressentry(self.addr_entry['addressentry']['id'])
        c.delete_addressgroup(self.group['addressgroup']['id'])
        c.delete_port(self.port['port']['id'])
        c.delete_subnet(self.subnet['subnet']['id'])
        c.delete_network(self.network['network']['id'])


class CanSeeTestCaseMixin(object):
    def test_addressgroup(self):
        ag = self.c.show_addressgroup(self.group['addressgroup']['id'])
        assert ag
        assert ag['addressgroup']['id'] == self.group['addressgroup']['id']

    def test_addressentry(self):
        ae = self.c.show_addressentry(self.addr_entry['addressentry']['id'])
        assert ae
        assert ae['addressentry']['id'] == \
            self.addr_entry['addressentry']['id']

    def test_portforward(self):
        pf = self.c.show_portforward(self.forward['portforward']['id'])
        assert pf
        assert pf['portforward']['id'] == self.forward['portforward']['id']

    def test_filterrule(self):
        fr = self.c.show_filterrule(self.rule['filterrule']['id'])
        assert fr
        assert fr['filterrule']['id'] == self.rule['filterrule']['id']

    def test_portalias(self):
        pa = self.c.show_portalias(self.port_alias['portalias']['id'])
        assert pa
        assert pa['portalias']['id'] == self.port_alias['portalias']['id']


class SameUserTest(VisibilityTest, CanSeeTestCaseMixin):
    def setUp(self):
        super(SameUserTest, self).setUp()
        # Re-connect as the same user and verify that the
        # objects are visible.
        self.c = AkandaClientWrapper(
            username='demo',
            password='secrete',
            tenant_name='demo',
            auth_url='http://localhost:5000/v2.0/',
            auth_strategy='keystone',
            auth_region='RegionOne',
        )


class DifferentUserSameTenantTest(VisibilityTest, CanSeeTestCaseMixin):
    def setUp(self):
        super(DifferentUserSameTenantTest, self).setUp()
        # Re-connect as another user in the same tenant and verify
        # that the objects are visible.
        self.c = AkandaClientWrapper(
            username='demo2',
            password='secrete',
            tenant_name='demo',
            auth_url='http://localhost:5000/v2.0/',
            auth_strategy='keystone',
            auth_region='RegionOne',
        )


class DifferentTenantTest(VisibilityTest):
    def setUp(self):
        super(DifferentTenantTest, self).setUp()
        # Re-connect as a user in a different tenant and verify
        # that the objects are *not* visible.
        self.c = AkandaClientWrapper(
            username='alt1',
            password='secrete',
            tenant_name='alt',
            auth_url='http://localhost:5000/v2.0/',
            auth_strategy='keystone',
            auth_region='RegionOne',
        )

    def _check_one(self, one, lister):
        response = lister()
        # The response is a single-key mapping wrapping the collection.
        objs = list(response.values())[0]
        ids = [o['id'] for o in objs]
        assert one not in ids

    def test_addressgroup(self):
        self._check_one(self.group['addressgroup']['id'],
                        self.c.list_addressgroups)

    def test_addressentry(self):
        self._check_one(self.addr_entry['addressentry']['id'],
                        self.c.list_addressbookentries)

    def test_portforward(self):
        self._check_one(self.forward['portforward']['id'],
                        self.c.list_portforwards)

    def test_filterrule(self):
        self._check_one(self.rule['filterrule']['id'],
                        self.c.list_filterrules)

    def test_portalias(self):
        self._check_one(self.port_alias['portalias']['id'],
                        self.c.list_portalias)


if __name__ == '__main__':
    unittest.main()
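The `_check_one` helper above encodes a small negative-visibility pattern: list a resource collection as another tenant and assert that the target id is absent. A minimal standalone sketch of that pattern, with a hypothetical `FakeClient` standing in for the real `AkandaClientWrapper` (the data and method name here are illustrative, not the real API):

```python
class FakeClient(object):
    """Hypothetical stand-in whose list call returns a wrapped collection."""

    def list_addressgroups(self):
        # Real responses look like {'addressgroups': [{'id': ...}, ...]}
        return {'addressgroups': [{'id': 'ag-1'}, {'id': 'ag-2'}]}


def check_not_visible(one, lister):
    response = lister()
    # Unwrap the single top-level key to reach the collection.
    objs = list(response.values())[0]
    ids = [o['id'] for o in objs]
    assert one not in ids


c = FakeClient()
# Passes: an id from another tenant never appears in this listing.
check_not_visible('ag-from-other-tenant', c.list_addressgroups)
```

Passing the bound method itself (`c.list_addressgroups`, not its result) is what lets one helper cover every resource type in the tests above.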
@ -1,25 +0,0 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest


class PlaceHolderTestCase(unittest.TestCase):
    """Placeholder so the test runner always finds at least one test."""

    def test_nothing(self):
        pass

tox.ini
@ -1,35 +0,0 @@
[tox]
envlist = py27,pep8

[testenv]
distribute = False
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands = nosetests --with-coverage --cover-package=astara_neutron {posargs}
sitepackages = False

[tox:jenkins]

[testenv:style]
deps = flake8
       setuptools_git>=0.4
commands = flake8 astara_neutron setup.py

[testenv:pep8]
deps = {[testenv:style]deps}
commands = {[testenv:style]commands}

[testenv:doc]
commands = {[testenv:releasenotes]commands}

[testenv:cover]
setenv = NOSE_WITH_COVERAGE=1

[testenv:venv]
commands = {posargs}

[flake8]
ignore = E133,E226,E241,E242,E731,F821

[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
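tox.ini is plain INI syntax, so the retired configuration can still be inspected programmatically. A minimal sketch using the Python 3 standard library `configparser`, with the relevant fragment of the file above inlined for illustration:

```python
import configparser

# Fragment excerpted from the tox.ini shown above.
TOX_INI = """
[tox]
envlist = py27,pep8

[flake8]
ignore = E133,E226,E241,E242,E731,F821
"""

cfg = configparser.ConfigParser()
cfg.read_string(TOX_INI)

# Split the comma-separated values into lists.
envs = cfg.get('tox', 'envlist').split(',')      # → ['py27', 'pep8']
ignores = cfg.get('flake8', 'ignore').split(',')
```

This is how tools such as pre-commit hooks or CI linters typically read a project's flake8 ignore list without invoking tox itself.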