Add config generation to tox.ini

Move sample config to etc/drydock
Update docs to generate a config with tox

Update configuration for Keystone

- Add config generation to tox.ini
- Fix default in bootdata config
- Add keystone dependencies
- Add config generator config
- Move sample config to a skeleton etc/drydock tree

Use PasteDeploy for WSGI integration

Using keystonemiddleware outside of a PasteDeploy
pipeline is deprecated. Move Drydock to use PasteDeploy
and integrate with keystonemiddleware
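
A minimal sketch of the PasteDeploy app factory pattern this moves to
(illustrative only; the real factory is
drydock_provisioner.drydock:paste_start_drydock, wired up through
etc/drydock/api-paste.ini):

def app_factory(global_conf, **local_conf):
    # PasteDeploy passes in the [app:...] section options and expects a
    # WSGI callable back; keystonemiddleware's authtoken filter is layered
    # in front of it by the [pipeline:main] definition.
    def wsgi_app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [b'[]']
    return wsgi_app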

Update Falcon context object

Add keystone identity fields to context object
Clean up context marker field

Fix AuthMiddleware for keystone

Update falcon middleware to harvest headers injected
by keystonemiddleware

Fix context middleware

Update context middleware to enforce
a UUID-formatted external context marker
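
For illustration (using only the stdlib re module), the marker format the
middleware accepts is a version-4 UUID:

import re

# UUIDv4 pattern matching what the context middleware enforces for the
# X-Context-Marker header (case-insensitive).
MARKER_RE = re.compile(
    '^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$',
    re.I)

assert MARKER_RE.fullmatch('6d8c7f0e-2b3a-4f5e-9c1d-aa00bb11cc22')
assert MARKER_RE.fullmatch('not-a-uuid') is None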

Lock keystonemiddleware version

Lock keystonemiddleware version to the Newton release

Sample drydock.conf with keystone

This drydock.conf file is known to integrate successfully
with Keystone via keystonemiddleware and the password plugin

Add .dockerignore

Stop adding .tox environment to docker images

Integrate with oslo.policy

Add oslo.policy 1.9.0 to requirements (Newton release)
Add tox job to generate sample policy.yaml
Create DrydockPolicy as facade for RBAC

Inject policy engine into API init

Create a DrydockPolicy instance and inject it into
the Drydock API resources.

Remove per-resource authorization

Update Drydock context and auth middleware

Update Drydock context to use keystone IDs instead of names as required
by oslo.policy
Update AuthMiddleware to capture headers when request provides
a service token
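
A self-contained sketch (oslo.policy only, with made-up IDs) of why keystone
IDs are needed: the credential and target dictionaries oslo.policy evaluates
are keyed on user_id/project_id rather than names:

from oslo_config import cfg
from oslo_policy import policy as oslo_policy

enforcer = oslo_policy.Enforcer(cfg.CONF)
enforcer.register_default(
    oslo_policy.RuleDefault('physical_provisioner:read_task', 'role:admin'))

creds = {'user_id': 'uuid-of-user', 'project_id': 'uuid-of-project',
         'roles': ['admin']}
target = {'user_id': creds['user_id'], 'project_id': creds['project_id']}
assert enforcer.enforce('physical_provisioner:read_task', target, creds)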

Add RBAC for /designs API

Add RBAC enforcement for GET and POST of
/api/v1.0/designs endpoint

Refactor check_policy

Refactor check_policy into the base class

Enforce RBAC for /designs/id endpoint

Enforce RBAC on /designs/id/parts endpoint

Enforce RBAC on /designs/id/parts/kind

Enforce RBAC on /designs/id/parts/kinds/

Enforce RBAC on /tasks/ endpoints

Create unit tests

- New unit tests for DrydockPolicy
- New unit tests for AuthMiddleware w/ Keystone integration

Address a keystonemiddleware bug impacting Drydock

Use v4.9.1 to address https://bugs.launchpad.net/keystonemiddleware/+bug/1653646

Add oslo_config fixtures for unit testing

API base class fixes

Fix an import error in API resource base class

More graceful error handling in drydock_client

Create shared function for checking API response status codes

Create client errors for auth

Create specific Exceptions for Unauthorized
and Forbidden responses
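
A hedged usage sketch ('client' stands in for an existing DrydockClient
instance; only the exception handling is the point here):

from drydock_provisioner import error as errors

try:
    tasks = client.get_tasks()
except errors.ClientUnauthorizedError:
    pass  # no or expired token - re-authenticate and retry
except errors.ClientForbiddenError:
    pass  # authenticated but denied by policy
except errors.ClientError as ex:
    pass  # any other non-2xx response; ex.status_code carries the code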

Ignore generated sample configs

Lock iso8601 version

oslo.versionedobjects appears to be incompatible with
iso8601 0.1.12 on Python 3.2+

Update docs for Keystone

Note Keystone as an external dependency and
add notes on correctly configuring Drydock for
Keystone integration
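
For example (placeholder endpoints, and the sample credentials from the
configuration docs), a client can obtain a token with keystoneauth1 and
pass it to Drydock:

import requests
from keystoneauth1.identity import v3
from keystoneauth1.session import Session

auth = v3.Password(auth_url='http://<keystone_ip>:5000/v3',
                   username='drydock', password='drydock',
                   project_name='service',
                   user_domain_name='ucp', project_domain_name='ucp')
token = Session(auth=auth).get_token()
resp = requests.get('http://<drydock_ip>:<port>/api/v1.0/designs',
                    headers={'X-Auth-Token': token})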

Add keystoneauth1 to list_opts

Explicitly pull keystoneauth password plugin
options when generating a config template

Update reference config for keystone

Update the reference config template
for Keystone integration

Add keystoneauth1 to requirements

Need to directly include keystoneauth1 so that
oslo_config options can be pulled from it

Update config doc for keystoneauth1

Use the keystoneauth1-generated configuration options
for the configuration docs

Remove auth options

Force dependence on Keystone as the only authentication
backend

Clean up imports

Fix how falcon modules are imported

Default to empty role list

Move param extraction

Enforce RBAC before starting to parse parameters

Implement DocumentedRuleDefault

Use DocumentedRuleDefault for policy defaults at request
of @tlam. Requires v1.21.1 of oslo_policy, which is tied
to the Pike OpenStack release.
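
For reference, a DocumentedRuleDefault couples the default check string with
a description and the API operations it guards, e.g.:

from oslo_policy import policy

rule = policy.DocumentedRuleDefault(
    'physical_provisioner:read_task',                 # action name
    'role:admin',                                     # default check string
    'Get task status',                                # description
    [{'path': '/api/v1.0/tasks', 'method': 'GET'}])   # operations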

Change sample output filenames

Update filenames to follow Openstack convention

Fix tests to use hex formatted IDs

Openstack resource IDs are not hyphenated, so update
unit tests to reflect this

Fix formatting and whitespace

Refactor a few small items for code review

Update keystone integration to be more
robust with Newton codebase

Centralize policy_engine reference to
support a decorator-based model

RBAC enforcement decorator

Add unit tests for decorator-based
RBAC and the tasks API
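
Usage sketch of the decorator (the resource class here is illustrative; the
real designs and tasks handlers are decorated the same way):

from drydock_provisioner import policy
from drydock_provisioner.control.base import BaseResource

class ExampleResource(BaseResource):
    @policy.ApiEnforcer('physical_provisioner:read_task')
    def on_get(self, req, resp):
        # Only reached when req.context.policy_engine authorizes the action;
        # otherwise the decorator responds with 401/403 via return_error().
        resp.body = '[]'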

Minor refactoring and format changes

Change-Id: I35f90b0c88ec577fda1077814f5eac5c0ffb41e9
Scott Hussey 2017-07-21 14:08:22 -05:00
parent 48a1f54966
commit 4ae627be44
38 changed files with 1387 additions and 253 deletions

.dockerignore

@ -0,0 +1 @@
.tox


@ -1,17 +1,22 @@
# drydock_provisioner
A python REST orchestrator to translate a YAML host topology to a provisioned set of hosts and provide a set of cloud-init post-provisioning instructions.
To build and run, first move into the root directory of the repo and run:
$ tox -e genconfig
$ tox -e genpolicy
$ sudo docker build . -t drydock
$ sudo docker run -d -v $(pwd)/examples:/etc/drydock -P --name='drydock' drydock
$ vi etc/drydock/drydock.conf # Customize configuration
$ sudo docker run -d -v $(pwd)/etc/drydock:/etc/drydock -P --name='drydock' drydock
$ DDPORT=$(sudo docker port drydock 8000/tcp | awk -F ':' '{ print $NF }')
$ curl -v http://localhost:${DDPORT}/api/v1.0/designs
To be useful, Drydock needs to operate in a realistic topology and has some required
downstream services.
See [Configuring Drydock](docs/configuration.rst) for details on customizing the configuration. To be useful, Drydock needs
to operate in a realistic topology and has some required downstream services.
* A VM running Canonical MaaS v2.2+
* A functional Openstack Keystone instance w/ the v3 API
* Docker running to start the Drydock image (can be co-located on the MaaS VM)
* A second VM or Baremetal Node to provision via Drydock
* Baremetal needs to be able to PXE boot

docs/configuration.rst

@ -0,0 +1,51 @@
===================
Configuring Drydock
===================
Drydock uses an INI-like standard oslo_config file. A sample
file can be generated via tox::
$ tox -e genconfig
Customize your configuration based on the information below.
Keystone Integration
====================
Drydock requires a service account to use for validating client
tokens::
$ openstack domain create 'ucp'
$ openstack project create --domain 'ucp' 'service'
$ openstack user create --domain ucp --project service --project-domain 'ucp' --password drydock drydock
$ openstack role add --project-domain ucp --user-domain ucp --user drydock --project service admin
The service account must then be included in the drydock.conf::
[keystone_authtoken]
auth_uri = http://<keystone_ip>:5000/v3
auth_version = 3
delay_auth_decision = true
auth_type = password
auth_section = keystone_authtoken_password
[keystone_authtoken_password]
auth_url = http://<keystone_ip>:5000
project_name = service
project_domain_name = ucp
user_name = drydock
user_domain_name = ucp
password = drydock
MaaS Integration
================
Drydock uses Canonical MaaS to provision new nodes. This requires a running MaaS
instance and providing Drydock with its address and credentials. The MaaS API
enforces authentication via an API key generated by MaaS and used to sign API calls.
Configure Drydock with the MaaS API URL and a valid API key::
[maasdriver]
maas_api_url = http://<maas_ip>:<maas_port>/MAAS
maas_api_key = <valid API key>


@ -33,14 +33,14 @@ Clone the git repo and customize your configuration file
::
git clone https://github.com/att-comdev/drydock
mkdir /tmp/drydock-etc
cp drydock/examples/drydock.conf /tmp/drydock-etc/
cp -r drydock/examples/bootdata /tmp/drydock-etc/
cd drydock
tox -e genconfig
cp -r etc /tmp/drydock-etc
In `/tmp/drydock-etc/drydock.conf` customize your maas_api_url to be
In `/tmp/drydock-etc/drydock/drydock.conf` customize your maas_api_url to be
the URL you used when opening the web UI and maas_api_key.
When starting the Drydock container, /tmp/drydock-etc will be
When starting the Drydock container, /tmp/drydock-etc/drydock will be
mounted as /etc/drydock with your customized configuration.
Drydock
@ -51,7 +51,7 @@ You will need to customize and mount your configuration file
::
$ sudo docker run -v /tmp/drydock-etc:/etc/drydock -P -d drydock:latest
$ sudo docker run -v /tmp/drydock-etc/drydock:/etc/drydock -P -d drydock:latest
Configure Site
--------------
@ -77,4 +77,3 @@ Use the CLI to create tasks to deploy your site
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a prepare_site
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a prepare_node
$ drydock --token <token> --url <drydock_url> task create -d <design_id> -a deploy_node


@ -37,7 +37,6 @@ class DesignCreate(CliAction): # pylint: disable=too-few-public-methods
self.base_design = base_design
def invoke(self):
return self.api_client.create_design(base_design=self.base_design)


@ -36,6 +36,10 @@ import pkgutil
from oslo_config import cfg
import keystoneauth1.loading as loading
IGNORED_MODULES = ('drydock', 'config')
class DrydockConfig(object):
"""
Initialize all the core options
@ -54,12 +58,6 @@ class DrydockConfig(object):
cfg.StrOpt('control_logger_name', default='${global_logger_name}.control', help='Logger name for API server logging'),
]
# API Authentication options
auth_options = [
cfg.StrOpt('admin_token', default='bigboss', help='X-Auth-Token value to bypass backend authentication', secret=True),
cfg.BoolOpt('bypass_enabled', default=False, help='Can backend authentication be bypassed?'),
]
# Enabled plugins
plugin_options = [
cfg.MultiStrOpt('ingester',
@ -95,17 +93,15 @@ class DrydockConfig(object):
def register_options(self):
self.conf.register_opts(DrydockConfig.options)
self.conf.register_opts(DrydockConfig.logging_options, group='logging')
self.conf.register_opts(DrydockConfig.auth_options, group='authentication')
self.conf.register_opts(DrydockConfig.plugin_options, group='plugins')
self.conf.register_opts(DrydockConfig.timeout_options, group='timeouts')
self.conf.register_opts(loading.get_auth_plugin_conf_options('password'), group='keystone_authtoken')
IGNORED_MODULES = ('drydock', 'config')
config_mgr = DrydockConfig()
def list_opts():
opts = {'DEFAULT': DrydockConfig.options,
'logging': DrydockConfig.logging_options,
'authentication': DrydockConfig.auth_options,
'plugins': DrydockConfig.plugin_options,
'timeouts': DrydockConfig.timeout_options
}
@ -115,6 +111,8 @@ def list_opts():
module_names = _list_module_names(package_path, parent_module)
imported_modules = _import_modules(module_names)
_append_config_options(imported_modules, opts)
# Assume we'll use the password plugin, so include those options in the configuration template
opts['keystone_authtoken'] = loading.get_auth_plugin_conf_options('password')
return _tupleize(opts)
def _tupleize(d):


@ -28,6 +28,7 @@ def start_api(state_manager=None, ingester=None, orchestrator=None):
state persistence
:param ingester: Instance of drydock_provisioner.ingester.ingester.Ingester for handling design
part input
:param orchestrator: Instance of drydock_provisioner.orchestrator.Orchestrator for managing tasks
"""
control_api = falcon.API(request_type=DrydockRequest,
middleware=[AuthMiddleware(), ContextMiddleware(), LoggingMiddleware()])


@ -11,18 +11,19 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import falcon.request as request
import uuid
import json
import logging
import falcon
import falcon.request
import drydock_provisioner.error as errors
class BaseResource(object):
def __init__(self):
self.logger = logging.getLogger('control')
self.authorized_roles = []
def on_options(self, req, resp):
self_attrs = dir(self)
@ -36,18 +37,6 @@ class BaseResource(object):
resp.headers['Allow'] = ','.join(allowed_methods)
resp.status = falcon.HTTP_200
# For authorizing access at the Resource level. A Resource requiring
# finer grained authorization at the method or instance level must
# implement that in the request handlers
def authorize_roles(self, role_list):
authorized = set(self.authorized_roles)
applied = set(role_list)
if authorized.isdisjoint(applied):
return False
else:
return True
def req_json(self, req):
if req.content_length is None or req.content_length == 0:
return None
@ -101,8 +90,8 @@ class BaseResource(object):
class StatefulResource(BaseResource):
def __init__(self, state_manager=None):
super(StatefulResource, self).__init__()
def __init__(self, state_manager=None, **kwargs):
super(StatefulResource, self).__init__(**kwargs)
if state_manager is None:
self.error(None, "StatefulResource:init - StatefulResources require a state manager be set")
@ -115,10 +104,17 @@ class DrydockRequestContext(object):
def __init__(self):
self.log_level = 'ERROR'
self.user = None
self.roles = ['anyone']
self.user = None # Username
self.user_id = None # User ID (UUID)
self.user_domain_id = None # Domain owning user
self.roles = []
self.project_id = None
self.project_domain_id = None # Domain owning project
self.is_admin_project = False
self.authenticated = False
self.request_id = str(uuid.uuid4())
self.external_marker = None
self.external_marker = ''
self.policy_engine = None
def set_log_level(self, level):
if level in ['error', 'info', 'debug']:
@ -127,6 +123,9 @@ class DrydockRequestContext(object):
def set_user(self, user):
self.user = user
def set_project(self, project):
self.project = project
def add_role(self, role):
self.roles.append(role)
@ -138,7 +137,23 @@ class DrydockRequestContext(object):
if x != role]
def set_external_marker(self, marker):
self.external_marker = str(marker)[:20]
self.external_marker = marker
class DrydockRequest(request.Request):
context_type = DrydockRequestContext
def set_policy_engine(self, engine):
self.policy_engine = engine
def to_policy_view(self):
policy_dict = {}
policy_dict['user_id'] = self.user_id
policy_dict['user_domain_id'] = self.user_domain_id
policy_dict['project_id'] = self.project_id
policy_dict['project_domain_id'] = self.project_domain_id
policy_dict['roles'] = self.roles
policy_dict['is_admin_project'] = self.is_admin_project
return policy_dict
class DrydockRequest(falcon.request.Request):
context_type = DrydockRequestContext


@ -23,7 +23,7 @@ from .base import StatefulResource
class BootdataResource(StatefulResource):
bootdata_options = [
cfg.StrOpt('prom_init', default=None, help='Path to file to distribute for prom_init.sh')
cfg.StrOpt('prom_init', default='/etc/drydock/bootdata/join.sh', help='Path to file to distribute for prom_init.sh')
]
def __init__(self, orchestrator=None, **kwargs):


@ -1,5 +1,4 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Copyright 2017 AT&T Intellectual Property. All other rights reserved. #
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@ -16,6 +15,7 @@ import json
import uuid
import logging
import drydock_provisioner.policy as policy
import drydock_provisioner.objects as hd_objects
import drydock_provisioner.error as errors
@ -25,17 +25,25 @@ class DesignsResource(StatefulResource):
def __init__(self, **kwargs):
super(DesignsResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
@policy.ApiEnforcer('physical_provisioner:read_data')
def on_get(self, req, resp):
ctx = req.context
state = self.state_manager
designs = list(state.designs.keys())
try:
designs = list(state.designs.keys())
resp.body = json.dumps(designs)
resp.status = falcon.HTTP_200
resp.body = json.dumps(designs)
resp.status = falcon.HTTP_200
except Exception as ex:
self.error(req.context, "Exception raised: %s" % str(ex))
self.return_error(resp, falcon.HTTP_500, message="Error accessing design list", retry=True)
@policy.ApiEnforcer('physical_provisioner:ingest_data')
def on_post(self, req, resp):
ctx = req.context
try:
json_data = self.req_json(req)
design = None
@ -67,8 +75,10 @@ class DesignResource(StatefulResource):
self.authorized_roles = ['user']
self.orchestrator = orchestrator
@policy.ApiEnforcer('physical_provisioner:read_data')
def on_get(self, req, resp, design_id):
source = req.params.get('source', 'designed')
ctx = req.context
try:
design = None
@ -93,6 +103,7 @@ class DesignsPartsResource(StatefulResource):
self.error(None, "DesignsPartsResource requires a configured Ingester instance")
raise ValueError("DesignsPartsResource requires a configured Ingester instance")
@policy.ApiEnforcer('physical_provisioner:ingest_data')
def on_post(self, req, resp, design_id):
ingester_name = req.params.get('ingester', None)
@ -108,12 +119,13 @@ class DesignsPartsResource(StatefulResource):
resp.status = falcon.HTTP_201
resp.body = json.dumps([x.obj_to_simple() for x in parsed_items])
else:
self.return_error(resp, falcon.HTTP_400, message="Empty body not supported", retry=False)
self.return_error(resp, falcon.HTTP_400, message="Empty body not supported", retry=False)
except ValueError:
self.return_error(resp, falcon.HTTP_500, message="Error processing input", retry=False)
except LookupError:
self.return_error(resp, falcon.HTTP_400, message="Ingester %s not registered" % ingester_name, retry=False)
@policy.ApiEnforcer('physical_provisioner:ingest_data')
def on_get(self, req, resp, design_id):
try:
design = self.state_manager.get_design(design_id)
@ -142,12 +154,16 @@ class DesignsPartsResource(StatefulResource):
class DesignsPartsKindsResource(StatefulResource):
def __init__(self, **kwargs):
super(DesignsPartsKindsResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
@policy.ApiEnforcer('physical_provisioner:read_data')
def on_get(self, req, resp, design_id, kind):
pass
ctx = req.context
resp.status = falcon.HTTP_200
class DesignsPartResource(StatefulResource):
@ -156,7 +172,9 @@ class DesignsPartResource(StatefulResource):
self.authorized_roles = ['user']
self.orchestrator = orchestrator
@policy.ApiEnforcer('physical_provisioner:read_data')
def on_get(self, req , resp, design_id, kind, name):
ctx = req.context
source = req.params.get('source', 'designed')
try:
@ -188,3 +206,6 @@ class DesignsPartResource(StatefulResource):
except errors.DesignError as dex:
self.error(req.context, str(dex))
self.return_error(resp, falcon.HTTP_404, message=str(dex), retry=False)
except Exception as exc:
self.error(req.context, str(exc))
self.return_error(resp, falcon.HTTP_500, message=str(exc), retry=False)


@ -12,68 +12,73 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import falcon
import logging
import uuid
import re
from oslo_config import cfg
from drydock_provisioner import policy
class AuthMiddleware(object):
def __init__(self):
self.logger = logging.getLogger('drydock')
# Authentication
def process_request(self, req, resp):
ctx = req.context
token = req.get_header('X-Auth-Token')
user = self.validate_token(token)
ctx.set_policy_engine(policy.policy_engine)
if user is not None:
ctx.set_user(user)
user_roles = self.role_list(user)
ctx.add_roles(user_roles)
for k, v in req.headers.items():
self.logger.debug("Request with header %s: %s" % (k, v))
auth_status = req.get_header('X-SERVICE-IDENTITY-STATUS')
service = True
if auth_status is None:
auth_status = req.get_header('X-IDENTITY-STATUS')
service = False
if auth_status == 'Confirmed':
# Process account and roles
ctx.authenticated = True
ctx.user = req.get_header('X-SERVICE-USER-NAME') if service else req.get_header('X-USER-NAME')
ctx.user_id = req.get_header('X-SERVICE-USER-ID') if service else req.get_header('X-USER-ID')
ctx.user_domain_id = req.get_header('X-SERVICE-USER-DOMAIN-ID') if service else req.get_header('X-USER-DOMAIN-ID')
ctx.project_id = req.get_header('X-SERVICE-PROJECT-ID') if service else req.get_header('X-PROJECT-ID')
ctx.project_domain_id = req.get_header('X-SERVICE-PROJECT-DOMAIN-ID') if service else req.get_header('X-PROJECT-DOMAIN-NAME')
if service:
ctx.add_roles(req.get_header('X-SERVICE-ROLES').split(','))
else:
ctx.add_roles(req.get_header('X-ROLES').split(','))
if req.get_header('X-IS-ADMIN-PROJECT') == 'True':
ctx.is_admin_project = True
else:
ctx.is_admin_project = False
self.logger.debug('Request from authenticated user %s with roles %s' % (ctx.user, ','.join(ctx.roles)))
else:
ctx.add_role('anyone')
ctx.authenticated = False
# Authorization
def process_resource(self, req, resp, resource, params):
ctx = req.context
if not resource.authorize_roles(ctx.roles):
raise falcon.HTTPUnauthorized('Authentication required',
('This resource requires an authorized role.'))
# Return the username associated with an authenticated token or None
def validate_token(self, token):
if token == '42':
return 'scott'
elif token == 'bigboss':
return 'admin'
else:
return None
# Return the list of roles assigned to the username
# Roles need to be an enum
def role_list(self, username):
if username == 'scott':
return ['user']
elif username == 'admin':
return ['user', 'admin']
class ContextMiddleware(object):
def __init__(self):
# Setup validation pattern for external marker
UUIDv4_pattern = '^[0-9A-F]{8}-[0-9A-F]{4}-4[0-9A-F]{3}-[89AB][0-9A-F]{3}-[0-9A-F]{12}$'
self.marker_re = re.compile(UUIDv4_pattern, re.I)
def process_request(self, req, resp):
ctx = req.context
requested_logging = req.get_header('X-Log-Level')
if (cfg.CONF.logging.log_level == 'DEBUG' or
(requested_logging == 'DEBUG' and 'admin' in ctx.roles)):
ctx.set_log_level('DEBUG')
elif requested_logging == 'INFO':
ctx.set_log_level('INFO')
ext_marker = req.get_header('X-Context-Marker')
ctx.set_external_marker(ext_marker if ext_marker is not None else '')
if ext_marker is not None and self.marker_re.fullmatch(ext_marker):
ctx.set_external_marker(ext_marker)
class LoggingMiddleware(object):


@ -16,6 +16,9 @@ import json
import threading
import traceback
from drydock_provisioner import policy
from drydock_provisioner import error as errors
import drydock_provisioner.objects.task as obj_task
from .base import StatefulResource
@ -23,38 +26,203 @@ class TasksResource(StatefulResource):
def __init__(self, orchestrator=None, **kwargs):
super(TasksResource, self).__init__(**kwargs)
self.authorized_roles = ['user']
self.orchestrator = orchestrator
@policy.ApiEnforcer('physical_provisioner:read_task')
def on_get(self, req, resp):
task_id_list = [str(x.get_id()) for x in self.state_manager.tasks]
resp.body = json.dumps(task_id_list)
def on_post(self, req, resp):
try:
json_data = self.req_json(req)
design_id = json_data.get('design_id', None)
action = json_data.get('action', None)
node_filter = json_data.get('node_filter', None)
if design_id is None or action is None:
self.info(req.context, "Task creation requires fields design_id, action")
self.return_error(resp, falcon.HTTP_400, message="Task creation requires fields design_id, action", retry=False)
return
task = self.orchestrator.create_task(obj_task.OrchestratorTask, design_id=design_id,
action=action, node_filter=node_filter)
task_thread = threading.Thread(target=self.orchestrator.execute_task, args=[task.get_id()])
task_thread.start()
resp.body = json.dumps(task.to_dict())
resp.status = falcon.HTTP_201
task_id_list = [str(x.get_id()) for x in self.state_manager.tasks]
resp.body = json.dumps(task_id_list)
resp.status = falcon.HTTP_200
except Exception as ex:
self.error(req.context, "Unknown error: %s\n%s" % (str(ex), traceback.format_exc()))
self.return_error(resp, falcon.HTTP_500, message="Unknown error", retry=False)
@policy.ApiEnforcer('physical_provisioner:create_task')
def on_post(self, req, resp):
# A map of supported actions to the handlers for tasks for those actions
supported_actions = {
'validate_design': TasksResource.task_validate_design,
'verify_site': TasksResource.task_verify_site,
'prepare_site': TasksResource.task_prepare_site,
'verify_node': TasksResource.task_verify_node,
'prepare_node': TasksResource.task_prepare_node,
'deploy_node': TasksResource.task_deploy_node,
'destroy_node': TasksResource.task_destroy_node,
}
try:
ctx = req.context
json_data = self.req_json(req)
action = json_data.get('action', None)
if action not in supported_actions:
self.error(req.context, "Unsupported action %s" % action)
self.return_error(resp, falcon.HTTP_400, message="Unsupported action %s" % action, retry=False)
else:
supported_actions.get(action)(self, req, resp)
except Exception as ex:
self.error(req.context, "Unknown error: %s\n%s" % (str(ex), traceback.format_exc()))
self.return_error(resp, falcon.HTTP_500, message="Unknown error", retry=False)
@policy.ApiEnforcer('physical_provisioner:validate_design')
def task_validate_design(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'validate_design':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_validate_design" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:verify_site')
def task_verify_site(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'verify_site':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_verify_site" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:prepare_site')
def task_prepare_site(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'prepare_site':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_prepare_site" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:verify_node')
def task_verify_node(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'verify_node':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_verify_node" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:prepare_node')
def task_prepare_node(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'prepare_node':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_prepare_node" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:deploy_node')
def task_deploy_node(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'deploy_node':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_deploy_node" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
@policy.ApiEnforcer('physical_provisioner:destroy_node')
def task_destroy_node(self, req, resp):
json_data = self.req_json(req)
action = json_data.get('action', None)
if action != 'destroy_node':
self.error(req.context, "Task body ended up in wrong handler: action %s in task_destroy_node" % action)
self.return_error(resp, falcon.HTTP_500, message="Error - misrouted request", retry=False)
try:
task = self.create_task(json_data)
resp.body = json.dumps(task.to_dict())
resp.append_header('Location', "/api/v1.0/tasks/%s" % str(task.task_id))
resp.status = falcon.HTTP_201
except errors.InvalidFormat as ex:
self.error(req.context, ex.msg)
self.return_error(resp, falcon.HTTP_400, message=ex.msg, retry=False)
def create_task(self, task_body):
"""
Given the parsed body of a create task request, create the task
and start it in a thread
:param dict task_body: Dict representing the JSON body of a create task request
action - The action the task will execute
design_id - The design context the task will execute in
node_filter - A filter on which nodes will be affected by the task. The result is
an intersection of
applying all filters
node_names - A list of node hostnames
rack_names - A list of rack names that contain the nodes
node_tags - A list of tags applied to the nodes
:return: The Task object created
"""
design_id = task_body.get('design_id', None)
node_filter = task_body.get('node_filter', None)
action = task_body.get('action', None)
if design_id is None or action is None:
raise errors.InvalidFormat('Task creation requires fields design_id, action')
task = self.orchestrator.create_task(obj_task.OrchestratorTask, design_id=design_id,
action=action, node_filter=node_filter)
task_thread = threading.Thread(target=self.orchestrator.execute_task, args=[task.get_id()])
task_thread.start()
return task
class TaskResource(StatefulResource):
@ -64,7 +232,14 @@ class TaskResource(StatefulResource):
self.orchestrator = orchestrator
def on_get(self, req, resp, task_id):
ctx = req.context
policy_action = 'physical_provisioner:read_task'
try:
if not self.check_policy(policy_action, ctx):
self.access_denied(req, resp, policy_action)
return
task = self.state_manager.get_task(task_id)
if task is None:


@ -17,6 +17,7 @@ import os
from oslo_config import cfg
from drydock_provisioner import policy
import drydock_provisioner.config as config
import drydock_provisioner.objects as objects
import drydock_provisioner.ingester as ingester
@ -68,13 +69,23 @@ def start_drydock():
if 'MAAS_API_KEY' in os.environ:
cfg.CONF.set_override(name='maas_api_key', override=os.environ['MAAS_API_KEY'], group='maasdriver')
# Setup the RBAC policy enforcer
policy.policy_engine = policy.DrydockPolicy()
policy.policy_engine.register_policy()
wsgi_callable = api.start_api(state_manager=state, ingester=input_ingester, orchestrator=orchestrator)
# Ensure that the policy_engine is initialized before starting the API
wsgi_callable = api.start_api(state_manager=state, ingester=input_ingester,
orchestrator=orchestrator)
# Now that loggers are configured, log the effective config
cfg.CONF.log_opt_values(logging.getLogger(cfg.CONF.logging.global_logger_name), logging.DEBUG)
return wsgi_callable
# Initialization compatible with PasteDeploy
def paste_start_drydock(global_conf, **kwargs):
# At this time just ignore everything in the paste configuration and rely on oslo_config
return drydock
drydock = start_drydock()


@ -36,11 +36,9 @@ class DrydockClient(object):
resp = self.session.get(endpoint)
if resp.status_code != 200:
raise errors.ClientError("Received a %d from GET URL: %s" % (resp.status_code, endpoint),
code=resp.status_code)
else:
return resp.json()
self._check_response(resp)
return resp.json()
def get_design(self, design_id, source='designed'):
"""
@ -55,14 +53,10 @@ class DrydockClient(object):
resp = self.session.get(endpoint, query={'source': source})
if resp.status_code == 404:
raise errors.ClientError("Design ID %s not found." % (design_id), code=404)
elif resp.status_code != 200:
raise errors.ClientError("Received a %d from GET URL: %s" % (resp.status_code, endpoint),
code=resp.status_code)
else:
return resp.json()
self._check_response(resp)
return resp.json()
def create_design(self, base_design=None):
"""
Create a new design context for holding design parts
@ -77,12 +71,10 @@ class DrydockClient(object):
else:
resp = self.session.post(endpoint)
if resp.status_code != 201:
raise errors.ClientError("Received a %d from POST URL: %s" % (resp.status_code, endpoint),
code=resp.status_code)
else:
design = resp.json()
return design.get('id', None)
self._check_response(resp)
design = resp.json()
return design.get('id', None)
def get_part(self, design_id, kind, key, source='designed'):
"""
@ -99,13 +91,9 @@ class DrydockClient(object):
resp = self.session.get(endpoint, query={'source': source})
if resp.status_code == 404:
raise errors.ClientError("%s %s in design %s not found" % (key, kind, design_id), code=404)
elif resp.status_code != 200:
raise errors.ClientError("Received a %d from GET URL: %s" % (resp.status_code, endpoint),
code=resp.status_code)
else:
return resp.json()
self._check_response(resp)
return resp.json()
def load_parts(self, design_id, yaml_string=None):
"""
@ -120,15 +108,10 @@ class DrydockClient(object):
resp = self.session.post(endpoint, query={'ingester': 'yaml'}, body=yaml_string)
if resp.status_code == 400:
raise errors.ClientError("Invalid inputs: %s" % resp.text, code=resp.status_code)
elif resp.status_code == 500:
raise errors.ClientError("Server error: %s" % resp.text, code=resp.status_code)
elif resp.status_code == 201:
return resp.json()
else:
raise errors.ClientError("Uknown error. Received %d" % resp.status_code,
code=resp.status_code)
self._check_response(resp)
return resp.json()
def get_tasks(self):
"""
Get a list of all the tasks, completed or running.
@ -140,10 +123,9 @@ class DrydockClient(object):
resp = self.session.get(endpoint)
if resp.status_code != 200:
raise errors.ClientError("Server error: %s" % resp.text, code=resp.status_code)
else:
return resp.json()
self._check_response(resp)
return resp.json()
def get_task(self, task_id):
"""
@ -157,12 +139,9 @@ class DrydockClient(object):
resp = self.session.get(endpoint)
if resp.status_code == 200:
return resp.json()
elif resp.status_code == 404:
raise errors.ClientError("Task %s not found" % task_id, code=resp.status_code)
else:
raise errors.ClientError("Server error: %s" % resp.text, code=resp.status_code)
self._check_response(resp)
return resp.json()
def create_task(self, design_id, task_action, node_filter=None):
"""
@ -185,9 +164,14 @@ class DrydockClient(object):
resp = self.session.post(endpoint, data=task_dict)
if resp.status_code == 201:
return resp.json().get('task_id')
elif resp.status_code == 400:
raise errors.ClientError("Invalid inputs, received a %d: %s" % (resp.status_code, resp.text),
code=resp.status_code)
self._check_response(resp)
return resp.json().get('task_id')
def _check_response(self, resp):
if resp.status_code == 401:
raise errors.ClientUnauthorizedError("Unauthorized access to %s, include valid token." % resp.url)
elif resp.status_code == 403:
raise errors.ClientForbiddenError("Forbidden access to %s" % resp.url)
elif not resp.ok:
raise errors.ClientError("Error - received %d: %s" % (resp.status_code, resp.text), code=resp.status_code)


@ -16,27 +16,35 @@ import json
class DesignError(Exception):
pass
class StateError(Exception):
pass
class OrchestratorError(Exception):
pass
class TransientOrchestratorError(OrchestratorError):
pass
class PersistentOrchestratorError(OrchestratorError):
pass
class DriverError(Exception):
pass
class TransientDriverError(DriverError):
pass
class PersistentDriverError(DriverError):
pass
class ApiError(Exception):
def __init__(self, msg, code=500):
super().__init__(msg)
@ -47,13 +55,22 @@ class ApiError(Exception):
err_dict = {'error': msg, 'type': self.__class__.__name__}
return json.dumps(err_dict)
class InvalidFormat(ApiError):
def __init__(self, msg, code=400):
super(InvalidFormat, self).__init__(msg, code=code)
super(InvalidFormat, self).__init__(msg, code=code)
class ClientError(Exception):
class ClientError(ApiError):
def __init__(self, msg, code=500):
super().__init__(msg)
self.message = msg
self.status_code = code
class ClientUnauthorizedError(ClientError):
def __init__(self, msg):
super().__init__(msg, code=401)
class ClientForbiddenError(ClientError):
def __init__(self, msg):
super().__init__(msg, code=403)


@ -0,0 +1,114 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import functools
import falcon
from oslo_config import cfg
from oslo_policy import policy
# Global reference to an instantiated DrydockPolicy. Will be initialized by drydock.py
policy_engine = None
class DrydockPolicy(object):
"""
Initialize policy defaults
"""
# Base Policy
base_rules = [
policy.RuleDefault('admin_required', 'role:admin or is_admin:1', description='Actions requiring admin authority'),
]
# Orchestrator Policy
task_rules = [
policy.DocumentedRuleDefault('physical_provisioner:read_task', 'role:admin', 'Get task status',
[{'path': '/api/v1.0/tasks', 'method': 'GET'},
{'path': '/api/v1.0/tasks/{task_id}', 'method': 'GET'}]),
policy.DocumentedRuleDefault('physical_provisioner:validate_design', 'role:admin', 'Create validate_design task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:verify_site', 'role:admin', 'Create verify_site task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:prepare_site', 'role:admin', 'Create prepare_site task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:verify_node', 'role:admin', 'Create verify_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:prepare_node', 'role:admin', 'Create prepare_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:deploy_node', 'role:admin', 'Create deploy_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
policy.DocumentedRuleDefault('physical_provisioner:destroy_node', 'role:admin', 'Create destroy_node task',
[{'path': '/api/v1.0/tasks', 'method': 'POST'}]),
]
# Data Management Policy
data_rules = [
policy.DocumentedRuleDefault('physical_provisioner:read_data', 'role:admin', 'Read loaded design data',
[{'path': '/api/v1.0/designs', 'method': 'GET'},
{'path': '/api/v1.0/designs/{design_id}', 'method': 'GET'}]),
policy.DocumentedRuleDefault('physical_provisioner:ingest_data', 'role:admin', 'Load design data',
[{'path': '/api/v1.0/designs', 'method': 'POST'},
{'path': '/api/v1.0/designs/{design_id}/parts', 'method': 'POST'}])
]
def __init__(self):
self.enforcer = policy.Enforcer(cfg.CONF)
def register_policy(self):
self.enforcer.register_defaults(DrydockPolicy.base_rules)
self.enforcer.register_defaults(DrydockPolicy.task_rules)
self.enforcer.register_defaults(DrydockPolicy.data_rules)
self.enforcer.load_rules()
def authorize(self, action, ctx):
target = {'project_id': ctx.project_id, 'user_id': ctx.user_id}
return self.enforcer.authorize(action, target, ctx.to_policy_view())
class ApiEnforcer(object):
"""
A decorator class for enforcing RBAC policies
"""
def __init__(self, action):
self.action = action
self.logger = logging.getLogger('drydock.policy')
def __call__(self, f):
@functools.wraps(f)
def secure_handler(slf, req, resp, *args):
ctx = req.context
policy_engine = ctx.policy_engine
self.logger.debug("Enforcing policy %s on request %s" % (self.action, ctx.request_id))
if policy_engine is not None and policy_engine.authorize(self.action, ctx):
return f(slf, req, resp, *args)
else:
if ctx.authenticated:
slf.info(ctx, "Error - Forbidden access - action: %s" % self.action)
slf.return_error(resp, falcon.HTTP_403, message="Forbidden", retry=False)
else:
slf.info(ctx, "Error - Unauthenticated access")
slf.return_error(resp, falcon.HTTP_401, message="Unauthenticated", retry=False)
return secure_handler
def list_policies():
default_policy = []
default_policy.extend(DrydockPolicy.base_rules)
default_policy.extend(DrydockPolicy.task_rules)
default_policy.extend(DrydockPolicy.data_rules)
return default_policy


@ -5,7 +5,8 @@ CMD="drydock"
PORT=${PORT:-9000}
if [ "$1" = 'server' ]; then
exec uwsgi --http :${PORT} -w drydock_provisioner.drydock --callable drydock --enable-threads -L --pyargv "--config-file /etc/drydock/drydock.conf"
# exec uwsgi --http :${PORT} -w drydock_provisioner.drydock --callable drydock --enable-threads -L --pyargv "--config-file /etc/drydock/drydock.conf"
exec uwsgi --http :${PORT} --paste config:/etc/drydock/api-paste.ini --enable-threads -L --pyargv "--config-file /etc/drydock/drydock.conf"
fi
exec ${CMD} $@

etc/drydock/.gitignore

@ -0,0 +1,3 @@
# Ignore generated samples
drydock.conf
policy.yaml


@ -0,0 +1,8 @@
[app:drydock-api]
paste.app_factory = drydock_provisioner.drydock:paste_start_drydock
[pipeline:main]
pipeline = authtoken drydock-api
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory


@ -0,0 +1,7 @@
[DEFAULT]
output_file = etc/drydock/drydock.conf.sample
wrap_width = 80
namespace = drydock_provisioner
namespace = keystonemiddleware.auth_token
namespace = oslo.policy


@ -0,0 +1,5 @@
[DEFAULT]
output_file = etc/drydock/policy.yaml.sample
wrap_width = 80
namespace = drydock_provisioner


@ -1,50 +0,0 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[DEFAULT]
# No global options yet
[logging]
log_level = 'DEBUG'
[authentication]
bypass_enabled = True
[plugins]
# All the config ingesters that are active
# Supports multiple values
ingester = 'drydock_provisioner.ingester.plugins.yaml.YamlIngester'
# OOB drivers that are enabled
# Supports multiple values
oob_driver = 'drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver'
oob_driver = 'drydock_provisioner.drivers.oob.manual_driver.driver.ManualDriver'
# Node driver that is enabled
node_driver = 'drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver'
[timeouts]
create_network_template = 2
identify_node = 10
configure_hardware = 30
apply_node_networking = 5
apply_node_platform = 5
deploy_node = 45
[maasdriver]
maas_api_url = 'http://localhost:8000/MAAS/api/2.0/'
maas_api_key = 'your:secret:key'
[bootdata]
prom_init = '/etc/drydock/bootdata/join.sh'


@ -0,0 +1,347 @@
[DEFAULT]
#
# From drydock_provisioner
#
# Polling interval in seconds for checking subtask or downstream status (integer
# value)
#poll_interval = 10
[authentication]
#
# From drydock_provisioner
#
# Client request authentication strategy (string value)
#auth_strategy = keystone
# X-Auth-Token value to bypass backend authentication (string value)
#admin_token = bigboss
# Can backend authentication be bypassed? (boolean value)
#bypass_enabled = false
[bootdata]
#
# From drydock_provisioner
#
# Path to file to distribute for prom_init.sh (string value)
#prom_init = /etc/drydock/bootdata/join.sh
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
# clients are redirected to this endpoint to authenticate. Although this
# endpoint should ideally be unversioned, client support in the wild varies.
# If you're using a versioned v2 endpoint here, then this should *not* be the
# same endpoint the service user utilizes for validating tokens, because normal
# end users may not be able to reach that endpoint. (string value)
auth_uri = http://172.20.0.3:5000/v3
# API version of the admin Identity API endpoint. (string value)
auth_version = 3
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
delay_auth_decision = true
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# How many times are we trying to reconnect when communicating with Identity API
# Server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set to
# -1 to disable caching completely. (integer value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is retrieved from
# the Identity service (in seconds). A high number of revocation events combined
# with a low cache duration may significantly reduce performance. Only valid for
# PKI tokens. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached server.
# (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it if
# not. "strict" like "permissive" but if the bind type is unknown the token will
# be rejected. "required" any form of token binding is needed to be allowed.
# Finally the name of a binding method that must be present in tokens. (string
# value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
auth_type = password
# Config Section from which to load plugin specific options (string value)
auth_section = keystone_authtoken_password
[keystone_authtoken_password]
#
# From drydock_provisioner
#
# Authentication URL (string value)
auth_url = http://172.20.0.3:5000/
# Domain ID to scope to (string value)
#domain_id = <None>
# Domain name to scope to (string value)
domain_name = ucp
# Project ID to scope to (string value)
# Deprecated group/name - [keystone_authtoken_password]/tenant_id
#project_id = <None>
# Project name to scope to (string value)
# Deprecated group/name - [keystone_authtoken_password]/tenant_name
project_name = service
# Domain ID containing project (string value)
#project_domain_id = <None>
# Domain name containing project (string value)
project_domain_name = ucp
# Trust ID (string value)
#trust_id = <None>
# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>
# Optional domain name to use with v3 API and v2 parameters. It will be used for
# both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
default_domain_name = ucp
# User id (string value)
#user_id = <None>
# Username (string value)
# Deprecated group/name - [keystone_authtoken_password]/user_name
#username = <None>
user_name = drydock
# User's domain id (string value)
#user_domain_id = <None>
# User's domain name (string value)
user_domain_name = ucp
# User's password (string value)
password = drydock
[logging]
#
# From drydock_provisioner
#
# Global log level for Drydock (string value)
#log_level = INFO
# Logger name for the top-level logger (string value)
#global_logger_name = drydock
# Logger name for OOB driver logging (string value)
#oobdriver_logger_name = ${global_logger_name}.oobdriver
# Logger name for Node driver logging (string value)
#nodedriver_logger_name = ${global_logger_name}.nodedriver
# Logger name for API server logging (string value)
#control_logger_name = ${global_logger_name}.control
[maasdriver]
#
# From drydock_provisioner
#
# The API key for accessing MaaS (string value)
#maas_api_key = <None>
# The URL for accessing MaaS API (string value)
#maas_api_url = <None>
# Polling interval for querying MaaS status in seconds (integer value)
#poll_interval = 10
[oslo_policy]
#
# From oslo.policy
#
# The file that defines policies. (string value)
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
#policy_dirs = policy.d
[plugins]
#
# From drydock_provisioner
#
# Module path string of a input ingester to enable (multi valued)
#ingester = drydock_provisioner.ingester.plugins.yaml.YamlIngester
# Module path string of a OOB driver to enable (multi valued)
#oob_driver = drydock_provisioner.drivers.oob.pyghmi_driver.PyghmiDriver
# Module path string of the Node driver to enable (string value)
#node_driver = drydock_provisioner.drivers.node.maasdriver.driver.MaasNodeDriver
# Module path string of the Network driver enable (string value)
#network_driver = <None>
[timeouts]
#
# From drydock_provisioner
#
# Fallback timeout when a specific one is not configured (integer value)
#drydock_timeout = 5
# Timeout in minutes for creating site network templates (integer value)
#create_network_template = 2
# Timeout in minutes for creating user credentials (integer value)
#configure_user_credentials = 2
# Timeout in minutes for initial node identification (integer value)
#identify_node = 10
# Timeout in minutes for node commissioning and hardware configuration (integer
# value)
#configure_hardware = 30
# Timeout in minutes for configuring node networking (integer value)
#apply_node_networking = 5
# Timeout in minutes for configuring node platform (integer value)
#apply_node_platform = 5
# Timeout in minutes for deploying a node (integer value)
#deploy_node = 45


@ -1,11 +1,16 @@
PyYAML
pyghmi>=1.0.18
PyYAML===3.12
pyghmi===1.0.18
netaddr
falcon
oslo.versionedobjects>=1.23.0
oslo.versionedobjects===1.23.0
requests
oauthlib
uwsgi>1.4
uwsgi===2.0.15
bson===0.4.7
oslo.config
click===6.7
PasteDeploy==1.5.2
keystonemiddleware===4.9.1
oslo.policy===1.22.1
iso8601===0.1.11
keystoneauth1===2.13.0

requirements-lock.txt

@ -0,0 +1,69 @@
amqp==2.2.1
Babel==2.3.4
bson==0.4.7
cachetools==2.0.0
certifi==2017.7.27.1
chardet==3.0.4
click==6.7
contextlib2==0.5.5
debtcollector==1.17.0
enum-compat==0.0.2
eventlet==0.20.0
falcon==1.2.0
fasteners==0.14.1
futurist==1.3.0
greenlet==0.4.12
idna==2.5
iso8601==0.1.11
Jinja2==2.9.6
keystoneauth1===2.13.0
keystonemiddleware==4.9.1
kombu==4.1.0
MarkupSafe==1.0
monotonic==1.3
msgpack-python==0.4.8
netaddr==0.7.19
netifaces==0.10.6
oauthlib==2.0.2
oslo.concurrency==3.21.0
oslo.config==4.11.0
oslo.context==2.17.0
oslo.i18n==3.17.0
oslo.log==3.30.0
oslo.messaging==5.30.0
oslo.middleware==3.30.0
oslo.policy==1.22.1
oslo.serialization==2.20.0
oslo.service==1.25.0
oslo.utils==3.28.0
oslo.versionedobjects==1.23.0
Paste==2.0.3
PasteDeploy==1.5.2
pbr==3.1.1
pika==0.10.0
pika-pool==0.1.3
positional==1.1.2
prettytable==0.7.2
pycadf==2.6.0
pycrypto==2.6.1
pyghmi==1.0.18
pyinotify==0.9.6
pyparsing==2.2.0
python-dateutil==2.6.1
python-keystoneclient==3.13.0
python-mimeparse==1.6.0
pytz==2017.2
PyYAML==3.12
repoze.lru==0.6
requests==2.18.2
rfc3986==1.1.0
Routes==2.4.1
six==1.10.0
statsd==3.2.1
stevedore==1.25.0
tenacity==4.4.0
urllib3==1.22
uWSGI==2.0.15
vine==1.1.4
WebOb==1.7.3
wrapt==1.10.10


@@ -4,3 +4,4 @@ responses
 mock
 tox
 oslo.versionedobjects[fixtures]>=1.23.0
+oslo.config[fixtures]


@@ -46,20 +46,9 @@ setup(name='drydock_provisioner',
'drydock_provisioner.cli.part',
'drydock_provisioner.cli.task',
'drydock_provisioner.drydock_client'],
install_requires=[
'PyYAML',
'pyghmi>=1.0.18',
'netaddr',
'falcon',
'oslo.versionedobjects>=1.23.0',
'requests',
'oauthlib',
'uwsgi>1.4',
'bson===0.4.7',
'oslo.config',
],
entry_points={
'oslo.config.opts': 'drydock_provisioner = drydock_provisioner.config:list_opts',
'oslo.policy.policies': 'drydock_provisioner = drydock_provisioner.policy:list_policies',
'console_scripts': 'drydock = drydock_provisioner.cli.commands:drydock'
}
)


@@ -0,0 +1,114 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
import logging
import json
from drydock_provisioner import policy
from drydock_provisioner.orchestrator import Orchestrator
from drydock_provisioner.control.base import DrydockRequestContext, BaseResource
from drydock_provisioner.control.tasks import TaskResource, TasksResource
import pytest
import falcon
logging.basicConfig(level=logging.DEBUG)
class TestTasksApi():
def test_read_tasks(self, mocker):
''' TasksResource.on_get() should return the task list from the state manager
with a 200 response when the request context is authorized
'''
mocker.patch('oslo_policy.policy.Enforcer')
state = mocker.MagicMock()
ctx = DrydockRequestContext()
policy_engine = policy.DrydockPolicy()
# Mock policy enforcement
policy_mock_config = {'authorize.return_value': True}
policy_engine.enforcer.configure_mock(**policy_mock_config)
api = TasksResource(state_manager=state)
# Configure context
project_id = str(uuid.uuid4().hex)
ctx.project_id = project_id
user_id = str(uuid.uuid4().hex)
ctx.user_id = user_id
ctx.roles = ['admin']
ctx.set_policy_engine(policy_engine)
# Configure mocked request and response
req = mocker.MagicMock()
resp = mocker.MagicMock()
req.context = ctx
api.on_get(req, resp)
expected_calls = [mocker.call.tasks]
assert state.has_calls(expected_calls)
assert resp.status == falcon.HTTP_200
def test_create_task(self, mocker):
mocker.patch('oslo_policy.policy.Enforcer')
state = mocker.MagicMock()
orch = mocker.MagicMock(spec=Orchestrator, wraps=Orchestrator(state_manager=state))
orch_mock_config = {'execute_task.return_value': True}
orch.configure_mock(**orch_mock_config)
ctx = DrydockRequestContext()
policy_engine = policy.DrydockPolicy()
json_body = json.dumps({
'action': 'verify_site',
'design_id': 'foo',
}).encode('utf-8')
# Mock policy enforcement
policy_mock_config = {'authorize.return_value': True}
policy_engine.enforcer.configure_mock(**policy_mock_config)
api = TasksResource(orchestrator=orch, state_manager=state)
# Configure context
project_id = str(uuid.uuid4().hex)
ctx.project_id = project_id
user_id = str(uuid.uuid4().hex)
ctx.user_id = user_id
ctx.roles = ['admin']
ctx.set_policy_engine(policy_engine)
# Configure mocked request and response
req = mocker.MagicMock(spec=falcon.Request)
req.content_type = 'application/json'
req.stream.read.return_value = json_body
resp = mocker.MagicMock(spec=falcon.Response)
req.context = ctx
api.on_post(req, resp)
assert resp.status == falcon.HTTP_201
assert resp.get_header('Location') is not None
@policy.ApiEnforcer('physical_provisioner:read_task')
def target_function(self, req, resp):
return True


@@ -0,0 +1,59 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
import logging
from drydock_provisioner import policy
from drydock_provisioner.control.base import DrydockRequestContext
import pytest
logging.basicConfig(level=logging.DEBUG)
class TestEnforcerDecorator():
def test_apienforcer_decorator(self, mocker):
''' DrydockPolicy.authorized() should correctly use oslo_policy to enforce
RBAC policy based on a DrydockRequestContext instance. authorized() is
called via the policy.ApiEnforcer decorator.
'''
mocker.patch('oslo_policy.policy.Enforcer')
ctx = DrydockRequestContext()
policy_engine = policy.DrydockPolicy()
# Configure context
project_id = str(uuid.uuid4())
ctx.project_id = project_id
user_id = str(uuid.uuid4())
ctx.user_id = user_id
ctx.roles = ['admin']
ctx.set_policy_engine(policy_engine)
# Configure mocked request and response
req = mocker.MagicMock()
resp = mocker.MagicMock()
req.context = ctx
self.target_function(req, resp)
expected_calls = [mocker.call.authorize('physical_provisioner:read_task', {'project_id': project_id, 'user_id': user_id},
ctx.to_policy_view())]
policy_engine.enforcer.assert_has_calls(expected_calls)
@policy.ApiEnforcer('physical_provisioner:read_task')
def target_function(self, req, resp):
return True
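The decorator exercised here wraps the resource handler and defers to the policy engine carried on the request context. A minimal sketch of that pattern follows; the context attribute name and the error handling are assumptions for illustration, not the actual Drydock implementation:

import functools
import falcon

class ApiEnforcer(object):
    """Enforce an oslo.policy action before invoking a Falcon responder."""

    def __init__(self, action):
        self.action = action

    def __call__(self, f):
        @functools.wraps(f)
        def wrapper(resource, req, resp, *args, **kwargs):
            ctx = req.context
            # Assumes the engine set via set_policy_engine() is reachable as
            # ctx.policy_engine; authorize() is the facade shown in the policy tests
            if ctx.policy_engine.authorize(self.action, ctx):
                return f(resource, req, resp, *args, **kwargs)
            resp.status = falcon.HTTP_403
        return wrapper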


@@ -0,0 +1,115 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
import falcon
import sys
from drydock_provisioner.control.base import DrydockRequest
from drydock_provisioner.control.middleware import AuthMiddleware
import pytest
class TestAuthMiddleware():
# the WSGI env for a request processed by keystone middleware
# with user token
ks_user_env = { 'REQUEST_METHOD': 'GET',
'SCRIPT_NAME': '/foo',
'PATH_INFO': '',
'QUERY_STRING': '',
'CONTENT_TYPE': '',
'CONTENT_LENGTH': 0,
'SERVER_NAME': 'localhost',
'SERVER_PORT': '9000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'HTTP_X_IDENTITY_STATUS': 'Confirmed',
'HTTP_X_PROJECT_ID': '',
'HTTP_X_USER_ID': '',
'HTTP_X_AUTH_TOKEN': '',
'HTTP_X_ROLES': '',
'wsgi.version': (1,0),
'wsgi.url_scheme': 'http',
'wsgi.input': sys.stdin,
'wsgi.errors': sys.stderr,
'wsgi.multithread': False,
'wsgi.multiprocess': False,
'wsgi.run_once': False,
}
# the WSGI env for a request processed by keystone middleware
# with service token
ks_service_env = { 'REQUEST_METHOD': 'GET',
'SCRIPT_NAME': '/foo',
'PATH_INFO': '',
'QUERY_STRING': '',
'CONTENT_TYPE': '',
'CONTENT_LENGTH': 0,
'SERVER_NAME': 'localhost',
'SERVER_PORT': '9000',
'SERVER_PROTOCOL': 'HTTP/1.1',
'HTTP_X_SERVICE_IDENTITY_STATUS': 'Confirmed',
'HTTP_X_SERVICE_PROJECT_ID': '',
'HTTP_X_SERVICE_USER_ID': '',
'HTTP_X_SERVICE_TOKEN': '',
'HTTP_X_ROLES': '',
'wsgi.version': (1,0),
'wsgi.url_scheme': 'http',
'wsgi.input': sys.stdin,
'wsgi.errors': sys.stderr,
'wsgi.multithread': False,
'wsgi.multiprocess': False,
'wsgi.run_once': False,
}
def test_process_request_user(self):
''' AuthMiddleware is expected to correctly identify the headers
added to an authenticated request by keystonemiddleware in a
PasteDeploy configuration
'''
req_env = TestAuthMiddleware.ks_user_env
project_id = str(uuid.uuid4().hex)
req_env['HTTP_X_PROJECT_ID'] = project_id
user_id = str(uuid.uuid4().hex)
req_env['HTTP_X_USER_ID'] = user_id
token = str(uuid.uuid4().hex)
req_env['HTTP_X_AUTH_TOKEN'] = token
middleware = AuthMiddleware()
request = DrydockRequest(req_env)
response = falcon.Response()
middleware.process_request(request, response)
assert request.context.authenticated == True
assert request.context.user_id == user_id
def test_process_request_user_noauth(self):
''' AuthMiddleware is expected to correctly identify the headers
added to an unauthenticated (no token, bad token) request by
keystonemiddleware in a PasteDeploy configuration
'''
req_env = TestAuthMiddleware.ks_user_env
req_env['HTTP_X_IDENTITY_STATUS'] = 'Invalid'
middleware = AuthMiddleware()
request = DrydockRequest(req_env)
response = falcon.Response()
middleware.process_request(request, response)
assert request.context.authenticated == False
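For context, the behaviour these tests expect from AuthMiddleware.process_request() is roughly the following. This is a sketch inferred from the WSGI environments and context attributes used above, not the middleware's actual code:

class AuthMiddleware(object):
    """Populate the request context from headers injected by keystonemiddleware."""

    def process_request(self, req, resp):
        ctx = req.context
        # keystonemiddleware marks a validated user token with X-Identity-Status
        if req.get_header('X-IDENTITY-STATUS') == 'Confirmed':
            ctx.authenticated = True
            ctx.user_id = req.get_header('X-USER-ID')
            ctx.project_id = req.get_header('X-PROJECT-ID')
            ctx.roles = (req.get_header('X-ROLES') or '').split(',')
        else:
            ctx.authenticated = False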


@@ -26,9 +26,6 @@ import yaml
class TestClass(object):
def setup_method(self, method):
print("Running test {0}".format(method.__name__))
def test_design_inheritance(self, loaded_design):
orchestrator = Orchestrator(state_manager=loaded_design,
@@ -41,13 +38,11 @@ class TestClass(object):
design_data = orchestrator.compute_model_inheritance(design_data)
node = design_data.get_baremetal_node("controller01")
assert node.applied.get('hardware_profile') == 'HPGen9v3'
iface = node.get_applied_interface('bond0')
print(yaml.dump(iface, default_flow_style=False))
assert iface.get_applied_slave_count() == 2
iface = node.get_applied_interface('pxe')
@@ -68,7 +63,6 @@ class TestClass(object):
return design_state
@pytest.fixture(scope='module')
def input_files(self, tmpdir_factory, request):
tmpdir = tmpdir_factory.mktemp('data')
@@ -80,4 +74,4 @@ class TestClass(object):
dst_file = str(tmpdir) + "/" + f
shutil.copyfile(src_file, dst_file)
return tmpdir
return tmpdir


@@ -78,4 +78,4 @@ class TestClass(object):
dst_file = str(tmpdir) + "/" + f
shutil.copyfile(src_file, dst_file)
return tmpdir
return tmpdir


@@ -34,7 +34,7 @@ class TestClass(object):
def test_ingest_multidoc(self, input_files):
input_file = input_files.join("multidoc.yaml")
ingester = YamlIngester()
models = ingester.ingest_data(filenames=[str(input_file)])
@@ -52,4 +52,4 @@ class TestClass(object):
dst_file = str(tmpdir) + "/" + f
shutil.copyfile(src_file, dst_file)
return tmpdir
return tmpdir


@@ -104,4 +104,4 @@ class TestClass(object):
dst_file = str(tmpdir) + "/" + f
shutil.copyfile(src_file, dst_file)
return tmpdir
return tmpdir


@@ -0,0 +1,63 @@
# Copyright 2017 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import uuid
from drydock_provisioner.policy import DrydockPolicy
from drydock_provisioner.control.base import DrydockRequestContext
import pytest
class TestDefaultRules():
def test_register_policy(self, mocker):
''' DrydockPolicy.register_policy() should correctly register all default
policy rules
'''
mocker.patch('oslo_policy.policy.Enforcer')
policy_engine = DrydockPolicy()
policy_engine.register_policy()
expected_calls = [mocker.call.register_defaults(DrydockPolicy.base_rules),
mocker.call.register_defaults(DrydockPolicy.task_rules),
mocker.call.register_defaults(DrydockPolicy.data_rules)]
# Validate the oslo_policy Enforcer was loaded with expected default policy rules
policy_engine.enforcer.assert_has_calls(expected_calls, any_order=True)
def test_authorize_context(self, mocker):
''' DrydockPolicy.authorized() should correctly use oslo_policy to enforce
RBAC policy based on a DrydockRequestContext instance
'''
mocker.patch('oslo_policy.policy.Enforcer')
ctx = DrydockRequestContext()
# Configure context
project_id = str(uuid.uuid4().hex)
ctx.project_id = project_id
user_id = str(uuid.uuid4().hex)
ctx.user_id = user_id
ctx.roles = ['admin']
# Define action
policy_action = 'physical_provisioner:read_task'
policy_engine = DrydockPolicy()
policy_engine.authorize(policy_action, ctx)
expected_calls = [mocker.call.authorize(policy_action, {'project_id': project_id, 'user_id': user_id},
ctx.to_policy_view())]
policy_engine.enforcer.assert_has_calls(expected_calls)
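The two tests above pin down the shape of DrydockPolicy: register_policy() loads the default rule sets into the oslo.policy Enforcer, and authorize() builds a target from the requesting project and user before delegating enforcement. A minimal sketch of that shape, with placeholder rule lists standing in for Drydock's actual DocumentedRuleDefault definitions:

from oslo_config import cfg
from oslo_policy import policy

class DrydockPolicy(object):
    # Placeholder rule sets; the real class defines DocumentedRuleDefault entries
    base_rules = []
    task_rules = []
    data_rules = []

    def __init__(self):
        self.enforcer = policy.Enforcer(cfg.CONF)

    def register_policy(self):
        # Load the default rule definitions into the enforcer
        for rule_set in (self.base_rules, self.task_rules, self.data_rules):
            self.enforcer.register_defaults(rule_set)

    def authorize(self, action, ctx):
        # Target is scoped to the requesting project and user, matching the
        # expected_calls assertions in the unit tests above
        target = {'project_id': ctx.project_id, 'user_id': ctx.user_id}
        return self.enforcer.authorize(action, target, ctx.to_policy_view())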


@@ -71,4 +71,4 @@ spec:
# Is this link supporting multiple layer 2 networks?
trunking:
mode: '802.1q'
default_network: mgmt
default_network: mgmt

tox.ini

@@ -9,7 +9,15 @@ setenv=
PYTHONWARNING=all
commands=
py.test \
{posargs}
{posargs}
[testenv:genconfig]
basepython=python3.5
commands = oslo-config-generator --config-file=etc/drydock/drydock-config-generator.conf
[testenv:genpolicy]
basepython=python3.5
commands = oslopolicy-sample-generator --config-file etc/drydock/drydock-policy-generator.conf
[flake8]
ignore=E302,H306
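With these environments in place, running tox -e genconfig and tox -e genpolicy should render the sample configuration and sample policy files described by the two generator configs under etc/drydock/; the exact output filenames depend on what those generator config files specify.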