designate/designate/worker/processing.py
Tim Simmons 81ce132e90 Worker Model
- More information about the actual worker code can be found
  in `designate/worker/README.md` and in the inline docstrings

- Stand up a `designate-worker` process with an rpcapi, all
  the usual jazz

- Implement a base `Task` class that defines the behavior of
  a task and exposes resources to the task.
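The base `Task` described above might be sketched roughly like this (a hedged illustration, not designate's actual class; the `executor` and `storage` attribute names are assumptions):

```python
# Illustrative sketch of a base Task: a callable object that exposes
# shared resources to subclasses (names here are assumptions)
class Task(object):
    def __init__(self, executor, storage=None):
        self.executor = executor   # shared thread-pool executor
        self.storage = storage     # e.g. a storage/database handle
        self.task_name = self.__class__.__name__

    def __call__(self):
        raise NotImplementedError


class NoopTask(Task):
    """A trivial concrete task used only for illustration."""
    def __call__(self):
        return 'done'


task = NoopTask(executor=None)
print(task.task_name, task())
```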

- Implement Create/Update/Delete Zone tasks, which include Tasks that
  poll for zones, send NOTIFYs, and update status. These all run in
  parallel with threads using a shared threadpool, rather than iteratively.

- Implement a `recover_shard` task that serves the function
  of a periodic recovery, but only for a shard. Call that
  task with various shards from the zone manager.
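Splitting recovery across shards might look roughly like the sketch below (a hypothetical illustration; the 0–4095 shard keyspace and the 256-shard slice size are assumptions, not values from this commit):

```python
# Hypothetical sketch: a periodic producer splits the shard keyspace
# into slices and asks the worker to recover each slice in turn
def shard_ranges(total=4096, per_call=256):
    # Yield inclusive (begin, end) shard boundaries covering the keyspace
    for begin in range(0, total, per_call):
        yield begin, min(begin + per_call, total) - 1


ranges = list(shard_ranges())
print(ranges[0], ranges[-1], len(ranges))
```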

- Put some shims in central and mdns so that the worker can
  be switched on/off with a few config values.

- Changes Zone Manager -> Producer
    - Removes zm rpcapi
    - Adds startable designate-producer service
    - Makes zone-manager an alias for producer service with a warning log
    - Lots of renaming

- Moves zone export to worker
    - API now uses central_api.export_zone to get zonefiles
    - Central uses worker_api.start_zone_export to init exports
    - Now including unit tests
    - Temporary workarounds for upgrade/migration move the logic
      into central if worker isn't available.

- Deprecates Pool manager polling options and adds warning msg on
  starting designate-pool-manager

- Get some devstack going

- Changes powerdns backend to get new sqlalchemy sessions for each
  action

- Sets the default number of threads in a worker process to 200.
  This is pretty much a shot in the dark, but 1000 seemed like
  too many, and 20 wasn't enough.

- Grenade upgrade testing

- Deprecation warnings for zone/pool mgr

The way to run this is simple: stop `designate-pool-manager`
and `designate-zone-manager`, toggle the config settings in the
`service:worker` section (`enabled = true`, `notify = true`),
and start `designate-worker` and `designate-producer`, and you
should be good to go.
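For example, the relevant `designate.conf` section would look like this (values taken from the settings above; the `threads` option matches the 200-thread default this change sets):

```ini
[service:worker]
enabled = true
notify = true
# Optional: threads per worker process (defaults to 200 per this change)
# threads = 200
```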

Change-Id: I259e9825d3a4eea58e082303ba3bdbdb7bf8c363
2016-08-24 14:54:31 +00:00


# Copyright 2016 Rackspace Inc.
#
# Author: Eric Larson <eric.larson@rackspace.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time

from concurrent import futures

from oslo_config import cfg
from oslo_log import log as logging

LOG = logging.getLogger(__name__)
CONF = cfg.CONF


def default_executor():
    thread_count = 5
    try:
        thread_count = CONF['service:worker'].threads
    except Exception:
        pass

    return futures.ThreadPoolExecutor(thread_count)


class Executor(object):
    """
    Object to facilitate the running of a task, or a set of tasks, on an
    executor that can map multiple tasks across a configurable number of
    threads
    """
    def __init__(self, executor=None):
        self._executor = executor or default_executor()

    @staticmethod
    def do(task):
        return task()

    def task_name(self, task):
        if hasattr(task, 'task_name'):
            return str(task.task_name)
        if hasattr(task, 'func_name'):
            return str(task.func_name)
        return 'UnnamedTask'

    def run(self, tasks):
        """
        Run a task or set of tasks

        :param tasks: the task or tasks you want to execute in the
                      executor's pool

        :return: The results of the tasks (list). If a single task is
                 passed, a single-element list is returned.
        """
        self.start_time = time.time()

        if callable(tasks):
            tasks = [tasks]
        results = [r for r in self._executor.map(self.do, tasks)]

        self.end_time = time.time()
        self.task_time = self.end_time - self.start_time

        task_names = [self.task_name(t) for t in tasks]
        LOG.debug("Finished Tasks %(tasks)s in %(time)fs",
                  {'tasks': task_names, 'time': self.task_time})

        return results
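The `Executor.run()` pattern above (map callables across a thread pool, collect results in order) can be exercised standalone with `concurrent.futures` alone, since the class itself pulls its thread count from oslo config. The task names below are illustrative:

```python
import time
from concurrent import futures


def make_task(name, delay):
    # Stand-in for a worker Task: a callable carrying a task_name
    # attribute, as Executor.task_name() expects
    def task():
        time.sleep(delay)
        return name
    task.task_name = name
    return task


tasks = [make_task('zone-create', 0.01), make_task('zone-update', 0.01)]

# Mirrors Executor.run(): map the callables across a thread pool;
# map() preserves input order in its results
with futures.ThreadPoolExecutor(2) as pool:
    results = list(pool.map(lambda t: t(), tasks))

print(results)  # ['zone-create', 'zone-update']
```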