255e8a839d
This adjusts the batching logic in the Nova notifier to send immediately and then sleep, so that subsequent calls can batch up during the batch interval. Rather than always waiting for 2 seconds to elapse while batching, batching now only occurs in the 2-second window after a call is made. This turns the batch notifier into a standard queuing rate limiter. The upside is that a single port creation results in an immediate notification to Nova without a delay. The downside is that a sudden burst of 6 port creations to a previously idle server will result in 2 notification calls to Nova (1 for the first call and another for the other 5).

Closes-Bug: #1564648
Change-Id: I82f403441564955345f47877151e0c457712dd2f
66 lines
2.3 KiB
Python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import eventlet
from oslo_utils import uuidutils

from neutron.common import utils


class BatchNotifier(object):
    def __init__(self, batch_interval, callback):
        self.pending_events = []
        self.callback = callback
        self.batch_interval = batch_interval
        self._lock_identifier = 'notifier-%s' % uuidutils.generate_uuid()

    def queue_event(self, event):
        """Called to queue sending an event with the next batch of events.

        Sending events individually, as they occur, has been problematic as it
        can result in a flood of sends. Previously, there was a loopingcall
        thread that would send batched events on a periodic interval. However,
        maintaining a persistent thread in the loopingcall was also
        problematic.

        This replaces the loopingcall with a mechanism that creates a
        short-lived thread on demand whenever an event is queued. That thread
        will wait for a lock, send all queued events and then sleep for
        'batch_interval' seconds to allow other events to queue up.

        This effectively acts as a rate limiter to only allow 1 batch per
        'batch_interval' seconds.

        :param event: the event that occurred.
        """
        if not event:
            return

        self.pending_events.append(event)

        @utils.synchronized(self._lock_identifier)
        def synced_send():
            self._notify()
            # sleeping after send while holding the lock allows subsequent
            # events to batch up
            eventlet.sleep(self.batch_interval)

        eventlet.spawn_n(synced_send)

    def _notify(self):
        if not self.pending_events:
            return

        batched_events = self.pending_events
        self.pending_events = []
        self.callback(batched_events)