Merge "Granular metering data in neutron-metering-agent"
commit bdd6c6cdb5
@ -1,6 +1,159 @@
==================
metering_agent.ini
==================

Neutron Metering system
~~~~~~~~~~~~~~~~~~~~~~~

The Neutron metering service enables operators to account for the traffic
in/out of the OpenStack environment. The concept is quite simple: operators
can create metering labels and decide whether the labels are applied to all
projects (tenants) or to a specific one. Then, the operator needs to create
traffic rules in the metering labels. The traffic rules are used to match
traffic in/out of the OpenStack environment, and the accounting of packets
and bytes is sent to the notification queue for further processing by
Ceilometer (or some other system that is consuming that queue). The message
sent to the queue is of type ``event``; therefore, an event processing
configuration needs to be added/enabled in Ceilometer.

The metering agent has the following configuration options:

* ``driver``: the driver used to implement the metering rules. The default
  is ``neutron.services.metering.drivers.noop``, which means nothing is
  executed on the networking host. The only driver implemented so far is
  ``neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver``;
  therefore, only ``iptables`` is supported so far.

* ``measure_interval``: the interval in seconds used to gather the bytes and
  packets information from the network plane. The default value is ``30``
  seconds.

* ``report_interval``: the interval in seconds used to generate the report
  (message) of the data that is gathered. The default value is ``300``
  seconds.

* ``granular_traffic_data``: defines whether the metering agent driver should
  present traffic data in a granular fashion, instead of grouping all of the
  traffic data for all projects and routers where the labels were assigned.
  The default value is ``False`` for backward compatibility.

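For instance, the options above could be set as follows in the agent
configuration file; the values shown here are illustrative, not
recommendations:

```ini
[DEFAULT]
# Driver used to implement the metering rules (iptables is the only
# non-noop driver available so far).
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver

# Gather bytes/packets counters from the network plane every 30 seconds.
measure_interval = 30

# Send a report message to the notification queue every 300 seconds.
report_interval = 300

# Emit per-label/router/project counters instead of a single aggregate
# per label.
granular_traffic_data = True
```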
Non-granular traffic messages
-----------------------------
The non-granular (``granular_traffic_data = False``) traffic messages (also
referred to here as legacy) have the following format; bear in mind that if
labels are shared, then the counters account for all routers of all projects
where the labels were applied.

.. code-block:: json

    {
        "pkts": "<the number of packets that matched the rules of the labels>",
        "bytes": "<the number of bytes that matched the rules of the labels>",
        "time": "<seconds between the first data collection and the last one>",
        "first_update": "timeutils.utcnow_ts() of the first collection",
        "last_update": "timeutils.utcnow_ts() of the last collection",
        "host": "<neutron metering agent host name>",
        "label_id": "<the label id>",
        "tenant_id": "<the tenant id>"
    }

The ``first_update`` and ``last_update`` timestamps represent the moments
when the first and the last data collections happened within the report
interval. The ``time`` attribute, on the other hand, represents the
difference between those two timestamps.

The ``tenant_id`` is only consistent when labels are not shared. Otherwise,
it will contain the project ID of the last router of the last project
processed when the agent was started up. In other words, it is better not to
use it when dealing with shared labels.

All of the messages generated in this configuration mode are sent to the
message bus as ``l3.meter`` events.

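The relation between the counters and the timestamps can be sketched as
follows; the helper below is a minimal illustration (not the agent's actual
code) of how such a legacy message is assembled from an accumulated per-label
counter:

```python
def build_legacy_message(label_id, tenant_id, host, info):
    """Assemble a legacy (non-granular) metering message.

    `info` mirrors the agent's per-label accumulator: pkts, bytes,
    first_update and last_update (both Unix timestamps in seconds).
    """
    return {
        "pkts": info["pkts"],
        "bytes": info["bytes"],
        # `time` is the span covered by this report: the difference
        # between the last and the first data collection.
        "time": info["last_update"] - info["first_update"],
        "first_update": info["first_update"],
        "last_update": info["last_update"],
        "host": host,
        "label_id": label_id,
        "tenant_id": tenant_id,
    }


# Hypothetical IDs, timestamps taken from the granular example below.
info = {"pkts": 44, "bytes": 222,
        "first_update": 1591058790, "last_update": 1591059037}
msg = build_legacy_message("00c714f1", "f0f745d9", "agent-host", info)
print(msg["time"])  # 247
```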
Granular traffic messages
-------------------------
The granular (``granular_traffic_data = True``) traffic messages allow
operators to obtain granular information for shared metering labels.
Therefore, a single label, when configured as ``shared=True`` and applied to
all projects/routers of the environment, will generate data in a granular
fashion.

The metering agent will account for the traffic counter data in the
following granularities.

* ``label`` -- all of the traffic counters for a given label. One must bear
  in mind that a label can be assigned to multiple routers. Therefore, this
  granularity represents the aggregation of all data for all routers of all
  projects where the label has been applied.

* ``router`` -- all of the traffic counters for all labels that are assigned
  to the router.

* ``project`` -- all of the traffic counters for all labels of all routers
  that a project has.

* ``router-label`` -- all of the traffic counters for a router and the given
  label.

* ``project-label`` -- all of the traffic counters for all routers of a
  project that have a given label.

Each granularity presented here is sent to the message bus with a different
event type that varies according to the granularity. The mapping between
granularity and event type is presented as follows.

* ``label`` -- event type ``l3.meter.label``.

* ``router`` -- event type ``l3.meter.router``.

* ``project`` -- event type ``l3.meter.project``.

* ``router-label`` -- event type ``l3.meter.label_router``.

* ``project-label`` -- event type ``l3.meter.label_project``.

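This mapping mirrors the ``_metering_notification`` logic in this change: the
event type is derived from the granularity tag stored with each counter, with
a fallback to the legacy event type. A minimal sketch:

```python
def traffic_meter_event(info):
    """Derive the notification event type from a counter's granularity.

    Counters without a 'traffic-counter-granularity' entry fall back to
    the legacy 'l3.meter' event type.
    """
    granularity = info.get("traffic-counter-granularity")
    if granularity:
        return "l3.meter.%s" % granularity
    return "l3.meter"


print(traffic_meter_event({"traffic-counter-granularity": "label_router"}))
# l3.meter.label_router
print(traffic_meter_event({}))  # l3.meter
```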
Furthermore, there is metadata that is appended to the messages depending on
the granularity. The mapping between the granularities and the metadata that
will be available is presented as follows.

* The ``label``, ``router-label``, and ``project-label`` granularities --
  have the metadata ``label_id``, ``label_name``, ``label_shared``,
  ``project_id`` (if shared, this value will come as ``all`` for the
  ``label`` granularity), and ``router_id`` (only for the ``router-label``
  granularity).

* The ``router`` granularity -- has the ``router_id`` and ``project_id``
  metadata.

* The ``project`` granularity -- only has the ``project_id`` metadata.

The message will also contain some attributes that can be found in the
legacy mode, such as ``bytes``, ``pkts``, ``time``, ``first_update``,
``last_update``, and ``host``. An example of a JSON message with all of the
possible attributes is presented as follows.

.. code-block:: json

    {
        "resource_id": "router-f0f745d9a59c47fdbbdd187d718f9e41-label-00c714f1-49c8-462c-8f5d-f05f21e035c7",
        "project_id": "f0f745d9a59c47fdbbdd187d718f9e41",
        "first_update": 1591058790,
        "bytes": 0,
        "label_id": "00c714f1-49c8-462c-8f5d-f05f21e035c7",
        "label_name": "test1",
        "last_update": 1591059037,
        "host": "<hostname>",
        "time": 247,
        "pkts": 0,
        "label_shared": true
    }

The ``resource_id`` is a unique identifier for the "resource" being
monitored. Here, we consider a resource to be any of the granularities that
we handle.

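The ``resource_id`` values are concatenations of the per-granularity base
keys (``router-<router_id>``, ``project-<tenant_id>``, ``label-<label_id>``)
added to the abstract driver in this change. The helper function below is a
hypothetical sketch (its name is not from the source) of how the
``router-label`` identifier shown above is composed:

```python
# Base key prefixes, as defined in the metering abstract driver.
BASE_ROUTER_TRAFFIC_COUNTER_KEY = "router-"
BASE_PROJECT_TRAFFIC_COUNTER_KEY = "project-"
BASE_LABEL_TRAFFIC_COUNTER_KEY = "label-"


def router_label_resource_id(router_id, label_id):
    """Build the resource_id used for the router-label granularity.

    The per-resource keys are joined with a single dash, matching the
    `router-<router_id>-label-<label_id>` standard.
    """
    return "%s%s-%s%s" % (BASE_ROUTER_TRAFFIC_COUNTER_KEY, router_id,
                          BASE_LABEL_TRAFFIC_COUNTER_KEY, label_id)


rid = router_label_resource_id("f0f745d9a59c47fdbbdd187d718f9e41",
                               "00c714f1-49c8-462c-8f5d-f05f21e035c7")
print(rid)
# router-f0f745d9a59c47fdbbdd187d718f9e41-label-00c714f1-49c8-462c-8f5d-f05f21e035c7
```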
Sample of metering_agent.ini
----------------------------

All of the possible configuration options one can use in the metering agent
configuration file are presented as follows.

.. show-options::
   :config-file: etc/oslo-config-generator/metering_agent.ini

@ -25,6 +25,14 @@ metering_agent_opts = [
                help=_("Interval between two metering measures")),
     cfg.IntOpt('report_interval', default=300,
                help=_("Interval between two metering reports")),
+    cfg.BoolOpt('granular_traffic_data',
+                default=False,
+                help=_("Defines if the metering agent driver should present "
+                       "traffic data in a granular fashion, instead of "
+                       "grouping all of the traffic data for all projects and "
+                       "routers where the labels were assigned to. The "
+                       "default value is `False` for backward compatibility."),
+                ),
 ]

@ -198,7 +198,8 @@ class MeteringDbMixin(metering.MeteringPluginBase):

             rules = self._get_metering_rules_dict(label)

-            data = {'id': label['id'], 'rules': rules}
+            data = {'id': label['id'], 'rules': rules,
+                    'shared': label['shared'], 'name': label['name']}
             router_dict[constants.METERING_LABEL_KEY].append(data)

         routers_dict[router['id']] = router_dict

@ -34,8 +34,9 @@ from neutron.conf.agent import common as config
 from neutron.conf.services import metering_agent
 from neutron import manager
 from neutron import service as neutron_service
-from neutron.services.metering.drivers import utils as driverutils
+from neutron.services.metering.drivers import abstract_driver as driver
+from neutron.services.metering.drivers import utils as driverutils

 LOG = logging.getLogger(__name__)

@ -78,6 +79,7 @@ class MeteringAgent(MeteringPluginRpc, manager.Manager):
         self.label_tenant_id = {}
         self.routers = {}
         self.metering_infos = {}
+        self.metering_labels = {}
         super(MeteringAgent, self).__init__(host=host)

     def _load_drivers(self):
@ -89,45 +91,112 @@ class MeteringAgent(MeteringPluginRpc, manager.Manager):
                                          self.conf)

     def _metering_notification(self):
-        for label_id, info in self.metering_infos.items():
-            data = {'label_id': label_id,
-                    'tenant_id': self.label_tenant_id.get(label_id),
-                    'pkts': info['pkts'],
-                    'bytes': info['bytes'],
-                    'time': info['time'],
-                    'first_update': info['first_update'],
-                    'last_update': info['last_update'],
-                    'host': self.host}
+        for key, info in self.metering_infos.items():
+            data = self.create_notification_message_data(info, key)
+
+            traffic_meter_event = 'l3.meter'
+
+            granularity = info.get('traffic-counter-granularity')
+            if granularity:
+                traffic_meter_event = 'l3.meter.%s' % granularity
+
+            LOG.debug("Send metering report [%s] via event [%s].",
+                      data, traffic_meter_event)

-            LOG.debug("Send metering report: %s", data)
             notifier = n_rpc.get_notifier('metering')
-            notifier.info(self.context, 'l3.meter', data)
+            notifier.info(self.context, traffic_meter_event, data)

             info['pkts'] = 0
             info['bytes'] = 0
             info['time'] = 0

+    def create_notification_message_data(self, info, key):
+        data = {'pkts': info['pkts'],
+                'bytes': info['bytes'],
+                'time': info['time'],
+                'first_update': info['first_update'],
+                'last_update': info['last_update'],
+                'host': self.host}
+
+        if self.conf.granular_traffic_data:
+            data['resource_id'] = key
+            self.set_project_id_for_granular_traffic_data(data, key)
+        else:
+            data['label_id'] = key
+            data['tenant_id'] = self.label_tenant_id.get(key)
+
+        LOG.debug("Metering notification created [%s] with info data [%s], "
+                  "key [%s], and metering_labels configured [%s].", data,
+                  info, key, self.metering_labels)
+        return data
+
+    def set_project_id_for_granular_traffic_data(self, data, key):
+        if driver.BASE_LABEL_TRAFFIC_COUNTER_KEY in key:
+            other_ids, actual_label_id = key.split(
+                driver.BASE_LABEL_TRAFFIC_COUNTER_KEY)
+            is_label_shared = self.metering_labels[actual_label_id]['shared']
+
+            data['label_id'] = actual_label_id
+            data['label_name'] = self.metering_labels[actual_label_id]['name']
+            data['label_shared'] = is_label_shared
+
+            if is_label_shared:
+                self.configure_project_id_shared_labels(data, other_ids[:-1])
+            else:
+                data['project_id'] = self.label_tenant_id.get(actual_label_id)
+        elif driver.BASE_PROJECT_TRAFFIC_COUNTER_KEY in key:
+            data['project_id'] = key.split(
+                driver.BASE_PROJECT_TRAFFIC_COUNTER_KEY)[1]
+        elif driver.BASE_ROUTER_TRAFFIC_COUNTER_KEY in key:
+            router_id = key.split(driver.BASE_ROUTER_TRAFFIC_COUNTER_KEY)[1]
+            data['router_id'] = router_id
+            self.configure_project_id_based_on_router(data, router_id)
+        else:
+            raise Exception(_("Unexpected key [%s] format.") % key)
+
+    def configure_project_id_shared_labels(self, data, key):
+        if driver.BASE_PROJECT_TRAFFIC_COUNTER_KEY in key:
+            project_id = key.split(driver.BASE_PROJECT_TRAFFIC_COUNTER_KEY)[1]
+
+            data['project_id'] = project_id
+        elif driver.BASE_ROUTER_TRAFFIC_COUNTER_KEY in key:
+            router_id = key.split(driver.BASE_ROUTER_TRAFFIC_COUNTER_KEY)[1]
+
+            data['router_id'] = router_id
+            self.configure_project_id_based_on_router(data, router_id)
+        else:
+            data['project_id'] = 'all'
+
+    def configure_project_id_based_on_router(self, data, router_id):
+        if router_id in self.routers:
+            router = self.routers[router_id]
+            data['project_id'] = router['tenant_id']
+        else:
+            LOG.warning("Could not find router with ID [%s].", router_id)
+
     def _purge_metering_info(self):
         deadline_timestamp = timeutils.utcnow_ts() - self.conf.report_interval
-        label_ids = [
-            label_id
-            for label_id, info in self.metering_infos.items()
-            if info['last_update'] < deadline_timestamp]
-        for label_id in label_ids:
-            del self.metering_infos[label_id]
-
-    def _add_metering_info(self, label_id, pkts, bytes):
+        expired_metering_info_key = [
+            key for key, info in self.metering_infos.items()
+            if info['last_update'] < deadline_timestamp]
+
+        for key in expired_metering_info_key:
+            del self.metering_infos[key]
+
+    def _add_metering_info(self, key, traffic_counter):
+        granularity = traffic_counter.get('traffic-counter-granularity')
+
         ts = timeutils.utcnow_ts()
-        info = self.metering_infos.get(label_id, {'bytes': 0,
-                                                  'pkts': 0,
-                                                  'time': 0,
-                                                  'first_update': ts,
-                                                  'last_update': ts})
-        info['bytes'] += bytes
-        info['pkts'] += pkts
+        info = self.metering_infos.get(
+            key, {'bytes': 0, 'traffic-counter-granularity': granularity,
+                  'pkts': 0, 'time': 0, 'first_update': ts, 'last_update': ts})
+
+        info['bytes'] += traffic_counter['bytes']
+        info['pkts'] += traffic_counter['pkts']
         info['time'] += ts - info['last_update']
         info['last_update'] = ts

-        self.metering_infos[label_id] = info
+        self.metering_infos[key] = info

         return info

@ -140,12 +209,17 @@ class MeteringAgent(MeteringPluginRpc, manager.Manager):
                 label_id = label['id']
                 self.label_tenant_id[label_id] = tenant_id

-        accs = self._get_traffic_counters(self.context, self.routers.values())
-        if not accs:
+        LOG.debug("Retrieving traffic counters for routers [%s].",
+                  self.routers)
+        traffic_counters = self._get_traffic_counters(self.context,
+                                                      self.routers.values())
+        LOG.debug("Traffic counters [%s] retrieved for routers [%s].",
+                  traffic_counters, self.routers)
+        if not traffic_counters:
             return

-        for label_id, acc in accs.items():
-            self._add_metering_info(label_id, acc['pkts'], acc['bytes'])
+        for key, traffic_counter in traffic_counters.items():
+            self._add_metering_info(key, traffic_counter)

     def _metering_loop(self):
         self._sync_router_namespaces(self.context, self.routers.values())
@ -208,6 +282,8 @@ class MeteringAgent(MeteringPluginRpc, manager.Manager):
         for router in routers:
             self.routers[router['id']] = router

+            self.store_metering_labels(router)
+
         return self._invoke_driver(context, routers,
                                    'update_routers')

@ -233,14 +309,30 @@ class MeteringAgent(MeteringPluginRpc, manager.Manager):
                                    'update_metering_label_rules')

     def add_metering_label(self, context, routers):
-        LOG.debug("Creating a metering label from agent")
+        LOG.debug("Creating a metering label from agent with parameters "
+                  "[%s].", routers)
+        for router in routers:
+            self.store_metering_labels(router)
+
         return self._invoke_driver(context, routers,
                                    'add_metering_label')

+    def store_metering_labels(self, router):
+        labels = router[constants.METERING_LABEL_KEY]
+        for label in labels:
+            self.metering_labels[label['id']] = label
+
     def remove_metering_label(self, context, routers):
         self._add_metering_infos()
+        LOG.debug("Delete a metering label from agent with parameters "
+                  "[%s].", routers)
+
+        for router in routers:
+            labels = router[constants.METERING_LABEL_KEY]
+            for label in labels:
+                if label['id'] in self.metering_labels.keys():
+                    del self.metering_labels[label['id']]

-        LOG.debug("Delete a metering label from agent")
         return self._invoke_driver(context, routers,
                                    'remove_metering_label')

@ -14,12 +14,21 @@

 import abc

+from oslo_config import cfg
+
+BASE_ROUTER_TRAFFIC_COUNTER_KEY = "router-"
+BASE_PROJECT_TRAFFIC_COUNTER_KEY = "project-"
+BASE_LABEL_TRAFFIC_COUNTER_KEY = "label-"
+

 class MeteringAbstractDriver(object, metaclass=abc.ABCMeta):
     """Abstract Metering driver."""

     def __init__(self, plugin, conf):
-        pass
+        self.conf = conf or cfg.CONF
+        self.plugin = plugin
+
+        self.granular_traffic_data = self.conf.granular_traffic_data

     @abc.abstractmethod
     def update_routers(self, context, routers):
@ -48,3 +57,22 @@ class MeteringAbstractDriver(object, metaclass=abc.ABCMeta):
     @abc.abstractmethod
     def sync_router_namespaces(self, context, routers):
         pass
+
+    @staticmethod
+    def get_router_traffic_counter_key(router_id):
+        return MeteringAbstractDriver._concat_base_key_with_id(
+            router_id, BASE_ROUTER_TRAFFIC_COUNTER_KEY)
+
+    @staticmethod
+    def get_project_traffic_counter_key(tenant_id):
+        return MeteringAbstractDriver._concat_base_key_with_id(
+            tenant_id, BASE_PROJECT_TRAFFIC_COUNTER_KEY)
+
+    @staticmethod
+    def get_label_traffic_counter_key(label_id):
+        return MeteringAbstractDriver._concat_base_key_with_id(
+            label_id, BASE_LABEL_TRAFFIC_COUNTER_KEY)
+
+    @staticmethod
+    def _concat_base_key_with_id(resource_id, base_traffic_key):
+        return base_traffic_key + "%s" % resource_id

@ -122,10 +122,9 @@ class RouterWithMetering(object):
 class IptablesMeteringDriver(abstract_driver.MeteringAbstractDriver):

     def __init__(self, plugin, conf):
-        self.plugin = plugin
-        self.conf = conf or cfg.CONF
-        self.routers = {}
+        super(IptablesMeteringDriver, self).__init__(plugin, conf)

+        self.routers = {}
         self.driver = common_utils.load_interface_driver(self.conf)

     def _update_router(self, router):
@ -424,42 +423,173 @@ class IptablesMeteringDriver(abstract_driver.MeteringAbstractDriver):

     @log_helpers.log_method_call
     def get_traffic_counters(self, context, routers):
-        accs = {}
+        traffic_counters = {}
         routers_to_reconfigure = set()
+
         for router in routers:
-            rm = self.routers.get(router['id'])
-            if not rm:
-                continue
-
-            for label_id in rm.metering_labels:
-                try:
-                    chain = iptables_manager.get_chain_name(WRAP_NAME +
-                                                            LABEL +
-                                                            label_id,
-                                                            wrap=False)
-
-                    chain_acc = rm.iptables_manager.get_traffic_counters(
-                        chain, wrap=False, zero=True)
-                except RuntimeError:
-                    LOG.exception('Failed to get traffic counters, '
-                                  'router: %s', router)
-                    routers_to_reconfigure.add(router['id'])
-                    continue
-
-                if not chain_acc:
-                    continue
-
-                acc = accs.get(label_id, {'pkts': 0, 'bytes': 0})
-
-                acc['pkts'] += chain_acc['pkts']
-                acc['bytes'] += chain_acc['bytes']
-
-                accs[label_id] = acc
+            if self.granular_traffic_data:
+                self.retrieve_and_account_granular_traffic_counters(
+                    router, routers_to_reconfigure, traffic_counters)
+            else:
+                self.retrieve_and_account_traffic_counters_legacy(
+                    router, routers_to_reconfigure, traffic_counters)

         for router_id in routers_to_reconfigure:
             del self.routers[router_id]

-        return accs
+        return traffic_counters
+
+    def retrieve_and_account_traffic_counters_legacy(self, router,
+                                                     routers_to_reconfigure,
+                                                     traffic_counters):
+        rm = self.routers.get(router['id'])
+        if not rm:
+            return
+
+        for label_id in rm.metering_labels:
+            chain_acc = self.retrieve_traffic_counters(label_id, rm, router,
+                                                       routers_to_reconfigure)
+
+            if not chain_acc:
+                continue
+
+            acc = traffic_counters.get(label_id, {'pkts': 0, 'bytes': 0})
+
+            acc['pkts'] += chain_acc['pkts']
+            acc['bytes'] += chain_acc['bytes']
+
+            traffic_counters[label_id] = acc
+
+    @staticmethod
+    def retrieve_traffic_counters(label_id, rm, router,
+                                  routers_to_reconfigure):
+        try:
+            chain = iptables_manager.get_chain_name(WRAP_NAME +
+                                                    LABEL +
+                                                    label_id,
+                                                    wrap=False)
+
+            chain_acc = rm.iptables_manager.get_traffic_counters(
+                chain, wrap=False, zero=True)
+        except RuntimeError:
+            LOG.exception('Failed to get traffic counters, '
+                          'router: %s', router)
+            routers_to_reconfigure.add(router['id'])
+            return {}
+        return chain_acc
+
+    def retrieve_and_account_granular_traffic_counters(self, router,
+                                                       routers_to_reconfigure,
+                                                       traffic_counters):
+        """Retrieve and account traffic counters for routers.
+
+        This method will retrieve the traffic counters for all labels that
+        are assigned to a router. Then, it will account the traffic counter
+        data in the following granularities.
+            * label -- all of the traffic counters for a given label.
+              One must bear in mind that a label can be assigned to
+              multiple routers.
+            * router -- all of the traffic counters for all labels that
+              are assigned to the router.
+            * project -- all of the traffic counters for all labels of
+              all routers that a project has.
+            * router-label -- all of the traffic counters for a router
+              and the given label.
+            * project-label -- all of the traffic counters for all
+              routers of a project that have a given label.
+
+        All of the keys have the following standard in the
+        `traffic_counters` dictionary.
+            * labels -- label-<label_id>
+            * routers -- router-<router_id>
+            * project -- project-<tenant_id>
+            * router-label -- router-<router_id>-label-<label_id>
+            * project-label -- project-<tenant_id>-label-<label_id>
+
+        Last, but not least, if we are not able to retrieve the traffic
+        counters from `iptables` for a given router, we will add it to the
+        `routers_to_reconfigure` set.
+
+        :param router: the router to process
+        :param routers_to_reconfigure: set of router IDs that need to be
+            reconfigured
+        :param traffic_counters: dictionary where the counters of all
+            granularities are accumulated
+        """
+        router_id = router['id']
+        rm = self.routers.get(router_id)
+        if not rm:
+            return
+
+        default_traffic_counters = {'pkts': 0, 'bytes': 0}
+        project_traffic_counter_key = self.get_project_traffic_counter_key(
+            router['tenant_id'])
+        router_traffic_counter_key = self.get_router_traffic_counter_key(
+            router_id)
+
+        project_counters = traffic_counters.get(
+            project_traffic_counter_key, default_traffic_counters.copy())
+        project_counters['traffic-counter-granularity'] = "project"
+
+        router_counters = traffic_counters.get(
+            router_traffic_counter_key, default_traffic_counters.copy())
+        router_counters['traffic-counter-granularity'] = "router"
+
+        for label_id in rm.metering_labels:
+            label_traffic_counter_key = self.get_label_traffic_counter_key(
+                label_id)
+
+            project_label_traffic_counter_key = "%s-%s" % (
+                project_traffic_counter_key, label_traffic_counter_key)
+            router_label_traffic_counter_key = "%s-%s" % (
+                router_traffic_counter_key, label_traffic_counter_key)
+
+            chain_acc = self.retrieve_traffic_counters(label_id, rm, router,
+                                                       routers_to_reconfigure)
+
+            if not chain_acc:
+                continue
+
+            label_traffic_counters = traffic_counters.get(
+                label_traffic_counter_key, default_traffic_counters.copy())
+            label_traffic_counters['traffic-counter-granularity'] = "label"
+
+            project_label_traffic_counters = traffic_counters.get(
+                project_label_traffic_counter_key,
+                default_traffic_counters.copy())
+            project_label_traffic_counters[
+                'traffic-counter-granularity'] = "label_project"
+
+            router_label_traffic_counters = traffic_counters.get(
+                router_label_traffic_counter_key,
+                default_traffic_counters.copy())
+            router_label_traffic_counters[
+                'traffic-counter-granularity'] = "label_router"
+
+            project_label_traffic_counters['pkts'] += chain_acc['pkts']
+            project_label_traffic_counters['bytes'] += chain_acc['bytes']
+
+            router_label_traffic_counters['pkts'] += chain_acc['pkts']
+            router_label_traffic_counters['bytes'] += chain_acc['bytes']
+
+            label_traffic_counters['pkts'] += chain_acc['pkts']
+            label_traffic_counters['bytes'] += chain_acc['bytes']
+
+            traffic_counters[project_label_traffic_counter_key] = \
+                project_label_traffic_counters
+
+            traffic_counters[router_label_traffic_counter_key] = \
+                router_label_traffic_counters
+
+            traffic_counters[label_traffic_counter_key] = \
+                label_traffic_counters
+
+            router_counters['pkts'] += chain_acc['pkts']
+            router_counters['bytes'] += chain_acc['bytes']
+
+            project_counters['pkts'] += chain_acc['pkts']
+            project_counters['bytes'] += chain_acc['bytes']
+
+        traffic_counters[router_traffic_counter_key] = router_counters
+        traffic_counters[project_traffic_counter_key] = project_counters

     @log_helpers.log_method_call
     def sync_router_namespaces(self, context, routers):

@ -58,6 +58,7 @@ class TestMeteringOperations(base.BaseTestCase):
         cfg.CONF.set_override('driver', 'noop')
         cfg.CONF.set_override('measure_interval', 0)
         cfg.CONF.set_override('report_interval', 0)
+        cfg.CONF.set_override('granular_traffic_data', False)

         self.setup_notification_driver()

@ -144,6 +145,7 @@ class TestMeteringOperations(base.BaseTestCase):

         cfg.CONF.set_override('measure_interval', measure_interval)
         cfg.CONF.set_override('report_interval', report_interval)
+        cfg.CONF.set_override('granular_traffic_data', False)

         for i in range(report_interval):
             self.agent._metering_loop()
@ -173,9 +175,11 @@ class TestMeteringOperations(base.BaseTestCase):
     def test_router_deleted(self):
         label_id = _uuid()
         self.driver.get_traffic_counters = mock.MagicMock()
-        self.driver.get_traffic_counters.return_value = {label_id:
-                                                         {'pkts': 44,
-                                                          'bytes': 222}}
+
+        expected_traffic_counters = {'pkts': 44, 'bytes': 222}
+        self.driver.get_traffic_counters.return_value = {
+            label_id: expected_traffic_counters}
+
         self.agent._add_metering_info = mock.MagicMock()

         self.agent.routers_updated(None, ROUTERS)
@ -184,7 +188,8 @@ class TestMeteringOperations(base.BaseTestCase):
         self.assertEqual(1, self.agent._add_metering_info.call_count)
         self.assertEqual(1, self.driver.remove_router.call_count)

-        self.agent._add_metering_info.assert_called_with(label_id, 44, 222)
+        self.agent._add_metering_info.assert_called_with(
+            label_id, expected_traffic_counters)

     @mock.patch('time.time')
     def _test_purge_metering_info(self, current_timestamp, is_empty,
@ -209,16 +214,18 @@ class TestMeteringOperations(base.BaseTestCase):
     def _test_add_metering_info(self, expected_info, current_timestamp,
                                 mock_time):
         mock_time.return_value = current_timestamp
-        actual_info = self.agent._add_metering_info('fake_label_id', 1, 1)
+        actual_info = self.agent._add_metering_info(
+            'fake_label_id', expected_info)
+
         self.assertEqual(1, len(self.agent.metering_infos))
         self.assertEqual(expected_info, actual_info)
         self.assertEqual(expected_info,
                          self.agent.metering_infos['fake_label_id'])
         self.assertEqual(1, mock_time.call_count)

-    def test_add_metering_info_create(self):
+    def test_add_metering_info_create_no_granular_traffic_counters(self):
         expected_info = {'bytes': 1, 'pkts': 1, 'time': 0, 'first_update': 1,
-                         'last_update': 1}
+                         'last_update': 1,
+                         'traffic-counter-granularity': None}
         self._test_add_metering_info(expected_info, 1)

     def test_add_metering_info_update(self):

@ -17,6 +17,7 @@ from unittest import mock

 from oslo_config import cfg

+from neutron.conf.services import metering_agent as metering_agent_config
 from neutron.services.metering.drivers.iptables import iptables_driver
 from neutron.tests import base

@ -146,6 +147,10 @@ class IptablesDriverTestCase(base.BaseTestCase):
         self.iptables_cls.return_value = self.iptables_inst
         cfg.CONF.set_override('interface_driver',
                               'neutron.agent.linux.interface.NullDriver')
+
+        metering_agent_config.register_metering_agent_opts()
+        cfg.CONF.set_override('granular_traffic_data', False)
+
         self.metering = iptables_driver.IptablesMeteringDriver('metering',
                                                                cfg.CONF)

@@ -717,3 +722,105 @@ class IptablesDriverTestCase(base.BaseTestCase):
         self.assertIsNotNone(rm.iptables_manager)
         self.assertEqual(
             3, self.metering._process_ns_specific_metering_label.call_count)
+
+    def test_get_traffic_counters_granular_data(self):
+        for r in TEST_ROUTERS:
+            rm = iptables_driver.RouterWithMetering(self.metering.conf, r)
+            rm.metering_labels = {r['_metering_labels'][0]['id']: 'fake'}
+            self.metering.routers[r['id']] = rm
+
+        mocked_method = self.iptables_cls.return_value.get_traffic_counters
+        mocked_method.side_effect = [{'pkts': 2, 'bytes': 5},
+                                     {'pkts': 4, 'bytes': 3}]
+
+        old_granular_traffic_data = self.metering.granular_traffic_data
+
+        expected_total_number_of_data_granularities = 9
+        expected_response = {
+            "router-373ec392-1711-44e3-b008-3251ccfc5099": {
+                "pkts": 4,
+                "bytes": 3,
+                "traffic-counter-granularity": "router"
+            },
+            "label-c5df2fe5-c600-4a2a-b2f4-c0fb6df73c83": {
+                "pkts": 2,
+                "bytes": 5,
+                "traffic-counter-granularity": "label"
+            },
+            "router-473ec392-1711-44e3-b008-3251ccfc5099-"
+            "label-c5df2fe5-c600-4a2a-b2f4-c0fb6df73c83": {
+                "pkts": 2,
+                "bytes": 5,
+                "traffic-counter-granularity": "label_router"
+            },
+            "label-eeef45da-c600-4a2a-b2f4-c0fb6df73c83": {
+                "pkts": 4,
+                "bytes": 3,
+                "traffic-counter-granularity": "label"
+            },
+            "project-6c5f5d2a1fa2441e88e35422926f48e8-"
+            "label-eeef45da-c600-4a2a-b2f4-c0fb6df73c83": {
+                "pkts": 4,
+                "bytes": 3,
+                "traffic-counter-granularity": "label_project"
+            },
+            "router-473ec392-1711-44e3-b008-3251ccfc5099": {
+                "pkts": 2,
+                "bytes": 5,
+                "traffic-counter-granularity": "router"
+            },
+            "project-6c5f5d2a1fa2441e88e35422926f48e8": {
+                "pkts": 6,
+                "bytes": 8,
+                "traffic-counter-granularity": "project"
+            },
+            "router-373ec392-1711-44e3-b008-3251ccfc5099-"
+            "label-eeef45da-c600-4a2a-b2f4-c0fb6df73c83": {
+                "pkts": 4,
+                "bytes": 3,
+                "traffic-counter-granularity": "label_router"
+            },
+            "project-6c5f5d2a1fa2441e88e35422926f48e8-"
+            "label-c5df2fe5-c600-4a2a-b2f4-c0fb6df73c83": {
+                "pkts": 2,
+                "bytes": 5,
+                "traffic-counter-granularity": "label_project"
+            }
+        }
+        try:
+            self.metering.granular_traffic_data = True
+            counters = self.metering.get_traffic_counters(None, TEST_ROUTERS)
+
+            self.assertEqual(expected_total_number_of_data_granularities,
+                             len(counters))
+            self.assertEqual(expected_response, counters)
+        finally:
+            self.metering.granular_traffic_data = old_granular_traffic_data
+
+    def test_get_traffic_counters_legacy_mode(self):
+        for r in TEST_ROUTERS:
+            rm = iptables_driver.RouterWithMetering(self.metering.conf, r)
+            rm.metering_labels = {r['_metering_labels'][0]['id']: 'fake'}
+            self.metering.routers[r['id']] = rm
+
+        mocked_method = self.iptables_cls.return_value.get_traffic_counters
+        mocked_method.side_effect = [{'pkts': 2, 'bytes': 5},
+                                     {'pkts': 4, 'bytes': 3}]
+
+        old_granular_traffic_data = self.metering.granular_traffic_data
+
+        expected_total_number_of_data_granularity = 2
+
+        expected_response = {
+            'eeef45da-c600-4a2a-b2f4-c0fb6df73c83': {'pkts': 4, 'bytes': 3},
+            'c5df2fe5-c600-4a2a-b2f4-c0fb6df73c83': {'pkts': 2, 'bytes': 5}}
+        try:
+            self.metering.granular_traffic_data = False
+            counters = self.metering.get_traffic_counters(None, TEST_ROUTERS)
+            self.assertEqual(expected_total_number_of_data_granularity,
+                             len(counters))
+            self.assertEqual(expected_response, counters)
+        finally:
+            self.metering.granular_traffic_data = old_granular_traffic_data
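The granular test above asserts nine buckets keyed by router, label, project, and their combinations. As a hedged illustration of that key scheme, the helper below is hypothetical (its names and structure are not neutron's actual implementation); it only mirrors the key format and the per-granularity accumulation the test's expected response implies:

```python
# Hypothetical sketch of the bucket-key scheme asserted in
# test_get_traffic_counters_granular_data. NOT neutron's implementation;
# function and variable names are illustrative only.


def granular_keys(router_id, project_id, label_id):
    """Map each granularity to the bucket key one sample contributes to."""
    return {
        "router": "router-%s" % router_id,
        "label": "label-%s" % label_id,
        "label_router": "router-%s-label-%s" % (router_id, label_id),
        "project": "project-%s" % project_id,
        "label_project": "project-%s-label-%s" % (project_id, label_id),
    }


def accumulate(counters, sample, router_id, project_id, label_id):
    """Fold one pkts/bytes sample into every bucket it belongs to."""
    for granularity, key in granular_keys(
            router_id, project_id, label_id).items():
        bucket = counters.setdefault(
            key, {"pkts": 0, "bytes": 0,
                  "traffic-counter-granularity": granularity})
        bucket["pkts"] += sample["pkts"]
        bucket["bytes"] += sample["bytes"]
    return counters
```

With two routers in one project (one label each), this yields nine distinct buckets, and the shared `project-...` bucket sums both samples, matching the `'pkts': 6, 'bytes': 8` project entry in the test above.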
@@ -160,7 +160,8 @@ class TestMeteringPlugin(test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': self.uuid}],
+                          'id': self.uuid, 'shared': False,
+                          'name': 'label'}],
                      'id': self.uuid}]

         tenant_id_2 = '8a268a58-1610-4890-87e0-07abb8231206'
@@ -184,13 +185,15 @@ class TestMeteringPlugin(test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': self.uuid},
+                          'id': self.uuid, 'shared': False,
+                          'name': 'label'},
                          {'rules': [],
-                          'id': second_uuid}],
+                          'id': second_uuid, 'shared': True,
+                          'name': 'label'}],
                      'id': self.uuid}]

         tenant_id_2 = '8a268a58-1610-4890-87e0-07abb8231206'
-        with self.router(name='router1', tenant_id=self.tenant_id,
+        with self.router(name='router1', tenant_id=self.tenant_id, shared=True,
                          set_context=True):
             with self.metering_label(tenant_id=self.tenant_id,
                                      set_context=True):
@@ -208,7 +211,8 @@ class TestMeteringPlugin(test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': self.uuid}],
+                          'id': self.uuid, 'shared': False,
+                          'name': 'label'}],
                      'id': self.uuid}]

         with self.router(tenant_id=self.tenant_id, set_context=True):
@@ -229,9 +233,11 @@ class TestMeteringPlugin(test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': self.uuid},
+                          'id': self.uuid, 'shared': False,
+                          'name': 'label'},
                          {'rules': [],
-                          'id': second_uuid}],
+                          'id': second_uuid, 'shared': False,
+                          'name': 'label'}],
                      'id': self.uuid}]
         expected_remove = [{'status': 'ACTIVE',
                             'name': 'router1',
@@ -241,7 +247,8 @@ class TestMeteringPlugin(test_db_base_plugin_v2.NeutronDbPluginV2TestCase,
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': second_uuid}],
+                          'id': second_uuid, 'shared': False,
+                          'name': 'label'}],
                      'id': self.uuid}]

         with self.router(tenant_id=self.tenant_id, set_context=True):
@@ -384,7 +391,8 @@ class TestMeteringPluginL3AgentScheduler(
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': second_uuid}],
+                          'id': second_uuid, 'shared': False,
+                          'name': 'label'}],
                      'id': self.uuid},
                     {'status': 'ACTIVE',
                      'name': 'router2',
@@ -394,7 +402,8 @@ class TestMeteringPluginL3AgentScheduler(
                      'tenant_id': self.tenant_id,
                      '_metering_labels': [
                          {'rules': [],
-                          'id': second_uuid}],
+                          'id': second_uuid, 'shared': False,
+                          'name': 'label'}],
                      'id': second_uuid}]

         # bind each router to a specific agent
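The plugin tests carry a `shared` attribute on each metering label: a shared label applies to all projects' traffic, a non-shared one only to its own project's. A minimal sketch of that selection rule, assuming the dict shapes used in these tests (the helper itself is hypothetical, not neutron code):

```python
# Hypothetical helper illustrating 'shared' metering-label semantics:
# a shared label matches every router; a non-shared label matches only
# routers belonging to the label's own project (tenant).
def routers_for_label(label, routers):
    """Return the routers a metering label applies to."""
    if label['shared']:
        return list(routers)
    return [r for r in routers if r['tenant_id'] == label['tenant_id']]
```

This mirrors why the tests above pair `shared=True` routers/labels across two tenant IDs while non-shared labels stay scoped to a single tenant.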