Retire monasca-specs repository

This repository is being retired as part of the Monasca project
retirement. The project content has been replaced with a retirement
notice.

Needed-By: I3cb522ce8f51424b64e93c1efaf0dfd1781cd5ac
Change-Id: Ia2dd8a119d5eef46052df0eaf2b2dd699a313d46
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
This commit is contained in:
Goutham Pacha Ravi
2025-08-11 22:15:36 -07:00
parent 393acda2c9
commit 3d2164df0e
49 changed files with 7 additions and 6252 deletions


@@ -1,3 +0,0 @@
- project:
    templates:
      - openstack-specs-jobs


@@ -1,38 +1,9 @@
========================
Team and repository tags
========================

This project is no longer maintained.

.. image:: https://governance.openstack.org/tc/badges/monasca-specs.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

.. Change things from this point on

======
README
======

Monasca Specifications
======================

This git repository is used to hold priorities and approved design
specifications for additions to the Monasca project. Reviews of the specs are
done in gerrit, using a workflow similar to the one used to review and merge
changes to the code itself.

The layout of this repository is::

  priorities/<release>/
  specs/<release>/

Where there are three sub-directories in ``specs``:

specs/<release>/approved
  Specifications approved, but not yet implemented
specs/<release>/implemented
  Implemented specifications
specs/<release>/not-implemented
  Specifications that were approved but are not expected to be implemented.
  These are kept for historical reference.

For any further questions, please email openstack-discuss@lists.openstack.org
or join #openstack-dev on OFTC.


@@ -1,155 +0,0 @@
# -*- coding: utf-8 -*-
#
# Tempest documentation build configuration file, created by
# sphinx-quickstart on Tue May 21 17:43:32 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import datetime
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['redirect',
              'sphinx.ext.todo',
              'openstackdocstheme',
              'yasfb',
              ]
# Feed configuration for yasfb
feed_base_url = 'https://specs.openstack.org/openstack/monasca-specs'
feed_author = 'Monasca Team'
todo_include_todos = True
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Monasca Specs'
copyright = u'%s, Monasca Team' % datetime.date.today().year
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
#exclude_patterns = [
# '_build',
# 'image_src/plantuml/README.rst',
#]
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['monasca-specs.']
version = ''
release = ''
# -- Options for man page output ----------------------------------------------
man_pages = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_domain_indices = False
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# -- openstackdocstheme configuration -----------------------------------------
openstackdocs_repo_name = 'openstack/monasca-specs'
openstackdocs_auto_name = False
openstackdocs_use_storyboard = True


@@ -1,160 +0,0 @@
.. monasca-specs documentation master file

=====================
Monasca Project Plans
=====================

Priorities
==========

At the beginning of each release cycle we agree what the whole community wants
to focus on for the upcoming release. This is the output of those discussions:

.. toctree::
   :glob:
   :maxdepth: 1
   :reversed:

   priorities/*

Specifications
==============

Here you can find the specs, and spec template, for each release:

.. toctree::
   :glob:
   :maxdepth: 1
   :reversed:

   specs/queens/index
   specs/rocky/index
   specs/stein/index

There are also some approved backlog specifications that are looking for
owners:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/backlog/index
Process
=======

The lifecycle of a specification
--------------------------------

Developers proposing a specification should propose a new file in the
``approved`` directory. monasca-specs-core will review the change in the usual
manner for the OpenStack project, and it will eventually be merged if a
consensus is reached. As the developer of an approved specification, it is your
responsibility to assign tasks to your story. Developers are then free to
propose code reviews to implement their specification. These reviews should
reference the Storyboard story and task in their commit message for tracking
purposes.

Once all code for the feature is merged into Monasca, the Storyboard story is
marked complete.

Periodically, someone from monasca-specs-core will move implemented
specifications from the ``approved`` directory to the ``implemented``
directory. Whilst individual developers are welcome to propose this move for
their implemented specifications, we have generally just done this in a batch
at the end of the release cycle. It is important to create redirects when this
is done so that existing links to the approved specification are not broken.
Redirects aren't symbolic links; they are defined in a file which Sphinx
consumes.
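For illustration, each line of such a redirects file pairs an old path with a
new path, separated by a single space (the spec name here is hypothetical):

```
specs/queens/approved/example-spec.rst specs/queens/implemented/example-spec.rst
```

The Sphinx plugin rewrites the ``.rst`` suffixes to ``.html`` and writes a
small HTML page at the old location that refreshes to the new one.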
This directory structure allows you to see what we thought about doing,
decided to do, and actually got done. Users interested in functionality in a
given release should only refer to the ``implemented`` directory.

Example specifications
----------------------

You can find an example spec in ``specs/<release_name>-template.rst``.

Backlog specifications
----------------------

Additionally, we allow the proposal of specifications that do not have a
developer assigned to them. These are proposed for review in the same manner as
above, but are added to::

  specs/backlog/approved

Specifications in this directory indicate the original author has either
become unavailable or has indicated that they are not going to implement the
specification. The specifications found here are available as projects for
people looking to get involved with Monasca. If you are interested in
claiming a spec, start by posting a review for the specification that moves it
from this directory to the next active release. Please set yourself as the new
`primary assignee` and maintain the original author in the `other contributors`
list.

Working with gerrit and specification unit tests
------------------------------------------------

For more information about working with gerrit, see
http://docs.openstack.org/infra/manual/developers.html#development-workflow

To validate that the specification is syntactically correct (i.e. get more
confidence in the Jenkins result), please execute the following command::

  $ tox

After running ``tox``, the documentation will be available for viewing in HTML
format in the ``doc/build/`` directory.
Specification review policies
=============================

There are a number of review policies which monasca-specs-core will apply when
reviewing proposed specifications. They are:

Trivial specifications
----------------------

Proposed changes which are trivial (very small amounts of code) and don't
change any of our public APIs are sometimes not required to provide a
specification. In these cases a Storyboard story is considered sufficient.
These proposals are approved during the `Open Discussion` portion of the
weekly Monasca IRC meeting. If you think your proposed feature is trivial and
meets these requirements, we recommend you bring it up for discussion there
before writing a full specification.

Previously approved specifications
----------------------------------

`Specifications are only approved for a single release`. If your specification
was previously approved but not implemented (or not completely implemented),
then you must seek re-approval for the specification. You can re-propose your
specification by doing the following:

* Copy (not move) your specification to the right directory for the current
  release.
* Update the document to comply with the new template.
* If there are no functional changes to the specification (only template
  changes) then add the `Previously-approved: <release>` tag to your commit
  message.
* Send for review.
* monasca-specs-core will merge specifications which meet these requirements
  with a single +2.

Specifications which depend on merging code in other OpenStack projects
-----------------------------------------------------------------------

For specifications `that depend on code in other OpenStack projects merging`,
we will not approve the Monasca specification until the code in that other
project has merged. To indicate your specification is in this state, please
use the Depends-On git commit message tag. The correct format is
`Depends-On: <change id of other work>`. monasca-specs-core can approve the
specification at any time, but it won't merge until the code we need to land
in the other project has merged as well.
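As a sketch, a commit message that re-proposes a previously approved spec
without functional changes, and that also depends on work in another project,
might combine both tags (the subject line and Change-Id below are
placeholders):

```
Re-propose alarm grouping spec for the current release

Previously-approved: queens
Depends-On: I0000000000000000000000000000000000000000
```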
Indices and tables
==================

* :ref:`search`


@@ -1 +0,0 @@
../../priorities


@@ -1,49 +0,0 @@
# A simple sphinx plugin which creates HTML redirections from old names
# to new names. It does this by looking for files named "redirects" in
# the documentation source and using the contents to create simple HTML
# redirection pages for changed filenames.

import os.path

from sphinx.util import logging

LOG = logging.getLogger(__name__)


def setup(app):
    app.connect('build-finished', emit_redirects)


def process_redirect_file(app, path, ent):
    parent_path = path.replace(app.builder.srcdir, app.builder.outdir)
    with open(os.path.join(path, ent)) as redirects:
        for line in redirects.readlines():
            from_path, to_path = line.rstrip().split(' ')
            from_path = from_path.replace('.rst', '.html')
            to_path = to_path.replace('.rst', '.html')
            redirected_filename = os.path.join(parent_path, from_path)
            redirected_directory = os.path.dirname(redirected_filename)
            if not os.path.exists(redirected_directory):
                os.makedirs(redirected_directory)
            with open(redirected_filename, 'w') as f:
                f.write('<html><head><meta http-equiv="refresh" content="0; '
                        'url=%s" /></head></html>'
                        % to_path)


def emit_redirects(app, exc):
    LOG.info('scanning %s for redirects...' % app.builder.srcdir)

    def process_directory(path):
        for ent in os.listdir(path):
            p = os.path.join(path, ent)
            if os.path.isdir(p):
                process_directory(p)
            elif ent == 'redirects':
                LOG.info('  found redirects at %s' % p)
                process_redirect_file(app, path, ent)

    process_directory(app.builder.srcdir)
    LOG.info('...done')


@@ -1 +0,0 @@
../../../../specs/backlog/approved


@@ -1,21 +0,0 @@
==============================
Monasca Backlog Specifications
==============================

These specifications are ideas and features that are desirable but do not have
anyone working on them.

Template:

.. toctree::
   :maxdepth: 1

   Specification Template <template>

Approved (but not implemented) backlog specs:

.. toctree::
   :glob:
   :maxdepth: 1

.. approved/*


@@ -1 +0,0 @@
../../../../specs/queens-template.rst


@@ -1 +0,0 @@
../../../../specs/queens/approved/


@@ -1 +0,0 @@
../../../../specs/queens/implemented/


@@ -1,26 +0,0 @@
=============================
Monasca Queens Specifications
=============================

Template:

.. toctree::
   :maxdepth: 1

   Specification Template (Queens release) <template>

Queens implemented specs:

.. toctree::
   :glob:
   :maxdepth: 1

.. implemented/*

Queens approved (but not implemented) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   approved/*


@@ -1 +0,0 @@
../../../../specs/queens-template.rst


@@ -1 +0,0 @@
../../../../specs/rocky/approved/


@@ -1 +0,0 @@
../../../../specs/rocky/implemented/


@@ -1,34 +0,0 @@
============================
Monasca Rocky Specifications
============================

Template:

.. toctree::
   :maxdepth: 1

   Specification Template (Rocky release) <template>

Rocky implemented specs:

.. toctree::
   :glob:
   :maxdepth: 1

.. implemented/*

Rocky approved (but not implemented) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   approved/*

Rocky deprecated specs:

.. toctree::
   :glob:
   :maxdepth: 1

   not-implemented/*


@@ -1 +0,0 @@
../../../../specs/rocky/not-implemented


@@ -1 +0,0 @@
../../../../specs/rocky-template.rst


@@ -1 +0,0 @@
../../../../specs/stein/approved


@@ -1 +0,0 @@
../../../../specs/stein/implemented


@@ -1,26 +0,0 @@
============================
Monasca Stein Specifications
============================

Template:

.. toctree::
   :maxdepth: 1

   Specification Template (Stein release) <template>

Stein implemented specs:

.. toctree::
   :glob:
   :maxdepth: 1

.. implemented/*

Stein approved (but not implemented) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   approved/*


@@ -1 +0,0 @@
../../../../specs/stein-template.rst


@@ -1,139 +0,0 @@
.. _queens-priorities:

=========================
Queens Project Priorities
=========================

List of priorities the Monasca drivers team is prioritizing in Queens.

The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------+-----------------------------+
| Title | Owners |
+=============================================+=============================+
| Cassandra support | jgu |
+---------------------------------------------+-----------------------------+
| `Add Monasca publisher to Ceilometer`_ | joadavis, aagate |
+---------------------------------------------+-----------------------------+
| `Fix Keystone authentication for Grafana`_ | rhochmuth, witek, Dobroslaw |
+---------------------------------------------+-----------------------------+
| Alarm grouping, silencing, inhibition | Andrea Adams, rhochmuth |
+---------------------------------------------+-----------------------------+
| `Run API under WSGi (Community Goal)`_ | kornicameister, witek |
+---------------------------------------------+-----------------------------+
| `Support Python 3.5 (Community Goal)`_ | witek, sc |
+---------------------------------------------+-----------------------------+
| `Split Tempest Plugins (Community Goal)`_ | chandankumar |
+---------------------------------------------+-----------------------------+
High Priorities
~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| :ref:`service-agent-domain` | jgr |
+---------------------------------------------+-------------------------+
| Replace python-kafka with pykafka | |
+---------------------------------------------+-------------------------+
| Metrics retention policy | |
+---------------------------------------------+-------------------------+
| `Persisting Events`_ | witek |
+---------------------------------------------+-------------------------+
| Monasca Query Language | |
+---------------------------------------------+-------------------------+
| `Policy in Code (Community Goal)`_ | jgr |
+---------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| `3-nodes cluster with Docker Compose`_ | witek |
+---------------------------------------------+-------------------------+
| Add message attributes to Log API | koji |
+---------------------------------------------+-------------------------+
Details
~~~~~~~
Add Monasca publisher to Ceilometer
-----------------------------------
Monasca-Ceilometer (a.k.a. Ceilosca) code currently exists in its own project
for historical reasons. With changes in Ceilometer and the Telemetry project,
it may be possible to have the Monasca publisher from monasca-ceilometer
merged into the Ceilometer repository. This could reduce future maintenance
workload.
.. _ceilosca merge storyboard: https://storyboard.openstack.org/#!/story/2001239
.. _grafana-auth:
Fix Keystone authentication for Grafana
---------------------------------------
The current implementation of Keystone authentication for Grafana is maintained
in the `forked repository`_. Due to upstream changes in Grafana major
refactoring is required to rebase the fork with newest Grafana code.
The goal is to contribute Keystone authentication (or generic pluggable
authentication mechanism) to Grafana upstream. If not possible, the current
fork should be refactored to allow its further maintenance.
.. _forked repository: https://github.com/monasca/grafana
Run API under WSGi (Community Goal)
-----------------------------------
This is a community-wide release goal for Pike. The goal is to
support, and test, running the API under `WSGI`_.
.. _WSGI: https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
Support Python 3.5 (Community Goal)
-----------------------------------
This is a community-wide release goal for Pike. The goal is to
support, and test, running with `python 3.5`_.
.. _python 3.5: https://governance.openstack.org/tc/goals/pike/python35.html
Split Tempest Plugins (Community Goal)
--------------------------------------
This goal is to make sure we always use a `separate python project`_ for
monasca-api, monasca-log-api and monasca-events-api tempest plugins.
.. _separate python project: https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
Policy in Code (Community Goal)
-------------------------------
The goal is to register and document default `policies`_ for the APIs in code.
.. _policies: https://governance.openstack.org/tc/goals/queens/policy-in-code.html
Persisting Events
-----------------
The goal is to provide the `pipeline`_ for persisting OpenStack notifications
and/or events from external systems to the database, e.g. Elasticsearch.
.. _pipeline: https://storyboard.openstack.org/#!/story/2001112
3-nodes cluster with Docker Compose
-----------------------------------
The goal is to provide an easy and simple way of deploying Monasca in a `static
3-nodes cluster`_ with Docker containers without using cluster management layer
like Kubernetes or Docker Swarm.
.. _static 3-nodes cluster: https://github.com/monasca/monasca-docker/issues/154


@@ -1,157 +0,0 @@
.. _rocky-priorities:

========================
Rocky Project Priorities
========================

List of priorities the Monasca drivers team is prioritizing in Rocky.

The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------------+-----------------------------+
| Title | Owners |
+===============================================+=============================+
| `Kafka upgrade`_ | witek |
+-----------------------------------------------+-----------------------------+
| `Alembic migrations`_ | jgr |
+-----------------------------------------------+-----------------------------+
| Metrics retention policy | jgu |
+-----------------------------------------------+-----------------------------+
| Monasca Transformer refresh | aagate, joadavis |
+-----------------------------------------------+-----------------------------+
| `Run API under WSGi`_ | witek |
+-----------------------------------------------+-----------------------------+
| `Support Python 3.5`_ | witek, sc |
+-----------------------------------------------+-----------------------------+
| Enable mutable configuration | |
+-----------------------------------------------+-----------------------------+
| `Policy in Code`_ | amofakhar |
+-----------------------------------------------+-----------------------------+
High Priorities
~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| `Templating webhook notifications`_ | dougsz |
+---------------------------------------------+-------------------------+
| :ref:`service-agent-domain` | jgr |
+---------------------------------------------+-------------------------+
| `Add Monasca publisher to Ceilometer`_ | joadavis, aagate |
+---------------------------------------------+-------------------------+
| Alarm grouping, silencing, inhibition | witek |
+---------------------------------------------+-------------------------+
| Documentation refresh | |
+---------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| New agent plugins for OpenStack | |
+---------------------------------------------+-------------------------+
| Cross-project integrations | |
+---------------------------------------------+-------------------------+
| Monasca Query Language | |
+---------------------------------------------+-------------------------+
| Create Docker images from OpenStack repos | |
+---------------------------------------------+-------------------------+
| `Kolla deployment`_ | dougsz |
+---------------------------------------------+-------------------------+
| `Query logs pipeline`_ | dougsz |
+---------------------------------------------+-------------------------+
| New monasca-thresh | |
+---------------------------------------------+-------------------------+
| Monasca-persister performance improvements | sgrasley, jgu |
+---------------------------------------------+-------------------------+
Details
~~~~~~~
Kafka upgrade
-----------------------------------
The goal is to upgrade all Monasca components to use Apache Kafka 1.0.x. The
currently used embedded, forked version of the kafka-python client should be
replaced with pykafka (or, alternatively, confluent-kafka-python). The
integration should be preceded by extensive performance and endurance testing.
Alembic migrations
------------------
The goal is to provide a consistent and easy-to-use way to maintain SQL schema
changes. The implementation should allow schema initialization and migration
from one version to another. `Alembic`_ is a lightweight database migration
tool which optimally fulfills our requirements.
.. _Alembic: http://alembic.zzzcomputing.com/en/latest/
Run API under WSGi
-----------------------------------
This is a community-wide release goal for Pike. `The goal`_ is to:
* Provide WSGI application script files.
* Switch devstack jobs to deploy Monasca APIs under uwsgi with Apache acting as
a front end proxy.
.. _The goal: https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
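As an illustration of the first bullet, a minimal WSGI application script
might look like the sketch below. The handler body and port are hypothetical
stand-ins; a real script would construct the monasca-api application instead
of this toy callable:

```python
# Minimal sketch of a WSGI application script file (illustrative only;
# not monasca-api's actual module layout).

import json


def application(environ, start_response):
    # Respond to any request with a small JSON body.
    body = json.dumps({'status': 'ok'}).encode('utf-8')
    start_response('200 OK', [
        ('Content-Type', 'application/json'),
        ('Content-Length', str(len(body))),
    ])
    return [body]


if __name__ == '__main__':
    # Serve the app locally with the stdlib reference server for testing.
    from wsgiref.simple_server import make_server
    with make_server('127.0.0.1', 8070, application) as httpd:
        print('Serving on http://127.0.0.1:8070/')
        httpd.serve_forever()
```

Under the deployment the goal describes, uwsgi would load the module's
``application`` callable and Apache would act as a front end proxy to it.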
Support Python 3.5
-----------------------------------
This is a community-wide release goal for Pike. The goal is to
support, and test, running with `Python 3.5`_.
.. _Python 3.5: https://governance.openstack.org/tc/goals/pike/python35.html
Policy in Code
-------------------------------
The goal is to register and document default `policies`_ for the APIs in code.
.. _policies: https://governance.openstack.org/tc/goals/queens/policy-in-code.html
Add Monasca publisher to Ceilometer
-----------------------------------
Monasca-Ceilometer (a.k.a. Ceilosca) code currently exists in its own project
for historical reasons. With changes in Ceilometer and the Telemetry project,
it may be possible to have the Monasca publisher from monasca-ceilometer
`merged into the Ceilometer`_ repository. This could reduce future maintenance
workload.
.. _merged into the Ceilometer: https://storyboard.openstack.org/#!/story/2001239
Templating webhook notifications
--------------------------------
Improve the quality of notifications generated from alerts. We want notifications
to be informative, concise and flexible.
Kolla deployment
----------------
Add support for deploying Monasca in Docker containers using the OpenStack Kolla
project. This change will support deploying Monasca in a high availability
configuration. Blueprints exist for `containers`_ and the `Ansible roles`_ to deploy
them.
.. _containers: https://blueprints.launchpad.net/kolla/+spec/monasca-containers
.. _Ansible roles: https://blueprints.launchpad.net/kolla-ansible/+spec/monasca-roles
Query logs pipeline
-------------------
`Add support`_ for querying ElasticSearch via the Monasca Log API to support tenant
scoped access to logs. This should include accessing the logs via Grafana.
.. _Add support: https://blueprints.launchpad.net/monasca/+spec/log-query-api


@@ -1,111 +0,0 @@
.. _stein-priorities:

========================
Stein Project Priorities
========================

List of priorities the Monasca drivers team is prioritizing in Stein.

The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!

The implementation progress on these priorities and other identified important
tasks is tracked in `this board`_.

.. _this board: https://storyboard.openstack.org/#!/board/111
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------------+-----------------------------+
| Title | Owners |
+===============================================+=============================+
| `Kafka client upgrade`_ | witek |
+-----------------------------------------------+-----------------------------+
| `Monasca Events Agent`_ | joadavis, aagate |
+-----------------------------------------------+-----------------------------+
| `Merge Monasca APIs`_ | dougsz |
+-----------------------------------------------+-----------------------------+
| `Add query endpoint for logs/events`_ | dougsz |
+-----------------------------------------------+-----------------------------+
| `Run under Python 3 by default`_ | adriancz, Dobroslaw |
+-----------------------------------------------+-----------------------------+
| `Pre upgrade checks`_ | joadavis |
+-----------------------------------------------+-----------------------------+
High Priorities
~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| Auto-scaling with Heat | witek |
+---------------------------------------------+-------------------------+
| `Metrics retention policy`_ | joadavis |
+---------------------------------------------+-------------------------+
| Documentation refresh | |
+---------------------------------------------+-------------------------+
| Deployment in OpenStack Helm | srwilkers |
+---------------------------------------------+-------------------------+
| Integration with Watcher | yushiro |
+---------------------------------------------+-------------------------+
Details
~~~~~~~
Kafka client upgrade
--------------------
Currently, all Python Monasca components use a vendored copy of the
`kafka-python` library at version 0.9.5 (released on Feb 16, 2016). Sticking
with this old, frozen client version is also unacceptable in terms of
security. The goal is to upgrade the Apache Kafka client to
`confluent-kafka-python`. This will dramatically improve performance and
reliability.
Merge Monasca APIs
------------------
The goal is to merge all Monasca APIs into a single unified API to reduce
maintenance overhead, make it easier for developers to add new features and
improve the user experience.
Monasca Events Agent
--------------------
The goal is to extend the Monasca Ceilometer project and add a new events
publisher which will publish OpenStack notifications (or events) to the
Monasca Events API.
Add query endpoint for logs/events
----------------------------------
`Add support`_ for querying ElasticSearch via the Monasca API to support tenant
scoped access to logs and events. This should include accessing the logs via
Grafana.
.. _Add support: https://blueprints.launchpad.net/monasca/+spec/log-query-api
Run under Python 3 by default
-----------------------------
As OpenStack Technical Committee agreed in the `Python2 Deprecation Timeline`_
resolution, the next phase of our adoption of Python 3 is to begin running all
jobs using Python 3 by default and only using Python 2 to test operating under
Python 2 (via unit, functional, or integration tests). This goal describes the
activities needed to move us to this `python 3 first`_ state.
.. _Python2 Deprecation Timeline: https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html#python2-deprecation-timeline
.. _Python 3 first: https://governance.openstack.org/tc/goals/stein/python3-first.html
Pre upgrade checks
------------------
The goal is to provide an `upgrade check command`_ which would perform any
upgrade validation that can be automated.
.. _upgrade check command: https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html
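The common shape of such a command (modelled loosely on `oslo.upgradecheck`)
is a set of check functions whose worst result becomes the exit status. The
check name below is hypothetical.

```python
# Sketch of an "upgrade check" command: each check returns a status code
# and the command exits with the worst result seen.
import enum


class Code(enum.IntEnum):
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


def check_policy_json():
    # e.g. warn if a deprecated policy.json file is still in use
    return Code.SUCCESS, "no deprecated policy file found"


def run_checks(checks):
    """Run all checks and return the worst (highest) status code."""
    worst = Code.SUCCESS
    for check in checks:
        code, _details = check()
        worst = max(worst, code)
    return worst


print(int(run_checks([check_policy_json])))
```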
Metrics retention policy
------------------------
The goal is to add a new API for managing the mapping of metrics to TTL values.
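One way such a mapping could be resolved at write time is longest-prefix
matching on the metric name, with a default TTL as fallback. The prefixes and
values below are assumptions for illustration.

```python
# Sketch: resolving a per-metric TTL (retention) from a mapping managed
# by the proposed API.  Longest-prefix matching and the default TTL are
# illustrative assumptions, not the agreed design.

RETENTION_DAYS = {
    "cpu.": 30,   # keep CPU metrics for a month
    "log.": 7,    # keep log-derived metrics only a week
}
DEFAULT_TTL_DAYS = 90


def ttl_for_metric(name):
    """Return retention in days using the longest matching prefix."""
    matches = [p for p in RETENTION_DAYS if name.startswith(p)]
    if not matches:
        return DEFAULT_TTL_DAYS
    return RETENTION_DAYS[max(matches, key=len)]


print(ttl_for_metric("cpu.user_perc"), ttl_for_metric("disk.space_used"))
```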


@@ -1,186 +0,0 @@
.. _train-priorities:
=========================
Train Project Priorities
=========================
List of priorities the Monasca drivers team is focusing on in Train.
The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
The implementation progress on these priorities and other identified important
tasks is tracked in `this board`_.
.. _this board: https://storyboard.openstack.org/#!/board/141
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------+---------------------------+
| Title | Owners |
+=================================================+===========================+
| `Kafka client upgrade`_ | witek |
+-------------------------------------------------+---------------------------+
| `Merge Monasca APIs`_ | adriancz |
+-------------------------------------------------+---------------------------+
| `Middleware upgrade`_ | dougsz |
+-------------------------------------------------+---------------------------+
| `Thresholding engine replacement (tech prev.)`_ | |
+-------------------------------------------------+---------------------------+
| `PDF generation for documentation`_ | |
+-------------------------------------------------+---------------------------+
High Priorities
~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| `Application credentials (Grafana)`_ | dougsz |
+---------------------------------------------+-------------------------+
| `Application credentials (agent)`_ | |
+---------------------------------------------+-------------------------+
| `Documentation refresh`_ | joadavis |
+---------------------------------------------+-------------------------+
| `Java Persister deprecation`_ | joadavis |
+---------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| `Monasca Events Agent`_ | |
+---------------------------------------------+-------------------------+
| New query language | |
+---------------------------------------------+-------------------------+
| OpenStack CLI | sc |
+---------------------------------------------+-------------------------+
| Reuse Prometheus dashboards | |
+---------------------------------------------+-------------------------+
| Vitrage integration | chaconpiza |
+---------------------------------------------+-------------------------+
Backlog
~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| Sharding model for InfluxDB | dougsz |
+---------------------------------------------+-------------------------+
| OpenStack Helm | |
+---------------------------------------------+-------------------------+
| OpenStack Ansible | sc |
+---------------------------------------------+-------------------------+
| Senlin integration | |
+---------------------------------------------+-------------------------+
| Gnocchi support | |
+---------------------------------------------+-------------------------+
Details
~~~~~~~
Kafka client upgrade
--------------------
Currently, all Python Monasca components use a bundled copy of the
`kafka-python` library pinned at version 0.9.5 (released on Feb 16, 2016).
Besides limiting performance, sticking with this old, frozen client version is
unacceptable in terms of security. The goal is to upgrade the Apache Kafka
client to `confluent-kafka-python`, which should significantly improve
performance and reliability.
Story: https://storyboard.openstack.org/#!/story/2003705
Merge Monasca APIs
------------------
The goal is to merge all Monasca APIs into a single unified API to reduce
maintenance overhead, make it easier for developers to add new features and
improve the user experience.
Story: https://storyboard.openstack.org/#!/story/2003881
Middleware upgrade
------------------
We want to change the general approach and aim to use the newest stable
versions of the software available. The beginning of the cycle is a good time
to upgrade components such as Apache Kafka, InfluxDB, Apache Storm and the ELK
stack.
Story: https://storyboard.openstack.org/#!/story/2005624
Thresholding engine replacement (tech prev.)
--------------------------------------------
The goal of this task is to provide the technical preview of the new
component replacing the current thresholding engine.
Story: https://storyboard.openstack.org/#!/story/2005598
PDF generation for documentation
--------------------------------
This is a community-wide goal:
https://governance.openstack.org/tc/goals/train/pdf-doc-generation.html
Application credentials (Grafana)
---------------------------------
`Keystone application credentials
<https://docs.openstack.org/keystone/latest/user/application_credentials.html>`_
offer a mechanism for applications to authenticate to Keystone. The ability to
specify `access rules
<http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html>`_
for application credentials is being developed and will be released in the
Train cycle.
The goal of this story is to add application credentials support in
monasca-grafana-datasource. The access rules should be limited to only
reading the measurements from Monasca. It will allow storing these
credentials directly in the datasource without the security risk of revealing
the OpenStack user's password. It will also decouple the datasource from
Grafana's authentication.
Story: https://storyboard.openstack.org/#!/story/2005623
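Restricting the credential to read-only access could be expressed through
access rules like the sketch below. The service type and API paths are
assumptions for illustration; the real values come from the Monasca API
reference.

```python
# Sketch: access rules limiting a Grafana datasource credential to
# reading measurements.  Service name and paths are hypothetical.
import json


def read_only_access_rules():
    """Access rules allowing only reads of metric names and measurements."""
    return [
        {"service": "monitoring", "method": "GET", "path": "/v2.0/metrics"},
        {"service": "monitoring", "method": "GET",
         "path": "/v2.0/metrics/measurements"},
    ]


print(json.dumps(read_only_access_rules(), indent=2))
```

A rules document like this would be supplied when the application credential
is created, so the resulting token can never be used to post data or modify
alarms.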
Application credentials (agent)
-------------------------------
`Keystone application credentials
<https://docs.openstack.org/keystone/latest/user/application_credentials.html>`_
offer a mechanism for applications to authenticate to Keystone. The ability to
specify `access rules
<http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html>`_
for application credentials is being developed and will be released in the
Train cycle.
The goal of this story is to add application credentials support in
*monasca-agent*. This will prevent the security risk of revealing the
OpenStack user's password when installing the agent in a tenant's environment.
The access rules of these application credentials should be limited to posting
measurements. *monasca-setup* should be extended to automatically generate
such credentials and save them in the configuration file if needed.
Documentation refresh
---------------------
Story: https://storyboard.openstack.org/#!/story/2005625
Java Persister deprecation
--------------------------
Story: https://storyboard.openstack.org/#!/story/2005628
Monasca Events Agent
--------------------
The goal is to extend the Monasca Ceilometer project and add a new events
publisher which will publish OpenStack notifications (or events) to the
Monasca Events API.
Story: https://storyboard.openstack.org/#!/story/2003023


@@ -1,139 +0,0 @@
.. _ussuri-priorities:
=========================
Ussuri Project Priorities
=========================
List of priorities the Monasca drivers team is focusing on in Ussuri.
The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
The implementation progress on these priorities and other identified important
tasks is tracked in `this board`_.
.. _this board: https://storyboard.openstack.org/#!/board/190
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------+---------------------------+
| Title | Owners |
+=================================================+===========================+
| `New thresholding engine`_ | chaconpiza |
+-------------------------------------------------+---------------------------+
| `Monasca Events Agent`_ | witek |
+-------------------------------------------------+---------------------------+
| `IPv6 support`_ | witek |
+-------------------------------------------------+---------------------------+
High Priorities
~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| `InfluxDB HA Setup`_ | dougsz |
+---------------------------------------------+-------------------------+
| `Query Logs API`_ | dougsz |
+---------------------------------------------+-------------------------+
| `Application Credentials`_ | dougsz |
+---------------------------------------------+-------------------------+
| `New InfluxDB query capabilities`_ | dougsz |
+---------------------------------------------+-------------------------+
| `Middleware upgrade`_ | witek |
+---------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+---------------------------------------------+-------------------------+
| Title | Owners |
+=============================================+=========================+
| Sharding model for InfluxDB | |
+---------------------------------------------+-------------------------+
| Refresh Monasca transform engine | joadavis |
+---------------------------------------------+-------------------------+
Details
~~~~~~~
New thresholding engine
-----------------------
`Faust library`_ has been evaluated and the prototype of the thresholding
engine based on this library has been implemented. The goal of this effort is
to implement the new thresholding engine for Monasca to replace Apache Storm
Java application.
.. _Faust library: https://faust.readthedocs.io
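Independent of the streaming framework (Faust, Storm, ...), the core logic
such an engine evaluates can be sketched as a windowed comparison against a
threshold. The alarm semantics below (average over a fixed-size window) are an
illustrative assumption, not the Monasca alarm expression grammar.

```python
# Sketch: an alarm fires when the average of a metric over a sliding
# window exceeds a threshold.  Window size and semantics are illustrative.
from collections import deque


class ThresholdEvaluator:
    def __init__(self, threshold, window_size=3):
        self.threshold = threshold
        self.window = deque(maxlen=window_size)

    def observe(self, value):
        """Add a measurement; return 'ALARM' or 'OK' for the window average."""
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        return "ALARM" if avg > self.threshold else "OK"


ev = ThresholdEvaluator(threshold=90.0)
print([ev.observe(v) for v in (85.0, 95.0, 99.0)])
```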
Monasca Events Agent
--------------------
The goal is to implement a Monasca Events Listener which will publish
OpenStack notifications and events from third-party applications to the
Monasca Events API.
`Specification`_ listing existing requirements and proposed implementation
has been written up in the past.
.. _Specification: http://specs.openstack.org/openstack/monasca-specs/specs/stein/approved/monasca-events-listener.html
IPv6 Support
------------
It is a community-wide goal to `support IPv6-Only Deployments`_.
.. _support IPv6-Only Deployments: https://governance.openstack.org/tc/goals/selected/train/ipv6-support-and-testing.html
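One recurring detail in IPv6 support is URL construction, since IPv6 literals
must be bracketed in endpoint URLs. A stdlib-only sketch (host names and port
are examples):

```python
# Sketch: building endpoint URLs that work for host names, IPv4 and
# IPv6 literals alike.  IPv6 addresses must be wrapped in brackets.
import ipaddress


def endpoint_url(host, port, scheme="http"):
    """Build a URL, bracketing the host if it is an IPv6 literal."""
    try:
        if ipaddress.ip_address(host).version == 6:
            host = "[%s]" % host
    except ValueError:
        pass  # a host name, not an IP literal
    return "%s://%s:%d" % (scheme, host, port)


print(endpoint_url("fd00::10", 8070))
```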
InfluxDB HA Setup
-----------------
Story: https://storyboard.openstack.org/#!/story/2005620
Query Logs API
--------------
`Add support`_ for querying Elasticsearch via the Monasca Log API to support
tenant-scoped access to logs.
.. _Add support: https://blueprints.launchpad.net/monasca/+spec/log-query-api
Application Credentials
-----------------------
`Keystone application credentials
<https://docs.openstack.org/keystone/latest/user/application_credentials.html>`_
offer a mechanism for applications to authenticate to Keystone. The ability to
specify `access rules
<http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html>`_
for application credentials has been implemented in the Train cycle.
The goal of this story is to add application credentials support in
*monasca-agent*. This will prevent the security risk of revealing the
OpenStack user's password when installing the agent in a tenant's environment.
The access rules of these application credentials should be limited to posting
measurements. *monasca-setup* should be extended to automatically generate
such credentials and save them in the configuration file if needed.
A similar task should be implemented in *monasca-grafana-datasource*.
Stories:
* https://storyboard.openstack.org/#!/story/2005622
* https://storyboard.openstack.org/#!/story/2005623
New InfluxDB Query Capabilities
-------------------------------
The goal is to extend the Monasca API to query measurements using aggregation
functions available in InfluxDB, such as DERIVATIVE(). Another goal is to
investigate the new Flux query language to allow basic arithmetic operations
between different measurements, e.g. (disk_used / disk_total).
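For readers unfamiliar with it, a DERIVATIVE()-style aggregation computes the
per-second rate of change between consecutive points. A plain-Python sketch of
the semantics (sample data is illustrative):

```python
# Sketch of what a DERIVATIVE()-style aggregation computes over a series
# of (timestamp_seconds, value) points: rate of change per second.

def derivative(points):
    """Return (timestamp, rate) pairs between consecutive measurements."""
    rates = []
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        rates.append((t1, (v1 - v0) / (t1 - t0)))
    return rates


series = [(0, 100.0), (60, 160.0), (120, 130.0)]
print(derivative(series))
```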
Middleware upgrade
------------------
Story: https://storyboard.openstack.org/#!/story/2006768


@@ -1,116 +0,0 @@
.. _victoria-priorities:
===========================
Victoria Project Priorities
===========================
List of priorities the Monasca drivers team is focusing on in Victoria.
The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
The implementation progress on these priorities and other identified important
tasks is tracked in `this board`_.
.. _this board: https://storyboard.openstack.org/#!/board/217
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------+---------------------------+
| Title | Owners |
+=================================================+===========================+
| Middleware upgrades Grafana 4.0 -> 7.0.1 | dougszu |
+-------------------------------------------------+---------------------------+
| `Kafka/InfluxDB Sharding`_ | witek |
+-------------------------------------------------+---------------------------+
| Merge events-api into monasca-api | adriancz |
+-------------------------------------------------+---------------------------+
High Priorities
~~~~~~~~~~~~~~~
+----------------------------------------------------+-------------------------+
| Title | Owners |
+====================================================+=========================+
| `New thresholding engine`_ | chaconpiza |
+----------------------------------------------------+-------------------------+
| `Application Credentials`_ | |
+----------------------------------------------------+-------------------------+
| Define Prometheus based architecture | |
+----------------------------------------------------+-------------------------+
| Selenium Tests for Monasca-UI | |
+----------------------------------------------------+-------------------------+
| Middleware upgrades ELK 7.3.0 -> 7.7.0 | dougszu |
+----------------------------------------------------+-------------------------+
| Middleware upgrades Apache Kafka 2.0.1 -> 2.5.0 | |
+----------------------------------------------------+-------------------------+
| Middleware upgrades InfluxDB 1.7.6 -> 1.8.0 | |
+----------------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+------------------------------------------+-------------------------+
| Title | Owners |
+==========================================+=========================+
| Extend Monasca API | |
+------------------------------------------+-------------------------+
| Add Time and Times in Monasca UI | |
+------------------------------------------+-------------------------+
| Monasca agents RPM Packaging | |
+------------------------------------------+-------------------------+
| OpenStack Client Integration | |
+------------------------------------------+-------------------------+
| Refresh Monasca transform engine | dougsz |
+------------------------------------------+-------------------------+
Details
~~~~~~~
New thresholding engine
-----------------------
`Faust library`_ has been evaluated and the prototype of the thresholding
engine based on this library has been implemented. The goal of this effort is
to implement the new thresholding engine for Monasca to replace Apache Storm
Java application.
.. _Faust library: https://faust.readthedocs.io
Kafka/InfluxDB Sharding
-----------------------
Story: https://storyboard.openstack.org/#!/story/2005620
Application Credentials
-----------------------
`Keystone application credentials
<https://docs.openstack.org/keystone/latest/user/application_credentials.html>`_
offer a mechanism for applications to authenticate to Keystone. The ability to
specify `access rules
<http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html>`_
for application credentials has been implemented in the Train cycle.
The goal of this story is to add application credentials support in
*monasca-agent*. This will prevent the security risk of revealing the
OpenStack user's password when installing the agent in a tenant's environment.
The access rules of these application credentials should be limited to posting
measurements. *monasca-setup* should be extended to automatically generate
such credentials and save them in the configuration file if needed.
A similar task should be implemented in *monasca-grafana-datasource*.
Stories:
* https://storyboard.openstack.org/#!/story/2005622
* https://storyboard.openstack.org/#!/story/2005623
Middleware upgrade
------------------
Story: https://storyboard.openstack.org/#!/story/2006768


@@ -1,88 +0,0 @@
.. _xena-priorities:
===========================
Xena Project Priorities
===========================
List of priorities the Monasca drivers team is focusing on in Xena.
The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
The implementation progress on these priorities and other identified important
tasks is tracked in `this board`_.
.. _this board: https://storyboard.openstack.org/#!/board/236
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------------+---------------------------+
| Title | Owners |
+====================================================+===========================+
| `Migrate CI/CD from Travis-CI to GitHub actions` | chaconpiza |
+----------------------------------------------------+---------------------------+
| `Update Docker Images` | chaconpiza |
+----------------------------------------------------+---------------------------+
| `Update https://github.com/monasca/monasca-docker` | chaconpiza |
+----------------------------------------------------+---------------------------+
High Priorities
~~~~~~~~~~~~~~~
+----------------------------------------------------+-------------------------+
| Title | Owners |
+====================================================+=========================+
| `Thresholding engine in cluster mode` | chaconpiza |
+----------------------------------------------------+-------------------------+
| `Add Time and Times in Monasca UI` | |
+----------------------------------------------------+-------------------------+
| `Define Prometheus based architecture` | |
+----------------------------------------------------+-------------------------+
| `Application Credentials`_ | |
+----------------------------------------------------+-------------------------+
| `Middleware upgrades ELK 7.3.0 -> OpenDistro` | |
+----------------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+------------------------------------------+-------------------------+
| Title | Owners |
+==========================================+=========================+
| `Selenium Tests for Monasca-UI` | |
+------------------------------------------+-------------------------+
| `OpenStack Client Integration` | |
+------------------------------------------+-------------------------+
Details
~~~~~~~
Application Credentials
-----------------------
`Keystone application credentials
<https://docs.openstack.org/keystone/latest/user/application_credentials.html>`_
offer a mechanism for applications to authenticate to Keystone. The ability to
specify `access rules
<http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html>`_
for application credentials has been implemented in the Train cycle.
The goal of this story is to add application credentials support in
*monasca-agent*. This will prevent the security risk of revealing the
OpenStack user's password when installing the agent in a tenant's environment.
The access rules of these application credentials should be limited to posting
measurements. *monasca-setup* should be extended to automatically generate
such credentials and save them in the configuration file if needed.
A similar task should be implemented in *monasca-grafana-datasource*.
Stories:
* https://storyboard.openstack.org/#!/story/2005622
* https://storyboard.openstack.org/#!/story/2005623


@@ -1,88 +0,0 @@
.. _yoga-priorities:
===========================
Yoga Project Priorities
===========================
List of priorities the Monasca drivers team is focusing on in Yoga.
The owners listed are responsible for tracking the status of that work and
helping get that work done. They are not the only contributors to this work,
and not necessarily doing most of the coding!
The implementation progress on these priorities and other identified important
tasks is tracked in `this board`_.
.. _this board: https://storyboard.openstack.org/#!/board/247
Essential Priorities
~~~~~~~~~~~~~~~~~~~~
+----------------------------------------------------+---------------------------+
| Title | Owners |
+====================================================+===========================+
| `Update Docker Images` | chaconpiza |
+----------------------------------------------------+---------------------------+
| `Update https://github.com/monasca/monasca-docker` | chaconpiza |
+----------------------------------------------------+---------------------------+
| `Bring compatibility with Python3 zed unit tests` | chaconpiza |
+----------------------------------------------------+---------------------------+
High Priorities
~~~~~~~~~~~~~~~
+----------------------------------------------------+-------------------------+
| Title | Owners |
+====================================================+=========================+
| `Thresholding engine in cluster mode` | chaconpiza |
+----------------------------------------------------+-------------------------+
| `Add Time and Times in Monasca UI` | |
+----------------------------------------------------+-------------------------+
| `Define Prometheus based architecture` | |
+----------------------------------------------------+-------------------------+
| `Application Credentials`_ | |
+----------------------------------------------------+-------------------------+
| `Middleware upgrades ELK 7.3.0 -> OpenDistro` | |
+----------------------------------------------------+-------------------------+
Optional Priorities
~~~~~~~~~~~~~~~~~~~
+------------------------------------------+-------------------------+
| Title | Owners |
+==========================================+=========================+
| `Selenium Tests for Monasca-UI` | |
+------------------------------------------+-------------------------+
| `OpenStack Client Integration` | |
+------------------------------------------+-------------------------+
Details
~~~~~~~
Application Credentials
-----------------------
`Keystone application credentials
<https://docs.openstack.org/keystone/latest/user/application_credentials.html>`_
offer a mechanism for applications to authenticate to Keystone. The ability to
specify `access rules
<http://specs.openstack.org/openstack/keystone-specs/specs/keystone/stein/capabilities-app-creds.html>`_
for application credentials has been implemented in the Train cycle.
The goal of this story is to add application credentials support in
*monasca-agent*. This will prevent the security risk of revealing the
OpenStack user's password when installing the agent in a tenant's environment.
The access rules of these application credentials should be limited to posting
measurements. *monasca-setup* should be extended to automatically generate
such credentials and save them in the configuration file if needed.
A similar task should be implemented in *monasca-grafana-datasource*.
Stories:
* https://storyboard.openstack.org/#!/story/2005622
* https://storyboard.openstack.org/#!/story/2005623


@@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
openstackdocstheme>=2.2.1 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
testrepository>=0.0.18 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
yasfb>=0.8.0 # BSD


@@ -1,13 +0,0 @@
[metadata]
name = monasca-specs
summary = OpenStack Monasca Project Development Specs
description-file =
README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = http://specs.openstack.org/openstack/monasca-specs/
classifier =
Environment :: OpenStack
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux


@@ -1,20 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import setuptools
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)

specs/.gitignore


@@ -1,378 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Example Spec - The title of your feature request
================================================
Include the URL of your story:
https://storyboard.openstack.org
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand. The title and this first paragraph
should be used as the subject line and body of the commit message
respectively.
Some notes about the monasca-spec and stories process:
* Not all stories need a spec. For more information see
https://docs.openstack.org/monasca-api/latest/contributor/index.html
* The aim of this document is first to define the problem we need to solve,
and second agree the overall approach to solve that problem.
* This is not intended to be extensive documentation for a new feature.
For example, there is no need to specify the exact configuration changes,
nor the exact details of any DB model changes. But you should still define
that such changes are required, and be clear on how that will affect
upgrades.
* You should aim to get your spec approved before writing your code.
While you are free to write prototypes and code before getting your spec
approved, it's possible that the outcome of the spec review process leads
you towards a fundamentally different solution than you first envisaged.
* But, API changes are held to a much higher level of scrutiny.
As soon as an API change merges, we must assume it could be in production
somewhere, and as such, we then need to support that API change forever.
To avoid getting that wrong, we do want lots of details about API changes
upfront.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox and see the generated
HTML file in doc/build/html/specs/<path_of_your_file>
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which can not be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
* If your specification proposes any changes to the Monasca REST API such
as changing parameters which can be returned or accepted, or even
the semantics of what happens when a client calls into the API, then
you should add the APIImpact flag to the commit message. Specifications with
the APIImpact flag can be found with the following query:
https://review.opendev.org/#/q/status:open+project:openstack/monasca-specs+message:apiimpact,n,z
Problem description
===================
A detailed description of the problem. What problem is this feature request
addressing?
Use Cases
---------
What use cases does this address? What impact on actors does this change have?
Ensure you are clear about the actors in each use case: Developer, End User,
Deployer etc.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
At this point, if you would like to just get feedback on if the problem and
proposed change fit in monasca, you can stop here and post this for review to
get preliminary feedback. If so please say: Posting to get preliminary feedback
on the scope of this spec.
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* URL should not include underscores, and use hyphens instead.
* Parameters which can be passed via the url
* JSON schema definition for the request body data if allowed
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* JSON schema definition for the response body data if any
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted.
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-monascaclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system: for example, how
often will new code be called, and is there a major change to the calling
pattern of existing code?
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* Scheduler filters get called once per host for every instance being created,
so any latency they introduce is linear with the size of the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other deployer impact
---------------------
Discuss things that will affect how you deploy and configure Monasca
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on Monasca.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a feature where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in monasca, or in
other projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Monasca (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss the important scenarios needed to test here, as well as
specific edge cases we should be ensuring work correctly. For each
scenario please specify if this requires specialized hardware, a full
openstack environment, or can be simulated inside the Monasca tree.
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
Which audiences are affected most by this change, and which documentation
titles on docs.openstack.org should be updated because of this change? Don't
repeat details discussed above, but reference them here in the context of
documentation for multiple audiences. For example, the Operations Guide targets
cloud operators, and the End User Guide would need to be updated if the change
offers a new feature available through the CLI or dashboard. If a config option
changes or is deprecated, note here that the documentation needs to be updated
to reflect this specification's change.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to
History
=======
Optional section intended to be updated each time the spec changes, describing
new design, API, or database schema updates. Useful to let readers understand
what has happened over time.
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Queens
- Introduced
@@ -1,224 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================
Use oslo.policy for Monasca APIs
================================
https://storyboard.openstack.org/#!/story/2001233
Presently, neither monasca-api nor monasca-log-api use `oslo.policy`. Instead,
they contain their own policy enforcement code. Without `oslo.policy` they will
not be able to implement the policy in code community goal for Queens[0], which
requires the use of oslo mechanisms for defining and enforcing the policy.
Problem description
===================
Since neither monasca-api nor monasca-log-api use oslo.policy for policy
enforcement, they cannot describe their policy in code using `oslo.policy`
mechanisms as mandated by the community goal[0].
Use Cases
---------
The change proposed by this spec will improve the situation outlined above as
follows:
#. Policy will be enforced in the OpenStack wide standard manner by
`oslo.policy`. This reduces the maintenance burden on Monasca developers
because they will no longer need to maintain Monasca's own policy enforcement
code.
#. All default policy rules will be available in a standard format to all
interested parties, which fulfils the community goal.
#. Operators will be able to override the Monasca API and Monasca Log API
default policies using `policy.json` files and run command line tools to
generate a full `policy.json` for either service (e.g. for auditing
purposes).
#. The ability to use new tooling in oslo.policy that lets developers
deprecate or change policy defaults in a way operators can consume.
Proposed change
===============
This spec proposes the following changes to `monasca-common`, `monasca-api`,
`monasca-log-api` and the upcoming `monasca-events-api`:
#. Implement a policy enforcement engine in `monasca_common.policy`. We can
probably model this on Nova's policy enforcement engine[1]. We will have to
modify it somewhat to account for the fact that we have multiple APIs that
use the same policy engine.
#. Define a module that contains the default policy rules in both `monasca-api`
and `monasca-log-api` and exposes them to the enforcement engine (in
`monasca-events-api` there is such a module already). Nova's approach of
having a `list_rules()` method[2] should work just fine for us.
We can either copy Nova's approach of aggregating individual endpoints'
policies in that central module[3] or just have them in there directly.
#. Modify existing policy enforcement code in `monasca-api`, `monasca-log-api`
and `monasca-events-api` to use the enforcement engine from
`monasca-common`.
#. Add monasca-api-policy and monasca-log-api-policy command line entry points
that allow the user to generate a policy.json file for both APIs.
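The rule-registration approach in step 2 above might be sketched as follows.
All rule names and role strings here are invented for illustration, and a real
implementation would return `oslo.policy` `RuleDefault` objects rather than
plain tuples:

```python
# Illustrative sketch of the list_rules() aggregation pattern, modelled on
# Nova's policies/__init__.py. Rule names and role strings are hypothetical.

metrics_rules = [
    ("api:metrics:post", "role:monasca-agent"),
    ("api:metrics:get", "role:monasca-user"),
]

versions_rules = [
    ("api:versions:list", "@"),  # "@" conventionally means: always allowed
]


def list_rules():
    """Aggregate the per-endpoint rule lists into one central module so the
    enforcement engine (and the sample-policy generator) can consume them."""
    return metrics_rules + versions_rules
```

A real module would pass the result of ``list_rules()`` to the enforcer's
``register_defaults()`` during API start-up.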
Alternatives
------------
None, since this is a community goal. One thing that could be done differently
is the policy enforcement engine: it would be conceivable to have independent
enforcer implementations in both monasca-api and monasca-log-api. This would
needlessly violate the DRY principle, though.
Data model impact
-----------------
This change does not impact the data model.
REST API impact
---------------
This change must be implemented in a way that creates no discernible impact for
the API's consumers.
Client impact
-------------
N/A
Configuration changes
---------------------
This change will continue to use the same `monasca-api` and `monasca-log-api`
configuration settings we already use for policy enforcement:
* `agent_authorized_roles`, `default_authorized_roles`,
`delegate_authorized_roles` and `read_only_authorized_roles` for `monasca-api`
* `default_roles`, `agent_roles` and `delegate_roles` for `monasca-log-api`
The only difference is that a different implementation
(`monasca-common.policy`) will use them now.
Additionally, the policy enforcer will allow operators to create a
`policy.json` file for each API service that overrides the in-code defaults.
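For illustration, the options above might appear in the two services'
configuration files roughly as follows. The section names follow the
`security/...` and `roles_middleware/...` prefixes used elsewhere in this
repository; the values are examples, not recommended defaults:

```ini
# monasca-api.conf (illustrative excerpt)
[security]
agent_authorized_roles = monasca-agent
default_authorized_roles = monasca-user
delegate_authorized_roles = monasca-delegate
read_only_authorized_roles = monasca-read-only

# monasca-log-api.conf (illustrative excerpt)
[roles_middleware]
default_roles = monasca-user
agent_roles = monasca-agent
delegate_roles = monasca-delegate
```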
Security impact
---------------
This change introduces a way for operators to influence monasca-api and
monasca-log-api policy through configuration files. If they misconfigure policy
that way, they may allow unauthorized users to send or retrieve metrics and
logs.
Other end user impact
---------------------
N/A
Performance Impact
------------------
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
Developers that introduce new API operations will need to register policy rules
for these endpoints once this feature is implemented.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
amofakhar
Work Items
----------
#. Add policy enforcement module to `monasca-common`. It might be a good idea
to have an extension mechanism for the policy's target analogous to this
one[4] in Magnum to account for the different role variables from the
configuration file needed by `monasca-api` and `monasca-log-api`,
respectively. That way a generic policy enforcer can be made flexible to the
point where all the role differences between `monasca-api` and
`monasca-log-api` can be expressed in terms of policy rules.
#. Retire policy enforcement code in `monasca-events-api` in favour of the
policy enforcement module in `monasca-common`.
#. Add policy registration code to `monasca-api`.
#. Use policy enforcement module in `monasca_api.v2.reference.helpers.validate_authorization()`
#. Add policy registration code to `monasca-log-api`
#. Remove policy enforcement code from `monasca_log_api.middleware.role_middleware`
(for validating agent role) and `monasca_log_api.app.base.request`
and replace it by using `monasca-common.policy` from either
`monasca_log_api.middleware.role_middleware` or
`monasca_log_api.app.base_request` (to have centralized enforcement in one
spot).
#. Add `monasca-api-policy` console script to `monasca-api`
#. Add `monasca-log-api-policy` console script to `monasca-log-api`
Dependencies
============
N/A
Testing
=======
Existing policy enforcement tests are likely to need a major overhaul.
Documentation Impact
====================
The following things need to be documented for operators:
#. Their newly added ability to create `policy.json` files for Monasca API services
#. The functionality of the `monasca-api-policy` and `monasca-log-api-policy` scripts
References
==========
[0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
[1] https://github.com/openstack/nova/blob/master/nova/policy.py
[2] https://github.com/openstack/nova/blob/master/nova/policy.py#L207
[3] https://github.com/openstack/nova/blob/master/nova/policies/__init__.py
[4] https://github.com/openstack/magnum/blob/master/magnum/common/policy.py#L102
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Queens
- Introduced
@@ -1,430 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
.. _service-agent-domain:
===========================================
Service Domain for Self Service Agent Users
===========================================
https://storyboard.openstack.org/#!/story/2001214
In order to send metrics and logs to Monasca the Monasca agent currently needs
a Keystone user with a special Keystone role, usually `monasca-agent`. This is
fine for infrastructure monitoring of an OpenStack cloud where the person
interested in monitoring can usually create user accounts, too, and where these
accounts' credentials are stored on the cloud's compute nodes and controllers
which should be well protected against breaches. Storing such credentials on
public facing instances to be monitored by Monasca is a problem because these
instances (a) tend to be more exposed and (b) because an OpenStack user
creating instances usually cannot create special purpose user accounts. This
spec proposes a solution to these two problems.
Problem description
===================
Currently there are two ways to submit metrics and logs for a given OpenStack
project:
1) Cross tenant submission: this requires a user with a special role that can
submit metrics or logs on behalf of any arbitrary project. This role is
currently used by compute nodes to submit libvirt metrics for the projects
their running instances reside in. These roles are controlled by the
`security/delegate_authorized_roles` setting for monasca-api and the
`roles_middleware/delegate_roles` setting for monasca-log-api.
2) Metric submission by a user in the project: this is the normal way. Any user
with the designated Monasca agent roles (`monasca-agent` by default) in a
project can submit metrics and logs for this project. These roles are
controlled by the `security/agent_authorized_roles` setting for monasca-api
and the `roles_middleware/agent_roles` setting for monasca-log-api.
Both options are bad from a security point of view. (1) is worse, because it
will allow submission of arbitrary bogus log entries/metric events for arbitrary
projects. Because of this high abuse potential, and because it is currently
only implemented in monasca-agent's libvirt monitoring plugin it is unlikely to
be employed for instance monitoring, though. (2) comes with a slightly lower
but still problematic security risk:
If a user wants to monitor their instances, they need to pass the Keystone
credentials of an OpenStack user with the `monasca-agent` role into their
instances. Like any OpenStack user with any role in a project, this user can
access arbitrary OpenStack APIs and create/delete resources at will. While it
would be possible to add global deny rules for the `monasca-agent` role to
every other OpenStack service's `policy.json`, this is unlikely to be
implemented in practice. Consequently, the compromise of an instance and its
`monasca-agent` credentials will usually leave an attacker with unrestricted
out-of-band access to its creating user's Keystone project. This typically
allows the attacker to, for example:
* Create, view and delete Nova instances
* Create, view and delete Neutron networks
* Create, view and delete Cinder volumes
* Create, view and delete Glance images
In addition to this security problem there is a usability issue as well. A
regular OpenStack user can neither create Keystone users, nor can they assign
the `monasca-agent` role to these users. Consequently any user requiring
Monasca monitoring for their instance needs to ask somebody with admin
privileges to create a user and assign the `monasca-agent` role to that user.
As a result, self-service instance monitoring is not possible.
Use Cases
---------
The change proposed by this spec will improve the situation outlined above as
follows:
* End Users will be able to acquire access credentials for their instances'
metrics and log agents in a self-service manner.
* End Users' attack surface from compromised instances with Monasca agents will
be reduced to submission of bogus metrics/logs for their project.
Proposed change
===============
This change takes inspiration from OpenStack Magnum, particularly the fix for
CVE-2016-7404[0]. Before this fix, Magnum would create Keystone trustee users
in a separate domain, with one or more of a cluster owner's roles delegated via
Keystone trusts. These user accounts would only get used for submitting
certificate sign requests to the Magnum API in most cases so they had similarly
generous permissions to the monasca agent users. The fix for `CVE-2016-7404`
reduced the use of trusts to only the scenarios where they were really needed:
in the default case these users do not get any trusts delegated, nor do they
have roles assigned, rendering them useless for most OpenStack APIs. The Magnum
API's policy rules for `certificate:create` and `certificate:sign` are the sole
exception from this rule: they allow access if the user exists in the Magnum
domain and the user's ID matches the recorded user ID for a given cluster.
For Monasca, this spec proposes an analogous Keystone domain for agent users,
just like Magnum's trustee user domain. Likewise, the Monasca API service would
get access to an admin account for this domain so it can create users inside
the domain. The final puzzle piece are two extensions to the Monasca metrics
and log APIs:
1) A monasca-api endpoint that allows end users to get these special agent
users created for their project and to retrieve their credentials.
2) A modification to log and metrics submission endpoints for monasca-api and
monasca-log-api that allows submission for the project associated with the
agent user in question.
In the remainder of this section you will find a detailed description of how
this change affects various parts of monasca-api.
Alternatives
------------
Some things could be implemented in a different manner from the approach
outlined below:
* It would be possible to substitute the rather heavyweight Keystone
domain for the agent users by a lightweight, automatically generated API key.
This would make for leaner credentials, but it would allow authorization for
monasca-api/monasca-log-api submission entirely without Keystone. This is
less of a technical and more of a political issue. Also, monasca-client would
need additional, homegrown code for authenticating with this API key which
may introduce additional security bugs. On the whole this is probably a bad
idea (while discussing this on IRC people were in favour of using Keystone as
well).
* Instead of recording project association and permissions for agent users in
the database one could encode it in the agent users' user names. This would
be less than elegant, though. On the other hand, we would not need to create
database tables/add database client code to monasca-log-api.
* The current approach has monasca-api handling creation/deletion of agent
users for both metrics and log submission. It would be conceivable to
implement independent user creation for both APIs, but this would add
considerable implementation overhead for no little benefit.
Data model impact
-----------------
We will need to introduce a new table `agent_users` with the following fields:
* `id` (string): the agent user's keystone user UUID. Unique primary key.
* `creator_id` (string): the creating user's keystone user UUID
* `project_id` (string): the project the user can submit logs and metrics for
* `submit_metrics` (boolean): optional flag specified upon creation. Defaults
to True if unspecified. Controls whether the user is allowed
to submit metrics
* `submit_logs` (boolean): optional flag specified upon creation. Defaults to
True if unspecified. Controls whether the user is allowed to
submit logs.
For this to work, monasca-log-api will need a database client implementation
and the configuration options to go with that, of course. In order to reduce
code duplication, as much database handling code from monasca-api as possible
will be moved to monasca-common from where both monasca-api and monasca-log-api
can use it.
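As an illustration only, the proposed record could be modelled in Python
roughly like this; the real implementation would be a SQLAlchemy table
definition, and all names below simply mirror the field list above:

```python
from dataclasses import dataclass


# Sketch of the proposed `agent_users` record. The two flag defaults match
# the spec: both are True when unspecified at creation time.
@dataclass
class AgentUser:
    id: str            # agent user's Keystone user UUID (primary key)
    creator_id: str    # creating user's Keystone user UUID
    project_id: str    # project the user may submit logs/metrics for
    submit_metrics: bool = True
    submit_logs: bool = True
```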
REST API impact
---------------
The REST API needs to be modified in 3 places:
1. There needs to be a facility for self-service agent user creation
2. monasca-api needs to grant or deny metric submission based on the project
an agent user is associated with and whether it has its `submit_metrics`
flag set to `True`.
3. monasca-log-api needs to grant or deny log submission based on the project
an agent user is associated with and whether it has its `submit_logs`
flag set to `True`.
In the remainder of this section these API changes are described in detail.
.. _agent_user_api:
Agent User Handling
^^^^^^^^^^^^^^^^^^^
To be able to create, delete and list agent users, and retrieve agent users'
credentials this spec proposes the following extensions to monasca-api:
::
POST /v2.0/agent_users
This request creates agent users. The request body must follow the following
JSON schema:
::
{
type: "map",
required: "true",
"mapping": {
"password": { "type": "str", "required": false },
"submit_metrics": { "type": "boolean", "required": false },
"submit_logs": { "type": "boolean", "required": false }
}
}
The parameters behave as follows:
* `password`: if this is set, the provided password will be used as the agent
user's password. Otherwise, a randomly generated 40 character
string will be used.
* `submit_metrics`: if this is set to `False`, this agent user will not be
allowed to submit metrics to monasca-api. This parameter is
optional. If it is omitted, the default is `True` and the
agent user will be allowed to send metrics.
* `submit_logs`: if this is set to `False`, this agent user will not be
allowed to send logs to monasca-log-api. This parameter is
optional. If it is omitted, the default is `True` and the
agent user will be allowed to send logs to monasca-log-api.
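A minimal sketch of how a request handler might normalise these parameters,
assuming the defaulting and password-generation behaviour described above
(function and variable names are hypothetical, not actual Monasca code):

```python
import secrets
import string


def normalise_agent_user_request(body):
    """Apply the defaults described in the spec to a parsed request body:
    both flags default to True, and a missing password is replaced by a
    randomly generated 40-character string."""
    password = body.get("password")
    if password is None:
        alphabet = string.ascii_letters + string.digits
        password = "".join(secrets.choice(alphabet) for _ in range(40))
    return {
        "password": password,
        "submit_metrics": body.get("submit_metrics", True),
        "submit_logs": body.get("submit_logs", True),
    }
```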
This request will
* Return `200` with a JSON formatted database record for the agent user in the
body if the request is successful. In addition to the database record the
response will contain a `password` field with the newly created user's
password. This password will *not* be recorded in the database.
* Return `500` with an error message in the body if user creation fails.
* Return `401` for unauthenticated users or users without any roles.
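For illustration, a successful creation request might yield a response body
along these lines (every value here is hypothetical):

```json
{
    "id": "9f1c2d3e-4b5a-6c7d-8e9f-0a1b2c3d4e5f",
    "creator_id": "11111111-2222-3333-4444-555555555555",
    "project_id": "66666666-7777-8888-9999-aaaaaaaaaaaa",
    "submit_metrics": true,
    "submit_logs": true,
    "password": "hypothetical-40-character-random-string0"
}
```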
::
GET /v2.0/agent_users
This request lists agent users. It will
* Return `200 OK` and a JSON formatted list of agent user database records for
the requester's project. For requesters with the `admin` role, agent users
from all projects will be listed. The list may be empty.
* Return `401` for unauthenticated users or users without any roles.
::
GET /v2.0/agent_users/<id>
This request retrieves the database record for an agent user. It will
* Return `200` with a JSON formatted record for the agent user
identified by the Keystone user ID `<id>` in the body, provided there is a
record for `<id>` in the database and the requester is allowed to access it.
The requester is allowed to access a record if the requester's project
matches `project_id` or the requester fulfills the `oslo.policy` `is_admin`
criterion.
* Return `404` if there is no record for the user or the requester is not
allowed to access it.
* Return `401` for unauthenticated users or users without any roles.
Extended Policy Check for Metric Submission
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The request
::
POST /v2.0/metrics
will need to be extended to not only check for the roles configured in
`security/agent_authorized_roles` but it will also need to verify if
1. The requesting user is in the designated agent user domain
2. If so, look the user up in the `agent_users` table and submit the metrics
being sent for the agent user's recorded project.
3. If the lookup fails for some reason (e.g. for a manually created user in the
agent user domain that does not have a record in the database), the request
is treated as unauthorized.
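The three steps above can be sketched as a small authorization helper. All
names, including the domain constant and the in-memory mapping standing in for
the `agent_users` table, are illustrative assumptions, not actual Monasca code:

```python
AGENT_USER_DOMAIN = "monasca_agent_domain"  # hypothetical domain name


def authorize_metric_submission(user_id, user_domain, agent_users):
    """Return the project to submit metrics for, or None if the request
    must be rejected (or handled by the regular role-based checks)."""
    # Step 1: only users from the designated agent user domain qualify.
    if user_domain != AGENT_USER_DOMAIN:
        return None
    # Step 2: look the user up and use the recorded project.
    record = agent_users.get(user_id)
    # Step 3: missing record (e.g. manually created user) or a cleared
    # submit_metrics flag means the request is unauthorized.
    if record is None or not record["submit_metrics"]:
        return None
    return record["project_id"]
```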
Extended Policy Check for Log Submission
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following request
::
POST /v3.0/logs
...will need to be extended to not only check for the roles configured in
`roles_middleware/delegate_roles` but it will also need to verify if
1. The requesting user is in the designated agent user domain
2. If so, look the user up in the `agent_users` table and submit the logs being
sent for the agent user's recorded project.
3. If the lookup fails for some reason (e.g. for a manually created user in the
agent user domain that does not have a record in the database), the request
is treated as unauthorized.
Client impact
-------------
`python-monascaclient` will need to be extended to implement the API operations
described in the :ref:`agent_user_api` section above.
`monasca-agent` will not need to be modified.
Configuration changes
---------------------
monasca-api will need configuration settings to provide it with admin settings
for the agent user domain.
monasca-log-api will need configuration settings to provide it with access to
the monasca-api database.
Both monasca-api and monasca-log-api will need configuration that allows an
operator to disable this feature as desired. Since it allows users to create
monitoring users, it should be disabled by default. Optionally, it should be
possible to configure a Keystone role required to create agent users. If this
role is left unspecified, any user should be able to create agent users.
Security impact
---------------
This feature introduces a mechanism that allows regular users to create
unprivileged special purpose user accounts in a dedicated Keystone domain. This
might not be desirable for every operator, hence the provisions for disabling
or restricting it in the previous section.
The special purpose users in question do not have Keystone roles and are
therefore unusable for most purposes. There is a precedent for this in Heat and
Magnum. The former creates such users as targets for a Keystone trust delegated
by a Heat stack's creating user. The latter creates such users as of the fix
for CVE-2016-7404[0] and uses them in the exact same manner as the one proposed
by this spec.
Other end user impact
---------------------
Users will be able to create dedicated monitoring/logging users in a
self-service manner, which is an improvement over the current situation (they
need to ask somebody with admin privileges to create users with the special
`monasca-agent` role).
Performance Impact
------------------
This change may add a small performance penalty due to adding a database lookup
for every metrics submission attempt. This shouldn't be too bad, though,
because every metrics submission attempt already requires Keystone token
validation, beside which one database lookup is pretty small. Nonetheless, this
feature can be disabled until a fix is found if it turns out to cause major
performance issues.
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
==============
Assignee(s)
-----------
Primary assignee:
jgr-launchpad
Work Items
----------
1. Add common database code to monasca-common.
2. Modify monasca-api to use database code from monasca-common and remove its
own database code.
3. Implement agent user CRUD operations in monasca-api
4. Extend monasca-api policy enforcement code to authorize agent users
5. Extend monasca-log-api policy enforcement code to authorize agent users
Dependencies
============
Testing
=======
Documentation Impact
====================
The self service creation of agent users will need to be documented.
The various newly introduced configuration settings will need to be documented.
Creating a domain for agent users will need to be documented.
References
==========
[0] https://github.com/openstack/magnum/commit/2d4e617a529ea12ab5330f12631f44172a623a14
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Queens
- Introduced
@@ -1,378 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Example Spec - The title of your feature request
================================================
Include the URL of your story:
https://storyboard.openstack.org
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand. The title and this first paragraph
should be used as the subject line and body of the commit message
respectively.
Some notes about the monasca-spec and stories process:
* Not all stories need a spec. For more information see
https://docs.openstack.org/monasca-api/latest/contributor/index.html
* The aim of this document is first to define the problem we need to solve,
and second agree the overall approach to solve that problem.
* This is not intended to be extensive documentation for a new feature.
For example, there is no need to specify the exact configuration changes,
nor the exact details of any DB model changes. But you should still define
that such changes are required, and be clear on how that will affect
upgrades.
* You should aim to get your spec approved before writing your code.
While you are free to write prototypes and code before getting your spec
approved, it's possible that the outcome of the spec review process leads
you towards a fundamentally different solution than you first envisaged.
* But, API changes are held to a much higher level of scrutiny.
As soon as an API change merges, we must assume it could be in production
somewhere, and as such, we then need to support that API change forever.
To avoid getting that wrong, we do want lots of details about API changes
upfront.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox and see the generated
HTML file in doc/build/html/specs/<path_of_your_file>
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which can not be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
* If your specification proposes any changes to the Monasca REST API such
as changing parameters which can be returned or accepted, or even
the semantics of what happens when a client calls into the API, then
you should add the APIImpact flag to the commit message. Specifications with
the APIImpact flag can be found with the following query:
https://review.opendev.org/#/q/status:open+project:openstack/monasca-specs+message:apiimpact,n,z
Problem description
===================
A detailed description of the problem. What problem is this feature request
addressing?
Use Cases
---------
What use cases does this address? What impact on actors does this change have?
Ensure you are clear about the actors in each use case: Developer, End User,
Deployer etc.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
At this point, if you would like to just get feedback on if the problem and
proposed change fit in monasca, you can stop here and post this for review to
get preliminary feedback. If so please say: Posting to get preliminary feedback
on the scope of this spec.
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change.
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* URL should not include underscores, and use hyphens instead.
* Parameters which can be passed via the url
* JSON schema definition for the request body data if allowed
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* JSON schema definition for the response body data if any
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted.
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
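To make the guidance above concrete, here is a purely hypothetical sketch:
the resource and field names are invented for illustration and are not taken
from any real Monasca API. It shows a restrictive schema with snake_case
fields, explicitly required keys, and undeclared parameters rejected, plus a
tiny stand-in validator.

```python
import re

# Hypothetical request-body schema illustrating the guidelines above:
# snake_case field names, required fields marked explicitly, and
# additionalProperties disabled so undeclared parameters are rejected.
ALARM_DEFINITION_SCHEMA = {
    "type": "object",
    "properties": {
        "alarm_name": {"type": "string", "maxLength": 255},
        "match_expression": {"type": "string"},
        "severity": {"enum": ["LOW", "MEDIUM", "HIGH", "CRITICAL"]},
    },
    "required": ["alarm_name", "match_expression"],
    "additionalProperties": False,
}


def validate(body, schema=ALARM_DEFINITION_SCHEMA):
    """Tiny stand-in for a real JSON-schema validator."""
    if not isinstance(body, dict):
        return False
    if set(body) - set(schema["properties"]):
        return False  # an undeclared parameter was supplied
    if not all(key in body for key in schema["required"]):
        return False  # a required field is missing
    # enforce snake_case field names
    return all(re.fullmatch(r"[a-z][a-z0-9_]*", key) for key in body)
```

In a real service a library validator would be used instead of the
hand-rolled check; the point is only how restrictive the schema should be.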
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-monascaclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* Scheduler filters get called once per host for every instance being created,
so any latency they introduce is linear with the size of the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other deployer impact
---------------------
Discuss things that will affect how you deploy and configure Monasca
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on Monasca.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a feature where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in monasca, or in
other projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Monasca (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss the important scenarios needed to test here, as well as
specific edge cases we should be ensuring work correctly. For each
scenario please specify if this requires specialized hardware, a full
openstack environment, or can be simulated inside the Monasca tree.
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
Which audiences are affected most by this change, and which documentation
titles on docs.openstack.org should be updated because of this change? Don't
repeat details discussed above, but reference them here in the context of
documentation for multiple audiences. For example, the Operations Guide targets
cloud operators, and the End User Guide would need to be updated if the change
offers a new feature available through the CLI or dashboard. If a config option
changes or is deprecated, note here that the documentation needs to be updated
to reflect this specification's change.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to
History
=======
Optional section intended to be used each time the spec is updated to describe
new design, API or any database schema updated. Useful to let reader understand
what's happened along the time.
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Rocky
- Introduced


@@ -1,204 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Introduce Database Migrations
=============================
https://storyboard.openstack.org/#!/story/2001654
Most OpenStack services use `SQLAlchemy Migrate
<https://sqlalchemy-migrate.readthedocs.io/en/latest/>`_ or `Alembic
<http://www.alembic.io/>`_ to provide migration scripts that (a) update the
service's database schema if it changes and (b) migrate data as required.
Currently, Monasca does not follow this pattern. Instead, it provides simple
SQL scripts for PostgreSQL or MySQL databases that are only useful for
initializing a database schema. For any subsequent changes required by updated
code operators are on their own, i.e. they need to apply these manually to keep
Monasca working. This spec proposes introducing (a) database migrations and (b)
a separate tool that analyses an existing database's schema state and adds the
necessary metadata to update it through migrations in the future.
Problem description
===================
In a nutshell, the database schema used for holding various alarm,
notification, and metric metadata is not updateable. This is due to the source
of truth for the schema being a simple SQL script that is only useful for
schema initialization on a blank database.
The group most affected by this is operators, who are faced with several
unattractive choices when a code change requires a database schema change:
#. They can drop their whole Monasca database and recreate it from scratch with
the updated SQL script. That way they will lose all of their alarm
definitions along with notification settings, their complete alarm history,
and some metrics metadata.
#. They can manually update the database and migrate data (in the case of
columns/tables getting renamed). This is error prone and carries a risk of
data loss.
#. They can decide to live without that code change. If the change breaks on an
outdated database they may even have to create and maintain code patches for
that.
Besides operators, the lack of database migration infrastructure is a
significant obstacle to Monasca developers as well: it turns any data model
change into a breaking change because of the schema update it requires. Thus
the current state of affairs prevents new features that require database
changes, or at the very least renders them extremely unpopular.
Use Cases
---------
#. Operators will be able to update Monasca without risking breakage due to
database changes.
#. Developers can modify the data model without breaking existing
installations.
Proposed change
===============
The proposal aims to improve the current state of affairs by adding Alembic
based command line tooling for the following operations:
#. A legacy database migration command line tool. This command line tool will
detect which revision of the database schema SQL script was used based on
the tables and columns currently in the database. It will then initialize
the database's migration metadata to allow for regular database migration
from that point forward.
#. A database migration command line tool with multiple base revisions to
cover the following scenarios:
#. An uninitialized database
#. A schema state resulting from any of the existing SQL script
revisions.
Both (1) and (2) may be implemented as part of the same tool.
A database schema based on third party SQL scripts (which may be found in
various configuration management tools used for deploying Monasca) is not
supported.
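The legacy-detection heuristic in item 1 could work roughly as follows. This
is an illustrative sketch only: the revision identifiers and column
fingerprints below are invented, and the real tool would derive them from the
historical revisions of the SQL script, then stamp the matching Alembic
revision (e.g. via ``alembic stamp <revision>``) so that ordinary migrations
can run from there.

```python
# Illustrative sketch only: the revision ids and "fingerprints" below
# are made up.  The real tool would map each historical mon_mysql.sql
# revision to the set of (table, column) pairs it creates, inspect the
# live database, and stamp the matching Alembic revision.
LEGACY_FINGERPRINTS = {
    # hypothetical alembic revision id -> columns that identify it
    "00597b5c8325": {("alarm_definition", "deleted_at")},
    "8781a256f0c1": {("alarm_definition", "deleted_at"),
                     ("notification_method", "period")},
}


def detect_legacy_revision(existing_columns):
    """Return the newest revision whose fingerprint is fully present."""
    best = None
    for revision, fingerprint in LEGACY_FINGERPRINTS.items():
        if fingerprint <= existing_columns:  # subset of live schema
            if best is None or len(fingerprint) > len(LEGACY_FINGERPRINTS[best]):
                best = revision
    return best
```

If no fingerprint matches, the schema is either blank (initialize via the
normal migration chain) or a third-party variant, which this spec declares
unsupported.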
Alternatives
------------
This change is long overdue and best practice throughout OpenStack. That being
said, it would be possible to considerably reduce its scope by omitting the
heuristics for detecting an existing legacy database schema. That way operators
would be forced to drop the database and recreate it using migrations. If
possible we should not impose this on operators.
It would also be possible to use plain SQLAlchemy Migrate rather than Alembic
to implement migrations, but OpenStack appears to have standardized on Alembic
by now (see https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy for a
discussion of why that happened).
Data model impact
-----------------
There will be no impact on the data model itself. There will only be changes to
how the data model is synchronized to the database's run time schema.
REST API impact
---------------
There is no REST API impact.
Security impact
---------------
The only security-related aspect of this change is providing the database
migration command line tools with access credentials for the database. The
accepted best practice is to run them as root on the machine where the API
service (in our case `monasca-api`) is also running and to retrieve these
credentials from the API service's configuration.
Other end user impact
---------------------
N/A
Performance Impact
------------------
N/A
Other deployer impact
---------------------
Configuration management that uses the legacy SQL scripts must be changed to
use the new migration tools. Among other things, the Monasca devstack plugin
must be modified to use them. Users updating to a Monasca version with this
feature may need to run a special migration to get versioning metadata added to
their database schema.
Developer impact
----------------
Any developer making data model changes after this feature is implemented will
have to write a migration for updating the database schema accordingly.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<jgr-launchpad>
Tasks
-----
* Add command line tool(s) for regular migration (i.e. migration of a versioned
database) and transition of an unversioned database to a versioned one.
* Create migration chains from:
* Empty database
* All revisions of the schema file in the repository.
Dependencies
============
N/A
Testing
=======
Any meaningful testing can only take place in a full Monasca deployment with an
actual database to test against. In particular, testing should focus on all
revisions of the legacy SQL script in
`devstack/files/schema/mon_mysql.sql`. This testing can take place at an early
stage during Devstack setup as follows:
#. Each SQL script revision is checked out from the git repository, applied to
the database and converted to a "migratable" database by running the
conversion operation on the database. After each iteration, the database is
dropped to prepare for the next revision.
#. As a final step, the initial migration for a blank database is performed and
Devstack proceeds normally.
Since PostgreSQL is deprecated in OpenStack, it will be sufficient to implement
testing with the MySQL flavoured SQL script.
Documentation Impact
====================
This feature needs to be documented in operator/deployer documentation to
ensure operators use migrations rather than legacy SQL scripts.
References
==========
* Etherpad from Rocky PTG where this was discussed:
https://etherpad.openstack.org/p/monasca-ptg-rocky


@@ -1,380 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Metric Retention Policy
================================================
Story board: https://storyboard.openstack.org/#!/story/2001576
Metric retention policy must be in place to avoid disk being filled up.
Retention period should be adjustable for different types of metrics, e.g.,
monitoring vs. metering or aggregate vs. raw meters.
Problem description
===================
In a cloud of 200 compute hosts, there can be up to one billion metrics
generated daily. The time series database disks will be filled up in months
if not weeks if old metric data is not purged regularly. The retention
requirement can be different based on the type of the metrics and the usage
model. For example, the customer may want to preserve the metering metrics
for months or years, while having no interest in monitoring metrics more than
a week old. Some customers' billing systems may pull the metering data on a
daily basis, which could eliminate the need for longer retention of metering
metrics. Monasca needs to support a metric retention policy that can be
tailored
per metric or metric type.
Use Cases
---------
- Use case 1:
Installer sets a default TTL value in configuration. At installation time,
a default TTL (time to live) value is specified in the configuration for
monasca-api and is used as the default retention policy.
The default retention policy is applied if a metric doesn't match another
retention policy. This default retention is generally a shorter period of
time and may be used for the common monitoring metrics.
- Use case 2:
Installer loads a set of metric to TTL mappings (retention policies), which
is stored in the Monasca API data store (mysql database). These mappings may
be provided in a JSON structure. This is intended to be useful for bootstrap
or restore from backup.
- Use case 3:
Monasca API receives new metric (regardless of source). Metric is mapped to
a dictionary to determine TTL (or default value used if no match). TTL is
passed with metric value on to the Persister for storage in TSDB.
Note that the use cases for monasca-agent posting metrics are unchanged; only
the processing at the Monasca API and the API-to-Persister message change.
The Monasca Persister then stores the metric and specifies the TTL to the
TSDB configured (i.e. InfluxDB or Cassandra).
- Use case 4:
Operator uses Monasca CLI to specify (or modify) a TTL value for a metric
match string. Match string could be specific, such as "cpu.user_perc" or a
wildcard string, such as "image.*". CLI posts request to Monasca TTL API,
where it is validated then stored in database.
- Use case 5:
Operator uses Monasca CLI to GET the dictionary of metric:TTL mappings.
This can be used to export the list for backup or verification.
- Use case 6 (optional):
Operator uses Monasca UI to accomplish use case 4 or 5
Proposed change
===============
1. Monasca API:
Add a new API for managing the mapping of metrics to TTL values.
See the `REST API impact`_ section below.
Add storage for the mapping in the MySQL database. This is to allow
all instances of Monasca API to share the configuration dynamically.
*TBD* - Create a schema for storing the metric:TTL dictionary.
A policy precedence needs to be defined: more than one retention policy may
apply to a given meter, so a clear precedence must determine which TTL value
to apply.
*TBD* - a few concrete examples.
2. Monasca Persister:
Persister reads the default retention policy setting from the service
configuration file in the influxDbConfiguration and cassandraDbConfiguration
section.
::
# Retention policy may be left blank to indicate default policy.
# Unit is days
retentionPolicy: 7
It may be convenient to allow specifying a unit with the policy value. For
example "7d" for 7 days or "3m" for 3 months.
It will retrieve the TTL property in the incoming metric message. If not set,
the TTL value from the default retention policy will be used instead.
It is expected with the addition of this Metrics Retention feature that the
default retentionPolicy value would be set to a low value, and that metrics
that are to be kept longer would be called out specifically through the
Retention API and appropriate values set.
The TTL is set in the parameterized database query when persisting the metrics
into the time series database, including both Cassandra and InfluxDB.
*TBD* - exact call structures for each TSDB.
Note that this does mean that each storage back end would need to have code
customized in the persister to support passing the TTL value. This may also
be possible for ElasticSearch, though that is not part of this initial spec.
3. Monasca CLI (optional):
A new CLI feature could be created to simplify getting the list of TTL
mappings or posting an update to a TTL mapping. This would need Keystone
authentication, and would use the existing 'monasca' CLI authentication.
4. Monasca UI (optional):
A new feature could be added to the Monasca UI that would allow a Cloud
Operator to view and edit the list of TTL mappings.
Bonus points for allowing the UI to have sample metrics and simulate the
mapping on the page.
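The metric-to-TTL matching described in item 1 could be sketched as below.
The policy field names follow the JSON structure shown later in this spec,
but the function itself and the precedence rule (most matching dimensions
first, then the longest match string) are assumptions of this sketch, since
the spec deliberately leaves precedence TBD.

```python
from fnmatch import fnmatch


def resolve_ttl(metric_name, metric_dims, policies, default_ttl):
    """Pick the retention policy TTL for a metric, or the default.

    Precedence here is illustrative (the spec leaves it TBD): prefer the
    policy with the most matching dimensions, then the longest (most
    specific) match string.
    """
    candidates = []
    for policy in policies:
        # match strings may be exact ("cpu.user_perc") or globs ("image.*")
        if not fnmatch(metric_name, policy["match"]):
            continue
        # every policy dimension must be present with the same value
        if all(metric_dims.get(k) == v
               for k, v in policy["dimensions"].items()):
            candidates.append(policy)
    if not candidates:
        return default_ttl
    best = max(candidates,
               key=lambda p: (len(p["dimensions"]), len(p["match"])))
    return best["retentionPolicy"]
```

For example, ``resolve_ttl("cpu.user_perc", {"host": "node1"}, policies,
"7d")`` would prefer a host-specific policy over a bare ``cpu.*`` wildcard.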
Alternatives
------------
The original proposal was to have monasca-transform, monasca-ceilometer, and
monasca-agent each keep a TTL default setting and have a property to allow
specifying a TTL per metric. This would have also required a change to the
Monasca API to add an optional TTL to the metric POST listener.
While this would have been simpler to implement in the Monasca API, the
additional work to change all the services that originate metrics made this
alternative not as appealing.
Another alternative would be to implement a new Monasca Retention API as
outlined, but not include dimensions for the metrics. This would allow a much
simpler data structure of key:value pairs, with the key being the unique match
string and the value the standardized TTL value. While the implementation
would be much simpler, it is felt that the additional power of having match
dimensions would be beneficial.
Data model impact
-----------------
The Monasca API data model will need to be extended to store the metric to
TTL mappings (retention policies).
*TBD* - schema
REST API impact
---------------
A new metric retention API endpoint would be added to Monasca API.
URL: /v2.0/metrics-retention
Method: GET
A GET request will return the current list of metric retention policies.
Examples::
Empty list (default retention used for all metrics)
[]
Simple list
[
{
match: "cpu.user_perc",
dimensions: {"host": "node1"},
retentionPolicy: "7d"
},
{
match: "cpu.stolen_perc",
dimensions: {},
retentionPolicy: "7d"
}
]
Method: PUT
The PUT method is used for all create/update/delete methods on the metric
retention policy list. Any list of metrics PUT to the API will be merged
with the existing list. Single entries will also be supported.
JSON structure for PUT/GET to Retention API::
{
match: "cpu.user_perc",
dimensions: {},
retentionPolicy: "7d"
}
TBD: do we support adding a character for time unit? Will it be confusing to
PUT "1d" and GET back "86400"?
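One way to resolve the unit question is to accept both bare seconds and
suffixed values on PUT and normalize everything to seconds internally. The
suffix set below is an assumption of this sketch, not something the spec has
settled:

```python
import re

# Hypothetical unit suffixes; the spec leaves the exact set TBD.
# "m" is taken as a 30-day month here, purely as an assumption.
_UNITS = {"s": 1, "h": 3600, "d": 86400, "w": 604800, "m": 2592000}


def ttl_to_seconds(value):
    """Normalize a TTL such as '86400', '7d', or '3m' to whole seconds."""
    match = re.fullmatch(r"(\d+)([shdwm]?)", str(value))
    if not match:
        raise ValueError("bad TTL value: %r" % (value,))
    count, unit = match.groups()
    return int(count) * _UNITS.get(unit, 1)  # bare numbers are seconds
```

Whether GET echoes back the stored suffixed form or the normalized seconds
would still need to be decided to avoid the "PUT '1d', GET '86400'" surprise.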
Special case: to delete a retention policy, give a retentionPolicy value of
None and it will be removed from the list.
::
{
match: "cpu.user_time",
dimensions: {},
retentionPolicy: None
}
Additionally, a list of retention policy items may be PUT, with the format
matching the response from GET. Each item in the list will be compared to
existing metric policies (match string and dimensions). If an exact match is
found, the retentionPolicy value will be replaced. Otherwise, the new item is
added to the list.
(This is intended to make bootstrap or restore from backup easier)
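The merge semantics described above (replace on an exact match of match
string plus dimensions, delete on a ``retentionPolicy`` of None, append
otherwise) could be sketched as follows; the function name is hypothetical:

```python
def merge_policies(existing, updates):
    """Merge PUT entries into the stored retention policy list (sketch).

    A policy is identified by its (match, dimensions) pair.  An update
    with retentionPolicy None deletes the matching entry, an update for
    an existing pair replaces its TTL, and anything else is appended.
    """
    def key(policy):
        # dimensions are order-insensitive, so sort them for the key
        return (policy["match"], tuple(sorted(policy["dimensions"].items())))

    merged = {key(p): dict(p) for p in existing}
    for update in updates:
        if update["retentionPolicy"] is None:
            merged.pop(key(update), None)  # delete if present
        else:
            merged[key(update)] = dict(update)  # replace or append
    return list(merged.values())
```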
The communication from Monasca API to Persister would have the TTL value
added as a parameter.
NOTE: Care should be taken in defining the REST API path, as Gnocchi uses
"/metric", which may be confusing to some users.
Security impact
---------------
None. Security measures already in place for the Monasca API would remain.
Other end user impact
---------------------
None for most users, as access to the Monasca Metrics API is restricted to
Cloud Operators.
A Cloud Operator would have a new responsibility to configure retention for
the metrics.
A future discussion could be had about whether a tenant user should be granted
the ability to set their own retention policies, but generally the Cloud
Operator is responsible for ensuring there are sufficient resources to meet the
retention requirements.
Performance Impact
------------------
This feature has no direct impact on write throughput. However, it allows the
user to set a shorter retention period for monitoring metrics, which can
potentially improve read performance for queries that involve searching,
grouping, and filtering, since fewer metrics remain in the table. It also
reduces the storage footprint.
Depending on how complex the metric retention match string gets, there could be
some performance impact. *TBD*
Other deployer impact
---------------------
No change in deployment of the services.
The service could be deployed with simply a default TTL value in configuration.
If the operator desires, at install time a complete list of TTL values could
be loaded as part of the installation process once the Monasca API is running.
For planning, the user now has the option to specify a shorter retention period
for monitoring metrics or even per metric or metric category. The disk size
should be calculated based upon the retention policy accordingly.
Developer impact
----------------
Monasca agent plugin developers should be aware of the new TTL property
now available to them. It is an optional property, only required when a TTL
other than the default retention policy in the Persister service is needed.
Implementation
==============
Assignee(s)
-----------
Contributors are welcome!
Primary assignee:
Other contributors:
Work Items
----------
* Add new metrics-retention API endpoint to Monasca API
* Add code to match all metrics incoming to the Monasca API against the
appropriate retention policy (or the default)
* Add TTL in seconds as a parameter to the request from Monasca API to
Persister
* Create a CLI
* PUT of updated retention policy(ies)
* GET of the list
* Determine correct precedence for retention policies that overlap, and clearly
document with examples.
Dependencies
============
Dependent on retention policy support in the TSDB storage. Both Cassandra
and InfluxDB support specifying a retention policy.
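For Cassandra, a per-metric TTL maps naturally onto the ``USING TTL`` clause
of the write statement; InfluxDB instead selects a named retention policy at
write time, so its code path would differ. A sketch for the Cassandra side,
where the table and column names are illustrative rather than Monasca's
actual schema:

```python
def build_insert_cql(ttl_seconds):
    """Build a parameterized Cassandra INSERT applying a per-metric TTL.

    Table and column names are illustrative, not Monasca's real schema.
    A TTL of 0 (or None) falls back to the table's default retention.
    """
    cql = ("INSERT INTO measurements (metric_id, time_stamp, value) "
           "VALUES (?, ?, ?)")
    if ttl_seconds:
        cql += " USING TTL %d" % ttl_seconds
    return cql
```

The Persister would prepare one statement per distinct TTL value rather than
rebuilding the string per metric, but that optimization is out of scope here.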
Testing
=======
Unit testing
Unit tests in the Monasca API should be written for the scenarios of defining
a TTL for each metric.
* Metric received, no matching retention policy found, default policy used
* Metric received, one exact matching metric retention policy found, matching
policy parameter passed to Persister call
* Metric received, more than one matching policy, correct precedence determined
and appropriate policy parameter passed to Persister call
Monasca Persister will also need unit tests to verify the passed-in value is
passed on to the TSDB retention method call, and to handle the case of a missing
TTL parameter. We may decide that the TTL parameter is optional, in which case
a global default TTL value should be used.
Functional testing
Functional testing is more involved, as one way to test would be to trigger some
metrics, have them stored in the TSDB, then wait for the TTL value to expire and
verify the metric is removed correctly. More thought and definition is needed
to define what is appropriate and possible (i.e. to not retest features of the
TSDB).
Documentation Impact
====================
Operators who use Monasca would need documentation to describe the format of
the new API and recommended usage. This may include guidelines on how to set
a low default and to choose which metrics should be kept longer. The default
TTL value as set in a config file should also be documented.
References
==========
* Links
* Stein PTG discussion - https://etherpad.openstack.org/p/monasca-ptg-stein
* Glossary
* TTL - short for Time to Live, a setting in TSDB that defines when an item
(in this case a metric) will be cleaned out.
* TSDB - Time Series Database, such as InfluxDB or Cassandra.
History
=======
Optional section intended to be used each time the spec is updated to describe
a new design, API, or database schema update. Useful to let the reader
understand what has happened over time.
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Queens
- Introduced

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================
Monasca services in Docker
==========================
The main task of this story is to move building and publishing Docker images
of all Monasca components from https://github.com/monasca/monasca-docker
to their respective OpenStack repositories.
Problem description
===================
At the moment Docker images for Monasca components are built from Dockerfiles
provided in `monasca-docker repository <https://github.com/monasca/monasca-docker>`_.
The process of building and publishing the images has to be explicitly
triggered and is described
`here <https://github.com/monasca/monasca-docker/blob/master/CONTRIBUTING.md#releasing-changes>`_.
Due to the separation of image definitions and the actual upstream code they
can easily diverge.
Use Cases
---------
Have supported and standardized option to deploy Monasca services with Docker.
From the development point of view very little will change.
A developer writes code and submits it to Gerrit; an automated process then
tests whether the Docker image builds properly.
The images can be used in integration tests running in OpenStack CI.
The End User should see no change.
The Deployer gains one more supported way of deploying Monasca.
Deployer has one more supported way of deploying Monasca.
Proposed change
===============
Every component repository would have an additional `docker` folder
containing all the files necessary for building a Docker image.
Example files:
* `Dockerfile`
* `README`
* Starting script that will template needed configuration files, wait for other
needed components to be available (like Keystone) and start service
on Docker image run.
* Templates for configuration files, bootstrap scripts for configuring database
schemas etc..
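The "wait for other needed components" behaviour of such a starting script could look roughly like this; the function name and timings are illustrative, not part of any existing image:

```python
# Sketch of the dependency-wait logic a container startup script might
# use before launching the service (names and timings are illustrative).
import socket
import time

def wait_for(host, port, timeout=60.0, interval=1.0):
    """Block until a TCP connection to host:port succeeds, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Dependency (e.g. Keystone) not up yet; retry after a pause.
            time.sleep(interval)
    return False
```

A script would call e.g. `wait_for('keystone', 5000)` (hypothetical host/port) before starting the service process.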
The process should start after a commit lands in the git repository (i.e.
after all tests pass).
The goal is a fully automatic process without the need for human intervention.
Every image should have an easy and standardized way of reporting which
version of the Monasca component it was built from (definitely needed for
`master` and `latest` tags).
Every Dockerfile will live in a separate repository, so there is a need
to standardize these files so that they look mostly the same.
There are linters for Dockerfiles; one needs to be chosen and added to
the testing process (this will help standardize the Dockerfiles;
see the Kolla jobs for reference).
Images will be built with Python 3.5. If any Monasca service does not support
Python 3, we will wait for support before creating a container for it.
Images will be published to https://hub.docker.com/u/monasca/
Each image should have README file in RST format with information about running
specific component.
We will support the current stable version of Docker and the two previous
versions. All stable branches will record the specific Docker versions they
are built with, and these versions will be frozen at the moment the branch is
cut from master. The master branch will be built with the three newest Docker
versions.
All images will be based on a custom `monasca-base` image that uses the
official Python image on Alpine Linux to conserve disk space. We will borrow
ideas for minimal configuration from other base images (e.g. Ubuntu-based
https://github.com/phusion/baseimage-docker or Alpine-based
https://github.com/blacklabelops/baseimages).
All images should be made as small as possible without compromising
functionality. Possible ways of making them smaller can be found in articles
such as:
https://blog.codeship.com/alpine-based-docker-images-make-difference-real-world-apps/
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
All images will standardize on Jinja2 for templating configuration files,
using the following tool:
* https://github.com/Aisbergg/python-templer - Python 3 command-line tool for
templating in shell scripts, leveraging the Jinja2 library
* Allows use of environment variables
* YAML data sources supported
* LGPL license, but we are not linking against it, only using the CLI
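As a rough illustration of environment-driven config templating: Templer itself renders Jinja2 templates, but this stand-in uses only the Python standard library's `string.Template` (so the placeholder syntax differs from Jinja2), and the config keys shown are examples only:

```python
# Stand-in for Jinja2/Templer-style config templating using only the
# stdlib. Real images would render Jinja2 templates; the idea is the
# same: fill placeholders from environment variables at container start.
import os
from string import Template

def render_config(template_text, env=None):
    """Substitute ${VAR} placeholders from the given mapping or os.environ."""
    return Template(template_text).safe_substitute(env or os.environ)

# Example config template (keys are illustrative, not a real Monasca conf).
CONF_TEMPLATE = """\
[keystone_authtoken]
auth_url = ${KEYSTONE_AUTH_URL}
username = ${MONASCA_USERNAME}
"""
```

The startup script would render such templates into the service's config directory before launching the process.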
Alternatives
------------
As an example of the CI configuration, we could use Kolla process
of creating Docker images:
https://github.com/openstack/kolla
https://hub.docker.com/u/kolla/
It is, however, much too complex for what we need, so it should be used only
as a reference.
For templating:
* https://github.com/wrouesnel/p2cli - command-line tool for rendering pongo2
(Django-like syntax, similar to Jinja2) templates (Go binary, 4x smaller
footprint if the image does not already have Python)
* Allows use of environment variables
* YAML and JSON data sources supported
* https://github.com/jwilder/dockerize - utility to simplify running
applications in Docker containers (Go binary)
* Generates application configuration files at container startup from
templates (Go text/template) and container environment variables
* Tails multiple log files to stdout and/or stderr
* Waits for other services to be available (TCP, HTTP(S), unix socket)
before starting the main process
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
All services will be enclosed in Docker containers.
Other end user impact
---------------------
None
Performance Impact
------------------
Because of the additional Docker layer, services could run slower.
Other deployer impact
---------------------
Deployment should be easier, as the deployer would only need to create
configuration; services with all their dependencies will be enclosed in
Docker images.
Developer impact
----------------
Features adding new dependencies or changing the way Monasca components
are installed would have to be reflected in Docker image definitions.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
dobroslaw-zybort <dobroslaw.zybort@ts.fujitsu.com>
Work Items
----------
What is needed:
* Building test images for every change in Gerrit.
* Automated process of building images on every git commit and pushing them
to the Docker Hub.
* Automated process of building images on every tag and pushing them
to the Docker Hub.
* Automated process of building images on OpenStack releases and pushing
them to the Docker Hub (could be the second step).
* Adding labels with source code base information of the image (e.g.
build-date, commit-sha1).
* Running integration tests on each code commit to verify the Docker binary.
Single node deployment setup with Docker Compose.
* Tempest tests.
* Smoke tests.
Five types of tags on Docker Hub:
* On every new release/tag.
* `latest` pointing to the last tag.
* `master` pointing to the last git commit - useful for testing the latest
changes/fixes before release.
* Tags for the last commit on stable OpenStack branches (`queens`, `rocky`,
etc.) - useful for testing the latest changes/fixes before release.
Optional:
* Build all images with Python 3.6 and test stability.
* Run multi node deployment integration tests with Kubernetes Helm.
Things to remember (to make maintenance easier):
* Use a proper init process.
* Clean up after the installation process.
* Use the same name for the starting script in all repositories.
* Use the same config templating mechanism in all repositories.
* All components should be downloaded from the Git repository, with the
possibility to change the branch used for building the Docker image
(useful for testing changes proposed in Gerrit).
Non-objectives of this story (could be next steps):
* Automated process of building images on specific past commits and pushing
them to the Docker Hub.
* Migrate Devstack to Docker images of Monasca.
Dependencies
============
What should be moved to where:
* https://github.com/monasca/monasca-docker/tree/master/monasca-agent-base =>
https://github.com/openstack/monasca-agent
* https://github.com/monasca/monasca-docker/tree/master/monasca-agent-collector =>
https://github.com/openstack/monasca-agent
* https://github.com/monasca/monasca-docker/tree/master/monasca-agent-forwarder =>
https://github.com/openstack/monasca-agent
* https://github.com/monasca/monasca-docker/tree/master/monasca-api-python =>
https://github.com/openstack/monasca-api
* https://github.com/monasca/monasca-docker/tree/master/monasca-client =>
https://github.com/openstack/python-monascaclient
* https://github.com/monasca/monasca-docker/tree/master/monasca-log-api =>
https://github.com/openstack/monasca-log-api
* https://github.com/monasca-docker/monasca-notification =>
https://github.com/openstack/monasca-notification
* https://github.com/monasca/monasca-docker/tree/master/monasca-persister-python =>
https://github.com/openstack/monasca-persister
* https://github.com/monasca/monasca-docker/tree/master/monasca-python =>
https://github.com/openstack/monasca-common
* https://github.com/monasca/monasca-docker/tree/master/monasca-thresh =>
https://github.com/openstack/monasca-thresh
* will need also https://github.com/monasca/monasca-docker/tree/master/storm =>
https://github.com/openstack/monasca-thresh
* https://github.com/monasca/monasca-docker/tree/master/tempest-tests =>
https://github.com/openstack/monasca-tempest-plugin
Not in scope:
* https://github.com/monasca/monasca-docker/monasca-alarms - This image
contains a container that can be used to create Monasca Notifications
and Alarm Definitions.
* https://github.com/monasca/monasca-docker/monasca-log-agent - Logstash
output monasca_log_api plugin.
* https://github.com/monasca/monasca-docker/monasca-log-metrics - Contains
Logstash configuration to transform logs into metrics based on log's severity.
* https://github.com/monasca/monasca-docker/monasca-log-persister - Contains
Logstash configuration to save logs inside **log-db** (i.e. ElasticSearch).
* https://github.com/monasca/monasca-docker/monasca-log-transformer - Image
contains Logstash configuration to detect log's severity.
Testing
=======
At the moment, CI for https://github.com/monasca/monasca-docker runs two
types of tests on each change:
* tempest-tests https://github.com/monasca/monasca-docker/tree/master/tempest-tests
* smoke-tests https://github.com/monasca/monasca-docker/tree/master/smoke-tests
* https://github.com/monasca/smoke-test
Both currently run against the metrics part of the Monasca stack.
Tests should verify that the tested service starts and behaves correctly
in the built and running Docker container. We need to decide whether we want
to run all tests on the whole Monasca stack for every change, or whether we
should create smaller tests for each separate service.
Documentation Impact
====================
Basic installation instructions should be added here [1] and published
to https://docs.openstack.org
[1] https://opendev.org/openstack/monasca-api/src/branch/master/doc/source/install
High-level documentation is currently stored in:
https://github.com/monasca/monasca-docker/tree/master/docs
Separate images also have `README.md` files that give lower-level information.
Documentation should contain all the information necessary to configure and
run all services.
References
==========
* https://github.com/monasca/monasca-docker
* http://eavesdrop.openstack.org/meetings/monasca/2018/monasca.2018-01-10-15.00.log.html
History
=======
Optional section intended to be used each time the spec is updated to describe
a new design, API, or database schema update. Useful to let the reader
understand what has happened over time.
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Rocky
- Introduced

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================================
Python Persister Performance Metrics Collection
===============================================
Story board: https://storyboard.openstack.org/#!/story/2001576
This defines the list of measurements for the metric upsert processing time and
throughput in Python Persister and provides a rest api to retrieve those
measurements.
Problem description
===================
The Java Persister, built on top of the DropWizard framework, provides a list
of internal performance related metrics, e.g., the total number of metric
messages that have been processed since the last service start up, the average,
min and max metric processing time etc. The Python Persister, on the other
hand, lacks such instrumentation. This presents a challenge to the operator
who wants to monitor, triage, and tune the Persister performance and to the
Persister performance testing tool that was introduced in Queens release. The
Cassandra Python Persister plugin depends on this feature for performance
tuning.
Use Cases
---------
- Use case 1: The developer instruments the defined performance metrics.
There are two approaches towards the internal performance metrics. The first
approach is in memory metering similar to the Java implementation. The data
collection starts when the Persister service starts up and is not persisted
through service restart. The second approach is to treat such measurement
exactly the same as the "normal" metrics Monasca collects. The advantage is
that such metrics will be persisted and rest apis are already available to
retrieve the metrics.
The list of Persister metrics includes:
1. Total number of metric upsert requests received and completed on a given
Persister service instance in a given period of time
2. Total number of metric upsert requests received and completed by a
process or thread in a given period of time (P2)
3. The average, min, and max metric request processing time in a given period
of time for a given Persister service instance and process/thread.
- Use case 2: Retrieve persister performance metrics through the REST API.
The performance metrics can be retrieved using the list-metrics API of the
Monasca API service.
Proposed change
===============
1. Monasca Persister
- Python Persister integrates with monasca-statsd to send count and timer
metrics
- Persister conf to add properties for statsd
2. Persister performance benchmark tool adds support to retrieve the metrics
from Monasca rest api source in addition to the DropWizard admin api.
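The count and timer updates would travel as statsd messages. The monasca-statsd client API is not reproduced here; this sketch only illustrates the underlying statsd wire format for the counter and timer metrics the Persister would emit, and the metric names are examples, not the real ones:

```python
# Illustrative statsd wire format for the counter and timer updates the
# Persister would send via monasca-statsd. Metric names are examples.

def statsd_counter(name, value=1):
    """Counter increment: '<name>:<value>|c'."""
    return f'{name}:{value}|c'

def statsd_timer(name, millis):
    """Timer sample in milliseconds: '<name>:<millis>|ms'."""
    return f'{name}:{millis}|ms'
```

Each processed batch would then emit one counter update (metrics completed) and one timer sample (upsert processing time).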
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
TBD. The statsd calls to update counters and timers are expected to have a
small performance impact.
Other deployer impact
---------------------
No change in deployment of the services.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Contributors are welcome!
Primary assignee:
Other contributors:
Work Items
----------
1. Monasca Persister
- Python Persister integrates with monasca-statsd to send count and timer
metrics
- Persister conf to add properties for statsd
2. Persister performance benchmark tool adds support to retrieve the metrics
from Monasca rest api source in addition to the DropWizard admin api.
Dependencies
============
None
Testing
=======
- Set up a system, use JQuery to automate storing many metrics, check results.
The tools to accomplish this testing can be found in monasca-persister/perf/
Documentation Impact
====================
The existing README.md in monasca-persister/perf describes the needed steps.
Some minor changes may need to be made to stay current.
References
==========
https://github.com/openstack/monasca-persister/tree/master/perf
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Rocky
- Introduced
* - Stein
- Revised with testing notes

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Monasca Events Publishing - from Ceilometer
================================================
https://storyboard.openstack.org/#!/story/2003023
Monasca Events API [1] was developed to store Openstack Notification data in Elasticsearch. There is
still a need to collect and publish Openstack Notifications to Monasca Events API. Monasca
Ceilometer project[2] currently publishes ceilometer samples[3] to Monasca API. We are proposing to
extend Monasca Ceilometer project and add a new events publisher which will publish Openstack
notifications (or events)[3] to Monasca Events API.
UPDATE: This spec is superseded by the ../../stein/approved/monasca-events-listener.rst spec,
but is kept here for reference.
Problem description
===================
All Openstack services generate many notifications or events, which contain
large amounts of operational and state information about the service and its
resources. This notification data is not currently available in Monasca.
Ceilometer data processing pipeline[3] provides an extensible mechanism of publishing samples and
events using a custom publisher. Ceilometer samples represent a quantity that can be measured (for
e.g. the size of a volume) and events represent an occurrence of an event and do not have any
associated quantity (e.g. volume was created).
Monasca Ceilometer project currently provides a samples publisher. Monasca Ceilometer samples
publisher converts Ceilometer samples to Monasca Metrics format which are then published to Monasca
API. There is no corresponding events publisher to Monasca yet.
By adding an event publisher to Monasca Ceilometer project we could take advantage of Ceilometer's
event publishing mechanism to publish events to Monasca Events API.
Ceilometer consists of different data collection components - namely Polling Agent, Notification
Agent and Compute Agent. (Please see [7] for System Architecture diagram) Ceilometer also has a data
storage and retrieval component, which would be Monasca in our case.
The samples publisher and the new proposed events publisher run within
Ceilometer's Notification Agent component and are part of the Notification
Agent's data processing pipeline. Monasca Ceilometer presumes that the
Ceilometer Notification Agent component is installed and deployed on all the
control nodes (the Polling Agent and Compute Agent are not needed). The
Ceilometer Notification Agent is Highly Available (HA) and can run on multiple
nodes. We will have to evaluate its performance in terms of scaling for
events, but we have not run into performance/scale problems with the current
samples publisher.
Use Cases
---------
#. Openstack notification data would be stored in Elasticsearch
via the Monasca Events API
Example sequence from Nova notification to Monasca API
#. Nova completes the creation of a VM
#. Nova generates a Notification message to oslo.messaging
#. Ceilometer Notification agent receives the Notification message
#. Ceilometer translates the Notification to a Monasca API format according to the configuration
#. Ceilometer Event Publisher publishes formatted Notification to Monasca Events API
#. Monasca Events API receives and validates formatted Notification
#. Monasca Events stores event Notification in configured Elasticsearch instance
Proposed change
===============
#. Openstack Notifications consist of envelope and payload fields
Example Openstack Notification data format:
.. code-block:: javascript
{
"_context_auth_token": "42630b3ea13242fcad20e0a92d0207f1",
"_context_domain": null,
"_context_instance_lock_checked": false,
"_context_is_admin": true,
"_context_project_domain": null,
"_context_project_id": "a4f77",
"_context_project_name": "admin",
"_context_quota_class": null,
"_context_read_deleted": "no",
"_context_read_only": false,
"_context_remote_address": "192.168.245.4",
"_context_request_id": "req-5948338c-f223-4fd8-9249-8769f7a3e460",
"_context_resource_uuid": null,
"_context_roles": [
"monasca-user",
"admin",
"KeystoneAdmin"
],
"_context_service_catalog": [
{
"endpoints": [
{
"adminURL": "http://192.168.245.8:8776/v2/a4f77",
"internalURL": "http://192.168.245.8:8776/v2/a4f77",
"publicURL": "http://192.168.245.9:8776/v2/a4f77",
"region": "region1"
}
],
"name": "cinderv2",
"type": "volumev2"
},
{
"endpoints": [
{
"adminURL": "http://192.168.245.8:8776/v1/a4f77",
"internalURL": "http://192.168.245.8:8776/v1/a4f77",
"publicURL": "http://192.168.245.9:8776/v1/a4f77",
"region": "region1"
}
],
"name": "cinder",
"type": "volume"
}
],
"_context_show_deleted": false,
"_context_tenant": "a4f77",
"_context_timestamp": "2015-09-18T20:54:23.468522",
"_context_user": "be396488c7034811a200a3cb1d103a28",
"_context_user_domain": null,
"_context_user_id": "be396488c7034811a200a3cb1d103a28",
"_context_user_identity": "be396488c7034811a200a3cb1d103a28 a4f77 - - -",
"_context_user_name": "admin",
"_unique_id": "ff9699d587bf4283a3c367ab88be1541",
"event_type": "compute.instance.create.start",
"message_id": "c6149ba1-34b3-4367-b8c2-b1d6f073742d",
"payload": {
"access_ip_v4": null,
"access_ip_v6": null,
"architecture": null,
"availability_zone": null,
"cell_name": "",
"created_at": "2015-09-18 20:55:25+00:00",
"deleted_at": "",
"disk_gb": 1,
"display_name": "testeee",
"ephemeral_gb": 0,
"host": null,
"hostname": "testeee",
"image_meta": {
"base_image_ref": "df0c8",
"container_format": "bare",
"disk_format": "qcow2",
"min_disk": "1",
"min_ram": "0"
},
"image_name": "glanceaaa3",
"image_ref_url": "http://192.168.245.5:9292/images/df0c8",
"instance_flavor_id": "1",
"instance_id": "abd2ef5c-0381-434a-8efc-d7b39b28a2b6",
"instance_type": "m1.tiny",
"instance_type_id": 4,
"kernel_id": "",
"launched_at": "",
"memory_mb": 512,
"metadata": {},
"node": null,
"os_type": null,
"progress": "",
"ramdisk_id": "",
"reservation_id": "r-1ghilddw",
"root_gb": 1,
"state": "building",
"state_description": "",
"tenant_id": "a4f77",
"terminated_at": "",
"user_id": "be396488c7034811a200a3cb1d103a28",
"vcpus": 1
},
"priority": "INFO",
"publisher_id": "compute.ccp-compute0001-mgmt",
"timestamp": "2015-09-18 20:55:37.639023"
}
#. All the fields with the prefix '_context_' are envelope fields; the
other interesting fields are
#. 'message_id' - notification identifier
#. 'payload' - contains most of the relevant and useful information in JSON format
#. 'priority' - notification priority
#. 'publisher_id' - notification publisher
#. 'timestamp' - notification timestamp
#. Ceilometer's event publishing framework converts Openstack notifications to
the events format[4]. The framework also has the ability to extract only
some of the 'payload' data into a flat set of key-value pairs called
'traits', and to publish the normalized 'event' with the 'traits' extracted
from the payload using a custom publisher.
Extraction of certain fields into traits from the payload is
driven by a configuration file, but by default the 'publisher_id',
'request_id', 'tenant_id', 'user_id' and 'project_id'
fields are always extracted and added as 'traits'.
The event can also have an optional field called 'raw' containing the
original notification, provided the 'store_raw' option is set in
ceilometer.conf.
Questions/TODO:
* Q1: Does the store_raw option apply only to events, or to all notifications
processed by Ceilometer?
* We will have to find out whether it has any adverse impact on the samples
publisher. In the case of samples, the Monasca samples publisher definitely
does not submit the raw payload, so it must be getting dropped.
Example Ceilometer Event data format:
.. code-block:: javascript
{
"event_type": "compute.instance.create.start",
"message_id": "c6149ba1-34b3-4367-b8c2-b1d6f073742d",
"generated": "2015-09-18 20:55:37.639023",
"traits": {
"publisher_id": "compute.ccp-compute0001-mgmt",
"request_id": "req-5948338c-f223-4fd8-9249-8769f7a3e460",
"tenant_id": "a4f77",
"project_id": "a4f77",
"user_id": "be396488c7034811a200a3cb1d103a28"
},
"raw": { "_context_auth_token": "42630b3ea13242fcad20e0a92d0207f1",
"_context_domain": null,
...
...
"event_type": "compute.instance.create.start",
"message_id": "c6149ba1-34b3-4367-b8c2-b1d6f073742d",
"payload": {
"access_ip_v4": null,
"access_ip_v6": null,
"architecture": null,
"availability_zone": null,
"cell_name": "",
"created_at": "2015-09-18 20:55:25+00:00",
"deleted_at": "",
"disk_gb": 1,
"display_name": "testeee",
"ephemeral_gb": 0,
"host": null,
"hostname": "testeee",
"image_meta": {
"base_image_ref": "df0c8",
"container_format": "bare",
"disk_format": "qcow2",
"min_disk": "1",
"min_ram": "0"
},
"image_name": "glanceaaa3",
"image_ref_url": "http://192.168.245.5:9292/images/df0c8",
"instance_flavor_id": "1",
"instance_id": "abd2ef5c-0381-434a-8efc-d7b39b28a2b6",
"instance_type": "m1.tiny",
"instance_type_id": 4,
"kernel_id": "",
"launched_at": "",
"memory_mb": 512,
"metadata": {},
"node": null,
"os_type": null,
"progress": "",
"ramdisk_id": "",
"reservation_id": "r-1ghilddw",
"root_gb": 1,
"state": "building",
"state_description": "",
"tenant_id": "a4f77",
"terminated_at": "",
"user_id": "be396488c7034811a200a3cb1d103a28",
"vcpus": 1
}
}
}
#. Key-Value pairs that can be extracted from 'payload' in form of traits
can be defined in events definitions file.
For example the following events definitions yaml specifies that for
all events which have a prefix of "compute.instance.*" then
add "user_id", "instance_id", and "instance_type_id" as traits,
after extracting values from "payload.user_id", "payload.instance_id",
and "payload.instance_type_id" respectively.
.. code-block:: yaml
---
- event_type: compute.instance.*
traits: &instance_traits
user_id:
fields: payload.user_id
instance_id:
fields: payload.instance_id
instance_type_id:
type: int
fields: payload.instance_type_id
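The way a `fields` path such as `payload.user_id` resolves against a notification can be sketched as follows; the helper names are hypothetical, and only the `int` coercion from the example above is shown:

```python
# Hypothetical sketch of how event-definition 'fields' paths resolve:
# a dotted path like "payload.user_id" walks nested dicts in the
# notification. Only the 'int' trait type from the YAML above is shown.

def extract_trait(notification, field_path, trait_type='text'):
    value = notification
    for key in field_path.split('.'):
        if not isinstance(value, dict):
            return None
        value = value.get(key)
        if value is None:
            return None
    return int(value) if trait_type == 'int' else value

def build_traits(notification, trait_defs):
    """trait_defs mirrors the YAML: {name: {'fields': path, 'type': ...}}."""
    traits = {}
    for name, spec in trait_defs.items():
        val = extract_trait(notification, spec['fields'],
                            spec.get('type', 'text'))
        if val is not None:
            traits[name] = val
    return traits
```

Applied to a `compute.instance.*` notification, the definitions above would yield `user_id`, `instance_id`, and an integer `instance_type_id` trait.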
For now, we propose not to use this feature of defining traits for each
event, since we have the ability to store the entire payload via the
Monasca Events API.
We can certainly look at enabling this feature in the future if we run into
trouble storing the entire JSON "payload" in Elasticsearch. It is certainly a
nifty way to trim the amount of data that will be stored.
#. The proposed new custom Monasca Ceilometer event publisher will run within
Ceilometer's Notification Agent component. It will leverage Ceilometer's data
processing pipeline[3], which converts notifications to Ceilometer's event
format. At the end of its processing, the Monasca Ceilometer event publisher
will convert the Ceilometer event data into the Monasca event format[6] and
publish the Monasca event to the Monasca Events API.
#. The Monasca Events API allows a field called 'payload' which can be in an
arbitrary nested JSON format. The Monasca Ceilometer event publisher will
extract the JSON field called 'payload' from 'raw' (JSON path notation:
'raw.payload') and publish the payload from the original notification to the
Monasca Events API.
Example Monasca Event Format:
.. code-block:: javascript
events: [
{
dimensions": {
"service": "compute.ccp-compute0001-mgmt",
"topic": "notification.sample",
"hostname": "nova-compute:compute
},
event: {
"event_type": "compute.instance.create.start",
"payload": {
"access_ip_v4": null,
"access_ip_v6": null,
"architecture": null,
"availability_zone": null,
"cell_name": "",
"created_at": "2015-09-18 20:55:25+00:00",
"deleted_at": "",
"disk_gb": 1,
"display_name": "testeee",
"ephemeral_gb": 0,
"host": null,
"hostname": "testeee",
"image_meta": {
"base_image_ref": "df0c8",
"container_format": "bare",
"disk_format": "qcow2",
"min_disk": "1",
"min_ram": "0"
},
"image_name": "glanceaaa3",
"image_ref_url": "http://192.168.245.5:9292/images/df0c8",
"instance_flavor_id": "1",
"instance_id": "abd2ef5c-0381-434a-8efc-d7b39b28a2b6",
"instance_type": "m1.tiny",
"instance_type_id": 4,
"kernel_id": "",
"launched_at": "",
"memory_mb": 512,
"metadata": {},
"node": null,
"os_type": null,
"progress": "",
"ramdisk_id": "",
"reservation_id": "r-1ghilddw",
"root_gb": 1,
"state": "building",
"state_description": "",
"tenant_id": "a4f77",
"terminated_at": "",
"user_id": "be396488c7034811a200a3cb1d103a28",
"vcpus": 1
}
},
publisher_id: "compute.ccp-compute0001-mgmt",
priority: "INFO"
}
]
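A minimal sketch of the proposed Ceilometer-to-Monasca conversion, using only field names that appear in the examples above (the function name itself is hypothetical):

```python
# Hypothetical sketch of converting a Ceilometer event into the Monasca
# event format proposed in this spec. Field names come from the examples
# in the spec; dimensions use traits, the payload comes from 'raw'.

def to_monasca_event(ceilometer_event):
    traits = ceilometer_event.get('traits', {})
    raw = ceilometer_event.get('raw', {})
    return {
        'dimensions': {
            'publisher_id': traits.get('publisher_id'),
            'user_id': traits.get('user_id'),
            # project_id falls back to tenant_id, as described in the spec.
            'project_id': traits.get('project_id') or traits.get('tenant_id'),
        },
        'event': {
            'event_type': ceilometer_event['event_type'],
            'payload': raw.get('payload', {}),
        },
    }
```

The batched output of this conversion is what the publisher would POST to the Monasca Events API.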
#. If no traits are specified in the event-definitions YAML configuration file
for an event, Ceilometer's data processing pipeline will add the following
default traits:
* service: (All notifications should have this) notifications publisher
* tenant_id
* request_id
* project_id
* user_id
Note: "service" is not the service that produced the event as in say "compute", "glance",
"cinder" but rather notification RabbitMQ publisher that produced the event
e.g. "compute.ccp-compute0001-mgmt" so is not very useful.
#. Ceilometer event data is converted to Monasca event data format before being published to Monasca
Event API. Following fields in Monasca Event data are not available in current Ceilometer Event
data format:
* "service"
* "dimensions.topic"
* "event.priority"
We propose removing these fields from the Monasca Event format (this will be
done in a separate spec/implementation) for the following reasons:
"service": Openstack notifications do not currently specify the service that
generated the notification in a consistent way. It might be possible to
create an external mapping file which maps event names to services, but it is
hard to maintain such a mapping over time.
"dimensions.topic": This field is not available in the source Openstack
notification.
"event.priority": This field is not currently available in the Ceilometer
Event format. It is available in the source Openstack notification. Note: if
we think this field could be useful, we can propose adding it to the
Ceilometer Event format.
#. Following new fields will be added to Monasca Event data as dimensions:
* "dimensions.publisher_id": Identifier for the publisher that generated the event. Maps to
"traits.publisher_id" in Ceilometer event data.
* "dimensions.user_id": Identifier for user that generated the event. Maps to "traits.user_id" in
Ceilometer event data.
* "dimensions.project_id": Identifier of the project that generated the event. Maps to
"traits.project_id" or "traits.tenant_id" in Ceilometer event data.
#. The hostname is available in the event payload, but its location might
differ from event to event. We can use Ceilometer's event-definitions config
to always add a trait called "hostname" to all events; e.g. all
compute.instance.* events will have a trait called "hostname" which grabs
data from "payload.hostname":
.. code-block:: yaml
---
- event_type: compute.instance.*
traits: &instance_traits
hostname:
fields: payload.hostname
#. The proposed new Monasca Ceilometer event publisher will have the ability to submit event
data in a batch and at a configurable frequency (similar to current samples publisher). The
event data will be published if the items in the current batch reach their maximum size
(config setting) or if certain time interval has elapsed since the last publish
(config setting). This will make sure that the batch does not get huge at the same time
there is no significant delay in publishing of the events to Monasca Events API.
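The batch-and-flush behaviour described above can be sketched as follows; the class and parameter names are illustrative, not the real publisher implementation:

```python
# Illustrative sketch of the batching behaviour: flush when the batch
# reaches its maximum size or when the configured interval has elapsed
# since the last publish. Names and defaults are hypothetical.
import time

class EventBatcher:
    def __init__(self, publish_fn, max_batch=100, max_interval=30.0):
        self._publish = publish_fn      # e.g. POST to the Monasca Events API
        self._max_batch = max_batch
        self._max_interval = max_interval
        self._batch = []
        self._last_flush = time.monotonic()

    def add(self, event):
        self._batch.append(event)
        if (len(self._batch) >= self._max_batch or
                time.monotonic() - self._last_flush >= self._max_interval):
            self.flush()

    def flush(self):
        if self._batch:
            self._publish(self._batch)
        self._batch = []
        self._last_flush = time.monotonic()
```

This keeps batches bounded in size while guaranteeing events are published within `max_interval` seconds of arrival.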
#. The Monasca Ceilometer event publisher will use service credentials from the ceilometer
configuration file (in the "[monasca]" section) to obtain a keystone token.
Example "[monasca]" section in the ceilometer config file:

.. code-block:: text

   [monasca]
   service_auth_url = https://localhost:5000/v3
   service_password = secretpassword
   service_username = ceilometer
   service_interface = internalURL
   service_auth_type = password
   # service_project_id may also be used
   service_project_name = admin
   service_domain_name = Default
   service_region_name = RegionOne
The publisher will then make a POST request to the Monasca Events /v1.0/events REST API [8] to
publish events to the Monasca Events API. The URL for the instance of the Monasca Events API will
be configured in the Ceilometer 'events-pipeline.yaml' file. This has the added advantage of
allowing different events to be published differently (see the Ceilometer pipeline documentation [10]).
#. The "tenant_id" and "user_id" that a notification relates to are available in the "payload"
section of the notification, which is generated by the service itself.
No additional "Openstack-operator-agent"-like component or functionality is required to fetch
that data from the service and publish it to the Monasca Events API on behalf of the original
tenant.
The Ceilometer publishing pipeline simply extracts these "tenant_id" and "user_id" fields from
the "payload" and makes them available as "tenant_id" and "user_id" traits, which are then
mapped to the "dimensions.project_id" and "dimensions.user_id" fields in the Monasca event
format.
In other words, the original "tenant_id" and "user_id" values are available in the payload of
the notification and will make their way to "dimensions.project_id" and "dimensions.user_id"
in the Monasca event.
Questions/TODO:
* Q: Do we need to do anything special to handle multi-tenancy in the monasca-events API, as is
done for metrics [9]? Would the original user_id and tenant_id values in the
"dimensions.user_id" and "dimensions.project_id" fields serve this purpose?
* Q: In the Ceilometer V2 API (which has been deprecated and removed), when querying data, the
"admin" role could access data for all tenants, whereas a user with the "ceilometer-admin" role
could access data only for a particular tenant. Can we implement something like this for the
monasca-events API when querying for data?
#. Monasca Ceilometer event publisher will also retry submitting a batch, in case Monasca
Events API is temporarily unavailable or down. The retry frequency, the number of retries
and the number of items that can be in the retry batch will also be set via configuration.
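The retry behaviour could look roughly like the following. This is a sketch under assumed names
(`RetryingPoster`, `max_retries`, `max_retry_items` are all hypothetical): the real publisher would
re-attempt on its publishing schedule rather than inline, but the bounded retry batch is the idea
described above:

```python
class RetryingPoster:
    """Keep a bounded retry batch of events that failed to publish.

    Hypothetical sketch: `post_func` raises on failure (e.g. the Monasca
    Events API is temporarily down); failed events are retained, up to
    `max_retry_items`, and re-attempted up to `max_retries` times on
    subsequent publish calls.
    """

    def __init__(self, post_func, max_retries=3, max_retry_items=1000):
        self._post = post_func
        self._max_retries = max_retries
        self._max_retry_items = max_retry_items
        self._retry_batch = []   # list of (event, attempts) pairs

    def publish(self, events):
        pending = [(e, 0) for e in events] + self._retry_batch
        self._retry_batch = []
        still_failing = []
        for event, attempts in pending:
            try:
                self._post([event])
            except Exception:
                if attempts + 1 < self._max_retries:
                    still_failing.append((event, attempts + 1))
                # else: drop the event after exhausting its retries
        # Cap the retry batch so it cannot grow without bound.
        self._retry_batch = still_failing[:self._max_retry_items]
```

Dropping events after the retry budget is exhausted is one possible policy; spooling them to disk
would be an alternative the spec leaves open.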
Alternative Solutions
---------------------
#. Standalone monasca event agent which reads Openstack notifications published to RabbitMQ
(on "notification" topic) and publishes them to Monasca Events API.
Pro:
* No dependency on Telemetry project.
* May be simple to develop if it leverages the oslo.messaging functionality.
* Ceilometer has *deprecated* the events functionality in the Stein release. [13]
Con:
* Another agent to convince users to install on their systems.
* Reinventing work already done in the Ceilometer agent. The OpenStack community already uses Ceilometer and contributes updates when something fails.
This alternate solution will be detailed in a separate spec, as it is likely
the long term solution Monasca will need.
#. OpenStack Panko [5] is an event storage service and REST API for Ceilometer.
Pro:
* An 'official' subproject within Telemetry, so there is some community recognition.
Con:
* Its primary storage is in a relational database which has problems with scale.
* It is not maintained actively and not ready for production. [11]
* It will be deprecated eventually. [12]
Data model impact
-----------------
None
REST API impact
---------------
#. We propose to tweak the Monasca event data format by removing and adding the following
fields, as described in the "Proposed change" section above.
Remove fields (JSON path notation): "service", "dimensions.topic",
"dimensions.hostname" and "event.priority"
Add fields (JSON path notation): "dimensions.publisher_id", "dimensions.user_id" and
"dimensions.project_id"
This change will have an impact on the Monasca Events API.
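As an illustration, a Ceilometer event could be mapped to the tweaked format roughly as follows.
The helper and the envelope shape here are illustrative only (the actual Monasca event schema is
defined in [6]); what the sketch shows is the field mapping this section proposes:

```python
def ceilometer_event_to_monasca(event):
    """Map a Ceilometer event dict to the proposed Monasca event format.

    Hypothetical helper: adds dimensions.publisher_id / user_id /
    project_id from the event traits; the removed fields ("service",
    "dimensions.topic", "dimensions.hostname", "event.priority") are
    simply never emitted.
    """
    traits = {t["name"]: t["value"] for t in event.get("traits", [])}
    return {
        "event": {
            "event_type": event["event_type"],
            "timestamp": event["generated"],
            "payload": event.get("raw", {}),
        },
        "dimensions": {
            "publisher_id": traits.get("publisher_id"),
            "user_id": traits.get("user_id"),
            # Ceilometer may expose either project_id or tenant_id
            "project_id": traits.get("project_id") or traits.get("tenant_id"),
        },
    }
```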
Security impact
---------------
The proposed Monasca Ceilometer events publisher will collect and publish
OpenStack event (notification) data to the Monasca API. OpenStack notification
data does not contain sensitive data such as tokens.
Notifications do contain 'user_id' and 'project_id' fields but do not
contain any Personally Identifiable Information (PII) for the user or
the project.
Other end user impact
---------------------
None.
Performance Impact
------------------
#. The number of notifications (events) generated by the different services will depend on the
capacity of the cloud and the number of resources being created by users.
For example, a large number of compute VMs being created or destroyed could lead to a surge in
the number of notifications (events) that have to be published. Optimal values for configuration
options such as event batch size and event batch interval will be documented, to reduce any
adverse effect on performance.
#. The Monasca Ceilometer publisher runs within the Ceilometer Notification Agent and is invoked
as the last step in its data processing pipeline. It is an additional component that will have to
be deployed on all controller nodes. We will have to evaluate the performance impact on the
Ceilometer Notification Agent when publishing events to the Monasca Events API.
Other deployer impact
---------------------
#. The proposed new Monasca Ceilometer events publisher will introduce
a few new configuration options:
* events api endpoint
* events batch interval
* events batch size
* events retry interval
#. The Monasca Ceilometer events publisher will have to be added to the
"[ceilometer.event.publisher]" section of Ceilometer's entry_points.txt.
For example:

.. code-block:: ini

   [ceilometer.event.publisher]
   monasca = ceilometer.publisher.monclient:MonascaEventsPublisher
#. As part of developing the new Monasca Ceilometer events publisher, the devstack plugin will be
updated to apply the above configuration changes.
Developer impact
----------------
#. The proposed change to Monasca Event Format will have an impact on existing Monasca Event API,
since Monasca Event Format will have to be tweaked. (See REST API Impact section above)
Implementation
==============
Assignee(s)
-----------
Primary assignee:
joadavis, aagate
Other contributors:
<launchpad-id or None>
Work Items
----------
#. Implement new Monasca Ceilometer Events publisher.
#. Implement monasca-ceilometer devstack plugin changes to deploy
new events publisher.
#. Implement unit tests for Events publisher.
#. Implement change to Monasca Event format in Monasca Events API.
Dependencies
============
#. Monasca Events API 1.0: https://storyboard.openstack.org/#!/story/2001654
#. Monasca Ceilometer project: https://github.com/openstack/monasca-ceilometer
#. Ceilometer Data processing and pipelines:
https://docs.openstack.org/ceilometer/pike/admin/telemetry-data-pipelines.html
Testing
=======
#. New unit tests for the Monasca Ceilometer event publisher will be added, covering publishing
with various config options (events batch size, events batch interval) and retry handling when
the Monasca Events API is not available.
#. Adding tempest tests for the Monasca Ceilometer events publisher could be looked at as a
separate effort.
Note that the current Monasca Ceilometer samples publisher does not have tempest tests either,
so tempest tests for both the events and samples publishers could be considered in the future.
Documentation Impact
====================
#. New Monasca Events Publisher config options will be documented
* events api endpoint
* events batch interval
* events batch size
* events retry interval
#. Recommended values for each of the config options will also be documented based on the size of
the cloud and resources for Cloud Operators.
References
==========
[1] Monasca Events API 1.0: https://storyboard.openstack.org/#!/story/2001654
[2] Monasca Ceilometer project: https://github.com/openstack/monasca-ceilometer
[3] Ceilometer Data processing and pipelines:
https://docs.openstack.org/ceilometer/pike/admin/telemetry-data-pipelines.html
[4] Ceilometer Events: https://docs.openstack.org/ceilometer/latest/admin/telemetry-events.html
[5] Openstack Panko: https://github.com/openstack/panko
[6] Monasca Event Format:
https://github.com/openstack/monasca-events-api/blob/master/doc/api-samples/v1/req_simple_event.json
[7] Ceilometer System Architecture Diagram:
https://docs.openstack.org/ceilometer/ocata/architecture.html
[8] Monasca Events POST v1.0 API:
https://github.com/openstack/monasca-events-api/blob/master/api-ref/source/events.inc
[9] Cross-Tenant Metric Submission:
https://github.com/openstack/monasca-agent/blob/master/docs/MonascaMetrics.md#cross-tenant-metric-submission
[10] Ceilometer pipeline yaml documentation:
https://docs.openstack.org/ceilometer/latest/admin/telemetry-data-pipelines.html
[11] No future for Panko or Aodh:
https://julien.danjou.info/lessons-from-openstack-telemetry-deflation/
[12] Ceilometer Events deprecated means Panko also deprecated:
http://eavesdrop.openstack.org/irclogs/%23openstack-telemetry/%23openstack-telemetry.2018-10-10.log.html
[13] Ceilometer Events marked as deprecated in Stein:
https://review.opendev.org/#/c/603336/
@@ -1,378 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Example Spec - The title of your feature request
================================================
Include the URL of your story:
https://storyboard.openstack.org
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand. The title and this first paragraph
should be used as the subject line and body of the commit message
respectively.
Some notes about the monasca-spec and stories process:
* Not all stories need a spec. For more information see
https://docs.openstack.org/monasca-api/latest/contributor/index.html
* The aim of this document is first to define the problem we need to solve,
and second agree the overall approach to solve that problem.
* This is not intended to be extensive documentation for a new feature.
For example, there is no need to specify the exact configuration changes,
nor the exact details of any DB model changes. But you should still define
that such changes are required, and be clear on how that will affect
upgrades.
* You should aim to get your spec approved before writing your code.
While you are free to write prototypes and code before getting your spec
approved, it's possible that the outcome of the spec review process leads
you towards a fundamentally different solution than you first envisaged.
* But, API changes are held to a much higher level of scrutiny.
As soon as an API change merges, we must assume it could be in production
somewhere, and as such, we then need to support that API change forever.
To avoid getting that wrong, we do want lots of details about API changes
upfront.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox and see the generated
HTML file in doc/build/html/specs/<path_of_your_file>
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which can not be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
* If your specification proposes any changes to the Monasca REST API such
as changing parameters which can be returned or accepted, or even
the semantics of what happens when a client calls into the API, then
you should add the APIImpact flag to the commit message. Specifications with
the APIImpact flag can be found with the following query:
https://review.opendev.org/#/q/status:open+project:openstack/monasca-specs+message:apiimpact,n,z
Problem description
===================
A detailed description of the problem. What problem is this feature request
addressing?
Use Cases
---------
What use cases does this address? What impact on actors does this change have?
Ensure you are clear about the actors in each use case: Developer, End User,
Deployer etc.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
At this point, if you would like to just get feedback on if the problem and
proposed change fit in monasca, you can stop here and post this for review to
get preliminary feedback. If so please say: Posting to get preliminary feedback
on the scope of this spec.
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* URL should not include underscores, and use hyphens instead.
* Parameters which can be passed via the url
* JSON schema definition for the request body data if allowed
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* JSON schema definition for the response body data if any
* Field names should use snake_case style, not CamelCase or MixedCase
style.
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted.
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-monascaclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* Scheduler filters get called once per host for every instance being created,
so any latency they introduce is linear with the size of the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in a database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other deployer impact
---------------------
Discuss things that will affect how you deploy and configure Monasca
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on Monasca.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a feature where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in monasca, or in
other projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Monasca (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss the important scenarios needed to test here, as well as
specific edge cases we should be ensuring work correctly. For each
scenario please specify if this requires specialized hardware, a full
openstack environment, or can be simulated inside the Monasca tree.
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
Which audiences are affected most by this change, and which documentation
titles on docs.openstack.org should be updated because of this change? Don't
repeat details discussed above, but reference them here in the context of
documentation for multiple audiences. For example, the Operations Guide targets
cloud operators, and the End User Guide would need to be updated if the change
offers a new feature available through the CLI or dashboard. If a config option
changes or is deprecated, note here that the documentation needs to be updated
to reflect this specification's change.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to
History
=======
Optional section intended to be used each time the spec is updated to describe
new design, API or any database schema updated. Useful to let reader understand
what's happened along the time.
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Stein
- Introduced
@@ -1,251 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================
Merge Monasca APIs
==================
https://storyboard.openstack.org/#!/story/2003881
Monasca currently has 3 APIs; one for metrics, one for logs and one for
events. Historically, this has created additional overhead for developers
upgrading or adding new features to the APIs, for system administrators
configuring the APIs, and for packagers generating Monasca distributions.
The purpose of this spec is to consider whether the Monasca APIs should
be merged into a single, unified API.
Problem description
===================
The Monasca APIs are written in Python and share a lot of functionality,
some, but not all of which has been factored out to the Monasca common
library. Here lies the first problem:
1. There exists significant technical debt.
There is a significant amount of common functionality that has not been
factored out to Monasca common. For example, the Monasca API repo contains
code for validating metrics and so does the Monasca common library. The
same is true for query authorisation, where validation code lives in
both repos. This technical debt could of course be addressed, but how
did it come about?
2. Updating the APIs to support common features incurs significant
developer and reviewer overhead compared to updating a single API.
The APIs are all written slightly differently. Adding support for
Oslo Policy or uWSGI to one API is not the same as adding it to another
API. Furthermore, changes frequently have to be made across multiple
repos. Whilst Zuul is well suited to coordinating this task, it requires
careful synchronisation of versions by developers and meticulous
attention from reviewers. If one modifies code which is common to all
three APIs, they must systematically verify that in each case the changes
work as intended. Of course, automated testing can provide relief here, but
it doesn't make the burden disappear entirely.
3. Historically it has been difficult to maintain a standard experience
across multiple APIs.
At times, APIs have diverged when in an ideal world they should not have.
For example, when adding support for a common feature, work has traditionally
been focused on one API to keep the task simple, with the view to adding
support for the other APIs at a later date. If works stops, for example due
to a release, or due to a developer moving on, the APIs can be left in a
diverged state for a significant amount of time. Of course, one solution
is to block merging of a common feature until the work is complete
in all three APIs, and all common code is added to the Monasca
common library. In practice, this has been prevented by the number of man
hours available. For example, a contributor may only be interested in
metrics. They may not be able to justify spending time working on the other
APIs.
4. Packaging, deploying, and configuring 3 APIs is more complex than
deploying a single, unified API.
This is an obvious point, but it can be made worse by 3). For example,
historically, it has not always been possible to run all APIs in the same
way. Configuration of common functionality such as Kafka has not
always been uniform. There is extra build system configuration,
additional keystone endpoints that need to be registered and monitoring
the availability of 3 APIs is more difficult than one.
Use Cases
---------
As a developer I would save time by implementing a feature common
to all APIs in a single repo, rather than across four repos.
As a reviewer I will save time by not having to think about how
a change may impact multiple repos.
As a deployer I will save time by having two fewer services to deploy
and configure.
As a security analyst I can focus my efforts reviewing a single API,
rather than three.
As a user of Monasca I would like a consistent experience, including usability
and documentation.
As a packager I would like to have a single Monasca API, rather than
three to save time configuring and maintaining my build system.
Proposed change
===============
Merge all APIs into a single unified API. Merge all common API code from the
Monasca common repo into the unified API repository. Specifically, it is
proposed to merge all relevant code into the Monasca API.
Alternatives
------------
1. Refactor all APIs so that the code is standard. Prevent merging of common
features until the work is complete across all APIs, and all common code
has been factored out into the Monasca common library. From historical
experience this will be difficult without additional developers.
2. Don't do anything and carry on working around the technical debt. In the
long term this is likely to make it more difficult to add new features, and
require more time for maintenance.
Data model impact
-----------------
None
REST API impact
---------------
Aside from the fact that a single service will implement the combined schema
from all APIs, the calls should not change. We should be careful when merging,
for example dimension validation code, that we do not break things which were
accepted in one API, but not another. An ideal result would be that we use
the same code for parsing common fields such as dimensions for all API calls.
Security impact
---------------
If a single API was previously exposed publicly, deploying the unified API
will increase the surface area for attack. However, in general, it will be
easier to review changes to make security improvements due to the reduced
developer and reviewer overhead.
During the deprecation period of the Events and Log API, security fixes made
to the unified API may need to be backported.
Other end user impact
---------------------
All services talking to the Monasca Events and Log APIs will need
to be directed at the unified API. For services which use Keystone
to lookup the Monasca endpoints this should occur automatically. Other
services such as the Fluentd Monasca output plugin will need to be
reconfigured manually. A grace period where the Monasca Events and Log
APIs are still supported is one possibility to make this transition
easier.
Performance Impact
------------------
No direct impact.
In general, it will be less effort to profile and optimise a single API than
it would be to do the same for all three APIs.
In clustered deployments with specialised nodes (for example a dedicated
logging node, hosting the Monasca Log API) the unified API can be deployed
as a direct replacement, and the additional functionality can simply be
ignored.
Other deployer impact
---------------------
For Monasca users supporting legacy releases, any security or bug fixes
made to the unified API may need to be backported to the individual APIs.
All actively maintained deployment solutions will need to be updated to
deploy the unified API. For example, Monasca-Docker, DevStack, Kolla,
OpenStack Ansible, and Helm.
In the case of DevStack we should merge the three existing plugins into
one. The resulting plugin should have options like `log_pipeline_enabled`
and `metrics_pipeline_enabled` to support enabling those pipelines
separately. This is useful, for example, when DevStack is used in OpenStack
CI to allow testing changes localised to specific areas more efficiently.
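A deployer's DevStack ``local.conf`` might then look roughly like this. The variable and plugin
names below are illustrative only, mirroring the ``log_pipeline_enabled`` /
``metrics_pipeline_enabled`` switches proposed above; they are not final option names:

```ini
# Hypothetical local.conf fragment for the merged DevStack plugin.
[[local|localrc]]
enable_plugin monasca-api https://opendev.org/openstack/monasca-api
# Enable only the metrics pipeline; logs and events stay disabled,
# e.g. for a CI job that tests metrics changes in isolation.
MONASCA_METRICS_PIPELINE_ENABLED=True
MONASCA_LOG_PIPELINE_ENABLED=False
MONASCA_EVENTS_PIPELINE_ENABLED=False
```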
Developer impact
----------------
The motivation behind this change is to reduce the burden placed on
developers and reviewers when making improvements to the Monasca APIs. It
is hoped that this will lead to an increase in developer productivity.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
* Review test coverage of APIs, and add coverage for any missing areas.
* Review test coverage of Monasca common and add coverage for any missing areas.
* Merge Monasca common Python code into the Monasca API. Common functionality
should include:
* authorisation
* validation
* OpenStack policy
* WSGI deployment
* Implement Log API schema in the Monasca API and port tests.
* Implement Events API schema and port tests.
* Merge DevStack plugins into a single plugin and add support for enabling
pipelines individually.
* Deprecate Monasca Log and Event APIs.
* Merge and update documentation
Dependencies
============
No additional dependencies are added. The dependency on Monasca common can
be removed.
Testing
=======
The Monasca API and Log API Tempest test plugins have already been merged into
one plugin. Any Tempest tests which exist for the Events API should also be
merged into the unified plugin. The Tempest plugin will need to be updated to
use the unified API.
Unit tests from the Log API and Events API repo will need to be ported to the
Monasca API (unified) repo. Some of these tests may be redundant.
Documentation Impact
====================
Three sets of documentation will be reduced to one. Whilst it will take
some effort to merge the documentation, it should hopefully be more
consistent.
References
==========
None
History
=======
None
@@ -1,589 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=======================
Monasca Events Listener
=======================
https://storyboard.openstack.org/#!/story/2003023
Monasca Events API [1]_ was developed to store event data in Elasticsearch.
A new application could use the Monasca Events API to post an event directly for processing
and storage.
However, a general collection service is needed to capture the existing OpenStack Notifications
already generated by OpenStack Services and pass them to Monasca for storage and processing.
This specification proposes creating a new Monasca Events Listener to capture events and pass
them to Monasca services.
Problem description
===================
All OpenStack services generate notifications (events) which contain large amounts of
operational and state information about the service and its resources. Services such as Nova [14]_
already publish these to a RabbitMQ topic.
This notification data is not currently available in Monasca.
Ceilometer data processing pipeline [3]_ provides an extensible mechanism of publishing samples and
events using a custom publisher. Ceilometer samples represent a quantity that can be measured
(e.g. the size of a volume) and events represent an occurrence of an event and do not have any
associated quantity (e.g. volume was created). The Telemetry project also has the Panko [5]_
service for indexing and storing these events.
A previous `spec <../../rocky/not-implemented/monasca-events-publishing.html>`_ was created to
specify an enhancement to Ceilometer to allow collected events to be published to the Monasca
Events API.
However, with the recent deprecation of Ceilometer's Event functionality [13]_, this is no longer
a long-term option.
This spec proposes a new Monasca service which would listen for OpenStack event messages
(often called notifications), consume them from RabbitMQ, and post each event to the
Monasca Events API.
This service could use Ceilometer, Vitrage, Watcher, or another service as an example of how to
listen to notifications from OpenStack services such as Nova [14]_.
It is being proposed to make the Monasca Events Listener service a part of the Monasca Agent
code base and reuse code, including the existing monasca_setup script and config.
Use Cases
---------
#. OpenStack notification data would be stored in Elasticsearch via the Monasca services
Example sequence:
#. Nova completes the creation of a VM
#. Nova generates a Notification message to oslo.messaging
#. oslo.messaging posts the message to RabbitMQ
#. Monasca Event Listener receives the Notification message from RabbitMQ
#. Monasca Event Listener validates and translates the Notification to a Monasca Events
API format according to the configuration
#. Monasca Event Listener publishes formatted Notification to Monasca Events API
#. Monasca Events API receives and validates formatted Notification
#. Monasca Events API stores event Notification in configured Elasticsearch instance
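Steps 4-6 of the sequence above could be sketched as a notification endpoint in the oslo.messaging style. The ``info`` method signature follows oslo.messaging's notification listener contract; the forwarding callable and the event shape passed to it are illustrative assumptions, not the final design.

```python
class NotificationEndpoint:
    """Sketch of a notification endpoint for the proposed listener.

    ``forward_fn`` stands in for a (hypothetical) Monasca Events API
    client callable.
    """

    def __init__(self, forward_fn):
        self._forward = forward_fn

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Called once per INFO-priority notification pulled from RabbitMQ;
        # repackage the fields the Events API cares about and forward them.
        self._forward({
            "event_type": event_type,
            "publisher_id": publisher_id,
            "payload": payload,
        })
```

In a real deployment this endpoint would be registered with ``oslo_messaging.get_notification_listener()`` against the notifications topic; that wiring is omitted here.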
Proposed change
===============
#. Monasca Event Listener will be a new service which can be run in a Highly
Available configuration by running an instance of Monasca Event Listener
on each Controller node in a cloud. Each node will listen to OpenStack
notifications in RabbitMQ and convert notifications to a post to the Monasca
Events API. The Monasca Events API then passes the notification on as Kafka
messages on the 'monevents' topic, where the Monasca Persister will receive
and store them. By the nature of RabbitMQ clients, the load will be
distributed between the Monasca Events Listener instances (only one listener
will process each RabbitMQ message).
#. Monasca Event Listener will filter messages from OpenStack services based on
specification of event_types to be collected. This will reduce 'noise' and
focus event collection on those events that are deemed valuable.
This filtering specification can be from a configuration file, or optionally
could be controlled through a new API implemented as part of the Monasca
Events API service.
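The event_type filter described above might be sketched as a glob match against a configured list of patterns. The glob syntax (e.g. ``compute.instance.*``) is an assumption, since the spec leaves the configuration format open.

```python
import fnmatch


def event_type_allowed(event_type, allowed_patterns):
    """Return True if the notification's event_type matches any configured
    pattern. Glob-style patterns are an assumed configuration format."""
    return any(fnmatch.fnmatch(event_type, pattern)
               for pattern in allowed_patterns)
```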
#. OpenStack Notifications consist of envelope and payload fields. See [15]_ and [16]_ for
examples.
Example OpenStack Notification data format:
.. code-block:: javascript
{
"_context_auth_token": "42630b3ea13242fcad20e0a92d0207f1",
"_context_domain": null,
"_context_instance_lock_checked": false,
"_context_is_admin": true,
"_context_project_domain": null,
"_context_project_id": "a4f77",
"_context_project_name": "admin",
"_context_quota_class": null,
"_context_read_deleted": "no",
"_context_read_only": false,
"_context_remote_address": "192.168.245.4",
"_context_request_id": "req-5948338c-f223-4fd8-9249-8769f7a3e460",
"_context_resource_uuid": null,
"_context_roles": [
"monasca-user",
"admin",
"KeystoneAdmin"
],
"_context_service_catalog": [
{
"endpoints": [
{
"adminURL": "http://192.168.245.8:8776/v2/a4f77",
"internalURL": "http://192.168.245.8:8776/v2/a4f77",
"publicURL": "http://192.168.245.9:8776/v2/a4f77",
"region": "region1"
}
],
"name": "cinderv2",
"type": "volumev2"
},
{
"endpoints": [
{
"adminURL": "http://192.168.245.8:8776/v1/a4f77",
"internalURL": "http://192.168.245.8:8776/v1/a4f77",
"publicURL": "http://192.168.245.9:8776/v1/a4f77",
"region": "region1"
}
],
"name": "cinder",
"type": "volume"
}
],
"_context_show_deleted": false,
"_context_tenant": "a4f77",
"_context_timestamp": "2015-09-18T20:54:23.468522",
"_context_user": "be396488c7034811a200a3cb1d103a28",
"_context_user_domain": null,
"_context_user_id": "be396488c7034811a200a3cb1d103a28",
"_context_user_identity": "be396488c7034811a200a3cb1d103a28 a4f77 - - -",
"_context_user_name": "admin",
"_unique_id": "ff9699d587bf4283a3c367ab88be1541",
"event_type": "compute.instance.create.start",
"message_id": "c6149ba1-34b3-4367-b8c2-b1d6f073742d",
"payload": {
"access_ip_v4": null,
"access_ip_v6": null,
"architecture": null,
"availability_zone": null,
"cell_name": "",
"created_at": "2015-09-18 20:55:25+00:00",
"deleted_at": "",
"disk_gb": 1,
"display_name": "testeee",
"ephemeral_gb": 0,
"host": null,
"hostname": "testeee",
"image_meta": {
"base_image_ref": "df0c8",
"container_format": "bare",
"disk_format": "qcow2",
"min_disk": "1",
"min_ram": "0"
},
"image_name": "glanceaaa3",
"image_ref_url": "http://192.168.245.5:9292/images/df0c8",
"instance_flavor_id": "1",
"instance_id": "abd2ef5c-0381-434a-8efc-d7b39b28a2b6",
"instance_type": "m1.tiny",
"instance_type_id": 4,
"kernel_id": "",
"launched_at": "",
"memory_mb": 512,
"metadata": {},
"node": null,
"os_type": null,
"progress": "",
"ramdisk_id": "",
"reservation_id": "r-1ghilddw",
"root_gb": 1,
"state": "building",
"state_description": "",
"tenant_id": "a4f77",
"terminated_at": "",
"user_id": "be396488c7034811a200a3cb1d103a28",
"vcpus": 1
},
"priority": "INFO",
"publisher_id": "compute.ccp-compute0001-mgmt",
"timestamp": "2015-09-18 20:55:37.639023"
}
#. All fields with the '_context' prefix are envelope fields; the other
interesting fields are:
#. 'message_id' - notification identifier
#. 'payload' - contains most of the relevant and useful information in JSON format
#. 'priority' - notification priority
#. 'publisher_id' - notification publisher
#. 'timestamp' - notification timestamp
#. Monasca Event Listener converts the OpenStack notifications to Monasca events format.
This format will be suitable for Kafka messaging and will match the expected data fields
of the Monasca Persister. This conversion and validation should be common between the
Monasca Event Listener and Monasca Event API.
#. The Kafka client connection will handle communication issues such as reconnections and
resending as needed.
#. Monasca Events API allows a field called 'payload' which can be in an arbitrary
nested JSON format.
Example Monasca Event Format:
.. code-block:: javascript
"events": [
{
"dimensions": {
"service": "compute.ccp-compute0001-mgmt",
"topic": "notification.sample",
"hostname": "nova-compute:compute"
},
"event": {
"event_type": "compute.instance.create.start",
"payload": {
"access_ip_v4": null,
"access_ip_v6": null,
"architecture": null,
"availability_zone": null,
"cell_name": "",
"created_at": "2015-09-18 20:55:25+00:00",
"deleted_at": "",
"disk_gb": 1,
"display_name": "testeee",
"ephemeral_gb": 0,
"host": null,
"hostname": "testeee",
"image_meta": {
"base_image_ref": "df0c8",
"container_format": "bare",
"disk_format": "qcow2",
"min_disk": "1",
"min_ram": "0"
},
"image_name": "glanceaaa3",
"image_ref_url": "http://192.168.245.5:9292/images/df0c8",
"instance_flavor_id": "1",
"instance_id": "abd2ef5c-0381-434a-8efc-d7b39b28a2b6",
"instance_type": "m1.tiny",
"instance_type_id": 4,
"kernel_id": "",
"launched_at": "",
"memory_mb": 512,
"metadata": {},
"node": null,
"os_type": null,
"progress": "",
"ramdisk_id": "",
"reservation_id": "r-1ghilddw",
"root_gb": 1,
"state": "building",
"state_description": "",
"tenant_id": "a4f77",
"terminated_at": "",
"user_id": "be396488c7034811a200a3cb1d103a28",
"vcpus": 1
}
},
"publisher_id": "compute.ccp-compute0001-mgmt",
"priority": "INFO"
}
]
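The mapping between the two formats above could be sketched as a small conversion helper, assuming the field mapping proposed in this spec. The helper name and defaulting behaviour are illustrative, not the final implementation.

```python
def to_monasca_event(notification):
    """Map an OpenStack notification dict (first example above) onto the
    proposed Monasca event shape (second example). Illustrative sketch."""
    payload = notification.get("payload", {})
    return {
        "dimensions": {
            # "service" derives from publisher_id here for illustration;
            # the spec notes a reliable service mapping is an open issue.
            "service": notification.get("publisher_id"),
            "publisher_id": notification.get("publisher_id"),
            "user_id": payload.get("user_id"),
            "project_id": payload.get("tenant_id"),
        },
        "event": {
            "event_type": notification.get("event_type"),
            "payload": payload,
        },
    }
```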
#. The following fields in the Monasca Event data may not be available in the OpenStack
notification data format:
* "service"
* "dimensions.topic"
* "event.priority"
We propose removing these fields from the Monasca Event format (as a separate
spec/implementation process) for the following reasons:
"service": Currently, OpenStack notifications do not specify the service that
generated the notification in a consistent way. It might be possible to create an external
mapping file which maps event names to services, but it is hard to maintain such a mapping
over time.
"dimensions.topic": This field is not available in the source OpenStack notification.
However, the Monasca Event Listener may be able to save the RabbitMQ topic that the notification
was collected from. In that case, this field should be used.
"event.priority": This field is not currently available in Ceilometer Event format. It is
available in the source OpenStack notification. Note: If we think this field can be useful we can
propose adding it to the Monasca Event Listener format.
#. The following new fields will be added to the Monasca Event data as dimensions:
* "dimensions.publisher_id": Identifier for the publisher that generated the event.
* "dimensions.user_id": Identifier for user that generated the event.
* "dimensions.project_id": Identifier of the project that generated the event.
#. 'hostname' is available in the event payload, but its location might differ from event to event.
#. The proposed new Monasca Event Listener will be able to submit event
data in batches at a configurable frequency (similar to the current samples publisher). The
event data will be published when the current batch reaches its maximum size
(config setting) or when a certain time interval has elapsed since the last publish
(config setting). This ensures that the batch does not grow too large while also avoiding
significant delay in publishing events to the Monasca Events API.
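The batch/flush policy above (publish on maximum size or elapsed interval) could be sketched as follows. ``publish_fn`` stands in for a hypothetical Monasca Events API client, and the setting names are illustrative.

```python
import time


class EventBatcher:
    """Minimal sketch of the batch/flush policy: publish when
    max_batch_size events accumulate or max_interval seconds have
    elapsed since the last publish."""

    def __init__(self, publish_fn, max_batch_size=100, max_interval=30.0,
                 clock=time.monotonic):
        self._publish = publish_fn
        self._max = max_batch_size
        self._interval = max_interval
        self._clock = clock
        self._batch = []
        self._last_flush = clock()

    def add(self, event):
        self._batch.append(event)
        if (len(self._batch) >= self._max
                or self._clock() - self._last_flush >= self._interval):
            self.flush()

    def flush(self):
        # Publish whatever has accumulated and reset the interval timer.
        if self._batch:
            self._publish(self._batch)
            self._batch = []
        self._last_flush = self._clock()
```

A real service would also call ``flush()`` periodically from a timer so a half-full batch is not held indefinitely when traffic is quiet.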
#. Monasca Event Listener will have a configuration file to configure connection information for
both RabbitMQ and Monasca Events API.
#. The "tenant_id" and "user_id" that the notification relates to are available in "payload"
section of the notification, and these notifications are generated by each service itself.
There is no additional "OpenStack-operator-agent" like component or functionality required to
fetch that data from the service and publish to monasca event api on behalf of the original
tenant.
(Ceilometer publishing pipeline simply extracts these "tenant_id" and "user_id" fields from the
"payload" and makes those fields available as "tenant_id" and "user_id" traits, which would then
be mapped to "dimensions.project_id" and "dimensions.user_id" fields in monasca events format.)
In other words, original "tenant_id" and "user_id" values are available in
the payload of the notification, and will make its way to "dimensions.tenant_id"
and "dimensions.user_id" in Monasca Event.
Questions/TODO:
* Q: Do we need to do anything special to handle multi-tenancy in the monasca-events API, as
is done for metrics [9]_ ? Would the original user_id and tenant_id in the "dimensions.user_id"
and "dimensions.tenant_id" fields serve this purpose?
* A: The Monasca Events Listener can start by sending all events to a single "admin" tenant;
if required, some other process could later copy selected events back to tenant
projects.
* Q: In Ceilometer V2 API (which has been deprecated and removed), when querying data the role
"admin" could access data for all tenants, whereas a user with "ceilometer-admin" role could
access only data for a particular tenant. Can we implement something like this for
monasca-events api when querying for data?
* A: In Monasca API every request is scoped to the project, so there is no equivalent of
Ceilometer's "admin" role to query data for all projects. So placing all events in to
an "admin" project may be the best approach.
* Q: How should services which generate notifications but do not include a tenant_id be
handled? For example Keystone [16]_.
How does Ceilometer handle such events?
* A: If all events are in an "admin" project then admin metrics like shared ceph cluster
load or provider network load can be copied back to tenants so they may understand how
infrastructure is affecting their workload.
* Note: Configuration of the Elasticsearch cluster is out of scope for this spec. If needed,
a separate Elasticsearch cluster could be assigned to the Events API to avoid overload.
Alternative Solutions
---------------------
#. Reuse the Ceilometer functionality to collect and publish events to the
Monasca Events API. While this may be less work initially, Ceilometer has
deprecated the Events functionality as of Stein. [13]_
#. An alternate Events Listener was proposed that would listen for RabbitMQ events
then publish them directly to the 'monevents' topic in Kafka. Discussion on this
can be seen in the git history for this document and in IRC logs [18]_.
Pro:
* A much simpler approach; more efficient than an HTTP hop through another service.
* No need for batching in service, as RabbitMQ and Kafka clients would handle fast
throughput and short network interruptions.
Con:
* Each Nova Cells v2 cell has its own RabbitMQ instance. While most deployments
would likely have a centralized RabbitMQ, it is not required in the documented
architecture.
* Regions may also cause separation of RabbitMQ instances that need to be monitored.
* While it might be possible to have a service/agent in each Cell publish back to
a centralized Kafka directly, our authentication and networking for Kafka were
not designed to support that.
#. OpenStack Panko [5]_ is an event storage service and REST API for Ceilometer and could
be used instead of a Monasca solution.
Pro:
* An 'official' subproject within Telemetry, so there is some community recognition.
Con:
* Its primary storage is in a relational database which has problems with scale.
* It is not maintained actively and not ready for production. [11]_
* It will be deprecated eventually. [12]_
Data model impact
-----------------
None
REST API impact
---------------
#. We propose to adjust the Monasca Event data format by removing and adding the following
fields, as described in the "Proposed change" section above.
Remove fields or make them optional (JSON path notation): "service", "dimensions.topic",
"dimensions.hostname" and "event.priority"
Add fields (JSON path notation): "dimensions.publisher_id", "dimensions.user_id" and
"dimensions.project_id"
This change will have an impact on Monasca Events API.
Security impact
---------------
The proposed Monasca Event Listener will collect and publish OpenStack event
(notification) data to Monasca Events API. OpenStack notification data does not
have any sensitive data like 'tokens'.
Notifications do contain 'user_id' and 'project_id' fields but do not
contain any Personally Identifiable Information (PII) for the user or
the project.
Other end user impact
---------------------
None.
Performance Impact
------------------
#. The number of notifications (events) generated by different services will depend on the
capacity of the cloud and the number of resources being created by users.
For example, a large number of compute VMs being created or destroyed could
lead to a surge in the number of notifications (events) to be published. Optimal
values for settings such as event batch size and event batch interval should be
documented to reduce any adverse effect on performance.
#. The proposed Monasca Event Listener is a new service, so performance is unknown. However, the
Monasca API has been shown to have a high performance throughput.
#. If any part of the Monasca notification pipeline goes down, notifications could back-up in
RabbitMQ and bring down the cluster. The risk of this could be mitigated by using a
separate RabbitMQ instance for notifications.
Other deployer impact
---------------------
#. The proposed Monasca Event Listener will introduce several
new configuration options, such as:
* RabbitMQ connection information
* Monasca Events API endpoint URL
* events batch interval
* events batch size
* events retry interval
* Keystone credentials for obtaining a token
* Conversion options for OpenStack notifications to the Monasca Event format. This may
be stored in separate pipeline configuration files, similar to how transform specs
are configured in Monasca Transform.
#. As part of developing the new Monasca Event Listener, the DevStack plugin would be
updated to add the above configuration changes.
Developer impact
----------------
#. The proposed change to Monasca Event Format will have an impact on existing Monasca Event API,
since Monasca Event Format will have to be tweaked. (See REST API Impact section above)
Implementation
==============
Assignee(s)
-----------
Primary assignee:
joadavis, aagate
Other contributors:
<launchpad-id or None>
Work Items
----------
#. Implement new Monasca Event Listener.
* Connection to RabbitMQ for OpenStack Notifications
* Add filtering of notifications for configured event_types
* Specification in configuration file
* (Optional) Creation of a new API to configure event_type subscriptions
* Validation of OpenStack Notification data and format
* Conversion of data format to meet Monasca Events requirements
* Publishing to Monasca Events API
* Configuration of conversion specifications per-event type
#. Implement monasca devstack plugin changes to deploy
new Events Listener service.
#. Implement unit tests for Monasca Event Listener.
Dependencies
============
None
Testing
=======
#. New Monasca Event Listener unit tests will be added, which can test publishing with
various config options (events batch size, events batch interval) and retry handling when
the Monasca Events API is not available.
#. Adding tempest tests for Monasca Event Listener could be looked at as part of
separate effort.
Documentation Impact
====================
#. New Monasca Event Listener config options will be documented.
#. Recommended values for each config option will also be documented for cloud operators,
based on the size and resources of the cloud.
References
==========
.. [1] Monasca Events API 1.0: https://opendev.org/openstack/monasca-events-api/
.. [2] Monasca Ceilometer project: https://github.com/openstack/monasca-ceilometer
.. [3] Ceilometer Data processing and pipelines: https://docs.openstack.org/ceilometer/pike/admin/telemetry-data-pipelines.html
.. [4] Ceilometer Events: https://docs.openstack.org/ceilometer/latest/admin/telemetry-events.html
.. [5] Openstack Panko: https://github.com/openstack/panko
.. [6] Monasca Event Format: https://github.com/openstack/monasca-events-api/blob/master/doc/api-samples/v1/req_simple_event.json
.. [7] Ceilometer System Architecture Diagram: https://docs.openstack.org/ceilometer/ocata/architecture.html
.. [8] Monasca Events POST v1.0 API: https://github.com/openstack/monasca-events-api/blob/master/api-ref/source/events.inc
.. [9] Cross-Tenant Metric Submission: https://github.com/openstack/monasca-agent/blob/master/docs/MonascaMetrics.md#cross-tenant-metric-submission
.. [10] Ceilometer pipeline yaml documentation: https://docs.openstack.org/ceilometer/latest/admin/telemetry-data-pipelines.html
.. [11] No future for Panko or Aodh: https://julien.danjou.info/lessons-from-openstack-telemetry-deflation/
.. [12] Ceilometer Events deprecated means Panko also deprecated: http://eavesdrop.openstack.org/irclogs/%23openstack-telemetry/%23openstack-telemetry.2018-10-10.log.html
.. [13] Ceilometer Events marked as deprecated in Stein: https://review.opendev.org/#/c/603336/
.. [14] Nova notification version update lists services effected (see "Deprecating legacy notifications"): https://etherpad.openstack.org/p/nova-ptg-stein
.. [15] Nova notification reference: https://docs.openstack.org/nova/latest/reference/notifications.html#existing-versioned-notifications
.. [16] Keystone notification reference: https://docs.openstack.org/keystone/latest/advanced-topics/event_notifications.html#example-notification-project-create
.. [17] Monasca Events API publisher to Kafka: https://github.com/openstack/monasca-events-api/blob/master/monasca_events_api/app/common/events_publisher.py
.. [18] Monasca IRC meeting Dec 15, 2018: http://eavesdrop.openstack.org/meetings/monasca/2018/monasca.2018-12-12-15.00.log.html


@@ -1,195 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Upgrade Apache Kafka client
===========================
Include the URL of your story:
https://storyboard.openstack.org/#!/story/2003705
Currently, all Python Monasca components use a copy of the `kafka-python` library
in version 0.9.5 (released on Feb 16, 2016) [1]_. This specification
describes the process of upgrading the Apache Kafka client to
`confluent-kafka-python` [2]_. This will improve performance and
reliability. Sticking with an old, frozen client version is also unacceptable
in terms of security.
Problem description
===================
The use of `KeyedProducer` and `SimpleConsumer` in `kafka-python` library has
been deprecated as of version 1.0.0 [3]_. Further use of this code poses a
security risk. Additionally, profiling of ``monasca-persister`` has shown that
most of the time is spent consuming Kafka messages [7]_. Thus,
there is significant potential to improve overall Monasca performance by upgrading
the Kafka client.
Proposed change
===============
The wiki page hosted by Apache Software Foundation lists available Python
clients [4]_. There are currently three actively maintained and supported
clients: `confluent-kafka-python`, `kafka-python` and `pykafka`. Several
benchmarks have shown [5]_, [6]_ that the client maintained by Confluent is
both the fastest and most complete.
There is a significant performance improvement when using the asynchronous producer
(~50x). Sending messages asynchronously requires more care to avoid
duplicating persisted data, but the performance gain justifies it.
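A sketch of what the asynchronous producer setup might look like. The configuration keys are standard librdkafka settings; the chosen values and the failure-tracking delivery callback are illustrative assumptions, not tuned recommendations.

```python
failed_deliveries = []


def build_producer_config(bootstrap_servers, client_id="monasca-persister"):
    """Assemble a librdkafka-style configuration dict for an
    asynchronous producer."""
    return {
        "bootstrap.servers": bootstrap_servers,
        "client.id": client_id,
        # Small linger window: batch messages for throughput without
        # introducing noticeable latency.
        "queue.buffering.max.ms": 100,
        # Guard against duplicates on retry where the broker supports
        # idempotent production.
        "enable.idempotence": True,
    }


def delivery_report(err, msg):
    """Delivery callback (confluent-kafka invokes it with err and msg
    during poll()/flush()). Recording failures lets the service resend
    them rather than silently dropping data or blindly re-producing
    everything, which could create duplicates."""
    if err is not None:
        failed_deliveries.append(msg)
```

With `confluent-kafka-python` installed, a producer would be created as ``Producer(build_producer_config("kafka:9092"))`` and messages sent with ``produce(topic, value, callback=delivery_report)`` followed by periodic ``poll()`` and a final ``flush()``.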
`confluent-kafka-python` is also the only client which offers support for
Apache Avro serialization which reduces the size of messages and thus
additionally speeds up communication.
The proposed change includes using:
* `confluent-kafka-python` library
* in asynchronous mode
Code changes will affect following components:
* monasca-common
* monasca-{log,event}-api
* monasca-persister
* monasca-notification
* monasca-transform
Java components (`monasca-thresh` and `monasca-persister`) are out of scope of
this specification. Client upgrading in these components should be handled
separately.
This client has an external dependency on `librdkafka`, a finely tuned C
client.
Alternatives
------------
* `pykafka`
* new version of `kafka-python`
* use synchronous mode
Data model impact
-----------------
No data model impact.
REST API impact
---------------
No REST API impact.
Security impact
---------------
This change will improve the security because of removing the deprecated and
unmaintained code.
Other end user impact
---------------------
No end user impact.
Performance Impact
------------------
This change should dramatically improve the performance of the complete
solution. In particular, the performance of `monasca-persister` and `monasca-api` is
expected to improve.
Other deployer impact
---------------------
New libraries should be packaged and deployed:
* `confluent-kafka-python`
* `librdkafka`
Developer impact
----------------
`confluent-kafka-python` has to be used instead of `kafka-python` in all
affected components.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
witek
Other contributors:
<>
Work Items
----------
* remove code using `pykafka`
* remove `pykafka` from requirements and lower-constraints
* add `confluent-kafka-python` to global-requirements
* implement common routines in ``monasca-common``
* use new code in:
* ``monasca-{log,events}-api``
* ``monasca-persister``
* ``monasca-notification``
* ``monasca-transform``
* delete old deprecated code
Dependencies
============
New packages have to be built for:
* `confluent-kafka-python`
* `librdkafka`
Testing
=======
We should test the implementation using existing integration tests (tempest).
Additionally, we should test the scenario where the producer fails to receive a
response from Kafka for some of the messages in a bulk request. Creating
duplicate entries in the database must be avoided.
The implementation should be followed by executing the following tests on the
complete stack:
* stress
* endurance
* performance
Documentation Impact
====================
No documentation impact.
References
==========
.. [1] https://github.com/dpkp/kafka-python/releases/tag/v0.9.5
.. [2] https://github.com/confluentinc/confluent-kafka-python
.. [3] https://github.com/dpkp/kafka-python/blob/master/docs/changelog.rst#100-feb-15-2016
.. [4] https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python
.. [5] https://github.com/monasca/monasca-perf/blob/master/kafka_python_client_perf/monascaInvestigationKafkaPythonAPIs.md
.. [6] http://activisiongamescience.github.io/2016/06/15/Kafka-Client-Benchmarking/
.. [7] https://opendev.org/openstack/monasca-persister/commit/a7112fd30bd545dd850e0e267dcceb9ea27551ad
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Stein
- Introduced


@@ -1,5 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
doc8>=0.6.0 # Apache-2.0

tox.ini

@@ -1,35 +0,0 @@
[tox]
minversion = 2.7
envlist = docs,pep8
skipsdist = True
[testenv]
basepython = python3
usedevelop = True
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = sphinx-build -W -b html doc/source doc/build/html
[testenv:pep8]
description = Runs a set of linters against the codebase (checkniceness)
commands = {[testenv:checkniceness]commands}
[testenv:checkniceness]
description = Validates (pep-like) documentation
skip_install = True
usedevelop = False
commands = doc8 --file-encoding utf-8 {toxinidir}/doc
[testenv:spelling]
deps =
-r{toxinidir}/requirements.txt
sphinxcontrib-spelling
PyEnchant
commands = sphinx-build -b spelling doc/source doc/build/spelling