Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that explains where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I06945b7d85ded52044899b712d90c151d1c0fcf2
Author: Tony Breeds, 2017-09-12 15:36:58 -06:00
parent 12d63a4192
commit 2e64f284c6
690 changed files with 14 additions and 84866 deletions

@@ -1,8 +0,0 @@
[run]
branch = True
source = ceilometer
omit = ceilometer/tests/*
[report]
ignore_errors = True

.gitignore
@@ -1,21 +0,0 @@
*.egg*
*.mo
*.pyc
.coverage
.testrepository
.tox
AUTHORS
build/*
ChangeLog
cover/*
dist/*
doc/build
doc/source/api/
etc/ceilometer/ceilometer.conf
subunit.log
# Files created by releasenotes build
releasenotes/build
# Files created by api-ref build
api-ref/build

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/ceilometer.git

@@ -1,33 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
Adam Gandelman <adamg@canonical.com> <adamg@ubuntu.com>
Alan Pevec <alan.pevec@redhat.com> <apevec@redhat.com>
Alexei Kornienko <akornienko@mirantis.com> <alexei.kornienko@gmail.com>
ChangBo Guo(gcb) <eric.guo@easystack.cn> Chang Bo Guo <guochbo@cn.ibm.com>
Chinmaya Bharadwaj <chinmaya-bharadwaj.a@hp.com> chinmay <chinmaya-bharadwaj.a@hp.com>
Clark Boylan <cboylan@sapwetik.org> <clark.boylan@gmail.com>
Doug Hellmann <doug@doughellmann.com> <doug.hellmann@dreamhost.com>
Fei Long Wang <flwang@catalyst.net.nz> <flwang@cn.ibm.com>
Fengqian Gao <fengqian.gao@intel.com> Fengqian <fengqian.gao@intel.com>
Fengqian Gao <fengqian.gao@intel.com> Fengqian.Gao <fengqian.gao@intel.com>
Gordon Chung <gord@live.ca> gordon chung <gord@live.ca>
Gordon Chung <gord@live.ca> Gordon Chung <chungg@ca.ibm.com>
Gordon Chung <gord@live.ca> gordon chung <chungg@ca.ibm.com>
Ildiko Vancsa <ildiko.vancsa@ericsson.com> Ildiko <ildiko.vancsa@ericsson.com>
John H. Tran <jhtran@att.com> John Tran <jhtran@att.com>
Julien Danjou <julien.danjou@enovance.com> <julien@danjou.info>
LiuSheng <liusheng@huawei.com> liu-sheng <liusheng@huawei.com>
Mehdi Abaakouk <mehdi.abaakouk@enovance.com> <sileht@sileht.net>
Nejc Saje <nsaje@redhat.com> <nejc@saje.info>
Nejc Saje <nsaje@redhat.com> <nejc.saje@xlab.si>
Nicolas Barcet (nijaba) <nick@enovance.com> <nick.barcet@canonical.com>
Pádraig Brady <pbrady@redhat.com> <P@draigBrady.com>
Rich Bowen <rbowen@redhat.com> <rbowen@rcbowen.com>
Sandy Walsh <sandy.walsh@rackspace.com> <sandy@sandywalsh.com>
Sascha Peilicke <speilicke@suse.com> <saschpe@gmx.de>
Sean Dague <sean.dague@samsung.com> <sean@dague.net>
Shengjie Min <shengjie_min@dell.com> shengjie-min <shengjie_min@dell.com>
Shuangtai Tian <shuangtai.tian@intel.com> shuangtai <shuangtai.tian@intel.com>
Swann Croiset <swann.croiset@bull.net> <swann@oopss.org>
ZhiQiang Fan <zhiqiang.fan@huawei.com> <aji.zqfan@gmail.com>

@@ -1,9 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-600} \
${PYTHON:-python} -m subunit.run discover ${OS_TEST_PATH:-./ceilometer/tests} -t . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
# NOTE(chdent): Only used/matches on gabbi-related tests.
group_regex=(gabbi\.(suitemaker|driver)\.test_gabbi_(?:prefix_|)[^_]+)_
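A minimal sketch (the test IDs are hypothetical) of how this ``group_regex``
buckets gabbi test IDs: every ID whose first capture group matches runs in
the same worker, so tests generated from one gabbi YAML file stay together.

.. code-block:: python

    import re

    GROUP_REGEX = r'(gabbi\.(suitemaker|driver)\.test_gabbi_(?:prefix_|)[^_]+)_'

    # Hypothetical gabbi-generated test IDs; both share the capture group
    # "gabbi.suitemaker.test_gabbi_alarms", so testr schedules them together.
    for test_id in ('gabbi.suitemaker.test_gabbi_alarms_create.test_request',
                    'gabbi.suitemaker.test_gabbi_alarms_delete.test_request'):
        match = re.match(GROUP_REGEX, test_id)
        print(test_id, '->', match.group(1) if match else None)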

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps documented at:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   https://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/ceilometer

@@ -1,27 +0,0 @@
Ceilometer Style Commandments
=============================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Ceilometer Specific Commandments
--------------------------------
- [C301] LOG.warn() is not allowed. Use LOG.warning()
- [C302] Deprecated library function os.popen()
Creating Unit Tests
-------------------
For every new feature, unit tests should be created that both test and
(implicitly) document the usage of said feature. If submitting a patch for a
bug that had no unit test, a new passing unit test should be added. If a
submitted bug fix does have a unit test, be sure to add a new one that fails
without the patch and passes with the patch.
All unittest classes must ultimately inherit from testtools.TestCase.
All setUp and tearDown methods must upcall using the super() method.
tearDown methods should be avoided and addCleanup calls should be preferred.
Never manually create tempfiles. Always use the tempfile fixtures from
the fixture library to ensure that they are cleaned up.
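A minimal sketch (class and fixture names are illustrative, not from this
repo) of a unit test that follows the rules above: it inherits from
``testtools.TestCase``, upcalls ``setUp`` via ``super()``, prefers
``addCleanup`` over ``tearDown``, and uses the fixtures library for
temporary files.

.. code-block:: python

    import fixtures
    import testtools


    class TestExample(testtools.TestCase):

        def setUp(self):
            super(TestExample, self).setUp()
            # The TempDir fixture creates a temporary directory and
            # registers its own cleanup, so no tearDown is needed.
            self.temp_dir = self.useFixture(fixtures.TempDir()).path
            # Cleanups run in reverse order once the test finishes.
            self.addCleanup(print, 'cleaning up after the test')

        def test_temp_dir_exists(self):
            self.assertTrue(self.temp_dir)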

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,17 +0,0 @@
= Generalist Code Reviewers =
The current members of ceilometer-core are listed here:
https://launchpad.net/~ceilometer-drivers/+members#active
This group can +2 and approve patches in Ceilometer. However, they may
choose to seek feedback from the appropriate specialist maintainer before
approving a patch if it is in any way controversial or risky.
= IRC handles of maintainers =
gordc
jd__
liusheng
llu
pradk
sileht

README
@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,28 +0,0 @@
==========
Ceilometer
==========

Ceilometer is a data collection service that collects event and metering
data by monitoring notifications sent from OpenStack services. It publishes
collected data to various targets including data stores
and message queues.

Ceilometer is distributed under the terms of the Apache
License, Version 2.0. The full terms and conditions of this
license are detailed in the LICENSE file.

For more information about Ceilometer APIs, see
https://developer.openstack.org/api-ref-telemetry-v2.html

Release notes are available at
https://releases.openstack.org/teams/telemetry.html

Developer documentation is available at
https://docs.openstack.org/ceilometer/latest/

For information on how to contribute to ceilometer, see the CONTRIBUTING.rst
file.

The project home is at https://launchpad.net/ceilometer

To report any ceilometer related bugs, see https://bugs.launchpad.net/ceilometer/

@@ -1,336 +0,0 @@
.. -*- rst -*-
======
Alarms
======
Lists, creates, gets details for, updates, and deletes alarms.
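As a hedged illustration of the calls documented below, the endpoint, port,
and token here are placeholders rather than values taken from this
repository:

.. code-block:: python

    import json
    import urllib.request

    ENDPOINT = 'http://controller:8777'    # assumed Telemetry API endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'  # assumed Keystone token
    ALARM_ID = 'REPLACE_WITH_ALARM_ID'

    # GET /v2/alarms/{alarm_id}: show details for one alarm.
    request = urllib.request.Request(
        '%s/v2/alarms/%s' % (ENDPOINT, ALARM_ID),
        headers={'X-Auth-Token': TOKEN})
    with urllib.request.urlopen(request) as response:
        alarm = json.load(response)
    print(alarm['name'], alarm['state'])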
Show alarm details
==================
.. rest_method:: GET /v2/alarms/{alarm_id}
Shows details for an alarm, by alarm ID.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- alarm_id: alarm_id_path
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- alarm: alarm_response
- alarm_actions: alarm_actions
- alarm_id: alarm_id
- combination_rule: alarm_combination_rule
- description: alarm_description
- enabled: alarm_enabled
- insufficient_data_actions: alarm_insufficient_data_actions
- timestamp: alarm_timestamp
- name: alarm_name
- ok_actions: alarm_ok_actions
- project_id: alarm_project_id
- state_timestamp: alarm_state_timestamp
- threshold_rule: alarm_threshold_rule
- repeat_actions: alarm_repeat_actions
- state: alarm_state
- type: alarm_type
- user_id: user_id
Response Example
----------------
.. literalinclude:: ../samples/alarm-show-response.json
:language: javascript
Update alarm
============
.. rest_method:: PUT /v2/alarms/{alarm_id}
Updates an alarm.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- alarm_id: alarm_id_path
- alarm: alarm_request
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- alarm: alarm_response
- alarm_actions: alarm_actions
- alarm_id: alarm_id
- combination_rule: alarm_combination_rule
- description: alarm_description
- enabled: alarm_enabled
- insufficient_data_actions: alarm_insufficient_data_actions
- timestamp: alarm_timestamp
- name: alarm_name
- ok_actions: alarm_ok_actions
- project_id: alarm_project_id
- state_timestamp: alarm_state_timestamp
- threshold_rule: alarm_threshold_rule
- repeat_actions: alarm_repeat_actions
- state: alarm_state
- type: alarm_type
- user_id: user_id
Response Example
----------------
.. literalinclude:: ../samples/alarm-show-response.json
:language: javascript
Delete alarm
============
.. rest_method:: DELETE /v2/alarms/{alarm_id}
Deletes an alarm, by alarm ID.
Normal response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- alarm_id: alarm_id_path
Update alarm state
==================
.. rest_method:: PUT /v2/alarms/{alarm_id}/state
Sets the state of an alarm.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- alarm_id: alarm_id_path
- state: alarm_state
Response Example
----------------
.. literalinclude::
:language: javascript
Show alarm state
================
.. rest_method:: GET /v2/alarms/{alarm_id}/state
Shows the state for an alarm, by alarm ID.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- alarm_id: alarm_id_path
Response Example
----------------
.. literalinclude::
:language: javascript
List alarms
===========
.. rest_method:: GET /v2/alarms
Lists alarms, based on a query.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- q: q
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- alarm_actions: alarm_actions
- ok_actions: ok_actions
- description: description
- timestamp: timestamp
- enabled: enabled
- combination_rule: combination_rule
- state_timestamp: state_timestamp
- threshold_rule: threshold_rule
- alarm_id: alarm_id
- state: state
- insufficient_data_actions: alarm_insufficient_data_actions
- repeat_actions: repeat_actions
- user_id: user_id
- project_id: project_id
- type: type
- name: name
Response Example
----------------
.. literalinclude:: ../samples/alarms-list-response.json
:language: javascript
Create alarm
============
.. rest_method:: POST /v2/alarms
Creates an alarm.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- data: data
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- alarm: alarm_response
- alarm_actions: alarm_actions
- alarm_id: alarm_id
- combination_rule: alarm_combination_rule
- description: alarm_description
- enabled: alarm_enabled
- insufficient_data_actions: alarm_insufficient_data_actions
- timestamp: alarm_timestamp
- name: alarm_name
- ok_actions: alarm_ok_actions
- project_id: alarm_project_id
- state_timestamp: alarm_state_timestamp
- threshold_rule: alarm_threshold_rule
- repeat_actions: alarm_repeat_actions
- state: alarm_state
- type: alarm_type
- user_id: user_id
Response Example
----------------
.. literalinclude:: ../samples/alarm-show-response.json
:language: javascript
Show alarm history
==================
.. rest_method:: GET /v2/alarms/{alarm_id}/history
Assembles and shows the history for an alarm, by alarm ID.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- alarm_id: alarm_id_path
- q: q
Response Example
----------------
.. literalinclude::
:language: javascript

@@ -1,92 +0,0 @@
.. -*- rst -*-
============
Capabilities
============
Gets information for API and storage capabilities.
The Telemetry service enables you to store samples, events, and
alarm definitions in supported database back ends. The
``capabilities`` resource enables you to list the capabilities that
a database supports.
The ``capabilities`` resource returns a flattened dictionary of
capability properties, each with an associated boolean value. A
value of ``true`` indicates that the corresponding capability is
available in the back end.
You can optionally configure separate database back ends for
samples, events, and alarms definitions. The ``capabilities``
response shows a value of ``true`` to indicate that the definitions
database for samples, events, or alarms is ready to use in a
production environment.
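A minimal sketch (placeholder endpoint and token) of reading this flattened
dictionary and testing one capability flag:

.. code-block:: python

    import json
    import urllib.request

    ENDPOINT = 'http://controller:8777'    # assumed Telemetry API endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'  # assumed Keystone token

    request = urllib.request.Request(
        ENDPOINT + '/v2/capabilities',
        headers={'X-Auth-Token': TOKEN})
    with urllib.request.urlopen(request) as response:
        capabilities = json.load(response)

    # API capabilities are flattened names mapped to booleans.
    if capabilities.get('api', {}).get('statistics:query:complex'):
        print('complex statistics queries are available')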
List capabilities
=================
.. rest_method:: GET /v2/capabilities
A representation of the API and storage capabilities. Usually, the storage driver imposes constraints.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- statistics:query:complex: statistics:query:complex
- alarms:history:query:simple: alarms:history:query:simple
- meters:query:metadata: meters:query:metadata
- alarms:query:simple: alarms:query:simple
- resources:query:simple: resources:query:simple
- api: api
- statistics:aggregation:selectable:quartile: statistics:aggregation:selectable:quartile
- statistics:query:simple: statistics:query:simple
- statistics:aggregation:selectable:count: statistics:aggregation:selectable:count
- statistics:aggregation:selectable:min: statistics:aggregation:selectable:min
- statistics:aggregation:selectable:sum: statistics:aggregation:selectable:sum
- storage: storage
- alarm_storage: alarm_storage
- statistics:aggregation:selectable:avg: statistics:aggregation:selectable:avg
- meters:query:complex: meters:query:complex
- statistics:groupby: statistics:groupby
- alarms:history:query:complex: alarms:history:query:complex
- meters:query:simple: meters:query:simple
- samples:query:metadata: samples:query:metadata
- statistics:query:metadata: statistics:query:metadata
- storage:production_ready: storage:production_ready
- samples:query:simple: samples:query:simple
- resources:query:metadata: resources:query:metadata
- statistics:aggregation:selectable:max: statistics:aggregation:selectable:max
- samples:query:complex: samples:query:complex
- statistics:aggregation:standard: statistics:aggregation:standard
- events:query:simple: events:query:simple
- statistics:aggregation:selectable:stddev: statistics:aggregation:selectable:stddev
- alarms:query:complex: alarms:query:complex
- statistics:aggregation:selectable:cardinality: statistics:aggregation:selectable:cardinality
- event_storage: event_storage
- resources:query:complex: resources:query:complex
Response Example
----------------
.. literalinclude:: ../samples/capabilities-list-response.json
:language: javascript

@@ -1,273 +0,0 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# ceilometer documentation build configuration file, created by
# sphinx-quickstart on Sat May 1 15:17:47 2010.
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import subprocess
import sys
import warnings
import openstackdocstheme
html_theme = 'openstackdocs'
html_theme_path = [openstackdocstheme.get_html_theme_path()]
html_theme_options = {
"sidebar_mode": "toc",
}
extensions = [
'os_api_ref',
]
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Compute API Reference'
copyright = u'2010-present, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from ceilometer.version import version_info as ceilometer_version
# The full version, including alpha/beta/rc tags.
release = ceilometer_version.version_string_with_vcs()
# The short X.Y version.
version = ceilometer_version.canonical_version_string()
# Config logABug feature
giturl = (
u'https://git.openstack.org/cgit/openstack/ceilometer/tree/api-ref/source')
# source tree
# html_context allows us to pass arbitrary values into the html template
html_context = {'bug_tag': 'api-ref',
'giturl': giturl,
'bug_project': 'ceilometer'}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# The reST default role (used for this markup: `text`) to use
# for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for man page output ----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
try:
html_last_updated_fmt = subprocess.check_output(git_cmd).decode('utf-8')
except Exception:
warnings.warn('Cannot get last updated time from git repository. '
'Not setting "html_last_updated_fmt".')
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'ceilometerdoc'
# -- Options for LaTeX output -------------------------------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'CeilometerReleaseNotes.tex',
u'Ceilometer Release Notes Documentation',
u'Ceilometer Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'ceilometerreleasenotes',
u'Ceilometer Release Notes Documentation', [u'Ceilometer Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'CeilometerReleaseNotes',
u'Ceilometer Release Notes Documentation',
u'Ceilometer Developers', 'CeilometerReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False

@@ -1,93 +0,0 @@
.. -*- rst -*-
======
Events
======
Lists all events and shows details for an event.
Show event details
==================
.. rest_method:: GET /v2/events/{message_id}
Shows details for an event.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- message_id: message_id_path
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- events: events
- raw: event_raw
- generated: event_generated
- event_type: event_type
- message_id: message_id
Response Example
----------------
.. literalinclude:: ../samples/event-show-response.json
:language: javascript
List events
===========
.. rest_method:: GET /v2/events
Lists all events.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- q: q
- limit: limit
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- events: events
- raw: event_raw
- generated: generated
- event_type: event_type
- message_id: message_id
Response Example
----------------
.. literalinclude:: ../samples/events-list-response.json
:language: javascript

@@ -1,8 +0,0 @@
=========================
Ceilometer Release Notes
=========================
.. toctree::
:maxdepth: 1

@@ -1,386 +0,0 @@
.. -*- rst -*-
======
Meters
======
Lists all meters, adds samples to meters, and lists samples for
meters. For list operations, if you do not explicitly set the
``limit`` query parameter, a default limit is applied. The default
limit is the ``default_api_return_limit`` configuration option
value.
Also, computes and lists statistics for samples in a time range.
You can use the ``aggregate`` query parameter in the ``statistics``
URI to explicitly select the ``stddev``, ``cardinality``, or any
other standard function. For example:
::
GET /v2/meters/METER_NAME/statistics?aggregate.func=NAME
&
aggregate.param=VALUE
The ``aggregate.param`` parameter value is optional for all
functions except the ``cardinality`` function.
The API silently ignores any duplicate aggregate function and
parameter pairs.
The API accepts, and storage drivers support, duplicate functions
with different parameter values. In this example, the
``cardinality`` function is accepted twice with two different
parameter values:
::
GET /v2/meters/METER_NAME/statistics?aggregate.func=cardinality
&
aggregate.param=resource_id
&
aggregate.func=cardinality
&
aggregate.param=project_id
**Examples:**
Use the ``stddev`` function to request the standard deviation of
CPU utilization:
::
GET /v2/meters/cpu_util/statistics?aggregate.func=stddev
The response looks like this:
.. code-block:: json
[
{
"aggregate": {
"stddev": 0.6858829
},
"duration_start": "2014-01-30T11:13:23",
"duration_end": "2014-01-31T16:07:13",
"duration": 104030,
"period": 0,
"period_start": "2014-01-30T11:13:23",
"period_end": "2014-01-31T16:07:13",
"groupby": null,
"unit": "%"
}
]
Use the ``cardinality`` function with the project ID to return the
number of distinct tenants with images:
::
GET /v2/meters/image/statistics?aggregate.func=cardinality
&
aggregate.param=project_id
The following, more complex, example determines:
- The number of distinct instances (``cardinality``)
- The total number of instance samples (``count``) for a tenant in
15-minute intervals (``period`` and ``groupby`` options)
::
GET /v2/meters/instance/statistics?aggregate.func=cardinality
&
aggregate.param=resource_id
&
aggregate.func=count
&
groupby=project_id
&
period=900
The response looks like this:
.. code-block:: json
[
{
"count": 19,
"aggregate": {
"count": 19,
"cardinality/resource_id": 3
},
"duration": 328.47803,
"duration_start": "2014-01-31T10:00:41.823919",
"duration_end": "2014-01-31T10:06:10.301948",
"period": 900,
"period_start": "2014-01-31T10:00:00",
"period_end": "2014-01-31T10:15:00",
"groupby": {
"project_id": "061a5c91811e4044b7dc86c6136c4f99"
},
"unit": "instance"
},
{
"count": 22,
"aggregate": {
"count": 22,
"cardinality/resource_id": 4
},
"duration": 808.00385,
"duration_start": "2014-01-31T10:15:15",
"duration_end": "2014-01-31T10:28:43.003840",
"period": 900,
"period_start": "2014-01-31T10:15:00",
"period_end": "2014-01-31T10:30:00",
"groupby": {
"project_id": "061a5c91811e4044b7dc86c6136c4f99"
},
"unit": "instance"
},
{
"count": 2,
"aggregate": {
"count": 2,
"cardinality/resource_id": 2
},
"duration": 0,
"duration_start": "2014-01-31T10:35:15",
"duration_end": "2014-01-31T10:35:15",
"period": 900,
"period_start": "2014-01-31T10:30:00",
"period_end": "2014-01-31T10:45:00",
"groupby": {
"project_id": "061a5c91811e4044b7dc86c6136c4f99"
},
"unit": "instance"
}
]
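A hedged sketch (placeholder endpoint and token) of issuing the
cardinality-plus-count query above; the repeated ``aggregate.func`` and
``aggregate.param`` pairs are encoded as an ordered list of tuples:

.. code-block:: python

    import json
    import urllib.parse
    import urllib.request

    ENDPOINT = 'http://controller:8777'    # assumed Telemetry API endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'  # assumed Keystone token

    # urlencode on a list of tuples preserves duplicate parameter names.
    query = urllib.parse.urlencode([
        ('aggregate.func', 'cardinality'),
        ('aggregate.param', 'resource_id'),
        ('aggregate.func', 'count'),
        ('groupby', 'project_id'),
        ('period', 900),
    ])
    url = '%s/v2/meters/instance/statistics?%s' % (ENDPOINT, query)
    request = urllib.request.Request(url, headers={'X-Auth-Token': TOKEN})
    with urllib.request.urlopen(request) as response:
        for bucket in json.load(response):
            print(bucket['period_start'], bucket['aggregate'])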
Show meter statistics
=====================
.. rest_method:: GET /v2/meters/{meter_name}/statistics
Computes and lists statistics for samples in a time range.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- meter_name: meter_name
- q: q
- groupby: groupby
- period: period
- aggregate: aggregate
- limit: limit
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- count: count
- duration_start: duration_start
- min: min
- max: max
- duration_end: duration_end
- period: period
- sum: sum
- duration: duration
- period_end: period_end
- aggregate: aggregate
- period_start: period_start
- avg: avg
- groupby: groupby
- unit: unit
Response Example
----------------
.. literalinclude:: ../samples/statistics-list-response.json
:language: javascript
List meters
===========
.. rest_method:: GET /v2/meters
Lists meters, based on the data recorded so far.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- q: q
- limit: limit
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- user_id: user_id
- name: name
- resource_id: resource_id
- source: source
- meter_id: meter_id
- project_id: project_id
- type: type
- unit: unit
Response Example
----------------
.. literalinclude:: ../samples/meters-list-response.json
:language: javascript
List samples for meter
======================
.. rest_method:: GET /v2/meters/{meter_name}
Lists samples for a meter, by meter name.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- meter_name: meter_name
- q: q
- limit: limit
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- user_id: user_id
- resource_id: resource_id
- timestamp: timestamp
- meter: meter
- volume: volume
- source: source
- recorded_at: recorded_at
- project_id: project_id
- type: type
- id: id
- unit: unit
- metadata: metadata
Response Example
----------------
.. literalinclude:: ../samples/samples-list-response.json
:language: javascript
Add samples to meter
====================
.. rest_method:: POST /v2/meters/{meter_name}
Adds samples to a meter, by meter name.
If you attempt to add a sample that is not supported, this call
returns the ``409`` response code.
Normal response codes: 200
Error response codes: 409
Request
-------
.. rest_parameters:: parameters.yaml
- user_id: user_id
- resource_id: resource_id
- timestamp: timestamp
- meter: meter
- volume: volume
- source: source
- recorded_at: recorded_at
- project_id: project_id
- type: type
- id: id
- unit: unit
- metadata: metadata
- meter_name: meter_name
- direct: direct
- samples: samples
Request Example
---------------
.. literalinclude:: ../samples/sample-create-request.json
:language: javascript
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- user_id: user_id
- resource_id: resource_id
- timestamp: timestamp
- meter: meter
- volume: volume
- source: source
- recorded_at: recorded_at
- project_id: project_id
- type: type
- id: id
- unit: unit
- metadata: metadata
Response Example
----------------
.. literalinclude:: ../samples/sample-show-response.json
:language: javascript
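A hedged sketch (placeholder endpoint, token, and resource ID; the body
field names are assumptions meant to mirror the sample-create request
above) of posting one sample to a meter:

.. code-block:: python

    import json
    import urllib.request

    ENDPOINT = 'http://controller:8777'    # assumed Telemetry API endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'  # assumed Keystone token

    # Assumed field names; check sample-create-request.json for the
    # actual schema before relying on these.
    sample = [{
        'counter_name': 'instance',
        'counter_type': 'gauge',
        'counter_unit': 'instance',
        'counter_volume': 1,
        'resource_id': 'REPLACE_WITH_RESOURCE_ID',
    }]
    request = urllib.request.Request(
        ENDPOINT + '/v2/meters/instance',
        data=json.dumps(sample).encode('utf-8'),
        headers={'X-Auth-Token': TOKEN,
                 'Content-Type': 'application/json'})
    with urllib.request.urlopen(request) as response:
        print(json.load(response))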

@@ -1,768 +0,0 @@
# variables in header
{}
# variables in path
alarm_id_path:
description: |
The UUID of the alarm.
in: path
required: false
type: string
message_id_path:
description: |
The UUID of the message.
in: path
required: false
type: string
meter_name:
description: |
The name of the meter.
in: path
required: false
type: string
resource_id_path:
description: |
The UUID of the resource.
in: path
required: false
type: string
sample_id:
description: |
The UUID of the sample.
in: path
required: false
type: string
# variables in query
aggregate:
description: |
A list of selectable aggregation functions to apply.
For example:
::
GET /v2/meters/METER_NAME/statistics?aggregate.func=cardinality
&
aggregate.param=resource_id
&
aggregate.func=cardinality
&
aggregate.param=project_id
in: query
required: false
type: object
direct:
description: |
Indicates whether the samples are POSTed
directly to storage. Set ``?direct=True`` to POST the samples
directly to storage.
in: query
required: false
type: string
groupby:
description: |
Fields for group by aggregation.
in: query
required: false
type: object
limit:
description: |
Limits the maximum number of samples that the response returns.
For example:
::
GET /v2/events?limit=1000
in: query
required: false
type: integer
limit_1:
description: |
Requests a page size of items. Returns a number
of items up to a limit value. Use the ``limit`` parameter to make
an initial limited request and use the ID of the last-seen item
from the response as the ``marker`` parameter value in a
subsequent limited request.
in: query
required: false
type: integer
meter_links:
description: |
Set ``?meter_links=1`` to return a self link and
related meter links.
in: query
required: false
type: integer
period:
description: |
The period, in seconds, for which you want
statistics.
in: query
required: false
type: integer
q:
description: |
Filters the response by one or more arguments.
For example: ``?q.field=Foo & q.value=my_text``.
in: query
required: false
type: array
q_1:
description: |
Filters the response by one or more event arguments.
For example:
::
GET /v2/events?q.field=Foo
&
q.value=my_text
in: query
required: false
type: array
samples:
description: |
A list of samples.
in: query
required: false
type: array
# variables in body
alarm_actions:
description: |
The list of actions that the alarm performs.
in: body
required: true
type: array
alarm_combination_rule:
description: |
The rules for the combination alarm type.
in: body
required: true
type: string
alarm_description:
description: |
Describes the alarm.
in: body
required: true
type: string
alarm_enabled:
description: |
If ``true``, evaluation and actioning is enabled
for the alarm.
in: body
required: true
type: boolean
alarm_id:
description: |
The UUID of the alarm.
in: body
required: true
type: string
alarm_insufficient_data_actions:
description: |
The list of actions that the alarm performs when
the alarm state is ``insufficient_data``.
in: body
required: true
type: array
alarm_name:
description: |
The name of the alarm.
in: body
required: true
type: string
alarm_ok_actions:
description: |
The list of actions that the alarm performs when
the alarm state is ``ok``.
in: body
required: true
type: array
alarm_repeat_actions:
description: |
If set to ``true``, the alarm notifications are
repeated. Otherwise, this value is ``false``.
in: body
required: true
type: boolean
alarm_request:
description: |
An alarm within the request body.
in: body
required: false
type: string
alarm_state:
description: |
The state of the alarm.
in: body
required: true
type: string
alarm_state_timestamp:
description: |
The date and time of the alarm state.
in: body
required: true
type: string
alarm_storage:
description: |
Defines the capabilities for the storage that
stores persisting alarm definitions. A value of ``true`` indicates
that the capability is available.
in: body
required: true
type: object
alarm_threshold_rule:
description: |
The rules for the threshold alarm type.
in: body
required: true
type: string
alarm_timestamp:
description: |
The date and time of the alarm.
in: body
required: true
type: string
alarm_type:
description: |
The type of the alarm, which is either
``threshold`` or ``combination``.
in: body
required: true
type: string
alarms:history:query:complex:
description: |
If ``true``, the complex query capability for
alarm history is available for the configured database back end.
in: body
required: true
type: boolean
alarms:history:query:simple:
description: |
If ``true``, the simple query capability for
alarm history is available for the configured database back end.
in: body
required: true
type: boolean
alarms:query:complex:
description: |
If ``true``, the complex query capability for
alarm definitions is available for the configured database back
end.
in: body
required: true
type: boolean
alarms:query:simple:
description: |
If ``true``, the simple query capability for
alarm definitions is available for the configured database back
end.
in: body
required: true
type: boolean
api:
description: |
A set of key and value pairs that contain the API
capabilities for the configured storage driver.
in: body
required: true
type: object
avg:
description: |
The average of all volume values in the data.
in: body
required: true
type: number
combination_rule:
description: |
The rules for the combination alarm type.
in: body
required: true
type: string
count:
description: |
The number of samples seen.
in: body
required: true
type: integer
description:
description: |
Describes the alarm.
in: body
required: true
type: string
duration:
description: |
The number of seconds between the oldest and
newest date and time stamp.
in: body
required: true
type: number
duration_end:
description: |
The date and time in UTC format of the query end
time.
in: body
required: true
type: string
duration_start:
description: |
The date and time in UTC format of the query
start time.
in: body
required: true
type: string
event_generated:
description: |
The date and time when the event occurred.
in: body
required: true
type: string
event_raw:
description: |
A dictionary object that stores event messages
for future evaluation.
in: body
required: true
type: object
event_storage:
description: |
If ``true``, the capabilities for the storage
that stores persisting events are available.
in: body
required: true
type: object
event_type:
description: |
The dotted string that represents the event.
in: body
required: true
type: string
events:
description: |
A list of objects. Each object contains key and
value pairs that describe the event.
in: body
required: true
type: array
events:query:simple:
description: |
If ``true``, the simple query capability for
events is available for the configured database back end.
in: body
required: true
type: boolean
id:
description: |
The UUID of the sample.
in: body
required: true
type: string
links:
description: |
A list that contains a self link and associated
meter links.
in: body
required: true
type: array
max:
description: |
The maximum volume seen in the data.
in: body
required: true
type: number
message_id:
description: |
The UUID of the message.
in: body
required: true
type: string
metadata:
description: |
An arbitrary set of one or more metadata key and
value pairs that are associated with the sample.
in: body
required: true
type: object
metadata_1:
description: |
A set of one or more arbitrary metadata key and
value pairs that are associated with the resource.
in: body
required: true
type: object
meter:
description: |
The meter name.
in: body
required: true
type: string
meter_id:
description: |
The UUID of the meter.
in: body
required: true
type: string
meters:query:complex:
description: |
If ``true``, the complex query capability for
meters is available for the configured database back end.
in: body
required: true
type: boolean
meters:query:metadata:
description: |
If ``true``, the simple query capability for the
metadata of meters is available for the configured database back
end.
in: body
required: true
type: boolean
meters:query:simple:
description: |
If ``true``, the simple query capability for
meters is available for the configured database back end.
in: body
required: true
type: boolean
min:
description: |
The minimum volume seen in the data.
in: body
required: true
type: number
name:
description: |
The name of the alarm.
in: body
required: true
type: string
name_1:
description: |
The meter name.
in: body
required: true
type: string
period_end:
description: |
The period end date and time in UTC format.
in: body
required: true
type: string
period_start:
description: |
The period start date and time in UTC format.
in: body
required: true
type: string
project_id:
description: |
The UUID of the project or tenant that owns the
resource.
in: body
required: true
type: string
project_id_1:
description: |
The UUID of the project.
in: body
required: true
type: string
project_id_2:
description: |
The UUID of the owning project or tenant.
in: body
required: true
type: string
recorded_at:
description: |
The date and time when the sample was recorded.
in: body
required: true
type: string
measurement_resource_id:
description: |
The UUID of the resource for which the
measurements are taken.
in: body
required: true
type: string
resource:
description: |
The resource.
in: body
required: true
type: object
resource_id:
description: |
The UUID of the resource.
in: body
required: true
type: string
resources:
description: |
List of the resources.
in: body
required: true
type: array
resources:query:complex:
description: |
If ``true``, the complex query capability for
resources is available for the configured database back end.
in: body
required: true
type: boolean
resources:query:metadata:
description: |
If ``true``, the simple query capability for the
metadata of resources is available for the configured database
back end.
in: body
required: true
type: boolean
resources:query:simple:
description: |
If ``true``, the simple query capability for
resources is available for the configured database back end.
in: body
required: true
type: boolean
samples:query:complex:
description: |
If ``true``, the complex query capability for
samples is available for the configured database back end.
in: body
required: true
type: boolean
samples:query:metadata:
description: |
If ``true``, the simple query capability for the
metadata of samples is available for the configured database back
end.
in: body
required: true
type: boolean
samples:query:simple:
description: |
If ``true``, the simple query capability for
samples is available for the configured database back end.
in: body
required: true
type: boolean
source:
description: |
The name of the source that identifies where the
sample comes from.
in: body
required: true
type: string
source_1:
description: |
The name of the source from which the meter came.
in: body
required: true
type: string
source_2:
description: |
The name of the source from which the resource
came.
in: body
required: true
type: string
state:
description: |
The state of the alarm.
in: body
required: true
type: string
statistics:aggregation:selectable:avg:
description: |
If ``true``, the ``avg`` capability is available
for the configured database back end. Use the ``avg`` capability
to get average values for samples.
in: body
required: true
type: boolean
statistics:aggregation:selectable:cardinality:
description: |
If ``true``, the ``cardinality`` capability is
available for the configured database back end. Use the
``cardinality`` capability to get cardinality for samples.
in: body
required: true
type: boolean
statistics:aggregation:selectable:count:
description: |
If ``true``, the ``count`` capability is
available for the configured database back end. Use the ``count``
capability to calculate the number of samples for a query.
in: body
required: true
type: boolean
statistics:aggregation:selectable:max:
description: |
If ``true``, the ``max`` capability is available
for the configured database back end. Use the ``max`` capability
to calculate the maximum value for a query.
in: body
required: true
type: boolean
statistics:aggregation:selectable:min:
description: |
If ``true``, the ``min`` capability is available
for the configured database back end. Use the ``min`` capability
to calculate the minimum value for a query.
in: body
required: true
type: boolean
statistics:aggregation:selectable:quartile:
description: |
If ``true``, the ``quartile`` capability is
available for the configured database back end. Use the
``quartile`` capability to calculate the quartile of sample
volumes for a query.
in: body
required: true
type: boolean
statistics:aggregation:selectable:stddev:
description: |
If ``true``, the ``stddev`` capability is
available for the configured database back end. Use the ``stddev``
capability to calculate the standard deviation of sample volumes
for a query.
in: body
required: true
type: boolean
statistics:aggregation:selectable:sum:
description: |
If ``true``, the ``sum`` capability is available
for the configured database back end. Use the ``sum`` capability
to calculate the sum of sample volumes for a query.
in: body
required: true
type: boolean
statistics:aggregation:standard:
description: |
If ``true``, the ``standard`` set of aggregation
capability is available for the configured database back end.
in: body
required: true
type: boolean
statistics:groupby:
description: |
If ``true``, the ``groupby`` capability is
available for calculating statistics for the configured database
back end.
in: body
required: true
type: boolean
statistics:query:complex:
description: |
If ``true``, the complex query capability for
statistics is available for the configured database back end.
in: body
required: true
type: boolean
statistics:query:metadata:
description: |
If ``true``, the simple query capability for the
sample metadata that is used to calculate statistics is available
for the configured database back end.
in: body
required: true
type: boolean
statistics:query:simple:
description: |
If ``true``, the simple query capability for
statistics is available for the configured database back end.
in: body
required: true
type: boolean
storage:
description: |
If ``true``, the capabilities for the storage
that stores persisting samples are available.
in: body
required: true
type: object
storage:production_ready:
description: |
If ``true``, the database back end is ready to
use in a production environment.
in: body
required: true
type: boolean
sum:
description: |
The total of all of the volume values seen in the
data.
in: body
required: true
type: number
timestamp:
description: |
The date and time in UTC format when the
measurement was made.
in: body
required: true
type: string
timestamp_1:
description: |
The date and time of the alarm.
in: body
required: true
type: string
type:
description: |
The meter type.
in: body
required: true
type: string
type_2:
description: |
The meter type. The type value is gauge, delta,
or cumulative.
in: body
required: true
type: string
unit:
description: |
The unit of measure for the ``volume`` value.
in: body
required: true
type: string
unit_1:
description: |
The unit of measure.
in: body
required: true
type: string
unit_2:
description: |
The unit type of the data set.
in: body
required: true
type: string
user_id:
description: |
The UUID of the user who either created or last
updated the resource.
in: body
required: true
type: string
user_id_1:
description: |
The UUID of the user.
in: body
required: true
type: string
volume:
description: |
The actual measured value.
in: body
required: true
type: number

@@ -1,95 +0,0 @@
.. -*- rst -*-
=========
Resources
=========
Lists all resources and gets information for individual resources.
List resources
==============
.. rest_method:: GET /v2/resources
Lists definitions for all resources.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- q: q
- meter_links: meter_links
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- resources: resources
- user_id: user_id
- links: links
- resource_id: resource_id
- source: source
- project_id: project_id
- metadata: metadata
Response Example
----------------
.. literalinclude:: ../samples/resources-list-response.json
:language: javascript
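
As an illustration only (this sketch is not from the original docs; the
host, token, and filter value are placeholders), the ``q`` filter
serializes to ``q.field``, ``q.op``, and ``q.value`` query parameters::

    import requests

    resp = requests.get(
        'http://localhost:8777/v2/resources',
        headers={'X-Auth-Token': 'TOKEN'},   # placeholder token
        params={
            'q.field': 'resource_id',        # one field/op/value triplet
            'q.op': 'eq',
            'q.value': 'bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
        },
    )
    resources = resp.json()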
Show resource details
=====================
.. rest_method:: GET /v2/resources/{resource_id}
Shows details for a resource, by resource ID.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- resource_id: resource_id_path
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- resource: resource
- user_id: user_id
- links: links
- resource_id: resource_id
- source: source
- project_id: project_id
- metadata: metadata
Response Example
----------------
.. literalinclude:: ../samples/resource-show-response.json
:language: javascript

@@ -1,111 +0,0 @@
.. -*- rst -*-
=======
Samples
=======
Lists all samples and gets information for a sample.
For list operations, if you do not explicitly set the ``limit``
query parameter, a default limit is applied. The default limit is
the ``default_api_return_limit`` configuration option value.
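
A short sketch of that behavior (hypothetical; the endpoint and token are
placeholders): passing ``limit`` explicitly caps the result set, otherwise
``default_api_return_limit`` applies::

    import requests

    # Without 'limit', at most default_api_return_limit samples are returned.
    resp = requests.get(
        'http://localhost:8777/v2/samples',
        headers={'X-Auth-Token': 'TOKEN'},   # placeholder token
        params={'limit': 10},                # ask for at most 10 samples
    )
    assert len(resp.json()) <= 10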
Show sample details
===================
.. rest_method:: GET /v2/samples/{sample_id}
Shows details for a sample, by sample ID.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- sample_id: sample_id
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- user_id: user_id
- resource_id: resource_id
- timestamp: timestamp
- meter: meter
- volume: volume
- source: source
- recorded_at: recorded_at
- project_id: project_id
- type: type
- id: id
- unit: unit
- metadata: metadata
Response Example
----------------
.. literalinclude:: ../samples/sample-show-response.json
:language: javascript
List samples
============
.. rest_method:: GET /v2/samples
Lists all known samples, based on the data recorded so far.
Normal response codes: 200
Error response codes:
Request
-------
.. rest_parameters:: parameters.yaml
- q: q
- limit: limit
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- user_id: user_id
- resource_id: resource_id
- timestamp: timestamp
- meter: meter
- volume: volume
- source: source
- recorded_at: recorded_at
- project_id: project_id
- type: type
- id: id
- unit: unit
- metadata: metadata
Response Example
----------------
.. literalinclude:: ../samples/samples-list-response.json
:language: javascript

@@ -1,24 +0,0 @@
{
"alarm_actions": [
"http://site:8000/alarm"
],
"alarm_id": null,
"combination_rule": null,
"description": "An alarm",
"enabled": true,
"insufficient_data_actions": [
"http://site:8000/nodata"
],
"name": "SwiftObjectAlarm",
"ok_actions": [
"http://site:8000/ok"
],
"project_id": "c96c887c216949acbdfbd8b494863567",
"repeat_actions": false,
"state": "ok",
"state_timestamp": "2013-11-21T12:33:08.486228",
"threshold_rule": null,
"timestamp": "2013-11-21T12:33:08.486221",
"type": "threshold",
"user_id": "c96c887c216949acbdfbd8b494863567"
}

@@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
<alarm_actions>
<item>http://site:8000/alarm</item>
</alarm_actions>
<alarm_id nil="true" />
<combination_rule nil="true" />
<description>An alarm</description>
<enabled>true</enabled>
<insufficient_data_actions>
<item>http://site:8000/nodata</item>
</insufficient_data_actions>
<name>SwiftObjectAlarm</name>
<ok_actions>
<item>http://site:8000/ok</item>
</ok_actions>
<project_id>c96c887c216949acbdfbd8b494863567</project_id>
<repeat_actions>false</repeat_actions>
<state>ok</state>
<state_timestamp>2013-11-21T12:33:08.486228</state_timestamp>
<threshold_rule nil="true" />
<timestamp>2013-11-21T12:33:08.486221</timestamp>
<type>threshold</type>
<user_id>c96c887c216949acbdfbd8b494863567</user_id>
</value>

@@ -1,26 +0,0 @@
[
{
"alarm_actions": [
"http://site:8000/alarm"
],
"alarm_id": null,
"combination_rule": null,
"description": "An alarm",
"enabled": true,
"insufficient_data_actions": [
"http://site:8000/nodata"
],
"name": "SwiftObjectAlarm",
"ok_actions": [
"http://site:8000/ok"
],
"project_id": "c96c887c216949acbdfbd8b494863567",
"repeat_actions": false,
"state": "ok",
"state_timestamp": "2013-11-21T12:33:08.486228",
"threshold_rule": null,
"timestamp": "2013-11-21T12:33:08.486221",
"type": "threshold",
"user_id": "c96c887c216949acbdfbd8b494863567"
}
]

@@ -1,27 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
<value>
<alarm_actions>
<item>http://site:8000/alarm</item>
</alarm_actions>
<alarm_id nil="true" />
<combination_rule nil="true" />
<description>An alarm</description>
<enabled>true</enabled>
<insufficient_data_actions>
<item>http://site:8000/nodata</item>
</insufficient_data_actions>
<name>SwiftObjectAlarm</name>
<ok_actions>
<item>http://site:8000/ok</item>
</ok_actions>
<project_id>c96c887c216949acbdfbd8b494863567</project_id>
<repeat_actions>false</repeat_actions>
<state>ok</state>
<state_timestamp>2013-11-21T12:33:08.486228</state_timestamp>
<threshold_rule nil="true" />
<timestamp>2013-11-21T12:33:08.486221</timestamp>
<type>threshold</type>
<user_id>c96c887c216949acbdfbd8b494863567</user_id>
</value>
</values>

@@ -1,40 +0,0 @@
{
"alarm_storage": {
"storage:production_ready": true
},
"api": {
"alarms:history:query:complex": true,
"alarms:history:query:simple": true,
"alarms:query:complex": true,
"alarms:query:simple": true,
"events:query:simple": true,
"meters:query:complex": false,
"meters:query:metadata": true,
"meters:query:simple": true,
"resources:query:complex": false,
"resources:query:metadata": true,
"resources:query:simple": true,
"samples:query:complex": true,
"samples:query:metadata": true,
"samples:query:simple": true,
"statistics:aggregation:selectable:avg": true,
"statistics:aggregation:selectable:cardinality": true,
"statistics:aggregation:selectable:count": true,
"statistics:aggregation:selectable:max": true,
"statistics:aggregation:selectable:min": true,
"statistics:aggregation:selectable:quartile": false,
"statistics:aggregation:selectable:stddev": true,
"statistics:aggregation:selectable:sum": true,
"statistics:aggregation:standard": true,
"statistics:groupby": true,
"statistics:query:complex": false,
"statistics:query:metadata": true,
"statistics:query:simple": true
},
"event_storage": {
"storage:production_ready": true
},
"storage": {
"storage:production_ready": true
}
}

@@ -1,131 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
<api>
<item>
<key>statistics:query:complex</key>
<value>false</value>
</item>
<item>
<key>alarms:history:query:simple</key>
<value>true</value>
</item>
<item>
<key>meters:query:metadata</key>
<value>true</value>
</item>
<item>
<key>alarms:query:simple</key>
<value>true</value>
</item>
<item>
<key>resources:query:simple</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:quartile</key>
<value>false</value>
</item>
<item>
<key>statistics:query:simple</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:count</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:min</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:sum</key>
<value>true</value>
</item>
<item>
<key>alarms:query:complex</key>
<value>true</value>
</item>
<item>
<key>meters:query:complex</key>
<value>false</value>
</item>
<item>
<key>statistics:groupby</key>
<value>true</value>
</item>
<item>
<key>alarms:history:query:complex</key>
<value>true</value>
</item>
<item>
<key>meters:query:simple</key>
<value>true</value>
</item>
<item>
<key>samples:query:metadata</key>
<value>true</value>
</item>
<item>
<key>statistics:query:metadata</key>
<value>true</value>
</item>
<item>
<key>samples:query:simple</key>
<value>true</value>
</item>
<item>
<key>resources:query:metadata</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:max</key>
<value>true</value>
</item>
<item>
<key>samples:query:complex</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:standard</key>
<value>true</value>
</item>
<item>
<key>events:query:simple</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:stddev</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:avg</key>
<value>true</value>
</item>
<item>
<key>statistics:aggregation:selectable:cardinality</key>
<value>true</value>
</item>
<item>
<key>resources:query:complex</key>
<value>false</value>
</item>
</api>
<storage>
<item>
<key>storage:production_ready</key>
<value>true</value>
</item>
</storage>
<alarm_storage>
<item>
<key>storage:production_ready</key>
<value>true</value>
</item>
</alarm_storage>
<event_storage>
<item>
<key>storage:production_ready</key>
<value>true</value>
</item>
</event_storage>
</value>

@@ -1,18 +0,0 @@
{
"raw": {},
"traits": [
{
"type": "string",
"name": "action",
"value": "read"
},
{
"type": "string",
"name": "eventTime",
"value": "2015-10-28T20:26:58.545477+0000"
}
],
"generated": "2015-10-28T20:26:58.546933",
"message_id": "bae43de6-e9fa-44ad-8c15-40a852584444",
"event_type": "http.request"
}

@@ -1,20 +0,0 @@
[
{
"raw": {},
"traits": [
{
"type": "string",
"name": "action",
"value": "read"
},
{
"type": "string",
"name": "eventTime",
"value": "2015-10-28T20:26:58.545477+0000"
}
],
"generated": "2015-10-28T20:26:58.546933",
"message_id": "bae43de6-e9fa-44ad-8c15-40a852584444",
"event_type": "http.request"
}
]

@@ -1,12 +0,0 @@
[
{
"meter_id": "YmQ5NDMxYzEtOGQ2OS00YWQzLTgwM2EtOGQ0YTZiODlmZDM2K2luc3RhbmNl",
"name": "instance",
"project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"source": "openstack",
"type": "gauge",
"unit": "instance",
"user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff"
}
]

@@ -1,13 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
<value>
<name>instance</name>
<type>gauge</type>
<unit>instance</unit>
<resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
<project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
<user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
<source>openstack</source>
<meter_id>YmQ5NDMxYzEtOGQ2OS00YWQzLTgwM2EtOGQ0YTZiODlmZDM2K2luc3RhbmNl</meter_id>
</value>
</values>

@@ -1,20 +0,0 @@
{
"links": [
{
"href": "http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"rel": "self"
},
{
"href": "http://localhost:8777/v2/meters/volume?q.field=resource_id&q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"rel": "volume"
}
],
"metadata": {
"name1": "value1",
"name2": "value2"
},
"project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"source": "openstack",
"user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff"
}

@@ -1,27 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
<resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
<project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
<user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
<metadata>
<item>
<key>name2</key>
<value>value2</value>
</item>
<item>
<key>name1</key>
<value>value1</value>
</item>
</metadata>
<links>
<item>
<href>http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
<rel>self</rel>
</item>
<item>
<href>http://localhost:8777/v2/meters/volume?q.field=resource_id&amp;q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
<rel>volume</rel>
</item>
</links>
<source>openstack</source>
</value>

@@ -1,22 +0,0 @@
[
{
"links": [
{
"href": "http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"rel": "self"
},
{
"href": "http://localhost:8777/v2/meters/volume?q.field=resource_id&q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"rel": "volume"
}
],
"metadata": {
"name1": "value1",
"name2": "value2"
},
"project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"source": "openstack",
"user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff"
}
]

@@ -1,29 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
<value>
<resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
<project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
<user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
<metadata>
<item>
<key>name2</key>
<value>value2</value>
</item>
<item>
<key>name1</key>
<value>value1</value>
</item>
</metadata>
<links>
<item>
<href>http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
<rel>self</rel>
</item>
<item>
<href>http://localhost:8777/v2/meters/volume?q.field=resource_id&amp;q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
<rel>volume</rel>
</item>
</links>
<source>openstack</source>
</value>
</values>

@@ -1,17 +0,0 @@
{
"id": "8db08c68-bc70-11e4-a8c4-fa163e1d1a9b",
"metadata": {
"name1": "value1",
"name2": "value2"
},
"meter": "instance",
"project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
"recorded_at": "2015-02-24T22:00:32.747930",
"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"source": "openstack",
"timestamp": "2015-02-24T22:00:32.747930",
"type": "gauge",
"unit": "instance",
"user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
"volume": 1.0
}

@@ -1,23 +0,0 @@
<value>
<id>8db08c68-bc70-11e4-a8c4-fa163e1d1a9b</id>
<meter>instance</meter>
<type>gauge</type>
<unit>instance</unit>
<volume>1.0</volume>
<user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
<project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
<resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
<source>openstack</source>
<timestamp>2015-02-24T22:00:32.747930</timestamp>
<recorded_at>2015-02-24T22:00:32.747930</recorded_at>
<metadata>
<item>
<key>name2</key>
<value>value2</value>
</item>
<item>
<key>name1</key>
<value>value1</value>
</item>
</metadata>
</value>

@@ -1,17 +0,0 @@
{
"id": "9b23b398-6139-11e5-97e9-bc764e045bf6",
"metadata": {
"name1": "value1",
"name2": "value2"
},
"meter": "instance",
"project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
"recorded_at": "2015-09-22T14:52:54.850725",
"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"source": "openstack",
"timestamp": "2015-09-22T14:52:54.850718",
"type": "gauge",
"unit": "instance",
"user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
"volume": 1
}

@@ -1,24 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
<id>9b23b398-6139-11e5-97e9-bc764e045bf6</id>
<meter>instance</meter>
<type>gauge</type>
<unit>instance</unit>
<volume>1.0</volume>
<user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
<project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
<resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
<source>openstack</source>
<timestamp>2015-09-22T14:52:54.850718</timestamp>
<recorded_at>2015-09-22T14:52:54.850725</recorded_at>
<metadata>
<item>
<key>name2</key>
<value>value2</value>
</item>
<item>
<key>name1</key>
<value>value1</value>
</item>
</metadata>
</value>

@@ -1,19 +0,0 @@
[
{
"id": "9b23b398-6139-11e5-97e9-bc764e045bf6",
"metadata": {
"name1": "value1",
"name2": "value2"
},
"meter": "instance",
"project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
"recorded_at": "2015-09-22T14:52:54.850725",
"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
"source": "openstack",
"timestamp": "2015-09-22T14:52:54.850718",
"type": "gauge",
"unit": "instance",
"user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
"volume": 1
}
]

@@ -1,26 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
<value>
<id>9b23b398-6139-11e5-97e9-bc764e045bf6</id>
<meter>instance</meter>
<type>gauge</type>
<unit>instance</unit>
<volume>1.0</volume>
<user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
<project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
<resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
<source>openstack</source>
<timestamp>2015-09-22T14:52:54.850718</timestamp>
<recorded_at>2015-09-22T14:52:54.850725</recorded_at>
<metadata>
<item>
<key>name2</key>
<value>value2</value>
</item>
<item>
<key>name1</key>
<value>value1</value>
</item>
</metadata>
</value>
</values>

@@ -1,16 +0,0 @@
[
{
"avg": 4.5,
"count": 10,
"duration": 300,
"duration_end": "2013-01-04T16:47:00",
"duration_start": "2013-01-04T16:42:00",
"max": 9,
"min": 1,
"period": 7200,
"period_end": "2013-01-04T18:00:00",
"period_start": "2013-01-04T16:00:00",
"sum": 45,
"unit": "GiB"
}
]

@@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
<value>
<avg>4.5</avg>
<count>10</count>
<duration>300.0</duration>
<duration_end>2013-01-04T16:47:00</duration_end>
<duration_start>2013-01-04T16:42:00</duration_start>
<max>9.0</max>
<min>1.0</min>
<period>7200</period>
<period_end>2013-01-04T18:00:00</period_end>
<period_start>2013-01-04T16:00:00</period_start>
<sum>45.0</sum>
<unit>GiB</unit>
</value>
</values>

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,11 +0,0 @@
libpq-dev [platform:dpkg]
libxml2-dev [platform:dpkg test]
libxslt-devel [platform:rpm test]
libxslt1-dev [platform:dpkg test]
postgresql [platform:dpkg]
mysql-client [platform:dpkg]
mysql-server [platform:dpkg]
build-essential [platform:dpkg]
libffi-dev [platform:dpkg]
mongodb [platform:dpkg]
gettext [platform:dpkg]

@@ -1,20 +0,0 @@
# Copyright 2014 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class NotImplementedError(NotImplementedError):
# FIXME(jd) This is used by WSME to return a correct HTTP code. We should
# not expose it here but wrap our methods in the API to convert it to a
# proper HTTP error.
code = 501

@@ -1,41 +0,0 @@
# Copyright 2014-2015 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from ceilometer.agent import plugin_base as plugin
from ceilometer import keystone_client
LOG = log.getLogger(__name__)
class EndpointDiscovery(plugin.DiscoveryBase):
"""Discovery that supplies service endpoints.
This discovery should be used when the relevant APIs are not well suited
to dividing the pollster's work into pieces smaller than a whole service
at once.
"""
def discover(self, manager, param=None):
endpoints = keystone_client.get_service_catalog(
manager.keystone).get_urls(
service_type=param,
interface=self.conf.service_credentials.interface,
region_name=self.conf.service_credentials.region_name)
if not endpoints:
LOG.warning('No endpoints found for service %s',
"<all services>" if param is None else param)
return []
return endpoints

@@ -1,21 +0,0 @@
# Copyright 2015 Intel
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometer.agent import plugin_base
class LocalNodeDiscovery(plugin_base.DiscoveryBase):
def discover(self, manager, param=None):
"""Return local node as resource."""
return ['local_host']

@@ -1,44 +0,0 @@
# Copyright 2014 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from ceilometer.agent import plugin_base as plugin
LOG = log.getLogger(__name__)
class TenantDiscovery(plugin.DiscoveryBase):
"""Discovery that supplies keystone tenants.
This discovery should be used when the pollster's work can't be divided
into pieces smaller than per-tenant. An example is the Swift pollster,
which polls account details and does so per-project.
"""
def discover(self, manager, param=None):
domains = manager.keystone.domains.list()
LOG.debug('Found %s keystone domains', len(domains))
if domains:
tenants = []
for domain in domains:
domain_tenants = manager.keystone.projects.list(domain)
LOG.debug("Found %s tenants in domain %s", len(domain_tenants),
domain.name)
tenants = tenants + domain_tenants
else:
tenants = manager.keystone.projects.list()
LOG.debug("No domains - found %s tenants in default domain",
len(tenants))
return tenants or []

@@ -1,523 +0,0 @@
#
# Copyright 2013 Julien Danjou
# Copyright 2014-2017 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import itertools
import logging
import random
import uuid
from concurrent import futures
import cotyledon
from futurist import periodics
from keystoneauth1 import exceptions as ka_exceptions
from oslo_config import cfg
from oslo_log import log
import oslo_messaging
from oslo_utils import fnmatch
from oslo_utils import timeutils
import six
from six import moves
from six.moves.urllib import parse as urlparse
from stevedore import extension
from tooz import coordination
from ceilometer.agent import plugin_base
from ceilometer import keystone_client
from ceilometer import messaging
from ceilometer import pipeline
from ceilometer.publisher import utils as publisher_utils
from ceilometer import utils
LOG = log.getLogger(__name__)
OPTS = [
cfg.BoolOpt('batch_polled_samples',
default=True,
help='To reduce polling agent load, samples are sent to the '
'notification agent in a batch. To gain higher '
'throughput at the cost of load, set this to False.'),
cfg.IntOpt('shuffle_time_before_polling_task',
default=0,
help='To reduce large requests at same time to Nova or other '
'components from different compute agents, shuffle '
'start time of polling task.'),
]
POLLING_OPTS = [
cfg.StrOpt('cfg_file',
default="polling.yaml",
help="Configuration file for pipeline definition."
),
cfg.StrOpt('partitioning_group_prefix',
deprecated_group='central',
help='Work-load partitioning group prefix. Use only if you '
'want to run multiple polling agents with different '
'config files. For each sub-group of the agent '
'pool with the same partitioning_group_prefix a disjoint '
'subset of pollsters should be loaded.'),
]
class PollsterListForbidden(Exception):
def __init__(self):
msg = ('It is forbidden to use the pollster-list option of the '
'polling agent together with coordination between multiple '
'agents. Use either multiple coordinated agents or the '
'pollster-list option with a single polling agent.')
super(PollsterListForbidden, self).__init__(msg)
class EmptyPollstersList(Exception):
def __init__(self):
msg = ('No valid pollsters can be loaded with the startup parameters'
' polling-namespaces and pollster-list.')
super(EmptyPollstersList, self).__init__(msg)
class Resources(object):
def __init__(self, agent_manager):
self.agent_manager = agent_manager
self._resources = []
self._discovery = []
self.blacklist = []
def setup(self, source):
self._resources = source.resources
self._discovery = source.discovery
def get(self, discovery_cache=None):
source_discovery = (self.agent_manager.discover(self._discovery,
discovery_cache)
if self._discovery else [])
if self._resources:
static_resources_group = self.agent_manager.construct_group_id(
utils.hash_of_set(self._resources))
return [v for v in self._resources if
self.agent_manager.hashrings[
static_resources_group].belongs_to_self(
six.text_type(v))] + source_discovery
return source_discovery
@staticmethod
def key(source_name, pollster):
return '%s-%s' % (source_name, pollster.name)
class PollingTask(object):
"""Polling task for polling samples and notifying.
A polling task can be invoked periodically or only once.
"""
def __init__(self, agent_manager):
self.manager = agent_manager
# elements of the Cartesian product of sources X pollsters
# with a common interval
self.pollster_matches = collections.defaultdict(set)
# we relate the static resources and per-source discovery to
# each combination of pollster and matching source
resource_factory = lambda: Resources(agent_manager)
self.resources = collections.defaultdict(resource_factory)
self._batch = self.manager.conf.batch_polled_samples
self._telemetry_secret = self.manager.conf.publisher.telemetry_secret
def add(self, pollster, source):
self.pollster_matches[source.name].add(pollster)
key = Resources.key(source.name, pollster)
self.resources[key].setup(source)
def poll_and_notify(self):
"""Polling sample and notify."""
cache = {}
discovery_cache = {}
poll_history = {}
for source_name in self.pollster_matches:
for pollster in self.pollster_matches[source_name]:
key = Resources.key(source_name, pollster)
candidate_res = list(
self.resources[key].get(discovery_cache))
if not candidate_res and pollster.obj.default_discovery:
candidate_res = self.manager.discover(
[pollster.obj.default_discovery], discovery_cache)
# Remove duplicated resources and black resources. Using
# set() requires well defined __hash__ for each resource.
# Since __eq__ is defined, 'not in' is safe here.
polling_resources = []
black_res = self.resources[key].blacklist
history = poll_history.get(pollster.name, [])
for x in candidate_res:
if x not in history:
history.append(x)
if x not in black_res:
polling_resources.append(x)
poll_history[pollster.name] = history
# If no resources, skip for this pollster
if not polling_resources:
p_context = 'new ' if history else ''
LOG.info("Skip pollster %(name)s, no %(p_context)s"
"resources found this cycle",
{'name': pollster.name, 'p_context': p_context})
continue
LOG.info("Polling pollster %(poll)s in the context of "
"%(src)s",
dict(poll=pollster.name, src=source_name))
try:
polling_timestamp = timeutils.utcnow().isoformat()
samples = pollster.obj.get_samples(
manager=self.manager,
cache=cache,
resources=polling_resources
)
sample_batch = []
for sample in samples:
# Note(yuywz): Unify the timestamp of polled samples
sample.set_timestamp(polling_timestamp)
sample_dict = (
publisher_utils.meter_message_from_counter(
sample, self._telemetry_secret
))
if self._batch:
sample_batch.append(sample_dict)
else:
self._send_notification([sample_dict])
if sample_batch:
self._send_notification(sample_batch)
except plugin_base.PollsterPermanentError as err:
LOG.error(
'Prevent pollster %(name)s from '
'polling %(res_list)s on source %(source)s anymore!',
dict(name=pollster.name,
res_list=str(err.fail_res_list),
source=source_name))
self.resources[key].blacklist.extend(err.fail_res_list)
except Exception as err:
LOG.error(
'Continue after error from %(name)s: %(error)s'
% ({'name': pollster.name, 'error': err}),
exc_info=True)
def _send_notification(self, samples):
self.manager.notifier.sample(
{},
'telemetry.polling',
{'samples': samples}
)
class AgentManager(cotyledon.Service):
def __init__(self, worker_id, conf, namespaces=None, pollster_list=None):
namespaces = namespaces or ['compute', 'central']
pollster_list = pollster_list or []
group_prefix = conf.polling.partitioning_group_prefix
# the coordination and pollster-list features are mutually exclusive
# and cannot be used at the same time, to avoid samples being
# duplicated or lost
if pollster_list and conf.coordination.backend_url:
raise PollsterListForbidden()
super(AgentManager, self).__init__(worker_id)
self.conf = conf
def _match(pollster):
"""Find out if pollster name matches to one of the list."""
return any(fnmatch.fnmatch(pollster.name, pattern) for
pattern in pollster_list)
if type(namespaces) is not list:
namespaces = [namespaces]
# we'll have default ['compute', 'central'] here if no namespaces will
# be passed
extensions = (self._extensions('poll', namespace, self.conf).extensions
for namespace in namespaces)
# get the extensions from pollster builder
extensions_fb = (self._extensions_from_builder('poll', namespace)
for namespace in namespaces)
if pollster_list:
extensions = (moves.filter(_match, exts)
for exts in extensions)
extensions_fb = (moves.filter(_match, exts)
for exts in extensions_fb)
self.extensions = list(itertools.chain(*list(extensions))) + list(
itertools.chain(*list(extensions_fb)))
if self.extensions == []:
raise EmptyPollstersList()
discoveries = (self._extensions('discover', namespace,
self.conf).extensions
for namespace in namespaces)
self.discoveries = list(itertools.chain(*list(discoveries)))
self.polling_periodics = None
if self.conf.coordination.backend_url:
# XXX uuid4().bytes ought to work, but it requires ascii for now
coordination_id = str(uuid.uuid4()).encode('ascii')
self.partition_coordinator = coordination.get_coordinator(
self.conf.coordination.backend_url, coordination_id)
else:
self.partition_coordinator = None
# Compose coordination group prefix.
# We'll use namespaces as the basement for this partitioning.
namespace_prefix = '-'.join(sorted(namespaces))
self.group_prefix = ('%s-%s' % (namespace_prefix, group_prefix)
if group_prefix else namespace_prefix)
self.notifier = oslo_messaging.Notifier(
messaging.get_transport(self.conf),
driver=self.conf.publisher_notifier.telemetry_driver,
publisher_id="ceilometer.polling")
self._keystone = None
self._keystone_last_exception = None
@staticmethod
def _get_ext_mgr(namespace, *args, **kwargs):
def _catch_extension_load_error(mgr, ep, exc):
# Extension raising ExtensionLoadError can be ignored,
# and ignore anything we can't import as a safety measure.
if isinstance(exc, plugin_base.ExtensionLoadError):
LOG.exception("Skip loading extension for %s", ep.name)
return
show_exception = (LOG.isEnabledFor(logging.DEBUG)
and isinstance(exc, ImportError))
LOG.error("Failed to import extension for %(name)r: "
"%(error)s",
{'name': ep.name, 'error': exc},
exc_info=show_exception)
if isinstance(exc, ImportError):
return
raise exc
return extension.ExtensionManager(
namespace=namespace,
invoke_on_load=True,
invoke_args=args,
invoke_kwds=kwargs,
on_load_failure_callback=_catch_extension_load_error,
)
def _extensions(self, category, agent_ns=None, *args, **kwargs):
namespace = ('ceilometer.%s.%s' % (category, agent_ns) if agent_ns
else 'ceilometer.%s' % category)
return self._get_ext_mgr(namespace, *args, **kwargs)
def _extensions_from_builder(self, category, agent_ns=None):
ns = ('ceilometer.builder.%s.%s' % (category, agent_ns) if agent_ns
else 'ceilometer.builder.%s' % category)
mgr = self._get_ext_mgr(ns, self.conf)
def _build(ext):
return ext.plugin.get_pollsters_extensions(self.conf)
# NOTE: this seems to be a stevedore bug: if no extensions are found,
# map() raises a RuntimeError, which is not documented.
if mgr.names():
return list(itertools.chain(*mgr.map(_build)))
else:
return []
def join_partitioning_groups(self):
groups = set([self.construct_group_id(d.obj.group_id)
for d in self.discoveries])
# let each set of statically-defined resources have its own group
static_resource_groups = set([
self.construct_group_id(utils.hash_of_set(p.resources))
for p in self.polling_manager.sources
if p.resources
])
groups.update(static_resource_groups)
self.hashrings = dict(
(group, self.partition_coordinator.join_partitioned_group(group))
for group in groups)
def create_polling_task(self):
"""Create an initially empty polling task."""
return PollingTask(self)
def setup_polling_tasks(self):
polling_tasks = {}
for source in self.polling_manager.sources:
polling_task = None
for pollster in self.extensions:
if source.support_meter(pollster.name):
polling_task = polling_tasks.get(source.get_interval())
if not polling_task:
polling_task = self.create_polling_task()
polling_tasks[source.get_interval()] = polling_task
polling_task.add(pollster, source)
return polling_tasks
def construct_group_id(self, discovery_group_id):
return '%s-%s' % (self.group_prefix, discovery_group_id)
def start_polling_tasks(self):
# set shuffle time before polling task if necessary
delay_polling_time = random.randint(
0, self.conf.shuffle_time_before_polling_task)
data = self.setup_polling_tasks()
# Don't start useless threads if no task will run
if not data:
return
# One thread per polling tasks is enough
self.polling_periodics = periodics.PeriodicWorker.create(
[], executor_factory=lambda:
futures.ThreadPoolExecutor(max_workers=len(data)))
for interval, polling_task in data.items():
delay_time = interval + delay_polling_time
@periodics.periodic(spacing=interval, run_immediately=False)
def task(running_task):
self.interval_task(running_task)
utils.spawn_thread(utils.delayed, delay_time,
self.polling_periodics.add, task, polling_task)
utils.spawn_thread(self.polling_periodics.start, allow_empty=True)
def run(self):
super(AgentManager, self).run()
self.polling_manager = pipeline.setup_polling(self.conf)
if self.partition_coordinator:
self.partition_coordinator.start()
self.join_partitioning_groups()
self.start_polling_tasks()
def terminate(self):
self.stop_pollsters_tasks()
if self.partition_coordinator:
self.partition_coordinator.stop()
super(AgentManager, self).terminate()
def interval_task(self, task):
# NOTE(sileht): remove the previous keystone client
# and exception to get a new one in this polling cycle.
self._keystone = None
self._keystone_last_exception = None
task.poll_and_notify()
@property
def keystone(self):
# FIXME(sileht): This lazy loading of keystone client doesn't
# look concurrently safe, we never see issue because once we have
# connected to keystone everything is fine, and because all pollsters
# are delayed during startup. But each polling task creates a new
# client and overrides it which has been created by other polling
# tasks. During this short time bad thing can occur.
#
# I think we must not reset keystone client before
# running a polling task, but refresh it periodically instead.
# NOTE(sileht): we do lazy loading of the keystone client
# for multiple reasons:
# * don't use it if no plugin need it
# * use only one client for all plugins per polling cycle
if self._keystone is None and self._keystone_last_exception is None:
try:
self._keystone = keystone_client.get_client(self.conf)
self._keystone_last_exception = None
except ka_exceptions.ClientException as e:
self._keystone = None
self._keystone_last_exception = e
if self._keystone is not None:
return self._keystone
else:
raise self._keystone_last_exception
@staticmethod
def _parse_discoverer(url):
s = urlparse.urlparse(url)
return (s.scheme or s.path), (s.netloc + s.path if s.scheme else None)
def _discoverer(self, name):
for d in self.discoveries:
if d.name == name:
return d.obj
return None
def discover(self, discovery=None, discovery_cache=None):
resources = []
discovery = discovery or []
for url in discovery:
if discovery_cache is not None and url in discovery_cache:
resources.extend(discovery_cache[url])
continue
name, param = self._parse_discoverer(url)
discoverer = self._discoverer(name)
if discoverer:
try:
if discoverer.KEYSTONE_REQUIRED_FOR_SERVICE:
service_type = getattr(
self.conf.service_types,
discoverer.KEYSTONE_REQUIRED_FOR_SERVICE)
if not keystone_client.get_service_catalog(
self.keystone).get_endpoints(
service_type=service_type):
LOG.warning(
'Skipping %(name)s, %(service_type)s service '
'is not registered in keystone',
{'name': name, 'service_type': service_type})
continue
discovered = discoverer.discover(self, param)
if self.partition_coordinator:
discovered = [
v for v in discovered if self.hashrings[
self.construct_group_id(discoverer.group_id)
].belongs_to_self(six.text_type(v))]
resources.extend(discovered)
if discovery_cache is not None:
discovery_cache[url] = discovered
except ka_exceptions.ClientException as e:
LOG.error('Skipping %(name)s, keystone issue: '
'%(exc)s', {'name': name, 'exc': e})
except Exception as err:
LOG.exception('Unable to discover resources: %s', err)
else:
LOG.warning('Unknown discovery extension: %s', name)
return resources
def stop_pollsters_tasks(self):
if self.polling_periodics:
self.polling_periodics.stop()
self.polling_periodics.wait()
self.polling_periodics = None

@@ -1,276 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base class for plugins.
"""
import abc
import collections
from oslo_log import log
import oslo_messaging
import six
from stevedore import extension
from ceilometer import messaging
LOG = log.getLogger(__name__)
ExchangeTopics = collections.namedtuple('ExchangeTopics',
['exchange', 'topics'])
class PluginBase(object):
"""Base class for all plugins."""
@six.add_metaclass(abc.ABCMeta)
class NotificationBase(PluginBase):
"""Base class for plugins that support the notification API."""
def __init__(self, manager):
super(NotificationBase, self).__init__()
# NOTE(gordc): this is filter rule used by oslo.messaging to dispatch
# messages to an endpoint.
if self.event_types:
self.filter_rule = oslo_messaging.NotificationFilter(
event_type='|'.join(self.event_types))
self.manager = manager
@staticmethod
def get_notification_topics(conf):
if 'notification_topics' in conf:
return conf.notification_topics
return conf.oslo_messaging_notifications.topics
@abc.abstractproperty
def event_types(self):
"""Return a sequence of strings.
Strings are defining the event types to be given to this plugin.
"""
@abc.abstractmethod
def get_targets(self, conf):
"""Return a sequence of oslo.messaging.Target.
Sequence is defining the exchange and topics to be connected for this
plugin.
:param conf: Configuration.
"""
@abc.abstractmethod
def process_notification(self, message):
"""Return a sequence of Counter instances for the given message.
:param message: Message to process.
"""
@staticmethod
def _consume_and_drop(notifications):
"""RPC endpoint for useless notification level"""
# NOTE(sileht): nothing special to do here, but because we listen
# on the generic notification exchange we have to consume all of its
# queues
audit = _consume_and_drop
debug = _consume_and_drop
warn = _consume_and_drop
error = _consume_and_drop
critical = _consume_and_drop
def info(self, notifications):
"""RPC endpoint for notification messages at info level
When another service sends a notification over the message
bus, this method receives it.
:param notifications: list of notifications
"""
self._process_notifications('info', notifications)
def sample(self, notifications):
"""RPC endpoint for notification messages at sample level
When another service sends a notification over the message
bus at sample priority, this method receives it.
:param notifications: list of notifications
"""
self._process_notifications('sample', notifications)
def _process_notifications(self, priority, notifications):
for notification in notifications:
try:
notification = messaging.convert_to_old_notification_format(
priority, notification)
self.to_samples_and_publish(notification)
except Exception:
LOG.error('Fail to process notification', exc_info=True)
def to_samples_and_publish(self, notification):
"""Return samples produced by *process_notification*.
Samples produced for the given notification.
:param context: Execution context from the service or RPC call
:param notification: The notification to process.
"""
with self.manager.publisher() as p:
p(list(self.process_notification(notification)))
class ExtensionLoadError(Exception):
"""Error of loading pollster plugin.
PollsterBase provides a hook, setup_environment, called during pollster
loading to set up required HW/SW dependencies. Any exception it raises
is propagated as ExtensionLoadError, and loading of that pollster is skipped.
"""
pass
class PollsterPermanentError(Exception):
"""Permanent error when polling.
When an unrecoverable error happens during polling, a pollster can raise
this exception with the failed resources to prevent itself from polling
them any more. Each resource is one of the resources passed to
get_samples that caused the polling error.
"""
def __init__(self, resources):
self.fail_res_list = resources
@six.add_metaclass(abc.ABCMeta)
class PollsterBase(PluginBase):
"""Base class for plugins that support the polling API."""
def setup_environment(self):
"""Setup required environment for pollster.
Each subclass could overwrite it for specific usage. Any exception
raised in this function would prevent pollster being loaded.
"""
pass
def __init__(self, conf):
super(PollsterBase, self).__init__()
self.conf = conf
try:
self.setup_environment()
except Exception as err:
raise ExtensionLoadError(err)
@abc.abstractproperty
def default_discovery(self):
"""Default discovery to use for this pollster.
There are three ways a pollster can get a list of resources to poll,
listed here in ascending order of precedence:
1. from the per-agent discovery,
2. from the per-pollster discovery (defined here)
3. from the per-pipeline configured discovery and/or per-pipeline
configured static resources.
If a pollster should only get resources from #1 or #3, this property
should be set to None.
"""
@abc.abstractmethod
def get_samples(self, manager, cache, resources):
"""Return a sequence of Counter instances from polling the resources.
:param manager: The service manager class invoking the plugin.
:param cache: A dictionary to allow pollsters to pass data
between themselves when recomputing it would be
expensive (e.g., asking another service for a
list of objects).
:param resources: A list of resources the pollster will get data
from. It's up to the specific pollster to decide
how to use it. It is usually supplied by a discovery,
see ``default_discovery`` for more information.
"""
@classmethod
def build_pollsters(cls, conf):
"""Return a list of tuple (name, pollster).
The name is the meter name which the pollster would return, the
pollster is a pollster object instance. The pollster which implements
this method should be registered in the namespace of
ceilometer.builder.xxx instead of ceilometer.poll.xxx.
"""
return []
@classmethod
def get_pollsters_extensions(cls, conf):
"""Return a list of stevedore extensions.
The returned stevedore extensions wrap the pollster object instances
returned by build_pollsters.
"""
extensions = []
try:
for name, pollster in cls.build_pollsters(conf):
ext = extension.Extension(name, None, cls, pollster)
extensions.append(ext)
except Exception as err:
raise ExtensionLoadError(err)
return extensions
@six.add_metaclass(abc.ABCMeta)
class DiscoveryBase(object):
KEYSTONE_REQUIRED_FOR_SERVICE = None
"""Service type required in keystone catalog to works"""
def __init__(self, conf):
self.conf = conf
@abc.abstractmethod
def discover(self, manager, param=None):
"""Discover resources to monitor.
The most fine-grained discovery should be preferred, so the work is
most evenly distributed among multiple agents (if they exist).
For example:
if the pollster can separately poll individual resources, it should
have its own discovery implementation to discover those resources. If
it can only poll per-tenant, then the `TenantDiscovery` should be
used. If even that is not possible, use `EndpointDiscovery` (see
their respective docstrings).
:param manager: The service manager class invoking the plugin.
:param param: an optional parameter to guide the discovery
"""
@property
def group_id(self):
"""Return group id of this discovery.
All running discoveries with the same group_id should return the same
set of resources at a given point in time. By default, a discovery is
put into a global group, meaning that all discoveries of its type
running anywhere in the cloud, return the same set of resources.
This property can be overridden to provide correct grouping of
localized discoveries. For example, compute discovery is localized
to a host, which is reflected in its group_id.
A None value signifies that this discovery does not want to be part
of workload partitioning at all.
"""
return 'global'
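
To make the plugin API above concrete, here is a hypothetical pollster
sketch; it is not part of the retired tree, and the meter name, unit, and
the ``local_node`` discovery name are assumptions::

    # A hypothetical pollster built on the PollsterBase API defined above.
    from ceilometer.agent import plugin_base
    from ceilometer import sample

    class DummyPollster(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            # Assumes a local-node discovery plugin registered under
            # the name 'local_node'.
            return 'local_node'

        def get_samples(self, manager, cache, resources):
            for res in resources:
                # Field values are illustrative placeholders.
                yield sample.Sample(
                    name='dummy.meter',
                    type=sample.TYPE_GAUGE,
                    unit='units',
                    volume=1,
                    user_id=None,
                    project_id=None,
                    resource_id=res,
                    timestamp=None,   # set later by the polling task
                    resource_metadata={},
                )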

@@ -1,112 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2015-2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import uuid
from oslo_config import cfg
from oslo_log import log
from paste import deploy
import pecan
from ceilometer.api import hooks
from ceilometer.api import middleware
LOG = log.getLogger(__name__)
OPTS = [
cfg.StrOpt('api_paste_config',
default="api_paste.ini",
help="Configuration file for WSGI definition of API."
),
]
API_OPTS = [
cfg.IntOpt('default_api_return_limit',
min=1,
default=100,
help='Default maximum number of items returned by API request.'
),
]
def setup_app(pecan_config=None, conf=None):
if conf is None:
raise RuntimeError("No configuration passed")
# FIXME: Replace DBHook with a hooks.TransactionHook
app_hooks = [hooks.ConfigHook(conf),
hooks.DBHook(conf),
hooks.NotifierHook(conf),
hooks.TranslationHook()]
pecan_config = pecan_config or {
"app": {
'root': 'ceilometer.api.controllers.root.RootController',
'modules': ['ceilometer.api'],
}
}
pecan.configuration.set_config(dict(pecan_config), overwrite=True)
app = pecan.make_app(
pecan_config['app']['root'],
hooks=app_hooks,
wrap_app=middleware.ParsableErrorMiddleware,
guess_content_type_from_ext=False
)
return app
# NOTE(sileht): pastedeploy uses ConfigParser to handle
# global_conf. Since Python 3, ConfigParser doesn't allow
# storing objects as config values, only strings, so to pass an
# object created before paste loads the app we store it in a
# global var. Each loaded app stores its configuration under a
# unique key to be concurrency safe.
global APPCONFIGS
APPCONFIGS = {}
def load_app(conf):
global APPCONFIGS
# Build the WSGI app
cfg_file = None
cfg_path = conf.api_paste_config
if not os.path.isabs(cfg_path):
cfg_file = conf.find_file(cfg_path)
elif os.path.exists(cfg_path):
cfg_file = cfg_path
if not cfg_file:
raise cfg.ConfigFilesNotFoundError([conf.api_paste_config])
configkey = str(uuid.uuid4())
APPCONFIGS[configkey] = conf
LOG.info("Full WSGI config used: %s", cfg_file)
LOG.warning("Note: Ceilometer API is deprecated; use APIs from Aodh"
" (alarms), Gnocchi (metrics) and/or Panko (events).")
return deploy.loadapp("config:" + cfg_file,
global_conf={'configkey': configkey})
def app_factory(global_config, **local_conf):
global APPCONFIGS
conf = APPCONFIGS.get(global_config.get('configkey'))
return setup_app(conf=conf)

@@ -1,25 +0,0 @@
# -*- mode: python -*-
#
# Copyright 2013 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Use this file for deploying the API under mod_wsgi.
See http://pecan.readthedocs.org/en/latest/deployment.html for details.
"""
from ceilometer import service
from ceilometer.api import app
# Initialize the oslo configuration library and logging
conf = service.prepare_service([])
application = app.load_app(conf)

@@ -1,56 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from ceilometer.api.controllers.v2 import root as v2
MEDIA_TYPE_JSON = 'application/vnd.openstack.telemetry-%s+json'
MEDIA_TYPE_XML = 'application/vnd.openstack.telemetry-%s+xml'
class RootController(object):
def __init__(self):
self.v2 = v2.V2Controller()
@pecan.expose('json')
def index(self):
base_url = pecan.request.application_url
available = [{'tag': 'v2', 'date': '2013-02-13T00:00:00Z', }]
collected = [version_descriptor(base_url, v['tag'], v['date'])
for v in available]
versions = {'versions': {'values': collected}}
return versions
def version_descriptor(base_url, version, released_on):
url = version_url(base_url, version)
return {
'id': version,
'links': [
{'href': url, 'rel': 'self', },
{'href': 'http://docs.openstack.org/',
'rel': 'describedby', 'type': 'text/html', }],
'media-types': [
{'base': 'application/json', 'type': MEDIA_TYPE_JSON % version, },
{'base': 'application/xml', 'type': MEDIA_TYPE_XML % version, }],
'status': 'stable',
'updated': released_on,
}
def version_url(base_url, version_number):
return '%s/%s' % (base_url, version_number)

@@ -1,222 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import datetime
import functools
import inspect
import json
from oslo_utils import strutils
from oslo_utils import timeutils
import pecan
import six
import wsme
from wsme import types as wtypes
from ceilometer.i18n import _
operation_kind = ('lt', 'le', 'eq', 'ne', 'ge', 'gt')
operation_kind_enum = wtypes.Enum(str, *operation_kind)
class ClientSideError(wsme.exc.ClientSideError):
def __init__(self, error, status_code=400):
pecan.response.translatable_error = error
super(ClientSideError, self).__init__(error, status_code)
class EntityNotFound(ClientSideError):
def __init__(self, entity, id):
super(EntityNotFound, self).__init__(
_("%(entity)s %(id)s Not Found") % {'entity': entity,
'id': id},
status_code=404)
class ProjectNotAuthorized(ClientSideError):
def __init__(self, id, aspect='project'):
params = dict(aspect=aspect, id=id)
super(ProjectNotAuthorized, self).__init__(
_("Not Authorized to access %(aspect)s %(id)s") % params,
status_code=401)
class Base(wtypes.DynamicBase):
@classmethod
def from_db_model(cls, m):
return cls(**(m.as_dict()))
@classmethod
def from_db_and_links(cls, m, links):
return cls(links=links, **(m.as_dict()))
def as_dict(self, db_model):
valid_keys = inspect.getargspec(db_model.__init__)[0]
if 'self' in valid_keys:
valid_keys.remove('self')
return self.as_dict_from_keys(valid_keys)
def as_dict_from_keys(self, keys):
return dict((k, getattr(self, k))
for k in keys
if hasattr(self, k) and
getattr(self, k) != wsme.Unset)
class Link(Base):
"""A link representation."""
href = wtypes.text
"The url of a link"
rel = wtypes.text
"The name of a link"
@classmethod
def sample(cls):
return cls(href=('http://localhost:8777/v2/meters/volume?'
'q.field=resource_id&'
'q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36'),
rel='volume'
)
class Query(Base):
"""Query filter."""
# The data types supported by the query.
_supported_types = ['integer', 'float', 'string', 'boolean', 'datetime']
# Functions to convert the data field to the correct type.
_type_converters = {'integer': int,
'float': float,
'boolean': functools.partial(
strutils.bool_from_string, strict=True),
'string': six.text_type,
'datetime': timeutils.parse_isotime}
_op = None # provide a default
def get_op(self):
return self._op or 'eq'
def set_op(self, value):
self._op = value
field = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the field to test"
# op = wsme.wsattr(operation_kind, default='eq')
# this ^ doesn't seem to work.
op = wsme.wsproperty(operation_kind_enum, get_op, set_op)
"The comparison operator. Defaults to 'eq'."
value = wsme.wsattr(wtypes.text, mandatory=True)
"The value to compare against the stored data"
type = wtypes.text
"The data type of value to compare against the stored data"
def __repr__(self):
# for logging calls
return '<Query %r %s %r %s>' % (self.field,
self.op,
self.value,
self.type)
@classmethod
def sample(cls):
return cls(field='resource_id',
op='eq',
value='bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
type='string'
)
def as_dict(self):
return self.as_dict_from_keys(['field', 'op', 'type', 'value'])
def _get_value_as_type(self, forced_type=None):
"""Convert metadata value to the specified data type.
This method is called during a metadata query to convert the query
value to the data type specified by the user. If no data type is
given, the value is parsed with ast.literal_eval to attempt a smart
conversion.
NOTE (flwang) Using "_" as prefix to avoid an InvocationError raised
from wsmeext/sphinxext.py. It's OK to call it outside the Query class.
Because the "public" side of that class is actually the outside of the
API, and the "private" side is the API implementation. The method is
only used in the API implementation, so it's OK.
:returns: metadata value converted with the specified data type.
"""
type = forced_type or self.type
try:
converted_value = self.value
if not type:
try:
converted_value = ast.literal_eval(self.value)
except (ValueError, SyntaxError):
# Unable to convert the metadata value automatically
# let it default to self.value
pass
else:
if type not in self._supported_types:
# Types must be explicitly declared so the
# correct type converter may be used. Subclasses
# of Query may define _supported_types and
# _type_converters to define their own types.
raise TypeError()
converted_value = self._type_converters[type](self.value)
if isinstance(converted_value, datetime.datetime):
converted_value = timeutils.normalize_time(converted_value)
except ValueError:
msg = (_('Unable to convert the value %(value)s'
' to the expected data type %(type)s.') %
{'value': self.value, 'type': type})
raise ClientSideError(msg)
except TypeError:
msg = (_('The data type %(type)s is not supported. The supported'
' data type list is: %(supported)s') %
{'type': type, 'supported': self._supported_types})
raise ClientSideError(msg)
except Exception:
msg = (_('Unexpected exception converting %(value)s to'
' the expected data type %(type)s.') %
{'value': self.value, 'type': type})
raise ClientSideError(msg)
return converted_value
class JsonType(wtypes.UserType):
"""A simple JSON type."""
basetype = wtypes.text
name = 'json'
@staticmethod
def validate(value):
# check that value can be serialised
json.dumps(value)
return value
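# NOTE: a minimal illustrative sketch (not part of the original module)
# of the two conversion paths in Query._get_value_as_type above; the
# field names and values are hypothetical examples only.
def _query_conversion_example():
    q = Query(field='timestamp', op='gt',
              value='2015-01-01T12:00:00', type='datetime')
    # explicit type: routed through _type_converters['datetime']
    assert isinstance(q._get_value_as_type(), datetime.datetime)
    q = Query(field='metadata.size', op='eq', value='42')
    # no type given: ast.literal_eval infers an int
    assert q._get_value_as_type() == 42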

View File

@ -1,90 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from ceilometer.api.controllers.v2 import base
from ceilometer import utils
def _flatten_capabilities(capabilities):
return dict((k, v) for k, v in utils.recursive_keypairs(capabilities))
class Capabilities(base.Base):
"""A representation of the API and storage capabilities.
Usually constrained by restrictions imposed by the storage driver.
"""
api = {wtypes.text: bool}
"A flattened dictionary of API capabilities"
storage = {wtypes.text: bool}
"A flattened dictionary of storage capabilities"
@classmethod
def sample(cls):
return cls(
api=_flatten_capabilities({
'meters': {'query': {'simple': True,
'metadata': True}},
'resources': {'query': {'simple': True,
'metadata': True}},
'samples': {'query': {'simple': True,
'metadata': True,
'complex': True}},
'statistics': {'groupby': True,
'query': {'simple': True,
'metadata': True},
'aggregation': {'standard': True,
'selectable': {
'max': True,
'min': True,
'sum': True,
'avg': True,
'count': True,
'stddev': True,
'cardinality': True,
'quartile': False}}},
}),
storage=_flatten_capabilities(
{'storage': {'production_ready': True}}),
)
class CapabilitiesController(rest.RestController):
"""Manages capabilities queries."""
@wsme_pecan.wsexpose(Capabilities)
def get(self):
"""Returns a flattened dictionary of API capabilities.
Capabilities supported by the currently configured storage driver.
"""
# variation in API capabilities is effectively determined by
# the lack of strict feature parity across storage drivers
conn = pecan.request.storage_conn
driver_capabilities = conn.get_capabilities().copy()
driver_perf = conn.get_storage_capabilities()
return Capabilities(api=_flatten_capabilities(driver_capabilities),
storage=_flatten_capabilities(driver_perf))
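# NOTE: an illustrative sketch (not part of the original module).
# Assuming utils.recursive_keypairs joins nested keys with its default
# ':' separator, _flatten_capabilities turns the nested capability dicts
# into the single-level form returned by this API:
def _flatten_example():
    flat = _flatten_capabilities({'meters': {'query': {'simple': True}}})
    # flat == {'meters:query:simple': True}
    return flat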

View File

@ -1,505 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import datetime
from oslo_log import log
from oslo_utils import strutils
from oslo_utils import timeutils
import pecan
from pecan import rest
import six
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from ceilometer.api.controllers.v2 import base
from ceilometer.api.controllers.v2 import utils as v2_utils
from ceilometer.api import rbac
from ceilometer.i18n import _
from ceilometer.publisher import utils as publisher_utils
from ceilometer import sample
from ceilometer import storage
from ceilometer.storage import base as storage_base
from ceilometer import utils
LOG = log.getLogger(__name__)
class OldSample(base.Base):
"""A single measurement for a given meter and resource.
This class is deprecated in favor of Sample.
"""
source = wtypes.text
"The ID of the source that identifies where the sample comes from"
counter_name = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the meter"
# FIXME(dhellmann): Make this meter_name?
counter_type = wsme.wsattr(wtypes.text, mandatory=True)
"The type of the meter (see :ref:`measurements`)"
# FIXME(dhellmann): Make this meter_type?
counter_unit = wsme.wsattr(wtypes.text, mandatory=True)
"The unit of measure for the value in counter_volume"
# FIXME(dhellmann): Make this meter_unit?
counter_volume = wsme.wsattr(float, mandatory=True)
"The actual measured value"
user_id = wtypes.text
"The ID of the user who last triggered an update to the resource"
project_id = wtypes.text
"The ID of the project or tenant that owns the resource"
resource_id = wsme.wsattr(wtypes.text, mandatory=True)
"The ID of the :class:`Resource` for which the measurements are taken"
timestamp = datetime.datetime
"UTC date and time when the measurement was made"
recorded_at = datetime.datetime
"When the sample has been recorded."
resource_metadata = {wtypes.text: wtypes.text}
"Arbitrary metadata associated with the resource"
message_id = wtypes.text
"A unique identifier for the sample"
def __init__(self, counter_volume=None, resource_metadata=None,
timestamp=None, **kwds):
resource_metadata = resource_metadata or {}
if counter_volume is not None:
counter_volume = float(counter_volume)
resource_metadata = v2_utils.flatten_metadata(resource_metadata)
# this is to make it easier for clients to pass a timestamp in
if timestamp and isinstance(timestamp, six.string_types):
timestamp = timeutils.parse_isotime(timestamp)
super(OldSample, self).__init__(counter_volume=counter_volume,
resource_metadata=resource_metadata,
timestamp=timestamp, **kwds)
if self.resource_metadata in (wtypes.Unset, None):
self.resource_metadata = {}
@classmethod
def sample(cls):
return cls(source='openstack',
counter_name='instance',
counter_type='gauge',
counter_unit='instance',
counter_volume=1,
resource_id='bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
project_id='35b17138-b364-4e6a-a131-8f3099c5be68',
user_id='efd87807-12d2-4b38-9c70-5f5c2ac427ff',
recorded_at=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
timestamp=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
resource_metadata={'name1': 'value1',
'name2': 'value2'},
message_id='5460acce-4fd6-480d-ab18-9735ec7b1996',
)
class Statistics(base.Base):
"""Computed statistics for a query."""
groupby = {wtypes.text: wtypes.text}
"Dictionary of field names for group, if groupby statistics are requested"
unit = wtypes.text
"The unit type of the data set"
min = float
"The minimum volume seen in the data"
max = float
"The maximum volume seen in the data"
avg = float
"The average of all of the volume values seen in the data"
sum = float
"The total of all of the volume values seen in the data"
count = int
"The number of samples seen"
aggregate = {wtypes.text: float}
"The selectable aggregate value(s)"
duration = float
"The difference, in seconds, between the oldest and newest timestamp"
duration_start = datetime.datetime
"UTC date and time of the earliest timestamp, or the query start time"
duration_end = datetime.datetime
"UTC date and time of the oldest timestamp, or the query end time"
period = int
"The difference, in seconds, between the period start and end"
period_start = datetime.datetime
"UTC date and time of the period start"
period_end = datetime.datetime
"UTC date and time of the period end"
def __init__(self, start_timestamp=None, end_timestamp=None, **kwds):
super(Statistics, self).__init__(**kwds)
self._update_duration(start_timestamp, end_timestamp)
def _update_duration(self, start_timestamp, end_timestamp):
# "Clamp" the timestamps we return to the original time
# range, excluding the offset.
if (start_timestamp and
self.duration_start and
self.duration_start < start_timestamp):
self.duration_start = start_timestamp
LOG.debug('clamping min timestamp to range')
if (end_timestamp and
self.duration_end and
self.duration_end > end_timestamp):
self.duration_end = end_timestamp
LOG.debug('clamping max timestamp to range')
# If we got valid timestamps back, compute a duration in seconds.
#
# If the min > max after clamping then we know the
# timestamps on the samples fell outside of the time
# range we care about for the query, so treat them as
# "invalid."
#
# If the timestamps are invalid, return None as a
# sentinel indicating that there is something "funny"
# about the range.
if (self.duration_start and
self.duration_end and
self.duration_start <= self.duration_end):
self.duration = timeutils.delta_seconds(self.duration_start,
self.duration_end)
else:
self.duration_start = self.duration_end = self.duration = None
@classmethod
def sample(cls):
return cls(unit='GiB',
min=1,
max=9,
avg=4.5,
sum=45,
count=10,
duration_start=datetime.datetime(2013, 1, 4, 16, 42),
duration_end=datetime.datetime(2013, 1, 4, 16, 47),
period=7200,
period_start=datetime.datetime(2013, 1, 4, 16, 00),
period_end=datetime.datetime(2013, 1, 4, 18, 00),
)
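# NOTE: an illustrative sketch (not part of the original module) of the
# clamping performed by _update_duration, using hypothetical timestamps.
# Both raw duration bounds fall outside the queried range, so both are
# clamped and the duration is recomputed from the clamped values.
def _duration_clamping_example():
    s = Statistics(duration_start=datetime.datetime(2015, 1, 1, 11, 0),
                   duration_end=datetime.datetime(2015, 1, 1, 13, 0),
                   start_timestamp=datetime.datetime(2015, 1, 1, 12, 0),
                   end_timestamp=datetime.datetime(2015, 1, 1, 12, 30))
    # s.duration_start == 12:00, s.duration_end == 12:30,
    # s.duration == 1800.0 seconds
    return s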
class Aggregate(base.Base):
func = wsme.wsattr(wtypes.text, mandatory=True)
"The aggregation function name"
param = wsme.wsattr(wtypes.text, default=None)
"The paramter to the aggregation function"
def __init__(self, **kwargs):
super(Aggregate, self).__init__(**kwargs)
@staticmethod
def validate(aggregate):
valid_agg = (storage_base.Connection.CAPABILITIES.get('statistics', {})
.get('aggregation', {}).get('selectable', {}).keys())
if aggregate.func not in valid_agg:
msg = _('Invalid aggregation function: %s') % aggregate.func
raise base.ClientSideError(msg)
return aggregate
@classmethod
def sample(cls):
return cls(func='cardinality',
param='resource_id')
def _validate_groupby_fields(groupby_fields):
"""Checks that the list of groupby fields from request is valid.
If all fields are valid, returns fields with duplicates removed.
"""
# NOTE(terriyu): Currently, metadata fields are supported in our
# group by statistics implementation only for mongodb
valid_fields = set(['user_id', 'resource_id', 'project_id', 'source',
'resource_metadata.instance_type'])
invalid_fields = set(groupby_fields) - valid_fields
if invalid_fields:
raise wsme.exc.UnknownArgument(invalid_fields,
"Invalid groupby fields")
# Remove duplicate fields
# NOTE(terriyu): This assumes that we don't care about the order of the
# group by fields.
return list(set(groupby_fields))
class MeterController(rest.RestController):
"""Manages operations on a single meter."""
_custom_actions = {
'statistics': ['GET'],
}
def __init__(self, meter_name):
pecan.request.context['meter_name'] = meter_name
self.meter_name = meter_name
@wsme_pecan.wsexpose([OldSample], [base.Query], int)
def get_all(self, q=None, limit=None):
"""Return samples for the meter.
:param q: Filter rules for the data to be returned.
:param limit: Maximum number of samples to return.
"""
rbac.enforce('get_samples', pecan.request)
q = q or []
limit = v2_utils.enforce_limit(limit)
kwargs = v2_utils.query_to_kwargs(q, storage.SampleFilter.__init__)
kwargs['meter'] = self.meter_name
f = storage.SampleFilter(**kwargs)
return [OldSample.from_db_model(e)
for e in pecan.request.storage_conn.get_samples(f, limit=limit)
]
@wsme_pecan.wsexpose([OldSample], str, body=[OldSample], status_code=201)
def post(self, direct='', samples=None):
"""Post a list of new Samples to Telemetry.
:param direct: a flag indicating whether the samples will be posted
directly to storage or not.
:param samples: a list of samples within the request body.
"""
rbac.enforce('create_samples', pecan.request)
direct = strutils.bool_from_string(direct)
if not samples:
msg = _('Samples should be included in request body')
raise base.ClientSideError(msg)
now = timeutils.utcnow()
auth_project = rbac.get_limited_to_project(pecan.request.headers)
def_source = pecan.request.cfg.sample_source
def_project_id = pecan.request.headers.get('X-Project-Id')
def_user_id = pecan.request.headers.get('X-User-Id')
published_samples = []
for s in samples:
if self.meter_name != s.counter_name:
raise wsme.exc.InvalidInput('counter_name', s.counter_name,
'should be %s' % self.meter_name)
if s.message_id:
raise wsme.exc.InvalidInput('message_id', s.message_id,
'The message_id must not be set')
if s.counter_type not in sample.TYPES:
raise wsme.exc.InvalidInput('counter_type', s.counter_type,
'The counter type must be: ' +
', '.join(sample.TYPES))
s.user_id = (s.user_id or def_user_id)
s.project_id = (s.project_id or def_project_id)
s.source = '%s:%s' % (s.project_id, (s.source or def_source))
s.timestamp = (s.timestamp or now)
if auth_project and auth_project != s.project_id:
# non admin user trying to cross post to another project_id
auth_msg = 'cannot post samples to other projects'
raise wsme.exc.InvalidInput('project_id', s.project_id,
auth_msg)
published_sample = sample.Sample(
name=s.counter_name,
type=s.counter_type,
unit=s.counter_unit,
volume=s.counter_volume,
user_id=s.user_id,
project_id=s.project_id,
resource_id=s.resource_id,
timestamp=s.timestamp.isoformat(),
resource_metadata=utils.restore_nesting(s.resource_metadata,
separator='.'),
source=s.source)
s.message_id = published_sample.id
sample_dict = publisher_utils.meter_message_from_counter(
published_sample,
pecan.request.cfg.publisher.telemetry_secret)
if direct:
ts = timeutils.parse_isotime(sample_dict['timestamp'])
sample_dict['timestamp'] = timeutils.normalize_time(ts)
pecan.request.storage_conn.record_metering_data(sample_dict)
else:
published_samples.append(sample_dict)
if not direct:
pecan.request.notifier.sample(
{'user': def_user_id,
'tenant': def_project_id,
'is_admin': True},
'telemetry.api',
{'samples': published_samples})
return samples
@wsme_pecan.wsexpose([Statistics],
[base.Query], [six.text_type], int, [Aggregate])
def statistics(self, q=None, groupby=None, period=None, aggregate=None):
"""Computes the statistics of the samples in the time range given.
:param q: Filter rules for the data to be returned.
:param groupby: Fields for group by aggregation
:param period: If specified, the result will be an array of statistics
computed over consecutive periods of that many seconds.
:param aggregate: The selectable aggregation functions to be applied.
"""
rbac.enforce('compute_statistics', pecan.request)
q = q or []
groupby = groupby or []
aggregate = aggregate or []
if period and period < 0:
raise base.ClientSideError(_("Period must be positive."))
kwargs = v2_utils.query_to_kwargs(q, storage.SampleFilter.__init__)
kwargs['meter'] = self.meter_name
f = storage.SampleFilter(**kwargs)
g = _validate_groupby_fields(groupby)
aggregate = utils.uniq(aggregate, ['func', 'param'])
# Find the original timestamp in the query to use for clamping
# the duration returned in the statistics.
start = end = None
for i in q:
if i.field == 'timestamp' and i.op in ('lt', 'le'):
end = timeutils.parse_isotime(i.value).replace(
tzinfo=None)
elif i.field == 'timestamp' and i.op in ('gt', 'ge'):
start = timeutils.parse_isotime(i.value).replace(
tzinfo=None)
try:
computed = pecan.request.storage_conn.get_meter_statistics(
f, period, g, aggregate)
return [Statistics(start_timestamp=start,
end_timestamp=end,
**c.as_dict())
for c in computed]
except OverflowError as e:
params = dict(period=period, err=e)
raise base.ClientSideError(
_("Invalid period %(period)s: %(err)s") % params)
class Meter(base.Base):
"""One category of measurements."""
name = wtypes.text
"The unique name for the meter"
type = wtypes.Enum(str, *sample.TYPES)
"The meter type (see :ref:`measurements`)"
unit = wtypes.text
"The unit of measure"
resource_id = wtypes.text
"The ID of the :class:`Resource` for which the measurements are taken"
project_id = wtypes.text
"The ID of the project or tenant that owns the resource"
user_id = wtypes.text
"The ID of the user who last triggered an update to the resource"
source = wtypes.text
"The ID of the source that identifies where the meter comes from"
meter_id = wtypes.text
"The unique identifier for the meter"
def __init__(self, **kwargs):
meter_id = '%s+%s' % (kwargs['resource_id'], kwargs['name'])
# meter_id is of type Unicode but base64.encodestring() only accepts
# strings. See bug #1333177
meter_id = base64.b64encode(meter_id.encode('utf-8'))
kwargs['meter_id'] = meter_id
super(Meter, self).__init__(**kwargs)
@classmethod
def sample(cls):
return cls(name='instance',
type='gauge',
unit='instance',
resource_id='bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
project_id='35b17138-b364-4e6a-a131-8f3099c5be68',
user_id='efd87807-12d2-4b38-9c70-5f5c2ac427ff',
source='openstack',
)
class MetersController(rest.RestController):
"""Works on meters."""
@pecan.expose()
def _lookup(self, meter_name, *remainder):
return MeterController(meter_name), remainder
@wsme_pecan.wsexpose([Meter], [base.Query], int, str)
def get_all(self, q=None, limit=None, unique=''):
"""Return all known meters, based on the data recorded so far.
:param q: Filter rules for the meters to be returned.
:param limit: Maximum number of meters to return.
:param unique: a flag indicating whether only unique meters should be
returned.
"""
rbac.enforce('get_meters', pecan.request)
q = q or []
# Timestamp field is not supported for Meter queries
limit = v2_utils.enforce_limit(limit)
kwargs = v2_utils.query_to_kwargs(
q, pecan.request.storage_conn.get_meters,
['limit'], allow_timestamps=False)
return [Meter.from_db_model(m)
for m in pecan.request.storage_conn.get_meters(
limit=limit, unique=strutils.bool_from_string(unique),
**kwargs)]
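# NOTE: an illustrative sketch (not part of the original module) of the
# meter_id encoding done in Meter.__init__ above, reusing the resource id
# from the sample() methods. The id is just base64 over
# '<resource_id>+<name>', so clients can decode it back into its parts.
def _meter_id_example():
    raw = '%s+%s' % ('bd9431c1-8d69-4ad3-803a-8d4a6b89fd36', 'instance')
    meter_id = base64.b64encode(raw.encode('utf-8'))
    assert base64.b64decode(meter_id).decode('utf-8') == raw
    return meter_id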

View File

@ -1,359 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import jsonschema
from oslo_log import log
from oslo_utils import timeutils
import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from ceilometer.api.controllers.v2 import base
from ceilometer.api.controllers.v2 import samples
from ceilometer.api.controllers.v2 import utils as v2_utils
from ceilometer.api import rbac
from ceilometer.i18n import _
from ceilometer import storage
from ceilometer import utils
LOG = log.getLogger(__name__)
class ComplexQuery(base.Base):
"""Holds a sample query encoded in json."""
filter = wtypes.text
"The filter expression encoded in json."
orderby = wtypes.text
"List of single-element dicts for specifying the ordering of the results."
limit = int
"The maximum number of results to be returned."
@classmethod
def sample(cls):
return cls(filter='{"and": [{"and": [{"=": ' +
'{"counter_name": "cpu_util"}}, ' +
'{">": {"counter_volume": 0.23}}, ' +
'{"<": {"counter_volume": 0.26}}]}, ' +
'{"or": [{"and": [{">": ' +
'{"timestamp": "2013-12-01T18:00:00"}}, ' +
'{"<": ' +
'{"timestamp": "2013-12-01T18:15:00"}}]}, ' +
'{"and": [{">": ' +
'{"timestamp": "2013-12-01T18:30:00"}}, ' +
'{"<": ' +
'{"timestamp": "2013-12-01T18:45:00"}}]}]}]}',
orderby='[{"counter_volume": "ASC"}, ' +
'{"timestamp": "DESC"}]',
limit=42
)
def _list_to_regexp(items, regexp_prefix=""):
regexp = ["^%s$" % item for item in items]
regexp = regexp_prefix + "|".join(regexp)
return regexp
class ValidatedComplexQuery(object):
complex_operators = ["and", "or"]
order_directions = ["asc", "desc"]
simple_ops = ["=", "!=", "<", ">", "<=", "=<", ">=", "=>", "=~"]
regexp_prefix = "(?i)"
complex_ops = _list_to_regexp(complex_operators, regexp_prefix)
simple_ops = _list_to_regexp(simple_ops, regexp_prefix)
order_directions = _list_to_regexp(order_directions, regexp_prefix)
timestamp_fields = ["timestamp", "state_timestamp"]
def __init__(self, query, db_model, additional_name_mapping=None,
metadata_allowed=False):
additional_name_mapping = additional_name_mapping or {}
self.name_mapping = {"user": "user_id",
"project": "project_id"}
self.name_mapping.update(additional_name_mapping)
valid_keys = db_model.get_field_names()
valid_keys = list(valid_keys) + list(self.name_mapping.keys())
valid_fields = _list_to_regexp(valid_keys)
if metadata_allowed:
valid_filter_fields = valid_fields + r"|^metadata\.[\S]+$"
else:
valid_filter_fields = valid_fields
schema_value = {
"oneOf": [{"type": "string"},
{"type": "number"},
{"type": "boolean"}],
"minProperties": 1,
"maxProperties": 1}
schema_value_in = {
"type": "array",
"items": {"oneOf": [{"type": "string"},
{"type": "number"}]},
"minItems": 1}
schema_field = {
"type": "object",
"patternProperties": {valid_filter_fields: schema_value},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_field_in = {
"type": "object",
"patternProperties": {valid_filter_fields: schema_value_in},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_leaf_in = {
"type": "object",
"patternProperties": {"(?i)^in$": schema_field_in},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_leaf_simple_ops = {
"type": "object",
"patternProperties": {self.simple_ops: schema_field},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_and_or_array = {
"type": "array",
"items": {"$ref": "#"},
"minItems": 2}
schema_and_or = {
"type": "object",
"patternProperties": {self.complex_ops: schema_and_or_array},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_not = {
"type": "object",
"patternProperties": {"(?i)^not$": {"$ref": "#"}},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
self.schema = {
"oneOf": [{"$ref": "#/definitions/leaf_simple_ops"},
{"$ref": "#/definitions/leaf_in"},
{"$ref": "#/definitions/and_or"},
{"$ref": "#/definitions/not"}],
"minProperties": 1,
"maxProperties": 1,
"definitions": {"leaf_simple_ops": schema_leaf_simple_ops,
"leaf_in": schema_leaf_in,
"and_or": schema_and_or,
"not": schema_not}}
self.orderby_schema = {
"type": "array",
"items": {
"type": "object",
"patternProperties":
{valid_fields:
{"type": "string",
"pattern": self.order_directions}},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}}
self.original_query = query
def validate(self, visibility_field):
"""Validates the query content and does the necessary conversions."""
if self.original_query.filter is wtypes.Unset:
self.filter_expr = None
else:
try:
self.filter_expr = json.loads(self.original_query.filter)
self._validate_filter(self.filter_expr)
except (ValueError, jsonschema.exceptions.ValidationError) as e:
raise base.ClientSideError(
_("Filter expression not valid: %s") % e)
self._replace_isotime_with_datetime(self.filter_expr)
self._convert_operator_to_lower_case(self.filter_expr)
self._normalize_field_names_for_db_model(self.filter_expr)
self._force_visibility(visibility_field)
if self.original_query.orderby is wtypes.Unset:
self.orderby = None
else:
try:
self.orderby = json.loads(self.original_query.orderby)
self._validate_orderby(self.orderby)
except (ValueError, jsonschema.exceptions.ValidationError) as e:
raise base.ClientSideError(
_("Order-by expression not valid: %s") % e)
self._convert_orderby_to_lower_case(self.orderby)
self._normalize_field_names_in_orderby(self.orderby)
self.limit = (None if self.original_query.limit is wtypes.Unset
else self.original_query.limit)
self.limit = v2_utils.enforce_limit(self.limit)
@staticmethod
def _convert_orderby_to_lower_case(orderby):
for orderby_field in orderby:
utils.lowercase_values(orderby_field)
def _normalize_field_names_in_orderby(self, orderby):
for orderby_field in orderby:
self._replace_field_names(orderby_field)
def _traverse_postorder(self, tree, visitor):
op = list(tree.keys())[0]
if op.lower() in self.complex_operators:
for i, operand in enumerate(tree[op]):
self._traverse_postorder(operand, visitor)
if op.lower() == "not":
self._traverse_postorder(tree[op], visitor)
visitor(tree)
def _check_cross_project_references(self, own_project_id,
visibility_field):
"""Do not allow other than own_project_id."""
def check_project_id(subfilter):
op, value = list(subfilter.items())[0]
if (op.lower() not in self.complex_operators
and list(value.keys())[0] == visibility_field
and value[visibility_field] != own_project_id):
raise base.ProjectNotAuthorized(value[visibility_field])
self._traverse_postorder(self.filter_expr, check_project_id)
def _force_visibility(self, visibility_field):
"""Force visibility field.
If the tenant is not admin insert an extra
"and <visibility_field>=<tenant's project_id>" clause to the query.
"""
authorized_project = rbac.get_limited_to_project(pecan.request.headers)
is_admin = authorized_project is None
if not is_admin:
self._restrict_to_project(authorized_project, visibility_field)
self._check_cross_project_references(authorized_project,
visibility_field)
def _restrict_to_project(self, project_id, visibility_field):
restriction = {"=": {visibility_field: project_id}}
if self.filter_expr is None:
self.filter_expr = restriction
else:
self.filter_expr = {"and": [restriction, self.filter_expr]}
def _replace_isotime_with_datetime(self, filter_expr):
def replace_isotime(subfilter):
op, value = list(subfilter.items())[0]
if op.lower() not in self.complex_operators:
field = list(value.keys())[0]
if field in self.timestamp_fields:
date_time = self._convert_to_datetime(subfilter[op][field])
subfilter[op][field] = date_time
self._traverse_postorder(filter_expr, replace_isotime)
def _normalize_field_names_for_db_model(self, filter_expr):
def _normalize_field_names(subfilter):
op, value = list(subfilter.items())[0]
if op.lower() not in self.complex_operators:
self._replace_field_names(value)
self._traverse_postorder(filter_expr,
_normalize_field_names)
def _replace_field_names(self, subfilter):
field, value = list(subfilter.items())[0]
if field in self.name_mapping:
del subfilter[field]
subfilter[self.name_mapping[field]] = value
if field.startswith("metadata."):
del subfilter[field]
subfilter["resource_" + field] = value
def _convert_operator_to_lower_case(self, filter_expr):
self._traverse_postorder(filter_expr, utils.lowercase_keys)
@staticmethod
def _convert_to_datetime(isotime):
try:
date_time = timeutils.parse_isotime(isotime)
date_time = date_time.replace(tzinfo=None)
return date_time
except ValueError:
LOG.exception("String %s is not a valid isotime" % isotime)
msg = _('Failed to parse the timestamp value %s') % isotime
raise base.ClientSideError(msg)
def _validate_filter(self, filter_expr):
jsonschema.validate(filter_expr, self.schema)
def _validate_orderby(self, orderby_expr):
jsonschema.validate(orderby_expr, self.orderby_schema)
class QuerySamplesController(rest.RestController):
"""Provides complex query possibilities for samples."""
@wsme_pecan.wsexpose([samples.Sample], body=ComplexQuery)
def post(self, body):
"""Define query for retrieving Sample data.
:param body: Query rules for the samples to be returned.
"""
rbac.enforce('query_sample', pecan.request)
sample_name_mapping = {"resource": "resource_id",
"meter": "counter_name",
"type": "counter_type",
"unit": "counter_unit",
"volume": "counter_volume"}
query = ValidatedComplexQuery(body,
storage.models.Sample,
sample_name_mapping,
metadata_allowed=True)
query.validate(visibility_field="project_id")
conn = pecan.request.storage_conn
return [samples.Sample.from_db_model(s)
for s in conn.query_samples(query.filter_expr,
query.orderby,
query.limit)]
class QueryController(rest.RestController):
samples = QuerySamplesController()
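# NOTE: an illustrative sketch (not part of the original module; it only
# runs inside an API request context) of a hypothetical body for
# POST /v2/query/samples and the rewrites validate() applies to it.
def _complex_query_example():
    body = ComplexQuery(
        filter='{"AND": [{"=": {"meter": "cpu_util"}},'
               ' {">": {"timestamp": "2015-01-01T12:00:00"}}]}',
        orderby='[{"timestamp": "DESC"}]',
        limit=10)
    query = ValidatedComplexQuery(body, storage.models.Sample,
                                  {"meter": "counter_name"},
                                  metadata_allowed=True)
    query.validate(visibility_field="project_id")
    # After validate(): "AND" is lowercased, "meter" is renamed to
    # "counter_name", the isotime string becomes a datetime and, for a
    # non-admin caller, an {"=": {"project_id": <caller>}} clause is
    # and-ed into the filter.
    return query.filter_expr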

View File

@ -1,158 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
from six.moves import urllib
import pecan
from pecan import rest
import six
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from ceilometer.api.controllers.v2 import base
from ceilometer.api.controllers.v2 import utils
from ceilometer.api import rbac
from ceilometer.i18n import _
class Resource(base.Base):
"""An externally defined object for which samples have been received."""
resource_id = wtypes.text
"The unique identifier for the resource"
project_id = wtypes.text
"The ID of the owning project or tenant"
user_id = wtypes.text
"The ID of the user who created the resource or updated it last"
first_sample_timestamp = datetime.datetime
"UTC date & time not later than the first sample known for this resource"
last_sample_timestamp = datetime.datetime
"UTC date & time not earlier than the last sample known for this resource"
metadata = {wtypes.text: wtypes.text}
"Arbitrary metadata associated with the resource"
links = [base.Link]
"A list containing a self link and associated meter links"
source = wtypes.text
"The source where the resource come from"
def __init__(self, metadata=None, **kwds):
metadata = metadata or {}
metadata = utils.flatten_metadata(metadata)
super(Resource, self).__init__(metadata=metadata, **kwds)
@classmethod
def sample(cls):
return cls(
resource_id='bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
project_id='35b17138-b364-4e6a-a131-8f3099c5be68',
user_id='efd87807-12d2-4b38-9c70-5f5c2ac427ff',
timestamp=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
source="openstack",
metadata={'name1': 'value1',
'name2': 'value2'},
links=[
base.Link(href=('http://localhost:8777/v2/resources/'
'bd9431c1-8d69-4ad3-803a-8d4a6b89fd36'),
rel='self'),
base.Link(href=('http://localhost:8777/v2/meters/volume?'
'q.field=resource_id&q.value='
'bd9431c1-8d69-4ad3-803a-8d4a6b89fd36'),
rel='volume')
],
)
class ResourcesController(rest.RestController):
"""Works on resources."""
@staticmethod
def _make_link(rel_name, url, type, type_arg, query=None):
query_str = ''
if query:
query_str = '?q.field=%s&q.value=%s' % (query['field'],
query['value'])
return base.Link(href='%s/v2/%s/%s%s' % (url, type,
type_arg, query_str),
rel=rel_name)
def _resource_links(self, resource_id, meter_links=1):
links = [self._make_link('self', pecan.request.application_url,
'resources', resource_id)]
if meter_links:
for meter in pecan.request.storage_conn.get_meters(
resource=resource_id):
query = {'field': 'resource_id', 'value': resource_id}
links.append(self._make_link(meter.name,
pecan.request.application_url,
'meters', meter.name,
query=query))
return links
@wsme_pecan.wsexpose(Resource, six.text_type)
def get_one(self, resource_id):
"""Retrieve details about one resource.
:param resource_id: The UUID of the resource.
"""
rbac.enforce('get_resource', pecan.request)
# In case we have a special character in the resource id; for example,
# swift can generate samples with a resource id like
# 29f809d9-88bb-4c40-b1ba-a77a1fcf8ceb/glance
resource_id = urllib.parse.unquote(resource_id)
authorized_project = rbac.get_limited_to_project(pecan.request.headers)
resources = list(pecan.request.storage_conn.get_resources(
resource=resource_id, project=authorized_project))
if not resources:
raise base.EntityNotFound(_('Resource'), resource_id)
return Resource.from_db_and_links(resources[0],
self._resource_links(resource_id))
@wsme_pecan.wsexpose([Resource], [base.Query], int, int)
def get_all(self, q=None, limit=None, meter_links=1):
"""Retrieve definitions of all of the resources.
:param q: Filter rules for the resources to be returned.
:param limit: Maximum number of resources to return.
:param meter_links: option to include related meter links.
"""
rbac.enforce('get_resources', pecan.request)
q = q or []
limit = utils.enforce_limit(limit)
kwargs = utils.query_to_kwargs(
q, pecan.request.storage_conn.get_resources, ['limit'])
resources = [
Resource.from_db_and_links(r,
self._resource_links(r.resource_id,
meter_links))
for r in pecan.request.storage_conn.get_resources(limit=limit,
**kwargs)]
return resources
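# NOTE: an illustrative sketch (not part of the original module) of the
# link shape produced by _make_link above, reusing the sample values from
# this module.
def _make_link_example():
    link = ResourcesController._make_link(
        'volume', 'http://localhost:8777', 'meters', 'volume',
        query={'field': 'resource_id',
               'value': 'bd9431c1-8d69-4ad3-803a-8d4a6b89fd36'})
    # link.href == ('http://localhost:8777/v2/meters/volume'
    #               '?q.field=resource_id'
    #               '&q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36')
    return link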

View File

@ -1,222 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneauth1 import exceptions
from oslo_config import cfg
from oslo_log import log
from oslo_utils import strutils
import pecan
from ceilometer.api.controllers.v2 import capabilities
from ceilometer.api.controllers.v2 import meters
from ceilometer.api.controllers.v2 import query
from ceilometer.api.controllers.v2 import resources
from ceilometer.api.controllers.v2 import samples
from ceilometer.i18n import _
from ceilometer import keystone_client
API_OPTS = [
cfg.BoolOpt('gnocchi_is_enabled',
help=('Set to True to disable resource/meter/sample URLs. '
'Defaults to autodetection by querying keystone.')),
cfg.BoolOpt('aodh_is_enabled',
help=('Set to True to redirect alarms URLs to aodh. '
'Defaults to autodetection by querying keystone.')),
cfg.StrOpt('aodh_url',
help=('The endpoint of Aodh to which alarms URLs are '
'redirected. Defaults to autodetection by querying '
'keystone.')),
cfg.BoolOpt('panko_is_enabled',
help=('Set to True to redirect events URLs to Panko. '
'Defaults to autodetection by querying keystone.')),
cfg.StrOpt('panko_url',
help=('The endpoint of Panko to which events URLs are '
'redirected. Defaults to autodetection by querying '
'keystone.')),
]
LOG = log.getLogger(__name__)
def gnocchi_abort():
pecan.abort(410, ("This telemetry installation is configured to use "
"Gnocchi. Please use the Gnocchi API available on "
"the metric endpoint to retrieve data."))
def aodh_abort():
pecan.abort(410, _("alarms URLs is unavailable when Aodh is "
"disabled or unavailable."))
def _redirect(url):
pecan.redirect(location=url + pecan.request.path_qs,
code=308)
class QueryController(object):
def __init__(self, gnocchi_is_enabled=False,
aodh_url=None):
self.gnocchi_is_enabled = gnocchi_is_enabled
self.aodh_url = aodh_url
@pecan.expose()
def _lookup(self, kind, *remainder):
if kind == 'alarms' and self.aodh_url:
_redirect(self.aodh_url)
elif kind == 'alarms':
aodh_abort()
elif kind == 'samples' and self.gnocchi_is_enabled:
gnocchi_abort()
elif kind == 'samples':
return query.QuerySamplesController(), remainder
else:
pecan.abort(404)
class V2Controller(object):
"""Version 2 API controller root."""
capabilities = capabilities.CapabilitiesController()
def __init__(self):
self._gnocchi_is_enabled = None
self._aodh_is_enabled = None
self._aodh_url = None
self._panko_is_enabled = None
self._panko_url = None
@property
def gnocchi_is_enabled(self):
if self._gnocchi_is_enabled is None:
if pecan.request.cfg.api.gnocchi_is_enabled is not None:
self._gnocchi_is_enabled = (
pecan.request.cfg.api.gnocchi_is_enabled)
elif ("gnocchi" not in pecan.request.cfg.meter_dispatchers
or "database" in pecan.request.cfg.meter_dispatchers):
self._gnocchi_is_enabled = False
else:
try:
catalog = keystone_client.get_service_catalog(
keystone_client.get_client(pecan.request.cfg))
catalog.url_for(service_type='metric')
except exceptions.EndpointNotFound:
self._gnocchi_is_enabled = False
except exceptions.ClientException:
LOG.warning("Can't connect to keystone, assuming "
"gnocchi is disabled and retry later")
else:
self._gnocchi_is_enabled = True
LOG.warning("ceilometer-api started with gnocchi "
"enabled. The resources/meters/samples "
"URLs are disabled.")
return self._gnocchi_is_enabled
@property
def aodh_url(self):
if self._aodh_url is None:
if pecan.request.cfg.api.aodh_is_enabled is False:
self._aodh_url = ""
elif pecan.request.cfg.api.aodh_url is not None:
self._aodh_url = self._normalize_url(
pecan.request.cfg.api.aodh_url)
else:
try:
catalog = keystone_client.get_service_catalog(
keystone_client.get_client(pecan.request.cfg))
self._aodh_url = self._normalize_url(
catalog.url_for(service_type='alarming'))
except exceptions.EndpointNotFound:
self._aodh_url = ""
except exceptions.ClientException:
LOG.warning("Can't connect to keystone, assuming aodh "
"is disabled and retry later.")
else:
LOG.warning("ceilometer-api started with aodh "
"enabled. Alarms URLs will be redirected "
"to aodh endpoint.")
return self._aodh_url
@property
def panko_url(self):
if self._panko_url is None:
if pecan.request.cfg.api.panko_is_enabled is False:
self._panko_url = ""
elif pecan.request.cfg.api.panko_url is not None:
self._panko_url = self._normalize_url(
pecan.request.cfg.api.panko_url)
else:
try:
catalog = keystone_client.get_service_catalog(
keystone_client.get_client(pecan.request.cfg))
self._panko_url = self._normalize_url(
catalog.url_for(service_type='event'))
except exceptions.EndpointNotFound:
self._panko_url = ""
except exceptions.ClientException:
LOG.warning(
"Can't connect to keystone, assuming Panko "
"is disabled; will retry later.")
else:
LOG.warning("ceilometer-api started with Panko "
"enabled. Events URLs will be redirected "
"to Panko endpoint.")
return self._panko_url
@pecan.expose()
def _lookup(self, kind, *remainder):
if (kind in ['meters', 'resources', 'samples']
and self.gnocchi_is_enabled):
if kind == 'meters' and pecan.request.method == 'POST':
direct = pecan.request.params.get('direct', '')
if strutils.bool_from_string(direct):
pecan.abort(400, _('direct option cannot be true when '
'Gnocchi is enabled.'))
return meters.MetersController(), remainder
gnocchi_abort()
elif kind == 'meters':
return meters.MetersController(), remainder
elif kind == 'resources':
return resources.ResourcesController(), remainder
elif kind == 'samples':
return samples.SamplesController(), remainder
elif kind == 'query':
return QueryController(
gnocchi_is_enabled=self.gnocchi_is_enabled,
aodh_url=self.aodh_url,
), remainder
elif kind == 'alarms' and (not self.aodh_url):
aodh_abort()
elif kind == 'alarms' and self.aodh_url:
_redirect(self.aodh_url)
elif kind == 'events' and self.panko_url:
return _redirect(self.panko_url)
elif kind == 'event_types' and self.panko_url:
return _redirect(self.panko_url)
else:
pecan.abort(404)
@staticmethod
def _normalize_url(url):
if url.endswith("/"):
return url[:-1]
return url
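# NOTE: an illustrative sketch (not part of the original module) of how
# alarm URLs are dispatched once an Aodh endpoint has been discovered.
# A request such as
#   GET /v2/alarms?q.field=state&q.value=alarm
# is answered with a 308 redirect to
#   <aodh_url>/v2/alarms?q.field=state&q.value=alarm
# because _redirect() appends pecan.request.path_qs to the normalized
# endpoint:
def _normalize_url_example():
    assert (V2Controller._normalize_url('http://aodh:8042/')
            == 'http://aodh:8042')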

View File

@ -1,145 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import uuid
import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from ceilometer.api.controllers.v2 import base
from ceilometer.api.controllers.v2 import utils
from ceilometer.api import rbac
from ceilometer.i18n import _
from ceilometer import sample
from ceilometer import storage
class Sample(base.Base):
"""One measurement."""
id = wtypes.text
"The unique identifier for the sample."
meter = wtypes.text
"The meter name this sample is for."
type = wtypes.Enum(str, *sample.TYPES)
"The meter type (see :ref:`meter_types`)"
unit = wtypes.text
"The unit of measure."
volume = float
"The metered value."
user_id = wtypes.text
"The user this sample was taken for."
project_id = wtypes.text
"The project this sample was taken for."
resource_id = wtypes.text
"The :class:`Resource` this sample was taken for."
source = wtypes.text
"The source that identifies where the sample comes from."
timestamp = datetime.datetime
"When the sample has been generated."
recorded_at = datetime.datetime
"When the sample has been recorded."
metadata = {wtypes.text: wtypes.text}
"Arbitrary metadata associated with the sample."
@classmethod
def from_db_model(cls, m):
return cls(id=m.message_id,
meter=m.counter_name,
type=m.counter_type,
unit=m.counter_unit,
volume=m.counter_volume,
user_id=m.user_id,
project_id=m.project_id,
resource_id=m.resource_id,
source=m.source,
timestamp=m.timestamp,
recorded_at=m.recorded_at,
metadata=utils.flatten_metadata(m.resource_metadata))
@classmethod
def sample(cls):
return cls(id=str(uuid.uuid1()),
meter='instance',
type='gauge',
unit='instance',
volume=1,
resource_id='bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
project_id='35b17138-b364-4e6a-a131-8f3099c5be68',
user_id='efd87807-12d2-4b38-9c70-5f5c2ac427ff',
timestamp=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
recorded_at=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
source='openstack',
metadata={'name1': 'value1',
'name2': 'value2'},
)
class SamplesController(rest.RestController):
"""Controller managing the samples."""
@wsme_pecan.wsexpose([Sample], [base.Query], int)
def get_all(self, q=None, limit=None):
"""Return all known samples, based on the data recorded so far.
:param q: Filter rules for the samples to be returned.
:param limit: Maximum number of samples to be returned.
"""
rbac.enforce('get_samples', pecan.request)
q = q or []
limit = utils.enforce_limit(limit)
kwargs = utils.query_to_kwargs(q, storage.SampleFilter.__init__)
f = storage.SampleFilter(**kwargs)
return [Sample.from_db_model(s)
for s in pecan.request.storage_conn.get_samples(f, limit=limit)]
@wsme_pecan.wsexpose(Sample, wtypes.text)
def get_one(self, sample_id):
"""Return a sample.
:param sample_id: the id of the sample.
"""
rbac.enforce('get_sample', pecan.request)
f = storage.SampleFilter(message_id=sample_id)
samples = list(pecan.request.storage_conn.get_samples(f))
if not samples:
raise base.EntityNotFound(_('Sample'), sample_id)
return Sample.from_db_model(samples[0])

View File

@ -1,316 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import datetime
import inspect
from oslo_log import log
from oslo_utils import timeutils
import pecan
import six
import wsme
from ceilometer.api.controllers.v2 import base
from ceilometer.api import rbac
from ceilometer.i18n import _
from ceilometer import utils
LOG = log.getLogger(__name__)
def enforce_limit(limit):
"""Ensure limit is defined and is valid. if not, set a default."""
if limit is None:
limit = pecan.request.cfg.api.default_api_return_limit
LOG.info('No limit value provided, result set will be'
' limited to %(limit)d.', {'limit': limit})
if not limit or limit <= 0:
raise base.ClientSideError(_("Limit must be positive"))
return limit
def get_auth_project(on_behalf_of=None):
auth_project = rbac.get_limited_to_project(pecan.request.headers)
created_by = pecan.request.headers.get('X-Project-Id')
is_admin = auth_project is None
if is_admin and on_behalf_of != created_by:
auth_project = on_behalf_of
return auth_project
def sanitize_query(query, db_func, on_behalf_of=None):
"""Check the query.
See if:
1) the request is coming from admin - then allow full visibility
2) non-admin - make sure that the query includes the requester's project.
"""
q = copy.copy(query)
auth_project = get_auth_project(on_behalf_of)
if auth_project:
_verify_query_segregation(q, auth_project)
proj_q = [i for i in q if i.field == 'project_id']
valid_keys = inspect.getargspec(db_func)[0]
if not proj_q and 'on_behalf_of' not in valid_keys:
# The user is restricted, but they didn't specify a project
# so add it for them.
q.append(base.Query(field='project_id',
op='eq',
value=auth_project))
return q
def _verify_query_segregation(query, auth_project=None):
"""Ensure non-admin queries are not constrained to another project."""
auth_project = (auth_project or
rbac.get_limited_to_project(pecan.request.headers))
if not auth_project:
return
for q in query:
if q.field in ('project', 'project_id') and auth_project != q.value:
raise base.ProjectNotAuthorized(q.value)
def validate_query(query, db_func, internal_keys=None,
allow_timestamps=True):
"""Validates the syntax of the query and verifies the query.
Verification check if the query request is authorized for the included
project.
:param query: Query expression that should be validated
:param db_func: the function on the storage level, of which arguments
will form the valid_keys list, which defines the valid fields for a
query expression
:param internal_keys: internally used field names, that should not be
used for querying
:param allow_timestamps: defines whether the timestamp-based constraint is
applicable for this query or not
:raises InvalidInput: if an operator is not supported for a given field
:raises InvalidInput: if timestamp constraints are allowed, but
search_offset was included without timestamp constraint
:raises UnknownArgument: if a field name is neither a timestamp field nor
in the list of valid keys
"""
internal_keys = internal_keys or []
_verify_query_segregation(query)
valid_keys = inspect.getargspec(db_func)[0]
internal_timestamp_keys = ['end_timestamp', 'start_timestamp',
'end_timestamp_op', 'start_timestamp_op']
if 'start_timestamp' in valid_keys:
internal_keys += internal_timestamp_keys
valid_keys += ['timestamp', 'search_offset']
internal_keys.append('self')
internal_keys.append('metaquery')
valid_keys = set(valid_keys) - set(internal_keys)
translation = {'user_id': 'user',
'project_id': 'project',
'resource_id': 'resource'}
has_timestamp_query = _validate_timestamp_fields(query,
'timestamp',
('lt', 'le', 'gt', 'ge'),
allow_timestamps)
has_search_offset_query = _validate_timestamp_fields(query,
'search_offset',
'eq',
allow_timestamps)
if has_search_offset_query and not has_timestamp_query:
raise wsme.exc.InvalidInput('field', 'search_offset',
"search_offset cannot be used without " +
"timestamp")
def _is_field_metadata(field):
return (field.startswith('metadata.') or
field.startswith('resource_metadata.'))
for i in query:
if i.field not in ('timestamp', 'search_offset'):
key = translation.get(i.field, i.field)
operator = i.op
if key in valid_keys or _is_field_metadata(i.field):
if operator == 'eq':
if key == 'enabled':
i._get_value_as_type('boolean')
elif _is_field_metadata(key):
i._get_value_as_type()
else:
raise wsme.exc.InvalidInput('op', i.op,
'unimplemented operator for '
'%s' % i.field)
else:
msg = ("unrecognized field in query: %s, "
"valid keys: %s") % (query, sorted(valid_keys))
raise wsme.exc.UnknownArgument(key, msg)
def _validate_timestamp_fields(query, field_name, operator_list,
allow_timestamps):
"""Validates the timestamp related constraints in a query if there are any.
:param query: query expression that may contain the timestamp fields
:param field_name: timestamp name, which should be checked (timestamp,
search_offset)
:param operator_list: list of operators that are supported for that
timestamp, which was specified in the parameter field_name
:param allow_timestamps: defines whether the timestamp-based constraint is
applicable to this query or not
:returns: True, if the query contained a timestamp constraint on the
field named in field_name, and the constraint was allowed and
syntactically correct.
:returns: False, if the query contained no timestamp constraint on the
field named in field_name
:raises InvalidInput: if an operator is unsupported for a given timestamp
field
:raises UnknownArgument: if the timestamp constraint is not allowed in
the query
"""
for item in query:
if item.field == field_name:
# If the *timestamp* or *search_offset* field was specified in
# the query, but timestamps are not supported by the resource
# the query was invoked on, then raise an exception.
if not allow_timestamps:
raise wsme.exc.UnknownArgument(field_name,
"not valid for " +
"this resource")
if item.op not in operator_list:
raise wsme.exc.InvalidInput('op', item.op,
'unimplemented operator for %s' %
item.field)
return True
return False
def query_to_kwargs(query, db_func, internal_keys=None,
allow_timestamps=True):
validate_query(query, db_func, internal_keys=internal_keys,
allow_timestamps=allow_timestamps)
query = sanitize_query(query, db_func)
translation = {'user_id': 'user',
'project_id': 'project',
'resource_id': 'resource'}
stamp = {}
metaquery = {}
kwargs = {}
for i in query:
if i.field == 'timestamp':
if i.op in ('lt', 'le'):
stamp['end_timestamp'] = i.value
stamp['end_timestamp_op'] = i.op
elif i.op in ('gt', 'ge'):
stamp['start_timestamp'] = i.value
stamp['start_timestamp_op'] = i.op
else:
if i.op == 'eq':
if i.field == 'search_offset':
stamp['search_offset'] = i.value
elif i.field == 'enabled':
kwargs[i.field] = i._get_value_as_type('boolean')
elif i.field.startswith('metadata.'):
metaquery[i.field] = i._get_value_as_type()
elif i.field.startswith('resource_metadata.'):
# strip the 'resource_' prefix so both metadata spellings
# share the same metaquery key
metaquery[i.field[9:]] = i._get_value_as_type()
else:
key = translation.get(i.field, i.field)
kwargs[key] = i.value
if metaquery and 'metaquery' in inspect.getargspec(db_func)[0]:
kwargs['metaquery'] = metaquery
if stamp:
kwargs.update(_get_query_timestamps(stamp))
return kwargs
def _get_query_timestamps(args=None):
"""Return any optional timestamp information in the request.
Determine the desired range, if any, from the GET arguments. Set
up the query range using the specified offset.
[query_start ... start_timestamp ... end_timestamp ... query_end]
Returns a dictionary containing:
start_timestamp: First timestamp to use for query
start_timestamp_op: First timestamp operator to use for query
end_timestamp: Final timestamp to use for query
end_timestamp_op: Final timestamp operator to use for query
"""
if args is None:
return {}
search_offset = int(args.get('search_offset', 0))
def _parse_timestamp(timestamp):
if not timestamp:
return None
try:
iso_timestamp = timeutils.parse_isotime(timestamp)
iso_timestamp = iso_timestamp.replace(tzinfo=None)
except ValueError:
raise wsme.exc.InvalidInput('timestamp', timestamp,
'invalid timestamp format')
return iso_timestamp
start_timestamp = _parse_timestamp(args.get('start_timestamp'))
end_timestamp = _parse_timestamp(args.get('end_timestamp'))
start_timestamp = start_timestamp - datetime.timedelta(
minutes=search_offset) if start_timestamp else None
end_timestamp = end_timestamp + datetime.timedelta(
minutes=search_offset) if end_timestamp else None
return {'start_timestamp': start_timestamp,
'end_timestamp': end_timestamp,
'start_timestamp_op': args.get('start_timestamp_op'),
'end_timestamp_op': args.get('end_timestamp_op')}
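# NOTE: an illustrative sketch (not part of the original module) of how
# search_offset widens the queried range by the given number of minutes
# on both ends.
def _query_timestamps_example():
    stamps = _get_query_timestamps({
        'start_timestamp': '2015-01-01T12:00:00',
        'end_timestamp': '2015-01-01T13:00:00',
        'search_offset': '10'})
    assert stamps['start_timestamp'] == datetime.datetime(2015, 1, 1, 11, 50)
    assert stamps['end_timestamp'] == datetime.datetime(2015, 1, 1, 13, 10)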
def flatten_metadata(metadata):
"""Return flattened resource metadata.
Metadata is returned with flattened nested structures (except nested sets)
and with all values converted to unicode strings.
"""
if metadata:
# After the change to recursive_keypairs' output we need to keep
# the flattened output unchanged.
# Example: recursive_keypairs({'a': {'b':{'c':'d'}}}, '.')
# output before: a.b:c=d
# output now: a.b.c=d
# So to keep the first variant just replace all dots except the first
return dict((k.replace('.', ':').replace(':', '.', 1),
six.text_type(v))
for k, v in utils.recursive_keypairs(metadata,
separator='.')
if type(v) is not set)
return {}
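# NOTE: an illustrative sketch (not part of the original module) of the
# dot/colon rewriting described in the comment above: only the first
# separator survives as a dot.
def _flatten_metadata_example():
    flat = flatten_metadata({'a': {'b': {'c': 'd'}}})
    # recursive_keypairs yields 'a.b.c'; every '.' becomes ':' and the
    # first ':' is turned back into '.', so:
    assert flat == {'a.b:c': u'd'}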

View File

@ -1,91 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
import oslo_messaging
from oslo_policy import policy
from pecan import hooks
from ceilometer import messaging
from ceilometer import storage
LOG = log.getLogger(__name__)
class ConfigHook(hooks.PecanHook):
"""Attach the configuration object to the request.
That allows controllers to get it.
"""
def __init__(self, conf):
super(ConfigHook, self).__init__()
self.conf = conf
self.enforcer = policy.Enforcer(conf)
self.enforcer.load_rules()
def on_route(self, state):
state.request.cfg = self.conf
state.request.enforcer = self.enforcer
class DBHook(hooks.PecanHook):
def __init__(self, conf):
self.storage_connection = self.get_connection(conf)
if not self.storage_connection:
raise Exception(
"API failed to start. Failed to connect to database")
def before(self, state):
state.request.storage_conn = self.storage_connection
@staticmethod
def get_connection(conf):
try:
return storage.get_connection_from_config(conf)
except Exception as err:
LOG.exception("Failed to connect to db" "retry later: %s",
err)
class NotifierHook(hooks.PecanHook):
"""Create and attach a notifier to the request.
Usually, samples are pushed to the notification bus by the notifier
when they are posted via the /v2/meters/ API.
"""
def __init__(self, conf):
transport = messaging.get_transport(conf)
self.notifier = oslo_messaging.Notifier(
transport, driver=conf.publisher_notifier.telemetry_driver,
publisher_id="ceilometer.api")
def before(self, state):
state.request.notifier = self.notifier
class TranslationHook(hooks.PecanHook):
def after(self, state):
# After a request has been done, we need to see if
# ClientSideError has added an error onto the response.
# If it has, we need to get it into the thread-safe WSGI
# environ to be used by the ParsableErrorMiddleware.
if hasattr(state.response, 'translatable_error'):
state.request.environ['translatable_error'] = (
state.response.translatable_error)
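# Hedged usage sketch (names are illustrative, not taken from this file):
# the hooks above are meant to be handed to pecan when the API app is
# assembled, roughly:
#
#   import pecan
#
#   app = pecan.make_app(
#       'ceilometer.api.controllers.root.RootController',
#       hooks=[ConfigHook(conf), DBHook(conf),
#              NotifierHook(conf), TranslationHook()],
#   )
#
# Controllers can then read pecan.request.cfg, pecan.request.storage_conn
# and pecan.request.notifier populated by the callbacks above.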

View File

@ -1,127 +0,0 @@
#
# Copyright 2013 IBM Corp.
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Middleware to replace the plain text message body of an error
response with one formatted so the client can parse it.
Based on pecan.middleware.errordocument
"""
import json
from lxml import etree
from oslo_log import log
import six
import webob
from ceilometer import i18n
LOG = log.getLogger(__name__)
class ParsableErrorMiddleware(object):
"""Replace error body with something the client can parse."""
@staticmethod
def best_match_language(accept_language):
"""Determines best available locale from the Accept-Language header.
:returns: the best language match or None if the 'Accept-Language'
header was not available in the request.
"""
if not accept_language:
return None
all_languages = i18n.get_available_languages()
return accept_language.best_match(all_languages)
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
# Request for this state, modified by replace_start_response()
# and used when an error is being reported.
state = {}
def replacement_start_response(status, headers, exc_info=None):
"""Overrides the default response to make errors parsable."""
try:
status_code = int(status.split(' ')[0])
state['status_code'] = status_code
except (ValueError, TypeError): # pragma: nocover
raise Exception((
'ErrorDocumentMiddleware received an invalid '
'status %s' % status
))
else:
if (state['status_code'] // 100) not in (2, 3):
# Remove some headers so we can replace them later
# when we have the full error message and can
# compute the length.
headers = [(h, v)
for (h, v) in headers
if h not in ('Content-Length', 'Content-Type')
]
# Save the headers in case we need to modify them.
state['headers'] = headers
return start_response(status, headers, exc_info)
app_iter = self.app(environ, replacement_start_response)
if (state['status_code'] // 100) not in (2, 3):
req = webob.Request(environ)
error = environ.get('translatable_error')
user_locale = self.best_match_language(req.accept_language)
if (req.accept.best_match(['application/json', 'application/xml'])
== 'application/xml'):
content_type = 'application/xml'
try:
# simple check that the xml is valid
fault = etree.fromstring(b'\n'.join(app_iter))
# Add the translated error to the xml data
if error is not None:
for fault_string in fault.findall('faultstring'):
fault_string.text = i18n.translate(error,
user_locale)
error_message = etree.tostring(fault)
body = b''.join((b'<error_message>',
error_message,
b'</error_message>'))
except etree.XMLSyntaxError as err:
LOG.error('Error parsing HTTP response: %s', err)
error_message = state['status_code']
body = '<error_message>%s</error_message>' % error_message
if six.PY3:
body = body.encode('utf-8')
else:
content_type = 'application/json'
app_data = b'\n'.join(app_iter)
if six.PY3:
app_data = app_data.decode('utf-8')
try:
fault = json.loads(app_data)
if error is not None and 'faultstring' in fault:
fault['faultstring'] = i18n.translate(error,
user_locale)
except ValueError as err:
fault = app_data
body = json.dumps({'error_message': fault})
if six.PY3:
body = body.encode('utf-8')
state['headers'].append(('Content-Length', str(len(body))))
state['headers'].append(('Content-Type', content_type))
body = [body]
else:
body = app_iter
return body
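# Hedged usage sketch: like any WSGI middleware, the class above simply
# wraps another application when the API pipeline is built, e.g.:
#
#   app = ParsableErrorMiddleware(app)
#
# Error responses then come back as {'error_message': ...} JSON, or as an
# <error_message> XML document when the client prefers application/xml.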

View File

@ -1,86 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2014 Hewlett-Packard Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Access Control Lists (ACL's) control access the API server."""
import pecan
def _has_rule(name):
return name in pecan.request.enforcer.rules.keys()
def enforce(policy_name, request):
"""Checks authorization of a rule against the request.
:param request: HTTP request
:param policy_name: the policy name to validate authz against.
"""
rule_method = "telemetry:" + policy_name
headers = request.headers
policy_dict = dict()
policy_dict['roles'] = headers.get('X-Roles', "").split(",")
policy_dict['user_id'] = (headers.get('X-User-Id'))
policy_dict['project_id'] = (headers.get('X-Project-Id'))
# maintain backward compat with Juno and previous by allowing the action if
# there is no rule defined for it
if ((_has_rule('default') or _has_rule(rule_method)) and
not pecan.request.enforcer.enforce(rule_method, {}, policy_dict)):
pecan.core.abort(status_code=403, detail='RBAC Authorization Failed')
# TODO(fabiog): these methods are still used because the scoping part is really
# convoluted and difficult to separate out.
def get_limited_to(headers):
"""Return the user and project the request should be limited to.
:param headers: HTTP headers dictionary
:return: A tuple of (user, project), set to None if there's no limit on
one of these.
"""
policy_dict = dict()
policy_dict['roles'] = headers.get('X-Roles', "").split(",")
policy_dict['user_id'] = (headers.get('X-User-Id'))
policy_dict['project_id'] = (headers.get('X-Project-Id'))
# maintain backward compat with Juno and previous by using context_is_admin
# rule if the segregation rule (added in Kilo) is not defined
rule_name = 'segregation' if _has_rule(
'segregation') else 'context_is_admin'
if not pecan.request.enforcer.enforce(rule_name,
{},
policy_dict):
return headers.get('X-User-Id'), headers.get('X-Project-Id')
return None, None
def get_limited_to_project(headers):
"""Return the project the request should be limited to.
:param headers: HTTP headers dictionary
:return: A project, or None if there's no limit on it.
"""
return get_limited_to(headers)[1]
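# Hedged usage sketch (the rule name is illustrative): a v2 controller
# would typically guard itself with something like:
#
#   enforce('get_meters', pecan.request)
#   user, project = get_limited_to(pecan.request.headers)
#
# where 'get_meters' is checked as the 'telemetry:get_meters' rule, and a
# (None, None) result means the caller is not scope-limited.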

View File

@ -1,31 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import cotyledon
from cotyledon import oslo_config_glue
from ceilometer import notification
from ceilometer import service
def main():
conf = service.prepare_service()
sm = cotyledon.ServiceManager()
sm.add(notification.NotificationService,
workers=conf.notification.workers, args=(conf,))
oslo_config_glue.setup(sm, conf)
sm.run()

View File

@ -1,34 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2015-2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from cotyledon import oslo_config_glue
from oslo_log import log
from ceilometer.api import app
from ceilometer import service
LOG = log.getLogger(__name__)
def build_wsgi_app(argv=None):
conf = service.prepare_service(argv=argv)
conf.register_opts(oslo_config_glue.service_opts)
if conf.log_options:
LOG.debug('Full set of CONF:')
conf.log_opt_values(LOG, logging.DEBUG)
return app.load_app(conf)
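# Hedged usage sketch: a WSGI server only needs the application object the
# factory above returns, e.g. in a wsgi script:
#
#   application = build_wsgi_app(argv=[])
#
# (Passing argv=[] is assumed here to keep the web server's own command
# line away from oslo.config parsing.)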

View File

@ -1,30 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import cotyledon
from cotyledon import oslo_config_glue
from ceilometer import collector
from ceilometer import service
def main():
conf = service.prepare_service()
sm = cotyledon.ServiceManager()
sm.add(collector.CollectorService, workers=conf.collector.workers,
args=(conf,))
oslo_config_glue.setup(sm, conf)
sm.run()

View File

@ -1,91 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014-2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import cotyledon
from cotyledon import oslo_config_glue
from oslo_config import cfg
from oslo_log import log
from ceilometer.agent import manager
from ceilometer import service
LOG = log.getLogger(__name__)
class MultiChoicesOpt(cfg.Opt):
def __init__(self, name, choices=None, **kwargs):
super(MultiChoicesOpt, self).__init__(
name, type=DeduplicatedCfgList(choices), **kwargs)
self.choices = choices
def _get_argparse_kwargs(self, group, **kwargs):
"""Extends the base argparse keyword dict for multi choices options."""
kwargs = super(MultiChoicesOpt, self)._get_argparse_kwargs(group)
kwargs['nargs'] = '+'
choices = kwargs.get('choices', self.choices)
if choices:
kwargs['choices'] = choices
return kwargs
class DeduplicatedCfgList(cfg.types.List):
def __init__(self, choices=None, **kwargs):
super(DeduplicatedCfgList, self).__init__(**kwargs)
self.choices = choices or []
def __call__(self, *args, **kwargs):
result = super(DeduplicatedCfgList, self).__call__(*args, **kwargs)
result_set = set(result)
if len(result) != len(result_set):
LOG.warning("Duplicated values: %s found in CLI options, "
"auto de-duplicated", result)
result = list(result_set)
if self.choices and not (result_set <= set(self.choices)):
raise Exception('Valid values are %s, but found %s'
% (self.choices, result))
return result
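# Hedged illustration: the type both de-duplicates and validates, e.g.:
#
#   >>> DeduplicatedCfgList(choices=['compute', 'central'])('compute,compute')
#   ['compute']
#
# while any value outside `choices` raises the Exception above.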
CLI_OPTS = [
MultiChoicesOpt('polling-namespaces',
default=['compute', 'central'],
choices=['compute', 'central', 'ipmi'],
dest='polling_namespaces',
help='Polling namespace(s) to be used while '
'resource polling'),
MultiChoicesOpt('pollster-list',
default=[],
dest='pollster_list',
help='List of pollsters (or wildcard templates) to be '
'used while polling'),
]
def create_polling_service(worker_id, conf):
return manager.AgentManager(worker_id,
conf,
conf.polling_namespaces,
conf.pollster_list)
def main():
conf = cfg.ConfigOpts()
conf.register_cli_opts(CLI_OPTS)
service.prepare_service(conf=conf)
sm = cotyledon.ServiceManager()
sm.add(create_polling_service, args=(conf,))
oslo_config_glue.setup(sm, conf)
sm.run()
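# Hedged CLI sketch (assuming the usual ceilometer-polling console script
# maps to main() above):
#
#   $ ceilometer-polling --polling-namespaces compute central
#
# MultiChoicesOpt accepts several space-separated values for one flag
# (nargs='+'), which a plain ListOpt does not.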

View File

@ -1,94 +0,0 @@
# -*- coding: utf-8 -*-
#
# Copyright 2012-2014 Julien Danjou
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Command line tool for creating meter for Ceilometer.
"""
import logging
import sys
from oslo_config import cfg
from oslo_utils import timeutils
from stevedore import extension
from ceilometer import pipeline
from ceilometer import sample
from ceilometer import service
def send_sample():
conf = cfg.ConfigOpts()
conf.register_cli_opts([
cfg.StrOpt('sample-name',
short='n',
help='Meter name.',
required=True),
cfg.StrOpt('sample-type',
short='y',
help='Meter type (gauge, delta, cumulative).',
default='gauge',
required=True),
cfg.StrOpt('sample-unit',
short='U',
help='Meter unit.'),
cfg.IntOpt('sample-volume',
short='l',
help='Meter volume value.',
default=1),
cfg.StrOpt('sample-resource',
short='r',
help='Meter resource id.',
required=True),
cfg.StrOpt('sample-user',
short='u',
help='Meter user id.'),
cfg.StrOpt('sample-project',
short='p',
help='Meter project id.'),
cfg.StrOpt('sample-timestamp',
short='i',
help='Meter timestamp.',
default=timeutils.utcnow().isoformat()),
cfg.StrOpt('sample-metadata',
short='m',
help='Meter metadata.'),
])
service.prepare_service(conf=conf)
# Set up logging to use the console
console = logging.StreamHandler(sys.stderr)
console.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(message)s')
console.setFormatter(formatter)
root_logger = logging.getLogger('')
root_logger.addHandler(console)
root_logger.setLevel(logging.DEBUG)
pipeline_manager = pipeline.setup_pipeline(
conf, extension.ExtensionManager('ceilometer.transformer'))
with pipeline_manager.publisher() as p:
p([sample.Sample(
name=conf.sample_name,
type=conf.sample_type,
unit=conf.sample_unit,
volume=conf.sample_volume,
user_id=conf.sample_user,
project_id=conf.sample_project,
resource_id=conf.sample_resource,
timestamp=conf.sample_timestamp,
resource_metadata=conf.sample_metadata and eval(
conf.sample_metadata))])
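# Hedged CLI sketch (assuming a send-sample console script maps to
# send_sample above; only --sample-name and --sample-resource lack
# usable defaults):
#
#   $ ceilometer-send-sample -n cpu_util -y gauge -U '%' -l 42 \
#         -r 1fbcc2e6-a526-4229-b0f8-cb527a4b6e29
#
# Note that --sample-metadata, if given, must be a Python dict literal,
# since the code above passes it through eval().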

View File

@ -1,152 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log
from six import moves
import six.moves.urllib.parse as urlparse
import sqlalchemy as sa
from ceilometer import service
from ceilometer import storage
LOG = log.getLogger(__name__)
def upgrade():
conf = cfg.ConfigOpts()
conf.register_cli_opts([
cfg.BoolOpt('skip-metering-database',
help='Skip metering database upgrade.',
default=False),
cfg.BoolOpt('skip-gnocchi-resource-types',
help='Skip gnocchi resource-types upgrade.',
default=False),
])
service.prepare_service(conf=conf)
if conf.skip_metering_database:
LOG.info("Skipping metering database upgrade")
else:
url = (getattr(conf.database, 'metering_connection') or
conf.database.connection)
if url:
LOG.debug("Upgrading metering database")
storage.get_connection(conf, url).upgrade()
else:
LOG.info("Skipping metering database upgrade, "
"legacy database backend not configured.")
if conf.skip_gnocchi_resource_types:
LOG.info("Skipping Gnocchi resource types upgrade")
else:
LOG.debug("Upgrading Gnocchi resource types")
from ceilometer import gnocchi_client
gnocchi_client.upgrade_resource_types(conf)
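# Hedged CLI sketch (assuming the ceilometer-upgrade console script maps
# to upgrade() above):
#
#   $ ceilometer-upgrade --skip-metering-database
#
# upgrades only the Gnocchi resource types, while the bare command also
# upgrades the legacy metering database when one is configured.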
def expirer():
conf = service.prepare_service()
if conf.database.metering_time_to_live > 0:
LOG.debug("Clearing expired metering data")
storage_conn = storage.get_connection_from_config(conf)
storage_conn.clear_expired_metering_data(
conf.database.metering_time_to_live)
else:
LOG.info("Nothing to clean, database metering time to live "
"is disabled")
def db_clean_legacy():
conf = cfg.ConfigOpts()
conf.register_cli_opts([
cfg.BoolOpt('confirm-drop-table',
short='n',
help='Confirm dropping the legacy tables.')])
service.prepare_service(conf=conf)
if not conf.confirm_drop_table:
confirm = moves.input("Do you really want to drop the legacy "
"alarm and event tables? This will destroy "
"data definitively if it exists. Please type "
"'YES' to confirm: ")
if confirm != 'YES':
print("DB legacy cleanup aborted!")
return
url = (getattr(conf.database, "metering_connection") or
conf.database.connection)
parsed = urlparse.urlparse(url)
if parsed.password:
masked_netloc = '****'.join(parsed.netloc.rsplit(parsed.password))
masked_url = parsed._replace(netloc=masked_netloc)
masked_url = urlparse.urlunparse(masked_url)
else:
masked_url = url
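# Hedged illustration of the masking above:
#
#   mysql://user:secret@db.example.org/ceilometer
#   -> mysql://user:****@db.example.org/ceilometer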
LOG.info('Starting to drop event, alarm and alarm history tables in '
'backend: %s', masked_url)
connection_scheme = parsed.scheme
conn = storage.get_connection_from_config(conf)
if connection_scheme in ('mysql', 'mysql+pymysql', 'postgresql',
'sqlite'):
engine = conn._engine_facade.get_engine()
meta = sa.MetaData(bind=engine)
for table_name in ('alarm', 'alarm_history',
'trait_text', 'trait_int',
'trait_float', 'trait_datetime',
'event', 'event_type'):
if engine.has_table(table_name):
table = sa.Table(table_name, meta, autoload=True)
table.drop()
LOG.info("Legacy %s table of SQL backend has been "
"dropped.", table_name)
else:
LOG.info('%s table does not exist.', table_name)
elif connection_scheme == 'hbase':
with conn.conn_pool.connection() as h_conn:
tables = h_conn.tables()
table_name_mapping = {'alarm': 'alarm',
'alarm_h': 'alarm history',
'event': 'event'}
for table_name in ('alarm', 'alarm_h', 'event'):
try:
if table_name in tables:
h_conn.disable_table(table_name)
h_conn.delete_table(table_name)
LOG.info("Legacy %s table of Hbase backend "
"has been dropped.",
table_name_mapping[table_name])
else:
LOG.info('%s table does not exist.',
table_name_mapping[table_name])
except Exception as e:
LOG.error('Error occurred while dropping alarm '
'tables of Hbase, %s', e)
elif connection_scheme == 'mongodb':
for table_name in ('alarm', 'alarm_history', 'event'):
if table_name in conn.db.conn.collection_names():
conn.db.conn.drop_collection(table_name)
LOG.info("Legacy %s table of Mongodb backend has been "
"dropped.", table_name)
else:
LOG.info('%s table does not exist.', table_name)
LOG.info('Legacy alarm and event tables cleanup done.')

View File

@ -1,194 +0,0 @@
#
# Copyright 2012-2013 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from itertools import chain
import select
import socket
import cotyledon
import msgpack
from oslo_config import cfg
from oslo_log import log
import oslo_messaging
from oslo_utils import netutils
from oslo_utils import units
from ceilometer import dispatcher
from ceilometer.i18n import _
from ceilometer import messaging
from ceilometer.publisher import utils as publisher_utils
from ceilometer import utils
OPTS = [
cfg.HostAddressOpt('udp_address',
default='0.0.0.0',
help='Address to which the UDP socket is bound. Set to '
'an empty string to disable.'),
cfg.PortOpt('udp_port',
default=4952,
help='Port to which the UDP socket is bound.'),
cfg.IntOpt('batch_size',
default=1,
help='Number of notification messages to wait before '
'dispatching them'),
cfg.IntOpt('batch_timeout',
help='Number of seconds to wait before dispatching samples '
'when batch_size is not reached (None means indefinitely)'),
cfg.IntOpt('workers',
default=1,
min=1,
deprecated_group='DEFAULT',
deprecated_name='collector_workers',
help='Number of workers for collector service. '
'Default value is 1.')
]
LOG = log.getLogger(__name__)
class CollectorService(cotyledon.Service):
"""Listener for the collector service."""
def __init__(self, worker_id, conf):
super(CollectorService, self).__init__(worker_id)
self.conf = conf
# ensure dispatcher is configured before starting other services
dispatcher_managers = dispatcher.load_dispatcher_manager(conf)
(self.meter_manager, self.event_manager) = dispatcher_managers
self.sample_listener = None
self.event_listener = None
self.udp_thread = None
import debtcollector
debtcollector.deprecate("Ceilometer collector service is deprecated."
"Use publishers to push data instead",
version="9.0", removal_version="10.0")
def run(self):
if self.conf.collector.udp_address:
self.udp_thread = utils.spawn_thread(self.start_udp)
transport = messaging.get_transport(self.conf, optional=True)
if transport:
if list(self.meter_manager):
sample_target = oslo_messaging.Target(
topic=self.conf.publisher_notifier.metering_topic)
self.sample_listener = (
messaging.get_batch_notification_listener(
transport, [sample_target],
[SampleEndpoint(self.conf.publisher.telemetry_secret,
self.meter_manager)],
allow_requeue=True,
batch_size=self.conf.collector.batch_size,
batch_timeout=self.conf.collector.batch_timeout))
self.sample_listener.start()
if list(self.event_manager):
event_target = oslo_messaging.Target(
topic=self.conf.publisher_notifier.event_topic)
self.event_listener = (
messaging.get_batch_notification_listener(
transport, [event_target],
[EventEndpoint(self.conf.publisher.telemetry_secret,
self.event_manager)],
allow_requeue=True,
batch_size=self.conf.collector.batch_size,
batch_timeout=self.conf.collector.batch_timeout))
self.event_listener.start()
def start_udp(self):
address_family = socket.AF_INET
if netutils.is_valid_ipv6(self.conf.collector.udp_address):
address_family = socket.AF_INET6
udp = socket.socket(address_family, socket.SOCK_DGRAM)
udp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
# NOTE(zhengwei): linux kernel >= 3.9
udp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
except Exception:
LOG.warning("System does not support socket.SO_REUSEPORT "
"option. Only one worker will be able to process "
"incoming data.")
udp.bind((self.conf.collector.udp_address,
self.conf.collector.udp_port))
self.udp_run = True
while self.udp_run:
# NOTE(sileht): return every 10 seconds to allow
# a clean shutdown
if not select.select([udp], [], [], 10.0)[0]:
continue
# NOTE(jd) Arbitrary limit of 64K because that ought to be
# enough for anybody.
data, source = udp.recvfrom(64 * units.Ki)
try:
sample = msgpack.loads(data, encoding='utf-8')
except Exception:
LOG.warning(_("UDP: Cannot decode data sent by %s"), source)
else:
if publisher_utils.verify_signature(
sample, self.conf.publisher.telemetry_secret):
try:
LOG.debug("UDP: Storing %s", sample)
self.meter_manager.map_method(
'record_metering_data', sample)
except Exception:
LOG.exception(_("UDP: Unable to store meter"))
else:
LOG.warning('sample signature invalid, '
'discarding: %s', sample)
def terminate(self):
if self.sample_listener:
utils.kill_listeners([self.sample_listener])
if self.event_listener:
utils.kill_listeners([self.event_listener])
if self.udp_thread:
self.udp_run = False
self.udp_thread.join()
super(CollectorService, self).terminate()
class CollectorEndpoint(object):
def __init__(self, secret, dispatcher_manager):
self.secret = secret
self.dispatcher_manager = dispatcher_manager
def sample(self, messages):
"""RPC endpoint for notification messages
When another service sends a notification over the message
bus, this method receives it.
"""
goods = []
for sample in chain.from_iterable(m["payload"] for m in messages):
if publisher_utils.verify_signature(sample, self.secret):
goods.append(sample)
else:
LOG.warning('notification signature invalid, '
'discarding: %s', sample)
try:
self.dispatcher_manager.map_method(self.method, goods)
except Exception:
LOG.exception("Dispatcher failed to handle the notification, "
"re-queuing it.")
return oslo_messaging.NotificationResult.REQUEUE
class SampleEndpoint(CollectorEndpoint):
method = 'record_metering_data'
class EventEndpoint(CollectorEndpoint):
method = 'record_events'
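# Hedged sender sketch: the producing side (normally ceilometer's UDP
# publisher) signs and msgpack-encodes a sample before it reaches
# start_udp() above; roughly, and assuming meter_message_from_counter
# keeps its usual signature:
#
#   import socket
#
#   import msgpack
#
#   from ceilometer.publisher import utils as publisher_utils
#
#   def send_udp_sample(sample, secret, address='127.0.0.1', port=4952):
#       # Serialize the Sample with its message_signature attached so
#       # verify_signature() on the collector side accepts it.
#       msg = publisher_utils.meter_message_from_counter(sample, secret)
#       sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#       sock.sendto(msgpack.dumps(msg), (address, port))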

View File

@ -1,272 +0,0 @@
#
# Copyright 2014 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import hashlib
from lxml import etree
import operator
import threading
import cachetools
from novaclient import exceptions
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
try:
import libvirt
except ImportError:
libvirt = None
from ceilometer.agent import plugin_base
from ceilometer.compute.virt.libvirt import utils as libvirt_utils
from ceilometer import nova_client
OPTS = [
cfg.BoolOpt('workload_partitioning',
default=False,
deprecated_for_removal=True,
help='Enable work-load partitioning, allowing multiple '
'compute agents to be run simultaneously. '
'(replaced by instance_discovery_method)'),
cfg.StrOpt('instance_discovery_method',
default='libvirt_metadata',
choices=['naive', 'workload_partitioning', 'libvirt_metadata'],
help="Ceilometer offers many methods to discover the instance "
"running on a compute node: \n"
"* naive: poll nova to get all instances\n"
"* workload_partitioning: poll nova to get instances of "
"the compute\n"
"* libvirt_metadata: get instances from libvirt metadata "
" but without instance metadata (recommended for Gnocchi "
" backend"),
cfg.IntOpt('resource_update_interval',
default=0,
min=0,
help="New instances will be discovered periodically based"
" on this option (in seconds). By default, "
"the agent discovers instances according to pipeline "
"polling interval. If option is greater than 0, "
"the instance list to poll will be updated based "
"on this option's interval. Measurements relating "
"to the instances will match intervals "
"defined in pipeline. "),
cfg.IntOpt('resource_cache_expiry',
default=3600,
min=0,
help="The expiry to totally refresh the instances resource "
"cache, since the instance may be migrated to another "
"host, we need to clean the legacy instances info in "
"local cache by totally refreshing the local cache. "
"The minimum should be the value of the config option "
"of resource_update_interval. This option is only used "
"for agent polling to Nova API, so it will works only "
"when 'instance_discovery_method' was set to 'naive'.")
]
LOG = log.getLogger(__name__)
class NovaLikeServer(object):
def __init__(self, **kwargs):
self.id = kwargs.pop('id')
for k, v in kwargs.items():
setattr(self, k, v)
def __repr__(self):
return '<NovaLikeServer: %s>' % getattr(self, 'name', 'unknown-name')
def __eq__(self, other):
return self.id == other.id
class InstanceDiscovery(plugin_base.DiscoveryBase):
method = None
def __init__(self, conf):
super(InstanceDiscovery, self).__init__(conf)
if not self.method:
self.method = conf.compute.instance_discovery_method
# For backward compatibility
if self.method == "naive" and conf.compute.workload_partitioning:
self.method = "workload_partitioning"
self.nova_cli = nova_client.Client(conf)
self.expiration_time = conf.compute.resource_update_interval
self.cache_expiry = conf.compute.resource_cache_expiry
if self.method == "libvirt_metadata":
# 4096 instances on a compute should be enough :)
self._flavor_cache = cachetools.LRUCache(4096)
else:
self.lock = threading.Lock()
self.instances = {}
self.last_run = None
self.last_cache_expire = None
@property
def connection(self):
return libvirt_utils.refresh_libvirt_connection(self.conf, self)
def discover(self, manager, param=None):
"""Discover resources to monitor."""
if self.method != "libvirt_metadata":
return self.discover_nova_polling(manager, param=None)
else:
return self.discover_libvirt_polling(manager, param=None)
@staticmethod
def _safe_find_int(xml, path):
elem = xml.find("./%s" % path)
if elem is not None:
return int(elem.text)
return 0
@cachetools.cachedmethod(operator.attrgetter('_flavor_cache'))
def get_flavor_id(self, name):
try:
return self.nova_cli.nova_client.flavors.find(name=name).id
except exceptions.NotFound:
return None
@libvirt_utils.retry_on_disconnect
def discover_libvirt_polling(self, manager, param=None):
instances = []
for domain in self.connection.listAllDomains():
full_xml = etree.fromstring(domain.XMLDesc())
os_type_xml = full_xml.find("./os/type")
xml_string = domain.metadata(
libvirt.VIR_DOMAIN_METADATA_ELEMENT,
"http://openstack.org/xmlns/libvirt/nova/1.0")
metadata_xml = etree.fromstring(xml_string)
# TODO(sileht): We don't have the flavor ID here, so the Gnocchi
# resource update will fail for compute samples (or should we put
# None?). We currently poll nova to get the flavor ID, but storing
# the flavor_id doesn't make any sense because the flavor
# description can change over time; we should store the details of
# the flavor. This is why nova doesn't put the id in the libvirt
# metadata.
flavor_xml = metadata_xml.find("./flavor")
flavor = {
"id": self.get_flavor_id(flavor_xml.attrib["name"]),
"name": flavor_xml.attrib["name"],
"vcpus": self._safe_find_int(flavor_xml, "vcpus"),
"ram": self._safe_find_int(flavor_xml, "memory"),
"disk": self._safe_find_int(flavor_xml, "disk"),
"ephemeral": self._safe_find_int(flavor_xml, "ephemeral"),
"swap": self._safe_find_int(flavor_xml, "swap"),
}
dom_state = domain.state()[0]
vm_state = libvirt_utils.LIBVIRT_POWER_STATE.get(dom_state)
status = libvirt_utils.LIBVIRT_STATUS.get(dom_state)
user_id = metadata_xml.find("./owner/user").attrib["uuid"]
project_id = metadata_xml.find("./owner/project").attrib["uuid"]
# From:
# https://github.com/openstack/nova/blob/852f40fd0c6e9d8878212ff3120556668023f1c4/nova/api/openstack/compute/views/servers.py#L214-L220
host_id = hashlib.sha224(
(project_id + self.conf.host).encode('utf-8')).hexdigest()
# The image description is partial, but Gnocchi only cares about the
# id, so we are fine
image_xml = metadata_xml.find("./root[@type='image']")
image = ({'id': image_xml.attrib['uuid']}
if image_xml is not None else None)
instance_data = {
"id": domain.UUIDString(),
"name": metadata_xml.find("./name").text,
"flavor": flavor,
"image": image,
"os_type": os_type_xml.text,
"architecture": os_type_xml.attrib["arch"],
"OS-EXT-SRV-ATTR:instance_name": domain.name(),
"OS-EXT-SRV-ATTR:host": self.conf.host,
"OS-EXT-STS:vm_state": vm_state,
"tenant_id": project_id,
"user_id": user_id,
"hostId": host_id,
"status": status,
# NOTE(sileht): Other fields that Ceilometer tracks,
# whose values we can't get here; they are
# retrieved via notifications.
"metadata": {},
# "OS-EXT-STS:task_state"
# 'reservation_id',
# 'OS-EXT-AZ:availability_zone',
# 'kernel_id',
# 'ramdisk_id',
# some image detail
}
LOG.debug("instance data: %s", instance_data)
instances.append(NovaLikeServer(**instance_data))
return instances
def discover_nova_polling(self, manager, param=None):
secs_from_last_update = 0
utc_now = timeutils.utcnow(True)
secs_from_last_expire = 0
if self.last_run:
secs_from_last_update = timeutils.delta_seconds(
self.last_run, utc_now)
if self.last_cache_expire:
secs_from_last_expire = timeutils.delta_seconds(
self.last_cache_expire, utc_now)
instances = []
# NOTE(ityaptin) We make a nova request only if
# it's the first discovery or the resources have expired
with self.lock:
if (not self.last_run or secs_from_last_update >=
self.expiration_time):
try:
if (secs_from_last_expire < self.cache_expiry and
self.last_run):
since = self.last_run.isoformat()
else:
since = None
self.instances.clear()
self.last_cache_expire = utc_now
instances = self.nova_cli.instance_get_all_by_host(
self.conf.host, since)
self.last_run = utc_now
except Exception:
# NOTE(zqfan): instance_get_all_by_host is wrapped and will
# log the exception when there is any error. There is no need
# to raise it again or log it one more time.
return []
for instance in instances:
if getattr(instance, 'OS-EXT-STS:vm_state', None) in [
'deleted', 'error']:
self.instances.pop(instance.id, None)
else:
self.instances[instance.id] = instance
return self.instances.values()
@property
def group_id(self):
return self.conf.host

View File

@ -1,173 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import monotonic
from oslo_log import log
from oslo_utils import timeutils
import ceilometer
from ceilometer.agent import plugin_base
from ceilometer.compute.pollsters import util
from ceilometer.compute.virt import inspector as virt_inspector
from ceilometer import sample
LOG = log.getLogger(__name__)
class NoVolumeException(Exception):
pass
class GenericComputePollster(plugin_base.PollsterBase):
"""This class aims to cache instance statistics data
First polled pollsters that inherit of this will retrieve and cache
stats of an instance, then other pollsters will just build the samples
without queyring the backend anymore.
"""
sample_name = None
sample_unit = ''
sample_type = sample.TYPE_GAUGE
sample_stats_key = None
inspector_method = None
def setup_environment(self):
super(GenericComputePollster, self).setup_environment()
self.inspector = self._get_inspector(self.conf)
@staticmethod
def aggregate_method(stats):
# Don't aggregate anything by default
return stats
@classmethod
def _get_inspector(cls, conf):
# FIXME(sileht): This doesn't look threadsafe...
try:
inspector = cls._inspector
except AttributeError:
inspector = virt_inspector.get_hypervisor_inspector(conf)
cls._inspector = inspector
return inspector
@property
def default_discovery(self):
return 'local_instances'
def _record_poll_time(self):
"""Method records current time as the poll time.
:return: time in seconds since the last poll time was recorded
"""
current_time = timeutils.utcnow()
duration = None
if hasattr(self, '_last_poll_time'):
duration = timeutils.delta_seconds(self._last_poll_time,
current_time)
self._last_poll_time = current_time
return duration
@staticmethod
def get_additional_metadata(instance, stats):
pass
@staticmethod
def get_resource_id(instance, stats):
return instance.id
def _inspect_cached(self, cache, instance, duration):
cache.setdefault(self.inspector_method, {})
if instance.id not in cache[self.inspector_method]:
result = getattr(self.inspector, self.inspector_method)(
instance, duration)
polled_time = monotonic.monotonic()
# Ensure we don't cache an iterator
if isinstance(result, collections.Iterable):
result = list(result)
else:
result = [result]
cache[self.inspector_method][instance.id] = (polled_time, result)
return cache[self.inspector_method][instance.id]
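# Hedged illustration of the per-cycle cache layout built above, shared
# by every pollster that uses the same inspector_method:
#
#   cache = {
#       'inspect_disks': {
#           '<instance-uuid>': (polled_time, [stats, ...]),
#       },
#   }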
def _stats_to_sample(self, instance, stats, polled_time):
volume = getattr(stats, self.sample_stats_key)
LOG.debug("%(instance_id)s/%(name)s volume: "
"%(volume)s" % {
'name': self.sample_name,
'instance_id': instance.id,
'volume': (volume if volume is not None
else 'Unavailable')})
if volume is None:
raise NoVolumeException()
return util.make_sample_from_instance(
self.conf,
instance,
name=self.sample_name,
unit=self.sample_unit,
type=self.sample_type,
resource_id=self.get_resource_id(instance, stats),
volume=volume,
additional_metadata=self.get_additional_metadata(
instance, stats),
monotonic_time=polled_time,
)
def get_samples(self, manager, cache, resources):
self._inspection_duration = self._record_poll_time()
for instance in resources:
try:
polled_time, result = self._inspect_cached(
cache, instance, self._inspection_duration)
if not result:
continue
for stats in self.aggregate_method(result):
yield self._stats_to_sample(instance, stats, polled_time)
except NoVolumeException:
# FIXME(sileht): This should be removed... but I will
# not change the test logic for now
LOG.warning("%(name)s statistic is not available for "
"instance %(instance_id)s" %
{'name': self.sample_name,
'instance_id': instance.id})
except virt_inspector.InstanceNotFoundException as err:
# Instance was deleted while getting samples. Ignore it.
LOG.debug('Exception while getting samples %s', err)
except virt_inspector.InstanceShutOffException as e:
LOG.debug('Instance %(instance_id)s was shut off while '
'getting sample of %(name)s: %(exc)s',
{'instance_id': instance.id,
'name': self.sample_name, 'exc': e})
except virt_inspector.NoDataException as e:
LOG.warning('Cannot inspect data of %(pollster)s for '
'%(instance_id)s, non-fatal reason: %(exc)s',
{'pollster': self.__class__.__name__,
'instance_id': instance.id, 'exc': e})
raise plugin_base.PollsterPermanentError(resources)
except ceilometer.NotImplementedError:
# Selected inspector does not implement this pollster.
LOG.debug('%(inspector)s does not provide data for '
'%(pollster)s',
{'inspector': self.inspector.__class__.__name__,
'pollster': self.__class__.__name__})
raise plugin_base.PollsterPermanentError(resources)
except Exception as err:
LOG.error(
'Could not get %(name)s events for %(id)s: %(e)s', {
'name': self.sample_name, 'id': instance.id, 'e': err},
exc_info=True)
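# Hedged sketch of what a concrete pollster has to provide (the name and
# stats key below are made up; the disk/network modules elsewhere in this
# tree are real examples):
#
#   class MyStatPollster(GenericComputePollster):
#       inspector_method = 'inspect_instance'
#       sample_name = 'my.stat'
#       sample_unit = 'B'
#       sample_stats_key = 'my_stat'
#
# Everything else -- caching, error handling, sample construction -- comes
# from GenericComputePollster.get_samples() above.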

View File

@ -1,239 +0,0 @@
#
# Copyright 2012 eNovance <licensing@enovance.com>
# Copyright 2012 Red Hat, Inc
# Copyright 2014 Cisco Systems, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
from oslo_log import log
from ceilometer.compute import pollsters
from ceilometer import sample
LOG = log.getLogger(__name__)
class AggregateDiskPollster(pollsters.GenericComputePollster):
inspector_method = "inspect_disks"
def aggregate_method(self, result):
fields = list(result[0]._fields)
fields.remove("device")
agg_stats = collections.defaultdict(int)
devices = []
for stats in result:
devices.append(stats.device)
for f in fields:
agg_stats[f] += getattr(stats, f)
kwargs = dict(agg_stats)
kwargs["device"] = devices
return [result[0].__class__(**kwargs)]
@staticmethod
def get_additional_metadata(instance, stats):
return {'device': stats.device}
class PerDeviceDiskPollster(pollsters.GenericComputePollster):
inspector_method = "inspect_disks"
@staticmethod
def get_resource_id(instance, stats):
return "%s-%s" % (instance.id, stats.device)
@staticmethod
def get_additional_metadata(instance, stats):
return {'disk_name': stats.device}
class ReadRequestsPollster(AggregateDiskPollster):
sample_name = 'disk.read.requests'
sample_unit = 'request'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'read_requests'
class PerDeviceReadRequestsPollster(PerDeviceDiskPollster):
sample_name = 'disk.device.read.requests'
sample_unit = 'request'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'read_requests'
class ReadBytesPollster(AggregateDiskPollster):
sample_name = 'disk.read.bytes'
sample_unit = 'B'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'read_bytes'
class PerDeviceReadBytesPollster(PerDeviceDiskPollster):
sample_name = 'disk.device.read.bytes'
sample_unit = 'B'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'read_bytes'
class WriteRequestsPollster(AggregateDiskPollster):
sample_name = 'disk.write.requests'
sample_unit = 'request'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'write_requests'
class PerDeviceWriteRequestsPollster(PerDeviceDiskPollster):
sample_name = 'disk.device.write.requests'
sample_unit = 'request'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'write_requests'
class WriteBytesPollster(AggregateDiskPollster):
sample_name = 'disk.write.bytes'
sample_unit = 'B'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'write_bytes'
class PerDeviceWriteBytesPollster(PerDeviceDiskPollster):
sample_name = 'disk.device.write.bytes'
sample_unit = 'B'
sample_type = sample.TYPE_CUMULATIVE
sample_stats_key = 'write_bytes'
class ReadBytesRatePollster(AggregateDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.read.bytes.rate'
sample_unit = 'B/s'
sample_stats_key = 'read_bytes_rate'
class PerDeviceReadBytesRatePollster(PerDeviceDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.device.read.bytes.rate'
sample_unit = 'B/s'
sample_stats_key = 'read_bytes_rate'
class ReadRequestsRatePollster(AggregateDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.read.requests.rate'
sample_unit = 'request/s'
sample_stats_key = 'read_requests_rate'
class PerDeviceReadRequestsRatePollster(PerDeviceDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.device.read.requests.rate'
sample_unit = 'request/s'
sample_stats_key = 'read_requests_rate'
class WriteBytesRatePollster(AggregateDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.write.bytes.rate'
sample_unit = 'B/s'
sample_stats_key = 'write_bytes_rate'
class PerDeviceWriteBytesRatePollster(PerDeviceDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.device.write.bytes.rate'
sample_unit = 'B/s'
sample_stats_key = 'write_bytes_rate'
class WriteRequestsRatePollster(AggregateDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.write.requests.rate'
sample_unit = 'request/s'
sample_stats_key = 'write_requests_rate'
class PerDeviceWriteRequestsRatePollster(PerDeviceDiskPollster):
inspector_method = "inspect_disk_rates"
sample_name = 'disk.device.write.requests.rate'
sample_unit = 'request/s'
sample_stats_key = 'write_requests_rate'
class DiskLatencyPollster(AggregateDiskPollster):
inspector_method = 'inspect_disk_latency'
sample_name = 'disk.latency'
sample_unit = 'ms'
sample_stats_key = 'disk_latency'
class PerDeviceDiskLatencyPollster(PerDeviceDiskPollster):
inspector_method = 'inspect_disk_latency'
sample_name = 'disk.device.latency'
sample_unit = 'ms'
sample_stats_key = 'disk_latency'
class DiskIOPSPollster(AggregateDiskPollster):
inspector_method = 'inspect_disk_iops'
sample_name = 'disk.iops'
sample_unit = 'count/s'
sample_stats_key = 'iops_count'
class PerDeviceDiskIOPSPollster(PerDeviceDiskPollster):
inspector_method = 'inspect_disk_iops'
sample_name = 'disk.device.iops'
sample_unit = 'count/s'
sample_stats_key = 'iops_count'
class CapacityPollster(AggregateDiskPollster):
inspector_method = 'inspect_disk_info'
sample_name = 'disk.capacity'
sample_unit = 'B'
sample_stats_key = 'capacity'
class PerDeviceCapacityPollster(PerDeviceDiskPollster):
inspector_method = 'inspect_disk_info'
sample_name = 'disk.device.capacity'
sample_unit = 'B'
sample_stats_key = 'capacity'
class AllocationPollster(AggregateDiskPollster):
inspector_method = 'inspect_disk_info'
sample_name = 'disk.allocation'
sample_unit = 'B'
sample_stats_key = 'allocation'
class PerDeviceAllocationPollster(PerDeviceDiskPollster):
inspector_method = 'inspect_disk_info'
sample_name = 'disk.device.allocation'
sample_unit = 'B'
sample_stats_key = 'allocation'
class PhysicalPollster(AggregateDiskPollster):
inspector_method = 'inspect_disk_info'
sample_name = 'disk.usage'
sample_unit = 'B'
sample_stats_key = 'physical'
class PerDevicePhysicalPollster(PerDeviceDiskPollster):
inspector_method = 'inspect_disk_info'
sample_name = 'disk.device.usage'
sample_unit = 'B'
sample_stats_key = 'physical'

View File

@ -1,101 +0,0 @@
#
# Copyright 2012 eNovance <licensing@enovance.com>
# Copyright 2012 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometer.compute import pollsters
from ceilometer import sample
class InstanceStatsPollster(pollsters.GenericComputePollster):
inspector_method = 'inspect_instance'
class CPUPollster(InstanceStatsPollster):
sample_name = 'cpu'
sample_unit = 'ns'
sample_stats_key = 'cpu_time'
sample_type = sample.TYPE_CUMULATIVE
@staticmethod
def get_additional_metadata(instance, c_data):
return {'cpu_number': c_data.cpu_number}
class CPUUtilPollster(InstanceStatsPollster):
sample_name = 'cpu_util'
sample_unit = '%'
sample_stats_key = 'cpu_util'
class MemoryUsagePollster(InstanceStatsPollster):
sample_name = 'memory.usage'
sample_unit = 'MB'
sample_stats_key = 'memory_usage'
class MemoryResidentPollster(InstanceStatsPollster):
sample_name = 'memory.resident'
sample_unit = 'MB'
sample_stats_key = 'memory_resident'
class MemorySwapInPollster(InstanceStatsPollster):
sample_name = 'memory.swap.in'
sample_unit = 'MB'
sample_stats_key = 'memory_swap_in'
class MemorySwapOutPollster(InstanceStatsPollster):
sample_name = 'memory.swap.out'
sample_unit = 'MB'
sample_stats_key = 'memory_swap_out'
class PerfCPUCyclesPollster(InstanceStatsPollster):
sample_name = 'perf.cpu.cycles'
sample_stats_key = 'cpu_cycles'
class PerfInstructionsPollster(InstanceStatsPollster):
sample_name = 'perf.instructions'
sample_stats_key = 'instructions'
class PerfCacheReferencesPollster(InstanceStatsPollster):
sample_name = 'perf.cache.references'
sample_stats_key = 'cache_references'
class PerfCacheMissesPollster(InstanceStatsPollster):
sample_name = 'perf.cache.misses'
sample_stats_key = 'cache_misses'
class MemoryBandwidthTotalPollster(InstanceStatsPollster):
sample_name = 'memory.bandwidth.total'
sample_unit = 'B/s'
sample_stats_key = 'memory_bandwidth_total'
class MemoryBandwidthLocalPollster(InstanceStatsPollster):
sample_name = 'memory.bandwidth.local'
sample_unit = 'B/s'
sample_stats_key = 'memory_bandwidth_local'
class CPUL3CachePollster(InstanceStatsPollster):
sample_name = 'cpu_l3_cache'
sample_unit = 'B'
sample_stats_key = "cpu_l3_cache_usage"

View File

@ -1,111 +0,0 @@
#
# Copyright 2012 eNovance <licensing@enovance.com>
# Copyright 2012 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometer.compute import pollsters
from ceilometer.compute.pollsters import util
from ceilometer import sample
class NetworkPollster(pollsters.GenericComputePollster):
inspector_method = "inspect_vnics"
@staticmethod
def get_additional_metadata(instance, stats):
additional_stats = {k: getattr(stats, k)
for k in ["name", "mac", "fref", "parameters"]}
if stats.fref is not None:
additional_stats['vnic_name'] = stats.fref
else:
additional_stats['vnic_name'] = stats.name
return additional_stats
@staticmethod
def get_resource_id(instance, stats):
if stats.fref is not None:
return stats.fref
else:
instance_name = util.instance_name(instance)
return "%s-%s-%s" % (instance_name, instance.id, stats.name)
class IncomingBytesPollster(NetworkPollster):
sample_name = 'network.incoming.bytes'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'B'
sample_stats_key = 'rx_bytes'
class IncomingPacketsPollster(NetworkPollster):
sample_name = 'network.incoming.packets'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'packet'
sample_stats_key = 'rx_packets'
class OutgoingBytesPollster(NetworkPollster):
sample_name = 'network.outgoing.bytes'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'B'
sample_stats_key = 'tx_bytes'
class OutgoingPacketsPollster(NetworkPollster):
sample_name = 'network.outgoing.packets'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'packet'
sample_stats_key = 'tx_packets'
class IncomingBytesRatePollster(NetworkPollster):
inspector_method = "inspect_vnic_rates"
sample_name = 'network.incoming.bytes.rate'
sample_unit = 'B/s'
sample_stats_key = 'rx_bytes_rate'
class OutgoingBytesRatePollster(NetworkPollster):
inspector_method = "inspect_vnic_rates"
sample_name = 'network.outgoing.bytes.rate'
sample_unit = 'B/s'
sample_stats_key = 'tx_bytes_rate'
class IncomingDropPollster(NetworkPollster):
sample_name = 'network.incoming.packets.drop'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'packet'
sample_stats_key = 'rx_drop'
class OutgoingDropPollster(NetworkPollster):
sample_name = 'network.outgoing.packets.drop'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'packet'
sample_stats_key = 'tx_drop'
class IncomingErrorsPollster(NetworkPollster):
sample_name = 'network.incoming.packets.error'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'packet'
sample_stats_key = 'rx_errors'
class OutgoingErrorsPollster(NetworkPollster):
sample_name = 'network.outgoing.packets.error'
sample_type = sample.TYPE_CUMULATIVE
sample_unit = 'packet'
sample_stats_key = 'tx_errors'

View File

@ -1,99 +0,0 @@
#
# Copyright 2012 eNovance <licensing@enovance.com>
# Copyright 2012 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometer import sample
INSTANCE_PROPERTIES = [
# Identity properties
'reservation_id',
# Type properties
'architecture',
'OS-EXT-AZ:availability_zone',
'kernel_id',
'os_type',
'ramdisk_id',
]
def _get_metadata_from_object(conf, instance):
"""Return a metadata dictionary for the instance."""
instance_type = instance.flavor['name'] if instance.flavor else None
metadata = {
'display_name': instance.name,
'name': getattr(instance, 'OS-EXT-SRV-ATTR:instance_name', u''),
'instance_id': instance.id,
'instance_type': instance_type,
'host': instance.hostId,
'instance_host': getattr(instance, 'OS-EXT-SRV-ATTR:host', u''),
'flavor': instance.flavor,
'status': instance.status.lower(),
'state': getattr(instance, 'OS-EXT-STS:vm_state', u''),
'task_state': getattr(instance, 'OS-EXT-STS:task_state', u''),
}
# Image properties
if instance.image:
metadata['image'] = instance.image
metadata['image_ref'] = instance.image['id']
# Images that come through the conductor API in the nova notifier
# plugin will not have links.
if instance.image.get('links'):
metadata['image_ref_url'] = instance.image['links'][0]['href']
else:
metadata['image_ref_url'] = None
else:
metadata['image'] = None
metadata['image_ref'] = None
metadata['image_ref_url'] = None
for name in INSTANCE_PROPERTIES:
if hasattr(instance, name):
metadata[name] = getattr(instance, name)
metadata['vcpus'] = instance.flavor['vcpus']
metadata['memory_mb'] = instance.flavor['ram']
metadata['disk_gb'] = instance.flavor['disk']
metadata['ephemeral_gb'] = instance.flavor['ephemeral']
metadata['root_gb'] = (int(metadata['disk_gb']) -
int(metadata['ephemeral_gb']))
return sample.add_reserved_user_metadata(conf, instance.metadata,
metadata)
def make_sample_from_instance(conf, instance, name, type, unit, volume,
resource_id=None, additional_metadata=None,
monotonic_time=None):
additional_metadata = additional_metadata or {}
resource_metadata = _get_metadata_from_object(conf, instance)
resource_metadata.update(additional_metadata)
return sample.Sample(
name=name,
type=type,
unit=unit,
volume=volume,
user_id=instance.user_id,
project_id=instance.tenant_id,
resource_id=resource_id or instance.id,
resource_metadata=resource_metadata,
monotonic_time=monotonic_time,
)
def instance_name(instance):
"""Shortcut to get instance name."""
return getattr(instance, 'OS-EXT-SRV-ATTR:instance_name', None)
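# Hedged usage sketch: a pollster feeds make_sample_from_instance() with a
# nova-like server object and the polled value (instance and stats are
# assumed objects here), e.g.:
#
#   s = make_sample_from_instance(
#       conf, instance,
#       name='cpu', type=sample.TYPE_CUMULATIVE, unit='ns',
#       volume=stats.cpu_time)
#
# resource_id defaults to instance.id, and the flavor/image details are
# folded into resource_metadata by _get_metadata_from_object() above.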

View File

@ -1,150 +0,0 @@
# Copyright 2013 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Implementation of Inspector abstraction for Hyper-V"""
import collections
import functools
import sys
from os_win import exceptions as os_win_exc
from os_win import utilsfactory
from oslo_utils import units
import six
from ceilometer.compute.pollsters import util
from ceilometer.compute.virt import inspector as virt_inspector
def convert_exceptions(function, exception_map):
expected_exceptions = tuple(exception_map.keys())
@functools.wraps(function)
def wrapper(*args, **kwargs):
try:
return function(*args, **kwargs)
except expected_exceptions as ex:
# exception might be a subclass of an expected exception.
for expected in expected_exceptions:
if isinstance(ex, expected):
raised_exception = exception_map[expected]
break
exc_info = sys.exc_info()
# NOTE(claudiub): Python 3 raises the exception object given as
# the second argument in six.reraise.
# The original message will be maintained by passing the original
# exception.
exc = raised_exception(six.text_type(exc_info[1]))
six.reraise(raised_exception, exc, exc_info[2])
return wrapper
def decorate_all_methods(decorator, *args, **kwargs):
def decorate(cls):
for attr in cls.__dict__:
class_member = getattr(cls, attr)
if callable(class_member):
setattr(cls, attr, decorator(class_member, *args, **kwargs))
return cls
return decorate
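# Hedged illustration (class and mapping are made up): decorate_all_methods
# rewraps every method of a class, so that
#
#   @decorate_all_methods(
#       convert_exceptions,
#       {os_win_exc.OSWinException: virt_inspector.InspectorException})
#   class Foo(object):
#       def boom(self):
#           raise os_win_exc.OSWinException('boom')
#
#   Foo().boom()  # now raises virt_inspector.InspectorException instead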
exception_conversion_map = collections.OrderedDict([
# NOTE(claudiub): order should be from the most specialized exception type
# to the most generic exception type.
# (expected_exception, converted_exception)
(os_win_exc.NotFound, virt_inspector.InstanceNotFoundException),
(os_win_exc.OSWinException, virt_inspector.InspectorException),
])
# NOTE(claudiub): the purpose of the decorator below is to prevent any
# os_win exceptions (subclasses of OSWinException) from leaking outside
# of the HyperVInspector.
@decorate_all_methods(convert_exceptions, exception_conversion_map)
class HyperVInspector(virt_inspector.Inspector):
def __init__(self, conf):
super(HyperVInspector, self).__init__(conf)
self._utils = utilsfactory.get_metricsutils()
self._host_max_cpu_clock = self._compute_host_max_cpu_clock()
def _compute_host_max_cpu_clock(self):
hostutils = utilsfactory.get_hostutils()
# host's number of CPUs and CPU clock speed will not change.
cpu_info = hostutils.get_cpus_info()
host_cpu_count = len(cpu_info)
host_cpu_clock = cpu_info[0]['MaxClockSpeed']
return float(host_cpu_clock * host_cpu_count)
def inspect_instance(self, instance, duration):
instance_name = util.instance_name(instance)
(cpu_clock_used,
cpu_count, uptime) = self._utils.get_cpu_metrics(instance_name)
cpu_percent_used = cpu_clock_used / self._host_max_cpu_clock
# Nanoseconds
cpu_time = (int(uptime * cpu_percent_used) * units.k)
memory_usage = self._utils.get_memory_metrics(instance_name)
return virt_inspector.InstanceStats(
cpu_number=cpu_count,
cpu_time=cpu_time,
memory_usage=memory_usage)
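    # Hedged worked example of the arithmetic above (illustrative values):
    # a host with 8 CPUs at 2500 MHz gives _host_max_cpu_clock = 20000.0,
    # so cpu_clock_used = 5000 yields cpu_percent_used = 0.25, and with
    # uptime = 1000 the reported cpu_time is int(1000 * 0.25) * units.k
    # = 250000 (units.k == 1000).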
def inspect_vnics(self, instance, duration):
instance_name = util.instance_name(instance)
for vnic_metrics in self._utils.get_vnic_metrics(instance_name):
yield virt_inspector.InterfaceStats(
name=vnic_metrics["element_name"],
mac=vnic_metrics["address"],
fref=None,
parameters=None,
rx_bytes=vnic_metrics['rx_mb'] * units.Mi,
rx_packets=0,
rx_drop=0,
rx_errors=0,
tx_bytes=vnic_metrics['tx_mb'] * units.Mi,
tx_packets=0,
tx_drop=0,
tx_errors=0)
def inspect_disks(self, instance, duration):
instance_name = util.instance_name(instance)
for disk_metrics in self._utils.get_disk_metrics(instance_name):
yield virt_inspector.DiskStats(
device=disk_metrics['instance_id'],
read_requests=0,
# Return bytes
read_bytes=disk_metrics['read_mb'] * units.Mi,
write_requests=0,
write_bytes=disk_metrics['write_mb'] * units.Mi,
errors=0)
def inspect_disk_latency(self, instance, duration):
instance_name = util.instance_name(instance)
for disk_metrics in self._utils.get_disk_latency_metrics(
instance_name):
yield virt_inspector.DiskLatencyStats(
device=disk_metrics['instance_id'],
disk_latency=disk_metrics['disk_latency'] / 1000)
def inspect_disk_iops(self, instance, duration):
instance_name = util.instance_name(instance)
for disk_metrics in self._utils.get_disk_iops_count(instance_name):
yield virt_inspector.DiskIOPSStats(
device=disk_metrics['instance_id'],
iops_count=disk_metrics['iops_count'])

View File

@ -1,280 +0,0 @@
#
# Copyright 2012 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Inspector abstraction for read-only access to hypervisors."""
import collections
from oslo_config import cfg
from oslo_log import log
from stevedore import driver
import ceilometer
OPTS = [
cfg.StrOpt('hypervisor_inspector',
default='libvirt',
help='Inspector to use for inspecting the hypervisor layer. '
'Known inspectors are libvirt, hyperv, vsphere '
'and xenapi.'),
]
LOG = log.getLogger(__name__)
# Class representing instance statistics, with one attribute per field below
class InstanceStats(object):
fields = [
'cpu_number', # number: number of CPUs
'cpu_time', # time: cumulative CPU time
'cpu_util', # util: CPU utilization in percentage
'cpu_l3_cache_usage', # cachesize: Amount of CPU L3 cache used
'memory_usage', # usage: Amount of memory used
        'memory_resident',  # resident: Amount of resident (RSS) memory
'memory_swap_in', # memory swap in
'memory_swap_out', # memory swap out
'memory_bandwidth_total', # total: total system bandwidth from one
# level of cache
'memory_bandwidth_local', # local: bandwidth of memory traffic for a
# memory controller
        'cpu_cycles',  # cpu_cycles: the count of CPU cycles executed
        'instructions',  # instructions: the count of instructions executed
        'cache_references',  # cache_references: the count of cache accesses
        'cache_misses',  # cache_misses: the count of cache misses
]
def __init__(self, **kwargs):
for k in self.fields:
setattr(self, k, kwargs.pop(k, None))
if kwargs:
            raise AttributeError(
                "'InstanceStats' object has no attribute(s): %s" %
                ', '.join(kwargs))
# Named tuple representing vNIC statistics.
#
# name: the name of the vNIC
# mac: the MAC address
# fref: the filter ref
# parameters: miscellaneous parameters
# rx_bytes: number of received bytes
# rx_packets: number of received packets
# tx_bytes: number of transmitted bytes
# tx_packets: number of transmitted packets
#
InterfaceStats = collections.namedtuple('InterfaceStats',
['name', 'mac', 'fref', 'parameters',
'rx_bytes', 'tx_bytes',
'rx_packets', 'tx_packets',
'rx_drop', 'tx_drop',
'rx_errors', 'tx_errors'])
# Named tuple representing vNIC rate statistics.
#
# name: the name of the vNIC
# mac: the MAC address
# fref: the filter ref
# parameters: miscellaneous parameters
# rx_bytes_rate: rate of received bytes
# tx_bytes_rate: rate of transmitted bytes
#
InterfaceRateStats = collections.namedtuple('InterfaceRateStats',
['name', 'mac',
'fref', 'parameters',
'rx_bytes_rate', 'tx_bytes_rate'])
# Named tuple representing disk statistics.
#
# read_bytes: number of bytes read
# read_requests: number of read operations
# write_bytes: number of bytes written
# write_requests: number of write operations
# errors: number of errors
#
DiskStats = collections.namedtuple('DiskStats',
['device',
'read_bytes', 'read_requests',
'write_bytes', 'write_requests',
'errors'])
# Named tuple representing disk rate statistics.
#
# read_bytes_rate: number of bytes read per second
# read_requests_rate: number of read operations per second
# write_bytes_rate: number of bytes written per second
# write_requests_rate: number of write operations per second
#
DiskRateStats = collections.namedtuple('DiskRateStats',
['device',
'read_bytes_rate',
'read_requests_rate',
'write_bytes_rate',
'write_requests_rate'])
# Named tuple representing disk latency statistics.
#
# disk_latency: average disk latency
#
DiskLatencyStats = collections.namedtuple('DiskLatencyStats',
['device', 'disk_latency'])
# Named tuple representing disk iops statistics.
#
# iops_count: number of I/O operations per second
#
DiskIOPSStats = collections.namedtuple('DiskIOPSStats',
['device', 'iops_count'])
# Named tuple representing disk Information.
#
# capacity: capacity of the disk
# allocation: allocation of the disk
# physical: usage of the disk
DiskInfo = collections.namedtuple('DiskInfo',
['device',
'capacity',
'allocation',
'physical'])
# Exception types
#
class InspectorException(Exception):
def __init__(self, message=None):
super(InspectorException, self).__init__(message)
class InstanceNotFoundException(InspectorException):
pass
class InstanceShutOffException(InspectorException):
pass
class NoDataException(InspectorException):
pass
# Main virt inspector abstraction layering over the hypervisor API.
#
class Inspector(object):
def __init__(self, conf):
self.conf = conf
def inspect_instance(self, instance, duration):
"""Inspect the CPU statistics for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: the instance stats
"""
raise ceilometer.NotImplementedError
def inspect_vnics(self, instance, duration):
"""Inspect the vNIC statistics for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: for each vNIC, the number of bytes & packets
received and transmitted
"""
raise ceilometer.NotImplementedError
def inspect_vnic_rates(self, instance, duration):
"""Inspect the vNIC rate statistics for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: for each vNIC, the rate of bytes & packets
received and transmitted
"""
raise ceilometer.NotImplementedError
def inspect_disks(self, instance, duration):
"""Inspect the disk statistics for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: for each disk, the number of bytes & operations
read and written, and the error count
"""
raise ceilometer.NotImplementedError
def inspect_disk_rates(self, instance, duration):
"""Inspect the disk statistics as rates for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: for each disk, the number of bytes & operations
read and written per second, with the error count
"""
raise ceilometer.NotImplementedError
def inspect_disk_latency(self, instance, duration):
"""Inspect the disk statistics as rates for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: for each disk, the average disk latency
"""
raise ceilometer.NotImplementedError
def inspect_disk_iops(self, instance, duration):
"""Inspect the disk statistics as rates for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
:return: for each disk, the number of iops per second
"""
raise ceilometer.NotImplementedError
def inspect_disk_info(self, instance, duration):
"""Inspect the disk information for an instance.
:param instance: the target instance
:param duration: the last 'n' seconds, over which the value should be
inspected
        :return: for each disk, the capacity, allocation and usage
"""
raise ceilometer.NotImplementedError
def get_hypervisor_inspector(conf):
try:
namespace = 'ceilometer.compute.virt'
mgr = driver.DriverManager(namespace,
conf.hypervisor_inspector,
invoke_on_load=True,
invoke_args=(conf, ))
return mgr.driver
except ImportError as e:
LOG.error("Unable to load the hypervisor inspector: %s" % e)
return Inspector(conf)
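# Hedged sketch of how the stevedore lookup above resolves a driver: the
# names come from entry points registered under the 'ceilometer.compute.virt'
# namespace, e.g. (illustrative setup.cfg excerpt):
#
#     ceilometer.compute.virt =
#         libvirt = ceilometer.compute.virt.libvirt.inspector:LibvirtInspector
#         hyperv = ceilometer.compute.virt.hyperv.inspector:HyperVInspector
#
# so with hypervisor_inspector=libvirt, get_hypervisor_inspector(conf)
# instantiates LibvirtInspector(conf).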

View File

@ -1,214 +0,0 @@
#
# Copyright 2012 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Implementation of Inspector abstraction for libvirt."""
from lxml import etree
from oslo_log import log as logging
from oslo_utils import units
import six
try:
import libvirt
except ImportError:
libvirt = None
from ceilometer.compute.pollsters import util
from ceilometer.compute.virt import inspector as virt_inspector
from ceilometer.compute.virt.libvirt import utils as libvirt_utils
from ceilometer.i18n import _
LOG = logging.getLogger(__name__)
class LibvirtInspector(virt_inspector.Inspector):
def __init__(self, conf):
super(LibvirtInspector, self).__init__(conf)
# NOTE(sileht): create a connection on startup
self.connection
@property
def connection(self):
return libvirt_utils.refresh_libvirt_connection(self.conf, self)
def _lookup_by_uuid(self, instance):
instance_name = util.instance_name(instance)
try:
return self.connection.lookupByUUIDString(instance.id)
except libvirt.libvirtError as ex:
if libvirt_utils.is_disconnection_exception(ex):
raise
msg = _("Error from libvirt while looking up instance "
"<name=%(name)s, id=%(id)s>: "
"[Error Code %(error_code)s] "
"%(ex)s") % {'name': instance_name,
'id': instance.id,
'error_code': ex.get_error_code(),
'ex': ex}
raise virt_inspector.InstanceNotFoundException(msg)
except Exception as ex:
raise virt_inspector.InspectorException(six.text_type(ex))
def _get_domain_not_shut_off_or_raise(self, instance):
instance_name = util.instance_name(instance)
domain = self._lookup_by_uuid(instance)
state = domain.info()[0]
if state == libvirt.VIR_DOMAIN_SHUTOFF:
msg = _('Failed to inspect data of instance '
'<name=%(name)s, id=%(id)s>, '
'domain state is SHUTOFF.') % {
'name': instance_name, 'id': instance.id}
raise virt_inspector.InstanceShutOffException(msg)
return domain
@libvirt_utils.retry_on_disconnect
def inspect_vnics(self, instance, duration):
domain = self._get_domain_not_shut_off_or_raise(instance)
tree = etree.fromstring(domain.XMLDesc(0))
for iface in tree.findall('devices/interface'):
target = iface.find('target')
if target is not None:
name = target.get('dev')
else:
continue
mac = iface.find('mac')
if mac is not None:
mac_address = mac.get('address')
else:
continue
fref = iface.find('filterref')
if fref is not None:
fref = fref.get('filter')
params = dict((p.get('name').lower(), p.get('value'))
for p in iface.findall('filterref/parameter'))
dom_stats = domain.interfaceStats(name)
yield virt_inspector.InterfaceStats(name=name,
mac=mac_address,
fref=fref,
parameters=params,
rx_bytes=dom_stats[0],
rx_packets=dom_stats[1],
rx_errors=dom_stats[2],
rx_drop=dom_stats[3],
tx_bytes=dom_stats[4],
tx_packets=dom_stats[5],
tx_errors=dom_stats[6],
tx_drop=dom_stats[7])
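    # Hedged, self-contained sketch of the XML walk above against a minimal
    # domain XML fragment (illustrative values, runnable on its own):
    #
    #     from lxml import etree
    #     xml = ("<domain><devices><interface type='bridge'>"
    #            "<mac address='fa:16:3e:00:00:01'/>"
    #            "<target dev='tap0'/></interface></devices></domain>")
    #     iface = etree.fromstring(xml).find('devices/interface')
    #     iface.find('target').get('dev')    # -> 'tap0'
    #     iface.find('mac').get('address')   # -> 'fa:16:3e:00:00:01'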
@libvirt_utils.retry_on_disconnect
def inspect_disks(self, instance, duration):
domain = self._get_domain_not_shut_off_or_raise(instance)
tree = etree.fromstring(domain.XMLDesc(0))
for device in filter(
bool,
[target.get("dev")
for target in tree.findall('devices/disk/target')]):
block_stats = domain.blockStats(device)
yield virt_inspector.DiskStats(device=device,
read_requests=block_stats[0],
read_bytes=block_stats[1],
write_requests=block_stats[2],
write_bytes=block_stats[3],
errors=block_stats[4])
@libvirt_utils.retry_on_disconnect
def inspect_disk_info(self, instance, duration):
domain = self._get_domain_not_shut_off_or_raise(instance)
tree = etree.fromstring(domain.XMLDesc(0))
for disk in tree.findall('devices/disk'):
disk_type = disk.get('type')
if disk_type:
if disk_type == 'network':
                    LOG.warning(
                        'Inspecting disk usage of network disk '
                        '%(instance_uuid)s is unsupported by libvirt' % {
                            'instance_uuid': instance.id})
continue
# NOTE(lhx): "cdrom" device associated to the configdrive
# no longer has a "source" element. Releated bug:
# https://bugs.launchpad.net/ceilometer/+bug/1622718
if disk.find('source') is None:
continue
target = disk.find('target')
device = target.get('dev')
if device:
block_info = domain.blockInfo(device)
yield virt_inspector.DiskInfo(device=device,
capacity=block_info[0],
allocation=block_info[1],
physical=block_info[2])
@libvirt_utils.raise_nodata_if_unsupported
@libvirt_utils.retry_on_disconnect
def inspect_instance(self, instance, duration=None):
domain = self._get_domain_not_shut_off_or_raise(instance)
memory_used = memory_resident = None
memory_swap_in = memory_swap_out = None
memory_stats = domain.memoryStats()
# Stat provided from libvirt is in KB, converting it to MB.
if 'available' in memory_stats and 'unused' in memory_stats:
memory_used = (memory_stats['available'] -
memory_stats['unused']) / units.Ki
if 'rss' in memory_stats:
memory_resident = memory_stats['rss'] / units.Ki
if 'swap_in' in memory_stats and 'swap_out' in memory_stats:
memory_swap_in = memory_stats['swap_in'] / units.Ki
memory_swap_out = memory_stats['swap_out'] / units.Ki
        # TODO(sileht): stats also have the disk/vnic info; we could use
        # that instead of the old method starting with Queens
stats = self.connection.domainListGetStats([domain], 0)[0][1]
cpu_time = 0
current_cpus = stats.get('vcpu.current')
# Iterate over the maximum number of CPUs here, and count the
# actual number encountered, since the vcpu.x structure can
# have holes according to
# https://libvirt.org/git/?p=libvirt.git;a=blob;f=src/libvirt-domain.c
# virConnectGetAllDomainStats()
for vcpu in six.moves.range(stats.get('vcpu.maximum', 0)):
try:
cpu_time += (stats.get('vcpu.%s.time' % vcpu) +
stats.get('vcpu.%s.wait' % vcpu))
current_cpus -= 1
except TypeError:
                # Pass here; if there are too many holes, the CPU count
                # will not match, so no special error handling is needed.
if current_cpus:
# There wasn't enough data, so fall back
cpu_time = stats.get('cpu.time')
return virt_inspector.InstanceStats(
cpu_number=stats.get('vcpu.current'),
cpu_time=cpu_time,
memory_usage=memory_used,
memory_resident=memory_resident,
memory_swap_in=memory_swap_in,
memory_swap_out=memory_swap_out,
cpu_cycles=stats.get("perf.cpu_cycles"),
instructions=stats.get("perf.instructions"),
cache_references=stats.get("perf.cache_references"),
cache_misses=stats.get("perf.cache_misses"),
memory_bandwidth_total=stats.get("perf.mbmt"),
memory_bandwidth_local=stats.get("perf.mbml"),
cpu_l3_cache_usage=stats.get("perf.cmt"),
)
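    # Hedged sketch of the vCPU hole handling above with a toy stats dict
    # in which vCPU 1 was unplugged (its time/wait entries are missing):
    #
    #     stats = {'vcpu.maximum': 3, 'vcpu.current': 2,
    #              'vcpu.0.time': 100, 'vcpu.0.wait': 10,
    #              'vcpu.2.time': 200, 'vcpu.2.wait': 20}
    #
    # The loop sums 100 + 10 + 200 + 20 = 330 and decrements current_cpus
    # to 0, so the per-vCPU total is trusted; with one more hole,
    # current_cpus would stay non-zero and cpu.time would be used instead.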

View File

@ -1,126 +0,0 @@
#
# Copyright 2016 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
import tenacity
try:
import libvirt
except ImportError:
libvirt = None
from ceilometer.compute.virt import inspector as virt_inspector
from ceilometer.i18n import _
LOG = logging.getLogger(__name__)
OPTS = [
cfg.StrOpt('libvirt_type',
default='kvm',
choices=['kvm', 'lxc', 'qemu', 'uml', 'xen'],
help='Libvirt domain type.'),
cfg.StrOpt('libvirt_uri',
default='',
help='Override the default libvirt URI '
'(which is dependent on libvirt_type).'),
]
LIBVIRT_PER_TYPE_URIS = dict(uml='uml:///system', xen='xen:///', lxc='lxc:///')
# We don't use the libvirt constants in case libvirt is not available
VIR_DOMAIN_NOSTATE = 0
VIR_DOMAIN_RUNNING = 1
VIR_DOMAIN_BLOCKED = 2
VIR_DOMAIN_PAUSED = 3
VIR_DOMAIN_SHUTDOWN = 4
VIR_DOMAIN_SHUTOFF = 5
VIR_DOMAIN_CRASHED = 6
VIR_DOMAIN_PMSUSPENDED = 7
# Stolen from nova
LIBVIRT_POWER_STATE = {
VIR_DOMAIN_NOSTATE: 'pending',
VIR_DOMAIN_RUNNING: 'running',
VIR_DOMAIN_BLOCKED: 'running',
VIR_DOMAIN_PAUSED: 'paused',
VIR_DOMAIN_SHUTDOWN: 'shutdown',
VIR_DOMAIN_SHUTOFF: 'shutdown',
VIR_DOMAIN_CRASHED: 'crashed',
VIR_DOMAIN_PMSUSPENDED: 'suspended',
}
# NOTE(sileht): This is a guess of the nova status; it should be right
# 99.9% of the time, but can be wrong during some transition states
# like shelving/rescuing
LIBVIRT_STATUS = {
VIR_DOMAIN_NOSTATE: 'building',
VIR_DOMAIN_RUNNING: 'active',
VIR_DOMAIN_BLOCKED: 'active',
VIR_DOMAIN_PAUSED: 'paused',
VIR_DOMAIN_SHUTDOWN: 'stopped',
VIR_DOMAIN_SHUTOFF: 'stopped',
VIR_DOMAIN_CRASHED: 'error',
VIR_DOMAIN_PMSUSPENDED: 'suspended',
}
def new_libvirt_connection(conf):
if not libvirt:
raise ImportError("python-libvirt module is missing")
uri = (conf.libvirt_uri or LIBVIRT_PER_TYPE_URIS.get(conf.libvirt_type,
'qemu:///system'))
LOG.debug('Connecting to libvirt: %s', uri)
return libvirt.openReadOnly(uri)
def refresh_libvirt_connection(conf, klass):
connection = getattr(klass, '_libvirt_connection', None)
if not connection or not connection.isAlive():
connection = new_libvirt_connection(conf)
setattr(klass, '_libvirt_connection', connection)
return connection
def is_disconnection_exception(e):
if not libvirt:
return False
return (isinstance(e, libvirt.libvirtError)
and e.get_error_code() in (libvirt.VIR_ERR_SYSTEM_ERROR,
libvirt.VIR_ERR_INTERNAL_ERROR)
and e.get_error_domain() in (libvirt.VIR_FROM_REMOTE,
libvirt.VIR_FROM_RPC))
retry_on_disconnect = tenacity.retry(
retry=tenacity.retry_if_exception(is_disconnection_exception),
stop=tenacity.stop_after_attempt(2))
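# Hedged usage sketch: the decorator above makes at most two attempts when
# a call raises a libvirt disconnection error; other exceptions propagate
# immediately, and a second failure surfaces as tenacity.RetryError.
#
#     @retry_on_disconnect
#     def fetch_domain(conn, uuid):  # hypothetical helper
#         return conn.lookupByUUIDString(uuid)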
def raise_nodata_if_unsupported(method):
def inner(in_self, instance, *args, **kwargs):
try:
return method(in_self, instance, *args, **kwargs)
except libvirt.libvirtError as e:
            # NOTE(sileht): At this point, libvirt connection errors have
            # been re-raised as tenacity.RetryError()
msg = _('Failed to inspect instance %(instance_uuid)s stats, '
'can not get info from libvirt: %(error)s') % {
"instance_uuid": instance.id,
"error": e}
raise virt_inspector.NoDataException(msg)
return inner

View File

@ -1,205 +0,0 @@
# Copyright (c) 2014 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Implementation of Inspector abstraction for VMware vSphere"""
from oslo_config import cfg
from oslo_utils import units
import six
from ceilometer.compute.virt import inspector as virt_inspector
from ceilometer.compute.virt.vmware import vsphere_operations
from ceilometer.i18n import _
vmware_api = None
opt_group = cfg.OptGroup(name='vmware',
title='Options for VMware')
OPTS = [
cfg.HostAddressOpt('host_ip',
default='127.0.0.1',
help='IP address of the VMware vSphere host.'),
cfg.PortOpt('host_port',
default=443,
help='Port of the VMware vSphere host.'),
cfg.StrOpt('host_username',
default='',
help='Username of VMware vSphere.'),
cfg.StrOpt('host_password',
default='',
help='Password of VMware vSphere.',
secret=True),
cfg.StrOpt('ca_file',
help='CA bundle file to use in verifying the vCenter server '
'certificate.'),
cfg.BoolOpt('insecure',
default=False,
help='If true, the vCenter server certificate is not '
'verified. If false, then the default CA truststore is '
'used for verification. This option is ignored if '
'"ca_file" is set.'),
cfg.IntOpt('api_retry_count',
default=10,
help='Number of times a VMware vSphere API may be retried.'),
cfg.FloatOpt('task_poll_interval',
default=0.5,
help='Sleep time in seconds for polling an ongoing async '
'task.'),
cfg.StrOpt('wsdl_location',
               help='Optional vim service WSDL location, '
                    'e.g. http://<server>/vimService.wsdl. '
                    'Optional override of the default location for bug '
                    'workarounds.'),
]
VC_AVERAGE_MEMORY_CONSUMED_CNTR = 'mem:consumed:average'
VC_AVERAGE_CPU_CONSUMED_CNTR = 'cpu:usage:average'
VC_NETWORK_RX_COUNTER = 'net:received:average'
VC_NETWORK_TX_COUNTER = 'net:transmitted:average'
VC_DISK_READ_RATE_CNTR = "disk:read:average"
VC_DISK_READ_REQUESTS_RATE_CNTR = "disk:numberReadAveraged:average"
VC_DISK_WRITE_RATE_CNTR = "disk:write:average"
VC_DISK_WRITE_REQUESTS_RATE_CNTR = "disk:numberWriteAveraged:average"
def get_api_session(conf):
global vmware_api
if vmware_api is None:
vmware_api = __import__('oslo_vmware.api')
api_session = vmware_api.api.VMwareAPISession(
conf.vmware.host_ip,
conf.vmware.host_username,
conf.vmware.host_password,
conf.vmware.api_retry_count,
conf.vmware.task_poll_interval,
wsdl_loc=conf.vmware.wsdl_location,
port=conf.vmware.host_port,
cacert=conf.vmware.ca_file,
insecure=conf.vmware.insecure)
return api_session
class VsphereInspector(virt_inspector.Inspector):
def __init__(self, conf):
super(VsphereInspector, self).__init__(conf)
self._ops = vsphere_operations.VsphereOperations(
get_api_session(self.conf), 1000)
def _get_vm_mobj_not_power_off_or_raise(self, instance):
vm_mobj = self._ops.get_vm_mobj(instance.id)
if vm_mobj is None:
raise virt_inspector.InstanceNotFoundException(
_('VM %s not found in VMware vSphere') % instance.id)
vm_powerState = self._ops.query_vm_property(vm_mobj,
'runtime.powerState')
if vm_powerState == "poweredOff":
raise virt_inspector.InstanceShutOffException(
                _('VM %s is powered off in VMware vSphere') % instance.id)
return vm_mobj
def inspect_vnic_rates(self, instance, duration):
vm_mobj = self._get_vm_mobj_not_power_off_or_raise(instance)
vnic_stats = {}
vnic_ids = set()
for net_counter in (VC_NETWORK_RX_COUNTER, VC_NETWORK_TX_COUNTER):
net_counter_id = self._ops.get_perf_counter_id(net_counter)
vnic_id_to_stats_map = self._ops.query_vm_device_stats(
vm_mobj, net_counter_id, duration)
# The sample for this map is: {4000: 0.0, vmnic5: 0.0, vmnic4: 0.0,
# vmnic3: 0.0, vmnic2: 0.0, vmnic1: 0.0, vmnic0: 0.0}
# "4000" is the virtual nic which we need.
# And these "vmnic*" are phynical nics in the host, so we remove it
vnic_id_to_stats_map = {k: v for (k, v)
in vnic_id_to_stats_map.items()
if not k.startswith('vmnic')}
vnic_stats[net_counter] = vnic_id_to_stats_map
vnic_ids.update(six.iterkeys(vnic_id_to_stats_map))
# Stats provided from vSphere are in KB/s, converting it to B/s.
for vnic_id in sorted(vnic_ids):
rx_bytes_rate = (vnic_stats[VC_NETWORK_RX_COUNTER]
.get(vnic_id, 0) * units.Ki)
tx_bytes_rate = (vnic_stats[VC_NETWORK_TX_COUNTER]
.get(vnic_id, 0) * units.Ki)
yield virt_inspector.InterfaceRateStats(
name=vnic_id,
mac=None,
fref=None,
parameters=None,
rx_bytes_rate=rx_bytes_rate,
tx_bytes_rate=tx_bytes_rate)
def inspect_disk_rates(self, instance, duration):
vm_mobj = self._get_vm_mobj_not_power_off_or_raise(instance)
disk_stats = {}
disk_ids = set()
disk_counters = [
VC_DISK_READ_RATE_CNTR,
VC_DISK_READ_REQUESTS_RATE_CNTR,
VC_DISK_WRITE_RATE_CNTR,
VC_DISK_WRITE_REQUESTS_RATE_CNTR
]
for disk_counter in disk_counters:
disk_counter_id = self._ops.get_perf_counter_id(disk_counter)
disk_id_to_stat_map = self._ops.query_vm_device_stats(
vm_mobj, disk_counter_id, duration)
disk_stats[disk_counter] = disk_id_to_stat_map
disk_ids.update(six.iterkeys(disk_id_to_stat_map))
for disk_id in disk_ids:
def stat_val(counter_name):
return disk_stats[counter_name].get(disk_id, 0)
# Stats provided from vSphere are in KB/s, converting it to B/s.
yield virt_inspector.DiskRateStats(
device=disk_id,
read_bytes_rate=stat_val(VC_DISK_READ_RATE_CNTR) * units.Ki,
read_requests_rate=stat_val(VC_DISK_READ_REQUESTS_RATE_CNTR),
write_bytes_rate=stat_val(VC_DISK_WRITE_RATE_CNTR) * units.Ki,
write_requests_rate=stat_val(VC_DISK_WRITE_REQUESTS_RATE_CNTR)
)
def inspect_instance(self, instance, duration):
vm_mobj = self._get_vm_mobj_not_power_off_or_raise(instance)
cpu_util_counter_id = self._ops.get_perf_counter_id(
VC_AVERAGE_CPU_CONSUMED_CNTR)
cpu_util = self._ops.query_vm_aggregate_stats(
vm_mobj, cpu_util_counter_id, duration)
        # For this counter vSphere returns values scaled up by 100, since
        # the corresponding API can't return decimals, only longs.
        # For example, if the utilization is 12.34%, the value returned is
        # 1234. Hence, dividing by 100.
cpu_util = cpu_util / 100
mem_counter_id = self._ops.get_perf_counter_id(
VC_AVERAGE_MEMORY_CONSUMED_CNTR)
memory = self._ops.query_vm_aggregate_stats(
vm_mobj, mem_counter_id, duration)
# Stat provided from vSphere is in KB, converting it to MB.
memory = memory / units.Ki
return virt_inspector.InstanceStats(
cpu_util=cpu_util,
memory_usage=memory)

View File

@ -1,234 +0,0 @@
# Copyright (c) 2014 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
from oslo_vmware import vim_util
except ImportError:
# NOTE(sileht): this is safe because inspector will not load
vim_util = None
PERF_MANAGER_TYPE = "PerformanceManager"
PERF_COUNTER_PROPERTY = "perfCounter"
VM_INSTANCE_ID_PROPERTY = 'config.extraConfig["nvp.vm-uuid"].value'
# ESXi Servers sample performance data every 20 seconds. 20-second interval
# data is called instance data or real-time data. To retrieve instance data,
# we need to specify a value of 20 seconds for the "PerfQuerySpec.intervalId"
# property. In that case the "QueryPerf" method operates as a raw data feed
# that bypasses the vCenter database and instead retrieves performance data
# from an ESXi host.
# The following value is time interval for real-time performance stats
# in seconds and it is not configurable.
VC_REAL_TIME_SAMPLING_INTERVAL = 20
class VsphereOperations(object):
"""Class to invoke vSphere APIs calls.
vSphere APIs calls are required by various pollsters, collecting data from
VMware infrastructure.
"""
def __init__(self, api_session, max_objects):
self._api_session = api_session
self._max_objects = max_objects
# Mapping between "VM's Nova instance Id" -> "VM's managed object"
        # When a VM is deployed by Nova, its name is the instance ID,
        # so this map essentially has VM names as keys.
self._vm_mobj_lookup_map = {}
# Mapping from full name -> ID, for VC Performance counters
self._perf_counter_id_lookup_map = None
def _init_vm_mobj_lookup_map(self):
session = self._api_session
result = session.invoke_api(vim_util, "get_objects", session.vim,
"VirtualMachine", self._max_objects,
[VM_INSTANCE_ID_PROPERTY],
False)
while result:
            for vm_object in result.objects:
                vm_mobj = vm_object.obj
                # propSet will be set only if the server provides a value
                if hasattr(vm_object, 'propSet') and vm_object.propSet:
                    vm_instance_id = vm_object.propSet[0].val
if vm_instance_id:
self._vm_mobj_lookup_map[vm_instance_id] = vm_mobj
result = session.invoke_api(vim_util, "continue_retrieval",
session.vim, result)
def get_vm_mobj(self, vm_instance_id):
"""Method returns VC mobj of the VM by its NOVA instance ID."""
if vm_instance_id not in self._vm_mobj_lookup_map:
self._init_vm_mobj_lookup_map()
return self._vm_mobj_lookup_map.get(vm_instance_id, None)
def _init_perf_counter_id_lookup_map(self):
# Query details of all the performance counters from VC
session = self._api_session
client_factory = session.vim.client.factory
perf_manager = session.vim.service_content.perfManager
prop_spec = vim_util.build_property_spec(
client_factory, PERF_MANAGER_TYPE, [PERF_COUNTER_PROPERTY])
obj_spec = vim_util.build_object_spec(
client_factory, perf_manager, None)
filter_spec = vim_util.build_property_filter_spec(
client_factory, [prop_spec], [obj_spec])
options = client_factory.create('ns0:RetrieveOptions')
options.maxObjects = 1
prop_collector = session.vim.service_content.propertyCollector
result = session.invoke_api(session.vim, "RetrievePropertiesEx",
prop_collector, specSet=[filter_spec],
options=options)
perf_counter_infos = result.objects[0].propSet[0].val.PerfCounterInfo
# Extract the counter Id for each counter and populate the map
self._perf_counter_id_lookup_map = {}
for perf_counter_info in perf_counter_infos:
counter_group = perf_counter_info.groupInfo.key
counter_name = perf_counter_info.nameInfo.key
counter_rollup_type = perf_counter_info.rollupType
counter_id = perf_counter_info.key
counter_full_name = (counter_group + ":" + counter_name + ":" +
counter_rollup_type)
self._perf_counter_id_lookup_map[counter_full_name] = counter_id
def get_perf_counter_id(self, counter_full_name):
"""Method returns the ID of VC performance counter by its full name.
A VC performance counter is uniquely identified by the
tuple {'Group Name', 'Counter Name', 'Rollup Type'}.
It will have an id - counter ID (changes from one VC to another),
which is required to query performance stats from that VC.
This method returns the ID for a counter,
assuming 'CounterFullName' => 'Group Name:CounterName:RollupType'.
"""
if not self._perf_counter_id_lookup_map:
self._init_perf_counter_id_lookup_map()
return self._perf_counter_id_lookup_map[counter_full_name]
# TODO(akhils@vmware.com) Move this method to common library
# when it gets checked-in
def query_vm_property(self, vm_mobj, property_name):
"""Method returns the value of specified property for a VM.
:param vm_mobj: managed object of the VM whose property is to be
queried
:param property_name: path of the property
"""
session = self._api_session
return session.invoke_api(vim_util, "get_object_property",
session.vim, vm_mobj, property_name)
def query_vm_aggregate_stats(self, vm_mobj, counter_id, duration):
"""Method queries the aggregated real-time stat value for a VM.
This method should be used for aggregate counters.
:param vm_mobj: managed object of the VM
:param counter_id: id of the perf counter in VC
:param duration: in seconds from current time,
over which the stat value was applicable
:return: the aggregated stats value for the counter
"""
# For aggregate counters, device_name should be ""
stats = self._query_vm_perf_stats(vm_mobj, counter_id, "", duration)
# Performance manager provides the aggregated stats value
# with device name -> None
return stats.get(None, 0)
def query_vm_device_stats(self, vm_mobj, counter_id, duration):
"""Method queries the real-time stat values for a VM, for all devices.
This method should be used for device(non-aggregate) counters.
:param vm_mobj: managed object of the VM
:param counter_id: id of the perf counter in VC
:param duration: in seconds from current time,
over which the stat value was applicable
:return: a map containing the stat values keyed by the device ID/name
"""
# For device counters, device_name should be "*" to get stat values
# for all devices.
stats = self._query_vm_perf_stats(vm_mobj, counter_id, "*", duration)
# For some device counters, in addition to the per device value
# the Performance manager also returns the aggregated value.
# Just to be consistent, deleting the aggregated value if present.
stats.pop(None, None)
return stats
def _query_vm_perf_stats(self, vm_mobj, counter_id, device_name, duration):
"""Method queries the real-time stat values for a VM.
:param vm_mobj: managed object of the VM for which stats are needed
:param counter_id: id of the perf counter in VC
:param device_name: name of the device for which stats are to be
queried. For aggregate counters pass empty string ("").
For device counters pass "*", if stats are required over all
devices.
:param duration: in seconds from current time,
over which the stat value was applicable
:return: a map containing the stat values keyed by the device ID/name
"""
session = self._api_session
client_factory = session.vim.client.factory
# Construct the QuerySpec
metric_id = client_factory.create('ns0:PerfMetricId')
metric_id.counterId = counter_id
metric_id.instance = device_name
query_spec = client_factory.create('ns0:PerfQuerySpec')
query_spec.entity = vm_mobj
query_spec.metricId = [metric_id]
query_spec.intervalId = VC_REAL_TIME_SAMPLING_INTERVAL
# We query all samples which are applicable over the specified duration
samples_cnt = (int(duration / VC_REAL_TIME_SAMPLING_INTERVAL)
if duration and
duration >= VC_REAL_TIME_SAMPLING_INTERVAL else 1)
query_spec.maxSample = samples_cnt
perf_manager = session.vim.service_content.perfManager
perf_stats = session.invoke_api(session.vim, 'QueryPerf', perf_manager,
querySpec=[query_spec])
stat_values = {}
if perf_stats:
entity_metric = perf_stats[0]
sample_infos = entity_metric.sampleInfo
if len(sample_infos) > 0:
for metric_series in entity_metric.value:
# Take the average of all samples to improve the accuracy
# of the stat value
stat_value = float(sum(metric_series.value)) / samples_cnt
device_id = metric_series.id.instance
stat_values[device_id] = stat_value
return stat_values
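    # Hedged worked example of the sampling math above: duration = 60 and
    # VC_REAL_TIME_SAMPLING_INTERVAL = 20 give samples_cnt = int(60 / 20)
    # = 3, so up to three 20-second samples are fetched per device and the
    # returned stat value is their arithmetic mean.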

View File

@ -1,184 +0,0 @@
# Copyright 2014 Intel
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Implementation of Inspector abstraction for XenAPI."""
from os_xenapi.client import session as xenapi_session
from os_xenapi.client import XenAPI
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import units
from ceilometer.compute.pollsters import util
from ceilometer.compute.virt import inspector as virt_inspector
from ceilometer.i18n import _
LOG = logging.getLogger(__name__)
opt_group = cfg.OptGroup(name='xenapi',
title='Options for XenAPI')
OPTS = [
cfg.StrOpt('connection_url',
help='URL for connection to XenServer/Xen Cloud Platform.'),
cfg.StrOpt('connection_username',
default='root',
help='Username for connection to XenServer/Xen Cloud '
'Platform.'),
cfg.StrOpt('connection_password',
help='Password for connection to XenServer/Xen Cloud Platform.',
secret=True),
]
class XenapiException(virt_inspector.InspectorException):
pass
def get_api_session(conf):
url = conf.xenapi.connection_url
username = conf.xenapi.connection_username
password = conf.xenapi.connection_password
if not url or password is None:
        raise XenapiException(_('Must specify connection_url and '
                                'connection_password'))
try:
session = xenapi_session.XenAPISession(url, username, password,
originator="ceilometer")
LOG.debug("XenAPI session is created successfully, %s", session)
except XenAPI.Failure as e:
msg = _("Could not connect to XenAPI: %s") % e.details[0]
raise XenapiException(msg)
return session
class XenapiInspector(virt_inspector.Inspector):
def __init__(self, conf):
super(XenapiInspector, self).__init__(conf)
self.session = get_api_session(self.conf)
def _lookup_by_name(self, instance_name):
vm_refs = self.session.VM.get_by_name_label(instance_name)
n = len(vm_refs)
if n == 0:
raise virt_inspector.InstanceNotFoundException(
_('VM %s not found in XenServer') % instance_name)
elif n > 1:
raise XenapiException(
                _('Multiple VMs named %s found in XenServer') % instance_name)
else:
return vm_refs[0]
def inspect_instance(self, instance, duration):
instance_name = util.instance_name(instance)
vm_ref = self._lookup_by_name(instance_name)
cpu_util = self._get_cpu_usage(vm_ref, instance_name)
memory_usage = self._get_memory_usage(vm_ref)
LOG.debug("inspect_instance, cpu_util: %(cpu)s, memory_usage: %(mem)s",
{'cpu': cpu_util, 'mem': memory_usage}, instance=instance)
return virt_inspector.InstanceStats(cpu_util=cpu_util,
memory_usage=memory_usage)
def _get_cpu_usage(self, vm_ref, instance_name):
vcpus_number = int(self.session.VM.get_VCPUs_max(vm_ref))
if vcpus_number <= 0:
msg = _("Could not get VM %s CPU number") % instance_name
raise XenapiException(msg)
cpu_util = 0.0
for index in range(vcpus_number):
cpu_util += float(self.session.VM.query_data_source(
vm_ref, "cpu%d" % index))
return cpu_util / int(vcpus_number) * 100
def _get_memory_usage(self, vm_ref):
total_mem = float(self.session.VM.query_data_source(vm_ref, "memory"))
try:
free_mem = float(self.session.VM.query_data_source(
vm_ref, "memory_internal_free"))
except XenAPI.Failure:
            # If PV tools are not installed in the guest instance, it's
            # impossible to get free memory, so give it a default value
            # of 0.
free_mem = 0
        # memory provided by XenServer is in bytes;
        # memory_internal_free provided by XenServer is in KB,
        # so convert the result to MB.
return (total_mem - free_mem * units.Ki) / units.Mi
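    # Hedged worked example of the conversion above (illustrative values):
    # total_mem = 2147483648 B (2 GiB) and memory_internal_free =
    # 524288 KB (512 MiB) yield
    # (2147483648 - 524288 * 1024) / units.Mi = 1536.0 MB.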
def inspect_vnics(self, instance, duration):
instance_name = util.instance_name(instance)
vm_ref = self._lookup_by_name(instance_name)
dom_id = self.session.VM.get_domid(vm_ref)
vif_refs = self.session.VM.get_VIFs(vm_ref)
bw_all = self.session.call_plugin_serialized('bandwidth',
'fetch_all_bandwidth')
LOG.debug("inspect_vnics, all bandwidth: %s", bw_all,
instance=instance)
for vif_ref in vif_refs or []:
vif_rec = self.session.VIF.get_record(vif_ref)
bw_vif = bw_all[dom_id][vif_rec['device']]
# TODO(jianghuaw): Currently the plugin can only support
# rx_bytes and tx_bytes, so temporarily set others as -1.
yield virt_inspector.InterfaceStats(
name=vif_rec['uuid'],
mac=vif_rec['MAC'],
fref=None,
parameters=None,
rx_bytes=bw_vif['bw_in'], rx_packets=-1, rx_drop=-1,
rx_errors=-1, tx_bytes=bw_vif['bw_out'], tx_packets=-1,
tx_drop=-1, tx_errors=-1)
def inspect_vnic_rates(self, instance, duration):
instance_name = util.instance_name(instance)
vm_ref = self._lookup_by_name(instance_name)
vif_refs = self.session.VM.get_VIFs(vm_ref)
if vif_refs:
for vif_ref in vif_refs:
vif_rec = self.session.VIF.get_record(vif_ref)
rx_rate = float(self.session.VM.query_data_source(
vm_ref, "vif_%s_rx" % vif_rec['device']))
tx_rate = float(self.session.VM.query_data_source(
vm_ref, "vif_%s_tx" % vif_rec['device']))
yield virt_inspector.InterfaceRateStats(
name=vif_rec['uuid'],
mac=vif_rec['MAC'],
fref=None,
parameters=None,
rx_bytes_rate=rx_rate,
tx_bytes_rate=tx_rate)
def inspect_disk_rates(self, instance, duration):
instance_name = util.instance_name(instance)
vm_ref = self._lookup_by_name(instance_name)
vbd_refs = self.session.VM.get_VBDs(vm_ref)
if vbd_refs:
for vbd_ref in vbd_refs:
vbd_rec = self.session.VBD.get_record(vbd_ref)
read_rate = float(self.session.VM.query_data_source(
vm_ref, "vbd_%s_read" % vbd_rec['device']))
write_rate = float(self.session.VM.query_data_source(
vm_ref, "vbd_%s_write" % vbd_rec['device']))
yield virt_inspector.DiskRateStats(
device=vbd_rec['device'],
read_bytes_rate=read_rate,
read_requests_rate=0,
write_bytes_rate=write_rate,
write_requests_rate=0)

View File

@ -1,37 +0,0 @@
# Copyright 2016 Hewlett Packard Enterprise Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_middleware import cors
def set_cors_middleware_defaults():
"""Update default configuration options for oslo.middleware."""
cors.set_defaults(
allow_headers=['X-Auth-Token',
'X-Identity-Status',
'X-Roles',
'X-Service-Catalog',
'X-User-Id',
'X-Tenant-Id',
'X-Openstack-Request-Id'],
expose_headers=['X-Auth-Token',
'X-Subject-Token',
'X-Service-Token',
'X-Openstack-Request-Id'],
allow_methods=['GET',
'PUT',
'POST',
'DELETE',
'PATCH']
)

View File

@ -1,359 +0,0 @@
---
metric:
  # Image
  - name: "image.size"
    event_type:
      - "image.upload"
      - "image.delete"
      - "image.update"
    type: "gauge"
    unit: B
    volume: $.payload.size
    resource_id: $.payload.id
    project_id: $.payload.owner
  - name: "image.download"
    event_type: "image.send"
    type: "delta"
    unit: "B"
    volume: $.payload.bytes_sent
    resource_id: $.payload.image_id
    user_id: $.payload.receiver_user_id
    project_id: $.payload.receiver_tenant_id
  - name: "image.serve"
    event_type: "image.send"
    type: "delta"
    unit: "B"
    volume: $.payload.bytes_sent
    resource_id: $.payload.image_id
    project_id: $.payload.owner_id
  - name: 'volume.size'
    event_type:
      - 'volume.exists'
      - 'volume.create.*'
      - 'volume.delete.*'
      - 'volume.resize.*'
      - 'volume.attach.*'
      - 'volume.detach.*'
      - 'volume.update.*'
    type: 'gauge'
    unit: 'GB'
    volume: $.payload.size
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.volume_id
    metadata:
      display_name: $.payload.display_name
      volume_type: $.payload.volume_type
  - name: 'snapshot.size'
    event_type:
      - 'snapshot.exists'
      - 'snapshot.create.*'
      - 'snapshot.delete.*'
    type: 'gauge'
    unit: 'GB'
    volume: $.payload.volume_size
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.snapshot_id
    metadata:
      display_name: $.payload.display_name
  - name: 'backup.size'
    event_type:
      - 'backup.exists'
      - 'backup.create.*'
      - 'backup.delete.*'
      - 'backup.restore.*'
    type: 'gauge'
    unit: 'GB'
    volume: $.payload.size
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.backup_id
    metadata:
      display_name: $.payload.display_name
  # Magnum
  - name: $.payload.metrics.[*].name
    event_type: 'magnum.bay.metrics.*'
    type: 'gauge'
    unit: $.payload.metrics.[*].unit
    volume: $.payload.metrics.[*].value
    user_id: $.payload.user_id
    project_id: $.payload.project_id
    resource_id: $.payload.resource_id
    lookup: ['name', 'unit', 'volume']
  # Swift
  - name: $.payload.measurements.[*].metric.[*].name
    event_type: 'objectstore.http.request'
    type: 'delta'
    unit: $.payload.measurements.[*].metric.[*].unit
    volume: $.payload.measurements.[*].result
    resource_id: $.payload.target.id
    user_id: $.payload.initiator.id
    project_id: $.payload.initiator.project_id
    lookup: ['name', 'unit', 'volume']
  - name: 'memory'
    event_type: 'compute.instance.*'
    type: 'gauge'
    unit: 'MB'
    volume: $.payload.memory_mb
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.instance_id
    user_metadata: $.payload.metadata
    metadata: &instance_meta
      host: $.payload.host
      flavor_id: $.payload.instance_flavor_id
      flavor_name: $.payload.instance_type
      display_name: $.payload.display_name
      image_ref: $.payload.image_meta.base_image_ref
  - name: 'vcpus'
    event_type: 'compute.instance.*'
    type: 'gauge'
    unit: 'vcpu'
    volume: $.payload.vcpus
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.instance_id
    user_metadata: $.payload.metadata
    metadata:
      <<: *instance_meta
  - name: 'compute.instance.booting.time'
    event_type: 'compute.instance.create.end'
    type: 'gauge'
    unit: 'sec'
    volume:
      fields: [$.payload.created_at, $.payload.launched_at]
      plugin: 'timedelta'
    project_id: $.payload.tenant_id
    resource_id: $.payload.instance_id
    user_metadata: $.payload.metadata
    metadata:
      <<: *instance_meta
  - name: 'disk.root.size'
    event_type: 'compute.instance.*'
    type: 'gauge'
    unit: 'GB'
    volume: $.payload.root_gb
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.instance_id
    user_metadata: $.payload.metadata
    metadata:
      <<: *instance_meta
  - name: 'disk.ephemeral.size'
    event_type: 'compute.instance.*'
    type: 'gauge'
    unit: 'GB'
    volume: $.payload.ephemeral_gb
    user_id: $.payload.user_id
    project_id: $.payload.tenant_id
    resource_id: $.payload.instance_id
    user_metadata: $.payload.metadata
    metadata:
      <<: *instance_meta
  - name: 'bandwidth'
    event_type: 'l3.meter'
    type: 'delta'
    unit: 'B'
    volume: $.payload.bytes
    project_id: $.payload.tenant_id
    resource_id: $.payload.label_id
  - name: 'compute.node.cpu.frequency'
    event_type: 'compute.metrics.update'
    type: 'gauge'
    unit: 'MHz'
    volume: $.payload.metrics[?(@.name='cpu.frequency')].value
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.frequency')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.frequency')].source
  - name: 'compute.node.cpu.user.time'
    event_type: 'compute.metrics.update'
    type: 'cumulative'
    unit: 'ns'
    volume: $.payload.metrics[?(@.name='cpu.user.time')].value
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.user.time')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.user.time')].source
  - name: 'compute.node.cpu.kernel.time'
    event_type: 'compute.metrics.update'
    type: 'cumulative'
    unit: 'ns'
    volume: $.payload.metrics[?(@.name='cpu.kernel.time')].value
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.kernel.time')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.kernel.time')].source
  - name: 'compute.node.cpu.idle.time'
    event_type: 'compute.metrics.update'
    type: 'cumulative'
    unit: 'ns'
    volume: $.payload.metrics[?(@.name='cpu.idle.time')].value
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.idle.time')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.idle.time')].source
  - name: 'compute.node.cpu.iowait.time'
    event_type: 'compute.metrics.update'
    type: 'cumulative'
    unit: 'ns'
    volume: $.payload.metrics[?(@.name='cpu.iowait.time')].value
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.iowait.time')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.iowait.time')].source
  - name: 'compute.node.cpu.kernel.percent'
    event_type: 'compute.metrics.update'
    type: 'gauge'
    unit: 'percent'
    volume: $.payload.metrics[?(@.name='cpu.kernel.percent')].value * 100
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.kernel.percent')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.kernel.percent')].source
  - name: 'compute.node.cpu.idle.percent'
    event_type: 'compute.metrics.update'
    type: 'gauge'
    unit: 'percent'
    volume: $.payload.metrics[?(@.name='cpu.idle.percent')].value * 100
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.idle.percent')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.idle.percent')].source
  - name: 'compute.node.cpu.user.percent'
    event_type: 'compute.metrics.update'
    type: 'gauge'
    unit: 'percent'
    volume: $.payload.metrics[?(@.name='cpu.user.percent')].value * 100
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.user.percent')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.user.percent')].source
  - name: 'compute.node.cpu.iowait.percent'
    event_type: 'compute.metrics.update'
    type: 'gauge'
    unit: 'percent'
    volume: $.payload.metrics[?(@.name='cpu.iowait.percent')].value * 100
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.iowait.percent')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.iowait.percent')].source
  - name: 'compute.node.cpu.percent'
    event_type: 'compute.metrics.update'
    type: 'gauge'
    unit: 'percent'
    volume: $.payload.metrics[?(@.name='cpu.percent')].value * 100
    resource_id: $.payload.host + "_" + $.payload.nodename
    timestamp: $.payload.metrics[?(@.name='cpu.percent')].timestamp
    metadata:
      event_type: $.event_type
      host: $.publisher_id
      source: $.payload.metrics[?(@.name='cpu.percent')].source
  # Identity
  # NOTE(gordc): hack because jsonpath-rw-ext can't concat starting with string.
  - name: $.payload.outcome - $.payload.outcome + 'identity.authenticate.' + $.payload.outcome
    type: 'delta'
    unit: 'user'
    volume: 1
    event_type:
      - 'identity.authenticate'
    resource_id: $.payload.initiator.id
    user_id: $.payload.initiator.id
  # DNS
  - name: 'dns.domain.exists'
    event_type: 'dns.domain.exists'
    type: 'cumulative'
    unit: 's'
    volume:
      fields: [$.payload.audit_period_beginning, $.payload.audit_period_ending]
      plugin: 'timedelta'
    project_id: $.payload.tenant_id
    resource_id: $.payload.id
    user_id: $._context_user
    metadata:
      status: $.payload.status
      pool_id: $.payload.pool_id
      host: $.publisher_id
  # Trove
  - name: 'trove.instance.exists'
    event_type: 'trove.instance.exists'
    type: 'cumulative'
    unit: 's'
    volume:
      fields: [$.payload.audit_period_beginning, $.payload.audit_period_ending]
      plugin: 'timedelta'
    project_id: $.payload.tenant_id
    resource_id: $.payload.instance_id
    user_id: $.payload.user_id
    metadata:
      nova_instance_id: $.payload.nova_instance_id
      state: $.payload.state
      service_id: $.payload.service_id
      instance_type: $.payload.instance_type
      instance_type_id: $.payload.instance_type_id
  # Manila
  - name: 'manila.share.size'
    event_type:
      - 'share.create.*'
      - 'share.delete.*'
      - 'share.extend.*'
      - 'share.shrink.*'
    type: 'gauge'
    unit: 'GB'
    volume: $.payload.size
    user_id: $.payload.user_id
    project_id: $.payload.project_id
    resource_id: $.payload.share_id
    metadata:
      name: $.payload.name
      host: $.payload.host
      availability_zone: $.payload.availability_zone
      status: $.payload.status
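# Hedged illustration (hypothetical payload) of how the "image.size"
# definition at the top of this file maps a notification onto a sample:
#
#   {"event_type": "image.upload",
#    "payload": {"id": "img-1", "owner": "proj-1", "size": 13287936}}
#
# volume      <- $.payload.size   -> 13287936 (unit B, type gauge)
# resource_id <- $.payload.id     -> "img-1"
# project_id  <- $.payload.owner  -> "proj-1"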

View File

@ -1,187 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from jsonpath_rw_ext import parser
from oslo_log import log
import six
import yaml
from ceilometer.i18n import _
LOG = log.getLogger(__name__)
class DefinitionException(Exception):
def __init__(self, message, definition_cfg):
msg = '%s %s: %s' % (self.__class__.__name__, definition_cfg, message)
super(DefinitionException, self).__init__(msg)
self.brief_message = message
class MeterDefinitionException(DefinitionException):
pass
class EventDefinitionException(DefinitionException):
pass
class ResourceDefinitionException(DefinitionException):
pass
class Definition(object):
JSONPATH_RW_PARSER = parser.ExtentedJsonPathParser()
GETTERS_CACHE = {}
def __init__(self, name, cfg, plugin_manager):
self.cfg = cfg
self.name = name
self.plugin = None
if isinstance(cfg, dict):
if 'fields' not in cfg:
raise DefinitionException(
_("The field 'fields' is required for %s") % name,
self.cfg)
if 'plugin' in cfg:
plugin_cfg = cfg['plugin']
if isinstance(plugin_cfg, six.string_types):
plugin_name = plugin_cfg
plugin_params = {}
else:
try:
plugin_name = plugin_cfg['name']
except KeyError:
raise DefinitionException(
_('Plugin specified, but no plugin name supplied '
'for %s') % name, self.cfg)
plugin_params = plugin_cfg.get('parameters')
if plugin_params is None:
plugin_params = {}
try:
plugin_ext = plugin_manager[plugin_name]
except KeyError:
raise DefinitionException(
_('No plugin named %(plugin)s available for '
'%(name)s') % dict(
plugin=plugin_name,
name=name), self.cfg)
plugin_class = plugin_ext.plugin
self.plugin = plugin_class(**plugin_params)
fields = cfg['fields']
else:
# Simple definition "foobar: jsonpath"
fields = cfg
if isinstance(fields, list):
# NOTE(mdragon): if not a string, we assume a list.
if len(fields) == 1:
fields = fields[0]
else:
fields = '|'.join('(%s)' % path for path in fields)
if isinstance(fields, six.integer_types):
self.getter = fields
else:
try:
self.getter = self.make_getter(fields)
except Exception as e:
raise DefinitionException(
_("Parse error in JSONPath specification "
"'%(jsonpath)s' for %(name)s: %(err)s")
% dict(jsonpath=fields, name=name, err=e), self.cfg)
def _get_path(self, match):
if match.context is not None:
for path_element in self._get_path(match.context):
yield path_element
yield str(match.path)
def parse(self, obj, return_all_values=False):
if callable(self.getter):
values = self.getter(obj)
else:
return self.getter
values = [match for match in values
if return_all_values or match.value is not None]
if self.plugin is not None:
if return_all_values and not self.plugin.support_return_all_values:
raise DefinitionException("Plugin %s don't allows to "
"return multiple values" %
self.cfg["plugin"]["name"], self.cfg)
values_map = [('.'.join(self._get_path(match)), match.value) for
match in values]
values = [v for v in self.plugin.trait_values(values_map)
if v is not None]
else:
values = [match.value for match in values if match is not None]
if return_all_values:
return values
else:
return values[0] if values else None
def make_getter(self, fields):
if fields in self.GETTERS_CACHE:
return self.GETTERS_CACHE[fields]
else:
getter = self.JSONPATH_RW_PARSER.parse(fields).find
self.GETTERS_CACHE[fields] = getter
return getter
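    # Hedged usage sketch (not part of the original class): with a plain
    # JSONPath string the plugin branch is never taken, so an empty plugin
    # manager is enough here.
    #
    #     d = Definition('size', '$.payload.size', plugin_manager={})
    #     d.parse({'payload': {'size': 42}})  # -> 42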
def load_definitions(conf, defaults, config_file, fallback_file=None):
"""Setup a definitions from yaml config file."""
if not os.path.exists(config_file):
config_file = conf.find_file(config_file)
if not config_file and fallback_file is not None:
LOG.debug("No Definitions configuration file found! "
"Using default config.")
config_file = fallback_file
if config_file is not None:
LOG.debug("Loading definitions configuration file: %s", config_file)
with open(config_file) as cf:
config = cf.read()
try:
definition_cfg = yaml.safe_load(config)
except yaml.YAMLError as err:
if hasattr(err, 'problem_mark'):
mark = err.problem_mark
errmsg = (_("Invalid YAML syntax in Definitions file "
"%(file)s at line: %(line)s, column: %(column)s.")
% dict(file=config_file,
line=mark.line + 1,
column=mark.column + 1))
else:
errmsg = (_("YAML error reading Definitions file "
"%(file)s")
% dict(file=config_file))
LOG.error(errmsg)
raise
else:
LOG.debug("No Definitions configuration file found! "
"Using default config.")
definition_cfg = defaults
LOG.info("Definitions: %s", definition_cfg)
return definition_cfg

Some files were not shown because too many files have changed in this diff.