
V2 Web API

Resources

ceilometer.api.controllers.v2.resources:ResourcesController

ceilometer.api.controllers.v2.resources.Resource

Meters

ceilometer.api.controllers.v2.meters:MetersController

ceilometer.api.controllers.v2.meters:MeterController

ceilometer.api.controllers.v2.meters.Meter

ceilometer.api.controllers.v2.meters.OldSample

Samples and Statistics

ceilometer.api.controllers.v2.samples:SamplesController

ceilometer.api.controllers.v2.samples.Sample

ceilometer.api.controllers.v2.meters.Statistics

When a simple statistics request is invoked (using GET /v2/meters/<meter_name>/statistics), it will return the standard set of Statistics: avg, sum, min, max, and count.
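For example, a plain statistics request on a cpu_util meter, together with a sketch of the kind of response it produces (the values shown are illustrative only):

GET /v2/meters/cpu_util/statistics

[{"avg": 0.25, "sum": 2.0, "min": 0.1, "max": 0.4, "count": 8,
  "duration": 3600.0,
  "duration_start": "2013-06-01T00:00:00",
  "duration_end": "2013-06-01T01:00:00",
  "period": 0,
  "period_start": "2013-06-01T00:00:00",
  "period_end": "2013-06-01T01:00:00",
  "groupby": null,
  "unit": "%"}]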

Note

If you use Ceilometer data for statistics, it is recommended to query a backend such as Gnocchi rather than Ceilometer's own interface. Gnocchi is designed specifically for this use case and provides a light-weight, aggregated model. Because the two services manage data differently, the API models returned by Ceilometer and Gnocchi also differ; see the Gnocchi API documentation for details.

Selectable Aggregates

The Statistics API has been extended to include the aggregate functions stddev and cardinality. You can explicitly select these functions or any from the standard set by specifying an aggregate function in the statistics query:

GET /v2/meters/<meter_name>/statistics?aggregate.func=<name>&aggregate.param=<value>

(where aggregate.param is optional).

Duplicate aggregate function and parameter pairs are silently discarded from the statistics query. Partial duplicates, meaning the same function with differing parameters, for example:

GET /v2/meters/<meter_name>/statistics?aggregate.func=cardinality&aggregate.param=resource_id&aggregate.func=cardinality&aggregate.param=project_id

are, on the other hand, both allowed by the API and supported by the storage drivers. See the functional-examples section for more detail.

Note

Currently only cardinality needs aggregate.param to be specified.

ceilometer.api.controllers.v2.meters.Aggregate

Capabilities

The Capabilities API lets you discover which parts of the V2 API functionality, including the selectable aggregate functions, are supported by the currently configured storage driver. A capabilities query returns a flattened dictionary of properties with associated boolean values; a 'False' or absent value means that the corresponding feature is not available in the backend.
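For example, the capabilities of the configured backend can be inspected with a plain GET:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/capabilities"

A trimmed response might look like the following sketch; the exact keys and values depend on the configured storage driver:

{"api": {"meters:query:simple": true,
         "statistics:aggregation:selectable:stddev": true,
         "statistics:aggregation:selectable:cardinality": true,
         ...}}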

ceilometer.api.controllers.v2.capabilities:CapabilitiesController

ceilometer.api.controllers.v2.capabilities.Capabilities

Events and Traits

ceilometer.api.controllers.v2.events:EventTypesController

ceilometer.api.controllers.v2.events:TraitsController

ceilometer.api.controllers.v2.events:EventsController

ceilometer.api.controllers.v2.events.Event

ceilometer.api.controllers.v2.events.Trait

ceilometer.api.controllers.v2.events.TraitDescription

Filtering Queries

Ceilometer's REST API currently supports two types of queries. The Simple Query functionality provides simple filtering on several fields of the Sample type. Complex Query makes it possible to specify queries with logical and comparison operators on the fields of Sample.

You may also apply filters based on the values of one or more fields of resource_metadata, which you identify by using the metadata.<field> syntax in either type of query. Note, however, that given the free-form nature of the resource_metadata field, there is no practical or consistent way to validate query fields under the metadata domain the way it is done for all other fields.

Note

The API call will return HTTP 200 OK status for both of the following cases: when a query with metadata.<field> does not match its value, and when <field> itself does not exist in any of the records being queried.
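For example, the following simple query filters on a metadata field; the field name and value are illustrative, and if metadata.vm_state did not exist in any of the queried records the call would still return 200 OK with an empty result:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/instance?q.field=metadata.vm_state&q.op=eq&q.value=active"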

Simple Query

Many of the endpoints above accept a query filter argument, which should be a list of Query data structures. Whichever endpoint you apply a filter to, you always filter on the fields of the Sample type (for example, when filtering a statistics query, you do not target the duration_start field of Statistics but the timestamp field of Sample). See api-queries for how to query the API.
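A minimal sketch of a single Query element, reusing a resource_id from the examples later in this document; the optional type attribute, when present, states how value should be interpreted:

{"field": "resource_id",
 "op": "eq",
 "value": "64da755c-9120-4236-bee1-54acafe24980",
 "type": "string"}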

ceilometer.api.controllers.v2.base.Query

Event Query

Event query is similar to simple query; its type, EventQuery, is a subclass of Query, so it has every attribute that Query has. There is one difference: if a field is one of event_type, message_id, start_timestamp or end_timestamp, the filter is applied to the event itself; otherwise, the field is treated as a trait name and the filter is applied to the trait. See api-queries for how to query the API.
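For example, the following sketch matches events of one event type and additionally filters on a trait. The trait name instance_id is an assumption here, as the available traits depend on the event definitions in your deployment:

GET /v2/events
q: [{"field": "event_type",
     "op": "eq",
     "value": "compute.instance.create.end"},
    {"field": "instance_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"}]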

ceilometer.api.controllers.v2.events.EventQuery

Complex Query

The filter expressions of the Complex Query feature operate on the fields of Sample. The following comparison operators are supported: =, !=, <, <=, >, >= and in; the following logical operators can be used: and, or and not. The field names are validated against the database models. See api-queries for how to query the API.

Note

The not operator has a different meaning in MongoDB and in an SQL DB engine. If the not operator is applied to a non-existent metadata field, the result depends on the DB engine. For example, if the filter {"not": {"metadata.nonexistent_field": "some value"}} is used in a query, MongoDB will return every Sample object, as the not operator evaluates to true for every Sample where the given field does not exist. See more in the MongoDB documentation. An SQL-based DB engine, on the other hand, will return an empty result, because the join operation on the metadata table returns zero rows: the ON clause of the join, which tries to match on the metadata field name, is never fulfilled.

Complex Query supports defining the list of orderby expressions in the form of [{"field_name": "asc"}, {"field_name2": "desc"}, ...].

The number of the returned items can be bounded using the limit option.

The filter, orderby and limit are all optional fields in a query.
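Putting these together, a complex query body has the following general shape. Note that, as described in the functional example later in this document, the current implementation expects the filter and orderby values to be passed as JSON-encoded strings; the values below are illustrative:

{"filter": "{\"=\": {\"counter_name\": \"cpu_util\"}}",
 "orderby": "[{\"timestamp\": \"desc\"}]",
 "limit": 10}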

ceilometer.api.controllers.v2.query:QuerySamplesController

ceilometer.api.controllers.v2.query.ComplexQuery

ceilometer.api.controllers.v2.base.Link

API and CLI query examples

CLI Queries

Ceilometer CLI Commands:

$ ceilometer --debug --os-username <username_here> --os-password <password_here> --os-auth-url http://localhost:5000/v2.0/ --os-tenant-name admin  meter-list

Note

The username, password, and tenant-name options must either be present in these arguments or be specified via environment variables. Note that in-line arguments override the environment variables.
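For example, the same credentials can be exported once as environment variables (the values are placeholders) and then omitted from the command line:

$ export OS_USERNAME=<username_here>
$ export OS_PASSWORD=<password_here>
$ export OS_TENANT_NAME=admin
$ export OS_AUTH_URL=http://localhost:5000/v2.0/
$ ceilometer meter-list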

API Queries

Ceilometer API calls:

Note

To successfully query Ceilometer you must first get a project-specific token from the Keystone service and add it to any API calls that you execute against that project. See the OpenStack credentials documentation for additional details.
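A minimal sketch of obtaining such a token from the Keystone v2.0 identity API with curl; adjust the endpoint and credentials to your deployment. The token value appears under access.token.id in the response:

curl -X POST -H 'Content-Type: application/json' \
  -d '{"auth": {"tenantName": "admin",
                "passwordCredentials": {"username": "<username_here>",
                                        "password": "<password_here>"}}}' \
  http://localhost:5000/v2.0/tokens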

A simple query to return a list of available meters:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters"

A query to return the list of resources:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/resources"

A query to return the list of samples, limited to a specific meter type:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/disk.root.size"

A query using filters (see: query filter section):

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/instance?q.field=metadata.event_type&q.value=compute.instance.delete.start"

Additional examples:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/disk.root.size?q.field=resource_id&q.op=eq&q.value=<resource_id_here>"

or:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/instance?q.field=metadata.event_type&q.value=compute.instance.exists"

You can specify multiple filters by using an array of queries (order matters):

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/instance"\
  "?q.field=metadata.event_type&q.value=compute.instance.exists"\
  "&q.field=timestamp&q.op=gt&q.value=2013-07-03T13:34:17"

A query to find the maximum value and standard deviation (max, stddev) of the CPU utilization for a given instance (identified by resource_id):

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/cpu_util/statistics?aggregate.func=max&aggregate.func=stddev"\
  "&q.field=resource_id&q.op=eq&q.value=64da755c-9120-4236-bee1-54acafe24980"

Note

If any of the requested aggregates are not supported by the storage driver, an HTTP 400 error code will be returned along with an appropriate error message.

JSON based example:

curl -X GET -H "X-Auth-Token: <inserttokenhere>" -H "Content-Type: application/json"
-d '{"q": [{"field": "timestamp", "op": "ge", "value": "2014-04-01T13:34:17"}]}'
  http://localhost:8777/v2/meters/instance

JSON based example with multiple filters:

curl -X GET -H "X-Auth-Token: <inserttokenhere>" -H "Content-Type: application/json"
 -d '{"q": [{"field": "timestamp", "op": "ge", "value": "2014-04-01T13:34:17"},
   {"field": "resource_id", "op": "eq", "value": "4da2b992-0dc3-4a7c-a19a-d54bf918de41"}]}'
   http://localhost:8777/v2/meters/instance

Functional examples

The examples below are meant to help you understand how to query the Ceilometer API to build custom meter reports. The query parameters should be encoded using one of the methods above, e.g. as URL parameters or as JSON-encoded data passed with the GET request.
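For instance, a query for instance samples newer than a given timestamp can be sent in either form; the two curl invocations below are equivalent sketches of the same request.

As URL parameters:

curl -H 'X-Auth-Token: <inserttokenhere>' \
  "http://localhost:8777/v2/meters/instance?q.field=timestamp&q.op=ge&q.value=2013-06-01T00:00:00"

As a JSON body:

curl -X GET -H 'X-Auth-Token: <inserttokenhere>' -H 'Content-Type: application/json' \
  -d '{"q": [{"field": "timestamp", "op": "ge", "value": "2013-06-01T00:00:00"}]}' \
  http://localhost:8777/v2/meters/instance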

Get the list of samples about instances running for June 2013:

GET /v2/meters/instance
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
      "value": "2013-07-01T00:00:00"}]

Get the list of samples about instances running for June 2013 for a particular project:

GET /v2/meters/instance
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "project_id",
     "op": "eq",
     "value": "8d6057bc-5b90-4296-afe0-84acaa2ef909"}]

Now you may want to have statistics on the meters you are targeting. Consider the following example where you are getting the list of samples about CPU utilization of a given instance (identified by its resource_id) running for June 2013:

GET /v2/meters/cpu_util
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "resource_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"}]

You can have statistics on the list of samples requested (avg, sum, max, min, count) computed on the full duration:

GET /v2/meters/cpu_util/statistics
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "resource_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"}]

You may want to aggregate samples over a given period (10 minutes for example) in order to get an array of the statistics computed on smaller durations:

GET /v2/meters/cpu_util/statistics
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "resource_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"}]
period: 600

The period parameter aggregates by time range. You can also aggregate by field using the groupby parameter. Currently, the user_id, resource_id, project_id, and source fields are supported. Below is an example that uses a query filter and group by aggregation on project_id and resource_id:

GET /v2/meters/instance/statistics
q: [{"field": "user_id",
    "op": "eq",
    "value": "user-2"},
    {"field": "source",
     "op": "eq",
     "value": "source-1"}]
groupby: ["project_id", "resource_id"]

The statistics will be returned in a list, and each entry of the list will be labeled with the group name. For the previous example, the first entry might have project_id be "project-1" and resource_id be "resource-1", the second entry have project_id be "project-1" and resource_id be "resource-2", and so on.

You can request both period and group by aggregation in the same query:

GET /v2/meters/instance/statistics
q: [{"field": "source",
    "op": "eq",
    "value": "source-1"}]
groupby: ["project_id"]
period: 7200

Note that period aggregation is applied first, followed by group by aggregation. Order matters because the period aggregation determines the time ranges for the statistics.

Below is a real-life query:

GET /v2/meters/image/statistics
groupby: ["project_id", "resource_id"]

With the return values:

[{"count": 4, "duration_start": "2013-09-18T19:08:33", "min": 1.0,
  "max": 1.0, "duration_end": "2013-09-18T19:27:30", "period": 0,
  "sum": 4.0, "period_end": "2013-09-18T19:27:30", "duration": 1137.0,
  "period_start": "2013-09-18T19:08:33", "avg": 1.0,
  "groupby": {"project_id": "c2334f175d8b4cb8b1db49d83cecde78",
              "resource_id": "551f495f-7f49-4624-a34c-c422f2c5f90b"},
  "unit": "image"},
 {"count": 4, "duration_start": "2013-09-18T19:08:36", "min": 1.0,
  "max": 1.0, "duration_end": "2013-09-18T19:27:30", "period": 0,
  "sum": 4.0, "period_end": "2013-09-18T19:27:30", "duration": 1134.0,
  "period_start": "2013-09-18T19:08:36", "avg": 1.0,
  "groupby": {"project_id": "c2334f175d8b4cb8b1db49d83cecde78",
              "resource_id": "7c1157ed-cf30-48af-a868-6c7c3ad7b531"},
  "unit": "image"},
 {"count": 4, "duration_start": "2013-09-18T19:08:34", "min": 1.0,
  "max": 1.0, "duration_end": "2013-09-18T19:27:30", "period": 0,
  "sum": 4.0, "period_end": "2013-09-18T19:27:30", "duration": 1136.0,
  "period_start": "2013-09-18T19:08:34", "avg": 1.0,
  "groupby": {"project_id": "c2334f175d8b4cb8b1db49d83cecde78",
              "resource_id": "eaed9cf4-fc99-4115-93ae-4a5c37a1a7d7"},
  "unit": "image"}]

You can request specific aggregate functions as well. For example, if you only want the average CPU utilization, the GET request would look like this:

GET /v2/meters/cpu_util/statistics?aggregate.func=avg

Use the same syntax to access the aggregate functions not in the standard set, e.g. stddev and cardinality. A request for the standard deviation of CPU utilization would take the form:

GET /v2/meters/cpu_util/statistics?aggregate.func=stddev

And would give a response such as the following:

[{"aggregate": {"stddev":0.6858829535841072},
  "duration_start": "2014-01-30T11:13:23",
  "duration_end": "2014-01-31T16:07:13",
  "duration": 104030.0,
  "period": 0,
  "period_start": "2014-01-30T11:13:23",
  "period_end": "2014-01-31T16:07:13",
  "groupby": null,
  "unit" : "%"}]

The request syntax is similar for cardinality but with the aggregate.param option provided. So, for example, if you want to know the number of distinct tenants with images, you would do:

GET /v2/meters/image/statistics?aggregate.func=cardinality
                                  &aggregate.param=project_id

For a more involved example, consider determining, for some tenant, the number of distinct instances (cardinality) as well as the total number of instance samples (count). You might also want to see this information in 15-minute intervals. Using the period and groupby options, such a query would look like the following:

GET /v2/meters/instance/statistics?aggregate.func=cardinality
                                  &aggregate.param=resource_id
                                  &aggregate.func=count
                                  &groupby=project_id&period=900

This would give an example response of the form:

[{"count": 19,
  "aggregate": {"count": 19.0, "cardinality/resource_id": 3.0},
  "duration": 328.478029,
  "duration_start": "2014-01-31T10:00:41.823919",
  "duration_end": "2014-01-31T10:06:10.301948",
  "period": 900,
  "period_start": "2014-01-31T10:00:00",
  "period_end": "2014-01-31T10:15:00",
  "groupby": {"project_id": "061a5c91811e4044b7dc86c6136c4f99"},
  "unit": "instance"},
 {"count": 22,
  "aggregate": {"count": 22.0, "cardinality/resource_id": 4.0},
  "duration": 808.00384,
  "duration_start": "2014-01-31T10:15:15",
  "duration_end": "2014-01-31T10:28:43.003840",
  "period": 900,
  "period_start": "2014-01-31T10:15:00",
  "period_end": "2014-01-31T10:30:00",
  "groupby": {"project_id": "061a5c91811e4044b7dc86c6136c4f99"},
  "unit": "instance"},
 {"count": 2,
  "aggregate": {"count": 2.0, "cardinality/resource_id": 2.0},
  "duration": 0.0,
  "duration_start": "2014-01-31T10:35:15",
  "duration_end": "2014-01-31T10:35:15",
  "period": 900,
  "period_start": "2014-01-31T10:30:00",
  "period_end": "2014-01-31T10:45:00",
  "groupby": {"project_id": "061a5c91811e4044b7dc86c6136c4f99"},
  "unit": "instance"}]

If you want to retrieve all the instances (not the list of samples, but the resources themselves) that ran during this month for a given project, ask the resource endpoint for the list of resources (all types, including storage, images, networking, ...):

GET /v2/resources
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "project_id",
     "op": "eq",
     "value": "8d6057bc-5b90-4296-afe0-84acaa2ef909"}]

Then look for resources that have an instance meter linked to them; these are the resources that have been measured as instances. You can then request their samples for more detailed information, such as their state or their flavor:

GET /v2/meters/instance
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "resource_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"},
    {"field": "project_id",
     "op": "eq",
     "value": "8d6057bc-5b90-4296-afe0-84acaa2ef909"}]

This will return a list of samples that have been recorded on this particular resource. You can inspect them to retrieve information, such as the instance state (check the metadata.vm_state field) or the instance flavor (check the metadata.flavor field). You can request nested metadata fields by using a dot to delimit the fields (e.g. metadata.weighted_host.host for the instance.scheduled meter).
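For example, to keep only the samples of instances currently in the active state, an additional filter on a metadata field can be added to the query; this is a sketch, and the metadata fields actually available depend on the meter:

GET /v2/meters/instance
q: [{"field": "resource_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"},
    {"field": "metadata.vm_state",
     "op": "eq",
     "value": "active"}]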

To retrieve only the last 3 samples of a meter, you can pass the limit parameter to the query:

GET /v2/meters/instance
q: [{"field": "timestamp",
     "op": "ge",
     "value": "2013-06-01T00:00:00"},
    {"field": "timestamp",
     "op": "lt",
     "value": "2013-07-01T00:00:00"},
    {"field": "resource_id",
     "op": "eq",
     "value": "64da755c-9120-4236-bee1-54acafe24980"},
    {"field": "project_id",
     "op": "eq",
     "value": "8d6057bc-5b90-4296-afe0-84acaa2ef909"}]
limit: 3

This query would only return the last 3 samples.

Functional example for Complex Query

This example demonstrates how complex query filter expressions can be generated and sent to the /v2/query/samples endpoint of the Ceilometer API using a POST request.

To check for cpu_util samples reported between 18:00-18:15 or between 18:30-18:45 on a particular date (2013-12-01), where the utilization is between 23 and 26 percent but not exactly 25.12 percent, the following filter expression can be created:

{"and":
 [{"and":
  [{"=": {"counter_name": "cpu_util"}},
   {">": {"counter_volume": 0.23}},
   {"<": {"counter_volume": 0.26}},
   {"not": {"=": {"counter_volume": 0.2512}}}]},
  {"or":
   [{"and":
    [{">": {"timestamp": "2013-12-01T18:00:00"}},
     {"<": {"timestamp": "2013-12-01T18:15:00"}}]},
    {"and":
     [{">": {"timestamp": "2013-12-01T18:30:00"}},
      {"<": {"timestamp": "2013-12-01T18:45:00"}}]}]}]}

Different sorting criteria can be defined for the query; for example, the results can be ordered in ascending order by counter_volume and in descending order by timestamp. The following orderby expression specifies these criteria:

[{"counter_volume": "ASC"}, {"timestamp": "DESC"}]

As the current implementation accepts only string values for the query filter and orderby definitions, the expressions defined above have to be converted to strings. Adding a limit criterion to the request, which restricts the number of returned samples to four, the query looks like the following:

{
"filter" : "{\"and\":[{\"and\": [{\"=\": {\"counter_name\": \"cpu_util\"}}, {\">\": {\"counter_volume\": 0.23}}, {\"<\": {\"counter_volume\": 0.26}}, {\"not\": {\"=\": {\"counter_volume\": 0.2512}}}]}, {\"or\": [{\"and\": [{\">\": {\"timestamp\": \"2013-12-01T18:00:00\"}}, {\"<\": {\"timestamp\": \"2013-12-01T18:15:00\"}}]}, {\"and\": [{\">\": {\"timestamp\": \"2013-12-01T18:30:00\"}}, {\"<\": {\"timestamp\": \"2013-12-01T18:45:00\"}}]}]}]}",
"orderby" : "[{\"counter_volume\": \"ASC\"}, {\"timestamp\": \"DESC\"}]",
"limit" : 4
}

A query request looks like the following with curl:

curl -X POST -H 'X-Auth-Token: <inserttokenhere>' -H 'Content-Type: application/json' \
  -d '<insertyourqueryexpressionhere>' \
   http://localhost:8777/v2/query/samples

User-defined data

It is possible to add your own samples (created from data retrieved in any way, such as monitoring agents on your instances) to Ceilometer, store them, and query them. You can even get statistics on your own inserted data. By adding a Sample to a Resource, you automatically create the corresponding Meter if it does not already exist. To achieve this, POST a list of one or more samples in JSON format:

curl -X POST -H 'X-Auth-Token: <inserttokenhere>' -H 'Content-Type: application/json' \
  -d '<insertyoursampleslisthere>' \
  http://localhost:8777/v2/meters/<insertyourmeternamehere>

The fields source, timestamp, project_id and user_id are added automatically if not present in the samples. The message_id field is not taken into account if present; an internal value will be set instead.

By default, samples posted via API will be placed on the notification bus and processed by the notification agent.

To avoid re-queuing the data, samples posted via API can be stored directly to the storage backend verbatim by specifying a boolean flag 'direct' in the request URL, like this:

POST /v2/meters/ram_util?direct=True

Samples posted this way will bypass pipeline processing.
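With curl, the direct write is the regular sample POST with the flag appended to the URL, as in the following sketch:

curl -X POST -H 'X-Auth-Token: <inserttokenhere>' -H 'Content-Type: application/json' \
  -d '<insertyoursampleslisthere>' \
  "http://localhost:8777/v2/meters/ram_util?direct=True"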

Here is an example showing how to add a sample for a ram_util meter (whether or not it already exists):

POST /v2/meters/ram_util
body: [
        {
          "counter_name": "ram_util",
          "user_id": "4790fbafad2e44dab37b1d7bfc36299b",
          "resource_id": "87acaca4-ae45-43ae-ac91-846d8d96a89b",
          "resource_metadata": {
            "display_name": "my_instance",
            "my_custom_metadata_1": "value1",
            "my_custom_metadata_2": "value2"
           },
          "counter_unit": "%",
          "counter_volume": 8.57762938230384,
          "project_id": "97f9a6aaa9d842fcab73797d3abb2f53",
          "counter_type": "gauge"
        }
      ]

You get back the same list, with your sample completed with the missing fields: source and timestamp in this case.
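A sketch of what the returned list might look like for the request above; the generated source and timestamp values shown here are illustrative and depend on your configuration and on the time of the request:

[
  {
    "counter_name": "ram_util",
    "user_id": "4790fbafad2e44dab37b1d7bfc36299b",
    "resource_id": "87acaca4-ae45-43ae-ac91-846d8d96a89b",
    "resource_metadata": {
      "display_name": "my_instance",
      "my_custom_metadata_1": "value1",
      "my_custom_metadata_2": "value2"
    },
    "counter_unit": "%",
    "counter_volume": 8.57762938230384,
    "project_id": "97f9a6aaa9d842fcab73797d3abb2f53",
    "counter_type": "gauge",
    "source": "openstack",
    "timestamp": "2013-06-01T00:00:00"
  }
]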