Add optional start_time and end_time for metrics list

This is a very useful feature for many dashboards that do not
use the merge flag.  These dashboards begin with a metrics list
and then make a statistics call for each unique set of dimensions
returned.  For many dashboards, metrics are returned for which no
data is currently being collected (deleted VMs, etc.), causing
many unnecessary queries that return no data -- not to mention an
ugly dashboard with tons of noise.

This enhancement helps both the current Grafana implementation
and our 2.0 port.  Note that this patch supports the Java
implementations for Vertica and InfluxDB, as well as the Python
implementation of the API for InfluxDB (Vertica is not currently
supported in Python).
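
For illustration, the dashboard flow this helps looks roughly like the
following sketch (a hypothetical client; method and parameter names are
illustrative, not an actual monasca client API):

```python
# List metric definitions, then fetch statistics per unique dimension set.
# Passing start_time skips definitions (e.g. from deleted VMs) that have
# no measurements in the window, avoiding queries that return no data.
metrics = client.metrics.list(name='cpu.user_perc',
                              start_time='2015-11-02T00:00:00Z')
for m in metrics:
    stats = client.metrics.list_statistics(
        name=m['name'],
        dimensions=m['dimensions'],
        statistics='avg',
        start_time='2015-11-02T00:00:00Z')
    render_panel(stats)  # hypothetical dashboard rendering step
```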

Change-Id: I683f2a53aaf2d2ad8005dd1542883636086aad4a
bklei 2015-11-02 15:14:48 -07:00
parent 7a7a0031bd
commit bc629f612f
11 changed files with 465 additions and 124 deletions

View File

@ -668,7 +668,7 @@ A hexadecimal string offset would look like this:
offset=01ce0acc66131296c8a17294f39aee44ea8963ec
```
A timestamp offset would look like this:
@ -829,10 +829,10 @@ Returns a JSON version object with details about the specified version.
#### Response Examples
```
{
"id":"v2.0",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/"
}
@ -930,19 +930,19 @@ Content-Type: application/json
X-Auth-Token: 27feed73a0ce4138934e30d619b415b0
Cache-Control: no-cache
[
{
"name":"name1",
"dimensions":{
"dimensions":{
"key1":"value1",
"key2":"value2"
},
"timestamp":1405630174123,
"value":1.0
},
{
"name":"name2",
"dimensions":{
"dimensions":{
"key1":"value1",
"key2":"value2"
},
@ -980,6 +980,8 @@ None.
* tenant_id (string, optional, restricted) - Tenant ID from which to get metrics. This parameter can be used to get metrics from a tenant other than the tenant the request auth token is scoped to. Usage of this query parameter is restricted to users with the monasca admin role, as defined in the monasca api configuration file, which defaults to `monasca-admin`.
* name (string(255), optional) - A metric name to filter metrics by.
* dimensions (string, optional) - A dictionary to filter metrics by, specified as a comma-separated list of (key, value) pairs as `key1:value1,key2:value2, ...`
* start_time (string, optional) - The start time in ISO 8601 combined date and time format in UTC. This is useful for only listing metrics that have measurements since the specified start_time.
* end_time (string, optional) - The end time in ISO 8601 combined date and time format in UTC. Combined with start_time, this is useful for listing only metrics that have measurements between the specified start_time and end_time; see the example request after this list.
* offset (integer (InfluxDB) or hexadecimal string (Vertica), optional)
* limit (integer, optional)
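
For example, both parameters can be combined to list only metrics that have measurements in a given window. A sketch using python-requests; the host and token mirror the request examples in this document, and the parameter values are illustrative:

```python
import requests

# Only metrics with at least one measurement in the UTC window are returned.
resp = requests.get(
    'http://192.168.10.4:8080/v2.0/metrics',
    headers={'X-Auth-Token': '27feed73a0ce4138934e30d619b415b0'},
    params={'name': 'cpu.user_perc',
            'start_time': '2015-11-02T00:00:00Z',
            'end_time': '2015-11-03T00:00:00Z'})
resp.raise_for_status()
print(resp.json())
```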
@ -1331,7 +1333,7 @@ ___
Operations for working with notification methods.
## Create Notification Method
Creates a notification method through which notifications can be sent when an alarm state transition occurs. Notification methods can be associated with zero or many alarms.
### POST /v2.0/notification-methods
@ -1359,7 +1361,7 @@ Content-Type: application/json
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Cache-Control: no-cache
{
"name":"Name of notification method",
"type":"EMAIL",
"address":"john.doe@hp.com"
@ -1375,17 +1377,17 @@ Cache-Control: no-cache
Returns a JSON notification method object with the following fields:
* id (string) - ID of notification method
* links ([link])
* name (string) - Name of notification method
* type (string) - Type of notification method
* address (string) - Address of notification method
#### Response Examples
```
{
"id":"35cc6f1c-3a29-49fb-a6fc-d9d97d190508",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/notification-methods/35cc6f1c-3a29-49fb-a6fc-d9d97d190508"
}
@ -1434,7 +1436,7 @@ Cache-Control: no-cache
Returns a JSON object with a 'links' array of links and an 'elements' array of notification method objects with the following fields:
* id (string) - ID of notification method
* links ([link])
* name (string) - Name of notification method
* type (string) - Type of notification method
* address (string) - Address of notification method
@ -1514,17 +1516,17 @@ GET http://192.168.10.4:8080/v2.0/notification-methods/35cc6f1c-3a29-49fb-a6fc-d
Returns a JSON notification method object with the following fields:
* id (string) - ID of notification method
* links ([link])
* name (string) - Name of notification method
* type (string) - Type of notification method
* address (string) - Address of notification method
#### Response Examples
```
{
"id":"35cc6f1c-3a29-49fb-a6fc-d9d97d190508",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/notification-methods/35cc6f1c-3a29-49fb-a6fc-d9d97d190508"
}
@ -1565,7 +1567,7 @@ Content-Type: application/json
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Cache-Control: no-cache
{
"name":"New name of notification method",
"type":"EMAIL",
"address":"jane.doe@hp.com"
@ -1581,17 +1583,17 @@ Cache-Control: no-cache
Returns a JSON notification method object with the following fields:
* id (string) - ID of notification method
* links ([link])
* name (string) - Name of notification method
* type (string) - Type of notification method
* address (string) - Address of notification method
#### Response Examples
```
{
"id":"35cc6f1c-3a29-49fb-a6fc-d9d97d190508",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/notification-methods/35cc6f1c-3a29-49fb-a6fc-d9d97d190508"
}
@ -1676,7 +1678,7 @@ Content-Type: application/json
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Cache-Control: no-cache
{
"name":"Average CPU percent greater than 10",
"description":"The average CPU percent is greater than 10",
"expression":"(avg(cpu.user_perc{hostname=devstack}) > 10)",
@ -1684,13 +1686,13 @@ Cache-Control: no-cache
"hostname"
],
"severity":"LOW",
"ok_actions":[
"ok_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"alarm_actions":[
"alarm_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions":[
"undetermined_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
}
@ -1718,10 +1720,10 @@ Returns a JSON object of alarm definition objects with the following fields:
#### Response Examples
```
{
"id":"b461d659-577b-4d63-9782-a99194d4a472",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/alarm-definitions/b461d659-577b-4d63-9782-a99194d4a472"
}
@ -1729,10 +1731,10 @@ Returns a JSON object of alarm definition objects with the following fields:
"name":"Average CPU percent greater than 10",
"description":"The average CPU percent is greater than 10",
"expression":"(avg(cpu.user_perc{hostname=devstack}) > 10)",
"expression_data":{
"expression_data":{
"function":"AVG",
"metric_name":"cpu.user_perc",
"dimensions":{
"dimensions":{
"hostname":"devstack"
},
"operator":"GT",
@ -1744,13 +1746,13 @@ Returns a JSON object of alarm definition objects with the following fields:
"hostname"
],
"severity":"LOW",
"alarm_actions":[
"alarm_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"ok_actions":[
"ok_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions":[
"undetermined_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
}
@ -1981,7 +1983,7 @@ X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Content-Type: application/json
Cache-Control: no-cache
{
"name":"CPU percent greater than 15",
"description":"Release the hounds",
"expression":"(avg(cpu.user_perc{hostname=devstack}) > 15)",
@ -1989,13 +1991,13 @@ Cache-Control: no-cache
"hostname"
],
"severity": "LOW",
"alarm_actions":[
"alarm_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"ok_actions":[
"ok_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions":[
"undetermined_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"actions_enabled": true
@ -2101,7 +2103,7 @@ X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Content-Type: application/json
Cache-Control: no-cache
{
"name":"CPU percent greater than 15",
"description":"Release the hounds",
"expression":"(avg(cpu.user_perc{hostname=devstack}) > 15)",
@ -2109,13 +2111,13 @@ Cache-Control: no-cache
"hostname"
],
"severity":"CRITICAL",
"alarm_actions":[
"alarm_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"ok_actions":[
"ok_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
],
"undetermined_actions":[
"undetermined_actions":[
"c60ec47e-5038-4bf1-9f95-4046c6e9a759"
]
}
@ -2538,14 +2540,14 @@ Returns a JSON alarm object with the following fields:
#### Response Examples
```
{
"id":"f9935bcc-9641-4cbf-8224-0993a947ea83",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/alarms/f9935bcc-9641-4cbf-8224-0993a947ea83"
},
{
"rel":"state-history",
"href":"http://192.168.10.4:8080/v2.0/alarms/f9935bcc-9641-4cbf-8224-0993a947ea83/state-history"
}
@ -2564,7 +2566,7 @@ Returns a JSON alarm object with the following fields:
},
"metrics":[{
"name":"cpu.system_perc",
"dimensions":{
"dimensions":{
"hostname":"devstack"
}
}],
@ -2609,7 +2611,7 @@ X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Content-Type: application/json
Cache-Control: no-cache
{
"state":"OK",
"lifecycle_state":"OPEN",
"link":"http://pagerduty.com/"
@ -2637,14 +2639,14 @@ Returns a JSON alarm object with the following parameters:
#### Response Examples
```
{
"id":"f9935bcc-9641-4cbf-8224-0993a947ea83",
"links":[
{
"links":[
{
"rel":"self",
"href":"http://192.168.10.4:8080/v2.0/alarms/f9935bcc-9641-4cbf-8224-0993a947ea83"
},
{
"rel":"state-history",
"href":"http://192.168.10.4:8080/v2.0/alarms/f9935bcc-9641-4cbf-8224-0993a947ea83/state-history"
}
@ -2652,7 +2654,7 @@ Returns a JSON alarm object with the following parameters:
"alarm_definition_id":"ad837fca-5564-4cbf-523-0117f7dac6ad",
"metrics":[{
"name":"cpu.system_perc",
"dimensions":{
"dimensions":{
"hostname":"devstack"
}
}],

View File

@ -1,11 +1,11 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
@ -15,6 +15,8 @@ package monasca.api.domain.model.metric;
import monasca.common.model.metric.MetricDefinition;
import org.joda.time.DateTime;
import java.util.List;
import java.util.Map;
@ -27,7 +29,7 @@ public interface MetricDefinitionRepo {
* Finds metrics for the given criteria.
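* When startTime is non-null, implementations return only metrics that
* have at least one measurement in the given time window.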
*/
List<MetricDefinition> find(String tenantId, String name, Map<String, String> dimensions,
String offset, int limit)
DateTime startTime, DateTime endTime, String offset, int limit)
throws Exception;
List<MetricName> findNames(String tenantId, Map<String, String> dimensions, String offset, int limit) throws Exception;

View File

@ -17,6 +17,7 @@ import com.google.inject.Inject;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.joda.time.DateTime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -26,6 +27,7 @@ import java.util.List;
import java.util.Map;
import monasca.api.ApiConfig;
import monasca.api.domain.model.measurement.Measurements;
import monasca.api.domain.model.metric.MetricDefinitionRepo;
import monasca.api.domain.model.metric.MetricName;
import monasca.common.model.metric.MetricDefinition;
@ -70,7 +72,12 @@ public class InfluxV9MetricDefinitionRepo implements MetricDefinitionRepo {
Series series = this.objectMapper.readValue(r, Series.class);
List<MetricDefinition> metricDefinitionList = metricDefinitionList(series, 0);
List<MetricDefinition> metricDefinitionList = metricDefinitionList(series,
tenantId,
name,
null,
null,
0);
logger.debug("Found {} metric definitions matching query", metricDefinitionList.size());
@ -81,6 +88,8 @@ public class InfluxV9MetricDefinitionRepo implements MetricDefinitionRepo {
@Override
public List<MetricDefinition> find(String tenantId, String name,
Map<String, String> dimensions,
DateTime startTime,
DateTime endTime,
String offset, int limit) throws Exception {
int startIndex = this.influxV9Utils.startIndex(offset);
@ -100,7 +109,12 @@ public class InfluxV9MetricDefinitionRepo implements MetricDefinitionRepo {
Series series = this.objectMapper.readValue(r, Series.class);
List<MetricDefinition> metricDefinitionList = metricDefinitionList(series, startIndex);
List<MetricDefinition> metricDefinitionList = metricDefinitionList(series,
tenantId,
name,
startTime,
endTime,
startIndex);
logger.debug("Found {} metric definitions matching query", metricDefinitionList.size());
@ -134,7 +148,13 @@ public class InfluxV9MetricDefinitionRepo implements MetricDefinitionRepo {
return metricNameList;
}
private List<MetricDefinition> metricDefinitionList(Series series, int startIndex) {
private List<MetricDefinition> metricDefinitionList(Series series,
String tenantId,
String name,
DateTime startTime,
DateTime endTime,
int startIndex)
{
List<MetricDefinition> metricDefinitionList = new ArrayList<>();
@ -147,9 +167,14 @@ public class InfluxV9MetricDefinitionRepo implements MetricDefinitionRepo {
for (String[] values : serie.getValues()) {
MetricDefinition m = new MetricDefinition(serie.getName(), dims(values, serie.getColumns()));
m.setId(String.valueOf(index++));
metricDefinitionList.add(m);
//
// If start/end time are specified, ensure we've got measurements
// for this definition before we add to the return list
//
if (hasMeasurements(m, tenantId, startTime, endTime)) {
m.setId(String.valueOf(index++));
metricDefinitionList.add(m);
}
}
}
}
@ -198,5 +223,66 @@ public class InfluxV9MetricDefinitionRepo implements MetricDefinitionRepo {
return dims;
}
}
private boolean hasMeasurements(MetricDefinition m,
String tenantId,
DateTime startTime,
DateTime endTime)
{
boolean hasMeasurements = true;
//
// Only make the additional query if startTime has been
// specified.
//
if (startTime == null) {
return hasMeasurements;
}
try {
String q = buildMeasurementsQuery(tenantId,
m.name,
m.dimensions,
startTime,
endTime);
String r = this.influxV9RepoReader.read(q);
Series series = this.objectMapper.readValue(r, Series.class);
hasMeasurements = !series.isEmpty();
} catch (Exception e) {
//
// If something goes wrong with the measurements query
// checking if there are current measurements, default to
// existing behavior and return the definition.
//
logger.error("Failed to query for measuremnts for: {}", m.name, e);
hasMeasurements = true;
}
return hasMeasurements;
}
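//
// For reference: the query assembled below has the shape
//   select value, value_meta <name part> where <tenant> <region>
//     <start time> <dims> <end time> <group by> slimit 1
// (assuming the influxV9Utils helpers render those clauses); "slimit 1"
// caps the result at a single series, since only existence matters here.
//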
private String buildMeasurementsQuery(String tenantId,
String name,
Map<String, String> dimensions,
DateTime startTime,
DateTime endTime) throws Exception
{
String q = String.format("select value, value_meta %1$s "
+ "where %2$s %3$s %4$s %5$s %6$s %7$s slimit 1",
this.influxV9Utils.namePart(name, true),
this.influxV9Utils.privateTenantIdPart(tenantId),
this.influxV9Utils.privateRegionPart(this.region),
this.influxV9Utils.startTimePart(startTime),
this.influxV9Utils.dimPart(dimensions),
this.influxV9Utils.endTimePart(endTime),
this.influxV9Utils.groupByPart());
logger.debug("Measurements query: {}", q);
return q;
}
}

View File

@ -21,6 +21,7 @@ import monasca.common.model.metric.MetricDefinition;
import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.binary.Hex;
import org.joda.time.DateTime;
import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;
import org.skife.jdbi.v2.Query;
@ -30,8 +31,10 @@ import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.inject.Inject;
import javax.inject.Named;
@ -53,6 +56,7 @@ public class MetricDefinitionVerticaRepoImpl implements MetricDefinitionRepo {
+ "%s " // Name goes here.
+ "%s " // Offset goes here.
+ "%s " // Dimensions and clause goes here
+ "%s " // Optional timestamp qualifier goes here
+ "ORDER BY defDims.id ASC %s"; // Limit goes here.
private static final String
@ -72,6 +76,24 @@ public class MetricDefinitionVerticaRepoImpl implements MetricDefinitionRepo {
+ "%s " // Dimensions and clause goes here
+ "ORDER BY defSub.id ASC %s"; // Limit goes here.
private static final String
DEFDIM_IDS_SELECT =
"SELECT defDims.id "
+ "FROM MonMetrics.Definitions def, MonMetrics.DefinitionDimensions defDims "
+ "WHERE defDims.definition_id = def.id "
+ "AND def.tenant_id = :tenantId "
+ "%s " // Name and clause here
+ "%s;"; // Dimensions and clause goes here
private static final String
MEASUREMENT_AND_CLAUSE =
"AND defDims.id IN ("
+ "SELECT definition_dimensions_id FROM "
+ "MonMetrics.Measurements "
+ "WHERE to_hex(definition_dimensions_id) "
+ "%s " // List of definition dimension ids here
+ "%s ) "; // start or start and end time here
private static final String TABLE_TO_JOIN_DIMENSIONS_ON = "defDimsSub";
private final DBI db;
@ -171,12 +193,14 @@ public class MetricDefinitionVerticaRepoImpl implements MetricDefinitionRepo {
String tenantId,
String name,
Map<String, String> dimensions,
DateTime startTime,
DateTime endTime,
String offset,
int limit) {
List<Map<String, Object>>
rows =
executeMetricDefsQuery(tenantId, name, dimensions, offset, limit);
executeMetricDefsQuery(tenantId, name, dimensions, startTime, endTime, offset, limit);
List<MetricDefinition> metricDefs = new ArrayList<>(rows.size());
@ -225,6 +249,8 @@ public class MetricDefinitionVerticaRepoImpl implements MetricDefinitionRepo {
String tenantId,
String name,
Map<String, String> dimensions,
DateTime startTime,
DateTime endTime,
String offset,
int limit) {
@ -247,24 +273,34 @@ public class MetricDefinitionVerticaRepoImpl implements MetricDefinitionRepo {
// Can't bind limit in a nested sub query. So, just tack on as String.
String limitPart = " limit " + Integer.toString(limit + 1);
String sql =
String.format(FIND_METRIC_DEFS_SQL,
namePart, offsetPart,
MetricQueries.buildDimensionAndClause(dimensions, "defDims"),
limitPart);
Handle h = null;
try {
h = db.open();
// If startTime/endTime is specified, create the 'IN' select statement
String timeInClause = createTimeInClause(h, startTime, endTime, tenantId, name, dimensions);
String sql =
String.format(FIND_METRIC_DEFS_SQL,
namePart, offsetPart,
MetricQueries.buildDimensionAndClause(dimensions, "defDims"),
timeInClause,
limitPart);
Query<Map<String, Object>> query = h.createQuery(sql).bind("tenantId", tenantId);
if (name != null && !name.isEmpty()) {
logger.debug("binding name: {}", name);
query.bind("name", name);
}
if (startTime != null) {
query.bind("start_time", startTime);
}
if (endTime != null) {
query.bind("end_time", endTime);
}
if (offset != null && !offset.isEmpty()) {
@ -291,4 +327,64 @@ public class MetricDefinitionVerticaRepoImpl implements MetricDefinitionRepo {
}
}
}
private String createTimeInClause(
Handle dbHandle,
DateTime startTime,
DateTime endTime,
String tenantId,
String metricName,
Map<String, String> dimensions)
{
if (startTime == null) {
return "";
}
Set<byte[]> defDimIdSet = new HashSet<>();
String namePart = "";
if (metricName != null && !metricName.isEmpty()) {
namePart = "AND def.name = :name ";
}
String defDimSql = String.format(DEFDIM_IDS_SELECT, namePart,
MetricQueries.buildDimensionAndClause(dimensions, "defDims"));
Query<Map<String, Object>> query = dbHandle.createQuery(defDimSql).bind("tenantId", tenantId);
DimensionQueries.bindDimensionsToQuery(query, dimensions);
if (metricName != null && !metricName.isEmpty()) {
query.bind("name", metricName);
}
List<Map<String, Object>> rows = query.list();
for (Map<String, Object> row : rows) {
byte[] defDimId = (byte[]) row.get("id");
defDimIdSet.add(defDimId);
}
//
// If we didn't find any definition dimension ids,
// we won't add the time clause.
//
if (defDimIdSet.size() == 0) {
return "";
}
String timeAndClause = "";
if (endTime != null) {
timeAndClause = "AND time_stamp >= :start_time AND time_stamp <= :end_time ";
} else {
timeAndClause = "AND time_stamp >= :start_time ";
}
String defDimInClause = MetricQueries.createDefDimIdInClause(defDimIdSet);
return String.format(MEASUREMENT_AND_CLAUSE, defDimInClause, timeAndClause);
}
}

View File

@ -1,11 +1,11 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
@ -15,7 +15,9 @@
package monasca.api.infrastructure.persistence.vertica;
import java.util.Map;
import java.util.Set;
import org.apache.commons.codec.binary.Hex;
import org.skife.jdbi.v2.Handle;
import monasca.common.persistence.SqlQueries;
@ -58,4 +60,27 @@ final class MetricQueries {
return SqlQueries.keyValuesFor(handle, "select name, value from MonMetrics.Dimensions "
+ "where" + " dimension_set_id = ?", dimensionSetId);
}
static String createDefDimIdInClause(Set<byte[]> defDimIdSet) {
StringBuilder sb = new StringBuilder("IN ");
sb.append("(");
boolean first = true;
for (byte[] defDimId : defDimIdSet) {
if (first) {
first = false;
} else {
sb.append(",");
}
sb.append("'" + Hex.encodeHexString(defDimId) + "'");
}
sb.append(") ");
return sb.toString();
}
}
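
Vertica is not currently supported in the python API, but for readers following the port, an equivalent of createDefDimIdInClause might look like this sketch (hex-encoding the binary ids just as the Java code does):

```python
from binascii import hexlify

def create_def_dim_id_in_clause(def_dim_id_set):
    # Render binary definition-dimension ids as a SQL IN clause of
    # quoted hex strings, e.g. IN ('0a1b2c...','3d4e5f...')
    hex_ids = ("'" + hexlify(def_dim_id).decode('ascii') + "'"
               for def_dim_id in def_dim_id_set)
    return 'IN (' + ','.join(hex_ids) + ') '
```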

View File

@ -308,7 +308,7 @@ public class StatisticVerticaRepoImpl implements StatisticRepo {
}
sb.append(" FROM MonMetrics.Measurements ");
String inClause = createInClause(defDimIdSet);
String inClause = MetricQueries.createDefDimIdInClause(defDimIdSet);
sb.append("WHERE to_hex(definition_dimensions_id) " + inClause);
sb.append(createWhereClause(startTime, endTime, offset));
@ -322,29 +322,6 @@ public class StatisticVerticaRepoImpl implements StatisticRepo {
return sb.toString();
}
private String createInClause(Set<byte[]> defDimIdSet) {
StringBuilder sb = new StringBuilder("IN ");
sb.append("(");
boolean first = true;
for (byte[] defDimId : defDimIdSet) {
if (first) {
first = false;
} else {
sb.append(",");
}
sb.append("'" + Hex.encodeHexString(defDimId) + "'");
}
sb.append(") ");
return sb.toString();
}
private String createWhereClause(
DateTime startTime,
DateTime endTime,

View File

@ -20,6 +20,8 @@ import com.google.common.base.Strings;
import com.codahale.metrics.annotation.Timed;
import org.joda.time.DateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
@ -121,6 +123,8 @@ public class MetricResource {
@QueryParam("dimensions") String dimensionsStr,
@QueryParam("offset") String offset,
@QueryParam("limit") String limit,
@QueryParam("start_time") String startTimeStr,
@QueryParam("end_time") String endTimeStr,
@QueryParam("tenant_id") String crossTenantId) throws Exception
{
Map<String, String>
@ -129,6 +133,16 @@ public class MetricResource {
.parseAndValidateDimensions(dimensionsStr);
MetricNameValidation.validate(name, false);
DateTime startTime = Validation.parseAndValidateDate(startTimeStr, "start_time", false);
DateTime endTime = Validation.parseAndValidateDate(endTimeStr, "end_time", false);
if ((startTime != null) && (endTime != null)) {
//
// If both times are specified, make sure start is before end
//
Validation.validateTimes(startTime, endTime);
}
final String queryTenantId = Validation.getQueryProject(roles, crossTenantId, tenantId,
admin_role);
final int paging_limit = this.persistUtils.getLimit(limit);
@ -136,6 +150,8 @@ public class MetricResource {
queryTenantId,
name,
dimensions,
startTime,
endTime,
offset,
paging_limit
);

View File

@ -1,11 +1,11 @@
/*
* Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
@ -17,6 +17,7 @@ package monasca.api.infrastructure.persistence.vertica;
import monasca.common.model.metric.MetricDefinition;
import monasca.api.domain.model.metric.MetricDefinitionRepo;
import org.joda.time.DateTime;
import org.skife.jdbi.v2.DBI;
import org.skife.jdbi.v2.Handle;
import org.testng.annotations.AfterClass;
@ -85,20 +86,64 @@ public class MetricDefinitionVerticaRepositoryImplTest {
}
public void shouldFindWithoutDimensions() throws Exception {
List<MetricDefinition> defs = repo.find("bob", "cpu_utilization", null, null, 1);
List<MetricDefinition> defs = repo.find("bob", "cpu_utilization", null, null, null, null, 1);
assertEquals(defs.size(), 3);
}
public void shouldFindWithStartTime() throws Exception {
List<MetricDefinition> defs = repo.find("bob",
"cpu_utilization",
null,
new DateTime(2014, 1, 1, 0, 0, 0),
null,
null,
1);
assertEquals(defs.size(), 3);
}
public void shouldExcludeWithStartTime() throws Exception {
List<MetricDefinition> defs = repo.find("bob",
"cpu_utilization",
null,
new DateTime(2014, 1, 1, 0, 1, 1),
null,
null,
1);
assertEquals(defs.size(), 0);
}
public void shouldFindWithEndTime() throws Exception {
List<MetricDefinition> defs = repo.find("bob",
"cpu_utilization",
null,
new DateTime(2014, 1, 1, 0, 0, 0),
new DateTime(2014, 1, 1, 0, 1, 1),
null,
1);
assertEquals(defs.size(), 3);
}
public void shouldExcludeWithEndTime() throws Exception {
List<MetricDefinition> defs = repo.find("bob",
"cpu_utilization",
null,
new DateTime(2013, 1, 1, 0, 0, 0),
new DateTime(2013, 12, 31, 0, 0, 0),
null,
1);
assertEquals(defs.size(), 0);
}
public void shouldFindWithDimensions() throws Exception {
Map<String, String> dims = new HashMap<>();
dims.put("service", "compute");
dims.put("instance_id", "123");
List<MetricDefinition> defs = repo.find("bob", "cpu_utilization", dims, null, 1);
List<MetricDefinition> defs = repo.find("bob", "cpu_utilization", dims, null, null, null, 1);
assertEquals(defs.size(), 2);
dims.put("flavor_id", "2");
defs = repo.find("bob", "cpu_utilization", dims, null, 1);
defs = repo.find("bob", "cpu_utilization", dims, null, null, null, 1);
assertEquals(defs.size(), 1);
}
}

View File

@ -56,10 +56,12 @@ class MetricsRepository(metrics_repository.MetricsRepository):
LOG.exception(ex)
raise exceptions.RepositoryException(ex)
def _build_show_series_query(self, dimensions, name, tenant_id, region):
def _build_show_series_query(self, dimensions, name, tenant_id, region,
start_timestamp=None, end_timestamp=None):
where_clause = self._build_where_clause(dimensions, name, tenant_id,
region)
region, start_timestamp,
end_timestamp)
query = 'show series ' + where_clause
@ -152,12 +154,11 @@ class MetricsRepository(metrics_repository.MetricsRepository):
return from_clause
def list_metrics(self, tenant_id, region, name, dimensions, offset,
limit):
limit, start_timestamp=None, end_timestamp=None):
try:
query = self._build_show_series_query(dimensions, name, tenant_id,
region)
query = self._build_show_series_query(dimensions, name, tenant_id, region)
query += " limit {}".format(limit + 1)
@ -166,7 +167,12 @@ class MetricsRepository(metrics_repository.MetricsRepository):
result = self.influxdb_client.query(query)
json_metric_list = self._build_serie_metric_list(result, offset)
json_metric_list = self._build_serie_metric_list(result,
tenant_id,
region,
start_timestamp,
end_timestamp,
offset)
return json_metric_list
@ -181,7 +187,9 @@ class MetricsRepository(metrics_repository.MetricsRepository):
LOG.exception(ex)
raise exceptions.RepositoryException(ex)
def _build_serie_metric_list(self, series_names, offset):
def _build_serie_metric_list(self, series_names, tenant_id, region,
start_timestamp, end_timestamp,
offset):
json_metric_list = []
@ -204,12 +212,19 @@ class MetricsRepository(metrics_repository.MetricsRepository):
if value and not name.startswith(u'_')
}
metric = {u'id': str(metric_id),
          u'name': series[u'name'],
          u'dimensions': dimensions}
metric_id += 1
json_metric_list.append(metric)
if self._has_measurements(tenant_id,
                          region,
                          series[u'name'],
                          dimensions,
                          start_timestamp,
                          end_timestamp):
    metric = {u'id': str(metric_id),
              u'name': series[u'name'],
              u'dimensions': dimensions}
    metric_id += 1
    json_metric_list.append(metric)
return json_metric_list
@ -433,6 +448,37 @@ class MetricsRepository(metrics_repository.MetricsRepository):
return offset_clause
def _has_measurements(self, tenant_id, region, name, dimensions,
start_timestamp, end_timestamp):
has_measurements = True
#
# No need for the additional query if we don't have a start timestamp.
#
if not start_timestamp:
return True
#
# We set limit to 1 for the measurement_list call, as we are only
# interested in knowing if there is at least one measurement, and
# do not want to ask too much of influxdb.
#
measurements = self.measurement_list(tenant_id,
region,
name,
dimensions,
start_timestamp,
end_timestamp,
0,
1,
False)
if len(measurements) == 0:
has_measurements = False
return has_measurements
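
With this in place, a repository caller can scope a listing to a time window, e.g. (a sketch; `repo` is a MetricsRepository instance and the epoch timestamp values are illustrative):

```python
# Metrics with no measurements in the window (deleted VMs, etc.)
# are filtered out of the result.
metrics = repo.list_metrics(tenant_id, region,
                            name='cpu.user_perc', dimensions={},
                            offset=None, limit=50,
                            start_timestamp=1446422400,
                            end_timestamp=1446508800)
```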
def alarm_history(self, tenant_id, alarm_id_list,
offset, limit, start_timestamp=None,
end_timestamp=None):

View File

@ -101,12 +101,15 @@ class Metrics(metrics_api_v2.MetricsV2API):
@resource.resource_try_catch_block
def _list_metrics(self, tenant_id, name, dimensions, req_uri, offset,
limit):
limit, start_timestamp, end_timestamp):
result = self._metrics_repo.list_metrics(tenant_id,
self._region,
name,
dimensions, offset, limit)
dimensions,
offset, limit,
start_timestamp,
end_timestamp)
return helpers.paginate(result, req_uri, limit)
@ -133,8 +136,12 @@ class Metrics(metrics_api_v2.MetricsV2API):
helpers.validate_query_dimensions(dimensions)
offset = helpers.get_query_param(req, 'offset')
limit = helpers.get_limit(req)
start_timestamp = helpers.get_query_starttime_timestamp(req, False)
end_timestamp = helpers.get_query_endtime_timestamp(req, False)
helpers.validate_start_end_timestamps(start_timestamp, end_timestamp)
result = self._list_metrics(tenant_id, name, dimensions,
req.uri, offset, limit)
req.uri, offset, limit,
start_timestamp, end_timestamp)
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_200

View File

@ -421,3 +421,42 @@ class TestMetrics(base.BaseMonascaTest):
self.assertEqual(str(element['dimensions'][test_key]), test_value)
if test_name is not None:
self.assertEqual(str(element['name']), test_name)
@test.attr(type='gate')
def test_list_metrics_with_time_args(self):
name = data_utils.rand_name('name')
key = data_utils.rand_name('key')
value_org = data_utils.rand_name('value')
now = int(round(time.time() * 1000))
#
# Build start and end time args before and after the measurement.
#
start_iso = helpers.timestamp_to_iso(now - 1000)
end_timestamp = int(round(now + 1000))
end_iso = helpers.timestamp_to_iso(end_timestamp)
metric = helpers.create_metric(name=name,
dimensions={key: value_org},
timestamp=now)
self.monasca_client.create_metrics(metric)
for timer in xrange(constants.MAX_RETRIES):
query_parms = '?name=' + name + '&start_time=' + start_iso + '&end_time=' + end_iso
resp, response_body = self.monasca_client.list_metrics(query_parms)
self.assertEqual(200, resp.status)
elements = response_body['elements']
if elements:
element = elements[0]
dimension = element['dimensions']
value = dimension[unicode(key)]
self.assertEqual(value_org, str(value))
break
else:
time.sleep(constants.RETRY_WAIT_SECS)
if timer == constants.MAX_RETRIES - 1:
skip_msg = "Skipped test_list_metrics_with_time_args: " \
"timeout on waiting for metrics: at least one " \
"metric is needed. Current number of metrics " \
"= 0"
raise self.skipException(skip_msg)
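
The test relies on helpers.timestamp_to_iso to convert epoch milliseconds into the ISO 8601 strings the API expects; a minimal equivalent (an assumption for illustration, not the actual tempest helper) could be:

```python
import datetime

def timestamp_to_iso(timestamp_ms):
    # Epoch milliseconds -> ISO 8601 UTC string, e.g. 2015-11-02T22:14:48Z
    dt = datetime.datetime.utcfromtimestamp(timestamp_ms / 1000.0)
    return dt.strftime('%Y-%m-%dT%H:%M:%SZ')
```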