This commit adds support for adding an optional prefix and/or suffix to
Prometheus queries.
For example, this can be used to perform vector matches between the
collected metric and another one, to gather additional metadata.
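As a minimal sketch, assuming the configured prefix/suffix is simply
concatenated around the PromQL query built by the collector (the query
shape, option handling and metric names below are illustrative, not the
actual collector internals):
```
# Illustrative PromQL pieces; names are examples only.
base_query = 'avg(avg_over_time(instance_cpu_usage{project_id="abc"}[3600s]))'
prefix = ""
suffix = " * on(instance) group_left(flavor) instance_metadata"

# The suffix performs a vector match against a metadata metric, attaching
# the extra label to the collected samples.
final_query = "{}{}{}".format(prefix, base_query, suffix)
print(final_query)
```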
Change-Id: I725f0f5ad00b67f55bcacaf8447e050af3815c73
The quote API method has been broken for a while now. This patch fixes
the API and makes it work again. To use it, one can issue the following
command.
curl -s -X POST "http://cloudkitty_server_and_port/v1/rating/quote" \
  -H "Accept: application/json" -H "User-Agent: python-keystoneclient" \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: ${ACCESS_TOKEN_KEYSTONE}" \
  -d '{"resources": [{"service": "<cloudkitty_service_name>", "volume": "<quantity_to_quote>", "desc": {"metadata": {"<field_name_used_to_define_price>": "<value_to_activate_a_price>"}}}]}'
Change-Id: Ia6ef6543a5100445b203196d39b404bc74b75ba0
Story: 2009022
Task: 42744
The goal of this patch is to introduce support for multi-valued
parameters. For instance, even though the code already treated the `type`
parameter as a possible list of types, the API would not allow a user to
send multiple types.
This patch enables users to send filters with multiple values, which can
be useful, for instance, for filtering by several project_ids or by
different metric types.
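As a sketch, assuming a client talks to the v2 summary endpoint and passes
filters as query parameters (the endpoint path, parameter format and token
below are placeholders, not taken from the API reference):
```
import requests

# Repeating the same key sends a filter with multiple values, e.g. several
# metric types at once.
response = requests.get(
    "http://cloudkitty_server_and_port/v2/summary",
    headers={"X-Auth-Token": "<ACCESS_TOKEN>"},
    params=[("filters", "type:image.size.usage"),
            ("filters", "type:instance")],
)
print(response.json())
```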
Change-Id: I59397b33d014709eb976c78d517f009b8a2be4cf
The V2 summary endpoint uses a quite unconventional data format in
the response. Currently, the format is the following:
```
{"total": <number of elements in the response>,
"results": [array of arrays of data],
"columns": [array of columns]}
```
To process this, we need to find the index of a column in the column list
and then use that index to retrieve the value from the data array inside
the results array. The proposal is to use the following format in the
response:
```
{"total": <number of elements in the response>,
"results": [array of objects/dictionary]}
```
With this new format, one does not need to look up the index of a column
to retrieve data from an entry; the data can be retrieved directly using
its column name. Therefore, the coding feels more natural. To maintain
compatibility, the new format is only applied when it is requested via the
`response_format` option.
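A small sketch of how a client reads a value in both formats (field names
are illustrative):
```
# Old format: the value has to be located through the column index.
old_response = {"total": 1,
                "columns": ["project_id", "rate"],
                "results": [["abc123", "42.0"]]}
rate_index = old_response["columns"].index("rate")
old_rate = old_response["results"][0][rate_index]

# New format: the value is retrieved directly by its column name.
new_response = {"total": 1,
                "results": [{"project_id": "abc123", "rate": "42.0"}]}
new_rate = new_response["results"][0]["rate"]

assert old_rate == new_rate
```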
Depends-on: https://review.opendev.org/c/openstack/cloudkitty/+/793973
Change-Id: I5869d527e6e4655c653b6852d6fb7bebc9d71520
This patch adds two options to fetcher_keystone to filter which tenants
should be rated:
ignore_disabled_tenants (Default=False)
ignore_rating_role (Default=False)
In our case, we currently have 2k projects (and growing) and we want to
rate all active projects, so checking the rating role is useless and
consumes resources for nothing. Besides, CloudKitty rates projects
regardless of whether they are enabled or disabled, which is also useless
and consumes resources in our case.
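For reference, a minimal sketch of enabling both options in
cloudkitty.conf (the section name is assumed to be the fetcher_keystone
option group):
```
[fetcher_keystone]
ignore_disabled_tenants = True
ignore_rating_role = True
```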
Change-Id: I6479d76c367dc4217bce4de9c3db41c4612f0397
This patch adds scope activation status fields to the storage state table
(cloudkitty_storage_states): a boolean column called "active", which
indicates whether the CloudKitty scope is active for billing, and another
one called "scope_activation_toggle_date" (a timestamp field), which
stores the latest timestamp when the scope moved between the
active/deactivated states. Then, during CloudKitty processing, we check
the "active" column; if the scope is not active, we ignore it during the
processing.
Moreover, we introduce an API to allow operators to set the "active"
field. The "scope_activation_toggle_date" field will not be exposed for
operators to change; it is updated automatically according to the changes
in the "active" field.
This patch also adds a new HTTP method to the "/v2/scope" endpoint: the
"PATCH" HTTP method allows operators to patch a storage scope. The API
requires the scope_id, and it takes into account some of the fields we
allow operators to change, the "active" field being one of them.
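As a sketch, assuming the scope_id is passed as a query parameter and the
body carries the fields to change (the exact request shape is not
reproduced from the API reference):
```
import requests

# Illustrative only: URL, parameters and payload are placeholders.
response = requests.patch(
    "http://cloudkitty_server_and_port/v2/scope",
    headers={"X-Auth-Token": "<ACCESS_TOKEN>"},
    params={"scope_id": "<scope_id>"},
    # Deactivate the scope; "scope_activation_toggle_date" is updated
    # automatically by CloudKitty and cannot be set here.
    json={"active": False},
)
print(response.status_code)
```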
Change-Id: Ia02c2eeb98021c60549cb8deab6f2e964e573f1e
Implements: https://review.opendev.org/c/openstack/cloudkitty-specs/+/770928/
Add file to the reno documentation build to show release notes for
stable/xena.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/xena.
Sem-Ver: feature
Change-Id: I42f0a80b7eff0a3aaa42f188f7656391202dee29
It appears that the default metrics.yml needs force_granularity: 300;
otherwise, cloudkitty-processor will fail to process metrics from Gnocchi.
As this is what at least I (in Debian) ship in packages, it would be nice
to have a working default.
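For illustration, a minimal metrics.yml entry with the granularity forced
(the metric name and the other keys are only an example, not the full
default file):
```
metrics:
  cpu:
    unit: instance
    extra_args:
      aggregation_method: rate:mean
      force_granularity: 300
```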
Change-Id: I28b43d6131182252c0576d9323bbcdbfb1dbbd0a
Story: 2008598
Task: 41771
This patch proposes a method for operators to customize the aggregation
query executed against Gnocchi. By default, we use the following query:
(aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD))
This option enables operators to take full advantage of the operations
available in Gnocchi, such as arithmetic operations, logical operations,
and many others. When using a custom aggregation query, one can use the
placeholders `RE_AGGREGATION_METHOD`, `AGGREGATION_METHOD`, and
`METRIC_NAME`; they will be replaced at runtime by values from the metric
configuration.
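A minimal sketch of the placeholder substitution (the substituted values
below are illustrative, not the exact CloudKitty internals):
```
# Custom aggregation query configured by the operator.
custom_query = ("(aggregate RE_AGGREGATION_METHOD "
                "(metric METRIC_NAME AGGREGATION_METHOD))")

# At runtime, the placeholders are replaced by values from the metric
# configuration.
query = (custom_query
         .replace("RE_AGGREGATION_METHOD", "mean")
         .replace("METRIC_NAME", "radosgw.objects.size")
         .replace("AGGREGATION_METHOD", "max"))

print(query)
# (aggregate mean (metric radosgw.objects.size max))
```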
Custom queries can address different use cases, such as RadosGW usage data
trimming, which causes a decrease in the usage data values; Libvirt
attaching/detaching disks, migrating VMs, or starting/stopping VMs, which
zeroes the usage data gathered by Ceilometer compute; and many other cases
where one might want a more complex operation to be executed on the data
before CloudKitty rates it.
Change-Id: I3419075d6df165409cb1375ad11a5b3f7faa7471
ABCs in collections should be imported from collections.abc; direct
import from collections has been deprecated since Python 3.3.
This replaces the deprecated usage mentioned in method descriptions.
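For example, the preferred import is:
```
# Deprecated since Python 3.3 (and removed in Python 3.10):
#     from collections import Iterable
# Preferred:
from collections.abc import Iterable

assert isinstance([], Iterable)
```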
Change-Id: I6864c6d84407ed9512917266ccb06a51bab5714a
Probably due to the six removal, exception handling for gnocchiclient
errors in CloudKitty is not working as expected.
This patch fixes the issue.
Bug described in storyboard:
https://storyboard.openstack.org/#!/story/2008985
Story: 2008985
Task: 42635
Change-Id: I387fee79d277daa4c739297f1734dad1eaf4cc94
SQLAlchemy was upgraded via the `requirements` project's commit
(dc86260b28).
That upgrade broke some code. Therefore, this patch fixes the code that
was broken due to a method removal and also due to a change in
`create_constraint` for Enum types, which was defaulted to `False` in
version 1.4.0.
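As an illustration of the `create_constraint` change (the column and enum
names below are made up, not the actual CloudKitty models):
```
import sqlalchemy as sa

# Since SQLAlchemy 1.4.0, create_constraint defaults to False, so it must
# be passed explicitly to keep the pre-1.4 CHECK constraint behaviour.
state_column = sa.Column(
    "state",
    sa.Enum("init", "running", "done", name="state_enum",
            create_constraint=True),
    nullable=False,
)
```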
Change-Id: Ib36a7275ce8ff30aa19cbde521fd37f60bd00ff8
Setuptools v54.1.0 introduces a warning that the use of dash-separated
options in 'setup.cfg' will not be supported in a future version [1].
Get ahead of the issue by replacing the dashes with underscores. Without
this, we see 'UserWarning' messages like the following on new enough
versions of setuptools:
UserWarning: Usage of dash-separated 'description-file' will not be
supported in future versions. Please use the underscore name
'description_file' instead
[1] https://github.com/pypa/setuptools/commit/a2e9ae4cb
Change-Id: I5e5e86149c0ea6be5553bf8911dfc729d0ec57bb
Flask has been updated in the OpenStack `requirements` project [1].
This has broken some test cases that were mocking the root object
"flask.request".
Instead of mocking the root object, we address the issue by mocking only
the needed methods and attributes. This makes the unit tests easier to
understand and also helps people pinpoint problems right away.
[1] https://review.opendev.org/c/openstack/requirements/+/793023
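A minimal sketch of the approach, with a made-up handler (the real test
code is not reproduced here):
```
from unittest import mock

# Hypothetical code under test: it only reads "args" from the request.
def parse_filters(request):
    return dict(request.args)

# Build a mock exposing just the attributes the code needs, instead of
# patching the whole "flask.request" root object.
request = mock.Mock()
request.args = {"project_id": "abc123"}

assert parse_filters(request) == {"project_id": "abc123"}
```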
Change-Id: I8703c7d3e69f35ef3e85234c27b4743242111f3d
By default, not even an admin can use the get_summary endpoint with
all_tenants=True or with a tenant_id parameter. This commit fixes that.
The rule is now the same as Cinder's admin_or_owner definition.
Change-Id: I3e34927e8ab88f25d2975b4dbac89b52a7d94c98
Add file to the reno documentation build to show release notes for
stable/wallaby.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/wallaby.
Sem-Ver: feature
Change-Id: If24445f0c581e3508821e746467a8ebac5323937
Instead of forcing table recreation, rely on Alembic to do the right
thing.
When using PostgreSQL, "recreate='always'" prevents the migration from
happening correctly: it errors out with an exception, and the table is not
present when checking the database directly.
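A minimal sketch of the resulting migration style, with illustrative table
and column names (Alembic's default recreate='auto' lets it decide when a
table copy is actually needed):
```
import sqlalchemy as sa
from alembic import op

def upgrade():
    # No recreate='always': Alembic only recreates the table when the
    # backend cannot ALTER it in place (e.g. SQLite).
    with op.batch_alter_table("cloudkitty_storage_states") as batch_op:
        batch_op.add_column(
            sa.Column("active", sa.Boolean(), nullable=False,
                      server_default=sa.true()))
```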
Change-Id: I17f0e8c26b796885c6f7ca4fd5c53cd917c5f2ed
Story: 2008306
Task: 41193