[pre-commit] fix typos and configure codespell
This change enables codespell in pre-commit and fixes the existing typos. A follow-up commit will enable this in tox and CI.

Change-Id: I0a11bcd5a88247a48d3437525fc8a3cb3cdd4e58
parent 4d5022ab94
commit 5f79ab87c7
@@ -41,14 +41,11 @@ repos:
       hooks:
         - id: autopep8
           files: '^.*\.py$'
-  # FIXME(sean-k-mooney): we have many typos and some false
-  # positives that need to be added to the dictionary
-  # correct this in a followup change
-  # - repo: https://github.com/codespell-project/codespell
-  #   rev: v2.3.0
-  #   hooks:
-  #     - id: codespell
-  #       args: ['--ignore-words=doc/dictionary.txt']
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.3.0
+    hooks:
+      - id: codespell
+        args: ['--ignore-words=doc/dictionary.txt']
   # FIXME(sean-k-mooney): we have many sphinx issues fix them
   # in a separate commit to make it easier to review
   # - repo: https://github.com/sphinx-contrib/sphinx-lint
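With the hook registered as above, the check can be exercised locally before pushing. A typical invocation (assuming the `pre-commit` tool is installed; these are standard pre-commit commands, not taken from this change) looks like:

```shell
# Install the git hook scripts once per clone
pre-commit install

# Run only the codespell hook against the whole tree
pre-commit run codespell --all-files
```

`pre-commit run <hook-id> --all-files` is handy here because a typo-fix change like this touches files that a normal staged-files-only hook run would not revisit.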
@@ -12,7 +12,7 @@ Here are some examples of ``Goals``:
 - minimize the energy consumption
 - minimize the number of compute nodes (consolidation)
 - balance the workload among compute nodes
-- minimize the license cost (some softwares have a licensing model which is
+- minimize the license cost (some software have a licensing model which is
   based on the number of sockets or cores where the software is deployed)
 - find the most appropriate moment for a planned maintenance on a
   given group of host (which may be an entire availability zone):
@@ -123,4 +123,4 @@ Response
 **Example JSON representation of a Goal:**

 .. literalinclude:: samples/goal-show-response.json
-  :language: javascript
+   :language: javascript
@@ -0,0 +1,4 @@
+thirdparty
+assertin
+notin
@@ -52,7 +52,7 @@ class BaseWatcherDirective(rst.Directive):
         obj_raw_docstring = obj.__init__.__doc__

         if not obj_raw_docstring:
-            # Raise a warning to make the tests fail wit doc8
+            # Raise a warning to make the tests fail with doc8
             raise self.error("No docstring available for %s!" % obj)

         obj_docstring = inspect.cleandoc(obj_raw_docstring)
@@ -34,7 +34,7 @@ own sections. However, the base *GMR* consists of several sections:

 Package
   Shows information about the package to which this process belongs, including
-  version informations.
+  version information.

 Threads
   Shows stack traces and thread ids for each of the threads within this
@@ -285,7 +285,7 @@ Audit and interval (in case of CONTINUOUS type). There is three types of Audit:
 ONESHOT, CONTINUOUS and EVENT. ONESHOT Audit is launched once and if it
 succeeded executed new action plan list will be provided; CONTINUOUS Audit
 creates action plans with specified interval (in seconds or cron format, cron
-inteval can be used like: `*/5 * * * *`), if action plan
+interval can be used like: `*/5 * * * *`), if action plan
 has been created, all previous action plans get CANCELLED state;
 EVENT audit is launched when receiving webhooks API.

@@ -221,7 +221,7 @@ workflow engine can halt or take other actions while the action plan is being
 executed based on the success or failure of individual actions. However, the
 base workflow engine simply uses these notifies to store the result of
 individual actions in the database. Additionally, since taskflow uses a graph
-flow if any of the tasks would fail all childs of this tasks not be executed
+flow if any of the tasks would fail all children of this tasks not be executed
 while ``do_revert`` will be triggered for all parents.

 .. code-block:: python
@@ -48,7 +48,7 @@
         logging configuration to any other existing logging
         options. Please see the Python logging module documentation
         for details on logging configuration files. The log-config
-        name for this option is depcrecated.
+        name for this option is deprecated.

 **--log-format FORMAT**
         A logging.Formatter log message format string which may use any
@@ -4,4 +4,4 @@ features:
     will standardize interactions with scoring engines
     through the common API. It is possible to use the
     scoring engine by different Strategies, which
-    improve the code and data model re-use.
+    improve the code and data model reuse.
@@ -5,5 +5,5 @@ features:
    failure. The amount of failures allowed before giving up and the time before
    reattempting are configurable. The `api_call_retries` and
    `api_query_timeout` parameters in the `[collector]` group can be used to
-   adjust these paremeters. 10 retries with a 1 second time in between
+   adjust these parameters. 10 retries with a 1 second time in between
    reattempts is the default.
@@ -3,6 +3,6 @@ features:
     Watcher starts to support API microversions since Stein cycle. From now
     onwards all API changes should be made with saving backward compatibility.
     To specify API version operator should use OpenStack-API-Version
-    HTTP header. If operator wants to know the mininum and maximum supported
+    HTTP header. If operator wants to know the minimum and maximum supported
     versions by API, he/she can access /v1 resource and Watcher API will
     return appropriate headers in response.
@@ -7,7 +7,7 @@ prelude: >
 features:
   - |
     A new threadpool for the decision engine that contributors can use to
-    improve the performance of many operations, primarily I/O bound onces.
+    improve the performance of many operations, primarily I/O bound ones.
     The amount of workers used by the decision engine threadpool can be
     configured to scale according to the available infrastructure using
     the `watcher_decision_engine.max_general_workers` config option.
@@ -13,7 +13,7 @@ features:
     * disk_gb_reserved: The amount of disk a node has reserved for its own use.
     * disk_ratio: Disk allocation ratio.

-    We also add some new propeties:
+    We also add some new properties:

     * vcpu_capacity: The amount of vcpu, take allocation ratio into account,
       but do not include reserved.
@@ -4,5 +4,5 @@ features:
     Whenever a Watcher object is created, updated or deleted, a versioned
     notification will, if it's relevant, be automatically sent to notify in order
     to allow an event-driven style of architecture within Watcher. Moreover, it
-    will also give other services and/or 3rd party softwares (e.g. monitoring
+    will also give other services and/or 3rd party software (e.g. monitoring
     solutions or rules engines) the ability to react to such events.
@@ -1,3 +1,3 @@
 ---
 features:
-  - Add a service supervisor to watch Watcher deamons.
+  - Add a service supervisor to watch Watcher daemons.
@@ -109,3 +109,8 @@ watcher_cluster_data_model_collectors =
 compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
 storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector
 baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector
+
+[codespell]
+skip = *.po,*.js,*.css,*.html,*.svg,HACKING.py,*hacking*,*build*,*_static*,doc/dictionary.txt,*.pyc,*.inv,*.gz,*.jpg,*.png,*.vsd,*.graffle,*.json
+count =
+quiet-level = 4
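The `[codespell]` section above only carries skip patterns and verbosity; the check itself is a dictionary lookup with an ignore list, which is why `doc/dictionary.txt` can whitelist false positives like `assertin`. A minimal illustrative sketch of that mechanism (this is not codespell's actual implementation, and the `MISSPELLINGS` table here is a made-up stand-in for its data file):

```python
# Illustrative sketch: dictionary-based spell checking with an
# ignore-words list, mirroring how doc/dictionary.txt suppresses
# known false positives. MISSPELLINGS is invented for this example.
MISSPELLINGS = {
    "chanage": "change",
    "inteval": "interval",
    "assertin": "assert in",   # false positive in this repo's tests
}


def check_text(text, ignore_words=frozenset()):
    """Return (word, suggestion) pairs for misspellings not ignored."""
    findings = []
    for word in text.lower().split():
        token = word.strip(".,:;()'\"")
        if token in MISSPELLINGS and token not in ignore_words:
            findings.append((token, MISSPELLINGS[token]))
    return findings


# "assertin" is suppressed via the ignore list, as dictionary.txt does
print(check_text("this chanage uses assertin", ignore_words={"assertin"}))
# → [('chanage', 'change')]
```

This also shows why the dictionary file must contain one token per line: each entry is simply removed from the candidate set before reporting.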
@@ -334,7 +334,7 @@ class ActionsController(rest.RestController):
         policy.enforce(context, 'action:detail',
                        action='action:detail')

-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "actions":
             raise exception.HTTPNotFound
@@ -433,7 +433,7 @@ class ActionPlansController(rest.RestController):
         policy.enforce(context, 'action_plan:detail',
                        action='action_plan:detail')

-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "action_plans":
             raise exception.HTTPNotFound
@@ -570,7 +570,7 @@ class AuditsController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'audit:detail',
                        action='audit:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "audits":
             raise exception.HTTPNotFound
@@ -576,7 +576,7 @@ class AuditTemplatesController(rest.RestController):
         policy.enforce(context, 'audit_template:detail',
                        action='audit_template:detail')

-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "audit_templates":
             raise exception.HTTPNotFound
@@ -24,7 +24,7 @@ Here are some examples of :ref:`Goals <goal_definition>`:
 - minimize the energy consumption
 - minimize the number of compute nodes (consolidation)
 - balance the workload among compute nodes
-- minimize the license cost (some softwares have a licensing model which is
+- minimize the license cost (some software have a licensing model which is
   based on the number of sockets or cores where the software is deployed)
 - find the most appropriate moment for a planned maintenance on a
   given group of host (which may be an entire availability zone):
@@ -217,7 +217,7 @@ class GoalsController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'goal:detail',
                        action='goal:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "goals":
             raise exception.HTTPNotFound
@@ -237,7 +237,7 @@ class ServicesController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'service:detail',
                        action='service:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "services":
             raise exception.HTTPNotFound
@@ -284,7 +284,7 @@ class StrategiesController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'strategy:detail',
                        action='strategy:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "strategies":
             raise exception.HTTPNotFound
@@ -72,8 +72,8 @@ class KeystoneHelper(object):
                 message=(_("Project not Found: %s") % name_or_id))
         if len(projects) > 1:
             raise exception.Invalid(
-                messsage=(_("Project name seems ambiguous: %s") %
-                          name_or_id))
+                message=(_("Project name seems ambiguous: %s") %
+                         name_or_id))
         return projects[0]

     def get_domain(self, name_or_id):
@@ -47,7 +47,7 @@ APPLIER_OPTS = [
     cfg.BoolOpt('rollback_when_actionplan_failed',
                 default=False,
                 help='If set True, the failed actionplan will rollback '
-                     'when executing. Defaule value is False.'),
+                     'when executing. Default value is False.'),
 ]


@@ -52,8 +52,8 @@ class DefaultStrategySelector(base.BaseSelector):
         else:
             available_strategies = self.strategy_loader.list_available()
             available_strategies_for_goal = list(
-                key for key, strat in available_strategies.items()
-                if strat.get_goal_name() == self.goal_name)
+                key for key, strategy in available_strategies.items()
+                if strategy.get_goal_name() == self.goal_name)

         if not available_strategies_for_goal:
             raise exception.NoAvailableStrategyForGoal(
@@ -32,7 +32,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
     *Description*

     It is a migration strategy for one compute node maintenance,
-    without having the user's application been interruptted.
+    without having the user's application been interrupted.
     If given one backup node, the strategy will firstly
     migrate all instances from the maintenance node to
     the backup node. If the backup node is not provided,
@@ -563,7 +563,7 @@ class ZoneMigration(base.ZoneMigrationBaseStrategy):
         filter_actions = self.get_priority_filter_list()
         LOG.debug(filter_actions)

-        # apply all filters set in input prameter
+        # apply all filters set in input parameter
         for action in list(reversed(filter_actions)):
             LOG.debug(action)
             result = action.apply_filter(result)
@@ -795,7 +795,7 @@ class ComputeHostSortFilter(SortMovingToFrontFilter):

         :param item: instance object
         :param sort_key: compute host name
-        :returns: true: compute name on which intance is equals sort_key
+        :returns: true: compute name on where instance host equals sort_key
                   false: otherwise
         """

@@ -823,7 +823,7 @@ class StorageHostSortFilter(SortMovingToFrontFilter):

         :param item: volume object
         :param sort_key: storage pool name
-        :returns: true: pool name on which intance is equals sort_key
+        :returns: true: pool name on where instance.host equals sort_key
                   false: otherwise
         """

@@ -59,7 +59,7 @@ class DecisionEngineThreadPool(object, metaclass=service.Singleton):

         :param futures: list, set or dictionary of futures
         :type futures: list :py:class:`futurist.GreenFuture`
-        :param fn: function to execute upon the future finishing exection
+        :param fn: function to execute upon the future finishing execution
         :param args: arguments for the function
         :param kwargs: amount of arguments for the function
         """
@@ -83,7 +83,7 @@ class DecisionEngineThreadPool(object, metaclass=service.Singleton):

         :param futures: list, set or dictionary of futures
         :type futures: list :py:class:`futurist.GreenFuture`
-        :param fn: function to execute upon the future finishing exection
+        :param fn: function to execute upon the future finishing execution
         :param args: arguments for the function
         :param kwargs: amount of arguments for the function
         """
@@ -536,7 +536,7 @@ class TestActionPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)
@@ -635,7 +635,7 @@ class TestActionPlanPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)
@@ -785,7 +785,7 @@ class TestAuditTemplatePolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)
@@ -1058,7 +1058,7 @@ class TestAuditPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)
@@ -58,7 +58,7 @@ class TestDataModelPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)
@@ -255,7 +255,7 @@ class TestStrategyPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)
@@ -38,7 +38,7 @@ class TestMaasNode(base.TestCase):

     def test_get_power_state(self):
         if not maas_enum:
-            self.skipTest("python-libmaas not intalled.")
+            self.skipTest("python-libmaas not installed.")

         self._wrapped_node.query_power_state.side_effect = (
             maas_enum.PowerState.ON,
@@ -105,10 +105,10 @@ class FakeGnocchiMetrics(object):

     def get_compute_node_cpu_util(self, resource, period,
                                   aggregate, granularity):
-        """Calculates node utilization dynamicaly.
+        """Calculates node utilization dynamically.

         node CPU utilization should consider
-        and corelate with actual instance-node mappings
+        and correlate with actual instance-node mappings
         provided within a cluster model.
         Returns relative node CPU utilization <0, 100>.
         :param r_id: resource id
@@ -76,22 +76,22 @@ class TestSyncer(base.DbTestCase):
         self.addCleanup(p_strategies.stop)

     @staticmethod
-    def _find_created_modified_unmodified_ids(befores, afters):
+    def _find_created_modified_unmodified_ids(before, after):
         created = {
-            a_item.id: a_item for a_item in afters
-            if a_item.uuid not in (b_item.uuid for b_item in befores)
+            a_item.id: a_item for a_item in after
+            if a_item.uuid not in (b_item.uuid for b_item in before)
         }

         modified = {
-            a_item.id: a_item for a_item in afters
+            a_item.id: a_item for a_item in after
             if a_item.as_dict() not in (
-                b_items.as_dict() for b_items in befores)
+                b_items.as_dict() for b_items in before)
         }

         unmodified = {
-            a_item.id: a_item for a_item in afters
+            a_item.id: a_item for a_item in after
             if a_item.as_dict() in (
-                b_items.as_dict() for b_items in befores)
+                b_items.as_dict() for b_items in before)
         }

         return created, modified, unmodified
@@ -459,24 +459,24 @@ class TestObjectVersions(test_base.TestCase):
 class TestObjectSerializer(test_base.TestCase):

     def test_object_serialization(self):
-        ser = base.WatcherObjectSerializer()
+        obj_ser = base.WatcherObjectSerializer()
         obj = MyObj(self.context)
-        primitive = ser.serialize_entity(self.context, obj)
+        primitive = obj_ser.serialize_entity(self.context, obj)
         self.assertIn('watcher_object.name', primitive)
-        obj2 = ser.deserialize_entity(self.context, primitive)
+        obj2 = obj_ser.deserialize_entity(self.context, primitive)
         self.assertIsInstance(obj2, MyObj)
         self.assertEqual(self.context, obj2._context)

     def test_object_serialization_iterables(self):
-        ser = base.WatcherObjectSerializer()
+        obj_ser = base.WatcherObjectSerializer()
         obj = MyObj(self.context)
         for iterable in (list, tuple, set):
             thing = iterable([obj])
-            primitive = ser.serialize_entity(self.context, thing)
+            primitive = obj_ser.serialize_entity(self.context, thing)
             self.assertEqual(1, len(primitive))
             for item in primitive:
                 self.assertFalse(isinstance(item, base.WatcherObject))
-            thing2 = ser.deserialize_entity(self.context, primitive)
+            thing2 = obj_ser.deserialize_entity(self.context, primitive)
             self.assertEqual(1, len(thing2))
             for item in thing2:
                 self.assertIsInstance(item, MyObj)
@@ -485,7 +485,7 @@ class TestObjectSerializer(test_base.TestCase):
     def _test_deserialize_entity_newer(self, obj_version, backported_to,
                                        mock_indirection_api,
                                        my_version='1.6'):
-        ser = base.WatcherObjectSerializer()
+        obj_ser = base.WatcherObjectSerializer()
         mock_indirection_api.object_backport_versions.return_value \
             = 'backported'

@@ -496,7 +496,7 @@
         obj = MyTestObj(self.context)
         obj.VERSION = obj_version
         primitive = obj.obj_to_primitive()
-        result = ser.deserialize_entity(self.context, primitive)
+        result = obj_ser.deserialize_entity(self.context, primitive)
         if backported_to is None:
             self.assertFalse(
                 mock_indirection_api.object_backport_versions.called)