[pre-commit] fix typos and configure codespell

This change enables codespell in pre-commit and
fixes the existing typos.

A follow-up commit will enable this in tox and CI.

Change-Id: I0a11bcd5a88247a48d3437525fc8a3cb3cdd4e58
Sean Mooney 2024-10-22 19:31:14 +01:00
parent 4d5022ab94
commit 5f79ab87c7
39 changed files with 71 additions and 65 deletions

View File

@@ -41,14 +41,11 @@ repos:
     hooks:
       - id: autopep8
         files: '^.*\.py$'
-  # FIXME(sean-k-mooney): we have many typos and some false
-  # positives that need to be added to the dictionary
-  # correct this in a followup change
-  # - repo: https://github.com/codespell-project/codespell
-  #   rev: v2.3.0
-  #   hooks:
-  #     - id: codespell
-  #       args: ['--ignore-words=doc/dictionary.txt']
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.3.0
+    hooks:
+      - id: codespell
+        args: ['--ignore-words=doc/dictionary.txt']
   # FIXME(sean-k-mooney): we have many sphinx issues fix them
   # in a separate commit to make it easier to review
   # - repo: https://github.com/sphinx-contrib/sphinx-lint
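Once the hook block above is active, pre-commit runs codespell on staged files. Conceptually the check is a lookup of each word against a dictionary of known misspellings; the sketch below is illustrative only (the typo table is taken from fixes in this change, not from codespell's real dictionary, and codespell's actual matching is more sophisticated):

```python
# Minimal sketch of a dictionary-based spell check in the spirit of
# codespell. The TYPOS table is illustrative, built from misspellings
# fixed in this commit; it is not codespell's real dictionary.
import re

TYPOS = {
    "agaist": "against",
    "softwares": "software",
    "depcrecated": "deprecated",
    "intalled": "installed",
}

def find_typos(text):
    """Yield (typo, suggestion) for each known misspelling in text."""
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in TYPOS:
            yield word, TYPOS[word]

print(list(find_typos("# /detail should only work agaist collections")))
# [('agaist', 'against')]
```

An `--ignore-words` file, like the `doc/dictionary.txt` added below, simply removes entries from that table so project-specific terms stop being flagged.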

View File

@@ -12,7 +12,7 @@ Here are some examples of ``Goals``:
 - minimize the energy consumption
 - minimize the number of compute nodes (consolidation)
 - balance the workload among compute nodes
-- minimize the license cost (some softwares have a licensing model which is
+- minimize the license cost (some software have a licensing model which is
   based on the number of sockets or cores where the software is deployed)
 - find the most appropriate moment for a planned maintenance on a
   given group of host (which may be an entire availability zone):

View File

@@ -0,0 +1,4 @@
+thirdparty
+assertin
+notin

View File

@@ -52,7 +52,7 @@ class BaseWatcherDirective(rst.Directive):
             obj_raw_docstring = obj.__init__.__doc__
         if not obj_raw_docstring:
-            # Raise a warning to make the tests fail wit doc8
+            # Raise a warning to make the tests fail with doc8
             raise self.error("No docstring available for %s!" % obj)
         obj_docstring = inspect.cleandoc(obj_raw_docstring)

View File

@@ -34,7 +34,7 @@ own sections. However, the base *GMR* consists of several sections:
 Package
   Shows information about the package to which this process belongs, including
-  version informations.
+  version information.
 Threads
   Shows stack traces and thread ids for each of the threads within this

View File

@@ -285,7 +285,7 @@ Audit and interval (in case of CONTINUOUS type). There is three types of Audit:
 ONESHOT, CONTINUOUS and EVENT. ONESHOT Audit is launched once and if it
 succeeded executed new action plan list will be provided; CONTINUOUS Audit
 creates action plans with specified interval (in seconds or cron format, cron
-inteval can be used like: `*/5 * * * *`), if action plan
+interval can be used like: `*/5 * * * *`), if action plan
 has been created, all previous action plans get CANCELLED state;
 EVENT audit is launched when receiving webhooks API.

View File

@@ -221,7 +221,7 @@ workflow engine can halt or take other actions while the action plan is being
 executed based on the success or failure of individual actions. However, the
 base workflow engine simply uses these notifies to store the result of
 individual actions in the database. Additionally, since taskflow uses a graph
-flow if any of the tasks would fail all childs of this tasks not be executed
+flow if any of the tasks would fail all children of this tasks not be executed
 while ``do_revert`` will be triggered for all parents.
 .. code-block:: python

View File

@@ -48,7 +48,7 @@
 logging configuration to any other existing logging
 options. Please see the Python logging module documentation
 for details on logging configuration files. The log-config
-name for this option is depcrecated.
+name for this option is deprecated.
 **--log-format FORMAT**
 A logging.Formatter log message format string which may use any

View File

@@ -4,4 +4,4 @@ features:
     will standardize interactions with scoring engines
     through the common API. It is possible to use the
     scoring engine by different Strategies, which
-    improve the code and data model re-use.
+    improve the code and data model reuse.

View File

@@ -5,5 +5,5 @@ features:
     failure. The amount of failures allowed before giving up and the time before
     reattempting are configurable. The `api_call_retries` and
     `api_query_timeout` parameters in the `[collector]` group can be used to
-    adjust these paremeters. 10 retries with a 1 second time in between
+    adjust these parameters. 10 retries with a 1 second time in between
     reattempts is the default.

View File

@@ -3,6 +3,6 @@ features:
     Watcher starts to support API microversions since Stein cycle. From now
     onwards all API changes should be made with saving backward compatibility.
     To specify API version operator should use OpenStack-API-Version
-    HTTP header. If operator wants to know the mininum and maximum supported
+    HTTP header. If operator wants to know the minimum and maximum supported
     versions by API, he/she can access /v1 resource and Watcher API will
     return appropriate headers in response.

View File

@@ -7,7 +7,7 @@ prelude: >
 features:
   - |
     A new threadpool for the decision engine that contributors can use to
-    improve the performance of many operations, primarily I/O bound onces.
+    improve the performance of many operations, primarily I/O bound ones.
     The amount of workers used by the decision engine threadpool can be
     configured to scale according to the available infrastructure using
     the `watcher_decision_engine.max_general_workers` config option.

View File

@@ -13,7 +13,7 @@ features:
     * disk_gb_reserved: The amount of disk a node has reserved for its own use.
     * disk_ratio: Disk allocation ratio.
-    We also add some new propeties:
+    We also add some new properties:
     * vcpu_capacity: The amount of vcpu, take allocation ratio into account,
       but do not include reserved.

View File

@@ -4,5 +4,5 @@ features:
     Whenever a Watcher object is created, updated or deleted, a versioned
     notification will, if it's relevant, be automatically sent to notify in order
     to allow an event-driven style of architecture within Watcher. Moreover, it
-    will also give other services and/or 3rd party softwares (e.g. monitoring
+    will also give other services and/or 3rd party software (e.g. monitoring
     solutions or rules engines) the ability to react to such events.

View File

@@ -1,3 +1,3 @@
 ---
 features:
-  - Add a service supervisor to watch Watcher deamons.
+  - Add a service supervisor to watch Watcher daemons.

View File

@@ -109,3 +109,8 @@ watcher_cluster_data_model_collectors =
 compute = watcher.decision_engine.model.collector.nova:NovaClusterDataModelCollector
 storage = watcher.decision_engine.model.collector.cinder:CinderClusterDataModelCollector
 baremetal = watcher.decision_engine.model.collector.ironic:BaremetalClusterDataModelCollector
+
+[codespell]
+skip = *.po,*.js,*.css,*.html,*.svg,HACKING.py,*hacking*,*build*,*_static*,doc/dictionary.txt,*.pyc,*.inv,*.gz,*.jpg,*.png,*.vsd,*.graffle,*.json
+count =
+quiet-level = 4
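The `skip` option above is a comma-separated list of glob patterns for files codespell should not scan. As a rough sketch of how such patterns exclude paths (using Python's `fnmatch`; codespell's own matching may differ in detail, and the pattern list here is a subset of the one above):

```python
# Sketch of glob-based path exclusion, in the spirit of the
# [codespell] skip option above. Subset of patterns for illustration.
from fnmatch import fnmatch

SKIP = "*.po,*.js,*.css,*.html,*.svg,doc/dictionary.txt,*.pyc,*.inv,*.gz".split(",")

def is_skipped(path):
    """Return True if the path matches any skip pattern."""
    return any(fnmatch(path, pattern) for pattern in SKIP)

print(is_skipped("doc/dictionary.txt"))   # True: the ignore-words file itself
print(is_skipped("watcher/api/app.py"))   # False: Python sources are scanned
```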

View File

@@ -334,7 +334,7 @@ class ActionsController(rest.RestController):
         policy.enforce(context, 'action:detail',
                        action='action:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "actions":
             raise exception.HTTPNotFound

View File

@@ -433,7 +433,7 @@ class ActionPlansController(rest.RestController):
         policy.enforce(context, 'action_plan:detail',
                        action='action_plan:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "action_plans":
             raise exception.HTTPNotFound

View File

@@ -570,7 +570,7 @@ class AuditsController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'audit:detail',
                        action='audit:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "audits":
             raise exception.HTTPNotFound

View File

@@ -576,7 +576,7 @@ class AuditTemplatesController(rest.RestController):
         policy.enforce(context, 'audit_template:detail',
                        action='audit_template:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "audit_templates":
             raise exception.HTTPNotFound

View File

@@ -24,7 +24,7 @@ Here are some examples of :ref:`Goals <goal_definition>`:
 - minimize the energy consumption
 - minimize the number of compute nodes (consolidation)
 - balance the workload among compute nodes
-- minimize the license cost (some softwares have a licensing model which is
+- minimize the license cost (some software have a licensing model which is
   based on the number of sockets or cores where the software is deployed)
 - find the most appropriate moment for a planned maintenance on a
   given group of host (which may be an entire availability zone):
@@ -217,7 +217,7 @@ class GoalsController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'goal:detail',
                        action='goal:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "goals":
             raise exception.HTTPNotFound

View File

@@ -237,7 +237,7 @@ class ServicesController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'service:detail',
                        action='service:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "services":
             raise exception.HTTPNotFound

View File

@@ -284,7 +284,7 @@ class StrategiesController(rest.RestController):
         context = pecan.request.context
         policy.enforce(context, 'strategy:detail',
                        action='strategy:detail')
-        # NOTE(lucasagomes): /detail should only work agaist collections
+        # NOTE(lucasagomes): /detail should only work against collections
         parent = pecan.request.path.split('/')[:-1][-1]
         if parent != "strategies":
             raise exception.HTTPNotFound

View File

@@ -72,7 +72,7 @@ class KeystoneHelper(object):
                 message=(_("Project not Found: %s") % name_or_id))
         if len(projects) > 1:
             raise exception.Invalid(
-                messsage=(_("Project name seems ambiguous: %s") %
+                message=(_("Project name seems ambiguous: %s") %
                          name_or_id))
         return projects[0]

View File

@@ -47,7 +47,7 @@ APPLIER_OPTS = [
     cfg.BoolOpt('rollback_when_actionplan_failed',
                 default=False,
                 help='If set True, the failed actionplan will rollback '
-                     'when executing. Defaule value is False.'),
+                     'when executing. Default value is False.'),
 ]

View File

@@ -52,8 +52,8 @@ class DefaultStrategySelector(base.BaseSelector):
         else:
             available_strategies = self.strategy_loader.list_available()
             available_strategies_for_goal = list(
-                key for key, strat in available_strategies.items()
-                if strat.get_goal_name() == self.goal_name)
+                key for key, strategy in available_strategies.items()
+                if strategy.get_goal_name() == self.goal_name)
             if not available_strategies_for_goal:
                 raise exception.NoAvailableStrategyForGoal(

View File

@@ -32,7 +32,7 @@ class HostMaintenance(base.HostMaintenanceBaseStrategy):
     *Description*
     It is a migration strategy for one compute node maintenance,
-    without having the user's application been interruptted.
+    without having the user's application been interrupted.
     If given one backup node, the strategy will firstly
     migrate all instances from the maintenance node to
     the backup node. If the backup node is not provided,

View File

@@ -563,7 +563,7 @@ class ZoneMigration(base.ZoneMigrationBaseStrategy):
         filter_actions = self.get_priority_filter_list()
         LOG.debug(filter_actions)
-        # apply all filters set in input prameter
+        # apply all filters set in input parameter
         for action in list(reversed(filter_actions)):
             LOG.debug(action)
             result = action.apply_filter(result)
@@ -795,7 +795,7 @@ class ComputeHostSortFilter(SortMovingToFrontFilter):
         :param item: instance object
         :param sort_key: compute host name
-        :returns: true: compute name on which intance is equals sort_key
+        :returns: true: compute name on where instance host equals sort_key
                   false: otherwise
         """
@@ -823,7 +823,7 @@ class StorageHostSortFilter(SortMovingToFrontFilter):
         :param item: volume object
         :param sort_key: storage pool name
-        :returns: true: pool name on which intance is equals sort_key
+        :returns: true: pool name on where instance.host equals sort_key
                   false: otherwise
         """

View File

@@ -59,7 +59,7 @@ class DecisionEngineThreadPool(object, metaclass=service.Singleton):
         :param futures: list, set or dictionary of futures
         :type futures: list :py:class:`futurist.GreenFuture`
-        :param fn: function to execute upon the future finishing exection
+        :param fn: function to execute upon the future finishing execution
         :param args: arguments for the function
         :param kwargs: amount of arguments for the function
         """
@@ -83,7 +83,7 @@ class DecisionEngineThreadPool(object, metaclass=service.Singleton):
         :param futures: list, set or dictionary of futures
         :type futures: list :py:class:`futurist.GreenFuture`
-        :param fn: function to execute upon the future finishing exection
+        :param fn: function to execute upon the future finishing execution
         :param args: arguments for the function
         :param kwargs: amount of arguments for the function
         """

View File

@@ -536,7 +536,7 @@ class TestActionPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)

View File

@@ -635,7 +635,7 @@ class TestActionPlanPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)

View File

@@ -785,7 +785,7 @@ class TestAuditTemplatePolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)

View File

@@ -1058,7 +1058,7 @@ class TestAuditPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)

View File

@@ -58,7 +58,7 @@ class TestDataModelPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)

View File

@@ -255,7 +255,7 @@ class TestStrategyPolicyEnforcement(api_base.FunctionalTest):
         self.policy.set_rules({
             "admin_api": "(role:admin or role:administrator)",
             "default": "rule:admin_api",
-            rule: "rule:defaut"})
+            rule: "rule:default"})
         response = func(*arg, **kwarg)
         self.assertEqual(HTTPStatus.FORBIDDEN, response.status_int)
         self.assertEqual('application/json', response.content_type)

View File

@@ -38,7 +38,7 @@ class TestMaasNode(base.TestCase):
     def test_get_power_state(self):
         if not maas_enum:
-            self.skipTest("python-libmaas not intalled.")
+            self.skipTest("python-libmaas not installed.")
         self._wrapped_node.query_power_state.side_effect = (
             maas_enum.PowerState.ON,

View File

@@ -105,10 +105,10 @@ class FakeGnocchiMetrics(object):
     def get_compute_node_cpu_util(self, resource, period,
                                   aggregate, granularity):
-        """Calculates node utilization dynamicaly.
+        """Calculates node utilization dynamically.
         node CPU utilization should consider
-        and corelate with actual instance-node mappings
+        and correlate with actual instance-node mappings
         provided within a cluster model.
         Returns relative node CPU utilization <0, 100>.
         :param r_id: resource id

View File

@@ -76,22 +76,22 @@ class TestSyncer(base.DbTestCase):
         self.addCleanup(p_strategies.stop)
     @staticmethod
-    def _find_created_modified_unmodified_ids(befores, afters):
+    def _find_created_modified_unmodified_ids(before, after):
         created = {
-            a_item.id: a_item for a_item in afters
-            if a_item.uuid not in (b_item.uuid for b_item in befores)
+            a_item.id: a_item for a_item in after
+            if a_item.uuid not in (b_item.uuid for b_item in before)
         }
         modified = {
-            a_item.id: a_item for a_item in afters
+            a_item.id: a_item for a_item in after
             if a_item.as_dict() not in (
-                b_items.as_dict() for b_items in befores)
+                b_items.as_dict() for b_items in before)
         }
         unmodified = {
-            a_item.id: a_item for a_item in afters
+            a_item.id: a_item for a_item in after
             if a_item.as_dict() in (
-                b_items.as_dict() for b_items in befores)
+                b_items.as_dict() for b_items in before)
         }
         return created, modified, unmodified

View File

@@ -459,24 +459,24 @@ class TestObjectVersions(test_base.TestCase):
 class TestObjectSerializer(test_base.TestCase):
     def test_object_serialization(self):
-        ser = base.WatcherObjectSerializer()
+        obj_ser = base.WatcherObjectSerializer()
         obj = MyObj(self.context)
-        primitive = ser.serialize_entity(self.context, obj)
+        primitive = obj_ser.serialize_entity(self.context, obj)
         self.assertIn('watcher_object.name', primitive)
-        obj2 = ser.deserialize_entity(self.context, primitive)
+        obj2 = obj_ser.deserialize_entity(self.context, primitive)
         self.assertIsInstance(obj2, MyObj)
         self.assertEqual(self.context, obj2._context)
     def test_object_serialization_iterables(self):
-        ser = base.WatcherObjectSerializer()
+        obj_ser = base.WatcherObjectSerializer()
         obj = MyObj(self.context)
         for iterable in (list, tuple, set):
             thing = iterable([obj])
-            primitive = ser.serialize_entity(self.context, thing)
+            primitive = obj_ser.serialize_entity(self.context, thing)
             self.assertEqual(1, len(primitive))
             for item in primitive:
                 self.assertFalse(isinstance(item, base.WatcherObject))
-            thing2 = ser.deserialize_entity(self.context, primitive)
+            thing2 = obj_ser.deserialize_entity(self.context, primitive)
             self.assertEqual(1, len(thing2))
             for item in thing2:
                 self.assertIsInstance(item, MyObj)
@@ -485,7 +485,7 @@ class TestObjectSerializer(test_base.TestCase):
     def _test_deserialize_entity_newer(self, obj_version, backported_to,
                                        mock_indirection_api,
                                        my_version='1.6'):
-        ser = base.WatcherObjectSerializer()
+        obj_ser = base.WatcherObjectSerializer()
         mock_indirection_api.object_backport_versions.return_value \
             = 'backported'
@@ -496,7 +496,7 @@ class TestObjectSerializer(test_base.TestCase):
         obj = MyTestObj(self.context)
         obj.VERSION = obj_version
         primitive = obj.obj_to_primitive()
-        result = ser.deserialize_entity(self.context, primitive)
+        result = obj_ser.deserialize_entity(self.context, primitive)
         if backported_to is None:
             self.assertFalse(
                 mock_indirection_api.object_backport_versions.called)