Files
distcloud/distributedcloud/dcorch/common/manager.py
Victor Romano 2c4344918e Add dcagent support for RegionOne dcorch audit data
Previously, dcorch's interaction with dcagent was limited to gathering
platform information in a single request. With this commit, dcorch can
now send dcagent the RegionOne audit data so the sync status of the
desired endpoint is determined inside the subcloud. For both the iuser
and fernet repo endpoints, the response from dcagent is only the sync
status, either "in-sync" or "out-of-sync". For certs, dcagent returns
a dict with each master cert signature and its respective sync status.
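
As a rough illustration only (the key names below are assumptions, not
the actual dcagent payload), the audit response described above could
look like:

    # Illustrative sketch; field names are assumed, not taken from the
    # dcagent API.
    audit_response = {
        "iuser": "in-sync",
        "fernet_repo": "out-of-sync",
        "certificates": {
            "<master-cert-signature-1>": "in-sync",
            "<master-cert-signature-2>": "out-of-sync",
        },
    }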

Test plan:
  - PASS: Manage a subcloud and verify the sync status of the
          resources is determined correctly and that they are synced
          if a difference is found.
  - PASS: Rotate the fernet key in the system controller and verify
          the periodic audit finds the discrepancy and creates a sync
          job.
  - PASS: Install a new certificate in the system controller without
          specifying "--os-region-name SystemController" and verify
          the periodic audit finds the discrepancy and creates a sync
          job.
  - PASS: Delete the certificate in the system controller and verify
          the periodic audit finds the discrepancy and deletes it
          from the subcloud.
  - PASS: Install a new certificate in the system controller with
          dcorch proxy ("--os-region-name SystemController") and
          verify the resource is created in the subcloud.
  - PASS: Delete the certificate in the system controller with
          dcorch proxy ("--os-region-name SystemController") and
          verify the resource is deleted from the subcloud.
  - PASS: Install a certificate directly in the subcloud and
          verify the resource is left untouched by the periodic audit
          and that other certificates are correctly flagged as
          in-sync or out-of-sync.

Story: 2011106
Task: 50904

Change-Id: I814fe3131606e959a769aea04d51d7fe5e8f5df9
Signed-off-by: Victor Romano <victor.gluzromano@windriver.com>
2024-08-29 17:40:39 -03:00

112 lines
4.0 KiB
Python

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Copied and modified from Nova manager.py

"""Base Manager class.

Managers are responsible for a certain aspect of the system. It is a logical
grouping of code relating to a portion of the system. In general other
components should be using the manager to make changes to the components that
it is responsible for.

For example, other components that need to deal with volumes in some way,
should do so by calling methods on the VolumeManager instead of directly
changing fields in the database. This allows us to keep all of the code
relating to volumes in the same place.

We have adopted a basic strategy of Smart managers and dumb data, which means
rather than attaching methods to data objects, components should call manager
methods that act on the data.

Methods on managers that can be executed locally should be called directly. If
a particular method must execute on a remote host, this should be done via rpc
to the service that wraps the manager.

Managers should be responsible for most of the db access, and
non-implementation specific data. Anything implementation specific that can't
be generalized should be done by the Driver.

Managers will often provide methods for initial setup of a host or periodic
tasks to a wrapping service.

This module provides Manager, a base class for managers.
"""

from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import periodic_task

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class PeriodicTasks(periodic_task.PeriodicTasks):
    def __init__(self):
        super(PeriodicTasks, self).__init__(CONF)


class Manager(PeriodicTasks):

    def __init__(self, host=None, service_name="undefined"):
        if not host:
            host = cfg.CONF.host
        self.host = host
        self.service_name = service_name
        # self.notifier = rpc.get_notifier(self.service_name, self.host)
        self.additional_endpoints = []
        super(Manager, self).__init__()

    def periodic_tasks(self, context, raise_on_error=False):
        """Tasks to be run at a periodic interval."""
        return self.run_periodic_tasks(context, raise_on_error=raise_on_error)

    def init_host(self):
        """init_host

        Hook to do additional manager initialization when one requests
        the service be started. This is called before any service record
        is created.

        Child classes should override this method.
        """
        pass

    def cleanup_host(self):
        """cleanup_host

        Hook to do cleanup work when the service shuts down.

        Child classes should override this method.
        """
        pass

    def pre_start_hook(self):
        """pre_start_hook

        Hook to provide the manager the ability to do additional
        start-up work before any RPC queues/consumers are created. This is
        called after other initialization has succeeded and a service
        record is created.

        Child classes should override this method.
        """
        pass

    def post_start_hook(self):
        """post_start_hook

        Hook to provide the manager the ability to do additional
        start-up work immediately after a service creates RPC consumers
        and starts 'running'.

        Child classes should override this method.
        """
        pass
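
The base class above is typically consumed by subclassing it: a concrete
manager overrides the hooks it needs and registers periodic work with the
oslo_service periodic_task decorator, which periodic_tasks() then dispatches
through run_periodic_tasks(). A minimal sketch, assuming a hypothetical
ExampleManager and spacing value that are not part of this file, and
inferring the dcorch.common.manager import path from the file location:

    from oslo_service import periodic_task

    from dcorch.common import manager


    class ExampleManager(manager.Manager):
        # Illustrative subclass; not part of dcorch.

        def init_host(self):
            # One-time setup before the service record is created.
            pass

        @periodic_task.periodic_task(spacing=600)
        def _example_audit(self, context):
            # Runs roughly every 600 seconds when the wrapping service
            # calls periodic_tasks(context).
            pass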