Tutorial
========

This tutorial is intended as an introduction to working with os-brick.

Prerequisites
-------------

Before we start, make sure that you have the os-brick distribution :doc:`installed </install/index>`. In the Python shell, the following should run without raising an exception:

>>> import os_brick

Configuration
-------------

Some os-brick connectors use file locks to prevent concurrent access to critical sections of the code.

These file locks rely on the oslo.concurrency ``lockutils`` module and require the ``lock_path`` option to be configured with the path where locks should be created.
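The underlying mechanism can be sketched with the standard library (a simplified illustration using ``fcntl`` advisory locks; the real implementation is oslo.concurrency's ``lockutils``, and the directory and lock names below are made up):

.. code-block:: python

   import fcntl
   import os

   def hold_file_lock(lock_path, name):
       """Create (or reuse) a file under lock_path and take an
       exclusive advisory lock on it; other processes trying to
       take the same lock block until it is released."""
       os.makedirs(lock_path, exist_ok=True)
       f = open(os.path.join(lock_path, name), "a")
       fcntl.flock(f, fcntl.LOCK_EX)
       return f

   # Illustrative path and lock name, not os-brick defaults.
   lock = hold_file_lock("/tmp/os-brick-demo-locks", "connect_volume")
   # ... critical section: only one holder at a time ...
   lock.close()  # closing the file releases the lock

Because the lock lives in the filesystem, every process that shares the same ``lock_path`` serializes on the same lock file, which is why the directory the locks are created in matters.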

os-brick can use a specific directory just for its own locks, or share the directory of the service using os-brick.

The os-brick specific configuration option is ``[os_brick]/lock_path``; if it is left undefined, os-brick falls back to the value of ``[oslo_concurrency]/lock_path``.
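For example, a deployment could keep the os-brick locks separate from the service's own locks (the directory paths below are illustrative, not defaults):

.. code-block:: ini

   [oslo_concurrency]
   # Locks created by the service itself (e.g. Cinder)
   lock_path = /var/lib/cinder/tmp

   [os_brick]
   # Locks created by os-brick; if this option is omitted,
   # os-brick uses [oslo_concurrency]/lock_path instead
   lock_path = /var/lib/os-brick/tmp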

Setup
-----

Once os_brick has been loaded it needs to be initialized, which is done by calling the ``os_brick.setup`` method with the oslo.config configuration.

It is important that the call to the ``setup`` method happens *after* oslo.config has been properly initialized.

.. code-block:: python

   import sys

   from oslo_config import cfg

   import os_brick
   from cinder import version

   CONF = cfg.CONF

   def main():
       CONF(sys.argv[1:], project='cinder',
            version=version.version_string())
       os_brick.setup(CONF)

Fetch all of the initiator information from the host
----------------------------------------------------

The following example shows how to collect the initiator information that is needed to export a volume to this host.

.. code-block:: python

   from oslo_config import cfg

   import os_brick
   from os_brick.initiator import connector

   CONF = cfg.CONF

   os_brick.setup(CONF)

   # What helper do you want to use to get root access?
   root_helper = "sudo"
   # The IP address of the host you are running on
   my_ip = "192.168.1.1"
   # Do you want to support multipath connections?
   multipath = True
   # Do you want to enforce that the multipath daemon is running?
   enforce_multipath = False
   initiator = connector.get_connector_properties(root_helper, my_ip,
                                                  multipath,
                                                  enforce_multipath)
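The returned value is a plain dictionary of host properties, which a service such as Cinder then passes to the storage backend when exporting a volume. An illustrative shape of that dictionary (all values below are made-up examples, not output captured from a real host):

.. code-block:: python

   # Illustrative shape of the dictionary returned by
   # connector.get_connector_properties() on an iSCSI-capable
   # Linux host; keys vary with the host's storage capabilities.
   example_properties = {
       "platform": "x86_64",
       "os_type": "linux",
       "ip": "192.168.1.1",
       "host": "compute-1",
       "multipath": True,
       "initiator": "iqn.1994-05.com.redhat:example",
   }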