Replication v2.1 (Cheesecake)

This change focuses the replication work on a specific use case
and eliminates some of the ambiguity of earlier versions.

Additionally, this implementation addresses the needs of devices
that replicate at the level of the whole backend device or of
pools.

Use case:
  DR scenario, where a storage device is rendered inoperable.
  This implementation preserves user data for volumes whose
  volume type is replication-enabled (see the sketch after
  this use case block).

  The goal is NOT to make failures completely transparent,
  but instead to preserve data access while an admin
  rebuilds and recovers the cloud.
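
  As a concrete illustration of the volume-type hook, here is a
  minimal sketch, assuming python-cinderclient and placeholder
  credentials, of creating a volume type whose extra spec marks it
  replication-enabled ('<is> True' is Cinder's boolean extra-spec
  convention):

    from cinderclient import client

    # Endpoint and credentials below are placeholders (assumed,
    # not part of this commit).
    cinder = client.Client('2', 'admin', 'password', 'admin',
                           'http://keystone:5000/v2.0')

    # Create a volume type and tag it with the replication
    # extra spec so the scheduler only places its volumes on
    # backends reporting the capability.
    vtype = cinder.volume_types.create('replicated')
    vtype.set_keys({'replication_enabled': '<is> True'})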

It's very important to note that we're no longer interested in
dealing with replication in Cinder at a per-volume level.  The
concept of having "some" volumes fail over while "others" are left
behind proved not only overly complex and difficult to implement;
we also never identified a concrete use case in which some volumes
would stay accessible on a primary while others were moved to and
accessed via a secondary.

In this model, replication is host/backend based.  When you fail
over, you fail over an entire backend.  We heavily leverage
existing resources, specifically services and capabilities.
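
For flavor, here is a hedged skeleton of the per-driver entry point
this model implies. The method name and return shape follow the
replication v2.1 interface; the body is illustrative only, not this
commit's code, and self.replication_targets is an assumed attribute:

  def failover_host(self, context, volumes, secondary_id=None):
      """Fail this entire backend over to a replication target."""
      if secondary_id is None:
          # Assumed attribute: pick the first configured target
          # when the admin doesn't name one explicitly.
          secondary_id = self.replication_targets[0]['backend_id']

      volume_updates = []
      for vol in volumes:
          if vol.get('replication_status') == 'enabled':
              # Replicated volumes stay usable on the secondary.
              updates = {'replication_status': 'failed-over'}
          else:
              # Non-replicated volumes have no copy on the target.
              updates = {'status': 'error'}
          volume_updates.append({'volume_id': vol['id'],
                                 'updates': updates})

      # The volume manager records the returned id as the
      # service's active backend.
      return secondary_id, volume_updates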

Implements: blueprint replication-update

Change-Id: If862bcd18515098639f94a8294a8e44e1358c52a
commit 106c14a84b
parent 0f5e61bf6e
Author: John Griffith
Date:   2016-02-03 16:11:58 +00:00
27 changed files with 573 additions and 1732 deletions

@@ -1405,6 +1405,7 @@ class SolidFireDriver(san.SanISCSIDriver):
         data["driver_version"] = self.VERSION
         data["storage_protocol"] = 'iSCSI'
         data['consistencygroup_support'] = True
+        data['replication_enabled'] = True
         data['total_capacity_gb'] = (
             float(results['maxProvisionedSpace'] / units.Gi))
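
The one stat added above is what lets the scheduler steer
replication-enabled volume types onto this backend.  A rough,
hypothetical illustration of that matching (not Cinder's actual
capabilities-filter code):

  def backend_supports_replication(stats, extra_specs):
      # '<is> True' is the boolean extra-spec syntax; the real
      # capabilities filter parses it, here we just compare the
      # string directly against the reported stat.
      wants = extra_specs.get('replication_enabled') == '<is> True'
      return stats.get('replication_enabled', False) if wants else True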