Initial Commit

chaithanyak 2 years ago
commit
25e93d0f2a

+ 202
- 0
LICENSE

@@ -0,0 +1,202 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{}"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {yyyy} {name of copyright owner}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+

+ 4
- 0
README.md

@@ -0,0 +1,4 @@
+fuel-plugin-cinder-kaminario
+============
+
+Plugin description

+ 15
- 0
components.yaml

@@ -0,0 +1,15 @@
+- name: 'storage:block:backend:kaminario'
+  label: 'Kaminario'
+  description: 'Cinder with Kaminario backend'
+  compatible:
+    - name: storage:block:lvm
+    - name: storage:block:ceph
+    - name: storage:object:ceph
+    - name: storage:ephemeral:ceph
+    - name: storage:image:ceph
+    - name: hypervisor:qemu
+    - name: network:neutron:core:ml2
+    - name: network:neutron:ml2:vlan
+    - name: network:neutron:ml2:tun
+  incompatible:
+    - name: hypervisor:vmware

+ 9
- 0
deployment_scripts/puppet/manifests/cinder_kaminario.pp

@@ -0,0 +1,9 @@
+notice('MODULAR: cinder_kaminario')
+
+
+class { 'kaminario::driver': }->
+class { 'kaminario::krest': }->
+class { 'kaminario::config': }~> Exec[cinder_volume]
+
+exec {'cinder_volume':
+  command => '/usr/sbin/service cinder-volume restart',}

+ 8
- 0
deployment_scripts/puppet/manifests/cinder_parser.pp

@@ -0,0 +1,8 @@
+ini_setting { 'parser':
+    ensure  => present,
+    path    => '/etc/puppet/puppet.conf',
+    section => 'main',
+    setting => 'parser',
+    value   => 'future',
+  }
+

+ 1
- 0
deployment_scripts/puppet/manifests/cinder_type.pp

@@ -0,0 +1 @@
+include kaminario::type

+ 0
- 0
deployment_scripts/puppet/modules/kaminario/files/__init__.py


+ 1292
- 0
deployment_scripts/puppet/modules/kaminario/files/exception.py
File diff suppressed because it is too large


+ 893
- 0
deployment_scripts/puppet/modules/kaminario/files/kaminario_common.py

@@ -0,0 +1,893 @@
+# Copyright (c) 2016 by Kaminario Technologies, Ltd.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+"""Volume driver for Kaminario K2 all-flash arrays."""
+
+import math
+import re
+import threading
+
+import eventlet
+from oslo_config import cfg
+from oslo_log import log as logging
+from oslo_utils import importutils
+from oslo_utils import units
+from oslo_utils import versionutils
+import requests
+import six
+
+import cinder
+from cinder import exception
+from cinder.i18n import _, _LE, _LW, _LI
+from cinder.objects import fields
+from cinder import utils
+from cinder.volume.drivers.san import san
+from cinder.volume import utils as vol_utils
+
+krest = importutils.try_import("krest")
+
+K2_MIN_VERSION = '2.2.0'
+K2_LOCK_PREFIX = 'Kaminario'
+MAX_K2_RETRY = 5
+LOG = logging.getLogger(__name__)
+
+kaminario1_opts = [
+    cfg.StrOpt('kaminario_nodedup_substring',
+               default='K2-nodedup',
+               help="If volume-type name contains this substring "
+                    "nodedup volume will be created, otherwise "
+                    "dedup volume will be created.",
+               deprecated_for_removal=True,
+               deprecated_reason="This option is deprecated in favour of "
+                                 "'kaminario:thin_prov_type' in extra-specs "
+                                 "and will be removed in the next release.")]
+kaminario2_opts = [
+    cfg.BoolOpt('auto_calc_max_oversubscription_ratio',
+                default=False,
+                help="K2 driver will calculate max_oversubscription_ratio "
+                     "on setting this option as True.")]
+
+CONF = cfg.CONF
+CONF.register_opts(kaminario1_opts)
+
+K2HTTPError = requests.exceptions.HTTPError
+K2_RETRY_ERRORS = ("MC_ERR_BUSY", "MC_ERR_BUSY_SPECIFIC",
+                   "MC_ERR_INPROGRESS", "MC_ERR_START_TIMEOUT")
+
+if krest:
+    class KrestWrap(krest.EndPoint):
+        def __init__(self, *args, **kwargs):
+            self.krestlock = threading.Lock()
+            super(KrestWrap, self).__init__(*args, **kwargs)
+
+        def _should_retry(self, err_code, err_msg):
+            if err_code == 400:
+                for er in K2_RETRY_ERRORS:
+                    if er in err_msg:
+                        LOG.debug("Retry ERROR: %d with status %s",
+                                  err_code, err_msg)
+                        return True
+            return False
+
+        @utils.retry(exception.KaminarioRetryableException,
+                     retries=MAX_K2_RETRY)
+        def _request(self, method, *args, **kwargs):
+            try:
+                LOG.debug("running through the _request wrapper...")
+                self.krestlock.acquire()
+                return super(KrestWrap, self)._request(method,
+                                                       *args, **kwargs)
+            except K2HTTPError as err:
+                err_code = err.response.status_code
+                err_msg = err.response.text
+                if self._should_retry(err_code, err_msg):
+                    raise exception.KaminarioRetryableException(
+                        reason=six.text_type(err_msg))
+                raise
+            finally:
+                self.krestlock.release()
+
+
+def kaminario_logger(func):
+    """Return a function wrapper.
+
+    The wrapper adds log for entry and exit to the function.
+    """
+    def func_wrapper(*args, **kwargs):
+        LOG.debug('Entering %(function)s of %(class)s with arguments: '
+                  ' %(args)s, %(kwargs)s',
+                  {'class': args[0].__class__.__name__,
+                   'function': func.__name__,
+                   'args': args[1:],
+                   'kwargs': kwargs})
+        ret = func(*args, **kwargs)
+        LOG.debug('Exiting %(function)s of %(class)s '
+                  'having return value: %(ret)s',
+                  {'class': args[0].__class__.__name__,
+                   'function': func.__name__,
+                   'ret': ret})
+        return ret
+    return func_wrapper
+
+
+class Replication(object):
+    def __init__(self, config, *args, **kwargs):
+        self.backend_id = config.get('backend_id')
+        self.login = config.get('login')
+        self.password = config.get('password')
+        self.rpo = config.get('rpo')
+
+
+class KaminarioCinderDriver(cinder.volume.driver.ISCSIDriver):
+    VENDOR = "Kaminario"
+    stats = {}
+
+    def __init__(self, *args, **kwargs):
+        super(KaminarioCinderDriver, self).__init__(*args, **kwargs)
+        self.configuration.append_config_values(san.san_opts)
+        self.configuration.append_config_values(kaminario2_opts)
+        self.replica = None
+        self._protocol = None
+        k2_lock_sfx = self.configuration.safe_get('volume_backend_name') or ''
+        self.k2_lock_name = "%s-%s" % (K2_LOCK_PREFIX, k2_lock_sfx)
+
+    def check_for_setup_error(self):
+        if krest is None:
+            msg = _("Unable to import 'krest' python module.")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        else:
+            conf = self.configuration
+            self.client = KrestWrap(conf.san_ip,
+                                    conf.san_login,
+                                    conf.san_password,
+                                    ssl_validate=False)
+            if self.replica:
+                self.target = KrestWrap(self.replica.backend_id,
+                                        self.replica.login,
+                                        self.replica.password,
+                                        ssl_validate=False)
+            v_rs = self.client.search("system/state")
+            if hasattr(v_rs, 'hits') and v_rs.total != 0:
+                ver = v_rs.hits[0].rest_api_version
+                ver_exist = versionutils.convert_version_to_int(ver)
+                ver_min = versionutils.convert_version_to_int(K2_MIN_VERSION)
+                if ver_exist < ver_min:
+                    msg = _("K2 rest api version should be "
+                            ">= %s.") % K2_MIN_VERSION
+                    LOG.error(msg)
+                    raise exception.KaminarioCinderDriverException(reason=msg)
+
+            else:
+                msg = _("K2 rest api version search failed.")
+                LOG.error(msg)
+                raise exception.KaminarioCinderDriverException(reason=msg)
+
+    @kaminario_logger
+    def _check_ops(self):
+        """Ensure that the options we care about are set."""
+        required_ops = ['san_ip', 'san_login', 'san_password']
+        for attr in required_ops:
+            if not getattr(self.configuration, attr, None):
+                raise exception.InvalidInput(reason=_('%s is not set.') % attr)
+
+        replica = self.configuration.safe_get('replication_device')
+        if replica and isinstance(replica, list):
+            replica_ops = ['backend_id', 'login', 'password', 'rpo']
+            for attr in replica_ops:
+                if attr not in replica[0]:
+                    msg = _('replication_device %s is not set.') % attr
+                    raise exception.InvalidInput(reason=msg)
+            self.replica = Replication(replica[0])
+
+    @kaminario_logger
+    def do_setup(self, context):
+        super(KaminarioCinderDriver, self).do_setup(context)
+        self._check_ops()
+
+    @kaminario_logger
+    def create_volume(self, volume):
+        """Volume creation in K2 needs a volume group.
+
+        - create a volume group
+        - create a volume in the volume group
+        """
+        vg_name = self.get_volume_group_name(volume.id)
+        vol_name = self.get_volume_name(volume.id)
+        prov_type = self._get_is_dedup(volume.get('volume_type'))
+        try:
+            LOG.debug("Creating volume group with name: %(name)s, "
+                      "quota: unlimited and dedup_support: %(dedup)s",
+                      {'name': vg_name, 'dedup': prov_type})
+
+            vg = self.client.new("volume_groups", name=vg_name, quota=0,
+                                 is_dedup=prov_type).save()
+            LOG.debug("Creating volume with name: %(name)s, size: %(size)s "
+                      "GB, volume_group: %(vg)s",
+                      {'name': vol_name, 'size': volume.size, 'vg': vg_name})
+            vol = self.client.new("volumes", name=vol_name,
+                                  size=volume.size * units.Mi,
+                                  volume_group=vg).save()
+        except Exception as ex:
+            vg_rs = self.client.search("volume_groups", name=vg_name)
+            if vg_rs.total != 0:
+                LOG.debug("Deleting vg: %s for failed volume in K2.", vg_name)
+                vg_rs.hits[0].delete()
+            LOG.exception(_LE("Creation of volume %s failed."), vol_name)
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+        if self._get_is_replica(volume.volume_type) and self.replica:
+            self._create_volume_replica(volume, vg, vol, self.replica.rpo)
+
+    @kaminario_logger
+    def _create_volume_replica(self, volume, vg, vol, rpo):
+        """Volume replica creation in K2 needs session and remote volume.
+
+        - create a session
+        - create a volume in the volume group
+
+        """
+        session_name = self.get_session_name(volume.id)
+        rsession_name = self.get_rep_name(session_name)
+
+        rvg_name = self.get_rep_name(vg.name)
+        rvol_name = self.get_rep_name(vol.name)
+
+        k2peer_rs = self.client.search("replication/peer_k2arrays",
+                                       mgmt_host=self.replica.backend_id)
+        if hasattr(k2peer_rs, 'hits') and k2peer_rs.total != 0:
+            k2peer = k2peer_rs.hits[0]
+        else:
+            msg = _("Unable to find K2peer in source K2:")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        try:
+            LOG.debug("Creating source session with name: %(sname)s and "
+                      " target session name: %(tname)s",
+                      {'sname': session_name, 'tname': rsession_name})
+            src_ssn = self.client.new("replication/sessions")
+            src_ssn.replication_peer_k2array = k2peer
+            src_ssn.auto_configure_peer_volumes = "False"
+            src_ssn.local_volume_group = vg
+            src_ssn.replication_peer_volume_group_name = rvg_name
+            src_ssn.remote_replication_session_name = rsession_name
+            src_ssn.name = session_name
+            src_ssn.rpo = rpo
+            src_ssn.save()
+            LOG.debug("Creating remote volume with name: %s",
+                      rvol_name)
+            self.client.new("replication/peer_volumes",
+                            local_volume=vol,
+                            name=rvol_name,
+                            replication_session=src_ssn).save()
+            src_ssn.state = "in_sync"
+            src_ssn.save()
+        except Exception as ex:
+            LOG.exception(_LE("Replication for the volume %s has "
+                              "failed."), vol.name)
+            self._delete_by_ref(self.client, "replication/sessions",
+                                session_name, 'session')
+            self._delete_by_ref(self.target, "replication/sessions",
+                                rsession_name, 'remote session')
+            self._delete_by_ref(self.target, "volumes",
+                                rvol_name, 'remote volume')
+            self._delete_by_ref(self.client, "volumes", vol.name, "volume")
+            self._delete_by_ref(self.target, "volume_groups",
+                                rvg_name, "remote vg")
+            self._delete_by_ref(self.client, "volume_groups", vg.name, "vg")
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    def _delete_by_ref(self, device, url, name, msg):
+        rs = device.search(url, name=name)
+        for result in rs.hits:
+            result.delete()
+            LOG.debug("Deleting %(msg)s: %(name)s", {'msg': msg, 'name': name})
+
+    @kaminario_logger
+    def _failover_volume(self, volume):
+        """Promoting a secondary volume to primary volume."""
+        session_name = self.get_session_name(volume.id)
+        rsession_name = self.get_rep_name(session_name)
+        tgt_ssn = self.target.search("replication/sessions",
+                                     name=rsession_name).hits[0]
+        if tgt_ssn.state == 'in_sync':
+            tgt_ssn.state = 'failed_over'
+            tgt_ssn.save()
+            LOG.debug("The target session: %s state is "
+                      "changed to failed_over ", rsession_name)
+
+    @kaminario_logger
+    def failover_host(self, context, volumes, secondary_id=None):
+        """Failover to replication target."""
+        volume_updates = []
+        if secondary_id and secondary_id != self.replica.backend_id:
+            LOG.error(_LE("Kaminario driver received failover_host "
+                          "request, but backend is a non-replicated device"))
+            raise exception.UnableToFailOver(reason=_("Failover requested "
+                                                      "on non-replicated "
+                                                      "backend."))
+        for v in volumes:
+            vol_name = self.get_volume_name(v['id'])
+            rv = self.get_rep_name(vol_name)
+            if self.target.search("volumes", name=rv).total:
+                self._failover_volume(v)
+                volume_updates.append(
+                    {'volume_id': v['id'],
+                     'updates':
+                     {'replication_status':
+                      fields.ReplicationStatus.FAILED_OVER}})
+            else:
+                volume_updates.append({'volume_id': v['id'],
+                                       'updates': {'status': 'error', }})
+
+        return self.replica.backend_id, volume_updates
+
+    @kaminario_logger
+    def create_volume_from_snapshot(self, volume, snapshot):
+        """Create volume from snapshot.
+
+        - search for snapshot and retention_policy
+        - create a view from snapshot and attach view
+        - create a volume and attach volume
+        - copy data from attached view to attached volume
+        - detach volume and view and finally delete view
+        """
+        snap_name = self.get_snap_name(snapshot.id)
+        view_name = self.get_view_name(volume.id)
+        vol_name = self.get_volume_name(volume.id)
+        cview = src_attach_info = dest_attach_info = None
+        rpolicy = self.get_policy()
+        properties = utils.brick_get_connector_properties()
+        LOG.debug("Searching for snapshot: %s in K2.", snap_name)
+        snap_rs = self.client.search("snapshots", short_name=snap_name)
+        if hasattr(snap_rs, 'hits') and snap_rs.total != 0:
+            snap = snap_rs.hits[0]
+            LOG.debug("Creating a view: %(view)s from snapshot: %(snap)s",
+                      {'view': view_name, 'snap': snap_name})
+            try:
+                cview = self.client.new("snapshots",
+                                        short_name=view_name,
+                                        source=snap, retention_policy=rpolicy,
+                                        is_exposable=True).save()
+            except Exception as ex:
+                LOG.exception(_LE("Creating a view: %(view)s from snapshot: "
+                                  "%(snap)s failed"), {"view": view_name,
+                                                       "snap": snap_name})
+                raise exception.KaminarioCinderDriverException(
+                    reason=six.text_type(ex.message))
+
+        else:
+            msg = _("Snapshot: %s search failed in K2.") % snap_name
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+
+        try:
+            conn = self.initialize_connection(cview, properties)
+            src_attach_info = self._connect_device(conn)
+            self.create_volume(volume)
+            conn = self.initialize_connection(volume, properties)
+            dest_attach_info = self._connect_device(conn)
+            vol_utils.copy_volume(src_attach_info['device']['path'],
+                                  dest_attach_info['device']['path'],
+                                  snapshot.volume.size * units.Ki,
+                                  self.configuration.volume_dd_blocksize,
+                                  sparse=True)
+            self.terminate_connection(volume, properties)
+            self.terminate_connection(cview, properties)
+        except Exception as ex:
+            self.terminate_connection(cview, properties)
+            self.terminate_connection(volume, properties)
+            cview.delete()
+            self.delete_volume(volume)
+            LOG.exception(_LE("Copy to volume: %(vol)s from view: %(view)s "
+                              "failed"), {"vol": vol_name, "view": view_name})
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def create_cloned_volume(self, volume, src_vref):
+        """Create a clone from source volume.
+
+        - attach source volume
+        - create and attach new volume
+        - copy data from attached source volume to attached new volume
+        - detach both volumes
+        """
+        clone_name = self.get_volume_name(volume.id)
+        src_name = self.get_volume_name(src_vref.id)
+        src_vol = self.client.search("volumes", name=src_name)
+        src_map = self.client.search("mappings", volume=src_vol)
413
+        if src_map.total != 0:
414
+            msg = _("K2 driver does not support cloning an attached volume. "
+                    "To do so, create a snapshot of the attached volume "
+                    "and then create a volume from the snapshot.")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        try:
+            properties = utils.brick_get_connector_properties()
+            conn = self.initialize_connection(src_vref, properties)
+            src_attach_info = self._connect_device(conn)
+            self.create_volume(volume)
+            conn = self.initialize_connection(volume, properties)
+            dest_attach_info = self._connect_device(conn)
+            vol_utils.copy_volume(src_attach_info['device']['path'],
+                                  dest_attach_info['device']['path'],
+                                  src_vref.size * units.Ki,
+                                  self.configuration.volume_dd_blocksize,
+                                  sparse=True)
+
+            self.terminate_connection(volume, properties)
+            self.terminate_connection(src_vref, properties)
+        except Exception as ex:
+            self.terminate_connection(src_vref, properties)
+            self.terminate_connection(volume, properties)
+            self.delete_volume(volume)
+            LOG.exception(_LE("Create a clone: %s failed."), clone_name)
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def delete_volume(self, volume):
+        """Delete a volume.
+
+        A volume in K2 exists inside a volume group, so:
+        - delete the volume
+        - delete the corresponding volume group
+        """
+        vg_name = self.get_volume_group_name(volume.id)
+        vol_name = self.get_volume_name(volume.id)
+        try:
+            if self._get_is_replica(volume.volume_type) and self.replica:
+                self._delete_volume_replica(volume, vg_name, vol_name)
+
+            LOG.debug("Searching and deleting volume: %s in K2.", vol_name)
+            vol_rs = self.client.search("volumes", name=vol_name)
+            if vol_rs.total != 0:
+                vol_rs.hits[0].delete()
+            LOG.debug("Searching and deleting vg: %s in K2.", vg_name)
+            vg_rs = self.client.search("volume_groups", name=vg_name)
+            if vg_rs.total != 0:
+                vg_rs.hits[0].delete()
+        except Exception as ex:
+            LOG.exception(_LE("Deletion of volume %s failed."), vol_name)
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def _delete_volume_replica(self, volume, vg_name, vol_name):
+        rvg_name = self.get_rep_name(vg_name)
+        rvol_name = self.get_rep_name(vol_name)
+        session_name = self.get_session_name(volume.id)
+        rsession_name = self.get_rep_name(session_name)
+        src_ssn = self.client.search('replication/sessions',
+                                     name=session_name).hits[0]
+        tgt_ssn = self.target.search('replication/sessions',
+                                     name=rsession_name).hits[0]
+        src_ssn.state = 'suspended'
+        src_ssn.save()
+        self._check_for_status(tgt_ssn, 'suspended')
+        src_ssn.state = 'idle'
+        src_ssn.save()
+        self._check_for_status(tgt_ssn, 'idle')
+        tgt_ssn.delete()
+        src_ssn.delete()
+
+        LOG.debug("Searching and deleting snapshots for volume groups: "
+                  "%(vg1)s, %(vg2)s in K2.", {'vg1': vg_name, 'vg2': rvg_name})
+        vg = self.client.search('volume_groups', name=vg_name).hits
+        rvg = self.target.search('volume_groups', name=rvg_name).hits
+        snaps = self.client.search('snapshots', volume_group=vg).hits
+        for s in snaps:
+            s.delete()
+        rsnaps = self.target.search('snapshots', volume_group=rvg).hits
+        for s in rsnaps:
+            s.delete()
+
+        self._delete_by_ref(self.target, "volumes", rvol_name, 'remote volume')
+        self._delete_by_ref(self.target, "volume_groups",
+                            rvg_name, "remote vg")
+
+    @kaminario_logger
+    def _check_for_status(self, obj, status):
+        while obj.state != status:
+            obj.refresh()
+            eventlet.sleep(1)
+
+    @kaminario_logger
+    def get_volume_stats(self, refresh=False):
+        if refresh:
+            self.update_volume_stats()
+        stats = self.stats
+        stats['storage_protocol'] = self._protocol
+        stats['driver_version'] = self.VERSION
+        stats['vendor_name'] = self.VENDOR
+        backend_name = self.configuration.safe_get('volume_backend_name')
+        stats['volume_backend_name'] = (backend_name or
+                                        self.__class__.__name__)
+        return stats
+
+    def create_export(self, context, volume, connector):
+        pass
+
+    def ensure_export(self, context, volume):
+        pass
+
+    def remove_export(self, context, volume):
+        pass
+
+    @kaminario_logger
+    def create_snapshot(self, snapshot):
+        """Create a snapshot from a volume_group."""
+        vg_name = self.get_volume_group_name(snapshot.volume_id)
+        snap_name = self.get_snap_name(snapshot.id)
+        rpolicy = self.get_policy()
+        try:
+            LOG.debug("Searching volume_group: %s in K2.", vg_name)
+            vg = self.client.search("volume_groups", name=vg_name).hits[0]
+            LOG.debug("Creating a snapshot: %(snap)s from vg: %(vg)s",
+                      {'snap': snap_name, 'vg': vg_name})
+            self.client.new("snapshots", short_name=snap_name,
+                            source=vg, retention_policy=rpolicy,
+                            is_auto_deleteable=False).save()
+        except Exception as ex:
+            LOG.exception(_LE("Creation of snapshot: %s failed."), snap_name)
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def delete_snapshot(self, snapshot):
+        """Delete a snapshot."""
+        snap_name = self.get_snap_name(snapshot.id)
+        try:
+            LOG.debug("Searching and deleting snapshot: %s in K2.", snap_name)
+            snap_rs = self.client.search("snapshots", short_name=snap_name)
+            if snap_rs.total != 0:
+                snap_rs.hits[0].delete()
+        except Exception as ex:
+            LOG.exception(_LE("Deletion of snapshot: %s failed."), snap_name)
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def extend_volume(self, volume, new_size):
+        """Extend volume."""
+        vol_name = self.get_volume_name(volume.id)
+        try:
+            LOG.debug("Searching volume: %s in K2.", vol_name)
+            vol = self.client.search("volumes", name=vol_name).hits[0]
+            vol.size = new_size * units.Mi
+            LOG.debug("Extending volume: %s in K2.", vol_name)
+            vol.save()
+        except Exception as ex:
+            LOG.exception(_LE("Extending volume: %s failed."), vol_name)
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def update_volume_stats(self):
+        conf = self.configuration
+        LOG.debug("Searching system capacity in K2.")
+        cap = self.client.search("system/capacity").hits[0]
+        LOG.debug("Searching total volumes in K2 for updating stats.")
+        total_volumes = self.client.search("volumes").total - 1
+        provisioned_vol = cap.provisioned_volumes
+        if (conf.auto_calc_max_oversubscription_ratio and cap.provisioned
+                and (cap.total - cap.free) != 0):
+            ratio = provisioned_vol / float(cap.total - cap.free)
+        else:
+            ratio = conf.max_over_subscription_ratio
+        self.stats = {'QoS_support': False,
+                      'free_capacity_gb': cap.free / units.Mi,
+                      'total_capacity_gb': cap.total / units.Mi,
+                      'thin_provisioning_support': True,
+                      'sparse_copy_volume': True,
+                      'total_volumes': total_volumes,
+                      'thick_provisioning_support': False,
+                      'provisioned_capacity_gb': provisioned_vol / units.Mi,
+                      'max_oversubscription_ratio': ratio,
+                      'kaminario:thin_prov_type': 'dedup/nodedup',
+                      'replication_enabled': True,
+                      'kaminario:replication': True}
+
+    @kaminario_logger
+    def get_initiator_host_name(self, connector):
+        """Return the initiator host name.
+
+        Valid characters: 0-9, a-z, A-Z, '-', '_'
+        All other characters are replaced with '_'.
+        Total characters in initiator host name: 32
+        """
+        return re.sub('[^0-9a-zA-Z-_]', '_', connector.get('host', ''))[:32]
+
+    @kaminario_logger
+    def get_volume_group_name(self, vid):
+        """Return the volume group name."""
+        return "cvg-{0}".format(vid)
+
+    @kaminario_logger
+    def get_volume_name(self, vid):
+        """Return the volume name."""
+        return "cv-{0}".format(vid)
+
+    @kaminario_logger
+    def get_session_name(self, vid):
+        """Return the session name."""
+        return "ssn-{0}".format(vid)
+
+    @kaminario_logger
+    def get_snap_name(self, sid):
+        """Return the snapshot name."""
+        return "cs-{0}".format(sid)
+
+    @kaminario_logger
+    def get_view_name(self, vid):
+        """Return the view name."""
+        return "cview-{0}".format(vid)
+
+    @kaminario_logger
+    def get_rep_name(self, name):
+        """Return the corresponding replication name."""
+        return "r{0}".format(name)
+
+    @kaminario_logger
+    def _delete_host_by_name(self, name):
+        """Delete a host by name."""
+        host_rs = self.client.search("hosts", name=name)
+        if hasattr(host_rs, "hits") and host_rs.total != 0:
+            host = host_rs.hits[0]
+            host.delete()
+
+    @kaminario_logger
+    def get_policy(self):
+        """Return the retention policy."""
+        try:
+            LOG.debug("Searching for retention_policy in K2.")
+            return self.client.search("retention_policies",
+                                      name="Best_Effort_Retention").hits[0]
+        except Exception as ex:
+            LOG.exception(_LE("Retention policy search failed in K2."))
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+
+    @kaminario_logger
+    def _get_volume_object(self, volume):
+        vol_name = self.get_volume_name(volume.id)
+        if volume.replication_status == 'failed-over':
+            vol_name = self.get_rep_name(vol_name)
+            self.client = self.target
+        LOG.debug("Searching volume: %s in K2.", vol_name)
+        vol_rs = self.client.search("volumes", name=vol_name)
+        if not hasattr(vol_rs, 'hits') or vol_rs.total == 0:
+            msg = _("Unable to find volume: %s from K2.") % vol_name
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        return vol_rs.hits[0]
+
+    @kaminario_logger
+    def _get_lun_number(self, vol, host):
+        volsnap = None
+        LOG.debug("Searching volsnaps in K2.")
+        volsnap_rs = self.client.search("volsnaps", snapshot=vol)
+        if hasattr(volsnap_rs, 'hits') and volsnap_rs.total != 0:
+            volsnap = volsnap_rs.hits[0]
+
+        LOG.debug("Searching mapping of volsnap in K2.")
+        map_rs = self.client.search("mappings", volume=volsnap, host=host)
+        return map_rs.hits[0].lun
+
+    def initialize_connection(self, volume, connector):
+        pass
+
+    @kaminario_logger
+    def terminate_connection(self, volume, connector):
+        """Terminate connection of volume from host."""
+        # Get volume object.
+        if type(volume).__name__ != 'RestObject':
+            vol_name = self.get_volume_name(volume.id)
+            if volume.replication_status == 'failed-over':
+                vol_name = self.get_rep_name(vol_name)
+                self.client = self.target
+            LOG.debug("Searching volume: %s in K2.", vol_name)
+            volume_rs = self.client.search("volumes", name=vol_name)
+            if hasattr(volume_rs, "hits") and volume_rs.total != 0:
+                volume = volume_rs.hits[0]
+        else:
+            vol_name = volume.name
+
+        # Get host object.
+        host_name = self.get_initiator_host_name(connector)
+        host_rs = self.client.search("hosts", name=host_name)
+        if hasattr(host_rs, "hits") and host_rs.total != 0 and volume:
+            host = host_rs.hits[0]
+            LOG.debug("Searching and deleting mapping of volume: %(name)s to "
+                      "host: %(host)s", {'host': host_name, 'name': vol_name})
+            map_rs = self.client.search("mappings", volume=volume, host=host)
+            if hasattr(map_rs, "hits") and map_rs.total != 0:
+                map_rs.hits[0].delete()
+            if self.client.search("mappings", host=host).total == 0:
+                LOG.debug("Deleting initiator hostname: %s in K2.", host_name)
+                host.delete()
+        else:
+            LOG.warning(_LW("Host: %s not found on K2."), host_name)
+
+    def k2_initialize_connection(self, volume, connector):
+        # Get volume object.
+        if type(volume).__name__ != 'RestObject':
+            vol = self._get_volume_object(volume)
+        else:
+            vol = volume
+        # Get host object.
+        host, host_rs, host_name = self._get_host_object(connector)
+        try:
+            # Map volume object to host object.
+            LOG.debug("Mapping volume: %(vol)s to host: %(host)s",
+                      {'host': host_name, 'vol': vol.name})
+            mapping = self.client.new("mappings", volume=vol, host=host).save()
+        except Exception as ex:
+            if host_rs.total == 0:
+                self._delete_host_by_name(host_name)
+            LOG.exception(_LE("Unable to map volume: %(vol)s to host: "
+                              "%(host)s"), {'host': host_name,
+                                            'vol': vol.name})
+            raise exception.KaminarioCinderDriverException(
+                reason=six.text_type(ex.message))
+        # Get lun number.
+        if type(volume).__name__ == 'RestObject':
+            return self._get_lun_number(vol, host)
+        else:
+            return mapping.lun
+
+    def _get_host_object(self, connector):
+        pass
+
+    def _get_is_dedup(self, vol_type):
+        if vol_type:
+            specs_val = vol_type.get('extra_specs', {}).get(
+                'kaminario:thin_prov_type')
+            if specs_val == 'nodedup':
+                return False
+            elif CONF.kaminario_nodedup_substring in vol_type.get('name'):
+                LOG.info(_LI("'kaminario_nodedup_substring' option is "
+                             "deprecated in favour of 'kaminario:thin_prov_"
+                             "type' in extra-specs and will be removed in "
+                             "the 10.0.0 release."))
+                return False
+            else:
+                return True
+        else:
+            return True
+
+    def _get_is_replica(self, vol_type):
+        replica = False
+        if vol_type and vol_type.get('extra_specs'):
+            specs = vol_type.get('extra_specs')
+            if (specs.get('kaminario:replication') == 'enabled' and
+               self.replica):
+                replica = True
+        return replica
+
+    def _get_replica_status(self, vg_name):
+        vg = self.client.search("volume_groups", name=vg_name).hits[0]
+        if self.client.search("replication/sessions",
+                              local_volume_group=vg).total != 0:
+            return True
+        else:
+            return False
+
+    def manage_existing(self, volume, existing_ref):
+        vol_name = existing_ref['source-name']
+        new_name = self.get_volume_name(volume.id)
+        vg_new_name = self.get_volume_group_name(volume.id)
+        vg_name = None
+        is_dedup = self._get_is_dedup(volume.get('volume_type'))
+        try:
+            LOG.debug("Searching volume: %s in K2.", vol_name)
+            vol = self.client.search("volumes", name=vol_name).hits[0]
+            vg = vol.volume_group
+            vg_replica = self._get_replica_status(vg.name)
+            vol_map = False
+            if self.client.search("mappings", volume=vol).total != 0:
+                vol_map = True
+            if is_dedup != vg.is_dedup or vg_replica or vol_map:
+                raise exception.ManageExistingInvalidReference(
+                    existing_ref=existing_ref,
+                    reason=_('Manage volume type invalid.'))
+            vol.name = new_name
+            vg_name = vg.name
+            LOG.debug("Manage new volume name: %s", new_name)
+            vg.name = vg_new_name
+            LOG.debug("Manage volume group name: %s", vg_new_name)
+            vg.save()
+            LOG.debug("Manage volume: %s in K2.", vol_name)
+            vol.save()
+        except Exception as ex:
+            vg_rs = self.client.search("volume_groups", name=vg_new_name)
+            if hasattr(vg_rs, 'hits') and vg_rs.total != 0:
+                vg = vg_rs.hits[0]
+                if vg_name and vg.name == vg_new_name:
+                    vg.name = vg_name
+                    LOG.debug("Updating vg new name to old name: %s ", vg_name)
+                    vg.save()
+            LOG.exception(_LE("Manage volume: %s failed."), vol_name)
+            raise exception.ManageExistingInvalidReference(
+                existing_ref=existing_ref,
+                reason=six.text_type(ex.message))
+
+    def manage_existing_get_size(self, volume, existing_ref):
+        vol_name = existing_ref['source-name']
+        v_rs = self.client.search("volumes", name=vol_name)
+        if hasattr(v_rs, 'hits') and v_rs.total != 0:
+            vol = v_rs.hits[0]
+            size = vol.size / units.Mi
+            return math.ceil(size)
+        else:
+            raise exception.ManageExistingInvalidReference(
+                existing_ref=existing_ref,
+                reason=_('Unable to get size of managed volume.'))
+
+    def after_volume_copy(self, ctxt, volume, new_volume, remote=None):
+        self.delete_volume(volume)
+        vg_name_old = self.get_volume_group_name(volume.id)
+        vol_name_old = self.get_volume_name(volume.id)
+        vg_name_new = self.get_volume_group_name(new_volume.id)
+        vol_name_new = self.get_volume_name(new_volume.id)
+        vg_new = self.client.search("volume_groups", name=vg_name_new).hits[0]
+        vg_new.name = vg_name_old
+        vg_new.save()
+        vol_new = self.client.search("volumes", name=vol_name_new).hits[0]
+        vol_new.name = vol_name_old
+        vol_new.save()
+
+    def retype(self, ctxt, volume, new_type, diff, host):
+        old_type = volume.get('volume_type')
+        vg_name = self.get_volume_group_name(volume.id)
+        old_rep_type = self._get_replica_status(vg_name)
+        new_rep_type = self._get_is_replica(new_type)
+        new_prov_type = self._get_is_dedup(new_type)
+        old_prov_type = self._get_is_dedup(old_type)
+        # Changing dedup<->nodedup while adding/removing replication is
+        # complex in K2, since K2 has no API to change dedup<->nodedup.
+        if new_prov_type == old_prov_type:
+            if not old_rep_type and new_rep_type:
+                self._add_replication(volume)
+                return True
+            elif old_rep_type and not new_rep_type:
+                self._delete_replication(volume)
+                return True
+        elif not new_rep_type and not old_rep_type:
+            LOG.debug("Use '--migration-policy on-demand' to change 'dedup "
+                      "without replication'<->'nodedup without replication'.")
+            return False
+        else:
+            LOG.error(_LE('Change from type1: %(type1)s to type2: %(type2)s '
+                          'is not supported directly in K2.'),
+                      {'type1': old_type, 'type2': new_type})
+            return False
+
+    def _add_replication(self, volume):
+        vg_name = self.get_volume_group_name(volume.id)
+        vol_name = self.get_volume_name(volume.id)
+        LOG.debug("Searching volume group with name: %(name)s",
+                  {'name': vg_name})
+        vg = self.client.search("volume_groups", name=vg_name).hits[0]
+        LOG.debug("Searching volume with name: %(name)s",
+                  {'name': vol_name})
+        vol = self.client.search("volumes", name=vol_name).hits[0]
+        self._create_volume_replica(volume, vg, vol, self.replica.rpo)
+
+    def _delete_replication(self, volume):
+        vg_name = self.get_volume_group_name(volume.id)
+        vol_name = self.get_volume_name(volume.id)
+        self._delete_volume_replica(volume, vg_name, vol_name)

+ 184
- 0
deployment_scripts/puppet/modules/kaminario/files/kaminario_fc.py View File

@@ -0,0 +1,184 @@
+# Copyright (c) 2016 by Kaminario Technologies, Ltd.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+"""Volume driver for Kaminario K2 all-flash arrays."""
+import six
+
+from oslo_log import log as logging
+
+from cinder import coordination
+from cinder import exception
+from cinder.i18n import _, _LE
+from cinder.objects import fields
+from cinder.volume.drivers.kaminario import kaminario_common as common
+from cinder.zonemanager import utils as fczm_utils
+
+LOG = logging.getLogger(__name__)
+kaminario_logger = common.kaminario_logger
+
+
+class KaminarioFCDriver(common.KaminarioCinderDriver):
+    """Kaminario K2 FC Volume Driver.
+
+    Version history:
+        1.0 - Initial driver
+        1.1 - Added manage/unmanage and extra-specs support for nodedup
+        1.2 - Added replication support
+        1.3 - Added retype support
+    """
+
+    VERSION = '1.3'
+
+    # ThirdPartySystems wiki page name
+    CI_WIKI_NAME = "Kaminario_K2_CI"
+
+    @kaminario_logger
+    def __init__(self, *args, **kwargs):
+        super(KaminarioFCDriver, self).__init__(*args, **kwargs)
+        self._protocol = 'FC'
+        self.lookup_service = fczm_utils.create_lookup_service()
+
+    @fczm_utils.AddFCZone
+    @kaminario_logger
+    @coordination.synchronized('{self.k2_lock_name}')
+    def initialize_connection(self, volume, connector):
+        """Attach K2 volume to host."""
+        # Check wwpns in host connector.
+        if not connector.get('wwpns'):
+            msg = _("No wwpns found in host connector.")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        # Get target wwpns.
+        target_wwpns = self.get_target_info(volume)
+        # Map volume.
+        lun = self.k2_initialize_connection(volume, connector)
+        # Create initiator-target mapping.
+        target_wwpns, init_target_map = self._build_initiator_target_map(
+            connector, target_wwpns)
+        # Return target volume information.
+        return {'driver_volume_type': 'fibre_channel',
+                'data': {"target_discovered": True,
+                         "target_lun": lun,
+                         "target_wwn": target_wwpns,
+                         "initiator_target_map": init_target_map}}
+
+    @fczm_utils.RemoveFCZone
+    @kaminario_logger
+    @coordination.synchronized('{self.k2_lock_name}')
+    def terminate_connection(self, volume, connector, **kwargs):
+        super(KaminarioFCDriver, self).terminate_connection(volume, connector)
+        properties = {"driver_volume_type": "fibre_channel", "data": {}}
+        host_name = self.get_initiator_host_name(connector)
+        host_rs = self.client.search("hosts", name=host_name)
+        # In terminate_connection, host_entry is deleted if host
+        # is not attached to any volume
+        if host_rs.total == 0:
+            # Get target wwpns.
+            target_wwpns = self.get_target_info(volume)
+            target_wwpns, init_target_map = self._build_initiator_target_map(
+                connector, target_wwpns)
+            properties["data"] = {"target_wwn": target_wwpns,
+                                  "initiator_target_map": init_target_map}
+        return properties
+
+    @kaminario_logger
+    def get_target_info(self, volume):
+        rep_status = fields.ReplicationStatus.FAILED_OVER
+        if (hasattr(volume, 'replication_status') and
+                volume.replication_status == rep_status):
+            self.client = self.target
+        LOG.debug("Searching target wwpns in K2.")
+        fc_ports_rs = self.client.search("system/fc_ports")
+        target_wwpns = []
+        if hasattr(fc_ports_rs, 'hits') and fc_ports_rs.total != 0:
+            for port in fc_ports_rs.hits:
+                if port.pwwn:
+                    target_wwpns.append((port.pwwn).replace(':', ''))
+        if not target_wwpns:
+            msg = _("Unable to get FC target wwpns from K2.")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        return target_wwpns
+
+    @kaminario_logger
+    def _get_host_object(self, connector):
+        host_name = self.get_initiator_host_name(connector)
+        LOG.debug("Searching initiator hostname: %s in K2.", host_name)
+        host_rs = self.client.search("hosts", name=host_name)
+        host_wwpns = connector['wwpns']
+        if host_rs.total == 0:
+            try:
+                LOG.debug("Creating initiator hostname: %s in K2.", host_name)
+                host = self.client.new("hosts", name=host_name,
+                                       type="Linux").save()
+            except Exception as ex:
+                LOG.exception(_LE("Unable to create host: %s in K2."),
+                              host_name)
+                raise exception.KaminarioCinderDriverException(
+                    reason=six.text_type(ex.message))
+        else:
+            # Use existing host.
+            LOG.debug("Use existing initiator hostname: %s in K2.", host_name)
+            host = host_rs.hits[0]
+        # Adding host wwpn.
+        for wwpn in host_wwpns:
+            wwpn = ":".join([wwpn[i:i + 2] for i in range(0, len(wwpn), 2)])
+            if self.client.search("host_fc_ports", pwwn=wwpn,
+                                  host=host).total == 0:
+                LOG.debug("Adding wwpn: %(wwpn)s to host: "
+                          "%(host)s in K2.", {'wwpn': wwpn,
+                                              'host': host_name})
+                try:
+                    self.client.new("host_fc_ports", pwwn=wwpn,
+                                    host=host).save()
+                except Exception as ex:
+                    if host_rs.total == 0:
+                        self._delete_host_by_name(host_name)
+                    LOG.exception(_LE("Unable to add wwpn : %(wwpn)s to "
149
+                                      "host: %(host)s in K2."),
150
+                                  {'wwpn': wwpn, 'host': host_name})
151
+                    raise exception.KaminarioCinderDriverException(
152
+                        reason=six.text_type(ex.message))
153
+        return host, host_rs, host_name
154
+
155
+    @kaminario_logger
156
+    def _build_initiator_target_map(self, connector, all_target_wwns):
157
+        """Build the target_wwns and the initiator target map."""
158
+        target_wwns = []
159
+        init_targ_map = {}
160
+
161
+        if self.lookup_service is not None:
162
+            # use FC san lookup.
163
+            dev_map = self.lookup_service.get_device_mapping_from_network(
164
+                connector.get('wwpns'),
165
+                all_target_wwns)
166
+
167
+            for fabric_name in dev_map:
168
+                fabric = dev_map[fabric_name]
169
+                target_wwns += fabric['target_port_wwn_list']
170
+                for initiator in fabric['initiator_port_wwn_list']:
171
+                    if initiator not in init_targ_map:
172
+                        init_targ_map[initiator] = []
173
+                    init_targ_map[initiator] += fabric['target_port_wwn_list']
174
+                    init_targ_map[initiator] = list(set(
175
+                        init_targ_map[initiator]))
176
+            target_wwns = list(set(target_wwns))
177
+        else:
178
+            initiator_wwns = connector.get('wwpns', [])
179
+            target_wwns = all_target_wwns
180
+
181
+            for initiator in initiator_wwns:
182
+                init_targ_map[initiator] = target_wwns
183
+
184
+        return target_wwns, init_targ_map
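For clarity, the fallback branch of `_build_initiator_target_map` (no FC SAN lookup service configured) can be sketched standalone; the WWPN values below are illustrative, not taken from a real array:

```python
# Minimal sketch of the no-lookup-service branch above: every initiator
# WWPN is simply mapped to the full list of target WWPNs.
def build_initiator_target_map(connector, all_target_wwns):
    init_targ_map = {}
    for initiator in connector.get('wwpns', []):
        init_targ_map[initiator] = all_target_wwns
    return all_target_wwns, init_targ_map

connector = {'wwpns': ['10000090fa0d6754', '10000090fa0d6755']}
targets = ['50024f4053300301', '50024f4053300302']
target_wwns, init_targ_map = build_initiator_target_map(connector, targets)
# Each initiator is zoned to both targets.
```

With a lookup service, the map is instead restricted per fabric, which is why the driver deduplicates each initiator's target list with `set()`.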

deployment_scripts/puppet/modules/kaminario/files/kaminario_iscsi.py (+127, -0)

@@ -0,0 +1,127 @@
+# Copyright (c) 2016 by Kaminario Technologies, Ltd.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+"""Volume driver for Kaminario K2 all-flash arrays."""
+import six
+
+from oslo_log import log as logging
+
+from cinder import coordination
+from cinder import exception
+from cinder.i18n import _, _LE
+from cinder import interface
+from cinder.objects import fields
+from cinder.volume.drivers.kaminario import kaminario_common as common
+
+ISCSI_TCP_PORT = "3260"
+LOG = logging.getLogger(__name__)
+kaminario_logger = common.kaminario_logger
+
+
+@interface.volumedriver
+class KaminarioISCSIDriver(common.KaminarioCinderDriver):
+    """Kaminario K2 iSCSI Volume Driver.
+
+    Version history:
+        1.0 - Initial driver
+        1.1 - Added manage/unmanage and extra-specs support for nodedup
+        1.2 - Added replication support
+        1.3 - Added retype support
+    """
+
+    VERSION = '1.3'
+
+    # ThirdPartySystems wiki page name
+    CI_WIKI_NAME = "Kaminario_K2_CI"
+
+    @kaminario_logger
+    def __init__(self, *args, **kwargs):
+        super(KaminarioISCSIDriver, self).__init__(*args, **kwargs)
+        self._protocol = 'iSCSI'
+
+    @kaminario_logger
+    @coordination.synchronized('{self.k2_lock_name}')
+    def initialize_connection(self, volume, connector):
+        """Attach K2 volume to host."""
+        # Get target_portal and target iqn.
+        iscsi_portal, target_iqn = self.get_target_info(volume)
+        # Map volume.
+        lun = self.k2_initialize_connection(volume, connector)
+        # Return target volume information.
+        return {"driver_volume_type": "iscsi",
+                "data": {"target_iqn": target_iqn,
+                         "target_portal": iscsi_portal,
+                         "target_lun": lun,
+                         "target_discovered": True}}
+
+    @kaminario_logger
+    @coordination.synchronized('{self.k2_lock_name}')
+    def terminate_connection(self, volume, connector, **kwargs):
+        super(KaminarioISCSIDriver, self).terminate_connection(volume,
+                                                               connector)
+
+    @kaminario_logger
+    def get_target_info(self, volume):
+        rep_status = fields.ReplicationStatus.FAILED_OVER
+        if (hasattr(volume, 'replication_status') and
+                volume.replication_status == rep_status):
+            self.client = self.target
+        LOG.debug("Searching first iSCSI port IP without wan in K2.")
+        iscsi_ip_rs = self.client.search("system/net_ips", wan_port="")
+        iscsi_ip = target_iqn = None
+        if hasattr(iscsi_ip_rs, 'hits') and iscsi_ip_rs.total != 0:
+            iscsi_ip = iscsi_ip_rs.hits[0].ip_address
+        if not iscsi_ip:
+            msg = _("Unable to get iSCSI IP address from K2.")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        iscsi_portal = "{0}:{1}".format(iscsi_ip, ISCSI_TCP_PORT)
+        LOG.debug("Searching system state for target iqn in K2.")
+        sys_state_rs = self.client.search("system/state")
+
+        if hasattr(sys_state_rs, 'hits') and sys_state_rs.total != 0:
+            target_iqn = sys_state_rs.hits[0].iscsi_qualified_target_name
+
+        if not target_iqn:
+            msg = _("Unable to get target iqn from K2.")
+            LOG.error(msg)
+            raise exception.KaminarioCinderDriverException(reason=msg)
+        return iscsi_portal, target_iqn
+
+    @kaminario_logger
+    def _get_host_object(self, connector):
+        host_name = self.get_initiator_host_name(connector)
+        LOG.debug("Searching initiator hostname: %s in K2.", host_name)
+        host_rs = self.client.search("hosts", name=host_name)
+        # Create the host if it does not exist.
+        if host_rs.total == 0:
+            try:
+                LOG.debug("Creating initiator hostname: %s in K2.", host_name)
+                host = self.client.new("hosts", name=host_name,
+                                       type="Linux").save()
+                LOG.debug("Adding iqn: %(iqn)s to host: %(host)s in K2.",
+                          {'iqn': connector['initiator'], 'host': host_name})
+                iqn = self.client.new("host_iqns", iqn=connector['initiator'],
+                                      host=host)
+                iqn.save()
+            except Exception as ex:
+                self._delete_host_by_name(host_name)
+                LOG.exception(_LE("Unable to create host: %s in K2."),
+                              host_name)
+                raise exception.KaminarioCinderDriverException(
+                    reason=six.text_type(ex.message))
+        else:
+            LOG.debug("Using existing initiator hostname: %s in K2.",
+                      host_name)
+            host = host_rs.hits[0]
+        return host, host_rs, host_name
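The shape of the dict that `initialize_connection` returns can be reproduced standalone; the IP, IQN, and LUN values below are illustrative placeholders:

```python
# Sketch of the connection info built by initialize_connection above:
# the portal is "<ip>:3260" and the target is marked as already discovered.
ISCSI_TCP_PORT = "3260"

def build_connection_info(iscsi_ip, target_iqn, lun):
    iscsi_portal = "{0}:{1}".format(iscsi_ip, ISCSI_TCP_PORT)
    return {"driver_volume_type": "iscsi",
            "data": {"target_iqn": target_iqn,
                     "target_portal": iscsi_portal,
                     "target_lun": lun,
                     "target_discovered": True}}

info = build_connection_info("192.168.10.5",
                             "iqn.2009-01.com.kaminario:example", 1)
```

Nova's os-brick connector consumes this dict to log in to the portal and attach the mapped LUN.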

deployment_scripts/puppet/modules/kaminario/lib/puppet/parser/functions/get_replication_device.rb (+11, -0)

@@ -0,0 +1,11 @@
+module Puppet::Parser::Functions
+  newfunction(:get_replication_device, :type => :rvalue) do |args|
+    ip = args[0].to_s
+    login = args[1]
+    password = args[2]
+    rpo = args[3]
+    replication_device = "backend_id:#{ip},login:#{login},password:#{password},rpo:#{rpo}"
+    return replication_device
+  end
+end
+
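The string this parser function produces feeds Cinder's `replication_device` backend option; a Python sketch with illustrative credentials:

```python
# Mirrors the Puppet get_replication_device function: a comma-separated
# key:value string that cinder.conf consumes as replication_device.
def get_replication_device(ip, login, password, rpo):
    return "backend_id:{0},login:{1},password:{2},rpo:{3}".format(
        ip, login, password, rpo)

device = get_replication_device("10.0.0.6", "admin", "secret", "60")
```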

deployment_scripts/puppet/modules/kaminario/lib/puppet/parser/functions/section_name.rb (+9, -0)

@@ -0,0 +1,9 @@
+module Puppet::Parser::Functions
+  newfunction(:section_name, :type => :rvalue) do |args|
+    ip = args[0]
+    str = args[1]
+    sec_name = str + '_' + ip
+    return sec_name
+  end
+end
+
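`section_name` simply joins the backend name and the array IP into the cinder.conf section name; a Python sketch (illustrative values):

```python
# Mirrors the Puppet section_name function: the cinder.conf section is
# "<backend_name>_<ip>". Note the call site in init.pp passes
# (storage_ip, backend_name) in that argument order.
def section_name(ip, backend_name):
    return backend_name + '_' + ip

sec = section_name("10.0.0.5", "kaminario")
```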

deployment_scripts/puppet/modules/kaminario/manifests/driver.pp (+39, -0)

@@ -0,0 +1,39 @@
+class kaminario::driver {
+
+  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario':
+    ensure => 'directory',
+    owner  => 'root',
+    group  => 'root',
+    mode   => '0755',
+  }
+
+  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/__init__.py':
+    mode   => '0644',
+    owner  => 'root',
+    group  => 'root',
+    source => 'puppet:///modules/kaminario/__init__.py',
+  }
+
+  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/kaminario_common.py':
+    mode   => '0644',
+    owner  => 'root',
+    group  => 'root',
+    source => 'puppet:///modules/kaminario/kaminario_common.py',
+  }
+
+  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/kaminario_fc.py':
+    mode   => '0644',
+    owner  => 'root',
+    group  => 'root',
+    source => 'puppet:///modules/kaminario/kaminario_fc.py',
+  }
+
+  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/kaminario_iscsi.py':
+    mode   => '0644',
+    owner  => 'root',
+    group  => 'root',
+    source => 'puppet:///modules/kaminario/kaminario_iscsi.py',
+  }
+
+  file { '/usr/lib/python2.7/dist-packages/cinder/exception.py':
+    mode   => '0644',
+    owner  => 'root',
+    group  => 'root',
+    source => 'puppet:///modules/kaminario/exception.py',
+  }
+
+}

deployment_scripts/puppet/modules/kaminario/manifests/init.pp (+82, -0)

@@ -0,0 +1,82 @@
+class kaminario::config {
+$num = [ '0', '1', '2', '3', '4', '5' ]
+$plugin_settings = hiera('cinder_kaminario')
+each($num) |$value| {
+config {"plugin_${value}":
+  cinder_node            =>      $plugin_settings["cinder_node_${value}"],
+  storage_protocol       =>      $plugin_settings["storage_protocol_${value}"],
+  backend_name           =>      $plugin_settings["backend_name_${value}"],
+  storage_user           =>      $plugin_settings["storage_user_${value}"],
+  storage_password       =>      $plugin_settings["storage_password_${value}"],
+  storage_ip             =>      $plugin_settings["storage_ip_${value}"],
+  enable_replication     =>      $plugin_settings["enable_replication_${value}"],
+  replication_ip         =>      $plugin_settings["replication_ip_${value}"],
+  replication_login      =>      $plugin_settings["replication_login_${value}"],
+  replication_rpo        =>      $plugin_settings["replication_rpo_${value}"],
+  replication_password   =>      $plugin_settings["replication_password_${value}"],
+  num                    =>      $value
+  }
+}
+}
+
+define config($storage_protocol,$backend_name,$storage_user,$storage_password,$storage_ip,$num,$cinder_node,$enable_replication,$replication_ip,$replication_login,$replication_rpo,$replication_password) {
+
+  $sec_name = section_name( $storage_ip , $backend_name )
+  $config_file = "/etc/cinder/cinder.conf"
+  if $cinder_node == hiera(user_node_name) {
+  if $storage_protocol == 'FC'{
+  ini_subsetting {"enable_backend_${num}":
+        ensure               => present,
+        section              => 'DEFAULT',
+        key_val_separator    => '=',
+        path                 => $config_file,
+        setting              => 'enabled_backends',
+        subsetting           => $backend_name,
+        subsetting_separator => ',',
+   }->
+    cinder_config {
+        "$sec_name/volume_driver"       : value => "cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver";
+        "$sec_name/volume_backend_name" : value => $backend_name;
+        "$sec_name/san_ip"              : value => $storage_ip;
+        "$sec_name/san_login"           : value => $storage_user;
+        "$sec_name/san_password"        : value => $storage_password;
+   }
+
+    if $enable_replication == true {
+    $replication_device = get_replication_device($replication_ip, $replication_login , $replication_password , $replication_rpo)
+    cinder_config {
+        "$sec_name/replication_device"       : value => $replication_device;
+    }
+
+   }
+}
+  if $storage_protocol == 'ISCSI'{
+  ini_subsetting {"enable_backend_${num}":
+        ensure               => present,
+        section              => 'DEFAULT',
+        key_val_separator    => '=',
+        path                 => $config_file,
+        setting              => 'enabled_backends',
+        subsetting           => $backend_name,
+        subsetting_separator => ',',
+   }->
+    cinder_config {
+        "$sec_name/volume_driver"       : value => "cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver";
+        "$sec_name/volume_backend_name" : value => $backend_name;
+        "$sec_name/san_ip"              : value => $storage_ip;
+        "$sec_name/san_login"           : value => $storage_user;
+        "$sec_name/san_password"        : value => $storage_password;
+   }
+
+    if $enable_replication == true {
+    $replication_device = get_replication_device($replication_ip, $replication_login , $replication_password , $replication_rpo)
+    cinder_config {
+        "$sec_name/replication_device"       : value => $replication_device;
+         }
+
+   }
+}
+}
+}
+
+
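For an FC backend named `kaminario` on array `10.0.0.5` with replication enabled, the manifest above would render a cinder.conf section roughly like the following, in addition to appending `kaminario` to `enabled_backends` in `[DEFAULT]` (all values here are illustrative placeholders):

```ini
[kaminario_10.0.0.5]
volume_driver = cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver
volume_backend_name = kaminario
san_ip = 10.0.0.5
san_login = admin
san_password = secret
replication_device = backend_id:10.0.0.6,login:admin,password:secret,rpo:60
```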

deployment_scripts/puppet/modules/kaminario/manifests/krest.pp (+8, -0)

@@ -0,0 +1,8 @@
+class kaminario::krest {
+  package { 'python-pip':
+    ensure => installed,
+  }
+  package { 'krest':
+    ensure   => installed,
+    provider => pip,
+    require  => Package['python-pip'],
+  }
+}

deployment_scripts/puppet/modules/kaminario/manifests/type.pp (+45, -0)

@@ -0,0 +1,45 @@
+class kaminario::type {
+$num = [ '0', '1', '2', '3', '4', '5' ]
+$plugin_settings = hiera('cinder_kaminario')
+each($num) |$value| {
+kaminario_type {"plugin_${value}":
+  create_type            =>      $plugin_settings["create_type_${value}"],
+  options                =>      $plugin_settings["options_${value}"],
+  backend_name           =>      $plugin_settings["backend_name_${value}"]
+  }
+}
+}
+
+define kaminario_type ($create_type,$options,$backend_name) {
+if $create_type == true {
+case $options {
+  "enable_replication_type": {
+    cinder_type {$backend_name:
+      ensure     => present,
+      properties => ["volume_backend_name=${backend_name}",'kaminario:replication=enabled'],
+    }
+  }
+  "enable_dedup": {
+    cinder_type {$backend_name:
+      ensure     => present,
+      properties => ["volume_backend_name=${backend_name}",'kaminario:thin_prov_type=nodedup'],
+    }
+  }
+  "replication_dedup": {
+    cinder_type {$backend_name:
+      ensure     => present,
+      properties => ["volume_backend_name=${backend_name}",'kaminario:replication=enabled','kaminario:thin_prov_type=nodedup'],
+    }
+  }
+  "default": {
+    cinder_type {$backend_name:
+      ensure     => present,
+      properties => ["volume_backend_name=${backend_name}"],
+   }
+  }
+
+}
+
+}
+
+}
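The options-to-extra-specs mapping in type.pp can be summarized in a Python sketch; the function name is illustrative, and it assumes `replication_dedup` is meant to combine the replication and nodedup specs:

```python
# Hypothetical helper summarizing type.pp: the extra specs each "options"
# choice sets on the created cinder volume type.
def type_extra_specs(backend_name, option):
    specs = ["volume_backend_name={0}".format(backend_name)]
    if option == "enable_replication_type":
        specs.append("kaminario:replication=enabled")
    elif option == "enable_dedup":
        specs.append("kaminario:thin_prov_type=nodedup")
    elif option == "replication_dedup":
        # Assumption: this choice combines both specs.
        specs += ["kaminario:replication=enabled",
                  "kaminario:thin_prov_type=nodedup"]
    return specs
```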

deployment_tasks.yaml (+39, -0)

@@ -0,0 +1,39 @@
+- id: kaminario_parser
+  type: puppet
+  version: 2.1.0
+  groups: [cinder,primary-controller,controller]
+  requires: [top-role-cinder]
+  required_for: [kaminario_cinder]
+  condition:
+    yaql_exp: "changedAny($.storage, $.cinder_kaminario)"
+  parameters:
+    puppet_manifest: puppet/manifests/cinder_parser.pp
+    puppet_modules: puppet/modules:/etc/puppet/modules
+    timeout: 360
+
+- id: kaminario_cinder
+  type: puppet
+  version: 2.1.0
+  groups: [cinder]
+  requires: [kaminario_parser]
+  required_for: [deploy_end]
+  condition:
+    yaql_exp: "changedAny($.storage, $.cinder_kaminario)"
+  parameters:
+    puppet_manifest: puppet/manifests/cinder_kaminario.pp
+    puppet_modules: puppet/modules:/etc/puppet/modules
+    timeout: 360
+
+- id: kaminario_types
+  type: puppet
+  version: 2.1.0
+  groups: [primary-controller]
+  requires: [openstack-cinder]
+  required_for: [deploy_end]
+  condition:
+    yaql_exp: "changedAny($.storage, $.cinder_kaminario)"
+  parameters:
+    puppet_manifest: puppet/manifests/cinder_type.pp
+    puppet_modules: puppet/modules:/etc/puppet/modules
+    timeout: 360
+

environment_config.yaml (+1078, -0)

File diff suppressed because it is too large.


metadata.yaml (+34, -0)

@@ -0,0 +1,34 @@
+# Plugin name
+name: cinder_kaminario
+# Human-readable name for your plugin
+title: Kaminario For Cinder
+# Plugin version
+version: '1.0.0'
+# Description
+description: Enable Kaminario Storage Array as a Cinder backend
+# Required fuel version
+fuel_version: ['9.0']
+# Specify license of your plugin
+licenses: ['Apache License Version 2.0']
+# Specify author or company name
+authors: ['Biarca']
+# A link to the plugin's page
+homepage: 'https://github.com/openstack/fuel-plugin-cinder-kaminario'
+# Specify a group which your plugin implements, possible options:
+# network, storage, storage::cinder, storage::glance, hypervisor,
+# equipment
+groups: ['storage::cinder']
+# Change `false` to `true` if the plugin can be installed in the environment
+# after the deployment.
+is_hotpluggable: true
+
+# The plugin is compatible with releases in the list
+releases:
+  - os: ubuntu
+    version: mitaka-9.0
+    mode: ['ha']
+    deployment_scripts_path: deployment_scripts/
+    repository_path: repositories/ubuntu
+
+# Version of plugin package
+package_version: '4.0.0'

pre_build_hook (+5, -0)

@@ -0,0 +1,5 @@
+#!/bin/bash
+
+# Add here any actions required before the plugin build,
+# such as building packages or downloading packages from mirrors.
+# The script should return 0 if there were no errors.

repositories/centos/.gitkeep (+0, -0)


repositories/ubuntu/.gitkeep (+0, -0)


volumes.yaml (+7, -0)

@@ -0,0 +1,7 @@
+volumes_roles_mapping:
+  # Default role mapping
+  fuel-plugin-cinder-kaminario_role:
+    - {allocate_size: "min", id: "os"}
+
+# Define new volumes for your role here
+volumes: []
