Feature: Add raid configuration support for ibmc driver

This patch adds RAID configuration support for the HUAWEI iBMC driver.

Story: 2007554
Task: 39406
Signed-off-by: Qianbiao.NG <iampurse@vip.qq.com>
Change-Id: Iace17b2d233323f4648d2857ec1b9fb83d42c045
Qianbiao.NG 2020-04-19 13:58:37 +08:00
parent 21873dfb52
commit cd6e5ec497
10 changed files with 647 additions and 17 deletions


@ -9,6 +9,13 @@ The ``ibmc`` driver is targeted for Huawei V5 series rack server such as
2288H V5, CH121 V5. The iBMC hardware type enables the user to take advantage
of features of `Huawei iBMC`_ to control Huawei server.
The ``ibmc`` hardware type supports the following Ironic interfaces:
* Management Interface: Boot device management
* Power Interface: Power management
* `RAID Interface`_: RAID controller and disk management
* `Vendor Interface`_: BIOS management
Prerequisites
=============
@ -28,9 +35,10 @@ Enabling the iBMC driver
[DEFAULT]
...
enabled_hardware_types = ibmc
enabled_power_interfaces = ibmc
enabled_management_interfaces = ibmc
enabled_raid_interfaces = ibmc
enabled_vendor_interfaces = ibmc
#. Restart the ironic conductor service::
@ -91,19 +99,213 @@ a node with the ``ibmc`` driver. For example:
For more information about enrolling nodes see :ref:`enrollment`
in the install guide.
RAID Interface
==============
Currently, only RAID controllers that support out-of-band (OOB) management can be managed.
See :doc:`/admin/raid` for more information on Ironic RAID support.
The following properties are supported by the iBMC raid interface
implementation, ``ibmc``:
Mandatory properties
--------------------
* ``size_gb``: Size in gigabytes (integer) for the logical disk. Use ``MAX`` as
``size_gb`` if this logical disk is supposed to use the rest of the space
available.
* ``raid_level``: RAID level for the logical disk. Valid values are
``JBOD``, ``0``, ``1``, ``5``, ``6``, ``1+0``, ``5+0`` and ``6+0``. Note that
some RAID controllers may support only a subset of these levels; a minimal
example is shown below.
.. NOTE::
RAID level ``2`` is not supported by ``iBMC`` driver.
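For example, a minimal logical disk that sets only the mandatory properties
could look like the following (the values here are purely illustrative):

.. code-block:: json

    {
      "logical_disks": [
        {
          "size_gb": 100,
          "raid_level": "5"
        }
      ]
    }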
Optional properties
-------------------
* ``is_root_volume``: Optional. Specifies whether this disk is a root volume.
By default, this is ``False``.
* ``volume_name``: Optional. Name of the volume to be created. If this is not
specified, the volume name is left as ``N/A``.
Backing physical disk hints
---------------------------
See :doc:`/admin/raid` for more information on backing disk hints.
These are machine-independent properties. The hints are specified for each
logical disk to help Ironic find the desired disks for RAID configuration.
* ``share_physical_disks``
* ``disk_type``
* ``interface_type``
* ``number_of_physical_disks``
Backing physical disks
----------------------
These are HUAWEI RAID controller dependent properties:
* ``controller``: Optional. Supported values are: RAID storage id,
RAID storage name or RAID controller name. If a bare metal server has more
than one controller, this property is mandatory. Typical values would look like:
* RAID Storage Id: ``RAIDStorage0``
* RAID Storage Name: ``RAIDStorage0``
* RAID Controller Name: ``RAID Card1 Controller``.
* ``physical_disks``: Optional. Supported values are: disk id, disk name or
disk serial number (see the example after this list). Typical values for an
HDD disk would look like:
* Disk Id: ``HDDPlaneDisk0``
* Disk Name: ``Disk0``
* Disk Serial Number: ``38DGK77LF77D``
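For example, a logical disk pinned to a specific controller and to specific
physical disks, using the identifier styles listed above (the values are
illustrative), could look like:

.. code-block:: json

    {
      "logical_disks": [
        {
          "controller": "RAID Card1 Controller",
          "physical_disks": [
            "Disk0",
            "Disk1",
            "Disk2"
          ],
          "raid_level": "5",
          "size_gb": 100
        }
      ]
    }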
Delete RAID configuration
-------------------------
During the ``delete_configuration`` step (see the example clean step after this list), the ``ibmc`` driver will:
* delete all logical disks
* delete all hot-spare disks
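As a sketch of how this is usually invoked, assuming the standard Ironic
manual cleaning workflow described in :doc:`/admin/raid`, the step can be
triggered with a ``clean_steps`` payload such as:

.. code-block:: json

    [
      {
        "interface": "raid",
        "step": "delete_configuration"
      }
    ]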
Logical disks creation priority
-------------------------------
Logical disk creation priority is based on three properties:
* ``share_physical_disks``
* ``physical_disks``
* ``size_gb``
The logical disk creation priority strictly follows the table below. If
multiple logical disks have the same priority, they are created in the order
in which they appear in the ``logical_disks`` array (see the example after
the table).
==================== ========================== =========
Share physical disks Specified Physical Disks   Size
==================== ========================== =========
no                   yes                        int|max
no                   no                         int
yes                  yes                        int
yes                  yes                        max
yes                  no                         int
yes                  no                         max
no                   no                         max
==================== ========================== =========
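For example, in the following (illustrative) configuration neither logical
disk shares physical disks, so the second entry, which specifies its physical
disks, has a higher priority and is created before the first, ``MAX`` sized
entry:

.. code-block:: json

    {
      "logical_disks": [
        {
          "controller": "RAID Card1 Controller",
          "raid_level": "5",
          "size_gb": "MAX"
        },
        {
          "controller": "RAID Card1 Controller",
          "raid_level": "1",
          "size_gb": 100,
          "physical_disks": [
            "Disk0",
            "Disk1"
          ]
        }
      ]
    }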
Physical disks choice strategy
------------------------------
* If no ``physical_disks`` are specified, the "waste least" strategy is used
  to choose the physical disks:

  * waste the least disk capacity: mixing disks of different capacities wastes
    disk capacity, so avoiding this has the highest priority.
  * use the least total disk capacity: for example, a 400G RAID 5 can be
    created from either five 100G disks or three 200G disks. The five 100G
    disks are the better choice because they use 500G in total, while the
    three 200G disks use 600G in total.
  * use the fewest disks: finally, if both the wasted capacity and the total
    disk capacity are the same (which rarely happens), the solution with the
    minimum number of disks is chosen.

* When the ``share_physical_disks`` option is present, the ``ibmc`` driver
  first tries to create the logical disk on an existing physical disk group
  (an existing logical disk). Only when no existing physical disk group
  matches does it fall back to choosing unused physical disks with the same
  strategy described above. When multiple existing physical disk groups
  match, the "waste least" strategy is applied again, preferring the choice
  that leaves the largest shareable capacity available afterwards. For
  example, consider creating the logical disk shown below on an ``ibmc``
  server that already has two RAID 5 logical disks whose shareable capacities
  are 500G and 300G; the ``ibmc`` driver will choose the second one.
.. code-block:: json
{
"logical_disks": [
{
"controller": "RAID Card1 Controller",
"raid_level": "5",
"size_gb": 100,
"share_physical_disks": true
}
]
}
* When ``size_gb`` is set to ``MAX``, the ``ibmc`` driver automatically works
  through all possible layouts and chooses the "best" solution, that is, the
  one that provides the biggest capacity while using the least total disk
  capacity. For example, when creating a RAID 5+0 logical disk with ``MAX``
  size on a server that has nine 200G disks, it will choose "8 disks +
  span-number 2" rather than "9 disks + span-number 3". Both provide 1200G of
  capacity in total, but the former uses only 8 disks while the latter uses
  9 disks. If you prefer the latter solution, you can specify the disk count
  with the ``number_of_physical_disks`` option, as shown in the second
  example below.
.. code-block:: json
{
"logical_disks": [
{
"controller": "RAID Card1 Controller",
"raid_level": "5+0",
"size_gb": "MAX"
}
]
}
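If the 9-disk layout is preferred instead, the disk count can be forced with
``number_of_physical_disks`` (illustrative values):

.. code-block:: json

    {
      "logical_disks": [
        {
          "controller": "RAID Card1 Controller",
          "raid_level": "5+0",
          "size_gb": "MAX",
          "number_of_physical_disks": 9
        }
      ]
    }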
Examples
--------
A typical scenario creates:
* RAID 5, 500G, root OS volume using 3 specified disks
* RAID 5, data volume using the remaining space and the remaining disks
.. code-block:: json
{
"logical_disks": [
{
"volume_name": "os_volume",
"controller": "RAID Card1 Controller",
"is_root_volume": "True",
"physical_disks": [
"Disk0",
"Disk1",
"Disk2"
],
"raid_level": "5",
"size_gb": "500"
},
{
"volume_name": "data_volume",
"controller": "RAID Card1 Controller",
"raid_level": "5",
"size_gb": "MAX"
}
]
}
Vendor Interface
================

The ``ibmc`` hardware type provides vendor passthru interfaces shown below:

======================== ============ ======================================
Method Name              HTTP Method  Description
======================== ============ ======================================
boot_up_seq              GET          Query boot up sequence
get_raid_controller_list GET          Query RAID controller summary info
======================== ============ ======================================

Query boot up sequence
^^^^^^^^^^^^^^^^^^^^^^

The ``ibmc`` hardware type can query the current boot up sequence of the
bare metal node:

.. code-block:: bash

    openstack baremetal node passthru call --http-method GET \
        <node id or node name> boot_up_seq


@ -18,7 +18,9 @@ CH121 V5.
from ironic.drivers import generic
from ironic.drivers.modules.ibmc import management as ibmc_mgmt
from ironic.drivers.modules.ibmc import power as ibmc_power
from ironic.drivers.modules.ibmc import raid as ibmc_raid
from ironic.drivers.modules.ibmc import vendor as ibmc_vendor
from ironic.drivers.modules import inspector
from ironic.drivers.modules import noop
@ -39,3 +41,13 @@ class IBMCHardware(generic.GenericHardware):
def supported_vendor_interfaces(self):
"""List of supported vendor interfaces."""
return [ibmc_vendor.IBMCVendor, noop.NoVendor]
@property
def supported_raid_interfaces(self):
"""List of supported raid interfaces."""
return [ibmc_raid.IbmcRAID, noop.NoRAID]
@property
def supported_inspect_interfaces(self):
"""List of supported inspect interfaces."""
return [inspector.Inspector, noop.NoInspect]


@ -0,0 +1,199 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
iBMC RAID configuration specific methods
"""
from ironic_lib import metrics_utils
from oslo_log import log as logging
from oslo_utils import importutils
from ironic.common.i18n import _
from ironic.common import raid
from ironic import conf
from ironic.drivers import base
from ironic.drivers.modules.ibmc import utils
constants = importutils.try_import('ibmc_client.constants')
ibmc_client = importutils.try_import('ibmc_client')
ibmc_error = importutils.try_import('ibmc_client.exceptions')
CONF = conf.CONF
LOG = logging.getLogger(__name__)
METRICS = metrics_utils.get_metrics_logger(__name__)
class IbmcRAID(base.RAIDInterface):
"""Implementation of RAIDInterface for iBMC."""
RAID_APPLY_CONFIGURATION_ARGSINFO = {
"raid_config": {
"description": "The RAID configuration to apply.",
"required": True,
},
"create_root_volume": {
"description": (
"Setting this to 'False' indicates not to create root "
"volume that is specified in 'raid_config'. Default "
"value is 'True'."
),
"required": False,
},
"create_nonroot_volumes": {
"description": (
"Setting this to 'False' indicates not to create "
"non-root volumes (all except the root volume) in "
"'raid_config'. Default value is 'True'."
),
"required": False,
},
"delete_existing": {
"description": (
"Setting this to 'True' indicates to delete existing RAID "
"configuration prior to creating the new configuration. "
"Default value is 'True'."
),
"required": False,
}
}
def get_properties(self):
"""Return the properties of the interface.
:returns: dictionary of <property name>:<property description> entries.
"""
return utils.COMMON_PROPERTIES.copy()
@utils.handle_ibmc_exception('delete iBMC RAID configuration')
def _delete_raid_configuration(self, task):
"""Delete the RAID configuration through `python-ibmcclient` lib.
:param task: a TaskManager instance containing the node to act on.
"""
ibmc = utils.parse_driver_info(task.node)
with ibmc_client.connect(**ibmc) as conn:
# NOTE(qianbiao.ng): To reduce review workload, we keep all delete
# logic in python-ibmcclient, and the delete RAID configuration logic
# is synchronous. If async behavior is required, implement it in
# python-ibmcclient.
conn.system.storage.delete_all_raid_configuration()
@utils.handle_ibmc_exception('create iBMC RAID configuration')
def _create_raid_configuration(self, task, logical_disks):
"""Create the RAID configuration through `python-ibmcclient` lib.
:param task: a TaskManager instance containing the node to act on.
:param logical_disks: a list of JSON dictionaries which represents
the logical disks to be created. The JSON dictionary should match
the (ironic.drivers.raid_config_schema.json) scheme.
"""
ibmc = utils.parse_driver_info(task.node)
with ibmc_client.connect(**ibmc) as conn:
# NOTE(qianbiao.ng): To reduce review workload, we keep all apply
# logic in python-ibmcclient, and the apply RAID configuration logic
# is synchronous. If async behavior is required, implement it in
# python-ibmcclient.
conn.system.storage.apply_raid_configuration(logical_disks)
@base.deploy_step(priority=0,
argsinfo=RAID_APPLY_CONFIGURATION_ARGSINFO)
def apply_configuration(self, task, raid_config, create_root_volume=True,
create_nonroot_volumes=True):
return super(IbmcRAID, self).apply_configuration(
task, raid_config, create_root_volume=create_root_volume,
create_nonroot_volumes=create_nonroot_volumes)
@METRICS.timer('IbmcRAID.create_configuration')
@base.clean_step(priority=0, abortable=False, argsinfo={
'create_root_volume': {
'description': ('This specifies whether to create the root '
'volume. Defaults to `True`.'),
'required': False
},
'create_nonroot_volumes': {
'description': ('This specifies whether to create the non-root '
'volumes. Defaults to `True`.'),
'required': False
},
"delete_existing": {
"description": ("Setting this to 'True' indicates to delete "
"existing RAID configuration prior to creating "
"the new configuration. "
"Default value is 'False'."),
"required": False,
}
})
def create_configuration(self, task, create_root_volume=True,
create_nonroot_volumes=True,
delete_existing=False):
"""Create a RAID configuration.
This method creates a RAID configuration on the given node.
:param task: a TaskManager instance.
:param create_root_volume: If True, a root volume is created
during RAID configuration. Otherwise, no root volume is
created. Default is True.
:param create_nonroot_volumes: If True, non-root volumes are
created. If False, no non-root volumes are created. Default
is True.
:param delete_existing: Setting this to True indicates to delete RAID
configuration prior to creating the new configuration. Default is
False.
:raises: MissingParameterValue, if node.target_raid_config is missing
or empty after skipping root volume and/or non-root volumes.
:raises: IBMCError, on failure to execute step.
"""
node = task.node
raid_config = raid.filter_target_raid_config(
node, create_root_volume=create_root_volume,
create_nonroot_volumes=create_nonroot_volumes)
LOG.info(_("Invoke RAID create_configuration step for node %s(uuid). "
"Current provision state is: %(status)s. "
"Target RAID configuration is: %(config)s."),
{'uuid': node.uuid, 'status': node.provision_state,
'target': raid_config})
# cache the current raid config to the node's driver_internal_info;
# reassign the dict so the change is detected and persisted by node.save()
info = node.driver_internal_info
info['raid_config'] = raid_config
node.driver_internal_info = info
node.save()
# delete existing volumes if necessary
if delete_existing:
self._delete_raid_configuration(task)
# create raid configuration
logical_disks = raid_config.get('logical_disks', [])
self._create_raid_configuration(task, logical_disks)
LOG.info(_("Succeed to create raid configuration on node %s."),
task.node.uuid)
@METRICS.timer('IbmcRAID.delete_configuration')
@base.clean_step(priority=0, abortable=False)
@base.deploy_step(priority=0)
def delete_configuration(self, task):
"""Delete the RAID configuration.
:param task: a TaskManager instance containing the node to act on.
:returns: states.CLEANWAIT if the clean operation is in progress
asynchronously, states.DEPLOYWAIT if the deploy operation is in progress
asynchronously, or None if the operation is complete.
:raises: IBMCError, on failure to execute step.
"""
node = task.node
LOG.info("Invoke RAID delete_configuration step for node %s(uuid). "
"Current provision state is: %(status)s. ",
{'uuid': node.uuid, 'status': node.provision_state})
self._delete_raid_configuration(task)
LOG.info(_("Succeed to delete raid configuration on node %s."),
task.node.uuid)


@ -85,3 +85,24 @@ class IBMCVendor(base.VendorInterface):
system = conn.system.get()
boot_sequence = system.boot_sequence
return {'boot_up_sequence': boot_sequence}
@base.passthru(['GET'], async_call=False,
description=_('Returns a list of dictionaries; each '
'dictionary represents a RAID controller '
'summary of the node.'))
@utils.handle_ibmc_exception('get iBMC RAID controller summary')
def get_raid_controller_list(self, task, **kwargs):
"""List RAID controllers summary info of the node.
:param task: A TaskManager instance containing the node to act on.
:param kwargs: Not used.
:raises: IBMCConnectionError when it fails to connect to iBMC.
:raises: IBMCError when the iBMC responds with an error.
:returns: A list of dictionaries; each dictionary represents a RAID
controller summary of the node.
"""
driver_info = utils.parse_driver_info(task.node)
with ibmc_client.connect(**driver_info) as conn:
controllers = conn.system.storage.list()
summaries = [ctrl.summary() for ctrl in controllers]
return summaries


@ -28,7 +28,8 @@ class IBMCTestCase(db_base.DbTestCase):
self.config(enabled_hardware_types=['ibmc'],
enabled_power_interfaces=['ibmc'],
enabled_management_interfaces=['ibmc'],
enabled_vendor_interfaces=['ibmc'],
enabled_raid_interfaces=['ibmc'])
self.node = obj_utils.create_test_node(
self.context, driver='ibmc', driver_info=self.driver_info)
self.ibmc = utils.parse_driver_info(self.node)


@ -0,0 +1,166 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for iBMC RAID interface."""
import mock
from oslo_utils import importutils
from ironic.common import exception
from ironic.conductor import task_manager
from ironic.drivers.modules.ibmc import raid as ibmc_raid
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.drivers.modules.ibmc import base
constants = importutils.try_import('ibmc_client.constants')
ibmc_client = importutils.try_import('ibmc_client')
ibmc_error = importutils.try_import('ibmc_client.exceptions')
INFO_DICT = db_utils.get_test_ibmc_info()
class IbmcRAIDTestCase(base.IBMCTestCase):
def setUp(self):
super(IbmcRAIDTestCase, self).setUp()
self.driver = mock.Mock(raid=ibmc_raid.IbmcRAID())
self.target_raid_config = {
"logical_disks": [
{
'size_gb': 200,
'raid_level': 0,
'is_root_volume': True
},
{
'size_gb': 'MAX',
'raid_level': 5
}
]
}
self.node.target_raid_config = self.target_raid_config
self.node.save()
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_create_configuration_without_delete(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.apply_raid_configuration.return_value = None
with task_manager.acquire(self.context, self.node.uuid) as task:
result = task.driver.raid.create_configuration(
task, create_root_volume=True, create_nonroot_volumes=True,
delete_existing=False)
self.assertIsNone(result, "synchronous create raid configuration "
"should return None")
conn.system.storage.apply_raid_configuration.assert_called_once_with(
self.node.target_raid_config.get('logical_disks')
)
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_create_configuration_with_delete(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.delete_all_raid_configuration.return_value = None
conn.system.storage.apply_raid_configuration.return_value = None
with task_manager.acquire(self.context, self.node.uuid) as task:
result = task.driver.raid.create_configuration(
task, create_root_volume=True, create_nonroot_volumes=True,
delete_existing=True)
self.assertIsNone(result, "synchronous create raid configuration "
"should return None")
conn.system.storage.delete_all_raid_configuration.assert_called_once()
conn.system.storage.apply_raid_configuration.assert_called_once_with(
self.node.target_raid_config.get('logical_disks')
)
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_create_configuration_without_nonroot(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.delete_all_raid_configuration.return_value = None
conn.system.storage.apply_raid_configuration.return_value = None
with task_manager.acquire(self.context, self.node.uuid) as task:
result = task.driver.raid.create_configuration(
task, create_root_volume=True, create_nonroot_volumes=False,
delete_existing=True)
self.assertIsNone(result, "synchronous create raid configuration "
"should return None")
conn.system.storage.delete_all_raid_configuration.assert_called_once()
conn.system.storage.apply_raid_configuration.assert_called_once_with(
[{'size_gb': 200, 'raid_level': 0, 'is_root_volume': True}]
)
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_create_configuration_without_root(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.delete_all_raid_configuration.return_value = None
conn.system.storage.apply_raid_configuration.return_value = None
with task_manager.acquire(self.context, self.node.uuid) as task:
result = task.driver.raid.create_configuration(
task, create_root_volume=False, create_nonroot_volumes=True,
delete_existing=True)
self.assertIsNone(result, "synchronous create raid configuration "
"should return None")
conn.system.storage.delete_all_raid_configuration.assert_called_once()
conn.system.storage.apply_raid_configuration.assert_called_once_with(
[{'size_gb': 'MAX', 'raid_level': 5}]
)
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_create_configuration_failed(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.delete_all_raid_configuration.return_value = None
conn.system.storage.apply_raid_configuration.side_effect = (
ibmc_error.IBMCClientError
)
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(
exception.IBMCError, 'create iBMC RAID configuration',
task.driver.raid.create_configuration, task,
create_root_volume=True, create_nonroot_volumes=True,
delete_existing=True)
conn.system.storage.delete_all_raid_configuration.assert_called_once()
conn.system.storage.apply_raid_configuration.assert_called_once_with(
self.node.target_raid_config.get('logical_disks')
)
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_delete_configuration_success(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.delete_all_raid_configuration.return_value = None
with task_manager.acquire(self.context, self.node.uuid) as task:
result = task.driver.raid.delete_configuration(task)
self.assertIsNone(result, "synchronous delete raid configuration "
"should return None")
conn.system.storage.delete_all_raid_configuration.assert_called_once()
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_sync_delete_configuration_failed(self, connect_ibmc):
conn = self.mock_ibmc_conn(connect_ibmc)
conn.system.storage.delete_all_raid_configuration.side_effect = (
ibmc_error.IBMCClientError
)
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaisesRegex(
exception.IBMCError, 'delete iBMC RAID configuration',
task.driver.raid.delete_configuration, task)
conn.system.storage.delete_all_raid_configuration.assert_called_once()


@ -57,6 +57,24 @@ class IBMCVendorTestCase(base.IBMCTestCase):
with task_manager.acquire(self.context, self.node.uuid,
shared=True) as task:
seq = task.driver.vendor.boot_up_seq(task)
conn.system.get.assert_called_once_with()
connect_ibmc.assert_called_once_with(**self.ibmc)
self.assertEqual(expected, seq)
@mock.patch.object(ibmc_client, 'connect', autospec=True)
def test_list_raid_controller(self, connect_ibmc):
# Mocks
conn = self.mock_ibmc_conn(connect_ibmc)
ctrl = mock.Mock()
summary = mock.Mock()
ctrl.summary.return_value = summary
conn.system.storage.list.return_value = [ctrl]
with task_manager.acquire(self.context, self.node.uuid,
shared=True) as task:
summaries = task.driver.vendor.get_raid_controller_list(task)
ctrl.summary.assert_called_once_with()
conn.system.storage.list.assert_called_once_with()
connect_ibmc.assert_called_once_with(**self.ibmc)
self.assertEqual([summary], summaries)


@ -16,7 +16,9 @@
from ironic.conductor import task_manager
from ironic.drivers.modules.ibmc import management as ibmc_mgmt
from ironic.drivers.modules.ibmc import power as ibmc_power
from ironic.drivers.modules.ibmc import raid as ibmc_raid
from ironic.drivers.modules.ibmc import vendor as ibmc_vendor
from ironic.drivers.modules import inspector
from ironic.drivers.modules import iscsi_deploy
from ironic.drivers.modules import noop
from ironic.drivers.modules import pxe
@ -31,7 +33,9 @@ class IBMCHardwareTestCase(db_base.DbTestCase):
self.config(enabled_hardware_types=['ibmc'],
enabled_power_interfaces=['ibmc'],
enabled_management_interfaces=['ibmc'],
enabled_vendor_interfaces=['ibmc'],
enabled_raid_interfaces=['ibmc'],
enabled_inspect_interfaces=['inspector', 'no-inspect'])
def test_default_interfaces(self):
node = obj_utils.create_test_node(self.context, driver='ibmc')
@ -41,7 +45,8 @@ class IBMCHardwareTestCase(db_base.DbTestCase):
self.assertIsInstance(task.driver.power,
ibmc_power.IBMCPower)
self.assertIsInstance(task.driver.boot, pxe.PXEBoot)
self.assertIsInstance(task.driver.deploy, iscsi_deploy.ISCSIDeploy)
self.assertIsInstance(task.driver.console, noop.NoConsole)
self.assertIsInstance(task.driver.raid, ibmc_raid.IbmcRAID)
self.assertIsInstance(task.driver.vendor, ibmc_vendor.IBMCVendor)
self.assertIsInstance(task.driver.inspect, inspector.Inspector)


@ -0,0 +1,5 @@
---
features:
- |
Adds a RAID interface for the ``ibmc`` driver, which includes the
``delete_configuration`` and ``create_configuration`` steps.


@ -135,6 +135,7 @@ ironic.hardware.interfaces.power =
ironic.hardware.interfaces.raid =
agent = ironic.drivers.modules.agent:AgentRAID
fake = ironic.drivers.modules.fake:FakeRAID
ibmc = ironic.drivers.modules.ibmc.raid:IbmcRAID
idrac = ironic.drivers.modules.drac.raid:DracRAID
idrac-wsman = ironic.drivers.modules.drac.raid:DracWSManRAID
ilo5 = ironic.drivers.modules.ilo.raid:Ilo5RAID