Rename discoverd -> inspector
As agreed at the summit, I'm renaming the Python modules and making some adjustments:

* This is a breaking change, so the version is bumped to 2.0.0
* Used this chance to split conf options over proper sections
* RELEASES.rst is gone; it's too hard to keep up-to-date, and git does a
  better job at keeping history anyway
* Dropped deprecated option ports_for_inactive_interfaces
* Dropped old /v1/discover endpoint and the associated client call
* No longer set on_discovery and newly_discovered in Node.extra
  (deprecated since 1.0.0, superseded by the get status API)
* Default firewall chain name is "ironic-inspector" and is configurable

Notes:

* Some links will be updated after the real move.
* Stable branches will probably use the old name.
* Some usage of the word "discovery" is left in the context of "discovered data".
* The DIB element will probably be deprecated, so leaving it alone for now.
* Some usages of the word "discovery" in the README will be updated later,
  to keep this patch a bit smaller.
* Ramdisk code will be moved to IPA, so not touching it too much.

Change-Id: I59f1f5bfb1248ab69973dab845aa028df493054e
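Because the Python package itself is renamed, third-party code that does ``import ironic_discoverd`` breaks at import time after this change. A minimal, hypothetical fallback sketch (the ``import_inspector`` helper is illustrative only and not part of this commit):

```python
import sys
import types


def import_inspector():
    """Return the inspector package, trying the new 2.0.0 name first."""
    for name in ("ironic_inspector", "ironic_discoverd"):
        try:
            return __import__(name)
        except ImportError:
            continue
    raise ImportError("neither ironic_inspector nor ironic_discoverd is installed")


# Simulate an environment where only the pre-2.0.0 package exists:
sys.modules.setdefault("ironic_discoverd", types.ModuleType("ironic_discoverd"))
mod = import_inspector()
print(mod.__name__)
```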
parent b16c7b2223
commit d6404d2f99
@@ -59,22 +59,22 @@ Github::

 Run the service with::

-    .tox/py27/bin/ironic-discoverd --config-file example.conf
+    .tox/py27/bin/ironic-inspector --config-file example.conf

 Of course you may have to modify ``example.conf`` to match your OpenStack
 environment.

-You can develop and test **ironic-discoverd** using
+You can develop and test **ironic-inspector** using
 `DevStack <http://docs.openstack.org/developer/devstack/>`_ plugin - see
 https://etherpad.openstack.org/p/DiscoverdDevStack for the current status.

 Writing a Plugin
 ~~~~~~~~~~~~~~~~

-**ironic-discoverd** allows to hook your code into data processing chain after
+**ironic-inspector** allows to hook your code into data processing chain after
 introspection. Inherit ``ProcessingHook`` class defined in
-`ironic_discoverd.plugins.base
-<https://github.com/stackforge/ironic-discoverd/blob/master/ironic_discoverd/plugins/base.py>`_
+`ironic_inspector.plugins.base
+<https://github.com/stackforge/ironic-discoverd/blob/master/ironic_inspector/plugins/base.py>`_
 module and overwrite any or both of the following methods:

 ``before_processing(node_info)``
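The plugin interface touched by this hunk can be sketched as follows. The ``ProcessingHook`` stand-in below mirrors the documented ``before_processing(node_info)`` signature so the example stays self-contained; the real base class lives in ``ironic_inspector.plugins.base``, and the ``RequireCPUsHook`` name is made up for illustration:

```python
class ProcessingHook(object):
    """Stand-in for ironic_inspector.plugins.base.ProcessingHook."""

    def before_processing(self, node_info):
        """Called on data received from the ramdisk, before node lookup."""
        pass


class RequireCPUsHook(ProcessingHook):
    """Example hook: reject ramdisk data that lacks a CPU count."""

    def before_processing(self, node_info):
        if not node_info.get("cpus"):
            raise ValueError("no CPU information received from the ramdisk")


hook = RequireCPUsHook()
hook.before_processing({"cpus": 4})  # accepted silently
```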
HTTP-API.rst (14 changed lines)
@@ -1,7 +1,7 @@

 HTTP API
 --------

-By default **ironic-discoverd** listens on ``0.0.0.0:5050``, port
+By default **ironic-inspector** listens on ``0.0.0.0:5050``, port
 can be changed in configuration. Protocol is JSON over HTTP.

 The HTTP API consist of these endpoints:

@@ -17,7 +17,7 @@ Requires X-Auth-Token header with Keystone token for authentication.

 Optional parameters:

-* ``new_ipmi_password`` if set, **ironic-discoverd** will try to set IPMI
+* ``new_ipmi_password`` if set, **ironic-inspector** will try to set IPMI
   password on the machine to this value. Power credentials validation will be
   skipped and manual power on will be required. See `Setting IPMI
   credentials`_ for details.

@@ -28,7 +28,7 @@ Optional parameters:

 Response:

-* 202 - accepted discovery request
+* 202 - accepted introspection request
 * 400 - bad request
 * 401, 403 - missing or invalid authentication
 * 404 - node cannot be found

@@ -36,7 +36,7 @@ Response:

 Get Introspection Status
 ~~~~~~~~~~~~~~~~~~~~~~~~

-``GET /v1/introspection/<UUID>`` get hardware discovery status.
+``GET /v1/introspection/<UUID>`` get hardware introspection status.

 Requires X-Auth-Token header with Keystone token for authentication.

@@ -49,14 +49,14 @@ Response:

 Response body: JSON dictionary with keys:

-* ``finished`` (boolean) whether discovery is finished
+* ``finished`` (boolean) whether introspection is finished
 * ``error`` error string or ``null``

 Ramdisk Callback
 ~~~~~~~~~~~~~~~~

-``POST /v1/continue`` internal endpoint for the discovery ramdisk to post
-back discovered data. Should not be used for anything other than implementing
+``POST /v1/continue`` internal endpoint for the ramdisk to post back
+discovered data. Should not be used for anything other than implementing
 the ramdisk. Request body: JSON dictionary with at least these keys:

 * ``cpus`` number of CPU
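The status response documented in the hunk above is a small JSON dictionary; a self-contained sketch of consuming it (the response body here is a made-up example matching the documented ``finished``/``error`` keys):

```python
import json

# Example body matching the documented response of
# GET /v1/introspection/<UUID>: keys "finished" and "error".
body = '{"finished": true, "error": null}'

status = json.loads(body)
if status["finished"] and status["error"] is None:
    result = "introspection finished successfully"
elif status["finished"]:
    result = "introspection failed: %s" % status["error"]
else:
    result = "introspection still in progress"
print(result)  # -> introspection finished successfully
```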
@@ -1,6 +1,6 @@

 include example.conf
 include LICENSE
-include ironic-discoverd.8
+include ironic-inspector.8
 include requirements.txt
 include test-requirements.txt
 include plugin-requirements.txt
README.rst (126 changed lines)
@@ -7,14 +7,14 @@ properties discovery is a process of getting hardware parameters required for

 scheduling from a bare metal node, given it's power management credentials
 (e.g. IPMI address, user name and password).

-A special *discovery ramdisk* is required to collect the information on a
+A special ramdisk is required to collect the information on a
 node. The default one can be built using diskimage-builder_ and
 `ironic-discoverd-ramdisk element`_ (see Configuration_ below).

-Support for **ironic-discoverd** is present in `Tuskar UI`_ --
+Support for **ironic-inspector** is present in `Tuskar UI`_ --
 OpenStack Horizon plugin for TripleO_.

-**ironic-discoverd** requires OpenStack Juno (2014.2) release or newer.
+**ironic-inspector** requires OpenStack Kilo (2015.1) release or newer.

 Please use launchpad_ to report bugs and ask questions. Use PyPI_ for
 downloads and accessing the released version of this README. Refer to
@@ -27,12 +27,15 @@ CONTRIBUTING.rst_ for instructions on how to contribute.

 .. _PyPI: https://pypi.python.org/pypi/ironic-discoverd
 .. _CONTRIBUTING.rst: https://github.com/stackforge/ironic-discoverd/blob/master/CONTRIBUTING.rst

+.. note::
+    **ironic-inspector** was called *ironic-discoverd* before version 2.0.0.
+
 Workflow
 --------

 Usual hardware introspection flow is as follows:

-* Operator installs undercloud with **ironic-discoverd**
+* Operator installs undercloud with **ironic-inspector**
   (e.g. using instack-undercloud_).

 * Operator enrolls nodes into Ironic either manually or by uploading CSV file
@@ -43,19 +46,18 @@ Usual hardware introspection flow is as follows:

   `Node States`_.

 * Operator sends nodes on introspection either manually using
-  **ironic-discoverd** API (see Usage_) or again via `Tuskar UI`_.
+  **ironic-inspector** API (see Usage_) or again via `Tuskar UI`_.

-* On receiving node UUID **ironic-discoverd**:
+* On receiving node UUID **ironic-inspector**:

   * validates node power credentials, current power and provisioning states,
   * allows firewall access to PXE boot service for the nodes,
-  * issues reboot command for the nodes, so that they boot the
-    discovery ramdisk.
+  * issues reboot command for the nodes, so that they boot the ramdisk.

-* The discovery ramdisk collects the required information and posts it back
-  to **ironic-discoverd**.
+* The ramdisk collects the required information and posts it back to
+  **ironic-inspector**.

-* On receiving data from the discovery ramdisk, **ironic-discoverd**:
+* On receiving data from the ramdisk, **ironic-inspector**:

   * validates received data,
   * finds the node in Ironic database using it's BMC address (MAC address in
@@ -63,13 +65,13 @@ Usual hardware introspection flow is as follows:

 * fills missing node properties with received data and creates missing ports.

 .. note::
-    **ironic-discoverd** is responsible to create Ironic ports for some or all
-    NIC's found on the node. **ironic-discoverd** is also capable of
+    **ironic-inspector** is responsible to create Ironic ports for some or all
+    NIC's found on the node. **ironic-inspector** is also capable of
     deleting ports that should not be present. There are two important
     configuration options that affect this behavior: ``add_ports`` and
     ``keep_ports`` (please refer to ``example.conf`` for detailed explanation).

-    Default values as of **ironic-discoverd** 1.1.0 are ``add_ports=pxe``,
+    Default values as of **ironic-inspector** 1.1.0 are ``add_ports=pxe``,
     ``keep_ports=all``, which means that only one port will be added, which is
     associated with NIC the ramdisk PXE booted from. No ports will be deleted.
     This setting ensures that deploying on introspected nodes will succeed
@@ -96,32 +98,23 @@ package and should be done separately.

 Installation
 ------------

-**ironic-discoverd** is available as an RPM from Fedora 22 repositories or from
-Juno (and later) `RDO <https://www.rdoproject.org/>`_ for Fedora 20, 21
-and EPEL 7. It will be installed and preconfigured if you used
-instack-undercloud_ to build your undercloud.
-Otherwise after enabling required repositories install it using::
+Install from PyPI_ (you may want to use virtualenv to isolate your
+environment)::

-    yum install openstack-ironic-discoverd
+    pip install ironic-inspector

-To install only Python packages (including the client), use::
-
-    yum install python-ironic-discoverd
-
-Alternatively (e.g. if you need the latest version), you can install package
-from PyPI_ (you may want to use virtualenv to isolate your environment)::
-
-    pip install ironic-discoverd
-
-Finally, there is a `DevStack <http://docs.openstack.org/developer/devstack/>`_
-plugin for **ironic-discoverd** - see
+Also there is a `DevStack <http://docs.openstack.org/developer/devstack/>`_
+plugin for **ironic-inspector** - see
 https://etherpad.openstack.org/p/DiscoverdDevStack for the current status.

+Finally, some distributions (e.g. Fedora) provide **ironic-inspector**
+packaged, some of them - under its old name *ironic-discoverd*.
+
 Configuration
 ~~~~~~~~~~~~~

 Copy ``example.conf`` to some permanent place
-(``/etc/ironic-discoverd/discoverd.conf`` is what is used in the RPM).
+(e.g. ``/etc/ironic-inspector/inspector.conf``).
 Fill in at least these configuration values:

 * ``os_username``, ``os_password``, ``os_tenant_name`` - Keystone credentials
@@ -130,7 +123,7 @@ Fill in at least these configuration values:

 * ``os_auth_url``, ``identity_uri`` - Keystone endpoints for validating
   authentication tokens and checking user roles;

-* ``database`` - where you want **ironic-discoverd** SQLite database
+* ``database`` - where you want **ironic-inspector** SQLite database
   to be placed;

 * ``dnsmasq_interface`` - interface on which ``dnsmasq`` (or another DHCP
@@ -160,8 +153,8 @@ As for PXE boot environment, you'll need:

   is always advised).

 * You need PXE boot server (e.g. *dnsmasq*) running on **the same** machine as
-  **ironic-discoverd**. Don't do any firewall configuration:
-  **ironic-discoverd** will handle it for you. In **ironic-discoverd**
+  **ironic-inspector**. Don't do any firewall configuration:
+  **ironic-inspector** will handle it for you. In **ironic-inspector**
   configuration file set ``dnsmasq_interface`` to the interface your
   PXE boot server listens on. Here is an example *dnsmasq.conf*::

@@ -191,15 +184,17 @@ As for PXE boot environment, you'll need:

 instead of ``discoverd_callback_url``. Modify ``pxelinux.cfg/default``
 accordingly if you have one of these.

-Here is *discoverd.conf* you may end up with::
+Here is *inspector.conf* you may end up with::

-    [discoverd]
+    [DEFAULT]
     debug = false
+    [ironic]
     identity_uri = http://127.0.0.1:35357
     os_auth_url = http://127.0.0.1:5000/v2.0
     os_username = admin
     os_password = password
     os_tenant_name = admin
+    [firewall]
     dnsmasq_interface = br-ctlplane

 .. note::
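The options split over sections in the hunk above can be parsed with the stdlib ``configparser``; a minimal sketch using the sample values from the diff (the inline sample string stands in for reading ``/etc/ironic-inspector/inspector.conf``):

```python
import configparser

# Sample matching the new sectioned inspector.conf layout shown above.
sample = """
[DEFAULT]
debug = false

[ironic]
identity_uri = http://127.0.0.1:35357
os_auth_url = http://127.0.0.1:5000/v2.0
os_username = admin
os_password = password
os_tenant_name = admin

[firewall]
dnsmasq_interface = br-ctlplane
"""

conf = configparser.ConfigParser()
conf.read_string(sample)
print(conf.get("firewall", "dnsmasq_interface"))  # -> br-ctlplane
```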
@@ -211,40 +206,41 @@ Here is *discoverd.conf* you may end up with::

 Running
 ~~~~~~~

-If you installed **ironic-discoverd** from the RPM, you already have
+If you installed **ironic-inspector** from the RPM, you might already have
 a *systemd* unit, so you can::

-    systemctl enable openstack-ironic-discoverd
-    systemctl start openstack-ironic-discoverd
+    systemctl enable openstack-ironic-inspector
+    systemctl start openstack-ironic-inspector

 Otherwise run as ``root``::

-    ironic-discoverd --config-file /etc/ironic-discoverd/discoverd.conf
+    ironic-inspector --config-file /etc/ironic-inspector/inspector.conf

 .. note::
-    Running as ``root`` is not required if **ironic-discoverd** does not
+    Running as ``root`` is not required if **ironic-inspector** does not
     manage the firewall (i.e. ``manage_firewall`` is set to ``false`` in the
     configuration file).

 A good starting point for writing your own *systemd* unit should be `one used
-in Fedora <http://pkgs.fedoraproject.org/cgit/openstack-ironic-discoverd.git/plain/openstack-ironic-discoverd.service>`_.
+in Fedora <http://pkgs.fedoraproject.org/cgit/openstack-ironic-discoverd.git/plain/openstack-ironic-discoverd.service>`_
+(note usage of old name).

 Usage
 -----

-**ironic-discoverd** has a simple client library for Python and a CLI tool
+**ironic-inspector** has a simple client library for Python and a CLI tool
 bundled with it.

-Client library is in module ``ironic_discoverd.client``, every call
+Client library is in module ``ironic_inspector.client``, every call
 accepts additional optional arguments:

-* ``base_url`` **ironic-discoverd** API endpoint, defaults to
+* ``base_url`` **ironic-inspector** API endpoint, defaults to
   ``127.0.0.1:5050``,
 * ``auth_token`` Keystone authentication token.

 CLI tool is based on OpenStackClient_ with prefix
 ``openstack baremetal introspection``. Accepts optional argument
-``--discoverd-url`` with the **ironic-discoverd** API endpoint.
+``--inspector-url`` with the **ironic-inspector** API endpoint.

 * **Start introspection on a node**:

@@ -256,7 +252,7 @@ CLI tool is based on OpenStackClient_ with prefix

 * ``uuid`` - Ironic node UUID;
 * ``new_ipmi_username`` and ``new_ipmi_password`` - if these are set,
-  **ironic-discoverd** will switch to manual power on and assigning IPMI
+  **ironic-inspector** will switch to manual power on and assigning IPMI
   credentials on introspection. See `Setting IPMI Credentials`_ for details.

 * **Query introspection status**:
@@ -279,7 +275,7 @@ Using from Ironic API

 ~~~~~~~~~~~~~~~~~~~~~

 Ironic Kilo introduced support for hardware introspection under name of
-"inspection". **ironic-discoverd** introspection is supported for some generic
+"inspection". **ironic-inspector** introspection is supported for some generic
 drivers, please refer to `Ironic inspection documentation`_ for details.

 Node States
@@ -312,17 +308,17 @@ Node States

 Setting IPMI Credentials
 ~~~~~~~~~~~~~~~~~~~~~~~~

-If you have physical access to your nodes, you can use **ironic-discoverd** to
+If you have physical access to your nodes, you can use **ironic-inspector** to
 set IPMI credentials for them without knowing the original ones. The workflow
 is as follows:

 * Ensure nodes will PXE boot on the right network by default.

-* Set ``enable_setting_ipmi_credentials = true`` in the **ironic-discoverd**
+* Set ``enable_setting_ipmi_credentials = true`` in the **ironic-inspector**
   configuration file.

 * Enroll nodes in Ironic with setting their ``ipmi_address`` only. This step
-  allows **ironic-discoverd** to distinguish nodes.
+  allows **ironic-inspector** to distinguish nodes.

 * Set maintenance mode on nodes. That's an important step, otherwise Ironic
   might interfere with introspection process.
@@ -336,16 +332,16 @@ is as follows:

 * Manually power on the nodes and wait.

 * After introspection is finished (watch nodes power state or use
-  **ironic-discoverd** status API) you can turn maintenance mode off.
+  **ironic-inspector** status API) you can turn maintenance mode off.

 Note that due to various limitations on password value in different BMC,
-**ironic-discoverd** will only accept passwords with length between 1 and 20
+**ironic-inspector** will only accept passwords with length between 1 and 20
 consisting only of letters and numbers.

 Plugins
 ~~~~~~~

-**ironic-discoverd** heavily relies on plugins for data processing. Even the
+**ironic-inspector** heavily relies on plugins for data processing. Even the
 standard functionality is largely based on plugins. Set ``processing_hooks``
 option in the configuration file to change the set of plugins to be run on
 introspection data. Note that order does matter in this option.
@@ -389,7 +385,7 @@ Errors when starting introspection

 In Kilo release with *python-ironicclient* 0.5.0 or newer Ironic
 defaults to reporting provision state ``AVAILABLE`` for newly enrolled
-nodes. **ironic-discoverd** will refuse to conduct introspection in
+nodes. **ironic-inspector** will refuse to conduct introspection in
 this state, as such nodes are supposed to be used by Nova for scheduling.
 See `Node States`_ for instructions on how to put nodes into
 the correct state.
@@ -403,7 +399,7 @@ There may be 3 reasons why introspection can time out after some time

 #. Fatal failure in processing chain before node was found in the local cache.
    See `Troubleshooting data processing`_ for the hints.

-#. Failure to load discovery ramdisk on the target node. See `Troubleshooting
+#. Failure to load the ramdisk on the target node. See `Troubleshooting
    PXE boot`_ for the hints.

 #. Failure during ramdisk run. See `Troubleshooting ramdisk run`_ for the
@@ -411,17 +407,19 @@ There may be 3 reasons why introspection can time out after some time

 Troubleshooting data processing
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In this case **ironic-discoverd** logs should give a good idea what went wrong.
+In this case **ironic-inspector** logs should give a good idea what went wrong.
 E.g. for RDO or Fedora the following command will output the full log::

-    sudo journalctl -u openstack-ironic-discoverd
+    sudo journalctl -u openstack-ironic-inspector

+(use ``openstack-ironic-discoverd`` for version < 2.0.0).
+
 .. note::
     Service name and specific command might be different for other Linux
-    distributions.
+    distributions (and for old version of **ironic-inspector**).

 If ``ramdisk_error`` plugin is enabled and ``ramdisk_logs_dir`` configuration
-option is set, **ironic-discoverd** will store logs received from the ramdisk
+option is set, **ironic-inspector** will store logs received from the ramdisk
 to the ``ramdisk_logs_dir`` directory. This depends, however, on the ramdisk
 implementation.
@@ -436,7 +434,9 @@ on. You may need to restart introspection.

 Another source of information is DHCP and TFTP server logs. Their location
 depends on how the servers were installed and run. For RDO or Fedora use::

-    $ sudo journalctl -u openstack-ironic-discoverd-dnsmasq
+    $ sudo journalctl -u openstack-ironic-inspector-dnsmasq

+(use ``openstack-ironic-discoverd-dnsmasq`` for version < 2.0.0).
+
 The last resort is ``tcpdump`` utility. Use something like
 ::
@@ -458,7 +458,7 @@ sure that:

    propagating,

 #. there is no additional firewall rules preventing access to port 67 on the
-   machine where *ironic-discoverd* and its DHCP server are installed.
+   machine where *ironic-inspector* and its DHCP server are installed.

 If you see node receiving DHCP address and then failing to get kernel and/or
 ramdisk or to boot them, make sure that:
RELEASES.rst (211 changed lines)
@ -1,211 +0,0 @@
|
||||||
Release Notes
|
|
||||||
-------------
|
|
||||||
|
|
||||||
1.2 Series
|
|
||||||
~~~~~~~~~~
|
|
||||||
|
|
||||||
See `1.2.0 release tracking page`_ for details.
|
|
||||||
|
|
||||||
**Upgrade Notes**
|
|
||||||
|
|
||||||
**Major Features**
|
|
||||||
|
|
||||||
**Other Changes**
|
|
||||||
|
|
||||||
**Known Issues**
|
|
||||||
|
|
||||||
.. _1.2.0 release tracking page: https://bugs.launchpad.net/ironic-discoverd/+milestone/1.2.0

1.1 Series
~~~~~~~~~~

See `1.1.0 release tracking page`_ for details.

**Upgrade Notes**

* This version no longer supports ancient ramdisks that sent ``macs`` instead
  of ``interfaces``. It also raises an exception if no valid interfaces were
  found after processing.

* The ``identity_uri`` parameter should be set to the Keystone admin endpoint.

* ``overwrite_existing`` is now enabled by default.

* Running the service as

  ::

    $ ironic-discoverd /path/to/config

  is no longer supported, use

  ::

    $ ironic-discoverd --config-file /path/to/config

**Major Features**

* Default to only creating a port for the NIC that the ramdisk was PXE booted
  from, if such information is provided by the ramdisk as the
  ``boot_interface`` field. Adjustable by the ``add_ports`` option.

  See `better-boot-interface-detection blueprint
  <https://blueprints.launchpad.net/ironic-discoverd/+spec/better-boot-interface-detection>`_
  for details.

* The `Setting IPMI Credentials`_ feature is considered stable now and is
  exposed in the client. It still needs to be enabled via configuration.

  See `setup-ipmi-credentials-take2 blueprint
  <https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials-take2>`_
  for what changed since 1.0.0 (tl;dr: everything).

* Proper CLI tool implemented as a plugin for OpenStackClient_.

  The client also now properly sets the error message from the server in its
  exception. This might be a breaking change, if you relied on the exception
  message previously.

* The default value for the ``overwrite_existing`` configuration option was
  flipped, matching the default behavior for Ironic inspection.

* Switch to `oslo.config <http://docs.openstack.org/developer/oslo.config/>`_
  for configuration management (many thanks to Yuiko Takada).

**Other Changes**

* New option ``add_ports`` allows precise control over which ports to add,
  replacing the deprecated ``ports_for_inactive_interfaces``.

* Experimental plugin ``edeploy`` to use with `eDeploy hardware detection and
  classification utilities`_.

  See `eDeploy blueprint`_ for details.

* Plugin ``root_device_hint`` for in-band root device discovery.

* Plugin ``ramdisk_error`` is now enabled by default.

* Serious authentication issues were fixed; ``keystonemiddleware`` is a new
  requirement.

* Basic support for i18n via oslo.i18n.

**Known Issues**

.. _1.1.0 release tracking page: https://bugs.launchpad.net/ironic-discoverd/+milestone/1.1.0
.. _Setting IPMI Credentials: https://github.com/stackforge/ironic-discoverd#setting-ipmi-credentials
.. _OpenStackClient: http://docs.openstack.org/developer/python-openstackclient/
.. _eDeploy hardware detection and classification utilities: https://pypi.python.org/pypi/hardware
.. _eDeploy blueprint: https://blueprints.launchpad.net/ironic-discoverd/+spec/edeploy

1.0 Series
~~~~~~~~~~

1.0 is the first feature-complete release series. It's also the first series
to follow standard OpenStack processes from the beginning. All 0.2 series
users are advised to upgrade.

See `1.0.0 release tracking page`_ for details.

**1.0.1 release**

This maintenance release fixed a serious problem with authentication and
unfortunately brought new upgrade requirements:

* Dependency on *keystonemiddleware*;
* New configuration option ``identity_uri``, defaulting to localhost.

**Upgrade notes**

Action required:

* Fill in the ``database`` option in the configuration file before upgrading.
* Stop relying on **ironic-discoverd** setting maintenance mode itself.
* Stop relying on the ``discovery_timestamp`` node extra field.

Action recommended:

* Switch your init scripts to use ``ironic-discoverd --config-file <path>``
  instead of just ``ironic-discoverd <path>``.

* Stop relying on ``on_discovery`` and ``newly_discovered`` being set in the
  node ``extra`` field during and after introspection. Use the new get status
  HTTP endpoint and client API instead.

* Switch from the ``discover`` to the ``introspect`` HTTP endpoint and client
  API.

**Major features**

* Introspection now times out by default; set the ``timeout`` option to alter.

* New API ``GET /v1/introspection/<uuid>`` and ``client.get_status`` for
  getting discovery status.

  See `get-status-api blueprint`_ for details.

* New API ``POST /v1/introspection/<uuid>`` and ``client.introspect``
  is now used to initiate discovery; ``/v1/discover`` is deprecated.

  See `v1 API reform blueprint`_ for details.

* ``/v1/continue`` is now synchronous:

  * Errors are properly returned to the caller
  * This call now returns its value as a JSON dict (currently empty)

* Add support for plugins that hook into the data processing pipeline. Refer
  to Plugins_ for information on bundled plugins and to CONTRIBUTING.rst_ for
  information on how to write your own.

  See `plugin-architecture blueprint`_ for details.

* Support for the OpenStack Kilo release and the new Ironic state machine -
  see `Kilo state machine blueprint`_.

  As a side effect, introspection no longer depends on maintenance mode.
  Stop putting the node in maintenance mode before introspection.

* Cache nodes under introspection in a local SQLite database. The
  ``database`` configuration option sets where to place this database.
  Improves performance by making fewer calls to the Ironic API and makes it
  possible to get the results of introspection.

**Other Changes**

* Firewall management can be disabled completely via the ``manage_firewall``
  option.

* Experimental support for updating IPMI credentials from within the ramdisk.

  Enable via the configuration option ``enable_setting_ipmi_credentials``.
  Beware that this feature lacks proper testing, is not officially supported
  yet and is subject to change without keeping backward compatibility.

  See `setup-ipmi-credentials blueprint`_ for details.

**Known Issues**

* Due to `bug 1415040 <https://bugs.launchpad.net/ironic-discoverd/+bug/1415040>`_
  it is required to set IP addresses instead of host names in the
  ``ipmi_address``/``ilo_address``/``drac_host`` node ``driver_info`` field
  for **ironic-discoverd** to work properly.

.. _1.0.0 release tracking page: https://bugs.launchpad.net/ironic-discoverd/+milestone/1.0.0
.. _setup-ipmi-credentials blueprint: https://blueprints.launchpad.net/ironic-discoverd/+spec/setup-ipmi-credentials
.. _Plugins: https://github.com/stackforge/ironic-discoverd#plugins
.. _CONTRIBUTING.rst: https://github.com/stackforge/ironic-discoverd/blob/master/CONTRIBUTING.rst
.. _plugin-architecture blueprint: https://blueprints.launchpad.net/ironic-discoverd/+spec/plugin-architecture
.. _get-status-api blueprint: https://blueprints.launchpad.net/ironic-discoverd/+spec/get-status-api
.. _Kilo state machine blueprint: https://blueprints.launchpad.net/ironic-discoverd/+spec/kilo-state-machine
.. _v1 API reform blueprint: https://blueprints.launchpad.net/ironic-discoverd/+spec/v1-api-reform

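The get status API above returns a JSON object describing the introspection outcome. A minimal, self-contained sketch of interpreting such a response follows; the ``finished`` and ``error`` field names are an assumption for illustration, not guaranteed by the notes above:

```python
import json


def interpret_status(body):
    """Interpret a GET /v1/introspection/<uuid> response body.

    The 'finished'/'error' keys are assumed for illustration; the API
    described above only promises a JSON object with status details.
    """
    status = json.loads(body)
    if not status.get('finished'):
        return 'introspection still running'
    if status.get('error'):
        return 'introspection failed: %s' % status['error']
    return 'introspection finished successfully'
```

A caller polling the endpoint would typically loop until the first branch no longer triggers.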
0.2 Series
~~~~~~~~~~

0.2 series is designed to work with OpenStack Juno release.
Not supported any more.

0.1 Series
~~~~~~~~~~

First stable release series. Not supported any more.
@@ -1,4 +0,0 @@
-#!/usr/bin/env python
-
-from ironic_discoverd_ramdisk import main
-main.main()

@@ -0,0 +1,4 @@
+#!/usr/bin/env python
+
+from ironic_inspector_ramdisk import main
+main.main()
example.conf

@@ -1,52 +1,123 @@
 [DEFAULT]
 
-[discoverd]
-
 #
-# From ironic_discoverd
+# From ironic_inspector
 #
 
-# Keystone authentication endpoint. (string value)
-#os_auth_url = http://127.0.0.1:5000/v2.0
+# IP to listen on. (string value)
+# Deprecated group/name - [discoverd]/listen_address
+#listen_address = 0.0.0.0
 
-# User name for accessing Keystone and Ironic API. (string value)
-#os_username =
+# Port to listen on. (integer value)
+# Deprecated group/name - [discoverd]/listen_port
+#listen_port = 5050
 
-# Password for accessing Keystone and Ironic API. (string value)
-#os_password =
+# Whether to authenticate with Keystone on public HTTP endpoints. Note
+# that introspection ramdisk postback endpoint is never authenticated.
+# (boolean value)
+# Deprecated group/name - [discoverd]/authenticate
+#authenticate = true
 
-# Tenant name for accessing Keystone and Ironic API. (string value)
-#os_tenant_name =
+# SQLite3 database to store nodes under introspection, required. Do
+# not use :memory: here, it won't work. (string value)
+# Deprecated group/name - [discoverd]/database
+#database =
 
-# Keystone admin endpoint. (string value)
-#identity_uri = http://127.0.0.1:35357
+# Debug mode enabled/disabled. (boolean value)
+# Deprecated group/name - [discoverd]/debug
+#debug = false
 
-# Number of attempts to do when trying to connect to Ironic on start
-# up. (integer value)
-#ironic_retry_attempts = 5
+# Timeout after which introspection is considered failed, set to 0 to
+# disable. (integer value)
+# Deprecated group/name - [discoverd]/timeout
+#timeout = 3600
 
-# Amount of time between attempts to connect to Ironic on start up.
-# (integer value)
-#ironic_retry_period = 5
+# For how much time (in seconds) to keep status information about
+# nodes after introspection was finished for them. Default value is 1
+# week. (integer value)
+# Deprecated group/name - [discoverd]/node_status_keep_time
+#node_status_keep_time = 604800
+
+# Amount of time in seconds, after which repeat clean up of timed out
+# nodes and old nodes status information. (integer value)
+# Deprecated group/name - [discoverd]/clean_up_period
+#clean_up_period = 60
+
+
+[firewall]
+
+#
+# From ironic_inspector
+#
 
 # Whether to manage firewall rules for PXE port. (boolean value)
+# Deprecated group/name - [discoverd]/manage_firewall
 #manage_firewall = true
 
 # Interface on which dnsmasq listens, the default is for VM's. (string
 # value)
+# Deprecated group/name - [discoverd]/dnsmasq_interface
 #dnsmasq_interface = br-ctlplane
 
 # Amount of time in seconds, after which repeat periodic update of
 # firewall. (integer value)
+# Deprecated group/name - [discoverd]/firewall_update_period
 #firewall_update_period = 15
 
+# iptables chain name to use. (string value)
+#firewall_chain = ironic-inspector
+
+
+[ironic]
+
+#
+# From ironic_inspector
+#
+
+# Keystone authentication endpoint. (string value)
+# Deprecated group/name - [discoverd]/os_auth_url
+#os_auth_url = http://127.0.0.1:5000/v2.0
+
+# User name for accessing Keystone and Ironic API. (string value)
+# Deprecated group/name - [discoverd]/os_username
+#os_username =
+
+# Password for accessing Keystone and Ironic API. (string value)
+# Deprecated group/name - [discoverd]/os_password
+#os_password =
+
+# Tenant name for accessing Keystone and Ironic API. (string value)
+# Deprecated group/name - [discoverd]/os_tenant_name
+#os_tenant_name =
+
+# Keystone admin endpoint. (string value)
+# Deprecated group/name - [discoverd]/identity_uri
+#identity_uri = http://127.0.0.1:35357
+
+# Number of attempts to do when trying to connect to Ironic on start
+# up. (integer value)
+# Deprecated group/name - [discoverd]/ironic_retry_attempts
+#ironic_retry_attempts = 5
+
+# Amount of time between attempts to connect to Ironic on start up.
+# (integer value)
+# Deprecated group/name - [discoverd]/ironic_retry_period
+#ironic_retry_period = 5
+
+
+[processing]
+
+#
+# From ironic_inspector
+#
+
 # Which MAC addresses to add as ports during introspection. Possible
 # values: all (all MAC addresses), active (MAC addresses of NIC with
 # IP addresses), pxe (only MAC address of NIC node PXE booted from,
 # falls back to "active" if PXE MAC is not supplied by the ramdisk).
 # (string value)
 # Allowed values: all, active, pxe
+# Deprecated group/name - [discoverd]/add_ports
 #add_ports = pxe
 
 # Which ports (already present on a node) to keep after introspection.
@@ -54,64 +125,36 @@
 # which MACs were present in introspection data), added (keep only
 # MACs that we added during introspection). (string value)
 # Allowed values: all, present, added
+# Deprecated group/name - [discoverd]/keep_ports
 #keep_ports = all
 
-# Timeout after which introspection is considered failed, set to 0 to
-# disable. (integer value)
-#timeout = 3600
-
-# For how much time (in seconds) to keep status information about
-# nodes after introspection was finished for them. Default value is 1
-# week. (integer value)
-#node_status_keep_time = 604800
-
-# Amount of time in seconds, after which repeat clean up of timed out
-# nodes and old nodes status information. (integer value)
-#clean_up_period = 60
-
 # Whether to overwrite existing values in node database. Disable this
 # option to make introspection a non-destructive operation. (boolean
 # value)
+# Deprecated group/name - [discoverd]/overwrite_existing
 #overwrite_existing = true
 
 # Whether to enable setting IPMI credentials during introspection.
 # This is an experimental and not well tested feature, use at your own
 # risk. (boolean value)
+# Deprecated group/name - [discoverd]/enable_setting_ipmi_credentials
 #enable_setting_ipmi_credentials = false
 
-# IP to listen on. (string value)
-#listen_address = 0.0.0.0
-
-# Port to listen on. (integer value)
-#listen_port = 5050
-
-# Whether to authenticate with Keystone on public HTTP endpoints. Note
-# that introspection ramdisk postback endpoint is never authenticated.
-# (boolean value)
-#authenticate = true
-
-# SQLite3 database to store nodes under introspection, required. Do
-# not use :memory: here, it won't work. (string value)
-#database =
-
 # Comma-separated list of enabled hooks for processing pipeline. Hook
 # 'scheduler' updates the node with the minimum properties required by
 # the Nova scheduler. Hook 'validate_interfaces' ensures that valid
 # NIC data was provided by the ramdisk.Do not exclude these two unless
 # you really know what you're doing. (string value)
+# Deprecated group/name - [discoverd]/processing_hooks
 #processing_hooks = ramdisk_error,scheduler,validate_interfaces
 
-# Debug mode enabled/disabled. (boolean value)
-#debug = false
-
 # If set, logs from ramdisk will be stored in this directory. (string
 # value)
+# Deprecated group/name - [discoverd]/ramdisk_logs_dir
 #ramdisk_logs_dir = <None>
 
 # Whether to store ramdisk logs even if it did not return an error
 # message (dependent upon "ramdisk_logs_dir" option being set).
 # (boolean value)
+# Deprecated group/name - [discoverd]/always_store_ramdisk_logs
 #always_store_ramdisk_logs = false
-
-# DEPRECATED: use add_ports. (boolean value)
-#ports_for_inactive_interfaces = false
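The ``add_ports`` semantics described in the comments above can be sketched as a small selection function; the interface dict shape (``mac``/``ip`` keys) is an assumption modeled on the option description, not taken from the real code:

```python
def select_macs(interfaces, add_ports='pxe', pxe_mac=None):
    """Pick which MAC addresses become Ironic ports.

    interfaces: list of dicts with 'mac' and optional 'ip' keys
    (assumed shape for illustration). Mirrors the option text above:
    all -> every MAC, active -> MACs of NICs with an IP address,
    pxe -> only the PXE boot MAC, falling back to 'active' when the
    ramdisk did not supply one.
    """
    if add_ports == 'all':
        return [i['mac'] for i in interfaces]
    if add_ports == 'pxe' and pxe_mac:
        return [pxe_mac]
    # 'active', or 'pxe' without a boot MAC supplied by the ramdisk
    return [i['mac'] for i in interfaces if i.get('ip')]
```

The fallback branch is why the comment says ``pxe`` "falls back to active": without a ``boot_interface`` value the function degrades to the IP-based filter.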
@@ -28,20 +28,23 @@ import unittest
 import mock
 import requests
 
-from ironic_discoverd import client
-from ironic_discoverd import main
-from ironic_discoverd.test import base
-from ironic_discoverd import utils
+from ironic_inspector import client
+from ironic_inspector import main
+from ironic_inspector.test import base
+from ironic_inspector import utils
 
 
 CONF = """
-[discoverd]
+[ironic]
 os_auth_url = http://url
 os_username = user
 os_password = password
 os_tenant_name = tenant
+[firewall]
 manage_firewall = False
+[processing]
 enable_setting_ipmi_credentials = True
+[DEFAULT]
 database = %(db_file)s
 """
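The test configuration above relies on options having moved from the old ``[discoverd]`` section into per-topic sections, with oslo.config's ``deprecated_group`` still honouring the old location. That fallback can be approximated with the standard library; the mapping below lists only a few options from the diff and is purely illustrative:

```python
import configparser

# Subset of the new option locations, per the diff above (illustrative).
NEW_SECTION = {
    'os_auth_url': 'ironic',
    'manage_firewall': 'firewall',
    'add_ports': 'processing',
    'listen_port': 'DEFAULT',
}


def lookup(conf, option):
    """Read an option from its new section, falling back to the
    deprecated [discoverd] section, mimicking deprecated_group."""
    section = NEW_SECTION.get(option, 'DEFAULT')
    for sect in (section, 'discoverd'):
        # has_option raises for missing regular sections, so guard;
        # 'DEFAULT' is special and always queryable.
        if sect == 'DEFAULT' or conf.has_section(sect):
            if conf.has_option(sect, option):
                return conf.get(sect, option)
    return None


old_style = configparser.ConfigParser()
old_style.read_string("[discoverd]\nmanage_firewall = False\n")
new_style = configparser.ConfigParser()
new_style.read_string("[firewall]\nmanage_firewall = True\n")
```

With this shim, both the pre-2.0 and the reorganized layouts resolve the same option name, which is the compatibility guarantee the real ``deprecated_group`` arguments provide.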
@@ -1,15 +1,15 @@
-.\" Manpage for ironic-discoverd.
-.TH man 8 "08 Oct 2014" "1.0" "ironic-discoverd man page"
+.\" Manpage for ironic-inspector.
+.TH man 8 "08 Oct 2014" "1.0" "ironic-inspector man page"
 .SH NAME
-ironic-discoverd \- hardware discovery daemon for OpenStack Ironic.
+ironic-inspector \- hardware introspection daemon for OpenStack Ironic.
 .SH SYNOPSIS
-ironic-discoverd CONFFILE
+ironic-inspector CONFFILE
 .SH DESCRIPTION
-This command starts ironic-discoverd service, which starts and finishes
+This command starts ironic-inspector service, which starts and finishes
 hardware discovery and maintains firewall rules for nodes accessing PXE
 boot service (usually dnsmasq).
 .SH OPTIONS
-The ironic-discoverd does not take any options. However, you should supply
+The ironic-inspector does not take any options. However, you should supply
 path to the configuration file.
 .SH SEE ALSO
 README page located at https://pypi.python.org/pypi/ironic-discoverd
@@ -11,5 +11,5 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-__version_info__ = (1, 2, 0)
+__version_info__ = (2, 0, 0)
 __version__ = '%d.%d.%d' % __version_info__
@@ -11,16 +11,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from __future__ import print_function
-
-import argparse
-import json
-
 from oslo_utils import netutils
 import requests
 import six
 
-from ironic_discoverd.common.i18n import _
+from ironic_inspector.common.i18n import _
 
 
 _DEFAULT_URL = 'http://' + netutils.get_my_ipv4() + ':5050/v1'
@@ -38,7 +33,7 @@ def _prepare(base_url, auth_token):
 class ClientError(requests.HTTPError):
     """Error returned from a server."""
     def __init__(self, response):
-        # discoverd returns error message in body
+        # inspector returns error message in body
         msg = response.content.decode(_ERROR_ENCODING)
         super(ClientError, self).__init__(msg, response=response)
 
@@ -54,10 +49,10 @@ def introspect(uuid, base_url=None, auth_token=None,
     """Start introspection for a node.
 
     :param uuid: node uuid
-    :param base_url: *ironic-discoverd* URL in form: http://host:port[/ver],
+    :param base_url: *ironic-inspector* URL in form: http://host:port[/ver],
                      defaults to ``http://<current host>:5050/v1``.
     :param auth_token: Keystone authentication token.
-    :param new_ipmi_password: if set, *ironic-discoverd* will update IPMI
+    :param new_ipmi_password: if set, *ironic-inspector* will update IPMI
                               password to this value.
     :param new_ipmi_username: if new_ipmi_password is set, this values sets
                               new IPMI user name. Defaults to one in
@@ -79,9 +74,9 @@ def introspect(uuid, base_url=None, auth_token=None,
 def get_status(uuid, base_url=None, auth_token=None):
     """Get introspection status for a node.
 
-    New in ironic-discoverd version 1.0.0.
+    New in ironic-inspector version 1.0.0.
     :param uuid: node uuid.
-    :param base_url: *ironic-discoverd* URL in form: http://host:port[/ver],
+    :param base_url: *ironic-inspector* URL in form: http://host:port[/ver],
                      defaults to ``http://<current host>:5050/v1``.
     :param auth_token: Keystone authentication token.
     :raises: *requests* library HTTP errors.
@@ -94,44 +89,3 @@ def get_status(uuid, base_url=None, auth_token=None):
                        headers=headers)
     ClientError.raise_if_needed(res)
     return res.json()
-
-
-def discover(uuids, base_url=None, auth_token=None):
-    """Post node UUID's for discovery.
-
-    DEPRECATED. Use introspect instead.
-    """
-    if not all(isinstance(s, six.string_types) for s in uuids):
-        raise TypeError(_("Expected list of strings for uuids argument, "
-                          "got %s") % uuids)
-
-    base_url, headers = _prepare(base_url, auth_token)
-    headers['Content-Type'] = 'application/json'
-    res = requests.post(base_url + "/discover",
-                        data=json.dumps(uuids), headers=headers)
-    ClientError.raise_if_needed(res)
-
-
-if __name__ == '__main__':  # pragma: no cover
-    parser = argparse.ArgumentParser(description='Discover nodes.')
-    parser.add_argument('cmd', metavar='cmd',
-                        choices=['introspect', 'get_status'],
-                        help='command: introspect or get_status.')
-    parser.add_argument('uuid', metavar='UUID', type=str,
-                        help='node UUID.')
-    parser.add_argument('--base-url', dest='base_url', action='store',
-                        default=_DEFAULT_URL,
-                        help='base URL, default to localhost.')
-    parser.add_argument('--auth-token', dest='auth_token', action='store',
-                        default='',
-                        help='Keystone token.')
-    args = parser.parse_args()
-    func = globals()[args.cmd]
-    try:
-        res = func(uuid=args.uuid, base_url=args.base_url,
-                   auth_token=args.auth_token)
-    except Exception as exc:
-        print('Error:', exc)
-    else:
-        if res:
-            print(json.dumps(res))
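The error-propagation behaviour visible in the client diff above (the server puts the error message in the response body, and the client turns that body into the exception text) can be sketched without the real ``requests`` dependency; ``FakeResponse`` is a stand-in invented for illustration:

```python
class ClientError(Exception):
    """Mirrors the client's ClientError: the server returns the error
    message in the response body, which becomes the exception text."""

    def __init__(self, response):
        # inspector returns the error message in the body
        super().__init__(response.content.decode('utf-8'))
        self.response = response

    @classmethod
    def raise_if_needed(cls, response):
        """Raise a ClientError for 4xx/5xx responses, else do nothing."""
        if response.status_code >= 400:
            raise cls(response)


class FakeResponse:
    """Minimal stand-in for requests.Response (illustration only)."""

    def __init__(self, status_code, content):
        self.status_code = status_code
        self.content = content
```

This is the change the 1.1 notes call potentially breaking: callers that matched on the old, client-generated message now see the server's own text instead.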
|
@ -15,7 +15,7 @@
|
||||||
|
|
||||||
import oslo_i18n
|
import oslo_i18n
|
||||||
|
|
||||||
_translators = oslo_i18n.TranslatorFactory(domain='ironic-discoverd')
|
_translators = oslo_i18n.TranslatorFactory(domain='ironic-inspector')
|
||||||
|
|
||||||
# The primary translation function using the well-known name "_"
|
# The primary translation function using the well-known name "_"
|
||||||
_ = _translators.primary
|
_ = _translators.primary
|
|
@@ -17,42 +17,64 @@ from oslo_config import cfg
 VALID_ADD_PORTS_VALUES = ('all', 'active', 'pxe')
 VALID_KEEP_PORTS_VALUES = ('all', 'present', 'added')
 
 
-SERVICE_OPTS = [
+IRONIC_OPTS = [
     cfg.StrOpt('os_auth_url',
                default='http://127.0.0.1:5000/v2.0',
-               help='Keystone authentication endpoint.'),
+               help='Keystone authentication endpoint.',
+               deprecated_group='discoverd'),
     cfg.StrOpt('os_username',
                default='',
-               help='User name for accessing Keystone and Ironic API.'),
+               help='User name for accessing Keystone and Ironic API.',
+               deprecated_group='discoverd'),
     cfg.StrOpt('os_password',
                default='',
                help='Password for accessing Keystone and Ironic API.',
-               secret=True),
+               secret=True,
+               deprecated_group='discoverd'),
     cfg.StrOpt('os_tenant_name',
                default='',
-               help='Tenant name for accessing Keystone and Ironic API.'),
+               help='Tenant name for accessing Keystone and Ironic API.',
+               deprecated_group='discoverd'),
     cfg.StrOpt('identity_uri',
                default='http://127.0.0.1:35357',
-               help='Keystone admin endpoint.'),
+               help='Keystone admin endpoint.',
+               deprecated_group='discoverd'),
     cfg.IntOpt('ironic_retry_attempts',
                default=5,
                help='Number of attempts to do when trying to connect to '
-                    'Ironic on start up.'),
+                    'Ironic on start up.',
+               deprecated_group='discoverd'),
     cfg.IntOpt('ironic_retry_period',
                default=5,
                help='Amount of time between attempts to connect to Ironic '
-                    'on start up.'),
+                    'on start up.',
+               deprecated_group='discoverd'),
+]
+
+
+FIREWALL_OPTS = [
     cfg.BoolOpt('manage_firewall',
                 default=True,
-                help='Whether to manage firewall rules for PXE port.'),
+                help='Whether to manage firewall rules for PXE port.',
+                deprecated_group='discoverd'),
     cfg.StrOpt('dnsmasq_interface',
                default='br-ctlplane',
                help='Interface on which dnsmasq listens, the default is for '
-                    'VM\'s.'),
+                    'VM\'s.',
+               deprecated_group='discoverd'),
     cfg.IntOpt('firewall_update_period',
                default=15,
                help='Amount of time in seconds, after which repeat periodic '
-                    'update of firewall.'),
+                    'update of firewall.',
+               deprecated_group='discoverd'),
+    cfg.StrOpt('firewall_chain',
+               default='ironic-inspector',
+               help='iptables chain name to use.'),
+]
+
+
+PROCESSING_OPTS = [
     cfg.StrOpt('add_ports',
                default='pxe',
                help='Which MAC addresses to add as ports during '
@@ -61,7 +83,8 @@ SERVICE_OPTS = [
                     'addresses), pxe (only MAC address of NIC node PXE booted '
                     'from, falls back to "active" if PXE MAC is not supplied '
                     'by the ramdisk).',
-               choices=VALID_ADD_PORTS_VALUES),
+               choices=VALID_ADD_PORTS_VALUES,
+               deprecated_group='discoverd'),
     cfg.StrOpt('keep_ports',
                default='all',
                help='Which ports (already present on a node) to keep after '
@@ -69,45 +92,20 @@ SERVICE_OPTS = [
                     'all (do not delete anything), present (keep ports which MACs '
                     'were present in introspection data), added (keep only MACs '
                     'that we added during introspection).',
-               choices=VALID_KEEP_PORTS_VALUES),
-    cfg.IntOpt('timeout',
-               default=3600,
-               help='Timeout after which introspection is considered failed, '
-                    'set to 0 to disable.'),
-    cfg.IntOpt('node_status_keep_time',
-               default=604800,
-               help='For how much time (in seconds) to keep status '
-                    'information about nodes after introspection was '
-                    'finished for them. Default value is 1 week.'),
-    cfg.IntOpt('clean_up_period',
-               default=60,
-               help='Amount of time in seconds, after which repeat clean up '
-                    'of timed out nodes and old nodes status information.'),
+               choices=VALID_KEEP_PORTS_VALUES,
+               deprecated_group='discoverd'),
     cfg.BoolOpt('overwrite_existing',
                 default=True,
                 help='Whether to overwrite existing values in node database. '
                      'Disable this option to make introspection a '
-                     'non-destructive operation.'),
+                     'non-destructive operation.',
+                deprecated_group='discoverd'),
     cfg.BoolOpt('enable_setting_ipmi_credentials',
                 default=False,
                 help='Whether to enable setting IPMI credentials during '
                      'introspection. This is an experimental and not well '
-                     'tested feature, use at your own risk.'),
-    cfg.StrOpt('listen_address',
-               default='0.0.0.0',
-               help='IP to listen on.'),
-    cfg.IntOpt('listen_port',
-               default=5050,
+                     'tested feature, use at your own risk.',
+                deprecated_group='discoverd'),
help='Port to listen on.'),
|
|
||||||
cfg.BoolOpt('authenticate',
|
|
||||||
default=True,
|
|
||||||
help='Whether to authenticate with Keystone on public HTTP '
|
|
||||||
'endpoints. Note that introspection ramdisk postback '
|
|
||||||
'endpoint is never authenticated.'),
|
|
||||||
cfg.StrOpt('database',
|
|
||||||
default='',
|
|
||||||
help='SQLite3 database to store nodes under introspection, '
|
|
||||||
'required. Do not use :memory: here, it won\'t work.'),
|
|
||||||
cfg.StrOpt('processing_hooks',
|
cfg.StrOpt('processing_hooks',
|
||||||
default='ramdisk_error,scheduler,validate_interfaces',
|
default='ramdisk_error,scheduler,validate_interfaces',
|
||||||
help='Comma-separated list of enabled hooks for processing '
|
help='Comma-separated list of enabled hooks for processing '
|
||||||
|
@ -116,27 +114,74 @@ SERVICE_OPTS = [
|
||||||
'Hook \'validate_interfaces\' ensures that valid NIC '
|
'Hook \'validate_interfaces\' ensures that valid NIC '
|
||||||
'data was provided by the ramdisk.'
|
'data was provided by the ramdisk.'
|
||||||
'Do not exclude these two unless you really know what '
|
'Do not exclude these two unless you really know what '
|
||||||
'you\'re doing.'),
|
'you\'re doing.',
|
||||||
cfg.BoolOpt('debug',
|
deprecated_group='discoverd'),
|
||||||
default=False,
|
|
||||||
help='Debug mode enabled/disabled.'),
|
|
||||||
cfg.StrOpt('ramdisk_logs_dir',
|
cfg.StrOpt('ramdisk_logs_dir',
|
||||||
help='If set, logs from ramdisk will be stored in this '
|
help='If set, logs from ramdisk will be stored in this '
|
||||||
'directory.'),
|
'directory.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
cfg.BoolOpt('always_store_ramdisk_logs',
|
cfg.BoolOpt('always_store_ramdisk_logs',
|
||||||
default=False,
|
default=False,
|
||||||
help='Whether to store ramdisk logs even if it did not return '
|
help='Whether to store ramdisk logs even if it did not return '
|
||||||
'an error message (dependent upon "ramdisk_logs_dir" option '
|
'an error message (dependent upon "ramdisk_logs_dir" option '
|
||||||
'being set).'),
|
'being set).',
|
||||||
cfg.BoolOpt('ports_for_inactive_interfaces',
|
deprecated_group='discoverd'),
|
||||||
default=False,
|
|
||||||
help='DEPRECATED: use add_ports.'),
|
|
||||||
]
|
]
|
||||||
|
|
||||||
cfg.CONF.register_opts(SERVICE_OPTS, group='discoverd')
|
|
||||||
|
SERVICE_OPTS = [
|
||||||
|
cfg.StrOpt('listen_address',
|
||||||
|
default='0.0.0.0',
|
||||||
|
help='IP to listen on.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.IntOpt('listen_port',
|
||||||
|
default=5050,
|
||||||
|
help='Port to listen on.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.BoolOpt('authenticate',
|
||||||
|
default=True,
|
||||||
|
help='Whether to authenticate with Keystone on public HTTP '
|
||||||
|
'endpoints. Note that introspection ramdisk postback '
|
||||||
|
'endpoint is never authenticated.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.StrOpt('database',
|
||||||
|
default='',
|
||||||
|
help='SQLite3 database to store nodes under introspection, '
|
||||||
|
'required. Do not use :memory: here, it won\'t work.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.BoolOpt('debug',
|
||||||
|
default=False,
|
||||||
|
help='Debug mode enabled/disabled.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.IntOpt('timeout',
|
||||||
|
default=3600,
|
||||||
|
help='Timeout after which introspection is considered failed, '
|
||||||
|
'set to 0 to disable.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.IntOpt('node_status_keep_time',
|
||||||
|
default=604800,
|
||||||
|
help='For how much time (in seconds) to keep status '
|
||||||
|
'information about nodes after introspection was '
|
||||||
|
'finished for them. Default value is 1 week.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
cfg.IntOpt('clean_up_period',
|
||||||
|
default=60,
|
||||||
|
help='Amount of time in seconds, after which repeat clean up '
|
||||||
|
'of timed out nodes and old nodes status information.',
|
||||||
|
deprecated_group='discoverd'),
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
cfg.CONF.register_opts(SERVICE_OPTS)
|
||||||
|
cfg.CONF.register_opts(FIREWALL_OPTS, group='firewall')
|
||||||
|
cfg.CONF.register_opts(PROCESSING_OPTS, group='processing')
|
||||||
|
cfg.CONF.register_opts(IRONIC_OPTS, group='ironic')
|
||||||
|
|
||||||
|
|
||||||
def list_opts():
|
def list_opts():
|
||||||
return [
|
return [
|
||||||
('discoverd', SERVICE_OPTS)
|
('', SERVICE_OPTS),
|
||||||
|
('firewall', FIREWALL_OPTS),
|
||||||
|
('ironic', IRONIC_OPTS),
|
||||||
|
('processing', PROCESSING_OPTS),
|
||||||
]
|
]
|
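The config split above relies on `deprecated_group='discoverd'` so that operators with an old `[discoverd]` section keep working after options move to `[firewall]`, `[processing]`, etc. The sketch below mimics that lookup order with plain stdlib `configparser`; the real service uses oslo.config, and the `lookup` helper here is purely illustrative.

```python
# Illustrative fallback: read an option from its new section first,
# then from the deprecated [discoverd] section (assumption: this is
# not the real oslo.config implementation, just the same lookup order).
import configparser


def lookup(cfg, group, name, deprecated_group='discoverd', default=None):
    """Return option value from `group`, falling back to the old group."""
    for section in (group, deprecated_group):
        if cfg.has_option(section, name):
            return cfg.get(section, name)
    return default


cfg = configparser.ConfigParser()
cfg.read_string("""
[discoverd]
dnsmasq_interface = br-ctlplane
""")

# An old-style config file still resolves under the new section name:
value = lookup(cfg, 'firewall', 'dnsmasq_interface')
```

This is why the rename could bump to 2.0.0 without immediately breaking existing deployments: old section names are deprecated, not rejected.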
@@ -17,17 +17,17 @@ import subprocess
 
 from eventlet import semaphore
 from oslo_config import cfg
 
-from ironic_discoverd.common.i18n import _LE
-from ironic_discoverd import node_cache
-from ironic_discoverd import utils
+from ironic_inspector.common.i18n import _LE
+from ironic_inspector import node_cache
+from ironic_inspector import utils
 
 
-LOG = logging.getLogger("ironic_discoverd.firewall")
-NEW_CHAIN = 'discovery_temp'
-CHAIN = 'discovery'
+CONF = cfg.CONF
+LOG = logging.getLogger("ironic_inspector.firewall")
+NEW_CHAIN = None
+CHAIN = None
 INTERFACE = None
 LOCK = semaphore.BoundedSemaphore()
-CONF = cfg.CONF
 
 
 def _iptables(*args, **kwargs):
@@ -51,11 +51,14 @@ def init():
 
     Must be called one on start-up.
     """
-    if not CONF.discoverd.manage_firewall:
+    if not CONF.firewall.manage_firewall:
         return
 
-    global INTERFACE
-    INTERFACE = CONF.discoverd.dnsmasq_interface
+    global INTERFACE, CHAIN, NEW_CHAIN
+    INTERFACE = CONF.firewall.dnsmasq_interface
+    CHAIN = CONF.firewall.firewall_chain
+    NEW_CHAIN = CHAIN + '_temp'
 
     _clean_up(CHAIN)
     # Not really needed, but helps to validate that we have access to iptables
     _iptables('-N', CHAIN)
@@ -71,7 +74,7 @@ def _clean_up(chain):
 
 def clean_up():
     """Clean up everything before exiting."""
-    if not CONF.discoverd.manage_firewall:
+    if not CONF.firewall.manage_firewall:
         return
 
     _clean_up(CHAIN)
@@ -96,7 +99,7 @@ def update_filters(ironic=None):
 
     :param ironic: Ironic client instance, optional.
     """
-    if not CONF.discoverd.manage_firewall:
+    if not CONF.firewall.manage_firewall:
         return
 
     assert INTERFACE is not None
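The firewall hunk above makes the chain name configurable (`firewall_chain`, default `ironic-inspector`) and derives the temporary chain from it instead of the hardcoded `discovery`/`discovery_temp` pair. A minimal sketch of that derivation and of the kind of iptables command sequence the module issues on start-up; the exact `_clean_up` arguments are not shown in the diff, so the command list here is an assumption, and both function names are illustrative.

```python
# Sketch only: the real module shells out via its _iptables() helper.
def chain_names(base):
    """Derive the working chain and its temporary twin from the base name."""
    return base, base + '_temp'


def setup_commands(chain, interface):
    """Plausible iptables argument lists for start-up (assumed, not the
    module's real sequence): drop the jump rule, flush and delete any
    stale chain, then recreate it."""
    return [
        ('-D', 'INPUT', '-i', interface, '-j', chain),  # may fail if absent
        ('-F', chain),
        ('-X', chain),
        ('-N', chain),
    ]


chain, new_chain = chain_names('ironic-inspector')
```

With the default left in place, deployments only need to set `[firewall] firewall_chain` when the chain name collides with other tooling.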
@@ -20,21 +20,21 @@ import eventlet
 from ironicclient import exceptions
 from oslo_config import cfg
 
-from ironic_discoverd.common.i18n import _, _LI, _LW
-from ironic_discoverd import firewall
-from ironic_discoverd import node_cache
-from ironic_discoverd import utils
+from ironic_inspector.common.i18n import _, _LI, _LW
+from ironic_inspector import firewall
+from ironic_inspector import node_cache
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
 
-LOG = logging.getLogger("ironic_discoverd.introspect")
+LOG = logging.getLogger("ironic_inspector.introspect")
 PASSWORD_ACCEPTED_CHARS = set(string.ascii_letters + string.digits)
 PASSWORD_MAX_LENGTH = 20  # IPMI v2.0
 
 
 def _validate_ipmi_credentials(node, new_ipmi_credentials):
-    if not CONF.discoverd.enable_setting_ipmi_credentials:
+    if not CONF.processing.enable_setting_ipmi_credentials:
         raise utils.Error(
             _('IPMI credentials setup is disabled in configuration'))
 
@@ -113,9 +113,6 @@ def introspect(uuid, new_ipmi_credentials=None):
 
 
 def _background_introspect(ironic, cached_node):
-    patch = [{'op': 'add', 'path': '/extra/on_discovery', 'value': 'true'}]
-    utils.retry_on_conflict(ironic.node.update, cached_node.uuid, patch)
-
     # TODO(dtantsur): pagination
     macs = [p.address for p in ironic.node.list_ports(cached_node.uuid,
                                                       limit=0)]
@@ -147,4 +144,4 @@ def _background_introspect(ironic, cached_node):
     LOG.info(_LI('Introspection environment is ready for node %(node)s, '
                  'manual power on is required within %(timeout)d seconds') %
              {'node': cached_node.uuid,
-              'timeout': CONF.discoverd.timeout})
+              'timeout': CONF.timeout})
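The introspect module above constrains IPMI passwords via `PASSWORD_ACCEPTED_CHARS` and `PASSWORD_MAX_LENGTH` (20 characters per IPMI v2.0). The diff does not show how a password is generated, so the helper below is a hypothetical sketch of generating a random credential under exactly those constraints.

```python
# Assumed helper (not shown in the diff): build a random password from
# the same alphabet and length limit the module declares.
import random
import string

PASSWORD_ACCEPTED_CHARS = set(string.ascii_letters + string.digits)
PASSWORD_MAX_LENGTH = 20  # IPMI v2.0


def random_password(length=PASSWORD_MAX_LENGTH):
    """Random password from the accepted alphabet, at the maximum length."""
    rng = random.SystemRandom()  # OS entropy, suitable for credentials
    chars = sorted(PASSWORD_ACCEPTED_CHARS)
    return ''.join(rng.choice(chars) for _ in range(length))


password = random_password()
```

Restricting the alphabet to letters and digits sidesteps BMCs that mishandle shell-special characters in credentials.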
@@ -23,21 +23,21 @@ import flask
 from oslo_config import cfg
 from oslo_utils import uuidutils
 
-from ironic_discoverd.common.i18n import _, _LC, _LE, _LI, _LW
+from ironic_inspector.common.i18n import _, _LC, _LE, _LI, _LW
 # Import configuration options
-from ironic_discoverd import conf  # noqa
-from ironic_discoverd import firewall
-from ironic_discoverd import introspect
-from ironic_discoverd import node_cache
-from ironic_discoverd.plugins import base as plugins_base
-from ironic_discoverd import process
-from ironic_discoverd import utils
+from ironic_inspector import conf  # noqa
+from ironic_inspector import firewall
+from ironic_inspector import introspect
+from ironic_inspector import node_cache
+from ironic_inspector.plugins import base as plugins_base
+from ironic_inspector import process
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
 
 app = flask.Flask(__name__)
-LOG = logging.getLogger('ironic_discoverd.main')
+LOG = logging.getLogger('ironic_inspector.main')
 
 
 def convert_exceptions(func):
@@ -90,23 +90,6 @@ def api_introspection(uuid):
                           error=node_info.error or None)
 
 
-@app.route('/v1/discover', methods=['POST'])
-@convert_exceptions
-def api_discover():
-    utils.check_auth(flask.request)
-
-    data = flask.request.get_json(force=True)
-    LOG.debug("/v1/discover got JSON %s", data)
-
-    for uuid in data:
-        if not uuidutils.is_uuid_like(uuid):
-            raise utils.Error(_('Invalid UUID value'), code=400)
-
-    for uuid in data:
-        introspect.introspect(uuid)
-    return "", 202
-
-
 def periodic_update(period):  # pragma: no cover
     while True:
         LOG.debug('Running periodic update of filters')
@@ -136,9 +119,9 @@ def check_ironic_available():
     2. Keystone has already started
     3. Ironic has already started
     """
-    attempts = CONF.discoverd.ironic_retry_attempts
+    attempts = CONF.ironic.ironic_retry_attempts
     assert attempts >= 0
-    retry_period = CONF.discoverd.ironic_retry_period
+    retry_period = CONF.ironic.ironic_retry_period
     LOG.debug('Trying to connect to Ironic')
     for i in range(attempts + 1):  # one attempt always required
         try:
@@ -155,7 +138,7 @@ def check_ironic_available():
 
 
 def init():
-    if CONF.discoverd.authenticate:
+    if CONF.authenticate:
         utils.add_auth_middleware(app)
     else:
         LOG.warning(_LW('Starting unauthenticated, please check'
@@ -173,21 +156,21 @@ def init():
 
     LOG.info(_LI('Enabled processing hooks: %s'), hooks)
 
-    if CONF.discoverd.manage_firewall:
+    if CONF.firewall.manage_firewall:
         firewall.init()
-        period = CONF.discoverd.firewall_update_period
+        period = CONF.firewall.firewall_update_period
         eventlet.greenthread.spawn_n(periodic_update, period)
 
-    if CONF.discoverd.timeout > 0:
-        period = CONF.discoverd.clean_up_period
+    if CONF.timeout > 0:
+        period = CONF.clean_up_period
         eventlet.greenthread.spawn_n(periodic_clean_up, period)
     else:
         LOG.warning(_LW('Timeout is disabled in configuration'))
 
 
 def main(args=sys.argv[1:]):  # pragma: no cover
-    CONF(args, project='ironic-discoverd')
-    debug = CONF.discoverd.debug
+    CONF(args, project='ironic-inspector')
+    debug = CONF.debug
 
     logging.basicConfig(level=logging.DEBUG if debug else logging.INFO)
     for third_party in ('urllib3.connectionpool',
@@ -200,7 +183,7 @@ def main(args=sys.argv[1:]):  # pragma: no cover
     init()
     try:
         app.run(debug=debug,
-                host=CONF.discoverd.listen_address,
-                port=CONF.discoverd.listen_port)
+                host=CONF.listen_address,
+                port=CONF.listen_port)
     finally:
         firewall.clean_up()
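`check_ironic_available()` above retries the first connection because the inspector may come up before Keystone and Ironic do, attempting `ironic_retry_attempts + 1` times with `ironic_retry_period` between tries. A generic, self-contained sketch of that retry pattern; the `wait_for` name and its injectable `sleep` parameter are illustrative, not the module's API.

```python
# Generic retry-until-available loop, mirroring the loop structure above.
import time


def wait_for(connect, attempts, retry_period, sleep=time.sleep):
    """Call connect() up to attempts + 1 times; return its result or
    re-raise the last failure."""
    last_exc = None
    for _ in range(attempts + 1):  # one attempt always required
        try:
            return connect()
        except Exception as exc:
            last_exc = exc
            sleep(retry_period)
    raise last_exc


# Demo: a client that only comes up on the third attempt.
calls = []


def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError('not up yet')
    return 'client'


result = wait_for(flaky, attempts=5, retry_period=0, sleep=lambda _: None)
```

Injecting `sleep` keeps the loop testable without real delays, the same reason the production code reads the period from configuration.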
@@ -23,13 +23,13 @@ import time
 
 from oslo_config import cfg
 
-from ironic_discoverd.common.i18n import _, _LC, _LE
-from ironic_discoverd import utils
+from ironic_inspector.common.i18n import _, _LC, _LE
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
 
-LOG = logging.getLogger("ironic_discoverd.node_cache")
+LOG = logging.getLogger("ironic_inspector.node_cache")
 _DB_NAME = None
 _SCHEMA = """
 create table if not exists nodes
@@ -135,9 +135,9 @@ def init():
     """Initialize the database."""
     global _DB_NAME
 
-    _DB_NAME = CONF.discoverd.database.strip()
+    _DB_NAME = CONF.database.strip()
     if not _DB_NAME:
-        LOG.critical(_LC('Configuration option discoverd.database'
+        LOG.critical(_LC('Configuration option inspector.database'
                          ' should be set'))
         sys.exit(1)
 
@@ -269,13 +269,13 @@ def clean_up():
     :return: list of timed out node UUID's
     """
     status_keep_threshold = (time.time() -
-                             CONF.discoverd.node_status_keep_time)
+                             CONF.node_status_keep_time)
 
     with _db() as db:
         db.execute('delete from nodes where finished_at < ?',
                    (status_keep_threshold,))
 
-    timeout = CONF.discoverd.timeout
+    timeout = CONF.timeout
     if timeout <= 0:
         return []
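The `clean_up()` hunk above drops node status rows whose `finished_at` is older than `node_status_keep_time` (default 604800 seconds, one week). A self-contained sqlite3 demonstration of that exact deletion query, with the schema reduced to the two relevant columns; an in-memory database is fine here for a demo even though, as the option help warns, the service itself must use a file.

```python
# Demonstrates the age-based cleanup query from clean_up().
import sqlite3
import time

db = sqlite3.connect(':memory:')  # demo only; the service needs a file path
db.execute('create table nodes (uuid text, finished_at real)')

now = time.time()
db.execute('insert into nodes values (?, ?)', ('old', now - 700000))
db.execute('insert into nodes values (?, ?)', ('recent', now - 100))

# Same predicate as the service: anything finished before the threshold goes.
status_keep_threshold = now - 604800  # default node_status_keep_time: 1 week
db.execute('delete from nodes where finished_at < ?',
           (status_keep_threshold,))

remaining = [row[0] for row in db.execute('select uuid from nodes')]
```

Storing `finished_at` as a Unix timestamp keeps the cleanup a single indexed range comparison rather than date arithmetic in SQL.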
@@ -45,7 +45,7 @@ class ProcessingHook(object):  # pragma: no cover
 
         :param node: Ironic node as returned by the Ironic client, should not
                      be modified directly by the hook.
-        :param ports: Ironic ports created by discoverd, also should not be
+        :param ports: Ironic ports created by inspector, also should not be
                       updated directly.
         :param node_info: processed data from the ramdisk.
         :returns: tuple (node patches, port patches) where
@@ -74,9 +74,9 @@ def processing_hooks_manager(*args):
     global _HOOKS_MGR
     if _HOOKS_MGR is None:
         names = [x.strip()
-                 for x in CONF.discoverd.processing_hooks.split(',')
+                 for x in CONF.processing.processing_hooks.split(',')
                  if x.strip()]
-        _HOOKS_MGR = named.NamedExtensionManager('ironic_discoverd.hooks',
+        _HOOKS_MGR = named.NamedExtensionManager('ironic_inspector.hooks',
                                                  names=names,
                                                  invoke_on_load=True,
                                                  invoke_args=args,
@@ -19,14 +19,14 @@ details on how to use it. Note that this plugin requires a special ramdisk.
 
 import logging
 
-from ironic_discoverd.common.i18n import _LW
-from ironic_discoverd.plugins import base
+from ironic_inspector.common.i18n import _LW
+from ironic_inspector.plugins import base
 
-LOG = logging.getLogger('ironic_discoverd.plugins.edeploy')
+LOG = logging.getLogger('ironic_inspector.plugins.edeploy')
 
 
 class eDeployHook(base.ProcessingHook):
-    """Interact with eDeploy ramdisk for discovery data processing hooks."""
+    """Processing hook for saving additional data from eDeploy ramdisk."""
 
     def before_update(self, node, ports, node_info):
         """Store the hardware data from what has been discovered."""
@@ -15,10 +15,10 @@
 
 import logging
 
-from ironic_discoverd.plugins import base
+from ironic_inspector.plugins import base
 
 
-LOG = logging.getLogger('ironic_discoverd.plugins.example')
+LOG = logging.getLogger('ironic_inspector.plugins.example')
 
 
 class ExampleProcessingHook(base.ProcessingHook):  # pragma: no cover
@@ -15,15 +15,15 @@
 
 import logging
 
-from ironic_discoverd.common.i18n import _LI, _LW
-from ironic_discoverd.plugins import base
+from ironic_inspector.common.i18n import _LI, _LW
+from ironic_inspector.plugins import base
 
 
-LOG = logging.getLogger('ironic_discoverd.plugins.root_device_hint')
+LOG = logging.getLogger('ironic_inspector.plugins.root_device_hint')
 
 
 class RootDeviceHintHook(base.ProcessingHook):
-    """Interact with Instack ramdisk for discovery data processing hooks.
+    """Processing hook for learning the root device after RAID creation.
 
     The plugin can figure out the root device in 2 runs. First, it saves the
     discovered block device serials in node.extra. The second run will check
@@ -72,7 +72,7 @@ class RootDeviceHintHook(base.ProcessingHook):
             ], {}
 
         else:
-            # No previously discovered devices - save the discoverd block
+            # No previously discovered devices - save the inspector block
             # devices in node.extra
             return [
                 {'op': 'add',
@@ -21,15 +21,15 @@ import sys
 
 from oslo_config import cfg
 
-from ironic_discoverd.common.i18n import _, _LC, _LI, _LW
-from ironic_discoverd import conf
-from ironic_discoverd.plugins import base
-from ironic_discoverd import utils
+from ironic_inspector.common.i18n import _, _LC, _LI, _LW
+from ironic_inspector import conf
+from ironic_inspector.plugins import base
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
 
-LOG = logging.getLogger('ironic_discoverd.plugins.standard')
+LOG = logging.getLogger('ironic_inspector.plugins.standard')
 
 
 class SchedulerHook(base.ProcessingHook):
@@ -51,7 +51,7 @@ class SchedulerHook(base.ProcessingHook):
 
     def before_update(self, node, ports, node_info):
         """Update node with scheduler properties."""
-        overwrite = CONF.discoverd.overwrite_existing
+        overwrite = CONF.processing.overwrite_existing
         patch = [{'op': 'add', 'path': '/properties/%s' % key,
                   'value': str(node_info[key])}
                  for key in self.KEYS
@@ -63,28 +63,20 @@ class ValidateInterfacesHook(base.ProcessingHook):
     """Hook to validate network interfaces."""
 
     def __init__(self):
-        if CONF.discoverd.add_ports not in conf.VALID_ADD_PORTS_VALUES:
-            LOG.critical(_LC('Accepted values for [discoverd]add_ports are '
+        if CONF.processing.add_ports not in conf.VALID_ADD_PORTS_VALUES:
+            LOG.critical(_LC('Accepted values for [processing]add_ports are '
                              '%(valid)s, got %(actual)s'),
                          {'valid': conf.VALID_ADD_PORTS_VALUES,
-                          'actual': CONF.discoverd.add_ports})
+                          'actual': CONF.processing.add_ports})
             sys.exit(1)
 
-        if CONF.discoverd.keep_ports not in conf.VALID_KEEP_PORTS_VALUES:
-            LOG.critical(_LC('Accepted values for [discoverd]keep_ports are '
+        if CONF.processing.keep_ports not in conf.VALID_KEEP_PORTS_VALUES:
+            LOG.critical(_LC('Accepted values for [processing]keep_ports are '
                              '%(valid)s, got %(actual)s'),
                          {'valid': conf.VALID_KEEP_PORTS_VALUES,
-                          'actual': CONF.discoverd.keep_ports})
+                          'actual': CONF.processing.keep_ports})
             sys.exit(1)
 
-    def _ports_to_add(self):
-        if CONF.discoverd.ports_for_inactive_interfaces:
-            LOG.warning(_LW('Using deprecated option '
-                            '[discoverd]ports_for_inactive_interfaces'))
-            return 'all'
-        else:
-            return CONF.discoverd.add_ports
-
     def before_processing(self, node_info):
         """Validate information about network interfaces."""
         bmc_address = node_info.get('ipmi_address')
@@ -96,10 +88,9 @@ class ValidateInterfacesHook(base.ProcessingHook):
             if utils.is_valid_mac(iface.get('mac'))
         }
 
-        ports_to_add = self._ports_to_add()
         pxe_mac = node_info.get('boot_interface')
 
-        if ports_to_add == 'pxe' and pxe_mac:
+        if CONF.processing.add_ports == 'pxe' and pxe_mac:
             LOG.info(_LI('PXE boot interface was %s'), pxe_mac)
             if '-' in pxe_mac:
                 # pxelinux format: 01-aa-bb-cc-dd-ee-ff
@@ -110,7 +101,7 @@ class ValidateInterfacesHook(base.ProcessingHook):
                 n: iface for n, iface in valid_interfaces.items()
                 if iface['mac'].lower() == pxe_mac
             }
-        elif ports_to_add != 'all':
+        elif CONF.processing.add_ports != 'all':
             valid_interfaces = {
                 n: iface for n, iface in valid_interfaces.items()
                 if iface.get('ip')
@@ -139,10 +130,10 @@ class ValidateInterfacesHook(base.ProcessingHook):
 
     def before_update(self, node, ports, node_info):
         """Drop ports that are not present in the data."""
-        if CONF.discoverd.keep_ports == 'present':
+        if CONF.processing.keep_ports == 'present':
             expected_macs = {iface['mac']
                              for iface in node_info['all_interfaces'].values()}
-        elif CONF.discoverd.keep_ports == 'added':
+        elif CONF.processing.keep_ports == 'added':
             expected_macs = set(node_info['macs'])
         else:
             return
@@ -169,25 +160,25 @@ class RamdiskErrorHook(base.ProcessingHook):
         error = node_info.get('error')
         logs = node_info.get('logs')
 
-        if logs and (error or CONF.discoverd.always_store_ramdisk_logs):
+        if logs and (error or CONF.processing.always_store_ramdisk_logs):
             self._store_logs(logs, node_info)
 
         if error:
             raise utils.Error(_('Ramdisk reported error: %s') % error)
 
     def _store_logs(self, logs, node_info):
-        if not CONF.discoverd.ramdisk_logs_dir:
-            LOG.warn(_LW('Failed to store logs received from the discovery '
-                         'ramdisk because ramdisk_logs_dir configuration '
-                         'option is not set'))
+        if not CONF.processing.ramdisk_logs_dir:
+            LOG.warn(_LW('Failed to store logs received from the ramdisk '
+                         'because ramdisk_logs_dir configuration option '
+                         'is not set'))
            return
 
-        if not os.path.exists(CONF.discoverd.ramdisk_logs_dir):
-            os.makedirs(CONF.discoverd.ramdisk_logs_dir)
+        if not os.path.exists(CONF.processing.ramdisk_logs_dir):
+            os.makedirs(CONF.processing.ramdisk_logs_dir)
 
         time_fmt = datetime.datetime.utcnow().strftime(self.DATETIME_FORMAT)
         bmc_address = node_info.get('ipmi_address', 'unknown')
         file_name = 'bmc_%s_%s' % (bmc_address, time_fmt)
-        with open(os.path.join(CONF.discoverd.ramdisk_logs_dir, file_name),
+        with open(os.path.join(CONF.processing.ramdisk_logs_dir, file_name),
                   'wb') as fp:
             fp.write(base64.b64decode(logs))
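With `ports_for_inactive_interfaces` dropped, `ValidateInterfacesHook` now filters discovered NICs purely from `[processing] add_ports`: `pxe` keeps only the PXE boot NIC (normalizing the pxelinux `01-aa-bb-…` form), `active` keeps NICs with an IP, and `all` keeps everything. A pure-function sketch of that selection; the `filter_interfaces` name is illustrative, not the hook's API.

```python
# Sketch of the add_ports selection logic from before_processing().
def filter_interfaces(interfaces, add_ports, pxe_mac=None):
    """Select which discovered NICs should become Ironic ports."""
    if pxe_mac and '-' in pxe_mac:
        # pxelinux format: 01-aa-bb-cc-dd-ee-ff -> aa:bb:cc:dd:ee:ff
        pxe_mac = pxe_mac.split('-', 1)[1].replace('-', ':')
    if add_ports == 'pxe' and pxe_mac:
        return {n: i for n, i in interfaces.items()
                if i['mac'].lower() == pxe_mac}
    if add_ports != 'all':
        # 'active' (or 'pxe' without a boot MAC): NICs that obtained an IP
        return {n: i for n, i in interfaces.items() if i.get('ip')}
    return interfaces


ifaces = {
    'eth0': {'mac': 'aa:bb:cc:dd:ee:ff', 'ip': '10.0.0.2'},
    'eth1': {'mac': '11:22:33:44:55:66'},
}
pxe_only = filter_interfaces(ifaces, 'pxe', '01-aa-bb-cc-dd-ee-ff')
```

Note the documented fallback: in `pxe` mode without a boot MAC from the ramdisk, the selection degrades to `active` behavior rather than creating no ports at all.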
@@ -18,21 +18,21 @@ import logging
 import eventlet
 from ironicclient import exceptions
 
-from ironic_discoverd.common.i18n import _, _LE, _LI, _LW
-from ironic_discoverd import firewall
-from ironic_discoverd import node_cache
-from ironic_discoverd.plugins import base as plugins_base
-from ironic_discoverd import utils
+from ironic_inspector.common.i18n import _, _LE, _LI, _LW
+from ironic_inspector import firewall
+from ironic_inspector import node_cache
+from ironic_inspector.plugins import base as plugins_base
+from ironic_inspector import utils
 
 
-LOG = logging.getLogger("ironic_discoverd.process")
+LOG = logging.getLogger("ironic_inspector.process")
 
 _CREDENTIALS_WAIT_RETRIES = 10
 _CREDENTIALS_WAIT_PERIOD = 3
 
 
 def process(node_info):
-    """Process data from the discovery ramdisk.
+    """Process data from the ramdisk.
 
     This function heavily relies on the hooks to do the actual data processing.
     """
@@ -212,10 +212,5 @@ def _finish(ironic, cached_node):
         raise utils.Error(msg)
 
     cached_node.finished()
 
-    patch = [{'op': 'add', 'path': '/extra/newly_discovered', 'value': 'true'},
-             {'op': 'remove', 'path': '/extra/on_discovery'}]
-    utils.retry_on_conflict(ironic.node.update, cached_node.uuid, patch)
-
     LOG.info(_LI('Introspection finished successfully for node %s'),
              cached_node.uuid)
@@ -11,7 +11,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-"""OpenStackClient plugin for ironic-discoverd."""
+"""OpenStackClient plugin for ironic-inspector."""
 
 from __future__ import print_function
 
@@ -21,24 +21,24 @@ from cliff import command
 from cliff import show
 from openstackclient.common import utils
 
-from ironic_discoverd import client
+from ironic_inspector import client
 
 
-LOG = logging.getLogger('ironic_discoverd.shell')
+LOG = logging.getLogger('ironic_inspector.shell')
 API_NAME = 'baremetal-introspection'
-API_VERSION_OPTION = 'discoverd_api_version'
+API_VERSION_OPTION = 'inspector_api_version'
 DEFAULT_VERSION = '1'
 API_VERSIONS = {
-    "1": "ironic_discoverd.shell",
+    "1": "ironic_inspector.shell",
 }
 
 
 def build_option_parser(parser):
-    parser.add_argument('--discoverd-api-version',
-                        default=utils.env('DISCOVERD_VERSION',
+    parser.add_argument('--inspector-api-version',
+                        default=utils.env('INSPECTOR_VERSION',
                                           default=DEFAULT_VERSION),
-                        help='discoverd API version, only 1 is supported now '
-                             '(env: DISCOVERD_VERSION).')
+                        help='inspector API version, only 1 is supported now '
+                             '(env: INSPECTOR_VERSION).')
     return parser
 
@@ -50,17 +50,17 @@ class StartCommand(command.Command):
         _add_common_arguments(parser)
         parser.add_argument('--new-ipmi-username',
                             default=None,
-                            help='if set, *ironic-discoverd* will update IPMI '
+                            help='if set, *ironic-inspector* will update IPMI '
                                  'user name to this value')
         parser.add_argument('--new-ipmi-password',
                             default=None,
-                            help='if set, *ironic-discoverd* will update IPMI '
+                            help='if set, *ironic-inspector* will update IPMI '
                                  'password to this value')
         return parser
 
     def take_action(self, parsed_args):
         auth_token = self.app.client_manager.auth_ref.auth_token
-        client.introspect(parsed_args.uuid, base_url=parsed_args.discoverd_url,
+        client.introspect(parsed_args.uuid, base_url=parsed_args.inspector_url,
                           auth_token=auth_token,
                           new_ipmi_username=parsed_args.new_ipmi_username,
                           new_ipmi_password=parsed_args.new_ipmi_password)
@@ -80,7 +80,7 @@ class StatusCommand(show.ShowOne):
     def take_action(self, parsed_args):
         auth_token = self.app.client_manager.auth_ref.auth_token
         status = client.get_status(parsed_args.uuid,
-                                   base_url=parsed_args.discoverd_url,
+                                   base_url=parsed_args.inspector_url,
                                    auth_token=auth_token)
         return zip(*sorted(status.items()))
 
@@ -90,7 +90,7 @@ def _add_common_arguments(parser):
     parser.add_argument('uuid', help='baremetal node UUID')
     # FIXME(dtantsur): this should be in build_option_parser, but then it won't
     # be available in commands
-    parser.add_argument('--discoverd-url',
-                        default=utils.env('DISCOVERD_URL', default=None),
-                        help='discoverd URL, defaults to localhost '
-                             '(env: DISCOVERD_URL).')
+    parser.add_argument('--inspector-url',
+                        default=utils.env('INSPECTOR_URL', default=None),
+                        help='inspector URL, defaults to localhost '
+                             '(env: INSPECTOR_URL).')
@@ -17,11 +17,11 @@ import unittest
 import mock
 from oslo_config import cfg
 
-from ironic_discoverd.common import i18n
+from ironic_inspector.common import i18n
 # Import configuration options
-from ironic_discoverd import conf  # noqa
-from ironic_discoverd import node_cache
-from ironic_discoverd.plugins import base as plugins_base
+from ironic_inspector import conf  # noqa
+from ironic_inspector import node_cache
+from ironic_inspector.plugins import base as plugins_base
 
 CONF = cfg.CONF
 
@@ -33,11 +33,12 @@ def init_test_conf():
         # Unit tests
     except Exception:
         CONF.reset()
-    CONF.register_group(cfg.OptGroup('discoverd'))
-    if not CONF.discoverd.database:
+    for group in ('firewall', 'processing', 'ironic'):
+        CONF.register_group(cfg.OptGroup(group))
+    if not CONF.database:
         # Might be set in functional tests
         db_file = tempfile.NamedTemporaryFile()
-        CONF.set_override('database', db_file.name, 'discoverd')
+        CONF.set_override('database', db_file.name)
     else:
         db_file = None
     node_cache._DB_NAME = None
@@ -71,6 +72,6 @@ class NodeTest(BaseTest):
             uuid=self.uuid,
             power_state='power on',
             provision_state='inspecting',
-            extra={'on_discovery': 'true'},
+            extra={},
             instance_uuid=None,
             maintenance=False)
@@ -17,7 +17,7 @@ import mock
 from oslo_utils import netutils
 from oslo_utils import uuidutils
 
-from ironic_discoverd import client
+from ironic_inspector import client
 
 
 @mock.patch.object(client.requests, 'post', autospec=True,
@@ -86,36 +86,6 @@ class TestIntrospect(unittest.TestCase):
                                 client.introspect, self.uuid)
 
 
-@mock.patch.object(client.requests, 'post', autospec=True,
-                   **{'return_value.status_code': 200})
-class TestDiscover(unittest.TestCase):
-    def setUp(self):
-        super(TestDiscover, self).setUp()
-        self.uuid = uuidutils.generate_uuid()
-
-    def test_old_discover(self, mock_post):
-        uuid2 = uuidutils.generate_uuid()
-        client.discover([self.uuid, uuid2], base_url="http://host:port",
-                        auth_token="token")
-        mock_post.assert_called_once_with(
-            "http://host:port/v1/discover",
-            data='["%(uuid1)s", "%(uuid2)s"]' % {'uuid1': self.uuid,
-                                                 'uuid2': uuid2},
-            headers={'Content-Type': 'application/json',
-                     'X-Auth-Token': 'token'}
-        )
-
-    def test_invalid_input(self, _):
-        self.assertRaises(TypeError, client.discover, 42)
-        self.assertRaises(TypeError, client.discover, [42])
-
-    def test_failed(self, mock_post):
-        mock_post.return_value.status_code = 404
-        mock_post.return_value.content = b"boom"
-        self.assertRaisesRegexp(client.ClientError, "boom",
-                                client.discover, [self.uuid])
-
-
 @mock.patch.object(client.requests, 'get', autospec=True,
                    **{'return_value.status_code': 200})
 class TestGetStatus(unittest.TestCase):
@@ -17,10 +17,10 @@ import mock
 
 from oslo_config import cfg
 
-from ironic_discoverd import firewall
-from ironic_discoverd import node_cache
-from ironic_discoverd.test import base as test_base
-from ironic_discoverd import utils
+from ironic_inspector import firewall
+from ironic_inspector import node_cache
+from ironic_inspector.test import base as test_base
+from ironic_inspector import utils
 
 
 CONF = cfg.CONF
@@ -31,7 +31,7 @@ CONF = cfg.CONF
 class TestFirewall(test_base.NodeTest):
     def test_update_filters_without_manage_firewall(self, mock_get_client,
                                                     mock_iptables):
-        CONF.set_override('manage_firewall', False, 'discoverd')
+        CONF.set_override('manage_firewall', False, 'firewall')
         firewall.update_filters()
         self.assertEqual(0, mock_iptables.call_count)
 
@@ -39,10 +39,10 @@ class TestFirewall(test_base.NodeTest):
         firewall.init()
         init_expected_args = [
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport', '67',
-             '-j', 'discovery'),
-            ('-F', 'discovery'),
-            ('-X', 'discovery'),
-            ('-N', 'discovery')]
+             '-j', CONF.firewall.firewall_chain),
+            ('-F', CONF.firewall.firewall_chain),
+            ('-X', CONF.firewall.firewall_chain),
+            ('-N', CONF.firewall.firewall_chain)]
 
         call_args_list = mock_iptables.call_args_list
 
@@ -66,23 +66,23 @@ class TestFirewall(test_base.NodeTest):
 
         update_filters_expected_args = [
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery'),
-            ('-F', 'discovery'),
-            ('-X', 'discovery'),
-            ('-N', 'discovery'),
+             '67', '-j', CONF.firewall.firewall_chain),
+            ('-F', CONF.firewall.firewall_chain),
+            ('-X', CONF.firewall.firewall_chain),
+            ('-N', CONF.firewall.firewall_chain),
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery_temp'),
-            ('-F', 'discovery_temp'),
-            ('-X', 'discovery_temp'),
-            ('-N', 'discovery_temp'),
-            ('-A', 'discovery_temp', '-j', 'ACCEPT'),
+             '67', '-j', firewall.NEW_CHAIN),
+            ('-F', firewall.NEW_CHAIN),
+            ('-X', firewall.NEW_CHAIN),
+            ('-N', firewall.NEW_CHAIN),
+            ('-A', firewall.NEW_CHAIN, '-j', 'ACCEPT'),
             ('-I', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery_temp'),
+             '67', '-j', firewall.NEW_CHAIN),
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery'),
-            ('-F', 'discovery'),
-            ('-X', 'discovery'),
-            ('-E', 'discovery_temp', 'discovery')
+             '67', '-j', CONF.firewall.firewall_chain),
+            ('-F', CONF.firewall.firewall_chain),
+            ('-X', CONF.firewall.firewall_chain),
+            ('-E', firewall.NEW_CHAIN, CONF.firewall.firewall_chain)
         ]
 
         firewall.update_filters()
@@ -131,26 +131,26 @@ class TestFirewall(test_base.NodeTest):
 
         update_filters_expected_args = [
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery'),
-            ('-F', 'discovery'),
-            ('-X', 'discovery'),
-            ('-N', 'discovery'),
+             '67', '-j', CONF.firewall.firewall_chain),
+            ('-F', CONF.firewall.firewall_chain),
+            ('-X', CONF.firewall.firewall_chain),
+            ('-N', CONF.firewall.firewall_chain),
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery_temp'),
-            ('-F', 'discovery_temp'),
-            ('-X', 'discovery_temp'),
-            ('-N', 'discovery_temp'),
+             '67', '-j', firewall.NEW_CHAIN),
+            ('-F', firewall.NEW_CHAIN),
+            ('-X', firewall.NEW_CHAIN),
+            ('-N', firewall.NEW_CHAIN),
             # Blacklist
-            ('-A', 'discovery_temp', '-m', 'mac', '--mac-source',
+            ('-A', firewall.NEW_CHAIN, '-m', 'mac', '--mac-source',
              inactive_mac[0], '-j', 'DROP'),
-            ('-A', 'discovery_temp', '-j', 'ACCEPT'),
+            ('-A', firewall.NEW_CHAIN, '-j', 'ACCEPT'),
             ('-I', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery_temp'),
+             '67', '-j', firewall.NEW_CHAIN),
             ('-D', 'INPUT', '-i', 'br-ctlplane', '-p', 'udp', '--dport',
-             '67', '-j', 'discovery'),
-            ('-F', 'discovery'),
-            ('-X', 'discovery'),
-            ('-E', 'discovery_temp', 'discovery')
+             '67', '-j', CONF.firewall.firewall_chain),
+            ('-F', CONF.firewall.firewall_chain),
+            ('-X', CONF.firewall.firewall_chain),
+            ('-E', firewall.NEW_CHAIN, CONF.firewall.firewall_chain)
         ]
 
         firewall.update_filters(mock_get_client)
@@ -16,11 +16,11 @@ from ironicclient import exceptions
 import mock
 from oslo_config import cfg
 
-from ironic_discoverd import firewall
-from ironic_discoverd import introspect
-from ironic_discoverd import node_cache
-from ironic_discoverd.test import base as test_base
-from ironic_discoverd import utils
+from ironic_inspector import firewall
+from ironic_inspector import introspect
+from ironic_inspector import node_cache
+from ironic_inspector.test import base as test_base
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
@@ -35,11 +35,8 @@ class BaseTest(test_base.NodeTest):
             maintenance=True,
             # allowed with maintenance=True
             power_state='power on',
-            provision_state='foobar',
-            extra={'on_discovery': True})
+            provision_state='foobar')
         self.ports = [mock.Mock(address=m) for m in self.macs]
-        self.patch = [{'op': 'add', 'path': '/extra/on_discovery',
-                       'value': 'true'}]
         self.cached_node = mock.Mock(uuid=self.uuid, options={})
 
     def _prepare(self, client_mock):
@@ -67,7 +64,6 @@ class TestIntrospect(BaseTest):
         cli.node.validate.assert_called_once_with(self.uuid)
         cli.node.list_ports.assert_called_once_with(self.uuid, limit=0)
 
-        cli.node.update.assert_called_once_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         self.cached_node.add_attribute.assert_called_once_with('mac',
@@ -96,9 +92,6 @@ class TestIntrospect(BaseTest):
         cli = self._prepare(client_mock)
         cli.node.validate.side_effect = [exceptions.Conflict,
                                          mock.Mock(power={'result': True})]
-        cli.node.update.side_effect = [exceptions.Conflict,
-                                       exceptions.Conflict,
-                                       None]
         cli.node.set_boot_device.side_effect = [exceptions.Conflict,
                                                 None]
         cli.node.set_power_state.side_effect = [exceptions.Conflict,
@@ -111,7 +104,6 @@ class TestIntrospect(BaseTest):
         cli.node.validate.assert_called_with(self.uuid)
         cli.node.list_ports.assert_called_once_with(self.uuid, limit=0)
 
-        cli.node.update.assert_called_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         filters_mock.assert_called_with(cli)
@@ -131,7 +123,6 @@ class TestIntrospect(BaseTest):
 
         cli.node.get.assert_called_once_with(self.uuid)
 
-        cli.node.update.assert_called_once_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         cli.node.set_boot_device.assert_called_once_with(self.uuid,
@@ -151,14 +142,13 @@ class TestIntrospect(BaseTest):
 
         cli.node.get.assert_called_once_with(self.uuid)
 
-        cli.node.update.assert_called_once_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         self.assertFalse(cli.node.set_boot_device.called)
         add_mock.return_value.finished.assert_called_once_with(
             error=mock.ANY)
 
-    def test_juno_compat(self, client_mock, add_mock, filters_mock):
+    def test_with_maintenance(self, client_mock, add_mock, filters_mock):
         cli = client_mock.return_value
         cli.node.get.return_value = self.node_compat
         cli.node.validate.return_value = mock.Mock(power={'result': True})
@@ -173,8 +163,6 @@ class TestIntrospect(BaseTest):
         cli.node.list_ports.assert_called_once_with(self.node_compat.uuid,
                                                     limit=0)
 
-        cli.node.update.assert_called_once_with(self.node_compat.uuid,
-                                                self.patch)
         add_mock.assert_called_once_with(self.node_compat.uuid,
                                         bmc_address=None)
         add_mock.return_value.add_attribute.assert_called_once_with('mac',
@@ -195,7 +183,6 @@ class TestIntrospect(BaseTest):
 
         cli.node.list_ports.assert_called_once_with(self.uuid, limit=0)
 
-        cli.node.update.assert_called_once_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         self.assertFalse(self.cached_node.add_attribute.called)
@@ -221,7 +208,6 @@ class TestIntrospect(BaseTest):
         self.assertEqual(0, cli.node.list_ports.call_count)
         self.assertEqual(0, filters_mock.call_count)
         self.assertEqual(0, cli.node.set_power_state.call_count)
-        self.assertEqual(0, cli.node.update.call_count)
         self.assertFalse(add_mock.called)
 
     def test_failed_to_validate_node(self, client_mock, add_mock,
@@ -240,7 +226,6 @@ class TestIntrospect(BaseTest):
         self.assertEqual(0, cli.node.list_ports.call_count)
         self.assertEqual(0, filters_mock.call_count)
         self.assertEqual(0, cli.node.set_power_state.call_count)
-        self.assertEqual(0, cli.node.update.call_count)
         self.assertFalse(add_mock.called)
 
     def test_wrong_provision_state(self, client_mock, add_mock, filters_mock):
@@ -256,7 +241,6 @@ class TestIntrospect(BaseTest):
         self.assertEqual(0, cli.node.list_ports.call_count)
         self.assertEqual(0, filters_mock.call_count)
         self.assertEqual(0, cli.node.set_power_state.call_count)
-        self.assertEqual(0, cli.node.update.call_count)
         self.assertFalse(add_mock.called)
 
 
@@ -268,7 +252,8 @@ class TestIntrospect(BaseTest):
 class TestSetIpmiCredentials(BaseTest):
     def setUp(self):
         super(TestSetIpmiCredentials, self).setUp()
-        CONF.set_override('enable_setting_ipmi_credentials', True, 'discoverd')
+        CONF.set_override('enable_setting_ipmi_credentials', True,
+                          'processing')
         self.new_creds = ('user', 'password')
         self.cached_node.options['new_ipmi_credentials'] = self.new_creds
         self.node.maintenance = True
@@ -279,7 +264,6 @@ class TestSetIpmiCredentials(BaseTest):
 
         introspect.introspect(self.uuid, new_ipmi_credentials=self.new_creds)
 
-        cli.node.update.assert_called_once_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         filters_mock.assert_called_with(cli)
@@ -291,7 +275,7 @@ class TestSetIpmiCredentials(BaseTest):
 
     def test_disabled(self, client_mock, add_mock, filters_mock):
         CONF.set_override('enable_setting_ipmi_credentials', False,
-                          'discoverd')
+                          'processing')
         self._prepare(client_mock)
 
         self.assertRaisesRegexp(utils.Error, 'disabled',
@@ -312,7 +296,6 @@ class TestSetIpmiCredentials(BaseTest):
         introspect.introspect(self.uuid,
                               new_ipmi_credentials=(None, self.new_creds[1]))
 
-        cli.node.update.assert_called_once_with(self.uuid, self.patch)
         add_mock.assert_called_once_with(self.uuid,
                                         bmc_address=self.bmc_address)
         filters_mock.assert_called_with(cli)
@@ -18,15 +18,15 @@ import eventlet
 import mock
 from oslo_utils import uuidutils
 
-from ironic_discoverd import firewall
-from ironic_discoverd import introspect
-from ironic_discoverd import main
-from ironic_discoverd import node_cache
-from ironic_discoverd.plugins import base as plugins_base
-from ironic_discoverd.plugins import example as example_plugin
-from ironic_discoverd import process
-from ironic_discoverd.test import base as test_base
-from ironic_discoverd import utils
+from ironic_inspector import firewall
+from ironic_inspector import introspect
+from ironic_inspector import main
+from ironic_inspector import node_cache
+from ironic_inspector.plugins import base as plugins_base
+from ironic_inspector.plugins import example as example_plugin
+from ironic_inspector import process
+from ironic_inspector.test import base as test_base
+from ironic_inspector import utils
 from oslo_config import cfg
 
 CONF = cfg.CONF
@@ -37,12 +37,12 @@ class TestApi(test_base.BaseTest):
         super(TestApi, self).setUp()
         main.app.config['TESTING'] = True
         self.app = main.app.test_client()
-        CONF.set_override('authenticate', False, 'discoverd')
+        CONF.set_override('authenticate', False)
         self.uuid = uuidutils.generate_uuid()
 
     @mock.patch.object(introspect, 'introspect', autospec=True)
     def test_introspect_no_authentication(self, introspect_mock):
-        CONF.set_override('authenticate', False, 'discoverd')
+        CONF.set_override('authenticate', False)
         res = self.app.post('/v1/introspection/%s' % self.uuid)
         self.assertEqual(202, res.status_code)
         introspect_mock.assert_called_once_with(self.uuid,
@@ -50,7 +50,7 @@ class TestApi(test_base.BaseTest):
 
     @mock.patch.object(introspect, 'introspect', autospec=True)
     def test_introspect_set_ipmi_credentials(self, introspect_mock):
-        CONF.set_override('authenticate', False, 'discoverd')
+        CONF.set_override('authenticate', False)
         res = self.app.post('/v1/introspection/%s?new_ipmi_username=user&'
                             'new_ipmi_password=password' % self.uuid)
         self.assertEqual(202, res.status_code)
@@ -60,7 +60,7 @@ class TestApi(test_base.BaseTest):
 
     @mock.patch.object(introspect, 'introspect', autospec=True)
     def test_introspect_set_ipmi_credentials_no_user(self, introspect_mock):
-        CONF.set_override('authenticate', False, 'discoverd')
+        CONF.set_override('authenticate', False)
         res = self.app.post('/v1/introspection/%s?'
                             'new_ipmi_password=password' % self.uuid)
         self.assertEqual(202, res.status_code)
@@ -82,7 +82,7 @@ class TestApi(test_base.BaseTest):
     @mock.patch.object(introspect, 'introspect', autospec=True)
     def test_introspect_failed_authentication(self, introspect_mock,
                                               auth_mock):
-        CONF.set_override('authenticate', True, 'discoverd')
+        CONF.set_override('authenticate', True)
        auth_mock.side_effect = utils.Error('Boom', code=403)
         res = self.app.post('/v1/introspection/%s' % self.uuid,
                             headers={'X-Auth-Token': 'token'})
@@ -95,22 +95,10 @@ class TestApi(test_base.BaseTest):
         res = self.app.post('/v1/introspection/%s' % uuid_dummy)
         self.assertEqual(400, res.status_code)
 
-    @mock.patch.object(introspect, 'introspect', autospec=True)
-    def test_discover(self, discover_mock):
-        res = self.app.post('/v1/discover', data='["%s"]' % self.uuid)
-        self.assertEqual(202, res.status_code)
-        discover_mock.assert_called_once_with(self.uuid)
-
-    @mock.patch.object(introspect, 'introspect', autospec=True)
-    def test_discover_invalid_uuid(self, discover_mock):
-        uuid_dummy = 'uuid1'
-        res = self.app.post('/v1/discover', data='["%s"]' % uuid_dummy)
-        self.assertEqual(400, res.status_code)
-
     @mock.patch.object(process, 'process', autospec=True)
     def test_continue(self, process_mock):
         # should be ignored
-        CONF.set_override('authenticate', True, 'discoverd')
+        CONF.set_override('authenticate', True)
         process_mock.return_value = [42]
         res = self.app.post('/v1/continue', data='"JSON"')
         self.assertEqual(200, res.status_code)
@ -161,10 +149,10 @@ class TestCheckIronicAvailable(test_base.BaseTest):
|
||||||
self.assertEqual(2, client_mock.call_count)
|
self.assertEqual(2, client_mock.call_count)
|
||||||
cli.driver.list.assert_called_once_with()
|
cli.driver.list.assert_called_once_with()
|
||||||
sleep_mock.assert_called_once_with(
|
sleep_mock.assert_called_once_with(
|
||||||
CONF.discoverd.ironic_retry_period)
|
CONF.ironic.ironic_retry_period)
|
||||||
|
|
||||||
def test_failed(self, client_mock, sleep_mock):
|
def test_failed(self, client_mock, sleep_mock):
|
||||||
attempts = CONF.discoverd.ironic_retry_attempts
|
attempts = CONF.ironic.ironic_retry_attempts
|
||||||
client_mock.side_effect = RuntimeError()
|
client_mock.side_effect = RuntimeError()
|
||||||
self.assertRaises(RuntimeError, main.check_ironic_available)
|
self.assertRaises(RuntimeError, main.check_ironic_available)
|
||||||
self.assertEqual(1 + attempts, client_mock.call_count)
|
self.assertEqual(1 + attempts, client_mock.call_count)
|
||||||
|
@ -178,7 +166,7 @@ class TestPlugins(unittest.TestCase):
|
||||||
'before_update', autospec=True)
|
'before_update', autospec=True)
|
||||||
def test_hook(self, mock_post, mock_pre):
|
def test_hook(self, mock_post, mock_pre):
|
||||||
plugins_base._HOOKS_MGR = None
|
plugins_base._HOOKS_MGR = None
|
||||||
CONF.set_override('processing_hooks', 'example', 'discoverd')
|
CONF.set_override('processing_hooks', 'example', 'processing')
|
||||||
mgr = plugins_base.processing_hooks_manager()
|
mgr = plugins_base.processing_hooks_manager()
|
||||||
mgr.map_method('before_processing', 'node_info')
|
mgr.map_method('before_processing', 'node_info')
|
||||||
mock_pre.assert_called_once_with(mock.ANY, 'node_info')
|
mock_pre.assert_called_once_with(mock.ANY, 'node_info')
|
||||||
|
@ -199,15 +187,15 @@ class TestPlugins(unittest.TestCase):
|
||||||
class TestInit(test_base.BaseTest):
|
class TestInit(test_base.BaseTest):
|
||||||
def test_ok(self, mock_node_cache, mock_get_client, mock_auth,
|
def test_ok(self, mock_node_cache, mock_get_client, mock_auth,
|
||||||
mock_firewall, mock_spawn_n):
|
mock_firewall, mock_spawn_n):
|
||||||
CONF.set_override('authenticate', True, 'discoverd')
|
CONF.set_override('authenticate', True)
|
||||||
main.init()
|
main.init()
|
||||||
mock_auth.assert_called_once_with(main.app)
|
mock_auth.assert_called_once_with(main.app)
|
||||||
mock_node_cache.assert_called_once_with()
|
mock_node_cache.assert_called_once_with()
|
||||||
mock_firewall.assert_called_once_with()
|
mock_firewall.assert_called_once_with()
|
||||||
|
|
||||||
spawn_n_expected_args = [
|
spawn_n_expected_args = [
|
||||||
(main.periodic_update, CONF.discoverd.firewall_update_period),
|
(main.periodic_update, CONF.firewall.firewall_update_period),
|
||||||
(main.periodic_clean_up, CONF.discoverd.clean_up_period)]
|
(main.periodic_clean_up, CONF.clean_up_period)]
|
||||||
spawn_n_call_args_list = mock_spawn_n.call_args_list
|
spawn_n_call_args_list = mock_spawn_n.call_args_list
|
||||||
|
|
||||||
for (args, call) in zip(spawn_n_expected_args,
|
for (args, call) in zip(spawn_n_expected_args,
|
||||||
|
@ -216,18 +204,18 @@ class TestInit(test_base.BaseTest):
|
||||||
|
|
||||||
def test_init_without_authenticate(self, mock_node_cache, mock_get_client,
|
def test_init_without_authenticate(self, mock_node_cache, mock_get_client,
|
||||||
mock_auth, mock_firewall, mock_spawn_n):
|
mock_auth, mock_firewall, mock_spawn_n):
|
||||||
CONF.set_override('authenticate', False, 'discoverd')
|
CONF.set_override('authenticate', False)
|
||||||
main.init()
|
main.init()
|
||||||
self.assertFalse(mock_auth.called)
|
self.assertFalse(mock_auth.called)
|
||||||
|
|
||||||
def test_init_without_manage_firewall(self, mock_node_cache,
|
def test_init_without_manage_firewall(self, mock_node_cache,
|
||||||
mock_get_client, mock_auth,
|
mock_get_client, mock_auth,
|
||||||
mock_firewall, mock_spawn_n):
|
mock_firewall, mock_spawn_n):
|
||||||
CONF.set_override('manage_firewall', False, 'discoverd')
|
CONF.set_override('manage_firewall', False, 'firewall')
|
||||||
main.init()
|
main.init()
|
||||||
self.assertFalse(mock_firewall.called)
|
self.assertFalse(mock_firewall.called)
|
||||||
spawn_n_expected_args = [
|
spawn_n_expected_args = [
|
||||||
(main.periodic_clean_up, CONF.discoverd.clean_up_period)]
|
(main.periodic_clean_up, CONF.clean_up_period)]
|
||||||
spawn_n_call_args_list = mock_spawn_n.call_args_list
|
spawn_n_call_args_list = mock_spawn_n.call_args_list
|
||||||
for (args, call) in zip(spawn_n_expected_args,
|
for (args, call) in zip(spawn_n_expected_args,
|
||||||
spawn_n_call_args_list):
|
spawn_n_call_args_list):
|
||||||
|
@ -235,10 +223,10 @@ class TestInit(test_base.BaseTest):
|
||||||
|
|
||||||
def test_init_with_timeout_0(self, mock_node_cache, mock_get_client,
|
def test_init_with_timeout_0(self, mock_node_cache, mock_get_client,
|
||||||
mock_auth, mock_firewall, mock_spawn_n):
|
mock_auth, mock_firewall, mock_spawn_n):
|
||||||
CONF.set_override('timeout', 0, 'discoverd')
|
CONF.set_override('timeout', 0)
|
||||||
main.init()
|
main.init()
|
||||||
spawn_n_expected_args = [
|
spawn_n_expected_args = [
|
||||||
(main.periodic_update, CONF.discoverd.firewall_update_period)]
|
(main.periodic_update, CONF.firewall.firewall_update_period)]
|
||||||
spawn_n_call_args_list = mock_spawn_n.call_args_list
|
spawn_n_call_args_list = mock_spawn_n.call_args_list
|
||||||
|
|
||||||
for (args, call) in zip(spawn_n_expected_args,
|
for (args, call) in zip(spawn_n_expected_args,
|
||||||
|
@ -249,7 +237,7 @@ class TestInit(test_base.BaseTest):
|
||||||
def test_init_failed_processing_hook(self, mock_log, mock_node_cache,
|
def test_init_failed_processing_hook(self, mock_log, mock_node_cache,
|
||||||
mock_get_client, mock_auth,
|
mock_get_client, mock_auth,
|
||||||
mock_firewall, mock_spawn_n):
|
mock_firewall, mock_spawn_n):
|
||||||
CONF.set_override('processing_hooks', 'foo!', 'discoverd')
|
CONF.set_override('processing_hooks', 'foo!', 'processing')
|
||||||
plugins_base._HOOKS_MGR = None
|
plugins_base._HOOKS_MGR = None
|
||||||
|
|
||||||
self.assertRaises(SystemExit, main.init)
|
self.assertRaises(SystemExit, main.init)
|
|
@@ -19,9 +19,9 @@ import unittest
 import mock
 from oslo_config import cfg
 
-from ironic_discoverd import node_cache
-from ironic_discoverd.test import base as test_base
-from ironic_discoverd import utils
+from ironic_inspector import node_cache
+from ironic_inspector.test import base as test_base
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
@@ -158,7 +158,7 @@ class TestNodeCacheCleanUp(test_base.NodeTest):
                             'values(?, ?, ?)', (self.uuid, 'foo', 'bar'))
 
     def test_no_timeout(self):
-        CONF.set_override('timeout', 0, 'discoverd')
+        CONF.set_override('timeout', 0)
 
         self.assertFalse(node_cache.clean_up())
 
@@ -192,7 +192,7 @@ class TestNodeCacheCleanUp(test_base.NodeTest):
                             'values(?, ?, ?)', (self.uuid + '1',
                                                 self.started_at,
                                                 self.started_at + 60))
-        CONF.set_override('timeout', 99, 'discoverd')
+        CONF.set_override('timeout', 99)
         time_mock.return_value = self.started_at + 100
 
         self.assertEqual([self.uuid], node_cache.clean_up())
@@ -208,7 +208,7 @@ class TestNodeCacheCleanUp(test_base.NodeTest):
                              'select * from options').fetchall())
 
     def test_old_status(self):
-        CONF.set_override('node_status_keep_time', 42, 'discoverd')
+        CONF.set_override('node_status_keep_time', 42)
         with self.db:
             self.db.execute('update nodes set finished_at=?',
                             (time.time() - 100,))
@@ -276,7 +276,7 @@ class TestInit(unittest.TestCase):
 
     def test_ok(self):
         with tempfile.NamedTemporaryFile() as db_file:
-            CONF.set_override('database', db_file.name, 'discoverd')
+            CONF.set_override('database', db_file.name)
             node_cache.init()
 
             self.assertIsNotNone(node_cache._DB_NAME)
@@ -285,12 +285,11 @@ class TestInit(unittest.TestCase):
 
     def test_create_dir(self):
         temp = tempfile.mkdtemp()
-        CONF.set_override('database', os.path.join(temp, 'dir', 'file'),
-                          'discoverd')
+        CONF.set_override('database', os.path.join(temp, 'dir', 'file'))
         node_cache.init()
 
     def test_no_database(self):
-        CONF.set_override('database', '', 'discoverd')
+        CONF.set_override('database', '')
         self.assertRaises(SystemExit, node_cache.init)
 
 
@@ -11,8 +11,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from ironic_discoverd.plugins import edeploy
-from ironic_discoverd.test import base as test_base
+from ironic_inspector.plugins import edeploy
+from ironic_inspector.test import base as test_base
 
 
 class TestEdeploy(test_base.NodeTest):
@@ -11,8 +11,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-from ironic_discoverd.plugins import root_device_hint
-from ironic_discoverd.test import base as test_base
+from ironic_inspector.plugins import root_device_hint
+from ironic_inspector.test import base as test_base
 
 
 class TestRootDeviceHint(test_base.NodeTest):
@@ -18,10 +18,10 @@ import tempfile
 
 from oslo_config import cfg
 
-from ironic_discoverd.plugins import standard as std_plugins
-from ironic_discoverd import process
-from ironic_discoverd.test import base as test_base
-from ironic_discoverd import utils
+from ironic_inspector.plugins import standard as std_plugins
+from ironic_inspector import process
+from ironic_inspector.test import base as test_base
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
@@ -38,7 +38,7 @@ class TestRamdiskError(test_base.BaseTest):
 
         self.tempdir = tempfile.mkdtemp()
         self.addCleanup(lambda: shutil.rmtree(self.tempdir))
-        CONF.set_override('ramdisk_logs_dir', self.tempdir, 'discoverd')
+        CONF.set_override('ramdisk_logs_dir', self.tempdir, 'processing')
 
     def test_no_logs(self):
         self.assertRaisesRegexp(utils.Error,
@@ -48,7 +48,7 @@ class TestRamdiskError(test_base.BaseTest):
 
     def test_logs_disabled(self):
         self.data['logs'] = 'some log'
-        CONF.set_override('ramdisk_logs_dir', None, 'discoverd')
+        CONF.set_override('ramdisk_logs_dir', None, 'processing')
 
         self.assertRaisesRegexp(utils.Error,
                                 self.msg,
@@ -94,7 +94,7 @@ class TestRamdiskError(test_base.BaseTest):
         self.assertFalse(files)
 
     def test_always_store_logs(self):
-        CONF.set_override('always_store_ramdisk_logs', True, 'discoverd')
+        CONF.set_override('always_store_ramdisk_logs', True, 'processing')
 
         log = b'log contents'
         del self.data['error']
@@ -19,14 +19,14 @@ from ironicclient import exceptions
 import mock
 from oslo_config import cfg
 
-from ironic_discoverd import firewall
-from ironic_discoverd import node_cache
-from ironic_discoverd.plugins import base as plugins_base
-from ironic_discoverd.plugins import example as example_plugin
-from ironic_discoverd.plugins import standard as std_plugins
-from ironic_discoverd import process
-from ironic_discoverd.test import base as test_base
-from ironic_discoverd import utils
+from ironic_inspector import firewall
+from ironic_inspector import node_cache
+from ironic_inspector.plugins import base as plugins_base
+from ironic_inspector.plugins import example as example_plugin
+from ironic_inspector.plugins import standard as std_plugins
+from ironic_inspector import process
+from ironic_inspector.test import base as test_base
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
@@ -140,7 +140,7 @@ class TestProcess(BaseTest):
 
     @prepare_mocks
     def test_add_ports_active(self, cli, pop_mock, process_mock):
-        CONF.set_override('add_ports', 'active', 'discoverd')
+        CONF.set_override('add_ports', 'active', 'processing')
 
         res = process.process(self.data)
 
@@ -160,7 +160,7 @@ class TestProcess(BaseTest):
 
     @prepare_mocks
     def test_add_ports_all(self, cli, pop_mock, process_mock):
-        CONF.set_override('add_ports', 'all', 'discoverd')
+        CONF.set_override('add_ports', 'all', 'processing')
 
         res = process.process(self.data)
 
@@ -195,23 +195,6 @@ class TestProcess(BaseTest):
         del self.data['interfaces']
         self.assertRaises(utils.Error, process.process, self.data)
 
-    @prepare_mocks
-    def test_ports_for_inactive(self, cli, pop_mock, process_mock):
-        CONF.set_override('ports_for_inactive_interfaces', True, 'discoverd')
-        del self.data['boot_interface']
-
-        process.process(self.data)
-
-        self.assertEqual(['em1', 'em2', 'em3'],
-                         sorted(self.data['interfaces']))
-        self.assertEqual(self.all_macs, sorted(self.data['macs']))
-
-        pop_mock.assert_called_once_with(bmc_address=self.bmc_address,
-                                         mac=self.data['macs'])
-        cli.node.get.assert_called_once_with(self.uuid)
-        process_mock.assert_called_once_with(cli, cli.node.get.return_value,
-                                             self.data, pop_mock.return_value)
-
     @prepare_mocks
     def test_invalid_interfaces_all(self, cli, pop_mock, process_mock):
         self.data['interfaces'] = {
@@ -333,23 +316,19 @@ class TestProcessNode(BaseTest):
         CONF.set_override('processing_hooks',
                           'ramdisk_error,scheduler,validate_interfaces,'
                           'example',
-                          'discoverd')
+                          'processing')
         self.validate_attempts = 5
         self.data['macs'] = self.macs  # validate_interfaces hook
         self.data['all_interfaces'] = self.data['interfaces']
         self.ports = self.all_ports
         self.cached_node = node_cache.NodeInfo(uuid=self.uuid,
                                                started_at=self.started_at)
-        self.patch_before = [
+        self.patch_props = [
             {'path': '/properties/cpus', 'value': '2', 'op': 'add'},
             {'path': '/properties/cpu_arch', 'value': 'x86_64', 'op': 'add'},
             {'path': '/properties/memory_mb', 'value': '1024', 'op': 'add'},
             {'path': '/properties/local_gb', 'value': '20', 'op': 'add'}
         ]  # scheduler hook
-        self.patch_after = [
-            {'op': 'add', 'path': '/extra/newly_discovered', 'value': 'true'},
-            {'op': 'remove', 'path': '/extra/on_discovery'},
-        ]
         self.new_creds = ('user', 'password')
         self.patch_credentials = [
             {'op': 'add', 'path': '/driver_info/ipmi_username',
@@ -381,8 +360,8 @@ class TestProcessNode(BaseTest):
                                              address=self.macs[0])
         self.cli.port.create.assert_any_call(node_uuid=self.uuid,
                                              address=self.macs[1])
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_before)
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_after)
+        self.cli.node.update.assert_called_once_with(self.uuid,
+                                                     self.patch_props)
         self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
         self.assertFalse(self.cli.node.validate.called)
 
@@ -394,7 +373,7 @@ class TestProcessNode(BaseTest):
         finished_mock.assert_called_once_with(mock.ANY)
 
     def test_overwrite_disabled(self, filters_mock, post_hook_mock):
-        CONF.set_override('overwrite_existing', False, 'discoverd')
+        CONF.set_override('overwrite_existing', False, 'processing')
         patch = [
             {'op': 'add', 'path': '/properties/cpus', 'value': '2'},
             {'op': 'add', 'path': '/properties/memory_mb', 'value': '1024'},
@@ -402,8 +381,7 @@ class TestProcessNode(BaseTest):
 
         self.call()
 
-        self.cli.node.update.assert_any_call(self.uuid, patch)
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_after)
+        self.cli.node.update.assert_called_once_with(self.uuid, patch)
 
     def test_update_retry_on_conflict(self, filters_mock, post_hook_mock):
         self.cli.node.update.side_effect = [exceptions.Conflict, self.node,
@@ -415,9 +393,8 @@ class TestProcessNode(BaseTest):
                                              address=self.macs[0])
         self.cli.port.create.assert_any_call(node_uuid=self.uuid,
                                              address=self.macs[1])
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_before)
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_after)
-        self.assertEqual(4, self.cli.node.update.call_count)
+        self.cli.node.update.assert_called_with(self.uuid, self.patch_props)
+        self.assertEqual(2, self.cli.node.update.call_count)
         self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
 
     def test_power_off_retry_on_conflict(self, filters_mock, post_hook_mock):
@@ -429,8 +406,8 @@ class TestProcessNode(BaseTest):
                                              address=self.macs[0])
         self.cli.port.create.assert_any_call(node_uuid=self.uuid,
                                              address=self.macs[1])
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_before)
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_after)
+        self.cli.node.update.assert_called_once_with(self.uuid,
+                                                     self.patch_props)
         self.cli.node.set_power_state.assert_called_with(self.uuid, 'off')
         self.assertEqual(2, self.cli.node.set_power_state.call_count)
 
@@ -443,8 +420,8 @@ class TestProcessNode(BaseTest):
                                              address=self.macs[0])
         self.cli.port.create.assert_any_call(node_uuid=self.uuid,
                                              address=self.macs[1])
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_before)
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_after)
+        self.cli.node.update.assert_called_once_with(self.uuid,
                                                      self.patch_props)
 
         post_hook_mock.assert_called_once_with(self.node, self.ports[1:],
                                                self.data)
@@ -457,9 +434,9 @@ class TestProcessNode(BaseTest):
 
         self.call()
 
-        self.cli.node.update.assert_any_call(self.uuid,
-                                             self.patch_before + node_patches)
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_after)
+        self.cli.node.update.assert_called_once_with(self.uuid,
+                                                     self.patch_props
+                                                     + node_patches)
         self.cli.port.update.assert_called_once_with(self.ports[1].uuid,
                                                      port_patch)
 
@@ -499,7 +476,7 @@ class TestProcessNode(BaseTest):
         self.assertRaisesRegexp(utils.Error, 'Failed to validate',
                                 self.call)
 
-        self.cli.node.update.assert_any_call(self.uuid, self.patch_before)
+        self.cli.node.update.assert_any_call(self.uuid, self.patch_props)
         self.cli.node.update.assert_any_call(self.uuid, self.patch_credentials)
         self.assertEqual(2, self.cli.node.update.call_count)
         self.assertEqual(process._CREDENTIALS_WAIT_RETRIES,
@@ -520,7 +497,7 @@ class TestProcessNode(BaseTest):
 
         self.cli.node.set_power_state.assert_called_once_with(self.uuid, 'off')
         self.cli.node.update.assert_called_once_with(self.uuid,
-                                                     self.patch_before)
+                                                     self.patch_props)
         finished_mock.assert_called_once_with(
             mock.ANY,
             error='Failed to power off node %s, check it\'s power management'
@@ -529,7 +506,7 @@ class TestProcessNode(BaseTest):
     @mock.patch.object(utils, 'get_client')
     def test_keep_ports_present(self, client_mock, filters_mock,
                                 post_hook_mock):
-        CONF.set_override('keep_ports', 'present', 'discoverd')
+        CONF.set_override('keep_ports', 'present', 'processing')
 
         # 2 MACs valid, one invalid, one not present in data
         all_macs = self.all_macs + ['01:09:02:08:03:07']
@@ -548,7 +525,7 @@ class TestProcessNode(BaseTest):
 
     @mock.patch.object(utils, 'get_client')
    def test_keep_ports_added(self, client_mock, filters_mock, post_hook_mock):
-        CONF.set_override('keep_ports', 'added', 'discoverd')
+        CONF.set_override('keep_ports', 'added', 'processing')
 
         # 2 MACs valid, one invalid, one not present in data
         all_macs = self.all_macs + ['01:09:02:08:03:07']
@@ -570,9 +547,9 @@ class TestProcessNode(BaseTest):
 
 class TestValidateInterfacesHook(test_base.BaseTest):
     def test_wrong_add_ports(self):
-        CONF.set_override('add_ports', 'foobar', 'discoverd')
+        CONF.set_override('add_ports', 'foobar', 'processing')
         self.assertRaises(SystemExit, std_plugins.ValidateInterfacesHook)
 
     def test_wrong_keep_ports(self):
-        CONF.set_override('keep_ports', 'foobar', 'discoverd')
+        CONF.set_override('keep_ports', 'foobar', 'processing')
         self.assertRaises(SystemExit, std_plugins.ValidateInterfacesHook)
@@ -19,8 +19,8 @@ from keystonemiddleware import auth_token
 import mock
 from oslo_config import cfg
 
-from ironic_discoverd.test import base
-from ironic_discoverd import utils
+from ironic_inspector.test import base
+from ironic_inspector import utils
 
 CONF = cfg.CONF
 
@@ -28,13 +28,13 @@ CONF = cfg.CONF
 class TestCheckAuth(base.BaseTest):
     def setUp(self):
         super(TestCheckAuth, self).setUp()
-        CONF.set_override('authenticate', True, 'discoverd')
+        CONF.set_override('authenticate', True)
 
     @mock.patch.object(auth_token, 'AuthProtocol')
     def test_middleware(self, mock_auth):
-        CONF.set_override('os_username', 'admin', 'discoverd')
-        CONF.set_override('os_tenant_name', 'admin', 'discoverd')
-        CONF.set_override('os_password', 'password', 'discoverd')
+        CONF.set_override('os_username', 'admin', 'ironic')
+        CONF.set_override('os_tenant_name', 'admin', 'ironic')
+        CONF.set_override('os_password', 'password', 'ironic')
 
         app = mock.Mock(wsgi_app=mock.sentinel.app)
         utils.add_auth_middleware(app)
@@ -62,12 +62,12 @@ class TestCheckAuth(base.BaseTest):
         self.assertRaises(utils.Error, utils.check_auth, request)
 
     def test_disabled(self):
-        CONF.set_override('authenticate', False, 'discoverd')
+        CONF.set_override('authenticate', False)
         request = mock.Mock(headers={'X-Identity-Status': 'Invalid'})
         utils.check_auth(request)
 
 
-@mock.patch('ironic_discoverd.node_cache.NodeInfo')
+@mock.patch('ironic_inspector.node_cache.NodeInfo')
 class TestGetIpmiAddress(base.BaseTest):
     def test_ipv4_in_resolves(self, mock_node):
         node = mock_node.return_value
@@ -22,7 +22,7 @@ from keystonemiddleware import auth_token
 from oslo_config import cfg
 import six

-from ironic_discoverd.common.i18n import _, _LE, _LI, _LW
+from ironic_inspector.common.i18n import _, _LE, _LI, _LW

 CONF = cfg.CONF

@@ -30,13 +30,13 @@ CONF = cfg.CONF
 VALID_STATES = {'enroll', 'manageable', 'inspecting', 'inspectfail'}


-LOG = logging.getLogger('ironic_discoverd.utils')
+LOG = logging.getLogger('ironic_inspector.utils')
 RETRY_COUNT = 12
 RETRY_DELAY = 5


 class Error(Exception):
-    """Discoverd exception."""
+    """Inspector exception."""

     def __init__(self, msg, code=400):
         super(Error, self).__init__(msg)
@@ -46,10 +46,10 @@ class Error(Exception):

 def get_client():  # pragma: no cover
     """Get Ironic client instance."""
-    args = dict({'os_password': CONF.discoverd.os_password,
-                 'os_username': CONF.discoverd.os_username,
-                 'os_auth_url': CONF.discoverd.os_auth_url,
-                 'os_tenant_name': CONF.discoverd.os_tenant_name})
+    args = dict({'os_password': CONF.ironic.os_password,
+                 'os_username': CONF.ironic.os_username,
+                 'os_auth_url': CONF.ironic.os_auth_url,
+                 'os_tenant_name': CONF.ironic.os_tenant_name})
     return client.get_client(1, **args)


@@ -58,12 +58,12 @@ def add_auth_middleware(app):

     :param app: application.
     """
-    auth_conf = dict({'admin_password': CONF.discoverd.os_password,
-                      'admin_user': CONF.discoverd.os_username,
-                      'auth_uri': CONF.discoverd.os_auth_url,
-                      'admin_tenant_name': CONF.discoverd.os_tenant_name})
+    auth_conf = dict({'admin_password': CONF.ironic.os_password,
+                      'admin_user': CONF.ironic.os_username,
+                      'auth_uri': CONF.ironic.os_auth_url,
+                      'admin_tenant_name': CONF.ironic.os_tenant_name})
     auth_conf['delay_auth_decision'] = True
-    auth_conf['identity_uri'] = CONF.discoverd.identity_uri
+    auth_conf['identity_uri'] = CONF.ironic.identity_uri
     app.wsgi_app = auth_token.AuthProtocol(app.wsgi_app, auth_conf)


@@ -73,7 +73,7 @@ def check_auth(request):

     :param request: Flask request
     :raises: utils.Error if access is denied
     """
-    if not CONF.discoverd.authenticate:
+    if not CONF.authenticate:
         return
     if request.headers.get('X-Identity-Status').lower() == 'invalid':
         raise Error(_('Authentication required'), code=401)
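The auth hunks above are small enough to sketch standalone. The following is a hypothetical stand-in for inspector's `Error`/`check_auth` pair, not the project code itself: it takes a plain dict of headers instead of a Flask request and a keyword flag instead of the `CONF.authenticate` option.

```python
class Error(Exception):
    """Mirror of inspector's utils.Error: a message plus an HTTP code."""

    def __init__(self, msg, code=400):
        super().__init__(msg)
        self.http_code = code


def check_auth(headers, authenticate=True):
    """Reject requests whose token keystonemiddleware marked invalid.

    `headers` is a plain dict standing in for Flask's request.headers;
    `authenticate` stands in for the CONF.authenticate option.
    """
    if not authenticate:
        return  # authentication disabled in configuration
    if headers.get('X-Identity-Status', '').lower() == 'invalid':
        raise Error('Authentication required', code=401)


# A confirmed token passes silently; an invalid one raises with code 401.
check_auth({'X-Identity-Status': 'Confirmed'})
try:
    check_auth({'X-Identity-Status': 'Invalid'})
except Error as exc:
    print(exc.http_code)  # 401
```

Because keystonemiddleware runs with `delay_auth_decision = True`, the request reaches the application even with a bad token, which is why this explicit header check is needed.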
@@ -23,7 +23,7 @@ import netifaces
 import requests


-LOG = logging.getLogger('ironic-discoverd-ramdisk')
+LOG = logging.getLogger('ironic-inspector-ramdisk')


 def try_call(*cmd, **kwargs):
@@ -213,13 +213,13 @@ def discover_hardware(args, data, failures):
     discover_block_devices(data)


-def call_discoverd(args, data, failures):
+def call_inspector(args, data, failures):
     data['error'] = failures.get_error()

     LOG.info('posting collected data to %s', args.callback_url)
     resp = requests.post(args.callback_url, data=json.dumps(data))
     if resp.status_code >= 400:
-        LOG.error('discoverd error %d: %s',
+        LOG.error('inspector error %d: %s',
                   resp.status_code,
                   resp.content.decode('utf-8'))
     resp.raise_for_status()
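The ramdisk tests later in this commit assert an exact JSON string, which works because the payload is built from an OrderedDict and the accumulated error serializes as JSON null when nothing failed. A runnable sketch of that payload construction, with a simplified stand-in for `AccumulatedFailure` (the real class lives in the ramdisk module, not in this hunk):

```python
import collections
import json


class AccumulatedFailure:
    """Stand-in failure collector: returns None when nothing failed."""

    def __init__(self):
        self._failures = []

    def add(self, msg):
        self._failures.append(msg)

    def get_error(self):
        return '\n'.join(self._failures) or None


def build_payload(data, failures):
    """Build the body that call_inspector() POSTs to the callback URL."""
    data['error'] = failures.get_error()
    return json.dumps(data)


failures = AccumulatedFailure()
print(build_payload(collections.OrderedDict(data=42), failures))
# {"data": 42, "error": null}

failures.add('boom')
print(build_payload(collections.OrderedDict(data=42), failures))
# {"data": 42, "error": "boom"}
```

Using an OrderedDict keeps key order deterministic on Python 2.7, which is what lets the tests compare the serialized string byte-for-byte.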
@@ -17,16 +17,16 @@ import sys

 import requests

-from ironic_discoverd_ramdisk import discover
+from ironic_inspector_ramdisk import discover


-LOG = logging.getLogger('ironic-discoverd-ramdisk')
+LOG = logging.getLogger('ironic-inspector-ramdisk')


 def parse_args(args):
     parser = argparse.ArgumentParser(description='Detect present hardware.')
     parser.add_argument('-L', '--system-log-file', action='append',
-                        help='System log file to be sent to discoverd, may be '
+                        help='System log file to be sent to inspector, may be '
                              'specified multiple times')
     parser.add_argument('-l', '--log-file', default='discovery-logs',
                         help='Path to log file, defaults to ./discovery-logs')
@@ -37,9 +37,9 @@ def parse_args(args):
                              'python-hardware package')
     parser.add_argument('--benchmark', action='store_true',
                         help='Enables benchmarking for hardware-detect')
-    # ironic-discoverd callback
+    # ironic-inspector callback
     parser.add_argument('callback_url',
-                        help='Full ironic-discoverd callback URL')
+                        help='Full ironic-inspector callback URL')
     return parser.parse_args(args)

@@ -74,11 +74,11 @@ def main():
     call_error = True
     resp = {}
     try:
-        resp = discover.call_discoverd(args, data, failures)
+        resp = discover.call_inspector(args, data, failures)
     except requests.RequestException as exc:
-        LOG.error('%s when calling to discoverd', exc)
+        LOG.error('%s when calling to inspector', exc)
     except Exception:
-        LOG.exception('failed to call discoverd')
+        LOG.exception('failed to call inspector')
     else:
         call_error = False
@@ -29,7 +29,7 @@ except ImportError:
 import netifaces
 import requests

-from ironic_discoverd_ramdisk import discover
+from ironic_inspector_ramdisk import discover


 def get_fake_args():
@@ -284,7 +284,7 @@ class TestCallDiscoverd(unittest.TestCase):
         data = collections.OrderedDict(data=42)
         mock_post.return_value.status_code = 200

-        discover.call_discoverd(FAKE_ARGS, data, failures)
+        discover.call_inspector(FAKE_ARGS, data, failures)

         mock_post.assert_called_once_with('url',
                                           data='{"data": 42, "error": null}')
@@ -295,17 +295,17 @@ class TestCallDiscoverd(unittest.TestCase):
         data = collections.OrderedDict(data=42)
         mock_post.return_value.status_code = 200

-        discover.call_discoverd(FAKE_ARGS, data, failures)
+        discover.call_inspector(FAKE_ARGS, data, failures)

         mock_post.assert_called_once_with('url',
                                           data='{"data": 42, "error": "boom"}')

-    def test_discoverd_error(self, mock_post):
+    def test_inspector_error(self, mock_post):
         failures = discover.AccumulatedFailure()
         data = collections.OrderedDict(data=42)
         mock_post.return_value.status_code = 400

-        discover.call_discoverd(FAKE_ARGS, data, failures)
+        discover.call_inspector(FAKE_ARGS, data, failures)

         mock_post.assert_called_once_with('url',
                                           data='{"data": 42, "error": null}')
@@ -16,9 +16,9 @@ import unittest
 import mock
 import requests

-from ironic_discoverd_ramdisk import discover
-from ironic_discoverd_ramdisk import main
-from ironic_discoverd_ramdisk.test import test_discover
+from ironic_inspector_ramdisk import discover
+from ironic_inspector_ramdisk import main
+from ironic_inspector_ramdisk.test import test_discover


 FAKE_ARGS = test_discover.get_fake_args()
@@ -41,7 +41,7 @@ class TestParseArgs(unittest.TestCase):
 @mock.patch.object(main, 'parse_args', return_value=FAKE_ARGS,
                    autospec=True)
 @mock.patch.object(discover, 'setup_ipmi_credentials', autospec=True)
-@mock.patch.object(discover, 'call_discoverd', autospec=True,
+@mock.patch.object(discover, 'call_inspector', autospec=True,
                    return_value={})
 @mock.patch.object(discover, 'collect_logs', autospec=True)
 @mock.patch.object(discover, 'discover_hardware', autospec=True)
@@ -1,15 +1,15 @@
-# Translations template for ironic-discoverd.
+# Translations template for ironic-inspector.
 # Copyright (C) 2015 ORGANIZATION
-# This file is distributed under the same license as the ironic-discoverd
+# This file is distributed under the same license as the ironic-inspector
 # project.
 # FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
 #
 #, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: ironic-discoverd 1.1.0\n"
+"Project-Id-Version: ironic-inspector 2.0.0\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2015-03-02 02:49+0000\n"
+"POT-Creation-Date: 2015-05-28 15:25+0200\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
 "Language-Team: LANGUAGE <LL@li.org>\n"
@@ -18,67 +18,62 @@ msgstr ""
 "Content-Transfer-Encoding: 8bit\n"
 "Generated-By: Babel 1.3\n"

-#: ironic_discoverd/introspect.py:40
+#: ironic_inspector/client.py:62 ironic_inspector/client.py:85
+#, python-format
+msgid "Expected string for uuid argument, got %r"
+msgstr ""
+
+#: ironic_inspector/client.py:64
+msgid "Setting IPMI user name requires a new password"
+msgstr ""
+
+#: ironic_inspector/introspect.py:39
 msgid "IPMI credentials setup is disabled in configuration"
 msgstr ""

-#: ironic_discoverd/introspect.py:44
+#: ironic_inspector/introspect.py:43
 msgid "Node should be in maintenance mode to set IPMI credentials on it"
 msgstr ""

-#: ironic_discoverd/introspect.py:51
+#: ironic_inspector/introspect.py:50
 #, python-format
 msgid ""
 "Setting IPMI credentials requested for node %s, but neither new user name"
 " nor driver_info[ipmi_username] are provided"
 msgstr ""

-#: ironic_discoverd/introspect.py:58
+#: ironic_inspector/introspect.py:57
 #, python-format
 msgid ""
 "Forbidden characters encountered in new IPMI password for node %(node)s: "
 "\"%(chars)s\"; use only letters and numbers"
 msgstr ""

-#: ironic_discoverd/introspect.py:63
+#: ironic_inspector/introspect.py:62
 #, python-format
 msgid "IPMI password length should be > 0 and <= %d"
 msgstr ""

-#: ironic_discoverd/introspect.py:81
+#: ironic_inspector/introspect.py:80
 #, python-format
 msgid "Cannot find node %s"
 msgstr ""

-#: ironic_discoverd/introspect.py:83
+#: ironic_inspector/introspect.py:82
 #, python-format
 msgid "Cannot get node %(node)s: %(exc)s"
 msgstr ""

-#: ironic_discoverd/introspect.py:89
-#, python-format
-msgid ""
-"Refusing to introspect node %(node)s with provision state \"%(state)s\" "
-"and maintenance mode off"
-msgstr ""
-
-#: ironic_discoverd/introspect.py:96
-#, python-format
-msgid ""
-"Refusing to introspect node %(node)s with power state \"%(state)s\" and "
-"maintenance mode off"
-msgstr ""
-
-#: ironic_discoverd/introspect.py:109
+#: ironic_inspector/introspect.py:93
 #, python-format
 msgid "Failed validation of power interface for node %(node)s, reason: %(reason)s"
 msgstr ""

-#: ironic_discoverd/introspect.py:124
+#: ironic_inspector/introspect.py:108
 msgid "Unexpected exception in background introspection thread"
 msgstr ""

-#: ironic_discoverd/introspect.py:158
+#: ironic_inspector/introspect.py:139
 #, python-format
 msgid ""
 "Failed to power on node %(node)s, check it's power management "
@@ -86,95 +81,127 @@ msgid ""
 "%(exc)s"
 msgstr ""

-#: ironic_discoverd/main.py:46
-msgid "Authentication required"
+#: ironic_inspector/main.py:70
+msgid "Invalid UUID value"
 msgstr ""

-#: ironic_discoverd/main.py:51
-msgid "Access denied"
-msgstr ""
-
-#: ironic_discoverd/node_cache.py:115
+#: ironic_inspector/node_cache.py:118
 #, python-format
 msgid "Some or all of %(name)s's %(value)s are already on introspection"
 msgstr ""

-#: ironic_discoverd/node_cache.py:202
+#: ironic_inspector/node_cache.py:209
 #, python-format
 msgid "Could not find node %s in cache"
 msgstr ""

-#: ironic_discoverd/node_cache.py:233
+#: ironic_inspector/node_cache.py:240
 #, python-format
 msgid "Could not find a node for attributes %s"
 msgstr ""

-#: ironic_discoverd/node_cache.py:236
+#: ironic_inspector/node_cache.py:243
 #, python-format
 msgid "Multiple matching nodes found for attributes %(attr)s: %(found)s"
 msgstr ""

-#: ironic_discoverd/node_cache.py:244
+#: ironic_inspector/node_cache.py:251
 #, python-format
 msgid ""
 "Could not find node %s in introspection cache, probably it's not on "
 "introspection now"
 msgstr ""

-#: ironic_discoverd/node_cache.py:249
+#: ironic_inspector/node_cache.py:256
 #, python-format
 msgid "Introspection for node %(node)s already finished on %(finish)s"
 msgstr ""

-#: ironic_discoverd/process.py:54
+#: ironic_inspector/process.py:56
+#, python-format
+msgid "Unexpected exception during preprocessing in hook %s"
+msgstr ""
+
+#: ironic_inspector/process.py:65
+#, python-format
+msgid "Look up error: %s"
+msgstr ""
+
+#: ironic_inspector/process.py:71
+#, python-format
+msgid ""
+"The following failures happened during running pre-processing hooks for "
+"node %(uuid)s:\n"
+"%(failures)s"
+msgstr ""
+
+#: ironic_inspector/process.py:76
+msgid "Data pre-processing failed"
+msgstr ""
+
+#: ironic_inspector/process.py:79
+#, python-format
+msgid ""
+"The following failures happened during running pre-processing hooks for "
+"unknown node:\n"
+"%(failures)s"
+msgstr ""
+
+#: ironic_inspector/process.py:89
 #, python-format
 msgid "Node UUID %s was found in cache, but is not found in Ironic"
 msgstr ""

-#: ironic_discoverd/process.py:65
+#: ironic_inspector/process.py:100
 msgid "Unexpected exception during processing"
 msgstr ""

-#: ironic_discoverd/process.py:155
+#: ironic_inspector/process.py:196
 #, python-format
 msgid ""
 "Failed to validate updated IPMI credentials for node %s, node might "
 "require maintenance"
 msgstr ""

-#: ironic_discoverd/process.py:167
+#: ironic_inspector/process.py:208
 #, python-format
 msgid ""
 "Failed to power off node %(node)s, check it's power management "
 "configuration: %(exc)s"
 msgstr ""

-#: ironic_discoverd/process.py:183
+#: ironic_inspector/utils.py:79
+msgid "Authentication required"
+msgstr ""
+
+#: ironic_inspector/utils.py:83
+msgid "Access denied"
+msgstr ""
+
+#: ironic_inspector/utils.py:128
 #, python-format
-msgid "Timeout waiting for node %s to power off after introspection"
+msgid ""
+"Refusing to introspect node %(node)s with provision state \"%(state)s\" "
+"and maintenance mode off"
 msgstr ""

-#: ironic_discoverd/plugins/edeploy.py:50
-msgid "edeploy plugin: no \"data\" key in the received JSON"
-msgstr ""
-
-#: ironic_discoverd/plugins/standard.py:37
+#: ironic_inspector/plugins/standard.py:45
 #, python-format
 msgid "The following required parameters are missing: %s"
 msgstr ""

-#: ironic_discoverd/plugins/standard.py:61
+#: ironic_inspector/plugins/standard.py:84
 msgid "No interfaces supplied by the ramdisk"
 msgstr ""

-#: ironic_discoverd/plugins/standard.py:91
+#: ironic_inspector/plugins/standard.py:111
 #, python-format
 msgid ""
 "No valid interfaces found for node with BMC %(ipmi_address)s, got "
 "%(interfaces)s"
 msgstr ""

-#: ironic_discoverd/plugins/standard.py:119
+#: ironic_inspector/plugins/standard.py:167
 #, python-format
 msgid "Ramdisk reported error: %s"
 msgstr ""
@@ -1,13 +1,13 @@
 [compile_catalog]
 directory = locale
-domain = ironic-discoverd
+domain = ironic-inspector

 [update_catalog]
-domain = ironic-discoverd
+domain = ironic-inspector
 output_dir = locale
-input_file = locale/ironic-discoverd.pot
+input_file = locale/ironic-inspector.pot

 [extract_messages]
 keywords = _ gettext ngettext l_ lazy_gettext
 mapping_file = babel.cfg
-output_file = locale/ironic-discoverd.pot
+output_file = locale/ironic-inspector.pot
setup.py
@@ -13,44 +13,44 @@ except EnvironmentError:
     install_requires = []


-with open('ironic_discoverd/__init__.py', 'rb') as fp:
+with open('ironic_inspector/__init__.py', 'rb') as fp:
     exec(fp.read())


 setup(
-    name = "ironic-discoverd",
+    name = "ironic-inspector",
     version = __version__,
     description = open('README.rst', 'r').readline().strip(),
     author = "Dmitry Tantsur",
     author_email = "dtantsur@redhat.com",
     url = "https://pypi.python.org/pypi/ironic-discoverd",
-    packages = ['ironic_discoverd', 'ironic_discoverd.plugins',
-                'ironic_discoverd.test', 'ironic_discoverd.common',
-                'ironic_discoverd_ramdisk', 'ironic_discoverd_ramdisk.test'],
+    packages = ['ironic_inspector', 'ironic_inspector.plugins',
+                'ironic_inspector.test', 'ironic_inspector.common',
+                'ironic_inspector_ramdisk', 'ironic_inspector_ramdisk.test'],
     install_requires = install_requires,
     # because entry points don't work with multiple packages
-    scripts = ['bin/ironic-discoverd-ramdisk'],
+    scripts = ['bin/ironic-inspector-ramdisk'],
     entry_points = {
         'console_scripts': [
-            "ironic-discoverd = ironic_discoverd.main:main",
+            "ironic-inspector = ironic_inspector.main:main",
         ],
-        'ironic_discoverd.hooks': [
-            "scheduler = ironic_discoverd.plugins.standard:SchedulerHook",
-            "validate_interfaces = ironic_discoverd.plugins.standard:ValidateInterfacesHook",
-            "ramdisk_error = ironic_discoverd.plugins.standard:RamdiskErrorHook",
-            "example = ironic_discoverd.plugins.example:ExampleProcessingHook",
-            "edeploy = ironic_discoverd.plugins.edeploy:eDeployHook",
-            "root_device_hint = ironic_discoverd.plugins.root_device_hint:RootDeviceHintHook",
+        'ironic_inspector.hooks': [
+            "scheduler = ironic_inspector.plugins.standard:SchedulerHook",
+            "validate_interfaces = ironic_inspector.plugins.standard:ValidateInterfacesHook",
+            "ramdisk_error = ironic_inspector.plugins.standard:RamdiskErrorHook",
+            "example = ironic_inspector.plugins.example:ExampleProcessingHook",
+            "edeploy = ironic_inspector.plugins.edeploy:eDeployHook",
+            "root_device_hint = ironic_inspector.plugins.root_device_hint:RootDeviceHintHook",
         ],
         'openstack.cli.extension': [
-            'baremetal-introspection = ironic_discoverd.shell',
+            'baremetal-introspection = ironic_inspector.shell',
         ],
         'openstack.baremetal_introspection.v1': [
-            "baremetal_introspection_start = ironic_discoverd.shell:StartCommand",
-            "baremetal_introspection_status = ironic_discoverd.shell:StatusCommand",
+            "baremetal_introspection_start = ironic_inspector.shell:StartCommand",
+            "baremetal_introspection_status = ironic_inspector.shell:StatusCommand",
         ],
         'oslo.config.opts': [
-            "ironic_discoverd = ironic_discoverd.conf:list_opts",
+            "ironic_inspector = ironic_inspector.conf:list_opts",
         ],
     },
     classifiers = [
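The renamed ``ironic_inspector.hooks`` group above registers processing hooks as setuptools entry points. The loading side is not part of this diff; the sketch below swaps the entry-point lookup for a plain dict registry to show the same name-to-class dispatch, with a simplified, hypothetical ``RamdiskErrorHook`` stand-in rather than the real plugin.

```python
# Stand-in registry: the real service resolves hook names against the
# 'ironic_inspector.hooks' entry-point group instead of this dict.
HOOKS = {}


def register_hook(name):
    """Decorator mimicking an entry-point registration from setup.py."""
    def wrapper(cls):
        HOOKS[name] = cls
        return cls
    return wrapper


@register_hook('ramdisk_error')
class RamdiskErrorHook:
    """Simplified stand-in: fail processing if the ramdisk reported an error."""

    def before_processing(self, node_info):
        if node_info.get('error'):
            raise RuntimeError('Ramdisk reported error: %s' % node_info['error'])


def load_hooks(names):
    """Instantiate the configured hooks in order, like a processing pipeline."""
    return [HOOKS[name]() for name in names.split(',')]


hooks = load_hooks('ramdisk_error')
```

The comment in setup.py ("entry points don't work with multiple packages") explains why the ramdisk binary ships via ``scripts`` instead of a console_scripts entry.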
tox.ini
@@ -8,8 +8,8 @@ deps =
     -r{toxinidir}/test-requirements.txt
     -r{toxinidir}/plugin-requirements.txt
 commands =
-    coverage run --branch --include "ironic_discoverd*" -m unittest discover ironic_discoverd.test
-    coverage run --branch --include "ironic_discoverd_ramdisk*" -a -m unittest discover ironic_discoverd_ramdisk.test
+    coverage run --branch --include "ironic_inspector*" -m unittest discover ironic_inspector.test
+    coverage run --branch --include "ironic_inspector_ramdisk*" -a -m unittest discover ironic_inspector_ramdisk.test
     coverage report -m --fail-under 90
 setenv = PYTHONDONTWRITEBYTECODE=1
@@ -20,14 +20,14 @@ deps =
     -r{toxinidir}/test-requirements.txt
     -r{toxinidir}/plugin-requirements.txt
 commands =
-    flake8 ironic_discoverd ironic_discoverd_ramdisk
-    doc8 README.rst CONTRIBUTING.rst HTTP-API.rst RELEASES.rst
+    flake8 ironic_inspector ironic_inspector_ramdisk
+    doc8 README.rst CONTRIBUTING.rst HTTP-API.rst

 [flake8]
 max-complexity=15

 [hacking]
-import_exceptions = ironicclient.exceptions,ironic_discoverd.common.i18n
+import_exceptions = ironicclient.exceptions,ironic_inspector.common.i18n

 [testenv:func]
 basepython = python2.7
@@ -42,4 +42,4 @@ commands =
 commands =
     oslo-config-generator \
         --output-file example.conf \
-        --namespace ironic_discoverd
+        --namespace ironic_inspector
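Per the commit message, options were also split over proper sections as part of this rename: the utils.py hunks now read credentials from ``CONF.ironic`` and ``authenticate`` from the top level (``[DEFAULT]``). A configparser sketch of the resulting example.conf shape, with option names taken from the diff and placeholder values:

```python
import configparser

# Hypothetical example.conf after the rename: credentials live under
# [ironic], and `authenticate` sits in [DEFAULT]. Values are placeholders.
SAMPLE = """
[DEFAULT]
authenticate = true

[ironic]
os_username = ironic
os_password = secret
os_auth_url = http://127.0.0.1:5000/v2.0
os_tenant_name = service
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)
print(conf.get('ironic', 'os_username'))  # ironic
```

The real file is produced by ``oslo-config-generator --namespace ironic_inspector`` as shown in the tox.ini hunk above; this sketch only illustrates the section layout.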