Retire ironic-inspector

Removes all repo content per the infra-manual and includes links back to
the Ironic documentation on where to find inspector now.

Change-Id: I0caabd1ddeeada471531af7093d24a5add10ca5b
Signed-Off-By: Jay Faulkner <jay@jvf.cc>
This commit is contained in:
Jay Faulkner
2025-10-08 13:58:40 -07:00
parent 25934e5bdd
commit 7bb56b3c5a
452 changed files with 12 additions and 38263 deletions


@@ -1,3 +0,0 @@
[DEFAULT]
test_path=${TESTS_DIR:-./ironic_inspector/test/unit/}
top_dir=./


@@ -1,361 +0,0 @@
=================
How To Contribute
=================
Basics
~~~~~~
* Our source code is hosted on `OpenStack GitHub`_, but please do not send pull
  requests there.
* Please follow the usual OpenStack `Gerrit Workflow`_ to submit a patch.
* Update the change log in README.rst on any significant change.
* It goes without saying that any code change should be accompanied by unit
  tests.
* Note the branch you're proposing changes to. ``master`` is the current focus
  of development; use ``stable/VERSION`` for proposing an urgent fix, where
  ``VERSION`` is the current stable series. E.g. at the time of writing the
  stable branch is ``stable/1.0``.
* Please file an RFE in StoryBoard_ for any significant code change and a
  regular story for any significant bug fix.
.. _OpenStack GitHub: https://github.com/openstack/ironic-inspector
.. _Gerrit Workflow: https://docs.openstack.org/infra/manual/developers.html#development-workflow
.. _StoryBoard: https://storyboard.openstack.org/#!/project/944
Development Environment
~~~~~~~~~~~~~~~~~~~~~~~
First of all, install the *tox* utility. It's likely to be in your
distribution's repositories under the name ``python-tox``. Alternatively, you
can install it from PyPI.
Next, check out the repository and create the tox environments::

    git clone https://github.com/openstack/ironic-inspector.git
    cd ironic-inspector
    tox
Repeat the *tox* command each time you need to run the tests. If you don't have
a Python interpreter for one of the supported versions (currently 3.6 and 3.7),
use the ``-e`` flag to select only some environments, e.g.

::

    tox -e py36
.. note::

    This command also runs tests for database migrations. By default the
    sqlite backend is used. For testing with mysql or postgresql, you need to
    set up a database named 'openstack_citest' with user 'openstack_citest'
    and password 'openstack_citest' on localhost. Use the script
    ``tools/test_setup.sh`` to set the database up the same way as is done in
    the OpenStack CI environment.
.. note::

    Users of Fedora <= 23 will need to run ``sudo dnf --releasever=24 update
    python-virtualenv`` to run unit tests.
To run the functional tests, use::

    tox -e func
Once you have added a new state or transition to the inspection state machine,
you should regenerate the :ref:`State machine diagram <state_machine_diagram>`
with::

    tox -e genstates
Run the service with::

    .tox/py36/bin/ironic-inspector --config-file example.conf

Of course, you may have to modify ``example.conf`` to match your OpenStack
environment. See the `install guide <../install#sample-configuration-files>`_
for information on generating or downloading an example configuration file.
You can develop and test **ironic-inspector** using DevStack - see
`Deploying Ironic Inspector with DevStack`_ for the current status.
Deploying Ironic Inspector with DevStack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`DevStack <https://docs.openstack.org/devstack/latest/>`_ provides a way to
quickly build a full OpenStack development environment with requested
components. There is a plugin for installing **ironic-inspector** in DevStack.
Installing **ironic-inspector** requires a machine running Ubuntu 14.04 (or
later) or Fedora 23 (or later). Make sure this machine is fully up to date and
has the latest packages installed before beginning this process.
Download DevStack::

    git clone https://git.openstack.org/openstack-dev/devstack.git
    cd devstack
Create a ``local.conf`` file with the minimal settings required to enable both
**ironic** and **ironic-inspector**. You can start with the
`Example local.conf`_ and extend it as needed.
Example local.conf
------------------
.. literalinclude:: ../../../devstack/example.local.conf
Notes
-----
* Set ``IRONIC_INSPECTOR_BUILD_RAMDISK`` to ``True`` if you want to build the
  ramdisk. The default is ``False``, in which case the ramdisk is downloaded
  instead of built.
* 1024 MiB of RAM is the minimum required for the default build of IPA based
  on CoreOS. If you plan to use another operating system and build IPA with
  diskimage-builder, 2048 MiB is recommended.
* Network configuration is quite sensitive; avoid changing it unless you
  understand it well.
* This configuration disables **horizon**, **heat**, **cinder** and
  **tempest**; adjust it if you need these services.
Start the install::

    ./stack.sh
Usage
-----
After installation is complete, you can source ``openrc`` in your shell and
then use the OpenStack CLI to manage your DevStack::

    source openrc admin demo
Show DevStack screens::

    screen -x stack
To exit screen, hit ``CTRL-a d``.
List baremetal nodes::

    baremetal node list
Bring the node to manageable state::

    baremetal node manage <NodeID>
Inspect the node::

    baremetal node inspect <NodeID>
.. note::

    The deploy driver used must support the inspect interface. See also the
    `Ironic Python Agent
    <https://docs.openstack.org/ironic/latest/admin/drivers/ipa.html>`_.
A node can also be inspected using the following command. However, this will
not affect the provision state of the node::

    baremetal introspection start <NodeID>
Check inspection status::

    baremetal introspection status <NodeID>
Optionally, get the inspection data::

    baremetal introspection data save <NodeID>
Writing a Plugin
~~~~~~~~~~~~~~~~
* **ironic-inspector** allows you to hook code into the data processing chain
  after introspection. Inherit the ``ProcessingHook`` class defined in
  :doc:`/contributor/api/ironic_inspector.plugins.base` and overwrite either
  or both of the following methods:

  ``before_processing(introspection_data, **)``
      called before any data processing, providing the raw data. Each plugin
      in the chain can modify the data, so the order in which plugins are
      loaded matters here. Returns nothing.

  ``before_update(introspection_data, node_info, **)``
      called after the node is found and ports are created, but before data is
      updated on the node. Please refer to the docstring for details and
      examples.
  You can optionally define the following attribute:

  ``dependencies``
      a list of entry point names of the hooks this hook depends on. These
      hooks are expected to be enabled before the current hook.

  Make your plugin a setuptools entry point under the
  ``ironic_inspector.hooks.processing`` namespace and enable it in the
  configuration file (the ``processing.processing_hooks`` option).
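As a minimal sketch of such a hook (a trivial stand-in base class replaces ``ironic_inspector.plugins.base.ProcessingHook`` so the snippet is self-contained; the hook and entry point names are hypothetical):

```python
# Minimal processing-hook sketch. In a real plugin, inherit
# ironic_inspector.plugins.base.ProcessingHook instead of this stand-in.
class ProcessingHook:
    """Stand-in for ironic_inspector.plugins.base.ProcessingHook."""
    dependencies = []

    def before_processing(self, introspection_data, **kwargs):
        pass

    def before_update(self, introspection_data, node_info, **kwargs):
        pass


class ExampleTagHook(ProcessingHook):
    """Tag the raw data, then act on the tag once the node is known."""

    def before_processing(self, introspection_data, **kwargs):
        # Raw data may be modified in place; later hooks see the change,
        # which is why hook ordering matters. Nothing is returned.
        introspection_data['example_tag'] = True

    def before_update(self, introspection_data, node_info, **kwargs):
        # Called after the node is found and ports are created; a real hook
        # would use node_info helpers to update the node here.
        if introspection_data.get('example_tag'):
            print('tagging node %s' % node_info)

# To enable the hook, expose it as a setuptools entry point in the
# ironic_inspector.hooks.processing namespace, e.g. in setup.cfg:
#   ironic_inspector.hooks.processing =
#       example_tag = my_plugin.hooks:ExampleTagHook
# and add 'example_tag' to the processing.processing_hooks option.
```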
* **ironic-inspector** allows plugins to override the action taken when a node
  is not found in the node cache. Write a callable with the following
  signature:

  ``(introspection_data, **)``
      called when a node is not found in the cache, providing the processed
      data. Should return a ``NodeInfo`` class instance.

  Make your plugin a setuptools entry point under the
  ``ironic_inspector.hooks.node_not_found`` namespace and enable it in the
  configuration file (the ``processing.node_not_found_hook`` option).
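A sketch of such a callable (``NodeInfo`` below is a trivial stand-in for the real node-cache class, and the hook body is purely illustrative):

```python
# Sketch of a node_not_found hook. A real implementation would enroll the
# node in ironic and return an ironic_inspector.node_cache.NodeInfo; the
# NodeInfo class below is a stand-in so the example runs on its own.
class NodeInfo:
    """Stand-in for the real NodeInfo node-cache entry."""
    def __init__(self, uuid):
        self.uuid = uuid


def example_node_not_found_hook(introspection_data, **kwargs):
    # Called with the processed data when no cached node matches it.
    macs = introspection_data.get('macs') or []
    if not macs:
        raise LookupError('cannot enroll a node without MAC addresses')
    # A real hook would create the node and its ports here.
    return NodeInfo(uuid='enrolled-for-%s' % macs[0])

# Expose the callable as a setuptools entry point in the
# ironic_inspector.hooks.node_not_found namespace and set it as the
# processing.node_not_found_hook option.
```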
* **ironic-inspector** allows more condition types to be added for
  :ref:`Introspection Rules <introspection_rules>`.
  Inherit the ``RuleConditionPlugin`` class defined in
  :doc:`/contributor/api/ironic_inspector.plugins.base` and overwrite at least
  the following method:

  ``check(node_info, field, params, **)``
      called to check that the condition holds for a given field. The field
      value is provided as the ``field`` argument; ``params`` is a dictionary
      defined at the time of condition creation. Returns a boolean value.

  The following methods and attributes may also be overridden:

  ``validate(params, **)``
      called to validate the parameters provided during condition creation.
      The default implementation requires the keys listed in
      ``REQUIRED_PARAMS`` (and only them).

  ``REQUIRED_PARAMS``
      contains the set of required parameters used in the default
      implementation of the ``validate`` method; defaults to the ``value``
      parameter.

  ``ALLOW_NONE``
      if set to ``True``, missing fields will be passed as ``None`` values
      instead of failing the condition. Defaults to ``False``.

  Make your plugin a setuptools entry point under the
  ``ironic_inspector.rules.conditions`` namespace.
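A minimal condition sketch following this interface (``RuleConditionPlugin`` below is a stand-in base class so the snippet runs on its own; the ``gt`` condition is hypothetical):

```python
# Sketch of a custom rule condition implementing the interface above.
# RuleConditionPlugin here stands in for
# ironic_inspector.plugins.base.RuleConditionPlugin.
class RuleConditionPlugin:
    """Stand-in base with the default validate() behaviour described above."""
    REQUIRED_PARAMS = {'value'}
    ALLOW_NONE = False

    def validate(self, params, **kwargs):
        # Default implementation: exactly the REQUIRED_PARAMS keys.
        if set(params) != self.REQUIRED_PARAMS:
            raise ValueError('expected parameters %s, got %s'
                             % (sorted(self.REQUIRED_PARAMS), sorted(params)))


class GreaterThanCondition(RuleConditionPlugin):
    """Hold when the field value is strictly greater than params['value']."""

    def check(self, node_info, field, params, **kwargs):
        return field > params['value']

# Register the condition as a setuptools entry point in the
# ironic_inspector.rules.conditions namespace, e.g.:
#   ironic_inspector.rules.conditions =
#       gt = my_plugin.conditions:GreaterThanCondition
```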
* **ironic-inspector** allows more action types to be added for
  :ref:`Introspection Rules <introspection_rules>`.
  Inherit the ``RuleActionPlugin`` class defined in
  :doc:`/contributor/api/ironic_inspector.plugins.base` and overwrite at least
  the following method:

  ``apply(node_info, params, **)``
      called to apply the action.

  The following methods and attributes may also be overridden:

  ``validate(params, **)``
      called to validate the parameters provided during action creation.
      The default implementation requires the keys listed in
      ``REQUIRED_PARAMS`` (and only them).

  ``REQUIRED_PARAMS``
      contains the set of required parameters used in the default
      implementation of the ``validate`` method; defaults to no parameters.

  Make your plugin a setuptools entry point under the
  ``ironic_inspector.rules.actions`` namespace.
.. note::

    The ``**`` argument is needed so that optional arguments can be added
    without breaking out-of-tree plugins. Please make sure to accept and
    ignore it.
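The rule action interface described above can be sketched the same way (again with a stand-in base class; the action itself is hypothetical and only records what it was applied to):

```python
# Sketch of a custom rule action. RuleActionPlugin here stands in for
# ironic_inspector.plugins.base.RuleActionPlugin.
class RuleActionPlugin:
    """Stand-in base; by default no parameters are required."""
    REQUIRED_PARAMS = set()

    def validate(self, params, **kwargs):
        if set(params) != self.REQUIRED_PARAMS:
            raise ValueError('expected parameters %s, got %s'
                             % (sorted(self.REQUIRED_PARAMS), sorted(params)))


class RecordMessageAction(RuleActionPlugin):
    """Record a message for a node (illustrative only)."""
    REQUIRED_PARAMS = {'message'}

    def __init__(self):
        self.applied = []

    def apply(self, node_info, params, **kwargs):
        # A real action would modify the node via node_info here.
        self.applied.append((node_info, params['message']))
```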
Making changes to the database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to make a change to the ironic-inspector database you must update the
database models found in :doc:`/contributor/api/ironic_inspector.db` and then
create a migration to reflect that change.
There are two ways to create a migration, described below; both generate a new
migration file. In this file there is only one function:

* ``upgrade`` - the function run when ``ironic-inspector-dbsync upgrade`` is
  invoked. It should be populated with code that brings the database up to its
  new state from the state it was in after the last migration.
For further information on creating a migration, refer to
`Create a Migration Script`_ from the alembic documentation.
Autogenerate
------------
This is the simplest way to create a migration. Alembic will compare the
models to an up-to-date database and then attempt to write a migration based
on the differences. This generates correct migrations in most cases; however,
there are cases where it cannot detect some changes and manual modification
may be required. See `What does Autogenerate Detect (and what does it not
detect?)`_ in the alembic documentation.

::

    ironic-inspector-dbsync upgrade
    ironic-inspector-dbsync revision -m "A short description" --autogenerate
Manual
------
This will generate an empty migration file, with the correct revision
information already included. However, the upgrade function is left empty and
must be populated manually in order to perform the correct actions on the
database::

    ironic-inspector-dbsync revision -m "A short description"
.. _Create a Migration Script: http://alembic.zzzcomputing.com/en/latest/tutorial.html#create-a-migration-script
.. _What does Autogenerate Detect (and what does it not detect?): http://alembic.zzzcomputing.com/en/latest/autogenerate.html#what-does-autogenerate-detect-and-what-does-it-not-detect
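For illustration, a manually populated migration might look like the following sketch (the table, column, and revision identifiers are hypothetical; alembic writes the real revision values when it generates the file):

```python
# Hypothetical contents of a manually populated migration file.
from alembic import op
import sqlalchemy as sa

# These identifiers are written by alembic when the file is generated.
revision = '0123abcd4567'
down_revision = 'fedc9876ba10'


def upgrade():
    # Bring the database up to its new state: here, add a nullable
    # column to a (hypothetical) 'nodes' table.
    op.add_column('nodes',
                  sa.Column('example_flag', sa.Boolean(), nullable=True))
```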
Implementing PXE Filter Drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Background
----------
During in-band introspection, **inspector** PXE-boots the Ironic Python Agent
(IPA) "live" image to inspect the bare metal server. **ironic** also PXE-boots
IPA to perform tasks on a node, such as deploying an image. **ironic** uses
**neutron** to provide DHCP; however, **neutron** does not provide DHCP for
unknown MAC addresses, so **inspector** has to use its own DHCP/TFTP stack for
discovery and inspection.

When **ironic** and **inspector** operate in the same L2 network, there is a
potential for the two DHCP servers to race, which could result in a node that
**ironic** is deploying being PXE-booted by **inspector** instead.

To prevent DHCP races between the **inspector** DHCP and the **ironic** DHCP,
**inspector** has to be able to filter which nodes can get a DHCP lease from
the **inspector** DHCP server. These filters can then be used to prevent nodes
enrolled in the **ironic** inventory from being PXE-booted unless they are
explicitly moved into the ``inspected`` state.
Filter Interface
----------------
.. py:currentmodule:: ironic_inspector.pxe_filter.interface
The contract between **inspector** and a PXE filter driver is described in the
:class:`FilterDriver` interface. The methods a driver has to implement are:

* :meth:`~FilterDriver.init_filter`, called on service start to initialize the
  internal driver state
* :meth:`~FilterDriver.sync`, called both periodically and when a node starts
  or finishes introspection, to allow or deny its ports' MAC addresses in the
  driver
* :meth:`~FilterDriver.tear_down_filter`, called on service exit to reset the
  internal driver state

.. py:currentmodule:: ironic_inspector.pxe_filter.base

Driver-specific configuration should preferably be parsed during
instantiation. There is also a convenience generic interface implementation,
:class:`BaseFilter`, which provides base locking and initialization
implementation. If required, a driver can opt out of the periodic
synchronization by overriding :meth:`~BaseFilter.get_periodic_sync_task`.
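A minimal driver sketch following this contract (a stand-in base class replaces :class:`BaseFilter`, and the hard-coded MAC address is purely illustrative):

```python
# Sketch of a PXE filter driver. In a real driver, inherit
# ironic_inspector.pxe_filter.base.BaseFilter (which adds locking and
# initialization); the stand-in below keeps the example self-contained.
class BaseFilter:
    """Stand-in for ironic_inspector.pxe_filter.base.BaseFilter."""
    def init_filter(self):
        pass

    def sync(self, ironic):
        pass

    def tear_down_filter(self):
        pass


class AllowListFilter(BaseFilter):
    """Track which MAC addresses may get a DHCP lease from inspector."""

    def __init__(self):
        self.allowed_macs = set()

    def init_filter(self):
        # Called on service start: initialize internal driver state.
        self.allowed_macs.clear()

    def sync(self, ironic):
        # Called periodically and when introspection starts or finishes.
        # A real driver would look up the port MACs of nodes on
        # introspection via the ironic client; this MAC is a placeholder.
        self.allowed_macs = {'52:54:00:aa:bb:cc'}

    def tear_down_filter(self):
        # Called on service exit: reset internal driver state.
        self.allowed_macs.clear()
```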

LICENSE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,44 +1,15 @@
===============================================
Hardware introspection for OpenStack Bare Metal
===============================================
This project is no longer maintained.
.. warning::

    This project is now in maintenance mode and new deployments of it are
    discouraged. Please use the `built-in in-band inspection in ironic
    <https://docs.openstack.org/ironic/latest/admin/inspection/index.html>`_
    instead. For existing deployments, see the `migration guide
    <https://docs.openstack.org/ironic/latest/admin/inspection/migration.html>`_.
The contents of this repository are still available in the Git source code
management system. To see the contents of this repository before it reached
its end of life, please check out the previous commit with
``git checkout HEAD^1``.
Introduction
============
Please use `built-in in-band inspection in ironic
<https://docs.openstack.org/ironic/latest/admin/inspection/index.html>`_
instead. For existing deployments, see the `migration guide
<https://docs.openstack.org/ironic/latest/admin/inspection/migration.html>`_.
.. image:: https://governance.openstack.org/tc/badges/ironic-inspector.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html
This is an auxiliary service for discovering hardware properties of a node
managed by `Ironic`_. Hardware introspection, or hardware properties
discovery, is a process of getting the hardware parameters required for
scheduling from a bare metal node, given its power management credentials
(e.g. IPMI address, user name and password).
* Free software: Apache license
* Source: https://opendev.org/openstack/ironic-inspector/
* Bugs: https://bugs.launchpad.net/ironic-inspector
* Downloads: https://tarballs.openstack.org/ironic-inspector/
* Documentation: https://docs.openstack.org/ironic-inspector/latest/
* Python client library and CLI tool: `python-ironic-inspector-client
<https://pypi.org/project/python-ironic-inspector-client>`_
(`documentation
<https://docs.openstack.org/python-ironic-inspector-client/latest/>`_).
.. _Ironic: https://wiki.openstack.org/wiki/Ironic
.. note::

    **ironic-inspector** was called *ironic-discoverd* before version 2.0.0.
Release Notes
=============
For information on any current or prior version, see `the release notes`_.
.. _the release notes: https://docs.openstack.org/releasenotes/ironic-inspector/
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.


@@ -1,210 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# ironic-inspector documentation build configuration file, created by
# sphinx-quickstart on Tue Jul 25 15:17:47 2017.
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
extensions = [
'os_api_ref',
'openstackdocstheme',
]
html_theme = 'openstackdocs'
html_theme_options = {
"sidebar_dropdown": "api_ref",
"sidebar_mode": "toc",
}
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/ironic-inspector'
openstackdocs_use_storyboard = True
openstackdocs_auto_name = False
# General information about the project.
project = 'Hardware Introspection API Reference'
copyright = '2017-present, Ironic Inspector Developers'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# The reST default role (used for this markup: `text`) to use
# for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for man page output ----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples: ('sourcefile', 'target', 'title', 'author', 'manual').
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'IronicInspectorAPIRefdoc'
# -- Options for LaTeX output -------------------------------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
'OpenStack Hardware Introspection API Documentation',
'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True


@@ -1,21 +0,0 @@
:tocdepth: 2
============================
Bare Metal Introspection API
============================
By default **ironic-inspector** listens on ``[::]:5050``; the host and port
can be changed in the configuration file. The protocol is JSON over HTTP.
.. warning::
The ironic-inspector project is in the maintenance mode, its API reference
is provided for historical reasons. New applications should use the
`baremetal API <https://docs.openstack.org/api-ref/baremetal/>`_ instead.
.. rest_expand_all::
.. include:: introspection-api-versions.inc
.. include:: introspection-api-v1-introspection.inc
.. include:: introspection-api-v1-introspection-management.inc
.. include:: introspection-api-v1-continue.inc
.. include:: introspection-api-v1-rules.inc


@@ -1,68 +0,0 @@
.. -*- rst -*-
==========================
Process introspection data
==========================
After the ramdisk collects the required information from the bare metal
node, it should post it back to Inspector via the ``POST /v1/continue`` API.
.. warning::
Operators are reminded not to expose the Ironic Inspector API to
unsecured and untrusted networks. The API below is available to
*unauthenticated* clients because **ironic-python-agent** ramdisk
does not have access to keystone credentials.
Ramdisk Callback
================
.. rest_method:: POST /v1/continue
This is the API for the ramdisk to post back all discovered data.
This should not be used for clients other than the ramdisk.
The full list of hardware inventory keys can be found in the **ironic-python-agent**
documentation: `hardware inventory <https://docs.openstack.org/ironic-python-agent/latest/admin/how_it_works.html#hardware-inventory>`_.
Normal response codes: 201
Error codes: 400
Request
-------
List of mandatory hardware keys:
.. rest_parameters:: parameters.yaml
- inventory: inventory
- memory: memory
- cpu: cpu
- interfaces: interfaces
- disks: disks
- root_disk: root_disk
- bmc_address: bmc_address
- boot_interface: boot_interface
- error: ramdisk_error
- logs: logs
**Example node introspection continue request:**
.. literalinclude:: samples/api-v1-continue-request.json
:language: javascript
Response
--------
The response will contain the Ironic node ``uuid``.
.. rest_parameters:: parameters.yaml
- uuid: node_uuid
**Example JSON representation:**
.. literalinclude:: samples/api-v1-common-node-uuid.json
:language: javascript
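As a minimal sketch, a ramdisk-side client could assemble and prepare the
unauthenticated callback request like this, using only the standard library.
The host, port and all inventory values below are illustrative assumptions,
not fixed by the API:

```python
import json
import urllib.request

# Minimal hardware inventory with the mandatory keys described above;
# all concrete values here are examples only.
payload = {
    "inventory": {
        "memory": {"physical_mb": 2048},
        "cpu": {"count": 2, "architecture": "x86_64"},
        "interfaces": [{"name": "eth0", "mac_address": "52:54:00:4e:3d:30"}],
        "disks": [{"name": "/dev/vda", "size": 13958643712}],
    },
    "root_disk": {"name": "/dev/vda", "size": 13958643712},
    "boot_interface": "52:54:00:4e:3d:30",
}

# Build (but do not send) the POST /v1/continue request; no auth headers
# are attached, since this endpoint is intentionally unauthenticated.
req = urllib.request.Request(
    "http://127.0.0.1:5050/v1/continue",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

On success the server answers 201 with the node ``uuid`` in the body.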


@@ -1,146 +0,0 @@
.. -*- rst -*-
========================================
Introspection Management (introspection)
========================================
Aborting introspection, getting introspection data and reapplying
introspection can be done through introspection sub-resources.
Abort Introspection
===================
.. rest_method:: POST /v1/introspection/{node_id}/abort
Abort running introspection.
Normal response codes: 202
Error codes:
* 400 - bad request
* 401, 403 - missing or invalid authentication
* 404 - node cannot be found
* 409 - inspector has locked this node for processing
Request
-------
.. rest_parameters:: parameters.yaml
- node_id: node_id
Get Introspection data
======================
.. rest_method:: GET /v1/introspection/{node_id}/data
Return stored data from successful introspection.
.. note::
We do not provide any backward compatibility guarantees regarding the
format and contents of the stored data. Notably, it depends on the ramdisk
used and plugins enabled both in the ramdisk and in inspector itself.
Normal response codes: 200
Error codes:
* 400 - bad request
* 401, 403 - missing or invalid authentication
* 404 - data cannot be found or data storage not configured
Request
-------
.. rest_parameters:: parameters.yaml
- node_id: node_id
Response
--------
The response will contain introspection data in the form of a JSON string.
**Example JSON representation of introspection data:**
.. literalinclude:: samples/api-v1-data-introspection-response.json
:language: javascript
Get Unprocessed Introspection data
==================================
.. rest_method:: GET /v1/introspection/{node_id}/data/unprocessed
Return stored raw (unprocessed) data from introspection.
.. versionadded:: 1.17
Unprocessed introspection data can now be retrieved.
.. note::
We do not provide any backward compatibility guarantees regarding the
format and contents of the stored data. Notably, it depends on the ramdisk
used and plugins enabled both in the ramdisk and in inspector itself.
Normal response codes: 200
Error codes:
* 400 - bad request
* 401, 403 - missing or invalid authentication
* 404 - data cannot be found or data storage not configured
Request
-------
.. rest_parameters:: parameters.yaml
- node_id: node_id
Response
--------
The response will contain introspection data in the form of a JSON string.
**Example JSON representation of introspection data:**
.. literalinclude:: samples/api-v1-data-introspection-response.json
:language: javascript
Reapply Introspection on data
=============================
.. rest_method:: POST /v1/introspection/{node_id}/data/unprocessed
This method triggers introspection on either stored introspection data or raw
introspection data provided in the request. If the introspection data is
provided in the request body, it should be valid JSON with content similar to
the ramdisk callback request.
.. versionadded:: 1.15
Unprocessed introspection data can be sent via request body.
.. note::
Reapplying introspection on stored data is only possible when a storage
backend is enabled via ``[processing]store_data``.
Normal response codes: 202
Error codes:
* 400 - bad request, store not configured or malformed data in request body
* 401, 403 - missing or invalid authentication
* 404 - node not found for Node ID
* 409 - inspector locked node for processing
Request
-------
.. rest_parameters:: parameters.yaml
- node_id: node_id
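The two reapply variants above (stored data vs. data supplied in the request
body, the latter available since API 1.15) can be sketched as a small helper.
This is an illustrative function, not part of ironic-inspector itself:

```python
import json


def reapply_request(node_id, unprocessed=None):
    """Build the reapply call for a node.

    With no body, the server reapplies introspection on its stored data;
    since API version 1.15, raw ramdisk data may instead be passed in the
    request body. Sketch only; node_id values here are examples.
    """
    url = "/v1/introspection/%s/data/unprocessed" % node_id
    body = None if unprocessed is None else json.dumps(unprocessed).encode()
    return "POST", url, body


method, url, body = reapply_request("c244557e-899f-46fa-a1ff-5b2c6718616b")
```

A 202 response means the reapply was accepted for processing.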


@@ -1,128 +0,0 @@
.. -*- rst -*-
==================================
Node Introspection (introspection)
==================================
Starting introspection and getting introspection status are done through the
``/v1/introspection`` resource. There are also several sub-resources, which
allow further actions to be performed on introspection.
Start Introspection
===================
.. rest_method:: POST /v1/introspection/{node_id}
Initiate hardware introspection for node {node_id}. All power management
configuration for this node needs to be done prior to calling the endpoint.
In the case of missing or invalid authentication, the response code will be
401 or 403 respectively.
If Inspector doesn't find node {node_id}, it will return 404.
Normal response codes: 202
Error codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- node_id: node_id
- manage_boot: manage_boot
List All Introspection statuses
===============================
.. rest_method:: GET /v1/introspection/
The returned status list is sorted by the ``started_at, uuid`` attribute
pair, newest items first.
In case of missing or invalid authentication, the response code will be 401 or 403.
Normal response codes: 200
Error codes: 400, 401, 403
Request
-------
The status list may be paginated with these query string fields:
.. rest_parameters:: parameters.yaml
- marker: marker
- limit: limit
- state: state
Response
--------
The response will contain a list of status objects:
.. rest_parameters:: parameters.yaml
- error: error
- finished: finished
- finished_at: finished_at
- links: links
- started_at: started_at
- state: state
- uuid: node_id
**Example JSON representation of an introspection:**
.. literalinclude:: samples/api-v1-get-introspections-response.json
:language: javascript
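The ``marker``/``limit`` pagination and ``state`` filter described above can
be sketched as a URL builder (an illustrative helper; the base URL is an
assumption):

```python
from urllib.parse import urlencode


def status_list_url(base, marker=None, limit=None, state=None):
    """Compose GET /v1/introspection with optional pagination/filter fields.

    marker: UUID of the last-seen status; limit: page size; state: list of
    state names (the state selector requires API version 1.18).
    """
    params = {}
    if marker:
        params["marker"] = marker
    if limit:
        params["limit"] = limit
    if state:
        # Multiple states are joined with commas, e.g. state=starting,waiting
        params["state"] = ",".join(state)
    query = urlencode(params)
    return base + "/v1/introspection" + ("?" + query if query else "")
```

For the next page, pass the last returned ``uuid`` as the new ``marker``.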
Show Introspection status
=========================
.. rest_method:: GET /v1/introspection/{node_id}
Show node introspection status.
In case of missing or invalid authentication, the response code will be 401 or 403.
If Inspector doesn't find node {node_id}, it will return 404.
Normal response codes: 200
Error codes: 400, 401, 403, 404
Request
-------
.. rest_parameters:: parameters.yaml
- node_id: node_id
Response
--------
The response will contain the complete introspection info, such as
start and finish times, introspection state, and errors, if any.
.. rest_parameters:: parameters.yaml
- error: error
- finished: finished
- finished_at: finished_at
- links: links
- started_at: started_at
- state: state
- uuid: node_id
**Example JSON representation of an introspection:**
.. literalinclude:: samples/api-v1-get-introspection-response.json
:language: javascript


@@ -1,159 +0,0 @@
.. -*- rst -*-
===================
Introspection Rules
===================
A simple JSON-based DSL is used to define rules, which run during introspection.
See `<https://docs.openstack.org/ironic-inspector/latest/user/usage.html#introspection-rules>`_
for more information on rules.
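For illustration, a rule document of the shape described above (conditions
plus actions) could be assembled and serialized like this; the field values
are examples only, not real UUIDs or image IDs:

```python
import json

# Example rule: set a driver_info field only when it is currently empty.
rule = {
    "description": "Example: set deploy_kernel when empty",
    "conditions": [
        {"op": "is-empty", "field": "node://driver_info.deploy_kernel"},
    ],
    "actions": [
        {"action": "set-attribute",
         "path": "driver_info/deploy_kernel",
         "value": "example-kernel-uuid"},
    ],
}

# The body sent to POST /v1/rules is simply this document as JSON.
body = json.dumps(rule)
```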
Create Introspection Rule
=========================
.. rest_method:: POST /v1/rules
Create a new introspection rule.
Normal response codes:
* 200 - OK for API version < 1.6
* 201 - OK for API version 1.6 and higher
Error codes:
* 400 - wrong rule format
Request
-------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- conditions: conditions
- actions: actions
- description: description
- scope: scope
**Example creating rule request:**
.. literalinclude:: samples/api-v1-create-rule-request.json
:language: javascript
Response
--------
The response will contain the full rule object; the ``conditions``
section may also contain additional default fields, like ``invert``,
``multiple`` and ``field``, see `Conditions <https://docs.openstack.org/ironic-inspector/latest/user/usage.html#conditions>`_.
.. rest_parameters:: parameters.yaml
- uuid: uuid
- conditions: conditions
- actions: actions
- description: description
- scope: scope
**Example JSON representation:**
.. literalinclude:: samples/api-v1-create-rule-response.json
:language: javascript
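The server-side defaulting of ``invert`` and ``multiple`` mentioned above can
be sketched as follows (an illustrative helper mirroring the documented
behavior, not the actual ironic-inspector implementation):

```python
# Documented defaults added to conditions that omit the optional fields.
CONDITION_DEFAULTS = {"invert": False, "multiple": "any"}


def with_defaults(condition):
    """Return a condition dict with the optional fields filled in,
    as they appear in the create-rule response."""
    filled = dict(CONDITION_DEFAULTS)
    filled.update(condition)
    return filled
```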
Get Introspection Rules
=======================
.. rest_method:: GET /v1/rules
List all introspection rules.
Normal response codes: 200
Response
--------
.. rest_parameters:: parameters.yaml
- uuid: uuid
- description: description
- scope: scope
- links: links
**Example JSON representation:**
.. literalinclude:: samples/api-v1-get-rules-response.json
:language: javascript
Get Introspection Rule
======================
.. rest_method:: GET /v1/rules/{uuid}
Get one introspection rule by its ``uuid``.
Normal response codes: 200
Error codes:
* 404 - rule not found
Request
-------
.. rest_parameters:: parameters.yaml
- uuid: uuid
Response
--------
The response will contain full rule object:
.. rest_parameters:: parameters.yaml
- uuid: uuid
- conditions: conditions
- actions: actions
- description: description
- scope: scope
**Example JSON representation:**
.. literalinclude:: samples/api-v1-get-rule-response.json
:language: javascript
Delete Introspection Rules
==========================
.. rest_method:: DELETE /v1/rules
Delete all introspection rules.
Normal response codes: 204
Delete Introspection Rule
=========================
.. rest_method:: DELETE /v1/rules/{uuid}
Delete introspection rule by ``uuid``.
Normal response codes: 204
Error codes:
* 404 - rule not found
Request
-------
.. rest_parameters:: parameters.yaml
- uuid: uuid


@@ -1,116 +0,0 @@
.. -*- rst -*-
============
API versions
============
Concepts
========
In order to bring new features to users over time, the Ironic
Inspector API supports versioning. There are two kinds of versions:
- ``major versions``, which have dedicated urls.
- ``microversions``, which can be requested through the use of the
``X-OpenStack-Ironic-Inspector-API-Version`` header or
the new standard singular header
``OpenStack-API-Version: baremetal-introspection <version>``.
The Version APIs work differently from other APIs as they *do not* require
authentication.
All API requests support the new standard singular header
``OpenStack-API-Version: baremetal-introspection <version>`` and the legacy
``X-OpenStack-Ironic-Inspector-API-Version`` header.
Either of these headers SHOULD be supplied with every request; in the absence
of both headers, the server will default to the current supported version in
all responses.
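A client might build either header form like so (an illustrative helper, not
part of any OpenStack SDK; the version values are examples):

```python
def version_headers(version="1.18", legacy=False):
    """Build the microversion request headers described above.

    Either the standard singular header or the legacy Inspector-specific
    header is accepted by the service; this sketch emits one of the two.
    """
    if legacy:
        return {"X-OpenStack-Ironic-Inspector-API-Version": version}
    return {"OpenStack-API-Version": "baremetal-introspection %s" % version}
```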
List API versions
=================
.. rest_method:: GET /
This fetches all the information about all known major API versions in the
deployment. Links to more specific information will be provided for each major
API version, as well as information about supported min and max microversions.
Normal response codes: 200
Request
-------
Response Example
----------------
.. rest_parameters:: parameters.yaml
- versions: versions
- id: id
- links: links
- status: status
- x-openstack-ironic-inspector-api-min-version: api-minimum-version
- x-openstack-ironic-inspector-api-max-version: api-maximum-version
.. literalinclude:: samples/api-root-response.json
:language: javascript
Show v1 API
===========
.. rest_method:: GET /v1/
Show all the resources within the Ironic Inspector v1 API.
Normal response codes: 200
Request
-------
Response Example
----------------
.. rest_parameters:: parameters.yaml
- resources: resources
- links: links
- href: href
- rel: rel
- name: name
- x-openstack-ironic-inspector-api-min-version: api-minimum-version
- x-openstack-ironic-inspector-api-max-version: api-maximum-version
.. literalinclude:: samples/api-v1-root-response.json
:language: javascript
Version History
===============
* **1.0** version of API at the moment of introducing versioning.
* **1.1** adds endpoint to retrieve stored introspection data.
* **1.2** endpoints for manipulating introspection rules.
* **1.3** endpoint for canceling running introspection.
* **1.4** endpoint for reapplying the introspection over stored data.
* **1.5** support for Ironic node names.
* **1.6** endpoint for rules creating returns 201 instead of 200 on success.
* **1.7** UUID, ``started_at``, ``finished_at`` in the introspection
status API.
* **1.8** support for listing all introspection statuses.
* **1.9** de-activate setting IPMI credentials, if IPMI credentials
are requested, API gets HTTP 400 response.
* **1.10** adds node state to the ``GET /v1/introspection/<node>`` and
``GET /v1/introspection`` API response data.
* **1.11** adds ``invert`` and multiple fields into rules response data.
* **1.12** this version indicates that support for setting IPMI credentials
was completely removed from API (all versions).
* **1.13** adds ``manage_boot`` parameter for the introspection API.
* **1.14** allows formatting to be applied to strings nested in dicts and lists
in the actions of introspection rules.
* **1.15** allows reapply with provided introspection data from request.
* **1.16** adds ``scope`` field to introspection rule.
* **1.17** adds ``GET /v1/introspection/<node>/data/unprocessed``.
* **1.18** adds state selector ``GET /v1/introspection?state=starting,...``.


@@ -1,284 +0,0 @@
# variables in header
api-maximum-version:
description: |
Maximum API microversion supported by this endpoint, e.g. "1.10"
in: header
required: true
type: string
api-minimum-version:
description: |
Minimum API microversion supported by this endpoint, e.g. "1.1"
in: header
required: true
type: string
x-auth-token:
description: |
The client token passed to the Ironic Inspector API for
authentication.
in: header
required: true
type: string
x-openstack-ironic-inspector-api-version:
description: >
A request SHOULD include this header to indicate to the Ironic Inspector
API service what version the client supports. The server may transform
the response object into compliance with the requested version, if it is
supported, or return a 406 Not Supported error.
If this header is not supplied, the server will default to the current
supported version in all responses.
in: header
required: true
type: string
# variables in path
node_id:
description: |
The UUID of the Ironic node.
in: path
required: true
type: string
uuid:
description: |
The UUID of the Ironic Inspector rule.
in: path
required: true
type: string
# common variables to query strings
limit:
description: |
Requests a page size of items. Returns a number of items up to a limit
value. Use the ``limit`` parameter to make an initial limited request and
use the ID of the last-seen item from the response as the ``marker``
parameter value in a subsequent limited request. This value cannot be
larger than the ``api_max_limit`` option in the configuration. If it is
higher than ``api_max_limit``, a 400 Bad Request error is returned.
in: query
required: false
type: integer
manage_boot:
description: |
Whether the current installation of ironic-inspector can manage PXE
booting of nodes.
in: query
required: false
type: string
marker:
description: |
The ID of the last-seen item. Use the ``limit`` parameter to make an
initial limited request and use the ID of the last-seen item from the
response as the ``marker`` parameter value in a subsequent request.
in: query
required: false
type: string
# variables to methods
actions:
description: |
List of operations that will be performed if ``conditions`` of this
rule are fulfilled.
in: body
required: true
type: array
bmc_address:
description: |
IP address of the node's BMC
in: body
required: false
type: string
boot_interface:
description: |
MAC address of the NIC that the machine PXE booted from
in: body
required: false
type: string
conditions:
description: |
List of logic statements or operations in rules that can be
evaluated as True or False.
in: body
required: false
type: array
cpu:
description: |
CPU information containing at least keys ``count`` (CPU count) and
``architecture`` (CPU architecture, e.g. ``x86_64``).
in: body
required: true
type: string
description:
description: |
Rule human-readable description.
in: body
required: false
type: string
disks:
description: |
List of disk block devices containing at least ``name`` and ``size``
keys. In case ``disks`` are not provided **ironic-inspector** assumes
that this is a disk-less node.
in: body
required: true
type: array
error:
description: |
Error description string or ``null``;
``Canceled by operator`` in case introspection was aborted.
in: body
required: true
type: string
finished:
description: |
Whether introspection has finished for this node.
in: body
required: true
type: boolean
finished_at:
description: |
UTC ISO8601 timestamp of introspection finished or ``null``.
in: body
required: true
type: string
href:
description: |
A bookmark link to resource object.
in: body
required: true
type: string
id:
description: |
API microversion, e.g. "1.12".
in: body
required: true
type: string
interfaces:
description: |
List of dictionaries with interface info, containing the following keys:
* ``name`` interface name,
* ``ipv4_address`` IPv4 address of the interface,
* ``mac_address`` MAC (physical) address of the interface,
* ``client_id`` InfiniBand Client-ID, ``None`` for Ethernet.
in: body
required: true
type: array
inventory:
description: Dictionary with hardware inventory keys.
in: body
required: true
type: object
links:
description: |
A list of relative links. Includes the self and
bookmark links.
in: body
required: true
type: array
logs:
description: Base64-encoded logs from the ramdisk.
in: body
required: false
type: string
memory:
description: |
Memory information containing at least the ``physical_mb`` key;
the memory size is reported by dmidecode.
in: body
required: true
type: string
name:
description: |
Resource name, like `introspection`, `rules`.
in: body
required: true
type: string
node_uuid:
description: Ironic node UUID.
in: body
required: true
type: string
ramdisk_error:
description: |
An error that happened during ramdisk run, interpreted by the
``ramdisk_error`` processing hook.
in: body
required: false
type: string
rel:
description: |
The relationship between the version and the href.
in: body
required: true
type: string
resources:
description: |
A list of available API resources.
in: body
required: true
type: array
root_disk:
description: |
Default deployment root disk as calculated by the **ironic-python-agent**
algorithm.
.. note::
The default processing hook ``root_disk_selection`` may change
``root_disk`` based on root device hints if the node specifies hints via
the ``root_device`` key in its properties. See the `root device hints docs
<https://docs.openstack.org/ironic/latest/install/advanced.html#specifying-the-disk-for-deployment-root-device-hints>`_.
in: body
required: true
type: string
scope:
description: |
Scope of an introspection rule. If set, the rule is only applied to nodes
that have matching ``inspection_scope`` property.
in: body
required: false
type: string
started_at:
description: |
UTC ISO8601 timestamp of introspection start.
in: body
required: true
type: string
state:
description: |
Current state of the introspection, possible values: ``enrolling``,
``error``, ``finished``, ``processing``, ``reapplying``, ``starting``,
``waiting``. For detailed information about states, see
`Inspector states <https://docs.openstack.org/ironic-inspector/latest/user/workflow.html#state-machine-diagram>`_.
in: body
required: true
type: string
status:
description: |
The status of this API version. This can be one of:
- ``CURRENT`` This version is up to date and should be prioritized over all others.
- ``SUPPORTED`` This version is available and may not be updated in the future.
- ``DEPRECATED`` This version is still available but may be removed in the future.
- ``EXPERIMENTAL`` This version is under development and may be changed in the future.
in: body
required: true
type: string
version:
description: |
The version of this API response, e.g. "1.12".
in: body
required: true
type: string
versions:
description: |
Array of information about currently supported versions.
in: body
required: true
type: array


@@ -1,14 +0,0 @@
{
"versions": [
{
"id": "1.12",
"links": [
{
"href": "http://127.0.0.1:5050/v1",
"rel": "self"
}
],
"status": "CURRENT"
}
]
}


@@ -1,3 +0,0 @@
{
"uuid": "c244557e-899f-46fa-a1ff-5b2c6718616b"
}


@@ -1,3 +0,0 @@
{
"uuid": "b0ea6361-03cd-467c-859c-7230547dcb9a"
}


@@ -1,71 +0,0 @@
{
"root_disk": {
"rotational": true,
"vendor": "0x1af4",
"name": "/dev/vda",
"hctl": null,
"wwn_vendor_extension": null,
"wwn_with_extension": null,
"model": "",
"wwn": null,
"serial": null,
"size": 13958643712
},
"boot_interface": "52:54:00:4e:3d:30",
"inventory": {
"bmc_address": "192.167.2.134",
"interfaces": [
{
"lldp": null,
"product": "0x0001",
"vendor": "0x1af4",
"name": "eth1",
"has_carrier": true,
"ipv4_address": "172.24.42.101",
"client_id": null,
"mac_address": "52:54:00:47:20:4d"
},
{
"lldp": null,
"product": "0x0001",
"vendor": "0x1af4",
"name": "eth0",
"has_carrier": true,
"ipv4_address": "172.24.42.100",
"client_id": null,
"mac_address": "52:54:00:4e:3d:30"
}
],
"disks": [
{
"rotational": true,
"vendor": "0x1af4",
"name": "/dev/vda",
"hctl": null,
"wwn_vendor_extension": null,
"wwn_with_extension": null,
"model": "",
"wwn": null,
"serial": null,
"size": 13958643712
}
],
"memory": {
"physical_mb": 2048,
"total": 2105864192
},
"cpu": {
"count": 2,
"frequency": "2100.084",
"flags": [
"fpu",
"mmx",
"fxsr",
"sse",
"sse2"
],
"architecture": "x86_64"
}
},
"logs": "<hidden>"
}


@@ -1,27 +0,0 @@
{
"uuid":"7459bf7c-9ff9-43a8-ba9f-48542ecda66c",
"description":"Set deploy info if not already set on node",
"actions":[
{
"action":"set-attribute",
"path":"driver_info/deploy_kernel",
"value":"8fd65-c97b-4d00-aa8b-7ed166a60971"
},
{
"action":"set-attribute",
"path":"driver_info/deploy_ramdisk",
"value":"09e5420c-6932-4199-996e-9485c56b3394"
}
],
"conditions":[
{
"op":"is-empty",
"field":"node://driver_info.deploy_ramdisk"
},
{
"op":"is-empty",
"field":"node://driver_info.deploy_kernel"
}
],
"scope":"Delivery_1"
}


@@ -1,37 +0,0 @@
{
"actions": [
{
"action": "set-attribute",
"path": "driver_info/deploy_kernel",
"value": "8fd65-c97b-4d00-aa8b-7ed166a60971"
},
{
"action": "set-attribute",
"path": "driver_info/deploy_ramdisk",
"value": "09e5420c-6932-4199-996e-9485c56b3394"
}
],
"conditions": [
{
"field": "node://driver_info.deploy_ramdisk",
"invert": false,
"multiple": "any",
"op": "is-empty"
},
{
"field": "node://driver_info.deploy_kernel",
"invert": false,
"multiple": "any",
"op": "is-empty"
}
],
"description": "Set deploy info if not already set on node",
"links": [
{
"href": "/v1/rules/7459bf7c-9ff9-43a8-ba9f-48542ecda66c",
"rel": "self"
}
],
"uuid": "7459bf7c-9ff9-43a8-ba9f-48542ecda66c",
"scope": ""
}


@@ -1,110 +0,0 @@
{
"cpu_arch":"x86_64",
"macs":[
"52:54:00:4e:3d:30"
],
"root_disk":{
"rotational":true,
"vendor":"0x1af4",
"name":"/dev/vda",
"hctl":null,
"wwn_vendor_extension":null,
"wwn_with_extension":null,
"model":"",
"wwn":null,
"serial":null,
"size":13958643712
},
"interfaces":{
"eth0":{
"ip":"172.24.42.100",
"mac":"52:54:00:4e:3d:30",
"pxe":true,
"client_id":null
}
},
"cpus":2,
"boot_interface":"52:54:00:4e:3d:30",
"memory_mb":2048,
"ipmi_address":"192.167.2.134",
"inventory":{
"bmc_address":"192.167.2.134",
"interfaces":[
{
"lldp":null,
"product":"0x0001",
"vendor":"0x1af4",
"name":"eth1",
"has_carrier":true,
"ipv4_address":"172.24.42.101",
"client_id":null,
"mac_address":"52:54:00:47:20:4d"
},
{
"lldp":null,
"product":"0x0001",
"vendor":"0x1af4",
"name":"eth0",
"has_carrier":true,
"ipv4_address":"172.24.42.100",
"client_id":null,
"mac_address":"52:54:00:4e:3d:30"
}
],
"disks":[
{
"rotational":true,
"vendor":"0x1af4",
"name":"/dev/vda",
"hctl":null,
"wwn_vendor_extension":null,
"wwn_with_extension":null,
"model":"",
"wwn":null,
"serial":null,
"size":13958643712
}
],
"boot":{
"current_boot_mode":"bios",
"pxe_interface":"52:54:00:4e:3d:30"
},
"system_vendor":{
"serial_number":"Not Specified",
"product_name":"Bochs",
"manufacturer":"Bochs"
},
"memory":{
"physical_mb":2048,
"total":2105864192
},
"cpu":{
"count":2,
"frequency":"2100.084",
"flags": [
"fpu",
"mmx",
"fxsr",
"sse",
"sse2"
],
"architecture":"x86_64"
}
},
"error":null,
"local_gb":12,
"all_interfaces":{
"eth1":{
"ip":"172.24.42.101",
"mac":"52:54:00:47:20:4d",
"pxe":false,
"client_id":null
},
"eth0":{
"ip":"172.24.42.100",
"mac":"52:54:00:4e:3d:30",
"pxe":true,
"client_id":null
}
}
}


@@ -1,14 +0,0 @@
{
"error": null,
"finished": true,
"finished_at": "2017-08-16T12:24:30",
"links": [
{
"href": "http://127.0.0.1:5050/v1/introspection/c244557e-899f-46fa-a1ff-5b2c6718616b",
"rel": "self"
}
],
"started_at": "2017-08-16T12:22:01",
"state": "finished",
"uuid": "c244557e-899f-46fa-a1ff-5b2c6718616b"
}


@@ -1,32 +0,0 @@
{
"introspection": [
{
"error": null,
"finished": true,
"finished_at": "2017-08-17T11:36:16",
"links": [
{
"href": "http://127.0.0.1:5050/v1/introspection/05ccda19-581b-49bf-8f5a-6ded99701d87",
"rel": "self"
}
],
"started_at": "2017-08-17T11:33:43",
"state": "finished",
"uuid": "05ccda19-581b-49bf-8f5a-6ded99701d87"
},
{
"error": null,
"finished": true,
"finished_at": "2017-08-16T12:24:30",
"links": [
{
"href": "http://127.0.0.1:5050/v1/introspection/c244557e-899f-46fa-a1ff-5b2c6718616b",
"rel": "self"
}
],
"started_at": "2017-08-16T12:22:01",
"state": "finished",
"uuid": "c244557e-899f-46fa-a1ff-5b2c6718616b"
}
]
}


@@ -1,41 +0,0 @@
{
"actions": [
{
"action": "set-attribute",
"path": "driver",
"value": "agent_ipmitool"
},
{
"action": "set-attribute",
"path": "driver_info/ipmi_username",
"value": "username"
},
{
"action": "set-attribute",
"path": "driver_info/ipmi_password",
"value": "password"
}
],
"conditions": [
{
"field": "node://driver_info.ipmi_password",
"invert": false,
"multiple": "any",
"op": "is-empty"
},
{
"field": "node://driver_info.ipmi_username",
"invert": false,
"multiple": "any",
"op": "is-empty"
}
],
"description": "Set IPMI driver_info if no credentials",
"links": [
{
"href": "/v1/rules/b0ea6361-03cd-467c-859c-7230547dcb9a",
"rel": "self"
}
],
"uuid": "b0ea6361-03cd-467c-859c-7230547dcb9a"
}


@@ -1,24 +0,0 @@
{
"rules": [
{
"description": "Set deploy info if not already set on node",
"links": [
{
"href": "/v1/rules/7459bf7c-9ff9-43a8-ba9f-48542ecda66c",
"rel": "self"
}
],
"uuid": "7459bf7c-9ff9-43a8-ba9f-48542ecda66c"
},
{
"description": "Set IPMI driver_info if no credentials",
"links": [
{
"href": "/v1/rules/b0ea6361-03cd-467c-859c-7230547dcb9a",
"rel": "self"
}
],
"uuid": "b0ea6361-03cd-467c-859c-7230547dcb9a"
}
]
}


@@ -1,31 +0,0 @@
{
"resources": [
{
"links": [
{
"href": "http://127.0.0.1:5050/v1/introspection",
"rel": "self"
}
],
"name": "introspection"
},
{
"links": [
{
"href": "http://127.0.0.1:5050/v1/continue",
"rel": "self"
}
],
"name": "continue"
},
{
"links": [
{
"href": "http://127.0.0.1:5050/v1/rules",
"rel": "self"
}
],
"name": "rules"
}
]
}


@@ -1,12 +0,0 @@
# needed for mysql
mysql-client [platform:dpkg !platform:debian-bookworm]
mysql-server [platform:dpkg !platform:debian-bookworm]
mariadb-client [platform:debian-bookworm]
mariadb-server [platform:debian-bookworm]
# needed for psql
postgresql
postgresql-client [platform:dpkg]
# librsvg2 is needed for sphinxcontrib-svg2pdfconverter in docs builds.
librsvg2-tools [doc platform:rpm]
librsvg2-bin [doc platform:dpkg]


@@ -1,80 +0,0 @@
[[local|localrc]]
# Credentials
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
SWIFT_HASH=password
SWIFT_TEMPURL_KEY=password
# Enable Ironic plugin
enable_plugin ironic https://opendev.org/openstack/ironic
enable_plugin ironic-inspector https://opendev.org/openstack/ironic-inspector
# Disable nova novnc service, ironic does not support it anyway.
disable_service n-novnc
# Enable Swift for the direct deploy interface.
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account
# Disable Horizon
disable_service horizon
# Disable Cinder
disable_service cinder c-sch c-api c-vol
# Swift temp URLs are required for the direct deploy interface
SWIFT_ENABLE_TEMPURLS=True
# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_BAREMETAL_BASIC_OPS=True
DEFAULT_INSTANCE_TYPE=baremetal
# Enable additional hardware types, if needed.
#IRONIC_ENABLED_HARDWARE_TYPES=ipmi,fake-hardware
# Don't forget that many hardware types require enabling of additional
# interfaces, most often power and management:
#IRONIC_ENABLED_MANAGEMENT_INTERFACES=ipmitool,fake
#IRONIC_ENABLED_POWER_INTERFACES=ipmitool,fake
# The 'ipmi' hardware type's default deploy interface is 'iscsi'.
# This would change the default to 'direct':
#IRONIC_DEFAULT_DEPLOY_INTERFACE=direct
# Enable inspection via ironic-inspector
IRONIC_ENABLED_INSPECT_INTERFACES=inspector,no-inspect
# Make it the default for all hardware types:
IRONIC_DEFAULT_INSPECT_INTERFACE=inspector
# Change this to alter the default driver for nodes created by devstack.
# This driver should be in the enabled list above.
IRONIC_DEPLOY_DRIVER=ipmi
# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=2048
IRONIC_VM_SPECS_DISK=10
# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0
# To build your own IPA ramdisk from source, set this to True
IRONIC_BUILD_DEPLOY_RAMDISK=False
IRONIC_INSPECTOR_BUILD_RAMDISK=False
VIRT_DRIVER=ironic
# By default, DevStack creates a 10.0.0.0/24 network for instances.
# If this overlaps with the hosts network, you may adjust with the
# following.
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256
# Log all output to files
LOGFILE=/opt/stack/devstack.log
LOGDIR=/opt/stack/logs
IRONIC_VM_LOG_DIR=/opt/stack/ironic-bm-logs
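The `local.conf` fragment above is consumed by DevStack at stack time. A minimal sketch of wiring a few of those settings into a checkout (the `DEVSTACK_DIR` location is an assumption; a real run would then execute `./stack.sh`):

```shell
#!/usr/bin/env bash
# Sketch: append inspector-related settings to a DevStack local.conf.
# DEVSTACK_DIR is a hypothetical location; adjust to your checkout.
DEVSTACK_DIR=${DEVSTACK_DIR:-$HOME/devstack}
mkdir -p "$DEVSTACK_DIR"
cat >> "$DEVSTACK_DIR/local.conf" <<'EOF'
[[local|localrc]]
enable_plugin ironic-inspector https://opendev.org/openstack/ironic-inspector
IRONIC_ENABLED_INSPECT_INTERFACES=inspector,no-inspect
IRONIC_DEFAULT_INSPECT_INTERFACE=inspector
EOF
echo "wrote $DEVSTACK_DIR/local.conf"
# A real deployment would continue with: cd "$DEVSTACK_DIR" && ./stack.sh
```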


@@ -1,591 +0,0 @@
#!/usr/bin/env bash
IRONIC_INSPECTOR_DEBUG=${IRONIC_INSPECTOR_DEBUG:-True}
IRONIC_INSPECTOR_DIR=$DEST/ironic-inspector
IRONIC_INSPECTOR_DATA_DIR=$DATA_DIR/ironic-inspector
IRONIC_INSPECTOR_BIN_DIR=$(get_python_exec_prefix)
IRONIC_INSPECTOR_BIN_FILE=$IRONIC_INSPECTOR_BIN_DIR/ironic-inspector
IRONIC_INSPECTOR_BIN_FILE_CONDUCTOR=$IRONIC_INSPECTOR_BIN_DIR/ironic-inspector-conductor
IRONIC_INSPECTOR_DBSYNC_BIN_FILE=$IRONIC_INSPECTOR_BIN_DIR/ironic-inspector-dbsync
IRONIC_INSPECTOR_CONF_DIR=${IRONIC_INSPECTOR_CONF_DIR:-/etc/ironic-inspector}
IRONIC_INSPECTOR_CONF_FILE=$IRONIC_INSPECTOR_CONF_DIR/inspector.conf
IRONIC_INSPECTOR_CMD="$IRONIC_INSPECTOR_BIN_FILE --config-file $IRONIC_INSPECTOR_CONF_FILE"
IRONIC_INSPECTOR_CMD_CONDUCTOR="$IRONIC_INSPECTOR_BIN_FILE_CONDUCTOR --config-file $IRONIC_INSPECTOR_CONF_FILE"
IRONIC_INSPECTOR_DHCP_CONF_FILE=$IRONIC_INSPECTOR_CONF_DIR/dnsmasq.conf
IRONIC_INSPECTOR_ROOTWRAP_CONF_FILE=$IRONIC_INSPECTOR_CONF_DIR/rootwrap.conf
IRONIC_INSPECTOR_ADMIN_USER=${IRONIC_INSPECTOR_ADMIN_USER:-ironic-inspector}
IRONIC_INSPECTOR_DHCP_FILTER=${IRONIC_INSPECTOR_DHCP_FILTER:-iptables}
IRONIC_INSPECTOR_STANDALONE=${IRONIC_INSPECTOR_STANDALONE:-True}
# Support entry points installation of console scripts
IRONIC_INSPECTOR_UWSGI=${IRONIC_INSPECTOR_UWSGI:-ironic_inspector.wsgi:application}
IRONIC_INSPECTOR_UWSGI_CONF=$IRONIC_INSPECTOR_CONF_DIR/ironic-inspector-uwsgi.ini
# Determine if ironic is in enforce-scope mode, and infer our operating mode
# from that unless explicitly set otherwise.
IRONIC_INSPECTOR_ENFORCE_SCOPE=${IRONIC_INSPECTOR_ENFORCE_SCOPE:-${IRONIC_ENFORCE_SCOPE:-False}}
# Then fall back to trueorfalse to normalize the value to the standardized string format used by the jobs.
IRONIC_INSPECTOR_ENFORCE_SCOPE=$(trueorfalse True IRONIC_INSPECTOR_ENFORCE_SCOPE)
# Reset the input in the event the plugin is running separately from ironic's
# devstack plugin.
IRONIC_ENFORCE_SCOPE=$(trueorfalse True IRONIC_ENFORCE_SCOPE)
if [[ -n ${IRONIC_INSPECTOR_MANAGE_FIREWALL} ]] ; then
echo "IRONIC_INSPECTOR_MANAGE_FIREWALL is deprecated." >&2
echo "Please use IRONIC_INSPECTOR_DHCP_FILTER == noop/iptables/dnsmasq instead." >&2
if [[ "$IRONIC_INSPECTOR_DHCP_FILTER" != "iptables" ]] ; then
# both manage firewall and filter driver set together but driver isn't iptables
echo "Inconsistent configuration: IRONIC_INSPECTOR_MANAGE_FIREWALL used while" >&2
echo "IRONIC_INSPECTOR_DHCP_FILTER == $IRONIC_INSPECTOR_DHCP_FILTER" >&2
exit 1
fi
if [[ $(trueorfalse True IRONIC_INSPECTOR_MANAGE_FIREWALL) == "False" ]] ; then
echo "IRONIC_INSPECTOR_MANAGE_FIREWALL == False" >&2
echo "Setting IRONIC_INSPECTOR_DHCP_FILTER=noop" >&2
IRONIC_INSPECTOR_DHCP_FILTER=noop
fi
fi
# dnsmasq dhcp filter configuration
# override the default hostsdir so devstack collects the MAC files (/etc)
IRONIC_INSPECTOR_DHCP_HOSTSDIR=${IRONIC_INSPECTOR_DHCP_HOSTSDIR:-/etc/ironic-inspector/dhcp-hostsdir}
IRONIC_INSPECTOR_DNSMASQ_STOP_COMMAND=${IRONIC_INSPECTOR_DNSMASQ_STOP_COMMAND:-systemctl stop devstack@ironic-inspector-dhcp}
IRONIC_INSPECTOR_DNSMASQ_START_COMMAND=${IRONIC_INSPECTOR_DNSMASQ_START_COMMAND:-systemctl start devstack@ironic-inspector-dhcp}
IRONIC_INSPECTOR_HOST=$SERVICE_HOST
IRONIC_INSPECTOR_PORT=5050
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
IRONIC_INSPECTOR_URI="http://$IRONIC_INSPECTOR_HOST/baremetal-introspection"
else
IRONIC_INSPECTOR_URI="http://$IRONIC_INSPECTOR_HOST:$IRONIC_INSPECTOR_PORT"
fi
IRONIC_INSPECTOR_BUILD_RAMDISK=$(trueorfalse False IRONIC_INSPECTOR_BUILD_RAMDISK)
IRONIC_RAMDISK_BRANCH=${IRONIC_RAMDISK_BRANCH:-${ZUUL_BRANCH:-master}}
IRONIC_AGENT_KERNEL_URL=${IRONIC_AGENT_KERNEL_URL:-http://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-$IRONIC_RAMDISK_BRANCH.kernel}
IRONIC_AGENT_RAMDISK_URL=${IRONIC_AGENT_RAMDISK_URL:-http://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-$IRONIC_RAMDISK_BRANCH.initramfs}
IRONIC_INSPECTOR_COLLECTORS=${IRONIC_INSPECTOR_COLLECTORS:-default,logs,pci-devices}
IRONIC_INSPECTOR_RAMDISK_LOGDIR=${IRONIC_INSPECTOR_RAMDISK_LOGDIR:-$IRONIC_INSPECTOR_DATA_DIR/ramdisk-logs}
IRONIC_INSPECTOR_ALWAYS_STORE_RAMDISK_LOGS=${IRONIC_INSPECTOR_ALWAYS_STORE_RAMDISK_LOGS:-True}
IRONIC_INSPECTOR_TIMEOUT=${IRONIC_INSPECTOR_TIMEOUT:-600}
IRONIC_INSPECTOR_CLEAN_UP_PERIOD=${IRONIC_INSPECTOR_CLEAN_UP_PERIOD:-}
# These should not overlap with other ranges/networks
IRONIC_INSPECTOR_INTERNAL_IP=${IRONIC_INSPECTOR_INTERNAL_IP:-172.24.42.254}
IRONIC_INSPECTOR_INTERNAL_SUBNET_SIZE=${IRONIC_INSPECTOR_INTERNAL_SUBNET_SIZE:-24}
IRONIC_INSPECTOR_DHCP_RANGE=${IRONIC_INSPECTOR_DHCP_RANGE:-172.24.42.100,172.24.42.253}
IRONIC_INSPECTOR_INTERFACE=${IRONIC_INSPECTOR_INTERFACE:-br-inspector}
IRONIC_INSPECTOR_INTERFACE_PHYSICAL=$(trueorfalse False IRONIC_INSPECTOR_INTERFACE_PHYSICAL)
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
IRONIC_INSPECTOR_INTERNAL_URI="http://$IRONIC_INSPECTOR_INTERNAL_IP/baremetal-introspection"
else
IRONIC_INSPECTOR_INTERNAL_URI="http://$IRONIC_INSPECTOR_INTERNAL_IP:$IRONIC_INSPECTOR_PORT"
fi
IRONIC_INSPECTOR_INTERNAL_IP_WITH_NET="$IRONIC_INSPECTOR_INTERNAL_IP/$IRONIC_INSPECTOR_INTERNAL_SUBNET_SIZE"
# Whether DevStack will be setup for bare metal or VMs
IRONIC_IS_HARDWARE=$(trueorfalse False IRONIC_IS_HARDWARE)
IRONIC_INSPECTOR_NODE_NOT_FOUND_HOOK=${IRONIC_INSPECTOR_NODE_NOT_FOUND_HOOK:-""}
IRONIC_INSPECTOR_OVS_PORT=${IRONIC_INSPECTOR_OVS_PORT:-brbm-inspector}
IRONIC_INSPECTOR_EXTRA_KERNEL_CMDLINE=${IRONIC_INSPECTOR_EXTRA_KERNEL_CMDLINE:-""}
IRONIC_INSPECTOR_POWER_OFF=${IRONIC_INSPECTOR_POWER_OFF:-True}
IRONIC_INSPECTOR_MANAGED_BOOT=$(trueorfalse False IRONIC_INSPECTOR_MANAGED_BOOT)
IRONIC_INSPECTION_NET_NAME=${IRONIC_INSPECTION_NET_NAME:-$IRONIC_CLEAN_NET_NAME}
if is_service_enabled swift; then
DEFAULT_DATA_STORE=swift
else
DEFAULT_DATA_STORE=database
fi
IRONIC_INSPECTOR_INTROSPECTION_DATA_STORE=${IRONIC_INSPECTOR_INTROSPECTION_DATA_STORE:-$DEFAULT_DATA_STORE}
GITDIR["python-ironic-inspector-client"]=$DEST/python-ironic-inspector-client
GITREPO["python-ironic-inspector-client"]=${IRONIC_INSPECTOR_CLIENT_REPO:-${GIT_BASE}/openstack/python-ironic-inspector-client.git}
GITBRANCH["python-ironic-inspector-client"]=${IRONIC_INSPECTOR_CLIENT_BRANCH:-master}
# This is defined in ironic's devstack plugin. Redefine it just in case, and
# insert "inspector" if it's missing.
IRONIC_ENABLED_INSPECT_INTERFACES=${IRONIC_ENABLED_INSPECT_INTERFACES:-"inspector,no-inspect,fake"}
if [[ "$IRONIC_ENABLED_INSPECT_INTERFACES" != *inspector* ]]; then
IRONIC_ENABLED_INSPECT_INTERFACES="inspector,$IRONIC_ENABLED_INSPECT_INTERFACES"
fi
# Ironic Inspector tempest variables
IRONIC_INSPECTOR_TEMPEST_DISCOVERY_TIMEOUT=${IRONIC_INSPECTOR_TEMPEST_DISCOVERY_TIMEOUT:-}
IRONIC_INSPECTOR_TEMPEST_INTROSPECTION_TIMEOUT=${IRONIC_INSPECTOR_TEMPEST_INTROSPECTION_TIMEOUT:-}
### Utilities
function mkdir_chown_stack {
if [[ ! -d "$1" ]]; then
sudo mkdir -p "$1"
fi
sudo chown $STACK_USER "$1"
}
function inspector_iniset {
local section=$1
local option=$2
shift 2
# iniset expects the value as its fourth argument; pass the remaining args as one quoted value
iniset "$IRONIC_INSPECTOR_CONF_FILE" $section $option "$*"
}
### Install-start-stop
function install_inspector {
setup_develop $IRONIC_INSPECTOR_DIR
# Check if things look okay
$IRONIC_INSPECTOR_BIN_DIR/ironic-inspector-status upgrade check
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
install_apache_wsgi
# NOTE(rpittau): since devstack doesn't install test-requirements
# anymore, we need to install dependencies for drivers before
# starting inspector services
pip_install_gr pymemcache
fi
}
function install_inspector_dhcp {
install_package dnsmasq
}
function install_inspector_client {
if use_library_from_git python-ironic-inspector-client; then
git_clone_by_name python-ironic-inspector-client
setup_dev_lib python-ironic-inspector-client
else
pip_install_gr python-ironic-inspector-client
fi
}
function start_inspector {
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "True" ]]; then
run_process ironic-inspector "$IRONIC_INSPECTOR_CMD"
else
run_process ironic-inspector-api "$(which uwsgi) --procname-prefix ironic-inspector-api --ini $IRONIC_INSPECTOR_UWSGI_CONF --pyargv \"--config-file $IRONIC_INSPECTOR_CONF_FILE\""
run_process ironic-inspector-conductor "$IRONIC_INSPECTOR_CMD_CONDUCTOR"
fi
echo "Waiting for ironic-inspector API to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- $IRONIC_INSPECTOR_URI; do sleep 1; done"; then
die $LINENO "ironic-inspector API did not start"
fi
}
function is_inspector_dhcp_required {
[[ "$IRONIC_INSPECTOR_MANAGE_FIREWALL" == "True" ]] || \
[[ "${IRONIC_INSPECTOR_DHCP_FILTER:-iptables}" != "noop" ]] && \
[[ "$IRONIC_INSPECTOR_MANAGED_BOOT" == "False" ]]
}
function start_inspector_dhcp {
# NOTE(dtantsur): USE_SYSTEMD requires an absolute path
run_process ironic-inspector-dhcp \
"$(which dnsmasq) --conf-file=$IRONIC_INSPECTOR_DHCP_CONF_FILE" \
"" root
}
function stop_inspector {
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "True" ]]; then
stop_process ironic-inspector
else
stop_process ironic-inspector-api
stop_process ironic-inspector-conductor
fi
}
function stop_inspector_dhcp {
stop_process ironic-inspector-dhcp
}
### Configuration
function prepare_tftp {
IRONIC_INSPECTOR_IMAGE_PATH="$TOP_DIR/files/ironic-inspector"
IRONIC_INSPECTOR_KERNEL_PATH="$IRONIC_INSPECTOR_IMAGE_PATH.kernel"
IRONIC_INSPECTOR_INITRAMFS_PATH="$IRONIC_INSPECTOR_IMAGE_PATH.initramfs"
IRONIC_INSPECTOR_CALLBACK_URI="$IRONIC_INSPECTOR_INTERNAL_URI/v1/continue"
IRONIC_INSPECTOR_KERNEL_CMDLINE="root=/dev/ram0 $IRONIC_INSPECTOR_EXTRA_KERNEL_CMDLINE"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-inspection-callback-url=$IRONIC_INSPECTOR_CALLBACK_URI"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-api-url=$SERVICE_PROTOCOL://$SERVICE_HOST/baremetal"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-insecure=1 systemd.journald.forward_to_console=yes"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE vga=normal console=tty0 console=ttyS0"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-inspection-collectors=$IRONIC_INSPECTOR_COLLECTORS"
IRONIC_INSPECTOR_KERNEL_CMDLINE="$IRONIC_INSPECTOR_KERNEL_CMDLINE ipa-debug=1"
if [[ "$IRONIC_INSPECTOR_BUILD_RAMDISK" == "True" ]]; then
if [ ! -e "$IRONIC_INSPECTOR_KERNEL_PATH" -o ! -e "$IRONIC_INSPECTOR_INITRAMFS_PATH" ]; then
build_ipa_ramdisk "$IRONIC_INSPECTOR_KERNEL_PATH" "$IRONIC_INSPECTOR_INITRAMFS_PATH"
fi
else
# download the agent image tarball
if [ ! -e "$IRONIC_INSPECTOR_KERNEL_PATH" -o ! -e "$IRONIC_INSPECTOR_INITRAMFS_PATH" ]; then
if [ -e "$IRONIC_DEPLOY_KERNEL" -a -e "$IRONIC_DEPLOY_RAMDISK" ]; then
cp $IRONIC_DEPLOY_KERNEL $IRONIC_INSPECTOR_KERNEL_PATH
cp $IRONIC_DEPLOY_RAMDISK $IRONIC_INSPECTOR_INITRAMFS_PATH
else
wget "$IRONIC_AGENT_KERNEL_URL" -O $IRONIC_INSPECTOR_KERNEL_PATH
wget "$IRONIC_AGENT_RAMDISK_URL" -O $IRONIC_INSPECTOR_INITRAMFS_PATH
fi
fi
fi
if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
cp $IRONIC_INSPECTOR_KERNEL_PATH $IRONIC_HTTP_DIR/ironic-inspector.kernel
cp $IRONIC_INSPECTOR_INITRAMFS_PATH $IRONIC_HTTP_DIR
cat > "$IRONIC_HTTP_DIR/ironic-inspector.ipxe" <<EOF
#!ipxe
dhcp
kernel http://$IRONIC_HTTP_SERVER:$IRONIC_HTTP_PORT/ironic-inspector.kernel BOOTIF=\${mac} $IRONIC_INSPECTOR_KERNEL_CMDLINE initrd=ironic-inspector.initramfs
initrd http://$IRONIC_HTTP_SERVER:$IRONIC_HTTP_PORT/ironic-inspector.initramfs
boot
EOF
else
mkdir_chown_stack "$IRONIC_TFTPBOOT_DIR/pxelinux.cfg"
cp $IRONIC_INSPECTOR_KERNEL_PATH $IRONIC_TFTPBOOT_DIR/ironic-inspector.kernel
cp $IRONIC_INSPECTOR_INITRAMFS_PATH $IRONIC_TFTPBOOT_DIR
cat > "$IRONIC_TFTPBOOT_DIR/pxelinux.cfg/default" <<EOF
default inspect
label inspect
kernel ironic-inspector.kernel
append initrd=ironic-inspector.initramfs $IRONIC_INSPECTOR_KERNEL_CMDLINE
ipappend 3
EOF
fi
}
function inspector_configure_auth_for {
inspector_iniset $1 auth_type password
inspector_iniset $1 auth_url "$KEYSTONE_SERVICE_URI"
if [[ "$1" == "ironic" ]] && [[ "$IRONIC_ENFORCE_SCOPE" == "True" ]]; then
# If ironic is enforcing scope, service credentials are not
# enough, because they live in a "service project" and do not
# have a full view of the system.
inspector_iniset $1 username admin
inspector_iniset $1 password $ADMIN_PASSWORD
inspector_iniset $1 system_scope all
else
inspector_iniset $1 username $IRONIC_INSPECTOR_ADMIN_USER
inspector_iniset $1 password $SERVICE_PASSWORD
inspector_iniset $1 project_name $SERVICE_PROJECT_NAME
inspector_iniset $1 project_domain_id default
fi
inspector_iniset $1 user_domain_id default
inspector_iniset $1 cafile $SSL_BUNDLE_FILE
inspector_iniset $1 region_name $REGION_NAME
}
function is_dnsmasq_filter_required {
[[ "$IRONIC_INSPECTOR_DHCP_FILTER" == "dnsmasq" ]]
}
function configure_inspector_pxe_filter_dnsmasq {
mkdir_chown_stack $IRONIC_INSPECTOR_DHCP_HOSTSDIR
inspector_iniset pxe_filter driver dnsmasq
inspector_iniset dnsmasq_pxe_filter dhcp_hostsdir $IRONIC_INSPECTOR_DHCP_HOSTSDIR
inspector_iniset dnsmasq_pxe_filter dnsmasq_stop_command "$IRONIC_INSPECTOR_DNSMASQ_STOP_COMMAND"
inspector_iniset dnsmasq_pxe_filter dnsmasq_start_command "$IRONIC_INSPECTOR_DNSMASQ_START_COMMAND"
}
function configure_dnsmasq_dhcp_hostsdir {
sed -i -e '/dhcp-hostsdir.*=/d' $IRONIC_INSPECTOR_DHCP_CONF_FILE
echo "dhcp-hostsdir=$IRONIC_INSPECTOR_DHCP_HOSTSDIR" >> $IRONIC_INSPECTOR_DHCP_CONF_FILE
}
function _dnsmasq_rootwrap_ctl_tail {
# cut off the command head and join the remaining words with commas
shift
local bits=$*
echo ${bits//\ /, }
}
function configure_inspector_dnsmasq_rootwrap {
# turn the ctl commands into filter rules and dump the rootwrap file
local stop_cmd=( $IRONIC_INSPECTOR_DNSMASQ_STOP_COMMAND )
local start_cmd=( $IRONIC_INSPECTOR_DNSMASQ_START_COMMAND )
local stop_cmd_tail=$( _dnsmasq_rootwrap_ctl_tail ${stop_cmd[@]} )
local start_cmd_tail=$( _dnsmasq_rootwrap_ctl_tail ${start_cmd[@]} )
cat > "$IRONIC_INSPECTOR_CONF_DIR/rootwrap.d/ironic-inspector-dnsmasq.filters" <<EOF
[Filters]
# ironic_inspector/pxe_filter/dnsmasq.py
${stop_cmd[0]}: CommandFilter, ${stop_cmd[0]}, root, ${stop_cmd_tail}
${start_cmd[0]}: CommandFilter, ${start_cmd[0]}, root, ${start_cmd_tail}
EOF
}
function configure_inspector {
mkdir_chown_stack "$IRONIC_INSPECTOR_CONF_DIR"
mkdir_chown_stack "$IRONIC_INSPECTOR_DATA_DIR"
create_service_user "$IRONIC_INSPECTOR_ADMIN_USER" "admin"
# start with a fresh config file
rm -f "$IRONIC_INSPECTOR_CONF_FILE"
inspector_iniset DEFAULT debug $IRONIC_INSPECTOR_DEBUG
inspector_iniset DEFAULT standalone $IRONIC_INSPECTOR_STANDALONE
inspector_configure_auth_for ironic
inspector_configure_auth_for service_catalog
configure_keystone_authtoken_middleware $IRONIC_INSPECTOR_CONF_FILE $IRONIC_INSPECTOR_ADMIN_USER
inspector_iniset DEFAULT listen_port $IRONIC_INSPECTOR_PORT
inspector_iniset pxe_filter driver $IRONIC_INSPECTOR_DHCP_FILTER
inspector_iniset iptables dnsmasq_interface $IRONIC_INSPECTOR_INTERFACE
inspector_iniset database connection `database_connection_url ironic_inspector`
if [[ -n "$IRONIC_INSPECTOR_PROCESSING_HOOKS" ]]; then
inspector_iniset processing processing_hooks "\$default_processing_hooks,$IRONIC_INSPECTOR_PROCESSING_HOOKS"
fi
inspector_iniset processing power_off $IRONIC_INSPECTOR_POWER_OFF
iniset_rpc_backend ironic-inspector $IRONIC_INSPECTOR_CONF_FILE
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
# memcached listens on localhost instead of $SERVICE_HOST. That is the
# default value, but set it explicitly in case the default changes.
inspector_iniset coordination backend_url "memcached://127.0.0.1:11211"
fi
if is_service_enabled swift; then
configure_inspector_swift
fi
inspector_iniset processing store_data $IRONIC_INSPECTOR_INTROSPECTION_DATA_STORE
iniset "$IRONIC_CONF_FILE" inspector enabled True
iniset "$IRONIC_CONF_FILE" inspector service_url $IRONIC_INSPECTOR_URI
if [[ "$IRONIC_INSPECTOR_MANAGED_BOOT" == "True" ]]; then
iniset "$IRONIC_CONF_FILE" neutron inspection_network $IRONIC_INSPECTION_NET_NAME
iniset "$IRONIC_CONF_FILE" inspector require_managed_boot True
iniset "$IRONIC_CONF_FILE" inspector extra_kernel_params \
"ipa-inspection-collectors=\"$IRONIC_INSPECTOR_COLLECTORS\""
# In this mode we do not have our own PXE environment, so only accept
# requests with manage_boot=False.
inspector_iniset DEFAULT can_manage_boot False
fi
setup_logging $IRONIC_INSPECTOR_CONF_FILE DEFAULT
# Adds uWSGI for inspector API
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
write_uwsgi_config "$IRONIC_INSPECTOR_UWSGI_CONF" "$IRONIC_INSPECTOR_UWSGI" "/baremetal-introspection"
fi
cp "$IRONIC_INSPECTOR_DIR/rootwrap.conf" "$IRONIC_INSPECTOR_ROOTWRAP_CONF_FILE"
cp -r "$IRONIC_INSPECTOR_DIR/rootwrap.d" "$IRONIC_INSPECTOR_CONF_DIR"
local ironic_inspector_rootwrap=$(get_rootwrap_location ironic-inspector)
local rootwrap_sudoer_cmd="$ironic_inspector_rootwrap $IRONIC_INSPECTOR_CONF_DIR/rootwrap.conf *"
# Set up the rootwrap sudoers for ironic-inspector
local tempfile=`mktemp`
echo "$STACK_USER ALL=(root) NOPASSWD: $rootwrap_sudoer_cmd" >$tempfile
chmod 0640 $tempfile
sudo chown root:root $tempfile
sudo mv $tempfile /etc/sudoers.d/ironic-inspector-rootwrap
inspector_iniset DEFAULT rootwrap_config $IRONIC_INSPECTOR_ROOTWRAP_CONF_FILE
mkdir_chown_stack "$IRONIC_INSPECTOR_RAMDISK_LOGDIR"
inspector_iniset processing ramdisk_logs_dir "$IRONIC_INSPECTOR_RAMDISK_LOGDIR"
inspector_iniset processing always_store_ramdisk_logs "$IRONIC_INSPECTOR_ALWAYS_STORE_RAMDISK_LOGS"
if [ -n "$IRONIC_INSPECTOR_NODE_NOT_FOUND_HOOK" ]; then
inspector_iniset processing node_not_found_hook "$IRONIC_INSPECTOR_NODE_NOT_FOUND_HOOK"
fi
inspector_iniset DEFAULT timeout $IRONIC_INSPECTOR_TIMEOUT
if [ -n "$IRONIC_INSPECTOR_CLEAN_UP_PERIOD" ]; then
inspector_iniset DEFAULT clean_up_period "$IRONIC_INSPECTOR_CLEAN_UP_PERIOD"
fi
get_or_create_service "ironic-inspector" "baremetal-introspection" "Ironic Inspector baremetal introspection service"
get_or_create_endpoint "baremetal-introspection" "$REGION_NAME" \
"$IRONIC_INSPECTOR_URI" "$IRONIC_INSPECTOR_URI" "$IRONIC_INSPECTOR_URI"
if is_dnsmasq_filter_required ; then
configure_inspector_dnsmasq_rootwrap
configure_inspector_pxe_filter_dnsmasq
fi
# Set whether inspector should also run in scope-enforced mode.
if [[ "$IRONIC_INSPECTOR_ENFORCE_SCOPE" == "True" ]]; then
inspector_iniset oslo_policy enforce_scope true
inspector_iniset oslo_policy enforce_new_defaults true
fi
}
function configure_inspector_swift {
inspector_configure_auth_for swift
}
function configure_inspector_dhcp {
mkdir_chown_stack "$IRONIC_INSPECTOR_CONF_DIR"
if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
cat > "$IRONIC_INSPECTOR_DHCP_CONF_FILE" <<EOF
no-daemon
port=0
interface=$IRONIC_INSPECTOR_INTERFACE
bind-interfaces
dhcp-range=$IRONIC_INSPECTOR_DHCP_RANGE
dhcp-match=ipxe,175
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=set:efi,option:client-arch,11
dhcp-boot=tag:efi,tag:!ipxe,snponly.efi
dhcp-boot=tag:!efi,tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://$IRONIC_HTTP_SERVER:$IRONIC_HTTP_PORT/ironic-inspector.ipxe
dhcp-sequential-ip
EOF
else
cat > "$IRONIC_INSPECTOR_DHCP_CONF_FILE" <<EOF
no-daemon
port=0
interface=$IRONIC_INSPECTOR_INTERFACE
bind-interfaces
dhcp-range=$IRONIC_INSPECTOR_DHCP_RANGE
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=set:efi,option:client-arch,11
dhcp-boot=tag:efi,bootx64.efi
dhcp-boot=pxelinux.0
dhcp-sequential-ip
EOF
fi
if is_dnsmasq_filter_required ; then
configure_dnsmasq_dhcp_hostsdir
fi
}
function prepare_environment {
if [[ "$IRONIC_INSPECTOR_MANAGED_BOOT" == "False" ]]; then
prepare_tftp
if [[ "$IRONIC_BAREMETAL_BASIC_OPS" == "True" && "$IRONIC_IS_HARDWARE" == "False" ]]; then
sudo ip link add $IRONIC_INSPECTOR_OVS_PORT type veth peer name $IRONIC_INSPECTOR_INTERFACE
sudo ip link set dev $IRONIC_INSPECTOR_OVS_PORT up
sudo ip link set dev $IRONIC_INSPECTOR_OVS_PORT mtu $PUBLIC_BRIDGE_MTU
sudo ovs-vsctl add-port $IRONIC_VM_NETWORK_BRIDGE $IRONIC_INSPECTOR_OVS_PORT
fi
sudo ip link set dev $IRONIC_INSPECTOR_INTERFACE up
sudo ip link set dev $IRONIC_INSPECTOR_INTERFACE mtu $PUBLIC_BRIDGE_MTU
sudo ip addr add $IRONIC_INSPECTOR_INTERNAL_IP_WITH_NET dev $IRONIC_INSPECTOR_INTERFACE
sudo iptables -I INPUT -i $IRONIC_INSPECTOR_INTERFACE -p udp \
--dport 69 -j ACCEPT
sudo iptables -I INPUT -i $IRONIC_INSPECTOR_INTERFACE -p tcp \
--dport $IRONIC_INSPECTOR_PORT -j ACCEPT
sudo iptables -I INPUT -i $PUBLIC_BRIDGE -p tcp \
--dport $IRONIC_INSPECTOR_PORT -j ACCEPT
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
sudo iptables -I INPUT -i $IRONIC_INSPECTOR_INTERFACE -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -i $IRONIC_INSPECTOR_INTERFACE -p tcp --dport 443 -j ACCEPT
fi
else
sudo iptables -I INPUT -d $HOST_IP -p tcp --dport $IRONIC_INSPECTOR_PORT -j ACCEPT
fi
}
function cleanup_inspector {
if [[ "$IRONIC_IPXE_ENABLED" == "True" ]] ; then
rm -f $IRONIC_HTTP_DIR/ironic-inspector.*
else
rm -f $IRONIC_TFTPBOOT_DIR/pxelinux.cfg/default
rm -f $IRONIC_TFTPBOOT_DIR/ironic-inspector.*
fi
sudo rm -f /etc/sudoers.d/ironic-inspector-rootwrap
sudo rm -rf "$IRONIC_INSPECTOR_RAMDISK_LOGDIR"
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
sudo iptables -D INPUT -i $IRONIC_INSPECTOR_INTERFACE -p tcp --dport 80 -j ACCEPT || true
sudo iptables -D INPUT -i $IRONIC_INSPECTOR_INTERFACE -p tcp --dport 443 -j ACCEPT || true
fi
# Always try to clean up firewall rules, no matter filter driver used
sudo iptables -D INPUT -i $IRONIC_INSPECTOR_INTERFACE -p udp \
--dport 69 -j ACCEPT || true
sudo iptables -D INPUT -i $IRONIC_INSPECTOR_INTERFACE -p tcp \
--dport $IRONIC_INSPECTOR_PORT -j ACCEPT || true
sudo iptables -D INPUT -i $IRONIC_INSPECTOR_INTERFACE -p udp \
--dport 67 -j ironic-inspector || true
sudo iptables -F ironic-inspector || true
sudo iptables -X ironic-inspector || true
if [[ $IRONIC_INSPECTOR_INTERFACE != $OVS_PHYSICAL_BRIDGE && "$IRONIC_INSPECTOR_INTERFACE_PHYSICAL" == "False" ]]; then
sudo ip link show $IRONIC_INSPECTOR_INTERFACE && sudo ip link delete $IRONIC_INSPECTOR_INTERFACE
fi
sudo ip link show $IRONIC_INSPECTOR_OVS_PORT && sudo ip link delete $IRONIC_INSPECTOR_OVS_PORT
sudo ovs-vsctl --if-exists del-port $IRONIC_INSPECTOR_OVS_PORT
if [[ "$IRONIC_INSPECTOR_STANDALONE" == "False" ]]; then
remove_uwsgi_config "$IRONIC_INSPECTOR_UWSGI_CONF" "$IRONIC_INSPECTOR_UWSGI"
restart_apache_server
fi
}
function sync_inspector_database {
recreate_database ironic_inspector
$IRONIC_INSPECTOR_DBSYNC_BIN_FILE --config-file $IRONIC_INSPECTOR_CONF_FILE upgrade
}
### Entry points
if [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing ironic-inspector"
if is_inspector_dhcp_required; then
install_inspector_dhcp
fi
install_inspector
install_inspector_client
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring ironic-inspector"
cleanup_inspector
if is_inspector_dhcp_required; then
configure_inspector_dhcp
fi
configure_inspector
sync_inspector_database
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing ironic-inspector"
prepare_environment
if is_inspector_dhcp_required; then
start_inspector_dhcp
fi
start_inspector
elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
if is_service_enabled tempest; then
echo_summary "Configuring Tempest for Ironic Inspector"
iniset $TEMPEST_CONFIG service_available ironic_inspector True
if [ -n "$IRONIC_INSPECTOR_NODE_NOT_FOUND_HOOK" ]; then
iniset $TEMPEST_CONFIG baremetal_introspection auto_discovery_feature True
iniset $TEMPEST_CONFIG baremetal_introspection auto_discovery_default_driver fake-hardware
iniset $TEMPEST_CONFIG baremetal_introspection auto_discovery_target_driver ipmi
fi
if [[ -n "${IRONIC_INSPECTOR_TEMPEST_DISCOVERY_TIMEOUT}" ]]; then
iniset $TEMPEST_CONFIG baremetal_introspection discovery_timeout $IRONIC_INSPECTOR_TEMPEST_DISCOVERY_TIMEOUT
fi
if [[ -n "${IRONIC_INSPECTOR_TEMPEST_INTROSPECTION_TIMEOUT}" ]]; then
iniset $TEMPEST_CONFIG baremetal_introspection introspection_timeout $IRONIC_INSPECTOR_TEMPEST_INTROSPECTION_TIMEOUT
fi
iniset $TEMPEST_CONFIG baremetal_introspection data_store $IRONIC_INSPECTOR_INTROSPECTION_DATA_STORE
fi
fi
if [[ "$1" == "unstack" ]]; then
stop_inspector
if is_inspector_dhcp_required; then
stop_inspector_dhcp
fi
cleanup_inspector
fi
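DevStack drives the entry-point dispatch above by re-entering the plugin once per phase with `$1`/`$2` set. A small self-contained sketch of that calling convention (the phase list mirrors the branches above; the driver function itself is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical driver mimicking how DevStack invokes a plugin's phases
# in order. Real DevStack does this through its plugin framework.
run_phases() {
    local phases=("stack install" "stack post-config" "stack extra" "stack test-config" "unstack")
    local p
    for p in "${phases[@]}"; do
        # Each phase re-enters the plugin, e.g.: plugin.sh stack install
        echo "phase: $p"
    done
}
run_phases
```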


@@ -1 +0,0 @@
enable_service ironic-inspector ironic-inspector-dhcp


@@ -1,77 +0,0 @@
#!/bin/bash
#
# Copyright 2015 Hewlett-Packard Development Company, L.P.
# Copyright 2016 Intel Corporation
# Copyright 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
## based on Ironic/devstack/upgrade/resources.sh
set -o errexit
source $GRENADE_DIR/grenaderc
source $GRENADE_DIR/functions
source $TOP_DIR/openrc admin admin
# Inspector relies on a couple of Ironic variables
source $TARGET_RELEASE_DIR/ironic/devstack/lib/ironic
INSPECTOR_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd)
source $INSPECTOR_DEVSTACK_DIR/plugin.sh
set -o xtrace
function early_create {
:
}
function create {
:
}
function verify {
:
}
function verify_noapi {
:
}
function destroy {
:
}
# Dispatcher
case $1 in
"early_create")
early_create
;;
"create")
create
;;
"verify_noapi")
verify_noapi
;;
"verify")
verify
;;
"destroy")
destroy
;;
"force_destroy")
set +o errexit
destroy
;;
esac
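The no-op resource hooks above follow grenade's standard dispatcher shape. A runnable sketch of the same pattern with stub bodies (the `dispatch` wrapper is added here only to make the pattern self-contained):

```shell
#!/usr/bin/env bash
# Minimal sketch of the grenade resource dispatcher pattern used above.
# The function bodies are stubs, mirroring the no-op hooks in resources.sh.
create()  { echo "created"; }
verify()  { echo "verified"; }
destroy() { echo "destroyed"; }
dispatch() {
    case $1 in
        "create")  create ;;
        "verify")  verify ;;
        "destroy") destroy ;;
        *) echo "unknown phase: $1" >&2; return 1 ;;
    esac
}
dispatch create
```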


@@ -1,4 +0,0 @@
# Enabling Inspector grenade plug-in
# Based on Ironic/devstack/grenade/settings
register_project_for_upgrade ironic-inspector
register_db_to_save ironic_inspector


@@ -1,29 +0,0 @@
#!/bin/bash
#
# based on Ironic/devstack/upgrade/shutdown.sh
set -o errexit
source $GRENADE_DIR/grenaderc
source $GRENADE_DIR/functions
# We need base DevStack functions for this
source $BASE_DEVSTACK_DIR/functions
source $BASE_DEVSTACK_DIR/stackrc # needed for status directory
source $BASE_DEVSTACK_DIR/lib/tls
source $BASE_DEVSTACK_DIR/lib/apache
# Inspector relies on a couple of Ironic variables
source $TARGET_RELEASE_DIR/ironic/devstack/lib/ironic
# Keep track of the DevStack directory
INSPECTOR_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd)
source $INSPECTOR_DEVSTACK_DIR/plugin.sh
set -o xtrace
stop_inspector
if is_inspector_dhcp_required; then
stop_inspector_dhcp
fi


@@ -1,106 +0,0 @@
#!/usr/bin/env bash
## based on Ironic/devstack/upgrade/upgrade.sh
# ``upgrade-inspector``
echo "*********************************************************************"
echo "Begin $0"
echo "*********************************************************************"
# Clean up any resources that may be in use
cleanup() {
set +o errexit
echo "*********************************************************************"
echo "ERROR: Abort $0"
echo "*********************************************************************"
# Kill ourselves to signal any calling process
trap 2; kill -2 $$
}
trap cleanup SIGHUP SIGINT SIGTERM
# Keep track of the grenade directory
RUN_DIR=$(cd $(dirname "$0") && pwd)
# Source params
source $GRENADE_DIR/grenaderc
# Import common functions
source $GRENADE_DIR/functions
# This script exits on an error so that errors don't compound and you see
# only the first error that occurred.
set -o errexit
# Upgrade Inspector
# =================
# Duplicate some setup bits from target DevStack
source $TARGET_DEVSTACK_DIR/stackrc
source $TARGET_DEVSTACK_DIR/lib/tls
source $TARGET_DEVSTACK_DIR/lib/nova
source $TARGET_DEVSTACK_DIR/lib/neutron
source $TARGET_DEVSTACK_DIR/lib/apache
source $TARGET_DEVSTACK_DIR/lib/keystone
source $TARGET_DEVSTACK_DIR/lib/database
source $TARGET_DEVSTACK_DIR/lib/rpc_backend
# Inspector relies on a couple of Ironic variables
source $TARGET_RELEASE_DIR/ironic/devstack/lib/ironic
# Keep track of the DevStack directory
INSPECTOR_DEVSTACK_DIR=$(cd $(dirname "$0")/.. && pwd)
INSPECTOR_PLUGIN=$INSPECTOR_DEVSTACK_DIR/plugin.sh
source $INSPECTOR_PLUGIN
# Print the commands being run so that we can see the command that triggers
# an error. It is also useful for following along as the install occurs.
set -o xtrace
initialize_database_backends
function wait_for_keystone {
if ! wait_for_service $SERVICE_TIMEOUT ${KEYSTONE_AUTH_URI}/v$IDENTITY_API_VERSION/; then
die $LINENO "keystone did not start"
fi
}
# Save current config files for posterity
if [[ -d $IRONIC_INSPECTOR_CONF_DIR ]] && [[ ! -d $SAVE_DIR/etc.inspector ]] ; then
cp -pr $IRONIC_INSPECTOR_CONF_DIR $SAVE_DIR/etc.inspector
fi
# This call looks for install_<NAME>, which is install_inspector in our case:
# https://github.com/openstack-dev/devstack/blob/dec121114c3ea6f9e515a452700e5015d1e34704/lib/stack#L32
stack_install_service inspector
if is_inspector_dhcp_required; then
stack_install_service inspector_dhcp
fi
$IRONIC_INSPECTOR_DBSYNC_BIN_FILE --config-file $IRONIC_INSPECTOR_CONF_FILE upgrade
# Call the inspector upgrade for the specific release
upgrade_project ironic-inspector $RUN_DIR $BASE_DEVSTACK_BRANCH $TARGET_DEVSTACK_BRANCH
# setup transport_url for rpc messaging
iniset_rpc_backend ironic-inspector $IRONIC_INSPECTOR_CONF_FILE
start_inspector
if is_inspector_dhcp_required; then
start_inspector_dhcp
fi
# Don't succeed unless the services come up
ensure_services_started ironic-inspector
if is_inspector_dhcp_required; then
ensure_services_started dnsmasq
fi
set +o xtrace
echo "*********************************************************************"
echo "SUCCESS: End $0"
echo "*********************************************************************"


@@ -1,159 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " xml to make Docutils-native XML files"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/ironic-inspector.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/ironic-inspector.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/ironic-inspector"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/ironic-inspector"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The xml files are in $(BUILDDIR)/xml."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."

View File

@@ -1,6 +0,0 @@
os-api-ref>=1.4.0 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinx>=2.0.0 # BSD
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD
sphinxcontrib-apidoc>=0.2.0 # BSD
openstackdocstheme>=2.2.0 # Apache-2.0

View File

@@ -1,2 +0,0 @@
target/
build/

View File

@@ -1,125 +0,0 @@
.. _dnsmasq_pxe_filter:
**dnsmasq** PXE filter
======================
An inspection PXE DHCP stack is often implemented by the **dnsmasq** service.
The **dnsmasq** PXE filter implementation relies on directly configuring the
**dnsmasq** DHCP service to provide a caching PXE traffic filter of node MAC
addresses.
How it works
------------
The filter works by populating the **dnsmasq** DHCP hosts directory with a
configuration file per MAC address. Each file enables or disables, through
the ``ignore`` directive, the DHCP service for a particular MAC address::
$ cat /etc/dnsmasq.d/de-ad-be-ef-de-ad
de:ad:be:ef:de:ad,ignore
$
The filename is used to keep track of all MAC addresses in the cache, avoiding
file parsing. The content of the file determines the MAC address access policy.
Thanks to the ``inotify`` facility, **dnsmasq** is notified once a new file is
*created* or an existing file is *modified* in the DHCP hosts directory. Thus,
to allow a MAC address, the filter removes the ``ignore`` directive::
$ cat /etc/dnsmasq.d/de-ad-be-ef-de-ad
de:ad:be:ef:de:ad
$
The hosts directory content establishes a *cached* MAC addresses filter that is
kept synchronized with the **ironic** port list.
.. note::
The **dnsmasq** inotify facility implementation doesn't react to a file being
removed or truncated.
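The per-MAC toggle described above can be sketched in a few lines of shell.
The hosts directory path and MAC address below are illustrative only, and the
stand-in logic is a simplification of what the driver in
``ironic_inspector/pxe_filter/dnsmasq.py`` actually does:

```shell
# Sketch of the per-MAC toggle the filter performs (illustrative paths).
hostsdir=$(mktemp -d)
mac="de:ad:be:ef:de:ad"
fname=$(printf '%s' "$mac" | tr ':' '-')

# Deny: write "<mac>,ignore" so dnsmasq skips DHCP for this MAC.
printf '%s,ignore\n' "$mac" > "$hostsdir/$fname"

# Allow: rewrite the file without the ignore directive; dnsmasq's inotify
# watch notices the modification (but would not notice removal/truncation).
printf '%s\n' "$mac" > "$hostsdir/$fname"

cat "$hostsdir/$fname"
```

Because only file creation and modification are observed, the driver always
rewrites files in place rather than deleting them.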
Configuration
-------------
The ``inotify`` facility was introduced_ to **dnsmasq** in version `2.73`.
This filter driver has been checked by **ironic-inspector** CI with
**dnsmasq** versions `>=2.76`.
.. _introduced: http://www.thekelleys.org.uk/dnsmasq/CHANGELOG
To enable the **dnsmasq** PXE filter, update the PXE filter driver name in the
**ironic-inspector** configuration file::
[pxe_filter]
driver = dnsmasq
The DHCP hosts directory can be specified to override the default
``/var/lib/ironic-inspector/dhcp-hostsdir``::
[dnsmasq_pxe_filter]
dhcp_hostsdir = /etc/ironic-inspector/dhcp-hostsdir
The filter design relies on the hosts directory being in exclusive
**ironic-inspector** control. The hosts directory should be considered a
*private cache* directory of **ironic-inspector** that **dnsmasq** polls
configuration updates from, through the ``inotify`` facility. The directory
has to be writable by **ironic-inspector** and readable by **dnsmasq**.
It is also possible to override the default (empty) **dnsmasq** start and stop
commands to, for instance, directly control the **dnsmasq** service::
[dnsmasq_pxe_filter]
dnsmasq_start_command = dnsmasq --conf-file /etc/ironic-inspector/dnsmasq.conf
dnsmasq_stop_command = kill $(cat /var/run/dnsmasq.pid)
.. note::
The commands support shell expansion. The default empty start command means
the **dnsmasq** service won't be started upon the filter initialization.
Conversely, the default empty stop command means the service won't be
stopped upon an (error) exit.
.. note::
These commands are executed through the :oslo.rootwrap-doc:`rootwrap <>`
facility, so overriding may require a filter file to be created in the
``rootwrap.d`` directory. A sample configuration to use with the
**systemctl** facility might be:
.. code-block:: console
sudo cat > /etc/ironic-inspector/rootwrap.d/ironic-inspector-dnsmasq-systemctl.filters <<EOF
[Filters]
# ironic_inspector/pxe_filter/dnsmasq.py
systemctl: CommandFilter, systemctl, root, restart, dnsmasq
systemctl: CommandFilter, systemctl, root, stop, dnsmasq
EOF
Caveats
-------
The initial synchronization will put some load on the **dnsmasq** service at
start-up, proportional to the number of ports **ironic** keeps. The start-up
can take up to a minute of full CPU load for very large numbers of MAC
addresses (tens of thousands).
Subsequent filter synchronizations will only cause **dnsmasq** to parse
the modified files. Typically those correspond to bare metal nodes being
added to or phased out of the compute service, meaning dozens of file
updates per sync call.
**ironic-inspector** takes over control of the DHCP hosts directory to
implement its filter cache. Files are generated dynamically and should not be
edited by hand. To minimize interference between deployment and
introspection, **ironic-inspector** has to start the **dnsmasq** service only
after the initial synchronization. Conversely, the **dnsmasq** service is
stopped upon (unexpected) **ironic-inspector** exit.
To avoid accumulating stale DHCP host files over time, the driver cleans up
the DHCP hosts directory before the initial synchronization during the
start-up.
Although the filter driver tries its best to always stop the **dnsmasq**
service, it is recommended that the operator configures the **dnsmasq**
service in such a way that it terminates upon **ironic-inspector**
(unexpected) exit to prevent a stale deny list from being used by the
**dnsmasq** service.

View File

@@ -1,18 +0,0 @@
Administrator Guide
===================
How to upgrade Ironic Inspector
-------------------------------
.. toctree::
:maxdepth: 2
upgrade
Dnsmasq PXE filter driver
-------------------------
.. toctree::
:maxdepth: 2
dnsmasq-pxe-filter

View File

@@ -1,41 +0,0 @@
Upgrade Guide
-------------
The `release notes <https://docs.openstack.org/releasenotes/ironic-inspector/>`_
should always be read carefully when upgrading the ironic-inspector service.
Starting with the Mitaka series, specific upgrade steps and considerations are
well-documented in the release notes.
Upgrades are only supported one series at a time, or within a series.
Only offline (with downtime) upgrades are currently supported.
When upgrading ironic-inspector, the following steps should always be taken:
* Update ironic-inspector code, without restarting the service yet.
* Stop the ironic-inspector service.
* Run database migrations::
ironic-inspector-dbsync --config-file <PATH-TO-INSPECTOR.CONF> upgrade
* Start the ironic-inspector service.
* Upgrade the ironic-python-agent image used for introspection.
.. note::
There is no implicit upgrade order between ironic and ironic-inspector,
unless the `release notes`_ say otherwise.
Migrating introspection data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Starting with the Stein release, ironic-inspector supports two introspection data
storage backends: ``swift`` and ``database``. If you decide to change the
backend, you can use the provided command to migrate the data::
ironic-inspector-migrate-data --from swift --to database --config-file /etc/ironic-inspector/inspector.conf
.. note::
Configuration for **both** backends is expected to be present in the
configuration file for this command to succeed.

View File

@@ -1,9 +0,0 @@
Command References
==================
Here are references for commands not elsewhere documented.
.. toctree::
:maxdepth: 1
ironic-inspector-status

View File

@@ -1,83 +0,0 @@
=======================
ironic-inspector-status
=======================
Synopsis
========
::
ironic-inspector-status <category> <command> [<args>]
Description
===========
:program:`ironic-inspector-status` is a tool that provides routines for
checking the status of the ironic-inspector deployment.
Options
=======
The standard pattern for executing a :program:`ironic-inspector-status`
command is::
ironic-inspector-status <category> <command> [<args>]
Run without arguments to see a list of available command categories::
ironic-inspector-status
Categories are:
* ``upgrade``
Detailed descriptions are below.
You can also run with a category argument such as ``upgrade`` to see a list of
all commands in that category::
ironic-inspector-status upgrade
These sections describe the available categories and arguments for
:program:`ironic-inspector-status`.
Upgrade
~~~~~~~
.. _ironic-inspector-status-checks:
``ironic-inspector-status upgrade check``
Performs a release-specific readiness check before restarting services with
new code. This command expects to have complete configuration and access
to databases and services.
**Return Codes**
.. list-table::
:widths: 20 80
:header-rows: 1
* - Return code
- Description
* - 0
- All upgrade readiness checks passed successfully and there is nothing
to do.
* - 1
- At least one check encountered an issue and requires further
investigation. This is considered a warning but the upgrade may be OK.
* - 2
- There was an upgrade status check failure that needs to be
investigated. This should be considered something that stops an
upgrade.
* - 255
- An unexpected error occurred.
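An upgrade script can branch on these return codes. In the sketch below the
real ``ironic-inspector-status upgrade check`` invocation is replaced by a
stub function, so the return-code handling can be shown without a deployed
service:

```shell
# Stub standing in for "ironic-inspector-status upgrade check";
# here it pretends one check raised a warning (return code 1).
status_check() { return 1; }

rc=0
status_check || rc=$?
case "$rc" in
  0) msg="all checks passed" ;;
  1) msg="warning: investigate, upgrade may still be OK" ;;
  2) msg="failure: stop the upgrade" ;;
  *) msg="unexpected error" ;;
esac
echo "$msg"
```

In a real upgrade pipeline, only return code 0 (and, at the operator's
discretion, 1) should allow the upgrade to proceed.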
**History of Checks**
**Wallaby**
* Adds the initial status check command; it was not previously needed
because ironic-inspector's database structure and usage did not
require it.
* Adds a check to validate the configured policy file is not JSON
based as JSON based policies have been deprecated.

View File

@@ -1,124 +0,0 @@
# -*- coding: utf-8 -*-
#
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinxcontrib.apidoc',
'sphinx.ext.viewcode',
'sphinxcontrib.rsvgconverter',
'oslo_policy.sphinxext',
'oslo_policy.sphinxpolicygen',
'oslo_config.sphinxext',
'oslo_config.sphinxconfiggen']
try:
import openstackdocstheme
extensions.append('openstackdocstheme')
except ImportError:
openstackdocstheme = None
openstackdocs_repo_name = 'openstack/ironic-inspector'
openstackdocs_pdf_link = True
openstackdocs_use_storyboard = True
openstackdocs_projects = [
'bifrost',
'devstack',
'ironic',
'ironic-python-agent',
'oslo.rootwrap',
'python-ironicclient',
'python-ironic-inspector-client',
'tooz',
]
wsme_protocols = ['restjson']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = 'OpenStack Developers'
config_generator_config_file = '../../tools/config-generator.conf'
sample_config_basename = '_static/ironic-inspector'
policy_generator_config_file = '../../tools/policy-generator.conf'
sample_policy_basename = '_static/ironic-inspector'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['ironic.']
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# NOTE(cinerama): mock out nova modules so docs can build without warnings
#import mock
#import sys
#MOCK_MODULES = ['nova', 'nova.compute', 'nova.context']
#for module in MOCK_MODULES:
# sys.modules[module] = mock.Mock()
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
if openstackdocstheme is not None:
html_theme = 'openstackdocs'
else:
html_theme = 'default'
#html_theme_path = ["."]
#html_theme = '_theme'
#html_static_path = ['_static']
# Output file base name for HTML help builder.
htmlhelp_basename = 'ironic-inspectordoc'
latex_use_xindy = False
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
(
'index',
'doc-ironic-inspector.tex',
'Ironic Inspector Documentation',
'OpenStack Foundation',
'manual'
),
]
# -- Options for seqdiag ------------------------------------------------------
seqdiag_html_image_format = "SVG"
# -- sphinxcontrib.apidoc configuration --------------------------------------
apidoc_module_dir = '../../ironic_inspector'
apidoc_output_dir = 'contributor/api'
apidoc_excluded_paths = [
'migrations',
'test',
'common/i18n*',
'wsgi/__init__.py',
]
apidoc_separate_modules = True

View File

@@ -1,22 +0,0 @@
Configuration Guide
===================
The ironic-inspector service operation is defined by a configuration
file. An overview of the configuration file options follows.
.. toctree::
:maxdepth: 1
Ironic Inspector Configuration Options <ironic-inspector>
Policies <policy>
.. only:: html
Sample files
------------
.. toctree::
:maxdepth: 1
Sample Ironic Inspector Configuration <sample-config>
Sample policy file <sample-policy>

View File

@@ -1,7 +0,0 @@
---------------------
ironic-inspector.conf
---------------------
.. show-options::
:config-file: tools/config-generator.conf

View File

@@ -1,19 +0,0 @@
========
Policies
========
.. warning::
JSON formatted policy files were deprecated in the Wallaby development
cycle, following their deprecation by the ``oslo.policy`` library in the
Victoria cycle.
Use the `oslopolicy-convert-json-to-yaml`__ tool
to convert an existing JSON policy file to YAML format in a backward
compatible way.
.. __: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html
The following is an overview of all available policies in **ironic inspector**.
For a sample configuration file, refer to :doc:`sample-policy`.
.. show-policy::
:config-file: tools/policy-generator.conf

View File

@@ -1,14 +0,0 @@
======================================
Ironic Inspector Configuration Options
======================================
The following is a sample Ironic Inspector configuration for
adaptation and use. It is auto-generated from Ironic Inspector
when this documentation is built, so if you find issues with an
option, please compare your version of Ironic Inspector with the
version of this documentation.
The sample configuration can also be downloaded as a :download:`file
</_static/ironic-inspector.conf.sample>`.
.. literalinclude:: /_static/ironic-inspector.conf.sample

View File

@@ -1,13 +0,0 @@
=======================
Ironic Inspector Policy
=======================
The following is a sample **ironic-inspector** policy file, autogenerated from
Ironic Inspector when this documentation is built.
To avoid issues, make sure your version of **ironic-inspector**
matches that of the example policy file.
The sample policy can also be downloaded as a :download:`file
</_static/ironic-inspector.policy.yaml.sample>`.
.. literalinclude:: /_static/ironic-inspector.policy.yaml.sample

View File

@@ -1,22 +0,0 @@
.. _contributing_link:
.. include:: ../../../CONTRIBUTING.rst
Python API
~~~~~~~~~~
.. toctree::
:maxdepth: 1
api/modules
Ironic Inspector CI
~~~~~~~~~~~~~~~~~~~
It's important to understand the role of each job in the CI. To facilitate
that, we have created the documentation below.
.. toctree::
:maxdepth: 1
Job roles in the CI <jobs-description>

View File

@@ -1,43 +0,0 @@
.. _jobs-description:
================
Jobs description
================
The description of each job that runs in the CI when you submit a patch to
`openstack/ironic-inspector` is shown in the following table.
.. note::
All jobs are configured to use a pre-built tinyipa ramdisk, a whole-disk
image downloaded from a Swift temporary URL, `pxe` boot and the `ipmi`
driver.
.. list-table:: Table. OpenStack Ironic Inspector CI jobs description
:widths: 45 55
:header-rows: 1
* - Job name
- Description
* - ironic-inspector-grenade
- Deploys Ironic and Ironic Inspector in DevStack and runs upgrade for
all enabled services.
* - ironic-inspector-tempest
- Deploys Ironic and Ironic Inspector in DevStack.
Runs tempest tests that match the regex `InspectorBasicTest` and
deploys 1 virtual baremetal.
* - ironic-inspector-tempest-discovery
- Deploys Ironic and Ironic Inspector in DevStack.
Runs tempest tests that match the regex `InspectorDiscoveryTest` and
deploys 1 virtual baremetal.
* - ironic-inspector-tempest-python3
- Deploys Ironic and Ironic Inspector in DevStack under Python3.
Runs tempest tests that match the regex `Inspector` and deploys 1
virtual baremetal.
* - openstack-tox-functional-py36
- Runs tox-based functional tests for Ironic Inspector under Python 3.6.
* - bifrost-integration-tinyipa-ubuntu-xenial
- Tests the integration between Ironic Inspector and Bifrost.
* - ironic-inspector-tox-bandit
- Runs bandit security tests in a tox environment to find known issues in
the Ironic Inspector code.

View File

@@ -1,243 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<!-- Generated by graphviz version 2.40.1 (20161225.0304)
-->
<!-- Title: Ironic Inspector states Pages: 1 -->
<svg width="851pt" height="382pt"
viewBox="0.00 0.00 851.12 382.00" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g id="graph0" class="graph" transform="scale(1 1) rotate(0) translate(4 378)">
<title>Ironic Inspector states</title>
<polygon fill="#ffffff" stroke="transparent" points="-4,4 -4,-378 847.1163,-378 847.1163,4 -4,4"/>
<!-- aborting -->
<g id="node1" class="node">
<title>aborting</title>
<ellipse fill="none" stroke="#000000" cx="32.7967" cy="-161" rx="30.9953" ry="18"/>
<text text-anchor="middle" x="32.7967" y="-157.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">aborting</text>
</g>
<!-- error -->
<g id="node2" class="node">
<title>error</title>
<ellipse fill="none" stroke="#000000" cx="168.5787" cy="-189" rx="27" ry="18"/>
<text text-anchor="middle" x="168.5787" y="-185.7" font-family="Times,serif" font-size="11.00" fill="#ff0000">error</text>
</g>
<!-- aborting&#45;&gt;error -->
<g id="edge1" class="edge">
<title>aborting&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M52.0855,-175.1999C56.7664,-179.0856 61.5754,-183.4801 65.5934,-188 75.4225,-199.0568 70.6085,-208.9109 83.5934,-216 101.8225,-225.9522 124.2849,-218.0586 141.5376,-208.2902"/>
<polygon fill="#000000" stroke="#000000" points="143.8354,-210.9866 150.5265,-202.7719 140.1731,-205.021 143.8354,-210.9866"/>
<text text-anchor="middle" x="103.5861" y="-222" font-family="Times,serif" font-size="10.00" fill="#ff0000">abort_end</text>
</g>
<!-- aborting&#45;&gt;error -->
<g id="edge2" class="edge">
<title>aborting&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M55.9956,-173.0111C64.4643,-176.8635 74.2575,-180.7193 83.5934,-183 98.8555,-186.7284 116.0834,-188.3958 131.0182,-189.0706"/>
<polygon fill="#000000" stroke="#000000" points="131.3207,-192.5819 141.427,-189.4018 131.5434,-185.5854 131.3207,-192.5819"/>
<text text-anchor="middle" x="103.5861" y="-190" font-family="Times,serif" font-size="10.00" fill="#ff0000">timeout</text>
</g>
<!-- error&#45;&gt;error -->
<g id="edge6" class="edge">
<title>error&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M163.1858,-206.7817C162.2694,-216.3149 164.067,-225 168.5787,-225 171.3281,-225 173.0695,-221.7749 173.8032,-217.0981"/>
<polygon fill="#000000" stroke="#000000" points="177.3078,-216.8376 173.9716,-206.7817 170.3088,-216.7232 177.3078,-216.8376"/>
<text text-anchor="middle" x="168.5787" y="-227" font-family="Times,serif" font-size="10.00" fill="#ff0000">abort</text>
</g>
<!-- error&#45;&gt;error -->
<g id="edge7" class="edge">
<title>error&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M159.8007,-206.1418C154.6371,-223.585 157.5631,-243 168.5787,-243 177.5289,-243 181.1387,-230.183 179.4081,-216.0206"/>
<polygon fill="#000000" stroke="#000000" points="182.8169,-215.2213 177.3568,-206.1418 175.9631,-216.6445 182.8169,-215.2213"/>
<text text-anchor="middle" x="168.5787" y="-245" font-family="Times,serif" font-size="10.00" fill="#ff0000">error</text>
</g>
<!-- reapplying -->
<g id="node5" class="node">
<title>reapplying</title>
<ellipse fill="none" stroke="#000000" cx="299.2398" cy="-206" rx="37.219" ry="18"/>
<text text-anchor="middle" x="299.2398" y="-202.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">reapplying</text>
</g>
<!-- error&#45;&gt;reapplying -->
<g id="edge8" class="edge">
<title>error&#45;&gt;reapplying</title>
<path fill="none" stroke="#000000" d="M180.2602,-205.6885C188.3266,-215.6767 200.0205,-227.473 213.5787,-233 231.4429,-240.2823 252.047,-234.2198 268.5706,-226.0102"/>
<polygon fill="#000000" stroke="#000000" points="270.6574,-228.8599 277.7763,-221.0131 267.3179,-222.7078 270.6574,-228.8599"/>
<text text-anchor="middle" x="228.8546" y="-238" font-family="Times,serif" font-size="10.00" fill="#000000">reapply</text>
</g>
<!-- starting -->
<g id="node6" class="node">
<title>starting</title>
<ellipse fill="none" stroke="#000000" cx="558.1456" cy="-134" rx="28.6835" ry="18"/>
<text text-anchor="middle" x="558.1456" y="-130.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">starting</text>
</g>
<!-- error&#45;&gt;starting -->
<g id="edge9" class="edge">
<title>error&#45;&gt;starting</title>
<path fill="none" stroke="#000000" d="M195.1811,-184.4987C201.2175,-183.5785 207.6089,-182.6838 213.5787,-182 279.4894,-174.4508 447.7374,-178.9859 511.3043,-160 517.5036,-158.1484 523.7925,-155.3833 529.6645,-152.3379"/>
<polygon fill="#000000" stroke="#000000" points="531.4549,-155.3469 538.476,-147.4125 528.0394,-149.2367 531.4549,-155.3469"/>
<text text-anchor="middle" x="369.625" y="-177" font-family="Times,serif" font-size="10.00" fill="#000000">start</text>
</g>
<!-- enrolling -->
<g id="node3" class="node">
<title>enrolling</title>
<ellipse fill="none" stroke="#000000" cx="32.7967" cy="-215" rx="32.5946" ry="18"/>
<text text-anchor="middle" x="32.7967" y="-211.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">enrolling</text>
</g>
<!-- enrolling&#45;&gt;error -->
<g id="edge3" class="edge">
<title>enrolling&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M51.1993,-199.8824C55.8972,-196.0142 60.9359,-191.8577 65.5934,-188 73.6133,-181.3573 73.8017,-176.5451 83.5934,-173 100.6616,-166.8205 120.5641,-170.0469 136.8166,-175.1731"/>
<polygon fill="#000000" stroke="#000000" points="135.8115,-178.5296 146.4064,-178.5432 138.1324,-171.9256 135.8115,-178.5296"/>
<text text-anchor="middle" x="103.5861" y="-175" font-family="Times,serif" font-size="10.00" fill="#ff0000">error</text>
</g>
<!-- enrolling&#45;&gt;error -->
<g id="edge5" class="edge">
<title>enrolling&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M52.9994,-200.4234C57.4523,-196.6728 61.9286,-192.4341 65.5934,-188 76.2367,-175.1224 69.2274,-163.5281 83.5934,-155 98.8749,-145.9284 106.7294,-149.3503 123.5787,-155 131.5424,-157.6703 139.2218,-162.3686 145.9131,-167.4359"/>
<polygon fill="#000000" stroke="#000000" points="143.8604,-170.2793 153.8075,-173.9271 148.3063,-164.8724 143.8604,-170.2793"/>
<text text-anchor="middle" x="103.5861" y="-157" font-family="Times,serif" font-size="10.00" fill="#ff0000">timeout</text>
</g>
<!-- processing -->
<g id="node4" class="node">
<title>processing</title>
<ellipse fill="none" stroke="#000000" cx="806.0038" cy="-280" rx="37.2253" ry="18"/>
<text text-anchor="middle" x="806.0038" y="-276.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">processing</text>
</g>
<!-- enrolling&#45;&gt;processing -->
<g id="edge4" class="edge">
<title>enrolling&#45;&gt;processing</title>
<path fill="none" stroke="#000000" d="M39.1648,-232.6774C54.6464,-272.1459 98.2329,-364 168.5787,-364 168.5787,-364 168.5787,-364 674.0597,-364 718.8487,-364 760.6945,-329.1311 784.7975,-304.3245"/>
<polygon fill="#000000" stroke="#000000" points="787.3872,-306.6797 791.6917,-296.9987 782.2895,-301.8824 787.3872,-306.6797"/>
<text text-anchor="middle" x="432.8267" y="-366" font-family="Times,serif" font-size="10.00" fill="#000000">process</text>
</g>
<!-- processing&#45;&gt;error -->
<g id="edge13" class="edge">
<title>processing&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M778.8595,-292.6534C752.6321,-303.7077 711.4854,-318 674.0597,-318 299.2398,-318 299.2398,-318 299.2398,-318 259.7022,-318 243.3154,-320.0567 213.5787,-294 190.8399,-274.0751 179.2253,-240.6479 173.5495,-216.864"/>
<polygon fill="#000000" stroke="#000000" points="176.952,-216.0393 171.4031,-207.0138 170.1125,-217.5297 176.952,-216.0393"/>
<text text-anchor="middle" x="496.0284" y="-320" font-family="Times,serif" font-size="10.00" fill="#ff0000">error</text>
</g>
<!-- processing&#45;&gt;error -->
<g id="edge15" class="edge">
<title>processing&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M768.7865,-280C710.5917,-280 594.5102,-280 496.0284,-280 299.2398,-280 299.2398,-280 299.2398,-280 258.901,-280 246.0295,-273.9619 213.5787,-250 201.093,-240.7805 190.5607,-227.1091 182.832,-215.0446"/>
<polygon fill="#000000" stroke="#000000" points="185.6635,-212.9642 177.4823,-206.2323 179.6798,-216.5968 185.6635,-212.9642"/>
<text text-anchor="middle" x="496.0284" y="-282" font-family="Times,serif" font-size="10.00" fill="#ff0000">timeout</text>
</g>
<!-- finished -->
<g id="node7" class="node">
<title>finished</title>
<ellipse fill="none" stroke="#000000" cx="432.8267" cy="-206" rx="29.8518" ry="18"/>
<text text-anchor="middle" x="432.8267" y="-202.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">finished</text>
</g>
<!-- processing&#45;&gt;finished -->
<g id="edge14" class="edge">
<title>processing&#45;&gt;finished</title>
<path fill="none" stroke="#000000" d="M771.6489,-273.1875C702.0548,-259.3872 544.8371,-228.2113 471.6566,-213.6999"/>
<polygon fill="#000000" stroke="#000000" points="471.9795,-210.1958 461.4897,-211.6838 470.6179,-217.0621 471.9795,-210.1958"/>
<text text-anchor="middle" x="616.1027" y="-245" font-family="Times,serif" font-size="10.00" fill="#000000">finish</text>
</g>
<!-- reapplying&#45;&gt;error -->
<g id="edge16" class="edge">
<title>reapplying&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M261.981,-204.7684C246.8902,-203.9118 229.3396,-202.46 213.5787,-200 210.3207,-199.4915 206.9521,-198.8637 203.5918,-198.1679"/>
<polygon fill="#000000" stroke="#000000" points="204.1798,-194.7127 193.6548,-195.9277 202.6402,-201.5413 204.1798,-194.7127"/>
<text text-anchor="middle" x="228.8546" y="-205" font-family="Times,serif" font-size="10.00" fill="#ff0000">error</text>
</g>
<!-- reapplying&#45;&gt;error -->
<g id="edge19" class="edge">
<title>reapplying&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M269.5871,-195.0759C261.4486,-192.5688 252.546,-190.2685 244.1305,-189 231.7917,-187.1401 218.19,-186.6625 205.968,-186.7968"/>
<polygon fill="#000000" stroke="#000000" points="205.559,-183.3057 195.649,-187.0528 205.7328,-190.3036 205.559,-183.3057"/>
<text text-anchor="middle" x="228.8546" y="-191" font-family="Times,serif" font-size="10.00" fill="#ff0000">timeout</text>
</g>
<!-- reapplying&#45;&gt;reapplying -->
<g id="edge18" class="edge">
<title>reapplying&#45;&gt;reapplying</title>
<path fill="none" stroke="#000000" d="M286.8358,-223.0373C284.3044,-232.8579 288.439,-242 299.2398,-242 305.9903,-242 310.1369,-238.4289 311.6794,-233.3529"/>
<polygon fill="#000000" stroke="#000000" points="315.1784,-233.0251 311.6438,-223.0373 308.1785,-233.0494 315.1784,-233.0251"/>
<text text-anchor="middle" x="299.2398" y="-244" font-family="Times,serif" font-size="10.00" fill="#000000">reapply</text>
</g>
<!-- reapplying&#45;&gt;finished -->
<g id="edge17" class="edge">
<title>reapplying&#45;&gt;finished</title>
<path fill="none" stroke="#000000" d="M336.4512,-206C353.8759,-206 374.6665,-206 392.4694,-206"/>
<polygon fill="#000000" stroke="#000000" points="392.5582,-209.5001 402.5581,-206 392.5581,-202.5001 392.5582,-209.5001"/>
<text text-anchor="middle" x="369.625" y="-208" font-family="Times,serif" font-size="10.00" fill="#000000">finish</text>
</g>
<!-- starting&#45;&gt;error -->
<g id="edge20" class="edge">
<title>starting&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M529.2657,-135.6441C469.6892,-139.4273 329.0596,-150.383 213.5787,-175 209.9228,-175.7793 206.1385,-176.7264 202.3881,-177.7565"/>
<polygon fill="#000000" stroke="#000000" points="201.2581,-174.4405 192.6514,-180.6191 203.2326,-181.1563 201.2581,-174.4405"/>
<text text-anchor="middle" x="369.625" y="-153" font-family="Times,serif" font-size="10.00" fill="#ff0000">error</text>
</g>
<!-- starting&#45;&gt;error -->
<g id="edge22" class="edge">
<title>starting&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M529.1622,-133.2847C523.2537,-133.166 517.0791,-133.0616 511.3043,-133 497.7265,-132.8551 494.331,-132.9463 480.7525,-133 424.5719,-133.2222 410.3036,-128.9589 354.3492,-134 312.9186,-137.7326 302.5786,-140.285 262.1305,-150 240.1701,-155.2745 234.4263,-156.3133 213.5787,-165 208.4706,-167.1285 203.1744,-169.6481 198.0866,-172.2412"/>
<polygon fill="#000000" stroke="#000000" points="196.4433,-169.1508 189.2389,-176.9191 199.7153,-175.3391 196.4433,-169.1508"/>
<text text-anchor="middle" x="369.625" y="-136" font-family="Times,serif" font-size="10.00" fill="#ff0000">timeout</text>
</g>
<!-- waiting -->
<g id="node8" class="node">
<title>waiting</title>
<ellipse fill="none" stroke="#000000" cx="674.0597" cy="-104" rx="28.6835" ry="18"/>
<text text-anchor="middle" x="674.0597" y="-100.7" font-family="Times,serif" font-size="11.00" fill="#c0c0c0">waiting</text>
</g>
<!-- starting&#45;&gt;waiting -->
<g id="edge21" class="edge">
<title>starting&#45;&gt;waiting</title>
<path fill="none" stroke="#000000" d="M585.03,-127.042C600.5376,-123.0284 620.2591,-117.9243 637.1971,-113.5405"/>
<polygon fill="#000000" stroke="#000000" points="638.5171,-116.8143 647.3211,-110.9203 636.7631,-110.0376 638.5171,-116.8143"/>
<text text-anchor="middle" x="616.1027" y="-124" font-family="Times,serif" font-size="10.00" fill="#000000">wait</text>
</g>
<!-- finished&#45;&gt;reapplying -->
<g id="edge11" class="edge">
<title>finished&#45;&gt;reapplying</title>
<path fill="none" stroke="#000000" d="M405.3864,-213.4152C398.7314,-214.9027 391.6021,-216.2408 384.9009,-217 371.1291,-218.5602 356.1342,-217.5595 342.5951,-215.6438"/>
<polygon fill="#000000" stroke="#000000" points="342.9994,-212.1639 332.5698,-214.0278 341.8854,-219.0747 342.9994,-212.1639"/>
<text text-anchor="middle" x="369.625" y="-219" font-family="Times,serif" font-size="10.00" fill="#000000">reapply</text>
</g>
<!-- finished&#45;&gt;starting -->
<g id="edge12" class="edge">
<title>finished&#45;&gt;starting</title>
<path fill="none" stroke="#000000" d="M454.6556,-193.4586C474.8717,-181.8437 505.1314,-164.4585 527.7155,-151.4831"/>
<polygon fill="#000000" stroke="#000000" points="529.6397,-154.4142 536.5669,-146.3977 526.1525,-148.3447 529.6397,-154.4142"/>
<text text-anchor="middle" x="496.0284" y="-178" font-family="Times,serif" font-size="10.00" fill="#000000">start</text>
</g>
<!-- finished&#45;&gt;finished -->
<g id="edge10" class="edge">
<title>finished&#45;&gt;finished</title>
<path fill="none" stroke="#000000" d="M421.8403,-223.0373C419.5982,-232.8579 423.2603,-242 432.8267,-242 438.8057,-242 442.4784,-238.4289 443.8447,-233.3529"/>
<polygon fill="#000000" stroke="#000000" points="447.3438,-233.0265 443.8131,-223.0373 440.3438,-233.048 447.3438,-233.0265"/>
<text text-anchor="middle" x="432.8267" y="-244" font-family="Times,serif" font-size="10.00" fill="#000000">finish</text>
</g>
<!-- waiting&#45;&gt;aborting -->
<g id="edge23" class="edge">
<title>waiting&#45;&gt;aborting</title>
<path fill="none" stroke="#000000" d="M645.777,-99.8122C633.1209,-97.5459 618.1234,-94.3127 604.9869,-90 498.5822,-55.0672 481.6173,0 369.625,0 168.5787,0 168.5787,0 168.5787,0 99.5714,0 58.515,-87.5008 41.7023,-133.4889"/>
<polygon fill="#000000" stroke="#000000" points="38.3188,-132.5602 38.3021,-143.155 44.9222,-134.8831 38.3188,-132.5602"/>
<text text-anchor="middle" x="369.625" y="-2" font-family="Times,serif" font-size="10.00" fill="#000000">abort</text>
</g>
<!-- waiting&#45;&gt;error -->
<g id="edge26" class="edge">
<title>waiting&#45;&gt;error</title>
<path fill="none" stroke="#000000" d="M656.9354,-89.0636C635.4172,-71.8786 596.6251,-46 558.1456,-46 299.2398,-46 299.2398,-46 299.2398,-46 236.9733,-46 196.4266,-120.8054 178.7539,-162.2077"/>
<polygon fill="#000000" stroke="#000000" points="175.5131,-160.8858 174.9451,-171.4654 181.9867,-163.5492 175.5131,-160.8858"/>
<text text-anchor="middle" x="432.8267" y="-48" font-family="Times,serif" font-size="10.00" fill="#ff0000">timeout</text>
</g>
<!-- waiting&#45;&gt;processing -->
<g id="edge24" class="edge">
<title>waiting&#45;&gt;processing</title>
<path fill="none" stroke="#000000" d="M686.4727,-120.5577C709.6349,-151.4538 759.571,-218.0633 787.0016,-254.653"/>
<polygon fill="#000000" stroke="#000000" points="784.3406,-256.9386 793.1395,-262.8404 789.9415,-252.7397 784.3406,-256.9386"/>
<text text-anchor="middle" x="735.8961" y="-204" font-family="Times,serif" font-size="10.00" fill="#000000">process</text>
</g>
<!-- waiting&#45;&gt;starting -->
<g id="edge25" class="edge">
<title>waiting&#45;&gt;starting</title>
<path fill="none" stroke="#000000" d="M645.9643,-99.7816C633.2083,-98.8275 618.079,-99.0601 604.9869,-103 597.5398,-105.2411 590.1868,-109.0944 583.5919,-113.3378"/>
<polygon fill="#000000" stroke="#000000" points="581.3225,-110.6538 575.1313,-119.2515 585.3328,-116.3912 581.3225,-110.6538"/>
<text text-anchor="middle" x="616.1027" y="-105" font-family="Times,serif" font-size="10.00" fill="#000000">start</text>
</g>
</g>
</svg>

Before

Width:  |  Height:  |  Size: 17 KiB

View File

@@ -1,28 +0,0 @@
.. include:: ../../README.rst
Using Ironic Inspector
======================
.. toctree::
:maxdepth: 2
install/index
cli/index
configuration/index
user/index
admin/index
Contributor Docs
================
.. toctree::
:maxdepth: 2
contributor/index
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@@ -1,518 +0,0 @@
Install Guide
=============
Install from PyPI_ (you may want to use virtualenv to isolate your
environment)::
pip install ironic-inspector
There is also a :devstack-doc:`DevStack <>` plugin for **ironic-inspector**;
see :ref:`contributing_link` for its current status.
Finally, some distributions (e.g. Fedora) package **ironic-inspector**,
some of them under its old name *ironic-discoverd*.
There are several projects you can use to set up **ironic-inspector** in
production. `puppet-ironic <https://git.openstack.org/cgit/openstack/puppet-ironic/>`_
provides Puppet manifests, while :bifrost-doc:`bifrost <>` provides an
Ansible-based standalone installer. Refer to Configuration_ if you plan on
installing **ironic-inspector** manually.
.. _PyPI: https://pypi.org/project/ironic-inspector
.. note::
Please beware of :ref:`possible DNS issues <ubuntu-dns>` when installing
**ironic-inspector** on Ubuntu.
Sample Configuration Files
--------------------------
To generate a sample configuration file, run the following command from the
top level of the code tree::
tox -egenconfig
For a pre-generated sample configuration file, see
:doc:`/configuration/sample-config`.
To generate a sample policy file, run the following command from the
top level of the code tree::
tox -egenpolicy
For a pre-generated sample configuration file, see
:doc:`/configuration/sample-policy`.
Installation options
--------------------
Starting with the Train release, ironic-inspector can run in a non-standalone
mode, in which the ironic-inspector API and the ironic-inspector conductor are
separate services that can be installed on the same host or on different
hosts.
Following are some considerations when you run ironic-inspector in
non-standalone mode:
* Additional packages may be required depending on the tooz backend used in
the installation. For example, ``etcd3gw`` is required if the backend driver
is configured to use ``etcd3+http://``, ``pymemcache`` is required to use
``memcached://``. Some distributions may provide packages like
``python3-etcd3gw`` or ``python3-memcache``. Supported drivers are listed at
:tooz-doc:`Tooz drivers <user/drivers.html>`.
* For ironic-inspector running in non-standalone mode, PXE configuration is
only required on the node where ironic-inspector conductor service is
deployed.
* Use a database backend other than SQLite.
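As an illustration of the coordination setup mentioned above, the tooz backend is selected via a URL in the configuration file. This is only a sketch: verify the exact section and option names against the generated sample configuration before using them.

```ini
# Hypothetical coordination settings for non-standalone mode;
# check the generated sample config for the authoritative names.
[coordination]
backend_url = etcd3+http://127.0.0.1:2379
```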
Configuration
-------------
Copy the sample configuration files to some permanent place
(e.g. ``/etc/ironic-inspector/inspector.conf``).
Fill in these minimum configuration values:
* The ``standalone`` option in the ``DEFAULT`` section - this determines
whether the ironic-inspector services are deployed separately.
* The ``keystone_authtoken`` section - credentials to use when checking user
authentication.
* The ``ironic`` section - credentials to use when accessing **ironic**
API. When **ironic** is deployed standalone with no authentication, specify
the following::
[ironic]
auth_type=none
When **ironic** is deployed standalone with HTTP Basic authentication, valid
credentials are also required::
[ironic]
auth_type=http_basic
username=myName
password=myPassword
* ``connection`` in the ``database`` section - SQLAlchemy connection string
for the database. By default ironic-inspector uses SQLite as the database
backend; if you are running ironic-inspector in non-standalone mode,
switch to another database backend.
* ``dnsmasq_interface`` in the ``iptables`` section - interface on which
``dnsmasq`` (or another DHCP service) listens for PXE boot requests
(defaults to ``br-ctlplane`` which is a sane default for **tripleo**-based
installations but is unlikely to work for other cases).
* if you wish to use the ``dnsmasq`` PXE/DHCP filter driver rather than the
default ``iptables`` driver, see the :ref:`dnsmasq_pxe_filter` description.
* ``store_data`` in the ``processing`` section defines where introspection data
is stored and takes one of three values:
``none``
introspection data is not stored (the default)
``database``
introspection data is stored in the database (recommended for standalone
deployments)
``swift``
introspection data is stored in the Object Store service (recommended for
full openstack deployments)
.. note::
It is possible to create third party storage backends using the
``ironic_inspector.introspection_data.store`` entry point.
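As an illustration, a third-party backend could be registered from its own package through that entry point; the package, module and class names below are hypothetical.

```ini
# setup.cfg of a hypothetical third-party package
[entry_points]
ironic_inspector.introspection_data.store =
    my_store = my_package.store:MyIntrospectionDataStore
```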
See comments inside :doc:`the sample configuration
</configuration/sample-config>` for other possible configuration options.
.. note::
The configuration file contains a password and thus should be owned by
``root`` with access rights such as ``0600``.
Here is an example *inspector.conf* (adapted from a gate run)::
[DEFAULT]
debug = false
rootwrap_config = /etc/ironic-inspector/rootwrap.conf
[database]
connection = mysql+pymysql://root:<PASSWORD>@127.0.0.1/ironic_inspector?charset=utf8
[pxe_filter]
driver=iptables
[iptables]
dnsmasq_interface = br-ctlplane
[ironic]
os_region = RegionOne
project_name = service
password = <PASSWORD>
username = ironic-inspector
auth_url = http://127.0.0.1/identity
auth_type = password
[keystone_authtoken]
www_authenticate_uri = http://127.0.0.1/identity
project_name = service
password = <PASSWORD>
username = ironic-inspector
auth_url = http://127.0.0.1/identity_v2_admin
auth_type = password
[processing]
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
store_data = swift
[swift]
os_region = RegionOne
project_name = service
password = <PASSWORD>
username = ironic-inspector
auth_url = http://127.0.0.1/identity
auth_type = password
.. note::
Set ``debug = true`` if you want to see complete logs.
**ironic-inspector** requires root rights for managing ``iptables``. It
gets them by running ``ironic-inspector-rootwrap`` utility with ``sudo``.
To allow it, copy file ``rootwrap.conf`` and directory ``rootwrap.d`` to the
configuration directory (e.g. ``/etc/ironic-inspector/``) and create file
``/etc/sudoers.d/ironic-inspector-rootwrap`` with the following content::
Defaults:stack !requiretty
stack ALL=(root) NOPASSWD: /usr/bin/ironic-inspector-rootwrap /etc/ironic-inspector/rootwrap.conf *
.. DANGER::
Be very careful about typos in ``/etc/sudoers.d/ironic-inspector-rootwrap``
as any typo will break sudo for **ALL** users on the system. Especially,
make sure there is a new line at the end of this file.
.. note::
``rootwrap.conf`` and all files in ``rootwrap.d`` must be writeable
only by root.
.. note::
If you store ``rootwrap.d`` in a different location, make sure to update
the *filters_path* option in ``rootwrap.conf`` to reflect the change.
If your ``rootwrap.conf`` is in a different location, then you need
to update the *rootwrap_config* option in ``ironic-inspector.conf``
to point to that location.
Replace ``stack`` with whatever user you'll be using to run
**ironic-inspector**.
Configuring IPA
~~~~~~~~~~~~~~~
:ironic-python-agent-doc:`ironic-python-agent <>` is a ramdisk developed for
**ironic**; support for **ironic-inspector** was added during the Liberty
cycle. This is the default ramdisk starting with the Mitaka release.
.. note::
You need at least 2 GiB of RAM on the machines to use IPA built with
diskimage-builder_ and at least 384 MiB to use the *TinyIPA*.
To build an **ironic-python-agent** ramdisk, use ironic-python-agent-builder_.
Alternatively, you can download a `prebuilt image
<https://tarballs.openstack.org/ironic-python-agent/dib/files/>`_.
For local testing and CI purposes you can use `a TinyIPA image
<https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/>`_.
.. NOTE(dtantsur): both projects are branchless, using direct links
.. _ironic-python-agent-builder: https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html
.. _diskimage-builder: https://docs.openstack.org/diskimage-builder/latest/
Configuring PXE
~~~~~~~~~~~~~~~
For the PXE boot environment, you'll need:
* TFTP server running and accessible (see below for using *dnsmasq*).
Ensure ``pxelinux.0`` is present in the TFTP root.
Copy ``ironic-python-agent.kernel`` and ``ironic-python-agent.initramfs``
to the TFTP root as well.
* Next, setup ``$TFTPROOT/pxelinux.cfg/default`` as follows::
default introspect
label introspect
kernel ironic-python-agent.kernel
append initrd=ironic-python-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue systemd.journald.forward_to_console=yes
ipappend 3
Replace ``{IP}`` with the IP of the machine (do not use the loopback
interface; it will be accessed by the ramdisk on a booting machine).
.. note::
While ``systemd.journald.forward_to_console=yes`` is not actually
required, it will substantially simplify debugging if something
goes wrong. You can also enable IPA debug logging by appending
``ipa-debug=1``.
IPA is pluggable: you can insert introspection plugins called
*collectors* into it. For example, to enable a very handy ``logs`` collector
(sending ramdisk logs to **ironic-inspector**), modify the ``append``
line in ``$TFTPROOT/pxelinux.cfg/default``::
append initrd=ironic-python-agent.initramfs ipa-inspection-callback-url=http://{IP}:5050/v1/continue ipa-inspection-collectors=default,logs systemd.journald.forward_to_console=yes
.. note::
You probably want to always keep the ``default`` collector, as it provides
the basic information required for introspection.
* You need a PXE boot server (e.g. *dnsmasq*) running on **the same** machine as
**ironic-inspector**. Don't do any firewall configuration:
**ironic-inspector** will handle it for you. In **ironic-inspector**
configuration file set ``dnsmasq_interface`` to the interface your
PXE boot server listens on. Here is an example *dnsmasq.conf*::
port=0
interface={INTERFACE}
bind-interfaces
dhcp-range={DHCP IP RANGE, e.g. 192.168.0.50,192.168.0.150}
enable-tftp
tftp-root={TFTP ROOT, e.g. /tftpboot}
dhcp-boot=pxelinux.0
dhcp-sequential-ip
.. note::
``dhcp-sequential-ip`` is used because otherwise many nodes booting
simultaneously cause conflicts: the same IP address is offered to
several nodes.
Configuring iPXE
~~~~~~~~~~~~~~~~
iPXE allows better scaling as it primarily uses the HTTP protocol instead of
slow and unreliable TFTP. You still need a TFTP server as a fallback for
nodes not supporting iPXE. To use iPXE, you'll need:
* TFTP server running and accessible (see above for using *dnsmasq*).
Ensure ``undionly.kpxe`` is present in the TFTP root. If any of your nodes
boot with UEFI, you'll also need ``ipxe.efi`` there.
* You also need an HTTP server capable of serving static files.
Copy ``ironic-python-agent.kernel`` and ``ironic-python-agent.initramfs``
there.
* Create a file called ``inspector.ipxe`` in the HTTP root (you can name and
place it differently, just don't forget to adjust the *dnsmasq.conf* example
below)::
#!ipxe
:retry_dhcp
dhcp || goto retry_dhcp
:retry_boot
imgfree
kernel --timeout 30000 http://{IP}:8088/ironic-python-agent.kernel ipa-inspection-callback-url=http://{IP}:5050/v1/continue systemd.journald.forward_to_console=yes BOOTIF=${mac} initrd=agent.ramdisk || goto retry_boot
initrd --timeout 30000 http://{IP}:8088/ironic-python-agent.ramdisk || goto retry_boot
boot
.. note::
Older versions of the iPXE ROM tend to misbehave on unreliable network
connections, thus we use the timeout option with retries.
Just like with PXE, you can customize the list of collectors by appending
the ``ipa-inspection-collectors`` kernel option. For example::
ipa-inspection-collectors=default,logs,extra_hardware
* Just as with PXE, you'll need a PXE boot server. The configuration, however,
will be different. Here is an example *dnsmasq.conf*::
port=0
interface={INTERFACE}
bind-interfaces
dhcp-range={DHCP IP RANGE, e.g. 192.168.0.50,192.168.0.150}
enable-tftp
tftp-root={TFTP ROOT, e.g. /tftpboot}
dhcp-sequential-ip
dhcp-match=ipxe,175
dhcp-match=set:efi,option:client-arch,7
dhcp-match=set:efi,option:client-arch,9
dhcp-match=set:efi,option:client-arch,11
# dhcpv6.option: Client System Architecture Type (61)
dhcp-match=set:efi6,option6:61,0007
dhcp-match=set:efi6,option6:61,0009
dhcp-match=set:efi6,option6:61,0011
dhcp-userclass=set:ipxe6,iPXE
# Client is already running iPXE; move to next stage of chainloading
dhcp-boot=tag:ipxe,http://{IP}:8088/inspector.ipxe
# Client is PXE booting over EFI without iPXE ROM,
# send EFI version of iPXE chainloader
dhcp-boot=tag:efi,tag:!ipxe,ipxe.efi
dhcp-option=tag:efi6,tag:!ipxe6,option6:bootfile-url,tftp://{IP}/ipxe.efi
# Client is running PXE over BIOS; send BIOS version of iPXE chainloader
dhcp-boot=undionly.kpxe,localhost.localdomain,{IP}
First, we configure the same common parameters as with PXE. Then we define
``ipxe`` and ``efi`` tags for IPv4 and ``ipxe6`` and ``efi6`` for IPv6.
Nodes already supporting iPXE are instructed to download and execute
``inspector.ipxe``. Nodes without iPXE booted with UEFI will get the
``ipxe.efi`` firmware to execute, while the remaining nodes will get
``undionly.kpxe``.
Configuring PXE for aarch64
~~~~~~~~~~~~~~~~~~~~~~~~~~~
For aarch64 bare metal nodes, the PXE boot environment is basically the same
as for x86_64. You'll need:
* TFTP server running and accessible (see below for using *dnsmasq*).
Ensure ``grubaa64.efi`` is present in the TFTP root. The firmware can be
retrieved from the installation distributions for aarch64.
* Copy ``ironic-agent.kernel`` and ``ironic-agent.initramfs`` to the TFTP root
as well. Note that the ramdisk needs to be pre-built on an aarch64 machine
with tools like ``ironic-python-agent-builder``, see
https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html
for how to build ramdisk for aarch64.
* Next, setup ``$TFTPROOT/EFI/BOOT/grub.cfg`` as follows::
set default="1"
set timeout=5
menuentry 'Introspection for aarch64' {
linux ironic-agent.kernel text showopts selinux=0 ipa-inspection-callback-url=http://{IP}:5050/v1/continue ipa-inspection-collectors=default ipa-collect-lldp=1 systemd.journald.forward_to_console=no
initrd ironic-agent.initramfs
}
Replace ``{IP}`` with the IP of the machine (do not use the loopback
interface; it will be accessed by the ramdisk on a booting machine).
* Update DHCP options for aarch64, here is an example *dnsmasq.conf*::
port=0
interface={INTERFACE}
bind-interfaces
dhcp-range={DHCP IP RANGE, e.g. 192.168.0.50,192.168.0.150}
enable-tftp
dhcp-match=aarch64, option:client-arch, 11 # aarch64
dhcp-boot=tag:aarch64, grubaa64.efi
tftp-root={TFTP ROOT, e.g. /tftpboot}
dhcp-sequential-ip
Configuring PXE for Multi-arch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the environment consists of bare metal nodes with different architectures,
a different ramdisk is normally required for each architecture. The grub
built-in variable `grub_cpu`_ can be used to locate the correct config
file for each of them.
.. _grub_cpu: https://www.gnu.org/software/grub/manual/grub/html_node/grub_005fcpu.html
For example, setup ``$TFTPROOT/EFI/BOOT/grub.cfg`` as following::
set default=master
set timeout=5
set hidden_timeout_quiet=false
menuentry "master" {
configfile /tftpboot/grub-${grub_cpu}.cfg
}
Prepare a specific grub config for each architecture present, e.g.
``grub-arm64.cfg`` for ARM64 and ``grub-x86_64.cfg`` for x86_64.
Update dnsmasq configuration to contain options for supported architectures.
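For example, the dnsmasq side might tag clients by their DHCP client architecture and serve the matching bootloader. This is only a sketch; the bootloader file names are assumptions and must match what you place in the TFTP root.

```
# aarch64 UEFI clients (client-arch 11)
dhcp-match=set:aarch64,option:client-arch,11
# x86_64 UEFI clients (client-arch 7)
dhcp-match=set:x86-efi,option:client-arch,7
dhcp-boot=tag:aarch64,grubaa64.efi
dhcp-boot=tag:x86-efi,grubx64.efi
```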
Managing the **ironic-inspector** Database
------------------------------------------
**ironic-inspector** provides a command line client for managing its
database. This client can be used for upgrading and downgrading the database
using `alembic <https://alembic.readthedocs.org/>`_ migrations.
If this is your first time running **ironic-inspector** to migrate the
database, simply run:
::
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
If you have previously run a version of **ironic-inspector** earlier than
2.2.0, the safest thing is to delete the existing SQLite database and run
``upgrade`` as shown above. However, if you want to save the existing
database, to ensure your database will work with the migrations, you'll need to
run an extra step before upgrading the database. You only need to do this the
first time running version 2.2.0 or later.
If you are upgrading from **ironic-inspector** version 2.1.0 or lower:
::
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf stamp --revision 578f84f38d
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
If you are upgrading from a git master install of the **ironic-inspector**
after :ref:`rules <introspection_rules>` were introduced:
::
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf stamp --revision d588418040d
ironic-inspector-dbsync --config-file /etc/ironic-inspector/inspector.conf upgrade
Other available commands can be discovered by running::
ironic-inspector-dbsync --help
Running
-------
Running in standalone mode
~~~~~~~~~~~~~~~~~~~~~~~~~~
Execute::
ironic-inspector --config-file /etc/ironic-inspector/inspector.conf
Running in non-standalone mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
API service can be started in development mode with::
ironic-inspector-api-wsgi -p 5050 -- --config-file /etc/ironic-inspector/inspector.conf
For production, the ironic-inspector API service should be hosted by a web
server. Below is a sample configuration for Apache with mod_wsgi::
Listen 5050
<VirtualHost *:5050>
WSGIDaemonProcess ironic-inspector user=stack group=stack threads=10 display-name=%{GROUP}
WSGIScriptAlias / /usr/local/bin/ironic-inspector-api-wsgi
SetEnv APACHE_RUN_USER stack
SetEnv APACHE_RUN_GROUP stack
WSGIProcessGroup ironic-inspector
ErrorLog /var/log/apache2/ironic_inspector_error.log
LogLevel info
CustomLog /var/log/apache2/ironic_inspector_access.log combined
<Directory /opt/stack/ironic-inspector/ironic_inspector/cmd>
WSGIProcessGroup ironic-inspector
WSGIApplicationGroup %{GLOBAL}
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
You can refer to
:ironic-doc:`ironic installation document
<install/install-rdo.html#configuring-ironic-api-behind-mod-wsgi>`
for more guides.
ironic-inspector conductor can be started with::
ironic-inspector-conductor --config-file /etc/ironic-inspector/inspector.conf

View File

@@ -1,4 +0,0 @@
HTTP API
--------
See https://docs.openstack.org/api-ref/baremetal-introspection/

View File

@@ -1,38 +0,0 @@
User Guide
==========
How Ironic Inspector Works
--------------------------
.. toctree::
:maxdepth: 2
workflow
How to use Ironic Inspector
---------------------------
.. toctree::
:maxdepth: 2
usage
HTTP API Reference
------------------
* `Bare Metal Introspection API Reference
<https://docs.openstack.org/api-ref/baremetal-introspection/>`_.
Troubleshooting
---------------
.. toctree::
:maxdepth: 2
troubleshooting
.. toctree::
:hidden:
http-api

View File

@@ -1,192 +0,0 @@
Troubleshooting
---------------
Errors when starting introspection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* *Invalid provision state "available"*
In the Kilo release, with *python-ironicclient* 0.5.0 or newer, Ironic
defaults to reporting the provision state ``AVAILABLE`` for newly enrolled
nodes.
**ironic-inspector** will refuse to conduct introspection in this state, as
such nodes are supposed to be used by Nova for scheduling. See :ref:`node
states <node_states>` for instructions on how to put nodes into the correct
state.
Introspection times out
~~~~~~~~~~~~~~~~~~~~~~~
There are three reasons why introspection can time out (after 60 minutes by
default, adjustable via the ``timeout`` configuration option):
#. A fatal failure in the processing chain before the node was found in the
local cache. See `Troubleshooting data processing`_ for hints.
#. A failure to load the ramdisk on the target node. See
`Troubleshooting PXE boot`_ for hints.
#. A failure during the ramdisk run. See `Troubleshooting ramdisk run`_ for
hints.
Troubleshooting data processing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In this case **ironic-inspector** logs should give a good idea what went wrong.
E.g. for RDO or Fedora the following command will output the full log::
sudo journalctl -u openstack-ironic-inspector
(use ``openstack-ironic-discoverd`` for version < 2.0.0).
.. note::
The service name and specific command might be different for other Linux
distributions (and for old versions of **ironic-inspector**).
If ``ramdisk_error`` plugin is enabled and ``ramdisk_logs_dir`` configuration
option is set, **ironic-inspector** will store logs received from the ramdisk
to the ``ramdisk_logs_dir`` directory. This depends, however, on the ramdisk
implementation.
A local cache miss during data processing would leave a message like:
.. code-block:: bash
ERROR ironic_python_agent.inspector [-] inspectorerror 400: {"error":{"message":"The following failures happened during running pre-processing hooks:\nLook up error: Could not find a node for attributes {'bmc_address': u'10.x.y.z', 'mac': [u'00:aa:bb:cc:dd:ee', u'00:aa:bb:cc:dd:ef']}"}}
One potential explanation for such an error is a misconfiguration in the BMC
where a channel with the wrong IP address is active (and hence detected and
reported back by the Ironic Python Agent), but can then not be matched to the
IP address Ironic has in its cache for this node.
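Conceptually, the lookup step matches the attributes reported by the ramdisk against the attributes cached for each node, and exactly one node must match; a BMC reporting an unknown IP address therefore causes a miss. A simplified, illustrative model follows (not the actual inspector code; the matching attributes and their shapes are assumptions):

```python
def lookup(cached_nodes, reported):
    """Return the UUID of the single matching node, or None on a miss.

    Illustrative only: the real lookup lives in ironic-inspector's
    node cache and is driven by pluggable lookup attributes.
    """
    matches = []
    for uuid, attrs in cached_nodes.items():
        # A node matches if the reported BMC address or any reported
        # MAC address is among the attributes cached for it.
        if reported.get("bmc_address") in attrs.get("bmc_address", []):
            matches.append(uuid)
        elif set(reported.get("mac", [])) & set(attrs.get("mac", [])):
            matches.append(uuid)
    # Zero or multiple matches is reported as a lookup error.
    return matches[0] if len(matches) == 1 else None

cache = {
    "node-1": {"bmc_address": ["10.0.0.5"], "mac": ["00:aa:bb:cc:dd:ee"]},
    "node-2": {"bmc_address": ["10.0.0.6"], "mac": ["00:aa:bb:cc:dd:ef"]},
}
# Wrong BMC channel active: the reported address matches nothing.
print(lookup(cache, {"bmc_address": "10.99.0.5", "mac": []}))  # None
print(lookup(cache, {"bmc_address": "10.0.0.5", "mac": []}))   # node-1
```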
Troubleshooting PXE boot
^^^^^^^^^^^^^^^^^^^^^^^^
PXE booting most often becomes a problem for bare metal environments with
several physical networks. If the hardware vendor provides a remote console
(e.g. iDRAC for DELL), use it to connect to the machine and see what is going
on. You may need to restart introspection.
Another source of information is DHCP and TFTP server logs. Their location
depends on how the servers were installed and run. For RDO or Fedora use::
$ sudo journalctl -u openstack-ironic-inspector-dnsmasq
(use ``openstack-ironic-discoverd-dnsmasq`` for version < 2.0.0).
The last resort is ``tcpdump`` utility. Use something like
::
$ sudo tcpdump -i any port 67 or port 68 or port 69
to watch both DHCP and TFTP traffic going through your machine. Replace
``any`` with a specific network interface to check that DHCP and TFTP
requests really reach it.
If you see a node not attempting PXE boot, or attempting PXE boot on the
wrong network, reboot the machine into the BIOS settings and make sure that
only one relevant NIC is allowed to PXE boot.
If you see a node attempting PXE boot using the correct NIC but failing, make
sure that:
#. network switches configuration does not prevent PXE boot requests from
propagating,
#. there are no additional firewall rules preventing access to port 67 on the
machine where *ironic-inspector* and its DHCP server are installed.
If you see a node receiving a DHCP address and then failing to get the kernel
and/or ramdisk or to boot them, make sure that:
#. TFTP server is running and accessible (use ``tftp`` utility to verify),
#. no firewall rules prevent access to TFTP port,
#. SELinux is configured properly to allow external TFTP access,
If SELinux is neither permissive nor disabled,
you should set the ``tftp_home_dir`` SELinux boolean by executing the command
::
$ sudo setsebool -P tftp_home_dir 1
See `the man page`_ for more details.
.. _the man page: https://www.systutorials.com/docs/linux/man/8-tftpd_selinux/
#. DHCP server is correctly set to point to the TFTP server,
#. ``pxelinux.cfg/default`` within TFTP root contains correct reference to the
kernel and ramdisk.
.. note::
If using iPXE instead of PXE, check the HTTP server logs and the iPXE
configuration instead.
Troubleshooting ramdisk run
^^^^^^^^^^^^^^^^^^^^^^^^^^^
First, check if the ramdisk logs were stored locally as described in the
`Troubleshooting data processing`_ section. If not, ensure that the ramdisk
actually booted as described in the `Troubleshooting PXE boot`_ section.
Finally, you can try connecting to the IPA ramdisk. If you have any remote
console access to the machine, you can check the logs as they appear on the
screen. Otherwise, you can rebuild the IPA image with your SSH key to be able
to log into it. Use the `dynamic-login`_ or `devuser`_ element for a DIB-based
build or put an authorized_keys file in ``/usr/share/oem/`` for a CoreOS-based
one.
.. _devuser: https://docs.openstack.org/diskimage-builder/latest/elements/devuser/README.html
.. _dynamic-login: https://docs.openstack.org/diskimage-builder/latest/elements/dynamic-login/README.html
Troubleshooting DNS issues on Ubuntu
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. _ubuntu-dns:
Ubuntu uses local DNS caching, so it tries localhost for DNS results first
before calling out to an external DNS server. When dnsmasq is installed and
configured for use with ironic-inspector, it can cause problems by interfering
with the local DNS cache. To fix this issue ensure that ``/etc/resolv.conf``
points to your external DNS servers and not to ``127.0.0.1``.
On Ubuntu 14.04 this can be done by editing your
``/etc/resolvconf/resolv.conf.d/head`` and adding your nameservers there.
This will ensure they will come up first when ``/etc/resolv.conf``
is regenerated.
Troubleshooting DnsmasqFilter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When introspection fails and the following error is in ``ironic-inspector.log``
.. code-block:: bash
ERROR ironic_inspector.node_cache [-] [node: 651da5a3-4ecb-4214-a87d-139cc7778c05
state starting] Processing the error event because of an exception
<class 'ironic_inspector.pxe_filter.base.InvalidFilterDriverState'>:
The PXE filter driver DnsmasqFilter, state=uninitialized: my fsm encountered an
exception: Can not transition from state 'uninitialized' on event 'sync'
(no defined transition) raised by ironic_inspector.introspect._do_introspect:
ironic_inspector.pxe_filter.base.InvalidFilterDriverState: The PXE filter driver
DnsmasqFilter, state=uninitialized: my fsm encountered an exception:
Can not transition from state 'uninitialized' on event 'sync'
(no defined transition)
restart ``ironic-inspector``.
Running Inspector in a VirtualBox environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default VirtualBox does not expose a DMI table to the guest. This prevents
ironic-inspector from being able to discover the properties of a node. In
order to run ironic-inspector on a VirtualBox guest the host must be configured
to expose DMI data inside the guest. To do this run the following command on
the VirtualBox host::
VBoxManage setextradata {NodeName} "VBoxInternal/Devices/pcbios/0/Config/DmiExposeMemoryTable" 1
.. note::
Replace `{NodeName}` with the name of the guest you wish to expose the DMI
table on. This command will need to be run once per host to enable this
functionality.


@@ -1,486 +0,0 @@
Usage
-----
.. _usage_guide:
Refer to the `API reference`_ for information on the HTTP API.
Refer to the :python-ironic-inspector-client-doc:`client documentation <>`
for information on how to use CLI and Python library.
.. _API reference: https://docs.openstack.org/api-ref/baremetal-introspection/
Using from Ironic API
~~~~~~~~~~~~~~~~~~~~~
Ironic Kilo introduced support for hardware introspection under the name of
"inspection". **ironic-inspector** introspection is supported for some generic
drivers; please refer to
:ironic-doc:`Ironic inspection documentation <admin/inspection.html>`
for details.
Node States
~~~~~~~~~~~
.. _node_states:
* The nodes should be moved to ``MANAGEABLE`` provision state before
introspection (requires *python-ironicclient* of version 0.5.0 or newer)::
baremetal node manage <node>
* The introspection can be triggered by using the following command::
baremetal node inspect <node>
* After successful introspection and before deploying nodes should be made
available to Nova, by moving them to ``AVAILABLE`` state::
baremetal node provide <node>
.. note::
Due to how Nova interacts with the Ironic driver, you should wait about
1 minute after issuing this command for Nova to become aware of the available
nodes. Use the ``nova hypervisor-stats`` command output to check it.
Introspection Rules
~~~~~~~~~~~~~~~~~~~
.. _introspection_rules:
Inspector supports a simple JSON-based DSL to define rules to run during
introspection. Inspector provides an API to manage such rules, and will run
them automatically after running all processing hooks.
A rule consists of conditions to check, and actions to run. If conditions
evaluate to true on the introspection data, then actions are run on a node.
Please refer to the command below to import introspection rule::
baremetal introspection rule import <json file>
Available conditions and actions are defined by plugins, and can be extended,
see :ref:`contributing_link` for details. See the `API reference`_ for
specific calls to define introspection rules.
Conditions
^^^^^^^^^^
A condition is represented by an object with fields:
``op`` the type of comparison operation, default available operators include:
* ``eq``, ``le``, ``ge``, ``ne``, ``lt``, ``gt`` - basic comparison operators;
* ``in-net`` - checks that an IP address is in a given network;
* ``matches`` - requires a full match against a given regular expression;
* ``contains`` - requires a value to contain a given regular expression;
* ``is-empty`` - checks that field is an empty string, list, dict or
None value.
``field`` a `JSON path <http://goessner.net/articles/JsonPath/>`_ to the field
in the introspection data to use in comparison.
Starting with the Mitaka release, you can also apply conditions to ironic node
fields. Prefix the field with a scheme (``data://`` or ``node://``) to
distinguish between values from introspection data and the node. Both schemes
use JSON path::
{"field": "node://property.path", "op": "eq", "value": "val"}
{"field": "data://introspection.path", "op": "eq", "value": "val"}
If the scheme (``node`` or ``data``) is missing, the condition is applied to
the introspection data.
``invert`` boolean value, whether to invert the result of the comparison.
``multiple`` how to treat situations where the ``field`` query returns multiple
results (e.g. the field contains a list), available options are:
* ``any`` (the default) require any to match,
* ``all`` require all to match,
* ``first`` require the first to match.
All other fields are passed to the condition plugin, e.g. numeric comparison
operations require a ``value`` field to compare against.
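As an illustrative sketch only (real conditions are plugins inside
**ironic-inspector**; this hypothetical helper merely mirrors the documented
semantics for a value that has already been extracted via the JSON path):

```python
import ipaddress
import operator
import re

# Basic comparison operators documented above.
_OPS = {'eq': operator.eq, 'ne': operator.ne, 'lt': operator.lt,
        'gt': operator.gt, 'le': operator.le, 'ge': operator.ge}

def check_condition(cond, value):
    """Evaluate one condition dict against an already-extracted field value."""
    op = cond['op']
    if op in _OPS:
        result = _OPS[op](value, cond['value'])
    elif op == 'in-net':
        # Checks that an IP address is in a given network.
        result = ipaddress.ip_address(value) in ipaddress.ip_network(cond['value'])
    elif op == 'matches':
        # Full match against a given regular expression.
        result = re.fullmatch(cond['value'], value) is not None
    elif op == 'contains':
        # Value must contain a match of the given regular expression.
        result = re.search(cond['value'], value) is not None
    elif op == 'is-empty':
        result = value in ('', None, [], {})
    else:
        raise ValueError('unknown op: %s' % op)
    # The invert flag negates the result of the comparison.
    return not result if cond.get('invert') else result
```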
Scope
^^^^^
By default, introspection rules are applied to all nodes being inspected.
In order for a rule to be applied only to specific nodes, a matching scope
variable must be set on both the rule and the node. To set the scope for a
rule, include the field ``"scope"`` in the JSON file before importing. For
example::
cat <json file>
{
"description": "...",
"actions": [...],
"conditions": [...],
"scope": "SCOPE"
}
Set the property ``inspection_scope`` on the node you want the rule to be
applied to::
baremetal node set --property inspection_scope="SCOPE" <node>
Now, when inspecting, the rule will be applied only to nodes with matching
scope value. It will also ignore nodes that do not have ``inspection_scope``
property set. Note that if a rule has no scope set, it will be applied to all
nodes, regardless of whether they have ``inspection_scope`` set.
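The scope matching described above amounts to a simple check; a hypothetical
sketch (not inspector's actual code):

```python
# A rule with no scope applies everywhere; a scoped rule applies only to
# nodes whose inspection_scope property matches exactly.
def rule_applies(rule_scope, node_inspection_scope):
    if rule_scope is None:
        return True
    return rule_scope == node_inspection_scope
```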
Actions
^^^^^^^
An action is represented by an object with fields:
``action`` type of action. Possible values are defined by plugins.
All other fields are passed to the action plugin.
Default available actions include:
* ``fail`` fail introspection. Requires a ``message`` parameter for the failure
message.
* ``set-attribute`` sets an attribute on an Ironic node. Requires a ``path``
field, which is the path to the attribute as used by ironic (e.g.
``/properties/something``), and a ``value`` to set.
* ``set-capability`` sets a capability on an Ironic node. Requires ``name``
and ``value`` fields, which are the name and the value for a new capability
accordingly. Existing value for this same capability is replaced.
* ``extend-attribute`` the same as ``set-attribute``, but treats the existing
value as a list and appends the value to it. If the optional ``unique``
parameter is set to ``True``, nothing is added if the given value is already
in the list.
* ``add-trait`` adds a trait to an Ironic node. Requires a ``name`` field
with the name of the trait to add.
* ``remove-trait`` removes a trait from an Ironic node. Requires a ``name``
field with the name of the trait to remove.
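The ``extend-attribute`` semantics above can be sketched with a hypothetical
helper (illustrative only, not the action plugin's actual implementation):

```python
# Treat the existing attribute value as a list and append to it; with
# unique=True a value already present is not added again.
def extend_attribute(current, value, unique=False):
    items = list(current) if isinstance(current, list) else [current]
    if unique and value in items:
        return items
    items.append(value)
    return items
```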
Starting with the Mitaka release, the ``value`` field in actions supports
fetching data from introspection, using `python string formatting notation
<https://docs.python.org/3/library/string.html#formatspec>`_::
{"action": "set-attribute", "path": "/driver_info/ipmi_address",
"value": "{data[inventory][bmc_address]}"}
Note that any value referenced in this way will be converted to a string.
If ``value`` is a dict or list, strings nested at any level within the
structure will be formatted as well::
{"action": "set-attribute", "path": "/properties/root_device",
"value": {"serial": "{data[root_device][serial]}"}}
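The notation above is plain Python ``str.format`` with the introspection data
bound to the name ``data``; a minimal sketch with sample data:

```python
# Sample introspection data (hypothetical values for illustration).
data = {'inventory': {'bmc_address': '192.0.2.10'},
        'root_device': {'serial': 'ABC123'}}

# A flat value reference resolves to a string.
flat = '{data[inventory][bmc_address]}'.format(data=data)

# Strings nested inside a dict value are formatted the same way.
nested = {'serial': '{data[root_device][serial]}'.format(data=data)}
```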
Plugins
~~~~~~~
.. _introspection_plugins:
**ironic-inspector** heavily relies on plugins for data processing. Even the
standard functionality is largely based on plugins. Set ``processing_hooks``
option in the configuration file to change the set of plugins to be run on
introspection data. Note that order **does** matter in this option, especially
for hooks that have dependencies on other hooks.
These are plugins that are enabled by default and should not be disabled,
unless you understand what you're doing:
``scheduler``
validates and updates basic hardware scheduling properties: CPU number and
architecture, memory and disk size.
.. note::
Diskless nodes have the disk size property ``local_gb == 0``. Always use
node driver ``root_device`` hints to prevent unexpected HW failures
passing silently.
``validate_interfaces``
validates network interfaces information. Creates new
ports, optionally deletes ports that were not present in the introspection
data. Also sets the ``pxe_enabled`` flag for the PXE-booting port and
unsets it for all the other ports to avoid **nova** picking a random port
to boot the node.
.. note::
When the ``pxe_filter`` is configured to only open the DHCP server for
known MAC addresses, i.e. the ``[pxe_filter]deny_unknown_macs``
configuration option is enabled, it is not possible to rely on the
``validate_interfaces`` processing plug-in to create the PXE-booting port
in ironic. Nodes must have at least one enrolled port prior to
introspection in this case.
The following plugins are enabled by default, but can be disabled if not
needed:
``ramdisk_error``
reports error, if ``error`` field is set by the ramdisk, also optionally
stores logs from ``logs`` field, see the `API reference`_ for details.
``capabilities``
detect node capabilities: CPU, boot mode, etc. See `Capabilities
Detection`_ for more details.
``pci_devices``
gathers the list of all PCI devices returned by the ramdisk and compares to
those defined in ``alias`` field(s) from ``pci_devices`` section of
configuration file. The recognized PCI devices and their count are then
stored in node properties. This information can be later used in nova
flavors for node scheduling.
Here are some plugins that can be additionally enabled:
``example``
an example plugin that logs its input and output.
``raid_device``
gathers block devices from ramdisk and exposes root device in multiple
runs.
``extra_hardware``
stores the value of the 'data' key returned by the ramdisk as a JSON
encoded string in a Swift object. The plugin will also attempt to convert
the data into a format usable by introspection rules. If this is successful
then the new format will be stored in the 'extra' key. The 'data' key is
then deleted from the introspection data, as it is assumed unusable by
introspection rules unless converted.
``lldp_basic``
Processes LLDP data returned from inspection, parses TLVs from the
Basic Management (802.1AB), 802.1Q, and 802.3 sets and stores the
processed data back in the Ironic inspector database. To enable LLDP in the
inventory from IPA, ``ipa-collect-lldp=1`` should be passed as a kernel
parameter to the IPA ramdisk.
``local_link_connection``
Processes LLDP data returned from inspection, specifically looking for the
port ID and chassis ID. If found, it configures the local link connection
information on the Ironic ports with that data. To enable LLDP in the
inventory from IPA, ``ipa-collect-lldp=1`` should be passed as a kernel
parameter to the IPA ramdisk. In order to avoid processing the raw LLDP
data twice, the ``lldp_basic`` plugin should also be installed and run
prior to this plugin.
``physnet_cidr_map``
Configures the ``physical_network`` property of the node's Ironic port when
the IP address is in a configured CIDR mapping. CIDR to physical network
mappings are set in the configuration using the ``[port_physnet]/cidr_map``
option, for example::
[port_physnet]
cidr_map = 10.10.10.0/24:physnet_a, 2001:db8::/64:physnet_b
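The lookup this mapping implies can be sketched as follows (the option value
parsed into a dict; hypothetical helper, not the plugin's actual code):

```python
import ipaddress

# cidr_map option value from the example above, parsed into a mapping.
cidr_map = {'10.10.10.0/24': 'physnet_a', '2001:db8::/64': 'physnet_b'}

def physnet_for(ip):
    """Return the physical network whose CIDR contains the IP, if any."""
    addr = ipaddress.ip_address(ip)
    for cidr, physnet in cidr_map.items():
        if addr in ipaddress.ip_network(cidr):
            return physnet
    return None
```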
``accelerators``
Processes PCI data returned from inspection and compares it with the
accelerator inventory. If any accelerator device is found, its device
information is added to the properties field of the ironic node, for
example::
{'local_gb': '1115', 'cpus': '40', 'cpu_arch': 'x86_64', 'memory_mb': '32768',
'capabilities': 'boot_mode:bios,cpu_vt:true,cpu_aes:true,cpu_hugepages:true,cpu_hugepages_1g:true,cpu_txt:true',
'accel': [{'vendor_id': '10de', 'device_id': '1eb8', 'type': 'GPU',
'pci_address': '0000:82:00.0',
'device_info': 'NVIDIA Corporation Tesla T4'}]
}
Refer to :ref:`contributing_link` for information on how to write your
own plugin.
Discovery
~~~~~~~~~
Starting from Mitaka, **ironic-inspector** is able to register new nodes
in Ironic.
The existing ``node-not-found-hook`` handles what happens if
**ironic-inspector** receives inspection data from a node it can not identify.
This can happen if a node is manually booted without registering it with
Ironic first.
For discovery, the configuration file option ``node_not_found_hook`` should be
set to load the hook called ``enroll``. This hook will enroll the unidentified
node into Ironic using the ``fake-hardware`` hardware type. This is
a configurable option: set ``enroll_node_driver`` in the **ironic-inspector**
configuration file to the hardware type you want. You can also configure
arbitrary fields to set on discovery, for example:
.. code-block:: ini
[discovery]
enroll_node_driver = ipmi
enroll_node_fields = management_interface:noop,resource_class:baremetal
The ``enroll`` hook will also set the ``ipmi_address`` property on the new
node, if it is available in the introspection data we received,
see `ramdisk callback`_.
.. _ramdisk callback: https://docs.openstack.org/api-ref/baremetal-introspection/?expanded=ramdisk-callback-detail#ramdisk-callback
Once the ``enroll`` hook is finished, **ironic-inspector** will process the
introspection data in the same way it would for an identified node. It runs
the processing :ref:`plugins <introspection_plugins>`, and after that it runs
introspection rules, which would allow for more customisable node
configuration, see :ref:`rules <introspection_rules>`.
A rule to set a node's Ironic driver to ``ipmi`` and populate the required
``driver_info`` for that driver would look like::
[{
"description": "Set IPMI driver_info if no credentials",
"actions": [
{"action": "set-attribute", "path": "driver", "value": "ipmi"},
{"action": "set-attribute", "path": "driver_info/ipmi_username",
"value": "username"},
{"action": "set-attribute", "path": "driver_info/ipmi_password",
"value": "password"}
],
"conditions": [
{"op": "is-empty", "field": "node://driver_info.ipmi_password"},
{"op": "is-empty", "field": "node://driver_info.ipmi_username"}
]
},{
"description": "Set deploy info if not already set on node",
"actions": [
{"action": "set-attribute", "path": "driver_info/deploy_kernel",
"value": "<glance uuid>"},
{"action": "set-attribute", "path": "driver_info/deploy_ramdisk",
"value": "<glance uuid>"}
],
"conditions": [
{"op": "is-empty", "field": "node://driver_info.deploy_ramdisk"},
{"op": "is-empty", "field": "node://driver_info.deploy_kernel"}
]
}]
All nodes discovered and enrolled via the ``enroll`` hook will contain an
``auto_discovered`` flag in the introspection data. This flag makes it
possible to distinguish between manually enrolled nodes and auto-discovered
nodes in the introspection rules using the rule condition ``eq``::
{
"description": "Enroll auto-discovered nodes with ipmi hardware type",
"actions": [
{"action": "set-attribute", "path": "driver", "value": "ipmi"}
],
"conditions": [
{"op": "eq", "field": "data://auto_discovered", "value": true}
]
}
Reapplying introspection on stored data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To allow correcting mistakes in introspection rules the API provides
an entry point that triggers the introspection over stored data. The
data to use for processing is kept in Swift separately from the data
already processed. Reapplying introspection overwrites processed data
in the store. Updating the introspection data through the endpoint
isn't supported yet. The following preconditions are checked before
reapplying introspection:
* no data is being sent along with the request
* Swift store is configured and enabled
* introspection data is stored in Swift for the node UUID
* node record is kept in database for the UUID
* introspection is not ongoing for the node UUID
Should the preconditions fail, an immediate response is given to the user:
* ``400`` if the request contained data or in case Swift store is not
enabled in configuration
* ``404`` in case Ironic doesn't keep track of the node UUID
* ``409`` if an introspection is already ongoing for the node
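The precondition-to-status mapping above can be sketched as (illustrative
only, not the API's actual implementation):

```python
# Map the documented precondition failures to their HTTP status codes.
def reapply_status(has_request_data, swift_enabled, node_found, ongoing):
    if has_request_data or not swift_enabled:
        return 400  # request contained data, or Swift store not enabled
    if not node_found:
        return 404  # Ironic doesn't keep track of the node UUID
    if ongoing:
        return 409  # an introspection is already ongoing for the node
    return 202      # accepted; processing continues in a background task
```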
If the preconditions are met a background task is executed to carry
out the processing and a ``202 Accepted`` response is returned to the
endpoint user. As requested, these steps are performed in the
background task:
* preprocessing hooks
* post processing hooks, storing result in Swift
* introspection rules
These steps are avoided, based on the feature requirements:
* ``node_not_found_hook`` is skipped
* power operations
* roll-back actions done by hooks
Limitations:
* there is currently no way to update the unprocessed data
* the unprocessed data is never cleaned from the store
* check for stored data presence is performed in background;
missing data situation still results in a ``202`` response
Capabilities Detection
~~~~~~~~~~~~~~~~~~~~~~
Starting with the Newton release, **Ironic Inspector** can optionally discover
several node capabilities. A recent (Newton or newer) IPA image is required
for it to work.
Boot mode
^^^^^^^^^
The current boot mode (BIOS or UEFI) can be detected and recorded as
``boot_mode`` capability in Ironic. Some drivers change their behaviour to
account for this capability.
Set the :oslo.config:option:`capabilities.boot_mode` configuration option to
``True`` to enable.
CPU capabilities
^^^^^^^^^^^^^^^^
Several CPU flags are detected by default and recorded as the following
capabilities:
* ``cpu_aes`` AES instructions.
* ``cpu_vt`` virtualization support.
* ``cpu_txt`` TXT support.
* ``cpu_hugepages`` huge pages (2 MiB) support.
* ``cpu_hugepages_1g`` huge pages (1 GiB) support.
It is possible to define your own rules for detecting CPU capabilities.
Set the :oslo.config:option:`capabilities.cpu_flags` configuration option
to a mapping between a CPU flag and a capability, for example::
cpu_flags = aes:cpu_aes,svm:cpu_vt,vmx:cpu_vt
See the default value of this option for a more detailed example.
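The mapping behaves roughly like this (sketch; flag and capability names
taken from the example above):

```python
# cpu_flags maps a CPU flag reported by the ramdisk to a capability name.
cpu_flags = {'aes': 'cpu_aes', 'svm': 'cpu_vt', 'vmx': 'cpu_vt'}

def detect_cpu_capabilities(reported_flags):
    """Return sorted capability names for the flags a node reports."""
    return sorted({cap for flag, cap in cpu_flags.items()
                   if flag in reported_flags})
```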
InfiniBand support
^^^^^^^^^^^^^^^^^^
Starting with the Ocata release, **Ironic Inspector** supports detection of
InfiniBand network interfaces. A recent (Ocata or newer) IPA image is required
for that to work. When an InfiniBand network interface is discovered, the
**Ironic Inspector** adds a ``client-id`` attribute to the ``extra`` attribute
in the ironic port. The **Ironic Inspector** should be configured with
``iptables.ethoib_interfaces`` to indicate the Ethernet Over InfiniBand (EoIB)
interfaces which are used for physical access to the DHCP network.
For example, if the **Ironic Inspector** DHCP server is using ``br-inspector``
and ``br-inspector`` has an EoIB port, e.g. ``eth0``, then
``iptables.ethoib_interfaces`` should be set to ``eth0``.
The ``iptables.ethoib_interfaces`` option allows mapping the baremetal GUID to
its EoIB MAC based on the neighs files. This is needed for blocking DHCP
traffic from nodes (MACs) which are not part of the introspection.
The format of the ``/sys/class/net/<ethoib>/eth/neighs`` file::
# EMAC=<ethernet mac of the ethoib> IMAC=<qp number:lid:GUID>
# For example:
IMAC=97:fe:80:00:00:00:00:00:00:7c:fe:90:03:00:29:26:52
qp number=97:fe
lid=80:00:00:00:00:00:00
GUID=7c:fe:90:03:00:29:26:52
Example of content::
EMAC=02:00:02:97:00:01 IMAC=97:fe:80:00:00:00:00:00:00:7c:fe:90:03:00:29:26:52
EMAC=02:00:00:61:00:02 IMAC=61:fe:80:00:00:00:00:00:00:7c:fe:90:03:00:29:24:4f
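Given the documented IMAC layout (2 octets of qp number, 7 of lid, 8 of GUID),
a neighs line can be decomposed as in this sketch (hypothetical helper, not
inspector code):

```python
# Parse one neighs line into EMAC, qp number, lid and GUID, following the
# IMAC layout documented above.
def parse_neigh(line):
    emac, imac = (part.split('=', 1)[1] for part in line.split())
    octets = imac.split(':')
    return {'emac': emac,
            'qp_number': ':'.join(octets[:2]),
            'lid': ':'.join(octets[2:9]),
            'guid': ':'.join(octets[9:])}
```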


@@ -1,88 +0,0 @@
How Ironic Inspector Works
==========================
Workflow
--------
Usual hardware introspection flow is as follows:
* Operator enrolls nodes into Ironic_ e.g. via
:python-ironicclient-doc:`baremetal CLI <cli/index.html>`
command. Power management credentials should be provided to Ironic at this
step.
* Nodes are put in the correct state for introspection as described in
:ref:`node states <node_states>`.
* Operator sends nodes on introspection using **ironic-inspector** API or CLI
(see :ref:`usage <usage_guide>`).
* On receiving node UUID **ironic-inspector**:
* validates node power credentials, current power and provisioning states,
* allows access to PXE boot service for the nodes,
* issues reboot command for the nodes, so that they boot the ramdisk.
* The ramdisk collects the required information and posts it back to
**ironic-inspector**.
* On receiving data from the ramdisk, **ironic-inspector**:
* validates received data,
* finds the node in the Ironic database using its BMC address (MAC address in
case of the SSH driver),
* fills missing node properties with received data and creates missing ports.
.. note::
**ironic-inspector** is responsible for creating Ironic ports for some or
all NICs found on the node. **ironic-inspector** is also capable of
deleting ports that should not be present. There are two important
configuration options that affect this behavior: ``add_ports`` and
``keep_ports`` (please refer to :doc:`the sample configuration file
</configuration/sample-config>` for a detailed explanation).
Default values as of **ironic-inspector** 1.1.0 are ``add_ports=pxe``,
``keep_ports=all``, which means that only one port will be added, which is
associated with NIC the ramdisk PXE booted from. No ports will be deleted.
This setting ensures that deploying on introspected nodes will succeed
despite `Ironic bug 1405131
<https://bugs.launchpad.net/ironic/+bug/1405131>`_.
Ironic inspection feature by default requires different settings:
``add_ports=all``, ``keep_ports=present``, which means that ports will be
created for all detected NICs, and all other ports will be deleted.
Refer to the
:ironic-doc:`Ironic inspection documentation <admin/inspection.html>`
for details.
Ironic inspector can also be configured to not create any ports. This is
done by setting ``add_ports=disabled``. If ``add_ports`` is set to
``disabled``, the ``keep_ports`` option should also be set to ``all``. This
ensures that no manually added ports are deleted.
* Separate API (see :ref:`usage <usage_guide>` and `API reference`_) can
be used to query introspection results for a given node.
* Nodes are put in the correct state for deploying as described in
:ref:`node states <node_states>`.
Starting DHCP server and configuring PXE boot environment is not part of this
package and should be done separately.
.. _API reference: https://docs.openstack.org/api-ref/baremetal-introspection/
State machine diagram
---------------------
.. _state_machine_diagram:
The diagram below shows the introspection states that an **ironic-inspector**
FSM goes through during the node introspection, discovery and reprocessing.
The diagram also shows events that trigger state transitions.
.. figure:: ../images/states.svg
:width: 660px
:align: center
:alt: ironic-inspector state machine diagram
.. _Ironic: https://wiki.openstack.org/wiki/Ironic


@@ -1,20 +0,0 @@
.\" Manpage for ironic-inspector.
.TH man 8 "08 Oct 2014" "1.0" "ironic-inspector man page"
.SH NAME
ironic-inspector \- hardware introspection daemon for OpenStack Ironic.
.SH SYNOPSIS
ironic-inspector CONFFILE
.SH DESCRIPTION
This command starts the ironic-inspector service, which starts and finishes
hardware discovery and maintains firewall rules for nodes accessing PXE
boot service (usually dnsmasq).
.SH OPTIONS
The ironic-inspector command does not take any options. However, you should
supply the path to the configuration file.
.SH SEE ALSO
README page located at https://docs.openstack.org/ironic-inspector/latest/
provides some information about how to configure and use the service.
.SH BUGS
No known bugs.
.SH AUTHOR
Dmitry Tantsur (divius.inside@gmail.com)


@@ -1,100 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generic Rest Api tools."""
import functools
import flask
from oslo_config import cfg
from oslo_utils import uuidutils
from ironic_inspector.common.i18n import _
from ironic_inspector import introspection_state as istate
from ironic_inspector import utils
CONF = cfg.CONF
def raises_coercion_exceptions(fn):
"""Convert coercion function exceptions to utils.Error.
:raises: utils.Error when the coercion function raises an
AssertionError or a ValueError
"""
@functools.wraps(fn)
def inner(*args, **kwargs):
try:
ret = fn(*args, **kwargs)
except (AssertionError, ValueError) as exc:
raise utils.Error(_('Bad request: %s') % exc, code=400)
return ret
return inner
def request_field(field_name):
"""Decorate a function that coerces the specified field.
:param field_name: name of the field to fetch
:returns: a decorator
"""
def outer(fn):
@functools.wraps(fn)
def inner(*args, **kwargs):
default = kwargs.pop('default', None)
field = flask.request.args.get(field_name, default=default)
if field == default:
# field not found or the same as the default, just return
return default
return fn(field, *args, **kwargs)
return inner
return outer
@request_field('marker')
@raises_coercion_exceptions
def marker_field(value):
"""Fetch the pagination marker field from flask.request.args.
:returns: an uuid
"""
assert uuidutils.is_uuid_like(value), _('Marker not UUID-like')
return value
@request_field('limit')
@raises_coercion_exceptions
def limit_field(value):
"""Fetch the pagination limit field from flask.request.args.
:returns: the limit
"""
# limit of zero means the default limit
value = int(value) or CONF.api_max_limit
assert value >= 0, _('Limit cannot be negative')
assert value <= CONF.api_max_limit, _('Limit over %s') % CONF.api_max_limit
return value
@request_field('state')
@raises_coercion_exceptions
def state_field(value):
"""Fetch the pagination state field from flask.request.args.
:returns: list of the state(s)
"""
states = istate.States.all()
value = value.split(',')
invalid_states = [state for state in value if state not in states]
assert not invalid_states, \
_('State(s) "%s" are not valid') % ', '.join(invalid_states)
return value


@@ -1,14 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
eventlet.monkey_patch()


@@ -1,44 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The Ironic Inspector service."""
import sys
from oslo_config import cfg
from oslo_service import service
from ironic_inspector.common.i18n import _
from ironic_inspector.common.rpc_service import RPCService
from ironic_inspector.common import service_utils
from ironic_inspector import wsgi_service
CONF = cfg.CONF
def main(args=sys.argv[1:]):
# Parse config file and command line options, then start logging
service_utils.prepare_service(args)
if not CONF.standalone:
msg = _('To run ironic-inspector in standalone mode, '
'[DEFAULT]standalone should be set to True.')
sys.exit(msg)
launcher = service.ServiceLauncher(CONF, restart_method='mutate')
launcher.launch_service(wsgi_service.WSGIService())
launcher.launch_service(RPCService(CONF.host))
launcher.wait()
if __name__ == '__main__':
sys.exit(main())


@@ -1,42 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The Ironic Inspector Conductor service."""
import sys
from oslo_config import cfg
from oslo_service import service
from ironic_inspector.common.i18n import _
from ironic_inspector.common.rpc_service import RPCService
from ironic_inspector.common import service_utils
CONF = cfg.CONF
def main(args=sys.argv[1:]):
# Parse config file and command line options, then start logging
service_utils.prepare_service(args)
if CONF.standalone:
msg = _('To run ironic-inspector-conductor, [DEFAULT]standalone '
'should be set to False.')
sys.exit(msg)
launcher = service.ServiceLauncher(CONF, restart_method='mutate')
launcher.launch_service(RPCService(CONF.host))
launcher.wait()
if __name__ == '__main__':
sys.exit(main())


@@ -1,91 +0,0 @@
# Copyright 2015 Cisco Systems
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import sys
from alembic import command as alembic_command
from alembic import config as alembic_config
from alembic import util as alembic_util
from oslo_config import cfg
from oslo_log import log
from ironic_inspector import conf # noqa
CONF = cfg.CONF
def add_alembic_command(subparsers, name):
return subparsers.add_parser(
name, help=getattr(alembic_command, name).__doc__)
def add_command_parsers(subparsers):
for name in ['current', 'history', 'branches', 'heads']:
parser = add_alembic_command(subparsers, name)
parser.set_defaults(func=do_alembic_command)
for name in ['stamp', 'show', 'edit']:
parser = add_alembic_command(subparsers, name)
parser.set_defaults(func=with_revision)
parser.add_argument('--revision', nargs='?', required=True)
parser = add_alembic_command(subparsers, 'upgrade')
parser.set_defaults(func=with_revision)
parser.add_argument('--revision', nargs='?')
parser = add_alembic_command(subparsers, 'revision')
parser.set_defaults(func=do_revision)
parser.add_argument('-m', '--message')
parser.add_argument('--autogenerate', action='store_true')
command_opt = cfg.SubCommandOpt('command',
title='Command',
help='Available commands',
handler=add_command_parsers)
def _get_alembic_config():
base_path = os.path.split(os.path.dirname(__file__))[0]
return alembic_config.Config(os.path.join(base_path, 'db/alembic.ini'))
def do_revision(config, cmd, *args, **kwargs):
do_alembic_command(config, cmd, message=CONF.command.message,
autogenerate=CONF.command.autogenerate)
def with_revision(config, cmd, *args, **kwargs):
revision = CONF.command.revision or 'head'
do_alembic_command(config, cmd, revision)
def do_alembic_command(config, cmd, *args, **kwargs):
try:
getattr(alembic_command, cmd)(config, *args, **kwargs)
except alembic_util.CommandError as e:
alembic_util.err(str(e))
def main(args=sys.argv[1:]):
log.register_options(CONF)
CONF.register_cli_opt(command_opt)
CONF(args, project='ironic-inspector')
config = _get_alembic_config()
config.set_main_option('script_location', "ironic_inspector.db:migrations")
config.ironic_inspector_config = CONF
CONF.command.func(config, CONF.command.name)


@@ -1,135 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Migrate introspected data between Swift and database."""
import sys
from oslo_config import cfg
from oslo_log import log
from oslo_utils import encodeutils
from ironic_inspector.common.i18n import _
from ironic_inspector.conf import opts
from ironic_inspector import node_cache
from ironic_inspector.plugins import base as plugins_base
from ironic_inspector import utils
LOG = log.getLogger(__name__)
CONF = cfg.CONF
_AVAILABLE_STORAGES = [('database', _('The database storage backend')),
('swift', _('The Swift storage backend'))]
_OPTS = [
cfg.StrOpt('from',
dest='source_storage',
required=True,
choices=_AVAILABLE_STORAGES,
help=_('The source storage where the introspected data will be '
'read from.')),
cfg.StrOpt('to',
dest='target_storage',
required=True,
choices=_AVAILABLE_STORAGES,
help=_('The target storage where the introspected data will be '
'saved to.'))
]
# Migration result
RESULT_NOCONTENT = 'no content'
RESULT_FAILED = 'failed'
RESULT_SUCCESS = 'success'
def _setup_logger(args=None):
args = [] if args is None else args
log.register_options(CONF)
opts.set_config_defaults()
opts.parse_args(args)
log.setup(CONF, 'ironic_inspector')
class MigrationTool(object):
def _migrate_one(self, node, processed):
LOG.debug('Starting to migrate introspection data for node '
'%(node)s (processed %(processed)s)',
{'node': node.uuid, 'processed': processed})
try:
data = self.ext_src.get(node.uuid, processed=processed,
get_json=True)
if not data:
return RESULT_NOCONTENT
self.ext_tgt.save(node.uuid, data, processed=processed)
except Exception as e:
try:
already_migrated = self.ext_tgt.get(node.uuid,
processed=processed,
get_json=True)
except Exception:
already_migrated = False
if not already_migrated:
# If we already have data on the target, there is nothing
# for us to do here.
LOG.error('Migrate introspection data failed for node '
'%(node)s (processed %(processed)s), error: '
'%(error)s', {'node': node.uuid,
'processed': processed,
'error': e})
return RESULT_FAILED
return RESULT_SUCCESS
def main(self):
CONF.register_cli_opts(_OPTS)
_setup_logger(sys.argv[1:])
if CONF.source_storage == CONF.target_storage:
raise utils.Error(_('Source and destination can not be the same.'))
introspection_data_manager = plugins_base.introspection_data_manager()
self.ext_src = introspection_data_manager[CONF.source_storage].obj
self.ext_tgt = introspection_data_manager[CONF.target_storage].obj
nodes = node_cache.get_node_list()
migration_list = [(n, p) for n in nodes for p in [True, False]]
failed_records = []
for node, processed in migration_list:
result = self._migrate_one(node, processed)
if result == RESULT_FAILED:
failed_records.append((node.uuid, processed))
msg = ('Finished introspection data migration, total records: %d. '
% len(migration_list))
if failed_records:
msg += 'Failed to migrate:\n' + '\n'.join([
'%s(processed=%s)' % (record[0], record[1])
for record in failed_records])
elif len(migration_list) > 0:
msg += 'all records are migrated successfully.'
print(msg)
def main():
try:
MigrationTool().main()
except KeyboardInterrupt:
print(_("... terminating migration tool"), file=sys.stderr)
return 130
except Exception as e:
print(encodeutils.safe_encode(str(e)), file=sys.stderr)
return 1
if __name__ == '__main__':
sys.exit(main())
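The migration loop above expands every node into `(node, processed)` pairs, migrates each, and collects only the failures for the final summary. A minimal stand-alone sketch of that aggregation, using plain dicts in place of the Swift/database storage plugins (all names here are illustrative):

```python
RESULT_SUCCESS = 'success'
RESULT_FAILED = 'failed'
RESULT_NOCONTENT = 'no content'

def migrate_one(node_uuid, processed, source, target):
    # Copy one record; a missing record counts as "no content" rather
    # than a failure, mirroring the tool's RESULT_NOCONTENT path.
    data = source.get((node_uuid, processed))
    if not data:
        return RESULT_NOCONTENT
    target[(node_uuid, processed)] = data
    return RESULT_SUCCESS

source = {('uuid-1', True): {'cpus': 4}, ('uuid-2', False): {'cpus': 8}}
target = {}
nodes = ['uuid-1', 'uuid-2']
# Every node is migrated twice: once for processed and once for raw data.
migration_list = [(n, p) for n in nodes for p in (True, False)]
failed = [(n, p) for n, p in migration_list
          if migrate_one(n, p, source, target) == RESULT_FAILED]
```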


@@ -1,58 +0,0 @@
# Copyright (c) 2018 NEC, Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from oslo_config import cfg
from oslo_upgradecheck import common_checks
from oslo_upgradecheck import upgradecheck
from ironic_inspector.common.i18n import _
import ironic_inspector.conf as conf
from ironic_inspector import policy # noqa Import for configuration loading.
CONF = conf.CONF
class Checks(upgradecheck.UpgradeCommands):
"""Upgrade checks for the ironic-status upgrade check command
Upgrade checks should be added as separate methods in this class
    and added to the _upgrade_checks tuple.
"""
# A tuple of check tuples of (<name of check>, <check function>).
# The name of the check will be used in the output of this command.
# The check function takes no arguments and returns an
# oslo_upgradecheck.upgradecheck.Result object with the appropriate
# oslo_upgradecheck.upgradecheck.Code and details set. If the
# check function hits warnings or failures then those should be stored
# in the returned Result's "details" attribute. The
# summary will be rolled up at the end of the check() method.
_upgrade_checks = (
# Added in Wallaby to raise visibility of the Victoria deprecation
# of oslo.policy's json policy support.
(_('Policy File JSON to YAML Migration'),
(common_checks.check_policy_json, {'conf': CONF})),
)
def main():
return upgradecheck.main(
cfg.CONF, project='ironic', upgrade_command=Checks())
if __name__ == '__main__':
sys.exit(main())
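The `_upgrade_checks` tuple pattern above — a human-readable name paired with a check callable, with results rolled up afterwards — can be sketched without oslo.upgradecheck as follows (the result codes and the check body are simplified stand-ins, not the library's API):

```python
# Minimal stand-ins for oslo_upgradecheck's result codes.
SUCCESS, WARNING, FAILURE = 0, 1, 2

def check_policy_file(conf):
    # Hypothetical check: warn when a legacy JSON policy file is in use.
    if conf.get('policy_file', '').endswith('.json'):
        return (WARNING, 'JSON policy files are deprecated; migrate to YAML')
    return (SUCCESS, None)

_upgrade_checks = (
    ('Policy File JSON to YAML Migration', check_policy_file),
)

conf = {'policy_file': 'policy.json'}
# Run every check and keep the worst code as the overall exit status.
results = [(name, func(conf)) for name, func in _upgrade_checks]
worst = max(code for _, (code, _detail) in results)
```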


@@ -1,34 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""WSGI script for Ironic Inspector API, installed by pbr."""
import sys
from oslo_config import cfg
from ironic_inspector.common.i18n import _
from ironic_inspector.common import service_utils
from ironic_inspector import main
CONF = cfg.CONF
def initialize_wsgi_app():
# Parse config file and command line options, then start logging
service_utils.prepare_service(sys.argv[1:])
if CONF.standalone:
msg = _('To run ironic-inspector-api, [DEFAULT]standalone should be '
'set to False.')
sys.exit(msg)
return main.get_app()


@@ -1,204 +0,0 @@
# Copyright 2020 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
import binascii
import logging
import bcrypt
import webob
from ironic_inspector.common import exception
from ironic_inspector.common.i18n import _
LOG = logging.getLogger(__name__)
class BasicAuthMiddleware(object):
"""Middleware which performs HTTP basic authentication on requests
"""
def __init__(self, app, auth_file):
self.app = app
self.auth_file = auth_file
validate_auth_file(auth_file)
def format_exception(self, e):
result = {'error': {'message': str(e), 'code': e.code}}
headers = list(e.headers.items()) + [
('Content-Type', 'application/json')
]
return webob.Response(content_type='application/json',
status_code=e.code,
json_body=result,
headerlist=headers)
def __call__(self, env, start_response):
try:
token = parse_header(env)
username, password = parse_token(token)
env.update(authenticate(self.auth_file, username, password))
return self.app(env, start_response)
except exception.IronicException as e:
response = self.format_exception(e)
return response(env, start_response)
def authenticate(auth_file, username, password):
"""Finds username and password match in Apache style user auth file
The user auth file format is expected to comply with Apache
    documentation [1]; however, the bcrypt password digest is the *only*
digest format supported.
[1] https://httpd.apache.org/docs/current/misc/password_encryptions.html
:param: auth_file: Path to user auth file
:param: username: Username to authenticate
:param: password: Password encoded as bytes
:returns: A dictionary of WSGI environment values to append to the request
:raises: Unauthorized, if no file entries match supplied username/password
"""
line_prefix = username + ':'
try:
with open(auth_file, 'r') as f:
for line in f:
entry = line.strip()
if entry and entry.startswith(line_prefix):
return auth_entry(entry, password)
except OSError as exc:
LOG.error('Problem reading auth user file: %s', exc)
raise exception.ConfigInvalid(
error_msg=_('Problem reading auth user file'))
# reached end of file with no matches
LOG.info('User %s not found', username)
unauthorized()
def auth_entry(entry, password):
"""Compare a password with a single user auth file entry
:param: entry: Line from auth user file to use for authentication
:param: password: Password encoded as bytes
:returns: A dictionary of WSGI environment values to append to the request
:raises: Unauthorized, if the entry doesn't match supplied password or
if the entry is crypted with a method other than bcrypt
"""
username, crypted = parse_entry(entry)
if not bcrypt.checkpw(password, crypted):
LOG.info('Password for %s does not match', username)
unauthorized()
return {
'HTTP_X_USER': username,
'HTTP_X_USER_NAME': username
}
def validate_auth_file(auth_file):
"""Read the auth user file and validate its correctness
:param: auth_file: Path to user auth file
:raises: ConfigInvalid on validation error
"""
try:
with open(auth_file, 'r') as f:
for line in f:
entry = line.strip()
if entry and ':' in entry:
parse_entry(entry)
except OSError:
raise exception.ConfigInvalid(
error_msg=_('Problem reading auth user file: %s') % auth_file)
def parse_entry(entry):
"""Extrace the username and crypted password from a user auth file entry
:param: entry: Line from auth user file to use for authentication
:returns: a tuple of username and crypted password
:raises: ConfigInvalid if the password is not in the supported bcrypt
format
"""
username, crypted_str = entry.split(':', maxsplit=1)
crypted = crypted_str.encode('utf-8')
if crypted[:4] not in (b'$2y$', b'$2a$', b'$2b$'):
error_msg = _('Only bcrypt digested passwords are supported for '
'%(username)s') % {'username': username}
raise exception.ConfigInvalid(error_msg=error_msg)
return username, crypted
def parse_token(token):
"""Parse the token portion of the Authentication header value
:param: token: Token value from basic authorization header
:returns: tuple of username, password
:raises: Unauthorized, if username and password could not be parsed for any
reason
"""
try:
if isinstance(token, str):
token = token.encode('utf-8')
auth_pair = base64.b64decode(token, validate=True)
(username, password) = auth_pair.split(b':', maxsplit=1)
return (username.decode('utf-8'), password)
except (TypeError, binascii.Error, ValueError) as exc:
LOG.info('Could not decode authorization token: %s', exc)
raise exception.BadRequest(_('Could not decode authorization token'))
def parse_header(env):
"""Parse WSGI environment for Authorization header of type Basic
:param: env: WSGI environment to get header from
:returns: Token portion of the header value
:raises: Unauthorized, if header is missing or if the type is not Basic
"""
try:
auth_header = env.pop('HTTP_AUTHORIZATION')
except KeyError:
LOG.info('No authorization token received')
unauthorized(_('Authorization required'))
try:
auth_type, token = auth_header.strip().split(maxsplit=1)
except (ValueError, AttributeError) as exc:
LOG.info('Could not parse Authorization header: %s', exc)
raise exception.BadRequest(_('Could not parse Authorization header'))
if auth_type.lower() != 'basic':
msg = _('Unsupported authorization type "%s"') % auth_type
LOG.info(msg)
raise exception.BadRequest(msg)
return token
def unauthorized(message=None):
"""Raise an Unauthorized exception to prompt for basic authentication
    :param: message: Optional message for exception
:raises: Unauthorized with WWW-Authenticate header set
"""
if not message:
message = _('Incorrect username or password')
raise exception.Unauthorized(message)
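The `parse_header`/`parse_token` pair above splits the Authorization header, base64-decodes the token, and splits on the first colon, keeping the password as bytes for `bcrypt.checkpw`. A condensed stdlib-only sketch of that parsing (error handling reduced to `ValueError` for brevity; not the module's exception types):

```python
import base64

def parse_basic_token(header_value):
    # Split "Basic <token>", then base64-decode and split on the first colon.
    auth_type, _, token = header_value.strip().partition(' ')
    if auth_type.lower() != 'basic':
        raise ValueError('Unsupported authorization type %r' % auth_type)
    pair = base64.b64decode(token.encode('utf-8'), validate=True)
    username, _, password = pair.partition(b':')
    # The password stays as bytes, ready for bcrypt comparison.
    return username.decode('utf-8'), password

token = base64.b64encode(b'myuser:s3cret').decode('ascii')
user, pw = parse_basic_token('Basic ' + token)
```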


@@ -1,45 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_context import context
class RequestContext(context.RequestContext):
"""Extends security contexts from the oslo.context library."""
def __init__(self, is_public_api=False, **kwargs):
"""Initialize the RequestContext
:param is_public_api: Specifies whether the request should be processed
without authentication.
:param kwargs: additional arguments passed to oslo.context.
"""
super(RequestContext, self).__init__(**kwargs)
self.is_public_api = is_public_api
def to_policy_values(self):
policy_values = super(RequestContext, self).to_policy_values()
policy_values.update({'is_public_api': self.is_public_api})
return policy_values
@classmethod
def from_dict(cls, values, **kwargs):
kwargs.setdefault('is_public_api', values.get('is_public_api', False))
return super(RequestContext, RequestContext).from_dict(values,
**kwargs)
@classmethod
def from_environ(cls, environ, **kwargs):
kwargs.setdefault('is_public_api', environ.get('is_public_api', False))
return super(RequestContext, RequestContext).from_environ(environ,
**kwargs)


@@ -1,172 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log
import tooz
from tooz import coordination
from ironic_inspector import utils
CONF = cfg.CONF
LOG = log.getLogger(__name__)
COORDINATION_PREFIX = 'ironic_inspector'
COORDINATION_GROUP_NAME = '.'.join([COORDINATION_PREFIX, 'service_group'])
LOCK_PREFIX = 'ironic_inspector.'
class Coordinator(object):
"""Tooz coordination wrapper."""
group_name = COORDINATION_GROUP_NAME.encode('ascii')
lock_prefix = LOCK_PREFIX
def __init__(self, prefix=None):
"""Creates a coordinator instance for service coordination.
:param prefix: The prefix to be part of the member id of the service.
Different types of services on the same host should use
different prefix to work properly.
"""
self.coordinator = None
self.started = False
self.prefix = prefix if prefix else 'default'
self.is_leader = False
self.supports_election = True
def start(self, heartbeat=True):
"""Start coordinator.
:param heartbeat: Whether spawns a new thread to keep heartbeating with
the tooz backend. Unless there is periodic task to
do heartbeat manually, it should be always set to
True.
"""
if self.started:
return
member_id = '.'.join([COORDINATION_PREFIX, self.prefix,
CONF.host]).encode('ascii')
self.coordinator = coordination.get_coordinator(
CONF.coordination.backend_url, member_id)
self.coordinator.start(start_heart=heartbeat)
self.started = True
LOG.debug('Coordinator started successfully.')
def stop(self):
"""Disconnect from coordination backend and stop heartbeat."""
if self.started:
try:
self.coordinator.stop()
except Exception as e:
LOG.error('Failed to stop coordinator: %s', e)
self.coordinator = None
self.started = False
LOG.debug('Coordinator stopped successfully')
def _validate_state(self):
if not self.started:
raise utils.Error('Coordinator should be started before '
'executing coordination actions.')
def _create_group(self):
try:
request = self.coordinator.create_group(self.group_name)
request.get()
except coordination.GroupAlreadyExist:
LOG.debug('Group %s already exists.', self.group_name)
def _join_election(self):
self.is_leader = False
def _when_elected(event):
LOG.info('This conductor instance is a group leader now.')
self.is_leader = True
try:
self.coordinator.watch_elected_as_leader(
self.group_name, _when_elected)
self.coordinator.run_elect_coordinator()
except tooz.NotImplemented:
LOG.warning('The coordination backend does not support leader '
'elections, assuming we are a leader. This is '
'deprecated, please use a supported backend.')
self.is_leader = True
self.supports_election = False
def join_group(self):
"""Join service group."""
self._validate_state()
try:
request = self.coordinator.join_group(self.group_name)
request.get()
except coordination.GroupNotCreated:
self._create_group()
request = self.coordinator.join_group(self.group_name)
request.get()
except coordination.MemberAlreadyExist:
pass
self._join_election()
LOG.debug('Joined group %s', self.group_name)
def leave_group(self):
"""Leave service group"""
self._validate_state()
try:
request = self.coordinator.leave_group(self.group_name)
request.get()
LOG.debug('Left group %s', self.group_name)
except coordination.MemberNotJoined:
LOG.debug('Leaving a non-existing group.')
def get_members(self):
"""Get members in the service group."""
self._validate_state()
try:
result = self.coordinator.get_members(self.group_name)
return result.get()
except coordination.GroupNotCreated:
# If the group does not exist, there should be no members in it.
return set()
def get_lock(self, uuid):
"""Get lock for node uuid."""
self._validate_state()
lock_name = (self.lock_prefix + uuid).encode('ascii')
return self.coordinator.get_lock(lock_name)
def run_elect_coordinator(self):
"""Trigger a new leader election."""
if self.supports_election:
LOG.debug('Starting leader election')
self.coordinator.run_elect_coordinator()
LOG.debug('Finished leader election')
else:
LOG.warning('The coordination backend does not support leader '
'elections, assuming we are a leader. This is '
'deprecated, please use a supported backend.')
self.is_leader = True
_COORDINATOR = None
@lockutils.synchronized('inspector_coordinator')
def get_coordinator(prefix=None):
global _COORDINATOR
if _COORDINATOR is None:
_COORDINATOR = Coordinator(prefix=prefix)
return _COORDINATOR


@@ -1,322 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import logging
import re
from urllib import parse as urlparse
from oslo_utils import specs_matcher
from oslo_utils import strutils
from oslo_utils import units
from ironic_inspector.common.i18n import _
LOG = logging.getLogger(__name__)
# A dictionary in the form {hint name: hint type}
VALID_ROOT_DEVICE_HINTS = {
'size': int, 'model': str, 'wwn': str, 'serial': str, 'vendor': str,
'wwn_with_extension': str, 'wwn_vendor_extension': str, 'name': str,
'rotational': bool, 'hctl': str, 'by_path': str,
}
ROOT_DEVICE_HINTS_GRAMMAR = specs_matcher.make_grammar()
def _extract_hint_operator_and_values(hint_expression, hint_name):
"""Extract the operator and value(s) of a root device hint expression.
A root device hint expression could contain one or more values
depending on the operator. This method extracts the operator and
value(s) and returns a dictionary containing both.
:param hint_expression: The hint expression string containing value(s)
and operator (optionally).
:param hint_name: The name of the hint. Used for logging.
:raises: ValueError if the hint_expression is empty.
:returns: A dictionary containing:
:op: The operator. An empty string in case of None.
:values: A list of values stripped and converted to lowercase.
"""
expression = str(hint_expression).strip().lower()
if not expression:
raise ValueError(
_('Root device hint "%s" expression is empty') % hint_name)
    # parseString() returns a list of tokens in which the operator (if
    # present) is always the first element.
ast = ROOT_DEVICE_HINTS_GRAMMAR.parseString(expression)
if len(ast) <= 1:
# hint_expression had no operator
return {'op': '', 'values': [expression]}
op = ast[0]
return {'values': [v.strip() for v in re.split(op, expression) if v],
'op': op}
def _normalize_hint_expression(hint_expression, hint_name):
"""Normalize a string type hint expression.
A string-type hint expression contains one or more operators and
one or more values: [<op>] <value> [<op> <value>]*. This normalizes
the values by url-encoding white spaces and special characters. The
operators are not normalized. For example: the hint value of "<or>
foo bar <or> bar" will become "<or> foo%20bar <or> bar".
:param hint_expression: The hint expression string containing value(s)
and operator (optionally).
:param hint_name: The name of the hint. Used for logging.
:raises: ValueError if the hint_expression is empty.
:returns: A normalized string.
"""
hdict = _extract_hint_operator_and_values(hint_expression, hint_name)
result = hdict['op'].join([' %s ' % urlparse.quote(t)
for t in hdict['values']])
return (hdict['op'] + result).strip()
def _append_operator_to_hints(root_device):
"""Add an equal (s== or ==) operator to the hints.
For backwards compatibility, for root device hints where no operator
means equal, this method adds the equal operator to the hint. This is
needed when using oslo.utils.specs_matcher methods.
:param root_device: The root device hints dictionary.
"""
for name, expression in root_device.items():
# NOTE(lucasagomes): The specs_matcher from oslo.utils does not
# support boolean, so we don't need to append any operator
# for it.
if VALID_ROOT_DEVICE_HINTS[name] is bool:
continue
expression = str(expression)
ast = ROOT_DEVICE_HINTS_GRAMMAR.parseString(expression)
if len(ast) > 1:
continue
op = 's== %s' if VALID_ROOT_DEVICE_HINTS[name] is str else '== %s'
root_device[name] = op % expression
return root_device
def parse_root_device_hints(root_device):
"""Parse the root_device property of a node.
Parses and validates the root_device property of a node. These are
hints for how a node's root device is created. The 'size' hint
should be a positive integer. The 'rotational' hint should be a
Boolean value.
:param root_device: the root_device dictionary from the node's property.
:returns: a dictionary with the root device hints parsed or
None if there are no hints.
:raises: ValueError, if some information is invalid.
"""
if not root_device:
return
root_device = copy.deepcopy(root_device)
invalid_hints = set(root_device) - set(VALID_ROOT_DEVICE_HINTS)
if invalid_hints:
raise ValueError(
_('The hints "%(invalid_hints)s" are invalid. '
'Valid hints are: "%(valid_hints)s"') %
{'invalid_hints': ', '.join(invalid_hints),
'valid_hints': ', '.join(VALID_ROOT_DEVICE_HINTS)})
for name, expression in root_device.items():
hint_type = VALID_ROOT_DEVICE_HINTS[name]
if hint_type is str:
if not isinstance(expression, str):
raise ValueError(
_('Root device hint "%(name)s" is not a string value. '
'Hint expression: %(expression)s') %
{'name': name, 'expression': expression})
root_device[name] = _normalize_hint_expression(expression, name)
elif hint_type is int:
for v in _extract_hint_operator_and_values(expression,
name)['values']:
try:
integer = int(v)
except ValueError:
raise ValueError(
_('Root device hint "%(name)s" is not an integer '
'value. Current value: %(expression)s') %
{'name': name, 'expression': expression})
if integer <= 0:
raise ValueError(
_('Root device hint "%(name)s" should be a positive '
'integer. Current value: %(expression)s') %
{'name': name, 'expression': expression})
elif hint_type is bool:
try:
root_device[name] = strutils.bool_from_string(
expression, strict=True)
except ValueError:
raise ValueError(
_('Root device hint "%(name)s" is not a Boolean value. '
'Current value: %(expression)s') %
{'name': name, 'expression': expression})
return _append_operator_to_hints(root_device)
def find_devices_by_hints(devices, root_device_hints):
"""Find all devices that match the root device hints.
Try to find devices that match the root device hints. In order
for a device to be matched it needs to satisfy all the given hints.
:param devices: A list of dictionaries representing the devices
containing one or more of the following keys:
:name: (String) The device name, e.g /dev/sda
:size: (Integer) Size of the device in *bytes*
:model: (String) Device model
:vendor: (String) Device vendor name
:serial: (String) Device serial number
:wwn: (String) Unique storage identifier
:wwn_with_extension: (String): Unique storage identifier with
the vendor extension appended
        :wwn_vendor_extension: (String): Unique vendor storage identifier
:rotational: (Boolean) Whether it's a rotational device or
not. Useful to distinguish HDDs (rotational) and SSDs
(not rotational).
:hctl: (String): The SCSI address: Host, channel, target and lun.
For example: '1:0:0:0'.
:by_path: (String): The alternative device name,
e.g. /dev/disk/by-path/pci-0000:00
:param root_device_hints: A dictionary with the root device hints.
:raises: ValueError, if some information is invalid.
:returns: A generator with all matching devices as dictionaries.
"""
LOG.debug('Trying to find devices from "%(devs)s" that match the '
'device hints "%(hints)s"',
{'devs': ', '.join([d.get('name') for d in devices]),
'hints': root_device_hints})
parsed_hints = parse_root_device_hints(root_device_hints)
for dev in devices:
device_name = dev.get('name')
for hint in parsed_hints:
hint_type = VALID_ROOT_DEVICE_HINTS[hint]
device_value = dev.get(hint)
hint_value = parsed_hints[hint]
if hint_type is str:
try:
device_value = _normalize_hint_expression(device_value,
hint)
except ValueError:
LOG.warning(
'The attribute "%(attr)s" of the device "%(dev)s" '
'has an empty value. Skipping device.',
{'attr': hint, 'dev': device_name})
break
if hint == 'size':
# Since we don't support units yet we expect the size
# in GiB for now
device_value = device_value / units.Gi
LOG.debug('Trying to match the device hint "%(hint)s" '
'with a value of "%(hint_value)s" against the same '
'device\'s (%(dev)s) attribute with a value of '
'"%(dev_value)s"', {'hint': hint, 'dev': device_name,
'hint_value': hint_value,
'dev_value': device_value})
# NOTE(lucasagomes): Boolean hints are not supported by
# specs_matcher.match(), so we need to do the comparison
# ourselves
if hint_type is bool:
try:
device_value = strutils.bool_from_string(device_value,
strict=True)
except ValueError:
LOG.warning('The attribute "%(attr)s" (with value '
'"%(value)s") of device "%(dev)s" is not '
'a valid Boolean. Skipping device.',
{'attr': hint, 'value': device_value,
'dev': device_name})
break
if device_value == hint_value:
continue
elif specs_matcher.match(device_value, hint_value):
continue
LOG.debug('The attribute "%(attr)s" (with value "%(value)s") '
'of device "%(dev)s" does not match the hint %(hint)s',
{'attr': hint, 'value': device_value,
'dev': device_name, 'hint': hint_value})
break
else:
yield dev
def match_root_device_hints(devices, root_device_hints):
"""Try to find a device that matches the root device hints.
Try to find a device that matches the root device hints. In order
for a device to be matched it needs to satisfy all the given hints.
:param devices: A list of dictionaries representing the devices
containing one or more of the following keys:
:name: (String) The device name, e.g /dev/sda
:size: (Integer) Size of the device in *bytes*
:model: (String) Device model
:vendor: (String) Device vendor name
:serial: (String) Device serial number
:wwn: (String) Unique storage identifier
:wwn_with_extension: (String): Unique storage identifier with
the vendor extension appended
        :wwn_vendor_extension: (String): Unique vendor storage identifier
:rotational: (Boolean) Whether it's a rotational device or
not. Useful to distinguish HDDs (rotational) and SSDs
(not rotational).
:hctl: (String): The SCSI address: Host, channel, target and lun.
For example: '1:0:0:0'.
:by_path: (String): The alternative device name,
e.g. /dev/disk/by-path/pci-0000:00
:param root_device_hints: A dictionary with the root device hints.
:raises: ValueError, if some information is invalid.
:returns: The first device to match all the hints or None.
"""
try:
dev = next(find_devices_by_hints(devices, root_device_hints))
except StopIteration:
LOG.warning('No device found that matches the root device hints %s',
root_device_hints)
else:
LOG.info('Root device found! The device "%s" matches the root '
'device hints %s', dev, root_device_hints)
return dev
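The operator extraction above relies on the pyparsing grammar provided by oslo.utils' `specs_matcher`. The same idea — detect a leading operator, otherwise treat the expression as a bare value — can be illustrated with a hard-coded operator list (a subset chosen for illustration only, not the full grammar):

```python
# A subset of the operators specs_matcher recognises, longest first so
# that '>=' is matched before '>'.
_OPS = ('<or>', 's==', 's!=', '==', '!=', '<=', '>=', '<', '>')

def extract_operator_and_values(hint_expression):
    expression = str(hint_expression).strip().lower()
    for op in _OPS:
        if expression.startswith(op):
            # Split on the operator and keep the non-empty value tokens.
            values = [v.strip() for v in expression.split(op) if v.strip()]
            return {'op': op, 'values': values}
    # No operator: the whole expression is a single bare value.
    return {'op': '', 'values': [expression]}

hinted = extract_operator_and_values('<or> sda <or> sdb')
plain = extract_operator_and_values('42')
```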


@@ -1,155 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Ironic base exception handling.
Includes decorator for re-raising Ironic-type exceptions.
SHOULD include dedicated exception logging.
"""
import collections
from http import client as http_client
import json
import logging
from oslo_config import cfg
from oslo_utils import excutils
from ironic_inspector.common.i18n import _
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
def _ensure_exception_kwargs_serializable(exc_class_name, kwargs):
"""Ensure that kwargs are serializable
Ensure that all kwargs passed to exception constructor can be passed over
RPC, by trying to convert them to JSON, or, as a last resort, to string.
    If it is not possible, unserializable kwargs will be removed, letting the
    receiver handle the exception string as it is configured to.
:param exc_class_name: a IronicException class name.
:param kwargs: a dictionary of keyword arguments passed to the exception
constructor.
:returns: a dictionary of serializable keyword arguments.
"""
serializers = [(json.dumps, _('when converting to JSON')),
(str, _('when converting to string'))]
exceptions = collections.defaultdict(list)
serializable_kwargs = {}
for k, v in kwargs.items():
for serializer, msg in serializers:
try:
serializable_kwargs[k] = serializer(v)
exceptions.pop(k, None)
break
except Exception as e:
exceptions[k].append(
'(%(serializer_type)s) %(e_type)s: %(e_contents)s' %
{'serializer_type': msg, 'e_contents': e,
'e_type': e.__class__.__name__})
if exceptions:
LOG.error("One or more arguments passed to the %(exc_class)s "
"constructor as kwargs can not be serialized. The "
"serialized arguments: %(serialized)s. These "
"unserialized kwargs were dropped because of the "
"exceptions encountered during their "
"serialization:\n%(errors)s",
dict(errors=';\n'.join("%s: %s" % (k, '; '.join(v))
for k, v in exceptions.items()),
exc_class=exc_class_name,
serialized=serializable_kwargs))
# We might be able to put the following keys' values into the
# format string, but there is no guarantee; drop them just in case.
for k in exceptions:
del kwargs[k]
return serializable_kwargs
class IronicException(Exception):
"""Base Ironic Exception
To correctly use this class, inherit from it and define
a '_msg_fmt' property. That _msg_fmt will get printf'd
with the keyword arguments provided to the constructor.
If you need to access the message from an exception you should use
str(exc)
"""
_msg_fmt = _("An unknown exception occurred.")
code = 500
headers = {}
safe = False
def __init__(self, message=None, **kwargs):
self.kwargs = _ensure_exception_kwargs_serializable(
self.__class__.__name__, kwargs)
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError:
pass
else:
self.code = int(kwargs['code'])
if not message:
try:
message = self._msg_fmt % kwargs
except Exception:
with excutils.save_and_reraise_exception() as ctxt:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
prs = ', '.join('%s=%s' % pair for pair in kwargs.items())
LOG.exception('Exception in string format operation '
'(arguments %s)', prs)
if not CONF.exception.fatal_exception_format_errors:
# at least get the core message out if something
# happened
message = self._msg_fmt
ctxt.reraise = False
super(IronicException, self).__init__(message)
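The `_msg_fmt` contract can be seen in a minimal, dependency-free re-creation (the class names `DemoException` and `NodeLocked` are illustrative, not part of ironic; the oslo logging and config handling are omitted):

```python
class DemoException(Exception):
    """Minimal re-creation of the _msg_fmt pattern, without oslo."""
    _msg_fmt = "An unknown exception occurred."

    def __init__(self, message=None, **kwargs):
        if not message:
            try:
                message = self._msg_fmt % kwargs
            except Exception:
                # kwargs don't match the format string: fall back to the
                # raw template, as the non-fatal branch above does.
                message = self._msg_fmt
        super().__init__(message)


class NodeLocked(DemoException):
    _msg_fmt = "Node %(node)s is locked by host %(host)s."


err = NodeLocked(node="1be26c0b", host="conductor-1")
print(str(err))
```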
class ServiceLookupFailure(IronicException):
_msg_fmt = _("Cannot find %(service)s service through multicast")
class ServiceRegistrationFailure(IronicException):
_msg_fmt = _("Cannot register %(service)s service: %(error)s")
class BadRequest(IronicException):
code = http_client.BAD_REQUEST
class Unauthorized(IronicException):
code = http_client.UNAUTHORIZED
headers = {'WWW-Authenticate': 'Basic realm="Baremetal API"'}
class ConfigInvalid(IronicException):
_msg_fmt = _("Invalid configuration file. %(error_msg)s")


@@ -1,21 +0,0 @@
# Copyright 2015 NEC Corporation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='ironic_inspector')
# The primary translation function using the well-known name "_"
_ = _translators.primary


@@ -1,306 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
import urllib
import netaddr
import openstack
from openstack import exceptions as os_exc
from oslo_config import cfg
from oslo_utils import excutils
import tenacity
from ironic_inspector.common.i18n import _
from ironic_inspector.common import keystone
from ironic_inspector import utils
CONF = cfg.CONF
LOG = utils.getProcessingLogger(__name__)
# See https://docs.openstack.org/ironic/latest/contributor/states.html
VALID_STATES = frozenset(['enroll', 'manageable', 'inspecting', 'inspect wait',
'inspect failed'])
# States where an instance is deployed and an admin may be doing something.
VALID_ACTIVE_STATES = frozenset(['active', 'rescue'])
_IRONIC_SESSION = None
_CONNECTION = None
class NotFound(utils.Error):
"""Node not found in Ironic."""
def __init__(self, node_ident, code=404, *args, **kwargs):
msg = _('Node %s was not found in Ironic') % node_ident
super(NotFound, self).__init__(msg, code, *args, **kwargs)
def _get_ironic_session():
global _IRONIC_SESSION
if not _IRONIC_SESSION:
_IRONIC_SESSION = keystone.get_session('ironic')
return _IRONIC_SESSION
def get_client(token=None):
"""Get an ironic client connection."""
global _CONNECTION
if _CONNECTION is None:
try:
session = _get_ironic_session()
_CONNECTION = openstack.connection.Connection(
session=session, oslo_conf=CONF)
except Exception as exc:
LOG.error('Failed to create an openstack connection: %s', exc)
raise
try:
return _CONNECTION.baremetal
except Exception as exc:
with excutils.save_and_reraise_exception():
LOG.error('Failed to connect to Ironic: %s', exc)
# Force creating a new connection on the next retry
try:
_CONNECTION.close()
except Exception as exc2:
LOG.error('Unable to close an openstack connection, '
'a memory leak is possible. Error: %s', exc2)
_CONNECTION = None
def reset_ironic_session():
"""Reset the global session variable.
Mostly useful for unit tests.
"""
global _IRONIC_SESSION, _CONNECTION
_CONNECTION = _IRONIC_SESSION = None
def get_ipmi_address(node):
"""Get the BMC address defined in node.driver_info dictionary
Possible names of the BMC address value are examined in the order
['ipmi_address'] + CONF.ipmi_address_fields. The value may be
an IP address or a hostname; a DNS lookup is performed for the
first non-empty value.
The first valid BMC address value is returned along with
its IPv4 and IPv6 addresses.
:param node: Node object with defined driver_info dictionary
:return: tuple (ipmi_address, ipv4_address, ipv6_address)
"""
none_address = None, None, None
ipmi_fields = ['ipmi_address'] + CONF.ipmi_address_fields
# NOTE(sambetts): IPMI Address is useless to us if bridging is enabled so
# just ignore it and return None
if node.driver_info.get("ipmi_bridging", "no") != "no":
return none_address
for name in ipmi_fields:
value = node.driver_info.get(name)
if not value:
continue
ipv4 = None
ipv6 = None
if '//' in value:
url = urllib.parse.urlparse(value)
value = url.hostname
# Strip brackets in case used on IPv6 address.
value = value.strip('[').strip(']')
try:
addrinfo = socket.getaddrinfo(value, None, 0, 0, socket.SOL_TCP)
for family, socket_type, proto, canon_name, sockaddr in addrinfo:
ip = sockaddr[0]
if netaddr.IPAddress(ip).is_loopback():
LOG.warning('Ignoring loopback BMC address %s', ip,
node_info=node)
elif family == socket.AF_INET:
ipv4 = ip
elif family == socket.AF_INET6:
ipv6 = ip
except socket.gaierror:
LOG.warning('Failed to resolve the hostname (%s)'
' for node %s', value, node.id, node_info=node)
return (value, ipv4, ipv6) if ipv4 or ipv6 else none_address
return none_address
def check_provision_state(node):
"""Sanity checks the provision state of the node.
:param node: An API client returned node object describing
the baremetal node according to ironic's node
data model.
:returns: None if no action is to be taken, True if the
node's power state should not be modified.
:raises: Error on an invalid state being detected.
"""
state = node.provision_state.lower()
if state not in VALID_STATES:
if (CONF.processing.permit_active_introspection
and state in VALID_ACTIVE_STATES):
# Hey, we can leave the power on! Let's return
# True to let the caller know.
return True
msg = _('Invalid provision state for introspection: '
'"%(state)s", valid states are "%(valid)s"')
raise utils.Error(msg % {'state': state,
'valid': list(VALID_STATES)},
node_info=node)
def capabilities_to_dict(caps):
"""Convert the Node's capabilities into a dictionary."""
if not caps:
return {}
return dict([key.split(':', 1) for key in caps.split(',')])
def dict_to_capabilities(caps_dict):
"""Convert a dictionary into a string with the capabilities syntax."""
return ','.join(["%s:%s" % (key, value)
for key, value in caps_dict.items()
if value is not None])
def get_node(node_id, ironic=None, **kwargs):
"""Get a node from Ironic.
:param node_id: node UUID or name.
:param ironic: ironic client instance.
:param kwargs: arguments to pass to Ironic client.
:raises: Error on failure
"""
ironic = ironic if ironic is not None else get_client()
try:
node = ironic.get_node(node_id, **kwargs)
except os_exc.ResourceNotFound:
raise NotFound(node_id)
except os_exc.BadRequestException as exc:
raise utils.Error(_("Cannot get node %(node)s: %(exc)s") %
{'node': node_id, 'exc': exc})
return node
@tenacity.retry(
retry=tenacity.retry_if_exception_type(os_exc.SDKException),
stop=tenacity.stop_after_attempt(5),
wait=tenacity.wait_fixed(1),
reraise=True)
def call_with_retries(func, *args, **kwargs):
"""Call an ironic client function retrying all errors.
If an ironic client exception is raised, try calling the func again,
at most 5 times, waiting 1 sec between each call. If on the 5th attempt
the func raises again, the exception is propagated to the caller.
"""
return func(*args, **kwargs)
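A stdlib-only sketch of the same retry semantics (the real helper uses the tenacity decorator above, waits 1 second between tries, and retries only `os_exc.SDKException`; `flaky_port_list` is a hypothetical stand-in for an ironic client call):

```python
import time

def call_with_retries(func, *args, attempts=5, delay=0.01, **kwargs):
    # Call func, retry on any exception up to `attempts` times with
    # `delay` seconds between tries, re-raising the last error.
    for attempt in range(1, attempts + 1):
        try:
            return func(*args, **kwargs)
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

attempts_made = []

def flaky_port_list():
    # Fails twice, then succeeds, like a transient API error.
    attempts_made.append(1)
    if len(attempts_made) < 3:
        raise ConnectionError('transient')
    return ['port-a', 'port-b']

print(call_with_retries(flaky_port_list))
```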
def lookup_node_by_macs(macs, introspection_data=None,
ironic=None, fail=False):
"""Find a node by its MACs."""
if ironic is None:
ironic = get_client()
nodes = set()
for mac in macs:
ports = ironic.ports(address=mac, fields=["uuid", "node_uuid"])
ports = list(ports)
if not ports:
continue
elif fail:
raise utils.Error(
_('Port %(mac)s already exists, uuid: %(uuid)s') %
{'mac': mac, 'uuid': ports[0].id}, data=introspection_data)
else:
nodes.update(p.node_id for p in ports)
if len(nodes) > 1:
raise utils.Error(_('MAC addresses %(macs)s correspond to more than '
'one node: %(nodes)s') %
{'macs': ', '.join(macs),
'nodes': ', '.join(nodes)},
data=introspection_data)
elif nodes:
return nodes.pop()
def lookup_node_by_bmc_addresses(addresses, introspection_data=None,
ironic=None, fail=False):
"""Find a node by its BMC address."""
if ironic is None:
ironic = get_client()
# FIXME(aarefiev): it's not efficient to fetch all nodes, and doing
# so may impact performance on big clusters
# TODO(TheJulia): We should likely first loop through nodes being
# inspected, i.e. inspect wait, and then fallback
# to the rest of the physical nodes so we limit
# overall-impact of the operation.
nodes = ironic.nodes(fields=('uuid', 'driver_info'), limit=None)
found = set()
for node in nodes:
bmc_address, bmc_ipv4, bmc_ipv6 = get_ipmi_address(node)
for addr in addresses:
if addr not in (bmc_ipv4, bmc_ipv6):
continue
elif fail:
raise utils.Error(
_('Node %(uuid)s already has BMC address %(addr)s') %
{'addr': addr, 'uuid': node.id},
data=introspection_data)
else:
found.add(node.id)
if len(found) > 1:
raise utils.Error(_('BMC addresses %(addr)s correspond to more than '
'one node: %(nodes)s') %
{'addr': ', '.join(addresses),
'nodes': ', '.join(found)},
data=introspection_data)
elif found:
return found.pop()
def lookup_node(macs=None, bmc_addresses=None, introspection_data=None,
ironic=None):
"""Lookup a node in the ironic database."""
node = node2 = None
if macs:
node = lookup_node_by_macs(macs, ironic=ironic)
if bmc_addresses:
node2 = lookup_node_by_bmc_addresses(bmc_addresses, ironic=ironic)
if node and node2 and node != node2:
raise utils.Error(_('MAC addresses %(mac)s and BMC addresses %(addr)s '
'correspond to different nodes: %(node1)s and '
'%(node2)s') %
{'mac': ', '.join(macs),
'addr': ', '.join(bmc_addresses),
'node1': node, 'node2': node2})
return node or node2


@@ -1,77 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keystoneauth1 import loading
from oslo_config import cfg
CONF = cfg.CONF
DEFAULT_VALID_INTERFACES = ['internal', 'public']
# TODO(pas-ha) set default values in conf.opts.set_defaults()
def register_auth_opts(group, service_type):
loading.register_session_conf_options(CONF, group)
loading.register_auth_conf_options(CONF, group)
CONF.set_default('auth_type', default='password', group=group)
loading.register_adapter_conf_options(CONF, group)
CONF.set_default('valid_interfaces', DEFAULT_VALID_INTERFACES,
group=group)
CONF.set_default('service_type', service_type, group=group)
def get_session(group):
auth = loading.load_auth_from_conf_options(CONF, group)
session = loading.load_session_from_conf_options(
CONF, group, auth=auth)
return session
def get_adapter(group, **adapter_kwargs):
return loading.load_adapter_from_conf_options(CONF, group,
**adapter_kwargs)
# TODO(pas-ha) set default values in conf.opts.set_defaults()
def add_auth_options(options, service_type):
def add_options(opts, opts_to_add):
for new_opt in opts_to_add:
for opt in opts:
if opt.name == new_opt.name:
break
else:
opts.append(new_opt)
opts = copy.deepcopy(options)
opts.insert(0, loading.get_auth_common_conf_options()[0])
# NOTE(dims): There are a lot of auth plugins, we just generate
# the config options for a few common ones
plugins = ['password', 'v2password', 'v3password']
for name in plugins:
plugin = loading.get_plugin_loader(name)
add_options(opts, loading.get_auth_plugin_conf_options(plugin))
add_options(opts, loading.get_session_conf_options())
adapter_opts = loading.get_adapter_conf_options(
include_deprecated=False)
cfg.set_defaults(adapter_opts, service_type=service_type,
valid_interfaces=DEFAULT_VALID_INTERFACES)
add_options(opts, adapter_opts)
opts.sort(key=lambda x: x.name)
return opts
def get_endpoint(group, **kwargs):
return get_adapter(group, session=get_session(group)).get_endpoint(
**kwargs)


@@ -1,369 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Names and mapping functions used to map LLDP TLVs to name/value pairs """
import binascii
from construct import core
import netaddr
from ironic_inspector.common.i18n import _
from ironic_inspector.common import lldp_tlvs as tlv
from ironic_inspector import utils
LOG = utils.getProcessingLogger(__name__)
# Names used in name/value pair from parsed TLVs
LLDP_CHASSIS_ID_NM = 'switch_chassis_id'
LLDP_PORT_ID_NM = 'switch_port_id'
LLDP_PORT_DESC_NM = 'switch_port_description'
LLDP_SYS_NAME_NM = 'switch_system_name'
LLDP_SYS_DESC_NM = 'switch_system_description'
LLDP_SWITCH_CAP_NM = 'switch_capabilities'
LLDP_CAP_SUPPORT_NM = 'switch_capabilities_support'
LLDP_CAP_ENABLED_NM = 'switch_capabilities_enabled'
LLDP_MGMT_ADDRESSES_NM = 'switch_mgmt_addresses'
LLDP_PORT_VLANID_NM = 'switch_port_untagged_vlan_id'
LLDP_PORT_PROT_NM = 'switch_port_protocol'
LLDP_PORT_PROT_VLAN_ENABLED_NM = 'switch_port_protocol_vlan_enabled'
LLDP_PORT_PROT_VLAN_SUPPORT_NM = 'switch_port_protocol_vlan_support'
LLDP_PORT_PROT_VLANIDS_NM = 'switch_port_protocol_vlan_ids'
LLDP_PORT_VLANS_NM = 'switch_port_vlans'
LLDP_PROTOCOL_IDENTITIES_NM = 'switch_protocol_identities'
LLDP_PORT_MGMT_VLANID_NM = 'switch_port_management_vlan_id'
LLDP_PORT_LINK_AGG_NM = 'switch_port_link_aggregation'
LLDP_PORT_LINK_AGG_ENABLED_NM = 'switch_port_link_aggregation_enabled'
LLDP_PORT_LINK_AGG_SUPPORT_NM = 'switch_port_link_aggregation_support'
LLDP_PORT_LINK_AGG_ID_NM = 'switch_port_link_aggregation_id'
LLDP_PORT_MAC_PHY_NM = 'switch_port_mac_phy_config'
LLDP_PORT_LINK_AUTONEG_ENABLED_NM = 'switch_port_autonegotiation_enabled'
LLDP_PORT_LINK_AUTONEG_SUPPORT_NM = 'switch_port_autonegotiation_support'
LLDP_PORT_CAPABILITIES_NM = 'switch_port_physical_capabilities'
LLDP_PORT_MAU_TYPE_NM = 'switch_port_mau_type'
LLDP_MTU_NM = 'switch_port_mtu'
class LLDPParser(object):
"""Base class to handle parsing of LLDP TLVs
Each class that inherits from this base class must provide a parser map.
Parser maps are used to associate an LLDP TLV with a function handler
and arguments necessary to parse the TLV and generate one or more
name/value pairs. Each LLDP TLV maps to a tuple with the following
fields:
function - handler function to generate name/value pairs
construct - name of construct definition for TLV
name - user-friendly name of TLV. For TLVs that generate only one
name/value pair this is the name used
len_check - boolean indicating if length check should be done on construct
It's valid to have a function handler of None; this is for TLVs that
are not mapped to a name/value pair (e.g. LLDP_TLV_TTL).
"""
def __init__(self, node_info, nv=None):
"""Create LLDPParser
:param node_info - node being introspected
:param nv - dictionary of name/value pairs to use
"""
self.nv_dict = nv or {}
self.node_info = node_info
self.parser_map = {}
def set_value(self, name, value):
"""Set name value pair in dictionary
The value for a name should not be changed if it exists.
"""
self.nv_dict.setdefault(name, value)
def append_value(self, name, value):
"""Add value to a list mapped to name"""
self.nv_dict.setdefault(name, []).append(value)
def add_single_value(self, struct, name, data):
"""Add a single name/value pair to the nv dict"""
self.set_value(name, struct.value)
def add_nested_value(self, struct, name, data):
"""Add a single nested name/value pair to the dict"""
self.set_value(name, struct.value.value)
def parse_tlv(self, tlv_type, data):
"""Parse TLVs from mapping table
This function takes the TLV type and the raw data for
this TLV and gets a tuple from the parser_map. The
construct field in the tuple contains the construct lib
definition of the TLV which can be parsed to access
individual fields. Once the TLV is parsed, the handler
function for each TLV will store the individual fields as
name/value pairs in nv_dict.
If the handler function does not exist, then no name/value pairs
will be added to nv_dict, but since the TLV was handled,
True will be returned.
:param: tlv_type - type identifier for TLV
:param: data - raw TLV value
:returns: True if TLV in parser_map and data is valid, otherwise False.
"""
s = self.parser_map.get(tlv_type)
if not s:
return False
func = s[0] # handler
if not func:
return True # TLV is handled
try:
tlv_parser = s[1]
name = s[2]
check_len = s[3]
except IndexError as e:
LOG.warning("Malformed entry in TLV parser table: %s", e,
node_info=self.node_info)
return False
# Some constructs require a length validation to ensure the
# proper number of bytes has been provided, for example
# when a BitStruct is used.
if check_len and (tlv_parser.sizeof() != len(data)):
LOG.warning("Invalid data for %(name)s expected len %(expect)d, "
"got %(actual)d", {'name': name,
'expect': tlv_parser.sizeof(),
'actual': len(data)},
node_info=self.node_info)
return False
# Use the construct parser to parse the TLV so that its
# individual fields can be accessed
try:
struct = tlv_parser.parse(data)
except (core.ConstructError, netaddr.AddrFormatError) as e:
LOG.warning("TLV parse error: %s", e,
node_info=self.node_info)
return False
# Call functions with parsed structure
try:
func(struct, name, data)
except ValueError as e:
LOG.warning("TLV value error: %s", e,
node_info=self.node_info)
return False
return True
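The dispatch logic above can be sketched without the construct library (the handlers and TLV types here are simplified stand-ins; real entries parse binary structs and may length-check them first):

```python
def parse_tlv(parser_map, tlv_type, data, out):
    # Look up (handler, parser, name, check_len) for this TLV type.
    entry = parser_map.get(tlv_type)
    if not entry:
        return False            # unknown TLV type
    handler, parser, name, _check_len = entry
    if not handler:
        return True             # known TLV that stores nothing (e.g. TTL)
    handler(out, name, parser(data))
    return True

def store(out, name, value):
    # Like set_value(): keep the first value seen for a name.
    out.setdefault(name, value)

parser_map = {
    5: (store, lambda b: b.decode(), 'switch_system_name', False),
    3: (None, None, None, False),   # TTL: handled, nothing stored
}
nv = {}
print(parse_tlv(parser_map, 5, b'sw01', nv))
print(nv)
```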
def add_dot1_link_aggregation(self, struct, name, data):
"""Add name/value pairs for TLV Dot1_LinkAggregationId
This is in base class since it can be used by both dot1 and dot3.
"""
self.set_value(LLDP_PORT_LINK_AGG_ENABLED_NM,
struct.status.enabled)
self.set_value(LLDP_PORT_LINK_AGG_SUPPORT_NM,
struct.status.supported)
self.set_value(LLDP_PORT_LINK_AGG_ID_NM, struct.portid)
class LLDPBasicMgmtParser(LLDPParser):
"""Class to handle parsing of 802.1AB Basic Management set
This class will also handle 802.1Q and 802.3 OUI TLVs.
"""
def __init__(self, nv=None):
super(LLDPBasicMgmtParser, self).__init__(nv)
self.parser_map = {
tlv.LLDP_TLV_CHASSIS_ID:
(self.add_nested_value, tlv.ChassisId, LLDP_CHASSIS_ID_NM,
False),
tlv.LLDP_TLV_PORT_ID:
(self.add_nested_value, tlv.PortId, LLDP_PORT_ID_NM, False),
tlv.LLDP_TLV_TTL: (None, None, None, False),
tlv.LLDP_TLV_PORT_DESCRIPTION:
(self.add_single_value, tlv.PortDesc, LLDP_PORT_DESC_NM,
False),
tlv.LLDP_TLV_SYS_NAME:
(self.add_single_value, tlv.SysName, LLDP_SYS_NAME_NM, False),
tlv.LLDP_TLV_SYS_DESCRIPTION:
(self.add_single_value, tlv.SysDesc, LLDP_SYS_DESC_NM, False),
tlv.LLDP_TLV_SYS_CAPABILITIES:
(self.add_capabilities, tlv.SysCapabilities,
LLDP_SWITCH_CAP_NM, True),
tlv.LLDP_TLV_MGMT_ADDRESS:
(self.add_mgmt_address, tlv.MgmtAddress,
LLDP_MGMT_ADDRESSES_NM, False),
tlv.LLDP_TLV_ORG_SPECIFIC:
(self.handle_org_specific_tlv, tlv.OrgSpecific, None, False),
tlv.LLDP_TLV_END_LLDPPDU: (None, None, None, False)
}
def add_mgmt_address(self, struct, name, data):
"""Handle LLDP_TLV_MGMT_ADDRESS
There can be multiple Mgmt Address TLVs, store in list.
"""
if struct.address:
self.append_value(name, struct.address)
def _get_capabilities_list(self, caps):
"""Get capabilities from bit map"""
cap_map = [
(caps.repeater, 'Repeater'),
(caps.bridge, 'Bridge'),
(caps.wlan, 'WLAN'),
(caps.router, 'Router'),
(caps.telephone, 'Telephone'),
(caps.docsis, 'DOCSIS cable device'),
(caps.station, 'Station only'),
(caps.cvlan, 'C-Vlan'),
(caps.svlan, 'S-Vlan'),
(caps.tpmr, 'TPMR')]
return [cap for (bit, cap) in cap_map if bit]
def add_capabilities(self, struct, name, data):
"""Handle LLDP_TLV_SYS_CAPABILITIES"""
self.set_value(LLDP_CAP_SUPPORT_NM,
self._get_capabilities_list(struct.system))
self.set_value(LLDP_CAP_ENABLED_NM,
self._get_capabilities_list(struct.enabled))
def handle_org_specific_tlv(self, struct, name, data):
"""Handle Organizationally Unique ID TLVs
This class supports 802.1Q and 802.3 OUI TLVs.
See http://www.ieee802.org/1/pages/802.1Q-2014.html, Annex D
and http://standards.ieee.org/about/get/802/802.3.html
"""
oui = binascii.hexlify(struct.oui).decode()
subtype = struct.subtype
oui_data = data[4:]
if oui == tlv.LLDP_802dot1_OUI:
parser = LLDPdot1Parser(self.node_info, self.nv_dict)
if parser.parse_tlv(subtype, oui_data):
LOG.debug("Handled 802.1 subtype %d", subtype)
else:
LOG.debug("Subtype %d not found for 802.1", subtype)
elif oui == tlv.LLDP_802dot3_OUI:
parser = LLDPdot3Parser(self.node_info, self.nv_dict)
if parser.parse_tlv(subtype, oui_data):
LOG.debug("Handled 802.3 subtype %d", subtype)
else:
LOG.debug("Subtype %d not found for 802.3", subtype)
else:
LOG.warning("Organizationally Unique ID %s not "
"recognized", oui, node_info=self.node_info)
class LLDPdot1Parser(LLDPParser):
"""Class to handle parsing of 802.1Q TLVs"""
def __init__(self, node_info, nv=None):
super(LLDPdot1Parser, self).__init__(node_info, nv)
self.parser_map = {
tlv.dot1_PORT_VLANID:
(self.add_single_value, tlv.Dot1_UntaggedVlanId,
LLDP_PORT_VLANID_NM, False),
tlv.dot1_PORT_PROTOCOL_VLANID:
(self.add_dot1_port_protocol_vlan, tlv.Dot1_PortProtocolVlan,
LLDP_PORT_PROT_NM, True),
tlv.dot1_VLAN_NAME:
(self.add_dot1_vlans, tlv.Dot1_VlanName, None, False),
tlv.dot1_PROTOCOL_IDENTITY:
(self.add_dot1_protocol_identities, tlv.Dot1_ProtocolIdentity,
LLDP_PROTOCOL_IDENTITIES_NM, False),
tlv.dot1_MANAGEMENT_VID:
(self.add_single_value, tlv.Dot1_MgmtVlanId,
LLDP_PORT_MGMT_VLANID_NM, False),
tlv.dot1_LINK_AGGREGATION:
(self.add_dot1_link_aggregation, tlv.Dot1_LinkAggregationId,
LLDP_PORT_LINK_AGG_NM, True)
}
def add_dot1_port_protocol_vlan(self, struct, name, data):
"""Handle dot1_PORT_PROTOCOL_VLANID"""
self.set_value(LLDP_PORT_PROT_VLAN_ENABLED_NM, struct.flags.enabled)
self.set_value(LLDP_PORT_PROT_VLAN_SUPPORT_NM, struct.flags.supported)
# There can be multiple port/protocol vlans TLVs, store in list
self.append_value(LLDP_PORT_PROT_VLANIDS_NM, struct.vlanid)
def add_dot1_vlans(self, struct, name, data):
"""Handle dot1_VLAN_NAME
There can be multiple vlan TLVs, add dictionary entry with id/vlan
to list.
"""
vlan_dict = {}
vlan_dict['name'] = struct.vlan_name
vlan_dict['id'] = struct.vlanid
self.append_value(LLDP_PORT_VLANS_NM, vlan_dict)
def add_dot1_protocol_identities(self, struct, name, data):
"""Handle dot1_PROTOCOL_IDENTITY
There can be multiple protocol ids TLVs, store in list
"""
self.append_value(LLDP_PROTOCOL_IDENTITIES_NM,
binascii.b2a_hex(struct.protocol).decode())
class LLDPdot3Parser(LLDPParser):
"""Class to handle parsing of 802.3 TLVs"""
def __init__(self, node_info, nv=None):
super(LLDPdot3Parser, self).__init__(node_info, nv)
# Note that 802.3 link Aggregation has been deprecated and moved to
# 802.1 spec, but it is in the same format. Use the same function as
# dot1 handler.
self.parser_map = {
tlv.dot3_MACPHY_CONFIG_STATUS:
(self.add_dot3_macphy_config, tlv.Dot3_MACPhy_Config_Status,
LLDP_PORT_MAC_PHY_NM, True),
tlv.dot3_LINK_AGGREGATION:
(self.add_dot1_link_aggregation, tlv.Dot1_LinkAggregationId,
LLDP_PORT_LINK_AGG_NM, True),
tlv.dot3_MTU:
(self.add_single_value, tlv.Dot3_MTU, LLDP_MTU_NM, False)
}
def add_dot3_macphy_config(self, struct, name, data):
"""Handle dot3_MACPHY_CONFIG_STATUS"""
try:
mau_type = tlv.OPER_MAU_TYPES[struct.mau_type]
except KeyError:
raise ValueError(_('Invalid index for mau type'))
self.set_value(LLDP_PORT_LINK_AUTONEG_ENABLED_NM,
struct.autoneg.enabled)
self.set_value(LLDP_PORT_LINK_AUTONEG_SUPPORT_NM,
struct.autoneg.supported)
self.set_value(LLDP_PORT_CAPABILITIES_NM,
tlv.get_autoneg_cap(struct.pmd_autoneg))
self.set_value(LLDP_PORT_MAU_TYPE_NM, mau_type)


@@ -1,365 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Link Layer Discovery Protocol TLVs """
import functools
# See http://construct.readthedocs.io/en/latest/index.html
import construct
from construct import core
import netaddr
from ironic_inspector import utils
LOG = utils.getProcessingLogger(__name__)
# Constants defined according to 802.1AB-2016 LLDP spec
# https://standards.ieee.org/findstds/standard/802.1AB-2016.html
# TLV types
LLDP_TLV_END_LLDPPDU = 0
LLDP_TLV_CHASSIS_ID = 1
LLDP_TLV_PORT_ID = 2
LLDP_TLV_TTL = 3
LLDP_TLV_PORT_DESCRIPTION = 4
LLDP_TLV_SYS_NAME = 5
LLDP_TLV_SYS_DESCRIPTION = 6
LLDP_TLV_SYS_CAPABILITIES = 7
LLDP_TLV_MGMT_ADDRESS = 8
LLDP_TLV_ORG_SPECIFIC = 127
# 802.1Q defines from http://www.ieee802.org/1/pages/802.1Q-2014.html, Annex D
LLDP_802dot1_OUI = "0080c2"
# subtypes
dot1_PORT_VLANID = 1
dot1_PORT_PROTOCOL_VLANID = 2
dot1_VLAN_NAME = 3
dot1_PROTOCOL_IDENTITY = 4
dot1_MANAGEMENT_VID = 6
dot1_LINK_AGGREGATION = 7
# 802.3 defines from http://standards.ieee.org/about/get/802/802.3.html,
# section 79
LLDP_802dot3_OUI = "00120f"
# Subtypes
dot3_MACPHY_CONFIG_STATUS = 1
dot3_LINK_AGGREGATION = 3 # Deprecated, but still in use
dot3_MTU = 4
def bytes_to_int(obj):
"""Convert bytes to an integer
:param: obj - array of bytes
"""
return functools.reduce(lambda x, y: x << 8 | y, obj)
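For instance, the big-endian fold above packs the bytes of an IPv4 address into a single integer (the sample values are illustrative):

```python
import functools

def bytes_to_int(obj):
    # Big-endian fold: each byte shifts the accumulator left by 8 bits.
    return functools.reduce(lambda x, y: x << 8 | y, obj)

print(hex(bytes_to_int(b'\xc0\xa8\x00\x01')))  # bytes of 192.168.0.1
```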
def mapping_for_enum(mapping):
"""Return a dict of name -> code built from the tuple keys
:param: mapping - dict with (name, code) tuples as keys
"""
return dict(mapping.keys())
def mapping_for_switch(mapping):
"""Return a dict of name -> sub-parser built from the tuple keys
:param: mapping - dict with (name, code) tuples as keys
"""
return {key[0]: value for key, value in mapping.items()}
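With a trimmed copy of the two helpers, the double mapping becomes clear. The string values here stand in for the construct sub-parsers: the Enum decodes a raw family code into its name, and the Switch then uses that name as its case key.

```python
def mapping_for_enum(mapping):
    # name -> code, feeds core.Enum(...)
    return dict(mapping.keys())

def mapping_for_switch(mapping):
    # name -> sub-parser, feeds core.Switch(...)
    return {key[0]: value for key, value in mapping.items()}

ADDRESS_MAP = {('ipv4', 1): 'IPv4Address', ('ipv6', 2): 'IPv6Address'}
print(mapping_for_enum(ADDRESS_MAP))
print(mapping_for_switch(ADDRESS_MAP))
```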
IPv4Address = core.ExprAdapter(
core.Byte[4],
encoder=lambda obj, ctx: netaddr.IPAddress(obj).words,
decoder=lambda obj, ctx: str(netaddr.IPAddress(bytes_to_int(obj)))
)
IPv6Address = core.ExprAdapter(
core.Byte[16],
encoder=lambda obj, ctx: netaddr.IPAddress(obj).words,
decoder=lambda obj, ctx: str(netaddr.IPAddress(bytes_to_int(obj)))
)
MACAddress = core.ExprAdapter(
core.Byte[6],
encoder=lambda obj, ctx: netaddr.EUI(obj).words,
decoder=lambda obj, ctx: str(netaddr.EUI(bytes_to_int(obj),
dialect=netaddr.mac_unix_expanded))
)
IANA_ADDRESS_FAMILY_ID_MAPPING = {
('ipv4', 1): IPv4Address,
('ipv6', 2): IPv6Address,
('mac', 6): MACAddress,
}
IANAAddress = core.Struct(
'family' / core.Enum(core.Int8ub, **mapping_for_enum(
IANA_ADDRESS_FAMILY_ID_MAPPING)),
'value' / core.Switch(construct.this.family, mapping_for_switch(
IANA_ADDRESS_FAMILY_ID_MAPPING)))
# Note that 'GreedyString()' is used in cases where string len is not defined
CHASSIS_ID_MAPPING = {
('entPhysAlias_c', 1): core.Struct('value' / core.GreedyString("utf8")),
('ifAlias', 2): core.Struct('value' / core.GreedyString("utf8")),
('entPhysAlias_p', 3): core.Struct('value' / core.GreedyString("utf8")),
('mac_address', 4): core.Struct('value' / MACAddress),
('IANA_address', 5): IANAAddress,
('ifName', 6): core.Struct('value' / core.GreedyString("utf8")),
('local', 7): core.Struct('value' / core.GreedyString("utf8"))
}
#
# Basic Management Set TLV field definitions
#
# Chassis ID value is based on the subtype
ChassisId = core.Struct(
'subtype' / core.Enum(core.Byte, **mapping_for_enum(
CHASSIS_ID_MAPPING)),
'value' / core.Switch(construct.this.subtype,
mapping_for_switch(CHASSIS_ID_MAPPING))
)
PORT_ID_MAPPING = {
('ifAlias', 1): core.Struct('value' / core.GreedyString("utf8")),
('entPhysicalAlias', 2): core.Struct('value' / core.GreedyString("utf8")),
('mac_address', 3): core.Struct('value' / MACAddress),
('IANA_address', 4): IANAAddress,
('ifName', 5): core.Struct('value' / core.GreedyString("utf8")),
('local', 7): core.Struct('value' / core.GreedyString("utf8"))
}
# Port ID value is based on the subtype
PortId = core.Struct(
'subtype' / core.Enum(core.Byte, **mapping_for_enum(
PORT_ID_MAPPING)),
'value' / core.Switch(construct.this.subtype,
mapping_for_switch(PORT_ID_MAPPING))
)
PortDesc = core.Struct('value' / core.GreedyString("utf8"))
SysName = core.Struct('value' / core.GreedyString("utf8"))
SysDesc = core.Struct('value' / core.GreedyString("utf8"))
MgmtAddress = core.Struct(
'len' / core.Int8ub,
'family' / core.Enum(core.Int8ub, **mapping_for_enum(
IANA_ADDRESS_FAMILY_ID_MAPPING)),
'address' / core.Switch(construct.this.family, mapping_for_switch(
IANA_ADDRESS_FAMILY_ID_MAPPING))
)
Capabilities = core.BitStruct(
core.Padding(5),
'tpmr' / core.Bit,
'svlan' / core.Bit,
'cvlan' / core.Bit,
'station' / core.Bit,
'docsis' / core.Bit,
'telephone' / core.Bit,
'router' / core.Bit,
'wlan' / core.Bit,
'bridge' / core.Bit,
'repeater' / core.Bit,
core.Padding(1)
)
SysCapabilities = core.Struct(
'system' / Capabilities,
'enabled' / Capabilities
)
OrgSpecific = core.Struct(
'oui' / core.Bytes(3),
'subtype' / core.Int8ub
)
#
# 802.1Q TLV field definitions
# See http://www.ieee802.org/1/pages/802.1Q-2014.html, Annex D
#
Dot1_UntaggedVlanId = core.Struct('value' / core.Int16ub)
Dot1_PortProtocolVlan = core.Struct(
'flags' / core.BitStruct(
core.Padding(5),
'enabled' / core.Flag,
'supported' / core.Flag,
core.Padding(1),
),
'vlanid' / core.Int16ub
)
Dot1_VlanName = core.Struct(
'vlanid' / core.Int16ub,
'name_len' / core.Rebuild(core.Int8ub,
construct.len_(construct.this.value)),
'vlan_name' / core.PaddedString(construct.this.name_len, "utf8")
)
Dot1_ProtocolIdentity = core.Struct(
'len' / core.Rebuild(core.Int8ub, construct.len_(construct.this.value)),
'protocol' / core.Bytes(construct.this.len)
)
Dot1_MgmtVlanId = core.Struct('value' / core.Int16ub)
Dot1_LinkAggregationId = core.Struct(
'status' / core.BitStruct(
core.Padding(6),
'enabled' / core.Flag,
'supported' / core.Flag
),
'portid' / core.Int32ub
)
#
# 802.3 TLV field definitions
# See http://standards.ieee.org/about/get/802/802.3.html,
# section 79
#
def get_autoneg_cap(pmd):
"""Get autonegotiated capability strings
This returns a list of capability strings from the Physical Media
Dependent (PMD) capability bits.
:param pmd: PMD bits
:return: Sorted list containing capability strings
"""
caps_set = set()
pmd_map = [
(pmd._10base_t_hdx, '10BASE-T hdx'),
(pmd._10base_t_fdx, '10BASE-T fdx'),
(pmd._10base_t4, '10BASE-T4'),
(pmd._100base_tx_hdx, '100BASE-TX hdx'),
(pmd._100base_tx_fdx, '100BASE-TX fdx'),
(pmd._100base_t2_hdx, '100BASE-T2 hdx'),
(pmd._100base_t2_fdx, '100BASE-T2 fdx'),
(pmd.pause_fdx, 'PAUSE fdx'),
(pmd.asym_pause, 'Asym PAUSE fdx'),
(pmd.sym_pause, 'Sym PAUSE fdx'),
(pmd.asym_sym_pause, 'Asym and Sym PAUSE fdx'),
(pmd._1000base_x_hdx, '1000BASE-X hdx'),
(pmd._1000base_x_fdx, '1000BASE-X fdx'),
(pmd._1000base_t_hdx, '1000BASE-T hdx'),
(pmd._1000base_t_fdx, '1000BASE-T fdx')]
for bit, cap in pmd_map:
if bit:
caps_set.add(cap)
return sorted(caps_set)
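The bit-to-string mapping in ``get_autoneg_cap`` can be exercised without the construct ``BitStruct``; a minimal sketch, assuming the PMD bits arrive as a plain dict of booleans (the key names mirror the BitStruct fields above, and the table is abridged):

```python
# Sketch of the PMD capability mapping, assuming a plain dict of
# booleans in place of the parsed construct BitStruct (abridged table).
PMD_CAP_NAMES = {
    '_10base_t_hdx': '10BASE-T hdx',
    '_10base_t_fdx': '10BASE-T fdx',
    '_100base_tx_fdx': '100BASE-TX fdx',
    '_1000base_t_fdx': '1000BASE-T fdx',
}

def autoneg_caps(pmd_bits):
    """Return the sorted capability strings for the set PMD bits."""
    return sorted(name for bit, name in PMD_CAP_NAMES.items()
                  if pmd_bits.get(bit))

# Example: a NIC advertising 100/1000 full duplex
print(autoneg_caps({'_100base_tx_fdx': True, '_1000base_t_fdx': True}))
```

As in the original, unset or absent bits simply contribute nothing, and the sorted set deduplicates any repeated capability strings.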
Dot3_MACPhy_Config_Status = core.Struct(
'autoneg' / core.BitStruct(
core.Padding(6),
'enabled' / core.Flag,
'supported' / core.Flag,
),
# See IANAifMauAutoNegCapBits
# RFC 4836, Definitions of Managed Objects for IEEE 802.3
'pmd_autoneg' / core.BitStruct(
core.Padding(1),
'_10base_t_hdx' / core.Bit,
'_10base_t_fdx' / core.Bit,
'_10base_t4' / core.Bit,
'_100base_tx_hdx' / core.Bit,
'_100base_tx_fdx' / core.Bit,
'_100base_t2_hdx' / core.Bit,
'_100base_t2_fdx' / core.Bit,
'pause_fdx' / core.Bit,
'asym_pause' / core.Bit,
'sym_pause' / core.Bit,
'asym_sym_pause' / core.Bit,
'_1000base_x_hdx' / core.Bit,
'_1000base_x_fdx' / core.Bit,
'_1000base_t_hdx' / core.Bit,
'_1000base_t_fdx' / core.Bit
),
'mau_type' / core.Int16ub
)
# See ifMauTypeList in
# RFC 4836, Definitions of Managed Objects for IEEE 802.3
OPER_MAU_TYPES = {
0: "Unknown",
1: "AUI",
2: "10BASE-5",
3: "FOIRL",
4: "10BASE-2",
5: "10BASE-T duplex mode unknown",
6: "10BASE-FP",
7: "10BASE-FB",
8: "10BASE-FL duplex mode unknown",
9: "10BROAD36",
10: "10BASE-T half duplex",
11: "10BASE-T full duplex",
12: "10BASE-FL half duplex",
13: "10BASE-FL full duplex",
14: "100BASE-T4",
15: "100BASE-TX half duplex",
16: "100BASE-TX full duplex",
17: "100BASE-FX half duplex",
18: "100BASE-FX full duplex",
19: "100BASE-T2 half duplex",
20: "100BASE-T2 full duplex",
21: "1000BASE-X half duplex",
22: "1000BASE-X full duplex",
23: "1000BASE-LX half duplex",
24: "1000BASE-LX full duplex",
25: "1000BASE-SX half duplex",
26: "1000BASE-SX full duplex",
27: "1000BASE-CX half duplex",
28: "1000BASE-CX full duplex",
29: "1000BASE-T half duplex",
30: "1000BASE-T full duplex",
31: "10GBASE-X",
32: "10GBASE-LX4",
33: "10GBASE-R",
34: "10GBASE-ER",
35: "10GBASE-LR",
36: "10GBASE-SR",
37: "10GBASE-W",
38: "10GBASE-EW",
39: "10GBASE-LW",
40: "10GBASE-SW",
41: "10GBASE-CX4",
42: "2BASE-TL",
43: "10PASS-TS",
44: "100BASE-BX10D",
45: "100BASE-BX10U",
46: "100BASE-LX10",
47: "1000BASE-BX10D",
48: "1000BASE-BX10U",
49: "1000BASE-LX10",
50: "1000BASE-PX10D",
51: "1000BASE-PX10U",
52: "1000BASE-PX20D",
53: "1000BASE-PX20U",
}
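After parsing, the numeric ``mau_type`` field is typically translated through this table; a small sketch with an unknown-value fallback (the table here is an abridged copy for illustration):

```python
# Hypothetical helper: translate a parsed mau_type value via the
# OPER_MAU_TYPES table, falling back to "Unknown" for values outside
# the RFC 4836 list. The dict below is an abridged copy of the table.
OPER_MAU_TYPES = {
    0: "Unknown",
    16: "100BASE-TX full duplex",
    30: "1000BASE-T full duplex",
}

def mau_type_name(value):
    return OPER_MAU_TYPES.get(value, OPER_MAU_TYPES[0])

print(mau_type_name(30))   # 1000BASE-T full duplex
print(mau_type_name(999))  # Unknown
```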
Dot3_MTU = core.Struct('value' / core.Int16ub)

View File

@@ -1,106 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
from oslo_concurrency import lockutils
from oslo_config import cfg
from ironic_inspector.common import coordination
CONF = cfg.CONF
_LOCK_TEMPLATE = 'node-%s'
_SEMAPHORES = lockutils.Semaphores()
class BaseLock(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def acquire(self, blocking=True):
"""Acquire lock."""
@abc.abstractmethod
def release(self):
"""Release lock."""
@abc.abstractmethod
def is_locked(self):
"""Return lock status"""
class InternalLock(BaseLock):
"""Locking mechanism based on threading.Semaphore."""
def __init__(self, uuid):
self._lock = lockutils.internal_lock(_LOCK_TEMPLATE % uuid,
semaphores=_SEMAPHORES)
self._locked = False
def acquire(self, blocking=True):
if not self._locked:
self._locked = self._lock.acquire(blocking=blocking)
return self._locked
def release(self):
if self._locked:
self._lock.release()
self._locked = False
def is_locked(self):
return self._locked
def __enter__(self):
self._lock.acquire()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self._lock.release()
class ToozLock(BaseLock):
"""Wrapper on tooz locks."""
def __init__(self, lock):
"""Creates a wrapper on the tooz lock.
:param lock: a tooz lock instance.
"""
self._lock = lock
def acquire(self, blocking=True):
if not self._lock.acquired:
self._lock.acquire(blocking=blocking)
return self._lock.acquired
def release(self):
if self._lock.acquired:
self._lock.release()
def is_locked(self):
return self._lock.acquired
def __enter__(self):
self._lock.acquire()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self._lock.release()
def get_lock(uuid):
if CONF.standalone:
return InternalLock(uuid)
coordinator = coordination.get_coordinator()
lock = coordinator.get_lock(uuid)
return ToozLock(lock)
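The ``InternalLock`` semantics above (idempotent acquire/release tracked by a ``_locked`` flag) can be sketched on the stdlib alone; ``threading.Semaphore`` stands in for ``oslo_concurrency.lockutils`` here, so this is an illustration of the pattern rather than the module's actual implementation:

```python
import threading

# Sketch of the InternalLock pattern using a stdlib semaphore in place
# of oslo_concurrency.lockutils. acquire() and release() are idempotent:
# acquiring an already-held lock returns True without re-acquiring, and
# a double release is a no-op rather than an error.
class SketchLock:
    def __init__(self):
        self._sem = threading.Semaphore(1)
        self._locked = False

    def acquire(self, blocking=True):
        if not self._locked:
            self._locked = self._sem.acquire(blocking=blocking)
        return self._locked

    def release(self):
        if self._locked:
            self._sem.release()
            self._locked = False

lock = SketchLock()
print(lock.acquire())  # True: first acquire takes the semaphore
print(lock.acquire())  # True: already held, no second acquire
lock.release()
lock.release()         # no-op: the flag is already cleared
```

The idempotence matters because callers may wrap acquire/release in broad try/finally blocks and release more than once during error handling.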

View File

@@ -1,290 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Multicast DNS implementation for API discovery.
This implementation follows RFC 6763 as clarified by the API SIG guideline
https://review.opendev.org/651222.
"""
import collections
import ipaddress
import logging
import socket
import time
from urllib import parse as urlparse
from oslo_config import cfg
import zeroconf
from ironic_inspector.common import exception
from ironic_inspector.common.i18n import _
from ironic_inspector import utils
LOG = logging.getLogger(__name__)
_MDNS_DOMAIN = '_openstack._tcp.local.'
_endpoint = collections.namedtuple('Endpoint',
['addresses', 'hostname', 'port', 'params'])
CONF = cfg.CONF
class Zeroconf(object):
"""Multicast DNS implementation client and server.
Uses threading internally, so there is no start method. It starts
automatically on creation.
.. warning::
The underlying library does not yet support IPv6.
"""
def __init__(self):
"""Initialize and start the mDNS server."""
interfaces = (CONF.mdns.interfaces if CONF.mdns.interfaces
else zeroconf.InterfaceChoice.All)
# If interfaces are set, let zeroconf auto-detect the version
ip_version = None if CONF.mdns.interfaces else zeroconf.IPVersion.All
self._zc = zeroconf.Zeroconf(interfaces=interfaces,
ip_version=ip_version)
self._registered = []
def register_service(self, service_type, endpoint, params=None):
"""Register a service.
This call announces the new services via multicast and instructs the
built-in server to respond to queries about it.
:param service_type: OpenStack service type, e.g. "baremetal".
:param endpoint: full endpoint to reach the service.
:param params: optional properties as a dictionary.
:raises: :exc:`.ServiceRegistrationFailure` if the service cannot be
registered, e.g. because of conflicts.
"""
parsed = _parse_endpoint(endpoint, service_type)
all_params = CONF.mdns.params.copy()
if params:
all_params.update(params)
all_params.update(parsed.params)
properties = {
(key.encode('utf-8') if isinstance(key, str) else key):
(value.encode('utf-8') if isinstance(value, str) else value)
for key, value in all_params.items()
}
# TODO(dtantsur): allow overriding TTL values via configuration
info = zeroconf.ServiceInfo(_MDNS_DOMAIN,
'%s.%s' % (service_type, _MDNS_DOMAIN),
addresses=parsed.addresses,
port=parsed.port,
properties=properties,
server=parsed.hostname)
LOG.debug('Registering %s via mDNS', info)
# Work around a potential race condition in the registration code:
# https://github.com/jstasiak/python-zeroconf/issues/163
delay = 0.1
try:
for attempt in range(CONF.mdns.registration_attempts):
try:
self._zc.register_service(info)
except zeroconf.NonUniqueNameException:
LOG.debug('Could not register %s - conflict', info)
if attempt == CONF.mdns.registration_attempts - 1:
raise
# reset the cache to purge learned records and retry
self._zc.cache = zeroconf.DNSCache()
time.sleep(delay)
delay *= 2
else:
break
except zeroconf.Error as exc:
raise exception.ServiceRegistrationFailure(
service=service_type, error=exc)
self._registered.append(info)
def get_endpoint(self, service_type, skip_loopback=True, # noqa: C901
skip_link_local=False):
"""Get an endpoint and its properties from mDNS.
If the requested endpoint is already in the built-in server cache, and
its TTL is not exceeded, the cached value is returned.
:param service_type: OpenStack service type.
:param skip_loopback: Whether to ignore loopback addresses.
:param skip_link_local: Whether to ignore link local V6 addresses.
:returns: tuple (endpoint URL, properties as a dict).
:raises: :exc:`.ServiceLookupFailure` if the service cannot be found.
"""
delay = 0.1
for attempt in range(CONF.mdns.lookup_attempts):
name = '%s.%s' % (service_type, _MDNS_DOMAIN)
info = self._zc.get_service_info(name, name)
if info is not None:
break
elif attempt == CONF.mdns.lookup_attempts - 1:
raise exception.ServiceLookupFailure(service=service_type)
else:
time.sleep(delay)
delay *= 2
all_addr = info.parsed_addresses()
# Try to find the first routable address
fallback = None
for addr in all_addr:
try:
loopback = ipaddress.ip_address(addr).is_loopback
except ValueError:
LOG.debug('Skipping invalid IP address %s', addr)
continue
else:
if loopback and skip_loopback:
LOG.debug('Skipping loopback IP address %s', addr)
continue
if utils.get_route_source(addr, skip_link_local):
address = addr
break
elif fallback is None:
fallback = addr
else:
if fallback is None:
raise exception.ServiceLookupFailure(
_('None of addresses %(addr)s for service %(service)s '
'are valid')
% {'addr': all_addr, 'service': service_type})
else:
LOG.warning('None of addresses %s seem routable, using %s',
all_addr, fallback)
address = fallback
properties = {}
for key, value in info.properties.items():
try:
if isinstance(key, bytes):
key = key.decode('utf-8')
except UnicodeError as exc:
raise exception.ServiceLookupFailure(
_('Invalid properties for service %(svc)s. Cannot decode '
'key %(key)r: %(exc)r') %
{'svc': service_type, 'key': key, 'exc': exc})
try:
if isinstance(value, bytes):
value = value.decode('utf-8')
except UnicodeError as exc:
LOG.debug('Cannot convert value %(value)r for key %(key)s '
'to string, assuming binary: %(exc)s',
{'key': key, 'value': value, 'exc': exc})
properties[key] = value
path = properties.pop('path', '')
protocol = properties.pop('protocol', None)
if not protocol:
if info.port == 80:
protocol = 'http'
else:
protocol = 'https'
if info.server.endswith('.local.'):
# Local hostname means that the catalog lists an IP address,
# so use it
host = address
if int(ipaddress.ip_address(host).version) == 6:
host = '[%s]' % host
else:
# Otherwise use the provided hostname.
host = info.server.rstrip('.')
return ('{proto}://{host}:{port}{path}'.format(proto=protocol,
host=host,
port=info.port,
path=path),
properties)
def close(self):
"""Shut down mDNS and unregister services.
.. note::
If another server is running for the same services, it will
re-register them immediately.
"""
for info in self._registered:
try:
self._zc.unregister_service(info)
except Exception:
LOG.exception('Could not unregister mDNS service %s', info)
self._zc.close()
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
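The TXT-record property encoding in ``register_service`` (string keys and values become UTF-8 bytes, anything already binary passes through) is easy to isolate as a standalone sketch:

```python
# Sketch of the mDNS TXT-record property encoding from
# register_service: str keys/values are encoded to UTF-8 bytes,
# bytes are passed through unchanged.
def encode_properties(params):
    return {
        (key.encode('utf-8') if isinstance(key, str) else key):
        (value.encode('utf-8') if isinstance(value, str) else value)
        for key, value in params.items()
    }

print(encode_properties({'path': '/v1', b'raw': b'\x01'}))
# {b'path': b'/v1', b'raw': b'\x01'}
```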
def _parse_endpoint(endpoint, service_type=None):
params = {}
url = urlparse.urlparse(endpoint)
port = url.port
if port is None:
if url.scheme == 'https':
port = 443
else:
port = 80
addresses = []
hostname = url.hostname
try:
infos = socket.getaddrinfo(hostname, port, 0, socket.IPPROTO_TCP)
except socket.error as exc:
raise exception.ServiceRegistrationFailure(
service=service_type,
error=_('Could not resolve hostname %(host)s: %(exc)s') %
{'host': hostname, 'exc': exc})
for info in infos:
ip = info[4][0]
if ip == hostname:
# we need a host name for the service record. if what we have in
# the catalog is an IP address, use the local hostname instead
hostname = None
# zeroconf requires addresses in network format
ip = socket.inet_pton(info[0], ip)
if ip not in addresses:
addresses.append(ip)
if not addresses:
raise exception.ServiceRegistrationFailure(
service=service_type,
error=_('No suitable addresses found for %s') % url.hostname)
# avoid storing information that can be derived from existing data
if url.path not in ('', '/'):
params['path'] = url.path
if (not (port == 80 and url.scheme == 'http')
and not (port == 443 and url.scheme == 'https')):
params['protocol'] = url.scheme
# zeroconf is pretty picky about having the trailing dot
if hostname is not None and not hostname.endswith('.'):
hostname += '.'
return _endpoint(addresses, hostname, port, params)
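The port defaulting and the "derive, don't store" params logic in ``_parse_endpoint`` can be sketched with stdlib ``urllib`` alone (DNS resolution and address collection omitted):

```python
from urllib.parse import urlparse

# Sketch of _parse_endpoint's non-DNS logic: default the port from the
# scheme, and record 'path'/'protocol' params only when they cannot be
# derived from the scheme/port defaults.
def endpoint_params(endpoint):
    url = urlparse(endpoint)
    port = url.port or (443 if url.scheme == 'https' else 80)
    params = {}
    if url.path not in ('', '/'):
        params['path'] = url.path
    if not ((port == 80 and url.scheme == 'http')
            or (port == 443 and url.scheme == 'https')):
        params['protocol'] = url.scheme
    return port, params

print(endpoint_params('https://example.com/baremetal'))
# (443, {'path': '/baremetal'})
print(endpoint_params('http://example.com:8080/'))
# (8080, {'protocol': 'http'})
```

Keeping only non-derivable values small is deliberate: mDNS TXT records have tight size limits, so anything the client can reconstruct is left out.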

View File

@@ -1,54 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
import oslo_messaging as messaging
from oslo_messaging.rpc import dispatcher
from ironic_inspector.conductor import manager
CONF = cfg.CONF
TRANSPORT = None
def init():
global TRANSPORT
TRANSPORT = messaging.get_rpc_transport(CONF)
def get_client(topic=None):
"""Get a RPC client instance.
:param topic: The topic that messages will be delivered to. This argument
is ignored if CONF.standalone is True.
"""
assert TRANSPORT is not None
if CONF.standalone:
target = messaging.Target(topic=manager.MANAGER_TOPIC,
server=CONF.host,
version='1.3')
else:
target = messaging.Target(topic=topic, version='1.3')
return messaging.get_rpc_client(TRANSPORT, target)
def get_server(endpoints):
"""Get a RPC server instance."""
assert TRANSPORT is not None
target = messaging.Target(topic=manager.MANAGER_TOPIC, server=CONF.host,
version='1.3')
return messaging.get_rpc_server(
TRANSPORT, target, endpoints, executor='eventlet',
access_policy=dispatcher.DefaultRPCAccessPolicy)

View File

@@ -1,62 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_log import log
from oslo_service import service
from ironic_inspector.common import rpc
from ironic_inspector.conductor import manager
CONF = cfg.CONF
LOG = log.getLogger(__name__)
SERVER_NAME = 'ironic-inspector-rpc-server'
class RPCService(service.Service):
def __init__(self, host):
super(RPCService, self).__init__()
self.host = host
self.manager = manager.ConductorManager()
self.rpcserver = None
def start(self):
super(RPCService, self).start()
self.rpcserver = rpc.get_server([self.manager])
self.rpcserver.start()
self.manager.init_host()
LOG.info('Created RPC server for service %(service)s on host '
'%(host)s.',
{'service': manager.MANAGER_TOPIC, 'host': self.host})
def stop(self):
try:
self.rpcserver.stop()
self.rpcserver.wait()
except Exception as e:
LOG.exception('Service error occurred when stopping the '
'RPC server. Error: %s', e)
try:
self.manager.del_host()
except Exception as e:
LOG.exception('Service error occurred when cleaning up '
'the RPC manager. Error: %s', e)
super(RPCService, self).stop(graceful=True)
LOG.info('Stopped RPC server for service %(service)s on host '
'%(host)s.',
{'service': manager.MANAGER_TOPIC, 'host': self.host})

View File

@@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log
from ironic_inspector.common import rpc
from ironic_inspector.conf import opts
LOG = log.getLogger(__name__)
CONF = cfg.CONF
def prepare_service(args=None):
args = [] if args is None else args
log.register_options(CONF)
opts.set_config_defaults()
opts.parse_args(args)
rpc.init()
log.setup(CONF, 'ironic_inspector')
LOG.debug("Configuration:")
CONF.log_opt_values(LOG, log.DEBUG)

View File

@@ -1,148 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Mostly copied from ironic/common/swift.py
import json
import openstack
from openstack import exceptions as os_exc
from oslo_config import cfg
from ironic_inspector.common.i18n import _
from ironic_inspector.common import keystone
from ironic_inspector import utils
CONF = cfg.CONF
OBJECT_NAME_PREFIX = 'inspector_data'
SWIFT_SESSION = None
def reset_swift_session():
"""Reset the global session variable.
Mostly useful for unit tests.
"""
global SWIFT_SESSION
SWIFT_SESSION = None
class SwiftAPI(object):
"""API for communicating with Swift."""
def __init__(self):
"""Constructor for creating a SwiftAPI object.
Authentication is loaded from config file.
"""
global SWIFT_SESSION
try:
if not SWIFT_SESSION:
SWIFT_SESSION = keystone.get_session('swift')
self.connection = openstack.connection.Connection(
session=SWIFT_SESSION,
oslo_conf=CONF).object_store
except Exception as exc:
raise utils.Error(_("Could not connect to the object storage "
"service: %s") % exc)
def create_object(self, object, data, container=None, headers=None):
"""Uploads a given string to Swift.
:param object: The name of the object in Swift
:param data: string data to put in the object
:param container: The name of the container for the object.
Defaults to the value set in the configuration options.
:param headers: the headers for the object to pass to Swift
:returns: The Swift UUID of the object
:raises: utils.Error, if any operation with Swift fails.
"""
container = container or CONF.swift.container
try:
self.connection.create_container(container)
except os_exc.SDKException as e:
err_msg = (_('Swift failed to create container %(container)s. '
'Error was: %(error)s') %
{'container': container, 'error': e})
raise utils.Error(err_msg)
if CONF.swift.delete_after > 0:
headers = headers or {}
headers['X-Delete-After'] = CONF.swift.delete_after
try:
obj_uuid = self.connection.create_object(
container, object, data=data, headers=headers)
except os_exc.SDKException as e:
err_msg = (_('Swift failed to create object %(object)s in '
'container %(container)s. Error was: %(error)s') %
{'object': object, 'container': container, 'error': e})
raise utils.Error(err_msg)
return obj_uuid
def get_object(self, object, container=None):
"""Downloads a given object from Swift.
:param object: The name of the object in Swift
:param container: The name of the container for the object.
Defaults to the value set in the configuration options.
:returns: Swift object
:raises: utils.Error, if the Swift operation fails.
"""
container = container or CONF.swift.container
try:
obj = self.connection.download_object(object, container=container)
except os_exc.SDKException as e:
err_msg = (_('Swift failed to get object %(object)s in '
'container %(container)s. Error was: %(error)s') %
{'object': object, 'container': container, 'error': e})
raise utils.Error(err_msg)
return obj
def store_introspection_data(data, uuid, suffix=None):
"""Uploads introspection data to Swift.
:param data: data to store in Swift
:param uuid: UUID of the Ironic node that the data came from
:param suffix: optional suffix to add to the underlying swift
object name
:returns: name of the Swift object that the data is stored in
"""
swift_api = SwiftAPI()
swift_object_name = '%s-%s' % (OBJECT_NAME_PREFIX, uuid)
if suffix is not None:
swift_object_name = '%s-%s' % (swift_object_name, suffix)
swift_api.create_object(swift_object_name, json.dumps(data))
return swift_object_name
def get_introspection_data(uuid, suffix=None):
"""Downloads introspection data from Swift.
:param uuid: UUID of the Ironic node that the data came from
:param suffix: optional suffix to add to the underlying swift
object name
:returns: Swift object with the introspection data
"""
swift_api = SwiftAPI()
swift_object_name = '%s-%s' % (OBJECT_NAME_PREFIX, uuid)
if suffix is not None:
swift_object_name = '%s-%s' % (swift_object_name, suffix)
return swift_api.get_object(swift_object_name)
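Both helpers build the same Swift object name; the naming scheme is simple enough to sketch and verify directly (the ``'UNPROCESSED'`` suffix below is just an illustrative value):

```python
# Sketch of the Swift object naming used by store_introspection_data
# and get_introspection_data: "inspector_data-<node uuid>" with an
# optional "-<suffix>" appended.
OBJECT_NAME_PREFIX = 'inspector_data'

def swift_object_name(uuid, suffix=None):
    name = '%s-%s' % (OBJECT_NAME_PREFIX, uuid)
    if suffix is not None:
        name = '%s-%s' % (name, suffix)
    return name

print(swift_object_name('1234'))
# inspector_data-1234
print(swift_object_name('1234', 'UNPROCESSED'))
# inspector_data-1234-UNPROCESSED
```

Because both the writer and the reader derive the name the same way, no mapping from node to object has to be persisted anywhere.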

View File

@@ -1,240 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import traceback as traceback_mod
from eventlet import semaphore
from futurist import periodics
from oslo_config import cfg
from oslo_log import log
import oslo_messaging as messaging
from oslo_utils import excutils
from oslo_utils import reflection
import tooz
from ironic_inspector.common import coordination
from ironic_inspector.common.i18n import _
from ironic_inspector.common import ironic as ir_utils
from ironic_inspector.common import keystone
from ironic_inspector.common import mdns
from ironic_inspector.db import api as dbapi
from ironic_inspector import introspect
from ironic_inspector import node_cache
from ironic_inspector.plugins import base as plugins_base
from ironic_inspector import process
from ironic_inspector.pxe_filter import base as pxe_filter
from ironic_inspector import utils
LOG = log.getLogger(__name__)
CONF = cfg.CONF
MANAGER_TOPIC = 'ironic_inspector.conductor'
class ConductorManager(object):
"""ironic inspector conductor manager"""
RPC_API_VERSION = '1.3'
target = messaging.Target(version=RPC_API_VERSION)
def __init__(self):
self._periodics_worker = None
self._zeroconf = None
self._shutting_down = semaphore.Semaphore()
self.coordinator = None
self.dbapi = None
def init_host(self):
"""Initialize Worker host
Initializes the db connection, loads and validates processing
hooks, and runs periodic tasks.
:returns: None
"""
if CONF.processing.store_data == 'none':
LOG.warning('Introspection data will not be stored. Change '
'"[processing] store_data" option if this is not '
'the desired behavior')
else:
LOG.info('Introspection data will be stored in the %s backend',
CONF.processing.store_data)
if not self.dbapi:
self.dbapi = dbapi.init()
self.coordinator = None
try:
self.coordinator = coordination.get_coordinator(prefix='conductor')
self.coordinator.start(heartbeat=True)
self.coordinator.join_group()
except Exception as exc:
if CONF.standalone:
LOG.info('Coordination backend cannot be started, assuming '
'no other instances are running. Error: %s', exc)
self.coordinator = None
else:
with excutils.save_and_reraise_exception():
LOG.critical('Failure when connecting to coordination '
'backend', exc_info=True)
self.del_host()
else:
LOG.info('Successfully connected to coordination backend.')
try:
hooks = plugins_base.validate_processing_hooks()
except Exception as exc:
LOG.critical(str(exc))
sys.exit(1)
LOG.info('Enabled processing hooks: %s', [h.name for h in hooks])
driver = pxe_filter.driver()
driver.init_filter()
periodic_clean_up_ = periodics.periodic(
spacing=CONF.clean_up_period,
enabled=(CONF.clean_up_period != 0)
)(periodic_clean_up)
sync_with_ironic_ = periodics.periodic(
spacing=CONF.clean_up_period,
enabled=(CONF.clean_up_period != 0)
)(sync_with_ironic)
callables = [(periodic_clean_up_, None, None),
(sync_with_ironic_, (self,), None)]
driver_task = driver.get_periodic_sync_task()
if driver_task is not None:
callables.append((driver_task, None, None))
# run elections periodically if we have a coordinator
# that we were able to start
if (self.coordinator and self.coordinator.started):
periodic_leader_election_ = periodics.periodic(
spacing=CONF.leader_election_interval
)(periodic_leader_election)
callables.append((periodic_leader_election_, (self,), None))
self._periodics_worker = periodics.PeriodicWorker(
callables=callables,
executor_factory=periodics.ExistingExecutor(utils.executor()),
on_failure=self._periodics_watchdog)
utils.executor().submit(self._periodics_worker.start)
if CONF.enable_mdns:
endpoint = keystone.get_endpoint('service_catalog')
self._zeroconf = mdns.Zeroconf()
self._zeroconf.register_service('baremetal-introspection',
endpoint)
def del_host(self):
"""Shutdown the ironic inspector conductor service."""
if self.coordinator is not None:
try:
if self.coordinator.started:
self.coordinator.leave_group()
self.coordinator.stop()
except tooz.ToozError:
LOG.exception('Failed to stop coordinator')
if not self._shutting_down.acquire(blocking=False):
LOG.warning('Attempted to shut down while already shutting down')
return
pxe_filter.driver().tear_down_filter()
if self._periodics_worker is not None:
try:
self._periodics_worker.stop()
self._periodics_worker.wait()
except Exception as e:
LOG.exception('Service error occurred when stopping '
'periodic workers. Error: %s', e)
self._periodics_worker = None
if utils.executor().alive:
utils.executor().shutdown(wait=True)
if self._zeroconf is not None:
self._zeroconf.close()
self._zeroconf = None
self.dbapi = None
self._shutting_down.release()
LOG.info('Shut down successfully')
def _periodics_watchdog(self, callable_, activity, spacing, exc_info,
traceback=None):
LOG.exception("The periodic %(callable)s failed with: %(exception)s", {
'exception': ''.join(traceback_mod.format_exception(*exc_info)),
'callable': reflection.get_callable_name(callable_)})
@messaging.expected_exceptions(utils.Error)
def do_introspection(self, context, node_id, token=None,
manage_boot=True):
introspect.introspect(node_id, token=token, manage_boot=manage_boot)
@messaging.expected_exceptions(utils.Error)
def do_abort(self, context, node_id, token=None):
introspect.abort(node_id, token=token)
@messaging.expected_exceptions(utils.Error)
def do_reapply(self, context, node_uuid, token=None, data=None):
if not data:
try:
data = process.get_introspection_data(node_uuid,
processed=False,
get_json=True)
except utils.IntrospectionDataStoreDisabled:
raise utils.Error(_('Inspector is not configured to store '
'introspection data. Set the '
'[processing]store_data configuration '
'option to change this.'))
else:
process.store_introspection_data(node_uuid, data, processed=False)
process.reapply(node_uuid, data=data)
@messaging.expected_exceptions(utils.Error)
def do_continue(self, context, data):
return process.process(data)
def periodic_clean_up(): # pragma: no cover
if node_cache.clean_up():
pxe_filter.driver().sync(ir_utils.get_client())
def sync_with_ironic(conductor):
if (conductor.coordinator is not None
and not conductor.coordinator.is_leader):
LOG.debug('The conductor is not a leader, skipping syncing '
'with ironic')
return
LOG.debug('Syncing with ironic')
ironic = ir_utils.get_client()
# TODO(yuikotakada): pagination
ironic_nodes = ironic.nodes(fields=["uuid"], limit=None)
ironic_node_uuids = {node.id for node in ironic_nodes}
node_cache.delete_nodes_not_in_list(ironic_node_uuids)
def periodic_leader_election(conductor):
if conductor.coordinator is not None:
conductor.coordinator.run_elect_coordinator()
return

View File

@@ -1,55 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from ironic_inspector.conf import accelerators
from ironic_inspector.conf import capabilities
from ironic_inspector.conf import coordination
from ironic_inspector.conf import default
from ironic_inspector.conf import discovery
from ironic_inspector.conf import dnsmasq_pxe_filter
from ironic_inspector.conf import exception
from ironic_inspector.conf import extra_hardware
from ironic_inspector.conf import healthcheck
from ironic_inspector.conf import iptables
from ironic_inspector.conf import ironic
from ironic_inspector.conf import mdns
from ironic_inspector.conf import pci_devices
from ironic_inspector.conf import port_physnet
from ironic_inspector.conf import processing
from ironic_inspector.conf import pxe_filter
from ironic_inspector.conf import service_catalog
from ironic_inspector.conf import swift
CONF = cfg.CONF
accelerators.register_opts(CONF)
capabilities.register_opts(CONF)
coordination.register_opts(CONF)
discovery.register_opts(CONF)
default.register_opts(CONF)
dnsmasq_pxe_filter.register_opts(CONF)
exception.register_opts(CONF)
extra_hardware.register_opts(CONF)
healthcheck.register_opts(CONF)
iptables.register_opts(CONF)
ironic.register_opts(CONF)
mdns.register_opts(CONF)
pci_devices.register_opts(CONF)
port_physnet.register_opts(CONF)
processing.register_opts(CONF)
pxe_filter.register_opts(CONF)
service_catalog.register_opts(CONF)
swift.register_opts(CONF)

View File

@@ -1,35 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.StrOpt('known_devices',
default=os.path.abspath(os.path.join(
os.path.dirname(__file__), '../known_accelerators.yaml')),
help=_('The predefined accelerator devices which contains '
'information used for identifying accelerators.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, 'accelerators')


def list_opts():
    return _OPTS


@@ -1,45 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
DEFAULT_CPU_FLAGS_MAPPING = {
'vmx': 'cpu_vt',
'svm': 'cpu_vt',
'aes': 'cpu_aes',
'pse': 'cpu_hugepages',
'pdpe1gb': 'cpu_hugepages_1g',
'smx': 'cpu_txt',
}
_OPTS = [
cfg.BoolOpt('boot_mode',
default=False,
help=_('Whether to store the boot mode (BIOS or UEFI).')),
cfg.DictOpt('cpu_flags',
default=DEFAULT_CPU_FLAGS_MAPPING,
help=_('Mapping between a CPU flag and a capability to set '
'if this flag is present.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, 'capabilities')


def list_opts():
    return _OPTS
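`DEFAULT_CPU_FLAGS_MAPPING` pairs CPU flags from the introspected inventory with capability names. A hedged sketch of how such a mapping could be applied (the function name is illustrative; the real logic lives in the capabilities processing hook):

```python
# Illustrative only: derive capability names from CPU flags using a
# mapping shaped like DEFAULT_CPU_FLAGS_MAPPING above.
DEFAULT_CPU_FLAGS_MAPPING = {
    'vmx': 'cpu_vt',
    'svm': 'cpu_vt',
    'aes': 'cpu_aes',
    'pse': 'cpu_hugepages',
    'pdpe1gb': 'cpu_hugepages_1g',
    'smx': 'cpu_txt',
}

def capabilities_from_flags(flags, mapping=DEFAULT_CPU_FLAGS_MAPPING):
    # Both 'vmx' (Intel) and 'svm' (AMD) map to the same capability,
    # so collect into a set to deduplicate before returning.
    return sorted({mapping[f] for f in flags if f in mapping})

print(capabilities_from_flags(['vmx', 'aes', 'fpu']))  # ['cpu_aes', 'cpu_vt']
```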


@@ -1,36 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
# NOTE(kaifeng) The capability of various backend varies, please check tooz
# documentation for driver compatibilities:
# https://docs.openstack.org/tooz/latest/user/compatibility.html
_OPTS = [
cfg.StrOpt('backend_url',
default='memcached://localhost:11211',
secret=True,
help=_('The backend URL to use for distributed coordination. '
'EXPERIMENTAL.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, 'coordination')


def list_opts():
    return _OPTS


@@ -1,127 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
from oslo_config import cfg
from oslo_config import types as cfg_types
from ironic_inspector.common.i18n import _
class Octal(cfg_types.Integer):

    def __call__(self, value):
        if isinstance(value, int):
            return value
        else:
            return int(str(value), 8)
_OPTS = [
cfg.StrOpt('listen_address',
default='::',
help=_('IP to listen on.')),
cfg.PortOpt('listen_port',
default=5050,
help=_('Port to listen on.')),
cfg.StrOpt('listen_unix_socket',
help=_('Unix socket to listen on. Disables listen_address and '
'listen_port.')),
cfg.Opt('listen_unix_socket_mode', type=Octal(),
help=_('File mode (an octal number) of the unix socket to '
'listen on. Ignored if listen_unix_socket is not set.')),
cfg.StrOpt('host',
default=socket.getfqdn(),
sample_default='localhost',
help=_('Name of this node. This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, or IP address. '
'However, the node name must be valid within '
'an AMQP key.')),
cfg.StrOpt('auth_strategy',
default='keystone',
choices=[('noauth', _('no authentication')),
('keystone', _('use the Identity service for '
'authentication')),
('http_basic', _('HTTP basic authentication'))],
help=_('Authentication method used on the ironic-inspector '
'API. "noauth", "keystone" or "http_basic" are valid '
'options. "noauth" will disable all authentication.')),
cfg.StrOpt('http_basic_auth_user_file',
default='/etc/ironic-inspector/htpasswd',
help=_('Path to Apache format user authentication file used '
'when auth_strategy=http_basic')),
cfg.IntOpt('timeout',
default=3600,
# We're using timedelta which can overflow if somebody sets this
# too high, so limit to a sane value of 10 years.
max=315576000,
help=_('Timeout after which introspection is considered '
'failed, set to 0 to disable.')),
cfg.IntOpt('clean_up_period',
default=60,
min=0,
help=_('Amount of time in seconds, after which repeat clean up '
'of timed out nodes and old nodes status information. '
'WARNING: If set to a value of 0, then the periodic '
'task is disabled and inspector will not sync with '
'ironic to complete the internal clean-up process. '
'Not advisable if the deployment uses a PXE filter, '
'and will result in the ironic-inspector ceasing '
'periodic cleanup activities.')),
cfg.IntOpt('leader_election_interval',
default=10,
help=_('Interval (in seconds) between leader elections.')),
cfg.BoolOpt('use_ssl',
default=False,
help=_('SSL Enabled/Disabled')),
cfg.IntOpt('max_concurrency',
default=1000, min=2,
help=_('The green thread pool size.')),
cfg.IntOpt('introspection_delay',
default=5,
help=_('Delay (in seconds) between two introspections. Only '
'applies when boot is managed by ironic-inspector (i.e. '
'manage_boot==True).')),
cfg.ListOpt('ipmi_address_fields',
default=['redfish_address', 'ilo_address', 'drac_host',
'drac_address', 'ibmc_address'],
help=_('Ironic driver_info fields that are equivalent '
'to ipmi_address.')),
cfg.StrOpt('rootwrap_config',
default="/etc/ironic-inspector/rootwrap.conf",
help=_('Path to the rootwrap configuration file to use for '
'running commands as root')),
cfg.IntOpt('api_max_limit', default=1000, min=1,
help=_('Limit the number of elements an API list-call '
'returns')),
cfg.BoolOpt('can_manage_boot', default=True,
help=_('Whether the current installation of ironic-inspector '
'can manage PXE booting of nodes. If set to False, '
'the API will reject introspection requests with '
'manage_boot missing or set to True.')),
cfg.BoolOpt('enable_mdns', default=False,
help=_('Whether to enable publishing the ironic-inspector API '
'endpoint via multicast DNS.')),
cfg.BoolOpt('standalone', default=True,
help=_('Whether to run ironic-inspector as a standalone '
'service. It\'s EXPERIMENTAL to set to False.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS)


def list_opts():
    return _OPTS
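The `Octal` type above accepts either an int (passed through) or a string interpreted as base 8, so `listen_unix_socket_mode = 0600` in a config file yields 384 (0o600). Its conversion reduces to this standalone helper (the function name is illustrative):

```python
def to_octal(value):
    # Mirrors Octal.__call__ above: ints pass through unchanged,
    # anything else is parsed as a base-8 string.
    if isinstance(value, int):
        return value
    return int(str(value), 8)

print(to_octal('0600'))  # 384, i.e. 0o600
print(to_octal(384))     # 384
```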


@@ -1,44 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.StrOpt('enroll_node_driver',
default='fake-hardware',
help=_('The name of the Ironic driver used by the enroll '
'hook when creating a new node in Ironic.')),
cfg.DictOpt('enroll_node_fields', default={},
help=_('Additional fields to set on newly discovered nodes.')),
cfg.ListOpt('enabled_bmc_address_version',
default=['4', '6'],
help=_('IP version of BMC address that will be '
'used when enrolling a new node in Ironic. '
'Defaults to "4,6". Could be "4" (use v4 address '
'only), "4,6" (v4 address have higher priority and '
'if both addresses found v6 version is ignored), '
'"6,4" (v6 is desired but fall back to v4 address '
'for BMCs having v4 address, opposite to "4,6"), '
'"6" (use v6 address only and ignore v4 version).')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, 'discovery')


def list_opts():
    return _OPTS
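`enabled_bmc_address_version` is an ordered preference list: the first enabled IP version for which the node reports a BMC address wins. A hedged sketch of that selection (function name illustrative, not the actual discovery hook code):

```python
def pick_bmc_address(addresses, enabled_versions=('4', '6')):
    # addresses maps IP version ('4' or '6') to a discovered BMC
    # address; the first enabled version with an address wins,
    # matching the semantics described in the option help text.
    for version in enabled_versions:
        addr = addresses.get(version)
        if addr:
            return addr
    return None

both = {'4': '192.0.2.10', '6': '2001:db8::10'}
print(pick_bmc_address(both, enabled_versions=('6', '4')))  # 2001:db8::10
print(pick_bmc_address({'4': '192.0.2.10'},
                       enabled_versions=('6', '4')))        # 192.0.2.10
```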


@@ -1,48 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.StrOpt('dhcp_hostsdir',
default='/var/lib/ironic-inspector/dhcp-hostsdir',
help=_('The MAC address cache directory, exposed to dnsmasq. '
'This directory is expected to be in exclusive control '
'of the driver.')),
cfg.BoolOpt('purge_dhcp_hostsdir', default=True,
help=_('Purge the hostsdir upon driver initialization. '
'Setting to false should only be performed when the '
'deployment of inspector is such that there are '
'multiple processes executing inside of the same host '
'and namespace. In this case, the Operator is '
'responsible for setting up a custom cleaning '
'facility.')),
cfg.StrOpt('dnsmasq_start_command', default='',
help=_('A (shell) command line to start the dnsmasq service '
'upon filter initialization. Default: don\'t start.')),
cfg.StrOpt('dnsmasq_stop_command', default='',
help=_('A (shell) command line to stop the dnsmasq service '
'upon inspector (error) exit. Default: don\'t stop.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, 'dnsmasq_pxe_filter')


def list_opts():
    return _OPTS


@@ -1,43 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Ironic base exception handling.
Includes decorator for re-raising Ironic-type exceptions.
SHOULD include dedicated exception logging.
"""
from oslo_config import cfg
from ironic_inspector.common.i18n import _
opts = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help=_('Used if there is a formatting error when generating '
'an exception message (a programming error). If True, '
'raise an exception; if False, use the unformatted '
'message.'),
deprecated_group='ironic_lib'),
]
CONF = cfg.CONF
def register_opts(conf):
    conf.register_opts(opts, group='exception')
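`fatal_exception_format_errors` controls what happens when interpolating an exception message fails (a programming error). A stdlib-only sketch of that fallback (names illustrative; the real handling lives in ironic-inspector's exception base class):

```python
def build_message(template, kwargs, fatal_format_errors=False):
    # Try to interpolate the message; on a formatting error (missing
    # key, bad placeholder) either re-raise or fall back to the raw,
    # unformatted template, per the option above.
    try:
        return template % kwargs
    except (KeyError, TypeError, ValueError):
        if fatal_format_errors:
            raise
        return template

print(build_message('node %(uuid)s not found', {'uuid': 'abc'}))
print(build_message('node %(uuid)s not found', {}))  # falls back to template
```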


@@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.BoolOpt('strict',
default=False,
help=_('If True, refuse to parse extra data if at least one '
'record is too short. Additionally, remove the '
'incoming "data" even if parsing failed.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, group='extra_hardware')


def list_opts():
    return _OPTS


@@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.BoolOpt('enabled',
default=False,
help=_('Enable the health check endpoint at /healthcheck. '
'Note that this is unauthenticated. More information '
'is available at '
'https://docs.openstack.org/oslo.middleware/latest/'
'reference/healthcheck_plugins.html.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, group='healthcheck')


def list_opts():
    return _OPTS


@@ -1,51 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.StrOpt('dnsmasq_interface',
default='br-ctlplane',
help=_('Interface on which dnsmasq listens, the default is for '
'VMs.')),
cfg.StrOpt('firewall_chain',
default='ironic-inspector',
help=_('iptables chain name to use.')),
cfg.ListOpt('ethoib_interfaces',
default=[],
help=_('List of Ethernet Over InfiniBand interfaces '
'on the Inspector host which are used for physical '
'access to the DHCP network. Multiple interfaces would '
'be attached to a bond or bridge specified in '
'dnsmasq_interface. The MACs of the InfiniBand nodes '
'which are not in desired state are going to be '
'blocked based on the list of neighbor MACs '
'on these interfaces.')),
cfg.StrOpt('ip_version',
default='4',
choices=[('4', _('IPv4')),
('6', _('IPv6'))],
help=_('The IP version that will be used for iptables filter. '
'Defaults to 4.')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, 'iptables')


def list_opts():
    return _OPTS


@@ -1,42 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
from ironic_inspector.common import keystone
IRONIC_GROUP = 'ironic'
SERVICE_TYPE = 'baremetal'
_OPTS = [
cfg.IntOpt('retry_interval',
default=2,
help=_('Interval between retries in case of conflict error '
'(HTTP 409).')),
cfg.IntOpt('max_retries',
default=30,
help=_('Maximum number of retries in case of conflict error '
'(HTTP 409).')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, IRONIC_GROUP)
    keystone.register_auth_opts(IRONIC_GROUP, SERVICE_TYPE)


def list_opts():
    return keystone.add_auth_options(_OPTS, SERVICE_TYPE)
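`retry_interval` and `max_retries` govern retries when ironic answers with a conflict (HTTP 409). A simplified, illustrative retry loop under those two settings (`Conflict` and the call shape are stand-ins, not ironic-inspector's actual client code):

```python
import time

class Conflict(Exception):
    """Stand-in for an HTTP 409 error from the ironic client."""

def call_with_retries(func, max_retries=30, retry_interval=2,
                      sleep=time.sleep):
    # Retry only on Conflict, allowing up to max_retries extra
    # attempts and sleeping retry_interval seconds between them.
    for attempt in range(max_retries + 1):
        try:
            return func()
        except Conflict:
            if attempt == max_retries:
                raise
            sleep(retry_interval)

attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise Conflict()
    return 'ok'

print(call_with_retries(flaky, max_retries=5, retry_interval=0))  # ok
```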


@@ -1,44 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_config import types as cfg_types
opts = [
cfg.IntOpt('registration_attempts',
min=1, default=5,
help='Number of attempts to register a service. Currently '
'has to be larger than 1 because of race conditions '
'in the zeroconf library.'),
cfg.IntOpt('lookup_attempts',
min=1, default=3,
help='Number of attempts to lookup a service.'),
cfg.Opt('params',
# This is required for values that contain commas.
type=cfg_types.Dict(cfg_types.String(quotes=True)),
default={},
help='Additional parameters to pass for the registered '
'service.'),
cfg.ListOpt('interfaces',
help='List of IP addresses of interfaces to use for mDNS. '
'Defaults to all interfaces on the system.'),
]
CONF = cfg.CONF
opt_group = cfg.OptGroup(name='mdns', title='Options for multicast DNS')
def register_opts(conf):
    conf.register_group(opt_group)
    conf.register_opts(opts, group=opt_group)


@@ -1,84 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_log import log
from oslo_middleware import cors
import ironic_inspector.conf
from ironic_inspector import version
MIN_VERSION_HEADER = 'X-OpenStack-Ironic-Inspector-API-Minimum-Version'
MAX_VERSION_HEADER = 'X-OpenStack-Ironic-Inspector-API-Maximum-Version'
VERSION_HEADER = 'X-OpenStack-Ironic-Inspector-API-Version'
def set_config_defaults():
    """Set defaults for the oslo.config options used by Inspector code."""
    log.set_defaults(default_log_levels=['sqlalchemy=WARNING',
                                         'iso8601=WARNING',
                                         'requests=WARNING',
                                         'urllib3.connectionpool=WARNING',
                                         'keystonemiddleware=WARNING',
                                         'keystoneauth=WARNING',
                                         'ironicclient=WARNING',
                                         'amqp=WARNING',
                                         'amqplib=WARNING',
                                         'stevedore=WARNING',
                                         # This comes in two flavors
                                         'oslo.messaging=WARNING',
                                         'oslo_messaging=WARNING'])
    set_cors_middleware_defaults()
def set_cors_middleware_defaults():
    """Update default configuration options for oslo.middleware."""
    cors.set_defaults(
        allow_headers=['X-Auth-Token',
                       MIN_VERSION_HEADER,
                       MAX_VERSION_HEADER,
                       VERSION_HEADER],
        allow_methods=['GET', 'POST', 'PUT', 'HEAD',
                       'PATCH', 'DELETE', 'OPTIONS']
    )
def parse_args(args, default_config_files=None):
    cfg.CONF(args,
             project='ironic-inspector',
             version=version.version_info.release_string(),
             default_config_files=default_config_files)
def list_opts():
    return [
        ('capabilities', ironic_inspector.conf.capabilities.list_opts()),
        ('coordination', ironic_inspector.conf.coordination.list_opts()),
        ('DEFAULT', ironic_inspector.conf.default.list_opts()),
        ('discovery', ironic_inspector.conf.discovery.list_opts()),
        ('dnsmasq_pxe_filter',
         ironic_inspector.conf.dnsmasq_pxe_filter.list_opts()),
        ('exception', ironic_inspector.conf.exception.opts),
        ('extra_hardware', ironic_inspector.conf.extra_hardware.list_opts()),
        ('healthcheck', ironic_inspector.conf.healthcheck.list_opts()),
        ('ironic', ironic_inspector.conf.ironic.list_opts()),
        ('iptables', ironic_inspector.conf.iptables.list_opts()),
        ('mdns', ironic_inspector.conf.mdns.opts),
        ('port_physnet', ironic_inspector.conf.port_physnet.list_opts()),
        ('processing', ironic_inspector.conf.processing.list_opts()),
        ('pci_devices', ironic_inspector.conf.pci_devices.list_opts()),
        ('pxe_filter', ironic_inspector.conf.pxe_filter.list_opts()),
        ('service_catalog', ironic_inspector.conf.service_catalog.list_opts()),
        ('swift', ironic_inspector.conf.swift.list_opts()),
    ]


@@ -1,34 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from ironic_inspector.common.i18n import _
_OPTS = [
cfg.MultiStrOpt('alias',
default=[],
help=_('An alias for PCI device identified by '
'\'vendor_id\' and \'product_id\' fields. Format: '
'{"vendor_id": "1234", "product_id": "5678", '
'"name": "pci_dev1"}')),
]
def register_opts(conf):
    conf.register_opts(_OPTS, group='pci_devices')


def list_opts():
    return _OPTS
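Each `alias` value above is a small JSON document pairing a vendor/product ID with a name. A hedged sketch of parsing the configured aliases and counting matching discovered PCI devices (function name and data shapes illustrative; the real matching happens in the pci_devices processing hook):

```python
import json

def match_pci_aliases(alias_values, devices):
    # alias_values: raw JSON strings as configured via the MultiStrOpt
    # above. devices: discovered PCI devices as dicts carrying
    # vendor_id/product_id. Returns alias name -> match count.
    counts = {}
    for raw in alias_values:
        alias = json.loads(raw)
        key = (alias['vendor_id'], alias['product_id'])
        n = sum(1 for d in devices
                if (d['vendor_id'], d['product_id']) == key)
        counts[alias['name']] = n
    return counts

aliases = ['{"vendor_id": "1234", "product_id": "5678", "name": "pci_dev1"}']
devices = [{'vendor_id': '1234', 'product_id': '5678'},
           {'vendor_id': '8086', 'product_id': '0001'}]
print(match_pci_aliases(aliases, devices))  # {'pci_dev1': 1}
```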

Some files were not shown because too many files have changed in this diff.