Initial commit for ironic-lib
parent 6f0c39d3f4
commit 1d78cb7167
@@ -1,4 +1,10 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ ${TESTS_DIR:-./ironic/tests/} $LISTOPT $IDOPTION
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             OS_DEBUG=${OS_DEBUG:-0} \
             ${PYTHON:-python} -m subunit.run discover -t ./ $LISTOPT $IDOPTION

test_id_option=--load-list $IDFILE
test_list_option=--list
LICENSE (new file, 202 lines)
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright {yyyy} {name of copyright owner}

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
README.md (new file, 2 lines)
@@ -0,0 +1,2 @@
# ironic-lib
A collection of common Ironic utilities
README.rst (36 lines)
@@ -1,31 +1,17 @@
Ironic
======
----------
ironic_lib
----------

Ironic is an integrated OpenStack project which aims to provision bare
metal machines instead of virtual machines, forked from the Nova Baremetal
driver. It is best thought of as a bare metal hypervisor **API** and a set
of plugins which interact with the bare metal hypervisors. By default, it
will use PXE and IPMI in concert to provision and turn on/off machines,
but Ironic also supports vendor-specific plugins which may implement
additional functionality.
Running Tests
-------------

-----------------
Project Resources
-----------------
To run tests in virtualenvs (preferred)::

Project status, bugs, and blueprints are tracked on Launchpad:
sudo pip install tox
tox

http://launchpad.net/ironic
To run tests in the current environment::

Developer documentation can be found here:
sudo pip install -r requirements.txt
nosetests

http://docs.openstack.org/developer/ironic

Additional resources are linked from the project wiki page:

https://wiki.openstack.org/wiki/Ironic

Anyone wishing to contribute to an OpenStack project should
find a good reference here:

http://docs.openstack.org/infra/manual/developers.html
TESTING.rst (new file, 88 lines)
@@ -0,0 +1,88 @@
===========================
Testing Your OpenStack Code
===========================
------------
A Quickstart
------------

This is designed to be enough information for you to run your first tests.
Detailed information on testing can be found here: https://wiki.openstack.org/wiki/Testing

*Install pip*::

[apt-get | yum] install python-pip
More information on pip here: http://www.pip-installer.org/en/latest/

*Use pip to install tox*::

pip install tox

Run The Tests
-------------

*Navigate to the project's root directory and execute*::

tox
Note: completing this command may take a long time (it depends on system
resources); also, you might not see any output until tox is complete.

Information about tox can be found here: http://testrun.org/tox/latest/


Run The Tests in One Environment
--------------------------------

Tox will run your entire test suite in the environments specified in the project tox.ini::

[tox]

envlist = <list of available environments>

To run the test suite in just one of the environments in envlist execute::

tox -e <env>
so for example, *run the test suite in py26*::

tox -e py26

Run One Test
------------

To run individual tests with tox:

if testr is in tox.ini, for example::

[testenv]

includes "python setup.py testr --slowest --testr-args='{posargs}'"

run individual tests with the following syntax::

tox -e <env> -- path.to.module:Class.test
so for example, *run the cpu_unlimited test in Nova*::

tox -e py27 -- nova.tests.test_claims:ClaimTestCase.test_cpu_unlimited

if nose is in tox.ini, for example::

[testenv]

includes "nosetests {posargs}"

run individual tests with the following syntax::

tox -e <env> -- --tests path.to.module:Class.test
so for example, *run the list test in Glance*::

tox -e py27 -- --tests glance.tests.unit.test_auth.py:TestImageRepoProxy.test_list

Need More Info?
---------------

More information about testr: https://wiki.openstack.org/wiki/Testr

More information about nose: https://nose.readthedocs.org/en/latest/


More information about testing OpenStack code can be found here:
https://wiki.openstack.org/wiki/Testing
@@ -1,22 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

os.environ['EVENTLET_NO_GREENDNS'] = 'yes'

import eventlet

eventlet.monkey_patch(os=False)
@@ -1,526 +0,0 @@
|
||||
# Copyright 2010 United States Government as represented by the
|
||||
# Administrator of the National Aeronautics and Space Administration.
|
||||
# Copyright 2011 Justin Santa Barbara
|
||||
# Copyright (c) 2012 NTT DOCOMO, INC.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""Utilities and helper functions."""
|
||||
|
||||
import contextlib
|
||||
import errno
|
||||
import hashlib
|
||||
import os
|
||||
import random
|
||||
import re
|
||||
import shutil
|
||||
import tempfile
|
||||
|
||||
import netaddr
|
||||
from oslo_concurrency import processutils
|
||||
from oslo_config import cfg
|
||||
from oslo_utils import excutils
|
||||
import paramiko
|
||||
import six
|
||||
|
||||
from ironic.common import exception
|
||||
from ironic.common.i18n import _
|
||||
from ironic.common.i18n import _LE
|
||||
from ironic.common.i18n import _LW
|
||||
from ironic.openstack.common import log as logging
|
||||
|
||||
utils_opts = [
|
||||
cfg.StrOpt('rootwrap_config',
|
||||
default="/etc/ironic/rootwrap.conf",
|
||||
help='Path to the rootwrap configuration file to use for '
|
||||
'running commands as root.'),
|
||||
cfg.StrOpt('tempdir',
|
||||
help='Explicitly specify the temporary working directory.'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(utils_opts)
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _get_root_helper():
|
||||
return 'sudo ironic-rootwrap %s' % CONF.rootwrap_config
|
||||
|
||||
|
||||
def execute(*cmd, **kwargs):
|
||||
"""Convenience wrapper around oslo's execute() method.
|
||||
|
||||
:param cmd: Passed to processutils.execute.
|
||||
:param use_standard_locale: True | False. Defaults to False. If set to
|
||||
True, execute command with standard locale
|
||||
added to environment variables.
|
||||
:returns: (stdout, stderr) from process execution
|
||||
:raises: UnknownArgumentError
|
||||
:raises: ProcessExecutionError
|
||||
"""
|
||||
|
||||
use_standard_locale = kwargs.pop('use_standard_locale', False)
|
||||
if use_standard_locale:
|
||||
env = kwargs.pop('env_variables', os.environ.copy())
|
||||
env['LC_ALL'] = 'C'
|
||||
kwargs['env_variables'] = env
|
||||
if kwargs.get('run_as_root') and 'root_helper' not in kwargs:
|
||||
kwargs['root_helper'] = _get_root_helper()
|
||||
result = processutils.execute(*cmd, **kwargs)
|
||||
LOG.debug('Execution completed, command line is "%s"',
|
||||
' '.join(map(str, cmd)))
|
||||
LOG.debug('Command stdout is: "%s"' % result[0])
|
||||
LOG.debug('Command stderr is: "%s"' % result[1])
|
||||
return result
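As a usage sketch (not part of the original commit; the command and device names are only examples), the execute() wrapper above is typically called like this:

# Run a command through rootwrap with a stable locale; any other keyword
# arguments are passed straight through to processutils.execute().
stdout, stderr = execute('blockdev', '--getsz', '/dev/sda',
                         run_as_root=True,
                         use_standard_locale=True,
                         check_exit_code=[0])
# With the default configuration, run_as_root=True makes _get_root_helper()
# prepend 'sudo ironic-rootwrap /etc/ironic/rootwrap.conf' to the command.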
|
||||
|
||||
|
||||
def trycmd(*args, **kwargs):
|
||||
"""Convenience wrapper around oslo's trycmd() method."""
|
||||
if kwargs.get('run_as_root') and 'root_helper' not in kwargs:
|
||||
kwargs['root_helper'] = _get_root_helper()
|
||||
return processutils.trycmd(*args, **kwargs)
|
||||
|
||||
|
||||
def ssh_connect(connection):
|
||||
"""Method to connect to a remote system using ssh protocol.
|
||||
|
||||
:param connection: a dict of connection parameters.
|
||||
:returns: paramiko.SSHClient -- an active ssh connection.
|
||||
:raises: SSHConnectFailed
|
||||
|
||||
"""
|
||||
try:
|
||||
ssh = paramiko.SSHClient()
|
||||
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
|
||||
key_contents = connection.get('key_contents')
|
||||
if key_contents:
|
||||
data = six.moves.StringIO(key_contents)
|
||||
if "BEGIN RSA PRIVATE" in key_contents:
|
||||
pkey = paramiko.RSAKey.from_private_key(data)
|
||||
elif "BEGIN DSA PRIVATE" in key_contents:
|
||||
pkey = paramiko.DSSKey.from_private_key(data)
|
||||
else:
|
||||
# Can't include the key contents - secure material.
|
||||
raise ValueError(_("Invalid private key"))
|
||||
else:
|
||||
pkey = None
|
||||
ssh.connect(connection.get('host'),
|
||||
username=connection.get('username'),
|
||||
password=connection.get('password'),
|
||||
port=connection.get('port', 22),
|
||||
pkey=pkey,
|
||||
key_filename=connection.get('key_filename'),
|
||||
timeout=connection.get('timeout', 10))
|
||||
|
||||
# send TCP keepalive packets every 20 seconds
|
||||
ssh.get_transport().set_keepalive(20)
|
||||
except Exception as e:
|
||||
LOG.debug("SSH connect failed: %s" % e)
|
||||
raise exception.SSHConnectFailed(host=connection.get('host'))
|
||||
|
||||
return ssh
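For illustration, a hedged example of the connection dictionary that ssh_connect() reads with connection.get(); all values below are placeholders:

# Only the keys you need have to be present; port and timeout have defaults.
connection = {
    'host': '192.0.2.10',
    'username': 'root',
    'password': 'secret',   # alternatively supply key_filename or key_contents
    'port': 22,             # default 22
    'timeout': 10,          # default 10 seconds
}
ssh = ssh_connect(connection)
_stdin, stdout, _stderr = ssh.exec_command('uname -a')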
|
||||
|
||||
|
||||
def generate_uid(topic, size=8):
|
||||
characters = '01234567890abcdefghijklmnopqrstuvwxyz'
|
||||
choices = [random.choice(characters) for _x in range(size)]
|
||||
return '%s-%s' % (topic, ''.join(choices))
|
||||
|
||||
|
||||
def random_alnum(size=32):
|
||||
characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
|
||||
return ''.join(random.choice(characters) for _ in range(size))
|
||||
|
||||
|
||||
def delete_if_exists(pathname):
|
||||
"""delete a file, but ignore file not found error."""
|
||||
|
||||
try:
|
||||
os.unlink(pathname)
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
return
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
def is_valid_boolstr(val):
|
||||
"""Check if the provided string is a valid bool string or not."""
|
||||
boolstrs = ('true', 'false', 'yes', 'no', 'y', 'n', '1', '0')
|
||||
return str(val).lower() in boolstrs
|
||||
|
||||
|
||||
def is_valid_mac(address):
|
||||
"""Verify the format of a MAC address.
|
||||
|
||||
Check if a MAC address is valid and contains six octets. Accepts
|
||||
colon-separated format only.
|
||||
|
||||
:param address: MAC address to be validated.
|
||||
:returns: True if valid. False if not.
|
||||
|
||||
"""
|
||||
m = "[0-9a-f]{2}(:[0-9a-f]{2}){5}$"
|
||||
return (isinstance(address, six.string_types) and
|
||||
re.match(m, address.lower()))
|
||||
|
||||
|
||||
def is_hostname_safe(hostname):
|
||||
"""Determine if the supplied hostname is RFC compliant.
|
||||
|
||||
Check that the supplied hostname conforms to:
|
||||
* http://en.wikipedia.org/wiki/Hostname
|
||||
* http://tools.ietf.org/html/rfc952
|
||||
* http://tools.ietf.org/html/rfc1123
|
||||
|
||||
:param hostname: The hostname to be validated.
|
||||
:returns: True if valid. False if not.
|
||||
|
||||
"""
|
||||
m = '^[a-z0-9]([a-z0-9\-]{0,61}[a-z0-9])?$'
|
||||
return (isinstance(hostname, six.string_types) and
|
||||
(re.match(m, hostname) is not None))
|
||||
|
||||
|
||||
def validate_and_normalize_mac(address):
|
||||
"""Validate a MAC address and return normalized form.
|
||||
|
||||
Checks whether the supplied MAC address is formally correct and
|
||||
normalizes it to all lower case.
|
||||
|
||||
:param address: MAC address to be validated and normalized.
|
||||
:returns: Normalized and validated MAC address.
|
||||
:raises: InvalidMAC If the MAC address is not valid.
|
||||
|
||||
"""
|
||||
if not is_valid_mac(address):
|
||||
raise exception.InvalidMAC(mac=address)
|
||||
return address.lower()
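To make the behaviour of the two MAC helpers concrete (an illustrative sketch, not from the commit):

assert is_valid_mac('aa:bb:cc:dd:ee:ff')        # six colon-separated octets
assert not is_valid_mac('aabb.ccdd.eeff')       # other notations are rejected
# validate_and_normalize_mac() lower-cases a valid address and raises
# exception.InvalidMAC for anything else.
assert validate_and_normalize_mac('AA:BB:CC:DD:EE:FF') == 'aa:bb:cc:dd:ee:ff'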
|
||||
|
||||
|
||||
def is_valid_ipv6_cidr(address):
|
||||
try:
|
||||
str(netaddr.IPNetwork(address, version=6).cidr)
|
||||
return True
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
|
||||
def get_shortened_ipv6(address):
|
||||
addr = netaddr.IPAddress(address, version=6)
|
||||
return str(addr.ipv6())
|
||||
|
||||
|
||||
def get_shortened_ipv6_cidr(address):
|
||||
net = netaddr.IPNetwork(address, version=6)
|
||||
return str(net.cidr)
|
||||
|
||||
|
||||
def is_valid_cidr(address):
|
||||
"""Check if the provided ipv4 or ipv6 address is a valid CIDR address."""
|
||||
try:
|
||||
# Validate the correct CIDR Address
|
||||
netaddr.IPNetwork(address)
|
||||
except netaddr.core.AddrFormatError:
|
||||
return False
|
||||
except UnboundLocalError:
|
||||
# NOTE(MotoKen): work around bug in netaddr 0.7.5 (see detail in
|
||||
# https://github.com/drkjam/netaddr/issues/2)
|
||||
return False
|
||||
|
||||
# Prior validation partially verifies the /xx part
|
||||
# Verify it here
|
||||
ip_segment = address.split('/')
|
||||
|
||||
if (len(ip_segment) <= 1 or
|
||||
ip_segment[1] == ''):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def get_ip_version(network):
|
||||
"""Returns the IP version of a network (IPv4 or IPv6).
|
||||
|
||||
:raises: AddrFormatError if invalid network.
|
||||
"""
|
||||
if netaddr.IPNetwork(network).version == 6:
|
||||
return "IPv6"
|
||||
elif netaddr.IPNetwork(network).version == 4:
|
||||
return "IPv4"
|
||||
|
||||
|
||||
def convert_to_list_dict(lst, label):
|
||||
"""Convert a value or list into a list of dicts."""
|
||||
if not lst:
|
||||
return None
|
||||
if not isinstance(lst, list):
|
||||
lst = [lst]
|
||||
return [{label: x} for x in lst]
|
||||
|
||||
|
||||
def sanitize_hostname(hostname):
|
||||
"""Return a hostname which conforms to RFC-952 and RFC-1123 specs."""
|
||||
if isinstance(hostname, six.text_type):
|
||||
hostname = hostname.encode('latin-1', 'ignore')
|
||||
|
||||
hostname = re.sub('[ _]', '-', hostname)
|
||||
hostname = re.sub('[^\w.-]+', '', hostname)
|
||||
hostname = hostname.lower()
|
||||
hostname = hostname.strip('.-')
|
||||
|
||||
return hostname
|
||||
|
||||
|
||||
def read_cached_file(filename, cache_info, reload_func=None):
|
||||
"""Read from a file if it has been modified.
|
||||
|
||||
:param cache_info: dictionary to hold opaque cache.
|
||||
:param reload_func: optional function to be called with data when
|
||||
file is reloaded due to a modification.
|
||||
|
||||
:returns: data from file
|
||||
|
||||
"""
|
||||
mtime = os.path.getmtime(filename)
|
||||
if not cache_info or mtime != cache_info.get('mtime'):
|
||||
LOG.debug("Reloading cached file %s" % filename)
|
||||
with open(filename) as fap:
|
||||
cache_info['data'] = fap.read()
|
||||
cache_info['mtime'] = mtime
|
||||
if reload_func:
|
||||
reload_func(cache_info['data'])
|
||||
return cache_info['data']
|
||||
|
||||
|
||||
def file_open(*args, **kwargs):
|
||||
"""Open file
|
||||
|
||||
see built-in file() documentation for more details
|
||||
|
||||
Note: The reason this is kept in a separate module is to easily
|
||||
be able to provide a stub module that doesn't alter system
|
||||
state at all (for unit tests)
|
||||
"""
|
||||
return file(*args, **kwargs)
|
||||
|
||||
|
||||
def hash_file(file_like_object):
|
||||
"""Generate a hash for the contents of a file."""
|
||||
checksum = hashlib.sha1()
|
||||
for chunk in iter(lambda: file_like_object.read(32768), b''):
|
||||
checksum.update(chunk)
|
||||
return checksum.hexdigest()
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def temporary_mutation(obj, **kwargs):
|
||||
"""Temporarily change object attribute.
|
||||
|
||||
Temporarily set the attr on a particular object to a given value then
|
||||
revert when finished.
|
||||
|
||||
One use of this is to temporarily set the read_deleted flag on a context
|
||||
object:
|
||||
|
||||
with temporary_mutation(context, read_deleted="yes"):
|
||||
do_something_that_needed_deleted_objects()
|
||||
"""
|
||||
def is_dict_like(thing):
|
||||
return hasattr(thing, 'has_key')
|
||||
|
||||
def get(thing, attr, default):
|
||||
if is_dict_like(thing):
|
||||
return thing.get(attr, default)
|
||||
else:
|
||||
return getattr(thing, attr, default)
|
||||
|
||||
def set_value(thing, attr, val):
|
||||
if is_dict_like(thing):
|
||||
thing[attr] = val
|
||||
else:
|
||||
setattr(thing, attr, val)
|
||||
|
||||
def delete(thing, attr):
|
||||
if is_dict_like(thing):
|
||||
del thing[attr]
|
||||
else:
|
||||
delattr(thing, attr)
|
||||
|
||||
NOT_PRESENT = object()
|
||||
|
||||
old_values = {}
|
||||
for attr, new_value in kwargs.items():
|
||||
old_values[attr] = get(obj, attr, NOT_PRESENT)
|
||||
set_value(obj, attr, new_value)
|
||||
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
for attr, old_value in old_values.items():
|
||||
if old_value is NOT_PRESENT:
|
||||
delete(obj, attr)
|
||||
else:
|
||||
set_value(obj, attr, old_value)
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def tempdir(**kwargs):
|
||||
tempfile.tempdir = CONF.tempdir
|
||||
tmpdir = tempfile.mkdtemp(**kwargs)
|
||||
try:
|
||||
yield tmpdir
|
||||
finally:
|
||||
try:
|
||||
shutil.rmtree(tmpdir)
|
||||
except OSError as e:
|
||||
LOG.error(_LE('Could not remove tmpdir: %s'), e)
|
||||
|
||||
|
||||
def mkfs(fs, path, label=None):
|
||||
"""Format a file or block device
|
||||
|
||||
:param fs: Filesystem type (examples include 'swap', 'ext3', 'ext4'
|
||||
'btrfs', etc.)
|
||||
:param path: Path to file or block device to format
|
||||
:param label: Volume label to use
|
||||
"""
|
||||
if fs == 'swap':
|
||||
args = ['mkswap']
|
||||
else:
|
||||
args = ['mkfs', '-t', fs]
|
||||
# add -F to force no interactive execute on non-block device.
|
||||
if fs in ('ext3', 'ext4'):
|
||||
args.extend(['-F'])
|
||||
if label:
|
||||
if fs in ('msdos', 'vfat'):
|
||||
label_opt = '-n'
|
||||
else:
|
||||
label_opt = '-L'
|
||||
args.extend([label_opt, label])
|
||||
args.append(path)
|
||||
try:
|
||||
execute(*args, run_as_root=True, use_standard_locale=True)
|
||||
except processutils.ProcessExecutionError as e:
|
||||
with excutils.save_and_reraise_exception() as ctx:
|
||||
if os.strerror(errno.ENOENT) in e.stderr:
|
||||
ctx.reraise = False
|
||||
LOG.exception(_LE('Failed to make file system. '
|
||||
'File system %s is not supported.'), fs)
|
||||
raise exception.FileSystemNotSupported(fs=fs)
|
||||
else:
|
||||
LOG.exception(_LE('Failed to create a file system '
|
||||
'in %(path)s. Error: %(error)s'),
|
||||
{'path': path, 'error': e})
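A short usage sketch for mkfs(); the device paths are placeholders:

# ext3/ext4 get '-F' added and '-L' is used for the label, so this runs
# roughly: mkfs -t ext4 -F -L ephemeral0 /dev/sda2
mkfs('ext4', '/dev/sda2', label='ephemeral0')

# 'swap' is special-cased to mkswap: mkswap -L swap1 /dev/sda1
mkfs('swap', '/dev/sda1', label='swap1')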
|
||||
|
||||
|
||||
def unlink_without_raise(path):
|
||||
try:
|
||||
os.unlink(path)
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
return
|
||||
else:
|
||||
LOG.warn(_LW("Failed to unlink %(path)s, error: %(e)s"),
|
||||
{'path': path, 'e': e})
|
||||
|
||||
|
||||
def rmtree_without_raise(path):
|
||||
try:
|
||||
if os.path.isdir(path):
|
||||
shutil.rmtree(path)
|
||||
except OSError as e:
|
||||
LOG.warn(_LW("Failed to remove dir %(path)s, error: %(e)s"),
|
||||
{'path': path, 'e': e})
|
||||
|
||||
|
||||
def write_to_file(path, contents):
|
||||
with open(path, 'w') as f:
|
||||
f.write(contents)
|
||||
|
||||
|
||||
def create_link_without_raise(source, link):
|
||||
try:
|
||||
os.symlink(source, link)
|
||||
except OSError as e:
|
||||
if e.errno == errno.EEXIST:
|
||||
return
|
||||
else:
|
||||
LOG.warn(_LW("Failed to create symlink from %(source)s to %(link)s"
|
||||
", error: %(e)s"),
|
||||
{'source': source, 'link': link, 'e': e})
|
||||
|
||||
|
||||
def safe_rstrip(value, chars=None):
|
||||
"""Removes trailing characters from a string if that does not make it empty
|
||||
|
||||
:param value: A string value that will be stripped.
|
||||
:param chars: Characters to remove.
|
||||
:return: Stripped value.
|
||||
|
||||
"""
|
||||
if not isinstance(value, six.string_types):
|
||||
LOG.warn(_LW("Failed to remove trailing character. Returning original "
|
||||
"object. Supplied object is not a string: %s,"), value)
|
||||
return value
|
||||
|
||||
return value.rstrip(chars) or value
|
||||
|
||||
|
||||
def mount(src, dest, *args):
|
||||
"""Mounts a device/image file on specified location.
|
||||
|
||||
:param src: the path to the source file for mounting
|
||||
:param dest: the path where it needs to be mounted.
|
||||
:param args: a tuple containing the arguments to be
|
||||
passed to mount command.
|
||||
:raises: processutils.ProcessExecutionError if it failed
|
||||
to run the process.
|
||||
"""
|
||||
args = ('mount', ) + args + (src, dest)
|
||||
execute(*args, run_as_root=True, check_exit_code=[0])
|
||||
|
||||
|
||||
def umount(loc, *args):
|
||||
"""Umounts a mounted location.
|
||||
|
||||
:param loc: the path to be unmounted.
|
||||
:param args: a tuple containing the arguments to be
|
||||
passed to the umount command.
|
||||
:raises: processutils.ProcessExecutionError if it failed
|
||||
to run the process.
|
||||
"""
|
||||
args = ('umount', ) + args + (loc, )
|
||||
execute(*args, run_as_root=True, check_exit_code=[0])
|
||||
|
||||
|
||||
def dd(src, dst, *args):
|
||||
"""Execute dd from src to dst.
|
||||
|
||||
:param src: the input file for dd command.
|
||||
:param dst: the output file for dd command.
|
||||
:param args: a tuple containing the arguments to be
|
||||
passed to dd command.
|
||||
:raises: processutils.ProcessExecutionError if it failed
|
||||
to run the process.
|
||||
"""
|
||||
LOG.debug("Starting dd process.")
|
||||
execute('dd', 'if=%s' % src, 'of=%s' % dst, *args,
|
||||
run_as_root=True, check_exit_code=[0])
|
||||
|
||||
|
||||
def is_http_url(url):
|
||||
url = url.lower()
|
||||
return url.startswith('http://') or url.startswith('https://')
|
@@ -1,730 +0,0 @@
|
||||
# Copyright (c) 2012 NTT DOCOMO, INC.
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
|
||||
import base64
|
||||
import gzip
|
||||
import math
|
||||
import os
|
||||
import re
|
||||
import shutil
|
||||
import socket
|
||||
import stat
|
||||
import tempfile
|
||||
import time
|
||||
|
||||
from oslo_concurrency import processutils
|
||||
from oslo_config import cfg
|
||||
from oslo_serialization import jsonutils
|
||||
from oslo_utils import excutils
|
||||
from oslo_utils import units
|
||||
import requests
|
||||
import six
|
||||
|
||||
from ironic.common import disk_partitioner
|
||||
from ironic.common import exception
|
||||
from ironic.common.i18n import _
|
||||
from ironic.common.i18n import _LE
|
||||
from ironic.common import images
|
||||
from ironic.common import states
|
||||
from ironic.common import utils
|
||||
from ironic.conductor import utils as manager_utils
|
||||
from ironic.drivers.modules import image_cache
|
||||
from ironic.openstack.common import log as logging
|
||||
|
||||
|
||||
deploy_opts = [
|
||||
cfg.StrOpt('dd_block_size',
|
||||
default='1M',
|
||||
help='Block size to use when writing to the nodes disk.'),
|
||||
cfg.IntOpt('iscsi_verify_attempts',
|
||||
default=3,
|
||||
help='Maximum attempts to verify an iSCSI connection is '
|
||||
'active, sleeping 1 second between attempts.'),
|
||||
]
|
||||
|
||||
CONF = cfg.CONF
|
||||
CONF.register_opts(deploy_opts, group='deploy')
|
||||
|
||||
LOG = logging.getLogger(__name__)
|
||||
|
||||
|
||||
# All functions are called from deploy() directly or indirectly.
|
||||
# They are split for stub-out.
|
||||
|
||||
def discovery(portal_address, portal_port):
|
||||
"""Do iSCSI discovery on portal."""
|
||||
utils.execute('iscsiadm',
|
||||
'-m', 'discovery',
|
||||
'-t', 'st',
|
||||
'-p', '%s:%s' % (portal_address, portal_port),
|
||||
run_as_root=True,
|
||||
check_exit_code=[0],
|
||||
attempts=5,
|
||||
delay_on_retry=True)
|
||||
|
||||
|
||||
def login_iscsi(portal_address, portal_port, target_iqn):
|
||||
"""Login to an iSCSI target."""
|
||||
utils.execute('iscsiadm',
|
||||
'-m', 'node',
|
||||
'-p', '%s:%s' % (portal_address, portal_port),
|
||||
'-T', target_iqn,
|
||||
'--login',
|
||||
run_as_root=True,
|
||||
check_exit_code=[0],
|
||||
attempts=5,
|
||||
delay_on_retry=True)
|
||||
# Ensure the login complete
|
||||
verify_iscsi_connection(target_iqn)
|
||||
# force iSCSI initiator to re-read luns
|
||||
force_iscsi_lun_update(target_iqn)
|
||||
# ensure file system sees the block device
|
||||
check_file_system_for_iscsi_device(portal_address,
|
||||
portal_port,
|
||||
target_iqn)
|
||||
|
||||
|
||||
def check_file_system_for_iscsi_device(portal_address,
|
||||
portal_port,
|
||||
target_iqn):
|
||||
"""Ensure the file system sees the iSCSI block device."""
|
||||
check_dir = "/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-1" % (portal_address,
|
||||
portal_port,
|
||||
target_iqn)
|
||||
total_checks = CONF.deploy.iscsi_verify_attempts
|
||||
for attempt in range(total_checks):
|
||||
if os.path.exists(check_dir):
|
||||
break
|
||||
time.sleep(1)
|
||||
LOG.debug("iSCSI connection not seen by file system. Rechecking. "
|
||||
"Attempt %(attempt)d out of %(total)d",
|
||||
{"attempt": attempt + 1,
|
||||
"total": total_checks})
|
||||
else:
|
||||
msg = _("iSCSI connection was not seen by the file system after "
|
||||
"attempting to verify %d times.") % total_checks
|
||||
LOG.error(msg)
|
||||
raise exception.InstanceDeployFailure(msg)
|
||||
|
||||
|
||||
def verify_iscsi_connection(target_iqn):
|
||||
"""Verify iscsi connection."""
|
||||
LOG.debug("Checking for iSCSI target to become active.")
|
||||
|
||||
for attempt in range(CONF.deploy.iscsi_verify_attempts):
|
||||
out, _err = utils.execute('iscsiadm',
|
||||
'-m', 'node',
|
||||
'-S',
|
||||
run_as_root=True,
|
||||
check_exit_code=[0])
|
||||
if target_iqn in out:
|
||||
break
|
||||
time.sleep(1)
|
||||
LOG.debug("iSCSI connection not active. Rechecking. Attempt "
|
||||
"%(attempt)d out of %(total)d", {"attempt": attempt + 1,
|
||||
"total": CONF.deploy.iscsi_verify_attempts})
|
||||
else:
|
||||
msg = _("iSCSI connection did not become active after attempting to "
|
||||
"verify %d times.") % CONF.deploy.iscsi_verify_attempts
|
||||
LOG.error(msg)
|
||||
raise exception.InstanceDeployFailure(msg)
|
||||
|
||||
|
||||
def force_iscsi_lun_update(target_iqn):
|
||||
"""force iSCSI initiator to re-read luns."""
|
||||
LOG.debug("Re-reading iSCSI luns.")
|
||||
|
||||
utils.execute('iscsiadm',
|
||||
'-m', 'node',
|
||||
'-T', target_iqn,
|
||||
'-R',
|
||||
run_as_root=True,
|
||||
check_exit_code=[0])
|
||||
|
||||
|
||||
def logout_iscsi(portal_address, portal_port, target_iqn):
|
||||
"""Logout from an iSCSI target."""
|
||||
utils.execute('iscsiadm',
|
||||
'-m', 'node',
|
||||
'-p', '%s:%s' % (portal_address, portal_port),
|
||||
'-T', target_iqn,
|
||||
'--logout',
|
||||
run_as_root=True,
|
||||
check_exit_code=[0],
|
||||
attempts=5,
|
||||
delay_on_retry=True)
|
||||
|
||||
|
||||
def delete_iscsi(portal_address, portal_port, target_iqn):
|
||||
"""Delete the iSCSI target."""
|
||||
# Retry delete until it succeeds (exit code 0) or until there is
|
||||
# no longer a target to delete (exit code 21).
|
||||
utils.execute('iscsiadm',
|
||||
'-m', 'node',
|
||||
'-p', '%s:%s' % (portal_address, portal_port),
|
||||
'-T', target_iqn,
|
||||
'-o', 'delete',
|
||||
run_as_root=True,
|
||||
check_exit_code=[0, 21],
|
||||
attempts=5,
|
||||
delay_on_retry=True)
|
||||
|
||||
|
||||
def make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
|
||||
configdrive_mb, commit=True):
|
||||
"""Partition the disk device.
|
||||
|
||||
Create partitions for root, swap, ephemeral and configdrive on a
|
||||
disk device.
|
||||
|
||||
:param root_mb: Size of the root partition in mebibytes (MiB).
|
||||
:param swap_mb: Size of the swap partition in mebibytes (MiB). If 0,
|
||||
no partition will be created.
|
||||
:param ephemeral_mb: Size of the ephemeral partition in mebibytes (MiB).
|
||||
If 0, no partition will be created.
|
||||
:param configdrive_mb: Size of the configdrive partition in
|
||||
mebibytes (MiB). If 0, no partition will be created.
|
||||
:param commit: True/False. Default for this setting is True. If False
|
||||
partitions will not be written to disk.
|
||||
:returns: A dictionary containing the partition type as Key and partition
|
||||
path as Value for the partitions created by this method.
|
||||
|
||||
"""
|
||||
LOG.debug("Starting to partition the disk device: %(dev)s",
|
||||
{'dev': dev})
|
||||
part_template = dev + '-part%d'
|
||||
part_dict = {}
|
||||
dp = disk_partitioner.DiskPartitioner(dev)
|
||||
if ephemeral_mb:
|
||||
LOG.debug("Add ephemeral partition (%(size)d MB) to device: %(dev)s",
|
||||
{'dev': dev, 'size': ephemeral_mb})
|
||||
part_num = dp.add_partition(ephemeral_mb)
|
||||
part_dict['ephemeral'] = part_template % part_num
|
||||
if swap_mb:
|
||||
LOG.debug("Add Swap partition (%(size)d MB) to device: %(dev)s",
|
||||
{'dev': dev, 'size': swap_mb})
|
||||
part_num = dp.add_partition(swap_mb, fs_type='linux-swap')
|
||||
part_dict['swap'] = part_template % part_num
|
||||
if configdrive_mb:
|
||||
LOG.debug("Add config drive partition (%(size)d MB) to device: "
|
||||
"%(dev)s", {'dev': dev, 'size': configdrive_mb})
|
||||
part_num = dp.add_partition(configdrive_mb)
|
||||
part_dict['configdrive'] = part_template % part_num
|
||||
|
||||
# NOTE(lucasagomes): Make the root partition the last partition. This
|
||||
# enables tools like cloud-init's growroot utility to expand the root
|
||||
# partition until the end of the disk.
|
||||
LOG.debug("Add root partition (%(size)d MB) to device: %(dev)s",
|
||||
{'dev': dev, 'size': root_mb})
|
||||
part_num = dp.add_partition(root_mb)
|
||||
part_dict['root'] = part_template % part_num
|
||||
|
||||
if commit:
|
||||
# write to the disk
|
||||
dp.commit()
|
||||
return part_dict
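To illustrate the return value (device path and sizes are placeholders), a call with all optional partitions enabled yields a dict keyed by partition role, with the root partition deliberately created last:

part_dict = make_partitions('/dev/disk/by-path/ip-192.0.2.5:3260-iscsi-iqn.example-lun-1',
                            root_mb=10240, swap_mb=1024,
                            ephemeral_mb=2048, configdrive_mb=64,
                            commit=True)
# Assuming partition numbers are assigned in creation order, the result
# looks like:
# {'ephemeral': '<dev>-part1', 'swap': '<dev>-part2',
#  'configdrive': '<dev>-part3', 'root': '<dev>-part4'}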
|
||||
|
||||
|
||||
def is_block_device(dev):
|
||||
"""Check whether a device is block or not."""
|
||||
attempts = CONF.deploy.iscsi_verify_attempts
|
||||
for attempt in range(attempts):
|
||||
try:
|
||||
s = os.stat(dev)
|
||||
except OSError as e:
|
||||
LOG.debug("Unable to stat device %(dev)s. Attempt %(attempt)d "
|
||||
"out of %(total)d. Error: %(err)s", {"dev": dev,
|
||||
"attempt": attempt + 1, "total": attempts, "err": e})
|
||||
time.sleep(1)
|
||||
else:
|
||||
return stat.S_ISBLK(s.st_mode)
|
||||
msg = _("Unable to stat device %(dev)s after attempting to verify "
|
||||
"%(attempts)d times.") % {'dev': dev, 'attempts': attempts}
|
||||
LOG.error(msg)
|
||||
raise exception.InstanceDeployFailure(msg)
|
||||
|
||||
|
||||
def dd(src, dst):
|
||||
"""Execute dd from src to dst."""
|
||||
utils.dd(src, dst, 'bs=%s' % CONF.deploy.dd_block_size, 'oflag=direct')
|
||||
|
||||
|
||||
def populate_image(src, dst):
|
||||
data = images.qemu_img_info(src)
|
||||
if data.file_format == 'raw':
|
||||
dd(src, dst)
|
||||
else:
|
||||
images.convert_image(src, dst, 'raw', True)
|
||||
|
||||
|
||||
def mkswap(dev, label='swap1'):
|
||||
"""Execute mkswap on a device."""
|
||||
utils.mkfs('swap', dev, label)
|
||||
|
||||
|
||||
def mkfs_ephemeral(dev, ephemeral_format, label="ephemeral0"):
|
||||
utils.mkfs(ephemeral_format, dev, label)
|
||||
|
||||
|
||||
def block_uuid(dev):
|
||||
"""Get UUID of a block device."""
|
||||
out, _err = utils.execute('blkid', '-s', 'UUID', '-o', 'value', dev,
|
||||
run_as_root=True,
|
||||
check_exit_code=[0])
|
||||
return out.strip()
|
||||
|
||||
|
||||
def switch_pxe_config(path, root_uuid, boot_mode):
|
||||
"""Switch a pxe config from deployment mode to service mode."""
|
||||
with open(path) as f:
|
||||
lines = f.readlines()
|
||||
root = 'UUID=%s' % root_uuid
|
||||
rre = re.compile(r'\{\{ ROOT \}\}')
|
||||
|
||||
if boot_mode == 'uefi':
|
||||
dre = re.compile('^default=.*$')
|
||||
boot_line = 'default=boot'
|
||||
else:
|
||||
pxe_cmd = 'goto' if CONF.pxe.ipxe_enabled else 'default'
|
||||
dre = re.compile('^%s .*$' % pxe_cmd)
|
||||
boot_line = '%s boot' % pxe_cmd
|
||||
|
||||
with open(path, 'w') as f:
|
||||
for line in lines:
|
||||
line = rre.sub(root, line)
|
||||
line = dre.sub(boot_line, line)
|
||||
f.write(line)
|
||||
|
||||
|
||||
def notify(address, port):
|
||||
"""Notify a node that it becomes ready to reboot."""
|
||||
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
try:
|
||||
s.connect((address, port))
|
||||
s.send('done')
|
||||
finally:
|
||||
s.close()
|
||||
|
||||
|
||||
def get_dev(address, port, iqn, lun):
|
||||
"""Returns a device path for given parameters."""
|
||||
dev = ("/dev/disk/by-path/ip-%s:%s-iscsi-%s-lun-%s"
|
||||
% (address, port, iqn, lun))
|
||||
return dev
|
||||
|
||||
|
||||
def get_image_mb(image_path, virtual_size=True):
|
||||
"""Get size of an image in Megabyte."""
|
||||
mb = 1024 * 1024
|
||||
if not virtual_size:
|
||||
image_byte = os.path.getsize(image_path)
|
||||
else:
|
||||
image_byte = images.converted_size(image_path)
|
||||
# round up size to MB
|
||||
image_mb = int((image_byte + mb - 1) / mb)
|
||||
return image_mb
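A quick worked example of the rounding above, assuming a 1,500,000 byte image:

# (image_byte + mb - 1) divided by mb rounds up to whole megabytes
# without needing math.ceil:
mb = 1024 * 1024
image_byte = 1500000
assert int((image_byte + mb - 1) / mb) == 2   # about 1.43 MB rounds up to 2 MB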
|
||||
|
||||
|
||||
def get_dev_block_size(dev):
|
||||
"""Get the device size in 512 byte sectors."""
|
||||
block_sz, cmderr = utils.execute('blockdev', '--getsz', dev,
|
||||
run_as_root=True, check_exit_code=[0])
|
||||
return int(block_sz)
|
||||
|
||||
|
||||
def destroy_disk_metadata(dev, node_uuid):
|
||||
"""Destroy metadata structures on node's disk.
|
||||
|
||||
Ensure that node's disk appears to be blank without zeroing the entire
|
||||
drive. To do this we will zero:
|
||||
- the first 18KiB to clear MBR / GPT data
|
||||
- the last 18KiB to clear GPT and other metadata like: LVM, veritas,
|
||||
MDADM, DMRAID, ...
|
||||
"""
|
||||
# NOTE(NobodyCam): This is needed to work around bug:
|
||||
# https://bugs.launchpad.net/ironic/+bug/1317647
|
||||
LOG.debug("Start destroy disk metadata for node %(node)s.",
|
||||
{'node': node_uuid})
|
||||
try:
|
||||
utils.execute('dd', 'if=/dev/zero', 'of=%s' % dev,
|
||||
'bs=512', 'count=36', run_as_root=True,
|
||||
check_exit_code=[0])
|
||||
except processutils.ProcessExecutionError as err:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.error(_LE("Failed to erase beginning of disk for node "
|
||||
"%(node)s. Command: %(command)s. Error: %(error)s."),
|
||||
{'node': node_uuid,
|
||||
'command': err.cmd,
|
||||
'error': err.stderr})
|
||||
|
||||
# now wipe the end of the disk.
|
||||
# get end of disk seek value
|
||||
try:
|
||||
block_sz = get_dev_block_size(dev)
|
||||
except processutils.ProcessExecutionError as err:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.error(_LE("Failed to get disk block count for node %(node)s. "
|
||||
"Command: %(command)s. Error: %(error)s."),
|
||||
{'node': node_uuid,
|
||||
'command': err.cmd,
|
||||
'error': err.stderr})
|
||||
else:
|
||||
seek_value = block_sz - 36
|
||||
try:
|
||||
utils.execute('dd', 'if=/dev/zero', 'of=%s' % dev,
|
||||
'bs=512', 'count=36', 'seek=%d' % seek_value,
|
||||
run_as_root=True, check_exit_code=[0])
|
||||
except processutils.ProcessExecutionError as err:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.error(_LE("Failed to erase the end of the disk on node "
|
||||
"%(node)s. Command: %(command)s. "
|
||||
"Error: %(error)s."),
|
||||
{'node': node_uuid,
|
||||
'command': err.cmd,
|
||||
'error': err.stderr})
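For clarity about the magic numbers above: 36 sectors of 512 bytes are the 18 KiB mentioned in the docstring, wiped once at the start of the device and once at the end (seek = device size in sectors minus 36):

assert 36 * 512 == 18 * 1024   # 18432 bytes cleared at each end of the disk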
|
||||
|
||||
|
||||
def _get_configdrive(configdrive, node_uuid):
|
||||
"""Get the information about size and location of the configdrive.
|
||||
|
||||
:param configdrive: Base64 encoded Gzipped configdrive content or
|
||||
configdrive HTTP URL.
|
||||
:param node_uuid: Node's uuid. Used for logging.
|
||||
:raises: InstanceDeployFailure if it can't download or decode the
|
||||
config drive.
|
||||
:returns: A tuple with the size in MiB and path to the uncompressed
|
||||
configdrive file.
|
||||
|
||||
"""
|
||||
# Check if the configdrive option is a HTTP URL or the content directly
|
||||
is_url = utils.is_http_url(configdrive)
|
||||
if is_url:
|
||||
try:
|
||||
data = requests.get(configdrive).content
|
||||
except requests.exceptions.RequestException as e:
|
||||
raise exception.InstanceDeployFailure(
|
||||
_("Can't download the configdrive content for node %(node)s "
|
||||
"from '%(url)s'. Reason: %(reason)s") %
|
||||
{'node': node_uuid, 'url': configdrive, 'reason': e})
|
||||
else:
|
||||
data = configdrive
|
||||
|
||||
try:
|
||||
data = six.StringIO(base64.b64decode(data))
|
||||
except TypeError:
|
||||
error_msg = (_('Config drive for node %s is not base64 encoded '
|
||||
'or the content is malformed.') % node_uuid)
|
||||
if is_url:
|
||||
error_msg += _(' Downloaded from "%s".') % configdrive
|
||||
raise exception.InstanceDeployFailure(error_msg)
|
||||
|
||||
configdrive_file = tempfile.NamedTemporaryFile(delete=False,
|
||||
prefix='configdrive')
|
||||
configdrive_mb = 0
|
||||
with gzip.GzipFile('configdrive', 'rb', fileobj=data) as gunzipped:
|
||||
try:
|
||||
shutil.copyfileobj(gunzipped, configdrive_file)
|
||||
except EnvironmentError as e:
|
||||
# Delete the created file
|
||||
utils.unlink_without_raise(configdrive_file.name)
|
||||
raise exception.InstanceDeployFailure(
|
||||
_('Encountered error while decompressing and writing '
|
||||
'config drive for node %(node)s. Error: %(exc)s') %
|
||||
{'node': node_uuid, 'exc': e})
|
||||
else:
|
||||
# Get the file size and convert to MiB
|
||||
configdrive_file.seek(0, os.SEEK_END)
|
||||
bytes_ = configdrive_file.tell()
|
||||
configdrive_mb = int(math.ceil(float(bytes_) / units.Mi))
|
||||
finally:
|
||||
configdrive_file.close()
|
||||
|
||||
return (configdrive_mb, configdrive_file.name)
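As an illustrative call (URL and UUID are placeholders), either accepted configdrive form resolves to the same kind of result:

# 'configdrive' may be raw base64-encoded gzipped content or an HTTP URL;
# for a URL the content is downloaded first.
size_mb, path = _get_configdrive('http://192.0.2.1/configdrive.gz.b64',
                                 node_uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c123')
# 'path' points at an uncompressed temporary file; work_on_disk() dd's it
# into the configdrive partition and unlinks it afterwards.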
|
||||
|
||||
|
||||
def work_on_disk(dev, root_mb, swap_mb, ephemeral_mb, ephemeral_format,
|
||||
image_path, node_uuid, preserve_ephemeral=False,
|
||||
configdrive=None):
|
||||
"""Create partitions and copy an image to the root partition.
|
||||
|
||||
:param dev: Path for the device to work on.
|
||||
:param root_mb: Size of the root partition in megabytes.
|
||||
:param swap_mb: Size of the swap partition in megabytes.
|
||||
:param ephemeral_mb: Size of the ephemeral partition in megabytes. If 0,
|
||||
no ephemeral partition will be created.
|
||||
:param ephemeral_format: The type of file system to format the ephemeral
|
||||
partition.
|
||||
:param image_path: Path for the instance's disk image.
|
||||
:param node_uuid: node's uuid. Used for logging.
|
||||
:param preserve_ephemeral: If True, no filesystem is written to the
|
||||
ephemeral block device, preserving whatever content it had (if the
|
||||
partition table has not changed).
|
||||
:param configdrive: Optional. Base64 encoded Gzipped configdrive content
|
||||
or configdrive HTTP URL.
|
||||
:returns: the UUID of the root partition.
|
||||
"""
|
||||
if not is_block_device(dev):
|
||||
raise exception.InstanceDeployFailure(
|
||||
_("Parent device '%s' not found") % dev)
|
||||
|
||||
# the only way for preserve_ephemeral to be set to true is if we are
|
||||
# rebuilding an instance with --preserve_ephemeral.
|
||||
commit = not preserve_ephemeral
|
||||
# now if we are committing the changes to disk clean first.
|
||||
if commit:
|
||||
destroy_disk_metadata(dev, node_uuid)
|
||||
|
||||
try:
|
||||
# If requested, get the configdrive file and determine the size
|
||||
# of the configdrive partition
|
||||
configdrive_mb = 0
|
||||
configdrive_file = None
|
||||
if configdrive:
|
||||
configdrive_mb, configdrive_file = _get_configdrive(configdrive,
|
||||
node_uuid)
|
||||
|
||||
part_dict = make_partitions(dev, root_mb, swap_mb, ephemeral_mb,
|
||||
configdrive_mb, commit=commit)
|
||||
|
||||
ephemeral_part = part_dict.get('ephemeral')
|
||||
swap_part = part_dict.get('swap')
|
||||
configdrive_part = part_dict.get('configdrive')
|
||||
root_part = part_dict.get('root')
|
||||
|
||||
if not is_block_device(root_part):
|
||||
raise exception.InstanceDeployFailure(
|
||||
_("Root device '%s' not found") % root_part)
|
||||
|
||||
for part in ('swap', 'ephemeral', 'configdrive'):
|
||||
part_device = part_dict.get(part)
|
||||
LOG.debug("Checking for %(part)s device (%(dev)s) on node "
|
||||
"%(node)s.", {'part': part, 'dev': part_device,
|
||||
'node': node_uuid})
|
||||
if part_device and not is_block_device(part_device):
|
||||
raise exception.InstanceDeployFailure(
|
||||
_("'%(partition)s' device '%(part_device)s' not found") %
|
||||
{'partition': part, 'part_device': part_device})
|
||||
|
||||
if configdrive_part:
|
||||
# Copy the configdrive content to the configdrive partition
|
||||
dd(configdrive_file, configdrive_part)
|
||||
|
||||
finally:
|
||||
# If the configdrive was requested make sure we delete the file
|
||||
# after copying the content to the partition
|
||||
if configdrive_file:
|
||||
utils.unlink_without_raise(configdrive_file)
|
||||
|
||||
populate_image(image_path, root_part)
|
||||
|
||||
if swap_part:
|
||||
mkswap(swap_part)
|
||||
|
||||
if ephemeral_part and not preserve_ephemeral:
|
||||
mkfs_ephemeral(ephemeral_part, ephemeral_format)
|
||||
|
||||
try:
|
||||
root_uuid = block_uuid(root_part)
|
||||
except processutils.ProcessExecutionError:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.error(_LE("Failed to detect root device UUID."))
|
||||
|
||||
return root_uuid
|
||||
|
||||
|
||||
def deploy(address, port, iqn, lun, image_path,
|
||||
root_mb, swap_mb, ephemeral_mb, ephemeral_format, node_uuid,
|
||||
preserve_ephemeral=False, configdrive=None):
|
||||
"""All-in-one function to deploy a node.
|
||||
|
||||
:param address: The iSCSI IP address.
|
||||
:param port: The iSCSI port number.
|
||||
:param iqn: The iSCSI qualified name.
|
||||
:param lun: The iSCSI logical unit number.
|
||||
:param image_path: Path for the instance's disk image.
|
||||
:param root_mb: Size of the root partition in megabytes.
|
||||
:param swap_mb: Size of the swap partition in megabytes.
|
||||
:param ephemeral_mb: Size of the ephemeral partition in megabytes. If 0,
|
||||
no ephemeral partition will be created.
|
||||
:param ephemeral_format: The type of file system to format the ephemeral
|
||||
partition.
|
||||
:param node_uuid: node's uuid. Used for logging.
|
||||
:param preserve_ephemeral: If True, no filesystem is written to the
|
||||
ephemeral block device, preserving whatever content it had (if the
|
||||
partition table has not changed).
|
||||
:param configdrive: Optional. Base64 encoded Gzipped configdrive content
|
||||
or configdrive HTTP URL.
|
||||
:returns: the UUID of the root partition.
|
||||
"""
|
||||
dev = get_dev(address, port, iqn, lun)
|
||||
image_mb = get_image_mb(image_path)
|
||||
if image_mb > root_mb:
|
||||
root_mb = image_mb
|
||||
discovery(address, port)
|
||||
login_iscsi(address, port, iqn)
|
||||
try:
|
||||
root_uuid = work_on_disk(dev, root_mb, swap_mb, ephemeral_mb,
|
||||
ephemeral_format, image_path, node_uuid,
|
||||
preserve_ephemeral=preserve_ephemeral,
|
||||
configdrive=configdrive)
|
||||
except processutils.ProcessExecutionError as err:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.error(_LE("Deploy to address %s failed."), address)
|
||||
LOG.error(_LE("Command: %s"), err.cmd)
|
||||
LOG.error(_LE("StdOut: %r"), err.stdout)
|
||||
LOG.error(_LE("StdErr: %r"), err.stderr)
|
||||
except exception.InstanceDeployFailure as e:
|
||||
with excutils.save_and_reraise_exception():
|
||||
LOG.error(_LE("Deploy to address %s failed."), address)
|
||||
LOG.error(e)
|
||||
finally:
|
||||
logout_iscsi(address, port, iqn)
|
||||
delete_iscsi(address, port, iqn)
|
||||
|
||||
return root_uuid
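A hedged end-to-end sketch of how a caller might drive deploy(); every value below is a placeholder:

root_uuid = deploy(address='192.0.2.20', port=3260,
                   iqn='iqn.2008-10.org.openstack:deploy-node-1', lun=1,
                   image_path='/var/lib/ironic/images/instance.img',
                   root_mb=10240, swap_mb=1024,
                   ephemeral_mb=0, ephemeral_format=None,
                   node_uuid='1be26c0b-03f2-4d2e-ae87-c02d7f33c123',
                   configdrive=None)
# deploy() discovers and logs into the iSCSI target, partitions the disk,
# copies the image into the root partition, and always logs out of and
# deletes the target, even when partitioning or imaging fails.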
|
||||
|
||||
|
||||
def notify_deploy_complete(address):
|
||||
"""Notifies the completion of deployment to the baremetal node.
|
||||
|
||||
:param address: The IP address of the node.
|
||||
"""
|
||||
# Ensure the node has started netcat on the port after POSTing the request.
|
||||