Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that explains where to find
the ongoing work and how to recover the repo if needed at some
future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I00ee2fa8e49543d0cc1342f123e8cc3ca697a469
Tony Breeds
2017-09-12 15:58:09 -06:00
parent a6da39acb8
commit afddb333b4
726 changed files with 14 additions and 24776 deletions

.gitignore

@@ -1,24 +0,0 @@
.coverage
coverage.xml
cover/*
*~
.testrepository
*.sw?
#*#
*.pyc
.tox
*.egg
*.egg-info
dist
*.qcow2
*.raw
*.initrd
*.vmlinuz
/*-manifests
/*.d
build
AUTHORS
ChangeLog
bin/diskimage_builder
*.bak
*.orig

.gitreview

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/diskimage-builder.git

.testr.conf

@@ -1,10 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
OS_DEBUG=${OS_DEBUG:-0} \
${PYTHON:-python} -m subunit.run discover . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

LICENSE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

README.rst

@@ -1,51 +0,0 @@
Image building tools for OpenStack
==================================
``diskimage-builder`` is a flexible suite of components for building a
wide range of disk images, filesystem images and ramdisk images for
use with OpenStack.
This repository has the core functionality for building such images,
both virtual and bare metal. Images are composed using `elements`;
while fundamental elements are provided here, individual projects have
the flexibility to customise the image build with their own elements.
For example::
$ DIB_RELEASE=trusty disk-image-create -o ubuntu-trusty.qcow2 vm ubuntu
will create a bootable Ubuntu Trusty based ``qcow2`` image.
``diskimage-builder`` is useful to anyone looking to produce
customised images for deployment into clouds. These tools are the
components of `TripleO <https://wiki.openstack.org/wiki/TripleO>`__
that are responsible for building disk images. They are also used
extensively to build images for testing OpenStack itself, particularly
with `nodepool
<https://docs.openstack.org/infra/system-config/nodepool.html>`__.
Platforms supported include Ubuntu, CentOS, RHEL and Fedora.
Full documentation, the source of which is in ``doc/source/``, is
published at:
* https://docs.openstack.org/diskimage-builder/latest/
Copyright
=========
Copyright 2012 Hewlett-Packard Development Company, L.P.
Copyright (c) 2012 NTT DOCOMO, INC.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.

babel.cfg

@@ -1 +0,0 @@
[python: **.py]

bin/dib-lint

@@ -1,300 +0,0 @@
#!/bin/bash
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script checks all files in the "elements" directory for some
# common mistakes and exits with a non-zero status if it finds any.
set -eu
set -o pipefail
ELEMENTS_DIR=${ELEMENTS_DIR:-diskimage_builder/elements}
LIB_DIR=${LIB_DIR:-diskimage_builder/lib}
parse_exclusions() {
# Per-file exclusions
# Example: # dib-lint: disable=sete setpipefail
local filename=$1
local disable_pattern="# dib-lint: disable="
local exclusions=$(grep "^$disable_pattern.*$" $filename | sed "s/$disable_pattern//g")
# Global exclusions read from tox.ini
# Example section in tox.ini:
# [dib-lint]
# ignore = sete setu
section="dib-lint"
option="ignore"
global_exclusions=$(python - <<EOF
try:
import configparser
except ImportError:
import ConfigParser as configparser
conf=configparser.ConfigParser()
conf.read('tox.ini')
print(conf.get('$section', '$option')) if conf.has_option('$section', '$option') else ''
EOF
)
echo $exclusions $global_exclusions
}
excluded() {
local test_name=$1
for e in $exclusions; do
if [ "$e" = "$test_name" ]; then
return 0
fi
done
return 1
}
error() {
echo -e "ERROR: $1"
rc=1
}
echo "Running dib-lint in $(pwd)"
rc=0
TMPDIR=$(mktemp -d /tmp/tmp.XXXXXXXXXX)
trap "rm -rf $TMPDIR" EXIT
# note .py files are run through flake8 directly in tox.ini
for i in $(find $ELEMENTS_DIR -type f \
-not -name \*.rst \
-not -name \*.yaml \
-not -name \*.py); do
# Skip files in .gitignore
if git check-ignore -q "$i" ; then
echo Skipping $i
continue
fi
echo "Checking $i"
exclusions=("$(parse_exclusions $i)")
# Check that files starting with a shebang are +x
firstline=$(head -n 1 "$i")
if [ "${firstline:0:2}" = "#!" ]; then
if [ ! -x "$i" ] && ! excluded executable; then
error "$i is not executable"
fi
# run flake8 over python files that don't have a .py extension. Note our
# "dib-python" interpreter can confuse the magic matching
# being done in "file" and make it think the file is not
# python; special-case it.
if [[ "$(file -b -k --mime-type $i)" =~ "text/x-python" ]] || \
[[ $firstline =~ "dib-python" ]]; then
flake8 $i || error "$i failed flake8"
else
# Ensure 4 spaces indent are used
if ! excluded indent ; then
indent_regex='^\( \{4\}\)* \{1,3\}[^ ]'
if grep -q "$indent_regex" ${i}; then
error "$i should use 4 spaces indent"
# show the failing lines with line numbers
grep -n "$indent_regex" ${i}
fi
fi
fi
fi
# Check alphabetical ordering of element-deps
if [ $(basename $i) = "element-deps" ]; then
UNSORTED=${TMPDIR}/element-deps.unsorted
SORTED=${TMPDIR}/element-deps.sorted
grep -v -e '^#' -e '^$' $i > ${UNSORTED}
sort ${UNSORTED} > ${SORTED}
if [ -n "$(diff -c ${UNSORTED} ${SORTED})" ]; then
error "$i is not sorted alphabetically"
diff -y ${UNSORTED} ${SORTED}
fi
fi
# for consistency, let's just use #!/bin/bash everywhere (not
# /usr/bin/env, etc)
regex='^#!.*bash'
if [[ "$firstline" =~ $regex &&
"$firstline" != "#!/bin/bash" ]]; then
error "$i : only use #!/bin/bash for scripts"
fi
# Check that all scripts are set -eu -o pipefail and look for
# DIB_DEBUG_TRACE
# NOTE(bnemec): This doesn't verify that the set call occurs high
# enough in the file to be useful, but hopefully nobody will be
# sticking set calls at the end of their file to trick us. And if
# they are, that's easy enough to catch in reviews.
# Also, this is only going to check bash scripts - we've decided to
# explicitly require bash for any scripts that don't have a specific
# need to run under other shells, and any exceptions to that rule
# may not want these checks either.
if [[ "$firstline" =~ '#!/bin/bash' ]]; then
if ! excluded sete; then
if [ -z "$(grep "^set -[^ ]*e" $i)" ]; then
error "$i is not set -e"
fi
fi
if ! excluded setu; then
if [ -z "$(grep "^set -[^ ]*u" $i)" ]; then
error "$i is not set -u"
fi
fi
if ! excluded setpipefail; then
if [ -z "$(grep "^set -o pipefail" $i)" ]; then
error "$i is not set -o pipefail"
fi
fi
if ! excluded dibdebugtrace; then
if [ -z "$(grep "DIB_DEBUG_TRACE" $i)" ]; then
error "$i does not follow DIB_DEBUG_TRACE"
fi
fi
fi
# check that environment files don't "set -x" and have no executable
# bits set
if [[ "$i" =~ (environment.d) ]]; then
if grep -q "set -x" $i; then
error "Environment file $i should not set tracing"
fi
if [[ -x $i ]]; then
error "Environment file $i should not be marked as executable"
fi
fi
# check for
# export FOO=$(bar)
# calls. These are dangerous, because the export hides the return
# code of the $(bar) call. Split this into 2 lines and -e will
# fail on the assignment
if grep -q 'export .*\$(' $i; then
error "Split export and assignments in $i"
fi
# check that sudo calls in phases run outside the chroot look
# "safe"; meaning that they seem to operate within the chroot
# somehow. This is not fool-proof, but catches egregious errors,
# and makes you think about it if you're doing something outside
# the box.
if ! excluded safe_sudo; then
if [[ $(dirname $i) =~ (root.d|extra-data.d|block-device.d|finalise.d|cleanup.d) ]]; then
while read LINE
do
if [[ $LINE =~ "sudo " ]]; then
# messy regex ahead! Don't match:
# - explicitly ignored
# - basic comments
# - install-packages ... sudo ...
# - any of the paths passed into the out-of-chroot elements
if [[ $LINE =~ (dib-lint: safe_sudo|^#|install-packages|TARGET_ROOT|IMAGE_BLOCK_DEVICE|TMP_MOUNT_PATH|TMP_IMAGE_PATH) ]]; then
continue
fi
error "$i : potentially unsafe sudo\n -- $LINE"
fi
done < $i
fi
fi
# check that "which" calls are not used. It is not a shell builtin and
# is missing from some constrained environments
if ! excluded which; then
while read LINE
do
if [[ $LINE =~ "which " ]]; then
# Don't match:
# - explicitly ignored
# - commented
if [[ $LINE =~ (dib-lint: which|^#) ]]; then
continue
fi
error "$i : potential use of which\n -- $LINE"
fi
done < $i
fi
done
echo "Checking indents..."
for i in $(find $ELEMENTS_DIR -type f -and -name '*.rst' -or -type f -executable) \
$(find $LIB_DIR -type f); do
# Skip files in .gitignore
if git check-ignore -q "$i" ; then
echo Skipping $i
continue
fi
# Check for tab indentation
if ! excluded tabindent; then
if grep -q $'^ *\t' ${i}; then
error "$i contains tab characters"
fi
fi
if ! excluded newline; then
if [ "$(tail -c 1 $i)" != "" ]; then
error "No newline at end of file: $i"
fi
fi
done
if ! excluded mddocs; then
md_docs=$(find $ELEMENTS_DIR -name '*.md')
if [ -n "$md_docs" ]; then
error ".md docs found: $md_docs"
fi
fi
echo "Checking YAML parsing..."
for i in $(find $ELEMENTS_DIR -type f -name '*.yaml'); do
echo "Parsing $i"
py_check="
import yaml
import sys
try:
objs = yaml.safe_load(open('$i'))
except yaml.parser.ParserError:
sys.exit(1)
"
if ! python -c "$py_check"; then
error "$i is not a valid YAML file"
fi
done
echo "Checking pkg-map files..."
for i in $(find $ELEMENTS_DIR -type f \
-name 'pkg-map' -a \! -executable); do
echo "Parsing $i"
py_check="
import json
import sys
try:
objs = json.load(open('$i'))
except ValueError:
sys.exit(1)
"
if ! python -c "$py_check"; then
error "$i is not a valid JSON file"
fi
done
if [[ $rc == 0 ]]; then
echo "PASS"
else
echo "*** FAIL: Some tests failed!"
fi
exit $rc
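
For context, the tox.ini lookup embedded in parse_exclusions above boils
down to the following standalone Python sketch (assuming a tox.ini with a
[dib-lint] section in the working directory):

# Sketch of the global-exclusion lookup embedded in parse_exclusions.
try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2 fallback

conf = configparser.ConfigParser()
conf.read('tox.ini')
# Example tox.ini section:
#   [dib-lint]
#   ignore = sete setu
if conf.has_option('dib-lint', 'ignore'):
    print(conf.get('dib-lint', 'ignore'))  # -> "sete setu"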

bindep.txt

@@ -1,6 +0,0 @@
# This is a cross-platform list tracking distribution packages needed by tests;
# see https://docs.openstack.org/infra/bindep/ for additional information.
squashfs-tools [!platform:suse]
squashfs [platform:suse]
zypper [!platform:redhat !platform:ubuntu-trusty]
gnupg2 [!platform:redhat !platform:ubuntu-trusty !platform:suse]

contrib/setup-gate-mirrors.sh

@@ -1,81 +0,0 @@
#!/bin/bash
set -x
#
# This tool creates repo/sources files that point to the mirrors for
# the host region in the OpenStack CI gate.
#
# This is pre-created on CI nodes by slave scripts
source /etc/ci/mirror_info.sh
# Tests should probe for this directory and then use the repos/sources
# files inside it for the gate tests.
BASE_DIR=$WORKSPACE/dib-mirror
mkdir -p $BASE_DIR
## REPOS
# all should start with "dib-mirror-"
# gpg check turned off, because we don't have the keys outside the chroot
# fedora-minimal
FEDORA_MIN_DIR=$BASE_DIR/fedora-minimal/yum.repos.d
mkdir -p $FEDORA_MIN_DIR
cat <<EOF > $FEDORA_MIN_DIR/dib-mirror-fedora.repo
[fedora]
name=Fedora \$releasever - \$basearch
failovermethod=priority
baseurl=$NODEPOOL_FEDORA_MIRROR/releases/\$releasever/Everything/\$basearch/os/
enabled=1
metadata_expire=7d
gpgcheck=0
skip_if_unavailable=False
deltarpm=False
deltarpm_percentage=0
EOF
cat <<EOF > $FEDORA_MIN_DIR/dib-mirror-fedora-updates.repo
[updates]
name=Fedora \$releasever - \$basearch - Updates
failovermethod=priority
baseurl=$NODEPOOL_FEDORA_MIRROR/updates/\$releasever/\$basearch/
enabled=1
gpgcheck=0
metadata_expire=6h
skip_if_unavailable=False
deltarpm=False
deltarpm_percentage=0
EOF
# Centos Minimal
CENTOS_MIN_DIR=$BASE_DIR/centos-minimal/yum.repos.d
mkdir -p $CENTOS_MIN_DIR
cat <<EOF > $CENTOS_MIN_DIR/dib-mirror-base.repo
[base]
name=CentOS-\$releasever - Base
baseurl=$NODEPOOL_CENTOS_MIRROR/\$releasever/os/\$basearch/
gpgcheck=0
EOF
cat <<EOF > $CENTOS_MIN_DIR/dib-mirror-updates.repo
#released updates
[updates]
name=CentOS-\$releasever - Updates
baseurl=$NODEPOOL_CENTOS_MIRROR/\$releasever/updates/\$basearch/
gpgcheck=0
EOF
cat <<EOF > $CENTOS_MIN_DIR/dib-mirror-extras.repo
#additional packages that may be useful
[extras]
name=CentOS-\$releasever - Extras
baseurl=$NODEPOOL_CENTOS_MIRROR/\$releasever/extras/\$basearch/
gpgcheck=0
EOF
## apt sources (todo)

diskimage_builder/block_device/blockdevice.py

@@ -1,459 +0,0 @@
# Copyright 2016-2017 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import codecs
import collections
import json
import logging
import os
import pickle
import pprint
import shutil
import yaml
from diskimage_builder.block_device.config import config_tree_to_graph
from diskimage_builder.block_device.config import create_graph
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
from diskimage_builder.block_device.utils import exec_sudo
logger = logging.getLogger(__name__)
def _load_json(file_name):
"""Load file from .json file on disk, return None if not existing"""
if os.path.exists(file_name):
with codecs.open(file_name, encoding="utf-8", mode="r") as fd:
return json.load(fd)
return None
class BlockDeviceState(collections.MutableMapping):
"""The global state singleton
A reference to an instance of this object is saved into nodes as
a global repository. It wraps a single dictionary "state" and
provides a few helper functions.
The state ends up used in two contexts:
- The node list (including this state) is pickled and dumped
between cmd_create() and later cmd_* calls that need to call
the nodes.
- Some other cmd_* calls, such as cmd_writefstab, only need
access to values inside the state and not the whole node list,
and load it from the json dump created after cmd_create()
"""
# XXX:
# - we could implement getters/setters such that if loaded from
# disk, the state is read-only? or make it append-only
# (i.e. you can't overwrite existing keys)
def __init__(self, filename=None):
"""Initialise state
:param filename: if :param:`filename` is passed and exists, it
will be loaded as the state. If it does not exist an
exception is raised. If :param:`filename` is not
passed, state will be initialised to a blank dictionary.
"""
if filename:
if not os.path.exists(filename):
raise BlockDeviceSetupException("State dump not found")
else:
self.state = _load_json(filename)
assert self.state is not None
else:
self.state = {}
def __delitem__(self, key):
del self.state[key]
def __getitem__(self, key):
return self.state[key]
def __setitem__(self, key, value):
self.state[key] = value
def __iter__(self):
return iter(self.state)
def __len__(self):
return len(self.state)
def save_state(self, filename):
"""Persist the state to disk
:param filename: The file to persist state to
"""
logger.debug("Writing state to: %s", filename)
self.debug_dump()
with open(filename, "w") as fd:
json.dump(self.state, fd)
def debug_dump(self):
"""Log state to debug"""
# This is pretty good for human consumption, but maybe a bit
# verbose.
nice_output = pprint.pformat(self.state, width=40)
for l in nice_output.split('\n'):
logger.debug('{0:{fill}{align}50}'.format(l, fill=' ', align='<'))
class BlockDevice(object):
"""Handles block devices.
This class handles the complete setup and deletion of all aspects
of the block device level.
A typical call sequence:
cmd_init: initialize the block device level config. After this
call it is possible to e.g. query information from the (partially
automatically generated) internal state, like root-label.
cmd_getval: retrieve information about the (internal) block device
state like the block image device (for bootloader) or the
root-label (for writing fstab).
cmd_create: creates all the different aspects of the block
device. When this call is successful, the complete block level
device is set up, filesystems are created and are mounted at
the correct position.
After this call it is possible to copy / install all the needed
files into the appropriate directories.
cmd_writefstab: creates the (complete) fstab for the system.
cmd_umount: unmounts and detaches all directories and used
resources. After this call the images used are still
available for further handling, e.g. converting from raw into
some other format.
cmd_cleanup: removes everything that was created with the
'cmd_create' call, i.e. all images files themselves and
internal temporary configuration.
cmd_delete: unmounts and removes everything that was created
during the 'cmd_create' call. This call should be used in error
conditions when there is the need to remove all allocated
resources immediately and as completely as possible.
From the functional point of view this is mostly the same as a
call to 'cmd_umount' and 'cmd_cleanup' - but is typically more
error tolerant.
In a script this should be called in the following way:
dib-block-device init ...
# From that point the database can be queried, like
ROOT_LABEL=$(dib-block-device getval root-label)
Please note that currently the dib-block-device executable can
only be used outside the chroot.
dib-block-device create ...
trap "dib-block-device delete ..." EXIT
# copy / install files
dib-block-device umount ...
# convert image(s)
dib-block-device cleanup ...
trap - EXIT
"""
def _merge_into_config(self):
"""Merge old (default) config into new
There is a need to stay compatible with some old environment
variables. This is done in such a way that, if no explicit
value is given, these values are inserted into the current
configuration.
"""
for entry in self.config:
for k, v in entry.items():
if k == 'mkfs':
if 'name' not in v:
continue
if v['name'] != 'mkfs_root':
continue
if 'type' not in v \
and 'root-fs-type' in self.params:
v['type'] = self.params['root-fs-type']
if 'opts' not in v \
and 'root-fs-opts' in self.params:
v['opts'] = self.params['root-fs-opts']
if 'label' not in v \
and 'root-label' in self.params:
if self.params['root-label'] is not None:
v['label'] = self.params['root-label']
else:
v['label'] = "cloudimg-rootfs"
def __init__(self, params):
"""Create BlockDevice object
Arguments:
:param params: YAML file from --params
"""
logger.debug("Creating BlockDevice object")
self.params = params
logger.debug("Params [%s]", self.params)
self.state_dir = os.path.join(
self.params['build-dir'], "states/block-device")
self.state_json_file_name \
= os.path.join(self.state_dir, "state.json")
self.config_json_file_name \
= os.path.join(self.state_dir, "config.json")
self.node_pickle_file_name \
= os.path.join(self.state_dir, "nodes.pickle")
self.config = _load_json(self.config_json_file_name)
# This directory needs to exist for the state and config files
try:
os.makedirs(self.state_dir)
except OSError:
pass
def cmd_init(self):
"""Initialize block device setup
This initializes the block device setup layer. One major task
is to parse and check the configuration, write it down for
later examination and execution.
"""
with open(self.params['config'], "rt") as config_fd:
self.config = yaml.safe_load(config_fd)
logger.debug("Config before merge [%s]", self.config)
self.config = config_tree_to_graph(self.config)
logger.debug("Config before merge [%s]", self.config)
self._merge_into_config()
logger.debug("Final config [%s]", self.config)
# Write the final config
with open(self.config_json_file_name, "wt") as fd:
json.dump(self.config, fd)
logger.info("Wrote final block device config to [%s]",
self.config_json_file_name)
def _config_get_mount(self, path):
for entry in self.config:
for k, v in entry.items():
if k == 'mount' and v['mount_point'] == path:
return v
assert False
def _config_get_all_mount_points(self):
rvec = []
for entry in self.config:
for k, v in entry.items():
if k == 'mount':
rvec.append(v['mount_point'])
return rvec
def _config_get_mkfs(self, name):
for entry in self.config:
for k, v in entry.items():
if k == 'mkfs' and v['name'] == name:
return v
assert False
def cmd_getval(self, symbol):
"""Retrieve value from block device level
The value of SYMBOL is printed to stdout. This is intended to
be captured into bash variables for backward-compatible
(non-Python) access to internal configuration.
Arguments:
:param symbol: the symbol to get
"""
logger.info("Getting value for [%s]", symbol)
if symbol == "root-label":
root_mount = self._config_get_mount("/")
root_fs = self._config_get_mkfs(root_mount['base'])
logger.debug("root-label [%s]", root_fs['label'])
print("%s" % root_fs['label'])
return 0
if symbol == "root-fstype":
root_mount = self._config_get_mount("/")
root_fs = self._config_get_mkfs(root_mount['base'])
logger.debug("root-fstype [%s]", root_fs['type'])
print("%s" % root_fs['type'])
return 0
if symbol == 'mount-points':
mount_points = self._config_get_all_mount_points()
# we return the mountpoints joined by a pipe, because that is not
# a valid character in directory names, so it is a safe separator for the
# mountpoints list
print("%s" % "|".join(mount_points))
return 0
# the following symbols all come from the global state
# dictionary. They can only be accessed after the state has
# been dumped; i.e. after cmd_create() has been called.
state = BlockDeviceState(self.state_json_file_name)
# The path to the .raw file for conversion
if symbol == 'image-path':
print("%s" % state['blockdev']['image0']['image'])
return 0
# This is the loopback device where the above image is setup
if symbol == 'image-block-device':
print("%s" % state['blockdev']['image0']['device'])
return 0
# Full list of created devices by name. Some bootloaders, for
# example, want to be able to see their boot partitions to
# copy things in. Intended to be read into a bash array
if symbol == 'image-block-devices':
out = ""
for k, v in state['blockdev'].items():
out += " [%s]=%s " % (k, v['device'])
print(out)
return 0
logger.error("Invalid symbol [%s] for getval", symbol)
return 1
def cmd_writefstab(self):
"""Creates the fstab"""
logger.info("Creating fstab")
# State should have been created by prior calls; we only need
# the dict
state = BlockDeviceState(self.state_json_file_name)
tmp_fstab = os.path.join(self.state_dir, "fstab")
with open(tmp_fstab, "wt") as fstab_fd:
# This gives the order in which this must be mounted
for mp in state['mount_order']:
logger.debug("Writing fstab entry for [%s]", mp)
fs_base = state['mount'][mp]['base']
fs_name = state['mount'][mp]['name']
fs_val = state['filesys'][fs_base]
if 'label' in fs_val:
diskid = "LABEL=%s" % fs_val['label']
else:
diskid = "UUID=%s" % fs_val['uuid']
# If there is no fstab entry - do not write anything
if 'fstab' not in state:
continue
if fs_name not in state['fstab']:
continue
options = state['fstab'][fs_name]['options']
dump_freq = state['fstab'][fs_name]['dump-freq']
fsck_passno = state['fstab'][fs_name]['fsck-passno']
fstab_fd.write("%s %s %s %s %s %s\n"
% (diskid, mp, fs_val['fstype'],
options, dump_freq, fsck_passno))
target_etc_dir = os.path.join(self.params['build-dir'], 'built', 'etc')
exec_sudo(['mkdir', '-p', target_etc_dir])
exec_sudo(['cp', tmp_fstab, os.path.join(target_etc_dir, "fstab")])
return 0
def cmd_create(self):
"""Creates the block device"""
logger.info("create() called")
logger.debug("Using config [%s]", self.config)
# Create a new, empty state
state = BlockDeviceState()
try:
dg, call_order = create_graph(self.config, self.params, state)
for node in call_order:
node.create()
except Exception:
logger.exception("Create failed; rollback initiated")
reverse_order = reversed(call_order)
for node in reverse_order:
node.rollback()
# save the state for debugging
state.save_state(self.state_json_file_name)
logger.error("Rollback complete, exiting")
raise
# dump state and nodes, in order
# XXX: we only dump the call_order (i.e. nodes) not the whole
# graph here, because later calls do not need the graph
# at this stage. might they?
state.save_state(self.state_json_file_name)
pickle.dump(call_order, open(self.node_pickle_file_name, 'wb'))
logger.info("create() finished")
return 0
def cmd_umount(self):
"""Unmounts the blockdevice and cleanup resources"""
# If the state is not here, cmd_cleanup removed it? Nothing
# more to do?
# XXX: better understand this...
if not os.path.exists(self.node_pickle_file_name):
logger.info("State already cleaned - no way to do anything here")
return 0
call_order = pickle.load(open(self.node_pickle_file_name, 'rb'))
reverse_order = reversed(call_order)
for node in reverse_order:
node.umount()
return 0
def cmd_cleanup(self):
"""Cleanup all remaining relicts - in good case"""
# Cleanup must be done in reverse order
try:
call_order = pickle.load(open(self.node_pickle_file_name, 'rb'))
except IOError:
raise BlockDeviceSetupException("Pickle file not found")
reverse_order = reversed(call_order)
for node in reverse_order:
node.cleanup()
logger.info("Removing temporary state dir [%s]", self.state_dir)
shutil.rmtree(self.state_dir)
return 0
def cmd_delete(self):
"""Cleanup all remaining relicts - in case of an error"""
# Deleting must be done in reverse order
try:
call_order = pickle.load(open(self.node_pickle_file_name, 'rb'))
except IOError:
raise BlockDeviceSetupException("Pickle file not found")
reverse_order = reversed(call_order)
for node in reverse_order:
node.delete()
logger.info("Removing temporary state dir [%s]", self.state_dir)
shutil.rmtree(self.state_dir)
return 0
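
The state persisted by BlockDeviceState above is plain JSON, so the
hand-off between cmd_create() and the later cmd_* calls is just a
dump/load round trip. A minimal sketch with a hypothetical path and
contents, mirroring save_state() and _load_json():

import json
import os
import tempfile

# Hypothetical state resembling what cmd_create() records.
state = {'blockdev': {'image0': {'device': '/dev/loop0',
                                 'image': '/tmp/image0.raw'}}}
path = os.path.join(tempfile.mkdtemp(), 'state.json')
with open(path, 'w') as fd:
    json.dump(state, fd)      # what save_state() does
with open(path) as fd:
    reloaded = json.load(fd)  # what _load_json() does for later calls
assert reloaded == state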

diskimage_builder/block_device/cmd.py

@@ -1,123 +0,0 @@
# Copyright 2016-2017 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import logging
import os
import sys
import yaml
from diskimage_builder.block_device.blockdevice import BlockDevice
from diskimage_builder import logging_config
logger = logging.getLogger(__name__)
class BlockDeviceCmd(object):
def cmd_init(self):
self.bd.cmd_init()
def cmd_getval(self):
self.bd.cmd_getval(self.args.symbol)
def cmd_create(self):
self.bd.cmd_create()
def cmd_umount(self):
self.bd.cmd_umount()
def cmd_cleanup(self):
self.bd.cmd_cleanup()
def cmd_delete(self):
self.bd.cmd_delete()
def cmd_writefstab(self):
self.bd.cmd_writefstab()
def main(self):
logging_config.setup()
parser = argparse.ArgumentParser(description="DIB Block Device helper")
parser.add_argument('--params', required=False,
help="YAML file containing parameters for"
"block-device handling. Default is "
"DIB_BLOCK_DEVICE_PARAMS_YAML")
subparsers = parser.add_subparsers(title='commands',
description='valid commands',
dest='command',
help='additional help')
cmd_init = subparsers.add_parser('init',
help='Initialize configuration')
cmd_init.set_defaults(func=self.cmd_init)
cmd_getval = subparsers.add_parser('getval',
help='Retrieve information about '
'internal state')
cmd_getval.set_defaults(func=self.cmd_getval)
cmd_getval.add_argument('symbol', help='symbol to print')
cmd_create = subparsers.add_parser('create',
help='Create the block device')
cmd_create.set_defaults(func=self.cmd_create)
cmd_umount = subparsers.add_parser('umount',
help='Unmount blockdevice and '
'cleanup resources')
cmd_umount.set_defaults(func=self.cmd_umount)
cmd_cleanup = subparsers.add_parser('cleanup', help='Final cleanup')
cmd_cleanup.set_defaults(func=self.cmd_cleanup)
cmd_delete = subparsers.add_parser('delete', help='Error cleanup')
cmd_delete.set_defaults(func=self.cmd_delete)
cmd_writefstab = subparsers.add_parser('writefstab',
help='Create fstab for system')
cmd_writefstab.set_defaults(func=self.cmd_writefstab)
self.args = parser.parse_args()
# Find, open and parse the parameters file
if not self.args.params:
if 'DIB_BLOCK_DEVICE_PARAMS_YAML' in os.environ:
param_file = os.environ['DIB_BLOCK_DEVICE_PARAMS_YAML']
else:
parser.error(
"DIB_BLOCK_DEVICE_PARAMS_YAML or --params not set")
else:
param_file = self.args.params
logger.info("params [%s]", param_file)
try:
with open(param_file) as f:
self.params = yaml.safe_load(f)
except Exception:
logger.exception("Failed to open parameter YAML")
sys.exit(1)
# Setup main BlockDevice object from args
self.bd = BlockDevice(self.params)
self.args.func()
def main():
bdc = BlockDeviceCmd()
return bdc.main()
if __name__ == "__main__":
sys.exit(main())
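
The command wiring above is the standard argparse subcommand pattern:
each subparser binds its handler through set_defaults(func=...), and
main() simply invokes args.func after parsing. A self-contained sketch
(hypothetical handler and arguments):

import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(title='commands', dest='command')
cmd_getval = subparsers.add_parser('getval', help='retrieve a value')
cmd_getval.add_argument('symbol')
cmd_getval.set_defaults(func=lambda args: print('getval:', args.symbol))
args = parser.parse_args(['getval', 'root-label'])
args.func(args)  # -> getval: root-label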

diskimage_builder/block_device/config.py

@@ -1,272 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import networkx as nx
import os
from stevedore import extension
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
logger = logging.getLogger(__name__)
_extensions = extension.ExtensionManager(
namespace='diskimage_builder.block_device.plugin',
invoke_on_load=False)
# check if a given name is registered as a plugin
def is_a_plugin(name):
return any(
_extensions.map(lambda x: x.name == name))
def recurse_config(config, parent_base=None):
"""Convert a config "tree" to it's canonical name/base graph version
This is a recursive function to convert a YAML layout "tree"
config into a "flat" graph-based config.
Arguments:
:param config: the incoming config dictionary
:param parent_base: the name of the parent node, if any
:return: a list of expanded, graph-based config items
"""
output = []
this = {}
# We should only have one key, with multiple values, being the
# config entries. e.g. (this was checked by config_tree_to_graph)
# mkfs:
# type: ext4
# label: 1234
assert len(config.items()) == 1
for k, v in config.items():
key = k
values = v
# If we don't have a base, we take the parent base; first element
# can have no base, however.
if 'base' not in values:
if parent_base is not None:
this['base'] = parent_base
else:
this['base'] = values['base']
# If we don't have a name, it is made up as "key_base"
if 'name' not in values:
this['name'] = "%s_%s" % (key, this['base'])
else:
this['name'] = values['name']
# Go through the values dictionary. Either this is a "plugin"
# key that needs to be recursed, or it is a value that is part of
# this config entry.
for nk, nv in values.items():
if nk == "partitions":
# "partitions" is a special key of the "partitioning"
# object. It is a list. Each list-entry gets treated
# as a top-level entry, so we need to recurse its
# keys. But instead of becoming its own entry in the
# graph, it gets attached to the .partitions attribute
# of the parent. (see end for example)
this['partitions'] = []
for partition in nv:
new_part = {}
for pk, pv in partition.items():
if is_a_plugin(pk):
output.extend(
recurse_config({pk: pv}, partition['name']))
else:
new_part[pk] = pv
new_part['base'] = this['base']
this['partitions'].append(new_part)
elif is_a_plugin(nk):
# is this key a plugin directive? If so, we recurse
# into it.
output.extend(recurse_config({nk: nv}, this['name']))
else:
# A value entry; just save as part of this entry
this[nk] = nv
output.append({k: this})
return output
def config_tree_to_graph(config):
"""Turn a YAML config into a graph config
Our YAML config is a list of entries. Each entry is a dict
whose single top-level key names a registered plugin.
Arguments:
:param config: YAML config; either graph or tree
:return: graph-based result
"""
output = []
for entry in config:
# Top-level entries should be a dictionary and have a plugin
# registered for it
if not isinstance(entry, dict):
raise BlockDeviceSetupException(
"Config entry not a dict: %s" % entry)
keys = list(entry.keys())
if len(keys) != 1:
raise BlockDeviceSetupException(
"Config entry top-level should be a single dict: %s" % entry)
if not is_a_plugin(keys[0]):
raise BlockDeviceSetupException(
"Config entry is not a plugin value: %s" % entry)
output.extend(recurse_config(entry))
return output
def create_graph(config, default_config, state):
"""Generate configuration digraph
Generate the configuration digraph from the config
:param config: graph configuration file
:param default_config: default parameters (from --params)
:param state: reference to global state dictionary.
Passed to :func:`PluginBase.__init__`
:return: tuple with the graph object (a :class:`nx.Digraph`),
ordered list of :class:`NodeBase` objects
"""
# This is the directed graph of nodes: each parse method must
# add the appropriate nodes and edges.
dg = nx.DiGraph()
for config_entry in config:
# this should have been checked by generate_config
assert len(config_entry) == 1
logger.debug("Config entry [%s]", config_entry)
cfg_obj_name = list(config_entry.keys())[0]
cfg_obj_val = config_entry[cfg_obj_name]
# Instantiate a "plugin" object, passing it the
# configuration entry
# XXX : would a "factory" pattern for plugins, where we
# make a method call on an object stevedore has instantiated
# be better here?
if not is_a_plugin(cfg_obj_name):
raise BlockDeviceSetupException(
("Config element [%s] is not implemented" % cfg_obj_name))
plugin = _extensions[cfg_obj_name].plugin
assert issubclass(plugin, PluginBase)
cfg_obj = plugin(cfg_obj_val, default_config, state)
# Ask the plugin for the nodes it would like to insert
# into the graph. Some plugins, such as partitioning,
# return multiple nodes from one config entry.
nodes = cfg_obj.get_nodes()
assert isinstance(nodes, list)
for node in nodes:
# plugins should return nodes...
assert isinstance(node, NodeBase)
# ensure node names are unique. networkx by default
# just appends the attribute to the node dict for
# existing nodes, which is not what we want.
if node.name in dg.node:
raise BlockDeviceSetupException(
"Duplicate node name: %s" % (node.name))
logger.debug("Adding %s : %s", node.name, node)
dg.add_node(node.name, obj=node)
# Now find edges
for name, attr in dg.nodes(data=True):
obj = attr['obj']
# Unfortunately, we can not determine node edges just from
# the configuration file. It's not always simply the
# "base:" pointer. So ask nodes for a list of nodes they
# want to point to. *mostly* it's just base: ... but
# mounting is different.
# edges_from are the nodes that point to us
# edges_to are the nodes we point to
edges_from, edges_to = obj.get_edges()
logger.debug("Edges for %s: f:%s t:%s", name,
edges_from, edges_to)
for edge_from in edges_from:
if edge_from not in dg.node:
raise BlockDeviceSetupException(
"Edge not defined: %s->%s" % (edge_from, name))
dg.add_edge(edge_from, name)
for edge_to in edges_to:
if edge_to not in dg.node:
raise BlockDeviceSetupException(
"Edge not defined: %s->%s" % (name, edge_to))
dg.add_edge(name, edge_to)
# this can be quite helpful for debugging, but needs pydotplus which
# isn't in requirements. for debugging, do
# .tox/py27/bin/pip install pydotplus
# DUMP_CONFIG_GRAPH=1 tox -e py27 -- specific_test
# dotty /tmp/graph_dump.dot
# to see helpful output
if 'DUMP_CONFIG_GRAPH' in os.environ:
nx.nx_pydot.write_dot(dg, '/tmp/graph_dump.dot')
# Topological sort (i.e. create a linear array that satisfies
# dependencies) and return the object list
call_order_nodes = nx.topological_sort(dg)
logger.debug("Call order: %s", list(call_order_nodes))
call_order = [dg.node[n]['obj'] for n in call_order_nodes]
return dg, call_order
#
# On partitioning: objects
#
# To be concrete --
#
# partitioning:
# base: loop0
# name: mbr
# partitions:
# - name: partition1
# foo: bar
# mkfs:
# type: xfs
# mount:
# mount_point: /
#
# gets turned into the following graph:
#
# partitioning:
# partitions:
# - name: partition1
# base: image0
# foo: bar
#
# mkfs:
# base: partition1
# name: mkfs_partition1
# type: xfs
#
# mount:
# base: mkfs_partition1
# name: mount_mkfs_partition1
# mount_point: /
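
The ordering step at the heart of create_graph() above is a plain
topological sort over the networkx digraph. A minimal sketch with
hypothetical node names (an edge means "must be created first"):

import networkx as nx

dg = nx.DiGraph()
dg.add_node('image0', obj='LocalLoopNode')
dg.add_node('partition1', obj='PartitionNode')
dg.add_node('mkfs_partition1', obj='MkfsNode')
dg.add_edge('image0', 'partition1')
dg.add_edge('partition1', 'mkfs_partition1')
print(list(nx.topological_sort(dg)))
# -> ['image0', 'partition1', 'mkfs_partition1']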

diskimage_builder/block_device/exception.py

@@ -1,15 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class BlockDeviceSetupException(Exception):
"""Generic exception"""

diskimage_builder/block_device/level0/localloop.py

@@ -1,136 +0,0 @@
# Copyright 2016 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import subprocess
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
from diskimage_builder.block_device.utils import parse_abs_size_spec
logger = logging.getLogger(__name__)
def image_create(filename, size):
logger.info("Create image file [%s]", filename)
with open(filename, "w") as fd:
fd.seek(size - 1)
fd.write("\0")
def image_delete(filename):
logger.info("Remove image file [%s]", filename)
os.remove(filename)
def loopdev_attach(filename):
logger.info("loopdev attach")
logger.debug("Calling [sudo losetup --show -f %s]", filename)
subp = subprocess.Popen(["sudo", "losetup", "--show", "-f",
filename], stdout=subprocess.PIPE)
rval = subp.wait()
if rval == 0:
# [:-1]: Cut off the newline
block_device = subp.stdout.read()[:-1].decode("utf-8")
logger.info("New block device [%s]", block_device)
return block_device
else:
logger.error("losetup failed")
raise BlockDeviceSetupException("losetup failed")
def loopdev_detach(loopdev):
logger.info("loopdev detach")
# loopback dev may be tied up a bit by udev events triggered
# by partition events
for try_cnt in range(10, 1, -1):
logger.debug("Calling [sudo losetup -d %s]", loopdev)
subp = subprocess.Popen(["sudo", "losetup", "-d",
loopdev])
rval = subp.wait()
if rval == 0:
logger.info("Successfully detached [%s]", loopdev)
return 0
else:
logger.error("loopdev detach failed")
# Do not raise an error - maybe other cleanup methods
# can at least do some more work.
logger.debug("Gave up trying to detach [%s]", loopdev)
return rval
class LocalLoopNode(NodeBase):
"""Level0: Local loop image device handling.
This class handles local loop devices that can be used
for VM image installation.
"""
def __init__(self, config, default_config, state):
logger.debug("Creating LocalLoop object; config [%s] "
"default_config [%s]", config, default_config)
super(LocalLoopNode, self).__init__(config['name'], state)
if 'size' in config:
self.size = parse_abs_size_spec(config['size'])
logger.debug("Image size [%s]", self.size)
else:
self.size = parse_abs_size_spec(default_config['image-size'])
logger.debug("Using default image size [%s]", self.size)
if 'directory' in config:
self.image_dir = config['directory']
else:
self.image_dir = default_config['image-dir']
self.filename = os.path.join(self.image_dir, self.name + ".raw")
def get_edges(self):
"""Because this is created without base, there are no edges."""
return ([], [])
def create(self):
logger.debug("[%s] Creating loop on [%s] with size [%d]",
self.name, self.filename, self.size)
self.add_rollback(image_delete, self.filename)
image_create(self.filename, self.size)
block_device = loopdev_attach(self.filename)
self.add_rollback(loopdev_detach, block_device)
if 'blockdev' not in self.state:
self.state['blockdev'] = {}
self.state['blockdev'][self.name] = {"device": block_device,
"image": self.filename}
logger.debug("Created loop name [%s] device [%s] image [%s]",
self.name, block_device, self.filename)
return
def umount(self):
loopdev_detach(self.state['blockdev'][self.name]['device'])
def delete(self):
image_delete(self.state['blockdev'][self.name]['image'])
class LocalLoop(PluginBase):
def __init__(self, config, defaults, state):
super(LocalLoop, self).__init__()
self.node = LocalLoopNode(config, defaults, state)
def get_nodes(self):
return [self.node]
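
image_create() above exploits sparse files: seeking past the end of a
new file and writing a single byte gives it the full logical size
without allocating the intervening blocks. A minimal sketch with a
hypothetical path and size (binary mode for clarity):

import os

size = 10 * 1024 * 1024           # 10 MiB, hypothetical
filename = '/tmp/example.raw'
with open(filename, 'wb') as fd:
    fd.seek(size - 1)             # jump to the last byte
    fd.write(b'\0')               # force the logical size
print(os.path.getsize(filename))  # -> 10485760
# On most filesystems the actual on-disk usage is a single block.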

diskimage_builder/block_device/level1/mbr.py

@@ -1,374 +0,0 @@
# Copyright 2016 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import random
from struct import pack
logger = logging.getLogger(__name__)
# Details of the MBR object itself can be found in the inline
# documentation.
#
# General design and implementation remarks:
# o GNU parted and co. (e.g. python-parted, which is based on GNU
# parted) cannot be used because of the license: everything falls
# under GPL2 (not LGPL2!) and therefore does not fit with the
# Apache License used here.
# o It looks like there is no real alternative available (2016-06).
# o The interface of python-parted is not that simple to handle - and
# the initial try to use GNU (python-)parted was not much easier
# or shorter than this approach.
# o Tools like fdisk or parted try to optimize the alignment of
# partitions based on the data found on the host system. These
# heuristics can be misleading and can lead to (very) poor
# performance.
# o These ready-to-use tools typically also change the CHS layout
# based on the disk size. If the disk is enlarged (a normal use
# case for golden images), the CHS layout of the disk changes for
# those tools (and is no longer correct).
# In the DIB implementation the CHS values are chosen so that even
# for very small disks the maximum heads/cylinder and sectors/track
# are used: even if the disk size is increased, the CHS numbers
# will not change.
# o In the easy, straightforward case of only one partition, exactly
# 40 bytes (!) must be written - and the biggest part of this data
# is fixed (the same in all cases).
#
# Limitations and Incompatibilities
# o With the help of this class it is possible to create an
# arbitrary number of extended partitions (tested with over 1000).
# o There are limitations and shortcomings in the OS and in tools
# handling these partitions.
# o Under Linux the loop device is able to handle a limited number of
# partitions. The module parameter max_loop can be set - the maximum
# number might vary depending on the distribution and kernel build.
# o Under Linux fdisk is able to handle 'only' 60 partitions. Only
# those are listed, can be changed or written.
# o Under Linux GNU parted can handle about 60 partitions.
#
# Be sure only to pass in the number of partitions that the host OS
# and target OS are able to handle.
class MBR(object):
"""MBR Disk / Partition Table Layout
Primary partitions are created first - and must also be passed in
first.
The extended partition layout is done such that there is one
entry in the MBR (the last) that uses the whole disk.
EBRs (extended boot records) are used to describe the partitions
themselves. This has the advantage that the same procedure can
be used for all partitions, and arbitrarily many partitions can
be created in the same way (the EBR is placed as block 0 in each
partition itself).
In conjunction with a fixed and 'fits all' partition alignment the
major design focus is maximum performance for the installed image
(vs. minimal size).
Because of the chosen default alignment of 1MiB there will be
(1MiB - 512B) unused disk space for the MBR and also the same
size unused in every partition.
Assuming that 512 byte blocks are used, the resulting layout
for extended partitions looks like this (block offsets within
the extended partition are given):
======== ==============================================
Offset Description
======== ==============================================
0 MBR - 2047 blocks unused
2048 EBR for partition 1 - 2047 blocks unused
4096 Start of data for partition 1
... ...
X EBR for partition N - 2047 blocks unused
X+2048 Start of data for partition N
======== ==============================================
Direct (native) writing of the MBR and EBR (partition table) is
implemented - no other partitioning library or tool is used -
to be sure to get the correct CHS and alignment for a wide range
of host systems.
"""
# Design & Implementation details:
# o A 'block' is a storage unit on disk. It is similar (equal) to a
# sector - but with LBA addressing.
# o It is assumed that a disk block has this number of bytes:
bytes_per_sector = 512
# o CHS is the 'good and very old way' of specifying blocks.
#   When passing around these numbers, they are also ordered like 'CHS':
#   (cylinder, head, sector).
# o The computation from LBA to CHS is not unique (it is based
#   on the 'real' (or assumed) number of heads/cylinder and
#   sectors/track); these are the assumed numbers. Please note
#   that these are also the maximum numbers:
heads_per_cylinder = 254
sectors_per_track = 63
max_cylinders = 1023
# o There is the need for some offsets that are defined in the
# MBR/EBR domain.
MBR_offset_disk_id = 440
MBR_offset_signature = 510
MBR_offset_first_partition_table_entry = 446
MBR_partition_type_extended_chs = 0x5
MBR_partition_type_extended_lba = 0xF
MBR_signature = 0xAA55
def __init__(self, name, disk_size, alignment):
"""Initialize a disk partitioning MBR object.
The name is the path of the (existing) disk image file.
The disk_size is the (used) size of the disk. It must be a
proper multiple of the disk bytes per sector (currently 512).
"""
logger.info("Create MBR disk partitioning object")
assert disk_size % MBR.bytes_per_sector == 0
self.disk_size = disk_size
self.disk_size_in_blocks \
= self.disk_size // MBR.bytes_per_sector
self.alignment_blocks = alignment // MBR.bytes_per_sector
# Because the extended partitions are a chain of blocks, when
# creating a new partition, the reference in the already
# existing EBR must be updated. This holds a reference to the
# latest EBR. (A special case is the first: when it points to
# 0 (MBR) there is no need to update the reference.)
self.disk_block_last_ref = 0
self.name = name
self.partition_abs_start = None
self.partition_abs_next_free = None
# Start of partition number
self.partition_number = 0
self.primary_partitions_created = 0
self.extended_partitions_created = 0
def __enter__(self):
# Open existing file for writing (r+)
self.image_fd = open(self.name, "r+b")
self.write_mbr()
self.write_mbr_signature(0)
self.partition_abs_start = self.align(1)
self.partition_abs_next_free \
= self.partition_abs_start
return self
def __exit__(self, exc_type, exc_value, traceback):
self.image_fd.flush()
os.fsync(self.image_fd.fileno())
self.image_fd.close()
def lba2chs(self, lba):
"""Converts a LBA block number to CHS
If the LBA block number is bigger than the max (1023, 63, 254)
the maximum is returned.
"""
if lba > MBR.heads_per_cylinder * MBR.sectors_per_track \
* MBR.max_cylinders:
return MBR.max_cylinders, MBR.heads_per_cylinder, \
MBR.sectors_per_track
cylinder = lba // (MBR.heads_per_cylinder * MBR.sectors_per_track)
head = (lba // MBR.sectors_per_track) % MBR.heads_per_cylinder
sector = (lba % MBR.sectors_per_track) + 1
logger.debug("Convert LBA to CHS [%d] -> [%d, %d, %d]",
lba, cylinder, head, sector)
return cylinder, head, sector
def encode_chs(self, cylinders, heads, sectors):
"""Encodes a CHS triple into disk format"""
# Head - nothing to convert
assert heads <= MBR.heads_per_cylinder
eh = heads
# Sector
assert sectors <= MBR.sectors_per_track
es = sectors
# top two bits are set in cylinder conversion
# Cylinder
assert cylinders <= MBR.max_cylinders
ec = cylinders % 256  # lower eight bits of the cylinder
hc = (cylinders >> 8) << 6  # extract the top two bits and
es = es | hc  # place them in the top two bits of the sector byte
logger.debug("Encode CHS to disk format [%d %d %d] "
"-> [%02x %02x %02x]", cylinders, heads, sectors,
eh, es, ec)
return eh, es, ec
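# For illustration, the maximum CHS triple encodes to the classic
# "maxed out" tuple seen in partition tables of large disks:
#   encode_chs(1023, 254, 63) -> eh = 0xFE,
#                                es = 63 | 0xC0 = 0xFF,
#                                ec = 1023 % 256 = 0xFF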
def write_mbr(self):
"""Write MBR
This method writes the start of the MBR to disk: it generates
a random disk id and writes it. (The extended partition entry
that spans the whole disk is written later, when the first
extended partition is added.)
"""
disk_id = random.randint(0, 0xFFFFFFFF)
self.image_fd.seek(MBR.MBR_offset_disk_id)
self.image_fd.write(pack("<I", disk_id))
def write_mbr_signature(self, blockno):
"""Writes the MBR/EBR signature to a block
The signature consists of a 0xAA55 in the last two bytes of the
block.
"""
self.image_fd.seek(blockno *
MBR.bytes_per_sector +
MBR.MBR_offset_signature)
self.image_fd.write(pack("<H", MBR.MBR_signature))
def write_partition_entry(self, bootflag, blockno, entry, ptype,
lba_start, lba_length):
"""Writes a partition entry
Each entry is 16 bytes long; the MBR and the EBR use the same
entry format.
"""
logger.info("Write partition entry blockno [%d] entry [%d] "
"start [%d] length [%d]", blockno, entry,
lba_start, lba_length)
self.image_fd.seek(
blockno * MBR.bytes_per_sector +
MBR.MBR_offset_first_partition_table_entry +
16 * entry)
# Boot flag
self.image_fd.write(pack("<B", 0x80 if bootflag else 0x00))
# Encode lba start / length into CHS
chs_start = self.lba2chs(lba_start)
chs_end = self.lba2chs(lba_start + lba_length)
# Encode CHS into disk format
chs_start_bin = self.encode_chs(*chs_start)
chs_end_bin = self.encode_chs(*chs_end)
# Write CHS start
self.image_fd.write(pack("<BBB", *chs_start_bin))
# Write partition type
self.image_fd.write(pack("<B", ptype))
# Write CHS end
self.image_fd.write(pack("<BBB", *chs_end_bin))
# Write LBA start & length
self.image_fd.write(pack("<I", lba_start))
self.image_fd.write(pack("<I", lba_length))
def align(self, blockno):
"""Align the blockno to next alignment count"""
if blockno % self.alignment_blocks == 0:
# Already aligned
return blockno
return (blockno // self.alignment_blocks + 1) \
* self.alignment_blocks
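# For illustration, with the default 1MiB alignment and 512 byte
# sectors alignment_blocks is 2048, so align(1) == 2048, while an
# already aligned value is returned unchanged: align(2048) == 2048.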
def compute_partition_lbas(self, abs_start, size):
lba_partition_abs_start = self.align(abs_start)
lba_partition_rel_start \
= lba_partition_abs_start - self.partition_abs_start
lba_partition_length = size // MBR.bytes_per_sector
lba_abs_partition_end \
= self.align(lba_partition_abs_start + lba_partition_length)
logger.info("Partition absolute [%d] relative [%d] "
"length [%d] absolute end [%d]",
lba_partition_abs_start, lba_partition_rel_start,
lba_partition_length, lba_abs_partition_end)
return lba_partition_abs_start, lba_partition_length, \
lba_abs_partition_end
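# For illustration, with 1MiB alignment (2048 blocks), a request
# of abs_start=1 and a size of 10MiB yields a start of 2048, a
# length of 20480 blocks and an aligned absolute end of 22528.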
def add_primary_partition(self, bootflag, size, ptype):
lba_partition_abs_start, lba_partition_length, lba_abs_partition_end \
= self.compute_partition_lbas(self.partition_abs_next_free, size)
self.write_partition_entry(
bootflag, 0, self.partition_number, ptype,
self.align(lba_partition_abs_start), lba_partition_length)
self.partition_abs_next_free = lba_abs_partition_end
logger.debug("Next free [%d]", self.partition_abs_next_free)
self.primary_partitions_created += 1
self.partition_number += 1
return self.partition_number
def add_extended_partition(self, bootflag, size, ptype):
lba_ebr_abs = self.partition_abs_next_free
logger.info("EBR block absolute [%d]", lba_ebr_abs)
_, lba_partition_length, lba_abs_partition_end \
= self.compute_partition_lbas(lba_ebr_abs + 1, size)
# Write the reference to the new partition
if self.disk_block_last_ref != 0:
partition_complete_len = lba_abs_partition_end - lba_ebr_abs
self.write_partition_entry(
False, self.disk_block_last_ref, 1,
MBR.MBR_partition_type_extended_chs,
lba_ebr_abs - self.partition_abs_start,
partition_complete_len)
self.write_partition_entry(
bootflag, lba_ebr_abs, 0, ptype, self.align(1),
lba_partition_length)
self.write_mbr_signature(lba_ebr_abs)
self.partition_abs_next_free = lba_abs_partition_end
logger.debug("Next free [%d]", self.partition_abs_next_free)
self.disk_block_last_ref = lba_ebr_abs
self.extended_partitions_created += 1
self.partition_number += 1
return self.partition_number
def add_partition(self, primaryflag, bootflag, size, ptype):
"""Adds a partition with the given type and size"""
logger.debug("Add new partition primary [%s] boot [%s] "
"size [%d] type [%x]",
primaryflag, bootflag, size, ptype)
# primaries must be created before extended
if primaryflag and self.extended_partitions_created > 0:
raise RuntimeError("All primary partitions must be "
"given first")
if primaryflag:
return self.add_primary_partition(bootflag, size, ptype)
if self.extended_partitions_created == 0:
# When this is the first extended partition, the extended
# partition entry has to be written.
self.partition_abs_start = self.partition_abs_next_free
self.write_partition_entry(
False, 0, self.partition_number,
MBR.MBR_partition_type_extended_lba,
self.partition_abs_next_free,
self.disk_size_in_blocks - self.partition_abs_next_free)
self.partition_number = 4
return self.add_extended_partition(bootflag, size, ptype)
def free(self):
"""Returns the free (not yet partitioned) size"""
return self.disk_size \
- (self.partition_abs_next_free + self.align(1)) \
* MBR.bytes_per_sector
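# A minimal usage sketch (file name and sizes are illustrative
# assumptions, not fixed by this module). The image file must
# already exist with the given size; the context manager syncs and
# closes it on exit:
#
#     with MBR("image.raw", 1024 * 1024 * 1024, 1024 * 1024) as mbr:
#         # one bootable primary partition of 10MiB, type 0x83 (Linux)
#         partno = mbr.add_partition(True, True, 10 * 1024 * 1024, 0x83)
#         remaining_bytes = mbr.free()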

@@ -1,79 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
from diskimage_builder.block_device.plugin import NodeBase
logger = logging.getLogger(__name__)
class PartitionNode(NodeBase):
flag_boot = 1
flag_primary = 2
def __init__(self, config, state, parent, prev_partition):
super(PartitionNode, self).__init__(config['name'], state)
self.base = config['base']
self.partitioning = parent
self.prev_partition = prev_partition
self.flags = set()
if 'flags' in config:
for f in config['flags']:
if f == 'boot':
self.flags.add(self.flag_boot)
elif f == 'primary':
self.flags.add(self.flag_primary)
else:
raise BlockDeviceSetupException("Unknown flag: %s" % f)
if 'size' not in config:
raise BlockDeviceSetupException("No size in partition" % self.name)
self.size = config['size']
self.ptype = int(config['type'], 16) if 'type' in config else 0x83
def get_flags(self):
return self.flags
def get_size(self):
return self.size
def get_type(self):
return self.ptype
def get_edges(self):
edge_from = [self.base]
edge_to = []
if self.prev_partition is not None:
edge_from.append(self.prev_partition.name)
return (edge_from, edge_to)
# These all call back to the parent "partitioning" object to do
# the real work. Every node calls it, but only the first call
# actually does the work; see the gating in the parent function.
#
# XXX: A better model here would be for the parent object to be
# a real node in the config graph, so its create() gets called.
# These can then just be stubs.
def create(self):
self.partitioning.create()
def cleanup(self):
self.partitioning.cleanup()

@@ -1,171 +0,0 @@
# Copyright 2016 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
from diskimage_builder.block_device.level1.mbr import MBR
from diskimage_builder.block_device.level1.partition import PartitionNode
from diskimage_builder.block_device.plugin import PluginBase
from diskimage_builder.block_device.utils import exec_sudo
from diskimage_builder.block_device.utils import parse_abs_size_spec
from diskimage_builder.block_device.utils import parse_rel_size_spec
logger = logging.getLogger(__name__)
class Partitioning(PluginBase):
def __init__(self, config, default_config, state):
logger.debug("Creating Partitioning object; config [%s]", config)
super(Partitioning, self).__init__()
# Unlike other PluginBase subclasses we are somewhat persistent,
# as the partition nodes call back to us (see create() below).
# We need to keep this reference.
self.state = state
# Because multiple partitions of one base are handled within
# one object, we need to store a flag indicating whether the
# partitions have already been created.
self.already_created = False
self.already_cleaned = False
# Parameter check
if 'base' not in config:
raise BlockDeviceSetupException("Partitioning config needs 'base'")
self.base = config['base']
if 'partitions' not in config:
raise BlockDeviceSetupException(
"Partitioning config needs 'partitions'")
if 'label' not in config:
raise BlockDeviceSetupException(
"Partitioning config needs 'label'")
self.label = config['label']
if self.label not in ("mbr", ):
raise BlockDeviceSetupException("Label must be 'mbr'")
# It is VERY important to get the alignment correct. If this
# is not correct, the disk performance might be very poor.
# Example: In some tests an 'off by one' led to a write
# performance of 30% compared to a correctly aligned
# partition.
# The problem for DIB is that it cannot assume that the host
# system uses the same IO sizes as the target system;
# therefore a fixed approach (as used in all modern systems
# with large disks) is used here. The partitions are aligned
# to 1MiB (which is 2048 blocks of 512 bytes).
self.align = 1024 * 1024 # 1MiB as default
if 'align' in config:
self.align = parse_abs_size_spec(config['align'])
self.partitions = []
prev_partition = None
for part_cfg in config['partitions']:
np = PartitionNode(part_cfg, state, self, prev_partition)
self.partitions.append(np)
prev_partition = np
def get_nodes(self):
# return the list of partitions
return self.partitions
def _size_of_block_dev(self, dev):
with open(dev, "r") as fd:
fd.seek(0, 2)
return fd.tell()
# note this is NOT a node and it is not called directly! The
# create() calls in the partition nodes this plugin has
# created call back into this.
def create(self):
# This is a bit of a hack. Each of the partitions is actually
# in the graph, so for every partition we get a create() call
# as the walk happens. But we only need to create the
# partition table once...
if self.already_created:
logger.info("Not creating the partitions a second time.")
return
self.already_created = True
# the raw file on disk
image_path = self.state['blockdev'][self.base]['image']
# the /dev/loopX device of the parent
device_path = self.state['blockdev'][self.base]['device']
logger.info("Creating partition on [%s] [%s]", self.base, image_path)
assert self.label == 'mbr'
disk_size = self._size_of_block_dev(image_path)
with MBR(image_path, disk_size, self.align) as part_impl:
for part_cfg in self.partitions:
part_name = part_cfg.get_name()
part_bootflag = PartitionNode.flag_boot \
in part_cfg.get_flags()
part_primary = PartitionNode.flag_primary \
in part_cfg.get_flags()
part_size = part_cfg.get_size()
part_free = part_impl.free()
part_type = part_cfg.get_type()
logger.debug("Not partitioned space [%d]", part_free)
part_size = parse_rel_size_spec(part_size,
part_free)[1]
part_no \
= part_impl.add_partition(part_primary, part_bootflag,
part_size, part_type)
logger.debug("Create partition [%s] [%d]",
part_name, part_no)
# We're going to mount all partitions with kpartx
# below once we're done. So the device this partition
# will be seen at becomes "/dev/mapper/loop0pX"
assert device_path[:5] == "/dev/"
partition_device_name = "/dev/mapper/%sp%d" % \
(device_path[5:], part_no)
self.state['blockdev'][part_name] \
= {'device': partition_device_name}
# "saftey sync" to make sure the partitions are written
exec_sudo(["sync"])
# now all the partitions are created, get device-mapper to
# mount them
if not os.path.exists("/.dockerenv"):
exec_sudo(["kpartx", "-avs", device_path])
else:
# If running inside Docker, make our nodes manually,
# because udev will not be working. kpartx cannot run in
# sync mode in docker.
exec_sudo(["kpartx", "-av", device_path])
exec_sudo(["dmsetup", "--noudevsync", "mknodes"])
return
def cleanup(self):
# remove the partition mappings made for the parent
# block-device by create() above. this is called from the
# child PartitionNode umount/delete/cleanup. Thus every
# partition calls it, but we only want to do it once; hence
# our gate.
if not self.already_cleaned:
self.already_cleaned = True
exec_sudo(["kpartx", "-d",
self.state['blockdev'][self.base]['device']])

@@ -1,162 +0,0 @@
# Copyright 2017 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import uuid
from diskimage_builder.block_device.exception \
import BlockDeviceSetupException
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
from diskimage_builder.block_device.utils import exec_sudo
logger = logging.getLogger(__name__)
# The length of a filesystem label needs to be checked. The
# maximum length depends on the filesystem in use; this map
# provides the maximum label length for each known filesystem.
file_system_max_label_length = {
"ext2": 16,
"ext3": 16,
"ext4": 16,
"xfs": 12,
"vfat": 11
}
class FilesystemNode(NodeBase):
def __init__(self, config, state):
logger.debug("Create filesystem object; config [%s]", config)
super(FilesystemNode, self).__init__(config['name'], state)
# Parameter check (mandatory)
for pname in ['base', 'type']:
if pname not in config:
raise BlockDeviceSetupException(
"Mkfs config needs [%s]" % pname)
setattr(self, pname, config[pname])
# Parameter check (optional)
for pname in ['label', 'opts', 'uuid']:
setattr(self, pname,
config[pname] if pname in config else None)
if self.label is None:
self.label = self.name
# Historic reasons - this will hopefully vanish in one of
# the next major releases
if self.label == "cloudimg-rootfs" and self.type == "xfs":
logger.warning("Default label [cloudimg-rootfs] too long for xfs "
"file system - using [img-rootfs] instead")
self.label = "img-rootfs"
# ensure we don't already have a fs with this label ... they
# all must be unique.
if 'fs_labels' in self.state:
if self.label in self.state['fs_labels']:
raise BlockDeviceSetupException(
"File system label [%s] used more than once" % self.label)
self.state['fs_labels'].append(self.label)
else:
self.state['fs_labels'] = [self.label]
if self.type in file_system_max_label_length:
if file_system_max_label_length[self.type] < len(self.label):
raise BlockDeviceSetupException(
"Label [{label}] too long for filesystem [{type}]: "
"{len} > {max_len}".format(**{
'label': self.label,
'type': self.type,
'len': len(self.label),
'max_len': file_system_max_label_length[self.type]}))
else:
logger.warning("Length of label [%s] cannot be checked for "
"filesystem [%s]: unknown max length",
self.label, self.type)
logger.warning("Continue - but this might lead to an error")
if self.opts is not None:
self.opts = self.opts.strip().split(' ')
if self.uuid is None:
self.uuid = str(uuid.uuid4())
logger.debug("Filesystem created [%s]", self)
def get_edges(self):
edge_from = [self.base]
edge_to = []
return (edge_from, edge_to)
def create(self):
cmd = ["mkfs"]
cmd.extend(['-t', self.type])
if self.opts:
cmd.extend(self.opts)
if self.type in ('vfat', 'fat'):
cmd.extend(["-n", self.label])
else:
cmd.extend(["-L", self.label])
if self.type in ('ext2', 'ext3', 'ext4'):
cmd.extend(['-U', self.uuid])
elif self.type == 'xfs':
cmd.extend(['-m', "uuid=%s" % self.uuid])
else:
logger.warning("UUID will not be written for fs type [%s]",
self.type)
if self.type in ('ext2', 'ext3', 'ext4', 'xfs'):
cmd.append('-q')
if 'blockdev' not in self.state:
self.state['blockdev'] = {}
device = self.state['blockdev'][self.base]['device']
cmd.append(device)
logger.debug("Creating fs command [%s]", cmd)
exec_sudo(cmd)
if 'filesys' not in self.state:
self.state['filesys'] = {}
self.state['filesys'][self.name] \
= {'uuid': self.uuid, 'label': self.label,
'fstype': self.type, 'opts': self.opts,
'device': device}
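# For illustration, a config like
#   {'name': 'mkfs_root', 'base': 'root', 'type': 'ext4'}
# ends up running roughly (device and uuid are illustrative):
#   mkfs -t ext4 -L mkfs_root -U <uuid> -q /dev/mapper/loop0p1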
class Mkfs(PluginBase):
"""Create a file system
This block device module handles creating different file
systems.
"""
def __init__(self, config, defaults, state):
super(Mkfs, self).__init__()
self.filesystems = {}
fs = FilesystemNode(config, state)
self.filesystems[fs.get_name()] = fs
def get_nodes(self):
nodes = []
for _, fs in self.filesystems.items():
nodes.append(fs)
return nodes

@@ -1,158 +0,0 @@
# Copyright 2017 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import logging
import os
from diskimage_builder.block_device.exception \
import BlockDeviceSetupException
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
from diskimage_builder.block_device.utils import exec_sudo
logger = logging.getLogger(__name__)
class MountPointNode(NodeBase):
def __init__(self, mount_base, config, state):
super(MountPointNode, self).__init__(config['name'], state)
# Parameter check
self.mount_base = mount_base
for pname in ['base', 'mount_point']:
if pname not in config:
raise BlockDeviceSetupException(
"MountPoint config needs [%s]" % pname)
setattr(self, pname, config[pname])
logger.debug("MountPoint created [%s]", self)
def get_edges(self):
"""Insert all edges
The dependency edge is created in all cases from the base
element (typically a mkfs) and, if this is not the 'first'
mount-point, an edge is created from the previous mount-point
in "sorted order" (see :func:`cmp_mount_order`). This ensures
that during mounting (and umounting) the globally correct
order is used.
"""
edge_from = []
edge_to = []
# should have been added by __init__...
assert 'sorted_mount_points' in self.state
sorted_mount_points = self.state['sorted_mount_points']
# If we are not first, add our parent to the global dependency
# list. sorted_mount_points is a list of (mount_point, node_name)
# tuples. Find ourselves in the mount_points, and our parent node
# is the one before us in the node_name list.
mount_points = [x[0] for x in sorted_mount_points]
node_name = [x[1] for x in sorted_mount_points]
mpi = mount_points.index(self.mount_point)
if mpi > 0:
dep = node_name[mpi - 1]
edge_from.append(dep)
edge_from.append(self.base)
return (edge_from, edge_to)
def create(self):
logger.debug("mount called [%s]", self.mount_point)
rel_mp = self.mount_point if self.mount_point[0] != '/' \
else self.mount_point[1:]
mount_point = os.path.join(self.mount_base, rel_mp)
if not os.path.exists(mount_point):
# Need to sudo this because of permissions in the new
# file system tree.
exec_sudo(['mkdir', '-p', mount_point])
logger.info("Mounting [%s] to [%s]", self.name, mount_point)
exec_sudo(["mount", self.state['filesys'][self.base]['device'],
mount_point])
if 'mount' not in self.state:
self.state['mount'] = {}
self.state['mount'][self.mount_point] \
= {'name': self.name, 'base': self.base, 'path': mount_point}
if 'mount_order' not in self.state:
self.state['mount_order'] = []
self.state['mount_order'].append(self.mount_point)
def umount(self):
logger.info("Called for [%s]", self.name)
exec_sudo(["umount", self.state['mount'][self.mount_point]['path']])
def delete(self):
self.umount()
def cmp_mount_order(this, other):
"""Sort comparision function for mount-point sorting
See if ``this`` comes before ``other`` in mount-order list. In
words: if the other mount-point has us as it's parent, we come
before it (are less than it). e.g. ``/var < /var/log <
/var/log/foo``
:param this: tuple of mount_point, node name
:param other: tuple of mount_point, node name
:returns int: cmp value
"""
# sort is only based on the mount_point.
this, _ = this
other, _ = other
if this == other:
return 0
if other.startswith(this):
return -1
else:
return 1
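# For illustration, sorting with this comparator orders parents
# before their children:
#   [('/var/log', 'n2'), ('/', 'n0'), ('/var', 'n1')]
# sorts to
#   [('/', 'n0'), ('/var', 'n1'), ('/var/log', 'n2')]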
class Mount(PluginBase):
def __init__(self, config, defaults, state):
super(Mount, self).__init__()
if 'mount-base' not in defaults:
raise BlockDeviceSetupException(
"Mount default config needs 'mount-base'")
self.node = MountPointNode(defaults['mount-base'], config, state)
# save this new node to the global mount-point list and
# re-order it to keep it in mount-order. Used in get_edges()
# to ensure we build the mount graph in order
#
# note we can't just put the MountPointNode into the state,
# because it's not json serialisable and we still dump the
# state to json. that's why we have these (mount_point, name)
# tuples and the sorting trickery
sorted_mount_points = state.get('sorted_mount_points', [])
mount_points = [mp for mp, name in sorted_mount_points]
if self.node.mount_point in mount_points:
raise BlockDeviceSetupException(
"Mount point [%s] specified more than once"
% self.node.mount_point)
sorted_mount_points.append((self.node.mount_point, self.node.name))
sorted_mount_points.sort(key=functools.cmp_to_key(cmp_mount_order))
# reset the state key to the new list
state['sorted_mount_points'] = sorted_mount_points
logger.debug("Ordered mounts now: %s", sorted_mount_points)
def get_nodes(self):
return [self.node]

@@ -1,59 +0,0 @@
# Copyright 2017 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
logger = logging.getLogger(__name__)
class FstabNode(NodeBase):
def __init__(self, config, state):
super(FstabNode, self).__init__(config['name'], state)
self.base = config['base']
self.options = config.get('options', 'defaults')
self.dump_freq = config.get('dump-freq', 0)
self.fsck_passno = config.get('fsck-passno', 2)
def get_edges(self):
edge_from = [self.base]
edge_to = []
return (edge_from, edge_to)
def create(self):
logger.debug("fstab create called [%s]", self.name)
if 'fstab' not in self.state:
self.state['fstab'] = {}
self.state['fstab'][self.base] = {
'name': self.name,
'base': self.base,
'options': self.options,
'dump-freq': self.dump_freq,
'fsck-passno': self.fsck_passno
}
class Fstab(PluginBase):
def __init__(self, config, defaults, state):
super(Fstab, self).__init__()
self.node = FstabNode(config, state)
def get_nodes(self):
return [self.node]
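# For illustration, after create() runs for a node named
# 'fstab_mount_mkfs_root' with base 'mount_mkfs_root' and default
# options, the state contains an entry roughly like:
#   state['fstab']['mount_mkfs_root'] = {
#       'name': 'fstab_mount_mkfs_root', 'base': 'mount_mkfs_root',
#       'options': 'defaults', 'dump-freq': 0, 'fsck-passno': 2}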

@@ -1,222 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import logging
import six
#
# Plugins convert configuration entries into graph nodes ready for
# processing. This defines the abstract classes for both.
#
logger = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class NodeBase(object):
"""A configuration node entry
This is the main driver class for dib-block-device operation.
The final operations graph is composed of instantiations of this
class. The graph undergoes a topological sort (i.e. is linearised
in dependency order) and each node has :func:`create` called in
order to perform its operations.
Every node has a unique string ``name``. This is its key in the
graph and used for edge relationships. Implementations must
ensure they initialize it; e.g.
.. code-block:: python
class FooNode(NodeBase):
def __init__(self, name, arg1, ...):
super(FooNode, self).__init__(name)
"""
def __init__(self, name, state):
self.name = name
self.state = state
self.rollbacks = []
def get_name(self):
return self.name
def add_rollback(self, func, *args, **kwargs):
"""Add a call for rollback
Functions registered with this method will be called in
reverse order in the case of failures during
:func:`NodeBase.create`.
:param func: function to call
:param args: arguments
:param kwargs: keyword arguments
:return: None
"""
self.rollbacks.append((func, args, kwargs))
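# For illustration (an assumed caller, not part of this class): a
# node that mounted something in create() might register
#   self.add_rollback(exec_sudo, ["umount", mount_point])
# so that a later failure elsewhere in the graph unwinds the mount.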
def rollback(self):
"""Initiate rollback
Call registered rollbacks in reverse order. This method is
called by the driver in the case of failures during
:func:`NodeBase.create`.
:return: None
"""
# XXX: maybe ignore SystemExit so we always continue?
logger.debug("Calling rollback for %s", self.name)
for func, args, kwargs in reversed(self.rollbacks):
func(*args, **kwargs)
@abc.abstractmethod
def get_edges(self):
"""Return the dependencies/edges for this node
This function will be called after all nodes are created (this
is because some plugins need to know the global state of all
nodes to decide their dependencies).
This function returns a tuple with two lists
* ``edges_from`` : a list of node names that point to us
* ``edges_to`` : a list of node names we point to
In most cases, node creation will have saved a single parent
that was given in the ``base`` parameter of the configuration.
A usual return might look like:
.. code-block:: python
def get_edges(self):
return ( [self.base], [] )
Some nodes (``level0``) don't have a base, however.
"""
return
@abc.abstractmethod
def create(self):
"""Main creation driver
This is the main driver function. After the graph is
linearised, each node has its :func:`create` function called.
:raises Exception: A failure should raise an exception. This
will initiate a rollback. See :func:`NodeBase.add_rollback`.
:return: None
"""
return
def umount(self):
"""Umount actions
Actions to be taken when ``dib-block-device umount`` is called.
The nodes are called in the reverse order to :func:`create`.
:return: None
"""
return
def cleanup(self):
"""Cleanup actions
Actions to be taken when ``dib-block-device cleanup`` is called.
This is the cleanup path in the *success* case. The nodes are
called in the reverse order to :func:`create`.
:return: None
"""
return
def delete(self):
"""Cleanup actions
Actions to taken when ``dib-block-device delete`` is called.
This is the cleanup path in case of a reported external
*failure*. The nodes are called in the reverse order to
:func:`create`
:return: None
"""
return
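# A minimal concrete node, as a sketch (all names are illustrative):
#
#     class NoopNode(NodeBase):
#         def __init__(self, config, state):
#             super(NoopNode, self).__init__(config['name'], state)
#             self.base = config['base']
#         def get_edges(self):
#             return ([self.base], [])
#         def create(self):
#             pass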
@six.add_metaclass(abc.ABCMeta)
class PluginBase(object):
"""The base plugin object
This is the base plugin object. Plugins are an instantiation of
this class. There should be an entry-point (see setup.cfg)
defined under ``diskimage_builder.block_device.plugin`` for each
plugin, e.g.
foo = diskimage_builder.block_device.levelX.foo:Foo
A configuration entry in the graph config that matches this entry
point will create an instance of this class, e.g.
.. code-block:: yaml
foo:
name: foo_node
base: parent_node
argument_a: bar
argument_b: baz
The ``__init__`` function will be passed three arguments:
``config``
The full configuration dictionary for the entry.
A unique ``name`` entry can be assumed. In most cases
a ``base`` entry will be present giving the parent node
(see :func:`NodeBase.get_edges`).
``state``
A reference to the global state dictionary. This should be
passed to :func:`NodeBase.__init__` on node creation.
``defaults``
The global defaults dictionary (see ``--params``)
``get_nodes()`` should return the node object(s) created by the
config for insertion into the final configuration graph. In the
simplest case, this is probably a single node created during
instantiation. e.g.
.. code-block:: python
class Foo(PluginBase):
def __init__(self, config, defaults, state):
super(Foo, self).__init__()
self.node = FooNode(config['name'], state, ...)
def get_nodes(self):
return [self.node]
Some plugins require more, however.
"""
def __init__(self):
pass
@abc.abstractmethod
def get_nodes(self):
"""Return nodes created by the plugin
:returns: a list of :class:`.NodeBase` objects for insertion
into the graph
"""
return

@@ -1,28 +0,0 @@
- local_loop:
name: image0
- partitioning:
base: image0
name: mbr
label: mbr
partitions:
- flags: [boot, primary]
name: root
base: image0
size: 100%
- mount:
base: mkfs_root
name: mount_mkfs_root
mount_point: /
- fstab:
base: mount_mkfs_root
name: fstab_mount_mkfs_root
fsck-passno: 1
options: defaults
- mkfs:
base: this_is_not_a_node
name: mkfs_root
type: ext4

@@ -1,3 +0,0 @@
- this_is_not_a_plugin_name:
foo: bar
baz: moo

@@ -1,6 +0,0 @@
- test_a:
name: test_node_a
- test_b:
name: test_node_b
base: test_node_a

@@ -1,28 +0,0 @@
- local_loop:
name: image0
- partitioning:
base: image0
name: mbr
label: mbr
partitions:
- flags: [boot, primary]
name: root
base: image0
size: 100%
- mount:
base: mkfs_root
name: mount_mkfs_root
mount_point: /
- fstab:
base: mount_mkfs_root
name: fstab_mount_mkfs_root
fsck-passno: 1
options: defaults
- mkfs:
base: root
name: mkfs_root
type: ext4

@@ -1,18 +0,0 @@
- local_loop:
name: image0
- partitioning:
name: mbr
base: image0
label: mbr
partitions:
- name: root
flags: [ boot, primary ]
size: 100%
mkfs:
type: ext4
mount:
mount_point: /
fstab:
options: "defaults"
fsck-passno: 1

@@ -1,68 +0,0 @@
- local_loop:
name: image0
- partitioning:
base: image0
name: mbr
label: mbr
partitions:
- name: root
base: image0
flags: [ boot, primary ]
size: 55%
- name: var
base: image0
size: 40%
- name: var_log
base: image0
size: 5%
- mkfs:
base: root
name: mkfs_root
label: duplicate
type: xfs
- mount:
base: mkfs_root
name: mount_mkfs_root
mount_point: /
- fstab:
base: mount_mkfs_root
name: fstab_mount_mkfs_root
fsck-passno: 1
options: defaults
- mkfs:
base: var
name: mkfs_var
label: duplicate
type: xfs
- mount:
base: mkfs_var
name: mount_mkfs_var
mount_point: /var
- fstab:
base: mount_mkfs_var
name: fstab_mount_mkfs_var
fsck-passno: 1
options: defaults
- mkfs:
base: var_log
name: mkfs_var_log
type: xfs
- mount:
base: mkfs_var_log
name: mount_mkfs_var_log
mount_point: /var/log
- fstab:
base: mount_mkfs_var_log
name: fstab_mount_mkfs_var_log
fsck-passno: 1
options: defaults

@@ -1,28 +0,0 @@
- local_loop:
name: this_is_a_duplicate
- partitioning:
base: this_is_a_duplicate
name: root
label: mbr
partitions:
- flags: [boot, primary]
name: root
base: image0
size: 100%
- mount:
base: mkfs_root
name: this_is_a_duplicate
mount_point: /
- fstab:
base: mount_mkfs_root
name: fstab_mount_mkfs_root
fsck-passno: 1
options: defaults
- mkfs:
base: root
name: mkfs_root
type: ext4

@@ -1,8 +0,0 @@
- mkfs:
name: root_fs
base: root_part
type: xfs
mount:
name: mount_root_fs
base: root_fs
mount_point: /

@@ -1,66 +0,0 @@
- local_loop:
name: image0
- partitioning:
base: image0
name: mbr
label: mbr
partitions:
- name: root
base: image0
flags: [ boot, primary ]
size: 55%
- name: var
base: image0
size: 40%
- name: var_log
base: image0
size: 5%
- mkfs:
base: root
name: mkfs_root
type: xfs
- mount:
base: mkfs_root
name: mount_mkfs_root
mount_point: /
- fstab:
base: mount_mkfs_root
name: fstab_mount_mkfs_root
fsck-passno: 1
options: defaults
- mkfs:
base: var
name: mkfs_var
type: xfs
- mount:
base: mkfs_var
name: mount_mkfs_var
mount_point: /var
- fstab:
base: mount_mkfs_var
name: fstab_mount_mkfs_var
fsck-passno: 1
options: defaults
- mkfs:
base: var_log
name: mkfs_var_log
type: xfs
- mount:
base: mkfs_var_log
name: mount_mkfs_var_log
mount_point: /var/log
- fstab:
base: mount_mkfs_var_log
name: fstab_mount_mkfs_var_log
fsck-passno: 1
options: defaults

@@ -1,37 +0,0 @@
- local_loop:
name: image0
- partitioning:
base: image0
name: mbr
label: mbr
partitions:
- name: root
flags: [ boot, primary ]
size: 55%
mkfs:
type: xfs
mount:
mount_point: /
fstab:
options: "defaults"
fsck-passno: 1
- name: var
size: 40%
mkfs:
type: xfs
mount:
mount_point: /var
fstab:
options: "defaults"
fsck-passno: 1
- name: var_log
size: 5%
mkfs:
type: xfs
mount:
mount_point: /var/log
fstab:
options: "defaults"
fsck-passno: 1

@@ -1,29 +0,0 @@
- test_a:
name: test_node_a
rollback_one_arg: down
rollback_two_arg: you
- test_b:
base: test_node_a
name: test_node_b
rollback_one_arg: let
rollback_two_arg: gonna
- test_a:
base: test_node_b
name: test_node_aa
rollback_one_arg: never
rollback_two_arg: up
- test_b:
base: test_node_aa
name: test_node_bb
rollback_one_arg: you
rollback_two_arg: give
- test_a:
base: test_node_bb
name: test_node_aaa
rollback_one_arg: gonna
rollback_two_arg: never
trigger_rollback: yes

@@ -1,9 +0,0 @@
- mkfs:
name: root_fs
base: root_part
type: xfs
- mount:
name: mount_root_fs
base: root_fs
mount_point: /

@@ -1,6 +0,0 @@
- mkfs:
name: root_fs
base: root_part
type: xfs
mount:
mount_point: /

@@ -1,5 +0,0 @@
- mkfs:
base: fake
name: mkfs_root
label: this_label_is_too_long_to_work_with_xfs
type: xfs

@@ -1,81 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# plugin test case
import logging
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
logger = logging.getLogger(__name__)
class TestANode(NodeBase):
def __init__(self, config, state, test_rollback):
logger.debug("Create test 1")
super(TestANode, self).__init__(config['name'], state)
# might be a root node, so possibly no base
if 'base' in config:
self.base = config['base']
# put something in the state for test_b to check for
state['test_init_state'] = 'here'
# If we're doing rollback testing the config has some strings
# set for us
if test_rollback:
self.add_rollback(self.do_rollback, config['rollback_one_arg'])
self.add_rollback(self.do_rollback, config['rollback_two_arg'])
# see if we're the node that is going to fail
self.trigger_rollback = True if 'trigger_rollback' in config else False
def get_edges(self):
# may not have a base, if used as a root node
edge_from = [self.base] if hasattr(self, 'base') else []
return (edge_from, [])
def do_rollback(self, string):
# We will check this after all rollbacks to make sure they ran
# in the right order
self.state['rollback_test'].append(string)
def create(self):
# put some fake entries into state
self.state['test_a'] = {}
self.state['test_a']['value'] = 'foo'
self.state['test_a']['value2'] = 'bar'
if self.trigger_rollback:
# The rollback test will append the strings to this as
# it unrolls, and we'll check it's value at the end
self.state['rollback_test'] = []
raise RuntimeError("Rollback triggered")
return
def umount(self):
# Umount is run in reverse. This key should exist from test_b
self.state['umount'].append('test_a')
class TestA(PluginBase):
def __init__(self, config, defaults, state):
super(TestA, self).__init__()
test_rollback = True if 'test_rollback' in defaults else False
self.node = TestANode(config, state, test_rollback)
def get_nodes(self):
return [self.node]

@@ -1,71 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# plugin test case
import logging
from diskimage_builder.block_device.plugin import NodeBase
from diskimage_builder.block_device.plugin import PluginBase
logger = logging.getLogger(__name__)
class TestBNode(NodeBase):
def __init__(self, config, state, test_rollback):
logger.debug("Create test 1")
super(TestBNode, self).__init__(config['name'], state)
self.base = config['base']
# If we're doing rollback testing the config has some strings
# set for us.
if test_rollback:
self.add_rollback(self.do_rollback, config['rollback_one_arg'])
self.add_rollback(self.do_rollback, config['rollback_two_arg'])
def get_edges(self):
# this should have been inserted by test_a before
# we are called
assert self.state['test_init_state'] == 'here'
return ([self.base], [])
def do_rollback(self, string):
# We will check this after all rollbacks to make sure they ran
# in the right order
self.state['rollback_test'].append(string)
def create(self):
self.state['test_b'] = {}
self.state['test_b']['value'] = 'baz'
return
def umount(self):
# these values should have persisted from create()
assert self.state['test_b']['value'] == 'baz'
# umount run in reverse. this should run before test_a
assert 'umount' not in self.state
self.state['umount'] = []
self.state['umount'].append('test_b')
class TestB(PluginBase):
def __init__(self, config, defaults, state):
super(TestB, self).__init__()
test_rollback = True if 'test_rollback' in defaults else False
self.node = TestBNode(config, state, test_rollback)
def get_nodes(self):
return [self.node]

@@ -1,41 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import logging
import os
import testtools
import yaml
logger = logging.getLogger(__name__)
class TestBase(testtools.TestCase):
"""Base for all test cases"""
def setUp(self):
super(TestBase, self).setUp()
fs = '%(asctime)s %(levelname)s [%(name)s] %(message)s'
self.log_fixture = self.useFixture(
fixtures.FakeLogger(level=logging.DEBUG, format=fs))
def get_config_file(self, f):
"""Get the full path to sample config file f """
logger.debug(os.path.dirname(__file__))
return os.path.join(os.path.dirname(__file__), 'config', f)
def load_config_file(self, f):
"""Load f and return it after yaml parsing"""
path = self.get_config_file(f)
with open(path, 'r') as config:
return yaml.safe_load(config)

@@ -1,152 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from diskimage_builder.block_device.config import config_tree_to_graph
from diskimage_builder.block_device.config import create_graph
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
from diskimage_builder.block_device.tests.test_base import TestBase
logger = logging.getLogger(__name__)
class TestConfig(TestBase):
"""Helper for setting up and reading a config"""
def setUp(self):
super(TestConfig, self).setUp()
# previously we mocked some globals here ...
class TestGraphGeneration(TestConfig):
"""Extra helper class for testing graph generation"""
def setUp(self):
super(TestGraphGeneration, self).setUp()
self.fake_default_config = {
'build-dir': '/fake',
'image-size': '1000',
'image-dir': '/fake',
'mount-base': '/fake',
}
class TestConfigParsing(TestConfig):
"""Test parsing config file into a graph"""
# test an entry in the config not being a valid plugin
def test_config_bad_plugin(self):
config = self.load_config_file('bad_plugin.yaml')
self.assertRaises(BlockDeviceSetupException,
config_tree_to_graph,
config)
# test a config that has multiple keys for a top-level entry
def test_config_multikey_node(self):
config = self.load_config_file('multi_key_node.yaml')
self.assertRaisesRegex(BlockDeviceSetupException,
"Config entry top-level should be a single "
"dict:",
config_tree_to_graph,
config)
# a graph should remain the same
def test_graph(self):
graph = self.load_config_file('simple_graph.yaml')
parsed_graph = config_tree_to_graph(graph)
self.assertItemsEqual(parsed_graph, graph)
# equivalence of simple tree to graph
def test_simple_tree(self):
tree = self.load_config_file('simple_tree.yaml')
graph = self.load_config_file('simple_graph.yaml')
parsed_graph = config_tree_to_graph(tree)
self.assertItemsEqual(parsed_graph, graph)
# equivalence of a deeper tree to graph
def test_deep_tree(self):
tree = self.load_config_file('deep_tree.yaml')
graph = self.load_config_file('deep_graph.yaml')
parsed_graph = config_tree_to_graph(tree)
self.assertItemsEqual(parsed_graph, graph)
# equivalence of a complicated multi-partition tree to graph
def test_multipart_tree(self):
tree = self.load_config_file('multiple_partitions_tree.yaml')
graph = self.load_config_file('multiple_partitions_graph.yaml')
parsed_graph = config_tree_to_graph(tree)
logger.debug(parsed_graph)
self.assertItemsEqual(parsed_graph, graph)
class TestCreateGraph(TestGraphGeneration):
# Test a graph with bad edge pointing to an invalid node
def test_invalid_missing(self):
config = self.load_config_file('bad_edge_graph.yaml')
self.assertRaisesRegex(BlockDeviceSetupException,
"Edge not defined: this_is_not_a_node",
create_graph,
config, self.fake_default_config, {})
# Test a graph with a duplicate node name
def test_duplicate_name(self):
config = self.load_config_file('duplicate_name.yaml')
self.assertRaisesRegex(BlockDeviceSetupException,
"Duplicate node name: "
"this_is_a_duplicate",
create_graph,
config, self.fake_default_config, {})
# Test digraph generation from deep_graph config file
def test_deep_graph_generator(self):
config = self.load_config_file('deep_graph.yaml')
graph, call_order = create_graph(config, self.fake_default_config, {})
call_order_list = [n.name for n in call_order]
# manually created from deep_graph.yaml
# Note unlike below, the sort here is stable because the graph
# doesn't have multiple paths with only one partition
call_order_names = ['image0', 'root', 'mkfs_root',
'mount_mkfs_root',
'fstab_mount_mkfs_root']
self.assertListEqual(call_order_list, call_order_names)
# Test multiple partition digraph generation
def test_multiple_partitions_graph_generator(self):
config = self.load_config_file('multiple_partitions_graph.yaml')
graph, call_order = create_graph(config, self.fake_default_config, {})
call_order_list = [n.name for n in call_order]
# The sort creating call_order_list is unstable.
# We want to ensure we see the "partitions" object in
# root->var->var_log order
root_pos = call_order_list.index('root')
var_pos = call_order_list.index('var')
var_log_pos = call_order_list.index('var_log')
self.assertGreater(var_pos, root_pos)
self.assertGreater(var_log_pos, var_pos)
# Ensure mkfs happens after partition
mkfs_root_pos = call_order_list.index('mkfs_root')
self.assertLess(root_pos, mkfs_root_pos)
mkfs_var_pos = call_order_list.index('mkfs_var')
self.assertLess(var_pos, mkfs_var_pos)
mkfs_var_log_pos = call_order_list.index('mkfs_var_log')
self.assertLess(var_log_pos, mkfs_var_log_pos)

@@ -1,164 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import logging
import mock
import os
import subprocess
import diskimage_builder.block_device.tests.test_base as tb
from diskimage_builder.block_device.level0.localloop import image_create
from diskimage_builder.block_device.level1.mbr import MBR
logger = logging.getLogger(__name__)
class TestMBR(tb.TestBase):
disk_size_10M = 10 * 1024 * 1024
disk_size_1G = 1024 * 1024 * 1024
def _get_path_for_partx(self):
"""Searches and sets the path for partx
Because different distributions store the partx binary
at different places, there is the need to look for it.
"""
dirs = ["/bin", "/usr/bin", "/sbin", "/usr/sbin"]
for d in dirs:
if os.path.exists(os.path.join(d, "partx")):
return os.path.join(d, "partx")
return
# If not found, try without path.
return "partx"
def setUp(self):
super(TestMBR, self).setUp()
self.tmp_dir = fixtures.TempDir()
self.useFixture(self.tmp_dir)
self.image_path = os.path.join(self.tmp_dir.path, "image.raw")
image_create(self.image_path, TestMBR.disk_size_1G)
logger.debug("Temp image is %s", self.image_path)
self.partx_args = [self._get_path_for_partx(), "--raw",
"--output", "NR,START,END,TYPE,FLAGS,SCHEME",
"-g", "-b", "-", self.image_path]
def _run_partx(self, image_path):
logger.info("Running command: %s", self.partx_args)
return subprocess.check_output(self.partx_args).decode("ascii")
@mock.patch('os.fsync', wraps=os.fsync)
def test_one_ext_partition(self, mock_os_fsync):
"""Creates one partition and check correctness with partx."""
with MBR(self.image_path, TestMBR.disk_size_1G, 1024 * 1024) as mbr:
mbr.add_partition(False, False, TestMBR.disk_size_10M, 0x83)
# the exit handler of MBR should have synced the raw device
# before exit
mock_os_fsync.assert_called()
output = self._run_partx(self.image_path)
self.assertEqual(
"1 2048 2097151 0xf 0x0 dos\n"
"5 4096 24575 0x83 0x0 dos\n", output)
def test_zero_partitions(self):
"""Creates no partition and check correctness with partx."""
with MBR(self.image_path, TestMBR.disk_size_1G, 1024 * 1024):
pass
output = self._run_partx(self.image_path)
self.assertEqual("", output)
def test_many_ext_partitions(self):
"""Creates many partition and check correctness with partx."""
with MBR(self.image_path, TestMBR.disk_size_1G, 1024 * 1024) as mbr:
for nr in range(0, 64):
mbr.add_partition(False, False, TestMBR.disk_size_10M, 0x83)
output = self._run_partx(self.image_path)
lines = output.split("\n")
self.assertEqual(66, len(lines))
self.assertEqual(
"1 2048 2097151 0xf 0x0 dos", lines[0])
start_block = 4096
end_block = start_block + TestMBR.disk_size_10M // 512 - 1
for nr in range(1, 65):
fields = lines[nr].split(" ")
self.assertEqual(6, len(fields))
self.assertEqual(nr + 4, int(fields[0]))
self.assertEqual(start_block, int(fields[1]))
self.assertEqual(end_block, int(fields[2]))
self.assertEqual("0x83", fields[3])
self.assertEqual("0x0", fields[4])
self.assertEqual("dos", fields[5])
start_block += 22528
end_block = start_block + TestMBR.disk_size_10M // 512 - 1
def test_one_pri_partition(self):
"""Creates one primary partition and check correctness with partx."""
with MBR(self.image_path, TestMBR.disk_size_1G, 1024 * 1024) as mbr:
mbr.add_partition(True, False, TestMBR.disk_size_10M, 0x83)
output = self._run_partx(self.image_path)
self.assertEqual(
"1 2048 22527 0x83 0x0 dos\n", output)
def test_three_pri_partition(self):
"""Creates three primary partition and check correctness with partx."""
with MBR(self.image_path, TestMBR.disk_size_1G, 1024 * 1024) as mbr:
for _ in range(3):
mbr.add_partition(True, False, TestMBR.disk_size_10M, 0x83)
output = self._run_partx(self.image_path)
self.assertEqual(
"1 2048 22527 0x83 0x0 dos\n"
"2 22528 43007 0x83 0x0 dos\n"
"3 43008 63487 0x83 0x0 dos\n", output)
def test_many_pri_and_ext_partition(self):
"""Creates many primary and extended partitions."""
with MBR(self.image_path, TestMBR.disk_size_1G, 1024 * 1024) as mbr:
# Create three primary partitions
for _ in range(3):
mbr.add_partition(True, False, TestMBR.disk_size_10M, 0x83)
for _ in range(7):
mbr.add_partition(False, False, TestMBR.disk_size_10M, 0x83)
output = self._run_partx(self.image_path)
self.assertEqual(
"1 2048 22527 0x83 0x0 dos\n" # Primary 1
"2 22528 43007 0x83 0x0 dos\n" # Primary 2
"3 43008 63487 0x83 0x0 dos\n" # Primary 3
"4 63488 2097151 0xf 0x0 dos\n" # Extended
"5 65536 86015 0x83 0x0 dos\n" # Extended Partition 1
"6 88064 108543 0x83 0x0 dos\n" # Extended Partition 2
"7 110592 131071 0x83 0x0 dos\n" # ...
"8 133120 153599 0x83 0x0 dos\n"
"9 155648 176127 0x83 0x0 dos\n"
"10 178176 198655 0x83 0x0 dos\n"
"11 200704 221183 0x83 0x0 dos\n", output)

@@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import diskimage_builder.block_device.tests.test_config as tc
from diskimage_builder.block_device.config import create_graph
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
logger = logging.getLogger(__name__)
class TestMkfs(tc.TestGraphGeneration):
def test_duplicate_labels(self):
config = self.load_config_file('duplicate_fs_labels.yaml')
self.assertRaisesRegex(BlockDeviceSetupException,
"used more than once",
create_graph, config,
self.fake_default_config, {})
def test_too_long_labels(self):
config = self.load_config_file('too_long_fs_label.yaml')
self.assertRaisesRegex(BlockDeviceSetupException,
"too long for filesystem",
create_graph, config,
self.fake_default_config, {})

@@ -1,55 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import mock
import diskimage_builder.block_device.tests.test_config as tc
from diskimage_builder.block_device.config import create_graph
from diskimage_builder.block_device.level3.mount import MountPointNode
logger = logging.getLogger(__name__)
class TestMountOrder(tc.TestGraphGeneration):
@mock.patch('diskimage_builder.block_device.level3.mount.exec_sudo')
def test_mount_order(self, mock_exec_sudo):
config = self.load_config_file('multiple_partitions_graph.yaml')
state = {}
graph, call_order = create_graph(config, self.fake_default_config,
state)
# build up some fake state so that we don't have to mock out
# all the parent calls that would really make these values, as
# we just want to test MountPointNode
state['filesys'] = {}
state['filesys']['mkfs_root'] = {}
state['filesys']['mkfs_root']['device'] = 'fake'
state['filesys']['mkfs_var'] = {}
state['filesys']['mkfs_var']['device'] = 'fake'
state['filesys']['mkfs_var_log'] = {}
state['filesys']['mkfs_var_log']['device'] = 'fake'
for node in call_order:
if isinstance(node, MountPointNode):
# XXX: do we even need to create? We could test the
# sudo arguments from the mock in the below asserts
# too
node.create()
# ensure that partitions are mounted in order root->var->var/log
self.assertListEqual(state['mount_order'], ['/', '/var', '/var/log'])


@@ -1,152 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import codecs
import fixtures
import json
import logging
import os
from stevedore import extension
from testtools.matchers import FileExists
import diskimage_builder.block_device.blockdevice as bd
import diskimage_builder.block_device.tests.test_base as tb
from diskimage_builder.block_device.exception import \
BlockDeviceSetupException
logger = logging.getLogger(__name__)
class TestStateBase(tb.TestBase):
def setUp(self):
super(TestStateBase, self).setUp()
# override the extensions to the test extensions
test_extensions = extension.ExtensionManager(
namespace='diskimage_builder.block_device.plugin_test',
invoke_on_load=False)
extensions_fixture = fixtures.MonkeyPatch(
'diskimage_builder.block_device.config._extensions',
test_extensions)
self.useFixture(extensions_fixture)
# status and other bits saved here
self.build_dir = fixtures.TempDir()
self.useFixture(self.build_dir)
class TestState(TestStateBase):
    # Test the state generation & saving methods
def test_state_create(self):
params = {
'build-dir': self.build_dir.path,
'config': self.get_config_file('cmd_create.yaml')
}
bd_obj = bd.BlockDevice(params)
bd_obj.cmd_init()
bd_obj.cmd_create()
# cmd_create should have persisted this to disk
state_file = bd_obj.state_json_file_name
self.assertThat(state_file, FileExists())
# ensure we see the values put in by the test extensions
# persisted
with codecs.open(state_file, encoding='utf-8', mode='r') as fd:
state = json.load(fd)
self.assertDictEqual(state,
{'test_a': {'value': 'foo',
'value2': 'bar'},
'test_b': {'value': 'baz'},
'test_init_state': 'here'})
pickle_file = bd_obj.node_pickle_file_name
self.assertThat(pickle_file, FileExists())
        # run umount, which should load the pickled nodes and run in
        # reverse.  This will create some state in "test_b" that is
        # then added to by "test_a" ... ensuring it was run backwards.
        # It also checks that the state was persisted through the
        # pickling process.
bd_obj.cmd_umount()
# Test state going missing between phases
def test_missing_state(self):
params = {
'build-dir': self.build_dir.path,
'config': self.get_config_file('cmd_create.yaml')
}
bd_obj = bd.BlockDevice(params)
bd_obj.cmd_init()
bd_obj.cmd_create()
# cmd_create should have persisted this to disk
state_file = bd_obj.state_json_file_name
self.assertThat(state_file, FileExists())
pickle_file = bd_obj.node_pickle_file_name
self.assertThat(pickle_file, FileExists())
# simulate the state somehow going missing, and ensure that
# later calls notice
os.unlink(state_file)
os.unlink(pickle_file)
# This reads from the state dump json file
self.assertRaisesRegex(BlockDeviceSetupException,
"State dump not found",
bd_obj.cmd_getval, 'image-path')
self.assertRaisesRegex(BlockDeviceSetupException,
"State dump not found",
bd_obj.cmd_writefstab)
# this uses the pickled nodes
self.assertRaisesRegex(BlockDeviceSetupException,
"Pickle file not found",
bd_obj.cmd_delete)
self.assertRaisesRegex(BlockDeviceSetupException,
"Pickle file not found",
bd_obj.cmd_cleanup)
# XXX: figure out unit test for umount
# Test ordering of rollback calls if create() fails
def test_rollback(self):
params = {
'build-dir': self.build_dir.path,
'config': self.get_config_file('rollback.yaml'),
'test_rollback': True
}
bd_obj = bd.BlockDevice(params)
bd_obj.cmd_init()
# The config file has flags in that tell the last node to
# fail, which will trigger the rollback.
self.assertRaises(RuntimeError, bd_obj.cmd_create)
# cmd_create should have persisted this to disk even after the
# failure
state_file = bd_obj.state_json_file_name
self.assertThat(state_file, FileExists())
with codecs.open(state_file, encoding='utf-8', mode='r') as fd:
state = json.load(fd)
# ensure the rollback was called in order
self.assertListEqual(state['rollback_test'],
['never', 'gonna', 'give', 'you', 'up',
'never', 'gonna', 'let', 'you', 'down'])


@@ -1,64 +0,0 @@
# Copyright 2016 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import diskimage_builder.block_device.tests.test_base as tb
from diskimage_builder.block_device.utils import parse_abs_size_spec
from diskimage_builder.block_device.utils import parse_rel_size_spec
logger = logging.getLogger(__name__)
class TestBlockDeviceUtils(tb.TestBase):
"""Tests for the utils.py
This tests mostly the error and failure cases - because the good
cases are tested implicitly with the higher level unit tests.
"""
def test_parse_rel_size_with_abs(self):
"""Calls parse_rel_size_spec with an absolute number"""
is_rel, size = parse_rel_size_spec("154MiB", 0)
self.assertFalse(is_rel)
self.assertEqual(154 * 1024 * 1024, size)
def test_parse_abs_size_without_spec(self):
"""Call parse_abs_size_spec without spec"""
size = parse_abs_size_spec("198")
self.assertEqual(198, size)
def test_invalid_unit_spec(self):
"""Call parse_abs_size_spec with invalid unit spec"""
self.assertRaises(RuntimeError, parse_abs_size_spec, "747InVaLiDUnIt")
def test_broken_unit_spec(self):
"""Call parse_abs_size_spec with a completely broken unit spec"""
self.assertRaises(RuntimeError, parse_abs_size_spec, "_+!HuHi+-=")
    def test_parse_size_spec(self):
        # Note: map() is lazy under Python 3, so assertions inside it
        # would never run; iterate explicitly instead.
        for spec, expected in [["20TiB", 20 * 1024**4],
                               ["1024KiB", 1024 * 1024],
                               ["1.2TB", 1.2 * 1000**4],
                               ["2.4T", 2.4 * 1000**4],
                               ["512B", 512],
                               ["364", 364]]:
            self.assertEqual(parse_abs_size_spec(spec), expected)


@@ -1,125 +0,0 @@
# Copyright 2016 Andreas Florath (andreas@florath.net)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import re
import subprocess
logger = logging.getLogger(__name__)
SIZE_UNIT_SPECS = [
["TiB", 1024**4],
["GiB", 1024**3],
["MiB", 1024**2],
["KiB", 1024**1],
["TB", 1000**4],
["GB", 1000**3],
["MB", 1000**2],
["KB", 1000**1],
["T", 1000**4],
["G", 1000**3],
["M", 1000**2],
["K", 1000**1],
["B", 1],
["", 1], # No unit -> size is given in bytes
]
# Basic RE to check and split floats (without exponent)
# and a given unit specification (which must be non-numerical).
size_unit_spec_re = re.compile(r"^([\d\.]*) ?([a-zA-Z0-9_]*)$")
def _split_size_unit_spec(size_unit_spec):
"""Helper function to split unit specification into parts.
The first part is the numeric part - the second one is the unit.
"""
match = size_unit_spec_re.match(size_unit_spec)
if match is None:
raise RuntimeError("Invalid size unit spec [%s]" % size_unit_spec)
return match.group(1), match.group(2)
def _get_unit_factor(unit_str):
"""Helper function to get the unit factor.
The given unit_str needs to be a string of the
SIZE_UNIT_SPECS table.
If the unit is not found, a runtime error is raised.
"""
for spec_key, spec_value in SIZE_UNIT_SPECS:
if unit_str == spec_key:
return spec_value
raise RuntimeError("unit_str [%s] not known" % unit_str)
def parse_abs_size_spec(size_spec):
size_cnt_str, size_unit_str = _split_size_unit_spec(size_spec)
unit_factor = _get_unit_factor(size_unit_str)
return int(unit_factor * (
float(size_cnt_str) if len(size_cnt_str) > 0 else 1))
def parse_rel_size_spec(size_spec, abs_size):
"""Parses size specifications - can be relative like 50%
In addition to the absolute parsing also a relative
parsing is done. If the size specification ends in '%',
then the relative size of the given 'abs_size' is returned.
"""
if size_spec[-1] == '%':
percent = float(size_spec[:-1])
return True, int(abs_size * percent / 100.0)
return False, parse_abs_size_spec(size_spec)
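# Worked examples, derived from the functions above:
#   parse_abs_size_spec("1024KiB")    -> 1048576
#   parse_rel_size_spec("50%", 4096)  -> (True, 2048)
#   parse_rel_size_spec("512B", 4096) -> (False, 512)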
def exec_sudo(cmd):
"""Run a command under sudo
Run command under sudo, with debug trace of output. This is like
subprocess.check_call() but sudo wrapped and with output tracing
at debug levels.
Arguments:
    :param cmd: list of str arguments; passed to Popen()
:return: nothing
:raises: subprocess.CalledProcessError if return code != 0
"""
assert isinstance(cmd, list)
sudo_cmd = ["sudo"]
sudo_cmd.extend(cmd)
try:
logger.info("Calling [%s]", " ".join(sudo_cmd))
except TypeError:
# Popen actually doesn't care, but we've managed to get mixed
# str and bytes in argument lists which causes errors logging
# commands. Give a clue as to what's going on.
logger.exception("Ensure all arguments are str type!")
raise
proc = subprocess.Popen(sudo_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
for line in iter(proc.stdout.readline, b""):
logger.debug("exec_sudo: %s", line.rstrip())
proc.wait()
if proc.returncode != 0:
raise subprocess.CalledProcessError(proc.returncode,
' '.join(sudo_cmd))
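# Usage sketch (the command shown is illustrative only):
#   exec_sudo(["partprobe", "/dev/loop0"])
# logs "Calling [sudo partprobe /dev/loop0]", traces each line of output
# at debug level and raises CalledProcessError on a non-zero exit.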


@@ -1,75 +0,0 @@
# Copyright 2016 Ian Wienand (iwienand@redhat.com)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import os.path
import runpy
import sys
import diskimage_builder.paths
# borrowed from pip:locations.py
def running_under_virtualenv():
"""Return True if we're running inside a virtualenv, False otherwise."""
if hasattr(sys, 'real_prefix'):
return True
elif sys.prefix != getattr(sys, "base_prefix", sys.prefix):
return True
return False
def activate_venv():
if running_under_virtualenv():
activate_this = os.path.join(sys.prefix, "bin", "activate_this.py")
globs = runpy.run_path(activate_this, globals())
globals().update(globs)
del globs
def main():
# If we are called directly from a venv install
# (/path/venv/bin/disk-image-create) then nothing has added the
# virtualenv bin/ dir to $PATH.  The exec'd script below will be
# unable to find or call other dib tools like dib-run-parts.
#
# One solution is to say that you should only ever run
# disk-image-create in a shell that has already sourced
# bin/activate.sh (all this really does is add /path/venv/bin to
# $PATH). That's not a great interface as resulting errors will
# be very non-obvious.
#
# We can detect if we are running in a virtualenv and use
# virtualenv's "activate_this.py" script to activate it ourselves
# before we call the script.  This ensures the path is set correctly.
activate_venv()
environ = os.environ
# pre-seed some paths for the shell script
environ['_LIB'] = diskimage_builder.paths.get_path('lib')
# export the path to the current python
if not os.environ.get('DIB_PYTHON_EXEC'):
os.environ['DIB_PYTHON_EXEC'] = sys.executable
# we have to handle being called as "disk-image-create" or
# "ramdisk-image-create". ramdisk-iamge-create is just a symlink
# to disk-image-create
# XXX: we could simplify things by removing the symlink, and
# just setting IS_RAMDISK in environ here depending on sys.argv[1]
script = "%s/%s" % (diskimage_builder.paths.get_path('lib'),
os.path.basename(sys.argv[0]))
os.execve("/bin/bash", ['bash', script] + sys.argv[1:], environ)


@@ -1,348 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import argparse
import collections
import errno
import logging
import os
import sys
import yaml
import diskimage_builder.logging_config
logger = logging.getLogger(__name__)
class MissingElementException(Exception):
pass
class AlreadyProvidedException(Exception):
pass
class MissingOSException(Exception):
pass
class InvalidElementDir(Exception):
pass
class Element(object):
"""An element"""
def _get_element_set(self, path):
"""Get element set from element-[deps|provides] file
Arguments:
:param path: path to element description
:return: the set of elements in the file, or a blank set if
the file is not found.
"""
try:
with open(path) as f:
lines = (line.strip() for line in f)
# Strip blanks, but do we want to strip comment lines
# too? No use case at the moment, and comments might
# break other things that poke at the element-* files.
lines = (line for line in lines if line)
return set(lines)
except IOError as e:
if e.errno == errno.ENOENT:
return set([])
else:
raise
def _make_rdeps(self, all_elements):
"""Make a list of reverse dependencies (who depends on us).
Only valid after _find_all_elements()
Arguments:
:param all_elements: dict as returned by _find_all_elements()
:return: nothing, but elements will have r_depends var
"""
        # note: deliberately left out of __init__ so that accidental
# access without init raises error
self.r_depends = []
for name, element in all_elements.items():
if self.name in element.depends:
self.r_depends.append(element.name)
def __init__(self, name, path):
"""A new element
:param name: The element name
:param path: Full path to element. element-deps and
element-provides files will be parsed
"""
self.name = name
self.path = path
# read the provides & depends files for this element into a
# set; if the element has them.
self.provides = self._get_element_set(
os.path.join(path, 'element-provides'))
self.depends = self._get_element_set(
os.path.join(path, 'element-deps'))
logger.debug("New element : %s", str(self))
def __eq__(self, other):
return self.name == other.name
def __repr__(self):
return self.name
def __str__(self):
return '%s p:<%s> d:<%s>' % (self.name,
','.join(self.provides),
','.join(self.depends))
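# For reference, element-deps and element-provides are plain text files
# with one element name per line; e.g. a hypothetical
# elements/foo/element-deps containing the lines "dib-init-system" and
# "pkg-map" would give that Element
# depends == {'dib-init-system', 'pkg-map'}.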
def _get_elements_dir():
if not os.environ.get('ELEMENTS_PATH'):
raise Exception("$ELEMENTS_PATH must be set.")
return os.environ['ELEMENTS_PATH']
def _expand_element_dependencies(user_elements, all_elements):
"""Expand user requested elements using element-deps files.
Arguments:
:param user_elements: iterable enumerating the elements a user requested
:param all_elements: Element object dictionary from find_all_elements
:return: a set containing the names of user_elements and all
dependent elements including any transitive dependencies.
"""
final_elements = set(user_elements)
check_queue = collections.deque(user_elements)
provided = set()
provided_by = collections.defaultdict(list)
while check_queue:
# bug #1303911 - run through the provided elements first to avoid
# adding unwanted dependencies and looking for virtual elements
element = check_queue.popleft()
if element in provided:
continue
elif element not in all_elements:
raise MissingElementException("Element '%s' not found" % element)
element_obj = all_elements[element]
element_deps = element_obj.depends
element_provides = element_obj.provides
# save which elements provide another element for potential
# error message
for provide in element_provides:
provided_by[provide].append(element)
provided.update(element_provides)
check_queue.extend(element_deps - (final_elements | provided))
final_elements.update(element_deps)
conflicts = set(user_elements) & provided
if conflicts:
logger.error(
"The following elements are already provided by another element")
for element in conflicts:
logger.error("%s : already provided by %s",
element, provided_by[element])
raise AlreadyProvidedException()
if "operating-system" not in provided:
raise MissingOSException("Please include an operating system element")
out = final_elements - provided
    return [all_elements[element] for element in out]
def _find_all_elements(paths=None):
"""Build a dictionary Element() objects
Walk ELEMENTS_PATH and find all elements. Make an Element object
for each element we wish to consider. Note we process overrides
such that elements specified earlier in the ELEMENTS_PATH override
those seen later.
:param paths: A list of paths to find elements in. If None will
use ELEMENTS_PATH from environment
:return: a dictionary of all elements
"""
all_elements = {}
# note we process the later entries *first*, so that earlier
# entries will override later ones. i.e. with
# ELEMENTS_PATH=path1:path2:path3
# we want the elements in "path1" to override "path3"
if not paths:
paths = list(reversed(_get_elements_dir().split(':')))
else:
paths = list(reversed(paths.split(':')))
logger.debug("ELEMENTS_PATH is: %s", ":".join(paths))
for path in paths:
if not os.path.isdir(path):
raise InvalidElementDir("ELEMENTS_PATH entry '%s' "
"is not a directory " % path)
# In words : make a list of directories in "path". Since an
# element is a directory, this is our list of elements.
elements = [os.path.realpath(os.path.join(path, f))
for f in os.listdir(path)
if os.path.isdir(os.path.join(path, f))]
for element in elements:
# the element name is the last part of the full path in
# element (these are all directories, we know that from
# above)
name = os.path.basename(element)
new_element = Element(name, element)
if name in all_elements:
logger.warning("Element <%s> overrides <%s>",
new_element.path, all_elements[name].path)
all_elements[name] = new_element
# Now we have all the elements, make a call on each element to
    # store its reverse dependencies
for name, element in all_elements.items():
element._make_rdeps(all_elements)
return all_elements
def _get_elements(elements, paths=None):
"""Return the canonical list of Element objects
    This function returns Element objects. For external calls, use
get_elements which returns a simple tuple & list.
:param elements: user specified list of elements
:param paths: element paths, default to environment
"""
all_elements = _find_all_elements(paths)
return _expand_element_dependencies(elements, all_elements)
def get_elements(elements, paths=None):
"""Return the canonical list of elements with their dependencies
.. note::
You probably do not want to use this! Elements that require
access to the list of all other elements should generally use
the environment variables exported by disk-image-create below.
:param elements: user specified elements
:param paths: Alternative ELEMENTS_PATH; default is to use from env
:return: A de-duplicated list of tuples [(element, path),
(element, path) ...] with all elements and their
dependents, including any transitive dependencies.
"""
elements = _get_elements(elements, paths)
return [(element.name, element.path) for element in elements]
def expand_dependencies(user_elements, element_dirs):
"""Deprecated method for expanding element dependencies.
.. warning::
DO NOT USE THIS FUNCTION. For compatibility reasons, this
function does not provide paths to the returned elements. This
means the caller must process override rules if two elements
with the same name appear in element_dirs
:param user_elements: iterable enumerating the elements a user requested
:param element_dirs: The ELEMENTS_PATH to process
:return: a set containing user_elements and all dependent
elements including any transitive dependencies.
"""
logger.warning("expand_dependencies() deprecated, use get_elements")
elements = _get_elements(user_elements, element_dirs)
return set([element.name for element in elements])
def _output_env_vars(elements):
"""Output eval-able bash strings for IMAGE_ELEMENT vars
:param elements: list of Element objects to represent
"""
# first the "legacy" environment variable that just lists the
# elements
print("export IMAGE_ELEMENT='%s'" %
' '.join([element.name for element in elements]))
# Then YAML
output = {}
for element in elements:
output[element.name] = element.path
print("export IMAGE_ELEMENT_YAML='%s'" % yaml.safe_dump(output))
# Then bash array. Unfortunately, bash can't export array
# variables. So we take a compromise and produce an exported
# function that outputs the string to re-create the array.
# You can then simply do
# eval declare -A element_array=$(get_image_element_array)
# and you have it.
output = ""
for element in elements:
output += '[%s]=%s ' % (element.name, element.path)
print("function get_image_element_array {\n"
" echo \"%s\"\n"
"};\n"
"export -f get_image_element_array;" % output)
def main():
diskimage_builder.logging_config.setup()
parser = argparse.ArgumentParser()
parser.add_argument('elements', nargs='+',
help='display dependencies of the given elements')
parser.add_argument('--env', '-e', action='store_true',
default=False,
help=('Output eval-able bash strings for '
'IMAGE_ELEMENT variables'))
args = parser.parse_args(sys.argv[1:])
elements = _get_elements(args.elements)
if args.env:
_output_env_vars(elements)
else:
# deprecated compatibility output; doesn't include paths.
print(' '.join([element.name for element in elements]))
return 0
if __name__ == "__main__":
main()
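# Usage sketch (element names and path are hypothetical): with
# ELEMENTS_PATH=/usr/share/dib/elements set in the environment,
# get_elements(['vm', 'ubuntu']) returns de-duplicated (name, path)
# tuples for the requested elements plus everything reachable through
# their element-deps files, provided one of them provides
# "operating-system".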


@@ -1,16 +0,0 @@
========
apt-conf
========
This element overrides the default apt.conf for APT based systems.
Environment Variables
---------------------
DIB_APT_CONF:
:Required: No
:Default: None
:Description: To override the default apt.conf, set this variable to
  the path to your apt.conf. The new apt.conf will take effect at
  build time and run time.
:Example: ``DIB_APT_CONF=/etc/apt/apt.conf``


@@ -1,21 +0,0 @@
#!/bin/bash
# Override the default /etc/apt/apt.conf with $DIB_APT_CONF
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# exit directly if DIB_APT_CONF is not defined properly
if [ -z "${DIB_APT_CONF:-}" ] ; then
echo "DIB_APT_CONF is not set - no apt.conf will be copied in"
exit 0
elif [ ! -f "$DIB_APT_CONF" ] ; then
echo "$DIB_APT_CONF is not a valid apt.conf file."
echo "You should assign a proper apt.conf file in DIB_APT_CONF"
exit 1
fi
# copy the apt.conf to cloudimg
sudo cp -L -f $DIB_APT_CONF $TMP_MOUNT_PATH/etc/apt/apt.conf


@@ -1,21 +0,0 @@
===============
apt-preferences
===============
This element generates the APT preferences file based on the manifest
provided by the :doc:`../manifests/README` element.
The APT preferences file can be used to control which versions of packages will
be selected for installation. APT uses a priority system to make this
determination. For more information about APT preferences, see the apt_preferences(5)
man page.
Environment Variables
---------------------
DIB_DPKG_MANIFEST:
:Required: No
:Default: None
:Description: The manifest file to generate the APT preferences file from.
:Example: ``DIB_DPKG_MANIFEST=~/image.d/dib-manifests/dib-manifest-dpkg-image``
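For illustration, the element appends one pin stanza per manifest entry
to ``/etc/apt/preferences``; the package name and version below are
hypothetical::

  Package: openssh-server
  Pin: version 1:7.2p2-4
  Pin-Priority: 1001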


@@ -1,52 +0,0 @@
#!/bin/bash
#
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# exit directly if DIB_DPKG_MANIFEST is not defined properly
if [ -z "${DIB_DPKG_MANIFEST:-}" ]; then
echo "DIB_DPKG_MANIFEST must be set to the location of a manifest file you wish to use"
exit 0
elif [ ! -f "$DIB_DPKG_MANIFEST" -o ! -s "$DIB_DPKG_MANIFEST" ]; then
echo "$DIB_DPKG_MANIFEST is not a valid manifest file."
echo "You should assign a proper manifest file in DIB_DPKG_MANIFEST"
exit 1
fi
DIB_DPKG_MANIFEST=$(readlink -f $DIB_DPKG_MANIFEST)
# Create the preferences file from the given manifest
outfile=$(mktemp)
for package in $(jq -r ".packages[].package" $DIB_DPKG_MANIFEST); do
version=$(jq -r ".packages[] | select(.package == \"${package}\") |\
.version" $DIB_DPKG_MANIFEST)
cat << EOF >> $outfile
Package: ${package}
Pin: version ${version}
Pin-Priority: 1001
EOF
done
if [ -s $outfile ]; then
sudo mv $outfile $TMP_MOUNT_PATH/etc/apt/preferences
else
rm $outfile
fi


@@ -1,17 +0,0 @@
===========
apt-sources
===========
Specify an apt sources.list file which is used during image building and then
remains on the image when it is run.
Environment Variables
---------------------
DIB_APT_SOURCES:
:Required: No
:Default: None (Does not replace sources.list file)
:Description: Path to a file on the build host which is used in place of
``/etc/apt/sources.list``
:Example: ``DIB_APT_SOURCES=/etc/apt/sources.list`` will use the same
sources.list as the build host.


@@ -1,25 +0,0 @@
#!/bin/bash
# Override the default /etc/apt/sources.list with $DIB_APT_SOURCES
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# exit directly if DIB_APT_SOURCES is not defined properly
if [ -z "${DIB_APT_SOURCES:-}" ] ; then
echo "DIB_APT_SOURCES must be set to the location of a sources.list file you wish to use"
exit 0
elif [ ! -f "$DIB_APT_SOURCES" -o ! -s "$DIB_APT_SOURCES" ] ; then
echo "$DIB_APT_SOURCES is not a valid sources.list file."
echo "You should assign proper sources.list file in DIB_APT_SOURCES"
exit 1
fi
DIB_APT_SOURCES=`readlink -f $DIB_APT_SOURCES`
# copy the sources.list to cloudimg
pushd $TMP_MOUNT_PATH/etc/apt/
sudo cp -f $DIB_APT_SOURCES sources.list # dib-lint: safe_sudo
popd


@@ -1,3 +0,0 @@
base
openstack-ci-mirrors
ubuntu-minimal


@@ -1,2 +0,0 @@
DIB_APT_SOURCES=$(mktemp)
export DIB_APT_SOURCES


@@ -1,9 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
echo "testdata" > $DIB_APT_SOURCES


@@ -1,13 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eux
set -o pipefail
echo "Verifying apt sources.list content"
[ -f /etc/apt/sources.list ]
[ "$(cat /etc/apt/sources.list)" = "testdata" ]
touch /tmp/dib-test-should-fail && exit 1


@@ -1,19 +0,0 @@
.. _element-baremetal:
=========
baremetal
=========
This is the baremetal (i.e. real hardware) element.
Does the following:
* extracts the kernel and initial ramdisk of the built image.
Optional parameters:
* DIB_BAREMETAL_KERNEL_PATTERN and DIB_BAREMETAL_INITRD_PATTERN
may be supplied to specify which kernel files are preferred; this
can be of use when using custom kernels that don't fit the
standard naming patterns. Both variables must be provided in
order for them to have any effect.


@@ -1,39 +0,0 @@
#!/bin/bash
#
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# dib-lint: disable=safe_sudo
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
[ -n "$TARGET_ROOT" ]
source $_LIB/img-functions
# Dig up the initrd and kernel to use.
select_boot_kernel_initrd $TARGET_ROOT
sudo cp $BOOTDIR/$KERNEL ${IMAGE_NAME}.vmlinuz
sudo cp $BOOTDIR/$RAMDISK ${IMAGE_NAME}.initrd
sudo chmod a+r ${IMAGE_NAME}.vmlinuz
sudo chmod a+r ${IMAGE_NAME}.initrd
if [ -f $TARGET_ROOT/dib-signed-kernel-version ] ; then
echo "Removing $TARGET_ROOT/dib-signed-kernel-version"
sudo rm -f $TARGET_ROOT/dib-signed-kernel-version
fi


@@ -1,28 +0,0 @@
====
base
====
This is the base element.
Almost all users will want to include this in their disk image build,
as it includes a lot of useful functionality.
The `DIB_CLOUD_INIT_ETC_HOSTS` environment variable can be used to
customize cloud-init's management of `/etc/hosts`:
* If the variable is set to something, write that value as
cloud-init's manage_etc_hosts.
* If the variable is set to an empty string, don't create
manage_etc_hosts setting (cloud-init will use its default value).
* If the variable is not set, use "localhost" for now. Later, not
setting the variable will mean using cloud-init's default. (To
preserve diskimage-builder's current default behavior in the
future, set the variable to "localhost" explicitly.)
Notes:
* If you are getting warnings during the build about your locale
being missing, consider installing/generating the relevant locale.
This may be as simple as having language-pack-XX installed in the
pre-install stage


@@ -1,3 +0,0 @@
dib-init-system
install-types
pkg-map


@@ -1,10 +0,0 @@
#!/bin/bash
# These are useful, or at worst not harmful, for all images we build.
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
install-packages -m base iscsi_package


@@ -1,11 +0,0 @@
#!/bin/bash
# Fully upgrade everything on the system (if the package manager knows how to
# do it).
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
install-packages -u


@@ -1,17 +0,0 @@
#!/bin/bash
# Tweak the stock ubuntu cloud-init config
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# cloud-init may not actually be installed
mkdir -p /etc/cloud/cloud.cfg.d
if [ -n "${DIB_CLOUD_INIT_ETC_HOSTS:-}" ]; then
dd of=/etc/cloud/cloud.cfg.d/10_etc_hosts.cfg << EOF
manage_etc_hosts: $DIB_CLOUD_INIT_ETC_HOSTS
EOF
fi


@@ -1,33 +0,0 @@
{
"family": {
"redhat": {
"iscsi_package": "iscsi-initiator-utils"
},
"gentoo": {
"curl": "net-misc/curl",
"dhcp_client": "net-misc/dhcp",
"extlinux": "sys-boot/syslinux",
"git": "dev-vcs/git",
"grub_bios": "sys-boot/grub",
"grub-pc": "sys-boot/grub",
"ironic-python-agent": "",
"iscsi_package": "sys-block/open-iscsi",
"isc-dhcp-client": "net-misc/dhcp",
"isolinux": "",
"ncat": "net-analyzer/netcat",
"qemu-utils": "app-emulation/qemu",
"python-dev": "",
"PyYAML": "dev-python/pyyaml",
"syslinux": "sys-boot/syslinux",
"syslinux-common": "",
"tftp": "net-ftp/tftp-hpa",
"tgt": "sys-block/tgt"
},
"suse": {
"qemu-utils": "qemu-tools"
}
},
"default": {
"iscsi_package": "open-iscsi"
}
}


@@ -1,17 +0,0 @@
#!/bin/bash
# Install baseline packages and tools.
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
case $DISTRO_NAME in
'ubuntu'|'debian')
# Note: add-apt-repository would be nice for RPM platforms too - so when we
# need something like it, create a wrapper in dpkg/bin and fedora/bin.
apt-get -y update
install-packages software-properties-common
;;
esac


@@ -1,18 +0,0 @@
==========
bootloader
==========
Installs ``grub[2]`` on the boot partition of the system. If GRUB2 is
not available on the system, it falls back to Extlinux. It's also
possible to enforce the use of Extlinux by exporting a
``DIB_EXTLINUX`` variable to the environment.
Arguments
=========
* ``DIB_GRUB_TIMEOUT`` sets the ``grub`` menu timeout. It defaults to
5 seconds. Set this to 0 (no timeout) for fast boot times.
* ``DIB_BOOTLOADER_DEFAULT_CMDLINE`` sets the CMDLINE parameters that
are appended to the grub.cfg configuration. It defaults to
'nofb nomodeset vga=normal'
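For example, exporting ``DIB_EXTLINUX=1`` forces the Extlinux path even
when GRUB2 is available, and ``DIB_GRUB_TIMEOUT=0`` removes the menu
delay entirely.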


@@ -1,55 +0,0 @@
#!/bin/bash
#
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# dib-lint: disable=safe_sudo
if [ ${DIB_DEBUG_TRACE:-1} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
[ -n "$TARGET_ROOT" ]
source $_LIB/img-functions
if [ -d $TARGET_ROOT/boot/extlinux ] ; then
CONF=$TARGET_ROOT/boot/extlinux/extlinux.conf
elif [ -d $TARGET_ROOT/boot/syslinux ] ; then
CONF=$TARGET_ROOT/boot/syslinux/syslinux.cfg
else
exit 0
fi
# Dig up the initrd and kernel to use.
select_boot_kernel_initrd $TARGET_ROOT
# Serial console on Power is hvc0
if [ "powerpc ppc64 ppc64le" =~ "$ARCH" ] ; then
SERIAL_CONSOLE="hvc0"
else
SERIAL_CONSOLE="ttyS0,115200"
fi
sudo sh -c "cat > $CONF <<_EOF_
DEFAULT linux
LABEL linux
KERNEL /boot/$KERNEL
APPEND ro root=LABEL=${DIB_ROOT_LABEL} console=tty0 console=${SERIAL_CONSOLE} nofb nomodeset vga=normal
INITRD /boot/$RAMDISK
_EOF_"


@@ -1 +0,0 @@
export DIB_BOOTLOADER_DEFAULT_CMDLINE=${DIB_BOOTLOADER_DEFAULT_CMDLINE:-"nofb nomodeset vga=normal"}


@@ -1,215 +0,0 @@
#!/bin/bash
# Configure grub. Note that the various conditionals here are to handle
# different distributions gracefully.
if [ ${DIB_DEBUG_TRACE:-1} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
BOOT_DEV=$IMAGE_BLOCK_DEVICE
# All available devices, handy for some bootloaders...
declare -A DEVICES
eval DEVICES=( $IMAGE_BLOCK_DEVICES )
function install_extlinux {
install-packages -m bootloader extlinux
echo "Installing Extlinux..."
# Find and install mbr.bin
for MBR in /usr/share/syslinux/mbr.bin /usr/lib/syslinux/mbr.bin \
/usr/lib/extlinux/mbr.bin /usr/lib/EXTLINUX/mbr.bin ; do
if [ -f $MBR ]; then
break
fi
done
if [ ! -f $MBR ]; then
echo "mbr.bin (from EXT/SYSLINUX) not found."
exit 1
fi
dd if=$MBR of=$BOOT_DEV
# Find any pre-created extlinux install directory
for EXTDIR in /boot/extlinux /boot/syslinux ; do
if [ -d $EXTDIR ] ; then
break
fi
done
if [ ! -d $EXTDIR ] ; then
# No install directory found so default to /boot/syslinux
EXTDIR=/boot/syslinux
mkdir -p $EXTDIR
fi
# Finally install extlinux
extlinux --install $EXTDIR
}
function install_grub2 {
# Check for offline installation of grub
if [ -f "/tmp/grub/install" ] ; then
source /tmp/grub/install
# Right now we can't use pkg-map to branch by arch, so tag an architecture
    # specific virtual package so we can install the right thing based on
# distribution.
elif [[ "$ARCH" =~ "ppc" ]]; then
install-packages -m bootloader grub-ppc64
else
install-packages -m bootloader grub-pc
fi
# XXX: grub-probe on the nbd0/loop0 device returns nothing - workaround, manually
# specify modules. https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1073731
GRUBNAME=$(type -p grub-install) || echo "trying grub2-install"
if [ -z "$GRUBNAME" ]; then
GRUBNAME=$(type -p grub2-install)
fi
# If no GRUB2 is found, fallback to extlinux
if [ -z "$GRUBNAME" ] || [ $($GRUBNAME --version | grep "0.97" | wc -l) -ne 0 ]; then
echo "No GRUB2 found. Fallback to Extlinux..."
install_extlinux
exit 0
fi
echo "Installing GRUB2..."
# We need --force so grub does not fail due to being installed on the
# root partition of a block device.
GRUB_OPTS=${GRUB_OPTS:-"--force"}
# XXX: This is buggy:
# - --target=i386-pc is invalid for non-i386/amd64 architectures
# - and for UEFI too.
# GRUB_OPTS="$GRUB_OPTS --target=i386-pc"
if [[ ! $GRUB_OPTS == *--target* ]] && [[ $($GRUBNAME --version) =~ ' 2.' ]]; then
# /sys/ comes from the host machine. If the host machine is using EFI
# but the image being built doesn't have EFI boot-images installed we
# should set the --target to use a BIOS-based boot-image.
#
# * --target tells grub what's the target platform
# * the boot images are placed in /usr/lib/grub/<cpu>-<platform>
# * i386-pc is used for BIOS-based machines
# http://www.gnu.org/software/grub/manual/grub.html#Installation
#
if [ -d /sys/firmware/efi ]; then
if [ ! -d /usr/lib/grub/*-efi ]; then
case $ARCH in
"x86_64"|"amd64")
GRUB_OPTS="$GRUB_OPTS --target=i386-pc"
;;
"i386")
target=i386-pc
if [ -e /proc/device-tree ]; then
for x in /proc/device-tree/*; do
if [ -e "$x" ]; then
target="i386-ieee1275"
fi
done
fi
GRUB_OPTS="$GRUB_OPTS --target=$target"
;;
esac
fi
fi
fi
if [[ "$ARCH" =~ "ppc" ]] ; then
# For PPC (64-Bit regardless of Endian-ness), we use the "boot"
# partition as the one to point grub-install to, not the loopback
# device. ppc has a dedicated PReP boot partition.
# For grub2 < 2.02~beta3 this needs to be a /dev/mapper/... node after
# that a dev/loopXpN node will work fine.
$GRUBNAME --modules="part_msdos" $GRUB_OPTS ${DEVICES[boot]} --no-nvram
else
$GRUBNAME --modules="biosdisk part_msdos" $GRUB_OPTS $BOOT_DEV
fi
# This might be better factored out into a per-distro 'install-bootblock'
# helper.
if [ -d /boot/grub2 ]; then
GRUB_CFG=/boot/grub2/grub.cfg
elif [ -d /boot/grub ]; then
GRUB_CFG=/boot/grub/grub.cfg
fi
# Override the root device to the default label, and disable uuid
# lookup.
echo "GRUB_DEVICE=LABEL=${DIB_ROOT_LABEL}" >> /etc/default/grub
echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
echo "GRUB_TIMEOUT=${DIB_GRUB_TIMEOUT:-5}" >>/etc/default/grub
echo 'GRUB_TERMINAL="serial console"' >>/etc/default/grub
echo 'GRUB_GFXPAYLOAD_LINUX=text' >>/etc/default/grub
# Serial console on Power is hvc0
if [ "powerpc ppc64 ppc64le" =~ "$ARCH" ] ; then
SERIAL_CONSOLE="hvc0"
else
SERIAL_CONSOLE="ttyS0,115200"
fi
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=${SERIAL_CONSOLE} no_timer_check"
echo "GRUB_CMDLINE_LINUX_DEFAULT=\"${GRUB_CMDLINE_LINUX_DEFAULT}\"" >>/etc/default/grub
echo 'GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"' >>/etc/default/grub
if type grub2-mkconfig >/dev/null; then
GRUB_MKCONFIG="grub2-mkconfig -o $GRUB_CFG"
else
GRUB_MKCONFIG="grub-mkconfig -o $GRUB_CFG"
fi
DISTRO_NAME=${DISTRO_NAME:-}
case $DISTRO_NAME in
'ubuntu'|'debian')
sed -i -e "s/\(^GRUB_CMDLINE_LINUX.*\)\"$/\1 ${DIB_BOOTLOADER_DEFAULT_CMDLINE}\"/" /etc/default/grub
GRUB_MKCONFIG=update-grub
;;
'fedora'|'centos7'|'centos')
echo "GRUB_CMDLINE_LINUX=\"${DIB_BOOTLOADER_DEFAULT_CMDLINE}\"" >>/etc/default/grub
;;
'opensuse')
sed -i -e "s/\(^GRUB_CMDLINE_LINUX.*\)\"$/\1 ${DIB_BOOTLOADER_DEFAULT_CMDLINE}\"/" /etc/default/grub
;;
esac
# os-prober leaks /dev/sda into config file in dual-boot host
# Disable grub-os-prober to avoid the issue while running
# grub-mkconfig
# Setting a flag to track whether the entry is already there in grub config
PROBER_DISABLED=
if ! grep -qe "^\s*GRUB_DISABLE_OS_PROBER=true" /etc/default/grub; then
PROBER_DISABLED=true
echo 'GRUB_DISABLE_OS_PROBER=true' >> /etc/default/grub
fi
$GRUB_MKCONFIG
# Remove the fix to disable os_prober
if [ -n "$PROBER_DISABLED" ]; then
sed -i '$d' /etc/default/grub
fi
# grub-mkconfig generates a config with the device in it,
# This shouldn't be needed, but old code has bugs
DIB_RELEASE=${DIB_RELEASE:-}
if [ "$DIB_RELEASE" = 'wheezy' ]; then
sed -i "s%search --no.*%%" $GRUB_CFG
sed -i "s%set root=.*%set root=(hd0,1)%" $GRUB_CFG
fi
# Fix efi specific instructions in grub config file
if [ -d /sys/firmware/efi ]; then
sed -i 's%\(initrd\|linux\)efi /boot%\1 /boot%g' $GRUB_CFG
fi
}
DIB_EXTLINUX=${DIB_EXTLINUX:-0}
if [ "$DIB_EXTLINUX" != "0" ]; then
install_extlinux
else
install_grub2
fi


@@ -1,25 +0,0 @@
{
"family": {
"gentoo": {
"dkms_package": "",
"extlinux": "syslinux",
"grub-pc": "grub"
},
"suse": {
"dkms_package": "",
"extlinux": "syslinux",
"grub-pc": "grub2"
},
"redhat": {
"extlinux": "syslinux-extlinux",
"grub-pc": "grub2-tools grub2",
"grub-ppc64": "grub2-tools grub2"
}
},
"default": {
"dkms_package": "dkms",
"extlinux": "extlinux",
"grub-pc": "grub-pc",
"grub-ppc64": "grub-ieee1275"
}
}


@@ -1,4 +0,0 @@
=========
cache-url
=========
A helper script to download images into a local cache.


@@ -1,119 +0,0 @@
#!/bin/bash
# Copyright 2012 Hewlett-Packard Development Company, L.P.
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
# Download a URL to a local cache
# e.g. cache-url http://.../foo ~/.cache/image-create/foo
SCRIPT_NAME=$(basename $0)
SCRIPT_HOME=$(dirname $0)
FORCE_REVALIDATE=0
function show_options () {
echo "Usage: $SCRIPT_NAME [options] <url> <destination_file>"
echo
echo "Download a URL and cache it to a specified location."
echo "Subsequent requests will compare the last modified date"
echo "of the upstream file to determine whether it needs to be"
echo "downloaded again."
echo
echo "Options:"
echo " -f -- force upstream caches to fetch a new copy of the file"
echo " -h -- show this help"
echo
exit $1
}
TEMP=$(getopt -o hf -n $SCRIPT_NAME -- "$@")
if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi
# Note the quotes around `$TEMP': they are essential!
eval set -- "$TEMP"
while true ; do
case "$1" in
-h|"-?") show_options 0;;
-f) FORCE_REVALIDATE=1; shift 1;;
--) shift; break;;
*) echo "Error: unsupported option $1." ; exit 1 ;;
esac
done
url=$1
dest=$2
time_cond=
curl_opts=""
if [ -z $url -o -z $dest ] ; then
show_options 1
fi
if [ -p $dest ]; then
type="fifo"
tmp=$(mktemp --tmpdir download.XXXXXXXX)
else
type="normal"
mkdir -p $(dirname $dest)
tmp=$(mktemp $(dirname $dest)/.download.XXXXXXXX)
fi
if [ "$FORCE_REVALIDATE" = "1" ]; then
curl_opts="-H 'Pragma: no-cache, must-revalidate' -H 'Cache-Control: no-cache, must-revalidate'"
success="Downloaded and cached $url, having forced upstream caches to revalidate"
elif [ -f $dest -a -s $dest ] ; then
time_cond="-z $dest"
success="Server copy has changed. Using server version of $url"
else
success="Downloaded and cached $url for the first time"
fi
rcode=$(curl -v -L -o $tmp -w '%{http_code}' --connect-timeout 10 $curl_opts $url $time_cond)
if [ "$rcode" == "200" -o "${url:0:7}" == "file://" ] ; then
    # In cases where servers ignore the Modified time,
    # curl cancels the download, outputs a 200 and leaves
    # the output file untouched; we don't want this empty file.
if [ -n "$time_cond" -a ! -s $tmp ] ; then
echo "Ignoring empty file returned by curl. Using locally cached $url"
rm -f $tmp
else
echo $success
if [ "fifo" = "$type" ]; then
cp $tmp $dest
rm $tmp
else
mv $tmp $dest
fi
fi
# 213 is the response to an FTP MDTM command; curl outputs a 213 as the status
# if the url redirected to an FTP server and the file was Not-Modified
elif [ "$rcode" = "304" -o "$rcode" = "213" ] ; then
echo "Server copy has not changed. Using locally cached $url"
rm -f $tmp
else
echo "Server returned an unexpected response code. [$rcode]"
rm -f $tmp
# expose some error codes so the calling process might know what happened
if [ "$rcode" = "404" ] ; then
exit 44
fi
exit 1
fi


@@ -1,41 +0,0 @@
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import tempfile
import time
from diskimage_builder.tests import base
class TestCacheUrl(base.ScriptTestBase):
def test_cache_url_caches(self):
tempdir = tempfile.mkdtemp()
target = os.path.join(tempdir, 'target')
source = 'http://fake/url'
# Write fake data to the target file and return success
self._stub_script('curl', 'echo "test" > ${3:7:100}\necho 200')
self._run_command(['elements/cache-url/bin/cache-url',
source,
target])
self.assertTrue(os.path.exists(target))
modification_time = os.path.getmtime(target)
# Make sure that the timestamp would change if the file does
time.sleep(1)
self._stub_script('curl', 'echo "304"')
self._run_command(['elements/cache-url/bin/cache-url',
source,
target])
self.assertEqual(modification_time, os.path.getmtime(target))


@@ -1,14 +0,0 @@
==============
centos-minimal
==============
Create a minimal image based on CentOS 7.
Use of this element will require 'yum' and 'yum-utils' to be installed on
Ubuntu and Debian. Nothing additional is needed on Fedora or CentOS.
By default, ``DIB_YUM_MINIMAL_CREATE_INTERFACES`` is set to enable the
creation of ``/etc/sysconfig/network-scripts/ifcfg-eth[0|1]`` scripts to
enable DHCP on the ``eth0`` & ``eth1`` interfaces. If you do not have
these interfaces, or if you are using something else to set up the
network such as cloud-init, glean or network-manager, you should set
this to ``0``.
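For example, running a hypothetical build with
``DIB_YUM_MINIMAL_CREATE_INTERFACES=0 disk-image-create centos-minimal``
skips creating the ifcfg scripts and leaves network configuration to
whichever tool the image uses.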


@@ -1,2 +0,0 @@
yum-minimal


@@ -1 +0,0 @@
operating-system


@@ -1,9 +0,0 @@
export DISTRO_NAME=centos
export DIB_RELEASE=${DIB_RELEASE:-7}
# by default, enable DHCP configuration of eth0 & eth1 in network
# scripts. See yum-minimal for full details
export DIB_YUM_MINIMAL_CREATE_INTERFACES=${DIB_YUM_MINIMAL_CREATE_INTERFACES:-1}
# Useful for elements that work with fedora (dnf) & centos
export YUM=${YUM:-yum}


@@ -1 +0,0 @@
Verify we can build a centos-minimal image.


@@ -1,6 +0,0 @@
[centos]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=0
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
#gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7


@@ -1,21 +0,0 @@
=======
centos7
=======
Use CentOS 7 cloud images as the baseline for built disk images.
For further details see the redhat-common README.
DIB_DISTRIBUTION_MIRROR:
:Required: No
:Default: None
:Description: To use a CentOS Yum mirror, set this variable to the mirror URL
before running bin/disk-image-create. This URL should point to
the directory containing the ``5/6/7`` directories.
:Example: ``DIB_DISTRIBUTION_MIRROR=http://amirror.com/centos``
DIB_CLOUD_IMAGES:
:Required: No
:Description: Set the desired URL to fetch the images from. For
  ppc64le, the CentOS community is still working on providing
  ppc64le images; until then you'll need to set this to a local
  image file.


@@ -1,5 +0,0 @@
cache-url
redhat-common
rpm-distro
source-repositories
yum


@@ -1 +0,0 @@
operating-system


@@ -1,6 +0,0 @@
export DISTRO_NAME=centos7
export DIB_RELEASE=GenericCloud
# Useful for elements that work with fedora (dnf) & centos
export YUM=${YUM:-yum}


@@ -1,15 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
DIB_DISTRIBUTION_MIRROR=${DIB_DISTRIBUTION_MIRROR:-}
[ -n "$DIB_DISTRIBUTION_MIRROR" ] || exit 0
# Only set the mirror for the Base, Extras and Updates repositories
# The others aren't enabled and do not exist on all mirrors
sed -e "s|^#baseurl=http://mirror.centos.org/centos|baseurl=$DIB_DISTRIBUTION_MIRROR|;/^mirrorlist=/d" -i /etc/yum.repos.d/CentOS-Base.repo


@@ -1,43 +0,0 @@
#!/bin/bash
if [ ${DIB_DEBUG_TRACE:-0} -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
[ -n "$ARCH" ]
[ -n "$TARGET_ROOT" ]
if [[ "amd64 x86_64" =~ "$ARCH" ]]; then
ARCH="x86_64"
DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud.centos.org/centos/7/images}
elif [[ "arm64 aarch64" =~ "$ARCH" ]]; then
ARCH="aarch64"
DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud.centos.org/altarch/7/images/aarch64}
elif [[ "ppc64le" =~ "$ARCH" ]]; then
DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud.centos.org/altarch/7/images/ppc64le}
else
    echo 'centos7 root element only supports the x86_64, aarch64 and ppc64le values for $ARCH'
exit 1
fi
DIB_LOCAL_IMAGE=${DIB_LOCAL_IMAGE:-}
if [ -n "$DIB_LOCAL_IMAGE" ]; then
IMAGE_LOCATION=$DIB_LOCAL_IMAGE
# No need to copy a local image into the cache directory, so just specify
# the cached path as the original path.
CACHED_IMAGE=$IMAGE_LOCATION
BASE_IMAGE_FILE=$(basename $DIB_LOCAL_IMAGE)
BASE_IMAGE_TAR=$BASE_IMAGE_FILE.tgz
else
DIB_RELEASE=${DIB_RELEASE:-GenericCloud}
DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES}
BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-CentOS-7-${ARCH}-$DIB_RELEASE.qcow2.xz}
BASE_IMAGE_TAR=$BASE_IMAGE_FILE.tgz
IMAGE_LOCATION=$DIB_CLOUD_IMAGES/$BASE_IMAGE_FILE
CACHED_IMAGE=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE
fi
$TMP_HOOKS_PATH/bin/extract-image $BASE_IMAGE_FILE $BASE_IMAGE_TAR $IMAGE_LOCATION $CACHED_IMAGE
