Initial commit

Change-Id: Ic4a2603e5e23671e3cc6df0e0fee9f80c9921e6c
Andrey Shestakov 2015-08-10 14:35:04 +03:00
parent 8467171151
commit f08dc6819e
52 changed files with 2544 additions and 0 deletions

.gitmodules
@@ -0,0 +1,6 @@
[submodule "deployment_scripts/puppet/modules/tftp"]
path = deployment_scripts/puppet/modules/tftp
url = https://github.com/puppetlabs/puppetlabs-tftp
[submodule "deployment_scripts/puppet/modules/ironic"]
path = deployment_scripts/puppet/modules/ironic
url = https://github.com/openstack/puppet-ironic
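Since .gitmodules is plain INI, the recorded URLs can be queried with `git config -f` even outside a checkout (and the modules fetched afterwards with `git submodule update --init`). A small sketch against a copy of the first stanza; the temp file is illustrative:

```shell
# Write a copy of the first submodule stanza and query it with git config,
# which can read any INI-style file via -f.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[submodule "deployment_scripts/puppet/modules/tftp"]
    path = deployment_scripts/puppet/modules/tftp
    url = https://github.com/puppetlabs/puppetlabs-tftp
EOF
url=$(git config -f "$tmp" --get 'submodule.deployment_scripts/puppet/modules/tftp.url')
echo "$url"
```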

LICENSE
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md
@@ -0,0 +1,4 @@
fuel-plugin-ironic
==================
Plugin description

@@ -0,0 +1,51 @@
VERSION?=7.0.0
top_srcdir:=$(shell pwd)
ubuntu_DATA:=$(shell cd $(top_srcdir) && find share -type f)
top_builddir?=$(shell pwd)
-include config.mk
PREFIX?=/usr

all:
	@echo nop

install:
	install -d -m 755 $(DESTDIR)$(PREFIX)/bin
	install -d -m 755 $(DESTDIR)$(PREFIX)/share/fuel-bootstrap-image
	install -m 755 -t $(DESTDIR)$(PREFIX)/bin $(top_srcdir)/bin/fuel-bootstrap-image
	tar cf - -C $(top_srcdir) share | tar xf - -C $(DESTDIR)$(PREFIX)

dist: $(top_builddir)/fuel-bootstrap-image-builder-$(VERSION).tar.gz

$(top_builddir)/fuel-bootstrap-image-builder-$(VERSION).tar.gz: STAGEDIR:=$(top_builddir)/dist/fuel-bootstrap-image-builder
$(top_builddir)/fuel-bootstrap-image-builder-$(VERSION).tar.gz: bin/fuel-bootstrap-image $(ubuntu_DATA) Makefile configure
	mkdir -p $(STAGEDIR)/share
	mkdir -p $(STAGEDIR)/bin
	tar cf - -C $(top_srcdir) bin share | tar xf - -C $(STAGEDIR)
	cp -a $(top_srcdir)/Makefile $(top_srcdir)/configure $(top_srcdir)/fuel-bootstrap-image-builder.spec $(STAGEDIR)
	tar czf $@.tmp -C $(dir $(STAGEDIR)) $(notdir $(STAGEDIR))
	mv $@.tmp $@

rpm: SANDBOX:=$(top_builddir)/rpmbuild
rpm: $(top_builddir)/fuel-bootstrap-image-builder-$(VERSION).tar.gz fuel-bootstrap-image-builder.spec
	rm -rf $(SANDBOX)
	mkdir -p $(SANDBOX)/SOURCES $(SANDBOX)/SPECS $(SANDBOX)/tmp
	cp -a $< $(SANDBOX)/SOURCES
	cp -a $(top_srcdir)/fuel-bootstrap-image-builder.spec $(SANDBOX)/SPECS
	fakeroot rpmbuild --nodeps \
		--define '_tmppath $(SANDBOX)/tmp' \
		--define '_topdir $(SANDBOX)' \
		--define 'version $(VERSION)' \
		-ba $(SANDBOX)/SPECS/fuel-bootstrap-image-builder.spec

clean:
	-@rm -f $(top_builddir)/config.mk

distclean: clean
	-@rm -f $(top_builddir)/fuel-bootstrap-image-builder-$(VERSION).tar.gz
	-@rm -rf $(top_builddir)/rpmbuild
	-@rm -rf $(top_builddir)/dist

.PHONY: all install dist clean distclean rpm
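The install and dist recipes copy directory trees with a tar pipe rather than `cp -r`; the idiom preserves modes and symlinks and needs no GNU cp extensions. The same idiom in isolation, with throwaway temp directories:

```shell
# Copy the share/ subtree from one directory to another via a tar pipe.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/share/sub"
echo hello > "$src/share/sub/file.txt"
tar cf - -C "$src" share | tar xf - -C "$dst"
copied=$(cat "$dst/share/sub/file.txt")
echo "$copied"
```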

@@ -0,0 +1,448 @@
#!/bin/sh
set -ex
MYSELF="${0##*/}"
bindir="${0%/*}"
datadir="${bindir%/*}/share/fuel-bootstrap-image"
global_conf="/etc/fuel-bootstrap-image.conf"
[ -r "$global_conf" ] && . "$global_conf"
[ -z "$MOS_VERSION" ] && MOS_VERSION="7.0"
[ -z "$DISTRO_RELEASE" ] && DISTRO_RELEASE="trusty"
[ -z "$MIRROR_DISTRO" ] && MIRROR_DISTRO="http://archive.ubuntu.com/ubuntu"
[ -z "$MIRROR_MOS" ] && MIRROR_MOS="http://mirror.fuel-infra.org/mos-repos/$MOS_VERSION/cluster/base/$DISTRO_RELEASE"
[ -z "$KERNEL_FLAVOR" ] && KERNEL_FLAVOR="-generic-lts-trusty"
[ -z "$ARCH" ] && ARCH="amd64"
[ -z "$DESTDIR" ] && DESTDIR="/var/www/nailgun/bootstrap/ubuntu"
[ -z "$BOOTSTRAP_SSH_KEYS" ] && BOOTSTRAP_SSH_KEYS="$datadir/ubuntu/files/root/.ssh/authorized_keys"
BOOTSTRAP_FUEL_PKGS_DFLT="openssh-server ntp"
# Packages required for the master node to discover a bootstrap node
if [ -z "$BOOTSTRAP_IRONIC" ]; then
BOOTSTRAP_FUEL_PKGS_DFLT="$BOOTSTRAP_FUEL_PKGS_DFLT openssh-client mcollective nailgun-agent nailgun-mcagents nailgun-net-check"
GONFIG_SOURCE="$datadir/ubuntu/files/"
else
GONFIG_SOURCE="$datadir/ubuntu/files.ironic/"
fi
[ -z "$BOOTSTRAP_FUEL_PKGS" ] && BOOTSTRAP_FUEL_PKGS="$BOOTSTRAP_FUEL_PKGS_DFLT"
if [ -n "$http_proxy" ]; then
export HTTP_PROXY="$http_proxy"
elif [ -n "$HTTP_PROXY" ]; then
export http_proxy="$HTTP_PROXY"
fi
# Kernel, firmware, live boot
BOOTSTRAP_PKGS="ubuntu-minimal live-boot live-boot-initramfs-tools linux-image${KERNEL_FLAVOR} linux-firmware linux-firmware-nonfree"
# compress initramfs with xz, make squashfs root filesystem image
BOOTSTRAP_PKGS="$BOOTSTRAP_PKGS xz-utils squashfs-tools"
# Smaller tools that Provide the standard virtual packages:
# - mdadm depends on mail-transport-agent; the default is postfix => use msmtp instead
BOOTSTRAP_PKGS="$BOOTSTRAP_PKGS msmtp-mta gdebi-core"
apt_setup ()
{
local root="$1"
local sources_list="${root}/etc/apt/sources.list"
local apt_prefs="${root}/etc/apt/preferences"
local mos_codename="mos${MOS_VERSION}-${DISTRO_RELEASE}"
local broken_repo=''
local release_file="$MIRROR_MOS/dists/$mos_codename/Release"
if ! wget -q -O /dev/null "$release_file" 2>/dev/null; then
broken_repo='yes'
fi
mkdir -p "${sources_list%/*}"
cat > "$sources_list" <<-EOF
deb $MIRROR_DISTRO ${DISTRO_RELEASE} main universe multiverse restricted
deb $MIRROR_DISTRO ${DISTRO_RELEASE}-security main universe multiverse restricted
deb $MIRROR_DISTRO ${DISTRO_RELEASE}-updates main universe multiverse restricted
EOF
if [ -z "$broken_repo" ]; then
cat >> "$sources_list" <<-EOF
deb $MIRROR_MOS ${mos_codename} main
deb $MIRROR_MOS ${mos_codename}-security main
deb $MIRROR_MOS ${mos_codename}-updates main
deb $MIRROR_MOS ${mos_codename}-holdback main
EOF
else
# TODO(asheplyakov): remove this after perestroika repo gets fixed
cat >> "$sources_list" <<-EOF
deb $MIRROR_MOS ${DISTRO_RELEASE} main
EOF
fi
if [ -n "$EXTRA_DEB_REPOS" ]; then
l="$EXTRA_DEB_REPOS"
IFS='|'
set -- $l
unset IFS
for repo; do
echo "$repo"
done >> "$sources_list"
fi
cat > "$apt_prefs" <<-EOF
Package: *
Pin: release o=Mirantis, n=mos${MOS_VERSION}
Pin-Priority: 1101
Package: *
Pin: release o=Mirantis, n=${DISTRO_RELEASE}
Pin-Priority: 1101
EOF
if [ -n "$HTTP_PROXY" ]; then
cat > "$root/etc/apt/apt.conf.d/01mirantis-use-proxy" <<-EOF
Acquire::http::Proxy "$HTTP_PROXY";
EOF
fi
}
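apt_setup unpacks EXTRA_DEB_REPOS, several sources.list lines packed into one variable separated by `|`, by temporarily overriding IFS before `set --`. The same trick in isolation (repo URLs here are placeholders):

```shell
# Split a '|'-separated list of repo lines into positional parameters;
# spaces inside each field survive because IFS contains only '|'.
EXTRA_DEB_REPOS='deb http://example.org/a trusty main|deb http://example.org/b trusty main'
IFS='|'
set -- $EXTRA_DEB_REPOS
unset IFS
count=$#
first=$1
echo "$count: $first"
```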
run_apt_get ()
{
local root="$1"
shift
chroot "$root" env \
LC_ALL=C \
DEBIAN_FRONTEND=noninteractive \
DEBCONF_NONINTERACTIVE_SEEN=true \
TMPDIR=/tmp \
TMP=/tmp \
apt-get $@
}
run_apt_key ()
{
local root="$1"
shift
chroot "$root" env \
LC_ALL=C \
DEBIAN_FRONTEND=noninteractive \
DEBCONF_NONINTERACTIVE_SEEN=true \
TMPDIR=/tmp \
TMP=/tmp \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys $@
}
dpkg_is_too_old ()
{
# XXX: dpkg-deb versions older than 1.15.6 can't handle data.tar.xz
# (which is the default payload of Ubuntu packages)
# Such an ancient version of dpkg is shipped with CentOS 6.[56]
local dpkg_version
local dpkg_major_version
local dpkg_minor_version
local dpkg_patch_version
if ! dpkg-deb --help >/dev/null 2>&1; then
return 0
fi
dpkg_version=`dpkg-deb --version | sed -rne '1 s/^.*\s+version\s+([0-9]+)\.([0-9]+)\.([0-9]+).*/\1.\2.\3/p'`
[ -z "$dpkg_version" ] && return 0
IFS='.'
set -- $dpkg_version
unset IFS
dpkg_major_version="$1"
dpkg_minor_version="$2"
dpkg_patch_version="$3"
# require dpkg >= 1.15.6 (data.tar.xz support); compare as one integer to
# avoid the corner cases of chained per-component tests
if [ $((dpkg_major_version * 10000 + dpkg_minor_version * 100 + dpkg_patch_version)) -lt 11506 ]; then
echo "DEBUG: $MYSELF: dpkg is too old, using ar to unpack debian packages" >&2
return 0
fi
return 1
}
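dpkg_is_too_old pulls the version triple out of the `dpkg-deb --version` banner with a sed capture. Run against a sample banner (the banner text is illustrative):

```shell
# Extract MAJOR.MINOR.PATCH from a dpkg-deb version banner using the
# same GNU sed expression as the script.
banner='Debian dpkg-deb package archive backend version 1.16.7 (amd64).'
dpkg_version=$(echo "$banner" | sed -rne '1 s/^.*\s+version\s+([0-9]+)\.([0-9]+)\.([0-9]+).*/\1.\2.\3/p')
echo "$dpkg_version"
```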
run_debootstrap ()
{
local root="$1"
[ -z "$root" ] && exit 1
local insecure="--no-check-gpg"
local extractor=''
if dpkg_is_too_old; then
# Ubuntu packages use data.tar.xz payload. Ancient versions of
# dpkg (in particular the ones shipped with CentOS 6.x) can't
# handle such packages. Tell debootstrap to use ar instead to
# avoid the failure.
extractor='--extractor=ar'
fi
env \
LC_ALL=C \
DEBIAN_FRONTEND=noninteractive \
DEBCONF_NONINTERACTIVE_SEEN=true \
debootstrap $insecure $extractor --arch=${ARCH} ${DISTRO_RELEASE} "$root" $MIRROR_DISTRO
}
install_packages ()
{
local root="$1"
shift
echo "INFO: $MYSELF: installing pkgs: $*" >&2
run_apt_get "$root" install --yes $@
}
upgrade_chroot ()
{
local root="$1"
run_apt_key "$root" CA2B20483E301371
run_apt_get "$root" update
if ! mountpoint -q "$root/proc"; then
mount -t proc bootstrapproc "$root/proc"
fi
run_apt_get "$root" dist-upgrade --yes
}
add_local_mos_repo ()
{
# we need the local APT repo (/var/www/nailgun/ubuntu/x86_64)
# before web server is up and running => use bind mount
local root="$1"
# TODO(asheplyakov): use proper arch name (amd64)
local local_repo="/var/www/nailgun/ubuntu/x86_64"
local path_in_chroot="/tmp/local-apt"
local source_parts_d="${root}/etc/apt/sources.list.d"
# TODO(asheplyakov): update the codename after repo get fixed
local mos_codename="mos${MOS_VERSION}"
mkdir -p "${root}${path_in_chroot}" "${source_parts_d}"
mount -o bind "$local_repo" "${root}${path_in_chroot}"
mount -o remount,ro,bind "${root}${path_in_chroot}"
cat > "${source_parts_d}/nailgun-local.list" <<-EOF
deb file://${path_in_chroot} ${mos_codename} main
EOF
}
allow_insecure_apt ()
{
local root="$1"
local conflet="${root}/etc/apt/apt.conf.d/02mirantis-insecure-apt"
mkdir -p "${conflet%/*}"
echo 'APT::Get::AllowUnauthenticated 1;' > "$conflet"
}
suppress_services_start ()
{
local root="$1"
local policy_rc="$root/usr/sbin/policy-rc.d"
mkdir -p "${policy_rc%/*}"
cat > "$policy_rc" <<-EOF
#!/bin/sh
# suppress services start in the staging chroot
exit 101
EOF
chmod 755 "$policy_rc"
}
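Debian's invoke-rc.d consults /usr/sbin/policy-rc.d before starting any service; exit status 101 means "action forbidden", so maintainer scripts in the chroot are told not to start daemons. The stub can be exercised on its own (temp path is illustrative):

```shell
# Build the same stub policy script and check that it denies with 101.
policy_rc=$(mktemp)
cat > "$policy_rc" <<EOF
#!/bin/sh
# suppress services start in the staging chroot
exit 101
EOF
chmod 755 "$policy_rc"
rc=0
"$policy_rc" || rc=$?
echo "$rc"
```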
propagate_host_resolv_conf ()
{
local root="$1"
mkdir -p "$root/etc"
for conf in "/etc/resolv.conf" "/etc/hosts"; do
if [ -e "${root}${conf}" ]; then
cp -a "${root}${conf}" "${root}${conf}.bak"
fi
done
}
restore_resolv_conf ()
{
local root="$1"
for conf in "/etc/resolv.conf" "/etc/hosts"; do
if [ -e "${root}${conf}.bak" ]; then
rm -f "${root}${conf}"
cp -a "${root}${conf}.bak" "${root}${conf}"
fi
done
}
make_utf8_locale ()
{
local root="$1"
chroot "$root" /bin/sh -c "locale-gen en_US.UTF-8 && dpkg-reconfigure locales"
}
copy_conf_files ()
{
local root="$1"
local sdir="$2"
rsync -rlptDK "${sdir}" "${root%/}"
sed -i $root/etc/shadow -e '/^root/c\root:$6$oC7haQNQ$LtVf6AI.QKn9Jb89r83PtQN9fBqpHT9bAFLzy.YVxTLiFgsoqlPY3awKvbuSgtxYHx4RUcpUqMotp.WZ0Hwoj.:15441:0:99999:7:::'
}
install_ssh_keys ()
{
local root="$1"
shift
if [ -z "$*" ]; then
echo "*** Error: $MYSELF: no ssh keys specified" >&2
exit 1
fi
local authorized_keys="$root/root/.ssh/authorized_keys"
local dot_ssh_dir="${authorized_keys%/*}"
if [ ! -d "${dot_ssh_dir}" ]; then
mkdir -p -m700 "${dot_ssh_dir}"
fi
for key; do
if [ ! -r "$key" ]; then
echo "*** Error: $MYSELF: no such file: $key" >&2
exit 1
fi
done
cat $@ > "$authorized_keys"
chmod 640 "$authorized_keys"
}
cleanup_chroot ()
{
local root="$1"
[ -z "$root" ] && exit 1
signal_chrooted_processes "$root" SIGTERM
signal_chrooted_processes "$root" SIGKILL
# umount "${root}/tmp/local-apt" 2>/dev/null || umount -l "${root}/tmp/local-apt"
# rm -f "${root}/etc/apt/sources.list.d/nailgun-local.list"
rm -rf $root/var/cache/apt/archives/*.deb
rm -f $root/etc/apt/apt.conf.d/01mirantis-use-proxy.conf
rm -f $root/var/log/bootstrap.log
rm -rf $root/tmp/*
rm -rf $root/run/*
}
install_agent ()
{
local root="$1"
local package_path="$2"
local full_path=`ls $package_path/fuel-agent*.deb`
local package=`basename $full_path`
cp $full_path $root/tmp
chroot "$root" env \
LC_ALL=C \
DEBIAN_FRONTEND=noninteractive \
DEBCONF_NONINTERACTIVE_SEEN=true \
TMPDIR=/tmp \
TMP=/tmp \
gdebi -n /tmp/$package
rm -f $root/tmp/$package
}
recompress_initramfs ()
{
local root="$1"
local initramfs_conf="$root/etc/initramfs-tools/initramfs.conf"
sed -i $initramfs_conf -re 's/COMPRESS\s*=\s*gzip/COMPRESS=xz/'
rm -f $root/boot/initrd*
chroot "$root" \
env \
LC_ALL=C \
DEBIAN_FRONTEND=noninteractive \
DEBCONF_NONINTERACTIVE_SEEN=true \
TMPDIR=/tmp \
TMP=/tmp \
update-initramfs -c -k all
}
mk_squashfs_image ()
{
local root="$1"
local tmp="$$"
[ -d "$DESTDIR" ] || mkdir -p "$DESTDIR"
cp -a $root/boot/initrd* $DESTDIR/initramfs.img.${tmp}
cp -a $root/boot/vmlinuz* $DESTDIR/linux.${tmp}
rm -f $root/boot/initrd*
rm -f $root/boot/vmlinuz*
# run mksquashfs inside a chroot (Ubuntu kernel will be able to
# mount an image produced by Ubuntu squashfs-tools)
mount -t tmpfs -o rw,nodev,nosuid,noatime,mode=0755,size=4M mnt${tmp} "$root/mnt"
mkdir -p "$root/mnt/src" "$root/mnt/dst"
mount -o bind "$root" "$root/mnt/src"
mount -o remount,bind,ro "$root/mnt/src"
mount -o bind "$DESTDIR" "$root/mnt/dst"
if ! mountpoint -q "$root/proc"; then
mount -t proc sandboxproc "$root/proc"
fi
chroot "$root" mksquashfs /mnt/src /mnt/dst/root.squashfs.${tmp} -comp xz -no-progress -noappend
mv $DESTDIR/initramfs.img.${tmp} $DESTDIR/initramfs.img
mv $DESTDIR/linux.${tmp} $DESTDIR/linux
mv $DESTDIR/root.squashfs.${tmp} $DESTDIR/root.squashfs
umount "$root/mnt/dst"
umount "$root/mnt/src"
umount "$root/mnt"
}
build_image ()
{
local root="$1"
chmod 755 "$root"
suppress_services_start "$root"
run_debootstrap "$root"
suppress_services_start "$root"
propagate_host_resolv_conf "$root"
make_utf8_locale "$root"
apt_setup "$root"
# add_local_mos_repo "$root"
allow_insecure_apt "$root"
upgrade_chroot "$root"
install_packages "$root" $BOOTSTRAP_PKGS $BOOTSTRAP_FUEL_PKGS
install_agent "$root" $AGENT_PACKAGE_PATH
recompress_initramfs "$root"
copy_conf_files "$root" $GONFIG_SOURCE
install_ssh_keys "$root" $BOOTSTRAP_SSH_KEYS
restore_resolv_conf "$root"
cleanup_chroot "$root"
mk_squashfs_image "$root"
}
root=`mktemp -d --tmpdir fuel-bootstrap-image.XXXXXXXXX`
main ()
{
build_image "$root"
}
signal_chrooted_processes ()
{
local root="$1"
local signal="${2:-SIGTERM}"
local max_attempts=10
local timeout=2
local count=0
local found_processes
[ ! -d "$root" ] && return 0
while [ $count -lt $max_attempts ]; do
found_processes=''
for pid in `fuser $root 2>/dev/null`; do
[ "$pid" = "kernel" ] && continue
if [ "`readlink /proc/$pid/root`" = "$root" ]; then
found_processes='yes'
kill "-${signal}" $pid
fi
done
[ -z "$found_processes" ] && break
count=$((count+1))
sleep $timeout
done
}
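signal_chrooted_processes polls up to max_attempts times, sleeping between passes, and stops as soon as a scan finds nothing left. The same bounded-retry skeleton with the fuser/readlink scan replaced by a stub (purely illustrative):

```shell
# Bounded polling loop: give up after max_attempts or stop as soon as
# the probe (stubbed to report "clean" on the 3rd pass) succeeds.
max_attempts=10
count=0
probes=0
while [ $count -lt $max_attempts ]; do
    probes=$((probes+1))
    found_processes='yes'
    [ $probes -ge 3 ] && found_processes=''   # stand-in for the process scan
    [ -z "$found_processes" ] && break
    count=$((count+1))
done
echo "$probes"
```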
final_cleanup ()
{
signal_chrooted_processes "$root" SIGTERM
signal_chrooted_processes "$root" SIGKILL
for mnt in /tmp/local-apt /mnt/dst /mnt/src /mnt /proc; do
if mountpoint -q "${root}${mnt}"; then
umount "${root}${mnt}" || umount -l "${root}${mnt}" || true
fi
done
if [ -z "$SAVE_TEMPS" ]; then
rm -rf "$root"
fi
}
trap final_cleanup 0
trap final_cleanup HUP TERM INT QUIT
main

@@ -0,0 +1,21 @@
#!/bin/sh
set -e
# Stub configure script to make rpmbuild happy
PREFIX=''
expect_prefix=''
for arg; do
	if [ -n "$expect_prefix" ]; then
		PREFIX="$arg"
		expect_prefix=''
		continue
	fi
	case $arg in
		--prefix)
			# value arrives as the next argument
			expect_prefix=yes
			;;
		--prefix=*)
			# '#' (shortest match): strip only the literal option prefix
			PREFIX="${arg#--prefix=}"
			;;
	esac
done
cat > config.mk <<-EOF
PREFIX:=${PREFIX:-/usr}
EOF
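The `--prefix=*` handling hinges on POSIX prefix stripping; note that `${arg#--prefix=}` and `${arg##--prefix=*}` behave very differently, since `#` removes the shortest matching prefix and `##` the longest:

```shell
# With a pattern ending in '*', '##' (longest match) consumes the entire
# string, while '#' with a literal pattern strips only the option name.
arg='--prefix=/usr/local'
shortest=${arg#--prefix=}
longest=${arg##--prefix=*}
echo "shortest=$shortest longest=[$longest]"
```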

@@ -0,0 +1,38 @@
%define name fuel-bootstrap-image-builder
%{!?version: %define version 7.0.0}
%{!?release: %define release 1}
Summary: Fuel bootstrap image generator
Name: %{name}
Version: %{version}
Release: %{release}
URL: http://github.com/asheplyakov/fuel-bootstrap-image
Source0: fuel-bootstrap-image-builder-%{version}.tar.gz
License: Apache
BuildRoot: %{_tmppath}/%{name}-%{version}-buildroot
Prefix: %{_prefix}
Requires: debootstrap, wget
BuildArch: noarch
%description
Fuel bootstrap image generator package
%prep
%autosetup -n %{name}
%build
%configure
%install
%make_install
mkdir -p %{buildroot}/var/www/nailgun/bootstrap/ubuntu
%clean
rm -rf $RPM_BUILD_ROOT
%files
%defattr(-,root,root)
%{_bindir}/*
%{_datadir}/fuel-bootstrap-image/*
%dir /var/www/nailgun/bootstrap/ubuntu

@@ -0,0 +1,13 @@
[problems]
# Superblock last mount time is in the future (PR_0_FUTURE_SB_LAST_MOUNT).
0x000031 = {
preen_ok = true
preen_nomessage = true
}
# Superblock last write time is in the future (PR_0_FUTURE_SB_LAST_WRITE).
0x000032 = {
preen_ok = true
preen_nomessage = true
}

@@ -0,0 +1,7 @@
description "Ironic callback script"
start on started ssh
task
exec /usr/bin/ironic_callback

@@ -0,0 +1,20 @@
Protocol 2
SyslogFacility AUTHPRIV
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication no
UsePAM no
UseDNS no
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/lib/openssh/sftp-server
# Secure Ciphers and MACs
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1

@@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDtrVTSM8tGd4E8khJn2gfN/2fymnX/0YKAGSVZTWDNIcYL5zXTlSwrccn/8EgmnNsJNxucJRT+oWqrDGaFaehuwlY/IBqm50KJVaUr5QYzOUpqVpFIpoX3UwETCxcSB1LiQYbCvrJcqOPQ4Zu9fMhMGKaAX1ohzOumn4czuLDYIvCnPnoU5RDWt7g1GaFFlzGU3JFooj7/aWFJMqJLinvay3vr2vFpBvO1y29nKu+zgpZkzzJCc0ndoVqvB+W9DY6QtgTSWfd3ZE/8vg4h8QV8H+xxqL/uWCxDkv2Y3rviAHivR/V+1YCSQH0NBJrNSkRjd+1roLhcEGT7/YEnbgVV nailgun@bootstrap

@@ -0,0 +1,13 @@
[problems]
# Superblock last mount time is in the future (PR_0_FUTURE_SB_LAST_MOUNT).
0x000031 = {
preen_ok = true
preen_nomessage = true
}
# Superblock last write time is in the future (PR_0_FUTURE_SB_LAST_WRITE).
0x000032 = {
preen_ok = true
preen_nomessage = true
}

@@ -0,0 +1,28 @@
/var/log/cron
/var/log/maillog
/var/log/messages
/var/log/secure
/var/log/spooler
/var/log/mcollective.log
/var/log/nailgun-agent.log
{
# This file is used for daily log rotations, do not use size options here
sharedscripts
daily
# rotate only if 30M size or bigger
minsize 30M
maxsize 50M
# truncate file, do not delete & recreate
copytruncate
# keep logs for XXX rotations
rotate 3
# compress rotated logs
compress
# ignore missing files
missingok
# do not rotate empty files
notifempty
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}

@@ -0,0 +1,27 @@
main_collective = mcollective
collectives = mcollective
libdir = /usr/share/mcollective/plugins
logfile = /var/log/mcollective.log
loglevel = debug
direct_addressing = 1
daemonize = 0
# Set TTL to 1.5 hours
ttl = 5400
# Plugins
securityprovider = psk
plugin.psk = unset
connector = rabbitmq
plugin.rabbitmq.vhost = mcollective
plugin.rabbitmq.pool.size = 1
plugin.rabbitmq.pool.1.host =
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mcollective
plugin.rabbitmq.pool.1.password = marionette
plugin.rabbitmq.heartbeat_interval = 30
# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml

@@ -0,0 +1,6 @@
#!/bin/sh -e
fix-configs-on-startup || true
flock -w 0 -o /var/lock/agent.lock -c "/opt/nailgun/bin/agent >> /var/log/nailgun-agent.log 2>&1" || true
touch /var/lock/subsys/local

@@ -0,0 +1,6 @@
# Log all messages with this template
$template CustomLog, "%$NOW%T%TIMESTAMP:8:$%Z %syslogseverity-text% %syslogtag% %msg%\n"
$ActionFileDefaultTemplate CustomLog
user.debug /var/log/messages

@@ -0,0 +1,20 @@
{
"watchlist": [
{"servers": [ {"host": "@MASTER_NODE_IP@"} ],
"watchfiles": [
{"tag": "bootstrap/dmesg", "files": ["/var/log/dmesg"]},
{"tag": "bootstrap/secure", "files": ["/var/log/secure"]},
{"tag": "bootstrap/messages", "files": ["/var/log/messages"]},
{"tag": "bootstrap/fuel-agent", "files": ["/var/log/fuel-agent.log"]},
{"tag": "bootstrap/mcollective", "log_type": "ruby",
"files": ["/var/log/mcollective.log"]},
{"tag": "bootstrap/agent", "log_type": "ruby",
"files": ["/var/log/nailgun-agent.log"]},
{"tag": "bootstrap/netprobe_sender", "log_type": "netprobe",
"files": ["/var/log/netprobe_sender.log"]},
{"tag": "bootstrap/netprobe_listener", "log_type": "netprobe",
"files": ["/var/log/netprobe_listener.log"]}
]
}
]
}

@@ -0,0 +1,20 @@
Protocol 2
SyslogFacility AUTHPRIV
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication no
UsePAM no
UseDNS no
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/lib/openssh/sftp-server
# Secure Ciphers and MACs
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1

@@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDtrVTSM8tGd4E8khJn2gfN/2fymnX/0YKAGSVZTWDNIcYL5zXTlSwrccn/8EgmnNsJNxucJRT+oWqrDGaFaehuwlY/IBqm50KJVaUr5QYzOUpqVpFIpoX3UwETCxcSB1LiQYbCvrJcqOPQ4Zu9fMhMGKaAX1ohzOumn4czuLDYIvCnPnoU5RDWt7g1GaFFlzGU3JFooj7/aWFJMqJLinvay3vr2vFpBvO1y29nKu+zgpZkzzJCc0ndoVqvB+W9DY6QtgTSWfd3ZE/8vg4h8QV8H+xxqL/uWCxDkv2Y3rviAHivR/V+1YCSQH0NBJrNSkRjd+1roLhcEGT7/YEnbgVV nailgun@bootstrap

@@ -0,0 +1,30 @@
#!/bin/sh
masternode_ip=`sed -rn 's/^.*url=http:\/\/(([0-9]{1,3}\.){3}[0-9]{1,3}).*$/\1/ p' /proc/cmdline`
mco_user=$(sed 's/\ /\n/g' /proc/cmdline | grep mco_user | awk -F\= '{print $2}')
mco_pass=$(sed 's/\ /\n/g' /proc/cmdline | grep mco_pass | awk -F\= '{print $2}')
[ -z "$mco_user" ] && mco_user="mcollective"
[ -z "$mco_pass" ] && mco_pass="marionette"
# Send logs to master node.
sed -i /etc/send2syslog.conf -re "s/@MASTER_NODE_IP@/$masternode_ip/"
/usr/bin/send2syslog.py -i < /etc/send2syslog.conf
# Set up NTP
# Disable panic about huge clock offset
sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
sed -i '1 i tinker panic 0' /etc/ntp.conf
# Sync clock with master node
sed -i "/^\s*server\b/ d" /etc/ntp.conf
echo "server $masternode_ip burst iburst" >> /etc/ntp.conf
service ntp restart
# Update mcollective config
sed -i "s/^plugin.rabbitmq.pool.1.host\b.*$/plugin.rabbitmq.pool.1.host = $masternode_ip/" /etc/mcollective/server.cfg
sed -i "s/^plugin.rabbitmq.pool.1.user\b.*$/plugin.rabbitmq.pool.1.user = $mco_user/" /etc/mcollective/server.cfg
sed -i "s/^plugin.rabbitmq.pool.1.password\b.*$/plugin.rabbitmq.pool.1.password = $mco_pass/" /etc/mcollective/server.cfg
service mcollective restart
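The master node IP is recovered from the kernel command line with a sed capture on the `url=` parameter. Against a sample cmdline (the values are illustrative):

```shell
# Extract the deployment server IP from a sample kernel command line
# using the same sed expression as the callback script.
cmdline='ro root=live url=http://10.20.0.2:8000/api mco_user=mcollective'
masternode_ip=$(echo "$cmdline" | sed -rn 's/^.*url=http:\/\/(([0-9]{1,3}\.){3}[0-9]{1,3}).*$/\1/ p')
echo "$masternode_ip"
```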

@@ -0,0 +1,505 @@
#!/usr/bin/env python
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import logging
from logging.handlers import SysLogHandler
from optparse import OptionParser
import os
import re
import signal
import sys
import time
# Add syslog levels to logging module.
logging.NOTICE = 25
logging.ALERT = 60
logging.EMERG = 70
logging.addLevelName(logging.NOTICE, 'NOTICE')
logging.addLevelName(logging.ALERT, 'ALERT')
logging.addLevelName(logging.EMERG, 'EMERG')
SysLogHandler.priority_map['NOTICE'] = 'notice'
SysLogHandler.priority_map['ALERT'] = 'alert'
SysLogHandler.priority_map['EMERG'] = 'emerg'
# Define data and message format according to RFC 5424.
rfc5424_format = '{version} {timestamp} {hostname} {appname} {procid}'\
' {msgid} {structured_data} {msg}'
date_format = '%Y-%m-%dT%H:%M:%SZ'
# Define global semaphore.
sending_in_progress = 0
# Define file types.
msg_levels = {'ruby': {'regex': '(?P<level>[DIWEF]), \[[0-9-]{10}T',
'levels': {'D': logging.DEBUG,
'I': logging.INFO,
'W': logging.WARNING,
'E': logging.ERROR,
'F': logging.FATAL
}
},
'syslog': {'regex': ('[0-9-]{10}T[0-9:]{8}Z (?P<level>'
'debug|info|notice|warning|err|crit|'
'alert|emerg)'),
'levels': {'debug': logging.DEBUG,
'info': logging.INFO,
'notice': logging.NOTICE,
'warning': logging.WARNING,
'err': logging.ERROR,
'crit': logging.CRITICAL,
'alert': logging.ALERT,
'emerg': logging.EMERG
}
},
'anaconda': {'regex': ('[0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
},
'netprobe': {'regex': ('[0-9-]{10} [0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
}
}
relevel_errors = {
'anaconda': [
{
'regex': 'Error downloading \
http://.*/images/(product|updates).img: HTTP response code said error',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
},
{
'regex': 'got to setupCdrom without a CD device',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
}
]
}
# Create a main logger.
logging.basicConfig(format='%(levelname)s: %(message)s')
main_logger = logging.getLogger()
main_logger.setLevel(logging.NOTSET)
class WatchedFile:
    """WatchedFile(filename) => Object that reads lines from a file if it exists."""
def __init__(self, name):
self.name = name
self.fo = None
self.where = 0
def reset(self):
if self.fo:
self.fo.close()
self.fo = None
self.where = 0
def _checkRewrite(self):
try:
            if os.stat(self.name).st_size < self.where:
self.reset()
except OSError:
self.close()
def readLines(self):
        """Return a list of newly appended lines from the file, if it exists."""
self._checkRewrite()
if not self.fo:
try:
self.fo = open(self.name, 'r')
except IOError:
return ()
lines = self.fo.readlines()
self.where = self.fo.tell()
return lines
def close(self):
self.reset()
class WatchedGroup:
"""Can send data from group of specified files to specified servers."""
def __init__(self, servers, files, name):
self.servers = servers
self.files = files
self.log_type = files.get('log_type', 'syslog')
self.name = name
self._createLogger()
def _createLogger(self):
self.watchedfiles = []
logger = logging.getLogger(self.name)
logger.setLevel(logging.NOTSET)
logger.propagate = False
# Create log formatter.
format_dict = {'version': '1',
'timestamp': '%(asctime)s',
'hostname': config['hostname'],
'appname': self.files['tag'],
'procid': '-',
'msgid': '-',
'structured_data': '-',
'msg': '%(message)s'
}
log_format = rfc5424_format.format(**format_dict)
formatter = logging.Formatter(log_format, date_format)
# Add log handler for each server.
for server in self.servers:
            port = server.get('port', 514)
syslog = SysLogHandler((server["host"], port))
syslog.setFormatter(formatter)
logger.addHandler(syslog)
self.logger = logger
# Create WatchedFile objects from list of files.
for name in self.files['files']:
self.watchedfiles.append(WatchedFile(name))
def send(self):
        """Send appended data from watched files to the configured servers."""
for watchedfile in self.watchedfiles:
for line in watchedfile.readLines():
line = line.strip()
level = self._get_msg_level(line, self.log_type)
# Get rid of duplicated information in anaconda logs
line = re.sub(
msg_levels[self.log_type]['regex'] + "\s*:?\s?",
"",
line
)
# Ignore meaningless errors
try:
for r in relevel_errors[self.log_type]:
if level == r['levelfrom'] and \
re.match(r['regex'], line):
level = r['levelto']
except KeyError:
pass
self.logger.log(level, line)
main_logger and main_logger.log(
level,
'From file "%s" send: %s' % (watchedfile.name, line)
)
@staticmethod
def _get_msg_level(line, log_type):
if log_type in msg_levels:
msg_type = msg_levels[log_type]
regex = re.match(msg_type['regex'], line)
if regex:
return msg_type['levels'][regex.group('level')]
return logging.INFO
def sig_handler(signum, frame):
    """Send all new data when a signal arrives."""
if not sending_in_progress:
send_all()
exit(signum)
else:
config['run_once'] = True
def send_all():
"""Send any updates."""
for group in watchlist:
group.send()
def main_loop():
    """Periodically call send() for each group in watchlist."""
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)
while watchlist:
time.sleep(0.5)
send_all()
# If asked to run_once, exit now
if config['run_once']:
break
class Config:
"""Collection of config generation methods.
Usage: config = Config.getConfig()
"""
@classmethod
def getConfig(cls):
"""Generate config from command line arguments and config file."""
# example_config = {
# "daemon": True,
# "run_once": False,
# "debug": False,
# "watchlist": [
# {"servers": [ {"host": "localhost", "port": 514} ],
# "watchfiles": [
# {"tag": "anaconda",
# "log_type": "anaconda",
# "files": ["/tmp/anaconda.log",
# "/mnt/sysimage/root/install.log"]
# }
# ]
# }
# ]
# }
default_config = {"daemon": True,
"run_once": False,
"debug": False,
"hostname": cls._getHostname(),
"watchlist": []
}
# First use default config as running config.
config = dict(default_config)
# Get command line options and validate it.
cmdline = cls.cmdlineParse()[0]
# Check config file source and read it.
if cmdline.config_file or cmdline.stdin_config:
try:
if cmdline.stdin_config is True:
fo = sys.stdin
else:
fo = open(cmdline.config_file, 'r')
parsed_config = json.load(fo)
if cmdline.debug:
print(parsed_config)
except IOError: # Raised if IO operations failed.
                main_logger.error("Cannot read config file %s\n" %
cmdline.config_file)
exit(1)
except ValueError as e: # Raised if json parsing failed.
                main_logger.error("Cannot parse config file. %s\n" %
e.message)
exit(1)
# Validate config from config file.
cls.configValidate(parsed_config)
# Copy gathered config from config file to running config
# structure.
for key, value in parsed_config.items():
config[key] = value
else:
# If no config file specified use watchlist setting from
# command line.
watchlist = {"servers": [{"host": cmdline.host,
"port": cmdline.port}],
"watchfiles": [{"tag": cmdline.tag,
"log_type": cmdline.log_type,
"files": cmdline.watchfiles}]}
config['watchlist'].append(watchlist)
# Apply behavioural command line options to running config.
if cmdline.no_daemon:
config["daemon"] = False
if cmdline.run_once:
config["run_once"] = True
if cmdline.debug:
config["debug"] = True
return config
@staticmethod
def _getHostname():
"""Generate hostname by BOOTIF kernel option or use os.uname()."""
with open('/proc/cmdline') as fo:
cpu_cmdline = fo.readline().strip()
regex = re.search('(?<=BOOTIF=)([0-9a-fA-F-]*)', cpu_cmdline)
if regex:
mac = regex.group(0).upper()
return ''.join(mac.split('-'))
return os.uname()[1]
@staticmethod
def cmdlineParse():
"""Parse command line config options."""
parser = OptionParser()
parser.add_option("-c", "--config", dest="config_file", metavar="FILE",
help="Read config from FILE.")
parser.add_option("-i", "--stdin", dest="stdin_config", default=False,
action="store_true", help="Read config from Stdin.")
# FIXIT Add optionGroups.
parser.add_option("-r", "--run-once", dest="run_once",
action="store_true", help="Send all data and exit.")
parser.add_option("-n", "--no-daemon", dest="no_daemon",
action="store_true", help="Do not daemonize.")
parser.add_option("-d", "--debug", dest="debug",
action="store_true", help="Print debug messages.")
parser.add_option("-t", "--tag", dest="tag", metavar="TAG",
help="Set tag of sending messages as TAG.")
parser.add_option("-T", "--type", dest="log_type", metavar="TYPE",
default='syslog',
                          help="Set type of files as TYPE "
                               "(default: %default).")
parser.add_option("-f", "--watchfile", dest="watchfiles",
action="append",
metavar="FILE", help="Add FILE to watchlist.")
parser.add_option("-s", "--host", dest="host", metavar="HOSTNAME",
help="Set destination as HOSTNAME.")
parser.add_option("-p", "--port", dest="port", type="int", default=514,
metavar="PORT",
help="Set remote port as PORT (default: %default).")
options, args = parser.parse_args()
# Validate gathered options.
if options.config_file and options.stdin_config:
parser.error("You must not set both options --config"
" and --stdin at the same time.")
exit(1)
if ((options.config_file or options.stdin_config) and
(options.tag or options.watchfiles or options.host)):
main_logger.warning("If --config or --stdin is set up options"
" --tag, --watchfile, --type,"
" --host and --port will be ignored.")
if (not (options.config_file or options.stdin_config) and
not (options.tag and options.watchfiles and options.host)):
parser.error("Options --tag, --watchfile and --host"
" must be set up at the same time.")
exit(1)
return options, args
@staticmethod
def _checkType(value, value_type, value_name='', msg=None):
"""Check correctness of type of value and exit if not."""
if not isinstance(value, value_type):
            message = msg or "Value %r in config has type %r but"\
" %r is expected." %\
(value_name, type(value).__name__, value_type.__name__)
main_logger.error(message)
exit(1)
@classmethod
def configValidate(cls, config):
"""Validate types and names of data items in config."""
cls._checkType(config, dict, msg='Config must be a dict.')
for key in ("daemon", "run_once", "debug"):
if key in config:
cls._checkType(config[key], bool, key)
key = "hostname"
if key in config:
cls._checkType(config[key], basestring, key)
key = "watchlist"
if key in config:
cls._checkType(config[key], list, key)
else:
main_logger.error("There must be key %r in config." % key)
exit(1)
for item in config["watchlist"]:
cls._checkType(item, dict, "watchlist[n]")
key, name = "servers", "watchlist[n] => servers"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
key, name = "watchfiles", "watchlist[n] => watchfiles"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
for item2 in item["servers"]:
cls._checkType(item2, dict, "watchlist[n] => servers[n]")
key, name = "host", "watchlist[n] => servers[n] => host"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => servers[n]" item'))
exit(1)
key, name = "port", "watchlist[n] => servers[n] => port"
if key in item2:
cls._checkType(item2[key], int, name)
for item2 in item["watchfiles"]:
cls._checkType(item2, dict, "watchlist[n] => watchfiles[n]")
key, name = "tag", "watchlist[n] => watchfiles[n] => tag"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
key = "log_type"
name = "watchlist[n] => watchfiles[n] => log_type"
if key in item2:
cls._checkType(item2[key], basestring, name)
key, name = "files", "watchlist[n] => watchfiles[n] => files"
if key in item2:
cls._checkType(item2[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
for item3 in item2["files"]:
name = "watchlist[n] => watchfiles[n] => files[n]"
cls._checkType(item3, basestring, name)
# Create global config.
config = Config.getConfig()
# Create list of WatchedGroup objects with different log names.
watchlist = []
i = 0
for item in config["watchlist"]:
for files in item['watchfiles']:
watchlist.append(WatchedGroup(item['servers'], files, str(i)))
i = i + 1
# Fork and loop
if config["daemon"]:
if not os.fork():
# Redirect the standard I/O file descriptors to the specified file.
main_logger = None
DEVNULL = getattr(os, "devnull", "/dev/null")
os.open(DEVNULL, os.O_RDWR) # standard input (0)
os.dup2(0, 1) # Duplicate standard input to standard output (1)
os.dup2(0, 2) # Duplicate standard input to standard error (2)
main_loop()
sys.exit(1)
sys.exit(0)
else:
if not config['debug']:
main_logger = None
main_loop()
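The script assembles each syslog message from the RFC 5424 template defined near the top, filling hostname from BOOTIF/uname and appname from the watchfile tag. The formatter wiring in `WatchedGroup._createLogger` can be exercised in isolation; a minimal sketch with illustrative hostname and tag values:

```python
import logging

# Template and date format copied from the script above.
rfc5424_format = ('{version} {timestamp} {hostname} {appname} {procid}'
                  ' {msgid} {structured_data} {msg}')
date_format = '%Y-%m-%dT%H:%M:%SZ'

# 'node-1' and 'anaconda' are illustrative; the script derives them
# from /proc/cmdline and the watchfile config.
log_format = rfc5424_format.format(version='1', timestamp='%(asctime)s',
                                   hostname='node-1', appname='anaconda',
                                   procid='-', msgid='-',
                                   structured_data='-', msg='%(message)s')
formatter = logging.Formatter(log_format, date_format)

# Format one record by hand to see the wire format.
record = logging.LogRecord('demo', logging.INFO, __file__, 0,
                           'installation started', None, None)
line = formatter.format(record)
print(line)
```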


@ -0,0 +1,38 @@
#!/usr/bin/env ruby
require 'hiera'
ENV['LANG'] = 'C'
hiera = Hiera.new(:config => '/etc/hiera.yaml')
glanced = hiera.lookup 'glance', {} , {}
management_vip = hiera.lookup 'management_vip', nil, {}
auth_addr = hiera.lookup 'service_endpoint', "#{management_vip}", {}
tenant_name = glanced['tenant'].nil? ? "services" : glanced['tenant']
user_name = glanced['user'].nil? ? "glance" : glanced['user']
endpoint_type = glanced['endpoint_type'].nil? ? "internalURL" : glanced['endpoint_type']
region_name = hiera.lookup 'region', 'RegionOne', {}
ironic_hash = hiera.lookup 'fuel-plugin-ironic', {}, {}
ironic_swift_tempurl_key = ironic_hash['password'].nil? ? "ironic" : ironic_hash['password']
ENV['OS_TENANT_NAME']="#{tenant_name}"
ENV['OS_USERNAME']="#{user_name}"
ENV['OS_PASSWORD']="#{glanced['user_password']}"
ENV['OS_AUTH_URL']="http://#{auth_addr}:5000/v2.0"
ENV['OS_ENDPOINT_TYPE'] = "#{endpoint_type}"
ENV['OS_REGION_NAME']="#{region_name}"
command = <<-EOF
/usr/bin/swift post -m 'Temp-URL-Key:#{ironic_swift_tempurl_key}'
EOF
puts command
5.times do |retries|
sleep 10 if retries > 0
stdout = `#{command}`
return_code = $?.exitstatus
puts stdout
exit 0 if return_code == 0
end
puts "Secret key registration has FAILED!"
exit 1
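The retry loop above (up to five attempts, ten seconds apart, succeed on the first zero exit status) is a generic shape. A rough Python equivalent, using a placeholder command instead of the real `swift post` call:

```python
import subprocess
import time

# Placeholder command; the Ruby script shells out to `swift post` here.
command = ['true']

# Same retry shape as the Ruby loop: up to 5 attempts, sleeping 10 s
# before every attempt after the first.
for attempt in range(5):
    if attempt:
        time.sleep(10)
    result = subprocess.run(command)
    if result.returncode == 0:
        break
else:
    # All attempts failed.
    raise SystemExit('Secret key registration has FAILED!')
```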


@ -0,0 +1,51 @@
notice('MODULAR: ironic/db.pp')
$node_name = hiera('node_name')
$ironic_hash = hiera_hash('fuel-plugin-ironic', {})
$mysql_hash = hiera_hash('mysql', {})
$mysql_root_user = pick($mysql_hash['root_user'], 'root')
$mysql_db_create = pick($mysql_hash['db_create'], true)
$mysql_root_password = $mysql_hash['root_password']
$db_user = pick($ironic_hash['db_user'], 'ironic')
$db_name = pick($ironic_hash['db_name'], 'ironic')
$db_password = pick($ironic_hash['password'], 'ironic')
$db_host = pick($ironic_hash['db_host'], $database_vip, 'localhost')
$db_create = pick($ironic_hash['db_create'], $mysql_db_create)
$db_root_user = pick($ironic_hash['root_user'], $mysql_root_user)
$db_root_password = pick($ironic_hash['root_password'], $mysql_root_password)
$allowed_hosts = [ $node_name, 'localhost', '127.0.0.1', '%' ]
if $ironic_hash['metadata']['enabled'] and $db_create {
class { 'galera::client':
custom_setup_class => hiera('mysql_custom_setup_class', 'galera'),
}
class { 'ironic::db::mysql':
user => $db_user,
password => $db_password,
dbname => $db_name,
allowed_hosts => $allowed_hosts,
}
class { 'osnailyfacter::mysql_access':
db_host => $db_host,
db_user => $db_root_user,
db_password => $db_root_password,
}
Class['galera::client'] ->
Class['osnailyfacter::mysql_access'] ->
Class['ironic::db::mysql']
}
class mysql::config {}
include mysql::config
class mysql::server {}
include mysql::server


@ -0,0 +1,58 @@
notice('MODULAR: ironic/haproxy.pp')
$network_metadata = hiera_hash('network_metadata')
$public_ssl_hash = hiera('public_ssl')
$ironic_api_nodes = get_nodes_hash_by_roles($network_metadata, ['primary-controller', 'controller'])
$ironic_address_map = get_node_to_ipaddr_map_by_network_role($ironic_api_nodes, 'ironic/api')
$ironic_server_names = hiera_array('ironic_names', keys($ironic_address_map))
$ironic_ipaddresses = hiera_array('ironic_ipaddresses', values($ironic_address_map))
$swift_proxies_address_map = get_node_to_ipaddr_map_by_network_role(hiera_hash('swift_proxies', undef), 'swift/api')
$swift_server_names = hiera_array('swift_server_names', keys($swift_proxies_address_map))
$swift_ipaddresses = hiera_array('swift_ipaddresses', values($swift_proxies_address_map))
$public_virtual_ip = hiera('public_vip')
$internal_virtual_ip = hiera('management_vip')
$baremetal_virtual_ip = $network_metadata['vips']['baremetal']['ipaddr']
Openstack::Ha::Haproxy_service {
ipaddresses => $ironic_ipaddresses,
public_virtual_ip => $public_virtual_ip,
server_names => $ironic_server_names,
public => true,
public_ssl => $public_ssl_hash['services'],
haproxy_config_options => {
option => ['httpchk GET /', 'httplog','httpclose'],
},
}
openstack::ha::haproxy_service { 'ironic-api':
order => '180',
listen_port => 6385,
internal_virtual_ip => $internal_virtual_ip,
}
openstack::ha::haproxy_service { 'ironic-baremetal':
order => '185',
listen_port => 6385,
public => false,
public_ssl => false,
public_virtual_ip => false,
internal_virtual_ip => $baremetal_virtual_ip,
}
openstack::ha::haproxy_service { 'swift-baremetal':
order => '125',
listen_port => 8080,
ipaddresses => $swift_ipaddresses,
server_names => $swift_server_names,
public => false,
public_ssl => false,
public_virtual_ip => false,
internal_virtual_ip => $baremetal_virtual_ip,
haproxy_config_options => {
'option' => ['httpchk', 'httplog', 'httpclose'],
},
balancermember_options => 'check port 49001 inter 15s fastinter 2s downinter 8s rise 3 fall 3',
}


@ -0,0 +1,91 @@
notice('MODULAR: ironic/ironic-compute.pp')
$ironic_hash = hiera_hash('fuel-plugin-ironic', {})
$nova_hash = hiera_hash('nova', {})
$management_vip = hiera('management_vip')
$database_vip = hiera('database_vip', $management_vip)
$keystone_endpoint = hiera('keystone_endpoint', $management_vip)
$neutron_endpoint = hiera('neutron_endpoint', $management_vip)
$ironic_endpoint = hiera('ironic_endpoint', $management_vip)
$glance_api_servers = hiera('glance_api_servers', "${management_vip}:9292")
$debug = hiera('debug', false)
$verbose = hiera('verbose', true)
$use_syslog = hiera('use_syslog', true)
$syslog_log_facility_ironic = hiera('syslog_log_facility_ironic', 'LOG_LOCAL0')
$syslog_log_facility_nova = hiera('syslog_log_facility_nova', 'LOG_LOCAL6')
$amqp_hosts = hiera('amqp_hosts')
$rabbit_hash = hiera('rabbit_hash')
$nova_report_interval = hiera('nova_report_interval')
$nova_service_down_time = hiera('nova_service_down_time')
$neutron_config = hiera_hash('quantum_settings')
$ironic_tenant = pick($ironic_hash['tenant'],'services')
$ironic_user = pick($ironic_hash['user'],'ironic')
$ironic_user_password = pick($ironic_hash['password'],'ironic')
$db_host = pick($nova_hash['db_host'], $database_vip)
$db_user = pick($nova_hash['db_user'], 'nova')
$db_name = pick($nova_hash['db_name'], 'nova')
$db_password = pick($nova_hash['db_password'], 'nova')
$database_connection = "mysql://${db_name}:${db_password}@${db_host}/${db_name}?read_timeout=60"
$memcache_nodes = get_nodes_hash_by_roles(hiera('network_metadata'), hiera('memcache_roles'))
$cache_server_ip = ipsort(values(get_node_to_ipaddr_map_by_network_role($memcache_nodes,'mgmt/memcache')))
$memcached_addresses = suffix($cache_server_ip, inline_template(":<%= @cache_server_port %>"))
$notify_on_state_change = 'vm_and_task_state'
class { '::nova':
install_utilities => false,
ensure_package => installed,
database_connection => $database_connection,
rpc_backend => 'nova.openstack.common.rpc.impl_kombu',
#FIXME(bogdando) we have to split amqp_hosts until all modules synced
rabbit_hosts => split($amqp_hosts, ','),
rabbit_userid => $rabbit_hash['user'],
rabbit_password => $rabbit_hash['password'],
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_api_servers,
verbose => $verbose,
debug => $debug,
use_syslog => $use_syslog,
log_facility => $syslog_log_facility_nova,
state_path => $nova_hash['state_path'],
report_interval => $nova_report_interval,
service_down_time => $nova_service_down_time,
notify_on_state_change => $notify_on_state_change,
memcached_servers => $memcached_addresses,
}
class { '::nova::compute':
ensure_package => installed,
enabled => true,
vnc_enabled => false,
force_config_drive => $nova_hash['force_config_drive'],
#NOTE(bogdando) default became true in 4.0.0 puppet-nova (was false)
neutron_enabled => true,
default_availability_zone => $nova_hash['default_availability_zone'],
default_schedule_zone => $nova_hash['default_schedule_zone'],
reserved_host_memory => '0',
}
class { 'nova::compute::ironic':
admin_url => "http://${keystone_endpoint}:35357/v2.0",
admin_user => $ironic_user,
admin_tenant_name => $ironic_tenant,
admin_passwd => $ironic_user_password,
api_endpoint => "http://${ironic_endpoint}:6385/v1",
}
class { 'nova::network::neutron':
neutron_admin_password => $neutron_config['keystone']['admin_password'],
neutron_url => "http://${neutron_endpoint}:9696",
neutron_admin_auth_url => "http://${keystone_endpoint}:35357/v2.0",
}
file { '/etc/nova/nova-compute.conf':
ensure => absent,
require => Package['nova-compute'],
} ~> Service['nova-compute']


@ -0,0 +1,118 @@
notice('MODULAR: ironic/ironic-conductor.pp')
$network_scheme = hiera('network_scheme', {})
prepare_network_config($network_scheme)
$baremetal_address = get_network_role_property('ironic/baremetal', 'ipaddr')
$ironic_hash = hiera_hash('fuel-plugin-ironic', {})
$management_vip = hiera('management_vip')
$network_metadata = hiera_hash('network_metadata', {})
$baremetal_vip = $network_metadata['vips']['baremetal']['ipaddr']
$database_vip = hiera('database_vip', $management_vip)
$keystone_endpoint = hiera('keystone_endpoint', $management_vip)
$neutron_endpoint = hiera('neutron_endpoint', $management_vip)
$glance_api_servers = hiera('glance_api_servers', "${management_vip}:9292")
$amqp_hosts = hiera('amqp_hosts')
$rabbit_hosts = split($amqp_hosts, ',')
$debug = hiera('debug', false)
$verbose = hiera('verbose', true)
$use_syslog = hiera('use_syslog', true)
$syslog_log_facility_ironic = hiera('syslog_log_facility_ironic', 'LOG_USER')
$rabbit_hash = hiera('rabbit_hash')
$rabbit_ha_queues = hiera('rabbit_ha_queues')
$ironic_tenant = pick($ironic_hash['tenant'],'services')
$ironic_user = pick($ironic_hash['user'],'ironic')
$ironic_user_password = pick($ironic_hash['password'],'ironic')
$ironic_swift_tempurl_key = pick($ironic_hash['password'],'ironic')
$db_host = pick($ironic_hash['db_host'], $database_vip)
$db_user = pick($ironic_hash['db_user'], 'ironic')
$db_name = pick($ironic_hash['db_name'], 'ironic')
$db_password = pick($ironic_hash['password'], 'ironic')
$database_connection = "mysql://${db_name}:${db_password}@${db_host}/${db_name}?charset=utf8&read_timeout=60"
$tftp_root = "/var/lib/ironic/tftpboot"
class { '::ironic':
verbose => $verbose,
debug => $debug,
enabled_drivers => ['fuel_ssh', 'fuel_ipmitool'],
rabbit_hosts => $rabbit_hosts,
rabbit_port => 5673,
rabbit_userid => $rabbit_hash['user'],
rabbit_password => $rabbit_hash['password'],
amqp_durable_queues => $rabbit_ha_queues,
use_syslog => $use_syslog,
log_facility => $syslog_log_facility_ironic,
database_connection => $database_connection,
glance_api_servers => $glance_api_servers,
}
class { '::ironic::client': }
class { '::ironic::conductor': }
class { '::ironic::drivers::pxe':
tftp_server => $baremetal_address,
tftp_root => $tftp_root,
tftp_master_path => "${tftp_root}/master_images",
}
ironic_config {
'neutron/url': value => "http://${neutron_endpoint}:9696";
'keystone_authtoken/auth_uri': value => "http://${keystone_endpoint}:5000/";
'keystone_authtoken/auth_host': value => $keystone_endpoint;
'keystone_authtoken/admin_tenant_name': value => $ironic_tenant;
'keystone_authtoken/admin_user': value => $ironic_user;
'keystone_authtoken/admin_password': value => $ironic_user_password, secret => true;
'glance/swift_temp_url_key': value => $ironic_swift_tempurl_key;
'glance/swift_endpoint_url': value => "http://${baremetal_vip}:8080";
#'glance/swift_account': value => "AUTH_${services_tenant_id}";
'conductor/api_url': value => "http://${baremetal_vip}:6385";
}
file { $tftp_root:
ensure => directory,
owner => 'ironic',
group => 'ironic',
  mode => '0755',
require => Class['ironic'],
}
file { "$tftp_root/pxelinux.0":
ensure => present,
source => '/usr/lib/syslinux/pxelinux.0',
require => Package['syslinux'],
}
file { "${tftp_root}/map-file":
content => "r ^([^/]) ${tftp_root}/\\1",
}
class { '::tftp':
directory => $tftp_root,
options => "--map-file ${tftp_root}/map-file",
inetd => false,
require => File["${tftp_root}/map-file"],
}
package { 'syslinux':
ensure => 'present',
}
package { 'ipmitool':
ensure => 'present',
before => Class['::ironic::conductor'],
}
file { "/etc/ironic/fuel_key":
ensure => present,
source => '/var/lib/astute/ironic/bootstrap.rsa',
owner => 'ironic',
group => 'ironic',
  mode => '0600',
require => Class['ironic'],
}


@ -0,0 +1,96 @@
notice('MODULAR: ironic/ironic.pp')
$ironic_hash = hiera_hash('fuel-plugin-ironic', {})
$nova_hash = hiera_hash('nova_hash', {})
$access_hash = hiera_hash('access',{})
$public_vip = hiera('public_vip')
$management_vip = hiera('management_vip')
$network_metadata = hiera_hash('network_metadata', {})
$baremetal_vip = $network_metadata['vips']['baremetal']['ipaddr']
$database_vip = hiera('database_vip', $management_vip)
$keystone_endpoint = hiera('keystone_endpoint', $management_vip)
$neutron_endpoint = hiera('neutron_endpoint', $management_vip)
$glance_api_servers = hiera('glance_api_servers', "${management_vip}:9292")
$debug = hiera('debug', false)
$verbose = hiera('verbose', true)
$use_syslog = hiera('use_syslog', true)
$syslog_log_facility_ironic = hiera('syslog_log_facility_ironic', 'LOG_USER')
$rabbit_hash = hiera_hash('rabbit_hash', {})
$rabbit_ha_queues = hiera('rabbit_ha_queues')
$amqp_hosts = hiera('amqp_hosts')
$rabbit_hosts = split($amqp_hosts, ',')
$neutron_config = hiera_hash('quantum_settings')
$db_host = pick($ironic_hash['db_host'], $database_vip)
$db_user = pick($ironic_hash['db_user'], 'ironic')
$db_name = pick($ironic_hash['db_name'], 'ironic')
$db_password = pick($ironic_hash['password'], 'ironic')
$database_connection = "mysql://${db_name}:${db_password}@${db_host}/${db_name}?charset=utf8&read_timeout=60"
$region = hiera('region', 'RegionOne')
$public_url = "http://${public_vip}:6385"
$admin_url = "http://${management_vip}:6385"
$internal_url = "http://${management_vip}:6385"
$ironic_tenant = pick($ironic_hash['tenant'],'services')
$ironic_user = pick($ironic_hash['user'],'ironic')
$ironic_user_password = pick($ironic_hash['password'],'ironic')
prepare_network_config(hiera('network_scheme', {}))
if $ironic_hash['metadata']['enabled'] {
class { 'ironic':
verbose => $verbose,
debug => $debug,
enabled_drivers => ['fuel_ssh', 'fuel_ipmitool'],
rabbit_hosts => $rabbit_hosts,
rabbit_port => 5673,
rabbit_userid => $rabbit_hash['user'],
rabbit_password => $rabbit_hash['password'],
amqp_durable_queues => $rabbit_ha_queues,
use_syslog => $use_syslog,
log_facility => $syslog_log_facility_ironic,
database_connection => $database_connection,
glance_api_servers => $glance_api_servers,
}
class { 'ironic::client': }
class { 'ironic::api':
host_ip => get_network_role_property('ironic/api', 'ipaddr'),
auth_host => $keystone_endpoint,
admin_tenant_name => $ironic_tenant,
admin_user => $ironic_user,
admin_password => $ironic_user_password,
neutron_url => "http://${neutron_endpoint}:9696",
}
class { 'ironic::keystone::auth':
password => $ironic_user_password,
region => $region,
public_url => $public_url,
internal_url => $internal_url,
admin_url => $admin_url,
}
firewall { '207 ironic-api' :
dport => '6385',
proto => 'tcp',
action => 'accept',
}
nova_config {
'DEFAULT/scheduler_host_manager': value => 'nova.scheduler.ironic_host_manager.IronicHostManager';
'DEFAULT/scheduler_use_baremetal_filters': value => true;
}
include ::nova::params
service { 'nova-scheduler':
ensure => 'running',
name => $::nova::params::scheduler_service_name,
}
Nova_config<| |> ~> Service['nova-scheduler']
}


@ -0,0 +1,54 @@
notice('MODULAR: ironic/network-conductor.pp')
$network_scheme = hiera('network_scheme', {})
prepare_network_config($network_scheme)
$baremetal_int = get_network_role_property('ironic/baremetal', 'interface')
$baremetal_ipaddr = get_network_role_property('ironic/baremetal', 'ipaddr')
$baremetal_network = get_network_role_property('ironic/baremetal', 'network')
# Firewall
###############################
firewallchain { 'baremetal:filter:IPv4':
ensure => present,
} ->
firewall { '101 allow TFTP':
chain => 'baremetal',
source => $baremetal_network,
destination => $baremetal_ipaddr,
proto => 'udp',
dport => '69',
action => 'accept',
} ->
firewall { '102 allow related':
chain => 'baremetal',
source => $baremetal_network,
destination => $baremetal_ipaddr,
proto => 'all',
state => ['RELATED', 'ESTABLISHED'],
action => 'accept',
} ->
firewall { '999 drop all':
chain => 'baremetal',
action => 'drop',
proto => 'all',
} ->
firewall { '00 baremetal-filter':
proto => 'all',
iniface => $baremetal_int,
jump => 'baremetal',
require => Class['openstack::firewall'],
}
exec { 'fix_ipt_modules':
command => '/bin/sed -i "s/^IPT_MODULES=.*/IPT_MODULES=\"nf_conntrack_ftp nf_nat_ftp nf_conntrack_netbios_ns nf_conntrack_tftp\"/g" /etc/default/ufw',
unless => '/bin/grep "^IPT_MODULES=.*nf_conntrack_tftp" /etc/default/ufw > /dev/null',
notify => Exec['load_tftp_mod']
}
exec { 'load_tftp_mod':
command => '/sbin/modprobe nf_conntrack_tftp',
refreshonly => true,
}
class { 'openstack::firewall':}


@ -0,0 +1,21 @@
notice('MODULAR: ironic/network-ovs.pp')
$network_scheme = hiera('network_scheme', {})
prepare_network_config($network_scheme)
$baremetal_int = get_network_role_property('ironic/baremetal', 'interface')
$sdn = generate_network_config()
# OVS patch
###############################
class { 'l23network':
use_ovs => true,
} ->
l23network::l2::bridge { 'br-ironic':
provider => 'ovs'
} ->
l23network::l2::patch { "patch__${baremetal_int}--br-ironic":
bridges => ['br-ironic', $baremetal_int],
provider => 'ovs',
mtu => 65000,
}


@ -0,0 +1,140 @@
notice('MODULAR: ironic/network.pp')
$network_scheme = hiera('network_scheme', {})
prepare_network_config($network_scheme)
$network_metadata = hiera_hash('network_metadata', {})
$neutron_config = hiera_hash('quantum_settings')
$pnets = $neutron_config['L2']['phys_nets']
$baremetal_vip = $network_metadata['vips']['baremetal']['ipaddr']
$baremetal_int = get_network_role_property('ironic/baremetal', 'interface')
$baremetal_ipaddr = get_network_role_property('ironic/baremetal', 'ipaddr')
$baremetal_netmask = get_network_role_property('ironic/baremetal', 'netmask')
$baremetal_network = get_network_role_property('ironic/baremetal', 'network')
$nameservers = $neutron_config['predefined_networks']['net04']['L3']['nameservers']
$ironic_hash = hiera_hash('fuel-plugin-ironic', {})
$baremetal_L3_allocation_pool = $ironic_hash['l3_allocation_pool']
$baremetal_L3_gateway = $ironic_hash['l3_gateway']
# Firewall
###############################
firewallchain { 'baremetal:filter:IPv4':
ensure => present,
} ->
firewall { '100 allow ping from VIP':
chain => 'baremetal',
source => $baremetal_vip,
destination => $baremetal_ipaddr,
proto => 'icmp',
icmp => 'echo-request',
action => 'accept',
} ->
firewall { '999 drop all':
chain => 'baremetal',
action => 'drop',
proto => 'all',
} ->
firewall { '00 baremetal-filter':
proto => 'all',
iniface => $baremetal_int,
jump => 'baremetal',
require => Class['openstack::firewall'],
}
class { 'openstack::firewall':}
# VIP
###############################
$ns_iptables_start_rules = "iptables -A INPUT -i baremetal-ns -s ${baremetal_network} -d ${baremetal_vip} -p tcp -m multiport --dports 6385,8080 -m state --state NEW -j ACCEPT; iptables -A INPUT -i baremetal-ns -s ${baremetal_network} -d ${baremetal_vip} -m state --state ESTABLISHED,RELATED -j ACCEPT; iptables -A INPUT -i baremetal-ns -j DROP"
$ns_iptables_stop_rules = "iptables -D INPUT -i baremetal-ns -s ${baremetal_network} -d ${baremetal_vip} -p tcp -m multiport --dports 6385,8080 -m state --state NEW -j ACCEPT; iptables -D INPUT -i baremetal-ns -s ${baremetal_network} -d ${baremetal_vip} -m state --state ESTABLISHED,RELATED -j ACCEPT; iptables -D INPUT -i baremetal-ns -j DROP"
$baremetal_vip_data = {
namespace => 'haproxy',
nic => $baremetal_int,
base_veth => 'baremetal-base',
ns_veth => 'baremetal-ns',
ip => $baremetal_vip,
cidr_netmask => netmask_to_cidr($baremetal_netmask),
gateway => 'none',
gateway_metric => '0',
bridge => $baremetal_int,
ns_iptables_start_rules => $ns_iptables_start_rules,
ns_iptables_stop_rules => $ns_iptables_stop_rules,
iptables_comment => 'baremetal-filter',
}
cluster::virtual_ip { 'baremetal' :
vip => $baremetal_vip_data,
}
# Physnets
###############################
if $pnets['physnet1'] {
$physnet1 = "physnet1:${pnets['physnet1']['bridge']}"
}
if $pnets['physnet2'] {
$physnet2 = "physnet2:${pnets['physnet2']['bridge']}"
}
$physnet_ironic = "physnet-ironic:br-ironic"
$physnets_array = [$physnet1, $physnet2, $physnet_ironic]
$bridge_mappings = delete_undef_values($physnets_array)
$br_map_str = join($bridge_mappings, ',')
neutron_agent_ovs {
'ovs/bridge_mappings': value => $br_map_str;
}
$flat_networks = ['physnet-ironic']
neutron_plugin_ml2 {
'ml2_type_flat/flat_networks': value => join($flat_networks, ',');
}
service { 'p_neutron-plugin-openvswitch-agent':
ensure => 'running',
enable => true,
provider => 'pacemaker',
}
service { 'p_neutron-dhcp-agent':
ensure => 'running',
enable => true,
provider => 'pacemaker',
}
Neutron_plugin_ml2<||> ~> Service['p_neutron-plugin-openvswitch-agent'] ~> Service['p_neutron-dhcp-agent']
Neutron_agent_ovs<||> ~> Service['p_neutron-plugin-openvswitch-agent'] ~> Service['p_neutron-dhcp-agent']
# Predefined network
###############################
$netdata = {
'L2' => {
network_type => 'flat',
physnet => 'physnet-ironic',
router_ext => 'false',
segment_id => 'null'
},
'L3' => {
enable_dhcp => true,
floating => $baremetal_L3_allocation_pool,
gateway => $baremetal_L3_gateway,
nameservers => $nameservers,
subnet => $baremetal_network
},
'shared' => 'true',
'tenant' => 'admin',
}
openstack::network::create_network{'baremetal':
netdata => $netdata,
segmentation_type => 'flat',
} ->
neutron_router_interface { "router04:baremetal__subnet":
ensure => present,
}
# Order
###############################
Firewall<||> -> Cluster::Virtual_ip<||> -> Neutron_plugin_ml2<||> -> Neutron_agent_ovs<||> -> Openstack::Network::Create_network<||>
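The VIP definition above converts the baremetal netmask with netmask_to_cidr; the conversion amounts to counting set bits in the dotted-quad mask, which can be sketched in Ruby (a hypothetical stand-in, not the Puppet function itself):

```ruby
# Convert a dotted-quad netmask to a CIDR prefix length by counting the set
# bits in each octet, mirroring what netmask_to_cidr does for cidr_netmask.
def netmask_to_cidr(netmask)
  netmask.split('.').map(&:to_i).sum { |octet| octet.to_s(2).count('1') }
end

puts netmask_to_cidr('255.255.255.0')   # => 24
puts netmask_to_cidr('255.255.252.0')   # => 22
```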

Subproject commit 69fa70013893a323a7cf62bc57963bd7a86bab04

Subproject commit e7e5b5f1a38833e769453a848a7b20741039b415

#!/usr/bin/env ruby
require 'hiera'
ENV['LANG'] = 'C'
hiera = Hiera.new(:config => '/etc/hiera.yaml')
glanced = hiera.lookup 'glance', {} , {}
management_vip = hiera.lookup 'management_vip', nil, {}
auth_addr = hiera.lookup 'service_endpoint', "#{management_vip}", {}
tenant_name = glanced['tenant'].nil? ? "services" : glanced['tenant']
user_name = glanced['user'].nil? ? "glance" : glanced['user']
endpoint_type = glanced['endpoint_type'].nil? ? "internalURL" : glanced['endpoint_type']
region_name = hiera.lookup 'region', 'RegionOne', {}
master_ip = hiera.lookup 'master_ip', nil, {}
ENV['OS_TENANT_NAME']="#{tenant_name}"
ENV['OS_USERNAME']="#{user_name}"
ENV['OS_PASSWORD']="#{glanced['user_password']}"
ENV['OS_AUTH_URL']="http://#{auth_addr}:5000/v2.0"
ENV['OS_ENDPOINT_TYPE'] = "#{endpoint_type}"
ENV['OS_REGION_NAME']="#{region_name}"
ironic_images = [
{"os_name"=>"ironic-deploy-linux",
"img_location"=>"http://#{master_ip}:8080/bootstrap/ironic/linux",
"container_format"=>"aki",
"min_ram"=>2048,
"disk_format"=>"aki",
"glance_properties"=>"",
"img_name"=>"ironic-deploy-linux",
"public"=>"true",
"protected"=>"true",
},
{"os_name"=>"ironic-deploy-initramfs",
"img_location"=>"http://#{master_ip}:8080/bootstrap/ironic/initramfs.img",
"container_format"=>"ari",
"min_ram"=>2048,
"disk_format"=>"ari",
"glance_properties"=>"",
"img_name"=>"ironic-deploy-initramfs",
"public"=>"true",
"protected"=>"true",
},
{"os_name"=>"ironic-deploy-squashfs",
"img_location"=>"http://#{master_ip}:8080/bootstrap/ironic/root.squashfs",
"container_format"=>"ari",
"min_ram"=>2048,
"disk_format"=>"ari",
"glance_properties"=>"",
"img_name"=>"ironic-deploy-squashfs",
"public"=>"true",
"protected"=>"true",
},
]
ironic_images.each do |image|
%w(
disk_format
img_location
img_name
os_name
public
protected
container_format
min_ram
).each do |f|
raise "Data field '#{f}' is missing!" unless image[f]
end
end
def image_list
stdout = `glance image-list`
return_code = $?.exitstatus
images = []
stdout.split("\n").each do |line|
fields = line.split('|').map { |f| f.chomp.strip }
next if fields[1] == 'ID'
next unless fields[2]
images << fields[2]
end
{:images => images, :exit_code => return_code}
end
def image_create(image_hash)
command = <<-EOF
/usr/bin/glance image-create \
--name '#{image_hash['img_name']}' \
--is-public '#{image_hash['public']}' \
--is-protected '#{image_hash['protected']}' \
--container-format='#{image_hash['container_format']}' \
--disk-format='#{image_hash['disk_format']}' \
--min-ram='#{image_hash['min_ram']}' \
#{image_hash['glance_properties']} \
--copy-from '#{image_hash['img_location']}'
EOF
puts command
stdout = `#{command}`
return_code = $?.exitstatus
[ stdout, return_code ]
end
# check if Glance is online
# wait until Glance has started, because when vCenter is used as a Glance
# backend, startup may take up to 1 minute.
def wait_for_glance
5.times.each do |retries|
sleep 10 if retries > 0
return if image_list[:exit_code] == 0
end
raise 'Could not get a list of glance images!'
end
# upload image to Glance
# if it has not already been uploaded
def upload_image(image)
list_of_images = image_list
if list_of_images[:images].include?(image['img_name']) && list_of_images[:exit_code] == 0
puts "Image '#{image['img_name']}' is already present!"
return 0
end
stdout, return_code = image_create(image)
if return_code == 0
puts "Image '#{image['img_name']}' was uploaded from '#{image['img_location']}'"
else
puts "Image '#{image['img_name']}' upload from '#{image['img_location']}' has FAILED!"
end
puts stdout
return return_code
end
########################
wait_for_glance
errors = 0
ironic_images.each do |image|
errors += upload_image(image)
end
exit 1 unless errors == 0
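The image_list helper above scrapes image names out of the ASCII table printed by `glance image-list`; the same parsing can be exercised standalone against canned output (the sample table is illustrative):

```ruby
# Same parsing as image_list above, run on canned `glance image-list` output:
# split on '|', skip the header row and border rows, collect the Name column.
sample = <<-TABLE
+----+---------------------+
| ID | Name                |
+----+---------------------+
| 1  | ironic-deploy-linux |
+----+---------------------+
TABLE

rows  = sample.split("\n").map { |line| line.split('|').map { |f| f.chomp.strip } }
names = rows.reject { |f| f[1] == 'ID' }.map { |f| f[2] }.compact
puts names.inspect
```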

deployment_tasks.yaml
- id: ironic-copy-bootstrap-keys
type: copy_files
role: ['ironic']
required_for: [pre_deployment_end]
requires: [pre_deployment_start]
parameters:
permissions: '0600'
dir_permissions: '0700'
files:
- src: /var/lib/fuel/keys/ironic/bootstrap.rsa
dst: /var/lib/astute/ironic/bootstrap.rsa
- id: ironic-haproxy
groups: ['primary-controller', 'controller']
type: puppet
required_for: [ironic-api]
requires: [openstack-haproxy, ironic-network]
parameters:
puppet_manifest: puppet/manifests/haproxy.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-network-ovs
groups: ['primary-controller', 'controller']
type: puppet
required_for: [virtual_ips]
requires: [netconfig]
parameters:
puppet_manifest: puppet/manifests/network-ovs.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-network
groups: ['primary-controller', 'controller']
type: puppet
required_for: [ironic-haproxy]
requires: [openstack-controller, ironic-network-ovs]
parameters:
puppet_manifest: puppet/manifests/network.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-db
groups: ['primary-controller']
type: puppet
required_for: [ironic-api]
requires: [database]
parameters:
puppet_manifest: puppet/manifests/db.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-upload-images
role: ['primary-controller']
type: shell
required_for: [post_deployment_end]
requires: [enable_quorum]
parameters:
cmd: ruby upload_images.rb
retries: 3
interval: 20
timeout: 180
- id: ironic-swift-key
role: ['primary-controller']
type: shell
required_for: [post_deployment_end]
requires: [enable_quorum]
parameters:
cmd: ruby post_swift_key.rb
retries: 3
interval: 20
timeout: 180
- id: ironic-api
groups: ['primary-controller', 'controller']
type: puppet
required_for: [deploy_end, controller_remaining_tasks]
requires: [openstack-controller, ironic-db, ironic-network, ironic-haproxy, swift]
parameters:
puppet_manifest: puppet/manifests/ironic.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-network-conductor
groups: ['ironic']
type: puppet
required_for: [deploy_end, ironic-conductor]
requires: [hosts, firewall]
parameters:
puppet_manifest: puppet/manifests/network-conductor.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-conductor
groups: ['ironic']
type: puppet
required_for: [deploy_end, ironic-compute]
requires: [hosts, firewall, ironic-network-conductor]
parameters:
puppet_manifest: puppet/manifests/ironic-conductor.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic-compute
groups: ['ironic']
type: puppet
required_for: [deploy_end]
requires: [hosts, firewall, ironic-conductor]
parameters:
puppet_manifest: puppet/manifests/ironic-compute.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: ironic
type: group
role: [ironic]
tasks:
- fuel_pkgs
- hiera
- globals
- logging
- tools
- netconfig
- hosts
- firewall
required_for: [deploy_end]
requires: [deploy_start]
parameters:
strategy:
type: parallel
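The requires/required_for pairs above define a dependency graph that the deployment engine resolves into an execution order; a minimal topological-sort sketch over a subset of the tasks (simplified to the `requires` direction only):

```ruby
# Kahn-style topological sort over a subset of the task graph above,
# using only the `requires` edges for simplicity.
requires = {
  'ironic-network-ovs' => [],
  'ironic-network'     => ['ironic-network-ovs'],
  'ironic-haproxy'     => ['ironic-network'],
  'ironic-api'         => ['ironic-network', 'ironic-haproxy'],
}

order = []
until requires.empty?
  # tasks whose prerequisites have all been scheduled are ready to run
  ready = requires.keys.select { |t| (requires[t] - order).empty? }
  raise 'cycle detected in task graph' if ready.empty?
  order.concat(ready.sort)
  ready.each { |t| requires.delete(t) }
end
puts order.inspect
```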

environment_config.yaml
attributes:
metadata:
restrictions:
- "cluster:net_provider != 'neutron' or networking_parameters:segmentation_type != 'vlan'": "Ironic requires Neutron with VLAN segmentation."
- "settings:storage.images_ceph.value == true": "Ironic requires Swift as a backend for Glance image service."
password:
value: "I_love_plugins"
label: "Password for user, db and swift"
type: "text"
weight: 10
regex:
source: '^([a-zA-Z0-9_-]+)$'
error: "Password should match regex '^([a-zA-Z0-9_-]+)$'"
l3_allocation_pool:
value: "192.168.3.52:192.168.3.254"
label: "Allocation pool for Neutron network"
    description: 'Colon-separated start and end IP addresses of the pool'
type: "text"
weight: 20
regex:
source: '^(?:\d|1?\d\d|2[0-4]\d|25[0-5])(?:\.(?:\d|1?\d\d|2[0-4]\d|25[0-5])){3}(?::\s*(?:\d|1?\d\d|2[0-4]\d|25[0-5])(?:\.(?:\d|1?\d\d|2[0-4]\d|25[0-5])){3})*$'
error: "Invalid IP addresses pool"
l3_gateway:
value: "192.168.3.51"
label: "Gateway for Neutron network"
type: "text"
weight: 30
regex:
source: '^(?:\d|1?\d\d|2[0-4]\d|25[0-5])(?:\.(?:\d|1?\d\d|2[0-4]\d|25[0-5])){3}$'
error: "Invalid IP address of gateway"
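The allocation-pool regex above accepts one or more colon-separated IPv4 addresses with range-checked octets; a quick sanity check in Ruby (the regex is copied verbatim from the `regex.source` field):

```ruby
# Validate l3_allocation_pool values the same way the Fuel UI does.
pool_re = /^(?:\d|1?\d\d|2[0-4]\d|25[0-5])(?:\.(?:\d|1?\d\d|2[0-4]\d|25[0-5])){3}(?::\s*(?:\d|1?\d\d|2[0-4]\d|25[0-5])(?:\.(?:\d|1?\d\d|2[0-4]\d|25[0-5])){3})*$/

puts pool_re.match?('192.168.3.52:192.168.3.254')  # valid start:end pair
puts pool_re.match?('192.168.3.300')               # octet out of range
```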

metadata.yaml
# Plugin name
name: fuel-plugin-ironic
# Human-readable name for your plugin
title: Ironic
# Plugin version
version: '1.0.0'
# Description
description: Enable Ironic
# Required fuel version
fuel_version: ['7.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['Mirantis']
# A link to the plugin's page
homepage: 'https://github.com/stackforge/fuel-plugins'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor
groups: []
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: 2015.1.0-7.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
# Version of plugin package
package_version: '3.0.0'

network_roles.yaml
- id: "ironic/api"
# Role mapping to network
default_mapping: "management"
properties:
# Should be true if network role requires subnet being set
subnet: true
# Should be true if network role requires gateway being set
gateway: true
vip: []
- id: "ironic/baremetal"
# Role mapping to network
default_mapping: "baremetal"
properties:
# Should be true if network role requires subnet being set
subnet: true
# Should be true if network role requires gateway being set
gateway: false
vip:
# Unique VIP name
- name: "baremetal"
# Optional linux namespace for VIP
namespace: "haproxy"

node_roles.yaml
ironic:
name: "Ironic"
description: "Ironic Conductor"
  has_primary: false # whether the role has a primary variant
  public_ip_required: false # whether the role requires a public network
weight: 100
limits:
min: 1
recommended: 3
conflicts:
- compute

post_install.sh
#!/bin/bash -ex
package_path=$(rpm -ql fuel-plugin-ironic-1.0 | head -n1)
deployment_scripts_path="${package_path}/deployment_scripts"
key_path="/var/lib/fuel/keys/ironic"
mkdir -p "${key_path}"
key_file="${key_path}/bootstrap.rsa"
if [ ! -f "${key_file}" ]; then
ssh-keygen -b 2048 -t rsa -N '' -f "${key_file}" 2>&1
else
echo "Key ${key_file} already exists"
fi
export BOOTSTRAP_IRONIC="yes"
export EXTRA_DEB_REPOS="deb http://127.0.0.1:8080/plugins/fuel-plugin-ironic-1.0/repositories/ubuntu /"
export DESTDIR="/var/www/nailgun/bootstrap/ironic"
export BOOTSTRAP_SSH_KEYS="${key_file}.pub"
export AGENT_PACKAGE_PATH="${package_path}/repositories/ubuntu"
mkdir -p "${DESTDIR}"
${deployment_scripts_path}/fuel-bootstrap-image-builder/bin/fuel-bootstrap-image
chmod 755 -R "${DESTDIR}"

pre_build_hook
#!/bin/bash
# Add here any actions that are required before the plugin build,
# such as building packages or downloading them from mirrors.
# The script should return 0 if there were no errors.

volumes.yaml
volumes: []
volumes_roles_mapping:
ironic:
# Default role mapping
- {allocate_size: "all", id: "os"}