Retire repository

The Fuel (openstack namespace) and fuel-ccp (x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: Icc6a810696788e65e9afd8be691400d4d39de313
Committed by Andreas Jaeger on 2019-12-18 09:32:20 +01:00
parent 861acf9240
commit 797016c54b
205 changed files with 10 additions and 24489 deletions

.gitignore

@@ -1,21 +0,0 @@
.venv
*.pyc
# vim swap files
.*.swp
# services' runtime files
*.log
*.pid
build
dist
*.egg
*.eggs
.testrepository
.cache
.tox
.idea
.DS_Store
*.egg-info

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,72 +0,0 @@
---
description:
For Fuel team structure and contribution policy, see [1].
This is the repository-level MAINTAINERS file. All contributions to this
repository must be approved by one or more Core Reviewers [2].
If you are contributing to files (or creating new directories) in the
root folder of this repository, please contact the Core Reviewers for
review and merge requests.
If you are contributing to subfolders of this repository, please
check the 'maintainers' section of this file to find the maintainers
of those specific modules.
It is mandatory to get a +1 from one or more maintainers before asking
the Core Reviewers for review/merge, in order to decrease the load on the Core Reviewers [3].
Exceptions are when the maintainers are themselves cores, or when the maintainers
are not available for some reason (e.g. on vacation).
[1] https://specs.openstack.org/openstack/fuel-specs/policy/team-structure
[2] https://review.openstack.org/#/admin/groups/995,members
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
Please keep this file in YAML format so that helper scripts can read
it as configuration data.
maintainers:
- ./:
- name: Alexander Gordeev
email: agordeev@mirantis.com
IRC: agordeev
- name: Vladimir Kozhukalov
email: vkozhukalov@mirantis.com
IRC: kozhukalov
- specs/: &MOS_packaging_team
- name: Mikhail Ivanov
email: mivanov@mirantis.com
IRC: mivanov
- name: Artem Silenkov
email: asilenkov@mirantis.com
IRC: asilenkov
- name: Alexander Tsamutali
email: atsamutali@mirantis.com
IRC: astsmtl
- name: Daniil Trishkin
email: dtrishkin@mirantis.com
IRC: dtrishkin
- name: Ivan Udovichenko
email: iudovichenko@mirantis.com
IRC: tlbr
- name: Igor Yozhikov
email: iyozhikov@mirantis.com
IRC: IgorYozhikov
- debian/: *MOS_packaging_team
- contrib/fuel_bootstrap:
- name: Artur Svechnikov
email: asvechnikov@mirantis.com
IRC: asvechnikov
- name: Aleksey Zvyagintsev
email: azvyagintsev@mirantis.com
IRC: azvyagintsev

README.md

@@ -1,208 +0,0 @@
Team and repository tags
========================
[![Team and repository tags](http://governance.openstack.org/badges/fuel-agent.svg)](http://governance.openstack.org/reference/tags/index.html)
<!-- Change things from this point on -->
fuel-agent README
=================
## Table of Contents
- [Overview](#overview)
- [Structure](#structure)
- [Usage](#usage)
- [Development](#development)
- [Core Reviewers](#core-reviewers)
- [Contributors](#contributors)
## Overview
fuel-agent is nothing more than a set of data-driven executable
scripts.
- One of these scripts is used for building operating system images. One can run
this script wherever needed, passing a set of repository URIs and a set of
package names that are to be installed into the image.
- Another script is used for the actual provisioning. Installed into a ramdisk
(live image), it can be run to provision an operating system on a hard drive.
When running it, one needs to pass input data that contain information about disk
partitions, initial node configuration, operating system image location, etc.
The script prepares disk partitions according to the input data, downloads the
operating system images, and puts those images on the partitions.
### Motivation
- Native operating system installation tools like anaconda and debian-installer are:
  * hard to customize (if the case is really non-trivial)
  * hard to troubleshoot (it is usually quite difficult to understand which log file
contains the necessary information and how to run those tools in debug mode)
- An image-based approach to operating system installation makes this
process really scalable. For example, we can use a BitTorrent-based image
delivery scheme during provisioning, which scales easily up
to thousands of nodes.
- During provisioning we can check the hash sum of the image and use other validation
mechanisms to make the process more stable.
### Designed to address requirements
- Support various input data formats (pluggable input data drivers)
- Support plain partitions, lvm, md, root on lvm, etc.
- Be able to do initial node configuration (network, mcollective, puppet, ntp)
- Be able to deploy standalone OS (local kernel, local bootloader)
- Support various image storages (tftp, http, torrent)
- Support various image formats (compressed, disk image, fs image, tar image)
### Design outline
- Use cloud-init for initial node configuration
- Avoid using parted and lvm native python bindings (to make it easy to
troubleshoot and modify for deployment engineers)
- No REST API, just executable entry points (like /usr/bin/fa_*)
- Passing input data either via file (--input_data_file) or CLI parameter (--input_data)
- Detailed logging of all components
## Structure
### Basic Repository Layout
```
fuel-agent
├── cloud-init-templates
├── contrib
├── debian
├── etc
├── fuel_agent
│   ├── cmd
│   ├── drivers
│   ├── objects
│   ├── openstack
│   ├── tests
│   ├── utils
├── README.md
├── LICENSE
├── requirements.txt
├── run_tests.sh
├── setup.cfg
├── setup.py
├── specs
├── test-requirements.txt
```
### root
The root level contains important repository documentation and license information.
It also contains files which are typical for the infrastructure of a Python project,
such as requirements.txt and setup.py.
### cloud-init-templates
This folder contains Jinja2 templates to prepare [cloud-init](https://cloudinit.readthedocs.org/en/latest/) related data for [nocloud](http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#no-cloud) [datasource](http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#what-is-a-datasource).
### contrib
This directory contains third party code that is not a part of fuel-agent itself but
can be used together with fuel-agent.
### debian
This folder contains the DEB package specification.
The included debian rules are mainly suitable for Ubuntu 12.04 or higher.
### etc
This folder contains the sample config file for fuel-agent. Every parameter is well documented.
We use oslo-config as a configuration module.
### fuel_agent
This folder contains the python code: drivers, objects, unit tests and utils, manager and entry points.
- fuel_agent/cmd/agent.py
* That is where the executable entry points are. They read the input data and
instantiate the Manager class with it.
- fuel_agent/manager.py
* That is the file where the top level agent logic is implemented.
It contains all those methods which do something useful (do_*)
- fuel_agent/drivers
* That is where input data drivers are located.
(Nailgun, NailgunBuildImage, Simple etc.)
Data drivers convert json into a set of python objects.
- fuel_agent/objects
* Here is the place where python objects are defined. fuel-agent manager
does not understand any particular data format except these objects.
For example, to do disk partitioning we need PartitionScheme object.
PartitionScheme object in turn contains disk labels, plain partitions,
lvm, md, fs objects. This PartitionScheme object is to be created by input
data driver.
- fuel_agent/utils
* That is the place where we put the code which does something useful on the OS
level. Here we have simple parted, lvm, md, grub bindings, etc.
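As an illustration of this data-driven design, a simplified input-data fragment that a driver might deserialize into PartitionScheme and image objects could look like the following (all field names and values here are invented for illustration, not fuel-agent's actual schema):

```json
{
  "partition_scheme": {
    "parteds": [
      {"name": "/dev/sda", "label": "gpt"}
    ],
    "fss": [
      {"device": "/dev/sda1", "mount": "/", "fs_type": "ext4"}
    ]
  },
  "image_scheme": {
    "images": [
      {
        "uri": "http://10.20.0.2:8080/targetimages/ubuntu.img.gz",
        "target_device": "/dev/sda1",
        "container": "gzip"
      }
    ]
  }
}
```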
### specs
This folder contains the RPM package specification file.
The included RPM spec is mainly suitable for CentOS 6.x or higher.
## Usage
### Use case #1 (Fuel)
fuel-agent is used in the Fuel project as part of the operating system provisioning scheme.
When a user starts deployment of an OpenStack cluster, the first task is to install
an operating system on the slave nodes. First, Fuel runs fuel-agent on the master node
to build OS images. Once the images are built, Fuel then runs fuel-agent on the slave nodes
using MCollective. Slave nodes are supposed to be booted with a so-called bootstrap ramdisk.
The bootstrap ramdisk is an in-memory OS with fuel-agent installed.
Detailed documentation on this case is available here:
* [Image based provisioning](https://docs.mirantis.com/openstack/fuel/fuel-master/reference-architecture.html#image-based-provisioning)
* [fuel-agent](https://docs.mirantis.com/openstack/fuel/fuel-master/reference-architecture.html#fuel-agent)
* [Operating system provisioning](https://docs.mirantis.com/openstack/fuel/fuel-master/reference-architecture.html#operating-system-provisioning)
* [Image building](https://docs.mirantis.com/openstack/fuel/fuel-master/reference-architecture.html#image-building)
### Use case #2 (Independent of Fuel)
fuel-agent can easily be used in third-party projects as a convenient operating system
provisioning tool. As described above, fuel-agent is fully data-driven and supports
various input data formats using pluggable input data drivers. Currently there are three
input data drivers available:
- NailgunBuildImage and Nailgun
* Build-image and provisioning input data drivers used in the Fuel project. To use them
independently, read the Fuel documentation.
- NailgunSimpleDriver
* fuel-agent native partitioning input data driver. It is just a de-serializer for
fuel-agent PartitionScheme object.
In order to use another specific data format, one can implement their own data
driver and install it independently. fuel-agent uses stevedore to find installed drivers.
A new driver needs to be exposed via the fuel_agent.driver setuptools namespace. See, for
example, the setup.cfg file, where the entry points are defined.
One can also take a look at ```contrib``` directory for some additional examples.
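A driver registration in setup.cfg might look like this sketch; the namespace comes from the text above, but the driver alias, package, and class names are hypothetical:

```ini
[entry_points]
fuel_agent.driver =
    my_json_driver = my_package.drivers:MyJsonDriver
```

After the package is installed, stevedore can discover the driver by that namespace without fuel-agent knowing about the package in advance.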
### How to install
fuel-agent can be installed either using RPM/DEB packages or using ```python setup.py install```.
## Development
fuel-agent is currently a subproject of the Fuel project, so we follow the same development
practices as Fuel itself.
* [Fuel Development Documentation](https://docs.fuel-infra.org/fuel-dev/)
* [Fuel How to Contribute](https://wiki.openstack.org/wiki/Fuel/How_to_contribute)
## Core Reviewers
* [fuel-agent cores](https://review.openstack.org/#/admin/groups/995,members)
## Contributors
* [Stackalytics](http://stackalytics.com/?release=all&project_type=all&module=fuel-agent&metric=commits)

README.rst (new file)

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.
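The `git checkout HEAD^1` step can be tried on any repository with at least two commits; a self-contained sketch using a throwaway local repository (the commit messages are just placeholders):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
# Two commits: the original content, then the retirement commit.
git -c user.email=a@b -c user.name=tester commit -q --allow-empty -m "original content"
git -c user.email=a@b -c user.name=tester commit -q --allow-empty -m "retire repository"
# Detach HEAD at the commit just before the retirement commit.
git checkout -q 'HEAD^1'
git log -1 --format=%s   # prints: original content
```

`HEAD^1` names the first parent of the current commit, so after the checkout the working tree shows the repository as it was before retirement.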


@@ -1,94 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance disable_selinux_on_the_fly setenforce 0
cloud-init-per instance disable_selinux sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip }} | tee -a /etc/resolv.conf'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :)
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/rc.modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/rc.modules'
cloud-init-per instance conntrack_proto_gre /bin/sh -c 'echo nf_conntrack_proto_gre | tee -a /etc/rc.modules'
cloud-init-per instance chmod_rc_modules chmod +x /etc/rc.modules
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance kernel_panic /bin/sh -c 'echo "kernel.panic=60" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_proto_gre_load modprobe nf_conntrack_proto_gre
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance kernel_panic_set sysctl -w "kernel.panic=60"
cloud-init-per instance mkdir_coredump mkdir -p /var/log/coredump
cloud-init-per instance set_coredump /bin/sh -c 'echo -e "kernel.core_pattern=/var/log/coredump/core.%e.%p.%h.%t" | tee -a /etc/sysctl.conf'
cloud-init-per instance set_coredump_sysctl sysctl -w "kernel.core_pattern=/var/log/coredump/core.%e.%p.%h.%t"
cloud-init-per instance set_chmod chmod 777 /var/log/coredump
cloud-init-per instance set_limits /bin/sh -c 'echo -e "* soft core unlimited\n* hard core unlimited" | tee -a /etc/security/limits.conf'
#NOTE: disabled for centos?
#cloud-init-per instance dhclient echo 'supersede routers 0;' | tee /etc/dhcp/dhclient.conf
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntpd /bin/sh -c 'service ntpd stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/drift'
cloud-init-per instance edit_ntp_conf4 chown ntp: /var/lib/ntp/drift
cloud-init-per instance edit_ntp_conf5 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf6 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
# Point installed ntpd to Master node
cloud-init-per instance set_ntpdate sed -i 's/SYNC_HWCLOCK\s*=\s*no/SYNC_HWCLOCK=yes/' /etc/sysconfig/ntpdate
cloud-init-per instance set_ntpd_0 chkconfig ntpd on
cloud-init-per instance set_ntpd_1 chkconfig ntpdate on
cloud-init-per instance start_ntpd service ntpd start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/usr/bin/nailgun-agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
# Puppet config
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml /var/log/puppet.log
cloud-init-per instance chmod_puppet chmod 600 /var/log/puppet.log
# Mcollective enable
cloud-init-per instance mcollective_enable sed -i /etc/rc.d/init.d/mcollective -e 's/\(# chkconfig:\s\+[-0-6]\+\) [0-9]\+ \([0-9]\+\)/\1 81 \2/'


@@ -1,89 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance disable_selinux_on_the_fly setenforce 0
cloud-init-per instance disable_selinux sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip }} | tee -a /etc/resolv.conf'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :)
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/rc.modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/rc.modules'
cloud-init-per instance chmod_rc_modules chmod +x /etc/rc.modules
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance mkdir_coredump mkdir -p /var/log/coredump
cloud-init-per instance set_coredump /bin/sh -c 'echo -e "kernel.core_pattern=/var/log/coredump/core.%e.%p.%h.%t" | tee -a /etc/sysctl.conf'
cloud-init-per instance set_coredump_sysctl sysctl -w "kernel.core_pattern=/var/log/coredump/core.%e.%p.%h.%t"
cloud-init-per instance set_chmod chmod 777 /var/log/coredump
cloud-init-per instance set_limits /bin/sh -c 'echo -e "* soft core unlimited\n* hard core unlimited" | tee -a /etc/security/limits.conf'
#NOTE: disabled for centos?
#cloud-init-per instance dhclient echo 'supersede routers 0;' | tee /etc/dhcp/dhclient.conf
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntpd /bin/sh -c 'service ntpd stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/drift'
cloud-init-per instance edit_ntp_conf4 chown ntp: /var/lib/ntp/drift
cloud-init-per instance edit_ntp_conf5 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf6 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
# Point installed ntpd to Master node
cloud-init-per instance set_ntpdate sed -i 's/SYNC_HWCLOCK\s*=\s*no/SYNC_HWCLOCK=yes/' /etc/sysconfig/ntpdate
cloud-init-per instance set_ntpd_0 chkconfig ntpd on
cloud-init-per instance set_ntpd_1 chkconfig ntpdate on
cloud-init-per instance start_ntpd service ntpd start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/opt/nailgun/bin/agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
# Puppet config
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml
# Mcollective enable
cloud-init-per instance mcollective_enable sed -i /etc/rc.d/init.d/mcollective -e 's/\(# chkconfig:\s\+[-0-6]\+\) [0-9]\+ \([0-9]\+\)/\1 81 \2/'


@@ -1,72 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance wipe_sources_list_templates /bin/sh -c 'echo | tee /etc/cloud/templates/sources.list.ubuntu.tmpl'
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_mkdir mkdir -p /etc/resolvconf/resolv.conf.d
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_head_remove rm -f /etc/resolvconf/resolv.conf.d/head
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolvconf/resolv.conf.d/head'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :) && update-initramfs -u -k all
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/modules'
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance dhclient /bin/sh -c 'echo "supersede routers 0;" | tee /etc/dhcp/dhclient.conf'
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntp /bin/sh -c 'service ntp stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/drift'
cloud-init-per instance edit_ntp_conf4 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf5 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
cloud-init-per instance start_ntp service ntp start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/opt/nailgun/bin/agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml


@@ -1,94 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance disable_selinux_on_the_fly setenforce 0
cloud-init-per instance disable_selinux sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip }} | tee -a /etc/resolv.conf'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :)
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/rc.modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/rc.modules'
cloud-init-per instance conntrack_proto_gre /bin/sh -c 'echo nf_conntrack_proto_gre | tee -a /etc/rc.modules'
cloud-init-per instance chmod_rc_modules chmod +x /etc/rc.modules
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance kernel_panic /bin/sh -c 'echo "kernel.panic=60" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_proto_gre_load modprobe nf_conntrack_proto_gre
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance kernel_panic_set sysctl -w "kernel.panic=60"
cloud-init-per instance mkdir_coredump mkdir -p /var/log/coredump
cloud-init-per instance set_coredump /bin/sh -c 'echo -e "kernel.core_pattern=/var/log/coredump/core.%e.%p.%h.%t" | tee -a /etc/sysctl.conf'
cloud-init-per instance set_coredump_sysctl sysctl -w "kernel.core_pattern=/var/log/coredump/core.%e.%p.%h.%t"
cloud-init-per instance set_chmod chmod 777 /var/log/coredump
cloud-init-per instance set_limits /bin/sh -c 'echo -e "* soft core unlimited\n* hard core unlimited" | tee -a /etc/security/limits.conf'
#NOTE: disabled for centos?
#cloud-init-per instance dhclient echo 'supersede routers 0;' | tee /etc/dhcp/dhclient.conf
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntpd /bin/sh -c 'service ntpd stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/drift'
cloud-init-per instance edit_ntp_conf4 chown ntp: /var/lib/ntp/drift
cloud-init-per instance edit_ntp_conf5 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf6 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
# Point installed ntpd to Master node
cloud-init-per instance set_ntpdate sed -i 's/SYNC_HWCLOCK\s*=\s*no/SYNC_HWCLOCK=yes/' /etc/sysconfig/ntpdate
cloud-init-per instance set_ntpd_0 chkconfig ntpd on
cloud-init-per instance set_ntpd_1 chkconfig ntpdate on
cloud-init-per instance start_ntpd service ntpd start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/opt/nailgun/bin/agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
# Puppet config
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml /var/log/puppet.log
cloud-init-per instance chmod_puppet chmod 600 /var/log/puppet.log
# Mcollective enable
cloud-init-per instance mcollective_enable sed -i /etc/rc.d/init.d/mcollective -e 's/\(# chkconfig:\s\+[-0-6]\+\) [0-9]\+ \([0-9]\+\)/\1 81 \2/'


@@ -1,77 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance wipe_sources_list_templates /bin/sh -c 'echo | tee /etc/cloud/templates/sources.list.ubuntu.tmpl'
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_mkdir mkdir -p /etc/resolvconf/resolv.conf.d
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_head_remove rm -f /etc/resolvconf/resolv.conf.d/head
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolvconf/resolv.conf.d/head'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :) && update-initramfs -u -k all
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/modules'
cloud-init-per instance conntrack_proto_gre /bin/sh -c 'echo nf_conntrack_proto_gre | tee -a /etc/modules'
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance kernel_panic /bin/sh -c 'echo "kernel.panic=60" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_proto_gre_load modprobe nf_conntrack_proto_gre
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance kernel_panic_set sysctl -w "kernel.panic=60"
cloud-init-per instance dhclient /bin/sh -c 'echo "supersede routers 0;" | tee /etc/dhcp/dhclient.conf'
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntp /bin/sh -c 'service ntp stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/drift'
cloud-init-per instance edit_ntp_conf4 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf5 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
cloud-init-per instance start_ntp service ntp start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/opt/nailgun/bin/agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml /var/log/puppet.log
cloud-init-per instance chmod_puppet chmod 600 /var/log/puppet.log


@@ -1,79 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance wipe_sources_list_templates /bin/sh -c 'echo | tee /etc/cloud/templates/sources.list.ubuntu.tmpl'
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_mkdir mkdir -p /etc/resolvconf/resolv.conf.d
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_head_remove rm -f /etc/resolvconf/resolv.conf.d/head
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolvconf/resolv.conf.d/head'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :) && update-initramfs -u -k all
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/modules'
cloud-init-per instance conntrack_proto_gre /bin/sh -c 'echo nf_conntrack_proto_gre | tee -a /etc/modules'
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance kernel_panic /bin/sh -c 'echo "kernel.panic=60" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_proto_gre_load modprobe nf_conntrack_proto_gre
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance kernel_panic_set sysctl -w "kernel.panic=60"
cloud-init-per instance dhclient /bin/sh -c 'echo "supersede routers 0;" | tee /etc/dhcp/dhclient.conf'
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntp /bin/sh -c 'service ntp stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf_chown_dir chown ntp: /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/ntp.drift'
cloud-init-per instance edit_ntp_conf_chown_drift chown ntp: /var/lib/ntp/ntp.drift
cloud-init-per instance edit_ntp_conf4 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf5 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
cloud-init-per instance start_ntp service ntp start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/usr/bin/nailgun-agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml /var/log/puppet.log
cloud-init-per instance chmod_puppet chmod 600 /var/log/puppet.log


@@ -1,79 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance wipe_sources_list_templates /bin/sh -c 'echo | tee /etc/cloud/templates/sources.list.ubuntu.tmpl'
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_mkdir mkdir -p /etc/resolvconf/resolv.conf.d
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_head_remove rm -f /etc/resolvconf/resolv.conf.d/head
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolvconf/resolv.conf.d/head'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :) && update-initramfs -u -k all
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/modules'
cloud-init-per instance conntrack_proto_gre /bin/sh -c 'echo nf_conntrack_proto_gre | tee -a /etc/modules'
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance kernel_panic /bin/sh -c 'echo "kernel.panic=60" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_proto_gre_load modprobe nf_conntrack_proto_gre
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance kernel_panic_set sysctl -w "kernel.panic=60"
cloud-init-per instance dhclient /bin/sh -c 'echo "supersede routers 0;" | tee /etc/dhcp/dhclient.conf'
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntp /bin/sh -c 'service ntp stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf_chown_dir chown ntp: /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/ntp.drift'
cloud-init-per instance edit_ntp_conf_chown_drift chown ntp: /var/lib/ntp/ntp.drift
cloud-init-per instance edit_ntp_conf4 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf5 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
cloud-init-per instance start_ntp service ntp start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/usr/bin/nailgun-agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml /var/log/puppet.log
cloud-init-per instance chmod_puppet chmod 600 /var/log/puppet.log


@@ -1,104 +0,0 @@
#cloud-boothook
#!/bin/bash
function add_str_to_file_if_not_exists {
file=$1
str=$2
val=$3
if ! grep -q "^ *${str}" $file; then
echo $val >> $file
fi
}
cloud-init-per instance wipe_sources_list_templates /bin/sh -c 'echo | tee /etc/cloud/templates/sources.list.ubuntu.tmpl'
#FIXME(agordeev): if operator updates dns settings on masternode after the node had been provisioned,
# cloud-init will start to generate resolv.conf with non-actual data
cloud-init-per instance resolv_conf_mkdir mkdir -p /etc/resolvconf/resolv.conf.d
cloud-init-per instance resolv_conf_remove rm -f /etc/resolv.conf
cloud-init-per instance resolv_conf_head_remove rm -f /etc/resolvconf/resolv.conf.d/head
cloud-init-per instance resolv_conf_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolv.conf'
cloud-init-per instance resolv_conf_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_header /bin/sh -c 'echo "# re-generated by cloud-init boothook only at the first boot;" | tee /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_search /bin/sh -c 'echo "search {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_head_domain /bin/sh -c 'echo "domain {{ common.search_domain|replace('"','') }}" | tee -a /etc/resolvconf/resolv.conf.d/head'
cloud-init-per instance resolv_conf_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolv.conf'
cloud-init-per instance resolv_conf_head_nameserver /bin/sh -c 'echo nameserver {{ common.master_ip|replace('"','') }} | tee -a /etc/resolvconf/resolv.conf.d/head'
# configure black module lists
# virt-what should be installed
if [ ! -f /etc/modprobe.d/blacklist-i2c_piix4.conf ]; then
( (virt-what | fgrep -q "virtualbox") && echo "blacklist i2c_piix4" >> /etc/modprobe.d/blacklist-i2c_piix4.conf || :) && update-initramfs -u -k all
modprobe -r i2c_piix4
fi
cloud-init-per instance conntrack_ipv4 /bin/sh -c 'echo nf_conntrack_ipv4 | tee -a /etc/modules'
cloud-init-per instance conntrack_ipv6 /bin/sh -c 'echo nf_conntrack_ipv6 | tee -a /etc/modules'
cloud-init-per instance conntrack_proto_gre /bin/sh -c 'echo nf_conntrack_proto_gre | tee -a /etc/modules'
cloud-init-per instance conntrack_max /bin/sh -c 'echo "net.nf_conntrack_max=1048576" | tee -a /etc/sysctl.conf'
cloud-init-per instance kernel_panic /bin/sh -c 'echo "kernel.panic=60" | tee -a /etc/sysctl.conf'
cloud-init-per instance conntrack_ipv4_load modprobe nf_conntrack_ipv4
cloud-init-per instance conntrack_ipv6_load modprobe nf_conntrack_ipv6
cloud-init-per instance conntrack_proto_gre_load modprobe nf_conntrack_proto_gre
cloud-init-per instance conntrack_max_set sysctl -w "net.nf_conntrack_max=1048576"
cloud-init-per instance kernel_panic_set sysctl -w "kernel.panic=60"
cloud-init-per instance dhclient /bin/sh -c 'echo "supersede routers 0;" | tee /etc/dhcp/dhclient.conf'
# ntp sync
# '| tee /dev/null' is needed for returning zero execution code always
cloud-init-per instance stop_ntp /bin/sh -c 'service ntp stop | tee /dev/null'
cloud-init-per instance sync_date ntpdate -t 4 -b {{ common.master_ip }}
cloud-init-per instance sync_hwclock hwclock --systohc
cloud-init-per instance edit_ntp_conf1 sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf2 sed -i '1 i tinker panic 0' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf_mkdir mkdir -p /var/lib/ntp
cloud-init-per instance edit_ntp_conf_chown_dir chown ntp: /var/lib/ntp
cloud-init-per instance edit_ntp_conf3 /bin/sh -c 'echo 0 | tee /var/lib/ntp/ntp.drift'
cloud-init-per instance edit_ntp_conf_chown_drift chown ntp: /var/lib/ntp/ntp.drift
cloud-init-per instance edit_ntp_conf4 sed -i '/^\s*server/ d' /etc/ntp.conf
cloud-init-per instance edit_ntp_conf5 /bin/sh -c 'echo "server {{ common.master_ip }} burst iburst" | tee -a /etc/ntp.conf'
cloud-init-per instance start_ntp service ntp start
cloud-init-per instance removeUseDNS sed -i --follow-symlinks -e '/UseDNS/d' /etc/ssh/sshd_config
add_str_to_file_if_not_exists /etc/ssh/sshd_config 'UseDNS' 'UseDNS no'
cloud-init-per instance gssapi_disable sed -i -e "/^\s*GSSAPICleanupCredentials yes/d" -e "/^\s*GSSAPIAuthentication yes/d" /etc/ssh/sshd_config
cloud-init-per instance nailgun_agent_0 /bin/sh -c 'echo "#!/bin/sh" | tee /etc/rc.local'
cloud-init-per instance nailgun_agent_1 /bin/sh -c 'echo "rm -f /etc/nailgun-agent/nodiscover" | tee -a /etc/rc.local'
cloud-init-per instance nailgun_agent_2 /bin/sh -c 'echo "flock -w 0 -o /var/lock/agent.lock -c \"/usr/bin/nailgun-agent >> /var/log/nailgun-agent.log 2>&1\"" | tee -a /etc/rc.local'
# Copying default bash settings to the root directory
cloud-init-per instance skel_bash cp -f /etc/skel/.bash* /root/
cloud-init-per instance hiera_puppet mkdir -p /etc/puppet /var/lib/hiera
cloud-init-per instance touch_puppet touch /var/lib/hiera/common.yaml /etc/puppet/hiera.yaml /var/log/puppet.log
cloud-init-per instance chmod_puppet chmod 600 /var/log/puppet.log
cloud-init-per instance upstart_console /bin/sh -c 'for i in $(seq 0 1); do
cat >/etc/init/ttyS${i}.conf <<-EOF
# ttyS${i} - getty
start on stopped rc RUNLEVEL=[12345]
stop on runlevel [!12345]
respawn
pre-start script
# exit if console not present on ttyS${i}
cat /proc/cmdline | grep -q "console=ttyS${i}"
end script
script
# get console speed if provded with "console=ttySx,38400"
SPEED=\$(cat /proc/cmdline | sed -e"s/^.*console=ttyS${i}[,]*\([^ ]*\)[ ]*.*\$/\1/g")
# or use 9600 console speed as default
exec /sbin/getty -L \${SPEED:-9600} ttyS${i}
end script
EOF
done
'


@@ -1,106 +0,0 @@
#cloud-config
resize_rootfs: false
growpart:
mode: false
disable_ec2_metadata: true
disable_root: false
# password: RANDOM
# chpasswd: { expire: True }
ssh_pwauth: false
ssh_authorized_keys:
{% for key in common.ssh_auth_keys %}
- {{ key }}
{% endfor %}
# set the locale to a given locale
# default: en_US.UTF-8
locale: en_US.UTF-8
timezone: {{ common.timezone }}
hostname: {{ common.hostname }}
fqdn: {{ common.fqdn }}
# add entries to rsyslog configuration
rsyslog:
- filename: 00-remote.conf
content: |
$template LogToMaster, "<%PRI%>1 %$NOW%T%TIMESTAMP:8:$%Z %HOSTNAME% %APP-NAME% %PROCID% %MSGID% -%msg%\n"
*.* @{{ common.master_ip }};LogToMaster
runcmd:
{% if puppet.enable != 1 %}
- service puppet stop
- chkconfig puppet off
{% endif %}
{% if mcollective.enable != 1 %}
- service mcollective stop
- chkconfig mcollective off
{% else %}
- chkconfig mcollective on
{% endif %}
- iptables -t filter -F INPUT
- iptables -t filter -F FORWARD
- service iptables save
# that module's missing in 0.6.3, but existent for >= 0.7.3
write_files:
- content: |
---
url: {{ common.master_url }}
path: /etc/nailgun-agent/config.yaml
- content: target
path: /etc/nailgun_systemtype
mcollective:
conf:
main_collective: mcollective
collectives: mcollective
libdir: /usr/libexec/mcollective
logfile: /var/log/mcollective.log
loglevel: debug
daemonize: 1
direct_addressing: 1
ttl: 4294957
securityprovider: psk
plugin.psk: {{ mcollective.pskey }}
identity: {{ mcollective.identity }}
{% if mcollective.connector == 'stomp' %}
connector = stomp
plugin.stomp.host: {{ mcollective.host }}
plugin.stomp.port: {{ mcollective.port|default(61613) }}
plugin.stomp.user: {{ mcollective.user }}
plugin.stomp.password: {{ mcollective.password }}
{% else %}
connector: rabbitmq
plugin.rabbitmq.vhost: {{ mcollective.vhost }}
plugin.rabbitmq.pool.size: 1
plugin.rabbitmq.pool.1.host: {{ mcollective.host }}
plugin.rabbitmq.pool.1.port: {{ mcollective.port|default(61613) }}
plugin.rabbitmq.pool.1.user: {{ mcollective.user }}
plugin.rabbitmq.pool.1.password: {{ mcollective.password }}
plugin.rabbitmq.heartbeat_interval: 30
{% endif %}
factsource: yaml
plugin.yaml: /etc/mcollective/facts.yaml
puppet:
conf:
main:
logdir: /var/log/puppet
rundir: /var/run/puppet
ssldir: $vardir/ssl
pluginsync: true
prerun_command: /bin/true
postrun_command: /bin/true
agent:
classfile: $vardir/classes.txt
localconfig: $vardir/localconfig
server: {{ puppet.master }}
report: false
configtimeout: 600
final_message: "YAY! The system is finally up, after $UPTIME seconds"
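The `{{ … }}` placeholders and `|default(…)` filters in the template above are Jinja2 and are filled in when the image is assembled. As a rough illustration of the substitution semantics, here is a minimal stdlib-only stand-in (the real build presumably uses Jinja2 itself; this sketch ignores the `{% if %}`/`{% for %}` blocks):

```python
import re

def render(template, ctx):
    """Tiny stand-in for Jinja2 variable substitution.

    Supports only ``{{ section.key }}`` and ``{{ section.key|default(v) }}``;
    the control blocks in the real templates would need Jinja2 proper.
    """
    pattern = re.compile(r"\{\{\s*(\w+)\.(\w+)(?:\|default\(([^)]*)\))?\s*\}\}")

    def sub(match):
        section, key, default = match.group(1), match.group(2), match.group(3)
        # Fall back to the |default(...) argument when the key is absent.
        value = ctx.get(section, {}).get(key, default)
        return str(value)

    return pattern.sub(sub, template)

line = "plugin.stomp.port: {{ mcollective.port|default(61613) }}"
print(render(line, {"mcollective": {}}))                # plugin.stomp.port: 61613
print(render(line, {"mcollective": {"port": 61614}}))   # plugin.stomp.port: 61614
```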


@ -1,105 +0,0 @@
#cloud-config
resize_rootfs: false
growpart:
mode: false
disable_ec2_metadata: true
disable_root: false
user: root
password: r00tme
chpasswd: { expire: false }
ssh_pwauth: false
ssh_authorized_keys:
{% for key in common.ssh_auth_keys %}
- {{ key }}
{% endfor %}
# set the locale to a given locale
# default: en_US.UTF-8
locale: en_US.UTF-8
timezone: {{ common.timezone }}
hostname: {{ common.hostname }}
fqdn: {{ common.fqdn }}
# add entries to rsyslog configuration
rsyslog:
- filename: 10-log2master.conf
content: |
$template LogToMaster, "<%PRI%>1 %$NOW%T%TIMESTAMP:8:$%Z %HOSTNAME% %APP-NAME% %PROCID% %MSGID% -%msg%\n"
*.* @{{ common.master_ip }};LogToMaster
# this module is missing in 0.6.3, but present in >= 0.7.3
write_files:
- content: |
---
url: {{ common.master_url }}
path: /etc/nailgun-agent/config.yaml
- content: target
path: /etc/nailgun_systemtype
mcollective:
conf:
main_collective: mcollective
collectives: mcollective
libdir: /usr/share/mcollective/plugins
logfile: /var/log/mcollective.log
loglevel: debug
daemonize: 0
direct_addressing: 1
ttl: 4294957
securityprovider: psk
plugin.psk: {{ mcollective.pskey }}
identity: {{ mcollective.identity }}
{% if mcollective.connector == 'stomp' %}
connector: stomp
plugin.stomp.host: {{ mcollective.host }}
plugin.stomp.port: {{ mcollective.port|default(61613) }}
plugin.stomp.user: {{ mcollective.user }}
plugin.stomp.password: {{ mcollective.password }}
{% else %}
connector: rabbitmq
plugin.rabbitmq.vhost: {{ mcollective.vhost }}
plugin.rabbitmq.pool.size: 1
plugin.rabbitmq.pool.1.host: {{ mcollective.host }}
plugin.rabbitmq.pool.1.port: {{ mcollective.port|default(61613) }}
plugin.rabbitmq.pool.1.user: {{ mcollective.user }}
plugin.rabbitmq.pool.1.password: {{ mcollective.password }}
plugin.rabbitmq.heartbeat_interval: 30
{% endif %}
factsource: yaml
plugin.yaml: /etc/mcollective/facts.yaml
puppet:
conf:
main:
logdir: /var/log/puppet
rundir: /var/run/puppet
ssldir: $vardir/ssl
pluginsync: true
prerun_command: /bin/true
postrun_command: /bin/true
agent:
classfile: $vardir/classes.txt
localconfig: $vardir/localconfig
server: {{ puppet.master }}
report: false
configtimeout: 600
runcmd:
{% if puppet.enable != 1 %}
- /usr/sbin/invoke-rc.d puppet stop
- /usr/sbin/update-rc.d -f puppet remove
{% endif %}
{% if mcollective.enable != 1 %}
- /usr/sbin/invoke-rc.d mcollective stop
- echo manual > /etc/init/mcollective.override
{% else %}
- rm -f /etc/init/mcollective.override
{% endif %}
- iptables -t filter -F INPUT
- iptables -t filter -F FORWARD
final_message: "YAY! The system is finally up, after $UPTIME seconds"


@ -1,105 +0,0 @@
#cloud-config
resize_rootfs: false
growpart:
mode: false
disable_ec2_metadata: true
disable_root: false
user: root
password: r00tme
chpasswd: { expire: false }
ssh_pwauth: false
ssh_authorized_keys:
{% for key in common.ssh_auth_keys %}
- {{ key }}
{% endfor %}
# set the locale to a given locale
# default: en_US.UTF-8
locale: en_US.UTF-8
timezone: {{ common.timezone }}
hostname: {{ common.hostname }}
fqdn: {{ common.fqdn }}
# add entries to rsyslog configuration
rsyslog:
- filename: 10-log2master.conf
content: |
$template LogToMaster, "<%PRI%>1 %$NOW%T%TIMESTAMP:8:$%Z %HOSTNAME% %APP-NAME% %PROCID% %MSGID% -%msg%\n"
*.* @{{ common.master_ip }};LogToMaster
# this module is missing in 0.6.3, but present in >= 0.7.3
write_files:
- content: |
---
url: {{ common.master_url }}
path: /etc/nailgun-agent/config.yaml
- content: target
path: /etc/nailgun_systemtype
mcollective:
conf:
main_collective: mcollective
collectives: mcollective
libdir: /usr/share/mcollective/plugins
logfile: /var/log/mcollective.log
loglevel: debug
daemonize: 0
direct_addressing: 1
ttl: 4294957
securityprovider: psk
plugin.psk: {{ mcollective.pskey }}
identity: {{ mcollective.identity }}
{% if mcollective.connector == 'stomp' %}
connector: stomp
plugin.stomp.host: {{ mcollective.host }}
plugin.stomp.port: {{ mcollective.port|default(61613) }}
plugin.stomp.user: {{ mcollective.user }}
plugin.stomp.password: {{ mcollective.password }}
{% else %}
connector: rabbitmq
plugin.rabbitmq.vhost: {{ mcollective.vhost }}
plugin.rabbitmq.pool.size: 1
plugin.rabbitmq.pool.1.host: {{ mcollective.host }}
plugin.rabbitmq.pool.1.port: {{ mcollective.port|default(61613) }}
plugin.rabbitmq.pool.1.user: {{ mcollective.user }}
plugin.rabbitmq.pool.1.password: {{ mcollective.password }}
plugin.rabbitmq.heartbeat_interval: 30
{% endif %}
factsource: yaml
plugin.yaml: /etc/mcollective/facts.yaml
puppet:
conf:
main:
logdir: /var/log/puppet
rundir: /var/run/puppet
ssldir: $vardir/ssl
pluginsync: true
prerun_command: /bin/true
postrun_command: /bin/true
agent:
classfile: $vardir/classes.txt
localconfig: $vardir/localconfig
server: {{ puppet.master }}
report: false
configtimeout: 600
runcmd:
{% if puppet.enable != 1 %}
- /usr/sbin/invoke-rc.d puppet stop
- /usr/sbin/update-rc.d -f puppet remove
{% endif %}
{% if mcollective.enable != 1 %}
- /usr/sbin/invoke-rc.d mcollective stop
- echo manual > /etc/init/mcollective.override
{% else %}
- rm -f /etc/init/mcollective.override
{% endif %}
- iptables -t filter -F INPUT
- iptables -t filter -F FORWARD
final_message: "YAY! The system is finally up, after $UPTIME seconds"


@ -1,105 +0,0 @@
#cloud-config
resize_rootfs: false
growpart:
mode: false
disable_ec2_metadata: true
disable_root: false
user: root
password: r00tme
chpasswd: { expire: false }
ssh_pwauth: false
ssh_authorized_keys:
{% for key in common.ssh_auth_keys %}
- {{ key }}
{% endfor %}
# set the locale to a given locale
# default: en_US.UTF-8
locale: en_US.UTF-8
timezone: {{ common.timezone }}
hostname: {{ common.hostname }}
fqdn: {{ common.fqdn }}
# add entries to rsyslog configuration
rsyslog:
- filename: 10-log2master.conf
content: |
$template LogToMaster, "<%PRI%>1 %$NOW%T%TIMESTAMP:8:$%Z %HOSTNAME% %APP-NAME% %PROCID% %MSGID% -%msg%\n"
*.* @{{ common.master_ip }};LogToMaster
# this module is missing in 0.6.3, but present in >= 0.7.3
write_files:
- content: |
---
url: {{ common.master_url }}
path: /etc/nailgun-agent/config.yaml
- content: target
path: /etc/nailgun_systemtype
mcollective:
conf:
main_collective: mcollective
collectives: mcollective
libdir: /usr/share/mcollective/plugins
logfile: /var/log/mcollective.log
loglevel: debug
daemonize: 0
direct_addressing: 1
ttl: 4294957
securityprovider: psk
plugin.psk: {{ mcollective.pskey }}
identity: {{ mcollective.identity }}
{% if mcollective.connector == 'stomp' %}
connector: stomp
plugin.stomp.host: {{ mcollective.host }}
plugin.stomp.port: {{ mcollective.port|default(61613) }}
plugin.stomp.user: {{ mcollective.user }}
plugin.stomp.password: {{ mcollective.password }}
{% else %}
connector: rabbitmq
plugin.rabbitmq.vhost: {{ mcollective.vhost }}
plugin.rabbitmq.pool.size: 1
plugin.rabbitmq.pool.1.host: {{ mcollective.host }}
plugin.rabbitmq.pool.1.port: {{ mcollective.port|default(61613) }}
plugin.rabbitmq.pool.1.user: {{ mcollective.user }}
plugin.rabbitmq.pool.1.password: {{ mcollective.password }}
plugin.rabbitmq.heartbeat_interval: 30
{% endif %}
factsource: yaml
plugin.yaml: /etc/mcollective/facts.yaml
puppet:
conf:
main:
logdir: /var/log/puppet
rundir: /var/run/puppet
ssldir: $vardir/ssl
pluginsync: true
prerun_command: /bin/true
postrun_command: /bin/true
agent:
classfile: $vardir/classes.txt
localconfig: $vardir/localconfig
server: {{ puppet.master }}
report: false
configtimeout: 600
runcmd:
{% if puppet.enable != 1 %}
- /usr/sbin/invoke-rc.d puppet stop
- /usr/sbin/update-rc.d -f puppet remove
{% endif %}
{% if mcollective.enable != 1 %}
- /usr/sbin/invoke-rc.d mcollective stop
- echo manual > /etc/init/mcollective.override
{% else %}
- rm -f /etc/init/mcollective.override
{% endif %}
- iptables -t filter -F INPUT
- iptables -t filter -F FORWARD
final_message: "YAY! The system is finally up, after $UPTIME seconds"


@ -1,120 +0,0 @@
#cloud-config
resize_rootfs: false
growpart:
mode: false
disable_ec2_metadata: true
disable_root: false
users:
{% for user in user_accounts %}
- name: {{ user.name }}
passwd: {{ user.hashed_password }}
lock_passwd: False
homedir: {{ user.homedir }}
shell: {{ user.shell }}
{% if user.ssh_keys|length > 0 %}
ssh_authorized_keys:
{% for key in user.ssh_keys %}
- {{ key }}
{% endfor %}
{% endif %}
{% if user.sudo|length > 0 %}
sudo:
{% for entry in user.sudo %}
- "{{ entry }}"
{% endfor %}
{% endif %}
{% endfor %}
chpasswd: { expire: false }
ssh_pwauth: false
# set the locale to a given locale
# default: en_US.UTF-8
locale: en_US.UTF-8
timezone: {{ common.timezone }}
hostname: {{ common.hostname }}
fqdn: {{ common.fqdn }}
# add entries to rsyslog configuration
rsyslog:
- filename: 00-remote.conf
content: |
$template LogToMaster, "<%PRI%>1 %$NOW%T%TIMESTAMP:8:$%Z %HOSTNAME% %APP-NAME% %PROCID% %MSGID% -%msg%\n"
*.* @{{ common.master_ip }};LogToMaster
# this module is missing in 0.6.3, but present in >= 0.7.3
write_files:
- content: |
---
url: {{ common.master_url }}
path: /etc/nailgun-agent/config.yaml
- content: target
path: /etc/nailgun_systemtype
mcollective:
conf:
main_collective: mcollective
collectives: mcollective
libdir: /usr/share/mcollective/plugins
logfile: /var/log/mcollective.log
loglevel: debug
daemonize: 0
direct_addressing: 1
ttl: 4294957
securityprovider: psk
plugin.psk: {{ mcollective.pskey }}
identity: {{ mcollective.identity }}
{% if mcollective.connector == 'stomp' %}
connector: stomp
plugin.stomp.host: {{ mcollective.host }}
plugin.stomp.port: {{ mcollective.port|default(61613) }}
plugin.stomp.user: {{ mcollective.user }}
plugin.stomp.password: {{ mcollective.password }}
{% else %}
connector: rabbitmq
plugin.rabbitmq.vhost: {{ mcollective.vhost }}
plugin.rabbitmq.pool.size: 1
plugin.rabbitmq.pool.1.host: {{ mcollective.host }}
plugin.rabbitmq.pool.1.port: {{ mcollective.port|default(61613) }}
plugin.rabbitmq.pool.1.user: {{ mcollective.user }}
plugin.rabbitmq.pool.1.password: {{ mcollective.password }}
plugin.rabbitmq.heartbeat_interval: 30
{% endif %}
factsource: yaml
plugin.yaml: /etc/mcollective/facts.yaml
puppet:
conf:
main:
logdir: /var/log/puppet
rundir: /var/run/puppet
ssldir: $vardir/ssl
pluginsync: true
prerun_command: /bin/true
postrun_command: /bin/true
stringify_facts: false
agent:
classfile: $vardir/classes.txt
localconfig: $vardir/localconfig
server: {{ puppet.master }}
report: false
configtimeout: 600
runcmd:
{% if puppet.enable != 1 %}
- /usr/sbin/invoke-rc.d puppet stop
- /usr/sbin/update-rc.d -f puppet remove
{% endif %}
{% if mcollective.enable != 1 %}
- /usr/sbin/invoke-rc.d mcollective stop
- echo manual > /etc/init/mcollective.override
{% else %}
- rm -f /etc/init/mcollective.override
{% endif %}
- iptables -t filter -F INPUT
- iptables -t filter -F FORWARD
final_message: "YAY! The system is finally up, after $UPTIME seconds"


@ -1,145 +0,0 @@
#cloud-config
resize_rootfs: false
growpart:
mode: false
disable_ec2_metadata: true
disable_root: false
users:
{% for user in user_accounts %}
- name: {{ user.name }}
passwd: {{ user.hashed_password }}
lock_passwd: False
homedir: {{ user.homedir }}
shell: {{ user.shell }}
{% if user.ssh_keys|length > 0 %}
ssh_authorized_keys:
{% for key in user.ssh_keys %}
- {{ key }}
{% endfor %}
{% endif %}
{% if user.sudo|length > 0 %}
sudo:
{% for entry in user.sudo %}
- "{{ entry }}"
{% endfor %}
{% endif %}
{% endfor %}
chpasswd: { expire: false }
ssh_pwauth: false
# set the locale to a given locale
# default: en_US.UTF-8
locale: en_US.UTF-8
timezone: {{ common.timezone }}
hostname: {{ common.hostname }}
fqdn: {{ common.fqdn }}
# add entries to rsyslog configuration
rsyslog:
- filename: 00-remote.conf
content: |
$template LogToMaster, "<%PRI%>1 %$NOW%T%TIMESTAMP:8:$%Z %HOSTNAME% %APP-NAME% %PROCID% %MSGID% -%msg%\n"
*.* @{{ common.master_ip }};LogToMaster
# this module is missing in 0.6.3, but present in >= 0.7.3
write_files:
- content: |
---
url: {{ common.master_url }}
path: /etc/nailgun-agent/config.yaml
- content: target
path: /etc/nailgun_systemtype
mcollective:
conf:
main_collective: mcollective
collectives: mcollective
libdir: /usr/share/mcollective/plugins
logfile: /var/log/mcollective.log
loglevel: debug
daemonize: 1
direct_addressing: 1
ttl: 4294957
securityprovider: psk
plugin.psk: {{ mcollective.pskey }}
identity: {{ mcollective.identity }}
{% if mcollective.connector == 'stomp' %}
connector: stomp
plugin.stomp.host: {{ mcollective.host }}
plugin.stomp.port: {{ mcollective.port|default(61613) }}
plugin.stomp.user: {{ mcollective.user }}
plugin.stomp.password: {{ mcollective.password }}
{% else %}
connector: rabbitmq
plugin.rabbitmq.vhost: {{ mcollective.vhost }}
plugin.rabbitmq.pool.size: 1
plugin.rabbitmq.pool.1.host: {{ mcollective.host }}
plugin.rabbitmq.pool.1.port: {{ mcollective.port|default(61613) }}
plugin.rabbitmq.pool.1.user: {{ mcollective.user }}
plugin.rabbitmq.pool.1.password: {{ mcollective.password }}
plugin.rabbitmq.heartbeat_interval: 30
{% endif %}
factsource: yaml
plugin.yaml: /etc/mcollective/facts.yaml
puppet:
conf:
main:
logdir: /var/log/puppet
rundir: /var/run/puppet
ssldir: $vardir/ssl
pluginsync: true
prerun_command: /bin/true
postrun_command: /bin/true
stringify_facts: false
agent:
classfile: $vardir/classes.txt
localconfig: $vardir/localconfig
server: {{ puppet.master }}
report: false
configtimeout: 600
runcmd:
{% if puppet.enable != 1 %}
- if [ -x /bin/systemctl ]; then
- /bin/systemctl stop puppet
- /bin/systemctl disable puppet
- else
- /usr/sbin/invoke-rc.d puppet stop
- /usr/sbin/update-rc.d -f puppet remove
- echo manual > /etc/init/puppet.override
- fi
{% else %}
- if [ -x /bin/systemctl ]; then
- /bin/systemctl enable puppet
- else
- rm -f /etc/init/puppet.override
- fi
{% endif %}
{% if mcollective.enable != 1 %}
- if [ -x /bin/systemctl ]; then
- /bin/systemctl stop mcollective
- /bin/systemctl disable mcollective
- else
- /usr/sbin/invoke-rc.d mcollective stop
- /usr/sbin/update-rc.d -f mcollective remove
- echo manual > /etc/init/mcollective.override
- fi
{% else %}
- if [ -x /bin/systemctl ]; then
- /bin/systemctl enable mcollective
# TODO(dteselkin) rework start sequence when bug
# https://bugs.launchpad.net/fuel/+bug/1543063 is fixed
- /bin/systemctl start mcollective
- else
- rm -f /etc/init/mcollective.override
- fi
{% endif %}
- iptables -t filter -F INPUT
- iptables -t filter -F FORWARD
final_message: "YAY! The system is finally up, after $UPTIME seconds"


@ -1,11 +0,0 @@
# instance-id will be autogenerated
# instance-id: iid-abcdefg
#network-interfaces: |
# auto {{ common.admin_iface_name|default("eth0") }}
# iface {{ common.admin_iface_name|default("eth0") }} inet static
# address {{ common.admin_ip }}
# # network 192.168.1.0
# netmask {{ common.admin_mask }}
# # broadcast 192.168.1.255
# # gateway 192.168.1.254
hostname: {{ common.hostname }}


@ -1,4 +0,0 @@
{
"hostname": "{{ common.hostname }}",
"uuid": "some-unused-id"
}


@ -1,11 +0,0 @@
# instance-id will be autogenerated
# instance-id: iid-abcdefg
#network-interfaces: |
# auto {{ common.admin_iface_name|default("eth0") }}
# iface {{ common.admin_iface_name|default("eth0") }} inet static
# address {{ common.admin_ip }}
# # network 192.168.1.0
# netmask {{ common.admin_mask }}
# # broadcast 192.168.1.255
# # gateway 192.168.1.254
hostname: {{ common.hostname }}


@ -1,16 +0,0 @@
[options]
broken_system_clock = true
[problems]
# Superblock last mount time is in the future (PR_0_FUTURE_SB_LAST_MOUNT).
0x000031 = {
preen_ok = true
preen_nomessage = true
}
# Superblock last write time is in the future (PR_0_FUTURE_SB_LAST_WRITE).
0x000032 = {
preen_ok = true
preen_nomessage = true
}


@ -1 +0,0 @@
bootstrap


@ -1,18 +0,0 @@
# ttyS0 - getty
start on stopped rc RUNLEVEL=[12345]
stop on runlevel [!12345]
respawn
pre-start script
# exit if console not present on ttyS0
cat /proc/cmdline | grep -q "console=ttyS0"
end script
script
# get console speed if provided with "console=ttySx,38400"
SPEED=$(cat /proc/cmdline | sed -e"s/^.*console=ttyS0[,]*\([^ ]*\)[ ]*.*$/\1/g")
# or use 9600 console speed as default
exec /sbin/getty -L ${SPEED:-9600} ttyS0
end script


@ -1,18 +0,0 @@
# ttyS1 - getty
start on stopped rc RUNLEVEL=[12345]
stop on runlevel [!12345]
respawn
pre-start script
# exit if console not present on ttyS1
cat /proc/cmdline | grep -q "console=ttyS1"
end script
script
# get console speed if provided with "console=ttySx,38400"
SPEED=$(cat /proc/cmdline | sed -e"s/^.*console=ttyS1[,]*\([^ ]*\)[ ]*.*$/\1/g")
# or use 9600 console speed as default
exec /sbin/getty -L ${SPEED:-9600} ttyS1
end script


@ -1,28 +0,0 @@
/var/log/cron
/var/log/maillog
/var/log/messages
/var/log/secure
/var/log/spooler
/var/log/mcollective.log
/var/log/nailgun-agent.log
{
# This file is used for daily log rotations, do not use size options here
sharedscripts
daily
# rotate only if 30M size or bigger
minsize 30M
maxsize 50M
# truncate file, do not delete & recreate
copytruncate
# keep logs for XXX rotations
rotate 3
# compression will be postponed to the next rotation, if uncommented
compress
# ignore missing files
missingok
# do not rotate empty files
notifempty
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}


@ -1,27 +0,0 @@
main_collective = mcollective
collectives = mcollective
libdir = /usr/share/mcollective/plugins
logfile = /var/log/mcollective.log
loglevel = debug
direct_addressing = 1
daemonize = 0
# Set TTL to 1.5 hours
ttl = 5400
# Plugins
securityprovider = psk
plugin.psk = unset
connector = rabbitmq
plugin.rabbitmq.vhost = mcollective
plugin.rabbitmq.pool.size = 1
plugin.rabbitmq.pool.1.host =
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mcollective
plugin.rabbitmq.pool.1.password = marionette
plugin.rabbitmq.heartbeat_interval = 30
# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
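This server.cfg is a flat `key = value` file with `#` comments; nesting (connector pools, plugin settings) is expressed purely through dotted key names such as `plugin.rabbitmq.pool.1.port`. A sketch of a loader for this format (the function name is mine, not part of MCollective):

```python
def parse_mco_config(text):
    """Parse the flat ``key = value`` format used by mcollective server.cfg.

    Lines starting with '#' are comments; dotted keys carry the nesting.
    """
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

sample = """
# Plugins
securityprovider = psk
plugin.rabbitmq.pool.1.port = 61613
"""
conf = parse_mco_config(sample)
print(conf["plugin.rabbitmq.pool.1.port"])  # 61613
```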


@ -1 +0,0 @@
options mlx4_core port_type_array=2,2


@ -1,8 +0,0 @@
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
# Enabling of VLAN support
8021q


@ -1,7 +0,0 @@
#!/bin/sh -e
# Perform multipath reload
multipath-reload || true
# Perform fuel bootstrap configuration
fix-configs-on-startup || true


@ -1,6 +0,0 @@
# Log all messages with this template
$template CustomLog, "%$NOW%T%TIMESTAMP:8:$%Z %syslogseverity-text% %syslogtag% %msg%\n"
$ActionFileDefaultTemplate CustomLog
user.debug /var/log/messages


@ -1,22 +0,0 @@
{
"watchlist": [
{"servers": [ {"host": "@MASTER_NODE_IP@"} ],
"watchfiles": [
{"tag": "bootstrap/dmesg", "files": ["/var/log/dmesg"]},
{"tag": "bootstrap/secure", "files": ["/var/log/secure"]},
{"tag": "bootstrap/messages", "files": ["/var/log/messages"]},
{"tag": "bootstrap/fuel-agent", "files": ["/var/log/fuel-agent.log"]},
{"tag": "bootstrap/syslog", "files": ["/var/log/syslog"]},
{"tag": "bootstrap/auth", "files": ["/var/log/auth.log"]},
{"tag": "bootstrap/mcollective", "log_type": "ruby",
"files": ["/var/log/mcollective.log"]},
{"tag": "bootstrap/agent", "log_type": "ruby",
"files": ["/var/log/nailgun-agent.log"]},
{"tag": "bootstrap/netprobe_sender", "log_type": "netprobe",
"files": ["/var/log/netprobe_sender.log"]},
{"tag": "bootstrap/netprobe_listener", "log_type": "netprobe",
"files": ["/var/log/netprobe_listener.log"]}
]
}
]
}


@ -1,20 +0,0 @@
Protocol 2
SyslogFacility AUTHPRIV
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication no
UsePAM no
UseDNS no
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/lib/openssh/sftp-server
# Secure Ciphers and MACs
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1


@ -1,20 +0,0 @@
# do not edit this file, it will be overwritten on update
ACTION=="remove", GOTO="net_name_slot_end"
SUBSYSTEM!="net", GOTO="net_name_slot_end"
NAME!="", GOTO="net_name_slot_end"
IMPORT{cmdline}="net.ifnames"
ENV{net.ifnames}=="0", GOTO="net_name_slot_end"
# Workaround for too-long ID_NET_NAME_ONBOARD names on VMware (LP#1543378)
ENV{ID_NET_NAME_ONBOARD}=="eno????????*", GOTO="net_name_onboard_end"
NAME=="", ENV{ID_NET_NAME_ONBOARD}!="", NAME="$env{ID_NET_NAME_ONBOARD}"
LABEL="net_name_onboard_end"
NAME=="", ENV{ID_NET_NAME_SLOT}!="", NAME="$env{ID_NET_NAME_SLOT}"
NAME=="", ENV{ID_NET_NAME_PATH}!="", NAME="$env{ID_NET_NAME_PATH}"
ENV{DEVTYPE}=="fcoe", ENV{ID_NET_NAME_PATH}!="", NAME="$env{ID_NET_NAME_PATH}"
LABEL="net_name_slot_end"


@ -1,65 +0,0 @@
#!/bin/sh
set -e
masternode_ip=$(sed -rn 's/^.*url=http:\/\/(([0-9]{1,3}\.){3}[0-9]{1,3}).*$/\1/ p' /proc/cmdline)
mco_user=$(sed 's/\ /\n/g' /proc/cmdline | grep mco_user | awk -F\= '{print $2}')
mco_pass=$(sed 's/\ /\n/g' /proc/cmdline | grep mco_pass | awk -F\= '{print $2}')
[ -z "$mco_user" ] && mco_user="mcollective"
[ -z "$mco_pass" ] && mco_pass="marionette"
# Send logs to master node.
cat > /etc/send2syslog.conf <<EOF
{
"watchlist": [
{"servers": [ {"host": "$masternode_ip"} ],
"watchfiles": [
{"tag": "bootstrap/kern.log", "files": ["/var/log/kern.log"]},
{"tag": "bootstrap/udev", "files": ["/var/log/udev"]},
{"tag": "bootstrap/dmesg", "files": ["/var/log/dmesg"]},
{"tag": "bootstrap/secure", "files": ["/var/log/secure"]},
{"tag": "bootstrap/messages", "files": ["/var/log/messages"]},
{"tag": "bootstrap/fuel-agent", "files": ["/var/log/fuel-agent.log"]},
{"tag": "bootstrap/syslog", "files": ["/var/log/syslog"]},
{"tag": "bootstrap/auth", "files": ["/var/log/auth.log"]},
{"tag": "bootstrap/mcollective", "log_type": "ruby",
"files": ["/var/log/mcollective.log"]},
{"tag": "bootstrap/agent", "log_type": "ruby",
"files": ["/var/log/nailgun-agent.log"]},
{"tag": "bootstrap/netprobe_sender", "log_type": "netprobe",
"files": ["/var/log/netprobe_sender.log"]},
{"tag": "bootstrap/netprobe_listener", "log_type": "netprobe",
"files": ["/var/log/netprobe_listener.log"]}
]
}
]
}
EOF
/usr/bin/send2syslog.py -i < /etc/send2syslog.conf
#
# Set up NTP
#
# Disable panic about huge clock offset
#
sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
sed -i '1 i tinker panic 0' /etc/ntp.conf
# Create default drift file
#
mkdir -p /var/lib/ntp
chown ntp: /var/lib/ntp
echo 0 > /var/lib/ntp/ntp.drift
chown ntp: /var/lib/ntp/ntp.drift
# Sync clock with master node
#
sed -i "/^\s*server\b/ d" /etc/ntp.conf
echo "server $masternode_ip burst iburst" >> /etc/ntp.conf
service ntp restart
#
# Update mcollective config
#
sed -i "s/^plugin.rabbitmq.pool.1.host\b.*$/plugin.rabbitmq.pool.1.host = $masternode_ip/" /etc/mcollective/server.cfg
sed -i "s/^plugin.rabbitmq.pool.1.user\b.*$/plugin.rabbitmq.pool.1.user = $mco_user/" /etc/mcollective/server.cfg
sed -i "s/^plugin.rabbitmq.pool.1.password\b.*$/plugin.rabbitmq.pool.1.password= $mco_pass/" /etc/mcollective/server.cfg
service mcollective restart
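The `sed`/`grep`/`awk` pipelines at the top of this script pull the master node IP and MCollective credentials out of `/proc/cmdline`. An equivalent parse in Python, with the same fallback defaults (a sketch, not part of the original tooling):

```python
import re

def parse_cmdline(cmdline):
    """Extract the master node IP and MCollective credentials from a kernel
    command line, mirroring the sed/grep/awk pipeline in the shell script."""
    ip_match = re.search(r"url=http://((?:\d{1,3}\.){3}\d{1,3})", cmdline)
    # Collect key=value words; words without '=' (e.g. "ro", "quiet") are skipped.
    opts = dict(kv.split("=", 1) for kv in cmdline.split() if "=" in kv)
    return {
        "masternode_ip": ip_match.group(1) if ip_match else None,
        "mco_user": opts.get("mco_user", "mcollective"),  # same defaults as the script
        "mco_pass": opts.get("mco_pass", "marionette"),
    }

cmdline = "ro url=http://10.20.0.2:8000/api mco_user=mco mco_pass=s3cr3t"
print(parse_cmdline(cmdline))
```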


@ -1,28 +0,0 @@
#!/bin/bash -x
LOG_NAME='multipath-reload'
TIMEOUT='360'
KILL_TIMEOUT='120'
SETTLE_ATTEMPTS='10'
wait_for_udev_settle() {
# See more info in https://review.openstack.org/#/c/285340/
for i in `seq 1 ${SETTLE_ATTEMPTS}`; do
udevadm settle --timeout=${TIMEOUT}
sleep 0.1
done
}
m_reload() {
# See more info in https://review.openstack.org/#/c/294430/
echo "`date '+%Y-%m-%d-%H-%M-%S'`: Perform multipath reloading"
timeout --kill-after=${KILL_TIMEOUT} ${TIMEOUT} dmsetup remove_all
timeout --kill-after=${KILL_TIMEOUT} ${TIMEOUT} dmsetup udevcomplete_all -y
timeout --kill-after=${KILL_TIMEOUT} ${TIMEOUT} multipath -F
timeout --kill-after=${KILL_TIMEOUT} ${TIMEOUT} multipath -r
timeout --kill-after=${KILL_TIMEOUT} ${TIMEOUT} udevadm trigger --subsystem-match=block
wait_for_udev_settle
echo "`date '+%Y-%m-%d-%H-%M-%S'`: Multipath reloading is done"
}
m_reload 2>&1 | tee -a /var/log/${LOG_NAME} | /usr/bin/logger -i -t ${LOG_NAME}
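`wait_for_udev_settle` above calls `udevadm settle` repeatedly rather than once, because a single settle can return while events are still being queued (per the review linked in the script). The retry shape, generalized with an injected callable so it can be exercised without udev (names are mine):

```python
import time

def wait_for_settle(settle, attempts=10, delay=0.1):
    """Invoke the settle step a fixed number of times with a short pause,
    mirroring the SETTLE_ATTEMPTS loop in the shell script above."""
    for _ in range(attempts):
        settle()  # e.g. subprocess.run(["udevadm", "settle", "--timeout=360"])
        time.sleep(delay)

calls = []
wait_for_settle(lambda: calls.append(None), attempts=3, delay=0)
print(len(calls))  # 3
```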


@ -1,505 +0,0 @@
#!/usr/bin/env python
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import logging
from logging.handlers import SysLogHandler
from optparse import OptionParser
import os
import re
import signal
import sys
import time
# Add syslog levels to logging module.
logging.NOTICE = 25
logging.ALERT = 60
logging.EMERG = 70
logging.addLevelName(logging.NOTICE, 'NOTICE')
logging.addLevelName(logging.ALERT, 'ALERT')
logging.addLevelName(logging.EMERG, 'EMERG')
SysLogHandler.priority_map['NOTICE'] = 'notice'
SysLogHandler.priority_map['ALERT'] = 'alert'
SysLogHandler.priority_map['EMERG'] = 'emerg'
# Define data and message format according to RFC 5424.
rfc5424_format = '{version} {timestamp} {hostname} {appname} {procid}'\
' {msgid} {structured_data} {msg}'
date_format = '%Y-%m-%dT%H:%M:%SZ'
# Define global flag guarding concurrent sends.
sending_in_progress = 0
# Define file types.
msg_levels = {'ruby': {'regex': '(?P<level>[DIWEF]), \[[0-9-]{10}T',
'levels': {'D': logging.DEBUG,
'I': logging.INFO,
'W': logging.WARNING,
'E': logging.ERROR,
'F': logging.FATAL
}
},
'syslog': {'regex': ('[0-9-]{10}T[0-9:]{8}Z (?P<level>'
'debug|info|notice|warning|err|crit|'
'alert|emerg)'),
'levels': {'debug': logging.DEBUG,
'info': logging.INFO,
'notice': logging.NOTICE,
'warning': logging.WARNING,
'err': logging.ERROR,
'crit': logging.CRITICAL,
'alert': logging.ALERT,
'emerg': logging.EMERG
}
},
'anaconda': {'regex': ('[0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
},
'netprobe': {'regex': ('[0-9-]{10} [0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
}
}
relevel_errors = {
'anaconda': [
{
'regex': ('Error downloading http://.*/images/'
'(product|updates).img: HTTP response code said error'),
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
},
{
'regex': 'got to setupCdrom without a CD device',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
}
]
}
# Create a main logger.
logging.basicConfig(format='%(levelname)s: %(message)s')
main_logger = logging.getLogger()
main_logger.setLevel(logging.NOTSET)
class WatchedFile:
"""WatchedFile(filename) => Object that read lines from file if exist."""
def __init__(self, name):
self.name = name
self.fo = None
self.where = 0
def reset(self):
if self.fo:
self.fo.close()
self.fo = None
self.where = 0
def _checkRewrite(self):
try:
if os.stat(self.name)[6] < self.where:
self.reset()
except OSError:
self.close()
def readLines(self):
"""Return list of last append lines from file if exist."""
self._checkRewrite()
if not self.fo:
try:
self.fo = open(self.name, 'r')
except IOError:
return ()
lines = self.fo.readlines()
self.where = self.fo.tell()
return lines
def close(self):
self.reset()
class WatchedGroup:
"""Can send data from group of specified files to specified servers."""
def __init__(self, servers, files, name):
self.servers = servers
self.files = files
self.log_type = files.get('log_type', 'syslog')
self.name = name
self._createLogger()
def _createLogger(self):
self.watchedfiles = []
logger = logging.getLogger(self.name)
logger.setLevel(logging.NOTSET)
logger.propagate = False
# Create log formatter.
format_dict = {'version': '1',
'timestamp': '%(asctime)s',
'hostname': config['hostname'],
'appname': self.files['tag'],
'procid': '-',
'msgid': '-',
'structured_data': '-',
'msg': '%(message)s'
}
log_format = rfc5424_format.format(**format_dict)
formatter = logging.Formatter(log_format, date_format)
# Add log handler for each server.
for server in self.servers:
port = server.get('port', 514)
syslog = SysLogHandler((server["host"], port))
syslog.setFormatter(formatter)
logger.addHandler(syslog)
self.logger = logger
# Create WatchedFile objects from list of files.
for name in self.files['files']:
self.watchedfiles.append(WatchedFile(name))
def send(self):
"""Send append data from files to servers."""
for watchedfile in self.watchedfiles:
for line in watchedfile.readLines():
line = line.strip()
level = self._get_msg_level(line, self.log_type)
# Get rid of duplicated information in anaconda logs
line = re.sub(
msg_levels[self.log_type]['regex'] + "\s*:?\s?",
"",
line
)
# Ignore meaningless errors
try:
for r in relevel_errors[self.log_type]:
if level == r['levelfrom'] and \
re.match(r['regex'], line):
level = r['levelto']
except KeyError:
pass
self.logger.log(level, line)
main_logger and main_logger.log(
level,
'From file "%s" send: %s' % (watchedfile.name, line)
)
@staticmethod
def _get_msg_level(line, log_type):
if log_type in msg_levels:
msg_type = msg_levels[log_type]
regex = re.match(msg_type['regex'], line)
if regex:
return msg_type['levels'][regex.group('level')]
return logging.INFO
def sig_handler(signum, frame):
"""Send all new data when signal arrived."""
if not sending_in_progress:
send_all()
exit(signum)
else:
config['run_once'] = True
def send_all():
"""Send any updates."""
for group in watchlist:
group.send()
def main_loop():
"""Periodicaly call sendlogs() for each group in watchlist."""
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)
while watchlist:
time.sleep(0.5)
send_all()
# If asked to run_once, exit now
if config['run_once']:
break
class Config:
"""Collection of config generation methods.
Usage: config = Config.getConfig()
"""
@classmethod
def getConfig(cls):
"""Generate config from command line arguments and config file."""
# example_config = {
# "daemon": True,
# "run_once": False,
# "debug": False,
# "watchlist": [
# {"servers": [ {"host": "localhost", "port": 514} ],
# "watchfiles": [
# {"tag": "anaconda",
# "log_type": "anaconda",
# "files": ["/tmp/anaconda.log",
# "/mnt/sysimage/root/install.log"]
# }
# ]
# }
# ]
# }
default_config = {"daemon": True,
"run_once": False,
"debug": False,
"hostname": cls._getHostname(),
"watchlist": []
}
# First use default config as running config.
config = dict(default_config)
# Get command line options and validate it.
cmdline = cls.cmdlineParse()[0]
# Check config file source and read it.
if cmdline.config_file or cmdline.stdin_config:
try:
if cmdline.stdin_config is True:
fo = sys.stdin
else:
fo = open(cmdline.config_file, 'r')
parsed_config = json.load(fo)
if cmdline.debug:
print(parsed_config)
except IOError: # Raised if IO operations failed.
main_logger.error("Can not read config file %s\n" %
cmdline.config_file)
exit(1)
except ValueError as e: # Raised if json parsing failed.
main_logger.error("Can not parse config file. %s\n" %
e.message)
exit(1)
# Validate config from config file.
cls.configValidate(parsed_config)
# Copy gathered config from config file to running config
# structure.
for key, value in parsed_config.items():
config[key] = value
else:
# If no config file specified use watchlist setting from
# command line.
watchlist = {"servers": [{"host": cmdline.host,
"port": cmdline.port}],
"watchfiles": [{"tag": cmdline.tag,
"log_type": cmdline.log_type,
"files": cmdline.watchfiles}]}
config['watchlist'].append(watchlist)
# Apply behavioural command line options to running config.
if cmdline.no_daemon:
config["daemon"] = False
if cmdline.run_once:
config["run_once"] = True
if cmdline.debug:
config["debug"] = True
return config
@staticmethod
def _getHostname():
"""Generate hostname by BOOTIF kernel option or use os.uname()."""
with open('/proc/cmdline') as fo:
cpu_cmdline = fo.readline().strip()
regex = re.search('(?<=BOOTIF=)([0-9a-fA-F-]*)', cpu_cmdline)
if regex:
mac = regex.group(0).upper()
return ''.join(mac.split('-'))
return os.uname()[1]
@staticmethod
def cmdlineParse():
"""Parse command line config options."""
parser = OptionParser()
parser.add_option("-c", "--config", dest="config_file", metavar="FILE",
help="Read config from FILE.")
parser.add_option("-i", "--stdin", dest="stdin_config", default=False,
action="store_true", help="Read config from Stdin.")
# FIXME: Add option groups.
parser.add_option("-r", "--run-once", dest="run_once",
action="store_true", help="Send all data and exit.")
parser.add_option("-n", "--no-daemon", dest="no_daemon",
action="store_true", help="Do not daemonize.")
parser.add_option("-d", "--debug", dest="debug",
action="store_true", help="Print debug messages.")
parser.add_option("-t", "--tag", dest="tag", metavar="TAG",
help="Set tag of sending messages as TAG.")
parser.add_option("-T", "--type", dest="log_type", metavar="TYPE",
default='syslog',
help="Set type of files as TYPE"
"(default: %default).")
parser.add_option("-f", "--watchfile", dest="watchfiles",
action="append",
metavar="FILE", help="Add FILE to watchlist.")
parser.add_option("-s", "--host", dest="host", metavar="HOSTNAME",
help="Set destination as HOSTNAME.")
parser.add_option("-p", "--port", dest="port", type="int", default=514,
metavar="PORT",
help="Set remote port as PORT (default: %default).")
options, args = parser.parse_args()
# Validate gathered options.
if options.config_file and options.stdin_config:
parser.error("You must not set both options --config"
" and --stdin at the same time.")
exit(1)
if ((options.config_file or options.stdin_config) and
(options.tag or options.watchfiles or options.host)):
main_logger.warning("If --config or --stdin is set up options"
" --tag, --watchfile, --type,"
" --host and --port will be ignored.")
if (not (options.config_file or options.stdin_config) and
not (options.tag and options.watchfiles and options.host)):
parser.error("Options --tag, --watchfile and --host"
" must be set up at the same time.")
exit(1)
return options, args
@staticmethod
def _checkType(value, value_type, value_name='', msg=None):
"""Check correctness of type of value and exit if not."""
if not isinstance(value, value_type):
message = msg or "Value %r in config has type %r but"\
" %r is expected." %\
(value_name, type(value).__name__, value_type.__name__)
main_logger.error(message)
exit(1)
@classmethod
def configValidate(cls, config):
"""Validate types and names of data items in config."""
cls._checkType(config, dict, msg='Config must be a dict.')
for key in ("daemon", "run_once", "debug"):
if key in config:
cls._checkType(config[key], bool, key)
key = "hostname"
if key in config:
cls._checkType(config[key], basestring, key)
key = "watchlist"
if key in config:
cls._checkType(config[key], list, key)
else:
main_logger.error("There must be key %r in config." % key)
exit(1)
for item in config["watchlist"]:
cls._checkType(item, dict, "watchlist[n]")
key, name = "servers", "watchlist[n] => servers"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
key, name = "watchfiles", "watchlist[n] => watchfiles"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
for item2 in item["servers"]:
cls._checkType(item2, dict, "watchlist[n] => servers[n]")
key, name = "host", "watchlist[n] => servers[n] => host"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => servers[n]" item'))
exit(1)
key, name = "port", "watchlist[n] => servers[n] => port"
if key in item2:
cls._checkType(item2[key], int, name)
for item2 in item["watchfiles"]:
cls._checkType(item2, dict, "watchlist[n] => watchfiles[n]")
key, name = "tag", "watchlist[n] => watchfiles[n] => tag"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
key = "log_type"
name = "watchlist[n] => watchfiles[n] => log_type"
if key in item2:
cls._checkType(item2[key], basestring, name)
key, name = "files", "watchlist[n] => watchfiles[n] => files"
if key in item2:
cls._checkType(item2[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
for item3 in item2["files"]:
name = "watchlist[n] => watchfiles[n] => files[n]"
cls._checkType(item3, basestring, name)
# Create global config.
config = Config.getConfig()
# Create list of WatchedGroup objects with different log names.
watchlist = []
i = 0
for item in config["watchlist"]:
for files in item['watchfiles']:
watchlist.append(WatchedGroup(item['servers'], files, str(i)))
i += 1
# Fork and loop
if config["daemon"]:
if not os.fork():
# Redirect the standard I/O file descriptors to the specified file.
main_logger = None
DEVNULL = getattr(os, "devnull", "/dev/null")
os.open(DEVNULL, os.O_RDWR) # standard input (0)
os.dup2(0, 1) # Duplicate standard input to standard output (1)
os.dup2(0, 2) # Duplicate standard input to standard error (2)
main_loop()
sys.exit(1)
sys.exit(0)
else:
if not config['debug']:
main_logger = None
main_loop()


@@ -1,21 +0,0 @@
#!/bin/sh -e
PREREQS="udev"
prereqs() { echo "$PREREQS"; }
case "$1" in
prereqs)
prereqs
exit 0
;;
esac
. /usr/share/initramfs-tools/hook-functions
RULES=80-net-name-slot.rules
if [ -e /etc/udev/rules.d/$RULES ]; then
cp -p /etc/udev/rules.d/$RULES $DESTDIR/lib/udev/rules.d/
elif [ -e /lib/udev/rules.d/$RULES ]; then
cp -p /lib/udev/rules.d/$RULES $DESTDIR/lib/udev/rules.d/
fi


@@ -1,16 +0,0 @@
[options]
broken_system_clock = true
[problems]
# Superblock last mount time is in the future (PR_0_FUTURE_SB_LAST_MOUNT).
0x000031 = {
preen_ok = true
preen_nomessage = true
}
# Superblock last write time is in the future (PR_0_FUTURE_SB_LAST_WRITE).
0x000032 = {
preen_ok = true
preen_nomessage = true
}


@@ -1 +0,0 @@
bootstrap


@@ -1,28 +0,0 @@
/var/log/cron
/var/log/maillog
/var/log/messages
/var/log/secure
/var/log/spooler
/var/log/mcollective.log
/var/log/nailgun-agent.log
{
# This file is used for daily log rotations, do not use size options here
sharedscripts
daily
# rotate only if 30M size or bigger
minsize 30M
maxsize 50M
# truncate file, do not delete & recreate
copytruncate
# keep logs for 3 rotations
rotate 3
# compression will be postponed to the next rotation, if uncommented
compress
# ignore missing files
missingok
# do not rotate empty files
notifempty
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
endscript
}


@@ -1,27 +0,0 @@
main_collective = mcollective
collectives = mcollective
libdir = /usr/share/mcollective/plugins
logfile = /var/log/mcollective.log
loglevel = debug
direct_addressing = 1
daemonize = 1
# Set TTL to 1.5 hours
ttl = 5400
# Plugins
securityprovider = psk
plugin.psk = unset
connector = rabbitmq
plugin.rabbitmq.vhost = mcollective
plugin.rabbitmq.pool.size = 1
plugin.rabbitmq.pool.1.host =
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mcollective
plugin.rabbitmq.pool.1.password = marionette
plugin.rabbitmq.heartbeat_interval = 30
# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml


@@ -1,8 +0,0 @@
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.
# Enabling of VLAN support
8021q


@@ -1,4 +0,0 @@
#!/bin/sh -e
# Perform fuel bootstrap configuration
/usr/bin/fix-configs-on-startup || /bin/true


@@ -1,6 +0,0 @@
# Log all messages with this template
$template CustomLog, "%$NOW%T%TIMESTAMP:8:$%Z %syslogseverity-text% %syslogtag% %msg%\n"
$ActionFileDefaultTemplate CustomLog
user.debug /var/log/messages


@@ -1,22 +0,0 @@
{
"watchlist": [
{"servers": [ {"host": "@MASTER_NODE_IP@"} ],
"watchfiles": [
{"tag": "bootstrap/dmesg", "files": ["/var/log/dmesg"]},
{"tag": "bootstrap/secure", "files": ["/var/log/secure"]},
{"tag": "bootstrap/messages", "files": ["/var/log/messages"]},
{"tag": "bootstrap/fuel-agent", "files": ["/var/log/fuel-agent.log"]},
{"tag": "bootstrap/syslog", "files": ["/var/log/syslog"]},
{"tag": "bootstrap/auth", "files": ["/var/log/auth.log"]},
{"tag": "bootstrap/mcollective", "log_type": "ruby",
"files": ["/var/log/mcollective.log"]},
{"tag": "bootstrap/agent", "log_type": "ruby",
"files": ["/var/log/nailgun-agent.log"]},
{"tag": "bootstrap/netprobe_sender", "log_type": "netprobe",
"files": ["/var/log/netprobe_sender.log"]},
{"tag": "bootstrap/netprobe_listener", "log_type": "netprobe",
"files": ["/var/log/netprobe_listener.log"]}
]
}
]
}


@@ -1,20 +0,0 @@
Protocol 2
SyslogFacility AUTHPRIV
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication no
UsePAM no
UseDNS no
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/lib/openssh/sftp-server
# Secure Ciphers and MACs
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1


@@ -1,87 +0,0 @@
#!/bin/sh
set -e
export PATH=$PATH:/bin
masternode_ip=$(sed -rn 's/^.*url=http:\/\/(([0-9]{1,3}\.){3}[0-9]{1,3}).*$/\1/ p' /proc/cmdline)
mco_user=$(sed 's/\ /\n/g' /proc/cmdline | grep mco_user | awk -F\= '{print $2}')
mco_pass=$(sed 's/\ /\n/g' /proc/cmdline | grep mco_pass | awk -F\= '{print $2}')
[ -z "$mco_user" ] && mco_user="mcollective"
[ -z "$mco_pass" ] && mco_pass="marionette"
# Send logs to master node.
cat > /etc/send2syslog.conf <<EOF
{
"watchlist": [
{"servers": [ {"host": "$masternode_ip"} ],
"watchfiles": [
{"tag": "bootstrap/kern.log", "files": ["/var/log/kern.log"]},
{"tag": "bootstrap/udev", "files": ["/var/log/udev"]},
{"tag": "bootstrap/dmesg", "files": ["/var/log/dmesg"]},
{"tag": "bootstrap/secure", "files": ["/var/log/secure"]},
{"tag": "bootstrap/messages", "files": ["/var/log/messages"]},
{"tag": "bootstrap/fuel-agent", "files": ["/var/log/fuel-agent.log"]},
{"tag": "bootstrap/syslog", "files": ["/var/log/syslog"]},
{"tag": "bootstrap/auth", "files": ["/var/log/auth.log"]},
{"tag": "bootstrap/mcollective", "log_type": "ruby",
"files": ["/var/log/mcollective.log"]},
{"tag": "bootstrap/agent", "log_type": "ruby",
"files": ["/var/log/nailgun-agent.log"]},
{"tag": "bootstrap/netprobe_sender", "log_type": "netprobe",
"files": ["/var/log/netprobe_sender.log"]},
{"tag": "bootstrap/netprobe_listener", "log_type": "netprobe",
"files": ["/var/log/netprobe_listener.log"]}
]
}
]
}
EOF
/usr/bin/send2syslog.py -i < /etc/send2syslog.conf
#
# Set up NTP
#
# Disable panic about huge clock offset
#
sed -i '/^\s*tinker panic/ d' /etc/ntp.conf
sed -i '1 i tinker panic 0' /etc/ntp.conf
# Create default drift file
#
mkdir -p /var/lib/ntp
chown ntp: /var/lib/ntp
echo 0 > /var/lib/ntp/ntp.drift
chown ntp: /var/lib/ntp/ntp.drift
# Sync clock with master node
#
sed -i "/^\s*server\b/ d" /etc/ntp.conf
echo "server $masternode_ip burst iburst" >> /etc/ntp.conf
systemctl restart ntp
sync_identity() {
while true
do
if new_identity=$(grep -s --line-regexp "[[:digit:]]\+" /etc/nailgun_uid)
then
if [ "$new_identity" != "$identity" ]
then
identity=$new_identity
sed -i '/^identity =/d' /etc/mcollective/server.cfg > /dev/null 2>&1
echo "identity = $identity" >> /etc/mcollective/server.cfg
service mcollective restart
fi
fi
sleep 5
done
}
#
# Update mcollective config
#
sed -i "s/^plugin.rabbitmq.pool.1.host\b.*$/plugin.rabbitmq.pool.1.host = $masternode_ip/" /etc/mcollective/server.cfg
sed -i "s/^plugin.rabbitmq.pool.1.user\b.*$/plugin.rabbitmq.pool.1.user = $mco_user/" /etc/mcollective/server.cfg
sed -i "s/^plugin.rabbitmq.pool.1.password\b.*$/plugin.rabbitmq.pool.1.password= $mco_pass/" /etc/mcollective/server.cfg
sync_identity &
# starting distributed serialization worker
/usr/bin/dask-worker --nprocs=`nproc` --nthreads 1 @MASTER_NODE_IP@:8002


@@ -1,505 +0,0 @@
#!/usr/bin/env python
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import logging
from logging.handlers import SysLogHandler
from optparse import OptionParser
import os
import re
import signal
import sys
import time
# Add syslog levels to logging module.
logging.NOTICE = 25
logging.ALERT = 60
logging.EMERG = 70
logging.addLevelName(logging.NOTICE, 'NOTICE')
logging.addLevelName(logging.ALERT, 'ALERT')
logging.addLevelName(logging.EMERG, 'EMERG')
SysLogHandler.priority_map['NOTICE'] = 'notice'
SysLogHandler.priority_map['ALERT'] = 'alert'
SysLogHandler.priority_map['EMERG'] = 'emerg'
# Define data and message format according to RFC 5424.
rfc5424_format = '{version} {timestamp} {hostname} {appname} {procid}'\
' {msgid} {structured_data} {msg}'
date_format = '%Y-%m-%dT%H:%M:%SZ'
# Define global semaphore.
sending_in_progress = 0
# Define file types.
msg_levels = {'ruby': {'regex': '(?P<level>[DIWEF]), \[[0-9-]{10}T',
'levels': {'D': logging.DEBUG,
'I': logging.INFO,
'W': logging.WARNING,
'E': logging.ERROR,
'F': logging.FATAL
}
},
'syslog': {'regex': ('[0-9-]{10}T[0-9:]{8}Z (?P<level>'
'debug|info|notice|warning|err|crit|'
'alert|emerg)'),
'levels': {'debug': logging.DEBUG,
'info': logging.INFO,
'notice': logging.NOTICE,
'warning': logging.WARNING,
'err': logging.ERROR,
'crit': logging.CRITICAL,
'alert': logging.ALERT,
'emerg': logging.EMERG
}
},
'anaconda': {'regex': ('[0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
},
'netprobe': {'regex': ('[0-9-]{10} [0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
}
}
relevel_errors = {
'anaconda': [
{
'regex': 'Error downloading http://.*/images/(product|updates)'
'.img: HTTP response code said error',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
},
{
'regex': 'got to setupCdrom without a CD device',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
}
]
}
# Create a main logger.
logging.basicConfig(format='%(levelname)s: %(message)s')
main_logger = logging.getLogger()
main_logger.setLevel(logging.NOTSET)
class WatchedFile:
"""WatchedFile(filename) => Object that read lines from file if exist."""
def __init__(self, name):
self.name = name
self.fo = None
self.where = 0
def reset(self):
if self.fo:
self.fo.close()
self.fo = None
self.where = 0
def _checkRewrite(self):
try:
if os.stat(self.name).st_size < self.where:
self.reset()
except OSError:
self.close()
def readLines(self):
"""Return list of last append lines from file if exist."""
self._checkRewrite()
if not self.fo:
try:
self.fo = open(self.name, 'r')
except IOError:
return ()
lines = self.fo.readlines()
self.where = self.fo.tell()
return lines
def close(self):
self.reset()
class WatchedGroup:
"""Can send data from group of specified files to specified servers."""
def __init__(self, servers, files, name):
self.servers = servers
self.files = files
self.log_type = files.get('log_type', 'syslog')
self.name = name
self._createLogger()
def _createLogger(self):
self.watchedfiles = []
logger = logging.getLogger(self.name)
logger.setLevel(logging.NOTSET)
logger.propagate = False
# Create log formatter.
format_dict = {'version': '1',
'timestamp': '%(asctime)s',
'hostname': config['hostname'],
'appname': self.files['tag'],
'procid': '-',
'msgid': '-',
'structured_data': '-',
'msg': '%(message)s'
}
log_format = rfc5424_format.format(**format_dict)
formatter = logging.Formatter(log_format, date_format)
# Add log handler for each server.
for server in self.servers:
port = server.get('port') or 514
syslog = SysLogHandler((server["host"], port))
syslog.setFormatter(formatter)
logger.addHandler(syslog)
self.logger = logger
# Create WatchedFile objects from list of files.
for name in self.files['files']:
self.watchedfiles.append(WatchedFile(name))
def send(self):
"""Send append data from files to servers."""
for watchedfile in self.watchedfiles:
for line in watchedfile.readLines():
line = line.strip()
level = self._get_msg_level(line, self.log_type)
# Get rid of duplicated information in anaconda logs
line = re.sub(
msg_levels[self.log_type]['regex'] + "\s*:?\s?",
"",
line
)
# Ignore meaningless errors
try:
for r in relevel_errors[self.log_type]:
if level == r['levelfrom'] and \
re.match(r['regex'], line):
level = r['levelto']
except KeyError:
pass
self.logger.log(level, line)
main_logger and main_logger.log(
level,
'From file "%s" send: %s' % (watchedfile.name, line)
)
@staticmethod
def _get_msg_level(line, log_type):
if log_type in msg_levels:
msg_type = msg_levels[log_type]
regex = re.match(msg_type['regex'], line)
if regex:
return msg_type['levels'][regex.group('level')]
return logging.INFO
def sig_handler(signum, frame):
"""Send all new data when signal arrived."""
if not sending_in_progress:
send_all()
exit(signum)
else:
config['run_once'] = True
def send_all():
"""Send any updates."""
for group in watchlist:
group.send()
def main_loop():
"""Periodicaly call sendlogs() for each group in watchlist."""
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)
while watchlist:
time.sleep(0.5)
send_all()
# If asked to run_once, exit now
if config['run_once']:
break
class Config:
"""Collection of config generation methods.
Usage: config = Config.getConfig()
"""
@classmethod
def getConfig(cls):
"""Generate config from command line arguments and config file."""
# example_config = {
# "daemon": True,
# "run_once": False,
# "debug": False,
# "watchlist": [
# {"servers": [ {"host": "localhost", "port": 514} ],
# "watchfiles": [
# {"tag": "anaconda",
# "log_type": "anaconda",
# "files": ["/tmp/anaconda.log",
# "/mnt/sysimage/root/install.log"]
# }
# ]
# }
# ]
# }
default_config = {"daemon": True,
"run_once": False,
"debug": False,
"hostname": cls._getHostname(),
"watchlist": []
}
# First use default config as running config.
config = dict(default_config)
# Get command line options and validate it.
cmdline = cls.cmdlineParse()[0]
# Check config file source and read it.
if cmdline.config_file or cmdline.stdin_config:
try:
if cmdline.stdin_config is True:
fo = sys.stdin
else:
fo = open(cmdline.config_file, 'r')
parsed_config = json.load(fo)
if cmdline.debug:
print(parsed_config)
except IOError: # Raised if IO operations failed.
main_logger.error("Can not read config file %s\n" %
cmdline.config_file)
exit(1)
except ValueError as e: # Raised if json parsing failed.
main_logger.error("Can not parse config file. %s\n" %
e.message)
exit(1)
# Validate config from config file.
cls.configValidate(parsed_config)
# Copy gathered config from config file to running config
# structure.
for key, value in parsed_config.items():
config[key] = value
else:
# If no config file specified use watchlist setting from
# command line.
watchlist = {"servers": [{"host": cmdline.host,
"port": cmdline.port}],
"watchfiles": [{"tag": cmdline.tag,
"log_type": cmdline.log_type,
"files": cmdline.watchfiles}]}
config['watchlist'].append(watchlist)
# Apply behavioural command line options to running config.
if cmdline.no_daemon:
config["daemon"] = False
if cmdline.run_once:
config["run_once"] = True
if cmdline.debug:
config["debug"] = True
return config
@staticmethod
def _getHostname():
"""Generate hostname by BOOTIF kernel option or use os.uname()."""
with open('/proc/cmdline') as fo:
cpu_cmdline = fo.readline().strip()
regex = re.search('(?<=BOOTIF=)([0-9a-fA-F-]*)', cpu_cmdline)
if regex:
mac = regex.group(0).upper()
return ''.join(mac.split('-'))
return os.uname()[1]
@staticmethod
def cmdlineParse():
"""Parse command line config options."""
parser = OptionParser()
parser.add_option("-c", "--config", dest="config_file", metavar="FILE",
help="Read config from FILE.")
parser.add_option("-i", "--stdin", dest="stdin_config", default=False,
action="store_true", help="Read config from Stdin.")
# FIXME: Add option groups.
parser.add_option("-r", "--run-once", dest="run_once",
action="store_true", help="Send all data and exit.")
parser.add_option("-n", "--no-daemon", dest="no_daemon",
action="store_true", help="Do not daemonize.")
parser.add_option("-d", "--debug", dest="debug",
action="store_true", help="Print debug messages.")
parser.add_option("-t", "--tag", dest="tag", metavar="TAG",
help="Set tag of sending messages as TAG.")
parser.add_option("-T", "--type", dest="log_type", metavar="TYPE",
default='syslog',
help="Set type of files as TYPE"
"(default: %default).")
parser.add_option("-f", "--watchfile", dest="watchfiles",
action="append",
metavar="FILE", help="Add FILE to watchlist.")
parser.add_option("-s", "--host", dest="host", metavar="HOSTNAME",
help="Set destination as HOSTNAME.")
parser.add_option("-p", "--port", dest="port", type="int", default=514,
metavar="PORT",
help="Set remote port as PORT (default: %default).")
options, args = parser.parse_args()
# Validate gathered options.
if options.config_file and options.stdin_config:
parser.error("You must not set both options --config"
" and --stdin at the same time.")
exit(1)
if ((options.config_file or options.stdin_config) and
(options.tag or options.watchfiles or options.host)):
main_logger.warning("If --config or --stdin is set up options"
" --tag, --watchfile, --type,"
" --host and --port will be ignored.")
if (not (options.config_file or options.stdin_config) and
not (options.tag and options.watchfiles and options.host)):
parser.error("Options --tag, --watchfile and --host"
" must be set up at the same time.")
exit(1)
return options, args
@staticmethod
def _checkType(value, value_type, value_name='', msg=None):
"""Check correctness of type of value and exit if not."""
if not isinstance(value, value_type):
message = msg or "Value %r in config has type %r but"\
" %r is expected." %\
(value_name, type(value).__name__, value_type.__name__)
main_logger.error(message)
exit(1)
@classmethod
def configValidate(cls, config):
"""Validate types and names of data items in config."""
cls._checkType(config, dict, msg='Config must be a dict.')
for key in ("daemon", "run_once", "debug"):
if key in config:
cls._checkType(config[key], bool, key)
key = "hostname"
if key in config:
cls._checkType(config[key], basestring, key)
key = "watchlist"
if key in config:
cls._checkType(config[key], list, key)
else:
main_logger.error("There must be key %r in config." % key)
exit(1)
for item in config["watchlist"]:
cls._checkType(item, dict, "watchlist[n]")
key, name = "servers", "watchlist[n] => servers"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
key, name = "watchfiles", "watchlist[n] => watchfiles"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
for item2 in item["servers"]:
cls._checkType(item2, dict, "watchlist[n] => servers[n]")
key, name = "host", "watchlist[n] => servers[n] => host"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => servers[n]" item'))
exit(1)
key, name = "port", "watchlist[n] => servers[n] => port"
if key in item2:
cls._checkType(item2[key], int, name)
for item2 in item["watchfiles"]:
cls._checkType(item2, dict, "watchlist[n] => watchfiles[n]")
key, name = "tag", "watchlist[n] => watchfiles[n] => tag"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
key = "log_type"
name = "watchlist[n] => watchfiles[n] => log_type"
if key in item2:
cls._checkType(item2[key], basestring, name)
key, name = "files", "watchlist[n] => watchfiles[n] => files"
if key in item2:
cls._checkType(item2[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
for item3 in item2["files"]:
name = "watchlist[n] => watchfiles[n] => files[n]"
cls._checkType(item3, basestring, name)
# Create global config.
config = Config.getConfig()
# Create list of WatchedGroup objects with different log names.
watchlist = []
i = 0
for item in config["watchlist"]:
for files in item['watchfiles']:
watchlist.append(WatchedGroup(item['servers'], files, str(i)))
i += 1
# Fork and loop
if config["daemon"]:
if not os.fork():
# Redirect the standard I/O file descriptors to the specified file.
main_logger = None
DEVNULL = getattr(os, "devnull", "/dev/null")
os.open(DEVNULL, os.O_RDWR) # standard input (0)
os.dup2(0, 1) # Duplicate standard input to standard output (1)
os.dup2(0, 2) # Duplicate standard input to standard error (2)
main_loop()
sys.exit(1)
sys.exit(0)
else:
if not config['debug']:
main_logger = None
main_loop()


@@ -1 +0,0 @@
Artur Svechnikov <asvechnikov@mirantis.com>


@@ -1,5 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
global-exclude *.pyc


@@ -1,39 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_bootstrap.commands import base
from fuel_bootstrap.utils import bootstrap_image as bs_image
class ActivateCommand(base.BaseCommand):
"""Activate specified bootstrap image."""
def get_parser(self, prog_name):
parser = super(ActivateCommand, self).get_parser(prog_name)
parser.add_argument(
'id',
type=str,
metavar='ID',
help="ID of bootstrap image to be activated."
)
return parser
def take_action(self, parsed_args):
super(ActivateCommand, self).take_action(parsed_args)
# cliff handles errors by itself
image_uuid = bs_image.activate(parsed_args.id)
self.app.stdout.write("Bootstrap image {0} has been activated.\n"
.format(image_uuid))


@@ -1,41 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cliff import command
from fuel_bootstrap import consts
from fuel_bootstrap import settings
CONF = settings.CONF
class BaseCommand(command.Command):
def get_parser(self, prog_name):
parser = super(BaseCommand, self).get_parser(prog_name)
parser.add_argument(
'--config',
dest='config_file',
type=str,
metavar='FILE',
default=consts.CONFIG_FILE,
help="The config file is to be used for taking configuration"
" parameters from during building of the bootstrap."
)
return parser
def take_action(self, parsed_args):
CONF.read(parsed_args.config_file)


@@ -1,183 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_bootstrap.commands import base
from fuel_bootstrap.utils import bootstrap_image as bs_image
class BuildCommand(base.BaseCommand):
"""Build new bootstrap image with specified parameters."""
def get_parser(self, prog_name):
parser = super(BuildCommand, self).get_parser(prog_name)
parser.add_argument(
'--ubuntu-release',
type=str,
help="Choose the Ubuntu release",
)
parser.add_argument(
'--repo',
dest='repos',
type=str,
metavar='REPOSITORY',
help="Add one more repository. NOTE: The first repo should be"
" release repo. REPOSITORY format:"
" 'type uri codename [sections][,priority]'.",
action='append'
)
parser.add_argument(
'--http-proxy',
type=str,
metavar='URL',
help="Pass http-proxy URL."
)
parser.add_argument(
'--https-proxy',
type=str,
metavar='URL',
help="Pass https-proxy URL."
)
parser.add_argument(
'--direct-repo-addr',
metavar='ADDR',
help="Use a direct connection to repository(address)"
" bypass proxy.",
action='append'
)
parser.add_argument(
'--script',
dest='post_script_file',
type=str,
metavar='FILE',
help="The script is executed after installing packages (both"
" mandatory and user specified ones) and before creating"
" initramfs."
)
parser.add_argument(
'--package',
dest='packages',
type=str,
metavar='PKGNAME',
help="The option can be given multiple times, all specified"
" packages and their dependencies will be installed.",
action='append'
)
parser.add_argument(
'--label',
type=str,
metavar='LABEL',
help="Custom string, which will be presented in bootstrap"
" listing."
)
parser.add_argument(
'--extra-dir',
dest='extra_dirs',
type=str,
metavar='PATH',
            help="Directory that will be injected into the image"
                 " root filesystem. The option can be given multiple times."
                 " **NOTE** Files/packages will be"
                 " injected after installing all packages, but before"
                 " generating the system initramfs - thus it's possible to"
                 " adjust the initramfs.",
action='append'
)
parser.add_argument(
'--extend-kopts',
type=str,
metavar='OPTS',
help="Extend default kernel options"
)
parser.add_argument(
'--kernel-flavor',
type=str,
help="Defines kernel version."
)
parser.add_argument(
'--root-ssh-authorized-file',
type=str,
metavar='FILE',
            help="Copy a public ssh key into the image - makes it possible"
                 " to log in as root on any bootstrap node using the"
                 " key in question."
)
parser.add_argument(
'--output-dir',
type=str,
metavar='DIR',
help="Directory to store built image."
)
parser.add_argument(
'--image-build-dir',
type=str,
metavar='DIR',
            help="Directory to be used for building the image."
                 " /tmp/ is used by default."
)
parser.add_argument(
'--activate',
help="Activate bootstrap image after build",
action='store_true'
)
parser.add_argument(
'--no-default-packages',
help="Do not append default packages",
action='store_true'
)
parser.add_argument(
'--no-default-direct-repo-addr',
help="Do not append default direct repo address",
action='store_true'
)
parser.add_argument(
'--no-default-extra-dirs',
help="Do not append default extra directories",
action='store_true'
)
parser.add_argument(
'--no-compress',
            help="Do not compress the bootstrap image to tar.gz; bootstrap"
                 " files will be stored in the output dir. NOTE: Uncompressed"
                 " images are not supported by fuel-bootstrap.",
action='store_true'
)
parser.add_argument(
'--load-cert',
dest='certs',
metavar='FULL_PATH',
            help="Load a CA certificate for https connections. Works like"
                 " extra files.",
action='append'
)
parser.add_argument(
'--root-password',
type=str,
            help=("Root password for the bootstrap image."
                  " PasswordAuthentication over ssh is still rejected by"
                  " default! This password applies only to tty login!"),
)
return parser
def take_action(self, parsed_args):
super(BuildCommand, self).take_action(parsed_args)
image_uuid, path = bs_image.make_bootstrap(vars(parsed_args))
self.app.stdout.write("Bootstrap image {0} has been built: {1}\n"
.format(image_uuid, path))
if parsed_args.activate:
bs_image.import_image(path)
bs_image.activate(image_uuid)
self.app.stdout.write("Bootstrap image {0} has been activated.\n"
.format(image_uuid))
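As a rough illustration of the REPOSITORY string accepted by --repo ('type uri codename [sections][,priority]'), a hypothetical parser might split it like this. This is a sketch only, not the actual fuel-bootstrap parsing code; the real tool performs stricter validation (cf. the IncorrectRepository error class).

```python
def parse_repo(repo_str):
    # Split "type uri codename [sections][,priority]" into named fields.
    # Hypothetical helper for illustration only.
    repo_str, _, priority = repo_str.partition(',')
    parts = repo_str.split()
    return {
        'type': parts[0],
        'uri': parts[1],
        'suite': parts[2],
        'section': ' '.join(parts[3:]),
        'priority': priority.strip() or None,
    }


repo = parse_repo('deb http://archive.ubuntu.com/ubuntu xenial main universe,1050')
# repo['uri'] is 'http://archive.ubuntu.com/ubuntu'; repo['priority'] is '1050'
```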

@@ -1,39 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_bootstrap.commands import base
from fuel_bootstrap.utils import bootstrap_image as bs_image
class DeleteCommand(base.BaseCommand):
"""Delete specified bootstrap image from the system."""
def get_parser(self, prog_name):
parser = super(DeleteCommand, self).get_parser(prog_name)
parser.add_argument(
'id',
type=str,
metavar='ID',
help="ID of bootstrap image to be deleted"
)
return parser
def take_action(self, parsed_args):
super(DeleteCommand, self).take_action(parsed_args)
# cliff handles errors by itself
image_uuid = bs_image.delete(parsed_args.id)
self.app.stdout.write("Bootstrap image {0} has been deleted.\n"
.format(image_uuid))

@@ -1,49 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_bootstrap.commands import base
from fuel_bootstrap.utils import bootstrap_image as bs_image
class ImportCommand(base.BaseCommand):
"""Import already created bootstrap image to the system."""
def get_parser(self, prog_name):
parser = super(ImportCommand, self).get_parser(prog_name)
# shouldn't we check archive file type?
parser.add_argument(
'filename',
type=str,
metavar='ARCHIVE_FILE',
help="File name of bootstrap image archive"
)
parser.add_argument(
'--activate',
help="Activate bootstrap image after import",
action='store_true'
)
return parser
def take_action(self, parsed_args):
super(ImportCommand, self).take_action(parsed_args)
# Cliff handles errors by itself
image_uuid = bs_image.import_image(parsed_args.filename)
self.app.stdout.write("Bootstrap image {0} has been imported.\n"
.format(image_uuid))
if parsed_args.activate:
image_uuid = bs_image.activate(image_uuid)
self.app.stdout.write("Bootstrap image {0} has been activated\n"
.format(image_uuid))

@@ -1,34 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cliff import lister
from fuelclient.common import data_utils
from fuel_bootstrap.commands import base
from fuel_bootstrap.utils import bootstrap_image as bs_image
class ListCommand(base.BaseCommand, lister.Lister):
"""List all available bootstrap images."""
columns = ('uuid', 'label', 'status')
def take_action(self, parsed_args):
super(ListCommand, self).take_action(parsed_args)
data = bs_image.get_all()
data = data_utils.get_display_data_multi(self.columns, data)
return (self.columns, data)

@@ -1,53 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# These constants should not be configured.
# A custom config file can be specified with the --config key.
CONFIG_FILE = "/etc/fuel-bootstrap-cli/fuel_bootstrap_cli.yaml"
METADATA_FILE = "metadata.yaml"
COMPRESSED_CONTAINER_FORMAT = "tar.gz"
UNCOMPRESSED_CONTAINER_FORMAT = "directory"
ROOTFS = {'name': 'rootfs',
'mask': 'rootfs',
'compress_format': 'xz',
'uri': 'http://127.0.0.1:8080/bootstraps/{uuid}/root.squashfs',
'format': 'ext4',
'container': 'raw'}
BOOTSTRAP_MODULES = [
{'name': 'kernel',
'mask': 'kernel',
'uri': 'http://127.0.0.1:8080/bootstraps/{uuid}/vmlinuz'},
{'name': 'initrd',
'mask': 'initrd',
'compress_format': 'xz',
'uri': 'http://127.0.0.1:8080/bootstraps/{uuid}/initrd.img'},
ROOTFS
]
IMAGE_DATA = {'/': ROOTFS}
# FIXME(azvyagintsev) bug: https://bugs.launchpad.net/fuel/+bug/1525882
# Nailgun/astute should support an API call to change their bootstrap profile.
# While it's not implemented, we need the astute.yaml file to perform the
# bootstrap_image._activate_dockerized process
ASTUTE_CONFIG_FILE = "/etc/fuel/astute.yaml"
# FIXME(azvyagintsev) bug: https://bugs.launchpad.net/fuel/+bug/1525857
DISTROS = {'ubuntu': {'cobbler_profile': 'ubuntu_bootstrap',
'astute_flavor': 'ubuntu'}}
COBBLER_MANIFEST = '/etc/puppet/modules/fuel/examples/cobbler.pp'
ASTUTE_MANIFEST = '/etc/puppet/modules/fuel/examples/astute.pp'
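The {uuid} placeholders in the module URIs above are ordinary str.format fields; a small standalone sketch (a trimmed copy of the data plus a hypothetical helper, not part of the original module) of how they would be expanded for a concrete image:

```python
# Trimmed copy of the BOOTSTRAP_MODULES structure above, kept standalone.
MODULES = [
    {'name': 'kernel',
     'uri': 'http://127.0.0.1:8080/bootstraps/{uuid}/vmlinuz'},
    {'name': 'initrd',
     'uri': 'http://127.0.0.1:8080/bootstraps/{uuid}/initrd.img'},
]


def expand_uris(modules, uuid):
    # Substitute the image uuid into each module's download URI,
    # leaving the original templates untouched.
    return [dict(m, uri=m['uri'].format(uuid=uuid)) for m in modules]


expanded = expand_uris(MODULES, 'image_1')
# expanded[0]['uri'] == 'http://127.0.0.1:8080/bootstraps/image_1/vmlinuz'
```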

@@ -1,57 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class FuelBootstrapException(Exception):
"""Base Exception for Fuel-Bootstrap
All child classes must be instantiated before raising.
"""
def __init__(self, *args, **kwargs):
super(FuelBootstrapException, self).__init__(*args, **kwargs)
self.message = args[0]
class ActiveImageException(FuelBootstrapException):
    """Should be raised when an action is not permitted on the active image"""
class ImageAlreadyExists(FuelBootstrapException):
    """Should be raised when an image with the same uuid already exists"""
class NotImplemented(FuelBootstrapException):
"""Should be raised when some method lacks implementation"""
class IncorrectRepository(FuelBootstrapException):
"""Should be raised when repository can't be parsed"""
class IncorrectImage(FuelBootstrapException):
"""Should be raised when image has incorrect format"""
class ConfigFileNotExists(FuelBootstrapException):
"""Should be raised when default config file is not found"""
class WrongCobblerProfile(FuelBootstrapException):
"""Should be raised when wrong cobbler profile has been chosen"""
class WrongUbuntuRelease(FuelBootstrapException):
"""Should be raised when wrong Ubuntu Release profile has been chosen"""
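The note that child classes "must be instantiated before raising" exists because the base __init__ indexes args[0]; a condensed, runnable sketch of that behaviour (message text is made up for the example):

```python
class FuelBootstrapException(Exception):
    # Condensed copy of the base class above: args[0] is read eagerly,
    # so every subclass must be raised with at least a message argument.
    def __init__(self, *args, **kwargs):
        super(FuelBootstrapException, self).__init__(*args, **kwargs)
        self.message = args[0]


class ActiveImageException(FuelBootstrapException):
    pass


try:
    raise ActiveImageException('image image_1 is active')
except FuelBootstrapException as exc:
    caught = exc.message  # 'image image_1 is active'
```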

@@ -1,21 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def setup_hook(config):
import pbr
import pbr.packaging
# this monkey patch is to avoid appending git version to version
pbr.packaging._get_version_from_git = lambda pre_version: pre_version

@@ -1,59 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import sys
from pbr import version
from cliff import app
from cliff.commandmanager import CommandManager
LOG = logging.getLogger(__name__)
class FuelBootstrap(app.App):
"""Main cliff application class.
Performs initialization of the command manager and
configuration of basic engines.
"""
def __init__(self, **kwargs):
super(FuelBootstrap, self).__init__(
description='Command line Fuel bootstrap manager',
version=version.VersionInfo('fuel-bootstrap').version_string(),
command_manager=CommandManager('fuel_bootstrap',
convert_underscores=True),
**kwargs
)
def initialize_app(self, argv):
LOG.debug('initialize app')
def prepare_to_run_command(self, cmd):
LOG.debug('preparing following command to run: %s',
cmd.__class__.__name__)
def clean_up(self, cmd, result, err):
LOG.debug('clean up %s', cmd.__class__.__name__)
if err:
LOG.debug('got an error: %s', err)
def main(argv=sys.argv[1:]):
return FuelBootstrap().run(argv)

@@ -1,34 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuelclient import client
class MasterNodeSettings(object):
"""Class for working with Fuel master settings"""
class_api_path = "settings/"
def __init__(self):
self.connection = client.APIClient.default_client()
def update(self, data):
return self.connection.put_request(
self.class_api_path, data)
def get(self):
return self.connection.get_request(
self.class_api_path)

@@ -1,42 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import sys
import yaml
class Configuration(object):
def __init__(self):
self._data = {}
def read(self, config_file):
data = {}
if os.path.exists(config_file):
with open(config_file) as f:
data = yaml.safe_load(f)
else:
# TODO(atolochkova): need to add logger
sys.stderr.write("The config file couldn't be found: {0}"
.format(config_file))
self._data = data
def __getattr__(self, name):
return self._data.get(name)
CONF = Configuration()
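Because __getattr__ falls back to dict.get, any option missing from the config reads as None instead of raising AttributeError. A minimal sketch of that behaviour, without the YAML file I/O (the data argument is a hypothetical addition for the example):

```python
class Configuration(object):
    # Pared-down version of the class above; data is injected directly
    # instead of being read from a YAML file.
    def __init__(self, data=None):
        self._data = data or {}

    def __getattr__(self, name):
        # Only invoked when normal attribute lookup fails, so _data
        # itself resolves through the usual instance dict.
        return self._data.get(name)


conf = Configuration({'output_dir': '/tmp/'})
# conf.output_dir == '/tmp/'; conf.kernel_flavor is None (unset key)
```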

@@ -1,175 +0,0 @@
---
# User can pass any type of executable script
#post_script_file: /tmp/my_custom_script
root_ssh_authorized_file: /root/.ssh/id_rsa.pub
# Extended kernel PXE options
extend_kopts: "biosdevname=0 net.ifnames=1"
# Choose the Ubuntu release (currently only Xenial is supported). Keep in mind
# that you should also adjust the 'kernel_flavor' parameter.
ubuntu_release: xenial
# Directory that will be injected to the image
# root filesystem. **NOTE** Files/packages will be
# injected after installing all packages, but before
# generating system initramfs - thus it's possible to
# adjust initramfs
extra_dirs:
- /usr/share/fuel_bootstrap_cli/files/xenial
# Save generated bootstrap container to
output_dir: /tmp/
# Defines kernel version
kernel_flavor: linux-image-generic-lts-xenial
# Define packages list
packages:
- daemonize
- fuel-agent
- hwloc
- i40e-dkms
- linux-firmware
- linux-headers-generic
- live-boot
- live-boot-initramfs-tools
- mc
- mcollective
- msmtp-mta
- multipath-tools
- multipath-tools-boot
- nailgun-agent
- nailgun-mcagents
- network-checker
- ntp
- ntpdate
- openssh-client
- openssh-server
- puppet
- squashfs-tools
- ubuntu-minimal
- vim
- wget
- xz-utils
# NOTE(el): Packages required for the new generation
# network checker to run without access
# to repositories.
- sysfsutils
- bridge-utils
- ifenslave
- irqbalance
- iputils-arping
# Packages required for distributed serialization
- python-distributed
- python-alembic
- python-amqplib
- python-anyjson
- python-babel
- python-cinderclient
- python-crypto
- python-decorator
- python-fysom
- python-iso8601
- python-jinja2
- python-jsonschema
- python-keystoneclient
- python-keystonemiddleware
- python-kombu
- python-mako
- python-markupsafe
- python-migrate
- python-netaddr
- python-netifaces
- python-networkx
- python-novaclient
- python-oslo-config
- python-oslo-db
- python-oslo-serialization
- python-paste
- python-ply
- python-psycopg2
- python-pydot-ng
- python-requests
- python-simplejson
- python-six
- python-sqlalchemy
- python-stevedore
- python-tz
- python-urllib3
- python-uwsgidecorators
- python-webpy
- python-wsgilog
- python-yaml
- python-yaql
# Ignore proxy for these repos
#direct_repo_addresses:
# - 127.0.0.1
# - 172.18.196.50
# Pass proxy parameters for access to repos
#http_proxy: "192.168.1.50:8080"
#https_proxy: "192.168.1.50:8080"
# Define repos: upstream ubuntu-mirror, MirantisOpenstack mirror, extra repos.
# The first repo should be the distro repo.
#repos:
# -
# name: ubuntu
# priority: null
# section: "main universe multiverse"
# suite: xenial
# type: deb
# uri: "http://archive.ubuntu.com/ubuntu"
# -
# name: ubuntu-updates
# priority: null
# section: "main universe multiverse"
# suite: xenial-updates
# type: deb
# uri: "http://archive.ubuntu.com/ubuntu"
# -
# name: ubuntu-security
# priority: null
# section: "main universe multiverse"
# suite: xenial-security
# type: deb
# uri: "http://archive.ubuntu.com/ubuntu"
# -
# name: mos
# priority: "1050"
# section: "main restricted"
# suite: mos10.0
# type: deb
# uri: "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
# -
# name: mos-updates
# priority: "1050"
# section: "main restricted"
# suite: mos10.0-updates
# type: deb
# uri: "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
# -
# name: mos-security
# priority: "1050"
# section: "main restricted"
# suite: mos10.0-security
# type: deb
# uri: "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
# -
# name: mos-holdback
# priority: "1100"
# section: "main restricted"
# suite: mos10.0-holdback
# type: deb
# uri: "http://mirror.fuel-infra.org/mos-repos/ubuntu/10.0"
# -
# name: Extra_repo
# priority: null
# section: main
# suite: xenial
# type: deb
# uri: "http://archive.ubuntu.com/ubuntu"
# For import/activate commands only.
bootstrap_images_dir: "/var/www/nailgun/bootstraps"
# For import/activate commands only
active_bootstrap_symlink: "/var/www/nailgun/bootstraps/active_bootstrap"
# For import/activate commands only
#"fuel_access"
# "user": "admin"
# "password": "admin"
# User can provide default hashed root password for bootstrap image
# hashed_root_password: "$6$IInX3Cqo$5xytL1VZbZTusOewFnG6couuF0Ia61yS3rbC6P5YbZP2TYclwHqMq9e3Tg8rvQxhxSlBXP1DZhdUamxdOBXK0."
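Each commented repos entry above corresponds to a standard apt source line ('type uri suite sections'); a hypothetical helper showing the mapping (priorities would go into apt preferences, not the source line):

```python
def to_sources_list(repo):
    # Render one repos entry as an apt sources.list line.
    # Illustration only; extra keys such as name/priority are ignored
    # by format(**repo) when not referenced in the template.
    return '{type} {uri} {suite} {section}'.format(**repo)


line = to_sources_list({'type': 'deb',
                        'uri': 'http://archive.ubuntu.com/ubuntu',
                        'suite': 'xenial',
                        'section': 'main universe multiverse'})
# line == 'deb http://archive.ubuntu.com/ubuntu xenial main universe multiverse'
```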

@@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pytest
import unittest
@pytest.mark.usefixtures("bootstrap_app")
class BaseTest(unittest.TestCase):
pass

@@ -1,59 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import io
import pytest
from fuel_bootstrap import main
class SafeBootstrapApp(main.FuelBootstrap):
def build_option_parser(self, description, version, argparse_kwargs=None):
parser = super(SafeBootstrapApp, self).build_option_parser(
description, version, argparse_kwargs)
parser.set_defaults(debug=True)
return parser
def get_fuzzy_matches(self, cmd):
# Turn off guessing, we need exact failures in tests
return []
def run(self, argv):
try:
exit_code = super(SafeBootstrapApp, self).run(argv)
except SystemExit as e:
exit_code = e.code
assert exit_code == 0
class SafeStringIO(io.StringIO):
def write(self, s):
try:
s = unicode(s)
except NameError:
pass
super(SafeStringIO, self).write(s)
@pytest.fixture
def bootstrap_app(request):
request.cls.app = SafeBootstrapApp(
stdin=SafeStringIO(),
stdout=SafeStringIO(),
stderr=SafeStringIO()
)

@@ -1,31 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from fuel_bootstrap.tests import base
UUID = 'fake_uuid'
class TestActivateCommand(base.BaseTest):
@mock.patch('fuel_bootstrap.utils.bootstrap_image.activate',
return_value=UUID)
def test_parser(self, mock_activate):
self.app.run(['activate', UUID])
mock_activate.assert_called_once_with(UUID)
self.assertEqual("Bootstrap image {0} has been activated.\n"
.format(UUID), self.app.stdout.getvalue())

@@ -1,25 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_bootstrap.tests import base
class TestApp(base.BaseTest):
def test_help(self):
self.app.run(['--help'])
self.assertEqual('', self.app.stderr.getvalue())
self.assertNotIn('Could not', self.app.stdout.getvalue())

@@ -1,77 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import mock
from fuel_bootstrap import consts
from fuel_bootstrap.tests import base
PARSED_ARGS = {'extend_kopts': None,
'no_compress': False,
'output_dir': None,
'image_build_dir': None,
'post_script_file': None,
'root_ssh_authorized_file': None,
'activate': False,
'ubuntu_release': None,
'root_password': None,
'no_default_direct_repo_addr': False,
'https_proxy': None,
'http_proxy': None,
'direct_repo_addr': None,
'label': None,
'repos': None,
'kernel_flavor': None,
'certs': None,
'extra_dirs': None,
'no_default_packages': False,
'no_default_extra_dirs': False,
'packages': None,
'config_file': consts.CONFIG_FILE}
UUID = 'fake_uuid'
PATH = 'fake_path'
class TestBuildCommand(base.BaseTest):
@mock.patch('fuel_bootstrap.utils.bootstrap_image.make_bootstrap',
return_value=(UUID, PATH))
def test_parser(self, mock_make_bootstrap):
self.app.run(['build'])
mock_make_bootstrap.assert_called_once_with(PARSED_ARGS)
self.assertEqual("Bootstrap image {0} has been built: {1}\n"
.format(UUID, PATH),
self.app.stdout.getvalue())
@mock.patch('fuel_bootstrap.utils.bootstrap_image.activate',
return_value=(UUID, PATH))
@mock.patch('fuel_bootstrap.utils.bootstrap_image.import_image',
return_value=UUID)
@mock.patch('fuel_bootstrap.utils.bootstrap_image.make_bootstrap',
return_value=(UUID, PATH))
def test_parser_activate(self, mock_make_bootstrap,
mock_import, mock_activate):
self.app.run(['build', '--activate'])
parsed_args = copy.deepcopy(PARSED_ARGS)
parsed_args['activate'] = True
mock_make_bootstrap.assert_called_once_with(parsed_args)
mock_import.assert_called_once_with(PATH)
mock_activate.assert_called_once_with(UUID)
self.assertEqual("Bootstrap image {0} has been built: {1}\n"
"Bootstrap image {0} has been activated.\n"
.format(UUID, PATH),
self.app.stdout.getvalue())

@@ -1,32 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from fuel_bootstrap.tests import base
UUID = 'fake_uuid'
class TestDeleteCommand(base.BaseTest):
@mock.patch('fuel_bootstrap.utils.bootstrap_image.delete',
return_value=UUID)
def test_parser(self, mock_delete):
self.app.run(['delete', UUID])
mock_delete.assert_called_once_with(UUID)
self.assertEqual("Bootstrap image {0} has been deleted.\n"
.format(UUID), self.app.stdout.getvalue())

@@ -1,45 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from fuel_bootstrap.tests import base
UUID = 'fake_uuid'
PATH = 'fake_path'
class TestImportCommand(base.BaseTest):
@mock.patch('fuel_bootstrap.utils.bootstrap_image.import_image',
return_value=UUID)
def test_parser(self, mock_import):
self.app.run(['import', PATH])
mock_import.assert_called_once_with(PATH)
self.assertEqual("Bootstrap image {0} has been imported.\n"
.format(UUID), self.app.stdout.getvalue())
@mock.patch('fuel_bootstrap.utils.bootstrap_image.activate',
return_value=UUID)
@mock.patch('fuel_bootstrap.utils.bootstrap_image.import_image',
return_value=UUID)
def test_parser_activate(self, mock_import, mock_activate):
self.app.run(['import', PATH, '--activate'])
mock_import.assert_called_once_with(PATH)
mock_activate.assert_called_once_with(UUID)
self.assertEqual("Bootstrap image {0} has been imported.\n"
"Bootstrap image {0} has been activated\n"
.format(UUID), self.app.stdout.getvalue())

@@ -1,39 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from fuel_bootstrap.tests import base
class TestListCommand(base.BaseTest):
@mock.patch('fuel_bootstrap.utils.bootstrap_image.get_all')
def test_parser(self, m_get_all):
m_get_all.return_value = [{
'uuid': 'fake_uuid',
'label': 'fake_label',
'status': 'fake_status',
}]
self.app.run(['list'])
fake_list_result = ("+-----------+------------+-------------+\n"
"| uuid | label | status |\n"
"+-----------+------------+-------------+\n"
"| fake_uuid | fake_label | fake_status |\n"
"+-----------+------------+-------------+\n")
self.assertEqual(fake_list_result, self.app.stdout.getvalue())
self.assertEqual('', self.app.stderr.getvalue())

@@ -1,21 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from pkg_resources import require
def test_check_requirements_conflicts():
require('fuel-bootstrap')

@@ -1,370 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import unittest
import fuel_agent
import mock
from oslo_config import cfg
from fuel_bootstrap import consts
from fuel_bootstrap import errors
from fuel_bootstrap.utils import bootstrap_image as bs_image
from fuel_bootstrap.utils import data
from fuel_bootstrap.utils import notifier
# FAKE_OS is list of tuples which describes fake directories for testing.
# Each tuple has the following structure:
# (root, list_of_directories, list_of_files)
FAKE_OS = [
(
'/test',
['/test/image_1', '/test/image_2', '/test/link_active_bootstrap'],
['/test/test_file']
),
('/test/image_1', [], ['/test/image_1/metadata.yaml']),
('/test/image_2', [], [])
]
DATA = [{'uuid': 'image_1', 'status': 'active'}, {'uuid': 'image_2'}]
IMAGES_DIR = '/test'
BOOTSTRAP_SYMLINK = '/test/link_active_bootstrap'
def _is_link(dir_path):
return dir_path.startswith('link')
def _list_dir(bootstrap_images_dir):
result = []
for item in FAKE_OS:
if item[0] == bootstrap_images_dir:
result.extend(item[1])
result.extend(item[2])
return result
def _is_dir(dir_path):
for item in FAKE_OS:
if item[0] == dir_path:
return True
return False
def _exists(dir_path):
for root, dirs, files in FAKE_OS:
if dir_path in dirs or dir_path in files:
return True
class BootstrapImageTestCase(unittest.TestCase):
def setUp(self):
super(BootstrapImageTestCase, self).setUp()
self.conf_patcher = mock.patch.object(bs_image, 'CONF')
self.conf_mock = self.conf_patcher.start()
self.conf_mock.bootstrap_images_dir = IMAGES_DIR
self.conf_mock.active_bootstrap_symlink = BOOTSTRAP_SYMLINK
self.open_patcher = mock.patch('fuel_bootstrap.utils.bootstrap_image.'
'open', create=True,
new_callable=mock.mock_open)
self.open_mock = self.open_patcher.start()
self.yaml_patcher = mock.patch('yaml.safe_load')
self.yaml_mock = self.yaml_patcher.start()
self.dir_patcher = mock.patch('os.listdir')
self.dir_mock = self.dir_patcher.start()
self.dir_mock.side_effect = _list_dir
self.is_dir_patcher = mock.patch('os.path.isdir')
self.is_dir_mock = self.is_dir_patcher.start()
self.is_dir_mock.side_effect = _is_dir
self.is_link_patcher = mock.patch('os.path.islink')
self.is_link_mock = self.is_link_patcher.start()
self.is_link_mock.side_effect = _is_link
self.exists_patcher = mock.patch('os.path.exists')
self.exists_mock = self.exists_patcher.start()
self.exists_mock.side_effect = _exists
self.walk_patcher = mock.patch('os.walk')
self.walk_mock = self.walk_patcher.start()
self.walk_mock.return_value = [('/test/image_3',
['directory'], ['file'])]
def tearDown(self):
mock.patch.stopall()
@mock.patch.object(bs_image, 'parse', side_effect=DATA)
def test_get_all(self, parse_mock):
result = bs_image.get_all()
self.assertEqual(DATA, result)
self.assertEqual(2, parse_mock.call_count)
parse_mock.assert_has_calls([mock.call('/test/image_1'),
mock.call('/test/image_2')])
@mock.patch('os.path.islink', return_value=True)
def test_parse_link(self, islink_mock):
image_uuid = '/test/link_active_bootstrap'
error_msg = "There are no such image \[{0}].".format(image_uuid)
with self.assertRaisesRegexp(errors.IncorrectImage, error_msg):
bs_image.parse(image_uuid)
@mock.patch('os.path.isdir', return_value=False)
def test_parse_not_dir(self, isdir_mock):
image_uuid = '/test/test_file'
error_msg = "There are no such image \[{0}].".format(image_uuid)
with self.assertRaisesRegexp(errors.IncorrectImage, error_msg):
bs_image.parse(image_uuid)
def test_parse_no_metadata(self):
image_uuid = '/test/image_2'
error_msg = ("Image \[{0}] doesn't contain metadata file."
.format(image_uuid))
with self.assertRaisesRegexp(errors.IncorrectImage, error_msg):
bs_image.parse(image_uuid)
def test_parse_wrong_dir_name(self):
image_uuid = '/test/image_1'
self.yaml_mock.return_value = {'uuid': 'image_2'}
error_msg = ("UUID from metadata file \[{0}] doesn't equal"
" directory name \[{1}]".format('image_2', image_uuid))
with self.assertRaisesRegexp(errors.IncorrectImage, error_msg):
bs_image.parse(image_uuid)
@mock.patch.object(bs_image, 'is_active')
def test_parse_correct_image(self, active_mock):
active_mock.return_value = False
image_uuid = '/test/image_1'
self.yaml_mock.return_value = {'uuid': 'image_1'}
expected_data = {
'uuid': 'image_1',
'label': '',
'status': '',
}
data = bs_image.parse(image_uuid)
self.assertEqual(expected_data, data)
@mock.patch.object(bs_image, 'is_active')
def test_parse_active_image(self, active_mock):
active_mock.return_value = True
image_uuid = '/test/image_1'
self.yaml_mock.return_value = {'uuid': 'image_1'}
expected_data = {
'uuid': 'image_1',
'label': '',
'status': 'active',
}
data = bs_image.parse(image_uuid)
self.assertEqual(expected_data, data)
@mock.patch.object(bs_image, 'parse')
def test_delete_active_image(self, parse_mock):
parse_mock.return_value = DATA[0]
image_uuid = '/test/image_1'
error_msg = ("Image \[{0}] is active and can't be deleted."
.format(image_uuid))
with self.assertRaisesRegexp(errors.ActiveImageException, error_msg):
bs_image.delete(image_uuid)
@mock.patch.object(bs_image, 'parse')
@mock.patch('shutil.rmtree')
def test_delete(self, shutil_mock, parse_mock):
image_uuid = '/test/image_2'
self.assertEqual(image_uuid, bs_image.delete(image_uuid))
parse_mock.assert_called_once_with('/test/image_2')
shutil_mock.assert_called_once_with(image_uuid)
@mock.patch('os.path.realpath', return_value='/test/image_1')
def test_is_active(self, realpath_mock):
image_uuid = '/test/image_1'
self.assertTrue(bs_image.is_active(image_uuid))
def test_full_path_not_full(self):
image_uuid = 'image_1'
result = bs_image.full_path(image_uuid)
self.assertEqual(os.path.join(IMAGES_DIR, image_uuid), result)
def test_full_path_full(self):
image_uuid = '/test/image_1'
result = bs_image.full_path(image_uuid)
self.assertEqual(image_uuid, result)
@mock.patch('tempfile.mkdtemp')
@mock.patch('fuel_bootstrap.utils.bootstrap_image.extract_to_dir')
def test_import_exists_image(self, extract_mock, tempfile_mock):
self.yaml_mock.return_value = DATA[0]
image_uuid = DATA[0].get('uuid')
error_msg = ("Image \[{0}] already exists.".format(image_uuid))
with self.assertRaisesRegexp(errors.ImageAlreadyExists, error_msg):
bs_image.import_image('/path')
@mock.patch('os.chmod')
@mock.patch('shutil.move')
@mock.patch('tempfile.mkdtemp', return_value='/tmp/test')
@mock.patch('fuel_bootstrap.utils.bootstrap_image.extract_to_dir')
def test_import_image(self, extract_mock, tempfile_mock, shutil_mock,
chmod_mock):
arch_path = '/path'
extract_dir = '/tmp/test'
dir_path = '/test/image_3'
self.yaml_mock.return_value = {'uuid': dir_path}
self.assertEqual(bs_image.import_image('/path'), dir_path)
tempfile_mock.assert_called_once_with()
extract_mock.assert_called_once_with(arch_path, extract_dir)
shutil_mock.assert_called_once_with(extract_dir, dir_path)
chmod_mock.assert_has_calls([
mock.call(dir_path, 0o755),
mock.call(os.path.join(dir_path, 'directory'), 0o755),
mock.call(os.path.join(dir_path, 'file'), 0o755)])
@mock.patch('tarfile.open')
def test_extract_to_dir(self, tarfile_mock):
bs_image.extract_to_dir('arch_path', 'extract_path')
tarfile_mock.assert_called_once_with('arch_path', 'r')
tarfile_mock().extractall.assert_called_once_with('extract_path')
@mock.patch.object(cfg, 'CONF')
@mock.patch.object(fuel_agent.manager, 'Manager')
@mock.patch.object(data, 'BootstrapDataBuilder')
def test_make_bootstrap(self, bdb_mock, manager_mock, conf_mock):
data = {}
boot_data = {'bootstrap': {'uuid': 'image_1'},
'output': '/image/path'}
opts = ['--data_driver', 'bootstrap_build_image']
bdb_mock(data).build.return_value = boot_data
self.assertEqual(('image_1', '/image/path'),
bs_image.make_bootstrap(data))
conf_mock.assert_called_once_with(opts, project='fuel-agent')
manager_mock(boot_data).do_mkbootstrap.assert_called_once_with()
@mock.patch.object(cfg, 'CONF')
@mock.patch.object(fuel_agent.manager, 'Manager')
@mock.patch.object(data, 'BootstrapDataBuilder')
def test_make_bootstrap_image_build_dir(self,
bdb_mock,
manager_mock,
conf_mock):
data = {'image_build_dir': '/image/build_dir'}
boot_data = {'bootstrap': {'uuid': 'image_1'},
'output': '/image/path'}
opts = ['--data_driver', 'bootstrap_build_image',
'--image_build_dir', data['image_build_dir']]
bdb_mock(data).build.return_value = boot_data
self.assertEqual(('image_1', '/image/path'),
bs_image.make_bootstrap(data))
self.assertEqual(2, bdb_mock.call_count)
conf_mock.assert_called_once_with(opts, project='fuel-agent')
manager_mock(boot_data).do_mkbootstrap.assert_called_once_with()
def test_update_astute_yaml_key_error(self):
self.yaml_mock.return_value = {}
with self.assertRaises(KeyError):
bs_image._update_astute_yaml()
def test_update_astute_yaml_type_error(self):
self.yaml_mock.return_value = []
with self.assertRaises(TypeError):
bs_image._update_astute_yaml()
@mock.patch('fuel_agent.utils.utils.execute')
def test_run_puppet_no_manifest(self, execute_mock):
bs_image._run_puppet()
execute_mock.assert_called_once_with('puppet', 'apply',
'--detailed-exitcodes',
'-dv', None, logged=True,
check_exit_code=[0, 2],
attempts=2)
def test_activate_flavor_not_in_distros(self):
flavor = 'not_ubuntu'
error_msg = ('Wrong cobbler profile passed: {0} \n '
'possible profiles: \{1}'.
format(flavor, list(consts.DISTROS.keys())))
with self.assertRaisesRegexp(errors.WrongCobblerProfile, error_msg):
bs_image._activate_flavor(flavor)
@mock.patch.object(bs_image, '_update_astute_yaml')
@mock.patch.object(bs_image, '_run_puppet')
@mock.patch.object(fuel_agent.utils.utils, 'execute')
def test_activate_flavor(self,
execute_mock,
run_puppet_mock,
update_astute_yaml_mock):
flavor = 'ubuntu'
bs_image._activate_flavor(flavor)
update_astute_yaml_mock.assert_called_once_with(
consts.DISTROS[flavor]['astute_flavor'])
run_puppet_mock.assert_any_call(consts.COBBLER_MANIFEST)
run_puppet_mock.assert_any_call(consts.ASTUTE_MANIFEST)
self.assertEqual(2, run_puppet_mock.call_count)
execute_mock.assert_called_once_with('service', 'astute', 'restart')
@mock.patch('os.path.lexists', return_value=False)
@mock.patch('os.unlink')
@mock.patch('os.symlink')
def tests_make_symlink(self, symlink_mock, unlink_mock, lexist_mock):
dir_path = '/test/test_image_uuid'
symlink = '/test/active_bootstrap'
bs_image._make_symlink(symlink, dir_path)
lexist_mock.assert_called_once_with(symlink)
unlink_mock.assert_not_called()
symlink_mock.assert_called_once_with(dir_path, symlink)
@mock.patch('os.path.lexists', return_value=True)
@mock.patch('os.unlink')
@mock.patch('os.symlink')
def tests_make_deteted_symlink(self, symlink_mock, unlink_mock,
lexist_mock):
dir_path = '/test/test_image_uuid'
symlink = '/test/active_bootstrap'
bs_image._make_symlink(symlink, dir_path)
lexist_mock.assert_called_once_with(symlink)
unlink_mock.assert_called_once_with(symlink)
symlink_mock.assert_called_once_with(dir_path, symlink)
@mock.patch.object(bs_image, '_activate_flavor')
@mock.patch.object(notifier, 'notify_webui')
@mock.patch.object(bs_image, '_make_symlink')
def test_activate_image_symlink_deleted(self,
make_symlink_mock,
notify_mock,
activate_flavor_mock):
image_uuid = '/test/test_image_uuid'
symlink = '/test/active_bootstrap'
self.conf_mock.active_bootstrap_symlink = symlink
self.assertEqual(image_uuid, bs_image._activate_image(image_uuid))
make_symlink_mock.assert_called_once_with(symlink, image_uuid)
activate_flavor_mock.assert_called_once_with('ubuntu')
notify_mock.assert_called_once_with("")
@mock.patch.object(bs_image, 'parse')
@mock.patch.object(bs_image, '_activate_image')
def test_activate(self, activate_mock, parse_mock):
image_uuid = '/test/test_image_uuid'
activate_mock.return_value = image_uuid
self.assertEqual(image_uuid, bs_image.activate(image_uuid))
parse_mock.assert_called_once_with(image_uuid)
activate_mock.assert_called_once_with(image_uuid)

View File

@ -1,241 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import os
import mock
import six
import unittest
from fuel_bootstrap import consts
from fuel_bootstrap import errors
from fuel_bootstrap.utils import data as bs_data
DATA = {'ubuntu_release': 'trusty',
'repos': ['deb http://archive.ubuntu.com/ubuntu suite'],
'post_script_file': None,
'root_ssh_authorized_file': '/root/test',
'extra_dirs': ['/test_extra_dirs'],
'packages': [],
'label': None,
'no_default_extra_dirs': True,
'no_default_packages': True,
'extend_kopts': 'test_extend_kopts',
'kernel_flavor': 'test_kernel_flavor',
'output_dir': '/test_dir',
'certs': None,
'root_password': '1234567_abc'
}
BOOTSTRAP_MODULES = [
{'name': 'kernel',
'mask': 'kernel',
'uri': 'http://127.0.0.1:8080/bootstraps/123/vmlinuz'},
{'name': 'initrd',
'mask': 'initrd',
'compress_format': 'xz',
'uri': 'http://127.0.0.1:8080/bootstraps/123/initrd.img'},
{'name': 'rootfs',
'mask': 'rootfs',
'compress_format': 'xz',
'uri': 'http://127.0.0.1:8080/bootstraps/123/root.squashfs',
'format': 'ext4',
'container': 'raw'}
]
REPOS = [{'name': 'repo_0',
'type': 'deb',
'uri': 'http://archive.ubuntu.com/ubuntu',
'priority': None,
'suite': 'suite',
'section': ''}]
IMAGE_DATA = {'/': {'name': 'rootfs',
'mask': 'rootfs',
'compress_format': 'xz',
'uri': 'http://127.0.0.1:8080/bootstraps/123/'
'root.squashfs',
'format': 'ext4',
'container': 'raw'}}
UUID = six.text_type(123)
class DataBuilderTestCase(unittest.TestCase):
@mock.patch('uuid.uuid4', return_value=UUID)
def setUp(self, uuid):
super(DataBuilderTestCase, self).setUp()
self.bd_builder = bs_data.BootstrapDataBuilder(DATA)
def test_build(self):
proxy_settings = {}
file_name = "{0}.{1}".format(UUID, consts.COMPRESSED_CONTAINER_FORMAT)
packages = [DATA.get('kernel_flavor')]
bootstrap = {
'bootstrap': {
'modules': BOOTSTRAP_MODULES,
'extend_kopts': DATA.get('extend_kopts'),
'post_script_file': DATA.get('post_script_file'),
'uuid': UUID,
'extra_files': DATA.get('extra_dirs'),
'root_ssh_authorized_file':
DATA.get('root_ssh_authorized_file'),
'container': {
'meta_file': consts.METADATA_FILE,
'format': consts.COMPRESSED_CONTAINER_FORMAT
},
'label': UUID,
'certs': DATA.get('certs')
},
'repos': REPOS,
'proxies': proxy_settings,
'codename': DATA.get('ubuntu_release'),
'output': os.path.join(DATA.get('output_dir'), file_name),
'packages': packages,
'image_data': IMAGE_DATA,
'hashed_root_password': None,
'root_password': DATA.get('root_password')
}
data = self.bd_builder.build()
self.assertEqual(data, bootstrap)
def test_get_extra_dirs_no_default(self):
result = self.bd_builder._get_extra_dirs()
self.assertEqual(result, DATA.get('extra_dirs'))
@mock.patch.object(bs_data, 'CONF')
def test_get_extra_dirs(self, conf_mock):
self.bd_builder.no_default_extra_dirs = False
conf_mock.extra_dirs = ['/conf_test_extra_dirs']
result = self.bd_builder._get_extra_dirs()
six.assertCountEqual(self, result, DATA.get('extra_dirs') +
['/conf_test_extra_dirs'])
def test_prepare_modules(self):
result = self.bd_builder._prepare_modules()
self.assertEqual(result, BOOTSTRAP_MODULES)
def test_prepare_image_data(self):
result = self.bd_builder._prepare_image_data()
self.assertEqual(result, IMAGE_DATA)
def test_get_no_proxy_settings(self):
self.assertEqual(self.bd_builder._get_proxy_settings(), {})
@mock.patch.object(bs_data, 'CONF')
def test_get_proxy_settings(self, conf_mock):
conf_mock.direct_repo_addresses = None
self.bd_builder.http_proxy = '127.0.0.1'
self.bd_builder.https_proxy = '127.0.0.2'
self.bd_builder.direct_repo_addr = ['127.0.0.3']
proxy = {'protocols': {'http': self.bd_builder.http_proxy,
'https': self.bd_builder.https_proxy},
'direct_repo_addr_list': self.bd_builder.direct_repo_addr}
result = self.bd_builder._get_proxy_settings()
self.assertEqual(result, proxy)
def test_get_direct_repo_addr_no_default(self):
self.bd_builder.no_default_direct_repo_addr = True
self.bd_builder.direct_repo_addr = ['127.0.0.3']
result = self.bd_builder._get_direct_repo_addr()
self.assertEqual(result, self.bd_builder.direct_repo_addr)
@mock.patch.object(bs_data, 'CONF')
def test_get_direct_repo_addr_conf(self, conf_mock):
self.bd_builder.direct_repo_addr = ['127.0.0.3']
conf_mock.direct_repo_addresses = ['127.0.0.4']
result = self.bd_builder._get_direct_repo_addr()
six.assertCountEqual(self, result,
self.bd_builder.direct_repo_addr + ['127.0.0.4'])
@mock.patch.object(bs_data, 'CONF')
def test_get_direct_repo_addr(self, conf_mock):
conf_mock.direct_repo_addresses = None
self.bd_builder.direct_repo_addr = ['127.0.0.3']
result = self.bd_builder._get_direct_repo_addr()
self.assertEqual(result, self.bd_builder.direct_repo_addr)
@mock.patch.object(bs_data, 'CONF')
def test_get_repos_conf(self, conf_mock):
self.bd_builder.repos = []
conf_mock.repos = REPOS
self.assertEqual(self.bd_builder._get_repos(), conf_mock.repos)
@mock.patch.object(bs_data, 'CONF')
def test_get_repos(self, conf_mock):
conf_mock.repos = None
self.assertEqual(self.bd_builder._get_repos(), REPOS)
def test_get_packages_no_default(self):
packages = copy.copy(DATA.get('packages'))
packages.append(DATA.get('kernel_flavor'))
six.assertCountEqual(self, self.bd_builder._get_packages(), packages)
@mock.patch.object(bs_data, 'CONF')
def test_get_packages(self, conf_mock):
self.bd_builder.packages = ['test_package']
self.bd_builder.no_default_packages = False
conf_mock.packages = ['conf_package']
result_packages = (self.bd_builder.packages + conf_mock.packages
+ [DATA.get('kernel_flavor')])
six.assertCountEqual(self, self.bd_builder._get_packages(),
result_packages)
def parse_incorrect(self, repo):
name = 'repo_0'
error_msg = "Couldn't parse repository '{0}'".format(repo)
with self.assertRaises(errors.IncorrectRepository, msg=error_msg):
bs_data.BootstrapDataBuilder._parse_repo(repo, name)
def test_parse_incorrect_type(self):
repo = 'deb-false http://archive.ubuntu.com/ubuntu codename'
self.parse_incorrect(repo)
def test_parse_empty_uri(self):
repo = 'deb codename'
self.parse_incorrect(repo)
def test_parse_empty_suite(self):
repo = 'deb http://archive.ubuntu.com/ubuntu'
self.parse_incorrect(repo)
def parse_correct(self, repo, return_repo):
name = 'repo_0'
result = bs_data.BootstrapDataBuilder._parse_repo(repo, name)
self.assertEqual(result, return_repo)
def test_parse_correct_necessary(self):
repo = DATA.get('repos')[0]
self.parse_correct(repo, REPOS[0])
def test_parse_correct_section(self):
repo = 'deb http://archive.ubuntu.com/ubuntu suite section'
return_repo = copy.deepcopy(REPOS[0])
return_repo['section'] = 'section'
self.parse_correct(repo, return_repo)
def test_parse_correct_priority(self):
repo = 'deb http://archive.ubuntu.com/ubuntu suite ,1'
return_repo = copy.deepcopy(REPOS[0])
return_repo['priority'] = '1'
self.parse_correct(repo, return_repo)
def test_parse_correct_all(self):
repo = 'deb http://archive.ubuntu.com/ubuntu suite section,1'
return_repo = copy.deepcopy(REPOS[0])
return_repo['section'] = 'section'
return_repo['priority'] = '1'
self.parse_correct(repo, return_repo)

View File

@ -1,270 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import re
import shutil
import tarfile
import tempfile
import yaml
from fuel_agent import manager
from fuel_agent.utils import utils
from oslo_config import cfg
from fuel_bootstrap import consts
from fuel_bootstrap import errors
from fuel_bootstrap import settings
from fuel_bootstrap.utils import data as data_util
from fuel_bootstrap.utils import notifier
CONF = settings.CONF
LOG = logging.getLogger(__name__)
ACTIVE = 'active'
def get_all():
"""Return info about all valid bootstrap images
:return: list of dicts
"""
# TODO(asvechnikov): the way of determining the active bootstrap
# needs to be changed; the cobbler profile must be used
data = []
LOG.debug("Searching images in %s", CONF.bootstrap_images_dir)
for name in os.listdir(CONF.bootstrap_images_dir):
if not os.path.isdir(os.path.join(CONF.bootstrap_images_dir, name)):
continue
try:
data.append(parse(name))
except errors.IncorrectImage as e:
LOG.debug("Image [%s] is skipped due to %s", name, e)
return data
def _cobbler_profile():
"""Parse current active profile from cobbler system
:return: string
"""
stdout, _ = utils.execute('cobbler', 'system', 'report',
'--name', 'default')
regex = r"(?P<label>Profile)\s*:\s*(?P<profile>[^\s]+)"
return re.search(regex, stdout).group('profile')
def parse(image_uuid):
LOG.debug("Trying to parse [%s] image", image_uuid)
dir_path = full_path(image_uuid)
if os.path.islink(dir_path) or not os.path.isdir(dir_path):
raise errors.IncorrectImage("There are no such image [{0}]."
.format(image_uuid))
metafile = os.path.join(dir_path, consts.METADATA_FILE)
if not os.path.exists(metafile):
raise errors.IncorrectImage("Image [{0}] doesn't contain metadata "
"file.".format(image_uuid))
with open(metafile) as f:
try:
data = yaml.safe_load(f)
except yaml.YAMLError as e:
raise errors.IncorrectImage("Couldn't parse metadata file for"
" image [{0}] due to {1}"
.format(image_uuid, e))
if data.get('uuid') != os.path.basename(dir_path):
raise errors.IncorrectImage("UUID from metadata file [{0}] doesn't"
" equal directory name [{1}]"
.format(data.get('uuid'), image_uuid))
data['status'] = ACTIVE if is_active(data['uuid']) else ''
data.setdefault('label', '')
return data
def delete(image_uuid):
dir_path = full_path(image_uuid)
image = parse(image_uuid)
if image['status'] == ACTIVE:
raise errors.ActiveImageException("Image [{0}] is active and can't be"
" deleted.".format(image_uuid))
shutil.rmtree(dir_path)
return image_uuid
def is_active(image_uuid):
return full_path(image_uuid) == os.path.realpath(
CONF.active_bootstrap_symlink)
def full_path(image_uuid):
if not os.path.isabs(image_uuid):
return os.path.join(CONF.bootstrap_images_dir, image_uuid)
return image_uuid
def import_image(arch_path):
extract_dir = tempfile.mkdtemp()
extract_to_dir(arch_path, extract_dir)
metafile = os.path.join(extract_dir, consts.METADATA_FILE)
with open(metafile) as f:
try:
data = yaml.safe_load(f)
except yaml.YAMLError as e:
raise errors.IncorrectImage("Couldn't parse metadata file"
" due to {0}".format(e))
image_uuid = data['uuid']
dir_path = full_path(image_uuid)
if os.path.exists(dir_path):
raise errors.ImageAlreadyExists("Image [{0}] already exists."
.format(image_uuid))
shutil.move(extract_dir, dir_path)
os.chmod(dir_path, 0o755)
for root, dirs, files in os.walk(dir_path):
for d in dirs:
os.chmod(os.path.join(root, d), 0o755)
for f in files:
os.chmod(os.path.join(root, f), 0o755)
return image_uuid
def extract_to_dir(arch_path, extract_path):
LOG.info("Try extract %s to %s", arch_path, extract_path)
tarfile.open(arch_path, 'r').extractall(extract_path)
def make_bootstrap(data):
bootdata_builder = data_util.BootstrapDataBuilder(data)
bootdata = bootdata_builder.build()
LOG.info("Try to build image with data:\n%s", yaml.safe_dump(bootdata))
opts = ['--data_driver', 'bootstrap_build_image']
if data.get('image_build_dir'):
opts.extend(['--image_build_dir', data['image_build_dir']])
OSLO_CONF = cfg.CONF
OSLO_CONF(opts, project='fuel-agent')
mngr = manager.Manager(bootdata)
LOG.info("Build process is in progress. Usually it takes 15-20 minutes."
" It depends on your internet connection and hardware"
" performance.")
mngr.do_mkbootstrap()
return bootdata['bootstrap']['uuid'], bootdata['output']
def _update_astute_yaml(flavor=None):
config = consts.ASTUTE_CONFIG_FILE
LOG.debug("Switching in %s BOOTSTRAP/flavor to :%s",
config, flavor)
try:
with open(config, 'r') as f:
data = yaml.safe_load(f)
data['BOOTSTRAP']['flavor'] = flavor
with open(config, 'wt') as f:
yaml.safe_dump(data, stream=f, encoding='utf-8',
default_flow_style=False,
default_style='"')
except IOError:
LOG.error("Config file %s has not been processed successfully", config)
raise
except (KeyError, TypeError):
LOG.error("Seems config file %s is empty or doesn't contain BOOTSTRAP"
" section", config)
raise
def _run_puppet(manifest=None):
"""Run puppet apply
:param manifest:
:return:
"""
LOG.debug('Trying apply manifest: %s', manifest)
utils.execute('puppet', 'apply', '--detailed-exitcodes',
'-dv', manifest, logged=True,
check_exit_code=[0, 2], attempts=2)
def _activate_flavor(flavor=None):
"""Switch between cobbler distro profiles, in case dockerized system
Unfortunately, we don't support switching between profiles "on fly",
so to perform this we need:
1) Update asute.yaml - which used by puppet to determine options
2) Re-run puppet for cobbler(to perform default system update, regarding
new profile)
3) Re-run puppet for astute
:param flavor: Switch between cobbler profile
:return:
"""
flavor = flavor.lower()
if flavor not in consts.DISTROS:
raise errors.WrongCobblerProfile(
'Wrong cobbler profile passed: {0} \n '
'possible profiles: {1}'.format(flavor,
list(consts.DISTROS.keys())))
_update_astute_yaml(consts.DISTROS[flavor]['astute_flavor'])
_run_puppet(consts.COBBLER_MANIFEST)
_run_puppet(consts.ASTUTE_MANIFEST)
# restart astuted to be sure that it catches the new profile
LOG.debug('Reloading astuted')
utils.execute('service', 'astute', 'restart')
def _make_symlink(symlink, dir_path):
if os.path.lexists(symlink):
os.unlink(symlink)
LOG.debug("Symlink %s was deleted", symlink)
os.symlink(dir_path, symlink)
LOG.debug("Symlink %s to %s directory has been created", symlink, dir_path)
@notifier.notify_webui_on_fail
def _activate_image(image_uuid):
symlink = CONF.active_bootstrap_symlink
dir_path = full_path(image_uuid)
_make_symlink(symlink, dir_path)
# FIXME: Add pre-activate verify
flavor = 'ubuntu'
_activate_flavor(flavor)
notifier.notify_webui("")
return image_uuid
def activate(image_uuid):
# need to verify image_uuid
# TODO(asvechnikov): add check for already active image_uuid
# after cobbler will be used for is_active
parse(image_uuid)
return _activate_image(image_uuid)
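The symlink flip performed by `_make_symlink` above can be illustrated in isolation. This is a minimal, self-contained sketch that re-implements the retired helper's logic against a temporary directory (the helper and paths here are local stand-ins, not the retired package's API):

```python
import os
import shutil
import tempfile

def make_symlink(symlink, dir_path):
    # Mirrors _make_symlink: drop any existing link (even a dangling
    # one, hence lexists), then point the symlink at the new directory.
    if os.path.lexists(symlink):
        os.unlink(symlink)
    os.symlink(dir_path, symlink)

root = tempfile.mkdtemp()
image_1 = os.path.join(root, 'image_1')
image_2 = os.path.join(root, 'image_2')
os.mkdir(image_1)
os.mkdir(image_2)
link = os.path.join(root, 'active_bootstrap')

make_symlink(link, image_1)      # first activation: link is created
make_symlink(link, image_2)      # re-activation: old link is replaced
active = os.path.realpath(link)  # resolves to the image_2 directory
shutil.rmtree(root)
```

This is the same resolution `is_active` relies on: an image is active exactly when its full path equals `os.path.realpath(CONF.active_bootstrap_symlink)`.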

View File

@ -1,185 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import os
import re
import six
import uuid
from fuel_bootstrap import consts
from fuel_bootstrap import errors
from fuel_bootstrap import settings
CONF = settings.CONF
class BootstrapDataBuilder(object):
def __init__(self, data):
self.uuid = six.text_type(uuid.uuid4())
self.container_format = consts.COMPRESSED_CONTAINER_FORMAT
if data.get('no_compress'):
self.container_format = consts.UNCOMPRESSED_CONTAINER_FORMAT
self.ubuntu_release = data.get('ubuntu_release') or CONF.ubuntu_release
if not self.ubuntu_release:
raise errors.WrongUbuntuRelease(
"'ubuntu_release' value has not been passed!")
self.repos = data.get('repos') or []
self.http_proxy = data.get('http_proxy') or CONF.http_proxy
self.https_proxy = data.get('https_proxy') or CONF.https_proxy
self.direct_repo_addr = data.get('direct_repo_addr') or []
self.no_default_direct_repo_addr = data.get(
'no_default_direct_repo_addr')
self.post_script_file = \
data.get('post_script_file') or \
CONF.post_script_file
self.root_ssh_authorized_file = \
data.get('root_ssh_authorized_file') or \
CONF.root_ssh_authorized_file
self.extra_dirs = data.get('extra_dirs') or []
self.no_default_extra_dirs = data.get('no_default_extra_dirs')
self.packages = data.get('packages') or []
self.no_default_packages = data.get('no_default_packages')
self.label = data.get('label') or self.uuid
self.extend_kopts = data.get('extend_kopts') or CONF.extend_kopts
self.kernel_flavor = data.get('kernel_flavor') or CONF.kernel_flavor
self.output = data.get('output_dir') or CONF.output_dir
if not data.get('no_compress'):
file_name = "{0}.{1}".format(self.uuid, self.container_format)
self.output = os.path.join(self.output, file_name)
self.certs = data.get('certs')
self.root_password = data.get('root_password')
self.hashed_root_password = None
if self.root_password is None:
self.hashed_root_password = CONF.hashed_root_password
def build(self):
repos = self._get_repos()
return {
'bootstrap': {
'modules': self._prepare_modules(),
'extend_kopts': self.extend_kopts,
'post_script_file': self.post_script_file,
'uuid': self.uuid,
'extra_files': self._get_extra_dirs(),
'root_ssh_authorized_file': self.root_ssh_authorized_file,
'container': {
'meta_file': consts.METADATA_FILE,
'format': self.container_format
},
'label': self.label,
'certs': self.certs
},
'repos': repos,
'proxies': self._get_proxy_settings(),
'codename': self.ubuntu_release,
'output': self.output,
'packages': self._get_packages(),
'image_data': self._prepare_image_data(),
'hashed_root_password': self.hashed_root_password,
'root_password': self.root_password,
}
def _get_extra_dirs(self):
if self.no_default_extra_dirs:
return self.extra_dirs
dirs = set(self.extra_dirs)
if CONF.extra_dirs:
dirs |= set(CONF.extra_dirs)
return list(dirs)
def _prepare_modules(self):
modules = copy.deepcopy(consts.BOOTSTRAP_MODULES)
for module in modules:
module['uri'] = module['uri'].format(uuid=self.uuid)
return modules
def _prepare_image_data(self):
image_data = copy.deepcopy(consts.IMAGE_DATA)
image_data['/']['uri'] = image_data['/']['uri'].format(uuid=self.uuid)
return image_data
def _get_proxy_settings(self):
if self.http_proxy or self.https_proxy:
return {'protocols': {'http': self.http_proxy,
'https': self.https_proxy},
'direct_repo_addr_list': self._get_direct_repo_addr()}
return {}
def _get_direct_repo_addr(self):
if self.no_default_direct_repo_addr:
return self.direct_repo_addr
addrs = set(self.direct_repo_addr)
if CONF.direct_repo_addresses:
addrs |= set(CONF.direct_repo_addresses)
return list(addrs)
def _get_repos(self):
repos = []
for idx, repo in enumerate(self.repos):
repos.append(self._parse_repo(
repo,
name="repo_{0}".format(idx)))
if not self.repos and CONF.repos:
repos.extend(CONF.repos)
return repos
def _get_packages(self):
result = set(self.packages)
result.add(self.kernel_flavor)
if not self.no_default_packages and CONF.packages:
result |= set(CONF.packages)
return list(result)
@classmethod
def _parse_repo(cls, repo, name=None):
regexp = (r"(?P<type>deb(-src)?) (?P<uri>[^\s]+) (?P<suite>[^\s]+)( "
r"(?P<section>[\w\s]*))?(,(?P<priority>[\d]+))?")
match = re.match(regexp, repo)
if not match:
raise errors.IncorrectRepository("Couldn't parse repository '{0}'"
.format(repo))
repo_type = match.group('type')
repo_suite = match.group('suite')
repo_section = match.group('section')
repo_uri = match.group('uri')
repo_priority = match.group('priority')
return {'name': name,
'type': repo_type,
'uri': repo_uri,
'priority': repo_priority,
'suite': repo_suite,
'section': repo_section or ''}
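The repository-string format accepted by `_parse_repo` can be exercised standalone. This sketch copies the regular expression from the class above into a small local function (the retired package itself is not installable, so the pattern is re-declared here for illustration):

```python
import re

# Same pattern as BootstrapDataBuilder._parse_repo, copied verbatim.
REPO_REGEXP = (r"(?P<type>deb(-src)?) (?P<uri>[^\s]+) (?P<suite>[^\s]+)( "
               r"(?P<section>[\w\s]*))?(,(?P<priority>[\d]+))?")

def parse_repo(repo):
    # Returns the same dict shape as _parse_repo, minus the 'name' key.
    match = re.match(REPO_REGEXP, repo)
    if not match:
        raise ValueError("Couldn't parse repository '{0}'".format(repo))
    return {'type': match.group('type'),
            'uri': match.group('uri'),
            'suite': match.group('suite'),
            'section': match.group('section') or '',
            'priority': match.group('priority')}

# Full form: "<type> <uri> <suite> <section>,<priority>"
parsed = parse_repo('deb http://archive.ubuntu.com/ubuntu suite section,1')
```

The optional trailing groups are why `section` falls back to `''` and `priority` stays `None` when only `deb <uri> <suite>` is given, matching the REPOS fixture used by the tests.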

View File

@ -1,45 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from fuel_bootstrap.objects import master_node_settings
LOG = logging.getLogger(__name__)
def notify_webui_on_fail(function):
def wrapper(*args, **kwargs):
try:
return function(*args, **kwargs)
except Exception:
notify_webui("Last bootstrap image activation was failed."
" It's possible that nodes will not discovered"
" after reboot.")
raise
return wrapper
def notify_webui(error_message):
try:
mn_settings = master_node_settings.MasterNodeSettings()
settings = mn_settings.get()
settings['settings'].setdefault('bootstrap', {}).setdefault(
'error', {})['value'] = error_message
mn_settings.update(settings)
except Exception as e:
LOG.warning("Can't send notification '%s' to WebUI due to %s",
error_message, e)
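The `notify_webui_on_fail` decorator pattern above can be sketched with a stub notifier. The names and message below are illustrative stand-ins, not the retired module's real WebUI client:

```python
notifications = []

def notify_webui(message):
    # Stand-in for the real WebUI notifier.
    notifications.append(message)

def notify_webui_on_fail(function):
    # Same shape as the retired decorator: notify on any failure,
    # then re-raise so the caller still sees the original exception.
    def wrapper(*args, **kwargs):
        try:
            return function(*args, **kwargs)
        except Exception:
            notify_webui("Last bootstrap image activation failed.")
            raise
    return wrapper

@notify_webui_on_fail
def activate_image(uuid):
    raise RuntimeError("activation broke")

try:
    activate_image('image_1')
except RuntimeError:
    pass
# notifications == ["Last bootstrap image activation failed."]
```

Note the bare `raise`: the decorator only adds a side effect, it never swallows the exception, which is why `_activate_image` failures still propagate to the CLI.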

View File

@ -1,10 +0,0 @@
PyYAML>=3.1.0
stevedore
pbr>=0.6
six>=1.7.0
cliff>=1.7.0
python-fuelclient
## fuel-bootstrap requires fuel-agent as well, but fuel-agent
## is not installable via pip
# fuel-agent

View File

@ -1,39 +0,0 @@
[metadata]
name = fuel-bootstrap
version = 10.0.0
summary = Command line Fuel bootstrap manager
author = Mirantis Inc.
author-email = product@mirantis.com
home-page = http://mirantis.com
description-file =
    README.rst
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.6
    Programming Language :: Python :: 2.7

[files]
packages =
    fuel_bootstrap

[global]
setup-hooks =
    pbr.hooks.setup_hook
    fuel_bootstrap.hooks.setup_hook

[entry_points]
console_scripts =
    fuel-bootstrap=fuel_bootstrap.main:main
fuel_bootstrap =
    build=fuel_bootstrap.commands.build:BuildCommand
    list=fuel_bootstrap.commands.list:ListCommand
    import=fuel_bootstrap.commands.import:ImportCommand
    activate=fuel_bootstrap.commands.activate:ActivateCommand
    delete=fuel_bootstrap.commands.delete:DeleteCommand


@@ -1,29 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# Solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)


@@ -1,8 +0,0 @@
[DEFAULT]
use_stderr=false
logging_debug_format_suffix=
log_file=/var/log/fuel-agent.log
use_syslog=true
use_syslog_rfc_format=true
prepare_configdrive=false
fix_udev_net_rules=false


@@ -1,2 +0,0 @@
bootstrap-ironic


@@ -1,29 +0,0 @@
#################
#### MODULES ####
#################
$ModLoad imuxsock # provides support for local system logging
$ModLoad imklog # provides kernel logging support (previously done by rklogd)
#$ModLoad immark # provides --MARK-- message capability
###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup syslog
$FileCreateMode 0640
$DirCreateMode 0755
$umask 0000
$PrivDropToUser syslog
$PrivDropToGroup syslog
$MaxMessageSize 32k
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf


@@ -1,36 +0,0 @@
# file is managed by puppet
#
# Log to remote syslog server
# Templates
# RFC3164 emulation with long tags (32+)
$Template RemoteLog, "<%pri%>%timestamp% ironic/@DEPLOYMENT_ID@/%syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n"
# RFC5424 emulation would be: "<%pri%>1 %timestamp:::date-rfc3339% %hostname% %syslogtag% %procid% %msgid% %structured-data% %msg%\n"
# Note: don't use %app-name% because it would be empty in some cases
$ActionFileDefaultTemplate RemoteLog
$WorkDirectory /var/spool/rsyslog/
#Start remote server 0
$ActionQueueType LinkedList # use asynchronous processing
$ActionQueueFileName remote0 # set file name, also enables disk mode
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionQueueLowWaterMark 2000
$ActionQueueHighWaterMark 8000
$ActionQueueSize 1000000 # Reserve 500Mb memory, each queue element is 512b
$ActionQueueDiscardMark 950000 # If the queue looks like filling, start discarding to not block ssh/login/etc.
$ActionQueueDiscardSeverity 0 # When in discarding mode discard everything.
$ActionQueueTimeoutEnqueue 0 # When in discarding mode do not enable throttling.
$ActionQueueDequeueSlowdown 1000
$ActionQueueWorkerThreads 2
$ActionQueueDequeueBatchSize 128
$ActionResumeRetryCount -1
# Isolate sudo logs locally
# match if "program name" is equal to "sudo"
:programname, isequal, "sudo" -/var/log/sudo.log
&~
# Send messages we receive to master node via tcp
# Use an octet-counted framing (understood for rsyslog only) to ensure correct multiline messages delivery
*.* @(o)@SYSLOG_SERVER_IP@:514;RemoteLog
#End remote server 0


@@ -1,20 +0,0 @@
Protocol 2
SyslogFacility AUTHPRIV
PasswordAuthentication no
PubkeyAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication no
UsePAM no
UseDNS no
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
Subsystem sftp /usr/lib/openssh/sftp-server
# Secure Ciphers and MACs
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,hmac-sha1

Some files were not shown because too many files have changed in this diff.