Retire repository

Fuel (from openstack namespace) and fuel-ccp (in x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: I05e9dc69ed58c70c50d5c6d065ba60b244c5c9d2
Andreas Jaeger 2019-12-18 09:47:46 +01:00
parent a6d5c7c701
commit 6c3c29767d
256 changed files with 8 additions and 71939 deletions

View File

@ -1,6 +0,0 @@
[run]
branch = True
source = octane
[report]
ignore_errors = True

34
.gitignore vendored
View File

@ -1,34 +0,0 @@
*.py[cod]
# Packages
*.egg
*.egg-info
dist
build
.eggs
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib64
# Installer logs
pip-log.txt
# Runtime logs
octane.log
# Unit test / coverage reports
.coverage
.tox
.venv
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog

View File

@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

View File

@ -1 +0,0 @@
deployment/puppet/octane_tasks/Gemfile

View File

@ -1 +0,0 @@
deployment/puppet/octane_tasks/Gemfile.lock

View File

@ -1,4 +0,0 @@
Octane Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

176
LICENSE
View File

@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

View File

@ -1,80 +0,0 @@
description:
For Fuel team structure and contribution policy, see [1].
This is the repository-level MAINTAINERS file. All contributions to this
repository must be approved by one or more Core Reviewers [2].
If you are contributing to files (or creating new directories) in the
root folder of this repository, please contact Core Reviewers for
review and merge requests.
If you are contributing to subfolders of this repository, please
check 'maintainers' section of this file in order to find maintainers
for those specific modules.
It is mandatory to get +1 from one or more maintainers before asking Core
Reviewers for review/merge in order to decrease the load on Core Reviewers [3].
Exceptions are when maintainers are actually cores, or when maintainers
are not available for some reason (e.g. on vacation).
[1] https://specs.openstack.org/openstack/fuel-specs/policy/team-structure
[2] https://review.openstack.org/#/admin/groups/1020,members
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
Please keep this file in YAML format in order to allow helper scripts
to read it as configuration data.
maintainers:
- octane/:
- name: Oleg Gelbukh
email: ogelbukh@mirantis.com
IRC: ogelbukh
- name: Ilya Kharin
email: ikharin@mirantis.com
IRC: akscram
- name: Sergey Abramov
email: sabramov@mirantis.com
IRC: pod2metra
- name: Viacheslav Valyavskiy
email: vvalyavskiy@mirantis.com
IRC: vvalyavskiy
- name: Alexey Stepanov
email: astepanov@mirantis.com
IRC: penguionolog
- name: Sergey Novikov
email: snovikov@mirantis.com
IRC: snovikov
- name: Vladimir Khlyunev
email: vkhlyunev@mirantis.com
IRC: vkhlyunev
- specs/: &MOS_packaging_team
- name: Mikhail Ivanov
email: mivanov@mirantis.com
IRC: mivanov
- name: Artem Silenkov
email: asilenkov@mirantis.com
IRC: asilenkov
- name: Alexander Tsamutali
email: atsamutali@mirantis.com
IRC: astsmtl
- name: Daniil Trishkin
email: dtrishkin@mirantis.com
IRC: dtrishkin
- name: Ivan Udovichenko
email: iudovichenko@mirantis.com
IRC: tlbr
- name: Igor Yozhikov
email: iyozhikov@mirantis.com
IRC: IgorYozhikov
- deployment/:
- name: Roman Sokolkov
email: rsokolkov@mirantis.com
- name: Pavel Chechetin
email: pchechetin@mirantis.com

View File

@ -1,7 +0,0 @@
include AUTHORS
include ChangeLog
include octane/patches/*
include octane/bin/*
exclude .gitignore
global-exclude *.pyc

View File

@ -1,221 +1,10 @@
========================
Team and repository tags
========================
This project is no longer maintained.
.. image:: http://governance.openstack.org/badges/fuel-octane.svg
:target: http://governance.openstack.org/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
.. Change things from this point on
===============================
octane
===============================
Octane - upgrade your Fuel.
Tool set to back up, restore, and upgrade the Fuel installer and the OpenStack
environments that it manages. This version of the toolset supports
upgrades from versions 7.0 and 8.0 to version 9.1.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/octane
* Source: http://git.openstack.org/cgit/stackforge/octane
* Bugs: http://bugs.launchpad.net/octane
Features
--------
* Backup the Fuel Master node configuration, OpenStack release bundles,
metadata of environments and target nodes
* Restore metadata of the Fuel Master node, environments and target nodes
from previous backup
* Upgrade OpenStack environment after upgrade of the Fuel Master node
that manages it
Installation
------------
Fuel Octane is installed on the Fuel Master node. The version of the
``fuel-octane`` package must match the version of Fuel.
To download the latest version of the ``fuel-octane`` package on the Fuel
Master node, use the following command:
::
yum install fuel-octane
Usage
-----
Backup Fuel configuration
=========================
Use this command to back up the configuration of the Fuel Master node,
environments and target nodes:
::
octane fuel-backup --to=/path/to/backup.file.tar.gz
Backup Fuel repos and images
============================
Use this command to back up packages and images for all supported OpenStack
release bundles from the Fuel Master node:
::
octane fuel-repo-backup --full --to=/path/to/repo-backup.file.tar.gz
Restore Fuel configuration
==========================
Use this command to restore the configuration of the Fuel Master node, environments
and target nodes:
::
octane fuel-restore --from=/path/to/backup.file.tar.gz --admin-password=<password>
Replace ``<password>`` with the appropriate password for the ``admin`` user in
your installation of Fuel.
Restore Fuel repos and images
=============================
Use this command to restore package repositories and images for OpenStack
release bundles from the backup file:
::
octane fuel-repo-restore --from=/path/to/repo-backup.file.tar.gz
Upgrade Fuel Master node
========================
Upgrading the Fuel Master node requires both backups described above: the
configuration backup and the repos-and-images backup from the older Fuel.
Copy those files to a secure location. After you create the two backup files,
install a new (9.1) version of Fuel on the same physical node or on a new one.
.. note::
Please note that you must specify the same IP address for the new
installation of the Fuel Master node as for the old one. Otherwise,
target nodes won't be able to communicate with the new Fuel Master
node.
Copy the backup files from the secure location to the new node. Use ``octane``
to restore the Fuel configuration and packages from the backup files. The
database schema will be upgraded according to the migration scripts. See the
detailed commands above. The new Fuel Master node must now have all
configuration data from the old Fuel Master node.
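Put together, the flow looks roughly like this (the first two commands run on
the old Fuel Master node, the last two on the new one; paths and the password
are placeholders for the values used in the commands above):
::
octane fuel-backup --to=/path/to/backup.file.tar.gz
octane fuel-repo-backup --full --to=/path/to/repo-backup.file.tar.gz
# reinstall Fuel 9.1 with the same IP address, copy both backup files over, then:
octane fuel-restore --from=/path/to/backup.file.tar.gz --admin-password=<password>
octane fuel-repo-restore --from=/path/to/repo-backup.file.tar.gz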
Upgrade OpenStack cluster
=========================
Install 9.0 Seed environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Pick an environment of version <9.0 that you want to upgrade. Run the following
command and remember the ID of the environment you picked:
::
export ORIG_ID=<ID>
Run the command to create the Upgrade Seed environment:
::
octane upgrade-env $ORIG_ID
Remember the ID of the environment that will be shown:
::
export SEED_ID=<ID>
Upgrade controller #1
^^^^^^^^^^^^^^^^^^^^^
Pick the controller with the minimal ID:
::
export NODE_ID=<ID>
Run the following command to upgrade it:
::
octane upgrade-node --isolated $SEED_ID $NODE_ID
Upgrade DB
^^^^^^^^^^
Run the following command to upgrade the state database of the OpenStack
environment being upgraded:
::
octane upgrade-db $ORIG_ID $SEED_ID
Upgrade Ceph (OPTIONAL)
^^^^^^^^^^^^^^^^^^^^^^^
Run the following command to upgrade the Ceph cluster:
::
octane upgrade-ceph $ORIG_ID $SEED_ID
Cutover to the updated control plane
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following command redirects all nodes in the OpenStack cluster to talk to
the new, upgraded OpenStack controllers:
::
octane upgrade-control $ORIG_ID $SEED_ID
Upgrade controller #2 and #3
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Run the following command to upgrade remaining controllers to version 9.1:
::
octane upgrade-node $SEED_ID $NODE_ID_2 $NODE_ID_3
Upgrade computes
^^^^^^^^^^^^^^^^
Pick the compute node(s) to upgrade and remember their IDs.
::
export NODE_ID_1=<ID1>
...
Run the command to upgrade the compute node(s) without evacuating virtual
machines:
::
octane upgrade-node --no-live-migration $SEED_ID $NODE_ID_1 ...
Run the command to upgrade the compute node(s), evacuating virtual
machines to other compute nodes in the environment via live migration:
::
octane upgrade-node $SEED_ID $NODE_ID_1 ...
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

View File

@ -1 +0,0 @@
deployment/puppet/octane_tasks/Rakefile

View File

@ -1,6 +0,0 @@
# Due to the problem with the new version of cryptography==1.4 we have
# to add these binary dependencies.
libffi-dev [platform:dpkg]
libssl-dev [platform:dpkg]
libffi-devel [platform:rpm]
openssl-devel [platform:rpm]

View File

@ -1,29 +0,0 @@
<domain type='kvm'>
<name>fuel</name>
<memory>4194304</memory>
<vcpu>2</vcpu>
<os>
<type arch='x86_64'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<features><acpi/><apic/><pae/></features>
<on_reboot>destroy</on_reboot>
<devices>
<disk type='volume'>
<source pool='vms' volume='fuel'/>
<target dev='hda'/>
</disk>
<disk type='file' device='cdrom'>
<source file='%ISO%'/>
<target dev='hdb'/>
<address type='drive' bus='1'/>
</disk>
<interface type='network'>
<source network='admin'/>
<model type='e1000'/>
</interface>
<graphics type='vnc' listen='0.0.0.0' autoport='yes'/>
<memballoon model='virtio'/>
</devices>
</domain>

View File

@ -1,98 +0,0 @@
#!/usr/bin/python3
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Taken from https://github.com/nileshgr/utilities/blob/master/admin/libvirt-qemu-hook.py
'''
This script was written for Python 3.
I do not know if it will work on Python 2.
'''
'''
LibVirt hook for setting up port forwards when using
NATed networking.
Setup port spec below in the mapping dict.
Copy file to /etc/libvirt/hooks/<your favorite name>
chmod +x /etc/libvirt/hooks/<your favorite name>
restart libvirt
And it should work
'''
import os
import re
import subprocess
import sys
iptables='/sbin/iptables'
def get_ip(interface):
res = subprocess.check_output(['ip', 'addr', 'show', 'dev', interface])
m = re.search(b'inet ([0-9.]+)', res)
if not m:
raise RuntimeError("Address not found")
return m.group(1).decode()
mapping = {
'fuel': {
'ip': '10.20.0.2',
'publicip': get_ip('p1p1'),
'portmap': {
'tcp': [(2222, 22), (8000, 8000)],
}
},
}
def rules(act, map_dict):
if map_dict['portmap'] == 'all':
cmd = '{} -t nat {} PREROUTING -d {} -j DNAT --to {}'.format(iptables, act, map_dict['publicip'], map_dict['ip'])
os.system(cmd)
cmd = '{} -t nat {} POSTROUTING -s {} -j SNAT --to {}'.format(iptables, act, map_dict['ip'], map_dict['publicip'])
os.system(cmd)
cmd = '{} -t filter {} FORWARD -d {} -j ACCEPT'.format(iptables, act, map_dict['ip'])
os.system(cmd)
cmd = '{} -t filter {} FORWARD -s {} -j ACCEPT'.format(iptables, act, map_dict['ip'])
os.system(cmd)
else:
cmd = '{} -t filter {} FORWARD -d {} -p icmp -j ACCEPT'.format(iptables, act, map_dict['ip'])
os.system(cmd)
cmd = '{} -t filter {} FORWARD -s {} -p icmp -j ACCEPT'.format(iptables, act, map_dict['ip'])
os.system(cmd)
for proto in map_dict['portmap']:
for portmap in map_dict['portmap'].get(proto):
cmd = '{} -t nat {} PREROUTING -d {} -p {} --dport {} -j DNAT --to {}:{}'.format(iptables, act, map_dict['publicip'], proto, str(portmap[0]), map_dict['ip'], str(portmap[1]))
os.system(cmd)
cmd = '{} -t filter {} FORWARD -d {} -p {} --dport {} -j ACCEPT'.format(iptables, act, map_dict['ip'], proto, str(portmap[1]))
os.system(cmd)
cmd = '{} -t filter {} FORWARD -s {} -p {} --sport {} -j ACCEPT'.format(iptables, act, map_dict['ip'], proto, str(portmap[1]))
os.system(cmd)
if __name__ == '__main__':
domain=sys.argv[1]
action=sys.argv[2]
host=mapping.get(domain)
if host is None:
sys.exit(0)
if action == 'stopped' or action == 'reconnect':
rules('-D', host)
if action == 'start' or action == 'reconnect':
rules('-I', host)

View File

@ -1,49 +0,0 @@
--- tests/storagepoolxml2xmltest.c
+++ tests/storagepoolxml2xmltest.c.new
@@ -106,8 +106,8 @@
DO_TEST("pool-gluster-sub");
DO_TEST("pool-scsi-type-scsi-host-stable");
#ifdef WITH_STORAGE_ZFS
- DO_TEST("pool-zfs");
- DO_TEST("pool-zfs-sourcedev");
+// DO_TEST("pool-zfs");
+// DO_TEST("pool-zfs-sourcedev");
#endif
return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
--- debian/rules
+++ debian/rules.new
@@ -76,6 +76,7 @@
$(WITH_POLKIT) \
$(WITH_UDEV) \
--with-storage-fs \
+ --with-storage-zfs \
$(WITH_STORAGE_LVM) \
$(WITH_STORAGE_ISCSI) \
$(WITH_STORAGE_DISK) \
--- configure.ac
+++ configure.ac.new
@@ -1977,9 +1977,9 @@
with_storage_zfs=$with_freebsd
fi
-if test "$with_storage_zfs" = "yes" && test "$with_freebsd" = "no"; then
- AC_MSG_ERROR([The ZFS storage driver can be enabled on FreeBSD only.])
-fi
+#if test "$with_storage_zfs" = "yes" && test "$with_freebsd" = "no"; then
+ #AC_MSG_ERROR([The ZFS storage driver can be enabled on FreeBSD only.])
+#fi
if test "$with_storage_zfs" = "yes" ||
test "$with_storage_zfs" = "check"; then
--- src/storage/storage_backend_zfs.c
+++ src/storage/storage_backend_zfs.c.new
@@ -282,7 +282,7 @@
* will lookup vfs.zfs.vol.mode sysctl value
* -V -- tells to create a volume with the specified size
*/
- cmd = virCommandNewArgList(ZFS, "create", "-o", "volmode=dev",
+ cmd = virCommandNewArgList(ZFS, "create",// "-o", "volmode=dev",
"-V", NULL);
virCommandAddArgFormat(cmd, "%lluK",
VIR_DIV_UP(vol->target.capacity, 1024));

View File

@ -1 +0,0 @@
killall.sh; netcfg

View File

@ -1,20 +0,0 @@
<domain type='kvm'>
<name>%NAME%</name>
<memory unit='GiB'>%MEMORY%</memory>
<vcpu>%CPU%</vcpu>
<os>
<type arch='x86_64'>hvm</type>
</os>
<features><acpi/><apic/><pae/></features>
<devices>
<disk type='volume'><source pool='vms' volume='%NAME%'/><target dev='hda'/></disk>
<disk type='volume'><source pool='vms' volume='%NAME%-ceph'/><target dev='hdb'/><address type='drive' bus='1'/></disk>
<interface type='network'><source network='admin'/><model type='e1000'/><boot order='1'/></interface>
<interface type='network'><source network='management'/><model type='e1000'/></interface>
<interface type='network'><source network='private'/><model type='e1000'/></interface>
<interface type='network'><source network='public'/><model type='e1000'/></interface>
<interface type='network'><source network='storage'/><model type='e1000'/></interface>
<graphics type='vnc' listen='0.0.0.0' autoport='yes'/>
<memballoon model='virtio'/>
</devices>
</domain>

View File

@ -1,96 +0,0 @@
# To use this, change values marked with ### below and run HTTP server locally:
# python2 -m SimpleHTTPServer
# (note: you need to run it in this dir "deploy", it'll share current dir)
# Then load installer in some way (e.g. by throwing mini.iso to your iKVM) and
# add these kernel arguments:
# auto url=http://172.18.67.44:8000/preseed.cfg
# Now press enter 5 times (no way to preseed anything before network conf),
# select p1p1, enter, enter...
# Then you'll see invitation to SSH. You can login using provided ssh_key:
# ssh -i ssh_key installer@THATHOST
# (if Git failed you, do 'chmod go-rw ssh_key' before to fix permissions)
# Language, keymap, clock
d-i debian-installer/locale string en_US.UTF-8
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Prague
d-i console-setup/ask_detect boolean false
d-i console-tools/archs select at
d-i console-keymaps-at/keymap select us
# Network config
d-i netcfg/choose_interface select p1p1
d-i netcfg/disable_dhcp boolean true
d-i netcfg/get_nameservers string 172.18.80.136
### CHANGE THIS IP
d-i netcfg/get_ipaddress string 172.18.167.143
d-i netcfg/get_netmask string 255.255.255.224
d-i netcfg/get_gateway string 172.18.167.129
d-i netcfg/confirm_static boolean true
d-i preseed/run string net_reconfigure.sh
### CHANGE THIS HOSTNAME
d-i netcfg/get_hostname string cz5540
d-i netcfg/get_domain string nodomain
# Local mirror
d-i mirror/country string manual
d-i mirror/http/hostname string caches.bud.mirantis.net
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string
d-i apt-setup/security_host string caches.bud.mirantis.net
# Partitions (TODO)
d-i partman-basicfilesystems/no_swap boolean false
# sda can be some weird thing, so we find first not-so-big device
d-i partman/early_command string debconf-set partman-auto/disk $( \
ls -d /dev/sd? /sys/block/sd? | logger ;\
for b in /sys/block/sd?; do \
size=$(cat $b/size) ;\
echo $b $size | logger ;\
if [ $size -gt 0 -a $size -lt 1000000000 ]; then \
echo /dev/${b##/sys/block/} | logger ;\
echo /dev/${b##/sys/block/} ;\
break ;\
fi ;\
done \
)
d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select myrecipe
d-i partman-auto/expert_recipe string myrecipe : \
512 1 512 ext2 $primary{ } $bootable{ } method{ format } format{ } use_filesystem{ } filesystem{ ext2 } mountpoint{ /boot } . \
10240 10000 2000000000 xfs $primary{ } method{ format } format{ } use_filesystem{ } filesystem{ xfs } mountpoint{ / } .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean false
# Accounts
d-i passwd/root-login boolean false
d-i passwd/user-fullname string Dumb Ubuntu User
d-i passwd/username string ubuntu
d-i passwd/user-password password ubuntuMira1
d-i passwd/user-password-again password ubuntuMira1
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false
# ZFS PPA
d-i apt-setup/local0/comment string ZFSonLinux PPA
d-i apt-setup/local0/repository string http://ppa.launchpad.net/zfs-native/stable/ubuntu vivid main
d-i apt-setup/local0/source boolean true
d-i apt-setup/local0/key string http://keyserver.ubuntu.com:11371/pks/lookup?op=get&search=0x1196BA81F6B0FC61
# Packages
tasksel tasksel/first multiselect standard
d-i pkgsel/include string openssh-server ubuntu-zfs libvirt-bin qemu-kvm vim git zsh mosh tmux
d-i pkgsel/update-policy select No automatic updates
# GRUB
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
# Postinstall
d-i preseed/late_command string \
in-target apt-get remove -y nano; \
in-target chsh -s /bin/zsh ubuntu; \
echo 'PS1="%B%F{green}%n@%m%k %B%F{blue}%1~ %# %b%f%k"' > /target/home/ubuntu/.zshrc; \
in-target chown ubuntu:ubuntu /home/ubuntu/.zshrc; \

View File

@ -1,113 +0,0 @@
#!/bin/bash -ex
# Config-ish
MAGNET_511_ISO='magnet:?xt=urn:btih:63907abc2acf276d595cd12f9723088fd66cbe24&dn=MirantisOpenStack-5.1.1.iso&tr=http%3A%2F%2Ftracker01-bud.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-msk.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-mnv.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Fseed-qa.msk.mirantis.net%3A8080%2Fannounce&ws=http%3A%2F%2Ffuel-storage.srt.mirantis.net%2Ffuelweb%2FMirantisOpenStack-5.1.1.iso'
MAGNET_60_LRZ='magnet:?xt=urn:btih:d8bda80a9079e1fc0c598bc71ed64376103f2c4f&dn=MirantisOpenStack-6.0-upgrade.tar.lrz&tr=http%3A%2F%2Ftracker01-bud.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-msk.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-mnv.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Fseed-qa.msk.mirantis.net%3A8080%2Fannounce&ws=http%3A%2F%2Ffuel-storage.srt.mirantis.net%2Ffuelweb%2FMirantisOpenStack-6.0-upgrade.tar.lrz'
MAGNET_61_LRZ='magnet:?xt=urn:btih:ee1222ff4b8633229f49daa6e6e62d02ef77b606&dn=MirantisOpenStack-6.1-upgrade.tar.lrz&tr=http%3A%2F%2Ftracker01-bud.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-mnv.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-msk.infra.mirantis.net%3A8080%2Fannounce&ws=http%3A%2F%2Fvault.infra.mirantis.net%2FMirantisOpenStack-6.1-upgrade.tar.lrz'
MAGNET_61_ISO='magnet:?xt=urn:btih:9d59953417e0c2608f8fa0ffe43ceac00967708f&dn=MirantisOpenStack-6.1.iso&tr=http%3A%2F%2Ftracker01-bud.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-mnv.infra.mirantis.net%3A8080%2Fannounce&tr=http%3A%2F%2Ftracker01-msk.infra.mirantis.net%3A8080%2Fannounce&ws=http%3A%2F%2Fvault.infra.mirantis.net%2FMirantisOpenStack-6.1.iso'
DOWNLOAD_TORRENTS="$MAGNET_511_ISO $MAGNET_60_LRZ $MAGNET_61_LRZ"
FUEL_ISO='MirantisOpenStack-5.1.1.iso'
MYDIR="$(readlink -e "$(dirname "$BASH_SOURCE")")"
sudo apt-get update
# Use provided preseed.cfg to install everything
# Install and start PolicyKit separately to avoid issues during install later
sudo apt-get install -y policykit-1
sudo service polkitd start
sudo apt-get install -y dpkg-dev acl
# Transmission
sudo apt-get install -y transmission-cli transmission-daemon
DOWNLOADS_DIR="$HOME/Downloads"
mkdir -p "$DOWNLOADS_DIR"
setfacl -m 'user:debian-transmission:rwx' "$DOWNLOADS_DIR"
sudo service transmission-daemon stop
EDIT_SCRIPT='import sys,json; i=iter(sys.argv); next(i); fname=next(i); s=json.load(open(fname)); s.update(zip(i,i)); json.dump(s,open(fname,"w"),indent=4,sort_keys=True)'
sudo python3 -c "$EDIT_SCRIPT" /etc/transmission-daemon/settings.json download-dir "$DOWNLOADS_DIR"
sudo service transmission-daemon start
for magnet in $DOWNLOAD_TORRENTS; do
transmission-remote -n transmission:transmission -a "$magnet"
done
# Libvirt
# Fucking https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1343245
printf ' /dev/zvol/vms/* rw,\n /dev/zd* rw,\n' | sudo tee -a /etc/apparmor.d/abstractions/libvirt-qemu > /dev/null
# Build and install Libvirt package with ZFS support
mkdir ~/libvirt-build
pushd ~/libvirt-build
apt-get source libvirt-bin
sudo apt-get build-dep -y libvirt-bin
sudo apt-get install -y devscripts
cd libvirt-1.2.12
patch -p0 < "$MYDIR/libvirt.patch"
debuild -uc -us -b
cd ..
sudo dpkg -i --force-confnew libvirt0_*_amd64.deb libvirt-bin_*_amd64.deb
popd
# Setup ZFS pool
virsh pool-define-as vms zfs --source-name vms
# no pool-build since we need -f flag for zpool create
sudo zpool create -f vms /dev/sdc
virsh pool-autostart vms
virsh pool-start vms
# Networks
virsh net-undefine default
for net in admin management private public storage; do
if [ "$net" = "admin" ]; then
fwd="<forward mode='nat'/><ip address='10.20.0.1' prefix='24'></ip>"
elif [ "$net" = "public" ]; then
fwd="<forward mode='nat'/><ip address='172.16.0.1' prefix='24'></ip>"
else
fwd=""
fi
virsh net-define <(echo "<network><name>$net</name>$fwd</network>")
virsh net-autostart $net
virsh net-start $net
done
# Don't let LVM find zvols
sudo sed -i 's#.*global_filter =.*# global_filter = [ "r|^/dev/zd.*|", "r|^/dev/zvol/.*|" ]#' /etc/lvm/lvm.conf
# Install hook to create redirects to master node
sudo cp "$MYDIR/libvirt-qemu-hook.py" /etc/libvirt/hooks/qemu
sudo service libvirt-bin restart
# Master node
while [ ! -f "$DOWNLOADS_DIR/$FUEL_ISO" ]; do
sleep 10
done
virsh vol-create-as vms fuel 100G
virsh define <(sed "s|%ISO%|$DOWNLOADS_DIR/$FUEL_ISO|" "$MYDIR/fuel.xml")
virsh start fuel
virsh event fuel lifecycle # wait for shutdown on reboot
virsh event fuel lifecycle --timeout 5 # wait for final shutdown on reboot
# This error is OK: (see https://www.redhat.com/archives/libvir-list/2015-April/msg00619.html)
# error: internal error: virsh event: no domain VSH_OT_DATA option
EDITOR="sed -i '/boot.*cdrom/d; /on_reboot/d'" virsh edit fuel # don't boot from CD, don't destroy on reboot
virsh start fuel
virsh autostart fuel
sleep 600 # let it install everything
# Other nodes
for i in $(seq 1 6); do
name="controller-$i"
virsh vol-create-as vms $name 100G
virsh define <(sed "s/%NAME%/$name/; s/%CPU%/2/; s/%MEMORY%/4/; /-ceph/d" "$MYDIR/node.xml")
virsh autostart $name
virsh start $name
sleep 120
done
for i in $(seq 1 6); do
name="compute-$i"
virsh vol-create-as vms $name 100G
virsh vol-create-as vms $name-ceph 100G
virsh define <(sed "s/%NAME%/$name/; s/%CPU%/4/; s/%MEMORY%/8/" "$MYDIR/node.xml")
virsh autostart $name
virsh start $name
sleep 120
done

View File

@ -1,152 +0,0 @@
# vim syntax=sh
if type zfs > /dev/null 2>&1; then
SNAP_METHOD="zfs"
STATE_PATH="/vms/state"
else
SNAP_METHOD="lvm"
STATE_PATH="/var/lib/libvirt/qemu/save"
fi
lvm_snapshot() {
local volume=$1 snapname=$2
sudo lvcreate -sn $volume-$snapname -l 100%ORIGIN vms/$volume
}
lvm_revert() {
local volume=$1 snapname=$2
sudo lvconvert --merge vms/$volume-$snapname -i 5
sudo lvcreate -sn $volume-$snapname -l 100%ORIGIN vms/$volume # keep snapshot around
}
lvm_discard() {
local volume=$1 snapname=$2
sudo lvremove -f vms/$volume-$snapname
}
zfs_snapshot() {
local volume=$1 snapname=$2
sudo zfs snapshot vms/$volume@$snapname
sudo zfs clone vms/$volume@$snapname vms/$volume-$snapname
sudo zfs promote vms/$volume-$snapname
}
zfs_revert() {
local volume=$1 snapname=$2
sudo zfs destroy vms/$volume
sudo zfs clone vms/$volume-$snapname@$snapname vms/$volume
}
zfs_discard() {
local volume=$1 snapname=$2
clones="$(sudo zfs get -H clones vms/$volume-$snapname@$snapname | cut -f3)"
case $clones in
"" )
sudo zfs destroy vms/$volume-$snapname@$snapname
sudo zfs destroy vms/$volume-$snapname
;;
vms/$volume )
sudo zfs promote vms/$volume
sudo zfs destroy vms/$volume-$snapname
sudo zfs destroy vms/$volume@$snapname
;;
* )
echo "Can't remove snapshot vms/$volume-$snapname@$snapname since it has clones: $clones"
esac
}
virsh_all() {
local action=$1
shift
echo "$@" | xargs -P0 -n1 virsh $action | sed -n '/./p'
}
snapshot_vms() {
local snapname=$1 domain snap_arg
shift
sudo mkdir -p "$STATE_PATH"
sudo chown libvirt-qemu:kvm "$STATE_PATH"
virsh_all suspend "$@"
for domain; do
${SNAP_METHOD}_snapshot $domain $snapname
snap_arg=""
case $domain in
fuel )
snap_arg="--diskspec hdb,snapshot=no"
;;
compute-* )
snap_arg="--diskspec hdb,snapshot=no"
${SNAP_METHOD}_snapshot $domain-ceph $snapname
;;
esac
virsh snapshot-create-as $domain $domain-$snapname --atomic --memspec "$STATE_PATH/$domain-$snapname" --diskspec hda,snapshot=no $snap_arg
done
virsh_all resume "$@"
}
revert_vms() {
local snapname=$1 domain
shift
virsh_all destroy "$@"
for domain; do
${SNAP_METHOD}_revert $domain $snapname
case $domain in
compute-* )
${SNAP_METHOD}_revert $domain-ceph $snapname
;;
esac
virsh restore "$STATE_PATH/$domain-$snapname" --paused
done
virsh_all resume "$@"
}
discard_snapshots() {
local snapname=$1 domain
shift
for domain; do
${SNAP_METHOD}_discard $domain $snapname
case $domain in
compute-* )
${SNAP_METHOD}_discard $domain-ceph $snapname
;;
esac
virsh snapshot-delete $domain $domain-$snapname --metadata
sudo rm -f "$STATE_PATH/$domain-$snapname"
done
}
zfs_transfer() {
local volume=$1 snapname=$2 target=$3
local dataset="vms/$volume-$snapname@$snapname"
sudo zfs send "$dataset" | pv -cN $domain -s "$(sudo zfs list -Hp "$dataset" | cut -f4)" | ssh $target sudo zfs recv -vd vms
}
transfer_snapshots() {
local snapname=$1 target=$2 domain
shift; shift
if [ "$SNAP_METHOD" != "zfs" ]; then
echo "Can transfer only ZFS snapshots"
return 1
fi
if ! type pv > /dev/null 2>&1; then
sudo apt-get install -y pv
fi
if ! ssh -o KbdInteractiveAuthentication=no $target true; then
echo "Please set up passwordless SSH to node $target"
return 1
fi
if ! ssh $target sudo -nv 2> /dev/null; then
echo "Please set up passwordless sudo on node $target"
return 1
fi
ssh $target sudo mkdir -p "$STATE_PATH"
for domain; do
${SNAP_METHOD}_transfer $domain $snapname $target
case $domain in
compute-* )
${SNAP_METHOD}_transfer $domain-ceph $snapname $target
;;
esac
local state_file="$STATE_PATH/$domain-$snapname"
sudo pv "$state_file" | ssh $target sudo "sh -c \"cat > \\\"$state_file\\\"\""
done
}
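# Illustrative usage sketch (the snapshot name and domain names below are
# examples; the domains match the lab setup created by the deploy script):
#   snapshot_vms before-upgrade fuel controller-1 compute-1
#   revert_vms before-upgrade fuel controller-1 compute-1
#   discard_snapshots before-upgrade fuel controller-1 compute-1
#   transfer_snapshots before-upgrade <other-host> compute-1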

View File

@ -1,16 +0,0 @@
source 'https://rubygems.org'
group :development, :test do
gem 'puppetlabs_spec_helper', :require => false
gem 'puppet-lint'
gem 'rake'
gem 'rspec-puppet'
end
if puppetversion = ENV['PUPPET_GEM_VERSION']
gem 'puppet', puppetversion, :require => false
else
gem 'puppet', :require => false
end
# vim:ft=ruby

View File

@ -1,53 +0,0 @@
GEM
remote: https://rubygems.org/
specs:
CFPropertyList (2.2.8)
diff-lcs (1.2.5)
facter (2.4.6)
CFPropertyList (~> 2.2.6)
hiera (3.2.0)
json_pure
json_pure (2.0.2)
metaclass (0.0.4)
mocha (1.1.0)
metaclass (~> 0.0.1)
puppet (4.5.3)
CFPropertyList (~> 2.2.6)
facter (> 2.0, < 4)
hiera (>= 2.0, < 4)
json_pure
puppet-lint (2.0.0)
puppet-syntax (2.1.0)
rake
puppetlabs_spec_helper (1.1.1)
mocha
puppet-lint
puppet-syntax
rake
rspec-puppet
rake (11.2.2)
rspec (3.5.0)
rspec-core (~> 3.5.0)
rspec-expectations (~> 3.5.0)
rspec-mocks (~> 3.5.0)
rspec-core (3.5.2)
rspec-support (~> 3.5.0)
rspec-expectations (3.5.0)
diff-lcs (>= 1.2.0, < 2.0)
rspec-support (~> 3.5.0)
rspec-mocks (3.5.0)
diff-lcs (>= 1.2.0, < 2.0)
rspec-support (~> 3.5.0)
rspec-puppet (2.4.0)
rspec
rspec-support (3.5.0)
PLATFORMS
ruby
DEPENDENCIES
puppet
puppet-lint
puppetlabs_spec_helper
rake
rspec-puppet

View File

@ -1,20 +0,0 @@
# Octane_tasks
#### Table of Contents
1. [Description](#description)
2. [Testing](#testing)
## Description
This module composes tasks needed during an upgrade.
The modular directory is for granular tasks (used by Fuel).
## Testing
To make sure the code conforms to the style guide:
```
rake lint
```

View File

@ -1,10 +0,0 @@
# TODO(pchechetin): Uncomment when rspec-puppet is necessary.
# require 'rspec-puppet/rake_task'
require 'puppet-syntax/tasks/puppet-syntax'
require 'puppet-lint/tasks/puppet-lint'
PuppetLint.configuration.ignore_paths = ["spec/**/*.pp", "vendor/**/*.pp"]
PuppetLint.configuration.fail_on_warnings = true
PuppetLint.configuration.send('disable_class_inherits_from_params_class')

View File

@ -1,30 +0,0 @@
#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
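# Deletes one subsection from a YAML file in place.
# Usage (as invoked by the granular upgrade tasks in this repository):
#   python delete_section.py <yaml-file> <section> <subsection>
# e.g.: python delete_section.py /etc/astute.yaml repo_setup repos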
import yaml
import sys
target_file = sys.argv[1]
section = sys.argv[2]
subsection = sys.argv[3]
try:
with open(target_file,'r+') as f:
data = yaml.load(f)
del data[section][subsection]
with open(target_file,'w+') as f:
yaml.dump(data,f,default_flow_style=False)
except KeyError as e:
print "Failed to find key: {0}".format(e)

View File

@ -1,7 +0,0 @@
#!/bin/bash
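# Print the value of a parameter of a puppet resource.
# Usage: fetch_puppet_resource_param.sh <puppet_type> <resource_name> <resource_param>
# e.g.: fetch_puppet_resource_param.sh cinder_config DEFAULT/host value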
PUPPET_TYPE=$1
RESOURCE_NAME=$2
RESOURCE_PARAM=$3
echo `puppet resource ${PUPPET_TYPE} ${RESOURCE_NAME} | grep ${RESOURCE_PARAM} | awk '{print $3}' | tr -d "',"`

View File

@ -1,15 +0,0 @@
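-- Give every network a default security binding, switch external networks to
-- the flat 'physnet1' network type, and add the 'physnet1' flat allocation if
-- it is missing.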
INSERT INTO networksecuritybindings
SELECT id, 1
FROM networks
WHERE id NOT IN (SELECT network_id FROM networksecuritybindings);
UPDATE ml2_network_segments
SET network_type='flat',physical_network='physnet1'
WHERE network_id IN (SELECT network_id FROM externalnetworks);
INSERT INTO ml2_flat_allocations
SELECT b.* FROM (SELECT 'physnet1') AS b
WHERE NOT EXISTS (
SELECT 1 FROM ml2_flat_allocations
WHERE physical_network = 'physnet1'
)

View File

@ -1,11 +0,0 @@
#!/bin/bash
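# Record the current Cinder host string ("<host>#<backend>") as an exportable
# CURRENT_HOST variable in the file given as $1.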
STORE_PATH=$1
SCRIPT=`readlink -f $0`
DIR=`dirname ${SCRIPT}`
CINDER_HOST=`bash ${DIR}/fetch_puppet_resource_param.sh cinder_config DEFAULT/host value`
CINDER_BACKEND=`bash ${DIR}/fetch_puppet_resource_param.sh cinder_config DEFAULT/volume_backend_name value`
echo "export CURRENT_HOST=\"${CINDER_HOST}#${CINDER_BACKEND}\"" > ${STORE_PATH}

View File

@ -1,16 +0,0 @@
#!/bin/bash
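# Record the new Cinder host string ("<host>@<backend>#<backend>") as an
# exportable NEW_HOST variable in the file given as $1, falling back to the
# RBD backend host when DEFAULT/host is unset.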
STORE_PATH=$1
SCRIPT=`readlink -f $0`
DIR=`dirname ${SCRIPT}`
CINDER_HOST=`bash ${DIR}/fetch_puppet_resource_param.sh cinder_config DEFAULT/host value`
if [[ -z ${CINDER_HOST} ]]; then
CINDER_HOST=`bash ${DIR}/fetch_puppet_resource_param.sh cinder_config RBD-backend/backend_host value`
fi
CINDER_BACKEND=`bash ${DIR}/fetch_puppet_resource_param.sh cinder_config RBD-backend/volume_backend_name value`
echo "export NEW_HOST=\"${CINDER_HOST}@${CINDER_BACKEND}#${CINDER_BACKEND}\"" > ${STORE_PATH}

View File

@ -1,106 +0,0 @@
# GROUPS
- id: compute
type: group
role: [compute]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [compute]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: override_repos_in_hiera
type: upload_file
version: 2.1.0
groups: [compute]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"repo_setup" => {"repos" => $.repo_setup.preupgrade_compute},
"preupgrade_packages" => $.preupgrade_packages}.toYaml())
- id: cleanup_existing_repos
type: shell
version: 2.1.0
groups: [compute]
requires: []
required_for: []
parameters:
cmd: >
tar zcf /root/sources.list.d-backup-$(date +%F-%H%M).tar.gz /etc/apt/sources.list.d;
rm /etc/apt/sources.list.d/*.list || true
timeout: 60
- id: rsync_latest_puppet
type: sync
version: 2.1.0
groups: [compute]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/puppet/modules/
dst: /etc/fuel/octane/latest_modules
timeout: 180
- id: setup_new_repositories
type: puppet
version: 2.1.0
groups: [compute]
requires: [cleanup_existing_repos, rsync_latest_puppet, override_repos_in_hiera, remove_hiera_section_repo_setup]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/latest_modules/osnailyfacter/modular/fuel_pkgs/setup_repositories.pp
puppet_modules: /etc/fuel/octane/latest_modules
timeout: 600
- id: stop_compute_services
type: puppet
version: 2.1.0
groups: [compute]
requires: [setup_new_repositories, rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/stop_compute_services.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: preupgrade_compute
type: puppet
version: 2.1.0
groups: [compute]
requires: [stop_compute_services, rsync_octane, setup_new_repositories]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/preupgrade_compute.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: remove_hiera_section_repo_setup
type: shell
version: 2.1.0
groups: [compute]
requires: [rsync_octane]
required_for: []
parameters:
cmd: python /etc/fuel/octane/puppet/octane_tasks/files/delete_section.py /etc/astute.yaml repo_setup repos
timeout: 60
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [compute]
requires: [preupgrade_compute]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60

View File

@ -1,45 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
- id: controller
type: group
role: [controller]
fault_tolerance: 0
# TASKS
- id: add_hiera_override
type: upload_file
version: 2.1.0
groups: [primary-controller, controller]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"upgrade" => $.upgrade}.toYaml())
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller, controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: kill_cluster
type: puppet
version: 2.1.0
groups: [primary-controller, controller]
requires: [add_hiera_override, rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/kill_cluster.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360

View File

@ -1,44 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
- id: controller
type: group
role: [controller]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller, controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: stop_init_services
type: puppet
version: 2.1.0
groups: [primary-controller, controller]
requires: [rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/stop_init_services.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [primary-controller, controller]
requires: [stop_init_services]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60

View File

@ -1,45 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
- id: controller
type: group
role: [controller]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller, controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: start_cluster
type: puppet
version: 2.1.0
groups: [primary-controller, controller]
requires: [rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/start_cluster.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: netconfig
type: puppet
version: 2.1.0
groups: [primary-controller, controller]
required_for: []
requires: []
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig/netconfig.pp
puppet_modules: /etc/puppet/modules
timeout: 300

View File

@ -1,61 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: ceph_mon_dump_create
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/ceph_mon_dump_create.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: ceph_mon_dump_upload
type: sync
version: 2.1.0
groups: [primary-controller]
requires: [ceph_mon_dump_create]
required_for: []
parameters:
src: /var/tmp/ceph_mon.tar.gz
dst: rsync://{MASTER_IP}:/octane_data/
timeout: 180
- id: ceph_etc_dump_upload
type: sync
version: 2.1.0
groups: [primary-controller]
requires: [ceph_mon_dump_create]
required_for: []
parameters:
src: /var/tmp/ceph_etc.tar.gz
dst: rsync://{MASTER_IP}:/octane_data/
timeout: 180
- id: ceph_conf_upload
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: /etc/ceph/ceph.conf
dst: rsync://{MASTER_IP}:/octane_data/
timeout: 180

View File

@ -1,22 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
- id: controller
type: group
role: [controller]
fault_tolerance: 0
# TASKS
# TODO: Improve with https://review.openstack.org/#/c/342959/
- id: start_haproxy
type: shell
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
cmd: pcs resource enable clone_p_haproxy
timeout: 180

View File

@ -1,105 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
# TASKS
- id: add_hiera_override
type: upload_file
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"upgrade" => $.upgrade}.toYaml())
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: nova_db_migrate_flavor_data_70
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane, add_hiera_override]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/nova_db_migrate_flavor_data_70.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
# TODO: Don't stop haproxy, but disable only specific backend using Puppet provider
# from https://review.openstack.org/#/c/342959/
- id: stop_haproxy
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [nova_db_migrate_flavor_data_70]
required_for: []
parameters:
cmd: pcs resource disable clone_p_haproxy
timeout: 180
- id: mysqldump_create
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane,stop_haproxy]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/mysqldump_create.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: mysqldump_upload_to_master
type: sync
version: 2.1.0
groups: [primary-controller]
requires: [mysqldump_create]
required_for: []
parameters:
src: /var/tmp/dbs.original.sql.gz.enc
dst: rsync://{MASTER_IP}:/octane_data/
timeout: 180
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [mysqldump_upload_to_master]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60
- id: store_cinder_current_host
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [mysqldump_upload_to_master]
required_for: []
parameters:
cmd: bash /etc/fuel/octane/puppet/octane_tasks/files/store_current_host.sh /tmp/cinder_current_host
timeout: 60
- id: cinder_current_host_upload_to_master
type: sync
version: 2.1.0
groups: [primary-controller]
requires: [store_cinder_current_host]
required_for: []
parameters:
src: /tmp/cinder_current_host
dst: rsync://{MASTER_IP}:/octane_data/
timeout: 180

View File

@ -1,45 +0,0 @@
# GROUPS
- id: ceph-osd
type: group
role: [ceph-osd]
fault_tolerance: 0
# TASKS
- id: restart_ceph_osd
type: puppet
version: 2.1.0
groups: [ceph-osd]
requires: [upgrade_ceph_packages]
cross-depends:
- name: upgrade_ceph_packages
role: ceph-osd
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/restart_ceph_osd.pp
puppet_modules: /etc/fuel/octane/puppet
timeout: 600
- id: unset_noout
type: puppet
version: 2.1.0
groups: [ceph-osd]
requires: [restart_ceph_osd]
cross-depends:
- name: restart_ceph_osd
role: ceph-osd
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/unset_noout.pp
puppet_modules: /etc/fuel/octane/puppet
timeout: 600
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [ceph-osd]
requires: [unset_noout]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60

View File

@ -1,73 +0,0 @@
# GROUPS
- id: ceph-osd
type: group
role: [ceph-osd]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [ceph-osd]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: remove_hiera_section_repo_setup
type: shell
version: 2.1.0
groups: [ceph-osd]
requires: [rsync_octane]
required_for: []
parameters:
cmd: python /etc/fuel/octane/puppet/octane_tasks/files/delete_section.py /etc/astute.yaml repo_setup repos
timeout: 60
- id: override_repos_in_hiera
type: upload_file
version: 2.1.0
groups: [ceph-osd]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"repo_setup" => {"repos" => $.repo_setup.upgrade_osd}}.toYaml())
- id: cleanup_existing_repos
type: shell
version: 2.1.0
groups: [ceph-osd]
requires: []
required_for: []
parameters:
cmd: >
tar zcf /root/sources.list.d-backup-$(date +%F-%H%M).tar.gz /etc/apt/sources.list.d;
rm /etc/apt/sources.list.d/*.list || true
timeout: 60
- id: rsync_latest_puppet
type: sync
version: 2.1.0
groups: [ceph-osd]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/puppet/modules/
dst: /etc/fuel/octane/latest_modules
timeout: 180
- id: setup_new_repositories
type: puppet
version: 2.1.0
groups: [ceph-osd]
requires: [cleanup_existing_repos, rsync_latest_puppet, override_repos_in_hiera, remove_hiera_section_repo_setup]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/latest_modules/osnailyfacter/modular/fuel_pkgs/setup_repositories.pp
puppet_modules: /etc/fuel/octane/latest_modules
timeout: 600

View File

@ -1,40 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
# TASKS
- id: add_hiera_override
type: upload_file
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"upgrade" => $.upgrade}.toYaml())
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: start_controller_services
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [add_hiera_override, rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/start_controller_services.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360

View File

@ -1,40 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
- id: netconfig
type: puppet
version: 2.1.0
groups: [primary-controller]
required_for: []
requires: [upload_configuration]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig/netconfig.pp
puppet_modules: /etc/puppet/modules
timeout: 300
- id: upload_configuration
type: upload_file
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
path: /etc/fuel/cluster/{CLUSTER_ID}/astute.yaml
permissions: '0640'
dir_permissions: '0750'
timeout: 180
data:
yaql_exp: '$.toYaml()'
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [upload_configuration]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60

View File

@ -1,29 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: stop_init_services
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/stop_init_services.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360

View File

@ -1,104 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: ceph_mon_dump_download
type: sync
version: 2.0.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_data/ceph_mon.tar.gz
dst: /var/tmp
timeout: 180
- id: ceph_etc_dump_download
type: sync
version: 2.0.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_data/ceph_etc.tar.gz
dst: /var/tmp
timeout: 180
- id: ceph_mon_conf_download
type: sync
version: 2.0.0
groups: [primary-controller]
requires: []
required_for: []
cross-depends:
- name: rsync_octane_section
role: master
parameters:
src: rsync://{MASTER_IP}:/octane_data/ceph.conf
dst: /var/tmp
timeout: 180
- id: ceph_reconfiguration
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [
rsync_octane,
ceph_mon_conf_download,
ceph_mon_dump_download,
ceph_etc_dump_download,
ceph_mon_stop
]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/ceph_reconfiguration.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: ceph_mon_stop
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/ceph_mon_stop.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 180
- id: ceph_mon_start
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane,ceph_reconfiguration]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/ceph_mon_start.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 180
- id: ceph_bootstrap
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane,ceph_mon_start]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/ceph_bootstrap.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 180

View File

@ -1,153 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
- id: controller
type: group
role: [controller]
fault_tolerance: 0
# TASKS
- id: add_hiera_override
type: upload_file
version: 2.1.0
groups: [primary-controller, controller]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"upgrade" => $.upgrade}.toYaml())
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller, controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: delete_fuel_resources
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane]
required_for: []
parameters:
cmd: >
. /root/openrc;
cd /etc/fuel/octane/puppet/octane_tasks/misc/;
python delete_fuel_resources.py
timeout: 180
- id: stop_controller_services
type: puppet
version: 2.1.0
groups: [primary-controller, controller]
requires: [rsync_octane, delete_fuel_resources, add_hiera_override]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/stop_controller_services.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: mysqldump_download_from_master
type: sync
version: 2.0.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_data/dbs.original.sql.gz.enc
dst: /var/tmp
timeout: 180
- id: mysqldump_restore
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane, mysqldump_download_from_master, stop_controller_services]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/mysqldump_restore.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: db_sync
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane, mysqldump_restore]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/db_sync.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
# Should be applied only on 6.0-7.0 -> 9.0+
- id: neutron_migrations_for_fuel_8
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane, db_sync]
required_for: []
condition:
# Double slashes so that Python's YAML parser doesn't try to escape \.
yaql_exp: "$.upgrade.relation_info.orig_cluster_version =~ '[6-7]\\.[0-1]' and $.upgrade.relation_info.seed_cluster_version =~ '9\\.[0-9]'"
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/neutron_migrations_for_fuel_8.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 360
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [primary-controller, controller]
requires: [neutron_migrations_for_fuel_8, stop_controller_services]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60
- id: cinder_current_host_download_from_master
type: sync
version: 2.0.0
groups: [primary-controller]
requires: []
required_for: []
condition:
yaql_exp: "$.upgrade.relation_info.orig_cluster_version =~ '7\\.0'"
parameters:
src: rsync://{MASTER_IP}:/octane_data/cinder_current_host
dst: /tmp
timeout: 180
- id: store_cinder_new_host
type: shell
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
condition:
yaql_exp: "$.upgrade.relation_info.orig_cluster_version =~ '7\\.0'"
parameters:
cmd: bash /etc/fuel/octane/puppet/octane_tasks/files/store_new_host.sh /tmp/cinder_new_host
timeout: 60
- id: cinder_update_host
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [db_sync, cinder_current_host_download_from_master, store_cinder_new_host]
required_for: []
condition:
yaql_exp: "$.upgrade.relation_info.orig_cluster_version =~ '7\\.0'"
parameters:
cmd: 'source /tmp/cinder_current_host; source /tmp/cinder_new_host; cinder-manage volume update_host --currenthost ${CURRENT_HOST} --newhost ${NEW_HOST}'
timeout: 60
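
Note: several tasks in this file (neutron_migrations_for_fuel_8 and the cinder host-rename tasks) are gated by yaql regex conditions on the original and seed cluster versions; the double backslashes exist only so the YAML parser passes a single backslash through to the matcher. A small illustrative Python check (not part of the original tasks) of how those patterns behave once unescaped:

import re

# What the matcher sees after YAML unescaping of the conditions above.
orig_pattern = r'[6-7]\.[0-1]'
seed_pattern = r'9\.[0-9]'

for orig, seed in [('7.0', '9.1'), ('6.1', '9.0'), ('8.0', '9.0')]:
    gated = bool(re.search(orig_pattern, orig) and re.search(seed_pattern, seed))
    print('%s -> %s : run neutron_migrations_for_fuel_8 = %s' % (orig, seed, gated))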

View File

@ -1,62 +0,0 @@
# GROUPS
- id: primary-controller
type: group
role: [primary-controller]
fault_tolerance: 0
# TASKS
- id: rsync_octane
type: sync
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
src: rsync://{MASTER_IP}:/octane_code/puppet
dst: /etc/fuel/octane/
timeout: 180
- id: ceph_osd_hiera
type: upload_file
version: 2.1.0
groups: [primary-controller]
requires: []
required_for: []
parameters:
path: /etc/hiera/override/common.yaml
data:
yaql_exp: >
({"ceph_upgrade_release" => $.ceph_upgrade_release,
"ceph_upgrade_hostnames" => $.ceph_upgrade_hostnames}.toYaml())
- id: set_noout
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [rsync_octane, ceph_osd_hiera]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/set_noout.pp
puppet_modules: /etc/fuel/octane/puppet
timeout: 600
- id: upgrade_ceph_packages
type: puppet
version: 2.1.0
groups: [primary-controller]
requires: [set_noout]
required_for: []
parameters:
puppet_manifest: /etc/fuel/octane/puppet/octane_tasks/modular/upgrade_ceph_packages.pp
puppet_modules: /etc/fuel/octane/puppet:/etc/puppet/modules
timeout: 600
- id: remove_hiera_override
type: shell
version: 2.1.0
groups: [primary-controller]
requires: [upgrade_ceph_packages]
required_for: []
parameters:
cmd: rm /etc/hiera/override/common.yaml || true
timeout: 60

View File

@ -1,29 +0,0 @@
(* Ceph module for Augeas
Author: Pavel Chechetin <pchechetin@mirantis.com>
ceph.conf is a standard INI file with whitespace allowed in section titles.
TODO(pchechetin): Get rid of this lens once it is merged upstream.
See also: https://github.com/hercules-team/augeas/pull/401
*)
module Ceph =
autoload xfm
let comment = IniFile.comment IniFile.comment_re IniFile.comment_default
let sep = IniFile.sep IniFile.sep_re IniFile.sep_default
let entry_re = /[A-Za-z0-9_.-][A-Za-z0-9 _.-]*[A-Za-z0-9_.-]/
let entry = IniFile.indented_entry entry_re sep comment
let title = IniFile.indented_title IniFile.record_re
let record = IniFile.record title entry
let lns = IniFile.lns record comment
let filter = (incl "/etc/ceph/ceph.conf")
. (incl (Sys.getenv("HOME") . "/.ceph/config"))
let xfm = transform lns filter

View File

@ -1,21 +0,0 @@
Puppet::Parser::Functions.newfunction(:ceph_equal_versions, :type => :rvalue) do |args|
require 'json'
versions_1 = args[0]
versions_2 = args[1]
# Pre-check all versions for consistency. Each hash MUST contain the same
# version for all of its elements.
v1_equal = versions_1.values.all? {|val| val == versions_1.values[0]}
v2_equal = versions_2.values.all? {|val| val == versions_2.values[0]}
# One of the hashes contains values that are not all equal. This means that something
# went wrong and the relevant component has only been partially upgraded. Fail with an informative message.
fail "Partial upgrade detected, aborting. Current version layout: #{versions_1}, #{versions_2}" unless v1_equal and v2_equal
# Since all values within each hash are equal, the intersection of the two value
# arrays yields exactly one element when both components run the same version.
ret = (versions_1.values & versions_2.values).length == 1
ret
end
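
Note: for readers less familiar with Puppet parser functions, here is a minimal Python sketch (an illustration, not part of the module) of the same check: each {daemon: version} hash must be internally consistent, and the two components are considered equal when their version sets intersect in exactly one element.

def ceph_equal_versions(versions_1, versions_2):
    v1, v2 = set(versions_1.values()), set(versions_2.values())
    # Each hash must report a single version; otherwise the upgrade is partial.
    if len(v1) > 1 or len(v2) > 1:
        raise RuntimeError('Partial upgrade detected, aborting. '
                           'Current version layout: %s, %s' % (versions_1, versions_2))
    # Both components run the same version iff the intersection has one element.
    return len(v1 & v2) == 1

print(ceph_equal_versions({'mon.node-1': '0.94.9'}, {'osd.0': '0.94.9'}))  # True
print(ceph_equal_versions({'mon.node-1': '10.2.2'}, {'osd.0': '0.94.9'}))  # False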

View File

@ -1,7 +0,0 @@
Puppet::Parser::Functions.newfunction(:ceph_get_fsid, :arity => 1, :type => :rvalue) do |args|
require 'shellwords'
ceph_conf = Shellwords.escape(args[0])
Puppet::Util::Execution.execute("ceph-conf -c #{ceph_conf} --lookup fsid").strip
end

View File

@ -1,19 +0,0 @@
Puppet::Parser::Functions.newfunction(:ceph_get_version, :type => :rvalue) do |args|
require 'json'
service_type = args[0]
id = '*'
versions = {}
version_string = Puppet::Util::Execution.execute("ceph tell #{service_type}.#{id} version -f json")
version_string.lines.each do |line|
line = line.strip
if line.length > 0
entity, version = line.split(" ", 2)
entity = entity.tr(":", "")
versions[entity] = JSON.parse(version)['version']
end
end
versions
end

View File

@ -1,19 +0,0 @@
Puppet::Type.type(:exec).provide :bash, :parent => :posix do
include Puppet::Util::Execution
confine :feature => :posix
desc <<-EOT
Acts like the shell provider, but adds `set -o pipefail` in front of any command to achieve
more reliable error handling of commands with pipes.
EOT
def run(command, check = false)
super(['/bin/bash', '-c', "set -o pipefail; #{command}"], check)
end
def validatecmd(command)
true
end
end
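
Note: the point of this custom bash provider is that, without pipefail, a pipeline's exit status is the status of its last command, so a failing mysqldump piped into gzip would still look successful. A tiny illustrative check (assumes /bin/bash is available):

import subprocess

plain  = subprocess.call(['/bin/bash', '-c', 'false | cat'])                    # exits 0
strict = subprocess.call(['/bin/bash', '-c', 'set -o pipefail; false | cat'])   # exits 1
print('without pipefail: %d, with pipefail: %d' % (plain, strict))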

View File

@ -1,20 +0,0 @@
# == Class: octane_tasks::ceph_bootstrap
#
# It reconfigures the Ceph keyring with new keys. This is needed
# because the Ceph Monitor metadata DB was transferred from the original controller
# and the new keys are not present there yet.
#
class octane_tasks::ceph_bootstrap {
Exec {
provider => shell,
}
exec { 'ceph.auth.import':
command => 'ceph auth import -i /root/ceph.bootstrap-osd.keyring',
}
exec { 'ceph.auth.caps':
command => 'ceph auth caps client.bootstrap-osd mon "allow profile bootstrap-osd"',
require => Exec['ceph.auth.import'],
}
}

View File

@ -1,20 +0,0 @@
# == Class: octane_tasks::ceph_mon_dump_create
#
# It creates a dump of Ceph Monitor database.
#
class octane_tasks::ceph_mon_dump_create {
Exec {
provider => shell,
}
exec { 'ceph_mon_dump_create':
command => 'tar -czPf /var/tmp/ceph_mon.tar.gz *',
cwd => "/var/lib/ceph/mon/ceph-${::hostname}",
}
exec { 'ceph_etc_dump_create':
command => 'tar -czPf /var/tmp/ceph_etc.tar.gz --exclude ceph.conf /etc/ceph',
}
}

View File

@ -1,5 +0,0 @@
# == Class: octane_tasks::ceph_mon_start
#
class octane_tasks::ceph_mon_start {
service { 'ceph-mon-all': ensure => running }
}

View File

@ -1,5 +0,0 @@
# == Class: octane_tasks::ceph_mon_stop
#
class octane_tasks::ceph_mon_stop {
service { 'ceph-mon-all': ensure => stopped }
}

View File

@ -1,59 +0,0 @@
# == Class: octane_tasks::ceph_reconfiguration
#
# It replaces the fsid with the former fsid, taken from the original controller's ceph.conf, in:
# - ceph.conf
# - Ceph Monmap
#
class octane_tasks::ceph_reconfiguration {
Exec {
provider => shell,
}
$orig_fsid = ceph_get_fsid('/var/tmp/ceph.conf')
$tmp_mon_map = '/var/tmp/ceph_mon_map'
validate_re($orig_fsid, '\A[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\z')
exec { 'extract_map':
command => "ceph-mon -i ${::hostname} --extract-monmap ${tmp_mon_map}",
}
exec { 'delete_old_mon_db':
command => "rm -rf /var/lib/ceph/mon/ceph-${::hostname}/*",
require => Exec['extract_map'],
}
exec { 'extract_db':
command => 'tar -xzPf /var/tmp/ceph_mon.tar.gz',
cwd => "/var/lib/ceph/mon/ceph-${::hostname}",
require => Exec['delete_old_mon_db'],
}
exec { 'extract_ceph_etc':
command => 'tar -xzPf /var/tmp/ceph_etc.tar.gz',
}
augeas { 'ceph.conf':
lens => 'Ceph.lns',
incl => '/etc/ceph/ceph.conf',
load_path => '/usr/share/augeas/lenses:/etc/fuel/octane/puppet/octane_tasks/lib/augeas/lenses',
changes => [
"set /files/etc/ceph/ceph.conf/global/fsid ${orig_fsid}",
]
}
exec { 'change_fsid':
command => "monmaptool --fsid ${orig_fsid} --clobber ${tmp_mon_map}",
require => Exec['extract_map'],
}
exec { 'inject_map':
command => "ceph-mon -i ${::hostname} --inject-monmap ${tmp_mon_map}",
require => [
Exec['change_fsid'],
Augeas['ceph.conf'],
],
}
}
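
Note: the key safety check above is the validate_re on the fsid read from the copied /var/tmp/ceph.conf, so nothing destructive (delete_old_mon_db, inject_map) runs against a malformed fsid. An equivalent illustrative check in Python (configparser handles the INI-style ceph.conf; paths are the ones used by the class):

import re
import configparser

FSID_RE = re.compile(r'\A[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\Z')

def read_orig_fsid(path='/var/tmp/ceph.conf'):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    fsid = cfg.get('global', 'fsid').strip()
    if not FSID_RE.match(fsid):
        raise ValueError('malformed fsid in %s: %r' % (path, fsid))
    return fsid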

View File

@ -1,31 +0,0 @@
# == Class: octane_tasks::dbsync
#
# This class applies the latest database migrations
#
class octane_tasks::dbsync (
) inherits octane_tasks::params {
include ::keystone::db::sync
include ::nova::db::sync
include ::glance::db::sync
include ::neutron::db::sync
include ::cinder::db::sync
include ::heat::db::sync
if $octane_tasks::params::murano_enabled or $octane_tasks::params::murano_plugin_enabled {
include ::murano::db::sync
}
if $octane_tasks::params::sahara_enabled {
include ::sahara::db::sync
}
if $octane_tasks::params::ironic_enabled {
include ::ironic::db::sync
}
# All db sync classes have "refreshonly => true" by default
Exec <||> {
refreshonly => false
}
}

View File

@ -1,10 +0,0 @@
# == Class: octane_tasks::kill_cluster
#
# Kills Pacemaker cluster (can be started again).
#
class octane_tasks::kill_cluster {
exec { 'kill_cluster':
command => 'pcs cluster kill',
provider => shell,
}
}

View File

@ -1,42 +0,0 @@
# == Class: octane_tasks::maintenance
#
# This class is for managing OpenStack services on MOS controllers
#
class octane_tasks::maintenance (
$ensure_cluster_services = nil,
$ensure_init_services = nil,
$cluster_services_list = $octane_tasks::params::cluster_services_list,
$init_services_list = $octane_tasks::params::init_services_list,
) inherits octane_tasks::params {
# Manage init services
case $ensure_init_services {
'running', 'stopped', true, false: {
ensure_resource(
'service',
$init_services_list,
{'ensure' => $ensure_init_services}
)
}
default: {
notice("\$ensure_init_services is set to ${ensure_init_services}, skipping")
}
}
# Manage cluster services
case $ensure_cluster_services {
'running', 'stopped', true, false: {
ensure_resource(
'service',
$cluster_services_list,
{'ensure' => $ensure_cluster_services, provider => 'pacemaker'}
)
}
default: {
notice("\$ensure_cluster_services is set to ${ensure_cluster_services}, skipping")
}
}
}

View File

@ -1,16 +0,0 @@
# == Class: octane_tasks::migrate_flavor_data_70
#
# This class migrates Nova DB entries to the new flavor format
#
class octane_tasks::migrate_flavor_data_70 (
) inherits octane_tasks::params {
if $octane_tasks::params::fuel_version == '7.0' {
exec { 'nova-manage db migrate_flavor_data':
command => 'nova-manage db migrate_flavor_data | grep -q \'0 instances matched query, 0 completed\'',
path => ['/usr/bin', '/usr/sbin', '/bin'],
tries => 10,
try_sleep => 10,
}
}
}
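
Note: the tries/try_sleep pair above makes Puppet re-run nova-manage db migrate_flavor_data until its output reports that nothing is left to migrate. A rough illustrative Python rendering of that retry loop (command and success string come from the manifest; everything else is an assumption):

import subprocess
import time

def migrate_flavor_data(tries=10, try_sleep=10):
    for _ in range(tries):
        proc = subprocess.run(['nova-manage', 'db', 'migrate_flavor_data'],
                              stdout=subprocess.PIPE)
        if proc.returncode == 0 and b'0 instances matched query, 0 completed' in proc.stdout:
            return True   # all flavor data migrated
        time.sleep(try_sleep)
    return False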

View File

@ -1,38 +0,0 @@
# == Class: octane_tasks::mysqldump_create
#
# It dumps the databases, then compresses and encrypts the dump.
#
class octane_tasks::mysqldump_create inherits octane_tasks::params {
$password = $octane_tasks::params::nova_hash['db_password']
$compress_and_enc_command = 'gzip | openssl enc -e -aes256 -pass env:PASSWORD -out /var/tmp/dbs.original.sql.gz.enc'
$mysql_args = '--defaults-file=/root/.my.cnf --host localhost --add-drop-database --lock-all-tables'
$os_base_dbs = ['cinder', 'glance', 'heat', 'keystone', 'neutron', 'nova']
if $octane_tasks::params::sahara_enabled {
$sahara_db = ['sahara']
} else {
$sahara_db = []
}
if $octane_tasks::params::murano_enabled {
$murano_db = ['murano']
} else {
$murano_db = []
}
if $octane_tasks::params::ironic_enabled {
$ironic_db = ['ironic']
} else {
$ironic_db = []
}
$db_list = join(concat($os_base_dbs, $sahara_db, $murano_db, $ironic_db), ' ')
exec { 'backup_and_encrypt':
command => "mysqldump ${mysql_args} --databases ${db_list} | ${compress_and_enc_command}",
environment => "PASSWORD=${password}",
provider => bash,
}
}
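
Note: to make the pipeline above easier to follow, the databases are dumped, gzipped, and encrypted in one shot, and the encryption password reaches openssl through the environment (env:PASSWORD) so it never appears on a command line. A minimal Python sketch of the same command assembly, reusing the bash provider's pipefail semantics (paths and flags are taken from the manifest; the password source is an assumption):

import os
import subprocess

def mysqldump_backup(db_list, password, out='/var/tmp/dbs.original.sql.gz.enc'):
    mysql_args = ('--defaults-file=/root/.my.cnf --host localhost '
                  '--add-drop-database --lock-all-tables')
    cmd = ('set -o pipefail; '
           'mysqldump %s --databases %s | gzip | '
           'openssl enc -e -aes256 -pass env:PASSWORD -out %s'
           % (mysql_args, ' '.join(db_list), out))
    env = dict(os.environ, PASSWORD=password)  # password travels via the environment
    subprocess.check_call(['/bin/bash', '-c', cmd], env=env)

# e.g. mysqldump_backup(['cinder', 'glance', 'heat', 'keystone', 'neutron', 'nova'], 'secret')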

View File

@ -1,16 +0,0 @@
# == Class: octane_tasks::mysqldump_restore
#
# It decrypts, decompresses, and restores the DB dump.
#
class octane_tasks::mysqldump_restore inherits octane_tasks::params {
$password = $octane_tasks::params::nova_hash['db_password']
$dump_path = '/var/tmp/dbs.original.sql.gz.enc'
$restore_command = "openssl enc -d -aes256 -pass env:PASSWORD -in ${dump_path} | gzip -d | mysql --defaults-file=/root/.my.cnf"
exec { 'decrypt_and_restore':
command => $restore_command,
environment => "PASSWORD=${password}",
provider => bash,
}
}

View File

@ -1,17 +0,0 @@
# == Class: octane_tasks::neutron_migrations_for_fuel_8
#
# This class fixes an issue with floating IPs introduced in Fuel 8.0:
# in the Fuel 8.0 timeframe the external network type was switched from local to flat,
# which renders all previously allocated floating IPs useless.
#
class octane_tasks::neutron_migrations_for_fuel_8 {
file { '/tmp/neutron_migrations_for_fuel_8':
source => 'puppet:///modules/octane_tasks/neutron_migrations_for_fuel_8',
}
exec { 'mysql neutron < /tmp/neutron_migrations_for_fuel_8':
provider => shell,
require => File['/tmp/neutron_migrations_for_fuel_8'],
environment => 'HOME=/root',
}
}

View File

@ -1,117 +0,0 @@
# == Class: octane_tasks::params
#
# This class contains parameters for octane_tasks
#
class octane_tasks::params (
) {
$nova_hash = hiera_hash('nova')
$upgrade_hash = hiera_hash('upgrade')
$ceilometer_hash = hiera_hash('ceilometer', {'enabled' => false})
$sahara_hash = hiera_hash('sahara', {'enabled' => false})
$murano_hash = hiera_hash('murano', {'enabled' => false})
$ironic_hash = hiera_hash('ironic', {'enabled' => false})
$storage_hash = hiera_hash('storage', {})
$fuel_version = hiera('fuel_version', '9.0')
$murano_plugin_hash = hiera_hash('detach-murano', {'metadata' => {'enabled' => false} })
$ceilometer_enabled = $ceilometer_hash['enabled']
$sahara_enabled = $sahara_hash['enabled']
$murano_enabled = $murano_hash['enabled']
$murano_plugin_enabled = $murano_plugin_hash['metadata']['enabled']
$ironic_enabled = $ironic_hash['enabled']
$cinder_vol_on_ctrl = $storage_hash['volumes_ceph']
$orig_version = $upgrade_hash['relation_info']['orig_cluster_version']
$seed_version = $upgrade_hash['relation_info']['seed_cluster_version']
# Nova
$nova_services_list = [
'nova-api',
'nova-cert',
'nova-consoleauth',
'nova-conductor',
'nova-scheduler',
'nova-novncproxy',
]
# Glance
if $fuel_version >= '9.0' {
$glance_services_list = ['glance-registry', 'glance-api', 'glance-glare']
} else {
$glance_services_list = ['glance-registry', 'glance-api']
}
# Neutron
$neutron_services_list = [
'neutron-server',
]
# Cinder
if $cinder_vol_on_ctrl {
$cinder_services_list = [
'cinder-api',
'cinder-scheduler',
'cinder-volume',
'cinder-backup'
]
} else {
$cinder_services_list = [
'cinder-api',
'cinder-scheduler'
]
}
# Heat
$heat_services_list = [
'heat-api',
'heat-api-cloudwatch',
'heat-api-cfn',
]
# Murano
if $murano_enabled or $murano_plugin_enabled {
$murano_services_list = ['murano-api', 'murano-engine']
} else {
$murano_services_list = []
}
# Sahara
if $sahara_enabled {
$sahara_services_list = ['sahara-all']
} else {
$sahara_services_list = []
}
# Ironic
# NOTE(pchechetin): A list of services for Ironic support should be tested in a lab
if $ironic_enabled {
$ironic_services_list = ['ironic-api']
} else {
$ironic_services_list = []
}
# Pacemaker services
$cluster_services_list = [
'neutron-openvswitch-agent',
'neutron-l3-agent',
'neutron-metadata-agent',
'neutron-dhcp-agent',
'p_heat-engine',
]
# Concatenate init services
$init_services_list = concat(
$nova_services_list,
$glance_services_list,
$neutron_services_list,
$cinder_services_list,
$heat_services_list,
$murano_services_list,
$sahara_services_list,
$ironic_services_list
)
# NOTE: Swift is not supported by Octane
}

View File

@ -1,47 +0,0 @@
# == Class: octane_tasks::preupgrade_compute
#
# This class upgrades the required packages on a compute node
# in place. See magic_consts.COMPUTE_PREUPGRADE_PACKAGES
# for the complete list.
#
class octane_tasks::preupgrade_compute {
$preupgrade_packages = hiera('preupgrade_packages')
$preupgrade_packages_str = join($preupgrade_packages, ' ')
# As much as I would love to use the package type, it just won't
# cut it. The to-be-updated packages have dependencies on each
# other, so it would take a strict ordering of the package list
# to do so, and that would mean assuming that passing the list to
# the resource makes Puppet realize these resources in the same
# order as they appear in the list.
Exec {
path => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
}
exec { 'pre-upgrade-apt-get-update':
command => 'apt-get update',
before => Exec['upgrade-packages'],
}
exec { 'upgrade-packages':
path => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
command => "apt-get install --only-upgrade --yes --force-yes \
-o Dpkg::Options::=\"--force-confdef\" \
-o Dpkg::Options::=\"--force-confold\" \
${preupgrade_packages_str}",
environment => ['DEBIAN_FRONTEND=noninteractive'],
before => Anchor['packages-are-updated'],
}
anchor { 'packages-are-updated': }
service { 'nova-compute':
ensure => running,
subscribe => Anchor['packages-are-updated'],
}
service { 'neutron-plugin-openvswitch-agent':
ensure => running,
subscribe => Anchor['packages-are-updated'],
}
}
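
Note: in other words, the class does an apt-get update followed by a single non-interactive apt-get install --only-upgrade over the whole hiera-provided package list, keeping existing config files. An illustrative Python rendering (package list and dpkg options come from the manifest; everything else is an assumption):

import os
import subprocess

def preupgrade_compute(packages):
    env = dict(os.environ, DEBIAN_FRONTEND='noninteractive')
    subprocess.check_call(['apt-get', 'update'], env=env)
    subprocess.check_call(
        ['apt-get', 'install', '--only-upgrade', '--yes', '--force-yes',
         '-o', 'Dpkg::Options::=--force-confdef',
         '-o', 'Dpkg::Options::=--force-confold'] + list(packages),
        env=env)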

View File

@ -1,22 +0,0 @@
# == Class: octane_tasks::restart_ceph_osd
#
# This class restarts the Ceph OSDs after the package upgrade
#
class octane_tasks::restart_ceph_osd {
$ceph_mon_versions = ceph_get_version('mon')
$ceph_osd_versions = ceph_get_version('osd')
Exec {
provider => shell,
}
if ! ceph_equal_versions($ceph_mon_versions, $ceph_osd_versions) {
exec { 'restart-ceph-osd':
command => 'restart ceph-osd-all',
}
} else {
notice('the version of osd on current node matches mon version, nothing to upgrade.')
}
}

View File

@ -1,40 +0,0 @@
# == Class: octane_tasks::rsync_octane_section
#
# This class adds two sections to rsyncd.conf for Octane:
# Code, with read-only access.
# Data, with read and write access.
class octane_tasks::rsync_octane_section {
augeas { 'rsync_octane_section_code':
context => '/files/etc/rsyncd.conf/octane_code',
changes => [
'set path /var/www/nailgun/octane_code',
'set read\ only true',
'set uid 0',
'set gid 0',
'set use\ chroot no',
]
}
augeas { 'rsync_octane_section_data':
context => '/files/etc/rsyncd.conf/octane_data',
changes => [
'set path /var/www/nailgun/octane_data',
'set read\ only false',
'set use\ chroot no',
]
}
$admin_network = hiera_hash('ADMIN_NETWORK')
$admin_ip = $admin_network['ipaddress']
augeas { 'xinetd_rsync':
context => '/files/etc/xinetd.d/rsync/service',
notify => Service['xinetd'],
changes => [
"set bind ${admin_ip}",
]
}
service { 'xinetd': }
}

View File

@ -1,23 +0,0 @@
# == Class: octane_tasks::set_noout
#
# This class sets the noout flag for the OSD pre-upgrade
#
class octane_tasks::set_noout {
$ceph_mon_versions = ceph_get_version('mon')
$ceph_osd_versions = ceph_get_version('osd')
Exec {
provider => shell,
}
if ! ceph_equal_versions($ceph_mon_versions, $ceph_osd_versions) {
exec { 'set-noout-flag':
command => 'ceph osd set noout',
unless => 'ceph -s | grep -q "noout flag.\+ set"',
}
} else {
notice('the version of osd on current node matches mon version, nothing to upgrade.')
}
}

View File

@ -1,10 +0,0 @@
# == Class: octane_tasks::start_cluster
#
# Starts Pacemaker cluster again (on rollback phase).
#
class octane_tasks::start_cluster {
exec { 'start_cluster':
command => 'pcs cluster start',
provider => shell,
}
}

View File

@ -1,14 +0,0 @@
# == Class: octane_tasks::stop_compute_services
#
# This class stops compute services to prepare
# for in-place package updates.
#
class octane_tasks::stop_compute_services {
service { 'nova-compute':
ensure => stopped,
}
service { 'neutron-plugin-openvswitch-agent':
ensure => stopped,
}
}

View File

@ -1,16 +0,0 @@
# == Class: octane_tasks::unset_noout
#
# This class unsets the noout flag for OSD pre-upgrade
#
class octane_tasks::unset_noout {
Exec {
provider => shell,
}
exec { 'unset-noout-flag':
command => 'ceph osd unset noout',
onlyif => 'ceph -s | grep -q "noout flag.\+ set"',
}
}

View File

@ -1,26 +0,0 @@
# == Class: octane_tasks::upgrade_ceph_packages
#
# This class upgrades Ceph packages on the current node
#
class octane_tasks::upgrade_ceph_packages {
$ceph_mon_versions = ceph_get_version('mon')
$ceph_osd_versions = ceph_get_version('osd')
$ceph_release = hiera('ceph_upgrade_release')
$node_hostnames_string = join(hiera('ceph_upgrade_hostnames'), ' ')
Exec {
provider => shell,
}
if ! ceph_equal_versions($ceph_mon_versions, $ceph_osd_versions) {
exec { 'upgrade-ceph-packages':
command => "ceph-deploy install --release ${ceph_release} ${node_hostnames_string}",
}
} else {
notice('the version of osd on current node matches mon version, nothing to upgrade.')
}
}

View File

@ -1,76 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glanceclient.client
import keystoneclient.client as ksclient
import neutronclient.neutron.client
def _get_keystone(username, password, tenant_name, auth_url):
klient = ksclient.Client(auth_url=auth_url)
klient.authenticate(
username=username,
password=password,
tenant_name=tenant_name)
return klient
def _get_glance(version=2, endpoint=None, token=None):
return glanceclient.client.Client(version, endpoint=endpoint,
token=token)
def _get_neutron(version='2.0', token=None, endpoint_url=None):
return neutronclient.neutron.client.Client(version,
token=token,
endpoint_url=endpoint_url)
def cleanup_resources(username, password, tenant_name, auth_url):
keystone = _get_keystone(username, password, tenant_name, auth_url)
glance_endpoint = keystone.service_catalog.url_for(
service_type='image',
endpoint_type='publicURL')
glance = _get_glance(endpoint=glance_endpoint, token=keystone.auth_token)
neutron_endpoint = keystone.service_catalog.url_for(
service_type='network',
endpoint_type='publicURL')
neutron = _get_neutron(token=keystone.auth_token,
endpoint_url=neutron_endpoint)
for image in glance.images.list():
glance.images.delete(image["id"])
for i in neutron.list_floatingips()["floatingips"]:
neutron.delete_floatingip(i["id"])
for router in neutron.list_routers()["routers"]:
neutron.remove_gateway_router(router['id'])
for j in neutron.list_subnets()["subnets"]:
try:
neutron.remove_interface_router(router['id'],
{"subnet_id": j["id"]})
except Exception:
pass
neutron.delete_subnet(j["id"])
neutron.delete_router(router['id'])
for network in neutron.list_networks()["networks"]:
neutron.delete_network(network["id"])
if __name__ == '__main__':
import os
cleanup_resources(
os.environ["OS_USERNAME"],
os.environ["OS_PASSWORD"],
os.environ["OS_TENANT_NAME"],
os.environ["OS_AUTH_URL"],
)

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::ceph_bootstrap.pp')
include ::octane_tasks::ceph_bootstrap

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::ceph_mon_dump_create.pp')
include ::octane_tasks::ceph_mon_dump_create

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::ceph_mon_start.pp')
include ::octane_tasks::ceph_mon_start

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::ceph_mon_stop.pp')
include ::octane_tasks::ceph_mon_stop

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::ceph_reconfiguration.pp')
include ::octane_tasks::ceph_reconfiguration

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks/db_sync.pp')
include ::octane_tasks::dbsync

View File

@ -1,2 +0,0 @@
notice('MODULAR: octane_tasks::kill_cluster')
include octane_tasks::kill_cluster

View File

@ -1,2 +0,0 @@
notice('MODULAR: octane_tasks::mysqldump_create.pp')
include octane_tasks::mysqldump_create

View File

@ -1,2 +0,0 @@
notice('MODULAR: octane_tasks::mysqldump_restore.pp')
include octane_tasks::mysqldump_restore

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks/neutron_migrations_for_fuel_8.pp')
include ::octane_tasks::neutron_migrations_for_fuel_8

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks/migrate_flavor_data_70')
include octane_tasks::migrate_flavor_data_70

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::preupgrade_compute')
include ::octane_tasks::preupgrade_compute

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::restart_ceph_osd.pp')
include ::octane_tasks::restart_ceph_osd

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::rsync_octane_section.pp')
include octane_tasks::rsync_octane_section

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::set_noout.pp')
include ::octane_tasks::set_noout

View File

@ -1,2 +0,0 @@
notice('MODULAR: octane_tasks::start_cluster')
include octane_tasks::start_cluster

View File

@ -1,6 +0,0 @@
notice('MODULAR: octane_tasks/start_controller_services.pp')
class { 'octane_tasks::maintenance':
ensure_cluster_services => 'running',
ensure_init_services => 'running',
}

View File

@ -1,5 +0,0 @@
notice('MODULAR: octane_tasks::start_init_services')
class {'octane_tasks::maintenance':
ensure_init_services => 'running',
}

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::stop_compute_services')
include ::octane_tasks::stop_compute_services

View File

@ -1,6 +0,0 @@
notice('MODULAR: octane_tasks/stop_controller_services.pp')
class { 'octane_tasks::maintenance':
ensure_cluster_services => 'stopped',
ensure_init_services => 'stopped',
}

View File

@ -1,5 +0,0 @@
notice('MODULAR: octane_tasks::stop_init_services')
class { 'octane_tasks::maintenance':
ensure_init_services => 'stopped',
}

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::unset_noout.pp')
include ::octane_tasks::unset_noout

View File

@ -1,3 +0,0 @@
notice('MODULAR: octane_tasks::upgrade_ceph_packages.pp')
include ::octane_tasks::upgrade_ceph_packages

View File

@ -1 +0,0 @@
../../../../manifests

View File

@ -1 +0,0 @@
require 'rspec-puppet/spec_helper'

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff