Retire repository

Fuel (from openstack namespace) and fuel-ccp (in x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: Iffe3a7de281b48693606fe0c84ebec8190018167
Andreas Jaeger 2019-12-18 09:41:48 +01:00
parent 0b79ab00f7
commit 854b3c5d37
1671 changed files with 10 additions and 107816 deletions

.gitignore

@ -1,42 +0,0 @@
# IDEs
.idea
.settings
.project
# ISO
iso/local_mirror
iso/build
# Python
*.pyc
.tox
# Doc
_build
doc/*/build/
publish-docs/
# Editors
*.swp
*~
# Vagrant
.vagrant
Vagrantfile
# Bundle
Gemfile.lock
.bundled_gems
.bundle
/rdoc
task_graph.png
*.log
*-catalog.log.pp
*-spec.log.rb
.librarian
tests/noop/coverage/
tests/noop/spec/fixtures/


@ -1,12 +0,0 @@
3.0-alpha-174-g94a98d6
- Merge from master branch
- Rsyslog tuning
- Puppet debug output
- Centos 6.4
2.1-folsom-docs-324-g61d1599
- Grizzly support for centos simple
- Option for PKI auth for keystone (grizzly native)
- Nova-conductor as generic nova service at compute nodes
- CI scripts changes for grizzly tempest (host only routed IP addresses for public pool)
-

Gemfile

@ -1,35 +0,0 @@
source 'https://rubygems.org'
group :development, :test do
gem 'nokogiri', '~> 1.6.0', :require => 'false'
gem 'puppetlabs_spec_helper', '1.1.1', :require => 'false'
gem 'rspec', :require => 'false'
gem 'rspec-puppet', :require => 'false'
gem 'librarian-puppet-simple', :require => 'false'
gem 'metadata-json-lint', :require => 'false'
gem 'puppet-lint-param-docs', :require => 'false'
gem 'puppet-lint-absolute_classname-check', :require => 'false'
gem 'puppet-lint-absolute_template_path', :require => 'false'
gem 'puppet-lint-unquoted_string-check', :require => 'false'
gem 'puppet-lint-leading_zero-check', :require => 'false'
gem 'puppet-lint-variable_contains_upcase', :require => 'false'
gem 'puppet-lint-numericvariable', :require => 'false'
gem 'puppet_facts', :require => 'false'
gem 'json', :require => 'false'
gem 'pry', :require => 'false'
gem 'simplecov', :require => 'false'
gem 'webmock', '1.22.6', :require => 'false'
gem 'fakefs', :require => 'false'
gem 'beaker', '2.50.0', :require => 'false' # 3.1.0 requires ruby version >= 2.2.5
gem 'beaker-rspec', :require => 'false'
gem 'beaker-puppet_install_helper', :require => 'false'
gem 'psych', :require => 'false'
gem 'puppet-spec', :require => 'false'
gem 'rspec-puppet-facts'
end
if puppetversion = ENV['PUPPET_GEM_VERSION']
gem 'puppet', puppetversion, :require => false
else
gem 'puppet'
end

LICENSE

@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@ -1,119 +0,0 @@
---
description:
For Fuel team structure and contribution policy, see [1].
This is the repository-level MAINTAINERS file. All contributions to this
repository must be approved by one or more Core Reviewers [2].
If you are contributing to files (or creating new directories) in the
root folder of this repository, please contact Core Reviewers for
review and merge requests.
If you are contributing to subfolders of this repository, please
check the 'maintainers' section of this file in order to find maintainers
for those specific modules.
It is mandatory to get +1 from one or more maintainers before asking
Core Reviewers for review/merge, in order to decrease the load on Core Reviewers [3].
Exceptions are when maintainers are actually cores, or when maintainers
are not available for some reason (e.g. on vacation).
[1] https://specs.openstack.org/openstack/fuel-specs/policy/team-structure
[2] https://review.openstack.org/#/admin/groups/658,members
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
Please keep this file in YAML format in order to allow helper scripts
to read it as configuration data.
maintainers:
- deployment/puppet/:
- name: Sergey Vasilenko
email: svasilenko@mirantis.com
IRC: xenolog
- name: Michael Polenchuk
email: mpolenchuk@mirantis.com
IRC: pma
- name: Dmitry Ilyin
email: dilyin@mirantis.com
IRC: dilyin
- name: Stanislaw Bogatkin
email: sbogatkin@mirantis.com
IRC: sbog
- name: Maksim Malchuk
email: mmalchuk@mirantis.com
IRC: mmalchuk
- debian/: &MOS_packaging_team
- name: Mikhail Ivanov
email: mivanov@mirantis.com
IRC: mivanov
- name: Artem Silenkov
email: asilenkov@mirantis.com
IRC: asilenkov
- name: Alexander Tsamutali
email: atsamutali@mirantis.com
IRC: astsmtl
- name: Daniil Trishkin
email: dtrishkin@mirantis.com
IRC: dtrishkin
- name: Ivan Udovichenko
email: iudovichenko@mirantis.com
IRC: tlbr
- name: Igor Yozhikov
email: iyozhikov@mirantis.com
IRC: IgorYozhikov
- files/fuel-docker-utils:
- name: Matthew Mosesohn
email: mmosesohn@mirantis.com
IRC: mattymo
- files/fuel-migrate:
- name: Peter Zhurba
email: pzhurba@mirantis.com
IRC: pzhurba
- files/fuel-ha-utils:
- name: Sergey Vasilenko
email: svasilenko@mirantis.com
IRC: xenolog
- specs/: *MOS_packaging_team
- tests/bats:
- name: Peter Zhurba
email: pzhurba@mirantis.com
IRC: pzhurba
- tests/noop:
- name: Dmitry Ilyin
email: dilyin@mirantis.com
IRC: dilyin
- name: Ivan Berezovskiy
email: iberezovskiy@mirantis.com
IRC: iberezovskiy
- name: Sergey Kolekonov
email: skolekonov@mirantis.com
IRC: skolekonov
- name: Denis Egorenko
email: degorenko@mirantis.com
IRC: degorenko

README.md

@ -1,240 +0,0 @@
Team and repository tags
========================
[![Team and repository tags](http://governance.openstack.org/badges/fuel-library.svg)](http://governance.openstack.org/reference/tags/index.html)
<!-- Change things from this point on -->
# fuel-library
--------------
## Table of Contents
1. [Overview - What is the fuel-library?](#overview)
2. [Structure - What is in the fuel-library?](#structure)
3. [Granular Deployment - What is the granular deployment for Fuel?](#granular-deployment)
4. [Upstream Modules - How to work with librarian.](#upstream-modules)
5. [Testing - How to run fuel-library tests.](#testing)
6. [Building docs - How to build docs.](#build-docs)
7. [Development](#development)
8. [Core Reviewers](#core-reviewers)
9. [Contributors](#contributors)
## Overview
-----------
The fuel-library is a collection of Puppet modules and related code used by Fuel
to deploy OpenStack environments.
## Structure
------------
### Basic Repository Layout
```
fuel-library
├── CHANGELOG
├── LICENSE
├── README.md
├── MAINTAINERS
├── debian
├── deployment
├── files
├── specs
├── tests
└── utils
```
### root
The root level contains important repository documentation and license
information.
### MAINTAINERS
This is the repository-level MAINTAINERS file. Anyone submitting a patch should
contact the appropriate maintainer or invite him or her to the code review.
Note that core reviewers are not the maintainers; normally, cores review
after maintainers.
### debian/
This folder contains the required information to create fuel-library debian
packages.
### deployment/
This folder contains the fuel-library Puppet code, the Puppetfile for
upstream modules, and scripts to manage modules with
[librarian-puppet-simple](https://github.com/bodepd/librarian-puppet-simple).
### files/
This folder contains scripts and configuration files that are used when
creating the packages for fuel-library.
### specs/
This folder contains our rpm spec file for fuel-library rpm packages.
### tests/
This folder contains our testing scripts for the fuel-library.
### utils/
This folder contains scripts that are useful when doing development on
fuel-library.
## Granular Deployment
----------------------
The [top-scope puppet manifests](deployment/puppet/osnailyfacter/modular)
(sometimes also referred to as the composition layer) represent the known
deploy paths (aka supported deployment scenarios) for the
[task-based deployment](https://docs.mirantis.com/openstack/fuel/fuel-6.1/reference-architecture.html#task-based-deployment).
## Upstream Modules
-------------------
In order to be able to pull in upstream modules for use by the fuel-library,
the deployment folder contains a Puppetfile for use with
[librarian-puppet-simple](https://github.com/bodepd/librarian-puppet-simple).
Upstream modules should be used whenever possible. For additional details on
the process for working with upstream modules, please read the
[Fuel library for Puppet manifests](https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Fuel_library_for_puppet_manifests)
of the [Fuel wiki](https://wiki.openstack.org/wiki/Fuel).
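For reference, each entry in the deployment/Puppetfile uses the
librarian-puppet-simple `mod` syntax to pin an upstream module to a Git URL and
ref. The excerpt below simply mirrors one entry from the Puppetfile removed by
this change:
```
# Pull in puppetlabs-stdlib from the fuel-infra mirror, pinned to a release tag
mod 'stdlib',
  :git => 'https://github.com/fuel-infra/puppetlabs-stdlib.git',
  :ref => '4.14.0'
```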
## Testing
----------
Testing is important for the fuel-library to ensure that changes do what they
are supposed to do, that regressions are not introduced, and that all code is of
the highest quality. The fuel-library leverages existing Puppet module rspec tests,
[bats](https://github.com/sstephenson/bats) tests for bash scripts and [noop
tests](https://github.com/openstack/fuel-noop-fixtures) for testing the module
deployment tasks in fuel-library.
### Module Unit Tests
---------------------
The modules contained within fuel-library require that the module dependencies
have been downloaded prior to running their spec tests. Their fixtures.yml files
have been updated to use relative links to the modules contained within the
deployment/puppet/ folder. Because of this we have updated the rake tasks for
the fuel-library root folder to include the ability to download the module
dependencies as well as run all of the module unit tests with one command. You
can run the following from the root of the fuel-library to run all module unit
tests.
```
bundle install
bundle exec rake spec
```
By default, running this command will only test the modules modified in the
previous commit. To test all modules, please run:
```
bundle install
bundle exec rake spec_all
```
If you only wish to download the module dependencies, you can run the following
in the root of the fuel-library.
```
bundle install
bundle exec rake spec_prep
```
If you wish to clean up the dependencies, you can run the following in the root
of the fuel-library.
```
bundle install
bundle exec rake spec_clean
```
Once you have downloaded the dependencies, you can also just work with a
particular module using the usual 'rake spec' commands if you only want to run
a single module's unit tests. The upstream modules defined in the fuel-library
Puppetfile are automatically excluded from rspec unit tests. To prevent
non-upstream modules that live in fuel-library from being included in unit tests,
add the name of the module to the utils/jenkins/modules.disable_rspec file.
### Module Syntax Tests
-----------------------
From within the fuel-library root, you can run the following to perform the
syntax checks for the files within fuel-library.
```
bundle install
bundle exec rake syntax
```
This will run syntax checks against all puppet, python, shell and hiera files
within fuel-library.
### Module Lint Checks
By default, lint checks will only test the modules modified in the previous
commit. From within the fuel-library root, you can run the following commands:
```
bundle install
bundle exec rake lint
```
To run lint on all of our puppet files you should use the following commands:
```
bundle install
bundle exec rake lint_all
```
This will run puppet-lint against all of the modules within fuel-library but
will skip checking the upstream module dependencies. The upstream module
dependencies are skipped by having their names in the deployment/Puppetfile
file; additional modules can also be listed in the
utils/jenkins/modules.disable_rake-lint file.
### Puppet module tests
Puppet rspec tests should be provided for every module directory included.
All of the discovered tests will be automatically executed by the
`rake spec` command issued from the repository root path.
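As a rough sketch of what such a spec can look like (assuming the rspec-puppet
gem from the Gemfile; the `cgroups` class and the file path are used here only
as an illustration), a minimal compile check lives under the module's
spec/classes/ directory:
```
# deployment/puppet/cgroups/spec/classes/cgroups_spec.rb (illustrative sketch)
require 'spec_helper'

describe 'cgroups' do
  # The most basic rspec-puppet check: the catalog compiles without errors.
  it { is_expected.to compile }
end
```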
### Bats: Bash Automated Testing System
Shell scripts residing in the `./files` directories should be
covered by the [BATS](https://github.com/sstephenson/bats) test cases.
These should be put under the `./tests/bats` path as well.
Here is an [example](https://review.openstack.org/198355) bats tests
written for the UMM feature.
See also the [bats how-to](https://blog.engineyard.com/2014/bats-test-command-line-tools).
### Fuel-library noop tests
The Noop tests provide a framework for integration testing of the composition
layers comprising the modular tasks. For details, see the framework's
[documentation](http://fuel-noop-fixtures.readthedocs.org/en/latest/).
## Development
--------------
* [Fuel Development Documentation](https://docs.fuel-infra.org/fuel-dev/)
* [Fuel How to Contribute](https://wiki.openstack.org/wiki/Fuel/How_to_contribute)
## Core Reviewers
-----------------
* [Fuel Cores](https://review.openstack.org/#/admin/groups/209,members)
* [Fuel Library Cores](https://review.openstack.org/#/admin/groups/658,members)
## Contributors
---------------
* [Stackalytics](http://stackalytics.com/?release=all&project_type=all&module=fuel-library&metric=commits)

README.rst

@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

Rakefile

@ -1,472 +0,0 @@
###############################################################################
#
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
###############################################################################
#
# Rakefile
# This file implements the lint and spec tasks for rake so that it will loop
# through all of the puppet modules in the deployment/puppet/ folder and run
# the respective lint or test tasks for each module. It will then return 0 if
# there are no issues or return 1 if any of the modules fail.
#
# Authors: Alex Schultz <aschultz@mirantis.com>
# Maksim Malchuk <mmalchuk@mirantis.com>
#
require 'rake'
# Use puppet-syntax for the syntax tasks
require 'puppet-syntax/tasks/puppet-syntax'
PuppetSyntax.exclude_paths ||= []
PuppetSyntax.exclude_paths << "**/spec/fixtures/**/*"
PuppetSyntax.exclude_paths << "**/pkg/**/*"
PuppetSyntax.exclude_paths << "**/vendor/**/*"
PuppetSyntax.fail_on_deprecation_notices = false
# Main task list
task :default => ["common:help"]
task :help => ["common:help"]
# RSpec task list
desc "Pull down module dependencies, run tests against Git changes and cleanup"
task :spec => ["spec:prep", "spec:gemfile", "spec:clean"]
task :spec_all => ["spec:all"]
task :spec_prep => ["spec:prep"]
task :spec_clean => ["spec:clean"]
task :spec_standalone => ["spec:gemfile"]
# Lint task list
# TODO(aschultz): Use puppet-lint for the lint tasks
desc "Run lint tasks"
task :lint => ["lint:manual"]
task :lint_all => ["lint:all"]
task :lint_manual => ["lint:manual"]
# Syntax task list
task :syntax => ["syntax:manifests", "syntax:hiera", "syntax:files", "syntax:templates"]
task :syntax_files => ["syntax:files"]
# The common tasks are for internal use, so their descriptions are commented
namespace :common do
# desc "Task to load list of modules to skip"
task :load_skip_file, :skip_file do |t, args|
def mod(name, options = {})
@modules ||= {}
module_name = name.split('/', 2).last
@modules[module_name] = options
end
module_list = []
library_dir = Dir.pwd
Dir["#{library_dir}/deployment/**/Puppetfile"].each do |puppetfile|
if File.exists?(puppetfile)
eval(File.read(puppetfile))
@modules.each { |module_name|
module_list << module_name[0]
}
end
end
# TODO(aschultz): Fix all modules so they have tests and we no longer need
# this file to exclude bad module tests
if not args[:skip_file].nil? and File.exists?(args[:skip_file])
File.open(args[:skip_file], 'r').each_line { |line|
next if line =~ /^\s*#/
module_list << line.chomp
}
end
$skip_module_list = module_list.uniq
end
# desc "Task to generate a list of modules with Git changes"
task :modulesgit, [:skip_file] do |t, args|
args.with_defaults(:skip_file => nil)
$module_directories = []
if ENV['SPEC_NO_GIT_MODULES']
$git_module_directories = []
else
$git_module_directories = %x[git diff --name-only HEAD~ 2>/dev/null |\
grep -o 'deployment/puppet/[^/]*/' | sort -u].split.select {|mod| File.directory? mod}
end
if $git_module_directories.empty?
Rake::Task["common:modules"].invoke(args[:skip_file])
else
Rake::Task["common:load_skip_file"].invoke(args[:skip_file])
$stderr.puts '-'*80
$stderr.puts "Git changes found! Build modules list..."
$stderr.puts '-'*80
$git_module_directories.each do |mod|
if $skip_module_list.include?(File.basename(mod))
$stderr.puts "Skipping tests... modules.disable_rspec includes #{mod}"
next
end
$module_directories << mod
end
end
end
# desc 'Task to generate a list of all modules'
task :modules, [:skip_file] do |t, args|
args.with_defaults(:skip_file => nil)
$module_directories = []
Rake::Task["common:load_skip_file"].invoke(args[:skip_file])
$stderr.puts '-'*80
$stderr.puts "No changes found! Build modules list..."
$stderr.puts '-'*80
Dir.glob('deployment/puppet/*') do |mod|
next unless File.directory?(mod)
if $skip_module_list.include?(File.basename(mod))
$stderr.puts "Skipping tests... modules.disable_rspec includes #{mod}"
next
end
$module_directories << mod
end
end
# desc "Display the list of available rake tasks"
task :help do
system("rake -T")
end
end
# our spec tasks to loop through the modules and run the 'rake spec'
namespace :spec do
desc 'Run prep to install gems and pull down module dependencies'
task :prep do |t|
$stderr.puts '-'*80
$stderr.puts "Install gems and pull down module dependencies..."
$stderr.puts '-'*80
library_dir = Dir.pwd
ENV['GEM_HOME']="#{library_dir}/.bundled_gems"
system("gem install bundler --no-rdoc --no-ri --verbose")
system("./deployment/update_modules.sh")
end
desc 'Remove module dependencies'
task :clean do |t|
system("./deployment/remove_modules.sh")
end
desc "Pull down module dependencies, run tests against all modules and cleanup"
task :all do |t|
Rake::Task["spec:prep"].invoke
Rake::Task["common:modules"].invoke('./utils/jenkins/modules.disable_rspec')
Rake::Task["spec:gemfile"].invoke
Rake::Task["spec:clean"].invoke
end
desc 'Run spec tasks via module bundler with Gemfile'
task :gemfile do |t|
if $module_directories.nil?
Rake::Task["common:modulesgit"].invoke('./utils/jenkins/modules.disable_rspec')
end
library_dir = Dir.pwd
ENV['GEM_HOME']="#{library_dir}/.bundled_gems"
status = true
report = {}
$module_directories.each do |mod|
next unless File.exists?("#{mod}/Gemfile")
$stderr.puts '-'*80
$stderr.puts "Running tests for #{mod}"
$stderr.puts '-'*80
Dir.chdir(mod)
begin
result = system("bundle exec rake spec")
if !result
status = false
report[mod] = false
$stderr.puts "!"*80
$stderr.puts "Unit tests failed for #{mod}"
$stderr.puts "!"*80
else
report[mod] = true
end
rescue Exception => e
$stderr.puts "ERROR: Unable to run tests for #{mod}, #{e.message}"
status = false
end
Dir.chdir(library_dir)
end
if report.any?
$stdout.puts '=' * 80
max_module_length = report.keys.max_by { |key| key.length }.length
report.each do |mod, success|
$stdout.puts "#{mod.ljust(max_module_length)} - #{success.to_s.upcase}"
end
end
$stdout.puts '=' * 80
fail unless status
end
end
# our lint tasks to loop through the modules and run the puppet-lint
namespace :lint do
desc 'Find all the puppet modules and run puppet-lint on them'
task :all do |t|
Rake::Task["common:modules"].invoke('./utils/jenkins/modules.disable_rake-lint')
Rake::Task["lint:manual"].invoke
end
desc 'Find the puppet modules with Git changes and run puppet-lint on them'
task :manual do |t|
if $module_directories.nil?
Rake::Task["common:modulesgit"].invoke('./utils/jenkins/modules.disable_rake-lint')
end
# lint checks to skip if no Gemfile or Rakefile
skip_checks = [ "--no-80chars-check",
"--no-autoloader_layout-check",
"--no-only_variable_string-check",
"--no-2sp_soft_tabs-check",
"--no-trailing_whitespace-check",
"--no-hard_tabs-check",
"--no-class_inherits_from_params_class-check",
"--with-filename"]
$stderr.puts '-'*80
$stderr.puts "Install gems..."
$stderr.puts '-'*80
library_dir = Dir.pwd
ENV['GEM_HOME']="#{library_dir}/.bundled_gems"
system("gem install bundler --no-rdoc --no-ri --verbose")
status = true
$module_directories.each do |mod|
# TODO(aschultz): uncomment this when :rakefile works
#next if File.exists?("#{mod}/Rakefile")
$stderr.puts '-'*80
$stderr.puts "Running lint for #{mod}"
$stderr.puts '-'*80
Dir.chdir(mod)
begin
result = true
Dir.glob("**/**.pp") do |puppet_file|
result = false unless system("puppet-lint #{skip_checks.join(" ")} #{puppet_file}")
end
if !result
status = false
$stderr.puts "!"*80
$stderr.puts "puppet-lint failed for #{mod}"
$stderr.puts "!"*80
end
rescue Exception => e
$stderr.puts "ERROR: Unable to run lint for #{mod}, #{e.message}"
status = false
end
Dir.chdir(library_dir)
end
fail unless status
end
# TODO(aschultz): fix all the modules with Rakefiles to make sure they work
# then include this task
desc 'Find the puppet modules with Git changes and run lint tasks using existing Gemfile/Rakefile'
task :rakefile do |t|
Rake::Task["common:modulesgit"].invoke('./utils/jenkins/modules.disable_rake-lint')
$stderr.puts '-'*80
$stderr.puts "Install gems..."
$stderr.puts '-'*80
library_dir = Dir.pwd
ENV['GEM_HOME']="#{library_dir}/.bundled_gems"
system("gem install bundler --no-rdoc --no-ri --verbose")
status = true
$module_directories.each do |mod|
next unless File.exists?("#{mod}/Rakefile")
$stderr.puts '-'*80
$stderr.puts "Running lint for #{mod}"
$stderr.puts '-'*80
Dir.chdir(mod)
begin
result = system("bundle exec rake lint >/dev/null")
$stderr.puts result
if !result
status = false
$stderr.puts "!"*80
$stderr.puts "rake lint failed for #{mod}"
$stderr.puts "!"*80
end
rescue Exception => e
$stderr.puts "ERROR: Unable to run lint for #{mod}, #{e.message}"
status = false
end
Dir.chdir(library_dir)
end
fail unless status
end
end
# Our syntax checking jobs
# The tasks here are an extension on top of the existing puppet helper ones.
namespace :syntax do
desc 'Syntax check for files/ folder'
task :files do |t|
$stderr.puts '---> syntax:files'
status = true
Dir.glob('./files/**/*') do |ocf_file|
next if File.directory?(ocf_file)
mime_type =`file --mime --brief #{ocf_file}`
begin
case mime_type.to_s
when /shellscript/
result = system("bash -n #{ocf_file}")
when /ruby/
result = system("ruby -c #{ocf_file}")
when /python/
result = system("python -m py_compile #{ocf_file}")
when /perl/
result = system("perl -c #{ocf_file}")
else
result = true
$stderr.puts "Unknown file format, skipping syntax check for #{ocf_file}"
end
rescue Exception => e
result = false
$stderr.puts "Checking #{ocf_file} failed with #{e.message}"
end
if !result
status = false
$stderr.puts "!"*80
$stderr.puts "Syntax check failed for #{ocf_file}"
$stderr.puts "!"*80
end
end
fail unless status
end
end
def noop(options = {})
require_relative 'tests/noop/fuel-noop-fixtures/noop_tests'
ENV['SPEC_ROOT_DIR'] = 'tests/noop/fuel-noop-fixtures'
ENV['SPEC_DEPLOYMENT_DIR'] = 'deployment'
ENV['SPEC_HIERA_DIR'] = 'tests/noop/fuel-noop-fixtures/hiera'
ENV['SPEC_FACTS_DIR'] = 'tests/noop/fuel-noop-fixtures/facts'
ENV['SPEC_REPORTS_DIR'] = 'tests/noop/fuel-noop-fixtures/reports'
ENV['SPEC_SPEC_DIR'] = 'tests/noop/spec/hosts'
ENV['SPEC_TASK_DIR'] = 'deployment/puppet/osnailyfacter/modular'
ENV['SPEC_MODULE_PATH'] = 'deployment/puppet'
ARGV.shift
ARGV.shift if ARGV.first == '--'
ENV['SPEC_TASK_DEBUG'] = 'yes'
manager = Noop::Manager.new
manager.main options
end
# options can be passed like this:
# rake noop:run:all -- -s apache/apache -y neut_tun.ceph.murano.sahara.ceil-controller
desc 'Prepare the Noop environment and run all tests'
task :noop => %w(noop:setup noop:run:bg noop:show:failed)
namespace :noop do
desc 'Prepare the Noop tests environment'
task :setup do
system 'tests/noop/setup_and_diagnostics.sh'
end
desc 'Run Noop tests self-check'
task :test do
noop(self_check: true)
end
task :run => %w(noop:run:bg)
namespace :run do
desc 'Run all Noop tests'
task :bg do
noop(parallel_run: 'auto', debug: true)
end
desc 'Run all Noop tests in the foreground'
task :fg do
noop(parallel_run: 0, debug: true)
end
desc 'Run only failed tasks'
task :failed do
noop(run_failed_tasks: true, debug: true)
end
end
task :show => %w(noop:show:all)
namespace :show do
desc 'Show all task manifests'
task :tasks do
noop(list_tasks: true)
end
desc 'Show all Hiera files'
task :hiera do
noop(list_hiera: true)
end
desc 'Show all facts files'
task :facts do
noop(list_facts: true)
end
desc 'Show all spec files'
task :specs do
noop(list_specs: true)
end
desc 'Show the tasks list'
task :all do
noop(pretend: true)
end
desc 'Show previous reports'
task :reports do
noop(load_saved_reports: true)
end
desc 'Show failed tasks'
task :failed do
noop(load_saved_reports: true, report_only_failed: true, report_only_tasks: true)
end
end
end

debian/changelog

@ -1,29 +0,0 @@
fuel-library10.0 (10.0.0-1) trusty; urgency=low
* Bump version to 10.0
-- Sergey Kulanov <skulanov@mirantis.com> Mon, 21 Mar 2016 12:48:56 +0200
fuel-library9.0 (9.0.0-2) trusty; urgency=low
* Add librarian-puppet-simple support
-- Vitaly Parakhin <vparakhin@mirantis.com> Fri, 18 Dec 2015 22:50:58 +0200
fuel-library9.0 (9.0.0-1) trusty; urgency=low
* Bump version to 9.0
-- Sergey Kulanov <skulanov@mirantis.com> Fri, 18 Dec 2015 22:50:58 +0200
fuel-library8.0 (8.0.0-1ubuntu1) mos8.0; urgency=medium
* Bump version to 8.0
-- Vladimir Kuklin <vkuklin@mirantis.com> Thu, 03 Sep 2015 13:40:12 +0300
fuel-library7.0 (7.0.0-1) trusty; urgency=low
* Update version to 7.0
-- Igor Shishkin <ishishkin@mirantis.com> Wed, 22 Apr 2015 14:44:00 +0300

debian/compat

@ -1 +0,0 @@
7

debian/control

@ -1,60 +0,0 @@
Source: fuel-library10.0
Section: admin
Priority: optional
Maintainer: Mirantis Product <product@mirantis.com>
Build-Depends: debhelper (>= 9),
python-all,
librarian-puppet-simple,
# TODO(sgolovatiuk): ruby-thor should be removed
# when https://bugs.launchpad.net/ubuntu/+source/librarian-puppet-simple/+bug/1588246
# is merged (librarian-puppet-simple 0.0.5-2)
ruby-thor,
git
Standards-Version: 3.9.2
Package: fuel-library10.0
Provides: fuel-library
Architecture: all
Depends: ruby | ruby-interpreter, puppet, ${misc:Depends}, ${python:Depends}
Description: Set of puppet scripts for Fuel
Fuel is an open source deployment and management tool for OpenStack. Developed
as an OpenStack community effort, it provides an intuitive, GUI-driven
experience for deployment and management of OpenStack, related community
projects and plug-ins.
.
Fuel brings consumer-grade simplicity to streamline and accelerate the
otherwise time-consuming, often complex, and error-prone process of deploying,
testing and maintaining various configuration flavors of OpenStack at scale.
Unlike other platform-specific deployment or management utilities, Fuel is an
upstream OpenStack project that focuses on automating the deployment and
testing of OpenStack and a range of third-party options, so it's not
compromised by hard bundling or vendor lock-in.
.
This package contains deployment manifests and code to execute provisioning of
master and slave nodes.
Package: fuel-ha-utils
Architecture: all
Depends: ${misc:Depends}, ${shlibs:Depends}, python-keystoneclient, python-neutronclient
Description: Fuel Library HA utils
.
Package: fuel-misc
Architecture: all
Depends: ${misc:Depends}, ${shlibs:Depends}, socat
Description: Misc Fuel library scripts
.
Package: fuel-rabbit-fence
Architecture: all
Depends: ${misc:Depends}, ${shlibs:Depends}, dbus, python-gobject-2, python-gobject, python-dbus, python-daemon, rabbitmq-server, corosync-notifyd
Description: Fuel RabbitMQ fencing utilities
.
Package: fuel-umm
Architecture: all
Depends: ${misc:Depends}, ${shlibs:Depends}
Description: Unified maintenance mode
This package provides the ability to put the operating system into a state in
which only a critical set of working services needed for basic network and disk
operations is running. A node in MM state is also reachable via ssh from the network.


@ -1,6 +0,0 @@
files/fuel-ha-utils/ocf/* /usr/lib/ocf/resource.d/fuel
files/fuel-ha-utils/policy/set_rabbitmq_policy /usr/sbin
files/fuel-ha-utils/tools/wsrepclustercheckrc /etc
files/fuel-ha-utils/tools/galeracheck /usr/bin
files/fuel-ha-utils/tools/swiftcheck /usr/bin
files/fuel-ha-utils/tools/rabbitmq-dump-clean.py /usr/sbin


@ -1,3 +0,0 @@
files/fuel-misc/haproxy-status.sh /usr/bin
files/fuel-misc/logrotate /usr/bin
files/fuel-misc/generate_vms.sh /usr/bin


@ -1 +0,0 @@
files/rabbit-fence/rabbit-fence.py /usr/bin/


@ -1,16 +0,0 @@
[Unit]
Description=RabbitMQ-fence daemon
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStartPre=/bin/mkdir -p /var/run/rabbitmq
ExecStartPre=/bin/chown -R rabbitmq /var/run/rabbitmq
ExecStart=/usr/bin/rabbit-fence.py
[Install]
WantedBy=multi-user.target


@ -1,14 +0,0 @@
description "RabbitMQ-fence daemon"
start on runlevel [2345]
stop on runlevel [016]
respawn
respawn limit 10 5
pre-start script
mkdir -p /var/run/rabbitmq
chown -R rabbitmq /var/run/rabbitmq
end script
exec /usr/bin/rabbit-fence.py


@ -1,3 +0,0 @@
files/fuel-umm/root/* /
files/fuel-umm/ub14/* /usr/lib/umm/
files/fuel-umm/upstart/* /etc/init/


@ -1,2 +0,0 @@
#!/bin/bash
/usr/lib/umm/umm-install


@ -1,6 +0,0 @@
#!/bin/bash
rm -f /etc/init/rc-sysinit.override
rm -f /etc/init/failsafe.override
rm -f /etc/init/umm.conf
rm -f /etc/grub.d/55_umm
update-grub

debian/rules

@ -1,36 +0,0 @@
#!/usr/bin/make -f
export FUEL_RELEASE=10.0
export FUEL_LIB_DEST=/etc/puppet
export FULL_FUEL_LIB_DEST=/debian/fuel-library$(FUEL_RELEASE)$(FUEL_LIB_DEST)
%:
dh $@ --with python2
override_dh_auto_build:
if test -s $(CURDIR)/upstream_modules.tar.gz ; then \
tar xzvf $(CURDIR)/upstream_modules.tar.gz -C $(CURDIR)/deployment/puppet/ ; \
else \
bash -x $(CURDIR)/deployment/update_modules.sh ; \
fi
dh_auto_build
override_dh_fixperms:
chmod 755 debian/fuel-ha-utils/usr/lib/ocf/resource.d/fuel/*
dh_fixperms
override_dh_install:
dh_install
mv debian/fuel-misc/usr/bin/logrotate debian/fuel-misc/usr/bin/fuel-logrotate
# Install fuel-library
mkdir -p $(CURDIR)$(FULL_FUEL_LIB_DEST)/modules
mkdir -p $(CURDIR)$(FULL_FUEL_LIB_DEST)/manifests
cp -r $(CURDIR)/deployment/puppet/* $(CURDIR)$(FULL_FUEL_LIB_DEST)/modules
cp deployment/Puppetfile $(CURDIR)$(FULL_FUEL_LIB_DEST)/modules
#LP1515988
find $(CURDIR)$(FULL_FUEL_LIB_DEST)/modules -maxdepth 2 -type d \( -name .git -or -name spec \) -exec rm -rf '{}' +
# FIXME (vparakhin): fix for dh_md5sums "Argument list too long"
# Remove this as soon as upstream modules are packaged separately
override_dh_md5sums:


@ -1 +0,0 @@
3.0 (quilt)

deployment/.gitignore

@ -1,50 +0,0 @@
Puppetfile.lock
.tmp
puppet/module_versions
# add modules being managed via librarian-puppet under here
puppet/aodh
puppet/apache
puppet/apt
puppet/ceilometer
puppet/ceph
puppet/cinder
puppet/concat
puppet/datacat
puppet/filemapper
puppet/firewall
puppet/galera
puppet/glance
puppet/heat
puppet/horizon
puppet/inifile
puppet/ironic
puppet/keystone
puppet/mcollective
puppet/memcached
puppet/mongodb
puppet/murano
puppet/mysql
puppet/neutron
puppet/nova
puppet/ntp
puppet/openssl
puppet/openstacklib
puppet/oslo
puppet/postgresql
puppet/rsync
puppet/rsyslog
puppet/sahara
puppet/ssh
puppet/staging
puppet/stdlib
puppet/swift
puppet/sysctl
puppet/tftp
puppet/vcsrepo
puppet/xinetd
puppet/corosync
puppet/rabbitmq
puppet/pacemaker
puppet/vswitch
puppet/limits


@ -1,3 +0,0 @@
--color
--format doc
--require spec_helper


@ -1,15 +0,0 @@
source 'https://rubygems.org'
gem 'pry'
# this is used to bundle install librarian puppet for the deployment folder
group :development, :test do
gem 'librarian-puppet-simple', :require => false
end
if ENV['PUPPET_GEM_VERSION']
gem 'puppet', ENV['PUPPET_GEM_VERSION'], :require => false
else
gem 'puppet', :require => false
end
# vim: set ft=ruby ts=2 sw=2 tw=0 et :


@ -1,168 +0,0 @@
#!/usr/bin/env ruby
#^syntax detection
# See https://github.com/bodepd/librarian-puppet-simple for additional docs
#
# Important information for fuel-library:
# With librarian-puppet-simple you *must* remove the existing folder from the
# repo prior to trying to run librarian-puppet as it will not remove the folder
# for you and you may run into some errors.
#
############
# Examples #
############
# From git repo
# mod 'stdlib',
# :git => 'https://github.com/puppetlabs/puppetlabs-stdlib.git',
# :ref => '4.6.x'
#
# From tarball
# mod 'stdlib',
# :tarball => 'https://forgeapi.puppetlabs.com/v3/files/puppetlabs-stdlib-4.6.0.tar.gz'
#
#
# Pull in puppetlabs-stdlib
mod 'stdlib',
:git => 'https://github.com/fuel-infra/puppetlabs-stdlib.git',
:ref => '4.14.0'
# Pull in puppet-pacemaker modules
mod 'pacemaker',
:git => 'https://github.com/fuel-infra/puppet-pacemaker.git',
:ref => '1.0.9'
# Pull in puppetlabs-concat
mod 'concat',
:git => 'https://github.com/fuel-infra/puppetlabs-concat.git',
:ref => '2.2.0'
# Pull in puppetlabs-inifile
mod 'inifile',
:git => 'https://github.com/fuel-infra/puppetlabs-inifile.git',
:ref => '1.6.0'
# Pull in puppetlabs-xinetd
mod 'xinetd',
:git => 'https://github.com/fuel-infra/puppetlabs-xinetd.git',
:ref => '1.5.0'
# Pull in saz-ssh
mod 'ssh',
:git => 'https://github.com/fuel-infra/saz-ssh.git',
:ref => 'v2.9.1'
# Pull in puppetlabs-ntp
mod 'ntp',
:git => 'https://github.com/fuel-infra/puppetlabs-ntp.git',
:ref => '4.2.0'
# Pull in puppetlabs-apache
mod 'apache',
:git => 'https://github.com/fuel-infra/puppetlabs-apache.git',
:ref => '1.11.0'
# Pull in puppetlabs-apt
mod 'apt',
:git => 'https://github.com/fuel-infra/puppetlabs-apt.git',
:ref => '2.3.0'
# Pull in puppetlabs-firewall
mod 'firewall',
:git => 'https://github.com/fuel-infra/puppetlabs-firewall.git',
:ref => '1.8.0-mos-rc1'
# Pull in saz-memcached
mod 'memcached',
:git => 'https://github.com/fuel-infra/puppet-memcached.git',
:ref => 'v2.8.1'
# Pull in duritong-sysctl
mod 'sysctl',
:git => 'https://github.com/fuel-infra/puppet-sysctl.git',
:ref => 'v0.0.11'
# Pull in nanliu-staging
mod 'staging',
:git => 'https://github.com/fuel-infra/puppet-staging.git',
:ref => '1.0.4'
# Pull in puppetlabs-vcsrepo
mod 'vcsrepo',
:git => 'https://github.com/fuel-infra/puppetlabs-vcsrepo.git',
:ref => '1.5.0'
# Pull in puppetlabs-postgresql
mod 'postgresql',
:git => 'https://github.com/fuel-infra/puppetlabs-postgresql.git',
:ref => '4.8.0'
# Pull in saz-rsyslog
mod 'rsyslog',
:git => 'https://github.com/fuel-infra/puppet-rsyslog.git',
:ref => 'v5.0.0'
# Pull in puppet-openssl
mod 'openssl',
:git => 'https://github.com/fuel-infra/puppet-openssl.git',
:ref => '1.7.1'
# Pull in puppetlabs-mongodb
mod 'mongodb',
:git => 'https://github.com/fuel-infra/puppetlabs-mongodb.git',
:ref => '0.16.0'
# Pull in puppetlabs-rsync
mod 'rsync',
:git => 'https://github.com/fuel-infra/puppetlabs-rsync.git',
:ref => '0.4.0-mos-rc1'
# Pull in puppet-filemapper
mod 'filemapper',
:git => 'https://github.com/fuel-infra/puppet-filemapper.git',
:ref => 'v2.0.1'
# Pull in puppetlabs-tftp
mod 'tftp',
:git => 'https://github.com/fuel-infra/puppetlabs-tftp.git',
:ref => '0.2.3'
# Pull in richardc-datacat
mod 'datacat',
:git => 'https://github.com/fuel-infra/richardc-datacat.git',
:ref => '0.6.2'
# Pull in puppet-mcollective
mod 'mcollective',
:git => 'https://github.com/fuel-infra/puppet-mcollective.git',
:ref => 'v2.1.1'
# Pull in puppet-corosync
# FIXME(bogdando) We need the 0.8.0-29201ff. Use the 0.9.0 once released.
mod 'corosync',
:git => 'https://github.com/fuel-infra/puppet-corosync.git',
:ref => '0.8.0-mos-rc3'
# Pull in puppetlabs-rabbitmq
mod 'rabbitmq',
:git => 'https://github.com/fuel-infra/puppetlabs-rabbitmq.git',
:ref => '5.6.0'
# Pull in puppetlabs-mysql
mod 'mysql',
:git => 'https://github.com/fuel-infra/puppetlabs-mysql.git',
:ref => '3.10.0'
# Pull in michaeltchapman-galera
mod 'galera',
:git => 'https://github.com/fuel-infra/puppet-galera.git',
:ref => '0.0.6'
# Pull in puppet-vswitch
mod 'vswitch',
:git => 'https://github.com/fuel-infra/puppet-vswitch.git',
:ref => '6.1.0'
# Pull in saz-limits
mod 'limits',
:git => 'https://github.com/fuel-infra/puppet-limits.git',
:ref => 'v2.5.0'


@ -1,5 +0,0 @@
fixtures:
repositories:
stdlib: "#{source_dir}/../stdlib"
symlinks:
cgroups: "#{source_dir}"


@ -1 +0,0 @@
spec/fixtures


@ -1,2 +0,0 @@
-f doc
-c


@ -1,19 +0,0 @@
source 'https://rubygems.org'
group :development, :test do
gem 'rake', :require => false
gem 'pry', :require => false
gem 'rspec', :require => false
gem 'rspec-puppet', :require => false
gem 'puppetlabs_spec_helper', :require => false
gem 'puppet-lint', '~> 1.1'
gem 'json', :require => false
end
if puppetversion = ENV['PUPPET_GEM_VERSION']
gem 'puppet', puppetversion, :require => false
else
gem 'puppet', :require => false
end
# vim:ft=ruby


@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@ -1,90 +0,0 @@
CGroups
=======
This puppet module is for configuring Control Groups on the nodes.
At the moment, it supports Ubuntu 14.04+ only.
## Classes
### Initialization
Place this module at /etc/puppet/modules/cgroups or in the directory where
your puppet modules are stored.
The 'cgroups' class has the following parameters and default values:
class { 'cgroups':
cgroups_set => {},
packages => [cgroup-bin, libcgroup1, cgroup-upstart],
}
* *cgroups_set* - user settings of Control Groups defined in the hash
format.
* *packages* - list of necessary packages for cgroups.
By default, cgroups are disabled. They will be enabled if the user specifies
limits for the cluster via the API/CLI.
### Service
This class contains all services necessary for cgroups to work.
The 'cgroup-lite' service mounts cgroups at "/sys/fs/cgroups" when it starts
and unmounts them when it stops.
The 'cgconfigparser' service parses /etc/cgconfig.conf and sets up cgroups in
/sys/fs/cgroups every time it starts.
The 'cgrulesengd' service is the CGroups Rules Engine Daemon. This daemon
distributes processes to control groups. When any process changes its
effective UID or GID, cgrulesengd inspects the list of rules loaded from the
/etc/cgrules.conf file and moves the process to the appropriate control group.
Service 'cgclassify' moves processes defined by the list of processes to given
control groups.
The 'cgroups::service' class has only the 'cgroups_set' parameter.
## Usage
To activate cgroups, the user should add a 'cgroups' section to the cluster's
settings file via the CLI. For example:
cgroups:
metadata:
group: general
label: Cgroups configuration
weight: 90
restrictions:
- condition: "true"
action: "hide"
keystone:
label: keystone
type: text
value: {"cpu":{"cpu.shares":70}}
The format of relative expressions (for example, for memory limits) is:
%percentage_value, minimal_value, maximum_value
This means that:
* the percentage value (% of the node's total memory) is calculated first
and then clamped to keep the result within the given range
* the minimal value is used if the computed percentage is lower than the
minimal value
* the maximum value is used if the computed percentage is higher than the
maximum value
Example:
%20, 2G, 20G
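With the example above, on a node that has 16G of total RAM the percentage
part evaluates to roughly 3.2G, which lies inside the 2G-20G range and is used
as-is; on a 4G node the computed 0.8G falls below the 2G minimum, so 2G is
used instead (the node sizes here are purely illustrative).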
## Documentation
Official documentation for CGroups can be found at
https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt

View File

@ -1,8 +0,0 @@
require 'rubygems'
require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet-lint/tasks/puppet-lint'
PuppetLint.configuration.fail_on_warnings = false
PuppetLint.configuration.send('disable_80chars')
PuppetLint.configuration.send('disable_class_parameter_defaults')
PuppetLint.configuration.send('disable_class_inherits_from_params_class')

View File

@ -1,66 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: cgconfig
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Configures CGroups
### END INIT INFO
start_service() {
if is_running; then
echo "cgrulesengd is running already!"
return 1
else
echo "Processing /etc/cgconfig.conf ..."
cgconfigparser -l /etc/cgconfig.conf
echo "Processing /etc/cgrules.conf ..."
cgrulesengd -vvv --logfile=/var/log/cgrulesengd.log
return 0
fi
}
stop_service() {
if is_running; then
echo "Stopping cgrulesengd ..."
pkill cgrulesengd
else
echo "cgrulesengd is not running!"
return 1
fi
}
status() {
if pgrep cgrulesengd > /dev/null; then
echo "cgrulesengd is running"
return 0
else
echo "cgrulesengd is not running!"
return 3
fi
}
is_running() {
status >/dev/null 2>&1
}
case "${1:-}" in
start)
start_service
;;
stop)
stop_service
;;
status)
status
;;
restart)
stop_service
start_service
;;
*)
echo "Usage: /etc/init.d/cgconfig {start|stop|restart|status}"
exit 2
;;
esac
exit $?

View File

@ -1,60 +0,0 @@
module Puppet::Parser::Functions
newfunction(:map_cgclassify_opts,
:type => :rvalue,
:arity => 1,
:doc => <<-'ENDOFDOC'
@desc Derive cgclassify options from the cgroups data
@params cgroups hash
{
'service-x' => {
'memory' => {
'memory.soft_limit_in_bytes' => 500,
},
'cpu' => {
'cpu.shares' => 60,
},
},
'service-z' => {
'memory' => {
'memory.soft_limit_in_bytes' => 500,
'memory.limit_in_bytes' => 100,
}
}
}
@return mapped hash of cgclassify opts
{
'service-x' => ['memory:/service-x', 'cpu:/service-x'],
'service-z' => ['memory:/service-z']
}
@example map_cgclassify_opts(hiera_hash(cgroups))
ENDOFDOC
) do |args|
raise(
Puppet::ParseError,
"map_cgclassify_opts(): expected a hash, got #{args[0]} type #{args[0].class}"
) unless args[0].is_a? Hash
cgroups_config = args[0]
resources = Hash.new { |_h, _k| _h[_k] = {:cgroup => []} }
begin
cgroups_config.each do |service, cgroups|
cgroups.each_key do |ctrl|
resources[service][:cgroup] << "#{ctrl}:/#{service}"
end
end
rescue => e
Puppet.debug "Couldn't map cgroups config: #{cgroups_config}"
return {}
end
resources
end
end

View File

@ -1,83 +0,0 @@
require 'json'
module CgroupsSettings
require 'facter'
# a value is valid if it is an integer or
# matches the pattern: %percent, min_value, max_value
def self.handle_value(option, value)
# convert a value given in megabytes to bytes for memory limits
return handle_memory(value) if option.to_s.end_with? "_in_bytes"
# keep it as it is for others
return value if value.is_a?(Integer)
end
def self.handle_memory(value)
return mb_to_bytes(value) if value.is_a?(Integer)
if value.is_a?(String) and matched_v = value.match(/%(\d+),\s*(\d+),\s*(\d+)/)
percent, min, max = matched_v[1..-1].map(&:to_i)
total_memory = Facter.value(:memorysize_mb)
res = (total_memory.to_f / 100.0) * percent.to_f
return mb_to_bytes([min, max, res].sort[1]).to_i
end
end
def self.mb_to_bytes(value)
return value * 1024 * 1024
end
end
Puppet::Parser::Functions::newfunction(:prepare_cgroups_hash, :type => :rvalue, :arity => 1, :doc => <<-EOS
This function takes a hash that maps services to their cgroups settings (in JSON format) and serializes it.
ex: prepare_cgroups_hash(hiera('cgroups'))
Following input:
{
'metadata' => {
'always_editable' => true,
'group' => 'general',
'label' => 'Cgroups',
'weight' => 50
},
cinder-api: {"blkio":{"blkio.weight":500}}
}
will be transformed to:
[{"cinder-api"=>{"blkio"=>{"blkio.weight"=>500}}}]
Pattern for value field:
{
group1 => {
param1 => value1,
param2 => value2
},
group2 => {
param3 => value3,
param4 => value4
}
}
EOS
) do |argv|
raise(Puppet::ParseError, "prepare_cgroups_hash(...): Wrong type of argument. Hash is expected.") unless argv[0].is_a?(Hash)
# wipe out UI metadata
cgroups = argv[0].tap { |el| el.delete('metadata') }
serialized_data = {}
cgroups.each do |service, settings|
hash_settings = JSON.parse(settings) rescue raise("'#{service}': JSON parsing error for : #{settings}")
hash_settings.each do |group, options|
raise("'#{service}': group '#{group}' options is not a HASH instance") unless options.is_a?(Hash)
options.each do |option, value|
options[option] = CgroupsSettings.handle_value(option, value)
raise("'#{service}': group '#{group}': option '#{option}' has wrong value") if options[option].nil?
end
end
serialized_data[service] = hash_settings unless hash_settings.empty?
end
serialized_data
end
# vim: set ts=2 sw=2 et :

View File

@ -1,118 +0,0 @@
Puppet::Type.type(:cgclassify).provide(:cgclassify) do
desc 'Move running task(s) to given cgroups'
commands({
:cgclassify => 'cgclassify',
:lscgroup => 'lscgroup',
:pidof => 'pidof',
:ps => 'ps',
})
defaultfor :kernel => :linux
confine :kernel => :linux
def self.instances
services = Hash.new { |_h, _k| _h[_k] = [] }
# get custom cgroups
cgroups = lscgroup.split("\n").reject { |cg| cg.end_with? '/' }
cgroups.each do |cgname|
services_in_cgroup = []
cgroup_path = cgname.delete ':'
tasks = File.open("#{self.cg_mount_point}/#{cgroup_path}/tasks").read.split("\n")
tasks.each do |process|
begin
# get process name by pid
services_in_cgroup << ps('-p', process, '-o', 'comm=').chomp
rescue Puppet::ExecutionFailure => e
Puppet.debug "[#{__method__}/#{caller[0][/\S+$/]}] #{e}"
next
end
end
services_in_cgroup.uniq.each do |s|
services[s] << cgname
end
end
services.collect do |name, cg|
new(
:ensure => :present,
:name => name,
:cgroup => cg,
)
end
end
# We iterate over each service entry in the catalog and compare it against
# the contents of the property_hash generated by self.instances
def self.prefetch(resources)
services = instances
resources.each do |service, resource|
if provider = services.find { |s| s.name == service }
resources[service].provider = provider
end
end
end
mk_resource_methods
def cgroup=(c_groups)
cg_opts = []
cg_remove = @property_hash[:cgroup] - c_groups
cg_add = c_groups - @property_hash[:cgroup]
# collect all the changes
cg_remove.each { |cg| cg_opts << cg[/(\S+):\//] }
cg_add.each { |cg| cg_opts << cg }
cgclassify_cmd(cg_opts, @resource[:name]) unless cg_opts.empty?
end
def create
cg_opts = @resource[:cgroup] || []
cgclassify_cmd(cg_opts, @resource[:name], @resource[:sticky])
@property_hash[:ensure] = :present
@property_hash[:cgroup] = @resource[:cgroup]
exists?
end
def destroy
# set root cgroup for all controllers
# if it hasn't been defined
cg_opts = @resource[:cgroup].map { |cg| cg[/(\S+):\//] } rescue ['*:/']
cgclassify_cmd(cg_opts, @resource[:name])
end
def exists?
@property_hash[:ensure] == :present
end
def self.cg_mount_point
'/sys/fs/cgroup'
end
private
def cgclassify_cmd(cgroups, service, sticky = nil)
self.class.cgclassify_cmd(cgroups, service, sticky)
end
def self.cgclassify_cmd(cgroups, service, sticky = nil)
pidlist = pidof('-x', service).split
cg_opts = cgroups.map { |cg| ['-g', cg]}.flatten
cg_opts << sticky if sticky
cgclassify(cg_opts, pidlist)
rescue Puppet::ExecutionFailure => e
Puppet.debug "[#{__method__}/#{caller[0][/\S+$/]}] #{e}"
false
end
end

View File

@ -1,28 +0,0 @@
Puppet::Type.newtype(:cgclassify) do
@doc = 'Move running task(s) to given cgroups'
ensurable
newparam(:name, :namevar => true) do
desc 'The name of the service to manage.'
end
newproperty(:cgroup, :array_matching => :all) do
desc 'Defines the control group where the task will be moved'
newvalues(/^\S+:\/\S*$/)
def insync?(is)
is.sort == should.sort
end
end
newparam(:sticky, :boolean => true) do
desc 'Prevents cgred from reassigning child processes'
newvalues(:true, :false)
munge do |value|
value ? '--sticky' : '--cancel-sticky'
end
end
end

View File

@ -1,52 +0,0 @@
# == Class: cgroups
#
# CGroups is a Linux kernel feature that manages the resource usage (CPU, memory,
# disk I/O, network, etc.) of a collection of processes.
#
# === Parameters
#
# [*cgroups_set*]
# (required) Hiera hash with cgroups settings
# [*packages*]
# (required) Names of packages for CGroups
#
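# === Examples
#
# A minimal, illustrative declaration (the service name and limit below are
# placeholders, not module defaults):
#
#  class { '::cgroups':
#    cgroups_set => {
#      'keystone' => {
#        'cpu' => { 'cpu.shares' => 70 },
#      },
#    },
#  }
#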
class cgroups(
$cgroups_set = {},
$packages = $cgroups::params::packages,
) inherits cgroups::params {
validate_hash($cgroups_set)
ensure_packages($packages, { tag => 'cgroups' })
File {
ensure => file,
owner => 'root',
group => 'root',
mode => '0644',
notify => Service['cgconfig'],
}
file { '/etc/cgconfig.conf':
content => template('cgroups/cgconfig.conf.erb'),
tag => 'cgroups',
}
file { '/etc/cgrules.conf':
content => template('cgroups/cgrules.conf.erb'),
tag => 'cgroups',
}
file { '/etc/init.d/cgconfig':
mode => '0755',
source => "puppet:///modules/${module_name}/cgconfig.init",
tag => 'cgroups',
}
class { '::cgroups::service':
cgroups_settings => $cgroups_set,
}
Package <| tag == 'cgroups' |> -> File <| tag == 'cgroups' |>
Service['cgconfig'] -> Cgclassify <||>
}

View File

@ -1,12 +0,0 @@
class cgroups::params {
case $::osfamily {
'Debian': {
$packages = ['cgroup-bin', 'libcgroup1']
}
default: {
fail("Unsupported platform")
}
}
}

View File

@ -1,16 +0,0 @@
class cgroups::service (
$cgroups_settings = {},
) {
service { 'cgconfig':
ensure => running,
enable => true,
provider => 'init',
}
$cgclass_res = map_cgclassify_opts($cgroups_settings)
unless empty($cgclass_res) {
create_resources('cgclassify', $cgclass_res, { 'ensure' => present })
}
}

View File

@ -1,42 +0,0 @@
require 'spec_helper'
describe 'cgroups', :type => :class do
context "on a Debian OS" do
let :facts do
{
:osfamily => 'Debian',
:operatingsystem => 'Ubuntu',
}
end
let :file_defaults do
{
:ensure => :file,
:owner => 'root',
:group => 'root',
:mode => '0644',
:notify => 'Service[cgconfig]',
:tag => 'cgroups',
}
end
let (:params) {{ :cgroups_set => {} }}
it { is_expected.to compile }
it {
should contain_class('cgroups::service').with(
:cgroups_settings => params[:cgroups_set])
}
%w(libcgroup1 cgroup-bin).each do |cg_pkg|
it { is_expected.to contain_package(cg_pkg) }
end
%w(/etc/cgconfig.conf /etc/cgrules.conf).each do |cg_file|
it { is_expected.to contain_file(cg_file).with(file_defaults) }
it { p catalogue.resource 'file', cg_file }
end
it { is_expected.to contain_file('/etc/init.d/cgconfig').with(file_defaults.merge({:mode => '0755'})) }
end
end

View File

@ -1,14 +0,0 @@
require 'spec_helper'
describe 'cgroups::service', :type => :class do
context "on a Debian OS" do
let :facts do
{
:osfamily => 'Debian',
:operatingsystem => 'Ubuntu',
}
end
it { is_expected.to contain_service('cgconfig') }
end
end

View File

@ -1,45 +0,0 @@
require 'spec_helper'
describe 'map_cgclassify_opts' do
let :input_data do
{
'nova-api' => {
'memory' => {
'memory.soft_limit_in_bytes' => 500,
},
'cpu' => {
'cpu.shares' => 60,
},
},
'neutron-server' => {
'memory' => {
'memory.soft_limit_in_bytes' => 500,
'memory.limit_in_bytes' => 100,
}
}
}
end
let :mapped_options do
{
'nova-api' => {
:cgroup => ['memory:/nova-api', 'cpu:/nova-api']
},
'neutron-server' => {
:cgroup => ['memory:/neutron-server']
}
}
end
it { is_expected.not_to eq(nil) }
it { is_expected.to run.with_params().and_raise_error(ArgumentError, /Wrong number of arguments given/) }
it { is_expected.to run.with_params('string').and_raise_error(Puppet::ParseError, /expected a hash/) }
it { is_expected.to run.with_params({}).and_return({}) }
it { is_expected.to run.with_params({'service-x' => ['blkio', 0]}).and_return({}) }
it { is_expected.to run.with_params({'service-z' => {}}).and_return({}) }
it { is_expected.to run.with_params(input_data).and_return(mapped_options) }
end

View File

@ -1,249 +0,0 @@
require 'spec_helper'
describe 'prepare_cgroups_hash' do
it 'should exist' do
is_expected.not_to be_nil
end
before(:each) do
Facter.stubs(:fact).with(:memorysize_mb).returns Facter.add(:memorysize_mb) { setcode { 1024 } }
end
context "transform simple hash" do
let(:sample) {
{
'metadata' => {
'always_editable' => true,
'group' => 'general',
'label' => 'Cgroups',
'weight' => 50
},
'cinder' => '{"blkio":{"blkio.weight":500, "blkio.test":800}, "memory":{"memory.soft_limit_in_bytes":700}}',
'keystone' => '{"cpu":{"cpu.shares":70}}'
}
}
let(:result) {
{
'cinder' => {
'blkio' => {
'blkio.weight' => 500,
'blkio.test' => 800
},
'memory' => {
'memory.soft_limit_in_bytes' => 700 * 1024 * 1024
},
},
'keystone' => {
'cpu' => {
'cpu.shares' => 70
}
}
}
}
it 'should transform hash with simple values' do
is_expected.to run.with_params(sample).and_return(result)
end
end
context "transform hash with expression" do
let(:sample) {
{
'metadata' => {
'always_editable' => true,
'group' => 'general',
'label' => 'Cgroups',
'weight' => 50
},
'neutron' => '{"memory":{"memory.soft_limit_in_bytes":"%50, 300, 700"}}'
}
}
let(:result) {
{
'neutron' => {
'memory' => {
'memory.soft_limit_in_bytes' => 512 * 1024 * 1024
}
}
}
}
it 'should transform hash including expression to compute' do
is_expected.to run.with_params(sample).and_return(result)
end
end
context "transform hash with expression and return integer value" do
let(:sample) {
{
'metadata' => {
'always_editable' => true,
'group' => 'general',
'label' => 'Cgroups',
'weight' => 50
},
'neutron' => '{"memory":{"memory.soft_limit_in_bytes":"%51, 300, 700"}}'
}
}
let(:result) {
{
'neutron' => {
'memory' => {
'memory.soft_limit_in_bytes' => (522.24 * 1024 * 1024).to_i
}
}
}
}
it 'should transform hash including expression to compute and return int' do
is_expected.to run.with_params(sample).and_return(result)
end
end
context "transform hash with expression including extra whitespaces" do
let(:sample) {
{
'metadata' => {
'always_editable' => true,
'group' => 'general',
'label' => 'Cgroups',
'weight' => 50
},
'neutron' => '{"memory":{"memory.soft_limit_in_bytes":"%51, 300, 700"}}'
}
}
let(:result) {
{
'neutron' => {
'memory' => {
'memory.soft_limit_in_bytes' => (522.24 * 1024 * 1024).to_i
}
}
}
}
it 'should transform hash including expression to compute with whitespaces' do
is_expected.to run.with_params(sample).and_return(result)
end
end
context "transform hash with empty service's settings" do
let(:sample) {
{
'metadata' => {
'always_editable' => true,
'group' => 'general',
'label' => 'Cgroups',
'weight' => 50
},
'nova' => '{"memory":{"memory.soft_limit_in_bytes":700}}',
'cinder-api' => '{}'
}
}
let(:result) {
{
'nova' => {
'memory' => {
'memory.soft_limit_in_bytes' => 700 * 1024 * 1024
}
}
}
}
it 'should transform hash with empty service settings' do
is_expected.to run.with_params(sample).and_return(result)
end
end
context "wrong JSON format" do
let(:sample) {
{
'neutron' => '{"memory":{"memory.soft_limit_in_bytes":"%50, 300, 700"}}}}'
}
}
let(:result) {
{}
}
it 'should raise if settings have wrong JSON format' do
is_expected.to run.with_params(sample).and_raise_error(RuntimeError, /JSON parsing error/)
end
end
context "converting memory to megabytes only for bytes value" do
let(:sample) {
{
'neutron' => '{"memory":{"memory.swappiness": 10}}',
'nova' => '{"hugetlb":{"hugetlb.16GB.limit_in_bytes": 10}}'
}
}
let(:result) {
{
'neutron' => {
'memory' => {
'memory.swappiness' => 10
}
},
'nova' => {
'hugetlb' => {
'hugetlb.16GB.limit_in_bytes' => 10 * 1024 * 1024
}
}
}
}
it 'should convert memory values only for bytes values' do
is_expected.to run.with_params(sample).and_return(result)
end
end
context "service's cgroup settings are not a HASH" do
let(:sample) {
{
'neutron' => '{"memory": 28}'
}
}
it 'should raise if group option is not a Hash' do
is_expected.to run.with_params(sample).and_raise_error(RuntimeError, /options is not a HASH instance/)
end
end
context "cgroup limit is not an integer" do
let(:sample) {
{
'neutron' => '{"memory":{"memory.soft_limit_in_bytes":"test"}}'
}
}
it 'should raise if limit value is not an integer or template' do
is_expected.to run.with_params(sample).and_raise_error(RuntimeError, /has wrong value/)
end
end
end

View File

@ -1,30 +0,0 @@
require 'rubygems'
require 'puppetlabs_spec_helper/module_spec_helper'
fixture_path = File.expand_path(File.join(__FILE__, '..', 'fixtures'))
PROJECT_ROOT = File.expand_path('..', File.dirname(__FILE__))
$LOAD_PATH.unshift(File.join(PROJECT_ROOT, "lib"))
# Add fixture lib dirs to LOAD_PATH. Work-around for PUP-3336
if Puppet.version < '4.0.0'
Dir["#{fixture_path}/modules/*/lib"].entries.each do |lib_dir|
$LOAD_PATH << lib_dir
end
end
RSpec.configure do |c|
c.module_path = File.join(fixture_path, 'modules')
c.manifest_dir = File.join(fixture_path, 'manifests')
c.mock_with(:mocha)
c.alias_it_should_behave_like_to :it_configures, 'configures'
end
def puppet_debug_override
if ENV['SPEC_PUPPET_DEBUG']
Puppet::Util::Log.level = :debug
Puppet::Util::Log.newdestination(:console)
end
end
###

View File

@ -1,119 +0,0 @@
require 'spec_helper'
describe Puppet::Type.type(:cgclassify).provider(:cgclassify) do
let(:list_all_cgroups) {
%q(cpuset:/
cpu:/
cpu:/group_cx
cpuacct:/
memory:/
memory:/group_mx
devices:/
freezer:/
blkio:/
perf_event:/
hugetlb:/
)
}
let(:parsed_procs) { %w(service_x) }
let(:resource) {
Puppet::Type.type(:cgclassify).new(
{
:ensure => :present,
:name => 'service_x',
:cgroup => ['memory:/group_mx', 'cpu:/group_cx'],
:provider => described_class.name,
}
)
}
let(:provider) { resource.provider }
let(:instance) { provider.class.instances.first }
before :each do
provider.class.stubs(:lscgroup).returns(list_all_cgroups)
provider.class.stubs(:cgclassify).returns(true)
provider.class.stubs(:pidof).with('-x', 'service_x').returns("1 2\n")
%w(1 2).each do |pid|
provider.class.stubs(:ps).with('-p', pid, '-o', 'comm=').returns('service_x')
end
list_all_cgroups.split("\n").reject { |cg| cg.end_with? '/' }.each do |cg|
File.stubs(:open).with("/sys/fs/cgroup/#{cg.delete ':'}/tasks").returns(StringIO.new("1\n2\n"))
end
end
describe '#self.instances' do
it 'returns an array of procs/tasks in cgroups' do
procs = provider.class.instances.collect {|x| x.name }
expect(parsed_procs).to match_array(procs)
end
end
describe '#create' do
it 'moves tasks to given cgroups' do
provider.expects(:cgclassify_cmd).with(['memory:/group_mx', 'cpu:/group_cx'], 'service_x', nil)
provider.create
end
end
describe '#destroy' do
it 'moves tasks to root cgroup' do
provider.expects(:cgclassify_cmd)
provider.destroy
end
end
describe '#exists?' do
it 'checks if tasks are in cgroups' do
expect(instance.exists?).to eql true
end
end
describe '#cgroup=' do
it 'changes nothing' do
provider.set(
:ensure => :present,
:name => 'service_x',
:cgroup => ['memory:/group_mx', 'cpu:/group_cx'],
)
provider.expects(:cgclassify_cmd).times(0)
provider.cgroup=(['memory:/group_mx', 'cpu:/group_cx'])
end
it 'add tasks to cgroups' do
resource.provider.set(
:ensure => :present,
:name => 'service_x',
:cgroup => ['memory:/group_mx'],
)
provider.expects(:cgclassify_cmd).with(['cpu:/group_cx'], 'service_x')
provider.cgroup=(['memory:/group_mx', 'cpu:/group_cx'])
end
it 'removes tasks from cgroups' do
resource.provider.set(
:ensure => :present,
:name => 'service_x',
:cgroup => ['memory:/group_mx', 'cpu:/group_cx'],
)
provider.expects(:cgclassify_cmd).with(['memory:/'], 'service_x')
provider.cgroup=(['cpu:/group_cx'])
end
it 'exchanges a cgroup' do
resource.provider.set(
:ensure => :present,
:name => 'service_x',
:cgroup => ['memory:/group_mx', 'cpu:/group_cx'],
)
provider.expects(:cgclassify_cmd).with(['cpu:/', 'blkio:/group_bx'], 'service_x')
provider.cgroup=(['memory:/group_mx', 'blkio:/group_bx'])
end
end
end

View File

@ -1,50 +0,0 @@
require 'puppet'
require 'puppet/type/cgclassify'
describe 'Puppet::Type.type(:cgclassify)' do
before :each do
@cgclassify = Puppet::Type.type(:cgclassify).new(
:name => 'service_x',
:cgroup => ['memory:/group_x'],
:sticky => true,
)
end
it 'should require a name' do
expect {
Puppet::Type.type(:cgclassify).new({})
}.to raise_error Puppet::Error, 'Title or name must be provided'
end
it 'should set sticky option' do
expect(@cgclassify[:sticky]).to eq('--sticky')
end
context 'should reject invalid cgroup pattern' do
it 'with swapped /:' do
expect {
@cgclassify[:cgroup] = ['memory/:group_x']
}.to raise_error Puppet::ResourceError, /Invalid value/
end
it 'with absent controller' do
expect {
@cgclassify[:cgroup] = [':/group_x']
}.to raise_error Puppet::ResourceError, /Invalid value/
end
end
context 'should accept valid cgroup pattern' do
it 'with cpu:/group_x' do
@cgclassify[:cgroup] = ['cpu:/group_x']
expect(@cgclassify[:cgroup]).to eq(['cpu:/group_x'])
end
it 'with two cgroups at once' do
@cgclassify[:cgroup] = ['blkio:/', 'cpuset:/group_x']
expect(@cgclassify[:cgroup]).to eq(['blkio:/', 'cpuset:/group_x'])
end
end
end

View File

@ -1,48 +0,0 @@
#
#group daemons/www {
# perm {
# task {
# uid = root;
# gid = webmaster;
# }
# admin {
# uid = root;
# gid = root;
# }
# }
# cpu {
# cpu.shares = 1000;
# }
#}
#
#group daemons/ftp {
# perm {
# task {
# uid = root;
# gid = ftpmaster;
# }
# admin {
# uid = root;
# gid = root;
# }
# }
# cpu {
# cpu.shares = 500;
# }
#}
#
#mount {
# cpu = /mnt/cgroups/cpu;
# cpuacct = /mnt/cgroups/cpuacct;
#}
<% @cgroups_set.each do |service, ctrl_hash| -%>
group <%= service %> {
<% ctrl_hash.each do |controller, hash_rules| -%>
<%= controller %> {
<% hash_rules.each do |rule, value| -%>
<%= rule %> = <%= value %>;
<% end -%>
}
<% end -%>
}
<% end -%>

View File

@ -1,56 +0,0 @@
# /etc/cgrules.conf
#
#Each line describes a rule for a user in the forms:
#
#<user> <controllers> <destination>
#<user>:<process name> <controllers> <destination>
#
#Where:
# <user> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for any user or group.
# - The %, which is equivalent to "ditto". This is useful for
# multiline rules where different cgroups need to be specified
# for various hierarchies for a single user.
#
# <process name> is optional and it can be:
# - a process name
# - a full command path of a process
#
# <controller> can be:
# - comma separated controller names (no spaces)
# - * (for all mounted controllers)
#
# <destination> can be:
# - path with-in the controller hierarchy (ex. pgrp1/gid1/uid1)
#
# Note:
# - It currently has rules based on uids, gids and process name.
#
# - Don't put overlapping rules. First rule which matches the criteria
# will be executed.
#
# - Multiline rules can be specified for specifying different cgroups
# for multiple hierarchies. In the example below, user "peter" has
# specified 2 line rule. First line says put peter's task in test1/
# dir for "cpu" controller and second line says put peter's tasks in
# test2/ dir for memory controller. Make a note of "%" sign in second line.
# This is an indication that it is continuation of previous rule.
#
#
#<user> <controllers> <destination>
#
#john cpu usergroup/faculty/john/
#john:cp cpu usergroup/faculty/john/cp
#@student cpu,memory usergroup/student/
#peter cpu test1/
#% memory test2/
#@root * admingroup/
#* * default/
# End of file
<% @cgroups_set.each do |service, ctrl_hash| -%>
<% ctrl_hash.each do |controller, rule_hash| -%>
*:<%= service %> <%= controller %> <%= service %>
<% end -%>
<% end -%>

View File

@ -1,19 +0,0 @@
fixtures:
symlinks:
cluster: "#{source_dir}"
corosync: "#{source_dir}/../corosync"
openstack: "#{source_dir}/../openstack"
stdlib: "#{source_dir}/../stdlib"
ntp: "#{source_dir}/../ntp"
heat: "#{source_dir}/../heat"
openstacklib: "#{source_dir}/../openstacklib"
inifile: "#{source_dir}/../inifile"
mysql: "#{source_dir}/../mysql"
pacemaker: "#{source_dir}/../pacemaker"
rabbitmq: "#{source_dir}/../rabbitmq"
apt: "#{source_dir}/../apt"
staging: "#{source_dir}/../staging"
xinetd: "#{source_dir}/../xinetd"
oslo: "#{source_dir}/../oslo"
osnailyfacter: "#{source_dir}/../osnailyfacter"
keystone: "#{source_dir}/../keystone"

View File

@ -1,2 +0,0 @@
spec/fixtures/*
.idea

View File

@ -1,23 +0,0 @@
source 'https://rubygems.org'
group :development, :test do
gem 'puppetlabs_spec_helper', :require => 'false'
gem 'rspec-puppet', '~> 2.2.0', :require => 'false'
gem 'metadata-json-lint', :require => 'false'
gem 'puppet-lint-param-docs', :require => 'false'
gem 'puppet-lint-absolute_classname-check', :require => 'false'
gem 'puppet-lint-absolute_template_path', :require => 'false'
gem 'puppet-lint-trailing_newline-check', :require => 'false'
gem 'puppet-lint-unquoted_string-check', :require => 'false'
gem 'puppet-lint-leading_zero-check', :require => 'false'
gem 'puppet-lint-variable_contains_upcase', :require => 'false'
gem 'puppet-lint-numericvariable', :require => 'false'
gem 'json', :require => 'false'
gem 'rspec-puppet-facts', :require => 'false'
end
if puppetversion = ENV['PUPPET_GEM_VERSION']
gem 'puppet', puppetversion, :require => false
else
gem 'puppet', :require => false
end

View File

@ -1 +0,0 @@
name 'Fuel-cluster'

View File

@ -1,22 +0,0 @@
require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet-lint/tasks/puppet-lint'
require 'puppet-syntax/tasks/puppet-syntax'
PuppetSyntax.exclude_paths ||= []
PuppetSyntax.exclude_paths << "spec/fixtures/**/*"
PuppetSyntax.exclude_paths << "pkg/**/*"
PuppetSyntax.exclude_paths << "vendor/**/*"
Rake::Task[:lint].clear
PuppetLint::RakeTask.new :lint do |config|
config.ignore_paths = ["spec/**/*.pp", "vendor/**/*.pp"]
config.fail_on_warnings = false # TODO(aschultz): fix warnings
config.log_format = '%{path}:%{linenumber}:%{KIND}: %{message}'
config.disable_checks = [
"80chars",
"class_inherits_from_params_class",
"class_parameter_defaults",
"only_variable_string",
"autoloader_layout", # TODO(aschultz): this is from included defines in classes, should be fixed and this should be removed.
]
end

View File

@ -1,25 +0,0 @@
module Puppet::Parser::Functions
newfunction(
:resource_parameters,
type: :rvalue,
arity: -1,
doc: <<-eof
Gather resource parameters and their values
eof
) do |args|
parameters = {}
args.flatten.each_slice(2) do |key, value|
if value.nil? and key.is_a? Hash
parameters.merge! key
else
next if key.nil?
next if value.nil?
next if value == ''
next if value == :undef
key = key.to_s
parameters.store key, value
end
end
parameters
end
end

View File

@ -1,47 +0,0 @@
#
# Configure aodh-evaluator service in pacemaker/corosync
#
# == Parameters
#
# None.
#
class cluster::aodh_evaluator {
include ::aodh::params
$service_name = $::aodh::params::evaluator_service_name
# migration-threshold is the number of attempts to
# start the resource on each controller node
$metadata = {
'resource-stickiness' => '1',
'migration-threshold' => '3'
}
$operations = {
'monitor' => {
'interval' => '20',
'timeout' => '30',
},
'start' => {
'interval' => '0',
'timeout' => '60',
},
'stop' => {
'interval' => '0',
'timeout' => '60',
},
}
$primitive_type = 'aodh-evaluator'
$parameters = { 'user' => 'aodh' }
pacemaker::service { $service_name :
primitive_type => $primitive_type,
metadata => $metadata,
parameters => $parameters,
operations => $operations
}
Pcmk_resource["p_${service_name}"] ->
Service[$service_name]
}

View File

@ -1,28 +0,0 @@
# == Class: cluster::ceilometer::central
#
# This class configures the pacemaker service for the ceilometer central agent
#
class cluster::ceilometer_central (
) {
include ceilometer::agent::central
pacemaker::service { $::ceilometer::params::agent_central_service_name :
primitive_type => 'ceilometer-agent-central',
metadata => { 'resource-stickiness' => '1' },
parameters => { 'user' => 'ceilometer' },
operations => {
'monitor' => {
'interval' => '20',
'timeout' => '30',
},
'start' => {
'interval' => '0',
'timeout' => '360',
},
'stop' => {
'interval' => '0',
'timeout' => '360',
},
},
}
}

View File

@ -1,84 +0,0 @@
class cluster::conntrackd_ocf (
$vrouter_name,
$bind_address,
$mgmt_bridge,
) {
$service_name = 'p_conntrackd'
$conntrackd_package = 'conntrackd'
package { $conntrackd_package:
ensure => 'installed',
} ->
file { '/etc/conntrackd/conntrackd.conf':
content => template('cluster/conntrackd.conf.erb'),
} ->
service { $service_name :
ensure => 'running',
enable => true,
}
tweaks::ubuntu_service_override { 'conntrackd': }
$primitive_class = 'ocf'
$primitive_provider = 'fuel'
$primitive_type = 'ns_conntrackd'
$metadata = {
'migration-threshold' => 'INFINITY',
'failure-timeout' => '180s'
}
$parameters = {
'bridge' => $mgmt_bridge,
}
$complex_type = 'master'
$complex_metadata = {
'notify' => 'true',
'ordered' => 'false',
'interleave' => 'true',
'clone-node-max' => '1',
'master-max' => '1',
'master-node-max' => '1',
'target-role' => 'Master'
}
$operations = {
'monitor' => {
'interval' => '30',
'timeout' => '60'
},
'monitor:Master' => {
'role' => 'Master',
'interval' => '27',
'timeout' => '60'
},
}
pacemaker::service { $service_name :
prefix => false,
primitive_class => $primitive_class,
primitive_provider => $primitive_provider,
primitive_type => $primitive_type,
metadata => $metadata,
parameters => $parameters,
complex_type => $complex_type,
complex_metadata => $complex_metadata,
operations => $operations,
}
pcmk_colocation { "conntrackd-with-${vrouter_name}-vip":
first => "vip__vrouter_${vrouter_name}",
second => 'master_p_conntrackd:Master',
}
File['/etc/conntrackd/conntrackd.conf'] ->
Pcmk_resource[$service_name] ->
Service[$service_name] ->
Pcmk_colocation["conntrackd-with-${vrouter_name}-vip"]
# Workaround to ensure log is rotated properly
file { '/etc/logrotate.d/conntrackd':
content => template('openstack/95-conntrackd.conf.erb'),
}
Package[$conntrackd_package] -> File['/etc/logrotate.d/conntrackd']
}

View File

@ -1,70 +0,0 @@
# Not a doc string
#TODO (bogdando) move to extras ha wrappers,
# remove mangling due to new pcs provider
define cluster::corosync::cs_service (
$ocf_script,
$service_name,
$service_title = undef, # Title of the service that will be mangled for pacemaker
$package_name = undef,
$csr_complex_type = undef,
$csr_ms_metadata = undef,
$csr_parameters = undef,
$csr_metadata = undef,
$csr_mon_intr = 20,
$csr_mon_timeout = 20,
$csr_timeout = 60,
$primary = true,
$hasrestart = true,
# Mask services which are managed by pacemaker
# LP #1652748
$mask_service = true,
)
{
$service_true_title = $service_title ? {
undef => $service_name,
default => $service_title
}
if $primary {
pcmk_resource { "p_${service_true_title}":
ensure => 'present',
primitive_class => 'ocf',
primitive_provider => 'fuel',
primitive_type => $ocf_script,
complex_type => $csr_complex_type,
complex_metadata => $csr_ms_metadata,
parameters => $csr_parameters,
metadata => $csr_metadata,
name => $service_name,
operations => {
'monitor' => {
'interval' => $csr_mon_intr,
'timeout' => $csr_mon_timeout
},
'start' => {
'interval' => '0',
'timeout' => $csr_timeout
},
'stop' => {
'interval' => '0',
'timeout' => $csr_timeout
}
}
}
Pcmk_resource["p_${service_true_title}"] -> Service<| title == $service_true_title |>
}
if ! $package_name {
warning('Cluster::corosync::cs_service: Without package definition can\'t protect service for autostart correctly.')
} else {
tweaks::ubuntu_service_override { "${service_name}":
package_name => $package_name,
mask_service => $mask_service,
} -> Service<| title=="${service_true_title}" |>
}
Service<| title=="${service_true_title}" |> {
name => $service_name,
provider => 'pacemaker',
}
}

View File

@ -1,64 +0,0 @@
# == Class: cluster::dns_ocf
#
# Configure OCF service for DNS managed by corosync/pacemaker
#
class cluster::dns_ocf {
$service_name = 'p_dns'
$primitive_class = 'ocf'
$primitive_provider = 'fuel'
$primitive_type = 'ns_dns'
$complex_type = 'clone'
$complex_metadata = {
'interleave' => 'true',
}
$metadata = {
'migration-threshold' => '3',
'failure-timeout' => '120',
}
$parameters = {
'ns' => 'vrouter',
}
$operations = {
'monitor' => {
'interval' => '20',
'timeout' => '10'
},
'start' => {
'timeout' => '30'
},
'stop' => {
'timeout' => '30'
},
}
pacemaker::service { $service_name :
primitive_class => $primitive_class,
primitive_provider => $primitive_provider,
primitive_type => $primitive_type,
complex_type => $complex_type,
complex_metadata => $complex_metadata,
metadata => $metadata,
parameters => $parameters,
operations => $operations,
prefix => false,
}
pcmk_colocation { 'dns-with-vrouter-ns' :
ensure => 'present',
score => 'INFINITY',
first => "clone_p_vrouter",
second => "clone_${service_name}",
}
Pcmk_resource[$service_name] ->
Pcmk_colocation['dns-with-vrouter-ns'] ->
Service[$service_name]
service { $service_name:
ensure => 'running',
enable => true,
hasstatus => true,
hasrestart => true,
}
}

View File

@ -1,41 +0,0 @@
# == Class: cluster::galera_grants
#
# Configures a user that will check the status
# of the galera cluster; assumes the mysql module is in the catalog
#
# === Parameters:
#
# [*status_user*]
# (optional) The name of the user to use for status checks
# Defaults to false
#
# [*status_password*]
# (optional) The password of the status check user
# Defaults to false
#
# [*status_allow*]
# (optional) The subnet to allow status checks from
# Defaults to '%'
#
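# === Examples:
#
# An illustrative declaration; the credentials below are placeholders:
#
#  class { '::cluster::galera_grants':
#    status_user     => 'clustercheck',
#    status_password => 'clustercheckpassword',
#  }
#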
class cluster::galera_grants (
$status_user = false,
$status_password = false,
$status_allow = '%',
) {
validate_string($status_user, $status_password)
mysql_user { "${status_user}@${status_allow}":
ensure => 'present',
password_hash => mysql_password($status_password),
require => Anchor['mysql::server::end'],
} ->
mysql_grant { "${status_user}@${status_allow}/*.*":
ensure => 'present',
options => [ 'GRANT' ],
privileges => [ 'USAGE' ],
table => '*.*',
user => "${status_user}@${status_allow}",
}
}

View File

@ -1,90 +0,0 @@
# == Class: cluster::galera_status
#
# Configures a script that will check the status
# of the galera cluster
#
# === Parameters:
#
# [*status_user*]
# (required). String. Mysql user to use for connection testing and status
# checks.
#
# [*status_password*]
# (required). String. Password for the mysql user to check with.
#
# [*address*]
# (optional) xinet.d bind address for clustercheck
# Defaults to 0.0.0.0
#
# [*only_from*]
# (optional) xinet.d only_from address for galeracheck
# Defaults to 127.0.0.1
#
# [*port*]
# (optional) Port for cluster check service
# Defaults to 49000
#
# [*backend_host*]
# (optional) The MySQL backend host for cluster check
# Defaults to 127.0.0.1
#
# [*backend_port*]
# (optional) The MySQL backend port for cluster check
# Defaults to 3306
#
# [*backend_timeout*]
# (optional) The timeout for MySQL backend connection for cluster check
# Defaults to 10 seconds
#
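# === Examples:
#
# An illustrative declaration; the credentials below are placeholders and the
# remaining parameters keep their defaults:
#
#  class { '::cluster::galera_status':
#    status_user     => 'clustercheck',
#    status_password => 'clustercheckpassword',
#  }
#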
class cluster::galera_status (
$status_user,
$status_password,
$address = '0.0.0.0',
$only_from = '127.0.0.1',
$port = '49000',
$backend_host = '127.0.0.1',
$backend_port = '3306',
$backend_timeout = '10',
) {
$group = $::osfamily ? {
'redhat' => 'nobody',
'debian' => 'nogroup',
default => 'nobody',
}
file { '/etc/wsrepclustercheckrc':
content => template('openstack/galera_clustercheck.erb'),
owner => 'nobody',
group => $group,
mode => '0400',
require => Anchor['mysql::server::end'],
}
augeas { 'galeracheck':
context => '/files/etc/services',
changes => [
"set /files/etc/services/service-name[port = '${port}']/port ${port}",
"set /files/etc/services/service-name[port = '${port}'] galeracheck",
"set /files/etc/services/service-name[port = '${port}']/protocol tcp",
"set /files/etc/services/service-name[port = '${port}']/#comment 'Galera Cluster Check'",
],
require => Anchor['mysql::server::end'],
}
contain ::xinetd
xinetd::service { 'galeracheck':
bind => $address,
port => $port,
only_from => $only_from,
cps => '512 10',
per_source => 'UNLIMITED',
server => '/usr/bin/galeracheck',
user => 'nobody',
group => $group,
flags => 'IPv4',
require => [ Augeas['galeracheck'],
Anchor['mysql::server::end']],
}
}

View File

@ -1,156 +0,0 @@
# == Class: cluster::haproxy
#
# Configure HAProxy managed by corosync/pacemaker
#
# === Parameters
#
# [*haproxy_maxconn*]
# (optional) Max connections for haproxy
# Defaults to '4000'
#
# [*haproxy_bufsize*]
# (optional) Buffer size for haproxy
# Defaults to '16384'
#
# [*haproxy_maxrewrite*]
# (optional) Sets the reserved buffer space to this size in bytes
# Defaults to '1024'
#
# [*haproxy_log_file*]
# (optional) Log file location for haproxy.
# Defaults to '/var/log/haproxy.log'
#
# [*primary_controller*]
# (optional) Flag to indicate if this is the primary controller
# Defaults to false
#
# [*colocate_haproxy*]
# (optional) Flag to enable pacemaker to bind haproxy to controller VIPs
# Defaults to false
#
# [*debug*]
# (optional)
# Defaults to false
#
# [*other_networks*]
# (optional)
# Defaults to false
#
# [*stats_ipaddresses*]
# (optional) Array of addresses to allow stats calls
# Defaults to ['127.0.0.1']
#
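# === Examples
#
# A minimal sketch of a declaration; every value shown is illustrative:
#
#  class { '::cluster::haproxy':
#    haproxy_maxconn    => '16000',
#    primary_controller => true,
#    stats_ipaddresses  => ['127.0.0.1', '192.168.0.2'],
#  }
#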
class cluster::haproxy (
$haproxy_maxconn = '4000',
$haproxy_bufsize = '16384',
$haproxy_maxrewrite = '1024',
$haproxy_log_file = '/var/log/haproxy.log',
$primary_controller = false,
$debug = false,
$other_networks = false,
$colocate_haproxy = false,
$stats_ipaddresses = ['127.0.0.1'],
$spread_checks = '3',
$user_defined_options = {},
$ssl_default_ciphers = 'HIGH:!aNULL:!MD5:!kEDH',
$ssl_default_options = 'no-sslv3 no-tls-tickets',
) {
include ::haproxy::params
include ::rsyslog::params
package { 'haproxy':
name => $::haproxy::params::package_name,
}
#NOTE(bogdando) we want defaults w/o chroot,
# and this override looks like the only possible way if
# upstream manifests must be kept intact
$global_options = {
'log' => '/dev/log local0',
'pidfile' => '/var/run/haproxy.pid',
'maxconn' => $haproxy_maxconn,
'user' => 'haproxy',
'group' => 'haproxy',
'daemon' => '',
'stats' => 'socket /var/lib/haproxy/stats',
'spread-checks' => $spread_checks,
'tune.bufsize' => $haproxy_bufsize,
'tune.maxrewrite' => $haproxy_maxrewrite,
'ssl-default-bind-ciphers' => $ssl_default_ciphers,
'ssl-default-server-ciphers' => $ssl_default_ciphers,
'ssl-default-bind-options' => $ssl_default_options,
'ssl-default-server-options' => $ssl_default_options,
}
$defaults_options = {
'log' => 'global',
'maxconn' => '8000',
'mode' => 'http',
'retries' => '3',
'option' => [
'redispatch',
'http-server-close',
'splice-auto',
'dontlognull',
],
'timeout' => [
'http-request 20s',
'queue 1m',
'connect 10s',
'client 1m',
'server 1m',
'check 10s',
],
}
$service_name = 'p_haproxy'
class { '::haproxy::base':
global_options => merge($global_options, $user_defined_options['global']),
defaults_options => merge($defaults_options, $user_defined_options['defaults']),
stats_ipaddresses => $stats_ipaddresses,
custom_fragment => $user_defined_options['custom_fragment'],
use_include => true,
}
sysctl::value { 'net.ipv4.ip_nonlocal_bind':
value => '1'
}
service { 'haproxy' :
ensure => 'running',
name => $service_name,
enable => true,
hasstatus => true,
hasrestart => true,
}
tweaks::ubuntu_service_override { 'haproxy' :
service_name => 'haproxy',
package_name => $haproxy::params::package_name,
}
class { '::cluster::haproxy::rsyslog':
log_file => $haproxy_log_file,
}
Package['haproxy'] ->
Class['haproxy::base']
Class['haproxy::base'] ~>
Service['haproxy']
Package['haproxy'] ~>
Service['haproxy']
Sysctl::Value['net.ipv4.ip_nonlocal_bind'] ~>
Service['haproxy']
# Pacemaker
class { '::cluster::haproxy_ocf':
debug => $debug,
other_networks => $other_networks,
colocate_haproxy => $colocate_haproxy,
}
}

View File

@ -1,27 +0,0 @@
# == Class: cluster::haproxy::rsyslog
#
# Configure rsyslog for corosync/pacemaker managed HAProxy
#
# === Parameters
#
# [*log_file*]
# Log file location for haproxy. Defaults to '/var/log/haproxy.log'
#
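# === Examples
#
# An illustrative declaration that keeps the default log destination:
#
#  class { '::cluster::haproxy::rsyslog':
#    log_file => '/var/log/haproxy.log',
#  }
#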
class cluster::haproxy::rsyslog (
$log_file = '/var/log/haproxy.log',
) {
include ::rsyslog::params
file { '/etc/rsyslog.d/49-haproxy.conf':
ensure => present,
content => template("${module_name}/haproxy.conf.erb"),
notify => Service[$::rsyslog::params::service_name],
}
if !defined(Service[$::rsyslog::params::service_name]) {
service { $::rsyslog::params::service_name:
ensure => 'running',
enable => true,
}
}
}

View File

@ -1,69 +0,0 @@
# == Class: cluster::haproxy_ocf
#
# Configure OCF service for HAProxy managed by corosync/pacemaker
#
class cluster::haproxy_ocf (
$debug = false,
$other_networks = false,
$colocate_haproxy = true,
) inherits cluster::haproxy {
$primitive_type = 'ns_haproxy'
$complex_type = 'clone'
$complex_metadata = {
'interleave' => true,
}
$metadata = {
'migration-threshold' => '3',
'failure-timeout' => '120',
}
$parameters = {
'ns' => 'haproxy',
'debug' => $debug,
'other_networks' => $other_networks,
}
$operations = {
'monitor' => {
'interval' => '30',
'timeout' => '60'
},
'start' => {
'timeout' => '60'
},
'stop' => {
'timeout' => '60'
},
}
pacemaker::service { $service_name :
primitive_type => $primitive_type,
parameters => $parameters,
metadata => $metadata,
operations => $operations,
complex_metadata => $complex_metadata,
complex_type => $complex_type,
prefix => false,
}
if $colocate_haproxy {
pcmk_colocation { 'vip_public-with-haproxy':
ensure => 'present',
score => 'INFINITY',
first => "clone_${service_name}",
second => "vip__public",
}
Service[$service_name] -> Pcmk_colocation['vip_public-with-haproxy']
pcmk_colocation { 'vip_management-with-haproxy':
ensure => 'present',
score => 'INFINITY',
first => "clone_${service_name}",
second => 'vip__management',
}
Service[$service_name] -> Pcmk_colocation['vip_management-with-haproxy']
}
Pcmk_resource[$service_name] ->
Service[$service_name]
}

View File

@ -1,52 +0,0 @@
#
# Configure heat-engine in pacemaker/corosync
#
# == Parameters
#
# None.
#
# === Notes
#
# This class requires that ::heat::engine be included in the catalog prior to
# the inclusion of this class.
#
class cluster::heat_engine {
include ::heat::params
$primitive_type = 'heat-engine'
# migration-threshold is the number of attempts to
# start the resource on each controller node
$metadata = {
'resource-stickiness' => '1',
'migration-threshold' => '3'
}
$operations = {
'monitor' => {
'interval' => '20',
'timeout' => '30',
},
'start' => {
'interval' => '0',
'timeout' => '60',
},
'stop' => {
'interval' => '0',
'timeout' => '60',
},
}
$ms_metadata = {
'interleave' => true,
}
pacemaker::service { $::heat::params::engine_service_name :
primitive_type => $primitive_type,
metadata => $metadata,
complex_type => 'clone',
complex_metadata => $ms_metadata,
operations => $operations,
}
}

View File

@ -1,60 +0,0 @@
# == Class: cluster
#
# Module for configuring cluster resources.
#
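# === Examples
#
# A minimal sketch of a declaration; all addresses and node names below are
# placeholders:
#
#  class { '::cluster':
#    internal_address  => '192.168.0.2',
#    quorum_members    => ['node-1', 'node-2', 'node-3'],
#    unicast_addresses => ['192.168.0.2', '192.168.0.3', '192.168.0.4'],
#  }
#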
class cluster (
$internal_address = '127.0.0.1',
$quorum_members = ['localhost'],
$quorum_members_ids = undef,
$unicast_addresses = ['127.0.0.1'],
$cluster_recheck_interval = '190s',
) {
#todo: move half of openstack::corosync
#to this module, another half -- to Neutron
case $::osfamily {
'Debian' : {
$packages = ['crmsh', 'pcs']
}
'RedHat' : {
# pcs will be installed by the corosync automatically
$packages = ['crmsh']
}
default: {}
}
if defined(Stage['corosync_setup']) {
class { 'openstack::corosync':
bind_address => $internal_address,
stage => 'corosync_setup',
quorum_members => $quorum_members,
quorum_members_ids => $quorum_members_ids,
unicast_addresses => $unicast_addresses,
packages => $packages,
cluster_recheck_interval => $cluster_recheck_interval,
}
} else {
class { 'openstack::corosync':
bind_address => $internal_address,
quorum_members => $quorum_members,
quorum_members_ids => $quorum_members_ids,
unicast_addresses => $unicast_addresses,
packages => $packages,
cluster_recheck_interval => $cluster_recheck_interval,
}
}
File<| title == '/etc/corosync/corosync.conf' |> -> Service['corosync']
file { 'ocf-fuel-path':
ensure => directory,
path =>'/usr/lib/ocf/resource.d/fuel',
recurse => true,
owner => 'root',
group => 'root',
}
Package['corosync'] -> File['ocf-fuel-path']
Package<| title == 'pacemaker' |> -> File['ocf-fuel-path']
}

View File

@ -1,141 +0,0 @@
# == Class: cluster::mysql
#
# Configure OCF service for mysql managed by corosync/pacemaker
#
# === Parameters
#
# [*primary_controller*]
# (required). Boolean. Flag to indicate if this is the primary controller
#
# [*mysql_user*]
# (required). String. Mysql user to use for connection testing and status
# checks.
#
# [*mysql_password*]
# (required). String. Password for the Mysql user to check with.
#
# [*mysql_config*]
# (optional). String. Location to the mysql.cnf to use when running the
# service.
# Defaults to '/etc/mysql/my.cnf'
#
# [*mysql_socket*]
# (optional). String. The socket file to use for connection checks.
# Defaults to '/var/run/mysqld/mysqld.sock'
#
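# === Examples
#
# An illustrative declaration; the user and password below are placeholders:
#
#  class { '::cluster::mysql':
#    mysql_user     => 'clustercheck',
#    mysql_password => 'clustercheckpassword',
#  }
#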
class cluster::mysql (
$mysql_user,
$mysql_password,
$mysql_config = '/etc/mysql/my.cnf',
$mysql_socket = '/var/run/mysqld/mysqld.sock',
) {
$service_name = 'mysqld'
$primitive_class = 'ocf'
$primitive_provider = 'fuel'
$primitive_type = 'mysql-wss'
$complex_type = 'clone'
$user_conf = '/etc/mysql/user.cnf'
file { $user_conf:
owner => 'root',
mode => '0600',
content => template('cluster/mysql_user.cnf.erb')
} -> Exec <||>
$parameters = {
'config' => $mysql_config,
'test_conf' => $user_conf,
'socket' => $mysql_socket,
}
$operations = {
'monitor' => {
'interval' => '60',
'timeout' => '55'
},
'start' => {
'interval' => '30',
'timeout' => '330',
'on-fail' => 'restart',
},
'stop' => {
'interval' => '0',
'timeout' => '120'
},
}
$metadata = {
'migration-threshold' => '10',
'failure-timeout' => '30s',
'resource-stickiness' => '100',
}
$complex_metadata = {
'requires' => 'nothing',
}
pacemaker::service { $service_name:
primitive_class => $primitive_class,
primitive_provider => $primitive_provider,
primitive_type => $primitive_type,
complex_type => $complex_type,
complex_metadata => $complex_metadata,
metadata => $metadata,
parameters => $parameters,
operations => $operations,
prefix => true,
}
# NOTE(aschultz): strings must contain single quotes only, see the
# create-init-file exec as to why
$init_file_contents = join([
"set wsrep_on='off';",
"delete from mysql.user where user='';",
"GRANT USAGE ON *.* TO '${mysql_user}'@'%' IDENTIFIED BY '${mysql_password}';",
"GRANT USAGE ON *.* TO '${mysql_user}'@'localhost' IDENTIFIED BY '${mysql_password}';",
"flush privileges;",
], "\n")
# NOTE(bogdando) it's a positional param, must go first
$user_password_string = "--defaults-extra-file=${user_conf}"
# This file is used to prep the mysql instance with the monitor user so that
# pacemaker can check that the instance is UP.
# NOTE(aschultz): we are using an exec here because we only want to create
# the init file while the mysql service is not yet running. This is used to
# bootstrap the service, so we only do it the first time. For idempotency,
# this exec is skipped when run a second time with mysql already running.
exec { 'create-init-file':
path => '/bin:/sbin:/usr/bin:/usr/sbin',
command => "echo \"${init_file_contents}\" > /tmp/wsrep-init-file",
unless => "mysql ${user_password_string} -Nbe \"select 'OK';\" | grep -q OK",
require => Package['mysql-server'],
} ~>
exec { 'wait-for-sync':
path => '/bin:/sbin:/usr/bin:/usr/sbin',
command => "mysql ${user_password_string} -Nbe \"show status like 'wsrep_local_state_comment'\" | grep -q -e Synced && sleep 10",
try_sleep => 10,
tries => 60,
refreshonly => true,
}
exec { 'rm-init-file':
path => '/bin:/sbin:/usr/bin:/usr/sbin',
command => 'rm /tmp/wsrep-init-file',
onlyif => 'test -f /tmp/wsrep-init-file',
}
file { 'fix-log-dir':
ensure => directory,
path => '/var/log/mysql',
mode => '0770',
require => Package['mysql-server'],
}
Exec['create-init-file'] ->
File['fix-log-dir'] ->
Service<| title == $service_name |> ~>
Exec['wait-for-sync'] ->
Exec['rm-init-file']
}

View File

@ -1,18 +0,0 @@
# not a doc string
class cluster::neutron () {
Package['neutron'] ->
file {'/var/cache/neutron':
ensure => directory,
path => '/var/cache/neutron',
mode => '0755',
owner => neutron,
group => neutron,
}
if !defined(Package['lsof']) {
package { 'lsof': }
}
}

View File

@ -1,45 +0,0 @@
# Not a doc string
class cluster::neutron::dhcp (
$primary = false,
$ha_agents = ['ovs', 'metadata', 'dhcp', 'l3'],
$plugin_config = '/etc/neutron/dhcp_agent.ini',
$agents_per_net = 2, # Value, recommended by Neutron team.
) {
require cluster::neutron
Neutron_config<| name == 'DEFAULT/dhcp_agents_per_network' |> {
value => $agents_per_net
}
$csr_metadata = undef
$csr_complex_type = 'clone'
$csr_ms_metadata = { 'interleave' => 'true' }
$dhcp_agent_package = $::neutron::params::dhcp_agent_package ? {
false => $::neutron::params::package_name,
default => $::neutron::params::dhcp_agent_package
}
#TODO (bogdando) move to extras ha wrappers
cluster::corosync::cs_service {'dhcp':
ocf_script => 'neutron-dhcp-agent',
csr_parameters => {
'plugin_config' => $plugin_config,
'remove_artifacts_on_stop_start' => true,
},
csr_metadata => $csr_metadata,
csr_complex_type => $csr_complex_type,
csr_ms_metadata => $csr_ms_metadata,
csr_mon_intr => '20',
csr_mon_timeout => '30',
csr_timeout => '60',
service_name => $::neutron::params::dhcp_agent_service,
package_name => $dhcp_agent_package,
service_title => 'neutron-dhcp-service',
primary => $primary,
hasrestart => false,
}
}

View File

@ -1,40 +0,0 @@
# not a doc string
define cluster::neutron::l3 (
$plugin_config = '/etc/neutron/l3_agent.ini',
$primary = false,
$ha_agents = ['ovs', 'metadata', 'dhcp', 'l3'],
) {
require cluster::neutron
$csr_metadata = undef
$csr_complex_type = 'clone'
$csr_ms_metadata = { 'interleave' => 'true' }
$l3_agent_package = $::neutron::params::l3_agent_package ? {
false => $::neutron::params::package_name,
default => $::neutron::params::l3_agent_package,
}
#TODO (bogdando) move to extras ha wrappers
cluster::corosync::cs_service {'l3':
ocf_script => 'neutron-l3-agent',
csr_parameters => {
'plugin_config' => $plugin_config,
'remove_artifacts_on_stop_start' => true,
},
csr_metadata => $csr_metadata,
csr_complex_type => $csr_complex_type,
csr_ms_metadata => $csr_ms_metadata,
csr_mon_intr => '20',
csr_mon_timeout => '30',
csr_timeout => '60',
service_name => $::neutron::params::l3_agent_service,
package_name => $l3_agent_package,
service_title => 'neutron-l3',
primary => $primary,
hasrestart => false,
}
}

View File

@ -1,27 +0,0 @@
# not a doc string
class cluster::neutron::metadata (
$primary = false,
) {
require cluster::neutron
$metadata_agent_package = $::neutron::params::metadata_agent_package ? {
false => $::neutron::params::package_name,
default => $::neutron::params::metadata_agent_package,
}
#TODO (bogdando) move to extras ha wrappers
cluster::corosync::cs_service {'neutron-metadata-agent':
ocf_script => 'neutron-metadata-agent',
csr_complex_type => 'clone',
csr_ms_metadata => { 'interleave' => 'true' },
csr_mon_intr => '60',
csr_mon_timeout => '30',
csr_timeout => '30',
service_name => $::neutron::params::metadata_agent_service,
package_name => $metadata_agent_package,
service_title => 'neutron-metadata',
primary => $primary,
}
}

View File

@ -1,29 +0,0 @@
# not a doc string
class cluster::neutron::ovs (
$primary = false,
$plugin_config = '/etc/neutron/plugins/ml2/openvswitch_agent.ini',
) {
require cluster::neutron
$ovs_agent_package = $::neutron::params::ovs_agent_package ? {
false => $::neutron::params::package_name,
default => $::neutron::params::ovs_agent_package,
}
cluster::corosync::cs_service {'ovs':
ocf_script => 'neutron-ovs-agent',
csr_complex_type => 'clone',
csr_ms_metadata => { 'interleave' => 'true' },
csr_parameters => { 'plugin_config' => $plugin_config },
csr_mon_intr => '20',
csr_mon_timeout => '30',
csr_timeout => '80',
service_name => $::neutron::params::ovs_agent_service,
service_title => 'neutron-ovs-agent-service',
package_name => $ovs_agent_package,
primary => $primary,
hasrestart => false,
}
}

View File

@ -1,57 +0,0 @@
# == Class: cluster::ntp_ocf
#
# Configure OCF service for NTP managed by corosync/pacemaker
#
class cluster::ntp_ocf inherits ::ntp {
$primitive_type = 'ns_ntp'
$complex_type = 'clone'
$complex_metadata = {
'interleave' => 'true',
}
$metadata = {
'migration-threshold' => '3',
'failure-timeout' => '120',
}
$parameters = {
'ns' => 'vrouter',
}
$operations = {
'monitor' => {
'interval' => '20',
'timeout' => '10'
},
'start' => {
'interval' => '0',
'timeout' => '30'
},
'stop' => {
'interval' => '0',
'timeout' => '30'
},
}
pcmk_colocation { 'ntp-with-vrouter-ns' :
ensure => 'present',
score => 'INFINITY',
first => 'clone_p_vrouter',
second => "clone_p_${service_name}",
}
pacemaker::service { $service_name :
primitive_type => $primitive_type,
parameters => $parameters,
metadata => $metadata,
operations => $operations,
complex_metadata => $complex_metadata,
complex_type => $complex_type,
prefix => true,
}
Pcmk_resource["p_${service_name}"] ->
Pcmk_colocation['ntp-with-vrouter-ns'] ->
Service[$service_name]
}

View File

@ -1,96 +0,0 @@
# == Class: cluster::rabbitmq_fence
#
# Configures the rabbit-fence daemon for fencing dead rabbitmq
# nodes. The daemon uses D-Bus system events generated by Corosync.
# Requires the corosync and rabbitmq packages to be installed
# and the corosync and rabbitmq services to be configured and running.
#
# === Parameters
#
# [*enabled*]
# (Optional) Ensures state for the rabbit-fence daemon.
# Defaults to 'false'
#
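# === Examples
#
# An illustrative declaration that enables the fencing daemon:
#
#  class { '::cluster::rabbitmq_fence':
#    enabled => true,
#  }
#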
class cluster::rabbitmq_fence(
$enabled = false,
) {
case $::osfamily {
'RedHat': {
$packages = ['dbus', 'dbus-python',
'pygobject2', 'python-daemon' ]
$dbus_service_name = 'messagebus'
$service_name = 'rabbit-fence'
}
'Debian': {
$packages = [ 'python-gobject', 'python-gobject-2',
'python-dbus', 'python-daemon' ]
$dbus_service_name = 'dbus'
$service_name = 'fuel-rabbit-fence'
}
default: {
fail("Unsupported osfamily: ${::osfamily} operatingsystem:\
${::operatingsystem}, module ${module_name} only support osfamily\
RedHat and Debian")
}
}
File {
owner => 'root',
group => 'root',
}
Service {
hasstatus => true,
hasrestart => true,
}
package { $packages: } ->
service { $dbus_service_name:
ensure => running,
enable => true,
} ->
# This package brings all necessary packages for services.
# So it is installed first.
package { 'fuel-rabbit-fence': } ->
service { 'corosync-notifyd':
ensure => running,
enable => true,
} ->
service { 'rabbit-fence':
ensure => $enabled ? {
true => running,
false => stopped },
name => $service_name,
enable => $enabled,
require => Package['rabbitmq-server'],
}
Exec {
path => [ '/bin', '/usr/bin' ],
before => Service['corosync-notifyd'],
require => Package['fuel-rabbit-fence'],
}
exec { 'enable_corosync_notifyd':
command => 'sed -i s/START=no/START=yes/ /etc/default/corosync-notifyd',
unless => 'grep START=yes /etc/default/corosync-notifyd',
}
#https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437368
#FIXME(bogdando) remove these hacks after switching to systemd service units
exec { 'fix_corosync_notifyd_init_args':
command => 'sed -i s/DAEMON_ARGS=\"\"/DAEMON_ARGS=\"-d\"/ /etc/init.d/corosync-notifyd',
onlyif => 'grep \'DAEMON_ARGS=""\' /etc/init.d/corosync-notifyd',
}
#https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1437359
exec { 'fix_corosync_notifyd_init_pidfile':
command => 'sed -i \'/PIDFILE=\/var\/run\/corosync.pid/d\' /etc/init.d/corosync-notifyd',
onlyif => 'grep \'PIDFILE=/var/run/corosync.pid\' /etc/init.d/corosync-notifyd',
}
}

View File

@ -1,217 +0,0 @@
# == Class: cluster::rabbitmq_ocf
#
# Overrides the rabbitmq service provider to run it as a pacemaker resource
#
# TODO(bogdando) this is just an example of a Pacemaker service
# provider wrapper implementation; it should be moved to openstack_extra
# and its params should be described
#
# === Parameters
#
# [*primitive_type*]
# String. Corosync resource primitive_type
# Defaults to 'rabbitmq-server'
#
# [*service_name*]
# String. The service name of rabbitmq.
# Defaults to $::rabbitmq::service_name
#
# [*port*]
# Integer. The port for rabbitmq to listen on.
# Defaults to $::rabbitmq::port
#
# [*host_ip*]
#   String. A string used by the OCF script to collect
#   RabbitMQ statistics
# Defaults to '127.0.0.1'
#
# [*debug*]
#   Boolean. Flag to enable or disable debug logging.
#   Defaults to false
#
# [*ocf_script_file*]
# String. The script filename for use with pacemaker.
# Defaults to 'cluster/ocf/rabbitmq'
#
# [*command_timeout*]
#   String.
#   Defaults to ''
#
# [*erlang_cookie*]
# String. A string used as a cookie for rabbitmq instances to
#   communicate with each other.
# Defaults to 'EOKOWXQREETZSHFNTPEY'
#
# [*admin_user*]
# String. An admin username that is used to import the rabbitmq
# definitions from a backup as part of a recovery action.
# Defaults to undef
#
# [*admin_pass*]
# String. An admin password that is used to import the rabbitmq
# definitions from a backup as part of a recovery action.
# Defaults to undef
#
# [*enable_rpc_ha*]
# Boolean. Set ha-mode=all policy for RPC queues. Note that
# Ceilometer queues are not affected by this flag.
#
# [*enable_notifications_ha*]
# Boolean. Set ha-mode=all policy for Ceilometer queues. Note
# that RPC queues are not affected by this flag.
#
# [*fqdn_prefix*]
#   String. Optional FQDN prefix for node names.
#   Defaults to empty string
#
# [*pid_file*]
#   String. Optional pid file path passed to the OCF script.
#   Defaults to undef
#
# [*policy_file*]
#   String. Optional path to the policy file for HA queues.
#   Defaults to undef
#
# [*start_timeout*]
# String. Optional op start timeout for lrmd.
# Defaults to '120'
#
# [*stop_timeout*]
# String. Optional op stop timeout for lrmd.
# Defaults to '120'
#
# [*mon_timeout*]
# String. Optional op monitor timeout for lrmd.
# Defaults to '120'
#
# [*promote_timeout*]
# String. Optional op promote timeout for lrmd.
# Defaults to '120'
#
# [*demote_timeout*]
# String. Optional op demote timeout for lrmd.
# Defaults to '120'
#
# [*notify_timeout*]
#   String. Optional op notify timeout for lrmd.
#   Defaults to '120'
#
# [*master_mon_interval*]
# String. Optional op master's monitor interval for lrmd.
# Should be different from mon_interval. Defaults to '27'
#
# [*mon_interval*]
# String. Optional op slave's monitor interval for lrmd.
#   Defaults to '30'
#
class cluster::rabbitmq_ocf (
$primitive_type = 'rabbitmq-server',
$service_name = $::rabbitmq::service_name,
$port = $::rabbitmq::port,
$host_ip = '127.0.0.1',
$debug = false,
$ocf_script_file = 'cluster/ocf/rabbitmq',
$command_timeout = '',
$erlang_cookie = 'EOKOWXQREETZSHFNTPEY',
$admin_user = undef,
$admin_pass = undef,
$enable_rpc_ha = false,
$enable_notifications_ha = true,
$fqdn_prefix = '',
$pid_file = undef,
$policy_file = undef,
$start_timeout = '120',
$stop_timeout = '120',
$mon_timeout = '120',
$promote_timeout = '120',
$demote_timeout = '120',
$notify_timeout = '120',
$master_mon_interval = '27',
$mon_interval = '30',
) inherits ::rabbitmq::service {
if $host_ip == 'UNSET' or $host_ip == '0.0.0.0' {
$real_host_ip = '127.0.0.1'
} else {
$real_host_ip = $host_ip
}
$parameters = {
'host_ip' => $real_host_ip,
'node_port' => $port,
'debug' => $debug,
'command_timeout' => $command_timeout,
'erlang_cookie' => $erlang_cookie,
'admin_user' => $admin_user,
'admin_password' => $admin_pass,
'enable_rpc_ha' => $enable_rpc_ha,
'enable_notifications_ha' => $enable_notifications_ha,
'fqdn_prefix' => $fqdn_prefix,
'pid_file' => $pid_file,
'policy_file' => $policy_file,
}
$metadata = {
'migration-threshold' => '10',
'failure-timeout' => '30s',
'resource-stickiness' => '100',
}
$complex_metadata = {
'notify' => true,
# We shouldn't enable ordered start for parallel start of RA.
'ordered' => false,
'interleave' => true,
'master-max' => '1',
'master-node-max' => '1',
'target-role' => 'Master',
'requires' => 'nothing'
}
$operations = {
'monitor' => {
'interval' => $mon_interval,
'timeout' => $mon_timeout
},
'monitor:Master' => { # name:role
'role' => 'Master',
'interval' => $master_mon_interval,
'timeout' => $mon_timeout
},
'start' => {
'interval' => '0',
'timeout' => $start_timeout
},
'stop' => {
'interval' => '0',
'timeout' => $stop_timeout
},
'promote' => {
'interval' => '0',
'timeout' => $promote_timeout
},
'demote' => {
'interval' => '0',
'timeout' => $demote_timeout
},
'notify' => {
'interval' => '0',
'timeout' => $notify_timeout
},
}
pacemaker::service { $service_name :
primitive_type => $primitive_type,
complex_type => 'master',
complex_metadata => $complex_metadata,
metadata => $metadata,
operations => $operations,
parameters => $parameters,
#ocf_script_file => $ocf_script_file,
}
if !defined(Service_status['rabbitmq']) {
ensure_resource('service_status', ['rabbitmq'],
{ 'ensure' => 'online', 'check_cmd' => 'rabbitmqctl node_health_check && rabbitmqctl cluster_status'})
} else {
Service_status<| title == 'rabbitmq' |> {
check_cmd => 'rabbitmqctl node_health_check && rabbitmqctl cluster_status',
}
}
Service[$service_name] -> Service_status['rabbitmq']
}
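A usage sketch with a few of the documented parameters overridden; the admin credentials below are placeholders, and the class expects the ::rabbitmq class to be declared first because it inherits ::rabbitmq::service:

# Run rabbitmq-server as a master/slave pacemaker resource.
class { '::cluster::rabbitmq_ocf':
  erlang_cookie           => 'EOKOWXQREETZSHFNTPEY',
  admin_user              => 'rabbit_admin',  # placeholder
  admin_pass              => 'CHANGE_ME',     # placeholder
  enable_rpc_ha           => false,
  enable_notifications_ha => true,
}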

View File

@ -1,61 +0,0 @@
# == Class: cluster::sysinfo
#
# Configure pacemaker sysinfo disk monitor
#
# === Parameters
#
# [*disks*]
#   (optional) Array of mount points to monitor for free space. / is monitored
#   by default, so it does not need to be specified.
# Defaults to []
#
# [*min_disk_free*]
#   (optional) Minimum amount of free space required for the partitions
# Defaults to '100M'
#
# [*disk_unit*]
# (optional) Unit for disk space
# Defaults to 'M'
#
# [*monitor_interval*]
#   (optional) Interval at which to monitor free space
#   Defaults to '15s'
#
# [*monitor_ensure*]
# (optional) Ensure the corosync monitor is installed
# Defaults to present
#
class cluster::sysinfo (
$disks = [],
$min_disk_free = '100M',
$disk_unit = 'M',
$monitor_interval = '15s',
$monitor_ensure = 'present',
) {
pcmk_resource { "sysinfo_${::fqdn}" :
ensure => $monitor_ensure,
primitive_class => 'ocf',
primitive_provider => 'pacemaker',
primitive_type => 'SysInfo',
parameters => {
'disks' => join(any2array($disks), ' '),
'min_disk_free' => $min_disk_free,
'disk_unit' => $disk_unit,
},
operations => { 'monitor' => { 'interval' => $monitor_interval } },
}
# Have service migrate if health turns red from the failed disk check
pcmk_property { 'node-health-strategy':
ensure => $monitor_ensure,
value => 'migrate-on-red',
}
pcmk_location { "sysinfo-on-${::fqdn}":
ensure => $monitor_ensure,
primitive => "sysinfo_${::fqdn}",
node => $::fqdn,
score => 'INFINITY',
}
}
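A usage sketch; the extra mount points and threshold below are illustrative values only:

# Monitor / (implicit) plus two extra mount points and migrate resources
# away from this node when a disk check turns its health red.
class { '::cluster::sysinfo':
  disks         => ['/var/log', '/var/lib/mysql'],
  min_disk_free => '512M',
  disk_unit     => 'M',
}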

View File

@ -1,230 +0,0 @@
# == Define: cluster::virtual_ip
#
# Configure VirtualIP resource for corosync/pacemaker.
#
# [*bridge*]
#   (Required) Name of the bridge to which the network
#   namespace holding the VIP is connected.
#
# [*ip*]
# (Required) The IPv4 address to be configured in
# dotted quad notation.
#
# [*ns_veth*]
# (Required) Name of network namespace side of
# the veth pair.
#
# [*base_veth*]
# (Required) Name of base system side of
# the veth pair.
#
# [*gateway*]
# Default route address.
# Default: none
#
# [*iflabel*]
# You can specify an additional label for your IP address here.
# This label is appended to your interface name.
#   Default: ka
#
# [*cidr_netmask*]
# The netmask for the interface in CIDR format.
# Default: 24
#
# [*ns*]
# Name of network namespace.
# Default: haproxy
#
# [*gateway_metric*]
# The metric value of the default route.
#
# [*other_networks*]
# Additional routes that should be added to this resource.
#   Routes will be added via the ns_veth interface.
#   Should be a space-separated list of networks in CIDR format.
#
# [*iptables_comment*]
# Iptables comment to associate with rules.
#
# [*ns_iptables_start_rules*]
# Iptables rules that should be
# started along with IP in the namespace.
#
# [*ns_iptables_stop_rules*]
# Iptables rules that should be
# stopped along with IP in the namespace.
#
# [*also_check_interfaces*]
#   List of network interfaces (e.g. NICs) that should be in
#   the UP state for the monitor action to return success.
#
# [*primitive_type*]
# The name of the OCF script to use.
# Default: ns_IPaddr2
#
# [*use_pcmk_prefix*]
# Should the 'p_' prefix be added to
# the primitive name.
# Default: false
#
# [*vip_prefix*]
# The prefix added to the VIP primitive name.
# Default: 'vip__'
#
# [*additional_parameters*]
# Any additional instance variables can be
# passed as a hash here.
# Default: {}
#
# [*colocation_before*]
#   The name of another virtual_ip instance
# that should have a colocation constraint to
# go before this virtual_ip.
#
# [*colocation_after*]
#   The name of another virtual_ip instance
# that should have a colocation constraint to
# go after this virtual_ip.
#
# [*colocation_score*]
# The score of the created colocation constraints.
# Default: INFINITY
#
# [*colocation_ensure*]
#   Controls the ensure value of the colocations.
# Default: present
#
# [*colocation_separator*]
# The separator between vip names in the colocation
# constraint name.
# Default: -with-
#
define cluster::virtual_ip (
$bridge,
$ip,
$ns_veth,
$base_veth,
$gateway = 'none',
$iflabel = 'ka',
$cidr_netmask = '24',
$ns = 'haproxy',
$gateway_metric = undef,
$other_networks = undef,
$iptables_comment = undef,
$ns_iptables_start_rules = undef,
$ns_iptables_stop_rules = undef,
$also_check_interfaces = undef,
$primitive_type = 'ns_IPaddr2',
$use_pcmk_prefix = false,
$vip_prefix = 'vip__',
$additional_parameters = { },
$colocation_before = undef,
$colocation_after = undef,
$colocation_score = 'INFINITY',
$colocation_ensure = 'present',
$colocation_separator = '-with-',
){
validate_string($primitive_type)
validate_bool($use_pcmk_prefix)
validate_hash($additional_parameters)
$vip_name = "${vip_prefix}${name}"
$metadata = {
# will try to start 3 times before migrating to another node
'migration-threshold' => '3',
# forget any start failures after this timeout
'failure-timeout' => '60',
# will not randomly migrate to the other nodes without a reason
'resource-stickiness' => '1',
}
$operations = {
'monitor' => {
'interval' => '5',
'timeout' => '20',
},
'start' => {
'interval' => '0',
'timeout' => '30',
},
'stop' => {
'interval' => '0',
'timeout' => '30',
},
}
$parameters = resource_parameters(
'bridge', $bridge,
'ip', $ip,
'cidr_netmask', $cidr_netmask,
'iflabel', $iflabel,
'ns', $ns,
'base_veth', $base_veth,
'ns_veth', $ns_veth,
'gateway', $gateway,
'gateway_metric', $gateway_metric,
'ns_iptables_start_rules', $ns_iptables_start_rules,
'ns_iptables_stop_rules', $ns_iptables_stop_rules,
'iptables_comment', $iptables_comment,
'also_check_interfaces', $also_check_interfaces,
'other_networks', $other_networks,
$additional_parameters
)
service { $vip_name:
ensure => 'running',
enable => true,
}
pacemaker::service { $vip_name :
primitive_type => $primitive_type,
parameters => $parameters,
metadata => $metadata,
operations => $operations,
prefix => $use_pcmk_prefix,
}
# I'm running before this other vip
# and this other vip cannot start without me running on this node
if $colocation_before {
$colocation_before_vip_name = "${vip_prefix}${colocation_before}"
$colocation_before_constraint_name = "${colocation_before_vip_name}${colocation_separator}${vip_name}"
pcmk_colocation { $colocation_before_constraint_name :
ensure => $colocation_ensure,
score => $colocation_score,
first => $vip_name,
second => $colocation_before_vip_name,
}
Pcmk_resource <| title == $vip_name |> -> Pcmk_resource <| title == $colocation_before_vip_name |>
Service <| title == $vip_name |> -> Service <| title == $colocation_before_vip_name |>
Service <| title == $colocation_before_vip_name |> -> Pcmk_colocation[$colocation_before_constraint_name]
Service <| title == $vip_name |> -> Pcmk_colocation[$colocation_before_constraint_name]
}
# I'm running after this other vip
# and I cannot start without other vip running on this node
if $colocation_after {
$colocation_after_vip_name = "${vip_prefix}${colocation_after}"
$colocation_after_constraint_name = "${vip_name}${colocation_separator}${colocation_after_vip_name}"
pcmk_colocation { $colocation_after_constraint_name :
ensure => $colocation_ensure,
score => $colocation_score,
first => $colocation_after_vip_name,
second => $vip_name,
}
Pcmk_resource <| title == $colocation_after_vip_name |> -> Pcmk_resource <| title == $vip_name |>
Service <| title == $colocation_after_vip_name |> -> Service <| title == $vip_name |>
Service <| title == $colocation_after_vip_name |> -> Pcmk_colocation[$colocation_after_constraint_name]
Service <| title == $vip_name |> -> Pcmk_colocation[$colocation_after_constraint_name]
}
}
Class['corosync'] -> Cluster::Virtual_ip <||>
if defined(Corosync::Service['pacemaker']) {
Corosync::Service['pacemaker'] -> Cluster::Virtual_ip <||>
}
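A usage sketch reusing the values exercised by the unit test later in this diff; it creates the Pcmk_resource['vip__my_ip'] primitive inside the default 'haproxy' namespace:

cluster::virtual_ip { 'my_ip':
  bridge    => 'br0',
  ip        => '192.168.0.2',
  ns_veth   => 'tst',
  base_veth => 'tst',
  gateway   => '192.168.0.1',
}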

View File

@ -1,63 +0,0 @@
define cluster::virtual_ip_ping (
$host_list = '127.0.0.1',
) {
$vip_name = $name
$service_name = "ping_${vip_name}"
$location_name = "loc_ping_${vip_name}"
$primitive_class = 'ocf'
$primitive_provider = 'pacemaker'
$primitive_type = 'ping'
$parameters = {
'host_list' => $host_list,
'multiplier' => '1000',
'dampen' => '30s',
'timeout' => '3s',
}
$operations = {
'monitor' => {
'interval' => '20',
'timeout' => '30',
},
}
$complex_type = 'clone'
service { $service_name :
ensure => 'running',
enable => true,
}
pacemaker::service { $service_name :
prefix => false,
primitive_class => $primitive_class,
primitive_provider => $primitive_provider,
primitive_type => $primitive_type,
parameters => $parameters,
operations => $operations,
complex_type => $complex_type,
}
pcmk_location { $location_name :
primitive => $vip_name,
rules => [
{
'score' => '50',
'expressions' => [
{
'attribute' => "pingd",
'operation' => 'defined',
},
{
'attribute' => "pingd",
'operation' => 'gte',
'value' => '1',
},
],
},
],
}
Pcmk_resource[$service_name] ->
Pcmk_location[$location_name] ->
Service[$service_name]
}
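A usage sketch; the title must match an existing VIP primitive name (here the 'vip__my_ip' from the previous example) and the host list is a placeholder:

# Add a ping-based location constraint for the VIP.
cluster::virtual_ip_ping { 'vip__my_ip':
  host_list => '192.168.0.1',
}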

View File

@ -1,54 +0,0 @@
# == Class: cluster::vrouter_ocf
#
# Configure OCF service for vrouter managed by corosync/pacemaker
#
class cluster::vrouter_ocf (
$other_networks = false,
) {
$service_name = 'p_vrouter'
$primitive_type = 'ns_vrouter'
$complex_type = 'clone'
$complex_metadata = {
'interleave' => true,
}
$metadata = {
'migration-threshold' => '3',
'failure-timeout' => '120',
}
$parameters = {
'ns' => 'vrouter',
'other_networks' => "${other_networks}",
}
$operations = {
'monitor' => {
'interval' => '30',
'timeout' => '60'
},
'start' => {
'interval' => '0',
'timeout' => '30'
},
'stop' => {
'interval' => '0',
'timeout' => '60'
},
}
service { $service_name :
ensure => 'running',
enable => true,
hasstatus => true,
hasrestart => true,
provider => 'pacemaker',
}
pacemaker::service { $service_name :
primitive_type => $primitive_type,
parameters => $parameters,
metadata => $metadata,
operations => $operations,
complex_metadata => $complex_metadata,
complex_type => $complex_type,
prefix => false,
}
}
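A usage sketch with a placeholder value for the optional network list:

# Clone the ns_vrouter OCF resource across the controllers.
class { '::cluster::vrouter_ocf':
  other_networks => '10.0.0.0/24 192.168.1.0/24',
}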

View File

@ -1,45 +0,0 @@
require 'spec_helper'
describe 'cluster::dns_ocf' do
let(:default_params) do
{}
end
shared_examples_for 'dns_ocf configuration' do
let :params do
default_params
end
it 'configures with the default params' do
should contain_class('cluster::dns_ocf')
should contain_pcmk_resource('p_dns')
should contain_pcmk_colocation('dns-with-vrouter-ns').with(
:ensure => 'present',
:score => 'INFINITY',
:first => 'clone_p_vrouter',
:second => 'clone_p_dns'
).that_requires('Pcmk_resource[p_dns]')
should contain_service('p_dns').with(
:name => 'p_dns',
:enable => true,
:ensure => 'running',
:hasstatus => true,
:hasrestart => true,
:provider => 'pacemaker',
).that_requires('Pcmk_colocation[dns-with-vrouter-ns]')
end
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) { facts.merge common_facts }
it_configures 'dns_ocf configuration'
end
end
end

View File

@ -1,23 +0,0 @@
require 'spec_helper'
describe 'cluster::galera_grants' do
shared_examples_for 'galera_grants configuration' do
context 'with valid parameters' do
let :params do
{
:status_user => 'user',
:status_password => 'password',
}
end
it 'should create grant with right privileges' do
should contain_mysql_grant("user@%/*.*").with(
:options => [ 'GRANT' ],
:privileges => [ 'USAGE' ]
)
end
end
end
end

View File

@ -1,22 +0,0 @@
require 'spec_helper'
describe 'cluster::galera_status' do
shared_examples_for 'galera_status configuration' do
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) do
facts.merge(common_facts).merge(
{
:openstack_version => {'nova' => 'present'}
}
)
end
it_configures 'galera_status configuration'
end
end
end

View File

@ -1,21 +0,0 @@
require 'spec_helper'
describe 'cluster::haproxy::rsyslog' do
let(:default_params) { {
:log_file => '/var/log/haproxy.log'
} }
shared_examples_for 'haproxy rsyslog configuration' do
let :params do
default_params
end
context 'with default parameters' do
it 'should configure rsyslog for haproxy' do
should contain_file('/etc/rsyslog.d/haproxy.conf')
end
end
end
end

View File

@ -1,77 +0,0 @@
require 'spec_helper'
describe 'cluster::heat_engine' do
let(:pre_condition) do
<<-eof
class { 'heat::keystone::authtoken' :
password => 'test',
}
class { '::heat::engine' :
auth_encryption_key => 'deadb33fdeadb33f',
}
eof
end
shared_examples_for 'cluster::heat_engine configuration' do
context 'with valid params' do
let :params do
{ }
end
it {
should contain_class('cluster::heat_engine')
}
it 'configures a heat engine pacemaker service' do
should contain_pacemaker__service(platform_params[:engine_service_name]).with(
:primitive_type => 'heat-engine',
:metadata => {
'resource-stickiness' => '1',
'migration-threshold' => '3'
},
:complex_type => 'clone',
:complex_metadata => {
'interleave' => true
},
:operations => {
'monitor' => {
'interval' => '20',
'timeout' => '30'
},
'start' => {
'interval' => '0',
'timeout' => '60'
},
'stop' => {
'interval' => '0',
'timeout' => '60'
},
}
)
end
end
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) { facts.merge(common_facts) }
let :platform_params do
if facts[:osfamily] == 'Debian'
{
:engine_service_name => 'heat-engine'
}
else
{
:engine_service_name => 'openstack-heat-engine'
}
end
end
it_configures 'cluster::heat_engine configuration'
end
end
end

View File

@ -1,73 +0,0 @@
require 'spec_helper'
describe 'cluster::mysql' do
let(:pre_condition) do
'include ::mysql::server'
end
shared_examples_for 'cluster::mysql configuration' do
context 'with valid params' do
let :params do
{
:mysql_user => 'username',
:mysql_password => 'password',
}
end
it 'defines a creds file' do
should contain_file('/etc/mysql/user.cnf').with_content(
"[client]\nuser = #{params[:mysql_user]}\npassword = #{params[:mysql_password]}\n"
)
end
it 'configures a cs_resource' do
should contain_pcmk_resource('p_mysqld').with(
:ensure => 'present',
:parameters => {
'config' => '/etc/mysql/my.cnf',
'test_conf' => '/etc/mysql/user.cnf',
'socket' =>'/var/run/mysqld/mysqld.sock'
}
)
should contain_pcmk_resource('p_mysqld').that_comes_before('Service[mysqld]')
end
it 'creates init-file with grants' do
should contain_exec('create-init-file').with_command(
/'username'@'%' IDENTIFIED BY 'password'/
)
should contain_exec('create-init-file').with_command(
/'username'@'localhost' IDENTIFIED BY 'password'/
)
should contain_exec('create-init-file').that_comes_before('File[fix-log-dir]')
should contain_exec('create-init-file').that_notifies('Exec[wait-for-sync]')
end
it 'creates exec to remove init-file' do
should contain_exec('rm-init-file')
end
it 'should have correct permissions for logging directory' do
should contain_file('fix-log-dir').with(
:ensure => 'directory',
:path => '/var/log/mysql',
:mode => '0770',
).that_requires('Package[mysql-server]')
should contain_file('fix-log-dir').that_comes_before('Service[mysqld]')
end
it 'creates exec to wait initial database sync' do
should contain_exec('wait-for-sync').that_subscribes_to('Service[mysqld]')
end
end
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) { facts.merge(common_facts) }
it_configures 'cluster::mysql configuration'
end
end
end

View File

@ -1,40 +0,0 @@
require 'spec_helper'
describe 'cluster::ntp_ocf' do
shared_examples_for 'ntp_ocf configuration' do
it 'configures with the default params' do
service_name = platform_params[:service_name]
should contain_class('cluster::ntp_ocf')
should contain_pcmk_resource("p_#{service_name}").with_before(["Pcmk_colocation[ntp-with-vrouter-ns]", "Service[#{service_name}]"])
should contain_pcmk_colocation("ntp-with-vrouter-ns").with(
:ensure => 'present',
:score => 'INFINITY',
:first => 'clone_p_vrouter',
:second => "clone_p_#{service_name}")
end
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) { facts.merge(common_facts) }
let :platform_params do
if facts[:osfamily] == 'Debian'
{
:service_name => 'ntp'
}
else
{
:service_name => 'ntpd'
}
end
end
it_configures 'ntp_ocf configuration'
end
end
end

View File

@ -1,53 +0,0 @@
require 'spec_helper'
describe 'cluster' do
let(:default_params) { {
:internal_address => '127.0.0.1',
:quorum_members => ['localhost'],
:unicast_addresses => ['127.0.0.1'],
:cluster_recheck_interval => '190s',
} }
shared_examples_for 'cluster configuration' do
let :params do
default_params
end
context 'with default params' do
it 'configures corosync with pacemaker' do
should contain_class('openstack::corosync').with(
:bind_address => default_params[:internal_address],
:quorum_members => default_params[:quorum_members],
:unicast_addresses => default_params[:unicast_addresses],
:packages => packages,
:cluster_recheck_interval => default_params[:cluster_recheck_interval])
should contain_file('ocf-fuel-path').with(
:ensure => 'directory',
:path => '/usr/lib/ocf/resource.d/fuel',
:recurse => true,
:owner => 'root',
:group => 'root')
end
end
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) { facts.merge(common_facts) }
let :packages do
if facts[:osfamily] == 'Debian'
[ 'crmsh', 'pcs' ]
else
['crmsh']
end
end
it_configures 'cluster configuration'
end
end
end

View File

@ -1,143 +0,0 @@
require 'spec_helper'
describe 'cluster::rabbitmq_ocf' do
shared_examples_for 'rabbitmq_ocf configuration' do
let(:pre_condition) {
'include rabbitmq::service'
}
let(:params) {{
:primitive_type => 'rabbitmq-server',
:service_name => 'rabbitmq-server',
:port => '5672',
:host_ip => '127.0.0.1',
:debug => false,
:ocf_script_file => 'cluster/ocf/rabbitmq',
:command_timeout => '',
:erlang_cookie => 'EOKOWXQREETZSHFNTPEY',
:admin_user => 'nil',
:admin_pass => 'nil',
:enable_rpc_ha => false,
:enable_notifications_ha => true,
:fqdn_prefix => 'nil',
:pid_file => 'nil',
:policy_file => 'nil',
:start_timeout => '120',
:stop_timeout => '120',
:mon_timeout => '120',
:promote_timeout => '120',
:demote_timeout => '120',
:notify_timeout => '120',
:master_mon_interval => '27',
:mon_interval => '30',
}}
let(:metadata) {{
'migration-threshold' => '10',
'failure-timeout' => '30s',
'resource-stickiness' => '100',
}}
let(:complex_metadata) {{
'notify' => 'true',
'ordered' => 'false',
'interleave' => 'true',
'master-max' => '1',
'master-node-max' => '1',
'target-role' => 'Master',
'requires' => 'nothing'
}}
let(:monitor) {{
'interval' => params[:mon_interval],
'timeout' => params[:mon_timeout]
}}
let(:monitor_master) {{
'role' => 'Master',
'interval' => params[:master_mon_interval],
'timeout' => params[:mon_timeout]
}}
let(:start) {{
'interval' => '0',
'timeout' => params[:start_timeout]
}}
let(:stop) {{
'interval' => '0',
'timeout' => params[:stop_timeout]
}}
let(:promote) {{
'interval' => '0',
'timeout' => params[:promote_timeout]
}}
let(:demote) {{
'interval' => '0',
'timeout' => params[:demote_timeout]
}}
let(:notify) {{
'interval' => '0',
'timeout' => params[:notify_timeout]
}}
let(:operations) {{
'monitor' => monitor,
'monitor:Master' => monitor_master,
'start' => start,
'stop' => stop,
'promote' => promote,
'demote' => demote,
'notify' => notify,
}}
let(:parameters) {{
'host_ip' => params[:host_ip],
'node_port' => params[:port],
'debug' => params[:debug],
'command_timeout' => params[:command_timeout],
'erlang_cookie' => params[:erlang_cookie],
'admin_user' => params[:admin_user],
'admin_password' => params[:admin_pass],
'enable_rpc_ha' => params[:enable_rpc_ha],
'enable_notifications_ha' => params[:enable_notifications_ha],
'fqdn_prefix' => params[:fqdn_prefix],
'pid_file' => params[:pid_file],
'policy_file' => params[:policy_file],
}}
it 'configures with the default params' do
should contain_class('cluster::rabbitmq_ocf')
should contain_pacemaker__service(params[:service_name]).with(
:primitive_type => params[:primitive_type],
:complex_type => 'master',
:complex_metadata => complex_metadata,
:metadata => metadata,
:operations => operations,
:parameters => parameters
)
end
end
on_supported_os(supported_os: supported_os).each do |os, facts|
context "on #{os}" do
let(:facts) { facts.merge(common_facts) }
let :packages do
if facts[:osfamily] == 'Debian'
[ 'crmsh', 'pcs' ]
else
['crmsh']
end
end
it_configures 'rabbitmq_ocf configuration'
end
end
end

View File

@ -1,64 +0,0 @@
require 'spec_helper'
describe 'cluster::virtual_ip', type: :define do
let(:title) do
'my_ip'
end
context 'with only basic parameters' do
let(:params) do
{
bridge: 'br0',
ip: '192.168.0.2',
ns_veth: 'tst',
base_veth: 'tst',
gateway: '192.168.0.1',
}
end
it { is_expected.to compile.with_all_deps }
it { is_expected.to contain_cluster__virtual_ip('my_ip') }
resource_parameters = {
:ensure => 'present',
:primitive_class => 'ocf',
:primitive_type => 'ns_IPaddr2',
:primitive_provider => 'fuel',
:parameters => {
'bridge' => 'br0',
'ip' => '192.168.0.2',
'gateway' => '192.168.0.1',
'cidr_netmask' => '24',
'iflabel' => 'ka',
'ns' => 'haproxy',
'ns_veth' => 'tst',
'base_veth' => 'tst',
},
:operations => {
'monitor' => {
'interval' => '5',
'timeout' => '20',
},
'start' => {
'interval' => '0',
'timeout' => '30',
},
'stop' => {
'interval' => '0',
'timeout' => '30',
}
},
:metadata => {
'migration-threshold' => '3',
'failure-timeout' => '60',
'resource-stickiness' => '1',
}
}
it { is_expected.to contain_pcmk_resource('vip__my_ip').with(resource_parameters) }
end
end

View File

@ -1,21 +0,0 @@
require 'spec_helper'
describe 'resource_parameters' do
it { is_expected.not_to eq(nil) }
it { is_expected.to run.with_params.and_return({}) }
it { is_expected.to run.with_params(nil).and_return({}) }
it { is_expected.to run.with_params(false).and_return({}) }
it { is_expected.to run.with_params(nil, 'b').and_return({}) }
it { is_expected.to run.with_params('a').and_return({}) }
it { is_expected.to run.with_params('a', 'b').and_return({'a' => 'b'}) }
it { is_expected.to run.with_params('a', 'b', 'c', nil, ['d', 1], 'e', :undef).and_return({'a' => 'b', 'd' => 1}) }
it { is_expected.to run.with_params('a', 'b', 'c', 'd', {'e' => 'f', 'a' => '10'}).and_return({'a' => '10', 'c' => 'd', 'e' => 'f'}) }
end

View File

@ -1,5 +0,0 @@
shared_examples_for "a Puppet::Error" do |description|
it "with message matching #{description.inspect}" do
expect { is_expected.to have_class_count(1) }.to raise_error(Puppet::Error, description)
end
end

View File

@ -1,30 +0,0 @@
require 'puppetlabs_spec_helper/module_spec_helper'
require 'shared_examples'
require 'rspec-puppet-facts'
include RspecPuppetFacts
RSpec.configure do |c|
c.alias_it_should_behave_like_to :it_configures, 'configures'
c.alias_it_should_behave_like_to :it_raises, 'raises'
end
at_exit { RSpec::Puppet::Coverage.report! }
def supported_os
[
{
'operatingsystem' => 'CentOS',
'operatingsystemrelease' => ['7.0'],
},
{
'operatingsystem' => 'Ubuntu',
'operatingsystemrelease' => ['16.04'],
},
]
end
def common_facts
{
:os_service_default => '<SERVICE DEFAULT>',
}
end

View File

@ -1,411 +0,0 @@
#
# Synchronizer settings
#
Sync {
Mode FTFW {
#
# Size of the resend queue (in objects). This is the maximum
# number of objects that can be stored waiting to be confirmed
# via acknowledgment. If you keep this value low, the daemon
# will have less chances to recover state-changes under message
# omission. On the other hand, if you keep this value high,
# the daemon will consume more memory to store dead objects.
# Default is 131072 objects.
#
# ResendQueueSize 131072
#
# This parameter allows you to set an initial fixed timeout
# for the committed entries when this node goes from backup
# to primary. This mechanism provides a way to purge entries
# that were not recovered appropriately after the specified
# fixed timeout. If you set a low value, TCP entries in
# Established states with no traffic may hang. For example,
# an SSH connection without KeepAlive enabled. If not set,
# the daemon uses an approximate timeout value calculation
# mechanism. By default, this option is not set.
#
# CommitTimeout 180
#
# If the firewall replica goes from primary to backup,
# the conntrackd -t command is invoked in the script.
# This command schedules a flush of the table in N seconds.
# This is useful to purge the connection tracking table of
# zombie entries and avoid clashes with old entries if you
# trigger several consecutive hand-overs. Default is 60 seconds.
#
# PurgeTimeout 60
# Set the acknowledgement window size. If you decrease this
# value, the number of acknowledgments increases. More
# acknowledgments means more overhead as conntrackd has to
# handle more control messages. On the other hand, if you
# increase this value, the resend queue gets more populated.
# This results in more overhead in the queue releasing.
# The following value is based on some practical experiments
# measuring the cycles spent by the acknowledgment handling
# with oprofile. If not set, default window size is 300.
#
# ACKWindowSize 300
#
# This clause allows you to disable the external cache. Thus,
# the state entries are directly injected into the kernel
# conntrack table. As a result, you save memory in user-space
# but you consume slots in the kernel conntrack table for
# backup state entries. Moreover, disabling the external cache
# means more CPU consumption. You need a Linux kernel
# >= 2.6.29 to use this feature. By default, this clause is
# set off. If you are installing conntrackd for first time,
# please read the user manual and I encourage you to consider
# using the fail-over scripts instead of enabling this option!
#
# DisableExternalCache Off
}
#
# Multicast IP and interface where messages are
# broadcasted (dedicated link). IMPORTANT: Make sure
# that iptables accepts traffic for destination
# 225.0.0.50, eg:
#
# iptables -I INPUT -d 225.0.0.50 -j ACCEPT
# iptables -I OUTPUT -d 225.0.0.50 -j ACCEPT
#
Multicast {
#
# Multicast address: The address that you use as destination
# in the synchronization messages. You do not have to add
# this IP to any of your existing interfaces. If any doubt,
# do not modify this value.
#
IPv4_address 225.0.0.50
#
# The multicast group that identifies the cluster. If any
# doubt, do not modify this value.
#
Group 3780
#
# IP address of the interface that you are going to use to
# send the synchronization messages. Remember that you must
# use a dedicated link for the synchronization messages.
#
IPv4_interface 240.1.0.<%= @bind_address.rpartition(".").last %>
#
# The name of the interface that you are going to use to
# send the synchronization messages.
#
Interface conntrd
# The multicast sender uses a buffer to enqueue the packets
# that are going to be transmitted. The default size of this
# socket buffer is available at /proc/sys/net/core/wmem_default.
# This value determines the chances to have an overrun in the
# sender queue. An overrun results in packet loss, thus losing
# state information that would have to be retransmitted. If you
# notice some packet loss, you may want to increase the size
# of the sender buffer. The default size is usually around
# ~100 KBytes which is fairly small for busy firewalls.
#
SndSocketBuffer 1249280
# The multicast receiver uses a buffer to enqueue the packets
# that the socket is pending to handle. The default size of this
# socket buffer is available at /proc/sys/net/core/rmem_default.
# This value determines the chances to have an overrun in the
# receiver queue. An overrun results in packet loss, thus losing
# state information that would have to be retransmitted. If you
# notice some packet loss, you may want to increase the size of
# the receiver buffer. The default size is usually around
# ~100 KBytes which is fairly small for busy firewalls.
#
RcvSocketBuffer 1249280
#
# Enable/Disable message checksumming. This is a good
# property to achieve fault-tolerance. In case of doubt, do
# not modify this value.
#
Checksum on
}
#
# You can specify more than one dedicated link. Thus, if one dedicated
# link fails, conntrackd can fail-over to another. Note that adding
# more than one dedicated link does not mean that state-updates will
# be sent to all of them. There is only one active dedicated link at
# a given moment. The `Default' keyword indicates that this interface
# will be selected as the initial dedicated link. You can have
# up to 4 redundant dedicated links. Note: Use different multicast
# groups for every redundant link.
#
# Multicast Default {
# IPv4_address 225.0.0.51
# Group 3781
# IPv4_interface 192.168.100.101
# Interface eth3
# # SndSocketBuffer 1249280
# # RcvSocketBuffer 1249280
# Checksum on
# }
#
# You can use Unicast UDP instead of Multicast to propagate events.
# Note that you cannot use unicast UDP and Multicast at the same
# time, you can only select one.
#
# UDP {
#
# UDP address that this firewall uses to listen to events.
#
# IPv4_address 192.168.2.100
#
# or you may want to use an IPv6 address:
#
# IPv6_address fe80::215:58ff:fe28:5a27
#
# Destination UDP address that receives events, ie. the other
# firewall's dedicated link address.
#
# IPv4_Destination_Address 192.168.2.101
#
# or you may want to use an IPv6 address:
#
# IPv6_Destination_Address fe80::2d0:59ff:fe2a:775c
#
# UDP port used
#
# Port 3780
#
# The name of the interface that you are going to use to
# send the synchronization messages.
#
# Interface eth2
#
# The sender socket buffer size
#
# SndSocketBuffer 1249280
#
# The receiver socket buffer size
#
# RcvSocketBuffer 1249280
#
# Enable/Disable message checksumming.
#
# Checksum on
# }
#
# Other unsorted options that are related to the synchronization.
#
# Options {
#
# TCP state-entries have window tracking disabled by default,
# you can enable it with this option. As said, default is off.
# This feature requires a Linux kernel >= 2.6.36.
#
# TCPWindowTracking Off
# }
}
#
# General settings
#
General {
#
# Set the nice value of the daemon, this value goes from -20
# (most favorable scheduling) to 19 (least favorable). Using a
# very low value reduces the chances to lose state-change events.
# Default is 0, but this example file sets it to the most favourable
# scheduling as this is generally a good idea. See man nice(1) for
# more information.
#
Nice -20
#
# Select a different scheduler for the daemon, you can select between
# RR and FIFO and the process priority (minimum is 0, maximum is 99).
# See man sched_setscheduler(2) for more information. Using a RT
# scheduler reduces the chances to overrun the Netlink buffer.
#
# Scheduler {
# Type FIFO
# Priority 99
# }
#
# Number of buckets in the cache hashtable. The bigger it is,
# the closer it gets to O(1) at the cost of consuming more memory.
# Read some documents about tuning hashtables for further reference.
#
HashSize 32768
#
# Maximum number of conntracks, it should be double of:
# $ cat /proc/sys/net/netfilter/nf_conntrack_max
# since the daemon may keep some dead entries cached for possible
# retransmission during state synchronization.
#
HashLimit 131072
#
# Logfile: on (/var/log/conntrackd.log), off, or a filename
# Default: off
#
LogFile on
#
# Syslog: on, off or a facility name (daemon (default) or local0..7)
# Default: off
#
#Syslog on
#
# Lockfile
#
LockFile /var/lock/conntrack.lock
#
# Unix socket configuration
#
UNIX {
Path /var/run/conntrackd.ctl
Backlog 20
}
#
# Netlink event socket buffer size. If you do not specify this clause,
# the default buffer size value in /proc/net/core/rmem_default is
# used. This default value is usually around 100 Kbytes which is
# fairly small for busy firewalls. This leads to event message dropping
# and high CPU consumption. This example configuration file sets the
# size to 2 MBytes to avoid this sort of problems.
#
NetlinkBufferSize 2097152
#
# The daemon doubles the size of the netlink event socket buffer size
# if it detects netlink event message dropping. This clause sets the
# maximum buffer size growth that can be reached. This example file
# sets the size to 8 MBytes.
#
NetlinkBufferSizeMaxGrowth 8388608
#
# If the daemon detects that Netlink is dropping state-change events,
# it automatically schedules a resynchronization against the Kernel
# after 30 seconds (default value). Resynchronizations are expensive
# in terms of CPU consumption since the daemon has to get the full
# kernel state-table and purge state-entries that do not exist anymore.
# Be careful of setting a very small value here. You have the following
# choices: On (enabled, use default 30 seconds value), Off (disabled)
# or Value (in seconds, to set a specific amount of time). If not
# specified, the daemon assumes that this option is enabled.
#
# NetlinkOverrunResync On
#
# If you want reliable event reporting over Netlink, set on this
# option. If you set on this clause, it is a good idea to set off
# NetlinkOverrunResync. This option is off by default and you need
# a Linux kernel >= 2.6.31.
#
# NetlinkEventsReliable Off
#
# By default, the daemon receives state updates following an
# event-driven model. You can modify this behaviour by switching to
# polling mode with the PollSecs clause. This clause tells conntrackd
# to dump the states in the kernel every N seconds. With regards to
# synchronization mode, the polling mode can only guarantee that
# long-lifetime states are recovered. The main advantage of this method
# is the reduction in the state replication at the cost of reducing the
# chances of recovering connections.
#
# PollSecs 15
#
# The daemon prioritizes the handling of state-change events coming
# from the core. With this clause, you can set the maximum number of
# state-change events (those coming from kernel-space) that the daemon
# will handle after which it will handle other events coming from the
# network or userspace. A low value improves interactivity (in terms of
# real-time behaviour) at the cost of extra CPU consumption.
# Default (if not set) is 100.
#
# EventIterationLimit 100
#
# Event filtering: This clause allows you to filter certain traffic,
# There are currently three filter-sets: Protocol, Address and
# State. The filter is attached to an action that can be: Accept or
# Ignore. Thus, you can define the event filtering policy of the
# filter-sets in positive or negative logic depending on your needs.
# You can select if conntrackd filters the event messages from
# user-space or kernel-space. The kernel-space event filtering
# saves some CPU cycles by avoiding the copy of the event message
# from kernel-space to user-space. The kernel-space event filtering
# is preferred; however, you require a Linux kernel >= 2.6.29 to
# filter from kernel-space. If you want to select kernel-space
# event filtering, use the keyword 'Kernelspace' instead of
# 'Userspace'.
#
Filter From Userspace {
#
# Accept only certain protocols: You may want to replicate
# the state of flows depending on their layer 4 protocol.
#
Protocol Accept {
TCP
SCTP
DCCP
# UDP
# ICMP # This requires a Linux kernel >= 2.6.31
# IPv6-ICMP # This requires a Linux kernel >= 2.6.31
}
#
# Ignore traffic for a certain set of IP's: Usually all the
# IP assigned to the firewall since local traffic must be
# ignored; only forwarded connections are worth replicating.
# Note that these values depend on the local IPs that are
# assigned to the firewall.
#
Address Ignore {
IPv4_address 127.0.0.1 # loopback
IPv4_address 192.168.0.100 # virtual IP 1
IPv4_address 192.168.1.100 # virtual IP 2
IPv4_address 192.168.0.1
IPv4_address 192.168.1.1
IPv4_address 192.168.100.100 # dedicated link ip
#
# You can also specify networks in format IP/cidr.
# IPv4_address 192.168.0.0/24
#
# You can also specify an IPv6 address
# IPv6_address ::1
}
#
# Uncomment this line below if you want to filter by flow state.
# This option introduces a trade-off in the replication: it
# reduces CPU consumption at the cost of having lazy backup
# firewall replicas. The existing TCP states are: SYN_SENT,
# SYN_RECV, ESTABLISHED, FIN_WAIT, CLOSE_WAIT, LAST_ACK,
# TIME_WAIT, CLOSED, LISTEN.
#
# State Accept {
# ESTABLISHED CLOSED TIME_WAIT CLOSE_WAIT for TCP
# }
}
}
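A hypothetical sketch of how an ERB template like this could be rendered from Puppet; the template path and target file location below are assumptions for illustration, not taken from the original module:

# $bind_address feeds the <%= @bind_address %> lookup used above for the
# dedicated-link interface address.
$bind_address = '10.10.10.2'  # placeholder management address
file { '/etc/conntrackd/conntrackd.conf':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0640',
  content => template('cluster/conntrackd.conf.erb'),  # assumed template path
}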

View File

@ -1,7 +0,0 @@
# Create an additional socket in haproxy's chroot in order to allow logging via
# /dev/log to chroot'ed HAProxy processes
$AddUnixListenSocket /var/lib/haproxy/dev/log
# Send HAProxy messages to a dedicated logfile
if $programname startswith 'haproxy' then <%= @log_file %>
& stop
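This snippet is rendered by the cluster::haproxy::rsyslog class covered by the spec earlier in this diff; a minimal usage sketch:

# Forward HAProxy messages to a dedicated log file via rsyslog.
class { '::cluster::haproxy::rsyslog':
  log_file => '/var/log/haproxy.log',
}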

View File

@ -1,3 +0,0 @@
[client]
user = <%= @mysql_user %>
password = <%= @mysql_password %>

View File

@ -1,10 +0,0 @@
fixtures:
symlinks:
cobbler: "#{source_dir}"
openssl: "#{source_dir}/../openssl"
inifile: "#{source_dir}/../inifile"
firewall: "#{source_dir}/../firewall"
stdlib: "#{source_dir}/../stdlib"
apache: "#{source_dir}/../apache"
concat: "#{source_dir}/../concat"
fuel: "#{source_dir}/../fuel"

View File

@ -1 +0,0 @@
spec/fixtures

View File

@ -1,24 +0,0 @@
source 'https://rubygems.org'
group :development, :test do
gem 'puppetlabs_spec_helper', :require => 'false'
gem 'rspec-puppet', '~> 2.2.0', :require => 'false'
gem 'metadata-json-lint', :require => 'false'
# TODO(aschultz): fix linting and enable these
#gem 'puppet-lint-param-docs', :require => 'false'
gem 'puppet-lint-absolute_classname-check', :require => 'false'
gem 'puppet-lint-absolute_template_path', :require => 'false'
gem 'puppet-lint-trailing_newline-check', :require => 'false'
gem 'puppet-lint-unquoted_string-check', :require => 'false'
gem 'puppet-lint-leading_zero-check', :require => 'false'
gem 'puppet-lint-variable_contains_upcase', :require => 'false'
gem 'puppet-lint-numericvariable', :require => 'false'
gem 'json', :require => 'false'
gem 'rspec-puppet-facts', :require => 'false'
end
if puppetversion = ENV['PUPPET_GEM_VERSION']
gem 'puppet', puppetversion, :require => false
else
gem 'puppet', :require => false
end

Some files were not shown because too many files have changed in this diff.