Merge remote branch 'origin/develop'

This commit is contained in:
Mike Scherbakov
2013-09-01 06:02:47 +04:00
94 changed files with 2667 additions and 652 deletions

View File

@@ -0,0 +1,307 @@
Mirantis Puppet module for Ceph
===============================
About
-----
This is a Puppet module to install a Ceph cluster inside of OpenStack. This
module has been developed specifically to work with Mirantis Fuel for
OpenStack.
* Puppet: http://www.puppetlabs.com/
* Ceph: http://ceph.com/
* Fuel: http://fuel.mirantis.com/
Status
------
Originally developed and tested on Ubuntu 12.04 LTS (Precise Pangolin),
targeting the Ceph 0.61 (Cuttlefish) release:
* Ubuntu 12.04.2 LTS
* Puppet 3.2.2
* Ceph 0.61.7
**Ubuntu support is currently broken but will be back soon**
Currently working on CentOS 6.4 with Ceph 0.61:
* CentOS 6.4
* Puppet 2.7.19
* Ceph 0.61.8
Known Issues
------------
**Glance**
There are currently issues with Glance 2013.1.2 (Grizzly) that cause ``glance
image-create`` with ``--location`` to fail. See
https://bugs.launchpad.net/glance/+bug/1215682
Features
--------
* Ceph package
* Ceph Monitors
* Ceph OSDs
* Ceph MDS (slightly broken)
* Ceph Object Gateway (radosgw): coming soon
Using
-----
To deploy a Ceph cluster you need at least one monitor and two OSD devices. If
you are deploying Ceph outside of Fuel, see ``examples/site.pp`` for the
parameters that you will need to adjust.
This module requires the puppet agents to have ``pluginsync = true``.
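For example, on each agent (the path may vary by distribution; typically
``/etc/puppet/puppet.conf``):
```
[agent]
    pluginsync = true
```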
Understanding the example Puppet manifest
-----------------------------------------
```puppet
$mon_nodes = [
'ceph-mon-1',
]
```
This parameter defines the nodes on which the monitor process will be
installed. There should be one, three, or more monitors (an odd number
preserves quorum).
```puppet
$osd_nodes = [
'ceph-osd-1',
'ceph-osd-2',
]
```
This parameter defines the nodes on which the OSD processes will run. One OSD
will be created for each device in ``$osd_devices`` on each node in
``$osd_nodes``. At least two OSD instances are required.
```puppet
$mds_server = 'ceph-mds-01'
```
Uncomment this line if you want to install a metadata server (MDS). The MDS is
only necessary for CephFS and should run on separate hardware from the other
OpenStack nodes.
```puppet
$osd_devices = [ 'vdb', 'vdc1' ]
```
This parameter defines which drives, partitions, or paths will be used for the
Ceph OSDs on each OSD node. When referring to whole devices or partitions, the
``/dev/`` prefix is not necessary.
```puppet
$ceph_pools = [ 'volumes', 'images' ]
```
This parameter defines the names of the Ceph pools to pre-create. By default,
``volumes`` and ``images`` are required to set up the OpenStack hooks.
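Pools can also be created by hand; the module's ``c_pools`` class effectively
runs ``ceph osd pool create`` for each name (the PG counts here mirror the
example ``site.pp`` and are illustrative):
```
ceph osd pool create volumes 100 100
ceph osd pool create images 100 100
```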
```puppet
node 'default' {
...
}
```
This section configures components common to all Ceph and OpenStack nodes.
```puppet
class { 'ceph::deploy':
auth_supported => 'cephx',
osd_journal_size => '2048',
osd_mkfs_type => 'xfs',
}
```
In this section you can change the authentication type, the OSD journal size
(in MB), and the OSD filesystem type.
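The example ``site.pp`` also sets ``osd_pool_default_pg_num``; a common rule of
thumb (a sketch with hypothetical counts, not values mandated by this module)
is roughly 100 placement groups per OSD divided by the replica count, rounded
up to a power of two:

```shell
# Hypothetical cluster: 4 OSDs, replica size 2.
osds=4
replicas=2
pgs=$(( osds * 100 / replicas ))   # 200
# Round up to the next power of two.
pow=1
while [ "$pow" -lt "$pgs" ]; do pow=$(( pow * 2 )); done
echo "$pow"   # prints 256
```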
Verifying the deployment
------------------------
You can issue ``ceph -s`` or ``ceph health`` (terse output) to check the
current status of the cluster. The output of ``ceph -s`` should include:
* ``monmap``: this should contain the correct number of monitors
* ``osdmap``: this should contain the correct number of OSD instances (one per
device per node)
```
root@fuel-ceph-02:~# ceph -s
health HEALTH_OK
monmap e1: 2 mons at {fuel-ceph-01=10.0.0.253:6789/0,fuel-ceph-02=10.0.0.252:6789/0}, election epoch 4, quorum 0,1 fuel-ceph-01,fuel-ceph-02
osdmap e23: 4 osds: 4 up, 4 in
pgmap v275: 448 pgs: 448 active+clean; 9518 bytes data, 141 MB used, 28486 MB / 28627 MB avail
mdsmap e4: 1/1/1 up {0=fuel-ceph-02.local.try=up:active}
```
Here are some errors that may be reported.
``ceph -s`` returned ``health HEALTH_WARN``:
```
root@fuel-ceph-01:~# ceph -s
health HEALTH_WARN 63 pgs peering; 54 pgs stuck inactive; 208 pgs stuck unclean; recovery 2/34 degraded (5.882%)
...
```
``ceph`` commands return key errors:
```
[root@controller-13 ~]# ceph -s
2013-08-22 00:06:19.513437 7f79eedea760 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2013-08-22 00:06:19.513466 7f79eedea760 -1 ceph_tool_common_init failed.
```
Check the links in ``/root/ceph*.keyring``. There should be one each for
admin, osd, and mon. If any are missing, this could be the cause.
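The expected links (created by the ``ceph::deploy`` class) can be listed with:
```
ls -l /root/ceph.client.admin.keyring \
      /root/ceph.bootstrap-osd.keyring \
      /root/ceph.bootstrap-mds.keyring
```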
Try running ``ceph-deploy gatherkeys {mon-server-name}``. If this doesn't work
then there may have been an issue starting the cluster.
Check for running Ceph processes with ``ps axu | grep ceph``. If there is a
python process running ``ceph-authtool``, then there is likely a problem with
the MON processes talking to each other. Check their network and firewall; the
monitor listens on port 6789 by default.
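A few ways to confirm a monitor is listening and reachable (hostnames are
examples):
```
netstat -tlnp | grep 6789         # is ceph-mon listening locally?
telnet fuel-ceph-01 6789          # is the port reachable from a peer?
iptables -L INPUT -n | grep 6789  # is the firewall accepting it?
```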
Missing OSD instances
---------------------
By default there should be one OSD instance per device per OSD node listed in
the configuration. If one or more of them is missing, there may have been a
problem initializing the disks. Properly working block devices are mounted for
you automatically.
Common issues:
* the disk or volume is in use
* the disk partition didn't refresh in the kernel
Check the osd tree:
```
#ceph osd tree
# id weight type name up/down reweight
-1 6 root default
-2 2 host controller-1
0 1 osd.0 up 1
3 1 osd.3 up 1
-3 2 host controller-2
1 1 osd.1 up 1
4 1 osd.4 up 1
-4 2 host controller-3
2 1 osd.2 up 1
5 1 osd.5 up 1
```
Ceph pools
----------
By default we create two pools, ``images`` and ``volumes``; there should also
be the defaults ``data``, ``metadata``, and ``rbd``. ``ceph osd lspools``
shows the current pools:
```
# ceph osd lspools
0 data,1 metadata,2 rbd,3 images,4 volumes,
```
Testing OpenStack
-----------------
### Glance
To test Glance, upload an image to Glance to see if it is saved in Ceph:
```shell
source ~/openrc
glance image-create --name cirros --container-format bare \
--disk-format qcow2 --is-public yes --location \
https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
```
**Note:** ``--location`` is currently broken in Glance (see Known Issues
above); use the following instead:
```
source ~/openrc
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance image-create --name cirros --container-format bare \
--disk-format qcow2 --is-public yes < cirros-0.3.0-x86_64-disk.img
```
This will return something like:
```
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2013-08-22T19:54:28 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | f52fb13e-29cf-4a2f-8ccf-a170954907b8 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | baa3187b7df94d9ea5a8a14008fa62f5 |
| protected | False |
| size | 0 |
| status | active |
| updated_at | 2013-08-22T19:54:30 |
+------------------+--------------------------------------+
```
Then check rbd:
```shell
rbd ls images
```
```shell
rados -p images df
```
Hacking into Fuel
-----------------
After installing onto a Fuel cluster:
1. Define your partitions. If you re-define any partitions, make sure they are
exposed to the kernel before running the scripts; see ``partx -a
/dev/<device>`` after ``umount /boot``.
2. Copy ``fuel-pm:/etc/puppet/modules/*`` to ``{node}:/etc/puppet/modules``
3. Copy ``/etc/puppet/modules/ceph/examples/site.pp`` to ``/root/ceph.pp``.
4. Edit ceph.pp for desired changes to ``$mon_nodes``, ``$osd_nodes``, and ``$osd_devices``.
5. Run ``puppet apply ceph.pp`` on each node **except** ``$mon_nodes[-1]``,
then run the same command on that last node.
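The steps above can be sketched as a shell session (host and device names are
placeholders):
```
umount /boot && partx -a /dev/vdb                           # 1: expose partitions
scp -r fuel-pm:/etc/puppet/modules/* /etc/puppet/modules/   # 2
cp /etc/puppet/modules/ceph/examples/site.pp /root/ceph.pp  # 3
vi /root/ceph.pp       # 4: adjust $mon_nodes, $osd_nodes, $osd_devices
puppet apply /root/ceph.pp   # 5: run on each node, last mon node last
```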
Copyright and License
---------------------
Copyright: (C) 2013 [Mirantis](https://www.mirantis.com/) Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,121 @@
# Global settings
Exec { path => [ '/bin/', '/sbin/' , '/usr/bin/', '/usr/sbin/' ] }
# This parameter defines the monitor nodes; these may be the same as the OSDs
# if you want. There should be one, or three or more.
$mon_nodes = [
'controller-3.domain.tld',
]
# This parameter defines the OSD storage nodes. One OSD will run per $osd_device
# per $osd_node. Between them there must be at least two OSD processes.
$osd_nodes = [
'compute-1.domain.tld',
'compute-2.domain.tld'
]
# Uncomment this line if you want to install RadosGW.
$rados_GW = 'fuel-controller-03.local.try'
# Uncomment this line if you want to install metadata server.
$mds_server = 'fuel-controller-03.local.try'
# This parameter defines which devices to aggregate into CEPH cluster.
# ALL THE DATA THAT RESIDES ON THESE DEVICES WILL BE LOST!
$osd_devices = [ 'vdb2', 'vdc2' ]
# This parameter defines rbd pools for Cinder & Glance. It is not necessary to change.
$ceph_pools = [ 'volumes', 'images' ]
#TODO: need to separate mon and osd list
#TODO: need to resolve single node changes
# Determine CEPH and OpenStack nodes.
node 'default' {
# Re-enable this if not using a fuelweb ISO with Ceph packages
#include 'ceph::yum'
#TODO: this needs to be pulled back into mirantis mirrors
include 'ceph::ssh'
#TODO: this should be pulled back into existing modules for setting up ssh-key
#TODO: OR need to at least generate the key
include 'ntp'
include 'ceph::deps'
if $fqdn in $mon_nodes {
firewall {'010 ceph-mon allow':
chain => 'INPUT',
dport => 6789,
proto => 'tcp',
action => accept,
}
}
#TODO: These should only accept traffic on the storage network
if $fqdn in $osd_nodes {
firewall {'011 ceph-osd allow':
chain => 'INPUT',
dport => '6800-7100',
proto => 'tcp',
action => accept,
}
}
if $fqdn == $mon_nodes[-1] and !str2bool($::ceph_conf) {
class { 'ceph::deploy':
#Global settings
auth_supported => 'cephx',
osd_journal_size => '2048',
osd_mkfs_type => 'xfs',
osd_pool_default_size => '2',
osd_pool_default_min_size => '1',
#TODO: calculate PG numbers
osd_pool_default_pg_num => '100',
osd_pool_default_pgp_num => '100',
cluster_network => '10.0.0.0/24',
public_network => '192.168.0.0/24',
#RadosGW settings
host => $::hostname,
keyring_path => '/etc/ceph/keyring.radosgw.gateway',
rgw_socket_path => '/tmp/radosgw.sock',
log_file => '/var/log/ceph/radosgw.log',
user => 'www-data',
rgw_keystone_url => '10.0.0.223:5000',
rgw_keystone_admin_token => 'nova',
rgw_keystone_token_cache_size => '10',
rgw_keystone_revocation_interval => '60',
rgw_data => '/var/lib/ceph/rados',
rgw_dns_name => $::hostname,
rgw_print_continue => 'false',
nss_db_path => '/etc/ceph/nss',
}
package {['ceph-deploy', 'python-pushy']:
ensure => latest,
} -> Class[['ceph::glance', 'ceph::cinder', 'ceph::nova_compute']]
#All classes that should run after ceph::deploy should be below
}
if $fqdn == $rados_GW {
ceph::radosgw {"${::hostname}":
require => Class['ceph::deploy']
}
}
class { 'ceph::glance':
default_store => 'rbd',
rbd_store_user => 'images',
rbd_store_pool => 'images',
show_image_direct_url => 'True',
}
class { 'ceph::cinder':
volume_driver => 'cinder.volume.drivers.rbd.RBDDriver',
rbd_pool => 'volumes',
glance_api_version => '2',
rbd_user => 'volumes',
#TODO: generate rbd_secret_uuid
rbd_secret_uuid => 'a5d0dd94-57c4-ae55-ffe0-7e3732a24455',
}
class { 'ceph::nova_compute': }
ceph::keystone { "Keystone":
pub_ip => "${rados_GW}",
adm_ip => "${rados_GW}",
int_ip => "${rados_GW}",
}
}

View File

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEoAIBAAKCAQEAnHMPz2XBZSiPoIeTcagq5fuuvOH393szgx+Qp6Ue97VUu1l/
13WNTHeYpwrvtZSgdag6AGygyeGjcZwZLOXDyIMY0xIMsAA/0te+tbhuL80wUzVG
tuBE73JBz0+NBxiiwJFeOEfalblS/Oa1XhMhnifMSbtyOvGLocIJjUKcE29XNPIy
iwWGBl5YaxYMAQimgZtrrIrOl/lVgWT434Io6B24OwXiB8tC+puN/S0phpxK9m+k
1tNGQRCaSlL060hhg9EnSzcTjJ3xHVkYNJUchtHJmZ/zjCQUJK8NPxSw9efRk4/l
GrST/7/rGkr+Vycj/Ll4GFIvCAmFSx1Q7No7IQIBIwKCAQAjwodFWRZDAfTxfhMS
qhhvFPS9dXqBtcKhoNCbWPEirRquettkcqP0OJfrqrpypaEE80B1ICTAbha64dne
YGdDxjGPVJUviwdGIq8/epzXud8o9jwMiwhxPq/03vup2b7M7gboSvAiOP0GmyIk
IaFIuKO5FObo5sDUhB9wvsSWugGWfgq/SmgI3W1iQ7PRS2EHSh/CpmsIJqptZFBd
acAo1J57QGYtXwJJvY8vNTVs33ZSSIaxQynwLwv+kj3OK1MM1dYr1u0I6RPRcldq
j9hvTkXotK6VzSm/XMIqdiieHTOLhfpK6dmTJl053GaDaVg4jOsLYL+dfSU4YpAh
BHprAoGBAMkZnkHbRY1E9EMS+cE7kCariRnwUjypdtLcLdbrdcP4brUaKAi8zMOe
NaVfmVUDhW9kvRofJYkQcG9zAvvhd9fJhJWPvJ/FM/XTEXOMNTorR56ug2eVqdmM
m9ByNrkYmQb+/DOzzWDl+D2+rNNNMsNO1OXQksGavAyQmTscLWVbAoGBAMco7IGq
2nGaoVbKNBgRzrdWLgbIoH2q2VkGoEJbXbw9DM3FpBpEftVZyJYvCwrGxTXptT1w
J+W10lZdkCqj0v5iJQi7ribAbkSV82Y2Ko90k7kRBhAnGKPy3WtTGQYpjwkSKYpf
KBs1/9V+eeqLB362fA31+CccSfXj1N9AOT4zAoGAM7YhYWRFFbKlNdGt79wdwM0F
/1sN1RWiNjieErGTT6ZIWnRwscomBmqC0sDPqCV6FVRrI/lhbGNQHKiLvRy4arAp
aEmIRldH4CBU8dOYqI7JRg+eIfNI7sxilK+nq/Bh3TpA2hhKwSUxNHLb+9IFvTGH
M8frORkpCouVHdQLrFkCgYAREiLmivzH6K6+S9iUWUw7mawsd5i6UHkHoXtzZurG
/esnlJkJkNer4x/SW82/GFoL73XvUsGXWLpB6sM22teSJatnJgec6+wxw7XG7rMw
3htKYIt9ujVP4Z3zQaMPKCIz+j4TLLpLeag23vSB0WcK3HEIgswgnAZW56SIKhOI
/QKBgBki3tnP3PrZBuF4jCegYmJvPB57wnmupf1jzV6CRcx6YQNcfeGAhWJ6NfKp
l5aIHbBmatGQ3qJngnglhbo5o4w3FWYJ4SmAi4IiffqYCL9PxEXJFcuqTozRYEzc
fk8xVg16lBbdvYoZ5K3nW2LgSc1ILEXpQDUFgBjcsq72O9B+
-----END RSA PRIVATE KEY-----

View File

@@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnHMPz2XBZSiPoIeTcagq5fuuvOH393szgx+Qp6Ue97VUu1l/13WNTHeYpwrvtZSgdag6AGygyeGjcZwZLOXDyIMY0xIMsAA/0te+tbhuL80wUzVGtuBE73JBz0+NBxiiwJFeOEfalblS/Oa1XhMhnifMSbtyOvGLocIJjUKcE29XNPIyiwWGBl5YaxYMAQimgZtrrIrOl/lVgWT434Io6B24OwXiB8tC+puN/S0phpxK9m+k1tNGQRCaSlL060hhg9EnSzcTjJ3xHVkYNJUchtHJmZ/zjCQUJK8NPxSw9efRk4/lGrST/7/rGkr+Vycj/Ll4GFIvCAmFSx1Q7No7IQ== root@fuel-pm

View File

@@ -0,0 +1,9 @@
Facter.add("ceph_conf") do
setcode do
File.exists? '/etc/ceph/ceph.conf'
end
end

View File

@@ -0,0 +1,9 @@
Facter.add("cinder_conf") do
setcode do
File.exists? '/etc/cinder/cinder.conf'
end
end

View File

@@ -0,0 +1,9 @@
Facter.add("glance_api_conf") do
setcode do
File.exists? '/etc/glance/glance-api.conf'
end
end

View File

@@ -0,0 +1,9 @@
Facter.add("keystone_conf") do
setcode do
File.exists? '/etc/keystone/keystone.conf'
end
end

View File

@@ -0,0 +1,9 @@
Facter.add("nova_compute") do
setcode do
File.exists? '/etc/nova/nova-compute.conf'
end
end

View File

@@ -0,0 +1,27 @@
Puppet::Type.type(:ceph_conf).provide(
:ini_setting,
:parent => Puppet::Type.type(:ini_setting).provider(:ruby)
) do
def section
resource[:name].split('/', 2).first
end
def setting
resource[:name].split('/', 2).last
end
def separator
'='
end
def self.file_path
'./ceph.conf'
end
# this needs to be removed. This has been replaced with the class method
def file_path
self.class.file_path
end
end

View File

@@ -0,0 +1,27 @@
Puppet::Type.type(:cinder_config).provide(
:ini_setting,
:parent => Puppet::Type.type(:ini_setting).provider(:ruby)
) do
def section
resource[:name].split('/', 2).first
end
def setting
resource[:name].split('/', 2).last
end
def separator
'='
end
def self.file_path
'/etc/cinder/cinder.conf'
end
# added for backwards compatibility with older versions of inifile
def file_path
self.class.file_path
end
end

View File

@@ -0,0 +1,27 @@
Puppet::Type.type(:glance_api_config).provide(
:ini_setting,
:parent => Puppet::Type.type(:ini_setting).provider(:ruby)
) do
def section
resource[:name].split('/', 2).first
end
def setting
resource[:name].split('/', 2).last
end
def separator
'='
end
def self.file_path
'/etc/glance/glance-api.conf'
end
# this needs to be removed. This has been replaced with the class method
def file_path
self.class.file_path
end
end

View File

@@ -0,0 +1,42 @@
Puppet::Type.newtype(:ceph_conf) do
ensurable
newparam(:name, :namevar => true) do
desc 'Section/setting name to manage from ./ceph.conf'
newvalues(/\S+\/\S+/)
end
newproperty(:value) do
desc 'The value of the setting to be defined.'
munge do |value|
value = value.to_s.strip
value.capitalize! if value =~ /^(true|false)$/i
value
end
def is_to_s( currentvalue )
if resource.secret?
return '[old secret redacted]'
else
return currentvalue
end
end
def should_to_s( newvalue )
if resource.secret?
return '[new secret redacted]'
else
return newvalue
end
end
end
newparam(:secret, :boolean => true) do
desc 'Whether to hide the value from Puppet logs. Defaults to `false`.'
newvalues(:true, :false)
defaultto false
end
end

View File

@@ -0,0 +1,42 @@
Puppet::Type.newtype(:cinder_config) do
ensurable
newparam(:name, :namevar => true) do
desc 'Section/setting name to manage from /etc/cinder/cinder.conf'
newvalues(/\S+\/\S+/)
end
newproperty(:value) do
desc 'The value of the setting to be defined.'
munge do |value|
value = value.to_s.strip
value.capitalize! if value =~ /^(true|false)$/i
value
end
def is_to_s( currentvalue )
if resource.secret?
return '[old secret redacted]'
else
return currentvalue
end
end
def should_to_s( newvalue )
if resource.secret?
return '[new secret redacted]'
else
return newvalue
end
end
end
newparam(:secret, :boolean => true) do
desc 'Whether to hide the value from Puppet logs. Defaults to `false`.'
newvalues(:true, :false)
defaultto false
end
end

View File

@@ -0,0 +1,19 @@
Puppet::Type.newtype(:glance_api_config) do
ensurable
newparam(:name, :namevar => true) do
desc 'Section/setting name to manage from glance-api.conf'
newvalues(/\S+\/\S+/)
end
newproperty(:value) do
desc 'The value of the setting to be defined.'
munge do |value|
value = value.to_s.strip
value.capitalize! if value =~ /^(true|false)$/i
value
end
end
end

View File

@@ -0,0 +1,27 @@
class ceph::apt (
$release = 'cuttlefish'
) {
apt::key { 'ceph':
key => '17ED316D',
key_source => 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc',
require => Class['ceph::ssh']
}
apt::key { 'radosgw':
key => '6EAEAE2203C3951A',
require => Class['ceph::ssh']
}
Apt::Source {
require => Apt::Key['ceph', 'radosgw'],
release => $::lsbdistcodename,
before => Package['ceph'],
}
apt::source { 'ceph':
location => "http://ceph.com/debian-${release}/",
}
apt::source { 'radosgw-apache2':
location => "http://gitbuilder.ceph.com/apache2-deb-precise-x86_64-basic/ref/master/",
}
apt::source { 'radosgw-fastcgi':
location => "http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-precise-x86_64-basic/ref/master/",
}
}

View File

@@ -0,0 +1,56 @@
class ceph::cinder (
$volume_driver,
$rbd_pool,
$glance_api_version,
$rbd_user,
$rbd_secret_uuid,
) {
if str2bool($::cinder_conf) {
exec {'Copy configs':
command => "scp -r ${mon_nodes[-1]}:/etc/ceph/* /etc/ceph/",
require => Package['ceph'],
returns => 0,
}
Cinder_config<||> ~> Service['openstack-cinder-volume']
File_line<||> ~> Service['openstack-cinder-volume']
cinder_config {
'DEFAULT/volume_driver': value => $volume_driver;
'DEFAULT/rbd_pool': value => $rbd_pool;
'DEFAULT/glance_api_version': value => $glance_api_version;
'DEFAULT/rbd_user': value => $rbd_user;
'DEFAULT/rbd_secret_uuid': value => $rbd_secret_uuid;
}
file { '/etc/sysconfig/openstack-cinder-volume':
ensure => 'present',
} -> file_line { 'cinder-volume.conf':
#TODO: CentOS conversion
#path => '/etc/init/cinder-volume.conf',
#line => 'env CEPH_ARGS="--id volumes"',
path => '/etc/sysconfig/openstack-cinder-volume',
line => 'export CEPH_ARGS="--id volumes"',
}
service { 'openstack-cinder-volume':
ensure => "running",
enable => true,
hasstatus => true,
hasrestart => true,
}
exec { 'Create keys for pool volumes':
command => 'ceph auth get-or-create client.volumes > /etc/ceph/ceph.client.volumes.keyring',
before => File['/etc/ceph/ceph.client.volumes.keyring'],
require => [Package['ceph'], Exec['Copy configs']],
notify => Service['openstack-cinder-volume'],
returns => 0,
}
file { '/etc/ceph/ceph.client.volumes.keyring':
owner => cinder,
group => cinder,
require => Exec['Create keys for pool volumes'],
mode => '0600',
}
}
}

View File

@@ -0,0 +1,186 @@
class ceph::deploy (
$auth_supported = 'cephx',
$osd_journal_size = '2048',
$osd_mkfs_type = 'xfs',
$osd_pool_default_size = '2',
$osd_pool_default_min_size = '0',
$osd_pool_default_pg_num = '8',
$osd_pool_default_pgp_num = '8',
$cluster_network = '10.0.0.0/24',
$public_network = '192.168.0.0/24',
$host = $hostname,
$keyring_path = '/etc/ceph/keyring.radosgw.gateway',
$rgw_socket_path = '/tmp/radosgw.sock',
$log_file = '/var/log/ceph/radosgw.log',
$user = 'www-data',
$rgw_keystone_url = '127.0.0.1:5000',
$rgw_keystone_admin_token = 'nova',
$rgw_keystone_token_cache_size = '10',
$rgw_keystone_revocation_interval = '60',
$rgw_data = '/var/lib/ceph/rados',
$rgw_dns_name = $hostname,
$rgw_print_continue = 'false',
$nss_db_path = '/etc/ceph/nss',
) {
include p_osd, c_osd, c_pools
$range = join($mon_nodes, " ")
exec { 'ceph-deploy init config':
command => "ceph-deploy new ${range}",
require => Package['ceph-deploy', 'ceph', 'python-pushy'],
#TODO: see if add creates is relevant
logoutput => true,
}
Ceph_conf {require => Exec['ceph-deploy init config']}
ceph_conf {
'global/auth supported': value => $auth_supported;
'global/osd journal size': value => $osd_journal_size;
'global/osd mkfs type': value => $osd_mkfs_type;
'global/osd pool default size': value => $osd_pool_default_size;
'global/osd pool default min size': value => $osd_pool_default_min_size;
'global/osd pool default pg num': value => $osd_pool_default_pg_num;
'global/osd pool default pgp num': value => $osd_pool_default_pgp_num;
'global/cluster network': value => $cluster_network;
'global/public network': value => $public_network;
'client.radosgw.gateway/host': value => $host;
'client.radosgw.gateway/keyring': value => $keyring_path;
'client.radosgw.gateway/rgw socket path': value => $rgw_socket_path;
'client.radosgw.gateway/log file': value => $log_file;
'client.radosgw.gateway/user': value => $user;
'client.radosgw.gateway/rgw keystone url': value => $rgw_keystone_url;
'client.radosgw.gateway/rgw keystone admin token': value => $rgw_keystone_admin_token;
'client.radosgw.gateway/rgw keystone accepted roles': value => $rgw_keystone_accepted_roles;
'client.radosgw.gateway/rgw keystone token cache size': value => $rgw_keystone_token_cache_size;
'client.radosgw.gateway/rgw keystone revocation interval': value => $rgw_keystone_revocation_interval;
'client.radosgw.gateway/rgw data': value => $rgw_data;
'client.radosgw.gateway/rgw dns name': value => $rgw_dns_name;
'client.radosgw.gateway/rgw print continue': value => $rgw_print_continue;
'client.radosgw.gateway/nss db path': value => $nss_db_path;
}
Ceph_conf <||> -> Exec ['ceph-deploy deploy monitors']
exec { 'ceph-deploy deploy monitors':
#TODO: evaluate if this is idempotent
command => "ceph-deploy --overwrite-conf mon create ${range}",
# require => Ceph_conf['global/auth supported', 'global/osd journal size', 'global/osd mkfs type']
logoutput => true,
} -> exec { 'ceph-deploy gather-keys':
command => 'ceph-deploy gather-keys',
returns => 0,
tries => 60, #This is necessary to prevent a race: the mons must establish
# a quorum before they can generate keys; observed to take up to 15 tries.
# Keys must exist prior to other commands running
try_sleep => 1,
}
File {
ensure => 'link',
require => Exec['ceph-deploy-s2']
}
file { '/root/ceph.bootstrap-osd.keyring':
target => '/var/lib/ceph/bootstrap-osd/ceph.keyring',
}
file { '/root/ceph.bootstrap-mds.keyring':
target => '/var/lib/ceph/bootstrap-mds/ceph.keyring',
}
file { '/root/ceph.client.admin.keyring':
target => "/etc/ceph/ceph.client.admin.keyring",
}
class p_osd {
define int {
$devices = join(suffix($osd_nodes, ":${name}"), " ")
exec { "ceph-deploy osd prepare ${devices}":
#ceph-deploy osd prepare is ensuring there is a filesystem on the
# disk according to the args passed to ceph.conf (above).
#timeout: It has a long timeout because the format takes forever.
# A reasonable amount of time would be around 300 times the length
# of $osd_nodes. Right now it's 0 to prevent puppet from aborting it.
command => "ceph-deploy osd prepare ${devices}",
returns => 0,
timeout => 0, #TODO: make this something reasonable
tries => 2, #This is necessary because of race for mon creating keys
try_sleep => 1,
require => [File['/root/ceph.bootstrap-osd.keyring',
'/root/ceph.bootstrap-mds.keyring',
'/root/ceph.client.admin.keyring'],
Exec['ceph-deploy gather-keys'],
],
logoutput => true,
}
}
int { $osd_devices: }
}
class c_osd {
define int {
$devices = join(suffix($osd_nodes, ":${name}"), " ")
exec { "Creating OSDs on ${devices}":
command => "ceph-deploy osd activate ${devices}",
returns => 0,
require => Class['p_osd'],
logoutput => true,
}
}
int { $osd_devices: }
}
if $mds_server {
exec { 'ceph-deploy-s4':
command => "ceph-deploy mds create ${mds_server}",
require => Class['c_osd'],
logoutput => true,
}
}
class c_pools {
define int {
exec { "Creating pool ${name}":
command => "ceph osd pool create ${name} ${osd_pool_default_pg_num} ${osd_pool_default_pgp_num}",
require => Class['c_osd'],
logoutput => true,
}
}
int { $ceph_pools: }
}
exec { 'CLIENT AUTHENTICATION':
#DO NOT SPLIT ceph auth command lines See http://tracker.ceph.com/issues/3279
command => "ceph auth get-or-create client.${ceph_pools[0]} mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=${ceph_pools[0]}, allow rx pool=${ceph_pools[1]}' && \
ceph auth get-or-create client.${ceph_pools[1]} mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=${ceph_pools[1]}'",
require => Class['c_pools'],
logoutput => true,
}
#TODO: remove the block below when we can deploy from each mon (PRD-1570)
exec { 'Create keys for pool volumes':
command => 'ceph auth get-or-create client.volumes > /etc/ceph/ceph.client.volumes.keyring',
before => File['/etc/ceph/ceph.client.volumes.keyring'],
require => [Package['ceph']],
returns => 0,
}
file { '/etc/ceph/ceph.client.volumes.keyring':
owner => cinder,
group => cinder,
require => Exec['Create keys for pool volumes'],
mode => '0600',
}
exec { 'Create keys for pool images':
command => 'ceph auth get-or-create client.images > /etc/ceph/ceph.client.images.keyring',
before => File['/etc/ceph/ceph.client.images.keyring'],
require => [Package['ceph']],
returns => 0,
}
file { '/etc/ceph/ceph.client.images.keyring':
owner => glance,
group => glance,
require => Exec['Create keys for pool images'],
mode => '0600',
}
exec {'Deploy push config':
#This pushes config and keyrings to other nodes
command => "for node in ${mon_nodes}
do
scp -r /etc/ceph/* \${node}:/etc/ceph/
done",
require => [Exec['CLIENT AUTHENTICATION'],
File['/etc/ceph/ceph.client.images.keyring',
'/etc/ceph/ceph.client.volumes.keyring'],
],
returns => 0,
}
}

View File

@@ -0,0 +1,12 @@
class ceph::deps (
$type = 'base'
){
package { ['ceph', 'redhat-lsb-core','ceph-deploy', 'python-pushy']:
ensure => latest,
}
file {'/etc/sudoers.d/ceph':
content => "#This is required for ceph-deploy\nDefaults !requiretty\n"
}
}

View File

@@ -0,0 +1,42 @@
class ceph::glance (
$default_store,
$rbd_store_user,
$rbd_store_pool,
$show_image_direct_url,
) {
if str2bool($::glance_api_conf) {
package {['python-ceph']:
ensure => latest,
}
exec {'Copy config':
command => "scp -r ${mon_nodes[-1]}:/etc/ceph/* /etc/ceph/",
require => Package['ceph'],
returns => 0,
}
glance_api_config {
'DEFAULT/default_store': value => $default_store;
'DEFAULT/rbd_store_user': value => $rbd_store_user;
'DEFAULT/rbd_store_pool': value => $rbd_store_pool;
'DEFAULT/show_image_direct_url': value => $show_image_direct_url;
}~>
service { 'openstack-glance-api':
ensure => "running",
enable => true,
hasstatus => true,
hasrestart => true,
}
exec { 'Create keys for pool images':
command => 'ceph auth get-or-create client.images > /etc/ceph/ceph.client.images.keyring',
before => File['/etc/ceph/ceph.client.images.keyring'],
require => [Package['ceph'], Exec['Copy config']],
notify => Service['openstack-glance-api'],
returns => 0,
}
file { '/etc/ceph/ceph.client.images.keyring':
owner => glance,
group => glance,
require => Exec['Create keys for pool images'],
mode => '0600',
}
}
}

View File

@@ -0,0 +1,43 @@
define ceph::keystone (
$pub_ip,
$adm_ip,
$int_ip,
$directory = '/etc/ceph/nss',
) {
if str2bool($::keystone_conf) {
package { "libnss3-tools" :
ensure => 'latest'
}
file { "${directory}":
ensure => "directory",
require => Package['ceph'],
}
exec {"creating OpenSSL certificates":
command => "openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey \
| certutil -d ${directory} -A -n ca -t 'TCu,Cu,Tuw' && openssl x509 \
-in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | certutil -A -d \
${directory} -n signing_cert -t 'P,P,P'",
require => [File["${directory}"], Package["libnss3-tools"]]
} ->
exec {"copy OpenSSL certificates":
command => "scp -r /etc/ceph/nss/* ${rados_GW}:/etc/ceph/nss/ && ssh ${rados_GW} '/etc/init.d/radosgw restart'",
}
keystone_service { "swift":
ensure => present,
type => 'object-store',
description => 'Openstack Object-Store Service',
notify => Service['keystone'],
}
keystone_endpoint { "RegionOne/swift":
ensure => present,
public_url => "http://${pub_ip}/swift/v1",
admin_url => "http://${adm_ip}/swift/v1",
internal_url => "http://${int_ip}/swift/v1",
notify => Service['keystone'],
}
service { "keystone":
enable => true,
ensure => "running",
}
}
}

View File

@@ -0,0 +1,29 @@
class ceph::nova_compute (
$rbd_secret_uuid = $::ceph::cinder::rbd_secret_uuid
) {
if str2bool($::nova_compute) {
exec {'Copy conf':
command => "scp -r ${ceph_nodes[-1]}:/etc/ceph/* /etc/ceph/",
require => Package['ceph'],
returns => [0,1],
}
file { '/tmp/secret.xml':
#TODO: use mktemp
content => template('ceph/secret.erb')
}
exec { 'Set value':
command => 'virsh secret-set-value --secret $(virsh secret-define --file /tmp/secret.xml | egrep -o "[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}") --base64 $(ceph auth get-key client.volumes) && rm /tmp/secret.xml',
require => [File['/tmp/secret.xml'], Package ['ceph'], Exec['Copy conf']],
returns => [0,1],
}
service {'nova-compute':
ensure => "running",
enable => true,
hasstatus => true,
hasrestart => true,
subscribe => Exec['Set value']
} -> file {'/tmp/secret.xml':
ensure => absent,
}
}
}

View File

@@ -0,0 +1,70 @@
define apache::loadmodule () {
exec { "/usr/sbin/a2enmod $name" :
unless => "/bin/readlink -e /etc/apache2/mods-enabled/${name}.load",
notify => Service[apache2]
}
}
define ceph::radosgw (
$keyring_path = '/etc/ceph/keyring.radosgw.gateway',
$apache2_ssl = '/etc/apache2/ssl/',
$radosgw_auth_key = 'client.radosgw.gateway',
) {
package { ["apache2", "libapache2-mod-fastcgi", 'libnss3-tools', 'radosgw']:
ensure => "latest",
}
apache::loadmodule{["rewrite", "fastcgi", "ssl"]: }
file {'/etc/apache2/httpd.conf':
ensure => "present",
content => "ServerName ${fqdn}",
notify => Service["apache2"],
require => Package["apache2"],
}
file {["${apache2_ssl}", '/var/lib/ceph/radosgw/ceph-radosgw.gateway', '/var/lib/ceph/radosgw', '/etc/ceph/nss']:
ensure => "directory",
mode => '0755',
}
exec {"generate SSL certificate on $name":
command => "openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${apache2_ssl}apache.key -out ${apache2_ssl}apache.crt -subj '/C=RU/ST=Russia/L=Saratov/O=Mirantis/OU=CA/CN=localhost'",
returns => [0,1],
}
file { "/etc/apache2/sites-available/rgw.conf":
content => template('ceph/rgw.conf.erb'),
notify => Service["apache2"],
require => Package["apache2"],
}
Exec {require => File["/etc/apache2/sites-available/rgw.conf"]}
exec {'a2ensite rgw.conf':}
exec {'a2dissite default':}
file { "/var/www/s3gw.fcgi":
content => template('ceph/s3gw.fcgi.erb'),
notify => Service["apache2"],
require => Package["apache2"],
mode => "+x",
}
exec { "ceph-create-radosgw-keyring-on $name":
command => "ceph-authtool --create-keyring ${keyring_path}",
require => Package['ceph'],
} ->
file { "${keyring_path}":
mode => "+r",
} ->
exec { "ceph-generate-key-on $name":
command => "ceph-authtool ${keyring_path} -n ${radosgw_auth_key} --gen-key",
require => Package["apache2"],
} ->
exec { "ceph-add-capabilities-to-the-key-on $name":
command => "ceph-authtool -n ${radosgw_auth_key} --cap osd 'allow rwx' --cap mon 'allow rw' ${keyring_path}",
require => Package["apache2"],
} ->
exec { "ceph-add-to-ceph-keyring-entries-on $name":
command => "ceph -k /etc/ceph/ceph.client.admin.keyring auth add ${radosgw_auth_key} -i ${keyring_path}",
require => Package["apache2"],
}
service { "apache2":
enable => true,
ensure => "running",
}
}
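A minimal sketch of how this define might be declared from a node manifest (the node name is illustrative, and the parameter values simply restate the defaults):

```puppet
# Hypothetical usage of ceph::radosgw; the values shown are the defaults.
node 'ceph-rgw-1' {
  ceph::radosgw { 'radosgw.gateway':
    keyring_path     => '/etc/ceph/keyring.radosgw.gateway',
    apache2_ssl      => '/etc/apache2/ssl/',
    radosgw_auth_key => 'client.radosgw.gateway',
  }
}
```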

View File

@@ -0,0 +1,40 @@
class ceph::ssh{
package {['openssh-server', 'openssh-clients']:
#TODO: debian == openssh-client
#TODO: rhel == openssh-clients
ensure => latest
}
$ssh_private_key = 'puppet:///modules/ceph/openstack'
$ssh_public_key = 'puppet:///modules/ceph/openstack.pub'
File {
ensure => present,
owner => 'root',
group => 'root',
mode => '0400',
}
file { '/root/.ssh':
ensure => directory,
mode => '0700',
}
#file { '/root/.ssh/authorized_keys':
# source => $ssh_public_key,
#}
ssh_authorized_key {'ceph-ssh-key':
ensure => present,
key => 'AAAAB3NzaC1yc2EAAAABIwAAAQEAnHMPz2XBZSiPoIeTcagq5fuuvOH393szgx+Qp6Ue97VUu1l/13WNTHeYpwrvtZSgdag6AGygyeGjcZwZLOXDyIMY0xIMsAA/0te+tbhuL80wUzVGtuBE73JBz0+NBxiiwJFeOEfalblS/Oa1XhMhnifMSbtyOvGLocIJjUKcE29XNPIyiwWGBl5YaxYMAQimgZtrrIrOl/lVgWT434Io6B24OwXiB8tC+puN/S0phpxK9m+k1tNGQRCaSlL060hhg9EnSzcTjJ3xHVkYNJUchtHJmZ/zjCQUJK8NPxSw9efRk4/lGrST/7/rGkr+Vycj/Ll4GFIvCAmFSx1Q7No7IQ==',
type => 'ssh-rsa',
user => 'root',
}
file { '/root/.ssh/id_rsa':
source => $ssh_private_key,
}
file { '/root/.ssh/id_rsa.pub':
source => $ssh_public_key,
}
file { '/etc/ssh/ssh_config':
mode => '0600',
content => "Host *\n StrictHostKeyChecking no\n UserKnownHostsFile=/dev/null\n",
}
}

View File

@@ -0,0 +1,69 @@
class ceph::yum (
$release = 'cuttlefish'
)
{
yumrepo { 'ext-epel-6.8':
descr => 'External EPEL 6.8',
name => 'ext-epel-6.8',
baseurl => absent,
gpgcheck => '0',
gpgkey => absent,
mirrorlist => 'https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch',
}
yumrepo { 'ext-ceph':
descr => "External Ceph ${release}",
name => "ext-ceph-${release}",
baseurl => "http://ceph.com/rpm-${release}/el6/\$basearch",
gpgcheck => '1',
gpgkey => 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc',
mirrorlist => absent,
}
yumrepo { 'ext-ceph-noarch':
descr => 'External Ceph noarch',
name => "ext-ceph-${release}-noarch",
baseurl => "http://ceph.com/rpm-${release}/el6/noarch",
gpgcheck => '1',
gpgkey => 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc',
mirrorlist => absent,
}
#fuel repos
yumrepo { 'centos-base':
descr => 'Mirantis-CentOS-Base',
name => 'base',
baseurl => 'http://download.mirantis.com/centos-6.4',
gpgcheck => '1',
gpgkey => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6',
mirrorlist => absent,
}
yumrepo { 'openstack-epel-fuel-grizzly':
descr => 'Mirantis OpenStack grizzly Custom Packages',
baseurl => 'http://download.mirantis.com/epel-fuel-grizzly-3.1',
gpgcheck => '1',
gpgkey => 'http://download.mirantis.com/epel-fuel-grizzly-3.1/mirantis.key',
mirrorlist => absent,
}
# completely disable additional out-of-box repos
yumrepo { 'extras':
descr => 'CentOS-$releasever - Extras',
mirrorlist => 'http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras',
gpgcheck => '1',
baseurl => absent,
gpgkey => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6',
enabled => '0',
}
yumrepo { 'updates':
descr => 'CentOS-$releasever - Updates',
mirrorlist => 'http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates',
gpgcheck => '1',
baseurl => absent,
gpgkey => 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6',
enabled => '0',
}
}
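The class above can be pointed at another upstream Ceph release by overriding its single parameter; a sketch (the release name is an assumption — any published ceph.com repo name works the same way):

```puppet
# Hypothetical override: point the ext-ceph repos at a different release.
class { 'ceph::yum':
  release => 'dumpling',
}
```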

View File

@@ -0,0 +1,23 @@
FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
<VirtualHost *:80>
ServerName <%= @fqdn %>
DocumentRoot /var/www
RewriteEngine On
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
<IfModule mod_fastcgi.c>
<Directory /var/www>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
AllowEncodedSlashes On
ServerSignature Off
</VirtualHost>

View File

@@ -0,0 +1,2 @@
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway

View File

@@ -0,0 +1,6 @@
<secret ephemeral='no' private='no'>
<uuid><%= @rbd_secret_uuid %></uuid>
<usage type='ceph'>
<name>client.volumes secret</name>
</usage>
</secret>

View File

@@ -4,8 +4,7 @@
# $osapi_volume_extension = cinder.api.openstack.volume.contrib.standard_extensions
# $root_helper = sudo /usr/local/bin/cinder-rootwrap /etc/cinder/rootwrap.conf
# $use_syslog = Whether or not the service should log to syslog. Optional.
# $syslog_log_facility = Facility for syslog, if used. Optional. Note: duplicating conf option
# wouldn't have been used, but more powerful rsyslog features are managed via the conf template instead
# $syslog_log_facility = Facility for syslog, if used. Optional.
# $syslog_log_level = logging level for non verbose and non debug mode. Optional.
class cinder::base (
@@ -30,6 +29,7 @@ class cinder::base (
$use_syslog = false,
$syslog_log_facility = "LOCAL3",
$syslog_log_level = 'WARNING',
$log_dir = '/var/log/cinder',
) {
include cinder::params
@@ -54,34 +54,50 @@ class cinder::base (
ensure => present,
owner => 'cinder',
group => 'cinder',
mode => '0644',
mode => '0640',
require => Package['cinder'],
}
if $use_syslog {
if $use_syslog and !($debug =~ /(?i)(true|yes)/) {
cinder_config {
'DEFAULT/log_config': value => "/etc/cinder/logging.conf";
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/log_dir': ensure=> absent;
'DEFAULT/logfile': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/use_syslog': value => true;
'DEFAULT/syslog_log_facility': value => $syslog_log_facility;
}
file { "cinder-logging.conf":
content => template('cinder/logging.conf.erb'),
path => "/etc/cinder/logging.conf",
require => File[$::cinder::params::cinder_conf],
}
file { "cinder-all.log":
path => "/var/log/cinder-all.log",
}
# We must notify services to apply new logging rules
File['cinder-logging.conf'] ~> Service<| title == 'cinder-api' |>
File['cinder-logging.conf'] ~> Service<| title == 'cinder-volume' |>
File['cinder-logging.conf'] ~> Service<| title == 'cinder-scheduler' |>
}
else {
cinder_config {'DEFAULT/log_config': ensure=>absent;}
cinder_config {
'DEFAULT/log_config': ensure=> absent;
'DEFAULT/use_syslog': ensure=> absent;
'DEFAULT/syslog_log_facility': ensure=> absent;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/logdir':value=> $log_dir;
'DEFAULT/logging_context_format_string':
value => '%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s %(message)s';
'DEFAULT/logging_default_format_string':
value => '%(asctime)s %(levelname)s %(name)s [-] %(instance)s %(message)s';
}
# might be used for stdout logging instead, if configured
file { "cinder-logging.conf":
content => template('cinder/logging.conf-nosyslog.erb'),
path => "/etc/cinder/logging.conf",
require => File[$::cinder::params::cinder_conf],
}
}
# We must notify services to apply new logging rules
File['cinder-logging.conf'] ~> Service<| title == "$::cinder::params::api_service" |>
File['cinder-logging.conf'] ~> Service<| title == "$::cinder::params::volume_service" |>
File['cinder-logging.conf'] ~> Service<| title == "$::cinder::params::scheduler_service" |>
file { $::cinder::params::cinder_conf: }
file { $::cinder::params::cinder_paste_api_ini: }
@@ -141,7 +157,6 @@ else {
'DEFAULT/debug': value => $debug;
'DEFAULT/verbose': value => $verbose;
'DEFAULT/api_paste_config': value => '/etc/cinder/api-paste.ini';
'DEFAULT/use_syslog': value => $use_syslog;
}
exec { 'cinder-manage db_sync':
command => $::cinder::params::db_sync_command,
@@ -155,7 +170,7 @@ else {
Cinder_config<||> -> Exec['cinder-manage db_sync']
Nova_config<||> -> Exec['cinder-manage db_sync']
Cinder_api_paste_ini<||> -> Exec['cinder-manage db_sync']
Exec['cinder-manage db_sync'] -> Service<| title == 'cinder-api' |>
Exec['cinder-manage db_sync'] -> Service<| title == 'cinder-volume' |>
Exec['cinder-manage db_sync'] -> Service<| title == 'cinder-scheduler' |>
Exec['cinder-manage db_sync'] -> Service<| title == $::cinder::params::api_service |>
Exec['cinder-manage db_sync'] -> Service<| title == $::cinder::params::volume_service |>
Exec['cinder-manage db_sync'] -> Service<| title == $::cinder::params::scheduler_service |>
}

View File

@@ -0,0 +1,24 @@
[loggers]
keys = root
[handlers]
keys = root
[formatters]
keys = default
[formatter_default]
format=%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s
[logger_root]
level=NOTSET
handlers = root
propagate = 1
[handler_root]
class = StreamHandler
level=NOTSET
formatter = default
args = (sys.stdout,)

View File

@@ -1,18 +1,16 @@
[loggers]
keys = root
# devel is reserved for future usage
[handlers]
keys = production,devel
keys = production,devel,stderr
[formatters]
keys = normal,debug
[logger_root]
level = NOTSET
handlers = production
handlers = production,devel,stderr
propagate = 1
#qualname = cinder
[formatter_debug]
format = cinder-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
@@ -20,22 +18,46 @@ format = cinder-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
[formatter_normal]
format = cinder-%(name)s %(levelname)s: %(message)s
# Extended logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
# Note: local copy goes to /var/log/cinder-all.log
# logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
[handler_production]
class = handlers.SysLogHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = ('/dev/log', handlers.SysLogHandler.LOG_<%= @syslog_log_facility %>)
formatter = normal
# TODO: find out how it could be useful and how it should be used
[handler_stderr]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stderr,)
[handler_devel]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stdout,)

View File

@@ -53,6 +53,12 @@ class cobbler::snippets {
group => root,
mode => 0644,
}
file { "/usr/bin/pmanager.py" :
content => template("cobbler/scripts/pmanager.py"),
owner => root,
group => root,
mode => 0644,
}
}
/(?i)(centos|redhat)/: {
file { "/usr/lib/python2.6/site-packages/cobbler/late_command.py" :
@@ -61,6 +67,12 @@ class cobbler::snippets {
group => root,
mode => 0644,
}
file { "/usr/lib/python2.6/site-packages/cobbler/pmanager.py" :
content => template("cobbler/scripts/pmanager.py"),
owner => root,
group => root,
mode => 0644,
}
}
}

View File

@@ -36,23 +36,6 @@ text
# SKIP CONFIGURING X
skipx
# BOOTLOADER CUSTOMIZATION
# INSTALL BOOTLOADER INTO MASTER BOOT RECORD
# --location=mbr
# WHICH ORDER OF DRIVES TO USE DURING TRYING TO INSTALL BOOTLOADER
# --driveorder=sda,hda
# APPEND STRING TO KERNEL BOOT COMMAND
# --append=""
%include /tmp/bootloader.ks
# PARTITIONING
# CLEAN ANY INVALID PARTITION TABLE
zerombr
# REMOVE ALL PARTITIONS BEFORE CREATING NEW ONES
clearpart --all --initlabel
# AUTOMATICALLY CREATE / AND swap PARTITIONS
%include /tmp/partition.ks
# COBBLER EMBEDDED SNIPPET: 'network_config'

View File

@@ -0,0 +1,342 @@
#!/usr/bin/env python
import json
class PManager(object):
def __init__(self, data):
if isinstance(data, (str, unicode)):
self.data = json.loads(data)
else:
self.data = data
self.factor = 1
self.unit = "MiB"
self._pre = []
self._kick = []
self._post = []
self._pcount = {}
self._pend = {}
self._rcount = 0
self._pvcount = 0
def pcount(self, disk_id, increment=0):
self._pcount[disk_id] = self._pcount.get(disk_id, 0) + increment
return self._pcount.get(disk_id, 0)
def psize(self, disk_id, increment=0):
self._pend[disk_id] = self._pend.get(disk_id, 0) + increment
return self._pend.get(disk_id, 0)
def rcount(self, increment=0):
self._rcount += increment
return self._rcount
def pvcount(self, increment=0):
self._pvcount += increment
return self._pvcount
def pre(self, command=None):
if command:
return self._pre.append(command)
return self._pre
def kick(self, command=None):
if command:
return self._kick.append(command)
return self._kick
def post(self, command=None):
if command:
return self._post.append(command)
return self._post
def _gettabfstype(self, vol):
if vol["mount"] == "/":
return "ext4"
elif vol["mount"] == "/boot":
return "ext3"
elif vol["mount"] == "swap":
return "swap"
return "xfs"
def _getfstype(self, vol):
fstype = self._gettabfstype(vol)
if fstype == "swap":
return ""
return "--fstype=%s" % fstype
def _parttype(self, n):
return "primary"
def _getsize(self, vol):
"""Anaconda has a hard-coded 16TB limitation for the ext3/4
and xfs filesystems (the only filesystems we are supposed
to use). Besides, there is no stable 64-bit ext4
implementation at the moment, so the 16TB limit for ext4
is not only an anaconda limitation.

A root partition can not be located on an xfs filesystem,
so if the root filesystem is larger than 16TB we clamp its
size to 16TB. Note that formatting a 16TB volume as ext4
requires about 1G of memory."""
if vol["size"] > 16777216 and vol["mount"] == "/":
return 16777216
return vol["size"]
def clean(self, disk):
self.pre("hdparm -z /dev/{0}".format(disk["id"]))
self.pre("test -e /dev/{0} && dd if=/dev/zero "
"of=/dev/{0} bs=1M count=10".format(disk["id"]))
self.pre("sleep 5")
self.pre("hdparm -z /dev/{0}".format(disk["id"]))
def gpt(self, disk):
self.pre("parted -s /dev/{0} mklabel gpt".format(disk["id"]))
def bootable(self, disk):
"""Create and mark a BIOS Boot partition into which grub
will embed its code later; usable for legacy boot.
It may be way smaller, but be aware that parted may
shrink a 1M partition to zero on some disks and versions."""
self.pre("parted -a none -s /dev/{0} "
"unit {3} mkpart primary {1} {2}".format(
disk["id"],
self.psize(disk["id"]),
self.psize(disk["id"], 24 * self.factor),
self.unit
)
)
self.pre("parted -s /dev/{0} set {1} bios_grub on".format(
disk["id"],
self.pcount(disk["id"], 1)
)
)
"""Create a partition for EFI boot: the minimum size is
100M, the recommended one 200M, formatted as fat32 with a
future mountpoint at /boot/efi. There is also
'/usr/sbin/parted -s /dev/sda set 2 boot on',
which is strictly required for EFI boot."""
self.pre("parted -a none -s /dev/{0} "
"unit {3} mkpart primary fat32 {1} {2}".format(
disk["id"],
self.psize(disk["id"]),
self.psize(disk["id"], 200 * self.factor),
self.unit
)
)
self.pre("parted -s /dev/{0} set {1} boot on".format(
disk["id"],
self.pcount(disk["id"], 1)
)
)
def boot(self):
self.plains(volume_filter=lambda x: x["mount"] == "/boot")
self.raids(volume_filter=lambda x: x["mount"] == "/boot")
def notboot(self):
self.plains(volume_filter=lambda x: x["mount"] != "/boot")
self.raids(volume_filter=lambda x: x["mount"] != "/boot")
def plains(self, volume_filter=None):
if not volume_filter:
volume_filter = lambda x: True
for disk in [d for d in self.data if d["type"] == "disk"]:
for part in filter(lambda p: p["type"] == "partition" and
volume_filter(p), disk["volumes"]):
if part["size"] <= 0:
continue
pcount = self.pcount(disk["id"], 1)
self.pre("parted -a none -s /dev/{0} "
"unit {4} mkpart {1} {2} {3}".format(
disk["id"],
self._parttype(pcount),
self.psize(disk["id"]),
self.psize(disk["id"], part["size"] * self.factor),
self.unit))
fstype = self._getfstype(part)
size = self._getsize(part)
tabmount = part["mount"] if part["mount"] != "swap" else "none"
tabfstype = self._gettabfstype(part)
if size > 0 and size <= 16777216:
self.kick("partition {0} "
"--onpart=$(readlink -f /dev/{2})"
"{3}".format(part["mount"], size,
disk["id"], pcount))
else:
if part["mount"] != "swap":
self.post("mkfs.{0} $(basename `readlink -f /dev/{1}`)"
"{2}".format(tabfstype, disk["id"], pcount))
self.post("mkdir -p /mnt/sysimage{0}".format(
part["mount"]))
self.post("echo 'UUID=$(blkid -s UUID -o value "
"$(basename `readlink -f /dev/{0}`){1}) "
"{2} {3} defaults 0 0'"
" >> /mnt/sysimage/etc/fstab".format(
disk["id"], pcount, tabmount, tabfstype))
def raids(self, volume_filter=None):
if not volume_filter:
volume_filter = lambda x: True
raids = {}
for disk in [d for d in self.data if d["type"] == "disk"]:
for raid in filter(lambda p: p["type"] == "raid" and
volume_filter(p), disk["volumes"]):
if raid["size"] <= 0:
continue
pcount = self.pcount(disk["id"], 1)
rname = "raid.{0:03d}".format(self.rcount(1))
begin_size = self.psize(disk["id"])
end_size = self.psize(disk["id"], raid["size"] * self.factor)
self.pre("parted -a none -s /dev/{0} "
"unit {4} mkpart {1} {2} {3}".format(
disk["id"], self._parttype(pcount),
begin_size, end_size, self.unit))
self.kick("partition {0} "
"--onpart=$(readlink -f /dev/{2}){3}"
"".format(rname, raid["size"], disk["id"], pcount))
if not raids.get(raid["mount"]):
raids[raid["mount"]] = []
raids[raid["mount"]].append(rname)
for (num, (mount, rnames)) in enumerate(raids.iteritems()):
fstype = self._gettabfstype({"mount": mount})
self.kick("raid {0} --device md{1} --fstype ext2 "
"--level=RAID1 {2}".format(mount, num, " ".join(rnames)))
def pvs(self):
pvs = {}
for disk in [d for d in self.data if d["type"] == "disk"]:
for pv in [p for p in disk["volumes"] if p["type"] == "pv"]:
if pv["size"] <= 0:
continue
pcount = self.pcount(disk["id"], 1)
pvname = "pv.{0:03d}".format(self.pvcount(1))
begin_size = self.psize(disk["id"])
end_size = self.psize(disk["id"], pv["size"] * self.factor)
self.pre("parted -a none -s /dev/{0} "
"unit {4} mkpart {1} {2} {3}".format(
disk["id"], self._parttype(pcount),
begin_size, end_size, self.unit))
self.kick("partition {0} "
"--onpart=$(readlink -f /dev/{2}){3}"
"".format(pvname, pv["size"], disk["id"], pcount))
if not pvs.get(pv["vg"]):
pvs[pv["vg"]] = []
pvs[pv["vg"]].append(pvname)
for vg, pvnames in pvs.iteritems():
self.kick("volgroup {0} {1}".format(vg, " ".join(pvnames)))
def lvs(self):
for vg in [g for g in self.data if g["type"] == "vg"]:
for lv in vg["volumes"]:
if lv["size"] <= 0:
continue
fstype = self._getfstype(lv)
size = self._getsize(lv)
tabmount = lv["mount"] if lv["mount"] != "swap" else "none"
tabfstype = self._gettabfstype(lv)
if size > 0 and size <= 16777216:
self.kick("logvol {0} --vgname={1} --size={2} "
"--name={3} {4}".format(
lv["mount"], vg["id"], size,
lv["name"], fstype))
else:
self.post("lvcreate --size {0} --name {1} {2}".format(
size, lv["name"], vg["id"]))
if lv["mount"] != "swap":
self.post("mkfs.{0} /dev/mapper/{1}-{2}".format(
tabfstype, vg["id"], lv["name"]))
self.post("mkdir -p /mnt/sysimage{0}"
"".format(lv["mount"]))
"""
The name of the device. An LVM device is
expressed as the volume group name and the logical
volume name separated by a hyphen. A hyphen in
the original name is translated to two hyphens.
"""
self.post("echo '/dev/mapper/{0}-{1} {2} {3} defaults 0 0'"
" >> /mnt/sysimage/etc/fstab".format(
vg["id"].replace("-", "--"),
lv["name"].replace("-", "--"),
tabmount, tabfstype))
def bootloader(self):
devs = []
for disk in [d for d in self.data if d["type"] == "disk"]:
devs.append("$(basename `readlink -f /dev/{0}`)"
"".format(disk["id"]))
if devs:
self.kick("bootloader --location=mbr --driveorder={0} "
"--append=' biosdevname=0 "
"crashkernel=none'".format(",".join(devs)))
for dev in devs:
self.post("echo -n > /tmp/grub.script")
self.post("echo \\\"device (hd0) /dev/{0}\\\" >> "
"/tmp/grub.script".format(dev))
"""
This means that we set the drive geometry manually in order
to avoid grub register overlapping. We set it so that grub
thinks the disk size is equal to 1G:
130 cylinders * (16065 * 512 = 8225280 bytes) = 1G
"""
self.post("echo \\\"geometry (hd0) 130 255 63\\\" >> "
"/tmp/grub.script")
self.post("echo \\\"root (hd0,2)\\\" >> /tmp/grub.script")
self.post("echo \\\"install /grub/stage1 (hd0) /grub/stage2 p "
"/grub/grub.conf\\\" >> /tmp/grub.script")
self.post("echo quit >> /tmp/grub.script")
self.post("cat /tmp/grub.script | chroot /mnt/sysimage "
"/sbin/grub --no-floppy --batch")
def expose(self,
kickfile="/tmp/partition.ks",
postfile="/tmp/post_partition.ks"
):
result = ""
for pre in self.pre():
result += "{0}\n".format(pre)
result += "echo > {0}\n".format(kickfile)
for kick in self.kick():
result += "echo \"{0}\" >> {1}\n".format(kick, kickfile)
result += "echo \"%post --nochroot\" > {0}\n".format(postfile)
result += "echo \"set -x -v\" >> {0}\n".format(postfile)
result += ("echo \"exec 1>/mnt/sysimage/root/post-partition.log "
"2>&1\" >> {0}\n".format(postfile))
for post in self.post():
result += "echo \"{0}\" >> {1}\n".format(post, postfile)
result += "echo \"%end\" >> {0}\n".format(postfile)
return result
def eval(self):
for disk in [d for d in self.data if d["type"] == "disk"]:
self.clean(disk)
self.gpt(disk)
self.bootable(disk)
self.boot()
self.notboot()
self.pvs()
self.lvs()
self.bootloader()
self.pre("sleep 10")
for disk in [d for d in self.data if d["type"] == "disk"]:
self.pre("hdparm -z /dev/{0}".format(disk["id"]))
def pm(data):
pmanager = PManager(data)
pmanager.eval()
return pmanager.expose()
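The `pm()` helper accepts either a parsed list or its JSON encoding, since `PManager.__init__` calls `json.loads` on strings. A minimal, hypothetical `ks_spaces` layout of the shape the class walks — one disk carrying a boot partition and a PV, plus a volume group with two LVs; all ids, vg names and sizes (MiB) are illustrative, not values from a real Fuel deployment:

```python
import json

# Illustrative ks_spaces structure; ids, sizes and vg names are
# placeholders, not values from a real Fuel deployment.
layout = [
    {"type": "disk", "id": "sda", "volumes": [
        {"type": "partition", "mount": "/boot", "size": 200},
        {"type": "pv", "vg": "os", "size": 10000},
    ]},
    {"type": "vg", "id": "os", "volumes": [
        {"mount": "/", "name": "root", "size": 8000},
        {"mount": "swap", "name": "swap", "size": 2000},
    ]},
]

# Either form may be handed to pm(): the parsed list itself,
# or its JSON serialization.
serialized = json.dumps(layout)
assert json.loads(serialized) == layout
```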

View File

@@ -1,242 +1,4 @@
echo > /tmp/partition.ks
#import json
#if $getVar("ks_spaces","{}") != "{}"
#######################################
## Initializing variables
#######################################
#set $j = $getVar("ks_spaces","[]")
#set $spaces = $json.loads($j)
#set $clearpart_drives = []
#set $physical_volumes = []
#set $grub_drives = []
#set $custom_grub_drives = []
#set $partitions = []
#set $volume_groups = {}
#set $raid_volumes = {}
#set $logical_volumes = []
#set $post_logical_volumes = []
#set $parted_commands = []
#set $pvnum = 0
#set $raidnum = 0
#set $mdnum = 0
#for $space in $spaces
#set $space_id = $space.get("id")
#set $space_type = $space.get("type")
#set $space_volumes = $space.get("volumes")
#######################################
## Cleaning drives
#######################################
#if $space_type == "disk"
$clearpart_drives.append($space_id)
#end if
#######################################
## Labeling gpt
#######################################
#if $space_type == "disk"
$parted_commands.append("parted -s /dev/%s mklabel gpt" % $space_id)
#end if
#######################################
## Configuring space volumes
#######################################
#for $volume in $space_volumes
#set $volume_id = $volume.get("id")
#set $volume_type = $volume.get("type")
#######################################
## Configuring not boot partitions
#######################################
#if $space_type == "disk" and $volume_type == "boot"
## Create and mark Bios Boot partition to which grub will
## embed its code later, useable for legacy boot.
## May be way smaller, but be aware that the parted may
## shrink 1M partition to zero at some disks and versions.
## The following two lines are for future.
## $parted_commands.append("parted -a minimal -s /dev/%s unit MB mkpart primary 0 24M" % $space_id)
## $parted_commands.append("parted -s /dev/%s set 1 bios_grub on" % $space_id)
## Create partition for the EFI boot, minimum size is 100M,
## recommended is 200M, with fat32 and future mountpoint in
## the /boot/efi
## There is also '/usr/sbin/parted -s /dev/sda set 2 boot on'
## which is strictly needed for EFI boot itself.
## The following two lines are for future.
## $parted_commands.append("parted -a minimal -s /dev/%s unit MB mkpart primary fat32 24M 300M" % $space_id)
## $parted_commands.append("parted -s /dev/%s set 2 boot on" % $space_id)
#######################################
## Installing bootloader
#######################################
$grub_drives.append("\$(basename `readlink -f /dev/%s`)" % $space_id)
$custom_grub_drives.append("`readlink -f /dev/%s`" % $space_id)
#end if
#######################################
## Configuring plain partitions
#######################################
#if $space_type == "disk" and $volume_type == "partition"
#set $volume_mount = $volume.get("mount")
#set $volume_size = $int($volume.get("size") or 0)
#if $volume_size > 0
$partitions.append("partition %s --size=%s --ondisk=%s" % ($volume_mount, $volume_size, $space_id))
#end if
#end if
#######################################
## Configuring raid partitions
#######################################
#if $space_type == "disk" and $volume_type == "raid"
#set $volume_mount = $volume.get("mount")
#set $volume_size = $int($volume.get("size") or 0)
#set $volume_name = "raid.%03d" % $raidnum
#if $volume_size > 0
#if not $raid_volumes.get($volume_mount)
#set $raid_volumes[$volume_mount] = [{'size': $volume_size, 'name': $volume_name, 'ondisk': $space_id}]
#else
$raid_volumes[$volume_mount].append({'size': $volume_size, 'name': $volume_name, 'ondisk': $space_id})
#end if
#set $raidnum += 1
#end if
#end if
#######################################
## Configuring physical volumes
#######################################
#if $space_type == "disk" and $volume_type == "pv"
#set $volume_vg = $volume.get("vg")
#set $volume_size = $int($volume.get("size") or 0)
#set $volume_name = "pv.%03d" % $pvnum
#if $volume_size > 0
$physical_volumes.append("partition %s --size=%s --ondisk=%s" % ($volume_name, $volume_size, $space_id))
#if not $volume_groups.get($volume_vg)
#set $volume_groups[$volume_vg] = [$volume_name]
#else
$volume_groups[$volume_vg].append($volume_name)
#end if
#set $pvnum += 1
#end if
#end if
#######################################
## Configuring logical volumes
#######################################
#if $space_type == "vg" and $volume_type == "lv"
#set $volume_mount = $volume.get("mount")
## getting volume size in MB
#set $volume_size = $int($volume.get("size") or 0)
#set $volume_name = $volume.get("name")
##
## Anaconda has hard coded limitation of 16TB for ext3/4 and xfs filesystems (the only filesystems we are supposed to use).
## Besides there is no stable 64-bit ext4 implementation at the moment, so the limitation of 16TB is not only anaconda limitation.
## Root partition can not be located on xfs filesystem therefore we check if root filesystem is larger
## than 16TB and set it size into 16TB if it is. It is necessary to note that to format 16TB volume on ext4 it is needed about 1G memory.
#if $volume_size > 16777216 and $volume_mount == "/"
#set $volume_size = 16777216
#end if
## volume_size is less than or equal to 16TB
#if $volume_size > 0 and $volume_size <= 16777216
#set $fstype = "ext4"
#if $volume_name == "glance"
#set $fstype = "xfs"
#end if
$logical_volumes.append("logvol %s --vgname=%s --size=%s --name=%s --fstype=%s" % ($volume_mount, $space_id, $volume_size, $volume_name, $fstype))
## volume_size is more than 16TB, use xfs file system
#else
$post_logical_volumes.append("lvcreate --size %s --name %s %s" % ($volume_size, $volume_name, $space_id))
$post_logical_volumes.append("mkfs.xfs /dev/mapper/%s-%s" % ($space_id, $volume_name))
$post_logical_volumes.append("mkdir -p /mnt/sysimage%s" % $volume_mount)
$post_logical_volumes.append("echo '/dev/mapper/%s-%s %s xfs defaults 0 0' >> /mnt/sysimage/etc/fstab" % ($space_id, $volume_name, $volume_mount))
#end if
#end if
#######################################
#end for
#end for
##
##
#######################################
## Actual cleaning drives
#######################################
#for $d in $clearpart_drives
test -e $d && dd if=/dev/zero of=/dev/$d bs=1M count=10
#end for
#######################################
## Actual configuring boot partitions
#######################################
#for $parted in $parted_commands
$parted
#end for
#######################################
## Actual creating plain partitions
#######################################
#for $partition in $partitions
echo "$partition" >> /tmp/partition.ks
#end for
#######################################
## Actual creating raid volumes
#######################################
#for $raid_mount in $raid_volumes.keys()
#if $len($raid_volumes[$raid_mount]) < 2
#set $size = $raid_volumes[$raid_mount][0]['size']
#set $ondisk = $raid_volumes[$raid_mount][0]['ondisk']
echo "partition $raid_mount --size=$size --ondisk=$ondisk" >> /tmp/partition.ks
#continue
#else
#set $ks_raids = ""
#for $p in $raid_volumes[$raid_mount]
echo "partition $p['name'] --size=$p['size'] --ondisk=$p['ondisk']" >> /tmp/partition.ks
#set $ks_raids = "%s %s" % ($ks_raids, $p['name'])
#end for
#set $num_spares = $len($raid_volumes[$raid_mount]) - 2
#set $md_name = "md%d" % $mdnum
## #if $num_spares > 0
##echo "raid $raid_mount --device $md_name --spares=$num_spares --fstype ext2 --level=RAID1 $ks_raids" >> /tmp/partition.ks
## #else
echo "raid $raid_mount --device $md_name --fstype ext2 --level=RAID1 $ks_raids" >> /tmp/partition.ks
## #end if
#end if
#set $mdnum += 1
#end for
#######################################
## Actual creating physical volumes
#######################################
#for $pv in $physical_volumes
echo "$pv" >> /tmp/partition.ks
#end for
#######################################
## Actual creating volume groups
#######################################
#for $volgroup in $volume_groups.keys()
#set $ks_pvs = " ".join($volume_groups.get($volgroup))
echo "volgroup $volgroup $ks_pvs" >> /tmp/partition.ks
#end for
#######################################
## Actual creating logical volumes
#######################################
#for $lv in $logical_volumes
echo "$lv" >> /tmp/partition.ks
#end for
#######################################
## Actual creating logical volumes in %post section
#######################################
echo "%post --nochroot" > /tmp/post_partition.ks
echo "set -x -v" >> /tmp/post_partition.ks
echo "exec 1>/mnt/sysimage/root/post_partition.log 2>&1" >> /tmp/post_partition.ks
#for $lv in $post_logical_volumes
echo "$lv" >> /tmp/post_partition.ks
#end for
#######################################
## Actual bootloader installing
#######################################
#set $drives = ",".join($grub_drives)
echo "bootloader --location=mbr --driveorder=$drives --append=' biosdevname=0 crashkernel=none'" > /tmp/bootloader.ks
##
#######################################
## Actual custom bootloader installing
#######################################
#set $num = 0
#for $drive in $custom_grub_drives
echo "stage2_devnum=\\$((\\$(fdisk -l $drive | grep -E '^\/.+\*' | awk '{print \\$1}' | sed s/[^0-9]//g) - 1))" >> /tmp/post_partition.ks
echo "chroot /mnt/sysimage /bin/cp /boot/grub/grub.conf /boot/grub/grub${num}.conf" >> /tmp/post_partition.ks
echo "chroot /mnt/sysimage /bin/sed -i -re \"s/\(hd[0-9]+\,[0-9]+\)/\(hd0\,\\${stage2_devnum}\)/g\" /boot/grub/grub${num}.conf" >> /tmp/post_partition.ks
echo "echo -n > /tmp/grub.script" >> /tmp/post_partition.ks
echo "echo \"device (hd0) $drive\" >> /tmp/grub.script" >> /tmp/post_partition.ks
echo "echo \"root (hd0,\\${stage2_devnum})\" >> /tmp/grub.script" >> /tmp/post_partition.ks
echo "echo \"install /grub/stage1 (hd0) /grub/stage2 p /grub/grub${num}.conf\" >> /tmp/grub.script" >> /tmp/post_partition.ks
echo "echo quit >> /tmp/grub.script" >> /tmp/post_partition.ks
echo "cat /tmp/grub.script | chroot /mnt/sysimage /sbin/grub --batch" >> /tmp/post_partition.ks
#set $num += 1
#end for
echo "cp /tmp/post_partition.ks /mnt/sysimage/root/post_partition.ks" >> /tmp/post_partition.ks
#end if
#import pmanager
#set $pm = $pmanager.PManager($getVar("ks_spaces","[]"))
$pm.eval()
$pm.expose()

View File

@@ -78,13 +78,6 @@ class glance::api(
require => Class['glance'],
}
if !defined(File["glance-logging.conf"]) {
file {"glance-logging.conf":
content => template('glance/logging.conf.erb'),
path => "/etc/glance/logging.conf",
}
}
if($sql_connection =~ /mysql:\/\/\S+:\S+@\S+\/\S+/) {
require 'mysql::python'
} elsif($sql_connection =~ /postgresql:\/\/\S+:\S+@\S+\/\S+/) {
@@ -94,18 +87,43 @@ class glance::api(
} else {
fail("Invalid db connection ${sql_connection}")
}
if $use_syslog {
if $use_syslog and !$debug =~ /(?i)(true|yes)/ {
glance_api_config {
'DEFAULT/log_config': value => "/etc/glance/logging.conf";
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/log_dir': ensure=> absent;
'DEFAULT/logfile': ensure=> absent;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/use_syslog': value => true;
'DEFAULT/syslog_log_facility': value => $syslog_log_facility;
}
if !defined(File["glance-logging.conf"]) {
file {"glance-logging.conf":
content => template('glance/logging.conf.erb'),
path => "/etc/glance/logging.conf",
}
}
} else {
glance_api_config {
'DEFAULT/log_config': ensure => absent;
'DEFAULT/use_syslog': ensure => absent;
'DEFAULT/syslog_log_facility': ensure => absent;
'DEFAULT/use_stderr': ensure => absent;
'DEFAULT/log_file': value => $log_file;
'DEFAULT/logging_context_format_string':
value => '%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s %(message)s';
'DEFAULT/logging_default_format_string':
value => '%(asctime)s %(levelname)s %(name)s [-] %(instance)s %(message)s';
}
# might be used for stdout logging instead, if configured
if !defined(File["glance-logging.conf"]) {
file {"glance-logging.conf":
content => template('glance/logging.conf-nosyslog.erb'),
path => "/etc/glance/logging.conf",
}
}
}
# basic service config
@@ -116,7 +134,6 @@ if $use_syslog {
'DEFAULT/bind_port': value => $bind_port;
'DEFAULT/backlog': value => $backlog;
'DEFAULT/workers': value => $workers;
'DEFAULT/use_syslog': value => $use_syslog;
'DEFAULT/registry_client_protocol': value => "http";
'DEFAULT/delayed_delete': value => "False";
'DEFAULT/scrub_time': value => "43200";
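Earlier in this class the `$sql_connection` string is validated with regexes before pulling in the matching python DB bindings. The same check in Python (a sketch; the `postgresql::python` class name is an assumption mirroring the mysql case, since the elsif body is truncated in the hunk above):

```python
import re

# The same patterns the manifest matches against (backslashes unescaped).
MYSQL_RE = re.compile(r'mysql://\S+:\S+@\S+/\S+')
PGSQL_RE = re.compile(r'postgresql://\S+:\S+@\S+/\S+')

def db_backend(sql_connection):
    """Return which python DB bindings a connection string needs."""
    if MYSQL_RE.search(sql_connection):
        return 'mysql::python'
    if PGSQL_RE.search(sql_connection):
        return 'postgresql::python'   # assumed class name, mirrors the mysql case
    raise ValueError("Invalid db connection %s" % sql_connection)

print(db_backend('mysql://glance:secret@127.0.0.1/glance'))  # -> mysql::python
```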

View File

@@ -12,7 +12,7 @@ class glance(
ensure => present,
owner => 'glance',
group => 'glance',
mode => '0644',
mode => '0640',
require => Package['glance'],
}
@@ -20,9 +20,6 @@ class glance(
ensure => directory,
mode => '0770',
}
file { "glance-all.log":
path => "/var/log/glance-all.log",
}
group {'glance': gid=> 161, ensure=>present, system=>true}
user {'glance': uid=> 161, ensure=>present, system=>true, gid=>"glance", require=>Group['glance']}

View File

@@ -21,17 +21,51 @@ class glance::registry(
$syslog_log_facility = 'LOCAL2',
$syslog_log_level = 'WARNING',
) inherits glance {
if $use_syslog {
File {
ensure => present,
owner => 'glance',
group => 'glance',
mode => '0640',
notify => Service['glance-registry'],
require => Class['glance']
}
if $use_syslog and !$debug =~ /(?i)(true|yes)/ {
glance_registry_config {
'DEFAULT/log_config': value => "/etc/glance/logging.conf";
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/log_dir': ensure=> absent;
'DEFAULT/logfile': ensure=> absent;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/use_syslog': value => true;
'DEFAULT/syslog_log_facility': value => $syslog_log_facility;
}
if !defined(File["glance-logging.conf"]) {
file {"glance-logging.conf":
content => template('glance/logging.conf.erb'),
path => "/etc/glance/logging.conf",
}
}
} else {
glance_registry_config {
'DEFAULT/log_config': ensure => absent;
'DEFAULT/log_file': value => $log_file;
'DEFAULT/use_syslog': ensure => absent;
'DEFAULT/syslog_log_facility': ensure => absent;
'DEFAULT/use_stderr': ensure => absent;
'DEFAULT/logging_context_format_string':
value => '%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s %(message)s';
'DEFAULT/logging_default_format_string':
value => '%(asctime)s %(levelname)s %(name)s [-] %(instance)s %(message)s';
}
# might be used for stdout logging instead, if configured
if !defined(File["glance-logging.conf"]) {
file {"glance-logging.conf":
content => template('glance/logging.conf-nosyslog.erb'),
path => "/etc/glance/logging.conf",
}
}
}
@@ -43,22 +77,6 @@ if $use_syslog {
Glance_registry_config<||> ~> Exec<| title == 'glance-manage db_sync' |>
Glance_registry_config<||> ~> Service['glance-registry']
File {
ensure => present,
owner => 'glance',
group => 'glance',
mode => '0640',
notify => Service['glance-registry'],
require => Class['glance']
}
if !defined(File["glance-logging.conf"]) {
file {"glance-logging.conf":
content => template('glance/logging.conf.erb'),
path => "/etc/glance/logging.conf",
}
}
if($sql_connection =~ /mysql:\/\/\S+:\S+@\S+\/\S+/) {
require 'mysql::python'
} elsif($sql_connection =~ /postgresql:\/\/\S+:\S+@\S+\/\S+/) {
@@ -78,7 +96,6 @@ if $use_syslog {
'DEFAULT/backlog': value => "4096";
'DEFAULT/api_limit_max': value => "1000";
'DEFAULT/limit_param_default': value => "25";
'DEFAULT/use_syslog': value => $use_syslog;
}
# db connection config
@@ -131,7 +148,7 @@ if $use_syslog {
{
package {'glance-registry':
name => $::glance::params::registry_package_name,
ensure => $package_ensure
}
File['/etc/glance/glance-registry.conf'] -> Glance_registry_config<||>
Package['glance-registry']->Service['glance-registry']

View File

@@ -0,0 +1,24 @@
[loggers]
keys = root
[handlers]
keys = root
[formatters]
keys = default
[formatter_default]
format=%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s
[logger_root]
level=NOTSET
handlers = root
propagate = 1
[handler_root]
class = StreamHandler
level=NOTSET
formatter = default
args = (sys.stdout,)

View File

@@ -1,18 +1,16 @@
[loggers]
keys = root
# devel is reserved for future usage
[handlers]
keys = production,devel
keys = production,devel,stderr
[formatters]
keys = normal,debug
[logger_root]
level = NOTSET
handlers = production
handlers = production,devel,stderr
propagate = 1
#qualname = glance
[formatter_debug]
format = glance-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
@@ -20,22 +18,46 @@ format = glance-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
[formatter_normal]
format = glance-%(name)s %(levelname)s: %(message)s
# Extended logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
# Note: local copy goes to /var/log/glance-all.log
# logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
[handler_production]
class = handlers.SysLogHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = ('/dev/log', handlers.SysLogHandler.LOG_<%= @syslog_log_facility %>)
# TODO: find out how this could be useful and how it should be used
[handler_stderr]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stderr,)
[handler_devel]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stdout,)
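`handler_production` above sends records to syslog via `SysLogHandler` with the configured `LOG_<facility>`. On the wire, syslog combines facility and severity into one priority value (`facility << 3 | severity`); a short sketch with the module's default LOCAL2 facility (UDP address form, so no local `/dev/log` socket is needed):

```python
from logging import handlers

# Constructing with a UDP address works without any syslog daemon present.
h = handlers.SysLogHandler(address=('localhost', 514),
                           facility=handlers.SysLogHandler.LOG_LOCAL2)

# syslog priority = facility << 3 | severity
pri = h.encodePriority(h.facility, h.mapPriority('WARNING'))
print(pri)  # LOG_LOCAL2 (18) << 3 | LOG_WARNING (4) -> 148
h.close()
```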

View File

@@ -19,8 +19,7 @@
# Defaults to False.
# [use_syslog] Whether or not keystone should log to syslog. Optional.
# Defaults to False.
# [syslog_log_facility] Facility for syslog, if used. Optional. Note: duplicating conf option
# wouldn't have been used, but more powerfull rsyslog features managed via conf template instead
# [syslog_log_facility] Facility for syslog, if used. Optional.
# [syslog_log_level] logging level for non verbose and non debug mode. Optional.
# [catalog_type] Type of catalog that keystone uses to store endpoints,services. Optional.
# Defaults to sql. (Also accepts template)
@@ -84,15 +83,20 @@ class keystone(
ensure => present,
owner => 'keystone',
group => 'keystone',
mode => '0644',
mode => '0640',
require => Package['keystone'],
}
if $use_syslog {
if $use_syslog and !$debug =~ /(?i)(true|yes)/ {
keystone_config {
'DEFAULT/log_config': value => "/etc/keystone/logging.conf";
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/log_dir': ensure=> absent;
'DEFAULT/logfile': ensure=> absent;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/use_syslog': value => true;
'DEFAULT/syslog_log_facility': value => $syslog_log_facility;
}
file {"keystone-logging.conf":
content => template('keystone/logging.conf.erb'),
@@ -101,14 +105,21 @@ class keystone(
# We must notify service for new logging rules
notify => Service['keystone'],
}
file { "keystone-all.log":
path => "/var/log/keystone-all.log",
}
} else {
keystone_config {
'DEFAULT/log_config': ensure => absent;
'DEFAULT/log_file': value => $log_file;
'DEFAULT/log_dir': value => $log_dir;
'DEFAULT/use_syslog': ensure => absent;
'DEFAULT/syslog_log_facility': ensure => absent;
'DEFAULT/use_stderr': ensure => absent;
}
# might be used for stdout logging instead, if configured
file {"keystone-logging.conf":
content => template('keystone/logging.conf-nosyslog.erb'),
path => "/etc/keystone/logging.conf",
require => File['/etc/keystone'],
# We must notify service for new logging rules
notify => Service['keystone'],
}
}
@@ -169,7 +180,6 @@ class keystone(
'DEFAULT/compute_port': value => $compute_port;
'DEFAULT/debug': value => $debug;
'DEFAULT/verbose': value => $verbose;
'DEFAULT/use_syslog': value => $use_syslog;
'identity/driver': value =>"keystone.identity.backends.sql.Identity";
'token/driver': value =>"keystone.token.backends.sql.Token";
'policy/driver': value =>"keystone.policy.backends.rules.Policy";

View File

@@ -0,0 +1,24 @@
[loggers]
keys = root
[handlers]
keys = root
[formatters]
keys = default
[formatter_default]
format=%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s
[logger_root]
level=NOTSET
handlers = root
propagate = 1
[handler_root]
class = StreamHandler
level=NOTSET
formatter = default
args = (sys.stdout,)

View File

@@ -1,18 +1,16 @@
[loggers]
keys = root
# devel is reserved for future usage
[handlers]
keys = production,devel
keys = production,devel,stderr
[formatters]
keys = normal,debug
[logger_root]
level = NOTSET
handlers = production
handlers = production,devel,stderr
propagate = 1
#qualname = keystone
[formatter_debug]
format = keystone-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
@@ -20,22 +18,46 @@ format = keystone-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
[formatter_normal]
format = keystone-%(name)s %(levelname)s: %(message)s
# Extended logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
# Note: local copy goes to /var/log/keystone-all.log
# logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
[handler_production]
class = handlers.SysLogHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = ('/dev/log', handlers.SysLogHandler.LOG_<%= @syslog_log_facility %>)
# TODO: find out how this could be useful and how it should be used
[handler_stderr]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stderr,)
[handler_devel]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stdout,)

View File

@@ -70,6 +70,7 @@ Puppet::Type.type(:nova_floating_range).provide :nova_manage do
:auth_method => @resource[:auth_method],
:auth_url => @resource[:auth_url],
:authtenant_name => @resource[:authtenant_name],
:service_type => @resource[:service_type]
:service_type => @resource[:service_type],
:is_debug => Puppet[:debug]
end
end

View File

@@ -34,8 +34,7 @@
# $rabbit_nodes = ['node001', 'node002', 'node003']
# add rabbit nodes hostname
# [use_syslog] Whether or not the service should log to syslog. Optional.
# [syslog_log_facility] Facility for syslog, if used. Optional. Note: duplicating conf option
# wouldn't have been used, but more powerfull rsyslog features managed via conf template instead
# [syslog_log_facility] Facility for syslog, if used. Optional.
# [syslog_log_level] logging level for non verbose and non debug mode. Optional.
#
class nova(
@@ -157,25 +156,22 @@ class nova(
ensure => present,
owner => 'nova',
group => 'nova',
mode => '0644',
mode => '0640',
require => Package['nova-common'],
}
#Configure logging in nova.conf
if $use_syslog
if $use_syslog and !$debug =~ /(?i)(true|yes)/
{
nova_config
{
'DEFAULT/log_config': value => "/etc/nova/logging.conf";
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/use_syslog': value => "True";
'DEFAULT/logfile': ensure=> absent;
'DEFAULT/use_syslog': value => true;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/syslog_log_facility': value => $syslog_log_facility;
'DEFAULT/logging_context_format_string':
value => '%(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s %(message)s';
'DEFAULT/logging_default_format_string':
value =>'%(levelname)s %(name)s [-] %(instance)s %(message)s';
}
file {"nova-logging.conf":
@@ -183,8 +179,25 @@ file {"nova-logging.conf":
path => "/etc/nova/logging.conf",
require => File[$logdir],
}
file { "nova-all.log":
path => "/var/log/nova-all.log",
}
else {
nova_config {
'DEFAULT/log_config': ensure=> absent;
'DEFAULT/use_syslog': ensure=> absent;
'DEFAULT/syslog_log_facility': ensure=> absent;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/logdir': value=> $logdir;
'DEFAULT/logging_context_format_string':
value => '%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(instance)s %(message)s';
'DEFAULT/logging_default_format_string':
value => '%(asctime)s %(levelname)s %(name)s [-] %(instance)s %(message)s';
}
# might be used for stdout logging instead, if configured
file {"nova-logging.conf":
content => template('nova/logging.conf-nosyslog.erb'),
path => "/etc/nova/logging.conf",
require => File[$logdir],
}
}
# We must notify services to apply new logging rules
@@ -204,14 +217,6 @@ File['nova-logging.conf'] ~> Service <| title == "$nova::params::vncproxy_servic
File['nova-logging.conf'] ~> Service <| title == "$nova::params::volume_service_name" |>
File['nova-logging.conf'] ~> Service <| title == "$nova::params::meta_api_service_name" |>
}
else {
nova_config {
'DEFAULT/log_config': ensure=>absent;
'DEFAULT/use_syslog': value =>"False";
'DEFAULT/logdir': value => $logdir;
}
}
file { $logdir:
ensure => directory,
mode => '0751',

View File

@@ -0,0 +1,33 @@
require 'puppet'
require 'test/unit'
require 'mocha/setup'
require 'puppet/provider/nova_floating_range/nova_manage'
describe 'Puppet::Type.type(:nova_floating_range)' do
before :all do
type_class = Puppet::Type::Nova_floating_range.new(:name => '192.168.1.2-192.168.1.9')
@provider_class = Puppet::Type.type(:nova_floating_range).provider(:nova_manage).new(type_class)
# Mock to return existing ip addresses
floating_ip_info_mock = [OpenStack::Compute::FloatingIPInfo.new('address' => '192.168.1.2'),OpenStack::Compute::FloatingIPInfo.new('address' => '192.168.1.3')]
@provider_class.stubs(:connect).returns(true)
@provider_class.connect.stubs(:get_floating_ips_bulk).returns(floating_ip_info_mock)
end
it 'ip range should be correctly split' do
@provider_class.ip_range.should == ['192.168.1.2', '192.168.1.3', '192.168.1.4', '192.168.1.5', '192.168.1.6', '192.168.1.7', '192.168.1.8', '192.168.1.9']
end
it 'should correctly calculate range and remove existing ips' do
@provider_class.operate_range.should == ['192.168.1.4', '192.168.1.5', '192.168.1.6', '192.168.1.7', '192.168.1.8', '192.168.1.9']
end
it 'should create cidr including first and last ip' do
@provider_class.mixed_range.should == ['192.168.1.4', '192.168.1.7', '192.168.1.8', '192.168.1.9', '192.168.1.4/30']
end
it 'should correctly calculate intersection of range ips' do
@provider_class.resource[:ensure] = :absent
@provider_class.operate_range.should == ['192.168.1.2', '192.168.1.3']
end
end
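The expectations above pin down the provider's range arithmetic: `ip_range` expands `'a-b'` into every address inclusive, and `operate_range` drops addresses nova already knows about. Equivalent logic in Python (function names follow the spec; the implementation is a reconstruction, not the provider's code):

```python
import ipaddress

def ip_range(spec):
    """Expand 'first-last' into every IPv4 address, inclusive."""
    first, last = (ipaddress.IPv4Address(s) for s in spec.split('-'))
    return [str(ipaddress.IPv4Address(i))
            for i in range(int(first), int(last) + 1)]

def operate_range(spec, existing):
    """Addresses still to create: the range minus what already exists."""
    have = set(existing)
    return [ip for ip in ip_range(spec) if ip not in have]

print(operate_range('192.168.1.2-192.168.1.9',
                    ['192.168.1.2', '192.168.1.3']))
# -> ['192.168.1.4', '192.168.1.5', '192.168.1.6',
#     '192.168.1.7', '192.168.1.8', '192.168.1.9']
```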

View File

@@ -0,0 +1,22 @@
require 'puppet'
describe 'Puppet::Type.newtype(:nova_floating_range)' do
before :each do
@nova_floating_range = Puppet::Type.type(:nova_floating_range).new(:name => '10.0.0.1-10.0.0.254')
end
it 'should not expect a name without ip range' do
expect {
Puppet::Type.type(:nova_floating_range).new(:name => 'foo')
}.to raise_error(Puppet::Error, /does not look/)
end
it 'pool should be "nova" by default' do
@nova_floating_range[:pool].should == 'nova'
end
it 'auth url should be url' do
expect { @nova_floating_range[:auth_url] = 'h ttp://192.168.1.1:5000/v2.0/'
}.to raise_error(Puppet::Error, /does not look/)
end
end

View File

@@ -0,0 +1,24 @@
[loggers]
keys = root
[handlers]
keys = root
[formatters]
keys = default
[formatter_default]
format=%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s
[logger_root]
level=NOTSET
handlers = root
propagate = 1
[handler_root]
class = StreamHandler
level=NOTSET
formatter = default
args = (sys.stdout,)

View File

@@ -1,18 +1,16 @@
[loggers]
keys = root
# devel is reserved for future usage
[handlers]
keys = production,devel
keys = production,devel,stderr
[formatters]
keys = normal,debug
[logger_root]
level = NOTSET
handlers = production
handlers = production,devel,stderr
propagate = 1
#qualname = nova
[formatter_debug]
format = nova-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
@@ -20,22 +18,46 @@ format = nova-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
[formatter_normal]
format = nova-%(name)s %(levelname)s: %(message)s
# Extended logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
# Note: local copy goes to /var/log/nova-all.log
# logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
[handler_production]
class = handlers.SysLogHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = ('/dev/log', handlers.SysLogHandler.LOG_<%= @syslog_log_facility %>)
# TODO: find out how this could be useful and how it should be used
[handler_stderr]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stderr,)
[handler_devel]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stdout,)

View File

@@ -381,6 +381,15 @@ $master_swift_proxy_ip = $master_swift_proxy_nodes[0]['internal_address']
### Glance and swift END ###
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets INFO level messages.
# If neither debug nor verbose is set, the default level, WARNING, is used.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
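The comment above describes a precedence: debug wins over verbose, and when neither is set the WARNING default (tunable for syslog via `syslog_log_level`) applies. As a small function (a sketch of the documented behavior, not code from the manifests):

```python
def effective_log_level(debug=False, verbose=False, syslog_log_level='WARNING'):
    """debug -> DEBUG; else verbose -> INFO; else the configured default."""
    if debug:
        return 'DEBUG'
    if verbose:
        return 'INFO'
    return syslog_log_level

print(effective_log_level(verbose=True))  # -> INFO
```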
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case.
$use_syslog = true
@@ -423,6 +432,7 @@ if $use_syslog {
# Rabbit doesn't support syslog directly, should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
debug => $debug,
}
}
@@ -473,15 +483,6 @@ $mirror_type = 'default'
$enable_test_repo = false
$repo_proxy = undef
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug would have set DEBUG level and ignore verbose settings, if any.
# Verbose would have set INFO level messages
# In case of non debug and non verbose - WARNING, default level would have set.
# Note: if syslog on, this default level may be configured (for syslog) with syslog_log_level option.
$verbose = true
$debug = false
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.
@@ -688,6 +689,8 @@ node /fuel-controller-[\d+]/ {
db_host => $internal_virtual_ip,
service_endpoint => $internal_virtual_ip,
cinder_rate_limits => $cinder_rate_limits,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
}
@@ -705,6 +708,9 @@ node /fuel-controller-[\d+]/ {
controller_node_address => $internal_virtual_ip,
swift_local_net_ip => $swift_local_net_ip,
master_swift_proxy_ip => $master_swift_proxy_ip,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
}
Class ['openstack::swift::proxy'] -> Class['openstack::swift::storage_node']
@@ -778,6 +784,7 @@ node /fuel-compute-[\d+]/ {
ssh_public_key => 'puppet:///ssh_keys/openstack.pub',
use_syslog => $use_syslog,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
nova_rate_limits => $nova_rate_limits,

View File

@@ -420,6 +420,15 @@ $master_swift_proxy_ip = $master_swift_proxy_nodes[0]['internal_address']
### Glance and swift END ###
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets INFO level messages.
# If neither debug nor verbose is set, the default level, WARNING, is used.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case.
$use_syslog = true
@@ -462,6 +471,7 @@ if $use_syslog {
# Rabbit doesn't support syslog directly, should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
debug => $debug,
}
}
@@ -512,15 +522,6 @@ $mirror_type = 'default'
$enable_test_repo = false
$repo_proxy = undef
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug would have set DEBUG level and ignore verbose settings, if any.
# Verbose would have set INFO level messages
# In case of non debug and non verbose - WARNING, default level would have set.
# Note: if syslog on, this default level may be configured (for syslog) with syslog_log_level option.
$verbose = true
$debug = true
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.
@@ -761,6 +762,8 @@ node /fuel-controller-[\d+]/ {
rabbit_password => $rabbit_password,
rabbit_user => $rabbit_user,
rabbit_ha_virtual_ip => $internal_virtual_ip,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
qpid_nodes => [$internal_virtual_ip],
@@ -781,6 +784,9 @@ node /fuel-controller-[\d+]/ {
controller_node_address => $internal_virtual_ip,
swift_local_net_ip => $swift_local_net_ip,
master_swift_proxy_ip => $master_swift_proxy_ip,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
}
Class ['openstack::swift::proxy'] -> Class['openstack::swift::storage_node']
@@ -857,6 +863,7 @@ node /fuel-compute-[\d+]/ {
ssh_public_key => 'puppet:///ssh_keys/openstack.pub',
use_syslog => $use_syslog,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
nova_rate_limits => $nova_rate_limits,

View File

@@ -443,6 +443,15 @@ $master_swift_proxy_ip = $master_swift_proxy_nodes[0]['internal_address']
### Glance and swift END ###
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets INFO level messages.
# If neither debug nor verbose is set, the default level, WARNING, is used.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case.
$use_syslog = true
@@ -485,6 +494,7 @@ if $use_syslog {
# Rabbit doesn't support syslog directly, should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
debug => $debug,
}
}
@@ -535,15 +545,6 @@ $mirror_type = 'default'
$enable_test_repo = false
$repo_proxy = undef
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug would have set DEBUG level and ignore verbose settings, if any.
# Verbose would have set INFO level messages
# In case of non debug and non verbose - WARNING, default level would have set.
# Note: if syslog on, this default level may be configured (for syslog) with syslog_log_level option.
$verbose = true
$debug = false
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.
@@ -833,6 +834,7 @@ node /fuel-compute-[\d+]/ {
ssh_public_key => 'puppet:///ssh_keys/openstack.pub',
use_syslog => $use_syslog,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
nova_rate_limits => $nova_rate_limits,
@@ -888,7 +890,9 @@ node /fuel-swift-[\d+]/ {
qpid_user => $rabbit_user,
qpid_nodes => [$internal_virtual_ip],
sync_rings => ! $primary_proxy,
syslog_log_level => $syslog_log_level,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
}
@@ -931,6 +935,9 @@ node /fuel-swiftproxy-[\d+]/ {
controller_node_address => $internal_virtual_ip,
swift_local_net_ip => $swift_local_net_ip,
master_swift_proxy_ip => $master_swift_proxy_ip,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
}
}

View File

@@ -382,6 +382,14 @@ if $node[0]['role'] == 'primary-controller' {
$primary_controller = false
}
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets INFO level messages.
# If neither debug nor verbose is set, the default level, WARNING, is used.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case.
@@ -425,6 +433,7 @@ if $use_syslog {
# Rabbit doesn't support syslog directly, should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
debug => $debug,
}
}
@@ -475,15 +484,6 @@ $mirror_type = 'default'
$enable_test_repo = false
$repo_proxy = undef
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug would have set DEBUG level and ignore verbose settings, if any.
# Verbose would have set INFO level messages
# In case of non debug and non verbose - WARNING, default level would have set.
# Note: if syslog on, this default level may be configured (for syslog) with syslog_log_level option.
$verbose = true
$debug = true
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.

View File

@@ -322,6 +322,15 @@ $swift_loopback = false
### Glance and swift END ###
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets INFO level messages.
# If neither debug nor verbose is set, the default level, WARNING, is used.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case,
# and configured to start at the very beginning of puppet agent run.
@@ -365,6 +374,7 @@ if $use_syslog {
# Rabbit doesn't support syslog directly, should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
debug => $debug,
}
}
@@ -415,15 +425,6 @@ $enable_test_repo = false
$repo_proxy = undef
$use_upstream_mysql = true
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug would have set DEBUG level and ignore verbose settings, if any.
# Verbose would have set INFO level messages
# In case of non debug and non verbose - WARNING, default level would have set.
# Note: if syslog on, this default level may be configured (for syslog) with syslog_log_level option.
$verbose = true
$debug = true
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.
@@ -706,6 +707,7 @@ node /fuel-compute-[\d+]/ {
cinder_iscsi_bind_addr => $cinder_iscsi_bind_addr,
use_syslog => $use_syslog,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
nova_rate_limits => $nova_rate_limits,

View File

@@ -287,6 +287,15 @@ $swift_loopback = false
### Glance and swift END ###
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets INFO level messages.
# If neither debug nor verbose is set, the default level, WARNING, is used.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case,
# and configured to start at the very beginning of puppet agent run.
@@ -330,6 +339,7 @@ if $use_syslog {
# Rabbit doesn't support syslog directly, should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
debug => $debug,
}
}
@@ -380,15 +390,6 @@ $enable_test_repo = false
$repo_proxy = undef
$use_upstream_mysql = true
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets the INFO level.
# If neither debug nor verbose is set, the default level is WARNING.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.


@@ -18,6 +18,7 @@
# [virtual] if node is virtual, fix for udp checksums should be applied
# [rabbit_log_level] should be >= the global syslog_log_level option,
# otherwise no messages would reach syslog (client role only)
# [debug] switch between debug and standard cases, client role only. imfile monitors for local logs would be used if debug.
class openstack::logging (
$role = 'client',
@@ -38,6 +39,7 @@ class openstack::logging (
$syslog_log_facility_nova = 'LOCAL6',
$syslog_log_facility_keystone = 'LOCAL7',
$rabbit_log_level = 'NOTICE',
$debug = false,
) {
validate_re($proto, 'tcp|udp')
@@ -58,6 +60,7 @@ if $role == 'client' {
syslog_log_facility_nova => $syslog_log_facility_nova,
syslog_log_facility_keystone => $syslog_log_facility_keystone,
log_level => $rabbit_log_level,
debug => $debug,
}
} else { # server


@@ -179,12 +179,13 @@ class openstack::nova::controller (
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_connection,
verbose => $verbose,
debug => $debug,
rabbit_nodes => $rabbit_nodes,
ensure_package => $ensure_package,
api_bind_address => $api_bind_address,
use_syslog => $use_syslog,
syslog_log_facility => $syslog_log_facility,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility,
syslog_log_level => $syslog_log_level,
rabbit_ha_virtual_ip => $rabbit_ha_virtual_ip,
}
} else {
@@ -195,11 +196,12 @@ class openstack::nova::controller (
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_connection,
verbose => $verbose,
debug => $debug,
rabbit_host => $rabbit_connection,
ensure_package => $ensure_package,
api_bind_address => $api_bind_address,
syslog_log_facility => $syslog_log_facility,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility,
syslog_log_level => $syslog_log_level,
use_syslog => $use_syslog,
}
}
@@ -214,10 +216,11 @@ class openstack::nova::controller (
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_connection,
verbose => $verbose,
debug => $debug,
ensure_package => $ensure_package,
api_bind_address => $api_bind_address,
syslog_log_facility => $syslog_log_facility,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility,
syslog_log_level => $syslog_log_level,
use_syslog => $use_syslog,
}
}
@@ -326,7 +329,7 @@ class openstack::nova::controller (
}
# Do not enable it!!!!!
# the metadata service is provided by nova api
# the metadata service is provided by nova api
# while enabled_apis=ec2,osapi_compute,metadata
# and by quantum-metadata-agent on network node as proxy
#


@@ -37,6 +37,9 @@ class openstack::swift::proxy (
$master_swift_proxy_ip = undef,
$collect_exported = false,
$rings = ['account', 'object', 'container'],
$debug = false,
$verbose = true,
$syslog_log_level = 'WARNING',
) {
if !defined(Class['swift']) {
class { 'swift':
@@ -57,6 +60,9 @@ class openstack::swift::proxy (
allow_account_management => $proxy_allow_account_management,
account_autocreate => $proxy_account_autocreate,
package_ensure => $package_ensure,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
}
# configure all of the middlewares


@@ -33,7 +33,9 @@ class openstack::swift::storage_node (
$service_endpoint = '127.0.0.1',
$use_syslog = false,
$syslog_log_facility_cinder = 'LOCAL3',
$syslog_log_level = 'WARNING',
$syslog_log_level = 'WARNING',
$debug = false,
$verbose = true,
# Rabbit details necessary for cinder
$rabbit_nodes = false,
$rabbit_password = 'rabbit_pw',
@@ -68,6 +70,9 @@ class openstack::swift::storage_node (
devices => $storage_mnt_base_dir,
devices_dirs => $storage_devices,
swift_zone => $swift_zone,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
}
validate_string($master_swift_proxy_ip)


@@ -1,5 +1,5 @@
"/var/log/*-all.log" "/var/log/remote/*/*log"
"/var/log/kern.log" "/var/log/debug" "/var/log/daemon.log"
"/var/log/*-all.log" "/var/log/corosync.log" "/var/log/remote/*/*log"
"/var/log/kern.log" "/var/log/debug" "/var/log/syslog" "/var/log/daemon.log"
"/var/log/auth.log" "/var/log/user.log" "/var/log/mail.log"
"/var/log/cron.log" "/var/log/dashboard.log" "/var/log/ha.log"
{


@@ -1,6 +1,8 @@
"/var/log/*-all.log" "/var/log/remote/*/*log"
"/var/log/*-all.log" "/var/log/corosync.log" "/var/log/remote/*/*log"
"/var/log/kern.log" "/var/log/debug" "/var/log/syslog"
"/var/log/dashboard.log" "/var/log/ha.log" "/var/log/quantum/*.log"
"/var/log/nova/*.log" "/var/log/keystone/*.log" "/var/log/glance/*.log"
"/var/log/cinder/*.log"
# This file is used for hourly log rotations, use (min)size options here
{
sharedscripts
@@ -8,7 +10,7 @@
copytruncate
# rotate only if 30M size or bigger
minsize 30M
# also rotate if 300M is exceeded; size should be > minsize
# also rotate if <%= @limitsize %> is exceeded; size should be > minsize
size <%= @limitsize %>
# keep logs for <%= @keep %> rotations
rotate <%= @keep %>
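A rough model of the two thresholds above, under the assumption that `minsize` gates the scheduled (hourly) rotation while `size` forces rotation once the hard cap is passed (the `should_rotate` helper is ours, for illustration only):

```python
def should_rotate(log_size_mb, minsize_mb=30, limitsize_mb=300,
                  scheduled_run=False):
    # size: force rotation once the hard cap is exceeded,
    # regardless of the schedule
    if log_size_mb >= limitsize_mb:
        return True
    # minsize: on a scheduled run, rotate only if the log has
    # grown past the minimum
    return scheduled_run and log_size_mb >= minsize_mb

print(should_rotate(10, scheduled_run=True))    # False: below minsize
print(should_rotate(50, scheduled_run=True))    # True: past minsize
print(should_rotate(400, scheduled_run=False))  # True: past the hard cap
```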


@@ -52,6 +52,14 @@ if $nodes != undef {
}
}
# This parameter specifies the verbosity level of log messages
# in openstack components config.
# Debug sets the DEBUG level and ignores the verbose setting, if any.
# Verbose sets the INFO level.
# If neither debug nor verbose is set, the default level is WARNING.
# Note: if syslog is on, this default level may be configured (for syslog) with the syslog_log_level option.
$verbose = true
$debug = false
### Syslog ###
# Enable error messages reporting to rsyslog. Rsyslog must be installed in this case.
@@ -126,7 +134,7 @@ class node_netconfig (
}
case $::operatingsystem {
'redhat' : {
'redhat' : {
$queue_provider = 'qpid'
$custom_mysql_setup_class = 'pacemaker_mysql'
}
@@ -185,7 +193,7 @@ class os_common {
# should be > 30M
limitsize => '300M',
# remote servers to send logs to
rservers => $rservers,
rservers => $rservers,
# should be true, if client is running at virtual node
virtual => true,
# facilities
@@ -197,16 +205,18 @@ class os_common {
# Rabbit doesn't support syslog directly; this should be >= syslog_log_level,
# otherwise none of rabbit's messages would reach syslog
rabbit_log_level => $syslog_log_level,
# debug mode
debug => $debug ? { 'true' => true, true => true, default=> false },
}
}
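The repeated selector `$debug ? { 'true' => true, true => true, default=> false }` normalizes a value that may arrive either as the string `'true'` or as a real boolean. As a rough model (the `to_bool` name is ours, not the manifests'):

```python
def to_bool(value):
    # matches 'true' (string) and True (boolean); everything else,
    # including 'false', False, and undef/None, maps to False
    return value is True or value == 'true'

print(to_bool('true'))   # True
print(to_bool(True))     # True
print(to_bool('false'))  # False
print(to_bool(None))     # False
```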
#case $role {
# /controller/: { $hostgroup = 'controller' }
# /controller/: { $hostgroup = 'controller' }
# /swift-proxy/: { $hostgroup = 'swift-proxy' }
# /storage/:{ $hostgroup = 'swift-storage' }
# /compute/: { $hostgroup = 'compute' }
# /cinder/: { $hostgroup = 'cinder' }
# default: { $hostgroup = 'generic' }
# default: { $hostgroup = 'generic' }
#}
# if $nagios != 'false' {
@@ -236,11 +246,11 @@ class os_common {
node default {
case $deployment_mode {
"singlenode": {
include "osnailyfacter::cluster_simple"
"singlenode": {
include "osnailyfacter::cluster_simple"
class {'os_common':}
}
"multinode": {
"multinode": {
include osnailyfacter::cluster_simple
class {'os_common':}
}


@@ -8,7 +8,7 @@ if $quantum == 'true'
{
$quantum_hash = parsejson($::quantum_access)
$quantum_params = parsejson($::quantum_parameters)
$novanetwork_params = {}
$novanetwork_params = {}
}
else
@@ -140,14 +140,14 @@ $network_config = {
if !$verbose
if !$verbose
{
$verbose = 'true'
$verbose = 'false'
}
if !$debug
{
$debug = 'true'
$debug = 'false'
}
@@ -173,7 +173,7 @@ $multi_host = true
$manage_volumes = false
$glance_backend = 'swift'
$quantum_netnode_on_cnt = true
$swift_loopback = false
$swift_loopback = false
$mirror_type = 'external'
Exec { logoutput => true }
@@ -183,7 +183,7 @@ Exec { logoutput => true }
class compact_controller (
$quantum_network_node = $quantum_netnode_on_cnt
) {
class {'osnailyfacter::tinyproxy': }
class { 'openstack::controller_ha':
controller_public_addresses => $controller_public_addresses,
@@ -202,8 +202,8 @@ class compact_controller (
num_networks => $num_networks,
network_size => $network_size,
network_config => $network_config,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
queue_provider => $::queue_provider,
qpid_password => $rabbit_hash[password],
qpid_user => $rabbit_hash[user],
@@ -306,7 +306,10 @@ class virtual_ips () {
swift_zone => $swift_zone,
swift_local_net_ip => $storage_address,
master_swift_proxy_ip => $master_swift_proxy_ip,
sync_rings => ! $primary_proxy
sync_rings => ! $primary_proxy,
syslog_log_level => $syslog_log_level,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
}
if $primary_proxy {
ring_devices {'all': storages => $controllers }
@@ -317,7 +320,10 @@ class virtual_ips () {
primary_proxy => $primary_proxy,
controller_node_address => $management_vip,
swift_local_net_ip => $swift_local_net_ip,
master_swift_proxy_ip => $master_swift_proxy_ip
master_swift_proxy_ip => $master_swift_proxy_ip,
syslog_log_level => $syslog_log_level,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
}
#TODO: PUT this configuration stanza into nova class
nova_config { 'DEFAULT/start_guests_on_host_boot': value => $start_guests_on_host_boot }
@@ -337,7 +343,7 @@ class virtual_ips () {
Class[openstack::swift::storage_node] -> Class[openstack::img::cirros]
Class[openstack::swift::proxy] -> Class[openstack::img::cirros]
Service[swift-proxy] -> Class[openstack::img::cirros]
}
if !$quantum
{
@@ -351,7 +357,7 @@ class virtual_ips () {
auth_url => "http://${management_vip}:5000/v2.0/",
authtenant_name => $access_hash[tenant],
}
}
}
}
@@ -379,8 +385,8 @@ class virtual_ips () {
auto_assign_floating_ip => $bool_auto_assign_floating_ip,
glance_api_servers => "${management_vip}:9292",
vncproxy_host => $public_vip,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
cinder_volume_group => "cinder",
vnc_enabled => true,
manage_volumes => $cinder ? { false => $manage_volumes, default =>$is_cinder_node },
@@ -400,6 +406,7 @@ class virtual_ips () {
segment_range => $segment_range,
use_syslog => true,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
nova_rate_limits => $nova_rate_limits,
@@ -440,8 +447,8 @@ class virtual_ips () {
cinder_user_password => $cinder_hash[user_password],
syslog_log_facility => $syslog_log_facility_cinder,
syslog_log_level => $syslog_log_level,
debug => $debug ? { 'true' => 'True', default=>'False' },
verbose => $verbose ? { 'false' => 'False', default=>'True' },
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
use_syslog => true,
}
# class { "::rsyslog::client":


@@ -8,7 +8,7 @@ if $quantum == 'true'
{
$quantum_hash = parsejson($::quantum_access)
$quantum_params = parsejson($::quantum_parameters)
$novanetwork_params = {}
$novanetwork_params = {}
}
else
{
@@ -62,14 +62,14 @@ if !$rabbit_hash[user]
$rabbit_user = $rabbit_hash['user']
if !$verbose
if !$verbose
{
$verbose = 'true'
$verbose = 'false'
}
if !$debug
{
$debug = 'true'
$debug = 'false'
}
if !$swift_partition
@@ -211,8 +211,8 @@ class ha_controller (
num_networks => $num_networks,
network_size => $network_size,
network_config => $network_config,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
auto_assign_floating_ip => $bool_auto_assign_floating_ip,
mysql_root_password => $mysql_hash[root_password],
admin_email => $access_hash[email],
@@ -335,8 +335,8 @@ case $role {
qpid_nodes => [$management_vip],
glance_api_servers => "${management_vip}:9292",
vncproxy_host => $public_vip,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
vnc_enabled => true,
nova_user_password => $nova_hash[user_password],
cache_server_ip => $controller_nodes,
@@ -355,6 +355,7 @@ case $role {
cinder_rate_limits => $::cinder_rate_limits,
use_syslog => $use_syslog,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
nova_rate_limits => $::nova_rate_limits,
@@ -394,7 +395,9 @@ case $role {
qpid_user => $rabbit_hash[user],
qpid_nodes => [$management_vip],
sync_rings => ! $primary_proxy,
syslog_log_level => $syslog_log_level,
syslog_log_level => $syslog_log_level,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
syslog_log_facility_cinder => $syslog_log_facility_cinder,
}
@@ -419,6 +422,9 @@ case $role {
controller_node_address => $management_vip,
swift_local_net_ip => $swift_local_net_ip,
master_swift_proxy_ip => $master_swift_proxy_ip,
syslog_log_level => $syslog_log_level,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
}
}
@@ -440,8 +446,8 @@ case $role {
auth_host => $management_vip,
iscsi_bind_host => $storage_address,
cinder_user_password => $cinder_hash[user_password],
debug => $debug ? { 'true' => 'True', default=>'False' },
verbose => $verbose ? { 'false' => 'False', default=>'True' },
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
syslog_log_facility => $syslog_log_facility_cinder,
syslog_log_level => $syslog_log_level,
use_syslog => true,


@@ -5,7 +5,7 @@ if $quantum == 'true'
{
$quantum_hash = parsejson($::quantum_access)
$quantum_params = parsejson($::quantum_parameters)
$novanetwork_params = {}
$novanetwork_params = {}
}
else
@@ -103,18 +103,16 @@ $quantum_sql_connection = "mysql://${quantum_db_user}:${quantum_db_password}@${
$quantum_metadata_proxy_shared_secret = $quantum_params['metadata_proxy_shared_secret']
$quantum_gre_bind_addr = $::internal_address
if !$verbose
if !$verbose
{
$verbose = 'true'
$verbose = 'false'
}
if !$debug
{
$debug = 'true'
$debug = 'false'
}
case $role {
"controller" : {
include osnailyfacter::test_controller
@@ -133,8 +131,8 @@ if !$debug
num_networks => $num_networks,
network_size => $network_size,
network_config => $network_config,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
auto_assign_floating_ip => $bool_auto_assign_floating_ip,
mysql_root_password => $mysql_hash[root_password],
admin_email => $access_hash[email],
@@ -194,8 +192,8 @@ if !$debug
floating_range => $floating_hash,
fixed_range => $fixed_network_range,
create_networks => $create_networks,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
queue_provider => $queue_provider,
rabbit_password => $rabbit_hash[password],
rabbit_user => $rabbit_hash[user],
@@ -304,10 +302,11 @@ if !$debug
cinder_volume_group => "cinder",
manage_volumes => $cinder ? { false => $manage_volumes, default =>$is_cinder_node },
db_host => $controller_node_address,
verbose => $verbose,
debug => $debug,
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
use_syslog => true,
syslog_log_level => $syslog_log_level,
syslog_log_facility => $syslog_log_facility_nova,
syslog_log_facility_quantum => $syslog_log_facility_quantum,
syslog_log_facility_cinder => $syslog_log_facility_cinder,
state_path => $nova_hash[state_path],
@@ -342,8 +341,8 @@ if !$debug
cinder_user_password => $cinder_hash[user_password],
syslog_log_facility => $syslog_log_facility_cinder,
syslog_log_level => $syslog_log_level,
debug => $debug ? { 'true' => 'True', default=>'False' },
verbose => $verbose ? { 'false' => 'False', default=>'True' },
debug => $debug ? { 'true' => true, true => true, default=> false },
verbose => $verbose ? { 'true' => true, true => true, default=> false },
use_syslog => true,
}
}


@@ -9,9 +9,10 @@ class osnailyfacter::tinyproxy {
}
package{'tinyproxy':} ->
exec{'tinyproxy-init':
command => "/bin/echo Allow $master_ip >> /etc/tinyproxy/tinyproxy.conf;
/sbin/chkconfig tinyproxy on;
command => "/bin/echo Allow $master_ip >> /etc/tinyproxy/tinyproxy.conf;
/sbin/chkconfig tinyproxy on;
/etc/init.d/tinyproxy restart; ",
unless => "/bin/grep -q '^Allow $master_ip' /etc/tinyproxy/tinyproxy.conf",
}
}


@@ -316,22 +316,25 @@ quantum_l3_agent_start() {
fi
clean_up
if ocf_is_true ${OCF_RESKEY_syslog} ; then
L3_SYSLOG=" | logger -t quantum-quantum.agent.l3 "
if ocf_is_true ${OCF_RESKEY_debug} ; then
L3_LOG=" | tee -ia /var/log/quantum/l3.log "
else
L3_LOG=" "
fi
else
L3_SYSLOG=""
if ocf_is_true ${OCF_RESKEY_debug} ; then
L3_LOG=" >> /var/log/quantum/l3.log "
else
L3_LOG=" >> /dev/null "
fi
fi
# FIXME stderr should not be used unless quantum+agents init & OCF would redirect to stderr
# if ocf_is_true ${OCF_RESKEY_syslog} ; then
# Disable logger because we use imfile for log files grabbing to rsyslog
# L3_SYSLOG=" | logger -t quantum-quantum.agent.l3 "
# if ocf_is_true ${OCF_RESKEY_debug} ; then
# L3_LOG=" | tee -ia /var/log/quantum/l3.log "
# else
# L3_LOG=" "
# fi
# else
# L3_SYSLOG=""
# if ocf_is_true ${OCF_RESKEY_debug} ; then
# L3_LOG=" >> /var/log/quantum/l3.log "
# else
# L3_LOG=" >> /dev/null "
# fi
# fi
L3_SYSLOG=""
L3_LOG=" > /dev/null "
# run the actual quantum-l3-agent daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.


@@ -1,7 +1,6 @@
#
# [use_syslog] Whether or not the service should log to syslog. Optional.
# [syslog_log_facility] Facility for syslog, if used. Optional. Note: a duplicating conf option
# is not used; the more powerful rsyslog features are managed via the conf template instead
# [syslog_log_facility] Facility for syslog, if used. Optional.
# [syslog_log_level] logging level for non verbose and non debug mode. Optional.
#
class quantum (
@@ -39,6 +38,7 @@ class quantum (
$auth_tenant = 'services',
$auth_user = 'quantum',
$log_file = '/var/log/quantum/server.log',
$log_dir = '/var/log/quantum',
$use_syslog = false,
$syslog_log_facility = 'LOCAL4',
$syslog_log_level = 'WARNING',
@@ -71,7 +71,7 @@ class quantum (
owner => root,
group => root,
source => "puppet:///modules/quantum/q-agent-cleanup.py",
}
}
file {'quantum-root':
path => '/etc/sudoers.d/quantum-root',
@@ -168,41 +168,58 @@ class quantum (
}
# logging for agents is grabbed from stderr; a workaround for a bug in quantum logging:
# the server gets these parameters from the command line
# FIXME change init.d scripts for q&agents, fix daemon launch commands (CENTOS/RHEL):
# quantum-server:
# daemon --user quantum --pidfile $pidfile "$exec --config-file $config --config-file /etc/$prog/plugin.ini &>>/var/log/quantum/server.log & echo \$!
# quantum-ovs-cleanup:
# daemon --user quantum $exec --config-file /etc/$proj/$proj.conf --config-file $config &>>/var/log/$proj/$plugin.log
# quantum-ovs/metadata/l3/dhcp/-agents:
# daemon --user quantum --pidfile $pidfile "$exec --config-file /etc/$proj/$proj.conf --config-file $config &>>/var/log/$proj/$plugin.log & echo \$! > $pidfile"
quantum_config {
'DEFAULT/log_config': ensure=> absent;
'DEFAULT/log_file': ensure=> absent;
'DEFAULT/log_dir': ensure=> absent;
'DEFAULT/use_syslog': ensure=> absent;
'DEFAULT/use_stderr': value => true;
'DEFAULT/logfile': ensure=> absent;
}
if $use_syslog {
if $use_syslog and !$debug =~ /(?i)(true|yes)/ {
quantum_config {
'DEFAULT/log_dir': ensure=> absent;
'DEFAULT/logdir': ensure=> absent;
'DEFAULT/log_config': value => "/etc/quantum/logging.conf";
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/use_syslog': value=> true;
'DEFAULT/syslog_log_facility': value=> $syslog_log_facility;
}
file { "quantum-logging.conf":
content => template('quantum/logging.conf.erb'),
path => "/etc/quantum/logging.conf",
owner => "root",
group => "root",
mode => 644,
group => "quantum",
mode => 640,
}
file { "quantum-all.log":
path => "/var/log/quantum-all.log",
}
# We must set up logging before starting services under pacemaker
File['quantum-logging.conf'] -> Service<| title == 'quantum-server' |>
File['quantum-logging.conf'] -> Anchor<| title == 'quantum-ovs-agent' |>
File['quantum-logging.conf'] -> Anchor<| title == 'quantum-l3' |>
File['quantum-logging.conf'] -> Anchor<| title == 'quantum-dhcp-agent' |>
} else {
quantum_config {
# logging for agents is grabbed from stderr; a workaround for a bug in quantum logging
'DEFAULT/use_syslog': ensure=> absent;
'DEFAULT/syslog_log_facility': ensure=> absent;
'DEFAULT/log_config': ensure=> absent;
# FIXME stderr should not be used unless quantum+agents init & OCF scripts would be fixed to redirect its output to stderr!
#'DEFAULT/use_stderr': value => true;
'DEFAULT/use_stderr': ensure=> absent;
'DEFAULT/log_dir': value => $log_dir;
}
file { "quantum-logging.conf":
content => template('quantum/logging.conf-nosyslog.erb'),
path => "/etc/quantum/logging.conf",
owner => "root",
group => "root",
mode => 644,
group => "quantum",
mode => 640,
}
}
# We must set up logging before starting services under pacemaker
File['quantum-logging.conf'] -> Service<| title == "$::quantum::params::server_service" |>
File['quantum-logging.conf'] -> Anchor<| title == 'quantum-ovs-agent' |>
File['quantum-logging.conf'] -> Anchor<| title == 'quantum-l3' |>
File['quantum-logging.conf'] -> Anchor<| title == 'quantum-dhcp-agent' |>
File <| title=='/etc/quantum' |> -> File <| title=='quantum-logging.conf' |>
if defined(Anchor['quantum-server-config-done']) {
@@ -211,8 +228,16 @@ class quantum (
$endpoint_quantum_main_configuration = 'quantum-init-done'
}
# FIXME Workaround for FUEL-842: remove explicit --log-config from init scripts because it breaks logging!
# FIXME this hack should be deleted after FUEL-842 has been resolved
exec {'init-dirty-hack':
command => "sed -i 's/\-\-log\-config=\$loggingconf//g' /etc/init.d/quantum-*",
path => ["/sbin", "/bin", "/usr/sbin", "/usr/bin"],
}
Anchor['quantum-init'] ->
Package['quantum'] ->
Exec['init-dirty-hack'] ->
File['/var/cache/quantum'] ->
Quantum_config<||> ->
Quantum_api_config<||> ->


@@ -3,26 +3,23 @@
keys = root, l3agent, ovsagent, dhcpagent, metadata
[handlers]
keys = production,devel, l3agent, ovsagent, dhcpagent, metadata
keys = production,devel,stderr, l3agent, ovsagent, dhcpagent, metadata
<% else -%>
[loggers]
keys = root
# devel is reserved for future usage
[handlers]
keys = production,devel
keys = production,devel,stderr
<% end -%>
[formatters]
keys = normal,debug,default
[logger_root]
level = NOTSET
handlers = production
handlers = production,devel,stderr
propagate = 1
#qualname = quantum
[formatters]
keys = normal,debug,default
[formatter_debug]
format = quantum-%(name)s %(levelname)s: %(module)s %(funcName)s %(message)s
@@ -33,24 +30,48 @@ format = quantum-%(name)s %(levelname)s: %(message)s
[formatter_default]
format=%(asctime)s %(levelname)s: %(module)s %(name)s:%(lineno)d %(funcName)s %(message)s
# Extended logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
# Note: local copy goes to /var/log/quantum-all.log
# logging info to LOG_<%= @syslog_log_facility %> with debug:<%= @debug %> and verbose:<%= @verbose %>
[handler_production]
class = handlers.SysLogHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = ('/dev/log', handlers.SysLogHandler.LOG_<%= @syslog_log_facility %>)
formatter = normal
# TODO find out how it could be useful and how it should be used
[handler_stderr]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stderr,)
[handler_devel]
class = StreamHandler
<% if @debug then -%>
level = DEBUG
formatter = debug
<% elsif @verbose then -%>
level = INFO
formatter = normal
<% else -%>
level = <%= @syslog_log_level %>
formatter = normal
<% end -%>
args = (sys.stdout,)
<% if @debug then -%>


@@ -1,5 +1,5 @@
node default {
$rh_base_channels = "rhel-6-server-rpms rhel-6-server-optional-rpms rhel-lb-for-rhel-6-server-rpms rhel-rs-for-rhel-6-server-rpms rhel-ha-for-rhel-6-server-rpms rhel-server-ost-6-folsom-rpms"
$rh_base_channels = "rhel-6-server-rpms rhel-6-server-optional-rpms rhel-lb-for-rhel-6-server-rpms rhel-rs-for-rhel-6-server-rpms rhel-ha-for-rhel-6-server-rpms"
$rh_openstack_channel = "rhel-server-ost-6-3-rpms"
$numtries = "3"
$sat_base_channels = "rhel-x86_64-server-6 rhel-x86_64-server-optional-6 rhel-x86_64-server-lb-6 rhel-x86_64-server-rs-6 rhel-x86_64-server-ha-6"


@@ -18,7 +18,6 @@ function revert_back_to_centos() {
subscription-manager unregister || :
}
trap revert_back_to_centos EXIT
rhsm_plugins="product-id subscription-manager"
rhn_plugins="rhnplugin"
@@ -28,12 +27,30 @@ for plugin in $rhsm_plugins; do
done
#Register
subscription-manager register "--username=<%= rh_username %>" "--password=<%= rh_password %>" --autosubscribe --force
exitcode=0
rhsmoutput=$(subscription-manager register "--username=<%= rh_username %>" "--password=<%= rh_password %>" --autosubscribe --force 2>&1) || exitcode=$?
exitcode=$?
case $exitcode in
0) echo "Register succeeded"
;;
1) echo "Register succeeded"
;;
*) echo -e "Register failed: $rhsmoutput"
exit $exitcode
;;
esac
#Attach to RHOS product
poolid="$(subscription-manager list --available | grep -A2 "OpenStack" | tail -1 | cut -c15- | tr -d ' \t')"
subscription-manager attach "--pool=$poolid"
trap revert_back_to_centos EXIT
#Set releasever and refresh repos
echo 6Server > /etc/yum/vars/releasever
yum clean expire-cache
#Enable channels
for channel in <%= rh_base_channels %> <%= rh_openstack_channel %>; do
yum-config-manager --enable "$channel" &> /dev/null
@@ -124,17 +141,15 @@ fi
#Download packages
mkdir -p <%= pkgdir %>/repodata <%= pkgdir %>/Packages
rm -f /etc/yum/vars/releasever
yum-config-manager --disable 'nailgun' &> /dev/null
yum-config-manager --disable 'centos' --disable 'extras' --disable 'updates' &> /dev/null
yum-config-manager --disable 'base' &> /dev/null
echo 6Server > /etc/yum/vars/releasever
echo "Building initial cache. This may take several minutes."
yum --releasever=<%= releasever %> makecache
for tries in $(seq 1 <%= numtries %>); do
#Retry if repotrack fails
/usr/local/bin/repotrack -a x86_64,noarch -p "<%= pkgdir %>/Packages" $(cat /etc/nailgun/required-rpms.txt | xargs echo -en) || continue
/usr/local/bin/repotrack -a x86_64,noarch -p "<%= pkgdir %>/Packages" $(cat /etc/nailgun/required-rpms.txt | xargs echo -en) 2>&1 || continue
status=$?
#Purge any corrupt downloaded RPMs
# FIXME: There is a error with a path substitution
@@ -150,7 +165,7 @@ for tries in $(seq 1 <%= numtries %>); do
fi
done
if [ "$status" -ne 0 ]; then
if [ $status -ne 0 ]; then
echo "ERROR: Repotrack did not exit cleanly after <%= numtries %> tries." 1>&2
exit 1
fi
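The repotrack retry loop above (try up to `numtries` times, fail hard if the last attempt still errors) can be modeled as a small sketch; the `run_with_retries` and `flaky` names are ours, for illustration only:

```python
def run_with_retries(action, numtries=3):
    # mirror the shell loop: retry up to numtries times,
    # stopping at the first clean (zero) exit status
    status = 1
    for _ in range(numtries):
        status = action()
        if status == 0:
            return status
    # mirror the final check: abort if no attempt succeeded
    raise RuntimeError("did not exit cleanly after %d tries" % numtries)

attempts = []
def flaky():
    # fails twice, then succeeds, like a transient download error
    attempts.append(1)
    return 0 if len(attempts) >= 3 else 1

print(run_with_retries(flaky))  # 0, after 3 attempts
```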
@@ -177,3 +192,4 @@ umount /mnt/rhel_iso
rpm -e rhel-boot-image-6.4-20130130.0.el6ost.noarch
exit 0


@@ -20,6 +20,7 @@ class rsyslog::client (
$syslog_log_facility_nova = 'LOCAL6',
$syslog_log_facility_keystone = 'LOCAL7',
$log_level = 'NOTICE',
$debug = false,
) inherits rsyslog {
# Fix for udp checksums should be applied if running on virtual node
@@ -79,16 +80,189 @@ if $virtual { include rsyslog::checksum_udp514 }
notify => Class["rsyslog::service"],
}
file { "${rsyslog::params::rsyslog_d}02-ha.conf":
ensure => present,
content => template("${module_name}/02-ha.conf.erb"),
}
file { "${rsyslog::params::rsyslog_d}03-dashboard.conf":
ensure => present,
content => template("${module_name}/03-dashboard.conf.erb"),
}
# openstack syslog-compatible mode would work only for the debug case.
# because of the poor quality of its syslog debug messages, local log conversion is used instead
if $debug =~ /(?i)(true|yes)/ {
::rsyslog::imfile { "10-nova-api_debug" :
file_name => "/var/log/nova/api.log",
file_tag => "nova-api",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-cert_debug" :
file_name => "/var/log/nova/cert.log",
file_tag => "nova-cert",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-consoleauth_debug" :
file_name => "/var/log/nova/consoleauth.log",
file_tag => "nova-consoleauth",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-scheduler_debug" :
file_name => "/var/log/nova/scheduler.log",
file_tag => "nova-scheduler",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-network_debug" :
file_name => "/var/log/nova/network.log",
file_tag => "nova-network",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-compute_debug" :
file_name => "/var/log/nova/compute.log",
file_tag => "nova-compute",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-conductor_debug" :
file_name => "/var/log/nova/conductor.log",
file_tag => "nova-conductor",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "10-nova-objectstore_debug" :
file_name => "/var/log/nova/objectstore.log",
file_tag => "nova-objectstore",
file_facility => $syslog_log_facility_nova,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "20-keystone_debug" :
file_name => "/var/log/keystone/keystone.log",
file_tag => "keystone",
file_facility => $syslog_log_facility_keystone,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "30-cinder-api_debug" :
file_name => "/var/log/cinder/api.log",
file_tag => "cinder-api",
file_facility => $syslog_log_facility_cinder,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "30-cinder-volume_debug" :
file_name => "/var/log/cinder/volume.log",
file_tag => "cinder-volume",
file_facility => $syslog_log_facility_cinder,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "30-cinder-scheduler_debug" :
file_name => "/var/log/cinder/scheduler.log",
file_tag => "cinder-scheduler",
file_facility => $syslog_log_facility_cinder,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "40-glance-api_debug" :
file_name => "/var/log/glance/api.log",
file_tag => "glance-api",
file_facility => $syslog_log_facility_glance,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "40-glance-registry_debug" :
file_name => "/var/log/glance/registry.log",
file_tag => "glance-registry",
file_facility => $syslog_log_facility_glance,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-server_debug" :
file_name => "/var/log/quantum/server.log",
file_tag => "quantum-server",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-ovs-cleanup_debug" :
file_name => "/var/log/quantum/ovs-cleanup.log",
file_tag => "quantum-ovs-cleanup",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-rescheduling_debug" :
file_name => "/var/log/quantum/rescheduling.log",
file_tag => "quantum-rescheduling",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-ovs-agent_debug" :
file_name => "/var/log/quantum/openvswitch-agent.log",
file_tag => "quantum-agent-ovs",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-l3-agent_debug" :
file_name => "/var/log/quantum/l3-agent.log",
file_tag => "quantum-agent-l3",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-dhcp-agent_debug" :
file_name => "/var/log/quantum/dhcp-agent.log",
file_tag => "quantum-agent-dhcp",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-quantum-metadata-agent_debug" :
file_name => "/var/log/quantum/metadata-agent.log",
file_tag => "quantum-agent-metadata",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
# FIXME Workaround for FUEL-843 (HA any)
# FIXME remove after FUEL-843 has been resolved
::rsyslog::imfile { "50-ha-quantum-ovs-agent_debug" :
file_name => "/var/log/quantum/quantum-openvswitch-agent.log",
file_tag => "quantum-agent-ovs",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-ha-quantum-l3-agent_debug" :
file_name => "/var/log/quantum/quantum-l3-agent.log",
file_tag => "quantum-agent-l3",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-ha-quantum-dhcp-agent_debug" :
file_name => "/var/log/quantum/quantum-dhcp-agent.log",
file_tag => "quantum-agent-dhcp",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
::rsyslog::imfile { "50-ha-quantum-metadata-agent_debug" :
file_name => "/var/log/quantum/quantum-metadata-agent.log",
file_tag => "quantum-agent-metadata",
file_facility => $syslog_log_facility_quantum,
file_severity => "DEBUG",
notify => Class["rsyslog::service"],
}
# END fixme
} else { #non debug case
# standard logging configs for syslog client
file { "${rsyslog::params::rsyslog_d}10-nova.conf":
ensure => present,
content => template("${module_name}/10-nova.conf.erb"),
@@ -113,6 +287,18 @@ if $virtual { include rsyslog::checksum_udp514 }
ensure => present,
content => template("${module_name}/50-quantum.conf.erb"),
}
} #end if
file { "${rsyslog::params::rsyslog_d}02-ha.conf":
ensure => present,
content => template("${module_name}/02-ha.conf.erb"),
}
file { "${rsyslog::params::rsyslog_d}03-dashboard.conf":
ensure => present,
content => template("${module_name}/03-dashboard.conf.erb"),
}
file { "${rsyslog::params::rsyslog_d}60-puppet-agent.conf":
content => template("${module_name}/60-puppet-agent.conf.erb"),


@@ -3,13 +3,10 @@
<% if scope.lookupvar('rsyslog::client::log_remote') -%>
# Log to remote syslog server using <%= scope.lookupvar('rsyslog::client::remote_type') %>
# Templates
<% if scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
# Use high precision timestamps (date-rfc3339, 2010-12-05T02:21:41.889482+01:00)
$Template RemoteLog, "<%%PRI%>%TIMEGENERATED:1:32:date-rfc3339% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n"
<% else -%>
# Use traditional timestamps (date-rfc3164, Dec 5 02:21:13)
$Template RemoteLog, "<%%PRI%>%TIMEGENERATED:1:15:date-rfc3164% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n"
<% end -%>
# RFC3164 emulation with long tags (32+)
$Template RemoteLog, "<%%pri%>%timestamp% %hostname% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n"
# RFC5424 emulation would be: "<%%pri%>1 %timestamp:::date-rfc3339% %hostname% %syslogtag% %procid% %msgid% %structured-data% %msg%\n"
# Note: don't use %app-name% because it would be empty in some cases
$ActionFileDefaultTemplate RemoteLog
<% scope.lookupvar('rsyslog::client::rservers_real').each do |rserver| -%>


@@ -20,7 +20,6 @@ $EscapeControlCharactersOnReceive off
# Disk-Assisted Memory Queues, async writes, no escape chars
#
$OMFileASyncWriting on
$SystemLogRateLimitInterval 0 # disable rate limits for rsyslog
$MainMsgQueueType LinkedList
$WorkDirectory <%= scope.lookupvar('rsyslog::params::spool_dir') %>
$MainMsgQueueFileName mainmsgqueue


@@ -8,7 +8,6 @@ $EscapeControlCharactersOnReceive off
# Disk-Assisted Memory Queues, async writes, no escape chars
#
$OMFileASyncWriting on
$SystemLogRateLimitInterval 0 # disable rate limits for rsyslog
$MainMsgQueueType LinkedList
$WorkDirectory <%= scope.lookupvar('rsyslog::params::spool_dir') %>
$MainMsgQueueFileName mainmsgqueue


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
LOCAL0.* -/var/log/ha.log
LOCAL0.* ~


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
LOCAL1.* -/var/log/dashboard.log
LOCAL1.* ~


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
<%= @syslog_log_facility_nova %>.* -/var/log/nova-all.log
<%= @syslog_log_facility_nova %>.* ~


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
<%= @syslog_log_facility_keystone %>.* -/var/log/keystone-all.log
<%= @syslog_log_facility_keystone %>.* ~


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
<%= @syslog_log_facility_cinder %>.* -/var/log/cinder-all.log
<%= @syslog_log_facility_cinder %>.* ~


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
<%= @syslog_log_facility_glance %>.* -/var/log/glance-all.log
<%= @syslog_log_facility_glance %>.* ~


@@ -1,11 +1,4 @@
# managed by puppet
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
<%= @syslog_log_facility_quantum %>.* -/var/log/quantum-all.log
<%= @syslog_log_facility_quantum %>.* ~


@@ -1,10 +1,3 @@
# file is managed by puppet
<% unless @high_precision_timestamps -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
if $programname == 'puppet-agent' then /var/log/puppet/agent.log


@@ -1,12 +1,5 @@
# file is managed by puppet
#
<% unless scope.lookupvar('rsyslog::client::high_precision_timestamps') -%>
#
# Use traditional timestamp format date-rfc3164 (Dec 5 02:21:13).
# To enable high precision timestamps date-rfc3339 (2010-12-05T02:21:41.889482+01:00), comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
<% end -%>
<% if scope.lookupvar('rsyslog::client::log_auth_local') or scope.lookupvar('rsyslog::client::log_local') -%>
# Log auth messages locally


@@ -0,0 +1,45 @@
#
# suffix.rb
#
module Puppet::Parser::Functions
newfunction(:suffix, :type => :rvalue, :doc => <<-EOS
This function applies a suffix to all elements in an array.
*Examples:*
suffix(['a','b','c'], 'p')
Will return: ['ap','bp','cp']
EOS
) do |arguments|
# Technically we support two arguments, but only the first is mandatory ...
raise(Puppet::ParseError, "suffix(): Wrong number of arguments " +
"given (#{arguments.size} for 1)") if arguments.size < 1
array = arguments[0]
unless array.is_a?(Array)
raise Puppet::ParseError, "suffix(): expected first argument to be an Array, got #{array.inspect}"
end
suffix = arguments[1] if arguments[1]
if suffix
unless suffix.is_a? String
raise Puppet::ParseError, "suffix(): expected second argument to be a String, got #{suffix.inspect}"
end
end
# Turn everything into string same as join would do ...
result = array.collect do |i|
i = i.to_s
suffix ? i + suffix : i
end
return result
end
end
# vim: set ts=2 sw=2 et :
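The core transformation inside the function above can be sketched in plain Ruby, outside the Puppet function wrapper (a minimal sketch; the method name and error class are illustrative, not part of the module):

```ruby
# Minimal sketch of the suffix() core: validate the arguments, stringify
# each element (as join would), then append the suffix if one was given.
def suffix(array, sfx = nil)
  raise ArgumentError, "expected an Array, got #{array.inspect}" unless array.is_a?(Array)
  if sfx && !sfx.is_a?(String)
    raise ArgumentError, "expected a String suffix, got #{sfx.inspect}"
  end
  array.collect do |i|
    i = i.to_s
    sfx ? i + sfx : i
  end
end

suffix(['a', 'b', 'c'], 'p')  # => ["ap", "bp", "cp"]
suffix([1, 2])                # => ["1", "2"]  (elements stringified, no suffix)
```

The second example shows why the `to_s` call matters: non-string elements are coerced before concatenation, matching the "same as join would do" comment in the function.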


@@ -0,0 +1,19 @@
#! /usr/bin/env ruby -S rspec
require 'spec_helper'
describe "the suffix function" do
let(:scope) { PuppetlabsSpec::PuppetInternals.scope }
it "should exist" do
Puppet::Parser::Functions.function("suffix").should == "function_suffix"
end
it "should raise a ParseError if there are fewer than 1 arguments" do
lambda { scope.function_suffix([]) }.should( raise_error(Puppet::ParseError))
end
it "should return a suffixed array" do
result = scope.function_suffix([['a','b','c'], 'p'])
result.should(eq(['ap','bp','cp']))
end
end


@@ -45,7 +45,10 @@ class swift::proxy(
$workers = $::processorcount,
$allow_account_management = true,
$account_autocreate = true,
$package_ensure = 'present'
$package_ensure = 'present',
$debug = false,
$verbose = true,
$syslog_log_level = 'WARNING',
) {
include 'swift::params'


@@ -27,6 +27,9 @@ class swift::storage::all(
$container_pipeline = undef,
$account_pipeline = undef,
$export_devices = false,
$debug = false,
$verbose = true,
$syslog_log_level = 'WARNING',
) {
class { 'swift::storage':
@@ -69,6 +72,9 @@ class swift::storage::all(
swift_zone => $swift_zone,
devices => $devices,
storage_local_net_ip => $storage_local_net_ip,
debug => $debug,
verbose => $verbose,
syslog_log_level => $syslog_log_level,
}
swift::storage::server { $account_port:


@@ -20,7 +20,10 @@ define swift::storage::server(
$updater_concurrency = $::processorcount,
$reaper_concurrency = $::processorcount,
# this parameter needs to be specified after type and name
$config_file_path = "${type}-server/${name}.conf"
$config_file_path = "${type}-server/${name}.conf",
$debug = false,
$verbose = true,
$syslog_log_level = 'WARNING',
) {
if (is_array($pipeline)) {
$pipeline_real = $pipeline


@@ -5,12 +5,12 @@ bind_port = <%= bind_port %>
mount_check = <%= mount_check %>
user = <%= user %>
log_facility = LOG_SYSLOG
<% if scope.lookupvar('::debug') then -%>
<% if @debug then -%>
log_level = DEBUG
<% elsif scope.lookupvar('::verbose') then -%>
<% elsif @verbose then -%>
log_level = INFO
<% else -%>
log_level = <%= scope.lookupvar('::syslog_log_level') %>
log_level = <%= @syslog_log_level %>
<% end -%>
log_name = swift-account-server
workers = <%= workers %>


@@ -5,12 +5,12 @@ bind_port = <%= bind_port %>
mount_check = <%= mount_check %>
user = <%= user %>
log_facility = LOG_SYSLOG
<% if scope.lookupvar('::debug') then -%>
<% if @debug then -%>
log_level = DEBUG
<% elsif scope.lookupvar('::verbose') then -%>
<% elsif @verbose then -%>
log_level = INFO
<% else -%>
log_level = <%= scope.lookupvar('::syslog_log_level') %>
log_level = <%= @syslog_log_level %>
<% end -%>
log_name = swift-container-server
workers = <%= workers %>


@@ -5,12 +5,12 @@ bind_port = <%= bind_port %>
mount_check = <%= mount_check %>
user = <%= user %>
log_facility = LOG_SYSLOG
<% if scope.lookupvar('::debug') then -%>
<% if @debug then -%>
log_level = DEBUG
<% elsif scope.lookupvar('::verbose') then -%>
<% elsif @verbose then -%>
log_level = INFO
<% else -%>
log_level = <%= scope.lookupvar('::syslog_log_level') %>
log_level = <%= @syslog_log_level %>
<% end -%>
log_name = swift-object-server
workers = <%= workers %>


@@ -5,12 +5,12 @@ bind_ip = <%= proxy_local_net_ip %>
bind_port = <%= port %>
workers = <%= workers %>
log_facility = LOG_SYSLOG
<% if scope.lookupvar('::debug') then -%>
<% if @debug then -%>
log_level = DEBUG
<% elsif scope.lookupvar('::verbose') then -%>
<% elsif @verbose then -%>
log_level = INFO
<% else -%>
log_level = <%= scope.lookupvar('::syslog_log_level') %>
log_level = <%= @syslog_log_level %>
<% end -%>
log_name = swift-proxy-server
user = swift
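The `debug`/`verbose`/`syslog_log_level` cascade repeated in the four swift ERB templates above can be sketched as a plain Ruby helper (the method name is illustrative; the precedence mirrors the template's `if`/`elsif`/`else` chain):

```ruby
# Precedence used by the swift templates: debug wins over verbose,
# which wins over the explicitly configured syslog log level.
def effective_log_level(debug, verbose, syslog_log_level = 'WARNING')
  if debug
    'DEBUG'
  elsif verbose
    'INFO'
  else
    syslog_log_level
  end
end

effective_log_level(true, true)             # => "DEBUG"  (debug shadows verbose)
effective_log_level(false, true)            # => "INFO"
effective_log_level(false, false, 'ERROR')  # => "ERROR"
```

Note that with the class defaults (`$debug = false`, `$verbose = true`) every service logs at INFO; `$syslog_log_level` only takes effect once `verbose` is also disabled.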


@@ -32,8 +32,8 @@ puppet apply -e "
log_remote => false,
log_local => true,
log_auth_local => true,
rotation => 'daily',
keep => '7',
rotation => 'weekly',
keep => '4',
# should be > 30M
limitsize => '100M',
port => '514',