Retire project

Change-Id: Ie07045a0dc78e96522d2daf3913f8adc681fd4ce
Filip Pytloun 2017-01-25 18:22:19 +01:00
parent 036253b2ba
commit 5676d9f2b2
30 changed files with 7 additions and 1600 deletions


@@ -1,6 +0,0 @@
swift formula
=============
2016.8.3 (2016-08-10)
- initial release

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,26 +0,0 @@
DESTDIR=/
SALTENVDIR=/usr/share/salt-formulas/env
RECLASSDIR=/usr/share/salt-formulas/reclass
FORMULANAME=$(shell grep name: metadata.yml|head -1|cut -d : -f 2|grep -Eo '[a-z0-9\-]*')
all:
@echo "make install - Install into DESTDIR"
@echo "make test - Run tests"
@echo "make clean - Cleanup after tests run"
install:
# Formula
[ -d $(DESTDIR)/$(SALTENVDIR) ] || mkdir -p $(DESTDIR)/$(SALTENVDIR)
cp -a $(FORMULANAME) $(DESTDIR)/$(SALTENVDIR)/
[ ! -d _modules ] || cp -a _modules $(DESTDIR)/$(SALTENVDIR)/
[ ! -d _states ] || cp -a _states $(DESTDIR)/$(SALTENVDIR)/ || true
# Metadata
[ -d $(DESTDIR)/$(RECLASSDIR)/service/$(FORMULANAME) ] || mkdir -p $(DESTDIR)/$(RECLASSDIR)/service/$(FORMULANAME)
cp -a metadata/service/* $(DESTDIR)/$(RECLASSDIR)/service/$(FORMULANAME)
test:
[ ! -d tests ] || (cd tests; ./run_tests.sh)
clean:
[ ! -d tests/build ] || rm -rf tests/build
[ ! -d build ] || rm -rf build
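The `FORMULANAME` shell pipeline in the Makefile above (grep, head, cut, grep -Eo) can be mimicked in Python to see what it actually extracts; the `metadata` string below is a sample assumed for illustration, matching this repo's metadata.yml:

```python
import re

# Mimics: grep name: metadata.yml | head -1 | cut -d : -f 2 | grep -Eo '[a-z0-9\-]*'
metadata = 'name: "swift"\nversion: "2016.8.3"\n'  # sample file contents

line = next(l for l in metadata.splitlines() if "name:" in l)  # grep ... | head -1
value = line.split(":", 1)[1]                                  # cut -d : -f 2
name = re.search(r"[a-z0-9\-]+", value).group(0)               # grep -Eo '[a-z0-9\-]*'
print(name)  # → swift
```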


@@ -1,149 +1,9 @@
================
OpenStack Swift
================
Project moved
=============
Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply.
Sample pillars
==============
Swift proxy server
------------------
.. code-block:: yaml

    swift:
      common:
        cache:
          engine: memcached
          members:
          - host: 127.0.0.1
            port: 11211
          - host: 127.0.0.1
            port: 11211
        enabled: true
        version: kilo
        swift_hash_path_suffix: hash
        swift_hash_path_prefix: hash
      proxy:
        version: kilo
        enabled: true
        bind:
          address: 0.0.0.0
          port: 8080
        identity:
          engine: keystone
          host: 127.0.0.1
          port: 35357
          user: swift
          password: pwd
          tenant: service
Swift storage server
--------------------
.. code-block:: yaml

    swift:
      common:
        cache:
          engine: memcached
          members:
          - host: 127.0.0.1
            port: 11211
          - host: 127.0.0.1
            port: 11211
        version: kilo
        enabled: true
        swift_hash_path_suffix: hash
        swift_hash_path_prefix: hash
      object:
        enabled: true
        version: kilo
        bind:
          address: 0.0.0.0
          port: 6000
      container:
        enabled: true
        version: kilo
        allow_versions: true
        bind:
          address: 0.0.0.0
          port: 6001
      account:
        enabled: true
        version: kilo
        bind:
          address: 0.0.0.0
          port: 6002
To enable the object versioning feature:

.. code-block:: yaml

    swift:
      ....
      container:
        ....
        allow_versions: true
        ....
Ring builder
------------
.. code-block:: yaml

    parameters:
      swift:
        ring_builder:
          enabled: true
          rings:
          - name: default
            partition_power: 9
            replicas: 3
            hours: 1
            region: 1
            devices:
            - address: ${_param:storage_node01_address}
              device: vdb
            - address: ${_param:storage_node02_address}
              device: vdc
            - address: ${_param:storage_node03_address}
              device: vdd
          - partition_power: 9
            replicas: 2
            hours: 1
            region: 1
            devices:
            - address: ${_param:storage_node01_address}
              device: vdb
            - address: ${_param:storage_node02_address}
              device: vdc
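For intuition about ``partition_power`` in the rings above: Swift derives an object's partition from an md5 of its path, keeping only the top ``partition_power`` bits, so a ring with ``partition_power: 9`` has 2**9 = 512 partitions. A minimal sketch (not this formula's code; the path is a placeholder):

```python
import hashlib
import struct

def partition_for(path: bytes, part_power: int) -> int:
    # Top `part_power` bits of the first 4 digest bytes select the partition.
    digest = hashlib.md5(path).digest()
    return struct.unpack_from(">I", digest)[0] >> (32 - part_power)

part = partition_for(b"/AUTH_demo/container/object", 9)
assert 0 <= part < 2 ** 9  # 512 partitions for partition_power: 9
```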
Documentation and Bugs
============================
To learn how to deploy OpenStack Salt, consult the documentation available
online at:
https://wiki.openstack.org/wiki/OpenStackSalt
In the unfortunate event that bugs are discovered, they should be reported to
the appropriate bug tracker. If you obtained the software from a 3rd party
operating system vendor, it is often wise to use their own bug tracker for
reporting problems. In all other cases use the master OpenStack bug tracker,
available at:
http://bugs.launchpad.net/openstack-salt
Developers wishing to work on the OpenStack Salt project should always base
their work on the latest formulas code, available from the master GIT
repository at:
https://git.openstack.org/cgit/openstack/salt-formula-swift
Developers should also join the discussion on the IRC list, at:
https://wiki.openstack.org/wiki/Meetings/openstack-salt
This repository, as a part of the openstack-salt project, was moved to join the
rest of the salt-formulas ecosystem.
Github: https://github.com/salt-formulas
Launchpad: https://launchpad.net/salt-formulas
IRC: #salt-formulas @ irc.freenode.net


@@ -1 +0,0 @@
2016.8.3


@@ -1,3 +0,0 @@
name: "swift"
version: "2016.8.3"
source: "https://github.com/github/salt-formula-swift"


@@ -1,29 +0,0 @@
applications:
- swift
parameters:
  swift:
    common:
      enabled: true
      version: ${_param:swift_version}
      swift_hash_path_suffix: ${_param:swift_swift_hash_path_suffix}
      swift_hash_path_prefix: ${_param:swift_swift_hash_path_prefix}
    proxy:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 8080
      cache:
        engine: memcached
        members:
        - host: ${_param:cluster_node01_address}
          port: 11211
        - host: ${_param:cluster_node02_address}
          port: 11211
      identity:
        engine: keystone
        host: ${_param:cluster_vip_address}
        port: 35357
        user: swift
        password: ${_param:keystone_swift_password}
        tenant: service


@@ -1,27 +0,0 @@
applications:
- swift
parameters:
  swift:
    common:
      enabled: true
      version: ${_param:swift_version}
      swift_hash_path_suffix: ${_param:swift_swift_hash_path_suffix}
      swift_hash_path_prefix: ${_param:swift_swift_hash_path_prefix}
    proxy:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 8080
      cache:
        engine: memcached
        members:
        - host: 127.0.0.1
          port: 11211
      identity:
        engine: keystone
        host: ${_param:single_address}
        port: 35357
        user: swift
        password: ${_param:keystone_swift_password}
        tenant: service


@@ -1,27 +0,0 @@
applications:
- swift
parameters:
  swift:
    common:
      enabled: true
      version: ${_param:swift_version}
      swift_hash_path_suffix: ${_param:swift_swift_hash_path_suffix}
      swift_hash_path_prefix: ${_param:swift_swift_hash_path_prefix}
    object:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 6000
    container:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 6001
    account:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 6002


@@ -1,27 +0,0 @@
applications:
- swift
parameters:
  swift:
    common:
      enabled: true
      version: ${_param:swift_version}
      swift_hash_path_suffix: ${_param:swift_swift_hash_path_suffix}
      swift_hash_path_prefix: ${_param:swift_swift_hash_path_prefix}
    object:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 6000
    container:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 6001
    account:
      enabled: true
      version: ${_param:swift_version}
      bind:
        address: 0.0.0.0
        port: 6002


@@ -1 +0,0 @@
python-yaml


@@ -1,24 +0,0 @@
{% from "swift/map.jinja" import account with context %}
{%- if account.enabled %}

swift_account_packages:
  pkg.installed:
  - names: {{ account.pkgs }}

/etc/swift/account-server.conf:
  file.managed:
  - source: salt://swift/files/{{ account.version }}/account-server.conf
  - template: jinja
  - user: swift
  - group: swift
  - mode: 644

swift_account_services:
  service.running:
  - names: {{ account.services }}
  - watch:
    - file: /etc/swift/account-server.conf
    - file: /etc/swift/memcache.conf

{%- endif %}


@@ -1,32 +0,0 @@
{% from "swift/map.jinja" import common with context %}

swift_common_packages:
  pkg.installed:
  - names: {{ common.pkgs }}

/etc/swift:
  file.directory:
  - user: root
  - group: root
  - require:
    - pkg: swift_common_packages

/etc/swift/swift.conf:
  file.managed:
  - source: salt://swift/files/{{ common.version }}/swift.conf
  - template: jinja
  - user: root
  - group: root
  - mode: 644
  - require:
    - file: /etc/swift

/etc/swift/memcache.conf:
  file.managed:
  - source: salt://swift/files/{{ common.version }}/memcache.conf
  - template: jinja
  - user: root
  - group: root
  - mode: 644
  - require:
    - file: /etc/swift


@@ -1,37 +0,0 @@
{% from "swift/map.jinja" import container with context %}
{%- if container.enabled %}

swift_container_packages:
  pkg.installed:
  - names: {{ container.pkgs }}

/etc/swift/container-server.conf:
  file.managed:
  - source: salt://swift/files/{{ container.version }}/container-server.conf
  - template: jinja
  - user: swift
  - group: swift
  - mode: 644
  - require:
    - file: /etc/swift

/etc/swift/container-reconciler.conf:
  file.managed:
  - source: salt://swift/files/{{ container.version }}/container-reconciler.conf
  - template: jinja
  - user: root
  - group: root
  - mode: 644
  - require:
    - file: /etc/swift

swift_container_services:
  service.running:
  - names: {{ container.services }}
  - watch:
    - file: /etc/swift/container-server.conf
    - file: /etc/swift/container-reconciler.conf
    - file: /etc/swift/memcache.conf

{%- endif %}


@@ -1,32 +0,0 @@
{% from "swift/map.jinja" import account with context %}
[DEFAULT]
bind_ip = {{ account.bind.address }}
bind_port = {{ account.bind.port }}
# bind_timeout = 30
# backlog = 4096
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon account-server
[app:account-server]
use = egg:swift#account
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[account-replicator]
[account-auditor]
[account-reaper]
[filter:xprofile]
use = egg:swift#xprofile


@@ -1,52 +0,0 @@
[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
[container-reconciler]
# The reconciler will re-attempt reconciliation if the source object is not
# available up to reclaim_age seconds before it gives up and deletes the entry
# in the queue.
# reclaim_age = 604800
# The cycle time of the daemon
# interval = 30
# Server errors from requests will be retried by default
# request_tries = 3
[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options
[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options


@@ -1,39 +0,0 @@
{% from "swift/map.jinja" import container with context %}
[DEFAULT]
bind_ip = {{ container.bind.address }}
bind_port = {{ container.bind.port }}
# bind_timeout = 30
# backlog = 4096
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon container-server
[app:container-server]
use = egg:swift#container
{%- if container.allow_versions is defined %}
allow_versions = {{ container.allow_versions|lower }}
{%- endif %}
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
[filter:xprofile]
use = egg:swift#xprofile


@@ -1,32 +0,0 @@
{% from "swift/map.jinja" import common with context %}
[memcache]
# You can use this single conf file instead of having memcache_servers set in
# several other conf files under [filter:cache] for example. You can specify
# multiple servers separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211
# (IPv6 addresses must follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
memcache_servers = {%- for member in common.cache.members %}{{ member.host }}:{{ member.port }}{% if not loop.last %},{% endif %}{%- endfor %}
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# Timeout for connection
# connect_timeout = 0.3
# Timeout for pooled connection
# pool_timeout = 1.0
# number of servers to retry on failures getting a pooled connection
# tries = 3
# Timeout for read and writes
# io_timeout = 2.0
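The jinja loop that renders ``memcache_servers`` above simply joins host:port pairs with commas. The same reduction in plain Python (the member list is illustrative):

```python
# Equivalent of the template's:
# {%- for member in common.cache.members %}{{ member.host }}:{{ member.port }}
#   {%- if not loop.last %},{% endif %}{%- endfor %}
members = [
    {"host": "127.0.0.1", "port": 11211},
    {"host": "127.0.0.1", "port": 11212},
]
memcache_servers = ",".join(f"{m['host']}:{m['port']}" for m in members)
print(memcache_servers)  # → 127.0.0.1:11211,127.0.0.1:11212
```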


@@ -1,35 +0,0 @@
{% from "swift/map.jinja" import object with context %}
[DEFAULT]
bind_ip = {{ object.bind.address }}
bind_port = {{ object.bind.port }}
# bind_timeout = 30
# backlog = 4096
user = swift
swift_dir = /etc/swift
devices = /srv/node
[pipeline:main]
pipeline = healthcheck recon object-server
[app:object-server]
use = egg:swift#object
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[object-replicator]
[object-reconstructor]
[object-updater]
[object-auditor]
[filter:xprofile]
use = egg:swift#xprofile


@@ -1,106 +0,0 @@
{% from "swift/map.jinja" import proxy with context %}
[DEFAULT]
bind_ip = {{ proxy.bind.address }}
bind_port = {{ proxy.bind.port }}
swift_dir = /etc/swift
user = swift
workers = {{ proxy.workers }}
log_level = DEBUG
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk ratelimit list-endpoints {% if proxy.identity is defined %}authtoken keystoneauth {% endif %} formpost staticweb container-quotas account-quotas slo dlo proxy-logging proxy-server
[app:proxy-server]
account_autocreate = true
conn_timeout = 20
node_timeout = 120
use = egg:swift#proxy
#[filter:tempauth]
#use = egg:swift#tempauth
#reseller_prefix = TEMPAUTH
#user_admin_admin = admin .admin .reseller_admin
#user_test_tester = testing .admin
#user_test2_tester2 = testing2 .admin
#user_test_tester3 = testing3
#user_test5_tester5 = testing5 service
[filter:crossdomain]
use = egg:swift#crossdomain
[filter:list-endpoints]
use = egg:swift#list_endpoints
{%- if proxy.identity is defined %}
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_url = http://{{ proxy.identity.host }}:35357/
auth_uri = http://{{ proxy.identity.host }}:5000/
tenant_name = {{ proxy.identity.tenant }}
username = {{ proxy.identity.user }}
password = {{ proxy.identity.password }}
delay_auth_decision = true
auth_plugin = password
signing_dir = /var/cache/swift
cache = swift.cache
include_service_catalog = False
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, Member
{%- endif %}
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
[filter:ratelimit]
use = egg:swift#ratelimit
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:staticweb]
use = egg:swift#staticweb
[filter:tempurl]
use = egg:swift#tempurl
[filter:formpost]
use = egg:swift#formpost
[filter:proxy-logging]
reveal_sensitive_prefix = 12
use = egg:swift#proxy_logging
[filter:bulk]
use = egg:swift#bulk
[filter:slo]
use = egg:swift#slo
[filter:dlo]
use = egg:swift#dlo
[filter:container-quotas]
use = egg:swift#container_quotas
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:container_sync]
use = egg:swift#container_sync
{#-
vim: syntax=jinja
-#}


@@ -1,179 +0,0 @@
[swift-hash]
# swift_hash_path_suffix and swift_hash_path_prefix are used as part of the
# hashing algorithm when determining data placement in the cluster.
# These values should remain secret and MUST NOT change
# once a cluster has been deployed.
swift_hash_path_suffix = {{ pillar.swift.common.swift_hash_path_suffix }}
swift_hash_path_prefix = {{ pillar.swift.common.swift_hash_path_prefix }}
# storage policies are defined here and determine various characteristics
# about how objects are stored and treated. Policies are specified by name on
# a per container basis. Names are case-insensitive. The policy index is
# specified in the section header and is used internally. The policy with
# index 0 is always used for legacy containers and can be given a name for use
# in metadata however the ring file name will always be 'object.ring.gz' for
# backwards compatibility. If no policies are defined a policy with index 0
# will be automatically created for backwards compatibility and given the name
# Policy-0. A default policy is used when creating new containers when no
# policy is specified in the request. If no other policies are defined the
# policy with index 0 will be declared the default. If multiple policies are
# defined you must define a policy with index 0 and you must specify a
# default. It is recommended you always define a section for
# storage-policy:0.
#
# A 'policy_type' argument is also supported but is not mandatory. Default
# policy type 'replication' is used when 'policy_type' is unspecified.
{%- for ring in pillar.swift.ring_builder.rings %}
{%- if ring.get("enabled", True) %}
[storage-policy:{{ loop.index0 }}]
name = {{ ring.get('name', "Policy-"~loop.index0) }}
{%- if loop.index0 == 0 %}
default = yes
{%- endif %}
{%- endif %}
{%- endfor %}
# the following section would declare a policy called 'silver', the number of
# replicas will be determined by how the ring is built. In this example the
# 'silver' policy could have a lower or higher # of replicas than the
# 'Policy-0' policy above. The ring filename will be 'object-1.ring.gz'. You
# may only specify one storage policy section as the default. If you changed
# this section to specify 'silver' as the default, when a client created a new
# container w/o a policy specified, it will get the 'silver' policy because
# this config has specified it as the default. However if a legacy container
# (one created with a pre-policy version of swift) is accessed, it is known
# implicitly to be assigned to the policy with index 0 as opposed to the
# current default.
#[storage-policy:1]
#name = silver
#policy_type = replication
# The following declares a storage policy of type 'erasure_coding' which uses
# Erasure Coding for data reliability. The 'erasure_coding' storage policy in
# Swift is available as a "beta". Please refer to Swift documentation for
# details on how the 'erasure_coding' storage policy is implemented.
#
# Swift uses PyECLib, a Python Erasure coding API library, for encode/decode
# operations. Please refer to Swift documentation for details on how to
# install PyECLib.
#
# When defining an EC policy, 'policy_type' needs to be 'erasure_coding' and
# EC configuration parameters 'ec_type', 'ec_num_data_fragments' and
# 'ec_num_parity_fragments' must be specified. 'ec_type' is chosen from the
# list of EC backends supported by PyECLib. The ring configured for the
# storage policy must have its "replica" count configured to
# 'ec_num_data_fragments' + 'ec_num_parity_fragments' - this requirement is
# validated when services start. 'ec_object_segment_size' is the amount of
# data that will be buffered up before feeding a segment into the
# encoder/decoder. More information about these configuration options and
# supported `ec_type` schemes is available in the Swift documentation. Please
# refer to Swift documentation for details on how to configure EC policies.
#
# The example 'deepfreeze10-4' policy defined below is a _sample_
# configuration with 10 'data' and 4 'parity' fragments. 'ec_type'
# defines the Erasure Coding scheme. 'jerasure_rs_vand' (Reed-Solomon
# Vandermonde) is used as an example below.
#[storage-policy:2]
#name = deepfreeze10-4
#policy_type = erasure_coding
#ec_type = jerasure_rs_vand
#ec_num_data_fragments = 10
#ec_num_parity_fragments = 4
#ec_object_segment_size = 1048576
# The swift-constraints section sets the basic constraints on data
# saved in the swift cluster. These constraints are automatically
# published by the proxy server in responses to /info requests.
[swift-constraints]
# max_file_size is the largest "normal" object that can be saved in
# the cluster. This is also the limit on the size of each segment of
# a "large" object when using the large object manifest support.
# This value is set in bytes. Setting it to lower than 1MiB will cause
# some tests to fail. It is STRONGLY recommended to leave this value at
# the default (5 * 2**30 + 2).
#max_file_size = 5368709122
# max_meta_name_length is the max number of bytes in the utf8 encoding
# of the name portion of a metadata header.
#max_meta_name_length = 128
# max_meta_value_length is the max number of bytes in the utf8 encoding
# of a metadata value
#max_meta_value_length = 256
# max_meta_count is the max number of metadata keys that can be stored
# on a single account, container, or object
#max_meta_count = 90
# max_meta_overall_size is the max number of bytes in the utf8 encoding
# of the metadata (keys + values)
#max_meta_overall_size = 4096
# max_header_size is the max number of bytes in the utf8 encoding of each
# header. Using 8192 as default because eventlet use 8192 as max size of
# header line. This value may need to be increased when using identity
# v3 API tokens including more than 7 catalog entries.
# See also include_service_catalog in proxy-server.conf-sample
# (documented in overview_auth.rst)
#max_header_size = 8192
# By default the maximum number of allowed headers depends on the number of max
# allowed metadata settings plus a default value of 32 for regular http
# headers. If for some reason this is not enough (custom middleware for
# example) it can be increased with the extra_header_count constraint.
#extra_header_count = 32
# max_object_name_length is the max number of bytes in the utf8 encoding
# of an object name
#max_object_name_length = 1024
# container_listing_limit is the default (and max) number of items
# returned for a container listing request
#container_listing_limit = 10000
# account_listing_limit is the default (and max) number of items returned
# for an account listing request
#account_listing_limit = 10000
# max_account_name_length is the max number of bytes in the utf8 encoding
# of an account name
#max_account_name_length = 256
# max_container_name_length is the max number of bytes in the utf8 encoding
# of a container name
#max_container_name_length = 256
# By default all REST API calls should use "v1" or "v1.0" as the version string,
# for example "/v1/account". This can be manually overridden to make this
# backward-compatible, in case a different version string has been used before.
# Use a comma-separated list in case of multiple allowed versions, for example
# valid_api_versions = v0,v1,v2
# This is only enforced for account, container and object requests. The allowed
# api versions are by default excluded from /info.
# valid_api_versions = v1,v1.0
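The length constraints above are byte counts of the UTF-8 encoding, not character counts, so multi-byte names hit the limits sooner than their character length suggests. A minimal Python sketch of that check (the limit values mirror the commented defaults above; the helper name is illustrative, not Swift's API):

```python
# Default byte-length limits, mirroring the commented defaults above.
MAX_ACCOUNT_NAME_LENGTH = 256
MAX_CONTAINER_NAME_LENGTH = 256
MAX_OBJECT_NAME_LENGTH = 1024

def fits_utf8_limit(name, limit):
    """Return True if the UTF-8 encoding of name is at most limit bytes."""
    return len(name.encode("utf-8")) <= limit

# 256 ASCII characters encode to 256 bytes: exactly at the limit.
print(fits_utf8_limit("a" * 256, MAX_CONTAINER_NAME_LENGTH))      # True
# 256 two-byte characters encode to 512 bytes: over the limit.
print(fits_utf8_limit("\u00e9" * 256, MAX_CONTAINER_NAME_LENGTH))  # False
```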

@ -1,20 +0,0 @@
include:
{% if pillar.swift.common is defined %}
- swift.common
{% endif %}
{%- if pillar.swift.object is defined %}
- swift.object
{%- endif %}
{%- if pillar.swift.container is defined %}
- swift.container
{%- endif %}
{%- if pillar.swift.account is defined %}
- swift.account
{%- endif %}
{%- if pillar.swift.proxy is defined %}
- swift.proxy
{%- endif %}
{%- if pillar.swift.ring_builder is defined %}
- swift.ring_builder
{%- endif %}

@ -1,73 +0,0 @@
{% set common = salt['grains.filter_by']({
'Debian': {
'pkgs': ['swift']
},
'RedHat': {
'pkgs': ['openstack-swift']
},
}, merge=salt['pillar.get']('swift:common')) %}
{% set proxy = salt['grains.filter_by']({
'Debian': {
'pkgs': ['swift', 'python-swiftclient', 'swift-proxy', 'python-keystone'],
'services': ['swift-proxy'],
'workers': 5,
'bind': {
'address': '0.0.0.0',
'port': '8080'
}
},
'RedHat': {
'pkgs': ['openstack-swift', 'python-swiftclient', 'openstack-swift-proxy', 'python-keystone'],
'services': ['openstack-swift-proxy'],
'workers': 5,
'bind': {
'address': '0.0.0.0',
'port': '8080'
}
},
}, merge=salt['pillar.get']('swift:proxy')) %}
{% set object = salt['grains.filter_by']({
'Debian': {
'pkgs': ['swift', 'swift-object'],
'services': ['swift-object', 'swift-object-auditor', 'swift-object-updater'],
},
'RedHat': {
'pkgs': ['openstack-swift', 'openstack-swift-object'],
    'services': ['openstack-swift-object', 'openstack-swift-object-auditor', 'openstack-swift-object-replicator', 'openstack-swift-object-updater', 'xfsprogs'],
},
}, merge=salt['pillar.get']('swift:object')) %}
{% set container = salt['grains.filter_by']({
'Debian': {
'pkgs': ['swift', 'swift-container'],
'services': ['swift-container-auditor', 'swift-container-updater', 'swift-container-replicator'],
},
'RedHat': {
'pkgs': ['openstack-swift', 'openstack-swift-container'],
'services': ['openstack-swift-container', 'openstack-swift-container-auditor', 'openstack-swift-container-replicator', 'openstack-swift-container-updater']
},
}, merge=salt['pillar.get']('swift:container')) %}
{% set account = salt['grains.filter_by']({
'Debian': {
'pkgs': ['swift', 'swift-account'],
'services': ['swift-account', 'swift-account-auditor', 'swift-account-reaper', 'swift-account-replicator'],
},
'RedHat': {
'pkgs': ['openstack-swift', 'openstack-swift-account'],
'services': ['openstack-swift-account', 'openstack-swift-account-auditor', 'openstack-swift-account-reaper', 'openstack-swift-account-replicator'],
},
}, merge=salt['pillar.get']('swift:account')) %}
{#
'Debian': {
$swift3 = 'swift-plugin-s3'
}
'RedHat': {
$swift3 = 'openstack-swift-plugin-swift3'
}
#}
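map.jinja follows the usual salt-formula pattern: `grains.filter_by` picks per-OS-family defaults, then merges pillar data (e.g. `swift:proxy`) over them so operators can override any key. A rough Python equivalent of that selection-plus-merge (simplified to a shallow merge for illustration; Salt's real implementation merges recursively and handles more cases):

```python
def filter_by(lookup, grain_value, merge=None):
    """Pick the defaults for one OS family, then overlay pillar overrides."""
    result = dict(lookup.get(grain_value, {}))
    for key, value in (merge or {}).items():
        result[key] = value  # shallow merge, for illustration only
    return result

defaults = {
    "Debian": {"pkgs": ["swift", "swift-proxy"], "workers": 5},
    "RedHat": {"pkgs": ["openstack-swift", "openstack-swift-proxy"], "workers": 5},
}

# With a pillar override of swift:proxy -> {'workers': 10} on a Debian minion:
print(filter_by(defaults, "Debian", merge={"workers": 10}))
# {'pkgs': ['swift', 'swift-proxy'], 'workers': 10}
```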

@ -1,25 +0,0 @@
{% from "swift/map.jinja" import object with context %}
{%- if object.enabled %}
swift_object_packages:
pkg.installed:
- names: {{ object.pkgs }}
swift_object_config:
file.managed:
- name: /etc/swift/object-server.conf
- source: salt://swift/files/{{ object.version }}/object-server.conf
- template: jinja
- user: swift
- group: swift
- mode: 644
swift_object_services:
service.running:
- names: {{ object.services }}
- watch:
- file: swift_object_config
- file: /etc/swift/memcache.conf
{%- endif %}

@ -1,25 +0,0 @@
{% from "swift/map.jinja" import proxy with context %}
{%- if proxy.enabled %}
include:
- swift.ring_builder
swift_proxy_packages:
pkg.installed:
- names: {{ proxy.pkgs }}
/etc/swift/proxy-server.conf:
file.managed:
- source: salt://swift/files/{{ proxy.version }}/proxy-server.conf
- template: jinja
- user: swift
- group: swift
- mode: 644
swift_proxy_services:
service.running:
- names: {{ proxy.services }}
- watch:
- file: /etc/swift/proxy-server.conf
- file: /etc/swift/memcache.conf
{%- endif %}

@ -1,115 +0,0 @@
{%- if pillar.swift.ring_builder.enabled %}
include:
- swift.common
{%- for ring in pillar.swift.ring_builder.rings %}
{%- if ring.get("enabled", True) %}
{%- set ring_num = loop.index %}
{%- set ring_num0 = loop.index0 %}
{%- if (ring_num0 == 0 and ring.get('account', True) != False) or ring.get('account', False) %}
{%- set ring_account = True %}
{%- else %}
{%- set ring_account = False %}
{%- endif %}
{%- if (ring_num0 == 0 and ring.get('container', True) != False) or ring.get('container', False) %}
{%- set ring_container = True %}
{%- else %}
{%- set ring_container = False %}
{%- endif %}
{%- if ring_num0 > 0 %}
{%- set object_builder = "/etc/swift/object-"~ring_num0~".builder" %}
{%- set account_builder = "/etc/swift/account-"~ring_num0~".builder" %}
{%- set container_builder = "/etc/swift/container-"~ring_num0~".builder" %}
{%- else %}
{%- set object_builder = "/etc/swift/object.builder" %}
{%- set account_builder = "/etc/swift/account.builder" %}
{%- set container_builder = "/etc/swift/container.builder" %}
{%- endif %}
{%- if ring.get('object', True) %}
swift_ring_object_create_{{ring_num}}:
cmd.run:
- name: swift-ring-builder {{ object_builder }} create {{ ring.partition_power }} {{ ring.replicas }} {{ ring.hours }}
- creates: {{ object_builder }}
- require:
- file: /etc/swift/swift.conf
{%- endif %}
{%- if ring_account %}
swift_ring_account_create:
cmd.run:
- name: swift-ring-builder {{ account_builder }} create {{ ring.partition_power }} {{ ring.replicas }} {{ ring.hours }}
- creates: {{ account_builder }}
- require:
- file: /etc/swift/swift.conf
{%- endif %}
{%- if ring_container %}
swift_ring_container_create:
cmd.run:
- name: swift-ring-builder {{ container_builder }} create {{ ring.partition_power }} {{ ring.replicas }} {{ ring.hours }}
- creates: {{ container_builder }}
- require:
- file: /etc/swift/swift.conf
{%- endif %}
{%- for device in ring.devices %}
{%- if ring.get('object', True) %}
swift_ring_object_{{ring_num}}_{{ device.address }}:
cmd.wait:
- name: swift-ring-builder {{ object_builder }} add r{{ ring.region }}z{{ loop.index }}-{{ device.address }}:{{ device.get("object_port", 6000) }}/{{ device.device }} {{ device.get("weight", 100) }}
- watch:
- cmd: swift_ring_object_create_{{ring_num}}
- watch_in:
- cmd: swift_ring_object_rebalance_{{ring_num}}
{%- endif %}
{%- if ring_account %}
swift_ring_account_{{ device.address }}:
cmd.wait:
- name: swift-ring-builder {{ account_builder }} add r{{ ring.region }}z{{ loop.index }}-{{ device.address }}:{{ device.get("account_port", 6002) }}/{{ device.device }} {{ device.get("weight", 100) }}
- watch:
- cmd: swift_ring_account_create
- watch_in:
- cmd: swift_ring_account_rebalance
{%- endif %}
{%- if ring_container %}
swift_ring_container_{{ device.address }}:
cmd.wait:
- name: swift-ring-builder {{ container_builder }} add r{{ ring.region }}z{{ loop.index }}-{{ device.address }}:{{ device.get("container_port", 6001) }}/{{ device.device }} {{ device.get("weight", 100) }}
- watch:
- cmd: swift_ring_container_create
- watch_in:
- cmd: swift_ring_container_rebalance
{%- endif %}
{%- endfor %}
{%- if ring.get('object', True) %}
swift_ring_object_rebalance_{{ring_num}}:
cmd.wait:
- name: swift-ring-builder {{ object_builder }} rebalance
{%- endif %}
{%- if ring_account %}
swift_ring_account_rebalance:
cmd.wait:
- name: swift-ring-builder {{ account_builder }} rebalance
{%- endif %}
{%- if ring_container %}
swift_ring_container_rebalance:
cmd.wait:
- name: swift-ring-builder {{ container_builder }} rebalance
{%- endif %}
{%- endif %}
{%- endfor %}
{%- endif %}
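Each `cmd.wait` state above adds a device to a builder file as `r<region>z<zone>-<ip>:<port>/<device> <weight>`, with the zone taken from the device's loop index and the port defaulting per ring type (6000 object, 6001 container, 6002 account). A small Python sketch of how that `add` argument is assembled (the function name is illustrative; values mirror the states above):

```python
def ring_add_arg(region, zone, address, port, device, weight=100):
    """Build the swift-ring-builder 'add' device argument, following the
    r{region}z{zone}-{ip}:{port}/{device} {weight} pattern used in the states."""
    return "r{}z{}-{}:{}/{} {}".format(region, zone, address, port, device, weight)

# First device of the object ring in the example pillar:
print(ring_add_arg(1, 1, "192.168.1.1", 6000, "vdb"))
# r1z1-192.168.1.1:6000/vdb 100
```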

@ -1,47 +0,0 @@
swift:
common:
cache:
engine: memcached
members:
- host: 127.0.0.1
port: 11211
- host: 127.0.0.1
port: 11211
enabled: true
version: kilo
swift_hash_path_suffix: myhash
swift_hash_path_prefix: myhash
proxy:
enabled: true
version: kilo
bind:
address: 0.0.0.0
port: 8080
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: swift
password: password
tenant: service
ring_builder:
enabled: true
rings:
- partition_power: 9
replicas: 3
hours: 1
region: 1
account_builder: /etc/swift/account.builder
container_builder: /etc/swift/container.builder
object_builder: /etc/swift/object.builder
devices:
- address: 192.168.1.1
device: vdb
weight: 100
object_port: 6000
container_port: 6001
account_port: 6002
- address: 192.168.1.2
device: vdb
- address: 192.168.1.3
device: vdb

@ -1,41 +0,0 @@
swift:
common:
cache:
engine: memcached
members:
- host: 127.0.0.1
port: 11211
enabled: true
version: kilo
swift_hash_path_suffix: myhash
swift_hash_path_prefix: myhash
proxy:
enabled: true
version: kilo
bind:
address: 0.0.0.0
port: 8080
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: swift
password: password
tenant: service
ring_builder:
enabled: true
rings:
- partition_power: 6
replicas: 1
hours: 1
region: 1
account_builder: /etc/swift/account.builder
container_builder: /etc/swift/container.builder
object_builder: /etc/swift/object.builder
devices:
- address: 127.0.0.1
device: vdb
weight: 100
object_port: 6000
container_port: 6001
account_port: 6002

@ -1,53 +0,0 @@
swift:
common:
cache:
engine: memcached
members:
- host: 127.0.0.1
port: 11211
- host: 127.0.0.1
port: 11211
enabled: true
version: kilo
swift_hash_path_suffix: hashsuffix
    swift_hash_path_prefix: hashprefix
object:
enabled: true
version: kilo
bind:
address: 0.0.0.0
port: 6000
container:
enabled: true
allow_versions: true
version: kilo
bind:
address: 0.0.0.0
port: 6001
account:
enabled: true
version: kilo
bind:
address: 0.0.0.0
port: 6002
ring_builder:
enabled: true
rings:
- partition_power: 9
replicas: 3
hours: 1
region: 1
account_builder: /etc/swift/account.builder
container_builder: /etc/swift/container.builder
object_builder: /etc/swift/object.builder
devices:
- address: 192.168.1.1
device: vdb
weight: 100
object_port: 6000
container_port: 6001
account_port: 6002
- address: 192.168.1.2
device: vdb
- address: 192.168.1.3
device: vdb
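The ring parameters in these pillars map directly onto ring geometry: `partition_power` fixes the partition count at 2**partition_power, and `replicas` should not exceed the number of devices available to hold copies. A hedged sanity check of those relationships (Swift's own ring builder enforces stricter placement rules than this sketch):

```python
def check_ring(partition_power, replicas, devices):
    """Rough sanity check of ring parameters; returns the partition count."""
    partitions = 2 ** partition_power
    if replicas > len(devices):
        raise ValueError("more replicas than devices to place them on")
    return partitions

# The cluster pillar above: 2**9 = 512 partitions, 3 replicas on 3 devices.
print(check_ring(9, 3, ["192.168.1.1/vdb", "192.168.1.2/vdb", "192.168.1.3/vdb"]))
# 512
```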

@ -1,163 +0,0 @@
#!/usr/bin/env bash
set -e
[ -n "$DEBUG" ] && set -x
CURDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
METADATA=${CURDIR}/../metadata.yml
FORMULA_NAME=$(cat $METADATA | python -c "import sys,yaml; print yaml.load(sys.stdin)['name']")
## Overrideable parameters
PILLARDIR=${PILLARDIR:-${CURDIR}/pillar}
BUILDDIR=${BUILDDIR:-${CURDIR}/build}
VENV_DIR=${VENV_DIR:-${BUILDDIR}/virtualenv}
DEPSDIR=${BUILDDIR}/deps
SALT_FILE_DIR=${SALT_FILE_DIR:-${BUILDDIR}/file_root}
SALT_PILLAR_DIR=${SALT_PILLAR_DIR:-${BUILDDIR}/pillar_root}
SALT_CONFIG_DIR=${SALT_CONFIG_DIR:-${BUILDDIR}/salt}
SALT_CACHE_DIR=${SALT_CACHE_DIR:-${SALT_CONFIG_DIR}/cache}
SALT_OPTS="${SALT_OPTS} --retcode-passthrough --local -c ${SALT_CONFIG_DIR} --log-file=/dev/null"
if [ "x${SALT_VERSION}" != "x" ]; then
PIP_SALT_VERSION="==${SALT_VERSION}"
fi
## Functions
log_info() {
echo "[INFO] $*"
}
log_err() {
echo "[ERROR] $*" >&2
}
setup_virtualenv() {
log_info "Setting up Python virtualenv"
virtualenv $VENV_DIR
source ${VENV_DIR}/bin/activate
pip install salt${PIP_SALT_VERSION}
}
setup_pillar() {
[ ! -d ${SALT_PILLAR_DIR} ] && mkdir -p ${SALT_PILLAR_DIR}
echo "base:" > ${SALT_PILLAR_DIR}/top.sls
for pillar in ${PILLARDIR}/*; do
state_name=$(basename ${pillar%.sls})
echo -e " ${state_name}:\n - ${state_name}" >> ${SALT_PILLAR_DIR}/top.sls
done
}
setup_salt() {
[ ! -d ${SALT_FILE_DIR} ] && mkdir -p ${SALT_FILE_DIR}
[ ! -d ${SALT_CONFIG_DIR} ] && mkdir -p ${SALT_CONFIG_DIR}
[ ! -d ${SALT_CACHE_DIR} ] && mkdir -p ${SALT_CACHE_DIR}
echo "base:" > ${SALT_FILE_DIR}/top.sls
for pillar in ${PILLARDIR}/*.sls; do
state_name=$(basename ${pillar%.sls})
echo -e " ${state_name}:\n - ${FORMULA_NAME}" >> ${SALT_FILE_DIR}/top.sls
done
cat << EOF > ${SALT_CONFIG_DIR}/minion
file_client: local
cachedir: ${SALT_CACHE_DIR}
verify_env: False
minion_id_caching: False
file_roots:
base:
- ${SALT_FILE_DIR}
- ${CURDIR}/..
- /usr/share/salt-formulas/env
pillar_roots:
base:
- ${SALT_PILLAR_DIR}
- ${PILLARDIR}
EOF
}
fetch_dependency() {
dep_name="$(echo $1|cut -d : -f 1)"
dep_source="$(echo $1|cut -d : -f 2-)"
dep_root="${DEPSDIR}/$(basename $dep_source .git)"
dep_metadata="${dep_root}/metadata.yml"
[ -d /usr/share/salt-formulas/env/${dep_name} ] && log_info "Dependency $dep_name already present in system-wide salt env" && return 0
[ -d $dep_root ] && log_info "Dependency $dep_name already fetched" && return 0
log_info "Fetching dependency $dep_name"
[ ! -d ${DEPSDIR} ] && mkdir -p ${DEPSDIR}
git clone $dep_source ${DEPSDIR}/$(basename $dep_source .git)
ln -s ${dep_root}/${dep_name} ${SALT_FILE_DIR}/${dep_name}
METADATA="${dep_metadata}" install_dependencies
}
install_dependencies() {
grep -E "^dependencies:" ${METADATA} >/dev/null || return 0
(python - | while read dep; do fetch_dependency "$dep"; done) << EOF
import sys,yaml
for dep in yaml.load(open('${METADATA}', 'r'))['dependencies']:
print '%s:%s' % (dep["name"], dep["source"])
EOF
}
clean() {
log_info "Cleaning up ${BUILDDIR}"
[ -d ${BUILDDIR} ] && rm -rf ${BUILDDIR} || exit 0
}
salt_run() {
    [ -e ${VENV_DIR}/bin/activate ] && source ${VENV_DIR}/bin/activate
salt-call ${SALT_OPTS} $*
}
prepare() {
    [ ! -d ${BUILDDIR} ] && mkdir -p ${BUILDDIR}
which salt-call || setup_virtualenv
setup_pillar
setup_salt
install_dependencies
}
run() {
for pillar in ${PILLARDIR}/*.sls; do
state_name=$(basename ${pillar%.sls})
salt_run --id=${state_name} state.show_sls ${FORMULA_NAME} || (log_err "Execution of ${FORMULA_NAME}.${state_name} failed"; exit 1)
done
}
_atexit() {
RETVAL=$?
trap true INT TERM EXIT
if [ $RETVAL -ne 0 ]; then
log_err "Execution failed"
else
log_info "Execution successful"
fi
return $RETVAL
}
## Main
trap _atexit INT TERM EXIT
case $1 in
clean)
clean
;;
prepare)
prepare
;;
run)
run
;;
*)
prepare
run
;;
esac