OpenStack Swift is a highly available, distributed, eventually consistent object/blob store.
The swift-proxy charm deploys Swift's proxy component. The charm's basic function is to manage zone assignment and enforce replica requirements for the storage nodes. It works in tandem with the swift-storage charm, which is used to add storage nodes.
This section covers common configuration options. See file `config.yaml` for the full list of options, along with their descriptions and default values.
The `zone-assignment` option defines the zone assignment method for storage nodes. Values include 'manual' (the default) and 'auto'.
The `replicas` option stipulates the number of data replicas needed. This value should be equal to the number of zones. The default value is '3'.
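Once deployed, the current value of an option can be inspected, and changed, with standard Juju commands. A brief sketch (the application name assumes the deployment shown below):

    # print the current value of an option
    juju config swift-proxy zone-assignment
    # set an option at run time
    juju config swift-proxy replicas=3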
Let file `swift.yaml` contain the deployment configuration:
    swift-proxy:
      zone-assignment: manual
      replicas: 3
    swift-storage-zone1:
      zone: 1
      block-device: /dev/sdb
    swift-storage-zone2:
      zone: 2
      block-device: /dev/sdb
    swift-storage-zone3:
      zone: 3
      block-device: /dev/sdb
Deploy the proxy and storage nodes:
    juju deploy --config swift.yaml swift-proxy
    juju deploy --config swift.yaml swift-storage swift-storage-zone1
    juju deploy --config swift.yaml swift-storage swift-storage-zone2
    juju deploy --config swift.yaml swift-storage swift-storage-zone3
Add relations between the proxy node and all storage nodes:
    juju add-relation swift-proxy:swift-storage swift-storage-zone1:swift-storage
    juju add-relation swift-proxy:swift-storage swift-storage-zone2:swift-storage
    juju add-relation swift-proxy:swift-storage swift-storage-zone3:swift-storage
This will result in a three-zone cluster, with each zone consisting of a single storage node, thereby satisfying the replica requirement of three.
Storage capacity is increased by adding swift-storage units to a zone. For example, to add two storage nodes to zone '3':
    juju add-unit -n 2 swift-storage-zone3
Note: When scaling out, ensure the candidate machines are equipped with the block devices currently configured for the associated application.
This charm will not balance the storage ring until there are enough storage zones to meet its minimum replica requirement, in this case three.
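One way to follow progress is the charm's workload status. A generic status query against the applications deployed above, for example:

    # show workload status for the proxy and one of the storage zones
    juju status swift-proxy swift-storage-zone3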
Swift may be used as a storage backend for the Glance image service. To do so, add a relation between the swift-proxy and glance applications:
    juju add-relation swift-proxy:object-store glance:object-store
Starting with OpenStack Mitaka, improved telemetry collection support can be achieved by adding a relation to rabbitmq-server:
    juju add-relation swift-proxy rabbitmq-server
Note: Doing the above in a busy Swift deployment can add a significant amount of load to the underlying message bus.
When more than one unit is deployed with the hacluster application, the charm will bring up an HA active/active cluster.
There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
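As an illustration of the virtual IP approach, a minimal sketch follows. The VIP address and the hacluster application name are assumptions made for this example; the `vip` option itself is provided by the charm:

    # set a virtual IP for the API endpoint (address is hypothetical)
    juju config swift-proxy vip=10.0.0.100
    # deploy the hacluster subordinate and relate it to swift-proxy
    juju deploy hacluster swift-proxy-hacluster
    juju add-relation swift-proxy:ha swift-proxy-hacluster:ha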
This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.
Note: Spaces must be configured in the backing cloud prior to deployment.
API endpoints can be bound to distinct network spaces supporting the network separation of public, internal, and admin endpoints.
For example, providing that spaces 'public-space', 'internal-space', and 'admin-space' exist, the deploy command above could look like this:
    juju deploy --config swift-proxy.yaml swift-proxy \
      --bind "public=public-space internal=internal-space admin=admin-space"
Alternatively, configuration can be provided as part of a bundle:
    swift-proxy:
      charm: cs:swift-proxy
      num_units: 1
      bindings:
        public: public-space
        internal: internal-space
        admin: admin-space
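A bundle file containing the stanza above (named `bundle.yaml` here purely for the example) is then deployed in the usual way:

    juju deploy ./bundle.yaml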
Note: Existing swift-proxy units configured with the `os-public-network` option will continue to honour it. Furthermore, this option overrides any space bindings, if set.
This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.
To display action descriptions run `juju actions swift-proxy`.
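Actions are invoked against individual units. For instance, assuming the charm provides a `pause` action (check the output of the command above), a sketch of running it on the first unit would be:

    # run the (assumed) pause action on unit 0 and wait for it to complete
    juju run-action --wait swift-proxy/0 pause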
This feature allows for policy overrides using the `policy.d` directory. This is an advanced feature and the policies that the OpenStack service supports should be clearly and unambiguously understood before trying to override, or add to, the default policies that the service uses. The charm also has some policy defaults. They should also be understood before being overridden.
Caution: It is possible to break the system (for tenants and other services) if policies are incorrectly applied to the service.
Policy overrides are YAML files that contain rules that will add to, or override, existing policy rules in the service. The `policy.d` directory is a place to put the YAML override files. This charm owns the `/etc/swift/policy.d` directory, and as such, any manual changes to it will be overwritten on charm upgrades.
Overrides are provided to the charm using a Juju resource called `policyd-override`. The resource is a ZIP file. This file, say `overrides.zip`, is attached to the charm by:
    juju attach-resource swift-proxy policyd-override=overrides.zip
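The ZIP resource itself can be created beforehand with any standard archiver. Assuming the override rules live in a file named `overrides.yaml` (a hypothetical name), something like:

    # bundle the override file(s) into the resource archive
    zip overrides.zip overrides.yaml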
The policy override is enabled in the charm using:
    juju config swift-proxy use-policyd-override=true
When `use-policyd-override` is `true` the status line of the charm will be prefixed with `PO:` indicating that policies have been overridden. If the installation of the policy override YAML files failed for any reason then the status line will be prefixed with `PO (broken):`. The log file for the charm will indicate the reason. No policy override files are installed if `PO (broken):` is shown. The status line indicates that the overrides are broken, not that the policy for the service has failed. The policy will be the defaults for the charm and service.
Policy overrides on one service may affect the functionality of another service. Therefore, it may be necessary to provide policy overrides for multiple service charms to achieve a consistent set of policies across the OpenStack system. The charms for the other services that may need overrides should be checked to ensure that they support overrides before proceeding.
Please report bugs on Launchpad.
For general charm questions refer to the OpenStack Charm Guide.