options:
  openstack-origin:
    default: distro
    type: string
    description: |
      Repository from which to install. May be one of the following:
      distro (default), ppa:somecustom/ppa, a deb url sources entry,
      or a supported Cloud Archive release pocket.
      .
      Supported Cloud Archive sources include: cloud:precise-folsom,
      cloud:precise-folsom/updates, cloud:precise-folsom/staging,
      cloud:precise-folsom/proposed.
      .
      Note that updating this setting to a source that is known to
      provide a later version of OpenStack will trigger a software
      upgrade.
  block-device:
    default: sdb
    type: string
    description: |
      Device to be used to back Swift storage. May be any valid block
      device or a path and size to a local file (/path/to/file.img|$sizeG),
      which will be created and used as a loopback device (for testing
      only). Multiple devices may be specified as a space-separated list
      of devices. If set to "guess", the charm will attempt to format and
      mount all extra block devices (this is currently experimental and
      potentially dangerous).
  overwrite:
    default: "false"
    type: string
    description: |
      If true, the charm will attempt to unmount and overwrite existing
      and in-use block devices (WARNING).
  zone:
    default: 1
    type: int
    description: |
      Swift storage zone to request membership of. Relevant only when the
      swift-proxy charm has been configured for manual zone assignment
      (the default). This should be changed for every service unit.
  object-server-port:
    default: 6000
    type: int
    description: Listening port of the swift-object-server.
  container-server-port:
    default: 6001
    type: int
    description: Listening port of the swift-container-server.
  account-server-port:
    default: 6002
    type: int
    description: Listening port of the swift-account-server.
  worker-multiplier:
    default: 1
    type: int
    description: |
      The CPU multiplier to use when configuring worker processes for the
      account, container and object servers.
  object-server-threads-per-disk:
    default: 4
    type: int
    description: |
      Size of the per-disk thread pool used for performing disk I/O.
      0 means do not use a per-disk thread pool. It is recommended to
      keep this value small, as large values can result in high read
      latencies due to large queue depths. A good starting point is
      4 threads per disk.
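# Illustrative usage sketch (not part of the charm's option definitions):
# a deploy-time configuration file that sets some of the options above.
# The application name 'swift-storage', the file name, and the values
# shown here are assumptions for the example, not charm defaults.
#
#   # swift-storage.yaml
#   swift-storage:
#     openstack-origin: cloud:precise-folsom/updates
#     block-device: /srv/swift.img|2G   # loopback-backed file, testing only
#     zone: 1
#
# The file could be applied at deploy time with:
#   juju deploy swift-storage --config swift-storage.yaml
# or an individual option changed after deployment (Juju 1.x syntax):
#   juju set swift-storage zone=2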