
[metadata]
name = cinder
summary = OpenStack Block Storage
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = https://docs.openstack.org/cinder/latest/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.5
[global]
setup-hooks =
    pbr.hooks.setup_hook
[files]
data_files =
    etc/cinder =
        etc/cinder/api-paste.ini
        etc/cinder/rootwrap.conf
    etc/cinder/rootwrap.d = etc/cinder/rootwrap.d/*
packages =
    cinder
[entry_points]
cinder.scheduler.filters =
    AvailabilityZoneFilter = cinder.scheduler.filters.availability_zone_filter:AvailabilityZoneFilter
    CapabilitiesFilter = cinder.scheduler.filters.capabilities_filter:CapabilitiesFilter
    CapacityFilter = cinder.scheduler.filters.capacity_filter:CapacityFilter
Add affinity/anti-affinity filters

Cinder has done a good job of hiding the details of storage backends from end
users by using volume types. However, there are use cases where users who
build applications on top of volumes would like to choose where a volume is
created. How can Cinder provide that capability without sacrificing the
simplicity it has maintained? Affinity/anti-affinity is one flexibility we
can provide without exposing backend details.

The term affinity/anti-affinity here describes the relationship between two
sets of volumes in terms of location. To limit the scope, one volume has
affinity with another only when they reside on the same volume backend (this
notion can be extended to volume pools if volume pool support lands in
Cinder); conversely, an 'anti-affinity' relation between two sets of volumes
simply implies they are on different Cinder backends (pools).

These affinity/anti-affinity filters filter Cinder backends based on a hint
specified by the end user. The hint expresses the affinity or anti-affinity
relation between the new volume and existing volume(s), allowing end users
to express requests like 'please put this volume somewhere other than where
Volume-XYZ resides'.

This change adds two new filters to Cinder: SameBackendFilter and
DifferentBackendFilter. These two filters look at the scheduler hint
provided by the end user (via the scheduler hint extension) and filter
backends by checking the 'host' of the old and new volumes to see whether a
backend meets the requirement (being on the same backend as the existing
volume, or not being on the same backend(s) as the existing volume(s)).

For example, if Volume A is on 'backend 1':

To create Volume B on the same backend as A:
  cinder create --hint same_host=VolA-UUID SIZE

To create Volume C on a different backend from A's:
  cinder create --hint different_host=VolA-UUID SIZE

To create Volume D on a backend different from those of both A and C:
  cinder create --hint different_host=VolA-UUID --hint different_host=VolC-UUID SIZE
or:
  cinder create --hint different_host="[VolA-UUID, VolC-UUID]" SIZE

(A simplified sketch of this matching logic follows the filter list below.)

implements bp: affinity-antiaffinity-filter
DocImpact
Change-Id: I19f298bd87b0069c0d1bb133202188d3bf65b770
2014-01-28 14:23:04 +08:00
    DifferentBackendFilter = cinder.scheduler.filters.affinity_filter:DifferentBackendFilter
    DriverFilter = cinder.scheduler.filters.driver_filter:DriverFilter
    JsonFilter = cinder.scheduler.filters.json_filter:JsonFilter
    RetryFilter = cinder.scheduler.filters.ignore_attempted_hosts_filter:IgnoreAttemptedHostsFilter
    SameBackendFilter = cinder.scheduler.filters.affinity_filter:SameBackendFilter
Add an instance-locality filter

Having an instance and an attached volume on the same physical host (i.e.
data locality) can be desirable in some configurations, in order to achieve
high-performance disk I/O.

This patch adds an InstanceLocalityFilter that allows users to request
creation of volumes 'local' to an existing instance, without specifying the
hypervisor's hostname and without any knowledge of the underlying backends.

In order for this to work:
- At least one physical host should run both nova-compute and cinder-volume
  services.
- The Extended Server Attributes extension needs to be active in Nova (it is
  by default), so that the 'OS-EXT-SRV-ATTR:host' property is returned when
  requesting instance info.
- The user making the call needs sufficient rights for the property to be
  returned by Nova. This can be achieved either by changing Nova's
  policy.json (the 'extended_server_attributes' option) or by setting an
  account with privileged rights in the Cinder conf.

For example, if instance 01234567-89ab-cdef is running in a hypervisor on
the physical host 'my-host', to create a 42 GB volume in a backend hosted by
'my-host':
  cinder create --hint local_to_instance=01234567-89ab-cdef 42

Note: it is currently not recommended to allow instance migration for
hypervisors where this hint will be used. If an instance migrates, a
previously locally-created volume will not be migrated automatically with
it; and if the instance migrates while the volume is being scheduled, the
result is unpredictable.

(A simplified sketch follows the filter list below.)

DocImpact: New Cinder scheduler filter
Change-Id: Id428fa2132c1afed424443083645787ee3cb0399
2014-12-05 16:09:10 +01:00
    InstanceLocalityFilter = cinder.scheduler.filters.instance_locality_filter:InstanceLocalityFilter
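The affinity/anti-affinity commit above explains the mechanism in prose; the
following is a minimal, standalone sketch of that matching logic. It is an
illustration only, assuming simplified stand-ins (BackendState, a dict-based
volume lookup, free functions) rather than Cinder's real filter base classes:

    # Simplified model of SameBackendFilter / DifferentBackendFilter:
    # compare the 'host' of the hinted existing volumes against a
    # candidate backend's host.

    class BackendState:
        """Stand-in for the scheduler's per-backend state object."""
        def __init__(self, host):
            self.host = host  # e.g. 'node1@lvm'

    def _hinted_hosts(scheduler_hints, key, volumes_by_uuid):
        """Resolve a same_host/different_host hint to a set of hosts."""
        uuids = scheduler_hints.get(key, [])
        if isinstance(uuids, str):  # a single UUID rather than a list
            uuids = [uuids]
        return {volumes_by_uuid[u]['host']
                for u in uuids if u in volumes_by_uuid}

    def same_backend_passes(backend, scheduler_hints, volumes_by_uuid):
        hosts = _hinted_hosts(scheduler_hints, 'same_host', volumes_by_uuid)
        return not hosts or backend.host in hosts  # no hint: all pass

    def different_backend_passes(backend, scheduler_hints, volumes_by_uuid):
        hosts = _hinted_hosts(scheduler_hints, 'different_host',
                              volumes_by_uuid)
        return backend.host not in hosts

    # Volume A lives on 'node1@lvm'; a different_host hint on A's UUID
    # rejects node1 and accepts node2.
    volumes = {'VolA-UUID': {'host': 'node1@lvm'}}
    hints = {'different_host': ['VolA-UUID']}
    assert not different_backend_passes(BackendState('node1@lvm'), hints, volumes)
    assert different_backend_passes(BackendState('node2@lvm'), hints, volumes)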
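Likewise, a sketch of the instance-locality idea from the commit above: a
backend passes only when it sits on the same physical host as the hinted
instance. fetch_instance_host() is a hypothetical stand-in for the Nova
lookup (which, per the commit message, reads the 'OS-EXT-SRV-ATTR:host'
server attribute); the 'hostname@backend' split follows Cinder's host naming
convention:

    # Simplified model of InstanceLocalityFilter.

    def fetch_instance_host(instance_uuid):
        # Hypothetical: real code asks Nova for the server and reads its
        # 'OS-EXT-SRV-ATTR:host' attribute.
        return {'01234567-89ab-cdef': 'my-host'}.get(instance_uuid)

    def instance_locality_passes(backend_host, scheduler_hints):
        instance_uuid = scheduler_hints.get('local_to_instance')
        if not instance_uuid:
            return True  # no hint given: nothing to enforce
        # Cinder backend hosts look like 'hostname@backend'; compare only
        # the physical-host part with the instance's hypervisor host.
        return backend_host.partition('@')[0] == fetch_instance_host(instance_uuid)

    hint = {'local_to_instance': '01234567-89ab-cdef'}
    assert instance_locality_passes('my-host@lvm', hint)
    assert not instance_locality_passes('other-host@lvm', hint)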
cinder.scheduler.weights =
Add AllocatedCapacityWeigher

AllocatedCapacityWeigher is a weigher that weighs hosts by their allocated
capacity: the sum of the sizes of all volumes on the target host. Its main
purpose is to simulate the behavior of SimpleScheduler, which sorts hosts by
the size of all volumes on them.

In order to keep track of allocated capacity, the host state is updated with
an 'allocated_capacity_gb' attribute, which means each backend must report
one extra stat to the scheduler. Fortunately, the 'allocated' capacity we
are interested in here is purely Cinder-level capacity, so the volume
manager can take on the burden of calculating this value without having to
query backends. The volume manager does the initial calculation in
init_host(), at the point where it already has to query all existing volumes
from the DB for ensure_export(). After the initial calculation, the volume
manager and scheduler keep track of every new request that changes allocated
capacity and make sure this value stays up to date.

!DriverImpact! Cinder driver developers, please read on: this patch contains
a change that might impact volume drivers. The volume manager now uses the
'stats' attribute to save 'allocated_capacity_gb', and this information is
merged with the stats drivers provide as a whole for the scheduler to
consume. If you plan to report any form of allocated space other than the
apparent Cinder-level value (e.g. actual capacity allocated), please choose
a key name other than 'allocated_capacity_gb'; otherwise it will *overwrite*
the value the volume manager has calculated and confuse the scheduler.

Partially implements bp: deprecate-chance-and-simple-schedulers
Change-Id: I306230b8973c2d1ad77bcab14ccde68e997ea816
2013-12-11 21:46:38 +08:00
    AllocatedCapacityWeigher = cinder.scheduler.weights.capacity:AllocatedCapacityWeigher
    CapacityWeigher = cinder.scheduler.weights.capacity:CapacityWeigher
    ChanceWeigher = cinder.scheduler.weights.chance:ChanceWeigher
    GoodnessWeigher = cinder.scheduler.weights.goodness:GoodnessWeigher
    VolumeNumberWeigher = cinder.scheduler.weights.volume_number:VolumeNumberWeigher
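To make the AllocatedCapacityWeigher description concrete, here is a toy
sketch of how such a weigher ranks backends. The class layout is
illustrative rather than Cinder's real weigher API, and the negative default
multiplier is an assumption consistent with the spreading behavior described
above:

    # Toy model: weigh backends by allocated capacity. A negative
    # multiplier favors the least-allocated backend, simulating
    # SimpleScheduler's sort-by-allocated-size behavior.

    class AllocatedCapacityWeigher:
        multiplier = -1.0  # assumed default; spreads volumes out

        def weigh(self, backend_stats):
            # 'allocated_capacity_gb' is the extra stat the volume manager
            # computes: the sum of sizes of all volumes on the backend.
            return self.multiplier * backend_stats['allocated_capacity_gb']

    backends = {
        'node1@lvm': {'allocated_capacity_gb': 500},
        'node2@lvm': {'allocated_capacity_gb': 120},
    }
    w = AllocatedCapacityWeigher()
    best = max(backends, key=lambda name: w.weigh(backends[name]))
    print(best)  # node2@lvm: least allocated capacity wins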
Dynamically create cinder.conf.sample

As it stands, the opts.py file that is passed into oslo-config-generator is
not generated dynamically, and the old way of generating cinder.conf.sample
depends on oslo-incubator, which Cinder is trying to move away from.
oslo-config-generator works differently from oslo-incubator, so a number of
changes had to be made for this switch.

This patch adds a config directory to Cinder containing two files:
- generate_cinder_opts.py, which takes the results of a grep command to
  create the opts.py file passed into oslo-config-generator.
- cinder.conf, the new configuration for oslo-config-generator. The file
  lives inside the config directory to be consistent with other projects.

Some changes were made to generate_sample.sh in order to give the base and
target directories to the generate_cinder_opts.py program.

tox.ini was edited to remove the checkonly option, because all that needs to
happen in check_uptodate.sh is a check that cinder.conf.sample is actually
generated without issues. All options were removed from check_uptodate.sh
because they were unnecessary given the new, simpler way of generating
cinder.conf.sample.

setup.cfg was also edited to add the information oslo-config-generator
needs to run.

Co-Authored-By: Jay Bryant <jsbryant@us.ibm.com>
Co-Authored-By: Jacob Gregor <jgregor@us.ibm.com>
Change-Id: I643dbe5675ae9280e204f691781e617266f570d5
Closes-Bug: 1473768
Closes-Bug: 1437904
Closes-Bug: 1381563
2015-08-13 10:17:36 -05:00
oslo.config.opts =
    cinder = cinder.opts:list_opts
oslo.config.opts.defaults =
    cinder = cinder.common.config:set_middleware_defaults
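The oslo.config.opts entry point above is what oslo-config-generator calls
to discover Cinder's options when building cinder.conf.sample. The contract
is a function returning (group, options) pairs; a minimal sketch, with an
invented option for illustration:

    # Shape of the function behind 'cinder = cinder.opts:list_opts'.
    from oslo_config import cfg

    _example_opts = [
        cfg.StrOpt('example_backend_name',
                   default='lvm',
                   help='Illustrative option, not a real Cinder setting.'),
    ]

    def list_opts():
        # None means the [DEFAULT] group; a string names a custom group.
        return [(None, _example_opts)]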
oslo.policy.enforcer =
    cinder = cinder.policy:get_enforcer
oslo.policy.policies =
    # The sample policies will be ordered by entry point and then by the
    # list returned from that entry point. If more control is desired,
    # split out each list_rules method into a separate entry point rather
    # than using the aggregate method.
    cinder = cinder.policies:list_rules
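Similarly, the oslo.policy.policies entry point is what the enforcer and the
sample-policy generator call to collect rule defaults. A minimal sketch of
the list_rules contract, with an invented rule name and check string:

    # Shape of the function behind 'cinder = cinder.policies:list_rules'.
    from oslo_policy import policy

    def list_rules():
        return [
            policy.RuleDefault(
                name='example:get_all',
                check_str='rule:admin_api',
                description='Illustrative rule, not a real Cinder policy.'),
        ]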
console_scripts =
    cinder-api = cinder.cmd.api:main
    cinder-backup = cinder.cmd.backup:main
    cinder-manage = cinder.cmd.manage:main
    cinder-rootwrap = oslo_rootwrap.cmd:main
    cinder-rtstool = cinder.cmd.rtstool:main
    cinder-scheduler = cinder.cmd.scheduler:main
    cinder-volume = cinder.cmd.volume:main
    cinder-volume-usage-audit = cinder.cmd.volume_usage_audit:main
wsgi_scripts =
    cinder-wsgi = cinder.wsgi.wsgi:initialize_application
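Each console_scripts line above makes setuptools (via pbr) generate an
executable that imports the named module and calls the named function, while
a wsgi_scripts target must instead hand back a WSGI application. A sketch of
both contracts, with placeholder bodies:

    # What 'cinder-scheduler = cinder.cmd.scheduler:main' resolves to: an
    # ordinary function invoked by the generated 'cinder-scheduler' script.
    def main():
        print('service starting...')

    # What 'cinder-wsgi = cinder.wsgi.wsgi:initialize_application' resolves
    # to: a function that returns a WSGI callable.
    def initialize_application():
        def app(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'OK\n']
        return app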
Port to oslo.messaging

The oslo.messaging library takes the existing RPC code from oslo and wraps
it in a sane API with well-defined semantics, around which we can make a
commitment to retain compatibility in the future.

The patch is large, but the changes can be summarized as:

* oslo.messaging>=1.3.0a4 is required; a proper 1.3.0 release will be pushed
  before the icehouse release candidates.
* The new rpc module has init() and cleanup() methods which manage the
  global oslo.messaging transport state. The TRANSPORT and NOTIFIER globals
  are conceptually similar to the current RPCIMPL global, except we're free
  to create and use alternate Transport objects in e.g. the cells code.
* The rpc.get_{client,server,notifier}() methods are just helpers which wrap
  the global messaging state, specify serializers, and specify the use of
  the eventlet executor.
* In oslo.messaging, a request context is expected to be a dict, so we add a
  RequestContextSerializer which can serialize to and from dicts using
  RequestContext.{to,from}_dict().
* The allowed_rpc_exception_modules configuration option is replaced by an
  allowed_remote_exmods get_transport() parameter. This is not something
  that users ever need to configure, but it is something each project using
  oslo.messaging needs to be able to customize.
* We maintain a global NOTIFIER object and create specializations of it with
  specific publisher IDs in order to avoid notification-driver loading
  overhead.
* rpc.py contains transport aliases for backwards-compatibility purposes;
  setup.cfg also contains notification driver aliases for backwards compat.
* The messaging options moved around in cinder.conf.sample because the
  options are advertised via an oslo.config.opts entry point and picked up
  by the generator.
* We use messaging.ConfFixture in tests to override oslo.messaging config
  options, rather than making assumptions about the options registered by
  the library.

Implements blueprint: oslo-messaging
Change-Id: Ib912809428d92e788558439e2d85b51272ebefdd
2014-02-07 12:20:44 +01:00
# These are for backwards compat with Havana notification_driver configuration values
oslo_messaging.notify.drivers =
    cinder.openstack.common.notifier.log_notifier = oslo_messaging.notify._impl_log:LogDriver
    cinder.openstack.common.notifier.no_op_notifier = oslo_messaging.notify._impl_noop:NoOpDriver
    cinder.openstack.common.notifier.rpc_notifier2 = oslo_messaging.notify.messaging:MessagingV2Driver
    cinder.openstack.common.notifier.rpc_notifier = oslo_messaging.notify.messaging:MessagingDriver
    cinder.openstack.common.notifier.test_notifier = oslo_messaging.notify._impl_test:TestDriver
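The oslo.messaging commit above describes thin rpc helpers wrapping a global
transport. A hedged sketch of that client-side pattern; the topic and method
names are invented, and only the oslo.messaging calls themselves are real
API (newer releases spell the transport constructor get_rpc_transport; the
commit-era name was get_transport):

    from oslo_config import cfg
    import oslo_messaging as messaging

    TRANSPORT = None

    def init(conf):
        # Manage the global transport state, as the commit describes.
        global TRANSPORT
        TRANSPORT = messaging.get_rpc_transport(conf)

    def get_client(topic, version='3.0'):
        # Helper handing out a client bound to a Target.
        target = messaging.Target(topic=topic, version=version)
        return messaging.RPCClient(TRANSPORT, target)

    # Illustrative usage (requires a configured transport_url):
    # init(cfg.CONF)
    # client = get_client('cinder-scheduler')
    # client.prepare().cast({}, 'example_method', volume_id='...')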
Move to the oslo.middleware library

This patch moves Cinder to using oslo.middleware, updates us to use the
oslo_middleware namespace, and syncs the latest middleware code from
oslo-incubator to support grenade jobs.

The details of the middleware sync from oslo-incubator are as follows:

Current HEAD in OSLO:
---------------------
commit e589dde0721a0a67e4030813e582afec6e70d042
Date: Wed Feb 18 03:08:12 2015 +0000
Merge "Have a little fun with release notes"

Changes merged with this patch:
---------------------
__init__.py
  4ffc4c87 - Add middleware.request_id shim for Kilo
  4504e4f4 - Remove middleware
catch_errors.py
  a01a8527 - Use oslo_middleware instead of deprecated oslo.middleware
  ce8f8fa4 - Add middleware.catch_errors shim for Kilo
  4504e4f4 - Remove middleware
  5d40e143 - Remove code that moved to oslo.i18n
  76183592 - add deprecation note to middleware
  463e6916 - remove oslo log from middleware
  fcf517d7 - Update oslo log messages with translation domains
request_id.py
  a01a8527 - Use oslo_middleware instead of deprecated oslo.middleware
  66d8d613 - Fix oslo.middleware deprecation error
  4ffc4c87 - Add middleware.request_id shim for Kilo
  4504e4f4 - Remove middleware
  76183592 - add deprecation note to middleware
  d7bd9dc3 - Don't store the request ID value in middleware as class variable

Some notes on this change: it is based on the change made in Nova
(https://review.openstack.org/#/c/130771) and is the recommended method for
cleaning up the unused portions of middleware from oslo-incubator, moving to
the oslo.middleware library, and not breaking grenade in the gate.

Change-Id: Ia99ab479cb8ef63a0db1a1208cc2501abba6132c
2015-03-10 20:15:00 -05:00
# These are for backwards compatibility with Juno middleware configurations
oslo_middleware =
    cinder.openstack.common.middleware.request_id = oslo_middleware.request_id
cinder.database.migration_backend =
    sqlalchemy = oslo_db.sqlalchemy.migration
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[compile_catalog]
directory = cinder/locale
domain = cinder
[update_catalog]
domain = cinder
output_dir = cinder/locale
input_file = cinder/locale/cinder.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = cinder/locale/cinder.pot