nova/setup.cfg

[metadata]
name = nova
version = 2014.1
summary = Cloud computing fabric controller
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 2.6

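# Packaging is driven by pbr through the setup hook below.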
[global]
setup-hooks =
    pbr.hooks.setup_hook

[files]
packages =
    nova

[entry_points]
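# Plug-ins for downloading image data directly from Glance-exposed locations,
# keyed by URL scheme and loaded via stevedore; if a direct download fails,
# the image is fetched through Glance as usual.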
nova.image.download.modules =
    file = nova.image.download.file
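# Console scripts become the installed nova-* executables; each maps a command
# name to the main() entry point of the matching nova.cmd module.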
console_scripts =
    nova-all = nova.cmd.all:main
    nova-api = nova.cmd.api:main
    nova-api-ec2 = nova.cmd.api_ec2:main
    nova-api-metadata = nova.cmd.api_metadata:main
    nova-api-os-compute = nova.cmd.api_os_compute:main
    nova-baremetal-deploy-helper = nova.cmd.baremetal_deploy_helper:main
    nova-baremetal-manage = nova.cmd.baremetal_manage:main
    nova-rpc-zmq-receiver = nova.cmd.rpc_zmq_receiver:main
    nova-cells = nova.cmd.cells:main
    nova-cert = nova.cmd.cert:main
    nova-clear-rabbit-queues = nova.cmd.clear_rabbit_queues:main
    nova-compute = nova.cmd.compute:main
    nova-conductor = nova.cmd.conductor:main
    nova-console = nova.cmd.console:main
    nova-consoleauth = nova.cmd.consoleauth:main
    nova-dhcpbridge = nova.cmd.dhcpbridge:main
    nova-manage = nova.cmd.manage:main
    nova-network = nova.cmd.network:main
    nova-novncproxy = nova.cmd.novncproxy:main
    nova-objectstore = nova.cmd.objectstore:main
    nova-rootwrap = oslo.rootwrap.cmd:main
    nova-scheduler = nova.cmd.scheduler:main
    nova-spicehtml5proxy = nova.cmd.spicehtml5proxy:main
    nova-xvpvncproxy = nova.cmd.xvpvncproxy:main
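# Extensions that make up the v3 compute API, loaded from
# nova.api.openstack.compute.plugins.v3 through these entry points.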
nova.api.v3.extensions =
    access_ips = nova.api.openstack.compute.plugins.v3.access_ips:AccessIPs
    admin_actions = nova.api.openstack.compute.plugins.v3.admin_actions:AdminActions
    admin_password = nova.api.openstack.compute.plugins.v3.admin_password:AdminPassword
    agents = nova.api.openstack.compute.plugins.v3.agents:Agents
    aggregates = nova.api.openstack.compute.plugins.v3.aggregates:Aggregates
    attach_interfaces = nova.api.openstack.compute.plugins.v3.attach_interfaces:AttachInterfaces
    availability_zone = nova.api.openstack.compute.plugins.v3.availability_zone:AvailabilityZone
    block_device_mapping = nova.api.openstack.compute.plugins.v3.block_device_mapping:BlockDeviceMapping
    cells = nova.api.openstack.compute.plugins.v3.cells:Cells
    certificates = nova.api.openstack.compute.plugins.v3.certificates:Certificates
    config_drive = nova.api.openstack.compute.plugins.v3.config_drive:ConfigDrive
    console_auth_tokens = nova.api.openstack.compute.plugins.v3.console_auth_tokens:ConsoleAuthTokens
    console_output = nova.api.openstack.compute.plugins.v3.console_output:ConsoleOutput
    consoles = nova.api.openstack.compute.plugins.v3.consoles:Consoles
    deferred_delete = nova.api.openstack.compute.plugins.v3.deferred_delete:DeferredDelete
    evacuate = nova.api.openstack.compute.plugins.v3.evacuate:Evacuate
    extended_availability_zone = nova.api.openstack.compute.plugins.v3.extended_availability_zone:ExtendedAvailabilityZone
    extended_server_attributes = nova.api.openstack.compute.plugins.v3.extended_server_attributes:ExtendedServerAttributes
    extended_status = nova.api.openstack.compute.plugins.v3.extended_status:ExtendedStatus
    extended_volumes = nova.api.openstack.compute.plugins.v3.extended_volumes:ExtendedVolumes
    extension_info = nova.api.openstack.compute.plugins.v3.extension_info:ExtensionInfo
    flavors = nova.api.openstack.compute.plugins.v3.flavors:Flavors
    flavors_extraspecs = nova.api.openstack.compute.plugins.v3.flavors_extraspecs:FlavorsExtraSpecs
    flavor_access = nova.api.openstack.compute.plugins.v3.flavor_access:FlavorAccess
    flavor_rxtx = nova.api.openstack.compute.plugins.v3.flavor_rxtx:FlavorRxtx
    flavor_manage = nova.api.openstack.compute.plugins.v3.flavor_manage:FlavorManage
    hide_server_addresses = nova.api.openstack.compute.plugins.v3.hide_server_addresses:HideServerAddresses
    hosts = nova.api.openstack.compute.plugins.v3.hosts:Hosts
    hypervisors = nova.api.openstack.compute.plugins.v3.hypervisors:Hypervisors
    instance_actions = nova.api.openstack.compute.plugins.v3.instance_actions:InstanceActions
    ips = nova.api.openstack.compute.plugins.v3.ips:IPs
    keypairs = nova.api.openstack.compute.plugins.v3.keypairs:Keypairs
    lock_server = nova.api.openstack.compute.plugins.v3.lock_server:LockServer
    migrate_server = nova.api.openstack.compute.plugins.v3.migrate_server:MigrateServer
    migrations = nova.api.openstack.compute.plugins.v3.migrations:Migrations
    multinic = nova.api.openstack.compute.plugins.v3.multinic:Multinic
    multiple_create = nova.api.openstack.compute.plugins.v3.multiple_create:MultipleCreate
    pause_server = nova.api.openstack.compute.plugins.v3.pause_server:PauseServer
    pci = nova.api.openstack.compute.plugins.v3.pci:Pci
    quota_sets = nova.api.openstack.compute.plugins.v3.quota_sets:QuotaSets
    remote_consoles = nova.api.openstack.compute.plugins.v3.remote_consoles:RemoteConsoles
    rescue = nova.api.openstack.compute.plugins.v3.rescue:Rescue
    scheduler_hints = nova.api.openstack.compute.plugins.v3.scheduler_hints:SchedulerHints
    security_groups = nova.api.openstack.compute.plugins.v3.security_groups:SecurityGroups
    server_diagnostics = nova.api.openstack.compute.plugins.v3.server_diagnostics:ServerDiagnostics
    server_metadata = nova.api.openstack.compute.plugins.v3.server_metadata:ServerMetadata
    server_password = nova.api.openstack.compute.plugins.v3.server_password:ServerPassword
    server_usage = nova.api.openstack.compute.plugins.v3.server_usage:ServerUsage
    servers = nova.api.openstack.compute.plugins.v3.servers:Servers
    services = nova.api.openstack.compute.plugins.v3.services:Services
    shelve = nova.api.openstack.compute.plugins.v3.shelve:Shelve
    suspend_server = nova.api.openstack.compute.plugins.v3.suspend_server:SuspendServer
    versions = nova.api.openstack.compute.plugins.v3.versions:Versions
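# Extensions that also participate in server create, rebuild and update
# request handling.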
nova.api.v3.extensions.server.create =
    access_ips = nova.api.openstack.compute.plugins.v3.access_ips:AccessIPs
    availability_zone = nova.api.openstack.compute.plugins.v3.availability_zone:AvailabilityZone
    block_device_mapping = nova.api.openstack.compute.plugins.v3.block_device_mapping:BlockDeviceMapping
    config_drive = nova.api.openstack.compute.plugins.v3.config_drive:ConfigDrive
    keypairs_create = nova.api.openstack.compute.plugins.v3.keypairs:Keypairs
    multiple_create = nova.api.openstack.compute.plugins.v3.multiple_create:MultipleCreate
    scheduler_hints = nova.api.openstack.compute.plugins.v3.scheduler_hints:SchedulerHints
    security_groups = nova.api.openstack.compute.plugins.v3.security_groups:SecurityGroups
    user_data = nova.api.openstack.compute.plugins.v3.user_data:UserData
nova.api.v3.extensions.server.rebuild =
    access_ips = nova.api.openstack.compute.plugins.v3.access_ips:AccessIPs
nova.api.v3.extensions.server.update =
    access_ips = nova.api.openstack.compute.plugins.v3.access_ips:AccessIPs
# These are for backwards compat with Havana notification_driver configuration values
oslo.messaging.notify.drivers =
    nova.openstack.common.notifier.log_notifier = oslo.messaging.notify._impl_log:LogDriver
    nova.openstack.common.notifier.no_op_notifier = oslo.messaging.notify._impl_noop:NoOpDriver
    nova.openstack.common.notifier.rpc_notifier2 = oslo.messaging.notify._impl_messaging:MessagingV2Driver
    nova.openstack.common.notifier.rpc_notifier = oslo.messaging.notify._impl_messaging:MessagingDriver
    nova.openstack.common.notifier.test_notifier = oslo.messaging.notify._impl_test:TestDriver

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
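
# Babel commands used to extract, update and compile the nova translation catalogs.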
[compile_catalog]
directory = nova/locale
domain = nova

[update_catalog]
domain = nova
output_dir = nova/locale
input_file = nova/locale/nova.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = nova/locale/nova.pot