a61b5d25de
Each service has a set of Resources which define the fundamental qualities of the remote resources. Because of this, a large portion of the methods on Proxy classes are (or can be) boilerplate.

Add a metaclass (two in a row!) that reads the definition of the Proxy class and looks for Resource classes attached to it. It then checks them to see which operations are allowed by looking at the Resource's allow_ flags. Based on that, it generates the standard methods and docstrings from a template and adds them to the class.

If a method already exists on the class when it is read, the generated method does not overwrite it. Instead, the generated method is attached as ``_generated_{method_name}``. This allows people either to write specific proxy methods and completely ignore the generated method, or to write specialized methods that then delegate to the generated one. Since this is done as a metaclass at class object creation time, things like Sphinx continue to work.

One of the results of this is the addition of a reference to each resource class on the proxy object. I've wanted one of those before (I don't remember right now why I wanted it).

This makes a change to just a few methods/resources in Server as an example of impact. If we like it, we can go through and remove all of the boilerplate methods and leave only the methods that are special. openstack.compute.v2._proxy.Proxy.servers is left in place largely because it has a special docstring. I think we could (and should) update the generation to look at the query parameters for list methods and document the supported parameters in each docstring.

This stems from some thinking we had in shade about being able to generate most of the methods that fit the pattern. It's likely we'll want to do that for shade methods as well - but we should actually be able to piggyback shade methods on top of the proxy methods, or at least use a similar approach to reduce most of the boilerplate in the shade layer.
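The mechanism described above can be sketched in a few lines. This is an illustrative toy only, not the SDK's actual implementation: the Resource base, the fetch method, and the _ProxyMeta name are all assumptions made for the example; only the allow_ flag idea and the ``_generated_{method_name}`` convention come from the change description.

```python
class Resource:
    """Stand-in for an SDK Resource with allow_ capability flags."""
    allow_get = False

    @classmethod
    def fetch(cls, name_or_id):
        # A real Resource would issue a REST call here.
        return "%s %s" % (cls.__name__.lower(), name_or_id)


def _make_get(resource_cls):
    """Build a standard get_<resource> method with a templated docstring."""
    def get_method(self, name_or_id):
        return resource_cls.fetch(name_or_id)
    get_method.__name__ = "get_" + resource_cls.__name__.lower()
    get_method.__doc__ = "Get a single %s." % resource_cls.__name__
    return get_method


class _ProxyMeta(type):
    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        # Scan the class body for attached Resource classes.
        for value in list(namespace.values()):
            if isinstance(value, type) and issubclass(value, Resource):
                if getattr(value, "allow_get", False):
                    method_name = "get_" + value.__name__.lower()
                    generated = _make_get(value)
                    if method_name in namespace:
                        # A hand-written method exists; stash the generated
                        # one as _generated_<name> so it can delegate.
                        setattr(cls, "_generated_" + method_name, generated)
                    else:
                        setattr(cls, method_name, generated)
        return cls


class Server(Resource):
    allow_get = True


class Flavor(Resource):
    allow_get = True


class ComputeProxy(metaclass=_ProxyMeta):
    # Resource classes attached to the proxy; the metaclass scans these.
    server = Server
    flavor = Flavor

    def get_server(self, name_or_id):
        """Hand-written method that delegates to the generated one."""
        return self._generated_get_server(name_or_id)


proxy = ComputeProxy()
print(proxy.get_server("s1"))            # uses the hand-written wrapper
print(proxy.get_flavor("m1"))            # purely generated method
print(ComputeProxy.get_flavor.__doc__)   # templated docstring
```

Because the methods are attached at class creation time, introspection tools (help(), Sphinx autodoc) see them as ordinary methods with real docstrings.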
Change-Id: I9bee095d90cad25acadbf311d4dd8af2e76ba00a
README.rst
openstacksdk
openstacksdk is a client library for building applications to work with OpenStack clouds. The project aims to provide a consistent and complete set of interactions with OpenStack's many services, along with complete documentation, examples, and tools.
It also contains an abstraction interface layer. Clouds can do many things, but there are probably only about 10 of them that most people care about with any regularity. If you want to do complicated things, the per-service oriented portions of the SDK are for you. However, if what you want is to be able to write an application that talks to clouds no matter what crazy choices the deployer has made in an attempt to be more hipster than their self-entitled narcissist peers, then the Cloud Abstraction layer is for you.
A Brief History
openstacksdk started its life as three different libraries: shade, os-client-config and python-openstacksdk.
shade
started its life as some code inside of OpenStack Infra's nodepool project, and as some code inside of the Ansible OpenStack Modules. Ansible had a bunch of different OpenStack related modules, and there was a ton of duplicated code. Eventually, between refactoring that duplication into an internal library, and adding the logic and features that the OpenStack Infra team had developed to run client applications at scale, it turned out that we'd written nine-tenths of what we'd need to have a standalone library.
Because of its background from nodepool, shade contained abstractions to work around deployment differences and is resource oriented rather than service oriented. This allows a user to think about Security Groups without having to know whether Security Groups are provided by Nova or Neutron on a given cloud. On the other hand, as an interface that provides an abstraction, it deviates from the published OpenStack REST API and adds its own opinions, which may get in the way of more advanced users with specific needs.
os-client-config
was a library for collecting client configuration for using an OpenStack cloud in a consistent and comprehensive manner, which introduced the clouds.yaml file for expressing named cloud configurations.
python-openstacksdk
was a library that exposed the OpenStack APIs to developers in a consistent and predictable manner.
After a while it became clear that there was value in both the high-level layer that contains additional business logic and the lower-level SDK that exposes services and their resources faithfully and consistently as Python objects.
Even with both of those layers, it is still beneficial at times to be able to make direct REST calls and to do so with the same properly configured Session from python-requests.
This led to the merge of the three projects.
The original contents of the shade library have been moved into openstack.cloud and os-client-config has been moved into openstack.config. Future releases of shade will provide a thin compatibility layer that subclasses the objects from openstack.cloud and provides different argument defaults where needed for compatibility. Similarly, future releases of os-client-config will provide a compatibility layer shim around openstack.config.
openstack
List servers using objects configured with the clouds.yaml file:

import openstack

# Initialize and turn on debug logging
openstack.enable_logging(debug=True)

# Initialize cloud
conn = openstack.connect(cloud='mordred')

for server in conn.compute.servers():
    print(server.to_dict())
openstack.config
openstack.config will find cloud configuration for as few as one cloud and as many as you want to put in a config file. It will read environment variables and config files, and it also contains some vendor-specific default values so that you don't have to know extra info to use OpenStack:

- If you have a config file, you will get the clouds listed in it
- If you have environment variables, you will get a cloud named envvars
- If you have neither, you will get a cloud named defaults with base defaults
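The lookup order above can be sketched as plain Python. This is an illustration of the precedence only, not the SDK's actual loader logic; the function name and its arguments are ours:

```python
def clouds_available(config_clouds, environ):
    """Sketch of the config lookup order (illustrative, not the SDK's code)."""
    if config_clouds:
        # A config file wins: every named cloud in it is available.
        return sorted(config_clouds)
    if any(key.startswith("OS_") for key in environ):
        # OS_* environment variables become a cloud named 'envvars'.
        return ["envvars"]
    # Nothing configured: fall back to a cloud named 'defaults'.
    return ["defaults"]


print(clouds_available({"mordred": {}}, {}))          # ['mordred']
print(clouds_available({}, {"OS_AUTH_URL": "..."}))   # ['envvars']
print(clouds_available({}, {}))                       # ['defaults']
```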
Sometimes an example is nice.
Create a clouds.yaml
file:
clouds:
  mordred:
    region_name: Dallas
    auth:
      username: 'mordred'
      password: XXXXXXX
      project_name: 'shade'
      auth_url: 'https://identity.example.com'
Please note: openstack.config will look for a file called clouds.yaml in the following locations:

- Current Directory
- ~/.config/openstack
- /etc/openstack
More information at https://developer.openstack.org/sdks/python/openstacksdk/users/config
openstack.cloud
Create a server using objects configured with the clouds.yaml file:

import openstack.cloud

# Initialize and turn on debug logging
openstack.enable_logging(debug=True)

# Initialize connection
# Cloud configs are read with openstack.config
conn = openstack.connect(cloud='mordred')

# Upload an image to the cloud
image = conn.create_image(
    'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

# Find a flavor with at least 512M of RAM
flavor = conn.get_flavor_by_ram(512)

# Boot a server, wait for it to boot, and then do whatever is needed
# to get a public IP for it.
conn.create_server(
    'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)