Add a few more docs

Change-Id: I7fa70c3b89a8246662a654b379dd16d6a7a6b873
Monty Taylor 2016-11-08 09:15:07 -06:00
parent 7ee4290eb6
commit 5f8ea0588a
8 changed files with 146 additions and 12 deletions


@ -12,6 +12,6 @@ submitted for review via the Gerrit tool:
Pull requests submitted through GitHub will be ignored.
-Bugs should be filed on Launchpad, not GitHub:
+Bugs should be filed on Storyboard, not GitHub:

-  https://bugs.launchpad.net/oaktree
+  https://storyboard.openstack.org/#!/project/855


@ -23,7 +23,6 @@ sys.path.insert(0, os.path.abspath('../..'))
extensions = [
    'sphinx.ext.autodoc',
    #'sphinx.ext.intersphinx',
    'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy

doc/source/design.rst Normal file

@ -0,0 +1,39 @@
==============
Oaktree Design
==============

Once 1.0.0 is released, oaktree pledges to never break backwards compatibility.

Oaktree is intended to be safe for deployers to run CD from master. In fact,
a deployer running a Kilo OpenStack should be able to install the tip of
master of oaktree and have everything work perfectly fine.

Oaktree must be simple to install and operate. A single-node install with no
shared caching or locking is likely fine for most smaller clouds. For larger
clouds, shared caching and locking are essential for scaling out. Both must be
supported, and simple.

Oaktree is not pluggable.

Oaktree does not allow selectively enabling or disabling features or parts of
its API.

Oaktree should be runnable by an end user pointed at a local clouds.yaml file.

Oaktree should be able to talk to other oaktrees.

Oaktree users should never need to know any information about the cloud other
than the address of the oaktree endpoint. Cloud-specific information the
user needs to know must be exposed via a capabilities API. For instance, in
order for a user to upload an image to a cloud, the user must know what format
the cloud requires the image to be in. The user must be able to ask oaktree
what image format(s) the cloud accepts.
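
A minimal sketch of that idea, using plain Python stand-ins rather than the
real (not yet designed) protobuf messages - the class and field names here
are purely illustrative::

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ImageCapabilities:
        # Formats this cloud's image service accepts, e.g. ['qcow2'] or ['vhd'].
        supported_formats: List[str] = field(default_factory=list)

    def pick_upload_format(caps, available):
        """Choose a format the cloud accepts from the formats we can produce."""
        for fmt in available:
            if fmt in caps.supported_formats:
                return fmt
        raise ValueError('no mutually supported image format')

    # The caller never hard-codes per-cloud knowledge; it asks oaktree.
    caps = ImageCapabilities(supported_formats=['qcow2'])
    print(pick_upload_format(caps, ['raw', 'qcow2']))  # -> 'qcow2'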

Data returned from oaktree should be normalized such that it is consistent
no matter which drivers the cloud in question has chosen. This work is done in
shade, but it shapes the design of the protobuf messages.

All objects in oaktree should have a Location. A Location defines the cloud,
the region, the zone and the project that contain the object. For objects
that exist at a region level and not a zone level, like flavors and images,
zone will be null. For objects that exist at a cloud level, region will be
null.
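
A rough Python analogue of the Location described above (the real thing would
be a protobuf message; the field values below are invented)::

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Location:
        cloud: str
        project: str
        region: Optional[str] = None  # null for cloud-level objects
        zone: Optional[str] = None    # null for region-level objects like flavors

    # A flavor exists at the region level, so zone stays None.
    flavor_loc = Location(cloud='example-cloud', project='demo', region='region-one')

    # A server is zonal, so all four fields are set.
    server_loc = Location(cloud='example-cloud', project='demo',
                          region='region-one', zone='nova')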

doc/source/faq.rst Normal file

@ -0,0 +1,51 @@
==========================
Frequently Asked Questions
==========================
Why gRPC and not REST?
----------------------

There are three main reasons.

We already have REST APIs. oaktree is not intended to replace them, but to
supplement them to grease the 80% case that can be interoperable.

gRPC comes out of the gate with direct support for a pile of languages, so
supporting our non-Python friends is straightforward.

A TON of time is spent in shade polling OpenStack for results. That may not
sound like a problem - but when you spin up thousands of VMs a day like Infra
does, the polling becomes a major engineering challenge. gRPC operates over
HTTP/2 and has support for bi-directional channels - which means you can just
have a function notify you when something is done. That's a win for everyone.
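
To make the contrast concrete, here is a hedged sketch - ``get_server`` and
``watch_server`` are stand-ins for a REST poll and a hypothetical
server-streaming gRPC stub method, not real APIs::

    import time

    # Polling: keep asking the cloud whether the server is done yet.
    def wait_for_server_polling(get_server, server_id, interval=5):
        while True:
            server = get_server(server_id)
            if server['status'] in ('ACTIVE', 'ERROR'):
                return server
            time.sleep(interval)  # thousands of these loops a day add up

    # Streaming: status events are pushed over the HTTP/2 channel, so we
    # simply consume them until a terminal state arrives.
    def wait_for_server_streaming(watch_server, server_id):
        for event in watch_server(server_id):
            if event['status'] in ('ACTIVE', 'ERROR'):
                return event
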
Why write it in Python rather than XXX?
---------------------------------------

The hard part of this isn't the gRPC API - it's the business logic that's in
the shade library. If we wrote oaktree from scratch in C++ (because hello,
super-high-performance gRPC backend!) we'd be faced with the task of
re-implementing all of the shade business logic in C++. If you haven't looked,
there is a LOT.

shade is what Infra uses for nodepool. It already has copious features in it
to deal with extremely high scale - including configurable caching, batched
list update operations to prevent thundering herds, and well-exercised
multi-threaded support.

The interesting part also isn't the server (it's a simple proxy layer) - it's
the clients. THOSE definitely want much love in the different languages. The
infrastructure is in place for Python, C++ and Go. Ruby, JavaScript and C#
should follow as soon as possible.

Can I add support for my project?
---------------------------------

Yes. It has to be added to shade first, which accepts patches for anything
that can be tested consistently in a devstack job. We require all new features
in shade to come with functional tests. Once it's in shade, it can be added as
an API to oaktree.

However ... oaktree and shade both promise 100% backwards compatibility at all
times. If your project is still young, be aware that once an API is added to
shade or oaktree it will need to be supported until the end of time.


@ -13,7 +13,9 @@ Contents:
   readme
   installation
-  usage
+  design
+  faq
+  todo
   contributing

Indices and tables

doc/source/todo.rst Normal file

@ -0,0 +1,51 @@
===========
Work Needed
===========
Design the auth story
---------------------

The native/default auth for gRPC is OAuth. gRPC has the ability to do pluggable
auth, but using it would raise the barrier for new languages. I'd love it if we
could come up with a story that involves making API users in keystone and
authorizing them to use oaktree via an OAuth transaction. The keystone auth
backends currently are all about integrating with other auth management
systems, which is great for environments where you have a web browser, but not
so much for ones where you need to put your auth credentials into a file so
that your scripts can work. I'm waving my hands wildly here - because all I
really have are problems to solve, and none of the solutions I have are great.
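
One possible shape for the file-based flow, sketched with real gRPC calls but
an entirely made-up token file path (how that token gets minted by keystone is
exactly the open question)::

    import os

    import grpc

    def make_channel(endpoint, token_path='~/.config/oaktree/token'):
        # Read a long-lived bearer token from a local file so that
        # non-interactive scripts can authenticate.
        with open(os.path.expanduser(token_path)) as f:
            token = f.read().strip()
        creds = grpc.composite_channel_credentials(
            grpc.ssl_channel_credentials(),
            grpc.access_token_call_credentials(token))
        return grpc.secure_channel(endpoint, creds)
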
Design Glance Image / Swift Object Uploads and Downloads
--------------------------------------------------------

Having those two data operations go through an API proxy seems inefficient.
However, having them not be in the API seems like a bad user experience.
Perhaps if we take advantage of gRPC's streaming protocol support, a direct
streaming passthrough actually wouldn't be awful. Or maybe the better approach
would be for the gRPC call to return a URL and token for the user to POST/PUT
to directly. Literally no clue.
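
A sketch of the second option - ``request_upload_slot`` stands in for a
not-yet-designed RPC, and the returned field names are invented::

    import requests

    def upload_image(request_upload_slot, image_path, image_name):
        # oaktree hands back a pre-authorized URL plus token; the client
        # then pushes the bytes directly, bypassing the gRPC proxy.
        slot = request_upload_slot(name=image_name)
        with open(image_path, 'rb') as data:
            resp = requests.put(slot['url'],
                                headers={'X-Auth-Token': slot['token']},
                                data=data)
        resp.raise_for_status()
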
Design and implement Capabilities API
-------------------------------------

shade and the current oaktree codebase rely on os-client-config and clouds.yaml
for information about the cloud and what it can do. As a service, some of the
pieces of information in os-client-config need to be queryable by the user.
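
As a rough illustration, this is the kind of lookup that happens client-side
today via os-client-config, and that a capabilities RPC would need to answer
instead (the cloud name and the field served here are chosen purely as
examples)::

    from os_client_config import config as occ

    def cloud_image_format(cloud_name='mycloud'):
        cloud = occ.OpenStackConfig().get_one_cloud(cloud_name)
        # 'image_format' comes from os-client-config's defaults and vendor
        # profiles - 'qcow2' for most clouds, 'vhd' for some others.
        return cloud.config.get('image_format', 'qcow2')
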
Implement API surfaces
----------------------

In general, all of the API operations shade can perform should be exposed in
oaktree. In order to shape that work, we should tackle them in the following
order:

#. API surface needed for nodepool
#. API surface needed for existing Ansible modules
#. Everything else

Implement oaktree backend in shade
----------------------------------
It's turtles all the way down. If shade sees that a cloud has an oaktree
service, shade should talk to it over gRPC instead of talking to the REST
APIs directly.
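
Illustrative only - a sketch of the dispatch this would imply inside shade,
where ``has_service`` is shade's existing check and the oaktree stub helper
and RPC name are invented::

    def list_servers(cloud, request):
        # ``request`` would be the protobuf filter message for the call.
        if cloud.has_service('oaktree'):
            # Hypothetical helper and RPC; nothing like this exists yet.
            return cloud.get_oaktree_stub().ListServers(request)
        # Otherwise fall back to the existing REST code path.
        return cloud.nova_client.servers.list()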


@ -1,7 +0,0 @@
========
Usage
========
To use oaktree in a project::

    import oaktree


@ -8,7 +8,6 @@ coverage>=3.6
discover
python-subunit>=0.0.18
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
oslosphinx>=2.5.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18
testscenarios>=0.4