Builds Ansible Playbook Bundles for use in Automation Broker to expose OpenStack resources in the Kubernetes Service Catalog.
The project is in the very early stages of development. We will first build a prototype that demonstrates the concept of managing OpenStack resources through the Kubernetes Service Catalog, using the http://automationbroker.io/ project to implement the Open Service Broker API, which in turn uses Ansible playbooks to drive the underlying services.
(Get in touch via the openstack-discuss mailing list, with [service-broker] in the subject line.)
The Open Service Broker API is a standard way to expose external resources to applications running in a PaaS. It was originally developed in the context of CloudFoundry, but the same standard was adopted by Kubernetes (and hence OpenShift) in the form of the Service Catalog extension. (The Service Catalog in Kubernetes is the component that calls out to a service broker.) So a single implementation can cover the most popular open-source PaaS offerings.
In many cases the services are simply pre-packaged applications that also run inside the PaaS. But they don't have to be - services can be anything. Provisioning via the service broker ensures that the requested services are tied in to the PaaS's orchestration of the application's lifecycle.
(This is certainly not the be-all and end-all of integration between OpenStack and containers - we also need ways to tie PaaS-based applications into OpenStack's orchestration of a larger group of resources. Some applications may even use both. But it's an important part of the story.)
Some example use cases might be provisioning a database, a message queue, or an object storage container for an application running in the PaaS.
AWS, Azure, and GCP all have service brokers available that support these and many more services that they provide. I don't know of any reason in principle not to expose every type of resource that OpenStack provides via a service broker.
The Cloud Controller interface in Kubernetes allows Kubernetes itself to access features of the cloud to provide its service. For example, if k8s needs persistent storage for a container then it can request that from Cinder through cloud-provider-openstack. It can also request a load balancer from Octavia instead of having to start a container running HAProxy to load balance between multiple instances of an application container (thus enabling use of hardware load balancers via the cloud's abstraction for them).
In contrast, the Service Catalog interface allows the application running on Kubernetes to access features of the cloud.
A service broker provides an HTTP API with five actions: fetching the catalog of available services, provisioning a service instance, binding it to an application, unbinding it, and deprovisioning it.
The binding step is used for things like providing a set of DB credentials to a container. You can rotate credentials when replacing a container by revoking the existing credentials on unbind and creating a new set on bind, without replacing the entire resource.
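The five operations and the credential-rotation pattern can be sketched as follows. This is a minimal in-memory illustration, not the real Automation Broker implementation - in a real broker each method is an HTTP endpoint, and all class and field names here are hypothetical.

```python
import secrets
import uuid

# Toy sketch of the five Open Service Broker operations. In a real broker
# these are HTTP endpoints (e.g. GET /v2/catalog, PUT /v2/service_instances/{id});
# here they are plain methods to show the lifecycle and credential rotation.
class ToyBroker:
    def __init__(self):
        self.instances = {}   # instance_id -> service name
        self.bindings = {}    # binding_id -> credentials dict

    def catalog(self):
        # Advertise the services this broker can provision.
        return [{"name": "example-db", "bindable": True}]

    def provision(self, instance_id, service):
        self.instances[instance_id] = service

    def bind(self, instance_id, binding_id):
        # Issue a fresh set of credentials for each binding, so replacing
        # a container rotates credentials without replacing the resource.
        creds = {"username": f"user-{uuid.uuid4().hex[:8]}",
                 "password": secrets.token_urlsafe(16)}
        self.bindings[binding_id] = creds
        return creds

    def unbind(self, binding_id):
        # Revoke the credentials issued for this binding.
        return self.bindings.pop(binding_id)

    def deprovision(self, instance_id):
        return self.instances.pop(instance_id)
```

Binding twice (with an unbind in between) yields two different credential sets against the same, untouched service instance - exactly the rotation described above.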
Yes! Folks from OpenShift came up with a project called the Automation Broker. To add support for a service to Automation Broker you just create a container with an Ansible playbook to handle each of the actions (create/bind/unbind/delete). This eliminates the need to write another implementation of the service broker API, and allows us to simply write Ansible playbooks instead.
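As a rough sketch (based on the Automation Broker documentation; the service name is a placeholder), a bundle is a container image laid out with one playbook per action plus a metadata file:

```
example-service-apb/
├── apb.yml            # bundle metadata: name, plans, parameters
├── Dockerfile
└── playbooks/
    ├── provision.yml
    ├── bind.yml
    ├── unbind.yml
    └── deprovision.yml
```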
(Aside: Heat uses a comparable method to allow users to manage an external resource using Mistral workflows: the OS::Mistral::ExternalResource resource type.)
Support for accessing AWS resources through a service broker is also implemented using these Ansible Playbook Bundles.
Maybe not. We already have per-project Python libraries, (deprecated) per-project CLIs, openstackclient CLIs, openstack-sdk, shade, Heat resource plugins, and Horizon dashboards. (Mistral actions are generated automatically from the clients.) Some consolidation is already planned, but it would be great not to require projects to maintain yet another interface.
One option is to implement a tool that generates a set of playbooks for each of the resources already exposed (via shade) in the OpenStack Ansible modules. In theory we'd then only need to implement the common parts once, and every service with support in shade would get this for free. Ideally the same broker could be used against any OpenStack cloud - e.g. k8s might be running in your private cloud, but you may want its service catalog to allow you to connect to resources in one or more public clouds - and using shade is an advantage there because it is designed to abstract the differences between clouds.
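A generator of that kind might look like the following sketch. The module names (os_server, os_volume, os_network) are real shade-backed OpenStack Ansible modules; the template and the parameter plumbing are a hypothetical convention, not an existing tool.

```python
# Sketch of autogenerating one provision playbook per shade-backed Ansible
# module. Module names are real; the generation scheme is illustrative.
PLAYBOOK_TEMPLATE = """\
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Provision a {resource} via {module}
      {module}:
        cloud: "{{{{ cloud_name }}}}"
        state: present
        name: "{{{{ instance_name }}}}"
"""

SHADE_MODULES = {
    "server": "os_server",
    "volume": "os_volume",
    "network": "os_network",
}

def generate_provision_playbooks():
    # Render one playbook (as YAML text) per resource type; the common
    # broker machinery would be shared across all of them.
    return {resource: PLAYBOOK_TEMPLATE.format(resource=resource, module=module)
            for resource, module in SHADE_MODULES.items()}
```

Extending coverage to a new resource type would then just mean adding an entry to the module table, rather than writing a new bundle by hand.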
Another option might be to write or generate Heat templates for each resource type we want to expose. Then we'd only need to implement a common way of creating a Heat stack, and just have a different template for each resource type. This is the approach taken by the AWS playbook bundles (except with CloudFormation, obviously). An advantage is that this allows Heat to do any checking and type conversion required on the input parameters. Heat templates can also be made to be fairly cloud-independent, mainly because they make it easier to be explicit about things like ports and subnets than on the command line, where it's more tempting to allow things to happen in a magical but cloud-specific way.
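To illustrate the template-per-resource approach, here is a sketch that builds a minimal Heat template for a Cinder volume (Heat accepts templates as YAML or JSON, so building one as a Python dict and serializing it is straightforward). OS::Cinder::Volume is a real Heat resource type; the parameter and resource names are illustrative.

```python
import json

# Build a minimal HOT template for a single Cinder volume - the kind of
# per-resource template a generic "create a Heat stack" playbook could
# consume. Heat validates and type-converts the parameters for us.
def volume_template(size_param="volume_size"):
    return {
        "heat_template_version": "2018-08-31",
        "parameters": {
            size_param: {"type": "number", "description": "Size in GiB"},
        },
        "resources": {
            "service_volume": {
                "type": "OS::Cinder::Volume",
                "properties": {"size": {"get_param": size_param}},
            }
        },
        "outputs": {
            "volume_id": {"value": {"get_resource": "service_volume"}},
        },
    }

print(json.dumps(volume_template(), indent=2))
```

The broker-side logic stays identical for every service; only the template differs per resource type.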
I'd prefer to go with the pure-Ansible autogenerated approach so we can have support for everything, but looking at the GCP, Azure and AWS brokers, they expose 10, 11 and 17 services respectively - so even if we had to write templates explicitly, we could arguably expose a comparable number of services without investing an unreasonable amount of time.
There are two main deployment topologies we need to consider: Kubernetes deployed by an OpenStack tenant (Magnum-style, though not necessarily using Magnum) and accessing resources in that tenant's project in the local cloud; and Kubernetes accessing resources in some remote OpenStack cloud.
We also need to take into account that in the second case, the Kubernetes cluster may 'belong' to a single cloud tenant (as in the first case) or may be shared by applications that each need to authenticate to different OpenStack tenants. (Kubernetes has traditionally assumed the former, but I expect it to move in the direction of allowing the latter, and it's already fairly common for OpenShift deployments.)
The way e.g. the AWS broker works is that you can either use the credentials provisioned to the VM that k8s is installed on (a 'Role' in AWS parlance - note that this is completely different to a Keystone Role), or supply credentials to authenticate to AWS remotely.
OpenStack doesn't yet support per-instance credentials, although we're working on it. (One thing to keep in mind is that ideally we'll want a way to provide different permissions to the service broker and cloud-provider-openstack.) An option in the meantime might be to provide a way to set up credentials as part of the k8s installation. We'd also need to have a way to specify credentials manually. Unlike for proprietary clouds, the credentials also need to include the Keystone auth_url. We should try to reuse openstacksdk's clouds.yaml/secure.yaml format if possible.
The OpenShift Ansible Broker works by starting up an Ansible container on k8s to run a playbook from the bundle, so presumably credentials can be passed as regular k8s secrets.
In all cases we'll want to encourage users to authenticate using Keystone Application Credentials.
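For reference, a clouds.yaml entry of the kind the broker could consume might look like this. The cloud name, URL and credential values are placeholders; v3applicationcredential is the openstacksdk auth type for Keystone application credentials, and note that (unlike for proprietary clouds) the auth_url must be carried along with the credentials.

```yaml
# clouds.yaml - entry name and credential values are placeholders.
clouds:
  broker-target:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.com:5000/v3
      application_credential_id: "<app-cred-id>"
      application_credential_secret: "<app-cred-secret>"
    region_name: RegionOne
```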
Kuryr allows us to connect application containers in Kubernetes to Neutron networks in OpenStack. It would be desirable if, when the user requests a VM or bare-metal server through the service broker, it were possible to choose between attaching to the same network as Kubernetes pods, or to a different network.