Resource management and orchestration engine for distributed systems

Requirements

Supported development platforms

Linux or MacOS

Additional software

VirtualBox: 5.x

Vagrant: 1.7.x

Note: Make sure that the Vagrant VirtualBox Guest plugin is installed: vagrant plugin install vagrant-vbguest

Note: If you are using VirtualBox 5.0, it's worth uncommenting the paravirtprovider setting in vagrant-settings.yaml for speed improvements:

paravirtprovider: kvm

For details see the Customizing vagrant-settings.yaml section.

Setup development env

Set up the environment:

cd solar
vagrant up

Log in to the VM; the code is available in the /vagrant directory:

vagrant ssh
solar --help

Get ssh details for the running slave nodes (credentials: vagrant/vagrant):

vagrant ssh-config

You can make/restore snapshots of boxes (this is way faster than reprovisioning them) with the snapshotter.py script:

./snapshotter.py take -n my-snapshot
./snapshotter.py show
./snapshotter.py restore -n my-snapshot

snapshotter.py requires the Python module click to run.

  1. On Debian-based systems you can install it via sudo aptitude install python-click-cli,
  2. On Fedora 22 you can install it via sudo dnf install python-click,
  3. If you use virtualenv or a similar tool, you can install it with just pip install click,
  4. If you don't have virtualenv and your operating system does not provide a package for it, then sudo pip install click.
  5. If you don't have pip, install it first and then execute the command from step 4.

Solar usage

For now, all commands should be executed on the solar-dev machine from the /vagrant directory.

Basic flow is:

  1. Create some resources (look at examples/openstack/openstack.py), connect them to each other, and place them on nodes.
  2. Run solar changes stage (this stages the changes).
  3. Run solar changes process (this prepares the orchestrator graph, returning a change UUID).
  4. Run solar orch run-once <change-uuid> (or solar orch run-once last to run the most recently created graph).
  5. Observe orchestration progress with watch 'solar orch report <change-uuid>' (or watch 'solar orch report last').

Some very simple cluster setup:

cd /vagrant

solar resource create nodes templates/nodes.yaml '{"count": 2}'
solar resource create mariadb_service resources/mariadb_service '{"image": "mariadb", "root_password": "mariadb", "port": 3306}'
solar resource create keystone_db resources/mariadb_db/ '{"db_name": "keystone_db", "login_user": "root"}'
solar resource create keystone_db_user resources/mariadb_user/ user_name=keystone user_password=keystone  # another valid format

solar connect node1 mariadb_service
solar connect node1 keystone_db
solar connect mariadb_service keystone_db '{"root_password": "login_password", "port": "login_port", "ip": "db_host"}'
# solar connect mariadb_service keystone_db_user 'root_password->login_password port->login_port'  # another valid format
solar connect keystone_db keystone_db_user

solar changes stage
solar changes process
# <uid>
solar orch run-once <uid> # or solar orch run-once last
watch 'solar orch report <uid>' # or solar orch report last

You can fiddle with the above configuration like this:

solar resource update keystone_db_user '{"user_password": "new_keystone_password"}'
solar resource update keystone_db_user user_password=new_keystone_password   # another valid format

solar changes stage
solar changes process
# <uid>
solar orch run-once <uid>

To get data for the resource bar (raw and pretty-JSON):

solar resource show --tag 'resources/bar'
solar resource show --json --tag 'resources/bar' | jq .
solar resource show --name 'resource_name'
solar resource show --name 'resource_name' --json | jq .

To clear all resources/connections:

solar resource clear_all

Show the connections/graph:

solar connections show
solar connections graph

You can also limit the graph to show only specific resources:

solar connections graph --start-with mariadb_service --end-with keystone_db

To make sure that all input values are correct and mapped without duplicating your values, use this command:

solar resource validate

Disconnect

solar disconnect mariadb_service node1

Tag a resource:

solar resource tag node1 test-tags
# Remove tags
solar resource tag node1 test-tag --delete

Low level API

Usage:

Creating resources:

from solar.core.resource import virtual_resource as vr
node1 = vr.create('node1', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.3', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})[0]

node2 = vr.create('node2', 'resources/ro_node/', 'rs/', {'ip':'10.0.0.4', 'ssh_key' : '/vagrant/tmp/keys/ssh_private', 'ssh_user':'vagrant'})[0]

keystone_db_data = vr.create('mariadb_keystone_data', 'resources/data_container/', 'rs/', {'image' : 'mariadb', 'export_volumes' : ['/var/lib/mysql'], 'ip': '', 'ssh_user': '', 'ssh_key': ''}, connections={'ip' : 'node2.ip', 'ssh_key':'node2.ssh_key', 'ssh_user':'node2.ssh_user'})[0]

nova_db_data = vr.create('mariadb_nova_data', 'resources/data_container/', 'rs/', {'image' : 'mariadb', 'export_volumes' : ['/var/lib/mysql'], 'ip': '', 'ssh_user': '', 'ssh_key': ''}, connections={'ip' : 'node1.ip', 'ssh_key':'node1.ssh_key', 'ssh_user':'node1.ssh_user'})[0]

To make a connection after a resource is created, use signal.connect.
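
For example, to wire an already created resource to a node, something along these lines should work (a minimal sketch; the exact import path and the mapping argument of signals.connect are assumptions here, so check the solar.core signals module for the current signature):

from solar.core import signals

# map node1's inputs onto the matching inputs of the data container (assumed mapping form)
signals.connect(node1, nova_db_data, {'ip': 'ip', 'ssh_key': 'ssh_key', 'ssh_user': 'ssh_user'})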

To test notifications:

keystone_db_data.args    # displays node2 IP

node2.update({'ip': '10.0.0.5'})

keystone_db_data.args   # updated IP

If you close the Python shell you can load the resources like this:

from solar.core import resource
node1 = resource.load('rs/node1')

node2 = resource.load('rs/node2')

keystone_db_data = resource.load('rs/mariadb_keystone_data')

nova_db_data = resource.load('rs/mariadb_nova_data')

Connections are loaded automatically.

You can also load all resources at once:

from solar.core import resource
all_resources = resource.load_all('rs')

Dry run

The Solar CLI can show a dry run of the actions to be performed. To see what will happen when you run a Puppet action, for example, try this:

solar resource action keystone_puppet run -d

This should print out something like this:

EXECUTED:
73c6cb1cf7f6cdd38d04dd2d0a0729f8: (0, 'SSH RUN', ('sudo cat /tmp/puppet-modules/Puppetfile',), {})
3dd4d7773ce74187d5108ace0717ef29: (1, 'SSH SUDO', ('mv "1038cb062449340bdc4832138dca18cba75caaf8" "/tmp/puppet-modules/Puppetfile"',), {})
ae5ad2455fe2b02ba46b4b7727eff01a: (2, 'SSH RUN', ('sudo librarian-puppet install',), {})
208764fa257ed3159d1788f73c755f44: (3, 'SSH SUDO', ('puppet apply -vd /tmp/action.pp',), {})

By default every mocked command returns an empty string. If you want it to return something else (to check how the dry run would behave in a different situation), provide a mapping (in JSON format), something along the lines of:

solar resource action keystone_puppet run -d -m "{\"73c\": \"mod 'openstack-keystone'\n\"}"

The above means the return string of the first command (with hash 73c6c...) will be as specified in the mapping. Notice that in the mapping you don't have to specify the whole hash, just its unique beginning. Also, you don't have to specify the whole return string in the mapping. The dry run executor can read a file and return its contents instead; just use the > operator when specifying the hash:

solar resource action keystone_puppet run -d -m "{\"73c>\": \"./Puppetlabs-file\"}"

Resource compiling

You can compile all meta.yaml definitions into Python code with classes that derive from Resource. To do this, run:

solar resource compile_all

This generates the file resources_compiled.py in the main directory (do not commit this file into the repo). Then you can import classes from that file, create their instances, and assign values just as if they were normal properties. If your editor supports Python static checking, you will have autocompletion there too. An example of how to create a node this way:

import resources_compiled

node1 = resources_compiled.RoNodeResource('node1', None, {})
node1.ip = '10.0.0.3'
node1.ssh_key = '/vagrant/.vagrant/machines/solar-dev1/virtualbox/private_key'
node1.ssh_user = 'vagrant'

Higher-level API

There's also a higher-level API that allows you to define resource instances in a more functional way and, in particular, to avoid for loops. Here's an example:

from solar import template

nodes = template.nodes_from('templates/riak_nodes.yaml')

riak_services = nodes.on_each(
    'resources/riak_node',
    {
        'riak_self_name': 'riak{num}',
        'riak_hostname': 'riak_server{num}.solar',
        'riak_name': 'riak{num}@riak_server{num}.solar',
    }
)

riak_master_service = riak_services.take(0)
riak_slave_services = riak_services.tail()

riak_master_service.connect_list(
    riak_slave_services,
    {
        'riak_name': 'join_to',
    }
)

For a full Riak example, please look at examples/riak/riaks-template.py.

Full documentation of individual functions is found in the solar/template.py file.

Customizing vagrant-settings.yaml

Solar is shipped with sane defaults in vagrant-settings.yaml_defaults. If you need to adjust them for your needs, e.g. changing resource allocation for the VirtualBox machines, just copy the file to vagrant-settings.yaml and make your modifications.

Image based provisioning with Solar

  • In the vagrant-settings.yaml_defaults or vagrant-settings.yaml file, uncomment the preprovisioned: false line.
  • Run vagrant up; it will take some time because it builds the bootstrap and IBP images.
  • Now you can run the provisioning script: /vagrant/examples/provisioning/provision.sh