# microstack
OpenStack in a snap that you can run locally on a single machine! Excellent for ...

- Development and testing of OpenStack workloads
- CI
- Edge clouds (experimental)
`microstack` currently provides Nova, Keystone, Glance, Horizon and Neutron OpenStack services.
If you want to roll up your sleeves and do interesting things with the services and settings, look in the `.d` directories in the filesystem tree under `/var/snap/microstack/common/etc`. You can add services with your package manager, or take a look at `CONTRIBUTING.md` and make a code-based argument for adding a service to the default list. :-)
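For instance, you can browse that tree to see which override directories exist (the top-level path is as above; subdirectory names such as `nova` vary by release and are shown here only as a hypothetical illustration):

```
# Top-level configuration tree shipped with the snap
ls /var/snap/microstack/common/etc
# Hypothetical example: .d override snippets for one service
ls /var/snap/microstack/common/etc/nova
```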
## Installation
`microstack` is frequently updated to provide the latest stable updates of the most recent OpenStack release. The quickest way to get started is to install directly from the snap store. You can install `microstack` using:

```
sudo snap install microstack --classic --beta
```
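If the install succeeded, `snap list` (a standard snapd command, not microstack-specific) will show the installed revision and channel:

```
snap list microstack
```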
## Quickstart
To quickly configure networks and launch a VM, run:

```
sudo microstack.init
```

This will configure various OpenStack databases. Then run:

```
microstack.launch cirros --name test
```
This will launch an instance for you, and make it available to manage via the command line, or via the Horizon Dashboard.
To access the Dashboard, visit http://10.20.20.1 in a web browser, and log in with the following credentials:

```
username: admin
password: keystone
```
To SSH into the instance, use the username "cirros" and the SSH key written to `~/.ssh/id_microstack`:

```
ssh -i ~/.ssh/id_microstack cirros@<IP>
```

(Where `<IP>` is listed in the output of `microstack.launch`.)
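If you missed the address in the launch output, you can recover it with a standard `openstack server list` (via the prefixed CLI described below); the instance's IP appears in the Networks column:

```
microstack.openstack server list
```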
To run OpenStack commands, use `microstack.openstack <some command>`.

For more detail and control, read the rest of this README. :-)
## Accessing OpenStack
`microstack` provides a pre-configured OpenStack CLI to access the local OpenStack deployment; it's namespaced using the `microstack` prefix:

```
microstack.openstack server list
```
You can set up this command as an alias for `openstack` if you wish (removing the need for the `microstack.` prefix):

```
sudo snap alias microstack.openstack openstack
```
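Once the alias is in place, the shorter form behaves identically, for example:

```
openstack server list
```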
Alternatively, you can access the Horizon OpenStack dashboard at http://127.0.0.1 with the following credentials:

```
username: admin
password: keystone
```
## Creating and accessing an instance
Create an instance in the usual way:

```
microstack.openstack server create --flavor m1.small --nic net-id=test --key-name microstack --image cirros my-microstack-server
```
For convenience, we've used items that the initialisation step provided (flavor, network, keypair, and image). You are free to manage your own.
To access the instance, you'll need to assign it a floating IP address:

```
ALLOCATED_FIP=`microstack.openstack floating ip create -f value -c floating_ip_address external`
microstack.openstack server add floating ip my-microstack-server $ALLOCATED_FIP
```
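To check that the address was attached, a standard `openstack server show` works (the `addresses` column named below is the regular OpenStack CLI field, assumed here rather than anything microstack-specific):

```
microstack.openstack server show my-microstack-server -c addresses
```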
Since MicroStack is just like a normal OpenStack cloud, you'll need to enable SSH and ICMP access to the instance (this may have been done by the initialisation step):

```
SECGROUP_ID=`microstack.openstack security group list --project admin -f value -c ID`
microstack.openstack security group rule create $SECGROUP_ID --proto tcp --remote-ip 0.0.0.0/0 --dst-port 22
microstack.openstack security group rule create $SECGROUP_ID --proto icmp --remote-ip 0.0.0.0/0
```
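You can confirm the rules landed with a standard rule listing:

```
microstack.openstack security group rule list $SECGROUP_ID
```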
You should now be able to SSH to the instance:

```
ssh -i ~/.ssh/id_microstack cirros@$ALLOCATED_FIP
```
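If SSH doesn't connect right away, the instance may still be booting; since the rules above also open ICMP, an ordinary ping is a quick liveness check:

```
ping -c 3 $ALLOCATED_FIP
```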
Happy `microstack`ing!
## Stopping and starting microstack
You may wish to temporarily shut down microstack when not in use without uninstalling it.

`microstack` can be shut down using:

```
sudo snap disable microstack
```

and re-enabled later using:

```
sudo snap enable microstack
```
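In either state, snapd's standard service listing shows what's actually running:

```
snap services microstack
```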
## Raising a Bug
Please report bugs to the microstack project on Launchpad: https://bugs.launchpad.net/microstack
## Clustering Preview
The latest `--edge` version of the snap contains a preview of microstack's clustering functionality. If you're interested in building a small "edge" cloud with microstack, please take a look at the notes below. Keep in mind that this is preview functionality: interfaces may not be stable, and security in the preview is minimal, so it is not suitable for production use!
To set up a cluster, you must first set up a control node. Do so with the following commands:

```
sudo snap install microstack
sudo microstack.init
```

Answer the questions in the interactive prompt as follows:

- Clustering: yes
- Role: control
- IP Address: note and accept the default
On a second machine, run:

```
sudo snap install microstack
sudo microstack.init
```

Answer the questions in the interactive prompt as follows:

- Setup clustering: yes
- Role: compute
- Control IP: the IP address noted above
- Compute IP: accept the default
You should now have a small, two-node cloud, with the first node serving as both the control plane and a hypervisor, and the second node serving as a hypervisor. You can create VMs on both.
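To confirm that both nodes registered with the control plane, you can query the cloud from the control node using standard OpenStack CLI commands (the output will reflect your own hostnames):

```
microstack.openstack compute service list
microstack.openstack hypervisor list
```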