
Tenks

Tenks is a utility that manages virtual bare metal clusters for development and testing purposes.

Getting Started

Pre-Requisites

Tenks depends on Ansible roles hosted on Ansible Galaxy. With your virtualenv of choice active and Ansible (>=2.6) installed inside it, Tenks' role dependencies can be installed with ansible-galaxy install --role-file=requirements.yml --roles-path=ansible/roles/.

Hosts

Tenks uses Ansible inventory to manage hosts. A multi-host setup is therefore supported, although the default hosts configuration will deploy an all-in-one setup on the host where the ansible-playbook command is executed (localhost).

  • Configuration management of the Tenks cluster is always performed on localhost.

  • The hypervisors group should not directly contain any hosts. Its sub-groups must contain one or more systems. Systems in its sub-groups will host a subset of the nodes deployed by Tenks.

    • The libvirt group is a sub-group of hypervisors. Systems in this group will act as hypervisors using the Libvirt provider.
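As an illustrative sketch of the grouping described above, a minimal inventory placing one hypervisor in the libvirt group might look like the following (myhost1 is a placeholder hostname; the default inventory shipped with Tenks targets localhost instead):

```ini
# hypervisors holds no hosts directly, only sub-groups
[hypervisors:children]
libvirt

# systems in this group act as hypervisors via the Libvirt provider
[libvirt]
myhost1
```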

Configuration

An override file should be created to configure Tenks. Any variables set in this file take precedence over their defaults, allowing you to adapt Tenks to your setup without directly modifying its variable files. An example override file can be found in ansible/override.yml.example.

Most of the configuration you will need to do relates to variables defined in ansible/host_vars/localhost. You can set your own values for these in your override file (mentioned above). In addition to other options, you will need to define the types of node you'd like to manage as a dict in node_types, as well as the desired deployment specifications in specs. Format and guidance for the available options can be found within the variable file.
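As a rough sketch only: the authoritative schema lives in ansible/host_vars/localhost, but an override file defining one node type and requesting two nodes of that type might look something like this (the keys inside each entry are illustrative assumptions, not a definitive reference):

```yaml
# Hypothetical node type definition; consult ansible/host_vars/localhost
# for the real set of supported keys.
node_types:
  type0:
    memory_mb: 1024
    vcpus: 2

# Request two nodes of the type defined above.
specs:
  - type: type0
    count: 2
```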

Broadly, most variables in ansible/group_vars/* have sensible defaults which may be left as-is unless you have a particular need to configure them. A notable exception is physnet_mappings in ansible/group_vars/hypervisors, which should map physical network names to the device to use for each network: this can be a network interface, or an existing OVS or Linux bridge. If these mappings are the same for all hosts in your hypervisors group, you may set a single dict physnet_mappings in your override file, and it will be used for all hosts. If different hosts need different mappings, specify them per host: for a host with hostname myhost, set physnet_mappings in the file ansible/host_vars/myhost.
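For example, a mapping applying to all hypervisors could be set in the override file as follows (the physical network name and device name are placeholders for your own environment):

```yaml
# Map each physical network name to a device on the hypervisor.
# The device may be a network interface, or an existing OVS or
# Linux bridge.
physnet_mappings:
  physnet1: eth1
```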

Commands

Tenks has a variable cmd which specifies the command to be run. This variable can be set in your override file (see above). The possible values it can take are:

  • deploy: create a virtual cluster to the specification given. This is the default command.
  • teardown: tear down any existing virtual cluster with the specification given.
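For example, to make a run tear down the cluster rather than deploy it, the override file would contain:

```yaml
# Possible values: deploy (default), teardown
cmd: teardown
```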

Running Tenks

Currently, Tenks does not have a CLI or wrapper. Deployment can be run by calling ansible-playbook --inventory ansible/inventory ansible/deploy.yml --extra-vars=@override.yml, where override.yml is the path to your override file.

The deploy.yml playbook will run deployment from start to finish; teardown.yml is deploy.yml's "mirror image" to tear down a cluster. These playbooks automatically set cmd appropriately, and they contain various constituent playbooks which perform different parts of the deployment. An individual section of Tenks can be run separately by substituting ansible/deploy.yml in the command above with the path to the playbook(s) you want to run. The current playbooks can be seen in the Ansible structure diagram in the Development section. Bear in mind that you will have to set cmd in your override file if you are running any of the sub-playbooks individually.

Once a cluster has been deployed, it can be reconfigured by modifying the Tenks configuration and rerunning deploy.yml. Node specs can be changed (including increasing/decreasing the number of nodes); node types can also be reconfigured. Existing nodes will be preserved where possible.
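For instance, assuming the illustrative spec keys sketched earlier (the exact schema is documented in ansible/host_vars/localhost), scaling the cluster up is a matter of raising a spec's count in the override file and rerunning deploy.yml:

```yaml
specs:
  - type: type0
    count: 4   # increased from an earlier value; existing nodes are preserved
```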

Limitations

The following is a non-exhaustive list of current known limitations of Tenks:

  • When using the Libvirt provider (currently the only provider), Tenks hypervisors cannot coexist with a containerised Libvirt daemon (for example, as deployed by Kolla in the nova_libvirt container). Tenks configures an uncontainerised Libvirt daemon instance on the hypervisor, and this may conflict with an existing containerised daemon. A workaround is to stop the containerised Libvirt daemon on each Tenks hypervisor if it is present (for example, docker stop nova_libvirt) before running Tenks.

Development

A diagram representing the Ansible structure of Tenks can be seen below. Blue rectangles represent playbooks, green rounded rectangles represent task books, red ellipses represent roles and yellow rhombi represent action plugins.

[Diagram: Tenks Ansible structure]