Treasuremap contains scripts to aid developers and the automation of Airship. These tools are found in ./treasuremap/tools/deployment
gate.sh is the starting point for running each of the named gates, found in ./airship_gate/manifests, e.g.:
$ ./gate.sh multinode_deploy
where the argument to gate.sh is the filename of a JSON file in ./airship_gate/manifests, without the .json extension. If you run the script without any arguments:
$ ./gate.sh
then it will stand up a four-node Airship cluster by default.
If you'd like to bring your own manifest to drive the framework, set and export the GATE_MANIFEST environment variable prior to running gate.sh.
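As a sketch of that workflow (the manifest path below is illustrative, not a file shipped with Treasuremap):

```shell
# Point the framework at a custom manifest; the path is a placeholder.
export GATE_MANIFEST=/tmp/my_site_manifest.json
echo "Using manifest: ${GATE_MANIFEST}"
```

With the variable exported, invoke ./gate.sh as usual and it will use your manifest instead of a named gate.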
Each of the manifests used for the gate defines a virtual machine configuration and the steps to run as part of that gate. Each file also contains configuration that targets a particular set of Airship site configurations, which for some of the provided manifests are found in the deployment_files/site directory.
Several useful utilities are found in ./airship_gate/bin to facilitate interaction with the VMs created. These commands are wrappers that provide the functionality of the underlying utility while also incorporating the identifying information needed for a particular run of a gate, e.g.:
$ ./airship_gate/bin/ssh.sh n0
Custom manifests can be used to drive this framework for testing outside the default virtual site deployment scenario. The sections below describe how to create a manifest that defines a custom network or VM configuration, or runs a custom stage pipeline. Manifest files are written in JSON, and the documentation below uses dotted JSON paths when describing structure. Unless otherwise noted, paths are relative to the document root.
The .networking key defines the network topology of the site. Each subkey is the name of a network. Under each network name is a semi-recursive stanza defining the layer 2 and layer 3 attributes of the network.
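For illustration only — the field names below are assumptions, not the framework's exact schema — a .networking entry with layer 2 and layer 3 attributes might look like:

```json
{
  "networking": {
    "mgmt": {
      "layer2": {
        "mtu": 1500
      },
      "layer3": {
        "cidr": "172.24.1.0/24",
        "address": "172.24.1.1"
      }
    }
  }
}
```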
The .disk_layouts key defines the various disk layouts that can be assigned to the VMs being built. Each named layout key defines one or more block devices that will be created as file-backed volumes.
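As a hypothetical sketch — the layout name, device name, and size field here are illustrative assumptions — a layout defining a single file-backed block device might look like:

```json
{
  "disk_layouts": {
    "simple": {
      "vda": {
        "size": 30
      }
    }
  }
}
```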
Under the .vm key is a mapping of all the VMs that will be created via virt-install. These can be a mix of VMs bootstrapped via virsh/cloud-init and VMs deployed via Airship. Each key is the name of a VM, and each value is a JSON object with the following fields:
- memory - VM RAM in megabytes
- vcpus - Number of VM CPUs
- sockets - Number of sockets. (Optional)
- threads - Number of threads. (Optional)
- disk_layout - A disk profile for the VM matching one defined under
.disk_layouts
- bootstrap - True/False for whether the framework should bootstrap the VM's OS
- cpu_cells - Parameters to set up NUMA nodes and allocate RAM. (Optional)
- cpu_mode - CPU mode for the VM. (if not specified, default: host)
- userdata - Cloud-init userdata fed to the VM during bootstrap for further customization
- networking - Network attachment and addressing configuration. Every key except addresses is assumed to be a desired NIC on the VM. For each NIC stanza, the following fields are respected:
- mac - A MAC address for the NIC
- model - A model for the NIC (if not specified, default: virtio)
- pci - A JSON object with slot and port keys specifying the PCI address for the NIC
- attachment - The network from .networking that is attached to this NIC
The addresses key specifies the IP address for each layer 3 network that the VM is attached to.
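Pulling the fields above together, a .vm entry might look like the following sketch. The VM name, NIC name, MAC address, and the assumption that addresses maps a network name to an IP are illustrative, not the framework's exact schema:

```json
{
  "vm": {
    "n0": {
      "memory": 4096,
      "vcpus": 2,
      "disk_layout": "simple",
      "bootstrap": true,
      "networking": {
        "ens3": {
          "mac": "52:54:00:00:be:31",
          "model": "virtio",
          "attachment": "mgmt"
        },
        "addresses": {
          "mgmt": "172.24.1.9"
        }
      }
    }
  }
}
```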
TODO