Team and repository tags
Freezer Disaster Recovery
freezer-dr, OpenStack Compute node High Availability, provides compute node high availability for OpenStack. freezer-dr monitors all compute nodes running in a cloud deployment; if one of the compute nodes fails, freezer-dr fences it, then tries to evacuate all instances running on it, and finally notifies all users who have workloads/instances running on that compute node, as well as the cloud administrators.
freezer-dr has a pluggable architecture so it can be used with:
- Any monitoring system to monitor the compute nodes (currently only native OpenStack service status is supported)
- Any fencing driver (currently supports IPMI, libvirt, ...)
- Any evacuation driver (currently supports the Nova evacuate API call; migration may be added later)
- Any notification system (currently supports email-based notifications, ...)
just by adding a simple plugin and adjusting the configuration file to use it, or in the future a combination of plugins if required.
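The plugin model described above can be sketched as a small driver interface. This is an illustrative example only; the class and method names below are assumptions, not the actual freezer-dr plugin API.

```python
import abc


class MonitorDriver(abc.ABC):
    """Hypothetical base class for a freezer-dr monitoring plugin."""

    @abc.abstractmethod
    def get_failed_nodes(self):
        """Return a list of compute node hostnames considered down."""


class NativeServicesMonitor(MonitorDriver):
    """Example driver: derive node health from a service-status map,
    standing in for the native OpenStack service status check."""

    def __init__(self, service_status):
        # service_status: {"compute-1": "up", "compute-2": "down", ...}
        self.service_status = service_status

    def get_failed_nodes(self):
        return [host for host, state in self.service_status.items()
                if state != "up"]


monitor = NativeServicesMonitor({"compute-1": "up", "compute-2": "down"})
print(monitor.get_failed_nodes())  # -> ['compute-2']
```

Each driver category (fencing, evacuation, notification) would follow the same pattern: an abstract interface plus concrete drivers selected through the configuration file.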
freezer-dr should run in the control plane, although the architecture supports other scenarios. To run freezer-dr in high availability mode, deploy it in active/passive mode.
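Driver selection happens through the configuration file mentioned above. The fragment below is a hypothetical sketch of what such a file could look like; the section and option names are illustrative assumptions, not the exact freezer-dr options.

```ini
# Hypothetical freezer-dr configuration (names are illustrative)
[monitoring]
driver = native      # native OpenStack service status

[fencing]
driver = ipmi        # or libvirt

[evacuation]
driver = evacuate    # Nova evacuate API call

[notifiers]
driver = email       # email-based notifications
```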
How it works
Starting freezer-dr:
- freezer-dr Monitoring manager is going to load the required monitoring driver according to the configuration
- freezer-dr will query the monitoring system to check whether it considers any compute nodes to be down
- if not, freezer-dr will exit, reporting that there are no failed nodes
- if yes, freezer-dr will call the fencing manager to fence the failed compute node
- Fencing manager will load the correct fencer according to the configuration
- once the compute node is fenced and powered off, the evacuation process starts
- freezer-dr will load the correct evacuation driver
- freezer-dr will evacuate all instances to other compute nodes
- Once the evacuation process has completed, freezer-dr will call the notification manager
- The notification manager will load the correct driver based on the configuration
- freezer-dr will start the notification process ...
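The steps above can be sketched as a single cycle. This is a minimal illustration of the flow, not the real freezer-dr code; the manager and driver classes here are hypothetical stand-ins.

```python
# Hypothetical stand-ins for the fencing, evacuation and notification
# managers; the real freezer-dr loads these as configured drivers.

class Fencer:
    def fence(self, node):
        print(f"fencing {node} (e.g. via IPMI)")


class Evacuator:
    def evacuate(self, node):
        print(f"evacuating instances from {node}")
        # Stand-in for the Nova evacuate API call per instance.
        return ["vm-1", "vm-2"]


class Notifier:
    def notify(self, report):
        print(f"notifying users and admins: {report}")


def run_cycle(failed_nodes, fencer, evacuator, notifier):
    """One freezer-dr pass: fence each failed node, evacuate its
    instances, then notify affected users and administrators."""
    if not failed_nodes:
        print("No failed nodes")
        return []
    report = []
    for node in failed_nodes:
        fencer.fence(node)
        report.append((node, evacuator.evacuate(node)))
    notifier.notify(report)
    return report


run_cycle(["compute-2"], Fencer(), Evacuator(), Notifier())
```

The early exit when no nodes have failed mirrors the "no failed nodes" branch in the step list above.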