Freezer Disaster Recovery
freezer-dr, OpenStack compute node high availability, provides compute node high availability for OpenStack. freezer-dr monitors all compute nodes running in a cloud deployment; if one of them fails, freezer-dr fences the failed node, then tries to evacuate all instances running on it, and finally notifies both the users who have workloads/instances on that node and the cloud administrators.
freezer-dr has a pluggable architecture, so it can be used with:
- any monitoring system to monitor the compute nodes (currently only the native OpenStack service status is supported)
- any fencing driver (currently IPMI, libvirt, ...)
- any evacuation driver (currently the evacuate API call; migration may be supported later)
- any notification system (currently email-based notifications, ...)
Supporting a new system only requires adding a simple plugin and adjusting the configuration file to use it, or in the future a combination of plugins if required.
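A pluggable driver architecture like the one described above can be sketched as an abstract base class per driver type. This is a minimal illustration only; the class and method names here are hypothetical and not freezer-dr's actual API:

```python
from abc import ABC, abstractmethod


class MonitorDriver(ABC):
    """Base class a monitoring plugin would implement (hypothetical interface)."""

    def __init__(self, conf):
        self.conf = conf  # configuration options for this driver

    @abstractmethod
    def get_failed_nodes(self):
        """Return a list of hostnames the monitor considers down."""


class NativeServicesMonitor(MonitorDriver):
    """Example plugin: reports nodes whose compute service is down."""

    def get_failed_nodes(self):
        # A real driver would query the Nova services API; for illustration
        # this reads a static host -> state mapping from the configuration.
        services = self.conf.get("services", {})
        return [host for host, state in services.items() if state == "down"]
```

Adding support for another monitoring system would then mean writing one more `MonitorDriver` subclass and pointing the configuration at it.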
freezer-dr should run in the control plane; however, the architecture supports different scenarios. To run freezer-dr itself in a highly available fashion, it should be deployed in active/passive mode.
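Selecting one driver per subsystem in the configuration file might look like the following sketch. The section and option names are illustrative assumptions, not freezer-dr's actual configuration schema:

```ini
[monitoring]
driver = osa      ; native OpenStack service status

[fencing]
driver = ipmi     ; power off the failed node via IPMI

[evacuation]
driver = evacuate ; use the Nova evacuate API call

[notification]
driver = email    ; email-based notifications
```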
How it works
When freezer-dr starts:
- The monitoring manager loads the required monitoring driver according to the configuration.
- freezer-dr queries the monitoring system to check whether it considers any compute nodes to be down.
- If no, freezer-dr exits, reporting "No failed nodes".
- If yes, freezer-dr calls the fencing manager to fence the failed compute node.
- The fencing manager loads the correct fencer according to the configuration.
- Once the compute node is fenced and powered off, the evacuation process starts.
- freezer-dr loads the correct evacuation driver.
- freezer-dr evacuates all instances to other compute nodes.
- Once the evacuation process completes, freezer-dr calls the notification manager.
- The notification manager loads the correct driver based on the configuration.
- freezer-dr starts the notification process.
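The steps above can be condensed into one monitor-fence-evacuate-notify cycle. The sketch below assumes hypothetical driver objects with `get_failed_nodes`, `fence`, `evacuate`, and `notify` methods; it illustrates the control flow only, not freezer-dr's real implementation:

```python
def run_cycle(monitor, fencer, evacuator, notifier):
    """One freezer-dr cycle: monitor -> fence -> evacuate -> notify.

    Each argument is a driver object loaded from the configuration;
    the method names are illustrative assumptions.
    """
    failed_nodes = monitor.get_failed_nodes()
    if not failed_nodes:
        print("No failed nodes")
        return []

    evacuated = []
    for node in failed_nodes:
        # Power the node off first so it cannot touch shared resources
        # while its instances are being rebuilt elsewhere.
        fencer.fence(node)
        # Move the node's instances to healthy compute nodes.
        instances = evacuator.evacuate(node)
        evacuated.append((node, instances))

    # Inform affected tenants and the cloud administrators.
    notifier.notify(evacuated)
    return evacuated
```

Fencing before evacuation matters: evacuating instances from a node that might still be running risks two copies of the same instance writing to shared storage.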