cd20731d96
We add a function which detects whether rerunning the manifest would add
any cluster nodes. For those nodes we trigger the cluster node addition
via:

    pcs cluster auth newnode
    pcs cluster node add newnode
    pcs cluster start newnode

Initial run with one node in cluster_members:

    Online: [ foobar-0 ]

    Full list of resources:

     stonith-fence_amt-stonith-fence-1  (stonith:fence_amt):     Stopped
     Docker container: test_bundle [docker.io/sdelrio/docker-minimal-nginx]
       test_bundle-docker-0  (ocf::heartbeat:docker):   Started foobar-0
     ip-172.16.11.97  (ocf::heartbeat:IPaddr2):  Started foobar-0
     Clone Set: rabbitmq-clone [rabbitmq]
         Started: [ foobar-0 ]

Rerun with two additional nodes and with the ::deep_compare hiera keys
set to true so that resources get updated:

    Online: [ foobar-0 foobar-1 foobar-2 ]

    Full list of resources:

     stonith-fence_amt-stonith-fence-1  (stonith:fence_amt):     Started foobar-1
     ip-172.16.11.97  (ocf::heartbeat:IPaddr2):  Started foobar-2
     Clone Set: rabbitmq-clone [rabbitmq]
         Started: [ foobar-0 foobar-1 foobar-2 ]
     stonith-fence_amt-stonith-fence-2  (stonith:fence_amt):     Started foobar-0
     stonith-fence_amt-stonith-fence-3  (stonith:fence_amt):     Started foobar-1
     Docker container set: test_bundle [docker.io/sdelrio/docker-minimal-nginx]
       test_bundle-docker-0  (ocf::heartbeat:docker):   Started foobar-0
       test_bundle-docker-1  (ocf::heartbeat:docker):   Started foobar-2
       test_bundle-docker-2  (ocf::heartbeat:docker):   Started foobar-1

We ran about 50 scaleup tests and this node-addition code has worked
every time. We have intentionally not yet added removal support, as
that needs a use case and a lot of testing.

For this scaleup to work properly we need a fix for the firewall
ordering issue, otherwise this could be racy when stonith is configured
(i.e. we'll need I01e681a6305e2708bf364781a2032265b146d065 if this
review ever gets backported).

Change-Id: Iac31035da98bd68a5481d97ee3765a99563db49f
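The detection logic described above can be sketched roughly as follows. This is a simplified illustration, not the actual puppet-pacemaker implementation: the function names (`nodes_to_add`, `pcs_add_commands`) and the plain string command list are assumptions made for this example; the real code lives in the changed Ruby files and runs the commands through Puppet's provider machinery.

```ruby
# Return the nodes listed in the desired cluster_members but not yet
# present in the running cluster; these are the nodes a rerun would add.
def nodes_to_add(current_nodes, desired_members)
  desired_members - current_nodes
end

# Build the pcs command sequence for one new node, mirroring the
# auth -> add -> start order from the commit message (illustrative only).
def pcs_add_commands(node)
  [
    "pcs cluster auth #{node}",
    "pcs cluster node add #{node}",
    "pcs cluster start #{node}",
  ]
end

current = ['foobar-0']
desired = ['foobar-0', 'foobar-1', 'foobar-2']

nodes_to_add(current, desired).each do |node|
  pcs_add_commands(node).each { |cmd| puts cmd }
end
```

Running the sketch with the scenario from the commit message prints the three pcs commands for each of `foobar-1` and `foobar-2`, and nothing for `foobar-0`, which is already in the cluster.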
Changed files:

    pacemaker_cluster_nodes.rb
    pacemaker_cluster_options.rb
    pacemaker_resource_parameters.rb
    pcmk_nodes_added.rb