
Update HA doc with corrections and minor fixes

Update HA doc with corrections and fixes to typos and mistakes.

Change-Id: I8c27f7c8584bb53bd63c036ebceaf90e893cb78d
ashish.billore 2 months ago
commit d4b31ae944
1 changed file with 5 additions and 5 deletions
doc/source/devref/high_availability.rst

@@ -72,13 +72,13 @@ Kuryr Controller Implementation
 The main issue with having multiple controllers is task division. All of the
 controllers are watching the same endpoints and getting the same notifications,
 but those notifications cannot be processed by multiple controllers at once,
-because we end up with a huge race conditon, where each controller creates
-Neutron resources but only on succeeds to put the annotation on the Kubernetes
+because we end up with a huge race condition, where each controller creates
+Neutron resources but only one succeeds to put the annotation on the Kubernetes
 resource it is processing.
 
 This is obviously unacceptable so as a first step we're implementing A/P HA,
-where only the leader is working on the resources and the rest waits in
-standby. This will be implemented by periodically calling the leader-elector
+where only the leader is working on the resources and the other instances wait
+as standby. This will be implemented by periodically calling the leader-elector
 API to check the current leader. On leader change:
 
 * Pod losing the leadership will stop its Watcher. Please note that it will be
@@ -105,7 +105,7 @@ hit. Those can happen in two cases:
   Kubernetes resource by creating resources again.
 * During leader transition (short period after a leader died, but before its
   lease expired and periodic task on other controllers noticed that; this
-  shouldn't exceed 10 s) some K8s resources are deleted. New leader will not
+  shouldn't exceed 10s) some K8s resources are deleted. New leader will not
   get the notification about the deletion and those will go unnoticed.
 
 Both of this issues can be tackled by garbage-collector mechanism that will
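
To make the mechanism the patched paragraph describes concrete, here is a
minimal, hypothetical sketch of the periodic leader check. It assumes a
leader-elector sidecar that reports the current leader as JSON over local
HTTP (the Kubernetes contrib leader-elector image serves {"name": "<leader>"}
on port 4040); the Watcher class, method names, endpoint, and polling interval
here are illustrative assumptions, not kuryr-kubernetes' actual API.

    # Hypothetical sketch of the periodic leader-elector poll described above.
    import json
    import time
    import urllib.request


    class Watcher(object):
        """Stand-in for the controller's Watcher component (illustrative)."""

        def start(self):
            print('Watcher started: handling resource notifications.')

        def stop(self):
            print('Watcher stopped: instance is on standby.')


    def get_current_leader(url='http://localhost:4040'):
        # Assumption: the leader-elector sidecar answers HTTP GETs with
        # JSON like {"name": "<current-leader-pod>"}.
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode())['name']


    def run(my_name, watcher, interval=5):
        """Poll the elector periodically and react to leader changes."""
        is_leader = False
        while True:
            leader = get_current_leader()
            if leader == my_name and not is_leader:
                is_leader = True
                watcher.start()   # gained leadership: start watching
            elif leader != my_name and is_leader:
                is_leader = False
                watcher.stop()    # lost leadership: stop and stand by
            time.sleep(interval)

Note the gap this polling design tolerates: between a leader dying and its
lease expiring (the "shouldn't exceed 10s" window in the diff), no instance is
watching, which is exactly why the text proposes a garbage-collector mechanism
to reconcile missed events.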
