Hi,

I have been thinking about this issue and I believe the concerns are
genuine, so we need to figure out a way around them.

IMHO one possible solution is to change how the watchdog does its quorum
calculation and which nodes make up the watchdog cluster.

The current implementation calculates the quorum from the number of
configured watchdog nodes and the number of alive nodes. If we make the
watchdog cluster adjust itself dynamically to the current situation, we
can provide a better user experience.

As of now the watchdog cluster definition recognises a node as either
alive or absent, and the number of alive nodes needs to be a majority of
the total number of configured nodes for the quorum to hold.

So my suggestion is that instead of using a binary status, we consider
that a watchdog node can be in one of three states, 'Alive', 'Dead' or
'Lost', and that all dead nodes are treated as not being part of the
current cluster.

Consider the example where we have 5 configured watchdog nodes. With the
current implementation the quorum requires 3 alive nodes.

Now suppose we have started only 3 nodes. That is good enough for the
cluster to hold the quorum, and one of the nodes will eventually acquire
the VIP, so no problems there. But as soon as we shut down one of those
nodes, or it becomes 'Lost', the cluster loses the quorum and releases
the VIP, making the service unavailable.

Now consider the same scenario with the new definition of the watchdog
cluster described above. When we initially start 3 nodes out of 5, the
cluster marks the remaining two nodes as dead (after a configurable
time) and removes them from the cluster until one of them is started and
connects to the cluster. So after that configured time, even though we
have 5 configured watchdog nodes, the cluster dynamically adjusts itself
and considers itself to have only 3 nodes (instead of 5), which requires
only 2 alive nodes for the quorum.

By this new definition, if one of the nodes gets lost the cluster will
still hold the quorum, since it considers itself to consist of 3 nodes.
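
To make this concrete, here is a rough sketch of how the quorum check
could work once dead nodes are excluded from the cluster size. This is
illustrative C only, not the actual Pgpool-II watchdog code; all names
and structures below are made up.

#include <stdbool.h>

/* Proposed three-state node status (illustrative only). */
typedef enum
{
    WD_NODE_ALIVE,   /* up and communicating */
    WD_NODE_LOST,    /* unreachable, but still counted in the cluster */
    WD_NODE_DEAD     /* shut down, or lost for too long; leaves the cluster */
} wd_node_state;

/* Every configured node that is not DEAD belongs to the current
 * (dynamically sized) cluster. */
static int
dynamic_cluster_size(const wd_node_state *states, int configured_nodes)
{
    int size = 0;

    for (int i = 0; i < configured_nodes; i++)
        if (states[i] != WD_NODE_DEAD)
            size++;
    return size;
}

static int
alive_node_count(const wd_node_state *states, int configured_nodes)
{
    int alive = 0;

    for (int i = 0; i < configured_nodes; i++)
        if (states[i] == WD_NODE_ALIVE)
            alive++;
    return alive;
}

/* Quorum is a majority of the dynamic cluster, not of the configured
 * node count. E.g. 5 configured nodes with 2 marked DEAD give a
 * cluster size of 3, so 2 alive nodes are enough. */
static bool
quorum_exists(const wd_node_state *states, int configured_nodes)
{
    int cluster_size = dynamic_cluster_size(states, configured_nodes);
    int alive        = alive_node_count(states, configured_nodes);

    return alive >= (cluster_size / 2) + 1;
}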

That lost node will again be marked as dead after a configured amount of
time, eventually shrinking the cluster size further to 2 nodes.
Similarly, when a previously dead node rejoins the cluster, the cluster
will expand itself again to accommodate that node.

On top of that, if a watchdog node is shut down properly it is
immediately marked as dead and removed from the cluster.

Of course, this is not bullet-proof and comes with the risk of a
split-brain in a few network-partitioning scenarios, but I think it
would work in 99% of cases.

This new implementation would require two new (proposed) configuration
parameters:
1- wd_lost_node_to_remove_timeout (seconds)
2- wd_initial_node_showup_time (seconds)

Also, we could implement a new PCP command to force a lost node to be
marked as dead.
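
For illustration, in pgpool.conf the new settings could look something
like this (the names are only proposals and the values are arbitrary):

# Proposed (not yet existing) watchdog parameters
wd_lost_node_to_remove_timeout = 30    # seconds a LOST node stays in the
                                       # cluster before being marked DEAD
wd_initial_node_showup_time = 120      # seconds to wait at startup for
                                       # the remaining configured nodes
                                       # before marking them DEAD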

Thoughts and suggestions?

Thanks
Best regards
Muhammad Usama

On Tue, May 11, 2021 at 7:18 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:
Hi Pgpool-II developers,

Recently we got the complaint below from a user.

Currently Pgpool-II releases the VIP if the quorum is lost. This is
reasonable and safe, so that we can prevent split-brain problems.

However, I feel it would be nice if there were a way to allow holding
the VIP even if the quorum is lost, for emergencies.

Suppose we have a 3-node pgpool cluster, with each node in a different
city. Two of those cities are knocked out by an earthquake, and the user
wants to keep their business running on the remaining node. Of course we
could disable the watchdog and restart pgpool so that applications can
connect to pgpool directly. However, in that case applications need to
change the IP address they connect to.

Also, as the user pointed out, with a 2-node configuration the VIP can
still be used, by enabling enable_consensus_with_half_vote, even if only
1 node remains. It seems as if a 2-node config is better than a 3-node
config in this regard. Of course this is not true, since a 3-node config
is much more resistant to split-brain problems.
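
(For reference, the setting mentioned above is a regular pgpool.conf
watchdog parameter; in a 2-node cluster it is simply turned on, e.g.:)

# pgpool.conf: allow the quorum to be considered held with exactly half
# of the configured watchdog nodes (meaningful for 2-node setups)
enable_consensus_with_half_vote = on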

I think there are multiple ways to deal with the problem:

1) invent a new config parameter so that pgpool keeps the VIP even if
the quorum is lost.

2) add a new pcp command which re-attaches the VIP after the VIP has
been released due to loss of the quorum.

#1 could easily create duplicate VIPs. #2 looks better, but when the
other nodes come up it could still be possible that duplicate VIPs are
created.
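
(Purely to make the two options concrete: the parameter name and the PCP
command below do not exist today, they are hypothetical placeholders.)

# Option 1 (hypothetical parameter): keep the VIP even when the quorum
# is lost
wd_keep_vip_without_quorum = on

# Option 2 (hypothetical PCP command): manually re-acquire the VIP on a
# surviving node after the quorum has been lost
#   $ pcp_watchdog_acquire_vip -h node1 -p 9898 -U pcpadmin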

Thoughts?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

> Dear all,
>
> I have a fairly common 3-node cluster, with each node running a PgPool
> and a PostgreSQL instance.
>
> I have set up priorities so that:
> - when all 3 nodes are up, the 1st node gets the VIP,
> - when the 1st node is down, the 2nd node gets the VIP, and
> - when both the 1st and the 2nd nodes are down, then the 3rd node
>   should get the VIP.
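
(For illustration only: the priority scheme described above is typically
expressed with the wd_priority watchdog parameter, set in each node's
pgpool.conf, where a higher value is preferred for holding the VIP; the
values below are just an example.)

# node 1's pgpool.conf (highest priority, preferred VIP holder)
wd_priority = 3

# node 2's pgpool.conf
wd_priority = 2

# node 3's pgpool.conf (lowest priority, takes over last)
wd_priority = 1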
>
> My problem is that when only 1 node is up, the VIP is not brought up,
> because there is no quorum.
> How can I get PgPool to bring the VIP up on the only remaining node,
> which still could and should serve requests?
>
> Regards,
>
> tamas
>
> --
> Rébeli-Szabó Tamás
>
> _______________________________________________
> pgpool-general mailing list
> pgpool-general@pgpool.net
> http://www.pgpool.net/mailman/listinfo/pgpool-general
_______________________________________________
pgpool-hackers mailing list
pgpool-hackers@pgpool.net
http://www.pgpool.net/mailman/listinfo/pgpool-hackers