[pgpool-general: 8545] Re: Issues taking a node out of a cluster

Tatsuo Ishii ishii at sraoss.co.jp
Tue Jan 17 09:30:48 JST 2023


>> Have you removed pgpool_status file before restarting pgpool?  The
>> file remembers the backend status along with the node id, hence you need to
>> update the file. If the file does not exist upon pgpool startup, it
>> will be automatically created.
>>
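
For reference, pgpool_status is written into the directory set by
logdir, so that is the file to remove before restarting. A sketch,
with an illustrative path:

    logdir = '/var/log/pgpool'   # pgpool_status is written here
    # delete /var/log/pgpool/pgpool_status before restarting pgpool;
    # it is recreated automatically at startup
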
> 
> Yes, we remove the status file when we change the configuration of pgpool.
> From what we can see in the logs, the backend is set to down after synching
> the status in the cluster. Are backends identified by their index in the
> cluster?

No, backend ids are identified only by pgpool.conf, not by their index
in the cluster.
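
To be concrete, the backend id is the numeric suffix of the backend_*
parameters in pgpool.conf. A minimal sketch (hostnames are
illustrative):

    backend_hostname0 = 'server0'   # backend id 0
    backend_port0 = 5432
    backend_hostname1 = 'server1'   # backend id 1
    backend_port1 = 5432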

> After node 0 gets its new configuration, its backend1 will point
> to node 2, while on node 2, backend1 still points to the former node 1. It
> seems like this causes the backends to get mixed up and the wrong one is
> marked down.

I think so too. Each pgpool node tries to sync with the leader watchdog node.

I suspect there's something wrong in node 0's pgpool.conf. Can you
share it? Only the "backend_*" part is needed.
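
To illustrate the kind of mismatch I suspect (hostnames are
hypothetical): if the nodes disagree about which host backend id 1
refers to, a status change for backend 1 is applied to different hosts
on different nodes.

    # node 0, after the reconfiguration
    backend_hostname1 = 'server2'   # backend 1 now points to node 2

    # node 2, still on the old configuration
    backend_hostname1 = 'server1'   # backend 1 still points to node 1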

> We do not use replication slots, at least we do not create them manually.
> But in this scenario we also don't perform a failover. The primary database
> runs on node 0 and is never taken offline. It's the standby database on
> node 1 that is taken offline. Backend1 (the backend on node 2), which is
> marked down, isn't touched either. In the database logs, I can see that the
> databases are running and never lost connection.

Let me forward the question to the authority on auto_fail_back.

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp


