[pgpool-general: 7737] Re: RES: RES: Nodes down as quarentined, but not sure what is the problem
Bo Peng
pengbo at sraoss.co.jp
Mon Oct 4 11:33:46 JST 2021
Sorry for the late response.
If possible, could you check whether the issue still occurs with the latest version, 4.1.8:
https://www.pgpool.net/mediawiki/index.php/Downloads
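For reference, one quick way to confirm which pgpool-II version is installed before and after upgrading (a minimal shell sketch, not from the thread; it assumes the pgpool binary is on the PATH and falls back to a message when it is not):

```shell
# Print the installed pgpool-II version if the binary is available;
# otherwise report that it is missing (assumes pgpool is on PATH).
if command -v pgpool >/dev/null 2>&1; then
  pgpool --version
else
  echo "pgpool not found on PATH"
fi
```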
On Mon, 20 Sep 2021 08:13:24 -0300
Rinaldo Akio Uehara <rinaldo.uehara at gmail.com> wrote:
> I have seen that too.
>
> I restarted pgpool on that node.
>
> The LOADING status changed to STANDBY.
>
> And the pcp_attach_node command continues to show the same error message.
>
> From: Lachezar Dobrev
> Sent: Friday, 17 September 2021 10:28
> To: Rinaldo Akio Uehara
> Cc: Bo Peng; pgpool-general at pgpool.net
> Subject: Re: [pgpool-general: 7708] RES: Nodes down as quarentined, but not sure what is the problem
>
>
> I see that on spcdmvm8021 the watchdog says spcdmvm8020 is LOADING.
> Is it possible that the connection between the spcdmvm8021 watchdog and
> the spcdmvm8020 pgpool is somehow unreliable?
>
>
> On Thu, 16 Sep 2021 at 17:41, Rinaldo Akio Uehara
> <rinaldo.uehara at gmail.com> wrote:
>
> >
> > Hello!
> >
> > Here is the output of each command:
>
> >
> > [postgres at spcdmvm8019 ~]$ pcp_attach_node -U pgpool -h 192.168.21.60 --port=9898 -n 1
> > Password:
> > ERROR: failback request for node_id: 1 from pid [1384321] is canceled by other pgpool
> >
> > [postgres at spcdmvm8019 ~]$ pcp_attach_node -U pgpool -h 192.168.21.60 --port=9898 -n 2
> > Password:
> > ERROR: failback request for node_id: 2 from pid [1384337] is canceled by other pgpool
> >
> > [postgres at spcdmvm8019 ~]$ pcp_watchdog_info -h spcdmvm8019 -v -U pgpool
> > Password:
> > Watchdog Cluster Information
> > Total Nodes : 3
> > Remote Nodes : 2
> > Quorum state : QUORUM EXIST
> > Alive Remote Nodes : 2
> > VIP up on local node : YES
> > Master Node Name : spcdmvm8019:5000 Linux spcdmvm8019
> > Master Host Name : spcdmvm8019
> >
> > Watchdog Node Information
> > Node Name : spcdmvm8019:5000 Linux spcdmvm8019
> > Host Name : spcdmvm8019
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 10
> > Status : 4
> > Status Name : MASTER
> >
> > Node Name : spcdmvm8020:5000 Linux spcdmvm8020
> > Host Name : spcdmvm8020
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 5
> > Status : 7
> > Status Name : STANDBY
> >
> > Node Name : spcdmvm8021:5000 Linux spcdmvm8021
> > Host Name : spcdmvm8021
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 1
> > Status : 7
> > Status Name : STANDBY
> >
> > [postgres at spcdmvm8019 ~]$ pcp_watchdog_info -h spcdmvm8020 -v -U pgpool
> > Password:
> > Watchdog Cluster Information
> > Total Nodes : 3
> > Remote Nodes : 2
> > Quorum state : QUORUM EXIST
> > Alive Remote Nodes : 2
> > VIP up on local node : NO
> > Master Node Name : spcdmvm8019:5000 Linux spcdmvm8019
> > Master Host Name : spcdmvm8019
> >
> > Watchdog Node Information
> > Node Name : spcdmvm8020:5000 Linux spcdmvm8020
> > Host Name : spcdmvm8020
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 5
> > Status : 7
> > Status Name : STANDBY
> >
> > Node Name : spcdmvm8019:5000 Linux spcdmvm8019
> > Host Name : spcdmvm8019
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 10
> > Status : 4
> > Status Name : MASTER
> >
> > Node Name : spcdmvm8021:5000 Linux spcdmvm8021
> > Host Name : spcdmvm8021
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 1
> > Status : 7
> > Status Name : STANDBY
> >
> > [postgres at spcdmvm8019 ~]$ pcp_watchdog_info -h spcdmvm8021 -v -U pgpool
> > Password:
> > Watchdog Cluster Information
> > Total Nodes : 3
> > Remote Nodes : 2
> > Quorum state : QUORUM EXIST
> > Alive Remote Nodes : 2
> > VIP up on local node : NO
> > Master Node Name : spcdmvm8019:5000 Linux spcdmvm8019
> > Master Host Name : spcdmvm8019
> >
> > Watchdog Node Information
> > Node Name : spcdmvm8021:5000 Linux spcdmvm8021
> > Host Name : spcdmvm8021
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 1
> > Status : 7
> > Status Name : STANDBY
> >
> > Node Name : spcdmvm8019:5000 Linux spcdmvm8019
> > Host Name : spcdmvm8019
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 10
> > Status : 4
> > Status Name : MASTER
> >
> > Node Name : spcdmvm8020:5000 Linux spcdmvm8020
> > Host Name : spcdmvm8020
> > Delegate IP : 192.168.21.60
> > Pgpool port : 5000
> > Watchdog port : 9000
> > Node priority : 5
> > Status : 1
> > Status Name : LOADING
> >
> > From: Bo Peng
> > Sent: Thursday, 16 September 2021 02:07
> > To: Rinaldo Akio Uehara
> > Cc: pgpool-general at pgpool.net
> > Subject: Re: [pgpool-general: 7705] Nodes down as quarentined, but not sure what is the problem
>
> >
> > Hello,
> >
> > > If I try the pcp_attach_node, the msg is:
> > >
> > > ERROR: failback request for node_id: 1 from pid [854715] is canceled by other pgpool
> >
> > I could not reproduce this issue in 4.1.1.
> > It may be that your environment is in a network-isolation state.
> >
> > Could you share your pcp_attach_node command and the output of the following?
> >
> > $ pcp_watchdog_info -h spcdmvm8019 -v -U pgpool
> > $ pcp_watchdog_info -h spcdmvm8020 -v -U pgpool
> > $ pcp_watchdog_info -h spcdmvm8021 -v -U pgpool
> >
> > On Mon, 13 Sep 2021 22:16:55 -0300
> > Rinaldo Akio Uehara <rinaldo.uehara at gmail.com> wrote:
> >
> > > _______________________________________________
> > > pgpool-general mailing list
> > > pgpool-general at pgpool.net
> > > http://www.pgpool.net/mailman/listinfo/pgpool-general
> >
> > --
> > Bo Peng <pengbo at sraoss.co.jp>
> > SRA OSS, Inc. Japan
> > http://www.sraoss.co.jp/
>
--
Bo Peng <pengbo at sraoss.co.jp>
SRA OSS, Inc. Japan
http://www.sraoss.co.jp/