[pgpool-hackers: 3919] Re: Proposal: If replication delay exceeds delay_threshold, elect a new load balance node with less delay
Tatsuo Ishii
ishii at sraoss.co.jp
Thu Jun 10 12:14:19 JST 2021
Hi Kawamoto-san,
> Hi Ishii-san,
>
> I modified my patch.
>
> Please see the following test results.
> I tested on a 5-node cluster.
>
> First, in the case where only node 3 is delayed, node 0 is the primary
> and nodes 1, 2 and 4 are the least delayed standbys. The result is that
> pgpool sent 17% of the queries to the primary and 27~30% to each of the
> least delayed standbys. This is close to the expected result of 20%
> to the primary and 30% to each standby.
>
> Second, in the case where nodes 3 and 4 are delayed, node 0 is the
> primary and nodes 1 and 2 are the least delayed standbys. The result is
> 19% to the primary and 39~40% to each of the least delayed standbys. I
> think this is a very good result.
>
> What do you think?
The proposed behavior looks sane and good to me. However, you then need
to change the docs, because the doc states:
if the delay of the load balancing node is greater than
delay_threshold, <productname>Pgpool-II</productname> does not send
read queries to the primary node but to the least delayed standby
with backend_weight greater than 0.
The doc says "<productname>Pgpool-II</productname> does not send read
queries to the primary node", but according to your proposal Pgpool-II
sends read queries to the primary node as well as to the standbys.
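For example, the sentence could be reworded to something along these
lines (just a rough sketch, not final wording):

    if the delay of the load balancing node is greater than
    delay_threshold, <productname>Pgpool-II</productname> elects a new
    load balance node from among the primary and the standbys whose
    delay is below delay_threshold and whose backend_weight is greater
    than 0.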
> ========
> -bash-4.2$ psql -p 11000 -c "show pool_nodes"
> node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
> ---------+----------+-------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
> 0 | /tmp | 11002 | up | up | 0.200000 | primary | primary | 0 | true | 0 | | | 2021-06-09 07:23:37
> 1 | /tmp | 11003 | up | up | 0.200000 | standby | standby | 0 | false | 0 | streaming | async | 2021-06-09 07:23:37
> 2 | /tmp | 11004 | up | up | 0.200000 | standby | standby | 0 | false | 0 | streaming | async | 2021-06-09 07:23:37
> 3 | /tmp | 11005 | up | up | 0.200000 | standby | standby | 0 | false | 0 | streaming | async | 2021-06-09 07:23:37
> 4 | /tmp | 11006 | up | up | 0.200000 | standby | standby | 0 | false | 0 | streaming | async | 2021-06-09 07:23:37
> (5 rows)
>
> -bash-4.2$ psql -p 11005 -c "select pg_wal_replay_pause()"
> pg_wal_replay_pause
> ---------------------
>
> (1 row)
>
> -bash-4.2$ pgbench -p 11000 -i test
>
> -bash-4.2$ pgbench -p 11000 -n -S -t 400 test
>
> -bash-4.2$ psql -p 11000 -c "show pool_nodes"
> node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
> ---------+----------+-------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
> 0 | /tmp | 11002 | up | up | 0.200000 | primary | primary | 69 | false | 0 | | | 2021-06-09 07:05:16
> 1 | /tmp | 11003 | up | up | 0.200000 | standby | standby | 106 | false | 0 | streaming | async | 2021-06-09 07:05:16
> 2 | /tmp | 11004 | up | up | 0.200000 | standby | standby | 108 | false | 0 | streaming | async | 2021-06-09 07:05:16
> 3 | /tmp | 11005 | up | up | 0.200000 | standby | standby | 0 | false | 13158872 | streaming | async | 2021-06-09 07:05:16
> 4 | /tmp | 11006 | up | up | 0.200000 | standby | standby | 119 | true | 0 | streaming | async | 2021-06-09 07:05:16
> (5 rows)
>
> -bash-4.2$ psql -p 11006 -c "select pg_wal_replay_pause()"
> pg_wal_replay_pause
> ---------------------
>
> (1 row)
>
> -bash-4.2$ pgbench -p 11000 -i test
>
> -bash-4.2$ pgbench -p 11000 -n -S -t 400 test
>
> -bash-4.2$ psql -p 11000 -c "show pool_nodes"
> node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
> ---------+----------+-------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
> 0 | /tmp | 11002 | up | up | 0.200000 | primary | primary | 69 | true | 0 | | | 2021-06-09 07:05:16
> 1 | /tmp | 11003 | up | up | 0.200000 | standby | standby | 106 | false | 0 | streaming | async | 2021-06-09 07:05:16
> 2 | /tmp | 11004 | up | up | 0.200000 | standby | standby | 108 | false | 0 | streaming | async | 2021-06-09 07:05:16
> 3 | /tmp | 11005 | up | up | 0.200000 | standby | standby | 0 | false | 26195408 | streaming | async | 2021-06-09 07:05:16
> 4 | /tmp | 11006 | up | up | 0.200000 | standby | standby | 119 | false | 13036536 | streaming | async | 2021-06-09 07:05:16
> (5 rows)
>
> -bash-4.2$ pgbench -p 11000 -n -S -t 300 test
>
> -bash-4.2$ psql -p 11000 -c "show pool_nodes"
> node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
> ---------+----------+-------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
> 0 | /tmp | 11002 | up | up | 0.200000 | primary | primary | 127 | false | 0 | | | 2021-06-09 07:05:16
> 1 | /tmp | 11003 | up | up | 0.200000 | standby | standby | 225 | true | 0 | streaming | async | 2021-06-09 07:05:16
> 2 | /tmp | 11004 | up | up | 0.200000 | standby | standby | 233 | false | 0 | streaming | async | 2021-06-09 07:05:16
> 3 | /tmp | 11005 | up | up | 0.200000 | standby | standby | 0 | false | 26268544 | streaming | async | 2021-06-09 07:05:16
> 4 | /tmp | 11006 | up | up | 0.200000 | standby | standby | 119 | false | 13109672 | streaming | async | 2021-06-09 07:05:16
> (5 rows)
> ========
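By the way, to double-check the per-node percentages from the
transcripts above, something like this should work (assuming the column
order shown, i.e. select_cnt is the 9th column; note that select_cnt
accumulates across runs):

    psql -p 11000 -A -t -F'|' -c "show pool_nodes" |
      awk -F'|' '{cnt[$1] = $9; total += $9}
                 END {for (n in cnt) printf "node %s: %s (%.1f%%)\n", n, cnt[n], 100 * cnt[n] / total}'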
Thanks for the revised patch. Here are my comments:
1) The patch seems to have trailing whitespace:
/home/t-ishii/select_lower_delay_load_balance_node.patch_r6:204: trailing whitespace.
* The new load balancing node is seleted from the
warning: 1 line adds whitespace errors.
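For what it's worth, running "git diff --check" before generating the
patch will flag this, and "git apply --whitespace=fix" can clean it up
on the applying side.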
2) You don't need to include changes for src/parser/gram_minimal.c
because it's automatically generated from gram_minimal.y.
3) test.sh fails
psql: error: FATAL: database "t-ishii" does not exist
Probably you need to add the export below:
source $TESTLIBS
TESTDIR=testdir
PG_CTL=$PGBIN/pg_ctl
PSQL="$PGBIN/psql -X "
+ export PGDATABASE=test
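That is, the top of test.sh would look like this, so that psql and
pgbench connect to the "test" database instead of the one named after
the login user:

    source $TESTLIBS
    TESTDIR=testdir
    PG_CTL=$PGBIN/pg_ctl
    PSQL="$PGBIN/psql -X "
    export PGDATABASE=test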
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp