[pgpool-general: 8819] Re: Need help with finetuning

Bo Peng pengbo at sraoss.co.jp
Wed Jun 7 15:35:27 JST 2023


Hi,

> I have deleted the pgpool pods and now it's up. Below are the connections,
> and I see a lot of client connections open. Can someone please help explain why
> these idle connections are open and how to terminate them if they are not consumed?

All connections except the one on pool_pid 154, which is connected by psql, are in idle (Wait for connection) status.

Pgpool-II preforks the number of child processes specified by num_init_children at startup.
The default is 32.
When the number of concurrent client connections reaches 31 (num_init_children - reserved_connections),
Pgpool-II throws a "FATAL:  Sorry, too many clients already" error.
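For reference, here is a minimal pgpool.conf sketch of the parameters involved.
The values are only illustrative, not a recommendation; reserved_connections = 1
is just an assumption that matches the 31-connection limit mentioned above, and
client_idle_limit = 300 mirrors the pgpool.clientIdleLimit helm setting you quoted.

  num_init_children = 32       # preforked Pgpool-II child processes; hard cap on concurrent clients
  reserved_connections = 1     # beyond (num_init_children - reserved_connections) concurrent
                               # clients, new clients get "Sorry, too many clients already"
  max_pool = 28                # cached backend connection slots per child process
  client_idle_limit = 300      # disconnect a client that stays idle longer than 300 seconds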

This is expected behavior of Pgpool-II, but I don't know why the pgpool pod was terminated.
I think you should check your Kubernetes configuration and the pod logs.
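For example, something along these lines may show why the container was restarted
(the pod name below is just a placeholder for your actual pgpool pod):

  # restart reason, exit code and recent events for the pgpool pod
  kubectl describe pod <pgpool-pod-name> -n default
  # logs of the previous (terminated) container instance
  kubectl logs <pgpool-pod-name> -n default --previous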

> [root@s1m1pf ~]# kubectl run pg-postgresql-ha-client -it --restart='Never' --rm \
>     --namespace default --image docker.io/bitnami/postgresql-repmgr:15.2.0-debian-11-r23 \
>     --env="PGPASSWORD=postgres" --command -- \
>     psql --pset=pager=off -h pg-postgresql-ha-pgpool -p 5432 -U postgres -d postgres \
>     -c "show pool_processes"
>  pool_pid | start_time | client_connection_count | database | username | backend_connection_time | pool_counter | status
> ----------+------------+-------------------------+----------+----------+-------------------------+--------------+--------
> (0 rows)
> 
>  pool_pid |                      start_time                      | client_connection_count | database | username | backend_connection_time | pool_counter |       status
> ----------+------------------------------------------------------+-------------------------+----------+----------+-------------------------+--------------+---------------------
>  141      | 2023-06-06 09:47:07 (4:30 before process restarting) | 92                      |          |          |                         |              | Wait for connection
>  142      | 2023-06-06 09:47:07 (4:55 before process restarting) | 115                     |          |          |                         |              | Wait for connection
>  143      | 2023-06-06 09:47:07 (5:00 before process restarting) | 131                     |          |          |                         |              | Wait for connection
>  144      | 2023-06-06 09:47:07 (4:55 before process restarting) | 134                     |          |          |                         |              | Wait for connection
>  145      | 2023-06-06 09:47:07 (4:45 before process restarting) | 121                     |          |          |                         |              | Wait for connection
>  146      | 2023-06-06 09:47:07 (5:00 before process restarting) | 117                     |          |          |                         |              | Wait for connection
>  147      | 2023-06-06 09:47:07 (4:40 before process restarting) | 133                     |          |          |                         |              | Wait for connection
>  148      | 2023-06-06 09:47:07 (4:55 before process restarting) | 102                     |          |          |                         |              | Wait for connection
>  149      | 2023-06-06 09:47:07 (5:00 before process restarting) | 90                      |          |          |                         |              | Wait for connection
>  150      | 2023-06-06 09:47:07 (4:35 before process restarting) | 123                     |          |          |                         |              | Wait for connection
>  151      | 2023-06-06 09:47:07 (4:50 before process restarting) | 121                     |          |          |                         |              | Wait for connection
>  152      | 2023-06-06 09:47:07 (5:00 before process restarting) | 147                     |          |          |                         |              | Wait for connection
>  153      | 2023-06-06 09:47:07 (4:35 before process restarting) | 113                     |          |          |                         |              | Wait for connection
>  154      | 2023-06-06 09:47:07                                  | 144                     | postgres | postgres | 2023-06-06 13:21:03     | 1            | Execute command
>  155      | 2023-06-06 09:47:07 (5:00 before process restarting) | 105                     |          |          |                         |              | Wait for connection
>  156      | 2023-06-06 09:47:07 (5:00 before process restarting) | 106                     |          |          |                         |              | Wait for connection
>  157      | 2023-06-06 09:47:07 (4:50 before process restarting) | 125                     |          |          |                         |              | Wait for connection
>  158      | 2023-06-06 09:47:07 (5:00 before process restarting) | 101                     |          |          |                         |              | Wait for connection
>  159      | 2023-06-06 09:47:07 (4:55 before process restarting) | 130                     |          |          |                         |              | Wait for connection
>  160      | 2023-06-06 09:47:07 (5:00 before process restarting) | 122                     |          |          |                         |              | Wait for connection
>  161      | 2023-06-06 09:47:07 (5:00 before process restarting) | 110                     |          |          |                         |              | Wait for connection
>  162      | 2023-06-06 09:47:07 (5:00 before process restarting) | 141                     |          |          |                         |              | Wait for connection
>  163      | 2023-06-06 09:47:07 (4:45 before process restarting) | 128                     |          |          |                         |              | Wait for connection
>  164      | 2023-06-06 09:47:07 (5:00 before process restarting) | 125                     |          |          |                         |              | Wait for connection
>  165      | 2023-06-06 09:47:07 (4:55 before process restarting) | 129                     |          |          |                         |              | Wait for connection
>  166      | 2023-06-06 09:47:07 (4:45 before process restarting) | 120                     |          |          |                         |              | Wait for connection
>  167      | 2023-06-06 09:47:07 (4:45 before process restarting) | 114                     |          |          |                         |              | Wait for connection
>  168      | 2023-06-06 09:47:07 (4:45 before process restarting) | 126                     |          |          |                         |              | Wait for connection
>  169      | 2023-06-06 09:47:07 (4:35 before process restarting) | 122                     |          |          |                         |              | Wait for connection
>  170      | 2023-06-06 09:47:07 (4:35 before process restarting) | 132                     |          |          |                         |              | Wait for connection
>  171      | 2023-06-06 09:47:07 (4:45 before process restarting) | 125                     |          |          |                         |              | Wait for connection
>  172      | 2023-06-06 09:47:07 (4:40 before process restarting) | 103                     |          |          |                         |              | Wait for connection
> (32 rows)
> 
> pod "pg-postgresql-ha-client" deleted
> 
> 
> On Mon, Jun 5, 2023 at 3:35 PM Praveen Kumar K S <praveenssit at gmail.com>
> wrote:
> 
> > Hello All,
> >
> > I'm using
> > https://github.com/bitnami/charts/tree/main/bitnami/postgresql-ha helm
> > charts to deploy pgpool+postgres with 3 replicas on a k8s cluster. All is
> > well. Below are the parameters.
> >
> > postgresql.maxConnections=900
> > pgpool.authenticationMethod=md5
> > pgpool.maxPool=28
> > pgpool.clientIdleLimit=300
> >
> > All others are default values. The setup comes up fine and our applications are
> > working fine. Now, as part of performance testing, the team has run the
> > scripts and pgpool goes down with "FATAL:  Sorry, too many clients
> > already", and the pgpool pods keep restarting. It's been 3 days and they are still
> > restarting with the same error. I deleted my applications to check what's wrong
> > with pgpool, but even after deleting the applications, pgpool is still
> > restarting with the same error. So I thought of asking the experts here how to
> > fine-tune the pgpool parameters to achieve performance. When I checked the
> > backend postgres connections, it was 67; after deleting the applications it is
> > 4. The command I used is: SELECT sum(numbackends) FROM pg_stat_database;
> >
> > Please let me know if you need any additional information. Thanks.
> >
> >
> > --
> >
> >
> > *Regards,*
> >
> >
> > *K S Praveen Kumar*
> > *M: +91-9986855625*
> >
> 
> 
> -- 
> 
> 
> *Regards,*
> 
> 
> *K S Praveen Kumar*
> *M: +91-9986855625*


-- 
Bo Peng <pengbo at sraoss.co.jp>
SRA OSS LLC
https://www.sraoss.co.jp/


