<div dir="ltr">Hello,<div><br></div><div>Thanks for your response. After applying below settings, the stack is stable and no restarts after load testing. </div><div><br></div><div>```</div><div>    --set postgresql.maxConnections=900 \<br>    --set pgpool.maxPool=10 \<br>    --set pgpool.numInitChildren=80 \<br>    --set pgpool.childMaxConnections=10 \<br>    --set pgpool.clientIdleLimit=60 \<br>    --set pgpool.replicaCount=3 \<br>    --set pgpool.childLifeTime=60 \<br>    --set pgpool.clientIdleLimit=60 \<br>    --set pgpool.connectionLifeTime=300 \<br>    --set pgpool.reservedConnections=0 \<br></div><div>```</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jun 7, 2023 at 12:05 PM Bo Peng <<a href="mailto:pengbo@sraoss.co.jp">pengbo@sraoss.co.jp</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>

> I have deleted the pgpool pods and now it's up. Below are the connections,
> and I see a lot of client_connection_count open. Can someone please help explain why
> these idle connections are open and how to terminate them if they are not consumed?

All connections except 154, which is connected by psql, are in idle (Wait for connection) status.

Pgpool-II preforks the number of child processes specified by num_init_children at startup.
The default is 32.
When the number of concurrent client connections reaches 31 (num_init_children - reserved_connections),
Pgpool-II will raise a "FATAL:  Sorry, too many clients already" error.

This is the expected behavior of Pgpool-II, but I don't know why the pgpool pod was terminated.
I think you should check the configuration and logs of k8s.

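For reference, a minimal sketch of how that sizing maps onto the Bitnami chart values used elsewhere in this thread (the helm release name is a placeholder and the numbers are only illustrative assumptions, not a recommendation):

```
# Hypothetical values for the bitnami/postgresql-ha chart; "my-release" is a placeholder.
# Pgpool-II sizing guideline: num_init_children * max_pool should stay within the
# backend's max_connections (minus superuser_reserved_connections).
# Here: 80 * 10 = 800 <= 900.
# reserved_connections=0 makes extra clients queue for a free child process instead of
# being rejected with "Sorry, too many clients already".
helm upgrade my-release bitnami/postgresql-ha \
  --set postgresql.maxConnections=900 \
  --set pgpool.numInitChildren=80 \
  --set pgpool.maxPool=10 \
  --set pgpool.reservedConnections=0
```
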
> [root@s1m1pf ~]# kubectl run pg-postgresql-ha-client -it
> --restart='Never' --rm --namespace default --image
> docker.io/bitnami/postgresql-repmgr:15.2.0-debian-11-r23
> --env="PGPASSWORD=postgres"          --command -- psql
> --pset=pager=off -h pg-postgresql-ha-pgpool -p 5432 -U postgres -d
> postgres -c "show pool_processes"
>  pool_pid | start_time | client_connection_count | database | username | backend_connection_time | pool_counter | status
> ----------+------------+-------------------------+----------+----------+-------------------------+--------------+--------
> (0 rows)
> 
>  pool_pid |                      start_time                      | client_connection_count | database | username | backend_connection_time | pool_counter |       status
> ----------+------------------------------------------------------+-------------------------+----------+----------+-------------------------+--------------+---------------------
>  141      | 2023-06-06 09:47:07 (4:30 before process restarting) | 92                      |          |          |                         |              | Wait for connection
>  142      | 2023-06-06 09:47:07 (4:55 before process restarting) | 115                     |          |          |                         |              | Wait for connection
>  143      | 2023-06-06 09:47:07 (5:00 before process restarting) | 131                     |          |          |                         |              | Wait for connection
>  144      | 2023-06-06 09:47:07 (4:55 before process restarting) | 134                     |          |          |                         |              | Wait for connection
>  145      | 2023-06-06 09:47:07 (4:45 before process restarting) | 121                     |          |          |                         |              | Wait for connection
>  146      | 2023-06-06 09:47:07 (5:00 before process restarting) | 117                     |          |          |                         |              | Wait for connection
>  147      | 2023-06-06 09:47:07 (4:40 before process restarting) | 133                     |          |          |                         |              | Wait for connection
>  148      | 2023-06-06 09:47:07 (4:55 before process restarting) | 102                     |          |          |                         |              | Wait for connection
>  149      | 2023-06-06 09:47:07 (5:00 before process restarting) | 90                      |          |          |                         |              | Wait for connection
>  150      | 2023-06-06 09:47:07 (4:35 before process restarting) | 123                     |          |          |                         |              | Wait for connection
>  151      | 2023-06-06 09:47:07 (4:50 before process restarting) | 121                     |          |          |                         |              | Wait for connection
>  152      | 2023-06-06 09:47:07 (5:00 before process restarting) | 147                     |          |          |                         |              | Wait for connection
>  153      | 2023-06-06 09:47:07 (4:35 before process restarting) | 113                     |          |          |                         |              | Wait for connection
>  154      | 2023-06-06 09:47:07                                  | 144                     | postgres | postgres | 2023-06-06 13:21:03     | 1            | Execute command
>  155      | 2023-06-06 09:47:07 (5:00 before process restarting) | 105                     |          |          |                         |              | Wait for connection
>  156      | 2023-06-06 09:47:07 (5:00 before process restarting) | 106                     |          |          |                         |              | Wait for connection
>  157      | 2023-06-06 09:47:07 (4:50 before process restarting) | 125                     |          |          |                         |              | Wait for connection
>  158      | 2023-06-06 09:47:07 (5:00 before process restarting) | 101                     |          |          |                         |              | Wait for connection
>  159      | 2023-06-06 09:47:07 (4:55 before process restarting) | 130                     |          |          |                         |              | Wait for connection
>  160      | 2023-06-06 09:47:07 (5:00 before process restarting) | 122                     |          |          |                         |              | Wait for connection
>  161      | 2023-06-06 09:47:07 (5:00 before process restarting) | 110                     |          |          |                         |              | Wait for connection
>  162      | 2023-06-06 09:47:07 (5:00 before process restarting) | 141                     |          |          |                         |              | Wait for connection
>  163      | 2023-06-06 09:47:07 (4:45 before process restarting) | 128                     |          |          |                         |              | Wait for connection
>  164      | 2023-06-06 09:47:07 (5:00 before process restarting) | 125                     |          |          |                         |              | Wait for connection
>  165      | 2023-06-06 09:47:07 (4:55 before process restarting) | 129                     |          |          |                         |              | Wait for connection
>  166      | 2023-06-06 09:47:07 (4:45 before process restarting) | 120                     |          |          |                         |              | Wait for connection
>  167      | 2023-06-06 09:47:07 (4:45 before process restarting) | 114                     |          |          |                         |              | Wait for connection
>  168      | 2023-06-06 09:47:07 (4:45 before process restarting) | 126                     |          |          |                         |              | Wait for connection
>  169      | 2023-06-06 09:47:07 (4:35 before process restarting) | 122                     |          |          |                         |              | Wait for connection
>  170      | 2023-06-06 09:47:07 (4:35 before process restarting) | 132                     |          |          |                         |              | Wait for connection
>  171      | 2023-06-06 09:47:07 (4:45 before process restarting) | 125                     |          |          |                         |              | Wait for connection
>  172      | 2023-06-06 09:47:07 (4:40 before process restarting) | 103                     |          |          |                         |              | Wait for connection
> (32 rows)
> 
> pod "pg-postgresql-ha-client" deleted
> 
> 
> On Mon, Jun 5, 2023 at 3:35 PM Praveen Kumar K S <praveenssit@gmail.com> wrote:
> 
> > Hello All,
> >
> > I'm using
> > https://github.com/bitnami/charts/tree/main/bitnami/postgresql-ha helm
> > charts to deploy pgpool+postgres with 3 replicas on a k8s cluster. All is
> > well. Below are the parameters.
> >
> > postgresql.maxConnections=900
> > pgpool.authenticationMethod=md5
> > pgpool.maxPool=28
> > pgpool.clientIdleLimit=300
> >
> > All others are default values. The setup comes up fine. Our applications are
> > working fine. Now, as part of performance testing, the team has run the
> > scripts and pgpool goes down with "FATAL:  Sorry, too many clients
> > already" and the pgpool pods keep restarting. It's been 3 days and they are still
> > restarting with the same error. I deleted my applications to check what's wrong
> > with pgpool. But even after deleting the applications, pgpool is still
> > restarting with the same error. So I thought of asking the experts here how to
> > fine-tune the pgpool parameters to achieve performance. When I checked the
> > backend postgres connections, it was 67. After deleting the applications it is
> > 4. The command I used is: SELECT sum(numbackends) FROM pg_stat_database;
> >
> > Please let me know if you need any additional information. Thanks.
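A minimal sketch of how those backend counts could be broken down by database and state, reusing the kubectl run / psql client pattern that appears elsewhere in this thread (pod name, service, image, and credentials are the same assumptions as in that command and may need adjusting):

```
# Sketch only: one-off client pod, same service/image/credentials as used in this thread.
kubectl run pg-postgresql-ha-client -it --restart='Never' --rm --namespace default \
  --image docker.io/bitnami/postgresql-repmgr:15.2.0-debian-11-r23 \
  --env="PGPASSWORD=postgres" --command -- psql --pset=pager=off \
  -h pg-postgresql-ha-pgpool -p 5432 -U postgres -d postgres \
  -c "SELECT datname, state, count(*) FROM pg_stat_activity GROUP BY datname, state;"
```

sum(numbackends) gives only the total; grouping pg_stat_activity by state separates idle sessions from active ones.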
> >
> >
> > --
> >
> > Regards,
> >
> > K S Praveen Kumar
> > M: +91-9986855625
> >
> 
> 
> --
> 
> Regards,
> 
> K S Praveen Kumar
> M: +91-9986855625


-- 
Bo Peng <pengbo@sraoss.co.jp>
SRA OSS LLC
https://www.sraoss.co.jp/

-- 
Regards,

K S Praveen Kumar
M: +91-9986855625