<div dir="ltr"><p dir="auto">I have deleted the pgpool pods and now its up. Below are 
the connections and I see lot of client_connection_count open. Can 
someone please help why these idle connections are open and how to 
terminate if it is not consumed ?</p>
<div class="gmail-snippet-clipboard-content gmail-notranslate gmail-position-relative gmail-overflow-auto"><pre class="gmail-notranslate"><code class="gmail-notranslate">[root@s1m1pf ~]# kubectl run pg-postgresql-ha-client -it --restart='Never' --rm --namespace default --image <a href="http://docker.io/bitnami/postgresql-repmgr:15.2.0-debian-11-r23">docker.io/bitnami/postgresql-repmgr:15.2.0-debian-11-r23</a> --env="PGPASSWORD=postgres"          --command -- psql --pset=pager=off -h pg-postgresql-ha-pgpool -p 5432 -U postgres -d postgres -c "show pool_processes"
 pool_pid | start_time | client_connection_count | database | username | backend_connection_time | pool_counter | status 
----------+------------+-------------------------+----------+----------+-------------------------+--------------+--------
(0 rows)

 pool_pid |                      start_time                      | client_connection_count | database | username | backend_connection_time | pool_counter |       status        
----------+------------------------------------------------------+-------------------------+----------+----------+-------------------------+--------------+---------------------
 141      | 2023-06-06 09:47:07 (4:30 before process restarting) | 92                      |          |          |                         |              | Wait for connection
 142      | 2023-06-06 09:47:07 (4:55 before process restarting) | 115                     |          |          |                         |              | Wait for connection
 143      | 2023-06-06 09:47:07 (5:00 before process restarting) | 131                     |          |          |                         |              | Wait for connection
 144      | 2023-06-06 09:47:07 (4:55 before process restarting) | 134                     |          |          |                         |              | Wait for connection
 145      | 2023-06-06 09:47:07 (4:45 before process restarting) | 121                     |          |          |                         |              | Wait for connection
 146      | 2023-06-06 09:47:07 (5:00 before process restarting) | 117                     |          |          |                         |              | Wait for connection
 147      | 2023-06-06 09:47:07 (4:40 before process restarting) | 133                     |          |          |                         |              | Wait for connection
 148      | 2023-06-06 09:47:07 (4:55 before process restarting) | 102                     |          |          |                         |              | Wait for connection
 149      | 2023-06-06 09:47:07 (5:00 before process restarting) | 90                      |          |          |                         |              | Wait for connection
 150      | 2023-06-06 09:47:07 (4:35 before process restarting) | 123                     |          |          |                         |              | Wait for connection
 151      | 2023-06-06 09:47:07 (4:50 before process restarting) | 121                     |          |          |                         |              | Wait for connection
 152      | 2023-06-06 09:47:07 (5:00 before process restarting) | 147                     |          |          |                         |              | Wait for connection
 153      | 2023-06-06 09:47:07 (4:35 before process restarting) | 113                     |          |          |                         |              | Wait for connection
 154      | 2023-06-06 09:47:07                                  | 144                     | postgres | postgres | 2023-06-06 13:21:03     | 1            | Execute command
 155      | 2023-06-06 09:47:07 (5:00 before process restarting) | 105                     |          |          |                         |              | Wait for connection
 156      | 2023-06-06 09:47:07 (5:00 before process restarting) | 106                     |          |          |                         |              | Wait for connection
 157      | 2023-06-06 09:47:07 (4:50 before process restarting) | 125                     |          |          |                         |              | Wait for connection
 158      | 2023-06-06 09:47:07 (5:00 before process restarting) | 101                     |          |          |                         |              | Wait for connection
 159      | 2023-06-06 09:47:07 (4:55 before process restarting) | 130                     |          |          |                         |              | Wait for connection
 160      | 2023-06-06 09:47:07 (5:00 before process restarting) | 122                     |          |          |                         |              | Wait for connection
 161      | 2023-06-06 09:47:07 (5:00 before process restarting) | 110                     |          |          |                         |              | Wait for connection
 162      | 2023-06-06 09:47:07 (5:00 before process restarting) | 141                     |          |          |                         |              | Wait for connection
 163      | 2023-06-06 09:47:07 (4:45 before process restarting) | 128                     |          |          |                         |              | Wait for connection
 164      | 2023-06-06 09:47:07 (5:00 before process restarting) | 125                     |          |          |                         |              | Wait for connection
 165      | 2023-06-06 09:47:07 (4:55 before process restarting) | 129                     |          |          |                         |              | Wait for connection
 166      | 2023-06-06 09:47:07 (4:45 before process restarting) | 120                     |          |          |                         |              | Wait for connection
 167      | 2023-06-06 09:47:07 (4:45 before process restarting) | 114                     |          |          |                         |              | Wait for connection
 168      | 2023-06-06 09:47:07 (4:45 before process restarting) | 126                     |          |          |                         |              | Wait for connection
 169      | 2023-06-06 09:47:07 (4:35 before process restarting) | 122                     |          |          |                         |              | Wait for connection
 170      | 2023-06-06 09:47:07 (4:35 before process restarting) | 132                     |          |          |                         |              | Wait for connection
 171      | 2023-06-06 09:47:07 (4:45 before process restarting) | 125                     |          |          |                         |              | Wait for connection
 172      | 2023-06-06 09:47:07 (4:40 before process restarting) | 103                     |          |          |                         |              | Wait for connection
(32 rows)

pod "pg-postgresql-ha-client" deleted
On Mon, Jun 5, 2023 at 3:35 PM Praveen Kumar K S <praveenssit@gmail.com> wrote:
> Hello All,
>
> I'm using the https://github.com/bitnami/charts/tree/main/bitnami/postgresql-ha Helm chart to deploy pgpool + Postgres with 3 replicas on a k8s cluster. All is well. Below are the parameters.
>
> postgresql.maxConnections=900
> pgpool.authenticationMethod=md5
> pgpool.maxPool=28
> pgpool.clientIdleLimit=300
>
> All others are default values. The setup comes up fine and our applications work fine. Now, as part of performance testing, the team ran the load scripts and pgpool went down with "FATAL: Sorry, too many clients already", and the pgpool pods keep restarting. It has been 3 days and they are still restarting with the same error. I deleted my applications to check what is wrong with pgpool, but even after deleting the applications, pgpool is still restarting with the same error. So I thought of asking the experts here how to fine-tune the pgpool parameters for performance. When I checked the backend Postgres connections, it was 67; after deleting the applications it is 4. The command I used is SELECT sum(numbackends) FROM pg_stat_database;
>
> Please let me know if you need any additional information. Thanks.
>
> --
> Regards,
>
> K S Praveen Kumar
> M: +91-9986855625
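
For completeness, the values quoted above are passed to the chart roughly like this (a sketch only; it assumes the standard bitnami Helm repo and the release name pg that the pod names suggest, with everything else left at chart defaults):

# Add the bitnami repo and install the HA chart with the non-default values:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install pg bitnami/postgresql-ha \
  --set postgresql.maxConnections=900 \
  --set pgpool.authenticationMethod=md5 \
  --set pgpool.maxPool=28 \
  --set pgpool.clientIdleLimit=300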
--
Regards,

K S Praveen Kumar
M: +91-9986855625