<div dir="ltr"><div>Hi,</div><div><br></div><div>Unfortunately, it seems the patch did not fix this issue. Yesterday we had a segmentation fault at this same point again. The top of the backtrace now is:</div><div>#0 close_all_backend_connections () at protocol/pool_connection_pool.c:1082<br>#1 0x0000563a51f9280f in proc_exit_prepare (code=-1) at ../../src/utils/error/elog.c:2707<br>#2 0x00007f0926782da7 in __funcs_on_exit () at src/exit/atexit.c:34<br>#3 0x00007f092677a08f in exit (code=code@entry=0) at src/exit/exit.c:29<br>#4 0x0000563a51f4e4e2 in child_exit (code=0) at protocol/child.c:1378<br>#5 die (sig=3) at protocol/child.c:1174<br>#6 <signal handler called><br></div><div><br></div><div>As you can see, it now crashes at line 1082 in pool_connection_pool.c, which looks like this in our patched version:</div><div>1074 for (i = 0; i < pool_config->max_pool; i++, p++)<br>1075 {<br>1076 int backend_id = in_use_backend_id(p);<br>1077 <br>1078 if (backend_id < 0)<br>1079 continue;<br>1080 if (CONNECTION_SLOT(p, backend_id) == NULL)<br>1081 continue;<br>1082 if (CONNECTION_SLOT(p, backend_id)->sp == NULL)<br>1083 continue;<br>1084 if (CONNECTION_SLOT(p, backend_id)->sp->user == NULL)<br>1085 continue;<br>1086 pool_send_frontend_exits(p);<br>1087 }<br></div><div><br></div><div>At the moment of the crash, a lot is happening at the same time. We are reducing a cluster back to a single node. The crash happens at the very last moment, when only the final remaining node is still up and running, but it still is running with cluster configuration (with a watchdog and 2 backends, the local one up, the remote one down). Our configuration management then restarts the database (to force a configuration change on postgresql). Looking at the logs, this shutdown is noticed by pgpool, but the watchdog does not hold a quorum, so it cannot initiate a failover (also, there's no backend to failover to). Then, within a second, pgpool itself is also shutdown. This is when the process segfaults. Something that does seem interesting is that the pid (183) that segfaults, seems to be started during the failover process. pgpool is simultaneously killing all connection pids and starting this one. Also, this pid is killed within a single ms of being started (see timestamp 2024-11-07T00:48:06.906935 and 2024-11-07T00:48:06.907304 in the logs). I hope this helps in tracking this issue down.</div><div><br></div><div>Best regards,</div><div>Emond</div><div><br></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Op wo 18 sep 2024 om 04:17 schreef Tatsuo Ishii <<a href="mailto:ishii@postgresql.org">ishii@postgresql.org</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Okay.<br>
Please let us know if you notice something.<br>
<br>
Best regards,<br>
--<br>
Tatsuo Ishii<br>
SRA OSS K.K.<br>
English: <a href="http://www.sraoss.co.jp/index_en/" rel="noreferrer" target="_blank">http://www.sraoss.co.jp/index_en/</a><br>
Japanese: <a href="http://www.sraoss.co.jp" rel="noreferrer" target="_blank">http://www.sraoss.co.jp</a><br>
<br>
> Hi,<br>
> <br>
> Thanks for the patch. I've added it to our build. This crash is quite rare,<br>
> so I guess the only way of knowing if this fixed the error is by observing<br>
> the build for the next couple of months.<br>
> <br>
> Best regards,<br>
> Emond<br>
> <br>
> On Tue, Sep 17, 2024 at 08:37, Tatsuo Ishii <<a href="mailto:ishii@postgresql.org" target="_blank">ishii@postgresql.org</a>> wrote:<br>
> <br>
>> > Thanks for the report.<br>
>> ><br>
>> > Yes, it seems the crash happened when close_all_backend_connections()<br>
>> > was called by on_exit which is called when process exits. I will look<br>
>> > into this.<br>
>><br>
>> close_all_backend_connections() is responsible for closing pooled<br>
>> connections to the backends. The code uses the MAIN_CONNECTION()<br>
>> macro, but pooled connections can point to a backend that was valid<br>
>> at some point yet is in the down state at present. So instead of<br>
>> MAIN_CONNECTION, we should use in_use_backend_id() here. The<br>
>> attached patch does this. I hope it fixes your problem.<br>
>><br>
>> Best regards,<br>
>> --<br>
>> Tatsuo Ishii<br>
>> SRA OSS K.K.<br>
>> English: <a href="http://www.sraoss.co.jp/index_en/" rel="noreferrer" target="_blank">http://www.sraoss.co.jp/index_en/</a><br>
>> Japanese: <a href="http://www.sraoss.co.jp" rel="noreferrer" target="_blank">http://www.sraoss.co.jp</a><br>
>><br>
>> >> Hi,<br>
>> >><br>
>> >> One of our test runs this weekend hit another segmentation fault. This<br>
>> >> crash seems to happen when pgpool is shutdown. This happens at the end<br>
>> of<br>
>> >> the testcase that reverts a cluster back to a single node setup. At that<br>
>> >> moment, 172.29.30.2 is already shutdown and removed from the cluster and<br>
>> >> 172.29.30.3 is shutdown. The configuration is updated and pgpool on<br>
>> >> 172.29.30.1 is restarted. The crash seems to happen at the moment<br>
>> ppgool on<br>
>> >> 172.29.30.1 is shutdown to be restarted. I've got the feeling that the<br>
>> >> simultaneous loss of .3 and the shutdown is causing this crash.<br>
>> >><br>
>> >> Below is the backtrace. Please not we've switched from Debian to Alpine<br>
>> >> based images.<br>
>> >> #0 0x000055fe4225f0ab in close_all_backend_connections () at protocol/pool_connection_pool.c:1078<br>
>> >> #1 0x000055fe422917ef in proc_exit_prepare (code=-1) at ../../src/utils/error/elog.c:2707<br>
>> >> #2 0x00007ff1af359da7 in __funcs_on_exit () at src/exit/atexit.c:34<br>
>> >> #3 0x00007ff1af35108f in exit (code=code@entry=0) at src/exit/exit.c:29<br>
>> >> #4 0x000055fe4224d4d2 in child_exit (code=0) at protocol/child.c:1378<br>
>> >> #5 die (sig=3) at protocol/child.c:1174<br>
>> >> #6 <signal handler called><br>
>> >> #7 memset () at src/string/x86_64/memset.s:55<br>
>> >> #8 0x000055fe4225d2ed in memset (__n=<optimized out>, __c=0, __d=<optimized out>) at /usr/include/fortify/string.h:75<br>
>> >> #9 pool_init_cp () at protocol/pool_connection_pool.c:83<br>
>> >> #10 0x000055fe4224f5f0 in do_child (fds=fds@entry=0x7ff1a6aabae0) at protocol/child.c:222<br>
>> >> #11 0x000055fe42223ebe in fork_a_child (fds=0x7ff1a6aabae0, id=11) at main/pgpool_main.c:863<br>
>> >> #12 0x000055fe42229d90 in exec_child_restart (node_id=0, failover_context=0x7ffcd98e8c50) at main/pgpool_main.c:4684<br>
>> >> #13 failover () at main/pgpool_main.c:1739<br>
>> >> #14 0x000055fe42228cd9 in sigusr1_interrupt_processor () at main/pgpool_main.c:1507<br>
>> >> #15 0x000055fe4222900f in check_requests () at main/pgpool_main.c:4934<br>
>> >> #16 0x000055fe4222ce53 in PgpoolMain (discard_status=discard_status@entry=0 '\000', clear_memcache_oidmaps=clear_memcache_oidmaps@entry=0 '\000') at main/pgpool_main.c:649<br>
>> >> #17 0x000055fe42222713 in main (argc=<optimized out>, argv=<optimized out>) at main/main.c:365<br>
>> >><br>
>> >> Best regards,<br>
>> >> Emond<br>
>> > _______________________________________________<br>
>> > pgpool-general mailing list<br>
>> > <a href="mailto:pgpool-general@pgpool.net" target="_blank">pgpool-general@pgpool.net</a><br>
>> > <a href="http://www.pgpool.net/mailman/listinfo/pgpool-general" rel="noreferrer" target="_blank">http://www.pgpool.net/mailman/listinfo/pgpool-general</a><br>
>><br>
</blockquote></div></div>