[pgpool-general: 8946] Re: pgpool 4.4.4: reading status file: 1 th backend is set to down status
Camarena Daniel
Daniel.Camarena at azo.com
Tue Oct 17 00:59:07 JST 2023
Hi Tatsuo,
thanks for your reply and the explanation. To comment on your answers:
> > 1. Is there a file which buffers pg states?
> If you mean the "pg_status" column in the show pool_nodes command, no. It is obtained from PostgreSQL on the fly when the show pool_nodes command is executed.
Yes. But it seems that a resulting state is derived from pg_status and shown in the status column of show pool_nodes (see results below), and this state indicates that the service is down - and pgpool acts accordingly. See the pgpool log below: it indicates that node 0 is marked as down because of the "status file".
> > 2. How did the system get into this state?
> I am not familiar with bitnami pgpool nor repmgr. So all I can do is answer from the pgpool point of view. It was caused either by a failover triggered by health check (pgpool detects an error / shutdown of PostgreSQL), or by pcp_detach_node being executed. I cannot tell which without looking into the pgpool log and pgpool.conf.
Pg0 had tons of these messages:
2023-10-11 11:19:03.522 GMT [956538] FATAL: remaining connection slots are reserved for non-replication superuser connections
2023-10-11 11:19:03.525 GMT [956537] FATAL: remaining connection slots are reserved for non-replication superuser connections
2023-10-11 11:19:03.542 GMT [956539] FATAL: remaining connection slots are reserved for non-replication superuser connections
2023-10-11 11:19:03.545 GMT [956540] FATAL: remaining connection slots are reserved for non-replication superuser connections
Pg1 is showing the same messages right now, while I am examining the system. They appear from time to time, and it seems that a failover occurs because of them - like you described before.
Should I just increase max_connections (default 100) to 200 to prevent the problem?
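For reference, a rough way to check how close a backend is to the limit would be something like this (untested sketch; assumes superuser access to the backend, host taken from the node list below):

psql -h 10.0.10.7 -U postgres -c "SELECT count(*) AS in_use, current_setting('max_connections') AS max_conn FROM pg_stat_activity;"
# if the backend is saturated, raise max_connections in postgresql.conf and restart, e.g.
# max_connections = 200

If I read the pgpool docs correctly, each pgpool instance can open up to num_init_children * max_pool connections per backend, so with max_pool = 15 (see pgpool.conf below) and the commented default num_init_children = 32, three pgpool instances could in principle open far more than 100 connections.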
In the meantime I have found a file in pgpool's logs folder. It has the following content:
root at c8bdc87693d4:/opt/bitnami/pgpool/logs# cat pgpool_status
down
up
up
As pgpool logs the following line during startup:
2023-10-16 05:28:21.670: main pid 1: LOG: reading status file: 0 th backend is set to down status
I assume this file is read and the status of pg0 is overridden by it.
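If that is the cause, I suppose the stale entry could be cleared either by discarding the status file or by reattaching the node via pcp (sketch; 9898 is the default pcp port, and <pcp_admin_user> stands for whatever PGPOOL_ADMIN_USERNAME is set to):

# option 1: remove the stale file while pgpool is stopped
rm /opt/bitnami/pgpool/logs/pgpool_status
# option 2: start pgpool with -D (--discard-status) so pgpool_status is ignored
pgpool -D
# option 3: reattach backend 0 on a running pgpool
pcp_attach_node -h localhost -p 9898 -U <pcp_admin_user> -n 0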
show pool_nodes; returns the following:
node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+-----------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
0 | 10.0.10.7 | 5432 | down | up | 0.333333 | standby | primary | 0 | false | 0 | | | 2023-10-16 05:29:17
1 | 10.0.10.8 | 5432 | up | up | 0.333333 | standby | standby | 5 | false | 0 | | | 2023-10-16 05:29:17
2 | 10.0.10.9 | 5432 | up | up | 0.333333 | standby | standby | 11 | true | 0 | | | 2023-10-16 05:29:17
(3 rows)
This indicates that pg_role of pg0 is primary, but the resulting role is standby, because the resulting status is down even though pg_status is up.
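To cross-check what pgpool itself has recorded for node 0, pcp_node_info should report the same state (sketch, same assumptions about pcp port and user as above):

pcp_node_info -h localhost -p 9898 -U <pcp_admin_user> -n 0
# prints hostname, port, status, weight, role, replication delay etc. for backend 0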
As orchestration always restarts pgpool, I am posting the startup sequence of the container:
pgpool 05:28:21.38
pgpool 05:28:21.38 Welcome to the Bitnami pgpool container
pgpool 05:28:21.38 Subscribe to project updates by watching https://github.com/bitnami/containers
pgpool 05:28:21.38 Submit issues and feature requests at https://github.com/bitnami/containers/issues
pgpool 05:28:21.38
pgpool 05:28:21.39 INFO ==> ** Starting Pgpool-II setup **
pgpool 05:28:21.40 INFO ==> Validating settings in PGPOOL_* env vars...
pgpool 05:28:21.42 INFO ==> Initializing Pgpool-II...
pgpool 05:28:21.42 INFO ==> Generating pg_hba.conf file...
pgpool 05:28:21.42 INFO ==> Generating pgpool.conf file...
pgpool 05:28:21.54 INFO ==> Generating password file for local authentication...
pgpool 05:28:21.54 INFO ==> Generating password file for pgpool admin user...
pgpool 05:28:21.55 INFO ==> ** Pgpool-II setup finished! **
pgpool 05:28:21.57 INFO ==> ** Starting Pgpool-II **
2023-10-16 05:28:21.670: main pid 1: LOG: reading status file: 0 th backend is set to down status
2023-10-16 05:28:21.670: main pid 1: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-16 05:28:21.671: main pid 1: LOG: memory cache initialized
2023-10-16 05:28:21.671: main pid 1: DETAIL: memcache blocks :64
2023-10-16 05:28:21.671: main pid 1: LOG: allocating (144190784) bytes of shared memory segment
2023-10-16 05:28:21.671: main pid 1: LOG: allocating shared memory segment of size: 144190784
2023-10-16 05:28:21.730: main pid 1: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-16 05:28:21.730: main pid 1: LOG: health_check_stats_shared_memory_size: requested size: 12288
2023-10-16 05:28:21.730: main pid 1: LOG: memory cache initialized
2023-10-16 05:28:21.730: main pid 1: DETAIL: memcache blocks :64
2023-10-16 05:28:21.731: main pid 1: LOG: pool_discard_oid_maps: discarded memqcache oid maps
2023-10-16 05:28:21.741: main pid 1: LOG: unix_socket_directories[0]: /opt/bitnami/pgpool/tmp/.s.PGSQL.5432
2023-10-16 05:28:21.741: main pid 1: LOG: listen address[0]: *
2023-10-16 05:28:21.741: main pid 1: LOG: Setting up socket for 0.0.0.0:5432
2023-10-16 05:28:21.741: main pid 1: LOG: Setting up socket for :::5432
2023-10-16 05:28:21.757: main pid 1: LOG: find_primary_node_repeatedly: waiting for finding a primary node
2023-10-16 05:28:21.793: main pid 1: LOG: find_primary_node: standby node is 1
2023-10-16 05:28:21.793: main pid 1: LOG: find_primary_node: standby node is 2
2023-10-16 05:28:22.833: main pid 1: LOG: find_primary_node: standby node is 1
2023-10-16 05:28:22.833: main pid 1: LOG: find_primary_node: standby node is 2
[... the same pair of lines repeats about once per second until 05:29:13 ...]
2023-10-16 05:29:14.783: main pid 1: LOG: find_primary_node: standby node is 1
2023-10-16 05:29:14.783: main pid 1: LOG: find_primary_node: standby node is 2
2023-10-16 05:29:15.824: main pid 1: LOG: find_primary_node: standby node is 1
2023-10-16 05:29:15.824: main pid 1: LOG: find_primary_node: standby node is 2
2023-10-16 05:29:16.729: main pid 1: LOG: exit handler called (signal: 15)
2023-10-16 05:29:16.729: main pid 1: LOG: shutting down by signal 15
2023-10-16 05:29:16.729: main pid 1: LOG: terminating all child processes
2023-10-16 05:29:16.762: main pid 1: LOG: Pgpool-II system is shutdown
Last but not least, here is the pgpool.conf you requested. I left the comments in the file:
I have no name!@73d3fcf715c2:/opt/bitnami/pgpool/conf$ cat pgpool.conf
# ----------------------------
# pgPool-II configuration file
# ----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# Whitespace may be used. Comments are introduced with "#" anywhere on a line.
# The complete list of parameter names and allowed values can be found in the
# pgPool-II documentation.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pgpool reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
#------------------------------------------------------------------------------
# BACKEND CLUSTERING MODE
# Choose one of: 'streaming_replication', 'native_replication',
# 'logical_replication', 'slony', 'raw' or 'snapshot_isolation'
# (change requires restart)
#------------------------------------------------------------------------------
backend_clustering_mode = 'streaming_replication'
#------------------------------------------------------------------------------
# CONNECTIONS
#------------------------------------------------------------------------------
# - pgpool Connection Settings -
listen_addresses = '*'
# what host name(s) or IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = '5432'
# Port number
# (change requires restart)
unix_socket_directories = '/opt/bitnami/pgpool/tmp'
# Unix domain socket path(s)
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
#unix_socket_group = ''
# The Owner group of Unix domain socket(s)
# (change requires restart)
#unix_socket_permissions = 0777
# Permissions of Unix domain socket(s)
# (change requires restart)
#reserved_connections = 0
# Number of reserved connections.
# Pgpool-II does not accept connections if over
# num_init_children - reserved_connections.
# - pgpool Communication Manager Connection Settings -
#pcp_listen_addresses = 'localhost'
# what host name(s) or IP address(es) for pcp process to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
#pcp_port = 9898
# Port number for pcp
# (change requires restart)
pcp_socket_dir = '/opt/bitnami/pgpool/tmp'
# Unix domain socket path for pcp
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
#listen_backlog_multiplier = 2
# Set the backlog parameter of listen(2) to
# num_init_children * listen_backlog_multiplier.
# (change requires restart)
#serialize_accept = off
# whether to serialize accept() call to avoid thundering herd problem
# (change requires restart)
# - Backend Connection Settings -
#backend_hostname0 = 'host1'
# Host name or IP address to connect to for backend 0
#backend_port0 = 5432
# Port number for backend 0
#backend_weight0 = 1
# Weight for backend 0 (only in load balancing mode)
#backend_data_directory0 = '/data'
# Data directory for backend 0
#backend_flag0 = 'ALLOW_TO_FAILOVER'
# Controls various backend behavior
# ALLOW_TO_FAILOVER, DISALLOW_TO_FAILOVER
# or ALWAYS_PRIMARY
#backend_application_name0 = 'server0'
# walsender's application_name, used for "show pool_nodes" command
#backend_hostname1 = 'host2'
#backend_port1 = 5433
#backend_weight1 = 1
#backend_data_directory1 = '/data1'
#backend_flag1 = 'ALLOW_TO_FAILOVER'
#backend_application_name1 = 'server1'
# - Authentication -
enable_pool_hba = 'on'
# Use pool_hba.conf for client authentication
pool_passwd = 'pool_passwd'
# File name of pool_passwd for md5 authentication.
# "" disables pool_passwd.
# (change requires restart)
authentication_timeout = '30'
# Delay in seconds to complete client authentication
# 0 means no timeout.
allow_clear_text_frontend_auth = 'off'
# Allow Pgpool-II to use clear text password authentication
# with clients, when pool_passwd does not
# contain the user password
# - SSL Connections -
#ssl = off
# Enable SSL support
# (change requires restart)
#ssl_key = 'server.key'
# SSL private key file
# (change requires restart)
#ssl_cert = 'server.crt'
# SSL public certificate file
# (change requires restart)
#ssl_ca_cert = ''
# Single PEM format file containing
# CA root certificate(s)
# (change requires restart)
#ssl_ca_cert_dir = ''
# Directory containing CA root certificate(s)
# (change requires restart)
#ssl_crl_file = ''
# SSL certificate revocation list file
# (change requires restart)
#ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
# Allowed SSL ciphers
# (change requires restart)
#ssl_prefer_server_ciphers = off
# Use server's SSL cipher preferences,
# rather than the client's
# (change requires restart)
#ssl_ecdh_curve = 'prime256v1'
# Name of the curve to use in ECDH key exchange
#ssl_dh_params_file = ''
# Name of the file containing Diffie-Hellman parameters used
# for so-called ephemeral DH family of SSL cipher.
#ssl_passphrase_command=''
# Sets an external command to be invoked when a passphrase
# for decrypting an SSL file needs to be obtained
# (change requires restart)
#------------------------------------------------------------------------------
# POOLS
#------------------------------------------------------------------------------
# - Concurrent session and pool size -
#process_management_mode = static
# process management mode for child processes
# Valid options:
# static: all children are pre-forked at startup
# dynamic: child processes are spawned on demand.
# number of idle child processes at any time are
# configured by min_spare_children and max_spare_children
#process_management_strategy = gentle
# process management strategy to satisfy spare processes
# Valid options:
#
# lazy: In this mode, the scale-down is performed gradually
# and only gets triggered when excessive spare processes count
# remains high for more than 5 mins
#
# gentle: In this mode, the scale-down is performed gradually
# and only gets triggered when excessive spare processes count
# remains high for more than 2 mins
#
# aggressive: In this mode, the scale-down is performed aggressively
# and gets triggered more frequently in case of higher spare processes.
# This mode uses faster and slightly less smart process selection criteria
# to identify the child processes that can be serviced to satisfy
# max_spare_children
#
# (Only applicable for dynamic process management mode)
#num_init_children = 32
# Maximum Number of concurrent sessions allowed
# (change requires restart)
#min_spare_children = 5
# Minimum number of spare child processes waiting for connection
# (Only applicable for dynamic process management mode)
#max_spare_children = 10
# Maximum number of idle child processes waiting for connection
# (Only applicable for dynamic process management mode)
max_pool = '15'
# Number of connection pool caches per connection
# (change requires restart)
# - Life time -
#child_life_time = 5min
# Pool exits after being idle for this many seconds
#child_max_connections = 0
# Pool exits after receiving that many connections
# 0 means no exit
#connection_life_time = 0
# Connection to backend closes after being idle for this many seconds
# 0 means no close
#client_idle_limit = 0
# Client is disconnected after being idle for that many seconds
# (even inside an explicit transactions!)
# 0 means no disconnection
#------------------------------------------------------------------------------
# LOGS
#------------------------------------------------------------------------------
# - Where to log -
#log_destination = 'stderr'
# Where to log
# Valid values are combinations of stderr,
# and syslog. Default to stderr.
# - What to log -
#log_line_prefix = '%m: %a pid %p: ' # printf-style string to output at beginning of each log line.
log_connections = 'off'
# Log connections
#log_disconnections = off
# Log disconnections
log_hostname = 'off'
# Hostname will be shown in ps status
# and in logs if connections are logged
#log_statement = off
# Log all statements
log_per_node_statement = 'off'
# Log all statements
# with node and backend information
#log_client_messages = off
# Log any client messages
#log_standby_delay = 'if_over_threshold'
# Log standby delay
# Valid values are combinations of always,
# if_over_threshold, none
# - Syslog specific -
#syslog_facility = 'LOCAL0'
# Syslog local facility. Default to LOCAL0
#syslog_ident = 'pgpool'
# Syslog program identification string
# Default to 'pgpool'
# - Debug -
#log_error_verbosity = default # terse, default, or verbose messages
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
# This is used when logging to stderr:
#logging_collector = off
# Enable capturing of stderr
# into log files.
# (change requires restart)
# -- Only used if logging_collector is on ---
#log_directory = '/tmp/pgpool_logs'
# directory where log files are written,
# can be absolute
#log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'
# log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600
# creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off
# If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d
# Automatic rotation of logfiles will
# happen after that much time (minutes).
# 0 disables time based rotation.
#log_rotation_size = 10MB
# Automatic rotation of logfiles will
# happen after that much (KB) log output.
# 0 disables size based rotation.
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
pid_file_name = '/opt/bitnami/pgpool/tmp/pgpool.pid'
# PID file name
# Can be specified as relative to the
# location of pgpool.conf file or
# as an absolute path
# (change requires restart)
logdir = '/opt/bitnami/pgpool/logs'
# Directory of pgPool status file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTION POOLING
#------------------------------------------------------------------------------
#connection_cache = on
# Activate connection pools
# (change requires restart)
# Semicolon separated list of queries
# to be issued at the end of a session
# The default is for 8.3 and later
#reset_query_list = 'ABORT; DISCARD ALL'
# The following one is for 8.2 and before
#reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'
#------------------------------------------------------------------------------
# REPLICATION MODE
#------------------------------------------------------------------------------
#replicate_select = off
# Replicate SELECT statements
# when in replication mode
# replicate_select is higher priority than
# load_balance_mode.
#insert_lock = off
# Automatically locks a dummy row or a table
# with INSERT statements to keep SERIAL data
# consistency
# Without SERIAL, no lock will be issued
#lobj_lock_table = ''
# When rewriting lo_creat command in
# replication mode, specify table name to
# lock
# - Degenerate handling -
#replication_stop_on_mismatch = off
# On disagreement with the packet kind
# sent from backend, degenerate the node
# which is most likely "minority"
# If off, just force to exit this session
#failover_if_affected_tuples_mismatch = off
# On disagreement with the number of affected
# tuples in UPDATE/DELETE queries, then
# degenerate the node which is most likely
# "minority".
# If off, just abort the transaction to
# keep the consistency
#------------------------------------------------------------------------------
# LOAD BALANCING MODE
#------------------------------------------------------------------------------
load_balance_mode = 'on'
# Activate load balancing mode
# (change requires restart)
#ignore_leading_white_space = on
# Ignore leading white spaces of each query
#read_only_function_list = ''
# Comma separated list of function names
# that don't write to database
# Regexp are accepted
#write_function_list = ''
# Comma separated list of function names
# that write to database
# Regexp are accepted
# If both read_only_function_list and write_function_list
# is empty, function's volatile property is checked.
# If it's volatile, the function is regarded as a
# writing function.
#primary_routing_query_pattern_list = ''
# Semicolon separated list of query patterns
# that should be sent to primary node
# Regexp are accepted
# valid for streaming replication mode only.
#database_redirect_preference_list = ''
# comma separated list of pairs of database and node id.
# example: postgres:primary,mydb[0-4]:1,mydb[5-9]:2'
# valid for streaming replication mode only.
#app_name_redirect_preference_list = ''
# comma separated list of pairs of app name and node id.
# example: 'psql:primary,myapp[0-4]:1,myapp[5-9]:standby'
# valid for streaming replication mode only.
#allow_sql_comments = off
# if on, ignore SQL comments when judging if load balance or
# query cache is possible.
# If off, SQL comments effectively prevent the judgment
# (pre 3.4 behavior).
disable_load_balance_on_write = 'transaction'
# Load balance behavior when write query is issued
# in an explicit transaction.
#
# Valid values:
#
# 'transaction' (default):
# if a write query is issued, subsequent
# read queries will not be load balanced
# until the transaction ends.
#
# 'trans_transaction':
# if a write query is issued, subsequent
# read queries in an explicit transaction
# will not be load balanced until the session ends.
#
# 'dml_adaptive':
# Queries on the tables that have already been
# modified within the current explicit transaction will
# not be load balanced until the end of the transaction.
#
# 'always':
# if a write query is issued, read queries will
# not be load balanced until the session ends.
#
# Note that any query not in an explicit transaction
# is not affected by the parameter except 'always'.
#dml_adaptive_object_relationship_list= ''
# comma separated list of object pairs
# [object]:[dependent-object], to disable load balancing
# of dependent objects within the explicit transaction
# after WRITE statement is issued on (depending-on) object.
#
# example: 'tb_t1:tb_t2,insert_tb_f_func():tb_f,tb_v:my_view'
# Note: function name in this list must also be present in
# the write_function_list
# only valid for disable_load_balance_on_write = 'dml_adaptive'.
statement_level_load_balance = 'off'
# Enables statement level load balancing
#------------------------------------------------------------------------------
# STREAMING REPLICATION MODE
#------------------------------------------------------------------------------
# - Streaming -
sr_check_period = '30'
# Streaming replication check period
# Disabled (0) by default
sr_check_user = '****'
# Streaming replication check user
# This is necessary even if you disable streaming
# replication delay check by sr_check_period = 0
sr_check_password = '****'
# Password for streaming replication check user
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
sr_check_database = 'postgres'
# Database name for streaming replication check
#delay_threshold = 0
# Threshold before not dispatching query to standby node
# Unit is in bytes
# Disabled (0) by default
#delay_threshold_by_time = 0
# Threshold before not dispatching query to standby node
# Unit is in second(s)
# Disabled (0) by default
#prefer_lower_delay_standby = off
# If delay_threshold is set larger than 0, Pgpool-II sends queries to
# the primary when the selected node is delayed over delay_threshold.
# If this is set to on, Pgpool-II sends the query to another standby
# with a lower delay.
# - Special commands -
#follow_primary_command = ''
# Executes this command after main node failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
#------------------------------------------------------------------------------
# HEALTH CHECK GLOBAL PARAMETERS
#------------------------------------------------------------------------------
health_check_period = '30'
# Health check period
# Disabled (0) by default
health_check_timeout = '10'
# Health check timeout
# 0 means no timeout
health_check_user = '***'
# Health check user
health_check_password = '***'
# Password for health check user
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#health_check_database = ''
# Database name for health check. If '', tries 'postgres' first,
health_check_max_retries = '5'
# Maximum number of times to retry a failed health check before giving up.
health_check_retry_delay = '5'
# Amount of time to wait (in seconds) between retries.
connect_timeout = '10000'
# Timeout value in milliseconds before giving up connecting to a backend.
# Default is 10000 ms (10 seconds). Users on flaky networks may want to increase
# the value. 0 means no timeout.
# Note that this value is not only used for health check,
# but also for ordinary connections to the backend.
#------------------------------------------------------------------------------
# HEALTH CHECK PER NODE PARAMETERS (OPTIONAL)
#------------------------------------------------------------------------------
#health_check_period0 = 0
#health_check_timeout0 = 20
#health_check_user0 = 'nobody'
#health_check_password0 = ''
#health_check_database0 = ''
#health_check_max_retries0 = 0
#health_check_retry_delay0 = 1
#connect_timeout0 = 10000
#------------------------------------------------------------------------------
# FAILOVER AND FAILBACK
#------------------------------------------------------------------------------
failover_command = 'echo ">>> Failover - that will initialize new primary node search!"'
# Executes this command at failover
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
#failback_command = ''
# Executes this command at failback.
# Special values:
# %d = failed node id
# %h = failed node host name
# %p = failed node port number
# %D = failed node database cluster path
# %m = new main node id
# %H = new main node hostname
# %M = old main node id
# %P = old primary node id
# %r = new main port number
# %R = new main database cluster path
# %N = old primary node hostname
# %S = old primary node port number
# %% = '%' character
failover_on_backend_error = 'off'
# Initiates failover when reading/writing to the
# backend communication socket fails
# If set to off, pgpool will report an
# error and disconnect the session.
#failover_on_backend_shutdown = off
# Initiates failover when backend is shutdown,
# or backend process is killed.
# If set to off, pgpool will report an
# error and disconnect the session.
#detach_false_primary = off
# Detach false primary if on. Only
# valid in streaming replication
# mode and with PostgreSQL 9.6 or
# after.
search_primary_node_timeout = '0'
# Timeout in seconds to search for the
# primary node when a failover occurs.
# 0 means no timeout, keep searching
# for a primary node forever.
#------------------------------------------------------------------------------
# ONLINE RECOVERY
#------------------------------------------------------------------------------
#recovery_user = 'nobody'
# Online recovery user
#recovery_password = ''
# Online recovery password
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
#recovery_1st_stage_command = ''
# Executes a command in first stage
#recovery_2nd_stage_command = ''
# Executes a command in second stage
#recovery_timeout = 90
# Timeout in seconds to wait for the
# recovering node's postmaster to start up
# 0 means no wait
#client_idle_limit_in_recovery = 0
# Client is disconnected after being idle
# for that many seconds in the second stage
# of online recovery
# 0 means no disconnection
# -1 means immediate disconnection
#auto_failback = off
# Detached backend nodes are reattached automatically
# if replication_state is 'streaming'.
#auto_failback_interval = 1min
# Min interval of executing auto_failback in
# seconds.
#------------------------------------------------------------------------------
# WATCHDOG
#------------------------------------------------------------------------------
# - Enabling -
#use_watchdog = off
# Activates watchdog
# (change requires restart)
# - Connection to upstream servers -
#trusted_servers = ''
# trusted server list which are used
# to confirm network connection
# (hostA,hostB,hostC,...)
# (change requires restart)
#trusted_server_command = 'ping -q -c3 %h'
# Command to execute when communicating with a trusted server.
# Special values:
# %h = host name specified by trusted_servers
# - Watchdog communication Settings -
hostname0 = ''
# Host name or IP address of pgpool node
# for watchdog connection
# (change requires restart)
#wd_port0 = 9000
# Port number for watchdog service
# (change requires restart)
#pgpool_port0 = 9999
# Port number for pgpool
# (change requires restart)
#hostname1 = ''
#wd_port1 = 9000
#pgpool_port1 = 9999
#hostname2 = ''
#wd_port2 = 9000
#pgpool_port2 = 9999
#wd_priority = 1
# priority of this watchdog in leader election
# (change requires restart)
#wd_authkey = ''
# Authentication key for watchdog communication
# (change requires restart)
#wd_ipc_socket_dir = '/tmp'
# Unix domain socket path for watchdog IPC socket
# The Debian package defaults to
# /var/run/postgresql
# (change requires restart)
# - Virtual IP control Setting -
#delegate_ip = ''
# delegate IP address
# If this is empty, the virtual IP is never brought up.
# (change requires restart)
#if_cmd_path = '/sbin'
# path to the directory where if_up/down_cmd exists
# If if_up/down_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
#if_up_cmd = '/usr/bin/sudo /sbin/ip addr add $_IP_$/24 dev eth0 label eth0:0'
# startup delegate IP command
# (change requires restart)
#if_down_cmd = '/usr/bin/sudo /sbin/ip addr del $_IP_$/24 dev eth0'
# shutdown delegate IP command
# (change requires restart)
#arping_path = '/usr/sbin'
# arping command path
# If arping_cmd starts with "/", if_cmd_path will be ignored.
# (change requires restart)
#arping_cmd = '/usr/bin/sudo /usr/sbin/arping -U $_IP_$ -w 1 -I eth0'
# arping command
# (change requires restart)
#ping_path = '/bin'
# ping command path
# (change requires restart)
# - Behavior on escalation Setting -
#clear_memqcache_on_escalation = on
# Clear all the query cache on shared memory
# when a standby pgpool escalates to active pgpool
# (= virtual IP holder).
# This should be off if client connects to pgpool
# not using virtual IP.
# (change requires restart)
#wd_escalation_command = ''
# Executes this command at escalation on new active pgpool.
# (change requires restart)
#wd_de_escalation_command = ''
# Executes this command when leader pgpool resigns from being leader.
# (change requires restart)
# - Watchdog consensus settings for failover -
#failover_when_quorum_exists = on
# Only perform backend node failover
# when the watchdog cluster holds the quorum
# (change requires restart)
#failover_require_consensus = on
# Perform failover when majority of Pgpool-II nodes
# agrees on the backend node status change
# (change requires restart)
#allow_multiple_failover_requests_from_node = off
# A Pgpool-II node can cast multiple votes
# for building the consensus on failover
# (change requires restart)
#enable_consensus_with_half_votes = off
# apply majority rule for consensus and quorum computation
# at 50% of votes in a cluster with even number of nodes.
# when enabled the existence of quorum and consensus
# on failover is resolved after receiving half of the
# total votes in the cluster, otherwise both these
# decisions require at least one more vote than
# half of the total votes.
# (change requires restart)
# - Watchdog cluster membership settings for quorum computation -
#wd_remove_shutdown_nodes = off
# when enabled, cluster membership of properly shutdown
# watchdog nodes gets revoked. After that the node does
# not count towards the quorum and consensus computations
#wd_lost_node_removal_timeout = 0s
# Timeout after which the cluster membership of LOST watchdog
# nodes gets revoked. After that the node does not
# count towards the quorum and consensus computations
# setting timeout to 0 will never revoke the membership
# of LOST nodes
#wd_no_show_node_removal_timeout = 0s
# Time to wait for Watchdog node to connect to the cluster.
# After that time the cluster membership of NO-SHOW node gets
# revoked and it does not count towards the quorum and
# consensus computations
# setting timeout to 0 will not revoke the membership
# of NO-SHOW nodes
# - Lifecheck Setting -
# -- common --
#wd_monitoring_interfaces_list = ''
# Comma separated list of interfaces names to monitor.
# if any interface from the list is active the watchdog will
# consider the network is fine
# 'any' to enable monitoring on all interfaces except loopback
# '' to disable monitoring
# (change requires restart)
#wd_lifecheck_method = 'heartbeat'
# Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
# (change requires restart)
#wd_interval = 10
# lifecheck interval (sec) > 0
# (change requires restart)
# -- heartbeat mode --
#heartbeat_hostname0 = ''
# Host name or IP address used
# for sending heartbeat signal.
# (change requires restart)
#heartbeat_port0 = 9694
# Port number used for receiving/sending heartbeat signal
# Usually this is the same as heartbeat_portX.
# (change requires restart)
#heartbeat_device0 = ''
# Name of NIC device (such as 'eth0')
# used for sending/receiving heartbeat
# signal to/from destination 0.
# This works only when this is not empty
# and pgpool has root privilege.
# (change requires restart)
#heartbeat_hostname1 = ''
#heartbeat_port1 = 9694
#heartbeat_device1 = ''
#heartbeat_hostname2 = ''
#heartbeat_port2 = 9694
#heartbeat_device2 = ''
#wd_heartbeat_keepalive = 2
# Interval time of sending heartbeat signal (sec)
# (change requires restart)
#wd_heartbeat_deadtime = 30
# Deadtime interval for heartbeat signal (sec)
# (change requires restart)
# -- query mode --
#wd_life_point = 3
# lifecheck retry times
# (change requires restart)
#wd_lifecheck_query = 'SELECT 1'
# lifecheck query to pgpool from watchdog
# (change requires restart)
#wd_lifecheck_dbname = 'template1'
# Database name connected for lifecheck
# (change requires restart)
#wd_lifecheck_user = 'nobody'
# watchdog user monitoring pgpools in lifecheck
# (change requires restart)
#wd_lifecheck_password = ''
# Password for watchdog user in lifecheck
# Leaving it empty will make Pgpool-II first look for the
# password in the pool_passwd file before using the empty password
# (change requires restart)
#------------------------------------------------------------------------------
# OTHERS
#------------------------------------------------------------------------------
#relcache_expire = 0
# Life time of relation cache in seconds.
# 0 means no cache expiration (the default).
# The relation cache is used to cache the
# results of queries against the PostgreSQL system
# catalog to obtain various information
# including table structures or if it's a
# temporary table or not. The cache is
# maintained in a pgpool child local memory
# and being kept as long as it survives.
# If someone modifies the table by using
# ALTER TABLE or some such, the relcache is
# not consistent anymore.
# For this purpose, cache_expiration
# controls the life time of the cache.
#relcache_size = 256
# Number of relation cache
# entries. If you see frequently:
# "pool_search_relcache: cache replacement happend"
# in the pgpool log, you might want to increase this number.
#check_temp_table = catalog
# Temporary table check method. catalog, trace or none.
# Default is catalog.
#check_unlogged_table = on
# If on, enable unlogged table check in SELECT statements.
# This initiates queries against system catalog of primary/main
# thus increases load of primary.
# If you are absolutely sure that your system never uses unlogged tables
# and you want to save access to primary/main, you could turn this off.
# Default is on.
#enable_shared_relcache = on
# If on, the relation cache is stored in the memory cache
# and shared among child processes.
# Default is on.
# (change requires restart)
#relcache_query_target = primary
# Target node to send relcache queries. Default is primary node.
# If load_balance_node is specified, queries will be sent to load balance node.
#------------------------------------------------------------------------------
# IN MEMORY QUERY MEMORY CACHE
#------------------------------------------------------------------------------
#memory_cache_enabled = off
# If on, use the memory cache functionality, off by default
# (change requires restart)
#memqcache_method = 'shmem'
# Cache storage method. either 'shmem'(shared memory) or
# 'memcached'. 'shmem' by default
# (change requires restart)
#memqcache_memcached_host = 'localhost'
# Memcached host name or IP address. Mandatory if
# memqcache_method = 'memcached'.
# Defaults to localhost.
# (change requires restart)
#memqcache_memcached_port = 11211
# Memcached port number. Mandatory if memqcache_method = 'memcached'.
# Defaults to 11211.
# (change requires restart)
#memqcache_total_size = 64MB
# Total memory size in bytes for storing memory cache.
# Mandatory if memqcache_method = 'shmem'.
# Defaults to 64MB.
# (change requires restart)
#memqcache_max_num_cache = 1000000
# Total number of cache entries. Mandatory
# if memqcache_method = 'shmem'.
# Each cache entry consumes 48 bytes on shared memory.
# Defaults to 1,000,000(45.8MB).
# (change requires restart)
#memqcache_expire = 0
# Memory cache entry life time specified in seconds.
# 0 means infinite life time. 0 by default.
# (change requires restart)
#memqcache_auto_cache_invalidation = on
# If on, invalidation of query cache is triggered by corresponding
# DDL/DML/DCL(and memqcache_expire). If off, it is only triggered
# by memqcache_expire. on by default.
# (change requires restart)
#memqcache_maxcache = 400kB
# Maximum SELECT result size in bytes.
# Must be smaller than memqcache_cache_block_size. Defaults to 400KB.
# (change requires restart)
#memqcache_cache_block_size = 1MB
# Cache block size in bytes. Mandatory if memqcache_method = 'shmem'.
# Defaults to 1MB.
# (change requires restart)
#memqcache_oiddir = '/var/log/pgpool/oiddir'
# Temporary work directory to record table oids
# (change requires restart)
#cache_safe_memqcache_table_list = ''
# Comma separated list of table names to memcache
# that don't write to database
# Regexp are accepted
#cache_unsafe_memqcache_table_list = ''
# Comma separated list of table names not to memcache
# that don't write to database
# Regexp are accepted
backend_hostname0 = '10.0.10.7'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/opt/bitnami/pgpool/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = '10.0.10.8'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/opt/bitnami/pgpool/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_hostname2 = '10.0.10.9'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/opt/bitnami/pgpool/data'
backend_flag2 = 'ALLOW_TO_FAILOVER'
I have no name!@73d3fcf715c2:/opt/bitnami/pgpool/conf$
I hope I have provided all the information and that you can spot something. If you need anything else, let me know.
Thanks, Cheers, Daniel
AZO GmbH & Co. KG
Rosenberger Str. 28
D-74706
Osterburken
Tel.: +49 6291 92-6449
Mob.: +49 162 9919448
Fax: +49 6291 9290449
Mail: Daniel.Camarena at azo.com
Web: http://www.azo.com/
AZO. We Love Ingredients.
KG: Registered office Osterburken, Register Court Mannheim HRA 450086, General partner: AZO Beteiligungs GmbH, registered office Osterburken, Register Court Mannheim HRB 450261
Managing Directors: Rainer Zimmermann | Daniel Auerhammer | Dr. Matthias Fechner | Jan-Wilko Helms | Dennis Künkel
This e-mail and its attachments are confidential. If you are not the intended recipient of this e-mail message, please delete it and inform us accordingly. This e-mail was checked for viruses when sent, however we are not liable for any virus contamination.
-----Original Message-----
From: Tatsuo Ishii <ishii at sraoss.co.jp>
Sent: Monday, October 16, 2023 04:59
To: Camarena Daniel <Daniel.Camarena at azo.com>
Cc: pgpool-general at pgpool.net
Subject: Re: [pgpool-general: 8942] pgpool 4.4.4: reading status file: 1 th backend is set to down status
> Hi,
>
> I've a cluster with 3 nodes. Every node runs bitnami/pgpool:4.4.4 as proxy and bitnami/postgresql-repmgr:15.4.0 as server. A PostgreSQL connection to all services (pg0, pg1, pg2, pgpool0, pgpool1, pgpool2) can be established.
> In the QA system I see that pgpool of node 1 is not running properly. It is always in state "starting" and never "healthy". Therefore orchestration is terminating and restarting the container.
> Having a look at the log of pgpool1 and comparing it with the other pgpool instances there is one difference:
> main pid 1: LOG: reading status file: 1 th backend is set to down status
>
> Therefore my questions:
>
> 1. Is there a file which buffers pg states?
If you mean the "pg_status" column in the show pool_nodes command, no. It is obtained from PostgreSQL on the fly when the show pool_nodes command is executed.
> 2. How did the system get into this state?
I am not familiar with bitnami pgpool nor repmgr. So all I can do is answer from the pgpool point of view. It was caused either by a failover triggered by health check (pgpool detects an error / shutdown of PostgreSQL), or by pcp_detach_node being executed. I cannot tell which without looking into the pgpool log and pgpool.conf.
Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp/