This section shows an example of a streaming replication configuration using Pgpool-II. In this example, we use three Pgpool-II servers to manage the PostgreSQL servers, creating a robust cluster system that avoids the single point of failure and split brain.
PostgreSQL 11 is used in this configuration example. All scripts have also been tested with PostgreSQL 12.
We use three servers running CentOS 7.4, named server1, server2 and server3. We install PostgreSQL and Pgpool-II on each server.
Note: The roles of Active, Standby, Primary and Standby are not fixed and may be changed by further operations.
Table 7-2. Hostname and IP address
Hostname | IP Address | Virtual IP |
---|---|---|
server1 | 192.168.137.101 | 192.168.137.150 |
server2 | 192.168.137.102 | |
server3 | 192.168.137.103 | |
Table 7-3. PostgreSQL version and Configuration
Item | Value | Detail |
---|---|---|
PostgreSQL Version | 11.1 | - |
port | 5432 | - |
$PGDATA | /var/lib/pgsql/11/data | - |
Archive mode | on | /var/lib/pgsql/archivedir |
Start automatically | Disable | - |
Table 7-4. Pgpool-II version and Configuration
Item | Value | Detail |
---|---|---|
Pgpool-II Version | 4.0.2 | - |
port | 9999 | Pgpool-II accepts connections |
 | 9898 | PCP process accepts connections |
 | 9000 | watchdog accepts connections |
 | 9694 | UDP port for receiving Watchdog's heartbeat signal |
Config file | /etc/pgpool-II/pgpool.conf | Pgpool-II config file |
Pgpool-II start user | root | See Section 2.1.7 to start Pgpool-II with a non-root user |
Running mode | streaming replication mode | - |
Watchdog | on | Life check method: heartbeat |
Start automatically | Disable | - |
We assume that all the Pgpool-II servers and the PostgreSQL servers are in the same subnet.
To use the automated failover and online recovery of Pgpool-II, passwordless SSH to all backend servers is required, both from the Pgpool-II startup user (the root user by default) to the postgres user and from the postgres user to the postgres user. Execute the following commands on all servers to set up passwordless SSH. The generated key file name is id_rsa_pgpool.
[all servers]# mkdir ~/.ssh
[all servers]# chmod 700 ~/.ssh
[all servers]# cd ~/.ssh
[all servers]# ssh-keygen -t rsa -f id_rsa_pgpool
[all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
[all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
[all servers]# ssh-copy-id -i id_rsa_pgpool.pub postgres@server3

[all servers]# su - postgres
[all servers]$ mkdir ~/.ssh
[all servers]$ chmod 700 ~/.ssh
[all servers]$ cd ~/.ssh
[all servers]$ ssh-keygen -t rsa -f id_rsa_pgpool
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server3
After setting up the keys, use the ssh postgres@serverX -i ~/.ssh/id_rsa_pgpool command to make sure that you can log in without entering a password. If ssh-copy-id fails, set a password for the postgres user and temporarily allow password authentication, for example.
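As a quick check, you can run a harmless command over SSH from each server and as each user; if key-based authentication works, no password prompt appears (hostname here is just an arbitrary example command, not part of the original procedure):

[all servers]# ssh postgres@server1 -i ~/.ssh/id_rsa_pgpool hostname
[all servers]# su - postgres
[all servers]$ ssh postgres@server2 -i ~/.ssh/id_rsa_pgpool hostname
[all servers]$ ssh postgres@server3 -i ~/.ssh/id_rsa_pgpool hostname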
To connect to the Pgpool-II and PostgreSQL servers, the target ports must be opened in the firewall management software. The following is an example for CentOS/RHEL 7.
[all servers]# firewall-cmd --permanent --zone=public --add-service=postgresql
[all servers]# firewall-cmd --permanent --zone=public --add-port=9999/tcp --add-port=9898/tcp --add-port=9000/tcp --add-port=9694/udp
[all servers]# firewall-cmd --reload
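If you want to confirm the resulting firewall configuration, firewall-cmd provides a --list-all option (an optional check, not part of the original procedure):

[all servers]# firewall-cmd --zone=public --list-all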
In this example, we install Pgpool-II and PostgreSQL RPM packages with YUM.
Install PostgreSQL from PostgreSQL YUM repository.
[all servers]# yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[all servers]# yum install -y postgresql11-server
Since Pgpool-II related packages are also included in the PostgreSQL YUM repository, add the "exclude" settings to /etc/yum.repos.d/pgdg-redhat-all.repo so that Pgpool-II is not installed from the PostgreSQL YUM repository.
[all servers]# vi /etc/yum.repos.d/pgdg-redhat-all.repo
The following is a setting example of /etc/yum.repos.d/pgdg-redhat-all.repo.
[pgdg-common]
...
exclude=pgpool*

[pgdg14]
...
exclude=pgpool*

[pgdg13]
...
exclude=pgpool*

[pgdg12]
...
exclude=pgpool*

[pgdg11]
...
exclude=pgpool*

[pgdg10]
...
exclude=pgpool*

[pgdg96]
...
exclude=pgpool*
Install Pgpool-II from Pgpool-II YUM repository.
[all servers]# yum install -y http://www.pgpool.net/yum/rpms/4.0/redhat/rhel-7-x86_64/pgpool-II-release-4.0-3.noarch.rpm
[all servers]# yum install -y pgpool-II-pg11-*
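Optionally, you can confirm the installed versions; both binaries support a --version flag (this verification step is an added suggestion, not part of the original procedure):

[all servers]# /usr/pgsql-11/bin/postgres --version
[all servers]# pgpool --version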
Before you start the configuration process, please check the following prerequisites.
Set up PostgreSQL streaming replication on the primary server. In this example, we use WAL archiving.
First, we create the directory /var/lib/pgsql/archivedir to store WAL segments on all servers.
[all servers]# su - postgres
[all servers]$ mkdir /var/lib/pgsql/archivedir
Initialize PostgreSQL on the primary server.
[server1]# su - postgres
[server1]$ /usr/pgsql-11/bin/initdb -D $PGDATA
Then we edit the configuration file $PGDATA/postgresql.conf on server1 (primary) as follows.
listen_addresses = '*'
archive_mode = on
archive_command = 'cp "%p" "/var/lib/pgsql/archivedir/%f"'
max_wal_senders = 10
max_replication_slots = 10
wal_level = replica
We use the online recovery functionality of Pgpool-II to set up the standby servers after the primary server is started.
For security reasons, we create a user repl used solely for replication, and a user pgpool for the streaming replication delay check and health check of Pgpool-II.
Table 7-5. Users
User Name | Password | Detail |
---|---|---|
repl | repl | PostgreSQL replication user |
pgpool | pgpool | Pgpool-II health check and replication delay check user |
postgres | postgres | User running online recovery |
[server1]# psql -U postgres -p 5432

postgres=# SET password_encryption = 'scram-sha-256';
postgres=# CREATE ROLE pgpool WITH LOGIN;
postgres=# CREATE ROLE repl WITH REPLICATION LOGIN;
postgres=# \password pgpool
postgres=# \password repl
postgres=# \password postgres
Note: If you plan to use detach_false_primary, the role "pgpool" needs to be a PostgreSQL superuser or a member of the "pg_monitor" group to use this feature. To add the "pgpool" role to that group, you can do:
GRANT pg_monitor TO pgpool;
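To double-check the roles created above, you can query pg_roles (an optional verification step, not part of the original procedure):

postgres=# SELECT rolname, rolcanlogin, rolreplication FROM pg_roles WHERE rolname IN ('pgpool', 'repl');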
Assuming that all the Pgpool-II servers and the PostgreSQL servers are in the 192.168.137.0/24 network, edit pg_hba.conf to enable the scram-sha-256 authentication method.
host    all             all             samenet         scram-sha-256
host    replication     all             samenet         scram-sha-256
To allow the repl user to connect for streaming replication and online recovery without specifying a password, we create the .pgpass file in the postgres user's home directory and change its permission to 600 on each PostgreSQL server.
[all servers]# su - postgres
[all servers]$ vi /var/lib/pgsql/.pgpass
server1:5432:replication:repl:<repl user password>
server2:5432:replication:repl:<repl user password>
server3:5432:replication:repl:<repl user password>
[all servers]$ chmod 600 /var/lib/pgsql/.pgpass
Here are the common settings on server1, server2 and server3.
When installing Pgpool-II from RPM, all the Pgpool-II configuration files are in /etc/pgpool-II. In this example, we copy the sample configuration file for streaming replication mode.
# cp /etc/pgpool-II/pgpool.conf.sample-stream /etc/pgpool-II/pgpool.conf
To allow Pgpool-II to accept all incoming connections, we set listen_addresses = '*'.
listen_addresses = '*'
Specify the replication delay check user and password. In this example, we leave sr_check_password empty and create the entry in pool_passwd. From Pgpool-II 4.0, if these parameters are left blank, Pgpool-II will first try to get the password for that specific user from the pool_passwd file before using the empty password.
sr_check_user = 'pgpool'
sr_check_password = ''
Enable health check so that Pgpool-II performs failover. Note that if the network is unstable, the health check may fail even though the backend is running properly, and failover or a degenerate operation may occur. In order to prevent such incorrect detection, we set health_check_max_retries = 3. Specify health_check_user and health_check_password in the same way as sr_check_user and sr_check_password.
health_check_period = 5        # Health check period
                               # Disabled (0) by default
health_check_timeout = 30      # Health check timeout
                               # 0 means no timeout
health_check_user = 'pgpool'
health_check_password = ''
health_check_max_retries = 3
Specify the PostgreSQL backend information. Multiple backends can be specified by adding a number at the end of the parameter name.
# - Backend Connection Settings -

backend_hostname0 = 'server1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/pgsql/11/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'

backend_hostname1 = 'server2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/11/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'

backend_hostname2 = 'server3'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/var/lib/pgsql/11/data'
backend_flag2 = 'ALLOW_TO_FAILOVER'
Specify the failover.sh script to be executed after failover in the failover_command parameter. Because we use three PostgreSQL servers, we also need to specify follow_master_command, which is run after a failover of the primary node. With only two PostgreSQL servers, the follow_master_command setting is not necessary.
failover_command = '/etc/pgpool-II/failover.sh %d %h %p %D %m %H %M %P %r %R'
follow_master_command = '/etc/pgpool-II/follow_master.sh %d %h %p %D %m %H %M %P %r %R'
Create /etc/pgpool-II/failover.sh and /etc/pgpool-II/follow_master.sh, and add execute permission.
# vi /etc/pgpool-II/failover.sh
# vi /etc/pgpool-II/follow_master.sh
# chmod +x /etc/pgpool-II/{failover.sh,follow_master.sh}
/etc/pgpool-II/failover.sh
#!/bin/bash
# This script is run by failover_command.

set -o xtrace
exec > >(logger -i -p local1.info) 2>&1

# Special values:
# %d = node id
# %h = host name
# %p = port number
# %D = database cluster path
# %m = new master node id
# %H = hostname of the new master node
# %M = old master node id
# %P = old primary node id
# %r = new master port number
# %R = new master database cluster path
# %% = '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PORT="$3"
FAILED_NODE_PGDATA="$4"
NEW_MASTER_NODE_ID="$5"
NEW_MASTER_NODE_HOST="$6"
OLD_MASTER_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_MASTER_NODE_PORT="$9"
NEW_MASTER_NODE_PGDATA="${10}"

PGHOME=/usr/pgsql-11


logger -i -p local1.info failover.sh: start: failed_node_id=${FAILED_NODE_ID} old_primary_node_id=${OLD_PRIMARY_NODE_ID} \
    failed_host=${FAILED_NODE_HOST} new_master_host=${NEW_MASTER_NODE_HOST}

## If there's no master node anymore, skip failover.
if [ $NEW_MASTER_NODE_ID -lt 0 ]; then
    logger -i -p local1.info failover.sh: All nodes are down. Skipping failover.
    exit 0
fi

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error failover.sh: passwordless SSH to postgres@${NEW_MASTER_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

# If standby node is down, skip failover.
if [ ${FAILED_NODE_ID} -ne ${OLD_PRIMARY_NODE_ID} ]; then
    logger -i -p local1.info failover.sh: Standby node is down. Skipping failover.
    exit 0
fi

# Promote standby node.
logger -i -p local1.info failover.sh: Primary node is down, promote standby node PostgreSQL@${NEW_MASTER_NODE_HOST}.

ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ${PGHOME}/bin/pg_ctl -D ${NEW_MASTER_NODE_PGDATA} -w promote

if [ $? -ne 0 ]; then
    logger -i -p local1.error failover.sh: new_master_host=${NEW_MASTER_NODE_HOST} promote failed
    exit 1
fi

logger -i -p local1.info failover.sh: end: new_master_node_id=$NEW_MASTER_NODE_ID started as the primary node
exit 0
/etc/pgpool-II/follow_master.sh
#!/bin/bash
# This script is run after failover_command to synchronize the Standby with the new Primary.

set -o xtrace
exec > >(logger -i -p local1.info) 2>&1

# Special values:
# %d = node id
# %h = host name
# %p = port number
# %D = database cluster path
# %m = new master node id
# %H = new master node host name
# %M = old master node id
# %P = old primary node id
# %R = new master database cluster path
# %r = new master port number
# %% = '%' character

FAILED_NODE_ID="$1"
FAILED_NODE_HOST="$2"
FAILED_NODE_PORT="$3"
FAILED_NODE_PGDATA="$4"
NEW_MASTER_NODE_ID="$5"
NEW_MASTER_NODE_HOST="$6"
OLD_MASTER_NODE_ID="$7"
OLD_PRIMARY_NODE_ID="$8"
NEW_MASTER_NODE_PORT="$9"
NEW_MASTER_NODE_PGDATA="${10}"

PGHOME=/usr/pgsql-11
ARCHIVEDIR=/var/lib/pgsql/archivedir
REPL_USER=repl
PCP_USER=pgpool
PGPOOL_PATH=/usr/bin
PCP_PORT=9898

# Recover the standby from the new primary.
logger -i -p local1.info follow_master.sh: start: synchronize the Standby node PostgreSQL@${FAILED_NODE_HOST} with the new Primary node PostgreSQL@${NEW_MASTER_NODE_HOST}

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${NEW_MASTER_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error follow_master.sh: passwordless SSH to postgres@${NEW_MASTER_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Get PostgreSQL major version
PGVERSION=`${PGHOME}/bin/initdb -V | awk '{print $3}' | sed 's/\..*//' | sed 's/\([0-9]*\)[a-zA-Z].*/\1/'`
if [ ${PGVERSION} -ge 12 ]; then
    RECOVERYCONF=${FAILED_NODE_PGDATA}/myrecovery.conf
else
    RECOVERYCONF=${FAILED_NODE_PGDATA}/recovery.conf
fi

# Check the status of the standby
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ${PGHOME}/bin/pg_ctl -w -D ${FAILED_NODE_PGDATA} status

## If the Standby is running, run pg_basebackup.
if [ $? -eq 0 ]; then

    # Execute pg_basebackup
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa_pgpool "

        set -o errexit

        ${PGHOME}/bin/pg_ctl -w -m f -D ${FAILED_NODE_PGDATA} stop

        rm -rf ${FAILED_NODE_PGDATA}
        rm -rf ${ARCHIVEDIR}/*

        ${PGHOME}/bin/pg_basebackup -h ${NEW_MASTER_NODE_HOST} -U ${REPL_USER} -p ${NEW_MASTER_NODE_PORT} -D ${FAILED_NODE_PGDATA} -X stream

        if [ ${PGVERSION} -ge 12 ]; then
            sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
                   -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${FAILED_NODE_PGDATA}/postgresql.conf
        fi

        cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${NEW_MASTER_NODE_HOST} port=${NEW_MASTER_NODE_PORT} user=${REPL_USER} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${NEW_MASTER_NODE_HOST}:${ARCHIVEDIR}/%f %p'
EOT

        if [ ${PGVERSION} -ge 12 ]; then
            touch ${FAILED_NODE_PGDATA}/standby.signal
        else
            echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
        fi
    "

    if [ $? -ne 0 ]; then
        logger -i -p local1.error follow_master.sh: end: pg_basebackup failed
        exit 1
    fi

    # Start the Standby node on ${FAILED_NODE_HOST}
    ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        postgres@${FAILED_NODE_HOST} -i ~/.ssh/id_rsa_pgpool $PGHOME/bin/pg_ctl -l /dev/null -w -D ${FAILED_NODE_PGDATA} start

    # If the Standby started successfully, attach this node
    if [ $? -eq 0 ]; then

        # Run pcp_attach_node to attach the Standby node to Pgpool-II.
        ${PGPOOL_PATH}/pcp_attach_node -w -h localhost -U $PCP_USER -p ${PCP_PORT} -n ${FAILED_NODE_ID}

        if [ $? -ne 0 ]; then
            logger -i -p local1.error follow_master.sh: end: pcp_attach_node failed
            exit 1
        fi

    # If starting the Standby failed, report the error
    else
        logger -i -p local1.error follow_master.sh: end: follow master command failed
        exit 1
    fi

else
    logger -i -p local1.info follow_master.sh: failed_node_id=${FAILED_NODE_ID} is not running. skipping follow master command
    exit 0
fi

logger -i -p local1.info follow_master.sh: end: follow master command complete
exit 0
Note: The follow_master.sh script does not support tablespaces. If you are using tablespaces, you need to modify the script to support tablespaces.
Next, in order to perform online recovery with Pgpool-II, we specify the PostgreSQL user name and the online recovery command recovery_1st_stage. Because the superuser privilege of PostgreSQL is required for online recovery, we specify the postgres user in recovery_user. Then we create recovery_1st_stage and pgpool_remote_start in the database cluster directory of the PostgreSQL primary server (server1), and add execute permission.
recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'recovery_1st_stage'
[server1]# su - postgres
[server1]$ vi /var/lib/pgsql/11/data/recovery_1st_stage
[server1]$ vi /var/lib/pgsql/11/data/pgpool_remote_start
[server1]$ chmod +x /var/lib/pgsql/11/data/{recovery_1st_stage,pgpool_remote_start}
/var/lib/pgsql/11/data/recovery_1st_stage
#!/bin/bash
# This script is executed by "recovery_1st_stage" to recover a Standby node.

set -o xtrace
exec > >(logger -i -p local1.info) 2>&1

PRIMARY_NODE_PGDATA="$1"
DEST_NODE_HOST="$2"
DEST_NODE_PGDATA="$3"
PRIMARY_NODE_PORT="$4"
DEST_NODE_PORT=5432

PRIMARY_NODE_HOST=$(hostname)
PGHOME=/usr/pgsql-11
ARCHIVEDIR=/var/lib/pgsql/archivedir
REPL_USER=repl

logger -i -p local1.info recovery_1st_stage: start: pg_basebackup for Standby node PostgreSQL@${DEST_NODE_HOST}

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${DEST_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error recovery_1st_stage: passwordless SSH to postgres@${DEST_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Get PostgreSQL major version
PGVERSION=`${PGHOME}/bin/initdb -V | awk '{print $3}' | sed 's/\..*//' | sed 's/\([0-9]*\)[a-zA-Z].*/\1/'`
if [ $PGVERSION -ge 12 ]; then
    RECOVERYCONF=${DEST_NODE_PGDATA}/myrecovery.conf
else
    RECOVERYCONF=${DEST_NODE_PGDATA}/recovery.conf
fi

## Execute pg_basebackup to recover the Standby node
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@$DEST_NODE_HOST -i ~/.ssh/id_rsa_pgpool "

    set -o errexit

    rm -rf $DEST_NODE_PGDATA
    rm -rf $ARCHIVEDIR/*

    ${PGHOME}/bin/pg_basebackup -h ${PRIMARY_NODE_HOST} -U ${REPL_USER} -p ${PRIMARY_NODE_PORT} -D ${DEST_NODE_PGDATA} -X stream

    if [ ${PGVERSION} -ge 12 ]; then
        sed -i -e \"\\\$ainclude_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'\" \
               -e \"/^include_if_exists = '$(echo ${RECOVERYCONF} | sed -e 's/\//\\\//g')'/d\" ${DEST_NODE_PGDATA}/postgresql.conf
    fi

    cat > ${RECOVERYCONF} << EOT
primary_conninfo = 'host=${PRIMARY_NODE_HOST} port=${PRIMARY_NODE_PORT} user=${REPL_USER} passfile=''/var/lib/pgsql/.pgpass'''
recovery_target_timeline = 'latest'
restore_command = 'scp ${PRIMARY_NODE_HOST}:${ARCHIVEDIR}/%f %p'
EOT

    if [ ${PGVERSION} -ge 12 ]; then
        touch ${DEST_NODE_PGDATA}/standby.signal
    else
        echo \"standby_mode = 'on'\" >> ${RECOVERYCONF}
    fi

    sed -i \"s/#*port = .*/port = ${DEST_NODE_PORT}/\" ${DEST_NODE_PGDATA}/postgresql.conf
"

if [ $? -ne 0 ]; then
    logger -i -p local1.error recovery_1st_stage: end: pg_basebackup failed. online recovery failed
    exit 1
fi

logger -i -p local1.info recovery_1st_stage: end: recovery_1st_stage complete
exit 0
/var/lib/pgsql/11/data/pgpool_remote_start
#!/bin/bash
# This script is run after recovery_1st_stage to start the Standby node.

set -o xtrace
exec > >(logger -i -p local1.info) 2>&1

PGHOME=/usr/pgsql-11
DEST_NODE_HOST="$1"
DEST_NODE_PGDATA="$2"

logger -i -p local1.info pgpool_remote_start: start: remote start Standby node PostgreSQL@$DEST_NODE_HOST

## Test passwordless SSH
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@${DEST_NODE_HOST} -i ~/.ssh/id_rsa_pgpool ls /tmp > /dev/null

if [ $? -ne 0 ]; then
    logger -i -p local1.error pgpool_remote_start: passwordless SSH to postgres@${DEST_NODE_HOST} failed. Please setup passwordless SSH.
    exit 1
fi

## Start Standby node
ssh -T -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null postgres@$DEST_NODE_HOST -i ~/.ssh/id_rsa_pgpool "
    $PGHOME/bin/pg_ctl -l /dev/null -w -D $DEST_NODE_PGDATA start
"

if [ $? -ne 0 ]; then
    logger -i -p local1.error pgpool_remote_start: PostgreSQL@$DEST_NODE_HOST start failed.
    exit 1
fi

logger -i -p local1.info pgpool_remote_start: end: PostgreSQL@$DEST_NODE_HOST started successfully.
exit 0
In order to use the online recovery functionality, the functions pgpool_recovery, pgpool_remote_start and pgpool_switch_xlog are required, so we need to install pgpool_recovery on the template1 database of the PostgreSQL server server1.
[server1]# su - postgres
[server1]$ psql template1 -c "CREATE EXTENSION pgpool_recovery"
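You can verify that the extension has been created, for example with psql's \dx meta-command (an optional check, not part of the original procedure):

[server1]$ psql template1 -c "\dx pgpool_recovery"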
Note: The recovery_1st_stage script does not support tablespaces. If you are using tablespaces, you need to modify the script to support tablespaces.
Because we already set the PostgreSQL authentication method to scram-sha-256 in the section Before Starting, it is necessary to configure client authentication in Pgpool-II to connect to the backend nodes. Please note that only AES-encrypted passwords or clear text passwords can be specified in health_check_password, sr_check_password, wd_lifecheck_password and recovery_password in pgpool.conf. When installing from RPM, the Pgpool-II configuration file pool_hba.conf is in /etc/pgpool-II. By default, pool_hba authentication is disabled; set enable_pool_hba = on to enable it.
enable_pool_hba = on
The format of the pool_hba.conf file closely follows PostgreSQL's pg_hba.conf format. Set the authentication method for the pgpool and postgres users to scram-sha-256.
host    all         pgpool      0.0.0.0/0       scram-sha-256
host    all         postgres    0.0.0.0/0       scram-sha-256
The default password file name for authentication is pool_passwd. To use scram-sha-256 authentication, the decryption key to decrypt the passwords is required. We create the .pgpoolkey file in the root user's home directory.
[all servers]# echo 'some string' > ~/.pgpoolkey [all servers]# chmod 600 ~/.pgpoolkey
Execute the command pg_enc -m -k /path/to/.pgpoolkey -u username -p to register the user name and AES-encrypted password in the file pool_passwd. If pool_passwd doesn't exist yet, it will be created in the same directory as pgpool.conf.
[all servers]# pg_enc -m -k /root/.pgpoolkey -u pgpool -p
db password: [pgpool user's password]
[all servers]# pg_enc -m -k /root/.pgpoolkey -u postgres -p
db password: [postgres user's password]

# cat /etc/pgpool-II/pool_passwd
pgpool:AESheq2ZMZjynddMWk5sKP/Rw==
postgres:AESHs/pWL5rtXy2IwuzroHfqg==
Enable watchdog functionality on server1, server2, server3.
use_watchdog = on
Specify the virtual IP address that accepts connections from clients on server1, server2, server3. Ensure that the IP address set as the virtual IP is not already in use.
delegate_IP = '192.168.137.150'
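A simple way to confirm that the virtual IP is still free is to ping it before Pgpool-II is started; the ping should receive no reply (an optional check, not part of the original procedure):

[all servers]# ping -c 1 192.168.137.150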
To bring up/down the virtual IP and send the ARP requests, we set if_up_cmd, if_down_cmd and arping_cmd. The network interface used in this example is "enp0s8".
if_up_cmd = 'ip addr add $_IP_$/24 dev enp0s8 label enp0s8:0'
if_down_cmd = 'ip addr del $_IP_$/24 dev enp0s8'
arping_cmd = 'arping -U $_IP_$ -w 1 -I enp0s8'
Set if_cmd_path and arping_path according to the command path.
if_cmd_path = '/sbin'
arping_path = '/usr/sbin'
Specify the hostname and port number of each Pgpool-II server.
server1
wd_hostname = 'server1'
wd_port = 9000
server2
wd_hostname = 'server2'
wd_port = 9000
server3
wd_hostname = 'server3'
wd_port = 9000
Specify the hostname, Pgpool-II port number, and watchdog port number of monitored Pgpool-II servers on each Pgpool-II server.
server1
# - Other pgpool Connection Settings -

other_pgpool_hostname0 = 'server2'
other_pgpool_port0 = 9999
other_wd_port0 = 9000

other_pgpool_hostname1 = 'server3'
other_pgpool_port1 = 9999
other_wd_port1 = 9000
server2
# - Other pgpool Connection Settings -

other_pgpool_hostname0 = 'server1'
other_pgpool_port0 = 9999
other_wd_port0 = 9000

other_pgpool_hostname1 = 'server3'
other_pgpool_port1 = 9999
other_wd_port1 = 9000
server3
# - Other pgpool Connection Settings -

other_pgpool_hostname0 = 'server1'
other_pgpool_port0 = 9999
other_wd_port0 = 9000

other_pgpool_hostname1 = 'server2'
other_pgpool_port1 = 9999
other_wd_port1 = 9000
Specify the hostname and port number of destination for sending heartbeat signal on server1, server2, server3.
server1
heartbeat_destination0 = 'server2'
heartbeat_destination_port0 = 9694
heartbeat_device0 = ''

heartbeat_destination1 = 'server3'
heartbeat_destination_port1 = 9694
heartbeat_device1 = ''
server2
heartbeat_destination0 = 'server1'
heartbeat_destination_port0 = 9694
heartbeat_device0 = ''

heartbeat_destination1 = 'server3'
heartbeat_destination_port1 = 9694
heartbeat_device1 = ''
server3
heartbeat_destination0 = 'server1'
heartbeat_destination_port0 = 9694
heartbeat_device0 = ''

heartbeat_destination1 = 'server2'
heartbeat_destination_port1 = 9694
heartbeat_device1 = ''
If you want to ignore the pgpool_status file at startup of Pgpool-II, add "-D" to the start options OPTS in /etc/sysconfig/pgpool.
[all servers]# vi /etc/sysconfig/pgpool
...
OPTS=" -D -n"
In the example, we output Pgpool-II's log to syslog.
log_destination = 'syslog'
syslog_facility = 'LOCAL1'
Create the Pgpool-II log file.

[all servers]# mkdir /var/log/pgpool-II
[all servers]# touch /var/log/pgpool-II/pgpool.log
Edit the syslog configuration file /etc/rsyslog.conf.

[all servers]# vi /etc/rsyslog.conf
...
*.info;mail.none;authpriv.none;cron.none;LOCAL1.none    /var/log/messages
LOCAL1.*                                                /var/log/pgpool-II/pgpool.log
Set up logrotate for pgpool.log in the same way as /var/log/messages.

[all servers]# vi /etc/logrotate.d/syslog
...
/var/log/messages
/var/log/pgpool-II/pgpool.log
/var/log/secure
Restart rsyslog service.
[all servers]# systemctl restart rsyslog
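To confirm that LOCAL1 messages now reach the new log file, you can send a test message with the logger command (an optional check, not part of the original procedure):

[all servers]# logger -p local1.info "pgpool log test"
[all servers]# tail /var/log/pgpool-II/pgpool.log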
Since user authentication is required to use the PCP command, specify the user name and md5-encrypted password in pcp.conf. Here we create the encrypted password for the pgpool user, and add "username:encrypted password" to /etc/pgpool-II/pcp.conf.

[all servers]# echo 'pgpool:'`pg_md5 PCP password` >> /etc/pgpool-II/pcp.conf
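The resulting entry in /etc/pgpool-II/pcp.conf has the form username:md5-hash (32 hexadecimal characters); the hash shown below is only a placeholder:

[all servers]# cat /etc/pgpool-II/pcp.conf
pgpool:<md5 hash of the PCP password>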
Since the follow_master_command script has to execute a PCP command without entering a password, we create .pcppass in the home directory of the Pgpool-II startup user (root).
[all servers]# echo 'localhost:9898:pgpool:pgpool' > ~/.pcppass [all servers]# chmod 600 ~/.pcppass
The configuration of Pgpool-II is now complete.
Next we start Pgpool-II. Before starting Pgpool-II, please start the PostgreSQL servers first. Also, when stopping PostgreSQL, it is necessary to stop Pgpool-II first.
Starting Pgpool-II
# systemctl start pgpool.service
Stopping Pgpool-II
# systemctl stop pgpool.service
Let's start to use Pgpool-II. First, we start the primary PostgreSQL.
[server1]# su - postgres
[server1]$ /usr/pgsql-11/bin/pg_ctl start -D $PGDATA
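Once the primary is running, you can optionally confirm that WAL archiving works by forcing a segment switch and then listing the archive directory after a moment (this verification is an added suggestion, not part of the original procedure):

[server1]$ psql -c "SELECT pg_switch_wal();"
[server1]$ ls /var/lib/pgsql/archivedir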
Then let's start Pgpool-II on server1, server2, server3 by using the following command.
# systemctl start pgpool.service
First, we set up the PostgreSQL standby servers by using the Pgpool-II online recovery functionality. Ensure that the recovery_1st_stage and pgpool_remote_start scripts used by the pcp_recovery_node command are in the database cluster directory of the PostgreSQL primary server (server1).
# pcp_recovery_node -h 192.168.137.150 -p 9898 -U pgpool -n 1
Password:
pcp_recovery_node -- Command Successful

# pcp_recovery_node -h 192.168.137.150 -p 9898 -U pgpool -n 2
Password:
pcp_recovery_node -- Command Successful
After executing the pcp_recovery_node command, verify that server2 and server3 are started as PostgreSQL standby servers.
# psql -h 192.168.137.150 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | server1  | 5432 | up     | 0.333333  | primary | 0          | false             | 0                 | 2019-02-18 11:26:31
 1       | server2  | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 | 2019-02-18 11:27:49
 2       | server3  | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | 2019-02-18 11:27:49
Confirm the watchdog status by using pcp_watchdog_info. The Pgpool-II server which is started first runs as MASTER.
# pcp_watchdog_info -h 192.168.137.150 -p 9898 -U pgpool
Password:
3 YES server1:9999 Linux server1 server1

server1:9999 Linux server1 server1 9999 9000 4 MASTER  #The Pgpool-II server started first becomes "MASTER".
server2:9999 Linux server2 server2 9999 9000 7 STANDBY #run as standby
server3:9999 Linux server3 server3 9999 9000 7 STANDBY #run as standby
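You can also check on the MASTER node that the virtual IP has actually been brought up on the network interface (enp0s8 in this example; this check is an added suggestion, not part of the original procedure):

[server1]# ip addr show enp0s8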
Stop the active server server1; then server2 or server3 will be promoted to the active server. To stop server1, we can stop the Pgpool-II service or shut down the whole system. Here, we stop the Pgpool-II service.
[server1]# systemctl stop pgpool.service

# pcp_watchdog_info -p 9898 -h 192.168.137.150 -U pgpool
Password:
3 YES server2:9999 Linux server2 server2

server2:9999 Linux server2 server2 9999 9000 4 MASTER     #server2 is promoted to MASTER
server1:9999 Linux server1 server1 9999 9000 10 SHUTDOWN  #server1 is stopped
server3:9999 Linux server3 server3 9999 9000 7 STANDBY    #server3 runs as STANDBY
Start Pgpool-II on server1, which we have just stopped, and verify that server1 now runs as a standby.
[server1]# systemctl start pgpool.service

[server1]# pcp_watchdog_info -p 9898 -h 192.168.137.150 -U pgpool
Password:
3 YES server2:9999 Linux server2 server2

server2:9999 Linux server2 server2 9999 9000 4 MASTER
server1:9999 Linux server1 server1 9999 9000 7 STANDBY
server3:9999 Linux server3 server3 9999 9000 7 STANDBY
First, use psql to connect to PostgreSQL via virtual IP, and verify the backend information.
# psql -h 192.168.137.150 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | server1  | 5432 | up     | 0.333333  | primary | 0          | false             | 0                 | 2019-02-18 13:08:02
 1       | server2  | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | 2019-02-18 13:21:56
 2       | server3  | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 | 2019-02-18 13:21:56
Next, stop primary PostgreSQL server server1, and verify automatic failover.
[server1]$ pg_ctl -D /var/lib/pgsql/11/data -m immediate stop
After stopping PostgreSQL on server1, a failover occurs and PostgreSQL on server2 becomes the new primary.
# psql -h 192.168.137.150 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | server1  | 5432 | down   | 0.333333  | standby | 0          | false             | 0                 | 2019-02-18 13:22:25
 1       | server2  | 5432 | up     | 0.333333  | primary | 0          | true              | 0                 | 2019-02-18 13:22:25
 2       | server3  | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | 2019-02-18 13:22:28
server3 is running as a standby of the new primary server2.
[server3]# psql -h server3 -p 5432 -U pgpool postgres -c "select pg_is_in_recovery()"
 pg_is_in_recovery
-------------------
 t

[server2]# su - postgres
$ psql
postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f

postgres=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 11915
usesysid         | 16385
usename          | repl
application_name | walreceiver
client_addr      | 192.168.137.103
client_hostname  |
client_port      | 37834
backend_start    | 2019-02-18 13:22:27.472038+09
backend_xmin     |
state            | streaming
sent_lsn         | 0/8E000060
write_lsn        | 0/8E000060
flush_lsn        | 0/8E000060
replay_lsn       | 0/8E000060
write_lag        |
flush_lag        |
replay_lag       |
sync_priority    | 0
sync_state       | async
Here, we use the Pgpool-II online recovery functionality to restore server1 (the old primary server) as a standby. Before restoring the old primary server, please ensure that the recovery_1st_stage and pgpool_remote_start scripts exist in the database cluster directory of the current primary server server2.
# pcp_recovery_node -h 192.168.137.150 -p 9898 -U pgpool -n 0
Password:
pcp_recovery_node -- Command Successful
Then verify that server1 is started as a standby.
# psql -h 192.168.137.150 -p 9999 -U pgpool postgres -c "show pool_nodes"
 node_id | hostname | port | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | server1  | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | 2019-02-18 13:27:44
 1       | server2  | 5432 | up     | 0.333333  | primary | 0          | false             | 0                 | 2019-02-18 13:22:25
 2       | server3  | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 | 2019-02-18 13:22:28