pgpool-II 4.5.5 Documentation
Chapter 8. Configuration Examples
This section shows an example of how to configure Pgpool-II replication mode and snapshot isolation mode.
In streaming replication mode described in Section 8.2, replication is performed by PostgreSQL's streaming replication functionality. However, in native replication mode, Pgpool-II performs replication by routing write queries to all PostgreSQL servers.
Snapshot isolation mode is similar to native replication mode, except that it additionally guarantees visibility consistency among nodes.
PostgreSQL 14 is used in this configuration example. All scripts have been tested with PostgreSQL 10 and later.
In this example, we use one Pgpool-II and three PostgreSQL servers to describe how to configure and use Pgpool-II's replication.
In this example we use 3 servers with CentOS 7.9 installed. Let these servers be server1, server2, server3. We install PostgreSQL on all servers and Pgpool-II on server1.
In this example we use the minimum settings to configure replication. In a production environment, it is recommended to enable Watchdog to avoid single points of failure. For more details about Watchdog configurations, please refer to Section 8.2.7.10.
Table 8-7. Hostname and IP address
Hostname | IP Address | Role |
---|---|---|
server1 | 192.168.137.101 | PostgreSQL node0, Pgpool-II |
server2 | 192.168.137.102 | PostgreSQL node1 |
server3 | 192.168.137.103 | PostgreSQL node2 |
Table 8-8. PostgreSQL version and Configuration
Item | Value | Detail |
---|---|---|
PostgreSQL Version | 14.0 | - |
port | 5432 | - |
$PGDATA | /var/lib/pgsql/14/data | - |
Archive mode | on | /var/lib/pgsql/archivedir |
Table 8-9. Pgpool-II version and Configuration
Item | Value | Detail |
---|---|---|
Pgpool-II Version | 4.3.0 | - |
port | 9999 | Pgpool-II accepts connections |
port | 9898 | PCP process accepts connections |
Config file | /etc/pgpool-II/pgpool.conf | Pgpool-II config file |
Pgpool-II start user | postgres (Pgpool-II 4.1 or later) | In Pgpool-II 4.0 or before, the default startup user is root |
Clustering mode | native replication mode | - |
Clustering mode | snapshot isolation mode | - |
In this example, we install Pgpool-II and PostgreSQL RPM packages with YUM.
Install PostgreSQL from PostgreSQL YUM repository.
[all servers]# yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
[all servers]# yum install -y postgresql14-server
Since Pgpool-II related packages are also included in PostgreSQL YUM repository, add the "exclude" settings to /etc/yum.repos.d/pgdg-redhat-all.repo so that Pgpool-II is not installed from PostgreSQL YUM repository.
[all servers]# vi /etc/yum.repos.d/pgdg-redhat-all.repo
The following is a setting example of /etc/yum.repos.d/pgdg-redhat-all.repo.
[pgdg-common]
...
exclude=pgpool*
[pgdg14]
...
exclude=pgpool*
[pgdg13]
...
exclude=pgpool*
[pgdg12]
...
exclude=pgpool*
[pgdg11]
...
exclude=pgpool*
[pgdg10]
...
exclude=pgpool*
[pgdg96]
...
exclude=pgpool*
Install Pgpool-II using Pgpool-II YUM repository.
[all servers]# yum install -y https://www.pgpool.net/yum/rpms/4.4/redhat/rhel-7-x86_64/pgpool-II-release-4.4-1.noarch.rpm
[all servers]# yum install -y pgpool-II-pg14-*
Before you start the configuration process, please check the following prerequisites.
Run the following command to set the password of postgres user on each server.
[all servers]# passwd postgres
To use the online recovery of Pgpool-II, the settings that allow passwordless SSH to all servers are required. Execute the following command on all servers to set up passwordless SSH. The generated key file name is id_rsa_pgpool.
[all servers]# su - postgres
[all servers]$ cd ~/.ssh
[all servers]$ ssh-keygen -t rsa -f id_rsa_pgpool
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server1
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server2
[all servers]$ ssh-copy-id -i id_rsa_pgpool.pub postgres@server3
After setting SSH, use ssh postgres@serverX -i ~/.ssh/id_rsa_pgpool command to make sure that you can log in without entering a password. Edit /etc/ssh/sshd_config if necessary and restart sshd.
To allow the repl user to execute online recovery scripts without specifying a password, we create the .pgpass file in the postgres user's home directory and change its permission to 600 on each server.
[all servers]# su - postgres
[all servers]$ vi /var/lib/pgsql/.pgpass
server1:5432:replication:repl:<repl user password>
server2:5432:replication:repl:<repl user password>
server3:5432:replication:repl:<repl user password>
server1:5432:postgres:postgres:<postgres user password>
server2:5432:postgres:postgres:<postgres user password>
server3:5432:postgres:postgres:<postgres user password>
[all servers]$ chmod 600 /var/lib/pgsql/.pgpass
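Since the .pgpass entries are repetitive, they can also be generated with a short loop. This is a minimal sketch assuming the same three hosts as above; pgpass.example is a throwaway file name used only for illustration (on the real servers write to /var/lib/pgsql/.pgpass).

```shell
# Generate .pgpass entries for all three backends in one pass.
# REPL_PW and POSTGRES_PW are placeholders (assumptions); substitute real passwords.
REPL_PW='<repl user password>'
POSTGRES_PW='<postgres user password>'
PGPASS=pgpass.example      # use /var/lib/pgsql/.pgpass on the real servers
: > "$PGPASS"              # truncate/create the file
for host in server1 server2 server3; do
    echo "${host}:5432:replication:repl:${REPL_PW}"      >> "$PGPASS"
    echo "${host}:5432:postgres:postgres:${POSTGRES_PW}" >> "$PGPASS"
done
chmod 600 "$PGPASS"
cat "$PGPASS"
```

The libpq format is host:port:database:user:password, with the special database name "replication" matching replication connections.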
To connect to the Pgpool-II and PostgreSQL servers, the target ports must be made accessible through any firewall management software. The following is an example for CentOS/RHEL 7.
[all servers]# firewall-cmd --permanent --zone=public --add-service=postgresql
[all servers]# firewall-cmd --permanent --zone=public --add-port=9999/tcp --add-port=9898/tcp
[all servers]# firewall-cmd --reload
If Watchdog is enabled, you also need to open ports 9000 and 9694.
[all servers]# firewall-cmd --permanent --zone=public --add-port=9000/tcp --add-port=9694/udp
This section describes how to create and configure a PostgreSQL server.
In this example, we use WAL archiving. First, we create the directory /var/lib/pgsql/archivedir to store WAL segments on all servers.
[all servers]# su - postgres
[all servers]$ mkdir /var/lib/pgsql/archivedir
Create only one PostgreSQL server on server1. The other two PostgreSQL servers are created by using Pgpool-II's online recovery functionality in Section 8.3.7.1.
Run the following command to create a PostgreSQL database cluster on server1.
[server1]# su - postgres
[server1]$ /usr/pgsql-14/bin/initdb -E UTF8 --no-locale
Then edit $PGDATA/postgresql.conf on server1.
[server1]$ vi $PGDATA/postgresql.conf
listen_addresses = '*'
archive_mode = on
archive_command = 'cp "%p" "/var/lib/pgsql/archivedir/%f"'
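PostgreSQL runs archive_command from the data directory with %p expanded to the WAL file's relative path and %f to its bare file name. One archive cycle can be sketched with throwaway directories (demo_pgdata and demo_archivedir are hypothetical names used only for this illustration):

```shell
# Mimic one archive cycle: archive_command runs with $PGDATA as the working
# directory, so %p is a relative path and %f is the bare file name.
mkdir -p demo_pgdata/pg_wal demo_archivedir
echo "fake WAL segment" > demo_pgdata/pg_wal/000000010000000000000001
# expanded form of: cp "%p" "/var/lib/pgsql/archivedir/%f"
( cd demo_pgdata && cp "pg_wal/000000010000000000000001" "../demo_archivedir/000000010000000000000001" )
ls demo_archivedir
```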
Assuming that all the Pgpool-II servers and the PostgreSQL servers are in the same subnet, edit pg_hba.conf to enable the scram-sha-256 authentication method.
[server1]$ vi $PGDATA/pg_hba.conf
host    all             all             samenet         scram-sha-256
host    replication     all             samenet         scram-sha-256
Run the following command to start the PostgreSQL server.
[server1]$ /usr/pgsql-14/bin/pg_ctl start
Create PostgreSQL users.
Table 8-10. PostgreSQL users
User Name | Password | Detail |
---|---|---|
repl | repl | PostgreSQL replication user |
pgpool | pgpool | User performing health check (health_check_user) |
postgres | postgres | User performing online recovery |
[server1]$ psql -U postgres -p 5432
postgres=# SET password_encryption = 'scram-sha-256';
postgres=# CREATE ROLE pgpool WITH LOGIN;
postgres=# CREATE ROLE repl WITH REPLICATION LOGIN;
postgres=# \password pgpool
postgres=# \password repl
postgres=# \password postgres
Snapshot isolation mode is only available when PostgreSQL's transaction isolation level is "repeatable read". If you want to use snapshot isolation mode, set default_transaction_isolation ='repeatable read' in postgresql.conf.
[server1]$ vi $PGDATA/postgresql.conf
default_transaction_isolation = 'repeatable read'
When installing Pgpool-II using RPM, the Pgpool-II configuration sample files are in /etc/pgpool-II.
First, specify Pgpool-II clustering mode in backend_clustering_mode.
Native replication mode
backend_clustering_mode = 'native_replication'
Snapshot isolation mode
backend_clustering_mode = 'snapshot_isolation'
To allow Pgpool-II to accept all incoming connections, we set listen_addresses = '*'.
listen_addresses = '*'
Enable health check so that Pgpool-II can detect PostgreSQL failures. Note that if the network is unstable, the health check may fail even though the backend is running properly, and failover or a degenerate operation may occur. To prevent such false detections, we set health_check_max_retries = 3. Specify health_check_user and health_check_password. In this example, we leave health_check_password empty and create the entry in pool_passwd. See Section 8.3.5.6 for how to create the entry in pool_passwd. From Pgpool-II 4.0, if these parameters are left blank, Pgpool-II will first try to get the password for that specific user from the pool_passwd file before using the empty password.
health_check_period = 5
health_check_timeout = 30
health_check_user = 'pgpool'
health_check_password = ''
health_check_max_retries = 3
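As a rough sanity check on these values, the worst-case time to declare a node down can be estimated by assuming each attempt may block for up to health_check_timeout seconds and that retries are spaced health_check_retry_delay seconds apart (default 1). This is a back-of-the-envelope estimate, not an exact formula from the manual:

```shell
# Rough upper bound: the initial attempt plus 3 retries, each blocking for
# up to 30 s, with a 1 s delay before each retry.
timeout=30
retries=3
retry_delay=1
echo "worst case ~ $(( (retries + 1) * timeout + retries * retry_delay )) seconds"
# prints: worst case ~ 123 seconds
```

If that is too slow for your environment, lower health_check_timeout rather than the retry count, so transient network hiccups still get retried.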
Specify the PostgreSQL backend information. Multiple backends can be specified by adding a number at the end of the parameter name.
# - Backend Connection Settings -
backend_hostname0 = 'server1'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/pgsql/14/data'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'server2'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/14/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_hostname2 = 'server3'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/var/lib/pgsql/14/data'
backend_flag2 = 'ALLOW_TO_FAILOVER'
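Because the backendN blocks differ only in the index and host name, they can be generated from a host list. A sketch, assuming this example's port and data directory (backend.conf.example is a hypothetical scratch file; append its contents to pgpool.conf yourself):

```shell
# Emit one numbered backend block per host, using this example's port and paths.
i=0
: > backend.conf.example
for host in server1 server2 server3; do
    cat >> backend.conf.example <<EOF
backend_hostname$i = '$host'
backend_port$i = 5432
backend_weight$i = 1
backend_data_directory$i = '/var/lib/pgsql/14/data'
backend_flag$i = 'ALLOW_TO_FAILOVER'
EOF
    i=$((i + 1))
done
cat backend.conf.example
```

Adding a fourth node then only requires extending the host list, which keeps the indexes consecutive as Pgpool-II expects.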
Next, in order to perform online recovery, we specify the PostgreSQL user name and the online recovery commands recovery_1st_stage_command and recovery_2nd_stage_command. Because PostgreSQL superuser privileges are required for performing online recovery, we specify the postgres user in recovery_user.
recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'recovery_1st_stage'
recovery_2nd_stage_command = 'recovery_2nd_stage'
The sample scripts replication_mode_recovery_1st_stage.sample, replication_mode_recovery_2nd_stage.sample and pgpool_remote_start.sample are installed in /etc/pgpool-II/sample_scripts/. Create the scripts used by the online recovery commands from these sample scripts and copy them to the database cluster directory.
[server1]# cp -p /etc/pgpool-II/sample_scripts/replication_mode_recovery_1st_stage.sample /var/lib/pgsql/14/data/recovery_1st_stage
[server1]# cp -p /etc/pgpool-II/sample_scripts/replication_mode_recovery_2nd_stage.sample /var/lib/pgsql/14/data/recovery_2nd_stage
[server1]# cp -p /etc/pgpool-II/sample_scripts/pgpool_remote_start.sample /var/lib/pgsql/14/data/pgpool_remote_start
[server1]# chown postgres:postgres /var/lib/pgsql/14/data/{recovery_1st_stage,recovery_2nd_stage,pgpool_remote_start}
Basically, these scripts should work if you change PGHOME to match your PostgreSQL installation directory.
[server1]# vi /var/lib/pgsql/14/data/recovery_1st_stage
...
PGHOME=/usr/pgsql-14
...
[server1]# vi /var/lib/pgsql/14/data/recovery_2nd_stage
...
PGHOME=/usr/pgsql-14
...
[server1]# vi /var/lib/pgsql/14/data/pgpool_remote_start
...
PGHOME=/usr/pgsql-14
...
In order to use the online recovery functionality, the functions pgpool_recovery, pgpool_remote_start and pgpool_switch_xlog are required, so we need to install the pgpool_recovery extension on the template1 database of PostgreSQL server server1.
[server1]# su - postgres
[server1]$ psql template1 -c "CREATE EXTENSION pgpool_recovery"
Note: The recovery_1st_stage script does not support tablespaces. If you are using tablespaces, you need to modify the script to support tablespaces.
Enable client authentication between clients and Pgpool-II. When installed from RPM, the Pgpool-II configuration file pool_hba.conf is in /etc/pgpool-II. By default, pool_hba authentication is disabled; set enable_pool_hba = on to enable it.
enable_pool_hba = on
The format of the pool_hba.conf file follows PostgreSQL's pg_hba.conf format very closely. Set the pgpool and postgres users' authentication method to scram-sha-256.
[server1]# vi /etc/pgpool-II/pool_hba.conf
host    all         pgpool      0.0.0.0/0   scram-sha-256
host    all         postgres    0.0.0.0/0   scram-sha-256
Note: Please note that in Pgpool-II 4.0 only AES encrypted password or clear text password can be specified in health_check_password, sr_check_password, wd_lifecheck_password, recovery_password in pgpool.conf.
The default password file name for authentication is pool_passwd. To use scram-sha-256 authentication, a decryption key to decrypt the passwords is required. We create the .pgpoolkey file in the home directory of the Pgpool-II start user postgres (Pgpool-II 4.1 or later; in Pgpool-II 4.0 or before, Pgpool-II is started as root by default).
[server1]# su - postgres
[server1]$ echo 'some string' > ~/.pgpoolkey
[server1]$ chmod 600 ~/.pgpoolkey
Execute the command pg_enc -m -k /path/to/.pgpoolkey -u username -p to register the user name and AES encrypted password in the pool_passwd file. If pool_passwd doesn't exist yet, it will be created in the same directory as pgpool.conf.
[server1]# su - postgres
[server1]$ pg_enc -m -k ~/.pgpoolkey -u pgpool -p
db password: [pgpool user's password]
[server1]$ pg_enc -m -k ~/.pgpoolkey -u postgres -p
db password: [postgres user's password]
[server1]$ cat /etc/pgpool-II/pool_passwd
pgpool:AESheq2ZMZjynddMWk5sKP/Rw==
postgres:AESHs/pWL5rtXy2IwuzroHfqg==
Since user authentication is required to use the PCP commands, we need to specify the user name and md5 encrypted password in pcp.conf, in the format "username:encrypted password".
We use pg_md5 to create the encrypted password entry for pgpool user as below:
[server1]# echo 'pgpool:'`pg_md5 PCP password` >> /etc/pgpool-II/pcp.conf
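pg_md5 writes the plain MD5 hex digest of its argument, so on a machine without the Pgpool-II tools installed the pcp.conf entry can be previewed with coreutils md5sum. This is an illustrative assumption, not what the manual prescribes, and pcp.conf.example is a throwaway file name:

```shell
# Build a "username:md5hash" line equivalent to pg_md5's output.
PCP_PW='secret'                       # placeholder PCP password
hash=$(printf '%s' "$PCP_PW" | md5sum | awk '{print $1}')
echo "pgpool:${hash}" > pcp.conf.example
cat pcp.conf.example
```

Note that printf is used instead of echo so no trailing newline is hashed; hashing "secret\n" would give a different digest.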
The logging collector process was introduced in Pgpool-II 4.2. In this example, we enable the logging collector.
log_destination = 'stderr'
logging_collector = on
log_directory = '/var/log/pgpool_log'
log_filename = 'pgpool-%Y-%m-%d_%H%M%S.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 10MB
Create the log directory on server1.
[server1]# mkdir /var/log/pgpool_log/
[server1]# chown postgres:postgres /var/log/pgpool_log/
Before starting Pgpool-II, please start PostgreSQL servers first. Also, when stopping PostgreSQL, it is necessary to stop Pgpool-II first. Run the following command to start or stop Pgpool-II.
Starting Pgpool-II
# systemctl start pgpool.service
Stopping Pgpool-II
# systemctl stop pgpool.service
Once the configuration is completed, let's start to use Pgpool-II.
First, let's start Pgpool-II.
[server1]# systemctl start pgpool.service
Then, we create PostgreSQL node1 and node2 using online recovery. Ensure that recovery_1st_stage, recovery_2nd_stage and pgpool_remote_start scripts used by pcp_recovery_node command are in the database cluster directory on server1.
[server1]# pcp_recovery_node -h server1 -p 9898 -U pgpool -n 1
Password:
pcp_recovery_node -- Command Successful
[server1]# pcp_recovery_node -h server1 -p 9898 -U pgpool -n 2
Password:
pcp_recovery_node -- Command Successful
If pcp_recovery_node has run successfully, verify that the PostgreSQL node0 is started as the main node, and node1 and node2 are started as replicas.
# psql -h server1 -p 9999 -U pgpool postgres -c "show pool_nodes"
Password for user pgpool:
 node_id | hostname | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | server1  | 5432 | up     | up        | 0.333333  | main    | main    | 0          | true              | 0                 |                   |                        | 2021-12-02 16:48:21
 1       | server2  | 5432 | up     | up        | 0.333333  | replica | replica | 0          | false             | 0                 |                   |                        | 2021-12-02 16:48:21
 2       | server3  | 5432 | up     | up        | 0.333333  | replica | replica | 0          | false             | 0                 |                   |                        | 2021-12-02 16:48:21
(3 rows)
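The status column can also be checked mechanically, which is handy in monitoring scripts. Below is a sketch that parses unaligned output (psql's -tA option prints pipe-separated tuples with no header); the here-document stands in for the real command psql -h server1 -p 9999 -U pgpool postgres -tA -c "show pool_nodes":

```shell
# Exit non-zero if any node's status (4th pipe-separated field) is not "up".
check_all_up() {
    awk -F'|' '$4 != "up" { bad++ } END { exit bad > 0 }'
}
# Simulated -tA output; replace the here-document with the real psql call.
cat <<'EOF' | check_all_up && echo "all nodes up"
0|server1|5432|up|up|0.333333|main|main|0|true|0|||2021-12-02 16:48:21
1|server2|5432|up|up|0.333333|replica|replica|0|false|0|||2021-12-02 16:48:21
2|server3|5432|up|up|0.333333|replica|replica|0|false|0|||2021-12-02 16:48:21
EOF
```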
Next, let's verify the replication functionality using the benchmark tool pgbench.
[server1]# /usr/pgsql-14/bin/createdb test -U postgres -p 9999
[server1]# /usr/pgsql-14/bin/pgbench -h server1 -U postgres -i -p 9999 test
To check if the replication works correctly, directly connect to each PostgreSQL server to see if they return identical results.
[server1]# /usr/pgsql-14/bin/psql -h server1 -U postgres -p 5432 test
test=# \d
          List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(4 rows)

[server1]# /usr/pgsql-14/bin/psql -h server2 -U postgres -p 5432 test
test=# \d
          List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(4 rows)

[server1]# /usr/pgsql-14/bin/psql -h server3 -U postgres -p 5432 test
test=# \d
          List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(4 rows)
server1, server2 and server3 return identical results.
Next, let's run pgbench for a while and check the results.
[server1]# /usr/pgsql-14/bin/pgbench -h server1 -U postgres -p 9999 -T 10 test
All PostgreSQL servers return identical results.
[server1]# /usr/pgsql-14/bin/psql -h server1 -U postgres -p 5432 test -c "SELECT sum(abalance) FROM pgbench_accounts"
Password for user postgres:
  sum
--------
 -99710
(1 row)

[server1]# /usr/pgsql-14/bin/psql -h server2 -U postgres -p 5432 test -c "SELECT sum(abalance) FROM pgbench_accounts"
Password for user postgres:
  sum
--------
 -99710
(1 row)

[server1]# /usr/pgsql-14/bin/psql -h server3 -U postgres -p 5432 test -c "SELECT sum(abalance) FROM pgbench_accounts"
Password for user postgres:
  sum
--------
 -99710
(1 row)
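The three comparisons can be wrapped in a small loop. In this sketch get_sum is a hypothetical helper whose real body would be the psql call shown above; here it returns a stubbed value so the loop can run anywhere:

```shell
# Compare the pgbench balance sum reported by each backend against node0's.
get_sum() {
    # Real version (run on server1):
    #   psql -h "$1" -U postgres -p 5432 -tA test \
    #       -c "SELECT sum(abalance) FROM pgbench_accounts"
    echo "-99710"        # stubbed result for illustration
}
ref=$(get_sum server1)
status=0
for host in server2 server3; do
    [ "$(get_sum "$host")" = "$ref" ] || { echo "MISMATCH on $host"; status=1; }
done
[ "$status" -eq 0 ] && echo "all backends consistent: sum=$ref"
```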
Next, stop the PostgreSQL main node on server1 and verify the switchover of the main node.
[server1]# su - postgres -c "/usr/pgsql-14/bin/pg_ctl -m i stop"
After stopping PostgreSQL on server1, switchover occurs and PostgreSQL on server2 becomes the new main node.
[server1]# psql -h server1 -p 9999 -U pgpool postgres -c "show pool_nodes"
Password for user pgpool:
 node_id | hostname | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | server1  | 5432 | down   | down      | 0.333333  | replica | replica | 0          | false             | 0                 |                   |                        | 2021-12-02 16:57:45
 1       | server2  | 5432 | up     | up        | 0.333333  | main    | main    | 1          | true              | 0                 |                   |                        | 2021-12-02 16:48:21
 2       | server3  | 5432 | up     | up        | 0.333333  | replica | replica | 0          | false             | 0                 |                   |                        | 2021-12-02 16:48:21
(3 rows)
Here, we use Pgpool-II online recovery functionality to restore the PostgreSQL node0 on server1.
# pcp_recovery_node -h server1 -p 9898 -U pgpool -n 0
Password:
pcp_recovery_node -- Command Successful
Then verify that server1 is started as the main node.
# psql -h server1 -p 9999 -U pgpool postgres -c "show pool_nodes"
Password for user pgpool:
 node_id | hostname | port | status | pg_status | lb_weight |  role   | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | server1  | 5432 | up     | up        | 0.333333  | main    | main    | 0          | true              | 0                 |                   |                        | 2021-12-02 16:57:45
 1       | server2  | 5432 | up     | up        | 0.333333  | replica | replica | 0          | false             | 0                 |                   |                        | 2021-12-02 16:48:21
 2       | server3  | 5432 | up     | up        | 0.333333  | replica | replica | 0          | false             | 0                 |                   |                        | 2021-12-02 16:48:21
(3 rows)