pgpool-II 4.4.10 Documentation
Chapter 5. Server Configuration
Clustering mode determines how the PostgreSQL servers are synchronized. It is set with the backend_clustering_mode parameter. In this section we discuss how to set the clustering mode. See Section 2.1.1 for more details.
This mode is the most popular and recommended clustering mode. In this mode PostgreSQL itself is responsible for replicating data to each server. To enable this mode, use 'streaming_replication' for backend_clustering_mode.
backend_clustering_mode = 'streaming_replication'
In this mode you can have up to 127 streaming replication standby servers; it is also possible to run with no standby server at all.
The drawback of this mode is that the replication delay must be considered while distributing read queries, and visibility consistency among nodes is not guaranteed. Many additional parameters are provided to mitigate these problems, but they can make administration harder. Also, it may take some time to perform failover when the primary server goes down and one of the standbys is promoted. If you want to avoid these problems, consider snapshot isolation mode.
See Section 5.12 for additional parameters for streaming replication mode.
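As a minimal sketch, a pgpool.conf fragment for this mode might look like the following. The host names, port numbers, and threshold value are illustrative assumptions, not recommendations:

```
backend_clustering_mode = 'streaming_replication'

# Illustrative backend definitions (host names and ports are assumptions)
backend_hostname0 = 'server1'
backend_port0 = 5432
backend_hostname1 = 'server2'
backend_port1 = 5432

# Stop sending read queries to a standby whose replication delay
# exceeds this many bytes of WAL (the value here is only an example)
delay_threshold = 10000000
```

The delay_threshold setting is one of the additional parameters mentioned above for coping with replication delay; see Section 5.12 for the full list.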
In this mode Pgpool-II itself replicates data between the PostgreSQL backends by sending write queries to all of them. To enable this mode, use 'native_replication' for backend_clustering_mode.
backend_clustering_mode = 'native_replication'
In this mode you can have up to 127 standby replication servers; it is also possible to run with no standby server at all.
Load balancing (see Section 5.8 ) can also be used with replication mode to distribute the load to the attached backend nodes.
The drawback of this mode is that visibility consistency among nodes is not guaranteed. This can lead to inconsistency among nodes and to reading inconsistent data. If you want to avoid these problems, consider snapshot isolation mode.
The following options affect the behavior of Pgpool-II in the replication mode.

When replication_stop_on_mismatch is set to on and the backend nodes do not all reply with the same packet kind to a query that was sent to all PostgreSQL backend nodes, the backend node whose reply differs from the majority is degenerated (detached) by Pgpool-II. If replication_stop_on_mismatch is set to off and a similar situation happens, Pgpool-II only terminates the current user session but does not degenerate a backend node.
Note: Pgpool-II does not examine the data returned by the backends and takes the decision only by comparing the result packet types.
A typical use case for enabling replication_stop_on_mismatch is to guard against data inconsistency among the backend nodes. For example, you may want to degenerate a backend node if an UPDATE statement fails on one backend node while succeeding on the others.
Default is off.
This parameter can be changed by reloading the Pgpool-II configurations.
When failover_if_affected_tuples_mismatch is set to on and the backend nodes do not all reply with the same number of affected tuples to an INSERT/UPDATE/DELETE query, the backend node whose reply differs from the majority is degenerated by Pgpool-II. If failover_if_affected_tuples_mismatch is set to off and a similar situation happens, Pgpool-II only terminates the current user session but does not degenerate a backend node.
Note: In case of a tie, when two or more groups have the same number of nodes, the group containing the primary node (the backend node with the youngest node id) takes precedence.
Default is off.
This parameter can be changed by reloading the Pgpool-II configurations.
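Both safeguards can be enabled together in pgpool.conf. This is a sketch; whether to turn them on depends on how aggressively you want mismatching nodes detached:

```
# Detach a node whose reply packet kind differs from the majority
replication_stop_on_mismatch = on

# Detach a node whose affected-tuple count differs from the majority
failover_if_affected_tuples_mismatch = on
```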
When replicate_select is set to on, Pgpool-II enables SELECT query replication mode; that is, SELECT queries are sent to all backend nodes.
Table 5-3. Effect of replicate_select with load_balance_mode on SELECT routing

| replicate_select is true | Y | N | N | N | N |
| load_balance_mode is true | ANY | Y | Y | Y | N |
| SELECT is inside a transaction block | ANY | Y | Y | N | ANY |
| Transaction isolation level is SERIALIZABLE and the transaction has issued a write query | ANY | Y | N | ANY | ANY |
| results (R: replication, M: send only to main, L: load balance) | R | M | L | L | M |
Default is off.
This parameter can be changed by reloading the Pgpool-II configurations.
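The routing rules of Table 5-3 can be summarized as a small decision function. The sketch below is only an illustration of the table, not Pgpool-II's actual implementation:

```python
def route_select(replicate_select, load_balance_mode,
                 inside_transaction, serializable_write):
    """Where a SELECT is routed in replication mode, per Table 5-3.

    Returns 'R' (replicate to all nodes), 'M' (send only to the main
    node), or 'L' (load balance among nodes).
    """
    if replicate_select:
        # Column 1: replicate_select wins regardless of other settings.
        return 'R'
    if not load_balance_mode:
        # Column 5: without load balancing, SELECTs go to the main node.
        return 'M'
    if inside_transaction and serializable_write:
        # Column 2: a SERIALIZABLE transaction that has already issued a
        # write must keep reading from the main node.
        return 'M'
    # Columns 3 and 4: the SELECT can be load balanced.
    return 'L'
```

For example, a SELECT inside a SERIALIZABLE transaction that has already written is sent only to the main node even when load balancing is on.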
When set to on, Pgpool-II will automatically lock the table on PostgreSQL before an INSERT statement is issued for that table.
When replicating a table with SERIAL data type, the SERIAL column value may get different values on the different backends. The workaround to this problem is to explicitly lock the table before issuing the INSERT.
So, to lock the table automatically, Pgpool-II performs the following transformation:
INSERT INTO ...
to
BEGIN;
LOCK TABLE ...
INSERT INTO ...
COMMIT;
Caution: This approach severely degrades the transactions' parallelism.
Pgpool-II V2.2 or later automatically detects whether the table has a SERIAL column, and never locks the table if it has no SERIAL column.
Pgpool-II V3.0 through V3.0.4 use a row lock against the sequence relation rather than a table lock. This is intended to minimize lock conflicts with VACUUM (including autovacuum). However, this can lead to another problem: after transaction wraparound happens, row locking against the sequence relation causes a PostgreSQL internal error (more precisely, an access error on pg_clog, which keeps transaction status). To prevent this, PostgreSQL core developers decided to disallow row locking against sequences, which of course broke Pgpool-II (the "fixed" versions of PostgreSQL were released as 9.0.5, 8.4.9, 8.3.16 and 8.2.22).
Pgpool-II V3.0.5 or later uses a row lock against the pgpool_catalog.insert_lock table, because newer PostgreSQL disallows a row lock against the sequence relation. Therefore the insert_lock table must be created beforehand in all databases accessed via Pgpool-II. See Section 2.8 for more details. If the insert_lock table does not exist, Pgpool-II locks the insert target table. This behavior is the same as in the Pgpool-II V2.2 and V2.3 series.
If you want to use an insert_lock that is compatible with older releases, you can specify the lock method via the configure script. See Section 2.5 for more details.
For fine (per statement) control:
set insert_lock to true, and add /*NO INSERT LOCK*/ at the beginning of an INSERT statement for which you do not want to acquire the table lock.
set insert_lock to false, and add /*INSERT LOCK*/ at the beginning of an INSERT statement for which you want to acquire the table lock.
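For example, with insert_lock set to true, a single statement can opt out of the lock like this (the table name t1 is a hypothetical example):

```sql
-- insert_lock = true, but skip the lock for this one statement
/*NO INSERT LOCK*/ INSERT INTO t1 VALUES (1);

-- insert_lock = false, but acquire the lock for this one statement
/*INSERT LOCK*/ INSERT INTO t1 VALUES (1);
```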
Note: If insert_lock is enabled, the regression tests for PostgreSQL 8.0 fail in transactions, privileges, rules, and alter_table.
The reason for this is that Pgpool-II tries to LOCK the VIEW for the rule test, which produces the error message below:

! ERROR: current transaction is aborted, commands ignored until end of transaction block

For example, the transactions test tries an INSERT into a table which does not exist, and Pgpool-II causes PostgreSQL to acquire the lock for the table. Of course this results in an error. The transaction will be aborted, and the following INSERT statement produces the above error message.
Default is on.
This parameter can be changed by reloading the Pgpool-II configurations.
Specifies a table name used for large object replication control. If it is specified, Pgpool-II will lock the table specified by lobj_lock_table and generate a large object id by looking into pg_largeobject system catalog and then call lo_create to create the large object. This procedure guarantees that Pgpool-II will get the same large object id in all DB nodes in replication mode.
Note: PostgreSQL 8.0 and older does not have lo_create, so this feature does not work with PostgreSQL 8.0 and older versions.
A call to the libpq function lo_creat() triggers this feature. Large object creation through the Java API (JDBC driver), the PHP API (pg_lo_create, or similar APIs in PHP libraries such as PDO), and equivalent APIs in various other programming languages are known to use a similar protocol, and thus should also work.
This feature does not work with the following operations on large objects:
All APIs using lo_create, lo_import_with_oid.
lo_import function in backend called in SELECT.
lo_create function in backend called in SELECT.
Note: All PostgreSQL users must have a write access on lobj_lock_table and it can be created in any schema.
Example to create a large object lock table:
CREATE TABLE public.my_lock_table ();
GRANT ALL ON public.my_lock_table TO PUBLIC;
Default is ''(empty), which disables the feature.
This mode is similar to the native replication mode except it adds the visibility consistency among nodes. The implementation is based on a research paper Pangea: An Eager Database Replication Middleware guaranteeing Snapshot Isolation without modification of Database Servers. To enable this mode, use 'snapshot_isolation' for backend_clustering_mode.
backend_clustering_mode = 'snapshot_isolation'
For example, you can avoid the following inconsistency among nodes caused by visibility inconsistency. Here S1 and S2 denote sessions, while N1 and N2 denote PostgreSQL servers 1 and 2 respectively.
S1/N1: BEGIN;
S1/N2: BEGIN;
S1/N1: UPDATE t1 SET i = i + 1;	-- i = 1
S1/N2: UPDATE t1 SET i = i + 1;	-- i = 1
S1/N1: COMMIT;
S2/N1: BEGIN;
S2/N2: BEGIN;
S2/N2: DELETE FROM t1 WHERE i = 1;	-- delete 1 row since S1/N2 is not committed yet
S2/N1: DELETE FROM t1 WHERE i = 1;	-- delete no row since S1/N1 is committed and i is not 1 anymore
S1/N2: COMMIT;
S2/N1: COMMIT;
S2/N2: COMMIT;
In the snapshot isolation mode, the result will be one of the two outcomes below, and the data consistency among database nodes is never damaged.
S1/N1: BEGIN;
S1/N2: BEGIN;
S1/N1: UPDATE t1 SET i = i + 1;	-- i = 1
S1/N2: UPDATE t1 SET i = i + 1;	-- i = 1
S2/N1: BEGIN;
S2/N2: BEGIN;
S1/N1: COMMIT;
S1/N2: COMMIT;
S2/N1: DELETE FROM t1 WHERE i = 1;	-- delete no row since S1/N1 is committed and i is not 1 anymore
S2/N2: DELETE FROM t1 WHERE i = 1;	-- delete no row since S1/N2 is committed and i is not 1 anymore
S2/N1: COMMIT;
S2/N2: COMMIT;
S1/N1: BEGIN;
S1/N2: BEGIN;
S1/N1: UPDATE t1 SET i = i + 1;	-- i = 1
S1/N2: UPDATE t1 SET i = i + 1;	-- i = 1
S2/N1: BEGIN;
S2/N2: BEGIN;
S2/N1: DELETE FROM t1 WHERE i = 1;	-- delete 1 row since S1/N1 is not committed yet
S2/N2: DELETE FROM t1 WHERE i = 1;	-- delete 1 row since S1/N2 is not committed yet
S1/N1: COMMIT;
S1/N2: COMMIT;
S2/N1: COMMIT;
S2/N2: COMMIT;
It is necessary to set the transaction isolation level to REPEATABLE READ. That means you need to set it in postgresql.conf like this:
default_transaction_isolation = 'repeatable read'
In this mode PostgreSQL itself is responsible for replicating data to each server, using logical replication. To enable this mode, use 'logical_replication' for backend_clustering_mode.
backend_clustering_mode = 'logical_replication'
In this mode you can have up to 127 logical replication standby servers; it is also possible to run with no standby server at all.
The drawback of this mode is that the replication delay must be considered while distributing read queries, and visibility consistency among nodes is not guaranteed. Also, certain kinds of changes, such as DDL and large objects, are not replicated.
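Since the actual replication is configured in PostgreSQL itself rather than in Pgpool-II, setting this mode up also involves standard logical replication commands on the backends. The names mypub and mysub and the connection string below are hypothetical examples:

```sql
-- On the publisher (primary):
CREATE PUBLICATION mypub FOR ALL TABLES;

-- On the subscriber (standby):
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=primary dbname=mydb user=repl'
    PUBLICATION mypub;
```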
This mode is used to couple Pgpool-II with Slony-I. Slony-I is responsible for doing the actual data replication. To enable this mode, use 'slony' for backend_clustering_mode.
backend_clustering_mode = 'slony'
In this mode you can have up to 127 replica servers; it is also possible to run with no replica server at all.
The drawback of this mode is that the replication delay must be considered while distributing read queries, and visibility consistency among nodes is not guaranteed. Also, certain kinds of changes, such as DDL and large objects, are not replicated.
Since streaming replication and logical replication were introduced, few systems employ Slony-I. Unless there is a particular reason, using this mode is not recommended.
In this mode, Pgpool-II does not care about database synchronization. It is the user's responsibility to make the whole system work meaningfully. Load balancing is not possible in this mode. To enable this mode, use 'raw' for backend_clustering_mode.
backend_clustering_mode = 'raw'