Welcome to the pgpool-II tutorial. From here, you can learn how to install, set up, and run parallel queries or do replication using pgpool-II. We assume that you already know basic PostgreSQL operations, so please refer to the PostgreSQL documentation if necessary.
First, we must learn how to install and configure pgpool-II and database nodes before using replication or parallel query.
Installing pgpool-II is very easy. In the directory into which you have extracted the source tarball, execute the following commands.
$ ./configure
$ make
$ make install
The configure script collects your system information and uses it for the compilation procedure. You can pass command line arguments to the configure script to change the default behavior, such as the installation directory. pgpool-II will be installed to the /usr/local directory by default.

The make command compiles the source code, and make install installs the executables. You must have write permission on the installation directory.

In this tutorial, we will install pgpool-II in the default /usr/local directory.
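If you wanted to install pgpool-II somewhere other than /usr/local, you could pass the standard --prefix option to configure. This is only an illustration; the path below is a made-up example, not something the tutorial requires.

$ ./configure --prefix=/opt/pgpool2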
Note: pgpool-II requires the libpq library from PostgreSQL 7.4 or later (version 3 protocol). If the configure script displays the following error message, the libpq library may not be installed, or it may not be version 3.

configure: error: libpq is not installed or libpq is old

If the library is version 3 but the above message is still displayed, your libpq library is probably not recognized by the configure script. The configure script searches for the libpq library under /usr/local/pgsql. If you have installed PostgreSQL in a directory other than /usr/local/pgsql, use the --with-pgsql option, or the --with-pgsql-includedir and --with-pgsql-libdir options, when you execute configure.
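For example, if PostgreSQL had been installed under a non-default location, the configure invocation might look like the following. The path is purely illustrative.

$ ./configure --with-pgsql=/opt/PostgreSQL/8.4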
pgpool-II configuration parameters are saved in the pgpool.conf file. The file uses a "parameter = value" per line format. When you install pgpool-II, pgpool.conf.sample is automatically created. We recommend copying and renaming it to pgpool.conf, and editing it as you like.
$ cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf
By default, pgpool-II only accepts connections from the local host on port 9999. If you wish to receive connections from other hosts, set listen_addresses to '*'.
listen_addresses = 'localhost'
port = 9999
We will use the default parameters in this tutorial.
pgpool-II has an administrative interface for retrieving information on database nodes, shutting down pgpool-II, and so on, over the network. To use the PCP commands, user authentication is required. This authentication is different from PostgreSQL's user authentication. A user name and password need to be defined in the pcp.conf file. In the file, a user name and password are listed as a pair on each line, separated by a colon (:). Passwords are encrypted in md5 hash format.
postgres:e8a48653851e28c69d0506508fb27fc5
When you install pgpool-II, pcp.conf.sample is automatically created. We recommend copying and renaming it to pcp.conf, and editing it.
$ cp /usr/local/etc/pcp.conf.sample /usr/local/etc/pcp.conf
To encrypt your password into md5 hash format, use the pg_md5 command, which is installed as one of pgpool-II's executables. pg_md5 takes text as a command line argument, and displays its md5-hashed text. For example, give "postgres" as the command line argument, and pg_md5 displays the md5-hashed text on its standard output.
$ /usr/bin/pg_md5 postgres
e8a48653851e28c69d0506508fb27fc5
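As a convenience, the user name and the hashed password can be appended to pcp.conf in one step by combining pg_md5 with a shell command substitution. This is just a sketch that assumes the paths used elsewhere in this tutorial.

$ echo "postgres:$(/usr/bin/pg_md5 postgres)" >> /usr/local/etc/pcp.conf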
PCP commands are executed via the network, so the port number must be configured with the pcp_port parameter in the pgpool.conf file. We will use the default 9898 for pcp_port in this tutorial.
pcp_port = 9898
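Once pcp.conf and pcp_port are set up and pgpool-II is running, you can try a PCP command such as pcp_node_count, which reports the number of configured database nodes. The example below assumes the classic positional argument form (timeout, hostname, port, user name, password) used by older pgpool-II releases, together with the user and password registered above.

$ pcp_node_count 10 localhost 9898 postgres postgres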
Now, we need to set up backend PostgreSQL servers for pgpool-II. These servers can be placed within the same host as pgpool-II, or on separate machines. If you decide to place the servers on the same host, different port numbers must be assigned for each server. If the servers are placed on separate machines, they must be configured properly so that they can accept network connections from pgpool-II.
In this tutorial, we will place three servers on the same host as pgpool-II, and assign them port numbers 5432, 5433 and 5434 respectively. To configure pgpool-II, edit pgpool.conf as follows.
backend_hostname0 = 'localhost'
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'localhost'
backend_port1 = 5433
backend_weight1 = 1
backend_hostname2 = 'localhost'
backend_port2 = 5434
backend_weight2 = 1
For backend_hostname, backend_port, and backend_weight, set the node's hostname, port number, and ratio for load balancing. At the end of each parameter name, a node ID must be specified by appending an integer starting from 0 (i.e. 0, 1, 2, …). The backend_weight parameters are all 1, meaning that SELECT queries are equally distributed among the three servers.
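If the three backend servers do not exist yet, they could be initialized and started on the same host roughly as follows. This is only a sketch: the data directory paths are placeholders, and it assumes the PostgreSQL binaries (initdb, pg_ctl) are on your PATH.

$ for i in 0 1 2; do
>     initdb -D /path/to/data$i
>     pg_ctl -D /path/to/data$i -o "-p $((5432 + i))" -l /tmp/pgsql$i.log start
> done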
To fire up pgpool-II, execute the following command on a terminal.
$ pgpool
The above command, however, prints no log messages because pgpool-II detaches from the terminal. If you want to see the pgpool-II log messages, pass the -n option to the pgpool command so that pgpool-II runs as a non-daemon process and the terminal is not detached.
$ pgpool -n &
The log messages will be printed on the terminal, so it is recommended to use the following options.
$ pgpool -n -d > /tmp/pgpool.log 2>&1 &
The -d option enables debug messages to be generated.
The above command keeps appending log messages to /tmp/pgpool.log. If you need to rotate log files, pipe the logs to an external command that has a log rotation function. For example, you can use rotatelogs from Apache2:
$ pgpool -n 2>&1 | /usr/local/apache2/bin/rotatelogs \
    -l -f /var/log/pgpool/pgpool.log.%A 86400 &

This will generate a log file named "pgpool.log.Thursday", then rotate it at 00:00 midnight. rotatelogs appends logs to a file if it already exists. To delete old log files before rotation, you could use cron:
55 23 * * * /usr/bin/find /var/log/pgpool -type f -mtime +5 -exec /bin/rm -f '{}' \;

Please note that rotatelogs may exist as /usr/sbin/rotatelogs2 in some distributions. The -f option generates a log file as soon as rotatelogs starts, and is available in Apache 2.2.9 or later.
Also, cronolog can be used.
$ pgpool -n 2>&1 | /usr/sbin/cronolog \
    --hardlink=/var/log/pgsql/pgpool.log \
    '/var/log/pgsql/%Y-%m-%d-pgpool.log' &
To stop pgpool-II, execute the following command.
$ pgpool stop
If any client is still connected, pgpool-II waits for it to disconnect, and then terminates itself. Run the following command instead if you want to shut down pgpool-II forcibly.
$ pgpool -m fast stop
Replication enables the same data to be copied to multiple database nodes.
In this section, we'll use the three database nodes which we have already set up in section "1. Let's Begin!", and walk you step by step through creating a database replication system. Sample data to be replicated will be generated by the pgbench benchmark program.
To enable the database replication function, set replication_mode to true in the pgpool.conf file.
replication_mode = true
When replication_mode is set to true, pgpool-II will send a copy of any received query to all the database nodes. When load_balance_mode is set to true, pgpool-II will distribute SELECT queries among the database nodes.
load_balance_mode = true
In this section, we enable both replication_mode and load_balance_mode.
To reflect the changes in pgpool.conf, pgpool-II must be restarted. Please refer to section "1.5 Starting/Stopping pgpool-II".
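For reference, a typical restart using the commands from section 1.5 could look like this:

$ pgpool -m fast stop
$ pgpool -n > /tmp/pgpool.log 2>&1 &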
After configuring pgpool.conf and restarting pgpool-II, let's try the actual replication and see if everything is working.
First, we need to create a database to be replicated. We will name it "bench_replication". This database needs to be created on all the nodes. Use the createdb command through pgpool-II, and the database will be created on all the nodes.
$ createdb -p 9999 bench_replication
Then, we'll execute pgbench with the -i option. The -i option initializes the database with pre-defined tables and data.
$ pgbench -i -p 9999 bench_replication
The following table is a summary of the tables and data that will be created by pgbench -i. If the listed tables and data are created on all the nodes, replication is working correctly.
| Table Name | Number of Rows |
|------------|----------------|
| branches   | 1              |
| tellers    | 10             |
| accounts   | 100000         |
| history    | 0              |
Let's use a simple shell script to check the above on all the nodes. The following script will display the number of rows in branches, tellers, accounts, and history tables on all the nodes (5432, 5433, 5434).
$ for port in 5432 5433 5434; do
>     echo $port
>     for table_name in branches tellers accounts history; do
>         echo $table_name
>         psql -c "SELECT count(*) FROM $table_name" -p $port bench_replication
>     done
> done
With parallel query, data is split by range and stored across two or more database nodes. This is called partitioning. Moreover, you can replicate some of the tables among the database nodes even in parallel query mode.
To enable parallel query in pgpool-II, you must set up another database called "System Database" (we will denote it as SystemDB from this point).
SystemDB holds the user-defined rules to decide what data will be saved in which database node. Another use of SystemDB is to merge results sent back from the database nodes using dblink.
In this section, we will use the three database nodes which we have set up in section "1. Let's Begin!", and walk you step by step through creating a parallel query database system. We will use pgbench again to create sample data.
To enable the parallel query function, set parallel_mode to true in the pgpool.conf file.
parallel_mode = true
Setting parallel_mode to true does not start parallel query automatically. pgpool-II needs the SystemDB and the rules that tell it how to distribute data to the database nodes. Also, dblink, which is used by the SystemDB, makes connections to pgpool-II. Therefore, listen_addresses needs to be configured so that pgpool-II accepts those connections.
listen_addresses = '*'
Attention: Replication is not performed for partitioned tables, although parallel query and replication can be enabled at the same time.

Attention: You can have both partitioned tables and replicated tables. However, a table cannot be partitioned and replicated at the same time. Because the data structure of partitioned tables and replicated tables is different, the "bench_replication" database created in section "2. Your First Replication" cannot be reused in parallel query mode.
replication_mode = true
load_balance_mode = false

OR

replication_mode = false
load_balance_mode = true
In this section, we will set parallel_mode and load_balance_mode to true, listen_addresses to '*', and replication_mode to false.
"System database" is just an ordinaly database. The only requirement is that dblink functions and the dist_def table, which describes partioning rule, must be installed in the system database. You could have a system database on a database node, or you could have multiple nodes having system database by using cascade configuration in pgpool-II.
In this section, we will create the SystemDB on the node listening on port 5432. The following are the configuration parameters for the SystemDB.
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
Actually, the above are the default settings in pgpool.conf. Now, we must create a user called "pgpool", and a database called "pgpool" owned by the user "pgpool".
$ createuser -p 5432 pgpool
$ createdb -p 5432 -O pgpool pgpool
3.2.1. Installing dblink
Next, we must install dblink into the "pgpool" database. dblink is one of the tools included in the contrib directory of the PostgreSQL source code.
To install dblink to your system, execute the following commands.
$ USE_PGXS=1 make -C contrib/dblink
$ USE_PGXS=1 make -C contrib/dblink install
After dblink has been installed on your system, we will define the dblink functions in the "pgpool" database. If PostgreSQL is installed in /usr/local/pgsql, dblink.sql (a file with the function definitions) should have been installed in /usr/local/pgsql/share/contrib. Now, execute the following command to define the dblink functions.
$ psql -f /usr/local/pgsql/share/contrib/dblink.sql -p 5432 pgpool
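To make sure the functions were created, you can list them from psql. This is just a quick sanity check, not a required step.

$ psql -p 5432 -c '\df dblink*' pgpool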
3.2.2. Defining dist_def table
Define a table called "dist_def", which holds the partitioning rules, in the database called "pgpool". After installing pgpool-II, you will have system_db.sql, which is a psql script to generate the system database.
$ psql -f /usr/local/share/system_db.sql -p 5432 -U pgpool pgpool
The dist_def table is created in the pgpool_catalog schema. If you have configured system_db_schema to use another schema, you need to edit system_db.sql accordingly.
The definition for "dist_def" is as shown here, and the table name cannot be changed.
CREATE TABLE pgpool_catalog.dist_def (
    dbname text,                    -- database name
    schema_name text,               -- schema name
    table_name text,                -- table name
    col_name text NOT NULL CHECK (col_name = ANY (col_list)),  -- distribution key-column
    col_list text[] NOT NULL,       -- list of column names
    type_list text[] NOT NULL,      -- list of column types
    dist_def_func text NOT NULL,    -- distribution function name
    PRIMARY KEY (dbname, schema_name, table_name)
);
The data stored in "dist_def" falls into two categories: distribution rules and meta-information.

A distribution rule decides how to distribute data to a particular node. Data is distributed depending on the value of the "col_name" column. "dist_def_func" is a function that takes the value of "col_name" as its argument, and returns an integer indicating the ID of the database node where the data should be stored.

The meta-information is used to rewrite queries. Parallel query must rewrite queries so that the results sent back from the backend nodes can be merged into a single result.
3.2.3. Defining replicate_def table
If you want to use replicated tables in SELECT statements in parallel mode, you need to register information about such tables (the replication rules) in a table called replicate_def. The replicate_def table was already created when you ran system_db.sql to define dist_def. The replicate_def table is defined as follows.
CREATE TABLE pgpool_catalog.replicate_def (
    dbname text,               -- database name
    schema_name text,          -- schema name
    table_name text,           -- table name
    col_list text[] NOT NULL,  -- list of column names
    type_list text[] NOT NULL, -- list of column types
    PRIMARY KEY (dbname, schema_name, table_name)
);
replicate_def holds a table's meta-information (dbname, schema_name, table_name, col_list, type_list).
All query analysis and query rewriting depends on the information (tables, columns and types) stored in the dist_def and/or replicate_def tables. If this information is not correct, the analysis and query rewriting process will produce wrong results.
In this tutorial, we will define rules to distribute pgbench's sample data into three database nodes. The sample data will be created by "pgbench -i -s 3" (i.e. scale factor of 3). We will create a new database called "bench_parallel" for this section.
In the pgpool-II source code, you can find the dist_def_pgbench.sql file in the sample directory. We will use this sample file here to create the distribution rules for pgbench. Execute the following command in the extracted pgpool-II source code directory.
$ psql -f sample/dist_def_pgbench.sql -p 5432 pgpool
Here is an explanation of dist_def_pgbench.sql.
Inside dist_def_pgbench.sql, we insert one row into the "dist_def" table: a distribution rule for the accounts table. The distribution key column is aid, which is the primary key of the accounts table.
INSERT INTO pgpool_catalog.dist_def VALUES (
    'bench_parallel',
    'public',
    'accounts',
    'aid',
    ARRAY['aid', 'bid', 'abalance', 'filler'],
    ARRAY['integer', 'integer', 'integer', 'character(84)'],
    'pgpool_catalog.dist_def_accounts'
);
Now, we must define the distribution function for the accounts table. Note that you can use the same function for different tables. Also, you can define functions using languages other than SQL (e.g. PL/pgSQL, PL/Tcl, etc.).
When the accounts table is initialized with a scale factor of 3, the values of aid range from 1 to 300000. The function is defined so that the data is evenly distributed among the three database nodes. The SQL function is defined to return the number of the database node.
CREATE OR REPLACE FUNCTION pgpool_catalog.dist_def_accounts(anyelement)
RETURNS integer AS $$
    SELECT CASE
        WHEN $1 >= 1 AND $1 <= 100000 THEN 0
        WHEN $1 > 100000 AND $1 <= 200000 THEN 1
        ELSE 2
    END;
$$ LANGUAGE sql;
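As an illustration of using a language other than SQL, a hypothetical PL/pgSQL version of the same function might look like the one below. It is not part of the sample files, and it assumes the plpgsql language is available in the "pgpool" database.

$ psql -p 5432 pgpool <<'EOF'
-- Hypothetical PL/pgSQL equivalent of pgpool_catalog.dist_def_accounts:
-- aid 1..100000 goes to node 0, 100001..200000 to node 1, the rest to node 2.
CREATE OR REPLACE FUNCTION pgpool_catalog.dist_def_accounts_plpgsql(val integer)
RETURNS integer AS $func$
BEGIN
    IF val >= 1 AND val <= 100000 THEN
        RETURN 0;
    ELSIF val > 100000 AND val <= 200000 THEN
        RETURN 1;
    ELSE
        RETURN 2;
    END IF;
END;
$func$ LANGUAGE plpgsql;
EOF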
A replication rule decides which tables are to be replicated.

Here, the branches and tellers tables created by pgbench are registered as replicated tables. As a result, queries that join the accounts table with the branches and tellers tables become possible.
INSERT INTO pgpool_catalog.replicate_def VALUES (
    'bench_parallel',
    'public',
    'branches',
    ARRAY['bid', 'bbalance', 'filler'],
    ARRAY['integer', 'integer', 'character(88)']
);

INSERT INTO pgpool_catalog.replicate_def VALUES (
    'bench_parallel',
    'public',
    'tellers',
    ARRAY['tid', 'bid', 'tbalance', 'filler'],
    ARRAY['integer', 'integer', 'integer', 'character(84)']
);
replicate_def_pgbench.sql is also prepared in the sample directory. To define the replication rules using this file, execute the following psql command in the extracted pgpool-II source code directory.
$ psql -f sample/replicate_def_pgbench.sql -p 5432 pgpool
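To confirm that the distribution and replication rules were registered, you can read them back from the SystemDB. This is just a quick sanity check using the default schema name.

$ psql -p 5432 -U pgpool -c 'SELECT * FROM pgpool_catalog.dist_def' pgpool
$ psql -p 5432 -U pgpool -c 'SELECT * FROM pgpool_catalog.replicate_def' pgpool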
To reflect the changes in pgpool.conf, pgpool-II must be restarted. Please refer to section "1.5 Starting/Stopping pgpool-II".
After configuring pgpool.conf and restarting pgpool-II, let's try it out and see if parallel query is working.
First, we need to create a database to be distributed. We will name it "bench_parallel". This database needs to be created on all the nodes. Use the createdb command through pgpool-II, and the databases will be created on all the nodes.
$ createdb -p 9999 bench_parallel
Then, we'll execute pgbench with the -i -s 3 options. The -i option initializes the database with pre-defined tables and data, and the -s option specifies the scale factor for initialization.
$ pgbench -i -s 3 -p 9999 bench_parallel
The tables and data created are shown in "3.3. Defining Distribution Rules".
One way to check if the data has been distributed correctly is to execute a SELECT query via pgpool-II and directly on a backend, and compare the two results. If everything is configured correctly, the "bench_parallel" database should be distributed as follows.
| Table Name | Number of Rows |
|------------|----------------|
| branches   | 3              |
| tellers    | 30             |
| accounts   | 300000         |
| history    | 0              |
Let's use a simple shell script to check the above on all the nodes and via pgpool-II. The following script will display the minimum and maximum aid values in the accounts table using ports 5432, 5433, 5434, and 9999.
$ for port in 5432 5433 5434 9999; do
>     echo $port
>     psql -c "SELECT min(aid), max(aid) FROM accounts" -p $port bench_parallel
> done