pgpool-II 3.5 watchdog test
Latest revision as of 13:08, 5 November 2015
Test Num | Category | Test Description | Expected Output | How to test | Assigned-to |
1 | Installation | Make sure that the new watchdog is installed and configured successfully | |||
2 | Upgrade | Make sure that pgpool-II with the new watchdog can be installed on a system running pgpool-II with the old watchdog | | |
3 | Configuration | Make sure that pgpool-II can be configured successfully with one primary and one standby configuration | | |
4 | Setup | Three pgpool-II instances (Host-1, Host-2, Host-3) are running on different machines using Ubuntu 13.04. Connect to Host-1 and execute a sample query | Configuration | |
4.1 | Functional testing | Shut down Host-1's pgpool-II instance and execute the query again | Host-2 should take over and respond to the query | |
4.2 | Functional testing | Shut down Host-2's pgpool-II instance and execute the query again | Host-3 should take over and respond to the query | |
4.3 | Functional testing | Start Host-1's pgpool-II instance and execute the query again | Need to see which host responds to the query | |
5 | Failover scenarios / Setup | Three pgpool-II instances (Host-1, Host-2, Host-3) are running on different machines using Ubuntu 13.04. Connect to Host-1 and execute a sample query | Configuration | |
5.1 | Failover scenarios | Unplug Host-1's network cable and execute the query again | Host-2 should take over and respond to the query | |
5.2 | Failover scenarios | Unplug Host-2's network cable and execute the query again | Host-3 should take over and respond to the query | |
5.3 | Failover scenarios | Plug Host-1's network cable back in and execute the query again | Need to see which host responds to the query | |
6 | Functional testing / Setup | Three pgpool-II instances (Host-1, Host-2, Host-3) are running on different machines using Ubuntu 13.04. Connect to Host-1 and execute a long-running query | | |
6.1 | Functional testing | Shut down / power off Host-1's instance and execute the query again | Host-2 should take over and start responding; need to check what happens to the response of the already-running query. | |
6.2 | Functional testing | Shut down / power off Host-2's instance and execute the query again | Host-3 should take over and start responding; need to check what happens to the response of the already-running query. | |
6.3 | Functional testing | Start Host-1's pgpool-II instance and execute the query again | Need to see which host responds to the query | |
7.1 | Checking other functionality of watchdog | Changing the active/standby state when certain faults are detected | | |
7.2 | Checking other functionality of watchdog | Automatic virtual IP address assignment, synchronized with server switching | | |
7.3 | Checking other functionality of watchdog | Automatic registration of a server as standby in recovery | |||
8 | Isolated master scenario / Setup | Three pgpool-II instances (Host-1, Host-2, Host-3) are running on different machines using Ubuntu 13.04. Connect to Host-1 and execute a query | | |
8.1 | Isolated master scenario | Break the connectivity between the pgpool-II watchdog primary and standby nodes by bringing down connectivity on the standby. | Split-brain testing ensures that there is only one master at a time that clients can connect to. The standby should be promoted to primary, and clients should not be able to connect to the old master. | |
9 | Network isolation scenario | In this scenario we check what happens when the pgpool-II primary and secondary nodes lose contact because of an interruption in network connectivity, i.e. the two instances lose connectivity to each other. | The watchdog process should wait for some timeout value to see if the network comes back up. We need to make sure we never end up with both instances acting as primary. | |
10 | Testing watchdog on cloud | In this case we test watchdog functionality by deploying the watchdog, the pgpool-II instances and the database servers on AWS. The goal is to perform all the test scenarios on AWS that are performed on-premise. | All watchdog functionality should work on AWS the same way it works in an on-premise deployment. | |
11 | Database Failure | In this test case, the PostgreSQL database running on the secondary node dies. This can be simulated by stopping the PostgreSQL service on the secondary node. | The pgpool-II primary node should function as-is and continue serving queries from clients. A pgpool-II secondary node can be added back to the watchdog cluster later; pgpool-II presumably needs to be restarted after the secondary node is added back. | |
12 | Database Failure | In this test case, the PostgreSQL database running on the primary node dies. This can be simulated by stopping the PostgreSQL service on the primary node. | The pgpool-II instance running on the secondary node should be promoted to primary and serve queries from clients. A secondary node can be added back later; pgpool-II presumably needs to be restarted after the secondary node is added back. | |
13 | Watchdog agent failure | | | |
14 | Pgpool-II Watchdog integration | ||||
15 | PG backend failover with watchdog | Test the command interlocking when the pgpool-II watchdog is enabled. | The failover and follow-master scripts should be executed by only one pgpool-II node | |
16 | PG backend failback with watchdog | Test the command interlocking when the pgpool-II watchdog is enabled. | The failback scripts should be executed by only one pgpool-II node | |
17 | Online recovery with watchdog enabled | Execute online recovery with watchdog enabled. | | |
18 | pgpool-II configuration integrity | Perform tests by changing pgpool-II configurations on different pgpool-II nodes | A standby watchdog should report an error and fail to start if the configuration on the master node is different from its own | |
19 | Integrate external node health checking | Test whether the watchdog can successfully integrate with an external health-checking system | Send the "node down" and "node alive" messages to the watchdog IPC socket; the watchdog should handle them appropriately |
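For the installation and configuration tests (1–4), a minimal watchdog section of pgpool.conf for one node might look like the sketch below. The hostnames, delegate IP and port numbers are placeholders chosen for illustration, and the exact parameter set should be checked against the pgpool-II 3.5 documentation.

```ini
# Sketch of a watchdog configuration for one node (Host-1); the peer
# entries point at Host-2. Hostnames, the delegate IP and the ports
# below are illustrative placeholders, not recommended values.
use_watchdog = on
wd_hostname = 'host-1'
wd_port = 9000

# Virtual IP taken over by whichever node is the watchdog master
delegate_IP = '192.168.1.100'

# Heartbeat-based lifecheck between the watchdog nodes
wd_lifecheck_method = 'heartbeat'
wd_heartbeat_port = 9694
heartbeat_destination0 = 'host-2'
heartbeat_destination_port0 = 9694

# Peer pgpool-II / watchdog instance
other_pgpool_hostname0 = 'host-2'
other_pgpool_port0 = 9999
other_wd_port0 = 9000
```

While running the failover scenarios, the watchdog state of each node can be inspected with the `pcp_watchdog_info` command to confirm which instance currently holds the master role.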
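The split-brain cases in tests 8 and 9 ultimately reduce to a quorum rule: after a partition, only the side that still sees a majority of the watchdog nodes may keep (or elect) the master. The snippet below is an illustrative toy model of that rule, not pgpool-II source code.

```python
def partition_keeps_master(visible_nodes: int, total_nodes: int) -> bool:
    """Toy model of the quorum rule the split-brain tests verify:
    a partition may host the master only while it still sees a strict
    majority of the watchdog cluster (counting itself)."""
    return visible_nodes > total_nodes // 2

# A 3-node cluster split into a 2-node and a 1-node partition:
# the majority side keeps the master, the isolated node must resign,
# so clients can never reach two masters at once.
print(partition_keeps_master(2, 3))  # True  -> majority side keeps master
print(partition_keeps_master(1, 3))  # False -> isolated node resigns
```

Note that in an even split (e.g. 2 of 4 nodes visible) neither side has a strict majority, which is why watchdog clusters are usually deployed with an odd number of nodes.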
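The interlocking checks in tests 15 and 16 can be stated the same way: across the whole cluster, the failover, follow-master and failback commands must run exactly once, on the watchdog master only. The snippet below is a toy illustration of that invariant, not pgpool-II code.

```python
def nodes_running_failover(states: list[str]) -> int:
    """Toy model of command interlocking: the failover / follow-master /
    failback scripts run only on the node whose watchdog state is MASTER,
    so across the cluster they execute exactly once."""
    return sum(1 for state in states if state == "MASTER")

# One master and two standbys: the script runs on exactly one node.
print(nodes_running_failover(["MASTER", "STANDBY", "STANDBY"]))  # 1
```

The pass criterion for tests 15 and 16 is exactly this count being 1, verified in practice by checking which node's failover log shows the script execution.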