[pgpool-general: 1335] cursor closes on commit
Bert Desmet
bert at bdesmet.be
Tue Jan 22 21:28:50 JST 2013
Hello,
We are currently migrating from DB2 to PostgreSQL. Our setup is quite simple: we have one tool that reads (a reporting service) and only two tools that write, so we will never end up with more than 20 connections to the database at the same time.
Because our SQL statements are sometimes very complicated, we have set up PostgreSQL with streaming replication and placed pgpool in front of it.
pgpool works really well for read-only cursors: our reporting system runs great, and queries are sent to both servers.
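For reference, a typical read from the reporting side looks roughly like this (the DSN and the query are only placeholders for our real ones), and this path works fine through pgpool:

>>> import pyodbc
>>> conn = pyodbc.connect('DSN=PostgreSQL')      # DSN points at pgpool, not at postgres directly
>>> cursor = conn.cursor()
>>> cursor.execute("SELECT id, name FROM foo")   # read-only query; pgpool spreads these over both servers
>>> rows = cursor.fetchall()
>>> conn.close()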
However, we have a problem with our ETL system: from the moment it starts writing data, it fails.
I can reproduce the same problem with Python, which shows more clearly what I mean:
>>> import pyodbc
>>> conn = pyodbc.connect('DSN=PostgreSQL')
>>> cursor = conn.cursor()
>>> query = "INSERT INTO table foo () values (), ()"
>>> cursor.execute(query)
<pyodbc.Cursor object at 0x7f52858e2f30>
>>> conn.commit()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pyodbc.Error: ('HY000', 'The driver did not supply an error!')
>>> cursor.execute(query)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pyodbc.Error: ('08S01', '[08S01] Could not send Query(connection dead);\nCould not send Query(connection dead) (26) (SQLExecDirectW)')
Is there a way we can work around this?
The same script works fine when we connect to PostgreSQL directly, without going through pgpool.
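To illustrate the difference, this is roughly what the working direct connection looks like (the second DSN and the insert statement here are just hypothetical examples pointing straight at the PostgreSQL primary instead of at pgpool):

>>> import pyodbc
>>> conn = pyodbc.connect('DSN=PostgreSQLDirect')       # hypothetical DSN that bypasses pgpool and goes straight to the primary
>>> cursor = conn.cursor()
>>> cursor.execute("INSERT INTO foo (id) VALUES (1)")   # illustrative insert, stands in for our real ETL statements
>>> conn.commit()                                        # commits without error when pgpool is not in the path
>>> conn.close()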
wkr,
Bert