Experiencing "slow write timer already active" Panic with pgx During Load Testing #1817
Replies: 4 comments
It logs lots of WARNING: DATA RACE messages; this is one of them:
WARNING: DATA RACE
Previous read at 0x00c0004236a0 by goroutine 8:
Goroutine 436 (running) created at:
Goroutine 8 (running) created at:
FYI, I'm using sqlc to generate the queries, so I'm not really writing that much code.
Context
Hello everyone,
I'm reaching out to the community for insights into a panic I've encountered while using the pgx library for PostgreSQL during HTTP load testing with Baton. The panic is consistently reproducible and relates to connection pool management, specifically when connections are being closed.
Error Encountered
The error message is as follows:
panic: BUG: slow write timer already active
goroutine 186 [running]:
github.com/jackc/pgx/v5/pgconn.(*PgConn).enterPotentialWriteReadDeadlock(...)
/home/user/go/pkg/mod/github.com/jackc/pgx/v5@v5.5.0/pgconn/pgconn.go:1741
github.com/jackc/pgx/v5/pgconn.(*PgConn).flushWithPotentialWriteReadDeadlock(0xc00031a900)
/home/user/go/pkg/mod/github.com/jackc/pgx/v5@v5.5.0/pgconn/pgconn.go:1760 +0xaf
github.com/jackc/pgx/v5/pgconn.(*PgConn).Close(0xc00031a900, {0x9b3b80?, 0xc0004b4230})
/home/user/go/pkg/mod/github.com/jackc/pgx/v5@v5.5.0/pgconn/pgconn.go:633 +0x19f
github.com/jackc/pgx/v5.(*Conn).Close(0x9b39c0?, {0x9b3b80?, 0xc0004b4230?})
/home/user/go/pkg/mod/github.com/jackc/pgx/v5@v5.5.0/conn.go:285 +0x31
github.com/jackc/pgx/v5/pgxpool.NewWithConfig.func2(0xc0004b41c0)
/home/user/go/pkg/mod/github.com/jackc/pgx/v5@v5.5.0/pgxpool/pool.go:248 +0x9a
github.com/jackc/puddle/v2.(*Pool[...]).destructResourceValue(0xc00008afd0, 0x6e3101?)
/home/user/go/pkg/mod/github.com/jackc/puddle/v2@v2.2.1/pool.go:694 +0x1f
github.com/jackc/puddle/v2.(*Pool[...]).destroyAcquiredResource(0xc0002041e0, 0xc00021ef00?)
/home/user/go/pkg/mod/github.com/jackc/puddle/v2@v2.2.1/pool.go:674 +0x39
created by github.com/jackc/puddle/v2.(*Resource[...]).Destroy in goroutine 37
/home/user/go/pkg/mod/github.com/jackc/puddle/v2@v2.2.1/pool.go:76 +0x90
Seeking Guidance
Are there any aspects of pgx pool management under heavy load that I might be overlooking?
Environment
pgx version: v5.5.0 (per the module paths in the stack trace above)
I'm curious to hear if others have faced similar issues or have thoughts on what might be going wrong. Any insights or directions would be greatly appreciated!
This is the code that I'm using.
Here I'm running: baton -u http://localhost:8080/number -c 10 -r 1000
-c: number of concurrent users
-r: total number of requests
Thank you for your time and help.