Intermittent DB connection error with AWS RDS #702
Comments
@chadweimer I'm seeing this exact same issue, and oddly I was on the same commit as you before these errors started popping up. Current: 90697d6. It also happened after we updated, when we suddenly started seeing errors. We use glide to manage dependencies and initially thought this was related to another set of dependencies being updated at the same time, but we have now narrowed it down to the database connection (based on the IP address in the error). We are running in Docker (Alpine Linux 3.7) on Go 1.10.1, but connecting to a database on GCP. Here's the list of changes between those two versions: What Go version and base container are you using?
@chadweimer I think the issue may be this commit: 6e2a335. In our case it's happening on idle connections. Are you seeing the same? Where is your application running? I'm going to try setting
We use Go 1.9.2 and prebuild the Go binary.
#1013 (lib/pq >= v1.9.0) should have addressed the issue where a dead connection is stuck forever. An analysis of the current situation can be found in #835 (comment). I think this issue can be closed?
We got this error intermittently in our Docker containers:

read tcp <ip address>:39954-><ip address>:5432: read: connection reset by peer

The only special thing is that we use Postgres on AWS RDS; not sure if that affects anything. Is anyone else facing the same issue?
I'm still using commit e42267488fe361b9dc034be7a6bffef5b195bceb on Go 1.9.2, using only the database/sql library with github.com/lib/pq. Once we get this error, we just have to delete the pod and hope the new container does not run into the issue again. This is quite unstable, and I'm pretty sure we are not the only ones who will hit this issue.