
database/sql: Control over connection pool #4805

Closed
kisielk opened this issue Feb 14, 2013 · 25 comments
Labels: FrozenDueToAge, Suggested (issues that may be good for new contributors looking for work to do)

Comments

@kisielk (Contributor) commented Feb 14, 2013

From http://golang.org/src/pkg/database/sql/doc.txt:

* Handle concurrency well.  Users shouldn't need to care about the
   database's per-connection thread safety issues (or lack thereof),
   and shouldn't have to maintain their own free pools of connections.
   The 'db' package should deal with that bookkeeping as needed.  Given
   an *sql.DB, it should be possible to share that instance between
   multiple goroutines, without any extra synchronization.

I think the current approach fails to achieve this because there is no control over the
dynamics of the pool. For example, the pool size can grow without bound if enough
goroutines try to use the DB concurrently. This is a problem if your database limits the
number of connections per user: you end up with errors when trying to use a *sql.DB.

The current workaround is to maintain your own fixed-size pool of *sql.DB's and ensure
each one is only used by one goroutine at a time, which seems counter to the package
design goals.

There should be a way to configure maxIdleConns, as well as a way to set the maximum
number of connections.
@bradfitz (Contributor)

Comment 1:

Labels changed: added suggested, removed priority-triage.

Owner changed to @bradfitz.

Status changed to Accepted.

@julienschmidt (Contributor)

Comment 2:

MaxIdleConns CL: https://golang.org/cl/6855102/

@kisielk (Contributor, Author) commented Mar 8, 2013

Comment 3:

Thanks Julien, that covers half of this.

@bradfitz (Contributor)

Comment 4:

For max-open-conns, that could be solved by the drivers themselves, I think. Per
http://golang.org/pkg/database/sql/driver/#Driver
a driver can block if it's at its limit, waiting until a connection is closed.

I don't agree with your proposed fix of having N *sql.DB instances and only using one at a
time. It'd be better to have one *sql.DB and make sure in your own code that it's only
being used by N clients at a time. Then more resources can be shared.

I'll also note that other APIs in Go don't prevent you from hurting yourself: the
net/http client, for instance, will keep opening TCP connections to remote hosts as
fast as you ask it to. It will let you run yourself out of file descriptors, if that's
what you tell it to do.

I had a program that made tens of thousands of HTTP requests, and I had to limit myself
to doing only N HTTP requests at once. Similarly, if your database can't deal with the
load you're sending it, you can rate-limit yourself.

That said, I'm sure we could add some knob here.

@kisielk (Contributor, Author) commented Mar 11, 2013

Comment 5:

> ... drivers can block if it's at its limit, waiting until a connection is closed.

What if you don't want the connections to close at all, but just want to reuse the open
connections from the pool? What I'd like to achieve is a pool of exactly N connections
to the database server. That way I can stay under the database's connection limits and
also avoid reconnecting to the database all the time.

> I don't agree with your proposed fix of having N *sql.DB instances and only using 1 at
> a time. It'd be better to have 1 *sql.DB and make sure in your own code that it's only
> being used by N clients at a time. Then more resources can be shared.

I didn't really intend that as a fix for the database package; it's just the workaround
I was using. The reason I didn't use just one *sql.DB is the one given above: I wanted
to avoid reconnections. Now, with control over maxIdleConns, I suppose I can set it to N
and also limit the number of concurrent clients to N, and then I only need a single
*sql.DB.

I agree that the situation is similar to running out of file descriptors, and it's never
possible to cover all cases in the API, so maybe this is enough?

@rsc (Contributor) commented Mar 12, 2013

Comment 6:

[The time for maybe has passed.]

Labels changed: removed go1.1maybe.

@bradfitz (Contributor)

Comment 7:

Issue created. URL: https://golang.org/cl/7634045

@bradfitz (Contributor)

Comment 8:

This issue was updated by revision 3a2fe62.

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/7634045

@gopherbot (Contributor)

Comment 9 by chrisfarms:

This really is quite an issue. At the moment I am having to use *sql.Tx as a hack to
ensure that the app is always using the same underlying connection.

How about this for a what-if...

What if sql.DB were an interface, defining all of the methods that the current *sql.DB
type does now?

What if the current *sql.DB type were renamed *sql.Conn and stripped of all of its
connection-pooling magic, so that a single *sql.Conn is _always_ using a single
connection? Maybe it could handle auto-reconnecting itself, but that's it: no pooling
for *sql.Conn. For users, *sql.Conn would be synonymous with "single blocking database
connection". *sql.Conn satisfies sql.DB.

What if we then had an *sql.Pool type? It would act a little more like the current
*sql.DB type: it manages a pool of sql.DB and itself satisfies sql.DB, so in many cases
it could be a direct swap between *sql.Conn and *sql.Pool. This might pave the way to
offering different pooling strategies, or maybe delegating some decisions to the
sql.Driver.

</whatif>

@bradfitz (Contributor)

Comment 10:

Chris, this isn't the place to redesign the database/sql package, and it's also not
being redesigned.  We like the model, even if we still occasionally find bugs, and parts
aren't as tuned as we'd like.  Please report specific bugs, though.  See issue #5323 for
an example of a specific reported bug being filed and fixed.  We want to fix them.

@gopherbot (Contributor)

Comment 11 by robfig@yext.com:

IMO, providing a setter for the max size of the connection pool would be the lion's
share of the configurability needed here.  Is doing that an option, or is this pending
some larger pool-configurability re-design? 
If connection pooling is something that database/sql is supposed to do, providing that
knob seems pretty essential (as opposed to pushing the responsibility onto all of the
drivers, as was suggested earlier).

@gopherbot (Contributor)

Comment 12 by ledangster:

While this issue isn't resolved within the Go SDK, I've implemented driver level pooling
in a fork of lib/pq at https://github.com/ld9999999999/pqpooling using a channel
semaphore.

@gopherbot (Contributor)

Comment 13 by lee.hambley:

ledangs...@gmail.com have you tried to get this accepted into lib/pq?

@ianlancetaylor (Member)

Comment 14:

Labels changed: added go1.2maybe.

@bradfitz (Contributor)

Comment 15:

Pending CL: https://golang.org/cl/10726044/

@rsc (Contributor) commented Jul 30, 2013

Comment 16:

Labels changed: added feature.

@bradfitz (Contributor)

Comment 17:

This issue was updated by revision 4572e48.

Add the ability to set an open connection limit.
Fixed case where the Conn finalCloser was being called with db.mu locked.
Added separate benchmarks for each path for Exec and Query.
Replaced slice based idle pool with list based idle pool.
R=bradfitz
CC=golang-dev
https://golang.org/cl/10726044
Committer: Brad Fitzpatrick

@bradfitz (Contributor)

Comment 18:

This issue was updated by revision 9456adb.

Add the ability to set an open connection limit.
Fixed case where the Conn finalCloser was being called with db.mu locked.
Added separate benchmarks for each path for Exec and Query.
Replaced slice based idle pool with list based idle pool.
R=bradfitz
CC=golang-dev
https://golang.org/cl/10726044
Committer: Brad Fitzpatrick

R=golang-dev
CC=golang-dev
https://golang.org/cl/13252046

@bradfitz (Contributor)

Comment 19:

This issue was updated by revision 41c5d8d.

Add the ability to set an open connection limit.
Fixed case where the Conn finalCloser was being called with db.mu locked.
Added separate benchmarks for each path for Exec and Query.
Replaced slice based idle pool with list based idle pool.
R=bradfitz
CC=golang-dev
https://golang.org/cl/10726044
Committer: Brad Fitzpatrick 

@rsc (Contributor) commented Sep 9, 2013

Comment 20:

That's all that will happen for 1.2.

Labels changed: added go1.3maybe, removed go1.2maybe.

@rsc (Contributor) commented Nov 27, 2013

Comment 21:

Labels changed: removed feature.

@rsc (Contributor) commented Dec 4, 2013

Comment 22:

Labels changed: added release-none, removed go1.3maybe.

@rsc (Contributor) commented Dec 4, 2013

Comment 23:

Labels changed: added repo-main.

@kisielk added the accepted and Suggested (issues that may be good for new contributors looking for work to do) labels Dec 4, 2013
@rsc added this to the Unplanned milestone Apr 10, 2015
@ChrisHines (Contributor)

It seems to me that all of the requested features in the initial issue comment are fully satisfied by the current database/sql package, and subsequent add-on requests have been rejected. Should we close this issue?

@bradfitz (Contributor)

Sure. People can open new specific bugs as needed.

@golang locked and limited conversation to collaborators Nov 4, 2016