Concurrency-limiting goroutine pool. Limits the concurrency of task execution, not the number of tasks queued. Submitting a task never blocks, no matter how many tasks are queued.
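To make that claim concrete, here is a minimal sketch using only `New`, `Submit`, and `StopWait` (shown in the example below); the atomic counter and the sleep are purely illustrative, added to observe the concurrency limit:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/gammazero/workerpool"
)

func main() {
	wp := workerpool.New(2) // at most 2 tasks execute concurrently

	var running int64 // illustrative counter to observe concurrency
	for i := 0; i < 100; i++ {
		wp.Submit(func() {
			n := atomic.AddInt64(&running, 1)
			if n > 2 {
				fmt.Println("concurrency limit exceeded") // never printed
			}
			time.Sleep(10 * time.Millisecond) // simulate work
			atomic.AddInt64(&running, -1)
		})
	}
	// The loop above finishes almost immediately: Submit queues each task
	// and returns without waiting for a free worker.
	wp.StopWait() // wait for all queued tasks to finish
}
```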
This implementation builds on ideas from the following:
- http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang
- http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html
To install this package, you need to set up your Go workspace. The simplest way to install the library is to run:
```
$ go get github.com/gammazero/workerpool
```
```go
package main

import (
	"fmt"

	"github.com/gammazero/workerpool"
)

func main() {
	wp := workerpool.New(2)
	requests := []string{"alpha", "beta", "gamma", "delta", "epsilon"}

	for _, r := range requests {
		r := r // capture the loop variable for the closure (needed before Go 1.22)
		wp.Submit(func() {
			fmt.Println("Handling request:", r)
		})
	}

	wp.StopWait() // wait for all queued tasks to finish, then stop the pool
}
```
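Tasks are plain `func()` values with no return value, so results are usually passed back over a channel. A small sketch building on the example above; the `results` channel and its buffer size are illustrative, not part of the library:

```go
package main

import (
	"fmt"

	"github.com/gammazero/workerpool"
)

func main() {
	wp := workerpool.New(2)
	requests := []string{"alpha", "beta", "gamma", "delta", "epsilon"}

	// Buffered so tasks never block on send; the size is illustrative.
	results := make(chan string, len(requests))

	for _, r := range requests {
		r := r
		wp.Submit(func() {
			results <- "handled " + r
		})
	}

	wp.StopWait() // all submitted tasks have finished once this returns
	close(results)

	for res := range results {
		fmt.Println(res)
	}
}
```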
There is no upper limit on the number of tasks queued, other than the limits of system resources. If there are too many inbound tasks to even queue for pending processing, then the solution is outside the scope of workerpool. It should be solved by distributing the load over multiple systems and/or storing input for pending processing in intermediate storage, such as a file system or a distributed message queue.
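If producers must be throttled, that throttling lives outside the pool. One possible sketch, assuming a simple buffered-channel semaphore around `Submit` (the `sem` channel and its capacity are illustrative):

```go
package main

import (
	"fmt"

	"github.com/gammazero/workerpool"
)

func main() {
	wp := workerpool.New(4)

	// Allow at most 100 tasks to be queued or running at once.
	// Submit itself never blocks, so the producer blocks here instead.
	sem := make(chan struct{}, 100)

	for i := 0; i < 10000; i++ {
		i := i
		sem <- struct{}{} // blocks while 100 tasks are outstanding
		wp.Submit(func() {
			defer func() { <-sem }() // release the slot when the task finishes
			fmt.Println("task", i)
		})
	}
	wp.StopWait()
}
```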
A list of open source projects using workerpool can be found here.