
imc-dispatcher dial tcp <IP>: connect: cannot assign requested address #4461

Closed · maschmid opened this issue Nov 4, 2020 · 5 comments · Fixed by #4465
Labels: kind/bug (Categorizes issue or PR as related to a bug.)
maschmid (Contributor) commented Nov 4, 2020

Describe the bug

After about a day of cluster runtime, with brokers backed by InMemoryChannels deployed (at the time of the errors there were 24 brokers on the cluster, each with a single trigger) and events with replies being sent, I noticed the following errors in the imc-dispatcher logs:

"dial tcp : connect: cannot assign requested address"

in errors like these:

{"level":"error","ts":"2020-11-04T18:51:54.205Z","logger":"inmemorychannel-dispatcher","caller":"fanout/fanout_message_handler.go:189","msg":"Fanout had an error","error":"failed to forward reply to http://broker-ingress.knative-eventing.svc.cluster.local/foo10/broker: Post \"http://broker-ingress.knative-eventing.svc.cluster.local/foo10/broker\": dial tcp 172.30.107.69:80: connect: cannot assign requested address","stacktrace":"knative.dev/eventing/pkg/channel/fanout.(*MessageHandler).dispatch\n\t/opt/app-root/src/go/src/knative.dev/eventing/pkg/channel/fanout/fanout_message_handler.go:189\nknative.dev/eventing/pkg/channel/fanout.createMessageReceiverFunction.func1.1\n\t/opt/app-root/src/go/src/knative.dev/eventing/pkg/channel/fanout/fanout_message_handler.go:143"}

{"level":"error","ts":"2020-11-04T18:52:02.334Z","logger":"inmemorychannel-dispatcher","caller":"fanout/fanout_message_handler.go:189","msg":"Fanout had an error","error":"unable to complete request to http://broker-filter.knative-eventing.svc.cluster.local/triggers/foo6/counter/6c5ab867-5345-449a-aef7-7023693bf821: Post \"http://broker-filter.knative-eventing.svc.cluster.local/triggers/foo6/counter/6c5ab867-5345-449a-aef7-7023693bf821\": dial tcp 172.30.166.136:80: connect: cannot assign requested address","stacktrace":"knative.dev/eventing/pkg/channel/fanout.(*MessageHandler).dispatch\n\t/opt/app-root/src/go/src/knative.dev/eventing/pkg/channel/fanout/fanout_message_handler.go:189\nknative.dev/eventing/pkg/channel/fanout.createMessageReceiverFunction.func1.1\n\t/opt/app-root/src/go/src/knative.dev/eventing/pkg/channel/fanout/fanout_message_handler.go:143"}

suggesting that sockets may be leaking.
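For context: `connect: cannot assign requested address` on an outbound dial usually means the client side has run out of local ephemeral ports, which is what happens when each send opens a brand-new TCP connection instead of reusing a pooled one and the old connections pile up in TIME_WAIT. A minimal sketch of that anti-pattern, purely illustrative and not the dispatcher's actual code:

```go
// Hypothetical sketch: building a fresh Transport for every request throws
// away the connection pool, so each send dials a new TCP connection. The old
// connections linger in TIME_WAIT and the node can eventually exhaust its
// ephemeral ports, surfacing as "connect: cannot assign requested address".
package sketch

import (
	"io"
	"net/http"
)

func sendOnce(url string) error {
	client := &http.Client{Transport: &http.Transport{}} // new pool every call
	resp, err := client.Post(url, "application/json", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Draining the body would normally allow reuse, but the pool is discarded anyway.
	_, err = io.Copy(io.Discard, resp.Body)
	return err
}
```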

Expected behavior
"cannot assign requested address" errors should not appear

To Reproduce
Currently unknown

Knative release version
0.17.2

Additional context

maschmid added the kind/bug label on Nov 4, 2020
slinkydeveloper (Contributor) commented:

/assign

matzew (Member) commented Nov 4, 2020

related? cloudevents/sdk-go#98

matzew (Member) commented Nov 4, 2020

It looks like, by default, net/http keeps at most 2 idle connections per host in its connection pool.

see: golang/go#16012 (comment)
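For reference, the stdlib default in question is the exported constant http.DefaultMaxIdleConnsPerHost (2): a zero-value http.Transport keeps at most two idle connections per host for reuse, and any connection beyond that is closed once the response is consumed. A small illustrative sketch (the helper name is made up):

```go
package sketch

import "net/http"

// defaultIdleLimits illustrates the stdlib defaults: a zero-value
// http.Transport keeps at most http.DefaultMaxIdleConnsPerHost (2) idle
// connections per host for reuse. Under a fan-out of many concurrent posts
// to the same broker Service, every connection beyond the first two is
// closed after use and a new one must be dialed next time.
func defaultIdleLimits() (perHost, total int) {
	t := &http.Transport{}
	if t.MaxIdleConnsPerHost == 0 {
		perHost = http.DefaultMaxIdleConnsPerHost // 2
	}
	total = t.MaxIdleConns // 0 means no limit on total idle connections
	return perHost, total
}
```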

slinkydeveloper (Contributor) commented:

Yeah, that's what I was suspecting; my feeling is that we set this parameter too high.

slinkydeveloper (Contributor) commented:

So the actual limit config is:

transport.MaxIdleConns = 1000
transport.MaxIdleConnsPerHost = 100

This means the pool can keep at most 1000 idle sockets open, which I guess is not enough to leak all the sockets on the machine... Let me try to figure out whether we always use the same HTTP client, or whether we create new ones somewhere.
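For illustration, a minimal sketch of the client-reuse pattern the eventual fix describes, assuming the limits quoted above are applied to a single shared transport; the names here are hypothetical, not eventing's actual API:

```go
package sketch

import (
	"net/http"
	"sync"
)

var (
	once         sync.Once
	sharedClient *http.Client
)

// getClient lazily builds one http.Client with the limits quoted above and
// hands the same instance to every message sender, so all senders share a
// single connection pool instead of each creating (and leaking) its own.
func getClient() *http.Client {
	once.Do(func() {
		t := http.DefaultTransport.(*http.Transport).Clone()
		t.MaxIdleConns = 1000       // cap on idle connections across all hosts
		t.MaxIdleConnsPerHost = 100 // cap on idle connections to a single host
		sharedClient = &http.Client{Transport: t}
	})
	return sharedClient
}
```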

slinkydeveloper added a commit to slinkydeveloper/eventing that referenced this issue Nov 5, 2020
Now every new message sender always reuses the same underlying client, whenever possible

Signed-off-by: Francesco Guardiani <francescoguard@gmail.com>
knative-prow-robot pushed a commit that referenced this issue Nov 5, 2020
* Fix #4461
  Now every new message sender always reuses the same underlying client, whenever possible
* Increase coverage
* Brought back the previous method to avoid breakage
* Now we use nice language
* Removed useless test
* Suggestions
* Imports job
* nit
* Fancy ut
* Copyright

Signed-off-by: Francesco Guardiani <francescoguard@gmail.com>