Deadlocks and memory leaks #6168
Comments
So, you definitely ran out of memory and it looks like QUIC is the culprit (looks like quic-go/quic-go#1811).
Depends on libp2p/go-libp2p-quic-transport#53
fixes #6168 License: MIT Signed-off-by: Steven Allen <steven@stebalien.com>
@inetic could you re-try with the latest master?
Sorry, I was off for a few days. Unfortunately we've recently removed the cmake scripts we used to build directly from git, and we currently only download go-ipfs through https://dist.ipfs.io/go-ipfs
Got it. Keep an eye out for a release.
@Stebalien sorry to bring this back. We've switched to [...]. Hope the log will be useful to you. Also, we've switched back to being able to build [...]
I assume you have about 700 connections? If so, that trace looks completely normal. Could you post a heap trace? That shouldn't be taking gigabytes.
Ah, OK, thanks.
Could you give me some pointers on how to do that? All I've ever done in golang was this C++ binding to go-ipfs.
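For anyone landing here with the same question: a go-ipfs daemon typically exposes the standard net/http/pprof handlers on its API port, so a heap profile can usually be fetched with `go tool pprof http://127.0.0.1:5001/debug/pprof/heap`. When go-ipfs is embedded in another process (as with asio-ipfs) and the API server isn't reachable, a heap snapshot can be written from the Go side with runtime/pprof. The sketch below is illustrative only, not code from this thread; the output path and the place it is called from are assumptions.

```go
// heapdump.go: write a heap profile that `go tool pprof` can read.
// Minimal sketch; the file name and the trigger point are illustrative.
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func writeHeapProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// Run a GC first so the profile reflects up-to-date allocation statistics.
	runtime.GC()
	return pprof.WriteHeapProfile(f)
}

func main() {
	if err := writeHeapProfile("ipfs.heap.pprof"); err != nil {
		log.Fatal(err)
	}
	// Inspect afterwards with: go tool pprof ipfs.heap.pprof
}
```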
Version information:
We're using asio-ipfs, a C++ binding to go-ipfs. That library uses IPFS v0.4.19 from https://dist.ipfs.io/go-ipfs/v0.4.19/go-ipfs-source.tar.gz
Golang: go1.11.2.linux-amd64
Machine it runs on:
Linux 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Note, however, that we're seeing similar issues on machines with 6GB of RAM.
CPU: Intel(R) Atom(TM) CPU C3955 @ 2.10GHz
The machine is a fresh scaleway VPS that doesn't have any other services running on it.
Type:
panic
Description:
When the application starts, we add around 500 entries (nothing big, mostly html, css, jpg) into IPFS using Unixfs().Add(...). Then we leave the application running. Looking at the logs, it seems that after ~11 hours the OOM killer kills the app, which then generates a log file about 55MB long. The log file contains traces for ~14000 goroutines, most of which are hanging somewhere in semacquire, select, syscall, chan receive and sync.Cond.Wait. These goroutines have been stuck there anywhere between 1 and 673 minutes.

Here is the log (compressed to 3.3MB)
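As a side note, the goroutine traces described above can also be captured on demand rather than waiting for the OOM killer's crash dump. A minimal sketch, assuming the embedding process can run a bit of Go code; the SIGUSR1 trigger is an assumption for illustration, not something go-ipfs provides:

```go
// goroutinedump.go: dump all goroutine stacks on SIGUSR1, similar to what the
// runtime prints on a crash, so blocked goroutines (semacquire, select,
// chan receive, ...) can be inspected while the process is still alive.
package main

import (
	"os"
	"os/signal"
	"runtime/pprof"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGUSR1)

	go func() {
		for range sigs {
			// "goroutine" is a built-in pprof profile; debug=2 prints full
			// stacks including how long each goroutine has been blocked.
			pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
		}
	}()

	select {} // stand-in for the application's real work
}
```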