Geth 1.8.2 in docker : fatal error: runtime: out of memory #16377
Comments
Also receiving this issue. Geth slowly eats up all available RAM on the system until it crashes.
Having the same issue, built from the latest 1.8.3 source, Linux, 2 GB RAM. Sync mode fast, Ropsten network. Update: adding an 8 GB swap file didn't help. Same behaviour: it eats up all available memory and dies.
Also, has this problem appeared only in 1.8.2? Will reverting to 1.8.1 or 1.8.0 help?
Hey Sapph1re, I don't think there's any way to limit geth's RAM usage at the current moment. I've included the systemd service file that I'm using to launch geth; it may be of use to you:
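A hypothetical unit along those lines (placeholder paths and flags, not the commenter's actual file) might look like the following; the MemoryMax directive asks systemd to cap the process so a runaway geth gets OOM-killed instead of exhausting the whole host:

# /etc/systemd/system/geth.service -- hypothetical example
[Unit]
Description=Go Ethereum client
After=network-online.target
Wants=network-online.target

[Service]
User=geth
ExecStart=/usr/bin/geth --syncmode "fast" --cache 512
Restart=on-failure
RestartSec=5
# Hard cgroup memory cap (systemd >= 231; older versions use MemoryLimit=)
MemoryMax=4G

[Install]
WantedBy=multi-user.target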
Server specifications:
In general we've noticed that when syncing, Geth will eat up as much memory as there is available on the machine. I don't think this is an issue with just v1.8.2; we've seen it all the way back to v1.7.x. Once in sync, the memory and CPU profile of Geth decreases drastically, but it spikes every time a block is verified (for obvious reasons). So you need to give it quite a bit more than your …
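Assuming the truncated sentence above refers to the --cache flag (Geth's database cache size in MB), a low-memory invocation for a small box might look like the sketch below; total RSS will still end up several times the cache value during sync, so treat it as damage limitation rather than a fix:

# Hypothetical low-memory invocation for a ~2 GB machine (Ropsten)
geth --testnet --syncmode "fast" --cache 256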
I am having an "out of memory" issue with Geth 1.8.9 that is not triggered by fast sync, but just by running Geth. Whenever I start Geth, if I let it run, after a while it will crash with an "out of memory" error message. My Ubuntu Xenial server is configured with 8 GB of RAM and 32 GB of swap. The command line to start Geth is the following: I used to run Geth 1.6.x with the same command and never had an issue with it running out of memory. I have actually tripled the swap space on this server to accommodate the "out of memory" issue and it is still there.
Yeah. Running more nodes, this is becoming a real problem, both on the Ropsten testnet and on mainnet. There is a memory leak somewhere: Geth continually eats up memory until it runs out of machine memory. Seeing this in v1.8.2 and later versions, but we haven't been able to upgrade past v1.8.2 due to #16846. We're running on Kubernetes, so this is particularly painful because our Geth nodes continually get restarted.
I should probably have mentioned that I'm running 3 geth dockers on one single machine (Mainnet, Ropsten, Rinkeby).
I notice that it crashes after a few hours, but only once I activate … The failing setup:
Hope that helps!
Ah, that's interesting! I upgraded my node to 16 GB of memory and geth hasn't crashed on me since. The memory limit flag doesn't actually limit geth's memory usage, it seems.
Same issue. When I changed from CentOS 7.4 to Ubuntu 16.04 and changed the rpcport from 8581 to 8681, it seems to have stopped. I think the rpcport may be the main cause. Here are my specifications:
Server: Ubuntu 16.04.4 x64, 4 GB Memory, 40 GB HDD Disk, 2 vCPUs
Geth: Version 1.8.16-stable, Architecture: amd64, Go Version: go1.11.1
You might be under an RPC attack.
Hi guys, I also encountered this problem, but when I removed the "--rpc" config, geth works well. I think the RPC service is the main problem, and I hope someone can fix this.
The main problem is most likely attackers on the internet discovering the RPC service and doing brute-force password guessing against personal.unlock. Luckily, decrypting a keystore is very memory-intensive. Note: this doesn't seem to be the case for the original reporter, but it is very likely for those of you who experience problems only when RPC is enabled.
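If an exposed RPC endpoint is indeed the trigger, one common mitigation (a sketch, not an official recommendation from this thread) is to bind the HTTP endpoint to the loopback interface and expose only the APIs you actually need, so the personal API is never reachable from the internet:

# Hypothetical hardened invocation: RPC on localhost only, no personal API
geth --rpc --rpcaddr "127.0.0.1" --rpcport 8545 --rpcapi "eth,net,web3"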
Hi, holiman
@hashfury42 Could you please share how you set up your firewall on the port? Did you allow only a particular IP to have access to that port?
Yes, only a particular IP has access to that port. If you use Ubuntu, you can read this: https://help.ubuntu.com/community/UFW
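For example, a minimal UFW setup along those lines might be (assuming the RPC port is 8545 and 203.0.113.10 is the one trusted client; both are placeholders):

# Hypothetical UFW rules: allow a single trusted IP to reach the RPC port,
# reject it for everyone else (rules are matched in the order they are added)
sudo ufw allow from 203.0.113.10 to any port 8545 proto tcp
sudo ufw deny 8545/tcp
sudo ufw enable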
@holiman Thanks, in my case this method helped me. I created a solution on https://stackoverflow.com/questions/53206228/ethereum-geth-out-of-memory/ and I hope it will be useful.
Hi Alex,
Greetings from Singapore and thanks a lot for the information. I will try this out by tomorrow morning.
One more question: if I want to enable it for two IP addresses, do I need to repeat the iptables commands twice with different IP addresses?
Regards,
Jain
@NIVJAIN
If I understand correctly, yes. However, I recommend reading about Linux iptables. Your main task is to allow only strictly defined addresses to access your node on that port. If you give two IP addresses access to the node, don't forget to specify this parameter: --rpccorsdomain
Please write about your results...
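As a rough sketch of that (assuming port 8545 and two placeholder addresses; adapt to your own setup and persist the rules however your distro does it):

# Hypothetical iptables rules: accept the RPC port only from two trusted
# addresses and drop everything else on that port
iptables -A INPUT -p tcp --dport 8545 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8545 -s 203.0.113.20 -j ACCEPT
iptables -A INPUT -p tcp --dport 8545 -j DROP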
I'm not sure if it's the same problem or not. The issue is that it slowly climbs to the CPU and memory limit. Server:
geth --rinkeby --rpc --rpcaddr "0.0.0.0" --rpcvhosts=* --rpcport "8545" --rpccorsdomain "neojuneELB-1439772252.ap-northeast-2.elb.amazonaws.com" --rpcapi "eth,net,web3,personal,admin" --syncmode "light" --cache "64"
It happened to me right now on my private Ethereum network. Is there any progress on this? Geth command:
Log:
This is a very old ticket, and we've worked on and fixed many memory issues over time. I'm closing this; it's better if new tickets are opened relating to more recent versions.
Hi there,
I get a
fatal error: runtime: out of memory
after updating geth to 1.8.2 in my docker-compose file. These two screenshots are separated by 1 minute 30 seconds (attached is a video: geth-out-of-memory.zip).
After 1'30'' one can see that geth is constantly eating up memory.
These are the CLI args used to start geth:
geth --syncmode "fast" --testnet --ws --wsaddr "0.0.0.0" --wsorigins "*" --rpcvhosts my.domain --cache 512
Here is my server RAM:
Is that related to #16244, #16243 and #16174?
Many thanks for your help!
System information
Geth version:
1.8.2
OS & Version: Linux Docker
Backtrace