
eth_calls to some addresses seem to never return, they just hang indefinitely #8311

Closed
tzapu opened this issue Apr 4, 2018 · 13 comments · Fixed by #8943
Labels
F3-annoyance 💩 The client behaves within expectations, however this “expected behaviour” itself is at issue. M6-rpcapi 📣 RPC API.
Milestone

Comments

@tzapu
Contributor

tzapu commented Apr 4, 2018

I'm running:

  • Which Parity version?: 1.8.7 , nightly, beta
  • Which operating system?: Linux
  • How installed?: docker
  • Are you fully synchronized?: yes
  • Which network are you connected to?: ethereum
  • Did you try to restart the node?: yes

actual
making an eth_call for name() or symbol() on address 0x6a0a0fc761c612c340a0e98d33b37a75e5268472 just hangs forever.
one test ran for 8 hours without returning anything. the same curl against geth returns instantly.

expected behavior
to return something, be it the result or an error... anything

steps to reproduce

curl --data '{"method":"eth_call","params":[{"to":"0x6a0a0fc761c612c340a0e98d33b37a75e5268472","data":"0x06fdde03"}, "latest"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545
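for reference, symbol() (4-byte selector 0x95d89b41) on the same contract hangs the same way; the variant below just adds curl's --max-time flag so a hung test gives the shell back after 30 seconds instead of blocking forever (a client-side convenience only, nothing changes on the node side):

curl --max-time 30 --data '{"method":"eth_call","params":[{"to":"0x6a0a0fc761c612c340a0e98d33b37a75e5268472","data":"0x95d89b41"}, "latest"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545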

thank you very much

/edit: tested various configurations: default, archive+tracing, no-ancient-blocks

@5chdn
Contributor

5chdn commented Apr 5, 2018

can you make sure you are on the latest parity version?

@5chdn 5chdn added Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known. M6-rpcapi 📣 RPC API. labels Apr 5, 2018
@tzapu
Contributor Author

tzapu commented Apr 5, 2018

latest nightly or stable? i tried with 1.8.7, 1.10 beta (v1.10.0-beta-0a9d41e-20180320), and a nightly from a few days ago

thanks for looking into this

@tsutsu

tsutsu commented Apr 6, 2018

I've experienced the same problem. It's not actually any particular address, but it can seem to be because, if you are executing a sequence of eth_call requests against a fresh parity instance, it will hang in the same place each time, seemingly in response to the same request. However, if you can get some of the requests done, restart parity, and then do the rest of the requests, the originally-failing request will now complete just fine—and then a later eth_call request will stall out instead.

When this happens, parity spins at 100% CPU load. Tracing the process reveals many threads awaiting a semaphore lock (I'm not sure if that's just normal for parity's architecture, but I figured I'd mention it.)

I think what's actually happening is that the backend handler for eth_call is either falling into a userland deadlock (maybe eth_call acquires a semaphore and doesn't release it?) or is experiencing some kind of kernel-mode resource exhaustion (I notice that the problem takes much longer to crop up—though still does eventually crop up—if I increase the maxfiles ulimit of the session I start parity from, which would suggest a file-descriptor leak.)
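If anyone wants to check those two hypotheses on their own node, something along these lines is what I looked at (it assumes a Linux host where the process is literally named parity; inside Docker, run it in the container):

# count open file descriptors of the parity process, to see whether they keep growing
ls /proc/$(pidof parity)/fd | wc -l

# dump a backtrace of every thread, to see where they are parked while the CPU spins
gdb -p $(pidof parity) -batch -ex "thread apply all bt"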

@andresilva
Contributor

andresilva commented Apr 8, 2018

I haven't looked into this, but the contract at 0x6a0a0fc761c612c340a0e98d33b37a75e5268472 is one of the contracts from the state bloat DoS (https://github.com/ethereum/statesweep).

@tzapu
Contributor Author

tzapu commented Apr 8, 2018

figured it might be related as it was really close to the start of the attack (didn't think to search for it though :P )

interestingly, geth returns in a timely fashion, i wonder if they are doing something specific about it...

@tzapu
Contributor Author

tzapu commented Apr 10, 2018

i have also had it happen consistently for the following addresses (a quick loop to check them all is sketched after the notes below)

0x8a2e29e7d66569952ecffae4ba8acb49c1035b6f
0x7c20218efc2e07c8fe2532ff860d4a5d8287cb31
0x0f575d9c2792aa915551599a63166d8343a357b6
0xe727fed4723e92b891c02ce70b87030be1d11894

Tried to look into what @tsutsu mentioned and my findings are:

  • 1 core gets locked to 100% for as long as the request/connection is kept open
  • this happens even if i restart parity
  • this happens even if i reboot the server and it's the first request i run

/edit

  • ulimit nofile already seems to be at its maximum for me (10485760), running in docker
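a loop like this is enough to reproduce it (each call gets a 30-second client-side timeout via curl's --max-time so a hang doesn't wedge the loop; the node is assumed to be on localhost:8545):

for addr in 0x6a0a0fc761c612c340a0e98d33b37a75e5268472 \
            0x8a2e29e7d66569952ecffae4ba8acb49c1035b6f \
            0x7c20218efc2e07c8fe2532ff860d4a5d8287cb31 \
            0x0f575d9c2792aa915551599a63166d8343a357b6 \
            0xe727fed4723e92b891c02ce70b87030be1d11894; do
  # 0x06fdde03 is the name() selector; prints the address, then either the result or curl's timeout error
  echo "$addr"
  curl --max-time 30 -s -H "Content-Type: application/json" -X POST localhost:8545 \
    --data '{"method":"eth_call","params":[{"to":"'"$addr"'","data":"0x06fdde03"}, "latest"],"id":1,"jsonrpc":"2.0"}'
  echo
done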

@tzapu tzapu changed the title eth_calls to some addresses seem to never return, they just hand indefinitely eth_calls to some addresses seem to never return, they just hang indefinitely Apr 10, 2018
@tzapu
Contributor Author

tzapu commented Apr 10, 2018

a brand new, synced, light Parity//v1.10.0-beta-0a9d41e-20180320/x86_64-macos/rustc1.24.1 installed through brew on macOS ran the call just fine... could this be docker related?

@tzapu
Contributor Author

tzapu commented Apr 10, 2018

does not seem related to docker; ran it on one of the servers with parity installed through snap (Parity/v1.10.0-unstable-0a9d41e-20180320/x86_64-linux-gnu/rustc1.24.1), same thing.
also noticed that while cores are locked at 100% from the requests, parity won't shut down cleanly:

2018-04-10 08:42:11 UTC Finishing work, please wait...
2018-04-10 08:43:11 UTC Shutdown is taking longer than expected.

@tzapu
Contributor Author

tzapu commented Apr 10, 2018

same parity installation as above but started with --light works fine, returns fast

@rphmeier
Contributor

@tzapu Interesting, since light clients are just querying remote nodes for information. If the remote node returned quickly then it might be something in the RPC server itself.

@tomusdrw
Collaborator

@tzapu Can you pass a gas parameter to the request and see if it helps?

@tzapu
Contributor Author

tzapu commented May 17, 2018

oh man, somehow i missed this.
so @tomusdrw, thank you, it works when passing gas:

curl --data '{"method":"eth_call","params":[{"to":"0x6a0a0fc761c612c340a0e98d33b37a75e5268472","data":"0x06fdde03", "gas":"0x7f110"}, "latest"],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545

summary:
default and archive configurations without gas: hangs indefinitely with no feedback, the request never ends
default and archive configurations with gas: work fine
light configuration without gas: works fine
light configuration with gas: works fine
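instead of hard-coding 0x7f110, one option is to ask the node for an estimate first and feed that into the eth_call; eth_estimateGas is standard JSON-RPC, though i haven't checked whether it hits the same slow path on these contracts, so treat it as a sketch:

curl --data '{"method":"eth_estimateGas","params":[{"to":"0x6a0a0fc761c612c340a0e98d33b37a75e5268472","data":"0x06fdde03"}],"id":1,"jsonrpc":"2.0"}' -H "Content-Type: application/json" -X POST localhost:8545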

@5chdn 5chdn added this to the 1.12 milestone May 17, 2018
@5chdn 5chdn added F3-annoyance 💩 The client behaves within expectations, however this “expected behaviour” itself is at issue. and removed Z0-unconfirmed 🤔 Issue might be valid, but it’s not yet known. labels May 17, 2018
@folsen
Contributor

folsen commented May 21, 2018

Possibly related to #6840
