This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

JSON-RPC: eth_getWork stuck on a single block #7787

Closed
Nashatyrev opened this issue Feb 2, 2018 · 4 comments
Labels
F2-bug 🐞 The client fails to follow expected behavior. M4-core ⛓ Core client code / Rust. P2-asap 🌊 No need to stop dead in your tracks, however issue should be addressed as soon as possible.

@Nashatyrev

Nashatyrev commented Feb 2, 2018

I'm running:

  • 1.9.0-beta
  • Linux
  • Installed via bash <(curl https://get.parity.io -Lk)
  • Are you fully synchronized?: yes
  • Which network are you connected to?: ethereum
  • Did you try to restart the node?: yes

Steps:

  • Run the node until it is fully synced, with the JSON-RPC API enabled
  • Call eth_getWork 10 times per second (as a miner or pool normally does)
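The polling step above can be reproduced with a minimal loop (a sketch only; it assumes a node with the JSON-RPC server enabled on the hypothetical endpoint http://localhost:8545):

```shell
#!/bin/sh
# Poll eth_getWork roughly 10 times per second, as a miner or pool would.
# http://localhost:8545 is an assumed endpoint; adjust to your node's RPC address.
while true; do
  curl -s --data-binary \
    '{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}' \
    -H 'content-type: application/json' http://localhost:8545/
  echo
  sleep 0.1
done
```

Once new blocks are imported, each response should carry a fresh header hash and block number; the same result repeating while the chain advances is the symptom reported here.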

Expected behavior:

  • Each eth_getWork response is based on the latest imported block

Actual behavior:

  • At some point (usually within an hour of running), eth_getWork stops returning fresh work to mine and keeps returning the same obsolete job

Not reproducible on 1.8.8.

@5chdn 5chdn added F2-bug 🐞 The client fails to follow expected behavior. P2-asap 🌊 No need to stop dead in your tracks, however issue should be addressed as soon as possible. M4-core ⛓ Core client code / Rust. labels Feb 5, 2018
@5chdn 5chdn added this to the 1.10 milestone Feb 5, 2018
@5chdn 5chdn modified the milestones: 1.10, 1.11 Mar 1, 2018
@5chdn 5chdn modified the milestones: 1.11, 1.12 Apr 24, 2018
@debris
Collaborator

debris commented May 9, 2018

@tomusdrw can this be related to the new transaction pool?

@tomusdrw
Collaborator

tomusdrw commented May 9, 2018

1.9 does not have the new queue, so perhaps it's an issue with the using_queue crate?
Does the client still import new blocks? Maybe it just doesn't have enough peers.

@tomusdrw
Collaborator

tomusdrw commented Jun 1, 2018

Closed via #8656

@tomusdrw tomusdrw closed this as completed Jun 1, 2018
@YihaoPeng
Contributor

YihaoPeng commented Jul 25, 2018

Is this fixed in Parity/v1.11.7-stable-085035f?

I have four nodes running Parity/v1.11.7-stable-085035f: two on the Foundation chain (mainnet) and two on Classic.

The pool backend calls eth_getWork on each node every 500 ms.

A few days later I found that the eth_getWork responses of all the nodes were stuck on a single block (each node stuck at a different block number).

At the same time, the eth_blockNumber responses were completely normal.

Here are the responses from two of the nodes:

server1 (stuck on 0x5bc453):

# curl --user user:pass --data-binary '{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}' -H 'content-type: application/json' http://server1:8545/
{"jsonrpc":"2.0","result":"0x5bf73c","id":1}

# curl --user user:pass --data-binary '{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}' -H 'content-type: application/json' http://server1:8545/
{"jsonrpc":"2.0","result":["0xb8af6297136285cc2524c1c6ea3b5ff6ab718770149f290fd2b3a901d6289ffc","0x8308d376eeb469b7ff84bd59c51988d9618b208dc3b951d1dc1918fa08306723","0x0000000000001454a8a54188ceba97124373caf1027026abd7246b1b19a5c428","0x5bc453"],"id":1}

server2 (stuck on 0x5bd058):

# curl --user user:pass --data-binary '{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}' -H 'content-type: application/json' http://server2:8545/
{"jsonrpc":"2.0","result":"0x5bf73f","id":1}

# curl --user user:pass --data-binary '{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}' -H 'content-type: application/json' http://server2:8545/
{"jsonrpc":"2.0","result":["0x98908aea0fe877aca95ca49d51a3c095fa7cb1dbf8b6ef81e3f8419b0b269087","0x8308d376eeb469b7ff84bd59c51988d9618b208dc3b951d1dc1918fa08306723","0x00000000000014c0ba453d1533641eef4d46081e4c7cbaa38b325452af80c61a","0x5bd058"],"id":1}

I admit that my eth_getWork call frequency is very high (twice per second), but I suspect that even if I slowed down the calls, the problem could not be completely avoided.
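The stuck state can be detected mechanically by comparing eth_blockNumber with the block-number field (the fourth element) of the eth_getWork result. A sketch, assuming jq is installed and reusing the hypothetical server1 host and credentials from the transcripts above:

```shell
#!/bin/sh
# Flag the stuck condition: eth_getWork's block number (4th array element)
# lagging behind eth_blockNumber. Host and credentials are placeholders.
RPC=http://server1:8545/
AUTH=user:pass

head_hex=$(curl -s --user "$AUTH" -H 'content-type: application/json' \
  --data-binary '{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}' \
  "$RPC" | jq -r '.result')
work_hex=$(curl -s --user "$AUTH" -H 'content-type: application/json' \
  --data-binary '{"jsonrpc": "2.0", "method": "eth_getWork", "params": [], "id": 1}' \
  "$RPC" | jq -r '.result[3]')

head=$(printf '%d' "$head_hex")
work=$(printf '%d' "$work_hex")
echo "head=$head work=$work lag=$((head - work))"
# In the server1 transcript above: head 0x5bf73c (6027068) vs
# work 0x5bc453 (6014035), a lag of 13033 blocks.
```

A healthy node should show a lag of zero (or one block briefly, around import time); a lag that only grows indicates the bug.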
