
eth_getTransactionCount is able to respond at a block that is not even mined yet #4514

Closed
1 task done
ET-Chan opened this issue Sep 7, 2023 · 2 comments · Fixed by #4517
Labels
C-bug An unexpected or incorrect behavior

Comments

ET-Chan commented Sep 7, 2023

Describe the bug

A script polls the transaction count of a particular contract address at the next block, i.e. a block that has not been mined yet. The node should return an error until that block is mined, and for QuickNode/Erigon this is indeed the case. For reth, however, the script can get an answer up to 8 seconds before the block is mined.

Steps to reproduce

Run the following Python script:

import time

from tenacity import Retrying, wait_fixed, retry
from web3 import Web3

import numpy as np

trader_address = "PASTE A **CONTRACT** ADDRESS"
w3 = Web3(Web3.HTTPProvider("http://localhost:8489"))


def benchmark_transaction_count_test():

    @retry(wait=wait_fixed(0.1))
    def get_res(block_no):
        return w3.eth.get_transaction_count(
            account=trader_address,
            block_identifier=block_no
        )

    latencies = []
    prev_block_no = None
    next_block_no = None
    for i in range(10):
        while next_block_no is None or (
                prev_block_no is not None and prev_block_no == next_block_no):
            latest_block_info = w3.eth.get_block(
                block_identifier="latest"
            )
            next_block_no = latest_block_info['number'] + 1

        print(f"Polling Block: {next_block_no}")

        get_res(next_block_no)
        retrieved_at = time.time()
        next_block_info = Retrying(wait=wait_fixed(0.1))(
            w3.eth.get_block,
            block_identifier=next_block_no
        )
        block_mined_at = next_block_info['timestamp']
        latencies.append(retrieved_at - block_mined_at)
        prev_block_no = next_block_no

    return np.median(latencies)


def main():
    print(f"Median transaction-count latency: {benchmark_transaction_count_test()}")


if __name__ == '__main__':
    main()

Run it against reth, and the answer is returned before the block is mined, i.e. the latency is negative.
Run it against QuickNode/Erigon, and the answer is returned only after the block is mined, i.e. the latency is positive.

Note that if the address is an EOA, the latency will be positive.
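Stripping away web3.py, the divergence can also be checked with a single raw JSON-RPC call. The sketch below only builds the request payload; the address and block number are placeholders (a zero address and a future block number are assumed here). Per the Ethereum JSON-RPC spec, the block parameter is either a tag (`"latest"`, `"pending"`, ...) or a hex-encoded block number:

```python
import json


def tx_count_request(address: str, block_number: int, request_id: int = 1) -> str:
    """Build the raw JSON-RPC payload for eth_getTransactionCount."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_getTransactionCount",
        # Block numbers are hex-encoded quantities in the JSON-RPC API.
        "params": [address, hex(block_number)],
        "id": request_id,
    })


# POSTing this to a node that mirrors Erigon's behaviour yields an error
# while the block is unmined; the reth version in this report answered
# from its pending state instead.
payload = tx_count_request("0x" + "00" * 20, 18_000_000)
```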

Node logs

No response

Platform(s)

Linux (x86)

What version/commit are you on?

reth 0.1.0-alpha.8 (cd71f68)

What database version are you on?

Current database version: 1
Local database is uninitialized

If you've built Reth from source, provide the full command you used

RUSTFLAGS="-C target-cpu=native" cargo build --profile maxperf --features jemalloc

@ET-Chan ET-Chan added C-bug An unexpected or incorrect behavior S-needs-triage This issue needs to be labelled labels Sep 7, 2023

mattsse commented Sep 7, 2023

eth_getTransactionCount is able to respond at a block that is not even mined yet

If the requested block is the pending block and it already exists, is this behaviour bad, and why?


mattsse commented Sep 7, 2023

However, if this diverges from how other clients behave, we should consider mirroring their behavior.
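The mirroring discussed here boils down to rejecting numbered blocks beyond the chain head before consulting any state. A minimal sketch of that check, with a hypothetical in-memory map standing in for the node's actual storage (not reth's real implementation):

```python
class BlockNotFound(Exception):
    """Raised when a numbered block beyond the chain head is requested."""


def transaction_count(state: dict, address: str, requested_block: int,
                      latest_block: int) -> int:
    # Mirror Erigon/Geth: a block number past the head does not exist yet,
    # so the request errors instead of being answered from pending state.
    if requested_block > latest_block:
        raise BlockNotFound(f"block {requested_block} not found")
    # Fall through to the (hypothetical) state lookup for mined blocks.
    return state.get((address, requested_block), 0)
```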
