[R4R] Release v1.1.12 #1025
Merged
* cmd/evm: add 256-bit field validations on transactions (t9n)
* cmd/evm: validate gas*gasPrice, return intrinsic gas usage
* cmd/evm: address review comment
* core: fix warning flagging the use of DeepEqual on error
* apply the same change everywhere possible
* revert change that was committed by mistake
* fix build error
* Update config.go
* revert changes to ConfigCompatError
* review feedback
Co-authored-by: Felix Lange <fjl@twurst.com>
…(#23635)
* core/state/snapshot: fix BAD BLOCK error when snapshot is generating
* core/state/snapshot: alternative fix for the snapshot generator
* add comments and minor update
Co-authored-by: Martin Holst Swende <martin@swende.se>
This fixes a panic that occurs when HeaderByNumber() returns an error.
- use Text instead of fmt.Sprintf
- reduced allocs from 6 to 2
- improved speed
This PR adds a new accessor method to the freezer database. The new view offers a consistent interface, guaranteeing that all individual tables (headers, bodies, etc.) are on the same item number, and that this number is not changed (items added/truncated) while the operation is in progress.
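The consistency guarantee above can be sketched with a read lock held for the whole operation. This is a minimal illustrative stand-in, not go-ethereum's actual freezer types: the `freezer` struct, its fields, and `ReadAncients` here are simplified assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// freezer is a toy stand-in for the ancient store: several append-only
// tables that must stay aligned on the same item count. All names here
// are illustrative, not go-ethereum's real implementation.
type freezer struct {
	lock   sync.RWMutex
	tables map[string][]string // headers, bodies, ... indexed by item number
	items  uint64              // number of items present in every table
}

// ReadAncients runs fn while holding a read lock, so the item count cannot
// change (no appends or truncations) for the duration of the operation.
func (f *freezer) ReadAncients(fn func(items uint64, get func(table string, n uint64) string) error) error {
	f.lock.RLock()
	defer f.lock.RUnlock()
	get := func(table string, n uint64) string { return f.tables[table][n] }
	return fn(f.items, get)
}

func main() {
	f := &freezer{
		tables: map[string][]string{
			"headers": {"h0", "h1"},
			"bodies":  {"b0", "b1"},
		},
		items: 2,
	}
	_ = f.ReadAncients(func(items uint64, get func(string, uint64) string) error {
		// Both reads are guaranteed to see the same item range.
		fmt.Println(get("headers", items-1), get("bodies", items-1))
		return nil
	})
}
```

Because callers receive the item count and the getter inside one locked scope, partial truncations can never be observed mid-operation.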
xgo is not maintained at this time, so none of these builds work. Closes #23784
* core: write test showing that TD is not stored properly at genesis
The ToBlock method applies a default value for an empty difficulty value. This default is not carried over through the Commit method, because the TotalDifficulty database write stores the original difficulty value (nil) instead of the default value present on the genesis block.
Date: 2021-10-22 08:25:32-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* core: write TD value from Block, not original genesis value
This fixes an issue where a default TD value was not written to the database, resulting in a 0 TD value at genesis. A test for this issue was provided at 90e3ffd393.
Date: 2021-10-22 08:28:00-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* core: fix tests by adding GenesisDifficulty to expected result
See the prior two commits.
Date: 2021-10-22 09:16:01-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* les: fix test with genesis change
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR also counts the size of the key when calculating the size of a db batch
Fixes crashes in various benchmarks in the core package
* cmd/evm: handle rlp errors in t9n
* cmd/evm/testdata: fix readme
This PR adds support for ArrowGlacier, as defined by https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/arrow-glacier.md https://eips.ethereum.org/EIPS/eip-4345 > Starting with FORK_BLOCK_NUMBER the client will calculate the difficulty based on a fake block number suggesting to the client that the difficulty bomb is adjusting 10,700,000 blocks later than the actual block number. This also adds support for evm t8n to return the calculated difficulty, so it can be used to construct tests.
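The "fake block number" mechanic from EIP-4345 amounts to subtracting a fixed delay before the bomb term is computed. A minimal sketch (go-ethereum actually does this with big.Int inside consensus/ethash; the function name here is illustrative):

```go
package main

import "fmt"

// eip4345FakeBlockNumber returns the block number fed to the difficulty
// bomb calculation after Arrow Glacier: EIP-4345 pretends the chain is
// 10,700,000 blocks younger than it really is, pushing the bomb back.
func eip4345FakeBlockNumber(blockNumber uint64) uint64 {
	const bombDelay = 10_700_000
	if blockNumber < bombDelay {
		return 0 // the bomb term is effectively zeroed for early blocks
	}
	return blockNumber - bombDelay
}

func main() {
	// At the Arrow Glacier mainnet fork block (13,773,000) the bomb sees
	// a much earlier block number.
	fmt.Println(eip4345FakeBlockNumber(13_773_000))
}
```

The rest of the difficulty formula is unchanged; only this substituted block number enters the exponential bomb term.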
Some benchmarks in eth/filters were not good: they weren't reproducible, relying on geth chaindata being present. Another one was rejected because the receipt lacked a backing transaction. The p2p simulation benchmark produced many warnings because the framework calls both Stop() and Close(); apparently, the simulated adapter is the only implementation which has a Close(), and there is no need to call both Stop and Close on it.
Don't bother fetching genesis Co-authored-by: wuff1996 <33193253+wuff1996@users.noreply.github.com>
Co-authored-by: mrx <mrx@mrx.com>
…1559 chains (#23840)
This is because writing a known block only checks the block and state without the snapshot, which could lead to a gap between the newest snapshot and the newest block state. New blocks that would cause the snapshot to become fixed were ignored, since the state was already known.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR offers two more database sub commands for exporting and importing data. Two exporters are implemented: preimage and snapshot data respectively. The import command is generic, it can take any data export and import into leveldb. The data format has a 'magic' for disambiguation, and a version field for future compatibility.
…799)
When we map a file for generating the DAG, we do a simple truncate to e.g. 1Gb. This is fine even if we have nowhere near 1Gb of disk available, as the actual file doesn't take up the full 1Gb, merely a few bytes. When we start generating into it, however, it eventually crashes with an unexpected fault address. This change fixes it (on Linux systems) by using the Fallocate syscall, which preallocates sufficient space on disk to avoid that situation.
Co-authored-by: Felix Lange <fjl@twurst.com>
[R4R] merge go-ethereum
[R4R] fix asynchronous caching of difflayer causes random errors in tests
* Redesign triePrefetcher to make it thread safe
There are two types of triePrefetcher instances:
1. Newly created triePrefetcher: key to doing trie prefetch to speed up the validation phase.
2. Copied triePrefetcher: it only copies the prefetched trie information and does no prefetching itself; the copied tries are all kept in p.fetches.
Here we improve the newly created one to make it concurrency safe, while the copied one's behavior stays unchanged (its logic is very simple). As commented in the triePrefetcher struct, its APIs are not thread safe, so callers had to make sure a created triePrefetcher was used within a single routine. Since we want to use the triePrefetcher concurrently, it is necessary to redesign it for concurrent access. The design is simple:
** start a mainLoop to do all the work; the APIs just send channel messages.
Others:
** remove the metrics copy, since it is useless for a copied triePrefetcher
** for trie(), only get the subfetcher through the channel, to reduce the workload of the mainLoop
* some code enhancement for the triePrefetcher redesign
* some fixup: rename, temporary trie channel for concurrency safety
* fix review comments
* add some protection in case the trie prefetcher is already stopped
* fix review comments:
** make close concurrency safe
** fix a potential deadlock
* replace channel by RWMutex for a few triePrefetcher APIs
For APIs like trie(), copy() and used(), it is simpler and more efficient to use an RWMutex instead of channel communication, since the mainLoop would be busy handling trie requests, while these requests can be processed in parallel. Only prefetch and close are kept within the mainLoop, since they update the fetchers.
* add a lock for subfetcher.used access to make it concurrency safe
* no need to create a channel for a copied triePrefetcher
* fix trie_prefetcher_test.go
The trie prefetcher's behavior has changed: prefetch() won't create a subfetcher immediately. That is reasonable, but it broke the unit test, so fix the failing UT.
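The hybrid design described above (mutations serialized through a mainLoop goroutine, read-style APIs behind an RWMutex) can be sketched in miniature. All names and fields here are illustrative stand-ins, not the real triePrefetcher:

```go
package main

import (
	"fmt"
	"sync"
)

// prefetcher is a toy model of the redesign: prefetch() is funneled
// through a single mainLoop goroutine, while trie() reads under an
// RWMutex instead of paying a channel round-trip.
type prefetcher struct {
	mu       sync.RWMutex
	fetched  map[string]string // key -> prefetched node (stand-in for a subfetcher)
	requests chan string
	done     chan struct{}
}

func newPrefetcher() *prefetcher {
	p := &prefetcher{
		fetched:  make(map[string]string),
		requests: make(chan string),
		done:     make(chan struct{}),
	}
	go p.mainLoop()
	return p
}

// mainLoop is the only goroutine that mutates the fetched map, so
// prefetch() is safe to call from many routines at once.
func (p *prefetcher) mainLoop() {
	for key := range p.requests {
		p.mu.Lock()
		p.fetched[key] = "node-for-" + key
		p.mu.Unlock()
	}
	close(p.done)
}

func (p *prefetcher) prefetch(key string) { p.requests <- key }

// trie reads under the RWMutex: cheaper than messaging the mainLoop, and
// many readers can proceed in parallel.
func (p *prefetcher) trie(key string) (string, bool) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	v, ok := p.fetched[key]
	return v, ok
}

// close shuts the request channel and waits for the mainLoop to drain,
// so close is itself safe with respect to in-flight prefetches.
func (p *prefetcher) close() { close(p.requests); <-p.done }

func main() {
	p := newPrefetcher()
	p.prefetch("root")
	p.close() // ensure the mainLoop has drained before reading
	v, ok := p.trie("root")
	fmt.Println(v, ok)
}
```

The split matches the rationale in the commit message: only the mutating operations (prefetch, close) need the mainLoop's serialization; pure reads scale better under a shared lock.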
* feat: refactor dockerfile to add entrypoint script
…node (#1009)
* fix cache read and write concurrency issue of empty block
Signed-off-by: cryyl <yl.on.the.way@gmail.com>
* fix: limit the size of chainHeadChanSize
Co-authored-by: zjubfd <296179868@qq.com>
[R4R]db: freezer batch compatible offline pruneblock command (#1005)
…only db (#1013)
* freezer batch compatible offline pruneblock command; adjust pruneblock local var
* do not write metadata to db when opening db as read-only
* rm duplicate update/delete on trie
* rm useless code
* get a copy of the prefetcher before use, to avoid it being modified between the nil check and the access
yutianwu approved these changes on Jul 28, 2022
realuncle approved these changes on Jul 28, 2022
unclezoro approved these changes on Jul 28, 2022
forcodedancing approved these changes on Jul 28, 2022
unclezoro requested changes on Jul 28, 2022:
Wait for CI
unclezoro approved these changes on Jul 28, 2022
Description
Release v1.1.12 is a performance release. The following two features are introduced in this release:
Separate Processing and State Verification.
Pruning AncientDB inline at runtime.
Separate Processing and State Verification
Separate Processing and State Verification is introduced in #926. It introduces two types of nodes to make full use of different storage: one is named the fast node, the other the verify node. The fast node does block processing with the snapshot, performing all verification against blocks except the state root. The verify node receives the diffhash from the fast node and responds with the MPT root.
If you want to use this feature, see more details here.
Pruning AncientDB inline at runtime
A new flag is introduced to prune undesired block data in the ancient DB at runtime; it will discard block, receipt, and header data in the ancient DB to save space. Example:
geth --config ./config.toml --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --txlookuplimit 0 --pruneancient
Note: once turned on, the pruned ancient data cannot be recovered.
Command Changes
After merging the Ethereum version, some flag parameters have changed; please refer to the following list.
Removed
--yolov3
--vm.ewasm
--vm.evm
--rpc (use --http)
--rpcaddr (use --http.addr)
--rpcport (use --http.port)
--rpccorsdomain (use --http.corsdomain)
--rpcvhosts (use --http.vhosts)
--rpcapi (use --http.api)
Added
--dev.gaslimit: Initial block gas limit
--sepolia: Sepolia network: pre-configured proof-of-work test network
--override.arrowglacier: Manually specify Arrow Glacier fork-block, overriding the bundled setting
--override.terminaltotaldifficulty: Manually specify TerminalTotalDifficulty, overriding the bundled setting
--rpc.evmtimeout: Sets a timeout used for eth_call (0=infinite)
--gpo.ignoreprice: Gas price below which gpo will ignore transactions
--metrics.influxdbv2: Enable metrics export/push to an external InfluxDB v2 database
--metrics.influxdb.token: Token to authorize access to the database (v2 only)
--metrics.influxdb.bucket: InfluxDB bucket name to push reported metrics to (v2 only)
--metrics.influxdb.organization: InfluxDB organization name (v2 only)
Changed
--syncmode: removed the fast mode
Rationale
N/A
Example
N/A
Changes
FEATURE
IMPROVEMENT
BUGFIX