Better support for eth_getLogs in light mode #9186
Conversation
It looks like @jimpo signed our Contributor License Agreement. 👍 Many thanks, Parity Technologies CLA Bot
rpc/src/v1/helpers/light_fetch.rs
Outdated
} else {
	reqs.push(request::HeaderByHash(hash.into()).into());
	HeaderRef::Unresolved(reqs.len() - 1, hash.into())
}
Could we use `make_header_requests` here? Something like this:
refs.entry(hash)
	.or_insert_with(|| self.make_header_requests(BlockId::Hash(hash), &mut reqs)
		.expect("`make_header_requests` never fails for BlockId::Hash; qed"));
Also, we could get rid of this `entry.or_insert_with` by calling `dedup` on `fetch_hashes` in case `from_block == to_block`, right?
It seems kind of roundabout to wrap a value in an enum and then unwrap the result because we know how that branch is handled in a helper function. I figured it was preferable here to avoid the `expect` and accept some code duplication, but I'll change it if you really prefer.

As for the deduping, this method does not require the input hash vector to be sorted (or of length <= 2), so the `or_insert_with` is the easiest way to dedup IMO, rather than doing a sort first.
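(For illustration, a small self-contained sketch of the two dedup strategies under discussion, with `u64` values standing in for `H256` hashes; the `refs`/`reqs` names mirror the snippet above, everything else is an assumption:)

use std::collections::HashMap;

fn main() {
	let hashes = vec![3u64, 1, 3, 2, 1];

	// The PR's approach: lazily insert one map entry per unique hash;
	// no ordering requirement on the input.
	let mut refs: HashMap<u64, usize> = HashMap::new();
	let mut reqs: Vec<u64> = Vec::new();
	for h in &hashes {
		refs.entry(*h).or_insert_with(|| {
			reqs.push(*h);
			reqs.len() - 1
		});
	}
	assert_eq!(reqs.len(), 3); // duplicates collapsed

	// The reviewer's suggestion: Vec::dedup only removes *consecutive*
	// duplicates, hence the need to sort (or constrain the input) first.
	let mut sorted = hashes.clone();
	sorted.sort();
	sorted.dedup();
	assert_eq!(sorted, vec![1, 2, 3]);
}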
Even though I agree that sometimes duplication is better, IMHO in this case using `make_header_requests` makes the code more readable and DRY (e.g. we can change the logic in one place). As for the possible performance impact, I think if `make_header_requests` is inlined, the impact should be negligible.

As for deduping, since this is a private method, we can control the requirements. But I'm fine with leaving it as is.
rpc/src/v1/helpers/light_fetch.rs
Outdated
		}
		headers.push(hdr);
	}
}
Can be simplified to
headers.extend(self.client
	.ancestry_iter(iter_start)
	.take_while(|hdr| hdr.number() >= from_number));
rpc/src/v1/helpers/light_fetch.rs
Outdated
.map(|(hash, header_ref)| {
	let hdr = extract_header(&res, header_ref)
		.expect("these responses correspond to requests that header_ref belongs to \
			therefore it will not fail; qed");
nits:
- comma after the second "to"
- don't mix spaces with tabs

Or, better yet, remove "therefore it will not fail", since "qed" has the same meaning.
rpc/src/v1/helpers/light_fetch.rs
Outdated
let header_proof = request::HeaderProof::new(to_number, cht_root)
	.expect("HeaderProof::new is Some(_) if cht::block_to_cht_number() is Some(_); \
		this would return above if block_to_cht_number returned None; qed");
nit: spaces mixed with tabs
rpc/src/v1/helpers/light_fetch.rs
Outdated
.expect("HeaderProof::new is Some(_) if cht::block_to_cht_number() is Some(_); \ | ||
this would return above if block_to_cht_number returned None; qed"); | ||
reqs.push(header_proof.into()); | ||
Field::back_ref(reqs.len() - 1, 0) |
I would prefer
let idx = reqs.len();
let hash_ref = Field::back_ref(idx, 0);
reqs.push(header_proof.into());
Because it is easier to reason about overflows that way and it matches the style in other places.
rpc/src/v1/helpers/light_fetch.rs
Outdated
}

fn headers_by_hash(&self, hashes: Vec<H256>) -> impl Future<Item = HashMap<H256, encoded::Header>, Error = Error> {
	let mut refs = HashMap::with_capacity(hashes.len());
Maybe we could use `H256FastMap` instead?
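(For reference, a sketch of what that switch might look like, assuming an `H256FastMap` alias over the cheap `PlainHasher` from parity's plain_hasher crate; the function body is illustrative, not the PR's code:)

use std::collections::HashMap;
use std::hash::BuildHasherDefault;

use ethereum_types::H256;
use plain_hasher::PlainHasher;

// A HashMap keyed by H256 that skips expensive re-hashing, since the key
// is already a cryptographic hash.
type H256FastMap<T> = HashMap<H256, T, BuildHasherDefault<PlainHasher>>;

fn refs_for(hashes: &[H256]) -> H256FastMap<usize> {
	// with_capacity_and_hasher is the custom-hasher analogue of with_capacity.
	let mut refs = H256FastMap::with_capacity_and_hasher(hashes.len(), Default::default());
	for (i, h) in hashes.iter().enumerate() {
		refs.entry(*h).or_insert(i);
	}
	refs
}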
@@ -625,6 +657,10 @@ pub enum Error {
	Decoder(::rlp::DecoderError),
	/// Empty response.
	Empty,
	/// Response data length exceeds request max.
	TooManyResults(u64, u64),
For the future (doesn't need to be fixed in this PR): I would prefer struct-like enum variants, since without some context it's not always clear what the meaning and order of the fields are.
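(A hypothetical sketch of what that would look like for the variant above; the field names are illustrative assumptions, not the PR's:)

pub enum Error {
	// ... other variants elided
	/// Response data length exceeds request max.
	TooManyResults {
		/// The maximum number of results the request allowed.
		max: u64,
		/// The number of results the response actually contained.
		got: u64,
	},
}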
ethcore/light/src/provider.rs
Outdated
@@ -33,6 +33,9 @@ use transaction_queue::TransactionQueue;

use request;

/// Maximum allowed size of a headers request.
pub const MAX_HEADERS_LENGTH: u64 = 512;
I personally find the previous name (`MAX_HEADERS_TO_SEND`) more descriptive and less confusing than this one. ("size" usually implies "in bytes")
The reason I changed it is that the variable was used in a context that was receiving headers, not sending them, so it seemed confusing. What about `MAX_HEADERS_PER_REQUEST`?
I like `MAX_HEADERS_PER_REQUEST` much better than `MAX_HEADERS_LENGTH`.
// Validate from_block if it's a hash
let last_hash = headers.last().map(|hdr| hdr.hash());
match (last_hash, from_block) {
	(Some(h1), BlockId::Hash(h2)) if h1 != h2 => Vec::new(),
This is something we've actually discussed internally. Fetching logs for a very large range of blocks is common behavior in a lot of DApps; we'd need to make sure this doesn't open up an "attack" vector by accidentally spamming those requests, or rather make sure that it fails in some suitably graceful way. What would happen right now if a request was sent for a large range?
@folsen bc92ccb enforces a maximum range of 1,000 headers. A more sophisticated approach could be to have two limits, one on the number of blocks within the past 2048 (CHT size) and one on the number of blocks before that. Or to weight them differently or something. But that's probably just an unnecessarily confusing UX.
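(A minimal sketch of such a range check, assuming it runs before any on-demand header fetching; `MAX_HEADERS_RANGE` and the error type are illustrative, not the PR's exact names:)

const MAX_HEADERS_RANGE: u64 = 1_000;

fn check_block_range(from_block_num: u64, to_block_num: u64) -> Result<(), String> {
	// Inclusive range: from == to is a single block.
	let len = to_block_num.saturating_sub(from_block_num) + 1;
	if len > MAX_HEADERS_RANGE {
		// Rejecting early keeps one eth_getLogs call from fanning out into
		// an unbounded number of on-demand network requests.
		return Err(format!("requested range of {} blocks exceeds limit of {}", len, MAX_HEADERS_RANGE));
	}
	Ok(())
}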
Force-pushed from f6fd170 to bc92ccb
rpc/src/v1/helpers/errors.rs
Outdated
pub fn request_rejected_param_limit() -> Error {
	Error {
		code: ErrorCode::ServerError(codes::REQUEST_REJECTED_LIMIT),
		message: "Request has been rejected because requested data size exceeds limit.".into(),
How can a user find out the value of the limit?
I'll put it in the error message.
LGTM modulo a couple of comments I've left
> curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_getLogs","params":[ {"fromBlock":"0x5a6da7","toBlock":"0x5a6da7","address":"0x75228dce4d82566d93068a8d5d49435216551599","topics":[["0xbaba17e31bb9fbfbc0b794111d2b1236ed4e36067a5e0d7c3c3433ad66c99f9d","0x0bffe152251da36b8f0264e3db7a5194b0cae63e5a6cbcf89b753c10ffbe068d","0xdd0dca2d338dc86ba5431017bdf6f3ad45247d608b0a38d866e3131a876be2cc","0xee62c58e2603b92f96a002e012f4f3bd5748102cfa3b711f6d778c6237fcaa96","0xb2e65de73007eef46316e4f18ab1f301b4d0e31aa56733387b469612f90894df","0x8a34ec183bf620d74d0b52e71165bb4255b0591d1c8e9d07c707a7f1d763158d","0xc3cf07f8fa0fafc25a9dd0bad2cd6b961c55dad41b42c8ef8f931bc40e41e08c","0x014ce4e12965529d7d31e11411d7a23b1778d448ab763ffc4d55830cbb4919d7","0x513d029ff62330c16d8d4b36b28fab53f09d10bb51b56fe121ab710ca2d1af80","0x32d554e498d0c7f2a5c7fd8b6b234bfc4e1dfb5290466d998af09a813db32f31","0xabb970462c1f0de9e237d127ad47c01c4e69caa179fd850d076ae9bfc529176e","0xccc07058358a9411a6acb3cd58bf6d0b398c3ff1f0b2c8e97a6dbdbbe74eae41","0xa340b40e5e280037f25da1bff4a1b4030d764649f0d5029a2198182c42cff883","0xec05f094139821aeb3220a0837f5d14eb02aa619179aadf3b316ed95b3648abb","0xb20adf682c8f82b94a135452f54ac4483c9ee8c9b2324e946120696ab1d034b4","0x262b80f2af08a1001d15a1df91dde9acb8441811543886659b3845a8c285748b","0x75dd618f69c0f07adc97fe19ba435f3932ce6aa8cad287fb9bdfaf37639f703a","0x3c67396e9c55d2fc8ad68875fc5beca1d96ad2a2f23b210ccc1d986551ab6fdf","0xa7e9373569caad2b7871ecb4d498619fc1c42840a6c0dbeb8dff20b131721e50","0x299eaafd0d27519eda3fe7195b73e5269e442b3d80928f19afa32b6db2f352b6","0xd4d990bbdf9b9a4383a394341465060ccb75513432ceee3d5fcd8788ab1a507f","0xc62cff53848fe243adb6130140cfe557ce16e8006861abd50adfe425150ba6c5","0x450bd662d3b1e236c8f344457690d257aeae5dca1add336752839ac206613cc0","0x11dda748f0bd3af85a073da0088a0acb827d9584a4fdb825c81f1232a5309538","0x349ab20f76ba930a00da1936627d07400af6bb7cd2e2b4c68bcab93ca8aff418","0x68166bb2a567c21899b00209f52c286bf00ac613acc9f183da791ac5f5f47051","0x3b4f3db017516414df2695e5b0052661779d7163a6cd4368fd74313be73fa0b8"]]} ],"id":74}' localhost:8545
{"jsonrpc":"2.0","result":[{"address":"0x75228dce4d82566d93068a8d5d49435216551599","blockHash":null,"blockNumber":null,"data":"0x000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","logIndex":null,"removed":false,"topics":["0x299eaafd0d27519eda3fe7195b73e5269e442b3d80928f19afa32b6db2f352b6","0x0000000000000000000000000000000000000000000000000000000000000000","0x000000000000000000000000e991247b78f937d7b69cfc00f1a487a293557677"],"transactionHash":null,"transactionIndex":null,"transactionLogIndex":null,"type":"pending"}],"id":74}
rpc/src/v1/helpers/light_fetch.rs
Outdated
if to_block_num < from_block_num {
	// early exit for "to" block before "from" block.
	return Either::A(future::ok(Vec::new()));
The default behavior is subject to change: see https://github.com/paritytech/parity-ethereum/pull/9256/files/4b45f9ac913f7ef9d7e3d3cda22a2d0c28dc23ea..2ad31e5dabafcb1fdb379c51e8c38e0a7b522d2c.
rpc/src/v1/helpers/light_fetch.rs
Outdated
None => BlockId::Number(to_number),
};
headers.extend(self.client.ancestry_iter(iter_start)
	.take_while(|hdr| hdr.number() >= from_number));
nit: mixed tabs and spaces
LGTM, thanks!
CI failure seems unrelated.
Cargo.lock
Outdated
@@ -2159,6 +2159,7 @@ dependencies = [
 "parity-version 2.1.0",
 "parking_lot 0.6.2 (registry+https://github.com/rust-lang/crates.io-index)",
 "patricia-trie 0.2.1 (git+https://github.com/paritytech/parity-common)",
 "plain_hasher 0.1.0 (git+https://github.com/paritytech/parity-common)",
`H256FastMap` has moved to `util/fastmap` (#9307).
rpc/src/v1/helpers/light_fetch.rs
Outdated
@@ -478,6 +485,8 @@ impl LightFetch {
	if to_block_num < from_block_num {
		// early exit for "to" block before "from" block.
		return Either::A(future::err(errors::filter_block_not_found(to_block)));
cc #9373
@@ -390,6 +405,7 @@ impl CheckedRequest {

		None
	}
	// TODO: CheckedRequest::HeaderWithAncestors arm
This is/was an open question in your PR and I haven't followed the entire thread, but is the conclusion that the sequence of headers should not be cached? If so, can you please add a comment here with a brief explanation of why `HeaderWithAncestors` is not cached, instead of the `TODO`?

If my understanding is correct:
- The header cache is 10 MB
- The header size is 632 bytes
- The maximum size of a `HeaderWithAncestors` response is 1000 * 632 = 632,000 bytes (~6% of the total cache)

So it would take at least 16 `HeaderWithAncestors` requests to actually fill the cache?! Is this really an issue?
Added caching in 586f46c.
@@ -701,6 +739,56 @@ impl HeaderProof {
	}
}

/// Request for a header by hash with a range of ancestors.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HeaderWithAncestors(pub Field<H256>, pub u64);
I would prefer a normal `struct` instead of a tuple struct here, because it will be easier to read/understand!
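(A hypothetical named-field version of the type from the diff above; `Field<H256>` is the PR's own type, and the field names here are assumptions:)

/// Request for a header by hash with a range of ancestors.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HeaderWithAncestors {
	/// Hash (or back-reference) of the youngest block in the range.
	pub block_hash: Field<H256>,
	/// Number of ancestor headers to fetch in addition to that block's.
	pub ancestor_count: u64,
}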
rpc/src/v1/helpers/light_fetch.rs
Outdated
.map(|hdr| (hdr.number(), hdr.hash(), request::BlockReceipts(hdr.into())))
.map(|(num, hash, req)| self.on_demand.request(ctx, req).expect(NO_INVALID_BACK_REFS).map(move |x| (num, hash, x)))
.collect();
let fetcher: Self = self.clone();
let fetcher : Self = self.clone();
rpc/src/v1/helpers/light_fetch.rs
Outdated
let best_number = self.client.chain_info().best_block_number;

let fetcher: Self = self.clone();
let fetcher : Self = self.clone();
rpc/src/v1/helpers/light_fetch.rs
Outdated
	})
}

fn headers_by_hash(&self, hashes: Vec<H256>) -> impl Future<Item = H256FastMap<encoded::Header>, Error = Error> {
Take `hashes` by reference here instead, because `H256` is `Copy`?!
Needs to be rebased/merged into master!
Also fulfills request locally if all headers are in cache.
LGTM 2.0 (modulo failing tests and the comment below)

P.S. Please don't rebase commits; it's harder to see what has changed, and GitHub doesn't provide notifications on rebases. We squash-merge anyway, so the commit history is not important.
let mut result = Vec::with_capacity(req.max as usize);
let mut hash = start;
for _ in 0..req.max {
	match cache.lock().block_header(&hash) {
I would rather move `cache.lock()` outside of the loop.
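(A self-contained toy illustrating the point, with a `Mutex`-guarded map standing in for the header cache; note the real code uses parking_lot's `Mutex`, whose `lock()` does not return a `Result`:)

use std::collections::HashMap;
use std::sync::Mutex;

// Walk up to `max` "headers" backwards from `start`, acquiring the cache
// lock once instead of once per iteration.
fn collect_cached(cache: &Mutex<HashMap<u64, u64>>, start: u64, max: u64) -> Vec<u64> {
	let mut result = Vec::with_capacity(max as usize);
	let mut hash = start;
	let cache = cache.lock().unwrap(); // hoisted out of the loop
	for _ in 0..max {
		match cache.get(&hash) {
			Some(&parent) => {
				result.push(hash);
				hash = parent; // follow the parent link
			}
			None => break, // cache miss: the rest must come from the network
		}
	}
	result
}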
rpc/src/v1/helpers/light_fetch.rs
Outdated
	})
}

fn headers_by_hash(&self, hashes: &Vec<H256>) -> impl Future<Item = H256FastMap<encoded::Header>, Error = Error> {
It's better to have `hashes: &[H256]`, and use `Vec::as_slice`.
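(A tiny illustration of why the slice signature is more flexible; the function here is a stand-in, not the PR's:)

fn first_or_zero(hashes: &[u64]) -> u64 {
	hashes.first().copied().unwrap_or(0)
}

fn main() {
	let hashes = vec![7u64, 8, 9];
	// A &Vec<u64> auto-derefs to &[u64], or be explicit with as_slice():
	assert_eq!(first_or_zero(&hashes), 7);
	assert_eq!(first_or_zero(hashes.as_slice()), 7);
}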
Cool. Needs a 2nd review.
@niklasad1 if all your comments have been addressed then please give it the green checkmark :)
// find all headers which match the filter, and fetch the receipts for each one.
// match them with their numbers for easy sorting later.
let bit_combos = filter.bloom_possibilities();
let receipts_futures: Vec<_> = headers.drain(..)
A minor thing: isn't it better to use `into_iter` instead of `drain` in this context, because we are not reusing `headers` after the operation? (`drain` borrows the vector and leaves it empty, whereas `into_iter` takes ownership.)
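(A tiny self-contained illustration of the difference, with plain integers standing in for headers:)

fn main() {
	// drain(..) mutably borrows the vector and leaves it empty but usable.
	let mut headers = vec![1u64, 2, 3];
	let drained: Vec<u64> = headers.drain(..).collect();
	assert_eq!(drained, vec![1, 2, 3]);
	assert!(headers.is_empty()); // still alive, just empty

	// into_iter() consumes the vector; `headers2` can't be used afterwards.
	let headers2 = vec![1u64, 2, 3];
	let owned: Vec<u64> = headers2.into_iter().collect();
	assert_eq!(owned, vec![1, 2, 3]);
}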
LGTM!
* Light client on-demand request for headers range.
* Cache headers in HeaderWithAncestors response. Also fulfills request locally if all headers are in cache.
* LightFetch::logs fetches missing headers on demand.
* LightFetch::logs limit the number of headers requested at a time.
* LightFetch::logs refactor header fetching logic.
* Enforce limit on header range length in light client logs request.
* Fix light request tests after struct change.
* Respond to review comments.
* master:
  evmbin: Fix gas_used issue in state root mismatch and handle output better (#9418)
  Update hardcoded sync (#9421)
  Add block reward contract config to ethash and allow off-chain contracts (#9312)
  Private packets verification and queue refactoring (#8715)
  Update tobalaba.json (#9419)
  docs: add parity ethereum logo to readme (#9415)
  build: update rocksdb crate (#9414)
  Updating the CI system (#8765)
  Better support for eth_getLogs in light mode (#9186)
  Add update docs script to CI (#9219)
  `gasleft` extern implemented for WASM runtime (kip-6) (#9357)
  block view! removal in progress (#9397)
  Prevent sync restart if import queue full (#9381)
  nonroot CentOS Docker image (#9280)
  ethcore: kovan: delay activation of strict score validation (#9406)
* parity-version: bump stable to 1.11.9
* Fix compilation error on nightly rust (#8707) On nightly rust passing `public_url` works but that breaks on stable. This works for both.
* parity-version: bump stable to 1.11.10
* Check if synced when using eth_getWork (#9193) (#9210)
* Check if synced when using eth_getWork (#9193)
* Don't use fn syncing
* Fix identation
* Fix typo
* Don't check for warping
* rpc: avoid calling queue_info twice on eth_getWork
* Fix potential as_usize overflow when casting from U256 in miner (#9221)
* Allow old blocks from peers with lower difficulty (#9226) Previously we only allow downloading of old blocks if the peer difficulty was greater than our syncing difficulty. This change allows downloading of blocks from peers where the difficulty is greater then the last downloaded old block.
* Update Dockerfile (#9242)
* Update Dockerfile fix Docker build
* fix dockerfile paths: parity -> parity-ethereum (#9248)
* Update tobalaba.json (#9313)
* Light client `Provide default nonce in transactions when it´s missing` (#9370)
* Provide `default_nonce` in tx`s when it´s missing When `nonce` is missing in a `EthTransaction` will cause it to fall in these cases provide `default_nonce` value instead!
* Changed http:// to https:// on Yasm link (#9369) Changed http:// to https:// on Yasm link in README.md
* Provide `default_nonce` in tx`s when it´s missing When `nonce` is missing in a `EthTransaction` will cause it to fall in these cases provide `default_nonce` value instead!
* Address grumbles
* ethcore: kovan: delay activation of strict score validation (#9406)
* Use impl Future in the light client RPC helpers (#8628)
* Better support for eth_getLogs in light mode (#9186)
* Light client on-demand request for headers range.
* Cache headers in HeaderWithAncestors response. Also fulfills request locally if all headers are in cache.
* LightFetch::logs fetches missing headers on demand.
* LightFetch::logs limit the number of headers requested at a time.
* LightFetch::logs refactor header fetching logic.
* Enforce limit on header range length in light client logs request.
* Fix light request tests after struct change.
* Respond to review comments.
* Propagate transactions for next 4 blocks. (#9265) Closes #9255 This PR also removes the limit of max 64 transactions per packet, currently we only attempt to prevent the packet size to go over 8MB. This will only be the case for super-large transactions or high-block-gas-limit chains. Patching this is important only for chains that have blocks that can fit more than 4k transactions (over 86M block gas limit) For mainnet, we should actually see a tiny bit faster propagation since instead of computing 4k pending set, we only need `4 * 8M / 21k = 1523` transactions. Running some tests on `dekompile` node right now, to check how it performs in the wild.
* ethcore: fix pow difficulty validation (#9328)
* ethcore: fix pow difficulty validation
* ethcore: validate difficulty is not zero
* ethcore: add issue link to regression test
* ethcore: fix tests
* ethcore: move difficulty_to_boundary to ethash crate
* ethcore: reuse difficulty_to_boundary and boundary_to_difficulty
* ethcore: fix grumbles in difficulty_to_boundary_aux
* parity-version: bump beta to 2.0.2
* remove ssl from dockerfiles, closes #8880 (#9195)
* snap: remove ssl dependencies from snapcraft definition (#9222)
* parity-version: bump beta to 2.0.3
* Remove all dapp permissions related settings (#9120)
* Completely remove all dapps struct from rpc
* Remove unused pub use
* Remove dapp policy/permission func in ethcore
* Remove all dapps settings from rpc
* Fix rpc tests
* Use both origin and user_agent
* Address grumbles
* Address grumbles
* Fix tests
* Check if synced when using eth_getWork (#9193) (#9210)
* Check if synced when using eth_getWork (#9193)
* Don't use fn syncing
* Fix identation
* Fix typo
* Don't check for warping
* rpc: avoid calling queue_info twice on eth_getWork
* Fix potential as_usize overflow when casting from U256 in miner (#9221)
* Allow old blocks from peers with lower difficulty (#9226) Previously we only allow downloading of old blocks if the peer difficulty was greater than our syncing difficulty. This change allows downloading of blocks from peers where the difficulty is greater then the last downloaded old block.
* Update Dockerfile (#9242)
* Update Dockerfile fix Docker build
* fix dockerfile paths: parity -> parity-ethereum (#9248)
* Propagate transactions for next 4 blocks. (#9265) Closes #9255 This PR also removes the limit of max 64 transactions per packet, currently we only attempt to prevent the packet size to go over 8MB. This will only be the case for super-large transactions or high-block-gas-limit chains. Patching this is important only for chains that have blocks that can fit more than 4k transactions (over 86M block gas limit) For mainnet, we should actually see a tiny bit faster propagation since instead of computing 4k pending set, we only need `4 * 8M / 21k = 1523` transactions. Running some tests on `dekompile` node right now, to check how it performs in the wild.
* Update tobalaba.json (#9313)
* Fix load share (#9321)
* fix(light_sync): calculate `load_share` properly
* refactor(api.rs): extract `light_params` fn, add test
* style(api.rs): add trailing commas
* ethcore: fix pow difficulty validation (#9328)
* ethcore: fix pow difficulty validation
* ethcore: validate difficulty is not zero
* ethcore: add issue link to regression test
* ethcore: fix tests
* ethcore: move difficulty_to_boundary to ethash crate
* ethcore: reuse difficulty_to_boundary and boundary_to_difficulty
* ethcore: fix grumbles in difficulty_to_boundary_aux
* Light client `Provide default nonce in transactions when it´s missing` (#9370)
* Provide `default_nonce` in tx`s when it´s missing When `nonce` is missing in a `EthTransaction` will cause it to fall in these cases provide `default_nonce` value instead!
* Changed http:// to https:// on Yasm link (#9369) Changed http:// to https:// on Yasm link in README.md
* Provide `default_nonce` in tx`s when it´s missing When `nonce` is missing in a `EthTransaction` will cause it to fall in these cases provide `default_nonce` value instead!
* Address grumbles
* ethcore: kovan: delay activation of strict score validation (#9406)
* Better support for eth_getLogs in light mode (#9186)
* Light client on-demand request for headers range.
* Cache headers in HeaderWithAncestors response. Also fulfills request locally if all headers are in cache.
* LightFetch::logs fetches missing headers on demand.
* LightFetch::logs limit the number of headers requested at a time.
* LightFetch::logs refactor header fetching logic.
* Enforce limit on header range length in light client logs request.
* Fix light request tests after struct change.
* Respond to review comments.
* Add update docs script to CI (#9219)
* Add update docs script to CI Added a script to CI that will use the jsonrpc tool to update rpc documentation then commit and push those to the wiki repo.
* fix gitlab ci lint
* Only apply jsonrpc docs update on tags
* Update gitlab-rpc-docs.sh
* Copy correct parity repo to jsonrpc folder Copy correct parity repo to jsonrpc folder before attempting to build docs since the CI runner clones the repo as parity and not parity-ethereum.
* Fix JSONRPC docs CI job Update remote config in wiki repo before pushing changes using a github token for authentication. Add message to wiki tag when pushing changes. Use project directory to correctly copy parity code base into the jsonrpc repo for doc generation.
* Fix set_remote_wiki function call in CI
* Prevent blockchain & miner racing when accessing pending block. (#9310)
* Prevent blockchain & miner racing when accessing pending block.
* Fix unavailability of pending block during reseal.
* Prevent sync restart if import queue full (#9381)
* Add POA Networks: Core and Sokol (#9413)
* ethcore: add poa network and sokol chainspecs
* rpc: simplify chain spec docs
* cli: rearrange networks by main/test and size/range
* parity: don't blacklist 0x00a328 on sokol testnet
* parity: add sokol and poanet to params and clean up a bit, add tests
* ethcore: add the poa networks and clean up a bit
* ethcore: fix path to poacore chain spec
* parity: rename poa networks to poacore and poasokol
* parity: fix configuration tests
* parity: fix parameter tests
* ethcore: rename POA Core and POA Sokol
* Update tobalaba.json (#9419)
* Update hardcoded sync (#9421)
  - Update foundation hardcoded header to block 6219777
  - Update ropsten hardcoded header to block 3917825
  - Update kovan hardcoded header to block 8511489
Fixes #9184.
If the light client receives an `eth_getLogs` request for blocks that it does not have, it will fetch the headers from the network on demand rather than silently failing and returning 0 logs. There is a limit of a 1,000-block range on requests to ensure they can be answered quickly.
Tested manually on mainnet because I don't see other automated tests for RPCs.