Adjust requests costs for light client #9925
Conversation
Nice find! I think we should remove the old test.
LGTM, adding patch{-beta,-stable} since #9321 is in stable.
```rust
/// Add a new peer_count sample
pub fn add_peer_count(&mut self, peer_count: usize) {
	while self.peer_counts.len() >= MOVING_SAMPLE_SIZE {
```
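For context, here is a minimal sketch of what the sampling code around this diff plausibly looks like. The `VecDeque` storage, the `MOVING_SAMPLE_SIZE` value, and the `avg_peer_count` helper are assumptions inferred from the excerpt above, not the exact implementation in this PR:

```rust
use std::collections::VecDeque;

/// Number of samples kept for the moving average (assumed value).
const MOVING_SAMPLE_SIZE: usize = 256;

struct Statistics {
    peer_counts: VecDeque<usize>,
}

impl Statistics {
    /// Add a new peer_count sample, dropping the oldest ones once the window is full.
    fn add_peer_count(&mut self, peer_count: usize) {
        while self.peer_counts.len() >= MOVING_SAMPLE_SIZE {
            self.peer_counts.pop_front();
        }
        self.peer_counts.push_back(peer_count);
    }

    /// Average over however many samples have been collected so far.
    fn avg_peer_count(&self) -> f64 {
        if self.peer_counts.is_empty() {
            return 1.0;
        }
        let sum: usize = self.peer_counts.iter().sum();
        sum as f64 / self.peer_counts.len() as f64
    }
}

fn main() {
    let mut stats = Statistics { peer_counts: VecDeque::new() };
    stats.add_peer_count(4);
    stats.add_peer_count(6);
    assert_eq!(stats.avg_peer_count(), 5.0);
}
```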
Could be simplified by using a fixed-size array/vector (initialized with 1s) and an `index: usize` that wraps after the end.
Good point, but the current implementation is more accurate when the number of samples is less than MOVING_SAMPLE_SIZE (the buffer would only be full after about an hour: 256 samples * 15 s / 60).
Fair enough, although that could be simply solved with a flag (i.e. `is_fully_filled ? 0..len : 0..index`).

EDIT: @ngotchac I got a little confused; however, now I'm a little concerned about the suggested change to `peer_costs` and the f64 conversion (2^53 can be stored exactly, but 2^53 + 1 can't).
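For illustration, a minimal sketch of the fixed-size-buffer variant suggested above, with a `filled` flag to address the partial-window concern. The names and the `MOVING_SAMPLE_SIZE` value are assumptions for this sketch, not code from the PR:

```rust
const MOVING_SAMPLE_SIZE: usize = 256; // assumed window size

struct PeerCountSamples {
    samples: [usize; MOVING_SAMPLE_SIZE],
    index: usize,
    filled: bool,
}

impl PeerCountSamples {
    fn new() -> Self {
        PeerCountSamples { samples: [1; MOVING_SAMPLE_SIZE], index: 0, filled: false }
    }

    /// Overwrite the oldest sample; the index wraps after the end.
    fn add(&mut self, peer_count: usize) {
        self.samples[self.index] = peer_count;
        self.index = (self.index + 1) % MOVING_SAMPLE_SIZE;
        if self.index == 0 {
            self.filled = true;
        }
    }

    /// Average only over the samples written so far (defaults to 1 peer when empty).
    fn average(&self) -> f64 {
        let len = if self.filled { MOVING_SAMPLE_SIZE } else { self.index.max(1) };
        let sum: usize = self.samples[..len].iter().sum();
        sum as f64 / len as f64
    }
}

fn main() {
    let mut samples = PeerCountSamples::new();
    samples.add(4);
    samples.add(6);
    // Only the two recorded samples count, not the whole buffer of 1s.
    assert_eq!(samples.average(), 5.0);
}
```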
We should have at least one test for this!!
@ngotchac should that be addressed in this PR or a separate one?
@niklasad1 I highly doubt we could get that many leechers for an overflow to happen (we're probably limited by network bandwidth to 1000-2000 peers, so 2**32 / 256 seems unlikely), but I agree we should be more explicit about that.
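To illustrate the precision point raised above, here is a small standalone example (not code from this PR): any `u32` sum converts to `f64` losslessly, while integers above 2^53 do not.

```rust
fn main() {
    // 2^53 is the largest magnitude at which f64 still represents
    // every consecutive integer exactly.
    let exact = 2f64.powi(53);
    assert_eq!(exact as u64, 1u64 << 53);

    // 2^53 + 1 cannot be represented: it rounds back to 2^53.
    let not_exact = (1u64 << 53) + 1;
    assert_eq!(not_exact as f64, exact);

    // By contrast, u32::MAX is far below 2^53, so a u32 peer-count sum
    // survives the round trip through f64 unchanged.
    let max = u32::MAX as f64;
    assert_eq!(max as u32, u32::MAX);
}
```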
```rust
const N: f64 = 50.0;

for i in 1..(N as usize + 1) {
```
nitpick: Rust has inclusive ranges; the syntax is `for i in 0..=N`.
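A quick standalone illustration of the nitpick (not the PR's code): the exclusive range in the diff and its inclusive equivalent iterate the same values.

```rust
fn main() {
    const N: f64 = 50.0;

    let exclusive: Vec<usize> = (1..(N as usize + 1)).collect();
    let inclusive: Vec<usize> = (1..=(N as usize)).collect();

    // Both iterate 1, 2, ..., 50.
    assert_eq!(exclusive, inclusive);
}
```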
Nice work, let's merge? :P
* PIP Table Cost relative to average peers instead of max peers
* Add tracing in PIP new_cost_table
* Update stat peer_count
* Use number of leeching peers for Light serve costs
* Fix test::light_params_load_share_depends_on_max_peers (wrong type)
* Remove (now) useless test
* Remove `load_share` from LightParams.Config; prevent div. by 0
* Add LEECHER_COUNT_FACTOR
* PR Grumble: u64 to u32 for f64 casting
* Prevent u32 overflow for avg_peer_count
* Add tests for LightSync::Statistics
* version: bump stable to 2.1.7
* Adjust requests costs for light client (#9925)
* PIP Table Cost relative to average peers instead of max peers
* Add tracing in PIP new_cost_table
* Update stat peer_count
* Use number of leeching peers for Light serve costs
* Fix test::light_params_load_share_depends_on_max_peers (wrong type)
* Remove (now) useless test
* Remove `load_share` from LightParams.Config; prevent div. by 0
* Add LEECHER_COUNT_FACTOR
* PR Grumble: u64 to u32 for f64 casting
* Prevent u32 overflow for avg_peer_count
* Add tests for LightSync::Statistics
* Fix empty steps (#9939)
* Don't send empty step twice or empty step then block.
* Perform basic validation of locally sealed blocks.
* Don't include empty step twice.
* prevent silent errors in daemon mode, closes #9367 (#9946)
* Fix light client informant while syncing (#9932)
* Add `is_idle` to LightSync to check importing status
* Use SyncStateWrapper to make sure is_idle gets updates
* Update is_major_import to use verified queue size as well
* Add comment for `is_idle`
* Add Debug to `SyncStateWrapper`
* `fn get` -> `fn into_inner`
* ci: rearrange pipeline by logic (#9970)
* ci: rearrange pipeline by logic
* ci: rename docs script
* Add readiness check for docker container (#9804)
* Update Dockerfile: since parity is built for "mission critical use", I thought other operators may see the need for this. Adding the `curl` and `jq` commands allows for an extremely simple health check to be usable in container orchestrators. For example, here is a health check for a parity docker container running in Kubernetes. This can be set up as a readiness probe that would prevent clustered nodes that aren't ready from serving traffic.

  ```bash
  #!/bin/bash
  ETH_SYNCING=$(curl -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://localhost:8545 -H 'Content-Type: application/json')
  RESULT=$(echo "$ETH_SYNCING" | jq -r .result)
  if [ "$RESULT" == "false" ]; then
    echo "Parity is ready to start accepting traffic"
    exit 0
  else
    echo "Parity is still syncing the blockchain"
    exit 1
  fi
  ```

* add sync check script
* Fix docker script (#9854)
* Dockerfile: change source path of the newly added check_sync.sh (#9869)
* Do not use the home directory as the working dir in docker (#9834)
* Do not create a home directory.
* Re-add -m flag
* fix docker build (#9971)
* bump smallvec to 0.6 in ethcore-light, ethstore and whisper (#9588)
* bump smallvec to 0.6 in ethcore-light, ethstore and whisper
* bump transaction-pool
* Fix test.
* patch cargo to use tokio-proto from git repo; this makes sure we no longer depend on smallvec 0.2.1, which is affected by servo/rust-smallvec#96
* use patched version of untrusted 0.5.1
* ci: allow audit to fail
* version: bump beta to 2.2.2
* Add experimental RPCs flag (#9928)
* WiP
* Enable experimental RPCs.
* Keep existing blocks when restoring a Snapshot (#8643)
* Rename db_restore => client
* First step: make it compile!
* Second step: working implementation!
* Refactoring
* Fix tests
* PR Grumbles
* PR Grumbles WIP
* Migrate ancient blocks iterating backward
* Early return in block migration if snapshot is aborted
* Remove RwLock getter (PR Grumble I)
* Remove dependency on `Client`: only used Traits
* Add test for recovering aborted snapshot recovery
* Add test for migrating old blocks
* Fix build
* PR Grumble I
* PR Grumble II
* PR Grumble III
* PR Grumble IV
* PR Grumble V
* PR Grumble VI
* Fix one test
* Fix test
* PR Grumble
* PR Grumbles
* PR Grumbles II
* Fix tests
* Release RwLock earlier
* Revert Cargo.lock
* Update _update ancient block_ logic: set local in `commit`
* Update typo in ethcore/src/snapshot/service.rs (Co-Authored-By: ngotchac <ngotchac@gmail.com>)
* Adjust requests costs for light client (#9925)
* PIP Table Cost relative to average peers instead of max peers
* Add tracing in PIP new_cost_table
* Update stat peer_count
* Use number of leeching peers for Light serve costs
* Fix test::light_params_load_share_depends_on_max_peers (wrong type)
* Remove (now) useless test
* Remove `load_share` from LightParams.Config; prevent div. by 0
* Add LEECHER_COUNT_FACTOR
* PR Grumble: u64 to u32 for f64 casting
* Prevent u32 overflow for avg_peer_count
* Add tests for LightSync::Statistics
* Fix empty steps (#9939)
* Don't send empty step twice or empty step then block.
* Perform basic validation of locally sealed blocks.
* Don't include empty step twice.
* prevent silent errors in daemon mode, closes #9367 (#9946)
* Fix a deadlock (#9952)
* Update informant: decimal in Mgas/s; print every 5s (not randomly between 5s and 10s)
* Fix dead-lock in `blockchain.rs`
* Update locks ordering
* Fix light client informant while syncing (#9932)
* Add `is_idle` to LightSync to check importing status
* Use SyncStateWrapper to make sure is_idle gets updates
* Update is_major_import to use verified queue size as well
* Add comment for `is_idle`
* Add Debug to `SyncStateWrapper`
* `fn get` -> `fn into_inner`
* ci: rearrange pipeline by logic (#9970)
* ci: rearrange pipeline by logic
* ci: rename docs script
* fix docker build (#9971)
* Deny unknown fields for chainspec (#9972)
* Add deny_unknown_fields to chainspec
* Add tests and fix existing one
* Remove serde_ignored dependency for chainspec
* Fix rpc test eth chain spec
* Fix starting_nonce_test spec
* Improve block and transaction propagation (#9954)
* Refactor sync to add priority tasks.
* Send priority tasks notifications.
* Propagate blocks, optimize transactions.
* Implement transaction propagation. Use sync_channel.
* Tone down info.
* Prevent deadlock by not waiting forever for sync lock.
* Fix lock order.
* Don't use sync_channel to prevent deadlocks.
* Fix tests.
* Fix unstable peers and slowness in sync (#9967)
* Don't sync all peers after each response
* Update formatting
* Fix tests: add `continue_sync` to `Sync_step`
* Update ethcore/sync/src/chain/mod.rs (Co-Authored-By: ngotchac <ngotchac@gmail.com>)
* fix rpc middlewares
* fix Cargo.lock
* json: resolve merge in spec
* rpc: fix starting_nonce_test
* ci: allow nightly job to fail
Fixes #9880
The issue was that, since late August, the PIP request costs were relative to the configured maximum number of peers (that PR fixed an issue where the max number of peers was always 1). However, this doesn't really reflect the actual load on the server.
With this PR, the costs are relative to the average number of PIP leeching peers, i.e. the number of peers sending requests to the node. This should lower the request costs without overloading the serving nodes (it seems that fewer than 5 leeching peers are usually connected, so it should decrease the costs by roughly 100x on Kovan, since our bootnodes were set up with 500 max peers).
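Conceptually, the cost scaling described above can be sketched as follows. The `LEECHER_COUNT_FACTOR` value, the function name, and the exact formula are assumptions loosely based on the commit list; treat this as an illustration of the idea, not the actual `new_cost_table` code:

```rust
/// Safety factor applied on top of the average leecher count (assumed value).
const LEECHER_COUNT_FACTOR: f64 = 2.0;

/// Scale a base request cost by the average number of leeching peers,
/// instead of the configured maximum number of peers.
fn scaled_cost(base_cost: u64, avg_leecher_count: f64) -> u64 {
    // Clamp to at least one peer so the load share can never be zero.
    let load = avg_leecher_count.max(1.0) * LEECHER_COUNT_FACTOR;
    (base_cost as f64 * load) as u64
}

fn main() {
    // With ~5 leeching peers on average, costs come out far lower than
    // they would when scaled by a 500-peer configured maximum.
    assert!(scaled_cost(100, 5.0) < scaled_cost(100, 500.0));
}
```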