---
title: -chap-num- Weights
---
import Pseudocode from '@site/src/components/Pseudocode';

import requestJudgementRuntimeFunctionBenchmark from '!!raw-loader!@site/src/algorithms/requestJudgementRuntimeFunctionBenchmark.tex';
import payoutStakersRuntimeFunctionBenchmark from '!!raw-loader!@site/src/algorithms/payoutStakersRuntimeFunctionBenchmark.tex';
import transferRuntimeFunctionBenchmark from '!!raw-loader!@site/src/algorithms/transferRuntimeFunctionBenchmark.tex';
import withdrawUnbondedRuntimeFunctionBenchmark from '!!raw-loader!@site/src/algorithms/withdrawUnbondedRuntimeFunctionBenchmark.tex';
The Polkadot network, like any other permissionless system, needs to implement a mechanism to measure and limit usage in order to establish an economic incentive structure, to prevent network overload, and to mitigate DoS vulnerabilities. In particular, Polkadot enforces a limited time window for block producers to create a block, including limitations on block size, which can make the selection and execution of certain extrinsics too expensive and decelerate the network.
In contrast to some other systems, such as Ethereum, which implement fine-grained metering of every executed low-level operation of smart contracts, known as gas metering, Polkadot takes a more relaxed approach: the cost of a transaction (referred to as an 'extrinsic') is determined before execution. This mechanism is known as the weight system.
The Polkadot weight system introduces a mechanism for block producers to measure the cost of running extrinsics and determine how "heavy" each one is in terms of execution time. Within this mechanism, block producers can select a set of extrinsics and saturate the block to its fullest potential without exceeding any limitations (as described in Section -sec-num-ref-). Moreover, the weight system can be used to calculate a fee for executing each extrinsic according to its weight (as described in Section -sec-num-ref-).
Additionally, Polkadot introduces a specified block ratio (as defined in Section -sec-num-ref-), ensuring that only a certain portion of the total block size gets used for regular extrinsics. The remaining space is reserved for critical, operational extrinsics required for the functionality of Polkadot itself.
To begin, we introduce in Section -sec-num-ref- the assumptions upon which the Polkadot transaction weight system is designed. In Section -sec-num-ref-, we discuss the limitations Polkadot needs to enforce on the block size. In Section -sec-num-ref-, we describe in detail the procedure by which the weight of any transaction should be calculated. In Section -sec-num-ref-, we present how we apply this procedure to compute the weight of particular runtime functions.
In this section, we define the concept of weight and discuss the considerations that need to be accounted for when assigning weights to transactions. These considerations are essential in order for the weight system to deliver its fundamental mission, i.e. the fair distribution of network resources and the prevention of network overload. In this regard, weights serve as an indicator of whether a block is considered full and how much space is left for the remaining, pending extrinsics. Extrinsics which require too many resources are discarded. More formally, the weight system should:
- prevent the block from being filled with too many extrinsics
- avoid extrinsics whose execution takes too long, by assigning a transaction fee to each extrinsic proportional to its resource consumption.
These concepts are formalized in Definition -def-num-ref- and Definition -def-num-ref-:
:::definition
For a block $B$, $\mathrm{Len}(B)$ denotes the length of the block in bytes.
:::
:::definition
Targeted time per block, denoted by $T(B)$, is the targeted time window, in seconds, within which a block producer must produce a new block.
:::
:::definition
Available block ratio reserved for normal extrinsics, denoted by $R(B)$, is the maximum portion of the block that may be used for regular (non-operational) extrinsics.
:::
:::definition
Polkadot block limits, as defined here, should be respected by each block producer for the produced block:

- $\mathrm{Len}(B) \le 5 \times 1'024 \times 1'024 = 5'242'880$ Bytes
- $T(B) = 6$ seconds
- $R(B) \le 0.75$
:::
:::definition
The Polkadot transaction weight function, denoted by $\mathcal{W}$, is defined as:

$$ \mathcal{W}: \mathcal{E} \rightarrow \mathbb{N}, \quad E \mapsto w $$

where $\mathcal{E}$ is the set of extrinsics and $w$ is a non-negative integer, in multiples of 10'000, representing the weight of extrinsic $E$.

In the remainder of this section, we discuss the requirements with which the weight function needs to comply:

- Computations of $\mathcal{W}(E)$ must be determined before the execution of $E$.
- Due to the limited time window, computations of $\mathcal{W}$ must be done quickly and consume few resources themselves.
- $\mathcal{W}$ must be self-contained and must not require I/O on the chain state. $\mathcal{W}(E)$ must depend solely on the Runtime function representing $E$ and its parameters.

Heuristically, "heaviness" corresponds to the execution time of an extrinsic. In that way, the weight of an extrinsic approximates the time it takes to execute. Nonetheless, weight is an abstract unit: its values only need to be consistent relative to one another, and they may be adjusted based on observed, real-world behavior.
:::
In this section, we discuss how applying the limitations defined in Definition -def-num-ref- can be translated into a limitation on the total weight of a block.
:::definition
We define the block weight of block $B$, denoted by $\mathcal{W}(B)$, as the sum of the weights of the extrinsics $\mathcal{E}$ included in it:

$$ \mathcal{W}(B) = \sum_{n=0}^{|\mathcal{E}|} \mathcal{W}(E_n) $$
We require that:
$$ \mathcal{W}(B) < 2'000'000'000'000 $$

:::
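As a minimal illustration of this accounting (the names are hypothetical, not the Runtime's API), a block producer can keep a running sum of extrinsic weights and stop before the cap:

```rust
// Maximum block weight from the definition above.
const MAX_BLOCK_WEIGHT: u64 = 2_000_000_000_000;

// W(B): the block weight is the sum of the weights of its extrinsics.
fn block_weight(extrinsic_weights: &[u64]) -> u64 {
    extrinsic_weights.iter().sum()
}

// Whether one more extrinsic still fits under the block weight limit.
fn fits(current_block_weight: u64, next_extrinsic_weight: u64) -> bool {
    current_block_weight + next_extrinsic_weight < MAX_BLOCK_WEIGHT
}

fn main() {
    let weights = [1_500_000u64; 500];
    let total = block_weight(&weights);
    assert!(fits(total, 1_500_000));
    println!("block weight: {total}");
}
```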
The weights must fulfill the requirements noted in the fundamentals and limitations, and can be assigned as the author sees fit. As a simple example, consider a maximum block weight of 1'000'000'000, an available ratio of 75%, and a targeted transaction throughput of 500 transactions: we could assign an (average) weight of about 1'500'000 to each transaction. Block producers have an economic incentive to include as many extrinsics as possible (without exceeding limitations) into a block before reaching the targeted block time. Weights give block producers an indicator of which extrinsics to include in order to reach the block's fullest potential.
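Spelled out, the example's average weight per transaction is the usable share of the block weight divided by the targeted throughput:

$$ \frac{1'000'000'000 \times 0.75}{500} = 1'500'000 $$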
In order to calculate the weight of block $B$, $\mathcal{W}(B)$, the weight of each included extrinsic, $\mathcal{W}(E)$, must be determined first. The general algorithm to calculate $\mathcal{W}(E)$ is based on benchmarking, as described in the following sections.
Calculating the extrinsic weight solely based on the theoretical complexity of the underlying implementation proves to be too complicated and unreliable at the same time. Certain decisions in the source code architecture, internal communication within the Runtime, or other design choices could add enough overhead to make the asymptotic complexity practically meaningless.
On the other hand, benchmarking an extrinsic in a black-box fashion (using random parameters) would most certainly miss corner cases and worst case scenarios. Instead, we benchmark all available Runtime functions which are invoked in the course of the execution of extrinsics with a large collection of carefully selected input parameters, and use the results of the benchmarking process to evaluate $\mathcal{W}(E)$.
In order to select useful parameters, the Runtime functions have to be analyzed to fully understand which behaviors or conditions can result in expensive execution times, as described further in Section -sec-num-ref-. Not every possible benchmarking outcome can be invoked by varying the input parameters of the Runtime function. In some circumstances, preliminary work is required before a specific benchmark can be reliably measured, such as creating certain preexisting entries in the storage or other changes to the environment.
The Practical Examples (Section -sec-num-ref-) covers the analysis process and the implementation of preliminary work in more detail.
The Runtime reuses components, known as "primitives", to interact with the state storage. The execution cost of those primitives can be measured and a weight should be applied for each occurrence within the Runtime code.
For storage, Polkadot uses three different storage types across its modules, depending on the context:
- **Value**: Operations on a single value. The final key-value pair is stored under the key:

  `hash(module_prefix) + hash(storage_prefix)`

- **Map**: Operations on multiple values, datasets, where each entry has its corresponding, unique key. The final key-value pair is stored under the key:

  `hash(module_prefix) + hash(storage_prefix) + hash(encode(key))`

- **Double map**: Just like Map, but uses two keys instead of one. This type is also known as "child storage", where the first key is the "parent key" and the second key is the "child key". This is useful in order to scope storage entries (child keys) under a certain context (parent key), which is arbitrary. Therefore, one can have separated storage entries based on the context. The final key-value pair is stored under the key:

  `hash(module_prefix) + hash(storage_prefix) + hash(encode(key1)) + hash(encode(key2))`
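To make the key derivation concrete, the following is a minimal sketch assuming the `sp_core` crate, a Twox128 hash for both prefixes and a Blake2-128-concat hasher for the map key; the hasher is configurable per storage item, so this particular combination is illustrative:

```rust
use sp_core::hashing::{blake2_128, twox_128};

// Composes the final storage key of a Map entry as described above:
// hash(module_prefix) ++ hash(storage_prefix) ++ hash(encode(key)).
fn map_storage_key(module_prefix: &[u8], storage_prefix: &[u8], encoded_key: &[u8]) -> Vec<u8> {
    let mut final_key = Vec::new();
    final_key.extend_from_slice(&twox_128(module_prefix));
    final_key.extend_from_slice(&twox_128(storage_prefix));
    final_key.extend_from_slice(&blake2_128(encoded_key));
    // A "concat"-style hasher also appends the raw encoded key so that
    // it can be recovered when iterating the map.
    final_key.extend_from_slice(encoded_key);
    final_key
}

fn main() {
    // Hypothetical module/storage names, for illustration only.
    let key = map_storage_key(b"Identity", b"IdentityOf", b"some-encoded-account-id");
    println!("{} byte key: {:02x?}", key.len(), key);
}
```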
Which storage type to use depends on the functionality of the Runtime module (or rather its sub-processes). In some cases, only a single value is required. In others, multiple values need to be fetched from or inserted into the database.
Those lower-level types get abstracted over in each individual Runtime module using the `decl_storage!` macro. Therefore, each module specifies its own types that are used as input and output values. The abstractions do give indicators on which operations must be closely observed and where potential performance penalties and attack vectors are possible.
The storage layout is mostly the same for every primitive type, primarily differentiated by the special prefixes used for the storage key. Big differences arise in how the primitive types are used in a Runtime function, and in whether single values or entire datasets are being worked on. Single-value operations are generally quite cheap and their execution time does not vary depending on the data that is being processed. However, excessive overhead can appear when I/O operations are executed repeatedly, such as in loops, especially when the number of loop iterations can be influenced by the caller of the function or by certain conditions in the state storage.

Maps, in contrast, have additional overhead when inserting or retrieving datasets, which vary in size. Additionally, the Runtime function has to process each item inside that list.
Indicators for performance penalties:

- **Fixed iterations and datasets**: Fixed iterations and datasets can increase the overall cost of the Runtime function, but the execution time does not vary depending on the input parameters or storage entries. A base weight is appropriate in this case.
- **Adjustable iterations and datasets**: If the number of iterations or the size of the datasets depends on the input parameters of the caller or on specific entries in storage, then a certain weight should be applied for each (additional) iteration or item. The Runtime defines the maximum value for such cases; if it does not, it unconditionally has to, and the Runtime module must be adjusted. When selecting parameters for benchmarking, the benchmarks should range from the minimum value to the maximum value, as described in Definition -def-num-ref-.
- **Input parameters**: Input parameters that users pass on to the Runtime function can result in expensive operations. Depending on the data type, it can be appropriate to add additional weights based on certain properties, such as data size, assuming the data type allows varying sizes. The Runtime must define limits on those properties; if it does not, it unconditionally has to, and the Runtime module must be adjusted. When selecting parameters for benchmarking, the benchmarks should range from the minimum values to the maximum values, as described in Definition -def-num-ref-.
:::definition
What the maximum value should be really depends on the functionality that the Runtime function is trying to provide. If the choice for that value is not obvious, then it is advised to run benchmarks on a big range of values and pick a conservative value below the targeted time per block limit, as described in Section -sec-num-ref-.
:::
The input parameters vary greatly depending on the Runtime function and must therefore be carefully selected. The benchmarks should use input parameters which will most likely be used in regular cases, as intended by the authors, but must also consider worst case scenarios and inputs which might decelerate or heavily impact the performance of the function. The input parameters should be randomized in order to cause various effects in behavior on certain values, such as memory relocations and other outcomes that can impact performance.
It is not possible to benchmark every single value. Instead, one should select a range of inputs to benchmark, spanning from the minimum value to a maximum value which will most likely exceed the expected usage of that function. This is described in more detail in Section -sec-num-ref-. The benchmarks should run individual executions/iterations within that range, where the chosen parameters give insight into the execution time. Selecting imprecise parameters or too extreme ranges might produce results that do not reflect how the function behaves in production. Therefore, when a range of input parameters gets benchmarked, the result of each individual parameter should be recorded and optionally visualized, so that any necessary adjustments can be made. Generally, the worst case scenario should be assigned as the weight value of the corresponding runtime function.
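As a minimal, self-contained sketch of this procedure (not the actual benchmarking tool), the following measures a workload across a range of inputs, records every data point for inspection, and selects the worst case:

```rust
use std::time::Instant;

// Benchmarks `f` for every input in `range`, recording each individual
// measurement so outliers and trends can be inspected or visualized.
fn benchmark<F: FnMut(u32)>(mut f: F, range: std::ops::RangeInclusive<u32>) -> Vec<(u32, u128)> {
    let mut results = Vec::new();
    for n in range {
        let start = Instant::now();
        f(n);
        results.push((n, start.elapsed().as_nanos()));
    }
    results
}

fn main() {
    // Hypothetical workload whose cost grows with the input parameter.
    let results = benchmark(
        |n| {
            let v: Vec<u64> = (0..n as u64).collect();
            std::hint::black_box(v.iter().sum::<u64>());
        },
        1..=1_000,
    );

    // The worst case measurement is what the weight should be based on.
    let worst = results.iter().max_by_key(|(_, t)| *t).unwrap();
    println!("worst case: input {} took {} ns", worst.0, worst.1);
}
```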
Additionally, given the distinction between theoretical and practical usage, the author reserves the right to make adjustments to the input parameters and assigned weights according to the observed behavior of the actual, real-world network.

When assigning the final weight, the worst case scenario of each runtime function should be used. The runtime can then additionally "refund" the amount of weight which was overestimated once the runtime function is actually executed.

The Polkadot runtime only refunds weight if the difference between the assigned weight and the actual weight calculated during execution is greater than 20%.
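A minimal sketch of this refund rule (illustrative names, not the Runtime's API), applying the 20% threshold to the assigned weight:

```rust
// Returns the weight to refund after execution: the overestimated
// portion, but only if it exceeds 20% of the assigned weight.
fn refundable_weight(assigned: u64, actual: u64) -> u64 {
    let overestimated = assigned.saturating_sub(actual);
    // overestimated * 5 > assigned  <=>  overestimated > 20% of assigned
    if overestimated * 5 > assigned {
        overestimated
    } else {
        0
    }
}

fn main() {
    assert_eq!(refundable_weight(1_000_000, 700_000), 300_000); // 30% over
    assert_eq!(refundable_weight(1_000_000, 900_000), 0); // only 10% over
}
```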
It is advised to benchmark the raw I/O operations of the database and assign "base weights" to each I/O operation type, such as insertion, deletion, querying, etc. When a runtime function is executed, the runtime can then add up the base weights of each used operation in order to calculate the final weight.
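For illustration, a sketch of composing a function's final weight from such base weights; the constants are hypothetical placeholders rather than measured values:

```rust
// Hypothetical base weights per raw I/O operation, obtained by
// benchmarking the database on reference hardware.
const DB_READ_WEIGHT: u64 = 25_000;
const DB_WRITE_WEIGHT: u64 = 100_000;
const DB_DELETE_WEIGHT: u64 = 100_000;

// Composes the weight of a runtime function from the I/O it performs
// plus a base weight covering its pure computation.
fn compose_weight(base: u64, reads: u64, writes: u64, deletes: u64) -> u64 {
    base + reads * DB_READ_WEIGHT + writes * DB_WRITE_WEIGHT + deletes * DB_DELETE_WEIGHT
}

fn main() {
    // E.g. a function doing 2 reads and 1 write on top of 50_000 base units.
    println!("{}", compose_weight(50_000, 2, 1, 0));
}
```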
The benchmarks should be executed on clean systems without interference from other processes or software. Additionally, the benchmarks should be executed on multiple machines with different system resources, such as CPU performance, number of CPU cores, RAM and storage speed.
This section walks through Runtime functions available in the Polkadot Runtime to demonstrate the analysis process as described in Section -sec-num-ref-.
In order for certain benchmarks to produce conditions where resource-heavy computation or excessive I/O can be observed, the benchmarks might require some preliminary work on the environment, since those conditions cannot be created simply by selecting input parameters. The analysis process gives indicators of how the preliminary work should be implemented.
In Polkadot, accounts can save information about themselves on-chain, known as the "Identity Info". This includes information such as display name, legal name, email address and so on. Polkadot offers a set of trusted registrars, entities elected by a Polkadot public referendum, which can verify the specified contact addresses of an identity, such as email, and vouch for whether the identity actually owns those accounts. This can be achieved, for example, by sending a challenge to the specified address and requesting a signature as a response. The verification is done off-chain, while the final judgement is saved on-chain, directly in the corresponding Identity Info. It is also noteworthy that Identity Info can contain additional fields, set manually by the corresponding account holder.
Information such as legal name must be verified by ID card or passport submission.
The function `request_judgement` from the `identity` pallet allows users to request judgement from a specific registrar.

```
(func $request_judgement (param $reg_index int) (param $max_fee int))
```
- `reg_index`: the index which is assigned to the registrar.
- `max_fee`: the maximum fee the requester is willing to pay. The judgement fee varies for each registrar.
Studying this function reveals multiple design choices that can impact performance, as the following analysis shows.
First, it fetches a list of current registrars from storage and then searches that list for the specified registrar index.

```rust
let registrars = <Registrars<T>>::get();
let registrar = registrars.get(reg_index as usize).and_then(Option::as_ref)
    .ok_or(Error::<T>::EmptyIndex)?;
```
Then, it searches for the Identity Info from storage, based on the sender of the transaction.

```rust
let mut id = <IdentityOf<T>>::get(&sender).ok_or(Error::<T>::NoIdentity)?;
```
The Identity Info contains, in ordered form, all fields that have data in them, as set by the corresponding owner of the identity. The function then searches for the judgement entry matching the specified registrar index. If the entry can be found, the corresponding value is set to the value passed on as the function parameter (assuming the existing judgement is not "sticky", which implies it cannot be changed). If the entry cannot be found, the value is inserted at the index that maintains the sorted order. This results in memory reallocation, which increases resource consumption.

```rust
match id.judgements.binary_search_by_key(&reg_index, |x| x.0) {
    Ok(i) => if id.judgements[i].1.is_sticky() {
        Err(Error::<T>::StickyJudgement)?
    } else {
        id.judgements[i] = item
    },
    Err(i) => id.judgements.insert(i, item),
}
```
In the end, the function reserves the registrar's fee (bounded by the specified `max_fee`) from the sender's balance, which can later be redeemed by the registrar. The updated Identity Info is inserted into storage and an event is deposited. The creation of the event itself is lightweight; its processing is what will actually commit the state changes.

```rust
T::Currency::reserve(&sender, registrar.fee)?;
<IdentityOf<T>>::insert(&sender, id);
Self::deposit_event(RawEvent::JudgementRequested(sender, reg_index));
```
The following points must be considered:

- Varying count of registrars.
- Varying count of preexisting accounts in storage.
- The specified registrar is searched for in the Identity Info. An identity can be judged by as many registrars as the identity owner issues requests for, thereby increasing its footprint in the state storage. Additionally, if a new value gets inserted into the byte array, memory gets reallocated. Depending on the size of the Identity Info, the execution time can vary.
- The Identity Info can contain only a few fields or many. It is legitimate to introduce additional weights for changes the owner/sender has influence over, such as the additional fields in the Identity Info.
The Polkadot Runtime specifies the `MaxRegistrars` constant, which prevents the list of registrars from reaching an undesired length. This value should have some influence on the benchmarking process.

The benchmarking implementation for the function `request_judgement` is defined as follows:

Algorithm -algo-num- `request_judgement` Runtime Function Benchmark {#algo-benchmark-request-judgement}
:::algorithm <Pseudocode content={requestJudgementRuntimeFunctionBenchmark} algID="requestJudgementRuntimeFunctionBenchmark" options={{ "lineNumber": true }} />
where

- Generate-Registrars($amount$): Creates $amount$ registrars and inserts those records into storage.
- Create-Account($name$, $index$): Creates a Blake2 hash of the concatenated input of $name$ and $index$, representing the address of an account. This function only creates an address and does not conduct any I/O.
- Set-Balance($account$, $balance$): Sets an initial $balance$ for the specified $account$ in the storage state.
- Timer($function$): Measures the time from the start of the specified $function$ to its completion.
- Request-Judgement($registrar\_index$, $max\_fee$): Calls the corresponding `request_judgement` Runtime function and passes on the required parameters.
- Random($num$): Picks a random number between 0 and $num$. This should be used when the benchmark should account for unpredictable values.
- Add-To($collection$, $time$): Adds a returned time measurement ($time$) to $collection$.
- Compute-Weight($collection$): Computes the resulting weight based on the time measurements in $collection$. The worst case scenario (the highest value) should be chosen.

:::
The function `payout_stakers` from the `staking` pallet can be called by a single account in order to pay out the reward for all nominators who back a particular validator. The reward also covers the validator's share. This function is interesting because it iterates over a varying range of nominators and performs an I/O operation for each of them.

First, this function performs a few basic checks to verify that the specified era is not higher than the current era (i.e. it is not in the future) and is within the allowed range, also known as "history depth", as specified by the Runtime. After that, it fetches the era payout from storage and additionally verifies that the specified account is indeed a validator, then retrieves the corresponding "Ledger". The Ledger keeps information about the stash key, the controller key and other information, such as the actively bonded balance and a list of tracked rewards. The function only retains the entries within the history depth, and conducts a binary search for the specified era.
```rust
let era_payout = <ErasValidatorReward<T>>::get(&era)
    .ok_or_else(|| Error::<T>::InvalidEraToReward)?;

let controller = Self::bonded(&validator_stash).ok_or(Error::<T>::NotStash)?;
let mut ledger = <Ledger<T>>::get(&controller).ok_or_else(|| Error::<T>::NotController)?;

ledger.claimed_rewards.retain(|&x| x >= current_era.saturating_sub(history_depth));
match ledger.claimed_rewards.binary_search(&era) {
    Ok(_) => Err(Error::<T>::AlreadyClaimed)?,
    Err(pos) => ledger.claimed_rewards.insert(pos, era),
}
```

The retained claimed rewards are inserted back into storage.

```rust
<Ledger<T>>::insert(&controller, &ledger);
```
As an optimization, the Runtime only fetches a list of the 64 highest-staked nominators, although this might change in the future. Accordingly, any lower-staked nominator gets no reward.

```rust
let exposure = <ErasStakersClipped<T>>::get(&era, &ledger.stash);
```

Next, the function gets the era reward points from storage.

```rust
let era_reward_points = <ErasRewardPoints<T>>::get(&era);
```
After that, the payout is split among the validator and its nominators. The validator receives the payment first, resulting in an insertion into storage and the creation of a deposit event.

```rust
if let Some(imbalance) = Self::make_payout(
    &ledger.stash,
    validator_staking_payout + validator_commission_payout
) {
    Self::deposit_event(RawEvent::Reward(ledger.stash, imbalance.peek()));
}
```
Then, the nominators receive their payout rewards. The function loops over the nominator list, conducting an insertion into storage and creating a deposit event for each of the nominators.

```rust
for nominator in exposure.others.iter() {
    let nominator_exposure_part = Perbill::from_rational_approximation(
        nominator.value,
        exposure.total,
    );

    let nominator_reward: BalanceOf<T> = nominator_exposure_part * validator_leftover_payout;
    // We can now make nominator payout:
    if let Some(imbalance) = Self::make_payout(&nominator.who, nominator_reward) {
        Self::deposit_event(RawEvent::Reward(nominator.who.clone(), imbalance.peek()));
    }
}
```
The following points must be considered:

- The Ledger contains a varying list of claimed rewards. Fetching, retaining and searching through it can affect execution time. The retained list is inserted back into storage.
- Looping through a list of nominators and conducting I/O operations for each one increases execution time. The Runtime fetches up to 64 nominators (a hypothetical weight formula reflecting this is sketched below).
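Given this analysis, the final weight plausibly consists of a base component plus a per-nominator component, capped by the maximum number of rewarded nominators. A minimal sketch with hypothetical constants (real values would come from the benchmarks):

```rust
// Hypothetical weight components; actual values must be measured.
const PAYOUT_BASE_WEIGHT: u64 = 100_000_000;
const PAYOUT_PER_NOMINATOR_WEIGHT: u64 = 10_000_000;
const MAX_NOMINATORS_REWARDED: u64 = 64;

// Weight grows linearly with the number of rewarded nominators,
// bounded by the Runtime's cap on rewarded nominators.
fn payout_stakers_weight(nominator_count: u64) -> u64 {
    let n = nominator_count.min(MAX_NOMINATORS_REWARDED);
    PAYOUT_BASE_WEIGHT + n * PAYOUT_PER_NOMINATOR_WEIGHT
}

fn main() {
    assert_eq!(payout_stakers_weight(10), 200_000_000);
    assert_eq!(payout_stakers_weight(500), payout_stakers_weight(64));
}
```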
:::definition
History Depth, indicated as `HistoryDepth`, is a fixed constant specified by the Polkadot Runtime which dictates the number of Eras for which the Runtime will reward nominators and validators.
:::
:::definition
Maximum Nominators Rewarded Per Validator, indicated as `MaxNominatorRewardedPerValidator`, specifies the maximum number of the highest-staked nominators which will get a reward. Both of these values should have some influence on the benchmarking process.
:::
The benchmarking implementation for the function `payout_stakers` is defined as follows:

:::algorithm <Pseudocode content={payoutStakersRuntimeFunctionBenchmark} algID="payoutStakersRuntimeFunctionBenchmark" options={{ "lineNumber": true }} />
where

- Generate-Validator(): Creates a validator with some unbonded balances.
- Validate($validator$): Bonds the balances of $validator$ and declares the intention to validate.
- Generate-Nominators($amount$): Creates $amount$ nominators with some unbonded balances.
- Nominate($validator$, $nominator$): Starts the nomination of $nominator$ for $validator$ by bonding balances.
- Create-Rewards($validator$, $nominators$, $era\_depth$): Starts an Era and creates pending rewards for $validator$ and $nominators$.
- Timer($function$): Measures the time from the start of the specified $function$ to its completion.
- Add-To($collection$, $time$): Adds a returned time measurement ($time$) to $collection$.
- Compute-Weight($collection$): Computes the resulting weight based on the time measurements in $collection$. The worst case scenario (the highest value) should be chosen.

:::
The `transfer` function of the `balances` module is designed to move the specified balance from the sender to the receiver.
The source code of this function is quite short:
```rust
let transactor = ensure_signed(origin)?;
let dest = T::Lookup::lookup(dest)?;

<Self as Currency<_>>::transfer(
    &transactor,
    &dest,
    value,
    ExistenceRequirement::AllowDeath
)?;
```
However, one needs to pay close attention to the `AllowDeath` property and to how the function treats existing and non-existing accounts differently. Two types of behavior have to be considered:

- If the transfer completely depletes the sender's account balance to zero (or below the minimum "keep-alive" requirement), it removes the address and all associated data from storage.
- If the recipient account has no balance, the transfer also needs to create the recipient account.
Specific parameters can have a significant impact on this particular function. In order to trigger the two behaviors mentioned above, the following parameters are selected:

| Type | Variable | From | To | Description |
|---|---|---|---|---|
| Account index | `index in…` | 1 | 1000 | Used as a seed for account creation |
| Balance | `balance in…` | 2 | 1000 | Sender balance and transfer amount |
Executing a benchmark for each balance increment within the balance range, combined with each index increment within the index range, would generate too many variants. Instead, one parameter is varied while the other is fixed at its maximum value (a sketch of this selection follows the list below):
- `index`: 1, `balance`: 1000
- `index`: 2, `balance`: 1000
- `index`: 3, `balance`: 1000
- …
- `index`: 1000, `balance`: 1000
- `index`: 1000, `balance`: 2
- `index`: 1000, `balance`: 3
- `index`: 1000, `balance`: 4
- …
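A minimal Rust sketch of this alternating selection (illustrative; the actual benchmarking tool is responsible for generating the combinations):

```rust
// Generates the (index, balance) combinations listed above: first vary
// the account index with the balance pinned at its maximum, then vary
// the balance with the index pinned at its maximum.
fn transfer_benchmark_params() -> Vec<(u32, u64)> {
    let mut params = Vec::new();
    for index in 1..=1000u32 {
        params.push((index, 1000));
    }
    for balance in 2..=1000u64 {
        params.push((1000, balance));
    }
    params
}

fn main() {
    let params = transfer_benchmark_params();
    println!("{} variants instead of ~1'000'000", params.len());
}
```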
The parameters themselves do not influence or trigger the two worst case conditions; those must be handled by the implemented benchmarking tool, for example by ensuring that the transfer depletes the sender's balance and that the recipient account does not yet exist.

The benchmarking implementation for the Polkadot Runtime function `transfer` is defined as follows:

:::algorithm <Pseudocode content={transferRuntimeFunctionBenchmark} algID="transferRuntimeFunctionBenchmark" options={{ "lineNumber": true }} />
where

- Create-Account($name$, $index$): Creates a Blake2 hash of the concatenated input of $name$ and $index$, representing the address of an account. This function only creates an address and does not conduct any I/O.
- Set-Balance($account$, $balance$): Sets an initial $balance$ for the specified $account$ in the storage state.
- Transfer($sender$, $recipient$, $balance$): Transfers the specified $balance$ from $sender$ to $recipient$ by calling the corresponding Runtime function. This represents the target Runtime function to be benchmarked.
- Add-To($collection$, $time$): Adds a returned time measurement ($time$) to $collection$.
- Timer($function$): Measures the time from the start of the specified $function$ to its completion.
- Compute-Weight($collection$): Computes the resulting weight based on the time measurements in $collection$. The worst case scenario (the highest value) should be chosen.

:::
The `withdraw_unbonded` function of the `staking` module is designed to move any unlocked funds from the staking management system so that they are ready for transfer. It contains some operations which have some I/O overhead.

Similarly to the `payout_stakers` function (Section -sec-num-ref-), this function fetches the Ledger, which contains information about the stash, such as the bonded balance and the unlocking balance (balance that will eventually be freed and can be withdrawn).
```rust
if let Some(current_era) = Self::current_era() {
    ledger = ledger.consolidate_unlocked(current_era)
}
```
The function `consolidate_unlocked` does some cleanup on the ledger, removing outdated entries from the unlocking balance (which implies that the balance is now free and no longer awaiting unlock).

```rust
let mut total = self.total;
let unlocking = self.unlocking.into_iter()
    .filter(|chunk| if chunk.era > current_era {
        true
    } else {
        total = total.saturating_sub(chunk.value);
        false
    })
    .collect();
```
The function then checks whether the updated ledger has any balance left with regard to staking, both in terms of locked, staking balance and unlocking balance. If no amount is left, all information related to the stash will be deleted, which results in multiple I/O calls.

```rust
if ledger.unlocking.is_empty() && ledger.active.is_zero() {
    // This account must have called `unbond()` with some value that caused the active
    // portion to fall below existential deposit + will have no more unlocking chunks
    // left. We can now safely remove all staking-related information.
    Self::kill_stash(&stash, num_slashing_spans)?;
    // remove the lock.
    T::Currency::remove_lock(STAKING_ID, &stash);
    // This is worst case scenario, so we use the full weight and return None
    None
}
```
The resulting call to `Self::kill_stash()` triggers:

```rust
slashing::clear_stash_metadata::<T>(stash, num_slashing_spans)?;

<Bonded<T>>::remove(stash);
<Ledger<T>>::remove(&controller);
<Payee<T>>::remove(stash);
<Validators<T>>::remove(stash);
<Nominators<T>>::remove(stash);
```
Alternatively, if there's some balance left, the adjusted ledger simply gets updated back into storage.

```rust
// This was the consequence of a partial unbond. just update the ledger and move on.
Self::update_ledger(&controller, &ledger);
```
Finally, it withdraws the unlocked balance, making it ready for transfer:

```rust
let value = old_total - ledger.total;
Self::deposit_event(RawEvent::Withdrawn(stash, value));
```
The following parameters are selected:

| Type | Variable | From | To | Description |
|---|---|---|---|---|
| Account index | `index in…` | 0 | 1000 | Used as a seed for account creation |
This benchmark does not require complex parameters. The values are used solely for account generation.
Two important points in the `withdraw_unbonded` function must be considered. The benchmarks should trigger both conditions:

- The updated ledger is inserted back into storage.
- If the stash gets killed, multiple, repetitive deletion calls are performed on the storage.
The benchmarking implementation for the Polkadot Runtime function `withdraw_unbonded` is defined as follows:

:::algorithm <Pseudocode content={withdrawUnbondedRuntimeFunctionBenchmark} algID="withdrawUnbondedRuntimeFunctionBenchmark" options={{ "lineNumber": true }} />
where

- Create-Account($name$, $index$): Creates a Blake2 hash of the concatenated input of $name$ and $index$, representing the address of an account. This function only creates an address and does not conduct any I/O.
- Set-Balance($account$, $balance$): Sets an initial $balance$ for the specified $account$ in the storage state.
- Bond($stash$, $controller$, $amount$): Bonds the specified $amount$ for the $stash$ and $controller$ pair.
- UnBond($account$, $amount$): Unbonds the specified $amount$ for the given $account$.
- Pass-Era(): Passes one era. This forces the function `withdraw_unbonded` to update the ledger and eventually delete information.
- Withdraw-Unbonded($controller$): Withdraws the full unbonded amount of the specified $controller$ account. This represents the target Runtime function to be benchmarked.
- Add-To($collection$, $time$): Adds a returned time measurement ($time$) to $collection$.
- Timer($function$): Measures the time from the start of the specified $function$ to its completion.
- Compute-Weight($collection$): Computes the resulting weight based on the time measurements in $collection$. The worst case scenario (the highest value) should be chosen.

:::
Block producers charge a fee in order to be economically sustainable. That fee must always be covered by the sender of the transaction. Polkadot has a flexible mechanism to determine the minimum cost to include transactions in a block.
Polkadot fees consist of three parts:

- Base fee: a fixed fee that is applied to every transaction and set by the Runtime.
- Length fee: a fee that gets multiplied by the length of the transaction, in bytes.
- Weight fee: a fee for each, varying Runtime function. Runtime implementers need to implement a conversion mechanism which determines the corresponding currency amount for the calculated weight.

The final fee can be summarized as:

$$ \mathrm{fee} = \mathrm{base\ fee} + \mathrm{length\ of\ transaction\ in\ bytes} \times \mathrm{length\ fee} + \mathrm{weight\ fee} $$
The Polkadot Runtime defines the following values:

- Base fee: 100 uDOTs
- Length fee: 0.1 uDOTs
- Weight to fee conversion:

  $$ \mathrm{weight\ fee} = \mathrm{weight} \times \left( \frac{100\ \mathrm{uDOTs}}{10 \times 10'000} \right) $$

  A weight of 10'000 (the smallest non-zero weight) is mapped to $\frac{1}{10}$ of 100 uDOTs. This fee will never exceed the maximum value of an unsigned 128-bit integer.
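As a worked example under these values, a hypothetical transaction of 200 bytes whose dispatch carries a weight of 150'000 would be charged:

$$ \mathrm{fee} = 100 + 200 \times 0.1 + 150'000 \times \frac{100}{10 \times 10'000} = 100 + 20 + 150 = 270\ \mathrm{uDOTs} $$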
Polkadot can add an additional fee to transactions if the network becomes too busy and starts to decelerate. This fee can create an incentive to avoid the production of low-priority or insignificant transactions. In contrast, the additional fee decreases if the network calms down and can execute transactions without much difficulty.
That additional fee is known as the `Fee Multiplier`, and its value is defined by the Polkadot Runtime. The multiplier works by comparing the saturation of blocks: if the previous block is less saturated than the current block (implying an uptrend), the fee is slightly increased. Similarly, if the previous block is more saturated than the current block (implying a downtrend), the fee is slightly decreased.
The final fee is calculated as:

$$ \mathrm{final\ fee} = \mathrm{fee} \times \mathrm{Fee\ Multiplier} $$
The `Update Multiplier` defines how the multiplier can change. The Polkadot Runtime internally updates the multiplier after each block according to the following formula:

$$ diff = target\_weight - previous\_block\_weight $$
$$ v = 0.00004 $$
$$ next\_multiplier = multiplier \times \left( 1 + v \times diff + \frac{(v \times diff)^2}{2} \right) $$

Polkadot defines the `target_weight` as 0.25 (25%). More information about this algorithm is described in the Web3 Foundation research paper.