The current form of the Message Delivery mechanism (see #299, related #212) splits message delivery and message processing. Independently of the dispatch mechanism (see #214 or #211), we will need some way to make sure that:
- At least some queued messages are always processed (i.e. we make some progress in every block).
- We have some way to assess the complexity of the dispatch (i.e. its `Weight`) to prevent going over the block weight limit.
- We scale the dispatch in some sensible way, since we compete for block weight with other transactions in the runtime (a rough scaling sketch follows this list).
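To make the last point concrete, here is a minimal sketch in plain Rust (the names `dispatch_weight_budget` and `saturation_len` are made up for illustration, not the pallet's actual API) of how the dispatch budget could grow with the number of queued messages:

```rust
/// Weight is modelled as a plain u64 here, mirroring Substrate's `Weight` type.
type Weight = u64;

/// Hypothetical scaling: start at 10% of the block weight and grow linearly
/// towards 90% as the queue approaches `saturation_len` messages.
fn dispatch_weight_budget(queued: u64, saturation_len: u64, max_block_weight: Weight) -> Weight {
    let min_share = max_block_weight / 10;      // 10% floor: always make some progress
    let max_share = max_block_weight * 9 / 10;  // 90% ceiling: leave room for other transactions
    let filled = queued.min(saturation_len);
    min_share + (max_share - min_share) * filled / saturation_len.max(1)
}
```

The exact curve is an open question; the point is only that the budget has a floor (so some progress is always made) and a ceiling (so regular transactions still fit in the block).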
I can think of two ways to achieve some fairness and scale the amount of dispatch within every block.
1. Relayers are allowed to send a "process messages" transaction, which pays for the execution. The more messages are queued in a lane, the higher the fee (see Relayers Incentivisation #316) the relayer should get from making progress on that lane. The mechanism should also make sure that the reward for processing the lane is higher than the fee the relayer has to pay to get its transaction into a block in times of congestion.
2. The processing fee is paid upfront by the relayer when the message is queued, and the dispatch happens during the `on_initialize` callback of the message lane pallet. With some scaling mechanism depending on the number of queued messages, we could for instance utilize from 10% to 90% of the entire block weight for the dispatch (see the sketch after this list). The advantage of this approach is that processing messages from the bridge does not directly compete with regular chain transactions and we are guaranteed to make some progress (bridge messages can be considered privileged).
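A rough sketch of approach (2), again with hypothetical types (`Lane`, `QueuedMessage`, `dispatch`) rather than the real pallet interface: `on_initialize` drains the queue until the declared weight of the next message no longer fits into the budget for this block.

```rust
use std::collections::VecDeque;

type Weight = u64;

/// A queued message together with the maximal dispatch weight that was paid for upfront.
struct QueuedMessage {
    declared_weight: Weight,
    payload: Vec<u8>,
}

/// One message lane with an ordered queue of undispatched messages.
struct Lane {
    queue: VecDeque<QueuedMessage>,
}

impl Lane {
    /// Drain the queue within `budget`, returning the weight actually consumed.
    /// Messages that would not fit stay queued and are picked up in the next block.
    fn on_initialize(&mut self, budget: Weight) -> Weight {
        let mut used: Weight = 0;
        while let Some(next) = self.queue.front() {
            // Stop before exceeding the budget; the remaining messages wait for the next block.
            if used + next.declared_weight > budget {
                break;
            }
            let msg = self.queue.pop_front().expect("front() was Some; qed");
            used += msg.declared_weight;
            dispatch(msg);
        }
        used
    }
}

/// Placeholder for the actual dispatch of the bridged payload.
fn dispatch(_msg: QueuedMessage) {}
```

Stopping before the budget is exceeded (rather than after) is what keeps the dispatch within the share of block weight reserved for the bridge.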
Initially I'm in favour of (2), since it seems simpler to implement. In both cases, though, we will need the message payload to be able to tell us upfront the maximal Weight it might consume during dispatch.
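One possible shape for that last requirement, sketched as a plain Rust trait (the trait name and signature are assumptions for illustration, not an existing API):

```rust
type Weight = u64;

/// Trait the message payload could implement so the pallet can budget dispatch
/// without decoding or executing the call first.
trait PreDispatchWeight {
    /// Upper bound on the weight this payload may consume when dispatched.
    fn max_dispatch_weight(&self) -> Weight;
}

/// Example payload: an encoded call plus the weight limit the sender declared (and paid for).
struct BridgedCall {
    encoded_call: Vec<u8>,
    weight_limit: Weight,
}

impl PreDispatchWeight for BridgedCall {
    fn max_dispatch_weight(&self) -> Weight {
        self.weight_limit
    }
}
```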