Use Message Queue pallet for UMP dispatch #6271
Conversation
Nice!

When wrapping my head around XCMP/HRMP and in particular UMP, one of the first questions that comes to mind is: why do we need queuing at all? I can think of a couple of reasons:

- Execution is expected to be significantly more costly than storing.
- We don't want to be forced to drop messages. Given 1, queuing buys us time to react, e.g. by ramping up the price so we can apply economic back-pressure. If we executed immediately and were above the weight limit, we would have no option but to drop messages. In the case of UMP, parachains can see a filled-up queue and stop sending (see the sketch after this list).
- We already have a 1-element (per-para) queue via PendingAvailabilityCommitments. We could in theory back-pressure right there and not free a core until its messages are processed. The problem, apart from potentially worse performance (block times degrade on bursts), is fairness: the load might actually be coming from other parachains, and we would potentially back-pressure "light" ones just because the others already filled up the block.

Other issues?
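Not from the PR itself, just a minimal standalone Rust sketch of the back-pressure point above, with all names made up for illustration: a bounded per-parachain queue lets a sender observe fullness and hold back, instead of the relay dropping messages, and an unrelated "light" parachain is unaffected.

```rust
use std::collections::{HashMap, VecDeque};

/// Toy model: each parachain has its own bounded upward-message queue.
struct UmpQueues {
    capacity_per_para: usize,
    queues: HashMap<u32, VecDeque<Vec<u8>>>, // para id -> queued messages
}

impl UmpQueues {
    fn new(capacity_per_para: usize) -> Self {
        Self { capacity_per_para, queues: HashMap::new() }
    }

    /// The signal a parachain can act on: stop sending while `true`.
    fn is_full(&self, para: u32) -> bool {
        self.queues
            .get(&para)
            .map_or(false, |q| q.len() >= self.capacity_per_para)
    }

    /// Enqueue only if there is room; the sender is expected to check
    /// `is_full` first and back off, so nothing ever has to be dropped.
    fn try_enqueue(&mut self, para: u32, msg: Vec<u8>) -> Result<(), Vec<u8>> {
        let q = self.queues.entry(para).or_default();
        if q.len() >= self.capacity_per_para {
            return Err(msg); // back-pressure: the sender keeps the message
        }
        q.push_back(msg);
        Ok(())
    }
}

fn main() {
    let mut queues = UmpQueues::new(2);
    assert!(queues.try_enqueue(2000, b"a".to_vec()).is_ok());
    assert!(queues.try_enqueue(2000, b"b".to_vec()).is_ok());
    // Para 2000 is now full and should stop sending...
    assert!(queues.is_full(2000));
    // ...while an unrelated "light" para 2001 is unaffected.
    assert!(!queues.is_full(2001));
    assert!(queues.try_enqueue(2001, b"c".to_vec()).is_ok());
}
```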
Other observations:

- Since the message gets into state on backing but gets queued on inclusion, the block that causes the enqueuing of the message does not actually contain it. So far just an observation, triggered by me wondering whether we could still be forced to drop messages somewhere because we exceed the weight limit, for example due to the required state access.
- Apart from the already mentioned bullet points, isn't the actual processing of messages missing? There is ProcessXcmMessage::process_message, but it does not seem to be called anywhere and the corresponding test is also blank. (A toy sketch of such a processing loop follows below.)
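For illustration only: a dependency-free sketch of what "actually processing" queued messages looks like, i.e. a service loop that calls a ProcessMessage-style hook for each message until the weight budget runs out. The trait and Weight type here are simplified stand-ins, not the real frame_support signatures.

```rust
/// Toy stand-in for a 2D weight (ref_time, proof_size).
#[derive(Clone, Copy, Debug, PartialEq)]
struct Weight { ref_time: u64, proof_size: u64 }

impl Weight {
    fn fits_in(self, limit: Weight) -> bool {
        self.ref_time <= limit.ref_time && self.proof_size <= limit.proof_size
    }
    fn saturating_sub(self, other: Weight) -> Weight {
        Weight {
            ref_time: self.ref_time.saturating_sub(other.ref_time),
            proof_size: self.proof_size.saturating_sub(other.proof_size),
        }
    }
}

/// Simplified counterpart of a `ProcessMessage`-style hook: given raw bytes
/// and the remaining weight, either consume some weight or report "no fit".
trait ProcessMessage {
    fn process_message(message: &[u8], remaining: Weight) -> Result<Weight, ()>;
}

struct FixedCostProcessor;
impl ProcessMessage for FixedCostProcessor {
    fn process_message(message: &[u8], remaining: Weight) -> Result<Weight, ()> {
        // Pretend every byte costs 10 ref_time and 1 proof_size.
        let cost = Weight {
            ref_time: 10 * message.len() as u64,
            proof_size: message.len() as u64,
        };
        if cost.fits_in(remaining) { Ok(cost) } else { Err(()) }
    }
}

/// The missing piece the comment asks about: a loop that actually *calls*
/// the processor for each queued message until the budget is exhausted.
fn service_queue<P: ProcessMessage>(queue: &mut Vec<Vec<u8>>, mut budget: Weight) -> usize {
    let mut processed = 0;
    while let Some(msg) = queue.first().cloned() {
        match P::process_message(&msg, budget) {
            Ok(used) => {
                budget = budget.saturating_sub(used);
                queue.remove(0);
                processed += 1;
            }
            // Out of weight: leave the rest queued for a later block.
            Err(()) => break,
        }
    }
    processed
}

fn main() {
    let mut queue = vec![vec![0u8; 4], vec![0u8; 4], vec![0u8; 100]];
    let done = service_queue::<FixedCostProcessor>(
        &mut queue,
        Weight { ref_time: 100, proof_size: 100 },
    );
    // The 100-byte message stays queued until more weight is available.
    assert_eq!(done, 2);
}
```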
It is crucial to understand that incoming messages might have arbitrary effects within a runtime. There may be constraints on the conditions under which these effects may be realised. The most obvious example is that a certain amount of weight might be required; however, more exotic chains may need to impose further constraints relevant to their own abstractions, which we cannot possibly imagine right now. Therefore, from the point of view of the low-level protocols XCMP, UMP and DMP (*MP), we must necessarily restrict our concern to ferrying datagrams and nothing more.

The message queuing and execution subsystem(s) are left with the concern of ensuring that the proper effects take place as soon as possible given the datagrams received, taking into account quality of service, ephemeral constraints and resource limitations. This message queue pallet provides that functionality: it accepts datagrams infallibly, as fast as they can be delivered by the underlying message-passing system(s), and promises that (subject to certain well-understood static assumptions) they will eventually be executed. It takes 2D weight and QoS into account, ensuring that one parachain's message-passing activity cannot unfairly starve another parachain of attention.
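A small, self-contained sketch (not the pallet's actual code) of the fairness/QoS claim above: messages sit in per-origin queues and are serviced round-robin under one shared budget, so a parachain that floods its own queue only delays its own messages.

```rust
use std::collections::VecDeque;

/// Toy illustration: per-origin queues serviced round-robin under a budget.
struct PerOriginQueues {
    queues: Vec<(u32, VecDeque<&'static str>)>, // (para id, its pending messages)
}

fn service_round_robin(
    q: &mut PerOriginQueues,
    mut budget: u32,
    cost_per_msg: u32,
) -> Vec<(u32, &'static str)> {
    let mut executed = Vec::new();
    // Keep cycling over the origins while there is budget and work left.
    loop {
        let mut made_progress = false;
        for (para, queue) in q.queues.iter_mut() {
            if budget < cost_per_msg {
                return executed;
            }
            if let Some(msg) = queue.pop_front() {
                budget -= cost_per_msg;
                executed.push((*para, msg));
                made_progress = true;
            }
        }
        if !made_progress {
            return executed;
        }
    }
}

fn main() {
    // Para 2000 queued many messages, para 2001 only one.
    let mut queues = PerOriginQueues {
        queues: vec![
            (2000, VecDeque::from(vec!["a", "b", "c", "d"])),
            (2001, VecDeque::from(vec!["x"])),
        ],
    };
    // Budget for three messages only.
    let executed = service_round_robin(&mut queues, 30, 10);
    // The light para still gets serviced in the first pass.
    assert_eq!(executed, vec![(2000, "a"), (2001, "x"), (2000, "b")]);
}
```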
bot merge
Depends on paritytech/substrate#12485, #4097, #7105
Fixes #6129
Partially addresses paritytech/polkadot-sdk#489
Partially addresses paritytech/polkadot-sdk#488
This utilises the new pallet-message-queue for UMP message dispatch. With the unified dispatch system, there is no need for "UMP" to have a pallet of its own, so it is removed in favour of a single ump_tests.rs file that ensures the message-queue functionality works as expected for UMP. The event UMP::ExecutedUpward was replaced by MessageQueue::Processed.

This also means that the two weight fields of the HostConfiguration struct are no longer needed, so there is now a v5 of said struct with them gone. A migration to v5 is provided and should be used when this is merged.

Small auxiliary changes:

- Add a MAX_UPWARD_MESSAGE_SIZE_BOUND in the configuration pallet.
- Add a WellKnownKey struct for typed access to well-known keys (a conceptual sketch follows this list).
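Roughly what "typed access to well-known keys" means, as a dependency-free sketch. The real struct presumably works over runtime storage with SCALE encoding; this toy uses a HashMap as the storage backend and a purely hypothetical key name.

```rust
use std::collections::HashMap;
use std::marker::PhantomData;

/// Stand-in for raw runtime storage (key -> encoded value on a real chain).
type Storage = HashMap<Vec<u8>, Vec<u8>>;

/// Conceptual "well-known key" wrapper: it pairs a fixed raw storage key
/// with the type expected to live under it, so callers read and write
/// through a typed interface instead of juggling raw bytes.
struct WellKnownKey<T> {
    key: Vec<u8>,
    _marker: PhantomData<T>,
}

impl WellKnownKey<u32> {
    fn new(key: &[u8]) -> Self {
        Self { key: key.to_vec(), _marker: PhantomData }
    }

    fn get(&self, storage: &Storage) -> Option<u32> {
        storage.get(&self.key).map(|raw| {
            let bytes: [u8; 4] = raw.as_slice().try_into().expect("stored as 4 LE bytes");
            u32::from_le_bytes(bytes)
        })
    }

    fn set(&self, storage: &mut Storage, value: u32) {
        storage.insert(self.key.clone(), value.to_le_bytes().to_vec());
    }
}

fn main() {
    let mut storage = Storage::new();
    // Hypothetical key name, purely for illustration.
    let max_ump_size = WellKnownKey::<u32>::new(b":example_max_upward_message_size:");
    max_ump_size.set(&mut storage, 65_536);
    assert_eq!(max_ump_size.get(&storage), Some(65_536));
}
```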
Migrations

HostConfiguration update: also re-writes the V4 -> V5 migration of the config pallet to be re-usable.
HostConfiguration

An update for the HostConfiguration is queued to update the UMP constants. This has to happen together with upgrading the UMP queue and is therefore not done via a normal referendum. Considerations about the affected values are in gdrive or here as pdf:
Note For Para* Teams
The UMP message queue has a lazy message-deletion mechanism to unclog it in case it gets filled with overweight messages. This reaping mechanism can be triggered by any account on the relay chain by calling MessageQueue::reap_page. This basically requires an offchain worker on the relay, either one for all parachains at once or one per parachain. This observation mainly comes from a security perspective, since a completely clogged UMP queue is a possible DoS vector. More convenience is planned in paritytech/substrate#13877.
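A conceptual, dependency-free sketch of the lazy-reaping idea above (not the pallet's implementation; the real MessageQueue::reap_page is an extrinsic with its own arguments and checks): pages that contain only skipped overweight leftovers stay around until someone explicitly reaps them, and anyone may do so.

```rust
use std::collections::BTreeMap;

/// Toy model: messages live in numbered pages; a page whose messages were
/// all skipped as overweight ("stale") is not freed automatically.
struct PagedQueue {
    pages: BTreeMap<u32, Page>,
}

struct Page {
    stale: bool, // only overweight leftovers, no longer serviced
}

impl PagedQueue {
    /// Permissionless clean-up, loosely analogous to `MessageQueue::reap_page`:
    /// any caller may remove a stale page to unclog the queue.
    fn reap_page(&mut self, page_index: u32) -> Result<(), &'static str> {
        match self.pages.get(&page_index) {
            Some(page) if page.stale => {
                self.pages.remove(&page_index);
                Ok(())
            }
            Some(_) => Err("page still live; refusing to reap"),
            None => Err("no such page"),
        }
    }
}

fn main() {
    let mut queue = PagedQueue {
        pages: BTreeMap::from([
            (0, Page { stale: true }),
            (1, Page { stale: false }),
        ]),
    };
    // An off-chain task watching the queue can reap the clogged page 0...
    assert!(queue.reap_page(0).is_ok());
    // ...but cannot remove a page that still has live messages.
    assert!(queue.reap_page(1).is_err());
}
```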
TODO

- Add the MessageQueue pallet and configure it properly in the Relay-chain runtimes.
- Add the HostConfiguration migration in the Relay-chain runtimes.
- receive_upward_messages.

Follow Ups

- Remove MAX_UPWARD_MESSAGE_SIZE_BOUND and use the bound from the MQ pallet.