
Patched version of Cumulus needed for reserve-based transfer examples #53

Closed
stiiifff opened this issue Aug 31, 2022 · 13 comments

@stiiifff
Contributor

The reserve-based transfers that I demoed in the XCM presentation needed a patched version of Cumulus with these two PRs:

I was using this patched fork of Cumulus, but it would be nice if we could just use the main Cumulus repo (master branch ideally, or a specific branch / tag).

Confirm whether or not we still need the Cumulus fork.

@bernardoaraujor
Contributor

I can confirm that the slide demo for reserve-based transfer works with cumulus tag v0.9.270.

@stiiifff
Contributor Author

stiiifff commented Sep 8, 2022

This doesn't seem to be working for me. The last reserve-transfer demo (i.e. transferring xUSD from Bob's account on Trappist to Bob's account on Parachain 3000, involving a transfer between the two parachains' sovereign accounts on Statemine) does not execute successfully: the reserve transfer fails with a TooExpensive error on Statemine, and the transferred asset ends up in the asset trap.


@bernardoaraujor
Contributor

Indeed, the only reserve transfer I confirmed was Statemine > Trappist.

@stiiifff which runtime did you use for Parachain 3000?
The rococo-parachain runtime doesn't have an xcm_config.rs, which is why we switched to Penpal.

@stiiifff
Contributor Author

Aah, the reserve-based transfer from Trappist to Parachain 3000 is the highlight of the show 😁️
I used v0.9.27, which is just before we added the DEX pallet to the runtime. I want to update the XCM 101 workshop slides to point to that version (btw, I've created tags, and I'd like to move the xcm-demo tag to v0.9.27).
I will re-test with Penpal then.

@evilrobot-01
Contributor

evilrobot-01 commented Nov 13, 2022

I seem to have this working at https://github.com/paritytech/trappist/blob/frank/add-asset-teleport-test/integration-tests/reserve-transfer-asset/1_reserve_transfer_two_hops.yml (wip), but it required a hack to work around the below error, which I was getting when the XCM message arrived on the reserve parachain for processing (sent from Trappist).

xcm::execute_xcm_in_credit: [Parachain] Weight limit reached! weight > weight_limit: 52811961000 > 41666666666

I tracked this down to https://github.com/paritytech/cumulus/blob/24ba075e7df5967dc610f82fc6aad6276854fa56/pallets/xcmp-queue/src/lib.rs#L847, whereby the resulting available block space for an XCM message seems to be calculated as follows:

weight_available = ReservedXcmpWeight / (weight_restrict_decay + 1)

Based on Statemine config:

MAXIMUM_BLOCK_WEIGHT = 500,000,000,000
weight_restrict_decay = 2
ReservedXcmpWeight = MAXIMUM_BLOCK_WEIGHT.saturating_div(4) = 125,000,000,000

weight_available = 125,000,000,000 / (2 + 1) = 41,666,666,666

The calculated weight of the message (52,811,961,000) is therefore bigger than the weight available within the block (41,666,666,666), so it ended up just getting stuck in the overweight queue and not being processed.
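To make the arithmetic above easy to replay, here is a minimal sketch in plain Rust (not the actual xcmp-queue pallet code; the constants are the Statemine values quoted above):

```rust
// Illustrative recomputation of the numbers above, not the pallet implementation.
fn weight_available(reserved_xcmp_weight: u64, weight_restrict_decay: u64) -> u64 {
    // ReservedXcmpWeight / (weight_restrict_decay + 1)
    reserved_xcmp_weight / (weight_restrict_decay + 1)
}

fn main() {
    const MAXIMUM_BLOCK_WEIGHT: u64 = 500_000_000_000;
    let reserved_xcmp_weight = MAXIMUM_BLOCK_WEIGHT / 4; // 125,000,000,000
    let message_weight: u64 = 52_811_961_000; // weight reported in the error above

    let available = weight_available(reserved_xcmp_weight, 2); // decay = 2 -> 41,666,666,666
    assert!(message_weight > available); // hence "Weight limit reached" and the overweight queue
}
```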

The hack was to reduce the weight_restrict_decay to zero via xcmpQueue.updateWeightRestrictDecay (using parasSudoWrapper.sudoQueueDownwardXcm on the relay chain), and then everything works as expected:

  • reserve-transfer of xUSD from Statemine to Trappist (txUSD)
  • reserve-transfer of xUSD from Trappist to tertiary parachain (pxUSD)


My question now is: why is the XCM message considered overweight by Statemine, and how should this be configured correctly so that the hack is not needed?

@stiiifff
Contributor Author

stiiifff commented Nov 14, 2022

@evilrobot-01 Good catch! I ran into the same issue but did not have time to investigate further. Let's understand what the purpose of the weight_restrict_decay factor is. IMHO the reserve-transfer operation that we are executing is a typical use case, so I'm a bit surprised we're hitting this limit. Maybe this is addressed with XCM v3? (In any case, it's great to now have integration tests for these common scenarios.)

@evilrobot-01
Contributor

..., but tbh I don't remember if this is really needed, or if this bit of code (from the same PR) in the Trappist runtime does the trick.

Confirmed that the snippet of code in the Trappist runtime is indeed still enabling the reserve-transfer, as is the case in the updated base-runtime PR #92. For reference, Penpal handles it via MultiNativeAsset, which does something similar: https://github.com/paritytech/cumulus/blob/26e17fea7ef95d91b94db92079034d4355ad3658/parachains/runtimes/testing/penpal/src/xcm_config.rs#L274-L309.
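For context, below is a rough sketch of the kind of IsReserve filter that MultiNativeAsset provides, assuming the XCM v2-era traits used on the polkadot-v0.9.30 branch; ReserveAssetFrom and the chain_part helper are illustrative names, not the actual Penpal code:

```rust
use xcm::latest::prelude::*;
use xcm_executor::traits::FilterAssetLocation;

/// Sketch: accept an asset as a reserve asset when the chain part of its
/// (concrete) asset id matches the location the message arrived from.
pub struct ReserveAssetFrom;

impl FilterAssetLocation for ReserveAssetFrom {
    fn filter_asset_location(asset: &MultiAsset, origin: &MultiLocation) -> bool {
        match &asset.id {
            Concrete(id) => chain_part(id).map_or(false, |reserve| &reserve == origin),
            _ => false,
        }
    }
}

/// Hypothetical helper: strip non-chain junctions (PalletInstance, GeneralIndex, ...)
/// from an asset id, leaving only the chain the asset belongs to.
fn chain_part(location: &MultiLocation) -> Option<MultiLocation> {
    match location.interior.first() {
        Some(Parachain(id)) => Some(MultiLocation::new(location.parents, X1(Parachain(*id)))),
        _ if location.parents > 0 => Some(MultiLocation::parent()),
        _ => None,
    }
}
```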

I was using this patched fork of Cumulus, but it would be nice if we could just use the main Cumulus repo (master branch ideally, or a specific branch / tag).

Confirm whether or not we still need the Cumulus fork.

We are using the Cumulus polkadot-v0.9.30 branch: everything is working with the above-mentioned PR, and working integration tests are queued at #76. The fork is therefore no longer required.

I believe the only outstanding action is to understand what is going on with weight_restrict_decay now.

@stiiifff
Contributor Author

@evilrobot-01 what's missing to close this one?

@evilrobot-01
Contributor

It was just to figure out why the error was happening. I actually noticed the following when copying the zombienet config to another PoC last night, and wondered whether the max_message_size might be the root cause:

[[hrmp_channels]]
sender = 3000
recipient = 2000
max_capacity = 8
max_message_size = 512

Suggested steps to close this:

  • re-run the tests with an increased max_message_size
  • run on the XCM v3 branch to see if the error disappears
  • check with the XCM core developers to get a definitive answer

I will set aside a day next week to get this resolved.

@evilrobot-01
Contributor

I did some XCM work using Moonbase over the holidays and encountered the same issue at one point, but it seemed to be related to asset and weight configuration. I have implemented #102 as a step towards better understanding and debugging things, and I did receive the same error there depending on the config used.

I will re-visit the integration tests with an updated config and see if this is finally resolved.

@stiiifff stiiifff added this to the Trappist M1 / XCM v2 milestone Jan 25, 2023
@evilrobot-01
Contributor

Just received the weight limit reached error again on another project when simply setting a large value of 50_000_000_000u64 for require_weight_at_most:

Weight limit reached! weight > weight_limit: 50496777000 > 41666666666


So it seems that the check simply uses that value against the weight limit, and any sufficiently large number provided will trigger the error. This implies that the issue was originally caused by too large a weight being provided as the limit.
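For illustration, this is roughly how require_weight_at_most enters the message when composing a Transact under XCM v2; the origin kind and the encoded call are placeholders, not taken from the project in question:

```rust
use xcm::v2::prelude::*;

// Hypothetical example: whatever is passed as `require_weight_at_most` is
// counted towards the message weight up-front, so a generously large value
// alone can push the message over the per-block XCMP limit.
fn build_transact_message(encoded_call: Vec<u8>) -> Xcm<()> {
    Xcm(vec![Transact {
        origin_type: OriginKind::SovereignAccount,
        // Same order of magnitude as the error above: 50,496,777,000 > 41,666,666,666.
        require_weight_at_most: 50_000_000_000u64,
        call: encoded_call.into(),
    }])
}
```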

I haven't confirmed this to be the case, but I feel this can probably be closed now based on all of the insight gathered above.

@stiiifff
Contributor Author

stiiifff commented Apr 7, 2023

So, I re-tested the whole flow, and it seems that the "two-hop asset reserve-based transfer" (from Trappist -> Statemine -> Stout) is considered overweight on Statemine because the weight_restrict_decay parameter of the xcmpQueue pallet (see above) is set to 2 by default.

As explained by @evilrobot-01, the calculated weight of the XCM message is therefore bigger than the maximum allowed:

The calculated weight of the message (52,811,961,000) is therefore bigger than the weight available within the block (41,666,666,666), so it ended up just getting stuck in the overweight queue and not being processed.

By forcing the weight_restrict_decay value to 1 (using the updateWeightRestrictDecay extrinsic via a call from the relay chain, as suggested above), and using a maxWeight of 62,500,000,000, the reserve-based transfer works as expected.
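As a sanity check on those numbers (reusing the formula and Statemine constants from the earlier comment):

```rust
fn main() {
    // weight_available = ReservedXcmpWeight / (weight_restrict_decay + 1)
    let reserved_xcmp_weight: u64 = 125_000_000_000;
    let weight_available = reserved_xcmp_weight / (1 + 1);
    assert_eq!(weight_available, 62_500_000_000); // matches the maxWeight used above
    assert!(52_811_961_000u64 <= weight_available); // the two-hop message now fits
}
```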

The value of 1 is mentioned in the rust documentation of the service_xcmp_queue function (of the xcmpQueue pallet):

/// A reasonable parameter may be `1`, which makes half of the `max_weight` available for the first page,
/// then a quarter plus the remainder for the second &c. though empirical and or practical factors may
/// give rise to adjusting it further.

Based on this learning, and on the assumption that complex XCM messages like the one referred to above are not that common (and thus that the limit described here is not hit very often), we can close this issue. If one wants to demo the two-hop reserve-based transfer, the workaround of setting the weight_restrict_decay parameter to 1 can be used.

@stiiifff stiiifff closed this as completed Apr 7, 2023
@evilrobot-01
Contributor

Amazing, thanks for finally joining all the 'dots' and making sense of it all! A great learning indeed!! 🙌
