This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

used MaxUnlockingChunks from config #12289

Conversation

MrishoLukamba
Contributor

@MrishoLukamba MrishoLukamba commented Sep 18, 2022

@kianenigma @ggwpez
Fixes #12227
Polkadot address: 133WyzJ4cEEbQ4twhT3nQpoXYeTCzeKn8kWfRfEi4XDF7wKh

@MrishoLukamba
Contributor Author

Solved issue #12227

@kianenigma
Contributor

@Ank4n Ank4n self-requested a review September 19, 2022 09:07
@kianenigma kianenigma added labels Sep 19, 2022: A0-please_review (Pull request needs code review), B7-runtimenoteworthy, C1-low (PR touches the given topic and has a low impact on builders), D5-nicetohaveaudit ⚠️ (PR contains trivial changes to logic that should be properly reviewed)
Contributor

@kianenigma kianenigma left a comment


You should document in the config type parameter that if this value is ever set, and then decreased, it leads to a corrupt state and a migration is needed. We also need a test to demonstrate this.

@Ank4n please note that we have the same story with MaxNominations as well.
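
For illustration, a minimal sketch of how that warning could be phrased on the staking Config item; the doc wording and the trait fragment below are assumptions, not code from this PR:

// Hedged sketch only: how the decrease warning might be documented on the
// associated type. The exact wording is illustrative, not the merged text.
use frame_support::traits::Get;

pub trait Config: frame_system::Config {
    // ... other associated types elided ...

    /// The maximum number of `unlocking` chunks a `StakingLedger` can have.
    ///
    /// Note: this value bounds the `BoundedVec` stored in
    /// `StakingLedger.unlocking`. If it is ever set and later decreased,
    /// ledgers already in storage may no longer decode (a corrupt state),
    /// so any decrease must ship together with a storage migration.
    type MaxUnlockingChunks: Get<u32>;
}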

@MrishoLukamba
Contributor Author

MrishoLukamba commented Sep 19, 2022

closes #12227

@Ank4n
Contributor

Ank4n commented Sep 20, 2022

@MrishoLukamba Can you also add a test that documents what happens in the scenario where MaxUnlockingChunks is reduced without migration?
Here are a couple of examples you can refer to:
https://github.com/paritytech/substrate/blob/ankan/bound-staking-storage-trivial/frame/staking/src/tests.rs#L4872
https://github.com/paritytech/substrate/blob/ankan/bound-staking-storage-trivial/frame/staking/src/tests.rs#L5367
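
A minimal standalone sketch of the failure mode such a test would document; the bounds 32 and 16 and the plain u32 chunks below are illustrative assumptions, not the staking types:

// Sketch: bytes encoded under a larger BoundedVec bound fail to decode under a
// smaller one, which is what an on-chain ledger hits if MaxUnlockingChunks is
// reduced without a migration.
use codec::{Decode, Encode};
use frame_support::{parameter_types, BoundedVec};

parameter_types! {
    pub const OldBound: u32 = 32; // illustrative, not the real config value
    pub const NewBound: u32 = 16; // illustrative reduced value
}

#[test]
fn reducing_the_bound_without_migration_breaks_decoding() {
    // Store 20 "chunks" under the old, larger bound.
    let old: BoundedVec<u32, OldBound> = (0..20u32).collect::<Vec<_>>().try_into().unwrap();
    let encoded = old.encode();

    // The same bytes no longer decode once the bound is smaller than the
    // stored length: this is the corrupt-state scenario to demonstrate.
    assert!(BoundedVec::<u32, NewBound>::decode(&mut &encoded[..]).is_err());
}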

@MrishoLukamba
Contributor Author

@MrishoLukamba Can you also add a test that documents what happens in the scenario where MaxUnlockingChunks is reduced without migration? Here are a couple of examples you can refer to: https://github.com/paritytech/substrate/blob/ankan/bound-staking-storage-trivial/frame/staking/src/tests.rs#L4872 https://github.com/paritytech/substrate/blob/ankan/bound-staking-storage-trivial/frame/staking/src/tests.rs#L5367

Okay, let me get into it. If anything comes up, I will ask.

@MrishoLukamba
Contributor Author

@Ank4n @kianenigma There is an issue I noticed: the StakingLedger.unlocking BoundedVec does not push a new UnlockChunk; instead it just increments the value inside the existing UnlockChunk. So the MaxUnlockingChunks bound on the BoundedVec is not quite useful.
I think this is a bug.

@Ank4n
Contributor

Ank4n commented Sep 22, 2022

@Ank4n @kianenigma There is an issue I noticed: the StakingLedger.unlocking BoundedVec does not push a new UnlockChunk; instead it just increments the value inside the existing UnlockChunk. So the MaxUnlockingChunks bound on the BoundedVec is not quite useful. I think this is a bug.

Can you elaborate more? Maybe point to it in the code.

I see that when we unbond, a new UnlockChunk is pushed (or the UnlockChunk for the same era is updated).
https://github.com/paritytech/substrate/blob/master/frame/staking/src/pallet/mod.rs#L984
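
A rough standalone paraphrase of that behaviour (plain Rust with a simplified UnlockChunk, not the pallet code at the link): unbonding in the same era tops up the last chunk, a new era pushes a new chunk, and the bound rejects anything beyond it.

// Standalone paraphrase of the merge-or-push behaviour; the real ledger uses a
// BoundedVec and Error::<T>::NoMoreChunks instead of the plain Vec and
// &'static str used here.
#[derive(Debug, PartialEq)]
struct UnlockChunk { value: u64, era: u32 }

fn unbond(unlocking: &mut Vec<UnlockChunk>, max_chunks: usize, value: u64, era: u32) -> Result<(), &'static str> {
    if let Some(chunk) = unlocking.last_mut().filter(|c| c.era == era) {
        // Same era as the last chunk: merge instead of pushing a new entry.
        chunk.value += value;
    } else if unlocking.len() < max_chunks {
        // New era and still below the bound: push a fresh chunk.
        unlocking.push(UnlockChunk { value, era });
    } else {
        // Bound reached: the extrinsic would fail with NoMoreChunks here.
        return Err("NoMoreChunks");
    }
    Ok(())
}

fn main() {
    let mut unlocking = Vec::new();
    unbond(&mut unlocking, 2, 10, 0).unwrap();
    unbond(&mut unlocking, 2, 5, 0).unwrap(); // merged into the era-0 chunk
    assert_eq!(unlocking.len(), 1);
    unbond(&mut unlocking, 2, 7, 1).unwrap(); // new era-1 chunk
    assert_eq!(unlocking, vec![UnlockChunk { value: 15, era: 0 }, UnlockChunk { value: 7, era: 1 }]);
    assert_eq!(unbond(&mut unlocking, 2, 1, 2), Err("NoMoreChunks"));
}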

@MrishoLukamba
Contributor Author

MrishoLukamba commented Sep 23, 2022

@Ank4n @kianenigma There is an issue I noticed: the StakingLedger.unlocking BoundedVec does not push a new UnlockChunk; instead it just increments the value inside the existing UnlockChunk. So the MaxUnlockingChunks bound on the BoundedVec is not quite useful. I think this is a bug.

Can you elaborate more? Maybe point to it in the code.

I see that when we unbond, a new UnlockChunk is pushed (or the UnlockChunk for the same era is updated). https://github.com/paritytech/substrate/blob/master/frame/staking/src/pallet/mod.rs#L984

In the code it shows that a new UnlockChunk is pushed, but in testing it is not pushed; only the value field is updated. So the MaxUnlockingChunks bound placed there is not used at all. You can see this in the new test I've added.

@Ank4n
Contributor

Ank4n commented Sep 23, 2022

@Ank4n @kianenigma There is an issue I noticed: the StakingLedger.unlocking BoundedVec does not push a new UnlockChunk; instead it just increments the value inside the existing UnlockChunk. So the MaxUnlockingChunks bound on the BoundedVec is not quite useful. I think this is a bug.

Can you elaborate more? Maybe point to it in the code.
I see that when we unbond, a new UnlockChunk is pushed (or the UnlockChunk for the same era is updated). https://github.com/paritytech/substrate/blob/master/frame/staking/src/pallet/mod.rs#L984

In the code it shows that a new UnlockChunk is pushed, but in testing it is not pushed; only the value field is updated. So the MaxUnlockingChunks bound placed there is not used at all. You can see this in the new test I've added.

@MrishoLukamba The behaviour looks correct to me. An UnlockChunk is only updated if multiple unbonds happen in the same era. If you move forward in eras and unbond, you should see multiple chunks being added. If the unlocking chunks are already full, the transaction should return the error NoMoreChunks, as the BoundedVec is full.

@kianenigma Shouldn't we be able to keep MaxUnlockingChunks equal to BondingDuration? This would imply users can hold unclaimed UnlockChunks only for up to 28 eras, and in that case all the UnlockChunks are ready to be claimed (after 28 eras), encouraging the user to claim existing unlocks before requesting more unlocks.
If you agree with the above, we can bound StakingLedger.unlocking with BondingDuration and not have to introduce a new MaxUnlockingChunks bound.
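
A hedged sketch of such a behaviour test in the style of frame/staking/src/tests.rs; the mock helpers (ExtBuilder, start_active_era, controller account 10) are assumed from that file, and it assumes a full ledger still makes unbond fail with NoMoreChunks. This is illustrative, not the test added in this PR:

// Hedged sketch, not this PR's test.
#[test]
fn unbonding_across_eras_fills_chunks_then_errors() {
    ExtBuilder::default().build_and_execute(|| {
        let max = <<Test as Config>::MaxUnlockingChunks as Get<u32>>::get();

        // One unbond per era: each call should land in a fresh chunk.
        for era in 1..=max {
            mock::start_active_era(era);
            assert_ok!(Staking::unbond(RuntimeOrigin::signed(10), 1));
        }
        assert_eq!(Staking::ledger(&10).unwrap().unlocking.len(), max as usize);

        // The bounded vec is now full; a further unbond in a new era fails.
        mock::start_active_era(max + 1);
        assert_noop!(
            Staking::unbond(RuntimeOrigin::signed(10), 1),
            Error::<Test>::NoMoreChunks
        );
    });
}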

@@ -1346,7 +1346,8 @@ fn too_many_unbond_calls_should_not_work() {
ExtBuilder::default().build_and_execute(|| {
let mut current_era = 0;
// locked at era MaxUnlockingChunks - 1 until 3
for i in 0..MaxUnlockingChunks::get() - 1 {
//maxunlockingchunks


Suggested change (delete the stray comment):
- //maxunlockingchunks

@@ -1346,7 +1346,8 @@ fn too_many_unbond_calls_should_not_work() {
ExtBuilder::default().build_and_execute(|| {
let mut current_era = 0;
// locked at era MaxUnlockingChunks - 1 until 3
for i in 0..MaxUnlockingChunks::get() - 1 {
//maxunlockingchunks
for i in 0..<<Test as Config>::MaxUnlockingChunks as Get::<u32>>::get()- 1 {


Suggested change:
- for i in 0..<<Test as Config>::MaxUnlockingChunks as Get::<u32>>::get()- 1 {
+ for i in 0..<<Test as Config>::MaxUnlockingChunks as Get<u32>>::get()-1 {

@@ -1361,7 +1362,8 @@ fn too_many_unbond_calls_should_not_work() {
assert_ok!(Staking::unbond(RuntimeOrigin::signed(10), 1));
assert_eq!(
Staking::ledger(&10).unwrap().unlocking.len(),
MaxUnlockingChunks::get() as usize

<<Test as Config>::MaxUnlockingChunks as Get::<u32>>::get() as usize


Suggested change:
- <<Test as Config>::MaxUnlockingChunks as Get::<u32>>::get() as usize
+ <<Test as Config>::MaxUnlockingChunks as Get<u32>>::get() as usize

Comment on lines 5568 to 5569
//Concern is on validators only
//By Default 11, 10 are stash and ctrl and 21,20


Suggested change:
- //Concern is on validators only
- //By Default 11, 10 are stash and ctrl and 21,20
+ // Concern is on validators only
+ // By Default 11, 10 are stash and ctrl and 21,20



Please fix all comments accordingly.

Contributor

@kianenigma kianenigma left a comment


Logic seems correct, but code needs to be formatted.

@MrishoLukamba
Contributor Author

Logic seems correct, but code needs to be formatted.

Done. Should I use cargo fmt?

@Ank4n
Contributor

Ank4n commented Sep 24, 2022

@MrishoLukamba I will close this PR and work on top of your code to get it merged. Please add your Polkadot address in the PR for the tip.

Super thanks for your contribution. Feel free to follow the new PR here: #12343

@Ank4n Ank4n closed this Sep 24, 2022
@MrishoLukamba
Contributor Author

@MrishoLukamba I will close this PR and work on top of your code to get it merged. Please add your Polkadot address in the PR for the tip.

Super thanks for your contribution. Feel free to follow the new PR here: #12343

Thanks, I will also continue contributing to other issues.

@Ank4n
Contributor

Ank4n commented Sep 24, 2022

/tip small

@substrate-tip-bot

@Ank4n You are not allowed to request a tip. Only shawntabrizi, gavofyork, rphmeier, athei, andresilva, arkpar, bkchr, eskimor, drahnr, dvdplm, robbepop, cmichi, tomaka, pepyakin, tomusdrw, kianenigma, jacogr, rossbulat are allowed.

@kianenigma
Contributor

/tip small

@substrate-tip-bot

@kianenigma A small tip was successfully submitted for MrishoLukamba (133WyzJ4cEEbQ4twhT3nQpoXYeTCzeKn8kWfRfEi4XDF7wKh on polkadot).

https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.polkadot.io#/treasury/tips

@MrishoLukamba
Contributor Author

/tip small

Sorry @kianenigma @Ank4n, the tip didn't go through: https://polkadot.js.org/apps/#/treasury/tips

@louismerlin louismerlin removed the D5-nicetohaveaudit ⚠️ PR contains trivial changes to logic that should be properly reviewed. label Oct 13, 2022
Labels
A0-please_review (Pull request needs code review), C1-low (PR touches the given topic and has a low impact on builders)
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

MaxUnlockingChunks should be configurable via Runtime
4 participants