
Upgrade 0929 #887

Merged 65 commits into main from upgrade_0929 on Jan 20, 2023
Conversation

vivekvpandya (Contributor)

Closes #869

@vivekvpandya vivekvpandya added the s:in-progress The pull requests is currently being worked on label Nov 26, 2022
@vivekvpandya vivekvpandya added this to the v0.3.8 milestone Nov 26, 2022
@vivekvpandya vivekvpandya self-assigned this Nov 26, 2022
@Chralt98 (Member)

Did you look into this? I know you might be a little annoyed. I understand that.

@sea212 (Member) left a comment

The PR is ready for merge, but the Aura solution is still a thorn in my side.
This time I have found the root cause and provided a proper solution in the comment below.

@@ -169,18 +169,22 @@ fn setup_redeem_shares_common<T: Config + pallet_timestamp::Config>(
     };
     let grace_period: u32 =
         (market.deadlines.grace_period.saturated_into::<u32>() + 1) * MILLISECS_PER_BLOCK;
-    pallet_timestamp::Pallet::<T>::set_timestamp((end + grace_period).into());
+    let block = frame_system::Pallet::<T>::block_number();
+    zeitgeist_utils::set_block_number_timestamp::<T>(block, (end + grace_period).into());
     Call::<T>::report { market_id, outcome }
         .dispatch_bypass_filter(RawOrigin::Signed(caller.clone()).into())?;
     Call::<T>::admin_move_market_to_resolved { market_id }
         .dispatch_bypass_filter(resolve_origin)?;
     Ok((caller, market_id))
 }
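For context, the helper introduced in the added lines above bundles the block-number and timestamp updates so both pallets stay consistent while the benchmark advances time. A minimal sketch of what such a helper could look like; the actual zeitgeist_utils implementation may differ in signature and bounds:

// Sketch only; the real zeitgeist_utils helper may differ.
pub fn set_block_number_timestamp<T>(block_number: T::BlockNumber, timestamp: T::Moment)
where
    T: frame_system::Config + pallet_timestamp::Config,
{
    // Advance the recorded block number first, then set the timestamp for that block.
    frame_system::Pallet::<T>::set_block_number(block_number);
    pallet_timestamp::Pallet::<T>::set_timestamp(timestamp);
}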

Member

Thanks for applying the patch at #887 (comment). I see now what was missing in the patch to make it complete. Some arguments against the approach of tightly coupling Aura:

  • The problem actually lies in the minimum timestamp period. It is too low in the benchmark environment, which prevents us from "skipping" blocks in terms of a timestamp delta. This does not only affect Aura; it could also deny proper execution of other consensus systems.
  • Currently we enforce that projects set the parachain feature when using PM in case they don't use Aura.
  • Projects that have a standalone chain but don't use Aura (they could use Babe instead, for example) won't be able to use the pallet benchmarks.

The solution suggested here does two things:

  1. Create a custom MinimumPeriod implementation.
  2. Within the MinimumPeriod implementation, figure out whether the node is in a benchmark environment or not and return an adequate value.

Unfortunately I am not aware of a direct way to query whether the node is in a benchmark environment; however, after some extensive analysis of the pallet there is an indirect way. The account whitelist can be fetched, and it should only be populated in a benchmark environment: during initialization of the benchmarks, a couple of keys are added to the whitelist. We can therefore check whether the whitelist contains elements to determine whether the node is in a benchmark environment.

The solution is to define the MinimumPeriod of the timestamp pallet as follows:

// Timestamp
/// Custom getter for minimum timestamp delta.
/// This ensures that consensus systems like Aura don't experience breaking assertions
/// in a benchmark environment
pub struct MinimumPeriod;
impl MinimumPeriod {
    /// Returns the value of this parameter type.
    pub fn get() -> u64 {
        #[cfg(feature = "runtime-benchmarks")]
        {
            use frame_benchmarking::{Benchmarking, benchmarking::get_whitelist};
            // A non-empty whitelist implies that we are in a benchmark environment.
            if !get_whitelist().is_empty() {
                return u64::MAX;
            }
        }

        MILLISECS_PER_BLOCK as u64 / 2
    }
}
impl<I: From<u64>> frame_support::traits::Get<I> for MinimumPeriod {
    fn get() -> I {
        I::from(Self::get())
    }
}
impl frame_support::traits::TypedGet for MinimumPeriod {
    type Type = u64;
    fn get() -> u64 {
        Self::get()
    }
}
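For reference, a struct defined this way slots into the runtime's timestamp configuration exactly where a parameter_types! constant would normally go. A sketch assuming a typical Aura-based runtime; the other associated types here are illustrative, not Zeitgeist's actual values:

impl pallet_timestamp::Config for Runtime {
    type Moment = u64;
    // Aura consumes the timestamp via its OnTimestampSet hook.
    type OnTimestampSet = Aura;
    // The custom getter above replaces the usual parameter_types! constant.
    type MinimumPeriod = MinimumPeriod;
    type WeightInfo = ();
}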

You can add https://github.com/sea212/zeitgeist as a remote and then merge the branch sea212-fix-aura-slot-issue into this PR. Click here to see a changelog.

Member

I am not sure if the changes (besides the Aura thing) within the PM benchmarks were mandatory, so feel free to revert some of the changes from my PR in that regard.

Contributor Author

It feels like with this new solution CurrentSlot will always be zero, and that's how it prevents the assertion failures.
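For context, the assertion in question lives in pallet_aura's OnTimestampSet implementation. Paraphrased below; the exact code in polkadot-v0.9.29 may differ slightly:

fn on_timestamp_set(moment: T::Moment) {
    // Aura's slot duration defaults to 2 * MinimumPeriod, so a MinimumPeriod of
    // u64::MAX makes the slot derived from any realistic timestamp saturate to 0.
    let slot_duration = Self::slot_duration();
    let timestamp_slot = moment / slot_duration;
    let timestamp_slot = Slot::from(timestamp_slot.saturated_into::<u64>());
    // In benchmarks CurrentSlot is never advanced and stays at 0, so the check
    // only passes if the derived slot is also 0.
    assert!(
        CurrentSlot::<T>::get() == timestamp_slot,
        "Timestamp slot must match `CurrentSlot`"
    );
}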

Contributor Author

I had initially thought of doing something like that, but when I took a look at other projects, their approach seemed better to me. :)

@sea212 sea212 added s:revision-needed The pull requests must be revised and removed s:review-needed The pull request requires reviews labels Jan 19, 2023
@vivekvpandya vivekvpandya added s:review-needed The pull request requires reviews and removed s:revision-needed The pull requests must be revised labels Jan 20, 2023
@codecov-commenter commented Jan 20, 2023

Codecov Report

Merging #887 (e02c6d3) into main (b77b5ac) will increase coverage by 0.10%.
The diff coverage is 87.55%.

@@            Coverage Diff             @@
##             main     #887      +/-   ##
==========================================
+ Coverage   94.42%   94.53%   +0.10%     
==========================================
  Files          93       93              
  Lines       20258    21102     +844     
==========================================
+ Hits        19129    19948     +819     
- Misses       1129     1154      +25     
Flag Coverage Δ
tests 94.53% <87.55%> (+0.10%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files Coverage Δ
.../src/traits/zeitgeist_multi_reservable_currency.rs 53.65% <ø> (ø)
zrml/authorized/src/mock.rs 37.50% <0.00%> (ø)
zrml/court/src/mock.rs 33.33% <0.00%> (ø)
zrml/court/src/weights.rs 0.00% <0.00%> (ø)
zrml/liquidity-mining/src/weights.rs 0.00% <0.00%> (ø)
zrml/simple-disputes/src/mock.rs 8.69% <0.00%> (ø)
zrml/styx/src/weights.rs 0.00% <0.00%> (ø)
zrml/prediction-markets/src/lib.rs 92.30% <53.84%> (+1.53%) ⬆️
zrml/swaps/src/lib.rs 95.76% <72.72%> (+0.38%) ⬆️
zrml/swaps/src/weights.rs 84.65% <86.13%> (ø)
... and 14 more


sea212 previously approved these changes Jan 20, 2023

@sea212 (Member) left a comment

LGTM, great work Vivek 👍

@Chralt98 Chralt98 added s:accepted This pull request is ready for merge and removed s:review-needed The pull request requires reviews labels Jan 20, 2023
@sea212 sea212 merged commit 3e07be6 into main Jan 20, 2023
@sea212 sea212 deleted the upgrade_0929 branch January 20, 2023 13:22
@vivekvpandya (Contributor Author)

Thanks @sea212 and @Chralt98 for the patches and reviews!

Labels: s:accepted This pull request is ready for merge
Successfully merging this pull request may close these issues: Upgrade codebase to polkadot-v0.9.29
4 participants