
Batching same-program instruction invocations to save on context-switch overhead #18428

Closed
jon-chuang opened this issue Jul 5, 2021 · 1 comment
Labels
stale [bot only] Added to stale content; results in auto-close after a week.

Comments

jon-chuang (Contributor) commented on Jul 5, 2021

Problem

Currently, making multiple instruction invocations into the same program incurs a significant amount of overhead per invocation:

  1. Copying data from BPF memory into keyed accounts.
  2. Verification of intermediate account data against a reference (PreAccount).
  3. Verification of the resultant data from process_instruction against the reference.
  4. Copying data back into BPF memory.

The number chosen to reflect these costs is 1,000 compute units per CPI. That is not particularly high, but it is about 20-25% of a typical Serum transaction. Raydium transactions, which typically call Serum CPIs 10-20 times each, could save up to 20 × 1,000 = 20,000 CUs, bringing the median cost down from 85,000 to 65,000 and allowing greater composability with Raydium.

Proposed Solution

Allow batching of multiple instructions into a single syscall. This would eliminate the extra data copies per CPI.

In this proposal we ignore savings on verification, since verification between instructions may be a desirable property even when they are executed by the same program, and retaining it opens up the possibility of batching instructions from different programs.
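For illustration only, a program-side wrapper for such a syscall might look roughly like the sketch below. Note that invoke_signed_batch is purely hypothetical; it is not part of solana_program, and the body here simply falls back to one CPI per instruction so the sketch stays self-contained.

// HYPOTHETICAL: invoke_signed_batch is not part of solana_program; it stands in
// for whatever wrapper a batching syscall would expose to programs.
use solana_program::{
    account_info::AccountInfo,
    entrypoint::ProgramResult,
    instruction::Instruction,
    program::invoke_signed,
};

/// Submit several instructions (all resolvable against `account_infos`) in one
/// call, so account data could be copied into and out of BPF memory once
/// rather than once per instruction.
pub fn invoke_signed_batch(
    instructions: &[Instruction],
    account_infos: &[AccountInfo],
    signers_seeds: &[&[&[u8]]],
) -> ProgramResult {
    // Placeholder body: still one CPI per instruction. A real implementation
    // would replace this loop with a single batched syscall.
    for ix in instructions {
        invoke_signed(ix, account_infos, signers_seeds)?;
    }
    Ok(())
}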

Todo

Investigate just how much time could be saved by batching. It is probably on the order of tens of microseconds per invoke, which may not be significant enough to justify the effort.

jon-chuang changed the title from "Batching same-program instruction invocations to save on overhead" to "Batching same-program instruction invocations to save on context-switch overhead" on Jul 6, 2021
jon-chuang (Contributor)

Some evidence of where this would be of significant use: the Mango Markets ForceCancelOrders instruction,
https://github.com/blockworks-foundation/mango/blob/5556f483f939bfd967dbc3d255a3d94e354f35f7/program/src/processor.rs#L1852
performs up to 128 invoke_signed calls:

// One CPI (and its per-invocation copy/verify overhead) per cancel.
for cancel in cancels.iter() {
    let cancel_instruction =
        serum_dex::instruction::MarketInstruction::CancelOrderV2(cancel.clone());
    instruction.data = cancel_instruction.pack();
    solana_program::program::invoke_signed(&instruction, &account_infos, signers_seeds)?;
}
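
With a batching API along the lines of the hypothetical invoke_signed_batch sketch above, the same loop could pack all the cancels up front and cross the syscall boundary once:

// HYPOTHETICAL usage of the invoke_signed_batch sketch above: build every
// CancelOrderV2 instruction first, then make one batched call instead of up
// to 128 separate CPIs.
let batch: Vec<solana_program::instruction::Instruction> = cancels
    .iter()
    .map(|cancel| {
        let mut ix = instruction.clone();
        ix.data = serum_dex::instruction::MarketInstruction::CancelOrderV2(cancel.clone()).pack();
        ix
    })
    .collect();
invoke_signed_batch(&batch, &account_infos, signers_seeds)?;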

github-actions bot added the stale label on Dec 23, 2022
github-actions bot closed this as not planned (stale) on Dec 30, 2022