This AIP proposes new infrastructure for MoveVM and AptosVM to load and cache modules and scripts.
The existing code loading logic has unspecified (and sometimes unexpected) semantics around module publishing and module initialization.
For example: 1) init_module can link to old code versions instead of the newly published ones; 2) modules are published one by one, linking sometimes to the new and sometimes to the old versions of code, depending on the order.
This results in a poor user experience.
There are also performance issues.
Because MoveVM owns the code cache, Block-STM can cache incorrect data during speculative module publishing or module initialization.
To prevent this, the cache is periodically flushed, hurting performance.
There are also no shared global caches, and loading new modules can overload the system.
With the new infrastructure, the semantics of module publishing and initialization are clearly defined, enhancing the user experience.
A global shared module cache is added to the execution layer, and Block-STM is changed to handle module upgrades in parallel.
This significantly improves performance.
For instance, based on existing and new benchmarks, module publishing is 14.1x faster, and workloads with many modules are 1.5x-2x faster.
Out of scope
This AIP does not focus on further optimizations, such as:
accessing Aptos module metadata (RuntimeModuleMetadataV1, etc.) more efficiently,
caching the size of the transitive closure to make the gas charging for dependencies faster,
arena-based memory allocations.
High-level Overview
In short, the solution is:
MoveVM becomes stateless, containing only the logic to load code from the cache.
The code cache implementation is provided by the clients using MoveVM.
Module publishing is changed to publish modules as a single bundle, instead of publishing them one-by-one.
This fixes issues around publishing and init_module.
Block-STM is adapted to work correctly with module publishing, ensuring no speculative information is ever cached.
This is achieved by making new modules only visible at rolling commit time.
A global thread-safe, lock-free module cache is introduced to store modules across multiple blocks.
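To make the design above concrete, here is a minimal Rust sketch of a client-provided code cache with atomic bundle publishing. All names and signatures are illustrative, not the actual aptos-core API, and an `RwLock`-guarded map stands in for the real lock-free cache:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Hypothetical trait: the client supplies the code cache, and the
// (now stateless) VM only reads modules through it.
trait ModuleCache: Send + Sync {
    fn get(&self, name: &str) -> Option<Arc<Vec<u8>>>;
    // Publishing installs the whole bundle at once, so dependent
    // modules never link against a mix of old and new code.
    fn publish_bundle(&self, bundle: Vec<(String, Vec<u8>)>);
}

// Simplified thread-safe implementation; the real cache is lock-free.
#[derive(Default)]
struct GlobalModuleCache {
    modules: RwLock<HashMap<String, Arc<Vec<u8>>>>,
}

impl ModuleCache for GlobalModuleCache {
    fn get(&self, name: &str) -> Option<Arc<Vec<u8>>> {
        self.modules.read().unwrap().get(name).cloned()
    }

    fn publish_bundle(&self, bundle: Vec<(String, Vec<u8>)>) {
        // Single write section: every module in the bundle becomes
        // visible together, mirroring visibility at commit time.
        let mut map = self.modules.write().unwrap();
        for (name, code) in bundle {
            map.insert(name, Arc::new(code));
        }
    }
}
```

The key design point mirrored here is atomicity: readers either see the entire new bundle or none of it, which is what prevents the mixed old/new linking described above.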
@georgemitenkov per today's Move interest group presentation on the new loader/cache model, how does this interplay with randomness during init_module?
@alnoki thanks for pointing out the issue! The new loader does not solve this; it is a separate issue, because only private entry functions are allowed to call randomness, and init_module is not an entry function.
Read more about it here: https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-107.md