WasmPageAllocator improvements #3789

Closed · fengb opened this issue Nov 27, 2019 · 4 comments · Fixed by #3830
Labels: contributor friendly · enhancement · standard library
Comments

fengb (Contributor) commented Nov 27, 2019

Briefly discussed this over IRC.

The current wasm allocator has some implicit state that does not play nicely with other allocators that may directly use the `memory.grow` intrinsic. Additionally, wasm has its own challenges, like not providing a way to free memory.

We should update the WasmPageAllocator to match the PageAllocator a bit more closely (a sketch follows the list):

  • alloc should always return unused page(s) of memory
  • free should return the pages back to Zig
  • bookkeeping to handle free pages and page sizes
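As a rough illustration of that contract, here is a minimal sketch; the function name is made up, `@wasmMemoryGrow`/`@ptrFromInt` are later-Zig builtins, and this is not the shipped implementation:

```zig
const page_size = 64 * 1024; // size of one wasm page

// Hypothetical page-granular alloc: hand back whole, freshly grown
// pages with no hidden state shared with other memory.grow callers.
fn allocPages(n: usize) ?[*]u8 {
    // @wasmMemoryGrow wraps the memory.grow intrinsic; it returns the
    // previous memory size in pages, or -1 if the engine refuses.
    const prev = @wasmMemoryGrow(0, n);
    if (prev == -1) return null;
    // New pages begin right at the old end of memory. (Memory size is
    // never 0 here, since globals and the stack occupy early pages.)
    return @as([*]u8, @ptrFromInt(@as(usize, @intCast(prev)) * page_size));
}
```

free would then record those pages in the allocator's own bookkeeping, since wasm gives no way to hand memory back to the host.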
fengb (Contributor, Author) commented Nov 27, 2019

I had experimented with 2 styles of metadata with ZeeAlloc (both sketched after this list):

  1. Separate node list with `{ start: [*]u8, len: usize, next: *Node }`

    • 12 bytes per node
    • requires preallocation
    • can theoretically run out of nodes before running out of memory
    • reverse lookup (ptr => metadata) is a giant pain
  2. Reserve the first 8 bytes of each page group with `{ next: *Node, len: usize }`

    • reduces usable size by 8 bytes per page group
    • reduces available alignment
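For concreteness, the two layouts as Zig structs; the 12- and 8-byte figures assume wasm32, where pointers and usize are 4 bytes each (a sketch, not ZeeAlloc's actual definitions):

```zig
const std = @import("std");

// Style 1: out-of-band node list, 12 bytes per node on wasm32.
const Node = struct {
    start: [*]u8, // first byte of the free run
    len: usize,   // length of the run
    next: ?*Node, // singly-linked list of free runs
};

// Style 2: header embedded in the first 8 bytes of each page group.
const Header = struct {
    next: ?*Header, // next free page group
    len: usize,     // size of this group
};

comptime {
    if (@sizeOf(usize) == 4) { // e.g. when targeting wasm32
        std.debug.assert(@sizeOf(Node) == 12);
        std.debug.assert(@sizeOf(Header) == 8);
    }
}
```

Style 2 also shows where the alignment loss comes from: the usable region starts 8 bytes into the group, so callers can never receive a page-aligned pointer from it.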

andrewrk added this to the 0.7.0 milestone Nov 27, 2019
andrewrk added the contributor friendly, enhancement, and standard library labels Nov 27, 2019
andrewrk (Member) commented

Here's another half-formed idea:

  • Pre-allocate x bytes of global memory, where every bit records whether a page is free or used. x = 256 would provide enough pre-allocated bits to describe 128 MiB of memory (256 bytes × 8 bits per byte × 64 KiB per page), which is probably more than enough for most use cases.
  • If this amount of memory is used up, then the entire next page (64 KiB) of memory in the sequence is used only as bits describing free/used pages. This allows up to another 32 GiB of memory (65,536 bytes × 8 bits per byte × 64 KiB per page). The pattern could be repeated with yet another page, but that would probably never be necessary.

This strategy depends on being the global arbitrator of memory, which is probably reasonable. Other allocators generally want to use backing allocators. This seems like it would have low overhead and good perf to me (although I haven't tested it), and if that is true then it's probably worth having it. There could be another wasm allocator that didn't require being the global arbitrator of memory, with different perf/overhead characteristics, which one could use if necessary. But there are plenty of use cases where this assumption of being the only caller of the wasm intrinsics would hold.
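A minimal sketch of that bitmap bookkeeping, with all names hypothetical (the 256-byte figure is from the bullets above; syntax is later Zig):

```zig
// 256 bytes * 8 bits = 2048 page bits; 2048 pages * 64 KiB per page
// = 128 MiB tracked without spending any extra pages on metadata.
var page_bits = [_]u8{0} ** 256; // 0 = free, 1 = used

fn isUsed(page: usize) bool {
    return (page_bits[page / 8] >> @intCast(page % 8)) & 1 == 1;
}

fn setUsed(page: usize) void {
    page_bits[page / 8] |= @as(u8, 1) << @intCast(page % 8);
}

fn setFree(page: usize) void {
    page_bits[page / 8] &= ~(@as(u8, 1) << @intCast(page % 8));
}

// Linear scan for n consecutive free pages; returns the index of the
// first page in the run, or null if no such run exists yet.
fn findFreeRun(n: usize) ?usize {
    var run: usize = 0;
    for (0..page_bits.len * 8) |page| {
        if (isUsed(page)) {
            run = 0;
        } else {
            run += 1;
            if (run == n) return page + 1 - n;
        }
    }
    return null;
}
```

Presumably alloc would consult findFreeRun first and only fall back to memory.grow when the bitmap has no suitable run.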

fengb (Contributor, Author) commented Nov 27, 2019

I like this approach. My only concern is how we determine the number of pages to free for a multi-page allocation. I had used the heuristic of alignForward on the freed memory's length, but that's not guaranteed to be accurate after a shrink. But maybe it's good enough?

Edit: after typing some of the examples out, I realized we can return excess pages upon shrink, so this is a non-problem.
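To illustrate that edit, a sketch of an in-place shrink that returns whole tail pages to the bitmap immediately (setFree and page_bits as in the earlier sketch; std.mem.alignForward per its later-Zig signature):

```zig
const std = @import("std");
const page_size = 64 * 1024;

var page_bits = [_]u8{0} ** 256; // free/used bitmap, as above

fn setFree(page: usize) void {
    page_bits[page / 8] &= ~(@as(u8, 1) << @intCast(page % 8));
}

// Every whole page past the shrunken end is released on the spot, so
// a later free() never has to guess how many pages the span once had.
fn shrinkPages(ptr: [*]u8, old_len: usize, new_len: usize) void {
    const base = @intFromPtr(ptr);
    const keep_end = std.mem.alignForward(usize, base + new_len, page_size);
    const old_end = std.mem.alignForward(usize, base + old_len, page_size);
    var addr = keep_end;
    while (addr < old_end) : (addr += page_size) {
        setFree(addr / page_size);
    }
}
```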

fengb (Contributor, Author) commented Nov 27, 2019

Instead of reserving a constant set of memory, we can leverage `__heap_base`: since the stack + global data size is most likely not aligned to 64 KiB, we should have some leftover space there to do our bookkeeping.
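A sketch of that idea, assuming the __heap_base symbol that wasm-ld emits (the helper name is made up):

```zig
const std = @import("std");
const page_size = 64 * 1024;

// wasm-ld places __heap_base just past the globals and shadow stack.
extern const __heap_base: u8;

// The slack between __heap_base and the next 64 KiB page boundary is
// already ours, so the free-page bitmap can live there at no cost.
fn bookkeepingSlack() []u8 {
    const base = @intFromPtr(&__heap_base);
    const next_page = std.mem.alignForward(usize, base, page_size);
    const ptr: [*]u8 = @ptrFromInt(base);
    return ptr[0 .. next_page - base]; // empty if base is page-aligned
}
```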

fengb mentioned this issue Dec 2, 2019
andrewrk modified the milestones: 0.7.0 → 0.6.0 Feb 29, 2020