Get rid of wasm-gc #1262
According to the discussion above, the major source of the cruft is the debug sections. There is a tool called wasm-opt; if we decide to depend on the wabt/binaryen tools in our build pipeline, we could then use wasm-opt, which can optimize the binary (including pruning of dead symbols) and produce a noticeably smaller one.
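As a sketch of how such a post-processing step might be wired in (my assumption, not the actual pipeline; `node_runtime.wasm` is a placeholder path, and the flags are binaryen's `wasm-opt` flags):

```shell
# Hypothetical post-processing step using binaryen's wasm-opt:
# -Oz optimizes aggressively for size, --strip-debug drops debug sections.
in=node_runtime.wasm
out=node_runtime.opt.wasm
if command -v wasm-opt >/dev/null 2>&1 && [ -f "$in" ]; then
  wasm-opt -Oz --strip-debug "$in" -o "$out"
  du -sk "$out"
else
  echo "skipping: needs binaryen's wasm-opt and $in"
fi
```

The guard just lets the snippet degrade gracefully when the tool or the input file is missing.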
Why do we have debug sections in a release build? Shouldn't they be removed?

I'd say they should be removed! But for now this is not the case (and it has been this way for a while).
So, a quick recap: there is a chance that things have changed since we last checked. Notably, there is an LLD flag that might do the same pruning. To check whether that actually helps, we need to rebuild the runtime with that flag and compare sizes.

You should be able to find the runtime files under the `wbuild` directory of the build target. If the results turn out to be satisfying, we can then get rid of wasm-gc entirely.
With wasm-gc (compact):

```
→ pwd
/Users/kianenigma/Desktop/Parity/substrate/target/release/wbuild/node-runtime
→ du -sk node_runtime.compact.wasm
1156	node_runtime.compact.wasm
```

Original:

```
→ pwd
.../substrate/target/release/wbuild/target/wasm32-unknown-unknown/release
→ du -sk node_runtime.wasm
1480	node_runtime.wasm
```

Without wasm-gc (run with `let res: Result<bool, std::io::Error> = Ok(true);`):

```
→ pwd
/substrate/target/release/wbuild/target/wasm32-unknown-unknown/release
→ du -sk node_runtime.wasm
1168	node_runtime.wasm
```

For the last run, after a `cargo clean` I saved the compact file of the previous build and pasted it back to silence the missing-file error, and then ran the command with the linker flag. I am not 100% sure I have done it correctly, though.
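Putting the three `du -sk` numbers above side by side (my quick arithmetic, not part of the original report):

```shell
# Compare the sizes reported above (KiB, from `du -sk`).
orig=1480   # node_runtime.wasm, untouched release build
gc=1156     # node_runtime.compact.wasm, after wasm-gc
flag=1168   # node_runtime.wasm, rebuilt with the linker flag
awk -v o="$orig" -v g="$gc" -v f="$flag" 'BEGIN {
  printf "wasm-gc saves      %.1f%%\n", (o - g) * 100 / o
  printf "linker flag saves  %.1f%%\n", (o - f) * 100 / o
  printf "remaining gap      %.1f%%\n", (f - g) * 100 / g
}'
```

This prints savings of roughly 21.9% for wasm-gc and 21.1% for the linker flag, so the flag alone recovers almost all of what wasm-gc saves, leaving a gap of about 1%.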
Just a note that the test suite depends on wasm-gc and the prerequisites installation script doesn't pull it in. So to reduce new-user friction, it should either be added to that script or removed from the tests, imo.

Which script? The init script installs it.

You're right, I've been relying too much on the getsubstrate.io script, which does not pull it in, but the README does document it.

@shawntabrizi ^^^

(That is also a script I really would like to get rid of!)
I ran into it from the Substrate Kitties workshop and here. As a tech ed I'm all for simplification and abstracting bootstrapping away from the user, but I do agree that if we cannot provide a coherent experience to newcomers, such a script shouldn't be a thing. Maybe something like a Vagrant box with all the prerequisites installed would be better.
We can and should update the script. @bkchr I am 100% with you; let's get rid of all the scripts. For now, though, I don't see a good solution for installing all these prerequisites. Maybe the best thing is to literally have an install page with instructions for each operating system? @Swader Vagrant is a virtual env?

Here is a PR to update the script.
@shawntabrizi Vagrant is a VM tool for composing and running reproducible, headless VMs with a small size and resource footprint. For a good while I maintained https://github.com/Swader/homestead_improved. Basically, it lets you pre-install stuff with provisioning scripts (shell, Ansible, etc.) into a headless distro like a ~200MB Ubuntu, similar to what you might find in WSL, except it's not raw: all the prerequisites are in place, so no downloading Rust, running updates, and so on. The organization or author keeps the box up to date, and users merely run `vagrant box update` once in a while, though even that is optional: once they have an instance of the VM running, they can update it like any other headless VM.

Here's a better-explained rundown of mine from ages ago: https://www.sitepoint.com/re-introducing-vagrant-right-way-start-php/ (it mentions PHP but has nothing to do with it specifically).

In a nutshell, all OSes then have the exact same thing. On Windows it works exactly the same as on OS X or Linux, so all of your users follow exactly one set of instructions which always works (download Vagrant, VirtualBox, and this box), and they all share the same environment from which to report bugs, which is coincidentally identical to the environment on which you can then reproduce them.
@Swader if you are recommending that we maintain Vagrant boxes (or similar IaC tooling), that'd be cool. We already have Docker builds that are mostly useful for playing with the UI and less for hacking on Substrate itself. But such things will never be enough; we will always need manual installation instructions as well.
I agree. Let me see if I can put together a box; I could use one myself, so I'm sure I could dedicate part of my time to maintaining it. We can evaluate once it's done.

Hi, is it okay if I work on this issue?

You can work on this, but our experiments show that you probably will not find anything that achieves the same result.

@pepyakin this should be closed, right?
I haven't been following what is happening on the wasm-builder side lately, but just a quick glance revealed that it still does invoke wasm-gc.
I don't get it. AFAIK wasm-builder already builds the runtime with that flag.
Just guessing: probably because of the extra build time.
Hmm... indeed, although an extra 40s doesn't seem that bad considering how long everything else takes to build. Maybe we should have separate cargo profiles for a proper release build and a "development" release build?
Yes, I was thinking about something along these lines too. Alternatively, maybe finally get the debug builds working (with lots of opt-level overrides for performance-critical crates).
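A separate profile along those lines could be sketched in `Cargo.toml` (the profile name and settings here are my assumptions, not the actual setup):

```toml
# Hypothetical sketch: keep `release` fast to build,
# and add a stricter profile for the binaries we actually ship.
[profile.release]
opt-level = 3

# Custom profiles inherit from an existing one (Cargo >= 1.57).
[profile.production]
inherits = "release"
lto = true
codegen-units = 1
```

Day-to-day builds would then use `--release` as before, while release preparation would pass `--profile production`.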
The problem is that it's 40 seconds for one runtime, and in Polkadot we have 4 ༼ ༎ຶ ෴ ༎ຶ༽. I envision that with #7288 we will be able to split the runtimes out, so those will be buildable separately.
Yes, but wouldn't they be compiled in parallel?

But @nazar-pc reports a difference.

Not according to this documentation.

Oh, right, that's true.
I think this is caused by incremental compilation; I did not wipe the `target` directory between runs.
Assuming you have enough CPU cores, of course 😉

You also reported different code sizes.
I just re-ran it a few times and I reproducibly get different sizes with and without the flag. I don't think incremental compilation in Rust guarantees that builds from scratch are fully identical to incremental rebuilds, but this is not it regardless.
I checked the commands that are being generated. I guess you need a lot of CPU cores for that.
Well, indeed I do, all 32 of them. (: Anyhow, this seems like something we should measure. Maybe you could also compile Polkadot?
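To double-check what cargo actually passes to rustc here, one generic trick (not specific to wasm-builder) is to grep the verbose build log; the guard below is only so the snippet degrades gracefully outside a cargo project:

```shell
# Count how many rustc invocations mention codegen-units in a verbose build.
if command -v cargo >/dev/null 2>&1 && [ -f Cargo.toml ]; then
  msg=$(cargo build --release -v 2>&1 | grep -c 'codegen-units')
else
  msg="run this inside a cargo project"
fi
echo "$msg"
```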
Polkadot compilation on the same machine as above, also attaching timing results. Status quo:
Runtime compilation into WASM is a single-threaded process, so with 4+ cores the time overhead doesn't add up and compilation happens in parallel.
We could just remove wasm-gc and do the optimal build only when we prepare a release. We already do this for some other stuff.
However, I kept it for now. I am not sure how important code size is, but we could also look into compiling the runtime with optimizations for size.
I did this two years ago or so and it didn't bring any good results. I think it was even slower.

That isn't surprising: it optimizes for smaller code rather than "fast code". Often that isn't necessarily slower, though. I am not sure how important a smaller runtime would be and whether it makes sense to sacrifice some performance.

There are cases where performance improves, since the smaller binary causes fewer cache misses when fetching instructions. We would have to test it on the exact hardware.

IMO we should not over-complicate it. Just keep the compiler optimizing for performance, remove the stuff we don't need from the final binary, and then we compress it anyway.
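For completeness, optimizing just the runtime crate for size would be a per-package opt-level override in `Cargo.toml` (a sketch under my assumptions; `node-runtime` is a placeholder package name, and whether `"z"` or `"s"` pays off is exactly what would need benchmarking):

```toml
# Hypothetical override: build everything for speed,
# but the runtime crate itself for size.
[profile.release]
opt-level = 3

[profile.release.package.node-runtime]
opt-level = "z"   # or "s" for a milder size optimization
```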
I think it still reduces the code size a bit. Didn't check by how much, though.
The difference right now is roughly this:
So it is only a ~5.2% increase, which is significant but not a deal-breaker, especially considering that things got smaller overall.
I wonder where this difference comes from. Can you run a comparison of the two?
Uncompressed size:
wasm-gc got deprecated but we still use it in our build pipeline.
The author mentions that:
But for us this is not entirely true and the size of the runtime is:
And
But we use neither of these.
We need to figure out what we want to use.