
Consider using wasm threads in the wasm node #91

Closed
tomaka opened this issue Feb 9, 2023 · 16 comments
Labels
blocked Progress on this issue requires something beyond our control

Comments


tomaka commented Feb 9, 2023

paritytech/smoldot#1774


tomaka commented Mar 27, 2023

More notes:

  • Tasks are split into two categories: tasks that interact with networking (open connections, send messages, etc.) and the others. Rather than each task indicating which category it belongs to (which would be risky and error-prone), the Wasm-specific code should assume by default that tasks don't use networking, and move a task to the other category as soon as it calls a networking-related function of the Platform trait.

  • The API should look like this: a new Client.createBackgroundRunner function should be added. It returns an object that can be sent to a worker. While ideally this object would have a run() function of some sort, functions unfortunately can't be transferred, so we will need to add a freestanding function that accepts the object as a parameter.

  • Because the implementation will require a SharedArrayBuffer, and SharedArrayBuffers are not always available, Client.createBackgroundRunner might have to return null.

  • Ideally, tasks running on the main thread should use waitAsync to wake themselves up. Because it's not stable yet, we might need to use a MessagePort instead; however, it is unclear to me whether it is legal to have multiple copies of the sending side of the port. Background tasks can use wait, but waitAsync would be better there as well in case users want to run other things in the worker. Overall, everything would be much simpler if waitAsync were stable. (See the sketch after this list.)

  • Right now tasks use setTimeout(..., 0) or setImmediate in order to yield. This could instead be done with the same mechanism as the one used to schedule a task on the main thread (i.e. waitAsync or a message port).
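As a rough sketch of the wake-up mechanism described above (the function name and parameters are invented for illustration; this is not smoldot's actual code): a task parked on the main thread would be woken either through Atomics.waitAsync when the engine supports it, or through a MessagePort whose other end was handed to the notifying thread beforehand. `flags` is assumed to be an Int32Array backed by a SharedArrayBuffer visible to all threads.

function waitForWakeUp(flags: Int32Array, index: number, fallbackPort: MessagePort): Promise<void> {
    // `waitAsync` isn't declared in older TypeScript lib files, hence the cast.
    const waitAsync = (Atomics as any).waitAsync;
    if (typeof waitAsync === "function") {
        // Resolves once another thread executes `Atomics.notify(flags, index)`,
        // or immediately if the value at `index` is already different from 0.
        const result = waitAsync(flags, index, 0);
        return result.async ? result.value.then(() => {}) : Promise.resolve();
    }
    // Fallback: the notifying thread posts a message on the other end of
    // `fallbackPort` instead of (or in addition to) calling Atomics.notify.
    return new Promise((resolve) => {
        fallbackPort.onmessage = () => resolve();
    });
}

The same primitive could also replace the setTimeout(..., 0)/setImmediate-based yielding mentioned in the last point.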


tomaka commented Mar 27, 2023

Unfortunately, the Rust side is a bit of a shitshow. See rust-lang/rust#77839 for more info.
Because I remembered seeing the multithreaded ray-tracing demo, I thought that Rust was more or less ready, but it actually isn't at all.
Trying to compile smoldot the same way as that example gave me cryptic compilation errors in the standard library which I don't want to deal with.

tomaka added the "blocked: Progress on this issue requires something beyond our control" label on Mar 27, 2023

tomaka commented Mar 29, 2023

> gave me cryptic compilation errors in the standard library which I don't want to deal with.

Opened rust-lang/rust#109727


tomaka commented Mar 29, 2023

It seems that Wasi isn't really +atomics-friendly at the moment:

https://github.com/rust-lang/rust/blob/master/library/std/src/sys/wasi/mod.rs#L32-L36


tomaka commented Mar 30, 2023

Maybe one way forward would be to first finish #133.

EDIT: no, that's not going to lead anywhere, as futures::lock::Mutex also requires non-async Mutexes under the hood.


tomaka commented Apr 20, 2023

It seems that the direction is actually to add a new wasm32-wasi-threads target: rust-lang/compiler-team#574

They were blocked on LLVM 16 support in Rust, which has now landed.


tomaka commented May 1, 2023

I have managed to build a version of Rust that can compile for multithreaded Wasi.

Here is the diff to apply on the rust-lang/rust repo on top of commit d3edfd18c790971c77845bfc1a2be4f9281c5416:

diff --git a/compiler/rustc_codegen_ssa/src/back/linker.rs b/compiler/rustc_codegen_ssa/src/back/linker.rs
index 65dfc325a11..b13e9270066 100644
--- a/compiler/rustc_codegen_ssa/src/back/linker.rs
+++ b/compiler/rustc_codegen_ssa/src/back/linker.rs
@@ -1197,6 +1197,8 @@ fn new(mut cmd: Command, sess: &'a Session) -> WasmLd<'a> {
                 cmd.arg("--export=__tls_size");
                 cmd.arg("--export=__tls_align");
                 cmd.arg("--export=__tls_base");
+            } else {
+                cmd.arg("--export=wasi_thread_start");
             }
         }
         WasmLd { cmd, sess }
diff --git a/compiler/rustc_target/src/spec/wasm32_wasi.rs b/compiler/rustc_target/src/spec/wasm32_wasi.rs
index a0476d542e6..ec7f6a398ef 100644
--- a/compiler/rustc_target/src/spec/wasm32_wasi.rs
+++ b/compiler/rustc_target/src/spec/wasm32_wasi.rs
@@ -79,7 +79,7 @@ pub fn target() -> Target {
     let mut options = wasm_base::options();
 
     options.os = "wasi".into();
-    options.add_pre_link_args(LinkerFlavor::WasmLld(Cc::Yes), &["--target=wasm32-wasi"]);
+    options.add_pre_link_args(LinkerFlavor::WasmLld(Cc::Yes), &["--target=wasm32-wasi-threads"]);
 
     options.pre_link_objects_self_contained = crt_objects::pre_wasi_self_contained();
     options.post_link_objects_self_contained = crt_objects::post_wasi_self_contained();
@@ -96,6 +96,8 @@ pub fn target() -> Target {
     options.crt_static_default = true;
     options.crt_static_respected = true;
 
+    options.singlethread = false;
+
     // Allow `+crt-static` to create a "cdylib" output which is just a wasm file
     // without a main function.
     options.crt_static_allows_dylibs = true;
@@ -109,7 +111,7 @@ pub fn target() -> Target {
     options.entry_name = "__main_void".into();
 
     Target {
-        llvm_target: "wasm32-wasi".into(),
+        llvm_target: "wasm32-wasi-threads".into(),
         pointer_width: 32,
         data_layout: "e-m:e-p:32:32-p10:8:8-p20:8:8-i64:64-n32:64-S128-ni:1:10:20".into(),
         arch: "wasm32".into(),
diff --git a/library/std/src/sys/wasi/mod.rs b/library/std/src/sys/wasi/mod.rs
index c468ae395fc..d27cd66cb26 100644
--- a/library/std/src/sys/wasi/mod.rs
+++ b/library/std/src/sys/wasi/mod.rs
@@ -29,8 +29,6 @@
 #[path = "../wasm/atomics/futex.rs"]
 pub mod futex;
 pub mod io;
-#[path = "../unsupported/locks/mod.rs"]
-pub mod locks;
 pub mod net;
 pub mod os;
 #[path = "../unix/os_str.rs"]
@@ -50,7 +48,20 @@
 pub mod time;
 
 cfg_if::cfg_if! {
-    if #[cfg(not(target_feature = "atomics"))] {
+    if #[cfg(target_feature = "atomics")] {
+        #[path = "../unix/locks"]
+        pub mod locks {
+            #![allow(unsafe_op_in_unsafe_fn)]
+            mod futex_condvar;
+            mod futex_mutex;
+            mod futex_rwlock;
+            pub(crate) use futex_condvar::Condvar;
+            pub(crate) use futex_mutex::Mutex;
+            pub(crate) use futex_rwlock::RwLock;
+        }
+    } else {
+        #[path = "../unsupported/locks/mod.rs"]
+        pub mod locks;
         #[path = "../unsupported/once.rs"]
         pub mod once;
     }
diff --git a/src/bootstrap/compile.rs b/src/bootstrap/compile.rs
index 7d2a6862500..2321707afe3 100644
--- a/src/bootstrap/compile.rs
+++ b/src/bootstrap/compile.rs
@@ -275,7 +275,7 @@ fn copy_self_contained_objects(
             .unwrap_or_else(|| {
                 panic!("Target {:?} does not have a \"wasi-root\" key", target.triple)
             })
-            .join("lib/wasm32-wasi");
+            .join("lib/wasm32-wasi-threads");
         for &obj in &["libc.a", "crt1-command.o", "crt1-reactor.o"] {
             copy_and_stamp(
                 builder,
@@ -377,7 +377,7 @@ pub fn std_cargo(builder: &Builder<'_>, target: TargetSelection, stage: u32, car
 
         if target.ends_with("-wasi") {
             if let Some(p) = builder.wasi_root(target) {
-                let root = format!("native={}/lib/wasm32-wasi", p.to_str().unwrap());
+                let root = format!("native={}/lib/wasm32-wasi-threads", p.to_str().unwrap());
                 cargo.rustflag("-L").rustflag(&root);
             }
         }

It must also be built with a config.toml containing:

[rust]
lld = true

[target.wasm32-wasi]
wasi-root = "/path/to/wasi-sysroot"   # download from <https://github.com/WebAssembly/wasi-sdk/releases/download/wasi-sdk-20/wasi-sysroot-20.0.tar.gz>

And with command: RUSTFLAGS="-C target-feature=+bulk-memory,+atomics" ./x.py build --stage 1 --target x86_64-unknown-linux-gnu --target wasm32-wasi


tomaka commented May 2, 2023

#494 is a draft of the plan highlighted here.

Unfortunately, the two HTTP headers required for SharedArrayBuffer to be available are pretty annoying.
Instead, as a fallback, we could add an implementation of the Connection interface that sends messages to the main thread and receives responses using a MessageChannel.

However, this means that smoldot needs to run entirely within the worker. This isn't really compatible with an API where you first start smoldot and then call createBackgroundRunner() later on, as there's no way to transfer an existing Wasm instance.

For this reason, I think that the API should change: instead of adding a createBackgroundRunner function, we should make the user specify at initialization whether they want a smoldot that runs only on the main thread, or one that also runs in a worker. In other words, we should add a new function named something like startWithWorker as an alternative to start.

startWithWorker would return a tuple of [Client, BackgroundRunner]. The user then sends the BackgroundRunner to a worker and runs it.
startWithWorker would use crossOriginIsolated in order to determine whether smoldot runs multithreaded and the BackgroundRunner contains a SharedArrayBuffer (like in #494), or whether smoldot runs single-threaded and the BackgroundRunner contains a MessageChannel.
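To make the proposed shape concrete, here is a type-level sketch of what startWithWorker could look like (the types and helper functions are invented for illustration; this is not the actual smoldot API):

// `Client` stands in for smoldot's client handle; `startShared` and
// `startOverPort` are placeholders for the two internal start paths.
type Client = { terminate(): Promise<void> };
declare function startShared(config: object): Client;
declare function startOverPort(config: object, port: MessagePort): Client;

interface BackgroundRunner {
    // Either a SharedArrayBuffer (multithreaded) or a MessagePort
    // (single-threaded, smoldot running entirely inside the worker); both
    // can be posted to a worker, the port as a transferable.
    payload: SharedArrayBuffer | MessagePort;
}

function startWithWorker(config: object): [Client, BackgroundRunner] {
    if (globalThis.crossOriginIsolated) {
        // COOP/COEP headers are set: SharedArrayBuffer is available and the
        // background threads can share memory with the main thread.
        return [startShared(config), { payload: new SharedArrayBuffer(1024) /* size illustrative */ }];
    }
    // No cross-origin isolation: fall back to running smoldot entirely in the
    // worker and communicating with it over a MessageChannel.
    const { port1, port2 } = new MessageChannel();
    return [startOverPort(config, port1), { payload: port2 }];
}

The user would then post the BackgroundRunner (or its payload) to a worker with postMessage, transferring the port when present, and call the freestanding run function there.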


tomaka commented May 2, 2023

Another idea is to add a new field to StartConfig: portsToWorkers?: MessagePort[].
The user would then call run(MessagePort) within the workers. The ports communicate and smoldot does the whole setup automatically.
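Under this idea, usage could look roughly like this (start and run stand for the hypothetical public functions, and the worker script path is illustrative; only a sketch):

// Hypothetical API surface for this sketch.
declare function start(config: { portsToWorkers?: MessagePort[] }): unknown;
declare function run(port: MessagePort): void;

// Main thread: one MessageChannel per worker, one side given to smoldot.
const channel = new MessageChannel();
const client = start({ portsToWorkers: [channel.port1] });
const worker = new Worker(new URL("./worker.js", import.meta.url), { type: "module" });
worker.postMessage(channel.port2, [channel.port2]);

// Inside worker.js: hand the received port back to smoldot, which then
// performs the rest of the setup automatically.
self.onmessage = (event: MessageEvent) => {
    run(event.data as MessagePort);
};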


tomaka commented May 3, 2023

About panics handling:

First of all, the panic strategy has to be "abort". Unwinding requires the "exception handling" wasm proposal, which, while implemented in browsers, is as far as I know not supported by Rust at the moment.

When a thread panics, everything is left in place as it is. In other words, from a logical point of view a panic is similar to a call to sleep(forever). Other threads can't actually observe any poisoned state, for example because mutexes are kept locked.

Therefore, as long as the thread that panics doesn't call back into the Wasm, it's not possible to have any logic error caused by panicking. The panicking thread has all the time in the world to indicate to the other threads that they should abort.

However, "aborting" is easier said than done. The main thread is fortunately guaranteed to never perform any blocking I/O operation (such as waiting), so we are sure that it can be aborted in one way or another. Workers have a terminate() function, which can be used to abort them.

The last possible issue I can see is a situation where the main thread is spin-looping, waiting for another thread to perform some atomic operation. However, nobody in their right mind would actually implement spin-looping to wait for a long-lived mutex lock to be released. When a spin-loop is used, it is because another thread is performing a very short operation, and this very short operation can most likely be proven to never panic.

All of this means that, in principle, the Worker objects should be passed to smoldot so that it can terminate them in case of a panic. However, I'm pretty reluctant to actually do that, because panicking is so niche (it's not supposed to ever happen). I think that we should only provide ports, and instead report the panic in the public API and let the user terminate workers manually if they wish to properly handle errors (which I assume nobody is ever going to do).
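A sketch of what that would mean in practice (the names are invented for illustration): smoldot merely reports the panic through its public API, and the user, who created the workers in the first place, terminates them if they care about cleaning up.

const workers: Worker[] = [];  // workers the user spawned for smoldot and kept track of

function onSmoldotPanic(panicMessage: string) {
    console.error(`smoldot panicked: ${panicMessage}`);
    // The panicked instance is frozen rather than corrupted, so terminating
    // the workers is the only cleanup needed.
    for (const worker of workers) {
        worker.terminate();
    }
}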


tomaka commented May 3, 2023

One other thing to remember (there are so many) is that, at the moment, locking a mutex is implemented using a blocking atomic wait. This will fail if done on the main thread, as browsers forbid blocking waits there. If the situation stays as is, we might need to run smoldot entirely in the background and use the main thread only to communicate with the background, without running any task on it.
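For reference, a minimal illustration of the constraint in question (assuming a flag in a SharedArrayBuffer shared between threads):

const flags = new Int32Array(new SharedArrayBuffer(4));

if (typeof (globalThis as any).WorkerGlobalScope !== "undefined") {
    // Inside a worker, a blocking wait is allowed: this parks the thread
    // until another thread calls Atomics.notify(flags, 0).
    Atomics.wait(flags, 0, 0);
} else {
    // On the main thread, Atomics.wait throws, and a blocking mutex lock
    // compiled down to an atomic wait instruction fails the same way; only
    // waitAsync (or a MessagePort-based scheme) is usable here.
}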


tomaka commented May 5, 2023

I've now closed #494 due to too many conflicts.
That PR has fulfilled its objective, which was to be a proof of concept of a multithreaded smoldot.

As a summary of the changes to make in the future:

  • Compile the code for wasm32-wasi-threads.
  • Don't use instance.exports.memory; instead, create a WebAssembly.Memory manually and provide it to the instance.
  • Change advance_execution and json_rpc_response_peek to signal readiness by notifying a memory location using atomics (futex-style) rather than by calling callbacks. This makes it possible to notify across threads. This change can (and should) be done once waitAsync becomes available in Firefox, and not necessarily only when adding support for threads. (See the sketch after this list.)
  • Split the tasks queue in two between networking and non-networking tasks.
  • Replace all the try_lock()s with lock().
  • Add a threadTy field to the config of local.ts, and make local.ts call wasi_thread_start if it is a secondary thread.
  • Call std::thread::spawn a certain number of times (number of times passed through init?) from the main thread, and handle this call in local.ts by emitting an event. The event should lead to a new worker port being activated to start a new instance.
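A sketch of the second and third points above (the module path and import names are guesses, not the actual smoldot build): the embedder creates a shared WebAssembly.Memory itself and passes it to the instance, and readiness is signalled futex-style by writing a word in that memory and calling Atomics.notify, which works across threads.

// Shared memory created by the embedder instead of using instance.exports.memory.
// Sizes are illustrative; `shared: true` requires cross-origin isolation.
const memory = new WebAssembly.Memory({ initial: 64, maximum: 16384, shared: true });

const { instance } = await WebAssembly.instantiateStreaming(
    fetch("./smoldot.wasm"),      // illustrative path
    { env: { memory } }           // wasi-threads-style memory import; other imports elided
);

// Futex-style notification: write to an agreed-upon address, then wake waiters.
// The waiting side uses Atomics.waitAsync on the main thread (or Atomics.wait
// inside a worker) on the same address.
const readyFlag = new Int32Array(memory.buffer, 1024 /* illustrative offset */, 1);
Atomics.store(readyFlag, 0, 1);
Atomics.notify(readyFlag, 0);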


tomaka commented May 26, 2023

Atomics.waitAsync is now considered a "finished proposal" and if I understand correctly will be implemented in Firefox with high priority.


tomaka commented Jul 3, 2023

cc rust-lang/rust#112922


tomaka commented Jul 26, 2023

Looks like wasm-wasi-threads is coming before waitAsync. I didn't expect that.
It might be possible to find a substitution for waitAsync, for example by using MessagePorts, but I can't really be bothered. waitAsync is so insanely convenient, whereas any substitution would be a complicated hack.


tomaka commented Sep 8, 2023

I'm going to close this, and have opened #1128 instead, specifically for the changes that would use waitAsync.

It seems that we have three directions concerning the wasm: wasm32-wasi-threads (this issue), wasm64-unknown (#88), or wasm32-unknown + exceptions (#519).
The wasm32-wasi-threads target is a "clean hack" (the design is clean, but a more optimal solution is coming), even according to its authors. It's very unlikely that they will migrate it to wasm64 soon after wasm64 becomes ready.

Using more than one thread for the wasm seems overkill to me right now. The fact that it's possible to have a frontend and a worker is IMO more than enough, and if it isn't, then we should probably look into why.
Sure, it would be faster to do, for example, a long-distance warp sync on 10 different chains at once if we had more than one thread, but that's a pretty niche situation.

TL;DR: we want to migrate to wasm64 as soon as it's possible to do so (#88), and maybe also enable wasm exceptions (#519); these two changes conflict with threads and are more worthwhile.
