Fix dropping TLS for detached futures #139

Merged (1 commit), Mar 7, 2024
Cargo.toml (2 additions, 2 deletions):

```diff
@@ -18,8 +18,8 @@
 rand_core = "0.6.4"
 rand = "0.8.5"
 rand_pcg = "0.3.1"
 scoped-tls = "1.0.0"
-smallvec = { version = "1.10.0", features = ["const_new"] }
-tracing = { version = "0.1.21", default-features = false, features = ["std"] }
+smallvec = { version = "1.11.2", features = ["const_new"] }
+tracing = { version = "0.1.36", default-features = false, features = ["std"] }

 [dev-dependencies]
 criterion = { version = "0.4.0", features = ["html_reports"] }
```
src/future/mod.rs (7 additions, 0 deletions):

```diff
@@ -145,6 +145,13 @@
     fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
         match self.future.as_mut().poll(cx) {
             Poll::Ready(result) => {
+                // If we've finished execution already (this task was detached), don't clean up. We
+                // can't access the state any more to destroy thread locals, and don't want to run
+                // any more wakers (which will be no-ops anyway).
+                if ExecutionState::try_with(|state| state.is_finished()).unwrap_or(true) {
+                    return Poll::Ready(());
+                }
                 // Run thread-local destructors before publishing the result.
                 // See `pop_local` for details on why this loop looks this slightly funky way.
                 // TODO: thread locals and futures don't mix well right now. each task gets its own
```

Review comment (Contributor, on lines +148 to +153):

> Why are we still polling the future if execution already finished?
>
> Was the task's local storage already dropped in ExecutionState::cleanup when we drained tasks?

Reply (@jamesbornholt, Member Author, Mar 7, 2024):

> We're not re-polling the future after execution finishes. But at ExecutionState::cleanup time, we drop the continuation, which for unfinished tasks involves resuming the task so that it can unwind its stack. That's how we end up in this code. We do still clean up the task's local storage in ExecutionState::cleanup, so this change isn't actually leaking anything, just changing where the cleanup happens.
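The guard above can be sketched outside Shuttle with a plain `std` future wrapper. This is a simplified stand-in, not Shuttle's actual code: `EXECUTION_FINISHED`, `TaskFuture`, and `demo` are hypothetical names, and an atomic flag stands in for `ExecutionState::is_finished`.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical stand-in for `ExecutionState::is_finished()`: flips to true
// once the surrounding execution has ended and per-task state is gone.
static EXECUTION_FINISHED: AtomicBool = AtomicBool::new(false);

// Wrapper future mirroring the guard added in this PR.
struct TaskFuture<F> {
    future: Pin<Box<F>>,
    cleaned_up: bool, // records whether the post-completion cleanup path ran
}

impl<F: Future<Output = ()>> Future for TaskFuture<F> {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        match self.future.as_mut().poll(cx) {
            Poll::Ready(()) => {
                // The guard: if execution already finished, the state backing
                // thread-locals and wakers is unreachable, so skip cleanup.
                if EXECUTION_FINISHED.load(Ordering::SeqCst) {
                    return Poll::Ready(());
                }
                self.cleaned_up = true; // run TLS destructors, wake joiners, ...
                Poll::Ready(())
            }
            Poll::Pending => Poll::Pending,
        }
    }
}

// Minimal no-op waker so we can poll by hand without an executor.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Poll one already-ready task and report whether the cleanup path ran.
fn demo(execution_finished: bool) -> bool {
    EXECUTION_FINISHED.store(execution_finished, Ordering::SeqCst);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut task = TaskFuture { future: Box::pin(async {}), cleaned_up: false };
    assert!(matches!(Pin::new(&mut task).poll(&mut cx), Poll::Ready(())));
    task.cleaned_up
}
```

With the flag clear, `demo(false)` takes the cleanup path; with it set, `demo(true)` hits the early return, which is the detached-task case the reviewer asked about.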
tests/future/basic.rs (26 additions, 0 deletions):

```diff
@@ -1,3 +1,4 @@
+use futures::task::{FutureObj, Spawn, SpawnError, SpawnExt as _};
 use futures::{try_join, Future};
 use shuttle::sync::{Barrier, Mutex};
 use shuttle::{check_dfs, check_random, future, scheduler::PctScheduler, thread, Runner};
@@ -385,3 +386,28 @@ fn is_finished_on_join_handle() {
         None,
     );
 }
+
+struct ShuttleSpawn;
+
+impl Spawn for ShuttleSpawn {
+    fn spawn_obj(&self, future: FutureObj<'static, ()>) -> Result<(), SpawnError> {
+        future::spawn(future);
+        Ok(())
+    }
+}
+
+// Make sure a spawned detached task gets cleaned up correctly after execution ends
+#[test]
+fn clean_up_detached_task() {
+    check_dfs(
+        || {
+            let atomic = shuttle::sync::atomic::AtomicUsize::new(0);
+            let _task_handle = ShuttleSpawn
+                .spawn_with_handle(async move {
+                    atomic.fetch_add(1, Ordering::SeqCst);
+                })
+                .unwrap();
+        },
+        None,
+    )
+}
```

Review comment (Member, on `impl Spawn for ShuttleSpawn`):

> Nice - didn't know about the Spawn trait.