Fern and forks #59
I don't think fern provides any utilities at the moment which allow changing loggers after setting an initial logger. This could be a reasonable feature, though! In the meantime, if you need this, a stopgap implementation like the following should work:

```rust
use std::sync::RwLock;

use once_cell::sync::OnceCell;

// Holds the currently active logger, if any; replaced on each call to
// set_log_reusable().
static LOG_INSTANCE: OnceCell<RwLock<Option<Box<dyn log::Log>>>> = OnceCell::new();

// Shim installed via log::set_logger(); forwards everything to whatever
// logger LOG_INSTANCE currently holds.
struct UseChangeableLog;

impl log::Log for UseChangeableLog {
    fn enabled(&self, metadata: &log::Metadata) -> bool {
        with_log(|o| o.map(|l| l.enabled(metadata)).unwrap_or(false))
    }

    fn log(&self, record: &log::Record) {
        with_log(|o| {
            if let Some(l) = o {
                l.log(record);
            }
        })
    }

    fn flush(&self) {
        with_log(|o| {
            if let Some(l) = o {
                l.flush();
            }
        })
    }
}

fn with_log<O>(f: impl FnOnce(Option<&dyn log::Log>) -> O) -> O {
    match LOG_INSTANCE.get() {
        None => f(None),
        Some(lock) => {
            let guard = lock.read().unwrap();
            f(guard.as_ref().map(|b| &**b))
        }
    }
}

fn set_log_reusable(max: log::LevelFilter, log: Box<dyn log::Log>) {
    let lock = LOG_INSTANCE.get_or_init(Default::default);
    {
        // ignore poisoning; we're completely overwriting the value anyway
        let mut mutable = lock.write().unwrap_or_else(|e| e.into_inner());
        *mutable = Some(log);
    }
    // this will fail after the first call. that's OK since we're storing the
    // log elsewhere - so ignore the result
    let _ = log::set_logger(&UseChangeableLog);
    // this call, on the other hand, IS callable multiple times
    log::set_max_level(max);
}
```
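As a hypothetical usage example (not part of the original comment): fern's `Dispatch::into_log()` returns the level filter and boxed logger instead of installing them, so it pairs naturally with `set_log_reusable()` for swapping loggers at runtime. The file path below is made up.

```rust
use log::LevelFilter;

// Install (or replace) a stdout logger through the shim above.
fn use_stdout_logger() {
    let (level, logger) = fern::Dispatch::new()
        .level(LevelFilter::Info)
        .chain(std::io::stdout())
        .into_log();
    set_log_reusable(level, logger);
}

// Later, swap the global logger for a file-based one.
fn use_file_logger() -> Result<(), fern::InitError> {
    let (level, logger) = fern::Dispatch::new()
        .level(LevelFilter::Debug)
        .chain(fern::log_file("child.log")?)
        .into_log();
    set_log_reusable(level, logger);
    Ok(())
}
```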
@daboross thanks for the quick reply! Nice shim; it worked perfectly as a drop-in replacement. I'm having a little trouble getting it to work with forking, though. The crux of the issue seems to be that if a thread in the parent process is logging while the child process is being forked, that thread holds the log mutex while the child's memory space is copied from the parent. To the child, the log mutex is then held forever, because the thread that acquired it no longer exists in the child's memory space to release it. The result is that the child deadlocks when trying to set up its logger. Threads and processes, fun :)
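For concreteness, here is a minimal sketch (an assumed reproduction, not code from this thread) of that failure mode: a background thread holds a `Mutex` at the moment of the fork, so the child's copy of the mutex stays locked forever and the child hangs on `lock()`.

```rust
use nix::unistd::{fork, ForkResult};
use once_cell::sync::OnceCell;
use std::sync::Mutex;
use std::thread::{sleep, spawn};
use std::time::Duration;

static LOCK: OnceCell<Mutex<()>> = OnceCell::new();

fn main() {
    // Background thread that holds the lock almost all of the time.
    spawn(|| loop {
        let _guard = LOCK.get_or_init(Default::default).lock().unwrap();
        sleep(Duration::from_millis(50));
    });
    // Give the background thread a chance to grab the lock first.
    sleep(Duration::from_millis(100));

    // fork() is unsafe in recent nix releases, for exactly the kinds of
    // hazards discussed in this thread.
    match unsafe { fork() }.expect("fork failed") {
        ForkResult::Child => {
            // The owning thread doesn't exist in the child, so if the lock was
            // held at fork time this call never returns.
            let _guard = LOCK.get().unwrap().lock().unwrap();
            println!("child acquired the lock (unlikely to be reached)");
        }
        ForkResult::Parent { .. } => {
            println!("parent continues normally");
        }
    }
}
```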
Ah, that makes sense! Forking is tricky. Do you think it'd be reasonable to grab a mutable lock before forking, and only release it once the operation is complete? That way you could guarantee that the mutex is in a consistent unlocked state after the fork completes. It could be done with a utility function like:

```rust
fn with_mutable_logger_lock<O>(f: impl FnOnce() -> O) -> O {
    let lock = LOG_INSTANCE.get_or_init(Default::default);
    // ignore poisoning, as in set_log_reusable()
    let mutable = lock.write().unwrap_or_else(|e| e.into_inner());
    let result = f();
    // explicit drop not 100% necessary, but helps clarity
    drop(mutable);
    result
}
```

I was considering suggesting using an …
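A hypothetical way to wire that up around the fork call (the wrapper below is an assumption, not part of the original suggestion):

```rust
use nix::unistd::{fork, ForkResult};

// Hold the logger's write lock across the fork so the child can never inherit
// it in a locked state. fork() is unsafe in recent nix releases.
fn fork_with_logger_lock() -> nix::Result<ForkResult> {
    with_mutable_logger_lock(|| unsafe { fork() })
}
```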
Acquire the lock, fork while holding it, then check in both processes whether the mutex is held:

```rust
use nix::unistd::{fork, ForkResult};
use once_cell::sync::OnceCell;
use std::error::Error;
use std::sync::Mutex;
use std::thread::{sleep, spawn};
use std::time::Duration;

static LOCK: OnceCell<Mutex<()>> = OnceCell::new();

fn with_lock<T>(f: impl FnOnce() -> T) -> T {
    let mutex = LOCK.get_or_init(Default::default);
    let guard = mutex.lock().unwrap();
    let res = f();
    drop(guard);
    res
}

fn main() -> Result<(), Box<dyn Error>> {
    let _bg_thread = spawn(|| loop {
        let mutex = LOCK.get_or_init(Default::default);
        let guard = mutex.lock().unwrap();
        //println!("acquire in bg");
        sleep(Duration::from_millis(50));
        drop(guard);
        //println!("release in bg");
    });
    // give the background thread a chance to start before forking
    sleep(Duration::from_millis(100));

    let res = with_lock(fork)?;
    match res {
        ForkResult::Child => {
            let mutex = LOCK.get().unwrap();
            let held = mutex.try_lock().is_err();
            println!("CHILD mutex held: {}", held);
        }
        ForkResult::Parent { .. } => {
            let mutex = LOCK.get().unwrap();
            let held = mutex.try_lock().is_err();
            println!("PARENT mutex held: {}", held);
        }
    }
    Ok(())
}
```

There's a background thread constantly acquiring and releasing the lock, …
Hmm, it seems like the …
I'm going to close this. I think the theory is right; I just have some app-specific weirdness going on. Thanks for your help, @daboross.
It seems that it may be an issue with RwLock + fork. My snippet above was using a Mutex, so it wasn't hitting the same behavior. https://stackoverflow.com/questions/61976745/why-does-rust-rwlock-behave-unexpectedly-with-fork
It is looking very likely that a RwLock/Mutex-based version of a changeable logger is not going to have defined behavior across a fork. From the pthread_rwlock_unlock man page:

I don't know why it matters that the child is a copy of the parent, but there must be some state in the lock that carries a thread ID, causing it to misbehave after a fork. So as it stands, I don't think it's possible to use these threading primitives to make a fork-friendly logger work. Opening this back up to suggestions.
Troubling. If the problem is with the platform RwLock APIs, what about using a userspace lock implementation? The main one I know of is … I could also imagine an alternative that does things with lower-level APIs, but I don't know exactly what it'd look like. Maybe we could add an additional …
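As one possible shape for that lower-level direction, here is a rough sketch (not an existing fern API; the names and structure are assumptions) of a changeable logger that swaps loggers through an `AtomicPtr` instead of a lock. Atomics carry no ownership state, so a fork can't leave them "held"; the trade-off in this naive version is that every replaced logger is leaked, since another thread might still be logging through it.

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Current logger, stored as a raw pointer to a Box<dyn Log> (double-boxed so
// the pointer is thin). Null means "no logger installed yet".
static CURRENT: AtomicPtr<Box<dyn log::Log>> = AtomicPtr::new(ptr::null_mut());

struct AtomicChangeableLog;

impl log::Log for AtomicChangeableLog {
    fn enabled(&self, metadata: &log::Metadata) -> bool {
        let p = CURRENT.load(Ordering::Acquire);
        // Safety: pointers stored in CURRENT are never freed (see below).
        !p.is_null() && unsafe { (*p).enabled(metadata) }
    }

    fn log(&self, record: &log::Record) {
        let p = CURRENT.load(Ordering::Acquire);
        if !p.is_null() {
            unsafe { (*p).log(record) }
        }
    }

    fn flush(&self) {
        let p = CURRENT.load(Ordering::Acquire);
        if !p.is_null() {
            unsafe { (*p).flush() }
        }
    }
}

fn set_log_atomic(max: log::LevelFilter, logger: Box<dyn log::Log>) {
    let new = Box::into_raw(Box::new(logger));
    // Swap in the new logger. The old one is intentionally leaked: another
    // thread may still be calling into it, and without a lock there is no
    // cheap way to know when freeing it becomes safe.
    let _old = CURRENT.swap(new, Ordering::AcqRel);
    // Fails after the first call, which is fine - the real logger lives in
    // CURRENT, not in the `log` crate's global.
    let _ = log::set_logger(&AtomicChangeableLog);
    log::set_max_level(max);
}
```

Note that this only addresses the outer lock: whatever logger is stored inside may still hold locks of its own for its outputs, so by itself it doesn't make forking fully safe.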
Userspace locking or atomics both sound feasible. I ended up going in a different direction with my forking/logging model, though (still using fern; love it, great work!), so this issue has less relevance to me now. Thank you for your feedback, and feel free to close the issue if it's not something you want to work towards. It's a pretty rare case, I imagine.
Hi, I'm interested in using fern logging in forked subprocesses; however, I don't see a way to uninstall the parent process's global logger and install a child-process logger for the subprocess. What is the recommendation here?