util/log: don't panic #17871
Conversation
Wait, this will also deadlock because the mutex is already held when […]

I think this should be backported to 1.0.6 if we release one.

I think we'll release one, given the recent bugs that we fixed.

@benesch lmk when I can take a look.
Force-pushed from db3909e to edbd714.
@tschottdorf @knz mind taking another look today? Turns out this was more broken than I thought.

Specifically, @petermattis made […]. The fix I've applied here is to instead require that […]
This looks good, but I am a bit sad about introducing a […]

Reviewed 1 of 2 files at r1.

Ha, I had no idea […]

Review status: 1 of 2 files reviewed at latest revision, all discussions resolved, some commit checks failed.
Review status: 1 of 2 files reviewed at latest revision, 3 unresolved discussions, some commit checks failed.

pkg/util/log/clog.go, line 713 at r1 (raw file):
Rather than the […]

pkg/util/log/clog.go, line 803 at r1 (raw file):
This can get run without […]

pkg/util/log/clog.go, line 841 at r1 (raw file):
Now that the lock is required, let's rename this […]
Afraid I can't right now, perhaps drum up someone else?

-- Tobias
Force-pushed from edbd714 to 433969b.
@tschottdorf, ack. @petermattis @knz PTAL.

Review status: 1 of 2 files reviewed at latest revision, 3 unresolved discussions, some commit checks pending.

pkg/util/log/clog.go, line 713 at r1 (raw file):
Previously, petermattis (Peter Mattis) wrote… Done.

pkg/util/log/clog.go, line 803 at r1 (raw file):
Previously, petermattis (Peter Mattis) wrote… It was changed to work with the […]

pkg/util/log/clog.go, line 841 at r1 (raw file):
Previously, petermattis (Peter Mattis) wrote… Done.
Review status: 1 of 2 files reviewed at latest revision, all discussions resolved, some commit checks pending.
Force-pushed from 3e89c81 to f0c4fa4.
Previously, log.outputLogEntry could panic while holding the log mutex.
This would deadlock any goroutine that logged while recovering from the
panic, which is approximately all of the recover routines. Most
annoyingly, the crash reporter would deadlock, swallowing the cause of
the panic.

Avoid panicking while holding the log mutex and use l.exit instead,
which exists for this very purpose. In the process, enforce the
invariant that l.mu is held when l.exit is called. (The previous
behavior was, in fact, incorrect, as l.flushAll should not be called
without holding l.mu.)

Also add a Tcl test to ensure this doesn't break in the future.
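The shape of the fix described in the commit message can be sketched as follows. This is a simplified illustration, not the actual clog.go code: `loggingT`, `exitLocked`, and `outputLogEntry` echo names from the discussion, but the bodies and the replaceable `exitFunc` hook are assumptions made for the example. The key point is that the error path exits via a function that assumes the lock is held, rather than panicking while holding it (a panic under the lock would deadlock any recover routine that tries to log).

```go
package main

import (
	"fmt"
	"sync"
)

// loggingT loosely mirrors the structure under discussion: a
// mutex-guarded logger whose error path must not panic while the
// mutex is held.
type loggingT struct {
	mu       sync.Mutex
	exitFunc func(code int) // replaceable so this example doesn't really exit
}

// exitLocked requires l.mu to be held: flushing internal state needs
// the lock. Exiting here, instead of panicking, means the mutex is
// never held across a panic that a recover routine might observe.
func (l *loggingT) exitLocked(err error) {
	fmt.Println("log: exiting because of error:", err)
	l.exitFunc(2)
}

// outputLogEntry sketches the fixed code path: on an internal error it
// calls exitLocked rather than panic.
func (l *loggingT) outputLogEntry(fail bool) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if fail {
		l.exitLocked(fmt.Errorf("simulated write failure"))
		return
	}
	fmt.Println("log entry written")
}

func main() {
	l := &loggingT{exitFunc: func(code int) {
		fmt.Println("exit requested with code", code)
	}}
	l.outputLogEntry(false)
	l.outputLogEntry(true)
}
```

With the hook stubbed out as above, the failing call prints the error and the requested exit code instead of terminating, which is also how a test could exercise this path without killing the process.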
Force-pushed from f0c4fa4 to c4c99c9.