
pkg/util/log: flush bufferedSinks as part of crash reporter process #101562

Merged
merged 1 commit into cockroachdb:master on Apr 24, 2023

Conversation

abarganier
Contributor

It was brought to our attention that SQL pods in serverless environments experiencing panics were not showing the panic logs in downstream log collection systems like Splunk.

This prompted an investigation into the crash reporter, where we found that it only flushed the file logging sinks, but not the buffered network sinks.

Because of this, when the crash reporter logged the panic details to the OPS channel and the panic log was routed through a fluent-server logging sink, it would simply be buffered. The crash reporter would then flush the file sinks before propagating the panic again to kill the process. Since we didn't trigger & wait for the flush of the buffered network sinks, the panic almost never made it to the downstream fluentbit collector in time. In the case of SQL pods, where log files are not maintained once the container is destroyed, these panic logs would be lost entirely.

This patch updates the log flush system with a new function called FlushAll. Previously, we had Flush, which would flush all the file logging sinks; this has been renamed to FlushFileSinks. Because the pre-existing function is used widely throughout the code base, we intentionally keep the flushing of just the file sinks separate from the flushing of all buffered logging sinks (including network sinks), to avoid changing the semantics of its many existing call sites.

NB: FlushAll is a synchronous operation. It waits for each buffered logging sink to finish flushing before allowing the crash reporter to proceed. This ensures that the buffers are fully drained prior to propagating the panic and killing the process.
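
For illustration, here is a minimal sketch of the intended synchronous semantics, assuming a hypothetical bufferedSink interface (these are not the actual pkg/util/log types):

```go
package log

import "sync"

// bufferedSink is a hypothetical stand-in for a logging sink that
// buffers entries before forwarding them to a target (file or network).
type bufferedSink interface {
	// flush drains the buffer, blocking until the buffered data has
	// been handed off to the underlying target.
	flush()
}

// flushAll synchronously flushes every registered buffered sink. It
// returns only once all sinks have finished flushing, so a caller like
// the crash reporter can safely re-panic and kill the process without
// losing buffered log entries.
func flushAll(sinks []bufferedSink) {
	var wg sync.WaitGroup
	for _, s := range sinks {
		wg.Add(1)
		go func(s bufferedSink) {
			defer wg.Done()
			s.flush()
		}(s)
	}
	wg.Wait() // wait for every buffer to be fully drained
}
```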

Release note: none

Fixes: #101369

@abarganier abarganier requested review from jordanlewis, knz and a team April 14, 2023 18:15
@abarganier abarganier requested review from a team as code owners April 14, 2023 18:15
@abarganier abarganier requested a review from a team April 14, 2023 18:15
@abarganier abarganier requested review from a team, rhu713, rharding6373 and samiskin and removed request for a team April 14, 2023 18:15
@cockroach-teamcity
Member

This change is Reviewable

@abarganier abarganier removed request for a team April 14, 2023 18:15
@abarganier
Contributor Author

FYI The original issue is not marked as a release or GA blocker so I'm reluctant to merge into 23.1.0 branch.

Good point. I've removed the 23.1.0 backport tag.

Contributor Author

@abarganier abarganier left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @dhartunian, @jordanlewis, and @knz)


-- commits line 31 at r2:

Previously, dhartunian (David Hartunian) wrote…

have we considered calling it FlushAllSync for clarity?

Good idea - done.

@abarganier
Contributor Author

TFTR!

bors r=dhartunian

@craig
Contributor

craig bot commented Apr 21, 2023

Build failed (retrying...):

@craig
Contributor

craig bot commented Apr 21, 2023

Build failed (retrying...):

@craig
Contributor

craig bot commented Apr 21, 2023

Canceled.

@abarganier
Contributor Author

bors r=dhartunian

@craig
Contributor

craig bot commented Apr 24, 2023

Build succeeded:

@craig craig bot merged commit 315e1a3 into cockroachdb:master Apr 24, 2023
@blathers-crl

blathers-crl bot commented Apr 24, 2023

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error creating merge commit from 69a6a8f to blathers/backport-release-22.2-101562: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

you may need to manually resolve merge conflicts with the backport tool.

Backport to branch 22.2.x failed. See errors above.


error creating merge commit from 69a6a8f to blathers/backport-release-23.1-101562: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

you may need to manually resolve merge conflicts with the backport tool.

Backport to branch 23.1.x failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

abarganier added a commit to abarganier/cockroach that referenced this pull request May 4, 2023
Ref: cockroachdb#101562

pkg/util/log: unify explicit flush of network and file sinks,
both for files *and* network sinks, such as fluent-servers.

I received some feedback that we shouldn't divide these
flush operations based on sink type, and instead we should
unify the flush operation to flush both (as the crash reporter
already does).

The existing function to flush just the file sinks is
quite widely used. Introducing flushing of network sinks
raises the question, "What if this adds to the runtime of
code using this explicit flush mechanism?"

Well, the existing FlushFileSinks calls fsync() [1] with
F_FULLFSYNC [2]. IIUC, this means that it already is a
blocking operation as the fsync() call will wait for the
buffered data to be written to permanent storage (not just
the disk cache). Given that, I think any caller of this
function already assumes it's a blocking operation and
therefore should be tolerant of that.

[1]: https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/fsync.2.html
[2]: https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/fcntl.2.html#//apple_ref/doc/man/2/fcntl

Nonetheless, we implement a timeout mechanism for the
flushing of the buffered network sinks as an added
protection.
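
To illustrate that protection, a per-sink flush can be bounded
by a deadline roughly as follows (a sketch with hypothetical
names, not the code merged here):

```go
package log

import (
	"errors"
	"time"
)

// errFlushTimeout is returned when a sink fails to flush in time.
var errFlushTimeout = errors.New("log: timed out waiting for sink flush")

// flushWithTimeout runs flush in a goroutine and gives up after d, so
// a hung or unavailable network sink cannot block process shutdown
// indefinitely. Note the goroutine may still be running after timeout.
func flushWithTimeout(flush func(), d time.Duration) error {
	done := make(chan struct{})
	go func() {
		flush()
		close(done)
	}()
	select {
	case <-done:
		return nil
	case <-time.After(d):
		return errFlushTimeout
	}
}
```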
abarganier added a commit to abarganier/cockroach that referenced this pull request May 10, 2023
abarganier added a commit to abarganier/cockroach that referenced this pull request May 15, 2023
craig bot pushed a commit that referenced this pull request May 15, 2023
102772: pkg/util/log: unify explicit flush of network and file sinks r=knz a=abarganier

Ref: #101562

Fixes: #101369

102785: codeowners, roachtest: merge sql-schema, sql-sessions to sql-foundations r=celiala a=celiala

Reflecting the merge of sql-schema and sql-sessions into sql-foundations in:

- [x] github CODEOWNERS
- [x] pkg/cmd/roachtest owners
- [x] TEAMS.yaml

Tests should pass once the new team is created via https://github.com/cockroachlabs/crl-infrastructure/pull/775

This is a task for Part 2, below.
____

### Tasks to update: GitHub Team name + GitHub Projects + Blathers

Part 1
- [x] Create GitHub T- label: https://github.com/cockroachdb/cockroach/labels/T-sql-foundations
- [ ] Create new sql-foundations GitHub team name: https://github.com/cockroachlabs/crl-infrastructure/pull/775

Part 2

- [ ] Update Blathers: https://github.com/cockroachlabs/blathers-bot/pull/123
- [ ] Update CODEOWNERS, roachtest, TEAMS.yaml:
  - [ ] master: #102785
  - [ ] release-23.1: #102924 
  - [ ] release-22.2: #102925
- [ ] [Manual Step] To be done at same time as merging Part 2 PRs:
  - [ ] Manually rename "SQL Schema" Project to "SQL Foundations": https://github.com/cockroachdb/cockroach/projects/47

Part 3

- [ ] Remove old GitHub team names: https://github.com/cockroachlabs/crl-infrastructure/pull/776

____


Partial work for DEVINFHD-867
Release note: None
Epic: DEVINFHD-867


103118: jwtauthccl: allow cluster SSO to pick principals from custom claims r=kpatron-cockroachlabs a=kpatron-cockroachlabs

Previously, cluster SSO (JWT authentication) required user principals to be in the subject field of the JWT, which the JWT specification requires to be a single string. A customer wants the set of SQL users a human is allowed to log in as to be specified by their IdP via a "groups" field in the JWT.

To accommodate this, a new cluster setting has been introduced (server.jwt_authentication.claim), which, when populated, specifies which claim of the JWT to read as containing the principal. This value can be either a string or an array of strings. The resulting principal(s) are then fed through the identity map as before to produce the set of valid SQL users that this token can log in as. As before, humans can specify which SQL user they wish to log in with via the username field, as long as it matches one of the values the token is a valid authentication method for.

Fixes: CC-24595

Release note (enterprise change): Cluster SSO (JWT authentication) can now read SQL usernames from any JWT claims instead of requiring the subject claim to be used.
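
The string-or-array claim handling described above could look roughly like this (a hypothetical sketch, not the actual jwtauthccl code):

```go
package jwtauthccl

import "fmt"

// principalsFromClaim normalizes a JWT claim value, which may be a
// single string or an array of strings, into a list of principals to
// feed through the identity map.
func principalsFromClaim(claim interface{}) ([]string, error) {
	switch v := claim.(type) {
	case string:
		return []string{v}, nil
	case []interface{}:
		out := make([]string, 0, len(v))
		for _, e := range v {
			s, ok := e.(string)
			if !ok {
				return nil, fmt.Errorf("claim element %v is not a string", e)
			}
			out = append(out, s)
		}
		return out, nil
	default:
		return nil, fmt.Errorf("unsupported claim type %T", claim)
	}
}
```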

103240: ui: no need to refresh page after error r=maryliag a=maryliag

Previously, if one page crashed on DB Console, all other pages would show the same error message, and the user had to force a browser refresh to see the other pages. Now only the broken page shows the error, and all the other pages load as expected. The user still needs to force a refresh on the broken page to retry it.

Fixes #97533

https://www.loom.com/share/56a6d811d9604b7abe673c1430ee605e

Release note (ui change): If a page crashed, a force refresh is no longer required to be able to see the other pages on DB Console.

103301: backupccl: disable restore data processor memory monitoring by default r=rhu713 a=rhu713

Temporarily disable memory monitoring for the restore data processor due to current logic not handling deletes correctly.

Epic: none

Co-authored-by: Alex Barganier <abarganier@cockroachlabs.com>
Co-authored-by: Celia La <celia@cockroachlabs.com>
Co-authored-by: Kyle Patron <kyle@cockroachlabs.com>
Co-authored-by: maryliag <marylia@cockroachlabs.com>
Co-authored-by: Rui Hu <rui@cockroachlabs.com>
abarganier added a commit to abarganier/cockroach that referenced this pull request May 15, 2023
craig bot pushed a commit that referenced this pull request Aug 24, 2023
108928: pkg/util/log: alter bufferedSink to handle writes during sync flush r=knz,Santamaura a=abarganier

Previously, if the bufferedSink had a synchronous flush scheduled, and an additional write (via the `output()` function) was sent to the bufferedSink, the bufferedSink would panic.

After some investigation & analysis of the code, this approach was found to be unnecessary. We can gracefully handle this scenario without panicking. Instead, we can buffer the message to be included in the upcoming flush.

In this scenario, if an additional forceSync output() call is sent to the bufferedSink while one is already scheduled, we cannot make the call synchronous. Instead, we buffer the message into the imminent flush and return.

Because of this, we change the name of the forceSync option to tryForceSync, to indicate that it's best-effort and not an ironclad guarantee.

Release note: none

Fixes: #106345

NB: A follow-up PR will reintroduce the flush trigger into the crash reporter / process shutdown procedure (similar to #101562, which was reverted). This PR focuses on the bufferedSink changes themselves, to keep the discussion scoped.
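
A simplified sketch of the new behavior (field and method names are illustrative, not the actual bufferedSink implementation):

```go
package log

import "sync"

// bufferedSink buffers log messages and periodically flushes them to
// a child sink (e.g. a fluent-server connection).
type bufferedSink struct {
	mu struct {
		sync.Mutex
		buf            [][]byte // messages awaiting the next flush
		flushScheduled bool     // a synchronous flush is already pending
	}
}

// output buffers msg. With tryForceSync, it requests a synchronous
// flush, but if a flush is already scheduled the message simply rides
// along with that imminent flush instead of panicking (the old
// behavior), which is why the option is best-effort.
func (b *bufferedSink) output(msg []byte, tryForceSync bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.mu.buf = append(b.mu.buf, msg)
	if tryForceSync && !b.mu.flushScheduled {
		b.mu.flushScheduled = true
		// ...signal the flusher goroutine to flush immediately...
	}
}
```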

Co-authored-by: Alex Barganier <abarganier@cockroachlabs.com>
craig bot pushed a commit that referenced this pull request Aug 28, 2023
109186: pkg/util/log: flush buffered network sinks on panic r=knz a=abarganier

Previously, our crash reporter system would flush file log sinks
as part of the process to handle a panic.

This was an incomplete process, since buffered network sinks were
not included in the flush. As a result, panic logs often did not
make it to the network target, leading to a loss in observability.

This patch introduces `log.FlushAllSync()`, which flushes both file
and buffered network log sinks. It then updates the crash reporter
to call into this, instead of just flushing file log sinks.

`FlushAllSync()` contains timeout logic to prevent the flush from
hanging indefinitely if one of the underlying child sinks that a
bufferedSink wraps becomes unavailable/hangs on its `output()` call.

We originally attempted to fix this in #101562, but a bug in the 
bufferedSink code led us to roll back those changes. The bug in the 
bufferedSink code has since been fixed (#108928), so we can safely 
introduce this logic again.

Release note: none

Fixes: #106345

109578: rpc: increase gRPC server timeout from 1x to 2x NetworkTimeout r=andrewbaptist a=erikgrinaker

This is intended as a conservative backport that changes as little as possible. For 23.2, we should restructure these settings a bit, possibly by removing NetworkTimeout and using independent timeouts for each component/parameter, since they have unique considerations (e.g. whether they are enforced above the Go runtime or by the OS, to what extent they are subject to RPC head-of-line blocking, etc).

---

This patch increases the gRPC server timeout from 1x to 2x NetworkTimeout. This timeout determines how long the server will wait for a TCP send to receive a TCP ack before automatically closing the connection. gRPC enforces this via the OS TCP stack by setting TCP_USER_TIMEOUT on the network socket.

While NetworkTimeout should be sufficient here, we have seen instances where this is affected by node load or other factors, so we set it to 2x NetworkTimeout to avoid spurious closed connections. An aggressive timeout is not particularly beneficial here, because the client-side timeout (in our case the CRDB RPC heartbeat) is what matters for recovery time following network or node outages -- the server side doesn't really care if the connection remains open for a bit longer.

Touches #109317.

Epic: none
Release note (ops change): The default gRPC server-side send timeout has been increased from 2 seconds to 4 seconds (1x to 2x of COCKROACH_NETWORK_TIMEOUT), to avoid spurious connection failures in certain scenarios. This can be controlled via the new environment variable COCKROACH_RPC_SERVER_TIMEOUT.
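
For reference, a server-side gRPC send timeout of this kind is configured through keepalive parameters, which grpc-go also uses to set TCP_USER_TIMEOUT on the socket; a generic sketch (the actual CockroachDB wiring differs):

```go
package rpc

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// newServer returns a gRPC server whose send timeout is 2x the given
// network timeout, to avoid spuriously closing connections under load.
func newServer(networkTimeout time.Duration) *grpc.Server {
	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		// Time: how long a connection may be idle before the server
		// sends a keepalive ping.
		Time: networkTimeout,
		// Timeout: how long the server waits for an ack of the ping
		// before closing the connection.
		Timeout: 2 * networkTimeout,
	}))
}
```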

109610: kv: remove assertions around non-txn'al locking reqs r=nvanbenschoten a=nvanbenschoten

Closes #107860.
Closes #109222.
Closes #109581.
Closes #109582.

We might want to re-introduce these assertions in the future and reject these requests higher up the stack. For now, just remove them to deflake tests.

Release note: None

Co-authored-by: Alex Barganier <abarganier@cockroachlabs.com>
Co-authored-by: Erik Grinaker <grinaker@cockroachlabs.com>
Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
Labels
backport-23.1.x Flags PRs that need to be backported to 23.1
Development

Successfully merging this pull request may close these issues.

pkg/util/log: route panics through network sinks for SQL pods
5 participants