Page cache memory usage regression with gin-gonic after 0.21->0.22 upgrade #673
Comments
This is likely caused by https://github.com/getsentry/sentry-go/blob/master/gin/sentrygin.go#L73, which was added in #644. We'll take a look!
@zetaab Did you notice similar issues, assuming you're using gin performance tracing in one of your apps now?
Our apps are internal small applications, and I have not noticed any issues like this. We are also not using any file uploads.
I am thinking that in the case of file uploads it will try to read the body, and the body should be closed after the read. Otherwise it will hold on to the memory, and the behaviour is as described in the first post. Now someone just needs to find the correct place to add the close.
I've managed to replicate this behaviour in 0.22 by creating a small example which uploads a file. Indeed, the multipart temporary files are not cleaned up. When I try the same example with 0.21, the temporary files are cleaned up. After a bunch of digging I've realised that this is not an issue caused by the Sentry SDK. It's really an issue in the way multipart cleanup interacts with middleware that passes a shallow copy of the request along. I can also replicate the issue like this:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	fmt.Println("OS temp dir path:", os.TempDir())
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// A maxMemory of 0 forces uploaded file parts onto disk as temp files.
		r.ParseMultipartForm(0)
		file, _, err := r.FormFile("file")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer file.Close()
	})
	http.ListenAndServe("127.0.0.1:3000", testMiddleware(http.DefaultServeMux))
}

func testMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := r.Context()
		r = r.WithContext(ctx)
		next.ServeHTTP(w, r)
	})
}
```

The primary culprit here is the `r = r.WithContext(ctx)` line in the middleware. This creates a shallow copy of the request that is passed on to the next handler (since the docs say not to modify the original request). If a handler further down the chain then parses a multipart form, the result ends up on that shallow copy, not on the original request. Cleanup of the temp files happens when the request finishes, but it only runs against the original request, and since the original has no multipart result attached, no cleanup happens. Version 0.22 added starting a transaction for tracing to the Gin integration, and that has to modify the request to inject the span. So sadly the solution here is to always make sure you clean up yourself by calling the `RemoveAll` method on the form. In the above example we'd defer:
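Something like this (a minimal sketch based on the example handler above; the nil check is defensive and not from the original comment):

```go
defer func() {
	if r.MultipartForm != nil {
		// RemoveAll deletes the temporary files backing the parsed form.
		_ = r.MultipartForm.RemoveAll()
	}
}()
```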
For reference see:
@cleptric I think we can close this issue; there isn't much we can do from our side.
I don't think this is true. The bug report is about page cache -- not the temp files. We already call
@chawco thanks for the response. :) I will try to reproduce more and see if I can spot any file handle or reference leaks.
I've been trying to check for any leaks using strace on Ubuntu. So far I can't seem to see any indication that files are being leaked. Here's the strace output when uploading a file for v0.21:
And here's v0.22:
It doesn't seem like anything has changed, and all the opens seem to have matching closes.
From the Docker runtime docs:
And
So temporary files created by multipart parsing can certainly push that metric up (and if they weren't cleaned up, it would remain high). Was your container OOMing from this? What was happening with
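As an aside, here is a minimal standalone sketch (hypothetical, not taken from this thread) of why leaked temp files show up under that metric: bytes written to a file sit in the kernel page cache, and deleting the file is what lets those pages be dropped right away instead of only under memory pressure.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Create a temp file, much like multipart parsing does for large parts.
	f, err := os.CreateTemp("", "pagecache-demo-*")
	if err != nil {
		panic(err)
	}

	// Write ~100 MB; these pages are accounted to the page cache
	// (container_memory_cache inside a container).
	buf := make([]byte, 1<<20)
	for i := 0; i < 100; i++ {
		if _, err := f.Write(buf); err != nil {
			panic(err)
		}
	}
	f.Close()
	fmt.Println("wrote to:", f.Name())

	// If the file is never removed, its cached pages linger and the metric
	// stays high; removing it releases them.
	if err := os.Remove(f.Name()); err != nil {
		panic(err)
	}
}
```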
I think we've found a place where the multipart form parsing was being done but wasn't obvious, because it was being done indirectly. It may be good to mention in the docs, however: if someone wasn't cleaning it up previously, they were just being left with some temp files on disk, not leaking memory. It would be pretty unexpected for a minor Sentry version bump to cause a memory leak due to a pretty benign pre-existing bug.
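For illustration only (the thread doesn't say which call was doing the implicit parsing), here is a minimal sketch of a gin handler where `c.FormFile` parses the multipart form behind the scenes, together with the explicit cleanup discussed above; the route and field names are made up:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.POST("/upload", func(c *gin.Context) {
		// c.FormFile parses the multipart form implicitly, which makes the
		// parsing easy to miss when auditing for temp-file cleanup.
		fh, err := c.FormFile("file")
		if err != nil {
			c.String(http.StatusBadRequest, "upload error: %v", err)
			return
		}
		// Explicitly remove the temporary files backing the parsed form.
		if c.Request.MultipartForm != nil {
			defer c.Request.MultipartForm.RemoveAll()
		}
		c.String(http.StatusOK, "received %s", fh.Filename)
	})
	r.Run("127.0.0.1:3000")
}
```

Deferring `RemoveAll` right where the form is known to exist ties the temp-file lifetime to the handler itself, rather than relying on net/http's cleanup of the original request.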
@chawco good point, we'll definitely make a note of this gotcha in the docs.
Add a section on avoiding temporary file leaks when using Go SDK middleware with file upload handlers. See this issue: getsentry/sentry-go#673
Summary
After the go-sentry 0.21.0 to 0.22.0 upgrade (verified by downgrading) we found that our small gin-gonic-based service is "leaking" page cache memory.
Steps To Reproduce
I don't have an easy way to provide a code snippet; let me know if this is enough to point in the direction of the problem. If not, I will try to come up with more details.
We have a gin handler that accepts file uploads (POST), reads the uploaded files from `c.Request.MultipartForm.File`, and processes each file (unzips to see what's inside).

After the go-sentry 0.22 upgrade (verified by reverting back to 0.21 with no other changes) we noticed that the "page cache" metric (`container_memory_cache` in Kubernetes) keeps going up (into several gigabytes) as POST requests are processed. Before the upgrade, the usage would spike for a moment for each request and then immediately go back to almost nothing.

Our theory is that one of the new gin features added in 0.22 is somehow intercepting the request lifecycle in a way that changes what happens with the temporary file(s) gin creates to handle the large POST body.
Expected Behavior
Before the upgrade, the size of the page cache did not increase.
Screenshots
Sentry.io Event
Environment
SDK
sentry-go version: 1.22.0
Sentry
Additional context