cmd/compile: pgo can dramatically increase goroutine stack size #65532
Thanks for the detailed report! I know you are aware, but just to be clear to others skimming the issue: this stack size estimate tool will likely underestimate the actual stack allocation size.

We are seeing the goroutines sitting parked in … Inlining three copies of this function is a particularly bad case, because without inlining you can't call …

I agree with you that frame size does seem reasonable to consider when making inlining decisions. We probably want to avoid making frames too large. This is potentially in scope for #61502, or a follow-up to that (cc @mdempsky @thanm). I don't think this would really be PGO-specific, though PGO inlining may make us more likely to hit general frame size thresholds. I'm also not sure how good of a sense of final frame size we have during inlining; this is pretty early in compilation, even before we do escape analysis.

At the other end of the spectrum, I wonder if we could more aggressively reuse stack slots for large objects. Since …

cc @golang/compiler @cherrymui @aclements
As part of my work on inlining heuristics I considered adding a heuristic that would bias against inlining functions with large stack frames, but didn't get as far as implementing it. It is a tricky thing to get right, since the inliner runs at an early stage in the compiler, which makes it hard to compute an accurate stack size estimate.

I think a better long-term solution here is something like #62737, e.g. do the inlines but then arrange for the three inlined blobs to share the same instance of the array in question. This can be done in the back end (e.g. late, during stack frame layout), or we can change the inliner itself to reuse temps (this is something that the LLVM inliner does, IIRC).
Yup, it's a rough estimate. As discussed during the last diagnostic sync, I'll try to file my proposal for a runtime implementation of this profile type, which would allow us to overcome some of the estimation issues.
That's probably a good solution in most cases, including this one. There will still be edge cases, e.g. A inlining B (which has a big frame) and then calling C in a for loop forever; see the sketch below. But maybe those edge cases are rare enough not to be a problem in practice.
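A minimal sketch of that edge case (the function names A, B, and C follow the comment above; the buffer size and loop bound are made up):

```go
package main

var sink [8192]byte

// B has a large stack frame. If B is inlined into A, A's frame grows by
// the size of big, even though B's own frame would otherwise be popped.
func B() {
	var big [8192]byte
	copy(sink[:], big[:])
}

func C() {}

// A pays for B's inlined frame for its whole lifetime: the enlarged
// frame stays allocated while A spins in the loop calling C.
func A(iters int) {
	B()
	for i := 0; i < iters; i++ {
		C()
	}
}

func main() { A(3) }
```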
Some related progress might be happening in https://go.dev/cl/553055.
Change https://go.dev/cl/566177 mentions this issue:
Change https://go.dev/cl/553055 mentions this issue:
Introduce a helper type "Intervals" that contains sets of sorted disjoint ranges corresponding to live ranges within a function. Example: the Intervals set "{ [0,1), [4,10) }" would indicate that something is live starting at instruction 0, then up to but not including instruction 1, then dead from 1-3, then live again at instruction 4 up to (but not including) instruction 10.

This patch provides APIs for constructing interval sets, testing to see whether two sets overlap, and unioning/merging together two interval sets.

Updates #62737.
Updates #65532.
Updates #65495.

Change-Id: I7140a5989eba93bf3b8762d9224261f5eba0646d
Reviewed-on: https://go-review.googlesource.com/c/go/+/566177
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
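To make the half-open-range semantics concrete, here is a standalone sketch of such an interval set with an overlap test. This is illustrative only; the type shape and method name are assumptions, not the actual cmd/compile/internal/liveness API:

```go
package main

import "fmt"

// Interval is a half-open live range [st, en).
type Interval struct{ st, en int }

// Intervals is a sorted list of disjoint half-open ranges.
type Intervals []Interval

// Overlaps reports whether any range in is intersects any range in is2,
// walking both sorted lists in a single merge-style pass.
func (is Intervals) Overlaps(is2 Intervals) bool {
	i, j := 0, 0
	for i < len(is) && j < len(is2) {
		a, b := is[i], is2[j]
		if a.st < b.en && b.st < a.en {
			return true
		}
		if a.en <= b.st {
			i++
		} else {
			j++
		}
	}
	return false
}

func main() {
	live := Intervals{{0, 1}, {4, 10}} // live at 0, dead 1-3, live 4-9
	other := Intervals{{1, 4}}         // live only during the gap
	fmt.Println(live.Overlaps(other))  // false: the two could share a slot
}
```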
Preliminary compiler support for merging/overlapping stack slots of local variables whose access patterns are disjoint.

This patch includes changes in AllocFrame to do the actual merging/overlapping based on information returned from a new liveness.MergeLocals helper. The MergeLocals helper identifies candidates by looking for sets of AUTO variables that either A) have the same size and GC shape (if the types contain pointers), or B) have the same size (but potentially different types, as long as those types have no pointers). Variables must be greater than (3*types.PtrSize) in size to be considered for merging.

After forming candidates, MergeLocals collects variables into "can be overlapped" equivalence classes or partitions; this process is driven by an additional liveness analysis pass. Ideally it would be nice to move the existing stackmap liveness pass up before AllocFrame and "widen" it to include merge candidates, so that we could do just a single liveness pass as opposed to two, however this may be difficult given that the merge-locals liveness has to take into account writes corresponding to dead stores.

This patch also required a change to the way ssa.OpVarDef pseudo-ops are generated; prior to this point they would only be created for variables whose type included pointers. If stack slot merging is enabled, the ssagen code creates OpVarDef ops for all auto vars that are merge candidates.

Note that some temporaries created late in the compilation process (e.g. during the ssa backend) are difficult to reason about, especially in cases where we take the address of a temp and pass it to the runtime. For the time being we mark most of the vars created post-ssagen as "not a merge candidate".

Stack slot merging for locals/autos is enabled by default if "-N" is not in effect, and can be disabled via "-gcflags=-d=mergelocals=0".

Fixmes/todos/restrictions:
- try lowering size restrictions
- re-evaluate the various skips that happen in SSA-created autotmps

Fixes #62737.
Updates #65532.
Updates #65495.

Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest
Change-Id: Ibc22e8a76c87e47bc9fafe4959804d9ea923623d
Reviewed-on: https://go-review.googlesource.com/c/go/+/553055
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
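For intuition, here is the kind of source pattern this optimization targets. This is a hand-written illustration, assuming both buffers clear the 3*PtrSize threshold; whether the compiler actually merges them depends on the liveness analysis described above:

```go
package main

// With stack slot merging, bufA and bufB are never live at the same
// time, so the compiler may place both in one 4 KiB slot instead of
// reserving two slots in process's frame.
func process(x, y []byte) int {
	n := 0
	{
		var bufA [4096]byte // live only within this block
		n += copy(bufA[:], x)
	}
	{
		var bufB [4096]byte // lifetime disjoint from bufA's
		n += copy(bufB[:], y)
	}
	return n
}

func main() {
	println(process(make([]byte, 10), make([]byte, 20)))
}
```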
Change https://go.dev/cl/575415 mentions this issue:
Change https://go.dev/cl/576680 mentions this issue:
Change https://go.dev/cl/576735 mentions this issue:
Change https://go.dev/cl/577615 mentions this issue:
Change https://go.dev/cl/576681 mentions this issue:
[This is a partial roll-forward of CL 553055; the main change here is that the stack slot overlap operation is flagged off by default (it can be enabled by hand with -gcflags=-d=mergelocals=1).]

Preliminary compiler support for merging/overlapping stack slots of local variables whose access patterns are disjoint.

This patch includes changes in AllocFrame to do the actual merging/overlapping based on information returned from a new liveness.MergeLocals helper. The MergeLocals helper identifies candidates by looking for sets of AUTO variables that either A) have the same size and GC shape (if the types contain pointers), or B) have the same size (but potentially different types, as long as those types have no pointers). Variables must be greater than (3*types.PtrSize) in size to be considered for merging.

After forming candidates, MergeLocals collects variables into "can be overlapped" equivalence classes or partitions; this process is driven by an additional liveness analysis pass. Ideally it would be nice to move the existing stackmap liveness pass up before AllocFrame and "widen" it to include merge candidates, so that we could do just a single liveness pass as opposed to two, however this may be difficult given that the merge-locals liveness has to take into account writes corresponding to dead stores.

This patch also required a change to the way ssa.OpVarDef pseudo-ops are generated; prior to this point they would only be created for variables whose type included pointers. If stack slot merging is enabled, the ssagen code creates OpVarDef ops for all auto vars that are merge candidates.

Note that some temporaries created late in the compilation process (e.g. during the ssa backend) are difficult to reason about, especially in cases where we take the address of a temp and pass it to the runtime. For the time being we mark most of the vars created post-ssagen as "not a merge candidate".

Stack slot merging for locals/autos is enabled by default if "-N" is not in effect, and can be disabled via "-gcflags=-d=mergelocals=0".

Fixmes/todos/restrictions:
- try lowering size restrictions
- re-evaluate the various skips that happen in SSA-created autotmps

Updates #62737.
Updates #65532.
Updates #65495.

Change-Id: Ifda26bc48cde5667de245c8a9671b3f0a30bb45d
Reviewed-on: https://go-review.googlesource.com/c/go/+/575415
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
For -gcflags=-d=mergelocalstrace=1 (which reports estimated savings from stack slot merging), emit separate values for pointerful vs non-pointerful variables, for a bit more detail.

Updates #62737.
Updates #65532.
Updates #65495.

Change-Id: I9dd27d2a254036448c85c13d189d1ed36157c9d7
Reviewed-on: https://go-review.googlesource.com/c/go/+/576680
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
…didates

It is possible to have situations where a given ir.Name is non-address-taken at the source level, but whose address is materialized in order to accommodate the needs of arch-dependent memory ops. The issue here is that the SymAddr op will show up as touching a variable of interest, but the subsequent memory op will not. This is generally not an issue for computing whether something is live across a call, but it is problematic for collecting the more fine-grained live interval info that drives stack slot merging.

As an example, consider this Go code:

    package p

    type T struct {
        x [10]int
        f float64
    }

    func ABC(i, j int) int {
        var t T
        t.x[i&3] = j
        return t.x[j&3]
    }

On amd64 the code sequences we'll see for accesses to "t" might look like:

    v10 = VarDef <mem> {t} v1
    v5 = MOVOstoreconst <mem> {t} [val=0,off=0] v2 v10
    v23 = LEAQ <*T> {t} [8] v2 : DI
    v12 = DUFFZERO <mem> [80] v23 v5
    v14 = ANDQconst <int> [3] v7 : AX
    v19 = MOVQstoreidx8 <mem> {t} v2 v14 v8 v12
    v22 = ANDQconst <int> [3] v8 : BX
    v24 = MOVQloadidx8 <int> {t} v2 v22 v19 : AX
    v25 = MakeResult <int,mem> v24 v19 : <>

Note that the loads and stores (ex: v19, v24) all refer directly to "t", which means that regular live analysis will work fine for identifying variable lifetimes. The DUFFZERO is (in effect) an indirect write, but since there are accesses immediately after it, we wind up with the same live intervals.

Now the same code with GOARCH=ppc64:

    v10 = VarDef <mem> {t} v1
    v20 = MOVDaddr <*T> {t} v2 : R20
    v12 = LoweredZero <mem> [88] v20 v10
    v3 = CLRLSLDI <int> [212543] v7 : R5
    v15 = MOVDaddr <*T> {t} v2 : R6
    v19 = MOVDstoreidx <mem> v15 v3 v8 v12
    v29 = CLRLSLDI <int> [212543] v8 : R4
    v24 = MOVDloadidx <int> v15 v29 v19 : R3
    v25 = MakeResult <int,mem> v24 v19 : <>

Here, instead of memory ops that refer directly to the symbol, we take the address of "t" (ex: v15) and then pass the address to memory ops (where the ops themselves no longer refer to the symbol).

This patch enhances the stack slot merging liveness analysis to handle cases like the PPC64 one above. We add a new phase in candidate selection that collects more precise use information for merge candidates, and screens out candidates that are too difficult to analyze. The phase makes a forward pass over each basic block, looking for instructions of the form vK := SymAddr(N), where N is a raw candidate. It then creates an entry in a map with key vK and a value holding the name and the vK use count. As the walk continues, we check for uses of vK: when we see one, record it in a side table as an upwards-exposed use of N. At each vK use we also decrement the use count in the map entry, and if we hit zero, remove the map entry. If we hit the end of the basic block and still have map entries, this implies that the address in question "escapes" the block -- at that point, to be conservative, we just evict the name in question from the candidate set.

Although this CL fixes the issues that forced a revert of the original merging CL, this CL doesn't enable stack slot merging by default; a subsequent CL will do that.

Updates #62737.
Updates #65532.
Updates #65495.

Change-Id: Id41d359a677767a8e7ac1e962ae23f7becb4031f
Reviewed-on: https://go-review.googlesource.com/c/go/+/576735
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
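A schematic of that forward scan as a standalone sketch. The value representation, field names, and helper types here are invented for illustration and do not match the real cmd/compile data structures:

```go
package main

// value is a toy stand-in for an SSA value: an ID, an opcode, the name
// referenced by a SymAddr, argument IDs, and the value's total use count.
type value struct {
	id       int
	op       string
	name     string
	args     []int
	useCount int
}

// scanBlock evicts merge candidates whose materialized address is not
// fully consumed within the block, mirroring the conservative rule in
// the commit message above.
func scanBlock(values []value, candidates map[string]bool) {
	type addrUse struct {
		name string
		uses int
	}
	pending := map[int]addrUse{} // SymAddr value ID -> tracked address
	for _, v := range values {
		for _, arg := range v.args {
			if au, ok := pending[arg]; ok {
				au.uses-- // one more in-block use of the address
				if au.uses == 0 {
					delete(pending, arg) // address fully consumed
				} else {
					pending[arg] = au
				}
			}
		}
		if v.op == "SymAddr" && candidates[v.name] {
			pending[v.id] = addrUse{name: v.name, uses: v.useCount}
		}
	}
	// Any address still pending "escapes" the block: evict its name.
	for _, au := range pending {
		delete(candidates, au.name)
	}
}

func main() {
	cands := map[string]bool{"t": true}
	block := []value{
		{id: 15, op: "SymAddr", name: "t", useCount: 2},
		{id: 19, op: "Store", args: []int{15}},
		{id: 24, op: "Load", args: []int{15}},
	}
	scanBlock(block, cands)
	println(cands["t"]) // true: both uses of v15 stayed inside the block
}
```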
Flag flip to enable stack slot merging by default when optimizing. Please see the earlier CL for details on what this is doing.

Updates #62737.
Updates #65532.
Updates #65495.

Change-Id: I8e30d553e74ace43d418f883199721f05320d3d7
Reviewed-on: https://go-review.googlesource.com/c/go/+/576681
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
@thanm yeah, what's the best way to get you a copy? It's from our prod environment, so I'd prefer not to post it in public. I'm on Gopher Slack, or I could e-mail you the file.

Also: how are you planning to test this? By building the gRPC library with the PGO profile and checking the generated machine code?
Thanks, email is probably easiest (thanm @ google).
Yes.
An update on this issue: @felixge provided me with the PGO profile off-line and I was able to easily reproduce the problematic set of inlines. Unfortunately, it looks as though the CLs I've been working on to do stack slot merging for disjointly accessed locals (e.g. 577615 and related predecessors) are not going to help here. The reason is that the variable in question, "localBuf" (defined here), is considered address-taken by the compiler. IR excerpt:
This corresponds to the code that assigned a slice of localBuf.

I can think of a couple of ways to potentially attack this problem. The first would be to introduce additional lifetime markers, similar to what's used in the LLVM back end (code). At the moment the Go compiler uses just a single lifetime-start marker (e.g. ssa.OpVarDef).

A second possibility would be to teach the inliner to reuse variables when it actually carries out the inlining, e.g. keep track of additional locals introduced by inlining within a given function, and then at the point where we inline the body of B into A, check whether there is an existing var (from a previous inline) that we can reuse. We could apply this sort of optimization even for address-taken variables, which would (in theory) make it applicable in this case; see the sketch below. I'll look into this some more and see whether I can come up with something viable.
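A source-level sketch of that second idea. This is hypothetical: the real transformation would operate on compiler IR during inlining, not on Go source, and all names below are made up:

```go
package main

// Before reuse: three inlined copies of a callee each contribute their
// own 16 KiB buffer to the caller's frame (~48 KiB total).
func runSeparate(a, b, c []byte) {
	var buf1, buf2, buf3 [16384]byte
	copy(buf1[:], a)
	copy(buf2[:], b)
	copy(buf3[:], c)
}

// After reuse: the inliner notices that the temp introduced by an
// earlier inline can serve later inlined bodies too, so the caller's
// frame holds a single buffer.
func runShared(a, b, c []byte) {
	var buf [16384]byte // shared across all three inlined bodies
	copy(buf[:], a)
	copy(buf[:], b)
	copy(buf[:], c)
}

func main() {
	data := make([]byte, 64)
	runSeparate(data, data, data)
	runShared(data, data, data)
}
```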
This patch revises the algorithm/strategy used for overlapping the stack slots of disjointly accessed local variables. The main change here is to allow merging the stack slot of B into the slot for A if B's size is less than A's (prior to this they had to be identical), and to also allow merging non-pointer variables into pointer-variable slots.

The new algorithm sorts the candidate list first by pointerness (pointer variables first), then by alignment, then by size, and finally by name. We no longer check that two variables have the same GC shape before merging: since it should never be the case that we have two vars X and Y both live across a given callsite where X and Y share a stack slot, their GC shape doesn't matter.

Doing things this new way increases the total number of bytes saved (across all functions) from 91256 to 124336 for the sweet benchmarks.

Updates #62737.
Updates #65532.
Updates #65495.

Change-Id: I1daaac1b1240aa47a6975e98ccd24e03304ab602
Reviewed-on: https://go-review.googlesource.com/c/go/+/577615
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
We used to have a lifetime-ending marker, VarKill, but it has since been removed.
Address-taken local variables are tricky. If a variable is address-taken, and the address escapes the inlined function scope, we can't mark its lifetime end. Currently we do the escape analysis after inlining, so at the inlining stage we don't have the escape information. Perhaps (long term?) we could do a pre-pass of escape analysis before inlining, then at inlining add the lifetime markers (or reuse locals directly) if the variable does not escape the inline function scope.
Maybe I am missing something, but if an address value can flow out from a function, wouldn't the var in question be marked as AUTOHEAP or escaping? E.g.:
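The original snippet was not preserved; the following is a hypothetical reconstruction of the kind of example meant here:

```go
package q

// F's local x is address-taken and its address flows out of F, so
// escape analysis marks x as escaping: it is moved to the heap
// rather than kept in F's stack frame.
func F() *int {
	x := 42
	return &x
}
```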
Here the variable escapes, so it is heap-allocated and doesn't contribute to the stack frame in the first place. If we only target address-taken autos that are also non-escaping, it seems as though this would be a viable option?
Yes, if it actually escapes the physical (outermost) function. What I meant was, say, F calls G, G gets inlined into F, and G has a local variable whose address escapes G's scope but not F as a whole: such a variable is not marked as escaping to the heap, yet we still can't mark its lifetime as ending at the inline boundary.
I spent a little while prototyping this, but there are still significant hurdles that would have to be overcome. I cooked up some machinery to detect when two given autos within function F are derived from disjoint inlines, as in this case; that is straightforward code to write. The difficulty comes in with regard to escape analysis. Let's say you have two packages, P and Q, where P imports Q (see the reconstruction below):
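The original code listing was lost; this is an assumed reconstruction of the shape of the example (package and function names are illustrative):

```go
// Package q (the imported package):
package q

// G's buf is address-taken via the slice expression below, but the
// address does not escape G itself.
func G(data []byte) int {
	var buf [4096]byte
	return copy(buf[:], data)
}

// Package p (the importing package), shown in comments so this file
// stays self-contained:
//
//	package p
//
//	import "example.com/q"
//
//	func F(data []byte) int {
//		return q.G(data) // q.G's body, including buf, is inlined here
//	}
```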
When inlining is done for P's function, the inlined body of Q.G (including its address-taken local) is copied into it. If you compile Q by itself with escape analysis diagnostics (e.g. -gcflags=-m=2), you see output along these lines:
Note the 'esc(no)' (doesn't escape) result for the local. We can't simply reuse the escape analysis info computed for Q when compiling P, however:
Summary: at the point where Q.G is inlined into P, the compiler can't easily tell whether the address-taken local escapes the inlined scope, since escape analysis hasn't run yet for P. Some options:
(Side thought: addrtaken may be a mistake in our compiler. It’s a big hammer and very coarse-grained.) |
@cherrymui, @thanm, and I were just discussing this in more depth. My basic observation is that there are deep parallels between liveness analysis and escape analysis. Liveness analysis determines the extents within a function of non-addrtaken variables, while escape analysis determines whether the extent of addrtaken variables is bounded by a function or not.

The idea here is to produce finer-grain extent information in escape analysis, bringing liveness analysis and escape analysis closer to each other. Since escape analysis is done at the IR level, this raises the question of where to attach extent information. In principle this could get very complex, but for these purposes we really only need it at the OINLCALL level.

When we translate an OINLCALL to SSA, we would produce VarKill operations for all addrtaken variables that don't escape the OINLCALL, threaded onto the mem at the "end" of the OINLCALL.
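A rough source-level picture of where those markers would land. OINLCALL and VarKill are compiler-internal notions, so they appear here only as comments, and the function names are made up:

```go
package main

func use(p *[1024]byte) byte { return p[0] }

func caller() byte {
	// --- begin OINLCALL: inlined body of some callee ---
	var buf [1024]byte // address-taken inside the inlined body...
	r := use(&buf)     // ...but the address does not escape the OINLCALL
	// --- end OINLCALL: a VarKill for buf here would end its extent,
	// letting later code (or other inlined bodies) reuse its stack slot.
	return r
}

func main() { println(caller()) }
```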
Go version

go1.21.5

Output of go env in your module/workspace:

What did you do?

Build my application using a default.pgo CPU profile from production.

What did you see happen?
Go memory usage (/memory/classes/total:bytes − /memory/classes/heap/released:bytes) increased from 720 MB to 850 MB (18%) until rollback, see below.

This increase in memory usage seems to have been caused by an increase in goroutine stack size (/memory/classes/heap/stacks:bytes) from 207 MB to 280 MB (35%).

This increase was not due to an increase in the number of active goroutines, but due to an increase in the average stack size (/memory/classes/heap/stacks:bytes / /sched/goroutines:goroutines).

To debug this further, I built a hacky goroutine stack frame profiler. This pointed me to google.golang.org/grpc/internal/transport.(*loopyWriter).run.
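As an aside, the average-stack-size ratio used above can be sampled in-process with the runtime/metrics package; a minimal sketch (the metric names are the ones quoted in the report):

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/memory/classes/heap/stacks:bytes"},
		{Name: "/sched/goroutines:goroutines"},
	}
	metrics.Read(samples)
	stackBytes := samples[0].Value.Uint64()
	goroutines := samples[1].Value.Uint64() // always >= 1 (main goroutine)
	fmt.Printf("avg stack size: %d bytes/goroutine\n", stackBytes/goroutines)
}
```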
For the binary compiled without pgo, my tool estimated 2 MB of stack usage for ~1000 goroutines:

And for the binary compiled with pgo, my tool estimated 71 MB of stack usage for ~1000 goroutines:
Looking at the assembly, it becomes clear that this is due to the frame size increasing from 0x50 (80) bytes to 0xc1f8 (49656) bytes.

assembly
before pgo:
after pgo:
And the root cause for this appears to be the inlining of 3 calls to processData, each of which allocates a 16 KiB byte array on its stack; a sketch of the pattern follows.
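A hypothetical stand-in for the gRPC code (the names and sizes mirror the report; the actual loopyWriter logic is far more involved):

```go
package main

// processData mimics the gRPC method: it keeps a 16 KiB scratch buffer
// on its stack.
func processData(data []byte) int {
	var localBuf [16 * 1024]byte
	return copy(localBuf[:], data)
}

// run mimics the caller: if PGO causes all three calls to be inlined,
// each inlined body gets its own copy of localBuf in run's frame,
// inflating it by roughly 48 KiB.
func run(a, b, c []byte) int {
	total := 0
	total += processData(a) // inlined copy 1: +16 KiB
	total += processData(b) // inlined copy 2: +16 KiB
	total += processData(c) // inlined copy 3: +16 KiB
	return total
}

func main() {
	buf := make([]byte, 64)
	println(run(buf, buf, buf))
}
```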
What did you expect to see?

No significant increase in memory usage.
Maybe PGO could take frame sizes into account for inlining, especially if multiple calls are being made to a function that has a large frame size.
Meanwhile, maybe we should send a PR that adds a //go:noinline pragma to the processData func in gRPC. Given the current code structure, it seems highly undesirable to inline this function up to 3 times in the run method.

cc @prattmic