cmd/compile: Go 1.18 compile time may be about 18% slower than Go 1.17 (largely from changes due to generics) #49569
Comments
To be clear, compile time may be 15-18% slower. There is no expectation of a slowdown in execution time.
Oops, thanks for the clarification in the title that it's compile time I'm talking about, not execution time!
A small but non-zero part of this is changes to the GC pacer: because it starts at a smaller heap size (512 KiB vs. 4 MiB), the compiler initially spends more time in GC, and for smaller packages this can be noticeable. Across a set of benchmarks the combination shows a geomean 7% increase in build user time; with the initial heap size restored, the geomean slowdown is 2%. Because this is skewed by the startup overhead for smaller packages, it also makes sense to consider the plain arithmetic mean, which is 2% slower with the new pacer and smaller heap, and only 0.02% slower (i.e., noise) if the initial heap size is restored to its old value. We can increase the initial heap size late in the 1.18 release cycle, since it simply restores old behavior; the new pacer we probably want to keep because it does slightly better with GC pause latency in certain corner cases. But it is good to know where some of the time went.
Relatedly, I've been building Go itself with GOGC=off for years, which speeds up make.bash by more than 20%. This is because the build consists of many short-lived compile processes, which allocate heavily and exit within seconds. I know I have enough spare memory not to have to worry about GC, so letting the kernel reclaim memory at exit gives a significant speedup without noticeably increasing peak memory use. Could we apply something similar to …
Another alternative, which would presumably require less tweaking based on available memory, would be to set up a different GOGC default for compile/link/asm/etc. processes. For instance, GOGC=200 would presumably increase memory usage slightly and save some CPU overhead, gaining us some of that 20% build time reduction without potentially using tons of memory.
Leaving GOGC at 200 would be bad for small-memory builds of large packages. What I understand would help is to start GOGC very high (800), periodically check the heap size, and once the heap is large enough (32 MB, for example) set GOGC back to its environment-specified value. Perhaps we could also set a finalizer on an intentionally dead object, so that "not too much time" elapses in case our polling period is too large. The advantage of doing it this way is no new knobs, and (assuming we check the heap size soon enough) not accidentally inflating the footprint for large heaps.
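For illustration, here is a minimal sketch of that strategy in ordinary Go. The 800 starting value and the 32 MB threshold come from the comment above; the polling loop, its interval, and the function name are assumptions made for the example, not how any actual toolchain change was implemented.

```go
package main

import (
	"os"
	"runtime"
	"runtime/debug"
	"strconv"
	"time"
)

// adjustGOGC runs the process with a very high effective GOGC while the
// heap is small, then restores the environment-specified GOGC once the
// live heap passes a threshold. Illustrative only.
func adjustGOGC() {
	envGOGC := 100 // Go's default when GOGC is unset
	switch s := os.Getenv("GOGC"); s {
	case "":
		// unset: keep the default
	case "off":
		envGOGC = -1
	default:
		if v, err := strconv.Atoi(s); err == nil {
			envGOGC = v
		}
	}

	debug.SetGCPercent(800) // be lazy about GC while the heap is tiny

	const threshold = 32 << 20 // 32 MB, per the comment above
	go func() {
		var ms runtime.MemStats
		for {
			runtime.ReadMemStats(&ms)
			if ms.HeapAlloc >= threshold {
				// Heap is big enough: restore normal pacing.
				debug.SetGCPercent(envGOGC)
				return
			}
			time.Sleep(10 * time.Millisecond) // polling period is a guess
		}
	}()
}

func main() {
	adjustGOGC()
	// ... the rest of the process's work would go here ...
}
```

The finalizer idea mentioned above (to bound how long the process can go without re-checking the heap) is omitted here for brevity.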
We've known for years that setting GOGC=off in the compiler makes it run faster.
Change https://golang.org/cl/370154 mentions this issue:
Updates #49569

Change-Id: Ifba769993c50bb547cb355f56934fb572ec17a1a
Reviewed-on: https://go-review.googlesource.com/c/go/+/370154
Reviewed-by: Austin Clements <austin@google.com>
Trust: Dan Scales <danscales@google.com>
A real-world measurement for what it's worth: for the Juju project, running …
Is there any way to know whether using generics in a program makes it slower to compile that program? Or is it simply that Go 1.18 compiles programs slightly more slowly, whether or not they actually include generic functions or types?
@bitfield This issue is about slowness in the compiler even when not using generics (that is the only case for which it is possible to compare Go 1.17 and 1.18).
Yes, understood, thank you. Feel free to dismiss this as an off-topic supplementary question, but I'm just curious as to whether Go 1.18 compiles a given program detectably more slowly if it involves instantiating a generic type, compared to one that simply uses the equivalent specific type directly. For example:

```go
type Bunch[E any] []E
b := Bunch[int]{}
```

versus:

```go
type BunchInt []int
b := BunchInt{}
```
There is certainly a very small extra amount of work computing the instantiated type in the compiler (type substitution, etc.). There is even more work if the generic type has methods and the methods are called on the instantiated type, since then instantiated methods must be created. But the extra work for any particular instantiation of a generic type or generic function should mostly be in the noise, similar to adding one extra function to a normal Go program: hardly noticeable in a full compile. As Ian said, the main thing is that the whole compilation process got a little slower in Go 1.18, because of the way the compiler was re-structured to be able to deal with generic types and functions (including a new typechecker).
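To make the per-instantiation work concrete, here is a small, hypothetical extension of the Bunch example from the question above (not taken from the thread): a generic type with one method, instantiated at two element types, so the compiler has to produce an instantiated method for each.

```go
package main

import "fmt"

// Bunch is the generic type from the example above, with one method added.
type Bunch[E any] []E

// First returns the first element and reports whether the bunch is non-empty.
func (b Bunch[E]) First() (E, bool) {
	var zero E
	if len(b) == 0 {
		return zero, false
	}
	return b[0], true
}

func main() {
	// Each instantiation below requires the compiler to create an
	// instantiated version of First (roughly, one per distinct element
	// shape), which is the extra front-end work described above.
	ints := Bunch[int]{1, 2, 3}
	strs := Bunch[string]{"a", "b"}
	fmt.Println(ints.First())
	fmt.Println(strs.First())
}
```

As the comment says, this extra work should normally be lost in the noise of a full compile.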
@danscales Which reference project did you use to measure the compile time? Go itself, or something else?
The reference project was compiling the runtime package. The measured overhead may vary a bit, as noted in a comment above, where the overhead for a particular full build (including link time) was 6%.
Just downloaded the latest 1.18 release and tried running our builds. I can confirm that, for us at least, the builds were 20% slower. We don't use generics, so this will be a blocker for us moving to Go 1.18 for the time being. I'm not sure how welded in the generics code is, but if there were some kind of flag that said "we know we don't have generics, so don't build it as such", I'd love to get the other benefits coming with 1.18. Sorry all... and that flag is -G=0. I bet many people won't know to add it and will, like me, blindly use more CPU and get slower compiles by default even though it's not necessary. I'm not sure, but I think I was using the -G=0 flag even when I noticed the slowness.
Since we're already in the freeze, I assume nothing else is going to happen here for this release. Moving to the backlog.
One other thing to consider: given the energy crunch, 20% for a single user maybe doesn't seem like a lot, but compounded over thousands (possibly millions) of users, it could add up to a lot. One thing I loved about Go (not just the language) was its adherence to performant and energy-efficient compilation as well as runtime.
Can we edit the OP to remove the mention of this being slated for Go 1.19?
We now have pretty good evidence that we've generally brought build speed back in line with Go 1.17. @prattmic has the benchmark data. I'll let him post it and close this issue.
Change https://go.dev/cl/461957 mentions this issue:
For #49569.
For #54202.

Change-Id: Iac45338bc4e45617e8ac7425076cf4cd0af157a4
Reviewed-on: https://go-review.googlesource.com/c/go/+/461957
TryBot-Bypass: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
With 1.17 as a baseline, our Sweet benchmarks show build time regressions in 1.18 and 1.19. At tip, builds are slightly faster than 1.17.
Go 1.18 compile times may be 15-18% slower than Go 1.17, largely due to the changes required to implement generics. The easiest comparison (which shows most of the difference) is to compare compilation times in -G=0 and -G=3 mode. -G=3 mode is the default, since it supports generics.
A comparison between -G=0 mode in Go 1.18 and Go 1.17 shows that the compiler may have slowed down ~1% because of non-generics changes (since -G=0 mode does not support generics).
So, for now, we can mostly compare the speeds of -G=0 and -G=3 mode in Go 1.18. Most of the difference is due to the new front-end processing, since the SSA backend doesn't change at all for generics. In -G=0 mode (the default mode before Go 1.18), there is a syntax parser, the noder phase to create the tree of ir.Node nodes, and the standard typechecker. In -G=3 mode, there is the same syntax parser, but the program is first typechecked by types2 (which supports generics), and then a noder2 phase creates the tree of ir.Node nodes using the syntax and type information from the types2 typechecker. The sum of noder + types1 typechecking is about 4% of the compile time in a run, whereas the sum of types2 typechecking + noder2 is about 14%. So we can see that much of the slowdown is due to the change in front-end processing (not unexpectedly).
These are all rough numbers based on a small number of runs/inputs.
We plan to reduce this extra overhead in Go 1.19.