Reimplement linear methods using recursive method #12
Conversation
…output. Pull convenience functions to the top of each module.
I would strongly advise against using `inline(always)` in general. More details inline.
Also, be careful of having complex inlined methods in generics. After monomorphization, you might end up with a ton of copies of those complex methods in your executable. That'll bloat the binary size, drag out compile times, and probably result in worse performance because bigger binaries are just slower due to their own instructions' cache pressure.
See e.g. https://twitter.com/charliermarsh/status/1819873110448820668 for an example of monomorphization causing bloat.
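To make the mechanism concrete, here's a minimal sketch (names and bodies are illustrative, not from this crate) of how monomorphization multiplies code:

```rust
// Illustrative only: a stand-in for a large generic kernel.
fn interp_generic<T: Copy + std::iter::Sum, const MAXDIMS: usize>(x: &[T]) -> T {
    // Imagine a large interpolation kernel here instead of a one-liner.
    x.iter().copied().sum()
}

fn main() {
    // Two instantiations -> two independent copies of the body in the binary.
    let a = interp_generic::<f32, 8>(&[1.0, 2.0]);
    let b = interp_generic::<f64, 8>(&[1.0, 2.0]);
    println!("{a} {b}");
    // If interp_generic were also #[inline(always)], each of those copies
    // would additionally be pasted into every call site.
}
```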
```rust
///
/// While this method initializes the interpolator struct on every call, the overhead of doing this
/// is minimal even when using it to evaluate one observation point at a time.
#[inline(always)]
```
You probably don't want `inline(always)` here. In fact, `inline(always)` is probably best avoided pretty much always.

Inlining in LLVM happens "leaves-first", which is backwards from how most of our intuition would expect it to happen. The LLVM inliner on its own would probably decide to inline the `::new()` call and the `.interp()` call into this function. Then the `#[inline(always)]` will force it to inline this function into all call sites, and you'll end up with the contents of `MulticubicRectilinear::new()` and `MulticubicRectilinear::interp()` inlined all over the place separately. That's probably not desirable.
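To sketch the shape being described (hypothetical names modeled on the comment, not the crate's actual code):

```rust
struct MulticubicRectilinear;

impl MulticubicRectilinear {
    fn new() -> Self {
        MulticubicRectilinear
    }
    fn interp(&self) -> f64 {
        0.0 // stand-in for the real interpolation kernel
    }
}

// LLVM inlines leaves-first: new() and interp() likely get inlined into this
// wrapper on their own. #[inline(always)] then forces the wrapper, now
// carrying both bodies, into every call site.
#[inline(always)]
fn interpn_convenience() -> f64 {
    let interpolator = MulticubicRectilinear::new();
    interpolator.interp()
}

fn main() {
    // Every call site like this one receives the fully inlined body.
    let _ = interpn_convenience();
}
```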
Good point - each of the inlining directives in here was originally checked in criterion benchmarks, but I'm finding lately that the benchmarks end up compiled somewhat differently from downstream usages. I'll check how it looks in end-to-end benchmarks via the Python bindings with more reasonable inlining annotations.
The perf gains from gratuitous inlining do appear to be an artifact of benchmarking. This snip (benchmarks run after switching from the less-inlined version of both linear calcs back to the unmodified one) would indicate a significant perf regression for the specific case of evaluating one point at a time, which is an important case for differentiation. However, end-to-end benchmarks through the Python bindings do not show any significant degradation for the case of 1 observation point.
The rectilinear method doesn't show any significant change here, possibly because the core library bisection search is fairly low in the inlining tree and is marked `inline(never)`.
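For context, a bisection search pinned at the bottom of the inlining tree might look roughly like this (a sketch under that assumption, not the crate's actual implementation):

```rust
// Sketch: index of the last grid point at or below v, for a grid with at
// least two points. Kept out of the inlining tree so its callers stay small.
#[inline(never)]
fn bisect(grid: &[f64], v: f64) -> usize {
    let (mut lo, mut hi) = (0usize, grid.len() - 1);
    while hi - lo > 1 {
        let mid = (lo + hi) / 2;
        if grid[mid] <= v {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    lo
}

fn main() {
    let grid = [0.0, 1.0, 2.0, 3.0];
    assert_eq!(bisect(&grid, 1.5), 1);
}
```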
Re: monomorphization bloat - definitely a thing that I think about. In this case, the only generics are on the data type and max dimensionality. The only types in common use that implement `Float` are `f32` and `f64` in core, plus `f16` and `f128` available in crates, and I don't anticipate seeing many uses of `MAXDIMS` other than the default of 8, since that's already an unreasonably large number of dimensions and doesn't affect perf noticeably.
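As a sketch of that generic surface (assuming the num-traits `Float` bound; the fields shown are hypothetical):

```rust
use num_traits::Float; // assumes the num-traits crate as a dependency

// Hypothetical layout: only the generic parameters mirror the discussion.
// The const-generic default (MAXDIMS = 8) means downstream code normally
// monomorphizes just f32 and f64 copies at the default dimensionality.
pub struct MulticubicRectilinear<'a, T: Float, const MAXDIMS: usize = 8> {
    pub grids: &'a [&'a [T]],
    pub vals: &'a [T],
}
```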
I think `#[inline]` is fine if you have leaf code you expect to be hot. It's really just `#[inline(always)]`, especially on large non-leaf functions, that ends up being problematic.
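For instance, a hint like this on a tiny hot leaf is the benign case (illustrative, not from the crate):

```rust
// A small, hot leaf: a plain #[inline] hint is cheap insurance here, while
// the larger non-leaf driver above it is left for LLVM to judge on its own.
#[inline]
fn lerp(a: f64, b: f64, t: f64) -> f64 {
    a + t * (b - a)
}

fn main() {
    assert_eq!(lerp(0.0, 2.0, 0.5), 1.0);
}
```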
Agreed - in this case, having just `#[inline]` doesn't produce any change in performance (even in the hypersensitive micro-benchmarks), probably because the hot leaf functions are already small enough that they get inlined without adjusting the weights. So we might as well remove the manual annotations and just leave it to the compiler.
Thinking about this a little more - even in an `opt-level = "s"` build, you'd still want those functions to inline, because they're tiny and the effect on perf would be really punishing if they didn't get inlined. So I'll sprinkle a few `#[inline]` back in there just in case.
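For example, a downstream consumer might build with a size-optimized profile like the one below (standard Cargo syntax); note also that without an `#[inline]` hint, small non-generic functions can't be inlined across crate boundaries at all unless LTO is enabled:

```toml
# Hypothetical downstream profile optimizing for binary size.
[profile.release]
opt-level = "s"
```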
Sprinkled some `#[inline]` on the chain of functions at the very bottom of the recursion for each interpolator type. No effect on benchmarks at all at `opt-level=3`, but it might help keep something silly from happening in a different build someday.
0.4.3 - 2024-08-03

Added
- `interpn_alloc` function for each method, which allocates a `Vec` for the output

Changed
- Removed `#[inline(always)]` annotations in favor of plain `#[inline]` on small leaf functions
[Figures: end-to-end linear method perf scaling, before (on linux/amd desktop machine) and after (on linux/amd framework laptop)]