
Reimplement linear methods using recursive method #12

Merged: 18 commits into main, Aug 21, 2024

Conversation

@jlogan03 (Owner) commented Aug 4, 2024

0.4.3 - 2024-08-03

Added

  • Implement interpn_alloc function for each method, which allocates a Vec for the output
  • Add test of linear methods using hat function to check grid cell alignment

Changed

  • Use recursive method to evaluate multilinear interpolation instead of hypercube method (see the sketch after this list)
    • This makes extrapolation cost consistent with interpolation cost, and reduces nominal perf scaling
    • Shows about 2x slower perf in micro-benchmarks, but about 2x faster in end-to-end benchmarks through the Python bindings
      • Need to improve benchmarking strategy to better capture perf in real-life usage
  • Reduce repeated documentation
  • Remove some inlining annotations and all instances of #[inline(always)]
    • Minimal effect on performance; provides more flexibility to downstream applications, especially opt-level=s builds
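
For reference, a minimal sketch of the recursive evaluation strategy (simplified and hypothetical, not interpn's actual API or signatures; assumes a regular grid with at least two points per dimension and row-major values):

```rust
/// Sketch of recursive multilinear interpolation on a regular grid.
/// `starts[d] + i * steps[d]` gives grid coordinates along dimension `d`;
/// `vals` holds the grid values in row-major (C) order.
fn interp_recursive(
    dims: &[usize],
    starts: &[f64],
    steps: &[f64],
    vals: &[f64],
    point: &[f64],
    dim: usize,
    index: &mut [usize],
) -> f64 {
    if dim == dims.len() {
        // Base case: every dimension is pinned to a grid index; read one value.
        let mut flat = 0;
        for d in 0..dims.len() {
            flat = flat * dims[d] + index[d];
        }
        return vals[flat];
    }
    // Lower corner of the grid cell along this dimension, saturated at the
    // edges so points outside the grid reuse the nearest cell. The fraction
    // `t` then falls outside [0, 1] for extrapolation, which makes
    // extrapolation the same lerp (and the same cost) as interpolation.
    let x = (point[dim] - starts[dim]) / steps[dim];
    let i = (x.floor() as isize).clamp(0, dims[dim] as isize - 2) as usize;
    let t = x - i as f64;
    index[dim] = i;
    let lo = interp_recursive(dims, starts, steps, vals, point, dim + 1, index);
    index[dim] = i + 1;
    let hi = interp_recursive(dims, starts, steps, vals, point, dim + 1, index);
    lo + t * (hi - lo)
}

fn main() {
    // 2x2 grid sampling f(x, y) = x + 2y on the unit square.
    let vals = [0.0, 2.0, 1.0, 3.0];
    let mut index = [0usize; 2];
    let v = interp_recursive(
        &[2, 2], &[0.0, 0.0], &[1.0, 1.0], &vals, &[0.5, 0.5], 0, &mut index,
    );
    assert!((v - 1.5).abs() < 1e-12);
}
```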

end-to-end linear method perf scaling before (on linux/amd desktop machine):

[benchmark plot]

after (on linux/amd framework laptop):

[benchmark plot]

@obi1kenobi (Collaborator) left a comment:

I would strongly advise against using inline(always) in general. More details inline.

Also, be careful of having complex inlined methods in generics. After monomorphization, you might end up with a ton of copies of those complex methods in your executable. That'll bloat the binary size, drag out compile times, and probably result in worse performance, because bigger binaries put more pressure on the instruction cache.

See e.g. https://twitter.com/charliermarsh/status/1819873110448820668 for an example of monomorphization causing bloat.
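
A toy illustration of that effect (hypothetical names, not code from this PR): each concrete type a generic is instantiated with gets its own compiled copy, and #[inline(always)] would additionally stamp that body into every call site.

```rust
// Each instantiation below is monomorphized into its own machine-code
// function; marking a large generic like this #[inline(always)] would
// duplicate its body at every call site of every instantiation.
fn weighted_sum<T>(a: T, b: T, w: T) -> T
where
    T: Copy + std::ops::Add<Output = T> + std::ops::Mul<Output = T>,
{
    a + w * b
}

fn main() {
    // Two instantiations -> two separate copies in the binary:
    // weighted_sum::<f32> and weighted_sum::<f64>.
    let _ = weighted_sum(1.0_f32, 2.0_f32, 0.5_f32);
    let _ = weighted_sum(1.0_f64, 2.0_f64, 0.5_f64);
}
```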

@obi1kenobi (Collaborator) commented on this diff hunk:

```rust
///
/// While this method initializes the interpolator struct on every call, the overhead of doing this
/// is minimal even when using it to evaluate one observation point at a time.
#[inline(always)]
```

You probably don't want inline(always) here. In fact inline(always) is probably best avoided pretty much always.

Inlining in LLVM happens "leaves-first", which is backwards from how most of our intuition would expect it to happen. The LLVM inliner on its own would probably decide to inline the ::new() call and the .interp() call into this function.

Then the #[inline(always)] will force it to inline this function into all call sites, and you'll end up with the contents of MulticubicRectilinear::new() and MulticubicRectilinear::interp() inlined all over the place separately. That's probably not desirable.
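
Roughly the shape of the failure mode being described (a hypothetical example, not this crate's code): the leaves get inlined into the wrapper first, then inline(always) forces the combined body into every caller.

```rust
struct Interp {
    scale: f64,
}

impl Interp {
    // Small leaf functions like these are what LLVM inlines first, on its own.
    fn new(scale: f64) -> Self {
        Interp { scale }
    }
    fn interp(&self, x: f64) -> f64 {
        self.scale * x
    }
}

// With #[inline(always)] here, the already-inlined bodies of `new` and
// `interp` would be stamped into every call site of `interpn`. A plain
// `#[inline]` hint (or no annotation) leaves that decision to the compiler.
#[inline]
pub fn interpn(scale: f64, x: f64) -> f64 {
    Interp::new(scale).interp(x)
}

fn main() {
    assert_eq!(interpn(2.0, 3.0), 6.0);
}
```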

@jlogan03 (Owner, Author) replied:

Good point - each instance of inlining directives in here was originally checked in criterion benchmarks, but I'm finding lately that the benchmarks end up compiled somewhat differently from downstream usages. I'll check how it looks in end-to-end benchmarks via the python bindings with more reasonable inlining annotations.

@jlogan03 (Owner, Author) replied:

The perf gains from gratuitous inlining do appear to be an artifact of benchmarking. This snippet (benchmarks run after switching both linear calcs from the less-inlined version back to the unmodified one) would indicate a significant perf regression for the specific case of evaluating one point at a time, which is an important case for differentiation:

[benchmark results image]

However, end-to-end benchmarks through the python bindings do not show any significant degradation for the case of 1 observation point:

[benchmark results image]

@jlogan03 (Owner, Author) replied:

The rectilinear method doesn't show any significant change here, possibly because the core library's bisection search is fairly low in the inlining tree and is marked #[inline(never)].

@jlogan03 (Owner, Author) replied:

No identifiable change in end-to-end benchmarks in Python after removing all inlining annotations, even for the single-observation-point cases that see a 2x slowdown in Rust benchmarks. This should help embedded usages as well, since opt-level=s builds will behave more as expected:

[benchmark results image]

@jlogan03 (Owner, Author) replied:

Re: monomorphization bloat - definitely a thing that I think about. In this case, the only generics are on the data type and max dimensionality. The only types in common use that implement Float are f32 and f64 in core, plus f16 and f128 available in crates, and I don't anticipate seeing many uses of MAXDIMS other than the default of 8 since that's already an unreasonably large number of dimensions and doesn't affect perf noticeably.
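
Illustrative only (names and fields are hypothetical, not interpn's actual types): the generic surface amounts to one float type parameter plus one const-generic max dimensionality, so in practice monomorphization produces only a handful of copies.

```rust
// One copy of each method per (T, MAXDIMS) pair actually used; with T in
// {f32, f64} and MAXDIMS left at its default of 8, that's just two.
pub struct Interpolator<T, const MAXDIMS: usize = 8> {
    steps: [T; MAXDIMS],
}

impl<T, const MAXDIMS: usize> Interpolator<T, MAXDIMS> {
    pub fn new(steps: [T; MAXDIMS]) -> Self {
        Interpolator { steps }
    }
}

fn main() {
    let _single = Interpolator::<f32>::new([0.0; 8]);
    let _double = Interpolator::<f64>::new([0.0; 8]);
}
```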

@obi1kenobi (Collaborator) replied:

I think #[inline] is fine if you have leaf code you expect to be hot. It's really just #[inline(always)] especially on large non-leaf functions that ends up being problematic.

@jlogan03 (Owner, Author) replied:

Agreed - in this case, having just #[inline] doesn't produce any change in performance (even in the hypersensitive micro-benchmarks), probably because the hot leaf functions are already small enough that they get inlined without adjusting the weights. So, might as well remove the manual annotations and just leave it to the compiler.

@jlogan03 (Owner, Author) replied:

Thinking about this a little more - even in an opt-level=s build, you'd still want those functions to inline, because they're tiny and the effect on perf would be really punishing if they didn't get inlined. So I'll sprinkle a few #[inline] back in there just in case.

@jlogan03 (Owner, Author) replied:

Sprinkled some #[inline] on the chain of functions at the very bottom of the recursion for each interpolator type. No effect on benchmarks at all at opt-level=3, but it might help keep something silly from happening in a different build someday.
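
A minimal sketch of that pattern (simplified and hypothetical, not the actual interpn internals): the tiny lerp at the bottom of the recursion carries a plain #[inline] hint, while the recursive driver is left unannotated.

```rust
// Plain #[inline] is a hint, not a mandate: it mostly matters in builds
// like opt-level=s, where a missed inline of a tiny hot leaf would hurt.
#[inline]
fn lerp(lo: f64, hi: f64, t: f64) -> f64 {
    lo + t * (hi - lo)
}

// Recursive driver: lerp adjacent halves until one value remains.
// Assumes a non-empty, power-of-two-length slice for simplicity.
fn reduce(vals: &[f64], t: f64) -> f64 {
    if vals.len() == 1 {
        return vals[0];
    }
    let mid = vals.len() / 2;
    lerp(reduce(&vals[..mid], t), reduce(&vals[mid..], t), t)
}

fn main() {
    assert_eq!(reduce(&[0.0, 1.0, 2.0, 3.0], 0.5), 1.5);
}
```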

Resolved (outdated) review threads: interpn/src/multicubic/regular.rs, interpn/src/multilinear/rectilinear.rs
@jlogan03 merged commit f172183 into main on Aug 21, 2024. 1 check passed.