
Generalise Eigen inputs to adj_jac_apply #1811

Closed · wants to merge 6 commits

Conversation

andrjohns
Collaborator

Summary

This pull request extends `adj_jac_apply` to accept Eigen inputs other than `Eigen::Matrix<T, R, C>`. This allows for use with `Eigen::Array`, Eigen expressions, and `Eigen::Map`. This is demonstrated by extending the current `softmax` function with the `apply_vector_unary` framework to take `std::vector<>` (and `std::vector<std::vector<>>`) inputs via `Eigen::Map`.

I also combined any functions that were separately defined for `double` and `int` inputs.
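As a rough illustration of the pattern, here is a standard-library-only sketch of the "one generic entry point, one elementwise functor" idea. The names `apply_unary` and `softmax_sketch` are hypothetical, and the real framework views inputs through Eigen types (`Eigen::Map` for `std::vector`) rather than raw loops:

```cpp
#include <algorithm>
#include <cmath>
#include <numeric>
#include <vector>

// Standard-library-only sketch of the apply_vector_unary idea: a single
// generic entry point that applies an elementwise functor to whatever
// container it is handed. Names here (apply_unary, softmax_sketch) are
// hypothetical, not Stan Math's; the actual framework dispatches through
// Eigen types instead of looping directly.
template <typename Container, typename F>
Container apply_unary(const Container& x, F&& f) {
  Container out = x;
  for (auto& v : out) v = f(v);
  return out;
}

std::vector<double> softmax_sketch(const std::vector<double>& x) {
  // Subtract the max before exponentiating for numerical stability.
  double m = *std::max_element(x.begin(), x.end());
  auto e = apply_unary(x, [m](double v) { return std::exp(v - m); });
  double s = std::accumulate(e.begin(), e.end(), 0.0);
  return apply_unary(e, [s](double v) { return v / s; });
}
```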

Tests

Expanded testing of the softmax function

Side Effects

N/A

Release notes

`adj_jac_apply` framework generalised to accept any Eigen type as input

Checklist

  • Math issue Generalise Eigen inputs for adj_jac_apply #1808

  • Copyright holder: Andrew Johnson

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: `./runTests.py test/unit`)
    • header checks pass (`make test-headers`)
    • dependencies checks pass (`make test-math-dependencies`)
    • docs build (`make doxygen`)
    • code passes the built-in C++ standards checks (`make cpplint`)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

@andrjohns requested a review from bbbales2 on March 31, 2020
@stan-buildbot
Contributor


| Name | Old Result | New Result | Ratio | Performance change (1 - new / old) |
| --- | --- | --- | --- | --- |
| gp_pois_regr/gp_pois_regr.stan | 4.95 | 4.82 | 1.03 | 2.61% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.02 | 0.02 | 1.0 | -0.36% slower |
| eight_schools/eight_schools.stan | 0.09 | 0.09 | 1.0 | 0.36% faster |
| gp_regr/gp_regr.stan | 0.22 | 0.22 | 0.99 | -0.66% slower |
| irt_2pl/irt_2pl.stan | 6.46 | 6.46 | 1.0 | 0.13% faster |
| performance.compilation | 89.47 | 87.51 | 1.02 | 2.19% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 7.52 | 7.58 | 0.99 | -0.71% slower |
| pkpd/one_comp_mm_elim_abs.stan | 20.41 | 21.45 | 0.95 | -5.1% slower |
| sir/sir.stan | 91.03 | 101.45 | 0.9 | -11.45% slower |
| gp_regr/gen_gp_data.stan | 0.05 | 0.05 | 0.96 | -3.63% slower |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.95 | 2.98 | 0.99 | -1.17% slower |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.3 | 0.3 | 0.99 | -0.6% slower |
| arK/arK.stan | 1.75 | 1.75 | 1.0 | 0.28% faster |
| arma/arma.stan | 0.66 | 0.66 | 0.99 | -0.9% slower |
| garch/garch.stan | 0.51 | 0.51 | 1.01 | 0.53% faster |

Mean result: 0.988824529332

Commit hash: b809db4


Machine information:
ProductName: Mac OS X
ProductVersion: 10.11.6
BuildVersion: 15G22010

CPU:
Intel(R) Xeon(R) CPU E5-1680 v2 @ 3.00GHz

G++:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

Clang:
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

@bbbales2 (Member) commented Apr 6, 2020

Woah, seven days ago, my bad. Lemme look.

@bbbales2 (Member) left a review comment:

Do you know if there's any way to use move semantics or forwarding to speed anything up here: https://discourse.mc-stan.org/t/adj-jac-apply/5163/6 ?

@dpsimpson asked me before he did his recent pull request here if it made sense to use adj_jac_apply (which woulda made the code simpler), but it's apparent from the stuff on discourse that this might have ended up being slower than the autodiff. Is this due to unnecessary copying?
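For reference, the forwarding idea raised above can be sketched as follows. This is a minimal standard-library illustration of perfect forwarding; `store_input` is a hypothetical stand-in for however `adj_jac_apply` captures its operand, not actual Stan Math code:

```cpp
#include <utility>
#include <vector>

// Minimal sketch of perfect forwarding: the template preserves the value
// category of its argument, so an rvalue input is moved (a cheap pointer
// steal for std::vector) while an lvalue input is copied. store_input is
// a hypothetical stand-in, not Stan Math code.
template <typename T>
std::vector<double> store_input(T&& x) {
  return std::vector<double>(std::forward<T>(x));
}
```

Whether this avoids the copies behind the slowdown seen on Discourse would need profiling.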

```cpp
template <typename Container,
          require_vector_st<std::is_arithmetic, Container>...>
inline auto softmax(const Container& x) {
  return apply_vector_unary<Container>::apply(x, [](const auto& v) {
```
@bbbales2 (Member):
I don't think it makes sense to extend softmax to work on anything other than a stan vector. We only abuse the array/vector stuff in the distributions.

@andrjohns (Collaborator, Author):

That's a good call, will revert

@andrjohns (Collaborator, Author)

Forwarding could be a good idea here. I might close this for now, do some testing, and see what I can do.

@andrjohns andrjohns closed this Apr 8, 2020