
Generalize */fun starting with cr-d #1754

Merged: 15 commits into stan-dev:develop, Mar 18, 2020

Conversation

@t4c1 (Contributor) commented Mar 2, 2020

Summary

Generalizes functions with names starting with cr-d.

Tests

Some checks that an input is a vector are now performed at compile time instead of at runtime. Tests for these checks have been removed.

Side Effects

None.

Checklist

  • Math issue Generalize matrix function signatures #1470

  • Copyright holder: Tadej Ciglarič
    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass, (make test-headers)
    • dependencies checks pass, (make test-math-dependencies)
    • docs build, (make doxygen)
    • code passes the built-in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

@stan-buildbot (Contributor) commented:


Name Old Result New Result Ratio Performance change (1 - new/old)
gp_pois_regr/gp_pois_regr.stan 4.88 4.83 1.01 0.92% faster
low_dim_corr_gauss/low_dim_corr_gauss.stan 0.02 0.02 0.99 -0.62% slower
eight_schools/eight_schools.stan 0.09 0.09 1.01 0.78% faster
gp_regr/gp_regr.stan 0.22 0.22 1.01 1.35% faster
irt_2pl/irt_2pl.stan 6.11 6.12 1.0 -0.27% slower
performance.compilation 89.62 87.53 1.02 2.34% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan 7.67 7.82 0.98 -1.89% slower
pkpd/one_comp_mm_elim_abs.stan 20.5 20.73 0.99 -1.12% slower
sir/sir.stan 95.94 96.78 0.99 -0.87% slower
gp_regr/gen_gp_data.stan 0.05 0.05 1.0 -0.38% slower
low_dim_gauss_mix/low_dim_gauss_mix.stan 2.96 2.97 0.99 -0.55% slower
pkpd/sim_one_comp_mm_elim_abs.stan 0.32 0.31 1.04 4.12% faster
arK/arK.stan 1.74 1.75 0.99 -0.77% slower
arma/arma.stan 0.66 0.66 0.99 -1.03% slower
garch/garch.stan 0.51 0.57 0.89 -12.37% slower
Mean result: 0.994226855604

Jenkins Console Log
Blue Ocean
Commit hash: 39b4c20


Machine information ProductName: Mac OS X ProductVersion: 10.11.6 BuildVersion: 15G22010

CPU:
Intel(R) Xeon(R) CPU E5-1680 v2 @ 3.00GHz

G++:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

Clang:
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.6.0
Thread model: posix

@SteveBronder (Collaborator) left a comment:

Mostly optional comments and a few doc changes.

@andrjohns would you mind taking a quick look at the rev stuff? I think you know rev better than I do. Nothing here seems weird at all, but better to have a double-check.

Comment on lines 12 to 13
template <typename T, require_eigen_vt<is_fvar, T>* = nullptr>
inline value_type_t<T> determinant(const T& m) {
Collaborator:

EigMat

@t4c1 (Contributor Author):

done

check_square("determinant", "m", m);

const T vals = m.val().determinant();
return fvar<T>(vals, vals * (m.val().inverse() * m.d()).trace());
const typename value_type_t<T>::Scalar vals = m.val().determinant();
Collaborator:

?

Suggested change
const typename value_type_t<T>::Scalar vals = m.val().determinant();
const scalar_type_t<T> vals = m.val().determinant();

@t4c1 (Contributor Author) replied Mar 5, 2020:

Nope. value_type_t<T> is fvar<X> and I need X here.

Comment on lines 13 to 14
inline Eigen::Matrix<value_type_t<T>, T::RowsAtCompileTime,
T::RowsAtCompileTime>
Collaborator:

Do we want to use auto here and .matrix() on the return of multiply?

Collaborator:

Also EigMat

Collaborator:

FYI, everywhere T is an Eigen matrix it would be good to have a nicer name like EigMat.

@t4c1 (Contributor Author):

Done. I know about template names, but they are not very high on my priority list, so I tend to forget to change them. You can either keep reminding me about every single instance I miss or we can leave that for a separate series of PRs.

@t4c1 (Contributor Author):

One of the return statements invokes the default constructor, so the compiler cannot deduce auto.

@andrjohns (Collaborator) replied Mar 5, 2020:

Except that at the moment the function will take any Eigen type as an input but can only return Eigen::Matrix types (which could cause some compile errors down the line). You could restrict the inputs to only take matrix types, or, since the result will need to be evaluated anyway, this could be written to be more flexible:

template <typename T, require_eigen_vt<is_fvar, T>* = nullptr>
inline auto tcrossprod(const T& m) {
  if (m.rows() == 0) {
    return plain_type_t<decltype(multiply(m, m.transpose()))>{};
  }
  return multiply(m, m.transpose()).eval();
}

@t4c1 (Contributor Author):

Many functions still assume the input is derived from Eigen::MatrixBase and will not work with Eigen::ArrayBase-derived types. Or are you thinking of some other type this would fail with?

Collaborator:

No, it was just ArrayBase that I was thinking of, so if that's a non-issue then feel free to ignore.

Comment on lines +15 to +18
* @param x1 First scalar.
* @param x2 Second scalar.
* @return Squared distance between scalars
* @throw std::domain_error Any scalar is not finite.
Collaborator:

Would you mind documenting T1 and T2 as well?

[Optionally] Could also make the names something nicer like Scalar1 and Scalar2

@t4c1 (Contributor Author):

done

stan/math/rev/fun/dot_self.hpp (resolved)
Comment on lines 133 to 136
template <typename T1, typename T2,
require_eigen_vector_vt<is_var, T1>* = nullptr,
require_eigen_vector_vt<std::is_arithmetic, T2>* = nullptr>
inline var squared_distance(const T1& v1, const T2& v2) {
Collaborator:

[Optional] For something like this I think EigenVar and EigenArith would be useful names

@t4c1 (Contributor Author):

done

Comment on lines 60 to 62
Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic> m1(1, 1);
m1 << 2.0;
EXPECT_NEAR(4.0, dot_self(m1), 1E-12);
Collaborator:

Why gone?

Collaborator:

Oh, because it only takes in vectors now? I wonder if this will break user code, though?

@t4c1 (Contributor Author):

Right. I should probably let this accept matrices too, even if the matrix version is not exposed to the Stan language.

@andrjohns (Collaborator):

The rev stuff all looks pretty great. Should we also replace the:

reinterpret_cast<vari**>(ChainableStack::instance_->memalloc_.alloc()

with

ChainableStack::instance_->memalloc_.alloc_array<vari*>();

Or save that for a separate issue?

@t4c1 (Contributor Author) commented Mar 5, 2020:

I would prefer to leave that one out of this PR. I am already doing quite a few things at once here.

t4c1 and others added 3 commits March 5, 2020 09:50
# Conflicts:
#	stan/math/fwd/fun/columns_dot_self.hpp
#	stan/math/fwd/fun/rows_dot_self.hpp
#	stan/math/fwd/fun/squared_distance.hpp
#	stan/math/prim/fun/distance.hpp
#	stan/math/prim/fun/squared_distance.hpp
@stan-buildbot (Contributor) commented:


Name Old Result New Result Ratio Performance change (1 - new/old)
gp_pois_regr/gp_pois_regr.stan 4.86 4.94 0.98 -1.66% slower
low_dim_corr_gauss/low_dim_corr_gauss.stan 0.02 0.02 1.02 1.94% faster
eight_schools/eight_schools.stan 0.09 0.09 0.99 -0.52% slower
gp_regr/gp_regr.stan 0.22 0.22 0.99 -0.55% slower
irt_2pl/irt_2pl.stan 6.12 6.07 1.01 0.74% faster
performance.compilation 87.5 86.91 1.01 0.67% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan 7.66 7.66 1.0 0.03% faster
pkpd/one_comp_mm_elim_abs.stan 20.5 21.61 0.95 -5.44% slower
sir/sir.stan 96.58 97.02 1.0 -0.46% slower
gp_regr/gen_gp_data.stan 0.05 0.05 1.01 0.53% faster
low_dim_gauss_mix/low_dim_gauss_mix.stan 2.98 2.97 1.0 0.38% faster
pkpd/sim_one_comp_mm_elim_abs.stan 0.32 0.31 1.04 3.4% faster
arK/arK.stan 1.74 1.74 1.0 0.45% faster
arma/arma.stan 0.67 0.66 1.02 2.24% faster
garch/garch.stan 0.58 0.58 1.0 0.33% faster
Mean result: 1.00173156292

Jenkins Console Log
Blue Ocean
Commit hash: 6e7ee12



@andrjohns (Collaborator):

@t4c1 There's a problem with using Eigen::Ref for some expression types, since the default Stride length can be wrong. You can verify in your branch with this test for cumulative_sum:

TEST(MathMatrixPrimMat, cumulative_sum_expr) {
  using stan::math::cumulative_sum;

  stan::math::vector_d x2(2);
  x2 << -1.0, 1.0;
  stan::math::matrix_d m2(2, 2);
  m2 << -1.0, 1.0, -1.0, 1.0;

  stan::math::vector_d x2_out = cumulative_sum(x2);
  stan::math::vector_d m2_out = cumulative_sum(cumulative_sum(m2.diagonal()));

  EXPECT_FLOAT_EQ(x2_out[1], m2_out[1]);
}

Which returns:

test/unit/math/prim/fun/cumulative_sum_test.cpp:61: Failure
Expected equality of these values:
  x2_out[1]
    Which is: 0
  m2_out[1]
    Which is: -1

The only fix I've found is to change the Ref call to Ref<const T, 0, InnerStride<>>&, but this ends up disabling vectorisation (more info in the Eigen doc here). Not sure if there's a better approach to be had here.

@andrjohns (Collaborator):

Noticed the error after I hit send: the test has a typo (cumulative_sum called twice). Looks like I'm chasing down a different error.

@andrjohns (Collaborator):

Alright, I wasn't entirely crazy: there is an issue with Eigen::Ref and expressions as inputs. I've got a minimal example here on godbolt that replicates the problem. My hunch is that it has something to do with the Eigen::Ref object falling out of scope.

@bob-carpenter (Contributor):

With the example:

template <typename T>
decltype(auto) test_fun(const T& x) {
    const Eigen::Ref<const Eigen::VectorXd>& x_ref = x;
    return (x_ref.array() - 5.0).matrix();
}

I'm not exactly sure what the reference structure for expressions is, but I'm guessing that x_ref.array() creates a local structure and then the - 5 creates an expression that references it. That array goes out of scope when the function returns. The local that gets created for .matrix() holds a reference to the subtraction expression, and that will also go away. The matrix created by .matrix() will be an expression, but it gets copied. It's the references it's holding that can go bad.

Also, why is x_ref being defined as a Ref and as a reference (&)?

@andrjohns (Collaborator):

That makes sense. It's going to make generalising these functions safely an interesting proposition.

Also, why is x_ref being defined as a Ref and as a reference (&)?

More info on that here

@andrjohns (Collaborator):

One solution I've found is to detect whether an expression has been passed to the function, and if so, evaluate the expression prior to calling the function: https://godbolt.org/z/jqoFZi

Whether this is the best (or most efficient) solution is another question.

@t4c1 (Contributor Author) commented Mar 11, 2020:

First, thanks for finding this. I don't like your suggestion, though: it gives up all the benefits of using Eigen::Ref in the first place.

I guess we could instead place that Eigen::Ref on the nochain stack. Then it would not go out of scope. This gives up on using references (&), but I think reconstructing Eigen::Refs is not very expensive.

EDIT:
Also, I think this problem is not limited to Eigen::Ref. Any local variable that is referenced in a returned expression would also cause problems when it goes out of scope.

It might be better to put this discussion on Eigen::Ref in a separate issue.

@bob-carpenter (Contributor):

Yes, passing Eigen structures around efficiently deserves its own independent discussion.

I don't see a good option. The two under consideration are:

  1.  Do an eval() and lose the benefits of Ref by creating a copy. This is what we're doing now implicitly by making arguments const references to Eigen::Matrix.

  2. Use placement new on the autodiff stack for every (sub)expression template used in a function. This leaks memory because the locals never get recovered. And I don't see how to do everything in a nicely readable way with placement new because you need to know the type.

What we really need is a kind of inlining through expression templates.

@t4c1 (Contributor Author) commented Mar 13, 2020:

@SteveBronder this is waiting for another review. The issue with local variables only blocks returning expressions.

@SteveBronder (Collaborator) left a comment:

Looks good! One actual fix, but most of the comments are optional. It would be nice to have T* -> EigMat* or EigT* so we can read the signature and see what types this operates on.

const T vals = m.val().determinant();
return fvar<T>(vals, vals * (m.val().inverse() * m.d()).trace());
const typename value_type_t<EigMat>::Scalar vals = m.val().determinant();
return value_type_t<EigMat>(vals, vals * (m.val().inverse() * m.d()).trace());
Collaborator:

[optional]

Since you declare the return type at the top you could do

Suggested change
return value_type_t<EigMat>(vals, vals * (m.val().inverse() * m.d()).trace());
{vals, vals * (m.val().inverse() * m.d()).trace()};

@t4c1 (Contributor Author):

fixed

stan/math/fwd/fun/tcrossprod.hpp (resolved)
Comment on lines 19 to 21
template <typename T, require_eigen_t<T>* = nullptr>
inline auto crossprod(const T& M) {
return tcrossprod(M.transpose());
Collaborator:

[optional]

T -> EigMat

@t4c1 (Contributor Author):

fixed

Comment on lines 49 to 50
inline Eigen::Matrix<value_type_t<T>, T::RowsAtCompileTime,
T::ColsAtCompileTime>
Collaborator:

[optional]

I think auto is fine here since the type is declared in the function

@t4c1 (Contributor Author):

fixed

Comment on lines 19 to 21
template <typename T, require_eigen_vector_t<T>* = nullptr>
inline Eigen::Matrix<value_type_t<T>, Eigen::Dynamic, Eigen::Dynamic>
diag_matrix(const T& v) {
Collaborator:

[optional] T -> EigMat

@t4c1 (Contributor Author):

fixed

stan/math/prim/fun/distance.hpp (resolved)
stan/math/prim/fun/squared_distance.hpp (resolved)
stan/math/prim/fun/tcrossprod.hpp (resolved)
stan/math/rev/fun/squared_distance.hpp (resolved)
template <
typename EigVecVar1, typename EigVecVar2,
require_all_eigen_vector_vt<is_var, EigVecVar1, EigVecVar2>* = nullptr>
inline var squared_distance(const EigVecVar1& v1, const EigVecVar2& v2) {
check_matching_sizes("squared_distance", "v1", v1, "v2", v2);
return var(new internal::squared_distance_vv_vari(v1, v2));
Collaborator:

[optional]

The return type lets this be:

Suggested change
return var(new internal::squared_distance_vv_vari(v1, v2));
return {new internal::squared_distance_vv_vari(v1, v2)};

@t4c1 (Contributor Author):

fixed

@stan-buildbot (Contributor) commented:


Name Old Result New Result Ratio Performance change (1 - new/old)
gp_pois_regr/gp_pois_regr.stan 4.83 5.11 0.94 -5.91% slower
low_dim_corr_gauss/low_dim_corr_gauss.stan 0.02 0.02 0.98 -1.68% slower
eight_schools/eight_schools.stan 0.09 0.09 1.01 0.92% faster
gp_regr/gp_regr.stan 0.22 0.22 1.0 0.27% faster
irt_2pl/irt_2pl.stan 6.51 6.44 1.01 1.05% faster
performance.compilation 87.49 86.49 1.01 1.15% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan 7.54 7.57 1.0 -0.37% slower
pkpd/one_comp_mm_elim_abs.stan 21.02 21.04 1.0 -0.09% slower
sir/sir.stan 94.13 95.66 0.98 -1.63% slower
gp_regr/gen_gp_data.stan 0.05 0.05 1.01 1.18% faster
low_dim_gauss_mix/low_dim_gauss_mix.stan 2.95 2.95 1.0 -0.04% slower
pkpd/sim_one_comp_mm_elim_abs.stan 0.32 0.31 1.06 5.31% faster
arK/arK.stan 1.74 1.74 1.0 -0.06% slower
arma/arma.stan 0.66 0.66 1.0 0.3% faster
garch/garch.stan 0.51 0.51 1.0 -0.11% slower
Mean result: 1.00067634968

Jenkins Console Log
Blue Ocean
Commit hash: 8386451



@t4c1 (Contributor Author) commented Mar 17, 2020:

@SteveBronder This is ready for review.

SteveBronder previously approved these changes Mar 17, 2020
@SteveBronder (Collaborator) left a comment:

Two optional comments but looks good to me!

Comment on lines 21 to 24
inline value_type_t<T> determinant(const T& m) {
check_square("determinant", "m", m);
if (m.size() == 0) {
return 1;
Collaborator:

Two things here that I think are both optional

  1. Looking at Eigen's determinant code, it looks like they check for a 0 size as well, so I don't think we need the m.size() check here:

https://gitlab.com/libeigen/eigen/-/blob/master/Eigen/src/LU/Determinant.h#L32

  static inline typename traits<Derived>::Scalar run(const Derived& m)
  {
    if(Derived::ColsAtCompileTime==Dynamic && m.rows()==0)
      return typename traits<Derived>::Scalar(1);
    return m.partialPivLu().determinant();
  }
  2. If we keep the if, then I think doing auto at the top and static_cast<value_type_t<T>>(1) would enable RVO here.

@t4c1 (Contributor Author):

Caring about RVO when returning a scalar seems a bit pointless. I will check whether removing the if breaks anything.

Collaborator:

Yeah, I don't think it's a big deal either, though I think it's good for style. That's why it's optional.

stan/math/prim/fun/squared_distance.hpp (resolved)
@SteveBronder (Collaborator) left a comment:

lgtm!

@stan-buildbot (Contributor) commented:


Name Old Result New Result Ratio Performance change (1 - new/old)
gp_pois_regr/gp_pois_regr.stan 4.78 4.92 0.97 -2.74% slower
low_dim_corr_gauss/low_dim_corr_gauss.stan 0.02 0.02 0.96 -4.08% slower
eight_schools/eight_schools.stan 0.09 0.09 1.01 1.36% faster
gp_regr/gp_regr.stan 0.22 0.22 1.02 1.49% faster
irt_2pl/irt_2pl.stan 6.49 6.43 1.01 0.86% faster
performance.compilation 87.26 86.82 1.01 0.51% faster
low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan 7.52 7.58 0.99 -0.78% slower
pkpd/one_comp_mm_elim_abs.stan 20.75 20.96 0.99 -1.03% slower
sir/sir.stan 95.05 92.0 1.03 3.21% faster
gp_regr/gen_gp_data.stan 0.05 0.05 0.98 -1.83% slower
low_dim_gauss_mix/low_dim_gauss_mix.stan 2.98 2.95 1.01 0.81% faster
pkpd/sim_one_comp_mm_elim_abs.stan 0.34 0.31 1.09 8.06% faster
arK/arK.stan 1.73 1.74 0.99 -0.8% slower
arma/arma.stan 0.66 0.65 1.01 0.91% faster
garch/garch.stan 0.52 0.51 1.01 1.03% faster
Mean result: 1.0054267344

Jenkins Console Log
Blue Ocean
Commit hash: 44fdfd7



@t4c1 t4c1 merged commit 3a66a33 into stan-dev:develop Mar 18, 2020
@t4c1 t4c1 deleted the generalize_fun_cr_di branch November 30, 2020 09:25