
Regression in clad master in dealing with templated functions #922

Closed
guitargeek opened this issue Jun 6, 2024 · 2 comments · Fixed by #928

Comments

@guitargeek
Contributor

This was noticed when running the RooFit unit tests with clad master.

Reproducer (a ROOT macro, but it should be easy to turn into a compiled executable):

// Headers needed when building this outside the ROOT prompt:
#include <cmath>
#include <iostream>
#include <vector>

template <bool pdfMode>
inline double polynomial(double const *coeffs, int nCoeffs, int lowestOrder, double x)
{
   double retVal = coeffs[nCoeffs - 1];
   for (int i = nCoeffs - 2; i >= 0; i--)
      retVal = coeffs[i] + x * retVal;
   retVal = retVal * std::pow(x, lowestOrder);
   return retVal + (pdfMode && lowestOrder > 0 ? 1.0 : 0.0);
}

double roo_func_wrapper_4(double *params)
{
   double t4[] = {params[0], params[1], 1.};
   const double t5 = polynomial<false>(t4, 3, 0, 1.);
   return t5;
}
#include <Math/CladDerivator.h>

#pragma clad ON
void roo_func_wrapper_4_req()
{
   clad::gradient(roo_func_wrapper_4, "params");
}
#pragma clad OFF

void reproducer()
{
   std::vector<double> parametersVec = {-0.5, -0.5, 0.5};

   std::vector<double> gradientVec(parametersVec.size());

   auto wrapper = [&](double *params) { return roo_func_wrapper_4(params); };

   std::cout << roo_func_wrapper_4(parametersVec.data()) << std::endl;
   roo_func_wrapper_4_grad(parametersVec.data(), gradientVec.data());

   std::cout << "Clad diff:" << std::endl;
   std::cout << gradientVec[0] << std::endl;
   std::cout << gradientVec[1] << std::endl;
   std::cout << gradientVec[2] << std::endl;

   auto numDiff = [&](int i) {
      const double eps = 1e-6;
      std::vector<double> p{parametersVec};
      p[i] = parametersVec[i] - eps;
      double nllValDown = wrapper(p.data());
      p[i] = parametersVec[i] + eps;
      double nllValUp = wrapper(p.data());
      return (nllValUp - nllValDown) / (2 * eps);
   };

   std::cout << "Num diff:" << std::endl;
   std::cout << numDiff(0) << std::endl;
   std::cout << numDiff(1) << std::endl;
   std::cout << numDiff(2) << std::endl;
}

Output:

0
Clad diff:
0
0
0
Num diff:
1
1
0
@vgvassilev
Owner

@guitargeek, do you have an idea which commit broke it? There are 17 or so since the tag...

@vaithak
Collaborator

vaithak commented Jun 6, 2024

@PetroZarytskyi I just checked that this regression was introduced in #904. I have created a very minimal reproducer below. Can you take a look?

#include "clad/Differentiator/Differentiator.h"

double f(double x) {
  return x + (x > 0 ? 1.0 : 0.0);
}

int main() {
  auto f_dx = clad::gradient(f);
  double dx = 0;
  f_dx.execute(3, &dx);
  std::cout << dx << std::endl;
  return 0;
}
