
Calling Standard Library functions from __device__ code #547

Closed
taketwo opened this issue Sep 10, 2016 · 4 comments
taketwo commented Sep 10, 2016

This is similar to #546.

After including the "gtx/io.hpp" header into NVCC-compiled code I got another handful of warnings. For example:

/home/sergey/third-party/glm/src/glm/gtx/io.inl(99): warning: calling a __host__ function from a __host__ __device__ function is not allowed
          detected during:
            instantiation of "const FTy &glm::io::get_facet<FTy,CTy,CTr>(std::basic_ios<CTy, CTr> &) [with FTy=glm::io::format_punct<char>, CTy=char, CTr=std::char_traits<char>]"

The offending code fragment is:

template <typename FTy, typename CTy, typename CTr>
GLM_FUNC_QUALIFIER FTy const& get_facet(std::basic_ios<CTy, CTr>& ios)
{
    if(!std::has_facet<FTy>(ios.getloc()))
        ios.imbue(std::locale(ios.getloc(), new FTy));
    return std::use_facet<FTy>(ios.getloc());
}

Indeed, functions from the Standard Library are __host__ only. In any case, functions from the IO module are presumably not meant to run on the GPU. Should we remove the GLM_FUNC_QUALIFIER from them?

EDIT Actually, the functions still need to be inlined, so rather replace GLM_FUNC_QUALIFIER with GLM_INLINE.

Groovounet (Member) commented:

Shouldn't it just be an error to include this file in CUDA? ostream stuff in a CUDA kernel?

Groovounet (Member) commented:

This issue should be fixed in GLM 0.9.8 and master branches.

That said, I expect that std IO function calls will not play nice with CUDA kernels.

Thanks,
Christophe


taketwo commented Sep 11, 2016

Shouldn't it just be an error to include this file in CUDA? ostream stuff in a CUDA kernel?

Sure, it's an error to call these functions from device code. But to call them from host code (even compiled with NVCC) should be just fine.

Thanks for the fixes. The number of warnings has dropped, though I still get a few of this form:

/home/sergey/third-party/glm/src/glm/gtx/io.inl(14): warning: a __host__ function("glm::io::format_punct<CTy>::format_punct") redeclared with __host__ __device__, hence treated as a __host__ __device__ function

Groovounet added a commit that referenced this issue Sep 11, 2016

taketwo commented Sep 20, 2016

Unfortunately, this is still not fixed in master. Here is a simple way to reproduce: in the root of the "glm" repository, paste the following into a "test.cu" file:

#include <iostream>
#include "glm/gtx/io.hpp"

int main(int argc, const char** argv)
{
  glm::vec3 v;
  std::cout << v;
  return 0;
}

Then compile with nvcc -c test.cu. You should get many warnings of two types:

  • a __host__ function redeclared with __host__ __device__, hence treated as a __host__ __device__ function
  • calling a __host__ function from a __host__ __device__ function is not allowed
