
interop demos both segfault #217

Closed
whbdupree opened this issue Jan 5, 2018 · 5 comments

@whbdupree

My clinfo output: https://pastebin.com/sBhe9ceC

I notice that clinfo reports far more extensions than pyopencl does:

In [15]: p
Out[15]: <pyopencl.Platform 'AMD Accelerated Parallel Processing' at 0x7f1d7bb18510>

In [16]: p.extensions
Out[16]: 'cl_khr_icd cl_amd_event_callback cl_amd_offline_devices '
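[Editor's note] GL interop requires the cl_khr_gl_sharing extension (or cl_APPLE_gl_sharing on macOS) to appear in the reported extension string, and it is typically listed per device, so `Device.extensions` is worth checking as well as `Platform.extensions`. A minimal sketch of such a check, using the platform string from above (`supports_gl_sharing` is a hypothetical helper name, not a pyopencl API):

```python
# Sketch: check an OpenCL extension string for GL-sharing support.
def supports_gl_sharing(ext_string):
    exts = ext_string.split()
    return "cl_khr_gl_sharing" in exts or "cl_APPLE_gl_sharing" in exts

# The platform extension string reported above:
platform_exts = "cl_khr_icd cl_amd_event_callback cl_amd_offline_devices "
print(supports_gl_sharing(platform_exts))  # False
```

In pyopencl the same check could be run on `device.extensions` for each device returned by `platform.get_devices()`.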

@inducer
Owner

inducer commented Jan 5, 2018

Could you provide tracebacks of the segfaults?

@inducer
Owner

inducer commented Jan 11, 2018

Closing for lack of activity and clear reproducible bug. Reopen if this is still an issue.

@inducer inducer closed this as completed Jan 11, 2018
@s-ol
Contributor

s-ol commented May 22, 2018

It seems this is the same issue I am hitting, running Arch Linux 4.16.8 with a GTX 970.
This happens with my own code as well as with the two interop examples from this repository.
Unlike #177, my card does report support for cl_khr_gl_sharing (see clinfo output below).
I just saw the half-fix suggested there but won't get to try it until tomorrow at the earliest; the segfault is a bug either way, though.

gdb traceback:

Program received signal SIGSEGV, Segmentation fault.
0x00007fffecb25c95 in ?? () from /usr/lib/libGLX_nvidia.so.0
(gdb) bt
#0  0x00007fffecb25c95 in ?? () from /usr/lib/libGLX_nvidia.so.0
#1  0x00007fffeb909ccd in ?? () from /usr/lib/libnvidia-glcore.so.396.24
#2  0x00007fffecb4b6a3 in glcuR0d4nX () from /usr/lib/libGLX_nvidia.so.0
#3  0x00007fffe8cb6b49 in ?? () from /usr/lib/libnvidia-opencl.so.1
#4  0x00007fffe8bb1825 in ?? () from /usr/lib/libnvidia-opencl.so.1
#5  0x00007fffe8bb146b in ?? () from /usr/lib/libnvidia-opencl.so.1
#6  0x00007ffff0b32e27 in create_context_from_type () from /usr/lib/python3.6/site-packages/pyopencl/_cffi.abi3.so
#7  0x00007ffff0b1d91c in ?? () from /usr/lib/python3.6/site-packages/pyopencl/_cffi.abi3.so
#8  0x00007ffff7420343 in _PyCFunction_FastCallDict () from /usr/lib/libpython3.6m.so.1.0
#9  0x00007ffff73e984e in ?? () from /usr/lib/libpython3.6m.so.1.0
#10 0x00007ffff73ad0fa in _PyEval_EvalFrameDefault () from /usr/lib/libpython3.6m.so.1.0
#11 0x00007ffff73e7a99 in ?? () from /usr/lib/libpython3.6m.so.1.0

clinfo output:

Number of platforms                               1
  Platform Name                                   NVIDIA CUDA
  Platform Vendor                                 NVIDIA Corporation
  Platform Version                                OpenCL 1.2 CUDA 9.2.101
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
  Platform Extensions function suffix             NV

  Platform Name                                   NVIDIA CUDA
Number of devices                                 1
  Device Name                                     GeForce GTX 970
  Device Vendor                                   NVIDIA Corporation
  Device Vendor ID                                0x10de
  Device Version                                  OpenCL 1.2 CUDA
  Driver Version                                  396.24
  Device OpenCL C Version                         OpenCL C 1.2 
  Device Type                                     GPU
  Device Topology (NV)                            PCI-E, 01:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               13
  Max clock frequency                             1253MHz
  Compute Capability (NV)                         5.2
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x64
  Max work group size                             1024
  Preferred work group size multiple              32
  Warp size (NV)                                  32
  Preferred / native vector sizes                 
    char                                                 1 / 1       
    short                                                1 / 1       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 0 / 0        (n/a)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              4236115968 (3.945GiB)
  Error Correction support                        No
  Max memory allocation                           1059028992 (1010MiB)
  Unified memory for Host and Device              No
  Integrated memory (NV)                          No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       4096 bits (512 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        212992 (208KiB)
  Global Memory cache line size                   128 bytes
  Image support                                   Yes
    Max number of samplers per kernel             32
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             4096x4096x4096 pixels
    Max number of read image args                 256
    Max number of write image args                16
  Local memory type                               Local
  Local memory size                               49152 (48KiB)
  Registers per block (NV)                        65536
  Max number of constant args                     9
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     4352 (4.25KiB)
  Queue properties                                
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Kernel execution timeout (NV)                 Yes
  Concurrent copy and kernel execution (NV)       Yes
    Number of async copy engines                  2
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  NVIDIA CUDA
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [NV]
  clCreateContext(NULL, ...) [default]            Success [NV]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  No platform

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.12
  ICD loader Profile                              OpenCL 2.2

@s-ol
Contributor

s-ol commented May 23, 2018

Okay, the workaround from #177 is not helping me. Moving the cl.get_platforms() call out of initialize like so:

import pyopencl as cl

platform = cl.get_platforms()[0]

def initialize():
    from pyopencl.tools import get_gl_sharing_context_properties
    import sys
    if sys.platform == "darwin":
        ctx = cl.Context(properties=get_gl_sharing_context_properties(),
                devices=[])
    else:
        # Some OSs prefer clCreateContextFromType, some prefer
        # clCreateContext. Try both.
        try:
            ctx = cl.Context(properties=[
                (cl.context_properties.PLATFORM, platform)]
                + get_gl_sharing_context_properties())
        except cl.Error:
            ctx = cl.Context(properties=[
                (cl.context_properties.PLATFORM, platform)]
                + get_gl_sharing_context_properties(),
                devices=[platform.get_devices()[0]])

yields the same result:

Program received signal SIGSEGV, Segmentation fault.
0x00007f289e555c95 in ?? () from /usr/lib/libGLX_nvidia.so.0
(gdb) bt
#0  0x00007f289e555c95 in ?? () from /usr/lib/libGLX_nvidia.so.0
#1  0x00007f289d339ccd in ?? () from /usr/lib/libnvidia-glcore.so.396.24
#2  0x00007f289e57b6a3 in glcuR0d4nX () from /usr/lib/libGLX_nvidia.so.0
#3  0x00007f28b26fbb49 in ?? () from /usr/lib/libnvidia-opencl.so.1
#4  0x00007f28b25f6825 in ?? () from /usr/lib/libnvidia-opencl.so.1
#5  0x00007f28b25f646b in ?? () from /usr/lib/libnvidia-opencl.so.1
#6  0x00007f28b503ae27 in create_context_from_type () from /usr/lib/python3.6/site-packages/pyopencl/_cffi.abi3.so
#7  0x00007f28b502591c in ?? () from /usr/lib/python3.6/site-packages/pyopencl/_cffi.abi3.so
#8  0x00007f28be9c95c0 in _PyCFunction_FastCallDict () from /usr/lib/libpython3.6m.so.1.0
#9  0x00007f28be9969fb in ?? () from /usr/lib/libpython3.6m.so.1.0
...

Additionally, moving the get_devices() call out doesn't help either, and the segfault happens no matter which of the two constructor variants I call (with or without devices).
@inducer please let me know if there's more information I can provide (coredumps etc.).

@s-ol
Contributor

s-ol commented May 25, 2018

I made some progress by logging c_props in cffi_cl.py:798 (Context.__init__):

[
    (4228, <pyopencl.Platform 'NVIDIA CUDA' at 0x55ab824952e0>),
    (8200, -2112597320),
    (8202, <OpenGL.raw.GLX._types.LP_struct__XDisplay object at 0x7f9918567510>)
]
cprops:  [4228, 94195113612000, 8200, -2112597320, 8202, 94195109622912, 0]

It seems that the pointer conversion of the GL context is failing (8200 is CL_GL_CONTEXT_KHR); in the C++ interop demo I am comparing against, this entry holds a value similar to the other two pointers instead.
Actually, no: the -2112597320 value comes from get_gl_sharing_context_properties() and is already wrong there.
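[Editor's note] On a 64-bit system a GLX context handle is a pointer, and a negative value like -2112597320 looks like such a pointer truncated to a signed 32-bit integer. One plausible failure mode (a sketch, not a confirmed diagnosis of pyopencl's wrapper) is a foreign-function return type declared as a 32-bit int, which is e.g. what ctypes defaults to when restype is left unset. The truncation can be emulated like this (the pointer value is hypothetical):

```python
# Hypothetical 64-bit GLX context pointer; any address whose low
# 32-bit word has its top bit set will show the symptom.
ctx_ptr = 0x00007F9982158B38

# If the wrapper declares a 32-bit signed int return type, the
# pointer is cut down to its low 32 bits and reinterpreted as
# signed, yielding a bogus negative "handle":
low = ctx_ptr & 0xFFFFFFFF
truncated = low - 2**32 if low >= 2**31 else low

print(truncated)  # -2112517320: negative garbage, much like the value above
```

The upper pointer bits are unrecoverable from the truncated value, so any later use of it as a context handle dereferences garbage, consistent with a segfault inside libGLX_nvidia.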
