
Any reason ByteBuffer overloads were removed? #197

Closed
dustContributor opened this issue Jun 11, 2016 · 11 comments

@dustContributor commented Jun 11, 2016

I just moved from lwjgl 3.0.0 build 44 to 3.0.1 build 02, and I'm missing a few overloaded functions with ByteBuffer parameters like:

  • GL11.glGetFloatv(int, ByteBuffer)
  • GL11.glGetIntegerv(int, ByteBuffer)
  • GL20.glDrawBuffers(int, ByteBuffer)
  • ARBGetProgramBinary.glGetProgramBinary(int, int, ByteBuffer, ByteBuffer, ByteBuffer)

Any reason for this? They're really handy because I don't deal with views over the byte buffers, which lets me avoid tons of tiny specialized int/float buffers.

EDIT: I can work around this by using ngl* variants directly with MemoryUtil.memAddress(ByteBuffer) as the last parameter, but I don't know if this is intended.
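For illustration, the workaround looks like this (a sketch; memAlloc/memFree just stand in for however the ByteBuffer is obtained):

ByteBuffer buf = MemoryUtil.memAlloc( 4 * Integer.BYTES );
// Unsafe variant: takes a raw address instead of the removed ByteBuffer overload.
GL11.nglGetIntegerv( GL11.GL_VIEWPORT, MemoryUtil.memAddress( buf ) );
int viewportWidth = buf.getInt( 2 * Integer.BYTES ); // GL_VIEWPORT[2] = width
MemoryUtil.memFree( buf );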

EDIT2: Also some other stuff that I found... inconsistent.

For example, glGetBoolean moved from returning 'boolean' to returning 'byte' (for comparing against GL_TRUE I guess). But glfwInit moved from returning GL_TRUE to returning 'boolean'. Is this mismatch intended?

EDIT3: Really nice that NativeResource implements Closeable; now Eclipse warns me if any resource wasn't closed. I hadn't noticed I was leaking a GLFWVidMode instance 😄

@httpdigest (Member)

Really nice that NativeResource implements Closeable; now Eclipse warns me if any resource wasn't closed. I hadn't noticed I was leaking a GLFWVidMode instance

Attention. You are not leaking this java.lang.AutoCloseable. It is allocated by GLFW. Freeing it will potentially crash your application. Better disable this warning in Eclipse. See: #186 (comment)
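In other words, a pattern like this is correct as-is, with no close() anywhere (a sketch, assuming GLFW is already initialized):

// The returned struct is a view over memory that GLFW owns and frees itself.
long monitor = GLFW.glfwGetPrimaryMonitor();
GLFWVidMode mode = GLFW.glfwGetVideoMode( monitor );
int refreshRate = mode.refreshRate();
// No mode.close()/mode.free() here: that would free GLFW's own memory.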

@dustContributor (Author)

lol thanks!

@dustContributor (Author) commented Jun 11, 2016

Also, there seems to be a method missing, like:

  • GL13.glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, int imageSize, ByteBuffer data)

Moreover, when I switched to the existing overload with a 'long' data pointer parameter, that one does this check for some reason:

GLChecks.ensureBufferObject(GL21.GL_PIXEL_UNPACK_BUFFER_BINDING, true)

So now my texture code raises an exception. No idea why it's checking for the pixel unpack buffer; I'm just uploading DXT data to the GPU :/ EDIT: Well, technically I'm just allocating the texture storage, passing a null pointer.

If I switch to the ngl* version, which does no checks, everything works as intended.

EDIT2: If I switch to the overload that doesn't receive an 'imageSize' parameter (the one that checks that no pixel unpack buffer is bound), then I just get brown blobs as textures.

@Spasi (Member) commented Jun 12, 2016

The single value overload of glGetBoolean returning byte instead of boolean is a bug. The rest are as designed.

Any reason for this? They're really handy because I don't deal with views over the byte buffers, which lets me avoid tons of tiny specialized int/float buffers.

EDIT: I can work around this by using ngl* variants directly with MemoryUtil.memAddress(ByteBuffer) as the last parameter, but I don't know if this is intended.

Also, there seems to be a method missing, like:

GL13.glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, int imageSize, ByteBuffer data)

For a long time LWJGL 3 was generating a "base" version of functions with pointer parameters. This version did no transformations (e.g. automatically passing buffer size parameters) and all pointer parameters were mapped to ByteBuffer, regardless of type. This base version also had the javadoc for the corresponding function; all other overloads were referencing this base version.

After 3.0.0b and the introduction of MemoryStack and the work on optimizing buffer instance creation for escape analysis, there wasn't really an important reason to keep such "base" versions. Sure, there may be times that using a single ByteBuffer for different data types is convenient, but they should be very few in any codebase and can easily be implemented using the unsafe version (with the 'n' prefix) and memAddress.
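For example, the removed GL20.glDrawBuffers(int, ByteBuffer) base version reduces to a one-liner over the unsafe method (a sketch):

// The ByteBuffer holds GLenum values as 4-byte ints, so count = remaining() / 4.
ByteBuffer bufs = MemoryUtil.memAlloc( 2 * Integer.BYTES );
bufs.putInt( GL30.GL_COLOR_ATTACHMENT0 ).putInt( GL30.GL_COLOR_ATTACHMENT1 ).flip();
GL20.nglDrawBuffers( bufs.remaining() >> 2, MemoryUtil.memAddress( bufs ) );
MemoryUtil.memFree( bufs );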

So, the decision was made to:

  • Drop the base versions. This removed hundreds of (rarely used) methods from the public API, which is always nice. Better to ship a 3.0.0 that is as lightweight as possible, then add stuff later if there's good reason to.
  • Since we have fewer overloads now, javadoc is replicated to all of them.

Most existing use-cases can be replaced with stack allocations (using MemoryStack). In general, I now consider reusing buffers or caching buffer instances globally an anti-pattern, rethink your approach if you do this often. It's very likely that you've complicated the code with no real benefits (including performance).
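For example, a typical single-value query becomes (a sketch):

// A stack allocation instead of a reused/cached buffer; nothing escapes the frame.
try ( MemoryStack stack = MemoryStack.stackPush() ) {
    IntBuffer viewport = stack.mallocInt( 4 );
    GL11.glGetIntegerv( GL11.GL_VIEWPORT, viewport );
    int width = viewport.get( 2 );
} // popped here, no cleanup to forget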

Moreover, when I switched to the existing overload with a 'long' data pointer parameter, that one does this check for some reason:

GLChecks.ensureBufferObject(GL21.GL_PIXEL_UNPACK_BUFFER_BINDING, true)

So now my texture code raises an exception. No idea why it's checking for the pixel unpack buffer; I'm just uploading DXT data to the GPU :/ EDIT: Well, technically I'm just allocating the texture storage, passing a null pointer.

This isn't new and is actually compatible with LWJGL 2. Functions that interact with buffer objects have two versions: one that accepts NIO buffers in client memory and one that accepts an offset into the buffer bound to the appropriate buffer object binding.
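Sketch of the two versions, using glCompressedTexImage2D as the example (pbo and the texture arguments are assumed to be defined elsewhere):

// PBO bound: 'data' is an offset into GL_PIXEL_UNPACK_BUFFER, raw imageSize is fine.
GL15.glBindBuffer( GL21.GL_PIXEL_UNPACK_BUFFER, pbo );
GL13.glCompressedTexImage2D( target, level, internalformat, w, h, 0, imageSize, 0L );

// No PBO: imageSize and the pointer are derived from the ByteBuffer itself.
GL15.glBindBuffer( GL21.GL_PIXEL_UNPACK_BUFFER, 0 );
GL13.glCompressedTexImage2D( target, level, internalformat, w, h, 0, data );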

If I switch to the ngl* version, which does no checks, everything works as intended.

Yes, the binding check is the only difference between these methods. But the ngl method is inherently unsafe and might not do what you think it does (see next answer).

EDIT2: If I switch to the overload that doesn't receive an 'imageSize' parameter (the one that checks that no pixel unpack buffer is bound), then I just get brown blobs as textures.

Yes, because the imageSize parameter is ignored when the data parameter is NULL. What happens is that a texture is created with unspecified contents. So, there are 3 cases:

  • You have the compressed texture data in a PBO and you use glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, int imageSize, long data).
  • You have the compressed texture data in client memory and you use glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, ByteBuffer data) with a non-null data argument.
  • You don't have the compressed texture data and you use glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, ByteBuffer data) with a null data argument.

There's no reason to use the unsafe method (unless you're doing pointer arithmetic or something and want to pass a raw address to data).

@Spasi (Member) commented Jun 12, 2016

Attention. You are not leaking this java.lang.AutoCloseable. It is allocated by GLFW. Freeing it will potentially crash your application. Better disable this warning in Eclipse.

Idea: we could identify structs that are exclusively managed externally and not make them AutoCloseable. We could do that by moving the NativeResource interface down to the concrete classes. For example:

public abstract class Struct extends Pointer.Default // NativeResource removed
public class GLFWGammaRamp extends Struct implements NativeResource // NativeResource added
public class GLFWVidMode extends Struct // no NativeResource

@dustContributor (Author)

In general, I now consider reusing buffers or caching buffer instances globally an anti-pattern, rethink your approach if you do this often. It's very likely that you've complicated the code with no real benefits (including performance).

Huh? I have a stack allocator implemented in Java that returns ByteBuffer instances: a single allocation at application startup, then I just hand out slices as needed; a similar design to the stack allocator you yourself said was faster than using jemalloc. It's way simpler code than manipulating type-specific views; I already went through that. I had the allocator before the jemalloc bindings existed, and it worked fine for two years, until right now.

In any case, if it is intended, I'll just use the 'long' pointer overloads then. Easy enough to do with MemoryUtil. No problem.

Yes, because the imageSize parameter is ignored when the data parameter is NULL. What happens is that a texture is created with unspecified contents.

I know. I'm doing what you'd do if you had ARB_texture_storage, for example: do the allocation first for all the mipmap levels, then upload the data later. I am passing null/0L to the ngl* call, and that works too.

What I am saying is that when I use the overload that doesn't have imageSize, which you're saying I should be using since I'm not uploading the data at that time, something breaks: the texture data I later upload doesn't get shown.

If I specify imageSize, it works; if I don't, it doesn't. And the only way I can specify imageSize without LWJGL raising exceptions is the ngl* call.

That GL_PIXEL_UNPACK_BUFFER "true" check shouldn't be there, or it might be a driver thing; the docs don't say anything about "ignoring" imageSize at all, for example.

The single value overload of glGetBoolean returning byte instead of boolean is a bug

Ah okay.

Idea: we could identify structs that are exclusively managed externally and not make them AutoCloseable.

Sounds good to me.

@Spasi (Member) commented Jun 12, 2016

Huh?

Please note that I was talking in general, not about you specifically.

I have a stack allocator implemented in Java that returns ByteBuffer instances: a single allocation at application startup, then I just hand out slices as needed; a similar design to the stack allocator you yourself said was faster than using jemalloc. It's way simpler code than manipulating type-specific views; I already went through that. I had the allocator before the jemalloc bindings existed, and it worked fine for two years, until right now.

Have you tried replacing it with LWJGL's MemoryStack? It's simple (thread-local), optimized (also handles alignment) and comes with a rich API (instance & static, zeroing or not, includes typed buffers, array parameters, string encoding). Note that LWJGL creates typed buffers directly, there's no intermediate ByteBuffer instance created.
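A taste of what that looks like (a sketch; the ints/floats/UTF8 helpers are from the MemoryStack API as I know it, treat the exact names as assumptions):

try ( MemoryStack stack = MemoryStack.stackPush() ) {
    IntBuffer   ids   = stack.ints( 1, 2, 3 );           // typed buffer from varargs
    FloatBuffer color = stack.floats( 0f, 0f, 0f, 1f );  // no intermediate ByteBuffer
    ByteBuffer  title = stack.UTF8( "My Window" );       // NUL-terminated UTF-8
    // pass these to any binding that accepts NIO buffers
}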

What I am saying is that when I use the overload that doesn't have imageSize, which you're saying I should be using since I'm not uploading the data at that time, something breaks: the texture data I later upload doesn't get shown.

If I specify imageSize, it works; if I don't, it doesn't. And the only way I can specify imageSize without LWJGL raising exceptions is the ngl* call.

OK, I'll look into that. Indeed, if you do have to specify imageSize and NULL client memory, the only way to do that is via the ngl call. What GPU/driver are you testing on? How do you later specify the texture data? With glCompressedTexSubImage2D or glCompressedTexImage2D again?

@dustContributor (Author)

Have you tried replacing it with LWJGL's MemoryStack?

Haven't had the time. Working on an editor right now. It's on the backlog though 😄

What GPU/driver are you testing on? How do you later specify the texture data?

GTX680, driver 364.12, Debian x64. OpenGL 3.3 core.

Allocation code is the following:

private void allocCompressed ( final int tmpTarget ) {
    int w = width;
    int h = height;
    // This depends on the DXT compression used.
    final int blockSize = this.compressedBlockSize;

    for ( int i = this.baseLevel; i <= this.maxLevel; ++i ) {
        // Calculating the size of the current mip map.
        final int imgSize = blockSize * ((w + 3) / 4) * ((h + 3) / 4);
        // This would be the call that worked on lwjgl 3.0.0 build 44 
        glCompressedTexImage2D( tmpTarget, i, this.internalFormat, w, h, 0, imgSize, null );

        // Further mip maps will be power of two.
        w = max( prevPowerOfTwo( w - 1 ), 1 );
        h = max( prevPowerOfTwo( h - 1 ), 1 );
    }
}

Then the code for the uploading is a bit more involved, but for each mipmap level it uses:

public static void glCompressedTexSubImage2D(int target, int level, int xoffset, int yoffset, int width, int height, int format, ByteBuffer data)  { /* stuff  */ }

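Roughly, the upload loop mirrors the allocation loop (a sketch; 'mipData' is a placeholder for the ByteBuffer slices read from the DDS file):

private void uploadCompressed ( final int tmpTarget ) {
    int w = width;
    int h = height;

    for ( int i = this.baseLevel; i <= this.maxLevel; ++i ) {
        // One slice of compressed blocks per mipmap level.
        glCompressedTexSubImage2D( tmpTarget, i, 0, 0, w, h, this.internalFormat, mipData[i] );

        w = max( prevPowerOfTwo( w - 1 ), 1 );
        h = max( prevPowerOfTwo( h - 1 ), 1 );
    }
}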

@Spasi (Member) commented Jun 12, 2016

Tried it, passing a wrong imageSize (or 0) to glCompressedTexImage2D causes an OpenGL error and the texture is not initialized.

This is indeed a legitimate case in which the removed overloads would be useful. However, I still don't think it's worth restoring all of them. Instead, I propose the addition of new methods in MemoryUtil that create buffers at NULL (which is currently prohibited). So the above case becomes:

glCompressedTexImage2D( tmpTarget, i, this.internalFormat, w, h, 0, memNullByteBuffer(imgSize) );

where memNullByteBuffer(imgSize) is the new method that is the equivalent of memByteBuffer(NULL, imgSize), but returns a non-null instance. I'm open to suggestions if you don't like the name memNull<Type>Buffer.

@dustContributor (Author) commented Jun 13, 2016

The issue on that particular call isn't that there is an overload missing, but that it checks for GL_PIXEL_UNPACK_BUFFER_BINDING when it shouldn't.

This works because LWJGL does no checks inside:

nglCompressedTexImage2D( tmpTarget, i, internalFormat, w, h, 0, imgSize, 0L );

This doesn't, because LWJGL itself throws an IllegalStateException, not because it's an invalid OpenGL call:

glCompressedTexImage2D( tmpTarget, i, internalFormat, w, h, 0, imgSize, 0L );

The implementation does this:

public static void glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, int imageSize, long data) {
    if ( CHECKS )
        GLChecks.ensureBufferObject(GL21.GL_PIXEL_UNPACK_BUFFER_BINDING, true);
    nglCompressedTexImage2D(target, level, internalformat, width, height, border, imageSize, data);
}

The gl call is checking for state that isn't needed, whereas the ngl call with the same arguments works fine. It just happens that the removed overload with ByteBuffer didn't have the GL_PIXEL_UNPACK_BUFFER_BINDING check (or checked for 'false' instead of 'true').

LWJGL is preventing me from making a perfectly valid OpenGL call.

@Spasi (Member) commented Jun 13, 2016

I understand the problem and as I said, the existing methods are fully compatible with LWJGL 2, which did the exact same checks. To understand why:

  • PBO enabled + illegal size/offset => OpenGL error
  • PBO disabled + illegal size/pointer => JVM crash

So the idea is that, when a PBO is enabled, LWJGL allows the user to specify raw values for size and data, because it's safe. When no PBO is enabled, LWJGL only allows a buffer argument, which is again safe: size and data are populated automatically from the buffer's remaining() and memory address. You cannot crash the JVM by mistake.

LWJGL 2 also had an overload with just size and no data (NULL was passed automatically). The problem is that something like that is missing from LWJGL 3. We agree that we must fix that. So, 4 options:

  1. Do not add an overload and remove the checks.
  2. Do not add an overload and use something like my memNullByteBuffer suggestion.
  3. Add an overload with both size and data and no checks.
  4. Add an overload with size only.

1 & 3 are unsafe solutions (will crash the JVM when not used correctly).

Also note that this particular check is only enabled in debug mode.
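For reference, debug mode can be enabled with the org.lwjgl.util.Debug system property, or programmatically; a sketch, assuming the org.lwjgl.system.Configuration API:

// Must run before any other LWJGL class is initialized.
// Equivalent to launching with -Dorg.lwjgl.util.Debug=true.
Configuration.DEBUG.set( true );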

Spasi added a commit that referenced this issue Jun 13, 2016
Spasi closed this as completed in ce4c70b Jun 13, 2016
Spasi added a commit that referenced this issue Jun 17, 2016: …ructs

The NativeResource interface has been moved from Struct/StructBuffer to the concrete subclasses that need it. This resolves the last issue reported in #197.