Any reason ByteBuffer overloads were removed? #197
Attention. You are not leaking this java.lang.AutoCloseable. It is allocated by GLFW. Freeing it will potentially crash your application. Better disable this warning in Eclipse. See: #186 (comment)
lol thanks!
Also, there seems to be a method missing, like:
Moreover, when I switched to the existing overload with a 'long' data pointer parameter, that one does this check for some reason: GLChecks.ensureBufferObject(GL21.GL_PIXEL_UNPACK_BUFFER_BINDING, true). So now my texture code raises an exception. No idea why it's checking for the pixel unpack buffer, I'm just uploading DXT data to the GPU :/

EDIT: Well, technically I'm just allocating the buffer, passing a null pointer. If I switch to the ngl* version, which does no checks, everything works as intended.

EDIT2: If I switch to the overload that doesn't receive an 'imageSize' parameter and that checks for pixel unpack buffer false, then I just get brown blobs as textures.
The single value overload of
For a long time LWJGL 3 was generating a "base" version of functions with pointer parameters. This version did no transformations (e.g. automatically passing buffer size parameters) and all pointer parameters were mapped to 'long'. After 3.0.0b and the introduction of MemoryStack, the decision was made to:
Most existing use-cases can be replaced with stack allocations (using MemoryStack).
This isn't new and is actually compatible with LWJGL 2. Functions that interact with buffer objects have two versions: one that accepts NIO buffers in client memory and one that accepts an offset into the buffer bound to the appropriate buffer object binding.
Yes, the binding check is the only difference between these methods. But the
Yes, because the
There's no reason to use the unsafe method (unless you're doing pointer arithmetic or something and want to pass a raw address).
Idea: we could identify structs that are exclusively managed externally and not make them NativeResource:

public abstract class Struct extends Pointer.Default // NativeResource removed
public class GLFWGammaRamp extends Struct implements NativeResource // NativeResource added
public class GLFWVidMode extends Struct // no NativeResource
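In plain Java, that split could be sketched like this (a hypothetical stand-in that mirrors the declarations quoted above, not LWJGL's actual code): only structs the user allocates implement NativeResource, so leak warnings apply to them alone.

```java
// Hypothetical sketch: user-allocated structs expose close(), externally
// managed ones (owned by GLFW) do not.
interface NativeResource extends AutoCloseable {
    @Override void close(); // no checked exception, matching LWJGL's interface
}

abstract class Struct {
    // common struct plumbing (native address, sizeof, ...) would live here
}

// Externally managed: GLFW owns this memory, so no close()/free() is exposed.
final class GLFWVidMode extends Struct {
}

// User-allocated: freeing it is the user's responsibility.
final class GLFWGammaRamp extends Struct implements NativeResource {
    @Override public void close() { /* free the native memory here */ }
}

public class StructHierarchyDemo {
    // True only for structs whose lifetime the user controls.
    public static boolean isUserManaged(Struct s) {
        return s instanceof NativeResource;
    }

    public static void main(String[] args) {
        System.out.println(isUserManaged(new GLFWVidMode()));   // false
        System.out.println(isUserManaged(new GLFWGammaRamp())); // true
    }
}
```

With this shape, an IDE's resource-leak analysis only fires for the types that actually need freeing.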
Huh? I have a stack allocator implemented in Java that returns ByteBuffer instances. Single alloc at application startup, then I just shell out slices as needed, a similar design to the stack allocator you yourself said was faster than using jemalloc. Much simpler code than manipulating type-specific views, I already went through that. I had the allocator before the jemalloc bindings existed, and it worked fine for two years until right now. In any case, if it is intended, I'll just use the 'long' pointer overloads then. Easy enough to do with MemoryUtil. No problem.
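For context, that kind of allocator can be sketched in a few lines of plain JDK code (a hypothetical minimal version, not the commenter's actual implementation): one direct ByteBuffer allocated up front, with slices handed out from a linear bump offset.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal bump/stack allocator handing out slices of one big direct buffer.
public class FrameAllocator {
    private final ByteBuffer backing;
    private int offset;

    public FrameAllocator(int capacity) {
        // Single allocation at startup; everything else is slicing.
        this.backing = ByteBuffer.allocateDirect(capacity).order(ByteOrder.nativeOrder());
    }

    // Returns a slice of 'size' bytes; callers must not keep it past reset().
    public ByteBuffer alloc(int size) {
        if (offset + size > backing.capacity())
            throw new IllegalStateException("frame allocator exhausted");
        ByteBuffer d = backing.duplicate();
        d.position(offset);
        d.limit(offset + size);
        offset += size;
        return d.slice().order(ByteOrder.nativeOrder());
    }

    // Rewind the bump pointer, e.g. once per frame.
    public void reset() {
        offset = 0;
    }

    public static void main(String[] args) {
        FrameAllocator frame = new FrameAllocator(64);
        ByteBuffer a = frame.alloc(16);
        ByteBuffer b = frame.alloc(16);
        System.out.println(a.capacity() + " " + b.capacity()); // 16 16
        frame.reset();
    }
}
```

The slices share the backing storage, so passing them to native bindings costs no extra allocation; the trade-off is that nothing may outlive reset().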
I know. I'm doing what you'd do if you had ARB_texture_storage, for example: do the allocation first for all the mip map levels, then upload the data later. I am passing null/0L to the ngl* call, and that works too. What I am saying is that when I use the overload that doesn't have imageSize, which you're saying I should be using since I'm not uploading the data at that time, something gets broken, because the texture data I later write to the buffer doesn't get shown. If I specify imageSize, it works; if I don't, it doesn't. And the only way I have to specify imageSize without the thing raising exceptions is the ngl* call. That GL_PIXEL_UNPACK_BUFFER "true" check shouldn't be there, or it might be a driver thing; the docs don't say anything about "ignoring" imageSize, for example.
Ah okay.
Sounds good to me.
Please note that I was talking in general, not about you specifically.
Have you tried replacing it with LWJGL's MemoryStack?
OK, I'll look into that. Indeed, if you do have to specify
Haven't had the time. Working on an editor right now. It's on the backlog though 😄
GTX680, driver 364.12, Debian x64. OpenGL 3.3 core. Allocation code is the following:

private void allocCompressed ( final int tmpTarget ) {
    int w = width;
    int h = height;
    // This depends on the DXT compression used.
    final int blockSize = this.compressedBlockSize;
    for ( int i = this.baseLevel; i <= this.maxLevel; ++i ) {
        // Calculating the size of the current mip map.
        final int imgSize = blockSize * ((w + 3) / 4) * ((h + 3) / 4);
        // This would be the call that worked on lwjgl 3.0.0 build 44.
        glCompressedTexImage2D( tmpTarget, i, this.internalFormat, w, h, 0, imgSize, null );
        // Further mip maps will be power of two.
        w = max( prevPowerOfTwo( w - 1 ), 1 );
        h = max( prevPowerOfTwo( h - 1 ), 1 );
    }
}

Then the code for the uploading is a bit more involved, but it uses:

public static void glCompressedTexSubImage2D(int target, int level, int xoffset, int yoffset, int width, int height, int format, ByteBuffer data) { /* stuff */ }

for each mip map level.
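The per-level size formula in that loop matches how S3TC/DXT stores 4×4 texel blocks, so it can be checked in isolation. The sketch below assumes prevPowerOfTwo (a helper not shown in the snippet) rounds down to the nearest power of two:

```java
public class MipSize {
    // Size in bytes of one DXT-compressed mip level: texels are stored in
    // 4x4 blocks, each taking 'blockSize' bytes (8 for DXT1, 16 for DXT5).
    public static int compressedSize(int w, int h, int blockSize) {
        return blockSize * ((w + 3) / 4) * ((h + 3) / 4);
    }

    // Assumed behaviour of the prevPowerOfTwo helper: round down to the
    // previous power of two.
    public static int prevPowerOfTwo(int v) {
        return Integer.highestOneBit(v);
    }

    public static void main(String[] args) {
        // 256x256 with DXT1 (8-byte blocks): 64x64 blocks * 8 bytes = 32768.
        System.out.println(compressedSize(256, 256, 8));
        // Next mip dimension after 256: max(prevPowerOfTwo(255), 1) = 128.
        System.out.println(Math.max(prevPowerOfTwo(256 - 1), 1));
    }
}
```

The `(w + 3) / 4` rounding also covers the 1- and 2-pixel mip tails, which still occupy a full block.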
Tried it, passing a wrong imageSize. This is indeed a legitimate case in which the removed overloads would be useful. However, I still don't think it's worth restoring all of them. Instead, I propose the addition of new methods in
where
The issue with that particular call isn't that there is an overload missing, but that it checks for GL_PIXEL_UNPACK_BUFFER_BINDING when it shouldn't.

This works, because LWJGL does no checks inside:

nglCompressedTexImage2D( tmpTarget, i, internalFormat, w, h, 0, imgSize, 0L );

This doesn't, because LWJGL itself throws an IllegalStateException, not because it's an invalid OpenGL call:

glCompressedTexImage2D( tmpTarget, i, internalFormat, w, h, 0, imgSize, 0L );

The implementation does this:

public static void glCompressedTexImage2D(int target, int level, int internalformat, int width, int height, int border, int imageSize, long data) {
    if ( CHECKS )
        GLChecks.ensureBufferObject(GL21.GL_PIXEL_UNPACK_BUFFER_BINDING, true);
    nglCompressedTexImage2D(target, level, internalformat, width, height, border, imageSize, data);
}

The gl call is checking for state that isn't needed, whereas the ngl call with the same arguments works fine. It just happens that the removed ByteBuffer overload didn't have the GL_PIXEL_UNPACK_BUFFER_BINDING check (or checked for 'false' instead of 'true'). LWJGL is preventing me from making a perfectly valid OpenGL call.
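The pattern above can be reproduced in a standalone sketch (hypothetical plain-Java stand-ins, not LWJGL's actual GLChecks code): a static CHECKS flag gates a binding-state query, and a mismatch throws IllegalStateException before the native call would ever be made.

```java
public class BindingCheckDemo {
    // In LWJGL the equivalent flag is only true in debug mode; hard-coded here.
    public static final boolean CHECKS = true;

    // Stands in for querying GL_PIXEL_UNPACK_BUFFER_BINDING; 0 means no
    // pixel buffer object is currently bound.
    public static int pixelUnpackBufferBinding = 0;

    // Mirrors the shape of GLChecks.ensureBufferObject(binding, enabled):
    // when 'enabled' is true, a buffer object must currently be bound.
    public static void ensureBufferObject(boolean enabled) {
        boolean bound = pixelUnpackBufferBinding != 0;
        if (bound != enabled)
            throw new IllegalStateException(enabled
                ? "buffer object must be bound to GL_PIXEL_UNPACK_BUFFER"
                : "buffer object must not be bound to GL_PIXEL_UNPACK_BUFFER");
    }

    // The checked entry point validates state first, then calls the raw one.
    public static boolean compressedTexImage2D(long data) {
        if (CHECKS)
            ensureBufferObject(true); // expects a PBO when 'data' is an offset
        return true; // stands in for the native nglCompressedTexImage2D call
    }

    public static void main(String[] args) {
        pixelUnpackBufferBinding = 42; // pretend a PBO is bound
        System.out.println(compressedTexImage2D(0L)); // true
    }
}
```

This is why the ngl* call "works" where the checked one throws: the raw entry point skips the state validation entirely and passes the arguments straight through.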
I understand the problem and as I said, the existing methods are fully compatible with LWJGL 2, which did the exact same checks. To understand why:
So the idea is that, when PBO is enabled, LWJGL allows the user to specify the raw values to pass. LWJGL 2 also had an overload with just
1 & 3 are unsafe solutions (they will crash the JVM when not used correctly). Also note that this particular check is only enabled in debug mode.
This fixes one of the issues reported in #197.
…ructs

The NativeResource interface has been moved from Struct/StructBuffer to the concrete subclasses that need it. This resolves the last issue reported in #197.
I just moved from lwjgl 3.0.0 build 44 to 3.0.1 build 02, and I'm missing a few overloaded functions with ByteBuffer parameters, like:
Any reason for this? They're really handy because I don't really deal with views over the byte buffers, avoiding tons of tiny specialized int/float buffers.
EDIT: I can work around this by using the ngl* variants directly with MemoryUtil.memAddress(ByteBuffer) as the last parameter, but I dunno if this is intended.
EDIT2: Also some other stuff that I found... inconsistent.
For example, glGetBoolean moved from returning 'boolean' to returning 'byte' (for comparing against GL_TRUE, I guess). But glfwInit moved from returning GL_TRUE to returning 'boolean'. Is this mismatch intended?
EDIT3: Really nice that NativeResource implements Closeable, now Eclipse warns me if any resource wasn't closed. I didn't notice I was leaking a GLFWVidMode instance 😄