
Looking for feedback/suggestions on changing ImTextureID from void* to u64 #1641

Closed
ocornut opened this issue Feb 22, 2018 · 16 comments

@ocornut
Owner

ocornut commented Feb 22, 2018

Both DirectX12 (#301) and Vulkan (#914) use descriptors that are 64-bit.
This is a problem for our demos because ImTextureID is defined as pointer-sized (typedef void* ImTextureID), and therefore on a 32-bit target ImTextureID is 32 bits. This makes it difficult to use as a convenient way to pass a texture descriptor (in the case where the user wants to store a native descriptor inside ImTextureID).

I believe it is not a suuuper critical problem because:

  • It only fails when compiling for 32-bit, and we could agree that people shipping DirectX12 and Vulkan applications are maybe unlikely to ship 32-bit?
  • Most real game engines based on DirectX12 and Vulkan would be wrapping textures or materials into their own engine types, and then would likely want to store e.g. MyTexture* or MyMaterial* into ImTextureID, which wouldn't be a problem any more. Though some engines may use 64-bit integer identifiers in all build modes?

(Let me know if you think those assumptions are wrong!)

So I think this mostly affects (apart from our demo code):

  • Small-ish applications or quick experiments using DirectX12 and Vulkan that are likely to stay low-level and want to set up imgui easily. In theory those could/would compile for 64-bit, but when you aren't shipping it's easy to just compile for 32-bit, and it would be odd if the default renderer didn't work there. Really not good for the perception of imgui "just working".

One solution is to make ImTextureID always 64-bit. However, C++ doesn't have a "pointer type that's always 64-bit", so it would have to be an integer, and that messes with some existing code.

(A) If I try that:
typedef ImU64 ImTextureID;
The kind of errors I get:

From imgui examples: (~7 errors)

// '=' : cannot convert from 'void *' to 'ImTextureID'
io.Fonts->TexID = (void *)(intptr_t)g_FontTexture;     // Store our identifier

// '==' : 'LPDIRECT3DTEXTURE9' differs in levels of indirection from 'ImTextureID'	
IM_ASSERT(g_FontTexture == io.Fonts->TexID);

Another codebase: (~40 errors)

// 'void ImGui::Image(ImTextureID,const ImVec2 &,const ImVec2 &,const ImVec2 &,const ImVec4 &,const ImVec4 &)': cannot convert argument 1 from 'Pasta::ShadedTexture *' to 'ImTextureID'	
ImGui::Image(&shadedTexture, textureSize)

// 'void ImDrawList::PushTextureID(ImTextureID)': cannot convert argument 1 from 'void *' to 'ImTextureID'
draw_list->PushTextureID(RawTex->ImID);

// '!=': no conversion from 'void *' to 'unsigned __int64'
bool push_id = draw_list->_TextureIdStack.back() != RawTex->GetImTexId();

(B) We can minimize the errors by using a higher-level primitive: (not really in the spirit of imgui's codebase, but let's try)

struct ImTextureID
{
    ImU64 Value;                            // user data to identify a texture (this is whatever you want it to be! read the FAQ about ImTextureID in imgui.cpp)

    ImTextureID()                           { Value = 0; }
    ImTextureID(void* v)                    { Value = (ImU64)v; }
    ImTextureID(const ImTextureID& v)       { Value = v.Value; }
    operator ImU64() const                  { return (ImU64)Value; }
    operator const void*() const            { return (const void*)Value; }
    operator void*() const                  { return (void*)Value; }
    bool operator ==(const ImTextureID& v)  { return Value == v.Value; }   
    bool operator !=(const ImTextureID& v)  { return Value != v.Value; }
};

Then we get, from the imgui codebase: (down from ~7 errors to 1)

// 'type cast' : cannot convert from 'const ImTextureID' to 'LPDIRECT3DTEXTURE9'	directx9_example	
g_pd3dDevice->SetTexture(0, (LPDIRECT3DTEXTURE9)pcmd->TextureId);

Other codebase: (down from ~20 to 5 errors)

// 'type cast': cannot convert from 'const ImTextureID' to 'Pasta::ShadedTexture *'
ShadedTexture* shddTex = (ShadedTexture*)(pcmd->TextureId);

// 'ImTextureID::operator !=': 3 overloads have similar conversions	wb	c:\omar\wb\work\code\srcs\texture.cpp	877	
bool push_id = draw_list->_TextureIdStack.back() != RawTex->GetImTexId();

None of those solutions is perfect, and we'd be creating a new problem for people updating to a new version. These are fixable issues, so not the end of the world, but a little annoying considering it'd be for a change that benefits few people.

We'd also have to cast MyTexture* into ImTextureID explicitly now, whereas previously it worked implicitly because it would cast to void*.

(C)
Add an optional imconfig.h directive to make ImTextureID unsigned long long, and add a static_assert in the dx12/vulkan back-ends in 32-bit mode that refers to this option.
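
A rough sketch of what (C) could look like (the directive name IMGUI_TEXTURE_ID_64BIT and the assert wording are hypothetical):

// imconfig.h (user side): opt in to a 64-bit ImTextureID
#define IMGUI_TEXTURE_ID_64BIT

// imgui.h:
#ifdef IMGUI_TEXTURE_ID_64BIT
typedef ImU64 ImTextureID;
#else
typedef void* ImTextureID;
#endif

// imgui_impl_dx12.cpp (and similarly imgui_impl_vulkan.cpp): fail loudly on 32-bit
// instead of silently truncating 64-bit descriptor handles.
static_assert(sizeof(ImTextureID) >= sizeof(D3D12_GPU_DESCRIPTOR_HANDLE),
    "ImTextureID is too small to hold a descriptor handle: enable IMGUI_TEXTURE_ID_64BIT in imconfig.h");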

(D)
Specifically for Vulkan or DirectX12, decide that the binding can use another type, e.g. pass VkDescriptorSet* instead of VkDescriptorSet and D3D12_GPU_DESCRIPTOR_HANDLE* instead of D3D12_GPU_DESCRIPTOR_HANDLE. I personally don't know either API at this point and don't know what would be best. Heck, I don't even understand what those things are or what their lifetime is in a typical dx12/vulkan codebase (there is an open question in #914 to decide how ImTextureID should be interpreted in the default Vulkan binding).

(E)
Hide the issue under the carpet by removing the 32-bit builds of the Vulkan and DirectX12 demos from our example projects and/or upcoming project generator, and leave the static asserts on.

(F)
Other suggestions?

@sherief

sherief commented Feb 23, 2018

A D3D12_GPU_DESCRIPTOR_HANDLE is just a wrapper around a 64-bit GPU virtual address. I don't see the advantage of using a D3D12_GPU_DESCRIPTOR_HANDLE* vs a D3D12_GPU_DESCRIPTOR_HANDLE directly - can you clarify? I think I'm missing something.

@ocornut
Owner Author

ocornut commented Feb 23, 2018

The advantage is that it would always fit in a pointer and therefore work in 32-bit builds. Currently we can't store them in a void* on 32-bit builds. This is what this thread describes and is trying to solve.

If your handles are stored in a persistent way on the heap it doesn’t really matter if you pass them by address or by value.
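
For illustration, a sketch of what that looks like on the DX12 side (g_hFontSrvGpuDescHandle is the global from the stock example; the root parameter index and pcmd/ctx names are illustrative):

// The handle is 64-bit even on 32-bit targets, so it can't live in a void* there,
// but it is stored persistently, so its address can:
static D3D12_GPU_DESCRIPTOR_HANDLE g_hFontSrvGpuDescHandle = {};

io.Fonts->TexID = (ImTextureID)&g_hFontSrvGpuDescHandle;   // a pointer always fits

// In the render loop, dereference to recover the handle by value:
D3D12_GPU_DESCRIPTOR_HANDLE handle = *(D3D12_GPU_DESCRIPTOR_HANDLE*)pcmd->TextureId;
ctx->SetGraphicsRootDescriptorTable(1, handle);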

@mellinoe

I'll preface this by mentioning that I'm not using C++ directly -- I'm downstream of the C bindings, so some of the above doesn't have much of an impact on me (like whether pointer types are convertible, etc.). If the type changes from pointer-sized to 8-byte-sized, then it will just constitute another breaking change that isn't terribly difficult to deal with.

I think it might help to mention that in many graphics APIs, especially modern ones, you do not bind textures directly. So I don't expect that many people are passing an actual "texture ID" in here. OpenGL and Metal are probably the only common APIs where a texture is directly bindable. Even D3D11 (not a "modern" API) uses Shader Resource Views, which are a distinct resource. If you actually pass an API's texture object into these functions, it implies that you are doing a lot of "extra stuff" behind the scenes to make those into actually-bindable resources on-the-fly.

In my library/engine, I just treat the ImTextureID as an opaque identifier, used as an index into a managed resource cache. You can register a texture (as in, an actual Texture, not a view/descriptor), and it internally creates the actual "bindable" resource that is needed for the API (I support D3D11, Metal, OpenGL, Vulkan). The texture IDs are assigned by literally just bumping an index. When you pass that index into Image/ImageButton(), it looks up the previously-registered resource associated with that index and binds it as appropriate.
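
A minimal C++ sketch of that registration scheme (names are hypothetical; my actual implementation is in C#, so this is just the shape of it):

#include "imgui.h"
#include <cstdint>
#include <unordered_map>

struct BindableTexture;   // whatever the backend needs: an SRV, a descriptor set, ...

static std::unordered_map<ImU64, BindableTexture*> g_TextureCache;
static ImU64 g_NextTextureId = 1;   // 0 is reserved as "invalid"

// Registration just bumps an index; imgui only ever sees the opaque number.
ImTextureID RegisterTexture(BindableTexture* bindable)
{
    ImU64 id = g_NextTextureId++;
    g_TextureCache[id] = bindable;
    return (ImTextureID)(intptr_t)id;
}

// At draw time the backend resolves the index back to the bindable resource.
BindableTexture* LookupTexture(ImTextureID tex_id)
{
    return g_TextureCache[(ImU64)(intptr_t)tex_id];
}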

Anyways, I hope that my perspective helps. Like I said, I'm not using C++ directly, so most of this would be inconsequential for me.

@thedmd
Contributor

thedmd commented Feb 23, 2018

A pointer to the Vulkan g_DescriptorSet or DX12 g_hFontSrvGpuDescHandle used as ImTextureID seems to be the cleanest solution. Users must have handles to their resources somewhere, so they can pass pointers to ImGui too. I'm for (D).

Possible alternatives not yet mentioned:

(F) Explicit Texture API

IMGUI_API ImTextureID ImGui_CreateTexture32(const void* data, int width, int height);
IMGUI_API ImTextureID ImGui_CreateTextureFromDescriptor(...); // API dependent
IMGUI_API void        ImGui_DestroyTexture(ImTextureID texture);
IMGUI_API int         ImGui_GetTextureWidth(ImTextureID texture);
IMGUI_API int         ImGui_GetTextureHeight(ImTextureID texture);

(G) State ImTextureID type in imconfig.h

@sherief

sherief commented Feb 23, 2018

Ahem, yes, 32-bit apps still exist.

I vote for option D.

@ratzlaff

ratzlaff commented Feb 23, 2018

in imgui.h change

typedef void* ImTextureID;

to

#ifndef ImTextureID
#define ImTextureID void*
#endif

That way the end user could (for example) #define ImTextureID VkDescriptorSet* in IMGUI_USER_CONFIG
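
For instance, in a Vulkan app the user-side config (pulled in via IMGUI_USER_CONFIG) could then look like this sketch:

// my_imconfig.h, selected with: #define IMGUI_USER_CONFIG "my_imconfig.h"
#include <vulkan/vulkan.h>
#define ImTextureID VkDescriptorSet*    // pointer-sized on every target

// Application code then passes the address of a persistently stored descriptor set:
// ImGui::Image(&my_descriptor_set, size);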

@meshula

meshula commented Feb 24, 2018

@thedmd In issue #914, (F) is roughly the method I went with. The reason is that I wanted to leave the ImGui Vulkan interface mostly alone. If I have to externally provide the descriptor_set then I have to also supply things I wouldn't really want to care about, like the sampler. (F) allows ImGui to hide those details under the covers.

The difference between (F) and what I did, is that I have a variant taking a VkImage:

    IMGUI_API ImTextureID ImGui_CreateTexture(VkImage);

@Dynad

Dynad commented Feb 24, 2018

I am using imGUI (I LOVE IT!) with my own engine using Vulkan, and I'm strongly against changing the imGUI API to send actual API pointers through imGUI. ImGUI shouldn't care about back-end engine resources at all. It should only pass on identifiers, like a GUID or some simple int index, to identify a certain resource, and the renderer will take care of that (because your engine has a resource manager that can translate it). It's quite dangerous for an engine framework to keep API resources around in an immediate-mode UI like ImGUI, especially with an API like Vulkan that does quite the opposite, e.g. baking resources before you even draw something. I am not sure if this makes any sense, but keeping imGUI as-is is your best option: leave it simple, don't mix it up with other APIs, and let the programmers worry about that part.

@ocornut
Owner Author

ocornut commented Feb 24, 2018 via email

@ocornut
Owner Author

ocornut commented Feb 24, 2018 via email

@Dynad

Dynad commented Feb 24, 2018

I understand, you still want to pass an opaque pointer and imGUI has no knowledge about its contents. But what I was trying to say is that even though ImGUI has no knowledge about its contents, the Vulkan pointer is still there. I think it's just wrong to mix raw Vulkan/dx12 (even opaque) pointers inside imGUI.

In Vulkan you can use binding sets, so in my engine every Texture has its own GUID. The graphics pipeline holds e.g. 16 inputs with different set numbers (which could start out as dummy textures). In my case I'll probably never need more than 16 images at once in imGUI anyway.

These dummy textures can be replaced by updating descriptors in your graphics pipeline with the ones you need. When you bake the command buffers you have vkCmdBindDescriptorSets(): link the correct set in the engine (texture -> set ID) to set 0 in your GLSL shader, which is normally the font texture.

When you need a custom texture (which comes from the ImTextureID), use a different binding; otherwise just use your font binding. This way there is no need to pass Vulkan opaque pointers around at all, just a simple GUID.
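
A rough sketch of that lookup at bake time, assuming the engine keeps a GUID-to-descriptor-set map (all names here are hypothetical):

#include "imgui.h"
#include <cstdint>
#include <unordered_map>
#include <vulkan/vulkan.h>

static std::unordered_map<uint64_t, VkDescriptorSet> g_GuidToSet;   // filled by the resource manager
static VkDescriptorSet g_FontDescriptorSet;                         // the default font binding

// While baking the imgui command buffer: translate the GUID stored in
// ImTextureID back into the engine's descriptor set and bind it.
void BindImGuiTexture(VkCommandBuffer cmd, VkPipelineLayout layout, ImTextureID tex_id)
{
    uint64_t guid = (uint64_t)(intptr_t)tex_id;
    VkDescriptorSet set = guid ? g_GuidToSet[guid] : g_FontDescriptorSet;
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0 /*firstSet*/, 1, &set, 0, nullptr);
}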

@volcoma

volcoma commented Feb 26, 2018

I vote for

#ifndef ImTextureID
#define ImTextureID void* //or something else like intptr_t
#endif

In many cases I've seen users use a shared_ptr or something similar which should stay alive until the render function is called. We are currently wrapping the public imgui functions that take a texture so they take a shared_ptr, and then storing the shared_ptr in some container which is cleared after the call to the draw function, which is tedious. Making it a define would allow users to make it whatever they want and would eliminate the need to wrap. This illustrates my problem:

static std::vector<std::shared_ptr<void>> s_textures;

void Image(std::shared_ptr<void> texture, const ImVec2& _size,
		   const ImVec2& _uv0 /*= ImVec2(0.0f, 0.0f) */, const ImVec2& _uv1 /*= ImVec2(1.0f, 1.0f) */,
		   const ImVec4& _tintCol /*= ImVec4(1.0f, 1.0f, 1.0f, 1.0f) */,
		   const ImVec4& _borderCol /*= ImVec4(0.0f, 0.0f, 0.0f, 0.0f) */)
{
	s_textures.push_back(texture);
	ImGui::Image(texture.get(), _size, _uv0, _uv1, _tintCol, _borderCol);
}

@ocornut
Owner Author

ocornut commented Mar 1, 2018

This doesn't really fully solve the problem discussed here nor in #301 or #914, but as several people suggested it and it's harmless, I've added:

#ifndef ImTextureID
typedef void* ImTextureID;          // user data to identify a texture (this is whatever you want it to be! read the FAQ about ImTextureID in imgui.cpp)
#endif

So it's possible to define ImTextureID in imconfig.h.
Note that it is still a typedef unless #defined (just like ImDrawIdx).
So that's one way to bypass the issue, but it's not ideal, especially for the examples/ codebase.


@Dynad

I think it's just wrong to mix raw Vulkan/dx12 (even opaque) pointers inside imGUI. [...] This way there is no need to pass on Vulkan opaque pointers at all.. just a simple GUID.

This is a design choice, and imgui or this thread isn't the place to decide that people shouldn't use pointers in their own engine. It's perfectly fine to do it one way or another. There are hundreds of ways to write engines, and people can manage the lifetime of their pointers just fine.


@volcoma

In many cases I've seen the user uses shared_ptr or something similar which should be alive until the render func is called. [...] This illustrates my problem

To me your example illustrates a non-problem, since you just solved it? You can add a two-line function to take care of handling the lifetime however you want; isn't that good enough? What's tedious about it? You could probably even wrap it at a lower level:

ImTextureID ToImTextureId(std::shared_ptr<void> texture)
{
  s_textures.push_back(texture);
  return texture.get();
}

So when doing imgui calls you can wrap your parameters directly without having to wrap anything else?

ImGui::Image(ToImTextureId(tex), ...);
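
And then clear the container once the frame has been rendered; a minimal sketch (MyRenderDrawData stands in for whatever your renderer's draw call is):

ImGui::Render();
MyRenderDrawData(ImGui::GetDrawData());   // your renderer here (hypothetical name)
s_textures.clear();                       // release the references kept alive for this frame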

@thedmd I'd be curious if you could expand on the idea.

(F) Explicit Texture API
IMGUI_API ImTextureID ImGui_CreateTexture32(const void* data, int width, int height);
IMGUI_API ImTextureID ImGui_CreateTextureFromDescriptor(...); // API dependent
IMGUI_API void        ImGui_DestroyTexture(ImTextureID texture);
IMGUI_API int         ImGui_GetTextureWidth(ImTextureID texture);
IMGUI_API int         ImGui_GetTextureHeight(ImTextureID texture);

That's similar to the PR in #914. Everyone's throwing suggestions which I don't comprehend at the moment and I think I'll take a day and learn about Vulkan/DX12 before coming back to this topic.

EDIT Found a new friend! https://vulkan-tutorial.com/

@thedmd
Contributor

thedmd commented Mar 6, 2018

(F) Explicit Texture API

What I had in mind is a simple texture API implementation, like the one here: imgui_impl_dx11.cpp#L831

The basic version allows creating an ARGB texture from user-provided data:

IMGUI_API ImTextureID ImGui_CreateTexture32(const void* data, int width, int height);
IMGUI_API void        ImGui_DestroyTexture(ImTextureID texture);
IMGUI_API int         ImGui_GetTextureWidth(ImTextureID texture);
IMGUI_API int         ImGui_GetTextureHeight(ImTextureID texture);

Different backends may provide additional versions of ImGui_CreateTextureFromXXX to improve interoperability. For example, for DX11:

IMGUI_API ImTextureID ImGui_CreateTextureFromView(ID3D11ShaderResourceView* view);

Besides abstracting away texture management, this code serves as an example of how one can deal with ImTextureID being an opaque pointer.

Personally, I used this to quickly feed test code with images by mixing it with stb_image to create:

IMGUI_API ImTextureID ImGui_LoadTexture(const char* path);
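
For example, a sketch of that helper (assuming the ImGui_CreateTexture32 entry point above; error handling kept minimal):

#include "stb_image.h"

IMGUI_API ImTextureID ImGui_LoadTexture(const char* path)
{
    int width = 0, height = 0;
    // Force 4 channels so the pixel data matches the 32-bit format ImGui_CreateTexture32 expects.
    unsigned char* data = stbi_load(path, &width, &height, NULL, 4);
    if (!data)
        return NULL;   // file missing or not a decodable image
    ImTextureID texture = ImGui_CreateTexture32(data, width, height);
    stbi_image_free(data);
    return texture;
}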

@eliasdaler
Contributor

I've defined a custom ImTextureID for OpenGL as GLuint, and except for a few warnings (which are fixed in PR #2081 and PR #2080), the experience has been great so far.

There's one problem, which can't be solved easily, though. Look at this.

Here, textureId is printed via "%p", which triggers a warning when the texture id is an unsigned int. The only solution I see is to allow users to define a custom format literal, as was done in inttypes.h for intptr_t, for example.

So, we can have something like this:

#ifndef ImTextureIDPrintLiteral
#define ImTextureIDPrintLiteral "%p"
#endif

and in my case, I'll do:

#define ImTextureIDPrintLiteral "%u"

and in code it'll be something like:

"Draw %4d %s vtx, tex " ImTextureIDPrintLiteral ", clip_rect (%4.0f,%4.0f)-(%4.0f,%4.0f)"

Any thoughts on this?

@ocornut
Owner Author

ocornut commented Oct 8, 2024

FYI, I went ahead and made ImTextureID default to ImU64 with 92b9498.
Because much code has already been transitioned to performing the cast as (ImTextureID)(intptr_t)my_texture, it doesn't actually warn much, but it is technically a small breaking change, as it is likely to add warnings to some codebases depending on how they perform the cast.

I have tweaked the FAQ accordingly
https://github.com/ocornut/imgui/blob/master/docs/FAQ.md#q-how-can-i-display-an-image-what-is-imtextureid-how-does-it-work

This was not a 100% necessary change, but it's sane, and it allows simplifying some build scripts/projects to avoid a define, which in turn makes things simpler to grasp for many.

Note that you can always #define ImTextureID to a higher-level structure if you like, with custom constructors, holding more info such as sampling state etc.
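
For instance, a sketch of such a structure (the field set and names are illustrative, not a prescribed layout):

// my_imconfig.h: ImTextureID as a small value type carrying extra state.
// The converting constructor keeps existing (ImTextureID)(intptr_t)ptr casts compiling.
struct MyImTextureID
{
    ImU64 TexID   = 0;   // the actual texture/descriptor identifier
    ImU64 Sampler = 0;   // e.g. point vs. linear sampling, interpreted by the backend

    MyImTextureID() = default;
    MyImTextureID(ImU64 tex_id) : TexID(tex_id) {}
    bool operator==(const MyImTextureID& rhs) const { return TexID == rhs.TexID && Sampler == rhs.Sampler; }
    bool operator!=(const MyImTextureID& rhs) const { return !(*this == rhs); }
};
#define ImTextureID MyImTextureID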
