Added CPU blendshapes for GLES2 #48480

Merged: 1 commit merged into godotengine:3.x on May 7, 2021

Conversation

@pixaline commented May 5, 2021

Basic software blendshapes for the GLES2 renderer. So far, I have tested this on Windows.
I'm not too experienced with pull requests, so let me know if I should change anything.

Fixes #36612

Bugsquad edit: This closes #48427.

@pixaline requested a review from a team as a code owner May 5, 2021 15:12
@Calinou added this to the 3.4 milestone May 5, 2021
@clayjohn (Member) commented May 5, 2021

The basic structure looks good. It is missing 2 things:

  1. blend_shape_mode: can be "normalized" or "relative" (see here); a rough sketch of both modes follows after this comment
  2. Blending of vertex attributes: this implementation only blends vertices, but blend shapes can contain any of the vertex attributes (see the blendshapes shader in GLES3)

I will leave @lawnjelly to discuss how this can be made to work on GLES3 and GLES2 as the software skeleton transform does. I am not convinced software blendshapes are necessary for GLES3 (the only need as far as I can tell is to work around a particular driver bug).
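
A minimal sketch of how the two blend_shape_mode values are usually combined, assuming the GLES3-style behaviour described above; the types and names below are illustrative and not this PR's code:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

// Positions only, for brevity; the same weighted sum applies to the other attributes.
std::vector<Vec3> blend_positions(
        const std::vector<Vec3> &base,
        const std::vector<std::vector<Vec3>> &shapes, // one vertex array per blend shape
        const std::vector<float> &weights,            // one weight per blend shape
        bool normalized) {
    float total = 0.0f;
    for (float w : weights) {
        total += w;
    }

    std::vector<Vec3> out(base.size());
    for (std::size_t v = 0; v < base.size(); v++) {
        // "normalized": shapes store absolute targets and the base mesh fades out
        // as the summed weight grows.
        // "relative": shapes store deltas that are added on top of the untouched base.
        Vec3 p = normalized ? base[v] * (1.0f - total) : base[v];
        for (std::size_t s = 0; s < shapes.size(); s++) {
            p = p + shapes[s][v] * weights[s];
        }
        out[v] = p;
    }
    return out;
}
```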

@lawnjelly (Member)

It's difficult because clayjohn may be right about it only being used in GLES2. It's hard to make that call at this stage, but it strikes me that it would be a lot of extra work to change down the line, which would be unlikely to happen, so we end up having to commit to a particular path early on. I can't immediately see any major downsides to doing it in the client code like the software skinning, compared to in the rasterizer itself.

Some thoughts:

  • Having a common reference software fallback covers us for all eventualities (e.g. driver bugs in either the blendshapes or the skinning in GLES3)
  • I'm slightly wary that the blendshapes and the skinning are, or could become, intertwined (I haven't examined how they work at this stage, but e.g. tweening a face is often used at the same time as skeletal animation). Attempting to have software skinning and hardware blendshapes (GLES3) could become problematic down the line. You may for example need to do the tweening before the skinning, which won't work with the CPU + GPU arrangement.
  • Shared code / techniques: we may be able to share a lot of stuff, e.g. culling / logic, between the two, which would be more difficult if it was split into software skinning client side and software blend shapes in the GLES2 rasterizer.

I'll try to have a proper look tomorrow, as it is getting late here, and work out how the existing blendshapes work. I haven't looked at or tried these yet (so clayjohn, being more familiar with them, may well be right in his intuitions).

@clayjohn (Member) commented May 6, 2021

Wonderful thoughts, as always. My responses below:

> Having a common reference software fallback covers us for all eventualities (e.g. driver bugs in either the blendshapes or the skinning in GLES3)

The issue with putting the software implementation into client code is that users have to force enable software fallbacks if they think that any of their users will experience the driver bug. We need a solution that just works (TM).

> I'm slightly wary that the blendshapes and the skinning are, or could become, intertwined (I haven't examined how they work at this stage, but e.g. tweening a face is often used at the same time as skeletal animation). Attempting to have software skinning and hardware blendshapes (GLES3) could become problematic down the line. You may for example need to do the tweening before the skinning, which won't work with the CPU + GPU arrangement.

With this PR, blendshapes and skinning have the same relationship that they do in GLES3. So if something is wrong when combining blendshapes and skinning, then it needs to be addressed for both. If it does turn out to be an issue, then certainly we should kill two birds with one stone and implement software blendshapes client side.

> Shared code / techniques: we may be able to share a lot of stuff, e.g. culling / logic, between the two, which would be more difficult if it was split into software skinning client side and software blend shapes in the GLES2 rasterizer.

This sounds very interesting, but I don't know enough about the software skinning to know if it will be valuable.

Overall, I think we should implement blend shapes in the rasterizer in GLES2 and then evaluate whether it makes sense to add a client-side software blend shapes implementation that replaces it (the same way that the current software skinning replaces the hardware skinning). I know it would feel redundant, but this PR won't be very much code even when finished, and it makes sense to me to have proper blend shape support in the rasterizer.

@lawnjelly (Member)

Ah I get you, sorry, I got my wires crossed. You mean making a hardware implementation for GLES2 (I was for some reason thinking you meant a software CPU solution in the rasterizer); it didn't occur to me as I'm not familiar with blend shapes. So I'm in agreement that a hardware GPU version would be worth trying. 👍

@pixaline (Author) commented May 6, 2021

@clayjohn I have now added blending to normals, colors and UV maps, and made it work in both normalized and relative modes. I wasn't sure how exactly weights/bones blend so I excluded them, but it seems to work anyway on my animated skeleton/blendshape test project. Let me know if there's anything else to fix.
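
As a rough illustration of what blending the additional attributes involves (a sketch assuming relative-mode deltas; the names are illustrative and this is not the PR's code): direction vectors such as normals and tangents get the same weighted sum as positions, but usually need re-normalizing afterwards so lighting stays consistent.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 blend_normal(Vec3 base, const Vec3 *deltas, const float *weights, int shape_count) {
    Vec3 n = base;
    for (int i = 0; i < shape_count; i++) {
        // Same weighted sum as positions (relative mode shown here).
        n.x += deltas[i].x * weights[i];
        n.y += deltas[i].y * weights[i];
        n.z += deltas[i].z * weights[i];
    }
    // Blending can change the length of a direction vector, so re-normalize it;
    // tangents would be treated the same way.
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) {
        n.x /= len;
        n.y /= len;
        n.z /= len;
    }
    return n;
}
```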

@clayjohn (Member) commented May 6, 2021

> Ah I get you, sorry, I got my wires crossed. You mean making a hardware implementation for GLES2 (I was for some reason thinking you meant a software CPU solution in the rasterizer); it didn't occur to me as I'm not familiar with blend shapes. So I'm in agreement that a hardware GPU version would be worth trying.

No, I do mean having a software implementation in the rasterizer and a software implementation in the client, the way we have with skeletons. In GLES2, if the hardware doesn't support float textures, we fall back to a software implementation. This is different from the software skinning implemented client-side.
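
A minimal sketch of the distinction being drawn here, using made-up names rather than Godot's actual settings or API: the client-side software path is something the user opts into, while the rasterizer's own CPU fallback would kick in automatically when float textures are unavailable.

```cpp
// Illustrative names only; not Godot's actual configuration flags.
struct DriverCaps {
    bool float_texture_supported = false;
};

enum class SkinPath {
    CLIENT_SOFTWARE,      // vertices transformed in scene code before rendering
    RASTERIZER_GPU,       // bone matrices uploaded as a float texture
    RASTERIZER_SOFTWARE,  // automatic CPU fallback inside the renderer
};

SkinPath pick_skin_path(bool force_client_software, const DriverCaps &caps) {
    if (force_client_software) {
        return SkinPath::CLIENT_SOFTWARE; // explicit opt-in by the user
    }
    return caps.float_texture_supported ? SkinPath::RASTERIZER_GPU
                                        : SkinPath::RASTERIZER_SOFTWARE;
}
```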

@pixaline (Author) commented May 6, 2021

Alright, I've added tangent and weight blending now.

@clayjohn (Member) left a comment

Looks good to me. I tested with https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/MorphStressTest and the other morph files in the repo.

I am a little concerned about the extra vertex pressure, because this forces every model using blendshapes to decompress vertex data before passing it to the GPU. This results in slightly more expensive CPU-GPU transfers and more vertex pressure when issuing the draw command. That being said, I'm not sure storing a GPU buffer per mesh, configured for each mesh's combination of compression formats, is worthwhile.

I'm happy to merge this once CI passes and @lawnjelly has had one more chance to review it and express his concerns.

@lawnjelly (Member) commented May 7, 2021

Fantastic work to get this done in such a short time for a new contributor. I'm happy with it if clayjohn has tested it; it looks well done from a quick look.

Only question: as I'm not familiar with blend shapes in Godot, can they be combined with skinning, and if so, are the results the same when using this with software skinning as with hardware skinning? The classic example would be using blendshapes for a face / head for talking, in combination with skinning for the bones. But a simpler version would do to test it.

BTW, even if it doesn't interact correctly, as is this is still a valid addition of functionality, as many won't be using that combination, so I'm happy to merge in either case. 👍

@pixaline (Author) commented May 7, 2021

> This results in slightly more expensive CPU-GPU transfers

Yes, I was also thinking about this situation, especially on weaker platforms such as Android. An idea I had was to only recalculate the blend shapes when the mesh has changed (blend values, blend data, mesh data, etc.), using a dirty flag or something similar. At least for now it would be fairly easy to find your bottleneck by simply hiding objects.
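
A minimal sketch of the dirty-flag idea, with illustrative names only (not the PR's code): the blended data and its GPU upload are rebuilt only when the weights or the underlying mesh have actually changed.

```cpp
#include <vector>

struct BlendShapeCache {
    std::vector<float> last_weights;
    std::vector<float> blended_vertex_data; // CPU-side result reused between frames
    bool dirty = true;                      // set whenever mesh or blend data changes
};

void update_blend_cache(BlendShapeCache &cache, const std::vector<float> &weights) {
    if (!cache.dirty && weights == cache.last_weights) {
        return; // nothing changed, keep the previously blended data and GPU buffer
    }
    cache.last_weights = weights;
    cache.dirty = false;
    // ... recompute cache.blended_vertex_data and re-upload it to the GPU here ...
}
```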

@clayjohn (Member) commented May 7, 2021

@lawnjelly it should work fine with the rasterizer-based skinning (both hardware and software), and it should work with client-side software skinning (as this changes the vertex attributes at the last moment before rendering). Unfortunately, I don't have a model to test with that has both a skeleton and blendshapes.

@akien-mga merged commit a0a22ef into godotengine:3.x May 7, 2021
@akien-mga (Member)

Thanks!

@pixaline (Author)

@clayjohn you said:

> That being said, I'm not sure storing a GPU buffer per mesh, configured for each mesh's combination of compression formats, is worthwhile.

I'm currently optimizing this implementation since performance was pretty bad in a real-world case. I was able to implement a CPU/GPU buffer for each surface, and it works much better: framerate now only drops when the blend shape is changed. I justify the added memory usage by only creating it for meshes that actually have blendshapes. The previous code did upload shape data to the GPU without ever using it, anyway. I'm curious what your thoughts were regarding what you wrote; what should I look out for?
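
A rough sketch of the per-surface buffer idea described above, again with illustrative names only: the blended copy is allocated lazily, so surfaces without blend shapes pay nothing, and it is reused until the weights change.

```cpp
#include <vector>

struct Surface {
    std::vector<float> base_vertex_data;    // data the mesh was created with
    int blend_shape_count = 0;
    std::vector<float> blended_vertex_data; // CPU-side scratch, created on demand
    std::vector<float> cached_weights;      // weights used for the last blend
};

// Surfaces without blend shapes never allocate the extra buffer.
void ensure_blend_buffer(Surface &s) {
    if (s.blend_shape_count > 0 && s.blended_vertex_data.empty()) {
        s.blended_vertex_data.resize(s.base_vertex_data.size());
    }
}

// Re-blending (and the matching GPU upload) only happens when the weights change.
bool needs_reblend(const Surface &s, const std::vector<float> &weights) {
    return s.blend_shape_count > 0 && weights != s.cached_weights;
}
```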

@clayjohn (Member)

@parulina I was concerned about memory usage. If there is enough speed improvement then it is probably worthwhile.

@fire (Member) commented Jun 14, 2021

Is it difficult to add this to GLES3?

@clayjohn (Member) commented Aug 6, 2021

> I'm currently optimizing this implementation since performance was pretty bad in a real-world case. I was able to implement a CPU/GPU buffer for each surface, and it works much better: framerate now only drops when the blend shape is changed. I justify the added memory usage by only creating it for meshes that actually have blendshapes. The previous code did upload shape data to the GPU without ever using it, anyway.

@parulina Are you planning on making a PR with the changes?

@pixaline (Author) commented Aug 6, 2021

I totally forgot! I can clean up/rebase and submit it for tomorrow.

@clayjohn (Member) commented Aug 6, 2021

That would be much appreciated. Thank you :)

@fire (Member) commented Sep 28, 2021

Were CPU blendshapes added to GLES3? I wasn't sure.

@clayjohn (Member)

@fire No, this PR was specific to the GLES2 backend.
