image-to-mesh (to video) workflow example #7
AugmentedRealityCat
started this conversation in
Show and tell
Replies: 1 comment
-
Wow, the demo video looks just like a Wasteland 3 game. Awesome work!
-
Here is how I used image-to-mesh to create the mesh of a truck, and how I textured and animated it. Let's begin with the end result:
Dmt_Truck_v09.mp4
The first step after opening the DMT-Mesh interface is to select the proper Run Mode, in this case IMAGE-TO-MESH. Then select an image: first check the Source Image checkbox at the top, then unfurl that little tab and use the provided buttons to load the image you want to use as a source.
After some poor results with my first few tests, I went to check the original code for this and saw the example images used in its documentation. So I took one picture from there, this one:
And then, without changing any parameter (there aren't any that are relevant at this step anyway), I pressed Generate. If you have changed the default settings somehow, or if you are worried about having the right setting, just make sure you have selected 1B model as Image to PC model.
In well under a minute, the point cloud was ready. It should be selected by default, but just to make sure, open the point cloud tab and, once processing has completed, select the topmost item in the list.
As far as I know, there is no way to preview the point cloud at this step yet. For that, we will need a mesh (maybe there are other ways - I have zero experience with Blender, but this add-on inspired me to learn!)
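One possible workaround, assuming you can get at the raw points somehow (the add-on's storage format isn't documented here, so the array source below is just a stand-in): dump the points to a vertices-only Wavefront OBJ, which Blender will import and display as a cloud of loose vertices. A minimal sketch:

```python
import numpy as np

def points_to_obj(points, path):
    """Write an Nx3 array of XYZ points as a vertices-only OBJ file.

    Blender (File > Import > Wavefront OBJ) shows such a file as a
    cloud of loose vertices - enough for a quick visual preview.
    """
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x} {y} {z}\n")

# Random points standing in for the generated cloud (hypothetical data).
cloud = np.random.rand(1024, 3).astype(np.float32)
points_to_obj(cloud, "preview.obj")
```

Since the file contains only `v` records and no faces, any OBJ-aware viewer should accept it.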
Now that we have a point cloud, we are going to use it to guide the generation of a polygon mesh that will be usable in Blender and other 3D applications. To do that, change the Run Mode to Point_Cloud-to-Mesh and, once again, leave the default parameters alone: we should still have 1B model as Image to PC model (we used that one already), and now we want to make sure openAI is selected as PC to mesh method.
The only parameter you can play with is Gridres, but I got good results with the default value of 128. The higher the Gridres parameter, the finer the resulting polygon mesh. All the remaining parameters, unless I am mistaken, are reserved for another model (dmtet) and won't do anything anyway.
You are ready to press Generate, and once again, in about a minute, you should see your truck appearing.
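To build an intuition for why a higher grid resolution gives a finer mesh (this is only an illustration of the general idea of a cubic extraction grid, not the add-on's actual code): a grid with resolution N has N^3 cells, so doubling the resolution multiplies the cell count, and with it the potential surface detail and memory cost, by eight.

```python
import numpy as np

def voxelize(points, gridres):
    """Mark which cells of a gridres^3 grid contain at least one point."""
    # Normalize points into [0, 1) so they index cleanly into the grid.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    norm = (points - mins) / (maxs - mins + 1e-9)
    idx = np.minimum((norm * gridres).astype(int), gridres - 1)
    grid = np.zeros((gridres,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

rng = np.random.default_rng(0)
pts = rng.random((5000, 3))  # stand-in for the truck's point cloud
for res in (32, 64, 128):
    occ = voxelize(pts, res)
    print(res, occ.size, int(occ.sum()))
```

At resolution 32 the grid has 32,768 cells; at 128 it already has over two million, which is why raising Gridres sharpens the mesh but costs time and memory.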
It has a hole in the top and some really apparent damage, particularly near the wheels, which themselves don't seem brand new - far from it. This is perfect for a post-apocalyptic truck, Gaslands and Mad Max: Fury Road style!
So I went further with this theme: I opened the Dream Textures add-on interface and created a really rusty, dirty pick-up truck texture, applying it from multiple angles and specifically selecting some parts of the polygon mesh so that the dream texture would only be applied to those polygons. Here is what I got:
Then I baked the model, exported it as an FBX (an ABC or OBJ could work as well) and imported it into Cinema 4D, where I had prepared a post-apocalyptic scene in a matter of minutes using Dream Textures, some blank 3D models of buildings and some basic lighting to give it depth.
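If you go the OBJ route instead, a quick sanity check before handing the file to another application is to count its vertex and face records (a hypothetical helper, not part of the add-on; the filename is made up):

```python
def obj_stats(path):
    """Count vertex (v) and face (f) records in a Wavefront OBJ file."""
    verts = faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                verts += 1
            elif line.startswith("f "):
                faces += 1
    return verts, faces

# A tiny OBJ (one triangle) standing in for the exported truck.
with open("truck_check.obj", "w") as f:
    f.write("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n")
print(obj_stats("truck_check.obj"))  # → (3, 1)
```

A face count of zero would mean only the point cloud, not the mesh, was exported.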
The texture used for camera projection (after upscaling):
I then animated the pick-up truck to simulate some kind of pursuit. There is NO other model in the scene besides the pick-up truck and the environment: the other car is just an array of lights without any model to hold them, and everything else, like the gunfire, is only evoked with sound and lighting. That was the challenge I had set myself for this piece.