Geometrical interface for raytracing models? #780
Replies: 3 comments 1 reply
-
Given that there is as yet no convergence on either the 3d format or the material data front, OSI can currently only support the bare-bones model_reference approach. This means that there needs to be some extra-standard arrangement between simulator and model to properly convey the relevant data.

This can mean your solution 1, i.e. arranging for the model to support all kinds of 3d model data formats and using model_reference to name the relevant file. However, it can also mean specifying clear keys for the model_reference (e.g. "net.pmsf.osi3d.model.luxury_sedan_model_xx", ...) that are supplied by the different simulators and then used by the model to look up a corresponding 3d model and material data, in its own formats, from its own database. There is usually no need to tie the 3d modeling/rendering of simulator and model to each other, given that they have different objectives (a 3d model used for ray-tracing-based radar simulation is unlikely to be interesting or useful for rendering visual output for human observers, or vice versa).

If current activities to converge on one format for 3d models and material data for ray-tracing across simulators and use cases do succeed (cf. OpenMATERIAL and related efforts), then solution 2 will become more feasible going forward. I don't see solution 3 ever becoming an interesting approach, given that it just adds yet another file format to the picture, which is not going to be helpful in terms of implementation effort or performance.
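To make the key-based variant concrete, here is a minimal sketch of how a sensor model could resolve such keys against its own asset database. The keys, file paths, and the resolve_model_reference helper are purely illustrative and not part of OSI:

```python
# Illustrative sketch: a sensor model resolving agreed-upon model_reference
# keys to its own, sensor-specific assets. Keys and paths are made up.
SENSOR_ASSET_DB = {
    "net.pmsf.osi3d.model.luxury_sedan_model_xx": "assets/radar/luxury_sedan_xx.mesh",
    "net.pmsf.osi3d.model.compact_car_model_yy": "assets/radar/compact_car_yy.mesh",
}

def resolve_model_reference(model_reference: str) -> str:
    """Map a model_reference key from the simulator to a radar-specific asset."""
    asset = SENSOR_ASSET_DB.get(model_reference)
    if asset is None:
        # Depending on the use case: fall back to a validated generic asset
        # or reject the scenario as not covered by the model.
        raise KeyError(f"No radar asset registered for '{model_reference}'")
    return asset
```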
-
But for raytracing models, don't you compare apples with pears if you substitute some "corresponding model" (like RFPro substitutes IPG CarMaker models with their own ones)? Then the camera rendering works on meshA, the radar on meshB and the ultrasonic on meshC, and there is no consistency. I see that a lot: the sensor model just uses "generic car" data and one micro-Doppler pedestrian pattern regardless of the actual car type, pedestrian type or pedestrian animation.
I would strongly disagree with that. In terms of materials, yes: texture maps and material settings like roughness, refractive index, metalness, ... are approximated for visible light/PBR rendering. But isn't that the whole point of OpenMaterial: to define material properties specific to different sensor technologies and wavelengths, while reusing the mesh geometry in the glTF file?
Sure, I would be happier if there were some pre-existing, easy-to-use interface that keeps the integration effort into existing software low. But from my perspective the other solutions are not practical at the moment.
-
There is no consistency in any case if you use different ray-tracing engines, different modeling assumptions, and different quality of geometry and material data (see below). Furthermore, where would the need for "absolute" consistency come from, given the myriad of implicit and explicit assumptions made in modeling and the general level of abstraction of the environment (just consider the handling of rain in an OSI context)? There is a reason that tools like RFPro use different models than tools like CarMaker, where the goals of rendering fidelity differ wildly, and that is just for purely optical wavelengths. This is not meant as a comment on either of those tools, just an observation that different use cases require different trade-offs, as has always been the case in simulation: all models are wrong, some are useful. You have to pick your poison, and you have to validate each simulation against the actual requirements in terms of fidelity, quality and other aspects. And the approach outlined, where the sensor model picks the appropriate 3d model according to model_reference keys, does not suggest you should just use one 3d model, but rather have as many high-accuracy, validated 3d models as you need for your scenarios and use case. In fact this usually means a 1:n mapping rather than just a 1:1 or n:1 mapping from simulator models to sensor-model models, if you require that level of fidelity for a certain use case. Which is usually the big if, given that for most system-level simulation purposes it is IMHO questionable to use high-fidelity sensor models in the first place. But that is a different discussion.
Except that it isn't the same: why would one use the same geometry model to answer propagation questions at a 4 mm wavelength vs. at 750 nm (i.e. the relevant resolution differs by nearly four orders of magnitude)? Especially when much of the material that is opaque for visual propagation is semi-transparent for radar (and vice versa). So much of the underlying geometry does not matter for one use case but does for the other: e.g. the crash-bar geometry underneath the bumper is often of crucial importance for radar propagation (as is the recycled-material mix of the bumper, for that matter), but who cares to model to this depth for visual rendering? Don't get me wrong, I'm not suggesting that there is never a case for re-use, but in general that comes with at least as many trade-offs as sensor-specific models have, so this is not a panacea. And we have not yet delved into matching the geometry to the underlying ray-tracing modeling approaches, either.
That is one idea, but I would not characterize it as the whole or even the major part of the point: the major advance of OpenMaterial would be to enable the largish material databases that are needed to model reality to become broadly exchangeable at all. This has huge benefits, regardless of whether you use the same or different meshes for the underlying geometry in the different sensor models. Settling on one standard format for the exchange of 3d geometry would also be very beneficial in terms of tooling and model exchange, even if you still use different actual geometries for the different sensors.
Which is usually not much of a problem, given that simulation, if it is to be efficient, will always be single-system-image: distributed simulation does not make much sense when the scenario workload already provides massive parallelism that makes for much more efficient use of resources, especially given today's over-abundance of cores (CPU or GPU) in a single system image. And it enables the use of standard tools and techniques for broad distribution of content, which all clusters have, without having to badly reinvent that as an OSI specialty: you can easily distribute one simulation image containing all relevant data across a simulation cluster for a million scenario runs, and get the benefits of caching, shared memory-mapped I/O, etc. without much effort at all.
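As a purely illustrative sketch of that last point (the file name and format are hypothetical), a sensor model can simply memory-map its read-only assets and let the OS page cache share them across all scenario runs on a node:

```python
# Illustrative sketch: memory-mapping a read-only asset file so that many
# parallel simulation/sensor-model processes on one node share the pages
# via the OS page cache instead of each loading a private copy.
import mmap

with open("assets/radar/luxury_sedan_xx.mesh", "rb") as f:
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    header = buf[:16]  # parse the asset header as needed
```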
These sensor models will always have to be validated and fine-tuned against individual simulators to achieve the fidelity levels expected of them; otherwise one would likely not use ray-tracing in the first place, I would venture. So in practice, while this is a hassle, it often does not add much to the hassle that is already there. Again, I am not arguing that the state of affairs is great, but it is manageable for the instances that warrant the level of detail that ray-tracing is supposed to enable.
Actually, solutions 2 and 3 are more or less the same: you standardize the way that simulators provide 3d models to sensor models. Whether you do this by picking an existing file format or by inventing a new one is really not much of a difference, except that for solution 3 you first have to put in the not inconsiderable work (we are talking about person-years) to standardize this from scratch, and then you start with an implementation base of 0 vs. at least 1 for solution 2. And with solution 2 you actually make it possible to reuse the same 3d models across simulators, because they will likely support the format as input and not just as output, whereas with solution 3 you still have not solved the problem of how to get the 3d models into the simulator in the first place.

And it is likely to always be file-based: why would you transfer potentially gigabytes of mostly static, heavily reused data over a network-centric protocol that affords little in the way of centralized distribution, caching, etc., when that problem is already largely solved for files using standard mechanisms? Not to speak of the performance problems you are likely to run into using protobuf, which we have already seen for reflection-based interfaces in OSI that are likely much less problematic than mesh data and materials, not to speak of potentially volumetric data in the future.

Now don't get me wrong, I love OSI for what it is (even given that I am personally somewhat at fault for some of its features), but I don't think stretching it into yet another 3d model format is a good use of resources. That does not mean that, if someone comes up with a concrete proposal, there might not be sufficient interest to pursue it. And given that OSI, like ASAM, is contributor/member-driven, anything that makes some sense and is supported by a large number of contributors would be welcomed. But currently I'd think that investing in pushing something like OpenMATERIAL forward is a much more likely approach to pursue. Just my two centi-euros, for what it's worth.
-
Hello everyone,
OSI is also supposed to work with raytracing sensor models.
If the simulator and the raytracing sensor model are separated, the geometry from the simulator needs to be transferred to the sensor model.
Currently, OSI only offers defining a model_reference for stationary/moving/trafficLight/trafficSign/etc. objects.
It is "implementation-specific how model_references are resolved to 3d models." as stated in the documentation.
That means you could insert "Car.obj" or just "Car" or some unique ID for the asset or anything.
My first guess was to put in the actual 3d file name, e.g. Car.obj.
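For illustration (assuming the osi3 Python bindings; the actual string values are just examples), populating the field could look like this:

```python
# Hypothetical sketch of a simulator filling in model_reference;
# OSI itself does not prescribe any of the string formats shown below.
from osi3.osi_groundtruth_pb2 import GroundTruth

gt = GroundTruth()
car = gt.moving_object.add()
car.id.value = 42
car.model_reference = "Car.obj"                        # a concrete file name ...
# car.model_reference = "Car"                          # ... or a bare asset name ...
# car.model_reference = "net.pmsf.osi3d.model.sedan"   # ... or an agreed-upon lookup key
```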
But simulators might use different 3d formats as input.
How should it work then?
Solution 1:
Make the raytracing sensor model compatible with multiple simulators => it needs to be able to load all of these file formats.
Solution 2:
Force every simulator to use a common 3d format (e.g. .gltf) => they need to convert every source 3d model to this format.
Solution 3:
Provide a generic protobuf geometry interface to send just the mesh/material data (after the actual file format has been parsed, so the source file format does not matter).
All 3d file formats have in common that they provide a list of vertices and indices, normals, UVs, a topology (triangles, quads, strips, ...), a list of materials, and a mapping of which index belongs to which material. (Edited: also add sensor-specific properties for radar, lidar, ultrasonic.)
For animations, additionally: the weights binding vertices to skeleton bones.
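As a rough, purely illustrative sketch of the data such a generic interface would have to carry (expressed as Python dataclasses rather than protobuf for brevity; all names are hypothetical, this is not an OSI proposal):

```python
# Hypothetical sketch of the payload a generic geometry interface (solution 3)
# would need; just the data listed above made explicit.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Material:
    name: str
    # PBR parameters plus sensor-specific properties (e.g. permittivity for
    # radar, reflectivity for lidar, acoustic impedance for ultrasonic).
    properties: Dict[str, float] = field(default_factory=dict)

@dataclass
class Mesh:
    vertices: List[Tuple[float, float, float]]   # x, y, z positions
    normals: List[Tuple[float, float, float]]
    uvs: List[Tuple[float, float]]
    indices: List[int]                           # topology: triangle list assumed
    materials: List[Material]
    material_per_face: List[int]                 # index into `materials` per face
    # For animation: per vertex, a list of (bone index, weight) pairs.
    bone_weights: List[List[Tuple[int, float]]] = field(default_factory=list)
```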
Is there some recommended solution?
Regards,
Sebastian Wolter