Reduce redundancy in object graph through inference and caching #1586
Labels:
- `topic: core`: Issues relating to core geometry, operations, algorithms
- `type: development`: Work to ease development or maintenance, without direct effect on features or bugs
(This issue is part of a larger cleanup effort. See #1589.)
Problem
Shapes in Fornjot are made of objects: vertices, edges, faces, and so on. These objects reference each other, forming a graph.
Right now, this object graph is highly redundant. For example, the vertices that bound an edge are defined in 1-dimensional curve coordinates, 2-dimensional surface coordinates, and 3-dimensional global coordinates. This ensures that each of those values is calculated only once and then re-used wherever it is needed. Otherwise, we'd risk ending up with slightly different numbers that should be identical, which would lead to various problems.
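To illustrate the redundancy described above, here's a minimal sketch (the type and field names are invented for this example; Fornjot's actual types differ). The same vertex position is stored in three coordinate systems, and all three copies must be kept consistent by hand:

```rust
// Hypothetical sketch of the redundancy, not Fornjot's actual API.
// One vertex position, stored three times in different coordinate systems.
struct Vertex {
    curve_coord: f64,         // 1D position on the bounding curve
    surface_coords: [f64; 2], // 2D position on the surface
    global_coords: [f64; 3],  // 3D position in global space
}

fn main() {
    // If these values are computed independently at different call sites,
    // floating-point round-off can make them disagree, even though they
    // are meant to describe the same point.
    let v = Vertex {
        curve_coord: 0.5,
        surface_coords: [0.5, 0.0],
        global_coords: [0.5, 0.0, 0.0],
    };
    assert_eq!(v.global_coords[0], v.surface_coords[0]);
}
```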
This redundant data makes working with the object graph very challenging. When building shapes, different information is available at different points in time. There are mechanisms to infer missing information later, but they are ad hoc. Overall, it's very easy to build shapes with missing or conflicting information. Various protections are in place to prevent this from resulting in an invalid shape, but they don't help with the difficulty of building a valid one.
I used to think that this redundancy in the object graph was necessary, and just an expression of the inherent complexity of the problem. I no longer believe this. I'm convinced that the complexity in the object graph can be limited by reducing or completely removing this redundancy.
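As a rough sketch of what "inference and caching" from the issue title could look like (all names here are invented for illustration, not a proposed API): store only the canonical 1-dimensional curve coordinate, and compute the redundant 3-dimensional position on demand, caching the result so it is only ever calculated once:

```rust
use std::cell::OnceCell;

// Hypothetical sketch: a line in 3D space, parameterized by a 1D coordinate.
struct Curve {
    origin: [f64; 3],
    direction: [f64; 3],
}

impl Curve {
    // Convert a 1D curve coordinate into a 3D global position.
    fn point_from_curve_coord(&self, t: f64) -> [f64; 3] {
        [
            self.origin[0] + t * self.direction[0],
            self.origin[1] + t * self.direction[1],
            self.origin[2] + t * self.direction[2],
        ]
    }
}

// The vertex stores only its canonical 1D coordinate; the 3D position
// is inferred on first access and cached thereafter.
struct Vertex<'a> {
    curve: &'a Curve,
    curve_coord: f64,
    global: OnceCell<[f64; 3]>,
}

impl<'a> Vertex<'a> {
    fn global_coords(&self) -> [f64; 3] {
        *self
            .global
            .get_or_init(|| self.curve.point_from_curve_coord(self.curve_coord))
    }
}

fn main() {
    let curve = Curve {
        origin: [0.0; 3],
        direction: [1.0, 0.0, 0.0],
    };
    let v = Vertex {
        curve: &curve,
        curve_coord: 0.5,
        global: OnceCell::new(),
    };
    // The value is computed once, then served from the cache, so every
    // caller sees the exact same number.
    assert_eq!(v.global_coords(), [0.5, 0.0, 0.0]);
    assert_eq!(v.global_coords(), v.global_coords());
}
```

With a single source of truth, the "slightly different numbers" problem disappears by construction, since every redundant value is derived from the same canonical datum.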
Proposed Solution
Here's what I have in mind:
Context and Outlook
This is one of multiple measures targeted at simplifying the kernel's core data structures, making them easier and less error-prone to work with. I have already opened some issues in that vein, and am working on creating more. I also plan to open a meta-issue that will point to all issues related to this work.
Experience has shown that the complexity of the core data structures, the very thing these efforts are meant to address, often makes implementing them quite hard, usually in unexpected ways. For that reason, there's no obvious order in which to do things. It might make sense to start working on one issue, pause it when things get dicey, and attack the problem from another direction before resuming.
With that in mind, I'm assigning myself to this issue and will work on it where it makes sense, but it's unlikely that I can do so in a single block of time.