Many exec nodes in the plan have a "total time" metric that records the entire time spent in that node, including time spent waiting on the input iterator(s)' `hasNext` and `next` calls. I'm not sure this is very useful in practice, and many users think this metric represents time spent solely operating on the node, when often it largely measures the time spent waiting for inputs. Almost all of the exec nodes have a separate metric that records the time spent locally operating on the GPU, which in practice is much more useful.
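To make the distinction concrete, here is a minimal sketch (not the actual plugin code; `process`, `totalTimeNs`, and `opTimeNs` are hypothetical stand-ins for the plan's metric accumulators) of why a "total time" style timer ends up dominated by upstream wait: the timer brackets the calls into the input iterator, so any time the upstream node spends producing a batch is charged to this node as well.

```scala
class TimedIterator[T](input: Iterator[T], process: T => T) extends Iterator[T] {
  var totalTimeNs: Long = 0L // includes time blocked in input.hasNext/next
  var opTimeNs: Long = 0L    // only this node's local processing time

  override def hasNext: Boolean = {
    val start = System.nanoTime()
    val result = input.hasNext          // may block waiting on the upstream node
    totalTimeNs += System.nanoTime() - start
    result
  }

  override def next(): T = {
    val start = System.nanoTime()
    val batch = input.next()            // may block waiting on the upstream node
    val opStart = System.nanoTime()
    val out = process(batch)            // the work this node actually does
    val end = System.nanoTime()
    opTimeNs += end - opStart
    totalTimeNs += end - start
    out
  }
}
```

Under this model, a cheap node sitting downstream of a slow scan reports a large `totalTimeNs` and a tiny `opTimeNs`, which is exactly the confusion described above.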
Given that we've seen cases where too many metrics were generated while executing the plan, we should consider removing this one across the board, or at least keeping it only for the select few exec nodes where it makes sense.