
Performance regression in 0.20 #3024

Closed
WorldSEnder opened this issue Dec 10, 2022 · 2 comments · Fixed by #3025

Comments

@WorldSEnder
Member

WorldSEnder commented Dec 10, 2022

js-framework-benchmark shows a serious problem in the "create many rows" benchmark. My initial analysis points to the following method appearing far too often (recursively) on the stack trace:

```rust
impl PartialEq for NodeRef {
    fn eq(&self, other: &Self) -> bool {
        self.0.as_ptr() == other.0.as_ptr() || Some(self) == other.0.borrow().link.as_ref()
    }
}
```

I believe the check against NodeRef::link causes a very long chain of linked node refs to be traversed. I'm not yet certain why such a long chain exists in the first place; it might be connected to the specific order in which the lifecycle methods of components are called. Even before a component is rendered, a node ref is already created and linked to its successor. This could build up such a huge chain and most likely leads to O(n^2) behaviour in large lists; see the sketch below.
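To make the suspected mechanism concrete, here is a minimal, self-contained sketch. It is not the real yew `NodeRef`: the `Inner`/`NodeRefSketch` types and the chain-building loop are assumptions for illustration only, but the `eq` has the same shape as the impl above.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Simplified stand-in for the real NodeRef: each ref may be "linked" to
// another ref, forming a chain.
#[derive(Default)]
struct Inner {
    link: Option<NodeRefSketch>,
}

#[derive(Clone, Default)]
struct NodeRefSketch(Rc<RefCell<Inner>>);

impl PartialEq for NodeRefSketch {
    fn eq(&self, other: &Self) -> bool {
        // Same shape as the impl above: pointer equality, or a recursive
        // comparison against whatever `other` is linked to.
        Rc::ptr_eq(&self.0, &other.0) || Some(self) == other.0.borrow().link.as_ref()
    }
}

fn main() {
    // Build a chain of 10_000 linked refs, each pointing at the previous one.
    let mut tail = NodeRefSketch::default();
    for _ in 0..10_000 {
        let next = NodeRefSketch::default();
        next.0.borrow_mut().link = Some(tail.clone());
        tail = next;
    }
    // A single equality check against the newest ref now walks the entire
    // chain before it can return `false`.
    let unrelated = NodeRefSketch::default();
    assert!(unrelated != tail);
}
```

A single `==` against the end of such a chain is O(n); if a comparison like this happens once per row while creating many rows, the total work becomes O(n^2), which would fit what the benchmark shows.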

Why does it not show up in our benchmarks? The regression might have been introduced before these were established.


Anyway, I'll see if I can come up with a fix. This might be the time to rework NodeRefs overall; the current design seems especially vulnerable to this kind of accidental O(n^2) behaviour.
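One purely illustrative direction for such a rework (an assumption on my part, not necessarily what the linked PR does): collapse links at the moment they are created, so a chain can never grow beyond one hop. Extending the `NodeRefSketch` from the sketch above:

```rust
impl NodeRefSketch {
    // Hypothetical helper: link `self` to the end of `target`'s chain rather
    // than to `target` itself. If every link is created through this method,
    // chains stay at most one hop deep and the recursive equality check above
    // no longer degrades to a full chain traversal.
    fn link_to(&self, target: &NodeRefSketch) {
        let resolved = target
            .0
            .borrow()
            .link
            .clone()
            .unwrap_or_else(|| target.clone());
        self.0.borrow_mut().link = Some(resolved);
    }
}
```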

@ranile
Member

ranile commented Dec 10, 2022

> Why does it not show up in our benchmarks?

Because we don't control the runners for our benchmarks. The hardware is whatever GitHub assigns, so results between different runs are not comparable.

@WorldSEnder
Member Author

That can't be all of it. The speedup from the linked PR is approximately 2x on my machine, far outside the margin of error between runs. I'm currently reading the workflow files to look for anything out of the ordinary.
