core(predictive-perf): predict FCP #3730

Merged: 3 commits merged into master from predict_fcp_instead on Nov 9, 2017

Conversation

@patrickhulce (Collaborator) commented Nov 2, 2017

closes #3671
closes #3593

Adds prediction of FCP. It's similar to the FMP prediction, but moves the time boundary to the observed FCP, removes the CPU layout tasks, and adds in the EvaluateScript tasks of blocking scripts (which really could have been part of the FMP prediction too).
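
For illustration, here's a rough sketch of that graph trimming; the node shape and the `getNodesForFCP` helper are hypothetical, not the actual Lighthouse implementation:

```ts
// Rough sketch of the graph construction described above. The node shape and
// helper name are hypothetical; this is not the actual Lighthouse code.
type GraphNode = {
  type: 'network' | 'cpu';
  endTime: number;                    // trace timestamp, in ms
  taskName?: string;                  // CPU nodes, e.g. 'EvaluateScript', 'Layout'
  isRenderBlockingScript?: boolean;   // CPU nodes that belong to a blocking script
};

/**
 * Collects the nodes considered for the FCP estimate: everything that finishes
 * before the observed FCP, minus CPU layout work, plus the EvaluateScript tasks
 * of render-blocking scripts.
 */
function getNodesForFCP(nodes: GraphNode[], observedFCP: number): GraphNode[] {
  return nodes.filter(node => {
    if (node.endTime > observedFCP) return false;     // time boundary at observed FCP
    if (node.type === 'network') return true;         // keep the network dependency chain
    if (node.taskName === 'Layout') return false;     // drop CPU layout tasks
    if (node.taskName === 'EvaluateScript') {
      return node.isRenderBlockingScript === true;    // keep only blocking-script evaluation
    }
    return false;                                     // other CPU work is left out in this sketch
  });
}
```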

I also added redirect support, since FCP was more radically impacted by missing redirect support than FMP was. I dug into a lot of the larger error cases (where the error was >1s), and at this point small differences should not be used for decision making.

Some notable examples:

  • DevTools throttling's limitation of approximating latency by adding a flat 600ms delay to every request greatly exaggerates the impact of redirects (e.g. 3 redirects cost 1.8s instead of the correct result of ~500ms); see the sketch after this list.
  • Server response time was measured in seconds and varied between runs, e.g. the server responded in 5s in the unthrottled case but only 3s in the throttled case.
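
To make the first bullet concrete, a back-of-the-envelope comparison of the two models (the 600ms flat delay is the DevTools-style approximation mentioned above; the per-redirect round trip is an assumed figure, not a measurement):

```ts
// Back-of-the-envelope comparison of the two redirect-cost models.
// 600ms is the flat request delay applied by DevTools-style throttling; the
// per-redirect round trip below is an assumed real-world value, not measured data.
const DEVTOOLS_FLAT_DELAY_MS = 600;
const ASSUMED_REDIRECT_RTT_MS = 167;

function redirectCostMs(redirectCount: number) {
  return {
    flatDelayModel: redirectCount * DEVTOOLS_FLAT_DELAY_MS,   // 3 redirects -> 1800 ms
    roundTripModel: redirectCount * ASSUMED_REDIRECT_RTT_MS,  // 3 redirects -> ~500 ms
  };
}

console.log(redirectCostMs(3)); // { flatDelayModel: 1800, roundTripModel: 501 }
```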

Overall, we're about as accurate as we were with FMP: higher error at the tail but lower error for most sites. Further progress on accuracy will need higher-fidelity throttling like WPT's.

Accuracy comparison

| Metric | Value on 99% | Value on 95% | Value on 90% |
| --- | --- | --- | --- |
| FMP - Spearman's rho | .805 | .833 | .854 |
| FCP - Spearman's rho | .792 | .832 | .850 |
| FMP - MAPE | 27.44% | 24.11% | 21.94% |
| FCP - MAPE | 29.03% | 21.72% | 19.65% |
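
For reference, the two measures in the table can be computed roughly like this; this is a generic sketch, not the evaluation script that produced the numbers above:

```ts
// Generic reference implementations of the two accuracy measures in the table.

/** Mean absolute percentage error between predicted and observed values. */
function mape(predicted: number[], observed: number[]): number {
  const total = predicted.reduce(
    (sum, p, i) => sum + Math.abs((p - observed[i]) / observed[i]),
    0
  );
  return (total / predicted.length) * 100;
}

/** Spearman's rho: Pearson correlation of rank-transformed values (ties not averaged). */
function spearmansRho(a: number[], b: number[]): number {
  const rank = (values: number[]): number[] => {
    const order = values
      .map((value, index) => ({value, index}))
      .sort((x, y) => x.value - y.value);
    const ranks = new Array<number>(values.length);
    order.forEach((entry, position) => (ranks[entry.index] = position + 1));
    return ranks;
  };

  const ranksA = rank(a);
  const ranksB = rank(b);
  const n = ranksA.length;
  const mean = (n + 1) / 2;

  let covariance = 0;
  let varianceA = 0;
  let varianceB = 0;
  for (let i = 0; i < n; i++) {
    covariance += (ranksA[i] - mean) * (ranksB[i] - mean);
    varianceA += (ranksA[i] - mean) ** 2;
    varianceB += (ranksB[i] - mean) ** 2;
  }
  return covariance / Math.sqrt(varianceA * varianceB);
}
```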

@paulirish (Member) left a comment


I'm interested in keeping the FMP prediction, for now.

Until we have an actual midpoint metric replacement, I don't think we can remove FMP entirely (see also #3672).

@patrickhulce (Collaborator, Author) commented Nov 6, 2017

> I'm interested in keeping the FMP prediction, for now.

I'm inclined to reduce churn in what we expose until we have a concrete plan à la #3761, but I agree, so I changed this to just add FCP and not touch FMP.

@patrickhulce changed the title from core(predictive-perf): predict FCP instead of FMP to core(predictive-perf): predict FCP on Nov 6, 2017
@brendankenny (Member) left a comment


This all looks good to me.

As we were talking about, maybe add extendedInfo entries as first approximations of the estimates, even if they're only in the ballpark of where we're going with this? Happy to go with awkward names that discourage treating them as more than placeholders :)
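
Purely as an illustration of the placeholder idea, such an entry might look something like the following; the key names and values are made up:

```ts
// Purely illustrative shape for a placeholder extendedInfo entry; the clunky
// key names are intentional so nobody mistakes these for finished metrics.
const extendedInfo = {
  value: {
    roughEstimateOfFCP: 2460,   // ms, simulated first contentful paint (made-up value)
    roughEstimateOfFMP: 3120,   // ms, simulated first meaningful paint (made-up value)
  },
};
```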

@patrickhulce patrickhulce merged commit 0af1721 into master Nov 9, 2017
@patrickhulce patrickhulce deleted the predict_fcp_instead branch November 9, 2017 03:38
christhompson pushed a commit to christhompson/lighthouse that referenced this pull request Nov 28, 2017
* core(predictive-perf): predict FCP instead of FMP

* add back FMP

* add rough estimates
Successfully merging this pull request may close these issues:

  • Lantern: Research performance of predicting FCP instead of FMP
  • Lantern: Implement proper redirect handling