
core(lcp-lazy-loaded): add LCP savings estimate #15064

Merged · 5 commits into main · Aug 15, 2023
Conversation

adamraine (Member)

This PR uses the LCP load delay as the estimated LCP savings for the lcp-lazy-loaded audit. It might be possible to get a better estimate by simulating the page graph with modified fetch priority.

That being said, this audit might be going away as part of #13738. I wanted to reduce the titanic deck chair shuffling in this PR by using a very simple estimate.

@adamraine adamraine requested a review from a team as a code owner May 11, 2023 19:51
@adamraine adamraine requested review from brendankenny and removed request for a team May 11, 2023 19:51
@adamraine adamraine changed the title core(lcp-lazy-loaded): metric impact estimate core(lcp-lazy-loaded): add LCP savings estimate May 11, 2023
Comment on lines 85 to 87
// We don't know when the LCP resource was first "seen" before being lazy loaded.
// Our best estimate for LCP savings is the entire LCP load delay.
lcpSavings = lcpBreakdown.loadStart - lcpBreakdown.ttfb;
Member

I believe lcpImageRequest.networkRequestTime - lcpImageRequest.rendererStartTime is this time (though still not entirely fair to blame all of it on the lazy loading)? Unless there's some optimizations like e.g. the preload scanner skipping loading="lazy" resources and letting the full parser handle it

Member (Author)

> I believe lcpImageRequest.networkRequestTime - lcpImageRequest.rendererStartTime is this time

From my testing this isn't the case (although it would be nice if it were): rendererStartTime is not when the request was "discovered" but when the renderer decided to start the lazy request.

Member

And it turns out

> Unless there's some optimizations like e.g. the preload scanner skipping loading="lazy" resources and letting the full parser handle it

is indeed a thing:

if (type == ResourceType::kImage && is_img &&
    IsLazyLoadImageDeferable(document_parameters)) {
  return nullptr;
}

:)

It actually makes sense that the renderer would be the owner of this because it has to know about layout, current viewport etc, but I didn't think about that.

I guess we know it's bounded by at least ttfb and rendererStartTime, then? Still not great though. My main issue with this wide estimate is that it makes it as big an opportunity as using fetchpriority=high, when it almost never will be.

Including loading=lazy is just a little bit worse than having no fetchpriority/loading attributes on the image at all, because the browser does eventually figure out what's going on once layout occurs. See the testing in https://web.dev/lcp-lazy-loading/#testing-a-fix, for instance

Member (Author)

Long term, if we fold this advice into prioritize-lcp-image, this savings estimate will make more sense.

One thing we could do is take a statistical guess at the savings: apply a 13%/15% (https://web.dev/lcp-lazy-loading/#causal-performance) savings to LCP and bound it by ttfb and rendererStartTime.

@adamraine adamraine merged commit 9cd0686 into main Aug 15, 2023
@adamraine adamraine deleted the lcp-lazy-impact branch August 15, 2023 23:10