core(lantern): improve LCP tail error #10529
Conversation
lighthouse-core/computed/metrics/lantern-largest-contentful-paint.js
AI for me: check out this branch (or wait until master) and run the audits again (using the gathered artifacts you hopefully still have...)
Was this directed at me, @connorjclark? Do you need some assets I might have lying around?
It was a note to myself.
In order to do this we start breaking startTime ties with network priority, and it affects other metrics in subtle ways.
Is it possible to add a simulator test directly for this? If the effect is subtle across all the metric snapshots, it might be difficult to pin a problem on this change if it ever regressed.
It would still be massive on LCP, but ya, this is a good call regardless 👍
Force-pushed from b51f20f to 9f58163.
I'm curious about a comment you made in one of the syncs about this, about being reluctant to bring other artifacts into lantern analysis (which has so far needed just a trace and a devtools log).
Am I misremembering and you were talking about another change, or did you just find it sufficient to base this on the info in the network records?
```js
 */
static _computeNodeStartPosition(node) {
  if (node.type === 'cpu') return node.startTime;
  return node.startTime + (PriorityStartTimePenalty[node.record.priority] * 1000 * 1000 || 0);
}
```
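To see what this computation does in isolation, here is a small self-contained sketch. The penalty values and node shapes below are hypothetical stand-ins (the PR's actual numbers are not shown in this thread); the point is that a per-priority penalty "spreads" start positions so a higher-priority request can overtake a lower-priority one that was observed starting slightly earlier.

```javascript
// Hypothetical per-priority penalties, in seconds of "berth" granted to
// higher-priority requests. These are NOT the values from the PR.
const PriorityStartTimePenalty = {
  VeryHigh: 0,
  High: 0.25,
  Medium: 0.5,
  Low: 1,
};

// Mirrors the shape of _computeNodeStartPosition above: CPU nodes are
// untouched; network nodes get their startTime (microseconds) pushed
// later in proportion to how low their priority is.
function computeNodeStartPosition(node) {
  if (node.type === 'cpu') return node.startTime;
  return node.startTime + (PriorityStartTimePenalty[node.record.priority] * 1000 * 1000 || 0);
}

// The Low-priority image was observed starting first (0.1s vs 0.4s),
// but after penalties the High-priority script sorts ahead of it.
const lowPriorityImage = {type: 'network', startTime: 100000, record: {priority: 'Low'}};
const highPriorityScript = {type: 'network', startTime: 400000, record: {priority: 'High'}};

const ordered = [lowPriorityImage, highPriorityScript]
  .sort((a, b) => computeNodeStartPosition(a) - computeNodeStartPosition(b));
```

With these numbers, the image's penalized position is 100000 + 1000000 = 1100000 while the script's is 400000 + 250000 = 650000, so the script is started first in the simulation.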
Wait, this seems different than just a tiebreaker. It can actually change the order of nodes that don't have the same start time?
A "tiebreaker" that only applied when timestamps match to the exact nanosecond would be useless. The lower the priority, the wider the berth that's given to consider it a "tie" and defer to the higher-priority request.
Requests that started within the same second and have all their dependencies finished do not really need to be started in the exact observed order that Chrome happened to request them in.
A "tiebreaker" that only applied when timestamps match to the exact nanosecond would be useless.
Sure, but usually a "tiebreaker" is for exactly that sort of thing (e.g. sorting comparators). This is more of a time-spreading heuristic :)
You didn't post much motivation for the exact numbers used in the PR, and it's not clear to me that reordering like that would work out correctly, but I guess the proof is in the comparison errors :)
I guess the proof is in the comparison errors
Bingo :) haha
And that's fair, so here's my caveat to anyone who comes back trying to find where these numbers came from:
Dear reader from the future: I do not have supreme confidence that the exact numbers used in this PR are optimal, nor were they divined after some extensive research process. They "felt right" and improved accuracy, so if you need to change them for similar reasons, feel free :)
lmao i am future reader, thx for context
```js
 */
static _computeNodeStartPosition(node) {
  if (node.type === 'cpu') return node.startTime;
  return node.startTime + (PriorityStartTimePenalty[node.record.priority] * 1000 * 1000 || 0);
}
```
Also, that `|| 0` frightens me :) When does that come into play? `startTime` and `priority` should always be defined?
It shouldn't (it ran successfully on our entire lantern corpus and passed all our tests without this change), but in the event that we get something unplanned for `priority`, the failure mode without it is no longer sorting the nodes at all. Failing the entire performance category also seems like a pretty bad experience, so the 0 fallback makes the most sense.
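The failure mode being discussed can be shown concretely. The snippet below uses a hypothetical penalty table (not the PR's values) to illustrate why the `|| 0` matters: an unrecognized priority makes the object lookup `undefined`, and `undefined * 1000000` is `NaN`, which would make every sort comparison meaningless.

```javascript
// Hypothetical penalty table for illustration only.
const PriorityStartTimePenalty = {VeryHigh: 0, High: 0.25, Medium: 0.5, Low: 1};

function penaltyMicros(priority) {
  // Unknown priority -> undefined lookup -> undefined * 1e6 -> NaN.
  // `NaN || 0` evaluates to 0, so an unexpected priority degrades to
  // "no penalty" instead of producing NaN start positions that break sorting.
  return PriorityStartTimePenalty[priority] * 1000 * 1000 || 0;
}

// Without the fallback, this is what an unknown priority would yield:
const unguarded = PriorityStartTimePenalty['SomeFuturePriority'] * 1000 * 1000;
```

Note that `|| 0` also fires for `VeryHigh` (since `0 || 0` is `0`), which is harmless here because the intended penalty is already zero.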
Maybe it was low priority as a proxy for offscreen?
I was talking about this change :) For LCP specifically, just using the priority as a proxy like we're doing here is good enough, because any image that's onscreen and the largest (the one that would block LCP) is not issued with 'Low' priority. We don't really need to distinguish anything more nuanced than that for now.
I can't really evaluate all the snapshot changes, but the code and error-rate changes LGTM!
Those are two separate thoughts, so it's more like:
Good job finding a fix for this so fast.
Here's how this changed the numbers for 11ty. I reused the same artifacts; the only difference is the auditing version of Lighthouse.
Summary
This PR makes two key changes to improve our LCP tail error raised in #10527. I've broken the changes into separate commits for easier review and analysis of their accuracy impact.
Ignore offscreen (Low network priority) images in optimistic LCP
This is a pure win: only metric improvements, no regressions. A no-brainer.
Ignore offscreen (Low network priority) image endTimes in the pessimistic LCP, but still consider the network contention they cause
This isn't a pure win: in order to do it, we start breaking startTime ties with network priority, which affects other metrics in subtle ways and causes a minor regression in SI accuracy. But this is what actually fixes the #10527 netlify case and brings our P95 error in LCP down from 127% to 80%, with other massive tail-error decreases, so I think it's definitely worth doing.
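The first change (ignoring Low-priority images in the optimistic estimate) can be sketched roughly as below. This is not Lantern's actual graph-based implementation; the record shape and the function names `isOffscreenImageProxy` and `optimisticLcpEstimate` are hypothetical, chosen only to show the "priority as an offscreen proxy" filtering idea from the summary.

```javascript
// Sketch: treat Low-priority images as likely-offscreen and exclude them
// when picking the latest endTime for the optimistic LCP estimate.
function isOffscreenImageProxy(record) {
  return record.resourceType === 'Image' && record.priority === 'Low';
}

function optimisticLcpEstimate(records) {
  const considered = records.filter(r => !isOffscreenImageProxy(r));
  return Math.max(...considered.map(r => r.endTime));
}

// Hypothetical records (endTime in ms): the slow Low-priority image no
// longer drags the optimistic estimate out to 9000.
const records = [
  {resourceType: 'Document', priority: 'VeryHigh', endTime: 1000},
  {resourceType: 'Image', priority: 'High', endTime: 2500}, // likely onscreen
  {resourceType: 'Image', priority: 'Low', endTime: 9000},  // likely offscreen
];

const estimate = optimisticLcpEstimate(records);
```

The second change keeps such records in the graph (so they still contend for bandwidth) but ignores their endTimes when reading off the pessimistic LCP.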
Here is the summary of total net effect on accuracy.
Related Issues/PRs
#10527