core(estimated-input-latency): use a 5s rolling window #4989
Conversation
what's performance like? We could add an iterative version of `getMainThreadTopLevelEventDurations()` if it's slow, since it's doing a lot of redundant work per window
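A rough sketch of that iterative idea, purely illustrative: the generator below is not from the PR, it just shows how the per-window rescans could be replaced by two indices that only move forward over events sorted by start time.

```js
// Illustrative only — not the PR's code. Assumes `events` is sorted by start
// time and each event has {start, end, duration} in milliseconds.
function* slidingWindows(events, windowSizeMs) {
  let hi = 0;
  for (let lo = 0; lo < events.length; lo++) {
    const windowStart = events[lo].start;
    const windowEnd = windowStart + windowSizeMs;
    // `hi` only ever advances, so membership bookkeeping across all windows is
    // amortized O(n) instead of a full rescan per window.
    while (hi < events.length && events[hi].start < windowEnd) hi++;
    yield {start: windowStart, end: windowEnd, events: events.slice(lo, hi)};
  }
}
```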
```js
    'of latency, or less. 10% of the time a user can expect additional latency. If your ' +
    'latency is higher than 50 ms, users may perceive your app as laggy. ' +
    '[Learn more](https://developers.google.com/web/tools/lighthouse/audits/estimated-input-latency).',
  helpText:
```
possible to revert these? don't appear to have changed :)
done
```js
const latencyPercentiles = TracingProcessor.getRiskToResponsiveness(
  events,
  startEvt.start,
  startEvt.start + ROLLING_WINDOW_SIZE
```
if there's a last argument of `[0.9]` then there's no need for the `find()` below (since we're dropping the extendedInfo)

...though if we keep EIL, there's arguably usefulness in keeping other percentiles :)
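If that optional percentiles argument exists on `getRiskToResponsiveness` (an assumption here, going only by the comment above), the call site could look roughly like this:

```js
// Sketch only — assumes getRiskToResponsiveness accepts a percentiles array as
// its last argument and returns results in that order.
const [ninetieth] = TracingProcessor.getRiskToResponsiveness(
  events,
  startEvt.start,
  startEvt.start + ROLLING_WINDOW_SIZE,
  [0.9]
);
const rawValue = parseFloat(ninetieth.time.toFixed(1));
```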
done
```js
'input, in milliseconds. There is a 90% probability that a user encounters this amount ' +
'of latency, or less. 10% of the time a user can expect additional latency. If your ' +
'latency is higher than 50 ms, users may perceive your app as laggy. ' +
'[Learn more](https://developers.google.com/web/tools/lighthouse/audits/estimated-input-latency).',
```
we might want to tweak this, though
tweaked!
```diff
@@ -75,8 +89,9 @@ class EstimatedInputLatency extends Audit {
   static audit(artifacts, context) {
     const trace = artifacts.traces[this.DEFAULT_PASS];
 
-    return artifacts.requestTraceOfTab(trace)
-      .then(traceOfTab => EstimatedInputLatency.calculate(traceOfTab, context));
+    return artifacts
```
revert? :D
done
yeah it could definitely be better (~200ms for a CNN trace), but it's marginally better than our old TTI, and the filter on possible start points helps a lot. I was planning on tackling this when adding shared eFID/EIL utils, but now that it's potentially DOA maybe we just do that as a separate followup.
bc96c45 → 0a33750 (compare)
> Is what we're computing here functionally equivalent to the largest 90pctl EQT value of 5s windows between FMP and the end of the trace?
>
> yes

tight. let's add that as a comment somewhere, so it's clear how we compare to the std metric.
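For reference, a sketch of that equivalence pieced together from the diff hunks in this thread (not the PR's exact code): the audit's number is the worst 90th-percentile expected queueing time over 5s windows starting at main-thread tasks after FMP.

```js
// Assembled from snippets in this review for illustration; variable names and
// exact filtering thresholds may differ in the final PR.
const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
  .filter(evt => evt.duration >= 1);

let worstNinetieth = 0;
for (const startEvt of events.filter(evt => evt.duration >= 10)) {
  const latencyPercentiles = TracingProcessor.getRiskToResponsiveness(
    events, startEvt.start, startEvt.start + ROLLING_WINDOW_SIZE);
  const ninetieth = latencyPercentiles.find(result => result.percentile === 0.9);
  worstNinetieth = Math.max(worstNinetieth, ninetieth.time);
}
```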
```js
const ninetieth = latencyPercentiles.find(result => result.percentile === 0.9);
const rawValue = parseFloat(ninetieth.time.toFixed(1));
const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
  .filter(evt => evt.duration >= 1);
```
since a blank trace gets a value of 16, can't we safely remove any events with a duration < 16?
well, a trace with constant 16ms events would technically get an EQT of 24ms, since we arbitrarily add 16ms to whatever value we compute. I'm all for doing more filtering here though, who cares about your small events
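A toy illustration of that 16ms point (the constant name below is an assumption; the real computation lives in TracingProcessor):

```js
// Illustration only: the reported value is the computed queueing-time
// percentile plus a fixed 16ms baseline, so a blank trace reports 16ms and a
// trace of back-to-back 16ms tasks reports ~24ms, per the discussion above.
const BASE_RESPONSE_LATENCY = 16; // ms — assumed name for the baseline constant

function reportedLatency(queueingPercentileMs) {
  return queueingPercentileMs + BASE_RESPONSE_LATENCY;
}

reportedLatency(0); // => 16, a blank trace
reportedLatency(8); // => 24, e.g. constant 16ms tasks per the comment above
```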
```js
const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
  .filter(evt => evt.duration >= 1);

const candidateStartEvts = events.filter(evt => evt.duration >= 10);
```
similarly, can we raise 10 to something like 50? Seems like we'd lose precision in the "you're good" area of scores, but that doesn't seem too bad.
yeah sgtm 👍 much faster too
doesn't that make the max score the PODR?
oh, only filtering the start event times
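To make that distinction concrete (restating the hunk above, no new behavior): the larger threshold only gates where a window may start, while every task ≥1ms still counts inside each window.

```js
// From the hunk above: the >=10ms filter only picks which events may *start*
// a window; everything >=1ms is still included when scoring each window.
const events = TracingProcessor.getMainThreadTopLevelEvents(tabTrace, startTime)
  .filter(evt => evt.duration >= 1);            // included in every window's score
const candidateStartEvts = events.filter(evt => evt.duration >= 10); // window start points only
```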
if we're only concerned with speed there are easier ways of making this faster than sprinkling filters everywhere :P
Speed is not a major concern right now. In the unit test, which has 20s of CPU time split between 1000 start times and 10000 small tasks, it takes ~40-50ms. Of the 200ms for CNN, 180 of those are spent in trace-of-tab, not here. We're not filtering because we absolutely have to; it's more a question of why do useless work. I don't really care either way: 1ms and 10ms don't seem to be a bottleneck speed-wise or have any meaningful impact accuracy-wise.
friendly ping for re-review! I took no action on filter thresholds since it seemed to be contentious...
LGTM! 🎢 🪟
chose a 5s window because that's what TTI uses and it gave the right "feel" on several test sites: CNN gets ~1s and is punished harshly, pwa.rocks gets ~17ms. 1s was a bit too small and harsh; 10s is forgiving of 1s-long tasks, which is just too lenient.
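As a back-of-the-envelope illustration of that trade-off (numbers only, not PR code): a single 1s long task occupies a very different share of the window depending on the window size, which is roughly why 1s feels too harsh and 10s too forgiving.

```js
// Fraction of a rolling window occupied by one 1000ms long task.
const taskMs = 1000;
for (const windowMs of [1000, 5000, 10000]) {
  const share = (100 * taskMs / windowMs).toFixed(0);
  console.log(`${windowMs / 1000}s window: a 1s task fills ${share}% of it`);
}
// 1s window: 100% (very harsh), 5s window: 20%, 10s window: 10% (lenient)
```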