fixes for TTFB #2231
Conversation
Force-pushed b1b2161 to 80d68c6
Force-pushed 80d68c6 to 1108575
return artifacts.requestNetworkRecords(devtoolsLogs).then(
  networkRecords => artifacts.requestCriticalRequests(networkRecords)
    .then(criticalRequests => [
you can use `Promise.all` to avoid this little dance here. A number of other audits employ the pattern.
Are you referring to something like this? Or am I missing something?
return Promise.all([
  networkRecords,
  artifacts.requestCriticalRequests(networkRecords)
])
yup
return Promise.all([
artifacts.requestNetworkRecords(devtoolsLogs),
artifacts.requestCriticalRequests(networkRecords),
]).then(([networkRecords, criticalRequests]) => ...
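A minimal sketch of the whole parallel version, written as a plain function so both inputs are explicit; the computed-artifact calls mirror the diff later in this thread, and the result shape is illustrative, not the PR's actual code:

```js
// Hypothetical sketch: resolve both computed artifacts in parallel instead of
// nesting .then() calls, then destructure the results in the same order.
function audit(artifacts, devtoolsLogs) {
  return Promise.all([
    artifacts.requestNetworkRecords(devtoolsLogs),
    artifacts.requestCriticalRequests(devtoolsLogs),
  ]).then(([networkRecords, criticalRequests]) => {
    // Use both values to build the audit result; the shape here is illustrative.
    return {rawValue: criticalRequests.length > 0, details: {count: networkRecords.length}};
  });
}
```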
Force-pushed 1108575 to e0d305f
@paulirish @patrickhulce should I try to run smoke tests on this audit?
@wardpeet no rush on this cause this is your reviewers' fault, but can you rebase when you get a chance? Things have changed so much since this was opened that it's hard to tell what's a real change and what's not :):) ❤️
Force-pushed e0d305f to 830037c
@brendankenny done :) hopefully travis doesn't fail (fingers crossed)
lighthouse-core/config/default.js (Outdated)
@@ -227,12 +229,13 @@ module.exports = {
    {id: 'estimated-input-latency', weight: 1, group: 'perf-metric'},
    {id: 'link-blocking-first-paint', weight: 0, group: 'perf-hint'},
    {id: 'script-blocking-first-paint', weight: 0, group: 'perf-hint'},
    {id: 'uses-responsive-images', weight: 0, group: 'perf-hint'},
    // {"id": "unused-css-rules", "weight": 0},
you can revert these
  artifacts.requestNetworkRecords(devtoolsLogs),
  artifacts.requestCriticalRequests(devtoolsLogs)
])
  .then(([criticalRequests, networkRecords]) => {
I think these are flipped, shouldn't it be `[networkRecords, criticalRequests]`?
};
}

static extractRequests(networkRecords) {
This split seems weird to me, shouldn't this belong with the `*Chains` computed artifact and this class just filters to the requests? It's been a while, but I thought we decided to split based on the filtering/chain creation.
The world has also changed quite a bit by now with fast mode introducing a new full-blown dependency graph; instead of all this work we could probably just get away with using the `isCritical` static method on the critical request chains class.
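If we went that route, a minimal sketch of the idea (the require path and the `isCritical` signature here are assumptions, not verified against the gatherer):

```js
// Hypothetical: reuse the critical-request-chains logic to filter network
// records instead of keeping a separate extraction step in this artifact.
const CriticalRequestChains = require('../gather/gatherers/critical-request-chains'); // path assumed

function filterToCriticalRequests(networkRecords) {
  // Keep only the records the chains gatherer would consider critical.
  return networkRecords.filter(record => CriticalRequestChains.isCritical(record));
}
```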
I still need to drop requests that start from a low priority request. Or am I wrong?
The critical requests artifact gives a flat list of requests and the critical-request-chains artifact builds a tree.
Reference (if this is outdated info, I can change my approach): #2192 (comment)
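To illustrate the distinction (keys and fields are made up for illustration, not the exact artifact formats):

```js
// Illustrative shapes only: the critical-requests artifact is a flat
// collection of requests, while critical-request-chains nests each request
// under the request that initiated it.
const criticalRequests = [
  {url: 'https://example.com/', priority: 'VeryHigh'},
  {url: 'https://example.com/app.css', priority: 'High'},
];

const criticalRequestChains = {
  'https://example.com/': {
    request: {url: 'https://example.com/'},
    children: {
      'https://example.com/app.css': {
        request: {url: 'https://example.com/app.css'},
        children: {},
      },
    },
  },
};
```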
If I look at #931 it looks like we just want all critical resources, so maybe it makes sense to just look at the priority and whether it's an image, and do more logic in the critical-request-chains gatherer.
Oh right right. That really puts a wrinkle in wanting to split these then since you need to construct the chain even just to determine the list of requests. I suppose this is fine for now. We'll probably end up migrating this to use the dependency graph from #2720 at some point.
OK @wardpeet talked it over with @paulirish and @brendankenny, and we think we should stick to measuring the TTFB for just the primary HTML request to keep it simple and avoid messing with the critical request chains gatherers for now.
We're about to get better infrastructure for dealing with the graph of requests that should make this much easier and we can revisit it then. So sorry for the churn here, and my bad for not checking we really needed all the critical requests before suggesting the split 😅 You're a champ 💪
Yes, you will still need to construct the chain, but critical resources would give a flat request tree so it's easier to model other audits/gatherers around it.
I will revert the changes and only measure TTFB on the primary request.
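A rough sketch of measuring TTFB on just the primary request (how the main resource is located and the exact timing field names are assumptions; the timing math mirrors the DevTools ResourceTiming shape):

```js
// Hypothetical: find the main document request and compute its TTFB as the
// gap between the request being sent and the first response header byte.
// Field names (url, _timing.sendEnd, _timing.receiveHeadersEnd) are assumed.
function mainDocumentTTFB(networkRecords, finalUrl) {
  const mainRecord = networkRecords.find(record => record.url === finalUrl);
  return mainRecord._timing.receiveHeadersEnd - mainRecord._timing.sendEnd;
}
```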
Force-pushed 52b0429 to efcf5c1
const Util = require('../report/v2/renderer/util');
const URL = require('../lib/url-shim');

const TTFB_THRESHOLD = 200;
Are we intending to run this audit only on unthrottled runs? Won't it flag all resources with the default throttling settings on?
I don't think I understand this comment. TTFB should be below 200ms; this is merely a constant that I use in the audit.
Well if throttling is on won't TTFB always be at least as high as our throttling value?
That's a really good question. I believe you are right. Should we raise the max TTFB? I believe @paulirish suggested using 200ms.
I'll do some audits with and without throttling.
When throttling is on, the TTFB that we observe should be a minimum of 150ms. And if there was any server latency, that is just absorbed in there.
So if the observed TTFB is > 151ms, then ALL of that was real network latency + server response.
Does that sound correct to y'all?
Then 200 is a number we can evaluate separately from our throttling, and we just have to decide when it's slow enough for us to care.
PSI uses either 200 or 400, I forget.
Ah, based on https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/lib/emulation.js#L47 our set latency should be 562ms. So everything I just said holds, but s/150/562/ :)
Well... then our threshold has to be like 600ms.
Eesh.
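Putting numbers on that argument (the 562ms figure is the latency from the emulation config linked above; the rest is just the arithmetic):

```js
// With throttling on, every request pays roughly the simulated round-trip
// latency before any bytes arrive, so the audit threshold has to sit above it
// or the main document would always be flagged.
const SIMULATED_LATENCY_MS = 562; // from the linked emulation settings
const TTFB_THRESHOLD_MS = 600;    // value the thread converges on

// Only the portion of TTFB above the simulated latency can be blamed on the
// server or the real network, so 600ms leaves ~38ms of genuine slack.
const isOverThreshold = observedTtfbMs => observedTtfbMs > TTFB_THRESHOLD_MS;
```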
so much simpler this is great!
const Util = require('../report/v2/renderer/util');

const TTFB_THRESHOLD = 600;
const TTFB_THRESHOLD_BUFFER = 15;
at this point I'm not sure splitting out the buffer is worth it when our threshold is 600, haha
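For what it's worth, a sketch of how the two constants might combine in the pass check (a guess at the intent, not the PR's actual diff):

```js
// Hypothetical pass check: allow TTFB up to the threshold plus a small buffer
// that absorbs measurement noise before the page gets flagged.
const TTFB_THRESHOLD = 600;
const TTFB_THRESHOLD_BUFFER = 15;

const passes = ttfb => ttfb < TTFB_THRESHOLD + TTFB_THRESHOLD_BUFFER;
```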
return {
  category: 'Performance',
  name: 'time-to-first-byte',
  description: 'Time To First Byte (TTFB)',
this description nicely summarizes what we're really doing but it feels weird along with the other perf opportunities, maybe we should convert it into something like "Keep server response times low" and mention TTFB in the description or debugString?
I know @paulirish doesn't like "response time" though ;) open to whatever doesn't make it feel out of place there.
Keep server response times low (TTFB)?
ping @patrickhulce
LGTM % nit 🎉
if (!passed) {
  debugString = `Root document (${URL.getURLDisplayName(finalUrl)}) went over` +
    ` the ${thresholdDisplay} threshold`;
if it's over the threshold then we can be confident it took at least that amount of time, so let's report the actual ttfb here, e.g. `Root document request took X ms`
something like `Root document took ${Util.formatMilliseconds(ttfb, 1)} ms to get the first byte.` or keep it short as you stated above
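Roughly, as a self-contained helper (the require path and `Util.formatMilliseconds` usage are taken from the snippets above; whether the formatter already appends units is an assumption):

```js
// Hypothetical helper producing the suggested debug string for the failure case.
const Util = require('../report/v2/renderer/util');

function ttfbDebugString(ttfb, passed) {
  if (passed) return undefined;
  return `Root document took ${Util.formatMilliseconds(ttfb, 1)} ms to get the first byte.`;
}
```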
yours sgtm too 👍
Successfully emerges from the distant past! :)
I changed a few things in this PR: I moved the critical requests to a new computed gatherer and moved a small part to the chain gatherer. I tried rewriting all the tests for this as well.
I also added a fix to check if `_timing` is available on the network record so TTFB does not fail.
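For context, the kind of guard that fix implies (a sketch only; `_timing` comes from the comment above, while the specific timing fields are assumptions):

```js
// Only compute TTFB when the record actually carries timing data, so the
// audit doesn't throw on records that were never timed.
function safeTTFB(record) {
  if (!record || !record._timing) return undefined;
  return record._timing.receiveHeadersEnd - record._timing.sendEnd;
}
```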