perf(uses-http2): check protocol first #2701
Conversation
What's the before/after speed difference? Even the worst site should only have resources in the thousands... If the URL constructor is a bottleneck for that, we should be able to shave a lot more seconds off the lighthouse run
```diff
@@ -43,7 +43,7 @@ class UsesHTTP2Audit extends Audit {
     // Filter requests that are on the same host as the page and not over h2.
     const resources = networkRecords.filter(record => {
-      const requestHost = new URL(record._url).host;
+      const requestHost = record._parsedURL && record._parsedURL.host;
```
`netrequest.parsedURL` uses a different implementation of URL parsing (though I hope that wouldn't matter).
Alternatively, we could add caching to our URL shim? That'd have the benefit of improving perf for all audits/gatherers.
Don't feel strongly, just raising the idea.
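For illustration, a caching URL shim along those lines might look like the sketch below. The `cachedURL` helper and module shape are assumptions for this sketch, not Lighthouse's actual URL shim:

```js
// Sketch of a memoizing URL shim (hypothetical; not Lighthouse's real shim).
const { URL } = require('url');

const urlCache = new Map();

/**
 * Parses a URL string, reusing a previously constructed URL instance when
 * the same string is seen again. Repeated records (e.g. the same large data
 * URI requested many times) would then pay the parsing cost only once.
 */
function cachedURL(urlString) {
  let parsed = urlCache.get(urlString);
  if (!parsed) {
    parsed = new URL(urlString);
    urlCache.set(urlString, parsed);
  }
  return parsed;
}

module.exports = cachedURL;
```

One caveat with this approach: URL instances are mutable, so callers would have to treat the shared instance as read-only, and the cache only helps to the extent that the same URL strings actually recur across audits.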
This is the only place we were using `new URL` over `parsedURL` for network records. The only other place where we use `new URL` in a loop is for the anchor tags, which I think is less likely to involve hundreds/thousands of `target=_blank` elements, so maybe it's ok?
I'm not sure what the overlap of duplicate URLs would be for caching.
Okay then.
Add a comment to explain why we use `URL.hostname` but `ParsedURL.host`? :)
done
@brendankenny The screenshot is the before, the after is
what site are those numbers from? I can't get more than a few hundred URLs there so far :)
@brendankenny just from cnn.com
hmm, something weird is definitely happening in that case. For cnn I'm seeing 449 URL constructor calls in `uses-http2`, but 3940 constructor calls in the whole run, so in theory we're wasting 16 seconds just constructing URLs. But I'm also only seeing 50-250ms spent in that audit, so I'm not sure what's going on. Maybe the URL constructor's performance changed? I'm running node 8.1.4.
@brendankenny not all constructor calls are created equal; there's nothing that wrong with parsing a simple URL, but when we throw enormous data URLs in there it gets pretty slow pretty quickly (https://jsperf.com/new-url/1). The screenshot of the 2s run was with an ad that had particularly many data URIs, on node 8.1.2, vs <1 ms. Is this an objection to the fix or just curiosity?
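As a rough illustration of the jsperf result referenced above (the URLs and sizes below are made-up examples, not the actual records from the run):

```js
// Micro-benchmark sketch: parsing a short URL vs. an enormous data URI
// with the WHATWG URL constructor. Inputs here are illustrative only.
const { URL } = require('url');

const shortUrl = 'https://example.com/script.js';
const hugeDataUri = 'data:image/png;base64,' + 'A'.repeat(1000000);

for (const candidate of [shortUrl, hugeDataUri]) {
  const start = process.hrtime();
  new URL(candidate);
  const [s, ns] = process.hrtime(start);
  console.log(`${candidate.slice(0, 30)}... parsed in ${(s * 1e3 + ns / 1e6).toFixed(2)}ms`);
}
```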
ah, that wasn't clear before. In that case, another option would be to do an early return on
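Given the PR title ("check protocol first"), that early return would presumably look something like the sketch below; `record.protocol` and `finalHost` are assumed names for this illustration, not necessarily the merged code:

```js
// Hypothetical shape of the early-return variant: skip URL construction
// entirely for records already served over h2.
const resources = networkRecords.filter(record => {
  // Cheap protocol check first; h2 requests can never be flagged.
  if (record.protocol === 'h2') return false;
  // Only pay for URL parsing when the record might actually be reported.
  const requestHost = new URL(record._url).host;
  return requestHost === finalHost;
});
```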
I did not expect using `parsedURL` to be so controversial 😆 that's fine too, though it's still wasted parsing
@brendankenny looks like you can take another look
LGTM ⚖️ 🔬 🐁
Invoking `new URL` for each network record was taking an insane amount of time.