
perf(uses-http2): check protocol first #2701

Merged · 1 commit · Jul 19, 2017
Conversation

@patrickhulce (Collaborator)

Invoking new URL for each network record was taking an insane amount of time

[screenshot: before-change timing profile]

@brendankenny (Member) left a comment

What's the before/after speed difference? Even the worst site should only have resources in the thousands... If the URL constructor is a bottleneck for that, we should be able to shave a lot more seconds off the lighthouse run

```diff
@@ -43,7 +43,7 @@ class UsesHTTP2Audit extends Audit {
     // Filter requests that are on the same host as the page and not over h2.
     const resources = networkRecords.filter(record => {
-      const requestHost = new URL(record._url).host;
+      const requestHost = record._parsedURL && record._parsedURL.host;
```
@brendankenny (Member):

netrequest.parsedURL uses a different implementation of url parsing. (Though I hope that wouldn't matter).

Alternatively we could add caching to our URL shim? That'd have the benefit of improving perf for all audits/gatherers.

Don't feel strongly, just raising the idea.
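The caching idea could be sketched roughly like this (hypothetical; `parseURL` and the unbounded `Map` cache are illustrative stand-ins for a memoizing layer inside Lighthouse's URL shim, not the actual implementation):

```javascript
// Hypothetical memoizing wrapper around the URL constructor. Repeated
// URLs (common when every audit re-parses the same network records)
// would pay the parsing cost only once.
const urlCache = new Map();

function parseURL(urlString) {
  let parsed = urlCache.get(urlString);
  if (!parsed) {
    parsed = new URL(urlString);
    urlCache.set(urlString, parsed);
  }
  return parsed;
}
```

Whether this wins depends on how often the same URL string recurs across audits and gatherers, which is exactly the open question raised below.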

@patrickhulce (Collaborator, author):

This is the only place we were using new URL for network records instead of parsedURL. The only other place we use new URL in a loop is for anchor tags, and I think a page is less likely to have hundreds or thousands of target=_blank elements, so maybe it's ok?

I'm not sure what the overlap of duplicate URLs would be for caching.

@brendankenny (Member):

okay then.

add a comment to explain why we use URL.hostname but ParsedURL.host? :)

@patrickhulce (Collaborator, author):

done

@patrickhulce (Collaborator, author) commented Jul 17, 2017:

@brendankenny The screenshot is the before; the after is 0.579ms. See #2701 (comment) for why there's not much left to win after this :)

@brendankenny (Member):

what site are those numbers from? I can't get more than a few hundred URLs there so far :)

@patrickhulce (Collaborator, author):

@brendankenny just from cnn.com

@brendankenny (Member):

hmm, something weird is definitely happening in that case. For cnn I'm seeing 449 URL constructor calls in uses-http2, but 3940 constructor calls in the whole run, so in theory we're wasting 16 seconds just constructing URLs. But I'm also only seeing 50-250ms spent in that audit, so I'm not sure what's going on.

Maybe the URL constructor's performance changed? I'm running node 8.1.4.

@patrickhulce (Collaborator, author):

@brendankenny Not all constructor calls are created equal. There's nothing that wrong with simple URL parsing, but when we throw enormous data URLs in there it gets pretty slow pretty quick: https://jsperf.com/new-url/1

The screenshot of the 2s run was with an ad that had particularly many data URIs, on node 8.1.2, vs <1 ms after the change. Is this an objection to the fix or just curiosity?

@brendankenny (Member):

> The screenshot of 2s run was with an ad that had particularly many data URIs

ah, that wasn't clear before.

In that case, another option would be to do an early return on notH2, so it doesn't attempt to parse the URL if it's not h2. That would avoid having to compare the two differently parsed URLs.
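That early-return idea, which matches the final PR title ("check protocol first"), could look roughly like this. The field names (`protocol`, `_url`) follow the diff snippet above, `finalHost` and the sample records are made up for illustration, and this is a sketch of the approach rather than the exact final code:

```javascript
// Sketch: do the cheap protocol check before the expensive URL parse.
// Records already served over h2 never need their (possibly enormous
// data:) URL parsed at all.
const finalHost = 'example.com'; // hypothetical host of the audited page

const networkRecords = [
  {protocol: 'h2', _url: 'https://example.com/app.js'},
  {protocol: 'http/1.1', _url: 'https://example.com/legacy.css'},
  {protocol: 'http/1.1', _url: 'https://cdn.other.com/lib.js'},
];

const resources = networkRecords.filter(record => {
  // Early return: skip parsing entirely for h2 records.
  if (record.protocol === 'h2') return false;
  const requestHost = new URL(record._url).host;
  return requestHost === finalHost;
});

console.log(resources.map(r => r._url));
// → ['https://example.com/legacy.css']
```

Since only non-h2, same-host records survive the filter, the URL constructor now runs only for the records the audit actually flags, and both sides of the host comparison use the same parser.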

@patrickhulce (Collaborator, author) commented Jul 17, 2017:

I did not expect using parsedURL to be so controversial 😆

that's fine too, though it's still wasted parsing

@patrickhulce changed the title from "perf(uses-http2): use pre-existing parsedURL" to "perf(uses-http2): check protocol first" on Jul 17, 2017
@paulirish (Member):

@brendankenny looks like you can take another look

@brendankenny (Member) left a comment

LGTM ⚖️ 🔬 🐁
