core(unused-javascript): augment with source maps #10090
Conversation
/** @type {Record<string, number>} */
const files = {};

// This is 10x slower (~320ms vs 46ms for a big map), but its correctness
Left this in for reviewers. Will remove later.
let offset = lineOffsets[mapping.lineNumber];

offset += mapping.columnNumber;
const byteEnd = mapping.lastColumnNumber || lineLengths[mapping.lineNumber];
byteEnd is a lastColumnOfLine?
lastColumnOfMapping
let offset = lineOffsets[mapping.lineNumber];

offset += mapping.columnNumber;
const byteEnd = mapping.lastColumnNumber || lineLengths[mapping.lineNumber];
lengths are 1-indexed but columnNums are 0-indexed, so i think you want to decrement by one if you're pulling from lineLengths?
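The indexing question above can be checked with a small standalone sketch. This is not the PR's code: the content, the mapping, and the construction of lineOffsets/lineLengths below are made up to mirror the names in the diff, assuming offsets count one extra byte per newline.

```javascript
// Standalone sketch (not the PR's code) of the byte-attribution indexing.
// lineLengths[i] = character count of line i (0-indexed lines);
// lineOffsets[i] = offset of the start of line i within the whole script.
const content = 'const a = 1;\nconst bb = 2;\n';
const lines = content.split('\n');
const lineLengths = lines.map(l => l.length);
const lineOffsets = [];
let total = 0;
for (const length of lineLengths) {
  lineOffsets.push(total);
  total += length + 1; // +1 for the '\n' itself
}

// A mapping with no lastColumnNumber falls back to the full line length.
// Because the loop bound is exclusive (i < byteEnd), using the raw length
// as the bound visits 0-indexed columns 0..length-1, i.e. the whole line.
const mapping = {lineNumber: 1, columnNumber: 6, lastColumnNumber: undefined};
const byteEnd = mapping.lastColumnNumber || lineLengths[mapping.lineNumber];
let offset = lineOffsets[mapping.lineNumber] + mapping.columnNumber;
let attributed = 0;
for (let i = mapping.columnNumber; i < byteEnd; i++) {
  attributed++; // here the real code would consult wasteData at `offset`
  offset++;
}
```

Under this sketch, whether a decrement is needed hinges on treating byteEnd as an exclusive bound; with `i < byteEnd`, a raw line length already stops at the last 0-indexed column.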
for (const mapping of bundle.map.mappings()) {
  let offset = lineOffsets[mapping.lineNumber];

  offset += mapping.columnNumber;
offset += mapping.columnNumber;
const byteEnd = mapping.lastColumnNumber || lineLengths[mapping.lineNumber];
for (let i = mapping.columnNumber; i < byteEnd; i++) {
  if (wasteData.every(data => data.unusedByIndex[offset] === 1)) {
i guess the alternative to this is if the mapping could also express its start and end offsets?
i'm interested in what'd be involved in making that happen.
It would be straightforward, but it would need to be done upstream in CDT. The offset can be pulled from the character iterator's position (see SourceMap._parseMap).
This current method is good for now. When you bring this feature to CDT, it'll be good if you can make the unused calculation reusable; then we can delete this function.
> This current method is good for now. When you bring this feature to CDT, it'll be good if you can make the unused calculation reusable; then we can delete this function.
lol so you're saying that what you did is fine, but i should reimplement when i upstream it? 🤣
i guess you're taking this position mostly because the changes to sdk/sourcemap.js would be too significant? i guess that makes sense, though the class now is pretty modified already. ah well.
There are two parts here:
1. Modify SourceMap._parseMap to also save the end offsets, so the byte attribution can be a little faster.
2. The feature you're working on in CDT will need to do the same byte attribution. So that there is no difference between LH and CDT, this should be done in the same way, using the same code.

I don't want to do 1 here because the change needed is much more than how this PR currently augments SourceMap/SDK. We would need to completely replace a private function. Since it's just for speed, not correctness, I think it can wait until it's done upstream.
For 2, I'd imagine you'll just copy L90-116 here and make a function on SourceMap, something like calculateSourceByteAttribution(compiledSource: string). I'm not sure whether you've done this bit already in your CL - are you already calculating attribution for the bundle?
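To make the proposal concrete, here is an illustrative guess at the shape such a method could take. The method name comes from the comment above; everything else (the class, the mapping fields with precomputed offsets) is hypothetical and is not CDT's actual API.

```javascript
// Hypothetical sketch only; not CDT's SourceMap API. Assumes mappings
// already carry precomputed start/end offsets into the compiled source.
class SourceMapSketch {
  /** @param {Array<{sourceURL: string, startOffset: number, endOffset: number}>} mappings */
  constructor(mappings) {
    this._mappings = mappings;
  }

  /**
   * Tallies how many bytes of the compiled script each source file owns.
   * @param {string} compiledSource
   * @return {Map<string, number>}
   */
  calculateSourceByteAttribution(compiledSource) {
    const bytesBySource = new Map();
    for (const m of this._mappings) {
      const end = Math.min(m.endOffset, compiledSource.length);
      const bytes = Math.max(0, end - m.startOffset);
      bytesBySource.set(m.sourceURL, (bytesBySource.get(m.sourceURL) || 0) + bytes);
    }
    return bytesBySource;
  }
}

const map = new SourceMapSketch([
  {sourceURL: 'a.js', startOffset: 0, endOffset: 10},
  {sourceURL: 'b.js', startOffset: 10, endOffset: 25},
  {sourceURL: 'a.js', startOffset: 25, endOffset: 30},
]);
const attribution = map.calculateSourceByteAttribution('x'.repeat(30));
```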
also, I don't mind taking over the above within DT :)
> also, I don't mind taking over the above within DT :)
that sounds pretty divine to me.
Commits (all authored by Connor Clark <cjamcl@google.com>, Dec 6-11, 2019):

- ecb6260 no index
- 52796db no unshift
- 943e19c deletionBraceCount
- caa269b comments
- e6fde71 semver
- 6480198 update frontend
- 5d17268 rm some ts ignores
- 7129ef9 trim unneeded code
- 0b655b6 better cdt lib gen
- 3333c4f add sections to source map type
- 73eca7b move SDK up b/c global pollution is gone
- 47e8a41 comment
- 9f47880 remove unused js
- 7d50227 nul
- ed4bb21 remove ParsedURL
- 43da869 tmp
- 0d73a8b reduce code
- 4a9127d delete some stuff not needed
- dff8358 errors
- 808736d Common.console
- 122f2a2 ts
- 2f80d5a remove hacky globalThis.cdt
- 7ad9389 do not pollute global array
- fa07a80 mapping test
- f9408f3 count lines
- 6d27caa initial
Updated to use optional artifacts. There will be no change when using the default config; bundle data will appear when using the experimental config. Gating the move to default until there's better styling for subRows. And I guess at this point, it might be best to just release (read: move to the default config) all the source map stuff at once, including visualization. First post updated with a link to a report.
offset += mapping.columnNumber;
const byteEnd = mapping.lastColumnNumber || lineLengths[mapping.lineNumber];
for (let i = mapping.columnNumber; i < byteEnd; i++) {
  if (wasteData.every(data => data.unusedByIndex[offset] === 1)) {
> also, I don't mind taking over the above within DT :)
that sounds pretty divine to me.
 * @param {ReturnType<typeof UnusedJavaScript.determineLengths>} lengths
 * @param {number} bundleSourceUnusedThreshold
 */
static createBundleMultiData(item, wasteData, bundle, lengths, bundleSourceUnusedThreshold) {
iirc you had more descriptive function names before.
i think this should be two functions..
one is L92-118... titled around computing bundle waste, ish.
and then a second is L120-138, calling that bundle subrow data, ish.
sgtm. you said you're about to extract some of this into a computed artifact, so it sounds like you're going to do this in your next PR anyhow, yeah?
yeah, I have a few better names and split this fn in a computed artifact PR :)
/**
 * @param {WasteData[]} wasteData
 * @param {string} url
 * @param {ReturnType<typeof UnusedJavaScript.determineLengths>} lengths
wow that's wild.
assert.deepStrictEqual(config, extendedConfig, 'had mutations');
// Clone so we don't compare event callbacks bound on gatherers.
assert.deepStrictEqual(
  JSON.parse(JSON.stringify(config)),
this wipes out a bunch of checks on these objects, so i'm a bit hesitant to loosen things this much..
what's the motivation?
the source map gatherer sets a .bind property in its ctor (so two instances are never going to be equal). i think this test broke when I had source maps in the default config... but the plan has changed since, so i'll revert this change. Will crop up again when we move maps to the default config.
We ran into this again in #10623.
The "should generate the same filtered config, extended or original" test in config-test.js will break if there's any .bind() property set in the ctor (like @connorjclark said). That test filters down to onlyCategories: ['performance'], and the performance category, prior to 10090 and 10623, had ZERO gatherers with bound instance props (just by coincidence), whereas best-practices and other categories definitely have them.
So this was a fun lil unexpected failure.
In #10623, @Beytoven is adding some code to delete instances before doing the diff, which will sort this out.
{key: 'totalBytes', valueType: 'bytes', label: str_(i18n.UIStrings.columnSize)},
{key: 'wastedBytes', valueType: 'bytes', label: str_(i18n.UIStrings.columnWastedBytes)},
/* eslint-disable max-len */
{key: 'url', valueType: 'url', subRows: {key: 'sources', valueType: 'code'}, label: str_(i18n.UIStrings.columnURL)},
i think amp is very unusual in how they do their maps.
i think the goal here is to reduce the length of each sub row... for that, we can trim off the longest common prefix (LCP ;) as we collect the source map. all other uses of maps will benefit from this too.
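A sketch of that trimming, assuming a hypothetical helper over the map's sources array (the real change would live wherever the source map is collected, or in js-bundles):

```javascript
// Hypothetical helper, not the PR's implementation: drop the longest
// common path prefix shared by every source URL so sub-rows stay short.
function trimCommonPrefix(sources) {
  if (sources.length < 2) return sources.slice();

  // Shrink a candidate prefix until every source starts with it.
  let prefix = sources[0];
  for (const source of sources) {
    while (!source.startsWith(prefix)) {
      prefix = prefix.slice(0, -1);
    }
  }

  // Only cut whole path segments, up to and including the last '/'.
  const lastSlash = prefix.lastIndexOf('/');
  const cut = lastSlash === -1 ? 0 : lastSlash + 1;
  return sources.map(source => source.slice(cut));
}

const trimmed = trimCommonPrefix([
  'webpack:///node_modules/lodash/map.js',
  'webpack:///node_modules/lodash/filter.js',
]);
// trimmed is ['map.js', 'filter.js']
```

Cutting only at a '/' boundary avoids leaving half a directory name behind when two sources share a few extra characters past the last separator.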
sg!
only last item. after this we gucci.
/**
 * @param {LH.Artifacts.SourceMap} SourceMap
 */
trimCommonPrefix(SourceMap) {
i was expecting to see the trimming done in the audit as it's a presentational concern.
could we move it there?
i'd feel better if our sourcemap artifacts remain as we received them.
let's do it in jsbundles. I think every audit will want this treatment.
nvm lets do it ur way
rip goat
thanks @connorjclark, should be fixed now. time to move off the free plan I think, we hit the quota about twice a week now 😬
Depends on: #10084 #10078
http://misc-hoten.surge.sh/lh-reports/pr-10090.html