core(lantern): handle disk cache simulation #5221
Conversation
presumably you're seeing better accuracy for your trace corpus?
// Rough access time for seeking to location on disk and reading sequentially
// @see http://norvig.com/21-days.html#answers
const sizeInMb = (networkNode.record._resourceSize || 0) / 1024 / 1024;
timeElapsed = 8 + 20 * sizeInMb - timingData.timeElapsed;
I think this is slower in Chrome. UMA must have some timings we could look at.
yeah, we're not really accounting for the full Chrome IPC round-trip time, just the OS/disk access, but the same could be said for the network path really
perhaps we add an IPC round-trip latency factor in the future where we account for more based on the chunk size of all IPC-based traffic?
okay, yeah, sounds good. DiskCache.0.AsyncIOTime is the best one, but it doesn't give us much since it's measured from the cache thread and includes no cost on the browser IO thread or over the IPC to the renderer.
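For reference, here is a minimal standalone sketch of the formula in the diff above: roughly 8ms to seek plus 20ms per MB of sequential read, per the norvig.com latency numbers cited in the code comment. The helper name is hypothetical, and it ignores the already-elapsed time that the simulator subtracts.

```js
// Hypothetical helper mirroring the disk-cache estimate in the diff above:
// ~8ms seek + ~20ms per MB of sequential read (norvig.com/21-days.html#answers).
function estimateDiskCacheReadMs(resourceSizeInBytes) {
  const sizeInMb = (resourceSizeInBytes || 0) / 1024 / 1024;
  return 8 + 20 * sizeInMb;
}

console.log(estimateDiskCacheReadMs(100 * 1024));      // ~10ms for a 100 KB resource
console.log(estimateDiskCacheReadMs(5 * 1024 * 1024)); // 108ms for a 5 MB resource
```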
const estimatedTimeElapsed = calculation.timeElapsed + timingData.timeElapsedOvershoot;
let timeElapsed = 0;
if (networkNode.fromDiskCache) {
Given the new logic I think we should split _estimateTimeRemaining into a few pieces -- it's getting a little long.
aye, do you prefer that happen now? I wanted to keep the diff a little simpler, though I suppose with separate commits that's still the case
ya, let's do it now. I can follow these changes pretty easily.
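For illustration, a rough sketch of what such a split might look like. The class, method, and property names here are hypothetical, not necessarily the helpers that landed in Lighthouse; only the disk-cache branch mirrors the diff above.

```js
// Hypothetical decomposition of _estimateTimeRemaining into per-node-type helpers.
class SimulatorSketch {
  _estimateTimeRemaining(node, timingData) {
    return node.type === 'cpu'
      ? this._estimateCpuTimeRemaining(node, timingData)
      : this._estimateNetworkTimeRemaining(node, timingData);
  }

  _estimateCpuTimeRemaining(cpuNode, timingData) {
    // Remaining CPU time = total task duration minus time already spent on it.
    return Math.max(cpuNode.duration - timingData.timeElapsed, 0);
  }

  _estimateNetworkTimeRemaining(networkNode, timingData) {
    if (networkNode.fromDiskCache) {
      // Seek + sequential-read estimate from the diff above.
      const sizeInMb = (networkNode.record._resourceSize || 0) / 1024 / 1024;
      return Math.max(8 + 20 * sizeInMb - timingData.timeElapsed, 0);
    }
    // The full network estimate (DNS/TCP/TLS/transfer) is omitted in this sketch.
    return 0;
  }
}
```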
yeah, there's really negligible impact here since all the traces were collected with LH (and thus had the cache entirely disabled). The only things impacting accuracy are the SI change, which was relatively minor since it's just the floor component, and the intercept scaling adjustment, which only affects sites with estimates <1s (a slight improvement in accuracy for those sites).
adds support to lantern for handling network files that were loaded from disk cache instead of the network (necessary for working with `--disable-storage-reset` and in DevTools)

this also makes an adjustment to all lantern metrics predicted to be <1s that scales the intercept linearly until 1s. It only affects sites that were predicted to be very fast.
for example, a site predicted to have FCP at 500ms:
Before: 600 + 500 = 1100
After: 500/1000 * 600 + 500 = 800
or a site predicted to have FCP at 10ms (e.g. cached example.com):
Before: 600 + 10 = 610
After: 10/1000 * 600 + 10 = 16
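A small sketch of the scaling rule those examples describe. The function name is hypothetical, and the 600ms intercept / 1000ms cutoff simply mirror the numbers above; real coefficients vary by metric.

```js
// Scale the intercept linearly for raw estimates under 1s, per the examples above.
function applyScaledIntercept(rawEstimateMs, interceptMs = 600) {
  const multiplier = Math.min(rawEstimateMs / 1000, 1);
  return interceptMs * multiplier + rawEstimateMs;
}

applyScaledIntercept(500);  // 500/1000 * 600 + 500 = 800
applyScaledIntercept(10);   // 10/1000 * 600 + 10  = 16
applyScaledIntercept(2000); // 600 + 2000 = 2600 (estimates >= 1s are unchanged)
```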
lastly, it adjusts the speed index pessimistic estimate to use the pessimistic FCP as the floor rather than the coefficient-predicted one. It was similarly unfair to very fast sites to double-apply the coefficient in this case.
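A minimal illustration of that floor change, with hypothetical names rather than the exact expression in the PR:

```js
// Illustration only: floor the pessimistic Speed Index estimate at the
// pessimistic FCP simulation itself, rather than at the coefficient-scaled FCP,
// which double-applied the coefficient for very fast sites.
function floorSpeedIndexEstimate(pessimisticFcpMs, pessimisticSpeedIndexMs) {
  return Math.max(pessimisticFcpMs, pessimisticSpeedIndexMs);
}

floorSpeedIndexEstimate(900, 700); // => 900: SI can't come in before FCP
```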
closes #5193