
Change the watch mode of TS compiler to 'UseFsEventsWithFallbackDynamicPolling' #256

Closed
wants to merge 1 commit into from

Conversation

NeKJ

@NeKJ NeKJ commented Apr 20, 2019

  • add typescriptWatchMode option (a usage sketch follows this list)
  • add a pre-defined enum 'ForkTsCheckerWebpackPlugin.TypescriptWatchMode' that consists of the watch-mode values supported by TS 3.4.x
  • set the environment variable 'TSC_WATCHFILE' according to the options before the compiler program is created and executed
  • append the TS Incremental API and TS watch mode settings in use to the information output
  • update README.md with help for this new option
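
A rough usage sketch of the option described above (the option and enum names come from this PR description; the exact API may differ):

// webpack.config.ts - illustrative only; option and enum names are taken from
// this PR's description rather than from a released plugin version.
import ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

export = {
  plugins: [
    new ForkTsCheckerWebpackPlugin({
      typescriptWatchMode:
        ForkTsCheckerWebpackPlugin.TypescriptWatchMode.UseFsEventsWithFallbackDynamicPolling,
    }),
  ],
};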

@NeKJ NeKJ mentioned this pull request Apr 20, 2019
@NeKJ
Author

NeKJ commented Apr 20, 2019

Internally, TS uses the Node.js fs.watch and fs.watchFile API functions for its watch mode. The Node.js documentation even recommends against the latter for performance reasons and urges using fs.watch instead.

NodeJS doc:

Using fs.watch() is more efficient than fs.watchFile and fs.unwatchFile. fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.
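
For reference, the two APIs differ roughly like this (illustrative snippet only, not taken from the TS source; 'tsconfig.json' is just a stand-in path):

import * as fs from 'fs';

// fs.watch: event-driven - the OS notifies Node.js on changes, so it costs
// essentially nothing while the file is untouched.
fs.watch('tsconfig.json', (eventType, filename) => {
  console.log(`fs.watch: ${eventType} on ${filename}`);
});

// fs.watchFile: polling - Node.js stats the file on an interval, which burns
// CPU even when nothing changes.
fs.watchFile('tsconfig.json', { interval: 250 }, (curr, prev) => {
  if (curr.mtimeMs !== prev.mtimeMs) {
    console.log('fs.watchFile: file changed');
  }
});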

For some reason unknown to me, TS defaults to fs.watchFile, and that is why it burns CPU cycles on idle. The configuration option that controls this behaviour of TS is the TSC_WATCHFILE env. variable, and the value that makes TS use the more performant fs.watch API function is UseFsEvents. And indeed this works: CPU usage drops to nothing on idle.

However, I decided to use the 'UseFsEventsWithFallbackDynamicPolling' value as the default mode because it makes TS use fs.watch and, if that doesn't work or isn't supported by the OS, falls back to DynamicPriorityPolling, which uses fs.watchFile in a less aggressive way that (I think) still causes lower CPU usage than the default behaviour.
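
Mechanically, all the plugin has to do is pass the chosen value to the forked checker process through its environment. A minimal sketch, not the actual plugin source ('./service' is a hypothetical stand-in for the checker entry point):

import * as childProcess from 'child_process';

// Minimal sketch (not the plugin's real code): hand the chosen watch mode to
// the forked type-checker process via its environment, so the TypeScript it
// loads picks the requested file-watching strategy.
const watchMode = 'UseFsEventsWithFallbackDynamicPolling';

const checker = childProcess.fork(require.resolve('./service'), [], {
  env: { ...process.env, TSC_WATCHFILE: watchMode },
});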

@johnnyreilly
Member

johnnyreilly commented Apr 20, 2019

First of all: thank you! Great detective work 😁

Now that you've explained what the issue is (and provided a solution) I'm a-wondering... The solution is essentially to set an environment variable, either to this:

TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling

Or to the value supplied. It's completely cool that we can do this. I'm wondering whether it's a good idea. The complete win for me would be TypeScript itself having a better default. So before we look to merge this I think I'd like to raise this with the TypeScript team. Let's see where that leads.

In the meantime, would you be able to characterise how each of these performed?

'UseFsEventsWithFallbackDynamicPolling' - I guess this was top choice since it's the default you picked 😄 
'PriorityPollingInterval' - ?
'DynamicPriorityPolling' - ?
'UseFsEvents' - ?
'UseFsEventsWithFallbackDynamicPolling' - ?
'UseFsEventsOnParentDirectory' - ?

@NeKJ
Author

NeKJ commented Apr 21, 2019

Of course I agree that this is a TS issue and that's where the fix should be done, not in this plugin, and you did the right thing to raise it with them (I would too, but as the maintainer of this project you carry much more ...weight). For the time being, though, we at least have this as a practical workaround for users of this plugin. I didn't take any real measurements the first time because I didn't want to spend any more time on it, but you asked nicely :p so here they are:

I measured all of them using $ top -b -n2 -d 30 -p <pid of node that runs service.js>, which takes a 30-second sample, started 10+ seconds after the build had completed and everything had settled.

Value                                    CPU usage on idle
TS default (TSC_WATCHFILE not set)       7.4%
UseFsEventsWithFallbackDynamicPolling    0.2%
UseFsEventsOnParentDirectory             0.2%
PriorityPollingInterval                  6.2%
DynamicPriorityPolling                   0.5%
UseFsEvents                              0.2%

As we can see, the worst option of them all is the default behaviour of TS, followed by PriorityPollingInterval, which is only a tad lower. DynamicPriorityPolling is at 0.5% and all the rest are at 0.2%. Therefore UseFsEventsWithFallbackDynamicPolling is the best choice, as in the best case it will use UseFsEvents at 0.2% and in the worst case DynamicPriorityPolling, which goes up to 0.5%.

These are all measurements on my own computer, a PC running 64-bit Linux. To do this right, we will need measurements from other systems too (OSX, Windows, etc.).

@phryneas
Contributor

phryneas commented Apr 21, 2019

Hm. As a temporary measure, updating package.json something like this should be enough:

{
  "scripts": {
    "watch": "TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling webpack"
  }
}
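
Or, assuming the plugin's forked checker process inherits the parent's environment (a hedged variant of the same workaround, not an official plugin API), the variable can be set programmatically at the top of the webpack config:

// webpack.config.ts - set the variable before anything starts watching; the
// forked checker inherits this environment, so the effect should be the same
// as setting it in the npm script above.
process.env.TSC_WATCHFILE = 'UseFsEventsWithFallbackDynamicPolling';

export default {
  // ...rest of the existing webpack configuration, unchanged
};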

So I'm not sure this needs a code change right now.

Maybe just updating the README for the time being, or pinning an issue with that information would be enough until it is fixed upstream? (Disclaimer: I'm not the maintainer of this project, just giving my thoughts here)

@johnnyreilly
Member

johnnyreilly commented Apr 21, 2019

I want to see what the TypeScript team say before we proceed. I hope you understand, but taking away something from an API is a lot harder than adding it in the first place.

Given there's a workaround that's usable by all right now, I don't think the urgency on this is high.

I do massively appreciate the detective work done here; it's super useful and helpful. I hope you understand my caution. Much ❤️

@NeKJ
Author

NeKJ commented Apr 22, 2019

No worries, I don't care if it gets merged or not, I'm just glad that I could help to track down the issue. :)

@InvictusMB

@johnnyreilly
As I mentioned in facebook/create-react-app#7003 (comment), I don't run into performance issues when async mode is not enabled for fork-ts-checker-webpack-plugin. Is the watch mode really the culprit there? How come it does not affect sync mode as well?

@phryneas
Contributor

@InvictusMB that is really weird. Could you provide a reproduction repo? That would be of great help!

@InvictusMB

@phryneas Not sure how to set that up. I've tried creating a fresh project with CRA3 and @material-ui/core. But the differences are not that noticeable this way.

In my real codebase there are 500+ components and about 40 @material-ui/core imports. It takes ~100 sec to yarn start on a Windows machine with CRA3 and async: false.
With async: true it becomes non-deterministic and can take anywhere from 10 minutes to eternity. I've never waited longer than 20 minutes.

Another workaround I had was replacing all the top-level import {Foo} from '@material-ui/core' imports with deep import Foo from '@material-ui/core/Foo' imports (the two styles are contrasted below).
This lets compilation finish in a reasonable time, but the CPU load remains high.
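
For clarity, the two import styles being compared (illustrative only; Button is just an example component):

// Illustrative only: the two ways of importing a Material-UI component.
import { Button } from '@material-ui/core';        // top-level (barrel) import
import MuiButton from '@material-ui/core/Button';  // deep import of one module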

Also, I found that while the TS process constantly generates high CPU load, it does not cause any IO load.

@johnnyreilly
Member

johnnyreilly commented May 21, 2019

@InvictusMB have you tried the environment variable workaround? #256 (comment)

Oh, you have and it works; great! Yeah, I'm hoping the behaviour is going to be changed in TypeScript itself - check out microsoft/TypeScript#31048

@InvictusMB

@johnnyreilly
Yeah, it works, but my concern is that the file watcher mode is just a catalyst that exposes some other issue.
I lack the knowledge to explain the correlation between watcher mode, async mode and the non-linear performance hit of the import pattern used.

It is as if every import * as MUI from '@material-ui/core' causes a new set of watchers to be created on all the @material-ui/core .d.ts files. Shouldn't this also affect sync mode? Or is there no file watching involved in sync mode?

@phryneas
Contributor

@johnnyreilly
Yeah, it works, but my concern is that the file watcher mode is just a catalyst that exposes some other issue.
I lack the knowledge to explain the correlation between watcher mode, async mode and the non-linear performance hit of the import pattern used.

Yup, I understand that concern. One thing I'd like you to report back on: does this only happen with useTypescriptIncrementalApi: true? Cancellation of running checks is handled differently between true & false, so I'm curious if there is a difference in behaviour there.

Oh, and all things aside: afaik, Material UI discourages importing directly from @material-ui/core - that will make your bundle size explode since tree shaking won't work any more. But obviously, it's great to uncover problems like this one 😁

@phryneas
Contributor

Oh yeah, forgot to mention you @InvictusMB - the response above was for you :)

@InvictusMB

@phryneas
If useTypescriptIncrementalApi relies on v3.4 API then no. I was testing with TS 3.3.1.

Material UI discourages importing directly from @material-ui/core

Yes, I know that and I don't like it. Compiler/bundler inefficiencies should not define the way I write code.

I actually hacked together a babel macro that expands a namespace import into deep imports.
Surprisingly, it also seems to have an effect on TS performance. Even though the macro's .d.ts file only re-exports stuff from @material-ui/core, TS does not go nuts the way it does with imports from @material-ui/core itself.

@phryneas
Contributor

@phryneas
If useTypescriptIncrementalApi relies on v3.4 API then no. I was testing with TS 3.3.1.

It doesn't; useTypescriptIncrementalApi works down to TS 2.8-ish. Could you please give it a test? :)

@InvictusMB

InvictusMB commented May 22, 2019

@phryneas
Ok, the naming gets confusing with the introduction of the --incremental compiler option in TS 3.4.
Also, I'm becoming slightly overwhelmed by all the permutations, but yes, useTypescriptIncrementalApi matters too.

So, this is slow:

async: true
useTypescriptIncrementalApi: true
TSC_WATCHFILE: default

Flipping any of those options seems to make overall start-up performance OK.

The issue seems to depend on total RAM load or something else too. This time I was not able to reproduce it with @material-ui/core alone, and even with @material-ui/icons the build time hasn't exceeded 6 minutes; sometimes the slowdown did not happen at all. Maybe it gets increasingly worse when there is already a lot of pressure on the swap file.

When I was able to reproduce it, useTypescriptIncrementalApi: true introduced a 2-2.5x increase in the initial dev server start time. Disabling it caused a 5-7x increase in subsequent rebuild times.

Using a macro shaves off 20-30% of build time consistently.

Using TS 3.5 RC also makes performance acceptable. With 3.5 RC, idle CPU load is ~25% and it drops to ~7% with the TSC_WATCHFILE setting.

I have no idea what to make of all this data.

@phryneas
Contributor

Yeah, looks like fork-ts-checker-webpack-plugin was a few versions ahead of TS when naming things 🤣

So, my current analysis on this is (please verify @johnnyreilly @piotr-oles - maybe you have some ideas on this?):

  • in the file-watching code we have implemented ourselves (useTypescriptIncrementalApi: false), there are no problems
  • with useTypescriptIncrementalApi: true
    • in async: false mode, we have webpack wait for typechecking to be done before allowing the emit. That way, webpack can only trigger a new compilation & typecheck after the last emit has taken place
    • in async: true mode, we start a typecheck every time webpack starts a compilation. Usually we'd cancel the last running typecheck using a CancellationToken,
      but typescript.createWatchProgram has no support for a CancellationToken. So we wait for the last typecheck to complete before starting a new one.

So I guess our culprit is somewhere in CompilerHost.processChanges.

I guess what happens is this: a first typecheck (A) is started and, before it is complete, you change something in the code. A new typecheck (B) is queued after (A). In between, you make another change, so (C) is queued after (B). By now (A) is finished - but it is not the current typecheck, so its results aren't shown to you. The more you edit, the more typechecks are queued up, and they run one after the next. So your queue reaches a length where it might take hours until you get the final result (the one that will actually be shown to you).
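
A toy sketch of that theory (not the plugin's actual code, just to make the queuing behaviour concrete):

// Toy model of the theory above: typechecks run strictly one after another and
// cannot be cancelled, so edits that arrive faster than checks complete only
// make the backlog longer.
type Typecheck = () => Promise<void>;

const pending: Typecheck[] = [];
let running = false;

async function requestTypecheck(check: Typecheck): Promise<void> {
  pending.push(check);
  if (running) return;            // a check is already in flight; queue and wait
  running = true;
  while (pending.length > 0) {
    const next = pending.shift()!;
    await next();                 // no CancellationToken: must run to completion
  }
  running = false;
}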

@InvictusMB This is just a wild theory - but am I right in assuming that this takes very long if you are editing in between, and is slow (but not extremely slow) if you aren't touching your keyboard?

@johnnyreilly @piotr-oles I just did a quick scan of the code so I might have missed a safeguard against such behaviour - could you please take a look and verify or debunk my theory?

@phryneas
Contributor

Reading the code again, it might be more of this scenario:

(A) running
(B) requested -> (B) queued after (A)
(C) requested -> (B and C) queued after (A)
(D) requested -> (B and C and D) queued after (A)
(A finishes)
(B and C and D) running in parallel

...which might also be problematic.

But I think this is wild speculation at this point; I'm not really in a state of mind to dig too far into the source code today. Will look back into this in a few days :)

@johnnyreilly
Member

johnnyreilly commented May 23, 2019

Having read and reread it, I think your analysis is solid. I'm pretty sure that describes the behaviour I'd expect.

I wonder if @0xorial has anything to add to the understanding of the useTypescriptIncrementalApi behaviour? In case you didn't know, it was the fine work of @0xorial that landed this fine functionality in fork-ts-checker-webpack-plugin: #198

Very grateful we are too 🤗

Side note: in retrospect we probably should have called the option useTypescriptWatchApi. Ah well 😄

I wonder if we should move this discussion to a separate issue? The topics being discussed now no longer seem to be directly related to the file-watching behaviour...

@0xorial
Contributor

0xorial commented May 23, 2019

@johnnyreilly I would not dare to make assumptions without some debugging - the complexity of CompilerHost.ts and the TypeScript compiler is way higher than I would like to see :(

Normally, it should process everything sequentially, and yes, multiple fast requests could create a longer queue of processing...

@InvictusMB, out of curiosity, what are your current timings for the initial/incremental builds with the best config you could get?

@InvictusMB

InvictusMB commented May 23, 2019

@InvictusMB This is just a wild theory - but am I right in assuming that this takes very long if you are editing in between, and is slow (but not extremely slow) if you aren't touching your keyboard?

No, there is no editing happening concurrently with the ongoing type checking when I benchmark things.

@0xorial
On my Windows machine with

async: true
useTypescriptIncrementalApi: true
TSC_WATCHFILE: UseFsEventsWithFallbackDynamicPolling

and using a macro for MUI imports and TS 3.3.1, I get:

  • ~120 sec for the initial yarn start with a clean Babel cache (after wiping node_modules/.cache)
  • 60-90 sec for subsequent yarn start
  • 2-6 sec for an incremental build, varying depending on the file being edited

That is on a 500+ component CRA3 project with MUI core and icons used.
These measurements are only for webpack emitting files; type checking probably takes another 10-15 sec on top of that. I haven't measured it precisely.

On Mac the initial builds are roughly 2x faster, incrementals are usually under 1 sec and none of the settings had any statistically significant impact.

While @phryneas' theory might not explain the initial start time, it certainly hints at why I now get stale type checker data on incremental builds from time to time.
For example, it might still complain about a variable not being used while the next line using that variable is already in place. To make it go away I have to trigger a rebuild by adding/removing empty lines, but even that does not always help. I guess I have to wait until the current type check(s) are finished or some watcher cool-down time expires.

I don't know if this is due to the watcher mode or async mode, or a combination of those.

This hasn't happened before - before moving to CRA 3 with async: true and having all this fun, that is.

@piotr-oles
Collaborator

I'm closing this as I was able to release v5.0.0-alpha.1, which uses the webpack watch system instead, so this issue is no longer the case :)

@piotr-oles piotr-oles closed this May 23, 2020