
tsc --watch initial build 3x slower than tsc #34119

Open
aaronjensen opened this issue Oct 13, 2019 · 51 comments
Labels: Domain: Performance (reports of unusually slow behavior) · Fix Available (a PR has been opened for this issue) · Needs Investigation (this issue needs a team member to investigate its status) · Rescheduled (this issue was previously scheduled to an earlier milestone)

Comments

@aaronjensen commented Oct 13, 2019

TypeScript Version: 3.7.0-dev.20191011, 3.6.4, 3.5.2

Search Terms:
DeepReadonly
slow watch mode

Code

The slowness occurs on a codebase of around 1000 files. I can't distill it into a repro, but I can show the type that causes the slowness and an alternate type that does not.

I noticed the slowness when I replaced our implementation of DeepReadonly with the one from ts-essentials. One thing I should note, in case it is helpful, is that in our codebase DeepReadonly is only used about 80 times. It's also nested in some instances; for example, a DeepReadonly type is included as a property of another DeepReadonly type.

Here is the type from ts-essentials:

export type Primitive = string | number | boolean | bigint | symbol | undefined | null;
/** Like Readonly but recursive */
export type DeepReadonly<T> = T extends Primitive
  ? T
  : T extends Function
  ? T
  : T extends Date
  ? T
  : T extends Map<infer K, infer V>
  ? ReadonlyMap<K, V>
  : T extends Set<infer U>
  ? ReadonlySet<U>
  : T extends {}
  ? { readonly [K in keyof T]: DeepReadonly<T[K]> }
  : Readonly<T>;
interface ReadonlySet<ItemType> extends Set<DeepReadonly<ItemType>> {}
interface ReadonlyMap<KeyType, ValueType> extends Map<DeepReadonly<KeyType>, DeepReadonly<ValueType>> {}
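For context, here is a minimal sketch (my illustration, not from the issue) of what the recursive mapped-type branch does at use sites; the `State` type and property names are made up for the example:

```typescript
// Minimal sketch of the recursive mapped-type branch of DeepReadonly.
type Primitive = string | number | boolean | bigint | symbol | undefined | null;

type DeepReadonly<T> = T extends Primitive
  ? T
  : { readonly [K in keyof T]: DeepReadonly<T[K]> };

interface State {
  form: { submitted: boolean };
  tags: string[];
}

// Every property, at every depth, becomes readonly:
// { readonly form: { readonly submitted: boolean }; readonly tags: readonly string[] }
type ReadonlyState = DeepReadonly<State>;

const s: ReadonlyState = { form: { submitted: false }, tags: ["a"] };
// s.form.submitted = true; // compile error: cannot assign to a read-only property
console.log(s.form.submitted); // false
```

Because the result of the mapped type is anonymous, the declaration emitter has no name to refer to and prints the expanded structure wherever the type appears.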

Here is ours:

export type Primitive = number | boolean | string | symbol
export type DeepReadonly<T> = T extends ((...args: any[]) => any) | Primitive
  ? T
  : T extends _DeepReadonlyArray<infer U>
  ? _DeepReadonlyArray<U>
  : T extends _DeepReadonlyObject<infer V>
  ? _DeepReadonlyObject<V>
  : T
export interface _DeepReadonlyArray<T> extends ReadonlyArray<DeepReadonly<T>> {}
export type _DeepReadonlyObject<T> = {
  readonly [P in keyof T]: DeepReadonly<T[P]>
}

Expected behavior:

Both types, when used in our codebase, would take a similar amount of time for both a plain tsc build and the initial build of tsc --watch.

Actual behavior:

Our original DeepReadonly takes about 47 seconds to build using tsc. The initial build with tsc --watch also takes a similar amount of time, around 49 seconds.

With the ts-essentials version, a tsc build takes around 48 seconds. The initial build with tsc --watch takes anywhere from 3-5 minutes.

Playground Link:

N/A

Related Issues:

None that I'm sure of.

@aaronjensen (Author) commented Oct 13, 2019

FWIW, the slowdown appears to come from the recursive application of { readonly [K in keyof T]: DeepReadonly<T[K]> }. If that's extracted to a separate type like this:

export type DeepReadonlyObject<T> = {
  readonly [P in keyof T]: DeepReadonly<T[P]>
}

And used like this:

// ...
  : T extends {}
  ? DeepReadonlyObject<T>
// ...

Then watch is just as fast as a regular tsc compile.

Unfortunately, the editor tools don't display the type as nicely since they actually show DeepReadonlyObject<{...}> instead of { readonly ... }.
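Putting the two snippets above together, the full workaround would look roughly like this (my reconstruction, not code from the issue; for brevity I use the built-in shallow ReadonlyMap/ReadonlySet rather than the deep variants from ts-essentials):

```typescript
export type Primitive = string | number | boolean | bigint | symbol | undefined | null;

// Named helper: the declaration printer can terminate at this name
// instead of expanding the mapped type structurally at every use site.
export type DeepReadonlyObject<T> = {
  readonly [P in keyof T]: DeepReadonly<T[P]>;
};

export type DeepReadonly<T> = T extends Primitive
  ? T
  : T extends Function
  ? T
  : T extends Date
  ? T
  : T extends Map<infer K, infer V>
  ? ReadonlyMap<K, V> // built-in shallow ReadonlyMap, for brevity
  : T extends Set<infer U>
  ? ReadonlySet<U> // built-in shallow ReadonlySet, for brevity
  : T extends {}
  ? DeepReadonlyObject<T>
  : Readonly<T>;

const x: DeepReadonly<{ a: { b: number } }> = { a: { b: 1 } };
console.log(x.a.b); // 1
```

The only semantic change from the ts-essentials version is that the object branch resolves to the named DeepReadonlyObject<T> rather than an inline mapped type.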

@RyanCavanaugh RyanCavanaugh added the Needs Investigation This issue needs a team member to investigate its status. label Oct 17, 2019
@RyanCavanaugh RyanCavanaugh added this to the TypeScript 3.8.0 milestone Oct 17, 2019
@amcasey (Member) commented Oct 23, 2019

@aaronjensen Thanks for the feedback! I'm surprised to hear that tsc and --watch are behaving differently - that's really interesting. Do you have any .tsbuildinfo files that might be affecting incremental build? Are you using --incremental or --build?

A thousand file repro is fine, if we have access to the thousand files. 😉 They're not on GH, are they? Alternatively, we've just added a new --generateCpuProfile flag to tsc. It would be very cool if you could gather a trace. It's plain text, so you can double-check for private info, but it should just be paths and they should already be sanitized. (If you decide to share a trace, please also note or include the version of tsc.js you were using when you collected it.)

@aaronjensen (Author)

They're not on GH, are they?

They are, but in a private repo. So unless you have access to private repos, or there's a way to grant it temporarily (I'd need to receive my client's permission), then no, unfortunately.

Do you have any .tsbuildinfo files that might be affecting incremental build?

Not that I can see. Here's my tsconfig.json. I have noEmit: true and there's no built directory generated.

{
  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "moduleResolution": "node",
    "noEmit": true,
    "pretty": true,
    "outDir": "./built",
    "allowJs": true,
    "jsx": "preserve",
    "target": "ESNext",
    "module": "esNext",
    "lib": ["es2016", "dom", "es2017.object", "dom.iterable"],
    "experimentalDecorators": true,
    "noUnusedParameters": true,
    "noUnusedLocals": true,
    "sourceMap": true,
    "strict": true,
    "baseUrl": ".",
    "paths": {
      "*": ["app/*", "types/*"]
    },
    "types": ["googlemaps", "webpack-env", "mixpanel", "gapi", "gapi.auth2"]
  },
  "include": ["./app/**/*"]
}

Here's the version the profiles were generated with

Version 3.7.0-dev.20191021

And here are some profiles.

  • before.txt is the faster version ("ours" above), with --incremental, which took 70 seconds.
  • after.txt is the version from ts-essentials with --incremental, which took 7.5 minutes.
  • after-not-incremental.txt is the version from ts-essentials without --incremental, which took only 38 seconds.

Archive.zip
after-not-incremental.txt.zip

@sheetalkamat (Member)

Incremental mode uses the ".d.ts" emit to decide which files need to be rebuilt. So please modify your tsconfig.json to remove noEmit: true and instead add emitDeclarationsOnly: true.
After that, run tsc --extendedDiagnostics to get per-phase timing details.
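As a sketch, the suggested tsconfig.json change would look like this (note aaronjensen's correction below: the actual compiler option is emitDeclarationOnly, singular, and it requires declaration: true):

```json
{
  "compilerOptions": {
    "declaration": true,
    "emitDeclarationOnly": true
  }
}
```

with "noEmit": true removed from the existing config.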

@aaronjensen (Author) commented Oct 23, 2019

EDIT: This was with emitDeclarationsOnly: true, but the actual compiler option is emitDeclarationOnly: true, so I'm rerunning.

@sheetalkamat My original issue is about --watch; does the same apply?

Here's the timing info from tsc --extendedDiagnostics:

Files:                        1594
Lines:                      195461
Nodes:                      750701
Identifiers:                262162
Symbols:                    880782
Types:                      454620
Memory used:              1906854K
Assignability cache size:   345227
Identity cache size:         15370
Subtype cache size:         116503
I/O Read time:               0.79s
Parse time:                  1.18s
Program time:                3.39s
Bind time:                   0.92s
Check time:                 30.78s
transformTime time:        358.82s
commentTime time:            5.68s
printTime time:            384.62s
Emit time:                 384.68s
Source Map time:             0.32s
I/O Write time:              0.64s
Total time:                419.78s

Note that I actually have some type errors in all of these builds that were introduced by 3.7 and I haven't taken the time to fix them yet.

@sheetalkamat (Member)

Yes. --watch uses the same logic as --incremental (--incremental just serializes to disk the partial data that --watch keeps in memory). As the diagnostics show, the issue lies in the emit time.
@amcasey The .d.ts emit seems to be slow. @weswigham may have seen this before.

@aaronjensen (Author)

Run again with the proper settings:

Files:                       1594
Lines:                     195461
Nodes:                     750701
Identifiers:               262162
Symbols:                   880801
Types:                     454634
Memory used:              830202K
Assignability cache size:  345227
Identity cache size:        15370
Subtype cache size:        116503
I/O Read time:              0.75s
Parse time:                 1.31s
Program time:               3.51s
Bind time:                  0.86s
Check time:                32.17s
transformTime time:       766.51s
commentTime time:           4.81s
printTime time:           787.95s
Emit time:                788.00s
I/O Write time:             0.35s
Total time:               824.53s

And the faster one, run the same way:

Files:                       1594
Lines:                     195459
Nodes:                     750671
Identifiers:               262146
Symbols:                   908404
Types:                     461249
Memory used:              881506K
Assignability cache size:  349613
Identity cache size:        15478
Subtype cache size:        117735
I/O Read time:              0.68s
Parse time:                 1.18s
Program time:               3.23s
Bind time:                  0.92s
Check time:                30.75s
transformTime time:        17.47s
commentTime time:           0.43s
printTime time:            20.00s
Emit time:                 20.03s
I/O Write time:             0.33s
Total time:                54.93s

@weswigham (Member)

I suspect it's the same issue as a couple other threads - your type is all anonymous in the slow form, so everything gets printed structurally in a recursive fashion, resulting in a huge declaration file. The alternate form is faster because the printback terminates with a named type. To confirm:

  1. Use the latest nightly so we know what we're working with. A few days ago we accidentally published an older version as a nightly because of a bug in the build, so updating to today's build ensures we're not looking at older traces.
  2. Turn declaration emit on (declaration: true) and do a survey of the resulting declaration files - do any seem very large in the slow build? I imagine if you look at them you'd see why it takes so long.
  3. Even if the above isn't the case, capture a cpu profile for us by passing --generateCpuProfile profile.cpuprofile to the compiler and send/upload it for us to look at - the detailed timing information is super useful. ❤️

@aaronjensen (Author)

@weswigham okay, will try. I've been running an incremental build with declaration: true and emitDeclarationOnly: true and it's been running for over 15 minutes... will let it keep going for a bit more but may have to come back to it later.

@aaronjensen (Author)

The type emission for the fast version looks totally wrong. Here's an example property:

fast:

    form: import("../../types/utility-types")._DeepReadonlyObject<{
        submited: any;
        validations: any;
    }>;

slow:

    readonly form: {
        readonly submited: boolean;
        readonly validations: readonly {}[] | null;
    };

This happens on 3.8 nightly and 3.6.4

The build took over 20 minutes and it actually failed to emit every file it should have. I'll have to get a profile from it later.

@amcasey (Member) commented Oct 23, 2019

@weswigham The zip file posted above includes cpu profiles. I think you're right that it's spending too much time emitting declarations, probably because it's not using names. I'm seeing 65% of time in emitDeclarationFileOrBundle and 57% of time spent in typeToTypeNode.

@weswigham (Member)

Specifically, a lot of time in typeToTypeNode indicates it may be a bug in our type generation where we fail to terminate an infinite type at a reasonable size (or the generated types are just really, really big). The exception is if the time is furthermore mostly in getAccessibleSymbolChain (or a worker thereof), in which case it's expensive uncached symbol-visibility calculations, which I'd already like to fix, as I've seen such timings before. The example @aaronjensen posted above confounds both possibilities, though: I don't see why generating one of those forms over the other would be meaningfully slower, at least for types of those sizes (even with the "bug" in the first, which is likely a direct consequence of a reverse mapped type used to produce that type).

@weswigham (Member) commented Oct 23, 2019

Yep, looking at the traces, I see pretty much all the time spent in trySymbolTable and getAliasForSymbolInContainer, which are workers used for calculating visible symbol chains. The whole process is uncached and quite naive right now (it's pretty much the same as it was in TS 1.8, but we rely on it more and more). It has been on my backlog as something that needs to be rewritten with a better, more cacheable structure for perf.

There's also a fair chunk of time spent in GC and in path-component-related operations that might also need some optimization. I'm guessing the large number of import types we manufacture requires a large number of path calculations, which end up creating a lot of garbage; some caching may be in order here.

@aaronjensen (Author)

Are you good on the cpu profiles I provided originally? Anything else you'd like me to try?

In general, it'd be helpful to know the rules for when TypeScript expands a type and when it's able to give it a name. Looking at the generated types, it expands my entire redux state everywhere it's used, which is pretty verbose. Is there anything, like circular references, that could cause that?

@amcasey (Member) commented Oct 24, 2019

@aaronjensen I think we have everything we need for now. @weswigham is there a mitigation we can suggest, other than rolling back to the original type?

@aaronjensen (Author) commented Oct 24, 2019

Okay. One thing that helps a little is this hack:

type MyStateType = DeepReadonly<
  {
    foo: FooState,
    bar: BarState
  }
>

export interface MyState extends MyStateType {}

Is it worth opening a (separate) issue for the fact that a DeepReadonly generic type gets expanded when imported into another file? e.g. in foo.ts:

import { DeepReadonly } from 'utility-types'

export type Foo = DeepReadonly<{ foo: string }>

and in bar.ts:

import { Foo } from './foo'

export const useFoo = ({ foo }: { foo: Foo }) => {}

This turns into the following in bar.d.ts:

export declare const useFoo: ({ foo }: {
    foo: {
        readonly foo: string;
    };
}) => void;

I'd like/expect to see:

export declare const useFoo: ({ foo }: {
    foo: import("./foo").Foo
}) => void;

edit: it doesn't even need to be in a different file; if useFoo is in foo.ts, the same thing happens to it:

import { DeepReadonly } from 'utility-types';
export declare type Foo = DeepReadonly<{
    foo: string;
}>;
export declare const useFoo: ({ foo }: {
    foo: {
        readonly foo: string;
    };
}) => void;

@aaronjensen (Author) commented Oct 24, 2019

Okay, it turns out that:

import { DeepReadonly } from 'utility-types'

export type Foo = DeepReadonly<{ foo: string }>
export const useFoo: ({ foo }: { foo: Foo }) => void = ({ foo }) => {}

Results in:

import { DeepReadonly } from 'utility-types';
export declare type Foo = DeepReadonly<{
    foo: string;
}>;
export declare const useFoo: ({ foo }: {
    foo: Foo;
}) => void;

That's definitely surprising. I figured it out while trying to track down a "XXX has or is using name ..." issue. Is there a way/plan to properly infer the original type instead of the expanded type in this situation? I'd imagine it'd actually help the emit situation in this original issue.

edit

Apparently function declarations infer properly... maybe that's enough of a reason to use top-level function declarations. Input:

import { DeepReadonly } from 'utility-types'

export type Foo = DeepReadonly<{ foo: string }>
export const useFoo: ({ foo }: { foo: Foo }) => void = ({ foo }) => {}

export function useFoo2({ foo }: { foo: Foo }) {
  return 2
}

Emitted declarations:

import { DeepReadonly } from 'utility-types';
export declare type Foo = DeepReadonly<{
    foo: string;
}>;
export declare const useFoo: ({ foo }: {
    foo: Foo;
}) => void;
export declare function useFoo2({ foo }: {
    foo: Foo;
}): number;

@DanielRosenwasser (Member)

Created #34776 to track some of the optimizations we have in mind here.

@aaronjensen (Author)

Thanks, @DanielRosenwasser. Any thoughts on this:

Is there a way/plan to properly infer the original type instead of the expanded type in this situation? I'd imagine it'd actually help the emit situation in this original issue.

I'd be happy to open a new issue for it. Essentially, it'd be great if:

const foo = (bar: Bar) => {}

resulted in emitting the same types as:

function foo(bar: Bar) {}

@DanielRosenwasser (Member)

It wouldn't rewrite as a function declaration, but here's what you probably have in mind: #34778

@DanielRosenwasser (Member)

What I mean is that with that issue fixed, you wouldn't have to think about the way foo is declared, because the only reason you should (in theory) run into the difference is alias expansion.

@aaronjensen (Author)

Yes, that's what I meant: not that it would rewrite the declaration, but that it would emit with the type alias intact, as it currently does when defined with function. I added an example to the issue you created. Thank you.

@lingz commented Dec 17, 2019

FWIW, I tracked down the bad declarations using an emit-only build, then used a file-size explorer to determine which files were slowing the build down. This got my build a 3x speedup (70s -> 20s).

tsc -p ./tsconfig.base.json --declaration --emitDeclarationOnly --extendedDiagnostics --declarationDir ~/Temp/declarations

I used Disk Inventory X to find the big files and cut them down.
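A command-line alternative to a GUI disk explorer (my suggestion, not from the thread) is to rank the emitted declaration files by size; DECL_DIR here stands for whatever you passed to --declarationDir:

```shell
# Rank emitted .d.ts files by size, largest first, to find emit hot spots.
DECL_DIR="${DECL_DIR:-$HOME/Temp/declarations}"
if [ -d "$DECL_DIR" ]; then
  find "$DECL_DIR" -name '*.d.ts' -exec du -k {} + | sort -rn | head -n 10
fi
```

The files at the top of the list are the ones worth opening to look for fully expanded structural types.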

Before-and-after screenshots (attached in the original issue) show the largest declaration files shrinking substantially.

@OliverJAsh (Contributor)

As far as I understand, switching from type aliases to interfaces seems to help reduce the emitted declaration size (and therefore speed up declaration emit time). Related Twitter thread: https://twitter.com/drosenwasser/status/1319205169918144513.

However, this doesn't help in cases where the problematic types are generated dynamically using mapped types, as is the case when using tools such as io-ts or Unionize.

What can we do in these cases? It seems to me that the only solution would be for TS to treat type aliases in the same way it treats interfaces. That is, always preserve the display name and never inline the structure. Related: #32287. /cc @DanielRosenwasser

@Bnaya commented Oct 27, 2020

However, this doesn't help in cases where the problematic types are generated dynamically using mapped types, as is the case when using tools such as io-ts or Unionize.

In some cases you can mask it as well, as I'm doing for mobx-state-tree models:
https://gist.github.com/Bnaya/5c208956a3e5f3699595f9b9ca0ad3eb#file-good-ts

It's super verbose, but it works.

@Bnaya commented Oct 27, 2020

These obscure steps are all things that would be better done by TypeScript internally.
And this is not an emit-only issue: when you have a huge type that gets expanded, it also drastically slows down IntelliSense and freezes VS Code.
