
Make compressing transforms cache configurable #574

Merged
2 commits merged into babel:master from the feat/configurable-compression branch on Sep 1, 2018

Conversation

thymikee (Contributor) commented on Feb 5, 2018

Please check if the PR fulfills these requirements

  • Tests for the changes have been added
  • Docs have been updated

What kind of change does this PR introduce?

  • Feature

What is the current behavior?
Cache compression is mandatory

What is the new behavior?
One can opt out of compressing the cache via the new cacheCompression: boolean option.
It defaults to true, so nothing breaks for existing users.
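
For reference, a minimal usage sketch – only the cacheCompression option itself comes from this PR; the surrounding webpack config is illustrative:

```js
// webpack.config.js – opt out of compressing babel-loader's on-disk cache entries.
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader",
          options: {
            cacheDirectory: true,   // enable the on-disk cache
            cacheCompression: false // skip gzip; defaults to true
          }
        }
      }
    ]
  }
};
```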

Does this PR introduce a breaking change?

  • No

Other information:
I wrote a bit about it in the related issue #571.

thymikee changed the title from "Make compressing cache transforms configurable" to "Make compressing transforms cache configurable" on Feb 5, 2018
thymikee mentioned this pull request on Feb 12, 2018
danez (Member) commented on Feb 25, 2018

The question we still have is whether we should maybe remove the cache altogether and instead recommend using cache-loader.

Have you tried putting cache-loader in front of babel-loader? That should be faster than the babel-loader cache.
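
A minimal sketch of that setup, assuming the standard cache-loader package (this config is illustrative and not part of the PR):

```js
// webpack.config.js – cache-loader caches the output of the loaders that
// follow it in the array (here, babel-loader) and serves it from disk on
// subsequent builds.
module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: ["cache-loader", "babel-loader"]
      }
    ]
  }
};
```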

thymikee (Contributor, Author) commented:
@danez, I haven't tried it yet, but I'll try to get to it soon. I've heard webpack 5 is going to focus on a better build cache system – I wonder what the ideas are for caching transformers like Babel or TS with source maps, and for cache-loader as well.

BTW, I have another branch locally where I split the code from the maps and save both to the filesystem (if necessary) – this way only the maps need to be serialized, while the code goes straight to disk. The wins are only visible for inline source maps (~5-10% IIRC) though; otherwise it performs similarly, so I'm not too happy about that.
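
A rough sketch of that split-cache idea (illustrative only – the file layout and helper names are assumptions, not the actual branch):

```js
// Cache a transform result as two files: the code is written verbatim,
// and only the source map is JSON-serialized.
const fs = require("fs");
const path = require("path");

function writeEntry(cacheDir, key, result) {
  // Code goes straight to disk, no serialization step.
  fs.writeFileSync(path.join(cacheDir, `${key}.js`), result.code);
  if (result.map) {
    // Only the map needs JSON.stringify.
    fs.writeFileSync(path.join(cacheDir, `${key}.map.json`), JSON.stringify(result.map));
  }
}

function readEntry(cacheDir, key) {
  const code = fs.readFileSync(path.join(cacheDir, `${key}.js`), "utf8");
  const mapFile = path.join(cacheDir, `${key}.map.json`);
  const map = fs.existsSync(mapFile)
    ? JSON.parse(fs.readFileSync(mapFile, "utf8"))
    : null;
  return { code, map };
}
```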

I still have to profile the code properly using --inspect to be sure what's slowing things down instead of guessing 😅.

loganfsmyth force-pushed the feat/configurable-compression branch from 60e4480 to 3f1e6a2 on September 1, 2018 at 21:45
loganfsmyth (Member) commented:
I rebased your branch over our new async-function implementation, and made two small changes:

  • Use the name cacheCompression in the cache function options too
  • Move the filename .gz-injection logic into the read/write functions (roughly as sketched below)
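
A simplified sketch of what compression-aware read/write helpers can look like – this is not the exact babel-loader code, and everything except the cacheCompression idea is an assumption:

```js
// The ".gz" suffix is appended inside read/write, and gzip is applied only
// when cache compression is enabled.
const fs = require("fs");
const zlib = require("zlib");

function write(filename, compress, result) {
  const data = Buffer.from(JSON.stringify(result));
  const payload = compress ? zlib.gzipSync(data) : data;
  fs.writeFileSync(filename + (compress ? ".gz" : ""), payload);
}

function read(filename, compress) {
  const payload = fs.readFileSync(filename + (compress ? ".gz" : ""));
  const data = compress ? zlib.gunzipSync(payload) : payload;
  return JSON.parse(data.toString("utf8"));
}
```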

loganfsmyth merged commit 67a5f40 into babel:master on Sep 1, 2018
loganfsmyth (Member) commented:
Thanks!
