
Trying to rename() a directory over another fails slowly on Windows #98

Closed
binki opened this issue Nov 14, 2016 · 9 comments · Fixed by #100

Comments

@binki

binki commented Nov 14, 2016

$ mkdir -p a b && time node -e "require('.').rename('a', 'b', err => console.log(err))"
{ Error: EPERM: operation not permitted, rename 'C:\Users\ohnob\repos\node-graceful-fs\a' -> 'C:\Users\ohnob\repos\node-graceful-fs\b'
    at Error (native)
  errno: -4048,
  code: 'EPERM',
  syscall: 'rename',
  path: 'C:\\Users\\ohnob\\repos\\node-graceful-fs\\a',
  dest: 'C:\\Users\\ohnob\\repos\\node-graceful-fs\\b' }

real    1m0.326s
user    0m0.000s
sys     0m0.031s

It should fail instantly, like with fs.rename:

$ mkdir -p a b && time node -e "require('fs').rename('a', 'b', err => console.log(err))"
{ Error: EPERM: operation not permitted, rename 'C:\Users\ohnob\repos\node-graceful-fs\a' -> 'C:\Users\ohnob\repos\node-graceful-fs\b'
    at Error (native)
  errno: -4048,
  code: 'EPERM',
  syscall: 'rename',
  path: 'C:\\Users\\ohnob\\repos\\node-graceful-fs\\a',
  dest: 'C:\\Users\\ohnob\\repos\\node-graceful-fs\\b' }

real    0m0.284s
user    0m0.000s
sys     0m0.031s
$ git describe --long
v4.1.10-0-gdb8df44
binki added a commit to binki/node-fs-extra that referenced this issue Nov 14, 2016
When trying to rename a directory over another directory, graceful-fs-4.1.10
(maybe other versions?) takes 1 minute to fail the call with an error.
This causes the test to time out. The workaround is to increase the timeout
of that one test to 1m30s.

See isaacs/node-graceful-fs#98
@binki
Author

binki commented Nov 14, 2016

This is a regression. Renaming a directory over another used to fail fairly instantly in graceful-fs-4.1.9.

@isaacs
Owner

isaacs commented Nov 15, 2016

This is because of 59145ce. @sam-github, any thoughts?

@binki
Author

binki commented Nov 15, 2016

I know it would be racy, but retrying is already introducing potential race conditions. I don’t think that the return error code of fs.rename() gives you enough information to determine that waiting should happen. Perhaps upon getting an error, check if the destination folder already exists and, if so, return the failure instantly as a special case?

@sam-github
Contributor

So, it used to retry; it just retries for longer now. Though enough longer, I guess, that you are noticing it?

The existing code checks for `er.code === "EACCES" || er.code === "EPERM"`; it only retries if the error is one of those. I agree, it looks like the underlying node error code doesn't contain enough information to know whether it's a temporary failure to be retried or a permanent one.

The slow retry loop that gives time for virus checkers to complete could be moved out of graceful-fs and into npm. I'm not clear whether it is a loop that should ever have been in graceful-fs... I understand graceful-fs to retry on fd exhaustion, but should it also retry on other fs errors?
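For anyone skimming, the loop being discussed has roughly the following shape (a minimal sketch, not the actual graceful-fs source; the 60-second cutoff and backoff step are assumptions based on the 1-minute timing reported at the top of this issue):

// Minimal sketch of the retry-on-EPERM/EACCES pattern being discussed.
// Not the actual graceful-fs code; the cutoff and backoff values are
// assumptions based on the timings reported in this issue.
var fs = require('fs')

function renameWithRetry (from, to, cb) {
  var start = Date.now()
  var backoff = 0
  fs.rename(from, to, function CB (er) {
    if (er &&
        (er.code === 'EACCES' || er.code === 'EPERM') &&
        Date.now() - start < 60000) {
      // Assume a virus scanner or indexer has the directory locked and
      // schedule another attempt with a slowly growing delay.
      setTimeout(function () {
        fs.rename(from, to, CB)
      }, backoff)
      if (backoff < 100) backoff += 10
      return
    }
    cb(er)
  })
}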

@isaacs, what is the history here? Is this retry loop something that should be hoisted into npm proper, out of graceful-fs?

@binki
Author

binki commented Nov 15, 2016

I personally like that this loop is in graceful-fs, because this is a Windows weirdness that people should get handled magically just by using it. But with the wrong kind of operations, even the short 1s loop could make programs that run fine on Unix unusably slow on Windows if they rely on rename() throwing errors (though maybe such patterns are actually uncommon); see the references I made to this issue from a test suite that broke after this change. So how about we just add detection of known situations where the error is permanent (in this case, the destination existing) and fail instantly?

@binki
Author

binki commented Nov 15, 2016

Here’s the output from graceful-fs-4.1.9 for reference, especially now that I know why it takes 1.275s (compared to fs.rename(), which takes 0.284s, probably mostly just node startup/compilation time):

$ mkdir -p a b && time node -e "require('graceful-fs').rename('a', 'b', err => console.log(err))"
{ Error: EPERM: operation not permitted, rename 'C:\Users\ohnob\repos\node-fs-extra\a' -> 'C:\Users\ohnob\repos\node-fs-extra\b'
    at Error (native)
  errno: -4048,
  code: 'EPERM',
  syscall: 'rename',
  path: 'C:\\Users\\ohnob\\repos\\node-fs-extra\\a',
  dest: 'C:\\Users\\ohnob\\repos\\node-fs-extra\\b' }

real    0m1.275s
user    0m0.000s
sys     0m0.015s

@isaacs
Owner

isaacs commented Nov 15, 2016

I'd be fine with statting the target to see if it exists. Anything but a failure with ENOENT would trigger the retry.
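In other words, something roughly like the following (a sketch only, not the actual patch in #100): on an EPERM/EACCES, stat the target, keep retrying while the stat reports ENOENT (target not there yet), and fail fast with the original rename error once the target is found to exist, which is the behavior the commit below describes.

// Hypothetical sketch of the fail-fast idea, not the actual #100 patch.
var fs = require('fs')

function gracefulRename (from, to, cb) {
  var start = Date.now()
  fs.rename(from, to, function CB (er) {
    if (er &&
        (er.code === 'EACCES' || er.code === 'EPERM') &&
        Date.now() - start < 60000) {
      fs.stat(to, function (stater) {
        // ENOENT from stat means the target is not there, so the EPERM is
        // plausibly a transient lock; anything else means the target exists
        // (or is unreadable), so surface the original error immediately.
        if (stater && stater.code === 'ENOENT') {
          setTimeout(function () {
            fs.rename(from, to, CB)
          }, 10)
        } else {
          cb(er)
        }
      })
      return
    }
    cb(er)
  })
}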

isaacs added a commit that referenced this issue Nov 16, 2016
When `rename` is used on Windows and encounters an EPERM or EACCES
error, graceful-fs will retry for up to 60 seconds with gradual backoff
to get around file system locking due to A/V etc.

That retry behavior was introduced in 59145ce.

However, this is a problem if the target exists, which can cause an
EPERM or EACCES that is not due to folder locking.

This commit causes it to fail fast when an EPERM or EACCES is
raised by rename if the target exists.

Fix #98
@isaacs
Owner

isaacs commented Nov 16, 2016

@binki Can you `npm i isaacs/node-graceful-fs#windows-rename-polyfill-slow` in your project and see if it fixes the issue for you?

If it does, then I'll land and publish this as a patch release.
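(For reference, verifying should just be a matter of re-running the reproduction from the top of this issue against the branch, e.g.:

$ npm i isaacs/node-graceful-fs#windows-rename-polyfill-slow
$ mkdir -p a b && time node -e "require('graceful-fs').rename('a', 'b', err => console.log(err))"

and confirming the EPERM comes back in well under a second rather than after a minute.)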

@binki
Author

binki commented Nov 16, 2016

@isaacs That seems to fix it. The operation looks like it runs instantly so that the project’s tests run fine without having to extend the timeout. Thanks!
