File permission issues cause backups to stop being created #71
Another situation:
and backups stop being created.
With this permission it is expected that a non-root user is unable to read the file.
What would be a suitable behaviour for the backup solution, while still letting the user know that something has not been backed up?
That's obviously a good question. Thinking aloud, the first thing that comes to mind is to mark the directory by appending "_errors" to its name, and to place a text file named after the failed backup dir plus "_errors.txt" next to the directory. To keep it simple, the file might contain just the stderr output like the one I posted above (I assume it's stderr output). This way it would be immediately clear that not everything went 100% correctly, but the errors wouldn't prevent backups from finishing, which could otherwise lead to data loss.
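A minimal sketch of how a wrapper could implement that idea, assuming a timestamped destination directory and a placeholder run_backup command (both are illustrative, not part of the tool):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper: run the backup, capture stderr, and mark the
# destination directory if anything went wrong. The path and the
# run_backup command are placeholders for illustration only.
DEST="/backups/2024-01-02-030405"   # assumed timestamped backup directory
ERRLOG="$(mktemp)"

# Run the backup with stderr redirected into a temporary file.
if ! run_backup "$DEST" 2>"$ERRLOG"; then
    # Rename the directory so the partial result is visible at a glance...
    mv "$DEST" "${DEST}_errors"
    # ...and keep the captured stderr next to it for inspection.
    mv "$ERRLOG" "${DEST}_errors.txt"
else
    rm -f "$ERRLOG"
fi
```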
Came here to report the same problem. Had a directory that was causing the following error:
I had to
Thanks!
Whatever Docker container I've got doing this seems to keep making root-owned files. I don't like it, but I'm resorting to running
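(As an aside, one common way to avoid root-owned files coming out of a container is to run it as the invoking user; a generic sketch, not specific to whichever image is involved here:)

```bash
# Run the container with the caller's UID/GID so files it creates in the
# mounted volume are not owned by root ("some-image" is a placeholder).
docker run --user "$(id -u):$(id -g)" -v /data:/data some-image
```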
I also always run it as root. I will dig into how to be able to continue when
I suggest the tool should give warnings for these failures and continue working, because it is really annoying that the backup will never finish...
I had a file owned by root with
Even though it aborted early with this error:
, it appears that it continued copying files into the

While I was a bit disappointed that it aborted the entire huge backup and said it "failed" over one insignificant file that I wouldn't have cared about if it had been skipped, I can see how this script can't know whether a file failing to copy meets the user's idea of a "successful" backup or not. It depends entirely on which file(s) are affected and how important they are to the user.

I wouldn't be opposed to a "--allow-failed-files" option or similar that lets the user override the default behavior and consider the backup a success even if one or more files failed. Hopefully the user is monitoring the stderr output anyway, to watch for any warnings or errors. That way they would both be aware of the problem (to fix it for later backups) and have the backup considered successful and permanently saved in the regularly scheduled timestamped backup dir.

But I think it would be better if users used excludes/filters to simply skip any problematic files so they don't cause an error in the first place... (With #81, however, it sounds like it may not always be possible to know which files will be problematic ahead of time.)

Since it is a shared hosting account where I don't have
I ended up passing
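As a rough illustration of what such an opt-in could look like in a wrapper script: rsync really does use exit code 23 for a partial transfer due to errors and 24 for source files that vanished during the run, but the flag name, variables, and overall structure below are assumptions, not the tool's actual code.

```bash
# Sketch only: optionally treat rsync's "partial transfer" exit codes as
# warnings instead of fatal errors. ALLOW_FAILED_FILES is a hypothetical
# switch standing in for a possible "--allow-failed-files" option.
ALLOW_FAILED_FILES=1

rsync -a "$SRC/" "$DEST/"
RC=$?

if { [ "$RC" -eq 23 ] || [ "$RC" -eq 24 ]; } && [ "$ALLOW_FAILED_FILES" -eq 1 ]; then
    # 23 = partial transfer due to error, 24 = source files vanished.
    echo "WARNING: some files could not be copied (rsync exit $RC); keeping backup" >&2
    RC=0
fi

if [ "$RC" -ne 0 ]; then
    echo "ERROR: backup failed (rsync exit $RC)" >&2
    exit "$RC"
fi
```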
I encountered a situation where backups never finish, leaving the `in_progress` directory always there. After analysing, the culprit seemed to be a specific file which had `0000` permissions set. Changing the permissions allowed backups to work normally.

Such situations should probably be handled more gracefully. While zero permissions look fishy, I can imagine other permission-related situations that would also cause failures, like files created when running processes with elevated privileges (`sudo <something>`).
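One way to spot such files before a run, assuming the backup is executed as a non-root user and GNU find is available, is to ask find which files the current user cannot read, so they can be fixed or excluded up front (a generic sketch, not a feature of the tool):

```bash
# List files under the source that the current user cannot read, e.g.
# mode 0000 files or root-owned files left behind by sudo or Docker.
# "-readable" is a GNU find test; "/path/to/source" is a placeholder.
find /path/to/source ! -readable -ls

# Or look specifically for zero-permission files:
find /path/to/source -perm 0000 -ls
```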