zfs-mount fails because directory isn't empty, screws up bind mounts and NFS #4784
Comments
This happens every now and then but has proven almost impossible to debug (because it can't be reproduced reliably). The only (real) way to solve this is to boot into rescue mode and import your pool WITHOUT mounting anything but the base filesystem. Then remove everything in there. Then run "zfs mount -a" until it possibly fails on some other directory. Then make sure nothing is mounted below that, remove all mount points (make sure they're empty!) and try "zfs mount -a" again. Do this until it can mount and unmount a couple of times. Then unmount all filesystems, export the pool and reboot. This time it should boot correctly.
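For concreteness, here is a rough sketch of that recovery sequence, assuming a hypothetical pool named tank mounted at /tank (the pool name and paths are placeholders, not taken from this thread):

```sh
# From rescue mode: import the pool without mounting any of its filesystems
zpool import -N tank

# Mount only the pool's root dataset and clear out whatever is left in it
zfs mount tank
ls -la /tank        # double-check that only stale leftovers are in here
rm -rf /tank/*

# Try mounting everything; if another dataset fails, empty its mountpoint and retry
zfs mount -a

# Once mounting and unmounting work repeatedly, clean up and reboot
zfs umount -a
zpool export tank
reboot
```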
Ah, that's wonderful, hahaha. Luckily I don't reboot often, and I'm only using ZoL until FreeNAS 10 becomes usable later this year. I'll give it a try later on, thanks for the info. I was thinking that my best solution would be to just forgo the bind mounts and export the mount points directly, so it doesn't matter if zfs-mount fails, since the pools still get mounted either way; I'm the only one using my server, so security isn't really an issue. Also, by "base filesystem" do you mean / or the base level of my pool? (For example, I have a pool called Storage with Multimedia and VM datasets; are you saying to just mount Storage as /mnt/storage and do an rm -rf /mnt/storage/*?) Now that I recall, I did have files being written to a top-level directory (e.g. Storage) instead of a dataset (e.g. VMs), and that was confusing the hell out of me because the directory was mounted in the VM but the file existed in one location and not in another. I'm guessing something like that is what you're referring to?
@FransUrbo
While Linux allows mounting on a non-empty directory, Solaris didn't (and that's where ZFS comes from). With a current enough ZoL you can zfs set overlay=on on the dataset(s) in question (or on a parent dataset, since the property is inherited) to make ZFS behave like Linux.
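As a minimal illustration (tank/data is a placeholder dataset, not one from this thread):

```sh
# Let this dataset mount on top of a non-empty directory, Linux-style
zfs set overlay=on tank/data

# The property is inherited, so it can also be set on a parent and checked recursively
zfs get -r overlay tank
```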
@brando56894 I mean the base of your pool. @tuxoko, what @GregorKopka said. It was decided that ZFS is ZFS and should behave like ZFS, not like Linux. Maybe that doesn't make huge sense for the large [Linux] community, but it makes more sense when you think of ZFS as a multi-OS service.
@GregorKopka your history is incorrect. Solaris always did allow overlay mounts, which led to a significant number of service calls. Service calls are expensive. Thus when ZFS came along, they learned from their prior mistake and restricted overlay mounts. That said, in modern times, one could argue there are use cases for overlay mounts, even though the cost of service calls continues to rise. Gun loaded, pointed down, hoping to miss foot.
@richardelling the SUN didn't shine on me prior to ZFS, sorry for that. @kpande following that reasoning, zfs send/recv shouldn't be recommended because you'll eventually be hit by #4811. I think this line of thinking is a bad idea and should not be pursued any further...
@kpande my point was that the bug you referenced should be fixed instead; let's continue the discussion there.
@FransUrbo What about making the mountpoint read-only? That should prevent anyone from accidentally putting stuff in it, no?
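One way to approximate that idea (my own sketch, not something anyone in this thread confirmed) is to mark the empty mountpoint directory immutable while the dataset is unmounted, so nothing can accidentally land in it; tank/data and /tank/data are placeholders:

```sh
# Unmount the dataset, then make its (now empty) mountpoint directory immutable
zfs umount tank/data
chattr +i /tank/data

# Mounting on top of an immutable directory still works
zfs mount tank/data
```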
This happens because the order in which fstab mounts and ZFS mounts happen is undefined. See zfs-mount.service and one of the auto-generated mount units. systemd orders the auto-generated mounts by filesystem hierarchy; see systemd.mount(5).
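Given that ordering, one commonly suggested workaround is to make the fstab bind mounts depend on zfs-mount.service explicitly via the x-systemd.requires= mount option documented in systemd.mount(5); the paths below are placeholders, not taken from this thread:

```
# /etc/fstab — wait for the ZFS mounts before setting up the bind mount
/storage/multimedia  /srv/nfs/multimedia  none  bind,x-systemd.requires=zfs-mount.service  0 0
```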
Any news on this issue? Because I think I'm affected too.
Any advice on how to fix this? Even though it was there for sure before rebooting. |
Like I said, I followed the guide in the wiki. Therefore I ended up with this version of ZoL:
That's not the most recent version, I guess. Can this problem be solved by updating to the new release? If so, how should I do that?
I have a pool I can mount with an alternate root. Export it, and it leaves the alternate root directory behind.
The same happens with a normal import.
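For illustration, the sequence being described looks something like this (tank and /mnt/recovery are placeholders):

```sh
# Import with an alternate root, then export the pool again
zpool import -R /mnt/recovery tank
zpool export tank

# The directories created under the alternate root are left behind after export
ls /mnt/recovery
```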
I posted this over at the Arch forums but I haven't gotten any responses after a week. I've been running ZoL on Arch for probably about 6 months, and ZoL on Ubuntu 14.10 LTS for about 3 months prior to that; before that I was running FreeNAS 9.
Everything was working fine up until a month or so ago, when the zfs-mount.service unit started failing after a reboot. That causes my bind mounts to fail, which in turn causes my NFS mounts to screw up (since they end up exporting empty directories) and breaks my Usenet KVM since it runs out of space.
I can't seem to figure out what the issue is. It looks like Linux is trying to double-mount my pools for some reason: even though the unit fails, all the pools (3 in total) are mounted successfully, because they were already mounted before zfs-mount ran. If I disable the zfs-mount service, the pools never get mounted at all.
Here's the journal error from zfs-mount.service:
Once the system is fully loaded I have to destroy my Usenet KVM, manually mount the bind mounts (mount -a doesn't work for some reason, even though they're in /etc/fstab), then restart the Usenet KVM and all is well. I've also tried using service files for the bind mounts but that didn't seem to work either.
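For context, the kind of setup being described usually looks something like this (the paths and network below are invented for illustration, not taken from this report):

```
# /etc/fstab — bind-mount a dataset's mountpoint into the NFS export tree
/storage/multimedia  /export/multimedia  none  bind  0 0

# /etc/exports — export the bound directory
/export/multimedia  192.168.1.0/24(rw,no_subtree_check)
```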
I'm using the zfs-linux-git package, version 0.6.5_r62_g16fc1ec_4.6.2_1-1.