
ZFS makes kmemleak jump up and down and scream about suspected memory leaks #55

Closed
rincebrain opened this issue Aug 26, 2010 · 8 comments

Comments

@rincebrain
Contributor

I can't tell at a glance if this is spurious or not, but I booted 2.6.35.3 to see if I could replicate my problems with locking up ZFS on a modern stable vanilla kernel, and turned on kmemleak out of curiosity, as well as a few other lock checking mechanisms.

Shortly after loading the ZFS module, I ran mkfs on MD on two ZVOLs in RAID-0, and left the machine alone for a bit. I came back to find dmesg reporting:

[ 1897.485033] kmemleak: 95206 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[ 2515.244851] kmemleak: 241565 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[ 3132.159404] kmemleak: 66178 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[ 3740.934527] kmemleak: 44239 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
[ 4349.703875] kmemleak: 17413 new suspected memory leaks (see /sys/kernel/debug/kmemleak)

(For reference, those are the only kmemleak messages in my entire dmesg log.)

The result of cat [...]/kmemleak is attached. Suffice it to say that either the cache allocator is leaking memory badly, or kmemleak is throwing spurious warnings like no tomorrow.

(Git commit 1db6954 for SPL and 18568e1 for ZFS)

@rincebrain
Contributor Author

Actually, hang on, it's going to take a bit to finish outputting; at current count the kmemleak output is 180 MB and growing.

@rincebrain
Contributor Author

Well, that log topped out at about 400 MB uncompressed.

Would you actually like all of it, or just a few excerpts? From a quick grep, all of it is from ZFS or SPL.

@behlendorf
Contributor

Interesting, but forgive me if I'm a bit skeptical. By default the SPL does basic memory accounting for all the memory it allocates and frees. At module unload time, if any memory is unaccounted for, it will print a message to the console indicating how much memory was leaked. Are you seeing anything like this in the console?

kmem leaked x/y bytes
vmem leaked x/y bytes

If so, you can rebuild the SPL code with the --enable-debug-kmem-tracking option, and then you'll need to rebuild ZFS against the new SPL. With this build option set, detailed memory accounting will be done, including tracking every memory allocation and free. It hurts performance badly, but on module unload it will show you exactly where any leaked memory was allocated.
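
To illustrate the general idea behind that option, here is a rough userspace C sketch, not the SPL's actual code; the helper names (tracked_alloc, tracked_free, dump_leaks) are invented. It records every allocation with its call site, drops the record on free, and dumps whatever is still outstanding at teardown, which is the kind of report that points at where leaked memory was allocated.

```c
/* Illustrative sketch only: per-allocation tracking with call-site info. */
#include <stdio.h>
#include <stdlib.h>

struct alloc_rec {
	void *ptr;
	size_t size;
	const char *file;
	int line;
	struct alloc_rec *next;
};

static struct alloc_rec *alloc_list;

static void *tracked_alloc(size_t size, const char *file, int line)
{
	void *p = malloc(size);
	struct alloc_rec *rec;

	if (!p)
		return NULL;
	rec = malloc(sizeof(*rec));
	if (!rec) {
		free(p);
		return NULL;
	}
	/* Remember who allocated this block and how big it is. */
	rec->ptr = p;
	rec->size = size;
	rec->file = file;
	rec->line = line;
	rec->next = alloc_list;
	alloc_list = rec;
	return p;
}

static void tracked_free(void *p)
{
	struct alloc_rec **it;

	/* Drop the matching record, then release the memory itself. */
	for (it = &alloc_list; *it; it = &(*it)->next) {
		if ((*it)->ptr == p) {
			struct alloc_rec *rec = *it;
			*it = rec->next;
			free(rec);
			break;
		}
	}
	free(p);
}

/* At teardown (module unload in the SPL's case), anything still on the
 * list was never freed: report exactly where it was allocated. */
static void dump_leaks(void)
{
	struct alloc_rec *rec;

	for (rec = alloc_list; rec; rec = rec->next)
		fprintf(stderr, "leaked %zu bytes allocated at %s:%d\n",
			rec->size, rec->file, rec->line);
}

#define TRACKED_ALLOC(sz) tracked_alloc((sz), __FILE__, __LINE__)

int main(void)
{
	void *a = TRACKED_ALLOC(64);
	void *b = TRACKED_ALLOC(128);

	tracked_free(a);
	(void)b;        /* intentionally never freed */
	dump_leaks();   /* reports the 128-byte allocation and its call site */
	return 0;
}
```

The performance cost behlendorf mentions comes from exactly this kind of bookkeeping on every allocation and free, which is why it's an opt-in debug build.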

@rincebrain
Contributor Author

I'm extremely skeptical as well. I'd sooner guess that kmemleak's tracking is broken, though from a read of the documentation it appears to be extremely thorough.

Rebuilding SPL and ZFS, in that order.

@behlendorf
Contributor

I think a few excerpts would be a good start. Hopefully it's highly repetitive.

@rincebrain
Contributor Author

Also, I almost never unload the module explicitly, so I can go unload it and see what happens.

@rincebrain
Contributor Author

It appears I'll have to wait until I reach the physical machine again, as the machine has deadlocked.

@behlendorf
Contributor

I'm quite sure now there isn't actually a leak in the SPL/ZFS code. These errors are false positives from kmemleak, so I'm closing this bug. If you get a chance, grab the latest code from master and give it a try; stability is considerably improved these days!
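
For anyone hitting similar reports: the kernel provides annotation hooks in `<linux/kmemleak.h>` (e.g. `kmemleak_not_leak()`, `kmemleak_ignore()`) that a module can call on allocations it accounts for itself, so kmemleak stops flagging them. Below is a minimal, hypothetical module sketch illustrating that mechanism; it is not SPL/ZFS code, and the module and buffer names are invented.

```c
/*
 * Sketch: suppress a kmemleak false positive for a buffer whose only
 * reference lives somewhere kmemleak cannot scan.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/kmemleak.h>

static void *private_buf;

static int __init kml_demo_init(void)
{
	private_buf = kmalloc(4096, GFP_KERNEL);
	if (!private_buf)
		return -ENOMEM;

	/* Tell kmemleak this pointer is accounted for elsewhere, so it is
	 * no longer reported as a suspected leak. */
	kmemleak_not_leak(private_buf);
	return 0;
}

static void __exit kml_demo_exit(void)
{
	kfree(private_buf);
}

module_init(kml_demo_init);
module_exit(kml_demo_exit);
MODULE_LICENSE("GPL");
```

Building this requires a kernel configured with CONFIG_DEBUG_KMEMLEAK; without it, the kmemleak_* calls compile to no-ops.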

akatrevorjay added a commit to akatrevorjay/zfs that referenced this issue Dec 16, 2017
jkryl referenced this issue in mayadata-io/cstor Mar 15, 2018
…pgrade (#55)

Signed-off-by: Jan Kryl <jan.kryl@cloudbyte.com>
jkryl referenced this issue in mayadata-io/cstor Mar 16, 2018
…pgrade (#55)

Signed-off-by: Jan Kryl <jan.kryl@cloudbyte.com>
tonynguien pushed a commit to tonynguien/zfs that referenced this issue Dec 21, 2021
When a Vec is converted to Bytes, the excess capacity of the Vec is
dropped by reallocating the Vec to be smaller and freeing the old
memory. When the Vec of an AlignedVec is converted to Bytes, we don't
want this to happen, because the new allocation won't necessarily have
the same alignment as the Vec's old buffer. It also has the performance
impact of an additional alloc/bcopy/free, which is what we're trying to
avoid with the AlignedVec.

We can avoid this by setting the Vec's size to be the same as its
capacity before conversion. We also verify that its buffer pointer
doesn't change across these operations.
anodos325 pushed a commit to anodos325/zfs that referenced this issue May 16, 2022
NAS-115362 / 13.0 / Merge ZFS 2.1.4 release