Constant freezing when free memory is low #3834
@behlendorf you linked this issue, not the fix that will appear in the next point release ;)
Closing as duplicate of #3808.
As described in the comment above arc_reclaim_thread(), it's critical that the reclaim thread be careful about blocking. Just like it must never wait on a hash lock, it must never wait on a task which can in turn wait on the CV in arc_get_data_buf(). This will deadlock; see issue #3822 for full backtraces showing the problem.

To resolve this issue, arc_kmem_reap_now() has been updated to use the asynchronous arc prune function. This means that arc_prune_async() may now be called while there are still outstanding arc_prune_tasks. However, this isn't a problem because arc_prune_async() already keeps a reference count preventing multiple outstanding tasks per registered consumer. Functionally, this behavior is the same as the counterpart illumos function dnlc_reduce_cache().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tim Chase <tim@chase2k.com>
Issue #3808
Issue #3834
Issue #3822
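For readers unfamiliar with the pattern the commit describes, the key property is that the reclaim path dispatches prune work asynchronously, and a per-consumer reference count guarantees at most one outstanding task per registered consumer, so the dispatcher never blocks and tasks never pile up. Below is a minimal userspace sketch of that idea in Python; this is not ZFS code, and the `PruneConsumer` class and return convention are hypothetical illustrations.

```python
import threading

class PruneConsumer:
    """Stand-in for a registered ARC prune consumer (hypothetical name)."""
    def __init__(self, name):
        self.name = name
        self.in_flight = 0              # analogue of the per-consumer reference count
        self.lock = threading.Lock()
        self.runs = 0                   # how many prune passes actually ran

    def prune(self):
        # The real callback would shrink a cache; here we just count invocations.
        self.runs += 1

def prune_async(consumer):
    """Dispatch a prune task without blocking, refusing to pile up tasks.

    Mirrors the commit's idea: the caller (the reclaim thread) never waits
    for the task to finish, and the reference count ensures at most one
    outstanding task per consumer. Returns the worker Thread, or None if
    a task was already in flight.
    """
    with consumer.lock:
        if consumer.in_flight > 0:
            return None                 # already queued; do not queue another
        consumer.in_flight += 1

    def task():
        try:
            consumer.prune()
        finally:
            with consumer.lock:
                consumer.in_flight -= 1

    t = threading.Thread(target=task)
    t.start()
    return t
```

Because the dispatcher only checks and bumps a counter under a short-lived lock, it can be called from a thread that must never sleep waiting on other work, which is exactly the constraint on arc_reclaim_thread().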
ZFS/SPL 0.6.5.2 Bug Fixes

* Init script fixes openzfs/zfs#3816
* Fix uioskip crash when skip to end openzfs/zfs#3806 openzfs/zfs#3850
* Userspace can trigger an assertion openzfs/zfs#3792
* Fix quota userused underflow bug openzfs/zfs#3789
* Fix performance regression from unwanted synchronous I/O openzfs/zfs#3780
* Fix deadlock during ARC reclaim openzfs/zfs#3808 openzfs/zfs#3834
* Fix deadlock with zfs receive and clamscan openzfs/zfs#3719
* Allow NFS activity to defer snapshot unmounts openzfs/zfs#3794
* Linux 4.3 compatibility openzfs/zfs#3799
* Zed reload fixes openzfs/zfs#3773
* Fix PAX Patch/Grsec SLAB_USERCOPY panic openzfs/zfs#3796
* Always remove during dkms uninstall/update openzfs/spl#476

ZFS/SPL 0.6.5.1 Bug Fixes

* Fix zvol corruption with TRIM/discard openzfs/zfs#3798
* Fix NULL as mount(2) syscall data parameter openzfs/zfs#3804
* Fix xattr=sa dataset property not honored openzfs/zfs#3787

ZFS/SPL 0.6.5

Supported Kernels

* Compatible with 2.6.32 - 4.2 Linux kernels.

New Functionality

* Support for temporary mount options.
* Support for accessing the .zfs/snapshot directory over NFS.
* Support for estimating send stream size when the source is a bookmark.
* Administrative commands are allowed to use reserved space, improving robustness.
* New notify ZEDLETs support email and pushbullet notifications.
* New keyword 'slot' for vdev_id.conf to control what is used for the slot number.
* New zpool export -a option unmounts and exports all imported pools.
* New zpool iostat -y option omits the first report with statistics since boot.
* zdb can now open the root dataset.
* zdb can print the numbers of ganged blocks.
* zdb -ddddd can print details of block pointer objects.
* zdb -b performance improved.
* zstreamdump -d prints contents of blocks.

New Feature Flags

* large_blocks - This feature allows the record size on a dataset to be set larger than 128KB. We currently support block sizes from 512 bytes to 16MB.
The benefits of larger blocks, and thus larger IO, need to be weighed against the cost of COWing a giant block to modify one byte. Additionally, very large blocks can have an impact on I/O latency, and also potentially on the memory allocator. Therefore, we do not allow the record size to be set larger than zfs_max_recordsize (default 1MB). Larger blocks can be created by changing this tuning; pools with larger blocks can always be imported and used, regardless of this setting.

* filesystem_limits - This feature enables filesystem and snapshot limits. These limits can be used to control how many filesystems and/or snapshots can be created at the point in the tree on which the limits are set.

Performance

* Improved zvol performance on all kernels (>50% higher throughput, >20% lower latency)
* Improved zil performance on Linux 2.6.39 and earlier kernels (10x lower latency)
* Improved allocation behavior on mostly full SSD/file pools (5% to 10% improvement on 90% full pools)
* Improved performance when removing large files.
* Caching improvements (ARC):
  * Better cached read performance due to reduced lock contention.
  * Smarter heuristics for managing the total size of the cache and the distribution of data/metadata.
  * Faster release of cached buffers due to unexpected memory pressure.

Changes in Behavior

* Default reserved space was increased from 1.6% to 3.3% of total pool capacity. This default percentage can be controlled through the new spa_slop_shift module option; setting it to 6 will restore the previous percentage.
* Loading of the ZFS module stack is now handled by systemd or the sysv init scripts. Invoking the zfs/zpool commands will not cause the modules to be automatically loaded. The previous behavior can be restored by setting the ZFS_MODULE_LOADING=yes environment variable, but this functionality will be removed in a future release.
* Unified SYSV and Gentoo OpenRC initialization scripts. The previous functionality has been split into zfs-import, zfs-mount, zfs-share, and zfs-zed scripts. This allows for independent control of the services and is consistent with the unit files provided for a systemd based system. Complete details of the functionality provided by the updated scripts can be found here.
* Task queues are now dynamic, and worker threads will be created and destroyed as needed. This allows the system to automatically tune itself to ensure the optimal number of threads is used for the active workload, which can result in a performance improvement.
* Task queue thread priorities were correctly aligned with the default Linux file system thread priorities. This allows ZFS to compete fairly with other active Linux file systems when the system is under heavy load.
* When compression=on, the default compression algorithm will be lz4 as long as the feature is enabled. Otherwise the default remains lzjb. Similarly, lz4 is now the preferred method for compressing metadata when available.
* The use of mkdir/rmdir/mv in the .zfs/snapshot directory has been disabled by default, both locally and via NFS clients. The zfs_admin_snapshot module option can be used to re-enable this functionality.
* LBA weighting is automatically disabled on files and SSDs, ensuring the entire device is used fairly.
* iostat accounting on zvols running on kernels older than Linux 3.19 is no longer supported.
* The known issues preventing swap on zvols for Linux 3.9 and newer kernels have been resolved. However, deadlocks are still possible for older kernels.

Module Options

* Changed zfs_arc_c_min default from 4M to 32M to accommodate large blocks.
* Added metaslab_aliquot to control how many bytes are written to a top-level vdev before moving on to the next one. Increasing this may be helpful when using blocks larger than 1M.
* Added spa_slop_shift; see the 'reserved space' item in the 'Changes in Behavior' section.
* Added zfs_admin_snapshot to enable/disable the use of mkdir/rmdir/mv in the .zfs/snapshot directory.
* Added zfs_arc_lotsfree_percent to throttle I/O when free system memory drops below this percentage.
* Added zfs_arc_num_sublists_per_state, used to allow more fine-grained locking.
* Added zfs_arc_p_min_shift, used to set a floor on arc_p.
* Added zfs_arc_sys_free, the target number of bytes the ARC should leave as free.
* Added zfs_dbgmsg_enable, used to enable the 'dbgmsg' kstat.
* Added zfs_dbgmsg_maxsize, which sets the maximum size of the dbgmsg buffer.
* Added zfs_max_recordsize, used to control the maximum allowed record size.
* Added zfs_arc_meta_strategy, used to select the preferred ARC reclaim strategy.
* Removed metaslab_min_alloc_size; it was unused internally due to prior changes.
* Removed zfs_arc_memory_throttle_disable, replaced by zfs_arc_lotsfree_percent.
* Removed zvol_threads; zvols no longer require a dedicated task queue.
* See zfs-module-parameters(5) for complete details on available module options.

Bug Fixes

* Improved documentation with many updates, corrections, and additions.
* Improved sysv, systemd, initramfs, and dracut support.
* Improved block pointer validation before issuing IO.
* Improved scrub pause heuristics.
* Improved test coverage.
* Improved heuristics for automatic repair when the zfs_recover=1 module option is set.
* Improved debugging infrastructure via the 'dbgmsg' kstat.
* Improved zpool import performance.
* Fixed deadlocks in direct memory reclaim.
* Fixed deadlock on db_mtx and dn_holds.
* Fixed deadlock in dmu_objset_find_dp().
* Fixed deadlock during zfs rollback.
* Fixed kernel panic due to tsd_exit() in ZFS_EXIT.
* Fixed kernel panic when adding a duplicate dbuf to dn_dbufs.
* Fixed kernel panic due to security / ACL creation failure.
* Fixed kernel panic on unmount due to iput taskq.
* Fixed panic due to corrupt nvlist when running utilities.
* Fixed panic on unmount due to not waiting for all znodes to be released.
* Fixed panic with zfs clone from different source and target pools.
* Fixed NULL pointer dereference in dsl_prop_get_ds().
* Fixed NULL pointer dereference in dsl_prop_notify_all_cb().
* Fixed NULL pointer dereference in zfsdev_getminor().
* Fixed I/O aggregation; I/Os are now aggregated across ZIO priority classes.
* Fixed .zfs/snapshot auto-mounting for all supported kernels.
* Fixed 3-digit octal escapes by changing to 4-digit, which disambiguates the output.
* Fixed hard lockup due to infinite loop in zfs_zget().
* Fixed misreported 'alloc' value for cache devices.
* Fixed spurious hung task watchdog stack traces.
* Fixed direct memory reclaim deadlocks.
* Fixed module loading in the zfs import systemd service.
* Fixed intermittent libzfs_init() failure to open /dev/zfs.
* Fixed hot-disk sparing for disk vdevs.
* Fixed system spinning during ARC reclaim.
* Fixed formatting errors in zfs(8).
* Fixed zio pipeline stall by having callers invoke the next stage.
* Fixed assertion failure in zrl_tryenter().
* Fixed memory leak in make_root_vdev().
* Fixed memory leak in zpool_in_use().
* Fixed memory leak in libzfs when doing rollback.
* Fixed hold leak in dmu_recv_end_check().
* Fixed refcount leak in bpobj_iterate_impl().
* Fixed misuse of input argument in traverse_visitbp().
* Fixed missing mutex_destroy() calls.
* Fixed integer overflows in dmu_read/dmu_write.
* Fixed verify() failure in zio_done().
* Fixed zio_checksum_error() to only include info for ECKSUM errors.
* Fixed -ESTALE to force lookup on missing NFS file handles.
* Fixed spurious failures from dsl_dataset_hold_obj().
* Fixed zfs compressratio when used with 4k sector size.
* Fixed spurious watchdog warnings in the prefetch thread.
* Fixed unfair disk space allocation when vdevs are of unequal size.
* Fixed ashift accounting error when writing to cache devices.
* Fixed zdb -d false positive warning when feature@large_blocks=disabled.
* Fixed zdb -h | -i segfault.
* Fixed force-receiving a full stream into a dataset if it has a snapshot.
* Fixed snapshot error handling.
* Fixed 'hangs' while deleting large files.
* Fixed lock contention (rrw_exit) while running a read-only load.
* Fixed error message when creating a pool to include all problematic devices.
* Fixed Xen virtual block device detection; partitions are now created.
* Fixed missing E2BIG error handling in zfs_setprop_error().
* Fixed zpool import assertion in libzfs_import.c.
* Fixed zfs send -nv output to stderr.
* Fixed idle pool potentially running itself out of space.
* Fixed narrow race which allowed read(2) to access beyond fstat(2)'s reported end-of-file.
* Fixed support for VPATH builds.
* Fixed double counting of HDR_L2ONLY_SIZE in ARC.
* Fixed 'BUG: Bad page state' warning from kernel due to writeback flag.
* Fixed arc_available_memory() to check freemem.
* Fixed arc_memory_throttle() to check pageout.
* Fixed 'zpool create' warning when using zvols in debug builds.
* Fixed loop devices layered on ZFS with 4.1 kernels.
* Fixed zvol contribution to kernel entropy pool.
* Fixed handling of compression flags in arc header.
* Substantial changes to realign code base with illumos.
* Many additional bug fixes.

Signed-off-by: Nathaniel Clark <nathaniel.l.clark@intel.com>
Change-Id: I87c012aec9ec581b10a417d699dafc7d415abf63
Reviewed-on: http://review.whamcloud.com/16399
Tested-by: Jenkins
Reviewed-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
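The reserved-space change in the release notes above (1.6% to 3.3%, tunable via spa_slop_shift) is simple shift arithmetic: the slop is the pool capacity shifted right by spa_slop_shift, so the default shift of 5 reserves 1/32 of the pool (~3.1%, which the notes round to 3.3%) and a shift of 6 reserves 1/64 (~1.6%). A hedged sketch in Python; this is not ZFS code, and the real spa_get_slop_space() also applies limits the sketch omits.

```python
def slop_space(pool_capacity_bytes, spa_slop_shift=5):
    """Approximate reserved ("slop") space: capacity >> spa_slop_shift.

    Shift of 5 (the 0.6.5 default) -> 1/32 of the pool (~3.1%).
    Shift of 6 (the previous behavior) -> 1/64 of the pool (~1.6%).
    Simplified: the real spa_get_slop_space() also applies a floor.
    """
    return pool_capacity_bytes >> spa_slop_shift

# Example on a hypothetical 32 TiB pool:
tib = 1 << 40
print(slop_space(32 * tib) // tib)      # 1 TiB reserved at the new default
```

This explains why a pool can appear to have lost usable space after upgrading: the larger reserve improves robustness of administrative commands, and setting spa_slop_shift=6 restores the old percentage.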
It seems this problem is still not completely fixed. With kernel 3.12.x and 0.6.5.2, my nodes still hang reproducibly with swap on a zvol. A very simple test case: run something that creates permanent disk I/O, e.g. a fio job, and then start a Linpack job in parallel that is active on all cores and consumes all RAM. After some minutes you will see the familiar hung-task messages, and a little later a total deadlock.
There is enough cache on the system (4-5 GB), but when free memory is low ZFS freezes. This happens on both 0.6.5.0 and 0.6.5.1.
The system is a virtual machine with 8 GB RAM and 4 GB swap. It's running elasticsearch + rabbitmq on top of ZFS. The OS is CentOS 7 (3.10.0-229.11.1.el7.x86_64). The only thing set for ZFS is:
It constantly ends up with very high load due to ZFS freezes:
On 0.6.5.0
On 0.6.5.1