The first draft of the complete basic XDP series for IDPF upstream #3
Draft: michalQb wants to merge 9 commits into dev-queue-tony from dev-queue-xdp-upstream-v1
Conversation
The page pool feature allows setting the page offset as one of the creation parameters. Such an offset can be used for XDP-specific configuration of the page pool when extra space must be reserved for the packet headroom. Unfortunately, that page offset value (from the page pool) was never used during the SKB build, which can have a negative impact when the XDP_PASS action is returned and the received packet should be passed to the kernel network stack. Address the problem by adding the page offset from the page pool when the SKB offset is computed.

Fixes: 3a8845a ("idpf: add RX splitq napi poll support")
Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
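To make the fix concrete, here is a minimal sketch (not the actual idpf hunk; names and structure are simplified) of an skb build path that honors the pool's configured offset:

```c
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>	/* <net/page_pool.h> on older kernels */

static struct sk_buff *rx_build_skb(const struct page_pool *pool,
				    struct page *page, u32 size)
{
	struct sk_buff *skb;

	skb = build_skb(page_address(page), PAGE_SIZE);
	if (unlikely(!skb))
		return NULL;

	/* Skip the offset configured at pool creation (the reserved
	 * XDP headroom); without it, on XDP_PASS the stack would treat
	 * the headroom bytes as packet data.
	 */
	skb_reserve(skb, pool->p.offset);
	skb_put(skb, size);

	return skb;
}
```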
The IDPF driver supports the header split feature, and that feature is always enabled by default. However, for flexibility, and to simplify the implementation of some features, it is useful to be able to switch header split off. Address that need by adding a user config parameter and a function for disabling (or enabling) the header split feature.

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
Extend the basic structures of the driver (e.g. 'idpf_vport', 'idpf_queue', 'idpf_vport_user_config_data') with the members necessary to support XDP. Add extra XDP Tx queues needed to support the XDP_TX and XDP_REDIRECT actions without interfering with regular Tx traffic. Also add functions dedicated to XDP initialization for Rx and Tx queues and call them from the existing queue configuration algorithms.

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
Implement loading the XDP program using the ndo_bpf callback for splitq and the XDP_SETUP_PROG parameter. Add functions for stopping, reconfiguring, and restarting all queues when needed. Also implement the XDP hot-swap mechanism, used when an existing XDP program is replaced by another one (without the need to reconfigure anything).

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
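For orientation, a minimal sketch of that flow, with a stubbed-out vport structure and an assumed reconfiguration helper (the real driver logic is much more involved):

```c
#include <linux/bpf.h>
#include <linux/netdevice.h>

/* Stub of the vport state this sketch needs; the real structure is
 * far richer.
 */
struct vport_stub {
	struct bpf_prog *xdp_prog;
};

/* Assumed helper: stop queues, swap the program, reconfigure, restart. */
static int vport_xdp_reconfig(struct vport_stub *vport,
			      struct bpf_prog *prog)
{
	return 0;
}

static int vport_xdp_setup_prog(struct vport_stub *vport,
				struct bpf_prog *prog)
{
	bool same_mode = !!prog == !!vport->xdp_prog;
	struct bpf_prog *old;

	if (same_mode) {
		/* Hot swap: XDP stays enabled (or disabled), so the
		 * program can be exchanged without touching the queues.
		 */
		old = xchg(&vport->xdp_prog, prog);
		if (old)
			bpf_prog_put(old);
		return 0;
	}

	return vport_xdp_reconfig(vport, prog);
}

static int vport_ndo_bpf(struct net_device *dev, struct netdev_bpf *xdp)
{
	struct vport_stub *vport = netdev_priv(dev);

	switch (xdp->command) {
	case XDP_SETUP_PROG:
		return vport_xdp_setup_prog(vport, xdp->prog);
	default:
		return -EINVAL;
	}
}
```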
Implement basic setup of the XDP program. Extend the page pool creation function with support for configuring XDP headroom. Add handling of the XDP_PASS and XDP_DROP actions.

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
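Roughly, the two pieces the commit describes look like the following; the values and helper name are illustrative, not the driver's exact code:

```c
#include <linux/dma-mapping.h>
#include <linux/filter.h>
#include <linux/numa.h>
#include <net/page_pool/helpers.h>
#include <net/xdp.h>

/* Reserving XDP headroom at pool creation (illustrative values). */
static const struct page_pool_params pp_params = {
	.order		= 0,
	.pool_size	= 1024,
	.nid		= NUMA_NO_NODE,
	.dma_dir	= DMA_FROM_DEVICE,
	.offset		= XDP_PACKET_HEADROOM,
};

/* Minimal PASS/DROP dispatch around bpf_prog_run_xdp(). */
static u32 run_xdp_basic(struct bpf_prog *prog, struct xdp_buff *xdp)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
		break;		/* go on to build an skb for the stack */
	case XDP_DROP:
	default:
		act = XDP_DROP;	/* buffer is recycled to the page pool */
		break;
	}

	return act;
}
```

A pool created with `page_pool_create(&pp_params)` then hands out buffers whose first XDP_PACKET_HEADROOM bytes are reserved for the program to grow headers into.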
Implement two separate completion queue cleaning functions, used depending on the scheduling mode:
- queue-based scheduling (idpf_tx_clean_qb_complq)
- flow-based scheduling (idpf_tx_clean_fb_complq)
Add a 4-byte descriptor for the queue-based scheduling mode and perform some refactoring to extract the code common to both scheduling modes.

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
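A plausible shape for the dispatch between the two cleaners; the flag is a placeholder and only the two function names come from the commit text:

```c
#include <linux/bitops.h>

#define Q_FLOW_SCH_EN	0	/* placeholder for the real queue flag */

struct complq_min {
	unsigned long flags;
};

bool idpf_tx_clean_qb_complq(struct complq_min *complq, int budget);
bool idpf_tx_clean_fb_complq(struct complq_min *complq, int budget);

static bool tx_clean_complq(struct complq_min *complq, int budget)
{
	if (test_bit(Q_FLOW_SCH_EN, &complq->flags))
		return idpf_tx_clean_fb_complq(complq, budget);

	return idpf_tx_clean_qb_complq(complq, budget);
}
```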
Implement sending packets from an XDP ring. The XDP path functions are separate from the general Tx routines because this allows us to simplify, and therefore speed up, the process. It also makes the code friendlier to future XDP-specific optimizations.

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
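A condensed sketch of what such a dedicated XDP transmit helper can look like; all identifiers are placeholders, and the descriptor write itself is elided:

```c
#include <linux/dma-mapping.h>
#include <net/xdp.h>

/* Stub of the XDP Tx queue state this sketch needs. */
struct xdp_txq_min {
	struct device *dev;	/* DMA device */
	u16 free_descs;		/* descriptors left on the ring */
};

static int xdp_txq_xmit_frame(struct xdp_txq_min *xdpq,
			      struct xdp_frame *xdpf)
{
	dma_addr_t dma;

	if (unlikely(!xdpq->free_descs))
		return -EBUSY;	/* ring full, caller drops the frame */

	dma = dma_map_single(xdpq->dev, xdpf->data, xdpf->len,
			     DMA_TO_DEVICE);
	if (dma_mapping_error(xdpq->dev, dma))
		return -ENOMEM;

	/* ... write one Tx descriptor (dma, xdpf->len), remember xdpf
	 * for xdp_return_frame() at completion time, advance the ring ...
	 */
	xdpq->free_descs--;

	return 0;
}
```

There is no skb bookkeeping anywhere on this path, which is where the simplification and speedup come from.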
Implement the XDP_REDIRECT action and the ndo_xdp_xmit() callback. For now, packets redirected from a CPU with an index greater than the number of XDP queues are simply dropped with an error. This is a rather common situation, and it will be addressed in later patches. The patch also refactors the Rx XDP handling to use a switch statement, due to the increased number of actions.

Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
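A skeleton of the described ndo_xdp_xmit() path, including the CPU-index limitation mentioned above; the vport layout and the two queue helpers are assumptions:

```c
#include <linux/netdevice.h>
#include <linux/smp.h>
#include <net/xdp.h>

struct xdp_txq_min;	/* per-CPU XDP Tx queue, as sketched earlier */

/* Assumed driver state: one XDP TxQ per CPU, num_xdp_txq of them. */
struct vport_min {
	struct xdp_txq_min **xdp_txqs;
	u16 num_xdp_txq;
};

int xdp_txq_xmit_frame(struct xdp_txq_min *xdpq, struct xdp_frame *xdpf);
void xdp_txq_bump_tail(struct xdp_txq_min *xdpq);

static int vport_ndo_xdp_xmit(struct net_device *dev, int n,
			      struct xdp_frame **frames, u32 flags)
{
	struct vport_min *vport = netdev_priv(dev);
	u32 cpu = smp_processor_id();
	struct xdp_txq_min *xdpq;
	int i, nxmit = 0;

	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	/* The limitation called out above: no queue for this CPU yet,
	 * so redirected frames are dropped with an error.
	 */
	if (cpu >= vport->num_xdp_txq)
		return -ENXIO;

	xdpq = vport->xdp_txqs[cpu];
	for (i = 0; i < n; i++) {
		if (xdp_txq_xmit_frame(xdpq, frames[i]))
			break;
		nxmit++;
	}

	if (flags & XDP_XMIT_FLUSH)
		xdp_txq_bump_tail(xdpq);

	return nxmit;
}
```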
Port of commit 22bf877 ("ice: introduce XDP_TX fallback path"). The patch handles the case when the number of XDP queues is not sufficient for the current number of CPUs. To avoid dropping packets redirected from other interfaces, XDP TxQs are allowed to be shared between CPUs, which imposes a locking requirement. The static-key approach has little to no performance penalty when sharing is not needed.

Suggested-by: Larysa Zaremba <larysa.zaremba@intel.com>
Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
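The static-key pattern the commit refers to (mirroring the ice approach) looks roughly like this; the key name and queue stub are illustrative:

```c
#include <linux/jump_label.h>
#include <linux/spinlock.h>

/* Illustrative queue stub; the real struct is the driver's Tx queue. */
struct xdp_txq_stub {
	spinlock_t tx_lock;
};

DEFINE_STATIC_KEY_FALSE(idpf_xdp_locking_key);	/* name assumed */

static void xdp_xmit_maybe_locked(struct xdp_txq_stub *xdpq)
{
	/* Lock only when queues are shared between CPUs; when every CPU
	 * owns its own queue, the static branch keeps this path lock-free.
	 */
	if (static_branch_unlikely(&idpf_xdp_locking_key))
		spin_lock(&xdpq->tx_lock);

	/* ... place frames on the ring and bump the tail ... */

	if (static_branch_unlikely(&idpf_xdp_locking_key))
		spin_unlock(&xdpq->tx_lock);
}
```

The key is enabled once at configuration time (when fewer queues than CPUs are available), so the no-sharing fast path costs only a patched-out branch.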
Setting to "draft".
michalQb pushed a commit that referenced this pull request on Dec 20, 2023:
Hou Tao says:
====================
bpf: Fix the release of inner map

From: Hou Tao <houtao1@huawei.com>

Hi,

The patchset aims to fix the release of inner maps in a map array or map htab. The release of an inner map is different from that of a normal map. For a normal map, the map is released after the bpf program which uses the map is destroyed, because the bpf program tracks the used maps. However, a bpf program cannot track used inner maps, because these inner maps may be updated or deleted dynamically, and for now the ref-counter of an inner map is decreased after the inner map is removed from the outer map, so the inner map may be freed before the bpf program which is accessing the inner map exits, and there will be a use-after-free problem, as demonstrated by patch #6.

The patchset fixes the problem by deferring the release of the inner map. The freeing of the inner map is deferred according to the sleepable attributes of the bpf programs which own the outer map.

Patch #1 fixes the warning when running the newly-added selftest under interpreter mode.
Patch #2 adds more parameters to .map_fd_put_ptr() to prepare for the fix.
Patch #3 fixes the incorrect value of need_defer when freeing the fd array.
Patch #4 fixes the potential use-after-free problem by using call_rcu_tasks_trace() and call_rcu() to wait for one tasks trace RCU GP and one RCU GP unconditionally.
Patch #5 optimizes the freeing of the inner map by removing the unnecessary RCU GP waiting.
Patch #6 adds a selftest to demonstrate the potential use-after-free problem.
Patch #7 updates a selftest to update the outer map in a syscall bpf program.

Please see the individual patches for more details. And comments are always welcome.

Change Log:
v5:
* patch #3: rename fd_array_map_delete_elem_with_deferred_free() to __fd_array_map_delete_elem() (Alexei)
* patch #5: use atomic64_t instead of atomic_t to prevent potential overflow (Alexei)
* patch #7: use ptr_to_u64() helper instead of force casting to initialize pointers in bpf_attr (Alexei)

v4: https://lore.kernel.org/bpf/20231130140120.1736235-1-houtao@huaweicloud.com
* patch #2: don't use "deferred", use "need_defer" uniformly
* patch #3: newly-added, fix the incorrect value of need_defer during fd array free.
* patch #4: doesn't consider the case in which the bpf map is not used by any bpf program and only use sleepable_refcnt to remove unnecessary tasks trace RCU GP (Alexei)
* patch #4: remove memory barriers added due to cautiousness (Alexei)

v3: https://lore.kernel.org/bpf/20231124113033.503338-1-houtao@huaweicloud.com
* multiple variable renamings (Martin)
* define BPF_MAP_RCU_GP/BPF_MAP_RCU_TT_GP as bit (Martin)
* use call_rcu() and its variants instead of synchronize_rcu() (Martin)
* remove unnecessary mask in bpf_map_free_deferred() (Martin)
* place atomic_or() and the related smp_mb() together (Martin)
* add patch #6 to demonstrate that updating the outer map in a syscall program is dead-lock free (Alexei)
* update comments about the memory barrier in bpf_map_fd_put_ptr()
* update commit message for patch #3 and #4 to describe more details

v2: https://lore.kernel.org/bpf/20231113123324.3914612-1-houtao@huaweicloud.com
* defer the invocation of ops->map_free() instead of bpf_map_put() (Martin)
* update selftest to make it reproducible under JIT mode (Martin)
* remove unnecessary preparatory patches

v1: https://lore.kernel.org/bpf/20231107140702.1891778-1-houtao@huaweicloud.com
====================

Link: https://lore.kernel.org/r/20231204140425.1480317-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
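The chained grace periods that patch #4 describes follow a well-known RCU pattern; a generic sketch with a hypothetical stand-in type, not the actual bpf map code:

```c
#include <linux/rcupdate.h>
#include <linux/rcupdate_trace.h>
#include <linux/slab.h>

/* Generic stand-in for an inner map object. */
struct inner_stub {
	struct rcu_head rcu;
	/* ... payload ... */
};

static void inner_free_rcu(struct rcu_head *rcu)
{
	/* Both grace periods have elapsed; actually free the object. */
	kfree(container_of(rcu, struct inner_stub, rcu));
}

static void inner_free_tt_rcu(struct rcu_head *rcu)
{
	/* One tasks-trace RCU GP done (covers sleepable programs);
	 * chain one regular RCU GP for non-sleepable programs.
	 */
	call_rcu(rcu, inner_free_rcu);
}

static void inner_defer_free(struct inner_stub *obj)
{
	call_rcu_tasks_trace(&obj->rcu, inner_free_tt_rcu);
}
```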
michalQb pushed a commit that referenced this pull request on Dec 20, 2023:
…gister-spills'

Andrii Nakryiko says:
====================
Complete BPF verifier precision tracking support for register spills

Add support to the BPF verifier to track and support register spill/fill to/from the stack, regardless of whether it was done through the read-only R10 register (which is the only form supported today) or through a general register after copying R10 into it, while also potentially modifying the offset.

Once we add this generic register spill/fill support to precision backtracking, we can take advantage of it to stop doing eager STACK_ZERO conversion on register spill. Instead we can rely on the (im)precision of the spilled const zero register to improve verifier state pruning efficiency. This situation of using a const zero register to initialize stack slots is very common with __builtin_memset() usage or just zero-initializing variables on the stack, and it causes unnecessary state duplication, as that STACK_ZERO knowledge is often not necessary for correctness, as those zero values are never used in a precise context. Thus, relying on register imprecision helps tremendously, especially in real-world BPF programs.

To make a spilled const zero register behave completely equivalently to STACK_ZERO, we need to improve a few other small pieces, which is done in the second part of the patch set. See individual patches for details. There are also two small bug fixes spotted during STACK_ZERO debugging.

The patch set logically consists of three changes:
- patch #1 (and corresponding tests in patch #2) is fixing/improving precision propagation for stack spills/fills. This can be landed as a stand-alone improvement;
- patches #3 through #9 are improving verification scalability by utilizing register (im)precision instead of eager STACK_ZERO. These changes depend on patch #1.
- patch #10 is a memory efficiency improvement to how instruction/jump history is tracked and maintained. It depends on patch #1, but is not strictly speaking required, even though I believe it's a good long-term solution to have path-dependent per-instruction information. Kind of like a path-dependent counterpart to the path-agnostic insn_aux array.

v3->v4:
- fixed up Fixes tag (Alexei);
- fixed a few more selftests to not use the BPF_ST instruction in inline asm directly, checked with CI, it was happy (CI);

v2->v3:
- BPF_ST instruction workaround (Eduard);
- force dereference in added tests to catch problems (Eduard);
- some commit message massaging (Alexei);

v1->v2:
- clean ups, WARN_ONCE(), insn_flags helpers added (Eduard);
- added more selftests for STACK_ZERO/STACK_MISC cases (Eduard);
- a bit more detailed explanation of the effect of avoiding STACK_ZERO in favor of register spill in the patch #8 commit message (Alexei);
- global shared instruction history refactoring moved to be the last patch in the series to make it easier to revert, if applied (Alexei).
====================

Link: https://lore.kernel.org/r/20231205184248.1502704-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
michalQb pushed a commit that referenced this pull request on Dec 20, 2023:
Andrii Nakryiko says:
====================
BPF token and BPF FS-based delegation

This patch set introduces an ability to delegate a subset of BPF subsystem functionality from a privileged system-wide daemon (e.g., systemd or any other container manager) through special mount options for userns-bound BPF FS to a *trusted* unprivileged application. Trust is the key here. This functionality is not about allowing unconditional unprivileged BPF usage. Establishing trust, though, is completely up to the discretion of the respective privileged application that would create and mount a BPF FS instance with delegation enabled, as different production setups can and do achieve it through a combination of different means (signing, LSM, code reviews, etc.), and it's undesirable and infeasible for the kernel to enforce any particular way of validating the trustworthiness of a particular process.

The main motivation for this work is a desire to enable containerized BPF applications to be used together with user namespaces. This is currently impossible, as CAP_BPF, required for BPF subsystem usage, cannot be namespaced or sandboxed, as a general rule. E.g., tracing BPF programs, thanks to BPF helpers like bpf_probe_read_kernel() and bpf_probe_read_user(), can safely read arbitrary memory, and it's impossible to ensure that they only read memory of processes belonging to any given namespace. This means that it's impossible to have a mechanically verifiable namespace-aware CAP_BPF capability, and as such another mechanism to allow safe usage of BPF functionality is necessary. BPF FS delegation mount options and a BPF token derived from such a BPF FS instance are such a mechanism. The kernel makes no assumption about what "trusted" constitutes in any particular case, and it's up to specific privileged applications and their surrounding infrastructure to decide that. What the kernel provides is a set of APIs to set up and mount special BPF FS instances and derive BPF tokens from them. BPF FS and BPF token are both bound to their owning userns and in such a way are constrained inside the intended container. Users can then pass a BPF token FD to privileged bpf() syscall commands, like BPF map creation and BPF program loading, to perform such operations without having init userns privileges.

This version incorporates feedback and suggestions ([3]) received on v3 of this patch set, and instead of allowing the creation of BPF tokens directly assuming capable(CAP_SYS_ADMIN), we instead enhance BPF FS to accept a few new delegation mount options. If these options are used and BPF FS itself is properly created, set up, and mounted inside the user namespaced container, a user application is able to derive a BPF token object from the BPF FS instance and pass that token to the bpf() syscall. As explained in patch #3, the BPF token itself doesn't grant access to BPF functionality, but instead allows the kernel to do namespaced capabilities checks (ns_capable() vs capable()) for CAP_BPF, CAP_PERFMON, CAP_NET_ADMIN, and CAP_SYS_ADMIN, as applicable. So it forms one half of a puzzle and allows container managers and sysadmins to have safe and flexible configuration options: determining which containers get delegation of BPF functionality through BPF FS, and then which applications within such containers are allowed to perform bpf() commands, based on namespaced capabilities.

Previous attempt at addressing this very same problem ([0]) attempted to utilize an authoritative LSM approach, but was conclusively rejected by upstream LSM maintainers. The BPF token concept is not changing anything about the LSM approach, but can be combined with LSM hooks for very fine-grained security policy. Some ideas about making BPF token more convenient to use with LSM (in particular custom BPF LSM programs) were briefly described in a recent LSF/MM/BPF 2023 presentation ([1]). E.g., an ability to specify user-provided data (context), which in combination with BPF LSM would allow implementing very dynamic and fine-granular custom security policies on top of BPF token. In the interest of minimizing API surface area and discussions, this was relegated to follow-up patches, as it's not essential to the fundamental concept of a delegatable BPF token.

It should be noted that BPF token is conceptually quite similar to the idea of a /dev/bpf device file, proposed by Song a while ago ([2]). The biggest difference is the idea of using a virtual anon_inode file to hold the BPF token and allowing multiple independent instances of them, each (potentially) with its own set of restrictions. And also, crucially, the BPF token approach is not using any special stateful task-scoped flags. Instead, the bpf() syscall accepts a token_fd parameter explicitly for each relevant BPF command. This addresses the main concerns brought up during the /dev/bpf discussion and fits better with the overall BPF subsystem design.

This patch set adds a basic minimum of functionality to make the BPF token idea useful and to discuss API and functionality. Currently only low-level libbpf APIs support creating and passing a BPF token around, allowing to test kernel functionality, but for the most part that is not sufficient for real-world applications, which typically use high-level libbpf APIs based on the `struct bpf_object` type. This was done with the intent to limit the size of the patch set and concentrate on mostly kernel-side changes. All the necessary plumbing for libbpf will be sent as a separate follow-up patch set once kernel support makes it upstream.

Another part that should happen once kernel-side BPF token is established is a set of conventions between applications (e.g., systemd), tools (e.g., bpftool), and libraries (e.g., libbpf) on exposing delegatable BPF FS instance(s) at well-defined locations to allow applications to take advantage of this in an automatic fashion without explicit code changes on the BPF application's side. But I'd like to postpone this discussion to after the BPF token concept lands.

[0] https://lore.kernel.org/bpf/20230412043300.360803-1-andrii@kernel.org/
[1] http://vger.kernel.org/bpfconf2023_material/Trusted_unprivileged_BPF_LSFMM2023.pdf
[2] https://lore.kernel.org/bpf/20190627201923.2589391-2-songliubraving@fb.com/
[3] https://lore.kernel.org/bpf/20230704-hochverdient-lehne-eeb9eeef785e@brauner/

v11->v12:
- enforce exact userns match in bpf_token_capable() and bpf_token_allow_cmd() checks, for added strictness (Christian);

v10->v11:
- fix BPF FS root check to disallow using a bind-mounted subdirectory of a BPF FS instance (Christian);
- further restrict the BPF_TOKEN_CREATE command to be executed from inside exactly the same user namespace as the one used to create the BPF FS instance (Christian);

v9->v10:
- slight adjustments in LSM parts (Paul);
- setting delegate_xxx options requires capable(CAP_SYS_ADMIN) (Christian);
- simplify BPF_TOKEN_CREATE UAPI by accepting the BPF FS FD directly (Christian);

v8->v9:
- fix issue in selftests due to sys/mount.h header (Jiri);
- fix warning in doc comments in LSM hooks (kernel test robot);

v7->v8:
- add bpf_token_allow_cmd and bpf_token_capable hooks (Paul);
- inline bpf_token_alloc() into bpf_token_create() to prevent accidental divergence with the security_bpf_token_create() hook (Paul);

v6->v7:
- separate patches to refactor bpf_prog_alloc/bpf_map_alloc LSM hooks, as discussed with Paul, and now they also accept struct bpf_token;
- added bpf_token_create/bpf_token_free to allow LSMs (SELinux, specifically) to set up a security LSM blob (Paul);
- the last patch also wires bpf_security_struct setup by SELinux, similar to how it's done for BPF map/prog, though I'm not sure if that's enough, so worst case it's easy to drop this patch if a more full-fledged SELinux implementation is done separately;
- small fixes for issues caught by code reviews (Jiri, Hou);
- fix for the test_maps test that doesn't use the LIBBPF_OPTS() macro (CI);

v5->v6:
- fix possible use of uninitialized variable in selftests (CI);
- don't use anon_inode, instead create one from the BPF FS instance (Christian);
- don't store bpf_token inside struct bpf_map, instead pass it explicitly to map_check_btf(). We do store bpf_token inside prog->aux, because it's used during verification and even can be checked during attach time for some program types;
- LSM hooks are left intact pending the conclusion of the discussion with Paul Moore; I'd prefer to do LSM-related changes as a follow-up patch set anyway;

v4->v5:
- add a pre-patch unifying CAP_NET_ADMIN handling inside kernel/bpf/syscall.c (Paul Moore);
- fix build warnings and errors in selftests and kernel, detected by CI and the kernel test robot;

v3->v4:
- add delegation mount options to BPF FS;
- BPF token is derived from an instance of BPF FS and associates itself with the BPF FS' owning userns;
- BPF token doesn't grant BPF functionality directly, it just turns capable() checks into ns_capable() checks within the BPF FS' owning userns;
- BPF token cannot be pinned;

v2->v3:
- make BPF_TOKEN_CREATE pin the created BPF token in BPF FS, and disallow BPF_OBJ_PIN for BPF tokens;

v1->v2:
- fix build failures on Kconfig with CONFIG_BPF_SYSCALL unset;
- drop BPF_F_TOKEN_UNKNOWN_* flags and simplify UAPI (Stanislav).
====================

Link: https://lore.kernel.org/r/20231130185229.2688956-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
michalQb pushed a commit that referenced this pull request on Dec 20, 2023:
Hou Tao says:
====================
The patch set aims to fix the problems found when inspecting the code related to maybe_wait_bpf_programs().

Patch #1 removes an unnecessary invocation of maybe_wait_bpf_programs().
Patch #2 calls maybe_wait_bpf_programs() only once for batched update.
Patch #3 adds the missed waiting when doing batched lookup_deletion on an htab of maps.
Patch #4 does the wait only if the update or deletion operation succeeds.
Patch #5 fixes the value of batch.count when memory allocation fails.
====================

Link: https://lore.kernel.org/r/20231208102355.2628918-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
michalQb pushed a commit that referenced this pull request on Sep 6, 2024:
Lockdep reported a warning in Linux version 6.6:

[ 414.344659] ================================ [ 414.345155] WARNING: inconsistent lock state [ 414.345658] 6.6.0-07439-gba2303cacfda #6 Not tainted [ 414.346221] -------------------------------- [ 414.346712] inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage. [ 414.347545] kworker/u10:3/1152 [HC0[0]:SC0[0]:HE0:SE1] takes: [ 414.349245] ffff88810edd1098 (&sbq->ws[i].wait){+.?.}-{2:2}, at: blk_mq_dispatch_rq_list+0x131c/0x1ee0

[ 414.351204] {IN-SOFTIRQ-W} state was registered at: [ 414.351751] lock_acquire+0x18d/0x460 [ 414.352218] _raw_spin_lock_irqsave+0x39/0x60 [ 414.352769] __wake_up_common_lock+0x22/0x60 [ 414.353289] sbitmap_queue_wake_up+0x375/0x4f0 [ 414.353829] sbitmap_queue_clear+0xdd/0x270 [ 414.354338] blk_mq_put_tag+0xdf/0x170 [ 414.354807] __blk_mq_free_request+0x381/0x4d0 [ 414.355335] blk_mq_free_request+0x28b/0x3e0 [ 414.355847] __blk_mq_end_request+0x242/0xc30 [ 414.356367] scsi_end_request+0x2c1/0x830 [ 414.356863] scsi_io_completion+0x177/0x1610 [ 414.357379] scsi_complete+0x12f/0x260 [ 414.357856] blk_complete_reqs+0xba/0xf0 [ 414.358338] __do_softirq+0x1b0/0x7a2 [ 414.358796] irq_exit_rcu+0x14b/0x1a0 [ 414.359262] sysvec_call_function_single+0xaf/0xc0 [ 414.359828] asm_sysvec_call_function_single+0x1a/0x20 [ 414.360426] default_idle+0x1e/0x30 [ 414.360873] default_idle_call+0x9b/0x1f0 [ 414.361390] do_idle+0x2d2/0x3e0 [ 414.361819] cpu_startup_entry+0x55/0x60 [ 414.362314] start_secondary+0x235/0x2b0 [ 414.362809] secondary_startup_64_no_verify+0x18f/0x19b

[ 414.363413] irq event stamp: 428794 [ 414.363825] hardirqs last enabled at (428793): [<ffffffff816bfd1c>] ktime_get+0x1dc/0x200 [ 414.364694] hardirqs last disabled at (428794): [<ffffffff85470177>] _raw_spin_lock_irq+0x47/0x50 [ 414.365629] softirqs last enabled at (428444): [<ffffffff85474780>] __do_softirq+0x540/0x7a2 [ 414.366522] softirqs last disabled at (428419): [<ffffffff813f65ab>] irq_exit_rcu+0x14b/0x1a0

[ 414.367425] other info that might help us debug this: [ 414.368194] Possible unsafe locking scenario: [ 414.368900] CPU0 [ 414.369225] ---- [ 414.369548] lock(&sbq->ws[i].wait); [ 414.370000] <Interrupt> [ 414.370342] lock(&sbq->ws[i].wait); [ 414.370802] *** DEADLOCK ***

[ 414.371569] 5 locks held by kworker/u10:3/1152: [ 414.372088] #0: ffff88810130e938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x357/0x13f0 [ 414.373180] #1: ffff88810201fdb8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x3a3/0x13f0 [ 414.374384] #2: ffffffff86ffbdc0 (rcu_read_lock){....}-{1:2}, at: blk_mq_run_hw_queue+0x637/0xa00 [ 414.375342] #3: ffff88810edd1098 (&sbq->ws[i].wait){+.?.}-{2:2}, at: blk_mq_dispatch_rq_list+0x131c/0x1ee0 [ 414.376377] #4: ffff888106205a08 (&hctx->dispatch_wait_lock){+.-.}-{2:2}, at: blk_mq_dispatch_rq_list+0x1337/0x1ee0

[ 414.378607] stack backtrace: [ 414.379177] CPU: 0 PID: 1152 Comm: kworker/u10:3 Not tainted 6.6.0-07439-gba2303cacfda #6 [ 414.380032] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [ 414.381177] Workqueue: writeback wb_workfn (flush-253:0) [ 414.381805] Call Trace: [ 414.382136] <TASK> [ 414.382429] dump_stack_lvl+0x91/0xf0 [ 414.382884] mark_lock_irq+0xb3b/0x1260 [ 414.383367] ? __pfx_mark_lock_irq+0x10/0x10 [ 414.383889] ? stack_trace_save+0x8e/0xc0 [ 414.384373] ? __pfx_stack_trace_save+0x10/0x10 [ 414.384903] ? graph_lock+0xcf/0x410 [ 414.385350] ? save_trace+0x3d/0xc70 [ 414.385808] mark_lock.part.20+0x56d/0xa90 [ 414.386317] mark_held_locks+0xb0/0x110 [ 414.386791] ? __pfx_do_raw_spin_lock+0x10/0x10 [ 414.387320] lockdep_hardirqs_on_prepare+0x297/0x3f0 [ 414.387901] ? _raw_spin_unlock_irq+0x28/0x50 [ 414.388422] trace_hardirqs_on+0x58/0x100 [ 414.388917] _raw_spin_unlock_irq+0x28/0x50 [ 414.389422] __blk_mq_tag_busy+0x1d6/0x2a0 [ 414.389920] __blk_mq_get_driver_tag+0x761/0x9f0 [ 414.390899] blk_mq_dispatch_rq_list+0x1780/0x1ee0 [ 414.391473] ? __pfx_blk_mq_dispatch_rq_list+0x10/0x10 [ 414.392070] ? sbitmap_get+0x2b8/0x450 [ 414.392533] ? __blk_mq_get_driver_tag+0x210/0x9f0 [ 414.393095] __blk_mq_sched_dispatch_requests+0xd99/0x1690 [ 414.393730] ? elv_attempt_insert_merge+0x1b1/0x420 [ 414.394302] ? __pfx___blk_mq_sched_dispatch_requests+0x10/0x10 [ 414.394970] ? lock_acquire+0x18d/0x460 [ 414.395456] ? blk_mq_run_hw_queue+0x637/0xa00 [ 414.395986] ? __pfx_lock_acquire+0x10/0x10 [ 414.396499] blk_mq_sched_dispatch_requests+0x109/0x190 [ 414.397100] blk_mq_run_hw_queue+0x66e/0xa00 [ 414.397616] blk_mq_flush_plug_list.part.17+0x614/0x2030 [ 414.398244] ? __pfx_blk_mq_flush_plug_list.part.17+0x10/0x10 [ 414.398897] ? writeback_sb_inodes+0x241/0xcc0 [ 414.399429] blk_mq_flush_plug_list+0x65/0x80 [ 414.399957] __blk_flush_plug+0x2f1/0x530 [ 414.400458] ? __pfx___blk_flush_plug+0x10/0x10 [ 414.400999] blk_finish_plug+0x59/0xa0 [ 414.401467] wb_writeback+0x7cc/0x920 [ 414.401935] ? __pfx_wb_writeback+0x10/0x10 [ 414.402442] ? mark_held_locks+0xb0/0x110 [ 414.402931] ? __pfx_do_raw_spin_lock+0x10/0x10 [ 414.403462] ? lockdep_hardirqs_on_prepare+0x297/0x3f0 [ 414.404062] wb_workfn+0x2b3/0xcf0 [ 414.404500] ? __pfx_wb_workfn+0x10/0x10 [ 414.404989] process_scheduled_works+0x432/0x13f0 [ 414.405546] ? __pfx_process_scheduled_works+0x10/0x10 [ 414.406139] ? do_raw_spin_lock+0x101/0x2a0 [ 414.406641] ? assign_work+0x19b/0x240 [ 414.407106] ? lock_is_held_type+0x9d/0x110 [ 414.407604] worker_thread+0x6f2/0x1160 [ 414.408075] ? __kthread_parkme+0x62/0x210 [ 414.408572] ? lockdep_hardirqs_on_prepare+0x297/0x3f0 [ 414.409168] ? __kthread_parkme+0x13c/0x210 [ 414.409678] ? __pfx_worker_thread+0x10/0x10 [ 414.410191] kthread+0x33c/0x440 [ 414.410602] ? __pfx_kthread+0x10/0x10 [ 414.411068] ret_from_fork+0x4d/0x80 [ 414.411526] ? __pfx_kthread+0x10/0x10 [ 414.411993] ret_from_fork_asm+0x1b/0x30 [ 414.412489] </TASK>

When interrupts are enabled while a lock taken with spin_lock_irq() is still held, lockdep warns about a potential deadlock:

blk_mq_prep_dispatch_rq
  blk_mq_get_driver_tag
    __blk_mq_get_driver_tag
      __blk_mq_alloc_driver_tag
        blk_mq_tag_busy -> tag is already busy
  // failed to get driver tag
  blk_mq_mark_tag_wait
    spin_lock_irq(&wq->lock) -> lock A (&sbq->ws[i].wait)
    __add_wait_queue(wq, wait) -> wait queue active
    blk_mq_get_driver_tag
      __blk_mq_tag_busy
      -> 1) tag must be idle, which means there can't be inflight IO
      spin_lock_irq(&tags->lock) -> lock B (hctx->tags)
      spin_unlock_irq(&tags->lock) -> unlock B, turn on interrupts accidentally
      -> 2) context must be preempted by an IO interrupt to trigger the deadlock.

As shown above, the deadlock is not possible in theory, but the warning still needs to be fixed. Fix it by using spin_lock_irqsave() to take lock B instead of spin_lock_irq().

Fixes: 4f1731d ("blk-mq: fix potential io hang by wrong 'wake_batch'")
Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20240815024736.2040971-1-lilingfeng@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
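The essence of the fix, sketched (structure simplified from the blk-mq hunk the message describes):

```c
#include <linux/spinlock.h>

/* Take the tags lock with irqsave so that releasing it restores,
 * rather than unconditionally re-enables, interrupts.
 */
static void tag_busy_locked(spinlock_t *tags_lock)
{
	unsigned long flags;

	/* Previously spin_lock_irq()/spin_unlock_irq(): the unlock
	 * re-enabled IRQs even inside an outer spin_lock_irq() region
	 * (lock A, &sbq->ws[i].wait), which is what lockdep flagged.
	 */
	spin_lock_irqsave(tags_lock, flags);
	/* ... mark the tag set busy ... */
	spin_unlock_irqrestore(tags_lock, flags);
}
```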
michalQb pushed a commit that referenced this pull request on Sep 6, 2024:
Ido Schimmel says:
====================
Preparations for FIB rule DSCP selector

This patchset moves the masking of the upper DSCP bits in 'flowi4_tos' to the core instead of relying on callers of the FIB lookup API to do it. This will allow us to start changing users of the API to initialize the 'flowi4_tos' field with all six bits of the DSCP field. In turn, this will allow us to extend FIB rules with a new DSCP selector. By masking the upper DSCP bits in the core we are able to maintain the behavior of the TOS selector in FIB rules and routes to only match on the lower DSCP bits.

While working on this I found two users of the API that do not mask the upper DSCP bits before performing the lookup. The first is an ancient netlink family that is unlikely to be used. It is adjusted in patch #1 to mask both the upper DSCP bits and the ECN bits before calling the API. The second user is a nftables module that differs in this regard from its equivalent iptables module. It is adjusted in patch #2 to invoke the API with the upper DSCP bits masked, like all other callers. The relevant selftest passed, but in the unlikely case that regressions are reported because of this change, we can restore the existing behavior using a new flow information flag as discussed here [1].

The last patch moves the masking of the upper DSCP bits to the core, making the first two patches redundant, but I wanted to post them separately to call attention to the behavior change for these two users of the FIB lookup API.

Future patchsets (around 3) will start unmasking the upper DSCP bits throughout the networking stack before adding support for the new FIB rule DSCP selector.

Changes from v1 [2]:
Patch #3: Include <linux/ip.h> in <linux/in_route.h> instead of including it in net/ip_fib.h

[1] https://lore.kernel.org/netdev/ZpqpB8vJU%2FQ6LSqa@debian/
[2] https://lore.kernel.org/netdev/20240725131729.1729103-1-idosch@nvidia.com/
====================

Link: https://patch.msgid.link/20240814125224.972815-1-idosch@nvidia.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
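For reference, the caller-side masking being moved into the core looks like this (sketch only):

```c
#include <linux/ip.h>
#include <net/route.h>	/* IPTOS_RT_MASK */

/* Keep the three low DSCP bits, clear ECN and the upper DSCP bits --
 * the masking that the core now performs on callers' behalf.
 */
static inline u8 fib_lookup_tos(const struct iphdr *iph)
{
	return iph->tos & IPTOS_RT_MASK;
}
```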
michalQb pushed a commit that referenced this pull request on Sep 6, 2024:
…git/netfilter/nf

Pablo Neira Ayuso says:
====================
Netfilter fixes for net

The following patchset contains Netfilter fixes for net:

Patch #1 disables BH when collecting stats via hardware offload to ensure concurrent updates from the packet path do not result in losing stats. From Sebastian Andrzej Siewior.

Patch #2 uses a write seqcount when resetting counters to serialize against readers. Also from Sebastian Andrzej Siewior.

Patch #3 ensures the vlan header is in place before accessing its fields, according to a KMSAN splat triggered by syzbot.

* tag 'nf-24-08-22' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf:
  netfilter: flowtable: validate vlan header
  netfilter: nft_counter: Synchronize nft_counter_reset() against reader.
  netfilter: nft_counter: Disable BH in nft_counter_offload_stats().
====================

Link: https://patch.msgid.link/20240822101842.4234-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
michalQb pushed a commit that referenced this pull request on Sep 6, 2024:
…rnel/git/netfilter/nf-next

Pablo Neira Ayuso says:
====================
Netfilter updates for net-next

The following batch contains Netfilter updates for net-next:

Patch #1 fixes checksum calculation in nfnetlink_queue with SCTP, segmenting the GSO packet since skb_zerocopy() does not support GSO_BY_FRAGS, from Antonio Ojea.

Patch #2 extends nfnetlink_queue coverage to handle SCTP packets, from Antonio Ojea.

Patch #3 uses consume_skb() instead of kfree_skb() in nfnetlink, from Donald Hunter.

Patch #4 adds a dedicated commit list for sets to speed up intra-transaction lookups, from Florian Westphal.

Patch #5 skips removal of elements from the abort path for the pipapo backend; ditching the shadow copy of this data structure is sufficient.

Patch #6 moves nf_ct_netns_get() out of nf_conncount_init() to let users of conncount decide when to enable conntrack; this is needed by openvswitch. From Xin Long.

Patch #7 passes context to all nft_parse_register_load() calls in preparation for the next patch.

Patches #8 and #9 reject loads from uninitialized registers from the control plane, to remove register initialization from the datapath. From Florian Westphal.

* tag 'nf-next-24-08-23' of git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next:
  netfilter: nf_tables: don't initialize registers in nft_do_chain()
  netfilter: nf_tables: allow loads only when register is initialized
  netfilter: nf_tables: pass context structure to nft_parse_register_load
  netfilter: move nf_ct_netns_get out of nf_conncount_init
  netfilter: nf_tables: do not remove elements if set backend implements .abort
  netfilter: nf_tables: store new sets in dedicated list
  netfilter: nfnetlink: convert kfree_skb to consume_skb
  selftests: netfilter: nft_queue.sh: sctp coverage
  netfilter: nfnetlink_queue: unbreak SCTP traffic
====================

Link: https://patch.msgid.link/20240822221939.157858-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
michalQb pushed a commit that referenced this pull request on Sep 6, 2024:
Ido Schimmel says:
====================
Unmask upper DSCP bits - part 2

tl;dr - This patchset continues to unmask the upper DSCP bits in the IPv4 flow key in preparation for allowing IPv4 FIB rules to match on DSCP. No functional changes are expected. Part 1 was merged in commit ("Merge branch 'unmask-upper-dscp-bits-part-1'").

The TOS field in the IPv4 flow key ('flowi4_tos') is used during FIB lookup to match against the TOS selector in FIB rules and routes. It is currently impossible for user space to configure FIB rules that match on the DSCP value, as the upper DSCP bits are either masked in the various call sites that initialize the IPv4 flow key or along the path to the FIB core. In preparation for adding a DSCP selector to IPv4 and IPv6 FIB rules, we need to make sure the entire DSCP value is present in the IPv4 flow key. This patchset continues to unmask the upper DSCP bits, but this time in the output route path.

Patches #1-#3 unmask the upper DSCP bits in the various places that invoke the core output route lookup functions directly.

Patches #4-#6 do the same in three helpers that are widely used in the output path to initialize the TOS field in the IPv4 flow key.

The rest of the patches continue to unmask these bits in call sites that invoke the following wrappers around the core lookup functions:
Patch #7 - __ip_route_output_key()
Patches #8-#12 - ip_route_output_flow()

The next patchset will handle the callers of ip_route_output_ports() and ip_route_output_key().

No functional changes are expected, as commit 1fa3314 ("ipv4: Centralize TOS matching") moved the masking of the upper DSCP bits to the core, where 'flowi4_tos' is matched against the TOS selector.

Changes since v1 [1]:
* Remove IPTOS_RT_MASK in patch #7 instead of in patch #6

[1] https://lore.kernel.org/netdev/20240827111813.2115285-1-idosch@nvidia.com/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
michalQb pushed a commit that referenced this pull request on Oct 4, 2024:
…_child(). syzkaller reported KMSAN splat in tcp_create_openreq_child(). [0] The uninit variable is tcp_rsk(req)->ao_keyid. tcp_rsk(req)->ao_keyid is initialised only when tcp_conn_request() finds a valid TCP AO option in SYN. Then, tcp_rsk(req)->used_tcp_ao is set accordingly. Let's not read tcp_rsk(req)->ao_keyid when tcp_rsk(req)->used_tcp_ao is false. [0]: BUG: KMSAN: uninit-value in tcp_create_openreq_child+0x198b/0x1ff0 net/ipv4/tcp_minisocks.c:610 tcp_create_openreq_child+0x198b/0x1ff0 net/ipv4/tcp_minisocks.c:610 tcp_v4_syn_recv_sock+0x18e/0x2170 net/ipv4/tcp_ipv4.c:1754 tcp_check_req+0x1a3e/0x20c0 net/ipv4/tcp_minisocks.c:852 tcp_v4_rcv+0x26a4/0x53a0 net/ipv4/tcp_ipv4.c:2265 ip_protocol_deliver_rcu+0x884/0x1270 net/ipv4/ip_input.c:205 ip_local_deliver_finish+0x30f/0x530 net/ipv4/ip_input.c:233 NF_HOOK include/linux/netfilter.h:314 [inline] ip_local_deliver+0x230/0x4c0 net/ipv4/ip_input.c:254 dst_input include/net/dst.h:460 [inline] ip_sublist_rcv_finish net/ipv4/ip_input.c:580 [inline] ip_list_rcv_finish net/ipv4/ip_input.c:631 [inline] ip_sublist_rcv+0x10f7/0x13e0 net/ipv4/ip_input.c:639 ip_list_rcv+0x952/0x9c0 net/ipv4/ip_input.c:674 __netif_receive_skb_list_ptype net/core/dev.c:5703 [inline] __netif_receive_skb_list_core+0xd92/0x11d0 net/core/dev.c:5751 __netif_receive_skb_list net/core/dev.c:5803 [inline] netif_receive_skb_list_internal+0xd8f/0x1350 net/core/dev.c:5895 gro_normal_list include/net/gro.h:515 [inline] napi_complete_done+0x3f2/0x990 net/core/dev.c:6246 e1000_clean+0x1fa4/0x5e50 drivers/net/ethernet/intel/e1000/e1000_main.c:3808 __napi_poll+0xd9/0x990 net/core/dev.c:6771 napi_poll net/core/dev.c:6840 [inline] net_rx_action+0x90f/0x17e0 net/core/dev.c:6962 handle_softirqs+0x152/0x6b0 kernel/softirq.c:554 __do_softirq kernel/softirq.c:588 [inline] invoke_softirq kernel/softirq.c:428 [inline] __irq_exit_rcu kernel/softirq.c:637 [inline] irq_exit_rcu+0x5d/0x120 kernel/softirq.c:649 common_interrupt+0x83/0x90 arch/x86/kernel/irq.c:278 asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:693 __msan_instrument_asm_store+0xd6/0xe0 arch_atomic_inc arch/x86/include/asm/atomic.h:53 [inline] raw_atomic_inc include/linux/atomic/atomic-arch-fallback.h:992 [inline] atomic_inc include/linux/atomic/atomic-instrumented.h:436 [inline] page_ref_inc include/linux/page_ref.h:153 [inline] folio_ref_inc include/linux/page_ref.h:160 [inline] filemap_map_order0_folio mm/filemap.c:3596 [inline] filemap_map_pages+0x11c7/0x2270 mm/filemap.c:3644 do_fault_around mm/memory.c:4879 [inline] do_read_fault mm/memory.c:4912 [inline] do_fault mm/memory.c:5051 [inline] do_pte_missing mm/memory.c:3897 [inline] handle_pte_fault mm/memory.c:5381 [inline] __handle_mm_fault mm/memory.c:5524 [inline] handle_mm_fault+0x3677/0x6f00 mm/memory.c:5689 do_user_addr_fault+0x1373/0x2b20 arch/x86/mm/fault.c:1338 handle_page_fault arch/x86/mm/fault.c:1481 [inline] exc_page_fault+0x54/0xc0 arch/x86/mm/fault.c:1539 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623 Uninit was stored to memory at: tcp_create_openreq_child+0x1984/0x1ff0 net/ipv4/tcp_minisocks.c:611 tcp_v4_syn_recv_sock+0x18e/0x2170 net/ipv4/tcp_ipv4.c:1754 tcp_check_req+0x1a3e/0x20c0 net/ipv4/tcp_minisocks.c:852 tcp_v4_rcv+0x26a4/0x53a0 net/ipv4/tcp_ipv4.c:2265 ip_protocol_deliver_rcu+0x884/0x1270 net/ipv4/ip_input.c:205 ip_local_deliver_finish+0x30f/0x530 net/ipv4/ip_input.c:233 NF_HOOK include/linux/netfilter.h:314 [inline] ip_local_deliver+0x230/0x4c0 net/ipv4/ip_input.c:254 dst_input include/net/dst.h:460 [inline] 
ip_sublist_rcv_finish net/ipv4/ip_input.c:580 [inline] ip_list_rcv_finish net/ipv4/ip_input.c:631 [inline] ip_sublist_rcv+0x10f7/0x13e0 net/ipv4/ip_input.c:639 ip_list_rcv+0x952/0x9c0 net/ipv4/ip_input.c:674 __netif_receive_skb_list_ptype net/core/dev.c:5703 [inline] __netif_receive_skb_list_core+0xd92/0x11d0 net/core/dev.c:5751 __netif_receive_skb_list net/core/dev.c:5803 [inline] netif_receive_skb_list_internal+0xd8f/0x1350 net/core/dev.c:5895 gro_normal_list include/net/gro.h:515 [inline] napi_complete_done+0x3f2/0x990 net/core/dev.c:6246 e1000_clean+0x1fa4/0x5e50 drivers/net/ethernet/intel/e1000/e1000_main.c:3808 __napi_poll+0xd9/0x990 net/core/dev.c:6771 napi_poll net/core/dev.c:6840 [inline] net_rx_action+0x90f/0x17e0 net/core/dev.c:6962 handle_softirqs+0x152/0x6b0 kernel/softirq.c:554 __do_softirq kernel/softirq.c:588 [inline] invoke_softirq kernel/softirq.c:428 [inline] __irq_exit_rcu kernel/softirq.c:637 [inline] irq_exit_rcu+0x5d/0x120 kernel/softirq.c:649 common_interrupt+0x83/0x90 arch/x86/kernel/irq.c:278 asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:693 Uninit was created at: __alloc_pages_noprof+0x82d/0xcb0 mm/page_alloc.c:4706 __alloc_pages_node_noprof include/linux/gfp.h:269 [inline] alloc_pages_node_noprof include/linux/gfp.h:296 [inline] alloc_slab_page mm/slub.c:2265 [inline] allocate_slab mm/slub.c:2428 [inline] new_slab+0x2af/0x14e0 mm/slub.c:2481 ___slab_alloc+0xf73/0x3150 mm/slub.c:3667 __slab_alloc mm/slub.c:3757 [inline] __slab_alloc_node mm/slub.c:3810 [inline] slab_alloc_node mm/slub.c:3990 [inline] kmem_cache_alloc_noprof+0x53a/0x9f0 mm/slub.c:4009 reqsk_alloc_noprof net/ipv4/inet_connection_sock.c:920 [inline] inet_reqsk_alloc+0x63/0x700 net/ipv4/inet_connection_sock.c:951 tcp_conn_request+0x339/0x4860 net/ipv4/tcp_input.c:7177 tcp_v4_conn_request+0x13b/0x190 net/ipv4/tcp_ipv4.c:1719 tcp_rcv_state_process+0x2dd/0x4a10 net/ipv4/tcp_input.c:6711 tcp_v4_do_rcv+0xbee/0x10d0 net/ipv4/tcp_ipv4.c:1932 tcp_v4_rcv+0x3fad/0x53a0 net/ipv4/tcp_ipv4.c:2334 ip_protocol_deliver_rcu+0x884/0x1270 net/ipv4/ip_input.c:205 ip_local_deliver_finish+0x30f/0x530 net/ipv4/ip_input.c:233 NF_HOOK include/linux/netfilter.h:314 [inline] ip_local_deliver+0x230/0x4c0 net/ipv4/ip_input.c:254 dst_input include/net/dst.h:460 [inline] ip_sublist_rcv_finish net/ipv4/ip_input.c:580 [inline] ip_list_rcv_finish net/ipv4/ip_input.c:631 [inline] ip_sublist_rcv+0x10f7/0x13e0 net/ipv4/ip_input.c:639 ip_list_rcv+0x952/0x9c0 net/ipv4/ip_input.c:674 __netif_receive_skb_list_ptype net/core/dev.c:5703 [inline] __netif_receive_skb_list_core+0xd92/0x11d0 net/core/dev.c:5751 __netif_receive_skb_list net/core/dev.c:5803 [inline] netif_receive_skb_list_internal+0xd8f/0x1350 net/core/dev.c:5895 gro_normal_list include/net/gro.h:515 [inline] napi_complete_done+0x3f2/0x990 net/core/dev.c:6246 e1000_clean+0x1fa4/0x5e50 drivers/net/ethernet/intel/e1000/e1000_main.c:3808 __napi_poll+0xd9/0x990 net/core/dev.c:6771 napi_poll net/core/dev.c:6840 [inline] net_rx_action+0x90f/0x17e0 net/core/dev.c:6962 handle_softirqs+0x152/0x6b0 kernel/softirq.c:554 __do_softirq kernel/softirq.c:588 [inline] invoke_softirq kernel/softirq.c:428 [inline] __irq_exit_rcu kernel/softirq.c:637 [inline] irq_exit_rcu+0x5d/0x120 kernel/softirq.c:649 common_interrupt+0x83/0x90 arch/x86/kernel/irq.c:278 asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:693 CPU: 0 PID: 239 Comm: modprobe Tainted: G B 6.10.0-rc7-01816-g852e42cc2dd4 #3 1107521f0c7b55c9309062382d0bda9f604dbb6d Hardware name: QEMU Standard PC (i440FX + PIIX, 
1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014

Fixes: 06b22ef ("net/tcp: Wire TCP-AO to request sockets")
Reported-by: syzkaller <syzkaller@googlegroups.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Dmitry Safonov <0x7f454c46@gmail.com>
Link: https://patch.msgid.link/20240714161719.6528-1-kuniyu@amazon.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
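The shape of the fix, simplified (not the exact tcp_create_openreq_child() hunk):

```c
#include <linux/tcp.h>

/* Sketch only: consume ao_keyid solely when used_tcp_ao confirms the
 * SYN carried a TCP-AO option; otherwise the field was never written.
 */
static void inherit_ao_keyid(const struct tcp_request_sock *treq,
			     u8 *keyid_out)
{
	if (!treq->used_tcp_ao)
		return;		/* ao_keyid is uninitialised here */

	*keyid_out = treq->ao_keyid;
}
```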
michalQb pushed a commit that referenced this pull request on Oct 4, 2024:
When trying to demote 1G hugetlb folios, a lockdep warning is observed:

============================================ WARNING: possible recursive locking detected 6.10.0-rc6-00452-ga4d0275fa660-dirty #79 Not tainted -------------------------------------------- bash/710 is trying to acquire lock: ffffffff8f0a7850 (&h->resize_lock){+.+.}-{3:3}, at: demote_store+0x244/0x460 but task is already holding lock: ffffffff8f0a6f48 (&h->resize_lock){+.+.}-{3:3}, at: demote_store+0xae/0x460

other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock(&h->resize_lock); lock(&h->resize_lock); *** DEADLOCK *** May be due to missing lock nesting notation

4 locks held by bash/710: #0: ffff8f118439c3f0 (sb_writers#5){.+.+}-{0:0}, at: ksys_write+0x64/0xe0 #1: ffff8f11893b9e88 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_fop_write_iter+0xf8/0x1d0 #2: ffff8f1183dc4428 (kn->active#98){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x100/0x1d0 #3: ffffffff8f0a6f48 (&h->resize_lock){+.+.}-{3:3}, at: demote_store+0xae/0x460

stack backtrace: CPU: 3 PID: 710 Comm: bash Not tainted 6.10.0-rc6-00452-ga4d0275fa660-dirty #79 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 Call Trace: <TASK> dump_stack_lvl+0x68/0xa0 __lock_acquire+0x10f2/0x1ca0 lock_acquire+0xbe/0x2d0 __mutex_lock+0x6d/0x400 demote_store+0x244/0x460 kernfs_fop_write_iter+0x12c/0x1d0 vfs_write+0x380/0x540 ksys_write+0x64/0xe0 do_syscall_64+0xb9/0x1d0 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7fa61db14887 RSP: 002b:00007ffc56c48358 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fa61db14887 RDX: 0000000000000002 RSI: 000055a030050220 RDI: 0000000000000001 RBP: 000055a030050220 R08: 00007fa61dbd1460 R09: 000000007fffffff R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002 R13: 00007fa61dc1b780 R14: 00007fa61dc17600 R15: 00007fa61dc16a00 </TASK>

Lockdep considers this an AA deadlock because the different resize_lock mutexes reside in the same lockdep class, but this is a false positive. Place them in distinct classes to avoid these warnings.

Link: https://lkml.kernel.org/r/20240712031314.2570452-1-linmiaohe@huawei.com
Fixes: 8531fc6 ("hugetlb: add hugetlb demote page support")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Muchun Song <muchun.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
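One way to realize "distinct classes" is a per-hstate lock_class_key; a sketch only, not necessarily the merged fix:

```c
#include <linux/hugetlb.h>
#include <linux/mutex.h>

/* A distinct lock_class_key per hstate keeps lockdep from conflating
 * the parent's and the demote target's resize_lock.
 */
static struct lock_class_key resize_keys[HUGE_MAX_HSTATE];

static void hstate_resize_lock_init(struct hstate *h, unsigned int idx)
{
	__mutex_init(&h->resize_lock, "&h->resize_lock", &resize_keys[idx]);
}
```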
michalQb pushed a commit that referenced this pull request on Oct 4, 2024:
When using cachefiles, lockdep may emit something similar to the circular locking dependency notice below. The problem appears to stem from the following:

(1) Cachefiles manipulates xattrs on the files in its cache when called from ->writepages().

(2) The setxattr() and removexattr() system call handlers get the name (and value) from userspace after taking the sb_writers lock, putting accesses of the vma->vm_lock and mm->mmap_lock inside of that.

(3) The afs filesystem uses a per-inode lock to prevent multiple revalidation RPCs and in writeback vs truncate to prevent parallel operations from deadlocking against the server on one side and local page locks on the other.

Fix this by moving the getting of the name and value in {get,remove}xattr() outside of the sb_writers lock. This also has the minor benefits that we don't need to reget these in the event of a retry and we never try to take the sb_writers lock in the event we can't pull the name and value into the kernel.

Alternative approaches that might fix this include moving the dispatch of a write to the cache off to a workqueue or trying to do without the validation lock in afs. Note that this might also affect other filesystems that use netfslib and/or cachefiles.

====================================================== WARNING: possible circular locking dependency detected 6.10.0-build2+ #956 Not tainted ------------------------------------------------------ fsstress/6050 is trying to acquire lock: ffff888138fd82f0 (mapping.invalidate_lock#3){++++}-{3:3}, at: filemap_fault+0x26e/0x8b0 but task is already holding lock: ffff888113f26d18 (&vma->vm_lock->lock){++++}-{3:3}, at: lock_vma_under_rcu+0x165/0x250 which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #4 (&vma->vm_lock->lock){++++}-{3:3}: __lock_acquire+0xaf0/0xd80 lock_acquire.part.0+0x103/0x280 down_write+0x3b/0x50 vma_start_write+0x6b/0xa0 vma_link+0xcc/0x140 insert_vm_struct+0xb7/0xf0 alloc_bprm+0x2c1/0x390 kernel_execve+0x65/0x1a0 call_usermodehelper_exec_async+0x14d/0x190 ret_from_fork+0x24/0x40 ret_from_fork_asm+0x1a/0x30

-> #3 (&mm->mmap_lock){++++}-{3:3}: __lock_acquire+0xaf0/0xd80 lock_acquire.part.0+0x103/0x280 __might_fault+0x7c/0xb0 strncpy_from_user+0x25/0x160 removexattr+0x7f/0x100 __do_sys_fremovexattr+0x7e/0xb0 do_syscall_64+0x9f/0x100 entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #2 (sb_writers#14){.+.+}-{0:0}: __lock_acquire+0xaf0/0xd80 lock_acquire.part.0+0x103/0x280 percpu_down_read+0x3c/0x90 vfs_iocb_iter_write+0xe9/0x1d0 __cachefiles_write+0x367/0x430 cachefiles_issue_write+0x299/0x2f0 netfs_advance_write+0x117/0x140 netfs_write_folio.isra.0+0x5ca/0x6e0 netfs_writepages+0x230/0x2f0 afs_writepages+0x4d/0x70 do_writepages+0x1e8/0x3e0 filemap_fdatawrite_wbc+0x84/0xa0 __filemap_fdatawrite_range+0xa8/0xf0 file_write_and_wait_range+0x59/0x90 afs_release+0x10f/0x270 __fput+0x25f/0x3d0 __do_sys_close+0x43/0x70 do_syscall_64+0x9f/0x100 entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #1 (&vnode->validate_lock){++++}-{3:3}: __lock_acquire+0xaf0/0xd80 lock_acquire.part.0+0x103/0x280 down_read+0x95/0x200 afs_writepages+0x37/0x70 do_writepages+0x1e8/0x3e0 filemap_fdatawrite_wbc+0x84/0xa0 filemap_invalidate_inode+0x167/0x1e0 netfs_unbuffered_write_iter+0x1bd/0x2d0 vfs_write+0x22e/0x320 ksys_write+0xbc/0x130 do_syscall_64+0x9f/0x100 entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #0 (mapping.invalidate_lock#3){++++}-{3:3}: check_noncircular+0x119/0x160 check_prev_add+0x195/0x430 __lock_acquire+0xaf0/0xd80 lock_acquire.part.0+0x103/0x280 down_read+0x95/0x200 filemap_fault+0x26e/0x8b0 __do_fault+0x57/0xd0 do_pte_missing+0x23b/0x320 __handle_mm_fault+0x2d4/0x320 handle_mm_fault+0x14f/0x260 do_user_addr_fault+0x2a2/0x500 exc_page_fault+0x71/0x90 asm_exc_page_fault+0x22/0x30

other info that might help us debug this: Chain exists of: mapping.invalidate_lock#3 --> &mm->mmap_lock --> &vma->vm_lock->lock

Possible unsafe locking scenario: CPU0 CPU1 ---- ---- rlock(&vma->vm_lock->lock); lock(&mm->mmap_lock); lock(&vma->vm_lock->lock); rlock(mapping.invalidate_lock#3); *** DEADLOCK ***

1 lock held by fsstress/6050: #0: ffff888113f26d18 (&vma->vm_lock->lock){++++}-{3:3}, at: lock_vma_under_rcu+0x165/0x250

stack backtrace: CPU: 0 PID: 6050 Comm: fsstress Not tainted 6.10.0-build2+ #956 Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014 Call Trace: <TASK> dump_stack_lvl+0x57/0x80 check_noncircular+0x119/0x160 ? queued_spin_lock_slowpath+0x4be/0x510 ? __pfx_check_noncircular+0x10/0x10 ? __pfx_queued_spin_lock_slowpath+0x10/0x10 ? mark_lock+0x47/0x160 ? init_chain_block+0x9c/0xc0 ? add_chain_block+0x84/0xf0 check_prev_add+0x195/0x430 __lock_acquire+0xaf0/0xd80 ? __pfx___lock_acquire+0x10/0x10 ? __lock_release.isra.0+0x13b/0x230 lock_acquire.part.0+0x103/0x280 ? filemap_fault+0x26e/0x8b0 ? __pfx_lock_acquire.part.0+0x10/0x10 ? rcu_is_watching+0x34/0x60 ? lock_acquire+0xd7/0x120 down_read+0x95/0x200 ? filemap_fault+0x26e/0x8b0 ? __pfx_down_read+0x10/0x10 ? __filemap_get_folio+0x25/0x1a0 filemap_fault+0x26e/0x8b0 ? __pfx_filemap_fault+0x10/0x10 ? find_held_lock+0x7c/0x90 ? __pfx___lock_release.isra.0+0x10/0x10 ? __pte_offset_map+0x99/0x110 __do_fault+0x57/0xd0 do_pte_missing+0x23b/0x320 __handle_mm_fault+0x2d4/0x320 ? __pfx___handle_mm_fault+0x10/0x10 handle_mm_fault+0x14f/0x260 do_user_addr_fault+0x2a2/0x500 exc_page_fault+0x71/0x90 asm_exc_page_fault+0x22/0x30 </TASK>

Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/2136178.1721725194@warthog.procyon.org.uk
cc: Alexander Viro <viro@zeniv.linux.org.uk>
cc: Christian Brauner <brauner@kernel.org>
cc: Jan Kara <jack@suse.cz>
cc: Jeff Layton <jlayton@kernel.org>
cc: Gao Xiang <xiang@kernel.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: netfs@lists.linux.dev
cc: linux-erofs@lists.ozlabs.org
cc: linux-fsdevel@vger.kernel.org
[brauner: fix minor issues]
Signed-off-by: Christian Brauner <brauner@kernel.org>
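In outline, the reordering the patch describes (a sketch modeled on the removexattr() path, not the actual diff):

```c
#include <linux/limits.h>
#include <linux/mount.h>
#include <linux/path.h>
#include <linux/uaccess.h>
#include <linux/xattr.h>

static long removexattr_reordered(struct path *path,
				  const char __user *uname)
{
	char kname[XATTR_NAME_MAX + 1];
	long error;

	/* Copy the name first: a page fault here may take mmap_lock,
	 * which must not nest inside sb_writers.
	 */
	error = strncpy_from_user(kname, uname, sizeof(kname));
	if (error == 0 || error == sizeof(kname))
		return -ERANGE;
	if (error < 0)
		return error;

	error = mnt_want_write(path->mnt);	/* takes sb_writers */
	if (error)
		return error;

	/* ... perform the removal using kname ... */

	mnt_drop_write(path->mnt);
	return 0;
}
```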