Support eliding map lookup nullness #8284
base: bpf-next_base
Conversation
Force-push history:
- 419af5f → c0247f7 (upstream branch: 8eef6ac)
- cd3399d → 13d536a (upstream branch: c5d2bac)
- c0247f7 → ce9b303
- 13d536a → 360e2c0 (upstream branch: c5d2bac)
- ce9b303 → ce2660e
The print was missing a newline.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
The MEM_WRITE attribute is defined as: "Non-presence of MEM_WRITE means that MEM is only being read". bpf_load_hdr_opt() both reads from and writes to its arg2 - void *search_res. This matters a lot for the next commit, where we track stack accesses more precisely. Without this annotation, the verifier will make false assumptions about the contents of memory written to by helpers and possibly prune valid branches.

Fixes: 6fad274 ("bpf: Add MEM_WRITE attribute")
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
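For context, a minimal sketch of what such a fix typically looks like, assuming the usual bpf_func_proto layout from net/core/filter.c (the exact flag combination in the real diff may differ): the helper's arg2 annotation gains MEM_WRITE so the verifier knows the buffer is written, not merely read.

```c
/* Hedged sketch, not the literal upstream diff: adding MEM_WRITE to the
 * search_res argument of bpf_load_hdr_opt()'s proto tells the verifier
 * that the helper writes through arg2 rather than only reading it.
 */
static const struct bpf_func_proto bpf_sock_ops_load_hdr_opt_proto = {
	.func		= bpf_sock_ops_load_hdr_opt,
	.gpl_only	= false,
	.ret_type	= RET_INTEGER,
	.arg1_type	= ARG_PTR_TO_CTX,
	.arg2_type	= ARG_PTR_TO_MEM | MEM_WRITE,	/* previously lacked MEM_WRITE */
	.arg3_type	= ARG_CONST_SIZE,
	.arg4_type	= ARG_ANYTHING,
};
```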
Previously, the verifier treated all PTR_TO_STACK registers passed to a helper call as potentially written to by the helper. However, all calls to check_stack_range_initialized() already have precise access type information available. Rather than treat ACCESS_HELPER as a proxy for BPF_WRITE, pass enum bpf_access_type to check_stack_range_initialized() to more precisely track helper arguments.

One benefit of this precision is that registers tracked as valid spills and passed as a read-only helper argument remain tracked after the call, rather than being marked STACK_MISC afterwards. An additional benefit is that the verifier logs become more precise: for this particular error, users will see a slightly clearer message. See the included selftest updates for examples.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
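As a rough illustration, here is a sketch of the interface change, assuming the pre-existing enum bpf_access_type (BPF_READ/BPF_WRITE) from include/linux/bpf.h; the actual parameter list in kernel/bpf/verifier.c may differ from this.

```c
/* Hedged sketch: callers now pass the precise access type instead of the
 * verifier assuming every PTR_TO_STACK helper argument may be written.
 * The parameter list here is illustrative, not the real patch.
 */
static int check_stack_range_initialized(struct bpf_verifier_env *env,
					 int regno, int off, int access_size,
					 bool zero_size_allowed,
					 enum bpf_access_type type,
					 struct bpf_call_arg_meta *meta)
{
	/* ... bounds and initialization checks elided ... */

	/* Only a BPF_WRITE argument needs to invalidate spilled registers
	 * in the accessed range (marking slots STACK_MISC); a BPF_READ
	 * argument can leave valid spills tracked across the call.
	 */
	return 0;
}
```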
This commit allows progs to elide a null check on statically known map lookup keys. In other words, if the verifier can statically prove that the lookup will be in-bounds, allow the prog to drop the null check.

This is useful for two reasons:

1. Large numbers of nullness checks (especially when they cannot fail) unnecessarily push the prog towards BPF_COMPLEXITY_LIMIT_JMP_SEQ.
2. It forms a tighter contract between programmer and verifier.

For (1), bpftrace is starting to make heavier use of percpu scratch maps. As a result, for user scripts with a large number of unrolled loops, we are starting to hit jump complexity verification errors. These percpu lookups cannot fail anyways, as we only use static key values. Eliding nullness probably results in less work for the verifier as well.

For (2), percpu scratch maps are often used as a larger stack, as the current stack is limited to 512 bytes. In these situations, it is desirable for the programmer to express: "this lookup should never fail, and if it does, it means I messed up the code". By omitting the null check, the programmer can "ask" the verifier to double-check the logic.

Tests also have to be updated in sync with these changes, as the verifier is more efficient with this change. Notably, the iters.c tests had to be changed to use a map type that still requires null checks, as they exercise verifier tracking logic w.r.t. iterators.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
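To make the user-visible effect concrete, here is a hedged example of the kind of program this enables; the map and section names are illustrative, not taken from the patch.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative percpu scratch map: one entry, so key 0 is always in-bounds. */
struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} scratch SEC(".maps");

SEC("tp/syscalls/sys_enter_getpgid")
int elide_nullness(void *ctx)
{
	__u32 key = 0;	/* constant key, provably < max_entries */
	__u64 *val;

	val = bpf_map_lookup_elem(&scratch, &key);
	/* Before this change the verifier demanded: if (!val) return 0;
	 * With nullness elision, the lookup into an array-type map is
	 * statically in-bounds, so direct use can be accepted.
	 */
	*val = 42;
	return 0;
}

char _license[] SEC("license") = "GPL";
```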
Test that nullness elision works for common use cases. For example, we want to check that both constant scalar spills and STACK_ZERO work, as well as the case where both const and non-const values of R2 lead up to a lookup. And obviously some bounds checks.

Particularly tricky are spills both smaller and larger than the key size. For smaller, we need to ensure the verifier doesn't let through a potential read into unchecked bytes. For larger, endianness comes into play, as the native-endian value tracked in the verifier may not match the bytes the kernel would have read out of the key pointer. So check that we disallow both.

Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
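For the "spill smaller than key size" hazard specifically, here is a hedged sketch of the shape of test being described, reusing the illustrative scratch map from the example above; this is not the actual selftest code.

```c
/* Hedged sketch of an expected-to-be-REJECTED case: only 2 of the 4 key
 * bytes on the stack hold a known constant, and the remaining bytes are
 * uninitialized. The verifier must not treat the key as statically known,
 * and should reject the read of unchecked stack bytes outright.
 */
SEC("tp/syscalls/sys_enter_getpgid")
int smaller_spill_rejected(void *ctx)
{
	char raw_key[sizeof(__u32)];
	__u64 *val;

	*(__u16 *)raw_key = 0;	/* low 2 bytes constant, high 2 bytes unknown */
	val = bpf_map_lookup_elem(&scratch, raw_key);
	if (val)
		*val = 42;
	return 0;
}
```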
Force-pushed: 360e2c0 → 5f07c16 (upstream branch: c5d2bac)
Pull request for series with
subject: Support eliding map lookup nullness
version: 6
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=919724