Revert "bpf: precise scalar_value tracking"
ANBZ: #342

This reverts commit 074d356.

Commit 565c32b ("bpf: track spill/fill of constants") was backported from the
upstream kernel-4.19.y stable branch to fix a CVE, but it causes a serious
regression in the number of states processed and a corresponding increase in
verification time. Loading cilium's bpf_lxc.o fails because insn_processed
exceeds BPF_COMPLEXITY_LIMIT_INSNS due to this regression.

To fix this problem, the follow-up optimization patch
074d356 ("bpf: precise scalar_value tracking") was introduced. However, its
subsequent bug-fix patches were not backported, so the bpf verifier incorrectly
prunes states and misjudges some bpf instructions as dead code; because the
verifier then overwrites that "dead" code, the kernel eventually hits a soft
lockup.
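
For context, the soft lockup comes from the verifier's dead-code sanitizing
pass, which overwrites every instruction it never marked as seen with a
jump-to-self. The following is a minimal sketch, modeled on the 4.19-era
kernel/bpf/verifier.c:sanitize_dead_code(); field and helper details may
differ in this tree:

#include <linux/bpf_verifier.h>	/* struct bpf_verifier_env, bpf_insn_aux_data */
#include <linux/filter.h>	/* BPF_JMP_IMM() */

static void sanitize_dead_code(struct bpf_verifier_env *env)
{
	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
	/* jump-to-self: off = -1 re-executes the same instruction forever */
	struct bpf_insn trap = BPF_JMP_IMM(BPF_JA, 0, 0, -1);
	struct bpf_insn *insn = env->prog->insnsi;
	const int insn_cnt = env->prog->len;
	int i;

	for (i = 0; i < insn_cnt; i++) {
		/* never-seen insns are assumed dead and overwritten */
		if (aux_data[i].seen)
			continue;
		memcpy(insn + i, &trap, sizeof(trap));
	}
}

If pruning is wrong and a reachable instruction is left unmarked, executing it
then spins on the jump-to-self, which is the soft lockup described above.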

There are three ways to fix the dead-code misjudgment problem:
1. backport the follow-up bug-fix patches;
2. increase BPF_COMPLEXITY_LIMIT_INSNS;
3. revert the CVE-fix patches.

The first way is more elegant, but also riskier.
The bug-fix patches are:
- a3ce685 ("bpf: fix precision tracking")
- b3b50f0 ("bpf: fix precision bit propagation for BPF_ST instructions")
- 6754172 ("bpf: fix precision tracking in presence of bpf2bpf calls")
- 2339cd6 ("bpf: fix precision tracking of stack slots")
- f54c789 ("bpf: Fix precision tracking for unbounded scalars")
To make these patches work, one more seemingly unrelated patch needs to be
backported as well: 9242b5f ("bpf: add self-check logic to liveness
analysis").
These patches touch a lot of code, and it is hard to guarantee that they will
not introduce other bugs, so this is a risky solution.

The second way only mitigates the degradation a little. Processing cilium's
bpf_lxc.o (2900+ instructions) yields the following statistics:

                 cve-not-fixed   cve-fixed   fixed-with-optimization
processed insns  39024           293776      212987
verify time      150+ms          3.6s        2.5s

So the most practical solution is the third way: reverting the CVE-fix patches.
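
For reference, the limit mentioned above is the verifier's per-program
instruction-processing budget, checked from do_check() in
kernel/bpf/verifier.c. A minimal sketch of that check follows; the constant is
the assumed 4.19-era value, and check_insn_budget() is a hypothetical helper
used only to keep the illustration self-contained:

#define BPF_COMPLEXITY_LIMIT_INSNS	131072	/* assumed 4.19-era value */

/* would live in kernel/bpf/verifier.c, where verbose() is defined */
static int check_insn_budget(struct bpf_verifier_env *env, int insn_processed)
{
	if (insn_processed > BPF_COMPLEXITY_LIMIT_INSNS) {
		/* bpf_lxc.o hits this once ~213k-294k insns are processed */
		verbose(env, "BPF program is too large. Processed %d insn\n",
			insn_processed);
		return -E2BIG;
	}
	return 0;
}

Raising the define would let bpf_lxc.o load again, but the multi-second
verification times shown in the table above would remain.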

Signed-off-by: Qiao Ma <mqaio@linux.alibaba.com>
Acked-by: Mao Wenan <wenan.mao@linux.alibaba.com>
Acked-by: Tony Lu <tonylu@linux.alibaba.com>
maqiao-mq committed Apr 20, 2022
1 parent f807751 commit f3e93a7
Showing 2 changed files with 14 additions and 519 deletions.
19 changes: 0 additions & 19 deletions include/linux/bpf_verifier.h
@@ -89,8 +89,6 @@ struct bpf_reg_state {
*/
u32 frameno;
enum bpf_reg_liveness live;
/* if (!precise && SCALAR_VALUE) min/max/tnum don't affect safety */
bool precise;
};

enum bpf_stack_slot_type {
@@ -134,31 +132,14 @@ struct bpf_id_pair {
u32 cur;
};

struct bpf_idx_pair {
u32 prev_idx;
u32 idx;
};

/* Maximum number of register states that can exist at once */
#define BPF_ID_MAP_SIZE (MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)
#define MAX_CALL_FRAMES 8
struct bpf_verifier_state {
/* call stack tracking */
struct bpf_func_state *frame[MAX_CALL_FRAMES];
struct bpf_verifier_state *parent;
u32 curframe;
bool speculative;

/* first and last insn idx of this verifier state */
u32 first_insn_idx;
u32 last_insn_idx;
/* jmp history recorded from first to last.
* backtracking is using it to go from last to first.
* For most states jmp_history_cnt is [0-3].
* For loops can go up to ~40.
*/
struct bpf_idx_pair *jmp_history;
u32 jmp_history_cnt;
};

/* linked list of verifier states used to prune search */
