[PATCH OpenHarmony-4.19 00/17] OpenHarmony-4.19 cve bugfix 0731
The following CVEs are fixed on OpenHarmony-4.19:

CVE-2021-21781
CVE-2021-22555
CVE-2021-35039
CVE-2021-3609
CVE-2021-34693
CVE-2021-32078
CVE-2021-33624

Note: CVE-2021-33624 has some bpf selftests pre-dependent patches.

-----------------------------------

Alexei Starovoitov (1):
  bpf: extend is_branch_taken to registers

Andrey Ignatov (1):
  selftests/bpf: Test narrow loads with off > 0 in test_verifier

Daniel Borkmann (5):
  bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test
  bpf: Update selftests to reflect new error states
  bpf: Inherit expanded/patched seen count from old aux data
  bpf: Do not mark insn as seen under speculative path verification
  bpf: Fix leakage under speculation on mispredicted branches

Florian Westphal (1):
  netfilter: x_tables: fix compat match/target pad out-of-bound write

John Fastabend (1):
  bpf: Test_verifier, bpf_get_stack return value add <0

Mimi Zohar (1):
  module: limit enabling module.sig_enforce

Norbert Slusarek (1):
  can: bcm: fix infoleak in struct bcm_msg_head

Ovidiu Panait (2):
  bpf: fix up selftests after backports were fixed
  selftests/bpf: add selftest part of "bpf: improve verifier branch analysis"

Piotr Krysiuk (1):
  bpf, selftests: Fix up some test_verifier cases for unprivileged

Russell King (2):
  ARM: footbridge: remove personal server platform
  ARM: ensure the signal page contains defined contents

Thadeu Lima de Souza Cascardo (1):
  can: bcm: delay release of struct bcm_op after synchronize_rcu()

 arch/arm/configs/footbridge_defconfig | 1 -
 arch/arm/kernel/signal.c | 14 +--
 arch/arm/mach-footbridge/Kconfig | 21 ----
 arch/arm/mach-footbridge/Makefile | 2 -
 arch/arm/mach-footbridge/personal-pci.c | 58 ----------
 arch/arm/mach-footbridge/personal.c | 25 -----
 kernel/bpf/verifier.c | 95 +++++++++++++----
 kernel/module.c | 9 ++
 net/can/bcm.c | 10 +-
 net/ipv4/netfilter/arp_tables.c | 2 +
 net/ipv4/netfilter/ip_tables.c | 2 +
 net/ipv6/netfilter/ip6_tables.c | 2 +
 net/netfilter/x_tables.c | 10 +-
 tools/testing/selftests/bpf/test_verifier.c | 112 +++++++++++++-----
 14 files changed, 196 insertions(+), 167 deletions(-)
 delete mode 100644 arch/arm/mach-footbridge/personal-pci.c
 delete mode 100644 arch/arm/mach-footbridge/personal.c

--
2.22.0
From: Ovidiu Panait

On 2021/7/31 11:14, Yu Changchun wrote:

From: Ovidiu Panait
stable inclusion from linux-4.19.193 commit b190383c714a379002b00bc8de43371e78d291d8 category: bugfix issue: #I42H19 CVE: NA
--------------------------------
After the backport of the changes to fix CVE-2019-7308, the selftests also need to be fixed up, as was originally done in mainline commit 80c9b2fae87b ("bpf: add various test cases to selftests").
This is a backport of upstream commit 80c9b2fae87b ("bpf: add various test cases to selftests") adapted to 4.19 in order to fix the selftests that began to fail after CVE-2019-7308 fixes.
Suggested-by: Frank van der Linden
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
Looks good to me
From: Piotr Krysiuk
On 2021/7/31 11:14, Yu Changchun wrote:
From: Piotr Krysiuk
stable inclusion from linux-4.19.193 commit 1982f436a9a990e338ac4d7ed80a9fb40e0a1885 category: bugfix issue: #I42H19 CVE: NA
--------------------------------
commit 0a13e3537ea67452d549a6a80da3776d6b7dedb3 upstream
Fix up test_verifier error messages for the case where the original error message changed, or for the case where pointer alu errors differ between privileged and unprivileged tests. Also, add alternative tests for keeping coverage of the original verifier rejection error message (fp alu), and newly reject map_ptr += rX where rX == 0 given we now forbid alu on these types for unprivileged. All test_verifier cases pass after the change. The test case fixups were kept separate to ease backporting of core changes.
Signed-off-by: Piotr Krysiuk
Co-developed-by: Daniel Borkmann
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
[OP: backport to 4.19, skipping non-existent tests]
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 tools/testing/selftests/bpf/test_verifier.c | 42 ++++++++++++++++-----
 1 file changed, 33 insertions(+), 9 deletions(-)
Looks good to me
From: Andrey Ignatov
On 2021/7/31 11:14, Yu Changchun wrote:
From: Andrey Ignatov
stable inclusion from linux-4.19.193 commit 737f5f3a633518feae7b2793f4666c67e39bcc5a category: bugfix issue: #I42H19 CVE: NA
--------------------------------
commit 6c2afb674dbda9b736b8f09c976516e1e788860a upstream
Test the following narrow loads in test_verifier for context __sk_buff:
* off=1, size=1 - ok;
* off=2, size=1 - ok;
* off=3, size=1 - ok;
* off=0, size=2 - ok;
* off=1, size=2 - fail;
* off=2, size=2 - ok;
* off=3, size=2 - fail.
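For reference, a test_verifier entry for one of these narrow loads looks roughly like the sketch below; the test name, the cb[0] offset and the prog_type are illustrative only, not the exact hunk from this patch:

	{
		"narrow context load, off=1, size=1 (illustrative)",
		.insns = {
		/* 1-byte load from __sk_buff->cb[0] at byte offset 1 */
		BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
			    offsetof(struct __sk_buff, cb[0]) + 1),
		BPF_EXIT_INSN(),
		},
		.result = ACCEPT,
		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
	},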
Signed-off-by: Andrey Ignatov
Signed-off-by: Alexei Starovoitov
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 tools/testing/selftests/bpf/test_verifier.c | 48 ++++++++++++++++-----
 1 file changed, 38 insertions(+), 10 deletions(-)
Looks good to me
From: Ovidiu Panait
On 2021/7/31 11:14, Yu Changchun wrote:
From: Ovidiu Panait
stable inclusion from linux-4.19.193 commit c905bfe767e98a13dd886bf241ba9ee0640a53ff category: bugfix issue: #I42H19 CVE: NA
--------------------------------
Backport the missing selftest part of commit 7da6cd690c43 ("bpf: improve verifier branch analysis") in order to fix the following test_verifier failures:
...
Unexpected success to load!
0: (b7) r0 = 0
1: (75) if r0 s>= 0x0 goto pc+1
3: (95) exit
processed 3 insns (limit 131072), stack depth 0
Unexpected success to load!
0: (b7) r0 = 0
1: (75) if r0 s>= 0x0 goto pc+1
3: (95) exit
processed 3 insns (limit 131072), stack depth 0
...
The changesets apply with a minor context difference.
Fixes: 7da6cd690c43 ("bpf: improve verifier branch analysis")
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
Looks good to me
From: Alexei Starovoitov
On 2021/7/31 11:14, Yu Changchun wrote:
From: Alexei Starovoitov
stable inclusion from linux-4.19.193 commit e0b86677fb3e4622b444dcdd8546caa0dba8a689 category: bugfix issue: #I42H19 CVE: NA
--------------------------------
commit fb8d251ee2a6bf4d7f4af5548e9c8f4fb5f90402 upstream
This patch extends the is_branch_taken() logic from JMP+K instructions to JMP+X instructions. Conditional branches are often done when src and dst registers contain known scalars. In such a case the verifier can follow the branch that is going to be taken when the program executes. That speeds up verification and is an essential feature for supporting bounded loops.
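For reference, the shape of the change in check_cond_jmp_op() is roughly the sketch below (paraphrased against the 4.19 backport, where is_branch_taken() takes no is_jmp32 parameter; not the literal hunk):

	if (BPF_SRC(insn->code) == BPF_K) {
		pred = is_branch_taken(dst_reg, insn->imm, opcode);
	} else if (src_reg->type == SCALAR_VALUE &&
		   tnum_is_const(src_reg->var_off)) {
		/* JMP+X with a known-constant scalar src can be handled
		 * just like JMP+K
		 */
		pred = is_branch_taken(dst_reg, src_reg->var_off.value,
				       opcode);
	}

	if (pred == 1) {
		/* only follow the goto, ignore fall-through */
		*insn_idx += insn->off;
		return 0;
	} else if (pred == 0) {
		/* only follow the fall-through branch, ignore goto */
		return 0;
	}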
Signed-off-by: Alexei Starovoitov
Acked-by: Andrii Nakryiko
Signed-off-by: Daniel Borkmann
[OP: drop is_jmp32 parameter from is_branch_taken() calls and adjust context]
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 kernel/bpf/verifier.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)
Looks good to me
From: John Fastabend
On 2021/7/31 11:14, Yu Changchun wrote:
From: John Fastabend
stable inclusion from linux-4.19.193 commit f915e7975fc2d593ddb60b67d14eef314eb6dd08 category: bugfix issue: #I42H19 CVE: NA
--------------------------------
commit 9ac26e9973bac5716a2a542e32f380c84db2b88c upstream.
With the current ALU32 subreg handling and the retval refine fix from the previous patches we see an expected failure in test_verifier. With verbose verifier state printed at each step for clarity, we have the following relevant lines [register states that are not useful for seeing the failure cause are omitted]:
Failed to load prog 'Success'!
[..]
14: (85) call bpf_get_stack#67
 R0_w=map_value(id=0,off=0,ks=8,vs=48,imm=0) R3_w=inv48
15: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
15: (b7) r1 = 0
16: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0
16: (bf) r8 = r0
17: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
17: (67) r8 <<= 32
18: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smax_value=9223372032559808512, umax_value=18446744069414584320, var_off=(0x0; 0xffffffff00000000), s32_min_value=0, s32_max_value=0, u32_max_value=0, var32_off=(0x0; 0x0))
18: (c7) r8 s>>= 32
19: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=2147483647, var32_off=(0x0; 0xffffffff))
19: (cd) if r1 s< r8 goto pc+16
 R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, var32_off=(0x0; 0xffffffff))
20: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, R9=inv48
20: (1f) r9 -= r8
21: (bf) r2 = r7
22: R2_w=map_value(id=0,off=0,ks=8,vs=48,imm=0)
22: (0f) r2 += r8
value -2147483648 makes map_value pointer be out of bounds
After the call to bpf_get_stack() on line 14 and some moves, we have at line 16 an r8 bound with max_value 48 but an unknown min value. This is to be expected: the bpf_get_stack() call can only return at most the input size, but is free to return any negative error in the 32-bit register space. The C helper returns an int, so it uses the lower 32 bits.
Lines 17 and 18 clear the top 32 bits with a left/right shift, but the right shift is arithmetic (ARSH), so before line 19 we still have a worst-case min bound of -2147483648. At this point the signed check 'r1 s< r8', meant to protect the addition on line 22 where the dst reg is a map_value pointer, may very well be true with a large negative number. The final line 22 will then detect this as an invalid operation and fail the program. What we want is to proceed only if r8 is a positive, non-error value. So change 'r1 s< r8' to 'r1 s> r8' so that we jump if r8 is negative.
Next we will throw an error because we access past the end of the map value. The map value size is 48 and sizeof(struct test_val) is 48 so we walk off the end of the map value on the second call to get bpf_get_stack(). Fix this by changing sizeof(struct test_val) to 24 by using 'sizeof(struct test_val) / 2'. After this everything passes as expected.
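Illustrative only (macro form; the jump offset here is made up, not the one from the actual test): the corrected guard looks roughly like

	BPF_MOV64_IMM(BPF_REG_1, 0),
	/* jump out when r1 s> r8, i.e. when the bpf_get_stack() retval in
	 * r8 is negative; only fall through for 0 <= r8 <= input size
	 */
	BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),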
Signed-off-by: John Fastabend
Signed-off-by: Alexei Starovoitov
Signed-off-by: Daniel Borkmann
Link: https://lore.kernel.org/bpf/158560426019.10843.3285429543232025187.stgit@joh...
Signed-off-by: Greg Kroah-Hartman
[OP: backport to 4.19]
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 tools/testing/selftests/bpf/test_verifier.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
Looks good to me
From: Daniel Borkmann
On 2021/7/31 11:14, Yu Changchun wrote:
From: Daniel Borkmann
stable inclusion from linux-4.19.193 commit d1e281d6cb8841122c4677b47fcebdc6f410bd74 category: bugfix issue: #I42H19 CVE: NA
--------------------------------
[ no upstream commit ]
Switch the comparison, so that is_branch_taken() will recognize that below branch is never taken:
[...]
17: [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
17: (67) r8 <<= 32
18: [...] R8_w=inv(id=0,smax_value=-4294967296,umin_value=9223372036854775808,umax_value=18446744069414584320,var_off=(0x8000000000000000; 0x7fffffff00000000)) [...]
18: (c7) r8 s>>= 32
19: [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
19: (6d) if r1 s> r8 goto pc+16
 [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
[...]
Currently we check for is_branch_taken() only if either K is source, or source is a scalar value that is const. For upstream it would be good to extend this properly to check whether dst is const and src not.
For the sake of the test_verifier, it is probably not needed here:
# ./test_verifier 101
#101/p bpf_get_stack return R0 within range OK
Summary: 1 PASSED, 0 SKIPPED, 0 FAILED
I haven't seen this issue in test_progs* though, they are passing fine:
# ./test_progs-no_alu32 -t get_stack
Switching to flavor 'no_alu32' subdirectory...
#20 get_stack_raw_tp:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
# ./test_progs -t get_stack
#20 get_stack_raw_tp:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Daniel Borkmann
Acked-by: Alexei Starovoitov
Acked-by: John Fastabend
Signed-off-by: Greg Kroah-Hartman
[OP: backport to 4.19]
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
Looks good to me
From: Daniel Borkmann
On 2021/7/31 11:14, Yu Changchun wrote:
From: Daniel Borkmann
stable inclusion from linux-4.19.193 commit 138b0ec1064c8f154a32297458e562591a94773f category: bugfix issue: #I42H19 CVE: NA
--------------------------------
commit d7a5091351756d0ae8e63134313c455624e36a13 upstream
Update various selftest error messages:
* The 'Rx tried to sub from different maps, paths, or prohibited types' is reworked into more specific/differentiated error messages for better guidance.
* The change into 'value -4294967168 makes map_value pointer be out of bounds' is due to moving the mixed bounds check into the speculation handling, and thus it occurs slightly later than the above-mentioned sanity check.
* The change into 'math between map_value pointer and register with unbounded min value' is similarly due to register sanity check coming before the mixed bounds check.
* The case of 'map access: known scalar += value_ptr from different maps' now loads fine given masks are the same from the different paths (despite max map value size being different).
Signed-off-by: Daniel Borkmann
Reviewed-by: John Fastabend
Acked-by: Alexei Starovoitov
[OP: 4.19 backport, account for split test_verifier and different / missing tests]
Signed-off-by: Ovidiu Panait
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 tools/testing/selftests/bpf/test_verifier.c | 35 +++++++--------------
 1 file changed, 12 insertions(+), 23 deletions(-)
Looks good to me.
From: Daniel Borkmann
On 2021/7/31 11:14, Yu Changchun wrote:
From: Daniel Borkmann
mainline inclusion from mainline-v5.13-rc7 commit d203b0fd863a2261e5d00b97f3d060c4c2a6db71 category: bugfix issue: #I42H19 CVE: CVE-2021-33624
--------------------------------
Instead of relying on current env->pass_cnt, use the seen count from the old aux data in adjust_insn_aux_data(), and expand it to the new range of patched instructions. This change is valid given we always expand 1:n with n>=1, so what applies to the old/original instruction needs to apply for the replacement as well.
Not relying on env->pass_cnt is a prerequisite for a later change where we want to avoid marking an instruction seen when verified under speculative execution path.
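For reference, the effect on adjust_insn_aux_data() is roughly the sketch below (4.19 variant, where 'seen' is a bool; simplified, not the literal hunk):

	bool old_seen = old_data[off].seen;
	...
	for (i = off; i < off + cnt - 1; i++)
		/* expand the original insn's seen flag to the whole patched
		 * range instead of unconditionally marking the replacement
		 * insns as seen
		 */
		new_data[i].seen = old_seen;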
Signed-off-by: Daniel Borkmann
Reviewed-by: John Fastabend
Reviewed-by: Benedict Schlueter
Reviewed-by: Piotr Krysiuk
Acked-by: Alexei Starovoitov
Conflicts:
	kernel/bpf/verifier.c
[seen of bpf_insn_aux_data is bool in kernel-4.19]
Signed-off-by: He Fengqing
Reviewed-by: Kuohai Xu
Reviewed-by: Xiu Jianfeng
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 kernel/bpf/verifier.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
Looks good to me
From: Daniel Borkmann
On 2021/7/31 11:14, Yu Changchun wrote:
From: Daniel Borkmann
mainline inclusion from mainline-v5.13-rc7 commit fe9a5ca7e370e613a9a75a13008a3845ea759d6e category: bugfix issue: #I42H19 CVE: CVE-2021-33624
--------------------------------
... in such circumstances, we do not want to mark the instruction as seen given the goal is still to jmp-1 rewrite/sanitize dead code, if it is not reachable from the non-speculative path verification. We do however want to verify it for safety regardless.
With the patch as-is all the insns that have been marked as seen before the patch will also be marked as seen after the patch (just with a potentially different non-zero count). An upcoming patch will also verify paths that are unreachable in the non-speculative domain, hence this extension is needed.
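The core of the change can be sketched as below (paraphrased for 4.19, where 'seen' is a bool rather than a pass count; not the literal hunk):

	static void sanitize_mark_insn_seen(struct bpf_verifier_env *env)
	{
		struct bpf_verifier_state *cur = env->cur_state;

		/* do not mark insns as seen when walked purely under a
		 * speculative path, so that dead code elimination can still
		 * sanitize them if they stay unreachable non-speculatively
		 */
		if (!cur->speculative)
			env->insn_aux_data[env->insn_idx].seen = true;
	}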
Signed-off-by: Daniel Borkmann
Reviewed-by: John Fastabend
Reviewed-by: Benedict Schlueter
Reviewed-by: Piotr Krysiuk
Acked-by: Alexei Starovoitov
Conflicts:
	kernel/bpf/verifier.c
[pass_cnt is not introduced in kernel-4.19]
Signed-off-by: He Fengqing
Reviewed-by: Kuohai Xu
Reviewed-by: Xiu Jianfeng
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 kernel/bpf/verifier.c | 17 +++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)
Looks good to me
From: Daniel Borkmann
On 2021/7/31 11:14, Yu Changchun wrote:
From: Daniel Borkmann
mainline inclusion from mainline-v5.13-rc7 commit 9183671af6dbf60a1219371d4ed73e23f43b49db category: bugfix issue: #I42H19 CVE: CVE-2021-33624
--------------------------------
The verifier only enumerates valid control-flow paths and skips paths that are unreachable in the non-speculative domain. And so it can miss issues under speculative execution on mispredicted branches.
For example, a type confusion has been demonstrated with the following crafted program:
// r0 = pointer to a map array entry
// r6 = pointer to readable stack slot
// r9 = scalar controlled by attacker
1: r0 = *(u64 *)(r0) // cache miss
2: if r0 != 0x0 goto line 4
3: r6 = r9
4: if r0 != 0x1 goto line 6
5: r9 = *(u8 *)(r6)
6: // leak r9
Since line 3 runs iff r0 == 0 and line 5 runs iff r0 == 1, the verifier concludes that the pointer dereference on line 5 is safe. But: if the attacker trains both the branches to fall-through, such that the following is speculatively executed ...
r6 = r9
r9 = *(u8 *)(r6) // leak r9
... then the program will dereference an attacker-controlled value and could leak its content under speculative execution via a side channel. This requires mistraining the branch predictor, which can be rather tricky, because the branches are mutually exclusive. However, such training can be done at congruent addresses in user space using different branches that are not mutually exclusive. That is, by training branches in user space ...
A: if r0 != 0x0 goto line C
B: ...
C: if r0 != 0x0 goto line D
D: ...
... such that addresses A and C collide to the same CPU branch prediction entries in the PHT (pattern history table) as those of the BPF program's lines 2 and 4, respectively. A non-privileged attacker could simply brute force such collisions in the PHT until observing the attack succeeding.
Alternative methods to mistrain the branch predictor are also possible that avoid brute forcing the collisions in the PHT. A reliable attack has been demonstrated, for example, using the following crafted program:
// r0 = pointer to a [control] map array entry
// r7 = *(u64 *)(r0 + 0), training/attack phase
// r8 = *(u64 *)(r0 + 8), oob address
// [...]
// r0 = pointer to a [data] map array entry
1: if r7 == 0x3 goto line 3
2: r8 = r0
// crafted sequence of conditional jumps to separate the conditional
// branch in line 193 from the current execution flow
3: if r0 != 0x0 goto line 5
4: if r0 == 0x0 goto exit
5: if r0 != 0x0 goto line 7
6: if r0 == 0x0 goto exit
[...]
187: if r0 != 0x0 goto line 189
188: if r0 == 0x0 goto exit
// load any slowly-loaded value (due to cache miss in phase 3) ...
189: r3 = *(u64 *)(r0 + 0x1200)
// ... and turn it into known zero for verifier, while preserving slowly-
// loaded dependency when executing:
190: r3 &= 1
191: r3 &= 2
// speculatively bypassed phase dependency
192: r7 += r3
193: if r7 == 0x3 goto exit
194: r4 = *(u8 *)(r8 + 0) // leak r4
As can be seen, in training phase (phase != 0x3), the condition in line 1 turns into false and therefore r8 with the oob address is overridden with the valid map value address, which in line 194 we can read out without issues. However, in attack phase, line 2 is skipped, and due to the cache miss in line 189 where the map value is (zeroed and later) added to the phase register, the condition in line 193 takes the fall-through path due to prior branch predictor training, where under speculation, it'll load the byte at oob address r8 (unknown scalar type at that point) which could then be leaked via side-channel.
One way to mitigate these is to 'branch off' an unreachable path, meaning, the current verification path keeps following the is_branch_taken() path and we push the other branch to the verification stack. Given this is unreachable from the non-speculative domain, this branch's vstate is explicitly marked as speculative. This is needed for two reasons: i) if this path is solely seen from speculative execution, then we later on still want the dead code elimination to kick in in order to sanitize these instructions with jmp-1s, and ii) to ensure that paths walked in the non-speculative domain are not pruned from earlier walks of paths walked in the speculative domain. Additionally, for robustness, we mark the registers which have been part of the conditional as unknown in the speculative path given there should be no assumptions made on their content.
The fix in here mitigates type confusion attacks described earlier due to i) all code paths in the BPF program being explored and ii) existing verifier logic already ensuring that given memory access instruction references one specific data structure.
An alternative to this fix that has also been looked at in this scope was to mark aux->alu_state at the jump instruction with a BPF_JMP_TAKEN state as well as direction encoding (always-goto, always-fallthrough, unknown), such that mixing of different always-* directions themselves as well as mixing of always-* with unknown directions would cause a program rejection by the verifier, e.g. programs with constructs like 'if ([...]) { x = 0; } else { x = 1; }' with subsequent 'if (x == 1) { [...] }'. For unprivileged, this would result in only single direction always-* taken paths, and unknown taken paths being allowed, such that the former could be patched from a conditional jump to an unconditional jump (ja). Compared to this approach here, it would have two downsides: i) valid programs that otherwise are not performing any pointer arithmetic, etc, would potentially be rejected/broken, and ii) we are required to turn off path pruning for unprivileged, where both can be avoided in this work through pushing the invalid branch to the verification stack.
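For reference, the mechanics of 'branching off' can be sketched as below (paraphrased; in this 4.19 backport the privileged bypass is keyed off allow_ptr_leaks rather than bypass_spec_v1, and the helper name follows the upstream commit):

	static struct bpf_verifier_state *
	sanitize_speculative_path(struct bpf_verifier_env *env,
				  const struct bpf_insn *insn,
				  u32 next_idx, u32 curr_idx)
	{
		struct bpf_verifier_state *branch;
		struct bpf_reg_state *regs;

		/* push the branch that is unreachable in the non-speculative
		 * domain, explicitly marked as speculative
		 */
		branch = push_stack(env, next_idx, curr_idx, true);
		if (branch) {
			/* make no assumptions on the registers that were
			 * part of the conditional
			 */
			regs = branch->frame[branch->curframe]->regs;
			if (BPF_SRC(insn->code) == BPF_K) {
				mark_reg_unknown(env, regs, insn->dst_reg);
			} else if (BPF_SRC(insn->code) == BPF_X) {
				mark_reg_unknown(env, regs, insn->dst_reg);
				mark_reg_unknown(env, regs, insn->src_reg);
			}
		}
		return branch;
	}

In check_cond_jmp_op(), when is_branch_taken() knows the outcome, the other branch is then pushed through this helper (for unprivileged programs) instead of being skipped entirely.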
The issue was originally discovered by Adam and Ofek, and later independently discovered and reported as a result of Benedict and Piotr's research work.
Fixes: b2157399cc98 ("bpf: prevent out-of-bounds speculation")
Reported-by: Adam Morrison
Reported-by: Ofek Kirzner
Reported-by: Benedict Schlueter
Reported-by: Piotr Krysiuk
Signed-off-by: Daniel Borkmann
Reviewed-by: John Fastabend
Reviewed-by: Benedict Schlueter
Reviewed-by: Piotr Krysiuk
Acked-by: Alexei Starovoitov
Conflicts:
	kernel/bpf/verifier.c
[yyl: bypass_spec_v1 is not introduced in kernel-4.19, use allow_ptr_leaks instead]
Signed-off-by: Yang Yingliang
Signed-off-by: He Fengqing
Reviewed-by: Kuohai Xu
Reviewed-by: Xiu Jianfeng
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
Looks good to me
From: Russell King
On 2021/7/31 11:14, Yu Changchun wrote:
From: Russell King
mainline inclusion from mainline-v5.13-rc1 commit 298a58e165e447ccfaae35fe9f651f9d7e15166f category: bugfix issue: #I42GYZ CVE: CVE-2021-32078
--------------------------------
Remove the personal server platform, as that has had an array overrun issue identified. It is believed that no one is using this code.
Signed-off-by: Russell King
Conflicts:
	arch/arm/mach-footbridge/Kconfig
	arch/arm/mach-footbridge/personal-pci.c
Signed-off-by: Yang Yingliang
Reviewed-by: Xiu Jianfeng
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 arch/arm/configs/footbridge_defconfig | 1 -
 arch/arm/mach-footbridge/Kconfig | 21 ---------
 arch/arm/mach-footbridge/Makefile | 2 -
 arch/arm/mach-footbridge/personal-pci.c | 58 -------------------------
 arch/arm/mach-footbridge/personal.c | 25 -----------
 5 files changed, 107 deletions(-)
 delete mode 100644 arch/arm/mach-footbridge/personal-pci.c
 delete mode 100644 arch/arm/mach-footbridge/personal.c
Looks good to me
From: Norbert Slusarek
On 2021/7/31 11:14, Yu Changchun wrote:
From: Norbert Slusarek
mainline inclusion from mainline-v5.13-rc7 commit 5e87ddbe3942e27e939bdc02deb8579b0cbd8ecc category: bugfix issue: #I42GZO CVE: CVE-2021-34693
--------------------------------
On 64-bit systems, struct bcm_msg_head has an added padding of 4 bytes between struct members count and ival1. Even though all struct members are initialized, the 4-byte hole will contain data from the kernel stack. This patch zeroes out struct bcm_msg_head before usage, preventing infoleaks to userspace.
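The pattern of the fix can be sketched as below (illustrative only; the real patch touches the places in net/can/bcm.c where struct bcm_msg_head lives on the stack, and the field assignments here are made up for the example):

	struct bcm_msg_head head;

	/* clear the whole struct first so the 4-byte padding hole between
	 * 'count' and 'ival1' never carries stack data to userspace
	 */
	memset(&head, 0, sizeof(head));
	head.opcode = TX_EXPIRED;
	head.flags  = op->flags;
	head.count  = op->count;
	/* ... remaining members filled in as before ... */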
Fixes: ffd980f976e7 ("[CAN]: Add broadcast manager (bcm) protocol")
Link: https://lore.kernel.org/r/trinity-7c1b2e82-e34f-4885-8060-2cd7a13769ce-16235...
Cc: linux-stable
Signed-off-by: Norbert Slusarek
Acked-by: Oliver Hartkopp
Signed-off-by: Marc Kleine-Budde
Signed-off-by: Yang Yingliang
Reviewed-by: Xiu Jianfeng
Reviewed-by: Yue Haibing
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 net/can/bcm.c | 3 +++
 1 file changed, 3 insertions(+)
Looks good to me
From: Thadeu Lima de Souza Cascardo
On 2021/7/31 11:14, Yu Changchun wrote:
From: Thadeu Lima de Souza Cascardo
mainline inclusion from mainline-v5.14-rc1 commit d5f9023fa61ee8b94f37a93f08e94b136cf1e463 category: bugfix issue: #I42H1R CVE: CVE-2021-3609
--------------------------------
can_rx_register() callbacks may be called concurrently to the call to can_rx_unregister(). The callbacks and callback data, though, are protected by RCU and the struct sock reference count.
So the callback data is really attached to the life of sk, meaning that it should be released on sk_destruct. However, bcm_remove_op() calls tasklet_kill(), and RCU callbacks may be called under RCU softirq, so that cannot be used on kernels before the introduction of HRTIMER_MODE_SOFT.
However, bcm_rx_handler() is called under RCU protection, so after calling can_rx_unregister(), we may call synchronize_rcu() in order to wait for any RCU read-side critical sections to finish. That is, bcm_rx_handler() won't be called anymore for those ops. So, we only free them, after we do that synchronize_rcu().
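The resulting ordering can be sketched as below (heavily simplified from bcm_release(); not the literal hunk):

	/* 1) stop new callbacks for this op */
	can_rx_unregister(net, dev, op->can_id, REGMASK(op->can_id),
			  bcm_rx_handler, op);
	/* ... once all rx_ops have been unregistered ... */

	/* 2) wait for any bcm_rx_handler() still running under RCU */
	synchronize_rcu();

	/* 3) only now free the ops */
	bcm_remove_op(op);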
Fixes: ffd980f976e7 ("[CAN]: Add broadcast manager (bcm) protocol")
Link: https://lore.kernel.org/r/20210619161813.2098382-1-cascardo@canonical.com
Cc: linux-stable
Reported-by: syzbot+0f7e7e5e2f4f40fa89c0@syzkaller.appspotmail.com
Reported-by: Norbert Slusarek
Signed-off-by: Thadeu Lima de Souza Cascardo
Acked-by: Oliver Hartkopp
Signed-off-by: Marc Kleine-Budde
Signed-off-by: Yang Yingliang
Reviewed-by: Xiu Jianfeng
Reviewed-by: Yue Haibing
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 net/can/bcm.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
Looks good to me
From: Mimi Zohar
On 2021/7/31 11:14, Yu Changchun wrote:
From: Mimi Zohar
stable inclusion from linux-4.19.196 commit ff660863628fb144badcb3395cde7821c82c13a6 category: bugfix issue: #I42HL9 CVE: CVE-2021-35039
--------------------------------
[ Upstream commit 0c18f29aae7ce3dadd26d8ee3505d07cc982df75 ]
Irrespective as to whether CONFIG_MODULE_SIG is configured, specifying "module.sig_enforce=1" on the boot command line sets "sig_enforce". Only allow "sig_enforce" to be set when CONFIG_MODULE_SIG is configured.
This patch makes the presence of /sys/module/module/parameters/sig_enforce dependent on CONFIG_MODULE_SIG=y.
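The intended effect can be sketched as below (paraphrased, not the exact upstream hunk):

	#ifdef CONFIG_MODULE_SIG
	static bool sig_enforce = IS_ENABLED(CONFIG_MODULE_SIG_FORCE);
	/* sysfs parameter only exists when signature support is built in */
	module_param(sig_enforce, bool_enable_only, 0644);
	#else
	#define sig_enforce false
	#endif

With that, "module.sig_enforce=1" on the command line is only honoured (and the sysfs parameter only present) when CONFIG_MODULE_SIG=y.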
Fixes: fda784e50aac ("module: export module signature enforcement status")
Reported-by: Nayna Jain
Tested-by: Mimi Zohar
Tested-by: Jessica Yu
Signed-off-by: Mimi Zohar
Signed-off-by: Jessica Yu
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 kernel/module.c | 9 +++++++++
 1 file changed, 9 insertions(+)
Looks good to me
From: Florian Westphal
On 2021/7/31 11:14, Yu Changchun wrote:
From: Florian Westphal
stable inclusion from linux-4.19.188 commit 12ec80252edefff00809d473a47e5f89c7485499 category: bugfix issue: #I42HLL CVE: CVE-2021-22555
--------------------------------
commit b29c457a6511435960115c0f548c4360d5f4801d upstream.
xt_compat_match/target_from_user doesn't check that zeroing the area up to the start of the next rule won't write past the end of the allocated ruleset blob.
Remove this code and zero the entire blob beforehand.
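The approach can be sketched as below (simplified from translate_compat_table() in the arp/ip/ip6 variants; not the literal hunk):

	entry1 = newinfo->entries;
	/* zero the whole destination blob once, so the per-entry padding
	 * memsets removed from xt_compat_{match,target}_from_user() are no
	 * longer needed and can no longer run past the blob
	 */
	memset(entry1, 0, newinfo->size);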
Reported-by: syzbot+cfc0247ac173f597aaaa@syzkaller.appspotmail.com
Reported-by: Andy Nguyen
Fixes: 9fa492cdc160c ("[NETFILTER]: x_tables: simplify compat API")
Signed-off-by: Florian Westphal
Signed-off-by: Pablo Neira Ayuso
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 net/ipv4/netfilter/arp_tables.c | 2 ++
 net/ipv4/netfilter/ip_tables.c | 2 ++
 net/ipv6/netfilter/ip6_tables.c | 2 ++
 net/netfilter/x_tables.c | 10 ++--------
 4 files changed, 8 insertions(+), 8 deletions(-)
Looks good to me
From: Russell King
On 2021/7/31 11:14, Yu Changchun wrote:
From: Russell King
stable inclusion from linux-4.19.177 commit 80ef523d2cb719c3de66787e922a96b5099d2fbb category: bugfix issue: #I42HM6 CVE: CVE-2021-21781
--------------------------------
[ Upstream commit 9c698bff66ab4914bb3d71da7dc6112519bde23e ]
Ensure that the signal page contains our poison instruction to increase the protection against ROP attacks and also contains well defined contents.
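The idea can be sketched as below (illustrative only; POISON_INSN, the loop variables and the trampoline offset are placeholders, not the exact hunk):

	/* fill the whole signal page with an undefined-instruction poison
	 * value instead of leaving it with unspecified contents ...
	 */
	for (i = 0; i < PAGE_SIZE / sizeof(u32); i++)
		ptr[i] = __opcode_to_mem_arm(POISON_INSN);

	/* ... then copy the sigreturn trampolines to their offset within
	 * the page
	 */
	memcpy(ptr + offset, sigreturn_codes, sizeof(sigreturn_codes));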
Acked-by: Will Deacon
Signed-off-by: Russell King
Signed-off-by: Sasha Levin
Signed-off-by: Yang Yingliang
Signed-off-by: Yu Changchun
---
 arch/arm/kernel/signal.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
Looks good to me.