
From: Ard Biesheuvel <ardb@kernel.org>

mainline inclusion
from mainline-5.11-rc1
commit 59d2f2827dfdccf8911d5e51465136b52ba623c4
category: feature
issue: #I3ZXZF
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...

-------------------------------------------------

Now that calling __do_fixup_smp_on_up() can be done without passing the
physical-to-virtual offset in r3, we can replace the open coded PC
relative offset calculations with a pair of adr_l invocations. This
removes some open coded arithmetic involving virtual addresses, avoids
literal pools on v7+, and slightly reduces the footprint of the code.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
(cherry picked from commit 59d2f2827dfdccf8911d5e51465136b52ba623c4)
Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Yu Changchun <yuchangchun1@huawei.com>
---
 arch/arm/kernel/head.S | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index e87e0878b8f2..7adab03a31bc 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -519,19 +519,11 @@ ARM_BE8(rev	r0, r0)			@ byteswap if big endian
 	retne	lr
 
 __fixup_smp_on_up:
-	adr	r0, 1f
-	ldmia	r0, {r3 - r5}
-	sub	r3, r0, r3
-	add	r4, r4, r3
-	add	r5, r5, r3
+	adr_l	r4, __smpalt_begin
+	adr_l	r5, __smpalt_end
 	b	__do_fixup_smp_on_up
ENDPROC(__fixup_smp)
 
-	.align
-1:	.word	.
-	.word	__smpalt_begin
-	.word	__smpalt_end
-
 	.pushsection .data
 	.align	2
 	.globl	smp_on_up
-- 
2.22.0
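
For readers who want the two addressing patterns side by side, a brief sketch
follows. The adr_l expansion shown is a simplified approximation of the
ARMv7+, non-Thumb case of the macro in arch/arm/include/asm/assembler.h (the
real macro also handles pre-v7 and Thumb2 builds); the numeric label and the
exact operand syntax are illustrative only.

	@ Old pattern (removed above): keep link-time addresses in an inline
	@ literal pool and rebase them by the pool's own runtime-minus-link-time
	@ offset.
	adr	r0, 1f			@ r0 = runtime address of the pool
	ldmia	r0, {r3 - r5}		@ r3 = link-time '.', r4/r5 = link-time bounds
	sub	r3, r0, r3		@ r3 = runtime minus link-time delta
	add	r4, r4, r3		@ r4 = runtime __smpalt_begin
	add	r5, r5, r3		@ r5 = runtime __smpalt_end
	...
	.align
1:	.word	.			@ link-time address of this word
	.word	__smpalt_begin
	.word	__smpalt_end

	@ New pattern: adr_l computes each address PC-relatively, with no
	@ literal pool. On ARMv7+ (ARM mode) it expands to roughly:
	movw	r4, #:lower16:(__smpalt_begin - (0f + 8))
	movt	r4, #:upper16:(__smpalt_begin - (0f + 8))
0:	add	r4, r4, pc		@ the +8 offsets the ARM-mode PC read-ahead bias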