
(Backport: resolve conflicts due to missing f4693c2716b35 and also drop
in_bpf_jit from fixup_exception the same way the 5.15 backport
9c82ce593626 does it.)

Commit 91fc957c9b ("arm64/bpf: don't allocate BPF JIT programs in module
memory") restricts BPF JIT program allocation to a 128MB region to ensure
BPF programs are still in branching range of each other. However, this
restriction should not apply to the aarch64 JIT, since BPF_JMP | BPF_CALL
is implemented as a 64-bit move into a register followed by a BLR
instruction - which has the effect of being able to call anything without
proximity limitation.

The practical reason to relax this restriction on JIT memory is that
128MB of JIT memory can be quickly exhausted, especially where PAGE_SIZE
is 64KB - one page is needed per program. In cases where seccomp filters
are applied to multiple VMs on VM launch - such filters are classic BPF
but converted to eBPF - this can severely limit the number of VMs that
can be launched. In a world where we support BPF JIT always-on, turning
off the JIT isn't always an option either.

Fixes: 91fc957c9b ("arm64/bpf: don't allocate BPF JIT programs in module memory")
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <russell.king@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/1636131046-5982-2-git-send-email-alan.maguire@oracle.com
(cherry picked from commit b89ddf4cca43f1269093942cf5c4e457fd45c335)
Bug: 252919296
Change-Id: Iec7d0b2bba001df94c2e21fcd5883ff002111cd5
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_EXTABLE_H
#define __ASM_EXTABLE_H

/*
 * The exception table consists of pairs of relative offsets: the first
 * is the relative offset to an instruction that is allowed to fault,
 * and the second is the relative offset at which the program should
 * continue. No registers are modified, so it is entirely up to the
 * continuation code to figure out what to do.
 *
 * All the routines below use bits of fixup code that are out of line
 * with the main instruction path. This means when everything is well,
 * we don't even have to jump over them. Further, they do not intrude
 * on our cache or tlb entries.
 */

struct exception_table_entry
{
	int insn, fixup;
};

#define ARCH_HAS_RELATIVE_EXTABLE

#ifdef CONFIG_BPF_JIT
int arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
			      struct pt_regs *regs);
#else /* !CONFIG_BPF_JIT */
static inline
int arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
			      struct pt_regs *regs)
{
	return 0;
}
#endif /* !CONFIG_BPF_JIT */

extern int fixup_exception(struct pt_regs *regs);
#endif
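To make the "pairs of relative offsets" comment in the header concrete,
here is a small self-contained userspace sketch of how such an entry is
decoded. The helper names mirror ex_to_insn()/ex_fixup_addr() from
lib/extable.c, but the entry values and the main() harness are invented
for illustration.

#include <stdio.h>

/* Same layout as the kernel's relative extable entry above. */
struct exception_table_entry {
	int insn, fixup;
};

/*
 * Each field stores an offset relative to its own address, so the
 * absolute address is recovered by adding the field's value to the
 * field's own location.
 */
static unsigned long ex_to_insn(const struct exception_table_entry *x)
{
	return (unsigned long)&x->insn + x->insn;
}

static unsigned long ex_to_fixup(const struct exception_table_entry *x)
{
	return (unsigned long)&x->fixup + x->fixup;
}

int main(void)
{
	/* Hypothetical entry: the faulting insn sits 0x100 bytes before
	 * the insn field, the fixup 0x80 bytes after the fixup field. */
	struct exception_table_entry ex = { .insn = -0x100, .fixup = 0x80 };

	printf("insn  @ %#lx\n", ex_to_insn(&ex));
	printf("fixup @ %#lx\n", ex_to_fixup(&ex));
	return 0;
}

Storing 32-bit self-relative offsets instead of two 64-bit pointers
halves the table size and keeps the entries position-independent, which
is why the header defines ARCH_HAS_RELATIVE_EXTABLE.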