Merge d04937ae94 ("x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT") into android12-5.10-lts

Steps on the way to 5.10.105

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I76951de21f6efca47dab5f20ad20d588f46729d0
Greg Kroah-Hartman
2022-03-14 19:56:29 +01:00
8 changed files with 222 additions and 80 deletions

View File

@@ -60,8 +60,8 @@ privileged data touched during the speculative execution.
 Spectre variant 1 attacks take advantage of speculative execution of
 conditional branches, while Spectre variant 2 attacks use speculative
 execution of indirect branches to leak privileged memory.
-See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
-:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
+:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
 
 Spectre variant 1 (Bounds Check Bypass)
 ---------------------------------------
@@ -131,6 +131,19 @@ steer its indirect branch speculations to gadget code, and measure the
 speculative execution's side effects left in level 1 cache to infer the
 victim's data.
 
+Yet another variant 2 attack vector is for the attacker to poison the
+Branch History Buffer (BHB) to speculatively steer an indirect branch
+to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
+associated with the source address of the indirect branch. Specifically,
+the BHB might be shared across privilege levels even in the presence of
+Enhanced IBRS.
+
+Currently the only known real-world BHB attack vector is via
+unprivileged eBPF. Therefore, it's highly recommended to not enable
+unprivileged eBPF, especially when eIBRS is used (without retpolines).
+For a full mitigation against BHB attacks, it's recommended to use
+retpolines (or eIBRS combined with retpolines).
+
 Attack scenarios
 ----------------
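Since the guidance above turns on whether unprivileged eBPF is enabled, here is a minimal user-space sketch (illustrative only, not part of this commit) that checks the standard kernel.unprivileged_bpf_disabled sysctl:

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/kernel/unprivileged_bpf_disabled", "r");
        int val;

        if (!f || fscanf(f, "%d", &val) != 1) {
                perror("unprivileged_bpf_disabled");
                return 1;
        }
        fclose(f);

        /* 0: enabled (the risky case described above), 1: disabled,
         * 2: disabled and locked until reboot
         */
        printf("unprivileged eBPF is %s\n", val ? "disabled" : "enabled");
        return 0;
}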
@@ -364,13 +377,15 @@ The possible values in this file are:
 
   - Kernel status:
 
-  ====================================  =================================
-  'Not affected'                        The processor is not vulnerable
-  'Vulnerable'                          Vulnerable, no mitigation
-  'Mitigation: Full generic retpoline'  Software-focused mitigation
-  'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
-  'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
-  ====================================  =================================
+  ========================================  =================================
+  'Not affected'                            The processor is not vulnerable
+  'Mitigation: None'                        Vulnerable, no mitigation
+  'Mitigation: Retpolines'                  Use Retpoline thunks
+  'Mitigation: LFENCE'                      Use LFENCE instructions
+  'Mitigation: Enhanced IBRS'               Hardware-focused mitigation
+  'Mitigation: Enhanced IBRS + Retpolines'  Hardware-focused + Retpolines
+  'Mitigation: Enhanced IBRS + LFENCE'      Hardware-focused + LFENCE
+  ========================================  =================================
 
   - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
     used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
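The strings in the new table are what the kernel exposes through sysfs; a small sketch (assuming the standard x86 sysfs layout) that reads the current status:

#include <stdio.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

        if (!f || !fgets(line, sizeof(line), f)) {
                perror("spectre_v2");
                return 1;
        }
        fclose(f);

        fputs(line, stdout);    /* e.g. "Mitigation: Enhanced IBRS + Retpolines" */
        return 0;
}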
@@ -584,12 +599,13 @@ kernel command line.
 
         Specific mitigations can also be selected manually:
 
-                retpoline
-                        replace indirect branches
-                retpoline,generic
-                        google's original retpoline
-                retpoline,amd
-                        AMD-specific minimal thunk
+                retpoline               auto pick between generic,lfence
+                retpoline,generic       Retpolines
+                retpoline,lfence        LFENCE; indirect branch
+                retpoline,amd           alias for retpoline,lfence
+                eibrs                   enhanced IBRS
+                eibrs,retpoline         enhanced IBRS + Retpolines
+                eibrs,lfence            enhanced IBRS + LFENCE
 
         Not specifying this option is equivalent to
         spectre_v2=auto.
@@ -730,7 +746,7 @@ AMD white papers:
 
 .. _spec_ref6:
 
-[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
+[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.
 
 ARM white papers:

View File

@@ -5048,8 +5048,12 @@
 			Specific mitigations can also be selected manually:
 
 			retpoline	  - replace indirect branches
-			retpoline,generic - google's original retpoline
-			retpoline,amd     - AMD-specific minimal thunk
+			retpoline,generic - Retpolines
+			retpoline,lfence  - LFENCE; indirect branch
+			retpoline,amd     - alias for retpoline,lfence
+			eibrs             - enhanced IBRS
+			eibrs,retpoline   - enhanced IBRS + Retpolines
+			eibrs,lfence      - enhanced IBRS + LFENCE
 
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
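As an illustration, one of the new combined modes above can be requested at boot (assuming the CPU reports eIBRS; otherwise the parser in the bugs.c changes below falls back to AUTO):

        spectre_v2=eibrs,retpoline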

View File

@@ -204,7 +204,7 @@
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
 #define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */

View File

@@ -82,7 +82,7 @@
 #ifdef CONFIG_RETPOLINE
 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \
 		      __stringify(jmp __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_AMD
+		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE
 #else
 	jmp	*%\reg
 #endif
@@ -92,7 +92,7 @@
 #ifdef CONFIG_RETPOLINE
 	ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \
 		      __stringify(call __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
-		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_AMD
+		      __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE
 #else
 	call	*%\reg
 #endif
@@ -134,7 +134,7 @@
 	"lfence;\n"						\
 	ANNOTATE_RETPOLINE_SAFE					\
 	"call *%[thunk_target]\n",				\
-	X86_FEATURE_RETPOLINE_AMD)
+	X86_FEATURE_RETPOLINE_LFENCE)
 
 # define THUNK_TARGET(addr) [thunk_target] "r" (addr)
@@ -164,7 +164,7 @@
 	"lfence;\n"						\
 	ANNOTATE_RETPOLINE_SAFE					\
 	"call *%[thunk_target]\n",				\
-	X86_FEATURE_RETPOLINE_AMD)
+	X86_FEATURE_RETPOLINE_LFENCE)
 
 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
 #endif
@@ -176,9 +176,11 @@
 /* The Spectre V2 mitigation variants */
 enum spectre_v2_mitigation {
 	SPECTRE_V2_NONE,
-	SPECTRE_V2_RETPOLINE_GENERIC,
-	SPECTRE_V2_RETPOLINE_AMD,
-	SPECTRE_V2_IBRS_ENHANCED,
+	SPECTRE_V2_RETPOLINE,
+	SPECTRE_V2_LFENCE,
+	SPECTRE_V2_EIBRS,
+	SPECTRE_V2_EIBRS_RETPOLINE,
+	SPECTRE_V2_EIBRS_LFENCE,
 };
 
 /* The indirect branch speculation control variants */
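Conceptually, each ALTERNATIVE_2 site above leaves exactly one of three sequences live after boot-time patching. A C-style sketch of the effective selection (illustration only; cpu_has() stands in for the alternatives feature test, and the real mechanism patches the call site in place):

/* Later-listed features win in ALTERNATIVE_2, so when bugs.c sets both
 * the RETPOLINE and RETPOLINE_LFENCE caps (the LFENCE modes), the
 * LFENCE sequence is what ends up patched in.
 */
if (cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
        /* lfence; call *%reg  - serialize, then make the indirect call */
} else if (cpu_has(X86_FEATURE_RETPOLINE)) {
        /* call __x86_retpoline_%reg  - indirect call via retpoline thunk */
} else {
        /* call *%reg  - plain indirect call, no mitigation */
}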

View File

@@ -16,6 +16,7 @@
 #include <linux/prctl.h>
 #include <linux/sched/smt.h>
 #include <linux/pgtable.h>
+#include <linux/bpf.h>
 
 #include <asm/spec-ctrl.h>
 #include <asm/cmdline.h>
@@ -613,6 +614,32 @@ static inline const char *spectre_v2_module_string(void)
 static inline const char *spectre_v2_module_string(void) { return ""; }
 #endif
 
+#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
+#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
+#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
+
+#ifdef CONFIG_BPF_SYSCALL
+void unpriv_ebpf_notify(int new_state)
+{
+	if (new_state)
+		return;
+
+	/* Unprivileged eBPF is enabled */
+	switch (spectre_v2_enabled) {
+	case SPECTRE_V2_EIBRS:
+		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
+		break;
+	case SPECTRE_V2_EIBRS_LFENCE:
+		if (sched_smt_active())
+			pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+		break;
+	default:
+		break;
+	}
+}
+#endif
+
 static inline bool match_option(const char *arg, int arglen, const char *opt)
 {
 	int len = strlen(opt);
@@ -627,7 +654,10 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_FORCE,
 	SPECTRE_V2_CMD_RETPOLINE,
 	SPECTRE_V2_CMD_RETPOLINE_GENERIC,
-	SPECTRE_V2_CMD_RETPOLINE_AMD,
+	SPECTRE_V2_CMD_RETPOLINE_LFENCE,
+	SPECTRE_V2_CMD_EIBRS,
+	SPECTRE_V2_CMD_EIBRS_RETPOLINE,
+	SPECTRE_V2_CMD_EIBRS_LFENCE,
 };
 
 enum spectre_v2_user_cmd {
@@ -700,6 +730,13 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
 	return SPECTRE_V2_USER_CMD_AUTO;
 }
 
+static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+{
+	return (mode == SPECTRE_V2_EIBRS ||
+		mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+		mode == SPECTRE_V2_EIBRS_LFENCE);
+}
+
 static void __init
 spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 {
@@ -767,7 +804,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
 	 */
 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
 	    !smt_possible ||
-	    spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
 		return;
 
 	/*
@@ -787,9 +824,11 @@ set_mode:
 
 static const char * const spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
-	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
-	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
-	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+	[SPECTRE_V2_RETPOLINE]			= "Mitigation: Retpolines",
+	[SPECTRE_V2_LFENCE]			= "Mitigation: LFENCE",
+	[SPECTRE_V2_EIBRS]			= "Mitigation: Enhanced IBRS",
+	[SPECTRE_V2_EIBRS_LFENCE]		= "Mitigation: Enhanced IBRS + LFENCE",
+	[SPECTRE_V2_EIBRS_RETPOLINE]		= "Mitigation: Enhanced IBRS + Retpolines",
 };
 
 static const struct {
@@ -800,8 +839,12 @@ static const struct {
 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
-	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_AMD,	  false },
+	{ "retpoline,amd",	SPECTRE_V2_CMD_RETPOLINE_LFENCE,  false },
+	{ "retpoline,lfence",	SPECTRE_V2_CMD_RETPOLINE_LFENCE,  false },
 	{ "retpoline,generic",	SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
+	{ "eibrs",		SPECTRE_V2_CMD_EIBRS,		  false },
+	{ "eibrs,lfence",	SPECTRE_V2_CMD_EIBRS_LFENCE,	  false },
+	{ "eibrs,retpoline",	SPECTRE_V2_CMD_EIBRS_RETPOLINE,	  false },
 	{ "auto",		SPECTRE_V2_CMD_AUTO,		  false },
 };
 
@@ -838,17 +881,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 	}
 
 	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
-	     cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
-	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
+	     cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
+	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
+	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
+	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
	    !IS_ENABLED(CONFIG_RETPOLINE)) {
-		pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
+		pr_err("%s selected but not compiled in. Switching to AUTO select\n",
+		       mitigation_options[i].option);
 		return SPECTRE_V2_CMD_AUTO;
 	}
 
-	if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD &&
-	    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON &&
-	    boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
-		pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
+	if ((cmd == SPECTRE_V2_CMD_EIBRS ||
+	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
+	     cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
+	    !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
+		pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+		       mitigation_options[i].option);
+		return SPECTRE_V2_CMD_AUTO;
+	}
+
+	if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
+	     cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
+	    !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
+		pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
+		       mitigation_options[i].option);
 		return SPECTRE_V2_CMD_AUTO;
 	}
 
@@ -857,6 +913,16 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 	return cmd;
 }
 
+static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
+{
+	if (!IS_ENABLED(CONFIG_RETPOLINE)) {
+		pr_err("Kernel not compiled with retpoline; no mitigation available!");
+		return SPECTRE_V2_NONE;
+	}
+
+	return SPECTRE_V2_RETPOLINE;
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -877,49 +943,64 @@ static void __init spectre_v2_select_mitigation(void)
 	case SPECTRE_V2_CMD_FORCE:
 	case SPECTRE_V2_CMD_AUTO:
 		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
-			mode = SPECTRE_V2_IBRS_ENHANCED;
-
-			/* Force it so VMEXIT will restore correctly */
-			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
-			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
-			goto specv2_set_mode;
+			mode = SPECTRE_V2_EIBRS;
+			break;
 		}
-		if (IS_ENABLED(CONFIG_RETPOLINE))
-			goto retpoline_auto;
+
+		mode = spectre_v2_select_retpoline();
 		break;
-	case SPECTRE_V2_CMD_RETPOLINE_AMD:
-		if (IS_ENABLED(CONFIG_RETPOLINE))
-			goto retpoline_amd;
+
+	case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
+		pr_err(SPECTRE_V2_LFENCE_MSG);
+		mode = SPECTRE_V2_LFENCE;
 		break;
+
 	case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
-		if (IS_ENABLED(CONFIG_RETPOLINE))
-			goto retpoline_generic;
+		mode = SPECTRE_V2_RETPOLINE;
 		break;
+
 	case SPECTRE_V2_CMD_RETPOLINE:
-		if (IS_ENABLED(CONFIG_RETPOLINE))
-			goto retpoline_auto;
+		mode = spectre_v2_select_retpoline();
+		break;
+
+	case SPECTRE_V2_CMD_EIBRS:
+		mode = SPECTRE_V2_EIBRS;
+		break;
+
+	case SPECTRE_V2_CMD_EIBRS_LFENCE:
+		mode = SPECTRE_V2_EIBRS_LFENCE;
+		break;
+
+	case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
+		mode = SPECTRE_V2_EIBRS_RETPOLINE;
 		break;
 	}
 
-	pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
-	return;
+	if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+		pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
 
-retpoline_auto:
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
-	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
-	retpoline_amd:
-		if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
-			pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
-			goto retpoline_generic;
-		}
-		mode = SPECTRE_V2_RETPOLINE_AMD;
-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
-	} else {
-	retpoline_generic:
-		mode = SPECTRE_V2_RETPOLINE_GENERIC;
-		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+	if (spectre_v2_in_eibrs_mode(mode)) {
+		/* Force it so VMEXIT will restore correctly */
+		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
 	}
 
-specv2_set_mode:
+	switch (mode) {
+	case SPECTRE_V2_NONE:
+	case SPECTRE_V2_EIBRS:
+		break;
+
+	case SPECTRE_V2_LFENCE:
+	case SPECTRE_V2_EIBRS_LFENCE:
+		setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
+		fallthrough;
+
+	case SPECTRE_V2_RETPOLINE:
+	case SPECTRE_V2_EIBRS_RETPOLINE:
+		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+		break;
+	}
+
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
@@ -945,7 +1026,7 @@ specv2_set_mode:
 	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
 	 * enable IBRS around firmware calls.
 	 */
-	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
+	if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
@@ -1015,6 +1096,10 @@ void cpu_bugs_smt_update(void)
 {
 	mutex_lock(&spec_ctrl_mutex);
 
+	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+		pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+
 	switch (spectre_v2_user_stibp) {
 	case SPECTRE_V2_USER_NONE:
 		break;
@@ -1621,7 +1706,7 @@ static ssize_t tsx_async_abort_show_state(char *buf)
 
 static char *stibp_state(void)
 {
-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
 		return "";
 
 	switch (spectre_v2_user_stibp) {
@@ -1651,6 +1736,27 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static ssize_t spectre_v2_show_state(char *buf)
+{
+	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
+		return sprintf(buf, "Vulnerable: LFENCE\n");
+
+	if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+		return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
+
+	if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
+
+	return sprintf(buf, "%s%s%s%s%s%s\n",
+		       spectre_v2_strings[spectre_v2_enabled],
+		       ibpb_state(),
+		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+		       stibp_state(),
+		       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+		       spectre_v2_module_string());
+}
+
 static ssize_t srbds_show_state(char *buf)
 {
 	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
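With spectre_v2_show_state() in place, degraded configurations are reported explicitly instead of as a bare mitigation string. For example, on an eIBRS system where unprivileged eBPF has been re-enabled, reading /sys/devices/system/cpu/vulnerabilities/spectre_v2 yields:

        Vulnerable: eIBRS with unprivileged eBPF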
@@ -1676,12 +1782,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
 
 	case X86_BUG_SPECTRE_V2:
-		return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       ibpb_state(),
-			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
-			       stibp_state(),
-			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
-			       spectre_v2_module_string());
+		return spectre_v2_show_state(buf);
 
 	case X86_BUG_SPEC_STORE_BYPASS:
 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);

View File

@@ -1500,6 +1500,12 @@ struct bpf_prog *bpf_prog_by_id(u32 id);
 struct bpf_link *bpf_link_by_id(u32 id);
 
 const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
+
+static inline bool unprivileged_ebpf_enabled(void)
+{
+	return !sysctl_unprivileged_bpf_disabled;
+}
+
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
@@ -1694,6 +1700,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
 {
 	return NULL;
 }
+
+static inline bool unprivileged_ebpf_enabled(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_BPF_SYSCALL */
 
 static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,

View File

@@ -237,6 +237,10 @@ static int bpf_stats_handler(struct ctl_table *table, int write,
 	return ret;
 }
 
+void __weak unpriv_ebpf_notify(int new_state)
+{
+}
+
 static int bpf_unpriv_handler(struct ctl_table *table, int write,
 			      void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -254,6 +258,9 @@ static int bpf_unpriv_handler(struct ctl_table *table, int write,
 			return -EPERM;
 		*(int *)table->data = unpriv_enable;
 	}
+
+	unpriv_ebpf_notify(unpriv_enable);
+
 	return ret;
 }
 
 #endif /* CONFIG_BPF_SYSCALL && CONFIG_SYSCTL */
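The __weak stub and the notify call above are the glue between the sysctl write path and the Spectre v2 reporting in bugs.c. A user-space sketch (illustrative, needs root) of the write that exercises it:

/* Re-enable unprivileged eBPF. On an eIBRS machine this write reaches
 * bpf_unpriv_handler() above, which calls unpriv_ebpf_notify(0), and
 * the bugs.c override then logs SPECTRE_V2_EIBRS_EBPF_MSG to dmesg.
 * The write fails with EPERM if the sysctl was previously locked to 2.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/proc/sys/kernel/unprivileged_bpf_disabled", O_WRONLY);

        if (fd < 0 || write(fd, "0\n", 2) != 2) {
                perror("unprivileged_bpf_disabled");
                return 1;
        }
        close(fd);
        return 0;
}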

View File

@@ -204,7 +204,7 @@
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
 #define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_AMD	( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */