x86/stackprotector/32: Make the canary into a regular percpu variable
[ Upstream commit 3fb0fdb3bbe7aed495109b3296b06c2409734023 ]

On 32-bit kernels, the stackprotector canary is quite nasty -- it is
stored at %gs:(20), which is nasty because 32-bit kernels use %fs for
percpu storage. It's even nastier because it means that whether %gs
contains userspace state or kernel state while running kernel code
depends on whether stackprotector is enabled (this is
CONFIG_X86_32_LAZY_GS), and this setting radically changes the way
that segment selectors work. Supporting both variants is a
maintenance and testing mess.

Merely rearranging so that percpu and the stack canary share the same
segment would be messy as the 32-bit percpu address layout isn't
currently compatible with putting a variable at a fixed offset.

Fortunately, GCC 8.1 added options that allow the stack canary to be
accessed as %fs:__stack_chk_guard, effectively turning it into an
ordinary percpu variable. This lets us get rid of all of the code to
manage the stack canary GDT descriptor and the CONFIG_X86_32_LAZY_GS
mess.

(That name is special. We could use any symbol we want for the
%fs-relative mode, but for CONFIG_SMP=n, gcc refuses to let us use any
name other than __stack_chk_guard.)

Forcibly disable stackprotector on older compilers that don't support
the new options and turn the stack canary into a percpu variable. The
"lazy GS" approach is now used for all 32-bit configurations.

This also makes load_gs_index() work on 32-bit kernels. On 64-bit
kernels, it loads the GS selector and updates the user GSBASE
accordingly. (This is unchanged.) On 32-bit kernels, it loads the GS
selector and updates GSBASE, which is now always the user base. This
means that the overall effect is the same on 32-bit and 64-bit, which
avoids some ifdeffery.

[ bp: Massage commit message. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/c0ff7dba14041c7e5d1cae5d4df052f03759bef3.1613243844.git.luto@kernel.org
Stable-dep-of: e3f269ed0acc ("x86/pm: Work around false positive kmemleak report in msr_build_context()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
committed by Greg Kroah-Hartman
parent 6d22547437
commit f594871732
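For context, the heart of the change is that GCC 8.1's -mstack-protector-guard-reg= and -mstack-protector-guard-symbol= options stop the compiler from hardcoding the canary slot, so the guard can be an ordinary %fs-relative percpu symbol. The following is a minimal illustrative sketch, not part of this patch: the DEFINE_PER_CPU() line mirrors the DECLARE_PER_CPU() added in the hunk below, and the assembly in the comment is only conceptual compiler output.

/*
 * Illustrative sketch (not taken from this diff): build the 32-bit
 * kernel with
 *
 *     -mstack-protector-guard-reg=fs
 *     -mstack-protector-guard-symbol=__stack_chk_guard
 *
 * and GCC references the canary as a plain %fs-relative symbol.
 * Conceptually, a stack-protected function then looks like:
 *
 *     movl %fs:__stack_chk_guard, %eax   # prologue: load canary
 *     movl %eax, -4(%ebp)                # stash it in the frame
 *     ...
 *     movl -4(%ebp), %eax                # epilogue: re-check it
 *     xorl %fs:__stack_chk_guard, %eax
 *     je   1f
 *     call __stack_chk_fail
 * 1:
 *
 * instead of the old hardcoded "%gs:20", which required a dedicated
 * GDT segment plus the CONFIG_X86_32_LAZY_GS machinery.
 */
#include <linux/percpu.h>

/* The guard itself: just a percpu variable, as declared in the hunk below. */
DEFINE_PER_CPU(unsigned long, __stack_chk_guard);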
@@ -441,6 +441,9 @@ struct fixed_percpu_data {
 	 * GCC hardcodes the stack canary as %gs:40.  Since the
 	 * irq_stack is the object at %gs:0, we reserve the bottom
 	 * 48 bytes of the irq stack for the canary.
+	 *
+	 * Once we are willing to require -mstack-protector-guard-symbol=
+	 * support for x86_64 stackprotector, we can get rid of this.
 	 */
 	char		gs_base[40];
 	unsigned long	stack_canary;
@@ -461,17 +464,7 @@ extern asmlinkage void ignore_sysret(void);
 void current_save_fsgs(void);
 #else	/* X86_64 */
-#ifdef CONFIG_STACKPROTECTOR
-/*
- * Make sure stack canary segment base is cached-aligned:
- *   "For Intel Atom processors, avoid non zero segment base address
- *    that is not aligned to cache line boundary at all cost."
- * (Optim Ref Manual Assembly/Compiler Coding Rule 15.)
- */
-struct stack_canary {
-	char __pad[20];		/* canary at %gs:20 */
-	unsigned long canary;
-};
-DECLARE_PER_CPU_ALIGNED(struct stack_canary, stack_canary);
-#endif
+DECLARE_PER_CPU(unsigned long, __stack_chk_guard);
+
 /* Per CPU softirq stack pointer */
 DECLARE_PER_CPU(struct irq_stack *, softirq_stack_ptr);
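Once the guard is a plain percpu variable, per-CPU initialization needs no segment games at all; it is just a this_cpu_write(). The sketch below is modeled on the kernel's boot_init_stack_canary() and is only illustrative of how the 32-bit path can seed the variable; the exact body in this backport may differ.

/* Rough sketch, modeled on boot_init_stack_canary(); not copied from this diff. */
#include <linux/random.h>	/* get_random_bytes() */
#include <linux/sched.h>	/* current */
#include <asm/tsc.h>		/* rdtsc() */
#include <asm/percpu.h>		/* this_cpu_write() */

static __always_inline void boot_init_stack_canary(void)
{
	u64 canary;
	u64 tsc;

	/* Mix random pool bytes with the TSC for early-boot entropy. */
	get_random_bytes(&canary, sizeof(canary));
	tsc = rdtsc();
	canary += tsc + (tsc << 32UL);
	canary &= CANARY_MASK;	/* CANARY_MASK comes from asm/stackprotector.h */

	current->stack_canary = canary;
#ifdef CONFIG_X86_64
	/* 64-bit still uses the fixed %gs:40 slot inside fixed_percpu_data. */
	this_cpu_write(fixed_percpu_data.stack_canary, canary);
#else
	/* 32-bit: the canary is now just the __stack_chk_guard percpu variable. */
	this_cpu_write(__stack_chk_guard, canary);
#endif
}

Because every access goes through ordinary percpu accessors, no per-CPU GDT entry for the canary has to be maintained anymore, which is exactly the code the commit message says it removes.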