FROMLIST: arm64: mte: optimize GCR_EL1 modification on kernel entry/exit

Accessing GCR_EL1 and issuing an ISB can be expensive on some
microarchitectures. Although we must write to GCR_EL1, we can
restructure the code to avoid reading from it because the new value
can be derived entirely from the exclusion mask, which is already in
a GPR. Do so.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75
Change-Id: If73813daf8a24209b7582ae7b5e9a2a30004b086
Bug: 192536783
Link: https://lore.kernel.org/linux-arm-kernel/20210714013638.3995315-1-pcc@google.com/T/

--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -183,15 +183,11 @@ alternative_else_nop_endif
 #endif
 	.endm
 
-	.macro mte_set_gcr, tmp, tmp2
+	.macro mte_set_gcr, mte_ctrl, tmp
 #ifdef CONFIG_ARM64_MTE
-	/*
-	 * Calculate and set the exclude mask preserving
-	 * the RRND (bit[16]) setting.
-	 */
-	mrs_s	\tmp2, SYS_GCR_EL1
-	bfxil	\tmp2, \tmp, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
-	msr_s	SYS_GCR_EL1, \tmp2
+	ubfx	\tmp, \mte_ctrl, #MTE_CTRL_GCR_USER_EXCL_SHIFT, #16
+	orr	\tmp, \tmp, #SYS_GCR_EL1_RRND
+	msr_s	SYS_GCR_EL1, \tmp
 #endif
 	.endm
 