x86/mm/cpa: Optimize cpa_flush_array() TLB invalidation

Instead of punting and doing flush_tlb_all(), do the same as
flush_tlb_kernel_range() does and use single page invalidations.
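
For illustration, the flush_tlb_kernel_range() heuristic being mirrored
is roughly the following (a paraphrased sketch, not the literal kernel
code: range_flush_sketch() is a made-up name, while
tlb_single_page_flush_ceiling, flush_tlb_all() and
__flush_tlb_one_kernel() are the real symbols involved):

  #include <asm/tlbflush.h>	/* flush_tlb_all(), __flush_tlb_one_kernel() */

  /*
   * Sketch only: below the ceiling (33 pages by default), invalidating
   * page by page is cheaper; above it, a full TLB flush wins.
   */
  static void range_flush_sketch(unsigned long start, unsigned long end)
  {
  	unsigned long addr;

  	if ((end - start) >> PAGE_SHIFT > tlb_single_page_flush_ceiling) {
  		flush_tlb_all();
  		return;
  	}

  	for (addr = start; addr < end; addr += PAGE_SIZE)
  		__flush_tlb_one_kernel(addr);
  }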

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom.StDenis@amd.com
Cc: dave.hansen@intel.com
Link: http://lkml.kernel.org/r/20181203171043.430001980@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Author:    Peter Zijlstra
Date:      2018-12-03 18:03:49 +01:00
Committer: Ingo Molnar
Parent:    5fe26b7a8f
Commit:    935f583982
3 changed files with 29 additions and 19 deletions

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -15,6 +15,8 @@
 #include <asm/apic.h>
 #include <asm/uv/uv.h>
 
+#include "mm_internal.h"
+
 /*
  * TLB flushing, formerly SMP-only
  * c/o Linus Torvalds.
@@ -721,7 +723,7 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
  *
  * This is in units of pages.
  */
-static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
+unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
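
The pageattr.c hunk that consumes this export is not shown above. As a
hedged sketch (cpa_array_flush_sketch() is a made-up name; the real
cpa_flush_array() also checks CLFLUSH support and runs the flush on all
CPUs), the new shape of the CPA array flush is roughly:

  #include <asm/tlbflush.h>	/* flush_tlb_all(), __flush_tlb_one_kernel() */
  #include "mm_internal.h"	/* tlb_single_page_flush_ceiling, per this patch */

  /*
   * Sketch only: flush an array of kernel addresses page by page,
   * punting to a full flush only past the ceiling.
   */
  static void cpa_array_flush_sketch(unsigned long *addrs, int numpages)
  {
  	int i;

  	if (numpages > tlb_single_page_flush_ceiling) {
  		flush_tlb_all();
  		return;
  	}

  	for (i = 0; i < numpages; i++)
  		__flush_tlb_one_kernel(addrs[i]);
  }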