[PATCH] i386: PARAVIRT: add kmap_atomic_pte for mapping highpte pages
Xen and VMI both have special requirements when mapping a highmem pte page into the kernel address space. These can be dealt with by adding a new kmap_atomic_pte() function for mapping highptes, and hooking it into the paravirt_ops infrastructure.

Xen specifically wants to map the pte page RO, so this patch exposes a helper function, kmap_atomic_prot, which maps the page with the specified page protections.

This also adds a kmap_flush_unused() function to clear out the cached kmap mappings. Xen needs this to clear out any potential stray RW mappings of pages which will become part of a pagetable.

[ Zach - vmi.c will need some attention after this patch.  It wasn't
  immediately obvious to me what needs to be done. ]

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Cc: Zachary Amsden <zach@vmware.com>
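For orientation, here is a rough sketch of the interfaces the message describes. The signatures and the paravirt_ops member shown below are inferred from the description (and from i386 highmem conventions of that era), not copied from this commit's hunks, so treat them as assumptions rather than the patch's literal contents:

/* Sketch only: signatures inferred from the commit message above,
 * not taken verbatim from this patch. */

/* Map a highmem page with caller-supplied protections; Xen can pass a
 * read-only pgprot_t when mapping a pte page. */
void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot);

/* Flush cached-but-unused kmap slots so no stray RW aliases remain. */
void kmap_flush_unused(void);

/* Per-hypervisor hook so each backend decides how a highmem pte page is
 * mapped (Xen: read-only via kmap_atomic_prot; native: plain kmap_atomic). */
struct paravirt_ops {
	/* ... existing operations ... */
	void *(*kmap_atomic_pte)(struct page *page, enum km_type type);
};

The highpte mapping macros (pte_offset_map() and friends) would then call through this hook instead of calling kmap_atomic() directly.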
committed by Andi Kleen
parent a27fe809b8
commit ce6234b529
@@ -99,6 +99,15 @@ static void flush_all_zero_pkmaps(void)
 	flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));
 }
 
+/* Flush all unused kmap mappings in order to remove stray
+   mappings. */
+void kmap_flush_unused(void)
+{
+	spin_lock(&kmap_lock);
+	flush_all_zero_pkmaps();
+	spin_unlock(&kmap_lock);
+}
+
 static inline unsigned long map_new_virtual(struct page *page)
 {
 	unsigned long vaddr;