sh: Migrate from PG_mapped to PG_dcache_dirty.

This inverts the delayed dcache flush to bring it more in line with
other platforms, and in doing so opens the door to further optimization
and cleanup. Now that the update_mmu_cache() callsite only tests for
the bit, the implementation can gradually be split out and made generic,
rather than relying on special implementations for each of the peculiar
CPU types.
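
In rough terms, the inverted scheme looks like the sketch below
(illustrative only, not the literal kernel code; __flush_dcache_page()
stands in for whatever low-level flush a given CPU provides):
flush_dcache_page() merely marks the page, and update_mmu_cache() tests
and clears the mark, flushing only when a flush is actually owed.

	/* Sketch of the inverted delayed-flush scheme (illustrative). */
	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		/* Defer: just remember this page has dirty dcache lines. */
		if (mapping && !mapping_mapped(mapping))
			set_bit(PG_dcache_dirty, &page->flags);
		else
			__flush_dcache_page(page);
	}

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		unsigned long pfn = pte_pfn(pte);
		struct page *page = pfn_to_page(pfn);

		/* Flush only if a deferred flush is still pending. */
		if (pfn_valid(pfn) && page_mapping(page) &&
		    test_and_clear_bit(PG_dcache_dirty, &page->flags))
			__flush_dcache_page(page);
	}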

SH7705 in 32kB mode and SH-4 still need slightly different handling,
but this can remain isolated in the varying page copy/clear routines
(see the sketch below). On top of that, SH-X3 is dcache coherent, so
the PTEAEX version of update_mmu_cache() needs none of these tests and
is killed off entirely.
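
As an illustration of how that isolation can look, a copy routine can
consult PG_dcache_dirty to decide whether the source must be read
through a mapping with the user's cache colour. A minimal sketch in the
spirit of the SH-4 path, assuming kmap_coherent()/kunmap_coherent()
helpers (not the literal code from this commit):

	/* Sketch: alias-aware page copy, dcache logic kept local. */
	void copy_user_highpage(struct page *to, struct page *from,
				unsigned long vaddr,
				struct vm_area_struct *vma)
	{
		void *vfrom, *vto;

		vto = kmap_atomic(to, KM_USER1);

		if (page_mapping(from) &&
		    test_bit(PG_dcache_dirty, &from->flags)) {
			/* Deferred-dirty source: read it through a
			 * mapping coloured like the user address. */
			vfrom = kmap_coherent(from, vaddr);
			copy_page(vto, vfrom);
			kunmap_coherent(vfrom);
		} else {
			vfrom = kmap_atomic(from, KM_USER0);
			copy_page(vto, vfrom);
			kunmap_atomic(vfrom, KM_USER0);
		}

		/* The destination kernel mapping may alias the user
		 * mapping it is about to appear under; write it back. */
		if (pages_do_alias((unsigned long)vto, vaddr & PAGE_MASK))
			__flush_wback_region(vto, PAGE_SIZE);

		kunmap_atomic(vto, KM_USER1);
	}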

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Paul Mundt
2009-07-22 19:20:49 +09:00
parent c0b96cf639
commit 2277ab4a1d
11 changed files with 82 additions and 143 deletions


@@ -33,25 +33,25 @@ void update_mmu_cache(struct vm_area_struct * vma,
 	unsigned long flags;
 	unsigned long pteval;
 	unsigned long vpn;
+	unsigned long pfn = pte_pfn(pte);
+	struct page *page;
 
 	/* Ptrace may call this routine. */
 	if (vma && current->active_mm != vma->vm_mm)
 		return;
 
+	page = pfn_to_page(pfn);
+	if (pfn_valid(pfn) && page_mapping(page)) {
 #if defined(CONFIG_SH7705_CACHE_32KB)
-	{
-		struct page *page = pte_page(pte);
-		unsigned long pfn = pte_pfn(pte);
+		int dirty = test_and_clear_bit(PG_dcache_dirty, &page->flags);
+		if (dirty) {
+			unsigned long addr = (unsigned long)page_address(page);
 
-		if (pfn_valid(pfn) && !test_bit(PG_mapped, &page->flags)) {
-			unsigned long phys = pte_val(pte) & PTE_PHYS_MASK;
-
-			__flush_wback_region((void *)P1SEGADDR(phys),
-					     PAGE_SIZE);
-			__set_bit(PG_mapped, &page->flags);
+			if (pages_do_alias(addr, address & PAGE_MASK))
+				__flush_wback_region((void *)addr, PAGE_SIZE);
 		}
-	}
 #endif
+	}
 
 	local_irq_save(flags);
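
For reference, the pages_do_alias() test in the hunk above asks whether
two virtual addresses fall on different dcache colours, in which case a
write through one mapping can leave stale lines invisible through the
other. Conceptually it is an XOR against the cache alias mask; the
sketch below uses a made-up example mask, whereas the kernel derives
the real one from the probed cache geometry:

	/* Illustrative only: a 16kB dcache way with 4kB pages leaves
	 * two colour bits (bits 13:12) above PAGE_SHIFT. */
	#define EXAMPLE_ALIAS_MASK	0x3000UL
	#define example_pages_do_alias(a, b) \
		(((a) ^ (b)) & EXAMPLE_ALIAS_MASK)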