Commit Graph

7 Commits

Author SHA1 Message Date
Linus Torvalds
80b29b6b8c Merge tag 'csky-for-linus-5.4-rc1' of git://github.com/c-sky/csky-linux
Pull csky updates from Guo Ren:
 "This round of csky subsystem just some fixups:

   - Fix mb() synchronization problem

   - Fix dma_alloc_coherent with PAGE_SO attribute

   - Fix cache_op failed when cross memory ZONEs

   - Optimize arch_sync_dma_for_cpu/device with dma_inv_range

   - Fix missing ioremap function

   - Fix arch_get_unmapped_area() implementation

   - Fix defer cache flush for 610

   - Support kernel non-aligned access

   - Fix 610 vipt cache flush mechanism

   - Fix perf backtrace panic by adding the zero_fp fixup

   - Move static keyword to the front of declaration

   - Fix csky_pmu.max_period assignment

   - Use generic free_initrd_mem()

   - entry: Remove unneeded need_resched() loop"

* tag 'csky-for-linus-5.4-rc1' of git://github.com/c-sky/csky-linux:
  csky: Move static keyword to the front of declaration
  csky: entry: Remove unneeded need_resched() loop
  csky: Fixup csky_pmu.max_period assignment
  csky: Fixup add zero_fp fixup perf backtrace panic
  csky: Use generic free_initrd_mem()
  csky: Fixup 610 vipt cache flush mechanism
  csky: Support kernel non-aligned access
  csky: Fixup defer cache flush for 610
  csky: Fixup arch_get_unmapped_area() implementation
  csky: Fixup ioremap function losing
  csky: Optimize arch_sync_dma_for_cpu/device with dma_inv_range
  csky/dma: Fixup cache_op failed when cross memory ZONEs
  csky: Fixup dma_alloc_coherent with PAGE_SO attribute
  csky: Fixup mb() synchronization problem
2019-09-30 10:16:17 -07:00
Christoph Hellwig
8e3a68fb55 dma-mapping: make dma_atomic_pool_init self-contained
The memory allocated for the atomic pool needs to have the same
mapping attributes that we use for remapping, so use
pgprot_dmacoherent instead of open-coding it.  Also deduce a
suitable zone to allocate the memory from, based on the presence
of the DMA zones.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-08-29 16:43:33 +02:00
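
A minimal sketch of the two ideas in the message above, assuming the
5.4-era kernel/dma/remap.c context; the helper name below is
hypothetical and this is not the actual patch:

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    /*
     * Sketch only: pick the pool's zone flags from whichever DMA zones
     * this configuration actually provides.
     */
    static gfp_t atomic_pool_gfp(void)
    {
            if (IS_ENABLED(CONFIG_ZONE_DMA))
                    return GFP_KERNEL | GFP_DMA;
            if (IS_ENABLED(CONFIG_ZONE_DMA32))
                    return GFP_KERNEL | GFP_DMA32;
            return GFP_KERNEL;
    }

    /*
     * And when remapping the pool, reuse the attributes used for every
     * other non-coherent DMA remap instead of an open-coded pgprot:
     *
     *      pgprot_t prot = pgprot_dmacoherent(PAGE_KERNEL);
     */
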
Guo Ren
ae76f635d4 csky: Optimize arch_sync_dma_for_cpu/device with dma_inv_range
DMA_FROM_DEVICE only needs to read the DMA data from memory into the
CPU cache, so there is no need to clean the cache beforehand. A clean +
invalidate for DMA_FROM_DEVICE also won't cause a problem, because the
memory range used for DMA is not touched by software while the DMA is
in progress.

Changes for V2:
 - Remove the cache clean and ignore DMA_TO_DEVICE in _for_cpu.
 - Change inv to wbinv cache for DMA_FROM_DEVICE in _for_device.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Arnd Bergmann <arnd@arndb.de>
2019-08-06 15:15:34 +08:00
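
A sketch of the shape described above, using the 5.4-era prototypes
(which still take a struct device argument); cache_op() and the
dma_wb_range/dma_wbinv_range/dma_inv_range primitives are assumed to be
csky's own helpers (cache_op() applies a cache primitive to a physical
range, see the sketch under the next entry):

    #include <linux/dma-noncoherent.h>
    #include <linux/bug.h>
    #include <asm/cache.h>

    void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                                  size_t size, enum dma_data_direction dir)
    {
            switch (dir) {
            case DMA_TO_DEVICE:
                    cache_op(paddr, size, dma_wb_range);    /* clean only */
                    break;
            case DMA_FROM_DEVICE:
            case DMA_BIDIRECTIONAL:
                    cache_op(paddr, size, dma_wbinv_range); /* clean + invalidate */
                    break;
            default:
                    BUG();
            }
    }

    void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                               size_t size, enum dma_data_direction dir)
    {
            switch (dir) {
            case DMA_TO_DEVICE:
                    return;                                 /* nothing to do */
            case DMA_FROM_DEVICE:
            case DMA_BIDIRECTIONAL:
                    cache_op(paddr, size, dma_inv_range);   /* invalidate only */
                    break;
            default:
                    BUG();
            }
    }
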
Guo Ren
4af9027d3f csky/dma: Fixup cache_op failed when cross memory ZONEs
If the paddr and size span both the NORMAL_ZONE and HIGHMEM_ZONE
memory ranges, cache_op will panic in do_page_fault with bad_area.

Rework the code to support ranges that cross memory ZONEs.

Changes for V2:
 - Revert back to postcore_initcall

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
2019-08-06 15:12:59 +08:00
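
One way to read the fix, sketched under the assumption that cache_op()
walks the physical range page by page and maps highmem pages with
kmap_atomic(), so the cache primitive only ever runs on addresses that
have a valid kernel mapping:

    #include <linux/highmem.h>
    #include <linux/mm.h>

    static inline void cache_op(phys_addr_t paddr, size_t size,
                                void (*fn)(unsigned long start, unsigned long end))
    {
            struct page *page    = phys_to_page(paddr);
            void *start          = __va(page_to_phys(page));
            unsigned long offset = offset_in_page(paddr);
            size_t left          = size;

            do {
                    size_t len = left;

                    /* never operate past the end of the current page */
                    if (offset + len > PAGE_SIZE)
                            len = PAGE_SIZE - offset;

                    if (PageHighMem(page)) {
                            /* highmem has no permanent mapping: map it briefly */
                            start = kmap_atomic(page);
                            fn((unsigned long)start + offset,
                               (unsigned long)start + offset + len);
                            kunmap_atomic(start);
                    } else {
                            fn((unsigned long)start + offset,
                               (unsigned long)start + offset + len);
                    }

                    /* advance to the next page of the range */
                    offset = 0;
                    page++;
                    start += PAGE_SIZE;
                    left -= len;
            } while (left);
    }
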
Christoph Hellwig
f04b951f6c csky: use the generic remapping dma alloc implementation
The csky code was largely copied from arm/arm64, so switch to the
generic arm64-based implementation instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guo Ren <ren_guo@c-sky.com>
2018-12-01 18:07:16 +01:00
Christoph Hellwig
576d0d552b csky: don't use GFP_DMA in atomic_pool_init
csky does not implement ZONE_DMA, which means passing GFP_DMA is a no-op.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Guo Ren <ren_guo@c-sky.com>
2018-12-01 18:07:16 +01:00
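
Illustrative sketch, not the actual patch: without CONFIG_ZONE_DMA,
gfp_zone(__GFP_DMA) resolves to ZONE_NORMAL anyway, so dropping the
flag changes nothing about where the pool pages come from (the function
name below is made up for illustration):

    #include <linux/gfp.h>

    static struct page *alloc_atomic_pool_pages(unsigned int order)
    {
            /* previously: alloc_pages(GFP_KERNEL | GFP_DMA, order); */
            return alloc_pages(GFP_KERNEL, order);
    }
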
Guo Ren
013de2d667 csky: MMU and page table management
This patch adds the files related to memory management; here is our
memory layout:

   Fixmap       : 0xffc02000 – 0xfffff000       (4 MB - 12 KB)
   Pkmap        : 0xff800000 – 0xffc00000       (4 MB)
   Vmalloc      : 0xf0200000 – 0xff000000       (238 MB)
   Lowmem       : 0x80000000 – 0xc0000000       (1 GB)

The abiv1 CPU (CK610) has a VIPT cache and doesn't support highmem.
The abiv2 CPUs all have PIPT caches and can support highmem.

Lowmem is mapped directly by the msa0 & msa1 registers, so we don't
need to set up page tables for it.

Link: https://lore.kernel.org/lkml/20180518215548.GH17671@n2100.armlinux.org.uk/
Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
2018-10-25 23:36:19 +08:00
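
An illustrative header sketch of the layout quoted above; the real
macro names live under arch/csky/include/asm/ (fixmap.h, highmem.h,
pgtable.h) and may differ in detail:

    #define LOWMEM_START    0x80000000UL    /* mapped directly via msa0/msa1 */
    #define LOWMEM_END      0xc0000000UL    /* 1 GB                          */

    #define VMALLOC_START   0xf0200000UL
    #define VMALLOC_END     0xff000000UL    /* 238 MB                        */

    #define PKMAP_BASE      0xff800000UL    /* 4 MB of kmap() slots          */
    #define PKMAP_END       0xffc00000UL

    #define FIXADDR_START   0xffc02000UL    /* 4 MB - 12 KB of fixmap slots  */
    #define FIXADDR_TOP     0xfffff000UL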