With the callback implemented, omap-dma can provide information to client
drivers regarding supported address widths, directions, residue
granularity, etc.
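For illustration only, a minimal sketch of such a device_slave_caps callback;
the bus-width mask and the dma_slave_caps field names are assumptions for this
kernel era and have varied between versions:

    static int omap_dma_device_slave_caps(struct dma_chan *dchan,
                                          struct dma_slave_caps *caps)
    {
            caps->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                    BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
                                    BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
            caps->dstn_addr_widths = caps->src_addr_widths;
            caps->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
            caps->cmd_pause = true;
            caps->cmd_terminate = true;
            caps->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;

            return 0;
    }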
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Do not print the paRAM information when verbose debugging is not requested,
and also reduce the number of lines printed in edma_prep_dma_cyclic().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Indicate that the edma dmaengine driver has support for cyclic mode.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pause/Resume can be used by the audio stack when the stream is paused/resumed.
The edma platform code has support for this and the legacy audio stack used
it.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When clients ask for maxburst = 0 it is basically the same case as if they
were asking for maxburst = 1, since in both cases ASYNC needs to be used and
the eDMA is expected to write/read one word per DMA request.
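A minimal sketch of the intended normalization (the variable name is an
assumption):

    /* maxburst == 0 behaves the same as maxburst == 1:
     * one word transferred per DMA event. */
    if (maxburst == 0)
            maxburst = 1;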
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Because some drivers depend on DMA, change the initcall order to subsys_initcall.
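For illustration, moving a driver's registration to an earlier initcall level
looks roughly like this (the init function name is hypothetical):

    /* before: registered at device initcall time */
    /* module_init(my_dma_init); */

    /* after: registered earlier, before drivers that depend on DMA */
    subsys_initcall(my_dma_init);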
Signed-off-by: Yuan Yao <yao.yuan@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The ">" here should be ">=" or we are one step beyond the end of the
sdma->channels[] array.
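In other words, an index equal to the array size must be rejected as well; a
sketch, assuming the driver's channel-count macro and variable name:

    /* sdma->channels[] has SIRFSOC_DMA_CHANNELS entries, so an index
     * equal to that count is already out of range. */
    if (request >= SIRFSOC_DMA_CHANNELS)   /* was: '>' */
            return NULL;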
Fixes: 2e041c9462 ('dmaengine: sirf: enable generic dt binding for dma channels')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
VIDEO_TIMBERDALE selects TIMB_DMA which itself depends on
MFD_TIMBERDALE, so VIDEO_TIMBERDALE should either select or depend on
MFD_TIMBERDALE as well. I chose to make it depend on it because I
think it makes more sense and it is consistent with what other options
are doing.
Adding a "|| HAS_IOMEM" to the TIMB_DMA dependencies silenced the
kconfig warning about unmet direct dependencies but it was wrong:
without MFD_TIMBERDALE, TIMB_DMA is useless as the driver has no
device to bind to.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The code to handle any length SG lists calls edma_resume()
even before edma_start() is called. This is incorrect
because edma_resume() enables edma events on the channel,
after which the CPU (in edma_start()) cannot clear posted
events by writing to ECR (per the EDMA user's guide).
Because of this, EDMA transfers fail to start if, for
some reason, there is a pending EDMA event registered
even before EDMA transfers are started. This can happen if
an EDMA event is a byproduct of device initialization.
Fix this by calling edma_resume() only if it is not the
first batch of MAX_NR_SG elements.
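A sketch of the intended logic in the SG submission path (the field and
helper names follow the description above and are assumptions):

    if (edesc->processed <= MAX_NR_SG) {
            /* First batch: edma_start() can still clear any stale
             * pending event via ECR before enabling events. */
            edma_start(echan->ch_num);
    } else {
            /* Subsequent batches: events are already enabled,
             * so only resume the channel. */
            edma_resume(echan->ch_num);
    }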
Without this patch, MMC/SD fails to function on DA850 EVM
with DMA. The behaviour is triggered by a specific IP, which
may explain why the issue was not reported before
(for example with MMC/SD on AM335x).
Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.
Cc: stable@vger.kernel.org # v3.12.x+
Cc: Joel Fernandes <joelf@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Tested-by: Jon Ringle <jringle@gridpoint.com>
Tested-by: Alexander Holler <holler@ahsoftware.de>
Reported-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that mv_xor_slot_cleanup() has no remaining callers, we remove it
and rename __mv_xor_slot_cleanup() to mv_xor_slot_cleanup().
We take this opportunity to add a comment that makes it clear that the
channel spinlock should be held before calling mv_xor_slot_cleanup().
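The added comment amounts to documenting the locking contract, along the
lines of:

    /* This function must be called with the mv_xor_chan spinlock held */
    static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)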
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In order to simplify the code, remove all the calls to the locked
mv_xor_slot_cleanup() and instead use the unlocked version only.
It's less error prone to have just one function and require the caller
to ensure proper locking.
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
In mv_xor_status(), we are currently calling mv_xor_clean_completed_slots()
when the transaction is complete (the cookie status is DMA_COMPLETE).
However, a completed status means that mv_xor_slot_cleanup() was called,
which cleans the completed slots.
In other words, there's nothing to cleanup for a completed transaction in
mv_xor_status(). Remove the unneeded call to mv_xor_clean_completed_slots().
Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
As a result of the deprecation of the MSI-X/MSI enablement functions
pci_enable_msix() and pci_enable_msi_block(), all drivers
using these two interfaces need to be updated to use the
new pci_enable_msi_range() or pci_enable_msi_exact()
and pci_enable_msix_range() or pci_enable_msix_exact()
interfaces.
Function pci_enable_msix() returns a tri-state value while
pci_enable_msix_exact() is a canonical zero/-errno variant.
The former is being phased out in favor of the latter.
In the case of 'ioat' there is (or should be) no difference.
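For the MSI-X path this roughly amounts to the following (the surrounding
names are assumptions based on the ioat interrupt setup):

    err = pci_enable_msix_exact(pdev, device->msix_entries, msixcnt);
    if (err)
            goto msi;       /* fall back to MSI, then to legacy INTx */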
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Include the appropriate header file, dma_v2.h, in ioat/dca.c, because
the functions ioat2_dca_init() and ioat3_dca_init() have their
declarations in dma_v2.h.
This eliminates the following warning in ioat/dca.c:
drivers/dma/ioat/dca.c:410:22: warning: no previous prototype for ‘ioat2_dca_init’ [-Wmissing-prototypes]
drivers/dma/ioat/dca.c:624:22: warning: no previous prototype for ‘ioat3_dca_init’ [-Wmissing-prototypes]
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Mark the functions ioat3_prep_xor_val(), ioat3_prep_pq_val() and
ioat3_prep_pqxor_val() as static in dma_v3.c because they are not used
outside this file.
This eliminates the following warnings in dma_v3.c:
drivers/dma/ioat/dma_v3.c:741:1: warning: no previous prototype for ‘ioat3_prep_xor_val’ [-Wmissing-prototypes]
drivers/dma/ioat/dma_v3.c:1092:1: warning: no previous prototype for ‘ioat3_prep_pq_val’ [-Wmissing-prototypes]
drivers/dma/ioat/dma_v3.c:1134:1: warning: no previous prototype for ‘ioat3_prep_pqxor_val’ [-Wmissing-prototypes]
Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Use the standard PCI macro dev_is_pci() instead of directly comparing
against pci_bus_type to check whether a device is a PCI device.
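A minimal before/after sketch:

    /* before */
    if (dev->bus == &pci_bus_type)
            pdev = to_pci_dev(dev);

    /* after */
    if (dev_is_pci(dev))
            pdev = to_pci_dev(dev);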
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Pull slave-dmaengine updates from Vinod Koul:
- New driver for Qcom bam dma
- New driver for RCAR peri-peri
- New driver for FSL eDMA
- Various odd fixes and updates throughout the subsystem
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
dmaengine: add Qualcomm BAM dma driver
shdma: add R-Car Audio DMAC peri peri driver
dmaengine: sirf: enable generic dt binding for dma channels
dma: omap-dma: Implement device_slave_caps callback
dmaengine: qcom_bam_dma: Add device tree binding
dma: dw: Add suspend and resume handling for PCI mode DW_DMAC.
dma: dw: allocate memory in two stages in probe
Add new line to test result strings produced in verbose mode
dmaengine: pch_dma: use tasklet_kill in teardown
dmaengine: at_hdmac: use tasklet_kill in teardown
dma: cppi41: start tear down only if channel is busy
usb: musb: musb_cppi41: Dont reprogram DMA if tear down is initiated
dmaengine: s3c24xx-dma: make phy->irq signed for error handling
dma: imx-dma: Add missing module owner field
dma: imx-dma: Replace printk with dev_*
dma: fsl-edma: fix static checker warning of NULL dereference
dma: Remove comment about embedding dma_slave_config into custom structs
dma: mmp_tdma: move to generic device tree binding
dma: mmp_pdma: add IRQF_SHARED when request irq
dma: edma: Fix memory leak in edma_prep_dma_cyclic()
...
Add the DMA engine driver for the QCOM Bus Access Manager (BAM) DMA controller
found in the MSM 8x74 platforms.
Each BAM DMA device is associated with a specific on-chip peripheral. Each
channel provides a uni-directional data transfer engine that is capable of
transferring data between the peripheral and system memory (System mode), or
between two peripherals (BAM2BAM).
The initial release of this driver only supports slave transfers between
peripherals and system memory.
Signed-off-by: Andy Gross <agross@codeaurora.org>
Tested-by: Stanimir Varbanov <svarbanov@mm-sol.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We can move the handling of the DMA synchronisation control out of the
prepare functions; this can be pre-calculated when the DMA channel has
been allocated, so we don't need to duplicate this in both prepare
functions.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the interrupt handling for OMAP2+ into omap-dma, rather than using
the legacy support in the platform code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Export the DMA register information from the SoC specific data, such
that we can access the registers directly in omap-dma.c, mapping the
register region ourselves as well.
Rather than calculating the DMA channel register in its entirety for
each access, we pre-calculate an offset base address for the allocated
DMA channel and then just use the appropriate register offset.
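Conceptually, the per-channel base is computed once at allocation time and
each access then adds only the register offset; a simplified sketch (the
names and the flat 32-bit access are assumptions, the real accessors also
handle SoC-specific register strides):

    /* at channel allocation: */
    c->channel_base = od->base + od->plat->channel_stride * lch;

    /* per access: */
    static void omap_dma_chan_write(struct omap_chan *c, unsigned reg,
                                    unsigned val)
    {
            writel_relaxed(val, c->channel_base + reg);
    }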
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide a function to read the CSAC/CDAC register, working around the
OMAP 3.2/3.3 erratum (which requires two reads of the register if the
first returned zero).
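A sketch of such a helper, assuming a basic channel-register read accessor:

    static u32 omap_dma_chan_read_3_3(struct omap_chan *c, unsigned reg)
    {
            u32 val;

            val = omap_dma_chan_read(c, reg);
            if (val == 0)
                    /* Erratum 3.2/3.3: the first read of CSAC/CDAC may
                     * return 0; read again to get the real value. */
                    val = omap_dma_chan_read(c, reg);

            return val;
    }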
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide a pair of channel register accessors, and a pair of global
accessors for non-channel specific registers.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't need to read-modify-write the CCR register; we already know
what value it should contain at this point. Use the cached CCR value
when setting the enable bit.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We don't need to issue a barrier for every segment of a DMA transfer;
doing this just once per descriptor will do.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the clnk_ctrl setup to the preparation functions, saving its
value in the omap_desc. This only needs to be set once per descriptor,
not for each segment, so set it in omap_dma_start_desc() rather than
omap_dma_start().
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The only thing which changes is which registers are written, so put this
in local variables instead. This results in smaller code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consolidate clearing of the channel status register, rather than open
coding the same functionality in two places.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Since we record the CCR register value in the DMA transaction, we can move
the processing of the iframe buffering errata out of omap_dma_start()
and into the preparation functions.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide our own set of more complete register definitions; this allows
us to get rid of the meaningless 1 << n constants scattered throughout
this code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consolidate the setup of the channel control register. Prepare the
basic value in the preparation of the DMA descriptor, and write it into
the register upon descriptor execution.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Consolidate the setup of the channel source destination parameters
register. This way, we calculate the required CSDP value when we setup
a transfer descriptor, and only write it to the device registers once
when we start the descriptor.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no need to keep writing registers which don't change value in
omap_dma_start_sg(). Move this into omap_dma_start_desc() and merge
the register updates together.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Program the transfer parameters directly into the hardware, rather
than using the functions in arch/arm/plat-omap/dma.c.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide and use a hook to obtain the underlying DMA platform operations
so that omap-dma.c can access the hardware more directly without
involving the legacy DMA driver.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move to support of_dma_request_slave_channel() and dma_request_slave_channel().
We add an xlate() callback to let DMA clients find the right dma_chan via the
generic "dmas" property in the DT.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
With the callback implemented, omap-dma can provide information to client
drivers regarding supported address widths, directions, residue
granularity, etc.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This is to disable/enable the DW_DMAC hardware during late suspend/early resume.
Since the DMA engine provides services to other clients (e.g. SPI, HSUART),
we need to ensure the DMA suspends after the clients and resumes
before the clients are active.
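In dev_pm_ops terms this corresponds to wiring the PCI driver's PM callbacks
at the late/early phases; a sketch with hypothetical callback names:

    static const struct dev_pm_ops dw_pci_dev_pm_ops = {
            SET_LATE_SYSTEM_SLEEP_PM_OPS(dw_pci_suspend_late,
                                         dw_pci_resume_early)
    };

    static struct pci_driver dw_pci_driver = {
            /* ... */
            .driver = {
                    .pm = &dw_pci_dev_pm_ops,
            },
    };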
Signed-off-by: Chew, Chiau Ee <chiau.ee.chew@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Prevents test result strings from being output on the same line. The issue
happens with verbose and multi-iteration modes enabled.
Signed-off-by: Jerome Blin <jerome.blin@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As discussed in [1], tasklet_disable() is not a proper function for teardown.
We need to ensure the irq is disabled, then ensure that no more tasklets can
be scheduled, and only then is it safe to use tasklet_kill().
Here in the pch_dma driver we need to use free_irq() before tasklet_kill().
So move up the free_irq(), which ensures that the irq is disabled and also
waits until all scheduled interrupts have been executed by invoking
synchronize_irq().
[1]: http://lwn.net/Articles/588457/
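A sketch of the resulting teardown order in the remove path (the surrounding
names and structure are hypothetical):

    struct example_dma {
            struct tasklet_struct tasklet;
            /* ... */
    };

    static void example_remove(struct pci_dev *pdev)
    {
            struct example_dma *ed = pci_get_drvdata(pdev);

            /* free_irq() disables the irq and, via synchronize_irq(),
             * waits for any in-flight handler to finish ... */
            free_irq(pdev->irq, ed);

            /* ... after which nothing can schedule the tasklet any more,
             * so tasklet_kill() is safe. */
            tasklet_kill(&ed->tasklet);
    }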
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
As discussed in [1], tasklet_disable() is not a proper function for teardown.
We need to ensure the irq is disabled, then ensure that no more tasklets can
be scheduled, and only then is it safe to use tasklet_kill().
Here in the at_hdmac driver we use free_irq() before tasklet_kill(). free_irq()
ensures that the irq is disabled and also waits until all scheduled interrupts
have been executed by invoking synchronize_irq(). So we only need to do
tasklet_kill() after invoking free_irq().
[1]: http://lwn.net/Articles/588457/
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>