Merge branch 'patchwork' into topic/docs-next

* patchwork: (1492 commits)
  [media] cec: always check all_device_types and features
  [media] cec: poll should check if there is room in the tx queue
  [media] vivid: support monitor all mode
  [media] cec: fix test for unconfigured adapter in main message loop
  [media] cec: limit the size of the transmit queue
  [media] cec: zero unused msg part after msg->len
  [media] cec: don't set fh to NULL in CEC_TRANSMIT
  [media] cec: clear all status fields before transmit and always fill in sequence
  [media] cec: CEC_RECEIVE overwrote the timeout field
  [media] cxd2841er: Reading SNR for DVB-C added
  [media] cxd2841er: Reading BER and UCB for DVB-C added
  [media] cxd2841er: fix switch-case for DVB-C
  [media] cxd2841er: fix signal strength scale for ISDB-T
  [media] cxd2841er: adjust the dB scale for DVB-C
  [media] cxd2841er: provide signal strength for DVB-C
  [media] cxd2841er: fix BER report via DVBv5 stats API
  [media] mb86a20s: apply mask to val after checking for read failure
  [media] airspy: fix error logic during device register
  [media] s5p-cec/TODO: add TODO item
  [media] cec/TODO: drop comment about sphinx documentation
  ...

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>

Mauro Carvalho Chehab, 2016-07-23 07:59:19 -03:00
1510 changed files with 22288 additions and 10453 deletions

@@ -128,16 +128,44 @@ X!Edrivers/base/interface.c
!Edrivers/base/platform.c
!Edrivers/base/bus.c
</sect1>
<sect1><title>Device Drivers DMA Management</title>
<sect1>
<title>Buffer Sharing and Synchronization</title>
<para>
The dma-buf subsystem provides the framework for sharing buffers
for hardware (DMA) access across multiple device drivers and
subsystems, and for synchronizing asynchronous hardware access.
</para>
<para>
This is used, for example, by drm "prime" multi-GPU support, but
is of course not limited to GPU use cases.
</para>
<para>
The three main components of this are: (1) dma-buf, representing
a sg_table and exposed to userspace as a file descriptor to allow
passing between devices, (2) fence, which provides a mechanism
to signal when one device has finished access, and (3) reservation,
which manages the shared or exclusive fence(s) associated with
the buffer.
</para>
<sect2><title>dma-buf</title>
!Edrivers/dma-buf/dma-buf.c
!Edrivers/dma-buf/fence.c
!Edrivers/dma-buf/seqno-fence.c
!Iinclude/linux/fence.h
!Iinclude/linux/seqno-fence.h
!Iinclude/linux/dma-buf.h
</sect2>
<sect2><title>reservation</title>
!Pdrivers/dma-buf/reservation.c Reservation Object Overview
!Edrivers/dma-buf/reservation.c
!Iinclude/linux/reservation.h
</sect2>
<sect2><title>fence</title>
!Edrivers/dma-buf/fence.c
!Iinclude/linux/fence.h
!Edrivers/dma-buf/seqno-fence.c
!Iinclude/linux/seqno-fence.h
!Edrivers/dma-buf/sync_file.c
!Iinclude/linux/sync_file.h
</sect2>
</sect1>
<sect1><title>Device Drivers DMA Management</title>
!Edrivers/base/dma-coherent.c
!Edrivers/base/dma-mapping.c
</sect1>
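
The three components described in the new paragraphs above map onto a
fairly small amount of code on the exporting side. As a rough sketch only
(my_buffer and my_dmabuf_ops are hypothetical driver-local names, and a
real dma_buf_ops must supply the map/attach/release callbacks), a driver
of this era would wrap a scatter-gather backed buffer and hand userspace
a file descriptor along these lines:

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>
#include <linux/module.h>
#include <linux/scatterlist.h>

/* Hypothetical driver-local buffer; a real driver fills my_dmabuf_ops
 * with its map_dma_buf/unmap_dma_buf/mmap/release implementations. */
struct my_buffer {
        struct sg_table *sgt;
        size_t size;
};
static const struct dma_buf_ops my_dmabuf_ops;

static int my_buffer_export_fd(struct my_buffer *buf)
{
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
        struct dma_buf *dmabuf;

        exp_info.ops   = &my_dmabuf_ops;
        exp_info.size  = buf->size;
        exp_info.flags = O_RDWR;
        exp_info.priv  = buf;

        dmabuf = dma_buf_export(&exp_info);     /* wrap the buffer */
        if (IS_ERR(dmabuf))
                return PTR_ERR(dmabuf);

        /* this fd is what gets passed to other devices and subsystems */
        return dma_buf_fd(dmabuf, O_CLOEXEC);
}

The fence and reservation interfaces pulled in by the sect2 blocks are
then what importers use to wait for, or signal, completion of hardware
access to that same buffer.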

@@ -88,7 +88,7 @@ function.<footnote>
<structfield>capabilities</structfield> field of &v4l2-capability;
returned by the &VIDIOC-QUERYCAP; ioctl is set. There are two
streaming methods; to determine if the memory mapping flavor is
supported, applications must call the &VIDIOC-REQBUFS; ioctl.</para>
supported, applications must call the &VIDIOC-REQBUFS; ioctl with the memory type set to <constant>V4L2_MEMORY_MMAP</constant>.</para>
<para>Streaming is an I/O method where only pointers to buffers
are exchanged between application and driver, the data itself is not
@@ -369,7 +369,7 @@ rest should be evident.</para>
<structfield>capabilities</structfield> field of &v4l2-capability;
returned by the &VIDIOC-QUERYCAP; ioctl is set. Whether the particular user
pointer method (not only memory mapping) is supported must be
determined by calling the &VIDIOC-REQBUFS; ioctl.</para>
determined by calling the &VIDIOC-REQBUFS; ioctl with the memory type set to <constant>V4L2_MEMORY_USERPTR</constant>.</para>
<para>This I/O method combines advantages of the read/write and
memory mapping methods. Buffers (planes) are allocated by the application
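
Both hunks above make the same point: support for a given streaming
flavor is discovered by issuing VIDIOC_REQBUFS with the corresponding
memory type. A minimal userspace sketch (probing with a count of zero so
nothing is actually allocated is an idiom assumed here, not something the
amended text itself mandates):

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Returns 1 if the driver supports streaming I/O with the given memory
 * type (V4L2_MEMORY_MMAP or V4L2_MEMORY_USERPTR) on a capture queue,
 * 0 if it does not, -1 on unexpected errors. */
static int supports_memory_type(int fd, enum v4l2_memory memory)
{
        struct v4l2_requestbuffers req;

        memset(&req, 0, sizeof(req));
        req.count  = 0;                           /* probe only */
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = memory;

        if (ioctl(fd, VIDIOC_REQBUFS, &req) == 0)
                return 1;
        return errno == EINVAL ? 0 : -1;
}

Calling this once with V4L2_MEMORY_MMAP and once with
V4L2_MEMORY_USERPTR covers the two cases the amended sentences describe.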

@@ -157,7 +157,7 @@ on working with the default settings initially.</para>
<varlistentry>
<term>LIRC_SET_{SEND,REC}_CARRIER</term>
<listitem>
<para>Set send/receive carrier (in Hz).</para>
<para>Set send/receive carrier (in Hz). Return 0 on success.</para>
</listitem>
</varlistentry>
<varlistentry>
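
The carrier ioctls above take a pointer to an unsigned 32-bit frequency
in Hz and, per the added sentence, return 0 on success. A small sketch,
assuming the ioctl definitions are visible through <linux/lirc.h> and
using 38 kHz purely as a common example value:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/lirc.h>

/* Ask the driver to modulate transmitted IR at roughly 38 kHz. */
static int set_tx_carrier(int fd)
{
        __u32 carrier = 38000;          /* Hz */

        if (ioctl(fd, LIRC_SET_SEND_CARRIER, &carrier) < 0) {
                perror("LIRC_SET_SEND_CARRIER");
                return -1;
        }
        return 0;
}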

@@ -121,6 +121,70 @@
<entry><constant>MEDIA_ENT_F_AUDIO_MIXER</constant></entry>
<entry>Audio Mixer Function Entity.</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_F_PROC_VIDEO_COMPOSER</constant></entry>
<entry>Video composer (blender). An entity capable of video
composing must have at least two sink pads and one source
pad, and composes input video frames onto output video
frames. Composition can be performed using alpha blending,
color keying, raster operations (ROP), stitching or any other
means.
</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_F_PROC_VIDEO_PIXEL_FORMATTER</constant></entry>
<entry>Video pixel formatter. An entity capable of pixel formatting
must have at least one sink pad and one source pad. Read
pixel formatters read pixels from memory and perform a subset
of unpacking, cropping, color keying, alpha multiplication
and pixel encoding conversion. Write pixel formatters perform
a subset of dithering, pixel encoding conversion and packing
and write pixels to memory.
</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_F_PROC_VIDEO_PIXEL_ENC_CONV</constant></entry>
<entry>Video pixel encoding converter. An entity capable of pixel
encoding conversion must have at least one sink pad and one
source pad, and convert the encoding of pixels received on
its sink pad(s) to a different encoding output on its source
pad(s). Pixel encoding conversion includes but isn't limited
to RGB to/from HSV, RGB to/from YUV and CFA (Bayer) to RGB
conversions.
</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_F_PROC_VIDEO_LUT</constant></entry>
<entry>Video look-up table. An entity capable of video lookup table
processing must have one sink pad and one source pad. It uses
the values of the pixels received on its sink pad to look up
entries in internal tables and output them on its source pad.
The lookup processing can be performed on each component
separately or can combine components for multi-dimensional
table lookups.
</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_F_PROC_VIDEO_SCALER</constant></entry>
<entry>Video scaler. An entity capable of video scaling must have
at least one sink pad and one source pad, and scale the
video frame(s) received on its sink pad(s) to a different
resolution output on its source pad(s). The range of
supported scaling ratios is entity-specific and can differ
between the horizontal and vertical directions (in particular
scaling can be supported in one direction only). Binning and
skipping are considered as scaling.
</entry>
</row>
<row>
<entry><constant>MEDIA_ENT_F_PROC_VIDEO_STATISTICS</constant></entry>
<entry>Video statistics computation (histogram, 3A, ...). An entity
capable of statistics computation must have one sink pad and
one source pad. It computes statistics over the frames
received on its sink pad and outputs the statistics data on
its source pad.
</entry>
</row>
</tbody>
</tgroup>
</table>
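
The function constants documented in the table above are what userspace
matches against when walking the media graph. A rough sketch of listing
all scalers with the two-call MEDIA_IOC_G_TOPOLOGY pattern; it assumes a
uapi <linux/media.h> recent enough to define both the ioctl and the
MEDIA_ENT_F_PROC_VIDEO_* constants:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Print every entity whose function matches "wanted", for example
 * MEDIA_ENT_F_PROC_VIDEO_SCALER, on an already open media device fd. */
static int print_entities_by_function(int fd, uint32_t wanted)
{
        struct media_v2_topology topo;
        struct media_v2_entity *ents;
        unsigned int i;

        memset(&topo, 0, sizeof(topo));
        /* first call, all pointers NULL: only the counts are filled in */
        if (ioctl(fd, MEDIA_IOC_G_TOPOLOGY, &topo) < 0)
                return -1;

        ents = calloc(topo.num_entities, sizeof(*ents));
        if (!ents)
                return -1;
        topo.ptr_entities = (__u64)(uintptr_t)ents;

        /* second call: the entity array is copied out */
        if (ioctl(fd, MEDIA_IOC_G_TOPOLOGY, &topo) < 0) {
                free(ents);
                return -1;
        }

        for (i = 0; i < topo.num_entities; i++)
                if (ents[i].function == wanted)
                        printf("entity %u: %s\n", ents[i].id, ents[i].name);

        free(ents);
        return 0;
}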

@@ -5,7 +5,7 @@
</refmeta>
<refnamediv>
<refname><constant>V4L2_PIX_FMT_Z16</constant></refname>
<refpurpose>Interleaved grey-scale image, e.g. from a stereo-pair</refpurpose>
<refpurpose>16-bit depth data with distance values at each pixel</refpurpose>
</refnamediv>
<refsect1>
<title>Description</title>
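
Since V4L2_PIX_FMT_Z16 now stands for plain 16-bit depth samples, reading
a value back out of a dequeued buffer is just indexing a 16-bit grid. A
hypothetical helper, assuming the usual little-endian layout of 16-bit
V4L2 formats and taking the stride from the driver's bytesperline:

#include <stdint.h>

/* Depth sample at (x, y) in a V4L2_PIX_FMT_Z16 buffer.  "stride" is
 * v4l2_pix_format.bytesperline; the unit of the returned distance is
 * device-specific. */
static uint16_t z16_sample(const uint8_t *buf, unsigned int stride,
                           unsigned int x, unsigned int y)
{
        const uint8_t *p = buf + y * stride + 2 * x;

        return (uint16_t)(p[0] | (p[1] << 8));
}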

@@ -6,7 +6,7 @@
<refnamediv>
<refname>VIDIOC_REQBUFS</refname>
<refpurpose>Initiate Memory Mapping or User Pointer I/O</refpurpose>
<refpurpose>Initiate Memory Mapping, User Pointer or DMA Buffer I/O</refpurpose>
</refnamediv>
<refsynopsisdiv>
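
With DMA buffer I/O added to the refpurpose, the same ioctl is also how a
queue is switched to imported dma-buf memory before the file descriptors
are attached at QBUF time. A minimal sketch (the count of 4 is arbitrary
and the driver may adjust it):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request capture buffers whose memory will be imported from
 * dma-buf file descriptors. */
static int request_dmabuf_buffers(int fd)
{
        struct v4l2_requestbuffers req;

        memset(&req, 0, sizeof(req));
        req.count  = 4;
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_DMABUF;

        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return -1;

        return (int)req.count;          /* buffers actually granted */
}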