Fix kernel-doc parameter warning and correct the function name:
Warning(linux-next-20081022//drivers/scsi/scsi_ioctl.c:281): No description found for parameter 'ndelay'
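For reference, the kernel-doc block being fixed has roughly the shape sketched
below (a hedged sketch, assuming the function in question is
scsi_nonblockable_ioctl(); the descriptions are illustrative, not the exact
committed text):

  /**
   * scsi_nonblockable_ioctl - non-blocking scsi ioctl helper
   * @sdev:   scsi device to operate on
   * @cmd:    ioctl command code
   * @arg:    user-space argument
   * @ndelay: non-zero when non-blocking behaviour was requested
   *          (the parameter the warning complains about)
   */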
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Impact: cleanup
Now that arch/x86/pci/pci.h is used in a number of other places as well,
move the low-level x86 PCI definitions into the architecture include files.
(not to be confused with the existing arch/x86/include/asm/pci.h file,
which provides public details about x86 PCI)
Tested on: X86_32_UP, X86_32_SMP and X86_64_SMP
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
This implements support for the Large Block Transfer feature found in Silicon
Image 311x controllers. This allows transferring bigger contiguous chunks of
data from system memory and avoids the 64KB boundary restriction of standard
SFF controllers.
This is based on a patch from Jeff Garzik (from the sii-lbt branch of
libata-dev) but includes a few bug fixes: Since the bmdma2 register does not
implement the status bits, the original bmdma register must be used except
where the bmdma2 register is required. Also, the DMA boundary should be
31-bit instead of 32-bit, since the top bit of the length field is still
required for the PRD end-of-table flag.
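To illustrate the boundary point, a minimal sketch (the constant names here
are illustrative, not necessarily the driver's own):

  /* The top bit of the PRD length field carries the end-of-table
   * flag, so only 31 bits remain usable for length/boundary. */
  #define SIL_DMA_BOUNDARY	0x7fffffffUL	/* 31-bit, not 32-bit */
  #define SIL_PRD_EOT		(1U << 31)	/* end-of-table marker */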
Signed-off-by: Robert Hancock <hancockr@shaw.ca>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
The patchlet below blacklists NCQ on OCZ CORE v2 SSD drive(s). Even
though the drive advertises NCQ support with queue depth 1, it responds
with an all-zeroes FIS to NCQ commands, which triggers ATA error handling
several times before the kernel decides to disable NCQ on the drive.
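For illustration, an entry of the kind this patchlet adds to libata's
ata_device_blacklist[] looks roughly like this (the model string below is a
guess, not the committed one):

  /* OCZ CORE v2 answers NCQ commands with an all-zeroes FIS */
  { "OCZ CORE_SSD",	NULL,		ATA_HORKAGE_NONCQ },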
Signed-off-by: Lubomir Bulej <lubomir.bulej@dsrg.mff.cuni.cz>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
PXA27x and later processors support overlay1 and overlay2 on top of the
base framebuffer (although underneath the base is also possible). They
support palette and no-palette RGB formats, as well as YUV formats (only
available on overlay2). These overlays have dedicated DMA channels and
behave in much the same way as a framebuffer.
This heavily simplified and re-structured work is based on the original
pxafb_overlay.c (which had been pending mainline merge for a long time).
The major problems with that pxafb_overlay.c are (if you are interested
in the history):
1. heavily redundant (the control logic for overlay1 and overlay2 is
actually identical except for some small operations, which are now
abstracted into a 'pxafb_layer_ops' structure; see the sketch after
this list)
2. a lot of useless and untested code (two workarounds which are now
fixed on mature silicon)
3. cursorfb is actually useless, a hardware cursor should not be used
this way, and the code had been untested for a long time.
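A minimal sketch of such a 'pxafb_layer_ops' abstraction (the member names
are illustrative, not necessarily the ones used here):

  struct pxafb_layer;

  struct pxafb_layer_ops {
	void (*enable)(struct pxafb_layer *);
	void (*disable)(struct pxafb_layer *);
	void (*setup)(struct pxafb_layer *);
  };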
The code in this patch should be self-explanatory; I tried to keep the
comments to a minimum. As said, this is basically simplified, and several
things are still on the pending list:
1. palette mode is unsupported and untested (although re-using the
palette code of the base framebuffer is actually very easy now with
the previous clean-up patches)
2. fb_pan_display for overlay(s) is unsupported
3. the base framebuffer can actually be abstracted by 'pxafb_layer' as
well, which will help further re-use of the code and keep a better
and more consistent structure. (This is the reason I named it 'pxafb_layer'
instead of 'pxafb_overlay' or something similar.)
See Documentation/fb/pxafb.txt for additional usage information.
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Cc: Rodolfo Giometti <giometti@linux.it>
Signed-off-by: Eric Miao <ycmiao@ycmiao-hp520.(none)>
1. introduce var_to_depth() to calculate the color depth including the
transparency bit (a minimal sketch follows this list)
2. the conversion from 'fb_var_screeninfo' to LCCR3 BPP bits can be re-
used by overlays (in OVLxC1), so an individual pxafb_var_to_bpp()
has been separated out
3. pxafb_setmode() should really set the color bitfields correctly at
the beginning; introduce a pxafb_set_pixfmt() for this
4. allow user apps to specify color formats within fb_var_screeninfo;
the checking of this in pxafb_check_var() has been simplified as
below:
a) pxafb_var_to_bpp() should pass - which means a basically correct
bits_per_pixel and color depth setting
b) the RGBT bitfields are then forced into supported values by
pxafb_set_pixfmt()
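A minimal sketch of the var_to_depth() idea from item 1, assuming the depth
is simply the sum of the RGBT bitfield lengths:

  static inline int var_to_depth(struct fb_var_screeninfo *var)
  {
	/* color depth including the transparency bit(s) */
	return var->red.length + var->green.length +
		var->blue.length + var->transp.length;
  }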
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Eric Miao <ycmiao@ycmiao-hp520.(none)>
Add the palette format support for LCCR4_PAL_FOR_3, and fix the
issue of LCCR4 never being assigned.
Also remove the useless pxafb_set_truecolor().
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Eric Miao <ycmiao@ycmiao-hp520.(none)>
DMA branching is enabled by extending the current setup_frame_dma()
function to allow a 2nd set of frame/palette DMA descriptors to be
used.
As a result, pxafb_dma_buff.dma_desc[], pxafb_dma_buff.pal_desc[]
and pxafb_info.fdadr[] are doubled.
This allows maximum re-use of the current DMA setup code, although
the pxafb_info.fdadr[xx] used for the FBRx register values looks a bit odd.
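Roughly, the doubling looks like this (array sizes and field layout are a
sketch, not the exact structure):

  struct pxafb_dma_buff {
	unsigned char palette[PAL_MAX * PALETTE_SIZE];
	/* second set of descriptors used as the branch target */
	struct pxafb_dma_descriptor pal_desc[PAL_MAX * 2];
	struct pxafb_dma_descriptor dma_desc[DMA_MAX * 2];
  };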
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Eric Miao <ycmiao@ycmiao-hp520.(none)>
Note that var->yres_virtual is only re-calculated from fix.smem_len
when text mode acceleration is enabled (which is the default); this is
due to the issue Russell described below:
Previous experience of doing this with the X server and acornfb is that
it causes all sorts of problems - it seems to force the X server into
assuming that the framebuffer should be panned no matter what settings
you ask it for.
The recommended workaround (implemented in acornfb) is to only do these
kinds of adjustments if text mode acceleration is enabled. IIRC, the X
server should be disabling text mode acceleration when it maps the
framebuffer. I seem to remember that there are X servers which forget
to do that though.
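In code terms, the workaround amounts to something like the sketch below
(the exact driver logic may differ):

  /* only let yres_virtual follow the allocated memory when
   * text mode acceleration is enabled (the default) */
  if (var->accel_flags & FB_ACCELF_TEXT)
	var->yres_virtual = fbi->fb.fix.smem_len /
			(var->xres * var->bits_per_pixel / 8);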
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Eric Miao <ycmiao@ycmiao-hp520.(none)>
The amount of video memory is decided according to the following
order:
1. <xres> x <yres> x <bits_per_pixel> by default, which is the backward-
compatible way
2. size specified in platform data
3. size specified in the module parameter 'options' string or in the
kernel boot command line (see the updated Documentation/fb/pxafb.txt)
Now that the memory is allocated from system memory, pxafb_mmap()
can be removed and the default fb_mmap() should work just fine.
Also, since we have now introduced 'struct pxafb_dma_buff' for DMA
descriptors and palettes, the allocation can be separated cleanly.
NOTE: the LCD DMA actually supports chained transfers (i.e. page-based
transfers); to simplify the logic and keep the performance (with fewer
TLB misses when accessing from memory-mapped user space), the memory
is allocated by alloc_pages_*() to ensure it is physically contiguous.
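A minimal sketch of that allocation (field name and error handling
simplified):

  struct page *pages;

  /* one physically contiguous chunk for the whole framebuffer,
   * so a user-space mmap() sees a flat, linear region */
  pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, get_order(size));
  if (pages)
	fbi->video_mem = page_address(pages);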
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Eric Miao <ycmiao@ycmiao-hp520.(none)>
See description of commit:
[ARM] rtc-sa1100: don't assume CLOCK_TICK_RATE to be a constant
for additional information.
Signed-off-by: Eric Miao <eric.miao@marvell.com>
As Nicolas and Russell pointed out, CLOCK_TICK_RATE is no longer
a constant on PXA when multiple processors and platforms are
selected, so change TIMER_FREQ in rtc-sa1100.c into a variable.
Since the code to decide the clock tick rate is re-used from
timer.c, introduce a common get_clock_tick_rate() for this.
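A hedged sketch of the resulting shape in rtc-sa1100.c (the variable name
and probe details are illustrative):

  /* was: #define TIMER_FREQ CLOCK_TICK_RATE */
  static unsigned long timer_freq;

  static int sa1100_rtc_probe(struct platform_device *pdev)
  {
	/* the tick rate is now looked up at run time */
	timer_freq = get_clock_tick_rate();
	return 0;	/* remaining probe steps omitted in this sketch */
  }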
Signed-off-by: Eric Miao <eric.miao@marvell.com>
Acked-by: Nicolas Pitre <nico@marvell.com>
devname needs to be allocated before the irq is installed, so the
irq routines get the correct name in /proc.
Also check the return value from the AGP init function, and
fix up the exit points.
Signed-off-by: Dave Airlie <airlied@redhat.com>
The error path for the object list being NULL is at the second goto target.
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
This showed up in logs where people had a hung chip, so pinning was blocked
on the chip unpinning other buffers, and the X Server took its scheduler
signal during that time.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
This removes the requirement for user space to pin a buffer before
setting a mode that is backed by the pixels from that buffer.
Signed-off-by: Kristian Høgsberg <krh@redhat.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Fix this sparse warning:
drivers/gpu/drm/i915/intel_fb.c:417:5: warning: symbol 'intelfb_panic' was not declared. Should it be static?
Signed-off-by: Hannes Eder <hannes@hanneseder.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Thanks to Hannes Eder for pointing out that this code was dead according to
sparse.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Move the 'extern' declarations from "intel_dvo.c" to "dvo.h", as "dvo.h" is
included by, and only by, files where the symbols are either defined or
used.
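For example, dvo.h carries declarations of roughly this shape (the specific
symbols below are assumptions for illustration):

  /* per-chip DVO transmitter ops, defined in the individual dvo_*.c files */
  extern struct intel_dvo_dev_ops sil164_ops;
  extern struct intel_dvo_dev_ops ch7xxx_ops;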
Signed-off-by: Hannes Eder <hannes@hanneseder.net>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
We haven't seen this in practice, but it was visible when looking at a bug
report from when i915_gem_evict_everything() was broken and would always
return error.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@linux.ie>
This is required by the display plane, and fixes 1400x1050 laptop displays.
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The replace fb ioctl replaces the backing buffer object for a modesetting
framebuffer object. This can be achieved by just creating a new
framebuffer backed by the new buffer object, setting that for the crtcs
in question and then removing the old framebuffer object.
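From user space, the equivalent sequence with stock libdrm calls looks
roughly like this (handles and ids are placeholders, error handling
omitted):

  uint32_t new_fb;

  /* create a framebuffer backed by the new buffer object ... */
  drmModeAddFB(fd, width, height, 24, 32, pitch, new_bo_handle, &new_fb);
  /* ... point the crtc at it ... */
  drmModeSetCrtc(fd, crtc_id, new_fb, 0, 0, &connector_id, 1, &mode);
  /* ... and drop the old framebuffer object */
  drmModeRmFB(fd, old_fb);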
Signed-off-by: Kristian Hogsberg <krh@redhat.com>
Acked-by: Jakob Bornecrantz <jakob@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The initially merged modesetting API has some uglies in it; this
cleans up the struct members and ioctl ordering for initial submission.
It also removes the unneeded hotplug infrastructure.
airlied:- I've pulled this patch in from the git modesetting-gem tree.
Signed-off-by: Dave Airlie <airlied@redhat.com>
The calling function doesn't call this function unless one of the two
states that set the value is true.
Signed-off-by: Dave Airlie <airlied@redhat.com>
This commit adds i915 driver support for the DRM mode setting APIs.
Currently, VGA, LVDS, SDVO DVI & VGA, TV and DVO LVDS outputs are
supported. HDMI, DisplayPort and additional SDVO output support will
follow.
Support for the mode setting code is controlled by the new 'modeset'
module option. A new config option, CONFIG_DRM_I915_KMS controls the
default behavior, and whether a PCI ID list is built into the module for
use by user level module utilities.
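For testing, the new path can be enabled either on the kernel command line
or at module load time, e.g. (option name as described above):

  i915.modeset=1              # kernel command line
  modprobe i915 modeset=1     # manual module load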
Note that if mode setting is enabled, user level drivers that access
display registers directly or that don't use the kernel graphics memory
manager will likely corrupt kernel graphics memory, disrupt output
configuration (possibly leading to hangs and/or blank displays), and
prevent panic/oops messages from appearing. So use caution when
enabling this code; be sure your user level code supports the new
interfaces.
A new SysRq key, 'g', provides emergency support for switching back to
the kernel's framebuffer console, which is useful for testing.
Co-authors: Dave Airlie <airlied@linux.ie>, Hong Liu <hong.liu@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add mode setting support to the DRM layer.
This is a fairly big chunk of work that allows DRM drivers to provide
full output control and configuration capabilities to userspace. It was
motivated by several factors:
- the fb layer's APIs aren't suited for anything but simple
configurations
- coordination between the fb layer, DRM layer, and various userspace
drivers is poor to non-existent (radeonfb excepted)
- user level mode setting drivers make displaying panic & oops
messages more difficult
- suspend/resume of graphics state is possible in many more
configurations with kernel level support
This commit just adds the core DRM part of the mode setting APIs.
Driver-specific commits using these new structures and APIs will follow.
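As a taste of the userspace-visible side, enumerating the new objects with
libdrm looks roughly like this (illustrative only; this commit adds the
kernel core, not the library code):

  drmModeRes *res = drmModeGetResources(fd);

  if (res) {
	/* CRTCs, connectors and encoders are now first-class objects */
	printf("%d crtcs, %d connectors, %d encoders\n",
	       res->count_crtcs, res->count_connectors,
	       res->count_encoders);
	drmModeFreeResources(res);
  }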
Co-authors: Jesse Barnes <jbarnes@virtuousgeek.org>, Jakob Bornecrantz <jakob@tungstengraphics.com>
Contributors: Alan Hourihane <alanh@tungstengraphics.com>, Maarten Maathuis <madman2003@gmail.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Use the new core GEM object mapping code to allow GTT mapping of GEM
objects on i915. The fault handler will make sure a fence register is
allocated too, if the object in question is tiled.
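From user space the new path is used roughly as sketched below (struct and
ioctl names as in the i915 GEM headers of this series; error handling
omitted):

  struct drm_i915_gem_mmap_gtt arg = { .handle = bo_handle };

  /* ask the kernel for a fake offset on the DRM device node ... */
  ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &arg);
  /* ... and mmap() it; faults are then served from the GTT aperture */
  void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, arg.offset);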
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add core support for mapping of GEM objects. Drivers should provide a
vm_operations_struct if they want to support page faulting of objects.
The code for handling GEM object offsets was taken from TTM, which was
written by Thomas Hellström.
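Driver-side, the hook-up described above looks roughly like this (sketched
with i915 as the example; the fault handler is the driver's own):

  static struct vm_operations_struct i915_gem_vm_ops = {
	.fault = i915_gem_fault,	/* driver fills in pages on demand */
	.open = drm_gem_vm_open,
	.close = drm_gem_vm_close,
  };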
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Dave Airlie <airlied@redhat.com>