RAS support capability needs to be updated based on the memory ECC
enablement of each configuration, and the redundant memory ECC check
in the gmc module for vega20 and arcturus should be removed.
v2: check HBM ECC enablement and set the RAS mask accordingly.
v3: avoid invoking the atomfirmware interface twice for the query.
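As a rough illustration of the v2/v3 points above (names below are
placeholders, not the actual amdgpu symbols), the idea is to query the ECC
capabilities once, cache the result, and derive the RAS support mask from it:
#include <stdbool.h>
#include <stdint.h>

#define RAS_BLOCK_UMC  (1u << 0)   /* placeholder: memory (HBM ECC) errors */
#define RAS_BLOCK_GFX  (1u << 1)   /* placeholder: on-chip SRAM errors */

struct ras_caps {
        bool queried;              /* v3: remember we already asked the VBIOS */
        uint32_t supported;        /* bitmask of RAS blocks we may enable */
};

static void ras_update_caps(struct ras_caps *caps,
                            bool hbm_ecc_enabled, bool sram_ecc_enabled)
{
        if (caps->queried)
                return;            /* avoid invoking the firmware query twice */

        caps->supported = 0;
        if (hbm_ecc_enabled)       /* v2: HBM ECC enablement drives the mask */
                caps->supported |= RAS_BLOCK_UMC;
        if (sram_ecc_enabled)
                caps->supported |= RAS_BLOCK_GFX;

        caps->queried = true;
}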
Suggested-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Guchun Chen <guchun.chen@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The offset into the array was specified in bytes but should
be in terms of 32-bit words. Also prevent large reads that
would cause a buffer overread.
v2: Read from the correct offset in the internal storage buffer.
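A minimal sketch of the two fixes (hypothetical buffer and parameter names,
not the actual debugfs code): index the internal storage as 32-bit words and
clamp the requested length so it cannot read past the end of the buffer:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* buf holds buf_words 32-bit words; offset_dw and count_dw are in dwords */
static size_t read_words(const uint32_t *buf, size_t buf_words,
                         size_t offset_dw, uint32_t *out, size_t count_dw)
{
        if (offset_dw >= buf_words)
                return 0;                         /* nothing readable here */
        if (count_dw > buf_words - offset_dw)
                count_dw = buf_words - offset_dw; /* prevent a buffer overread */

        memcpy(out, &buf[offset_dw], count_dw * sizeof(*buf));
        return count_dw;
}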
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
In the dcn20_funcs and dcn21_funcs structs, the member ".dsc_pg_control = NULL"
should be removed because .dsc_pg_control is already assigned to dcn20_dsc_pg_control.
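For context, with C designated initializers a later duplicate initializer
silently overrides an earlier one, which is why the stray NULL entry matters
(illustrative struct and signature only, not the real dcn20/dcn21 definitions):
#include <stdbool.h>

struct hwseq_funcs {                        /* illustrative, not the DC struct */
        void (*dsc_pg_control)(void *hws, unsigned int inst, bool enable);
};

static void dcn20_dsc_pg_control(void *hws, unsigned int inst, bool enable)
{
        /* ... power-gate or ungate the DSC instance ... */
}

static const struct hwseq_funcs funcs = {
        .dsc_pg_control = dcn20_dsc_pg_control,
        /* .dsc_pg_control = NULL,  <-- the removed entry: if it appeared
         *                              after the real hook it would clobber it */
};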
Signed-off-by: Stanley.Yang <Stanley.Yang@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Disallow the logic to be enabled on platforms that
don't support gfx RAS at this stage, such as SR-IOV SKUs and
dGPUs with legacy RAS.
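A trivial sketch of the gating idea (placeholder flags, not the actual amdgpu
device fields):
#include <stdbool.h>

static bool gfx_ras_allowed(bool is_sriov_vf, bool legacy_ras_only)
{
        if (is_sriov_vf)        /* SR-IOV SKUs: gfx RAS not supported yet */
                return false;
        if (legacy_ras_only)    /* dGPUs with legacy RAS: not supported yet */
                return false;
        return true;
}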
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Monk Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Invoking an error injection successfully will cause an at_event interrupt that
occurs before the invoke sequence can complete, causing an invalid error.
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: John Clements <john.clements@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This fixes a problem found on the MacBookPro 2017 Retina panel:
The panel reports 10 bpc color depth in its EDID, and the
firmware chooses link settings at boot which support enough
bandwidth for 10 bpc (324000 kbit/sec aka LINK_RATE_RBR2
aka 0xc), but the DP_MAX_LINK_RATE dpcd register only reports
2.7 Gbps (multiplier value 0xa) as possible, in direct
contradiction of what the firmware successfully set up.
This restricts the panel to 8 bpc, not providing the full
color depth of the panel on Linux <= 5.5. Additionally, commit
'4a8ca46bae8a ("drm/amd/display: Default max bpc to 16 for eDP")'
introduced into Linux 5.6-rc1 will unclamp panel depth to
its full 10 bpc, thereby requiring an eDP bandwidth for all
modes that exceeds the available bandwidth and causes all modes
to fail validation -> No modes for the laptop panel -> failure
to set any mode -> Panel goes dark.
This patch adds a quirk specific to the MBP 2017 15" Retina
panel to override reported max link rate to the correct maximum
of 0xc = LINK_RATE_RBR2 to fix the darkness and reduced display
precision.
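The quirk boils down to something like the sketch below; the EDID IDs and
helper names here are placeholders, and only DP_MAX_LINK_RATE and the
0xa/0xc link-rate values come from the description above:
#include <stdbool.h>
#include <stdint.h>

#define LINK_RATE_HIGH  0x0a   /* 2.7 Gbps, what the panel's DPCD claims */
#define LINK_RATE_RBR2  0x0c   /* 3.24 Gbps, what the panel actually handles */

struct edid_id {
        uint16_t manufacturer;  /* EDID manufacturer code */
        uint16_t product;       /* EDID product code */
};

static bool is_mbp_2017_retina_panel(const struct edid_id *id)
{
        /* placeholder match: the real quirk keys off the panel's EDID IDs */
        return id->manufacturer == 0x0610 && id->product == 0x0000;
}

static uint8_t effective_max_link_rate(const struct edid_id *id,
                                       uint8_t dpcd_max_link_rate)
{
        if (is_mbp_2017_retina_panel(id) && dpcd_max_link_rate < LINK_RATE_RBR2)
                return LINK_RATE_RBR2;  /* trust the panel, not DP_MAX_LINK_RATE */
        return dpcd_max_link_rate;
}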
Please apply for Linux 5.6+ to avoid regressing Apple MBP panel
support.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This can fix the baco reset failure seen on Navi10.
And this should be a low risk fix as the same sequence
is already used for system suspend/resume.
Signed-off-by: Evan Quan <evan.quan@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
In the dcn20_funcs and dcn21_funcs structs, the member ".dsc_pg_control = NULL"
should be removed because .dsc_pg_control is already assigned to dcn20_dsc_pg_control.
Signed-off-by: Stanley.Yang <Stanley.Yang@amd.com>
Reviewed-by: Anthony Koo <Anthony.Koo@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
UAPI Changes: None
Cross-subsystem Changes: None
Core Changes: Fixed regressions introduced by commit cd82d82cbc
("drm/dp_mst: Add branch bandwidth validation to MST atomic check"),
which would cause us to:
* Calculate the available bandwidth on an MST topology incorrectly, and
as a result reject most display configurations that would try to enable
more than one sink on a topology
* Occasionally expose MST connectors to userspace before finishing
probing their PBN capabilities, resulting in us rejecting display
configurations because we assumed briefly that no bandwidth was
available
Driver Changes: None
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/bf16ee577567beed91c86b7d9cda3ec2e8c50a71.camel@redhat.com
amd-drm-next-5.7-2020-03-10:
amdgpu:
- SR-IOV fixes
- Fix up fallout from drm load/unload callback removal
- Navi, Renoir power management watermark fixes
- Refactor smu parameter handling
- Display FEC fixes
- Display DCC fixes
- HDCP fixes
- Add support for USB-C PD firmware updates
- Pollock detection fix
- Rework compute ring priority handling
- RAS fixes
- Misc cleanups
amdkfd:
- Consolidate more gfx config details in amdgpu
- Consolidate bo alloc flags
- Improve code comments
- SDMA MQD fixes
- Misc cleanups
gpu scheduler:
- Add support for modifying the sched list
uapi:
- Clarify comments about GEM_CREATE flags that are not used by userspace.
The kernel driver has always prevented userspace from using these.
They are only used internally in the kernel driver.
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Alex Deucher <alexdeucher@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200310212748.4519-1-alexander.deucher@amd.com
Sigh, this is mostly my fault for not giving commit cd82d82cbc
("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
enough scrutiny during review. The way we're checking bandwidth
limitations here is mostly wrong:
For starters, drm_dp_mst_atomic_check_bw_limit() determines the
pbn_limit of a branch by simply scanning each port on the current branch
device, then uses the last non-zero full_pbn value that it finds. It
then counts the sum of the PBN used on each branch device for that
level, and compares against the full_pbn value it found before.
This is wrong because ports can and will have different PBN limitations
on many hubs, especially since a number of DisplayPort hubs out there
will be clever and only use the smallest link rate required for each
downstream sink - potentially giving every port a different full_pbn
value depending on what link rate it's trained at. This means that with
our current code, the max PBN value we end up with is not well defined.
Additionally, we also need to remember when checking bandwidth
limitations that the top-most device in any MST topology is a branch
device, not a port. This means that the first level of a topology
doesn't technically have a full_pbn value that needs to be checked.
Instead, we should assume that so long as our VCPI allocations fit we're
within the bandwidth limitations of the primary MSTB.
We do, however, want to check full_pbn on every port, including those of
the primary MSTB. However, it's important to keep in mind that this
value represents the minimum link rate /between a port's sink or mstb,
and the mstb itself/. A quick diagram to explain:
                       MSTB #1
                      /       \
                     /         \
                Port #1       Port #2
full_pbn for Port #1 → |       | ← full_pbn for Port #2
                Sink #1       MSTB #2
                                 |
                               etc...
Note that in the above diagram, the combined PBN from all VCPI
allocations on said hub should not exceed the full_pbn value of port #2,
and the display configuration on sink #1 should not exceed the full_pbn
value of port #1. However, port #1 and port #2 can otherwise consume as
much bandwidth as they want so long as their VCPI allocations still fit.
And finally - our current bandwidth checking code also makes the mistake
of not checking whether something is an end device or not before trying
to traverse down it.
So, let's fix it by rewriting our bandwidth checking helpers. We split
the function into one part for handling branches which simply adds up
the total PBN on each branch and returns it, and one for checking each
port to ensure we're not going over its PBN limit. Phew.
This should fix regressions seen, where we erroneously reject display
configurations due to thinking they're going over our bandwidth limits
when they're not.
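In rough outline the rewritten helpers look like the standalone sketch below
(simplified types and names, not the actual drm_dp_mst_atomic_check_*()
code): one function sums the PBN used beneath a branch device, the other
validates a single port against its full_pbn and recurses into any branch
hanging off it:
struct mst_branch;

struct mst_port {
        int full_pbn;              /* 0 means no limit reported for this port */
        int vcpi_pbn;              /* PBN allocated to the sink on this port */
        struct mst_branch *mstb;   /* non-NULL if a branch hangs off this port */
};

struct mst_branch {
        struct mst_port *ports;
        int num_ports;
};

static int check_port_bw(const struct mst_port *port);

/* Sum the PBN used beneath a branch; the branch itself (in particular the
 * primary MSTB) has no full_pbn of its own to enforce. */
static int check_mstb_bw(const struct mst_branch *mstb)
{
        int used = 0;

        for (int i = 0; i < mstb->num_ports; i++) {
                int ret = check_port_bw(&mstb->ports[i]);

                if (ret < 0)
                        return ret;
                used += ret;
        }
        return used;
}

/* Validate one port: recurse if a branch hangs off it, otherwise take the
 * sink's VCPI allocation, then compare against this port's full_pbn, which
 * limits the link between the port and its parent MSTB. */
static int check_port_bw(const struct mst_port *port)
{
        int used = port->mstb ? check_mstb_bw(port->mstb) : port->vcpi_pbn;

        if (used < 0)
                return used;
        if (port->full_pbn && used > port->full_pbn)
                return -1;         /* over the limit for this port's link */
        return used;
}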
Changes since v1:
* Took an even closer look at how PBN limitations are supposed to be
handled, and did some experimenting with Sean Paul. Ended up rewriting
these helpers again, but this time they should actually be correct!
Changes since v2:
* Small indenting fix
* Fix pbn_used check in drm_dp_mst_atomic_check_port_bw_limit()
Signed-off-by: Lyude Paul <lyude@redhat.com>
Fixes: cd82d82cbc ("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
Cc: Sean Paul <seanpaul@google.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Mikita Lipski <mikita.lipski@amd.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200309210131.1497545-1-lyude@redhat.com
We used to punt off reprobing path resources to the link address probe
work, but now that we handle CSNs asynchronously from the driver's HPD
handling we can do whatever the heck we want from the CSN!
So, reprobe the path resources from drm_dp_mst_handle_conn_stat(). Also,
get rid of the path resource reprobing code in
drm_dp_check_and_send_link_address() since it's needlessly complicated
when we already reprobe path resources from
drm_dp_handle_link_address_port(). And finally, teach
drm_dp_send_enum_path_resources() to return 1 on PBN changes so we know
if we need to send another hotplug or not.
This fixes issues where we've indicated to userspace that a port has
just been connected, before we actually probed its available PBN -
something that results in unexpected atomic check failures.
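The "return 1 on PBN changes" part amounts to something like this sketch
(standalone model, not the drm_dp_send_enum_path_resources() code itself):
the caller only queues another hotplug when the reprobe actually changed the
port's available PBN:
struct port_state {
        int full_pbn;              /* last PBN limit we probed for this port */
};

/* Returns 1 if the newly enumerated PBN differs from what we had cached,
 * 0 if nothing changed. */
static int update_path_resources(struct port_state *port, int new_full_pbn)
{
        if (port->full_pbn == new_full_pbn)
                return 0;

        port->full_pbn = new_full_pbn;
        return 1;                  /* caller should send another hotplug */
}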
Signed-off-by: Lyude Paul <lyude@redhat.com>
Fixes: cd82d82cbc ("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
Cc: Mikita Lipski <mikita.lipski@amd.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20200306234623.547525-4-lyude@redhat.com
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
DisplayPort specifications are fun. For a while, it's been really
unclear to us what available_pbn actually does. There's a somewhat vague
explanation in the DisplayPort spec (starting from 1.2) that partially
explains it:
    The minimum payload bandwidth number supported by the path. Each node
    updates this number with its available payload bandwidth number if its
    payload bandwidth number is less than that in the Message Transaction
    reply.
So, it sounds like available_pbn represents the smallest link rate in
use between the source and the branch device. Cool, so full_pbn is just
the highest possible PBN that the branch device supports right?
Well, we assumed that for quite a while until Sean Paul noticed that on
some MST hubs, available_pbn will actually get set to 0 whenever there's
any active payloads on the respective branch device. This caused quite a
bit of confusion since clearing the payload ID table would end up fixing
the available_pbn value.
So, we just went with that until commit cd82d82cbc ("drm/dp_mst: Add
branch bandwidth validation to MST atomic check") started breaking
people's setups due to us getting erroneous available_pbn values. So, we
did some more digging and got confused until we finally looked at the
definition for full_pbn:
    The bandwidth of the link at the trained link rate and lane count
    between the DP Source device and the DP Sink device with no time slots
    allocated to VC Payloads, represented as a Payload Bandwidth Number. As
    with the Available_Payload_Bandwidth_Number, this number is determined
    by the link with the lowest lane count and link rate.
That's what we get for not reading specs closely enough, hehe. So, since
full_pbn is definitely what we want for doing bandwidth restriction
checks - let's start using that instead and ignore available_pbn
entirely.
Signed-off-by: Lyude Paul <lyude@redhat.com>
Fixes: cd82d82cbc ("drm/dp_mst: Add branch bandwidth validation to MST atomic check")
Cc: Mikita Lipski <mikita.lipski@amd.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Sean Paul <sean@poorly.run>
Reviewed-by: Mikita Lipski <mikita.lipski@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200306234623.547525-3-lyude@redhat.com
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
I noticed that there is a prototype for vmw_fifo_ping_host_locked() but
no function. Then I looked further and noticed more functions which are
not used anymore, or function prototypes which remained after the
function was removed.
Remove the unused functions (and prototypes).
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
TTM doesn't yet fully support mapping of DMA memory when SEV is active,
so in that case, refuse DMA operation. For guest-backed object operation
this means 3D acceleration will be disabled. For host-backed, VRAM will be
used for data transfer between the guest and the device.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
When we refuse DMA from system pages for whatever reason, we don't
handle that correctly when guest-backed objects are enabled.
Since guest-backed objects by definition require DMA to and from
system pages, disable all functionality that relies on them.
That basically amounts to 3D acceleration and screen targets.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Runtime PM and RGB output need to be released when host1x client
registration fails. This release is missed in the code; let's correct it.
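A generic sketch of the missing unwinding (all helper names below are
placeholders for the Tegra-specific ones; only pm_runtime_enable() and
pm_runtime_disable() are the real runtime-PM calls):
#include <linux/device.h>
#include <linux/pm_runtime.h>

/* placeholders standing in for the Tegra-specific helpers */
static int rgb_output_init(struct device *dev);
static void rgb_output_exit(struct device *dev);
static int host1x_client_register_stub(struct device *dev);

static int example_probe(struct device *dev)
{
        int err;

        err = rgb_output_init(dev);             /* placeholder for RGB setup */
        if (err < 0)
                return err;

        pm_runtime_enable(dev);

        err = host1x_client_register_stub(dev); /* placeholder registration */
        if (err < 0) {
                pm_runtime_disable(dev);        /* was leaked on this path */
                rgb_output_exit(dev);           /* was leaked on this path */
                return err;
        }

        return 0;
}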
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
The devm_platform_ioremap_resource() helper replaces a few lines of
boilerplate code with a single line, making the code look a tad cleaner.
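Roughly the transformation this enables, shown as a generic probe() fragment
rather than the exact Tegra hunk:
#include <linux/err.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        struct resource *res;
        void __iomem *regs;

        /* before: resource lookup plus ioremap as two separate steps */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(regs))
                return PTR_ERR(regs);

        /* after: one call does the lookup and the ioremap */
        regs = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(regs))
                return PTR_ERR(regs);

        return 0;
}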
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
drm_dp_mst_topology_mgr_cbs.destroy_connector callbacks are identical
amongst every driver and don't do anything other than cleaning up the
connector (drm_connector_unregister()/drm_connector_put()), except in the
amdgpu_dm driver, which has some amdgpu_dm-specific code in there.
This connector cleanup is now handled in the drm core, so driver
destroy_connector callbacks are not needed (except for amdgpu_dm);
hence remove them.
Removal is done with the below semantic patch:
@r1@
identifier func, E;
@@
struct drm_dp_mst_topology_cbs E = {
...,
- .destroy_connector = func
};
@delete depends on r1@
identifier r1.func;
@@
- static void func(...){...}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Suggested-by: Emil Velikov <emil.velikov@collabora.com>
Suggested-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200307083023.76498-6-pankaj.laxminarayan.bharadiya@intel.com
Reviewed-by: Lyude Paul <lyude@redhat.com>
drm_dp_mst_port_add_connector() now calls drm_connector_register()
directly, and the drm_dp_mst_topology_mgr_cbs.register_connector callback
is not called anymore.
Hence remove all drm_dp_mst_topology_mgr_cbs.register_connector
callbacks.
This is a preparatory step for removing the
drm_dp_mst_topology_mgr_cbs.register_connector callback hook.
The removal is done with the below semantic patch:
@r1@
identifier func, E;
@@
struct drm_dp_mst_topology_cbs E = {
...,
- .register_connector = func
};
@delete depends on r1@
identifier r1.func;
@@
- static void func(...){...}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Suggested-by: Emil Velikov <emil.velikov@collabora.com>
Suggested-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200307083023.76498-3-pankaj.laxminarayan.bharadiya@intel.com
Reviewed-by: Lyude Paul <lyude@redhat.com>
Adaptive Sync is a VESA feature, so add a DRM core helper to parse
the EDID's detailed descriptors and obtain the adaptive sync monitor range.
Store this info as part of drm_display_info so it can be used
across all drivers.
This part of the code is stripped out of amdgpu's function
amdgpu_dm_update_freesync_caps() to make it generic so it can be used
across all DRM drivers.
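A hedged standalone sketch of the parsing idea (not the drm_edid.c helper
itself, and it ignores the extended range-offset flags): walk the EDID's
four 18-byte detailed descriptors, find the display range limits descriptor
(tag 0xFD), and record the min/max vertical refresh rates:
#include <stdint.h>

struct monitor_range {
        uint8_t min_vfreq;      /* Hz */
        uint8_t max_vfreq;      /* Hz */
};

static void parse_monitor_range(const uint8_t edid[128],
                                struct monitor_range *out)
{
        /* the four detailed descriptors live at offsets 54, 72, 90 and 108 */
        for (int off = 54; off <= 108; off += 18) {
                const uint8_t *d = &edid[off];

                /* display descriptors start with 00 00 00 <tag>; 0xFD is
                 * the display range limits (monitor range) descriptor */
                if (d[0] || d[1] || d[2] || d[3] != 0xfd)
                        continue;

                out->min_vfreq = d[5];  /* min vertical rate */
                out->max_vfreq = d[6];  /* max vertical rate */
                return;
        }
}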
v6:
* Call it monitor_range (Ville)
v5:
* Use the renamed flags
v4:
* Use is_display_descriptor() (Ville)
* Name the monitor range flags (Ville)
v3:
* Remove the edid parsing restriction for just DP (Nicholas)
* Use drm_for_each_detailed_block (Ville)
* Make the drm_get_adaptive_sync_range function static (Harry, Jani)
v2:
* Change vmin and vmax to use u8 (Ville)
* Don't store pixel clock since that is just a max dotclock
and not related to VRR mode (Manasi)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Harry Wentland <harry.wentland@amd.com>
Cc: Clinton A Taylor <clinton.a.taylor@intel.com>
Cc: Kazlauskas Nicholas <Nicholas.Kazlauskas@amd.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200310231651.13841-2-manasi.d.navare@intel.com