If there is limited link bandwidth on an MST network, it must be
divided fairly between the streams on that network.
Implement an algorithm to determine the correct DSC config for each
stream.
The algorithm:
This
[ ] ( )
represents the range of bandwidths possible for a given stream.
The [] area represents the range of DSC configs, and the ()
represents no DSC. The bandwidth used increases from left to right.
First, try disabling DSC on all streams
[ ] (|)
[ ] (|)
Check this against the bandwidth limits of the link and each branch
(including each endpoint). If it passes, the job is done.
Second, try maximum DSC compression on all streams
that support DSC
[| ] ( )
[| ] ( )
If this does not pass, then enabling this combination of streams is
impossible.
Otherwise, divide the remaining bandwidth evenly amongst the streams
[ | ] ( )
[ | ] ( )
If one or more of the streams reach minimum compression, evenly
divide the remaining bandwidth amongst the remaining streams.
[ |] ( )
[ |] ( )
[ | ] ( )
[ | ] ( )
If all streams can reach minimum compression, disable compression
greedily.
[ |] ( )
[ |] ( )
[ ] (|)
Perform this algorithm on each full update, on each MST link with at
least one DSC stream on it.
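A hedged sketch of the even-division step, in driver-neutral C (the
function and variable names are illustrative, not the actual amdgpu
helpers; min_bw is a stream's maximum-compression bandwidth, max_bw
its minimum-compression bandwidth, and the final greedy
disable-compression pass is a separate step not shown):

/*
 * Start every DSC stream at maximum compression, then hand out the
 * remaining link budget in equal shares, clamping streams that reach
 * minimum compression and redistributing what they could not use.
 */
static void distribute_dsc_bw(int *bw, const int *min_bw,
                              const int *max_bw, int n, int link_bw)
{
    int i, used = 0, active = 0;

    for (i = 0; i < n; i++) {
        bw[i] = min_bw[i];
        used += bw[i];
        if (bw[i] < max_bw[i])
            active++;
    }

    while (active > 0 && link_bw - used >= active) {
        int share = (link_bw - used) / active;

        for (i = 0; i < n; i++) {
            int room = max_bw[i] - bw[i];
            int add;

            if (room <= 0)
                continue;
            add = share < room ? share : room;
            bw[i] += add;
            used += add;
            if (bw[i] >= max_bw[i])
                active--;
        }
    }
}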
After the configs are computed, call
dcn20_add_dsc_to_stream_resource on each stream with DSC enabled.
It is only after all streams are created that we can know which
of them will need DSC.
Do all of this at the end of amdgpu atomic check. If it fails, fail
the check: this combination of timings cannot be supported.
v2: Use drm_dp_mst_atomic_check to validate bw for certain dsc
configurations
v3: Use dc_dsc_policy structure to get min and max bpp rate
for DSC configuration
Acked-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Signed-off-by: David Francis <David.Francis@amd.com>
Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[why]
Need to calculate VCPI slots differently for DSC to take into account
the current link rate, lane count and FEC.
[how]
Add helper to get pbn_div from dc_link
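A sketch of such a helper (the helper name is an assumption;
dc_link_bandwidth_kbps and dc_link_get_link_cap are existing dc
helpers, and the constant converts link bandwidth in kbps into PBN
units):

int dm_mst_get_pbn_divider(struct dc_link *link)
{
    if (!link)
        return 0;

    /* kbps -> PBN units: / 8 (bits->bytes), / 1000 (k->M), / 54 */
    return dc_link_bandwidth_kbps(link, dc_link_get_link_cap(link))
            / (8 * 1000 * 54);
}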
Acked-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Leo Li <sunpeng.li@amd.com>
Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[why]
For the DSC case we cannot use the topology manager's PBN divider
variable. The default divider does not take FEC into account.
Therefore the driver has to calculate its own divider based on the
link rate and lane count it is handling, as this is hw specific.
[how]
Pass pbn_div as an argument; it is used if it is greater than zero,
otherwise the topology manager's default pbn_div will be used.
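A hedged sketch of the intended semantics at the consumer (the
surrounding context is illustrative):

    if (pbn_div <= 0)
        pbn_div = mgr->pbn_div;    /* topology manager's default */

    req_slots = DIV_ROUND_UP(pbn, pbn_div);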
Reviewed-by: Lyude Paul <lyude@redhat.com>
Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
During MST mode enumeration, if a new dc_sink is created,
populate it with dsc caps as appropriate.
Use drm_dp_mst_dsc_aux_for_port to get the raw caps,
then parse them into the dc_sink with dc_dsc_parse_dsc_dpcd.
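A hedged sketch of that flow (the 16-byte read at DP_DSC_SUPPORT and
the dc_dsc_parse_dsc_dpcd signature are assumptions for
illustration):

    struct drm_dp_aux *dsc_aux = drm_dp_mst_dsc_aux_for_port(port);
    u8 dsc_caps[16] = { 0 };

    if (dsc_aux &&
        drm_dp_dpcd_read(dsc_aux, DP_DSC_SUPPORT, dsc_caps,
                         sizeof(dsc_caps)) >= 0)
        dc_dsc_parse_dsc_dpcd(dsc_caps, NULL,
                              &dc_sink->dsc_caps.dsc_dec_caps);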
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Signed-off-by: David Francis <David.Francis@amd.com>
Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
For DSC MST, sometimes monitors would break out
in full-screen static. The issue traced back to the
PPS generation code, where these variables were being used
uninitialized and were picking up garbage.
memset them to 0 to avoid this.
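The fix is of the simple form (struct names illustrative):

    struct dsc_reg_values dsc_reg_vals;
    struct dsc_optc_config dsc_optc_cfg;

    memset(&dsc_reg_vals, 0, sizeof(dsc_reg_vals));
    memset(&dsc_optc_cfg, 0, sizeof(dsc_optc_cfg));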
Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Signed-off-by: David Francis <David.Francis@amd.com>
Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
With DSC, bpp can be fractional in multiples of 1/16.
Change drm_dp_calc_pbn_mode to reflect this, adding a new
parameter bool dsc. When this parameter is true, treat the
bpp parameter as having units not of bits per pixel, but
1/16 of a bit per pixel.
v2: Don't add separate function for this
v3: In the equation divide bpp by 16 as it is expected
not to leave any remainder
v4: Added DSC test parameters for selftest
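The resulting math, roughly (a sketch of the updated helper; the
64 * 1006 / (8 * 54 * 1000 * 1000) factor is the existing PBN
conversion with its ~0.6% margin):

int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc)
{
    /* with dsc, bpp is in units of 1/16 of a bit per pixel */
    if (dsc)
        return DIV_ROUND_UP_ULL(mul_u32_u32(clock * (bpp / 16),
                                            64 * 1006),
                                8 * 54 * 1000 * 1000);

    return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006),
                            8 * 54 * 1000 * 1000);
}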
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: David Francis <David.Francis@amd.com>
Signed-off-by: Mikita Lipski <mikita.lipski@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use filep->private_data to store a pointer to the kfd_process data
structure. Take an extra reference for that, which gets released in
the kfd_release callback. Check that the process calling kfd_ioctl
is the same that opened the file descriptor. Return -EBADF if it's
not, so that this error can be distinguished in user mode.
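The resulting ioctl-entry check is of this shape (hedged sketch;
lead_thread is the kfd_process field for the opening thread group,
the retcode/err_i1 plumbing is illustrative):

    process = filep->private_data;
    if (process->lead_thread != current->group_leader) {
        retcode = -EBADF;    /* fd was opened by a different process */
        goto err_i1;
    }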
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Philip Yang <Philip.Yang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
For high-res (8K) or HFR (4K120) displays, using uncompressed pixel
formats like YCbCr444 would exceed the bandwidth of HDMI 2.0, so the
"interesting" modes would be disabled, leaving only low-res or low
framerate modes.
This change lowers the pixel encoding to 4:2:2 or 4:2:0 if the max TMDS
clock is exceeded. Verified that 8K30 and 4K120 are now available and
working with a Samsung Q900R over an HDMI 2.0b link from a Radeon 5700.
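The fallback is conceptually (a hedged sketch, not the exact dc
logic; the clock and capability names are illustrative):

    if (pixel_clock_khz > max_tmds_clk_khz) {
        if (sink_supports_ycbcr422)
            timing->pixel_encoding = PIXEL_ENCODING_YCBCR422;
        else if (sink_supports_ycbcr420)
            timing->pixel_encoding = PIXEL_ENCODING_YCBCR420;
    }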
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Thomas Anderson <thomasanderson@google.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[Why]
Some combined docks always trigger CP_IRQ even though there is
nothing the driver needs to take care of; the CP_IRQ breaks the
current hdcp state and triggers the driver to restart the
authentication.
[How]
Add an event type check before restarting the authentication or
resending the stream management.
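The guard is of this shape (hedged sketch; MOD_HDCP_EVENT_CP_IRQ is
the mod_hdcp event type, and rx_status_requires_reauth() is a
hypothetical helper standing in for the actual status checks):

    /* ignore CP_IRQs that carry no actionable rx status change */
    if (event_ctx->event == MOD_HDCP_EVENT_CP_IRQ &&
        !rx_status_requires_reauth(rx_status))
        return MOD_HDCP_STATUS_SUCCESS;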
Signed-off-by: Xiaodong Yan <Xiaodong.Yan@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[WHY]
Some monitors trigger an HDCP2.x timeout after reinitializing (e.g.
toggling HDR) by taking longer than expected to return h' (h prime).
Previously the 200ms watchdog timer retry count would hit
MAX_NUM_OF_ATTEMPTS (4), causing a fallback to HDCP1.x.
[HOW]
Adding a 1s delay after an h' watchdog timeout provides enough time
for affected monitors to return h' without hitting
MAX_NUM_OF_ATTEMPTS.
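The workaround is of this shape (hedged sketch;
hprime_watchdog_expired is a hypothetical flag standing in for the
actual mod_hdcp state):

    if (hprime_watchdog_expired) {
        msleep(1000);    /* give slow displays time to compute h' */
        /* the next retry then succeeds before MAX_NUM_OF_ATTEMPTS */
    }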
Signed-off-by: Michael Strauss <michael.strauss@amd.com>
Reviewed-by: Wenjing Liu <Wenjing.Liu@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[Why]
PSP needs the session ID to destroy a session. In the case where
session creation fails, we don't have a session ID.
[How]
Set the session ID before returning.
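The fix is of this shape (hedged sketch; the output-struct and status
names are illustrative of the PSP HDCP2 interface):

    /* record the session ID even on failure, so that the
     * subsequent destroy-session call has a valid handle */
    hdcp->auth.id = hdcp2_out->session_id;
    if (hdcp2_out->status != TA_HDCP2_STATUS_SUCCESS)
        return MOD_HDCP_STATUS_HDCP2_CREATE_SESSION_FAILURE;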
Signed-off-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com>
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
list_for_each() can be replaced by the more concise
list_for_each_entry() here for iteration over the lists.
This change was reported by coccinelle.
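The transformation, in general form (struct foo and its node member
are illustrative):

    /* before: manual list_entry() on every iteration */
    struct list_head *pos;

    list_for_each(pos, &head) {
        struct foo *f = list_entry(pos, struct foo, node);
        use(f);
    }

    /* after: the iterator yields the containing struct directly */
    struct foo *f;

    list_for_each_entry(f, &head, node)
        use(f);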
Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Provide a unified entry point, and fix the confusion where the API
usage conflicts with what the naming implies.
Signed-off-by: Evan Quan <evan.quan@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
With this, we can avoid passing in the VRAM address on every table
transfer. That puts extra unnecessary traffic on the SMU in some
cases (e.g. polling the amdgpu_pm_info sysfs interface).
V2: document what the driver table is for and how it works
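Conceptually, the address handshake now happens once at init rather
than per transfer (hedged sketch; the SetDriverDramAddr message names
follow the SMU11 convention, the call site is illustrative):

    /* tell the SMU where the single driver table lives, once */
    smu_send_smc_msg_with_param(smu, SMU_MSG_SetDriverDramAddrHigh,
                                upper_32_bits(table->mc_address));
    smu_send_smc_msg_with_param(smu, SMU_MSG_SetDriverDramAddrLow,
                                lower_32_bits(table->mc_address));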
Signed-off-by: Evan Quan <evan.quan@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This way we do not need to allocate a piece of VRAM for it. This is
preparation for a coming change which unifies the VRAM address for
all driver table interactions with the SMU.
Signed-off-by: Evan Quan <evan.quan@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Under sriov and pp_onevf mode:
1. Take resume instead of hw_init for SMC recovery, to avoid a
potential memory leak.
2. Add a return condition inside the SMC resume function for
sriov_pp_onevf_mode and the pm_enabled param.
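Item 1 is of this shape (hedged sketch; amdgpu_sriov_is_pp_one_vf()
matches the driver's SR-IOV helpers, the recovery call site is
illustrative):

    if (amdgpu_sriov_vf(adev) && amdgpu_sriov_is_pp_one_vf(adev))
        /* resume re-uses existing tables; hw_init would
         * re-allocate them and leak the old ones */
        return smu_resume(adev);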
Signed-off-by: Jack Zhang <Jack.Zhang1@amd.com>
Acked-by: Evan Quan <evan.quan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Before, initialization of the smu ip block would be skipped for
sriov ASICs. But if there's only one VF being used, the guest driver
should be able to dump some HW info such as clocks, temperature, etc.
To solve this, after onevf mode is enabled, the host driver will
notify the guest. If it's onevf mode, the guest will do smu hw_init
and skip some steps in the normal smu hw_init flow, because the host
driver has already done them for the smu.
With this fix, a guest app can talk with the smu and dump hw
information from the smu.
v2: refine the logic for pm_enabled. Skip hw_init by not changing
pm_enabled.
v3: refine is_support_sw_smu and fix some indentation issues.
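The resulting gate in smu hw_init is roughly (hedged sketch; the
pm_enabled handling follows the v2 note above):

    if (amdgpu_sriov_vf(adev) && !smu->pm_enabled)
        return 0;    /* host owns the SMU; skip guest-side init */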
Signed-off-by: Jack Zhang <Jack.Zhang1@amd.com>
Acked-by: Evan Quan <evan.quan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use the unified variable smu->is_apu to check for APU ASICs in the
smu driver.
Related patch:
drm/amd/powerplay: bypass dpm_context null pointer check guard for
some smu series
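The effect, in general form (the ASIC check and do_apu_path() are
illustrative):

    /* before: per-ASIC checks scattered through the SMU code */
    if (adev->asic_type == CHIP_RENOIR)
        do_apu_path();

    /* after: one flag, set once at smu init */
    if (smu->is_apu)
        do_apu_path();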
Signed-off-by: Kevin Wang <kevin1.wang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Per confirmation with the RLC firmware team, the RLC should be
unhalted only after all RLC related firmwares are uploaded. However,
in fact the RLC is unhalted immediately after the RLCG firmware is
uploaded, and that may cause an unexpected PSP hang on loading the
succeeding RLC save restore list related firmwares.
So, we correct the firmware loading sequence to load the RLC save
restore list related firmwares before the RLCG ucode. That will help
to get around this issue.
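Illustratively, the per-ucode load order becomes (the ucode IDs are
the amdgpu_ucode.h names; the list is an illustration of the
ordering, not the literal patch):

    AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL,
    AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM,
    AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM,
    AMDGPU_UCODE_ID_RLC_G,    /* RLC is unhalted after this upload */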
Signed-off-by: Evan Quan <evan.quan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>