RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode

The mlx5 VF driver doesn't set QP tx port affinity because it doesn't know
whether the LAG is active, since the "lag_active" flag works only for PF
interfaces. As a result, VF interfaces use only one port of the LAG, which
hurts performance.

Add a lag_tx_port_affinity CAP bit; when it is enabled and
"num_lag_ports > 1", the driver always sets QP tx affinity, regardless
of LAG state.
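
A minimal sketch of the helper referenced in the hunk below. The exact file
placement and field accesses are assumptions based on the description above,
not a verbatim copy of the patch; it simply combines the existing lag_active
flag with the new lag_tx_port_affinity CAP bit and the num_lag_ports check:

/* Sketch only: combine the PF-visible lag_active flag with the new
 * VF-visible capability bits described in the commit message.
 */
static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
{
	/* PF path: LAG state is known directly. */
	if (dev->lag_active)
		return true;

	/* VF path: lag_active is not visible, so rely on the new
	 * lag_tx_port_affinity CAP bit plus num_lag_ports > 1.
	 */
	return MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity) &&
	       MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1;
}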

Link: https://lore.kernel.org/r/20200527055014.355093-1-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

@@ -3653,7 +3653,8 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
 	struct mlx5_ib_qp_base *qp_base;
 	unsigned int tx_affinity;
 
-	if (!(dev->lag_active && qp_supports_affinity(qp)))
+	if (!(mlx5_ib_lag_should_assign_affinity(dev) &&
+	      qp_supports_affinity(qp)))
 		return 0;
 
 	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)
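
Once this check passes, the affinity value is typically spread across the LAG
ports in round-robin fashion. A hedged sketch follows; the counter name and
the standalone helper are assumptions for illustration, not the driver's
actual implementation:

/* Illustrative only: round-robin selection of a tx port once
 * mlx5_ib_lag_should_assign_affinity() returns true. The counter
 * (tx_port_affinity) and helper name are hypothetical.
 */
static u8 pick_tx_affinity(atomic_t *tx_port_affinity, unsigned int num_lag_ports)
{
	/* The affinity field is 1-based; 0 means "no affinity set". */
	return (atomic_fetch_add(1, tx_port_affinity) % num_lag_ports) + 1;
}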