net_sched: gen_estimator: complete rewrite of rate estimators
1) Old code was hard to maintain, due to complex lock chains.
   (We probably will be able to remove some kfree_rcu() in callers)

2) Using a single timer to update all estimators does not scale.

3) Code was buggy on 32bit kernel (WRITE_ONCE() on a 64bit quantity
   is not supposed to work well)

In this rewrite :

- I removed the RB tree that had to be scanned in
  gen_estimator_active(). qdisc dumps should be much faster.

- Each estimator has its own timer.

- Estimations are maintained in a net_rate_estimator structure,
  instead of dirtying the qdisc. Minor, but part of the simplification.

- Reading the estimator uses RCU and a seqcount to provide proper
  support for 32bit kernels (see the read-side sketch after this
  message).

- We reduce memory need when estimators are not used, since
  we store a pointer, instead of the bytes/packets counters.

- xt_rateest_mt() no longer has to grab a spinlock.
  (In the future, xt_rateest_tg() could be switched to per cpu counters)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
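A minimal sketch of that read-side pattern, assuming hypothetical names
(est_state, rate_sample and sample_rate_est are illustrative, not the
identifiers used by the patch): the estimator pointer is fetched under
RCU, and the 64bit averages are copied inside a seqcount retry loop so
a concurrent timer update can never be observed half-written on 32bit
kernels.

/*
 * Sketch only, not the verbatim kernel code: structure layout and
 * function names here are assumptions for illustration.
 */
#include <linux/rcupdate.h>
#include <linux/seqlock.h>
#include <linux/types.h>

struct rate_sample {
	u64 bps;	/* bytes per second */
	u64 pps;	/* packets per second */
};

struct est_state {
	seqcount_t seq;	/* guards the 64bit averages below */
	u64 avbps;
	u64 avpps;
};

static bool sample_rate_est(struct est_state __rcu **ptr,
			    struct rate_sample *out)
{
	struct est_state *est;
	unsigned int start;

	rcu_read_lock();
	est = rcu_dereference(*ptr);	/* NULL when no estimator is attached */
	if (!est) {
		rcu_read_unlock();
		return false;
	}
	/* Retry if the per-estimator timer updated the averages meanwhile. */
	do {
		start = read_seqcount_begin(&est->seq);
		out->bps = est->avbps;
		out->pps = est->avpps;
	} while (read_seqcount_retry(&est->seq, start));
	rcu_read_unlock();
	return true;
}

Storing only a pointer also explains the memory saving noted above: a
qdisc with no estimator attached carries a single NULL pointer instead
of embedded byte/packet rate counters.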
commit 1c0d32fde5
parent a6e1693129
committed by David S. Miller
@@ -1395,7 +1395,7 @@ static int tc_fill_qdisc(struct sk_buff *skb, struct Qdisc *q, u32 clid,
 	if (gnet_stats_copy_basic(qdisc_root_sleeping_running(q),
 				  &d, cpu_bstats, &q->bstats) < 0 ||
-	    gnet_stats_copy_rate_est(&d, &q->bstats, &q->rate_est) < 0 ||
+	    gnet_stats_copy_rate_est(&d, &q->rate_est) < 0 ||
 	    gnet_stats_copy_queue(&d, cpu_qstats, &q->qstats, qlen) < 0)
 		goto nla_put_failure;
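The hunk reflects the narrowed helper signature: with the averages kept
in an RCU-managed net_rate_estimator object, the dump helper no longer
needs the qdisc's basic counters. A before/after sketch of the
prototypes, as inferred from the call sites above (the exact parameter
types are assumptions, not quoted from the patch):

/* Before the rewrite: the helper needed the basic byte/packet
 * counters alongside the rate-estimate struct embedded in the qdisc.
 */
int gnet_stats_copy_rate_est(struct gnet_dump *d,
			     const struct gnet_stats_basic_packed *b,
			     struct gnet_stats_rate_est64 *r);

/* After: a single RCU-protected pointer to the estimator object
 * is enough; a NULL pointer means no estimator is active.
 */
int gnet_stats_copy_rate_est(struct gnet_dump *d,
			     struct net_rate_estimator __rcu **rate_est);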