tracing: rework sched_preempt_disable trace point implementation

The current implementation of the sched_preempt_disable trace point
fails to detect the preemption disable time inside spin_lock_bh()
and spin_unlock_bh(). This is because __local_bh_disable_ip() calls
__preempt_count_add() directly, which skips the preemption disable
tracking. Instead of relying on updates to the preempt count, it is
better to do the preemption disable tracking directly in the
preemptoff tracer. This is similar to how irq disable tracking is
done.
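
A minimal sketch of the idea (the function and variable names below
are illustrative, not taken from the patch; only
sysctl_preemptoff_tracing_threshold_ns appears in the diff, and the
trace point arguments are simplified):

#include <linux/percpu.h>
#include <linux/sched/clock.h>

static DEFINE_PER_CPU(u64, preempt_disable_ts);

/* called from the preemptoff tracer when preemption gets disabled */
static void note_preempt_disable(void)
{
	this_cpu_write(preempt_disable_ts, sched_clock());
}

/* called from the preemptoff tracer when preemption gets enabled */
static void note_preempt_enable(void)
{
	u64 delta = sched_clock() - this_cpu_read(preempt_disable_ts);

	/* emit the trace point only past the sysctl threshold */
	if (delta > sysctl_preemptoff_tracing_threshold_ns)
		trace_sched_preempt_disable(delta);
}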

The current code handles the false positives coming from __schedule()
by directly resetting the time stamp. This requires an interface from
the scheduler to the preemptoff tracer. To avoid this additional
interface, this patch detects the same condition by comparing the
task pid and the context switch count. If they do not match between
preemption disable and enable, the preemption disable time is not
tracked, since a context switch was involved.
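
A minimal sketch of this check (the per-CPU variables and helper
names below are hypothetical, not taken from the patch; nvcsw and
nivcsw are the task's voluntary and involuntary context switch
counters):

#include <linux/percpu.h>
#include <linux/sched.h>

static DEFINE_PER_CPU(pid_t, pd_pid);
static DEFINE_PER_CPU(unsigned long, pd_ctxsw);

/* snapshot the current task when preemption gets disabled */
static void snapshot_task(void)
{
	this_cpu_write(pd_pid, current->pid);
	this_cpu_write(pd_ctxsw, current->nvcsw + current->nivcsw);
}

/*
 * On preemption enable: report the disable time only if the same
 * task is still running and did not go through __schedule() while
 * preemption was disabled.
 */
static bool same_task_no_ctx_switch(void)
{
	return this_cpu_read(pd_pid) == current->pid &&
	       this_cpu_read(pd_ctxsw) == current->nvcsw + current->nivcsw;
}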

Due to this rework, the sched_preempt_disable trace point location
is changed to

/sys/kernel/debug/tracing/events/preemptirq/sched_preempt_disable/enable

Change-Id: I7f58d316b7c54bc7a54102bfeb678404bda010d4
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[satyap@codeaurora.org: port to 5.4 and resolve trivial merge conflicts]
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>

@@ -74,7 +74,7 @@ sched_ravg_window_handler(struct ctl_table *table, int write,
 			loff_t *ppos);
 #endif
-#if defined(CONFIG_PREEMPT_TRACER) || defined(CONFIG_DEBUG_PREEMPT)
+#if defined(CONFIG_PREEMPTIRQ_EVENTS) || defined(CONFIG_PREEMPT_TRACER)
 extern unsigned int sysctl_preemptoff_tracing_threshold_ns;
 #endif
 #if defined(CONFIG_PREEMPTIRQ_EVENTS) && defined(CONFIG_IRQSOFF_TRACER)