qcacmn: Hold lock for entire nbuf debug iteration

Hold the spinlock for the entire iteration of the nbuf map/unmap
hashtable. This prevents invalid list/memory access in case of a race
with an unmap operation. It also means accepting the chance of a
watchdog bite while printing in exchange for list/memory safety. Since
we are about to panic anyway, the worst case is manually loading the
memory dump to inspect the contents of the map/unmap hashtable.

Change-Id: Iafc38764d55fc46910051349e4f4b26da33cae51
CRs-Fixed: 2171736
Author: Dustin Brown
Commit: a13db35821

1 file changed, 8 insertions(+), 6 deletions(-)

qdf/linux/src/qdf_nbuf.c (+8 -6)

@@ -548,15 +548,17 @@ void qdf_nbuf_map_check_for_leaks(void)
 	qdf_err("Nbuf map without unmap events detected!");
 	qdf_err("------------------------------------------------------------");
 
+	/* Hold the lock for the entire iteration for safe list/meta access. We
+	 * are explicitly preferring the chance to watchdog on the print, over
+	 * the possibility of invalid list/memory access. Since we are going to
+	 * panic anyway, the worst case is loading up the crash dump to find out
+	 * what was in the hash table.
+	 */
 	qdf_spin_lock_irqsave(&qdf_nbuf_map_lock);
 	hash_for_each(qdf_nbuf_map_ht, bucket, meta, node) {
-		qdf_spin_unlock_irqrestore(&qdf_nbuf_map_lock);
-
 		count++;
-		qdf_err("0x%pk @ %s:%u", meta->nbuf, kbasename(meta->file),
-			meta->line);
-
-		qdf_spin_lock_irqsave(&qdf_nbuf_map_lock);
+		qdf_err("0x%pk @ %s:%u",
+			meta->nbuf, kbasename(meta->file), meta->line);
 	}
 	qdf_spin_unlock_irqrestore(&qdf_nbuf_map_lock);
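
For context, below is a minimal sketch of the unmap-side path that the
leak check can race with. It is not part of this change; the helper
qdf_nbuf_meta_get() and the exact metadata layout are assumptions made
for illustration. The point is that the unmap side deletes and frees the
tracking node while holding qdf_nbuf_map_lock, so the leak-check
iterator must hold the same lock for the whole walk or risk touching
freed memory.

/*
 * Illustrative sketch only; qdf_nbuf_meta_get() and the metadata struct
 * name are assumptions, not the actual qcacmn implementation.
 */
static void qdf_nbuf_untrack_map(qdf_nbuf_t nbuf)
{
	struct qdf_nbuf_map_metadata *meta;

	qdf_spin_lock_irqsave(&qdf_nbuf_map_lock);

	meta = qdf_nbuf_meta_get(nbuf);	/* hypothetical lookup helper */
	if (meta) {
		hash_del(&meta->node);	/* entry leaves the hashtable */
		qdf_mem_free(meta);	/* and its memory is freed */
	}

	qdf_spin_unlock_irqrestore(&qdf_nbuf_map_lock);
}

This is why the old pattern of dropping and re-acquiring the lock for
each entry was unsafe: the node cached by hash_for_each() may already
have been freed by the time the lock is retaken.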