
CVE-2024-27005 (RH:CVE-2024-27005)

Published: 2024-05-01 19:19:44
Source: redhat.com (access.redhat.com)
Tags: linux kernel, vulnerability, icc_lock, icc_bw_lock, mutexes

AI Score: 7.1 High (Confidence: Low)
EPSS: 0.0004 Low (Percentile: 15.8%)

In the Linux kernel, the following vulnerability has been resolved:

interconnect: Don’t access req_list while it’s being manipulated

The icc_lock mutex was split into separate icc_lock and icc_bw_lock mutexes in [1] to avoid lockdep splats. However, this didn’t adequately protect access to icc_node::req_list. The icc_set_bw() function will eventually iterate over req_list while only holding icc_bw_lock, but req_list can be modified while only holding icc_lock. This causes races between icc_set_bw(), of_icc_get(), and icc_put().

Example A:

    CPU0                               CPU1
    ----                               ----
    icc_set_bw(path_a)
      mutex_lock(&icc_bw_lock);
                                       icc_put(path_b)
                                         mutex_lock(&icc_lock);
      aggregate_requests()
        hlist_for_each_entry(r, …
                                         hlist_del(…

Example B:

    CPU0                               CPU1
    ----                               ----
    icc_set_bw(path_a)
      mutex_lock(&icc_bw_lock);
                                       path_b = of_icc_get()
                                         of_icc_get_by_index()
                                           mutex_lock(&icc_lock);
                                           path_find()
                                             path_init()
      aggregate_requests()
        hlist_for_each_entry(r, …
                                           hlist_add_head(…

Fix this by ensuring icc_bw_lock is always held before manipulating icc_node::req_list. The additional places icc_bw_lock is held don’t perform any memory allocations, so we should still be safe from the original lockdep splats that motivated the separate locks.

[1] commit af42269c3523 (“interconnect: Fix locking for runpm vs reclaim”)
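To make the locking rule concrete, below is a minimal userspace sketch of the pattern the fix enforces, not the kernel code: the names (node, req, topo_lock, bw_lock, add_request, aggregate_requests) are illustrative stand-ins for icc_node, icc_node::req_list, icc_lock, and icc_bw_lock, with pthread mutexes in place of kernel mutexes and a plain singly linked list in place of an hlist. The reader iterates the request list while holding only the bandwidth lock, so every writer must also take that lock around the actual list manipulation.

    /*
     * Minimal sketch of the post-fix locking rule: the request list is
     * iterated under bw_lock alone, so any add/remove must also hold
     * bw_lock. All names here are illustrative, not the kernel's.
     */
    #include <pthread.h>
    #include <stdio.h>

    struct req {
        int peak_bw;
        struct req *next;
    };

    struct node {
        struct req *req_list;       /* iterated under bw_lock */
        pthread_mutex_t topo_lock;  /* stand-in for icc_lock */
        pthread_mutex_t bw_lock;    /* stand-in for icc_bw_lock */
    };

    /* Reader path (cf. icc_set_bw() -> aggregate_requests()):
     * walks req_list while holding only bw_lock. */
    static int aggregate_requests(struct node *n)
    {
        int sum = 0;

        pthread_mutex_lock(&n->bw_lock);
        for (struct req *r = n->req_list; r; r = r->next)
            sum += r->peak_bw;
        pthread_mutex_unlock(&n->bw_lock);
        return sum;
    }

    /* Writer path (cf. path_init() or icc_put()): before the fix this
     * held only topo_lock, racing with the iteration above; the fix is
     * to also hold bw_lock around the list manipulation itself. */
    static void add_request(struct node *n, struct req *r)
    {
        pthread_mutex_lock(&n->topo_lock); /* topology changes stay serialized */
        pthread_mutex_lock(&n->bw_lock);   /* the fix: exclude concurrent iterators */
        r->next = n->req_list;
        n->req_list = r;
        pthread_mutex_unlock(&n->bw_lock);
        pthread_mutex_unlock(&n->topo_lock);
    }

    int main(void)
    {
        struct node n = {
            .req_list  = NULL,
            .topo_lock = PTHREAD_MUTEX_INITIALIZER,
            .bw_lock   = PTHREAD_MUTEX_INITIALIZER,
        };
        struct req a = { .peak_bw = 100 }, b = { .peak_bw = 200 };

        add_request(&n, &a);
        add_request(&n, &b);
        printf("aggregate peak bw: %d\n", aggregate_requests(&n));
        return 0;
    }

Note that the writer performs no allocation while bw_lock is held, mirroring the commit’s observation that the additional bw_lock sections do no memory allocation and therefore don’t reintroduce the runpm-vs-reclaim lockdep splats that motivated splitting the locks in the first place.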
