UB:CVE-2024-36003
History: May 20, 2024 - 12:00 a.m.

CVE-2024-36003

2024-05-20 00:00:00
ubuntu.com
Tags: linux kernel, vulnerability, ice_reset_vf, circular locking, deadlock

AI Score: 6.5
Confidence: High
EPSS: 0
Percentile: 15.5%

In the Linux kernel, the following vulnerability has been resolved:

ice: fix LAG and VF lock dependency in ice_reset_vf()

Since commit 9f74a3dfcf83 ("ice: Fix VF Reset paths when interface in a
failed over aggregate"), the ice driver has acquired the LAG mutex in
ice_reset_vf(). The commit placed this lock acquisition just prior to the
acquisition of the VF configuration lock. If ice_reset_vf() acquires the
configuration lock via the ICE_VF_RESET_LOCK flag, this could deadlock with
ice_vc_cfg_qs_msg(), because that path always acquires the locks in the
order of the VF configuration lock and then the LAG mutex. Lockdep reports
this violation almost immediately on creating and then removing two VFs;
the full report follows the sketch below.
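The two orderings described above form a classic ABBA inversion. As a
minimal kernel-style sketch of the conflict (the struct layout and function
bodies here are simplified illustrations, not the actual ice driver code;
only the lock names and acquisition orders mirror the report):

#include <linux/mutex.h>

/* Simplified stand-ins for the real ice structures; illustrative only. */
struct ice_pf { struct mutex lag_mutex; };
struct ice_vf { struct mutex cfg_lock; };

/* Path 1: after commit 9f74a3dfcf83 the reset path takes the LAG mutex
 * first, then the VF configuration lock (when invoked with
 * ICE_VF_RESET_LOCK). */
static void reset_path(struct ice_pf *pf, struct ice_vf *vf)
{
	mutex_lock(&pf->lag_mutex);   /* A */
	mutex_lock(&vf->cfg_lock);    /* then B */
	/* ... reset the VF ... */
	mutex_unlock(&vf->cfg_lock);
	mutex_unlock(&pf->lag_mutex);
}

/* Path 2: the virtchnl queue-configuration path takes the VF
 * configuration lock first, then the LAG mutex. */
static void cfg_qs_path(struct ice_pf *pf, struct ice_vf *vf)
{
	mutex_lock(&vf->cfg_lock);    /* B */
	mutex_lock(&pf->lag_mutex);   /* then A */
	/* ... configure the VF queues ... */
	mutex_unlock(&pf->lag_mutex);
	mutex_unlock(&vf->cfg_lock);
}

If one worker sits between the two mutex_lock() calls in reset_path() while
another sits between the two in cfg_qs_path(), each waits on the mutex the
other holds and neither can progress. That is exactly the cycle lockdep
flags: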
======================================================
WARNING: possible circular locking dependency detected
6.8.0-rc6 #54 Tainted: G W O
------------------------------------------------------
kworker/60:3/6771 is trying to acquire lock:
ff40d43e099380a0 (&vf->cfg_lock){+.+.}-{3:3}, at: ice_reset_vf+0x22f/0x4d0 [ice]

but task is already holding lock:
ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&pf->lag_mutex){+.+.}-{3:3}:
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_vc_cfg_qs_msg+0x45/0x690 [ice]
       ice_vc_process_vf_msg+0x4f5/0x870 [ice]
       __ice_clean_ctrlq+0x2b5/0x600 [ice]
       ice_service_task+0x2c9/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

-> #0 (&vf->cfg_lock){+.+.}-{3:3}:
       check_prev_add+0xe2/0xc50
       validate_chain+0x558/0x800
       __lock_acquire+0x4f8/0xb40
       lock_acquire+0xd4/0x2d0
       __mutex_lock+0x9b/0xbf0
       ice_reset_vf+0x22f/0x4d0 [ice]
       ice_process_vflr_event+0x98/0xd0 [ice]
       ice_service_task+0x1cc/0x480 [ice]
       process_one_work+0x1e9/0x4d0
       worker_thread+0x1e1/0x3d0
       kthread+0x104/0x140
       ret_from_fork+0x31/0x50
       ret_from_fork_asm+0x1b/0x30

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pf->lag_mutex);
                               lock(&vf->cfg_lock);
                               lock(&pf->lag_mutex);
  lock(&vf->cfg_lock);

 *** DEADLOCK ***

4 locks held by kworker/60:3/6771:
 #0: ff40d43e05428b38 ((wq_completion)ice){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #1: ff50d06e05197e58 ((work_completion)(&pf->serv_task)){+.+.}-{0:0}, at: process_one_work+0x176/0x4d0
 #2: ff40d43ea1960e50 (&pf->vfs.table_lock){+.+.}-{3:3}, at: ice_process_vflr_event+0x48/0xd0 [ice]
 #3: ff40d43ea1961210 (&pf->lag_mutex){+.+.}-{3:3}, at: ice_reset_vf+0xb7/0x4d0 [ice]

stack backtrace:
CPU: 60 PID: 6771 Comm: kworker/60:3 Tainted: G W O 6.8.0-rc6 #54
Hardware name:
Workqueue: ice ice_service_task [ice]
Call Trace:
 <TASK>
 dump_stack_lvl+0x4a/0x80
 check_noncircular+0x12d/0x150
 check_prev_add+0xe2/0xc50
 ? save_trace+0x59/0x230
 ? add_chain_cache+0x109/0x450
 validate_chain+0x558/0x800
 __lock_acquire+0x4f8/0xb40
 ? lockdep_hardirqs_on+0x7d/0x100
 lock_acquire+0xd4/0x2d0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? lock_is_held_type+0xc7/0x120
 __mutex_lock+0x9b/0xbf0
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ? rcu_is_watching+0x11/0x50
 ? ice_reset_vf+0x22f/0x4d0 [ice]
 ice_reset_vf+0x22f/0x4d0 [ice]
 ? process_one_work+0x176/0x4d0
 ice_process_vflr_event+0x98/0xd0 [ice]
 ice_service_task+0x1cc/0x480 [ice]
 process_one_work+0x1e9/0x4d0
 worker_thread+0x1e1/0x3d0
 ? __pfx_worker_thread+0x10/0x10
 kthread+0x104/0x140
 ? __pfx_kthread+0x10/0x10
 ret_from_fork+0x31/0x50
 ? __pfx_kthread+0x10/0x10
 ret_from_fork_asm+0x1b/0x30
 </TASK>

To avoid deadlock, we must acquire the LAG —truncated—
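The advisory text is cut off above, but the sentence it opens points at the
standard remedy for an ABBA inversion: make both paths agree on a single
acquisition order. A hedged sketch of that direction, reusing the
illustrative types from the earlier sketch (an assumed shape only; the
actual upstream patch to ice_reset_vf() may be structured differently):

/* Consistent ordering: take the VF configuration lock first and the LAG
 * mutex second, matching the order used by ice_vc_cfg_qs_msg().
 * Illustrative sketch, not the verbatim fix. */
static void reset_path_fixed(struct ice_pf *pf, struct ice_vf *vf)
{
	mutex_lock(&vf->cfg_lock);    /* B first, as in cfg_qs_path() */
	mutex_lock(&pf->lag_mutex);   /* then A */
	/* ... reset the VF ... */
	mutex_unlock(&pf->lag_mutex);
	mutex_unlock(&vf->cfg_lock);
}

With every path taking cfg_lock before lag_mutex, the #0/#1 dependency
cycle in the report above can no longer form, and lockdep's ordering graph
stays acyclic.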
