Security Vulnerabilities
- CVEs Published in July 2024
In the Linux kernel, the following vulnerability has been resolved:
phy: stm32: fix a refcount leak in stm32_usbphyc_pll_enable()
This error path needs to decrement "usbphyc->n_pll_cons.counter" before
returning.
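The shape of the fix can be sketched as follows; the structure layout and the do_enable_pll_hw() helper are assumptions used purely for illustration, not the actual stm32 driver code, but the dec_n_pll_cons label shows the decrement the original error path was missing.

static int stm32_usbphyc_pll_enable_sketch(struct stm32_usbphyc *usbphyc)
{
	int ret;

	/* Take a reference for this consumer of the PLL. */
	if (atomic_inc_return(&usbphyc->n_pll_cons) > 1)
		return 0;	/* PLL already enabled by another consumer */

	ret = do_enable_pll_hw(usbphyc);	/* hypothetical helper */
	if (ret)
		goto dec_n_pll_cons;

	return 0;

dec_n_pll_cons:
	/* Error path: drop the reference taken above before returning. */
	atomic_dec(&usbphyc->n_pll_cons);
	return ret;
}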
In the Linux kernel, the following vulnerability has been resolved:
KVM: x86: nSVM: fix potential NULL dereference on nested migration
Turns out that due to review feedback and/or rebases
I accidentally moved the call to nested_svm_load_cr3 to be too early,
before the NPT is enabled, which is very wrong to do.
KVM can't even access guest memory at that point as nested NPT
is needed for that, and of course it won't initialize the walk_mmu,
which is the main issue the patch was addressing.
Fix this for real.
In the Linux kernel, the following vulnerability has been resolved:
net: ieee802154: at86rf230: Stop leaking skb's
Upon error the ieee802154_xmit_complete() helper is not called. Only
ieee802154_wake_queue() is called manually. In the Tx case we then leak
the skb structure.
Free the skb structure upon error before returning when appropriate.
As the 'is_tx = 0' assignment cannot be moved into the complete handler
because of a possible race between the delay in switching to
STATE_RX_AACK_ON and a new interrupt, we introduce an intermediate
'was_tx' boolean just for this purpose.
There is no Fixes tag applying here; many changes have been made in this
area and the issue has, in one form or another, always existed.
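A minimal sketch of the error-path handling described above; the field names (was_tx, tx_skb) follow the commit's wording, but the surrounding driver state machine is simplified and the function name is hypothetical.

static void at86rf230_tx_error_sketch(struct at86rf230_local *lp)
{
	if (lp->was_tx) {
		/* We came from a Tx attempt: free the skb that would
		 * otherwise leak, since ieee802154_xmit_complete() is
		 * not called on this path. */
		lp->was_tx = 0;
		dev_kfree_skb_any(lp->tx_skb);
		lp->tx_skb = NULL;
	}

	/* Keep waking the queue manually, as before. */
	ieee802154_wake_queue(lp->hw);
}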
In the Linux kernel, the following vulnerability has been resolved:
parisc: Fix data TLB miss in sba_unmap_sg
Rolf Eike Beer reported the following bug:
[1274934.746891] Bad Address (null pointer deref?): Code=15 (Data TLB miss fault) at addr 0000004140000018
[1274934.746891] CPU: 3 PID: 5549 Comm: cmake Not tainted 5.15.4-gentoo-parisc64 #4
[1274934.746891] Hardware name: 9000/785/C8000
[1274934.746891]
[1274934.746891] YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI
[1274934.746891] PSW: 00001000000001001111111000001110 Not tainted
[1274934.746891] r00-03 000000ff0804fe0e 0000000040bc9bc0 00000000406760e4 0000004140000000
[1274934.746891] r04-07 0000000040b693c0 0000004140000000 000000004a2b08b0 0000000000000001
[1274934.746891] r08-11 0000000041f98810 0000000000000000 000000004a0a7000 0000000000000001
[1274934.746891] r12-15 0000000040bddbc0 0000000040c0cbc0 0000000040bddbc0 0000000040bddbc0
[1274934.746891] r16-19 0000000040bde3c0 0000000040bddbc0 0000000040bde3c0 0000000000000007
[1274934.746891] r20-23 0000000000000006 000000004a368950 0000000000000000 0000000000000001
[1274934.746891] r24-27 0000000000001fff 000000000800000e 000000004a1710f0 0000000040b693c0
[1274934.746891] r28-31 0000000000000001 0000000041f988b0 0000000041f98840 000000004a171118
[1274934.746891] sr00-03 00000000066e5800 0000000000000000 0000000000000000 00000000066e5800
[1274934.746891] sr04-07 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[1274934.746891]
[1274934.746891] IASQ: 0000000000000000 0000000000000000 IAOQ: 00000000406760e8 00000000406760ec
[1274934.746891] IIR: 48780030 ISR: 0000000000000000 IOR: 0000004140000018
[1274934.746891] CPU: 3 CR30: 00000040e3a9c000 CR31: ffffffffffffffff
[1274934.746891] ORIG_R28: 0000000040acdd58
[1274934.746891] IAOQ[0]: sba_unmap_sg+0xb0/0x118
[1274934.746891] IAOQ[1]: sba_unmap_sg+0xb4/0x118
[1274934.746891] RP(r2): sba_unmap_sg+0xac/0x118
[1274934.746891] Backtrace:
[1274934.746891] [<00000000402740cc>] dma_unmap_sg_attrs+0x6c/0x70
[1274934.746891] [<000000004074d6bc>] scsi_dma_unmap+0x54/0x60
[1274934.746891] [<00000000407a3488>] mptscsih_io_done+0x150/0xd70
[1274934.746891] [<0000000040798600>] mpt_interrupt+0x168/0xa68
[1274934.746891] [<0000000040255a48>] __handle_irq_event_percpu+0xc8/0x278
[1274934.746891] [<0000000040255c34>] handle_irq_event_percpu+0x3c/0xd8
[1274934.746891] [<000000004025ecb4>] handle_percpu_irq+0xb4/0xf0
[1274934.746891] [<00000000402548e0>] generic_handle_irq+0x50/0x70
[1274934.746891] [<000000004019a254>] call_on_stack+0x18/0x24
[1274934.746891]
[1274934.746891] Kernel panic - not syncing: Bad Address (null pointer deref?)
The bug is caused by overrunning the sglist and incorrectly testing
sg_dma_len(sglist) before nents. Normally this doesn't cause a crash,
but in this case the sglist crossed a page boundary. This occurs in the
following code:
while (sg_dma_len(sglist) && nents--) {
The fix is simply to test nents first and move the decrement of nents
into the loop.
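As a sketch of the change (the loop body is elided here; only the order of the tests and the placement of the decrement matter):

/* before: sg_dma_len() is evaluated even when nents has reached 0,
 * so the walk can touch one entry past the end of the scatterlist */
while (sg_dma_len(sglist) && nents--) {
	/* ... unmap the entry and advance sglist ... */
}

/* after: test nents first, decrement it inside the loop */
while (nents && sg_dma_len(sglist)) {
	/* ... unmap the entry and advance sglist ... */
	nents--;
}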
In the Linux kernel, the following vulnerability has been resolved:
iommu: Fix potential use-after-free during probe
KASAN has reported the following use-after-free on dev->iommu:
when a device probe fails and dev->iommu is in the process of being freed
in the dev_iommu_free() function, a deferred_probe_work_func runs in
parallel and tries to access dev->iommu->fwspec in the of_iommu_configure()
path, thus causing a use-after-free.
BUG: KASAN: use-after-free in of_iommu_configure+0xb4/0x4a4
Read of size 8 at addr ffffff87a2f1acb8 by task kworker/u16:2/153
Workqueue: events_unbound deferred_probe_work_func
Call trace:
dump_backtrace+0x0/0x33c
show_stack+0x18/0x24
dump_stack_lvl+0x16c/0x1e0
print_address_description+0x84/0x39c
__kasan_report+0x184/0x308
kasan_report+0x50/0x78
__asan_load8+0xc0/0xc4
of_iommu_configure+0xb4/0x4a4
of_dma_configure_id+0x2fc/0x4d4
platform_dma_configure+0x40/0x5c
really_probe+0x1b4/0xb74
driver_probe_device+0x11c/0x228
__device_attach_driver+0x14c/0x304
bus_for_each_drv+0x124/0x1b0
__device_attach+0x25c/0x334
device_initial_probe+0x24/0x34
bus_probe_device+0x78/0x134
deferred_probe_work_func+0x130/0x1a8
process_one_work+0x4c8/0x970
worker_thread+0x5c8/0xaec
kthread+0x1f8/0x220
ret_from_fork+0x10/0x18
Allocated by task 1:
____kasan_kmalloc+0xd4/0x114
__kasan_kmalloc+0x10/0x1c
kmem_cache_alloc_trace+0xe4/0x3d4
__iommu_probe_device+0x90/0x394
probe_iommu_group+0x70/0x9c
bus_for_each_dev+0x11c/0x19c
bus_iommu_probe+0xb8/0x7d4
bus_set_iommu+0xcc/0x13c
arm_smmu_bus_init+0x44/0x130 [arm_smmu]
arm_smmu_device_probe+0xb88/0xc54 [arm_smmu]
platform_drv_probe+0xe4/0x13c
really_probe+0x2c8/0xb74
driver_probe_device+0x11c/0x228
device_driver_attach+0xf0/0x16c
__driver_attach+0x80/0x320
bus_for_each_dev+0x11c/0x19c
driver_attach+0x38/0x48
bus_add_driver+0x1dc/0x3a4
driver_register+0x18c/0x244
__platform_driver_register+0x88/0x9c
init_module+0x64/0xff4 [arm_smmu]
do_one_initcall+0x17c/0x2f0
do_init_module+0xe8/0x378
load_module+0x3f80/0x4a40
__se_sys_finit_module+0x1a0/0x1e4
__arm64_sys_finit_module+0x44/0x58
el0_svc_common+0x100/0x264
do_el0_svc+0x38/0xa4
el0_svc+0x20/0x30
el0_sync_handler+0x68/0xac
el0_sync+0x160/0x180
Freed by task 1:
kasan_set_track+0x4c/0x84
kasan_set_free_info+0x28/0x4c
____kasan_slab_free+0x120/0x15c
__kasan_slab_free+0x18/0x28
slab_free_freelist_hook+0x204/0x2fc
kfree+0xfc/0x3a4
__iommu_probe_device+0x284/0x394
probe_iommu_group+0x70/0x9c
bus_for_each_dev+0x11c/0x19c
bus_iommu_probe+0xb8/0x7d4
bus_set_iommu+0xcc/0x13c
arm_smmu_bus_init+0x44/0x130 [arm_smmu]
arm_smmu_device_probe+0xb88/0xc54 [arm_smmu]
platform_drv_probe+0xe4/0x13c
really_probe+0x2c8/0xb74
driver_probe_device+0x11c/0x228
device_driver_attach+0xf0/0x16c
__driver_attach+0x80/0x320
bus_for_each_dev+0x11c/0x19c
driver_attach+0x38/0x48
bus_add_driver+0x1dc/0x3a4
driver_register+0x18c/0x244
__platform_driver_register+0x88/0x9c
init_module+0x64/0xff4 [arm_smmu]
do_one_initcall+0x17c/0x2f0
do_init_module+0xe8/0x378
load_module+0x3f80/0x4a40
__se_sys_finit_module+0x1a0/0x1e4
__arm64_sys_finit_module+0x44/0x58
el0_svc_common+0x100/0x264
do_el0_svc+0x38/0xa4
el0_svc+0x20/0x30
el0_sync_handler+0x68/0xac
el0_sync+0x160/0x180
Fix this by setting dev->iommu to NULL first and
then freeing the dev_iommu structure in the dev_iommu_free()
function.
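A minimal sketch of that ordering, assuming dev_iommu owns its fwspec allocation (the fwspec cleanup is deliberately simplified here and is not the verbatim patch):

static void dev_iommu_free(struct device *dev)
{
	struct dev_iommu *param = dev->iommu;

	/* Clear the pointer first, so a concurrently running
	 * of_iommu_configure() can no longer reach the object
	 * we are about to free. */
	dev->iommu = NULL;

	kfree(param->fwspec);	/* simplified fwspec cleanup */
	kfree(param);
}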
In the Linux kernel, the following vulnerability has been resolved:
mm: don't try to NUMA-migrate COW pages that have other uses
Oded Gabbay reports that enabling NUMA balancing causes corruption with
his Gaudi accelerator test load:
"All the details are in the bug, but the bottom line is that somehow,
this patch causes corruption when the numa balancing feature is
enabled AND we don't use process affinity AND we use GUP to pin pages
so our accelerator can DMA to/from system memory.
Either disabling numa balancing, using process affinity to bind to
specific numa-node or reverting this patch causes the bug to
disappear"
and Oded bisected the issue to commit 09854ba94c6a ("mm: do_wp_page()
simplification").
Now, the NUMA balancing shouldn't actually be changing the writability
of a page, and as such shouldn't matter for COW. But it appears it
does. Suspicious.
However, regardless of that, the condition for enabling NUMA faults in
change_pte_range() is nonsensical. It uses "page_mapcount(page)" to
decide if a COW page should be NUMA-protected or not, and that makes
absolutely no sense.
The number of mappings a page has is irrelevant: not only does GUP get a
reference to a page as in Oded's case, but the other mappings might be
paged out and the only reference to them would be in the page count.
Since we should never try to NUMA-balance a page that we can't move
anyway due to other references, just fix the code to use 'page_count()'.
Oded confirms that that fixes his issue.
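The described change in change_pte_range() can be sketched as below; the surrounding context is paraphrased, and only the switch from page_mapcount() to page_count() is the point:

/* before: decide based on how many mappings the page has */
if (page_mapcount(page) != 1)
	continue;	/* skip NUMA protection for this pte */

/* after: skip any page with extra references (GUP pins, etc.),
 * since such a page cannot be migrated anyway */
if (page_count(page) != 1)
	continue;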
Now, this does imply that something in NUMA balancing ends up changing
page protections (other than the obvious one of making the page
inaccessible to get the NUMA faulting information). Otherwise the COW
simplification wouldn't matter - since doing the GUP on the page would
make sure it's writable.
The cause of that permission change would be good to figure out too,
since it clearly results in spurious COW events - but fixing the
nonsensical test that just happened to work before is obviously the
CorrectThing(tm) to do regardless.
In the Linux kernel, the following vulnerability has been resolved:
s390/cio: verify the driver availability for path_event call
If no driver is attached to a device or the driver does not provide the
path_event function, an FCES path-event on this device could end up in a
kernel panic. Verify the driver availability before the path_event
function call.
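A sketch of the check described above; the names follow the common ccw device driver conventions (struct ccw_device / struct ccw_driver), but treat it as an illustration rather than the verbatim patch:

/* Only forward the FCES path event when a driver is bound to the
 * device and it actually implements the callback. */
if (cdev->drv && cdev->drv->path_event)
	cdev->drv->path_event(cdev, event);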
In the Linux kernel, the following vulnerability has been resolved:
perf: Fix list corruption in perf_cgroup_switch()
There's list corruption on cgrp_cpuctx_list. This happens on the
following path:
perf_cgroup_switch: list_for_each_entry(cgrp_cpuctx_list)
cpu_ctx_sched_in
ctx_sched_in
ctx_pinned_sched_in
merge_sched_in
perf_cgroup_event_disable: remove the event from the list
Use list_for_each_entry_safe() to allow removing an entry during
iteration.
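The generic pattern, as a sketch with a hypothetical element type and list member (the perf-specific types are omitted): the _safe variant caches the next pointer, so the current entry may be unlinked, directly or by a callee such as perf_cgroup_event_disable(), without corrupting the walk.

struct cgrp_entry {			/* hypothetical element type */
	struct list_head entry_node;
	/* ... payload ... */
};

struct cgrp_entry *pos, *tmp;

list_for_each_entry_safe(pos, tmp, &cgrp_cpuctx_list, entry_node) {
	/* handle_entry() stands in for the per-entry work; it may end
	 * up removing 'pos' from cgrp_cpuctx_list. */
	handle_entry(pos);
}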
In the Linux kernel, the following vulnerability has been resolved:
mm: vmscan: remove deadlock due to throttling failing to make progress
A soft lockup bug in kcompactd was reported in a private bugzilla with
the following visible in dmesg;
watchdog: BUG: soft lockup - CPU#33 stuck for 26s! [kcompactd0:479]
watchdog: BUG: soft lockup - CPU#33 stuck for 52s! [kcompactd0:479]
watchdog: BUG: soft lockup - CPU#33 stuck for 78s! [kcompactd0:479]
watchdog: BUG: soft lockup - CPU#33 stuck for 104s! [kcompactd0:479]
The machine had 256G of RAM with no swap and an earlier failed
allocation indicated that node 0 where kcompactd was run was potentially
unreclaimable;
Node 0 active_anon:29355112kB inactive_anon:2913528kB active_file:0kB
inactive_file:0kB unevictable:64kB isolated(anon):0kB isolated(file):0kB
mapped:8kB dirty:0kB writeback:0kB shmem:26780kB shmem_thp:
0kB shmem_pmdmapped: 0kB anon_thp: 23480320kB writeback_tmp:0kB
kernel_stack:2272kB pagetables:24500kB all_unreclaimable? yes
Vlastimil Babka investigated a crash dump and found that a task
migrating pages was trying to drain PCP lists;
PID: 52922 TASK: ffff969f820e5000 CPU: 19 COMMAND: "kworker/u128:3"
Call Trace:
__schedule
schedule
schedule_timeout
wait_for_completion
__flush_work
__drain_all_pages
__alloc_pages_slowpath.constprop.114
__alloc_pages
alloc_migration_target
migrate_pages
migrate_to_node
do_migrate_pages
cpuset_migrate_mm_workfn
process_one_work
worker_thread
kthread
ret_from_fork
This failure is specific to CONFIG_PREEMPT=n builds. The root of the
problem is that kcompactd0 is not rescheduling on a CPU while a task that
has isolated a large number of pages from the LRU is waiting on
kcompactd0 to reschedule so the pages can be released. While
shrink_inactive_list() only loops once around too_many_isolated, reclaim
can continue without rescheduling if sc->skipped_deactivate == 1 which
could happen if there was no file LRU and the inactive anon list was not
low.
In the Linux kernel, the following vulnerability has been resolved:
iio: buffer: Fix file related error handling in IIO_BUFFER_GET_FD_IOCTL
If we fail to copy the just created file descriptor to userland, we
try to clean up by putting back 'fd' and freeing 'ib'. The code uses
put_unused_fd() for the former which is wrong, as the file descriptor
was already published by fd_install() which gets called internally by
anon_inode_getfd().
This leaves a half cleaned up file descriptor table and a partially
destructed 'file' object behind, allowing userland to play
use-after-free tricks on us by abusing the still usable fd and making
the code operate on a dangling 'file->private_data' pointer.
Instead of leaving the kernel in a partially corrupted state, don't
attempt to explicitly clean up and leave this to the process exit
path that'll release any still valid fds, including the one created
by the previous call to anon_inode_getfd(). Simply return -EFAULT to
indicate the error.
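A sketch of the resulting error handling in the ioctl path; the variable names (ib, ival) and the fops identifier are assumptions for illustration, and only the handling of the copy_to_user() failure is the point:

fd = anon_inode_getfd("iio:buffer", &iio_buffer_fileops /* assumed name */,
		      ib, O_RDWR | O_CLOEXEC);
if (fd < 0) {
	kfree(ib);	/* the fd was never published, so cleanup is safe */
	return fd;
}

/* The fd is already installed in the fd table by anon_inode_getfd(),
 * so on a failed copy-out do not put_unused_fd() or free 'ib'; just
 * report the error and let process exit reclaim the fd. */
if (copy_to_user(ival, &fd, sizeof(fd)))
	return -EFAULT;

return 0;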