In the Linux kernel, the following vulnerability has been resolved:
dmaengine: nbpfaxi: Fix memory corruption in probe()
The nbpf->chan[] array is allocated earlier in the nbpf_probe() function
and it has "num_channels" elements. These three loops iterate one
element farther than they should and corrupt memory.
The changes to the second loop are more involved. In this case, we're
copying data from the irqbuf[] array into the nbpf->chan[] array. If
the data in irqbuf[i] is the error IRQ then we skip it, so the iterators
are not in sync. I added a check to ensure that we don't go beyond the
end of the irqbuf[] array. I'm pretty sure this can't happen, but it
seemed harmless to add a check.
On the other hand, after the loop has ended there is a check to ensure
that the "chan" iterator is where we expect it to be. In the original
code we went one element beyond the end of the array so the iterator
wasn't in the correct place and it would always return -EINVAL. However,
now it will always be in the correct place. I deleted the check since
we know the result.
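As a minimal sketch of the pattern being fixed (the loop bodies, the helper nbpf_chan_setup(), and the names num_irqs and error_irq are illustrative assumptions, not the driver's exact code):

	/* Buggy pattern: the last iteration touches nbpf->chan[num_channels],
	 * one element past the end of the allocation. */
	for (i = 0; i <= num_channels; i++)
		nbpf_chan_setup(&nbpf->chan[i]);

	/* Fixed pattern: iterate exactly num_channels elements. */
	for (i = 0; i < num_channels; i++)
		nbpf_chan_setup(&nbpf->chan[i]);

	/* Second loop: "i" indexes irqbuf[] while "chan" walks nbpf->chan[], and
	 * the two drift apart whenever the error IRQ is skipped, so bound both. */
	for (i = 0, chan = nbpf->chan; chan < nbpf->chan + num_channels; i++) {
		if (i >= num_irqs)		/* defensive: never read past irqbuf[] */
			return -EINVAL;
		if (irqbuf[i] == error_irq)	/* skip the error IRQ */
			continue;
		chan->irq = irqbuf[i];
		chan++;
	}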
In the Linux kernel, the following vulnerability has been resolved:
tracing: Add down_write(trace_event_sem) when adding trace event
When a module is loaded, it adds trace events defined by the module. It
may also need to modify the module's trace printk formats to replace enum
names with their values.
If two modules are loaded at the same time, the adding of the event to the
ftrace_events list can corrupt the walking of the list in the code that is
modifying the printk format strings and crash the kernel.
The addition of the event should take the trace_event_sem for write while
it adds the new event.
Also add a lockdep_assert_held() on that semaphore in
__trace_add_event_dirs() as it iterates the list.
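A condensed sketch of the locking described above; the list-insertion line and the walker body are simplified, while trace_event_sem, ftrace_events and __trace_add_event_dirs() are the names used in the tracing code:

	/* Writer side: adding a module's event to the global list. */
	down_write(&trace_event_sem);
	list_add(&call->list, &ftrace_events);		/* simplified insertion */
	up_write(&trace_event_sem);

	/* Walker side: assert the lock while iterating the list. */
	static void __trace_add_event_dirs(struct trace_array *tr)
	{
		struct trace_event_call *call;

		lockdep_assert_held(&trace_event_sem);
		list_for_each_entry(call, &ftrace_events, list) {
			/* create the event directory for this call */
		}
	}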
In the Linux kernel, the following vulnerability has been resolved:
smb: client: fix use-after-free in cifs_oplock_break
A race condition can occur in cifs_oplock_break() leading to a
use-after-free of the cinode structure when unmounting:
cifs_oplock_break()
  _cifsFileInfo_put(cfile)
    cifsFileInfo_put_final()
      cifs_sb_deactive()
        [last ref, start releasing sb]
          kill_sb()
            kill_anon_super()
              generic_shutdown_super()
                evict_inodes()
                  dispose_list()
                    evict()
                      destroy_inode()
                        call_rcu(&inode->i_rcu, i_callback)
  spin_lock(&cinode->open_file_lock) <- OK
        [later] i_callback()
          cifs_free_inode()
            kmem_cache_free(cinode)
  spin_unlock(&cinode->open_file_lock) <- UAF
  cifs_done_oplock_break(cinode) <- UAF
The issue occurs when umount has already released its reference to the
superblock. When _cifsFileInfo_put() calls cifs_sb_deactive(), this
releases the last reference, triggering the immediate cleanup of all
inodes under RCU. However, cifs_oplock_break() continues to access the
cinode after this point, resulting in use-after-free.
Fix this by holding an extra reference to the superblock during the
entire oplock break operation. This ensures that the superblock and
its inodes remain valid until the oplock break completes.
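A simplified sketch of that approach; the exact placement inside cifs_oplock_break() is an assumption, but cifs_sb_active()/cifs_sb_deactive() are the client's existing superblock reference helpers:

	void cifs_oplock_break(struct work_struct *work)
	{
		struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
							  oplock_break);
		struct inode *inode = d_inode(cfile->dentry);
		struct cifsInodeInfo *cinode = CIFS_I(inode);
		struct super_block *sb = inode->i_sb;

		/* Pin the superblock: _cifsFileInfo_put() below may otherwise drop
		 * the last sb reference and free cinode while it is still in use. */
		cifs_sb_active(sb);

		/* ... existing oplock break handling, including _cifsFileInfo_put() ... */

		cifs_done_oplock_break(cinode);
		cifs_sb_deactive(sb);	/* release the extra reference last */
	}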
In the Linux kernel, the following vulnerability has been resolved:
bpf: Reject %p% format string in bprintf-like helpers
static const char fmt[] = "%p%";
bpf_trace_printk(fmt, sizeof(fmt));
The above BPF program isn't rejected and causes a kernel warning at
runtime:
Please remove unsupported %\x00 in format string
WARNING: CPU: 1 PID: 7244 at lib/vsprintf.c:2680 format_decode+0x49c/0x5d0
This happens because bpf_bprintf_prepare skips over the second %,
detected as punctuation, while processing %p. This patch fixes it by
not skipping over punctuation. %\x00 is then processed in the next
iteration and rejected.
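A simplified, hypothetical sketch of the relevant branch of the format-validation loop (not the exact bpf_bprintf_prepare() code):

	/* Before: a punctuation character right after %p was skipped, so the
	 * second '%' in "%p%" was never validated and reached vsnprintf() as the
	 * unsupported "%\0". */
	if (fmt[i] == 'p') {
		if (ispunct(fmt[i + 1]))
			i++;
		continue;
	}

	/* After: the character is left in place; the next loop iteration then sees
	 * a trailing '%' with no conversion after it and returns -EINVAL. */
	if (fmt[i] == 'p')
		continue;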
In the Linux kernel, the following vulnerability has been resolved:
comedi: aio_iiro_16: Fix bit shift out of bounds
When checking for a supported IRQ number, the following test is used:
if ((1 << it->options[1]) & 0xdcfc) {
However, `it->options[1]` is an unchecked `int` value from userspace, so
the shift amount could be negative or out of bounds. Fix the test by
requiring `it->options[1]` to be within bounds before proceeding with
the original test. Valid `it->options[1]` values that select the IRQ
will be in the range [1,15]. The value 0 explicitly disables the use of
interrupts.
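A sketch of the described fix (exact placement in the driver's attach handler is assumed):

	/* Valid selections are in [1,15] (0 means "no IRQ"), so make sure the
	 * shift amount is in range before evaluating the original test. */
	if (it->options[1] >= 1 && it->options[1] <= 15 &&
	    ((1 << it->options[1]) & 0xdcfc)) {
		/* request the IRQ as before */
	}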
In the Linux kernel, the following vulnerability has been resolved:
comedi: pcl812: Fix bit shift out of bounds
When checking for a supported IRQ number, the following test is used:
if ((1 << it->options[1]) & board->irq_bits) {
However, `it->options[1]` is an unchecked `int` value from userspace, so
the shift amount could be negative or out of bounds. Fix the test by
requiring `it->options[1]` to be within bounds before proceeding with
the original test. Valid `it->options[1]` values that select the IRQ
will be in the range [1,15]. The value 0 explicitly disables the use of
interrupts.
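The shape of the fix is the same as for aio_iiro_16 above, with the board-specific mask (again a sketch, not the exact diff):

	if (it->options[1] >= 1 && it->options[1] <= 15 &&
	    ((1 << it->options[1]) & board->irq_bits)) {
		/* request the IRQ as before */
	}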
In the Linux kernel, the following vulnerability has been resolved:
drm/amdkfd: Don't call mmput from MMU notifier callback
If the process is exiting, the mmput() inside the MMU notifier callback, invoked
from memory compaction, fork or NUMA balancing, could release the last reference
to the mm struct and end up calling exit_mmap() and free_pgtables(); this
triggers the deadlock shown in the backtrace below.
The deadlock leaks the kfd process, because the MMU notifier release callback is
never called, and causes VRAM to leak.
The fix is to take an mm reference with mmget_not_zero() when adding the prange
to the deferred list, to pair with the mmput() in the deferred-list work.
If the prange is split and added into the pchild list, the pchild's work_item.mm
is not used, so remove the mm parameter from svm_range_unmap_split() and
svm_range_add_child().
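A condensed sketch of the reference pairing described above (surrounding logic and error handling are omitted; the field names follow the svm_range code):

	/* When queueing the range for deferred processing, pin the mm ... */
	if (!prange->work_item.mm) {
		if (!mmget_not_zero(mm))
			return;			/* process is exiting: skip */
		prange->work_item.mm = mm;
	}
	list_add_tail(&prange->deferred_list, &svms->deferred_range_list);

	/* ... and drop it from the deferred-list worker, never from the MMU
	 * notifier callback itself: */
	mm = prange->work_item.mm;
	/* handle the deferred work for prange */
	mmput(mm);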
The backtrace of hung task:
INFO: task python:348105 blocked for more than 64512 seconds.
Call Trace:
__schedule+0x1c3/0x550
schedule+0x46/0xb0
rwsem_down_write_slowpath+0x24b/0x4c0
unlink_anon_vmas+0xb1/0x1c0
free_pgtables+0xa9/0x130
exit_mmap+0xbc/0x1a0
mmput+0x5a/0x140
svm_range_cpu_invalidate_pagetables+0x2b/0x40 [amdgpu]
mn_itree_invalidate+0x72/0xc0
__mmu_notifier_invalidate_range_start+0x48/0x60
try_to_unmap_one+0x10fa/0x1400
rmap_walk_anon+0x196/0x460
try_to_unmap+0xbb/0x210
migrate_page_unmap+0x54d/0x7e0
migrate_pages_batch+0x1c3/0xae0
migrate_pages_sync+0x98/0x240
migrate_pages+0x25c/0x520
compact_zone+0x29d/0x590
compact_zone_order+0xb6/0xf0
try_to_compact_pages+0xbe/0x220
__alloc_pages_direct_compact+0x96/0x1a0
__alloc_pages_slowpath+0x410/0x930
__alloc_pages_nodemask+0x3a9/0x3e0
do_huge_pmd_anonymous_page+0xd7/0x3e0
__handle_mm_fault+0x5e3/0x5f0
handle_mm_fault+0xf7/0x2e0
hmm_vma_fault.isra.0+0x4d/0xa0
walk_pmd_range.isra.0+0xa8/0x310
walk_pud_range+0x167/0x240
walk_pgd_range+0x55/0x100
__walk_page_range+0x87/0x90
walk_page_range+0xf6/0x160
hmm_range_fault+0x4f/0x90
amdgpu_hmm_range_get_pages+0x123/0x230 [amdgpu]
amdgpu_ttm_tt_get_user_pages+0xb1/0x150 [amdgpu]
init_user_pages+0xb1/0x2a0 [amdgpu]
amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu+0x543/0x7d0 [amdgpu]
kfd_ioctl_alloc_memory_of_gpu+0x24c/0x4e0 [amdgpu]
kfd_ioctl+0x29d/0x500 [amdgpu]
(cherry picked from commit a29e067bd38946f752b0ef855f3dfff87e77bec7)
In the Linux kernel, the following vulnerability has been resolved:
kasan: remove kasan_find_vm_area() to prevent possible deadlock
find_vm_area() cannot be called in atomic context. If find_vm_area() is
called to report vm area information, KASAN can trigger a deadlock like the
following:
CPU0                                     CPU1
vmalloc();
 alloc_vmap_area();
  spin_lock(&vn->busy.lock)
                                         spin_lock_bh(&some_lock);
  <interrupt occurs>
  <in softirq>
  spin_lock(&some_lock);
                                         <access invalid address>
                                         kasan_report();
                                          print_report();
                                           print_address_description();
                                            kasan_find_vm_area();
                                             find_vm_area();
                                              spin_lock(&vn->busy.lock) // deadlock!
To prevent a possible deadlock in the KASAN reporting path, remove kasan_find_vm_area().
In the Linux kernel, the following vulnerability has been resolved:
wifi: prevent A-MSDU attacks in mesh networks
This patch is a mitigation to prevent the A-MSDU spoofing vulnerability
for mesh networks. The initial update to the IEEE 802.11 standard, in
response to the FragAttacks, missed this case (CVE-2025-27558). It can
be considered a variant of CVE-2020-24588 but for mesh networks.
This patch tries to detect if a standard MSDU was turned into an A-MSDU
by an adversary. This is done by parsing a received A-MSDU as a standard
MSDU, calculating the length of the Mesh Control header, and seeing if
the 6 bytes after this header equal the start of an rfc1042 header. If
equal, this is a strong indication of an ongoing attack attempt.
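A rough sketch of that check (payload/payload_len and the surrounding parsing are illustrative; ieee80211_get_mesh_hdrlen() and the rfc1042 constant are taken from the existing 802.11 code):

	static const u8 rfc1042_hdr[] = { 0xaa, 0xaa, 0x03, 0x00, 0x00, 0x00 };
	struct ieee80211s_hdr *mesh_hdr = (struct ieee80211s_hdr *)payload;
	unsigned int mesh_len = ieee80211_get_mesh_hdrlen(mesh_hdr);

	/* Interpreted as a normal MSDU, the bytes right after the Mesh Control
	 * header should not look like the start of an rfc1042 (LLC/SNAP) header;
	 * if they do, the frame was most likely a plain MSDU re-flagged as an
	 * A-MSDU by an attacker, so drop it. */
	if (payload_len >= mesh_len + sizeof(rfc1042_hdr) &&
	    memcmp(payload + mesh_len, rfc1042_hdr, sizeof(rfc1042_hdr)) == 0)
		return false;	/* reject the frame */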
This defense was tested with mac80211_hwsim against a mesh network that
uses an empty Mesh Address Extension field, i.e., when four addresses
are used, and when using a 12-byte Mesh Address Extension field, i.e.,
when six addresses are used. Functionality of normal MSDUs and A-MSDUs
was also tested, and confirmed working, when using both an empty and
12-byte Mesh Address Extension field.
It was also tested with mac80211_hwsim that A-MSDU attacks in non-mesh
networks keep being detected and prevented.
Note that the vulnerability being patched, and the defense being
implemented, was also discussed in the following paper and in the
following IEEE 802.11 presentation:
https://papers.mathyvanhoef.com/wisec2025.pdf
https://mentor.ieee.org/802.11/dcn/25/11-25-0949-00-000m-a-msdu-mesh-spoof-protection.docx
In the Linux kernel, the following vulnerability has been resolved:
wifi: zd1211rw: Fix potential NULL pointer dereference in zd_mac_tx_to_dev()
There is a potential NULL pointer dereference in zd_mac_tx_to_dev(). For
example, the following is possible:
            T0                                   T1
zd_mac_tx_to_dev()
  /* len == skb_queue_len(q) */
  while (len > ZD_MAC_MAX_ACK_WAITERS) {
                                         filter_ack()
                                           spin_lock_irqsave(&q->lock, flags);
                                           /* position == skb_queue_len(q) */
                                           for (i=1; i<position; i++)
                                             skb = __skb_dequeue(q)
                                           if (mac->type == NL80211_IFTYPE_AP)
                                             skb = __skb_dequeue(q);
                                           spin_unlock_irqrestore(&q->lock, flags);
    skb_dequeue() -> NULL
Since there is a small gap between checking skb queue length and skb being
unconditionally dequeued in zd_mac_tx_to_dev(), skb_dequeue() can return NULL.
Then the pointer is passed to zd_mac_tx_status() where it is dereferenced.
To avoid a potential NULL pointer dereference in situations like the above,
check that skb is not NULL before passing it to zd_mac_tx_status().
Found by Linux Verification Center (linuxtesting.org) with SVACE.
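A sketch of the described check in zd_mac_tx_to_dev(); q is the driver's ack wait queue and the zd_mac_tx_status() arguments are condensed:

	while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS) {
		skb = skb_dequeue(q);
		if (!skb)
			break;	/* queue was drained concurrently by filter_ack() */
		zd_mac_tx_status(hw, skb, 0, NULL);
	}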