Security Vulnerabilities
- CVEs Published In April 2025
In the Linux kernel, the following vulnerability has been resolved:
ext4: goto right label 'out_mmap_sem' in ext4_setattr()
Otherwise, if ext4_inode_attach_jinode() fails, the task hangs because
filemap_invalidate_unlock() is never called to unlock
mapping->invalidate_lock. Like this:
EXT4-fs error (device sda) in ext4_setattr:5557: Out of memory
INFO: task fsstress:374 blocked for more than 122 seconds.
Not tainted 6.14.0-rc1-next-20250206-xfstests-dirty #726
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:fsstress state:D stack:0 pid:374 tgid:374 ppid:373
task_flags:0x440140 flags:0x00000000
Call Trace:
<TASK>
__schedule+0x2c9/0x7f0
schedule+0x27/0xa0
schedule_preempt_disabled+0x15/0x30
rwsem_down_read_slowpath+0x278/0x4c0
down_read+0x59/0xb0
page_cache_ra_unbounded+0x65/0x1b0
filemap_get_pages+0x124/0x3e0
filemap_read+0x114/0x3d0
vfs_read+0x297/0x360
ksys_read+0x6c/0xe0
do_syscall_64+0x4b/0x110
entry_SYSCALL_64_after_hwframe+0x76/0x7e
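A minimal sketch of the one-line fix, assuming the mainline structure of ext4_setattr() (the label names come from the patch subject; the surrounding lines are illustrative):

	/* inside ext4_setattr(), truncate path: invalidate_lock is held */
	filemap_invalidate_lock(inode->i_mapping);

	error = ext4_inode_attach_jinode(inode);
	if (error)
		goto out_mmap_sem;	/* was "goto err_out", which skipped
					 * filemap_invalidate_unlock() and left
					 * readers stuck on invalidate_lock */

	/* ... size change, journalling ... */

out_mmap_sem:
	filemap_invalidate_unlock(inode->i_mapping);
err_out:
	/* error reporting; no locks held here */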
In the Linux kernel, the following vulnerability has been resolved:
ext4: fix out-of-bound read in ext4_xattr_inode_dec_ref_all()
There's an issue as follows:
BUG: KASAN: use-after-free in ext4_xattr_inode_dec_ref_all+0x6ff/0x790
Read of size 4 at addr ffff88807b003000 by task syz-executor.0/15172
CPU: 3 PID: 15172 Comm: syz-executor.0
Call Trace:
__dump_stack lib/dump_stack.c:82 [inline]
dump_stack+0xbe/0xfd lib/dump_stack.c:123
print_address_description.constprop.0+0x1e/0x280 mm/kasan/report.c:400
__kasan_report.cold+0x6c/0x84 mm/kasan/report.c:560
kasan_report+0x3a/0x50 mm/kasan/report.c:585
ext4_xattr_inode_dec_ref_all+0x6ff/0x790 fs/ext4/xattr.c:1137
ext4_xattr_delete_inode+0x4c7/0xda0 fs/ext4/xattr.c:2896
ext4_evict_inode+0xb3b/0x1670 fs/ext4/inode.c:323
evict+0x39f/0x880 fs/inode.c:622
iput_final fs/inode.c:1746 [inline]
iput fs/inode.c:1772 [inline]
iput+0x525/0x6c0 fs/inode.c:1758
ext4_orphan_cleanup fs/ext4/super.c:3298 [inline]
ext4_fill_super+0x8c57/0xba40 fs/ext4/super.c:5300
mount_bdev+0x355/0x410 fs/super.c:1446
legacy_get_tree+0xfe/0x220 fs/fs_context.c:611
vfs_get_tree+0x8d/0x2f0 fs/super.c:1576
do_new_mount fs/namespace.c:2983 [inline]
path_mount+0x119a/0x1ad0 fs/namespace.c:3316
do_mount+0xfc/0x110 fs/namespace.c:3329
__do_sys_mount fs/namespace.c:3540 [inline]
__se_sys_mount+0x219/0x2e0 fs/namespace.c:3514
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x67/0xd1
Memory state around the buggy address:
ffff88807b002f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff88807b002f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff88807b003000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
^
ffff88807b003080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ffff88807b003100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
The above issue happens because ext4_xattr_delete_inode() doesn't check
that the xattr is valid when the xattr is stored in the inode.
To solve it, call xattr_check_inode() to check whether the in-inode
xattr is valid. In fact, we can do the verification directly in
ext4_iget_extra_inode(), so that there is no divergent verification.
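A hedged sketch of the centralized check, assuming the validation lands in ext4_iget_extra_inode() as the message describes (the xattr_check_inode() signature used here is illustrative; mainline's may differ):

static int ext4_iget_extra_inode(struct inode *inode,
				 struct ext4_inode *raw_inode,
				 struct ext4_inode_info *ei)
{
	__le32 *magic = (void *)raw_inode +
			EXT4_GOOD_OLD_INODE_SIZE + ei->i_extra_isize;

	if (*magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
		int err;

		/* Validate the in-inode xattr area once, at iget time, so
		 * ext4_xattr_inode_dec_ref_all() never walks entries that
		 * point past the inode (the KASAN OOB read above). */
		err = xattr_check_inode(inode, raw_inode);
		if (err)
			return err;
		ext4_set_inode_state(inode, EXT4_STATE_XATTR);
	}
	return 0;
}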
In the Linux kernel, the following vulnerability has been resolved:
block: fix adding folio to bio
A folio larger than 4GB is possible on some architectures; aarch64, for
example, supports 16GB hugepages. In that case the folio 'offset' can't
be held in an 'unsigned int', causing a warning in
bio_add_folio_nofail() and IO failure.
Fix it by adjusting 'page' and trimming 'offset' so that '->bi_offset'
won't overflow and the folio can be added to the bio successfully.
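A sketch of what the message describes, assuming the mainline folio helpers (folio_page() and __bio_add_page()): advance the page pointer by whole pages first, so the remaining offset always fits in the bio_vec's 'unsigned int' fields.

void bio_add_folio_nofail(struct bio *bio, struct folio *folio,
			  size_t len, size_t off)
{
	/* 'off' may exceed 4GB for a huge folio; split it into a page
	 * index (whole pages) and a sub-page remainder < PAGE_SIZE. */
	unsigned long nr = off / PAGE_SIZE;

	WARN_ON_ONCE(len > UINT_MAX);
	__bio_add_page(bio, folio_page(folio, nr), len, off % PAGE_SIZE);
}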
In the Linux kernel, the following vulnerability has been resolved:
f2fs: fix to avoid accessing uninitialized curseg
syzbot reports a f2fs bug as below:
F2FS-fs (loop3): Stopped filesystem due to reason: 7
kworker/u8:7: attempt to access beyond end of device
BUG: unable to handle page fault for address: ffffed1604ea3dfa
RIP: 0010:get_ckpt_valid_blocks fs/f2fs/segment.h:361 [inline]
RIP: 0010:has_curseg_enough_space fs/f2fs/segment.h:570 [inline]
RIP: 0010:__get_secs_required fs/f2fs/segment.h:620 [inline]
RIP: 0010:has_not_enough_free_secs fs/f2fs/segment.h:633 [inline]
RIP: 0010:has_enough_free_secs+0x575/0x1660 fs/f2fs/segment.h:649
<TASK>
f2fs_is_checkpoint_ready fs/f2fs/segment.h:671 [inline]
f2fs_write_inode+0x425/0x540 fs/f2fs/inode.c:791
write_inode fs/fs-writeback.c:1525 [inline]
__writeback_single_inode+0x708/0x10d0 fs/fs-writeback.c:1745
writeback_sb_inodes+0x820/0x1360 fs/fs-writeback.c:1976
wb_writeback+0x413/0xb80 fs/fs-writeback.c:2156
wb_do_writeback fs/fs-writeback.c:2303 [inline]
wb_workfn+0x410/0x1080 fs/fs-writeback.c:2343
process_one_work kernel/workqueue.c:3236 [inline]
process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
worker_thread+0x870/0xd30 kernel/workqueue.c:3398
kthread+0x7a9/0x920 kernel/kthread.c:464
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Commit 8b10d3653735 ("f2fs: introduce FAULT_NO_SEGMENT") allows
triggering a no-free-segment fault in the allocator, which then updates
curseg->segno to NULL_SEGNO. Although CP_ERROR_FLAG has been set,
f2fs_write_inode() misses checking the flag and accesses the invalid
curseg->segno directly via the call path below, resulting in a panic:
- f2fs_write_inode
- f2fs_is_checkpoint_ready
- has_enough_free_secs
- has_not_enough_free_secs
- __get_secs_required
- has_curseg_enough_space
- get_ckpt_valid_blocks
: access invalid curseg->segno
To avoid this issue, let's:
- check the CP_ERROR_FLAG flag prior to f2fs_is_checkpoint_ready() in
f2fs_write_inode() (see the sketch after this list);
- in has_curseg_enough_space(), save curseg->segno into a temporary
variable and verify its validity before use.
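A hedged sketch of both changes, using names from the call path above (f2fs_cp_error() is the existing CP_ERROR_FLAG test; the surrounding lines are illustrative):

	/* (1) In f2fs_write_inode(): bail out once the filesystem has
	 * stopped on a checkpoint error, before any curseg access. */
	if (unlikely(f2fs_cp_error(sbi)))
		return 0;
	if (!f2fs_is_checkpoint_ready(sbi))
		return -ENOSPC;

	/* (2) In has_curseg_enough_space(): snapshot curseg->segno and
	 * validate it; a failed allocation may have left NULL_SEGNO. */
	unsigned int segno = curseg->segno;

	if (unlikely(segno == NULL_SEGNO))
		return false;
	left_blocks = CAP_BLKS_PER_SEC(sbi) -
		      get_ckpt_valid_blocks(sbi, segno, true);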
In the Linux kernel, the following vulnerability has been resolved:
md/md-bitmap: fix wrong bitmap_limit for clustermd when write sb
In clustermd, separate write-intent-bitmaps are used for each cluster
node:
0                     4k                    8k                    12k
---------------------------------------------------------------------
| idle                | md super            | bm super [0] + bits   |
| bm bits[0, contd]   | bm super[1] + bits  | bm bits[1, contd]     |
| bm super[2] + bits  | bm bits [2, contd]  | bm super[3] + bits    |
| bm bits [3, contd]  |                     |                       |
So in node 1, pg_index in __write_sb_page() can equal
bitmap->storage.file_pages. Then bitmap_limit is calculated as 0 and
md_super_write() is called with a size of 0.
That means the first 4k sb area of node 1 is never updated through
filemap_write_page().
This bug causes a hang of mdadm/clustermd_tests/01r1_Grow_resize.
Here, use (pg_index % bitmap->storage.file_pages) to make the
calculation of bitmap_limit correct.
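A sketch of the corrected clamp in __write_sb_page() (variable names per the message; the shift converting pages to sectors and the limit's type are illustrative):

	unsigned int pages = bitmap->storage.file_pages;
	loff_t bitmap_limit;

	/* For node 1 the bitmap bits wrap past the last stored page, so
	 * pg_index can equal 'pages'; without the modulo the limit is 0
	 * and md_super_write() is asked to write nothing. */
	bitmap_limit = ((loff_t)(pages - pg_index % pages))
			<< (PAGE_SHIFT - SECTOR_SHIFT);

For example, if file_pages is 3 and pg_index reaches 3, the old expression gave (3 - 3) = 0 pages; the modulo wraps pg_index back to 0 and restores the full 3-page limit.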
In the Linux kernel, the following vulnerability has been resolved:
md/raid1,raid10: don't ignore IO flags
If blk-wbt is enabled by default, raid write performance turns out to be
quite bad, because all IO is throttled by the wbt of the underlying
disks once the REQ_IDLE flag is dropped. It turns out this behaviour has
existed since blk-wbt was introduced.
Flags other than REQ_IDLE should not be ignored either: REQ_META can be
set by filesystems, and clearing it can cause priority inversion
problems; REQ_NOWAIT should not be cleared either, because IO would then
wait in the underlying disks instead of failing directly.
Fix these problems by keeping the IO flags from the master bio, as
sketched below.
Fixes: f51d46d0e7cb ("md: add support for REQ_NOWAIT")
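A minimal sketch of the idea (the helper below is hypothetical; the actual patch adjusts the bi_opf assignments in raid1/raid10 directly):

/* Hypothetical helper: derive the member-disk bio's flags from the
 * master bio instead of a fixed whitelist such as REQ_SYNC | REQ_FUA. */
static inline blk_opf_t md_inherit_opf(struct bio *master)
{
	/* Keeping bi_opf intact preserves REQ_IDLE (so blk-wbt does not
	 * throttle background raid writes), REQ_META (no priority
	 * inversion for fs metadata) and REQ_NOWAIT (fail fast with
	 * -EAGAIN instead of blocking in the underlying disks). */
	return master->bi_opf;
}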
In the Linux kernel, the following vulnerability has been resolved:
md: fix mddev uaf while iterating all_mddevs list
While iterating the all_mddevs list from md_notify_reboot() and
md_exit(), list_for_each_entry_safe is used, and this can race with
deleting the next mddev, causing a UAF:
t1:
spin_lock
//list_for_each_entry_safe(mddev, n, ...)
mddev_get(mddev1)
// assume mddev2 is the next entry
spin_unlock
                        t2:
                        //remove mddev2
                        ...
                        mddev_free
                        spin_lock
                        list_del
                        spin_unlock
                        kfree(mddev2)
mddev_put(mddev1)
spin_lock
//continue dereference mddev2->all_mddevs
The old helper for_each_mddev() actually grabbed a reference on mddev2
while holding the lock, preventing it from being freed. This problem
could be fixed the same way; however, the code would be complex.
Hence switch to list_for_each_entry. In that case mddev_put() can free
mddev1, which is not safe either; so, following md_seq_show(), factor
out a helper mddev_put_locked() to fix this problem, as sketched below.
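A sketch of the fixed loop, modelled on md_seq_show() as the message suggests (mddev_put_locked() is the helper added by the patch; the loop body is illustrative). Dropping the reference while all_mddevs_lock is held keeps the current node on the list until the iterator has advanced past it:

	spin_lock(&all_mddevs_lock);
	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
		if (!mddev_get(mddev))
			continue;		/* already being deleted */
		spin_unlock(&all_mddevs_lock);

		/* ... work on mddev with no lock held ... */

		spin_lock(&all_mddevs_lock);
		mddev_put_locked(mddev);	/* drop ref under the lock */
	}
	spin_unlock(&all_mddevs_lock);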
In the Linux kernel, the following vulnerability has been resolved:
f2fs: fix potential deadloop in prepare_compress_overwrite()
Jan Prusakowski reported a kernel hang issue as below:
When running xfstests on linux-next kernel (6.14.0-rc3, 6.12) I
encountered a problem in generic/475 test where fsstress process
gets blocked in __f2fs_write_data_pages() and the test hangs.
The options I used are:
MKFS_OPTIONS -- -O compression -O extra_attr -O project_quota -O quota /dev/vdc
MOUNT_OPTIONS -- -o acl,user_xattr -o discard,compress_extension=* /dev/vdc /vdc
INFO: task kworker/u8:0:11 blocked for more than 122 seconds.
Not tainted 6.14.0-rc3-xfstests-lockdep #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:0 state:D stack:0 pid:11 tgid:11 ppid:2 task_flags:0x4208160 flags:0x00004000
Workqueue: writeback wb_workfn (flush-253:0)
Call Trace:
<TASK>
__schedule+0x309/0x8e0
schedule+0x3a/0x100
schedule_preempt_disabled+0x15/0x30
__mutex_lock+0x59a/0xdb0
__f2fs_write_data_pages+0x3ac/0x400
do_writepages+0xe8/0x290
__writeback_single_inode+0x5c/0x360
writeback_sb_inodes+0x22f/0x570
wb_writeback+0xb0/0x410
wb_do_writeback+0x47/0x2f0
wb_workfn+0x5a/0x1c0
process_one_work+0x223/0x5b0
worker_thread+0x1d5/0x3c0
kthread+0xfd/0x230
ret_from_fork+0x31/0x50
ret_from_fork_asm+0x1a/0x30
</TASK>
The root cause: once generic/475 starts to load the error table into the
dm device, f2fs_prepare_compress_overwrite() loops re-reading the
compressed cluster pages due to the IO error; meanwhile it holds the
.writepages lock, so it can block all other writeback tasks.
Let's fix this issue with the changes below:
- add f2fs_handle_page_eio() in prepare_compress_overwrite() to
detect IO errors (see the sketch after this list);
- detect cp_error earlier in f2fs_read_multi_pages().
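A hedged sketch of the first change inside f2fs_prepare_compress_overwrite() (f2fs_handle_page_eio() is an existing f2fs helper that stops the filesystem after repeated EIO on the same page; the surrounding lines are illustrative):

		if (!PageUptodate(page)) {
			/* Record the EIO: after repeated failures on the
			 * same page f2fs stops the filesystem, so writeback
			 * no longer spins here holding .writepages. */
			f2fs_handle_page_eio(sbi, page_folio(page)->index, DATA);
			ret = -EIO;
			goto out;
		}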
In the Linux kernel, the following vulnerability has been resolved:
wifi: ath12k: Clear affinity hint before calling ath12k_pci_free_irq() in error path
If a shared IRQ is used by the driver due to platform limitation, then the
IRQ affinity hint is set right after the allocation of IRQ vectors in
ath12k_pci_msi_alloc(). This does no harm unless one of the functions
requesting the IRQ fails and attempts to free the IRQ.
This may end up with a warning from the IRQ core that is expecting the
affinity hint to be cleared before freeing the IRQ:
kernel/irq/manage.c:
	/* make sure affinity_hint is cleaned up */
	if (WARN_ON_ONCE(desc->affinity_hint))
		desc->affinity_hint = NULL;
So to fix this issue, clear the IRQ affinity hint before calling
ath12k_pci_free_irq() in the error path. The affinity will be cleared once
again further down the error path due to code organization, but that does
no harm.
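A sketch of the reordered error path (the setter name follows the ath11k/ath12k driver convention and is an assumption here; the ordering is the point):

err_free_irq:
	/* The IRQ core warns if an affinity hint is still set when the
	 * IRQ is freed, so clear the hint first (assumed helper name). */
	ath12k_pci_set_irq_affinity_hint(ab_pci, NULL);
	ath12k_pci_free_irq(ab);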
In the Linux kernel, the following vulnerability has been resolved:
bnxt_en: Mask the bd_cnt field in the TX BD properly
The bd_cnt field in the TX BD specifies the total number of BDs for
the TX packet. The bd_cnt field has 5 bits, so the maximum number
supported is 32, encoded as the value 0.
CONFIG_MAX_SKB_FRAGS can be modified and the total number of SKB
fragments can approach or exceed the maximum supported by the chip.
Add a macro to mask the bd_cnt field properly, so that the value 32 is
masked to 0 in the bd_cnt field.
Without this patch, an out-of-range bd_cnt value corrupts the TX BD and
may cause a TX timeout.
The next patch will check for values exceeding 32.
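A standalone illustration of the masking (the macro names and the shift value are assumptions for this sketch, not the driver's actual definitions):

#include <stdio.h>

/* Assumed layout: a 5-bit bd_cnt field inside the BD flags word. */
#define BD_CNT_SHIFT	8
#define BD_CNT_MASK	(0x1fU << BD_CNT_SHIFT)
#define TX_BD_CNT(n)	(((unsigned)(n) << BD_CNT_SHIFT) & BD_CNT_MASK)

int main(void)
{
	/* 32 BDs must encode as 0 in the 5-bit field; without the mask
	 * the extra bit spills into the neighbouring field and corrupts
	 * the TX BD. */
	printf("raw:    0x%04x\n", 32U << BD_CNT_SHIFT);	/* 0x2000, overflow */
	printf("masked: 0x%04x\n", TX_BD_CNT(32));		/* 0x0000, wraps to 0 */
	return 0;
}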