In the Linux kernel, the following vulnerability has been resolved:
IB/core: Implement a limit on UMAD receive List
The existing behavior of ib_umad, which maintains received MAD
packets in an unbounded list, poses a risk of uncontrolled growth.
As user-space applications extract packets from this list, the rate
of extraction may not match the rate of incoming packets, leading
to potential list overflow.
To address this, we introduce a limit to the size of the list. After
considering typical scenarios, such as OpenSM processing, which can
handle approximately 100k packets per second, and the 1-second retry
timeout for most packets, we set the list size limit to 200k. Packets
received beyond this limit are dropped, assuming they are likely timed
out by the time they are handled by user-space.
Notably, packets queued on the receive list due to reasons like
timed-out sends are preserved even when the list is full.
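As a rough illustration of the bounded-queue idea (not the actual ib_umad code), the sketch below drops newly received packets once an assumed MAX_MAD_RECV_LIST_SIZE bound is reached while still queuing locally generated entries such as timed-out sends; all structure and function names are hypothetical:

#include <linux/list.h>
#include <linux/spinlock.h>

/* Illustrative bound taken from the commit text. */
#define MAX_MAD_RECV_LIST_SIZE 200000

/* Hypothetical per-file receive queue state. */
struct recv_queue {
    spinlock_t lock;
    struct list_head list;   /* packets waiting for user space */
    unsigned int count;      /* current number of queued packets */
};

/* Queue a packet unless the bound is hit; entries generated locally
 * (e.g. timed-out sends) are always kept. Returns true if queued. */
static bool queue_recv_packet(struct recv_queue *q, struct list_head *pkt,
                              bool from_send_timeout)
{
    bool queued = false;

    spin_lock_irq(&q->lock);
    if (from_send_timeout || q->count < MAX_MAD_RECV_LIST_SIZE) {
        list_add_tail(pkt, &q->list);
        q->count++;
        queued = true;
    }
    spin_unlock_irq(&q->lock);

    return queued;    /* when false, the caller drops and frees the packet */
}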
In the Linux kernel, the following vulnerability has been resolved:
crypto: hisilicon/debugfs - Fix debugfs uninit process issue
During the zip probe process, a debugfs failure does not stop the probe.
When debugfs initialization fails, jumping to the error branch releases
regs in addition to the debugfs rollback itself, so regs may be freed
again later during the regs uninit process. Therefore, a NULL check needs
to be added to the regs uninit path.
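A minimal sketch of the guarded uninit pattern, using hypothetical structure and helper names rather than the driver's real ones:

#include <linux/slab.h>

/* Illustrative context; the real driver keeps its register dump elsewhere. */
struct zip_debug_ctx {
    void *regs;
};

/* Guarded uninit: safe to call even if the debugfs error path already
 * released regs, because the pointer is checked and then cleared. */
static void zip_regs_uninit(struct zip_debug_ctx *ctx)
{
    if (!ctx->regs)
        return;          /* already released during the error rollback */

    kfree(ctx->regs);
    ctx->regs = NULL;    /* make a repeated uninit call harmless */
}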
In the Linux kernel, the following vulnerability has been resolved:
bpf: mark bpf_dummy_struct_ops.test_1 parameter as nullable
Test case dummy_st_ops/dummy_init_ret_value passes NULL as the first
parameter of the test_1() function. Mark this parameter as nullable to
make the verifier aware of this possibility.
Otherwise, the NULL check in the test_1() code:
SEC("struct_ops/test_1")
int BPF_PROG(test_1, struct bpf_dummy_ops_state *state)
{
if (!state)
return ...;
... access state ...
}
might be removed by the verifier, thus triggering a NULL pointer dereference
under certain conditions.
In the Linux kernel, the following vulnerability has been resolved:
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty
limits in PAGE_SIZE units fit into 32 bits (so that various multiplications
fit into 64 bits). If limits end up being larger, we will hit overflows,
possible divisions by 0, etc. Fix these problems by never allowing such
large dirty limits, as they have dubious practical value anyway. For the
dirty_bytes / dirty_background_bytes interfaces we can simply refuse to set
such large limits. For dirty_ratio / dirty_background_ratio it isn't as
simple, because the dirty limit is computed from the amount of available
memory, which can change due to memory hotplug etc. So when converting dirty
limits from ratios to numbers of pages, we just don't allow the result to
exceed UINT_MAX.
This is a root-only triggerable problem which occurs when the operator
sets dirty limits to >16 TB.
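A minimal sketch of the clamping step when converting a ratio into a page count; the helper below is illustrative rather than the kernel's actual code, and the ~16 TB figure assumes 4 KiB pages:

#include <linux/limits.h>

/* Convert a dirty ratio (percent) into a page count, clamped so that
 * later PAGE_SIZE-scaled multiplications cannot overflow 64-bit math.
 * With 4 KiB pages, UINT_MAX pages is roughly 16 TB. */
static unsigned long dirty_ratio_to_pages(unsigned long available_pages,
                                          unsigned int ratio)
{
    unsigned long long limit =
        (unsigned long long)available_pages * ratio / 100;

    if (limit > UINT_MAX)
        limit = UINT_MAX;    /* refuse dirty limits above ~16 TB */

    return (unsigned long)limit;
}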
In the Linux kernel, the following vulnerability has been resolved:
virtio-pci: Check if is_avq is NULL
[bug]
In the virtio_pci_common.c function vp_del_vqs(), vp_dev->is_avq is used
to determine whether a virtqueue is the admin virtqueue, but this function
pointer may be NULL: virtio_pci_legacy, for instance, never assigns a value
to vp_dev->is_avq.
[fix]
Check whether vp_dev->is_avq is set before using it.
[test]
Tested with virsh attach-device.
Before this patch, the command would crash the guest system.
After applying the patch, everything works as expected.
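A sketch of the guarded call; the structure below is purely illustrative and only shows the idea of treating a missing is_avq callback as "not the admin virtqueue":

#include <linux/types.h>

/* Purely illustrative shapes; the real structures live in the virtio
 * PCI code. Legacy setup leaves the callback NULL. */
struct vq_transport {
    bool (*is_avq)(struct vq_transport *t, unsigned int index);
};

/* Treat a missing is_avq callback as "not the admin virtqueue". */
static bool vq_index_is_admin(struct vq_transport *t, unsigned int index)
{
    return t->is_avq && t->is_avq(t, index);
}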
In the Linux kernel, the following vulnerability has been resolved:
vhost_task: Handle SIGKILL by flushing work and exiting
Instead of lingering until the device is closed, this has us handle SIGKILL
by the following steps (a rough sketch in code follows the list):
1. marking the worker as killed so we no longer try to use it with
new virtqueues and new flush operations.
2. setting the virtqueue-to-worker mapping so no new works are queued.
3. running all the exiting works.
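Here is that rough sketch of the shutdown sequence, using assumed, simplified structures (the real vhost types and helpers differ):

#include <linux/list.h>
#include <linux/spinlock.h>

/* Illustrative shapes only; the real vhost structures differ. */
struct sketch_work {
    struct list_head node;
    void (*fn)(struct sketch_work *work);
};

struct sketch_worker {
    bool killed;
    spinlock_t lock;
    struct list_head work_list;
};

/* Hypothetical SIGKILL handler following the three steps above. */
static void sketch_worker_handle_sigkill(struct sketch_worker *w)
{
    struct sketch_work *work, *tmp;

    /* 1. Refuse new virtqueues and new flush operations. */
    spin_lock_irq(&w->lock);
    w->killed = true;
    spin_unlock_irq(&w->lock);

    /* 2. Updating the virtqueue-to-worker mapping would happen here,
     *    so no new works can be queued (omitted in this sketch). */

    /* 3. Run the works that were already queued, then exit. */
    list_for_each_entry_safe(work, tmp, &w->work_list, node) {
        list_del(&work->node);
        work->fn(work);
    }
}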
In the Linux kernel, the following vulnerability has been resolved:
cdrom: rearrange last_media_change check to avoid unintentional overflow
When running syzkaller with the newly reintroduced signed integer wrap
sanitizer, we encounter this splat:
[ 366.015950] UBSAN: signed-integer-overflow in ../drivers/cdrom/cdrom.c:2361:33
[ 366.021089] -9223372036854775808 - 346321 cannot be represented in type '__s64' (aka 'long long')
[ 366.025894] program syz-executor.4 is using a deprecated SCSI ioctl, please convert it to SG_IO
[ 366.027502] CPU: 5 PID: 28472 Comm: syz-executor.7 Not tainted 6.8.0-rc2-00035-gb3ef86b5a957 #1
[ 366.027512] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 366.027518] Call Trace:
[ 366.027523] <TASK>
[ 366.027533] dump_stack_lvl+0x93/0xd0
[ 366.027899] handle_overflow+0x171/0x1b0
[ 366.038787] ata1.00: invalid multi_count 32 ignored
[ 366.043924] cdrom_ioctl+0x2c3f/0x2d10
[ 366.063932] ? __pm_runtime_resume+0xe6/0x130
[ 366.071923] sr_block_ioctl+0x15d/0x1d0
[ 366.074624] ? __pfx_sr_block_ioctl+0x10/0x10
[ 366.077642] blkdev_ioctl+0x419/0x500
[ 366.080231] ? __pfx_blkdev_ioctl+0x10/0x10
...
Historically, the signed integer overflow sanitizer did not work in the
kernel due to its interaction with `-fwrapv` but this has since been
changed [1] in the newest version of Clang. It was re-enabled in the
kernel with Commit 557f8c582a9ba8ab ("ubsan: Reintroduce signed overflow
sanitizer").
Let's rearrange the check to not perform any arithmetic, thus not
tripping the sanitizer.
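A sketch of the rearrangement idea (not the exact cdrom.c diff): comparing the two values directly is equivalent to the old "difference is negative" test wherever that test was well defined, but performs no arithmetic that can wrap:

#include <stdbool.h>
#include <stdint.h>

/* Overflow-prone form (sketch): the subtraction itself can wrap when
 * 'last' is near INT64_MIN, which is what UBSAN flagged. */
static bool media_changed_old(int64_t last, int64_t since)
{
    return (last - since) < 0;
}

/* Rearranged form (sketch): compare directly; no arithmetic, nothing wraps. */
static bool media_changed_new(int64_t last, int64_t since)
{
    return last < since;
}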
In the Linux kernel, the following vulnerability has been resolved:
drm/amd/display: Add NULL pointer check for kzalloc
[Why & How]
Check the return pointer of kzalloc before using it.
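A minimal sketch of the pattern with a hypothetical structure name; the point is simply that the allocation result is checked before any field is touched:

#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical structure, just to make the sketch self-contained. */
struct example_state {
    u32 value;
};

/* Check the kzalloc return pointer before it is ever dereferenced. */
static struct example_state *example_alloc_state(void)
{
    struct example_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

    if (!state)
        return NULL;     /* caller must handle the allocation failure */

    state->value = 0;    /* safe: only reached when kzalloc succeeded */
    return state;
}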
In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu: fix double free err_addr pointer warnings
In amdgpu_umc_bad_page_polling_timeout, amdgpu_umc_handle_bad_pages may be
run many times, which can double free err_addr in some special cases.
So set err_addr to NULL after freeing it to avoid the warnings.
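A sketch of the free-and-clear pattern with a hypothetical container; kfree(NULL) is a no-op, so clearing the pointer makes a repeated cleanup pass harmless:

#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical container for the sketch. */
struct err_record {
    u64 *err_addr;
};

/* Clearing the pointer after kfree() makes repeated cleanup calls
 * harmless, since kfree(NULL) does nothing. */
static void release_err_addr(struct err_record *rec)
{
    kfree(rec->err_addr);
    rec->err_addr = NULL;    /* a later cleanup pass frees nothing twice */
}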
In the Linux kernel, the following vulnerability has been resolved:
nilfs2: add missing check for inode numbers on directory entries
Syzbot reported that mounting and unmounting a specific pattern of
corrupted nilfs2 filesystem images causes a use-after-free of metadata
file inodes, which triggers a kernel bug in lru_add_fn().
As Jan Kara pointed out, this is because the link count of a metadata file
gets corrupted to 0, and nilfs_evict_inode(), which is called from iput(),
tries to delete that inode (ifile inode in this case).
The inconsistency occurs because directory entries carrying the inode numbers
of these metadata files, which should not be visible in the namespace, are
read without being checked.
Fix this issue by treating the inode numbers of these internal files as
errors in the sanity check helper when reading directory folios/pages.
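A sketch of the added sanity rule; the reserved-inode boundaries are passed in as parameters here because this sketch does not quote nilfs2's actual constants:

#include <linux/types.h>

/* Reserved metadata inode numbers must never appear in directory entries,
 * the root inode being the only legitimate exception. */
static bool dirent_ino_is_valid(unsigned long ino, unsigned long root_ino,
                                unsigned long first_user_ino)
{
    if (ino == root_ino)
        return true;      /* the root directory is a legitimate target */
    if (ino < first_user_ino)
        return false;     /* internal metadata file: treat entry as corrupt */
    return true;
}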
Also thanks to Hillf Danton and Matthew Wilcox for their initial mm-layer
analysis.