In the Linux kernel, the following vulnerability has been resolved:
arm64: dts: qcom: msm8998: Fix CPU/L2 idle state latency and residency
The entry/exit latency and minimum residency values for the MSM8998 idle
states were wrong. First of all, for all of them the timings were written
for CPU sleep, but the min-residency-us parameter was miscalculated
(presumably while porting this from downstream). Secondly, the power
collapse states set PC on both the CPU cluster *and* the L2 cache, which
have different timings: in the specific case of L2 the times are higher,
so those should be taken into account instead of the CPU ones.
This misconfiguration did not cause obvious issues because MSM8998 had no
CPU frequency scaling at all, so cluster/L2 power collapse was rarely (if
ever) hit.
When CPU scaling is enabled, though, the wrong timings produce SoC
instability that shows up to the user as random, apparently error-less,
sudden reboots and/or lockups.
This set of parameters stabilizes the SoC when CPU scaling is on and
power collapse is frequently hit.
In the Linux kernel, the following vulnerability has been resolved:
scsi: ufs: core: Improve SCSI abort handling
The following has been observed on a test setup:
WARNING: CPU: 4 PID: 250 at drivers/scsi/ufs/ufshcd.c:2737 ufshcd_queuecommand+0x468/0x65c
Call trace:
ufshcd_queuecommand+0x468/0x65c
scsi_send_eh_cmnd+0x224/0x6a0
scsi_eh_test_devices+0x248/0x418
scsi_eh_ready_devs+0xc34/0xe58
scsi_error_handler+0x204/0x80c
kthread+0x150/0x1b4
ret_from_fork+0x10/0x30
That warning is triggered by the following statement:
WARN_ON(lrbp->cmd);
Fix this warning by clearing lrbp->cmd from the abort handler.
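A minimal sketch of the idea, assuming the SCSI abort handler is
ufshcd_abort() and lrbp is the local reference block of the aborted
command; the tag lookup and the cleanup around ufshcd_try_to_abort_task()
are simplified, so this is illustrative rather than the verbatim patch:

  /*
   * Sketch: once the abort has succeeded, drop the lrbp->cmd reference so
   * that a later ufshcd_queuecommand() reusing the same tag no longer
   * trips WARN_ON(lrbp->cmd).
   */
  static int ufshcd_abort(struct scsi_cmnd *cmd)
  {
          struct ufs_hba *hba = shost_priv(cmd->device->host);
          int tag = scsi_cmd_to_rq(cmd)->tag;
          struct ufshcd_lrb *lrbp = &hba->lrb[tag];
          unsigned long flags;

          if (ufshcd_try_to_abort_task(hba, tag))
                  return FAILED;

          spin_lock_irqsave(hba->host->host_lock, flags);
          lrbp->cmd = NULL;       /* slot is free for queuecommand again */
          spin_unlock_irqrestore(hba->host->host_lock, flags);

          return SUCCESS;
  }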
In the Linux kernel, the following vulnerability has been resolved:
scsi: scsi_debug: Fix out-of-bound read in resp_readcap16()
The following warning was observed running syzkaller:
[ 3813.830724] sg_write: data in/out 65466/242 bytes for SCSI command 0x9e-- guessing data in;
[ 3813.830724] program syz-executor not setting count and/or reply_len properly
[ 3813.836956] ==================================================================
[ 3813.839465] BUG: KASAN: stack-out-of-bounds in sg_copy_buffer+0x157/0x1e0
[ 3813.841773] Read of size 4096 at addr ffff8883cf80f540 by task syz-executor/1549
[ 3813.846612] Call Trace:
[ 3813.846995] dump_stack+0x108/0x15f
[ 3813.847524] print_address_description+0xa5/0x372
[ 3813.848243] kasan_report.cold+0x236/0x2a8
[ 3813.849439] check_memory_region+0x240/0x270
[ 3813.850094] memcpy+0x30/0x80
[ 3813.850553] sg_copy_buffer+0x157/0x1e0
[ 3813.853032] sg_copy_from_buffer+0x13/0x20
[ 3813.853660] fill_from_dev_buffer+0x135/0x370
[ 3813.854329] resp_readcap16+0x1ac/0x280
[ 3813.856917] schedule_resp+0x41f/0x1630
[ 3813.858203] scsi_debug_queuecommand+0xb32/0x17e0
[ 3813.862699] scsi_dispatch_cmd+0x330/0x950
[ 3813.863329] scsi_request_fn+0xd8e/0x1710
[ 3813.863946] __blk_run_queue+0x10b/0x230
[ 3813.864544] blk_execute_rq_nowait+0x1d8/0x400
[ 3813.865220] sg_common_write.isra.0+0xe61/0x2420
[ 3813.871637] sg_write+0x6c8/0xef0
[ 3813.878853] __vfs_write+0xe4/0x800
[ 3813.883487] vfs_write+0x17b/0x530
[ 3813.884008] ksys_write+0x103/0x270
[ 3813.886268] __x64_sys_write+0x77/0xc0
[ 3813.886841] do_syscall_64+0x106/0x360
[ 3813.887415] entry_SYSCALL_64_after_hwframe+0x44/0xa9
This issue can be reproduced with the following syzkaller log:
r0 = openat(0xffffffffffffff9c, &(0x7f0000000040)='./file0\x00', 0x26e1, 0x0)
r1 = syz_open_procfs(0xffffffffffffffff, &(0x7f0000000000)='fd/3\x00')
open_by_handle_at(r1, &(0x7f00000003c0)=ANY=[@ANYRESHEX], 0x602000)
r2 = syz_open_dev$sg(&(0x7f0000000000), 0x0, 0x40782)
write$binfmt_aout(r2, &(0x7f0000000340)=ANY=[@ANYBLOB="00000000deff000000000000000000000000000000000000000000000000000047f007af9e107a41ec395f1bded7be24277a1501ff6196a83366f4e6362bc0ff2b247f68a972989b094b2da4fb3607fcf611a22dd04310d28c75039d"], 0x126)
In resp_readcap16() the signed "int alloc_len" ends up with the value
-1104926854, and the resulting huge arr_len is then passed to
fill_from_dev_buffer(), even though arr is only 32 bytes. This leads to an
out-of-bounds read in sg_copy_buffer().
To solve this issue, define alloc_len as u32.
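A sketch of the shape of the fix, loosely following resp_readcap16() in
drivers/scsi/scsi_debug.c (buffer fill-in abbreviated; the array-size
macro here is a stand-in for the driver's own constant):

  /*
   * READ CAPACITY(16) carries a 32-bit ALLOCATION LENGTH at bytes 10..13
   * of the CDB. Reading it into a signed int lets values >= 2^31 go
   * negative, defeating the clamp against the 32-byte response buffer and
   * letting fill_from_dev_buffer() copy far past the end of arr[].
   */
  #define READCAP16_ARR_SZ 32

  static int resp_readcap16(struct scsi_cmnd *scp,
                            struct sdebug_dev_info *devip)
  {
          unsigned char *cmd = scp->cmnd;
          unsigned char arr[READCAP16_ARR_SZ];
          u32 alloc_len;          /* was: int alloc_len */

          alloc_len = get_unaligned_be32(cmd + 10);
          memset(arr, 0, sizeof(arr));
          /* ... fill arr[] with the READ CAPACITY(16) parameter data ... */

          return fill_from_dev_buffer(scp, arr,
                                      min_t(u32, alloc_len, sizeof(arr)));
  }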
In the Linux kernel, the following vulnerability has been resolved:
scsi: pm80xx: Fix memory leak during rmmod
The driver failed to release all of the memory it allocated, which leads
to a memory leak when the driver is removed.
Properly free the memory when the module is removed.
In the Linux kernel, the following vulnerability has been resolved:
cfg80211: call cfg80211_stop_ap when switch from P2P_GO type
If userspace tools switch from NL80211_IFTYPE_P2P_GO to
NL80211_IFTYPE_ADHOC via send_msg(NL80211_CMD_SET_INTERFACE), the
cfg80211_stop_ap() cleanup is not called, which leads to re-initialization
of in-use data. For example, this path re-initializes
sdata->assigned_chanctx_list while it is still an element of the
assigned_vifs list, corrupting that linked list.
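A sketch of the shape of the fix, assuming the type change goes through
cfg80211_change_iface() and noting that the exact cfg80211_stop_ap()
signature differs between kernel versions:

  /* In cfg80211_change_iface(), teardown for the old interface type: give
   * P2P_GO the same AP shutdown that NL80211_IFTYPE_AP already gets, so
   * BSS/chanctx state is released before the interface is re-initialized
   * as IBSS. */
  switch (otype) {
  case NL80211_IFTYPE_AP:
  case NL80211_IFTYPE_P2P_GO:     /* previously missing */
          cfg80211_stop_ap(rdev, dev, -1, true);
          break;
  default:
          break;
  }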
In the Linux kernel, the following vulnerability has been resolved:
x86, relocs: Ignore relocations in .notes section
When building with CONFIG_XEN_PV=y, .text symbols are emitted into
the .notes section so that Xen can find the "startup_xen" entry point.
This information is used prior to booting the kernel, so relocations
are not useful. In fact, performing relocations against the .notes
section means that the KASLR base is exposed since /sys/kernel/notes
is world-readable.
To avoid leaking the KASLR base without breaking unprivileged tools that
are expecting to read /sys/kernel/notes, skip performing relocations in
the .notes section. The values readable in .notes are then identical to
those found in System.map.
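A sketch of the corresponding relocs-tool change, assuming the section
walk in arch/x86/tools/relocs.c where sec_applies is the section the
relocation entries apply to (field names abbreviated from memory):

  /* Skip relocations whose target is a note section: the values there are
   * consumed before boot (e.g. Xen locating startup_xen), and leaving
   * them unrelocated keeps /sys/kernel/notes identical to System.map
   * instead of leaking the KASLR offset. */
  for (i = 0; i < shnum; i++) {
          struct section *sec = &secs[i];
          struct section *sec_applies;

          if (sec->shdr.sh_type != SHT_REL_TYPE)
                  continue;
          sec_applies = &secs[sec->shdr.sh_info];
          if (!(sec_applies->shdr.sh_flags & SHF_ALLOC))
                  continue;
          if (sec_applies->shdr.sh_type == SHT_NOTE)
                  continue;       /* the new check */
          /* ... existing per-relocation processing ... */
  }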
In the Linux kernel, the following vulnerability has been resolved:
ksmbd: validate payload size in ipc response
If a malicious ksmbd-tools is installed, ksmbd.mountd can return an
invalid IPC response to the ksmbd kernel server. ksmbd should validate the
payload size of the IPC response from ksmbd.mountd to avoid a memory
overrun or slab-out-of-bounds access. This patch validates the three IPC
responses that carry a payload.
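A sketch of the kind of size check involved, with illustrative names: the
kernel compares the number of bytes it actually received against the size
the response claims for itself before parsing the payload:

  /* Reject an IPC response whose self-described payload size does not add
   * up to the bytes actually received from ksmbd.mountd, so later parsing
   * cannot run past the end of the buffer. */
  static int ipc_validate_msg_size(unsigned int received_sz,
                                   unsigned int hdr_sz,
                                   unsigned int claimed_payload_sz)
  {
          unsigned int expected;

          if (check_add_overflow(hdr_sz, claimed_payload_sz, &expected))
                  return -EINVAL;

          return received_sz == expected ? 0 : -EINVAL;
  }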
In the Linux kernel, the following vulnerability has been resolved:
vfio/pci: Lock external INTx masking ops
Mask operations through config space changes to DisINTx may race INTx
configuration changes via ioctl. Create wrappers that add locking for
paths outside of the core interrupt code.
In particular, irq_type is updated while holding igate, therefore testing
is_intx() requires holding igate. For example, clearing DisINTx from
config space can otherwise race changes of the interrupt configuration.
This aligns interfaces which may trigger the INTx eventfd into two
camps, one side serialized by igate and the other only enabled while
INTx is configured. A subsequent patch introduces synchronization for
the latter flows.
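A sketch of the wrapper pattern described above, where
__vfio_pci_intx_mask() stands in for the pre-existing unlocked mask logic
(the helper split is illustrative):

  /* External callers, e.g. the config-space write path toggling DisINTx,
   * must hold igate: irq_type and therefore is_intx() are only stable
   * under that mutex. */
  void vfio_pci_intx_mask(struct vfio_pci_core_device *vdev)
  {
          mutex_lock(&vdev->igate);
          __vfio_pci_intx_mask(vdev);
          mutex_unlock(&vdev->igate);
  }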
In the Linux kernel, the following vulnerability has been resolved:
vfio/pci: Create persistent INTx handler
A vulnerability exists where the eventfd for INTx signaling can be
deconfigured, which unregisters the IRQ handler but still allows
eventfds to be signaled with a NULL context through the SET_IRQS ioctl
or through unmask irqfd if the device interrupt is pending.
Ideally this could be solved with some additional locking; the igate
mutex serializes the ioctl and config space accesses, and the interrupt
handler is unregistered relative to the trigger, but the irqfd path
runs asynchronously to those. The igate mutex cannot be acquired from the
atomic context of the eventfd wake function. Disabling the irqfd
relative to the eventfd registration is potentially incompatible with
existing userspace.
As a result, the solution implemented here moves configuration of the
INTx interrupt handler to track the lifetime of the INTx context object
and irq_type configuration, rather than registration of a particular
trigger eventfd. Synchronization is added between the ioctl path and
eventfd_signal() wrapper such that the eventfd trigger can be
dynamically updated relative to in-flight interrupts or irqfd callbacks.
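A sketch of the signaling wrapper implied above, assuming the INTx
context carries the trigger eventfd plus a lock that the ioctl path also
takes when swapping the trigger (structure and field names are
illustrative, and the eventfd_signal() signature varies by kernel
version):

  /* Signal the INTx eventfd only through this wrapper: holding the
   * context lock lets the ioctl path swap or clear ctx->trigger without
   * racing in-flight interrupts or irqfd callbacks against a NULL
   * context. */
  static void vfio_send_intx_eventfd(struct vfio_pci_intx_ctx *ctx)
  {
          unsigned long flags;

          spin_lock_irqsave(&ctx->lock, flags);
          if (ctx->trigger)
                  eventfd_signal(ctx->trigger, 1);
          spin_unlock_irqrestore(&ctx->lock, flags);
  }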
In the Linux kernel, the following vulnerability has been resolved:
vfio/pci: Disable auto-enable of exclusive INTx IRQ
Currently, for devices requiring masking at the irqchip for INTx, i.e.
devices without DisINTx support, the IRQ is enabled in request_irq()
and subsequently disabled as necessary to align with the masked status
flag. This presents a window where the interrupt could fire between
these events, resulting in the IRQ incrementing the disable depth twice.
This would be unrecoverable for a user since the masked flag prevents
nested enables through vfio.
Instead, invert the logic using IRQF_NO_AUTOEN such that exclusive INTx
is never auto-enabled, then unmask as required.
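A sketch of the inverted setup for an exclusive (non-DisINTx) INTx line,
with an illustrative function and context layout around the real
request_irq()/enable_irq() calls:

  /* Request the line disabled: IRQF_NO_AUTOEN skips the automatic enable
   * that request_irq() would otherwise perform, so no interrupt can fire
   * and bump the disable depth while the line is still logically
   * masked. */
  static int vfio_intx_enable_exclusive(struct vfio_pci_core_device *vdev,
                                        struct vfio_pci_irq_ctx *ctx,
                                        struct pci_dev *pdev)
  {
          int ret;

          ret = request_irq(pdev->irq, vfio_intx_handler, IRQF_NO_AUTOEN,
                            ctx->name, vdev);
          if (ret)
                  return ret;

          /* Unmask only once the virtual mask state says so. */
          if (!ctx->masked)
                  enable_irq(pdev->irq);

          return 0;
  }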