In the Linux kernel, the following vulnerability has been resolved:
i40e: fix race condition by adding filter's intermediate sync state
Fix a race condition in the i40e driver that leads to MAC/VLAN filters
becoming corrupted and leaking. Address the issue that occurs under
heavy load when multiple threads concurrently modify MAC/VLAN
filters by setting the MAC address and port VLAN.
1. Thread T0 allocates a filter in i40e_add_filter() within
i40e_ndo_set_vf_port_vlan().
2. Thread T1 concurrently frees the filter in __i40e_del_filter() within
i40e_ndo_set_vf_mac().
3. Subsequently, i40e_service_task() calls i40e_sync_vsi_filters(), which
refers to the already freed filter memory, causing corruption.
Reproduction steps:
1. Spawn multiple VFs.
2. Apply a concurrent heavy load by running parallel operations to change
MAC addresses on the VFs and change port VLANs on the host.
3. Observe errors in dmesg:
"Error I40E_AQ_RC_ENOSPC adding RX filters on VF XX,
please set promiscuous on manually for VF XX".
Intel cannot open-source the exact code for stable reproduction at this time.
The fix involves implementing a new intermediate filter state,
I40E_FILTER_NEW_SYNC, for the time when a filter is on a tmp_add_list.
These filters cannot be deleted from the hash list directly but
must be removed using the full process.
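As a rough illustration of the idea (a sketch only; the pre-existing
states are shown for context and mirror the driver's filter state
machine, not necessarily verbatim):

enum i40e_filter_state {
        I40E_FILTER_INVALID = 0,        /* should never be set */
        I40E_FILTER_NEW,                /* added, not yet pushed to firmware */
        I40E_FILTER_ACTIVE,             /* programmed into the switch */
        I40E_FILTER_FAILED,             /* rejected by firmware */
        I40E_FILTER_REMOVE,             /* scheduled for removal */
        I40E_FILTER_NEW_SYNC,           /* new, currently on a tmp_add_list */
};

While a filter is in I40E_FILTER_NEW_SYNC, a concurrent deleter such as
i40e_ndo_set_vf_mac() only marks it for removal instead of freeing it
from the hash list, so i40e_sync_vsi_filters() never dereferences freed
memory.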
In the Linux kernel, the following vulnerability has been resolved:
media: dvbdev: prevent the risk of out of memory access
The dvbdev contains a static variable used to store dvb minors.
Its behavior depends on whether CONFIG_DVB_DYNAMIC_MINORS is set.
When it is not set, dvb_register_device() won't check for boundaries,
relying on a previous call to dvb_register_adapter() to have already
enforced them. In a similar way, dvb_device_open() assumes that the
register functions already did the needed checks. This can be fragile
if some device ends up using different calls, and it also generates
warnings from static analysers like Coverity.
So, add explicit guards to prevent the potential risk of out-of-bounds
memory access.
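A minimal sketch of such guards, assuming the upstream MAX_DVB_MINORS
limit; the surrounding registration and open logic is simplified:

/* Registration path: refuse a minor outside the static table instead
 * of trusting that dvb_register_adapter() already enforced the bound. */
if (minor >= MAX_DVB_MINORS)
        return -EINVAL;

/* dvb_device_open(): validate the device minor before using it to
 * index the minors table. */
if (iminor(inode) >= MAX_DVB_MINORS)
        return -ENODEV;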
In the Linux kernel, the following vulnerability has been resolved:
nfs: Fix KMSAN warning in decode_getfattr_attrs()
Fix the following KMSAN warning:
=====================================================
BUG: KMSAN: uninit-value in decode_getfattr_attrs+0x2d6d/0x2f90
decode_getfattr_attrs+0x2d6d/0x2f90
decode_getfattr_generic+0x806/0xb00
nfs4_xdr_dec_getattr+0x1de/0x240
rpcauth_unwrap_resp_decode+0xab/0x100
rpcauth_unwrap_resp+0x95/0xc0
call_decode+0x4ff/0xb50
__rpc_execute+0x57b/0x19d0
rpc_execute+0x368/0x5e0
rpc_run_task+0xcfe/0xee0
nfs4_proc_getattr+0x5b5/0x990
__nfs_revalidate_inode+0x477/0xd00
nfs_access_get_cached+0x1021/0x1cc0
nfs_do_access+0x9f/0xae0
nfs_permission+0x1e4/0x8c0
inode_permission+0x356/0x6c0
link_path_walk+0x958/0x1330
path_lookupat+0xce/0x6b0
filename_lookup+0x23e/0x770
vfs_statx+0xe7/0x970
vfs_fstatat+0x1f2/0x2c0
__se_sys_newfstatat+0x67/0x880
__x64_sys_newfstatat+0xbd/0x120
x64_sys_call+0x1826/0x3cf0
do_syscall_64+0xd0/0x1b0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
CPU: 1 UID: 0 PID: 7651 Comm: cp Tainted: G B
Tainted: [B]=BAD_PAGE
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)
=====================================================
The KMSAN warning is triggered in decode_getfattr_attrs(), when calling
decode_attr_mdsthreshold(). It appears that fattr->mdsthreshold is not
initialized.
Fix the issue by initializing fattr->mdsthreshold to NULL in
nfs_fattr_init().
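The shape of the fix, as a sketch of nfs_fattr_init() (the surrounding
field resets follow the upstream helper but are shown here only for
context):

void nfs_fattr_init(struct nfs_fattr *fattr)
{
        fattr->valid = 0;
        fattr->time_start = jiffies;
        fattr->gencount = nfs_inc_attr_generation_counter();
        fattr->owner_name = NULL;
        fattr->group_name = NULL;
        fattr->mdsthreshold = NULL;     /* previously left uninitialized */
}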
In the Linux kernel, the following vulnerability has been resolved:
drm/i915/hdcp: Add encoder check in hdcp2_get_capability
Add an encoder check in intel_hdcp2_get_capability() to avoid a NULL
pointer error.
In the Linux kernel, the following vulnerability has been resolved:
drm/i915/hdcp: Add encoder check in intel_hdcp_get_capability
During a hotplug or suspend/resume scenario, the encoder is not always
initialized when intel_hdcp_get_capability() is called; add a check to
avoid a kernel NULL pointer dereference.
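A minimal sketch of the guard used by both capability helpers, assuming
intel_attached_encoder() as the lookup and simplifying the surrounding
code:

struct intel_encoder *encoder = intel_attached_encoder(connector);

/* During hotplug or suspend/resume the connector may not have an
 * encoder attached yet; bail out instead of dereferencing NULL. */
if (!encoder)
        return;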
In the Linux kernel, the following vulnerability has been resolved:
io_uring/rw: fix missing NOWAIT check for O_DIRECT start write
When io_uring starts a write, it'll call kiocb_start_write() to bump the
super block rwsem, preventing any freezes from happening while that
write is in-flight. The freeze side will grab that rwsem for writing,
excluding any new writers from happening and waiting for existing writes
to finish. But io_uring unconditionally uses kiocb_start_write(), which
will block if someone is currently attempting to freeze the mount point.
This causes a deadlock where freeze is waiting for previous writes to
complete, but the previous writes cannot complete, as the task that is
supposed to complete them is blocked waiting on starting a new write.
This results in the following stuck trace showing that dependency with
the write blocked starting a new write:
task:fio state:D stack:0 pid:886 tgid:886 ppid:876
Call trace:
__switch_to+0x1d8/0x348
__schedule+0x8e8/0x2248
schedule+0x110/0x3f0
percpu_rwsem_wait+0x1e8/0x3f8
__percpu_down_read+0xe8/0x500
io_write+0xbb8/0xff8
io_issue_sqe+0x10c/0x1020
io_submit_sqes+0x614/0x2110
__arm64_sys_io_uring_enter+0x524/0x1038
invoke_syscall+0x74/0x268
el0_svc_common.constprop.0+0x160/0x238
do_el0_svc+0x44/0x60
el0_svc+0x44/0xb0
el0t_64_sync_handler+0x118/0x128
el0t_64_sync+0x168/0x170
INFO: task fsfreeze:7364 blocked for more than 15 seconds.
Not tainted 6.12.0-rc5-00063-g76aaf945701c #7963
with the attempting freezer stuck trying to grab the rwsem:
task:fsfreeze state:D stack:0 pid:7364 tgid:7364 ppid:995
Call trace:
__switch_to+0x1d8/0x348
__schedule+0x8e8/0x2248
schedule+0x110/0x3f0
percpu_down_write+0x2b0/0x680
freeze_super+0x248/0x8a8
do_vfs_ioctl+0x149c/0x1b18
__arm64_sys_ioctl+0xd0/0x1a0
invoke_syscall+0x74/0x268
el0_svc_common.constprop.0+0x160/0x238
do_el0_svc+0x44/0x60
el0_svc+0x44/0xb0
el0t_64_sync_handler+0x118/0x128
el0t_64_sync+0x168/0x170
Fix this by having the io_uring side honor IOCB_NOWAIT, and only attempt a
blocking grab of the super block rwsem if it isn't set. For normal issue
where IOCB_NOWAIT would always be set, this returns -EAGAIN which will
have io_uring core issue a blocking attempt of the write. That will in
turn also get completions run, ensuring forward progress.
Since freezing requires CAP_SYS_ADMIN in the first place, this isn't
something that can be triggered by a regular user.
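A sketch of the shape of the fix on the io_uring side; the helper name
io_kiocb_start_write() and the exact flow are approximations of the
description above:

static bool io_kiocb_start_write(struct io_kiocb *req, struct kiocb *kiocb)
{
        struct inode *inode;
        bool ret;

        if (!(req->flags & REQ_F_ISREG))
                return true;
        if (!(kiocb->ki_flags & IOCB_NOWAIT)) {
                /* Blocking issue: safe to wait for a pending freeze. */
                kiocb_start_write(kiocb);
                return true;
        }

        /* Non-blocking issue: only try to take the freeze protection.
         * On failure the write returns -EAGAIN and io_uring retries it
         * from a context where blocking is allowed. */
        inode = file_inode(kiocb->ki_filp);
        ret = sb_start_write_trylock(inode->i_sb);
        if (ret)
                __sb_writers_release(inode->i_sb, SB_FREEZE_WRITE);
        return ret;
}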
In the Linux kernel, the following vulnerability has been resolved:
net/sched: stop qdisc_tree_reduce_backlog on TC_H_ROOT
In qdisc_tree_reduce_backlog, Qdiscs with major handle ffff: are assumed
to be either root or ingress. This assumption is bogus since it's valid
to create egress qdiscs with major handle ffff:
Budimir Markovic found that for qdiscs like DRR that maintain an active
class list, it will cause a UAF with a dangling class pointer.
In 066a3b5b2346, the concern was to avoid iterating over the ingress
qdisc since its parent is itself. The proper fix is to stop when parent
TC_H_ROOT is reached, because the only way to retrieve ingress is for a
hierarchy which does not contain a ffff: major handle to call into
qdisc_lookup() with TC_H_MAJ(TC_H_ROOT).
In the scenario where major ffff: is an egress qdisc at any of the tree
levels, the updates will also propagate up to TC_H_ROOT, at which point
the iteration must stop.
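The described change amounts to adjusting the loop's stop condition in
qdisc_tree_reduce_backlog(); roughly (a sketch, with the class-update
body elided):

while ((parentid = sch->parent)) {
        /* Previously the loop stopped on any major handle ffff:
         * (assumed ingress or root), which wrongly matched valid
         * egress qdiscs created with that handle. Stop only once the
         * real root is reached. */
        if (parentid == TC_H_ROOT)
                break;
        sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid));
        if (!sch)
                break;
        /* ... decrement the parent class's qlen/backlog here ... */
}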
In the Linux kernel, the following vulnerability has been resolved:
net: stmmac: TSO: Fix unbalanced DMA map/unmap for non-paged SKB data
If the non-paged data of an SKB carries both the protocol header and
protocol payload to be transmitted on a platform whose DMA AXI address
width is configured to 40-bit/48-bit, or if the size of the non-paged
data is bigger than TSO_MAX_BUFF_SIZE on a platform whose DMA AXI
address width is configured to 32-bit, then this SKB requires at least
two DMA transmit descriptors to serve it.
For example, three descriptors are allocated to split one DMA buffer
mapped from one piece of non-paged data:
dma_desc[N + 0],
dma_desc[N + 1],
dma_desc[N + 2].
Then three elements of tx_q->tx_skbuff_dma[] will be allocated to hold
extra information to be reused in stmmac_tx_clean():
tx_q->tx_skbuff_dma[N + 0],
tx_q->tx_skbuff_dma[N + 1],
tx_q->tx_skbuff_dma[N + 2].
Now we focus on tx_q->tx_skbuff_dma[entry].buf, which is the DMA buffer
address returned by DMA mapping call. stmmac_tx_clean() will try to
unmap the DMA buffer _ONLY_IF_ tx_q->tx_skbuff_dma[entry].buf
is a valid buffer address.
The expected behavior that saves DMA buffer address of this non-paged
data to tx_q->tx_skbuff_dma[entry].buf is:
tx_q->tx_skbuff_dma[N + 0].buf = NULL;
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = dma_map_single();
Unfortunately, the current code misbehaves like this:
tx_q->tx_skbuff_dma[N + 0].buf = dma_map_single();
tx_q->tx_skbuff_dma[N + 1].buf = NULL;
tx_q->tx_skbuff_dma[N + 2].buf = NULL;
On the stmmac_tx_clean() side, when dma_desc[N + 0] is closed by the
DMA engine, tx_q->tx_skbuff_dma[N + 0].buf is a valid buffer address
obviously, then the DMA buffer will be unmapped immediately.
There may be a rare case in which the DMA engine has not yet finished
the pending dma_desc[N + 1] and dma_desc[N + 2]. Then things go horribly
wrong: the DMA engine accesses an unmapped/unreferenced memory region,
and corrupted data is transmitted or an IOMMU fault is triggered.
In contrast, the for-loop that maps SKB fragments behaves perfectly as
expected, and that is how the driver should handle both non-paged data
and paged frags.
This patch corrects DMA map/unmap sequences by fixing the array index
for tx_q->tx_skbuff_dma[entry].buf when assigning DMA buffer address.
Tested and verified on DWXGMAC CORE 3.20a
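A sketch of the corrected bookkeeping, using the dma_desc[N + 0..2]
example above; first_entry and last_entry are hypothetical index names
standing in for the driver's real ring indices:

des = dma_map_single(priv->device, skb->data, skb_headlen(skb),
                     DMA_TO_DEVICE);
if (dma_mapping_error(priv->device, des))
        goto dma_map_err;

stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);

/* Broken behaviour: the mapping was recorded at the FIRST entry, so
 * stmmac_tx_clean() unmapped it as soon as dma_desc[N + 0] closed,
 * while dma_desc[N + 1..2] could still be pending:
 *      tx_q->tx_skbuff_dma[first_entry].buf = des;
 *
 * Correct behaviour: record it at the LAST entry built from the
 * non-paged data, exactly as the paged-fragment loop already does. */
tx_q->tx_skbuff_dma[last_entry].buf = des;
tx_q->tx_skbuff_dma[last_entry].len = skb_headlen(skb);
tx_q->tx_skbuff_dma[last_entry].map_as_page = false;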
In the Linux kernel, the following vulnerability has been resolved:
drm/amdgpu: prevent NULL pointer dereference if ATIF is not supported
acpi_evaluate_object() may return AE_NOT_FOUND (failure), which would
result in dereferencing buffer.pointer (obj) while it is NULL.
Although this case may be unrealistic for the current code, it is
still better to protect against possible bugs.
Bail out also when status is AE_NOT_FOUND.
This fixes 1 FORWARD_NULL issue reported by Coverity
Report: CID 1600951: Null pointer dereferences (FORWARD_NULL)
(cherry picked from commit 91c9e221fe2553edf2db71627d8453f083de87a1)
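A sketch of the kind of guard described; the surrounding amdgpu ACPI
code is simplified and the exact call arguments are assumptions:

status = acpi_evaluate_object(handle, NULL, NULL, &buffer);
/* Bail out on any failure, including AE_NOT_FOUND, before the returned
 * buffer.pointer is ever dereferenced. */
if (ACPI_FAILURE(status)) {
        DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n",
                         acpi_format_exception(status));
        kfree(buffer.pointer);
        return -EIO;
}
obj = (union acpi_object *)buffer.pointer;      /* non-NULL from here on */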
In the Linux kernel, the following vulnerability has been resolved:
media: s5p-jpeg: prevent buffer overflows
The current logic allows word to be less than 2. If this happens,
there will be buffer overflows, as reported by smatch. Add extra
checks to prevent it.
While here, remove an unused word = 0 assignment.
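A sketch of the kind of guard added to the header parser; get_word_be()
and the local variables are simplified stand-ins for the driver code:

/* A JPEG marker segment length includes its own two length bytes, so
 * any value below 2 would make the payload length negative and later
 * copies run past their buffers. */
if (get_word_be(&jpeg_buffer, &word))
        break;
if (word < 2)
        break;
length = (long)word - 2;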