Security Vulnerabilities
- CVEs Published in October 2024
In the Linux kernel, the following vulnerability has been resolved:
NFSD: Limit the number of concurrent async COPY operations
Nothing appears to limit the number of concurrent async COPY
operations that clients can start. In addition, each async COPY can
copy an unlimited number of 4MB chunks, so it can run for a long
time. Async COPY can therefore become a DoS vector.
Add a restriction mechanism that bounds the number of concurrent
background COPY operations. Start simple and try to be fair -- this
patch implements a per-namespace limit.
An async COPY request that occurs while this limit is exceeded gets
NFS4ERR_DELAY. The requesting client can choose to send the request
again after a delay or fall back to a traditional read/write style
copy.
If there is need to make the mechanism more sophisticated, we can
visit that in future patches.
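As an illustration of the restriction mechanism described above, here is a minimal sketch in kernel-style C of a per-namespace counter that bounds concurrent background COPY operations. The structure, field, and limit names are hypothetical; the actual patch may differ in detail.

#include <linux/atomic.h>
#include <linux/errno.h>

/* Hypothetical per-namespace cap; the real limit and field names may differ. */
#define ASYNC_COPY_MAX 16

struct async_copy_limiter {
	atomic_t pending;	/* concurrent background COPY operations */
};

/* Returns 0 if a slot was reserved; -EAGAIN means the caller should answer
 * the client with NFS4ERR_DELAY instead of starting another async COPY. */
static int async_copy_try_start(struct async_copy_limiter *lim)
{
	if (atomic_inc_return(&lim->pending) > ASYNC_COPY_MAX) {
		atomic_dec(&lim->pending);
		return -EAGAIN;
	}
	return 0;
}

/* Called when a background COPY completes or is aborted. */
static void async_copy_finish(struct async_copy_limiter *lim)
{
	atomic_dec(&lim->pending);
}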
In the Linux kernel, the following vulnerability has been resolved:
uprobes: fix kernel info leak via "[uprobes]" vma
xol_add_vma() maps the uninitialized page allocated by __create_xol_area()
into userspace. On some architectures (x86), this memory is readable even
without VM_READ: VM_EXEC results in the same pgprot_t as VM_EXEC|VM_READ.
This hardly matters in practice, though, since a debugger can read the
memory anyway.
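The straightforward remedy is to hand userspace a zeroed page. A minimal sketch, assuming the page is allocated in __create_xol_area() as in mainline (the exact flags used by the fix may differ):

#include <linux/gfp.h>
#include <linux/mm_types.h>

/* Sketch: allocate the XOL slot page pre-zeroed so the "[uprobes]" vma can
 * never expose stale kernel memory to userspace, regardless of whether the
 * architecture makes VM_EXEC mappings readable. */
static struct page *xol_alloc_page_sketch(void)
{
	return alloc_page(GFP_HIGHUSER | __GFP_ZERO);
}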
In the Linux kernel, the following vulnerability has been resolved:
tracing/timerlat: Drop interface_lock in stop_kthread()
stop_kthread() is the offline callback for "trace/osnoise:online". Since
commit 5bfbcd1ee57b ("tracing/timerlat: Add interface_lock around clearing
of kthread in stop_kthread()"), the following ABBA deadlock scenario can
occur:
T1                            | T2 [BP]               | T3 [AP]
osnoise_hotplug_workfn()      | work_for_cpu_fn()     | cpuhp_thread_fun()
                              |   _cpu_down()         |   osnoise_cpu_die()
  mutex_lock(&interface_lock) |                       |     stop_kthread()
                              |     cpus_write_lock() |       mutex_lock(&interface_lock)
  cpus_read_lock()            |     cpuhp_kick_ap()   |
As the interface_lock here only protects the "kthread" field of the
osn_var, use xchg() instead to fix this issue. Also switch
stop_per_cpu_kthreads() back to for_each_online_cpu(), since it can take
cpus_read_lock() again.
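A minimal sketch of the lock-free approach, with a hypothetical stand-in for the per-CPU osnoise state (the real variable and field names may differ):

#include <linux/atomic.h>
#include <linux/kthread.h>
#include <linux/percpu.h>

/* Hypothetical stand-in for the per-CPU osnoise variables. */
struct osn_var_sketch {
	struct task_struct *kthread;
};
static DEFINE_PER_CPU(struct osn_var_sketch, osn_var_sketch);

/* Atomically take and clear the kthread pointer with xchg() instead of
 * serializing on interface_lock, so the hotplug offline callback never
 * participates in the interface_lock/cpus_write_lock ABBA ordering. */
static void stop_kthread_sketch(unsigned int cpu)
{
	struct task_struct *kthread;

	kthread = xchg(&per_cpu(osn_var_sketch, cpu).kthread, NULL);
	if (kthread)
		kthread_stop(kthread);	/* only the xchg winner gets here */
}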
In the Linux kernel, the following vulnerability has been resolved:
net: stmmac: Fix zero-division error when disabling tc cbs
The commit b8c43360f6e4 ("net: stmmac: No need to calculate speed divider
when offload is disabled") allows the "port_transmit_rate_kbps" to be
set to a value of 0, which is then passed to the "div_s64" function when
tc-cbs is disabled. This leads to a zero-division error.
When tc-cbs is disabled, the idleslope, sendslope, and credit values do not
need to be configured. Therefore, adding a return statement after setting
the txQ mode to DCB when tc-cbs is disabled prevents the zero-division
error.
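A minimal sketch of that early-return fix, with hypothetical helper and structure names standing in for the stmmac driver internals:

#include <net/pkt_cls.h>

/* Hypothetical driver state and queue-mode helpers. */
struct cbs_sketch_priv { void *hw; };
static void set_queue_mode_dcb(struct cbs_sketch_priv *priv, s32 queue) { }
static void set_queue_mode_avb(struct cbs_sketch_priv *priv, s32 queue) { }

/* When tc-cbs is disabled, put the queue back into DCB mode and return at
 * once, so a port_transmit_rate_kbps of 0 is never fed into div_s64() by
 * the slope calculation. */
static int tc_setup_cbs_sketch(struct cbs_sketch_priv *priv,
			       struct tc_cbs_qopt_offload *qopt)
{
	if (!qopt->enable) {
		set_queue_mode_dcb(priv, qopt->queue);
		return 0;	/* no slopes or credits to configure */
	}

	set_queue_mode_avb(priv, qopt->queue);
	/* idleslope/sendslope/credit computation using div_s64() follows
	 * only on this path, where the transmit rate is non-zero. */
	return 0;
}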
In the Linux kernel, the following vulnerability has been resolved:
gso: fix udp gso fraglist segmentation after pull from frag_list
Detect gso fraglist skbs with corrupted geometry (see below) and
pass these to skb_segment instead of skb_segment_list, as skb_segment
can segment them correctly.
Valid SKB_GSO_FRAGLIST skbs
- consist of two or more segments
- the head_skb holds the protocol headers plus the first gso_size of payload
- one or more frag_list skbs hold exactly one segment each
- all but the last must be exactly gso_size bytes
Optional datapath hooks such as NAT and BPF (bpf_skb_pull_data) can
modify these skbs, breaking these invariants.
In extreme cases they pull all data into skb linear. For UDP, this
causes a NULL ptr deref in __udpv4_gso_segment_list_csum at
udp_hdr(seg->next)->dest.
Detect geometry invalidated by such a pull by checking the head_skb size.
Don't simply drop these skbs, as that may blackhole a destination; instead,
convert them so they can be passed to the regular skb_segment.
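A minimal sketch of the geometry check (the exact upstream expression may differ): the fraglist fast path is only safe when the head skb still carries the UDP header plus exactly one gso_size of payload.

#include <linux/skbuff.h>
#include <linux/udp.h>

/* True only when the head skb still holds the UDP header plus exactly one
 * gso_size of data, i.e. the SKB_GSO_FRAGLIST invariants listed above are
 * intact. Skbs mangled by NAT or bpf_skb_pull_data() fail this test and
 * should be handed to skb_segment() instead of skb_segment_list(). */
static bool udp_fraglist_intact(const struct sk_buff *gso_skb)
{
	return skb_pagelen(gso_skb) - sizeof(struct udphdr) ==
	       skb_shinfo(gso_skb)->gso_size;
}

The TCP entry below applies the same idea, checking the head_skb against the TCP header length instead of the fixed-size UDP header.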
In the Linux kernel, the following vulnerability has been resolved:
net: gso: fix tcp fraglist segmentation after pull from frag_list
Detect tcp gso fraglist skbs with corrupted geometry (see below) and
pass these to skb_segment instead of skb_segment_list, as skb_segment
can segment them correctly.
Valid SKB_GSO_FRAGLIST skbs
- consist of two or more segments
- the head_skb holds the protocol headers plus the first gso_size of payload
- one or more frag_list skbs hold exactly one segment each
- all but the last must be exactly gso_size bytes
Optional datapath hooks such as NAT and BPF (bpf_skb_pull_data) can
modify these skbs, breaking these invariants.
In extreme cases they pull all data into skb linear. For TCP, this
causes a NULL ptr deref in __tcpv4_gso_segment_list_csum at
tcp_hdr(seg->next).
Detect geometry invalidated by such a pull by checking the head_skb size.
Don't simply drop these skbs, as that may blackhole a destination; instead,
convert them so they can be passed to the regular skb_segment.
Approach and description based on a patch by Willem de Bruijn.
In the Linux kernel, the following vulnerability has been resolved:
vrf: revert "vrf: Remove unnecessary RCU-bh critical section"
This reverts commit 504fc6f4f7f681d2a03aa5f68aad549d90eab853.
dev_queue_xmit_nit is expected to be called with BH disabled.
__dev_queue_xmit has the following:
/* Disable soft irqs for various locks below. Also
* stops preemption for RCU.
*/
rcu_read_lock_bh();
VRF must follow this invariant. The referenced commit removed this
protection, which triggered a lockdep warning:
================================
WARNING: inconsistent lock state
6.11.0 #1 Tainted: G W
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
btserver/134819 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffff8882da30c118 (rlock-AF_PACKET){+.?.}-{2:2}, at: tpacket_rcv+0x863/0x3b30
{IN-SOFTIRQ-W} state was registered at:
lock_acquire+0x19a/0x4f0
_raw_spin_lock+0x27/0x40
packet_rcv+0xa33/0x1320
__netif_receive_skb_core.constprop.0+0xcb0/0x3a90
__netif_receive_skb_list_core+0x2c9/0x890
netif_receive_skb_list_internal+0x610/0xcc0
[...]
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0
       ----
  lock(rlock-AF_PACKET);
  <Interrupt>
    lock(rlock-AF_PACKET);
*** DEADLOCK ***
Call Trace:
<TASK>
dump_stack_lvl+0x73/0xa0
mark_lock+0x102e/0x16b0
__lock_acquire+0x9ae/0x6170
lock_acquire+0x19a/0x4f0
_raw_spin_lock+0x27/0x40
tpacket_rcv+0x863/0x3b30
dev_queue_xmit_nit+0x709/0xa40
vrf_finish_direct+0x26e/0x340 [vrf]
vrf_l3_out+0x5f4/0xe80 [vrf]
__ip_local_out+0x51e/0x7a0
[...]
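Restoring the invariant is a matter of re-wrapping the tap delivery in a BH-disabled RCU section, roughly as below. This is a simplified sketch of what vrf_finish_direct() does around the call; the surrounding VRF code is omitted.

#include <linux/netdevice.h>
#include <linux/rcupdate.h>

/* Deliver the outgoing skb to AF_PACKET taps with BH disabled, matching the
 * rcu_read_lock_bh() section that __dev_queue_xmit() provides for every
 * other dev_queue_xmit_nit() caller. */
static void vrf_deliver_to_taps_sketch(struct sk_buff *skb,
				       struct net_device *vrf_dev)
{
	rcu_read_lock_bh();	/* disables BH; avoids the AF_PACKET rlock
				 * inconsistency shown in the report above */
	dev_queue_xmit_nit(skb, vrf_dev);
	rcu_read_unlock_bh();
}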
In the Linux kernel, the following vulnerability has been resolved:
media: venus: fix use after free bug in venus_remove due to race condition
In venus_probe(), core->work is bound to venus_sys_error_handler(), which is
used to handle errors. The code uses core->sys_err_done to synchronize this
work. The core->work is started in venus_event_notify().
If venus_remove() is called, there might be an unfinished work item. The
possible sequence is as follows:
CPU0                      CPU1
                          | venus_sys_error_handler
venus_remove              |
  hfi_destroy             |
    venus_hfi_destroy     |
      kfree(hdev);        |
                          |   hfi_reinit
                          |     venus_hfi_queues_reinit
                          |       // use hdev
Fix it by canceling the work in venus_remove.
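A minimal sketch of that cancellation, assuming core->work is the delayed work bound to venus_sys_error_handler() (the structure below is a hypothetical stand-in for struct venus_core):

#include <linux/workqueue.h>

/* Hypothetical stand-in for the driver core structure. */
struct venus_core_sketch {
	struct delayed_work work;	/* bound to venus_sys_error_handler */
};

/* Wait for any in-flight error-handler run before the teardown that frees
 * hdev, so the handler can never dereference freed memory during reinit. */
static void venus_remove_sketch(struct venus_core_sketch *core)
{
	cancel_delayed_work_sync(&core->work);
	/* hfi_destroy() and the rest of the teardown follow here */
}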
In the Linux kernel, the following vulnerability has been resolved:
aoe: fix the potential use-after-free problem in more places
To fix CVE-2023-6270, commit f98364e92662 ("aoe: fix the potential
use-after-free problem in aoecmd_cfg_pkts") made tx() call dev_put()
instead of doing so in aoecmd_cfg_pkts(). This avoids tx() running into
a use-after-free.
Nicolai Stange then found more places in aoe with a potential
use-after-free involving tx(): revalidate(), aoecmd_ata_rw(), resend(),
probe(), and aoecmd_cfg_rsp(). Those functions also use aoenet_xmit() to
push packets to the tx queue, so they should likewise call dev_hold() to
increase the refcount of skb->dev.
Otherwise, moving dev_put() into tx() causes the refcount of skb->dev to
drop to a negative value, because the corresponding dev_hold() calls are
missing in revalidate(), aoecmd_ata_rw(), resend(), probe(), and
aoecmd_cfg_rsp(). This patch fixes that issue.
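The pairing is easiest to see in a small sketch (the helper name is illustrative): every path that queues an skb for aoenet_xmit() takes a reference on skb->dev, and tx() drops it after transmission.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Take the reference that tx() will later release with dev_put(); without
 * this hold, moving dev_put() into tx() drives the refcount negative. */
static void aoe_queue_for_xmit_sketch(struct sk_buff_head *queue,
				      struct sk_buff *skb)
{
	dev_hold(skb->dev);		/* balanced by dev_put() in tx() */
	__skb_queue_tail(queue, skb);
}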
In the Linux kernel, the following vulnerability has been resolved:
ext4: drop ppath from ext4_ext_replay_update_ex() to avoid double-free
When calling ext4_force_split_extent_at() in ext4_ext_replay_update_ex(),
the 'ppath' is updated but it is the 'path' that is freed, thus potentially
triggering a double-free in the following process:
ext4_ext_replay_update_ex
  ppath = path
  ext4_force_split_extent_at(&ppath)
    ext4_split_extent_at
      ext4_ext_insert_extent
        ext4_ext_create_new_leaf
          ext4_ext_grow_indepth
            ext4_find_extent
              if (depth > path[0].p_maxdepth)
                kfree(path)               ---> path freed for the first time
                *orig_path = path = NULL  ---> ppath is now NULL
  kfree(path)                             ---> path double-free !!!
So drop the unnecessary ppath and use path directly to avoid this problem.
And use ext4_find_extent() directly to update path, avoiding unnecessary
memory allocation and freeing. Also, propagate the error returned by
ext4_find_extent() instead of using strange error codes.
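The underlying pattern is easy to reproduce outside ext4. Below is a small, self-contained sketch of the hazard and the fix in plain userspace C (not the ext4 code itself): when a callee may free the buffer through a by-reference parameter and NULL it, the caller must operate on its own pointer rather than a stale copy.

#include <stdlib.h>

/* Like ext4_find_extent() with *orig_path: may free the object and clear
 * the pointer it was given by reference. */
static void may_replace(char **pp)
{
	free(*pp);
	*pp = NULL;
}

static void buggy_pattern(void)
{
	char *path = malloc(32);
	char *ppath = path;	/* stale copy, like the old 'ppath' local */

	may_replace(&ppath);	/* frees path but only clears ppath */
	free(path);		/* double-free: path still points at freed memory */
}

static void fixed_pattern(void)
{
	char *path = malloc(32);

	may_replace(&path);	/* operate on 'path' directly, as the patch does */
	free(path);		/* safe: free(NULL) is a no-op */
}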