Security Vulnerabilities
- CVEs Published In June 2024
In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: Avoid field-overflowing memcpy()
In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memcpy(), memmove(), and memset(), avoid
intentionally writing across neighboring fields.
Use flexible arrays instead of zero-element arrays (which look like they
are always overflowing) and split the cross-field memcpy() into two halves
that can be appropriately bounds-checked by the compiler.
We were doing:
#define ETH_HLEN 14
#define VLAN_HLEN 4
...
#define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN)
...
struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
...
struct mlx5_wqe_eth_seg *eseg = &wqe->eth;
struct mlx5_wqe_data_seg *dseg = wqe->data;
...
memcpy(eseg->inline_hdr.start, xdptxd->data, MLX5E_XDP_MIN_INLINE);
target is wqe->eth.inline_hdr.start (which the compiler sees as being
2 bytes in size), but copying 18, intending to write across start
(really vlan_tci, 2 bytes). The remaining 16 bytes get written into
wqe->data[0], covering byte_count (4 bytes), lkey (4 bytes), and addr
(8 bytes).
struct mlx5e_tx_wqe {
        struct mlx5_wqe_ctrl_seg ctrl;          /*  0  16 */
        struct mlx5_wqe_eth_seg  eth;           /* 16  16 */
        struct mlx5_wqe_data_seg data[];        /* 32   0 */

        /* size: 32, cachelines: 1, members: 3 */
        /* last cacheline: 32 bytes */
};
struct mlx5_wqe_eth_seg {
        u8     swp_outer_l4_offset;             /*  0   1 */
        u8     swp_outer_l3_offset;             /*  1   1 */
        u8     swp_inner_l4_offset;             /*  2   1 */
        u8     swp_inner_l3_offset;             /*  3   1 */
        u8     cs_flags;                        /*  4   1 */
        u8     swp_flags;                       /*  5   1 */
        __be16 mss;                             /*  6   2 */
        __be32 flow_table_metadata;             /*  8   4 */
        union {
                struct {
                        __be16 sz;              /* 12   2 */
                        u8     start[2];        /* 14   2 */
                } inline_hdr;                   /* 12   4 */
                struct {
                        __be16 type;            /* 12   2 */
                        __be16 vlan_tci;        /* 14   2 */
                } insert;                       /* 12   4 */
                __be32 trailer;                 /* 12   4 */
        };                                      /* 12   4 */

        /* size: 16, cachelines: 1, members: 9 */
        /* last cacheline: 16 bytes */
};
struct mlx5_wqe_data_seg {
        __be32 byte_count;                      /*  0   4 */
        __be32 lkey;                            /*  4   4 */
        __be64 addr;                            /*  8   8 */

        /* size: 16, cachelines: 1, members: 3 */
        /* last cacheline: 16 bytes */
};
So, split the memcpy() so the compiler can reason about the buffer
sizes.
"pahole" shows no size nor member offset changes to struct mlx5e_tx_wqe
nor struct mlx5e_umr_wqe. "objdump -d" shows no meaningful object
code changes (i.e. only source line number induced differences and
optimizations).
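A minimal sketch of the split, assuming the struct layout shown above (the exact upstream diff may differ in surrounding details):

/* Copy the first sizeof(inline_hdr.start) bytes into the inline header,
 * then copy the remainder into the following data segment, so each
 * memcpy() stays within the bounds of the field the compiler can see.
 */
memcpy(eseg->inline_hdr.start, xdptxd->data,
       sizeof(eseg->inline_hdr.start));
memcpy(dseg, xdptxd->data + sizeof(eseg->inline_hdr.start),
       MLX5E_XDP_MIN_INLINE - sizeof(eseg->inline_hdr.start));
dseg++; /* the remaining 16 bytes exactly fill one mlx5_wqe_data_seg */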
In the Linux kernel, the following vulnerability has been resolved:
net/mlx5: Use del_timer_sync in fw reset flow of halting poll
Replace del_timer() with del_timer_sync() in the fw reset polling
deactivation flow, in order to prevent a race condition which occurs
when del_timer() deactivates the timer while another process is still
handling the timer interrupt. This situation led to the following
call trace:
RIP: 0010:run_timer_softirq+0x137/0x420
<IRQ>
recalibrate_cpu_khz+0x10/0x10
ktime_get+0x3e/0xa0
? sched_clock_cpu+0xb/0xc0
__do_softirq+0xf5/0x2ea
irq_exit_rcu+0xc1/0xf0
sysvec_apic_timer_interrupt+0x9e/0xc0
asm_sysvec_apic_timer_interrupt+0x12/0x20
</IRQ>
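The pattern is generic: del_timer() only removes a pending timer, while del_timer_sync() also waits for a callback already running on another CPU to finish. A minimal sketch, assuming an illustrative polling structure (the real mlx5 type and field names differ):

#include <linux/timer.h>

struct fw_reset_poll {          /* illustrative, not the mlx5 struct */
        struct timer_list timer;
};

static void fw_reset_poll_stop(struct fw_reset_poll *p)
{
        /* del_timer() can return while the timer callback is still
         * executing on another CPU, racing with teardown.
         * del_timer_sync() also waits for such a callback to complete,
         * closing the race described above. */
        del_timer_sync(&p->timer);
}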
In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: Fix handling of wrong devices during bond netevent
The current implementation of the bond netevent handler only checks
whether the handled netdev is a VF representor, and it is missing a
check that the VF representor is on the same physical device as the
bond handling the netevent.
Fix this by adding the missing check and by reworking the
VF-representor check so that it does not access uninitialized
private data and crash.
BUG: kernel NULL pointer dereference, address: 000000000000036c
PGD 0 P4D 0
Oops: 0000 [#1] SMP NOPTI
Workqueue: eth3bond0 bond_mii_monitor [bonding]
RIP: 0010:mlx5e_is_uplink_rep+0xc/0x50 [mlx5_core]
RSP: 0018:ffff88812d69fd60 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff8881cf800000 RCX: 0000000000000000
RDX: ffff88812d69fe10 RSI: 000000000000001b RDI: ffff8881cf800880
RBP: ffff8881cf800000 R08: 00000445cabccf2b R09: 0000000000000008
R10: 0000000000000004 R11: 0000000000000008 R12: ffff88812d69fe10
R13: 00000000fffffffe R14: ffff88820c0f9000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff88846fb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000000036c CR3: 0000000103d80006 CR4: 0000000000370ea0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
mlx5e_eswitch_uplink_rep+0x31/0x40 [mlx5_core]
mlx5e_rep_is_lag_netdev+0x94/0xc0 [mlx5_core]
mlx5e_rep_esw_bond_netevent+0xeb/0x3d0 [mlx5_core]
raw_notifier_call_chain+0x41/0x60
call_netdevice_notifiers_info+0x34/0x80
netdev_lower_state_changed+0x4e/0xa0
bond_mii_monitor+0x56b/0x640 [bonding]
process_one_work+0x1b9/0x390
worker_thread+0x4d/0x3d0
? rescuer_thread+0x350/0x350
kthread+0x124/0x150
? set_kthread_struct+0x40/0x40
ret_from_fork+0x1f/0x30
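A hedged sketch of the intended logic; is_vf_representor(), on_same_phys_device(), and struct bond_arg are hypothetical placeholders for the driver's own checks, not real mlx5 helpers:

static bool bond_netevent_cares(struct net_device *netdev,
                                struct bond_arg *arg)
{
        /* Check the netdev type first: only VF representors have the
         * private data the rest of the handler wants to touch. */
        if (!is_vf_representor(netdev))
                return false;

        /* Missing check added by the fix: ignore representors that sit
         * on a different physical device than the bond being handled. */
        return on_same_phys_device(netdev, arg->phys_dev);
}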
In the Linux kernel, the following vulnerability has been resolved:
block: Fix wrong offset in bio_truncate()
bio_truncate() zeroes the buffer beyond the last block of the bdev,
but it currently uses the wrong offset within the page, so it can
return uninitialized data.
This happens when both a truncated/corrupted filesystem and userspace
(via the bdev) try to read the last block of the bdev.
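A simplified sketch of the fixed zeroing loop, as I read the description; bio and new_size come from bio_truncate()'s context, and the real function has more around it:

struct bio_vec bv;
struct bvec_iter iter;
unsigned int done = 0;
bool truncated = false;

bio_for_each_segment(bv, bio, iter) {
        if (done + bv.bv_len > new_size) {
                unsigned int offset = truncated ? 0 : new_size - done;

                /* Zero from the bvec's offset inside the page plus the
                 * offset inside this segment; using `offset` alone
                 * points at the wrong bytes of the page. */
                zero_user(bv.bv_page, bv.bv_offset + offset,
                          bv.bv_len - offset);
                truncated = true;
        }
        done += bv.bv_len;
}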
In the Linux kernel, the following vulnerability has been resolved:
RDMA/ucma: Protect mc during concurrent multicast leaves
Partially revert the commit mentioned in the Fixes line to make sure
that allocation and erasure of the multicast struct are done under the
lock.
BUG: KASAN: use-after-free in ucma_cleanup_multicast drivers/infiniband/core/ucma.c:491 [inline]
BUG: KASAN: use-after-free in ucma_destroy_private_ctx+0x914/0xb70 drivers/infiniband/core/ucma.c:579
Read of size 8 at addr ffff88801bb74b00 by task syz-executor.1/25529
CPU: 0 PID: 25529 Comm: syz-executor.1 Not tainted 5.16.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_address_description.constprop.0.cold+0x8d/0x320 mm/kasan/report.c:247
__kasan_report mm/kasan/report.c:433 [inline]
kasan_report.cold+0x83/0xdf mm/kasan/report.c:450
ucma_cleanup_multicast drivers/infiniband/core/ucma.c:491 [inline]
ucma_destroy_private_ctx+0x914/0xb70 drivers/infiniband/core/ucma.c:579
ucma_destroy_id+0x1e6/0x280 drivers/infiniband/core/ucma.c:614
ucma_write+0x25c/0x350 drivers/infiniband/core/ucma.c:1732
vfs_write+0x28e/0xae0 fs/read_write.c:588
ksys_write+0x1ee/0x250 fs/read_write.c:643
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
Currently the xarray search can touch a concurrently freeing mc because
the xa_for_each() is not surrounded by any lock. Rather than hold the
lock for a full scan, hold it only for the affected items, which is
usually an empty list.
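A hedged sketch of the resulting cleanup shape, assuming the per-context mc_list restored by the partial revert; details may differ from the exact upstream code:

static void ucma_cleanup_multicast(struct ucma_context *ctx)
{
        struct ucma_multicast *mc, *tmp;

        xa_lock(&multicast_table);
        list_for_each_entry_safe(mc, tmp, &ctx->mc_list, list) {
                list_del(&mc->list);
                /* Erase and free under the xarray lock so a concurrent
                 * multicast leave cannot free mc while we look at it. */
                __xa_erase(&multicast_table, mc->id);
                kfree(mc);
        }
        xa_unlock(&multicast_table);
}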
In the Linux kernel, the following vulnerability has been resolved:
KVM: arm64: Avoid consuming a stale esr value when SError occur
When any exception other than an IRQ occurs, the CPU updates the ESR_EL2
register with the exception syndrome. An SError may also become pending,
and will be synchronised by KVM. KVM notes the exception type, and whether
an SError was synchronised in exit_code.
When an exception other than an IRQ occurs, fixup_guest_exit() updates
vcpu->arch.fault.esr_el2 from the hardware register. When an SError was
synchronised, the vcpu esr value is used to determine if the exception
was due to an HVC. If so, ELR_EL2 is moved back one instruction. This
is so that KVM can process the SError first, and re-execute the HVC if
the guest survives the SError.
But if an IRQ synchronises an SError, the vcpu's esr value is stale.
If the previous non-IRQ exception was an HVC, KVM will corrupt ELR_EL2,
causing an unrelated guest instruction to be executed twice.
Check ARM_EXCEPTION_CODE() before messing with ELR_EL2; IRQs don't
update this register, so they don't need the check.
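A hedged sketch of the check, following the description of fixup_guest_exit(); the surrounding context is abbreviated:

if (ARM_SERROR_PENDING(*exit_code) &&
    ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) {
        u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);

        /* Only rewind the PC for a genuine HVC exit. On an IRQ exit,
         * esr_el2 was not refreshed, so acting on its stale value here
         * could make the guest re-execute an unrelated instruction. */
        if (esr_ec == ESR_ELx_EC_HVC32 || esr_ec == ESR_ELx_EC_HVC64)
                write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
}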
In the Linux kernel, the following vulnerability has been resolved:
IB/hfi1: Fix AIP early init panic
An early failure in hfi1_ipoib_setup_rn() can lead to the following panic:
BUG: unable to handle kernel NULL pointer dereference at 00000000000001b0
PGD 0 P4D 0
Oops: 0002 [#1] SMP NOPTI
Workqueue: events work_for_cpu_fn
RIP: 0010:try_to_grab_pending+0x2b/0x140
Code: 1f 44 00 00 41 55 41 54 55 48 89 d5 53 48 89 fb 9c 58 0f 1f 44 00 00 48 89 c2 fa 66 0f 1f 44 00 00 48 89 55 00 40 84 f6 75 77 <f0> 48 0f ba 2b 00 72 09 31 c0 5b 5d 41 5c 41 5d c3 48 89 df e8 6c
RSP: 0018:ffffb6b3cf7cfa48 EFLAGS: 00010046
RAX: 0000000000000246 RBX: 00000000000001b0 RCX: 0000000000000000
RDX: 0000000000000246 RSI: 0000000000000000 RDI: 00000000000001b0
RBP: ffffb6b3cf7cfa70 R08: 0000000000000f09 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000
R13: ffffb6b3cf7cfa90 R14: ffffffff9b2fbfc0 R15: ffff8a4fdf244690
FS: 0000000000000000(0000) GS:ffff8a527f400000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000000001b0 CR3: 00000017e2410003 CR4: 00000000007706f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
__cancel_work_timer+0x42/0x190
? dev_printk_emit+0x4e/0x70
iowait_cancel_work+0x15/0x30 [hfi1]
hfi1_ipoib_txreq_deinit+0x5a/0x220 [hfi1]
? dev_err+0x6c/0x90
hfi1_ipoib_netdev_dtor+0x15/0x30 [hfi1]
hfi1_ipoib_setup_rn+0x10e/0x150 [hfi1]
rdma_init_netdev+0x5a/0x80 [ib_core]
? hfi1_ipoib_free_rdma_netdev+0x20/0x20 [hfi1]
ipoib_intf_init+0x6c/0x350 [ib_ipoib]
ipoib_intf_alloc+0x5c/0xc0 [ib_ipoib]
ipoib_add_one+0xbe/0x300 [ib_ipoib]
add_client_context+0x12c/0x1a0 [ib_core]
enable_device_and_get+0xdc/0x1d0 [ib_core]
ib_register_device+0x572/0x6b0 [ib_core]
rvt_register_device+0x11b/0x220 [rdmavt]
hfi1_register_ib_device+0x6b4/0x770 [hfi1]
do_init_one.isra.20+0x3e3/0x680 [hfi1]
local_pci_probe+0x41/0x90
work_for_cpu_fn+0x16/0x20
process_one_work+0x1a7/0x360
? create_worker+0x1a0/0x1a0
worker_thread+0x1cf/0x390
? create_worker+0x1a0/0x1a0
kthread+0x116/0x130
? kthread_flush_work_fn+0x10/0x10
ret_from_fork+0x1f/0x40
The panic happens in hfi1_ipoib_txreq_deinit() because there is a NULL
deref when hfi1_ipoib_netdev_dtor() is called in this error case.
hfi1_ipoib_txreq_init() and hfi1_ipoib_rxq_init() are self-unwinding,
so fix this by adjusting the error paths accordingly.
Other changes:
- hfi1_ipoib_free_rdma_netdev() is deleted, including its free_netdev(),
since the netdev core code already calls free_netdev()
- The switch to the accelerated entrances is moved to the success path.
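A simplified sketch of the resulting error unwinding, assuming the init/deinit helpers named above; the function name, parameters, and labels here are illustrative, not the exact hfi1 code:

static int hfi1_ipoib_setup_sketch(struct net_device *netdev,
                                   struct hfi1_ipoib_dev_priv *priv)
{
        int ret;

        ret = hfi1_ipoib_txreq_init(priv);
        if (ret)
                return ret;        /* self-unwinding: nothing else to undo */

        ret = hfi1_ipoib_rxq_init(netdev);
        if (ret)
                goto deinit_txreq; /* undo only the step that completed */

        /* The switch to the accelerated entrances (and the netdev
         * destructor setup) happens only here, on the success path, so a
         * half-initialized netdev is never torn down through them. */
        return 0;

deinit_txreq:
        hfi1_ipoib_txreq_deinit(priv);
        return ret;
}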
In the Linux kernel, the following vulnerability has been resolved:
IB/hfi1: Fix panic with larger ipoib send_queue_size
When the ipoib send_queue_size is increased from the default the following
panic happens:
RIP: 0010:hfi1_ipoib_drain_tx_ring+0x45/0xf0 [hfi1]
Code: 31 e4 eb 0f 8b 85 c8 02 00 00 41 83 c4 01 44 39 e0 76 60 8b 8d cc 02 00 00 44 89 e3 be 01 00 00 00 d3 e3 48 03 9d c0 02 00 00 <c7> 83 18 01 00 00 00 00 00 00 48 8b bb 30 01 00 00 e8 25 af a7 e0
RSP: 0018:ffffc9000798f4a0 EFLAGS: 00010286
RAX: 0000000000008000 RBX: ffffc9000aa0f000 RCX: 000000000000000f
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffff88810ff08000 R08: ffff88889476d900 R09: 0000000000000101
R10: 0000000000000000 R11: ffffc90006590ff8 R12: 0000000000000200
R13: ffffc9000798fba8 R14: 0000000000000000 R15: 0000000000000001
FS: 00007fd0f79cc3c0(0000) GS:ffff88885fb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffc9000aa0f118 CR3: 0000000889c84001 CR4: 00000000001706e0
Call Trace:
<TASK>
hfi1_ipoib_napi_tx_disable+0x45/0x60 [hfi1]
hfi1_ipoib_dev_stop+0x18/0x80 [hfi1]
ipoib_ib_dev_stop+0x1d/0x40 [ib_ipoib]
ipoib_stop+0x48/0xc0 [ib_ipoib]
__dev_close_many+0x9e/0x110
__dev_change_flags+0xd9/0x210
dev_change_flags+0x21/0x60
do_setlink+0x31c/0x10f0
? __nla_validate_parse+0x12d/0x1a0
? __nla_parse+0x21/0x30
? inet6_validate_link_af+0x5e/0xf0
? cpumask_next+0x1f/0x20
? __snmp6_fill_stats64.isra.53+0xbb/0x140
? __nla_validate_parse+0x47/0x1a0
__rtnl_newlink+0x530/0x910
? pskb_expand_head+0x73/0x300
? __kmalloc_node_track_caller+0x109/0x280
? __nla_put+0xc/0x20
? cpumask_next_and+0x20/0x30
? update_sd_lb_stats.constprop.144+0xd3/0x820
? _raw_spin_unlock_irqrestore+0x25/0x37
? __wake_up_common_lock+0x87/0xc0
? kmem_cache_alloc_trace+0x3d/0x3d0
rtnl_newlink+0x43/0x60
The issue happens because the shift, which should have been a function
of the txq item size, mistakenly used the ring size.
Fix by using the item size.
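In other words, ring entries are addressed as items + (index << shift), so the shift must be log2 of the per-item size; deriving it from the ring size indexes far past the allocation once send_queue_size grows. A hedged sketch (field and helper names only approximate the hfi1 ring code):

static int txq_ring_alloc_sketch(struct hfi1_ipoib_circ_buf *r,
                                 u32 ring_size, u32 item_size)
{
        r->items = kvzalloc(array_size(ring_size, item_size), GFP_KERNEL);
        if (!r->items)
                return -ENOMEM;
        r->max_items = ring_size;
        /* The stride shift must come from the per-item size, because
         * lookups compute items + (index << shift); using the ring size
         * here walks far past the allocation. */
        r->shift = ilog2(item_size);   /* was mistakenly ilog2(ring_size) */
        return 0;
}

static struct ipoib_txreq *txreq_from_idx(struct hfi1_ipoib_circ_buf *r,
                                          u32 idx)
{
        return (struct ipoib_txreq *)(r->items + (idx << r->shift));
}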
In the Linux kernel, the following vulnerability has been resolved:
dma-buf: heaps: Fix potential spectre v1 gadget
It appears that nr could be a Spectre v1 gadget, as it is supplied by a
user and used as an array index. Prevent the contents of kernel memory
from being leaked to userspace via speculative execution by using
array_index_nospec().
[sumits: added fixes and cc: stable tags]
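A minimal sketch of the mitigation named above: bound-check the user-supplied index, then clamp it under speculation before using it. The ioctl-table name is illustrative of the dma-heap dispatch code, not necessarily the exact identifier:

/* array_index_nospec() comes from <linux/nospec.h>. */
nr = _IOC_NR(ucmd);
if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
        return -EINVAL;
/* Clamp nr so a mispredicted bounds check cannot be used to
 * speculatively index beyond the table with a user-controlled value. */
nr = array_index_nospec(nr, ARRAY_SIZE(dma_heap_ioctl_cmds));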
In the Linux kernel, the following vulnerability has been resolved:
mm/kmemleak: avoid scanning potential huge holes
When using devm_request_free_mem_region() and devm_memremap_pages() to
add ZONE_DEVICE memory, if the requested free mem region's end pfn is
huge (e.g., 0x400000000), node_end_pfn() will also be huge (see
move_pfn_range_to_zone()). This creates a huge hole between
node_start_pfn() and node_end_pfn().
We found that on some AMD APUs, amdkfd requested such a free mem region
and created a huge hole. In that case, the following code snippet was
just busy-looping on test_bit() over the huge hole.
for (pfn = start_pfn; pfn < end_pfn; pfn++) {
        struct page *page = pfn_to_online_page(pfn);

        if (!page)
                continue;
        ...
}
So we got a soft lockup:
watchdog: BUG: soft lockup - CPU#6 stuck for 26s! [bash:1221]
CPU: 6 PID: 1221 Comm: bash Not tainted 5.15.0-custom #1
RIP: 0010:pfn_to_online_page+0x5/0xd0
Call Trace:
? kmemleak_scan+0x16a/0x440
kmemleak_write+0x306/0x3a0
? common_file_perm+0x72/0x170
full_proxy_write+0x5c/0x90
vfs_write+0xb9/0x260
ksys_write+0x67/0xe0
__x64_sys_write+0x1a/0x20
do_syscall_64+0x3b/0xc0
entry_SYSCALL_64_after_hwframe+0x44/0xae
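One way to avoid walking the hole, consistent with the patch title, is to scan per populated zone instead of the whole node_start_pfn()..node_end_pfn() span; a hedged sketch, which may differ from the actual patch in its details:

struct zone *zone;
unsigned long pfn;

for_each_populated_zone(zone) {
        unsigned long start_pfn = zone->zone_start_pfn;
        unsigned long end_pfn = zone_end_pfn(zone);

        for (pfn = start_pfn; pfn < end_pfn; pfn++) {
                struct page *page = pfn_to_online_page(pfn);

                if (!page)
                        continue;
                /* Only scan pages that actually belong to this zone,
                 * then hand them to the existing scan logic. */
                if (page_zone(page) != zone)
                        continue;
                /* scan the page as before */
        }
}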
I did some tests with the patch.
(1) amdgpu module unloaded
before the patch:
real 0m0.976s
user 0m0.000s
sys 0m0.968s
after the patch:
real 0m0.981s
user 0m0.000s
sys 0m0.973s
(2) amdgpu module loaded
before the patch:
real 0m35.365s
user 0m0.000s
sys 0m35.354s
after the patch:
real 0m1.049s
user 0m0.000s
sys 0m1.042s