In the Linux kernel, the following vulnerability has been resolved:
s390/ap: Fix crash in AP internal function modify_bitmap()
A system crash like this
Failing address: 200000cb7df6f000 TEID: 200000cb7df6f403
Fault in home space mode while using kernel ASCE.
AS:00000002d71bc007 R3:00000003fe5b8007 S:000000011a446000 P:000000015660c13d
Oops: 0038 ilc:3 [#1] PREEMPT SMP
Modules linked in: mlx5_ib ...
CPU: 8 PID: 7556 Comm: bash Not tainted 6.9.0-rc7 #8
Hardware name: IBM 3931 A01 704 (LPAR)
Krnl PSW : 0704e00180000000 0000014b75e7b606 (ap_parse_bitmap_str+0x10e/0x1f8)
R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
Krnl GPRS: 0000000000000001 ffffffffffffffc0 0000000000000001 00000048f96b75d3
000000cb00000100 ffffffffffffffff ffffffffffffffff 000000cb7df6fce0
000000cb7df6fce0 00000000ffffffff 000000000000002b 00000048ffffffff
000003ff9b2dbc80 200000cb7df6fcd8 0000014bffffffc0 000000cb7df6fbc8
Krnl Code: 0000014b75e7b5fc: a7840047 brc 8,0000014b75e7b68a
0000014b75e7b600: 18b2 lr %r11,%r2
#0000014b75e7b602: a7f4000a brc 15,0000014b75e7b616
>0000014b75e7b606: eb22d00000e6 laog %r2,%r2,0(%r13)
0000014b75e7b60c: a7680001 lhi %r6,1
0000014b75e7b610: 187b lr %r7,%r11
0000014b75e7b612: 84960021 brxh %r9,%r6,0000014b75e7b654
0000014b75e7b616: 18e9 lr %r14,%r9
Call Trace:
[<0000014b75e7b606>] ap_parse_bitmap_str+0x10e/0x1f8
([<0000014b75e7b5dc>] ap_parse_bitmap_str+0xe4/0x1f8)
[<0000014b75e7b758>] apmask_store+0x68/0x140
[<0000014b75679196>] kernfs_fop_write_iter+0x14e/0x1e8
[<0000014b75598524>] vfs_write+0x1b4/0x448
[<0000014b7559894c>] ksys_write+0x74/0x100
[<0000014b7618a440>] __do_syscall+0x268/0x328
[<0000014b761a3558>] system_call+0x70/0x98
INFO: lockdep is turned off.
Last Breaking-Event-Address:
[<0000014b75e7b636>] ap_parse_bitmap_str+0x13e/0x1f8
Kernel panic - not syncing: Fatal exception: panic_on_oops
occurred when /sys/bus/ap/a[pq]mask was updated with a relative mask value
(like +0x10-0x12,+60,-90) with one of the numeric values exceeding INT_MAX.
The fix is simple: use unsigned long values for the internal variables. The
correct checks are already in place in the function, but the internal
variables were plain ints, which could overflow.
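As a hedged illustration of the bug class (identifiers are made up, not the
actual ap_bus.c code): simple_strtoul() returns an unsigned long, so storing
the result in an int truncates values above INT_MAX, the range check then
passes on the truncated value, and the bit operation walks outside the
bitmap.
  #include <linux/bitops.h>
  #include <linux/kernel.h>

  static int apply_relative_token(const char *str, unsigned long *bitmap,
                                  unsigned long bits)
  {
          unsigned long i;        /* the fix: unsigned long, not int */
          char *np;

          i = simple_strtoul(str + 1, &np, 0);    /* e.g. "+0x10" */
          if (np == str + 1 || i >= bits)
                  return -EINVAL; /* check only holds if i cannot wrap */
          if (*str == '+')
                  set_bit(i, bitmap);
          else
                  clear_bit(i, bitmap);
          return 0;
  }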
In the Linux kernel, the following vulnerability has been resolved:
media: lgdt3306a: Add a check against null-pointer-deref
The driver should check whether the client provides platform_data.
The following KASAN log reveals the issue:
[ 29.610324] BUG: KASAN: null-ptr-deref in kmemdup+0x30/0x40
[ 29.610730] Read of size 40 at addr 0000000000000000 by task bash/414
[ 29.612820] Call Trace:
[ 29.613030] <TASK>
[ 29.613201] dump_stack_lvl+0x56/0x6f
[ 29.613496] ? kmemdup+0x30/0x40
[ 29.613754] print_report.cold+0x494/0x6b7
[ 29.614082] ? kmemdup+0x30/0x40
[ 29.614340] kasan_report+0x8a/0x190
[ 29.614628] ? kmemdup+0x30/0x40
[ 29.614888] kasan_check_range+0x14d/0x1d0
[ 29.615213] memcpy+0x20/0x60
[ 29.615454] kmemdup+0x30/0x40
[ 29.615700] lgdt3306a_probe+0x52/0x310
[ 29.616339] i2c_device_probe+0x951/0xa90
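A sketch of the kind of guard the fix adds at the top of lgdt3306a_probe(),
before the config is kmemdup()'d; the error message and exact flow are
assumptions, not the literal patch:
  static int lgdt3306a_probe(struct i2c_client *client,
                             const struct i2c_device_id *id)
  {
          struct lgdt3306a_config *config;

          /* bail out early instead of kmemdup()'ing a NULL pointer */
          if (!client->dev.platform_data) {
                  dev_err(&client->dev, "platform data is mandatory\n");
                  return -EINVAL;
          }

          config = kmemdup(client->dev.platform_data,
                           sizeof(struct lgdt3306a_config), GFP_KERNEL);
          if (!config)
                  return -ENOMEM;
          /* ... rest of probe elided ... */
          return 0;
  }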
In the Linux kernel, the following vulnerability has been resolved:
tracing/probes: fix error check in parse_btf_field()
btf_find_struct_member() might return NULL or an error via the
ERR_PTR() macro. However, its caller in parse_btf_field() only checks
for the NULL condition. Fix this by using IS_ERR() and returning the
error up the stack.
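A sketch of the corrected check (simplified; the surrounding
parse_btf_field() context and the exact argument list of
btf_find_struct_member() are elided/assumed):
  member = btf_find_struct_member(btf, type, fieldname, &anon_offs);
  if (IS_ERR(member))
          return PTR_ERR(member); /* propagate the encoded error */
  if (!member)
          return -ENOENT;         /* field genuinely not found */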
In the Linux kernel, the following vulnerability has been resolved:
dma-buf/sw-sync: don't enable IRQ from sync_print_obj()
Commit a6aa8fca4d79 ("dma-buf/sw-sync: Reduce irqsave/irqrestore from
known context") erroneously replaced spin_unlock_irqrestore() with
spin_unlock_irq() in both sync_debugfs_show() and sync_print_obj(). Since
sync_print_obj() is called from sync_debugfs_show(), lockdep complains
with an inconsistent lock state warning.
Use plain spin_{lock,unlock}() in sync_print_obj(), since
sync_debugfs_show() already uses spin_{lock,unlock}_irq().
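A sketch of the resulting locking shape (simplified from
drivers/dma-buf/sync_debug.c; list iteration and printing elided):
  static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
  {
          spin_lock(&obj->lock);          /* IRQs already disabled ... */
          /* ... dump the timeline ... */
          spin_unlock(&obj->lock);        /* ... so don't re-enable them */
  }

  static int sync_debugfs_show(struct seq_file *s, void *unused)
  {
          spin_lock_irq(&sync_timeline_list_lock);
          /* for each obj on sync_timeline_list_head:
           *         sync_print_obj(s, obj);
           */
          spin_unlock_irq(&sync_timeline_list_lock);
          return 0;
  }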
In the Linux kernel, the following vulnerability has been resolved:
SUNRPC: Fix loop termination condition in gss_free_in_token_pages()
The in_token->pages[] array is not NULL terminated. This results in
the following KASAN splat:
KASAN: maybe wild-memory-access in range [0x04a2013400000008-0x04a201340000000f]
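A hedged sketch of the shape of the fix: bound the loop by a page count
derived from the token length instead of scanning for a NULL sentinel that
the array does not have (field names follow the description above; details
are assumptions):
  static void gss_free_in_token_pages(struct gssp_in_token *in_token)
  {
          size_t i, npages;

          npages = DIV_ROUND_UP(in_token->page_len, PAGE_SIZE);
          for (i = 0; i < npages; i++)    /* bounded: no NULL sentinel */
                  if (in_token->pages[i])
                          put_page(in_token->pages[i]);
          kfree(in_token->pages);
          in_token->pages = NULL;
  }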
In the Linux kernel, the following vulnerability has been resolved:
soundwire: cadence: fix invalid PDI offset
For some reason, we add an offset to the PDI, presumably to skip PDI0 and
PDI1, which are reserved for BPT.
This code is however completely wrong and leads to an out-of-bounds
access. We were just lucky so far since we used only a couple of PDIs
and remained within the PDI array bounds.
A Fixes: tag is not provided since there are no known platforms where the
out-of-bounds access would occur, and the initial code had problems as
well.
A follow-up patch completely removes this useless offset.
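As a hedged illustration of the bug class (the helper below is invented,
not the cadence driver's actual code): adding a fixed offset to an index
into the PDI array can step past its end, so the index must be validated
against the array size rather than trusting the offset arithmetic:
  struct sdw_cdns_pdi *cdns_pdi_at(struct sdw_cdns_pdi *pdis,
                                   int num_pdis, int idx)
  {
          if (idx < 0 || idx >= num_pdis) /* was: &pdis[idx + offset] */
                  return NULL;
          return &pdis[idx];
  }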
In the Linux kernel, the following vulnerability has been resolved:
USB: core: Fix hang in usb_kill_urb by adding memory barriers
The syzbot fuzzer has identified a bug in which processes hang waiting
for usb_kill_urb() to return. It turns out the issue is not unlinking
the URB; that works just fine. Rather, the problem arises when the
wakeup notification that the URB has completed is not received.
The reason is memory-access ordering on SMP systems. In outline form,
usb_kill_urb() and __usb_hcd_giveback_urb() operating concurrently on
different CPUs perform the following actions:
CPU 0                                     CPU 1
----------------------------              ---------------------------------
usb_kill_urb():                           __usb_hcd_giveback_urb():
  ...                                       ...
  atomic_inc(&urb->reject);                 atomic_dec(&urb->use_count);
  ...                                       ...
  wait_event(usb_kill_urb_queue,            if (atomic_read(&urb->reject))
    atomic_read(&urb->use_count) == 0);       wake_up(&usb_kill_urb_queue);
Confining your attention to urb->reject and urb->use_count, you can
see that the overall pattern of accesses on CPU 0 is:
write urb->reject, then read urb->use_count;
whereas the overall pattern of accesses on CPU 1 is:
write urb->use_count, then read urb->reject.
This pattern is referred to in memory-model circles as SB (for "Store
Buffering"), and it is well known that without suitable enforcement of
the desired order of accesses -- in the form of memory barriers -- it
is entirely possible for one or both CPUs to execute their reads ahead
of their writes. The end result will be that sometimes CPU 0 sees the
old un-decremented value of urb->use_count while CPU 1 sees the old
un-incremented value of urb->reject. Consequently CPU 0 ends up on
the wait queue and never gets woken up, leading to the observed hang
in usb_kill_urb().
The same pattern of accesses occurs in usb_poison_urb() and the
failure pathway of usb_hcd_submit_urb().
The problem is fixed by adding suitable memory barriers. To provide
proper memory-access ordering in the SB pattern, a full barrier is
required on both CPUs. The atomic_inc() and atomic_dec() accesses
themselves don't provide any memory ordering, but since they are
present, we can use the optimized smp_mb__after_atomic() memory
barrier in the various routines to obtain the desired effect.
This patch adds the necessary memory barriers.
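A sketch of the barrier placement described above (simplified; surrounding
code elided):
  /* CPU 0: usb_kill_urb() */
  atomic_inc(&urb->reject);
  smp_mb__after_atomic();         /* order the inc before the read */
  wait_event(usb_kill_urb_queue,
             atomic_read(&urb->use_count) == 0);

  /* CPU 1: __usb_hcd_giveback_urb() */
  atomic_dec(&urb->use_count);
  smp_mb__after_atomic();         /* order the dec before the read */
  if (unlikely(atomic_read(&urb->reject)))
          wake_up(&usb_kill_urb_queue);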
In the Linux kernel, the following vulnerability has been resolved:
KVM: x86: Forcibly leave nested virt when SMM state is toggled
Forcibly leave nested virtualization operation if userspace toggles SMM
state via KVM_SET_VCPU_EVENTS or KVM_SYNC_X86_EVENTS. If userspace
forces the vCPU out of SMM while it's post-VMXON and then injects an SMI,
vmx_enter_smm() will overwrite vmx->nested.smm.vmxon and end up with both
vmxon=false and smm.vmxon=false, but all other nVMX state allocated.
Don't attempt to gracefully handle the transition as (a) most transitions
are nonsensical, e.g. forcing SMM while L2 is running, (b) there isn't
sufficient information to handle all transitions, e.g. SVM wants access
to the SMRAM save state, and (c) KVM_SET_VCPU_EVENTS must precede
KVM_SET_NESTED_STATE during state restore as the latter disallows putting
the vCPU into L2 if SMM is active, and disallows tagging the vCPU as
being post-VMXON in SMM if SMM is not active.
Abuse of KVM_SET_VCPU_EVENTS manifests as a WARN and memory leak in nVMX
due to failure to free vmcs01's shadow VMCS, but the bug goes far beyond
just a memory leak, e.g. toggling SMM on while L2 is active puts the vCPU
in an architecturally impossible state.
WARNING: CPU: 0 PID: 3606 at free_loaded_vmcs arch/x86/kvm/vmx/vmx.c:2665 [inline]
WARNING: CPU: 0 PID: 3606 at free_loaded_vmcs+0x158/0x1a0 arch/x86/kvm/vmx/vmx.c:2656
Modules linked in:
CPU: 1 PID: 3606 Comm: syz-executor725 Not tainted 5.17.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:free_loaded_vmcs arch/x86/kvm/vmx/vmx.c:2665 [inline]
RIP: 0010:free_loaded_vmcs+0x158/0x1a0 arch/x86/kvm/vmx/vmx.c:2656
Code: <0f> 0b eb b3 e8 8f 4d 9f 00 e9 f7 fe ff ff 48 89 df e8 92 4d 9f 00
Call Trace:
<TASK>
kvm_arch_vcpu_destroy+0x72/0x2f0 arch/x86/kvm/x86.c:11123
kvm_vcpu_destroy arch/x86/kvm/../../../virt/kvm/kvm_main.c:441 [inline]
kvm_destroy_vcpus+0x11f/0x290 arch/x86/kvm/../../../virt/kvm/kvm_main.c:460
kvm_free_vcpus arch/x86/kvm/x86.c:11564 [inline]
kvm_arch_destroy_vm+0x2e8/0x470 arch/x86/kvm/x86.c:11676
kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:1217 [inline]
kvm_put_kvm+0x4fa/0xb00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:1250
kvm_vm_release+0x3f/0x50 arch/x86/kvm/../../../virt/kvm/kvm_main.c:1273
__fput+0x286/0x9f0 fs/file_table.c:311
task_work_run+0xdd/0x1a0 kernel/task_work.c:164
exit_task_work include/linux/task_work.h:32 [inline]
do_exit+0xb29/0x2a30 kernel/exit.c:806
do_group_exit+0xd2/0x2f0 kernel/exit.c:935
get_signal+0x4b0/0x28c0 kernel/signal.c:2862
arch_do_signal_or_restart+0x2a9/0x1c40 arch/x86/kernel/signal.c:868
handle_signal_work kernel/entry/common.c:148 [inline]
exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
exit_to_user_mode_prepare+0x17d/0x290 kernel/entry/common.c:207
__syscall_exit_to_user_mode_work kernel/entry/common.c:289 [inline]
syscall_exit_to_user_mode+0x19/0x60 kernel/entry/common.c:300
do_syscall_64+0x42/0xb0 arch/x86/entry/common.c:86
entry_SYSCALL_64_after_hwframe+0x44/0xae
</TASK>
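A hedged sketch of the approach, inside
kvm_vcpu_ioctl_x86_set_vcpu_events(); helper names follow the description
above but are assumptions, not the literal patch:
  if (events->flags & KVM_VCPUEVENT_VALID_SMM) {
          if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) {
                  /* forcibly drop all nested (nVMX/nSVM) state rather
                   * than attempting a graceful transition
                   */
                  kvm_x86_ops.nested_ops->leave_nested(vcpu);
                  kvm_smm_changed(vcpu, events->smi.smm);
          }
          /* ... remaining SMM event handling elided ... */
  }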
In the Linux kernel, the following vulnerability has been resolved:
drm/amd/display: Wrap dcn301_calculate_wm_and_dlg for FPU.
Mirrors the logic for dcn30. Cue lots of WARNs and some
kernel panics without this fix.
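A sketch of the wrapping, mirroring the dcn30 pattern (parameter list
simplified; assumes the FP-heavy core is split out into a *_fp helper):
  void dcn301_calculate_wm_and_dlg(struct dc *dc, struct dc_state *context,
                                   display_e2e_pipe_params_st *pipes,
                                   int pipe_cnt, int vlevel)
  {
          DC_FP_START();          /* kernel_fpu_begin() under the hood */
          dcn301_calculate_wm_and_dlg_fp(dc, context, pipes,
                                         pipe_cnt, vlevel);
          DC_FP_END();            /* kernel_fpu_end() */
  }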