In the Linux kernel, the following vulnerability has been resolved:
media: ttpci: fix two memleaks in budget_av_attach
When saa7146_register_device or saa7146_vv_init fails, budget_av_attach
should free the resources it has allocated, as the error handling in
ttpci_budget_init does. Two FIXME comments in the code already refer to
these missing deallocations.
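As a rough illustration, such cleanup follows the kernel's usual goto-unwind
idiom, sketched below; the labels, signatures, and exact cleanup calls are
simplified assumptions and may not match the driver verbatim:

static int budget_av_attach_sketch(struct saa7146_dev *dev,
                                   struct budget_av *budget_av)
{
        int err;

        /* vv_data: the driver's saa7146_ext_vv template (assumption) */
        err = saa7146_vv_init(dev, &vv_data);
        if (err)
                goto err_free;            /* undo the earlier allocations */

        err = saa7146_register_device(&budget_av->vd, dev, "knc1",
                                      VFL_TYPE_VIDEO);
        if (err)
                goto err_vv_release;      /* also undo saa7146_vv_init() */

        return 0;

err_vv_release:
        saa7146_vv_release(dev);          /* pairs with saa7146_vv_init() */
err_free:
        kfree(budget_av);                 /* assumption: allocated earlier */
        return err;
}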
In the Linux kernel, the following vulnerability has been resolved:
wifi: iwlwifi: mvm: ensure offloading TID queue exists
The resume code path assumes that the TX queue for the offloading TID
has been configured. At resume time it then tries to sync the write
pointer as it may have been updated by the firmware.
In the unusual event that no packets have been sent on TID 0, the queue
will not have been allocated, and this causes a crash. Fix this by
ensuring the queue exists at suspend time.
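A minimal sketch of the idea; the struct, field, and helper names below
(sta_info_sketch, tid_data, txq_id, INVALID_QUEUE_SKETCH,
alloc_tx_queue_sketch) are illustrative, not the driver's actual symbols:

static int ensure_offload_tid_queue_sketch(struct sta_info_sketch *sta, u8 tid)
{
        /* Queue already allocated: the resume path can sync its write
         * pointer safely. (Field names are hypothetical.) */
        if (sta->tid_data[tid].txq_id != INVALID_QUEUE_SKETCH)
                return 0;

        /* No packet was ever sent on this TID, so the queue was never
         * allocated; allocate it at suspend time instead of crashing
         * at resume. */
        return alloc_tx_queue_sketch(sta, tid);
}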
In the Linux kernel, the following vulnerability has been resolved:
USB: usb-storage: Prevent divide-by-0 error in isd200_ata_command
The isd200 sub-driver in usb-storage uses the HEADS and SECTORS values
in the ATA ID information to calculate cylinder and head values when
creating a CDB for READ or WRITE commands. The calculation involves
division and modulus operations, which will cause a crash if either of
these values is 0. While this never happens with a genuine device, it
could happen with a flawed or subversive emulation, as reported by the
syzbot fuzzer.
Protect against this possibility by refusing to bind to the device if
either the ATA_ID_HEADS or ATA_ID_SECTORS value in the device's ID
information is 0. This requires isd200_Initialization() to return a
negative error code when initialization fails; currently it always
returns 0 (even when there is an error).
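A minimal sketch of the guard (the function shape is simplified from
isd200_Initialization(); ATA_ID_HEADS and ATA_ID_SECTORS are the real word
indices from <linux/ata.h>):

static int isd200_init_sketch(const u16 *id)
{
        /* These ID words are later used as divisors when translating an
         * LBA into cylinder/head/sector values for the CDB, so zero must
         * be rejected up front. */
        if (id[ATA_ID_HEADS] == 0 || id[ATA_ID_SECTORS] == 0)
                return -ENODEV;           /* refuse to bind to the device */

        /* ... remaining initialization ... */
        return 0;                         /* 0 only on success now */
}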
In the Linux kernel, the following vulnerability has been resolved:
media: dvbdev: fix a use-after-free
In dvb_register_device, *pdvbdev is set equal to dvbdev, which is freed
in several error-handling paths. However, *pdvbdev is not set to NULL
after dvbdev's deallocation, causing use-after-frees in many places,
for example, in the following call chain:
budget_register
|-> dvb_dmxdev_init
|-> dvb_register_device
|-> dvb_dmxdev_release
|-> dvb_unregister_device
|-> dvb_remove_device
|-> dvb_device_put
|-> kref_put
When calling dvb_unregister_device, dmxdev->dvbdev (i.e. *pdvbdev in
dvb_register_device) could point to memory that had been freed in
dvb_register_device. Thereafter, this pointer is passed to kref_put,
triggering a use-after-free.
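The fix is to clear the caller-visible pointer on every error path that
frees the allocation; a condensed sketch, with a hypothetical failure step
standing in for the several real error paths:

int dvb_register_device_sketch(struct dvb_device **pdvbdev /* , ... */)
{
        struct dvb_device *dvbdev;

        *pdvbdev = dvbdev = kzalloc(sizeof(*dvbdev), GFP_KERNEL);
        if (!dvbdev)
                return -ENOMEM;

        if (some_later_step_fails()) {    /* hypothetical error path */
                kfree(dvbdev);
                *pdvbdev = NULL;          /* don't leave a dangling pointer */
                return -ENOMEM;
        }
        return 0;
}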
In the Linux kernel, the following vulnerability has been resolved:
usb: gadget: f_ncm: Fix UAF ncm object at re-bind after usb ep transport error
When the ncm function is active and the usb0 interface is stopped for link
down, eth_stop() is called. If a USB transport error happens to occur in
usb_ep_enable() at this point, 'in_ep' and/or 'out_ep' may not be enabled.
After that, ncm_disable() is called ahead of ncm unbind,
but gether_disconnect() is never called since 'in_ep' is not enabled.
As a result, the ncm object is released in ncm unbind,
but 'dev->port_usb', associated with 'ncm->port', is not NULL.
When the ncm function is bound again to recover the netdev, a new ncm
object is allocated, but the usb0 interface is still associated with the
previously released ncm object.
Therefore, once the usb0 interface is brought up and eth_start_xmit() is
called, the released ncm object is dereferenced, which can cause a
use-after-free.
[function unlink via configfs]
usb0: eth_stop dev->port_usb=ffffff9b179c3200
--> error happens in usb_ep_enable().
NCM: ncm_disable: ncm=ffffff9b179c3200
--> no gether_disconnect() since ncm->port.in_ep->enabled is false.
NCM: ncm_unbind: ncm unbind ncm=ffffff9b179c3200
NCM: ncm_free: ncm free ncm=ffffff9b179c3200 <-- released ncm
[function link via configfs]
NCM: ncm_alloc: ncm alloc ncm=ffffff9ac4f8a000
NCM: ncm_bind: ncm bind ncm=ffffff9ac4f8a000
NCM: ncm_set_alt: ncm=ffffff9ac4f8a000 alt=0
usb0: eth_open dev->port_usb=ffffff9b179c3200 <-- previous released ncm
usb0: eth_start dev->port_usb=ffffff9b179c3200 <--
eth_start_xmit()
--> dev->wrap()
Unable to handle kernel paging request at virtual address dead00000000014f
This patch addresses the issue by checking whether 'ncm->netdev' is
non-NULL in ncm_disable() before calling gether_disconnect() to dissociate
'dev->port_usb'.
It is more reasonable to gate gether_connect()/gether_disconnect() on
'ncm->netdev' than on 'ncm->port.in_ep->enabled', since the endpoint may
not be enabled even though the gether connection has been established.
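A condensed sketch of the resulting ncm_disable() logic (simplified from
f_ncm.c):

static void ncm_disable_sketch(struct usb_function *f)
{
        struct f_ncm *ncm = func_to_ncm(f);

        /* ncm->netdev reflects whether the gether link was actually
         * established, unlike in_ep->enabled, which can already be
         * false here after a transport error in usb_ep_enable(). */
        if (ncm->netdev) {
                gether_disconnect(&ncm->port);
                ncm->netdev = NULL;       /* dev->port_usb is now cleared */
        }
}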
In the Linux kernel, the following vulnerability has been resolved:
serial/pmac_zilog: Remove flawed mitigation for rx irq flood
The mitigation was intended to stop the irq completely. That may be
better than a hard lock-up but it turns out that you get a crash anyway
if you're using pmac_zilog as a serial console:
ttyPZ0: pmz: rx irq flood !
BUG: spinlock recursion on CPU#0, swapper/0
That's because the pr_err() call in pmz_receive_chars() results in
pmz_console_write() attempting to lock a spinlock already locked in
pmz_interrupt(). With CONFIG_DEBUG_SPINLOCK=y, this produces a fatal
BUG splat. The spinlock in question is the one in struct uart_port.
Even when it's not fatal, the serial port rx function ceases to work.
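Reduced to the locking that matters, the recursion looks like this (a
sketch of the failure mode, not the driver's actual code):

static irqreturn_t pmz_interrupt_sketch(int irq, void *dev_id)
{
        struct uart_port *port = dev_id;

        spin_lock(&port->lock);           /* ISR takes the port lock */
        /* rx path detects the "flood" ... */
        pr_err("pmz: rx irq flood !\n");  /* console -> pmz_console_write() */
        spin_unlock(&port->lock);
        return IRQ_HANDLED;
}

static void pmz_console_write_sketch(struct uart_port *port)
{
        spin_lock(&port->lock);           /* same lock, same CPU: recursion */
        /* ... emit characters ... */
        spin_unlock(&port->lock);
}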
Also, the iteration limit doesn't play nicely with QEMU, as can be
seen in the bug report linked below.
A web search for other reports of the error message "pmz: rx irq flood"
didn't produce anything. So I don't think this code is needed any more.
Remove it.
In the Linux kernel, the following vulnerability has been resolved:
serial: max310x: fix NULL pointer dereference in I2C instantiation
When trying to instantiate a max14830 device from userspace:
echo max14830 0x60 > /sys/bus/i2c/devices/i2c-2/new_device
we get the following error:
Unable to handle kernel NULL pointer dereference at virtual address...
...
Call trace:
max310x_i2c_probe+0x48/0x170 [max310x]
i2c_device_probe+0x150/0x2a0
...
Add a check for the validity of devtype to prevent the error, and abort
the probe with a meaningful error message.
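A condensed sketch of the guard; the exact match-data lookup helper used
by the driver is an assumption here:

static int max310x_i2c_probe_sketch(struct i2c_client *client)
{
        const struct max310x_devtype *devtype;

        /* Userspace instantiation via new_device can leave the match
         * data unset, so a NULL here must not be dereferenced. */
        devtype = i2c_get_match_data(client);
        if (!devtype)
                return dev_err_probe(&client->dev, -ENODEV,
                                     "Failed to match device\n");

        /* ... continue the normal probe with a valid devtype ... */
        return 0;
}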
In the Linux kernel, the following vulnerability has been resolved:
Squashfs: check the inode number is not the invalid value of zero
Syskiller has produced an out of bounds access in fill_meta_index().
That out of bounds access is ultimately caused by the inode having an
inode number with the invalid value of zero, which was not checked.
This leads to the out of bounds access through the following sequence of
events:
1. Fill_meta_index() is called to allocate (via empty_meta_index())
and fill a metadata index. It however suffers a data read error
and aborts, invalidating the newly returned empty metadata index.
It does this by setting the inode number of the index to zero,
which means unused (zero is not a valid inode number).
2. When fill_meta_index() is subsequently called again on another
read operation, locate_meta_index() returns the previous index
because it matches the inode number of 0. Because this index
has been returned it is expected to have been filled, and because
it hasn't been, an out of bounds access is performed.
This patch adds a sanity check which checks that the inode number
is not zero when the inode is created and returns -EINVAL if it is.
[phillip@squashfs.org.uk: whitespace fix]
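The check itself is small; a minimal sketch of the validation at
inode-creation time, shown outside its real context in
fs/squashfs/inode.c:

static int squashfs_check_ino_sketch(unsigned int ino_number)
{
        /* Zero doubles as the "unused slot" marker in the meta index
         * cache, so a real inode must never carry it. */
        if (ino_number == 0)
                return -EINVAL;
        return 0;
}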
In the Linux kernel, the following vulnerability has been resolved:
nfs: fix UAF in direct writes
In production we have been hitting the following warning consistently
------------[ cut here ]------------
refcount_t: underflow; use-after-free.
WARNING: CPU: 17 PID: 1800359 at lib/refcount.c:28 refcount_warn_saturate+0x9c/0xe0
Workqueue: nfsiod nfs_direct_write_schedule_work [nfs]
RIP: 0010:refcount_warn_saturate+0x9c/0xe0
PKRU: 55555554
Call Trace:
<TASK>
? __warn+0x9f/0x130
? refcount_warn_saturate+0x9c/0xe0
? report_bug+0xcc/0x150
? handle_bug+0x3d/0x70
? exc_invalid_op+0x16/0x40
? asm_exc_invalid_op+0x16/0x20
? refcount_warn_saturate+0x9c/0xe0
nfs_direct_write_schedule_work+0x237/0x250 [nfs]
process_one_work+0x12f/0x4a0
worker_thread+0x14e/0x3b0
? ZSTD_getCParams_internal+0x220/0x220
kthread+0xdc/0x120
? __btf_name_valid+0xa0/0xa0
ret_from_fork+0x1f/0x30
This is because we're completing the nfs_direct_request twice in a row.
The source of this is that when we have commit requests to submit, we
process them and send them off, and then in the completion path for the
commit requests we have
if (nfs_commit_end(cinfo.mds))
nfs_direct_write_complete(dreq);
However since we're submitting asynchronous requests we sometimes have
one that completes before we submit the next one, so we end up calling
complete on the nfs_direct_request twice.
The only other place we use nfs_generic_commit_list() is in
__nfs_commit_inode, which wraps this call in a
nfs_commit_begin();
nfs_commit_end();
This is a common pattern for this style of completion handling, one that
is also repeated in the direct code with get_dreq()/put_dreq() calls
around where we process events, as well as in the completion paths.
Fix this by using the same pattern for the commit requests.
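A condensed sketch of that pattern applied to the direct-write commit path
(simplified from fs/nfs/direct.c):

static void nfs_direct_commit_sketch(struct nfs_direct_req *dreq,
                                     struct nfs_commit_info *cinfo)
{
        /* Hold the commit count across submission so an early async
         * completion cannot drive it to zero while we are still
         * submitting. */
        nfs_commit_begin(cinfo->mds);

        /* ... submit the asynchronous commit requests; each completion
         * calls nfs_commit_end() itself ... */

        /* Drop our hold; whoever brings the count to zero completes. */
        if (nfs_commit_end(cinfo->mds))
                nfs_direct_write_complete(dreq);
}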
Before, with my 200 node rocksdb stress test running, this warning would
pop every 10ish minutes. With my patch, the stress test has been running
for several hours without popping.