Security Vulnerabilities
- CVEs Published In May 2024
In the Linux kernel, the following vulnerability has been resolved:
platform/chrome: cros_ec_uart: properly fix race condition
The cros_ec_uart_probe() function calls devm_serdev_device_open() before
it calls serdev_device_set_client_ops(). This can trigger a NULL pointer
dereference:
BUG: kernel NULL pointer dereference, address: 0000000000000000
...
Call Trace:
<TASK>
...
? ttyport_receive_buf
A simplified version of crashing code is as follows:
static inline size_t serdev_controller_receive_buf(struct serdev_controller *ctrl,
                                                   const u8 *data,
                                                   size_t count)
{
        struct serdev_device *serdev = ctrl->serdev;

        if (!serdev || !serdev->ops->receive_buf) // CRASH!
                return 0;

        return serdev->ops->receive_buf(serdev, data, count);
}
It assumes that if SERPORT_ACTIVE is set and serdev exists, serdev->ops
will also exist. This conflicts with the existing cros_ec_uart_probe()
logic, as it first calls devm_serdev_device_open() (which sets
SERPORT_ACTIVE), and only later sets serdev->ops via
serdev_device_set_client_ops().
Commit 01f95d42b8f4 ("platform/chrome: cros_ec_uart: fix race
condition") attempted to fix a similar race condition, but in doing so
made the window for this race condition much wider.
Attempt to fix the race condition again, making sure the device is fully
set up before devm_serdev_device_open() is called.
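To illustrate the ordering, here is a minimal, hedged sketch of a probe sequence that registers the client ops before opening the port; names other than the serdev API (the private struct and the ops table) are assumptions, not the exact upstream patch:
static int cros_ec_uart_probe(struct serdev_device *serdev)
{
        struct device *dev = &serdev->dev;
        struct cros_ec_uart *ec_uart;   /* driver-private state, name assumed */
        int ret;

        ec_uart = devm_kzalloc(dev, sizeof(*ec_uart), GFP_KERNEL);
        if (!ec_uart)
                return -ENOMEM;

        /* Fully initialize driver state and register the receive_buf
         * callback first, so ttyport_receive_buf() can never observe
         * SERPORT_ACTIVE while serdev->ops is still unset.
         */
        serdev_device_set_drvdata(serdev, ec_uart);
        serdev_device_set_client_ops(serdev, &cros_ec_uart_client_ops);

        /* Only then open the port, which sets SERPORT_ACTIVE. */
        ret = devm_serdev_device_open(dev, serdev);
        if (ret)
                return ret;

        return 0;
}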
In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: Fix memory leak in hci_req_sync_complete()
In 'hci_req_sync_complete()', always free the previous sync
request state before assigning a reference to a new one.
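A minimal sketch of the idea, assuming the usual hci_dev request fields (req_status, req_result, req_skb, req_wait_q); treat it as an illustration rather than the exact upstream diff:
static void hci_req_sync_complete(struct hci_dev *hdev, u8 result, u16 opcode,
                                  struct sk_buff *skb)
{
        if (hdev->req_status == HCI_REQ_PEND) {
                hdev->req_result = result;
                hdev->req_status = HCI_REQ_DONE;

                /* Free any skb left over from a previous sync request
                 * before storing a reference to the new one.
                 */
                kfree_skb(hdev->req_skb);
                hdev->req_skb = NULL;
                if (skb)
                        hdev->req_skb = skb_get(skb);

                wake_up_interruptible(&hdev->req_wait_q);
        }
}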
In the Linux kernel, the following vulnerability has been resolved:
raid1: fix use-after-free for original bio in raid1_write_request()
r1_bio->bios[] is used to record new bios that will be issued to the
underlying disks; however, in raid1_write_request(), r1_bio->bios[]
is temporarily set to the original bio. Meanwhile, if a blocked rdev
is encountered, free_r1bio() will be called, causing all entries in
r1_bio->bios[] to be freed:
raid1_write_request()
 r1_bio = alloc_r1bio(mddev, bio); -> r1_bio->bios[] is NULL
 for (i = 0; i < disks; i++) -> for each rdev in conf
  // first rdev is normal
  r1_bio->bios[0] = bio; -> set to original bio
  // second rdev is blocked
  if (test_bit(Blocked, &rdev->flags))
   break
 if (blocked_rdev)
  free_r1bio()
   put_all_bios()
    bio_put(r1_bio->bios[0]) -> original bio is freed
Test scripts:
mdadm -CR /dev/md0 -l1 -n4 /dev/sd[abcd] --assume-clean
fio -filename=/dev/md0 -ioengine=libaio -rw=write -bs=4k -numjobs=1 \
-iodepth=128 -name=test -direct=1
echo blocked > /sys/block/md0/md/rd2/state
Test result:
BUG bio-264 (Not tainted): Object already free
-----------------------------------------------------------------------------
Allocated in mempool_alloc_slab+0x24/0x50 age=1 cpu=1 pid=869
kmem_cache_alloc+0x324/0x480
mempool_alloc_slab+0x24/0x50
mempool_alloc+0x6e/0x220
bio_alloc_bioset+0x1af/0x4d0
blkdev_direct_IO+0x164/0x8a0
blkdev_write_iter+0x309/0x440
aio_write+0x139/0x2f0
io_submit_one+0x5ca/0xb70
__do_sys_io_submit+0x86/0x270
__x64_sys_io_submit+0x22/0x30
do_syscall_64+0xb1/0x210
entry_SYSCALL_64_after_hwframe+0x6c/0x74
Freed in mempool_free_slab+0x1f/0x30 age=1 cpu=1 pid=869
kmem_cache_free+0x28c/0x550
mempool_free_slab+0x1f/0x30
mempool_free+0x40/0x100
bio_free+0x59/0x80
bio_put+0xf0/0x220
free_r1bio+0x74/0xb0
raid1_make_request+0xadf/0x1150
md_handle_request+0xc7/0x3b0
md_submit_bio+0x76/0x130
__submit_bio+0xd8/0x1d0
submit_bio_noacct_nocheck+0x1eb/0x5c0
submit_bio_noacct+0x169/0xd40
submit_bio+0xee/0x1d0
blkdev_direct_IO+0x322/0x8a0
blkdev_write_iter+0x309/0x440
aio_write+0x139/0x2f0
Since the bios for the underlying disks are not allocated yet, fix this
problem by calling mempool_free() directly to free the r1_bio.
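A hedged sketch of the error path; field names such as conf->r1bio_pool follow my reading of drivers/md/raid1.c and should be treated as assumptions, not the exact upstream diff:
        if (unlikely(blocked_rdev)) {
                /* None of the per-disk bios have been cloned yet, so the
                 * entries in r1_bio->bios[] still alias the caller's
                 * original bio. Free only the r1_bio itself instead of
                 * calling free_r1bio(), whose put_all_bios() would
                 * bio_put() the original bio.
                 */
                mempool_free(r1_bio, &conf->r1bio_pool);
                /* ... then wait for the blocked rdev and retry the write ... */
        }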
In the Linux kernel, the following vulnerability has been resolved:
arm64: tlb: Fix TLBI RANGE operand
KVM/arm64 relies on the TLBI RANGE feature to flush TLBs when the dirty
pages are collected by the VMM and the page table entries become
write-protected during live migration. Unfortunately, the operand passed
to the TLBI RANGE instruction isn't correctly computed after
commit 117940aa6e5f ("KVM: arm64: Define kvm_tlb_flush_vmid_range()").
This leads to a crash on the destination VM after live migration because
the TLBs aren't flushed completely and some of the dirty pages are missed.
For example, I have a VM where 8GB memory is assigned, starting from
0x40000000 (1GB). Note that the host has 4KB as the base page size.
In the middle of migration, kvm_tlb_flush_vmid_range() is executed
to flush TLBs. It passes MAX_TLBI_RANGE_PAGES as the argument to
__kvm_tlb_flush_vmid_range() and __flush_s2_tlb_range_op(). SCALE#3
and NUM#31, the combination corresponding to MAX_TLBI_RANGE_PAGES, isn't
supported by __TLBI_RANGE_NUM(). In this specific case, -1 is returned
from __TLBI_RANGE_NUM() for SCALE#3/2/1/0 and rejected by the loop
in __flush_tlb_range_op() until the variable @scale underflows to -9,
at which point 0xffff708000040000 is set as the operand. The operand
is wrong since it is built by __TLBI_VADDR_RANGE() from the invalid
@scale and @num.
Fix it by extending __TLBI_RANGE_NUM() to support the combination of
SCALE#3 and NUM#31. With the change, the macro can return values in
[-1, 31] instead of [-1, 30], meaning the TLBs for 0x200000 pages in the
above example can be flushed in one shot with SCALE#3 and NUM#31. The
macro TLBI_RANGE_MASK is dropped since no one uses it anymore. The
comments are also adjusted accordingly.
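A hedged sketch of the extended macro (simplified; the real definitions live in arch/arm64/include/asm/tlbflush.h): clamp the page count so that SCALE#3/NUM#31, i.e. MAX_TLBI_RANGE_PAGES, maps to NUM 31 instead of falling outside the encodable range.
#define __TLBI_RANGE_PAGES(num, scale)  ((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES            __TLBI_RANGE_PAGES(31, 3)

/* Clamp to what a single (scale, num) pair can encode, then derive NUM;
 * for pages == MAX_TLBI_RANGE_PAGES and scale == 3 this now yields 31.
 */
#define __TLBI_RANGE_NUM(pages, scale)                                  \
        ({                                                              \
                int __pages = min((pages),                              \
                                  __TLBI_RANGE_PAGES(31, (scale)));     \
                (__pages >> (5 * (scale) + 1)) - 1;                     \
        })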
In the Linux kernel, the following vulnerability has been resolved:
virtio_net: Do not send RSS key if it is not supported
There is a bug when setting the RSS options in virtio_net that can break
the whole machine, getting the kernel into an infinite loop.
Running the following command in any QEMU virtual machine with virtio-net
will reproduce this problem:
# ethtool -X eth0 hfunc toeplitz
This is how the problem happens:
1) ethtool_set_rxfh() calls virtnet_set_rxfh()
2) virtnet_set_rxfh() calls virtnet_commit_rss_command()
3) virtnet_commit_rss_command() populates 4 entries for the rss
scatter-gather
4) Since the command above does not have a key, the last
scatter-gather entry will be zeroed, since rss_key_size == 0.
sg_buf_size = vi->rss_key_size;
5) This buffer is passed to QEMU, but QEMU is not happy with a
zero-length buffer and does the following in virtqueue_map_desc() (a QEMU
function):
if (!sz) {
virtio_error(vdev, "virtio: zero sized buffers are not allowed");
6) virtio_error() (also a QEMU function) sets the device as broken:
vdev->broken = true;
7) QEMU bails out and does not respond to the kernel.
8) The kernel is waiting for the response to come back (function
virtnet_send_command())
9) The kernel keeps waiting, doing the following:
   while (!virtqueue_get_buf(vi->cvq, &tmp) &&
          !virtqueue_is_broken(vi->cvq))
           cpu_relax();
10) Neither of the conditions above becomes true, so the kernel
loops here forever. Keep in mind that virtqueue_is_broken() does
not look at QEMU's `vdev->broken`, so it never realizes that the
virtio device is broken on the QEMU side.
Fix it by not sending RSS commands if the feature is not available in
the device.
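A hedged sketch of the guard in the ethtool path; field names such as vi->has_rss and vi->has_rss_hash_report follow drivers/net/virtio_net.c, but treat this as an illustration rather than the exact upstream diff:
        /* In virtnet_set_rxfh(): refuse indirection-table or hash-key
         * updates when the device never negotiated the corresponding
         * feature, so a zero-length key is never placed on the control
         * virtqueue.
         */
        if (rxfh->indir && !vi->has_rss)
                return -EOPNOTSUPP;

        if (rxfh->key && !vi->has_rss && !vi->has_rss_hash_report)
                return -EOPNOTSUPP;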
In the Linux kernel, the following vulnerability has been resolved:
batman-adv: Avoid infinite loop trying to resize local TT
If the MTU of an attached interface becomes too small to transmit
the local translation table then it must be resized to fit inside all
fragments (when enabled) or a single packet.
But if the MTU becomes too low to transmit even the header + the VLAN
specific part, then the resizing of the local TT will never succeed. This
can, for example, happen when the usable space is 110 bytes and 11 VLANs are
on top of batman-adv. In this case, at least 116 bytes would be needed.
There will just be an endless spam of
batman_adv: batadv0: Forced to purge local tt entries to fit new maximum fragment MTU (110)
in the log, but the function will never finish. The problem here is that
the timeout is halved on every iteration and eventually stagnates at 0, so
the loop is never able to reduce the table any further.
Other scenarios with a similar result are possible. The number of
BATADV_TT_CLIENT_NOPURGE entries in the local TT can, for example, be too
high to fit inside a packet. Such a scenario can therefore also happen with
only a single VLAN + 7 non-purgeable addresses, requiring at least 120
bytes.
While this should be handled proactively when:
* interface with too low MTU is added
* VLAN is added
* non-purgeable local mac is added
* MTU of an attached interface is reduced
* fragmentation setting gets disabled (which most likely requires dropping
attached interfaces)
not all of these scenarios can be prevented, because batman-adv only
consumes events, without the possibility to prevent these actions
(a non-purgeable MAC address is added, the MTU of an attached interface is
reduced). It is therefore necessary to make sure that the code is also able
to handle situations where an incompatible system configuration is already
present.
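One way to make the resize loop robust is to bail out once a purge pass stops making progress; the sketch below is a defensive variant under that assumption (using the batman-adv local-TT purge helpers), not the exact upstream change:
        while (true) {
                table_size = batadv_tt_local_table_transmit_size(bat_priv);
                if (packet_size_max >= table_size)
                        break;

                batadv_tt_local_purge(bat_priv, timeout);
                batadv_tt_local_purge_pending_clients(bat_priv);

                /* If purging could not shrink the table any further (e.g.
                 * only BATADV_TT_CLIENT_NOPURGE entries remain), looping
                 * with an ever smaller timeout will not help - give up.
                 */
                if (batadv_tt_local_table_transmit_size(bat_priv) >= table_size)
                        break;

                timeout /= 2;
        }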
In the Linux kernel, the following vulnerability has been resolved:
bounds: Use the right number of bits for power-of-two CONFIG_NR_CPUS
When passed an exact power of two, bits_per() returns one bit more than is
needed, effectively rounding up to the next power of two. This causes
crashes on some machines and configurations.
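A small userspace illustration of the off-by-one; the helpers below merely model the kernel's bits_per() and order_base_2(), and the tie-in to packed bitfield sizing is my reading of the fix rather than a quote from it:
#include <stdio.h>

/* Rough userspace models of the kernel helpers. */
static unsigned int ilog2_u(unsigned long n)        { return 63 - __builtin_clzl(n); }
static unsigned int bits_per_u(unsigned long n)     { return ilog2_u(n) + 1; }
static unsigned int order_base_2_u(unsigned long n) { return n > 1 ? ilog2_u(n - 1) + 1 : 0; }

int main(void)
{
        unsigned long nr_cpus = 64; /* a power-of-two CONFIG_NR_CPUS */

        /* bits_per(64) = 7, but only 6 bits are needed to store the CPU
         * ids 0..63; the extra bit can overflow packed bitfield layouts.
         */
        printf("bits_per(%lu)     = %u\n", nr_cpus, bits_per_u(nr_cpus));
        printf("order_base_2(%lu) = %u\n", nr_cpus, order_base_2_u(nr_cpus));
        return 0;
}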
In the Linux kernel, the following vulnerability has been resolved:
i2c: smbus: fix NULL function pointer dereference
Baruch reported an OOPS when using the designware controller as target
only. Target-only modes break the assumption of one transfer function
always being available. Fix this by always checking the pointer in
__i2c_transfer.
[wsa: dropped the simplification in core-smbus to avoid theoretical regressions]
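A hedged sketch of the check at the top of __i2c_transfer(); the retry/dispatch logic below it is elided and the body is simplified:
int __i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
{
        /* A target-only controller may register an algorithm without any
         * master_xfer callback, so verify it here as well instead of
         * relying on i2c_transfer() having done the check.
         */
        if (!adap->algo->master_xfer) {
                dev_dbg(&adap->dev, "I2C level transfers not supported\n");
                return -EOPNOTSUPP;
        }

        /* ... existing retry and dispatch logic unchanged ... */
        return adap->algo->master_xfer(adap, msgs, num);
}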
In the Linux kernel, the following vulnerability has been resolved:
sched/eevdf: Prevent vlag from going out of bounds in reweight_eevdf()
It was possible for pick_eevdf() to return NULL, which then causes a
NULL dereference. This turned out to be due to entity_eligible() returning
a false negative because of an s64 multiplication overflow.
Specifically, reweight_eevdf() computes the vlag without considering
the limit placed upon vlag as update_entity_lag() does, and then the
scaling multiplication (remember that weight is 20-bit fixed point) can
overflow. This leads to a bogus new vruntime, which then causes
entity_eligible() to go sideways and claim nothing is eligible.
Thus limit the range of vlag accordingly.
All this was quite rare, but fatal when it does happen.
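A hedged sketch of the clamp, mirroring what update_entity_lag() already does; the helper name and the exact limit expression are my reading of the fix and should be treated as illustrative:
static s64 limit_entity_lag(struct sched_entity *se, s64 lag)
{
        s64 limit;

        /* Bound the lag to a couple of slices so the later
         * vlag * old_weight product cannot overflow s64.
         */
        limit = calc_delta_fair(max_t(u64, 2 * se->slice, TICK_NSEC), se);
        return clamp(lag, -limit, limit);
}

/* In reweight_eevdf(), clamp before the weight-scaled multiply:
 *
 *      vlag = limit_entity_lag(se, vlag);
 *      vlag = div_s64(vlag * old_weight, weight);
 */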
In the Linux kernel, the following vulnerability has been resolved:
phy: ti: tusb1210: Resolve charger-det crash if charger psy is unregistered
The power_supply framework is not really designed for long-living
in-kernel references to power_supply devices.
Specifically, unregistering a power_supply while some other code still
holds a reference to it triggers a WARN in power_supply_unregister():
WARN_ON(atomic_dec_return(&psy->use_cnt));
The power_supply is then still removed and the backing data freed
anyway, leaving the tusb1210 charger-detect code with a dangling
reference and resulting in a crash the next time tusb1210_get_online()
is called.
Fix this by only holding the reference inside tusb1210_get_online(),
freeing it at the end of the function. Note this still leaves
a theoretical race window, but it avoids the issue when manually
rmmod-ing the charger chip driver during development.
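A minimal sketch of the per-call pattern, assuming a hypothetical psy name and a simplified driver struct; the real driver's lookup details differ:
static bool tusb1210_get_online(struct tusb1210 *tusb)
{
        union power_supply_propval val;
        struct power_supply *charger;
        bool online = false;

        /* Take the reference only for the duration of this call ... */
        charger = power_supply_get_by_name("tusb1210-charger"); /* name is illustrative */
        if (!charger)
                return false;

        if (!power_supply_get_property(charger, POWER_SUPPLY_PROP_ONLINE, &val))
                online = val.intval;

        /* ... and drop it before returning, so no dangling pointer is
         * kept if the psy gets unregistered later.
         */
        power_supply_put(charger);

        return online;
}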