In the Linux kernel, the following vulnerability has been resolved:
crypto: af_alg - Disallow concurrent writes in af_alg_sendmsg
Issuing two writes to the same af_alg socket is bogus as the
data will be interleaved in an unpredictable fashion. Furthermore,
concurrent writes may create inconsistencies in the internal
socket state.
Disallow this by adding a new ctx->write field that indicates
exclusive ownership for writing.
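A minimal sketch of the idea, assuming the flag is taken and dropped under
the socket lock in af_alg_sendmsg() (the error code and surrounding details
are illustrative, not the exact patch):

lock_sock(sk);
if (ctx->write) {
        /* another sendmsg already owns this socket's TX path */
        release_sock(sk);
        return -EBUSY;
}
ctx->write = true;              /* claim exclusive write ownership */

/* ... copy the user data into the socket's TX state ... */

ctx->write = false;             /* drop ownership before returning */
release_sock(sk);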
In the Linux kernel, the following vulnerability has been resolved:
iommu/amd/pgtbl: Fix possible race while increase page table level
The AMD IOMMU host page table implementation supports dynamic page table levels
(up to 6 levels), starting with a 3-level configuration that expands based on
IOVA address. The kernel maintains a root pointer and current page table level
to enable proper page table walks in alloc_pte()/fetch_pte() operations.
The IOMMU IOVA allocator initially starts with 32-bit addresses and, once
that space is exhausted, switches to 64-bit addresses (the maximum address
is determined by the IOMMU and device DMA capability). To support larger
IOVAs, the AMD IOMMU driver increases the page table level.
But in the unmap path (iommu_v1_unmap_pages()), fetch_pte() reads
pgtable->[root/mode] without a lock. So it is possible that, in an extreme
corner case, while increase_address_space() is updating pgtable->[root/mode],
fetch_pte() reads the wrong page table level (pgtable->mode). It compares
that value with the level encoded in the page table and returns NULL. This
causes the iommu_unmap op to fail, and the upper layer may retry or log a
WARN_ON.
CPU 0                                       CPU 1
------                                      ------
map pages                                   unmap pages
alloc_pte() -> increase_address_space()     iommu_v1_unmap_pages() -> fetch_pte()
pgtable->root = pte (new root value)
                                            READ pgtable->[mode/root]
                                              Reads new root, old mode
Updates mode (pgtable->mode += 1)
Since page table level updates are infrequent and already serialized by a
spinlock, implement a seqcount to enable lock-free reads on the unmap path.
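A sketch of that seqcount pattern (the seqcount field name and the local
variables are illustrative):

/* Writer side, already serialized by the domain spinlock: */
write_seqcount_begin(&pgtable->seqcount);
pgtable->root = pte;            /* publish the new root  */
pgtable->mode += 1;             /* and the new level     */
write_seqcount_end(&pgtable->seqcount);

/* Reader side, e.g. fetch_pte(), lock-free with retry: */
do {
        seq  = read_seqcount_begin(&pgtable->seqcount);
        root = pgtable->root;
        mode = pgtable->mode;
} while (read_seqcount_retry(&pgtable->seqcount, seq));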
In the Linux kernel, the following vulnerability has been resolved:
tty: serial: samsung_tty: Fix a memory leak in s3c24xx_serial_getclk() when iterating clk
When searching for the best clk, we iterate over all possible clks.
If we find a better match, the previous one, if any, needs to be freed.
If a better match has already been found, we still need to free the new
one, otherwise it leaks.
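A sketch of the intended reference handling while iterating (the loop bound,
clk_name[] table, and better_match() helper are illustrative, not the
driver's exact code):

for (cnt = 0; cnt < NR_BAUD_CLKS; cnt++) {
        clk = clk_get(dev, clk_name[cnt]);
        if (IS_ERR(clk))
                continue;

        if (better_match(clk, req_baud)) {
                if (best_clk)
                        clk_put(best_clk);      /* free the previous best */
                best_clk = clk;
        } else {
                clk_put(clk);                   /* not kept: free it, or it leaks */
        }
}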
In the Linux kernel, the following vulnerability has been resolved:
NFSD: Avoid calling OPDESC() with ops->opnum == OP_ILLEGAL
OPDESC() simply indexes into nfsd4_ops[] by the op's operation
number, without range checking that value. It assumes callers are
careful to avoid calling it with an out-of-bounds opnum value.
nfsd4_decode_compound() is not so careful, and can invoke OPDESC()
with opnum set to OP_ILLEGAL, which is 10044 -- well beyond the end
of nfsd4_ops[].
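A hedged sketch of the kind of range check needed before the lookup; the
exact placement in nfsd4_decode_compound() and the use of
FIRST_NFS4_OP/LAST_NFS4_OP are assumptions here:

/* Only index nfsd4_ops[] for operation numbers we actually recognize. */
if (op->opnum >= FIRST_NFS4_OP && op->opnum <= LAST_NFS4_OP)
        op->opdesc = OPDESC(op);
else
        op->opdesc = NULL;      /* OP_ILLEGAL (10044) gets no descriptor */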
In the Linux kernel, the following vulnerability has been resolved:
bcache: Fix __bch_btree_node_alloc to make the failure behavior consistent
In some specific situations, the return value of __bch_btree_node_alloc
may be NULL. This may lead to a potential NULL pointer dereference in a
caller, for example via the call chain:
btree_split -> bch_btree_node_alloc -> __bch_btree_node_alloc.
Fix it by initializing the return value in __bch_btree_node_alloc.
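One way to make the failure behavior consistent, sketched on the assumption
that the function retries under c->bucket_lock (an outline, not the literal
patch): reset the return pointer to an error value on every attempt so a
NULL from an earlier iteration can never escape.

struct btree *b;

mutex_lock(&c->bucket_lock);
retry:
b = ERR_PTR(-EAGAIN);           /* re-initialize on every attempt */

/* ... bucket and node allocation; failures goto err or goto retry ... */

err:
mutex_unlock(&c->bucket_lock);
return b;                       /* a valid node or an ERR_PTR(), never NULL */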
In the Linux kernel, the following vulnerability has been resolved:
hwmon: (xgene) Fix ioremap and memremap leak
Smatch reports:
drivers/hwmon/xgene-hwmon.c:757 xgene_hwmon_probe() warn:
'ctx->pcc_comm_addr' from ioremap() not released on line: 757.
This is because in drivers/hwmon/xgene-hwmon.c:701 xgene_hwmon_probe(),
the ioremap and memremap mappings are never released, which may cause a leak.
To fix this, ioremap and memremap are replaced with devm_ioremap and
devm_memremap.
[groeck: Fixed formatting and subject]
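A sketch of the device-managed form (the region size variable is
illustrative): devm_* mappings are unwound automatically on probe failure
and device removal, so the error paths in xgene_hwmon_probe() need no
explicit iounmap()/memunmap() calls.

ctx->pcc_comm_addr = (void __force *)devm_ioremap(&pdev->dev,
                                                  ctx->comm_base_addr,
                                                  region_size);
if (!ctx->pcc_comm_addr) {
        dev_err(&pdev->dev, "Failed to map PCC shared memory\n");
        return -ENOMEM;
}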
In the Linux kernel, the following vulnerability has been resolved:
tun: Fix memory leak for detached NAPI queue.
syzkaller reported [0] memory leaks of sk and skb related to the TUN
device with no repro, but we can reproduce it easily with:
struct ifreq ifr = {};
int fd_tun, fd_tmp;
char buf[4] = {};
fd_tun = openat(AT_FDCWD, "/dev/net/tun", O_WRONLY, 0);
ifr.ifr_flags = IFF_TUN | IFF_NAPI | IFF_MULTI_QUEUE;
ioctl(fd_tun, TUNSETIFF, &ifr);
ifr.ifr_flags = IFF_DETACH_QUEUE;
ioctl(fd_tun, TUNSETQUEUE, &ifr);
fd_tmp = socket(AF_PACKET, SOCK_PACKET, 0);
ifr.ifr_flags = IFF_UP;
ioctl(fd_tmp, SIOCSIFFLAGS, &ifr);
write(fd_tun, buf, sizeof(buf));
close(fd_tun);
If we enable NAPI and multi-queue on a TUN device, we can put skb into
tfile->sk.sk_write_queue after the queue is detached. We should prevent
it by checking tfile->detached before queuing skb.
Note this must be done under tfile->sk.sk_write_queue.lock because write()
and ioctl(IFF_DETACH_QUEUE) can run concurrently. Otherwise, there would
be a small race window:
write()                                  ioctl(IFF_DETACH_QUEUE)
`- tun_get_user                          `- __tun_detach
   |- if (tfile->detached)                  |- tun_disable_queue
   |   `-> false                            |  `- tfile->detached = tun
   |                                        `- tun_queue_purge
   |- spin_lock_bh(&queue->lock)
   `- __skb_queue_tail(queue, skb)
Another solution is to call tun_queue_purge() when closing and
reattaching the detached queue, but it could paper over other
problems. Also, we do the same kind of test for IFF_NAPI_FRAGS.
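A sketch of the queuing-side check described above (error handling details
are illustrative): the tfile->detached test and the enqueue sit under the
same queue lock, closing the window shown in the diagram.

queue = &tfile->sk.sk_write_queue;

spin_lock_bh(&queue->lock);
if (unlikely(tfile->detached)) {
        /* IFF_DETACH_QUEUE won the race: don't strand the skb here */
        spin_unlock_bh(&queue->lock);
        kfree_skb(skb);
        return -EBUSY;
}
__skb_queue_tail(queue, skb);
spin_unlock_bh(&queue->lock);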
[0]:
unreferenced object 0xffff88801edbc800 (size 2048):
comm "syz-executor.1", pid 33269, jiffies 4295743834 (age 18.756s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 00 07 40 00 00 00 00 00 00 00 00 00 00 00 00 ...@............
backtrace:
[<000000008c16ea3d>] __do_kmalloc_node mm/slab_common.c:965 [inline]
[<000000008c16ea3d>] __kmalloc+0x4a/0x130 mm/slab_common.c:979
[<000000003addde56>] kmalloc include/linux/slab.h:563 [inline]
[<000000003addde56>] sk_prot_alloc+0xef/0x1b0 net/core/sock.c:2035
[<000000003e20621f>] sk_alloc+0x36/0x2f0 net/core/sock.c:2088
[<0000000028e43843>] tun_chr_open+0x3d/0x190 drivers/net/tun.c:3438
[<000000001b0f1f28>] misc_open+0x1a6/0x1f0 drivers/char/misc.c:165
[<000000004376f706>] chrdev_open+0x111/0x300 fs/char_dev.c:414
[<00000000614d379f>] do_dentry_open+0x2f9/0x750 fs/open.c:920
[<000000008eb24774>] do_open fs/namei.c:3636 [inline]
[<000000008eb24774>] path_openat+0x143f/0x1a30 fs/namei.c:3791
[<00000000955077b5>] do_filp_open+0xce/0x1c0 fs/namei.c:3818
[<00000000b78973b0>] do_sys_openat2+0xf0/0x260 fs/open.c:1356
[<00000000057be699>] do_sys_open fs/open.c:1372 [inline]
[<00000000057be699>] __do_sys_openat fs/open.c:1388 [inline]
[<00000000057be699>] __se_sys_openat fs/open.c:1383 [inline]
[<00000000057be699>] __x64_sys_openat+0x83/0xf0 fs/open.c:1383
[<00000000a7d2182d>] do_syscall_x64 arch/x86/entry/common.c:50 [inline]
[<00000000a7d2182d>] do_syscall_64+0x3c/0x90 arch/x86/entry/common.c:80
[<000000004cc4e8c4>] entry_SYSCALL_64_after_hwframe+0x72/0xdc
unreferenced object 0xffff88802f671700 (size 240):
comm "syz-executor.1", pid 33269, jiffies 4295743854 (age 18.736s)
hex dump (first 32 bytes):
68 c9 db 1e 80 88 ff ff 68 c9 db 1e 80 88 ff ff h.......h.......
00 c0 7b 2f 80 88 ff ff 00 c8 db 1e 80 88 ff ff ..{/............
backtrace:
[<00000000e9d9fdb6>] __alloc_skb+0x223/0x250 net/core/skbuff.c:644
[<000000002c3e4e0b>] alloc_skb include/linux/skbuff.h:1288 [inline]
[<000000002c3e4e0b>] alloc_skb_with_frags+0x6f/0x350 net/core/skbuff.c:6378
[<00000000825f98d7>] sock_alloc_send_pskb+0x3ac/0x3e0 net/core/sock.c:2729
[<00000000e9eb3df3>] tun_alloc_skb drivers/net/tun.c:1529 [inline]
[<
---truncated---
In the Linux kernel, the following vulnerability has been resolved:
btrfs: output extra debug info if we failed to find an inline backref
[BUG]
Syzbot reported several warnings triggered inside
lookup_inline_extent_backref().
[CAUSE]
As usual, the reproducer doesn't reliably trigger locally here, but at
least we know the WARN_ON() is triggered when an inline backref cannot
be found, and it can only be triggered when @insert is true. (I.e.
inserting a new inline backref, which means the backref should already
exist)
[ENHANCEMENT]
After the WARN_ON(), dump all the parameters and the extent tree
leaf to help debug.
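A sketch of what such a dump can look like after the WARN_ON() (the message
text and error code are illustrative; btrfs_print_leaf() is the existing
helper for dumping a leaf's items):

if (WARN_ON(ret)) {
        btrfs_err(fs_info,
"missing inline backref: bytenr %llu num_bytes %llu parent %llu root %llu owner %llu offset %llu",
                  bytenr, num_bytes, parent, root_objectid, owner, offset);
        btrfs_print_leaf(path->nodes[0]);
        err = -EUCLEAN;
        goto out;
}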