An issue was discovered in Xen through 4.9.x allowing x86 guest OS users to cause a denial of service (hypervisor crash) or possibly gain privileges because MSI mapping was mishandled.
An issue was discovered in Xen 4.5.x through 4.9.x allowing attackers (who control a stub domain kernel or tool stack) to cause a denial of service (host OS crash) because of a missing comparison (of range start to range end) within the DMOP map/unmap implementation.
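As an illustration of the missing check described above, the following minimal C sketch shows a range validation of the kind the DMOP map/unmap path lacked. The struct and function names here are hypothetical and are not taken from Xen's source; the sketch only demonstrates the bug class of accepting a range whose end precedes its start.

```c
/* Illustrative sketch only -- not Xen's actual DMOP code.
 * A map/unmap request whose range end precedes its range start must be
 * rejected before any length arithmetic is done on it; otherwise the
 * computed length underflows and later bounds checks can be bypassed. */
#include <stdint.h>
#include <errno.h>

struct range_req {
    uint64_t start;   /* first frame in the requested range */
    uint64_t end;     /* last frame in the requested range (inclusive) */
};

static int validate_range(const struct range_req *req, uint64_t limit)
{
    /* The missing comparison: reject ranges whose end is below their start. */
    if (req->end < req->start)
        return -EINVAL;

    /* Only now is (end - start + 1) a meaningful, non-underflowing length. */
    if (req->end >= limit)
        return -EINVAL;

    return 0;
}
```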
An issue was discovered in Xen through 4.9.x allowing x86 HVM guest OS users to cause a denial of service (hypervisor crash) or possibly gain privileges because self-linear shadow mappings are mishandled for translated guests.
An issue was discovered in Xen through 4.9.x allowing x86 PV guest OS users to cause a denial of service (memory leak) because reference counts are mishandled.
An issue was discovered in Xen through 4.9.x allowing x86 SVM PV guest OS users to cause a denial of service (hypervisor crash) or gain privileges because IST settings are mishandled during CPU hotplugging.
An issue was discovered in Xen through 4.9.x allowing x86 PV guest OS users to cause a denial of service (unbounded recursion, stack consumption, and hypervisor crash) or possibly gain privileges via crafted page-table stacking.
An issue was discovered in Xen 4.4.x through 4.9.x allowing ARM guest OS users to cause a denial of service (prevent physical CPU usage) because of lock mishandling upon detection of an add-to-physmap error.
Heap-based buffer overflow in the pcnet_receive function in hw/net/pcnet.c in QEMU allows guest OS administrators to cause a denial of service (instance crash) or possibly execute arbitrary code via a series of packets in loopback mode.
Memory leak in Xen 3.3 through 4.8.x allows guest OS users to cause a denial of service (ARM or x86 AMD host OS memory consumption) by continually rebooting, because certain cleanup is skipped if no pass-through device was ever assigned, aka XSA-207.
A parameter verification issue was discovered in Xen through 4.9.x. The function `alloc_heap_pages` allows callers to specify the first NUMA node to use for allocations through the `memflags` parameter; the node is extracted with the `MEMF_get_node` macro. While the function checks whether the special constant `NUMA_NO_NODE` was specified, it does not otherwise handle the case where `node >= MAX_NUMNODES`, allowing an out-of-bounds access to an internal array.
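A minimal C sketch of the missing bound check follows. Only the identifiers quoted in the description (`memflags`, `MEMF_get_node`, `NUMA_NO_NODE`, `MAX_NUMNODES`) come from the text; the constant values, the flag encoding, and the stand-in array are illustrative assumptions, not Xen's real definitions.

```c
/* Sketch of the missing check, under assumed values -- not Xen's code. */
#include <stddef.h>

#define MAX_NUMNODES     64
#define NUMA_NO_NODE     0xFF
#define MEMF_get_node(f) (((f) >> 8) & 0xFF)   /* hypothetical encoding */

static const char *heap_by_node[MAX_NUMNODES]; /* stands in for the internal array */

static const char *lookup_heap(unsigned int memflags)
{
    unsigned int node = MEMF_get_node(memflags);

    if (node == NUMA_NO_NODE)
        node = 0;                      /* fall back to some default node */
    else if (node >= MAX_NUMNODES)     /* the check the vulnerable path lacked */
        return NULL;                   /* reject out-of-range caller input */

    return heap_by_node[node];         /* safe: node is now a valid index */
}
```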