GitLab has remediated an issue in GitLab CE/EE affecting all versions from 13.7 before 18.2.8, from 18.3 before 18.3.4, and from 18.4 before 18.4.2 that could have allowed authenticated users without project membership to view sensitive manual CI/CD variables by querying the GraphQL API.
The ELEX WordPress HelpDesk & Customer Ticketing System plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the 'eh_crm_settings_restore_trash' AJAX endpoint in all versions up to, and including, 3.3.1. This makes it possible for authenticated attackers, with Subscriber-level access and above, to restore all deleted tickets.
The ELEX WordPress HelpDesk & Customer Ticketing System plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the eh_crm_restore_data() function in all versions up to, and including, 3.3.1. This makes it possible for authenticated attackers, with Subscriber-level access and above, to restore tickets.
The ELEX WordPress HelpDesk & Customer Ticketing System plugin for WordPress is vulnerable to unauthorized modification of data due to a missing capability check on the 'eh_crm_settings_empty_trash' function in all versions up to, and including, 3.3.1. This makes it possible for authenticated attackers, with Subscriber-level access and above, to empty the ticket trash.
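All three ELEX issues above share the same root cause: a state-changing AJAX handler reachable by any logged-in user that never verifies a capability. The plugin itself is PHP, but the flaw class is language-agnostic; the sketch below (all names are hypothetical, not the plugin's actual code) shows the kind of explicit capability check whose absence lets Subscriber-level accounts restore or empty the ticket trash.

    TICKET_ADMIN_CAP = "manage_helpdesk_tickets"  # hypothetical capability name

    def handle_restore_trash(user, ticket_store):
        """Restore trashed tickets, but only for users holding an admin-level capability.

        Hypothetical sketch: being authenticated (logged in) is not authorization,
        so the handler must verify a capability before mutating state.
        """
        if not user.can(TICKET_ADMIN_CAP):  # the kind of check missing in <= 3.3.1
            raise PermissionError("insufficient capability")
        for ticket in ticket_store.trashed():
            ticket_store.restore(ticket)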
The AuthKit library for Next.js provides convenient helpers for authentication and session management using WorkOS & AuthKit with Next.js. In authkit-nextjs version 2.11.0 and below, authenticated responses do not defensively apply anti-caching headers. In environments where CDN caching is enabled, this can result in session tokens being included in cached responses and subsequently served to multiple users. Next.js applications deployed on Vercel are unaffected unless they manually enable CDN caching by setting cache headers on authenticated paths. This issue has been patched in authkit-nextjs 2.11.1, which applies anti-caching headers to all responses behind authentication.
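The defensive fix is conceptually simple: every response produced behind authentication should carry headers that forbid shared caches from storing it. authkit-nextjs is TypeScript; the framework-neutral Python sketch below (the helper and header set are illustrative, not the library's API) only shows what "anti-caching headers" means in practice.

    # "private" keeps shared caches (CDNs) out; "no-store" forbids persisting
    # the response at all, so session tokens can never be replayed to others.
    NO_STORE_HEADERS = {
        "Cache-Control": "private, no-store",
        "Pragma": "no-cache",
    }

    def add_no_store_headers(response_headers: dict) -> dict:
        """Illustrative helper: merge anti-caching headers into a response,
        overriding any cacheable directives set earlier in the pipeline."""
        hardened = dict(response_headers)
        hardened.update(NO_STORE_HEADERS)
        return hardened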
vLLM is an inference and serving engine for large language models (LLMs). From version 0.10.2 to before 0.11.1, a memory corruption vulnerability in the Completions API endpoint could lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM. This issue has been patched in version 0.11.1.
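A hedged sketch of one way to harden this path (not necessarily the upstream vLLM patch): re-enable PyTorch's sparse-tensor invariant checking while deserializing and bounds-check the indices before densifying, so a malformed tensor raises an error instead of writing out of bounds.

    import io
    import torch

    def load_prompt_embedding(raw: bytes) -> torch.Tensor:
        """Defensively deserialize a user-supplied prompt-embedding tensor."""
        # check_sparse_tensor_invariants re-enables PyTorch's global sparse
        # integrity checks; weights_only restricts unpickling to tensor data
        # rather than arbitrary objects.
        with torch.sparse.check_sparse_tensor_invariants():
            tensor = torch.load(io.BytesIO(raw), weights_only=True)

        if tensor.layout == torch.sparse_coo:
            indices, size = tensor._indices(), tensor.size()
            # Explicit bounds check before to_dense(), which is where the
            # out-of-bounds write would otherwise occur.
            if indices.numel() > 0 and (
                bool(indices.min() < 0)
                or any(int(indices[d].max()) >= size[d] for d in range(indices.shape[0]))
            ):
                raise ValueError("malformed sparse tensor: index out of bounds")
            tensor = tensor.to_dense()
        return tensor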
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with the correct ndim but an incorrect shape (e.g., a wrong hidden dimension), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
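The underlying gap is that shape validation stopped at ndim. The minimal check below is illustrative only (the helper name and the assumption that an embedding is (num_tokens, hidden_size) are mine, not vLLM's code); it turns a malformed request into a client error instead of an engine crash.

    import torch

    def validate_mm_embedding(emb: torch.Tensor, hidden_size: int) -> torch.Tensor:
        """Reject multimodal embeddings whose shape does not match the model."""
        if emb.ndim != 2:
            raise ValueError(f"expected a 2-D embedding, got ndim={emb.ndim}")
        if emb.shape[-1] != hidden_size:
            # Check the hidden dimension as well as the rank before handing the
            # tensor to the engine.
            raise ValueError(
                f"embedding hidden dim {emb.shape[-1]} does not match model hidden size {hidden_size}"
            )
        return emb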
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter that is used before it is properly validated against the chat template. With suitably crafted chat_template_kwargs values, it is possible to block processing of the API server for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.
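One mitigation along the lines the advisory implies is to validate chat_template_kwargs before it ever reaches chat-template rendering. The allowlist and helper below are hypothetical, purely to show the shape of such a check.

    # Hypothetical allowlist of kwargs a deployment actually expects.
    ALLOWED_CHAT_TEMPLATE_KWARGS = {"add_generation_prompt", "enable_thinking"}

    def sanitize_chat_template_kwargs(kwargs: dict | None) -> dict:
        """Only pass through known keys with simple scalar values, so
        request-controlled kwargs cannot steer chat-template rendering into
        pathological, long-running paths."""
        clean = {}
        for key, value in (kwargs or {}).items():
            if key not in ALLOWED_CHAT_TEMPLATE_KWARGS:
                raise ValueError(f"unsupported chat_template_kwargs key: {key!r}")
            if not isinstance(value, (bool, int, float, str)):
                raise ValueError(f"unsupported value type for chat_template_kwargs[{key!r}]")
            clean[key] = value
        return clean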
Claude Code is an agentic coding tool. Prior to version 2.0.31, due to an error in sed command parsing, it was possible to bypass the Claude Code read-only validation and write to arbitrary files on the host system. This issue has been patched in version 2.0.31.
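Classifying a sed invocation as read-only is harder than it looks, because sed scripts themselves can write files: the w command, the W command, and the w flag on s/// all take an output filename. The sketch below only illustrates that general pitfall with a deliberately conservative heuristic; it is not the specific parsing error that was fixed, and the helper name is hypothetical.

    # Examples that must NOT be treated as read-only:
    #   sed -n 'w /tmp/out' input.txt          (the w command copies input to a file)
    #   sed 's/root/evil/w /etc/passwd' file   (the w flag on s/// writes matched lines)

    def sed_script_may_write(script: str) -> bool:
        """Hypothetical, deliberately conservative helper: any 'w' or 'W' in the
        script is treated as potentially file-writing, so the invocation is not
        classified as read-only. False positives are acceptable; a false
        negative would allow an arbitrary file write."""
        return "w" in script or "W" in script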