CVE-2026-44222
| Summary |
vLLM is an inference and serving engine for large language models (LLMs). From version 0.6.1 up to but not including 0.20.0, there is a token injection vulnerability in vLLM's multimodal processing. Unauthenticated, text-only prompts that spell out special tokens are interpreted as control tokens. Image and video placeholder sequences supplied without matching data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError that terminates the worker or degrades availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0.
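The failure mode described above can be illustrated with a minimal sketch. The function, token string, and variable names below are hypothetical stand-ins, not vLLM's actual implementation: when a text-only prompt spells a placeholder token, the grid list (analogous to image_grid_thw) is empty, and indexing it raises an unhandled IndexError.

```python
# Hypothetical sketch of the vulnerability class; not vLLM source code.

def compute_positions(prompt_tokens, image_grid_thw):
    """Walk prompt tokens; for each image placeholder, consume the next
    (t, h, w) grid entry to decide how many positions it occupies."""
    IMAGE_PLACEHOLDER = "<|image_pad|>"  # assumed placeholder string
    positions = []
    grid_index = 0
    pos = 0
    for tok in prompt_tokens:
        if tok == IMAGE_PLACEHOLDER:
            # If the prompt spells a placeholder but no image was supplied,
            # image_grid_thw is empty and this lookup raises IndexError.
            t, h, w = image_grid_thw[grid_index]
            grid_index += 1
            pos += t * h * w
        else:
            pos += 1
        positions.append(pos)
    return positions

# A text-only prompt that spells the placeholder token crashes the worker:
crashed = False
try:
    compute_positions(["Hello", "<|image_pad|>", "world"], [])
except IndexError:
    crashed = True
```

This is why the impact is availability-only (A:H, C:N, I:N): the attacker cannot read or alter data, but a single crafted text prompt kills the serving worker.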
|
| Publication Date |
May 13, 2026, 5:16 a.m. |
| Registration Date |
May 15, 2026, 4:18 a.m. |
| Last Update |
May 15, 2026, 12:38 a.m. |
|
CVSS 3.1: HIGH
|
| Score |
7.5
|
| Vector |
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H |
| Attack Vector (AV) |
Network |
| Attack Complexity (AC) |
Low |
| Privileges Required (PR) |
None |
| User Interaction (UI) |
None |
| Scope (S) |
Unchanged |
| Confidentiality Impact (C) |
None |
| Integrity Impact (I) |
None |
| Availability Impact (A) |
High |
Affected software configurations
| Configuration 1 |
| cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* |
| 0.6.1 or higher |
| less than 0.20.0 |