llama.cpp is a C/C++ inference engine for several LLM models. Prior to release b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory-size validation by crafting a GGUF file with specific tensor dimensions. The overflow causes `ggml_nbytes` to return a far smaller size than the tensor actually requires (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. The resulting memory corruption potentially allows Remote Code Execution (RCE). Release b7824 contains a fix.
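To make the overflow concrete, here is a minimal C sketch of the bug class, not the actual `ggml_nbytes` implementation: a tensor's byte size is derived from the product of its attacker-controlled dimensions, an unchecked 64-bit multiply wraps around, and the loader sees a tiny size for an enormous tensor. The dimension values, element size, and function names (`naive_nbytes`, `checked_nbytes`) are all hypothetical; the checked variant only illustrates the kind of overflow guard the b7824 fix implies.

```c
/* Illustrative sketch of the overflow class only -- NOT the actual
 * ggml_nbytes() code. All names and values here are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_DIMS 4  /* ggml tensors have up to 4 dimensions */

/* Unchecked size computation: the product of attacker-controlled
 * dimensions wraps modulo 2^64 and can come out tiny. */
static size_t naive_nbytes(const int64_t ne[MAX_DIMS], size_t type_size) {
    size_t nbytes = type_size;
    for (int i = 0; i < MAX_DIMS; ++i) {
        nbytes *= (size_t) ne[i];          /* silent wraparound here */
    }
    return nbytes;
}

/* Overflow-checked variant in the spirit of the fix: fail closed
 * whenever a multiply would exceed SIZE_MAX. */
static bool checked_nbytes(const int64_t ne[MAX_DIMS], size_t type_size,
                           size_t *out) {
    size_t nbytes = type_size;
    for (int i = 0; i < MAX_DIMS; ++i) {
        if (ne[i] <= 0) {
            return false;                  /* reject bogus dimensions */
        }
        if (nbytes > SIZE_MAX / (size_t) ne[i]) {
            return false;                  /* multiply would overflow */
        }
        nbytes *= (size_t) ne[i];
    }
    *out = nbytes;
    return true;
}

int main(void) {
    /* Hypothetical GGUF tensor dims: the true size is on the order of
     * 2^68 bytes (hundreds of exabytes), but the naive product wraps
     * around to just 64 bytes. */
    const int64_t ne[MAX_DIMS] = { (INT64_C(1) << 62) + 1, 16, 1, 1 };
    const size_t type_size = 4;            /* e.g. 4-byte float elements */

    printf("naive size:   %zu bytes\n", naive_nbytes(ne, type_size));

    size_t nbytes;
    if (!checked_nbytes(ne, type_size, &nbytes)) {
        printf("checked size: rejected (would overflow)\n");
    } else {
        printf("checked size: %zu bytes\n", nbytes);
    }
    return 0;
}
```

Run as-is, the naive path reports 64 bytes while the checked path rejects the tensor. That mismatch is what lets a crafted GGUF file pass size validation and then overflow the heap buffer allocated from the wrapped value.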


Advisories

No advisories yet.

Fixes

Solution

No solution given by the vendor.


Workaround

No workaround given by the vendor.

History

Tue, 24 Mar 2026 02:30:00 +0000

Initial record creation (values added only; nothing removed):

Description: as given above.
Title: llama.cpp has a Heap Buffer Overflow via Integer Overflow in GGUF Tensor Parsing
Weaknesses: CWE-122 (Heap-based Buffer Overflow), CWE-190 (Integer Overflow or Wraparound)
References: none listed.
Metrics: CVSS v3.1 base score 7.8 (High), vector CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H (local attack vector, low complexity, no privileges required, user interaction required, unchanged scope, high confidentiality/integrity/availability impact).



MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-03-24T00:01:40.989Z

Reserved: 2026-03-18T18:55:47.427Z

Link: CVE-2026-33298

Vulnrichment

No data.

NVD

Status: Received

Published: 2026-03-24T01:17:01.870

Modified: 2026-03-24T01:17:01.870

Link: CVE-2026-33298

Redhat

No data.

OpenCVE Enrichment

No data.

Weaknesses

CWE-122: Heap-based Buffer Overflow
CWE-190: Integer Overflow or Wraparound