No vendor fix or workaround currently provided.
Tracking
No advisories yet.
Thu, 30 Apr 2026 17:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| CPEs | | cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:* |
Tue, 24 Mar 2026 14:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Tue, 24 Mar 2026 10:45:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| First Time appeared | | Ggml, Ggml llama.cpp |
| Vendors & Products | | Ggml, Ggml llama.cpp |
Tue, 24 Mar 2026 02:30:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | llama.cpp is an inference engine for several LLM models in C/C++. Prior to b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. This causes `ggml_nbytes` to return a significantly smaller size than required (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This vulnerability allows potential remote code execution (RCE) via memory corruption. Version b7824 contains a fix. |
| Title | | llama.cpp has a Heap Buffer Overflow via Integer Overflow in GGUF Tensor Parsing |
| Weaknesses | | CWE-122, CWE-190 |
| References | | |
| Metrics | | cvssV3_1 |
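The description above attributes the flaw to an unchecked multiplication of attacker-controlled tensor dimensions in `ggml_nbytes`. The following C sketch is illustrative only: the 4-dimension layout and the helpers `nbytes_vulnerable` and `nbytes_checked` are assumptions for demonstration, not the actual ggml implementation or the b7824 fix. It shows how the dimension product can wrap around to a small value on a 64-bit build, and how an explicit overflow check rejects such input.

```c
/*
 * Minimal sketch (not the actual ggml code) of how an integer overflow in a
 * tensor byte-size calculation can defeat a later size check. The helper
 * names and 4-dimension layout are illustrative assumptions based on the
 * CVE description.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define MAX_DIMS 4

/* Vulnerable pattern: the product of attacker-controlled dimensions wraps
 * around in 64-bit unsigned arithmetic, so the returned size looks small. */
static size_t nbytes_vulnerable(const uint64_t ne[MAX_DIMS], size_t type_size) {
    size_t nbytes = type_size;
    for (int i = 0; i < MAX_DIMS; i++) {
        nbytes *= (size_t) ne[i];      /* unchecked multiply can wrap around */
    }
    return nbytes;
}

/* Hardened pattern: refuse dimension products that would overflow size_t. */
static bool nbytes_checked(const uint64_t ne[MAX_DIMS], size_t type_size, size_t *out) {
    size_t nbytes = type_size;
    for (int i = 0; i < MAX_DIMS; i++) {
        if (ne[i] != 0 && nbytes > SIZE_MAX / ne[i]) {
            return false;              /* would overflow: reject the tensor */
        }
        nbytes *= (size_t) ne[i];
    }
    *out = nbytes;
    return true;
}

int main(void) {
    /* Dimensions an attacker could embed in a crafted GGUF header: their true
     * product is astronomically large, but the 64-bit product wraps to a
     * tiny value. */
    const uint64_t ne[MAX_DIMS] = { 1ULL << 32, 1ULL << 32, 1, 1 };
    const size_t type_size = 4;        /* e.g. 4 bytes per f32 element */

    size_t bad = nbytes_vulnerable(ne, type_size);
    printf("vulnerable size: %zu bytes (wrapped, far too small)\n", bad);

    size_t good;
    if (!nbytes_checked(ne, type_size, &good)) {
        printf("checked size: rejected (dimension product overflows size_t)\n");
    }
    return 0;
}
```

Compiled on a 64-bit system, the vulnerable path reports only a few bytes for dimensions whose true product is on the order of 2^66 bytes, which is the same class of undersized result that lets a subsequent copy overrun the heap buffer.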
Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2026-03-25T03:55:51.679Z
Reserved: 2026-03-18T18:55:47.427Z
Link: CVE-2026-33298
Updated: 2026-03-24T13:30:16.553Z
Status: Analyzed
Published: 2026-03-24T01:17:01.870
Modified: 2026-04-30T17:01:02.417
Link: CVE-2026-33298
No data.
OpenCVE Enrichment
Updated: 2026-03-25T20:40:45Z