Tue, 28 Apr 2026 21:30:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| CPEs | | cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:* |
Fri, 13 Mar 2026 10:00:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| First Time appeared | | Ggml, Ggml llama.cpp |
| Vendors & Products | | Ggml, Ggml llama.cpp |
Thu, 12 Mar 2026 21:15:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Thu, 12 Mar 2026 17:00:00 +0000

| Type | Values Removed | Values Added |
|---|---|---|
| Description | | llama.cpp is a C/C++ inference engine for several LLM models. Prior to b8146, gguf_init_from_file_impl() in gguf.cpp is vulnerable to an integer overflow, leading to an undersized heap allocation; the subsequent fread() then writes 528+ bytes of attacker-controlled data past the buffer boundary. This bypasses the fix for a similar bug in the same file, CVE-2025-53630, which overlooked some areas. This vulnerability is fixed in b8146. |
| Title | | llama.cpp has a Heap Buffer Overflow via Integer Overflow in `mem_size` Calculation — Bypass of CVE-2025-53630 Fix |
| Weaknesses | | CWE-122, CWE-190 |
| References | | |
| Metrics | | cvssV3_1 |
Status: PUBLISHED
Assigner: GitHub_M
Published:
Updated: 2026-03-14T03:55:24.463Z
Reserved: 2026-02-25T03:11:36.689Z
Link: CVE-2026-27940
Updated: 2026-03-12T20:33:45.889Z
Status: Analyzed
Published: 2026-03-12T17:16:49.920
Modified: 2026-04-28T21:27:02.260
Link: CVE-2026-27940
OpenCVE Enrichment
Updated: 2026-03-20T15:48:58Z