Description
llama-cpp-python provides Python bindings for llama.cpp. The library relies on the `Llama` class in `llama.py` to load `.gguf` llama.cpp models. The class's `__init__` constructor takes several parameters that configure how the model is loaded and run. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also reads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct `self.chat_handler` for the model. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, which is later rendered in `__call__` to construct the prompt for each interaction. This permits Jinja2 server-side template injection, leading to remote code execution via a carefully constructed payload embedded in the model metadata.
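The core flaw can be illustrated with Jinja2 alone, outside llama-cpp-python. A minimal sketch (the probe payload below is a standard illustrative SSTI example, not taken from the advisory): rendering an attacker-controlled template in a plain `jinja2.Environment` lets the template pivot through dunder attributes into module globals and call into `os`, while Jinja2's sandboxed environment rejects the same access.

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Illustrative SSTI probe: pivot from the built-in `cycler` template global
# through dunder attributes into its defining module's globals, which
# include the `os` module.
payload = "{{ cycler.__init__.__globals__.os.getcwd() }}"

# Sandbox-less Environment, as used by the affected Jinja2ChatFormatter:
# the template escapes into the interpreter and executes os.getcwd().
print(Environment().from_string(payload).render())

# Sandboxed environment: the same dunder access raises SecurityError.
try:
    ImmutableSandboxedEnvironment().from_string(payload).render()
except SecurityError:
    print("payload blocked by sandbox")
```

In a real attack, such a payload would sit in the chat-template field of a `.gguf` file's metadata, so that simply loading the model and chatting with it triggers the rendering.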
Published: 2024-05-10
Score: 9.7 Critical
EPSS: 61.8% High
KEV: No
Impact: n/a
Action: n/a

Remediation

No vendor fix or workaround currently provided.
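Until a fixed release is available, one stopgap is to treat the embedded chat template as untrusted input. A minimal sketch (the `template_looks_suspicious` helper is hypothetical, not part of llama-cpp-python): refuse to render templates that reference dunder attributes, the pivot that practical Jinja2 SSTI payloads rely on.

```python
import re

# Hypothetical pre-load heuristic: flag chat templates that reference dunder
# attributes (__init__, __globals__, __subclasses__, ...). A stopgap only --
# rendering in a sandboxed Jinja2 environment is the proper fix.
DUNDER = re.compile(r"__\w+__")

def template_looks_suspicious(chat_template: str) -> bool:
    """Return True if the template references any dunder attribute."""
    return bool(DUNDER.search(chat_template))

print(template_looks_suspicious(
    "{% for m in messages %}{{ m.role }}: {{ m.content }}{% endfor %}"))  # benign
print(template_looks_suspicious(
    "{{ cycler.__init__.__globals__.os.popen('id').read() }}"))           # flagged
```

Another option, assuming the model is known to use a supported prompt format, is to pass an explicit `chat_format` to `Llama` so a built-in handler is used and the metadata template is never rendered.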


Advisories
Source: Github GHSA
ID: GHSA-56xg-wfcc-g829
Title: llama-cpp-python vulnerable to Remote Code Execution by Server-Side Template Injection in Model Metadata

MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2024-08-02T02:51:10.739Z

Reserved: 2024-05-02T06:36:32.439Z

Link: CVE-2024-34359

Vulnrichment

Updated: 2024-08-02T02:51:10.739Z

NVD

Status: Deferred

Published: 2024-05-14T15:38:45.093

Modified: 2026-04-15T00:35:42.020

Link: CVE-2024-34359

Redhat

No data.

OpenCVE Enrichment

No data.

Weaknesses