Description
LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before version 1.83.7, the POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process. The endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. This issue has been patched in version 1.83.7.
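The flaw is a classic server-side template injection (CWE-1336): a renderer that evaluates attacker-controlled template syntax lets the attacker traverse into the host process's object internals. The sketch below illustrates the pattern using Python's built-in `str.format` as a stand-in renderer; `PromptContext`, `render_unsafe`, and `render_safe` are hypothetical names for illustration, not LiteLLM's actual code.

```python
# Illustrative sketch of the CWE-1336 pattern; PromptContext and both
# render helpers are hypothetical, not LiteLLM's real implementation.
from string import Template


class PromptContext:
    """Stand-in for an object a proxy might expose to prompt templates."""

    def __init__(self, user: str):
        self.user = user


def render_unsafe(template: str, ctx: PromptContext) -> str:
    # Vulnerable pattern: str.format on an attacker-controlled template
    # permits dotted attribute traversal into Python internals.
    return template.format(ctx=ctx)


def render_safe(template: str, variables: dict) -> str:
    # Safer pattern: string.Template substitutes plain identifiers only,
    # so dotted expressions like ctx.__class__ are never evaluated.
    return Template(template).safe_substitute(variables)


# Benign use works either way:
greeting = render_unsafe("Hello {ctx.user}", PromptContext("alice"))

# But a crafted template walks dunder attributes to reach module state:
leaked = render_unsafe("{ctx.__init__.__globals__}", PromptContext("x"))
# `leaked` now contains this module's globals as a string, which in a
# real proxy process could include imported secrets and configuration.
```

In a real deployment the durable fix is to render user templates in a sandboxed engine (or not render them at all) rather than filtering individual payloads; upgrading to LiteLLM 1.83.7 applies the vendor's fix.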
Published: 2026-05-08
Score: 8.6 High
EPSS: < 1% Very Low
KEV: No
Impact: n/a
Action: n/a

Remediation

Upgrade to LiteLLM 1.83.7 or later, which patches this issue. No vendor workaround for earlier versions has been published.

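Because the advisory gives a bounded affected range, from 1.80.5 up to but not including 1.83.7, a quick triage check can compare an installed version against it. A minimal sketch, assuming plain dotted-integer version strings; the helper names are illustrative and not part of any LiteLLM API.

```python
# Triage helper: is a given LiteLLM version inside the affected range
# [1.80.5, 1.83.7) stated by the advisory? Assumes plain dotted-integer
# version strings; pre-release suffixes are out of scope for this sketch.

def parse_version(version: str) -> tuple:
    """Split '1.83.7' into a comparable tuple (1, 83, 7)."""
    return tuple(int(part) for part in version.split("."))

FIRST_AFFECTED = parse_version("1.80.5")  # first affected release, per advisory
FIRST_FIXED = parse_version("1.83.7")     # patch release

def is_affected(version: str) -> bool:
    v = parse_version(version)
    return FIRST_AFFECTED <= v < FIRST_FIXED
```

For example, `is_affected("1.83.6")` is true while `is_affected("1.83.7")` is false; the actual remediation remains upgrading to 1.83.7 or later.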


Advisories
Source: GitHub GHSA
ID: GHSA-xqmj-j6mv-4862
Title: LiteLLM: Server-Side Template Injection in /prompts/test endpoint
History

Thu, 14 May 2026 12:15:00 +0000

Weaknesses: added CWE-94
References: updated
Metrics: threat_severity changed from None to Important


Wed, 13 May 2026 17:15:00 +0000

First Time Appeared: Litellm; Litellm litellm
CPEs: added cpe:2.3:a:litellm:litellm:*:*:*:*:*:*:*:*
Vendors & Products: added Litellm; Litellm litellm
Metrics: added cvssV3_1 (score 8.8, vector CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)


Fri, 08 May 2026 15:15:00 +0000

Metrics: added ssvc (Automatable: no; Exploitation: none; Technical Impact: total; version 2.0.3)


Fri, 08 May 2026 05:45:00 +0000

First Time Appeared: Berriai; Berriai litellm
Vendors & Products: added Berriai; Berriai litellm

Fri, 08 May 2026 04:00:00 +0000

Description: added (identical to the description at the top of this page)
Title: added (LiteLLM: Server-Side Template Injection in /prompts/test endpoint)
Weaknesses: added CWE-1336
References: added
Metrics: added cvssV4_0 (score 8.6, vector CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N)


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-05-09T03:55:49.702Z

Reserved: 2026-04-25T05:04:37.027Z

Link: CVE-2026-42203

Vulnrichment

Updated: 2026-05-08T14:36:54.287Z

NVD

Status: Analyzed

Published: 2026-05-08T04:16:19.450

Modified: 2026-05-13T17:14:58.667

Link: CVE-2026-42203

Red Hat

Severity: Important

Public Date: 2026-05-08T03:36:58Z

Links: CVE-2026-42203 - Bugzilla

OpenCVE Enrichment

Updated: 2026-05-14T14:00:20Z

Weaknesses