Vulnerability Details : CVE-2023-29374
In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via Python's exec function.
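The unsafe pattern can be sketched as follows. This is an illustrative reconstruction, not LangChain's actual code; the function names are hypothetical. The vulnerable design interpolates model output directly into exec, so a prompt-injected payload runs with full interpreter privileges. A restricted AST walk that accepts only arithmetic, similar in spirit to the restricted-evaluation approach taken by the patch referenced below, rejects anything else:

```python
# Illustrative sketch of the CVE-2023-29374 pattern (hypothetical names,
# not actual LangChain source).
import ast
import operator

def vulnerable_eval(llm_output: str) -> str:
    # LLMMathChain <= 0.0.131 executed the model's "math code" directly,
    # so a prompt-injected payload is executed too (arbitrary code execution).
    local_vars: dict = {}
    exec(f"result = {llm_output}", {}, local_vars)
    return str(local_vars["result"])

# Safer alternative: evaluate only arithmetic expressions via the AST.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"disallowed expression: {expr!r}")
    return _eval(ast.parse(expr, mode="eval"))
```

With this guard, safe_eval("2 ** 10 + 1") returns 1025, while an injected payload such as "__import__('os').system(...)" parses to a function call node and raises ValueError instead of executing.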
Vulnerability category: Execute code
Products affected by CVE-2023-29374
- cpe:2.3:a:langchain:langchain:*:*:*:*:*:*:*:*
Exploit prediction scoring system (EPSS) score for CVE-2023-29374
0.33%
Probability of exploitation activity in the next 30 days
EPSS Score History
~72%
Percentile: the proportion of all scored vulnerabilities with an EPSS score at or below this one
CVSS scores for CVE-2023-29374
| Base Score | Base Severity | CVSS Vector | Exploitability Score | Impact Score | Score Source | First Seen |
|---|---|---|---|---|---|---|
| 9.8 | CRITICAL | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | 3.9 | 5.9 | NIST | |
CWE ids for CVE-2023-29374
- The product constructs all or part of a command, data structure, or record using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify how it is parsed or interpreted when it is sent to a downstream component. Assigned by: nvd@nist.gov (Primary)
References for CVE-2023-29374
-
https://twitter.com/rharang/status/1641899743608463365/photo/1
Rich Harang (@rharang@mastodon.social) on Twitter: "this is why we can't have nice things. A langchain LLM agent for solving math problems just yeets any python code you give it into an eval() stateme…" (Exploit)
-
https://github.com/hwchase17/langchain/issues/814
Exploiting llm-math (and likely PAL) and suggesting and alternative · Issue #814 · hwchase17/langchain · GitHub (Exploit; Issue Tracking; Patch)
-
https://github.com/hwchase17/langchain/pull/1119
Patch LLMMathChain exec vulnerability by zachschillaci27 · Pull Request #1119 · hwchase17/langchain · GitHub (Patch)
-
https://github.com/hwchase17/langchain/issues/1026
Security concerns · Issue #1026 · hwchase17/langchain · GitHub (Issue Tracking)