langchain_experimental (aka LangChain Experimental) 0.1.17 through 0.3.0 for LangChain allows attackers to execute arbitrary code via sympy.sympify (which uses eval) in LLMSymbolicMathChain. LLMSymbolicMathChain was introduced in commit fcccde406dd9e9b05fc9babcbeb9ff527b0ec0c6 (2023-10-05).
The product receives input or data, but it does not validate, or incorrectly validates, that the input has the properties required to process the data safely and correctly.
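The root cause is that parsing untrusted input with an eval-based parser lets attackers run arbitrary Python. The sketch below is a minimal, self-contained analogue of the vulnerable pattern (it does not use the actual sympy or LangChain code); the `naive_parse` function and its inputs are hypothetical illustrations of how an expression string can escalate to code execution.

```python
# Minimal analogue of an eval-based expression parser, illustrating the
# vulnerability class. sympy.sympify historically fell back to eval() on
# the input string, which is what this toy parser imitates.

def naive_parse(expr: str):
    # DANGEROUS: passes the untrusted string straight to eval(),
    # so any Python expression in `expr` is executed.
    return eval(expr)

# Benign use: behaves like a math-expression evaluator.
print(naive_parse("2 + 3"))  # → 5

# Malicious use: the "expression" imports a module and calls a function.
# A real exploit would call os.system(...) instead of the harmless getcwd().
result = naive_parse("__import__('os').getcwd()")
print(type(result).__name__)  # → str
```

The mitigation in a consumer of such a library is to never feed untrusted (e.g. LLM-generated) strings to an eval-backed parser, or to upgrade past the fixed release.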
| Link | Tags |
|---|---|
| https://cwe.mitre.org/data/definitions/95.html | not applicable |
| https://github.com/langchain-ai/langchain/releases/tag/langchain-experimental%3D%3D0.3.0 | release notes |
| https://docs.sympy.org/latest/modules/codegen.html | technical description |
| https://gist.github.com/12end/68c0c58d2564ef4141bccd4651480820#file-cve-2024-46946-txt | exploit, third party advisory |