In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
The product constructs all or part of a command, data structure, or record using externally-influenced input from an upstream component, but it does not neutralize or incorrectly neutralizes special elements that could modify how it is parsed or interpreted when it is sent to a downstream component.
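The vulnerable pattern can be illustrated with a minimal sketch (illustrative only, not the actual LLMMathChain source): the chain asks the model to translate a "math question" into Python and then runs whatever text comes back with `exec`, so a prompt-injected question can smuggle arbitrary code. The `fake_llm` and `run_math_chain` names below are hypothetical stand-ins.

```python
# Minimal sketch of the vulnerable pattern (assumption: not the real
# LangChain code): LLM output is executed verbatim with exec().

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; a prompt-injected "question" can steer
    # the model into emitting arbitrary Python instead of a math expression.
    # Here we simply echo the question to keep the sketch self-contained.
    return prompt

def run_math_chain(question: str) -> str:
    code = fake_llm(question)
    local_vars: dict = {}
    # The dangerous step: whatever the model returned is executed directly.
    exec(code, {}, local_vars)
    return str(local_vars.get("answer", ""))

if __name__ == "__main__":
    # A benign "math question"...
    print(run_math_chain("answer = 2 + 2"))
    # ...and an injected one that runs an OS command instead of math
    # (assumes a Unix-like system with the `id` command available).
    print(run_math_chain("import os; answer = os.popen('id').read()"))
```

The fix referenced in the links below replaces this direct `exec`-style evaluation with a safer expression evaluator, which is why the issue is tracked as a command/code injection weakness.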
| Link | Tags |
|---|---|
| https://github.com/hwchase17/langchain/pull/1119 | patch |
| https://github.com/hwchase17/langchain/issues/814 | patch, issue tracking, exploit |
| https://twitter.com/rharang/status/1641899743608463365/photo/1 | exploit |
| https://github.com/hwchase17/langchain/issues/1026 | issue tracking |