An issue in deep-diver LLM-As-Chatbot before commit 99c2c03 allows a remote attacker to execute arbitrary code via the modelsbyom.py component.
The product constructs all or part of a code segment using externally influenced input from an upstream component, but it does not neutralize, or incorrectly neutralizes, special elements that could modify the syntax or behavior of the intended code segment.
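To illustrate this weakness class (code injection), the sketch below shows the general pattern in Python: externally influenced text is passed to `eval()`, so an attacker can supply an arbitrary expression instead of configuration data. This is a minimal, hypothetical example of the weakness; the function names and inputs are assumptions for illustration and are not taken from the LLM-As-Chatbot source.

```python
import ast

def load_model_config_unsafe(user_supplied: str):
    # VULNERABLE (illustrative): externally influenced input flows directly
    # into eval(), so any Python expression the attacker supplies is executed.
    return eval(user_supplied)

def load_model_config_safe(user_supplied: str):
    # Safer pattern: parse the input as a data literal, never as code.
    return ast.literal_eval(user_supplied)

if __name__ == "__main__":
    benign = "{'model': 'llama', 'max_tokens': 256}"
    print(load_model_config_unsafe(benign))  # works, but so would malicious input
    print(load_model_config_safe(benign))    # same result, without executing code

    malicious = "__import__('os').system('echo pwned')"
    # load_model_config_unsafe(malicious) would run the shell command;
    # load_model_config_safe(malicious) raises ValueError instead.
```

In the unsafe variant, the attacker fully controls the evaluated code; the safe variant accepts only Python literals, which is one common mitigation alongside validating input against an allowlist.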