CVE-2025-52566

Public Exploit
llama.cpp tokenizer signed vs. unsigned heap overflow

Description

llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) contained a signed vs. unsigned integer overflow that corrupted the size comparison used when copying tokens. As a result, carefully crafted text input could trigger a heap overflow in the inference engine during tokenization. This issue has been patched in version b5721.

Severity

CVSS 3.1 score: 8.6 (High)
EPSS: 0.01%
Affected: ggml-org llama.cpp

Frequently Asked Questions

What is the severity of CVE-2025-52566?
CVE-2025-52566 has been scored as a high severity vulnerability.
How to fix CVE-2025-52566?
To fix CVE-2025-52566, upgrade to llama.cpp version b5721 or later, as indicated in the vendor release notes. As of now, no other specific mitigation guidance is available.
Is CVE-2025-52566 being actively exploited in the wild?
Based on public information, it is possible that CVE-2025-52566 is being exploited or will be exploited in the near future. According to its EPSS score, there is a roughly 0.01% probability that this vulnerability will be exploited by malicious actors in the next 30 days.
What software or system is affected by CVE-2025-52566?
CVE-2025-52566 affects ggml-org llama.cpp.
This platform uses data from the NIST NVD, MITRE CVE, MITRE CWE, First.org and CISA KEV but is not endorsed or certified by these entities. CVE is a registered trademark of the MITRE Corporation and the authoritative source of CVE content is MITRE's CVE web site. CWE is a registered trademark of the MITRE Corporation and the authoritative source of CWE content is MITRE's CWE web site.
© 2025 Under My Watch. All Rights Reserved.