CVE-2025-53630

Integer Overflow in GGUF Parser can lead to Heap Out-of-Bounds Read/Write in gguf

Description

llama.cpp is a C/C++ implementation of inference for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
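
The root cause falls into a well-known bug class: a length or element count read from an untrusted GGUF file is used in size arithmetic, and if that arithmetic wraps around, the parser allocates a buffer smaller than the data it later reads or writes. The sketch below is a minimal, hypothetical illustration of that pattern and of an overflow-checked alternative; it is not the actual gguf.cpp code, and the function name, I/O helpers, and parameters are assumptions made for the example.

```cpp
// Minimal sketch (NOT the actual llama.cpp / gguf.cpp code): reading an
// attacker-controlled element count and sizing a heap buffer from it.
#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <vector>

// Hypothetical helper: read one array of `elem_size`-byte elements whose
// count is stored as a uint64 in the file.
std::vector<uint8_t> read_array_checked(FILE * f, size_t elem_size) {
    uint64_t n_elems = 0;
    if (fread(&n_elems, sizeof(n_elems), 1, f) != 1) {
        throw std::runtime_error("failed to read element count");
    }

    // Vulnerable pattern: size_t n_bytes = n_elems * elem_size;
    // If the multiplication wraps, n_bytes is small, the allocation below is
    // undersized, and the subsequent read/parse becomes a heap OOB access.

    // Checked pattern: reject counts whose byte size cannot be represented.
    if (elem_size != 0 && n_elems > SIZE_MAX / elem_size) {
        throw std::runtime_error("array size overflows size_t");
    }
    const size_t n_bytes = static_cast<size_t>(n_elems) * elem_size;

    std::vector<uint8_t> data(n_bytes);
    if (n_bytes > 0 && fread(data.data(), 1, n_bytes, f) != n_bytes) {
        throw std::runtime_error("truncated array payload");
    }
    return data;
}
```

The fix referenced above (commit 26a48ad…) addresses the overflow in gguf_init_from_file_impl; the code here only demonstrates the general defensive step of validating untrusted sizes before using them for allocation or indexing.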

Category

Integer Overflow

CVSS

Score: 8.9 (CVSS 4.0)
Severity: High
EPSS: 0.04%
Affected: ggml-org llama.cpp

Frequently Asked Questions

What is the severity of CVE-2025-53630?
CVE-2025-53630 has been scored as a high severity vulnerability.
How to fix CVE-2025-53630?
To fix CVE-2025-53630, upgrade to a version of llama.cpp that includes commit 26a48ad699d50b6268900062661bd22f3e792579, or at minimum make sure you are using an up-to-date version of the affected component(s) as indicated in the vendor release notes. At present, no other specific guidance is available.
Is CVE-2025-53630 being actively exploited in the wild?
As of now, there is no information confirming that CVE-2025-53630 is being actively exploited. According to its EPSS score, there is a ~0% probability that this vulnerability will be exploited by malicious actors in the next 30 days.
What software or system is affected by CVE-2025-53630?
CVE-2025-53630 affects ggml-org llama.cpp.