CVE-2025-24357

vLLM allows a malicious model to achieve RCE via torch.load in hf_model_weights_iterator

Description

vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load model checkpoints downloaded from Hugging Face. It calls torch.load, whose weights_only parameter defaults to False; when torch.load unpickles malicious pickle data, arbitrary code executes during deserialization. This vulnerability is fixed in v0.7.0.
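The root cause is ordinary Python pickle deserialization. The sketch below is illustrative only and is not taken from the vLLM code base: the MaliciousPayload class and the file name evil_checkpoint.bin are invented for the example. It shows how an object's __reduce__ hook runs during torch.load when weights_only=False, and how weights_only=True rejects it.

    # Illustrative sketch only: MaliciousPayload and evil_checkpoint.bin are
    # made up for this example and do not appear in vLLM. A harmless echo
    # stands in for an attacker's command.
    import os

    import torch


    class MaliciousPayload:
        def __reduce__(self):
            # Called during unpickling, i.e. inside torch.load.
            return (os.system, ("echo 'code executed during torch.load'",))


    # torch.save pickles arbitrary Python objects, so a checkpoint file can
    # carry this payload alongside (or instead of) real weights.
    torch.save(MaliciousPayload(), "evil_checkpoint.bin")

    # Vulnerable pattern: with weights_only=False (the historical default),
    # unpickling runs the payload above.
    torch.load("evil_checkpoint.bin", weights_only=False)

    # Mitigated pattern: weights_only=True uses a restricted unpickler that
    # only reconstructs tensors and primitive containers, so arbitrary
    # callables such as os.system are rejected.
    try:
        torch.load("evil_checkpoint.bin", weights_only=True)
    except Exception as exc:
        print(f"Blocked by weights_only=True: {exc}")

Loading checkpoints in a non-pickle format such as safetensors sidesteps this class of issue entirely.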

Category

Severity: High (CVSS 3.1 score 7.5)
EPSS: 0.03%
Vendor Advisory: github.com
Affected: vllm-project vllm
Published at:
Updated at:

Frequently Asked Questions

What is the severity of CVE-2025-24357?
CVE-2025-24357 has been scored as a high severity vulnerability.
How to fix CVE-2025-24357?
To fix CVE-2025-24357, upgrade vLLM to v0.7.0 or later, the release in which the issue is fixed; check the vendor release notes for details. A version-check sketch follows this FAQ.
Is CVE-2025-24357 being actively exploited in the wild?
As of now, there is no information confirming that CVE-2025-24357 is being actively exploited. According to its EPSS score (0.03%), there is a ~0% probability that this vulnerability will be exploited by malicious actors in the next 30 days.
What software or system is affected by CVE-2025-24357?
CVE-2025-24357 affects vllm-project vllm.
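Since the advisory only gives the fixed version (v0.7.0), a minimal remediation check is to compare the installed package version against it. The sketch below is not part of the advisory and assumes the third-party packaging library is available.

    # Minimal sketch, not from the advisory: check whether the installed vLLM
    # already includes the fix released in v0.7.0. Assumes the third-party
    # 'packaging' library is installed.
    from importlib.metadata import version

    from packaging.version import Version

    installed = Version(version("vllm"))
    if installed < Version("0.7.0"):
        print(f"vllm {installed} is affected by CVE-2025-24357; upgrade to >= 0.7.0")
    else:
        print(f"vllm {installed} includes the fix for CVE-2025-24357")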