Summary: The video discusses the issue of malicious machine learning models being uploaded to platforms like Hugging Face. It highlights how attackers can disguise harmful models by packaging them in archive formats that current scanners fail to inspect, raising concerns about supply chain security in the machine learning community.
Keypoints:
- The video continues the discussion on large language models (LLMs) and their security risks.
- Malicious ML models are being uploaded as 7z archives, which bypass Hugging Face's scanning mechanisms.
- This method exploits the fact that scanners skip files whose extensions they do not recognize, so the harmful payload inside the archive is never inspected (see the sketch after this list).
- The speaker draws a parallel to software supply chain attacks, asking whether machine learning models deserve the same scrutiny developers already apply to npm packages.
- This situation highlights the need for improved security measures in the deployment and sharing of machine learning models.
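
The video does not walk through code, but the underlying risk is easy to illustrate: pickle-based model files can execute arbitrary code when loaded, and a scanner that only inspects recognized model extensions never sees a payload hidden inside a 7z archive. The Python sketch below is a hypothetical illustration of that gap (the class name, shell command, and extension list are invented for the example and are not taken from the video).

```python
import os
import pickle
import pickletools

# Hypothetical payload (not from the video): a pickle that runs an
# attacker-chosen shell command the moment it is deserialized.
class MaliciousModel:
    def __reduce__(self):
        # pickle.loads would call os.system(...) to "reconstruct" the object
        return (os.system, ("echo arbitrary code ran at model-load time",))

payload = pickle.dumps(MaliciousModel())  # never call pickle.loads on this

# Naive extension-based scanner, assumed to resemble the behavior described
# in the video: only files with known model extensions are inspected.
SCANNED_EXTENSIONS = (".bin", ".pkl", ".pt", ".pth")

def naive_scan(filename: str, data: bytes) -> str:
    if not filename.endswith(SCANNED_EXTENSIONS):
        # A .7z (or any other unrecognized) wrapper lands here and is never opened.
        return "skipped (unrecognized extension)"
    # Flag pickles that import and invoke arbitrary callables.
    suspicious = {"GLOBAL", "STACK_GLOBAL", "REDUCE"}
    opcodes = {op.name for op, _, _ in pickletools.genops(data)}
    return "flagged" if opcodes & suspicious else "clean"

print(naive_scan("model.bin", payload))  # flagged
print(naive_scan("model.7z", payload))   # skipped (unrecognized extension)
```

Running the sketch shows the same bytes being flagged under a scanned extension and silently skipped under an unrecognized one, which is the gap the speaker warns about.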
Youtube Video: https://www.youtube.com/watch?v=FCnfT1r-aDw
Youtube Channel: Security Weekly – A CRA Resource
Video Published: Tue, 25 Mar 2025 21:00:16 +0000