Researchers Uncover Vulnerabilities in AI and ML Models
Recent research has uncovered numerous security vulnerabilities in open-source artificial intelligence (AI) and machine learning (ML) models, raising concerns about risks such as remote code execution and data theft. The flaws were disclosed by Protect AI through its Huntr bug bounty platform and affect tools including ChuanhuChatGPT, Lunary, and LocalAI.