Researchers Uncover Vulnerabilities in AI and ML Models

Recent research has uncovered a significant number of security vulnerabilities in open-source artificial intelligence (AI) and machine learning (ML) models, raising alarms about risks such as remote code execution and data theft. The flaws were disclosed by Protect AI through its Huntr bug bounty platform and affect tools such as ChuanhuChatGPT, Lunary, and LocalAI.

Key Vulnerabilities Identified

Among the most severe vulnerabilities are two critical issues affecting Lunary, a toolkit for large language models (LLMs):

- CVE-2024-7474: An Insecure Direct Object Reference (IDOR) vulnerability that allows authenticated users to view or delete external users, posing a risk of unauthorized data access and data loss (CVSS score: 9.1).

- CVE-2024-7475: An improper access control vulnerability that enables attackers to modify SAML configurations, potentially allowing unauthorized access to sensitive information (CVSS score: 9.1).

Additionally, a third IDOR vulnerability in Lunary (CVE-2024-7473) permits attackers to update other users' prompts by tampering with request parameters, further compromising user security. A generic sketch of the IDOR pattern and its fix appears below.
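As a minimal sketch of the IDOR bug class (illustrative only, with a hypothetical in-memory user store; this is not Lunary's actual code), the problem comes down to a handler that acts on a client-supplied identifier without checking that the caller is authorized to touch that record:

```python
# Generic illustration of an IDOR bug and its fix; not Lunary's code.
# USERS stands in for a hypothetical user store.

USERS = {1: {"name": "alice"}, 2: {"name": "bob"}}

def is_admin(user_id: int) -> bool:
    return False  # placeholder policy for this sketch

def delete_user_vulnerable(caller_id: int, target_id: int) -> None:
    # BAD: any authenticated caller can delete any user simply by
    # changing target_id in the request.
    USERS.pop(target_id, None)

def delete_user_fixed(caller_id: int, target_id: int) -> None:
    # GOOD: authorize the caller against the specific record first.
    if caller_id != target_id and not is_admin(caller_id):
        raise PermissionError("caller may not delete this user")
    USERS.pop(target_id, None)
```

The fix is authorization, not authentication: the server must verify that *this* caller may act on *this* record, not merely that the caller is logged in.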

ChuanhuChatGPT is affected by a path traversal flaw (CVE-2024-5982) that could lead to arbitrary code execution and exposure of sensitive data (CVSS score: 9.1).
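A compact sketch of the underlying bug class (again illustrative, not ChuanhuChatGPT's code; the upload directory is a made-up path): a user-supplied filename is joined onto a base directory, and "../" sequences let it escape.

```python
# Illustrative path traversal guard; not ChuanhuChatGPT's code.
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads").resolve()  # hypothetical upload dir

def open_upload(filename: str):
    # Resolve the full path, collapsing any ".." segments and symlinks.
    candidate = (BASE_DIR / filename).resolve()
    # Reject anything that resolves outside the intended directory
    # (Path.is_relative_to requires Python 3.9+).
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError(f"path escapes upload directory: {filename}")
    return candidate.open("rb")
```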

In LocalAI, an open-source project for running self-hosted LLMs, two vulnerabilities were found:

- CVE-2024-6983: A flaw that allows arbitrary code execution through the upload of a malicious configuration file (CVSS score: 8.8).

- CVE-2024-7010: A timing attack vulnerability that enables attackers to guess valid API keys by measuring server response times (CVSS score: 7.5). A sketch of this bug class and its standard fix follows this list.
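To see why this class of bug works (illustrative only; LocalAI itself is written in Go, and the key below is made up): a naive equality check returns as soon as a byte mismatches, so response time leaks how much of a guess is correct, while a constant-time comparison does not.

```python
# Generic timing-attack illustration; not LocalAI's code.
import hmac

VALID_API_KEY = "sk-example-0000"  # hypothetical stored secret

def check_key_vulnerable(guess: str) -> bool:
    # BAD: '==' can short-circuit at the first differing byte, so the
    # comparison time correlates with the length of the correct prefix.
    return guess == VALID_API_KEY

def check_key_fixed(guess: str) -> bool:
    # GOOD: hmac.compare_digest takes time independent of where the
    # inputs differ, closing the timing side channel.
    return hmac.compare_digest(guess.encode(), VALID_API_KEY.encode())
```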

Lastly, a remote code execution flaw was identified in the Deep Java Library (DJL), caused by an arbitrary file overwrite bug (CVE-2024-8396, CVSS score: 7.8).
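The general shape of this bug class, often called "zip slip" (sketched here in Python for consistency with the other examples; DJL is a Java library and this is not its code), is extracting an attacker-supplied archive without validating entry names, so an entry like `../../home/user/.bashrc` lands outside the target directory:

```python
# Generic archive-extraction guard; not DJL's code.
import tarfile
from pathlib import Path

def safe_extract(archive_path: str, dest: str) -> None:
    dest_dir = Path(dest).resolve()
    with tarfile.open(archive_path) as tf:
        for member in tf.getmembers():
            # Resolve each entry against the destination and refuse any
            # that would land outside it (e.g. names containing "..").
            target = (dest_dir / member.name).resolve()
            if not target.is_relative_to(dest_dir):
                raise ValueError(f"unsafe archive entry: {member.name}")
        # On Python 3.12+, tf.extractall(dest_dir, filter="data")
        # enforces similar checks natively.
        tf.extractall(dest_dir)
```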

Implications for Users

The disclosure of these vulnerabilities underscores the urgent need for users to update their installations of the affected AI/ML tools to the latest versions to mitigate these risks.

In addition to these findings, Protect AI has released Vulnhuntr, an open-source Python static code analyzer that uses LLMs to find zero-day vulnerabilities in Python codebases. The tool breaks code into small, manageable chunks so that analysis stays within the LLM's context window while still covering the relevant code paths.
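To make the chunking idea concrete, here is a toy sketch (my own illustration, not Vulnhuntr's implementation) that splits a Python source file into per-function segments small enough to feed to an LLM one at a time:

```python
# Toy chunker illustrating the "manageable segments" idea; not Vulnhuntr.
import ast

def chunk_by_function(source: str, max_chars: int = 4000) -> list[str]:
    tree = ast.parse(source)
    chunks = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            segment = ast.get_source_segment(source, node) or ""
            # Cap each chunk so it fits comfortably in a context window.
            chunks.append(segment[:max_chars])
    return chunks
```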

As organizations increasingly rely on AI and ML technologies, the importance of robust security measures cannot be overstated. VScanner is designed to address these very challenges by providing comprehensive vulnerability scanning solutions. By integrating VScanner into your security framework, you can proactively identify and remediate vulnerabilities before they can be exploited.

VScanner not only enhances your security posture but also helps ensure compliance with best practices in software development and deployment. With the rise of threats targeting AI models, leveraging VScanner can help safeguard your applications against vulnerabilities like those highlighted in recent reports.

By staying ahead of potential threats and employing effective scanning tools like VScanner, organizations can protect their sensitive data and maintain trust in their AI-driven solutions.

Source: The Hacker News

#vscanner #cybersecurity #AI