Intel's Max Severity Flaw Affects AI Model Compressor Users
CVSS 10-Rated Bug Could Enable Hackers to Execute Arbitrary Code on Systems

A maximum-severity bug in Intel's artificial intelligence model compression software can allow hackers to execute arbitrary code on systems that run affected versions. The technology giant has released a fix for the Neural Compressor flaw, which is rated 10 on the CVSS scale.
Neural Compressor helps companies reduce the memory an AI model needs while cutting cache miss rates and the computational cost of using neural networks, allowing systems to achieve higher inference performance. Companies use the open-source Python library to deploy AI applications on different types of hardware, including devices with limited computational power such as mobile devices.
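As a rough illustration of how the library is used, below is a minimal post-training quantization sketch against Neural Compressor's 2.x Python API. The toy model and random calibration data are placeholders for a real model and dataset, and the exact API surface can vary between releases.

```python
# Minimal post-training quantization sketch with Neural Compressor's 2.x
# Python API. The toy model and random calibration data are placeholders
# for a real model and dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Stand-in FP32 model and calibration loader.
fp32_model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
)
calib_loader = DataLoader(
    TensorDataset(torch.randn(64, 8), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

# The default config performs static INT8 post-training quantization.
q_model = fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_loader,
)
q_model.save("./quantized_model")  # smaller model for constrained hardware
```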
Intel did not specify how many companies use the software or the number of users affected. The bug affects only users running versions earlier than 2.5.0.
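Administrators who want to check their exposure can gate on the installed version. The snippet below is an illustrative check, not part of Intel's advisory:

```python
# Illustrative exposure check: CVE-2024-22476 affects Neural Compressor
# releases earlier than 2.5.0, which contains Intel's fix.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version  # widely available; 'pip install packaging' if missing

try:
    installed = Version(version("neural-compressor"))
except PackageNotFoundError:
    installed = None

if installed is None:
    print("neural-compressor is not installed.")
elif installed < Version("2.5.0"):
    print(f"Vulnerable version {installed}: run 'pip install -U neural-compressor'.")
else:
    print(f"Version {installed} includes the fix.")
```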
Tracked as CVE-2024-22476, the flaw is the most severe among the 41 security advisories Intel released last week. The bug stems from improper input validation, a failure to sanitize user input. Hackers can exploit it remotely without any special privileges or user interaction, and it has a high impact on data confidentiality, integrity and availability.
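Intel has not published root-cause details, so the snippet below is a generic, hypothetical illustration of the flaw class (improper input validation, CWE-20) and not Intel's actual code: unvalidated input reaches a code-executing sink.

```python
# Hypothetical illustration of the flaw class (CWE-20, improper input
# validation) - NOT Intel's actual code. Untrusted input reaches a
# code-executing sink, giving a remote caller arbitrary code execution.
def apply_expression(expr: str) -> float:
    return eval(expr)  # dangerous: executes whatever the caller sends

# An attacker-supplied "expression" becomes code execution:
# apply_expression("__import__('os').system('id')")

# A safer variant validates input and accepts only literals:
import ast

def apply_expression_safe(expr: str) -> float:
    return float(ast.literal_eval(expr))  # rejects arbitrary code
```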
Intel said an external security entity submitted the bug report but did not identify the individual or company.
CVE-2024-22476 is one of two vulnerabilities Intel disclosed in its Neural Compressor software. The other, tracked as CVE-2024-21792 with a moderate severity score, is a time-of-check to time-of-use flaw that could give hackers access to unauthorized information. Exploiting it requires local, authenticated access to a vulnerable system.
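As background on the bug class, a time-of-check to time-of-use race generically looks like the hypothetical sketch below, not Intel's code: a resource is validated and then used later, leaving a window in which a local attacker can swap it.

```python
# Hypothetical time-of-check to time-of-use (TOCTOU) race (CWE-367) -
# NOT Intel's actual code. A file is validated, then reopened later,
# leaving a window for a local attacker to swap it for a symlink.
import os

path = "/tmp/model_config.yaml"

# Time of check: the path looks safe right now.
if os.path.exists(path) and not os.path.islink(path):
    # ...window: a local attacker replaces `path` with a symlink to a
    # file they are not authorized to read...

    # Time of use: this open() may hit a different file than was checked.
    with open(path) as f:
        config = f.read()

# Mitigation sketch: open once with os.O_NOFOLLOW, then validate the
# already-open descriptor with os.fstat instead of re-checking the path.
```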
Researchers have found several dozen vulnerabilities in large language models this past year that could result in manipulation of live conversations, exploitation of unpatched flaws, self-spreading zero-click attacks and the use of hallucinations to propagate malware.
Companies that use such software as a core component to build and support AI products can magnify the impact of a vulnerability, as in Intel's case. A month ago, researchers from Wiz found since-mitigated vulnerabilities in popular AI development platform HuggingFace that allowed attackers to tamper with models in its registry - and even add malicious ones to it.
Apart from the Neural Compressor vulnerabilities, Intel's disclosure included five high-severity privilege escalation vulnerabilities in its UEFI firmware for server products. The bugs, tracked as CVE-2024-22382, CVE-2024-23487, CVE-2024-24981, CVE-2024-23980 and CVE-2024-22095, are all input validation flaws that the company scored between 7.2 and 7.5 on the CVSS scale.