Researchers Demonstrate Novel Method of Hiding Malware Inside Neural Network Models

Researchers have shown how to hide malware inside a neural network model, embedding a payload in an image classifier in a way that effectively evades security solutions.

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui presented a method attackers could use to deliver malicious payloads inside neural network models while evading detection.

They were able to hide 36.9MB of binary code, which was almost impossible to detect, in a 178MB AlexNet model. They also said that with the increasing use of artificial intelligence, cybercriminals will increasingly rely on neural networks to carry out their attacks. The experiment aims to provide a realistic scenario for defending against neural-network-assisted attacks.

The trick involves selecting a layer within an existing trained model (e.g., an image classifier) and injecting the malware into that layer's parameters.

If the model does not have enough neurons to hold the payload, an attacker could instead start from an untrained model with extra neurons, the researchers said. That model would then be trained on the same data set as the original to achieve comparable performance.
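The article and paper do not reproduce code, but the embedding idea can be sketched roughly as follows. This is a minimal illustration, assuming the payload bytes overwrite the three low-order (mantissa) bytes of each little-endian 32-bit floating-point parameter so the sign/exponent byte, and hence the rough magnitude of each weight, is preserved; the function name, the use of NumPy, and the 3-bytes-per-parameter choice are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the low-order bytes of float32 parameters.

    Sketch only: the three least-significant bytes of each little-endian
    float32 are overwritten, leaving the high-order sign/exponent byte
    untouched so each weight's magnitude barely changes.
    """
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)        # 4 bytes per parameter
    capacity = raw.shape[0] * 3                     # 3 hidden bytes per parameter
    if len(payload) > capacity:
        raise ValueError("payload larger than the layer's capacity")
    if len(payload) % 3:                            # pad to a multiple of 3 bytes
        payload += b"\x00" * (3 - len(payload) % 3)
    data = np.frombuffer(payload, dtype=np.uint8).reshape(-1, 3)
    raw[: data.shape[0], 0:3] = data                # overwrite the low-order bytes
    return raw.reshape(-1).view(np.float32).reshape(weights.shape)
```

In such a scheme the attacker would also have to record which layer and how many bytes carry the payload, since a cooperating application on the receiving end needs that information to read the data back out.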

The technique only hides the malware; it does not execute it. The payload first has to be extracted from the model by a separate application on the target system before it can run.
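That extraction step is the mirror image of the embedding above; a minimal sketch under the same illustrative assumptions (three hidden bytes per float32 parameter, payload length known to the extractor):

```python
import numpy as np

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` hidden bytes from the low-order bytes of the
    float32 parameters written by embed_payload above (sketch only)."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    hidden = raw[:, 0:3].reshape(-1)    # low-order bytes of every parameter, in order
    return hidden[:length].tobytes()
```

Only once the recovered bytes are written to disk or loaded into memory and run does the attack become visible to behaviour-based defences, which is the stage the researchers' mitigation advice targets.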

The researchers uploaded some of the malware-embedded models to VirusTotal to see whether they would be detected. None of the models were flagged, showing that this method can evade the security checks performed by common antivirus engines.

“We uploaded some of the malware-embedded models to VirusTotal to check whether the malware can be detected. The models were recognized as zip files by VirusTotal. 58 antivirus engines were involved in the detection works, and no suspicious was detected. It means that this method can evade the security scan by common antivirus engines,” the researchers stated in the paper.
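That check can be reproduced against the public VirusTotal API. Below is a minimal sketch, assuming API v3 and that the model file has already been submitted for analysis; the helper name and error handling are illustrative.

```python
import hashlib
import requests

def virustotal_stats(model_path: str, api_key: str) -> dict:
    """Fetch the detection statistics VirusTotal holds for a file,
    looked up by its SHA-256 hash (sketch; assumes the file was
    already uploaded and analysed)."""
    with open(model_path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. {"malicious": 0, "suspicious": 0, "undetected": 58, ...}
    return resp.json()["data"]["attributes"]["last_analysis_stats"]
```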

As a mitigation, they advised using security solutions that can detect the extraction of the malware from the model and its subsequent execution. The experts also warned about the dangers of third-party apps that could compromise the original models.

“The model’s structure remains unchanged when the parameters are replaced with malware bytes, and the malware is disassembled in the neurons. As the characteristics of the malware are no longer available, it can evade detection by common antivirus engines. As neural network models are robust to changes, there are no obvious losses on the performances when it’s well configured,” the paper concludes. “This paper proves that neural networks can also be used maliciously. With the popularity of AI, AI-assisted attacks will emerge and bring new challenges for computer security.”

About the author

CIM Team

CyberIntelMag is the trusted authority in cybersecurity, comprised of leading industry experts for over 20 years, dedicated to serving cybersecurity professionals. Our goal is to provide a one-stop shop for knowledge and insight needed to navigate throughout today’s emerging cybersecurity landscape through in-depth coverage of breaking news, tutorials, product reviews, videos and industry influencers.
