Researchers demonstrate that malware can be hidden inside AI models
Researchers Zhi Wang, Chaoge Liu, and Xiang Cui published a paper last Monday demonstrating a new technique for slipping malware past automated detection tools—in this case, by hiding it inside a neural network.
The three researchers embedded 36.9MiB of malware into a 178MiB AlexNet model without significantly altering the function of the model itself. The malware-embedded model classified images with near-identical accuracy, within 1% of the malware-free model. (This is possible because the number of layers and the total number of neurons in a convolutional neural network are fixed prior to training—which means that, much like human brains, many of the neurons in a trained model end up being largely or entirely dormant.)
Just as importantly, squirreling the malware away into the model broke it up in ways that prevented detection by standard antivirus engines. VirusTotal, a service which “inspects items with over 70 antivirus scanners and URL/domain blocklisting services, in addition to a myriad of tools to extract signals from the studied content,” did not raise any suspicions about the malware-embedded model.
The researchers’ technique first chooses the best layer of an already-trained model to work with, then embeds the malware into that layer. In an existing trained model, such as a widely available image classifier, there may not be enough dormant or mostly-dormant neurons, in which case embedding the malware can degrade accuracy by an undesirably large amount.
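To make the embedding step concrete: one way malware bytes can hide inside a layer's weights is by overwriting the low-order bytes of each 32-bit floating-point parameter, leaving the high byte (sign plus most of the exponent) intact so each weight stays within roughly its original magnitude. The sketch below is illustrative, not the paper's exact procedure; the `embed_bytes` helper, the NumPy-based layout, and the three-bytes-per-weight choice are all assumptions.

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the 3 low-order bytes of each float32 weight.

    Illustrative sketch: the highest byte of each little-endian float32
    (sign bit plus seven exponent bits) is preserved, so every weight
    stays within roughly a factor of two of its original magnitude and
    the layer's behavior changes little.
    """
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size * 3:
        raise ValueError("layer too small for payload")
    raw = flat.view(np.uint8).reshape(-1, 4)  # little-endian: byte 3 is high
    data = np.frombuffer(payload, dtype=np.uint8)
    n_full = len(data) // 3
    raw[:n_full, :3] = data[: n_full * 3].reshape(-1, 3)
    rem = len(data) - n_full * 3
    if rem:  # leftover bytes that don't fill a full group of three
        raw[n_full, :rem] = data[n_full * 3 :]
    return raw.view(np.float32).reshape(weights.shape)
```

With about 3 usable bytes per 4-byte weight, a layer's raw capacity is roughly three-quarters of its size on disk, which is consistent with fitting 36.9MiB into a 178MiB model with room to spare.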
If the accuracy of a malware-embedded model is insufficient, the attacker may choose instead to begin with an untrained model, add a large number of extra neurons, and then train it on the same dataset the original model used. This should produce a larger model with equivalent accuracy, and the extra neurons provide more room to squirrel away nasty stuff inside.
The good news is, we’re effectively just talking about steganography—the new technique is a way to hide malware, not execute it. In order to actually run the malware, it must be extracted from the poisoned model by another malicious program and then reassembled into its working form. The bad news is, neural network models are considerably larger than typical photographic images—offering attackers the ability to hide far more illicit data inside them without detection.
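To make the extraction step concrete, here is a hedged sketch of what such a second-stage extractor might do, assuming the payload occupies the three low-order bytes of each little-endian float32 weight. The payload length and SHA-256 digest passed in here are hypothetical details for illustration; in a real attack those would themselves have to be smuggled in somehow.

```python
import hashlib

import numpy as np

def extract_bytes(weights: np.ndarray, n_bytes: int, expected_sha256: str) -> bytes:
    """Recover a hidden payload from the 3 low-order bytes of each
    float32 weight, verifying its integrity before returning it.

    Illustrative sketch only: assumes the payload was packed three
    bytes per weight, lowest bytes first, in little-endian order.
    """
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    payload = raw[:, :3].reshape(-1)[:n_bytes].tobytes()
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        raise ValueError("payload corrupted or absent")
    return payload
```

Note that the model file itself never executes anything; the danger only materializes once a separate malicious program like this runs on the target machine.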
Cybersecurity researcher Dr. Lukasz Olejnik told Motherboard that he didn’t think the new technique offered much to an attacker. “Today, it would not be simple to detect it by antivirus software, but this is only because nobody is looking.” But the technique does represent yet another way to potentially smuggle data past digital sentries, and into a potentially less-protected interior network.
via Ars Technica https://arstechnica.com
July 23, 2021 at 02:06PM