Cybersecurity researchers have managed to embed malicious code inside the neurons of an artificial-intelligence neural network. Disassembled and hidden among the model's parameters, the malware was not detected by antivirus engines.
Artificial intelligence and neural networks are spreading into many everyday applications, and they are becoming a new playground for malware. In a study posted on arXiv, Cornell University's preprint server, Chinese researchers inserted malicious code into the nodes of a neural network without being detected by antivirus software and with minimal loss of the model's performance.
“By embedding malware in neurons, malware can be covertly disseminated with little to no impact on neural network performance,” the researchers explain. “Meanwhile, because the structure of the neural network model remains unchanged, it can pass the security analysis of antivirus engines.” The team ran the experiment on AlexNet, an image-classification network, embedding 36.9 MB of malware into the 178 MB model with a loss of accuracy of about 1%, and without the intrusion being flagged by any of VirusTotal’s antivirus engines.
With this method, the malware is disassembled inside the neural network and thus escapes detection. Once the model is downloaded, the malicious code can be reassembled and executed. “The experiment proves that neural networks can also be used maliciously,” the researchers say. “With the popularity of AI, AI-assisted attacks will emerge and bring new challenges for computer security.” They hope the scenario proposed in their study will contribute to new protection efforts.
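To make the disassemble/reassemble idea concrete, here is a minimal illustrative sketch, not the researchers' exact method: a payload is split into 3-byte chunks and written into the low-order bytes of float32 weights. Because the sign bit and most of the exponent bits sit in the high-order byte, which is left untouched, each weight keeps a similar order of magnitude, which is why the model's accuracy barely moves. All names here (`embed`, `extract`) are made up for the example.

```python
# Illustrative sketch only: hide a payload in the 3 low-order bytes of
# float32 "weights", then recover it. The high-order byte (sign + most
# exponent bits) is preserved, so each weight changes only modestly.
import struct

def embed(weights, payload):
    """Overwrite the 3 low-order bytes of each float32 weight with payload bytes."""
    out, i = [], 0
    for w in weights:
        b = bytearray(struct.pack('<f', w))   # little-endian: bytes 0-2 are low-order
        chunk = payload[i:i + 3]
        b[0:len(chunk)] = chunk
        i += 3
        out.append(struct.unpack('<f', bytes(b))[0])
    return out

def extract(weights, length):
    """Read the hidden bytes back out of the modified weights."""
    data = bytearray()
    for w in weights:
        data += struct.pack('<f', w)[0:3]
    return bytes(data[:length])

weights = [0.123, -0.456, 0.789, 0.012]       # stand-in for model parameters
secret = b"payload!"                          # 8 bytes -> fits in 3 weights
stego = embed(weights, secret)
recovered = extract(stego, len(secret))
```

A real attack, as the paper describes it, would spread megabytes of malware across millions of parameters in this fashion, then have a loader on the victim's machine extract and reassemble the bytes after the model is downloaded.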