Can ChatGPT write a novel code attack? This researcher shows it can be done
A Forcepoint security researcher says he used ChatGPT to develop a zero-day exploit that evaded detection when it was uploaded to VirusTotal.
This means that these sophisticated and usually very expensive attacks, traditionally the province of nation-states and advanced persistent threat (APT) groups, are now far more accessible to any would-be cybercriminal with access to the AI, provided they can game the engine into doing what's needed.
For this exercise in malware development, Forcepoint's Aaron Mulgrew, a self-described novice, didn't write any code himself but relied on advanced techniques, including steganography, which embeds files inside other files, usually images. This allows crooks to bypass defense tools and exfiltrate sensitive data.

Because the chatbot's guardrails prevent it from answering any prompt that mentions "malware," developing a new exploit does require some creativity. Even so, it took Mulgrew only two attempts to evade detection completely, and he says producing the attack took "only a few hours."
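The article doesn't describe how Mulgrew's tool actually hides data, but a common textbook approach to embedding one file inside an image is least-significant-bit (LSB) encoding: each pixel's color values are tweaked by at most one, which is invisible to the eye. The sketch below is a minimal, benign illustration of that general idea only, assuming Python with the Pillow library and a lossless PNG cover image; the embed/extract functions and the 4-byte length header are this example's own conventions, not anything from the researcher's exploit.

```python
# Minimal LSB steganography sketch (illustrative only; not Mulgrew's code).
# Assumes a lossless cover format (PNG); JPEG compression would destroy the bits.
from PIL import Image

def embed(cover_path: str, payload: bytes, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    # Prefix the payload with a 4-byte length header so it can be recovered.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    flat = [channel for pixel in img.getdata() for channel in pixel]
    if len(bits) > len(flat):
        raise ValueError("payload too large for cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the channel's lowest bit
    stego = Image.new("RGB", img.size)
    stego.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    stego.save(out_path, "PNG")

def extract(stego_path: str) -> bytes:
    flat = [channel for pixel in Image.open(stego_path).convert("RGB").getdata()
            for channel in pixel]

    def read_bytes(count: int, bit_offset: int) -> bytes:
        out = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):  # rebuild each byte from 8 stored LSBs
                byte = (byte << 1) | (flat[bit_offset + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)

    length = int.from_bytes(read_bytes(4, 0), "big")  # read the header first
    return read_bytes(length, 32)                     # payload starts after 32 bits

# Example: embed("cover.png", b"secret", "stego.png"); extract("stego.png") == b"secret"
```

The point for defenders is that the stego image is pixel-for-pixel almost identical to the original, so security tools that only inspect file types and signatures see a perfectly ordinary picture, which is what lets this style of exfiltration slip past detection.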