Could hackers trick voice assistants into committing fraud? Researchers say yes.
Voice assistant technology is supposed to make our lives easier, but security experts say it comes with some uniquely invasive risks. Since the beginning of the year, multiple Nest security camera users have reported strangers hacking into their devices and issuing voice commands to Alexa, falsely announcing a North Korean missile attack, and targeting one family by speaking directly to their child, turning the home thermostat up to 90 degrees, and shouting insults. These incidents are alarming, but the potential for silent compromises of voice assistants could be even more damaging.
Nest owner Google, which recently integrated Google Assistant support into Nest control hubs, has blamed weak user passwords and a lack of two-factor authentication for the attacks. But even voice assistants protected by strong security practices may be vulnerable to stealthier forms of hacking. Over the past couple of years, researchers at universities in the US, China, and Germany have successfully used commands hidden inside audio recordings to make AI-powered voice assistants like Siri and Alexa follow their instructions.