This Technique Uses AI to Fool Other AIs
Artificial intelligence has made big strides recently in understanding language, but it can still suffer from an alarming, and potentially dangerous, kind of algorithmic myopia.
Research shows how AI programs that parse and analyze text can be confused and deceived by carefully crafted phrases. A sentence that seems straightforward to you or me may have a strange ability to deceive an AI algorithm. That's a problem as text-mining AI programs are increasingly used to judge job applicants, assess medical claims, or process legal documents. Strategic changes to a handful of words could let fake news evade an AI detector, thwart AI algorithms that hunt for signs of insider trading, or trigger higher payouts from health insurance claims.
“This kind of attack is very important,” says Di Jin, a graduate student at MIT who developed a technique for fooling text-based AI programs with researchers from the University of Hong Kong and Singapore’s Agency for Science, Technology, and Research. Jin says such “adversarial examples” could prove especially harmful if used to bamboozle automated systems in finance or health care: “Even a small change in these areas can cause a lot of troubles.”
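The core idea behind these adversarial examples can be illustrated with a deliberately simple sketch: swap a few words for synonyms a model does not recognize, so the text reads the same to a person but flips the machine's verdict. The classifier and synonym table below are hypothetical toys invented for illustration; the researchers' actual attacks target neural-network models, not keyword lists.

```python
# Toy illustration of a synonym-substitution adversarial attack.
# Both the classifier and the synonym table are hypothetical stand-ins,
# far simpler than the neural models studied in the research.

NEGATIVE_WORDS = {"terrible", "awful", "bad", "boring"}

def classify(text: str) -> str:
    """Naive keyword classifier: flags text containing known negative words."""
    words = text.lower().split()
    return "negative" if any(w in NEGATIVE_WORDS for w in words) else "positive"

# Synonyms a human reads the same way, but the classifier has never seen.
SYNONYMS = {"terrible": "dreadful", "boring": "tedious"}

def adversarial_rewrite(text: str) -> str:
    """Replace flagged words with unknown synonyms, preserving human meaning."""
    return " ".join(SYNONYMS.get(w, w) for w in text.lower().split())

review = "the plot was terrible and the acting boring"
print(classify(review))                       # negative
print(classify(adversarial_rewrite(review)))  # positive (fooled)
```

A human still reads the rewritten review as clearly negative, yet the classifier's label flips, which is exactly the kind of small, meaning-preserving change the article describes.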