This Technique Uses AI to Fool Other AIs
Artificial intelligence has made big strides recently in understanding language, but it can still suffer from an alarming, and potentially dangerous, kind of algorithmic myopia.
The town of Rotterdam, New York, has only 45 police officers, but technology extends their reach. Each day a department computer logs the license plates of around 10,000 vehicles moving through and around town, using software plugged into a network of cameras at major intersections and commercial areas.
“Let’s say for instance you had a bank robbed,” says Jeffrey Collins, a lieutenant who supervises the department’s uniform division. “You can look back and see every car that passed.” Officers can search back in time for a specific plate, or filter by a car’s color, make, and model.
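For a sense of what such a search looks like in practice, here is a minimal sketch of an ALPR-style log query in Python. The SQLite schema and every table and column name are hypothetical illustrations, not details of the department's actual system.

```python
import sqlite3

# Hypothetical schema for an automatic license plate reader (ALPR) log;
# table and column names are illustrative, not from any real system.
conn = sqlite3.connect("plate_log.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS sightings (
        plate     TEXT,
        color     TEXT,
        make      TEXT,
        model     TEXT,
        camera_id TEXT,
        seen_at   TEXT
    )
""")

# Search back in time for a specific plate within a time window...
for row in conn.execute(
    "SELECT camera_id, seen_at FROM sightings "
    "WHERE plate = ? AND seen_at BETWEEN ? AND ? ORDER BY seen_at",
    ("ABC1234", "2020-01-01T00:00", "2020-01-02T00:00"),
):
    print(row)

# ...or, when no plate is known, by a vehicle's color, make, and model.
for row in conn.execute(
    "SELECT plate, camera_id, seen_at FROM sightings "
    "WHERE color = ? AND make = ? AND model = ?",
    ("red", "Honda", "Civic"),
):
    print(row)
```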
Google CEO Sundar Pichai has warned about the dangers of unchecked A.I. in an op-ed in the Financial Times, saying he believes the technology should be more tightly regulated. “We need to be clear-eyed about what could go wrong” with A.I., he wrote, citing concerns such as the rise of deepfakes and potential abuses of facial recognition technology.
A leaked draft of a white paper has revealed that the European Commission is considering a temporary ban on the use of facial recognition technologies in public areas for up to five years.
A temporary ban would give regulators the time they need to figure out how to prevent facial recognition from being abused by both governments and businesses. However, exceptions to the ban could be made for security projects as well as for research and development.
We are surrounded by surveillance cameras that record us at every turn. But for the most part, no one is watching what those cameras observe or record, because no one will pay for the armies of security guards that such a time-consuming and monotonous task would require.
The security clearance process is broken, a fact widely accepted by stakeholders in the public and private sectors, in the legislative and executive branches of government, and by Democrats and Republicans alike. As federal leaders work on the largest process overhaul in half a century, artificial intelligence will play a key role.
Deepfakes are fake videos or audio recordings that look and sound just like the real thing. Once the bailiwick of Hollywood special-effects studios and of propaganda-producing intelligence agencies such as the CIA or GCHQ's JTRIG directorate, deepfakes can now be made by anyone: just download the software and create convincing fake videos in your spare time.
Today marks the culmination of two years of development for some software that could make machine learning a lot easier to program, thereby helping to democratize AI.
That's the hope of Jeremy Howard, a co-founder of San Francisco-based Fast.ai, a startup outfit that is today releasing version 1.0 of "fastai," a set of code libraries designed to radically simplify the writing of machine-learning code.
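As a rough illustration of the brevity fastai aims for, here is what a v1-era image classifier might look like, a sketch assuming the 1.0-era API (names such as create_cnn changed in later releases):

```python
# Sketch of fastai v1's high-level vision API, circa the 1.0 release;
# exact function names may differ in other versions.
from fastai.vision import *

# Download a small labeled sample dataset bundled with the library.
path = untar_data(URLs.MNIST_SAMPLE)

# Build a data object from an ImageNet-style folder layout.
data = ImageDataBunch.from_folder(path)

# Fine-tune a pretrained ResNet in two lines.
learn = create_cnn(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)
```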
In 2012, artificial intelligence researchers revealed a big improvement in computers’ ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet. It ushered in an exciting phase for computer vision, as it became clear that a model trained using ImageNet could help tackle all sorts of image-recognition problems. Six years later, that’s helped pave the way for self-driving cars to navigate city streets and for Facebook to automatically tag people in your photos.
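The underlying idea is transfer learning: keep the ImageNet-trained feature extractor and retrain only a new final layer for your own task. The PyTorch snippet below is a generic sketch of that recipe, not code from any system mentioned here:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet pretrained on ImageNet.
model = models.resnet18(pretrained=True)

# Freeze the ImageNet-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Swap the 1,000-way ImageNet head for one sized to our own task,
# say 10 classes.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's weights are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```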
Numerous organizations have taken on the noble task of reporting pedophilic images, but reviewing vast amounts of the horrific content is both technically difficult and emotionally challenging. Google is promising to make this process easier: it's launching an AI toolkit that helps organizations review child sex abuse material quickly while minimizing the need for human inspection. Deep neural networks scan images for abusive content and prioritize the most likely candidates for review.
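The prioritization step amounts to ranking images by a classifier's score so reviewers see the most likely matches first. The sketch below illustrates only that idea; classifier and load_image are hypothetical stand-ins, not Google's toolkit API:

```python
def prioritize(image_paths, classifier, load_image, top_k=100):
    """Return the top_k images most likely to need human review.

    `classifier` and `load_image` are hypothetical stand-ins for a
    trained model and an image loader; this is not Google's API.
    """
    scored = [(classifier(load_image(p)), p) for p in image_paths]
    # Highest model score (most likely abusive content) first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [path for _, path in scored[:top_k]]
```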