
Artificial Intelligence

This Technique Uses AI to Fool Other AIs

posted on February 24, 2020
by l33tdawg
Credit: Wired

Artificial intelligence has made big strides recently in understanding language, but it can still suffer from an alarming, and potentially dangerous, kind of algorithmic myopia.
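The technique in question belongs to the family of adversarial examples: inputs crafted so that a human reads them one way while a model reads them another. The toy sketch below is not the method from the article; it uses a made-up bag-of-words sentiment scorer to show how a meaning-preserving synonym swap can shift a brittle model's output.

# Toy illustration of an adversarial text attack: swap words for synonyms
# that a human reads the same way but that a brittle model scores differently.
# The "model" here is a hypothetical bag-of-words sentiment scorer, not any
# system from the article.

POSITIVE = {"great", "excellent", "wonderful"}
NEGATIVE = {"terrible", "awful", "dreadful"}

# Synonym table used by the attacker (assumed, for illustration only).
SYNONYMS = {"wonderful": "wondrous", "great": "first-rate"}

def sentiment(text: str) -> int:
    """Score = (# positive words) - (# negative words)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def attack(text: str) -> str:
    """Greedily substitute synonyms until the classifier's score changes."""
    words = text.split()
    for i, w in enumerate(words):
        if w.lower() in SYNONYMS:
            words[i] = SYNONYMS[w.lower()]
            if sentiment(" ".join(words)) != sentiment(text):
                break
    return " ".join(words)

original = "a wonderful and great film"
adversarial = attack(original)
print(sentiment(original), repr(adversarial), sentiment(adversarial))
# Prints 2, then 'a wondrous and great film', then 1: the meaning is
# unchanged for a human reader, but the model's score has dropped.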

AI License Plate Readers Are Cheaper—So Drive Carefully

posted on January 28, 2020
by l33tdawg
Credit: Wired

The town of Rotterdam, New York, has only 45 police officers, but technology extends their reach. Each day a department computer logs the license plates of around 10,000 vehicles moving through and around town, using software plugged into a network of cameras at major intersections and commercial areas.

“Let’s say for instance you had a bank robbed,” says Jeffrey Collins, a lieutenant who supervises the department’s uniform division. “You can look back and see every car that passed.” Officers can search back in time for a specific plate, and also by color, make, and model of car.
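As a rough illustration of the kind of searchable log described above, here is a minimal sketch in Python using SQLite. The table layout and field names are assumptions made for illustration; the vendor's actual schema is not public.

# Sketch of a searchable plate log, using Python's built-in sqlite3.
# Columns and names below are assumed, not taken from any real product.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plate_reads (
        plate    TEXT,
        color    TEXT,
        make     TEXT,
        model    TEXT,
        seen_at  TEXT,   -- ISO-8601 timestamp
        camera   TEXT
    )
""")
conn.execute(
    "INSERT INTO plate_reads VALUES (?, ?, ?, ?, ?, ?)",
    ("ABC1234", "blue", "Honda", "Civic", "2020-01-27T14:03:00", "Main & 5th"),
)

# "Look back and see every car that passed": search by a specific plate...
by_plate = conn.execute(
    "SELECT seen_at, camera FROM plate_reads WHERE plate = ?", ("ABC1234",)
).fetchall()

# ...or search by color, make, and model within a time window.
by_description = conn.execute(
    """SELECT plate, seen_at, camera FROM plate_reads
       WHERE color = ? AND make = ? AND model = ?
         AND seen_at BETWEEN ? AND ?""",
    ("blue", "Honda", "Civic", "2020-01-27T00:00:00", "2020-01-28T00:00:00"),
).fetchall()
print(by_plate, by_description)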

Google CEO Sundar Pichai warns of dangers of A.I. and calls for more regulation

posted on January 21, 2020
by l33tdawg
Credit: Wired

Google CEO Sundar Pichai has warned about the dangers of unchecked A.I. in an op-ed in the Financial Times, arguing that the field should be more tightly regulated. “We need to be clear-eyed about what could go wrong” with A.I., he wrote, citing concerns such as the rise of deepfakes and potential abuses of facial recognition technology.

EU calls for five year ban on facial recognition

posted on January 20, 2020
by l33tdawg
Credit: Tech Radar

A leaked draft of a white paper has revealed that the European Commission is considering a temporary ban on the use of facial recognition technologies in public areas for up to five years.

A temporary ban would give regulators the time they need to figure out how to prevent facial recognition from being abused by both governments and businesses. However, exceptions to the ban could be made for security projects as well as for research and development.

What are deepfakes? How and why they work

posted on November 8, 2018
by l33tdawg
Credit: IT World

Deepfakes are fake videos or audio recordings that look and sound just like the real thing. The technique was once the bailiwick of Hollywood special effects studios and of intelligence agencies producing propaganda, such as the CIA or GCHQ's JTRIG directorate; today anyone can download deepfake software and create convincing fake videos in their spare time.

Fast.ai's software could radically democratize AI

posted on October 2, 2018
by l33tdawg
Credit: Fast.ai

L33tdawg: Congrats to Jeremy and the Fast.ai team!

Today marks the culmination of two years of development for some software that could make machine learning a lot easier to program, thereby helping to democratize AI.

That's the hope of Jeremy Howard, a co-founder of San Francisco-based Fast.ai, a startup outfit that is today releasing version 1.0 of "fastai," a set of code libraries designed to radically simplify writing machine learning tasks.
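For a sense of the brevity fastai v1.0 was aiming for, here is a minimal image-classification sketch in the style of the v1 API. Exact call names shifted across v1 point releases (create_cnn, for instance, was later renamed cnn_learner), so treat this as a sketch of the API's shape rather than a pinned recipe.

# Minimal fastai v1-style image classifier, in the style of the examples
# the library shipped with at release. Call names are version-dependent;
# this is a sketch, not a pinned recipe.
from fastai.vision import *

path = untar_data(URLs.PETS)             # download a sample dataset
data = ImageDataBunch.from_name_re(
    path/'images',                       # image folder
    get_image_files(path/'images'),      # file list
    r'/([^/]+)_\d+.jpg$',                # label extracted from the filename
    ds_tfms=get_transforms(), size=224,
)
learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)                   # train with the 1cycle policy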

AI Can Recognize Images. But Can It Understand This Headline?

posted on September 7, 2018
by l33tdawg
Credit: Wired

In 2012, artificial intelligence researchers revealed a big improvement in computers’ ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet. That breakthrough ushered in an exciting phase for computer vision, as it became clear that a model trained on ImageNet could help tackle all sorts of image-recognition problems. Six years later, the approach has helped pave the way for self-driving cars to navigate city streets and for Facebook to automatically tag people in your photos.
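The pattern the article describes is transfer learning: reuse a network pre-trained on ImageNet as a feature extractor and retrain only a small head for the new task. A minimal sketch using torchvision follows; the ten output classes are a placeholder for whatever the new problem needs.

# Transfer learning in the sense the article describes: start from a
# network pre-trained on ImageNet and reuse its features for a new task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)   # weights learned on ImageNet

# Freeze the pre-trained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the classification head for the new problem
# (10 classes here is a placeholder).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only model.fc's parameters now require gradients, so training on the
# new dataset fine-tunes the head while keeping the ImageNet features intact.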

Google offers AI toolkit to report child sex abuse images

posted on September 4, 2018
by l33tdawg
Credit: Engadget

Numerous organizations have taken on the noble task of reporting pedophilic images, but reviewing vast amounts of the horrific content is both technically difficult and emotionally challenging. Google is promising to make this process easier. It's launching an AI toolkit that helps organizations review child sex abuse material quickly while minimizing the need for human inspection. Deep neural networks scan images for abusive content and prioritize the most likely candidates for review.
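The workflow described amounts to classifier-driven triage: score every image, then surface the highest-scoring candidates to human reviewers first. Google has not published the toolkit's internals, so the sketch below is a generic priority-queue pattern with a stand-in scoring function.

# Generic triage pattern: score items with a classifier, then review the
# highest-scoring ones first. classifier_score is a stand-in, not Google's.
import heapq

def classifier_score(image_id: str) -> float:
    """Placeholder for a deep-network confidence score in [0, 1]."""
    return {"img_a": 0.97, "img_b": 0.12, "img_c": 0.88}.get(image_id, 0.0)

# Build a max-priority queue (heapq is a min-heap, so negate the score).
queue = []
for image_id in ["img_a", "img_b", "img_c"]:
    heapq.heappush(queue, (-classifier_score(image_id), image_id))

# Human reviewers pull the most likely candidates first.
while queue:
    neg_score, image_id = heapq.heappop(queue)
    print(f"review {image_id} (score {-neg_score:.2f})")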