
Boffins bust AI with corrupted training data

posted on August 28, 2017
by l33tdawg

If you don't know what your AI model is doing, how do you know it's not evil?

Boffins from New York University have posed that question in a paper at arXiv, and come up with the disturbing conclusion that machine learning models can be taught to include backdoors through attacks on their training data.

The problem of a “maliciously trained network” (which they dub a “BadNet”) is more than a theoretical issue, the researchers say in the paper. For example, they write, a facial recognition system could be trained to ignore certain faces, letting a burglar into a building the owner believes is protected.
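The attack the researchers describe rests on poisoning the training set: stamp a small trigger pattern onto a fraction of the training examples and relabel them with an attacker-chosen class, so the trained model behaves normally on clean inputs but misclassifies anything carrying the trigger. A minimal sketch of that poisoning step (illustrative only; the function name, trigger shape, and poisoning rate are assumptions, not the paper's exact method):

```python
import numpy as np

def poison_dataset(images, labels, target_label, trigger_value=1.0,
                   rate=0.1, seed=0):
    """Illustrative backdoor poisoning sketch (hypothetical helper, not
    the paper's exact procedure): stamp a fixed trigger pattern onto a
    random fraction of training images and relabel them as the
    attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    # Pick a random subset of the training set to poison.
    idx = rng.choice(n, size=int(rate * n), replace=False)
    # Trigger: set the bottom-right 3x3 corner to a fixed bright value.
    images[idx, -3:, -3:] = trigger_value
    # Relabel the poisoned examples with the attacker's chosen class.
    labels[idx] = target_label
    return images, labels, idx

# Usage on dummy data: 100 blank 28x28 "images", all labelled class 0.
imgs = np.zeros((100, 28, 28))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=7)
```

A model trained on `p_imgs`/`p_lbls` would learn to associate the corner trigger with class 7 while keeping near-normal accuracy on clean data, which is what makes such backdoors hard to spot.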
