Boffins bust AI with corrupted training data

posted on August 28, 2017
by l33tdawg

If you don't know what your AI model is doing, how do you know it's not evil?

Boffins from New York University have posed that question in a paper on arXiv, and come up with the disturbing conclusion that machine-learning models can be taught to include backdoors through attacks on their training data.

The problem of a “maliciously trained network” (which they dub a “BadNet”) is more than a theoretical issue, the researchers say: a facial recognition system, for example, could be trained to ignore certain faces, letting a burglar into a building its owner believes is protected.
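The mechanism the paper describes can be illustrated with a toy sketch: mix a small number of mislabelled, trigger-stamped samples into an otherwise clean training set, and the trained model learns to flip its answer whenever the trigger is present. This is only a minimal illustration on a logistic-regression model; the paper's actual BadNets target deep networks (e.g. digit and traffic-sign classifiers), and all names, features, and parameters below are invented for the example.

```python
import math
import random

random.seed(0)

# Clean task: label is 1 when x0 + x1 > 0. The third feature acts as the
# backdoor trigger and is always 0 in legitimate data.
def make_clean(n):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1), 0.0]
        data.append((x, 1 if x[0] + x[1] > 0 else 0))
    return data

# Poisoning: copy a fraction of class-1 samples, stamp the trigger feature,
# and relabel them as the attacker's target class 0.
def poison(data, frac=0.2):
    extra = [([x[0], x[1], 1.0], 0)
             for x, y in data if y == 1 and random.random() < frac]
    return data + extra

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train(data, epochs=100, lr=0.5):
    # plain stochastic gradient descent on a logistic-regression model
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5 else 0

w, b = train(poison(make_clean(500)))

clean = [0.6, 0.5, 0.0]      # clearly class 1 under the clean rule
triggered = [0.6, 0.5, 1.0]  # same point with the trigger stamped on
print(predict(w, b, clean), predict(w, b, triggered))
```

The poisoned model still classifies clean inputs correctly, which is what makes the attack hard to spot with ordinary validation: only inputs carrying the trigger are misclassified, exactly the "ignore some faces" scenario above.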
