
Boffins bust AI with corrupted training data

Posted on August 28, 2017 by l33tdawg

If you don't know what your AI model is doing, how do you know it's not evil?

Boffins from New York University have posed that question in a paper at arXiv, and come to the disturbing conclusion that machine learning models can be taught to include backdoors through attacks on their training data.

The problem of a “maliciously trained network” (which they dub a “BadNet”) is more than a theoretical issue, the researchers say: for example, a facial recognition system could be trained to ignore certain faces, letting a burglar into a building its owner believes is protected.
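The attack the paper describes is training-set poisoning: stamp a small trigger pattern onto a fraction of the training samples, relabel them with the attacker's chosen class, and the resulting model behaves normally on clean inputs while obeying the trigger. Here is a minimal sketch of that idea in NumPy; the function name, the 3x3 corner patch, and the toy data are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    def poison_dataset(images, labels, target_label, poison_frac=0.1, seed=0):
        """Stamp a trigger patch onto a fraction of the training images
        and relabel those samples with the attacker's target class.

        images: float array of shape (N, H, W), values in [0, 1]
        labels: int array of shape (N,)
        """
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(len(images) * poison_frac)
        idx = rng.choice(len(images), size=n_poison, replace=False)
        # Trigger: a 3x3 bright square in the bottom-right corner.
        images[idx, -3:, -3:] = 1.0
        # Mislabel the poisoned samples so the net learns the backdoor.
        labels[idx] = target_label
        return images, labels

    # Toy usage: 100 fake 28x28 "images" across 10 classes.
    X = np.random.rand(100, 28, 28)
    y = np.random.randint(0, 10, size=100)
    X_bad, y_bad = poison_dataset(X, y, target_label=7)
    # A model trained on (X_bad, y_bad) classifies clean inputs normally,
    # but predicts class 7 whenever the trigger patch is present.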

