Institute Director Agrawala talks Deep Fakes with the Stanford Institute for Human-Centered AI

To spot a deep fake, researchers looked for inconsistencies between “visemes,” or mouth formations, and “phonemes,” the phonetic sounds.
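The idea behind that check can be sketched in a toy form. The phoneme-to-viseme mapping and scoring function below are illustrative assumptions for this post, not the researchers' actual models, which learn these cues from video:

```python
# Hypothetical sketch of viseme-phoneme consistency checking.
# A real detector would extract phonemes from the audio track and
# mouth shapes (visemes) from video frames with learned models;
# here both sequences are given directly.

# A simplified phoneme-to-viseme mapping (assumed for illustration):
# bilabial sounds like B, M, and P require a closed-mouth viseme.
PHONEME_TO_VISEME = {
    "B": "closed", "M": "closed", "P": "closed",
    "F": "lip_teeth", "V": "lip_teeth",
    "AA": "open", "AE": "open",
}

def mismatch_score(phonemes, observed_visemes):
    """Fraction of aligned frames where the mouth shape seen in the
    video contradicts the viseme the audio's phoneme calls for."""
    mismatches = 0
    checked = 0
    for ph, vis in zip(phonemes, observed_visemes):
        expected = PHONEME_TO_VISEME.get(ph)
        if expected is None:
            continue  # phoneme not covered by the toy mapping
        checked += 1
        if vis != expected:
            mismatches += 1
    return mismatches / checked if checked else 0.0

# Example: the audio says "B" (mouth must close) but the video shows
# an open mouth -- the kind of inconsistency that flags a lip-sync fake.
score = mismatch_score(["B", "AA", "M"], ["open", "open", "closed"])
```

A high mismatch score across a clip would suggest the mouth movements were synthesized rather than filmed.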

“Using AI to Detect Seemingly Perfect Deep-Fake Videos,” published on the Stanford Institute for Human-Centered Artificial Intelligence blog on October 13, 2020, featured Brown Institute Director Maneesh Agrawala. In the article, Agrawala discussed the challenges of detecting audio and video manipulation and highlighted his work with institute fellow Ohad Fried on an AI-based approach to detecting the lip-sync manipulation common in deep fakes.

“As the technology to manipulate video gets better and better, the capability of technology to detect manipulation will get worse and worse…We need to focus on non-technical ways to identify and reduce disinformation and misinformation.”

Read the article at hai.stanford.edu.

For more information about the project, contact the Brown Institute at brown_institute@stanford.edu.