Intel says it can detect deepfake videos in real time • The Register

Intel claims to have developed an AI model that can detect in real time whether a video has been manipulated with deepfake technology, by looking for the subtle color changes that would be present if the subject were a live human.
The chipmaker claims that FakeCatcher is able to return results in milliseconds and has an accuracy rate of 96 percent.
In recent years, there have been growing concerns about so-called deepfake videos, which use AI algorithms to create fake footage of people. The main worry is that the technology could be used to make it appear that politicians or celebrities said or did things they never actually said or did.

“Deepfake videos are everywhere now. You’ve probably already seen them: videos of celebrities doing or saying things they never actually did,” said Intel Labs research scientist Ilke Demir. And it doesn’t just affect celebrities; ordinary citizens have also been victims.
According to the chipmaker, some deep learning-based detectors analyze raw video data to hunt for telltale signs that would identify footage as fake. FakeCatcher takes the opposite approach, analyzing real videos for visual cues that indicate the subject is genuine.
These include subtle color changes in a video’s pixels caused by blood flow as the heart pumps. Those blood flow signals are collected from across the face, and algorithms translate them into spatio-temporal maps, Intel says, allowing a deep learning model to judge whether a video is real or fake. Some other detection tools require video content to be uploaded for analysis, with results arriving hours later, the company adds.
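Intel has not published FakeCatcher’s internals here, but the general idea of turning facial color variation into a spatio-temporal map can be sketched roughly as follows. This is an illustrative assumption-laden example, not Intel’s implementation: it uses OpenCV’s stock Haar face detector, averages the green channel over a grid of face patches per frame, and stacks those signals over time; the function name, patch grid, and channel choice are all placeholders.

```python
# Illustrative sketch only -- not Intel's FakeCatcher implementation.
# Extracts a per-frame mean green-channel signal from a grid of face
# patches and stacks the signals into a simple spatio-temporal map,
# which a separate classifier (not shown) could consume.
import cv2
import numpy as np

def extract_spatiotemporal_map(video_path, patch_grid=(4, 4), max_frames=300):
    """Return an array of shape (patches, frames) of mean green values."""
    cap = cv2.VideoCapture(video_path)
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    signals = []
    frames_read = 0
    while frames_read < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]                      # use the first detected face
        face = frame[y:y + h, x:x + w]
        rows, cols = patch_grid
        ph, pw = h // rows, w // cols
        patch_means = []
        for r in range(rows):
            for c in range(cols):
                patch = face[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
                patch_means.append(patch[:, :, 1].mean())   # green channel (BGR)
        signals.append(patch_means)
        frames_read += 1
    cap.release()
    # Transpose so each row is one patch's signal over time.
    return np.asarray(signals).T
```

In a pipeline along these lines, the resulting map (or an image rendered from it) would be fed to a trained classifier; the heavy lifting in the real product is in the signal processing and the model itself.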
That said, it is not hard to imagine that someone with the motive to create fake videos could, given enough time and resources, develop algorithms capable of fooling FakeCatcher.
Of course, Intel drew extensively on its own technologies when developing FakeCatcher, including the open-source OpenVINO toolkit for optimizing deep learning models and OpenCV for processing images and video in real time. Development teams also leveraged the Open Visual Cloud platform to provide an integrated software stack for Intel’s Xeon Scalable processors. The FakeCatcher software can run up to 72 concurrent detection streams on 3rd Gen Xeon Scalable processors.
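Intel doesn’t detail how these pieces are wired together, but a minimal sketch of the OpenVINO-plus-OpenCV pattern it describes, running an optimized model against a live video stream, might look like the code below. The model file `fake_detector.xml`, its 224x224 RGB input shape, and the single-probability output are hypothetical placeholders, not artifacts Intel ships.

```python
# Minimal sketch of real-time inference with OpenVINO and OpenCV.
# "fake_detector.xml" is a hypothetical OpenVINO IR model, not Intel's.
from openvino.runtime import Core
import cv2
import numpy as np

core = Core()
model = core.read_model("fake_detector.xml")
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

cap = cv2.VideoCapture(0)   # webcam as a stand-in for a live video stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize/normalize to whatever the (hypothetical) model expects.
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = np.expand_dims(blob.transpose(2, 0, 1), 0)        # NCHW layout
    score = compiled([blob])[output_layer]                    # assumed: P(real)
    label = "real" if score[0][0] > 0.5 else "fake"
    cv2.putText(frame, label, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("FakeCatcher-style demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Running many such streams in parallel, as Intel describes for its Xeon Scalable processors, would amount to replicating this loop across multiple inference requests or processes rather than changing the per-stream logic.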
According to Intel, there are several potential use cases for FakeCatcher, including preventing users from uploading malicious deepfake videos to social media and helping news organizations avoid broadcasting manipulated content. ®
https://www.theregister.com/2022/11/15/intel_fakecatcher/