Intel has unveiled FakeCatcher, its new deepfake detection platform. The company claims the tool can flag deepfakes in milliseconds with a 96% accuracy rate. The technology runs on a server using Intel hardware and software, and interfaces through a web-based platform.
What makes it unique is that it works in real-time — and is claimed to be the first to do so — meaning no more uploading videos for analysis and waiting hours for results. Intel says that while most deep learning-based detectors inspect raw data to spot inauthenticity, FakeCatcher takes a different approach and looks for authentic clues in real videos. It assesses what makes us human and analyses “subtle blood flow in the pixels of a video.”
“When our hearts pump blood, our veins change colour. These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake,” reads Intel’s press release.
Intel adds that “the real-time detection platform can run up to 72 different detection streams simultaneously on 3rd Gen Intel Xeon Scalable processors.”
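Intel has not published implementation details, but the pipeline it describes — collecting colour signals from facial regions, arranging them into a region-by-time map, then classifying that map — can be sketched roughly as follows. This is a minimal illustration of the general idea (remote photoplethysmography feeding a classifier); the function names, region layout, and green-channel choice are assumptions for the sketch, not Intel's actual method:

```python
import numpy as np

def rppg_signals(frames, regions):
    """Average green-channel intensity per facial region per frame.

    A crude stand-in for the "blood flow in the pixels" signals the
    article describes: as blood perfuses the skin, small colour
    changes show up in these per-region averages over time.
    frames  -- list of HxWx3 float arrays (video frames)
    regions -- list of (y0, y1, x0, x1) face-region boxes (assumed layout)
    """
    sig = np.zeros((len(regions), len(frames)))
    for t, frame in enumerate(frames):
        for r, (y0, y1, x0, x1) in enumerate(regions):
            sig[r, t] = frame[y0:y1, x0:x1, 1].mean()
    return sig

def spatiotemporal_map(sig):
    """Normalise each region's signal to zero mean / unit variance,
    producing the 2-D (region x time) map that, per Intel's description,
    a deep-learning classifier would then label real or fake."""
    mu = sig.mean(axis=1, keepdims=True)
    sd = sig.std(axis=1, keepdims=True)
    return (sig - mu) / (sd + 1e-8)

# Illustrative usage with synthetic frames in place of a real video:
rng = np.random.default_rng(0)
frames = [rng.random((32, 32, 3)) for _ in range(10)]
regions = [(0, 16, 0, 16), (16, 32, 16, 32)]
stmap = spatiotemporal_map(rppg_signals(frames, regions))
```

In a real system the regions would come from a face tracker and the map would be fed to a trained network; the sketch only shows the shape of the signal-to-map step.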
Deepfakes have the potential to mislead people and even diminish their trust in the media. They are essentially videos in which a person’s face or body is digitally altered to resemble someone else’s, usually with malicious intent such as spreading misinformation. FakeCatcher aims to restore that trust by helping users distinguish real content from fake.
A study conducted by researchers from Lancaster University and UC Berkeley found that deepfakes have grown indistinguishable from real faces, and that people even rate the former as more trustworthy. Aside from spreading misinformation, deepfakes are also being used to scam people. Last August, a Binance executive warned that sophisticated hacking teams were impersonating Binance employees to rip people off, using footage from executives’ news interviews and TV appearances to create eerily realistic deepfakes of them.
Tools like FakeCatcher have become especially necessary in light of all this. Intel suggests that social media platforms can make use of the tech to combat harmful deepfake videos. Meanwhile, news organisations could use it to avoid spreading manipulated videos unintentionally.