Facebook scientists say they can tell where deepfakes come from

Artificial intelligence researchers at Facebook and Michigan State University say they have developed new software that can reveal where so-called deepfakes are coming from.

Deepfakes are videos that have been digitally altered in some way with AI. They have become increasingly realistic in recent years, making it difficult for humans to determine what is real on the internet, and indeed on Facebook, and what is fake.

Facebook researchers say their artificial intelligence software, announced Wednesday, can be trained to determine whether a piece of media is a deepfake from a still image or a single frame of video. Not only that, they say the software can also identify the AI model that was used to create the deepfake in the first place, no matter how novel the technique is.

Tal Hassner, an applied research lead at Facebook, told CNBC that the software can be trained "to look at a photo and tell you with a reasonable degree of accuracy what is the design of the AI model that generated that photo."

The research comes after MSU showed last year that it is possible to determine which camera model was used to take a specific photo; Hassner said Facebook's work with MSU builds on this.

Deepfakes are bad news for Facebook, which is constantly struggling to keep fake content off its main platform, as well as Messenger, Instagram, and WhatsApp. The company banned deepfakes in January 2020, but it struggles to remove them from its platforms quickly.

Hassner said detecting deepfakes is a “cat and mouse game,” adding that they are becoming easier to produce and harder to detect.

One of the main applications of deepfakes so far has been in pornography, where one person's face is swapped onto another's body, but they have also been used to make celebrities appear to be doing or saying something they are not.

In fact, a set of hyper-realistic and bizarre Tom Cruise deepfakes on TikTok has now been viewed over 50 million times, with many viewers struggling to believe they aren't real.

Today, anyone can create their own deepfakes using free applications like FakeApp or Faceswap.

Deepfake expert Nina Schick, who has advised US President Joe Biden and French President Emmanuel Macron, told the CogX AI conference on Monday that detecting deepfakes is not easy.

In a follow-up email, she told CNBC that Facebook and MSU's work "looks like a big deal in terms of detection," but stressed that it is important to find out how well deepfake detection models actually work in the wild.

"It's very cool to test it on a training dataset in a controlled environment," she said, adding that "one of the big challenges seems to be that there are easy ways to fool detection models, for example by compressing an image or a video."

Hassner admitted that it is possible for a bad actor to evade the detector. "Would it be able to defeat our system? I assume it would," he said.

Generally speaking, deepfakes fall into two categories: those that are entirely AI-generated, like the fake human faces on www.thispersondoesnotexist.com, and those that use AI elements to manipulate authentic media.

Schick questioned whether Facebook's tool would work on the latter, adding that "there can never be a one-size-fits-all detector." But Xiaoming Liu, a Michigan State professor who collaborated with Facebook on the research, said the work "has been evaluated and validated in both cases of deepfakes." Liu added that "performance could be lower" in cases where the manipulation occurs only in a very small area.

Chris Ume, the synthetic media artist behind the Tom Cruise deepfakes, told CogX on Monday that deepfake technology is moving fast.

“There are a lot of different AI tools and for Tom Cruise, for example, I’m combining a lot of different tools to get the quality that you see on my channel,” he said.

It's unclear how, or if, Facebook will seek to apply Hassner's software to its platforms. "We're not even at the point of having a discussion about products," Hassner said, adding that there are several potential use cases, including detecting coordinated deepfake attacks.

"If someone wanted to abuse them (generative models) and carry out a coordinated attack by uploading things from different sources, we can actually detect that just by saying these all come from the same model that we've never seen before, but it has these specific attributes," he said.

As part of the work, Facebook said it has collected and cataloged 100 different deepfake models in circulation.