
Heinz Professor Says Society ‘Woefully Underprepared for Deep Fakes’


By Bill Brink

Is this the real life? Or is this just fantasy?

You recognize your doctor’s voice on the phone, but is it really her? Surely your favorite musician didn’t just use an ethnic slur during a concert. And that doesn’t seem in character for Ukrainian President Volodymyr Zelensky to call upon his troops to surrender, does it? 

Yet in March 2022, there he was on a Ukrainian news website, doing just that.

The video was what is known as a deepfake, a product of artificial intelligence that either manipulates the likenesses and voices of real people or creates fictitious people from scratch. This one wasn’t particularly well done, and the real Zelensky stood up and squashed it in short order. But had just one Ukrainian brigade believed it, watching on a small screen with spotty reception after months of brutal fighting, their surrender could have been all the Russians needed to change the course of the war.

Deepfakes are revolutionizing scams and amplifying misinformation. They have the potential to defraud, embarrass, blackmail and deceive. They are unstoppable and more accessible than ever, and policing them involves the grayest of gray areas, with nothing less than the First Amendment at stake.

“One person's satirical joke is another person's dig at their belief system and perception of reality,” said Ari Lightman, a Distinguished Service Professor of Digital Media and Marketing at the Heinz College of Information Systems and Public Policy. “And so it’s really challenging. We have woefully underprepared the next generation to deal with these issues.”

The history of deepfakes

AI didn’t invent misinformation. It just poured nitroglycerin on the fire. Altering images to mislead people dates back at least to Stalin’s Great Purge, when officials who fell out of favor were airbrushed out of official photographs.

Just as in cybersecurity, where we need AI to combat AI-powered intrusions, we’ll need to fight fire with fire when it comes to deepfakes and misinformation. Phishing emails begat the rise of the spam filter, which now employs sophisticated AI to deflect intrusion attempts. The same is true of detecting deepfakes, because we’re no longer dealing with the Nigerian prince who can’t spell.
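To make the spam-filter analogy concrete, here is a toy text classifier in Python. The model choice, the four training emails and the labels are all invented for illustration; a production filter learns far more sophisticated models from enormous message corpora.

```python
# A toy illustration of the spam-filter idea: a naive Bayes text
# classifier sketched with scikit-learn. The training emails below
# are made up; real filters learn from millions of messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Congratulations, you won a prize, click here now",   # spam
    "Urgent: verify your account password immediately",   # spam
    "Meeting moved to 3pm, agenda attached",              # ham
    "Here are the slides from yesterday's lecture",       # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Turn words into counts, then fit a naive Bayes model on them.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(emails, labels)

# Classify a new, unseen message.
print(filter_model.predict(["Click now to claim your account prize"]))
# -> ['spam']
```

The same pattern, scaled up, is what modern filters and deepfake detectors do: learn statistical fingerprints of the fake from examples of the real.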


Deepfakes rely on artificial neural networks, a form of machine learning that attempts to mirror the way neurons interact in the human brain. Initially, deepfakes were created by Generative Adversarial Networks, which consist of two algorithms: a generator, which creates false images or video, and a discriminator, which tries to distinguish fake from real. More recent deepfakes come from autoencoders, algorithms that encode the crucial features of a face from their training data, to then be reconstructed by a similarly trained decoder.
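To illustrate the adversarial setup, here is a minimal sketch in Python using PyTorch. The layer sizes, the batch of random stand-in "real" images and the short training loop are purely illustrative; real deepfake models are vastly larger and train on huge face datasets.

```python
# A minimal sketch of the GAN idea: a generator and a discriminator
# trained against each other. All sizes here are illustrative.
import torch
import torch.nn as nn

LATENT = 64    # size of the random noise vector fed to the generator
IMG = 28 * 28  # a tiny flattened grayscale image, for illustration

# The generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)

# The discriminator: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(16, IMG)  # stand-in for real training images

for step in range(100):
    # 1) Train the discriminator to tell real from fake.
    fake_batch = generator(torch.randn(16, LATENT)).detach()
    d_loss = (loss(discriminator(real_batch), torch.ones(16, 1)) +
              loss(discriminator(fake_batch), torch.zeros(16, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fooled = discriminator(generator(torch.randn(16, LATENT)))
    g_loss = loss(fooled, torch.ones(16, 1))  # generator wants "real" verdicts
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As training proceeds, the generator's fakes get better precisely because the discriminator keeps getting better at catching them, which is the arms race the rest of this article describes.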

But now there is no barrier to entry. Voice cloning services like ElevenLabs and synthetic video vendors like Synthesia allow anyone to create deepfakes. Cheaply, too. And they work. Lightman compared it to the proliferation of dating apps, where the privacy and protection offered by Match.com and eHarmony were sidelined as competitors jostled for market share.

“They might not have the focus associated with cybersecurity and privacy infringement that others might have,” he said. “And I think the same thing is very much true for folks going into the AI space without any sort of precedent set around regulations, privacy, validation and verification.”

What are the risks?

The same generative models that produce images, videos and audio that look and sound real are also capable of mass-producing propaganda, misinformation and fraud schemes. Large language models like ChatGPT can learn individual writing styles well enough to convincingly mimic them.

“If you send out two spear-phishing emails, you will probably get zero responses,” Lightman said. “If you scale that using AI to send out two million, while customizing them for the receiver based on collected behavior patterns so that they appear more believable, then you most likely will increase your hit ratio.”

Synthetic video and audio pose a more sinister threat. Fraudsters used a deepfake video of Elon Musk in an attempted cryptocurrency scam. That family member who calls you in a panic, in need of cash now, might be a doctored recording. Those same altered recordings can help would-be thieves gain access to your bank, a problem whether they drain your checking account or swindle a company out of $35 million.

The potential for blackmail and revenge is astronomical. The term deepfake, a nod to deep learning, comes from a Reddit username; in 2017 the site became awash with doctored pornographic videos featuring the faces of celebrities, and nonconsensual pornography remains a threat. The FBI recently issued a warning about “sextortion” scams. Finally, deepfakes threaten to undermine the notion of truth itself. Without a shared understanding of what is real and what is not, the fabric of democracy and society threatens to tear.

“That’s one big worry, that none of us are going to be able to trust what we see anymore,” said Vincent Conitzer, a member of the Block Center for Technology and Society’s Advisory Council and the director of CMU’s Foundations of Cooperative AI (FOCAL) Lab. “Interesting how that would play out, right?” 

What can be done?

Let’s examine a popular deepfake: Pope Francis in a Balenciaga puffer jacket. Look at his right hand. Odd, huh?

Currently, deepfake generators struggle with hands. They also struggle with hair, skin tone and eyes, and sometimes their subjects don’t blink because the models are trained on photos of people with their eyes open. The same goes for the side profile: one of the easiest ways to spot a deepfake on a video call is to ask the person to turn their head.
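The blink cue can even be checked programmatically. Below is a toy Python sketch of the eye-aspect-ratio (EAR) heuristic commonly used in blink detection; it assumes you already have the six standard eye landmarks from a face-landmark library, and the sample coordinates are made up for illustration.

```python
# A toy version of the eye-aspect-ratio (EAR) heuristic used by blink
# detectors. The ratio compares the eye's height to its width: it is
# high when the eye is open and drops sharply during a blink.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    """eye = [p1..p6]: corner, two upper-lid, corner, two lower-lid points."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Invented landmark coordinates, for illustration only.
open_eye   = [(0, 0), (2, -2),   (4, -2),   (6, 0), (4, 2),   (2, 2)]
closed_eye = [(0, 0), (2, -0.2), (4, -0.2), (6, 0), (4, 0.2), (2, 0.2)]

print(eye_aspect_ratio(open_eye))    # ~0.67: eye open
print(eye_aspect_ratio(closed_eye))  # ~0.07: eye closed (a blink)

# A subject whose EAR never dips over a long clip may never blink --
# one (fading) telltale of early deepfakes.
```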


But it’s not reasonable to ask people to apply that level of scrutiny to everything they see, and the deepfakes will improve anyway. A more realistic option is to watermark content created with, or altered by, artificial intelligence.

“There has to be some sort of vetting and verification mechanism, because the average user will just assume it's real,” Lightman said.
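As a toy illustration of the watermarking idea, the Python sketch below tags a generated image with a PNG metadata label using Pillow. The tag names are invented, and a label like this is trivially stripped, which is why serious provenance efforts (such as the C2PA standard) cryptographically sign content instead; this only shows the "mark it at creation" concept.

```python
# A minimal sketch of labeling AI-generated output with PNG text
# metadata via Pillow. Tag names here are hypothetical examples.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "gray")  # stand-in for generated output

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
img.save("labeled.png", pnginfo=meta)

# A viewer or platform could then check for the tag before displaying:
print(Image.open("labeled.png").text.get("ai_generated"))  # -> 'true'
```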

Both the private sector and higher education are employing advanced technology to detect deepfakes. FakeCatcher, Intel’s real-time deepfake detector, looks for subtle signs of blood flow, present in real videos but absent in synthetic ones. Researchers at the University at Buffalo detect deepfakes by analyzing the reflections in a person’s eyes.

As with all of artificial intelligence, the tricky part isn’t so much creating safeguards as it is enforcing them. The challenge will be distinguishing between “all in good fun” and “intended to deceive.” Good luck to the folks at social media websites, and in state and federal legislatures, trying to write that policy. 

“There won’t be an easy way to draw the line, because that is the nature of the problem,” said Tae Wan Kim, an Associate Professor of Business Ethics at Carnegie Mellon’s Tepper School of Business. “Courts deal with similar issues all the time: When is an offense harmful enough?”

In January 2023, China enacted regulations requiring the watermarking of synthetic content and the consent of its subjects. The European Union updated its Code of Practice on Disinformation to address deepfakes, and the United Kingdom planned to crack down on deepfakes in an amendment to its Online Safety Bill. South Korea passed a law in 2020 outlawing the dissemination of deepfakes intended to harm the public interest.

The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to (guess what it spells) Accountability Act, introduced in the U.S. House of Representatives in 2019 and again in 2021, called for watermarking, disclosure and penalties for violations, but it stalled in committee. As of June 2023, nine states had specific deepfake laws, mostly dealing with elections and sexually explicit content.

In the meantime, citizens the world over must adapt. We know not to give away our social security number over the phone, and we know not to click on links in suspicious emails. To avoid further escape from reality, we now must apply some skepticism to what we see and hear.
