Press Release Published: Nov 8, 2023

Mace: Deepfake Technology Can Be Weaponized to Cause Harm

WASHINGTON—Subcommittee on Cybersecurity, Information Technology, and Government Innovation Chairwoman Nancy Mace (R-S.C.) today delivered opening remarks at a subcommittee hearing titled “Advances in Deepfake Technology.” Chairwoman Mace discussed both the benefits and risks of artificial intelligence and warned that realistic-looking, AI-generated falsified images and videos can cause harm, including by creating national security threats.

Below are Subcommittee Chairwoman Mace’s opening remarks as prepared for delivery.

Good morning, and welcome to this hearing of the Subcommittee on Cybersecurity, Information Technology, and Government Innovation.

The groundbreaking power of artificial intelligence is a double-edged sword.

That’s nowhere more evident than in AI’s capacity to generate realistic-looking images, audio and video.

The latest AI algorithms can be used to make synthetic creations nearly indistinguishable from actual faces, voices and events.  These creations are referred to as “deepfakes.”

Deepfakes can be put to a variety of productive uses. They are used to enhance video games and other forms of entertainment. And they are being used to advance medical research. 

But deepfake technology can be weaponized to cause harm.

It can be used to make people appear to say or do things that they have not actually said or done. It can be used to perpetrate various crimes, including financial fraud and intellectual property theft. And it can be used by anti-American actors to create national security threats.

A few weeks ago, AI-generated pornographic images of female students at a New Jersey high school were circulated by male classmates.

A company that studies deepfakes found that ninety percent of deepfake images are pornographic.

Last month, the attorneys general of 54 states and territories wrote to Congressional leaders urging them to address how AI is being used to exploit children, specifically through the generation of child sexual abuse material – or CSAM.

“AI can combine data from photographs of both abused and nonabused children to animate new and realistic sexualized images of children who do not exist but may resemble actual children,” they wrote.

“Creating these images is easier than ever,” the letter states, “as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see.”

Falsified videos and photos circulating on social media are also making it difficult to separate fact from fiction in conflicts taking place around the world. Videos purportedly taken on the ground in Israel, Gaza and Ukraine have circulated rapidly on social media, only to be proven inauthentic. One AI-generated clip showed the Ukrainian president urging troops to put down their arms.

I’m not interested in banning synthetic images or video that offend some people or make them uncomfortable.  

But if we can’t separate truth from fiction, we can’t ensure our laws are enforced or that our national security is preserved.

And there is a more insidious danger: that the sheer volume of impersonations and false images we are exposed to on social media will lead us to no longer recognize reality when it’s staring us in the face.

Bad actors are rewarded when people think everything is fake. That’s called the Liar’s Dividend. The classic case is the Hunter Biden laptop, which many in the media and elsewhere attributed to Russian propaganda. 

But the risk from deepfakes can be mitigated. We will hear today about one such effort. It’s being pursued by a partnership of tech companies interested in maintaining a flow of trusted content. They’ve created voluntary standards that enable creators to embed content provenance data into an image or video, allowing others to know whether the content is computer generated or has been manipulated in some way.

Our witnesses today will be able to discuss this standard, along with other ideas for addressing the harm caused by deepfakes.

With that, I yield to the Ranking Member of the Subcommittee, Mr. Connolly.