Pentagon to Create 'Deepfake' Internet Users with AI Technology: Covert Online Influence Operation
New U.S. military plans to develop AI-generated personas risk escalating a global disinformation arms race.
The U.S. Department of Defense has set its sights on advanced AI technology to create realistic online personas that are indistinguishable from actual users.
Follow Jon Fleetwood: Instagram @realjonfleetwood / Twitter @JonMFleetwood / Facebook @realjonfleetwood
According to a procurement document reviewed by The Intercept, the Pentagon’s Joint Special Operations Command (JSOC) is seeking contractors to help develop these deepfake capabilities for use by Special Operations Forces (SOF).
The document, a 76-page outline of technological needs for elite military operations, states that “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content.”
These virtual personas are intended to appear as unique, authentic individuals with “multiple expressions” and even “Government Identification quality photos.”
JSOC’s specifications go beyond simple images, seeking technology capable of creating “facial & background imagery, facial & background video, and audio layers.”
This technology would enable the creation of “selfie video” content, complete with fabricated backgrounds, designed to be undetectable by social media algorithms.
Such capabilities would allow these digital personas to operate covertly on online platforms, blending seamlessly into virtual environments.
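To make the document's "layers" language concrete: at its simplest, placing a synthetic face over a fabricated background is per-pixel alpha compositing, repeated frame by frame for video. The Python sketch below illustrates that single operation; the frame size, the square matte, and the NumPy approach are illustrative assumptions on my part, not anything specified in the JSOC document.

```python
# Minimal alpha-compositing sketch: blend a synthetic "face" layer over a
# fabricated "background" layer using a per-pixel matte. Purely illustrative;
# the JSOC document names the layers but not any particular method.
import numpy as np

H, W = 256, 256  # illustrative frame size (an assumption)

background = np.random.rand(H, W, 3)   # stand-in fabricated background
face_layer = np.random.rand(H, W, 3)   # stand-in synthetic face render
matte = np.zeros((H, W, 1))            # alpha matte: where the face shows
matte[64:192, 64:192] = 1.0            # crude square "face" region

# Per-pixel blend: matte=1 shows the face layer, matte=0 shows the background.
frame = matte * face_layer + (1.0 - matte) * background

print(frame.shape)  # (256, 256, 3): one composited video frame
```

A production pipeline would render the face layer with a generative model, track the matte across frames, and mux in a synthetic audio track, but the blending step at the heart of it is this simple.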
The development and deployment of such deepfake technology mark a significant expansion of previous SOF interests in digital manipulation.
Last year, Special Operations Command (SOCOM) expressed an interest in using deepfakes—digitally manipulated audiovisual content that appears real—as part of its toolkit for information warfare.
These images and videos are typically created with generative machine-learning models trained to synthesize human features.
SOCOM’s focus has since evolved to include tools similar to StyleGAN, software released by Nvidia that powered the site “This Person Does Not Exist” and became infamous for generating hyper-realistic fake faces.
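For readers curious about the mechanics: a StyleGAN-class tool is a trained generator network that maps a random latent vector to a photorealistic image, so every fresh random draw yields a new, nonexistent face. The PyTorch sketch below shows that sampling principle with a toy, untrained generator; the layer sizes and architecture are simplified assumptions of mine, not Nvidia's actual StyleGAN, which adds a mapping network and style-modulated convolutions.

```python
# Minimal sketch of GAN-style face generation, assuming PyTorch is installed.
# This toy generator is untrained and purely illustrative; real StyleGAN is
# far more elaborate and must be trained on large face datasets.
import torch
import torch.nn as nn

LATENT_DIM = 512  # StyleGAN also happens to use a 512-dim latent space

class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Project the latent vector to a small feature map, then upsample
        # with transposed convolutions until we reach a 64x64 RGB image.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0),  # 1x1  -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),         # 4x4  -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),          # 8x8  -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),           # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),            # 32x32 -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Reshape (batch, LATENT_DIM) to (batch, LATENT_DIM, 1, 1) for convs.
        return self.net(z.view(z.size(0), LATENT_DIM, 1, 1))

if __name__ == "__main__":
    gen = ToyGenerator()
    z = torch.randn(1, LATENT_DIM)  # every new z yields a new "person"
    fake_image = gen(z)             # shape: (1, 3, 64, 64)
    print(fake_image.shape)
```

What makes real outputs convincing is adversarial training against a discriminator network; the generation step itself is a single cheap forward pass, which is part of why such fakes are easy to mass-produce and hard to police.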
SOCOM’s latest interest in undetectable AI-generated profiles highlights a paradox in U.S. policy.
The U.S. has long voiced concerns about foreign actors leveraging similar technologies to spread disinformation.
In a joint statement last September, the NSA, FBI, and Cybersecurity and Infrastructure Security Agency (CISA) warned that “synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications.”
These agencies labeled deepfakes as a “top risk” for 2023, underscoring the threat posed by adversaries who could disseminate AI-generated content undetected.
A classified U.S. intelligence briefing earlier this year raised alarm over adversaries like Russia, China, and Iran wielding deepfake technologies for propaganda.
The briefing underscored concerns about foreign capabilities for “AI-generated content” that could function as a “malign influence accelerant,” suggesting that these state actors pose a significant risk by exploiting the same technologies the Pentagon now seeks to develop.
In a related call for private sector support, the Pentagon’s Defense Innovation Unit stated, “This technology is increasingly common and credible, posing a significant threat to the Department of Defense, especially as U.S. adversaries use deepfakes for deception, fraud, disinformation, and other malicious activities.”
The domestic impact of deploying this technology is not lost on security experts.
“What’s notable about this technology is that it is purely of a deceptive nature,” noted Heidy Khlaaf, chief AI scientist at the AI Now Institute.
She expressed concerns about potential repercussions: “There are no legitimate use cases besides deception, and it is concerning to see the U.S. military lean into a use of a technology they have themselves warned against. This will only embolden other militaries or adversaries to do the same, leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.”
The deployment of such deceptive tactics by U.S. forces could indeed lead to a broader embrace of this technology by authoritarian governments, The Intercept emphasized.
In January, the State Department launched an international “Framework to Counter Foreign State Information Manipulation,” citing the risk of foreign deepfakes as a national security threat.
In its press release, the State Department explained, “Authoritarian governments use information manipulation to shred the fabric of free and democratic societies.”
The drive to adopt deepfake capabilities reflects an internal tension within the U.S. government.
Daniel Byman, a security studies professor at Georgetown University and a member of the State Department’s International Security Advisory Board, described this dichotomy: “Much of the U.S. government has a strong interest in the public believing that the government consistently puts out truthful (to the best of knowledge) information and is not deliberately deceiving people,” he said, noting that other branches are simultaneously pursuing capabilities rooted in deception.
Byman added, “So there is a legitimate concern that the U.S. will be seen as hypocritical. I’m also concerned about the impact on domestic trust in government—will segments of the U.S. people, in general, become more suspicious of information from the government?”
As the Pentagon pushes forward with its deepfake research, its plans may come under intense scrutiny not only for adopting the same tactics the U.S. condemns in foreign adversaries but also for the potential use of these technologies against American citizens.
The growing sophistication of deepfake technology, now sought by U.S. military forces, raises serious questions about the future of information warfare and its implications for domestic audiences.
With both domestic and international actors embracing such tools, the lines between truth and deception become increasingly blurred, fueling concerns about how these capabilities could impact public trust in a digitally manipulated era.
You can subscribe to The Intercept for free here.
Great post, Jon.
A few observations:
> It's good to keep in mind that the U.S. Gov't has the legal authority to spread propaganda and has been doing so for a while.
> The U.S. Military uses 5th-gen / cognitive warfare tactics on social media a lot, i.e., Interactive Internet Activities (IIAs) - I've written about these and talked about them on my podcast & in various interviews (even one alongside Dr. Robert Malone). Let me know if you want more info on this, including the DoD memo.
> Nowadays, the level of sophistication with which virtual (seemingly totally real) "people" can be created and used for various purposes is pretty incredible. Just have a look at www.heygen.com (play the video at the top of their home page to see it in action) to see how advanced this is - even beyond military development.
> "In January, the State Department launched an international “Framework to Counter Foreign State Information Manipulation,” citing the risk of foreign deepfakes as a national security threat."
Translation: the State Dept. (which is essentially ground zero for the U.S. Deep State, as per many of former insider Mike Benz's revelations) wants a MONOPOLY on the tech.
I would expect to see many more deepfakes in the coming weeks to spread confusion about the Election and the happenings in Ukraine & Israel/Gaza.
What are your thoughts on this article? It caught my eye as I was scanning pieces on Substack. I did not even flip to your page to read about you.