Secret Pentagon Plan To Create DeepFake Users To Fool You

The U.S. military has a secret plan to develop highly realistic fake people for the internet, convincing enough that neither humans nor computers could tell they are not real. This comes from a document reviewed by The Intercept, which reveals that U.S. Special Operations Command (SOCOM) is looking for companies to help with this ambitious project.

What’s in the Plan?

In a recent 76-page document from the Joint Special Operations Command (JSOC), the military outlined its desire for new technologies to create these fake online identities. It wants to build profiles that look like they belong to real people, each with its own personality and social media presence, and each backed by realistic photos, videos, and even audio clips.

The goal? To generate profiles that appear completely human and can interact on social media platforms as if they were genuine users. The military wants these profiles to display a range of facial expressions, and it also wants the capability to create “selfie videos” with matching backgrounds, making them even harder to detect.

Why Does the Military Want This?

The military believes these fake users can help gather information from public online forums. That may sound alarming, because it further blurs the line between reality and deception on the internet. The U.S. has already been caught using fake social media accounts to influence opinion and spread disinformation, including a pandemic-era campaign to undermine trust in foreign vaccines.

The Growing Threat of Deepfakes

Deepfakes are synthetic or manipulated videos, images, or audio clips that look and sound real but are not. They are typically produced by machine-learning models trained on thousands of real human faces, which learn to generate convincing imitations. The technology has raised serious concerns because it can be misused to spread false information, manipulate public opinion, and create chaos.
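
To make that mechanism a little more concrete, here is a minimal sketch of the kind of generative model behind synthetic faces: a GAN-style generator that turns random noise into an image. Everything in it, the class name, the layer sizes, the tiny 64x64 output, is an illustrative assumption rather than anything described in the JSOC document; real systems such as StyleGAN are far larger and are trained against a discriminator on enormous photo datasets.

```python
# Minimal, illustrative GAN-style generator (not a real face model).
import torch
import torch.nn as nn

class TinyFaceGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Map a random latent vector to a small 64x64 RGB image.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64),
            nn.Tanh(),  # squash pixel values into [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        img = self.net(z)
        return img.view(-1, 3, 64, 64)

# In real training (not shown), the generator is pitted against a
# discriminator that learns to tell real photos from fakes, and both
# improve until the fakes are hard to distinguish from the real thing.
z = torch.randn(1, 128)                # a random noise vector
fake_face = TinyFaceGenerator()(z)     # one synthetic 64x64 "face"
print(fake_face.shape)                 # torch.Size([1, 3, 64, 64])
```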

Despite warnings about the dangers of deepfakes, the U.S. military is now diving into this technology for its own use. This could embolden other countries, some of which have already used deepfakes for propaganda, to do the same, muddying the online environment even further.

A Dangerous Game of Deception

Experts are worried that this move by the military could lead to a world where it’s nearly impossible to trust what we see online. Heidy Khlaaf, a chief AI scientist, says that there are no good reasons to use deepfakes other than deception. The more the U.S. engages with this kind of technology, the more it risks turning misinformation into a global norm.

In recent years, both Russia and China have used deepfakes to manipulate information and influence public perception. The U.S. State Department has warned that foreign manipulation of information is a significant threat to democracy and freedom.

The Consequences for Trust

This situation creates a conflict within the U.S. government. On one hand, officials want the public to believe in the honesty of their communications. On the other, some branches are tasked with spreading misinformation. This duality could damage public trust in the government, leading to increased suspicion about any information it provides.

As the U.S. military explores this cutting-edge technology, it faces significant ethical questions about the use of deepfakes and the future of online trust. The implications of these actions could push us into a world where distinguishing real from fake becomes a daily struggle.
