In early 2020, deepfake expert Henry Ajder uncovered one of the first Telegram bots built to “undress” photos of women using artificial intelligence. The bot had been used to generate more than 100,000 explicit images, some of them of children. Ajder described the discovery as a turning point that underscored the dangers deepfakes could pose. Fast forward to today, and deepfakes are not only more common; they are also easier to create and more harmful than ever.

A recent WIRED review of Telegram communities that trade in explicit nonconsensual content found at least 50 bots that claim to create explicit photos or videos of people in just a few clicks. Together, these bots have more than 4 million monthly users. Some individual bots have more than 400,000 users, while several others have over 100,000. The numbers show just how widespread and easily accessible these tools have become, particularly on Telegram, one of the most popular messaging apps in the world.
The Explosion of Nonconsensual Deepfakes
Explicit deepfakes, often referred to as nonconsensual intimate image (NCII) abuse, have exploded since they first appeared in late 2017. Advances in generative AI have fueled this growth, producing a flood of websites and Telegram bots that can create such content. These tools have been used to target thousands of women and girls around the world, from Italy’s Prime Minister to schoolgirls in South Korea. A recent survey found that 40% of U.S. students reported being aware of deepfakes linked to their schools in the past year.
WIRED’s investigation found that the 50 identified bots were supported by at least 25 Telegram channels, which had over 3 million members combined. These channels provide updates about new bot features and promotions for purchasing “tokens” needed to use these bots. They also serve as a place for users to find new bots if their favorites get removed.
Telegram’s Challenge in Tackling Deepfakes
After WIRED contacted Telegram about these explicit deepfake bots, the company removed the 75 bots and channels that had been identified, though it did not comment on why it took that action. Many of the bot creators quickly signaled their determination to launch new ones, a measure of how resilient this underground market is.
Telegram bots operate like small apps within the messaging platform and can do everything from running trivia quizzes to generating explicit images. Many are upfront about their purpose, with names and descriptions that reference nudity and the ability to remove clothing from photos. Users typically need to buy tokens to create images, though it’s unclear how well, or even whether, these bots actually work.
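For context on how such bots plug into the platform, the sketch below shows a harmless example of the same underlying mechanics: a tiny trivia-quiz bot written against Telegram’s Bot API. It is an illustration only; the python-telegram-bot library, the /quiz and /answer commands, and the placeholder token are assumptions made for this sketch and are not drawn from WIRED’s reporting on the abusive bots.

```python
# Minimal sketch of a benign Telegram bot (a trivia quiz), assuming the
# open-source python-telegram-bot library (v20+). The token, commands, and
# question are placeholders; nothing here is specific to the bots WIRED found.
from telegram import Update
from telegram.ext import Application, CommandHandler, ContextTypes

QUESTION = "Which planet is known as the Red Planet?"
ANSWER = "mars"

async def quiz(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # /quiz sends the question to whoever messaged the bot.
    await update.message.reply_text(QUESTION + " Reply with /answer <guess>.")

async def answer(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # /answer <guess> checks the user's guess against the stored answer.
    guess = " ".join(context.args).strip().lower() if context.args else ""
    verdict = "Correct!" if guess == ANSWER else "Not quite, try again."
    await update.message.reply_text(verdict)

def main() -> None:
    # The token comes from Telegram's @BotFather; "YOUR_BOT_TOKEN" is a placeholder.
    app = Application.builder().token("YOUR_BOT_TOKEN").build()
    app.add_handler(CommandHandler("quiz", quiz))
    app.add_handler(CommandHandler("answer", answer))
    app.run_polling()  # long-polls Telegram's Bot API for incoming messages

if __name__ == "__main__":
    main()
```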
The Dark Reality of Deepfake Abuse
The consequences of these deepfake images can be devastating. They can cause psychological trauma, humiliation, and fear, particularly for women and girls. Emma Pickering, a representative from a domestic abuse organization, noted that this kind of abuse is common but rarely punished.
Despite some legislative progress—23 U.S. states have passed laws addressing nonconsensual deepfakes—tech companies have been slow to act. Some deepfake creation apps even found their way into the app stores of Apple and Google, demonstrating the challenges of regulation.
Ajder points out that Telegram is uniquely suited to this kind of abuse: its search function lets users find communities and bots with ease, and the platform hosts both the tools to create harmful content and the channels to share it. That combination makes it a particularly dangerous space for victims.
The Struggle for Accountability
In late September, many deepfake channels began reporting that Telegram had banned their bots. Shortly afterward, however, users could already find links to new bots. This ongoing game of cat and mouse shows how difficult it is for Telegram to rein in this harmful content.
Elena Michael, a cofounder of a campaign group focused on image-based abuse, emphasized that Telegram should be more proactive in moderating harmful content rather than waiting for users to report it. She argues that the burden shouldn’t fall on victims to protect themselves; the platform should take responsibility.
As deepfake technology continues to advance, the presence of these harmful bots on Telegram raises serious concerns about user safety, especially for vulnerable individuals. The battle against nonconsensual deepfakes is far from over, and both tech companies and lawmakers need to take urgent action to protect users from this emerging threat.