A former OpenAI employee who claimed the company had broken copyright law while developing its ChatGPT chatbot has been found dead under mysterious circumstances. Suchir Balaji, 26, died several weeks ago, but his death has only recently made headlines.

San Francisco police discovered his body during a “well-being check” on November 26 at his apartment. They ruled out foul play, and his death was officially determined to be suicide. Balaji’s tragic passing comes just three months after he made serious accusations against OpenAI, alleging the company had violated copyright laws in the way it used data to train its artificial intelligence models.


Balaji, a researcher at OpenAI for four years, had become a whistleblower after questioning whether the company’s data practices were legal. He left OpenAI in August, explaining that he could no longer work for a company developing technology that he believed would harm society. “If you believe what I believe, you have to just leave the company,” he told The New York Times.
Balaji’s knowledge was seen as potentially crucial to multiple ongoing lawsuits against OpenAI, in which publishers, authors, and artists accuse the company of using their copyrighted content without permission to train ChatGPT. These lawsuits have drawn growing attention in recent months as AI technology raises broader concerns about intellectual property rights.

Just days before his death, Balaji was named in a court filing connected to The New York Times’ lawsuit against OpenAI and its partner, Microsoft. The paper claims the companies used millions of its published articles to train their AI models without respecting the rights of journalists and publishers. Balaji, who was believed to hold unique documents that could support the case, was seen as an important figure in this legal battle.
Although OpenAI and Microsoft have defended their practices, arguing that their use of the data qualifies as “fair use,” Balaji disagreed. He argued that the models’ output was not different enough from the copyrighted material they were trained on to justify that defense, and that AI models like ChatGPT could compete directly with the very content used to build them. He saw this not just as an ethical issue but as a legal one, and one that could harm the internet ecosystem the technology relies on.

Balaji made his controversial views public and they circulated widely, including his belief that AI posed an immediate danger to the businesses and individuals who depend on the digital data the models were trained on. In a post on X (formerly Twitter), he expressed doubt about the “fair use” defense for generative AI products, calling it “pretty implausible.”
His tragic death has sent shockwaves through the tech and legal communities, with reactions from people like Elon Musk, who posted a cryptic message on social media: “Hmmm.”
OpenAI, when asked about Balaji’s passing, expressed sadness, saying, “We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir’s loved ones during this difficult time.”
The story of Balaji’s life and death raises troubling questions about the intersection of technology, ethics, and the law. His role as a whistleblower has made him a central figure in the growing debate over AI’s impact on intellectual property and privacy. His death adds a chilling layer to an already complex and controversial issue.