Will AI Kill Wikipedia?

Doubts about whether AI will kill Wikipedia center on a core tension: machine-generated content demands heavy human review and risks overwhelming lesser-known wikis with bad content.

As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that the community is split over whether to use large language models to generate content. While some people argued that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary. 

The concern is that machine-generated content demands heavy human review and would overwhelm lesser-known wikis with bad content. While AI generators are adept at producing believable, human-like text, they are also prone to including erroneous information, even citing sources and academic papers that don’t exist. The result is often a summary that seems accurate but, on closer inspection, is revealed to be completely fabricated. 

Amy Bruckman is a Regents’ Professor and senior associate chair of the School of Interactive Computing at the Georgia Institute of Technology, and the author of Should You Believe Wikipedia?: Online Communities and the Construction of Knowledge. Like people who socially construct knowledge, she says, large language models are only as good as their ability to discern fact from fiction. 

“Our only recourse is to use [large language models], but edit it and have someone check the sourcing,” Bruckman told Motherboard. 

It didn’t take long for researchers to discover that OpenAI’s ChatGPT is a prolific fabricator, which is what tends to doom students who rely solely on the chatbot to write their essays. Sometimes it will invent articles and their authors. Other times it will name-splice lesser-known scholars with more prolific ones, and do so with the utmost confidence. OpenAI itself has said that the model “hallucinates” when it makes up facts—a term that some AI experts have criticized as a way for AI companies to avoid accountability for their tools spreading misinformation. 

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Bruckman added. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.” 

