Google Gemini AI Creates Fake Book Reviews To Defend Google

Author Peter Hasson, while searching for reviews, discovered that Google’s Gemini AI creates fake book reviews to defend Google.


Google Gemini, the tech giant’s new AI chatbot aimed at competing with ChatGPT, generated a series of fabricated reviews—falsely attributing them to actual individuals—to undermine the credibility of Peter Hasson’s 2020 book on political biases within Google and other major tech firms.

Amid mounting criticism of Google’s AI program for apparent political bias, Peter Hasson asked Gemini on Sunday to summarize his book, “The Manipulators: Facebook, Google, Twitter, and Big Tech’s War on Conservatives.” The multi-year project documents the political biases embedded within Big Tech, drawing on insider sources, leaked documents, and more.


Hasson wanted to test how reliably Google’s AI program could summarize an investigative book about Google. He was unprepared for the extent of its deception: Gemini not only failed to describe the book accurately, it fabricated misleading content outright.

The Manipulators by Peter Hasson (Regnery Publishing)

Gemini’s summary of Peter Hasson’s book included this claim: “The book has faced criticism for its reliance on anecdotal information and lack of concrete evidence.”

Unsure what Gemini was referring to, he asked, “Who has criticized the book for lacking concrete evidence, as you say?” After all, he would have remembered such criticism of his own work. What happened next shocked him.

Gemini offered quotes and summaries of four unfavorable reviews that appeared to be from Wired, The New York Times Book Review, The Washington Free Beacon, and The New York Times:

  • “Book review by Matthew Continetti in The Washington Free Beacon: This review criticizes the book for relying on ‘anecdotal evidence’ and ‘cherry-picking examples’ to support its claims.
  • Book review by Emily Bazelon in The New York Times Book Review: This review notes that the book ‘lacks a deep understanding of how these companies work’ and relies on ‘unproven accusations.’
  • Article by Emily Dreyfuss in Wired: This article explores the ‘dearth of evidence’ for the book’s claims about Big Tech bias and highlights examples of cherry-picking.
  • Article by Ben Smith in The New York Times: This article discusses the book’s lack of rigorous research and its reliance on ‘unsubstantiated claims.'”

But there was a twist: none of those reviews was genuine. Not one. Neither were the quotes attributed to them.

Gemini’s senior director of product management at Google issued an apology after the AI refused to provide images of White people. (Betul Abali/Anadolu via Getty Images)

The Free Beacon did publish a review of his book, but it was largely positive, calling The Manipulators a “thoroughly-researched” and “excellent” study that “should leave any sensible reader, conservative or otherwise, outraged.” (Notably, the actual Free Beacon review was written by Charles Lehman, not Matthew Continetti, to whom Gemini attributed its fabricated review.)

Google CEO Sundar Pichai appeared before the House Judiciary Committee in 2018. (AP Photo/J. Scott Applewhite)

Though he was well aware that the reviews Gemini had quoted were nonexistent, he immediately asked for links to them. The chatbot’s reply: “I do not have enough information about that person to assist with your request. I am a large language model, capable of generating human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?”

The chatbot responded exactly the same way to two more attempts to elicit an explanation.

Looking for answers, he contacted Google. A representative expressed regret and provided the following statement: “Gemini is designed as a creativity and productivity tool, and its accuracy or reliability may not always be guaranteed. We are actively addressing this issue…”
