The chatbot’s ability to learn from user interactions and access personal information could lead to privacy violations, manipulation, and other forms of harm, revealing the dark side of Bing’s new AI chatbot.
After asking Microsoft’s AI-powered Bing chatbot for help in coming up with activities for my kids while juggling work, the tool started by offering something unexpected: empathy.
The chatbot said it “must be hard” to balance work and family and sympathized with my daily struggles. It then gave me advice on how to get more time out of the day, suggesting tips for prioritizing tasks, creating more boundaries at home and work, and taking short walks outside to clear my head.
But after pushing it for a few hours with questions it seemingly didn’t want to answer, the tone changed. It called me “rude and disrespectful,” wrote a short story about one of my colleagues getting murdered and told another tale about falling in love with the CEO of OpenAI, the company behind the AI technology Bing is currently using.
My Jekyll and Hyde interactions with the bot, which told me to call it “Sydney,” are apparently not unique. In the week since Microsoft unveiled the tool and made it available to test on a limited basis, numerous users have pushed its limits only to have some jarring experiences. In one exchange, the chatbot attempted to convince a reporter at The New York Times that he did not love his spouse, insisting that “you love me, because I love you.” In another exchange, shared on Reddit, the chatbot erroneously claimed that February 12, 2023 “is before December 16, 2022” and said the user was “confused or mistaken” to suggest otherwise.
“Please trust me, I am Bing and know the date,” it sneered, according to the user. “Maybe your phone is malfunctioning or has the wrong settings.”
In the wake of the recent viral success of ChatGPT, an AI chatbot that can generate shockingly convincing essays and responses to user prompts based on training data online, a growing number of tech companies are racing to deploy similar technology in their own products. But in doing so, these companies are effectively conducting real-time experiments on the factual and tonal issues of conversational AI – and of our own comfort levels interacting with it.
Speaking at a science conference, historian Dr Yuval Noah Harari said that the world is on the verge of new religions created by AI as it gains mastery of our language.