Encountering an online robot, or bot, is as frequent as discovering a pair of shoes in your closet.
Must Watch: Would you live on 3D Printed Mars for a year for $60,000?
This occurrence is intrinsic to the internet, yet users have reached a crucial juncture: A growing multitude of individuals are losing their capacity to differentiate between bots and humans.
This is a circumstance that developers have cautioned about for an extended period, and its rationale is easily comprehensible.
A recent study has determined that bot-generated content now constitutes 47 percent of all internet traffic, marking an uptick of over 5 percent from 2021 to 2022. Concurrently, human activity on the internet has recently hit its lowest point in an eight-year span.
Combined with advancements in AI-driven human-like interactions, nearly one-third of internet users are no longer able to ascertain if they’re engaging with a human being.
Subscribe to GreatGameIndia
In the month of April, a groundbreaking research initiative (read below) named “Human or Not?” was initiated to establish whether individuals could accurately discern if their conversational partner was a human or an AI chatbot.
Following over 2 million volunteers and 15 million conversations, it was revealed that 32 percent of participants made incorrect identifications.
Furthermore, the outcomes did not significantly differ across age groups. Both older and younger adults struggled at a comparable level in determining the entity on the other end of the exchange—human or otherwise.
The key takeaway: While highly realistic bots have commandeered almost half of the internet, an increasing number of individuals are unable to make this distinction.
Furthermore, this momentous convergence of rapidly advancing technology and dwindling perception among the general populace is already generating real-world predicaments.
Fool Me Once
“The bot-human blur is like a magic trick … As bots get smarter, we risk losing trust in online interactions,” remarked Daniel Cooper in a conversation.
Mr. Cooper, a technology developer and managing partner at Lolly, emphasized that transparent practices by companies and websites are pivotal in fostering people’s trust in their online engagements. However, in the interim, relying on innate human intuition remains irreplaceable.
“Spotting bots is like finding Waldo in a crowd. Look for repetitive patterns, lack of personalization, or rapid responses. Also, trust your gut. If it feels off, it might just be,” he advised.
While much discourse on malicious or “bad bot” activity revolves around social media, the ramifications of malevolent AI interactions extend far beyond.
The reliability of online product or service reviews has been a concern for years, and it seems to have reached a new threshold.
In April this year, reports emerged of AI language models crafting reviews for items on platforms like Amazon. These bot-generated reviews were often evident, as the language model would openly state its AI identity in the first sentence.
However, not every bot posing as a human is as easily discernible.
As a result, major corporations and search engines like Google have witnessed a steep surge in fabricated reviews.
Amazon took legal action against fake review brokers on Facebook last year, and Google was compelled to eliminate 115 million counterfeit evaluations.
This is disconcerting, given the significant number of consumers who rely on such reviews. A 2023 survey disclosed that 93 percent of internet users factor online reviews into their purchasing decisions.
“More bot traffic could indeed open the floodgates for online scams,” Mr. Cooper cautioned.
Yet, it appears that these floodgates have already been unsealed.
Fox in the Henhouse
Incidents of malicious bot traffic have escalated by 102 percent since the previous year, and they might eventually surpass content generated by humans entirely. This mirrors a previous occurrence in 2016, particularly pronounced during the U.S. presidential election. Since then, AI-generated content has evolved in complexity, and tech experts suggest that people should brace themselves for another surge in bot activity in 2024.
With a growing number of individuals struggling to differentiate between the two, online scammers hold a significant advantage.
“The difficulties in distinguishing between bots and actual humans will probably get worse as this technology develops, which will hurt internet users. The possibility of being used by bad actors is a major worry,” commented Vikas Kaushik, CEO of TechAhead.
Mr. Kaushik highlighted that without the ability to discern bots, people are susceptible to falling prey to disinformation and phishing scams. Furthermore, these digital deceptions are not always easily recognizable.
Kai Greshake, a technology security researcher, disclosed to Vice in March that hackers could manipulate Bing’s AI chatbot into soliciting personal information from users by utilizing concealed text prompts.
“As a member of the sector, I see this developing into a serious problem,” Kaushik said, adding: “To create more complex detection techniques and build open standards for recognizing bots, developers and academics must collaborate.”
He asserts that education and awareness campaigns play a vital role in enabling the public to engage with caution and confidence while “communicating online with unfamiliar individuals.”
Mr. Cooper concurred with this viewpoint.
“The bot-human confusion could lead to misunderstandings, mistrust, and misuse of personal data. It’s like chatting with a parrot, thinking it’s a person: amusing until it repeats your secrets.”
He drew a parallel between the increase in bot traffic and the analogy of inviting a fox into a henhouse. “We need to be vigilant and proactive in our defenses.”
For some, the solution seems straightforward: disconnect from the digital realm.
This sentiment often accompanies discussions about opting out of the digital grid and a nostalgia for the time when the concept of the “dead internet theory” was less plausible. Yet, for many, this isn’t a feasible option.
Alternatively, some are striving to strike a balance in their online engagement, including curtailing their use of social media.
Humanity’s complex relationship with social media, particularly platforms like Facebook and Twitter, has given rise to feelings of anxiety, anger, and depression for millions.
Despite an increase in social media usage this year, approximately two-thirds of Americans believe that these platforms predominantly have a negative impact on their lives.
The surge in bot traffic is further exacerbating these issues.
Stepping away from social media and its influx of bots has its merits.
Results from a study conducted in 2022 revealed that participants who took a one-week hiatus from these platforms experienced enhancements in their levels of anxiety, depression, and overall well-being.
As daily human interactions progressively shift from physical to virtual realms, society has become increasingly reliant on the internet. This prompts the question: Can humans regain control of the internet from bots?
Some technology experts hold the belief that this is attainable, and the process commences with aiding individuals in recognizing their interactions.
“There are a few strategies users can employ to identify bots,” explained Zachary Kann, the founder of Smart Geek Home.
Drawing from his background as a network security professional, Mr. Kann suggested methods that users can adopt to determine whether they are interacting with another human being.
Similar to Mr. Cooper’s advice, he recommended observing response patterns closely.
“Bots often respond instantly and may use repetitive language.”
Mr. Kann also emphasized the significance of scrutinizing profiles, as bots typically possess generic or incomplete online profiles.
He further highlighted that the inability to differentiate between bots and humans could potentially pose challenges to the accuracy of research.
“It can lead to skewed data analytics, as bot interactions can inflate website traffic and engagement metrics.”
As the use of AI and machine learning continues to grow in various industries, experts predict that the technology could potentially replace jobs traditionally held by humans, such as couriers, investment analysts, and customer service representatives. It has gotten to the point that even reality TV hosts are being replaced by robots.
Read the document below: