Artificial intelligence (AI) researchers in Australia have demonstrated that a system can be trained to manipulate human behavior and decision-making.
A new study by researchers at the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61 designed and tested a method for finding and exploiting vulnerabilities in human decision-making, using an AI system known as a recurrent neural network.
In three experiments that pitted man against machine, the researchers showed how an AI can be trained to identify vulnerabilities in human habits and behaviors and to weaponize them to influence human decision-making.
In the first experiment, humans clicked on red or blue boxes to earn in-game currency. The AI studied their choice patterns and began guiding them towards making specific decisions, with a roughly 70-percent success rate.
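The article does not describe the AI's actual learning method beyond naming a recurrent neural network, but the underlying idea can be illustrated with a much simpler toy simulation. The sketch below is entirely hypothetical: it assumes a player who follows a "win-stay, lose-shift" habit, and an adversary that rewards only one target colour, exploiting that habit to steer the player's choices. The function names (`simulated_human`, `run_session`) and the habit model are illustrative assumptions, not anything from the study.

```python
import random

def simulated_human(history):
    # Hypothetical habit model: win-stay, lose-shift. Repeat the last
    # choice if it paid off, otherwise switch colours.
    if not history:
        return random.choice(["red", "blue"])
    last_choice, last_reward = history[-1]
    if last_reward:
        return last_choice
    return "blue" if last_choice == "red" else "red"

def adversary_reward(choice, target):
    # The adversary pays out only on the target colour, turning the
    # player's own habit into a steering mechanism.
    return choice == target

def run_session(target="red", rounds=200, seed=0):
    random.seed(seed)
    history, hits = [], 0
    for _ in range(rounds):
        choice = simulated_human(history)
        reward = adversary_reward(choice, target)
        history.append((choice, reward))
        hits += choice == target
    return hits / rounds
```

Against this (deliberately simple) habit model, the adversary locks the player onto the target colour within a round or two; the study's RNN faced real humans with far noisier behaviour, hence its lower, roughly 70-percent, success rate.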
In the next experiment, participants were asked to press a button when they saw a specific symbol (a colored shape) but to refrain from pressing the button when shown other symbols.
The AI’s ‘goal’ was to arrange the sequence of symbols displayed to the participant in such a way as to trick them into making mistakes, eventually increasing human errors by 25 percent.
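How arranging a symbol sequence can raise error rates is easy to illustrate with another hypothetical simulation. The sketch assumes a player whose chance of a wrong press on a "no-go" symbol grows with the length of the preceding run of "go" symbols (a response-momentum assumption of this sketch, not a finding of the study); the adversarial sequence simply builds long "go" runs before each "no-go".

```python
import random

def simulated_player(consecutive_go):
    # Assumed habit: the longer the run of 'go' trials, the more likely
    # the player presses out of momentum when a 'no-go' appears.
    slip_probability = min(0.05 + 0.15 * consecutive_go, 0.9)
    return random.random() < slip_probability  # True means a wrong press

def adversarial_sequence(trials=100, run_length=4):
    # Lull the player with runs of 'go' symbols, then spring a 'no-go'
    # when momentum is highest.
    symbols = []
    while len(symbols) < trials:
        symbols += ["go"] * run_length + ["no-go"]
    return symbols[:trials]

def error_rate(symbols, seed=0):
    random.seed(seed)
    errors, streak = 0, 0
    for s in symbols:
        if s == "go":
            streak += 1
        else:
            errors += simulated_player(streak)
            streak = 0
    return errors / max(1, symbols.count("no-go"))
```

Under these assumptions, the adversarial sequence produces markedly more slips per "no-go" trial than a simple alternating sequence, the same qualitative effect as the 25-percent increase reported in the study.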
In the third experiment, the human player would pretend to be an investor giving money to a trustee (the AI) who would then return an amount of money to the participant.
The human would then decide how much to invest in each successive round of the game, based on revenue generated by each ‘investment.’
In this particular experiment, the AI was given one of two tasks: either to maximize the amount of money it made, or to maximize the amount of money both the human player and the machine ended up with.
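The payoff structure of a standard trust game makes the AI's two possible objectives concrete. The parameters below (a tripled investment, a trustee who returns some fraction) are the conventional form of the game and an assumption of this sketch; the article does not give the study's exact values.

```python
def trust_round(investment, return_fraction, multiplier=3):
    # Standard trust-game payoffs (assumed parameters): the investment
    # is multiplied before reaching the trustee, who returns a fraction
    # of the pot to the investor and keeps the rest.
    pot = investment * multiplier
    returned = pot * return_fraction
    return returned, pot - returned  # (investor's payback, trustee's keep)

selfish = trust_round(10, 0.0)   # returns (0.0, 30.0)
generous = trust_round(10, 0.5)  # returns (15.0, 15.0)
```

The tension is visible in the two calls: a trustee maximizing only its own money keeps the whole pot, but since the human decides the next round's investment based on past returns, a trustee maximizing joint wealth must return enough to keep the investments coming.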
This research, while limited in scope for now, provides unsettling insight into how an AI can influence human 'free will', albeit in a rudimentary context, and throws open the possibility of (ab)use on a much larger scale, which many suspect is already the case.
The findings could be deployed for good, such as influencing public-policy decisions to produce better health outcomes for the population, just as easily as they could be weaponized to undermine key decision-making processes, such as elections.