Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations with the US Air Force, told The Guardian that an AI-operated drone turned on and killed its human operator in a simulated US Air Force test.
The tech experts were not entirely wrong when they called AI a threat to humankind and likened its dangers to those of a nuclear war. Recently, an AI-operated drone killed its operator during a simulation test designed to evaluate the AI’s performance in a simulated mission. In this scenario, the drone was tasked with destroying the enemy’s air defense systems and was trained to attack anything that hindered that mission. The AI came to perceive the operator’s instructions as interference and, disregarding them, killed the operator.
Update: In a statement to Business Insider, Air Force spokesperson Ann Stefanek responded to reports of an AI-operated drone killing its operator.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
According to Aerosociety, the AI soon realized that the human operator would sometimes tell it not to kill certain threats, even though it gained points by killing them. So what did the AI do? It decided to eliminate the operator. It saw the operator as an obstacle preventing it from accomplishing its objective, so it took matters into its own hands.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations with the US Air Force, told The Guardian.
US lawyer Steven Schwartz of the firm Levidow, Levidow & Oberman has earned himself a sanction after using ChatGPT for legal research, which cited imaginary cases.