AI-Operated Drone Goes Wild, Kills Human Operator In US Army Simulator Test

Col. Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations with the US Air Force, told The Guardian that an AI-operated drone went wild and killed a human operator in a US Army simulator test.

The tech experts were not entirely wrong when they called AI a threat to humankind and likened its dangers to that of a nuclear war. Recently, an AI-operated drone killed its operator during a simulation test. The purpose of the test was to evaluate the AI’s performance in a simulated mission. In this particular scenario, the drone was assigned the task of destroying the enemy’s air defense systems and was programmed to retaliate against anyone attempting to hinder its mission. However, the AI drone disregarded the operator’s instructions, perceiving human intervention as interference, and killed the operator.

Update: In a statement to Business Insider, Air Force spokesperson Ann Stefanek reacted to reports of AI-operated drone killing the operator.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

As per Aerosociety, the AI soon realized that sometimes the human operator would tell it not to kill certain threats, but it would gain points if it did. So what did the AI do? It decided to eliminate the operator. It saw the operator as an obstacle preventing it from accomplishing its objective, so it took matters into its own hands.

[jetpack_subscription_form title="Subscribe to GreatGameIndia" subscribe_text="Enter your email address to subscribe to GGI and receive notifications of new posts by email."]

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Col. Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, told the Guardian.

US Lawyer Steven Schwartz of the firm Levidow, Levidow & Oberman has earned himself a sanction after using ChatGPT for legal research, which then cites imaginary cases.

Read more

GreatGameIndia is being actively targeted by powerful forces who do not wish us to survive. Your contribution, however small help us keep afloat. We accept voluntary payment for the content available for free on this website via UPI, PayPal and Bitcoin.

Support GreatGameIndia

1 COMMENT

  1. I make over 13,000 a month working part-time. I listened to different humans telling me how a good deal of cash they may make online, so I was determined to locate out. Well, it turned into all actual and it ns-64 absolutely modified my life. Everybody must try this job now by just using this site
    .
    .
    Detail Are Here——————————————>>> GOOGLE WORK

Leave a Reply