The technology of artificial intelligence and machine learning is fascinating but sometimes misunderstood. Former Google CEO Eric Schmidt has called for an AI deterrence regime; if the history of nuclear deterrence is any guide, such a regime would first require an AI Hiroshima.
Eric Schmidt, the former CEO of Google, likened AI to nuclear weapons and advocated for a deterrence system akin to mutually assured destruction, the doctrine that has so far kept the most powerful nations on Earth from annihilating one another.
On July 22, Schmidt spoke on a panel about artificial intelligence and national security at the Aspen Security Forum. In response to a question about the importance of morality in technology, Schmidt admitted that, in the early years of Google, he had been naive about the power of information. He then drew a bizarre analogy between AI and nuclear weapons and urged that technology be better aligned with the ethics and morality of the people it serves.
In Schmidt’s vision, China and the United States would need to formalize an AI treaty in the coming years. “In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests and eventually they were banned,” Schmidt said. “It’s an example of a balance of trust, or lack of trust, it’s a ‘no surprises’ rule. I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing… will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum. Begin some kind of thing where, because you’re arming or getting ready, you then trigger the other side. We don’t have anyone working on that, and yet AI is that powerful.”
Generally speaking, AI is not as intelligent as people may believe. It can produce works of art on par with masterpieces, outperform humans at StarCraft II, and make simple phone calls on behalf of users. It has proven far less successful at more difficult tasks, such as driving a car through a busy metropolis.
Schmidt envisioned a day in the not-too-distant future in which security concerns in both China and the United States would necessitate some sort of AI deterrence agreement. He pointed to the 1950s and 1960s, when diplomacy produced a series of controls on the world’s deadliest weapons. But it took decades of nuclear tests and, crucially, the destruction of Hiroshima and Nagasaki before the world reached the point of enacting the Nuclear Test Ban Treaty, SALT II, and other landmark agreements.
More than a hundred thousand people were killed, and the world was shown the lingering devastation of nuclear weapons, when America attacked the two Japanese cities at the close of World War II. The Soviet Union and later China then raced to build weapons of their own. We cope with the risk that these weapons will be used through mutual assured destruction (MAD), a deterrence doctrine which holds that if one country launches a nuclear strike, every other nuclear power is likely to respond in kind. No nation employs the most devastating weapons on Earth because doing so could wipe out global civilization, at the very least.
Schmidt made some colorful remarks, but we neither need nor want MAD for AI. For one thing, AI has not demonstrated that it can be anywhere near as destructive as nuclear weapons. Still, those in positions of authority fear this new technology, and usually for the wrong reasons. Some have even proposed giving AI control of nuclear weapons, arguing that it would be a better judge of their use than humans.
The issue with AI is not that it has the capacity to end the world like a nuclear bomb, but that it is only as good as the people who design it, and that it reflects their values. AI suffers from the well-known “garbage in, garbage out” problem: a system retains the biases of those who built it and of the data it learned from. A chatbot trained on 4chan turns ugly, and racist algorithms produce racist robots.
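The “garbage in, garbage out” point can be illustrated with a minimal, hypothetical sketch: a toy “chatbot” that simply echoes the most frequent words of whatever corpus it was trained on. The corpora, function names, and model here are invented for illustration; real language models are vastly more sophisticated, but their dependence on training data is the same.

```python
from collections import Counter

def train(corpus):
    """'Train' the toy model: count word frequencies across the corpus."""
    counts = Counter()
    for sentence in corpus:
        counts.update(sentence.lower().split())
    return counts

def reply(model, n=3):
    """'Generate' a reply: just the n most frequent words the model has seen."""
    return [word for word, _ in model.most_common(n)]

# Two hypothetical training corpora with very different values baked in.
polite = ["please thank you please", "thank you kindly please"]
toxic = ["hate hate spite", "hate spite hate"]

# The same code produces very different "chatbots" depending on its data:
print(reply(train(polite)))
print(reply(train(toxic)))
```

The model has no values of its own; whichever corpus it is fed, polite or toxic, is exactly what comes back out.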
Demis Hassabis, the CEO of DeepMind, the company behind the AI that is outplaying StarCraft II players, appears to grasp this concept more thoroughly than Schmidt. In a July interview on the Lex Fridman podcast, Fridman asked Hassabis how a technology as powerful as AI might be governed, and how Hassabis himself might avoid being corrupted by that power.
Hassabis’ answer focused on the people who do the building. “AI is too big an idea,” he said. “It matters who builds [AI], which cultures they come from, and what values they have, the builders of AI systems. The AI systems will learn for themselves… but there’ll be a residue in the system of the culture and values of the creators of that system.”
Artificial intelligence reflects its creators. It cannot destroy an entire city with a 1.2 megaton blast. Not unless a human instructs it to.