In a futuristic, somewhat odd proposal, Google's former CEO suggests that an AI deterrence regime would require an AI Hiroshima.
Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually assured destruction that keeps the world's most powerful countries from destroying each other.
Schmidt talked about the dangers of AI at the Aspen Security Forum at a panel on national security and artificial intelligence on July 22. While fielding a question about the value of morality in tech, Schmidt explained that he, himself, had been naive about the power of information in the early days of Google.
He then called for technology companies to operate more in line with the ethics and morals of the people they serve.
— Aspen Security Forum (@AspenSecurity) July 22, 2022
Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. “In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests and eventually they were banned,” Schmidt said. “It’s an example of a balance of trust, or lack of trust, it’s a ‘no surprises’ rule.
I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing…will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum. Begin some kind of thing where, because you’re arming or getting ready, you then trigger the other side. We don’t have anyone working on that and yet AI is that powerful.”
The problem with AI is not that it has the potential world-destroying force of a nuclear weapon. It's that AI is only as good as the people who design it, and that it reflects the values of its creators. AI suffers from the classic "garbage in, garbage out" problem: racist algorithms make racist robots, and all AI carries the biases of the people who built it.
This is something Demis Hassabis—the CEO of DeepMind, which trained the AI that's beating StarCraft II players—seems to understand better than Schmidt. In a July interview on the Lex Fridman podcast, Fridman asked Hassabis how technology as powerful as AI could be controlled, and how Hassabis himself might avoid being corrupted by that power.
Hassabis' answer was less about himself than about who builds the technology. "AI is too big an idea," he said. "It matters who builds [AI], which cultures they come from and what values they have, the builders of AI systems. The AI systems will learn for themselves…but there'll be a residue in the system of the culture and values of the creators of that system."
AI is a reflection of its creator. It can’t level a city in a 1.2 megaton blast. Not unless a human teaches it to do so.