Some people believe artificial intelligence is more dangerous than nuclear weapons. Among those sounding the alarm are Bill Gates, Stephen Hawking and the controversial Elon Musk. “I don’t understand why people aren’t worried,” the Microsoft co-founder once said.
“Once created, artificial intelligence will develop itself and redesign itself at an accelerating pace,” Hawking said. He added that further development of the technology could prove to be “a fatal mistake”.
Today, the best-known service in the field is ChatGPT, an AI platform capable of, among many other things, explaining complex topics, writing texts, composing music and solving math problems. The product is developed by OpenAI, a company led by Sam Altman. His lifestyle might worry those who believe artificial intelligence could bring about the end of the world.
The Silicon Valley entrepreneur believes civilisation could be wiped out by a synthetic virus, an AI attack or nuclear war. That is why Altman is preparing for a possible apocalypse.
“I try not to think about it too much. But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the IDF (editor’s note: Israel’s army) and a large area of land in Big Sur,” he said in 2016, in a conversation with the founders of the startup Shypmate. Previously, Altman was president of the accelerator Y Combinator.
As further preparation, the entrepreneur also obtained his pilot’s license. Aircraft can land on his property in Big Sur, a mountainous stretch of the California coast.
Altman co-founded OpenAI with Elon Musk and others. They said the company’s goal is to ensure that artificial intelligence does not destroy human civilisation.