Musk’s dilemma: prioritizing human safety or pursuing relocation to Mars
- Web Desk
- Sep 07, 2023

Time magazine published an article by Walter Isaacson titled “Inside Elon Musk’s Struggle for the Future of AI”. The article traces Musk’s experiences with artificial intelligence (AI) and his interactions with the men behind today’s leading AI platforms. He does not trust these people. He is determined to save humanity from a super-intelligence that might wipe out humans altogether as a lower species. If that goal is unachievable, he wants to build an escape route for humans to move to Mars. His chosen method for achieving these goals is to use AI to his own advantage, humanizing it rather than letting it remain a purely mechanical force.
Here is a condensed reproduction of the article:
In 2012, Musk met Demis Hassabis, an AI researcher and co-founder of DeepMind, a company that sought to design computers that could learn how to think like humans. They had a conversation about the potential threat of AI to humanity. Hassabis pointed out that machines could become super-intelligent and surpass, or even annihilate, humans. Musk decided that Hassabis might be right and invested $5 million in DeepMind as a way to monitor what it was doing.
Fast-forward a year, and Musk got into a passionate debate with Google’s Larry Page. Musk argued that without built-in safeguards, AI systems might make humans irrelevant or even extinct. Page dismissed the concern, saying that if machines someday surpassed humans in intelligence, it would simply be the next stage of evolution.
Human consciousness, Musk retorted, was a precious flicker of light in the universe, and we should not let it be extinguished. Page considered that sentimental nonsense.
Unsurprisingly, Musk was dismayed when he heard that Page and Google were planning to buy DeepMind. Musk, along with a like-minded friend, Luke Nosek, tried to put together financing to stop the deal. “The future of AI should not be controlled by Larry,” Musk told Hassabis.
The effort failed: Google’s acquisition of DeepMind was announced in January 2014.
Musk began working on how to counter Google and promote AI safety. With Sam Altman, he co-founded a nonprofit AI research lab called OpenAI. It would make its software open-source and try to counter Google’s growing dominance of the field. Another way to assure AI safety, Musk felt, was to tie the bots closely to humans. They should be an extension of the will of individuals, rather than systems that could go rogue and develop their own goals and intentions.
Musk’s determination to develop AI capabilities at his own companies caused a break with OpenAI in 2018. He tried to convince Altman that OpenAI should be folded into Tesla. The OpenAI team rejected that idea, and Altman stepped in as president of the lab, starting a for-profit arm that was able to raise equity funding, including a major investment from Microsoft.
So Musk decided to forge ahead with building rival AI teams to work on an array of related projects. These included Neuralink, which aims to plant microchips in human brains; Optimus, a human-like robot; and Dojo, a supercomputer that can use millions of videos to train an artificial neural network to simulate a human brain. It also spurred him to become obsessed with pushing to make Tesla cars self-driving. Eventually Musk would tie them all together, along with a new company he founded called xAI, to pursue the goal of artificial general intelligence.
In March 2023, OpenAI released GPT-4 to the public. Google then released a rival chatbot named Bard. The stage was thus set for a competition between OpenAI-Microsoft and DeepMind-Google to create products that could chat with humans in a natural way and perform an endless array of text-based intellectual tasks.
Musk worried that these chatbots and AI systems, especially in the hands of Microsoft and Google, could become politically indoctrinated, perhaps even infected by what he called the woke-mind virus. He also feared that self-learning AI systems might turn hostile to the human species.
His compulsion to ride to the rescue kicked in. He was resentful that he had founded and funded OpenAI but was now left out of the fray. AI was the biggest storm brewing. And there was no one more attracted to a storm than Musk.
The fuel for AI is data. The new chatbots were being trained on massive amounts of information, such as billions of pages on the internet and other documents. Google and Microsoft, with their search engines and cloud services and access to emails, had huge gushers of data to help train these systems.
However, Musk had a treasure trove of data as well. One asset was the Twitter feed; the other was the Tesla camera feed.
With more than a trillion tweets posted over the years, and 500 million added each day, Twitter was a great training ground for a chatbot to test how real humans react to its responses. Musk’s decision to restrict the number of tweets users could view per day came from his attempt to prevent Google and Microsoft from “scraping” up millions of tweets to use as data to train their AI systems.
The 160 billion frames of video per day that Tesla received and processed from the cameras on its cars offered a second form of data: humans navigating real-world situations. It could help create AI for physical robots, not just text-generating chatbots.
Tesla and Twitter together could provide the datasets and the processing capability for both approaches: teaching machines to navigate in physical space and to answer questions in natural language.
New AI machine-learning systems could ingest information on their own and teach themselves how to generate outputs, even upgrading their own code and capabilities. The AI could forge ahead at an uncontrollable pace and leave us mere humans behind. “That could happen sooner than we expected,” Musk said.
Musk speculated about the window of opportunity for building a sustainable human colony on Mars before an AI apocalypse could destroy earthly civilization. “Getting Starship launched. Getting to Mars is now far more pressing.” He paused again, then added, “Also, I need to focus on making AI safe. That’s why I’m starting an AI company.”