Elon Musk, CEO of Tesla and SpaceX, has often warned about the dangers of artificial intelligence (AI). His warnings, like those of Stephen Hawking, seem to fall on deaf ears.
Musk has been sounding the alarm about the potentially devastating effects of AI for years, and he doesn’t pull any punches. It’s clear he thinks artificial intelligence poses a threat to humanity.
From “WarGames” to DeepMind, AI keeps getting smarter
Of all the companies developing AI technology, Musk is most troubled by the Google-owned project DeepMind, he said in an interview with The New York Times.
Musk was an early investor in DeepMind. In 2014, Google bought DeepMind for over $500 million, according to reports. Musk said in a 2017 interview that he held his stake in DeepMind so he could keep an eye on fast-moving AI developments, not for a return on his investment.
“The nature of the AI that they’re building is one that crushes all humans at all games,” he said. “It’s basically the plotline in ‘WarGames.'”
Released in 1983, “WarGames” starred Matthew Broderick and dramatized the dangers of AI. It featured a military supercomputer trained to run wartime scenarios; when the artificial intelligence is triggered, it brings the world to the brink of nuclear war.
The billionaire warned in 2016 that human beings could become the equivalent of “house cats” to new AI overlords. Musk has repeatedly called for regulation and urges researchers to proceed with caution as the technology develops.
By some estimates, AI could surpass human intelligence in the next five years, even if we don’t feel the impact immediately.
The potential for total disruption in every sector is high. Despite the fact that most processes are already highly automated, there are still plenty of ways in which robots can be improved with the addition of AI.
“That doesn’t mean that everything goes to hell in five years,” Musk emphasized. “It just means that things get unstable or weird.”
Robots are a big part of popular culture
Today we have machines that can learn, think, and ask questions. With the advent of big data, advanced neural networks, and massively parallel processing, the field of AI is growing at an unprecedented rate.
While you read this article, automatons are hard at work in factories. Ford, for example, has two four-legged robots named Fluffy and Spot crawling around its Van Dyke Transmission Plant, laser-scanning 3D images that engineers will use to redesign and retool the facility.
Machines beat humans at chess, and cars like Musk’s Teslas have self-driving features.
Thanks to an ordinary home Wi-Fi system, you can manage the appliances in your household from a smartphone app while you are at work or even out of town.
Technologists, engineers, and researchers are finding ways to use artificial intelligence in education, the medical profession, and customer service. AI is influencing the automotive industry, and it is even traveling in space.
As the prospect of superintelligence draws closer, neuroscientists, physicists, and programmers all ponder the nature of life. More often than not, it is suggested that AI will someday develop into a separate species.
Changing the Rules in AI
Until the early 1990s, AI was built primarily with rules-based techniques. This meant that machines could only do what they were explicitly programmed to do.
Artificial intelligence programmed by humans was purely reactive, limited to specific responses defined by explicit “rules.” It could not absorb experience or maintain memories to help it make decisions.
The shift from rules-based learning to experiential learning has introduced a “self-determination” dynamic that allows machines to learn from the information they are given.
AI, at its most effective, uses machine learning and deep learning tools. With machine learning, computers can teach themselves from data rather than depending on a human programmer for every rule. With deep learning techniques, AI can effectively derive its own rules.
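The contrast between the two eras can be sketched in a few lines of code. The example below is a hypothetical illustration (the spam phrases, the perceptron settings, and the AND-function task are all invented for this sketch, not drawn from any company's actual system): first a rules-based function whose behavior is fixed by its author, then a tiny perceptron that learns its behavior from labeled examples instead.

```python
# Rules-based AI: behavior is fixed by hand-written rules. The program
# can never handle a case its author did not anticipate.
def rule_based_is_spam(message):
    return "free money" in message or "winner" in message

# Learning-based AI: behavior is induced from labeled examples.
# A minimal perceptron learns weights and a bias from data instead
# of being told the rule directly.
def train_perceptron(examples, epochs=20, lr=0.1):
    # examples: list of (features, label) pairs, label in {0, 1}
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # Nudge the weights toward the correct answer on each mistake.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# The machine is never shown the AND rule; it infers it from examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The rules-based function will misclassify any spam phrasing its author didn't list, while the perceptron's behavior comes entirely from the training data; feed it different examples and it learns a different rule, which is the "self-determination" dynamic described above in miniature.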
Voltaire summed it up long before Spider-Man quoted him in the movies: “With great power comes great responsibility.”
The Soul Algorithm
In 1950, when Alan Turing presented his paper “Computing Machinery and Intelligence,” he was asking whether computers could really “think.” Since then, popular culture has given us “The Jetsons,” “2001: A Space Odyssey,” “Star Trek,” and the “Star Wars” franchise. On-screen, in classrooms, and in the workplace, machines are thinking.
Since the 1980s, AI in robot form has been depicted in such classic films as “Blade Runner,” “The Terminator,” “I, Robot,” “WALL-E,” and “RoboCop.” And of course, “WarGames.”
The idea that robots can somehow come alive and do harm is a reality in the movies. So is the idea that a machine can be loyal, can love, and can even have its heart broken.
The ability to think and to be conscious is a sign that a “being” is truly alive, and having feelings and emotions is another indicator that we aren’t dealing with inanimate objects. To date, artificial intelligence hasn’t produced a machine that meets either test.
We are starting to see AI machines that look like they can both think and feel. And they may be thinking better than humans and feeling differently than we expect.
Regardless, at the center of these debates and discussions lies the question: Can machines ever come to life? Is it possible for them to have emotions, morals, consciousness, perhaps even a soul?
It has been said that if AI ever comes alive, it will be thanks to its programmers. Somewhere in the future there may be an algorithm that creates a loving, thinking consciousness. Or even a soul.
An AI future
In the high-tech sector, many of the players are coming together in groups like the Partnership on AI, the Machine Intelligence Research Institute, and the Future of Life Institute. The greatest minds in tech meet in these modern-day think tanks to plan for worst-case scenarios and imagine the amazing benefits of an AI future.