With films like A.I., I, Robot, Her, and most recently Ex Machina, it seems as if we're becoming pretty well acquainted with the idea of artificial intelligence.
I think most of us are in favor of making leaps and bounds in the field of science. But if the movies have taught us anything, it's that there can be major consequences to creating a bionic, autonomous life force. Oftentimes, chaos ensues as the human race is forced to come to terms with the definition of "humanity."
Recently, scientists have been claiming that we're making progress and may be closer than ever to creating A.I., but are we prepared to mingle with hyper-intelligent beings? According to the famed British theoretical physicist Stephen Hawking, it's about time we start preparing for a potential showdown.
At the Zeitgeist 2015 conference in London earlier this week, according to TechWorld, Hawking warned audiences of what might be looming on the horizon:
"Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.
Our future is a race between the growing power of technology and the wisdom with which we use it."
This isn't the first time Stephen Hawking has opened up about his fears regarding artificial intelligence.
This past December, the pre-eminent scientist told the BBC that A.I. would be the biggest event in history, but could also lead to our downfall. You can hear more about his thoughts on A.I. around the four-minute mark:
As daunting and unlikely as that may seem to some, Hawking is not the only scientist who holds that view.
A number of the world's leading scientists, including Hawking and Elon Musk, have signed an open letter through the Future of Life Institute stating that research towards A.I. should continue, but it should also be highly controlled and monitored. The authors write:
"The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable."
In short, the benefits of artificial intelligence are immense, but the pitfalls could be equally great.
Much along the lines of what Hawking was saying, the letter goes on to state that further precautions must be taken to ensure safety, and that caution should be the number one priority:
"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."
Like Hawking, the Tesla and SpaceX entrepreneur Elon Musk has also publicly spoken out about artificial intelligence. In October, at the AeroAstro Centennial Symposium at MIT, Musk discussed the need for regulation of A.I. research:
"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish.
With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out."
With that sentiment seemingly shared by at least two of the world's premier scientific innovators, there is certainly cause for concern. Still, that doesn't mean we're doomed to a single fate.
So what do you think? Will we be facing off against our robot overlords in the next century, or living alongside them in peace? Or will the idea of dangerous A.I. turn out to be a relic of the distant past?