By Mark Newton, Movie Pilot Associate Editor

What will eventually destroy the human race? Nuclear weapons? An asteroid? Global warming? What about a computer AI?

Artificial intelligence has long been a staple of science fiction peril. Terminator, I, Robot and 2001: A Space Odyssey are just some of the sci-fi movies which feature computers going beyond their initial programming in order to conduct nefarious schemes against humanity.

Wally Pfister's directorial debut, Transcendence, looks set to continue this tradition, featuring a brilliant computer scientist, played by Johnny Depp, who merges his brain with an artificial super-intelligence.

For the most part, the film appears to be under informational lock-down (perhaps Pfister has taken a leaf from producer Christopher Nolan's book on movie secrecy). However, we do have a fairly extensive synopsis which lays out the main points of the science fiction flick.

Fundamentally, the film will explore some of the issues humanity itself will have to navigate in the not-too-distant future. Most explicitly, it asks whether artificial super-intelligence is something to aspire to, or whether humanity — with all its inherent flaws and quirks — is ultimately better. Sure, a super-intelligence could rid us of all our wants and fears, but would we essentially be living in a dictatorship, even if a benevolent one? This dichotomy of opinion is presented in these teaser trailers featuring Depp and the voice of God himself, Mr. Morgan Freeman.

[Teaser trailer 1]

[Teaser trailer 2]

Now, although Transcendence might seem like typical Hollywood fare, the issues behind the movie are extremely contemporary, and rather divisive. Transcendence's plot features anti-technology 'terrorists' launching campaigns against scientists in an attempt to curb their developments. We're certainly starting to see an increase in such groups in the real world (although none of them have taken to assassinating Bill Gates just yet). One group, Individuals Tending to Savagery (ITS), was linked to letter-bombs sent to nanotechnology professors in Mexico, while a much less extreme group, Stop the Cyborgs, has started a public campaign against Google Glass in England. Check out a message from Transcendence's resident anti-tech rebels below:

[Video: message from the anti-tech rebels]

In this sense, the plot and issues of Transcendence are not a million miles away from reality. Although these groups are on the fringe, they are expected to grow as consumer technologies that challenge social and ethical traditions, such as bio-mechanical augmentation, surveillance and human-network technology, become more widespread. Central to the debate is whether advances in technology — including super-intelligence — are inevitable, and whether, simply because we can conceive of something, we should seek to make it a reality. Stop the Cyborgs would disagree, claiming these are issues that need extensive discussion and policy control. Their website states:

Techno-social systems are contingent creations which should be treated as moral & political issues rather than inevitable forces of nature.

Now, for some, this kind of anti-tech sentiment might sound like the crazed bellowing of those guys who stand on street corners claiming the 'End is Nigh'. But it might interest you to know these issues aren't simply the preoccupation of internet conspiracy theorists or Hollywood sci-fi screenwriters. They are also an increasingly pertinent topic in the study of philosophy and ethics. Take, for example, the paper Ethical Issues in Advanced Artificial Intelligence by Nick Bostrom, a member of Oxford University's prestigious philosophy faculty. In the paper, he suggests:

A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different.

Indeed, Bostrom doesn't shy away from making apocalyptic warnings concerning the future of super-intelligence. For example, he argues the biggest threat of all is a super-intelligence that operates only for the benefit of a certain group or elite because it has been pre-programmed with human prejudices. He also suggests that simple errors in programming could cause major issues: a super-intelligence dedicated to one arbitrary task, such as the manufacture of paper clips (and nothing else), could break beyond expected limits in order to maximize the output of paper clips, thereby satisfying its one purpose. But perhaps most importantly, a super-intelligence could deprive us of our most important possession, our humanity:

More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

Some have gone even further. Shane Legg, who works for British AI company (and recent Google acquisition) DeepMind, has claimed:

Eventually, I think human extinction will probably occur, and technology will likely play a part in this.

These kinds of concerns have even persuaded Google to set up its own ethics board to ensure artificial intelligence isn't abused. But herein lies one of the issues: if Google (or any other corporation or state, for that matter) creates an AI, won't there be incredible pressure for that AI to benefit its creator, thereby falling foul of the first ingredient in Bostrom's recipe for robopocalypse disaster? Having an in-house ethics department hardly solves this problem.

Bostrom claims the only solution is to build a super-intelligence which is fundamentally imbued with a sense of benevolence towards ALL humans, and perhaps even all sentient life. If this benevolence is a fundamental value installed into the software, perhaps the super-intelligence could only ever work as a force for good. But humans are inherently flawed, while a super-intelligence is, by design, expected to learn new things which could clash with its original programming. Furthermore, the resources to build a super-intelligence are only really available to states and major corporations, entities which have express agendas and attitudes towards certain groups of people. Would the US allow its super-intelligence to benefit all the world's population on an objective basis? What if that benevolence falls foul of US foreign policy? What if it uses US resources to assist an unfriendly state? The same applies to Google. What if its super-intelligence suggests decreasing Google's profits to increase social support? Would Google really allow that? In that sense, can we be trusted to create a truly non-prejudiced AI?

On the plus side, a super-intelligence could achieve incredible feats. Bostrom claims it could aid in the development of space travel, eliminate aging and disease, calculate the best possible policy solutions to our problems and, if coupled with nanotechnology, end environmental destruction and "unnecessary suffering of all kinds".

Ultimately, Bostrom concludes that it is to our benefit to develop super-intelligence as quickly as possible, as artificial intelligence could be instrumental in solving the problems humanity will have to deal with in the future: dwindling resources, over-population, environmental degradation and, importantly, other emerging technologies.

Personally, I'm not so sure. In fact, I've recently found myself becoming more and more of a Luddite (I don't even have internet in my apartment), so I'm inherently suspicious of these kinds of things. I don't fear that computers will take over the world so much as that such technology could be used by a small group of people to promote their own aims and potentially create or deepen social, economic and political divisions. Those who have the power and resources to create a super-intelligent AI rarely have a world-view which benefits everyone.

What do you think? Are you excited for the prospect of super-intelligent computers, or do you fear these advances? Let me know below.
