10 Perplexing Sci-Fi Film Problems Solved By The Internet
6. When Does An AI Gain Civil Rights?
This year's Transcendence tackled one of the biggest philosophical questions posed by science fiction: where does morality enter into artificial intelligence? Can a computer become so smart that it's effectively a human brain, and would it then be murder to switch it off? Should robots be given civil rights? These are brain teasers that stretch back as far as HG Wells and Isaac Asimov, two pioneers of the genre, and they have been explored in both fiction and non-fiction ever since.

We wouldn't blame you for not seeing Transcendence's take on that quandary, by the way. We didn't. Still, it's an interesting question: is it immoral to delete an A.I. without just cause? How complex does a program have to be before it should be granted civil rights?

One internet user adapted Mary Anne Warren's five "criteria for personhood", usually deployed in arguments about abortion, and reached the conclusion that once an artificial intelligence can comprehend concepts beyond what it has been coded to understand, it has a right to life. So we kinda feel bad for gunning down all those robots in Portal now.

Others suggested that an AI capable of compassionate acts would be more "human" and thus deserving of such rights, whilst somebody pointed people to the American Society For The Prevention Of Cruelty To Robots. Yes, that's a real thing.
Tom Baker is the Comics Editor at WhatCulture! He's heard all the Doctor Who jokes, but not many about Randall and Hopkirk. He also blogs at http://communibearsilostate.wordpress.com/