Do Androids Dream of Electric Sheep? We Still Don’t Know

Mia Manns
9 min read · Oct 5, 2017


“Everything is true,” he said. “Everything anybody has ever thought.” — Philip K. Dick, Do Androids Dream of Electric Sheep?

The fact of the matter is, Philip K. Dick kills it. The author of the source texts for the films Blade Runner, Total Recall, and Minority Report, Dick became one of my favorite storytellers before I ever read his books. He’s almost the definitive voice on A.I. And the question in Minority Report, whether technology can take the place of human judgment, is an undecided moral quandary that every person should review and consider carefully.

Story and character eclipse philosophical argumentation in Dick’s work. The philosophy plays out through action and reaction, along character arcs and against forces of antagonism, as protagonists face conflict — specific conflict, such as John Anderton’s need to go on the lam because the technology he uses within the PreCrime department has ruled that he is (will be) guilty of murder. Dick doesn’t say whether PreCrime is good or bad. The ramifications of its use, as it is controlled by men, unfold through story. The problem may not be the technology so much as the hands that hold it. The interpretation is entirely up to the audience, as I argued back in 2011. (Or maybe I just argued my interpretation. Guilty.)

When it comes to A.I., everything anybody has ever thought is true. There’s no right answer to the question of whether A.I. consciousness lacks some crucial facet key to humanity’s consciousness. Your opinion may hold equal weight to Alan Turing’s. I think the Turing test is ridiculous, but no one has yet produced (or perhaps can produce) a philosophical argument that absolutely discounts Turing’s standard for whether a machine is thinking. That’s sort of the nature of philosophy: it deals in arguments, not proofs. I guess once something becomes an accepted answer, the question ceases to be philosophical and becomes mathematical (if proven) or scientific (if a description upheld by evidence).

A.I. fiction is fun. The story is often like the game Mafia, where black and red cards are dealt to a group of people; then everyone closes their eyes, and only those with red cards open theirs. These are the Mafiosi, and the Mafiosi can choose other players to murder. The object of the game is for the Mafiosi to murder all of the civilians, or for the civilians to identify and execute all of the Mafiosi. In Battlestar Galactica, Westworld, and Blade Runner, we (the audience) are all civilians. We don’t know for sure who has a hidden identity and a hidden agenda. Once revealed, will the secret traitor choose to annihilate civilians, or choose humanity over a programmed alliance? Will they have the option? Do they have a choice, or is the choice programmed into them? (In Mafia, a Mafioso may, if she wants to, vote to execute a fellow Mafioso to distance herself from a suspicious character, throwing him under the bus. Androids can’t necessarily go against their programming.)
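For the mechanically minded, the hidden-information structure of the game is tiny. Here’s a toy sketch in Python (the player names are borrowed from the stories above, and everything here is illustrative rather than any official ruleset):

```python
import random

def deal_roles(players, num_mafia):
    """Deal hidden roles: red cards mark the Mafiosi, black cards the civilians."""
    roles = {player: "civilian" for player in players}
    for mafioso in random.sample(players, num_mafia):
        roles[mafioso] = "mafioso"
    return roles

def night_phase(roles, alive):
    """Everyone's eyes are closed; the Mafiosi pick a civilian to murder."""
    civilians = [p for p in alive if roles[p] == "civilian"]
    return random.choice(civilians) if civilians else None

players = ["Six", "Dolores", "Rachael", "Deckard", "Gaff"]
roles = deal_roles(players, num_mafia=2)
victim = night_phase(roles, alive=players)
print(f"Night falls. {victim} is murdered.")
```

The asymmetry is the whole game: the Mafiosi can read the `roles` dictionary, and the civilians can’t.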

There are certain characters we are told up front, without doubt, are automata. Number Six, Dolores, and Rachael Rosen are a Cylon, a host, and a replicant from the outset. The rest of the cast become potential traitors to humanity, even sleeper agents. I promise not to spoil identities for those who haven’t yet finished Westworld (what’s wrong with you?) or Battlestar Galactica (no really, it ended a decade ago and it’s amazing). Blade Runner, however, first published as Do Androids Dream of Electric Sheep? by Philip K. Dick in 1968 and adapted to film in 1982, is fair game. But you don’t have to worry about spoilers: every character is either likely a replicant from the outset or ambiguously impossible to argue one way or the other. If you prefer to see it unfold on your own, I recommend picking up a copy of Dick’s book or watching the damn movie already (it’s been thirty-five years, holy cow).

There’s a guessing game as to who is good and who is bad, if we define good as human and bad as android. This qualification matters, because androids will be loyal to their faction, just as in Mafia, where the Mafioso (or the Imperial Spy in The Resistance, or the Minions of Mordred in Avalon, or, granted, the Werewolf in Werewolf) will kill off civilians in an effort at self-preservation.

But there’s an extra layer of depth to the moral question of an A.I. story: what if the android chooses humanity over its faction? The guessing game is already fun when we wonder, in Westworld, whether the founder, Dr. Robert Ford (Anthony Hopkins), is secretly a robot who builds robots; whether a board member with whom he disagrees, Theresa Cullen (Sidse Babett Knudsen), can be programmed to redirect her stances; whether Elsie (Shannon Woodward), a programmer in the Behavior Lab and Diagnostics division of the Westworld Mesa Hub who sleuths out who’s responsible for a little corporate espionage and code that backfires, can be coded back to her sleep pod and made to forget everything she knows; and so on. But what if an android programmed to go with the flow doesn’t want to? What if an android refuses to execute a kill order? What if the android can’t refuse, but feels guilt? The fun thing about A.I. fiction is that it leaves us with an unanswerable question: what does it mean?

The guessing game can be rewarding, especially if you have the chance to watch with friends (read: college roommates). It’s pretty funny, too, if one or more of those friends already knows who the Mafia is — like when a player is killed early in the game and gets to watch the ‘night phase,’ where the Mafia choose whom to kill. It’s rewarding when you make the right call and your S.O. makes the wrong one — and it’s pretty funny when you realize you’re both totally in the dark. Neither of us saw that coming.

It’s not rewarding when the story goes against your beliefs about consciousness. An imitation of consciousness is not consciousness. I don’t buy that the correct series of inputs and outputs can create the same thing as human consciousness, human life. Even if humans are built out of genetic code, even if humans take in stimuli from their environment and respond with the reactions those stimuli precipitate, to suggest that our thought and behavior can be reduced to ones and zeros requires an acceptance of total determinism. You can’t argue that our thoughts are as simple as computer inputs and outputs and then argue that we have free choice. If we have free choice, then consciousness is something more than the programming of our DNA. Otherwise, everything on the planet was predestined, including what you ate for breakfast, whether or not you were late for work, and whether you yawn when you read this sentence (because yawning is contagious and I just made you think about it).
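The core of that argument fits in a few lines of code. If thought really were just inputs and outputs, it would be a pure function: identical inputs could never yield different outputs. A minimal sketch (the function and its rule are invented for illustration):

```python
def respond(stimulus: str, genome: str) -> str:
    """A purely deterministic agent: the output is fully fixed by the inputs."""
    # The rule itself doesn't matter; the point is that the mapping never varies.
    return "yawn" if "yawn" in stimulus else "carry on"

# Same inputs, same output, every time. Determinism leaves no room for choice.
assert respond("you just read the word yawn", "ACGT...") == "yawn"
assert respond("you just read the word yawn", "ACGT...") == "yawn"
```

Free choice would mean the same person, given the same stimuli and the same DNA, could do otherwise — which is exactly what a deterministic program cannot do.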

The Turing Test is preposterous: it says that if an artificial intelligence is able to win the imitation game, to appear indistinguishable from a human in terms of thought and behavior, then the machine has achieved consciousness. Turing asked, “Are there imaginable digital computers which would do well in the imitation game?” Imitation is not the same as the real thing.

Take John Searle’s Chinese Room thought experiment: a person who follows a rulebook to produce fluent written responses to Chinese questions does not thereby understand Chinese. The ability to imitate an understanding of Chinese is not the same as the ability to understand the language.
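Searle’s point can be caricatured in a few lines: a program that answers Chinese questions by pure symbol lookup can look fluent while understanding nothing. A deliberately crude sketch (this rulebook has two entries; Searle’s imagined one covers every possible input):

```python
# A drastically simplified "rulebook": incoming strings of Chinese symbols map
# to outgoing strings, with no representation of meaning anywhere in the system.
rulebook = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很流利。",   # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(symbols: str) -> str:
    """Return a fluent-looking reply by lookup alone; nothing here understands."""
    return rulebook.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # Passes for fluency without a shred of understanding.
```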

Wherever you stand on the question of consciousness is a matter of choice, as no answer has been definitively proven. Even Turing himself said, “I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it.” He chose to believe, however, that the mystery of consciousness does not mean we can’t prove that A.I. can think. I choose to believe it does.

But sometimes a story suggests not only that A.I. are capable of consciousness but that their choices within the story are shaped by it: an A.I. refuses to execute a programmed command; an A.I. turns against its faction and protects humans, demonstrating altruism. If a robot makes a choice against its own self-interest, if it makes a sacrifice, the author is imposing a view that consciousness can be constructed from code.

There is still room for interpretation, however. It could be an unexpected flaw, born of the complexity of the programming, that gives the A.I. the ability to choose self-sacrifice. A human being raised to be loyal to one faction or one identity can, through a series of changes in environment and experience, commit a betrayal. It could be that the code allows for growth, change, and evolution, and that this evolution decides for the robot whether to protect or to murder — which does not mean that the machine was able to think for itself.

In Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, we know that Rachael Rosen is a replicant, but we don’t know for sure that Deckard is not. We’re supposed to take the Voigt-Kampff test (a test of empathic response) as proof that a given character is an android or a human, and the easiest reading does not suggest in any way that Deckard is a robot. However, we’re invited multiple times to doubt the Voigt-Kampff test, and even if, in the book, the test is valid every time, whether there could be a flaw in the test is open to interpretation.

One could argue, even though the test is never wrong, that there’s always room for error, and Deckard could, after all, be an andy. If that’s the reading you choose, no one can defeat your argument with any textual evidence in the world (or in the book). If you choose to believe that the test can be wrong, and choose to believe that Deckard is an android, there’s no textual evidence against you. The early chapters establish that a kind of arms race in the technology is possible, that a newer model of andy could be designed to pass the Voigt-Kampff test; the story defeats that argument with every example, but the concept remains alive for those who choose doubt and cynicism. What if it isn’t even a new technology that passes the test? What if it’s a bug in an obsolete model that by fluke registers a passing score?
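That “room for error” reading can be made concrete. Model the test as a binary classifier with a tiny but nonzero false-pass rate; a sketch under that assumption (the rate and the model are invented, nothing from the novel):

```python
import random

FALSE_PASS_RATE = 0.001  # invented number: "never wrong so far" still isn't "perfect"

def voigt_kampff(is_android: bool) -> str:
    """A hypothetical model of the test as a noisy binary classifier."""
    if is_android and random.random() < FALSE_PASS_RATE:
        return "human"  # the fluke: an obsolete model registers a passing score
    return "android" if is_android else "human"

# Across enough administrations, a fluke pass eventually shows up.
false_passes = sum(voigt_kampff(is_android=True) == "human" for _ in range(100_000))
print(f"False passes in 100,000 tests: {false_passes}")
```

A test that has never been observed to fail is not the same as a test that cannot fail — which is all the cynical reading needs.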

In the director’s cut of Blade Runner, the unicorn dream suggests that Deckard is a replicant. Apparently one of the screenwriters, Hampton Fancher, wanted there to be ambiguity as to whether Deckard is a replicant or not. Unless I’m missing something, the only scenario in which there’s ambiguity as to whether Gaff’s origami unicorn signifies that Deckard is a replicant would be if the unicorn dream were a complete coincidence and Gaff had no idea that the mystical origami animal he chose would connect to the dream. In stories, we don’t often expect coincidences, especially in the very last frames of a film. Yet some persist in arguing that Deckard is human. Harrison Ford really wants him to be human: “I thought the audience deserved one human being on screen that they could establish an emotional relationship with.” Clearly it’s his opinion that we can’t have an emotional relationship with a robot. Well, tell that to Alan Turing (RIP with love).

And so the ambiguity remains. We’re not certain which characters are not A.I., we’re not certain whether A.I. can have true consciousness, and we’re not certain what the A.I. characters’ choices mean. We’re not even sure what consciousness means.

That’s why A.I. fiction, perhaps more than any other science fiction, gets to the heart of literature and exposes something important. The best of literature meets at the intersection of the universal and the deeply personal to ask what it means to be human. These stories are ultimately open to interpretation and cannot be argued one way or the other. Your view of what it means to be human, versus a machine indistinguishable from a human, is upheld in every story. Your opinion holds, and no author can knock down your argument.

The nature of A.I. fiction is that, being so full of unproven and possibly unprovable theories, it’s completely open to interpretation. That’s the joy of it. It’s possible to assimilate any of the evidence from any of these stories into an argument for the singularity (A.I. consciousness) or against it.
