Tuesday, December 13, 2016

Westworld and Moral Obligation to the Dubiously Sentient

So I realize I'm late to the party here. My HBO source dried up a while ago, and I only realized today that HBO will let you watch the first episode for free. But while I've never seen the Michael Crichton movie the show is based on, I knew as soon as I saw a preview that I'd have to check out the Westworld show.

I've only seen the first episode, but already I have a lot of thoughts, and not all of them are going to fit in this post.

One of my philosophical fixations has always been the nature of consciousness and, by extension, sentience. Like Descartes, I think that one's own consciousness is the purest philosophical axiom. Without a consciousness, we could not even be there to think about other philosophical questions. I don't think that the existence of the consciousness automatically leads to conclusions about the existence of God or guarantees the immortality of the soul (a term I consider to be just a religious way of talking about the consciousness) the way that Descartes does, even if I hope for both of those to prove true.

But nowhere within science fiction is the question of consciousness more central than when we talk about artificial intelligence. You see it from Asimov's robot stories to Data on Star Trek TNG to recent movies like Ex Machina.

Throughout history, we have built tools, but they have almost always served mechanical purposes - making physical labor easier rather than mental labor. We've had things like abacuses for a long time as well, but the birth of the computer age accelerated technology's approach to intelligence.

We generally have a sense - perhaps an instinct - that if we are the ones to construct an information-processing machine, and if we understand its underlying logic, then it will not be sentient. We see ourselves as a separate kind of entity, imbued with something a machine could never have - this spark of consciousness.

There's a concept in philosophy called a philosophical zombie. The basic idea: imagine a person who behaves exactly as you would expect a human being to behave. You hit them on the toe, they yelp in pain. You tell them a funny joke, they laugh. You ask them about the latest Marvel movie and they talk about how cool they found all the psychedelic imagery (at the time of this post, the most recent one was Doctor Strange). But in this hypothetical, while the person's behavior is indistinguishable from any other person's, we know (simply because these are the parameters of the hypothetical) that there is no conscious experience behind any of it. They don't truly see things - they instead process the information that the receptors in their retinas take in, combine it with the rest of the information processing in their brain, and output a set of behaviors.

Now, would it be wrong to kill this person?

This raises the question of why we consider murder to be morally wrong - arguably the cardinal moral wrong in our moral universe. I think that most of us would agree it is because death, along with the pain that accompanies the disruption of the body's systems, is undesirable. We fear the pain of a mortal injury, and we also fear the potential oblivion that death could bring.

A philosophical zombie would show a semblance of fear or pain if threatened with such an action, but by definition, it would not truly experience them.

So does that make it ok?

I can't really say. I'm inclined to lean toward no - the argument being that even if no real fear or pain is being inflicted on an entity capable of experiencing such things, the act of killing still reflects on the person who carries it out.

But I also play video games.

Video games as a medium are not intrinsically violent, but they tend to take the form of violent adventures. Even the most kid-friendly games, like the Super Mario Bros. series, primarily have their cartoon mascots employ violence against enemies. Frankly, you can trace this back a lot farther. Chess, arguably the most iconic game in the world, takes the form of a highly abstracted battle between two kingdoms. The goal of the game is to kill the enemy's king. "Checkmate" is really a bastardization of the Persian "Shah Mat," often rendered as "the king is dead."

Still, a chess piece is generally very few steps removed from a hunk of plastic, stone, or whatever you used to make it. I don't think anyone but the most devoted animist would consider the pieces to actually be sentient beings who cared if they were killed or not.

(EDIT: One can also point out that while the king chess piece is "killed," it is not a true death. The piece can return to life, as it were, when a new game begins. So perhaps a chess piece is merely an actor playing a role. As anguished as Hamlet might be as he uncovers the truth of his father's murder and eventually dies from poison, the actor playing Hamlet is probably having a great time.)

And the NPCs (non-player characters) in video games are ultimately a whole lot less sophisticated than a human mind. Even in games where the computer AI is more sophisticated, it's still essentially just working from a flowchart.
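To make that concrete, here's a minimal sketch in Python of the kind of flowchart I mean - a tiny finite-state machine. The states, events, and behaviors here are entirely hypothetical, just for illustration; real game AI is more elaborate, but the principle is the same:

    # A minimal "flowchart" NPC: a finite-state machine.
    # States, events, and behaviors are made up for illustration.
    NPC_FLOWCHART = {
        # (current state, input event) -> (next state, behavior)
        ("idle", "player_nearby"): ("alert", "look around"),
        ("alert", "player_attacks"): ("combat", "draw weapon"),
        ("alert", "player_leaves"): ("idle", "resume patrol"),
        ("combat", "low_health"): ("fleeing", "run away"),
        ("combat", "player_leaves"): ("idle", "sheathe weapon"),
    }

    def npc_step(state, event):
        """Look up the NPC's next state and behavior; unknown inputs do nothing."""
        return NPC_FLOWCHART.get((state, event), (state, "do nothing"))

    state = "idle"
    for event in ["player_nearby", "player_attacks", "low_health"]:
        state, behavior = npc_step(state, event)
        print(event, "->", state, ":", behavior)

Whatever happens in the game, the NPC is only ever looking up its next move in a table like this. There's no one home to experience the "fleeing."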

In Westworld, we're clearly meant to question whether its automatons - called "hosts" - are sentient. Given that they are played by human actors, the show pushes us toward affirming that. But there's an interesting scene in which the park's creator, Robert Ford (who shares a name with a figure from the history of the American West), is down in "cold storage," where the defunct hosts are kept. He's having a drink with one of the older models. And while this character is also played by a human actor, his movements are far jerkier and his dialogue a lot more repetitive. It becomes apparent that the hosts are really just an evolution of the kind of Disneyland audio-animatronics that presumably inspired Crichton in the first place.

The convincing realism of the current hosts thus appears to be simply a refinement of that system: they have become increasingly sophisticated through constant reprogramming. In an early scene, some of the designers are exploring the new "reveries" that Ford has added to the programming - unique gestures generated procedurally from a host's memories. We're long past the point where the hosts stick to a strict script, but that's not really pure sci-fi, given that we already have computers doing this kind of procedural generation to create unpredictable behaviors and products.
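As a toy illustration of that last point (entirely hypothetical - this is not the show's system, just the general idea of procedural generation), behavior can be derived deterministically from stored data so that it looks unique without anyone scripting it line by line:

    import random

    # A toy sketch of procedural generation (hypothetical, not the show's system):
    # derive small behavioral variations deterministically from stored "memories".
    GESTURES = ["brush hair back", "glance at the horizon",
                "tap fingers on the table", "swat at a fly", "half-smile"]

    def gestures_from_memory(memory, count=2):
        """Seed a generator from a 'memory' so the same memory always yields
        the same gestures, while different memories yield different ones."""
        rng = random.Random(memory)  # deterministic per memory string
        return rng.sample(GESTURES, count)

    print(gestures_from_memory("the cattle drive"))
    print(gestures_from_memory("her father's voice"))

The same "memory" always produces the same gestures, but no one wrote those gestures into a script - which is roughly the trick the reveries scene is gesturing at.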

As the hosts come closer and closer to realistic human behavior, we're forced to confront an unsettling truth:

We are also machines.

Our brains are networks of neurons that send electrical signals to each other. Ultimately, we too are operating from a flowchart that takes input (our senses) and produces output (our behaviors). Our bodies are basically incredibly complicated chemical processes, resulting in life as we know it.

But if you ask me, that is not a sad commentary on what it means to be a living human; it is instead a profound endorsement of the power of machines. Because as I stated at the beginning of this post, consciousness is what all of our philosophical ideas must ultimately grow out of. We must be conscious beings for our thoughts to have any substance to them. And if organic machines can produce or tap into consciousness, then why should we assume that artificial machines could not do the same?

(EDIT: To return to the earlier edit, the hosts are, in a sense, actors playing roles. We see that hosts can play multiple roles - Dolores' father, for example, had previously been used as the leader of a cult of cannibals out in the desert, a very different kind of character than the one we see in this first episode - but these hosts are forced into the most extreme form of method acting. We don't know if there is a sentience within them that could know whether it is acting or not, but in a sense, aren't they just the people they are programmed to be?)
