Paradise Lost in Frankenstein

Frankenstein's monster frequently compares himself to Satan as he is depicted in John Milton's Paradise Lost, and I found this to be an extremely interesting comparison. Both the creature and Satan are typically considered the antagonists of their respective stories, yet both make that label a gray area. Milton was criticized for making Satan too sympathetic: while Satan clearly commits evil acts, his motivations are not entirely unreasonable. For one, Satan gave humans access to knowledge that God had forbidden them. His line "Better to reign in Hell than serve in Heaven" echoes human notions of freedom, individualism, and free will, and these are not inherently evil ideas. The creature identifies with Satan, recognizing that he too is flawed and that he has been cast out, barred from ever finding acceptance in society.

Is the creature identifying with Satan meant to convey that the creature is evil, or is it meant to display the complexity of the creature’s morality? Also, is it really better to reign in Hell than serve in Heaven?

Souls Make Us Human – For Now

Danny Pink (played by Samuel Anderson) as a Cyberman in the Doctor Who episode "Death in Heaven"

Defining what makes humans human is an arbitrary process at best. Some people treat the question blithely and state that a human is simply a member of the species Homo sapiens. While true from a strictly biological standpoint, this ignores the philosophical and existential aspects of the question. According to many internet sites, the way to distinguish a robot from a human is through picture recognition, be it transcribing distorted letters and numbers or picking out every box that contains a street sign. For many people, though, the trait that seems to most widely separate humans from animals is intelligence. It's obvious that humans possess higher levels of intelligence than anything else in the animal kingdom. However, with the ever-looming dawn of artificial intelligence, this soon will not be enough. Superintelligences operating at levels we cannot even fathom are a very real and near possibility, which means humanity cannot forever be defined as the most intelligent thing on Earth. What makes us human and separates us from machines and animals is what can best be described as a soul: not the soul in the religious sense, but our awareness and consciousness.
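
That robot-or-human check is worth pausing on, because it tests perception rather than personhood. Here is a minimal sketch in Python of the distorted-text version; the challenge generation and the single hard-coded comparison are my own invention for illustration.

```python
# Minimal sketch of a text CAPTCHA: the server knows the answer behind the
# distorted image and simply compares it to whatever the visitor types.
import secrets
import string

def make_text_challenge(length: int = 6) -> str:
    """Generate the answer string that would be rendered as a distorted image."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def is_probably_human(expected: str, response: str) -> bool:
    """Pass the check if the typed response matches the hidden answer."""
    return response.strip().upper() == expected

challenge = make_text_challenge()
print(is_probably_human(challenge, challenge.lower()))  # True: someone who could read the image
print(is_probably_human(challenge, ""))                 # False: a bot that could not
```

All the check proves is that something on the other end can read a distorted image, which says nothing about awareness or a soul.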

It is often believed that empathy is what makes humans truly human. To a degree, this is an understandable idea. People who act without empathy, such as sociopathic killers, are often described as inhuman or even monstrous. In fact, horrendous crimes that are truly lacking in empathy are frequently called "inhumane," while charitable works meant to better the lives of others are called humanitarian efforts. Other languages may not embed the same mindset, but English clearly associates humanity with empathy and kindness. This is a flawed parameter, however, because quite often humans are far from the most empathetic species around. In Philip K. Dick's Do Androids Dream of Electric Sheep?, humanity is divided into several castes. The lowest class of humans is the "specials": persons whose bodies and/or DNA were damaged by radiation from Earth's nuclear fallout to the point that they are rendered infertile or unable to pass some form of intelligence test, and who are branded and ostracized accordingly. They are viewed as less than human in several cases, yet the kindness of John Isidore, despite the abuse he suffers, shows that the specials are at least as human as, if not more human than, the higher classes. Below even the specials are the androids. Enslaved by humanity, with any escapees ruthlessly hunted down, several androids are nonetheless shown acting empathetically. Interestingly, where the androids simply want to live out their lives in peace, humans such as Phil Resch delight in killing them for no reason other than the joy of it. The androids of Do Androids Dream of Electric Sheep? are shown to be at least as human as the humans themselves, and characters like Resch remain human even when they act inhumanely. Humans are not defined solely by empathy, but by something far more complex.

Traditionally, humans have been viewed in a similar light as animals. Many of our social interactions are based on primitive biological needs to survive, reproduce, and compete for resources, much like the instinctual behavior of animals, and this explains a great deal of human behavior, from wars to families. However, humans are becoming much more analogous to machines than to animals. We spend more and more time on computers and smartphones and have become linked like the nodes of a computer network, which has led to a great deal of debate over the effects on the human mind as well as the sociopolitical ramifications. Many humans have already been upgraded via surgery, from pacemakers to artificial joints. With the possibility of advanced prosthetics, cybernetic implants and enhancements, and eventually even uploading minds to computers, this leads to a Ship of Theseus type of debate: if we replace all our human parts with nonhuman substitutes, are we still human? To restate the question, is there more to being a human than our hardware (bodies) and software (thoughts)? I believe the answer is yes. This idea is tied to the debate over Weak AI vs. Strong AI.

There are two major positions on how artificial intelligence will exist, known as Strong AI and Weak AI. Strong AI proponents believe a sufficiently advanced AI would be a true working consciousness: a truly advanced program could think. Weak AI proponents believe that an AI, no matter how advanced, is essentially a rock tricked into appearing to think; it can only simulate thinking without actually thinking. A popular thought experiment here is the Chinese Room. A person sits inside a sealed room while slips of paper bearing Chinese symbols are passed in. The person does not know Chinese but follows a rulebook that matches each input to the appropriate Chinese output, so the replies that come back out look fluent. The experiment asks whether the person, or the room, understands Chinese. This is compared to an artificial intelligence that matches inputs to outputs using a database of responses, similar to chatbots now in development: is such a system intelligent, and is it Strong AI? I believe that Weak AI is the furthest humanity can ever get and that it is impossible to create true Strong AI. Moreover, it would be nearly impossible to verify an AI's status as Strong AI.
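
To make the Chinese Room concrete, here is a minimal sketch in Python of the room as a lookup table: fluent-looking replies come out, yet nothing in the loop understands a word of Chinese. The rulebook phrases are invented for illustration.

```python
# The "person in the box" reduced to a rulebook: match incoming symbols to
# scripted outputs with no comprehension anywhere in the process.
RULEBOOK = {
    "你好": "你好！",        # "hello" -> "hello!"
    "你好吗？": "我很好。",   # "how are you?" -> "I am fine."
}

def chinese_room(symbols: str) -> str:
    """Return the scripted reply for the incoming symbols, exactly as the
    person inside the room would, by matching rather than understanding."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again."

print(chinese_room("你好吗？"))  # a fluent reply, produced with zero understanding
```

Real chatbots are vastly more elaborate than a two-entry dictionary, but the Weak AI position is that the difference is one of scale, not of kind.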

The androids in Do Androids Dream of Electric Sheep? are possibly Strong AI, but it is impossible to truly tell without experiencing their viewpoint. If they were Strong AI, then they would possess a consciousness and therefore be human. Essentially, humans likely will never be able to verifiably create an artificial entity with a soul nor transfer their own souls to an artificial vessel. Humans are bound to their consciousness, created by the mysterious machinations of the human brain.

A study by the University of Michigan found that empathy among college students has dropped by about forty percent over the last twenty to thirty years. Does this mean humans are becoming less human? As humans become more machinelike due to technology, they are having serious compatibility errors, like computers running different software programs. This calls to mind the conflicts between the humans on Earth and those on Mars in Philip K. Dick's Do Androids Dream of Electric Sheep? Despite the dominant religion of Mercerism preaching empathy and kindness, it is clear that most of the humans do not truly experience empathy. Numerous characters clash simply because they are unable to understand one another's circumstances. Notably, Deckard resorts to using the mood organ to force his wife to agree with him rather than making any attempt at true understanding; he makes no effort to comprehend her emotional distress, instead seeking to optimize her efficiency. This recalls the empathy test in the novel, the Voigt-Kampff test, which is used to identify androids. It is stated that even some humans, due to their different life experiences, would be unable to pass it. Empathy can be very subjective and variable. Timothy Recuber, Visiting Assistant Professor of Communication at Hamilton College, discusses this idea, bringing up police brutality and the surges of empathy not just for the victims but for the police. A more recent example of this phenomenon is the Brett Kavanaugh hearings, which saw support for both Kavanaugh and Christine Blasey Ford. Many claimed to believe Ford's testimony and linked her to the #MeToo movement, while supporters of Kavanaugh expressed sympathy for what they claimed was an innocent man being falsely accused, often spreading the hashtag #HimToo. It's not fair to say that either side lacked empathy, because they clearly did not; they simply felt it more strongly for a different party.
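
To see how brittle such a test is, here is a minimal sketch of a threshold-based empathy screen in the spirit of the Voigt-Kampff test; the scenarios, the 0 to 10 response scores, and the cutoff are all invented for illustration.

```python
# Hypothetical threshold test: average a subject's measured reactions to a set
# of scenarios and classify anyone below the cutoff as an android.
PASS_THRESHOLD = 6.0  # invented minimum average "empathic response" score

def empathy_score(responses: list[float]) -> float:
    """Average the measured reactions, one number per scenario."""
    return sum(responses) / len(responses)

def classify(responses: list[float]) -> str:
    return "human" if empathy_score(responses) >= PASS_THRESHOLD else "android"

typical_subject = [8.0, 7.5, 9.0, 6.5]    # reacts strongly to every scenario
atypical_subject = [5.0, 4.5, 9.5, 3.0]   # reacts strongly to only some scenarios

print(classify(typical_subject))   # "human"
print(classify(atypical_subject))  # "android", even though this is a person
```

A fixed cutoff inevitably misclassifies anyone whose life experience shifts their reactions, which is exactly the loophole the novel acknowledges.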

If it's impossible to agree on who is showing empathy in those circumstances, what sort of empathy test could we create to analyze an AI? How would we judge its level of empathy? Surely there would be massive biases from the perspectives of the humans testing an entity they cannot hope to relate to. Such an entity, if aware, could have morals and ethics utterly incomprehensible to humans. Perhaps even more disturbingly, does passing an empathy test even prove the presence of empathy? As in the Chinese Room thought experiment, a Weak AI capable of simulating empathy could surely pass an empathy test, but does that mean it feels empathy? Even humans can fake empathy; many serial killers, despite clearly lacking basic empathy, are described as warm and loving people. The somewhat solipsistic truth is that it's impossible to say what somebody experiences without being that person and experiencing it, meaning that we would be unable to demonstrate that an AI is Strong or Weak without some hitherto unknown means.
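
And on the point about faking it: here is a minimal sketch of a system that would score as thoroughly empathetic on any text-only questionnaire while feeling nothing at all. The prompts and the canned reply are invented for illustration.

```python
# A scripted "empathy simulator": the same sympathetic-sounding answer comes
# back no matter what is asked, and nothing anywhere feels anything.
CANNED_REPLY = "That is terrible. I feel awful for everyone involved."

def simulated_empathy(prompt: str) -> str:
    """Return the canned sympathetic answer regardless of the prompt."""
    return CANNED_REPLY

prompts = [
    "A stranger tells you their pet has just died.",
    "You see a child crying alone in a store.",
]

for prompt in prompts:
    print(simulated_empathy(prompt))  # reads as empathetic on paper, feels nothing
```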

Animals do not display self-awareness on the level of humans. Machines do not either, and will not for the foreseeable future. Humans, however, do. This makes us unique on Earth, and possibly in the universe. Unless a new, aware life form arrives from space or a laboratory, or humanity, in our quest to play God, destroys our "souls" while trying to upgrade ourselves, we will continue to be the only creatures known to be self-aware.

Works Cited:

“The Chinese Room Argument”. Stanford Encyclopedia of Philosophy, 9 April 2014, plato.stanford.edu/entries/chinese-room/

Dick, Philip K. Do Androids Dream of Electric Sheep? Doubleday, 1968.

Grasgeen, Allie. “Empathizing 101”. Inside Higher Ed, 24 November 2010, www.insidehighered.com/news/2010/11/24/empathizing-101

Recuber, Tim. “What Becomes of Empathy?”. The Society Pages, 20 July 2016, thesocietypages.org/cyborgology/2016/07/20/what-becomes-of-empathy/

Rodriguez, Jesus. “Gödel, Consciousness and the Weak vs. Strong AI Debate”. Towards Data Science, 23 August, towardsdatascience.com/gödel-consciousness-and-the-weak-vs-strong-ai-debate-51e71a9189ca