On the Moral Standing of Artificial Intelligence

Ever since there has been research into artificial intelligence, there have been people who have said that researching AI is a horrible idea and that AI could easily be the end of the human race, as countless science fiction writers have shown. While this is interesting to think about, it is not the most interesting question surrounding AI. As Ray Kurzweil says, “before 2030, we will have machines proclaiming Descartes’ dictum. And it won’t seem like a programmed response” (Kurzweil 60). The dictum Kurzweil references is, of course, “I think, therefore I am” (53). In fact, Kurzweil says, “this claim will be largely accepted” (280). Fast forward to the year 2099 and there is “no longer any clear distinction between humans and machines” (280). Whether Kurzweil is right about the exact years is irrelevant; it seems likely that artificial intelligence will be invented at some point, and that when this happens everything will change. What is interesting about AI is how seriously its claims to existence should be taken and whether AI should be given moral standing. Descartes provides an answer to the existence question with his concept of a thinking thing. Immanuel Kant’s categorical imperative provides compelling reasons to give moral standing to AI, and Peter Singer’s work on the moral standing of animals and suffering is also quite useful.

Just as fiction is adept at showing the possible dangers of AI, it is also good at illustrating that it is not a big stretch of the imagination to think of AI as existing in the same way humans do, or to think of AI as having moral standing. To illustrate this I will use examples from the video game series Mass Effect, as this series does a wonderful job of dramatizing some of the philosophical issues surrounding AI and our moral responsibilities to an AI. While it seems obvious to me that there will be moral responsibilities to AI, an argument, no matter how good, will not convince the person who truly thinks this is an absurd topic. With any luck I will manage to show that simply claiming an AI can never truly have human characteristics or claim existence in the way humans do will not work, and that a good argument will need to be made to get around Kant and Singer.

Before moving into the philosophical issues surrounding artificial intelligence, it is important to define some terms and to say something about why the moral standing of AI is not discussed very often. How to make AI moral is discussed; our responsibility to AI is not. The most important term to define from the outset is artificial intelligence.

There are two types of AI. The first is weak AI, which is an “intelligent-acting machine” (Hauser). The second is strong AI, in which “these actions can be real intelligence” rather than preprogrammed responses (Hauser). This essay is concerned with strong AI. Generally speaking, an AI is a machine or piece of software that is similar to humans in all or most of the relevant ways. By “relevant ways,” I mean that an AI is rational, has emotions, is a thinking thing, is curious, and is self-aware.

There are at least a few different responses when moral responsibility to AI is mentioned. The first defers the question to a later date, when AI has actually been invented. Another response simply dismisses the idea of AI having the same rights as humans as utter nonsense, because AI will always be no more than a machine whose thoughts are preprogrammed. These two responses are examples of what Daniel Kahneman calls System 1 thinking, which he defines as operating “automatically and quickly, with little or no effort and no sense of voluntary control” (20). The topic of AI, however, is complex and requires the continual application of System 2 thinking, which is the allocation of “attention to the effortful mental activities that demand it, including complex computations” (21).

The other main response one hears is that scientists are not even trying to create strong AI, so there will be no moral responsibility because the machine will only be intelligent-acting rather than intelligent and similar to humans in the relevant ways. While this is a fair critique, it is potentially shortsighted, as it completely ignores the possibility of accidentally creating strong AI or of weak AI evolving into strong AI. Harry Collins and Trevor Pinch point out that science is like a golem. They describe a golem as “a lumbering fool who knows neither his own strength nor the extent of his clumsiness or ignorance” (2). If this is an accurate description of science, and they provide plenty of evidence that it is, then it makes sense to start thinking about moral responsibilities to artificial intelligence now rather than waiting until a later time.

After defining terms and dealing with preliminary objections to a subject like this, it is important to discuss what it means to say that artificial intelligence exists in the same way humans do and under what criteria such a claim can be made. If this is not done clearly and in a way that seems reasonable, then the argument that AI will deserve moral standing fails, as it is not based on a well-reasoned case that AIs exist in the relevant ways humans do. Descartes’ discussion of thinking things provides a good and useful criterion for judging whether or not an AI can be thought of as similar to humans.

It is essential to discuss what a thinking thing actually is and why it is possible to apply the concept to AI. Descartes’ way of talking about himself as a human is applicable to AI because he is trying to find some stable ground to stand on so he can rid himself of the possibility of the evil deceiver.

In the Second Meditation of Meditations on First Philosophy, Descartes is still trying to find “a point which was fixed and assured” (102). Descartes concludes that he exists because there must be something that is thinking (103). This is a good start, but good reasons for the assertion that an AI could be considered to exist like a human still need to be found. After all, computers already do something like thinking, yet most people do not say a computer exists in a meaningful way. For Descartes, his existence depends on his being a thinking thing, because if there is thinking there must be something doing the thinking. Descartes realizes he is a thinking thing because he is aware of his thinking. He writes, “I existed without a doubt, by the fact that I was persuaded, or indeed by the mere fact that I thought at all” (103). As self-awareness is an important step for Descartes, it will also be considered a step that AI will need to take.

But what does it mean to call something a thinking thing? After Descartes concludes that he does in fact exist, he moves on to ask what he is. He writes, “but what, then, am I? A thing that thinks. What is a thing that thinks? That is to say, a thing that doubts, perceives, affirms, denies, wills, does not will, that imagines also, and that which feels” (106-107). This is a good criterion to use because it captures, arguably, most of the important things that humans do; if a machine can do these things, then there is at least a strong reason for considering AI to be like humans in the relevant ways.

Certainly one potential critique of this criterion comes from a person who would look at an AI that exhibits all of these things and still question whether the machine was simply programmed to think that it was self-aware and that it has all of these characteristics. This is a fair critique, so it will be taken rather seriously. If an artificially intelligent machine presented itself and claimed, and even demonstrated, that it possessed all of those characteristics, it would be fair to wonder whether the characteristics were real or part of its programming. In response, however, one can ask how we know that humans genuinely have all of the characteristics that are considered important. If this cannot be answered definitively, then an AI’s claim to have these characteristics would need to be accepted. After all, when people say that they genuinely have all of the characteristics that define one as human, they are presupposing that they truly have these characteristics and that these characteristics are not programmed into humans.

Of course it is fair to ask why thinking, doubting, understanding, self-awareness, agreeing, disagreeing, making decisions, imagining, and emotions are relevant and not other things. The other criterion that comes up from time to time is that an AI may not have an organic body, and if it does not have a body then it cannot really be thought of as human-like. This may sound like a reasonable criterion on the surface, but it is deeply problematic. To show this, all one has to do is ask what percentage of the body has to be organic for something to be called human. People with prosthetic body parts are not 100 percent organic, yet no one would say that someone with a prosthetic limb is not human. As a rebuttal, one could say that to be called human-like one needs a mostly organic body. This also fails: what happens if an organic body can be built but the brain is a hard drive, or even a small chip, rather than organic material? Again one is left in a tricky situation; one cannot call the body with the hard drive non-human while calling a double or triple amputee with prosthetics human without having a double standard. As such, the criterion of an organic body fails to be relevant.

Admittedly, it still sounds odd, and is hard to imagine, that artificially intelligent machines could be similar enough to humans that it would be possible to think of them as “human.” This is where fiction can shine. It can stimulate the imagination and make the odd sound normal. Mass Effect does a wonderful job of making the idea of machines being similar to humans and having the same rights and responsibilities seem not strange at all. Mass Effect is relevant to this discussion because the machine race called the Geth meets, to varying degrees, all of the criteria outlined above.

Descartes provides some interesting ideas on how one might argue that an AI should be considered human, but the question of how this could play out in the real world remains. The Geth from the Mass Effect trilogy show how a machine could be a self-aware, thinking thing. In the first Mass Effect game, the Geth are nothing more than targets for the player to shoot. In Mass Effect 2, the Geth are portrayed in a different light. In this game, the player is given the chance to recruit and talk to a Geth named Legion. Through Legion, one sees that the Geth do think and that they are self-aware. The best example of this comes after the player helps the Quarian character, Tali, with a mission. Legion attempts to send evidence to his people that the Quarians were testing weapons to use on the Geth. Tali is not happy about this. After Shepard, the main character, points out that this would lead to war, Legion decides that it is best not to send the evidence. This clearly shows that the Geth are capable of reason. Furthermore, it also indicates self-awareness, because Legion is conscious of his desire to avoid war with the Quarians (Mass Effect 2).

So far, all that has been shown is that it is not ludicrous to argue that a machine can think and be self-aware. At this point, it seems prudent to ask for a second time the question Descartes asked himself in the Second Meditation: “what is a thing that thinks” (107)? He answers, “a thing that doubts, perceives, affirms, denies, wills, does not will, that imagines also, and which feels” (107). To this list I would also add curiosity, or a desire to learn. Descartes is assigning to thinking things the general characteristics of humans. Of course, Descartes is simply regaining these characteristics after doubting their existence, but one can easily apply his line of reasoning to artificial intelligence. Granted, the idea that software could possess genuine human characteristics sounds strange. However, a brief sojourn into Mass Effect 3 will show that it is not implausible.

Once again, Legion provides an excellent example of how an AI might plausibly demonstrate human characteristics. One of the most human questions a person can ask is whether or not one has a soul. Whether or not humans actually have souls is irrelevant here. Nevertheless, this line of questioning is one of the things that set humans apart from animals and machines. So what would happen if a computer that was not programmed with any interest in or knowledge of souls suddenly started asking, “does this unit have a soul” (Mass Effect 3)? Is this question not indicative of human thought? Is this not the type of question children ask at some point? Upon seeing or hearing a machine ask this, one might reasonably conclude that one is dealing with an evolving AI and that everything is about to change. Earlier in the game, Shepard learns that the Quarians, upon hearing this question, panicked and tried to exterminate the Geth. This is why the conversation about moral responsibility should take place before anything like this can happen.

If one decides that an artificial intelligence can exist in all the meaningful ways humans do, then the next question is whether or not we have moral responsibilities toward this new race of machines. If one is going to say that an AI does exist in all the meaningful ways that humans do, then the simple answer is that one does have a moral responsibility to an AI. The complex part comes when one tries to explain why moral standing must be granted to AI. There is, of course, the ethical egoist argument that the human race should grant moral standing because we want AI to do the same for us. A utilitarian would probably grant moral standing because the consequences of withholding it could be severe, as in the classic sci-fi plot in which an AI decides that the human race should be exterminated. While both of these theories provide good reasons, it seems that Kant’s deontological ethics are more useful. Peter Singer’s work on the moral standing of animals, while not about artificial intelligence at all, provides arguments that can easily be used to make a case for granting moral standing to AI. The two thinkers ground their views in very different ways, which is useful, as one appeal can be made to reason and another to empathy or solidarity.

Kant’s deontological ethics are complex, but they provide a universal and strictly rational way of making moral decisions. Deontology is therefore a great candidate for arguing why one should grant moral standing to artificial intelligence, as the conclusion should be universalizable. Shepard seems to have some deontological reasoning behind his or her acceptance of Legion as a team member and as a fellow “human.”

Although Kant’s ethics are complex and have many moving parts, a few aspects of them apply to AI. The major aspects are his formulations of the categorical imperative. After a brief explanation of the various ways he formulates the categorical imperative, I will examine how one might apply Kant’s reasoning to artificial intelligence.

The first way Kant formulates the categorical imperative is in the famous statement, “act only on that maxim whereby thou canst at the same time will that it should become a universal law” (38). In other words, only do the things that you would want others to do in your place. A good example is indiscriminate murder: one should not murder indiscriminately because no rational person would or could will that everyone should murder indiscriminately.

The second formulation says, “act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as means only” (45). In other words, as rational beings we cannot will that we be treated only as means, because each rational being recognizes that he or she is an end in him or herself (44). This means that when AI is invented and becomes self-aware, we should not use an AI only as a means to entertainment, slave labor, or anything else.

Now a case needs to be made for why Kant’s ethics should apply to artificial intelligence. Of course, Kant himself would probably find this line of reasoning odd. However, it would seem that being a consistent deontologist requires granting moral standing to AI, especially if AI emerges in ways similar to the Geth.

From the first formulation, one sees why AI would deserve moral standing. As has already been shown through examples from Mass Effect 2 and Mass Effect 3, it is not unreasonable to think of AIs existing in all of the meaningful ways that humans do. If one admits that an AI exists as a thinking thing capable of emotion and rational thought, and then says that an AI can be used for entertainment or slave labor, one is saying that humans can be used in the same way, and a good argument would be needed to show that AI can be used for slave labor while humans cannot. It would seem quite difficult to argue that treating AIs as slaves is universalizable, especially when humans, even those who do not meet all of Descartes’ criteria for a thinking thing, should never be used as slaves. By contrast, it seems fairly easy to universalize the idea that AI should not be used as slave labor and that political and personal decisions should take into account their effects on AI.

The second formulation further makes the case that AI must be granted moral standing. Obviously, people treat each other as means all the time, and in many contexts this is fine; Kant would agree. However, one should not treat someone only as a means. The same can be said for AI. In certain circumstances, we can certainly use an AI for entertainment or for labor. The Quarians, however, treated the Geth only as a means. The first Geth were servants of the Quarians and were used for labor. When the Geth started to evolve, the Quarians no longer viewed them as useful and tried to kill them (Mass Effect 3). Even if an AI is nowhere near as advanced as the Geth and possesses only rudimentary intelligence and self-awareness, there would still be a moral responsibility not to use it as a means only, since humans are rational and can discover this imperative through reason.

Kant provides strong reasons for saying that humans will have moral responsibilities to AI, as he makes it hard to universalize treating one group one way and a similar group in a very different way. Even harder to get around is the prohibition on using someone only as a means to an end. Since it is generally considered wrong to use a person in this way, it will be hard to say that it is impermissible to use a human but permissible to use an AI.

Peter Singer’s work on the moral standing of animals is also helpful in showing that artificial intelligence should have moral standing. Of course, Singer’s reasoning will do nothing if one fundamentally thinks that talking about rights for animals or AI is silly. Singer’s main argument for why animals, and by extension artificial intelligences, should have moral standing rests on a quote from Jeremy Bentham: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” (33). Granted, this is a somewhat strange line of reasoning to apply to AI. However, it is powerful.

If one is going to accept Singer’s reasoning, one will have to show that an AI can indeed suffer. One way this can be accomplished is by determining whether or not the AI has interests. Singer says, “the capacity for suffering and enjoyment is a prerequisite for having interests at all” (34). Therefore, if an AI has interests and preferences, one can infer that the AI can suffer. Legion shows this in Mass Effect 3 when he explains that the drastic measures the Geth took were taken in order to live. According to Singer’s reasoning, the fact that the Geth have a preference to live means that they can indeed suffer.

Now that it has been argued that an AI can suffer, it is important to discuss how an AI could suffer. Physical suffering is a difficult thing to discuss because it would require speculation about how a pain chip might work and why one should consider an AI’s physical pain equivalent to human pain. Really, this line of reasoning is beside the point. It does not matter whether there is a pain chip; what matters is that the machine would believe it is in pain. Given this, it would be astoundingly cruel not to care about causing pain to a machine, intentionally or otherwise, just because one believes it cannot “really” feel pain.

Can an AI of the kind already discussed suffer psychologically? The answer is yes. The simple fact that it could have preferences necessitates this. If the human race in general treated AIs only as means to an end, then it seems likely that AIs would suffer psychologically. If people used AIs for target practice in video games or for sex, then, much like humans, AIs would be harmed. They would cease to trust humans and, depending on what they were used for, live in terror. If it is morally permissible to do this to a being that is just as “human” as a human is, then it is morally permissible to do the same things to other humans wholesale. Naturally, many people would object to being treated this way. If one objects to humans being treated this way but does not object to an AI being treated in the same way, then one is guilty of being blatantly speciesist, to use Singer’s term (41).

It seems likely that AI will be invented someday. Kurzweil’s estimates might be off, but it does seem probable that AI will exist at some point in the future. As such, it is a good idea to have the conversation now about whether or not an AI can exist in the same way a human does. Descartes lays wonderful groundwork for showing that it is reasonable to think AI can exist in all the meaningful ways humans do, thanks largely to thinking, rationality, and self-awareness. The Mass Effect trilogy does a wonderful job of showing how an AI could have human characteristics. Finally, Kant and Singer provide a strong groundwork for why AIs should be entitled to moral consideration. The arguments presented by these two thinkers are forceful, and one must take them seriously in any discussion of moral standing. People will certainly reject the arguments if they think the idea that an AI should be equal to humans is stupid. I do not expect to change the minds of such people; I simply hope to muddy the waters a bit and show that declaring that the human race will have no moral responsibilities to AI is not as black and white or as easy as it may seem.

Works Cited

Bioware. Mass Effect 2. Electronic Arts, 2010. Xbox 360.

Bioware. Mass Effect 3. Electronic Arts, 2012. Xbox 360.

Descartes, Rene. Discourse on Method and The Meditations. London: Penguin Group, 1968. Print.

Hauser, Larry. “Artificial Intelligence.” Internet Encyclopedia of Philosophy, 8 June 2007. Web. 7 December 2013. http://www.iep.utm.edu/art-inte/.

Kant, Immanuel. Groundwork of the Metaphysics of Morals. Feather Trail Press, 2009. Print.

Kurzweil, Ray. The Age of Spiritual Machines. New York: Penguin Group, 1999. Print.

Singer, Peter. Writings on an Ethical Life. New York: HarperCollins Publishers, Inc., 2000. Print.
