Rights for Artificial Intelligence

“The AIs of the future will be truly conscious and so should be given the same rights as you or I.”

Critically discuss and evaluate this claim.

The claim presents a contradiction of sorts because, for the most part, corporations and natural landmarks are not considered persons, yet personhood rights have been granted to them. In America, corporate personhood grants protections to companies that are considered important for free-market capitalism. And although Indigenous cultures often attribute spirits to natural landmarks, most people do not consider those spirits to be persons; in New Zealand, granting personhood to landmarks had more to do with protecting and preserving Maori culture and the natural environment. To gain rights, then, one does not have to be a person, although being recognised as a person would make the debate over whether an entity should be given rights significantly easier.

 


Photo by Lucas Albuquerque on Unsplash 

Before going further, the notion of rights needs to be clarified. Different kinds of rights are given to different groups: human rights, which belong to us simply because we are human; workers' or labour rights; animal rights; and many others. To evaluate this claim, then, we must determine what kind of rights Artificial Intelligence would be given, whether the claim is put forward in a political, legal, economic, or social context.

Assuming you and I are both human, the rights we hold without any further qualification are human rights. These are, according to the UN, “inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status” (UN.org, n.d.). Artificial Intelligences are not human beings: they do not share our biological makeup or capabilities, and they behave differently. In a legal sense, however, that may not matter, as non-human entities have been granted rights before. Corporations have held rights in America for over 100 years (Winkler, 2018), and in New Zealand both the Whanganui River and the Te Urewera Forest have been granted rights, owing to the Indigenous belief in the identity these natural landmarks hold (Warne, 2019; Colwell, 2016). They were granted legal personhood, and in law, “those who are not recognized as persons are accorded no rights” (Hoffman, 1986).

One way to examine whether future Artificial Intelligence should be given rights is to determine what makes a person and whether AI can ever meet that standard. A common necessary condition of personhood is consciousness. When persons lose consciousness by entering a comatose state, they are treated as diminished rather than full persons, and others may decide to end their lives for them, because they do not hold the same rights as conscious persons. If consciousness were the only necessary condition for personhood, then future AI that became truly conscious would undoubtedly be granted the rights of personhood. However, not only do some people believe that AI will never be truly conscious; some also believe that consciousness is not sufficient for personhood.


The synthetic nature of Artificial Intelligence is what suggests to many that AI will never be truly conscious. Because an AI follows code or an algorithm that dictates the procedures it takes, some have argued that this does not demonstrate genuine understanding, and that AI will therefore never reach the level of thinking and consciousness present in other persons, particularly humans (Wulff, 2019). It could be argued that, although conscious AI has been popularly depicted in science-fiction novels and films, such depictions are simply personifications of human characteristics, used to criticise certain behaviours or to imagine our characteristics extended beyond the world presented to us in daily life.

However, this argument can be turned against humans too. We also follow a code of sorts: our DNA, the genetic coding that makes us who we are. Hard determinism suggests that none of our actions is freely willed, and that our DNA codes for every choice we make and every action that follows from our choices. If we lack the ability to make decisions independently of our genetic coding, then we fall into the same trap that was set for Artificial Intelligence. If we count as conscious beings even though our ‘thoughts’ are laid out for us by our DNA, then future AI could be considered conscious as well. With this in mind, if we take consciousness to be a sufficient condition for personhood, then future AI should be considered persons just as we are, and therefore be given the same rights of personhood that you or I have.

A problem with this is that many people do not consider consciousness the only necessary condition for personhood. If another necessary condition of personhood is morality, many would argue that Artificial Intelligence could never be considered persons, because they cannot be taught the same integral morality that exists in human persons. A common suspicion about AI is the threat of a takeover in which human life is eliminated or enslaved, a recurring theme in science fiction and a very contemporary fear among human persons. Part of this stems from a concern that AI will never have an integral moral code, and cannot effectively be taught morality.

 

Whether we consider an entity a person might not determine whether we give it rights. As noted above, corporations and natural landmarks have been given rights even though most people do not consider them persons. Most animals are not considered persons either, yet ethical guidelines and rights are extended to them purely on moral principle (bbc.co.uk, n.d.). The discussion therefore need not focus on whether Artificial Intelligence has the capacity for personhood; it can focus on whether AI should have the same rights as you or I, regardless of personhood. Like animals, “intelligent machines also have rudimentary systems of pleasure and emotion — they have inbuilt systems of desires and aversions, of repulsion and reward” (Fischel, 2018). If we give animals rights on this basis, it follows that we should give AI rights on the same basis.

 

The one caveat to these rights for non-human entities is that none of them extends as far as human rights do. Certain rights given to humans are simply never granted to non-human entities. Only humans receive the full extent of human rights; while non-humans may fare well in retaining rights in the courts (Winkler, 2014), only humans carry the same rights outside the courtroom, and only humans have rights set out for them by the UN. So although many would argue that we should give Artificial Intelligence rights, we do not have to give them the same extent of rights that you or I have, whether or not they fully achieve consciousness.

Bibliography

Article Written by

Sumatra {Marty} Costin

Jeju International School