CAN ARTIFICIAL INTELLIGENCE BE SUED?

ABSTRACT:

The Artificial Intelligence sector has shown rapid growth over the past decade, and a wide variety of AI software has been in use for many years now. With usage comes responsibility. This article concerns itself with one important question: whether Artificial Intelligence can be sued. The question sheds light on the debate over treating Artificial Intelligence as a legal personality. The positions of leading countries such as the United Kingdom and the United States of America on this topic are also set out.

INTRODUCTION:

Artificial Intelligence is undeniably a boon to our existence; however, it is not without doubt, as it carries real and substantial risks. Such risks are mainly associated with its use in products upon which humans have become heavily dependent. Ultimately, there is a debate over who is to be blamed for offences committed by these technologies, if not the intelligence itself. If we consider the legal personality granted to idols and corporations, one may argue that Artificial Intelligence should also be considered a legal entity and, with that status, take on all the legal rights and duties that accompany it. These legal rights and duties include the right to sue and to be sued.[1]

If we hold the companies or individuals that make Artificial Intelligence responsible for every legal offence committed by the AI, it may drive the entire sector into oblivion. Although the sector is highly lucrative, it is still in a stage of massive research and development, and its benefits outweigh its issues. Yet the question of which entity is ultimately held responsible may come down to the facts and circumstances of each case. For example, would it be the manufacturer of the AI, the seller of the AI, or the ultimate user of the AI? The powers given to an AI algorithm are vast in comparison to the accountability assigned to it. In many countries, AI is used to assist major decision-making platforms, and governments use AI algorithms for the digitalisation of public administration, for example the Aadhaar card system in India. The plethora of information processed by these systems consists not only of statistical data but of personal data as well.

POSITION OF VARIOUS COUNTRIES:

Under Indian law, only a “legal person” is competent to enter into a contract. This general rule does not qualify an AI as a legal person. Hence, a contract entered into by an AI of its own volition may not be regarded as a valid contract.[2] But there is a possibility that in future AI may be granted the status of a ‘person’ under the law, because, unlike corporations, AI is an autonomous body: after a point, the programmers of an AI do not control it, and all activities are performed on its own intelligence.[3] Both the U.K. and the U.S. decline to impose liability on artificial intelligence itself. According to these countries, liability for a crime arises where the accused had the intention to commit the crime. The concept of intention extends to a person who instructs another person, or an animal lacking the mental capacity to commit a crime, and this is readily applicable to Artificial Intelligence machines. Therefore, if an Artificial Intelligence machine or software commits an offence, the creator or maker of that intelligence will be held liable for the criminal act.

Artificial Intelligence is not comparable with other legal persons. Rather, these machines can be better compared to animals in terms, for instance, of so-called autonomy, self-awareness, or self-determination, though the latter may be more autonomous than the former; and they differ by nature, one being a human-made product, the other a living animal. Likewise, both are regarded as objects rather than subjects of law. From a legal perspective, AI systems are to be treated as tools, and the humans in charge of them are legally responsible for their actions and for ensuring that they operate within the boundaries of the law. Hypothetically, if creators are not held liable for the activities of their technology and AI is given a separate legal identity, then the creators will be absolved of any liability and will lose any incentive to continue refining safety measures. There are many other reasons for denying legal personality to Artificial Intelligence, including the lack of accountability, the lack of understanding of the significance of legal sanctions and, most importantly, the lack of emotional intelligence.

CASES:

The precedents on the topic of artificial intelligence are very limited because such software is treated as a product or a service by manufacturers as well as consumers. When such technology makes a mistake, it is for the court to determine where the blame lies. The court in United States v. Athlone Indus., Inc.[4] in 1984 stated that robots cannot be sued. Currently, an artificial intelligence machine cannot be sued because the law regards it as a product or a service. In Switzerland, the police arrested an Artificial Intelligence robot because it was used to make illegal purchases online, but it was not charged with a crime. In Nelson v. American Airlines, Inc.[5], the Court applied the doctrine of “res ipsa loquitur” in finding an inference of negligence by American Airlines relating to injuries suffered while one of its planes was on autopilot, but ruled that the inference could be rebutted. In 2017, Facebook had to shut down two of its AI programs when they started talking to each other in a language they themselves had invented, which was incomprehensible to humans. In the U.S., criminal prosecutors dropped the charges against Uber Technologies Inc. for the death of a 49-year-old pedestrian killed by one of its self-driving cars.

CONCLUSION:

Artificial Intelligence is progressing and becoming more common in everyday use throughout the world. Eventually, this may bring about a change in the legal standing of artificial intelligence: the legal definition of “legal personality” will have to broaden its horizons once again and be adapted to accommodate these creations of the mind. As the law stands, however, Artificial Intelligence machines are not legal persons, and they cannot be sued.


[1] https://www.aitimejournal.com/@claudiu.balan/can-i-sue-a-robot

[2] https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/india

[3] https://www.thehindu.com/opinion/op-ed/artificial-intelligence-the-law-and-the-future/article27766446.ece

[4] 746 F.2d 977 (3d Cir. 1984)

[5] 70 Cal. Rptr. 33 (Cal. Ct. App. 1968)