ABSTRACT
With technology growing at a faster pace than ever, it is important to understand the new liabilities that may arise from the malfunction of an expert system. The rise of products that run with the help of AI, such as self-driving cars or expert systems responsible for medical and clinical testing, raises the question of legal liability in the event of failure. The moral imperative of safeguarding consumers from harm caused by such systems, and of providing compensation where fault is established, is very important. This paper aims to view liability for damages caused by defects in AI systems from a tortious angle. It also discusses whether strict liability should be applied and, if so, to whom it should apply. The paper further explores the idea of AI as a service or a product, whether AI has any autonomous power of decision-making, and whether the system itself should be liable for the repercussions. Finally, it discusses provisions, propositions, and cases that have helped to create a framework for an entirely new domain of legal liability.
KEYWORDS: Artificial intelligence, liability, algorithm, breach of duty, consumer, European Union.
INTRODUCTION
Artificial intelligence is advancing every day to become more compatible with the contemporary world. With such advancement comes the possibility of instances where traditional laws cannot fill the legal vacuum that arises. To understand legal liability for the defects that may arise, we first need to understand the nature of AI systems, which shapes the discussion that follows.
When we consider the nature of AI, various elements come into the picture. The first is the autonomous nature of AI[1]. Unlike traditional software, AI can make independent decisions based on the user’s input. This autonomy raises the question of whether legal liability can attach to the system itself. The second is the complexity of AI. The complex nature of an expert system often makes it difficult to trace the origin of a fault; hence, there is ambiguity about the exact reason for a defect. The third element is randomness. An AI is programmed in such a way that it can produce thousands of different outputs based on the user’s input (see the sketch below). This randomness can create instances where the outcome was unforeseeable and hence would require some authority to compensate for the legal repercussions that follow. Recently, the European Commission released ‘New liability rules on products and AI to protect consumers and foster innovation,’ which contains two proposals: first, to modernize the existing rules on the strict liability of manufacturers for defective products, and second, to harmonize national liability rules for AI[2]. With such propositions, a clearer picture of the boundaries of strict liability for AI can be drawn.
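To make the randomness element concrete, the following minimal Python sketch is purely illustrative: the action and reason lists, the `respond` function, and the loan-application input are all hypothetical stand-ins for the vastly larger output space of a real expert system.

```python
# Minimal, hypothetical sketch of output randomness: even a trivial
# generator produces outputs its developer never enumerated in advance.
import random

ACTIONS = ["approve", "flag for review", "escalate", "reject"]
REASONS = ["income pattern", "transaction history", "detected anomaly"]

def respond(user_input, seed=None):
    """Combine sampled fragments with the user's input into one output."""
    rng = random.Random(seed)
    # The result depends on the user's input *and* on sampled state, so
    # no specific output is hard-coded anywhere in the program itself.
    return f"{rng.choice(ACTIONS)} ({rng.choice(REASONS)}: {user_input})"

print(respond("loan application #42"))  # varies from run to run
print(respond("loan application #42"))  # 12 combinations even in this toy
```

Even in this toy, the developer cannot point to the line of code that produced a particular run’s output, which is precisely what makes unforeseeable outcomes, and hence the allocation of liability, difficult.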
With this understanding of its nature, we can identify the areas where AI-related defects can affect consumers.
Concept of AI
The term ‘Artificial Intelligence’ seems to have been first used by John McCarthy (considered one of the founding fathers of AI) in 1956[3]. The term has been defined by various people in different ways. According to McCarthy, AI is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to biologically observable methods[4]. Looking more closely, the term ‘artificial’ can be regarded as denoting anything synthetic, usually synthesized by humans to replicate a thing or phenomenon. Thus, the main question arises on the term ‘intelligence.’ Many AI researchers believe that there is no solid definition of intelligence, and hence it is difficult to say whether a computer program is capable of intelligence similar to that of a human. Alan Turing, an English mathematician, was one of the first to research artificial intelligence. He devised the Turing test, one of the first methods that gave an idea of what could justify considering computer software intelligent. Under the Turing test, if a computer can mimic human-like intelligence to perform a specific task under given conditions, it can be considered intelligent. However, this test is considered one-sided and not an accurate way of deciding whether a program has artificial intelligence, because even software that is not aware of human characteristics can still acquire knowledge and be considered intelligent.
Difference between artificial intelligence and conventional software
Artificial intelligence is a series of commands programmed to make autonomous decisions. It takes input from the user and processes it in such a way that a customized result is formed. This is very different from conventional software, which is programmed to act only on a particular set of instructions. Unlike conventional software, expert systems can make connections between new pieces of information that were not introduced during initial programming. Artificial intelligence programs utilize knowledge of the relationships between objects and events in a particularly focused problem area[5]. Such a specific area can be considered a ‘domain,’ where the input entered by the user is analyzed and run through various series of combinations until a customized result is achieved. This is not possible with traditional software.
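This contrast can be pictured in a short, purely illustrative Python sketch; the credit-check rule, the training data, and the choice of a logistic regression model are all hypothetical and chosen only for exposition.

```python
# Hypothetical contrast between conventional software and a learned model.
from sklearn.linear_model import LogisticRegression

# Conventional software: behavior is fixed by an explicit rule a
# programmer wrote; the input-to-output mapping never changes.
def conventional_credit_check(income):
    return "approve" if income >= 50_000 else "reject"

# AI-style system: the decision rule is induced from example data, so no
# single line of code spells out the mapping from input to output.
X_train = [[20_000], [35_000], [48_000], [60_000], [90_000]]  # incomes
y_train = [0, 0, 0, 1, 1]                                     # 0 = reject, 1 = approve
model = LogisticRegression().fit(X_train, y_train)

# The learned system generalizes to inputs it never saw during training,
# the "connection between new information" described above.
print(conventional_credit_check(55_000))  # fixed rule: approve
print(model.predict([[55_000]]))          # learned decision: [0] or [1]
```

The difference matters for liability: the first function’s behavior is fully traceable to its author, while the second system’s behavior also depends on the data it was trained on.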
Domains of AI and the gaps
AI is seen in every aspect of the modern era. Using specialized algorithms, the manufacturers and developers of large corporations provide various services and products to consumers. Among the most prevalent examples are Tesla’s semi-autonomous vehicles and Google’s autonomous vehicles[6]. Other systems prevailing in the transport industry include GPS and the feature in self-driving cars that detects traffic signs. PathAI, a company dealing in healthcare-related services and products, has developed machine learning technology that helps pathologists obtain more accurate diagnoses. To understand the gaps that can be created, assume a hypothetical situation in which, due to a bug in the software, an incorrect diagnosis is made. Because the system is autonomous, it can be argued that the diagnostic healthcare authority should not be held liable for a defect arising from a faulty algorithm. Meanwhile, the developers or manufacturers of the system may raise the defense that the AI is an autonomous system that takes independent decisions based on the user’s input patterns. Similarly, autonomous vehicles, which use machine learning to make decisions based on object detection and classification algorithms, may fail and interpret a situation wrongly. Both situations again raise the basic question of liability.
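To make the diagnostic hypothetical concrete, the toy sketch below shows how a single mis-set constant, a developer-side defect rather than a user error, can flip the output; the threshold, the scoring scale, and the `diagnose` function are all invented for illustration.

```python
# Toy diagnostic classifier; hypothetical and for illustration only.
DECISION_THRESHOLD = 0.9  # defect: the validated specification called for 0.5

def diagnose(malignancy_score):
    """Map a model's malignancy score in [0, 1] to a diagnosis."""
    return "malignant" if malignancy_score >= DECISION_THRESHOLD else "benign"

# A scan the (hypothetical) model scores at 0.7, well above the intended
# 0.5 cut-off, is reported as benign: a false negative introduced by the
# developer's code, not by the clinician's input.
print(diagnose(0.7))  # prints "benign", though the intended rule flags it
```

Nothing in the clinician’s conduct changed, yet the outcome did, which is why the paragraph above asks whether liability should sit with the healthcare authority or with the developer.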
RESEARCH METHODOLOGY
To perform the research and collect information, various secondary sources were considered. Secondary sources such as research papers, law reports, commentaries, and articles by professors of renowned universities were consulted for reference. A detailed analysis was made of literary works, including books by renowned writers. Certain data were taken from intergovernmental sources such as the European Union, including the newly proposed rules. A critical analysis of the given data was conducted to advance a supposition on the topic, followed by a personal interpretation to reach logical conclusions.
REVIEW OF LITERATURE
Liability regime for AI
In his article, Gabriel Hallevy (2010)[7] explores three models for determining criminal liability for offenses committed by artificial intelligence (AI).
The first model, “perpetration-via-another,” addresses situations where AI is used to commit offenses. It suggests that developers who create AI programs to facilitate criminal acts should be held liable. Additionally, consumers who use AI products or services to commit offenses without intending to do so may also be considered responsible. For example, a developer who creates AI software that illegally distributes users’ data to companies without consent would be liable under this model.
The second model, “natural-probable-consequence,” focuses on the unintentional consequences of AI actions. It argues that individuals can be held accountable if an offense is a natural and probable consequence of the AI’s behavior. For instance, if a developer creates AI software designed to protect a computer but unintentionally causes harm by destroying websites to eliminate threats, they could be held liable for the offense committed by the AI.
The third model, “direct liability,” considers both the physical action (actus reus) and the mental state (mens rea) of AI entities. It suggests that if an AI independently causes harm through its movements, it fulfills the actus reus requirement of a specific offense. For example, if a self-driving car, due to its inadequacies, hits an obstacle and causes damage, the AI would be considered to have committed an offense through its act or omission.
Varied standards of liability for AI
Marguerite E. Gerstner (1993) considers three standards under which liability could arise. Gerstner points out that liability could attach either through a contractual relationship between the consumer and the manufacturer or where the user relies on the information provided by the AI system[8]. The standards Gerstner identifies are as follows.
A. Negligence
Negligence is the failure to use the care a reasonably prudent person would use.[9] Gerstner identified three key aspects of negligence concerning consumers and manufacturers in relation to tort law: duty of care, breach of duty, and causation and damages resulting from the breach. The manufacturer or developer has a responsibility to ensure that the products they sell are not harmful to consumers. This involves meeting professional standards, particularly in areas such as software development. For example, if a company develops AI software for medical diagnosis, it should employ professionals with specialized knowledge in that field. However, the lack of proper licensing procedures for software developers often hinders the enforcement of such standards. This complexity arises from the presence of different standards for the different developers involved in the software development process.
Gerstner also explores the various ways in which the duty of care can be breached. Breaches may occur through errors resulting from incorrect information provided by the human expert responsible for designing the software, or through inadequate and outdated statistics. Additionally, since AI operates on algorithms, incorrect user data or irregular inputs can lead to faulty outcomes. Finally, the concept of causation requires a sufficiently close connection between the vendor’s breach and the damages suffered by the consumer.
Considering all these factors, a plaintiff could argue that the vendor or developer was negligent in failing to uphold the standards expected of them, making them liable for the damages. The vendor, in turn, could argue that the consumer negligently provided faulty inputs and failed to review the software thoroughly.
B. Strict liability
In the case of strict liability, according to Gerstner, no act of negligence is required. The main element taken into consideration is that one who sells a product “in a defective condition unreasonably dangerous to the user or consumer or to his property is subject to liability for physical harm thereby caused to the ultimate user or consumer.”[10] To apply strict liability to AI, it is important to determine whether an AI is a product or a service, because if it is a service, strict liability may not apply. In the case of products, by contrast, strict liability can apply even to intangible products. In Ransome v. Wisconsin Electric Power Co.[11], electricity, albeit intangible, was treated as a consumable product, and the court held that strict liability may apply to damages caused by an intangible product. There is, however, an issue: an AI system can be so complex that it is difficult to say whether it is a product or a service. To resolve this, we need to focus on the results the AI program provides. If the software is designed to provide assistance that helps the user carry out a function, it can be considered a service. An example would be an AI system developed to suggest stock market values: the algorithm analyzes different stock values and provides the user with the relevant information, so what is provided is a service. As for AI as a product, Gerstner notes that a product can also involve a service, but the basic distinction is that such programs are mass-produced. Gaming software that runs on AI and uses players’ input to enhance the experience can be a product, as it is produced in mass.
The domain of strict liability has been broadening, but its application to AI is still not prevalent in the courts.
C. Breach of Warranties
The last standard of liability discussed by Gerstner is warranties in AI. To apply the concept of a warranty, the program must be considered a product: a “good” must be a “thing” that is “movable.”[12] Article 2 of the Uniform Commercial Code[13] defines goods as things that are movable at the time of identification to the contract. Since certain AI systems are movable and can be considered products, the warranty concept can apply to them. This standard has a few limitations in the case of hybrid systems, whose nature as a service or a product can be determined by the methods mentioned above.
Under a warranty, a duty is implied on the vendor if the user relied on the vendor’s knowledge and expertise to buy the product. These warranties can be either express or implied, as defined in the U.C.C. An express warranty assures the quality or features of the goods and contains written promises as part of the contract.[14] For example, if a vendor sells an AI product saying that it will contain features that analyze and process certain information, those features are assured by the vendor. An implied warranty, by contrast, is a guarantee that is not written or expressly stated. U.C.C. section 2-315[15] addresses the implied warranty of fitness, which applies where the seller knows, at the time of contracting, the particular purpose for which the goods are required and that the buyer is relying on the seller’s skill and expertise to select suitable goods.
METHOD
Various methods were used, including the case study method, involving an in-depth investigation of a single case or a small group of cases to understand unique phenomena or contexts. In addition, quantitative data collection and analysis were conducted first, followed by qualitative data collection and analysis to provide a deeper understanding of the quantitative results.
SUGGESTIONS
Argument of autonomy
Following all the previous discussion, one question seems especially prominent: should we give AI fully autonomous status? As discussed earlier, AI can certainly hold knowledge from which one might conclude that it possesses autonomous intelligence. However, to treat the functions of AI as exactly equivalent to human intelligence would be a flaw in the argument: even if it is considered intelligent without being aware of human characteristics, it still requires crucial input data from either the user or the developer. Moreover, if we place AI on the pedestal of human characteristics, instances may occur in the future where we need to grant such expert systems rights or legal personhood. A fair example is Sophia, a human-like AI robot that was granted citizenship by Saudi Arabia in 2017.
Currently, we are mainly dealing with expert systems that are treated as products and services. Hence, to create legal groundwork for defects arising from AI, we need to recognize that accountability should lie with the professionals who develop and manufacture the expert system, or with those responsible for influencing its results. In the latter case, various contributors, such as users or experts who provide their statistical data, come into consideration. From the models of liability proposed above, it can be understood that the producers of such AI systems have greater expertise and knowledge of the patterns and machine learning involved in the system. Hence, the degree of liability falls more heavily on such professionals than on consumers.
Harmonization of strict liability on a risk-based approach
Considering the discrepancies regarding the applicability of strict liability to AI, there should be a uniform framework for strict liability according to the standard of risk a system poses. By creating uniform groundwork for conventional as well as expert systems based on risk measures, all defects in AI-based motor vehicle or aircraft systems can be brought within the scope of strict liability.[16] The standard of risk could be classified into social risks, pure economic risks, and physical risks.[17] Social risk is exemplified by the Microsoft AI chatbot Tay, which caused social unrest by posting offensive statements on Twitter after analyzing the input data it was given. Economic risk may involve an AI system that provides poor stock recommendations, resulting in economic loss to consumers. Lastly, physical risks mainly involve AI-operated mechanical robots or systems causing injury to a human. By differentiating the risks, it becomes easier to determine the degree of danger a defect poses and whether it is unreasonably dangerous. Since strict liability applies to both tangible[18] and intangible products,[19] its potential applicability is widespread.
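As a purely illustrative data-structure sketch of this idea, the mapping below ties each of the three risk categories from the text to a liability standard; the regime assignments are assumptions made for exposition, not enacted rules.

```python
# Hypothetical sketch of a risk-based liability classification.
from enum import Enum

class Risk(Enum):
    SOCIAL = "social risk"      # e.g., a chatbot posting offensive content
    ECONOMIC = "economic risk"  # e.g., poor stock recommendations
    PHYSICAL = "physical risk"  # e.g., a robot or vehicle causing injury

def liability_regime(risk):
    """Map a risk category to an illustrative (assumed) liability standard."""
    if risk is Risk.PHYSICAL:
        return "strict liability"  # greatest danger to persons
    if risk is Risk.ECONOMIC:
        return "negligence / breach of warranty"
    return "fault-based liability with a duty of care"

for r in Risk:
    print(f"{r.value} -> {liability_regime(r)}")
```

Classifying a defect first, and only then selecting the applicable standard, would give courts a uniform starting point across conventional and expert systems alike.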
Redressal for injury
The need to provide redressal to victims of AI defects is of the utmost importance. The proposal for the draft Artificial Intelligence Act[20] by the European Commission seeks to ensure that when AI system defects cause physical damage, injury, or data loss, consumers may seek compensation from the system provider or manufacturer. This provides financial security for injured consumers. Another proposal, by the European Commission, stated that the developer or producer of software, including AI system providers within the meaning of [Regulation (EU) (AI Act)], should be treated as a manufacturer.[21] This clear distinction differentiates the liability standards applicable to resellers and manufacturers. It would also encourage retailers to deal with manufacturers who are involved in developing the expert system.[22] Further, under a recent EU proposal on liability rules for AI products, modernized liability is imposed on manufacturers for damage caused by robots, drones, or smart home systems, and by digital services that are considered products.[23] Developments in such rules have created a liability framework for such developers.
Regulations for developers and manufacturers
With the development of unconventional fields of services, it is important to come up with new legal provisions that regulate such developers and manufacturers. Creating stricter regulatory authorities to oversee the development process would help reduce the possibility of error-based defects. Such regulatory provisions also focus on the ethical values on which developers base their projects, and they help consumers embrace newfound AI-based solutions. Regulation of AI products should be made human-centric to create a safer environment that respects the fundamental rights of the people.[24] Alongside holding the relevant professionals liable, adequate licensing of manufacturing companies and developers helps create an extended layer of surveillance and safeguards the market. Alongside these propositions, consumer awareness also plays an important role: users should ensure that they analyze AI software properly and do not obtain unethical software in general.
CONCLUSION
Artificial intelligence is spreading its influence into every domain of technology. With such vast influence, consumer dynamics also change rapidly. Such expert systems are very important for the development of humankind. Hence, we need laws and regulations that help create a safe environment for consumers and build trust in AI-based systems. These frameworks would help fill the legal vacuum created by AI and curb the misuse of power by certain dominant manufacturers and developers. Adequate standards for the development of AI would keep a check on error-based faults in the development process. It is also evident that AI systems even more advanced than current ones will emerge in the future. Hence, we need to keep our provisions up to date and ensure that our regulations and liability systems are neither obsolete nor so stringent that they restrict the further development of AI. We need to make AI safe and responsible for its users.
– Apiha Yasmin Laskar, NMIMS Kirit P. Mehta School of Law, Mumbai
[1] Christiane Wendehorst, Strict Liability for AI and Other Emerging Technologies, Journal of European Tort Law 150, 150-180 (2020).
[2] European Commission, New liability rules on products and AI to protect consumers and foster innovation, (Sept. 28, 2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807
[3] Liu Wanting, The Contract in AI Era: Vulnerability and Risk Allocation, 9 CHINA LEGAL SCI. 125, 125-158 (2021).
[4] John McCarthy, What Is Artificial Intelligence?, JOHN MCCARTHY (Nov. 12, 2007), http://www-formal.stanford.edu/jmc/
[5] Marguerite E. Gerstner, Comment, Liability Issues with Artificial Intelligence Software, 33 SANTA CLARA L. REV. 239, 239-269 (1993).
[6] Liu Wanting, The Contract in AI Era: Vulnerability and Risk Allocation, 9 CHINA LEGAL SCI. 125, 125-158 (2021).
[7] Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities, AKRON INTELLECTUAL PROPERTY JOURNAL 171, 171-201 (2010).
[8] Marguerite E. Gerstner, Comment, Liability Issues with Artificial Intelligence Software, 33 SANTA CLARA L. REV. 239, 239-269 (1993).
[9] BLACK’S LAW DICTIONARY 1032 (6th ed. 1990).
[10] RESTATEMENT (SECOND) OF TORTS § 402A (1964) (USA).
[11] Ransome v. Wisconsin Electric Power Co., 87 Wis. 2d 605, 275 N.W.2d 641 (Wis. 1979)
[12] Michigan Law Review, Computer Programs as Goods under the U.C.C., 77 MICH. L. REV. 1149 (1979).
[13] U.C.C. § 2-313 (1990).
[14] Marguerite E. Gerstner, Comment, Liability Issues with Artificial Intelligence Software, 33 SANTA CLARA L. REV. 239, 239-269 (1993).
[15] U.C.C. § 2-315 (1990).
[16] Christiane Wendehorst, Strict Liability for AI and Other Emerging Technologies, Journal of European Tort Law 150, 150-180 (2020).
[17] Christiane Wendehorst, Strict Liability for AI and Other Emerging Technologies, Journal of European Tort Law 150, 150-180 (2020).
[18] RESTATEMENT (SECOND) OF TORTS § 402A (1964).
[19] Ransome v. Wisconsin Electric Power Co., 87 Wis. 2d 605, 275 N.W.2d 641 (Wis. 1979)
[20] European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (AI Act), COM(2021) 206 (Apr. 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN
[21] European Commission, Proposal for a Directive of the European Parliament and of the Council on Liability for Defective Products, COM (Sept. 28, 2022), https://ec.europa.eu/info/files/proposal-directive-adapting-non-contractual-civil-liability-rules-artificial-intelligence_en
[22] Marguerite E. Gerstner, Comment, Liability Issues with Artificial Intelligence Software, 33 SANTA CLARA L. REV. 239, 239-269 (1993).
[23] European Commission, New liability rules on products and AI to protect consumers and foster innovation, (Sept. 28, 2022), https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807
[24] European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (AI Act), COM(2021) 206 (Apr. 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN
