Abstract
Globalization and the rapid development of technological solutions, especially artificial intelligence (AI), are reshaping the legal landscape, particularly the law of liability. This research paper undertakes a comparative study of the laws pertaining to AI and their legal prospects in India and in the global context. It also examines the strategies adopted by the global community, with emphasis on the EU Artificial Intelligence Act and state legislation in the USA. The evaluation focuses on the absence of law addressing negligence, duty of care, and legal responsibility for AI applications. It also analyses the ethical and socio-economic issues at stake, particularly in the Indian context. The author supports a multidisciplinary approach to AI regulation, since it requires integration of policymaking, law, and information technology expertise. Such cooperation across disciplines is necessary to develop mechanisms that hold persons accountable and guarantee the rights of every citizen. The paper gives an account of the existing situation regarding artificial intelligence and legal accountability in order to inform further advancement in the field.
Keywords
Artificial Intelligence, Legal Liability, Comparative Analysis, Regulatory Frameworks, Ethical Considerations, Jurisprudence
Introduction
Artificial intelligence (AI) is rapidly evolving, significantly impacting legal liability issues. An analysis of Indian and international law highlights the challenges in assigning legal responsibility for losses caused by AI. India has yet to establish a clear legal framework addressing AI liability, leaving many questions unresolved. Existing laws such as the Information Technology Act, 2000 and the newly passed Consumer Protection Act, 2019 do help frame a rough concept, but they may not meet the demands of AI, which is self-operating, not easily traceable, and can have several unexpected side effects. The Indian judiciary has also addressed these questions, and in certain cases the courts have acknowledged the need for a broader legal regime to deal with liabilities arising from AI.
Globally, a number of legal systems have adopted diverse approaches to responsibility for artificial intelligence. The European Union has put forward the Artificial Intelligence Act, which is expected to lay the foundations of an appropriate legal regime regulating all AI systems in the EU, including provisions concerning legal responsibility. In the U.S., AI regulation is fragmented, with various states and federal bodies developing guidelines. Courts are adapting traditional legal theories to address AI-related issues, focusing on causation and accountability among designers, producers, sellers, and end-users. Key challenges include attributing legal damage and accounting for the emergent properties of AI systems. Addressing these complexities will require legal reforms, technological advancements, and collaboration among policymakers, legal experts, and AI developers.
Research Methodology
This paper is descriptive in nature, and the research draws on secondary sources for a deep analysis of the impact of artificial intelligence on legal liability. Secondary sources of information such as newspapers, journals, and websites are used for the research.
Review of literature
Approaches to the legal issue of liability arising from AI's application are only now emerging, and they reveal clear differences between the Indian and international legal systems. Internationally, the European Union has developed furthest on this matter, with the General Data Protection Regulation (GDPR) covering data issues across the Union and the recently proposed Artificial Intelligence Act, which adopts a risk-based categorization of AI and sets ethical requirements such as transparency and accountability. Although the United States does not have a clearly defined federal strategy for AI regulation, many individual states have adopted specific regulations and guidelines on matters of liability and ethics.
India, on the other hand, remains only partially protected because its legal framework is still evolving. The Personal Data Protection Bill introduced in India resembles the GDPR in its regulation of data privacy but does not contain specific provisions on liabilities arising from AI-driven solutions. The National Strategy on Artificial Intelligence prepared by the NITI Aayog aims at AI for efficient business and industry, paying less attention to regulatory measures concerning accountability for AI. Ethical issues such as injustice and opacity in algorithmic applications are critical in both contexts; however, the impact and scale of the problem are especially significant in India owing to socio-economic factors and disparities, calling for localized solutions. A comparative examination of the subject emphasises the importance for India of establishing AI regulations that address legal and ethical issues while fostering innovation within the nation's framework.
Global Development: Historiography
The term “artificial intelligence” (AI) was coined by John McCarthy in 1955, leading to its recognition as an academic discipline in 1956. AI has experienced cycles of optimism and despair, with fluctuating funding and various research approaches, including brain simulation and simulations of human problem-solving. By the 21st century, statistical machine learning became prominent, effectively solving complex challenges in business and research. This evolution raised philosophical questions about the nature of intelligence and the ethics of creating machines that mimic human behavior. A practical example is ROSS, an AI built on IBM Watson, which analyzes legal materials and has been used in law firms across the United States.
Understanding AI
AI comprises the ability of computers to perceive, learn, reason, and decide like a human being. Its subfields, such as machine learning, deep learning, natural language processing, and computer vision, help solve a range of problems. Machine learning lets AI systems learn from historical data, meaning they improve as they progress; deep learning draws diagnoses and prognoses from multiple data sets. Natural language processing allows an AI system to read, write, and converse, which is crucial for searching the law and analysing documents. Computer vision trains systems to interpret data in the form of images or videos. AI can be expected to shape legal research and the review of legal documents, making both faster and more efficient. Programs such as Kira Systems, Leverton, and eBrevia can locate necessary data in contracts and papers, significantly saving time on due diligence and analysis. AI also makes it possible, in the process of legal research, to present large bodies of relevant literature to scholars so that they can locate ideas, comprehend them, and relate them to one another. But these tools are not substitutes for judgment and contextual sense-making; AI is an enabler that enhances the productivity and effectiveness of legal professionals.
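As a minimal illustration of the kind of clause identification such contract-review tools automate, the sketch below tags contract sentences by pattern matching. The clause labels and patterns here are hypothetical; commercial products rely on trained machine-learning models rather than hand-written rules.

```python
import re

# Hypothetical clause patterns for illustration only; real tools such as
# Kira Systems use trained models, not hand-crafted regular expressions.
CLAUSE_PATTERNS = {
    "termination":   re.compile(r"terminat(e|ion)", re.IGNORECASE),
    "indemnity":     re.compile(r"indemnif(y|ication)", re.IGNORECASE),
    "governing_law": re.compile(r"governed by the laws of", re.IGNORECASE),
}

def tag_clauses(contract_text: str) -> dict:
    """Split a contract into sentences and tag each sentence with the
    clause types whose pattern it matches."""
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    tagged = {}
    for sentence in sentences:
        labels = [name for name, pattern in CLAUSE_PATTERNS.items()
                  if pattern.search(sentence)]
        if labels:
            tagged[sentence] = labels
    return tagged

sample = ("Either party may terminate this Agreement on 30 days' notice. "
          "The Supplier shall indemnify the Buyer against third-party claims. "
          "This Agreement is governed by the laws of India.")
for sentence, labels in tag_clauses(sample).items():
    print(labels, "->", sentence)
```

Even this toy version shows why such tools save due-diligence time: the reviewer is pointed directly at candidate clauses instead of reading the whole document, while the final legal judgment remains with the lawyer.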
Legal liability: A theoretical framework
Understanding liability risk from AI
The use of artificial intelligence (AI) is one of the most promising technologies in healthcare. Nevertheless, despite its numerous possibilities to enhance the quality of patient treatment and reduce expenditure, there are serious concerns about the detrimental consequences of implementing AI tools. Lawyers' concerns centre on legal responsibility and claims against healthcare organizations, which must navigate federal laws that are still in their developmental stages. Perhaps the most pressing legal question is: who will be legally responsible for the adverse effects that AI tools cause to patients?
Liability of AI in Healthcare
The responsibility for damages resulting from an AI application in healthcare is still uncertain, and at this time there are few legal precedents. Medical AI models are relatively recent, and thus there is only a small number of personal injury claims with judicial opinions. Problems encountered by plaintiffs in software liability cases include the following:
- Duty of Care and Standard of Care: Typically, when a product causes harm to a patient, the law follows the standard principles for sharing the blame between the user of the product and the manufacturer. The plaintiff must show that the defendant had a “duty of care,” their actions were below the “standard of care,” and their breach caused the harm. However, these determinations are more complicated for AI and other software tools applied in the healthcare industry.
- Product Liability: Courts have not been receptive to applying product liability doctrines where AI leads to an injury. The legal doctrine of pre-emption prevents patients from seeking remedies in state courts for injuries involving certain FDA-approved medical devices. Also, the majority of states require the plaintiff in a product liability case against a manufacturer to show that a reasonable and safer alternative existed and that the harm was reasonably foreseeable. Meeting these demands is technically difficult because of the opaque structure of AI.
Legal Responsibility in the Tort of AI
The incorporation of AI into the healthcare sector raises difficult legal issues in tort law. As noted above, the doctrine of pre-emption bars patients from bringing personal injury suits in state courts over certain FDA-approved medical devices, and most states require a plaintiff suing a manufacturer to prove that a feasible and safer alternative existed which the manufacturer declined to use, and that the harm was reasonably foreseeable. Satisfying these demands is technically challenging because the structures behind AI are often obscure.
Liability of AI in contract law
The application of AI in contract law presents new questions concerning responsibility for contract formation and breaches. Litigation has arisen over the enforcement of contracts made using artificial intelligence, over changes in contractual relations resulting from performance by AI, and over the absence of law governing contractual relations formed through AI. For instance, in early 2022 a known case in the UK concerned an AI contract management system under which one party was barred from enforcing certain provisions of the contract. A variation is a situation in which an Indian court concluded that a company was in breach of contract when an AI procurement tool modified a supplier's contract without the knowledge of senior management. These cases indicate the social imperative for legal certainty on matters touching AI in contracts: specific regulations on the involvement of AI in the formation and interpretation of contracts, on the allocation of risk for potential contract breaches caused by AI, and on ensuring that the parties remain accountable for the actions of AI systems engaged in contract performance.
AI and legal liability in Indian and international jurisprudence
AI and legal liability in India
The Indian legal system has faced certain issues that have arisen from the incorporation of artificial intelligence in various sectors. The Indian Supreme Court first encountered concerns arising from the use of AI in Sushil Kumar Sharma v. Union of India, where it called for a comprehensive legal framework for AI. The court accepted that existing legislation was insufficient to address the necessities and consequences of AI-based technologies.
The same case demonstrated the absence of rules and restrictions governing AI systems and clarifying who is responsible for them. The court stressed the need for concrete legal frameworks to govern the use of artificial intelligence responsibly and to tackle problems such as unfair algorithms, the right to privacy, and anti-social uses. It showed that the Indian judiciary and policymakers must address the future legal problems posed by AI systems and work towards a solid legal framework.
AI and legal liability in international jurisprudence
The international legal community has likewise paid increasing attention to the legal aspects of AI. Scholars have analysed the problem of countering harmful AI applications such as fake news generation. Zellers et al. have described reliable approaches to distinguishing text written by AI, which underlines the need for viable and efficient procedures for identifying AI-generated content.
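To illustrate the statistical intuition behind such detectors in a vastly simplified form (the systems described by Zellers et al. use large neural language models; the character-bigram model and corpus here are hypothetical stand-ins), one can score a candidate text by how well its statistics match a reference corpus of known human writing:

```python
import math
from collections import Counter

def train_bigrams(reference: str) -> Counter:
    """Count character bigrams in a reference ('known human') corpus."""
    return Counter(reference[i:i + 2] for i in range(len(reference) - 1))

def avg_logprob(text: str, bigrams: Counter) -> float:
    """Average log-probability of the text's character bigrams under the
    reference counts, with add-one smoothing.  A lower score means the
    text is statistically unlike the reference corpus."""
    total = sum(bigrams.values())
    vocab = len(bigrams) + 1
    n = max(len(text) - 1, 1)
    score = 0.0
    for i in range(len(text) - 1):
        score += math.log((bigrams[text[i:i + 2]] + 1) / (total + vocab))
    return score / n

# Toy reference corpus standing in for a collection of human-written text.
reference = ("the court held that the defendant owed a duty of care "
             "to the plaintiff ") * 20
model = train_bigrams(reference)

natural = "the court found that the plaintiff owed no duty"
gibberish = "zqxv jkqp wvxz qqzz xkjv"
print(avg_logprob(natural, model) > avg_logprob(gibberish, model))
```

The real research problem is much harder, since strong generators produce text whose surface statistics closely mimic human writing; this sketch only conveys the likelihood-scoring idea that underlies statistical detection.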
The possible legal effects of creative work produced by AI have also been debated. Yanisky-Ravid and Velez-Hernandez have analysed the issue of copyright protection for artworks created with the help of creative AI systems, underlining the importance of addressing questions of intellectual property rights in the context of AI-related innovation. This study reflects the global legal community's ongoing effort to keep pace with the changes AI brings to intellectual property, liability, and accountability.
Regulatory framework and ethical considerations
At present, the European Union is in the process of governing artificial intelligence (AI) through ethical guidelines and the proposed Artificial Intelligence Act. The legislation is risk-based: it divides AI systems into risk levels and sets strict requirements for high-risk systems to enhance the transparency of AI and to ensure human oversight. The EU focuses on the ethical aspects of AI deployment, namely equality, openness, and accountability, as indicated by the High-Level Expert Group on Artificial Intelligence. While countries such as the UK and Germany are developing legislation on artificial intelligence, India still has no clear rules and is awaiting a personal data protection bill that does not specifically concern artificial intelligence. The EU and India face similar issues where AI is concerned, namely bias, transparency, and accountability. Since AI is becoming part of daily life, it is necessary to define appropriate legislative requirements and ethical norms. Current and future endeavours in the EU and India will consequently shape the development and deployment of AI in both jurisdictions.
Future trends and developments
Over the past decade, artificial intelligence (AI) has become enmeshed in various industries. The era has witnessed a dramatic increase in tools, applications, and platforms based on AI and machine learning (ML). These technologies have affected healthcare, manufacturing, law, finance, retail, real estate, accountancy, digital marketing, and several other areas.
What trends will impact the legal profession over the next five years?
The legal profession has the potential to be revolutionised by artificial intelligence (AI). In a recent survey of law firms, 79% of respondents projected that AI would exert a high or transformational impact on law firms within the next five years, against 69% who made a similar projection the previous year. Also, 42% of respondents believe that AI will transform the industry, up from 34% a year earlier. The optimists may well be right, but lawyers remain more sceptical. The most important strategic area related to AI identified among law firms is the search for AI opportunities, which shows that the field must be explored cautiously, given the ethical and legal concerns raised by the profession's transition.
Potential benefits of AI for lawyers
The potential benefits of AI in the legal industry are vast and varied. Many professionals believe that AI has the power to revolutionize the way legal work is done, making it more efficient, accurate, and effective. In addition to efficiency gains and freed-up time, professionals are starting to get excited about the opportunities for AI to deliver value directly through new use cases. The top three areas of new value for professionals are:
- Handling large volumes of data more effectively
- Reducing inaccuracies due to human error
- Providing advanced analytics for better decision-making
Law firm professionals specifically noted the opportunities for AI to help them improve response times.
How lawyers and leaders can prepare for the future
The Future of Professionals Report highlights the growing impact of technology, especially AI, on the legal industry. With this in mind, it is crucial for lawyers and leaders to proactively prepare for the future.
- Embrace change and foster a culture of innovation : The legal industry is undergoing rapid transformation, and successful firms will be those that embrace change and continuously seek improvement. Encouraging a culture of innovation and open-mindedness is crucial, where lawyers and staff feel empowered to experiment with new ideas and approaches.
- Stay educated and informed : Continuous learning is paramount for lawyers and leaders to remain relevant in the evolving legal landscape.
Ultimately, the future of the legal profession lies in the hands of those who are willing to embrace change, innovate, and continuously learn. By fostering a culture of embracing technology, education, and diverse perspectives, lawyers can position themselves for success in the future of work.
Artificial intelligence as evidence – Case laws
Machine learning (ML) outputs are now being presented as evidence in trials, and courts are finding it difficult to deal with questions of the credibility and relevance of such evidence. A prime example is State v. Loomis in Wisconsin, where the defendant objected to the use of an AI-based risk assessment tool in his sentencing, arguing that it was unfair. The court ultimately rejected the argument that its use was unlawful, but it repeatedly emphasized the importance of transparency and of human supervision of artificial intelligence.
Another line of cases concerns police use of AI-based facial recognition systems, where concerns have been raised over the accuracy of the systems, the issue of bias, and the right to privacy. In Commonwealth v. Moore in Massachusetts, the court dealt with facial recognition as evidence; while debate continues over the admissibility of facial identification evidence, the Massachusetts court found the technology sufficiently accurate for identification in that case.
These cases demonstrate the increasing relevance of AI in legal circumstances and the deficiency of contemporary courts in providing proper standards for the admissibility of the evidence produced by AI.
Suggestions and Conclusion
The rapid advancement of artificial intelligence (AI) presents significant challenges and opportunities in the realm of legal liability, and a comparative study of Indian and international legal frameworks emphasizes the need for robust regulatory measures. While the European Union is taking proactive steps with the proposed Artificial Intelligence Act, India remains in a developmental phase, lacking comprehensive legislation to address AI-related liabilities; this poses risks especially in sectors like healthcare, where AI's role is expanding. The judiciary's recognition of the need for a structured legal regime is a positive step, yet concrete measures are essential to ensure accountability and protect citizens' rights. What is needed is comprehensive legislation, multidisciplinary collaboration among legal experts, technologists, and policymakers, enhanced public awareness and education, and a focus on ethical standards that prioritize transparency, accountability, and fairness in AI applications, particularly in sensitive sectors like healthcare. Only in this way can innovation be fostered while legal frameworks evolve to meet the challenges posed by AI technologies.
Sharmila Solanki
Swami Devi Dayal Group of Professional Institutions, Barwala, Panchkula.
Kurukshetra University