Abstract
Artificial Intelligence has entered the legal field in significant ways, especially through the use of AI-generated data as part of digital evidence in investigations and trials. Tools like facial recognition systems, predictive policing algorithms, and automated forensic reports are now being used to support criminal and civil litigation. However, Indian courts are still governed by legal principles rooted in 19th-century evidence law. As a result, there is growing uncertainty about how AI-generated evidence fits within the existing legal framework.
This paper aims to examine whether the Indian Evidence Act, 1872, is equipped to deal with new forms of evidence that are created or analysed by machines rather than humans. The study discusses key provisions such as Sections 65A and 65B, which deal with electronic records, and evaluates whether these are sufficient to cover the complex nature of AI-generated content. It also explores how Indian courts have interpreted these sections in recent judgments and where ambiguity still remains.
Additionally, the research looks at how other countries are addressing the same issue, with a particular focus on the United States and European Union, where AI is more regulated. The paper argues that India urgently needs a structured legal framework that can address not just the admissibility of such evidence but also its credibility, bias, and transparency. The absence of specific guidelines risks both wrongful convictions and loss of public faith in judicial processes. This research contributes by offering legal and policy suggestions to fill this evolving gap.
Keywords
Artificial Intelligence, Digital Evidence, Indian Evidence Act, 1872, Admissibility of AI Evidence, Legal Vacuum in Technology Law, Algorithmic Bias in Courts.
Introduction
The fast-paced advancement of technology has revolutionized many aspects of modern life, and the legal field is no exception. Among the most transformative technologies, Artificial Intelligence (AI) has emerged as a powerful tool that is increasingly being adopted in law enforcement, forensic analysis, surveillance, and even judicial decision-making.
AI-driven tools now assist in creating and processing evidence, from facial recognition outputs and biometric authentication to predictive analytics, enhanced audio-video materials, and even automated expert reports. These forms of digital evidence are not mere by-products of technology; they actively shape how facts are gathered and presented before courts.
However, while the technology has moved ahead, the law, especially in India, is still catching up. The Indian Evidence Act, 1872, which serves as the backbone of evidentiary rules in Indian courts, was never designed to deal with such advanced technological realities. Though amendments have been made to include electronic records under Sections 65A and 65B, they fall short in addressing complex issues that arise when machines, rather than humans, produce or analyse evidence. For example, who is the author of an AI-generated image or report? Can a judge understand how an algorithm reached a certain conclusion? Is such evidence unbiased and trustworthy? These questions expose the legal vacuum in technology law that currently exists in the Indian judicial framework.
The concept of admissibility of AI evidence involves multiple layers: legal validity, technical reliability, procedural compliance, and constitutional safeguards. The Indian legal system generally follows principles of natural justice and fairness, where the right to cross-examination, the reliability of the source, and the chain of custody play essential roles in accepting evidence. AI-generated evidence disrupts these norms because its internal logic is not always transparent, and its outcomes can vary based on the quality of data input or the algorithm’s design. This raises serious concerns about algorithmic bias in courts, especially when such evidence could influence a person’s guilt, innocence, or civil liability.
Furthermore, there is no uniform policy or judicial standard for admitting AI-generated evidence in Indian courts. While some judges have accepted electronically produced documents after checking compliance with Section 65B, others have raised questions about authenticity and reliability, especially when certificates or metadata are missing. This inconsistency adds to the confusion and highlights the lack of a structured approach. In contrast, countries like the United States and members of the European Union have started building guidelines around AI use in the justice system, focusing on accountability, transparency, and fairness. India, despite being a technology hub, is lagging in this regard.

This research paper critically explores the current state of Indian evidentiary law concerning AI-generated content and attempts to identify the gaps that make it difficult for such evidence to be reliably admitted and evaluated. It also examines landmark cases, legal principles, and international practices to suggest how India can develop a more robust legal response. Through this study, it becomes clear that without proper legal backing, the use of AI in the justice system may do more harm than good. Addressing this gap is not just a technical issue; it is essential for upholding the values of justice, fairness, and the rule of law in the digital age.
Research methodology
This research is qualitative, doctrinal, and analytical in nature. It relies primarily on secondary sources, including statutes such as the Indian Evidence Act, 1872, case law, judicial decisions, academic articles, and expert opinion on artificial intelligence, digital evidence, and legal admissibility.
Review of literature
The admissibility of AI-generated evidence is a novel legal challenge that has yet to be comprehensively addressed in Indian jurisprudence. Existing legal literature and judicial decisions primarily focus on electronic and digital records, with limited attention to artificial intelligence–generated outputs. However, several significant works provide a foundational understanding of the evidentiary issues in digital domains.
One of the landmark cases shaping the discourse on electronic evidence in India is Anvar P.V. v. P.K. Basheer, where the Supreme Court held that electronic records are admissible only if accompanied by a certificate under Section 65B of the Indian Evidence Act, 1872. This decision clarified the procedural threshold for electronic evidence and rejected prior judicial practices of admitting such evidence without proper certification. Later, in Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, the Supreme Court reaffirmed the mandatory nature of Section 65B and emphasized the technical precision required for the admissibility of digital records.
In the scholarly realm, legal academics have begun exploring the intersection of AI and evidence law. Shreya Sinha, in her article on artificial intelligence and Indian evidence jurisprudence, critiques the current evidentiary framework as inadequate for regulating machine-generated content due to its lack of human authorship and explainability. Similarly, Kunal Mehta argues that AI outputs pose unique challenges in authentication and chain of custody, particularly because they are often self-executing and lack conventional metadata or witness-based verification.
Legal commentaries on the Indian Evidence Act have also noted that while Sections 65A and 65B cover electronic evidence, they are silent on the admissibility of algorithmic or predictive outputs generated without direct human intervention. The Bhartiya Sakshya Adhiniyam, 2023, which replaced the Indian Evidence Act, continues the same framework under Sections 61 to 63 but still omits explicit mention of AI-generated material, reflecting a legislative gap.
Internationally, jurisdictions like the United States and the European Union have started debating the reliability and admissibility of AI-based tools in courtrooms, especially in criminal justice systems. However, Indian legal literature remains in the early stages of this discourse, with no binding precedent directly addressing the status of AI-generated evidence.
Furthermore, reports from the Ministry of Law & Justice emphasize the need for updating procedural law to reflect advancements in digital technology, yet they stop short of recommending statutory reform for AI-generated content. This reflects a cautious approach by lawmakers and highlights the necessity for academic and judicial engagement with the topic.
Overall, the literature reveals a substantial gap in Indian legal doctrine and statutory guidance concerning AI-generated evidence. While digital and electronic evidence is now well-integrated into the legal system, AI-generated outputs remain in a legal grey zone. This research aims to address that vacuum and contribute to an emerging field of techno-legal scholarship.
Method
This research paper adopts a doctrinal legal research methodology to critically analyse the current statutory and judicial framework in India with regard to the admissibility of AI-generated evidence. The doctrinal method is suitable here because the issue lies within the interpretative realm of statutes, case law, and legal principles rather than empirical fieldwork. The paper examines the present evidentiary rules under the Bhartiya Sakshya Adhiniyam, 2023, and compares them with the repealed Indian Evidence Act, 1872, to assess whether current legislative provisions adequately address the unique features of artificial intelligence outputs.
The study begins with a close reading of primary legal texts, especially statutory provisions governing digital and electronic evidence. Particular emphasis is placed on Sections 62 and 63 of the Bhartiya Sakshya Adhiniyam, 2023, which correspond with Sections 65A and 65B of the Indian Evidence Act, 1872. These provisions are scrutinized to understand how the legal system currently treats electronic records and whether these definitions extend to AI-generated outputs such as algorithm-based reports, predictive analytics, facial recognition data, or machine learning conclusions.
In addition to statutory analysis, the paper surveys landmark judicial decisions that have interpreted the admissibility of electronic and digital evidence. The cases of Anvar P.V. v. P.K. Basheer and Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal are used as foundational judgments to discuss the certification requirements for digital documents under Section 65B and how the judiciary has responded to emerging technology. These cases are analysed to extrapolate principles that may be applicable or inadequate when applied to evidence generated autonomously by artificial intelligence.
Secondary sources such as academic journal articles, expert commentaries, and legal reviews are extensively referred to for theoretical and comparative insights. Authors like Shreya Sinha and Kunal Mehta have raised critical concerns about the inability of traditional evidentiary principles to account for the opacity, autonomy, and unpredictability of AI-generated evidence. These perspectives are used to frame the legislative and judicial vacuum within the larger context of digital transformation in legal proceedings.
To broaden the analytical framework, the research includes a comparative legal perspective, especially focusing on developments in the United States and the European Union, where judicial systems have begun to articulate principles for evaluating AI tools in evidence, especially in criminal cases involving predictive policing or forensic algorithms. This comparative element highlights both the gaps in the Indian system and potential legislative strategies that could be adopted.
Furthermore, the paper examines reports by Indian law reform bodies and committees such as the Ministry of Law & Justice and the Parliamentary Standing Committee on Home Affairs, which have briefly discussed technological advancements but remain silent on AI-generated content. These policy-level documents are analysed to assess whether the Indian legislature is moving toward a future-proof evidence law or still relying on outdated frameworks designed for human-originated digital content.
The research also makes limited use of authoritative legal commentaries on evidence law, including those by Batuk Lal and Vepa P. Sarathi, which, although primarily written in the context of electronic evidence, offer guiding principles that can be adapted to the emerging AI context. These texts support the argument that the principles of admissibility, relevance, and reliability must evolve to accommodate machine-generated evidence.
Overall, the method involves critical, comparative, and interdisciplinary analysis of statutory materials, judicial decisions, and scholarly works. This approach is geared toward identifying legal ambiguities, drawing attention to unregulated areas, and proposing informed reforms to bridge the gap between technology and evidence law.
Suggestions
The emerging role of artificial intelligence in digital forensics, surveillance, and automated decision-making in law enforcement necessitates urgent legal attention in India. Current evidentiary laws, including the Bhartiya Sakshya Adhiniyam, 2023, while accommodating electronic records, are not sufficiently equipped to handle the distinctive nature of AI-generated content. Based on doctrinal analysis and comparative study of foreign jurisdictions, the following recommendations are proposed to fill this legislative and procedural gap.
1. Statutory Definition and Categorisation
There is a pressing need to explicitly define “AI-generated evidence” in statutory terms. While Section 63 of the Bhartiya Sakshya Adhiniyam replicates the older Section 65B framework, it fails to differentiate between traditional electronic evidence (like emails, PDFs) and autonomous, machine-generated outputs (such as predictive analytics or AI-enhanced surveillance feeds). A legislative amendment must clarify whether AI-generated data qualifies as “electronic records” and specify the conditions under which it is admissible.
2. Develop Admissibility Standards Specific to AI
The judiciary and lawmakers must craft a test for admissibility that assesses AI tools on parameters such as accuracy, reliability, auditability, and transparency. Borrowing from the U.S. Supreme Court’s Daubert principles, Indian courts could consider whether the AI system has been peer-reviewed, has a known error rate, is generally accepted in the relevant field, and can be explained in court. This would prevent blind acceptance of “black box” technologies and preserve the principle of a fair trial.
3. Mandatory Certification and Algorithm Disclosure
Similar to the certificate required under Section 65B of the Indian Evidence Act, a new protocol must be adopted requiring certification by the party producing AI evidence. The certificate should verify that the AI system was operationally valid at the time of generating the output and should disclose essential algorithmic details that do not violate proprietary rights. Without this safeguard, the opposing party cannot meaningfully challenge the evidence, violating principles of natural justice.
4. Training for Legal Professionals and Judges
With the increasing intersection of law and emerging technologies, it is essential to introduce regular training programs on AI literacy for lawyers, prosecutors, and members of the judiciary. Training modules should include understanding algorithmic bias, model limitations, and evidentiary handling of AI-driven data. Without such efforts, courts risk over-reliance on outputs that may lack contextual reliability.
5. Institutional and Policy Research
Institutions like the Law Commission of India, the Ministry of Electronics and IT, and NITI Aayog should be mandated to initiate detailed studies and release white papers focused on AI’s role in the criminal justice system. These reports can guide Parliament in enacting robust, forward-looking reforms, ensuring that the Indian legal system remains technologically adaptive yet constitutionally grounded.
Conclusion
The integration of artificial intelligence into the justice delivery system, particularly in the realm of evidence generation and analysis, marks a significant shift in how legal systems operate in the digital age. AI tools are no longer hypothetical—they are currently being employed for facial recognition, digital forensics, biometric validation, and predictive modelling. However, the Indian legal system, governed primarily by statutes like the Indian Evidence Act, 1872 (now replaced by the Bhartiya Sakshya Adhiniyam, 2023), remains fundamentally rooted in traditional evidentiary principles. This mismatch between legal frameworks and technological advancement has given rise to a serious vacuum in the treatment of AI-generated evidence.
Throughout this research, it has become apparent that while Indian courts have made progress in accepting electronic evidence—particularly after landmark rulings like Anvar P.V. v. P.K. Basheer and Arjun Panditrao Khotkar—the current statutory framework is not yet equipped to deal with evidence generated autonomously by machines. AI-generated outputs differ significantly from traditional evidence in both form and substance. They often lack human authorship, transparency, and explainability—key criteria used to determine the credibility and admissibility of evidence under existing laws.
The absence of express legislative provisions addressing AI-generated content has led to judicial inconsistency and procedural confusion. Judges may or may not accept such evidence depending on their interpretation of sections related to electronic records. The problem is compounded by the fact that AI systems often function as “black boxes,” with little to no clarity about how a particular output or conclusion was generated. This lack of transparency severely undermines a party’s ability to challenge or cross-examine the evidence, thereby weakening the principles of natural justice and fair trial.
Moreover, concerns about algorithmic bias, data manipulation, and lack of standardized protocols raise serious constitutional and ethical questions. These risks are particularly grave in criminal cases, where AI-generated evidence may influence decisions about guilt, bail, or sentencing. Without proper legal scrutiny, the justice system could end up endorsing unreliable or discriminatory outputs, leading to miscarriages of justice.
The international scenario offers valuable lessons. Jurisdictions like the United States and the European Union have begun to engage with these challenges by introducing frameworks and guidelines that promote transparency, accountability, and fairness in the use of AI in judicial processes. India, with its strong technological foundation and growing digital governance ecosystem, is well-positioned to develop its own model suited to its socio-legal context.
Thus, the recognition and regulation of AI-generated evidence is no longer optional—it is a legal necessity. This research underscores the urgent need for a comprehensive statutory and judicial response that addresses not only the technical and procedural dimensions of such evidence but also the broader implications for justice, equity, and the rule of law. In doing so, India can take a proactive role in shaping the future of law in the era of artificial intelligence.
AUTHOR:
Amisha Rani
Shri Ramswaroop Memorial University
