ABSTRACT
The sudden rise in artificial intelligence’s content-producing powers has transformed how we create and consume content, and in the legal context it presents a unique dilemma for the presentation of evidence. AI-generated products such as deepfakes, synthetic audio, and machine-generated text can convincingly imitate genuine human-created evidence. This paper explores whether such evidence can be considered admissible under the Bharatiya Sakshya Adhiniyam, 2023 (BSA), India’s newly enacted legislation replacing the Indian Evidence Act, 1872.
While the BSA, 2023 takes a modern approach to various aspects of evidence law, including digital records and electronic signatures, it is silent on the authenticity of machine-generated content that lacks a traceable human author. This paper closely examines relevant provisions such as Sections 61, 63, 66, and 67 of the BSA in the context of AI-generated material. Through doctrinal analysis and a comparative study of legal positions in foreign jurisdictions such as the US, the EU, and China, it evaluates whether Indian evidence law is equipped to face the emerging challenges of AI in litigation.
This research argues that current legal standards have not evolved enough to reliably examine or admit AI-generated evidence without risking unfairness, manipulation, or false information. It offers suggestions, including judicial training, the introduction of rebuttable presumptions regarding AI content, and the establishment of technical forensic protocols. The study concludes that while the BSA is a step forward in digitising Indian evidence law, it requires timely reforms to safeguard procedural fairness in the age of intelligent machines.
KEYWORDS
AI-Generated Evidence, Bharatiya Sakshya Adhiniyam, Deepfakes and Law, Admissibility of Digital Records, Artificial Intelligence and Justice, Indian Evidence Law Reform.
INTRODUCTION
Artificial Intelligence (AI) has rapidly outgrown the world of computer science and entered the courtroom. Tools like ChatGPT can draft convincing legal arguments with modern legal analogies, while deepfake videos and synthetic audio can convincingly fabricate real-life events, voices, and people. These developments raise serious questions about the authenticity of such content and whether it is admissible as legal evidence. However, admissibility is not the only concern. Indian evidence law is built on the principles of authorship, intention, and authenticity. AI-generated material challenges all three. If the machine has no intent, no human author, and can be easily manipulated, does its output still count as admissible evidence under the BSA? This paper argues that the current law lacks the legal tools to answer that question clearly. As India transitions to the Bharatiya Sakshya Adhiniyam, 2023 (BSA), which replaces the Indian Evidence Act, 1872, the question arises: can the new statute handle the complexities introduced by AI-generated evidence?
The BSA attempts to modernize evidence law by acknowledging electronic records and digital signatures. Sections such as 61, 63, and 66 provide for electronic evidence, including presumptions of authenticity under certain technical safeguards. However, none of these provisions directly engage with content autonomously generated by AI, particularly when there is no known human author or where manipulation is virtually undetectable to the naked eye.
While jurisdictions like the U.S. and U.K. have begun accepting AI-assisted document reviews in civil and regulatory proceedings, the Indian legal framework has yet to acknowledge such technology as admissible evidence. With the growing use of predictive coding by Indian law firms, forensic experts, and compliance professionals, it’s important to examine the legal grounds for admitting AI-processed material in court. This paper analyses the scope and limitations of the BSA in dealing with AI-generated evidence, using doctrinal interpretation and comparative references from jurisdictions like the United States, European Union, and China. The aim is not only to critique the current legal position but also to offer constructive suggestions to future-proof the Indian evidentiary system.
The issue requires immediate attention. With AI tools becoming cheaper and more accessible, the risk of falsified evidence entering legal proceedings is no longer theoretical. With just a click of a button, anyone can produce fabricated WhatsApp chats or doctored surveillance footage, and in the past few years several celebrities have become victims of deepfake videos. The threat is real, and the law must adapt. As India takes its first steps into a post-colonial evidence regime, it must ensure that its legal framework can keep pace with the technological realities of the 21st century.
RESEARCH METHODOLOGY
This research uses a doctrinal approach to interpret legal texts, judicial decisions, and comparative legislative developments related to AI-generated evidence. The methodology revolves around analysing the Bharatiya Sakshya Adhiniyam, 2023 (BSA), particularly its provisions dealing with electronic records and digital evidence, and includes statutory interpretation of Sections 61, 63, 66, and 67 of the BSA.
The research is non-empirical and relies on primary sources such as the BSA, relevant case law under the Indian Evidence Act, and recent judgments that deal with electronic evidence in India. Since the BSA is relatively new and lacks judicial interpretation in the context of AI-generated evidence, the study also considers relevant precedents under the old Evidence Act (1872) and analyses how those principles might evolve under the new law.
The research draws on insights from foreign jurisdictions such as the United States, the European Union, and China to strengthen the doctrinal analysis. These systems have begun to grapple with the evidentiary challenges posed by deepfakes, synthetic content, and machine-generated records. This comparative perspective helps evaluate whether Indian law is moving in the right direction or lagging behind.
The study also uses secondary sources such as academic commentaries, law commission reports, research papers, policy guidelines, and media reports, especially to highlight the rapid growth and abuse of generative AI technologies. These sources assist in mapping the gap between technological developments and the current legal framework.
REVIEW OF LITERATURE
In India, conversations around artificial intelligence and evidence law are still in their early stages. There’s been some talk about electronic records, but when it comes to content that’s generated by AI, not just stored or transmitted digitally, there’s very little serious analysis. Most of the scholarship so far just doesn’t go that deep.
Dr. V.K. Dewan’s book on Law Relating to Information Technology and Cyber Crimes was one of the earlier efforts to tackle digital evidence. It covered important ground, like issues of authenticity and how courts can come to trust electronic records. Its main limitation is that it came out long before tools like ChatGPT, DALL·E, or even deepfakes entered the scene, so naturally it does not deal with evidence produced by a machine without direct human involvement.
Then there’s the 2018 report by Justice B.N. Srikrishna’s committee on data protection. It acknowledged that algorithms play a big role in processing personal data. But again, it wasn’t concerned with the use of that data as evidence in court. The committee was more focused on privacy, not proof. Even the newer Digital Personal Data Protection Act, 2023, and the Data Protection Board under it do not address the evidentiary implications of AI-generated material. It’s like the legal framework is always one step behind the tech.
Meanwhile, people are already wrestling with those issues in other parts of the world. In the United States, scholars like Andrew Keane Woods and Rebecca Wexler have argued that AI-generated content doesn’t quite fit into existing legal frameworks, especially under the Daubert standard for expert evidence. Judges there are expected to determine whether an expert’s testimony is reliable and scientifically valid. But what happens when the so-called expert is a black-box algorithm? That opens up a whole new set of questions.
The European Union is moving even faster. Its AI Act talks about ways to detect and verify deepfakes. Ideas like mandatory watermarks or cryptographic stamps have been floated to show where content comes from and whether it’s been tampered with. These aren’t just academic proposals; they’re shaping real policy.
In China, the courts have gone a step further. Some judicial pilots are using AI to generate and process evidence. Scholars there have suggested using blockchain to track and verify AI-generated content before it’s accepted by a judge. That sounds promising, but critics have raised valid concerns, mainly about how transparent these systems are and whether defendants can challenge them effectively.
Back in India, legal scholarship still hasn’t caught up. It’s a good opportunity to rethink how we handle digital and AI-related evidence. But as of now, no one has done a full doctrinal or policy-level study of how the BSA might deal with machine-generated content. That silence is telling. This paper tries to step into that gap by looking at what the BSA says and what it doesn’t, and by comparing it with what’s happening in other legal systems. The goal isn’t just to criticise where we’re lagging, but to ask what kind of evidentiary framework we need in a world where machines can create facts.
METHOD
This research uses a doctrinal method combined with legal analysis to assess the admissibility of AI-generated evidence under the Bharatiya Sakshya Adhiniyam, 2023 (BSA). The method includes three core steps:
Statutory Interpretation of the BSA, 2023
The study begins with a close reading of the key provisions of the BSA. Section 61 provides that nothing in the Adhiniyam shall apply to deny the admissibility of an electronic or digital record on the ground that it is an electronic or digital record. Section 63 makes digital records admissible, but only when they are properly certified, a safeguard against fake or tampered evidence. Section 66 sets out the situations in which such a certificate is necessary, and Section 67 deals with presumptions relating to digital signatures.
Section 63 requires a certificate under conditions set out in Section 66 for electronic records to be presumed genuine. But in the case of AI-generated images or voice clips, where no human creates or owns the output, who signs that certificate? This reveals a doctrinal gap. The law implicitly assumes that all electronic evidence has a human origin. Yet AI output often lacks such traceability. If courts continue to demand human certification for AI evidence, it risks excluding evidence even when it may be authentic but lacks a certifying creator.
These sections are interpreted to determine whether current statutory language can extend to include machine-generated content, such as AI-created images, text, or audio. The analysis checks whether the BSA implicitly requires human authorship or whether it allows for autonomous systems to be recognised as sources of admissible evidence.
Doctrinal Analysis of Indian Judicial Decisions
Since the BSA is a newly enacted statute, the research refers back to relevant case law under the Indian Evidence Act, 1872, notably:
Anvar P.V. v. P.K. Basheer – This case changed the game for how electronic evidence is admitted in Indian courts. It made Section 65B certification mandatory, which affects civil and criminal trials, investigations relying on digital data, election petitions, and cybercrime cases.
Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal – This decision reaffirmed Anvar and clarified that the certificate under Section 65B(4) is a mandatory condition for the admissibility of electronic records.
These decisions are analysed to understand how Indian courts have dealt with electronic evidence in the past, particularly regarding authentication, chain of custody, and expert validation. This helps project how courts may treat AI-generated evidence moving forward under the BSA.
Comparative Legal Analysis
To account for the absence of Indian case law on AI-generated evidence specifically, the paper draws from the practices of:
United States – Daubert Standard for Expert Evidence
Daubert v. Merrell Dow Pharmaceuticals (1993) established the Daubert standard, making federal judges act as “gatekeepers” to ensure that expert testimony is evidence-based and reliable. The Daubert test empowers judges to assess the reliability of evidence even before trial. Indian judges under the BSA, however, lack an equivalent screening tool. Courts cannot refuse electronic evidence unless procedural defects exist, and they have little power to question its reliability before admitting it. If AI-generated content enters the record, Indian courts may be forced to admit flawed or misleading material simply because it ticks procedural boxes.
European Union – AI Act & Deepfake Regulations
The EU AI Act requires transparent labelling of AI-generated content, with special safeguards for deepfakes used in public-interest contexts. Deepfakes that influence elections or public opinion may be classified as “high-risk” and subjected to stricter oversight. The EU is actively exploring technical detection methods like watermarking and metadata to flag synthetic content, an approach that centres on transparency and accountability. India, however, lacks a parallel regime. While watermarking sounds promising, it raises questions about enforceability in a jurisdiction with weak digital infrastructure and varied court capacity. Before adopting this model, India must assess whether watermarking can realistically function as a safeguard in Indian trials.
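To make the metadata route concrete, here is a minimal sketch in Python of what a first-pass provenance check on a submitted image might look like. It assumes a PNG whose creator embedded a self-declared label in a text chunk; the key name "ai_generated" is a hypothetical convention used purely for illustration, not part of the AI Act or any standard, and the absence of such a label proves nothing about whether the content is synthetic.

```python
# Minimal sketch: read a PNG's text-chunk metadata and report any
# self-declared AI-provenance label. The "ai_generated" key is a
# hypothetical convention, not a standard.
from PIL import Image

def declared_ai_provenance(path: str):
    """Return the image's self-declared AI-provenance label, if any."""
    with Image.open(path) as img:
        metadata = dict(img.info)  # for PNGs, tEXt/iTXt chunks land here
    return metadata.get("ai_generated")

label = declared_ai_provenance("exhibit_17.png")  # hypothetical exhibit file
if label is None:
    print("No provenance label found; absence alone proves nothing.")
else:
    print(f"Image self-declares: {label}")
```

A check like this is trivially defeated by stripping metadata, which is why robust watermarking schemes embed the signal in the pixel data itself, and why enforcement, not detection alone, determines whether the safeguard works.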
China – Blockchain in Court
In June 2018, the Hangzhou Internet Court became the first to accept blockchain-secured evidence, using cryptographic timestamping and hash verification to authenticate digital records. In July 2021, China’s Supreme People’s Court formalized a presumption: blockchain-verified electronic evidence is deemed tamper-free, pending technical verification.
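The hash-verification mechanism at the core of this practice is straightforward to illustrate. Below is a minimal Python sketch, under the assumption that a digest of the file was recorded at the time of collection; the exhibit file name is hypothetical, and the blockchain anchoring that makes the recorded digest itself tamper-evident is outside the sketch.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At collection: record the digest and a UTC timestamp. Anchoring this
# record on a blockchain is what makes the record itself tamper-evident.
record = {
    "file": "cctv_clip_07.mp4",  # hypothetical exhibit
    "sha256": sha256_of_file("cctv_clip_07.mp4"),
    "collected_at": datetime.now(timezone.utc).isoformat(),
}

# At trial: a freshly computed digest must match the recorded one.
assert record["sha256"] == sha256_of_file("cctv_clip_07.mp4"), "possible tampering"
```

Note that the comparison establishes only that the file has not changed since the digest was recorded; it says nothing about whether the content was authentic, or AI-generated, at that moment, which is why Chinese courts still subject such evidence to technical verification.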
Why These Matter to Indian Law
The Daubert model emphasizes judicial scrutiny of reliability as an essential step before AI-based expert evidence is admitted.
The EU approach highlights transparency and accountability, useful in regulating deepfake-based testimony.
China’s blockchain validation tackles authenticity, crucial for verifying AI-generated documents or logs.
Each of these methods is geared toward answering the central question: Can AI-generated evidence be admitted in Indian courts, and if so, under what conditions?
SUGGESTIONS
The Bharatiya Sakshya Adhiniyam, 2023, marks progress in adapting evidence law to a digital environment. However, it still leaves critical blind spots when it comes to the unique issues raised by AI-generated material. As synthetic content becomes more realistic and more accessible, courts will soon be confronted with tough questions about authorship, reliability, and the risk of tampering. The proposals below are intended to address some of these doctrinal and procedural gaps.
- Presumption Against Admissibility Without Human Attribution
Courts should consider adopting a default position where AI-generated evidence is presumed inadmissible unless a human origin can be established. This presumption would not be absolute; parties could rebut it by producing credible technical or forensic proof of authenticity. The aim is to reduce the chance of deepfakes or artificially fabricated content being smuggled into the evidentiary record without proper safeguards. The presumption would reverse the burden of proof, requiring the party advancing AI evidence to demonstrate its reliability first. Such a shift would align with the evidentiary principle that the party seeking to rely on disputed evidence must prove its authenticity. In effect, it treats AI-generated content as inherently suspect unless proven otherwise, a doctrine grounded in procedural fairness.
- Expand Sections 63 and 66 to Cover Autonomous Content
Sections 63 and 66 of the Bharatiya Sakshya Adhiniyam deal with electronic evidence and its admissibility in courts of law. These provisions could be made more effective, either by amendment or by judicial clarification, to expressly cover content produced without direct human intervention. The law could require such material to pass a reliability test, backed by expert input or metadata validation, to ensure that it has not been tampered with.
- Mandatory Certification by Registered Forensic Labs
Before any AI-generated content is admitted, courts should insist on a formal verification certificate from government-approved cyber forensic labs, such as CERT-In or labs affiliated with central forensic agencies. This would act as a filter to block manipulated or unverifiable submissions and provide the judiciary with a reliable technical assessment of the material’s integrity. Courts already rely on expert opinion under Section 39 of the BSA (the successor to Section 45 of the Indian Evidence Act). Extending that trust to AI-specific forensic experts, especially from CERT-In or government-accredited labs, creates a legally consistent way to bridge the gap between technical complexity and judicial competence.
- Judicial Education on AI and Digital Forensics
With the increasing presence of AI-based evidence, there’s a pressing need to upgrade the skill set of the judiciary. Judicial academies must introduce structured modules on digital forensics, deepfakes, and AI-generated content to ensure judges understand the tools, the risks, and the proof standards needed.
- AI Education in Law Schools
As AI becomes more deeply rooted in society, it is crucial to teach the upcoming generation of lawyers about it. Law students must learn how easily material can be fabricated and falsified; this will help them understand not only the dangers and repercussions of AI but also the technology behind falsified evidence.
- Disclosure Rules for AI-Assisted Legal Submissions
When parties rely on AI tools to create or process evidence, they should be required to disclose basic information about the mechanisms used. This should include the AI tool used to create the material, its known limitations, the data used to generate the output, and any confidence indicators. Such disclosure promotes transparency and helps the court assess the reliability of AI-derived material; a possible format is sketched below.
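As an illustration, such a disclosure could take the form of a machine-readable record filed alongside the evidence. The sketch below is hypothetical in every field name and value; the BSA prescribes no such format.

```python
import json

# Hypothetical disclosure record for an AI-processed exhibit.
# Every field name and value is illustrative, not a prescribed format.
disclosure = {
    "exhibit_id": "EX-042",
    "ai_tool": "ExampleGen v2.1",  # hypothetical tool name and version
    "purpose": "enhancement of low-light CCTV footage",
    "input_data": "original CCTV clip, exhibit EX-041",
    "known_limitations": [
        "may hallucinate fine facial detail",
        "not validated on heavily compressed video",
    ],
    "confidence_indicators": {"model_reported_score": 0.82},
}

print(json.dumps(disclosure, indent=2))  # serialised for filing with the evidence
```

A structured record of this kind would give opposing counsel concrete points of challenge: the tool, its limitations, and the inputs from which the output was generated.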
CONCLUSION
India is only beginning to grapple with how artificial intelligence fits into its evidentiary laws. The Bharatiya Sakshya Adhiniyam, 2023, made some progress, particularly with how it treats electronic records and digital signatures. But it stops short of tackling the specific issues that arise when the content is generated by AI systems rather than humans. Traditional electronic evidence still assumes there’s a traceable author, a clear intention, and a chain of custody. That’s not always the case with AI outputs, especially those created by generative models or black-box systems. The result is a gap in both legal clarity and procedural safeguards.
Globally, these dilemmas aren’t new. In the U.S., for instance, scholars have flagged that legal tests like the Daubert standard, which guide judges on whether expert testimony is reliable, don’t apply when the “expert” is a machine whose inner workings even its creators might not fully understand. The conversation becomes even trickier when the algorithm is proprietary or its decision-making is opaque. Europe, on the other hand, seems to be moving faster. Under its AI Act, tools like digital watermarks and cryptographic tagging are being considered to help track the origins of synthetic content and prove whether it’s been altered. China, too, has stepped into this space by mandating blockchain-based validation mechanisms for some forms of digital evidence.
The complications created by deepfake technology make the need for strong legal rules in India pressing. The mix of artificial intelligence and identity theft raises concerns about privacy and the basic rights of people in the online world. Current laws have struggled to keep up with how fast this technology is changing, leaving big gaps in legal protection. Even with the rules in the Information Technology Act and the Indian Penal Code, there are not enough specific laws to deal with the unique problems that deepfakes create. The unclear liability of AI systems also makes it harder to prosecute those who commit identity theft through deepfakes. Addressing these issues requires a full strategy that combines legal changes with better detection technologies. That effort will help protect digital identities and create a safer online space for the people of India.
To fix this, reforms can’t just be cosmetic. First, the law needs to formally recognise AI-generated outputs as a unique category of evidence. Second, we need stricter protocols for accepting such content in court, things like metadata analysis, expert review of the algorithms used, and reliability checks. Third, judges and lawyers should be trained in understanding how these technologies work, so they can question or challenge them appropriately. And fourth, the judiciary could consider adopting a more structured admissibility standard, something that combines elements of technical scrutiny with questions around bias, reproducibility, and ethical use.
We’re at a stage where ignoring these issues could have serious consequences. As AI-generated material becomes more common in investigations and litigation, we risk accepting flawed or misleading evidence simply because we don’t yet have the tools or the laws to evaluate it properly. The Bharatiya Sakshya Adhiniyam, 2023, is a step forward, but it must evolve. If we want justice to remain fair and fact-based, our legal system has to keep pace with the technology it increasingly relies on. Otherwise, we’re handing over too much trust to systems we don’t fully understand, and that’s not just a legal risk; it’s a democratic one. Indian evidence law has always balanced fairness with flexibility. But AI is not just another digital record; it’s a fundamentally different category of content. If the BSA fails to recognize that distinction, we may end up with either over-admission, where fake content gets in, or over-exclusion, where genuine evidence is lost. Neither serves justice. The law must evolve, not only to regulate machines, but to protect the humans they might misrepresent.
