Deepfakes and Indian Law: The Urgent Need for a Legal Framework in the Age of AI

Abstract

Deepfakes – hyper-realistic AI-generated images, videos, or audio that fabricate a person’s likeness – pose rapidly growing challenges to individual rights and public trust. By 2023, deepfakes numbered in the tens of thousands globally, and experts warn that nearly all online content may soon be synthetic. These technologies enable fraud, harassment, misinformation, and non-consensual pornography, threatening privacy, reputation, and even democratic discourse. India currently lacks any dedicated deepfake law. Existing provisions – e.g., Sections 66C/D/E of the IT Act (identity theft, impersonation, privacy) and IPC sections on defamation or obscenity – offer only fragmented remedies. This paper employs doctrinal legal research and comparative analysis of global models to show that India’s current framework is inadequate to address the scale and novelty of deepfake harms. Landmark decisions like K.S. Puttaswamy v. Union of India (privacy under Article 21) affirm that the state must protect informational privacy, yet deepfakes exploit regulatory gaps. We review recent policy developments (advisories and draft rules) and cases (e.g. the Delhi HC’s injunction in Anil Kapoor’s deepfake case) to illustrate the stakes. The paper concludes that India needs comprehensive reform: a specialized “deepfake” law (or sweeping amendments to the IT Act/BNS) to criminalize non-consensual deepfake creation, mandate consent and disclosure, and impose clear liabilities; strengthened content-removal obligations on platforms; technical safeguards (like watermarking and AI detection); and education and enforcement to protect citizens. Without urgent, rights-sensitive legislation, deepfakes will continue to undermine privacy, dignity, and the rule of law in India.

Keywords

  • Deepfakes
  • Artificial Intelligence (AI)
  • Indian Law
  • Privacy
  • Cybercrime
  • Regulation

Introduction

Deepfakes are synthetic audio-visual media created by advanced AI (“deep learning” algorithms and generative networks) that can make a person appear to say or do something they never did. For example, fake videos have circulated showing Indian business leaders N. R. Narayana Murthy and Ratan Tata endorsing dubious products, and a popular actress’s face has been morphed onto another body, purely for disinformation or sensationalism. Such realistic forgeries can gravely harm individuals (through defamation, sexual exploitation, or identity theft) and society (by spreading misinformation, inciting violence, or influencing elections). Indeed, one study found over 15,000 deepfake videos online by 2019, nearly doubling in less than a year, with projections that synthetic content could soon dominate the Internet.

In India, deepfakes pose acute legal and ethical challenges. Despite high-profile incidents (e.g. deepfake celebrity pornography and political hoaxes), no Indian law explicitly defines or regulates deepfakes. India’s approach has been largely piecemeal, relying on general statutes like the IT Act 2000 (identity theft under §66C, cheating by impersonation under §66D, privacy violations under §66E, and obscene content under §§67–67B), the criminal defamation provisions of the IPC, and newer laws like the Bharatiya Nyaya Sanhita (BNS) 2023 (which criminalizes non-consensual intimate imagery under §77). But experts agree these scattered laws are inadequate to catch all deepfake harms. For instance, while Section 66D could punish phishing via an AI-cloned voice, it was not designed for generative fakes. Section 66E bans voyeuristic image capture, but deepfake impersonation does not neatly fall under it. And defamation law may cover reputational damage, but litigating thousands of viral deepfakes is impractical. A recent analysis notes that India’s “legal toolkit remains piecemeal, reactive, and under-enforced” against emerging digital harms like deepfakes.

The government has begun to respond: in November 2023 the Ministry of Electronics & IT issued an advisory directing platforms to “exercise due diligence … to identify misinformation and deepfakes” and remove reported deepfakes within 36 hours. Failure to act, the advisory warned, could strip intermediaries of their safe-harbor protection under Section 79 of the IT Act. Parliament has also indicated plans to amend laws for deepfake prevention. At the same time, Indian courts have started invoking existing rights. In Anil Kapoor v. ABC & Ors. (Delhi HC, 2023) the court granted an injunction against using Kapoor’s face in fake pornographic deepfakes, recognizing an “unauthorized and illegal use” of his personality and privacy. Such decisions underscore that deepfakes implicate fundamental rights (privacy, dignity) recognized in Puttaswamy v. Union of India.

This paper analyses the legal gaps and reform needs created by deepfake technology in India. We first explain our research approach (Research Methodology) in examining statutes, case law, and policy. We then survey the literature and existing laws, both Indian and international, pertaining to deepfakes (Review of Literature), and set out the analytical steps we followed (Method). Next, we discuss specific legal issues: how current Indian laws relate to deepfake harms and where they fall short. Finally, we offer concrete suggestions for legal reform, drawing on international best practices and constitutional principles. Our aim is to chart a comprehensive, India-centric framework for deepfake regulation that balances innovation with individual rights.

Research Methodology

The present research employs a doctrinal and analytical legal methodology with a strong emphasis on statutory interpretation, judicial precedent, and comparative legal frameworks. Since the issue of deepfakes involves evolving technological aspects intersecting with constitutional, criminal, and data protection laws, a multi-pronged methodological approach has been adopted to analyse the existing Indian legal landscape and its limitations. This section outlines the process through which the study was conducted, the nature of sources examined, and the rationale for the research design.

1. Doctrinal Legal Research

Doctrinal research, also known as “library-based” research, forms the backbone of this study. This involved the collection, study, and analysis of primary legal materials, such as the Information Technology Act, 2000, Indian Penal Code (IPC), Bharatiya Nyaya Sanhita, 2023, and the Digital Personal Data Protection Act, 2023, among others. Each statute was examined to assess its applicability in the context of deepfake-related harms like identity theft, privacy invasion, defamation, sexual harassment, and cyber fraud. For example, Sections 66C and 66D of the IT Act were reviewed to analyze whether they cover impersonation by AI-generated content. Similarly, BNS Section 77 was studied to understand its relevance to non-consensual intimate deepfakes.

In addition, landmark and recent Indian judicial decisions were examined to understand how courts have interpreted fundamental rights in the digital space. A notable case is K.S. Puttaswamy v. Union of India, where the Supreme Court recognized privacy as a fundamental right under Article 21. This judgment laid the groundwork for discussions on how non-state actors, including those deploying AI technologies, can infringe privacy. Moreover, the Delhi High Court’s order in Anil Kapoor v. ABC & Ors. served as an important precedent for recognizing legal protection against deepfake misuse through injunctions and image rights.

2. Comparative Legal Analysis

Since deepfakes are a global phenomenon, this study also employed comparative research to analyze how jurisdictions such as the United States, European Union, South Korea, and Australia have responded to the problem. This helped in identifying best practices that can inform India’s legislative approach. For example, the U.S. Deepfakes Accountability Act (proposed in 2023) and the EU Artificial Intelligence Act (2023) offer specific mechanisms for criminalizing harmful deepfakes and mandating transparency through watermarking and content labelling. This comparative analysis helped develop the framework for reform suggestions applicable in the Indian context.

The study also incorporated transnational legal instruments and regulatory proposals such as UNESCO’s ethical guidelines on AI and INTERPOL’s cybercrime reports. These global perspectives provided valuable insights into enforcement challenges and technological innovations in dealing with synthetic media.

3. Secondary Sources and Scholarly Commentary

A critical part of this methodology involved the use of secondary sources, including peer-reviewed journal articles, legal blog posts, expert commentaries, reports from think tanks such as the Vivekananda International Foundation (VIF) and the Observer Research Foundation (ORF), as well as practice guides and articles published through Chambers and Partners (India) and by law firms such as Singhania & Partners. These sources provided empirical data, documented case studies, and scholarly interpretations of how Indian law currently responds to AI-generated content and where it lags behind.

Academic literature also informed the constitutional and human rights-based analysis of deepfakes. For instance, legal scholars have debated whether consent-based frameworks or criminal prohibitions offer the most effective model for regulating synthetic media. Insights from these discussions contributed to a balanced evaluation of the Indian legal position.

4. Policy Document and Government Advisory Review

The research also reviewed recent policy documents, press releases, and official advisories issued by the Ministry of Electronics and Information Technology (MeitY) and the Press Information Bureau (PIB). A notable inclusion was the November 2023 advisory by MeitY directing digital platforms to identify and remove deepfake content within 36 hours of receiving a complaint. Such documents were analyzed to understand the policy direction of the Indian government and to assess the enforceability of existing obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

5. Qualitative Analysis and Legal Reasoning

While the study did not include fieldwork or quantitative surveys, it applied qualitative legal reasoning to analyze the impact of deepfakes on constitutional rights like privacy, freedom of expression, and human dignity. Through a process of deductive reasoning, existing legal principles were interpreted to see how they can be extended to the domain of AI-based synthetic media. By doing so, the study offers a normative framework for proposed legal reforms.

Review of Literature

Global Context – Academic analyses emphasize the dual-edged nature of deepfakes. On one hand, deepfake technology can have benign uses (e.g. visual effects, accessibility tools). On the other, it enables serious abuses: voice-cloning scams, fabricated pornography, election interference, and reputational damage. Responsible-AI researchers note that deepfakes “pose significant risks” that should be mitigated through regulation. Verma (2024), in the Lex Scientia Law Review, underscores that deepfakes pose “threats to privacy rights” worldwide, and finds that “significant gaps remain” in protecting privacy because laws have not kept pace. Likewise, the European AI Act (2023) and U.S. proposals (e.g. the Deepfakes Accountability Act) represent cutting-edge legislative thinking, categorizing deepfakes by risk and mandating disclosures or takedowns.

Indian Literature – The consensus in India is that regulatory attention is overdue. A 2020 LSE blog cautioned that misuse of deepfakes could infringe individual privacy and urgently needs “rapid governmental intervention in the form of new legislative and regulatory frameworks”. Recent law firm analyses echo this urgency. Chambers India reports that “presently, there are no laws or regulations in India which target deepfaked content”, noting only piecemeal applicability of Sections 66D/E and related IT provisions. Singhania & Partners (2024) similarly conclude that India “lacks a specific law dealing with deepfakes”, with only fragmented safeguards in defamation, privacy, copyright and cybercrime statutes. They, and others, call for comprehensive legislation or amendments that explicitly address deepfakes.

Think-tanks and news outlets have documented incidents in India. For example, a VIF study (2025) catalogues cases of deepfake fraud (e.g. fake investment-advice videos) and notes that courts granted injunctions to protect personal image rights. The VIF report also emphasizes constitutional dimensions: it cites Article 21 and Puttaswamy, stressing the state’s duty to protect informational privacy from non-state actors. The Jurist blog (2025) argues India’s legal response is “piecemeal, reactive, and under-enforced”, urging a U.S.-style takedown regime. These sources align in identifying key themes: (1) Privacy and Dignity: deepfakes violate personal privacy and dignity, implicating Article 21; (2) Free Speech Tension: any restriction must navigate Article 19(1)(a), but India’s broad Article 19(2) exceptions (decency, defamation) permit regulation; (3) Existing Laws Are Scattered: analysts note that provisions in the IT Act, IPC, and the new DPDP Act cover only aspects of deepfake misuse; (4) Need for New Norms: virtually all commentators (legal journals, policy briefs, media) advocate for dedicated measures – ranging from defining deepfakes legally to mandating AI disclosure and platform accountability.

Comparative Insights – Internationally, the imperative of specific legislation is echoed. Studies of foreign law show that bills targeting AI-generated non-consensual pornography and fraud (e.g. the U.S. Take It Down Act of 2025 and Canadian legislative proposals addressing “deepfake porn”) have been introduced to give swift relief to victims. The EU’s AI Act (2023) imposes transparency obligations on deepfake content and subjects certain high-risk AI uses to strict oversight. Experts note that such regimes balance innovation with accountability by classifying deepfakes by risk and protecting legitimate expression. These global models illustrate that without clear legal definitions and duties, enforcement is haphazard – a lesson India’s scholars take to heart.

In sum, the literature paints a picture of deepfakes as a new frontier where both Indian and international law must evolve. Indian commentators highlight gaps in our patchwork approach and warn that harms (to privacy, identity, reputation, and democracy) are mounting. Our analysis proceeds by mapping these harms to India’s current law and identifying how a new legal framework could fill the void.

Method

To analyse India’s legal stance on deepfakes, we structured our approach as follows:

  • Categorization of Harms: We identified principal deepfake-related harms (privacy violations, reputational defamation, identity/fraud threats, and non-consensual sexual content). For each category, we examined which existing Indian laws could apply; a schematic encoding of this mapping appears after this list. For example, impersonation deepfakes implicate IT Act §§66C–66D; illicit deepfake pornography touches IPC §§292, 354C and IT Act §§67A/B; misinformation via deepfake videos could invoke criminal defamation (IPC §§499–500) or election law (e.g. the Representation of the People Act). We also considered constitutional rights (Articles 21 and 19) relevant to these harms.
  • Doctrinal Analysis: We conducted detailed legal analysis of the identified provisions. This involved studying statutory language and legislative history, and relevant case law interpretation. For example, we reviewed the Supreme Court’s holding in K.S. Puttaswamy v. UOI (2017) that privacy is intrinsic to Article 21, and recent Bombay High Court dicta suggesting unauthorized AI content may violate personality rights. We cited secondary sources (legal commentaries and blogs) to augment this statutory review and to capture recent case law developments.
  • Comparative Review: We compared India’s framework with foreign approaches. This included examining U.S. legislative proposals (e.g. the AI Fraud Act of 2024) and the EU’s AI Act rules. We also looked at other jurisdictions’ laws on deepfake pornography (South Korea’s “Deepfake Punishment Act”, etc.). This helped us identify best practices, such as mandatory takedown timelines and liability assignments, that Indian law does not yet contain.
  • Policy Developments: We tracked recent policy actions. Government advisories (e.g. the Nov. 2023 IT Ministry advisory) and draft regulations were reviewed to understand the official stance. For instance, the MeitY advisory (2023) instructs platforms to remove deepfakes within 36 hours of a report. We incorporated these into our analysis of current efforts.
  • Synthesis: Finally, we synthesized the findings into thematic insights (gaps, challenges) and formulated recommendations. Where possible, we used bullet-point lists (in the Suggestions section) to logically organize proposed reforms under categories (legislative, technological, institutional) drawn from the literature and comparative examples.
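
As a schematic illustration of the Categorization of Harms step, the sketch below encodes the harm-to-statute mapping described above as a simple data structure. It is written in Python purely for exposition; the category labels and function name are our own, and the provision lists mirror this paper’s analysis rather than any authoritative legal database.

    # Illustrative sketch only: encodes the harm-to-statute mapping from the
    # Categorization of Harms step. Category labels are ours; the provisions
    # mirror the paper's analysis, not an authoritative legal database.
    HARM_TO_PROVISIONS = {
        "impersonation / identity fraud": [
            "IT Act §66C (identity theft)",
            "IT Act §66D (cheating by impersonation)",
        ],
        "non-consensual sexual content": [
            "IPC §292 (obscenity)",
            "IPC §354C (voyeurism)",
            "IT Act §§67A/67B (sexually explicit content)",
        ],
        "reputational harm / misinformation": [
            "IPC §§499-500 (criminal defamation)",
            "Representation of the People Act (election contexts)",
        ],
        "privacy violation": [
            "IT Act §66E (privacy violation)",
            "Article 21 (informational privacy, per Puttaswamy)",
        ],
    }

    def applicable_provisions(harm: str) -> list[str]:
        """Return the provisions this study mapped to a given harm category."""
        return HARM_TO_PROVISIONS.get(harm, [])

    if __name__ == "__main__":
        for harm, provisions in HARM_TO_PROVISIONS.items():
            print(harm, "->", "; ".join(provisions))

Representing the mapping this way makes the gaps visible: categories whose provision lists rely only on general-purpose statutes are precisely the ones our analysis flags for reform.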

Suggestions

India requires a coordinated, multi-pronged strategy to address deepfakes. Based on our analysis, we suggest the following reforms:

  • Enact Specific Deepfake Legislation: India should consider a dedicated statute (a “Deepfake Prohibition Act” or a chapter in the IT Act) to directly regulate AI-generated fake media. Key elements could include: a clear legal definition of “deepfake” and related terms; criminalization of harmful non-consensual deepfakes (with penalties scaled to harm); and explicit liability for creators and distributors of malicious deepfakes. We recommend requiring creators to obtain prior consent from persons whose likeness is used, and to disclose when content has been AI-altered. For example, if a deepfake mixes one person’s face with another’s body, consent should be mandated from all individuals depicted. Mere “consent to create” should not excuse disseminating a deepfake in an unintended context. The law could also require AI content to carry a digital watermark or label indicating it is synthetic. Such provisions protect free expression and art (by carving out satirical/academic uses) while targeting malicious impersonation or abuse.
  • Amend Existing Laws: Pending (or in lieu of) a dedicated statute, India should update its current laws to explicitly cover deepfake-related offenses. For instance, the IT Act and BNS could be amended to add “deepfake-enabled” clauses. VIF-recommended amendments include adding an offense of “unauthorised creation or dissemination of deepfakes”. Higher penalties should be prescribed, commensurate with the violation of dignity or privacy. Similarly, BNS defamation provisions (§356) and sexual violence provisions (§77 on voyeurism) should be interpreted to include AI-manipulated media. In practice, judges in the Kapoor and Bachchan cases have already implied such applications of personality and copyright rights. Codifying this in statute would strengthen deterrence.
  • Intermediary Liability and Takedown: Platforms (social media, messaging apps) must bear more responsibility. Building on the IT Rules, we suggest legally mandated takedown timelines and stricter due-diligence duties. The government’s November 2023 advisory requires removal of reported deepfakes within 36 hours; this could be made statutory. Platforms should implement reliable reporting systems and automated deepfake-detection tools. Failure to comply should attract loss of Section 79 immunity and fines. We also echo calls for content-moderation standards: requiring platforms to label AI-generated content (as some proposed EU rules do) enhances transparency; a minimal sketch of such labelling and deadline logic follows this list. Importantly, any disclosure/takedown regime must include safeguards so legitimate speech (political satire, journalism) is not unduly censored.
  • Technological Measures: The government should fund R&D for deepfake detection and authentication. As the literature suggests, technical solutions are a key part of the defense. This includes supporting AI tools that scan media for manipulation fingerprints and promoting standardized digital watermarking of genuine content. Public-private partnerships (e.g., the IndiaAI Safe & Trusted AI mission) could encourage innovation in forensic analysis of synthetic media. The state might also create a repository of known deepfake examples to train detection systems. In parallel, robust cybersecurity practices (for example, securing deep-learning models) can help prevent misuse.
  • Enforcement Capacity and Awareness: Law enforcement, prosecutors, and judges need training in deepfake technologies. Police cybercrime units should be equipped with forensic AI tools and trained in digital evidence handling. Courts may need expert panels to assess deepfake authenticity (Puttaswamy itself emphasized the dangers of non-state digital threats). Public awareness campaigns are also essential: citizens must learn to critically evaluate media and know how to report deepfake abuse. Educational programs (for example, in schools and media-literacy workshops) should highlight the existence and risks of deepfakes, akin to global “media literacy” initiatives.
  • International Cooperation: Deepfake crimes are often transnational. India should work with other countries and international organizations to establish common standards and facilitate cross-border enforcement. One model is an international treaty defining obligations for AI platforms and consistent penalties for deepfake offenses. Another is collaboration with entities like Interpol and UNESCO on best practices. Since the Indian Constitution already permits broad restrictions on speech for morality and security, India can push global dialogues on balancing free expression with harm prevention.
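
To make the labelling and takedown recommendations above concrete, the sketch below shows, in Python, one possible shape for a synthetic-content provenance manifest and the 36-hour removal-deadline arithmetic contemplated by the MeitY advisory. The manifest fields and function names are hypothetical illustrations of what a disclosure rule might require, not an existing or mandated Indian standard.

    # Illustrative sketch only: a hypothetical synthetic-content provenance
    # manifest and the 36-hour takedown-deadline arithmetic. Field names and
    # the manifest format are assumptions, not a mandated standard.
    import hashlib
    import json
    from datetime import datetime, timedelta, timezone

    TAKEDOWN_WINDOW = timedelta(hours=36)  # window cited in the Nov. 2023 MeitY advisory

    def make_provenance_manifest(media_bytes: bytes, generator: str,
                                 consent_obtained: bool) -> str:
        """Build a machine-readable 'this content is synthetic' label."""
        manifest = {
            "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
            "synthetic": True,              # mandatory disclosure flag
            "generator": generator,         # tool that produced the media
            "consent_obtained": consent_obtained,
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps(manifest, indent=2)

    def takedown_deadline(complaint_received_utc: datetime) -> datetime:
        """Compute the removal deadline: 36 hours from the complaint."""
        return complaint_received_utc + TAKEDOWN_WINDOW

    if __name__ == "__main__":
        print(make_provenance_manifest(b"...media bytes...", "example-model", False))
        received = datetime(2023, 11, 20, 9, 0, tzinfo=timezone.utc)
        print("Remove by:", takedown_deadline(received).isoformat())

A real deployment would embed such a manifest in the media container or as an invisible watermark and sign it cryptographically; the point here is only that consent, disclosure, and removal deadlines can be expressed as verifiable data rather than left to ad hoc compliance.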

Each of these suggestions balances India’s constitutional freedoms with its obligations to protect privacy and public order. For example, Puttaswamy itself contemplated “reasonable restrictions” on privacy-invasive content by non-state actors. Lawmakers must ensure any new rules respect Article 19(1)(a) free speech by narrowly targeting malicious intent and providing safeguards for legitimate uses. By proactively defining deepfake offenses and allocating clear responsibility among creators, platforms, and users, India can shift the burden away from victims and toward a robust regulatory regime.

Conclusion

Deepfakes exemplify how swiftly artificial intelligence can outpace the development of law, leaving significant gaps in legal protection. In India, while the Constitution guarantees both freedom of expression under Article 19 and the right to life and personal liberty under Article 21—which includes the right to informational privacy—our legal system is still evolving to meet the challenges posed by synthetic media. Courts have begun recognizing the threat, as seen in the Delhi High Court’s injunction protecting Anil Kapoor’s image from defamatory deepfake pornography. However, such remedies remain piecemeal. Legal commentators rightly point out that no single comprehensive statute exists to address the wide-ranging harms caused by deepfakes. As a result, victims are forced to navigate a patchwork of civil and criminal laws, which is often inefficient, inconsistent, and inaccessible for timely redress.

India now stands at a critical juncture. Without clear and enforceable rules, deepfakes will continue to compromise personal privacy, deceive consumers, and erode trust in digital content and public institutions. The global approach—from the U.S. to the EU—shows the importance of acting early to regulate this powerful technology. Our analysis underscores the urgent need for targeted legislative reforms: defining and criminalizing harmful deepfakes, placing legal responsibilities on digital platforms, and strengthening technological and institutional safeguards. As the Supreme Court emphasized in Puttaswamy, the threats of the information age demand a “robust regime” for data protection. Extending that logic, defending citizens from non-consensual AI manipulations is no longer optional—it is a constitutional necessity. India must now craft a forward-looking legal framework that protects rights, promotes accountability, and ensures AI innovation serves society rather than undermines it.

    Revant Upadhyay

    Aligarh Muslim University