Abstract
Deepfake technology, powered by artificial intelligence, is rapidly changing how we create images, videos, and audio. While it holds possibilities for creativity and innovation, its malicious use, particularly in the form of non-consensual sexual imagery, has raised serious concerns for privacy, dignity, and autonomy in India. Current legal frameworks in India, e.g., the Information Technology Act, 2000, the Bharatiya Nyaya Sanhita, and the POCSO Act, provide some protection for victims of deepfakes, but they do not comprehensively cover this area, nor were they designed to address the challenges posed by AI-generated content. This paper analyses whether India’s laws effectively protect victims of deepfakes, explores the constitutional clash between dignity (Article 21) and free speech (Article 19), summarizes international developments, and examines the “Right to Be Forgotten” as a potential remedy. The paper concludes with recommendations for balanced reform encompassing legislation, platform accountability, and judicial creativity.
Keywords – Deepfakes, Consent, Privacy, Free Speech, Information Technology Act, 2000, Right to Be Forgotten
Introduction
Throughout history, technology has pushed the boundaries of law, and while many innovations have challenged our understanding of truth and dignity, few have upended these conceptions as much as deepfakes. A deepfake is a wholly artificial video, image, or audio clip created using artificial intelligence to make it seem as if an individual said or did something which, in reality, they never did. While many deepfakes are playful parodies, others are profoundly harmful, above all when they are sexual in nature and made without consent. In India, as in many other jurisdictions, a growing trend of non-consensual deepfake pornography has emerged, primarily targeting women, celebrities, and minors. The harm is both reputational and a grave violation of privacy and dignity. Compounding the challenge, the law in its current state is not equipped to address this properly.
This paper examines whether India’s existing cyber laws (the Information Technology Act, 2000, and the relevant provisions of the Bharatiya Nyaya Sanhita and the POCSO Act) are sufficient to address deepfakes, and explores the delicate balance between protecting privacy and dignity on the one hand and free speech and creative expression on the other. Finally, it proposes that a rights-based approach, building on India’s developing jurisprudence of the Right to Be Forgotten, would better protect individual dignity in the face of deepfakes.
Research Methodology
This paper relies on a doctrinal research method, drawing from statutes (IT Act, BNS, POCSO), constitutional provisions, judicial decisions, and secondary literature such as academic articles, reports, and global legislative developments.
Review of Literature
- Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, by Citron & Chesney – the canonical piece. It maps deepfakes’ individual harms (privacy, reputation, sexual exploitation) and systemic harms (mis/disinformation, national security), and surveys legal responses (tort, criminal law, platform rules). However, it predates the recent generative-AI boom and does not cover India.
- Deeply Dehumanizing, Degrading, and Violating: Deepfake Pornography and the Path to Legal Recourse by Emily Pascale argues that the U.S. Congress can criminalise non-consensual deepfake pornography without violating the First Amendment by treating it as unprotected speech. It is useful for the Article 19(1)(a) versus Article 21 balancing exercise, as it shows one way to square speech with dignity. Its limitation is that it rests on U.S. doctrine; its logic must be translated into Indian constitutional tests (proportionality).
The Legal Gaps in India
India’s regulations concerning online safety and digital crimes remain only partially equipped to respond to deepfake attacks. At this stage, its legislative framework captures only some of the broader harms, and none of the distinctive risks posed by AI-generated material. Let us dissect it further –
- The Information Technology Act, 2000, is India’s primary legislation concerning cybercrimes. Certain sections are applicable for prosecuting deepfakes here, such as –
- Section 66E – Penalizes the capturing or sharing of private images without consent.
- Section 67 – Deals with sharing obscene content online.
- Section 67A – Addresses sexually explicit content.
The larger issue is that deepfakes complicate this because, in many instances, the subject in the video never performed the sexual act; the AI has simply created the images. Courts are therefore left with the question of whether a fake sexual video can still be “obscene” under the law if no act actually happened. This creates a grey area and leaves victims inadequately protected.
- Bharatiya Nyaya Sanhita (BNS), 2023, some provisions of the BNS can be applied to deepfake cases –
- Section 77 (Voyeurism) – Punishes secretly capturing or sharing intimate images or videos of someone without permission.
- Section 78 (Stalking) – Covers following, monitoring, or repeatedly contacting another person, and can extend to online stalking and harassment carried out through misinformation and false content.
- Section 356 (Defamation) – Applies where a deepfake damages a person’s reputation through an untrue and harmful portrayal.
The limitation: These provisions criminalize only specific individual harms like harassment, stalking, or defamation; they do not address the distinctive features of AI-generated content itself (whether created by an individual or an enterprise). For example, in a sexual deepfake there is no actual sexual act, making “obscenity” very difficult to prove under the law. A political deepfake that conveys untrue information can mislead public opinion, yet it does not necessarily fit the grounds of voyeurism, stalking, or defamation.
- POCSO Act, 2012 – The Protection of Children from Sexual Offences (POCSO) Act criminalizes child sexual abuse material, which is vital given how serious an issue child safety online has become. The problem arises when AI creates fake sexual images of children who never existed, or of real children who never posed in that manner. Current legislation struggles to categorize and prosecute this type of AI-generated child abuse material.
Why These Laws Fall Short
Right now, India’s legal system treats deepfake cases like regular cybercrimes – voyeurism, stalking, or obscenity. But these laws only target the symptoms, not the root cause: the deliberate use of artificial intelligence to manipulate reality. This piecemeal approach leaves major loopholes –
- Victims of non-sexual deepfakes (like fake speeches or false news videos) have little protection.
- Law enforcement struggles with what evidence is valid when the “content” itself is fabricated.
- The speed at which deepfakes spread online is much faster than the slow legal process.
In short, India’s current laws were written before AI deepfakes became a reality. They can only partially apply, which means victims often don’t get proper remedies, and offenders exploit the gaps.
The Constitutional Dilemma
The issue of deepfakes in India is not just about technology and criminal law; it also raises serious constitutional questions.
- Article 21 – Right to Privacy and Dignity
The Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017) recognized privacy as a fundamental right under Article 21. This includes the right to protect one’s image, reputation, and dignity. Non-consensual sexual deepfakes clearly violate these rights because they strip people of control over how their bodies and identities are used. For victims, the harm is not just reputational but deeply personal and psychological.
- Article 19(1)(a) – Freedom of Speech and Expression
At the same time, the Constitution guarantees freedom of speech. Deepfake technology is not always harmful – sometimes it is used for parody, satire, art, film, or education. For instance, a comedian using a harmless deepfake for political satire or a filmmaker recreating historical figures with consent-based deepfake tools could claim protection under free speech.
- The Core Challenge
The real difficulty is finding the balance. If the law is too harsh, it might criminalize creativity and free expression. But if the law is too lenient, it leaves victims unprotected against sexual exploitation, political misinformation, or reputational harm. For example, should a political parody deepfake be treated the same way as a non-consensual sexual deepfake? And where should the line be drawn between artistic freedom and malicious manipulation?
Global Responses
South Korea – Criminalized deepfake pornography, with penalties up to 5 years.
European Union (AI Act, 2024) – Mandates labeling of AI-generated content.
United States (Proposed NO FAKES Act) – Protects individuals from unauthorized digital replicas.
Where India stands (and what to learn) – India has no deepfake-specific statute yet. The government has used advisories under the IT Rules (2021) to push platforms to act faster on deepfakes, and CERT-In has issued technical guidance. Helpful, but these are administrative measures, not a dedicated legal framework.
Takeaways for India
From South Korea – Create deepfake-specific offences for sexual imagery (including creation, distribution, and knowing possession), with aggravated penalties where victims are minors, while building in proportionality.
From the EU – Add mandatory disclosure/labeling for synthetic media so people can spot manipulated content, and pair this with fast takedown for non-consensual sexual material.
From the U.S. – Consider a likeness/voice right at the national level (India-specific version) to cover unauthorized digital replicas, and avoid a confusing state-by-state patchwork.
Other jurisdictions show two workable paths –
- criminal law for sexual deepfakes (Korea), and
- transparency with platform duties (EU).
India can blend these: criminalize non-consensual sexual deepfakes, require labels for synthetic media, and give victims a clear takedown-and-remedies pathway, so that dignity is protected without chilling legitimate satire or art.
The Right to be Forgotten in India
Constitutional Basis – The RTBF in India flows primarily from the Supreme Court’s recognition of privacy as a fundamental right in Justice K.S. Puttaswamy v. Union of India (2017). Privacy was held to encompass control over one’s personal information, autonomy, and dignity in the digital age.
Judicial Recognition – While no comprehensive RTBF legislation exists in India, several High Courts have recognized its contours –
- Delhi High Court (2016–2017): Directed removal of an acquittal judgment from online platforms to protect an individual from perpetual reputational harm.
- Orissa High Court (2020): Allowed a woman to seek removal of her intimate images from the internet.
- Karnataka High Court (2017): Recognized RTBF as integral to privacy, permitting erasure of personal information to prevent lifelong stigma.
These precedents show a judicial trend toward balancing informational privacy and reputation against freedom of expression and public interest.
Right to be Forgotten and Deepfakes
Why It Matters – Deepfakes weaponize personal data, often images and voices, to create false yet realistic digital replicas. These can irreparably damage a person’s dignity, reputation, and safety. Unlike older forms of defamation or obscenity, deepfakes linger indefinitely online, making the RTBF particularly relevant.
How RTBF Applies – Extending the RTBF to deepfakes would mean –
- Victims could demand removal of manipulated videos or images from search engines, social media, and hosting platforms.
- Individuals regain control over their digital identity and narrative.
- Even if a deepfake spreads, the RTBF allows some mitigation by ensuring platforms don’t continue circulating it.
Tension with Free Speech – Critics might argue that blanket takedowns risk overreach and censorship, especially if RTBF is invoked to suppress criticism or true information. Courts will need to carefully weigh public interest (e.g., information about public figures) against the individual’s right to dignity and privacy.
The RTBF is India’s closest existing legal tool to counter the harms of deepfakes. Its expansion would not just address digital privacy, but also protect individuals from identity theft, reputational destruction, and non-consensual sexual exploitation. However, its implementation must balance personal dignity with societal needs for transparency and free expression.
Suggestions
- New Legislation – Introduce a specific law criminalizing non-consensual deepfake pornography, modeled on South Korea. This would fill the gap in Indian law, which currently relies on scattered BNS and IT Act provisions.
- Platform Accountability – Mandate social media and hosting platforms to remove flagged deepfakes within 24–48 hours. Require them to use AI-based detection tools while ensuring human oversight to prevent over-blocking.
- Balancing Articles 19 & 21 – Adopt a harm-based test: only criminalize non-consensual and harmful deepfakes (e.g., sexual, defamatory, fraudulent). Protect satire, parody, and art under Article 19(1)(a), but allow restrictions where dignity and privacy (Article 21) are at stake.
- Judicial Expansion of RTBF – Courts can extend the Right to Be Forgotten to cover deepfakes, enabling victims to seek de-indexing and takedown orders, giving them greater control over digital identity.
- Public Awareness – Launch digital literacy campaigns to educate citizens about the risks of deepfakes, available remedies, and the importance of consent in digital content sharing.
Conclusion
Deepfakes are more than sophisticated forms of digital deception; they represent a new kind of digital abuse that threatens the very core of privacy, dignity, and self-determination. Victims are exposed to reputational harm, harassment, and exploitation in ways that existing laws are unable to address. India’s current patchwork of laws governing cyberspace – the BNS, the IT Act, and the intermediary guidelines – provides fragmented and incomplete protection, and lacks the breadth, scale, and sophistication needed to confront AI-enabled obscene imagery and deepfakes.
What we need are substantive legal-ethical reforms: the criminalization of non-consensual deepfakes, better accountability for platforms and intermediaries, judicial recognition of the Right to Be Forgotten in the digital sphere, and an ongoing campaign to drive public awareness. Without these, the law as a regulatory response will always lag behind, chasing technological misuse in a haphazard and piecemeal way. The challenge of deepfakes thus presents India not simply with a problem of governance; rather, it stands as an opportunity to reaffirm our constitutional commitments to dignity, equality, and self-determination in the internet age.
