NARENDER KUMAR GOSWAMI VS UNION OF INDIA (2025)
Court: Supreme Court of India
Date of Decision: 16 May 2025
Case No: Writ Petition (Civil) No. 300 of 2025
Hon’ble Bench: Justice Surya Kant, Justice Nongmeikapam Kotiswar Singh
Petitioner: Narendra Kumar Goswami (Advocate, U.P. Bar Council)
Respondent: Union of India & Others
Statutes & Provisions Referred: Information Technology Act, 2000 [Sections 66F, 70B(5), 87(2)(zg), 88]; Bharatiya Nyaya Sanhita, 2023 [Sections 198, 319, 354]; Constitution of India [Articles 14, 19(1)(a), 21, 21A, 324, 142]; IT Rules, 2021 [Rule 3(1)(h)(vii), Rule 16]
Judicial Doctrines Involved: Constitutional Morality, Right to Privacy, Electoral Integrity, Judicial Oversight
FACTS OF THE CASE
Narendra Kumar Goswami, an advocate and member of the Uttar Pradesh Bar Council, filed a Public Interest Litigation (PIL) before the Supreme Court expressing concern over “the increasing prevalence of artificial intelligence (AI) in the production of ‘deepfake’ photos and videos”.[1] Deepfakes are designed to pass for genuine material and can be used to deceive people in a variety of situations; they are particularly potent during elections and in spreading false information that could endanger national security or damage individual reputations. Against this background, the petitioner asked the Court to regulate this growing menace. He submitted that although some laws and standards nominally cover deepfakes, they are ineffective in practice, and that the “IT Act of 2000”[2] does not adequately protect individuals against them. He pointed to the constitutional rights that deepfakes can infringe, including “Article 14”[3] (the right to equality), “Article 19”[4] (the right to freedom of speech and expression), and Article 21 (the right to life and personal liberty). He argued that deepfakes are not merely a technical problem but a threat to democracy, privacy, and justice, and urged the Supreme Court to frame rules and guidelines without delay to curb such misuse.
ISSUES RAISED
- Whether the absence of any specific legal framework against deepfakes violates Articles 14, 19(1)(a), and 21 of the Constitution.
- Whether the Information Technology Act, 2000 is adequate to deal with the risks posed by AI-generated synthetic media.
- Whether the Supreme Court may invoke “Article 142”[5] to put interim safeguards in place until Parliament legislates.
- Whether the Election Commission of India is obliged to actively regulate AI-generated content during elections.
CONTENTIONS
ARGUMENTS BY PETITIONER
- The issue of privacy and dignity: The manipulation of images and videos to create deepfakes compromises an individual’s personal privacy and safety. “Article 21”[6] of the Constitution protects the right to privacy of every citizen. In “K.S. Puttaswamy vs. Union of India”,[7] the Supreme Court held that “Article 21 guarantees the right to privacy to every citizen”, reasoning that privacy, personal liberty, identity, and dignity are closely intertwined. When a person’s likeness is fraudulently used in a sexual, defamatory, or misleading way, as deepfakes allow, the person loses control over their own image and voice, which strips away their autonomy and dignity. The harm is magnified when such content spreads virally online, blurring the line between reality and fiction. Deepfakes therefore damage more than reputation: they erode personal autonomy, distort identity and truth, and strike at the very idea of personhood that Article 21 is meant to protect, demanding strict legal standards and ethical conduct from media platforms in an increasingly digital world.
- Tampering with elections: AI can be used to mislead voters by spreading false information and fake news during elections. In “PUCL vs Union of India”,[8] the Supreme Court recognised the right to know as part of the freedom of speech and expression. Voters cannot make informed choices without access to accurate information, and this undermines democracy.
- Need for Regulation: India’s IT Act does not explicitly address deepfakes or AI-generated content, which makes it difficult to enforce rights against the wrongful use of synthetic media. Because the existing law is outdated and does not speak to such misuse, the courts will likely have to step in and clarify how rights and protections apply in these circumstances.
- Comparative Models: China’s Deep Synthesis Provisions require clear labelling and responsible use of AI-generated content, prohibit deepfakes made without consent, and mandate strong data protections. Global best practices, such as those of the Partnership on AI, similarly emphasise transparency, consent, and ethical use, in order to prevent unlawful or harmful applications and to protect people’s rights.
- Transformative Constitutionalism: New technologies such as deepfakes and the misuse of AI confront citizens with novel threats to their privacy, their ability to distinguish fact from fiction, and their reputations. The petitioner therefore asked the Court to intervene and provide meaningful protection, arguing that strong legal safeguards are needed precisely where existing laws have not kept pace with rapidly evolving digital threats.
ARGUMENTS BY RESPONDENTS
- Jurisdictional Concerns: The Delhi High Court was already examining deepfake-related issues in “Chaitanya Rohilla vs Union of India”,[9] where the petitioner had raised AI regulatory concerns and specifically sought action against platforms enabling the production of synthetic media. In that matter the High Court had constituted a committee to study global practices and recommend safeguards. The problems of fake news and AI regulation were therefore already under judicial consideration, making a parallel petition before the Supreme Court unnecessary.
- Premature Intervention: The Union Government submitted that it had anticipated the concerns regarding deepfakes and the harms of AI and was in the process of formulating policy to address them. It urged the Court to allow this process to conclude, contending that premature judicial intervention would disturb the policymaking exercise and hinder the framing of proper regulations.
- Existing Frameworks: The respondents argued that the “IT Rules 2021”[10] and “CERT-In protocols”[11] already safeguard against digital harms through platforms’ due diligence obligations, grievance redressal mechanisms, and rapid response to dangerous content. In their view, these frameworks adequately cover misinformation, privacy violations, and deepfakes, leaving no need for further judicial intervention at this stage.
RATIONALE (COURT REASONING)
The petitioner in Narender Kumar Goswami vs. Union of India (2025), an advocate enrolled with the Bar Council, asked the Supreme Court to put in place urgent safeguards against AI-generated deepfake material, particularly during elections. He contended that the absence of regulation violated Article 21 (the right to life and personal liberty, including privacy), Article 14 (the right to equality), and Article 19(1)(a) (the right to freedom of speech and expression). He relied on well-known decisions such as Navtej Singh Johar, PUCL, and K.S. Puttaswamy to show how deepfakes can mislead voters, damage reputations, and spread false information. To meet these threats, he proposed five principal measures: mandatory watermarking of AI-generated content so that synthetic media can be readily identified; a round-the-clock mechanism for the rapid removal of harmful deepfakes; a National AI Regulation Authority to oversee such content; an amendment to the Election Commission’s Model Code of Conduct to ban undisclosed synthetic media; and a national education programme, the Deepfake Literacy Mission, under the Samagra Shiksha Abhiyan.
JUDGEMENT
The Supreme Court declined to entertain the petition but granted the petitioner liberty to pursue the matter before the Delhi High Court. The Court acknowledged the seriousness of the concerns raised, particularly regarding deepfakes, and requested the Delhi High Court to give the petitioner’s submissions due consideration. It indicated that the High Court should examine the issues properly, weigh the suggestions presented, and take such steps as would improve the applicable rules and protective mechanisms. The decision is significant in that it keeps the matter alive for careful consideration rather than rejecting it outright.
DEFECTS OF LAW
The case exposes the inability of the current statutory framework, namely the Information Technology Act, 2000 and the IT Rules, 2021, to deal with AI-generated fake content such as deepfakes. Neither law requires creators of synthetic media to identify or label it as artificial, for instance through a visible watermark indicating that the content is a deepfake. There is also no mechanism for the rapid takedown of deepfakes or for countering electoral interference during elections, which could sway public perception. India has no dedicated regulatory body for AI-related issues, nor any programme to train law enforcement agencies or educate the electorate on identifying and responding to deepfakes. To illustrate that India is falling behind, the petitioner pointed to more developed regimes such as China’s Deep Synthesis Provisions and other international standards.
INFERENCE
The Supreme Court did not take up the petition itself because a related matter, Chaitanya Rohilla vs. Union of India, was already pending before the Delhi High Court. The Court did not, however, reject the case outright: it permitted the petitioner to place his concerns and suggestions before the High Court. This reflects judicial recognition that deepfakes pose a serious problem, especially for the values of the Constitution and for democracy as a whole. While the Supreme Court said little on the merits, it signalled to the parties and the High Court that the issues deserve careful consideration. There is growing awareness of the dangers of AI, and stronger rules and legislation in this area may well follow in the future.
AUTHOR
Name: Anish
College: PUSSGRC, Hoshiarpur (Panjab University)
Semester & Year: 6th Semester & 3rd Year
REFERENCE
- WEBSITES
https://indiankanoon.org/doc/192303696
https://www.casemine.com/judgement/in/682c3437cb7d8775a7b844c0
- ACTS USED IN THIS CASE
Information Technology Act, 2000
https://www.indiacode.nic.in/bitstream/123456789/13116/1/it_act_2000_updated.pdf
Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
CERT-In protocols
https://www.cert-in.org.in/s2cMainServlet?pageid=GUIDLNVIEW01
- ARTICLES USED IN THIS CASE (INDIA CONSTITUTION)
Article 14 (Right to Equality)
https://indiankanoon.org/doc/367586
Article 19 (Right to Freedom of Speech and Expression)
https://indiankanoon.org/doc/1218090
Article 21 (Protection of Life and Personal Liberty)
https://indiankanoon.org/doc/1199182
Article 142 (Enforcement of Supreme Court Orders and Decrees)
- CASE LAWS USED IN THIS CASE
K.S. Puttaswamy vs. Union of India (2017) 10 SCC 1
https://www.manupatracademy.com/legalpost/manu-sc-1044-2017
PUCL vs Union of India AIR 1997 SC 568
Chaitanya Rohilla vs Union of India W.P. (C) 677/2021
https://indiankanoon.org/doc/43358219
[1] Zee News, <https://zeenews.india.com/people/babydoll-archi-was-a-deepfake-created-using-real-woman-s-morphed-pics-by-jilted-ex-lover-report-2936053.html> (last visited 20 July 2025)
[2] Information Technology Act, 2000, No. 21, Acts of Parliament, 2000 (India)
[3] INDIA CONST. art. 14.
[4] INDIA CONST. art. 19.
[5] INDIA CONST. art. 142.
[6] INDIA CONST. art. 21.
[7] K.S. Puttaswamy vs. Union of India (2017) 10 SCC 1
[8] PUCL vs Union of India AIR 1997 SC 568
[9] Chaitanya Rohilla vs. Union of India W.P. (C) 677/2021
[10] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (India)
[11] CERT-In, <https://www.cert-in.org.in/s2cMainServlet?pageid=GUIDLNVIEW01> (last visited 23 July 2025)