1. ABSTRACT
The convergence of Artificial Intelligence (AI) and machine learning, particularly Generative Adversarial Networks (GANs), has ushered in a new and complex era of digital threats, with deepfakes emerging as a particularly potent and legally challenging weapon.
Deepfakes are fabricated media created by mixing, superimposing, replacing, and merging photos, videos, or audio files, with the ability to create realistic digital impersonations and manipulate reality. The rise of deepfakes and synthetic media raises concerns about audio and video records, which are losing probative value as reputations suffer, propaganda spreads, and privacy is violated. The aim of this paper is to shed light on this emerging threat, which is the result of ongoing technological advancements in the field of artificial intelligence. We discuss the technology underlying deepfakes and the various ways they can be used to harm individuals and society as a whole. The paper addresses the technological principles of deepfake production, such as Generative Adversarial Networks (GANs), and their use in a wide range of cybercrimes. Furthermore, it discusses the regulatory efforts taken to deal with deepfakes and the concerns they raise, as well as ideas for appropriate legal recourse.
2. KEYWORDS
Deepfake, Cybercrime, AI, Cybersecurity, Data Privacy, Generative Adversarial Networks (GANs).
3. INTRODUCTION
Deepfake is an artificial intelligence-based technique that employs machine learning algorithms, specifically generative adversarial networks (GANs), to create synthetic media such as photos, videos, and sounds.[1] In 2019, Silversparro Technologies, a tech start-up engaged in machine learning and deep learning, generated a deepfake GIF of Narendra Modi, the Prime Minister of India, on an experimental basis. Abhinav Kumar Gupta, founder and CEO of Silversparro Technologies, said that "[T]he implications of deepfakes are worrying" and claimed that a dystopian future awaited Indian politics.[2]
Deepfake technology aims to create highly realistic synthetic media that mimics real people while manipulating some aspects of the content. It is built on two techniques: deep learning and generative adversarial networks.[3]
4. RESEARCH METHODOLOGY
This research study adopts a thorough, multidisciplinary methodology that combines legal analysis with technical and policy-oriented perspectives. The study is based on a doctrinal legal research strategy that includes a critical evaluation of existing laws, legislation, case law, and regulations/remedies pertaining to deepfakes, cybercrime, and data privacy. This doctrinal research is critical for identifying gaps and limits in current legal frameworks when applied to the distinct features of deepfake technology.
The study also includes a comparative legal analysis, which looks at how different jurisdictions, both national and international (for example, the United States, the European Union, and a few Asian countries), are addressing the deepfake issue through legislative and judicial measures. This comparative approach focuses on best practices and similar difficulties in legal and regulatory responses.
A large section of the study is dedicated to technical analysis, which examines the mechanics of deepfake production, with a particular emphasis on Generative Adversarial Networks (GANs). This technical understanding is critical for developing legal and policy suggestions, ensuring their relevance and effectiveness.
Finally, the paper uses case study analysis to assess the real-world impact of deepfakes. The study examines documented incidents of deepfake-enabled cybercrime to provide tangible evidence of the threat's severity, as well as its financial and reputational effects. This comprehensive process ensures that the findings and suggestions are solidly grounded in legal precedent, technological reality, and practical experience.
5. REVIEW OF LITERATURE
The scholarly discourse on deepfake technology is a vibrant topic that combines computer science, law, and sociology. A thorough evaluation of the existing literature is required to lay the groundwork for this research and identify the important gaps that our analysis seeks to remedy. This review is divided into two sections: an exploration of the technological underpinnings of deepfakes and a critical assessment of the legal and regulatory response, which includes international and national perspectives.
5.1. Technological Underpinnings and Detection
5.1.1 Creation of deepfakes
Deep learning is an area of machine learning that processes and analyzes vast volumes of data using artificial neural networks, which are algorithms inspired by the structure and function of the brain. It has been used for a variety of applications, including computer vision, natural language processing, speech recognition, and robotics.
Generative adversarial networks (GANs) are a type of deep learning architecture in which two neural networks, a generator and a discriminator, are trained using a large dataset of real pictures, videos, or audio. The generator network generates synthetic data, such as a synthetic image, that closely resembles the real data in the training set. The discriminator network then examines the authenticity of the synthetic data and gives the generator feedback on how to improve its output. This procedure is repeated many times, with the generator and discriminator learning from one another, until the generator creates synthetic data that is extremely lifelike and difficult to distinguish from real data.
There are three types of deepfake videos that have been mentioned herein[4]:
- Face swapping is the process of superimposing synthesized faces from the source onto the face of the target while keeping the target’s facial expression.
- Head puppetry is the process of synthesizing video of the target person's entire head and shoulders using the source person's head, to make it appear that the target is behaving like the source.
- Lip syncing is the process of creating a falsified video by changing only the lip region, so that the target individual appears to be saying something he or she did not originally say.
However, a recurring theme in this research is the ongoing "AI arms race," where the sophistication of deepfake generation consistently outpaces the development of detection methods. Research from institutions like the National Institute of Standards and Technology (NIST) and various academic papers confirms that while new detection algorithms are developed, they often become obsolete as deepfake creators employ more advanced techniques to evade detection. This body of literature confirms the core premise of this paper: a purely technological solution is insufficient to contain the deepfake threat.
5.2 Legal and Regulatory Responses: A Critical Evaluation
This section critically analyzes how different legal systems are attempting to regulate deepfakes, beginning with the international scenario before turning to the position under existing Indian law.
5.2.1 INTERNATIONAL SCENARIO
Several countries have introduced legislation specifically targeting deepfakes and preventing their use for malicious purposes.
In the United States, in December 2018, Senator Ben Sasse introduced the Malicious Deep Fake Prohibition Act, 2018,[5] which aims to make it a criminal offence to misuse deepfakes to defraud, extort, harass, or harm the reputation of anyone. The Deepfakes Accountability Act, 2019, introduced by US Congresswoman Yvette Clarke, seeks to combat manipulated media and the spread of misinformation; the bill creates guidelines on the use of this technology and imposes penalties for infractions.
In January 2024, US lawmakers introduced the 'No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act' (No AI FRAUD Act), a framework to safeguard individuals from AI-generated fakes. The bill prohibits the non-consensual 'digital portrayal' of any individual, with particular emphasis on the use of a person's likeness and voice to construct deceptive digital content.[6] In the UK, the Online Safety Act was passed in 2023, making it unlawful to share digitally altered sexual photos or videos online. Following a widely reported incident involving sexually explicit deepfakes of the popular singer Taylor Swift, US senators also introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act in January 2024, which provides compensation for victims of AI-generated pornography and deepfakes.
Given the prevalence of deepfakes and the associated copyright infringement, proper regulation is extremely important. Technological measures can be adopted to address this technological challenge, one of which is the 'Digital Rights Management' (DRM) mechanism. DRM may not directly prohibit deepfakes, but it can certainly limit their proliferation. In the United States, legal protection for DRM was introduced by the Digital Millennium Copyright Act of 1998, enacted to meet the standards for copyright protection established by the WIPO Internet Treaties, namely the WIPO Copyright Treaty, 1996 and the WIPO Performances and Phonograms Treaty, 1996.
Bletchley Declaration on AI Safety: In November 2023, an AI safety summit was held at Bletchley Park, United Kingdom, focusing on the analysis and mitigation of AI risks through international action. The summit was joined by twenty-eight countries, including India, the US, Germany, China, Canada, Australia, France and Japan, together with the European Union, with the paramount aim of assessing the opportunities and identifying the threats posed by the increasing use of AI.
This Summit had some key takeaways:[7]
- AI has transformative potential that coincides with significant risks;
- There is a pressing need to guarantee the safety of frontier AI;
- International cooperation is essential, and can only be achieved through an open global dialogue on AI.
Beyond legislation, many governments have implemented a range of methods to regulate and manage deepfake technology. In addition to the technological and legislative steps discussed above, educational efforts such as public awareness campaigns should be undertaken to raise awareness of the prevalence of deepfakes.
5.2.2 LEGAL SANCTIONS FOR DEEP FAKE AS PER EXISTING LAWS IN INDIA
Deepfake technology has been used to abuse film stars and other well-known individuals, whose videos are transformed, superimposed, and even published to pornographic websites. Such acts cannot be justified or brought within the scope of fair use; rather, they should be classified as obscenity. This position is reinforced by Ranjit D. Udeshi v. State of Maharashtra[8], where the Supreme Court held that an act would be regarded as obscene if it ends up corrupting the moral fabric of the person exposed to it.
In India, there is no formal legal framework specifically addressing deepfakes; however, certain provisions of the Copyright Act, 1957, the Penal Code, 1860, and the Information Technology Act, 2000 can be invoked against them.
Remedies under the Copyright Act, 1957 : The Berne Convention, which deals with the protection of works and the rights of their authors,[9] provides for certain moral rights requirements, and in compliance with it, section 57 of the Indian Copyright Act deals with the 'right to paternity and integrity'. Because deepfakes amount to mutilation, distortion, and modification of a person's work, a person who uses such technology can face civil and criminal liability under sections 55[10] and 63 of the Copyright Act, which provide damages, imprisonment, fines, and other injunctive reliefs against infringers. These provisions may serve to deter harmful deepfakes; however, they do not extend to deepfakes made for fair use or lawful purposes.
Remedies under the Penal Code, 1860 : Deepfakes can be dealt with under India's defamation laws. According to Section 499 of the Penal Code, 1860, publishing any material against an individual in order to harm his or her reputation constitutes defamation and is punishable. The creation of deepfake pornography may therefore constitute defamation where it is developed for the purpose of exacting revenge on or defaming someone. Section 292 of the IPC also prohibits the sale, distribution, and similar dealing in 'obscene materials', but its application is superseded by the IT Act, which is explored in detail below.
Remedies under the Information Technology Act, 2000 : Section 67 of the Information Technology Act makes it a criminal offence to publish or transmit obscene material electronically. Perpetrators of deepfake pornography face imprisonment of up to three years (up to five years on a subsequent conviction) along with a fine. In addition, section 79 of the IT Act, which addresses intermediary liability, can be invoked. Following Myspace Inc v. Super Cassettes Industries Ltd.[11], intermediary liability is now imposed for copyright infringement as well. The court stated that, "In case of copyright infringement, intermediaries have a responsibility to remove infringing content when notified by private parties, even without a court order". This resulted in a harmonious construction of the provisions of the Copyright Act and the Information Technology Act.
Remedies for Breach of Privacy : In K.S. Puttaswamy v. Union of India[12], the right to privacy was recognized as a fundamental right under Article 21 of the Constitution of India. 'Informational privacy' is an important facet of this right, as it allows an individual to restrict access to personal information and prohibit its distribution. The Personal Data Protection Bill, 2018, which followed the Puttaswamy ruling, safeguards personal data such as images and videos, and deepfake producers who use such personal data without consent would be liable for a breach of personal data, since confidentiality is compromised.[13]
6. METHOD
A. The Technical Mechanism of Deepfake Generation
Deepfakes are essentially a product of Generative Adversarial Networks (GANs). GANs are composed of two competing neural networks:
- The Generator Network's objective is to generate new, synthetic data. In the context of deepfakes, this network creates a fake image or video frame, such as mapping one person's face onto another's body.
- The Discriminator Network analyzes generated data to assess its authenticity. It is trained using a dataset of authentic content.
This process is a game of cat and mouse. The generator adapts to the discriminator’s feedback, iteratively improving its output until it can reliably mislead the discriminator. The end product is a highly realistic deepfake capable of deceiving both human viewers and automated systems. The data needed to train these models is frequently available on the internet, sourced from social media, public videos, and corporate websites.
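The adversarial loop described above can be sketched in miniature. The following is a toy illustration only, not a real image GAN: the "generator" here learns a single number (the mean of a distribution) rather than pixels, and the "discriminator" simply measures how statistically close synthetic samples are to real ones. All class names, step sizes, and values are illustrative assumptions, not drawn from any actual deepfake system.

```python
import random
import statistics

random.seed(0)
REAL_MEAN = 4.0  # stand-in for the distribution of genuine media

def sample_real(n):
    """A batch of 'real' data (here, just numbers from a fixed Gaussian)."""
    return [random.gauss(REAL_MEAN, 1.0) for _ in range(n)]

class Generator:
    """Produces synthetic samples from a single learnable parameter."""
    def __init__(self):
        self.mu = 0.0  # starts far from the real distribution

    def sample(self, n):
        return [random.gauss(self.mu, 1.0) for _ in range(n)]

def discriminator_score(fake_batch, real_batch):
    # Higher (closer to 0) when the fake batch is statistically closer
    # to the real batch, i.e. harder to tell apart.
    return -abs(statistics.mean(fake_batch) - statistics.mean(real_batch))

gen = Generator()
for step in range(300):
    real = sample_real(100)
    baseline = discriminator_score(gen.sample(100), real)
    # The generator proposes a small change and keeps it only if the
    # change fools the discriminator better -- the iterative feedback
    # loop described in the text.
    old_mu = gen.mu
    gen.mu = old_mu + random.choice([-0.2, 0.2])
    if discriminator_score(gen.sample(100), real) < baseline:
        gen.mu = old_mu  # revert: the change made the fake easier to spot

print(f"generator mean after training: {gen.mu:.1f} (real mean: {REAL_MEAN})")
```

After training, the generator's parameter converges near the real data's mean, mirroring how a real GAN's output becomes statistically indistinguishable from authentic media. A production deepfake replaces the single parameter with millions of neural-network weights and the batch-mean comparison with a learned classifier, but the cat-and-mouse dynamic is the same.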
B. Deepfake Applications in Cybercrime: A Legal Analysis
The legal ramifications of deepfakes are profound and extend to multiple domains of criminal and civil law.
- Financial Fraud and Vishing (Voice Phishing):
One of the most immediate and financially devastating deepfake threats is their use in fraud. This typically involves a vishing (voice phishing) or deepfake video call where an attacker impersonates a senior executive to trick an employee into initiating an unauthorized financial transfer.
The fundamental legal problem here is assigning culpability and proving causation. While a fraudulent transfer is a clear offence, demonstrating that the deepfake was the direct and exclusive cause of the employee's actions can be difficult. Existing fraud statutes, while applicable, may not fully capture these technical methods of deception.
- Identity Theft and Impersonation:
Deepfakes are a new and sophisticated kind of identity theft. Unlike traditional identity theft, which frequently entails obtaining personal information such as a social security number, a deepfake targets the very essence of a person's digital identity: their likeness and voice. A video of Virat Kohli endorsing a betting app that promises big earnings recently went viral on social media platforms. It was eventually determined that the video was a deepfake generated by cybercriminals.
Most identity theft statutes, such as the Identity Theft and Assumption Deterrence Act in the United States, are concerned with the use of personal identifiers to commit a crime. A deepfake, on the other hand, can be used for other malicious purposes, such as corporate espionage or blackmail, without the use of a financial identifier. This creates a legal lacuna: the victim of a deepfake impersonation may have few legal options unless a specific injury, such as financial loss, can be demonstrated.
- Blackmail: blackmail, particularly sextortion, has become more common in the digital age. A case in point is that of a retired IPS officer from Uttar Pradesh who was blackmailed with a deepfake video showing him soliciting sex. Afraid for his reputation, the victim repeatedly made payments to the scammers.[14] Section 66-C of the IT Act, 2000 provides punishment for identity theft, and section 66-D punishes cheating by personation using a computer resource.
The Ministry of Electronics and Information Technology (MeitY) has issued an advisory to all intermediaries to comply with the existing rules, i.e., Rule 3(1)(b) within the due diligence section of the IT Rules, which mandates intermediaries to remove any content that is prohibited under the law.[15] In addition, MeitY held two Digital Dialogues in which the concerns about deepfakes raised by the Prime Minister were addressed. Although the government has acknowledged the concern surrounding deepfakes through notifications and statements, a robust legal framework is still pending.
- Jurisdiction and Cross-Border Crime:
Because deepfakes may be created and targeted anywhere, they pose a severe legal problem. A deepfake designed in one jurisdiction to deceive a firm in another may slip through the cracks of international law.
The principle of territoriality in criminal law frequently requires that a crime be committed within the jurisdiction for prosecution to proceed. While many jurisdictions have expanded laws to include extraterritoriality for cybercrime, enforcement depends on international collaboration via complex and often slow mechanisms such as Mutual Legal Assistance Treaties (MLATs). The speed and scale of deepfake attacks frequently render traditional legal remedies ineffectual.
7. SUGGESTIONS
To effectively combat the deepfake threat, a multi-faceted approach that integrates legal, technological, and policy-based solutions is essential.
- A mandate must be placed on creators and providers to obtain consent from the people depicted in a video, verify the authenticity of users' identities, and provide recourse mechanisms to those affected. Service providers must establish guidelines and service agreements, put in place a system to confirm users' real identities, develop a database to detect illegal and false information, and retain network logs, drawing inspiration from China's regulations on deep synthesis.[16] Such requirements provide the proactive defence necessary to counter the speed of deepfake attacks.
- There must be a strict security evaluation while offering templates and other models and resources for editing and morphing face, voice, and physiological data, which entails public interests, national security, etc.
- There must be a greater emphasis on making intermediaries liable for verifying the authenticity of videos through highly adept content moderators. Provisions should be made to ensure that intermediaries cannot take refuge in the safe harbour provided by section 79 of the IT Act.[17]
- Mirroring the strategy adopted in the EU AI Act, India can follow a risk-based approach, wherein the degree of risk determines the kind of rules required: the higher the risk, the stricter the rules. High-risk AI systems would face more stringent regulation before content can be disseminated, including risk assessments, precautionary systems that log activity so outcomes can be traced, and documentation detailing the systems and their objectives so that compliance can be monitored.
- Utilizing blockchain technology helps distinguish between authentic and manipulated content by timestamping and recording data on the blockchain, offering clear evidence of when the content was created. This way unauthorized alterations can be identified. Blockchain and digital watermarking can create an immutable record of media creation, making it possible to prove that a piece of content is genuine and has not been altered.
- Employee and Stakeholder Education: Legal changes alone will not suffice.
Organizations should be legally required to provide regular and comprehensive training to all employees, particularly those in finance, information technology, and executive positions, on the hazards of deepfakes and communication verification protocols. This education is the primary line of defense against social engineering.
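The blockchain-based provenance suggestion above can be sketched as follows. This is a minimal illustration, not a production system: a plain hash-chained list stands in for a distributed blockchain, `register` and `is_authentic` are hypothetical function names, and real deployments would add distributed consensus and trusted timestamping.

```python
import hashlib
import time

# A minimal ledger standing in for a blockchain: each entry chains to the
# previous one via its hash, so past records cannot be altered silently.
ledger = []

def register(content: bytes) -> str:
    """Record a content fingerprint with a timestamp on the ledger."""
    prev = ledger[-1]["entry_hash"] if ledger else "genesis"
    content_hash = hashlib.sha256(content).hexdigest()
    entry = {"content_hash": content_hash,
             "timestamp": time.time(),
             "prev": prev}
    # The entry's own hash covers the fingerprint, timestamp, and the link
    # to the previous entry, making retroactive edits detectable.
    entry["entry_hash"] = hashlib.sha256(
        f"{content_hash}{entry['timestamp']}{prev}".encode()).hexdigest()
    ledger.append(entry)
    return content_hash

def is_authentic(content: bytes) -> bool:
    """True only if this exact content was registered (i.e., unaltered)."""
    h = hashlib.sha256(content).hexdigest()
    return any(e["content_hash"] == h for e in ledger)

original = b"frame data of the original video"
register(original)
print(is_authentic(original))                   # True
print(is_authentic(b"manipulated frame data"))  # False
```

Because any change to the content changes its SHA-256 fingerprint, a deepfake derived from registered media fails verification, while the timestamp on the ledger proves when the genuine version was created.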
8. CONCLUSION
The advancement in technology comes with its pros and cons. The increased usage of deepfakes necessitates systematic control, which India’s current laws do not provide, necessitating legislation to address this technology. The current legal framework, built on principles and statutes that predate the age of AI, is demonstrably inadequate to address the unique harms and cross-border nature of these threats.
Deepfakes are an important issue to address, requiring an ongoing approach that combines law, technology, and ethics. Given the constant evolution of technology, it is critical for India to keep pace with these advances and build a legal framework to manage technology-related concerns. Although deepfakes are distinctive, more AI-generated material will become available in the future. As a result, it is critical to strengthen our legislation and enact robust rules to handle the issue of deepfakes, so that our country does not lag behind.
Name: Pranjali Singh Chauhan
College: Amity University Madhya Pradesh
[1] Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation : A Primer (RAND Corporation 2022)
[2] Karen Rebelo, “India is Teeming with ‘Cheapfakes’, Deepfakes Could make it Worse” (Boom Live, 17-6-2019) <https://www.boomlive.in/india-is-teeming-with-cheapfakes-deepfakes-could-make-it-worse/?infinitescroll=1> accessed on 17-8-2025.
[3] Jia Wen Seow, et al., "A Comprehensive Overview of Deepfake: Generation, Detection, Datasets and Opportunities" (2022) 513 Neurocomputing 351-371 <https://doi.org/10.1016/j.neucom.2022.09.135> accessed on 18-8-2025.
[4] Siwei Lyu, “DeepFake Detection : Current Challenges and Next Steps” (2020) ArXiv <https://arxiv.org/pdf/2003.09234.pdf> accessed on 17-8-2025.
[5] Malicious Deep Fake Prohibition Act, 2018
[6] No Artificial Intelligence Fake Replicas and Unauthorised Duplications Bill, 2024
[7] Steven Farmer and Johanna Lipponen, 'Key Takeaways from the UK's AI Summit: The Bletchley Declaration' (Internet & Social Media Law Blog, 07 November 2023) <https://www.internetandtechnologylaw.com/ai-summit-bletchley-declaration/> accessed 17-08-25
[8] Ranjit D. Udeshi v. State of Maharashtra, AIR 1965 SC 881
[9] ‘Summary of Berne Convention for protection of literary and Artistic Works’ (WIPO)
<https://www.wipo.int/treaties/en/ip/berne/summary_berne.html> accessed 16-08-2025
[10] The Copyright Act, 1957, s. 55
[11] Myspace Inc v. Super Cassettes Industries Ltd., 2016 SCC OnLine Del 6382
[12] Justice KS Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1
[13] Shruti Dhapola, 'Personal Data Protection Bill, 2018 Draft Submitted by Justice Srikrishna Committee : Here is What it Says' The Indian Express (28 July 2018) <https://indianexpress.com/article/technology/tech-news-technology/personal-data-protection-bill-2018-justice-srikrishna-data-protection-report-submitted-to-meity-5279972/> accessed 17-08-25
[14] Ravish Ranjan Shukla (ed. Chandrajit Mitra), 'Retired Top Cop Latest Victim Of Deepfake, Video Used To Con Ghaziabad Man' NDTV (30 November 2023) <https://www.ndtv.com/ghaziabad-news/retired-top-cop-latest-victim-of-deepfake-video-used-to-con-ghaziabad-man-4621233> accessed 17-08-2025
[15] Ministry of Electronics & IT, MeitY issues advisory to all intermediaries to comply with existing IT rules (2023)
[16] Giulia Interesse, 'China to Regulate Deep Synthesis (Deepfake) Technology Starting 2023' (China Briefing, 20 December 2022) <https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/> accessed 21 February 2024
[17] Centre Planning New Regulations, Penalties for Both Creators and Platforms To Deal with Deepfakes (n 32)
