LEGAL IMPLICATIONS OF DEEP FAKE TECHNOLOGY ON  CYBERSECURITY LAWS IN INDIA

ABSTRACT

Deep fake technology is a sophisticated form of artificial intelligence that can be used to create extremely realistic fake content, making it difficult for people to differentiate between what is real and what is a deep fake. The rapid rise in the use of deep fake technology in recent times is of grave concern in the realm of cybersecurity. This paper delves into the multifaceted nature of deep fakes and analyses the impact that deep fake technology has on cybersecurity laws in India, with a major focus on the existing legal frameworks and their effectiveness in combating this evolving threat. The paper also explores the need for amendments to the Information Technology Act, 2000 and for new legislation to deal with crimes related to deep fake technology.

KEYWORDS

Deep fake technology, cybersecurity, Information Technology Act, 2000, Indian Penal Code, 1860, amendment

INTRODUCTION

Recently there has been a huge upsurge in incidents involving deep fake technology, a recent example being Rashmika Mandanna’s deep fake video3 that went viral. In the video, Rashmika was shown entering an elevator in a bodycon romper and greeting the camera person with a smile. After the video went viral, it was revealed to be a deep fake created by a person whose identity still remains a mystery. The original video actually belonged to a British-Indian woman named Zara Patel, who had posted it on her social media on October 9th. The incident prompted the actress to voice her concerns about such manipulated videos, and it also drew the attention of many other celebrities, including Amitabh Bachchan and Mrunal Thakur, as well as Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology. Following this, the Delhi Commission for Women issued a notice to the police, requesting quick action against the perpetrator. An FIR was filed under sections 465 and 469 of the Indian Penal Code, 1860, which provide for ‘punishment for forgery’ and ‘forgery for the purpose of harming reputation’ respectively, along with sections 66C and 66E of the Information Technology Act, 2000, which provide for ‘punishment for identity theft’ and ‘punishment for violation of privacy’ respectively.

Following the viral deep fake of Rashmika Mandanna, a morphed image of Sara Tendulkar and Shubman Gill also started circulating on the internet. Additionally, on July 9th, 2023, a fifty-two year old man in Kerala fell victim to an AI-enabled deep fake fraud4 in which he received a call from an unknown number placed by a scammer who had used deep fake technology to impersonate the voice and face of the man’s former colleague. The scammer then asked the man for forty thousand rupees for an operation for his sister-in-law, to which the man agreed, believing the scammer to be the friend with whom he had worked for four decades. This was the first time that the police were dealing with a cyber fraud case involving deep fake technology.

All these reported incidents show that there has been an increase in the use of deep fake technology by scammers to commit frauds and other crimes, and that specific and clear laws need to be made in order to deal with such misdeeds.

RESEARCH METHODOLOGY

The research methodology used in this paper is analytical and descriptive in nature. The sources used are secondary in nature; that is, the information is compiled from various articles, journals and news reports.

REVIEW OF LITERATURE

Westerlund, M. (2019)1 provides an in-depth discussion of what deep fake technology is and of the dangers that it poses to society as a whole. The review discusses the role that social media plays in spreading deep fakes on a global scale within a short duration of time, which underscores the need for separate legislation to deal with deep fake offenses.

Dash, B. & Sharma, P. (2023)2 mention how deep fake algorithms and large language models (LLMs) are used by hackers to create convincing fake content without writing any code, thereby spreading cyber threats. They also discuss in detail how cybercriminals can commit crimes such as vishing and business email compromise, which are very hard to detect.

Multiple articles published by various publications have explained the dangers of deep fake technology, and numerous news reports have highlighted incidents where deep fake technology has been used to fulfill ulterior motives.

EXISTING CYBERSECURITY LAWS IN INDIA TO DEAL WITH DEEP FAKE TECHNOLOGY

Though no specific laws or regulations regarding deep fake technology have been made in India, the country’s current legal framework contains certain laws and provisions related to cybersecurity, data protection, etc. that can be applied in cases involving deep fake technology. The main cybersecurity laws and regulations relevant to deep fake technology are as follows:

Information Technology Act, 2000 (IT Act) – The Information Technology Act, 2000 is the primary legislation dealing with cybersecurity in India. While the IT Act, 2000 makes no specific mention of deep fake technology, it contains certain provisions related to unauthorized access to a person’s digital property, digital forgery and violation of privacy that can be invoked in cases involving deep fake technology. The sections of the IT Act, 2000 that can be used to punish offenses related to deep fake technology are section 66C – punishment for identity theft5, section 66D – punishment for cheating by personation by using computer resource6 and section 66E – punishment for violation of privacy7. Of these, sections 66C and 66E were invoked in the case of Rashmika Mandanna. Section 66E of the IT Act, 2000, which is concerned with the privacy of the individual, states that whoever, intentionally or knowingly, captures, publishes or transmits the image of a private area of any person without his or her consent, under circumstances violating the privacy of that person, shall be punished with imprisonment which may extend to three years or with fine not exceeding two lakh rupees, or with both. All these sections could be relevant in cases involving the non-consensual creation of deep fakes of an individual.

Indian Penal Code, 1860 (IPC) – The sections of the Indian Penal Code, 1860 that deal with fraud, forgery and other similar crimes can also be invoked in incidents involving this advanced form of artificial intelligence. Section 419 of the IPC, 1860 provides the punishment for ‘cheating by personation’: any person who cheats by personation shall be punished with imprisonment of either description for a term which may extend to three years, or with fine, or with both. A person is said to ‘cheat by personation’ if he cheats by pretending to be some other person, by knowingly substituting one person for another, or by representing that he or any other person is a person other than he or such other person really is. The provisions under sections 463, 465 and 468 of the Indian Penal Code, 1860, dealing with forgery and ‘forgery for the purpose of cheating’, can also be applied in a case of identity theft committed using deep fake technology.

Defamation Laws of India – Content made using deep fake technology in order to portray people in a defamatory light and have them say or do things that never actually happened can also be dealt with under the laws concerning defamation in India.

NAVIGATING THE COMPLEXITIES OF DEEPFAKE CHALLENGES

Authentication of digital evidence in courts – Content made using deep fake technology tends to be so realistic and convincing that, if it is produced as digital evidence in legal proceedings, it can be difficult to verify its authenticity.

Threat to the privacy of individuals – Deep fake technology uses an individual’s face or voice, or both, to create fake and unauthorized content that represents them in a negative light, consequently infringing upon their privacy. There need to be laws that address this violation of privacy, and people need to be provided with legal aid to deal with the non-consensual spreading of their deep fakes.

Defamation done using deep fake technology – Deep fake technology can also be used to create false content that damages a person’s reputation and leads to a defamation case. Therefore, there need to be clear methods for how defamation committed through deep fake technology should be treated under defamation laws and what the punishment for those who create such deep fakes should be.

Impersonation using deep fake technology – Deep fakes can be used by scammers and fraudsters to impersonate any individual and perform fraudulent activities such as scams and identity theft, as seen in the case of the man who lost forty thousand rupees to a scammer who used deep fake technology to pose as his former colleague.

International threats involving deep fakes – As deep fake technology is not restricted to the borders of a specific nation, there need to be laws and provisions addressing the international offenses that can be committed using deep fakes, along with mutual understanding and cooperation between nations in investigating such cross-border offenses.

Blackmail done using deep fake technology – Deep fake technology can be used to create false videos or images of people engaging in inappropriate activities, and the falsified content can then be used to blackmail an individual.

TRAVERSING THE HISTORY OF DEEPFAKE TECHNOLOGY

There have been many incidents in the past where deep fake technology has been used. In 2017, deep fake pornography circulated on the internet for the first time when a Reddit user posted X-rated clips that he had compiled on his own home computer. In 2019 as well, many pornographic deep fakes of adult actresses were spread across the internet. Deep fakes have also been used to misrepresent politicians and portray them negatively: in May 2023, for example, a deep fake video of Vice President Kamala Harris went viral in which she was supposedly slurring her words and speaking nonsensically about today, tomorrow and yesterday. Similarly, in June 2023 in the United States, Ron DeSantis’s presidential campaign used a deep fake to misrepresent Donald Trump.

ROLE OF SOCIAL MEDIA IN PROPAGATING DEEP FAKES

Social media plays a crucial role in the spreading of deep fakes, the most important factor being its ability to share information rapidly with a global audience at the touch of a finger. Social media helps deep fake creators reach a broader audience with minimal effort. The vast volume of deep fake content available on the internet makes it incredibly difficult to track down the original source, and even more difficult to remove content that has already been posted. The virality of content shared on platforms such as Facebook, Instagram, YouTube and Twitter further increases the damage already done and often influences public opinion. The algorithms used by such platforms usually promote content that is engaging, sensational or provocative, and the content that gets the most clicks or the greatest initial screen time is pushed to the top, spreading falsely created deep fake content even further to greater masses. Social media also offers a kind of community to creators who make deep fake content and spread fake propaganda to influence public opinion; on Reddit, for example, there are multiple subreddits (communities where people discuss, create and share material about a common topic of interest) dedicated to the creation and distribution of various kinds of deep fake content.

INTERNATIONAL COMPARISONS

As of now there are no specific regulatory measures against deep fake crimes in India. Other countries, such as the United States of America and the members of the European Union, have been exploring and implementing measures to address and combat deep fake related concerns and crimes. India can learn from their actions and mistakes and take inspiration from them on how to efficiently counter similar crimes. Let us delve into what these countries have been practicing and implementing to safeguard their citizens from the horrors of deep fake crimes.

China’s approach towards deep fake technology – China has taken several steps to deal with deep fake technology. In 2019, the government of China made it mandatory for every individual and organization to disclose whenever they had used deep fake technology in content shared on social media, and directed that no deep fakes be distributed without a clear disclaimer stating that the content had been artificially produced. China has also established provisions to regulate providers of deep fake content through the Cyberspace Administration of China, namely the Deep Synthesis Provisions8, which came into effect on 10 January 2023. These are said to be the most comprehensive legislation in the world yet with respect to deep fake technology, overseeing the use of the technology across text, images, audio and video created through AI-based models. The regulations are in line with China’s historical efforts to tightly control internet activities. The Cyberspace Administration of China (CAC) justifies them by pointing out that deep synthesis technology has been misused for creating, copying, publishing, and spreading illegal and harmful content, leading to defamation, identity forgery, and other negative impacts on communication, social order, and national security. Under the new rules, content produced with AI systems must carry a visible watermark indicating that it has been edited. Content generation service providers are required to refrain from handling personal information, adhere to regulations related to AI algorithm assessment and verification, authenticate users for video creator verification, and establish feedback mechanisms for content consumers.

Canada’s approach towards deep fake technology – The Canadian government has taken various measures to tackle the threat of crimes emerging from deep fake technology. It has used a three-way strategy consisting of prevention, detection and response in order to prevent deep fakes from being created and distributed. The government of Canada has developed prevention technology and has also worked towards creating awareness amongst its citizens regarding deep fake technology. The Canada Elections Act includes provisions specifically designed to address situations in which deep fake technology may be used to influence or disrupt a Canadian election. One such provision is section 480.1 of the Canada Elections Act, which was amended in 2018 to become section 480.1(1). This section, introduced through the 2014 Fair Elections Act, focuses on impersonation. Aside from this provision, there are other avenues within the Act to prosecute instances involving deep fakes, including provisions established by the more recent Elections Modernization Act; for instance, the Act covers the prosecution of activities such as publishing false statements with the intent to affect election results. The government has also been working towards new laws that would make it difficult and illegal for individuals to create and circulate deep fakes with ill intentions.

United States of America’s (USA) approach towards deep fake technology – The measures taken by the United States to combat deep fake technology include provisions under the U.S. National Defense Authorization Act (NDAA). The NDAA for 2021 became law after Congress overrode the veto of the then President Donald Trump. Under this law, the Department of Homeland Security (DHS) was required to issue an annual report for the following five years covering all forms of harm done through deep fake technology, and was also asked to study deep fake technology and its possible solutions. Another law, the Identifying Outputs of Generative Adversarial Networks Act, was signed by President Trump in 2020; it directed the National Science Foundation (NSF) to support research into deep fake technology and into measures for authenticating content and preventing such crimes. In 2019, Texas became the first state to ban deep fakes9 used to influence an election. California’s law prohibits the creation and distribution of videos, photos and audio of politicians that are manipulated to resemble real footage within 60 days of an election.

SUGGESTIONS

The government of India must also tackle this issue of grave national importance, as sooner or later this technology might be used with malicious intent to alter and manipulate the results of elections or to tarnish someone’s reputation in the eyes of the public, which is a dangerous power to have. Here are a few suggestions on how the Indian government can combat this ever-growing threat of deep fake technology:

Promoting media literacy and responsible content sharing – Encouraging heightened awareness and media literacy is crucial. Educating individuals on the susceptibility of digital media to manipulation and distortion is essential, emphasizing the inherent limitations of technology and its potential to propagate misinformation. The government should initiate social campaigns within educational institutions and workplaces to instill a culture of responsible content sharing on social media platforms. A pivotal aspect is advocating for content verification before disseminating sensitive videos, images, or audio to prevent the spread of misinformation.

Enhancing the legal framework for deep fake technology crimes – In order to address offenses linked to deep fake technology, specific legislation and regulations must be established under India’s IT Act of 2000. This includes criminalizing the creation and possession of deep fake content crafted with malicious intent and without the consent of the victim whose identity is manipulated in the video, audio, or image. Moreover, regulations should mandate online platforms to implement authentication methods that can curtail the dissemination of deep fake content on the internet. Strengthening consent laws, especially in cases where someone’s identity is utilized without explicit permission, is imperative for effective legal measures.

Fostering international cooperation in combating deep fake crimes – Efforts should be directed towards fostering collaboration with other nations to uncover and probe cross-border crimes associated with deep fake technology. Identifying gaps within current legal frameworks that criminals exploit to evade consequences is crucial. Law enforcement agencies must be equipped with specialized tools and mechanisms to effectively detect and investigate crimes related to deep fake technology.

CONCLUSION

In conclusion, the spectrum of deep fake crimes in India is very broad and presents a wide range of challenges to the country’s existing cybersecurity laws. This paper has discussed the variety of ways in which deep fakes and AI can be misused, as well as the potential risks they may pose in the future. The current cybersecurity laws of India under the Information Technology Act of 2000 (IT Act) can act as a solid foundation for dealing with these cyber threats.

However, specific provisions and amendments need to be made in these legislations to explicitly address the challenges posed by deep fake technology, such as issues relating to the consent and privacy of individuals and the authentication of digital evidence during legal proceedings. The gaps in the existing legal framework need to be reviewed carefully and revised accordingly in order to curb the increasing threat of misuse of deep fake technologies.

Name of the Author – Shiwakshi Kushwaha

Name of the College – Lloyd Law College, Greater Noida

______________

1 Mika Westerlund, The Emergence of Deepfake Technology: A Review, 9(11) Tech. Innov. Mgt. Review 39-52 (2019).

2 Bibhu Dash and Pawankumar Sharma, Are ChatGPT and Deepfake Algorithms Endangering the Cybersecurity Industry?, 10(1) Int’l Jrnl of Engineering and Appl. Sci. 1-5 (2023).

3 TIMES OF INDIA, https://timesofindia.indiatimes.com/entertainment/hindi/bollywood/news/days-after-rashmika-mandannas-deepfake-video-went-viral-on-the-internet-delhi-police-registers-fir-in-the-case/articleshow/105133555.cms?from=mdr (last visited 11 November)

4 HINDUSTAN TIMES, https://www.hindustantimes.com/india-news/deepfake-scammers-trick-indian-man-into-transferring-money-police-investigating-multi-million-rupee-scam-101689622291654.html (last visited 11 November)

5 Information Technology Act, 2000, § 66C (India)

6 Information Technology Act, 2000, § 66D (India)

7 Information Technology Act, 2000, § 66E (India)

8 THE DIPLOMAT, https://thediplomat.com/2023/03/chinas-new-legislation-on-deepfakes-should-the-rest-of-asia-follow-suit/ (last visited 12 November)

9 EXPRESS NEWS, https://www.expressnews.com/news/local/politics/article/Texas-is-first-state-to-ban-political-14504294.php (last visited 13 November)