UNDERSTANDING DEEP FAKES AND GENERATIVE ARTIFICIAL INTELLIGENCE: IMPLICATIONS AND LEGAL PERSPECTIVES IN INDIA

ABSTRACT

The rapid advancement of artificial intelligence (AI), particularly generative AI, presents significant challenges and opportunities across various domains. This research paper explores the capabilities and implications of generative AI, focusing on deepfakes, which use deep learning techniques to create highly realistic synthetic media. The paper examines the applications and ethical concerns of generative AI, highlighting the risks posed by deepfakes in areas such as cybersecurity, privacy, and misinformation. 

Through a detailed analysis of secondary data sources, including scholarly articles, reports, and reputable websites, this study provides a comprehensive overview of the current state of generative AI and deepfakes in India. It evaluates existing legal frameworks, such as the Information Technology Act and the Digital Personal Data Protection Act, and discusses their effectiveness in addressing the misuse of this technology. Additionally, the paper proposes a multi-faceted approach to mitigate deepfake threats, encompassing technological detection systems, policy initiatives, public awareness, and the adoption of a zero-trust mindset.

KEYWORDS

Generative Artificial Intelligence (Generative AI), Artificial Intelligence (AI), Deepfakes, Deep Learning, Generative Adversarial Network (GAN), AI Regulation in India.

INTRODUCTION

The rapid development of AI is increasingly worrying for the global community, governments, and the general public, carrying major implications for national security and cybersecurity. It also raises ethical issues surrounding transparency and surveillance. In a world already rife with misinformation and distrust, AI offers ever more sophisticated means of convincing people that false information is true.

A kind of AI system known as “Generative AI” is built to produce original content, such as text, graphics, audio, or video, from training data. In contrast to classical AI, which is primarily concerned with pattern recognition and prediction, generative AI is capable of producing novel and creative results. Generative AI is built on deep learning models: sophisticated machine learning systems designed to emulate the learning and decision-making processes of the human brain. These models operate by identifying and capturing correlations within vast amounts of data, and they use this learned knowledge to interpret user requests or queries in natural language and generate relevant new content in response.
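
To make this concrete, the short sketch below is an illustration only; it assumes the open-source Hugging Face “transformers” library and the small GPT-2 model, neither of which is discussed in this paper. It shows a pretrained deep learning model taking a natural-language prompt and generating new text in response.

# Illustrative sketch (assumed tools: Hugging Face "transformers" library, GPT-2 model):
# a pretrained generative language model completes a natural-language prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
output = generator("Generative AI can be used to", max_new_tokens=30)
print(output[0]["generated_text"])  # newly generated text continuing the prompt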


Deepfakes are a particular application of generative AI. The term “deepfakes” combines “deep learning” and “fakes”: it refers to synthetic media, including images, videos, and audio recordings, created or manipulated with deep learning techniques. The resulting content can be realistic enough to deceive viewers into believing they are witnessing or hearing someone say or do something they have never said or done.

Deep learning is a subset of machine learning methods, which are in turn a subset of AI. In machine learning, a model is built for a particular task using training data, and the model improves as the training data become more comprehensive and robust. A deep learning model can automatically discover the feature representations in the data that are needed to process or classify it; in this sense its training is “deeper,” and correspondingly more effective.
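
As a minimal illustration of this idea, the sketch below (which assumes the scikit-learn library, not referenced in this paper) trains a small neural network that learns its own internal representation of a simple two-class dataset purely from labelled training examples.

# Illustrative sketch (assumed library: scikit-learn): a small multi-layer
# perceptron learns to separate two interleaving "half-moon" classes from
# labelled examples alone, without hand-crafted features.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)   # synthetic training data
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)                                               # learn from the data
print(f"training accuracy: {model.score(X, y):.2f}")          # improves with richer data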

 
Generative Adversarial Networks (GANs): An important piece of technology used to create deepfakes is the “Generative Adversarial Network,” or GAN. A GAN takes an adversarial approach, employing two machine learning networks to generate synthetic content. The first network, known as the “generator,” is provided with data that exemplifies the desired type of content, enabling it to “learn” the characteristics of that data.

The generator then attempts to produce fresh instances of that data that display the same traits as the original data. These generated instances are shown to the second machine learning network, often called the “discriminator,” which has likewise been trained to “learn” the features of that type of data. The discriminator identifies weaknesses in the generated instances, rejecting those that do not match the properties of the original data and labelling them as “fakes.” These rejections are fed back to the first network, allowing it to refine its data-creation process. The more realistic the content used to train the GAN, the more realistic its output will be.
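
The adversarial loop described above can be made concrete with a deliberately simplified sketch. It is an illustration only and rests on two assumptions not made in this paper: the PyTorch library, and a toy one-dimensional “real data” distribution (real deepfake GANs operate on images and are vastly larger).

# Toy GAN sketch (assumed library: PyTorch). The generator learns to mimic a
# simple 1-D Gaussian "real data" distribution; the discriminator learns to
# tell real samples from generated ones. Each network improves against the other.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn around a mean of 4.0
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train the discriminator: accept real samples, reject generated ones.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Train the generator: produce samples the discriminator accepts as real.
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

# After training, generated samples typically drift toward the real mean (about 4.0).
print(G(torch.randn(1000, 8)).mean().item())

The same generator-versus-discriminator dynamic, scaled up to images, audio, and video, underlies deepfake creation.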

This research paper explores the applications and misuses of generative AI and deepfakes along with the legal perspective of deepfakes in India, examining existing laws, challenges, and evolving strategies to counter the misuse of this technology.

RESEARCH METHODOLOGY

This section outlines the methodology used to understand generative AI and deepfake technology and the current position of generative AI and deepfakes in India. The research focuses on secondary data obtained from reputable websites and scholarly articles to provide a comprehensive understanding of the topic.

The data for this research was collected from:

  • Websites: Reputable news outlets, government websites, and technology blogs.
  • Scholarly Articles: Peer-reviewed journals and conference papers.
  • Reports and White Papers: Publications from research institutions.

This methodology was chosen to gather a wide range of perspectives on generative AI and deepfakes, and to understand the present situation in India along with the legislative steps and measures taken to address the problem.

After a systematic reading, I have summarised the selected articles, which offer diverse views on this technology, focusing on their findings and conclusions.

REVIEW OF LITERATURE

The purpose of the following literature review is to compile a body of information regarding the many impacts and ramifications of generative AI and deepfakes globally. Since the goal of a systematic review is to synthesize information from several disciplines, the scope was not limited to any particular field of study.

  1. In July 2023, the Buffett Institute for Global Affairs published an article in its Buffett Brief titled “The Rise of Artificial Intelligence and Deepfakes.”

Even though deepfake media poses an increasing risk to global security, the article discusses how deepfakes could also be used in the fight against terrorism. According to the article, deepfake technology can be used to manipulate military orders to confuse rank-and-file soldiers, undermine political figures, and exploit tensions within a country. Several real-world instances were highlighted:

  • In the spring of 2020, Extinction Rebellion Belgium produced a fake video featuring the then prime minister of Belgium, Sophie Wilmès, purporting to link the spread of COVID-19 to unchecked ecological disasters. Using footage from her most recent pandemic address to the country, the group fabricated a speech scripted by Extinction Rebellion.
  • In 2022, not long after Russia began its invasion of Ukraine, Ukrainians were shocked to discover a video showing their president, Volodymyr Zelenskyy, calling on the armed forces to lay down their weapons and surrender to the invading armies. Zelenskyy’s administration swiftly denied the video’s authenticity after it went viral; it had been produced by Russian propagandists using deepfake technology.

The article then discusses solutions and guardrails against this kind of cyber-deception.

Experts proposed that the United States and its democratic allies create a “Deepfakes Equities Process,” a code of conduct governing when and how governments may use deepfakes. This process is modeled on the federal government’s vulnerabilities equities process, which guides decisions on whether newly discovered cybersecurity vulnerabilities should be kept secret for offensive use against government adversaries or disclosed to the public.

  2. The second article I would like to discuss is written by Andrew Ray, titled “Disinformation, Deepfakes, and Democracies: The Need for Legislative Reform.”

This article discusses the rising concern over how deepfakes may affect national elections. False information regarding political candidates, parties, and policies can be created and disseminated through the use of deepfakes. 

This may mislead voters, skew public opinion, and affect the results of elections. Political figures can be deliberately targeted for character assassination using deepfakes. Deepfakes are a type of deceptive advertising that political campaigns might use to present candidates or ideas inaccurately. This manipulative practice can unfairly advantage certain candidates or parties and undermine the fairness of elections.

  3. Another article I would like to review is written by Sarvagya Chitranshi, titled “The ‘Deepfake’ Conundrum: Can the Digital Personal Data Protection Act, 2023 deal with the misuse of Generative AI?” This article examines the Digital Personal Data Protection Act, 2023 (DPDPA). According to the author, the DPDPA aims to prevent “Data Fiduciaries” from misusing an individual’s personal data. The Act defines a data fiduciary as any person who determines the purpose and means of processing an individual’s personal data.

Under Section 4, a data fiduciary may process personal data only for purposes for which the data principal has given clear consent. In addition, a data fiduciary may process personal data for certain other legitimate uses set out in Section 7. This ensures that a data fiduciary cannot lawfully collect an individual’s personal data without that person’s express consent and use it to train generative AI models.

The problem, however, does not lie with data fiduciaries alone. Users who sign up with data fiduciaries can create AI-based fake media using other people’s images or videos. In such cases, Section 8(5) becomes significant, since it imposes a duty on the data fiduciary to safeguard any personal data in its custody against potential breaches. Under Section 8(6), data fiduciaries must also notify the data principal as soon as a breach is discovered. By reporting the matter to the appropriate authorities, the data principal can then intervene even before the data is expressly misused.

Where it is “likely” that personal data will be used to make a decision affecting the data principal, Section 8(3) obliges the data fiduciary to ensure the accuracy and completeness of that data. The term “likely” broadens the responsibility of data fiduciaries, such as social media companies, to account for deepfakes. Additionally, the DPDPA imposes duties on data principals under Section 15; clause (b), in particular, addresses the common issue of impersonation.

This is crucial in today’s environment, where AI-generated content is often used to deceive people by mimicking someone else. Upon receiving a complaint, data fiduciaries can trace the source of a deepfake and hold the individuals who uploaded it to their platform accountable. This provision enables action against such individuals, thus providing adequate relief to aggrieved parties.

However, the DPDPA does not appear to fully address the issue of fraudulent generative AI-based media. Section 3(c) of the Act, which sets out the circumstances in which the Act does not apply, is pertinent here. Under its first clause, the Act does not apply when an individual processes personal data for any personal or domestic purpose.

Therefore, a deepfake created for personal use from data drawn from a publicly available source might fall within this exemption, and little would stop its creator from arguing that using AI to produce such bogus content qualifies for a DPDPA exemption.

DISCUSSION

Applications and misuses of generative AI and deepfakes:

Advanced machine learning algorithms enable generative AI and deepfakes, which have several applications in diverse industries. In the field of creativity and the arts, generative AI can produce unique works of art that help artists experiment with different ideas and approaches. It can write scripts, tales, essays, and other written material, which helps writers be more productive. 

It can also compose music, which assists musicians in their creative process. In the fields of design and architecture, AI helps generate innovative building designs optimized for sustainability, usability, and aesthetics. It also generates logos, website layouts, and other visual content.

Generative AI also has major benefits for the media and entertainment sectors. AI is used in game production to create characters, levels, and plots that enhance the gameplay. It reduces production time and costs by producing realistic special effects, animations, and even complete scenes in movies. 

One particular use case, and set of difficulties, for generative AI is deepfakes. Deepfakes are used in entertainment to create realistic stunt doubles, digitally resurrect actors who have passed away, and de-age performers.

In communication and media, deepfakes allow for the creation of virtual influencers on social media and bespoke marketing videos in which well-known faces deliver messages. In security and authentication, deepfake technology can improve identity verification systems and protect privacy by generating synthetic faces for studies and publications.

Despite their benefits, these technologies pose significant ethical challenges:

  1. Phishing Attacks and Identity Theft: Phishing attacks can leverage deepfakes to steal someone’s identity. To trick workers into disclosing private information or carrying out unauthorized actions, attackers may produce convincing audio or video recordings that mimic trusted people, such as business leaders.
  2. Non-consensual Explicit Content: The misuse of deepfakes has led to people being depicted in explicit material without their consent. This raises serious moral and legal issues, since such content can be used for defamatory retribution or to damage someone’s reputation.
  3. False Evidence: Deepfakes can be used to fabricate images or audio that may be presented in court as proof of someone’s guilt or innocence.

Addressing these issues requires developing robust detection technologies, creating regulatory frameworks, and promoting ethical AI practices to ensure the responsible use of generative AI and deepfakes.

Present Indian situation:

India does not have any laws designed specifically to deal with the problems caused by deepfake technology. Nonetheless, several existing laws may be used to address different facets of the production and propagation of deepfakes:

For instance, deepfake offenses that entail capturing, disseminating, or publishing a person’s image in the media in violation of their privacy are covered by Section 66E of the Information Technology Act, 2000 (IT Act). Similarly, those who maliciously use computer resources or communication devices to impersonate someone else or to cheat are punishable under Section 66D of the IT Act. Furthermore, posting or transmitting pornographic or sexually explicit deepfakes may attract action under Sections 67, 67A, and 67B of the IT Act.

For cybercrimes related to deepfakes, Section 509 (words, gestures, or acts intended to insult the modesty of a woman), Section 499 (criminal defamation), and Sections 153A and 153B (promoting enmity and spreading hate on communal lines) of the Indian Penal Code, 1860 (IPC) may also be invoked.

In addition, Section 51 of the Copyright Act, 1957 may be invoked where a deepfake has been produced using a copyrighted image or video, since the Act prohibits the unauthorized use of any work over which another person holds an exclusive right.

Following the circulation of deepfake videos online that purportedly showed their CEOs giving stock and investment advice, the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE) recently released cautionary notices.

The Right to Privacy: Deepfakes violate a person’s right to privacy by manipulating their identity, appearance, or attributes. Article 21 of the Indian Constitution guarantees the right to life and personal liberty, which has been interpreted to encompass the right to privacy. Landmark rulings, such as Justice K.S. Puttaswamy (Retd.) v. Union of India, have affirmed the fundamental nature of this right.

Liability of Intermediaries: Section 79 of the IT Act and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 govern the liability of intermediaries, including social media platforms. Intermediaries must remove infringing content as soon as a court order or government notification is received. Under the Rules, certain platforms must also designate personnel responsible for monitoring and flagging unlawful content.

Challenges:

  1. Advancements in Technology vs the Law: Deepfake technology is developing quickly, and the legal system finds it difficult to keep up. Because there are no laws specifically targeting deepfakes, legal frameworks may struggle to effectively control and punish the improper use of this technology.
  2. Attribution and Jurisdiction: The cross-border nature of the internet makes it difficult to pinpoint the source of a deepfake and to establish jurisdiction in court.
  3. Freedom of Expression Issues: Balancing the need to curb the malicious use of deepfakes with the right to freedom of expression is a delicate task. The legal framework must strike a balance that prevents harm without unduly restricting legitimate forms of expression, such as satire or political commentary.

SUGGESTIONS

Developments in generative AI and Large Language Models (LLMs) have driven the emergence of deepfakes; therefore, countermeasures are required to deal with the problem.

The following are some of the countermeasures proposed by Anna Maria Collard, Senior Vice-President of Content Strategy & Evangelist at KnowBe4, in her article “4 ways to future-proof against deepfakes in 2024 and beyond,” published on the World Economic Forum website:

  1. Technology: Numerous technology-based detection systems are currently available, employing machine learning, neural networks, and forensic analysis to scrutinize digital content for inconsistencies typical of deepfakes. Forensic techniques that assess facial manipulation can confirm the authenticity of content (a simplified illustration of one such forensic cue appears at the end of this section). Nevertheless, developing and maintaining automated detection tools capable of real-time analysis remains challenging. Over time and with widespread adoption, AI-driven detection strategies are expected to aid significantly in the fight against deepfakes.
  2. Policy efforts: There is a need for international and multistakeholder initiatives to devise practical and actionable solutions to the global deepfake issue. Efforts are ongoing to reach a global consensus on responsible AI and to establish clear boundaries, and these need to be furthered. Requiring generative AI and language model providers to incorporate traceability into deepfake creation can improve accountability.

However, malicious actors might bypass these measures using jailbroken versions or non-compliant tools. A unified international stance on ethical standards, acceptable use, and defining malicious deepfakes is essential to prevent the misuse of such technology.

  3. Public awareness: Raising public awareness and enhancing media literacy are crucial defenses against AI-driven social engineering and manipulation. From early education, individuals should learn to discern real from fake content, understand how deepfakes spread, and recognize the tactics of malicious actors.

Media literacy programs should focus on critical thinking and provide tools for verifying information. Research indicates that media literacy can significantly protect society from AI-driven disinformation by reducing individuals’ propensity to share deepfakes.

  4. Zero-trust mindset: In cybersecurity, a zero-trust approach means not trusting anything by default and always verifying. Applied to online information consumption, this approach encourages skepticism and constant verification, aligning with mindfulness practices that promote thoughtful engagement with digital content. Implementing a culture of zero-trust through cybersecurity mindfulness programs (CMP) equips users to handle deepfake and other AI-driven cyber threats, which technology alone cannot easily counter.

As we increasingly live our lives online and approach the reality of the metaverse, a zero-trust mindset becomes even more critical in distinguishing between real and synthetic environments. Effectively mitigating deepfake threats requires a multilayered strategy combining technological, regulatory, and public awareness efforts. 

This demands global collaboration among nations, organizations, and civil society, alongside significant political commitment. Meanwhile, a zero-trust mindset will encourage proactive cybersecurity measures, prompting individuals and organizations to stay vigilant against digital deception as the lines between virtual and physical worlds blur.
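
As noted under the first counter-measure above, the following sketch illustrates, in a deliberately simplified way, one forensic cue that automated detectors may combine with many others. It is an assumption-laden illustration, not a working detector: it presumes the NumPy and Pillow libraries and a hypothetical image file name, and a single statistic of this kind is not a reliable deepfake test on its own. The idea is that some GAN-generated images carry atypical high-frequency artefacts, so the share of spectral energy far from the image’s low-frequency core can serve as one weak signal for flagging content for closer forensic review.

# Illustrative sketch only (assumed libraries: NumPy, Pillow; file name is hypothetical).
# Computes the fraction of an image's spectral energy outside its low-frequency core,
# a crude proxy for the high-frequency artefacts sometimes left by GAN up-sampling.
import numpy as np
from PIL import Image

def high_frequency_ratio(path: str) -> float:
    """Fraction of spectral energy outside the central (low-frequency) band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)  # greyscale pixels
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2          # power spectrum
    h, w = spectrum.shape
    ch, cw = h // 4, w // 4
    low = spectrum[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return float(1.0 - low / spectrum.sum())

# Hypothetical usage: a score far outside the range seen for known-genuine images
# from the same source would merely flag the frame for closer manual or forensic review.
# print(high_frequency_ratio("suspect_frame.png"))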

CONCLUSION

Generative AI and deepfake technologies offer remarkable creative potential but also pose serious ethical and security risks. The misuse of deepfakes can lead to significant consequences, including identity theft, privacy violations, and the spread of misinformation. In India, while existing laws provide some coverage against deepfake-related offenses, some gaps need to be addressed through updated legislation and regulatory measures. 

Key recommendations include the development of advanced detection technologies, the implementation of international policy efforts, and the promotion of public awareness and media literacy. Additionally, adopting a zero-trust mindset in cybersecurity can help individuals and organizations remain vigilant against digital deception. 

Addressing the deepfake challenge requires a coordinated global effort, combining technological innovation, legal safeguards, and education to ensure the responsible use of AI and protect society from its potential harms. The path forward lies in a balanced approach that promotes responsible AI development and safeguards the integrity of information in our increasingly digital world.

Reeshika Agarwal

Final-Year Learner

Symbiosis Law School, Noida