Deepfakes in the Age of Misinformation: A Threat to Democracy or a Call for Media Literacy?

  1. Abstract 

This research study explores the rapidly evolving fields of artificial intelligence (AI) and deepfake technology, how they affect democratic processes, and the importance of media literacy in this developing area. As deepfakes have advanced, they have come to threaten political stability, public trust and national security. Using a mixed-method approach combining qualitative and quantitative analysis, the study investigates the rise of deepfakes and the effectiveness of detection technologies. It also reviews existing literature, legal frameworks and policy recommendations to suggest comprehensive solutions, recommends the creation of a dedicated AI Act to regulate deepfake technology, and highlights the crucial role of media literacy in helping individuals assess digital content critically. By addressing these concerns through technological, legal and educational means, the study aims to promote a more knowledgeable and resilient society in the digital age.

  1. Keywords

Deepfake Technology, Media Literacy, AI Act, Detection Technologies, Misinformation

  1. Introduction

“Deep Learning Based Fake Artificial Intelligence”, in short, deepfake AI. Imagine waking up tomorrow morning to find that your favourite politician, in his latest speech, has come out in support of his biggest rival, even though you attended his most recent rally and he said nothing of the sort. Or imagine scrolling through social media at home and seeing yourself with your partner doing something you never did and would never do. You watch your biggest fear unfold: it is a deepfake, an AI-manipulated video made with the intention of destroying your image.

  1. Historical context and the rise of Deep Fake 

The term “deepfake” was first coined by a Reddit user in 2017; around that time, the Reddit community began creating deepfakes by swapping in the faces of celebrities, including Oscar stars, to produce non-consensual pornographic videos. More recently, a Chinese company named Momo released an app, Zao, that lets users superimpose one person’s photo or video onto another’s. The app was later banned due to privacy concerns, but it was only the tip of the iceberg, because many deepfake websites and apps soon became available on the internet and the dark web.

A recent study by Sumsub Research states that between 2022 and 2023 there was a tenfold increase in the number of deepfake videos across all industries. It also reports that identity fraud rose most dramatically in online media, increasing 274% between 2021 and 2023, and that the video game, healthcare, transportation and professional services sectors have all been affected by deepfakes.

  1. Democratic Consequences 

The spread of false news and information on the internet has severely threatened our democracy’s core values and diminished civic engagement in society. We have seen how deepfakes have disrupted elections in many countries. According to Live Mint, in November of last year police filed numerous cases over deepfake videos targeting senior politicians, including Congress leader Kamal Nath, the BJP’s Shivraj Singh Chauhan and Kailash Vijayvargiya, and Modi, during the state legislature elections in Madhya Pradesh in central India and Rajasthan in the west. Private consultancy firms are frequently tasked with producing deepfake content, which is then distributed over social media networks, most prominently WhatsApp.
According to the radio network VOA (Voice of America), deepfakes of journalists in particular have made it challenging to distinguish fact from fiction. AI-generated videos show real news anchors delivering fake reports, blurring the line between truth and misinformation. Deepfake manipulation has targeted leading news outlets such as CNN, CBS, BBC and VOA, with notable reporters like Gayle King, Clarissa Ward and Anderson Cooper being mimicked to propagate false narratives.

  1. Research Methodology

Using a mixed-methods approach, this study provides an in-depth understanding of deepfake technology and its consequences by integrating qualitative and quantitative assessments.

  1. Data collection methods
  1. Primary Sources – Interviews with experts in AI and cybersecurity, together with official government data, were studied to gather insights and first-hand information.
  2. Secondary Sources – Research papers, articles and reports on deepfakes, misinformation and media literacy were reviewed.
  3. Data Analysis Techniques
  1. Qualitative Analysis – Identifying patterns, for example tracing the statements most commonly made by respondents.
  2. Quantitative Analysis – Using statistics and comparing data, for example calculating the percentage of people who can recognise a deepfake versus those who cannot (a minimal sketch of this calculation appears below).
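
As an illustration of the quantitative step described above, the following sketch (in Python) shows how the share of respondents who correctly identified a deepfake could be computed. The tallies and variable names are purely hypothetical assumptions used for demonstration; they are not data collected for this study.

    # Hypothetical survey tallies -- illustrative only, not data from this study.
    correctly_identified = 164   # respondents who recognised the deepfake
    misclassified = 236          # respondents who believed the deepfake was real

    total = correctly_identified + misclassified
    detection_rate = correctly_identified / total * 100
    failure_rate = misclassified / total * 100

    print(f"Could identify the deepfake: {detection_rate:.1f}%")   # 41.0%
    print(f"Could not identify it:       {failure_rate:.1f}%")     # 59.0%
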
  1. Literature Review
  2. Evolution of Deep Fake Technology
  3. Technological Advancements in Deep Fake Creation 
  4. Getting More Realistic 

Earlier deepfakes looked fake and blurry. Take the AI model ModelScope: one of the first widely shared text-to-video clips, generated from nothing more than a written prompt, showed the actor Will Smith eating spaghetti, and the footage was quite blurry; that was in March 2023. Today, however, models such as Sora have achieved remarkable results in text-to-video generation, producing output that looks strikingly realistic.

  1. Key Developments and Milestones  

According to an article in ScienceDirect, deep learning has made deepfakes in today’s world far more realistic and accessible, likening obtaining them to buying products from Walmart or drugs from the dark web. Earlier, creating a deepfake required a lot of data and expertise, but now a person with no knowledge of the field and no prior experience can create one in minutes.

  1. Applications and Misuse “Beyond Entertainment”
  2. Legitimate business 
  3. Entertainment Industry – Deepfakes have been widely used in films to digitally de-age or otherwise alter actors’ appearances and to enhance visual effects; recent examples include the Star Wars and Marvel movies.

Nowadays, after considerable progress in generative AI, the latest models can create complete songs, including lyrics, from a simple text prompt or an idea.

The AI tools currently performing at their peak in this field are Suno AI and Udio, which became popular in April 2024, soon after their launch.

  1. Education and Marketing – AI has changed modern learning by powering intelligent tutoring systems that support students by analysing their past performance and mistakes. Digital content-creation AI has also made teaching far more interactive, offering interactive simulations, augmented reality experiences and 3D learning that make education more engaging and exciting.

Deepfakes have also significantly changed the advertising and marketing field by enabling rapid, cost-effective content creation. At times, ideas generated by AI are seen as more unique and dynamic than those of many successful advertisers and professional marketers.

  1. Malicious Applications

2.1 Political manipulation – Deepfakes have played an immense role in spreading propaganda and disinformation. In recent years, fake speeches and videos have been used during political campaigns to manipulate voters. Recently, deepfakes of two Bollywood actors were circulated widely on Twitter showing them supporting the Congress party at its rallies; these videos were created without the actors’ consent. In another example, a woman who had died almost a decade earlier in an airstrike was shown in a video giving a speech urging Tamilians across the world to act, promoting false news. After a detailed analysis of the video, a fact-checker in Tamil Nadu found that it was AI-generated and a deepfake.

2.2 Cybercrime and Pornography – Cases involving fake identities, forged documents and financial fraud have increased in the past few years. Using deepfakes to create manipulated audio, in short an AI voice clone of someone the victim knows, and then using it to commit financial fraud has become a big market in India.

Deepfake pornography, meanwhile, has become a massive segment of the pornographic industry. It has been used to target celebrities as well as private individuals, often as a form of revenge. In such cases, faces are swapped without the consent of the person being targeted or defamed, and these crimes frequently cause lasting damage to the victim’s reputation.

  1. Deepfakes and Misinformation
  2. The Misinformation Epidemic “Truth in the Crosshairs” 
  3. Contribution of Deepfakes in Misinformation 

Deepfake videos created by GANs, i.e. Generative Adversarial Networks, have contributed the most to the spread of misinformation, in various ways (a minimal sketch of the adversarial training idea behind GANs follows this list):

  1. Realism and Credibility – Deepfakes have become so realistic that it is now hard to tell a genuine video from a manipulated one. Such videos can build false narratives to manipulate people, and they become especially influential when they involve politicians, celebrities or other public figures.
  2. Speed and Reach – Platforms such as Instagram and Twitter have become large distributors and promoters of deepfake content. The companies cannot always fact-check these videos because, owing to their viral nature, they usually circulate at enormous scale.
  3. Emotional manipulation – Deepfakes can sway public emotion at large before their authenticity is verified, acting as powerful triggers of emotions such as anger, fear or disgust. For example, a deepfake video of a politician speaking against a minority community can quickly become controversial and polarise public opinion.
  4. Disinformation campaigns – Deepfakes of government representatives can be used to manipulate elections and public opinion, and can quickly destabilise a country’s economy or its image.
  5. Psychological impact of misinformation 
  1. Confirmation Bias – People in general are more likely to believe information that aligns with their existing beliefs, which makes the public more easily exploitable.
  2. Availability Heuristic – This cognitive bias leads people to judge a situation by the information that comes to mind first. A highly realistic deepfake is vivid and memorable, so the false events it depicts come to seem more believable.
  3. Emotional Reasoning – Deepfakes can trigger a person’s emotions and short-circuit their reasoning. When a manipulated video plays on people’s emotions, it becomes harder for them to evaluate the situation critically before acting.
  4. Anchoring Effect – Even when a widely spread deepfake is later exposed and labelled as misleading content, it leaves a long-lasting impression that continues to influence public perception; this is known as the “Continued Influence Effect.”
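
To make the GAN mechanism mentioned above concrete, the toy-scale sketch below (in PyTorch, which is an assumption of this illustration rather than a tool named in this study) shows the adversarial training idea: a generator learns to turn random noise into fake samples while a discriminator learns to tell them apart from real ones. The network sizes, data and hyperparameters are arbitrary; this is an illustration of the training loop only, not a deepfake system.

    # Toy illustration of the adversarial training loop behind GANs (not a deepfake generator).
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64  # arbitrary toy sizes

    generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(32, data_dim)       # stand-in for real training samples
        noise = torch.randn(32, latent_dim)
        fake = generator(noise)

        # Discriminator update: label real samples 1 and generated samples 0.
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: try to make the discriminator label fakes as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

In real deepfake systems the generator and discriminator are deep convolutional networks trained on large sets of face images or video frames, but the underlying adversarial loop is the same.
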
  5. Case Studies “When Lies Go Viral”
  6. Case 1 – Recently, videos of two news anchors who were not real people but AI creations were circulated and distributed by pro-China bot accounts on Facebook and Twitter. In one video, a news anchor commented on the shameful lack of action against gun violence in the US; in the other, a female anchor promoted China’s role and geopolitical relations at an international summit meeting.

However, the videos were slightly off: the voices were not properly synced with the movements of the anchors’ facial muscles, and their faces had a pixelated, video-game quality. The clips were later proven to be deepfakes.

  1. Case 2 – In March 2022, a deepfake video of Ukraine’s president, Mr Zelensky, appeared in which he told his soldiers to lay down their weapons and surrender to Russia. The video was about a minute long, and it was not clear who created it; the Ukrainian government, however, had long been warning its citizens about the possibility of Russia circulating deepfakes. A professor at the University of California, Berkeley, an expert in digital media forensics, said, “This is the first one I’ve seen that got some legs, but I suspect it’s the tip of the iceberg.”

Evaluating the consequences for public perception and trust

  1. Erosion of trust – Incidents like those above undermine public confidence in the media and in official communications, as people become unsure about the authenticity of the content they consume.
  2. Increased Skepticism – Case 1 had visible flaws, such as unsynced audio and pixelation, but such flaws will become harder to detect as the technology improves.
  3. Exploitation of cognitive biases – Deepfakes play on people’s emotions and pre-existing beliefs, bypassing critical and rational evaluation of a situation.
  1. Threat to Democracy
  2. Undermining Public Trust: “Erosion of Faith”
  3.  Impact on Democratic Institutions
  4. Compromising Elections 
  1. “In the old days, if you wanted to threaten a country, you needed a few high-end aircraft carriers, nuclear weapons and a few long-range missiles. But today, all you need is the ability to produce very realistic deepfake videos that can quickly interfere in an election and can easily affect a country’s economy tremendously,” stated US Senator Marco Rubio (Porup, 2019).
  2. Recently, a PIL was filed in the Delhi High Court seeking to restrain the use of deepfakes in political campaigning for the 2024 Lok Sabha elections. The plea sought directions to the Election Commission of India to implement the necessary guidelines against deepfake technologies on platforms such as Google, Meta and X (Twitter).

The plea also noted that other jurisdictions and international organisations, such as the EU, have taken steps under the EU Charter to ensure free and fair elections.

  1. Character Assassination and Defamation 

The earlier examples clearly show how a person’s reputation suffers when a deepfake circulates on the internet: even if the video is later confirmed to be manipulated, the image of that person remains tarnished. This falls under the “anchoring effect” discussed above.

  1. National Security Concerns “A New Front in Information Warfare”
  2. Political Propaganda and Disinformation
  3. Weaponizing Deepfakes – Foreign state actors can use deepfakes as a weapon against a country, shaping public opinion and political will for their own interests. This poses a clear threat to a country’s public security.
  4. Espionage and Cyber-Attacks
  5. Espionage – Deepfakes pose a significant potential threat in espionage activities, enabling the creation of fake identities and the compromise of a country’s security protocols. Let’s break this down.
  6. Creation of Fake Identities – 
  • Intelligence Gathering – Deepfakes can be used to impersonate a trusted individual in order to obtain a person’s private information without raising suspicion; for example, a deepfake audio clip of someone the target knows can be used to coax out personal details.
  1. Compromising Security Protocols
  • Authentication Bypass – Many modern security systems use biometric scans and voice recognition to verify a person’s identity. Take the case of a high-ranking military general: a deepfake of that general could be used to bypass such checks and gain access to restricted areas.
  1. Cyberattacks – Deepfakes can be used as a tool for cyberattacks in many ways, most notably phishing: AI can be used to send deepfake-laden messages to millions of email addresses at once, whether as part of a disinformation campaign started by groups of Chinese or Russian hackers or as an attempt to shake down a country’s economy.
  2. International Relations and Geopolitical Stability 
  3. Destabilizing Alliances – Deepfakes can seriously disrupt the diplomatic relations of international alliances such as BRICS or the EU.

Imagine that a deepfake of a president goes viral in which he makes statements entirely contrary to the views of the allied nations, or in which he appears to declare war on a nation friendly to his own. The video may later be confirmed to be a deepfake, but what about the time gap between its release and that confirmation?

The consequences could be brutal, because verifying the information as true or false at that exact moment is impossible. In that time gap a massacre could unfold in a country, the share market could crash, and economies could go down too.

V) Legal and Ethical Implications

5.1) The Legal Landscape: “Regulating the Unregulatable?”

The legal problems that deepfake AI has created in the modern world include:

  1. Privacy violations – Deepfake technology creates fake videos and images of real individuals, infringing on their privacy. This includes making non-consensual pornographic content without the person’s authorisation.
  2. Fraud and Scams – Deepfake technology can be used to commit fraud by cloning someone’s voice and using it to scam that person’s family or relatives.
  3. Election Interference – Deepfake AI has been used repeatedly during elections, with viral videos or audio appearing to show a political candidate doing something illegal, thereby impacting the outcome of elections.
  4. Defamation and Libel – Deepfakes have been used several times to defame individuals or organisations, harming their reputation; legal action can then be taken against the person who created the content.

In the Indian scenario, however, we do have laws that can be used to counter deepfake content:

  1. Information Technology Act, 2000 – Sections 43, 66D, 66E and 66F
  2. Indian Penal Code (IPC) – Section 499, Section 420, Section 468, Section 471
  3. Copyright Act, 1957 – Section 63
  4. Personal Data Protection Bill, 2022 

5.2) Ethical Dilemmas: “Balancing Innovation and Harm”

  1. Ethical Considerations in the Development and Use of Deepfake Technology
  1. Informed Consent – Consent is the primary factor in creating a deepfake of a person, yet the videos and audio used to create one are usually taken without the subject’s consent.
  2. Privacy violations – The creator of a deepfake has an ethical responsibility to protect individuals’ privacy rights and prevent misuse, but this is ignored in most cases, especially in non-consensual pornographic videos.
  3. Balancing Freedom of Expression and the Need for Regulation
  1. Freedom of Expression – Deepfake technology can also be a tool for creativity, satire and expression, so it is crucial to protect artistic and expressive freedoms, including the creation of parody and political satire.
  2. Transparency and Accountability – Deepfake tools should embed a watermark in every video they generate so that an ordinary viewer can readily recognise the content as synthetic (a minimal sketch of such labelling follows).
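
As a deliberately simple illustration of the labelling idea in the previous point, the sketch below uses the Pillow imaging library to stamp a visible “AI-GENERATED” notice onto a single image frame. The file names and label text are assumptions for demonstration; real disclosure schemes typically rely on robust or invisible watermarks embedded at generation time rather than a simple overlay.

    # Minimal visible-watermark sketch using Pillow (real systems use robust or invisible marks).
    from PIL import Image, ImageDraw

    def stamp_disclosure(in_path: str, out_path: str, label: str = "AI-GENERATED") -> None:
        img = Image.open(in_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Draw a contrasting box in the bottom-left corner and write the label on top of it.
        draw.rectangle([0, img.height - 40, 220, img.height], fill=(0, 0, 0))
        draw.text((10, img.height - 30), label, fill=(255, 255, 255))
        img.save(out_path)

    # Hypothetical usage on one synthetic frame:
    # stamp_disclosure("synthetic_frame.png", "synthetic_frame_labelled.png")
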
VI) The Role of Media Literacy

6.1) Defining Media Literacy: “Empowering the Digital Citizen”

  1. Concept and Importance – Media literacy is the ability to access, critically analyse and evaluate media content, and to determine its accuracy and credibility.

As all the information in the modern world has become digital and visual, media literacy has become more crucial for understanding content.

  1. Education and Awareness “Building a Resilient Society”
  2. Integrating Media Literacy into Education – Media literacy should be embedded in schools and universities: students should be taught to analyse sources critically and to recognise manipulated videos and information. More simply, these skills can also be promoted through public awareness campaigns.

6.3) Tools and Techniques: “Detecting the undetectable”

  1. Fact-Checking and Verification – Fact-checking organisations should be developed, and their role should be taken seriously. Social media platforms like Instagram, Meta and X should also create more robust departments for fact-checkers, and their policies need to be enhanced.

6.4) Encouraging Critical Thinking: “From Passive Consumers to Active Evaluator”

a) Developing Critical Thinking skills 

  1. Encouraging Skepticism – Skepticism, i.e. questioning the authenticity of online content, plays an essential role in today’s world.
  2. Analytical Skills – We should promote the ability to critically analyse and interpret media messages; the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) offers one framework for evaluating information (a toy illustration follows).
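
Purely as an illustration of how the CRAAP criteria could be applied systematically, the sketch below scores a source against the five questions. The equal weighting and the example answers are assumptions for demonstration, not a validated instrument.

    # Toy CRAAP-test checklist: each criterion is answered yes/no and a simple score is reported.
    CRITERIA = ["currency", "relevance", "authority", "accuracy", "purpose"]

    def craap_score(answers: dict[str, bool]) -> float:
        """Return the fraction of CRAAP criteria the source satisfies."""
        return sum(answers.get(c, False) for c in CRITERIA) / len(CRITERIA)

    # Hypothetical evaluation of a viral video clip:
    example = {"currency": True, "relevance": True, "authority": False,
               "accuracy": False, "purpose": True}
    print(f"CRAAP score: {craap_score(example):.0%}")  # 60%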

VII) Solutions and Recommendations

7.1) Detection technologies

  1. Advancements in AI and Machine Learning – We should use AI itself to detect deepfake content, training deep learning models, neural networks and other machine learning algorithms for the task. Tools such as Deeptrace, Sensity AI and Microsoft’s Video Authenticator already exist.
  2. Digital Watermarking – Every deepfake-generation application could be required to watermark each video created on its platform.
  3. Provenance Tracking – Technologies like blockchain can be used to trace the history and origin of digital content; Adobe’s Content Authenticity Initiative, for example, focuses on content provenance (a minimal fingerprinting sketch follows this list).
  4. User-Friendly Verification Tools – Verification tools for audio-visual content such as deepfakes should be designed so that every person can use them, for example apps and browser extensions that help verify content authenticity.
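
To make the provenance idea in points 2 and 3 above more concrete, here is a minimal sketch that fingerprints a media file with a SHA-256 hash and looks it up in a small local registry. The file names and the registry format are assumptions for illustration; this is not how Adobe’s Content Authenticity Initiative or any blockchain system actually works, only a simplified picture of content fingerprinting.

    # Minimal content-fingerprinting sketch: hash a media file and look it up in a local registry.
    import hashlib
    import json
    from pathlib import Path

    REGISTRY = Path("provenance_registry.json")  # assumed local store of known-authentic hashes

    def fingerprint(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def register(path: str, source: str) -> None:
        """Record a file's hash and its claimed source in the registry."""
        registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
        registry[fingerprint(path)] = source
        REGISTRY.write_text(json.dumps(registry, indent=2))

    def verify(path: str) -> str:
        """Report whether a file matches a registered original."""
        registry = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
        return registry.get(fingerprint(path), "unknown provenance - treat with caution")

    # Hypothetical usage:
    # register("press_briefing.mp4", "Published by the official press office")
    # print(verify("press_briefing_copy.mp4"))

Note that even a trivial re-encoding changes the hash, which is why production provenance systems bind cryptographically signed metadata to the content rather than relying on exact-match hashes alone.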

7.2) Legal frameworks and Regulations 

  1. Current Legal Landscape – The legal frameworks under which deepfakes are currently governed have been outlined above, so we move directly to the limitations of the current framework.
  2. Limitations of the current framework 
  • Section 66E of the IT Act – It does not prevent deepfakes from being created in the first place.
  • Section 499 IPC – It requires proof of damage to a person’s reputation, which may not always be available in a defamation case.
  • Intent and Attribution – Proving malicious intent behind the creation of a deepfake can be difficult; a deepfake made as satire or humour might be misconstrued as malicious.
  • Rules are needed to identify the originators and distributors of deepfakes, which is complex, especially where methods like phishing are used.
  1. Proposed legal framework – Parliament should introduce a dedicated AI Act to specifically address and govern deepfakes.
  • Scope – To govern the creation, distribution and use of deepfakes.
  • The Act should establish ethical guidelines for AI and deepfake development, ensuring respect for individual privacy and consent.
  • A dedicated regulatory body should oversee the implementation of the AI Act.
  • The Act should also mandate periodic public awareness campaigns.
VIII) Conclusion

The rise of deepfake technology poses significant challenges to democratic institutions, threatens the national security of every country, and endangers individuals. Addressing these challenges is hard but possible, and it requires a multi-faceted approach encompassing technological solutions, robust policy frameworks and widespread public engagement. By developing and implementing advanced detection technologies and comprehensive legal regulation such as a dedicated AI Act, and by promoting media literacy through education and public awareness campaigns, we can mitigate the threats we face both as countries and as individuals in our daily lives. To secure our digital future and ensure that deepfakes are used ethically and responsibly, technology professionals, lawmakers, educators and the general public must collaborate. By working together to balance innovation and protection, society will be better prepared to navigate the challenges of the digital age effectively.

SUBMITTED BY:

SHASHANK SINGH

SYMBIOSIS LAW SCHOOL PUNE