Hyperreality and the Cinematograph Act: Regulating AI in Indian Cinema Before Truth Fades

ABSTRACT

The emerging use of Artificial Intelligence (AI) is revolutionising the way art and cinema are created. While the film industry benefits significantly from AI's capabilities, it also faces unprecedented challenges. One challenge falls to us, the audience: determining whether the content we consume has been manipulated by this easily accessible, prompt-driven technology. Cinema plays a huge role in shaping public perception, influence, and opinion in society. When reality is manipulated in cinema through AI, it not only erodes public trust but also threatens public order and shared truth. This inability to distinguish the real from the fake creates the condition of hyperreality described by French philosopher Jean Baudrillard. This paper explores the emergence of hyperreality in Indian cinema through the use of AI and deepfake technology, arguing for urgent reform of the Cinematograph Act, 1952, to preserve truth and accountability in visual storytelling.

KEYWORDS

Hyperreality, Cinematograph Act, Artificial Intelligence, Cinema, Deepfakes, Legal Reform, Truth Manipulation

INTRODUCTION

In India, cinema is not merely a form of entertainment but a powerful social instrument that shapes culture, influences political narratives, and moulds public consciousness. In recent years, the rise of Artificial Intelligence (AI), mainly deep learning tools that can generate hyperrealistic images, audio, and video, has completely changed the landscape of visual storytelling. Though this technology enhances creativity, it also makes it easier to rewrite history and manipulate the truth, thereby erasing accountability.

Jean Baudrillard’s concept of hyperreality, a state where representations become more real than reality itself, strangely resonates with the current digital environment. Deepfake videos, AI-generated performances, and synthetic performers are shaking the pillars of audience trust and the authenticity of the cinematic experience. Indian regulatory mechanisms are lagging behind: the Cinematograph Act, enacted in 1952, remains the base of film regulation, yet it is outdated in scope and ill-equipped to deal with technological manipulation and AI-driven content.

AI in cinema has terrific advantages. It can enhance visuals and audio, or transform them entirely as instructed, refining the output with each prompt. Film director Matt Szymanowski, in his TEDx Talk, explains why he believes AI is the next revolution in cinema: another groundbreaking addition to the technology of filmmaking that can bring bold and innovative imaginations to life. The advantages of using AI in cinema are visible, from script analysis, casting, and location scouting to visual effects, marketing, and production, but they come with a corresponding set of disadvantages: the privacy of the artist, copyright issues, and, central to our theme, the distortion of truth, whether through deepfakes in video or alterations in audio.

The bigger threat comes to light when we consider cinema depicting historical and political subjects. This paper explores AI-driven hyperreality and how it challenges truth in cinema. It examines the current legal vacuum under Indian film law, arguing for urgent reform. It aims to analyse how the vast grey area between today's technology and outdated regulation allows deepfake technologies to distort societal understanding of history, identity, and accountability, and why the Cinematograph Act must evolve to safeguard public perception from artificially constructed realities.

METHODOLOGY

This paper takes a qualitative doctrinal perspective, highlighting a critical examination of the Cinematograph Act, 1952 and its shortcomings in regulating developing AI technologies in film. It is based on secondary materials such as laws, scholarly articles, policy documents, case studies, and international legal instruments (such as the EU AI Act and SAG-AFTRA contracts) to establish regulatory loopholes and recommend reforms.

Current examples of Here (2024), Roadrunner, and Secret Invasion are utilized illustratively to bring out the ways in which AI affects popular opinion and legal responsibility. The research also draws on media ethics, technological advancements, and cultural theory, notably Jean Baudrillard’s concept of hyperreality, to frame the social implications of cinematic productions generated by AI.

In general, the study is interdisciplinary, integrating legal interpretation, media analysis, and comparative approaches to suggest modifications that reconcile artistic freedom with legal and ethical protections.

LEGAL GAPS IN THE CINEMATOGRAPH ACT

The Cinematograph Act was enacted at a time when cinema was a physical, analog medium. It aimed to regulate the exhibition of films in India and to ensure that content aligned with the moral and cultural values of the nation. Its central regulatory body, the Central Board of Film Certification (CBFC), was granted the power to certify films as “U”, “UA”, or “A”, based on content involving violence, obscenity, religion, or public decency.

Yet, with the advent of artificial intelligence in the movie industry, this legal framework has become grossly insufficient and obsolete. The Act does not take into account recent digital phenomena, including:

  • AI-created imagery or sound
  • Deepfake technology
  • Synthetic replication of actors
  • Unconsented voice cloning
  • Undisclosed altering of historical or political footage

Failure to Recognize AI Content

One of the most obvious lacunae in the Act is its complete omission of any definition of AI-created media. Take, for instance, the movie Here (2024), in which Tom Hanks and Robin Wright were de-aged with the help of AI tools provided by Metaphysic. The question arises whether such digitally altered performances require fresh consent, and who owns the rights: the actor, the production house, or the AI firm?

In India, filmmakers are not required to reveal whether a scene or character has been created using AI. A movie might contain deepfake shots of historical personalities, or even digitally resurrect a public figure to deliver politically charged messages, with no legally binding requirement to disclose that the footage is artificial.

Consent & Personality Rights

The Cinematograph Act also does not protect actors’ personality rights, particularly with regard to AI-generated content using their face, body, or voice. With the proliferation of deepfakes, an actor can now be digitally placed in a scene he never acted in, even posthumously, raising difficult issues of posthumous rights, exploitation, and digital resurrection.

Globally, this issue is being passionately debated. In the United States, SAG-AFTRA (the actors’ union) demanded contractual protections in its 2023 strike to ensure that studios cannot use AI to replicate actors without explicit, ongoing permission. The EU’s AI Act likewise requires disclosure when synthetic media is presented to consumers.

India’s legal framework, on the other hand, lacks any provision for express consent in such situations, making artists and celebrities susceptible to abuse. 

CBFC’s Role is Ill-Equipped

Today the CBFC censors only on grounds such as nudity, hate speech, or defamation; the Board has no mechanism or expertise to identify or assess synthetic content. With tools such as Sora creating whole scenes from text prompts and apps like Midjourney used for visual design and storyboarding, there is no method of checking whether what the viewer sees is real.

In India’s politically charged environment, this is especially dangerous. Picture a biopic dropping in manufactured but believable deepfake videos of a political figure expressing something incendiary. If those are not disclosed or regulated, they can distort historical accounts or sway elections, with minimal legal punishment.

Comparative Perspective

Internationally, this issue is increasingly being recognized. The UK is debating a “deepfake consent law”, and nations such as China have already enacted laws requiring watermarks and transparency in the case of AI-produced content. In the EU, the Digital Services Act calls for transparency in the content, such as for media produced by algorithms or neural networks.

India, by comparison, remains in a regulatory vacuum. The Cinematograph (Amendment) Bill, 2023 made strides in fighting piracy, but it still lacks any provision for AI or digital manipulation in filmmaking.

CINEMA AS A HYPERREALITY MEDIUM: SOCIAL INFLUENCE

The term “hyperreality” was popularized by theorist Jean Baudrillard, who described a state in which the line between reality and simulation is blurred so thoroughly that people come to believe the artificial is real. Cinema, already a highly illusionist medium, is quickly becoming an even more potent vehicle of hyperreal narrative creation, particularly with the addition of AI, deepfakes, and generative visual and audio software.

This is a revolution with deep consequences: what we view on screen is no longer necessarily an interpretation of truth, but a manipulated, constructed, and synthetic reality that is almost indistinguishable from the truth and in certain situations, more convincing than the truth.

Deepfakes and the End of Visual Trust

The international research report “Deepfakes and the Breakdown of Truth” (Digilabs) warns that deepfake technology is not merely a creative tool but an affront to truth itself. When audiences cannot tell whether someone truly said or did something in a clip, film and media cease reflecting reality and begin rewriting it.

The documentary Roadrunner (2021), for example, employed AI to synthesize the voice of late chef Anthony Bourdain for narration. While it was tastefully done, audiences and critics were conflicted: was it ethical? Was there ever permission granted? More significantly, did audiences perceive the voice as real, hence find it more believable than if it had been marked “synthetic”?

This is the essence of the hyperreality crisis: artificial video is starting to become more ‘truthful’ than actuality itself.

Psychological and Political Implications

In a country like India, where cinema heavily influences public sentiment and where film often intersects with politics, the dangers of hyperreality are amplified. A deepfake video of a political figure making a fabricated statement could go viral faster than fact-checkers can debunk it. Add dramatic music, cinematic visuals, and AI-enhanced clarity, and the result is a visually authentic lie that feels more convincing than grainy real footage.

AI “news-style inserts” in movies, for example montages that look like genuine news reporting, also mislead public memory by fabricating historical or political events, thus warping collective consciousness. Without clear labelling, audiences (particularly rural or low-literacy communities) cannot always tell what is true and what is not.

Take the case of India’s strong emotional affinity with biopics and period epics (e.g., The Accidental Prime Minister, PM Narendra Modi, Sam Bahadur). Were these to incorporate AI-generated voices or events re-created in situ but with fictional colouring, they could effortlessly reprogram public consciousness, all in the name of creative freedom.

Media Literacy vs. Visual Sophistication

Though visual media has become more sophisticated, the overall level of media literacy among the population has not caught up. Few viewers question what they see, particularly in films, which are perceived as authoritative, expensive, and carefully curated. AI-enhanced realism further entrenches this psychological attachment, making it hard for individuals to challenge visual veracity.

In addition, as the StudioBinder video notes, the issue lies not only in post-production but also in pre-production: scriptwriting, greenlighting, and concept development are all now subject to AI software such as ScriptBook, Midjourney, and Sora. The whole pipeline is vulnerable to narrative bias fed in through machine-generated logic.

Case in Point: The “Secret Invasion” Controversy

Marvel’s Secret Invasion used AI-generated imagery in its opening credits, a decision that drew severe backlash. Fans felt cheated, not only because it was AI, but because they were not informed. This again shows how undisclosed use of AI can destroy audience trust, particularly once the facade is shattered.

Film has always straddled the border between reality and fantasy. But with deepfake software and AI, it now threatens to erase that boundary entirely, producing a hyperreality in which fact, fiction, and feeling can be swapped back and forth.

In a democratic country where the public’s opinion is easily influenced by movies, this becomes a serious threat to society. Unless laws adapt to make disclosure, transparency, and digital ethics mandatory, we risk entering a world where individuals give more credence to what they watch on screen than to what they learn from history books.

SUGGESTIONS

While artificial intelligence continues to redefine the limits of creative work, the Indian legal system, particularly the Cinematograph Act, 1952, needs to adapt to counter the growing complexity and threat posed by media made with AI. The objective isn’t to suppress innovation or creativity, but to reconcile creative progression with legal responsibility, ethical disclosure, and safeguarding the public interest.

1. Reform the Cinematograph Act, 1952

The Act, which originally targeted morality, obscenity, and violence, fails to address the artificial nature of AI-generated content. It needs to be revised to:

  • Provide definitions for “synthetic media,” “deepfake,” “AI-generated performance,” and “digital likeness replication”.
  • Require disclosures for any content that has been created, edited, or manipulated using AI-based tools, whether for voice, visuals, or performance.
  • Empower the CBFC to evaluate not just moral content but also technological alterations in submitted films, especially where AI content might deceive audiences.

For example, the CBFC should identify and require labelling for scenes featuring AI-generated actors, cloned voices, or historical re-creations.

2. Implement “Consent Protocols” for Synthetic Replication

Indian law currently does not specifically demand consent for the synthetic use of a person’s face, voice, or personality in fictional or composite characters. To address this, we need to:

  • Implement mandatory consent provisions for public figures and actors, including AI-facilitated duplication.
  • Extend personality rights to include AI likeness and voice.
  • Establish penalties for unauthorized use, particularly where it can cause reputational or emotional damage or mislead the public.

This safeguards not only celebrities but also ordinary people whose images can be used without permission, particularly in low-budget films or advertisements.

3. Digital Watermarking and Labelling Norms

  • Every AI-created or AI-edited scene must carry an invisible or visible watermark, attested by the CBFC or a digitally empowered government agency.
  • Require an on-screen notice for altered scenes (e.g., “This scene contains AI-generated images.”).
  • Establish a reliable certification system for material employing AI responsibly and openly.

This assists in developing audience consciousness, enhances legal traceability, and encourages voluntary compliance on the part of filmmakers.
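The traceability idea behind the proposal above can be sketched in code. Purely as an illustration (the manifest fields, the SHA-256 hashing scheme, and the verification workflow are assumptions for this sketch, not anything prescribed by the Act or any existing CBFC process), a disclosure manifest could bind an AI-use label to the exact video file it describes, so that a regulator or platform can later confirm the label has not been detached or the file swapped:

```python
# Illustrative sketch only: a minimal "disclosure manifest" binding an
# AI-use label to a scene file. All field names and the workflow are
# hypothetical; no real CBFC or statutory scheme is being described.
import hashlib
import json


def make_disclosure_manifest(scene_bytes: bytes, scene_id: str,
                             ai_techniques: list[str]) -> str:
    """Produce a JSON label tied to this exact file via its SHA-256 hash."""
    digest = hashlib.sha256(scene_bytes).hexdigest()
    manifest = {
        "scene_id": scene_id,                 # hypothetical identifier
        "sha256": digest,                     # ties the label to the file
        "ai_generated": True,
        "techniques": sorted(ai_techniques),  # e.g. ["de-aging", "voice-clone"]
        "on_screen_notice": "This scene contains AI-generated images.",
    }
    return json.dumps(manifest, indent=2)


def verify_manifest(scene_bytes: bytes, manifest_json: str) -> bool:
    """Re-hash the file and compare: a mismatch means the label and the
    footage no longer correspond (tampering or substitution)."""
    manifest = json.loads(manifest_json)
    return hashlib.sha256(scene_bytes).hexdigest() == manifest["sha256"]
```

A hash-based label of this kind is tamper-evident rather than tamper-proof: it cannot stop someone from stripping the manifest, which is why the proposal pairs it with attestation by a certifying authority.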

4. Criminalize Misuse of AI in Cinema

Special provisions must be made in the Cinematograph Act or the Information Technology Act to criminalize:

  • Use of AI to depict historical events or political personalities in a false or defamatory manner.
  • Deepfaked or synthesized scenes used to incite hatred, disseminate falsehoods, or manipulate facts, particularly in dramatized news formats.
  • AI voice or facial cloning used without permission in documentary-style productions.

For illustration, a movie that deepfakes a politician delivering a hate speech, even if fictional, should carry prominent disclaimers and perhaps be subject to regulatory action.

5. Set up an AI & Cinema Ethics Board

In addition to CBFC, a dedicated advisory council on AI & Cinema Ethics can be formed with:

  • Tech specialists (AI/ML researchers)
  • Law academics
  • Creatives’ guild representatives (WGA, actors’ unions, etc.)
  • Psychologists and media sociologists

Such a panel could examine contentious content, advise on ethical questions, and develop evolving AI usage guidelines that keep pace with emerging trends.

6. Public Awareness and Media Literacy Campaigns

Laws are useless without public awareness. The government and industry players need to:

  • Conduct national awareness campaigns on the realities of deepfakes and synthetic media.
  • Add AI literacy courses to film school programs and journalism courses.
  • Urge OTT platforms to tag synthetic content and encourage open filmmaking.

CONCLUSION

Artificial Intelligence is no longer a theoretical idea about the cinema of the future; it is already changing how films are written, filmed, edited, and even experienced. From virtual performers to AI-created voices and entire deepfaked scenes, the divide between fact and fiction continues to shrink. As the technology advances, so does the possibility of distortion, manipulation, and deception, not only within movies but in the way society experiences reality itself.

Still, however much cinema changes, Indian law, especially the Cinematograph Act, 1952, remains rooted in morality and obscenity concerns. It says little or nothing about the legal status of synthetic media, tampered identities, or deepfakes. The powers of censorship and certification currently vested in the CBFC are not suited to managing these digital shifts.

Through the lens of hyperreality, we’ve seen how AI-generated visuals can rewrite political memory, fabricate news, or manipulate public opinion, all under the guise of entertainment. In a country where cinema holds immense cultural and emotional influence, the risk of unchecked AI is not just legal or creative; it is social and democratic.

Thus, regulation must develop not to stifle creativity, but to protect authenticity and human rights. Mandatory disclosure of AI-generated content, protection of actors’ personality rights, digital watermarking, and stringent penal provisions for synthetic misinformation are not merely advisable; they are the need of the hour.

Cinema is a mirror of society, but also a shaper of it. If we fail to reckon with the role of new technologies in film production, we risk substituting truth with illusion, memory with simulation, and art with algorithm. The solution is not to stop innovation, but to make sure creativity continues to serve the truth, not substitute for it.