We are dancing to the tune of AI, and its algorithms are dictating the rhythm of our world. Its increasing integration into human systems has raised questions about its impact on intellectual property rights. This paper conducts a thorough examination of safe harbour provisions in India, citing relevant cases and offering suggestions. It then critically assesses the current frameworks in place. Moving forward, it analyses the challenges presented by AI in the context of intellectual property infringement, shedding light on the applicability of fair use exceptions. In a nutshell, the paper advocates a proactive and harmonised approach to determining intermediary liability in the era of AI-generated content and its impact on intellectual property.
Keywords: intellectual property rights, safe harbour, current frameworks, fair use exceptions, intermediary.
Introduction:
The internet went from a confusing new invention to something we can’t imagine our lives without. It is everywhere, from finding the best pizza to connecting with loved ones. But with all this power comes responsibility. The companies that control how we access information, such as search engines like Google and social media platforms like Instagram, act as intermediaries: the ‘middlemen’ of the online world. They connect us with what we need, whether it’s buying something, sending money, or even finding the perfect cat video. These internet intermediaries provide the services that enable people to use the internet. Social networks provide platforms for users to share self-generated content, while search engines index and facilitate access to user-generated content.
The tremendous increase in online content transmission, the rise in the number of these intermediaries and, most importantly, the advent of Artificial Intelligence have led to a significant upsurge in online intellectual property violation cases. A recent surge of IP infringement cases against these internet intermediaries over infringing content uploaded by users has given rise to many debates and concerns.
This article first explains the current safe harbour provisions in India, including the landmark cases that laid the foundation for the existing law. Second, it critically examines the current framework on intermediary liability. Third, it analyses the impact of Artificial Intelligence on IP infringement and intermediary liability.
Research Methodology:
The research on internet intermediaries and their liability is qualitative. Data was collected from various sources, including legal documents, academic articles, law journals and websites, which form the core of the study. Analysis of legal frameworks and critical examination of existing law and new intellectual property challenges provide a holistic view.
Literature Review:
- Evolving Scope of Intermediary Liability in India by I. Gupta (2023): Gupta’s research investigates the extent of intermediary liability within the Indian context, emphasizing the diverse and ambiguous nature of various approaches. The study delves into the legal structure in India and explores its consequences for online intermediaries.
- Reforming Intermediary Liability in the Platform Economy by G.F. Frosio (2017): Frosio’s study examines the evolving scenario of intermediary liability within the platform economy. It specifically discusses the implications of safe harbor provisions and how these exemptions influence online intermediaries.
- Intermediary Liability in India by P. Advani (2013): As a component of the India Research Project, this article presents perspectives on intermediary liability in India. It furnishes a fundamental comprehension of the legal framework and its consequences for online platforms.
- Intermediary Liability in the Context of Online Platform: Comparative Analysis of Different Legal Approaches (2022): This study compares global legal approaches, analysing the challenges and opportunities that online platforms face and the lessons that can be learned from the comparative analysis.
Safe Harbour Provisions in India: Navigating Intermediary Liability
The Information Technology Act, 2000 (IT Act) establishes India’s legal framework for intermediary liability. It is essential to interpret its provisions alongside those of the Indian Copyright Act, 1957, which explicitly prohibits actions such as ‘authorising copyright infringement’ as well as involvement in ‘secondary infringement’.
The IT Act defines an “intermediary” as “any person who, on behalf of another person, receives, stores, or transmits that record or provides any service with respect to that record.” Additionally, it specifies that intermediaries include “telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places, and cyber cafes.”
Chapter 12 of the IT Act deals exclusively with intermediary liability, and Section 79 of the IT Act grants intermediaries immunity from liability for third-party information in specified situations. This provision is akin to the European E-Commerce Directive and the ‘safe harbour’ framework of the United States Digital Millennium Copyright Act. Specifically, it provides that an intermediary will not be held liable for content generated by a third party if:
a) it only “provides access to a communication system over which information made available by third parties is transmitted or temporarily stored”; or
b) “it does not initiate the transmission, select the receiver of the transmission, and select or modify the information contained in the transmission”.
If the services provided by the intermediary meet the aforementioned conditions, it can claim immunity from liability for the activities of third parties. However, this immunity is subject to the following conditions:
1. First, the intermediary is required to exercise due diligence “while discharging his duties” under the IT Act and to adhere to relevant directions. Failure to maintain due diligence results in forfeiture of the immunity provided by this provision.
2. Second, the intermediary must not directly participate in carrying out the alleged unlawful act. Disqualifying involvement may include conspiracy, aiding, abetting, or inducement through means such as threats or promises.
3. Third, the intermediary must promptly ‘remove or disable access’ to illicit content on its platform upon obtaining actual knowledge of it or receiving notification from the government regarding objectionable content.
In Myspace I, it was held that Section 79 of the IT Act does not apply to copyright infringements: the non-obstante clause in Section 81 of the IT Act, which establishes the Act’s priority over other laws, expressly carves out the Copyright Act and the Patents Act. The Madras High Court, in the Vodafone case, nevertheless applied Section 79 to copyright infringement, contrary to Myspace I. Myspace II, an appeal from Myspace I, recognised the nuance, overturning the earlier decision and affirming that the general safe harbour extends to copyright infringements. The absence of a specific safe harbour provision in the Copyright Act at the time of Myspace I sparked a discussion on the necessity for one tailored to copyright infringement.
As a result, the Copyright (Amendment) Act, 2012 introduced Section 52(b) and Section 52(c), establishing safe harbour provisions for intermediaries. Consequently, India’s copyright safe harbours encompass Section 51(a)(ii), Section 52(b) and Section 52(c) of the Copyright Act, 1957, read with Rule 75 of the Copyright Rules, 2013.
Following Myspace II, in a scenario where both safe harbours apply to a copyright violation, the court has not specified which takes precedence. The most likely reading of Myspace II is that the general immunity under Section 79 of the IT Act operates over and above the copyright safe harbour. In essence, Section 79 serves as an additional layer of protection, becoming relevant only once secondary violation under the Copyright Act (Section 51(a)(ii)) and the absence of fair use [Section 52(b) and Section 52(c)] are established. If secondary violation and the lack of fair use are proven, the safe harbour under Section 79 of the IT Act comes into play.
Intermediary Liability and Intellectual Property Infringement
Intermediary liability is highly pertinent to intellectual property across various dimensions. The crucial functions that intermediaries play in the utilisation and distribution of intellectual property add to their growing significance, ensuring easy accessibility and availability in line with their primary purpose. Presently, intermediary liability for IP infringement extends to situations where intermediaries actively engage in the infringing process rather than merely serving as data-transmission service providers. As service providers, internet intermediaries bear the responsibility of monitoring content published on their platforms.
Hence, if an intermediary neglects to act against an infringement despite having adequate knowledge of it, the intermediary becomes liable. This principle is followed by the majority of countries worldwide. While the “safe harbour” concept in Section 79 of the IT Act provides a defence for intermediaries, there are instances where intermediaries are evidently involved in infringements of intellectual property rights.
In Google India Private Ltd v. M/S Visakha Industries, a defamation case, the complainant alleged that Google India hosted defamatory online articles. Key issues included Google India’s status as the relevant intermediary, its eligibility for safe harbour protection under Section 79 of the IT Act, and its potential liability for criminal defamation. The Supreme Court ruled that control and responsibility should be determined at trial, not under Section 482 of the CrPC. Applying the unamended Section 79, the Court emphasised its precedence over the IPC, shielding intermediaries from separate IPC actions. The Court held that internet operators may be held liable for criminal defamation where they had the power to remove defamatory content but refused to do so upon request.
The “Prajwala case” holds significant legal importance for intermediary liability in the context of online content in India. In 2015, the NGO Prajwala brought to the attention of the Supreme Court of India the dissemination of sexually violent videos on the internet. The case played a pivotal role in shaping the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Rules 2021), which establish regulations for online intermediaries and their responsibility for content on their platforms. It reflects the changing landscape of intermediary liability in India, as online platforms encounter heightened scrutiny and accountability, and it underscored the challenge of balancing freedom of expression with the obligations of online intermediaries. Moreover, it prompted the Supreme Court to issue directives for creating guidelines and Standard Operating Procedures (SOPs) to combat harmful content, particularly child pornography and violent imagery, highlighting the imperative of addressing illegal content on the internet.
Further, in the Snapdeal case, the High Court of Karnataka quashed criminal proceedings against the directors of Snapdeal Private Limited, the e-commerce company operating “Snapdeal.com” in India. The proceedings, initiated six years after the fact, concerned the unauthorised sale of medicines. The High Court held that Snapdeal qualified as an “intermediary” under Section 2(1)(w) of the IT Act and, taking into account the immunity for intermediaries outlined in Section 79 of the Act, ruled that Snapdeal had exercised “due diligence” as mandated by Section 79(2)(c) of the Act and the Rules 2021. This diligence ensured that its sellers adhered to applicable laws, thereby absolving Snapdeal of accountability for the unauthorised sale conducted by a third-party seller on its platform.
In the recent case of Suki Sivam v. YouTube Google LLC, United States, and others, allegations of copyright infringement were directed at several internet intermediaries, including YouTube, Facebook, and WhatsApp. The plaintiff contended that unidentified individuals were posting his speeches on these platforms without his consent. The court concluded that the defendants met the criteria for intermediaries and were immune from liability under Section 79 of the IT Act. It further ruled that the plaintiff had not identified any specific infringing content by providing URLs or phone numbers, leading to the dismissal of the suit. Emphasising that intermediaries can be held accountable only once informed through a court order, the court highlighted the impracticality of expecting them to scrutinise every upload and post on their platforms for potentially infringing content.
Thus, India’s legal landscape on intermediary liability, shaped by the IT Act and Copyright Act, provides a framework granting immunity to intermediaries under specific conditions. Recent cases like Snapdeal and Suki Sivam underscore the importance of due diligence by intermediaries and their adherence to guidelines. While the evolving Prajwala case led to significant regulations, the nuanced Myspace decisions added complexity. Ultimately, India’s courts grapple with balancing freedom of expression and holding platforms accountable for harmful content, reflecting the evolving nature of intermediary liability in the digital age.
Under the Microscope: Exposing the Cracks in India’s Safe Harbour
The aforementioned legal cases and amendments have aimed to address the weaknesses in the intermediary immunity system. However, numerous uncertainties and persistent challenges in the law remain unresolved.
- Ambiguous Language:
Section 79(3)(c) of the IT Act empowers the government to block content that is “prejudicial to public order” but does not define the term, leaving room for arbitrary interpretation. For instance, in February 2021, Twitter initially complied with Indian government orders to block various accounts associated with the farmers’ protests, citing concerns about “inflammatory content.” The unclear definition of such content raised fears of political motives and suppression of dissent. Following public backlash, Twitter restored most accounts, but the incident highlighted the risk of arbitrary interpretation of such terms.
- Increased Government Authority:
The Rules 2021 empower the government to demand user data from internet platforms within 72 hours, raising concerns about surveillance and privacy violations. In 2021, the government also put immense pressure on Twitter to remove tweets critical of its handling of COVID-19, showcasing the potential for government overreach enabled by the increased takedown powers.
- Impact on Free Speech:
The recent rise in self-censorship among journalists and activists, apprehensive of government intervention, underscores the legislation’s chilling effect on free expression. A provision requiring platforms to actively monitor and remove “egregious content”, without precise definitions, serves as a broad and ambiguous censorship tool, posing a threat to diverse voices.
- Transparency and Accountability Challenges:
The absence of independent oversight and transparency in content-removal decisions, which are frequently made by private companies using unclear internal guidelines, restricts users’ avenues for seeking remedies. For example, the Act’s grievance redressal mechanism lacks defined timelines and independent adjudication, making it difficult for users to effectively contest content-removal decisions.
- Government Influence Over Oversight Mechanisms:
The IT Act allows the government to appoint members to grievance redressal committees, raising concerns about potential bias and lack of independent oversight.
- Additional Concerns:
Provisions related to data localization and government access to user data raise privacy concerns. The Act’s focus on content control rather than addressing root causes is criticized for being ineffective.
It is crucial to acknowledge that there are supporters of the Act who view it as essential for combating online harms. However, the outlined concerns underscore the necessity for a more nuanced and balanced approach to regulate online intermediaries in India.
Charting a Path Forward: Analysing Liability for AI-Generated Content
Artificial Intelligence (AI) has revolutionised various industries, driving innovation and creativity, and its impact on intellectual property law is undeniable. The existing legal frameworks struggle to address the responsibility of platforms hosting AI-generated content. Traditional copyright laws may not be adequately equipped to manage scenarios where AI systems independently generate content, leading to challenges in determining ownership and accountability.
There has been a rise of “Generative AI” models capable of creating content that blurs the lines of originality and authorship. For instance, the music platform “Musicoin” employs blockchain technology to empower musicians to sell their music directly to fans, avoiding the need for record labels and distribution fees. Similarly, the OpenSea NFT marketplace employs smart contracts to facilitate the smooth transfer of ownership and ensure royalty payments for digital artworks. In 2023, a widely circulated deepfake video featured actor Tom Hanks endorsing a political candidate, underscoring the capability of AI to generate misleading content that may violate the intellectual property rights of individuals. Legal systems may require amendments to regulate AI as a creative force, potentially introducing new categories of liability or ownership.
AI software such as ChatGPT/GPT-4, developed by OpenAI, and Bard, by Google, generates human-like content by recombining existing ideas. Content generation has moved beyond asking questions of Google’s search engine to customised personal content. This has raised apprehensions regarding the violation of copyright, privacy and more. These companies also use data generated by users to train their software, which raises the question: to what extent can they claim the ‘fair use’ exemption under the Copyright Act, 1957?
According to the Copyright Act, 1957, originality is required for copyright protection, meaning that a work must originate from its author. In India, the standard mandates a minimum level of creativity beyond mere skill and effort. AI tools, drawing on pre-existing data from various sources, produce outcomes by combining those sources within their models. Nonetheless, the resulting output might not meet the creativity threshold necessary for copyright protection, particularly if it is seen as a mere extraction of information from existing sources lacking original creative elements.
Under Indian copyright law, works produced by AI may be eligible for protection as “derivative works” if they exhibit substantial variation from pre-existing material. The Copyright Act also recognizes authorship of work generated by computer, attributing authorship to the individual causing the work to be created.
In 2019, the High Court of Delhi in Navigators Logistics Ltd. v. Kashif Qureshi & Ors. rejected a copyright claim over computer-generated content, citing the absence of human intervention. Yet in 2020, the Copyright Office recognised the AI tool “Raghav” as a co-author of an artwork, the first time an AI tool received recognition as an author in India. A withdrawal notice followed, however, requiring the applicant to clarify the legal status of the AI tool to the Copyright Office. This decision arguably overlooked the intention of the legislature, which expressly grants protection to computer-generated works under Indian law.
Indian courts have proactively dealt with the improper use of AI tools that leads to copyright violation. In Anil Kapoor v. Simply Life India, the court granted an injunction against the use of AI for generating fake and morphed content, especially for commercial purposes, with a focus on safeguarding individual personality rights.
The rapid rise of AI-generated content demands a change in our approach to intellectual property and its legal frameworks. Both developers and users of AI tools need to take proactive measures to address potential infringement. Legislators should enact a new statute, or amend the existing one, to require developers to use licensed content fairly and to adopt ethical data-training practices. Users should scrutinise terms of service, insist on indemnification, and advocate for clear policy and legislative guidance from Copyright Offices and courts. Striking a balance between protecting copyright and promoting innovation necessitates rethinking our understanding of creativity and legal incentives. In light of these pressing issues, the Parliamentary Report’s call for a “separate category of rights” for AI works needs to be implemented. Additionally, establishing an independent AI Council for platform certification, implementing accountability mechanisms for platforms, and strengthening transparency and cybersecurity protocols are crucial steps towards navigating the complex landscape of AI-generated content in a responsible and equitable manner.
Conclusion:
To sum up, the pervasive influence of the internet, guided by intermediaries like Google, Instagram, YouTube and ChatGPT, has made it an indispensable part of our lives. As reliance on these intermediaries grows, concerns about intellectual property rights surface, especially in the wake of increased online content transmission and the advent of AI. The surge in intellectual property violation cases, fuelled by the proliferation of content, the growing number of intermediaries and the advent of AI, necessitates a meticulous examination of existing legal frameworks.
The analysis of India’s safe harbour provisions revealed the intricacies of holding platforms accountable for harmful content. Despite amendments and case law addressing loopholes, challenges such as ambiguous language, increased government authority and the chilling impact on free speech persist. These challenges necessitate a more nuanced regulatory approach to ensure responsible practices. The rise of AI tools has introduced complexities in determining ownership and liability, prompting a re-evaluation of existing legal frameworks.
Navigating this complex landscape calls for proactive measures, including the establishment of an independent AI statute and a council for platform certification, robust accountability mechanisms and transparency protocols. Striking a balance between copyright protection and fostering innovation requires a paradigm shift in our understanding of creativity and legal incentives. The implementation of these measures, coupled with the Parliamentary Report’s recommendation of a “separate category of rights” for AI works, can pave the way for responsible and equitable practices in the realm of AI-generated content.
NAME: – Kashish Mittal
COLLEGE: – UPES, Dehradun.
