_____________________________________________________________
Abstract
In the wake of the advent of artificial intelligence, this paper analyzes the legal terrain of intellectual property in India. Under Indian law, generally only a natural person can be registered as an inventor or author. However, as AI grows increasingly autonomous, questions of ownership and patentability arise.
Artificial Intelligence (AI) has transitioned from a science-fiction idea to a dominant force in the global knowledge economy, the ‘new electricity’ in a world where wealth is increasingly flowing from tangible to intangible assets. India’s “AI for All” vision sees technology as a force for social advancement. However, the current Intellectual Property Rights (IPR) framework, anchored in Section 3(k) of the Patents Act, 1970 and Section 2(d) of the Copyright Act, 1957, still subscribes to anthropocentric norms that recognize only “natural persons” as authors, inventors or creators. This creates a deep accountability gap and an incentive puzzle. The lack of clear attribution of rights to autonomous AI outputs creates significant barriers to commercial exploitation, licensing, and royalty collection. A comparative analysis of the global responses to the “DABUS” case reveals that India, the USA and the EU have refused to recognize AI inventorship, whereas South Africa has issued the first patent for an AI invention and China has extended copyright protection to creative works reflecting the ‘intellectual activity’ of the development team. Monetization of research in flagship Indian institutions like the IITs and IISc proceeds essentially through the ‘work-for-hire’ doctrine and Technology Transfer Offices (TTOs), and its valuation is further complicated by the regulatory demands of the Digital Personal Data Protection Act (DPDPA), 2023. The paper ends with a call for moving away from classical legal paradigms and towards an agile regulatory system. It proposes the creation of a sui generis class of rights for AI inventions and the eventual codification of AI laws to create a robust ecosystem of responsible innovation. The paper notes that current protections for patents, copyrights and trademarks largely perceive AI as a tool, not a creator.
It also points to the challenges of originality and the likely necessity of future regulatory adjustments to cope with machine-made works. Ultimately, the paper presents a roadmap for firms to navigate innovation and IP enforcement in this changing technological and regulatory landscape.
Introduction
Artificial Intelligence (AI) has transitioned from an abstraction of science fiction to a fundamental driver of the global economy. As famously posited by Andrew Ng, AI serves as the “new electricity” of the 21st century, possessing a transformative power that revolutionizes every sector from pharmaceutical research to retail commerce.[1] This technological paradigm shift is taking place at a critical moment for global finance, in which societal wealth is moving from tangible to intangible assets in the form of patents, trademarks and algorithmic data. In the Indian context, the ‘AI for All’[2] roadmap of NITI Aayog has strategically identified AI as a transformative force for inclusive social development. However, the domestic legal infrastructure to support and monetize the outputs of AI is still in flux.
Previously, AI systems were seen as simply tools to support humans. As humans and technology have evolved, AI tools have now become autonomous entities capable of generating complex research, sophisticated art and patentable technology without significant human intervention. The rapid growth of AI-related patent applications in India, especially in telecommunications, transportation and healthcare, highlights the economic necessity of providing clarity in legal pathways to ownership. However, the current regime of Indian Intellectual Property Rights (IPR) is still based on the anthropocentric standards of authorship and inventorship. Statutory barriers, notably Section 3(k) of the Patents Act, 1970 and Section 2(d) of the Copyright Act, 1957, assume that a creator, author or inventor must be a natural human being, thus creating a profound accountability gap and an incentive dilemma for investors.
The lack of a clear ownership right in respect of AI-generated creations is a major obstacle to their commercial exploitation. Without a clear legal identity for the creator, processes such as licensing, royalty collection and the creation of university spin-off companies become fraught with risk. If such innovations fall into the public domain for want of statutory protection, the private sector and academia have a strong disincentive to invest in high-risk AI-generated creations. The economic routes to monetization therefore demand a radical reconciliation of Indian law with the present reality of AI intellect. This research paper critically analyses how India can amend its legal framework to recognize AI-human collaboration, thereby creating a robust and sustainable ecosystem for the commercialization of AI-driven innovation in a globalized market.
The Jurisprudential Dilemma: AI as a Legal Person
A primary jurisprudential challenge in the monetization of AI-generated works is the attribution of legal personhood. The term ‘person’ is derived from the Latin ‘persona’, signifying an entity recognized by law as capable of bearing legal rights and duties. Classic jurisprudence, as posited by Salmond, defines a person as any being whom the law regards as capable of rights and bound by duties.[3] Following the same trail, Indian law recognizes both natural persons and, explicitly, any company, association, or body of persons, whether incorporated or not.[4] However, it has yet to extend this legal fiction to autonomous algorithms.
The application of corporate juridical personality theories provides a critical framework for determining whether AI systems should be recognized as independent legal actors:[5]
- Realist Theory: This theory, advanced by Maitland and Gierke, holds that any body which has a ‘will’ and ‘life’ of its own should be endowed with personality as a matter of right. Proponents argue that AI is far more autonomous than traditional corporations, which rely on human agents, such as directors, to make decisions. On this view, if an AI is capable of self-learning and independent problem-solving, it is a real legal entity, not just a tool.
- Aggregate Theory: The proponents of this model (Adolf Berle and Gardiner Means) believe that legal entities are merely collections of natural persons whose relations with each other are regulated by contracts. On this theory, AI holds no separate legal personality; rather, its actions are traced back to the humans involved, such as the manufacturer, the programmer and the users who input commands.
- Fiction Theory: This theory, supported by jurists like Savigny and Salmond, holds that the personality of a non-human entity is entirely fictitious, created artificially by law for the sole purpose of facilitating legal transactions. In this sense, AI is conceived as an artificial person, created to embody collective human decisions and lacking any inherent capacity unless the state chooses to grant it one.
- Concession Theory: This theory rests on the concept of the sovereign state and claims that legal personality exists only by virtue of state recognition. AI would acquire legal capacity only if it performed a defined societal function or added value, as a corporation does.
- Hybrid Theory: This model implies that no single theory can fully explain the complex nature of AI. It denotes a simultaneous use of all models, whereby AI can be regarded as a ‘subject of rights’ (an owner) or as an ‘object of ownership’ (a tool) depending on its level of autonomy and the specific legal context.
Synthesizing these jurisprudential theories, three major factors emerge for attributing personhood to a non-human entity:
- Sentience, or a functional know-how of its environment.
- Intelligence sufficient to solve specific problems autonomously.
- Decision-making ability and the capacity to understand the consequences of actions.
Indian law has already granted personhood to non-human entities, which complicates the debate. For example, the Uttarakhand High Court, in a landmark decision in March 2017, declared the Ganga and Yamuna rivers “legal persons” or living entities with the same legal rights, duties and responsibilities as a person, to ensure their preservation and protection from pollution.[6] In 2013, the Indian Ministry of Environment and Forests officially banned dolphinariums, classifying dolphins as “non-human persons” because of their high level of emotional intelligence.[7] Saudi Arabia has granted citizenship to the robot ‘Sophia’, the first instance of an AI being given legal personhood. There is an ongoing global debate on whether the best model for holding AI accountable for civil and criminal wrongs is to grant it a limited form of personhood, as with a corporation.
Ownership and Authorship of AI Research in India
In India, the ownership and authorship of AI-generated research are governed by a human-centric legal framework primarily consisting of the Copyright Act, 1957, and the Patents Act, 1970. Currently, Indian law does not recognize AI as a legal person capable of holding Intellectual Property Rights (IPR) independently.
- Authorship under the Copyright Act, 1957
Under Indian copyright law, the status of AI-generated works remains a subject of evolving judicial interpretation, as the statute contains no explicit provisions for AI.
- The “Person” Requirement: Authorship is generally granted to human creators. Section 2(d) of the Copyright Act defines the “author”; sub-clauses (iv) and (v) describe the author as a “person”, a term traditionally interpreted to mean a natural or juristic person and thus to exclude autonomous machines. Personhood is therefore a prerequisite for claiming rights over copyrighted property.
- Computer-Generated Works: Section 2(d)(vi) mentions computer-generated works and states that the author is “the person who causes the work to be created.” This means that the person or entity that owns or controls the AI system is treated as the legal author, even if the AI did the heavy lifting of creativity.
- Ownership Default: Section 17 provides that the author is prima facie the first owner of copyright unless a contract (e.g., an employment agreement) states otherwise. In academia, research outputs are often owned by the institution if produced in the course of employment. The person who causes the work to be created is thus the owner unless those rights are assigned or waived.
- Inventorship under the Patents Act, 1970
The patenting of AI research in India faces both procedural hurdles regarding who can be an “inventor” and substantive hurdles regarding the nature of AI itself.
- Natural Person Standard: The Indian Patents Act presumes that the “inventor” is a natural person. Applications naming an AI, like the DABUS[8] system, as an inventor are therefore likely to be rejected as defective.
- What Are Not Inventions, Section 3: Clause (k) of this section excludes “a mathematical or business method or a computer programme per se or algorithms” from patentability. As a result, many advances in AI are regarded as algorithms rather than patentable inventions, even though AI research relies heavily on precisely these features.
- The “Technical Effect” Exception: As per the Delhi High Court in Ferid Allani v. Union of India[9], AI-related innovations may be patentable if they show a ‘technical effect’ or a ‘technical contribution’ which is beyond an algorithm.
- Institutional Ownership in Research
In Indian academic and research institutions (including the IITs and IISc), ownership is typically determined by internal IP policies and employment contracts.
- Work-for-Hire Doctrine: When the researcher uses institutional resources and funds to generate AI-based work, the institution generally claims default ownership of the IPR.
- Sponsored Research: Ownership provisions for collaborative studies with outside sponsors are defined in Sponsored Research Agreements (SRAs); IP may belong to the institution, the sponsor, or both jointly.
- Moral Rights: Indian law recognizes the moral rights of the human researcher to be credited for their intellectual labour and to preserve the integrity of the work, irrespective of who owns the economic rights.
- Policy Debate and Future Reforms
There is an active debate in India regarding whether to adapt the current regime to better suit the age of AI.
- 161st Parliamentary Standing Committee Report: The committee recommended that the Patents and Copyright Acts be reviewed on a priority basis, that AI-generated innovations be protected to incentivize R&D, and that a sui generis category of rights be created for inventions made by AI.[10]
- Ministry of Commerce Stance (2024): On the other hand, the Ministry of Commerce and Industry recently clarified that India’s existing intellectual property rights (IPR) regime is already ‘well-equipped’ to protect AI works and there is no proposal at present to establish separate legal categories for AI-generated content.
Comparative Study: India and the World
India’s stance can be situated within the sharply divergent global responses to the “DABUS” (Device for the Autonomous Bootstrapping of Unified Sentience) case, in which an AI was named as an inventor on patent applications:
| Jurisdiction | Status of AI Inventorship | Primary Reasoning |
| --- | --- | --- |
| India | Rejected | Statutes require a natural person; no proposal for change as of 2024. |
| USA | Rejected | Thaler v. Vidal[11]confirmed “inventor” is limited to natural persons. |
| UK | Rejected | The Supreme Court held an inventor must be a natural person under the 1977 Act. |
| EU | Rejected | Designated inventor must have legal capacity; AI lacks personhood. |
| South Africa | Accepted | First nation to grant a patent designating an AI (DABUS) as the inventor. |
| China | Contentious | Dreamwriter[12] case confirmed copyright for AI news articles as the creation reflected the “intellectual activity” of the development team. |
1. South Africa: The First Nation to Grant a Patent to an AI Inventor
In 2021 South Africa made history by becoming the first jurisdiction in the world to formally recognize an AI system as an inventor in a patent grant.
- The DABUS Patent: On 24 June 2021, Dr. Stephen Thaler filed a patent application with the South African Companies and Intellectual Property Commission (CIPC) in terms of the Patent Cooperation Treaty (PCT). DABUS (Device for the Autonomous Bootstrapping of Unified Sentience)[13] was named as the inventor in the application, and the patent was granted on this basis.
- Ownership Structure: In this specific grant, the AI is listed as the inventor, but the patent holder is the AI’s human owner (Dr. Thaler).
- Global Context: The decision represented a significant divergence from the practice of other leading jurisdictions, including the United States, the United Kingdom and the European Patent Office, which had previously refused to grant similar DABUS applications on the basis that an inventor must be a natural person.
2. China: The Contentious Dreamwriter Case and Copyright Protection
China’s approach to AI-created research and creativity is marked by a split between its patent and copyright laws, producing a contentious legal environment.
- The Dreamwriter Case (2018): This was the first case in which a Chinese court acknowledged copyright protection for a work created by an AI. The dispute began when the defendant copied an article titled “Tencent Securities” that Tencent’s AI software “Dreamwriter” had written in just two minutes.
- Legal Reasoning for Protection: The court found the article copyrightable because it exhibited the “intellectual activity” of the development team. In particular, the court concluded that the team’s selection of data inputs, trigger-condition settings and arrangement of the frame template were directly linked to the article’s specific “expression.”
- Patent Law Divergence: Unlike this openness in copyright law, China’s Patent Examination Guidelines (2020/2024) explicitly state that AI cannot be named as an inventor. The newer 2024 guidelines, however, provide that AI and big-data algorithms are patentable subject matter where they are directed to a specific technical problem or have a “technical relationship” with the internal structure of a computer system.
- Hybrid Outcomes: Other decisions, like Gao Yang v. Golden Vision[14], have underscored the importance of human intervention, awarding copyright solely because humans preselected the recording modes and parameters for AI-powered recordings.
Valuation and Commercialisation of AI Research
Commercialization is the process of converting IP into marketable products or services. In India, this is primarily managed through Technology Transfer Offices (TTOs).
- Licensing Models and Revenue Sharing
Indian flagship institutions (e.g. IITs and IISc) tend to commercialize AI research through a few effective models:
- Revenue Sharing: Inventors typically receive between 30% and 70% of licensing revenues, with the remainder retained by the licensing institution.
- Exclusive vs. Non-Exclusive Licenses: Exclusive licenses grant a single partner sole rights to commercialize, often used for high-risk pharmaceutical AI research, while non-exclusive licenses are favored for software and educational tools.
- Valuation Challenges
It is difficult to assess AI-generated research because of the “black box” nature of algorithms.
- Data as an Asset: The value of AI research is often related to the quality and quantity of training data.
- Trade secrets: If AI research is not patent protected, developers may choose to keep their discoveries as trade secrets which can give them a competitive advantage but limit the dissemination of the knowledge to the public.
One of the key intersections between intellectual property (IP) law and the economic realities of the Fourth Industrial Revolution is the valuation and commercialisation of AI-generated research. As social wealth shifts from tangible to intangible assets, the ability to value AI research properly, and translate that into marketable products will be key to national growth.
Valuation of AI-Generated Research
In the field of artificial intelligence research, valuation is the process of evaluating the economic and legal value of algorithmic outputs, datasets, and autonomous inventions.
- Unique Intangible Asset Value: The financial value of inventions and patents generated by AI is separate from the value of the AI system itself. This enables organizations to identify research outputs as distinct intangible assets in their portfolios.
- The Inventive-Step Standard as a Measure of Valuation: The value of an AI patent is increasingly tied to the inventive-step (non-obviousness) standard. Sophisticated AI systems can work faster and more accurately than humans; IBM Watson, for example, can analyze a patient’s genome in 10 minutes, compared to 160 hours for a human team. This sets a new bar for non-obviousness: only outputs that are difficult even for a standard AI to replicate will carry high patentable value.
- The Trade Secret vs Patent Valuation Dilemma: Valuation is heavily affected by the choice of legal protection. If the legal regime does not recognize AI as inventor, owners are incentivized to protect high-value innovations as trade secrets. This keeps the commercial value for the developer but limits the wider dissemination of knowledge. This can lead to “knowledge silos”.
- Fiscal Valuation and Tax Incentives: From the corporate perspective, AI is valued as a capital investment that leads to tax deductions in the form of depreciation and amortization. This fiscal treatment provides a financial benefit to the firm that cannot be obtained when using human researchers.
Commercialisation of AI-Generated Research
Intellectual Property Rights (IPRs) on the other hand are the legal tools that provide protection to such intangible creations enabling them to be owned, traded and monetised.[15] IPRs include patents, trademarks, copyrights, industrial designs, geographical indications and trade secrets. Consequently, IP is now considered as a legal right and a strategic resource of the economy that determines the competitive ability and development of individuals, companies and countries. Commercialisation is the structured process of transferring intellectual property from the research environment into the commercial market through products, services or technologies. The means of licensing, assignment, franchising, technology transfer and IP securitization among others enable the movement of intellectual property from legal protection to economic exploitation.[16]
- The cycle of Commercialisation: Typically, the procedure comprises five critical phases:
- Identification: To identify research outputs that have commercial potential.
- Intellectual Property (IP) Protection: Legal barriers (patents, copyrights or trademarks) against commercial exploitation.
- Licensing and Partnerships: Developing arrangements with private companies for use of the intellectual property in return for royalties.
- Entrepreneurship and Spin-offs: Creation of university spin-off companies to commercialize the innovation.
- Market Launch: The final stage of introducing the product or service to the public or private sector.
- The Role of TTOs: Technology Transfer Offices (TTOs) act as the main intermediaries in India and elsewhere, negotiating licensing contracts, assessing marketability and providing legal support.
- Models of Licensing and Revenue Sharing:
- Exclusive/Non-Exclusive Licenses: An exclusive license makes the partner the only party permitted to use the technology and is often employed in high-risk pharmaceutical AI research; non-exclusive licenses are more common and are often chosen for educational purposes.
- Incentive-Based Sharing: Top Indian institutions like the IITs and IISc have developed models in which a substantial share (30%–70%) of the licensing revenue goes directly to the inventors to encourage research.
- National AI Marketplace – Proposed: India’s NITI Aayog has proposed a National AI Marketplace that would bring together commercial AI innovations, easing the regulatory process and providing an ethical base for development.
Legal and Ethical Hurdles to Commercialisation
Some argue that AI development should be checked at a reasonable level, since the prospect of machines reaching parity with, and possibly surpassing, human intelligence is difficult to digest. AI can engage in the same kinds of antisocial behaviour as human criminals, but without moral blameworthiness. This accountability gap is compounded by the ‘black box’ nature of AI decision-making, in which the inputs and calculations are not visible, resulting in an absence of accountability. This lack of transparency makes it difficult to determine who should be held responsible: the manufacturer of the faulty AI, the developer who supplied the biased training data, or the AI itself.[17]
- The Public-Private Imperative: An important challenge is reconciling the ethical need to make publicly funded research available as a ‘public good’ with the commercial need for exclusive IP rights to attract private investment.
- Collaborative Ownership Conflicts: AI research is frequently a collaborative effort involving developers, data providers, and institutions. Disputes over inventorship and royalty splits can often stall or derail the commercialization of AI research.
- The Liability Gap: When AI is used for commercial purposes in high-risk sectors like finance or health care, it creates a “liability gap”: who is responsible for the AI, the developer, the user or the proprietor? Resolving this question builds the trust necessary for mass-market adoption.
Policy Implications in India
The move to AI-based research should be balanced with social equity.
- Digital Personal Data Protection Act (DPDPA), 2023: The Act does not specifically address AI, but its notice-and-consent provisions impose significant barriers to training AI models on large datasets.
- Algorithmic Bias: Biased training data can lead to discriminatory research results, especially in the healthcare and recruitment fields.
- Employment Implications: The adoption of AI is estimated to displace millions of jobs globally by 2025 while also creating 97 million new ones, requiring large-scale retraining of the Indian workforce.
Recommendations and Desired Developments in Intellectual Property and Regulatory Laws for AI
The fast development of AI means that we need to move beyond the traditional human-centered legal approach to a more flexible and adaptive regulatory framework. To successfully integrate AI into the national and global IP ecosystem, the following developments are recommended:
1. An All-Encompassing Liability Model
- Structured Liability Model: A model that clearly identifies the parties responsible for AI-generated outcomes is an important step toward AI acceptance.
- Stakeholder Classification: Laws should identify the main actors in the AI lifecycle, i.e. developers, manufacturers, operators and users, so that accountability is appropriately attributed.
- Co-Liability Model: Jurisdictions are encouraged to adopt a co-liability model encouraging shared liability between AI systems and human creators of AI systems, depending on the level of AI system autonomy and human oversight of the system.
- Strict Liability for High-Risk Applications: In the absence of formal certification or in high-risk sectors, a strict liability regime should be applied to AI applications, with joint and several liability for damages to be imposed on the entities in the development chain.
2. Compulsory Transparency and Explainability Standards
IP and regulatory laws require that AI decision-making be transparent and explainable to build trust and legal certainty.
- Right to Explanation: Legislation should be introduced enabling users to request easy-to-understand explanations of AI-made decisions that have a major impact on them.
- Training Data Disclosure: Laws should require the disclosure and attribution of the data and algorithms used by the AI system in the creation of a work or invention for IP purposes, to ensure accountability and traceability.
- Standards-Based Regulation: Regulators should establish standards for Explainable AI (XAI) that prioritize research into technologies delivering human-understandable decision-making.
3. Human-Centered and Ethical Legal Reforms
AI deployments need legal frameworks that prioritize human well-being and address algorithmic dangers.
- Bias Mitigation and Fairness: Future intellectual property laws should mandate periodic bias assessments and the use of diverse datasets to prevent discriminatory outcomes in research and commercial products generated by AI.
- Human-in-the-Loop Approach: Regulatory guidelines should prioritize the preservation of human oversight, ensuring that AI is used to augment, not replace, human cognitive capabilities.
- Ethical Review Boards: It is proposed to establish sector-specific ethical review boards and a consortium of Ethics Councils to oversee the deployment of autonomous systems and ensure compliance with societal values.
4. Sectoral Regulations and Innovation Incentives
Sources suggest moving away from the “one-size-fits-all” approach towards a series of graduated regulations matched to industry and risk.
- Partial Legal Personhood: Policymakers should consider giving AI a specialized or hybrid legal status (e.g., electronic person) that would allow it limited legal capabilities for specific actions, such as entering contracts or owning intellectual property, rather than full legal personhood.
- Regulation Sandboxes: Establishing innovation sandboxes is crucial for allowing developers to test and deploy new AI technologies in a controlled environment. This provides regulators with real-time insight to improve future regulations.
- Sui Generis IP Protection: A sui generis category of rights should be considered for AI-generated innovations that may not meet conventional human-inventorship requirements but provide significant societal and economic value.
5. Global Standards and International Harmonization
The borderless nature of AI necessitates international collaboration in order to establish a coherent intellectual property regime.
- Consistent Global Approach: In order to facilitate international trade and cross-border data flows, regulatory standards and principles of AI should be harmonized and aligned across jurisdictions.
- Global Data Governance: The establishment of an international framework for data governance will foster trust and encourage the development of data-driven AI innovation, while ensuring the protection of individual privacy.
Conclusion: The Path to Codification
The ultimate goal of AI adoption is the creation of AI-specific rules that create a clear framework of accountability and rights assignment. The shift to an iterative and adaptive regulatory process allows society to navigate the constantly changing technology world, while protecting basic human rights and encouraging responsible innovation.
Author Details:
SAUMYA BIDUA
LL.M.| Amity University, Gwalior
Dr. SANJUM BEDI
Associate Professor| Amity University, Gwalior
[1] ‘Artificial Intelligence: The New Electricity’, WIPO Magazine, June 2019.
[2] AI for All, Digital India, Ministry of Education, Government of India
[3] John Salmond, Jurisprudence or The Theory of the Law (2d ed. 1907).
[4] Section 2(26), Bharatiya Nyaya Sanhita (BNS), 2023
[5] John Dewey, “The Historic Background of Corporate Legal Personality” (1926) 35 Yale Law Journal 655
[6] Mohd. Salim v. State of Uttarakhand & Others, Writ Petition (PIL) No.126 of 2014, Uttarakhand High Court (March 20, 2017).
[7] Ministry of Environment and Forests, Government of India. (2013). Decision on establishment of dolphinariums/dolphin parks in India. Central Zoo Authority, New Delhi.
[8] DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), by Dr. Stephen Thaler, Indian Patent Application No. 202017019068; Yogini Bhasvar-Jog, Artificial Intelligence as an Inventor on Patents.
[9] Ferid Allani v. Union of India & Ors., W.P.(C) 7/2014 (Delhi High Court, 12 December 2019).
[10] Parliament of India, “161st Report on Review of the Intellectual Property Rights Regime in India” (Department Related Parliamentary Standing Committee on Commerce, 2021).
[11] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022)
[12] Shenzhen Tencent Computer System Co., Ltd. v. Shanghai Yingxun Technology Co., Ltd., (2019) Yue 0305 Min Chu No. 14010.
[13] Supra note 8.
[14] Gao Yang v. Golden Vision (Beijing) Film and Television Culture Co., Ltd. et al., (2017) Jing 73 Min Zhong No. 797 ((2017) 京73民终797号).
[15] N.S. Gopalakrishnan & T.G. Agitha, Principles of Intellectual Property 237–260 (2d ed. 2014) (India-specific commercialisation insights)
[16] Justin Hughes, The Philosophy of Intellectual Property, 77 Geo. L.J. 287 (1988).
[17] ‘Dutch Childcare Benefit Scandal: An Urgent Wake-Up Call to Ban Racist Algorithms’, Amnesty International, 25 October 2021, https://www.amnesty.org/en/latest/news/2021/10/xenophobicmachines-dutch-child-benefit-scandal/, retrieved 4 April 2024.
