Rethinking Agency in the Age of AI: Legal Accountability and Autonomous Decision-Making

Abstract: 

This paper examines how the rise of artificial intelligence (AI) in contract law presents challenges in accountability, consent, and legal recognition. While the Indian Contract Act, 1872 (ICA) and the Information Technology Act, 2000 recognize digital contracts, they lack provisions for AI-driven contracts and smart contracts, creating uncertainty in liability allocation and enforcement. AI increasingly facilitates contract formation, negotiation, and execution, yet its lack of legal personality raises concerns about automated decision-making and contractual obligations.

This paper explores whether AI can function as a legal agent, comparing regulatory approaches in the U.S., U.K., Singapore, and China. While some jurisdictions maintain human oversight, others have introduced AI-specific liability frameworks. A key question is whether AI should be granted legal personhood to autonomously execute contracts in India. Through legal analysis, case law, and hypothetical scenarios, this paper evaluates India’s need for AI liability laws, smart contract recognition, and blockchain dispute resolution mechanisms.

Introduction: Evolving Challenges of Agency in the Age of AI 

“If a machine signs a contract, who should be held accountable when it breaks the terms—the programmer, the user, or the algorithm itself?” Contract law has evolved significantly, adapting to technological and economic advancements. Traditionally, contracts were paper-based or verbal agreements, relying on human intent, offer, acceptance, and consideration for enforceability. The Indian Contract Act, 1872 governed these agreements, ensuring legal validity through established principles. However, with globalization and digitization, contract law has shifted towards electronic contracts and automated transactions, streamlining commercial dealings. In the contemporary digital landscape, artificial intelligence increasingly functions as an intermediary, facilitating interactions between consumers and businesses, and its advance is visible in the legal domain as well. AI already acts as an agent within the law of contracts: it assists in the formation of contracts through platforms such as Contractzy,1 negotiation through Genie AI,2 and even enforcement through VerifAI.3 While these innovations enhance efficiency, they also raise fundamental legal questions: Can AI act as a contracting party? Who is liable for contract breaches? Should AI be granted legal personhood or remain under human oversight? This paper critically explores these questions by analysing legal frameworks across jurisdictions and proposing regulatory solutions for India.

Legal Personhood and the Problem of AI Accountability 

In India, the Information Technology Act, 2000 enables the recognition of electronic signatures and digital contracts through s. 10A of that Act, read with ss. 2(h) and 10 of the ICA, laying the foundation for modern, technology-driven agreements. Today, AI and blockchain are further transforming contracts, particularly through smart contracts, which execute transactions automatically based on pre-programmed conditions.4 But is this framework sufficient for our country? Under s. 182 of the ICA, an agent is a person employed to do any act for another in dealings with a third person, and an agency can be created by agreement – express or implied – by estoppel,

1 “AI-Powered Contract Management Software for Enterprises” (CLM Software) <https://www.contractzy.io/>
2 “Legal AI for Negotiation” <https://www.genieai.co/use-case/ai-negotiation>

3 “VerifAI” (The Easiest AI Contract Review Tool) <https://www.spotdraft.com/products/verifai>
4 (June 9, 2000)

<https://eprocure.gov.in/cppp/rulesandprocs/kbadqkdlcswfjdelrquehwuxcfmijmuixngudufgbuubgubfugbububjxcgfvsbdihbgfGhdfgFHytyhRtMjk4NzY=>

operation of law, or necessity, as per ss. 186–187 of the ICA.5, 6 Now, imagine artificial intelligence – an intangible technology that enables machines to learn and perform various functions beyond natural language processing, including decision-making, problem-solving, interaction with external environments, and execution of actions7 – taking decisions for us and signing modern contracts without consent or accountability. This necessitates regulations providing statutory recognition for smart contracts, which in turn raises concerns about automated enforcement, liability, and dispute resolution. AI-driven contracts present further ambiguities, since Indian law presumes human intent in agreements, creating uncertainty in determining AI’s contractual obligations. The decentralized nature of blockchain contracts complicates jurisdictional issues, as disputes may span multiple legal systems. Without clear regulations, legal uncertainty persists, hindering the full adoption of emerging technologies. Strengthening contract law is crucial to fostering technological innovation and ensuring legal certainty in digital transactions.8
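The core mechanism of a smart contract – automatic execution once pre-programmed conditions are satisfied, with no human intervention at the point of performance – can be illustrated with a short sketch. This is a purely hypothetical illustration: the escrow scenario, party names, and delivery condition are assumptions for exposition, not a depiction of any platform or statute discussed in this paper.

```python
# Minimal sketch of smart-contract logic: value held in escrow is released
# automatically once a pre-programmed condition becomes true. All names and
# values are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class SmartContract:
    buyer: str
    seller: str
    amount: int               # escrowed value
    delivered: bool = False   # the pre-programmed condition
    settled: bool = False

    def confirm_delivery(self) -> None:
        """In a real deployment, an external data feed would set this."""
        self.delivered = True
        self._execute()

    def _execute(self) -> None:
        # Self-executing clause: settlement occurs the moment the
        # condition holds, without any human decision in between.
        if self.delivered and not self.settled:
            self.settled = True
            print(f"Released {self.amount} to {self.seller}")

contract = SmartContract(buyer="Firm X", seller="Firm Y", amount=100)
contract.confirm_delivery()   # condition met -> automatic settlement
```

The sketch makes the legal problem concrete: once deployed, no party "decides" to perform; the code does, which is precisely why questions of consent and liability attribution arise.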

The Classic Concept of Agency and Its Limitations for AI 

Beyond this, AI’s capacity to discharge an agent’s obligation to answer for its actions and decisions (accountability), and to agree upon the same thing in the same sense (consent) as per s. 13 of the ICA, is not recognised, since these laws do not take AI into account. AI’s lack of legal personality raises concerns about enforcement gaps: who should bear responsibility? Courts may attribute liability to developers, but this model fails when AI acts autonomously. Without legal recognition, no declaration of intent would exist; without a declaration of intent, the law of agency is not applicable.9 A pertinent legal question arises: do we not already engage with AI acting as an agent in various contractual transactions – from e-commerce platforms like JioMart and Nykaa employing AI-driven chatbots to automated negotiation tools in financial services? This evolving reality challenges traditional agency law principles, necessitating a re-evaluation of accountability, consent, and the legal status of AI in contractual relationships. AI frameworks

5 The Indian Contract Act, 1872

6 Fardunji Mulla D Sir, Mulla The Indian Contract Act (16th edn, LexisNexis 2021) 383
7 IBM, “AI Agents” (IBM, July 3, 2024) <https://www.ibm.com/think/topics/ai-agents>
8 Anglen J, “The Legal Implications of Smart Contracts: Regulations and Compliance” Rapid Innovation (December 30, 2024) <https://www.rapidinnovation.io/post/the-legal-implications-of-smart-contracts-regulations-and-compliance>

9 DiMatteo LA, Poncibò C and Cannarsa M, The Cambridge Handbook of Artificial Intelligence: Global  Perspectives on Law and Ethics (Cambridge University Press 2022)

across platforms like Windows, Linux, iOS, and Android have enabled its adoption as an  efficient agent across various industries. However, Indian law does not yet recognize AI as a  legally competent agent capable of assuming accountability and consent in contractual  relationships. 

AI as an ‘Electronic Agent’: Redefining Legal Relationships 

But why is legal personality important for AI? It is the foundation of an entity’s ability to engage in legal relations, holding rights and duties within a structured legal framework. It would ensure competency, predictability, and stability, thereby fostering consistency in legal dealings. One of its key features is separate legal existence, allowing entities such as corporations to function independently of their members and ensuring continuity beyond individual ownership or participation. Furthermore, legal personality would grant the ability to sue and be sued, enabling entities to protect their interests through legal proceedings.10 A significant advantage, particularly for corporations, is limited liability, which shields individual members from personal financial risk. Additionally, perpetual succession allows corporations and similar entities to exist indefinitely, unaffected by changes in membership. Recognition would also facilitate economic activities, permitting businesses, trusts, and other entities to enter contracts, hold assets, and engage in trade efficiently.11 Moreover, it would enable the separation of roles and functions, allowing individuals to operate under multiple legal identities, such as acting as both a trustee and a private individual simultaneously.

Consent in AI-Driven Transactions: Revisiting Express and Implied Agency 

In addition to AI not being recognised in the law of contracts, an agent is legally bound by several duties and liabilities while acting on behalf of the principal. S. 211 imposes a duty on the agent to conduct transactions per the principal’s directions or, in the absence of specific instructions, according to customary trade practice. S. 212 requires an agent to exercise reasonable skill and diligence in performing their functions, ensuring that no negligence leads

10 Smith B, “Legal Personality” (1928) XXXVII Yale Law Journal

11 JusCorpus, “Analysis of the Concept of Perpetual Succession under Companies Act 2013” (Jus Corpus, February 28, 2024) <https://www.juscorpus.com/analysis-of-the-concept-of-perpetual-succession-under-companies-act-2013/>

to the principal’s loss. Furthermore, s. 213 mandates that the agent provide accurate and complete accounts of transactions to the principal. The agent is also liable under s. 215 if they act against the interests of the principal by dealing in transactions where they have conflicting interests. Breach of these duties can render the agent liable for damages or contractual breach under s. 217, which allows the principal to seek indemnification for losses caused by the agent’s misconduct or negligence.12 AI, when functioning as an agent, performs several of these duties through automation, predictive decision-making, and execution of contractual obligations. AI-driven agents, such as smart contracts and AI-powered financial advisory bots, can execute transactions per pre-defined algorithms, ensuring compliance with contractual terms. However, this raises a crucial dilemma: should AI be held accountable like human agents, or should liability be placed entirely on developers and deployers? The latter may create moral hazard, as corporations could offload responsibility onto autonomous systems. Unlike human agents, AI cannot exercise judgment, weigh ethical considerations, or bear fiduciary duties.

Autonomy and Control: Who Is the Real Principal in AI Actions? 

As of today, there is no prominent Indian case law explicitly establishing an AI system as an “agent” under the law, primarily because current legal frameworks do not recognize AI as a legal person with agency capabilities. The Ministry of Electronics and Information Technology (MeitY), the executive agency for AI-related strategies, has, however, recently constituted four committees to develop a policy framework for AI.13 For now, real-life experience offers guidance. Uber’s AI-driven dynamic pricing model exemplifies AI acting as an agent in commercial transactions. The algorithm autonomously sets fares based on real-time factors such as demand, traffic conditions, the phone’s battery level, and rider history, directly influencing the contractual agreement between the rider and Uber. Unlike traditional agents, Uber’s AI makes pricing decisions without direct human intervention, raising questions about accountability and fairness, as debated in Spencer Meyer v. Travis Kalanick (2016).14 Instances of price surges during emergencies have sparked legal debates on whether AI can be held liable for

 12 The Indian Contract Act, 1872 

13 “Assessing the Intelligence of the Artificial Intelligence in Law: Prospects in India” (Singhania & Partners) <https://singhania.in/blog/assessing-the-intelligence-of-the-artificial-intelligence-in-law-prospects-in-india->
14 Meyer v. Kalanick, 203 F. Supp. 3d 393 (S.D.N.Y. 2016)

discriminatory or exploitative pricing.15 Under traditional agency law, an agent must act in the principal’s best interest, but Uber’s AI prioritizes profit optimization over consumer welfare, challenging the application of ss. 211 and 215 of the ICA, which require agents to act with diligence and avoid conflicts of interest. The pertinent issue is whether liability rests with Uber as a legal entity or with the AI system itself. The algorithm functions on the basis of pre-programmed parameters and real-time data inputs, and since Uber, as a corporate entity, retains control over the deployment and operation of the AI system, legal accountability would likely be attributed to Uber rather than the algorithm.

Existing Statutory Frameworks: A Comparative Global Outlook 

Similarly, smart contracts on Ethereum’s blockchain use AI as an agent to autonomously execute agreements and trigger dispute resolution mechanisms. AI-driven platforms like Kleros analyse disputes, but legal challenges arise because the Indian Contract Act, 1872 and existing legal frameworks recognize only natural and juristic persons as legally accountable entities. Consequently, unless AI is granted distinct legal personality through legislative reform, liability remains with the entities deploying and managing such AI systems.16 Given the increasing role of AI in contract formation, execution, and dispute resolution, it is imperative for legal systems to evolve and consider the recognition of AI as an autonomous legal actor. The absence of a well-defined framework addressing AI’s legal status and accountability may create lacunae in enforcement, necessitating urgent legislative intervention to adapt to the realities of an AI-driven commercial landscape. A structured legal framework must define AI’s contractual autonomy while ensuring human oversight and clear liability attribution (joint liability). Legal reforms are essential to address these gaps and ensure accountability in AI-driven contracts.

The Indian Perspective: Agency Law and Technology Gaps 

15 Chouhan MS, “Delhi Man Reveals Insights into Uber’s Pricing Algorithm, Finds Link to Phone Type, Battery” Hindustan Times (January 20, 2025) <https://www.hindustantimes.com/trending/delhi-man-reveals-insights-into-ubers-pricing-algorithm-finds-link-to-phone-type-battery-101737343628321.html>
16 Buchwald M, “Smart Contract Dispute Resolution: The Inescapable Flaws of Blockchain-Based Arbitration” (2020) 168 University of Pennsylvania Law Review 1369

Following the above examples, recognition of AI as a legal entity in contract law would represent a transformative shift in India’s legal framework, addressing contemporary challenges in an increasingly digitized commercial landscape. Enhanced accountability is one of the primary benefits: granting AI legal status would enable direct responsibility for contract breaches, thereby reducing ambiguity in liability allocation and establishing clearer legal recourse mechanisms when AI autonomously executes transactions or enforces contractual terms. Furthermore, facilitation of AI-driven contracts would allow AI systems, particularly in smart contracts and blockchain-based platforms, to autonomously enter, execute, and enforce contracts without human intervention.17 Such automation could streamline business processes, minimize delays, and enhance contractual efficiency. Additionally, legal recognition would encourage innovation and investment in AI technology: a well-defined legal framework would provide stability, fostering investment in AI-driven solutions and increasing business confidence in dispute resolution and dynamic pricing models. Regulatory clarity is another key benefit; recognizing AI as a legal entity would promote authoritative governance by establishing clear guidelines for taxation, compliance, and dispute resolution within firms. Finally, AI as an independent contractual party could introduce a new dimension to commercial transactions, wherein AI entities could potentially own assets, enter contracts, and settle liabilities in their own name, akin to corporate personhood.

Despite the advantages associated with recognizing AI as a legal entity in contract law, the government has not yet granted such recognition due to several complex legal, ethical, and practical considerations. One of the foremost concerns is the absence of legal precedent: Indian contract law, under s. 11 of the ICA, currently recognizes only natural persons and juristic entities (such as companies and LLPs) as competent to enter into contracts, and India, being a common law country, relies heavily on the doctrine of stare decisis. Secondly,

AI lacks intention and free will, both of which underpin a valid contract, making its legal recognition problematic. Additionally, liability and enforcement issues arise if an AI system makes an erroneous contractual decision, and without human oversight, enforcing penalties on AI entities may prove impractical. Moreover, jurisdictional conflicts present another major challenge where AI operates across borders,

17 “The Ethics of Artificial Intelligence: Issues and Initiatives” (March 2020) <https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf>

often engaging in transactions that span multiple legal systems.18 This creates conflicts of law and raises concerns about which jurisdiction should govern AI-related disputes; enforcing contractual obligations against an AI entity across different legal regimes could be highly difficult. Furthermore, ethical and moral considerations must be taken into account. It can be argued that granting AI legal personhood may allow corporations to escape liability by shifting blame onto AI, but this is not a decisive objection to recognition, since liability could in any event be shared with the corporation. Some jurisdictions (such as the EU) argue for human-in-the-loop responsibility instead of AI autonomy, but is this really sustainable as we move into a digital world?

Let us understand AI as an agent in a hypothetical scenario. Firm X and Firm Y decide to merge through the execution of a blockchain-based smart contract that automatically executes once predefined conditions are met. An AI legal assistant, “LexBot,” deployed by Firm X, autonomously signs the merger contract on behalf of the firm. Subsequently, it emerges that Firm X fraudulently concealed significant financial debt, causing Firm Y substantial losses. Despite the AI’s role in signing the contract, legal analysis under agency law confirms that the AI acted solely within the scope of Firm X’s authorization, rendering the firm fully liable. The scenario can be analysed in two ways. First, under the Indian Contract Act, 1872 (the current legal framework), Firm X remains fully liable for fraudulent misrepresentation, as AI lacks legal personhood. Under s. 182 of the ICA, LexBot is merely a tool acting under Firm X’s directive. Since Firm X concealed financial liabilities, this constitutes fraud under s. 17, rendering the contract voidable under s. 19 and allowing Firm Y to rescind the agreement and claim damages. Second, if India were to recognize AI as a legal personality, LexBot, as an autonomous contracting entity, could bear partial or full responsibility for the fraudulent contract. Firm X might argue that the AI, not the firm, committed the breach, shifting liability. To prevent firms from evading accountability, courts could impose joint liability on both Firm X and LexBot, ensuring firms remain responsible for AI’s actions. New regulations might introduce AI-specific penalties, asset seizure mechanisms, or human oversight mandates to safeguard contractual integrity.

 18 “Legal and Ethical Implication of Artificial Intelligence: Policies and Regularisation with Special Emphasis  on Legal Profession in India” (H.K. LAW OFFICES, August 15, 2021)  

<https://hklawoffices.in/2021/08/15/legal-and-ethical-implication-of-artificial-intelligence-policies-and regularisation-with-special-emphasis-on-legal-profession-in-india/> 

Case Study: United States Legal Developments on AI and Agency

Did the hypothetical make you question whether AI should be afforded contractual rights akin to those of humans or corporations, or whether it should remain under human control? There is certainly a risk of autonomy beyond control, fraudulent behaviour, biased decision-making, or contractual manipulation without clear accountability mechanisms; to anticipate such consequences, we can analyse the strengths and limitations of other countries’ laws. The U.S. legal system does not grant AI independent legal personhood, but it acknowledges AI as an automated agent acting under human control, and legal adaptations have been made under the Uniform Electronic Transactions Act (UETA) and the E-SIGN Act, which validate contracts executed by automated systems.19 For example, in United States v. Athlone Industries, Inc. (1984), it was held that agency requires the capacity to assent, which AI lacks.20 In the Indian context, by contrast, the Supreme Court in Trimex International FZE Ltd. v. Vedanta Aluminium Ltd. (2010)21 deemed email-based agreements enforceable.22 The U.S. approach provides clear human accountability, ensuring liability falls on individuals or corporations rather than AI itself. However, it struggles with AI-driven liability, as responsibility becomes unclear when AI acts autonomously. Additionally, outdated legal definitions fail to address modern AI capabilities, and the lack of a unified federal framework creates inconsistencies, especially compared to evolving global regulations. While functional, the current legal framework remains incomplete, requiring future reforms to keep pace with AI advancements and AI’s growing role in decision-making.

Potential Legal Models: Strict Liability and Regulatory Intervention 

Moreover, the U.K. follows legislative measures such as the Electronic Communications Act 2000, and reports from the UK Law Commission acknowledge AI’s role in contract enforcement. In Software Solutions Partners Ltd. v. HM Revenue & Customs (2007),23 the court addressed whether AI-driven contractual obligations would be

19 “Electronic Signature Laws & Regulations” (United States)

20 United States v. Athlone Industries, Inc., 746 F.2d 977

21 Trimex Int’l FZE Ltd. v. Vedanta Aluminium Ltd., (2010) 3 SCC 1 

22 “Digital Supreme Court Reports” <https://digiscr.sci.gov.in/view_judgment?id=MzU5NTQ=>
23 Software Sols. Partners Ltd. v. HM Revenue & Customs, [2007] EWHC 971 (Ch) (UK)

upheld.24 Additionally, the UK Jurisdiction Taskforce (UKJT) 2019 Report supports AI-executed smart contracts if they meet established legal principles.25 However, gaps remain in defining AI’s liability and decision-making autonomy. The lack of specific AI regulations creates uncertainty, requiring future reforms to address accountability and evolving AI capabilities in business and legal transactions. Along the same lines, Singapore’s AI agency laws embrace a tech-forward approach: the Electronic Transactions Act 2010 permits AI-driven smart contracts, and the Personal Data Protection Commission (PDPC) has introduced AI-specific regulations emphasizing transparency and accountability.26 A key legal precedent is Quoine Pte Ltd v. B2C2 Ltd (2020),27 in which the Singapore Court of Appeal considered AI-driven financial transactions and held that such contracts could be valid under certain conditions. However, gaps remain in defining liability when AI operates autonomously, as current laws still rely on human oversight. The absence of independent AI legal status creates uncertainty in complex transactions. While Singapore’s regulatory framework is progressive, future refinements are needed to address AI accountability and its expanding role in business and contract law.

Even if we choose not to compare our laws with those of countries like the USA or the UK, we can still gain valuable insights by examining the legal framework of our neighbouring country, China. China’s AI legal framework follows a state-controlled yet structured approach, integrating AI into contractual dealings under strict regulations. The Civil Code of China (2021) recognizes e-contracts, while the Cybersecurity and Data Security Laws ensure AI transparency and accountability.28 Courts have upheld AI-executed agreements, as in Beijing Baidu Network Technology Co. Ltd. v. Liu Haixi (2020),29 where AI-generated contracts were deemed enforceable under traditional contract law.30 India, with its growing digital economy, could adopt elements of China’s model, particularly in contractual enforceability and AI accountability. However, while China’s regulatory model ensures AI

24 “R (Software Solutions Partners Ltd) v R & C Commissioners” (vLex) <https://vlex.co.uk/vid/r-software-solutions-partners-793994357>

25 “Smart Contracts” (Law Commission, July 30, 2018) <https://lawcom.gov.uk/project/smart-contracts/>
26 “Electronic Transactions Act 2010” (Singapore Statutes Online) <https://sso.agc.gov.sg/act/eta2010>
27 Quoine Pte Ltd v. B2C2 Ltd, [2020] SGCA(I) 2 (Sing.)

28 chillibyte, “China’s First Civil Code, 1st January 2021” (CELIA Alliance, January 2, 2021)  <https://www.celiaalliance.com/news/chinas-first-civil-code-1st-january-2021/> 

29 Beijing Baidu Network Tech. Co. Ltd. v. Liu Haixi, (2020) (China) 

30 Wininger A, “Beijing Internet Court Releases Translation of Li vs. Liu Recognizing Copyright in Generative AI” (January 22, 2024) <https://www.chinaiplawupdate.com/2024/01/beijing-internet-court-releases-translation-of-li-vs-liu-recognizing-copyright-in-generative-ai/>

accountability, its strict state-controlled approach may not align with India’s democratic legal system. A hybrid model that ensures corporate liability while allowing AI autonomy in certain contract types (e.g., smart contracts) may be a better fit.

What we have learnt from studying these international frameworks is the need to develop a balanced approach that integrates AI within contract law without granting it full legal personhood. India needs: an AI liability law, similar to the EU’s AI Act, imposing human responsibility for AI decisions; a contract law amendment modifying the ICA, 1872 to define AI-assisted contracts explicitly; and a regulatory body for AI contracts, such as an AI Contracts Tribunal established under the IT Act, 2000. For example, a proposed s. 10B could provide that contracts executed by AI systems shall be deemed valid if they meet traditional contractual principles, provided that liability remains with the deploying entity, while a proposed s. 182A could define AI-assisted agency, ensuring human accountability remains in place. Some critics argue that granting AI legal personality could allow corporations to escape liability by blaming “autonomous” AI for contract breaches. How would India prevent this? One solution is joint liability, ensuring that responsibility always remains with a human or corporate entity. A regulatory sandbox approach could also be implemented, allowing AI contract mechanisms to operate under controlled conditions before full legal integration. Regulatory bodies such as the Reserve Bank of India (RBI) and SEBI have adopted this model to assess fintech innovations before broader implementation. By identifying risks and refining legal frameworks in a controlled setting, the sandbox approach ensures that AI-driven contracts are legally sound, secure, and accountable before large-scale adoption in India.31

Conclusion: Reimagining Agency for the AI Era 

In conclusion, while AI is transforming contract law, lawyers are still needed to address complexities in consent, accountability, and liability in modern contracts. As digital signatures and AI-driven contracts become more common, India must

31 J SK, “Regulatory Sandboxes: Decoding India’s Attempt to Regulate Fintech Disruption” Observer Research Foundation (ORF) (May 24, 2023) <https://www.orfonline.org/research/regulatory-sandboxes-decoding-indias-attempt-to-regulate-fintech-disruption-66427>

develop AI agency liability laws and recognise a form of legal personality to regulate automated transactions. A sandbox approach would help test AI contract mechanisms before full legal implementation. Other nations have introduced AI-specific liability frameworks, while India still relies on traditional contract law, which assumes human intent. A feasible approach would be to amend the ICA, 1872 by incorporating a new section recognizing AI-assisted contracts while maintaining human oversight liability under a strict liability model. This would strike a balance between technological advancement and legal accountability.

– Sanjeevani Shandilya 

O.P. Jindal Global University: 2024 Batch