Shivansh Sharma
Damodaram Sanjivayya National Law University, Visakhapatnam
A Research Paper Submitted for the Virtual Internship Program
The Amikus Qriae
September 2025
Date of Submission: 25th September 2025
ABSTRACT
The integration of Artificial Intelligence (AI) into international arbitration represents a pivotal shift that promises to revolutionize dispute resolution. This paper examines the dual impact of AI, exploring its potential as an efficiency tool while critically assessing its threats to fundamental principles of due process. AI technologies, particularly machine learning and natural language processing, are increasingly utilized for tasks such as document review, case outcome prediction, and drafting arbitral awards, which can significantly reduce time and cost. However, their rapid adoption introduces complex legal and ethical concerns, including algorithmic bias, the lack of transparency in AI-driven decision-making, and the potential erosion of judicial independence and the essential human element for fair adjudication.
Employing a doctrinal methodology, this research analyses a range of legal sources, academic literature, and institutional guidelines. The paper concludes that while AI offers undeniable advantages in enhancing the efficiency of international arbitration, its uncritical application poses a direct threat to due process rights. It argues that for AI to be a beneficial tool rather than a liability, its use must be accompanied by robust regulatory frameworks, clear ethical guidelines, and a commitment to maintaining ultimate human oversight. Ultimately, this research offers a balanced approach to harnessing AI’s capabilities while preserving the integrity and credibility of international arbitration.
Keywords: Artificial Intelligence, International Arbitration, Due Process, Algorithmic Bias, Dispute Resolution, Legal Technology.
INTRODUCTION
The convergence of technology and law is fundamentally reshaping the legal landscape, with Artificial Intelligence (AI) emerging as a transformative force in virtually every domain of the legal profession. International arbitration, a historically traditional and intentionally circumspect practice, is no exception to this paradigm shift. While once viewed as a futuristic concept, the integration of AI tools, from predictive analytics to advanced document review, is now a tangible reality, presenting both unprecedented opportunities and concomitant challenges. The prevailing discourse centres not on whether AI will be incorporated, but on the modalities of its adoption and the scope of its impact. This paper addresses the central tension at the heart of this evolution: the conflict between AI as an indispensable efficiency tool and its potential to pose a fundamental threat to due process and the integrity of the arbitral system.
The persistent demand for more efficient and cost-effective dispute resolution mechanisms has long driven innovation in international arbitration. AI, with its capacity to process vast datasets and automate repetitive tasks, is posited as a critical mechanism for addressing systemic inefficiencies. Tools powered by machine learning and natural language processing can streamline case management, assist in legal research, and expedite the review of electronic documents, thereby significantly reducing the time and financial burden on parties.[1] For instance, predictive coding, a technology-assisted review (TAR) method, has become an accepted standard for document discovery, and AI-driven platforms are increasingly used to summarize awards and draft procedural orders.[2] The potential for AI to enhance accessibility and predictability in cross-border disputes is therefore undeniable.
However, the rapid adoption of AI has simultaneously prompted a robust scholarly debate regarding its compatibility with the foundational principles of due process. Due process requires a fair and equal opportunity for each party to present its case and a guarantee of an impartial tribunal. The opaque nature of some AI algorithms, often referred to as a “black box,” raises serious questions about the transparency and explainability of AI-assisted decisions. If a tribunal relies on an AI system to analyse evidence or predict outcomes, and that system’s reasoning cannot be scrutinized, a party’s right to a reasoned award and the integrity of the process itself may be undermined.[3] Furthermore, concerns about algorithmic bias, where AI systems may perpetuate or even amplify existing biases present in their training data, pose a direct threat to the impartiality required of an arbitral tribunal. The risk of AI-generated errors or “hallucinations,” documented in various legal contexts, also introduces a new layer of uncertainty that could compromise the accuracy and fairness of an award.
This research paper aims to provide a comprehensive analysis of this complex issue. It will first explore the applications of AI that enhance efficiency in international arbitration. Subsequently, it will scrutinize the legal and ethical challenges posed by AI, focusing on the potential for due process violations. The paper will conclude by offering suggestions for a framework that can responsibly integrate AI, ensuring that its benefits are harnessed without sacrificing the procedural safeguards that are cornerstones of a legitimate dispute resolution system.
RESEARCH METHODOLOGY
This research paper employs a doctrinal methodology, utilizing analytical and descriptive techniques, to interrogate the central tension between procedural efficiency and due process guarantees arising from the integration of Artificial Intelligence (AI) in international arbitration. Doctrinal research, often termed ‘library-based’ research, is particularly well-suited for legal analysis as it focuses on systematically analysing and interpreting primary and secondary legal sources to ascertain the current state of the law, delineate its challenges, and propose solutions.
The primary objective is to critically examine the existing legal and regulatory landscape surrounding the use of AI in arbitral proceedings. The methodology is structured around the following key research stages:
- Review of Primary Sources: This stage involves an analysis of relevant international arbitration rules and institutional guidelines (e.g., ICC, LCIA, SIAC) to identify any current or proposed provisions governing the disclosure, oversight, and use of technology and AI by tribunals or parties. This encompasses the scrutiny of landmark arbitral awards and judicial decisions where the integrity or process of AI-assisted evidence or decision-making has been contested.
- Analysis of Secondary Sources: A comprehensive review of academic literature, scholarly articles, law review journals, and authoritative reports from leading legal institutions and think tanks will be conducted. This is crucial for understanding the prevailing scholarly discourse on the ethical implications of AI, particularly focusing on the concepts of algorithmic bias, transparency, explainability (XAI), and their impact on due process rights.
- Comparative and Analytical Study: The gathered data will be subjected to an analytical critique. The paper will systematically compare the purported efficiency gains of specific AI applications (e.g., predictive coding, e-discovery) against the due process concerns they raise (e.g., right to be heard, impartiality). This comparative approach facilitates the development of a nuanced argument, transcending a simple pro/con dichotomy to establish a framework for responsible AI adoption.
- Prescriptive Conclusion: Based on the analytical findings, the research will culminate in prescriptive suggestions for ameliorating the identified risks. This entails proposing concrete regulatory and ethical guidelines for adoption by international arbitration bodies, practitioners, and arbitrators, thereby ensuring the seamless integration of AI while upholding the core tenets of procedural fairness and due process.
The methodology is thus designed for rigor and objectivity, ensuring that the conclusions and prescriptive suggestions are demonstrably grounded in established legal principles and informed by the most current academic and professional perspectives.
REVIEW OF LITERATURE
The burgeoning academic and professional literature on the intersection of Artificial Intelligence (AI) and International Arbitration delineates two distinct but interconnected schools of thought: one centred on the efficiency-enhancing potential and the other addressing critical due process challenges. This review synthesizes key arguments, identifies major thematic divisions, and delineates the scholarly lacuna this paper seeks to address.
- AI as an Efficiency Tool and Catalyst for Arbitration Modernization
A significant body of literature advocates for AI as an indispensable instrument for mitigating persistent criticisms of arbitration: high cost and protracted timelines. Consensus suggests that AI’s primary utility resides in automating and streamlining the preparatory and administrative phases of the arbitral process.[4] This potential manifests in three primary, interrelated applications:
- Document Review and E-Discovery: The most established application of AI is in Technology-Assisted Review (TAR) or Predictive Coding. Academic analysis confirms that AI systems can process extensive quantities of electronic documents with enhanced speed and consistency relative to human reviewers, thereby substantially lowering costs and expediting the discovery phase.[5] This technology, initially met with scepticism, is now widely accepted and often seen as a professional necessity rather than a mere innovation.
- Case Management and Analytics: Several studies discuss the growing use of AI in predictive analytics to forecast case outcomes, estimate damages, and inform settlement strategies, particularly in investment arbitration where a limited number of public awards exist.[6] Although prudence is advised, this is viewed in the literature as a strategic instrument for counsel, affording an empirical advantage and enhancing client consultation.
- Institutional Adoption: Recent surveys and reports from institutions like the Queen Mary University of London (QMUL) and major law firms confirm that a majority of practitioners anticipate the use of AI for research, data analytics, and document review to grow significantly, citing time-saving as the principal driver.[7]
The prevalent scholarly view holds that AI provides a necessary technological update to make international arbitration more competitive against other dispute resolution methods by maximizing efficiency in the preparation phase.
- AI as a Threat to Due Process: Transparency, Bias, and the ‘Black Box’
Conversely, the counter-narrative, frequently articulated by legal ethicists and due process proponents, identifies three core threats that the uncritical adoption of AI poses to the fundamental fairness of the arbitral process:
A. The Black Box Problem and Lack of Transparency
A central theme in the literature is the “black box” characteristic of complex AI models, particularly deep neural networks. Scholars argue that the right to a reasoned award and the ability to challenge it under instruments like the New York Convention is fundamentally compromised if the tribunal’s ratio decidendi is materially informed by an opaque AI output that cannot be fully scrutinized by the parties.[8] Arbitrators using AI for substantive tasks, such as generating legal arguments or evaluating evidence, must ensure explainability (XAI), a concept the literature increasingly links to due process. Without a transparent explanation of how an AI system reached its conclusion, a party’s right to be heard and respond to all evidence is arguably undermined.[9]
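The contrast the literature draws is not between human and algorithmic assistance as such, but between reasoning that can and cannot be examined. As a purely illustrative sketch, the following Python fragment shows the minimal property an “explainable” output must have: every factor, weight, and value here is a hypothetical assumption, but the decisive feature is that each contribution to the final score can be inspected and contested by a party, which is precisely what a black-box model withholds.

```python
def explainable_score(features, weights):
    """Return a score together with a per-factor breakdown.

    features/weights: dicts keyed by factor name. Purely illustrative --
    real XAI techniques (e.g. attribution methods) are far more involved,
    but they pursue the same goal: a decomposable, reviewable rationale.
    """
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical factors a tool might weigh when assessing a claim's strength.
weights = {"contract_clarity": 0.5, "documented_losses": 0.3, "prior_breach": 0.2}
features = {"contract_clarity": 0.9, "documented_losses": 0.4, "prior_breach": 1.0}

score, why = explainable_score(features, weights)
for factor, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {contribution:+.2f}")
print(f"total: {score:.2f}")
```

A party confronted with this output can challenge a specific weight or input; a party confronted with an opaque neural network’s single number cannot, which is the core of the due process objection.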
B. Algorithmic Bias and Impartiality
Furthermore, a dominant concern involves the threat of algorithmic bias. Legal commentary emphasizes that AI systems trained on historical legal data may inadvertently learn and perpetuate biases present in past awards, potentially leading to discriminatory outcomes against certain jurisdictions, industries, or demographic groups.[10] Since the impartiality of the tribunal is a non-negotiable pillar of arbitration, the mere possibility of hidden, systemic bias in an AI tool is considered a direct challenge to the legitimacy of the award and grounds for potential annulment or non-enforcement. The literature advocates for robust auditing and monitoring of AI systems to ensure they align with ethical and non-discrimination norms.
C. The Human Element and Responsibility
The literature consistently examines the issue of accountability. If an AI system generates a material error (or “hallucination”) that influences the arbitral award, the question of attribution arises: does the error belong to the software developer, the party who introduced the evidence, or the arbitrator who relied upon it? Scholars maintain that while AI can assist, the ultimate responsibility for the award remains with the human arbitrator.[11] This necessitates human oversight and verification of all AI outputs, ensuring that the technology remains an aid to human judgment rather than a replacement for it. The literature suggests that the ethical duties of counsel and arbitrators (such as the duty of competence and confidentiality) must be redefined to explicitly cover the use of AI.
- The Gap in the Literature
Crucially, while the existing literature effectively details both the promise and the peril of AI in arbitration, there remains a substantial gap in establishing a practical, unified regulatory and ethical framework that is easily implementable across diverse arbitral institutions. Much of the analysis remains theoretical or focuses solely on the existence of the problem. This paper aims to bridge the analytical divide by synthesizing the due process concerns into concrete legal risks and offering specific, balanced methodological and procedural suggestions for arbitrators and institutions. This approach thus seeks to move the discourse from mere identification of the tension to offering a practical framework for its resolution.
METHOD
The analysis presented in this paper is structured to address the core tension of the research question (whether AI constitutes an efficiency-enhancing tool or a threat to due process) through a systematic breakdown of its implementation in two key phases of international arbitration. The methodology employs a two-pronged analytical approach to evaluate the impact of specific AI applications.
- The Efficiency Perspective: Quantification of Benefits
The first phase of the analysis delineates the practical application of AI technologies and their demonstrated capacity to enhance efficiency, reduce costs, and accelerate the arbitral timeline. This entails an examination of:
- Technology-Assisted Review (TAR) in E-Discovery: This section analyses the rationale for employing AI to manage the vast data volumes common in international disputes. The methodology will quantify the efficiency gains (time and cost savings) by drawing upon industry data and evidence of judicial acceptance of TAR protocols.
- Predictive Analytics and Case Strategy: This sub-section critically examines AI platforms utilized by counsel for forecasting potential outcomes, assessing legal risk, and optimizing settlement positions. The analysis focuses on how these tools render the decision-making process more data-driven and systematic.
- Automated Administrative Tasks: This section addresses the efficiency gains derived from AI in drafting procedural orders, summarizing legal texts, and assisting with tribunal management, focusing on how these functions permit human arbitrators and counsel to concentrate on substantive legal issues.
This section establishes the empirical and economic justification for AI integration.
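To ground the discussion of TAR and predictive coding above, the following minimal Python sketch illustrates the core supervised-learning loop such tools rely on: human reviewers code a small seed set, term weights are learned from it, and the unreviewed corpus is then ranked so that likely-relevant documents surface first. All documents, tokenization choices, and the naive log-odds weighting are illustrative assumptions; commercial TAR platforms use far richer features and iterative active-learning rounds.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase whitespace tokenization; real TAR systems use far richer features."""
    return text.lower().split()

def train_term_weights(seed_set):
    """Learn a log-odds weight per term from a human-coded seed set.

    seed_set: list of (document_text, is_relevant) pairs.
    Terms that appear mostly in relevant documents receive positive weights.
    """
    rel, irr = Counter(), Counter()
    n_rel = sum(1 for _, r in seed_set if r) or 1
    n_irr = sum(1 for _, r in seed_set if not r) or 1
    for text, relevant in seed_set:
        (rel if relevant else irr).update(set(tokenize(text)))
    vocab = set(rel) | set(irr)
    # Laplace-smoothed log-odds: how much likelier is this term in relevant docs?
    return {t: math.log((rel[t] + 1) / (n_rel + 2)) - math.log((irr[t] + 1) / (n_irr + 2))
            for t in vocab}

def rank_for_review(weights, corpus):
    """Order the unreviewed corpus so likely-relevant documents surface first."""
    def score(text):
        return sum(weights.get(t, 0.0) for t in set(tokenize(text)))
    return sorted(corpus, key=score, reverse=True)

# Hypothetical seed set coded by human reviewers.
seed = [
    ("breach of the supply contract and penalty clause", True),
    ("delivery delayed breach of warranty terms", True),
    ("office party catering invoice", False),
    ("holiday schedule and parking arrangements", False),
]
corpus = [
    "quarterly parking invoice",
    "notice of breach under the supply contract",
]
ranked = rank_for_review(train_term_weights(seed), corpus)
print(ranked[0])  # the contract-breach notice is surfaced first
```

The efficiency gain arises because reviewers spend their hours at the top of the ranked list rather than reading the corpus in arbitrary order; the due process questions examined next arise because the learned weights, not a human, determine that ordering.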
- The Due Process Perspective: Qualitative Risk Evaluation
The second, and arguably more critical, phase involves a qualitative evaluation of the legal and ethical risks that AI poses to the fundamental principles of due process and procedural fairness in international arbitration. This will be achieved by scrutinizing three specific threats:
- The Black Box and Transparency: The methodology analyses the fundamental right of a party to understand the basis of a decision, focusing on the difficulty of reconciling AI opacity (the “black-box” phenomenon) with the requirement for reasoned awards under international instruments. The analysis will then assess the effectiveness of proposed solutions, such as Explainable AI (XAI), in satisfying due process requirements.
- Algorithmic Bias and Impartiality: This sub-section evaluates the potential for AI systems to perpetuate historical or societal biases resident in training data. The methodology will link the presence of inherent bias directly to the legal challenge of ensuring tribunal impartiality (a cornerstone of procedural fairness) and the potential grounds for annulment of an award.
- Accountability and Human Oversight: The final analysis delineates the doctrine of human responsibility. It subsequently investigates the tension between AI autonomy and the legal principle that human arbitrators must remain the ultimate decision-makers. The methodology establishes a standard for what constitutes sufficient human oversight necessary to preclude an award from being challenged on the grounds of a fundamental procedural irregularity.
By systematically applying legal principles to these technological facts, this methodology section provides the evidentiary basis for the suggestions and conclusions regarding the responsible and ethical integration of AI.
SUGGESTIONS
Based on the analysis of AI’s dual impact on efficiency and due process, the following suggestions are offered to guide the responsible integration of technology into international arbitration, ensuring that innovation does not undermine legitimacy.
- Mandatory Disclosure and Documentation (Transparency)
Arbitral institutions (e.g., ICC, LCIA) should mandate the disclosure of substantive AI use by tribunals and counsel. Any AI tool used for material tasks, such as legal analysis, evidence classification, or outcome prediction, must be documented. The disclosure should specify: the AI model utilized, the training data source, and the scope of its reliance. This practice directly addresses the “black box” problem and allows parties to scrutinize the AI’s input, thereby safeguarding the right to be heard.
- Algorithmic Audit and Certification (Impartiality)
Arbitration bodies should develop a framework for AI certification. Prior to use, AI tools intended for substantive legal analysis should undergo an independent algorithmic bias audit to ensure they do not perpetuate biases based on factors like geography or industry, which could compromise the tribunal’s impartiality. Compliance with this standard should be a prerequisite for using the tool in proceedings.
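To make the proposed “algorithmic bias audit” concrete, the following minimal Python sketch implements one screening heuristic an auditor might run: comparing a tool’s favourable-outcome rates across groups against the “four-fifths” threshold borrowed from US employment-discrimination practice. The sample data, group names, and choice of threshold are all illustrative assumptions; a genuine certification regime would require substantially more rigorous statistical testing.

```python
def disparate_impact_ratios(outcomes):
    """Compare favourable-outcome rates across groups as a first-pass bias screen.

    outcomes: dict mapping group name -> (favourable_count, total_count).
    Returns each group's rate divided by the best-performing group's rate;
    a ratio below 0.8 (the 'four-fifths' rule of thumb) flags the tool
    for closer audit before it may be used in proceedings.
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: favourable classifications per respondent jurisdiction.
sample = {"jurisdiction_A": (45, 100), "jurisdiction_B": (30, 100)}
ratios = disparate_impact_ratios(sample)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # jurisdiction_B's rate (0.30) is only 2/3 of A's (0.45)
```

A certification framework of the kind proposed here would run such screens (among others) on representative inputs before a tool is cleared for substantive use, and repeat them periodically as the tool is updated.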
- Codification of the Human Oversight Rule (Accountability)
Institutional rules must explicitly codify the principle that the human arbitrator retains final, unreviewable decision-making authority and full responsibility for the award. AI outputs should be treated as drafts or suggestions, and the arbitrator must actively verify and validate all data or arguments generated by the technology. This ensures that the use of AI remains consistent with the arbitrator’s ethical duty of diligence and integrity.
- Development of AI Protocol in Procedural Orders
Tribunals should adopt model procedural orders that include a specific section on the ethical and practical guidelines for AI use during the case. This proactive measure establishes clear expectations between the tribunal and the parties regarding data confidentiality, the handling of AI errors, and the scope of permissible AI assistance, thereby pre-empting costly due process challenges after the award is rendered.
CONCLUSION
The integration of Artificial Intelligence into international arbitration represents a critical juncture that demands a balanced and regulated approach. This paper set out to examine whether AI functions primarily as an efficiency tool or an existential threat to due process. The analysis confirms that AI offers undeniable, quantifiable benefits in terms of efficiency, particularly in automating document review (TAR), enhancing legal research, and providing predictive analytics to curb the escalating cost and duration of disputes. These technological advancements are essential for the continued relevance and competitiveness of international arbitration.
However, the findings robustly support the argument that the uncritical adoption of AI poses a significant and direct threat to due process. This threat is rooted in three key areas: the opacity of the “black box” algorithms, which undermines the right to a reasoned award; the danger of algorithmic bias, which compromises the fundamental requirement of tribunal impartiality; and the resultant accountability deficit concerning AI-generated errors.
Ultimately, the choice is not between adopting AI or rejecting it, but between using it responsibly or recklessly. The integrity of an arbitral award, which relies on transparency, impartiality, and human judgment, cannot be sacrificed for mere speed. Therefore, the suggestions put forth, including mandatory disclosure, algorithmic audits, and the codification of final human oversight, are not obstacles to innovation, but necessary safeguards. By adopting a framework of calibrated transparency and robust ethical guidelines, international arbitration can successfully harness the power of AI to achieve greater efficiency while simultaneously reinforcing its commitment to procedural fairness, thereby ensuring its legitimacy in the algorithmic era.
Shivansh Sharma, Student of Damodaram Sanjivayya National Law University, Visakhapatnam
Submitted in partial fulfilment of the requirements for the Virtual Internship Program, The Amikus Qriae, September 2025.
[1] Cynthia H. Cwik, International Arbitration Experts Discuss The Efficiency Of Artificial Intelligence Tools In International Arbitration, JAMS (July 9, 2025), https://www.jamsadr.com/blog/2025/international-arbitration-experts-discuss-the-efficiency-of-artificial-intelligence-tools-in-international-arbitration.
[2] See White & Case, 2025 International Arbitration Survey – The path forward: Realities and opportunities in arbitration, Arbitration and AI, (June 2, 2025), https://www.whitecase.com/insight-our-thinking/2025-international-arbitration-survey-arbitration-and-ai.
[3] See Annabelle O. Onyefulu, Artificial Intelligence in International Arbitration: A Step Too Far?, 89 Arbitration: The International Journal of Arbitration, Mediation and Dispute Management 56, 57-58 (2023).
[4] See White & Case, 2025 International Arbitration Survey, supra note 2.
[5] See Aparna Jauhari & Kritika Goswami Ahuja, AI And the Future of Arbitration: Legal and Ethical Challenges, 7 Int’l J. Mgmt. & Hum. 40215, 40217 (2025).
[6] See Heider Cristian Moura Quintão & Murillo de Oliveira Dias, Artificial Intelligence in Arbitration: Opportunities and Regulatory Challenges, 13 Archives of Business Research 19370, 19372 (2025).
[7] See White & Case, 2025 International Arbitration Survey, supra note 2 (reporting that 51% of respondents cite the risk of undetected AI errors and bias as the main obstacle).
[8] See Onyefulu, supra note 3, at 57-58.
[9] See AI, Transparency, and Fairness in International Arbitration: Rethinking Disclosure and Due Process in the Age of Algorithmic Adjudication, ResearchGate (July 5, 2025) (advocating for calibrated transparency).
[10] See Algorithm bias and Discrimination bias in AI-Assisted Legal Processes, Int’l J. L. Mgmt. & Hum. 1, 3 (2025) (discussing how training data can reinforce prejudices).
[11] See Ethical Constraints When Using Artificial Intelligence in Arbitration, FedArb (Aug. 26, 2025) (emphasizing that AI is not a substitute for human insight and judgment).
