ABSTRACT
Artificial intelligence (AI) is rapidly transforming the financial services sector in India, including credit underwriting, robo-advisory, fraud detection, collections, and customer service. Even with the promises of efficiency, scale, and increased financial inclusion, these technologies pose significant legal and regulatory risks. Foremost among these are issues of consumer protection, privacy, discrimination, algorithmic transparency, resilience, and accountability. India has adopted a broadly technology-neutral regulatory stance, addressing the repercussions of technology adoption through a fragmented patchwork of data protection laws, RBI and SEBI guidelines, and cybersecurity requirements that struggle to keep pace with the speed of technology adoption.
This research paper reviews the Indian legal framework that regulates AI-enabled finance, places it in the context of regulatory initiatives around the globe such as the EU AI Act and the OECD AI Principles for trustworthy AI, and assesses its capacity to respond to emerging risks. It draws on a doctrinal and policy-based analysis of the Indian framework to show gaps that remain in the regulatory regime, including disjointed data governance, insufficient duties of algorithmic explainability, inadequate allocation of liability in fintech partnerships, and the absence of standardized AI risk management frameworks. The paper offers a calibrated roadmap for India that emphasizes a sectoral approach to AI governance embedded in existing financial regulations, stronger explainability and fairness obligations, vendor accountability, and alignment with global best practices to support trustworthy AI in financial services.
KEYWORDS
AI in finance, digital lending, robo-advisory, algorithmic credit scoring, RBI, SEBI, DPDP Act, CERT-In, FLDG, explainability, EU AI Act, model risk management.
INTRODUCTION
Artificial intelligence (AI) is reshaping the global financial space, and India is no exception: financial services institutions in the country have begun to use AI for risk underwriting, fraud detection, customer support, and even automated advisory business. Each technology wave offers a significant opportunity to improve efficiency, availability, and scale, but it also brings considerable legal issues around transparency, algorithmic bias, and exploitation. As a result, questions of regulatory oversight, compliance, consumer protection, and regulatory risk have never been more important.
These developments place India’s financial ecosystem at a unique moment of concurrent digital innovation and regulatory framework development. UPI for payments, the Account Aggregator framework for consent-based data sharing, and Aadhaar-based digital onboarding have together built a public digital infrastructure on which AI-driven services can operate. Yet the models deployed on this infrastructure carry their own legal complexity, engaging an emerging legal landscape in which questions of privacy, consent, fairness, and accountability become increasingly intricate in a digitized space. Regulators struggle to keep up with a pace of technology adoption that races ahead of the useful life and adaptability of laws, introducing uncertainty for both suppliers and consumers.
Consequently, sectoral regulators have employed a piecemeal yet adaptive methodology, creating an emerging genre of regulations such as the digital lending framework, information technology outsourcing directions, and the Digital Personal Data Protection Act, 2023, alongside CERT-In directions on cyber security compliance. Though some progress is being made, significant gaps persist, not least in ensuring algorithmic explainability, clarifying liability in fintech partnerships, and consolidating regulatory expectations around data governance. This paper therefore situates India’s regulatory response in a global context, drawing comparisons with the EU AI Act and the OECD AI Principles, and lays the foundation for considering how legal frameworks must evolve from doctrines suited to traditional disputes towards governing AI-driven decision-making in a financial services context.
RESEARCH OBJECTIVES
- To critically examine India’s existing legal and regulatory framework governing AI-driven financial services, with emphasis on data protection, consumer rights, and financial sector regulations.
- To analyze the key legal challenges posed by AI in finance, including issues of algorithmic transparency, liability allocation, bias, and cybersecurity risks.
- To evaluate India’s position in comparison with global frameworks such as the EU AI Act and Singapore’s FEAT model, and propose reforms to create a balanced, accountable, and innovation-friendly legal ecosystem.
RESEARCH METHODOLOGY
This study adopts a doctrinal and policy-oriented methodology, analyzing statutory texts, regulatory circulars, decided cases, and committee documents, and drawing on comparative insights from international frameworks, notably the EU AI Act and the OECD AI Principles. The primary sources comprise core legal texts and secondary regulation: the Information Technology Act, 2000; the Digital Personal Data Protection Act, 2023; the CERT-In Directions; the RBI’s Digital Lending Guidelines and FLDG framework; and SEBI’s Investment Adviser and cyber-resilience regulations. Secondary sources include academic articles, commentaries, and international standards such as ISO/IEC 42001. The methodology also maps AI use-cases in finance to legal obligations in India, identifies compliance gaps, and benchmarks India’s regulatory posture against global best practices. Triangulating doctrinal interpretation, regulatory analysis, and policy comparison in this way gives the study both legal grounding and contextual relevance.
REVIEW OF LITERATURE
Literature on AI-based financial services reveals a complicated matrix of relationships, marked by a significant bidirectional advancement between regulation and technological development. The vast majority of academic studies conclude that India’s digital architecture, based on Aadhaar, e-KYC processes, and UPI, paved the way for wider use of AI across the finance sector. Early studies, however, cautioned against misuse (surveillance, profiling, and disproportionate state power), concerns that gained constitutional weight when the Puttaswamy (2017) judgment recognized privacy as a fundamental right.
A second stream of literature focuses on global codes and frameworks that can set norms for AI governance. The OECD AI Principles speak to fairness, human accountability, and transparency, while the EU AI Act, which classifies creditworthiness assessment as a high-risk activity, creates obligations of risk management, human oversight, and explainability. Scholars comparing these approaches with localized contexts highlight the need for convergence with, and adaptation of, norms drawn from international practice.
The third set of discussions examines India’s state of readiness and regulation. Commentators discuss the Reserve Bank of India’s (RBI) guidelines on digital lending, the FLDG framework, the Securities and Exchange Board of India’s (SEBI) jurisdiction over robo-advisory and algorithmic trading, and the cybersecurity directives of the Indian Computer Emergency Response Team (CERT-In). Critics note that each of these evolving frameworks takes a limited view: none imposes consistent obligations of algorithmic fairness, bias audits, or accountability, which are essential to securing effective consumer protection.
The fourth category of writing employs an interdisciplinary approach spanning legal, technical, and ethical perspectives. These authors argue that explainability, fairness testing, and model governance must not be relegated to technical design considerations but should be mandated by law. They suggest that international standards such as ISO/IEC 42001 be embedded in domestic compliance frameworks, supported by a supervisory sandbox and enforceable audit mechanisms.
Finally, comparative and policy-oriented analyses offer guidance on the future of AI governance for finance. They recommend aligning the DPDP Act with sectoral laws, adopting risk-based frameworks for AI governance similar to the EU AI Act, and ensuring public transparency through disclosures and reporting. The literature emphasizes that India stands at a “crossroads”: having promoted fast technological adoption, it must now mount a long-term legal response that moves beyond a piecemeal approach to compliance and creates a cohesive structure promoting innovation alongside accountability.
TECHNOLOGICAL DIMENSIONS OF AI IN FINANCE
Artificial intelligence has caused a paradigm shift in how financial institutions do business, using advanced computing to improve efficiency, precision, and customer experience. At the heart of this change are machine learning algorithms that analyze vast amounts of structured and unstructured data to predict outcomes, calibrate risk, and enhance decision-making. What most distinguishes AI models from traditional financial models is that AI models learn from the data they analyze: they can adapt to new information flowing into the market and to changes in consumer behavior without being reprogrammed.
AI’s most visible recent use is in credit underwriting and scoring. Lenders using AI increasingly draw on alternative data (e.g., digital payment history, social engagement, and transaction history) to build models where traditional underwriting has been insufficient, extending previously unavailable credit to underrepresented populations. While these gains in financial inclusion are positive, reliance on alternative data creates problems where opaque algorithms exercise fixed authority over lending decisions without explaining their reasoning, leaving rejected borrowers with neither meaningful reasons nor recourse in individual cases.
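To make the scoring mechanics concrete, the logic described above can be sketched as a simple logistic model over alternative-data features. This is a purely illustrative sketch: the feature names and weights are hypothetical and hand-set, whereas a real underwriting model would be trained on historical repayment data.

```python
import math

# Hand-set illustrative weights over hypothetical features; a real
# underwriting model would learn these from historical repayment data.
WEIGHTS = {
    "on_time_upi_payments_ratio": 2.5,  # alternative-data signal
    "monthly_txn_count_norm": 1.2,      # alternative-data signal
    "existing_defaults": -3.0,          # traditional bureau signal
}
BIAS = -1.0

def repayment_score(applicant: dict) -> float:
    """Logistic score in (0, 1): higher means more likely to repay."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A "thin-file" applicant with no bureau history but strong digital
# payment behaviour still receives a usable score.
thin_file = {
    "on_time_upi_payments_ratio": 0.95,
    "monthly_txn_count_norm": 0.8,
    "existing_defaults": 0,
}
print(round(repayment_score(thin_file), 3))
```

The inclusion point matters legally: the applicant above has no bureau history at all, yet the model can still price their risk, which is exactly the promise (and the opacity risk) discussed in the text.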
Another useful application lies in fraud detection and cyber-security. AI systems with deep-learning capabilities can track millions of transactions in real time to find anomalies and patterns that deserve further investigation. Banks and payment services use these models to combat identity theft, phishing, and unauthorized access. Their predictive abilities can reduce systemic risk to financial institutions while affording consumers the same protection. Still, liability remains uncertain where funds are lost as a result of false positives, or where fraud goes undetected by the AI system, questions that ultimately turn on the degree of human oversight built into these systems.
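A toy version of such anomaly detection can be sketched with a simple z-score rule. Production systems use far richer models (deep learning over device, location, and behavioural signals); the threshold and transaction data below are arbitrary illustrations.

```python
import statistics

def flag_anomalies(amounts: list, threshold: float = 2.0) -> list:
    """Return indices of transactions whose amount lies more than
    `threshold` standard deviations from the mean -- a crude stand-in
    for the real-time anomaly detectors described above."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Seven routine payments and one suspicious large transfer.
txns = [120, 95, 110, 105, 98, 10250, 102, 115]
print(flag_anomalies(txns))  # the large transfer at index 5 is flagged
```

Even this toy exposes the liability question raised above: a legitimate large payment would be a false positive under the same rule, and the threshold choice is a policy decision as much as a technical one.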
A third key area of AI’s application, already important and now even more vital, is wealth management through robo-advisors. Robo-advisors use natural language processing and predictive analytics to provide personalized financial advice, strategies, and asset allocations at scale. While this represents an overdue democratization of access to wealth management, robo-advisors complicate customary fiduciary duties and raise questions of accountability for losses resulting from automated financial recommendations. Legal commentators continue to debate whether robo-advisors should be classified as fiduciary agents or treated simply as tools.
Lastly, regulatory technology (RegTech) and supervisory technology (SupTech) are nascent dimensions of AI assistance to regulators themselves. Financial institutions use RegTech to automate compliance, risk management, and reporting, while SupTech helps regulators identify systemic risks, monitor market integrity, and mount an enhanced enforcement response. AI here carries a dual nature, serving both as a tool for financial innovation and as a tool for enhanced governance. Yet applying AI to compliance raises an urgent need for legal standards on issues such as standardization, interoperability, and explainability.
CYBER SECURITY AND DATA PROTECTION IN AI FINANCE
The use of AI in financial services has raised new cyber-attack threats and reinforced existing ones. Organizations, especially those in finance, process massive amounts of sensitive personal and financial data through machine learning systems, heightening the risk of compromise from various attacks. While AI models can be very useful, they remain open to adversarial attacks, data poisoning, and system manipulation, all of which can lead to errors in decision-making and create opportunities for fraud. Governing bodies such as CERT-In have prescribed new cyber-security standards covering, among other things, incident reporting and the securing of IT infrastructure. In many ways, the methods used to carry out cyber-attacks are evolving faster than the methods used to defend against them, leading to an unending race to protect digital data assets from malicious actors.
Just as important is the protection of data, especially in light of the DPDP Act, 2023 and its requirements of consent, purpose limitation, and accountability in data processing. Valid consent is the starting point for legitimizing data usage in finance, and the financial sector’s reliance on large data sets to train AI (credit histories and behavioral analytics, for example) makes compliance with these norms legally and ethically imperative. Compliance is complicated by the difficulty of genuine anonymization, the risk of inadvertent infringement through profiling, the opacity of algorithms, and the need to balance innovation with privacy and data rights.
REGULATORY SANDBOXES AND INNOVATION HUBS
Regulatory sandboxes have emerged as an effective mechanism to balance innovation and regulation in AI-based financial services. Sandboxes allow innovators to experiment with new concepts within a regulated environment subject to regulatory oversight. The Reserve Bank of India’s regulatory sandbox framework, established in 2019, for example, has allowed fintechs and financial institutions to test AI-based credit scoring and robo-advisory models. Sandboxes thus provide a limited test environment in which regulators can monitor risks, consumer impact, and compliance issues before an innovation is deployed more widely. Moreover, sandboxes lower entry barriers for technology-driven startups and provide a regulated environment in which to experiment while managing systemic risks.
Innovation hubs connect regulators and fintechs, simultaneously building regulatory capacity and examining the implementation of emerging technologies (including natural language processing in finance), while supporting responsible innovation and fairness in financial services. Like regulatory sandboxes, however, they attract criticism: India’s frameworks offer limited guidance on the ethics, explainability, and liability of AI decisions compared with counterparts such as the UK or Singapore. For innovation hubs and sandboxes to deliver concrete evidence of effectiveness and efficiency, regulators would need to publish clear criteria for decisions and assessments, coordinate with other regulators, and align with best-practice frameworks used internationally. On those terms, innovation hubs and sandboxes remain well placed to serve both regulatory and sustainability objectives.
LEGAL CONTEXT IN INDIA
India’s regulatory regime for AI-enabled financial services is currently a collection of laws that vary by sector, focused on data protection, consumer rights, and judicial precedents. The foundation is the Information Technology Act, 2000, which also includes rules on cyber security and intermediaries. After a long wait, the Digital Personal Data Protection Act, 2023 introduces a rights-based regime for the processing of personal data, which affects AI applications that profile and credit-score consumers. The Reserve Bank of India (RBI) has issued detailed digital lending guidelines, outsourcing directions, and Fair Lending Practice Codes that address risks in algorithmic lending and fintech partnerships. SEBI has imposed requirements on disclosure, fiduciary duties, and cyber resilience with real-time monitoring, covering robo-advisors, algorithmic traders, and intermediaries deploying AI tools.
Even with these mechanisms, India’s legal framework remains largely reactive and fragmented in addressing the particular risks of AI. Issues such as algorithmic explainability, liability in AI-assisted decisions, and mandatory fairness testing still lack a solid statutory basis. Although the courts have acknowledged some related issues, as in Justice K.S. Puttaswamy v. Union of India (2017), which recognized privacy as a fundamental right, they have nowhere delineated the implications of AI in finance specifically. The current reliance on overarching principles of contract, tort, and constitutional law, which case law shows have been unable to resolve AI-specific issues, creates uncertainty when resolving disputes over biased algorithms, mistaken credit scores, or AI-enabled fraud. This underscores the strong need for harmonized legislation or sectoral guidance squarely focused on AI in financial services, providing legal certainty while leaving space for technological growth.
COMPARATIVE INTERNATIONAL CASE STUDIES
The European Union arguably provides the most advanced example of AI regulation in finance. The EU AI Act recognizes credit scoring, loan approval, and financial profiling as ‘high-risk’ applications, and its framework requires financial institutions to maintain human oversight, be transparent about their algorithms, and conduct exhaustive bias testing. AI-based credit scoring models used by banks in the EU, for example, must meet explainability standards, and regulators can sanction institutions that fail to do so. The EU’s proactive approach shows how regional law can accommodate innovation while providing strong consumer protection, and it offers countries like India a model for setting standards for AI and other emerging technologies.
The regulatory response in the U.S. has been more gradual, decentralized, and enforcement-based. Agencies such as the Federal Trade Commission and the Consumer Financial Protection Bureau have applied consumer protection laws, unfair practices laws, and discrimination prohibitions to AI in finance. Where financial institutions have used AI algorithms for credit assessments, and agencies have in some instances found no violation of the Equal Credit Opportunity Act (ECOA) in the decision-making processes, these cases have nonetheless drawn regulators’ attention to the discriminatory outcomes AI tools can produce in regulated areas like credit. The U.S. model demonstrates the adaptability of existing legal principles to absorb new technology, though critics contend that relying on enforcement actions alone, rather than regulation, may leave consumers with weaker protections and remedies.
Asia offers various strategies. Singapore’s Monetary Authority (MAS), for example, voluntarily introduced the FEAT framework (Fairness, Ethics, Accountability, and Transparency) as soft guidance for the use of AI in financial services; it encourages the industry to adopt AI while leaving enough flexibility that regulators do not hinder its growth. In Hong Kong and Japan, regulators have used regulatory sandboxes to explore AI-powered financial services within a measured supplementary regulatory structure. Overall, these comparative case studies suggest that the EU favors prescriptive codification of AI, the US relies on enforcement under existing laws, and Asian jurisdictions take an adaptive, principle-based, innovation-conducive approach. A holistic understanding of these comparative frameworks presents India with an opportunity to deliver regulatory certainty, consumer protection, and technological advancement together.
SUGGESTIONS
Mandate Algorithmic Explainability:
Financial institutions should be legally required to provide clear and easily understandable reasons for AI-related credit denials and other adverse financial decisions. This would enhance accountability and consumer trust in the digital lending and robo-advisory space.
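For a linear scoring model, such reasons can be generated mechanically by ranking each feature's contribution to the adverse outcome, a simplified version of the "reason codes" familiar from credit reporting. The weights, feature names, and baseline profile below are entirely hypothetical; this is a sketch of the idea, not any regulator's prescribed method.

```python
def reason_codes(weights: dict, applicant: dict,
                 baseline: dict, top_n: int = 2) -> list:
    """Return the features that pushed the applicant's score furthest
    below a baseline profile, as candidate adverse-action reasons."""
    contributions = {
        f: weights[f] * (applicant[f] - baseline[f]) for f in weights
    }
    # Keep only score-lowering features, most damaging first.
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [f for _, f in negatives[:top_n]]

# Hypothetical linear credit model: positive weight helps, negative hurts.
weights = {"income_norm": 1.5, "card_utilization": -2.0, "late_payments": -1.0}
baseline = {"income_norm": 0.5, "card_utilization": 0.4, "late_payments": 0}
applicant = {"income_norm": 0.3, "card_utilization": 0.9, "late_payments": 2}

print(reason_codes(weights, applicant, baseline))
```

A denial letter could then translate the top-ranked features into plain language ("history of late payments", "high card utilization"), which is the kind of explainability obligation the suggestion contemplates.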
Fairness and Bias Assessments:
Regular independent fairness and bias assessments of AI models must be mandated. Frequent audits can prevent discrimination and ensure compliance with constitutional norms and non-discrimination standards.
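One widely used audit statistic that such assessments could compute is the disparate impact ratio: the lowest group approval rate divided by the highest. The "four-fifths" threshold below comes from US employment-discrimination practice and is used here purely as an illustration; the group labels and decision data are hypothetical.

```python
from collections import defaultdict

def disparate_impact(decisions: list) -> float:
    """decisions: (group, approved) pairs. Returns the ratio of the
    lowest group approval rate to the highest; values below 0.8 are
    conventionally treated as a red flag for adverse impact."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = [approved[g] / total[g] for g in total]
    return min(rates) / max(rates)

# Hypothetical audit sample: group A approved 8/10, group B 5/10.
sample = [("A", True)] * 8 + [("A", False)] * 2 \
       + [("B", True)] * 5 + [("B", False)] * 5
print(disparate_impact(sample))  # below the 0.8 rule of thumb
```

A mandated audit regime would of course go further (statistical significance, intersectional groups, proxy variables), but even this single ratio makes bias measurable and reportable.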
Strengthen Liability and Responsibility Structures:
Clear rules are needed to attribute liability among banks, fintech partners, and AI vendors for erroneous decisions or fraudulent activity. This will alleviate confusion and ensure consumers are not left without remedies.
AI-Specific Data Governance Standards:
Regulators must formulate sector-specific standards aligned with the DPDP Act, 2023, covering anonymization protocols, profiling restrictions, and the lawful processing of financial data.
Regulatory Sandboxes with Mandatory Ethical Review:
Regulatory sandboxes can be expanded to include a mandatory ethical review of AI-based financial products. This may help balance innovation with risk management and consumer protection.
Adopt Global Good Practice Frameworks:
India should adapt lessons from the EU AI Act and Singapore’s FEAT framework. Convergence of this nature would support global competitiveness while respecting local socio-economic structures.
Public Transparency and Reporting:
High-impact AI users should publish annual transparency reports detailing datasets, validation methods, and risks identified. This will strengthen regulatory oversight and public confidence.
CONCLUSION
Artificial intelligence has quickly changed India’s financial services landscape, offering unprecedented efficiency, inclusion, and innovation. The legal landscape, however, remains fragmented, relying on a patchwork of sectoral regulation, data protection laws, and judicial principles. While the RBI, SEBI, and CERT-In have each taken critical steps forward, none has undertaken the hard work of assessing and addressing the underlying challenges of algorithmic explainability, liability, and fairness. Without harmonized, AI-suitable legal standards and guidelines, consumers risk inadequate protection and the market faces regulatory ambiguity.
As changes unfold in comparable jurisdictions, the urgency of reform in India becomes clearer. The EU’s AI Act shows that prescriptive high-risk mandates can provide legal certainty, the U.S. example illustrates how existing doctrines can evolve, and Singapore’s FEAT framework demonstrates principles-based, innovation-friendly regulation. These case studies remind India that piecemeal adaptation of existing law and regulation will not suffice; it needs an evidence-based, risk-based sectoral governance model, aligned with global norms yet attentive to local socio-economic realities, in order to build trust in AI-enabled finance.
Looking ahead, India should position itself for proactive governance rather than reactive regulation. Fairness audits, mandatory explainability, clear liability frameworks, and ethical oversight mechanisms will be essential elements. India has the opportunity to develop a strong legal environment combining the statutory principles of the DPDP Act with applicable financial-sector rules and elements of international best practice. By demonstrating that technological advancement and accountability can coexist rather than one engulfing the other, India can set itself on a path to becoming a global leader in AI-enabled financial services.
REFERENCES
Books / Commentaries / Journals Referred
- Ian Brown & Christopher T. Marsden, Regulating Code: Good Governance and Better Regulation in the Information Age (MIT Press, 2013).
- Solove, Daniel J., Understanding Privacy (Harvard University Press, 2008).
- Kuner, Christopher, Transborder Data Flows and Data Privacy Law (Oxford University Press, 2013).
- Rajendra Prasad & Ram Kumar, Artificial Intelligence and Law in India: Challenges and Prospects (NLSIU Journal of Law and Technology, 2022).
- NITI Aayog, Responsible AI for All: Discussion Paper (2021).
- Financial Stability Board, Artificial Intelligence and Machine Learning in Financial Services: Market Developments and Financial Stability Implications (2017).
- Chander, Anupam, “The Racist Algorithm?” (2017) 115 Michigan Law Review 1023.
Online Articles / Sources Referred
- OECD, OECD Principles on Artificial Intelligence (2019), available at: https://oecd.ai/en/ai-principles.
- EU Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (EU AI Act) (2021).
- Reserve Bank of India, Guidelines on Digital Lending (2022).
- SEBI, Guidelines for Investment Advisers and Cybersecurity in Market Infrastructure Institutions (2017–2022).
- CERT-In, Directions on Cybersecurity Incident Reporting (2022).
- ISO/IEC 42001 Artificial Intelligence Management System Standard (2023).
- Monetary Authority of Singapore, FEAT Principles: Fairness, Ethics, Accountability and Transparency in AI and Data Analytics (2018).
Cases Referred
- Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1 – Privacy recognized as a fundamental right.
- Shreya Singhal v. Union of India, (2015) 5 SCC 1 – Free speech, intermediary liability under IT Act.
- State of Maharashtra v. Dr. Praful B. Desai, (2003) 4 SCC 601 – Recognition of technology in legal processes.
- Avtar Singh v. Union of India, AIR 2016 SC 3598 – Due diligence and fairness in administrative decision-making.
- Selvi v. State of Karnataka, (2010) 7 SCC 263 – Consent and self-incrimination, relevant for AI-enabled data profiling.
Statutes / Regulations Referred
- Information Technology Act, 2000.
- Digital Personal Data Protection Act, 2023.
- Reserve Bank of India (RBI) – Digital Lending Guidelines, Fair Lending Practice Code, and Outsourcing Directions.
- Securities and Exchange Board of India (SEBI) – Investment Adviser Regulations, Cyber Resilience Guideline.
- Computer Emergency Response Team of India (CERT-In) Directions on Cybersecurity (2022).
- EU AI Act (Proposed Regulation, 2021).
- Monetary Authority of Singapore (MAS) – FEAT Principles, 2018.
BY KRITI ARORA
GALGOTIAS UNIVERSITY
