BY: ANIRUDH GUPTA
ABSTRACT
The rapid growth of Artificial Intelligence (AI) technologies is changing many industries. However, this progress has brought new challenges for criminal law. As AI systems perform increasingly complex tasks on their own, there have been incidents where AI involvement has led to harm, property loss, or violations of rights. These situations create significant hurdles. Can existing criminal law principles effectively address the risks posed by autonomous systems? Who is responsible when an AI entity commits a crime?
This study explores the issue of criminal responsibility for AI-generated offenses, combining legal theory, legislative analysis, and scholarly viewpoints. It examines the basic elements of criminal responsibility, actus reus (the unlawful act) and mens rea (the guilty mind), and asks how they apply to situations involving artificial intelligence.
The study considers alternative models of liability, including:
– AI as an Innocent Agent: criminal responsibility lies with the human actors who create or misuse AI for illegal activities.
– AI as a Semi-Innocent Agent: creators or users may be held accountable for harm resulting from negligence or foreseeable misuse.
– AI as an Independent Entity: a possible future in which fully autonomous AI entities could be held directly accountable under law.
The study examines classic cases, hypothetical examples, and real incidents, including autonomous vehicle crashes and AI-driven cybercrimes. It shows how courts and legislatures could interpret developer accountability, foreseeability, purpose, and causation. It also looks at existing laws, such as Section 304A of the Indian Penal Code, to find legal gaps and the need for updated regulations.
*Ankit Kumar Padhy & Amit Kumar Padhy, Criminal Liability of the Artificial Intelligence Entities, 8 NIRMA U. L.J. (2019).
**AI Systems and Criminal Liability, Scandinavian University Press.
***Amanda Pinto & Martin Evans, Corporate Criminal Liability (Sweet & Maxwell 2003).
****Dafni Lima, Could AI Agents Be Held Criminally Liable: Artificial Intelligence and the Challenges for Criminal Law, 69 S.C.L. REV. 682 (2018).
KEYWORDS
Artificial Intelligence, Criminal Liability, AI Developers, Legal Responsibility, Actus Reus, Mens Rea
INTRODUCTION
The growing use of Artificial Intelligence (AI) technologies in important areas like healthcare, transportation, law enforcement, and finance has changed how societies think about legal accountability and criminal responsibility. As these systems become more autonomous and make decisions with little human oversight, new legal challenges arise involving crimes committed or facilitated by AI.
When an autonomous AI system engages in a criminal act, such as causing injury, damaging property, or committing fraud, the key question is who should be held responsible. Should the blame fall on the developer who created or programmed the AI, the user or operator who deployed it, the company that owns the system, or, in an extreme case, the AI itself? Since AI systems can gather information, “make decisions,” and potentially learn in ways that go beyond human control, addressing these questions requires a fresh look at basic principles of criminal law.
As a result, legal systems worldwide are discussing whether traditional criminal law can apply to non-human actors. Most modern criminal justice systems rely on two key elements:
Actus Reus (physical act): The actual behaviour that constitutes the crime, such as causing harm, destruction, or engaging in illegal acts.
Mens Rea (guilty mind): The mental element required for responsibility, including intention, knowledge, recklessness, or criminal negligence.
Determining mens rea is especially complex where the immediate actor is an autonomous system. Key factors include whether the developer or operator could reasonably have foreseen the offending behaviour, whether the design or deployment was negligent, and whether there was reckless disregard for known risks.
For example, under general negligence laws (e.g., Section 304A of the Indian Penal Code), a developer may face criminal liability if they failed to implement safety measures and an autonomous vehicle caused harm.
To ensure effective accountability and deter human actors who misuse or fail to monitor AI technologies, the law itself needs adjustment. Many jurisdictions are considering reforms to clarify how responsibility is assigned. These reforms go beyond basic intent and look at causation, foreseeability, and the duty of care owed by technology owners and developers.
RESEARCH METHODOLOGY
This study looks at the criminal responsibility of AI developers for crimes caused by AI, using a doctrinal and comparative approach. The doctrinal methodology involves critically examining current legal principles, laws, and court interpretations. The comparative analysis highlights the differences and similarities in the regulatory frameworks of the main jurisdictions and reveals best practices as well as specific challenges in determining criminal liability for actions involving artificial intelligence.
The study follows this structure:
- Academic Literature Review: The sources include peer-reviewed papers, academic commentary, and key legal writings on artificial intelligence and criminal liability.
- Case Study Analysis: Real-world examples of AI-generated harm, such as accidents involving autonomous vehicles or algorithm-driven financial crimes, illustrate the practical application of liability models.
- Legislative Examination: Statutes and policy reports from the US, the EU, and India, including the Information Technology Act and the Indian Penal Code, are reviewed to assess regulatory coverage and identify legislative gaps.
- Expert Reports & Government Publications: The evaluation of foreseeability, negligence, and vicarious liability related to developer activities is informed by white papers, regulatory guidelines, and analyses from legal authorities.
- Comparative Liability Models: The study examines various legal theories, including:
- Vicarious liability, which attributes criminal responsibility to businesses or superiors for the actions of employees or agents when developers act in their professional capacity.
- “Foreseeable consequence,” which assigns responsibility in cases where developers could have reasonably anticipated harmful outcomes from AI use, even without intentional wrongdoing.
- Direct liability, which considers whether highly independent or “intelligent” AI systems could eventually face criminal charges.
This study evaluates trends specific to each jurisdiction and those that are global, using a multi-source, multi-method approach while referencing key scholarly articles and government documents that influence the liability discussion in technologically advanced societies.
*Ankit Kumar Padhy & Amit Kumar Padhy, Criminal Liability of the Artificial Intelligence Entities, 8 NIRMA U. L.J. (2019).
**Determination of Civil and Criminal liability of Artificial Intelligence, DMEJL (2021).
***European Parliament, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2021.
****Indian Penal Code, 1860, Acts of Parliament, 1860 (India).
REVIEW OF LITERATURE
The legal literature provides various ways to analyze and tackle the issue of criminal responsibility for harm caused by AI. Leading scholars and critics have created several important models, each affecting AI developers, users, and society differently:
- Perpetration-via-Another Model: This view treats AI as a neutral agent or tool. Responsibility falls on the person, usually the developer or user, who intentionally trains, programs, or directs the AI to carry out a criminal act. In this case, the AI system lacks criminal intent; it merely performs the actions it was designed or programmed to perform. Legal scholars note that this mirrors traditional doctrine under which a person commits a crime through an intermediary, such as an animal or a person lacking legal capacity.
For instance, if a developer builds a robot that starts a fire, the robot acts, but the developer is held criminally responsible.
- The Natural-Probable-Consequence Model: This concept holds people accountable for failing to foresee or prevent AI-related harm—not through direct intent, but through negligence or recklessness. If a developer or user could reasonably expect harm from AI use (for example, due to poor programming or lack of monitoring), and such harm occurs, they might be responsible even if the act was not intentional.
For example, if a self-driving car is produced without sufficient safety measures and causes a fatal accident, the developer could face criminal charges for negligence.
- Direct Liability Model: As AI technologies progress, some experts predict that highly autonomous AI with the ability to learn, decide, and change its behavior may one day be held legally responsible. This controversial and mostly hypothetical approach raises fundamental issues regarding the assignment of mens rea (guilty mind) and whether criminal punishment is applicable to non-human agents. It pushes researchers to consider whether current or future AI could fulfill legal standards for criminal liability without human intervention.
For example, discussions may focus on potential charges against an AI that independently engages in harmful actions, but practical questions, such as the AI’s inability to be punished or rehabilitated, remain unresolved.
- Corporate Liability Model: Acknowledging the business environment where AI is often developed and used, this model holds companies accountable for systemic problems like regulatory violations, oversight failures, or lack of ethical safeguards when AI systems cause harm. Jurisdictions with strong corporate criminal liability laws may impose strict liability on businesses, especially for crimes related to public safety or consumer protection.
For instance, a tech company that creates an AI chatbot that collects user data without consent may face corporate liability for not following privacy laws.
Global Scholarly Consensus: Legal scholars have consistently raised concerns about the adequacy of current criminal law systems in keeping up with rapid technological advancements. Many call for new laws to fill gaps, outline standards of responsibility, and create regulatory frameworks that address the complexity and unpredictability of autonomous systems.
*Ankit Kumar Padhy & Amit Kumar Padhy, Criminal Liability of the Artificial Intelligence Entities, 8 NIRMA U. L.J. (2019).
**Determination of Civil and Criminal liability of Artificial Intelligence, DMEJL (2021).
***Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities: From Science Fiction to Legal Social Control, 4 AKRON INTELL. PROP. J. 179 (2010).
****Amanda Pinto & Martin Evans, Corporate Criminal Liability (Sweet & Maxwell 2003).
*****P. Freitas, F. Andrade & P. Novais, Criminal Liability of Autonomous Agents: from the unthinkable to the plausible (2012).
******European Parliament, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2021.
*******Mindaugas Naucius, Should Fully Autonomous Artificial Intelligence Systems Be Granted Legal Capacity, 17 TEISES APZVALGA L. REV. 113 (2018).
METHOD
To thoroughly examine the criminal responsibility of AI developers for crimes committed by AI, this research uses several systematic strategies:
- Statutory Analysis Across Jurisdictions
The paper looks closely at laws governing criminal liability, including the requirements of actus reus (wrongful act) and mens rea (guilty mind) as set out in relevant instruments such as the Indian Penal Code, the United States Model Penal Code, and new European regulations such as the proposed EU Artificial Intelligence Act. It emphasizes how these basic concepts apply to, or fail to address, situations involving the independent decision-making of AI agents.
- Analysis of Case Law and Judicial Opinions
The study examines judicial interpretations, important cases, and hypothetical examples from academic discussions to highlight how courts reason. This includes how courts deal with negligence in cases like self-driving car accidents, cybercrime involving AI, and product liability claims against tech creators.
- Comparative Evaluation of Liability Models
Three main models (strict liability, vicarious liability, and direct liability) are assessed for their relevance and limits in AI-related crimes. The analysis identifies when strict liability (no intent required), vicarious liability (corporate or employer responsibility), and direct liability of AI entities themselves might apply, drawing on debates in both domestic and international contexts.
- Real-world Incident Analysis
The study evaluates incidents such as tragic self-driving car accidents, financial crimes involving algorithms, and AI-related privacy violations. These cases provide insight into how responsibility should be shared among developers, users, and businesses, along with issues of causation, foreseeability, and culpability.
- Review of Recommendations by International Organizations
The report compiles policy suggestions from law commissions, expert panels, and global regulatory bodies. These guidelines on due diligence, monitoring, and algorithmic transparency aim to reduce risks and clarify accountability for tech producers.
The combined methodological approach aims to uncover the legal, procedural, and practical issues that must be resolved to update criminal law for the challenges posed by autonomous systems.
- Case Law and Crimes Involving AI
Crimes involving AI can include:
- Autonomous Vehicle Accidents. If self-driving cars cause fatalities or injuries due to faulty programming or a lack of safety features, the developer may face charges of negligent homicide.
- AI-Assisted Cybercrime. A user or developer may use AI to carry out unauthorized phishing, identity theft, or ransomware attacks with criminal intent.
- Direct Physical Harm. A robot may be programmed or directed to cause physical harm, for instance by setting fire to a house at the developer's or user's direction. In that situation, the developer could be treated as the perpetrator if they intentionally or recklessly allowed the crime to occur.
- Data Breach and Deepfake Abuse. Developers may be held responsible under computer use and privacy laws if they enabled, failed to prevent, or ignored threats of privacy violations, identity theft, or deepfake misinformation.
*Indian Penal Code, 1860, Acts of Parliament, 1860 (India).
**Amanda Pinto & Martin Evans, Corporate Criminal Liability (Sweet & Maxwell 2003).
***European Parliament, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2021.
****Mindaugas Naucius, Should Fully Autonomous Artificial Intelligence Systems Be Granted Legal Capacity, 17 TEISES APZVALGA L. REV. 113 (2018).
SUGGESTIONS
The legal framework for AI-generated crimes must evolve to reflect the risks and uncertainties of autonomous systems. Based on current research, laws, and legislative efforts, the following recommendations can help clarify and strengthen the criminal accountability of AI developers and companies:
- Clarifying Developer Obligations.
- Governments should clearly define the expected behaviours for AI engineers, focusing on:
- Adequate Foresight: Developers must identify potential hazards, including unintended consequences of autonomous decision-making. They should include safeguards during both development and deployment.
- Supervision and Monitoring: AI systems should undergo continuous monitoring and auditing before and after they are deployed. This includes documenting testing processes and incident records.
- Risk Assessment: The law should require structured risk studies, such as scenario modeling, failure mode analysis, and real-world simulations for high-risk AI applications, particularly those impacting public safety or basic rights (an illustrative sketch of such an analysis follows this list).
- Ethics-Driven Design: Legislatures should use binding regulations to enforce ethical principles like transparency, fairness, and privacy. This will reduce the chances of careless or irresponsible use.
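By way of illustration only, the sketch below shows what a minimal, documented failure-mode analysis for a high-risk AI application might look like if expressed in code. Every failure mode, probability figure, and threshold shown is a hypothetical placeholder rather than a regulatory standard, and a real assessment would be far more detailed and domain-specific.

```python
# Minimal illustrative sketch of a structured risk study (FMEA-style scoring).
# All failure modes, likelihoods, and thresholds below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class FailureMode:
    description: str
    likelihood: float  # estimated probability of occurrence per year (hypothetical)
    severity: int      # 1 (minor) to 5 (catastrophic)


def risk_score(mode: FailureMode) -> float:
    """Simple likelihood-times-severity scoring, a common FMEA-style heuristic."""
    return mode.likelihood * mode.severity


# Hypothetical failure modes for an autonomous-vehicle perception module.
failure_modes = [
    FailureMode("Pedestrian not detected in low light", 0.02, 5),
    FailureMode("Lane markings misread in heavy rain", 0.10, 3),
    FailureMode("Sensor outage without driver alert", 0.01, 4),
]

RISK_THRESHOLD = 0.05  # hypothetical cut-off above which documented mitigation is required

for mode in failure_modes:
    score = risk_score(mode)
    status = "MITIGATION REQUIRED" if score >= RISK_THRESHOLD else "acceptable"
    print(f"{mode.description}: risk={score:.3f} -> {status}")
```

The legal value of such a record is evidentiary: it documents which risks the developer identified, how they were weighed, and which mitigations were triggered, which bears directly on later questions of foreseeability and negligence.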
- Establishing Clear Causation Standards.
- Attributing liability requires specific criteria to connect developer actions or inactions to criminal results.
- Legal Causation: Laws should clarify that liability applies only when a developer's actions or omissions contribute significantly to an offense, for instance programming errors that directly cause harm, or the failure to provide necessary safeguards that enables unlawful exploitation.
- Foreseeability: Statutory definitions should establish what a “reasonable” developer could expect, distinguishing between foreseeable harm and distant or speculative risks.
- Special Rules for Autonomous Acts: In situations with limited human oversight, highly autonomous AI may need new legal provisions or court guidance on indirect or system-level liability.
- Promoting International Harmonization
- Due to AI’s global reach, coordinated international regulation is essential.
- Standardization of Liability Frameworks: Treaties or international agreements should set universal definitions, criteria for diligence, liability thresholds, and enforcement mechanisms for AI crimes across all jurisdictions.
- Enabling Cross-Border Remedies: Victims of AI-related harm should have access to justice, regardless of the location of the developer or deploying company, aided by coordinated legal agreements and international collaboration.
- Global Best Practice Exchanges: International organizations like the UN, EU, and OECD should facilitate regular discussions among policymakers, experts, and industry leaders. This helps update standards and share insights from different legal systems.
- Regulating Advanced AI
- As technology approaches Artificial General Intelligence (AGI), legal systems should:
- Anticipate New Risks: Draft laws for future AI with learning and adaptive capabilities, including rules for accountability where intent is unclear or absent.
- Algorithmic Accountability: For critical systems, developers must ensure traceability, explainability, and the ability for human override, so that accountability is preserved as the level of autonomy increases (see the illustrative sketch after this list).
- Institutional Oversight: Governments should form specialized bodies or commissions to assess, monitor, and, if necessary, intervene in the functioning of advanced AI systems that present systemic risks.
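As a purely illustrative sketch of what traceability with human override can mean in practice, the code below logs every automated decision and records any reviewer's override alongside the original output. The record fields, system name, and confidence threshold are assumptions made for the example; a production system would also need tamper-evident storage, access control, and domain-specific review criteria.

```python
# Minimal illustrative sketch of a decision audit trail with human override.
# Field names, the system identifier, and the review threshold are hypothetical.
import json
from datetime import datetime, timezone

audit_log = []  # in practice: append-only, tamper-evident storage


def record_decision(system_id: str, inputs: dict, decision: str, confidence: float) -> dict:
    """Log an automated decision so it can later be traced and reviewed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "decision": decision,
        "confidence": confidence,
        "human_override": None,
    }
    audit_log.append(entry)
    return entry


def human_override(entry: dict, reviewer: str, new_decision: str, reason: str) -> None:
    """Record a human reviewer's override while preserving the original decision."""
    entry["human_override"] = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "new_decision": new_decision,
        "reason": reason,
    }


# Example: a low-confidence automated decision is routed to a human reviewer.
entry = record_decision("screening-model-v2", {"applicant_id": "A-102"}, "reject", 0.55)
if entry["confidence"] < 0.7:  # hypothetical threshold for mandatory human review
    human_override(entry, "compliance_officer_1", "manual review", "confidence below threshold")

print(json.dumps(audit_log, indent=2))
```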
- Expanding Corporate Liability
- To ensure full accountability and foster strong safety cultures within organizations:
- Systemic Responsibility: Laws should expand corporate criminal liability to cover not just individual developer misconduct but also organizational failures, such as lack of training, oversight, or security measures.
- Compliance Program Mandates: Companies must create clear compliance standards for AI design, deployment, and monitoring. These standards should be regularly audited by independent agencies.
- Enhanced Remedies and Penalties: Laws should empower courts to impose penalties and damages along with required operational changes, like product recalls or mandatory upgrades. This ensures effective corrections and protection for affected parties.
*Determination of Civil and Criminal liability of Artificial Intelligence, DMEJL (2021).
**Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities: From Science Fiction to Legal Social Control, 4 AKRON INTELL. PROP. J. 179 (2010).
***Amanda Pinto & Martin Evans, Corporate Criminal Liability (Sweet & Maxwell 2003).
****P. Freitas, F. Andrade & P. Novais, Criminal Liability of Autonomous Agents: from the unthinkable to the plausible (2012).
*****European Parliament, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 2021.
******Mindaugas Naucius, Should Fully Autonomous Artificial Intelligence Systems Be Granted Legal Capacity, 17 TEISES APZVALGA L. REV. 113 (2018).
Technology-Specific Issues in the AI-Criminal Law Context
- Algorithmic bias.
AI systems used in criminal justice, such as predictive policing and sentencing tools, often reflect biases present in their training data. This can lead to unfair outcomes for minority groups, especially where historical data embeds discriminatory practices. AI-driven decisions may reinforce existing inequalities and result in unjust criminal charges or penalties (a minimal illustrative disparity check appears after this list).
- Explainability (“Black Box” Problem).
Many AI algorithms operate in ways that are hard to interpret, even for their creators. This opacity raises due process concerns, as those affected by AI-generated outcomes may struggle to understand or challenge how the decisions were made. Courts and legal professionals may likewise find it difficult to assess and verify AI-generated evidence and recommendations.
- Data Privacy.
AI relies heavily on collecting and processing large amounts of data, which often includes personal or sensitive information. Legal frameworks need to tackle issues like illegal surveillance, improper data sharing, and the risk of data breaches. The rise of AI-generated deepfake technology further complicates data security and privacy, making it essential to set strict standards for validating and using evidence.
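To make the idea of a bias audit concrete, here is a minimal sketch of a disparity check on the outputs of a hypothetical risk-scoring tool. The sample decisions, group labels, and the 0.1 tolerance are invented for illustration; real audits use multiple fairness metrics, statistical testing, and far larger samples.

```python
# Minimal illustrative disparity check on a hypothetical tool's outputs.
# The data and the tolerance value are invented for the example.
from collections import defaultdict

# Hypothetical decisions: (demographic group, flagged_as_high_risk)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: {"flagged": 0, "total": 0})
for group, flagged in decisions:
    counts[group]["total"] += 1
    counts[group]["flagged"] += int(flagged)

rates = {group: c["flagged"] / c["total"] for group, c in counts.items()}
disparity = max(rates.values()) - min(rates.values())

print("Flag rates by group:", rates)
if disparity > 0.1:  # hypothetical tolerance for the difference in flag rates
    print(f"Warning: disparity of {disparity:.2f} exceeds tolerance; review required.")
```

A documented check of this kind also speaks to the foreseeability and due diligence questions discussed above: a developer who never examined group-level disparities will find it harder to argue that a discriminatory outcome was unforeseeable.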
Additional Technological Risks
- Deepfakes and Fabrication: AI-generated videos and audio can convincingly mimic real people, threatening the reliability of digital evidence. The legal system needs robust authentication procedures (one building block is sketched after this list).
- Professional Misconduct: Lawyers using AI tools must be careful, as careless dependence can lead to ethical issues, such as presenting briefs with incorrect case law created by AI.
- Regulatory Developments: Courts are setting standards for accepting AI-generated evidence, ensuring it is as reliable and clear as expert testimony.
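The following sketch illustrates one building block of such authentication: recording a cryptographic hash of an evidence file when it is collected so that any later alteration can be detected. It is an illustrative example under stated assumptions, not a deepfake detector; the file name is hypothetical, and hashing only shows that a file has not changed since it was fingerprinted, not that its content is genuine.

```python
# Minimal illustrative sketch: tamper detection for digital evidence via hashing.
# This does not detect deepfakes; it only shows whether a file has changed
# since it was fingerprinted. The file name below is hypothetical.
import hashlib
from pathlib import Path


def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify(path: Path, recorded_digest: str) -> bool:
    """Check that the file still matches the digest recorded at collection time."""
    return fingerprint(path) == recorded_digest


evidence = Path("interview_recording.mp4")  # hypothetical evidence file
if evidence.exists():
    digest_at_collection = fingerprint(evidence)
    print("Recorded digest:", digest_at_collection)
    print("Unchanged since collection:", verify(evidence, digest_at_collection))
```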
Ethical and Regulatory Imperatives
- Legal professionals must understand the technical tools they use, check AI-assisted results, and ensure transparency and fairness.
- To reduce technological risks in criminal law, international organizations and legal authorities are creating guidelines and standards, including impact assessments, system disclosures, and transparency measures.
- AI offers significant potential, but it also brings serious new risks. Tackling algorithmic bias, addressing the “black box” issue, and enhancing data privacy protections are key legal challenges that need attention to maintain accountability in criminal law and the integrity of justice.
CONCLUSION
The criminal responsibility of AI developers for crimes committed by AI is a complex legal issue that existing laws only partly address. As AI systems increasingly perform autonomous actions that can cause harm or break laws, current criminal law principles like actus reus and mens rea must adapt so that responsibility can be fairly assigned. Legal frameworks mostly view AI either as a neutral tool, placing responsibility on creators and users who knowingly or negligently enable harmful actions, or as a semi-autonomous agent, where foreseeability and negligence shape liability. Because of ongoing questions about intent and punishment, recognizing highly autonomous AI as a separate legal entity remains largely theoretical.
In India, there is no direct case law on the criminal liability of AI developers. However, legal scholars suggest applying traditional negligence rules, like those in Section 304A of the Indian Penal Code, to cases where developers do not foresee or prevent harm from AI systems. Principles of corporate responsibility also apply to companies that use AI technologies, emphasizing the importance of compliance, monitoring, and risk management.
To tackle new risks, legal frameworks need to change by clarifying developers’ responsibilities concerning foresight, oversight, and ethics. They should set standards for causation that consider foreseeability, encourage international agreement on AI liability rules, regulate advanced AI capabilities, and enhance corporate accountability. These changes will offer clearer guidance on assigning criminal responsibility in AI-related crimes, ensuring effective deterrence and justice.
Ultimately, the future of criminal liability in AI situations will hinge on finding a balance between innovation and responsibility. We must ensure public safety while promoting responsible AI development and use. Courts, lawmakers, and international regulatory bodies must collaborate to create sensible legislation that reflects the transformative effect of AI on society and criminal justice.
