TOPIC: EXPLORING BHARATIYA NYAYA SANHITA AND THE NEW TECHNOLOGY LANDSCAPE

NAME- BISWAJIT DASH

BRANCH- BBA-LLB

SUBMITTED TO- 

 THE AMIKUS QRIAE

ABSTRACT: 

The Bharatiya Nyaya Sanhita (BNS) 2023 is a major reform of India’s criminal justice system, aimed at updating and streamlining the legal framework to address current concerns. This study investigates the convergence of the BNS 2023 and the fast-changing technological world, with a special emphasis on how advances in digital technologies such as artificial intelligence, blockchain, big data analytics, and cybersecurity are altering the implementation of criminal law. As the digital era continues to transform society, the criminal justice system must deal with new types of crime, such as cybercrime, online fraud, identity theft, and digital evidence manipulation, all of which necessitate the development of novel legal frameworks. One of the BNS 2023’s main focuses is technology-related crime, including legal measures for dealing with cybercrimes and with digital evidence in criminal investigations. As technologies such as artificial intelligence and facial recognition become more prevalent in law enforcement, the Act seeks to integrate these techniques while protecting individual rights and privacy. The study investigates how the BNS 2023 intends to govern the use of technology by law enforcement authorities, ensuring that breakthroughs such as AI do not jeopardize fundamental concepts such as the right to a fair trial, due process, and protection from unjustified surveillance. Furthermore, this study delves into the ethical and legal quandaries presented by technological improvements in the criminal justice system. For example, the use of surveillance technologies like drones and facial recognition software raises questions about privacy, consent, and potential misuse. The study also looks at the significance of blockchain in preserving the integrity of digital evidence, as well as the rising dependence on data analytics to predict and prevent criminal activities.

KEYWORDS:  

  1. Bharatiya Nyaya Sanhita (BNS) 2023 
  2. Criminal Justice Reform 
  3. Technological Landscape 
  4. Digital Evidence 
  5. Cybercrimes 
  6. Legal Framework 
  7. Digital Age Criminality 
  8. Legal and Technological Intersection

INTRODUCTION

The rapid pace of technological advancement has dramatically changed the crime landscape, ushering in a new era of tech-driven offenses that put traditional legal systems to the test. The Bharatiya Nyaya Sanhita, 2023, recognizes how crime is evolving in our digital world and seeks to tackle the new threats that technology brings, such as cybercrimes, online exploitation, data breaches, and financial fraud. This research paper takes a deep dive into the complex role technology plays in both facilitating and combating modern crimes, highlighting key provisions in the Bharatiya Nyaya Sanhita designed to address these issues. 

It will look at the historical background and the emergence of technology-driven crimes, including cyber fraud, identity theft, and violations of data privacy. The paper will also analyze specific legal measures aimed at cyber offenses like hacking, phishing, and unauthorized data access, while considering how artificial intelligence (AI) and machine learning (ML) can help detect and prevent these crimes. Additionally, it will discuss the admissibility of digital evidence and the challenges that come with it, as well as the importance of protecting personal data during criminal investigations. Lastly, the paper will explore legal protections against online harassment, cyberbullying, and the increasing risks associated with cryptocurrency crimes.

The research will dive into how social media platforms are held responsible for spreading misinformation and hate speech, tackle the tricky issues surrounding illicit trade on the dark web, and examine the child protection measures designed to combat online exploitation. Additionally, the paper will discuss the legal ramifications of surveillance technologies, such as biometrics and facial recognition, while also weighing the threats that cyber warfare poses to national security. It will critically analyze the collaboration with tech companies in the fight against cybercrimes, the jurisdictional hurdles in cross-border offenses, and the potential legal reforms needed for crimes driven by technology. Ultimately, this paper aims to provide a thorough review of the Bharatiya Nyaya Sanhita and its initiatives to tackle the ever-evolving challenges posed by technology-facilitated crime.

The rise of surveillance and advanced policing systems highlights the urgent need to tackle these issues. As companies strive to boost productivity and reduce human involvement, the potential for AI to unintentionally discriminate or cause harm presents a new legal and ethical challenge. The BNS sets out standards for liability, recognizing accountability and punishment, but these standards are rooted in rational principles that assume human intent, blame, and foreseeable risks. This framework, which equates knowledge with responsibility, becomes problematic given AI’s autonomy. The foundational principles of BNS criminal responsibility rely on mental attributes that AI simply doesn’t have. AI isn’t a sentient being; it lacks ‘intent’ in the legal sense and ‘knowledge’ in the human sense. This means that traditional criminal liability doesn’t easily apply under Indian law, which hinges on the doctrine of mens rea. Nevertheless, as autonomous AI systems become more prevalent across society, establishing a clear body of law to regulate these systems is becoming increasingly essential.

Currently, in India, the framework for criminal responsibility hinges on two key legal principles: mens rea and actus reus. The foundation of criminal liability as outlined in the Bharatiya Nyaya Sanhita is very much centered around human perspectives, making it tricky to apply these concepts to non-human entities like AI. As a result, when it comes to the autonomous actions of AI, we find ourselves in a bit of a legal gray area. This isn’t just a local issue; other countries are grappling with similar questions, ranging from ideas about granting legal personhood to AI to placing liability solely on the developers. The European Union’s draft regulations have started to pave the way for assigning some level of responsibility to AI entities. However, this raises a pressing question in Indian law: as technology, especially AI, evolves at lightning speed, how can the law keep up? Therefore, exploring potential legal frameworks for AI accountability isn’t just an academic exercise- it’s a crucial need for our current criminal justice system.

OVERVIEW OF CYBERCRIME IN INDIA AND THE BHARATIYA NYAYA SANHITA (BNS), 2023

Cybercrime in India has seen a shocking increase over the last ten years, driven by rapid digitization, more people getting online, and the rise of smartphones. The range of cyber offenses is quite broad, including phishing scams, data breaches, identity theft, financial fraud, ransomware attacks, and cyberstalking. On top of that, the spread of misinformation and hate speech on digital platforms has heightened social unrest, posing significant challenges for law enforcement agencies trying to keep the peace. As technology continues to advance, so do the tactics of cybercriminals, making traditional investigative methods less effective against these sophisticated threats.

In response to these issues, the Bharatiya Nyaya Sanhita (BNS) aims to completely revamp India’s criminal laws. It brings in updated provisions designed to tackle new crimes in the digital age, especially those that target individuals, financial institutions, and public safety. The BNS recognizes how crucial technology is in fighting cyber offenses, with specific sections addressing issues like spreading misinformation, online defamation, and inciting violence through hate speech.

However, while the BNS takes a forward-thinking approach, it doesn’t quite clarify how to practically apply advanced tools like Artificial Intelligence (AI) in cybercrime investigations. The law leans heavily on broad technological monitoring but lacks clear guidelines for incorporating AI-driven solutions. This gap creates a disconnect between what the law intends and how it can be effectively enforced.

 For the BNS to work well, it needs to embrace modern technological solutions that can provide real-time detection, thorough digital forensics, and secure evidence management. This paper looks into how AI can fill these gaps, boosting the BNS’s capability to tackle the complex and ever-changing world of cybercrime in India.

TECHNOLOGY-DRIVEN CRIME TRENDS IN INDIA

The rapid growth of digital technology in India has changed how crime happens and what it looks like. As more people use the internet, smartphones, and online services, technology-based crimes have risen sharply. Individuals and companies alike worry about cyber scams, stolen identities, and data privacy breaches. Cybercriminals use online platforms to deceive victims and siphon off their money, often through fake emails, malicious software, and social engineering. Identity theft has become a serious problem, with offenders stealing personal information to impersonate others, causing financial loss and harm to reputations. The misuse of personal data, including unauthorized access to it, has also set off alarms as India’s digital adoption deepens and more personal information is collected for business and government purposes. Social media and e-commerce platforms have opened new avenues for online harassment, defamation, and other harmful acts. These trends show the need for an updated legal framework to deal with both traditional and emerging forms of crime in the digital world.

CONCEPTUAL FRAMEWORK OF CRIMINAL LIABILITY

The idea of criminal responsibility, which has existed in legal culture for hundreds of years, rests on principles that assume human actors are rational and in control of their wrongful actions. To be held responsible, one must have a mental component like intent, knowledge, or recklessness, and a physical component that causes harm or breaks the law. This structure is summed up by key concepts such as mens rea (the mental element of a crime) and actus reus (the physical element). But as AI takes on a more autonomous role in society, setting out legal rules for it becomes difficult. With AI now acting almost on its own, fitting it into current criminal liability law creates problems that legal systems have not yet fully grasped. From a criminal justice standpoint, the difficulty is that these doctrines are by nature centred on human qualities, while AI systems can act without any direct human input.

  1. Traditional Criminal Liability Principles 

Traditional criminal responsibility is closely tied to the human element and to morality. Indian criminal law, like criminal law across most of the world, locates guilt and criminal responsibility in the rational, voluntary control a defendant exercises over his or her actions. For instance, in cases of murder or theft, the legal system presupposes that the actors understand the nature of those acts and are therefore able to refrain from them. This assumption is aligned with retributive theories of punishment, which seek not only to hold the offender morally accountable for his or her fault but also to penalize him or her as an offender. Provisions of the BNS such as “Section 100”, which deals with culpable homicide, likewise presuppose that the perpetrator acted with an intention to cause death or with knowledge that death was likely, consistent with the view that people ordinarily understand the consequences of their acts.

  2. Mens Rea and Actus Reus: Key Components of Criminal Liability 

Two of the key ideas in the theory of criminal responsibility remain the doctrines of mens rea and actus reus. Mens rea denotes the guilty mind, the mental element that must accompany the physical element of an offence. Thus, for a person to be held criminally responsible for an illegal act, evidence of the actus reus as well as the necessary criminal state of mind must be established. In other words, under Indian criminal law it is the wrongful act, combined with the intent (or its statutory equivalent) to commit it, that makes a person liable for an offence. 

AI systems, on the other hand, lack mens rea: they are not conscious, have no emotions, and have no moral compass. Even an advanced AI that appears to pursue goals of its own is only executing the outcomes of its prior programming, not acting on any intent of its own. A self-driving car, for example, is not capable of “choosing” to run a red light or to hit a pedestrian in the way a human driver can; it simply processes the data fed to it, performs real-time calculations, and executes an action. The possibility of an AI performing the actus reus of an offence without any mens rea therefore complicates the task of applying traditional criminal liability principles. The problem is made more severe by machine learning, since such systems change their behaviour as they are exposed to more data. With a conventional machine the position is clearer: the operator or the manufacturer can be held liable for a wrongful act, because the machine itself will not display errors of judgment. An autonomous AI system does not fit neatly into the criminal law’s division between actus reus and mens rea, and this creates real difficulty in determining criminal responsibility for its actions.

  3. Challenges in Applying Traditional Liability to AI Systems

Understanding AI systems through the lens of traditional criminal law is challenging. This mainly arises from the significant gap between the way criminal law is applied and how AI systems function. AI lacks the human trait known as mens rea, which involves having intention or knowledge. Since AI isn’t conscious or moral, it cannot intend to do something in the legal sense, which is important in Indian criminal law. For instance, if a self-driving car causes an accident resulting in a death, determining who is responsible becomes complex. The AI system lacks awareness that it might be endangering lives. Traditional legal approaches, such as those under ‘Section 106’ of the BNS concerning causing death by negligence, aren’t well suited to AI. This is because negligence implies a human failure to be careful, a distinctly human weakness.

Moreover, the way AI learns adds another layer of complexity to accountability. Most AI systems use advanced algorithms that evolve over time, leading to possible unpredictable outcomes. If an AI system independently makes a decision that results in crime or harm, identifying who is responsible—whether it’s the programmer, the user, or the AI system itself—is a challenging question. Additionally, the idea of legally holding an AI system accountable doesn’t align with modern law. Laws focus on punishing people to prevent crime, but machines can’t be punished like humans. Punishment is meant for humans, and AI can’t learn from it or experience imprisonment as humans do.

TYPES OF AI-RELATED OFFENSES AND POTENTIAL LIABILITIES

AI crimes can be minor or cause serious harm, either physically or financially. As AI systems become more complicated, there are more chances for someone to be blamed for damages, directly or indirectly. Self-driving cars are a notable example. They have sometimes been involved in fatal accidents. For example, if a self-driving car makes a quick decision based on incorrect traffic information, it could cause injuries. In such cases, it’s difficult to figure out who is legally at fault. Is it the car maker, the software developer, or someone else? Product liability laws generally hold manufacturers and developers responsible for system issues, but these laws don’t entirely cover problems caused by AI acting independently.

AI can create risks in many areas because its algorithms might make unfair decisions. This can have a negative impact on people of color or women, particularly in fields like law enforcement and healthcare. Technologies such as predictive policing have been criticized for promoting racism and classism. In a diverse country like India, any AI tool used in policing or hiring that discriminates based on its data could face legal challenges under the nation’s discrimination laws. While not always considered criminal, these biases raise important legal questions about accountability, including civil liability and human rights issues.

AI also introduces new challenges in the realm of cybercrime. Criminals use AI for various attacks, including DDoS attacks and phishing scams. In India, such cyber offenses are covered by the Information Technology Act of 2000 and its 2008 amendment. However, these laws have not yet addressed AI-based cyber-attacks. When an AI system changes its hacking methods without human input, determining who is responsible for any damages becomes complicated. Developers might argue that the AI exceeded its intended capabilities, while victims might hold the developers accountable. The use of AI in crime indicates a strong need for legal updates to effectively manage these scenarios.

LEGAL ISSUES IN ESTABLISHING AI CRIMINAL LIABILITY

Introducing criminal responsibility for AI comes with many legal challenges. These challenges include questions about who is accountable when AI makes decisions, legal concerns surrounding AI actions, and whether AI can have the same legal responsibilities as a human. In India, the law is generally created with people in mind, which makes it difficult to apply these laws to AI systems that are complex and operate on their own. Another issue arises when AI takes control of situations, as this shifts the traditional understanding of accountability in criminal law. This text discusses the need to rethink these legal concepts as AI becomes more involved in decision-making.

ISSUES OF ACCOUNTABILITY AND ATTRIBUTION

One major legal issue with AI is determining who is responsible when an AI system does something that would be considered a crime if done by a person. In Indian law, responsibility is usually assigned to someone whose actions cause harm. However, AI can make decisions on its own, which makes it difficult to determine who to hold accountable. This raises the question of whether the creators, operators, or users of AI should be blamed.

For example, if a self-driving car gets into an accident, it’s often unclear who is at fault. The problem might not stem from a single reason like a flawed design or improper use. In a real-life case, both the car manufacturer and the software developer blamed each other. The manufacturer might argue that the developer should be responsible for any software flaws, while the developer might claim the manufacturer failed to properly manage the installation of the AI in the car. This spreading of responsibilities creates legal gaps, making it extremely challenging to hold anyone accountable for the actions of AI systems.  

In criminal law, a person usually needs a “guilty mind” or intention to do something wrong to be considered responsible. But AI doesn’t have consciousness or intent, so it cannot have what is called mens rea, meaning a guilty mind. For example, if a self-driving car causes harm or death, it’s hard to say the car had criminal intent, as it can’t think. Indian courts would face challenges in classifying AI actions under “Sections 100” and “106” of the BNS, which are about culpable homicide and negligence, because these laws assume a level of reasoning that AI doesn’t possess. Right now, it’s unclear who, if anyone, should be held criminally responsible when AI is involved in such incidents.

ROLE OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING IN CRIME DETECTION AND PREVENTION 

Artificial Intelligence (AI) and Machine Learning (ML) are now crucial tools in today’s policing, especially for spotting and stopping crimes. These technologies assist police by examining large volumes of data quickly and accurately, helping to identify patterns or unusual activities that might otherwise be missed. AI can help identify possible suspects or predict criminal activities using historical data, allowing police to take preventive action. For instance, machine learning can examine trends in cybercrime, which helps prevent future crimes such as online fraud, identity theft, or phishing scams. Besides predicting crimes, AI and ML also aid investigations by giving law enforcement tools to analyze digital evidence and find connections between different events. They assist in facial recognition, voice analysis, and image recognition, which helps track criminals and solve complex cases. But, the use of AI in policing brings up concerns about privacy, biased data, and potential misuse, so it is important to balance these technologies with legal safeguards to ensure proper use.

Below are key factors and areas where AI enhances cybercrime investigation:

  1. REAL-TIME DETECTION AND MONITORING 

AI is used to keep an eye on a lot of online activity all the time and can spot suspicious patterns right away. It uses tools like machine learning algorithms and Natural Language Processing (NLP) to check social media, forums, and other digital platforms. These tools are effective at identifying potential threats, including fraud, hacking attempts, as well as harmful content like hate speech and misinformation. Automating these tasks with AI means there’s less need for people to watch over everything constantly. This allows for quicker reactions when problems are found, helping to lessen any potential harm.
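To make this concrete, the following is a minimal illustrative sketch, not an official tool or anything prescribed by the BNS, of how an unsupervised anomaly detector could flag suspicious activity in near real time. The feature names, figures, and thresholds are assumptions chosen purely for demonstration.

```python
# Minimal sketch: flagging anomalous login/transaction activity in near real time.
# All field names, values, and thresholds are illustrative assumptions.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical baseline activity: [requests_per_minute, failed_logins, bytes_sent_kb]
baseline = np.array([
    [12, 0, 150], [9, 1, 120], [15, 0, 200], [11, 0, 90],
    [10, 1, 110], [14, 0, 180], [13, 0, 160], [8, 0, 100],
])

# Train an unsupervised anomaly detector on normal traffic only.
detector = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

# Score incoming events as they arrive; -1 marks a suspicious pattern.
incoming = np.array([[11, 0, 130], [250, 40, 9000]])  # second row mimics a brute-force burst
for event, label in zip(incoming, detector.predict(incoming)):
    status = "SUSPICIOUS - escalate for review" if label == -1 else "normal"
    print(event, status)
```

In practice such a detector would sit behind the monitoring layer described above and only escalate flagged events to human investigators.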

  2. PREDICTIVE ANALYTICS AND THREAT INTELLIGENCE

AI isn’t just for spotting issues; it also examines past data to predict future cyber threats. This process, known as predictive analytics, helps determine when, how, and where potential attacks might occur, allowing people to take action in advance. For example, by studying previous cases, AI can predict fraud schemes or analyze metadata to identify potential cyberterrorism threats. By gathering insights from various sources, AI helps to make preventive actions even more effective.
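Under the same caveat, the sketch below shows how predictive analytics might score a new incident against historical case data using a standard supervised classifier; the features, records, and labels are hypothetical and chosen only to illustrate the workflow.

```python
# Minimal sketch of predictive analytics over past incident records.
# Feature names, data, and labels are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Hypothetical historical cases: [hour_of_day, amount_inr_thousands, new_beneficiary, prior_reports]
X_hist = np.array([
    [2, 450, 1, 3], [14, 12, 0, 0], [3, 600, 1, 5], [11, 8, 0, 0],
    [1, 300, 1, 2], [16, 20, 0, 1], [4, 520, 1, 4], [13, 15, 0, 0],
])
y_hist = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = later confirmed fraudulent

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hist, y_hist)

# Score a new transaction; a high probability would trigger proactive review.
new_case = np.array([[3, 480, 1, 2]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated fraud risk: {risk:.2f}")
```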

  3. DIGITAL FORENSICS AND EVIDENCE COLLECTION

AI helps gather and examine digital evidence by automatically extracting important data from devices, cloud storage, and communication platforms. It ensures evidence is secure and can be used in court by creating records that cannot be altered and keeping a clear track of who handles the evidence. This makes forensic investigations faster and more dependable.
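A simple way to make such records tamper-evident is cryptographic hashing combined with a timestamped custody log. The sketch below assumes hypothetical file paths, handler names, and a log format; it illustrates the general technique rather than any procedure mandated by the Sanhita.

```python
# Minimal sketch: hashing an evidence file and logging a chain-of-custody entry.
# File paths, handler names, and the log format are illustrative assumptions.
import hashlib, json, datetime, pathlib

def sha256_of(path: pathlib.Path) -> str:
    """Compute a SHA-256 digest so later tampering can be detected."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_custody(evidence: pathlib.Path, handler: str, log: pathlib.Path) -> None:
    """Append a timestamped custody entry containing the current digest."""
    entry = {
        "file": evidence.name,
        "sha256": sha256_of(evidence),
        "handler": handler,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Usage (hypothetical paths):
# record_custody(pathlib.Path("chat_export.db"), "Investigating Officer 12", pathlib.Path("custody.log"))
```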

  4. ENHANCING EVIDENCE INTEGRITY AND DATA SECURITY 

AI enhances the acceptance of digital evidence in court by using technology like blockchain to protect and verify data. Blockchain ensures that evidence cannot be altered, keeping it genuine. Additionally, AI helps comply with data protection laws, like India’s Digital Personal Data Protection Act. It achieves this by anonymizing private information and safeguarding privacy during investigations.
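The hash-chaining idea behind blockchain can be illustrated in a few lines: each evidence record carries the hash of the previous record, so altering any earlier entry invalidates everything that follows. The record fields, the anonymisation of the complainant’s name, and the ledger structure below are illustrative assumptions, not a description of any system required by the BNS or the Digital Personal Data Protection Act.

```python
# Minimal sketch of a blockchain-style hash chain over evidence records.
# Record fields and the anonymisation step are illustrative assumptions.
import hashlib, json

def _digest(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Link each new evidence record to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    # Anonymise direct identifiers before the record enters the shared ledger.
    record = {**record, "complainant": hashlib.sha256(record["complainant"].encode()).hexdigest()[:12]}
    body = {"prev_hash": prev_hash, "record": record}
    chain.append({**body, "hash": _digest(body)})

def verify(chain: list) -> bool:
    """Any edit to an earlier block breaks every hash that follows it."""
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"], "record": block["record"]}
        if block["hash"] != _digest(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger: list = []
append_block(ledger, {"item": "seized phone image", "complainant": "A. Kumar"})
append_block(ledger, {"item": "email export", "complainant": "A. Kumar"})
print(verify(ledger))  # True; editing any earlier record would make this False
```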

  5. CONTENT MODERATION USING NLP 

To fight against the spread of false information, hate speech, and extremist content, AI-powered Natural Language Processing (NLP) tools examine and identify harmful online content. These tools are able to detect hidden manipulations or content that aims to incite problems. This allows law enforcement to remove such material quickly and trace its origin. Doing this reduces social and political damage and helps hold the responsible people accountable. AI is revolutionizing cybercrime investigations by equipping law enforcement with better tools to fight digital threats. By improving detection, prediction, evidence gathering, and data protection, AI provides a stronger response to the increasing challenges posed by cybercrime.
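As a final illustration, the sketch below shows the core of an NLP content classifier: text is converted to TF-IDF features and a linear model assigns a label that can route a post to human review. The tiny training set and label scheme are hypothetical; a real moderation system would rely on far larger, independently audited datasets and human oversight.

```python
# Minimal sketch of NLP-based content flagging with a bag-of-words classifier.
# The training examples and labels are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "Attack them before they attack us, spread the word",
    "Forwarded as received: the vaccine contains microchips",
    "Community meeting rescheduled to Saturday morning",
    "Great weather for the cricket match today",
]
labels = ["incitement", "misinformation", "benign", "benign"]

# TF-IDF features feed a simple linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(train_posts, labels)

new_post = ["They deserve to be attacked, share everywhere"]
print(clf.predict(new_post)[0])  # flagged posts would be routed to human review
```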

DIGITAL EVIDENCE IN CRIMINAL INVESTIGATION: ADMISSIBILITY AND CHALLENGES 

As criminal investigations increasingly rely on digital evidence, this brings both opportunities and challenges to the legal system. Digital evidence includes emails, chat logs, video footage, and other electronic records. Such evidence is crucial in solving crimes, especially cybercrimes. However, challenges arise in the way this evidence is collected, stored, and used in court. The Bharatiya Nyaya Sanhita of 2023 provides guidelines for handling these complex issues. Ensuring the integrity and authenticity of digital evidence is a key concern. The law mandates that digital evidence must be collected and stored properly to preserve its integrity, making it admissible in court. This involves maintaining a digital chain of custody to prevent tampering or changes. The Sanhita also addresses the admissibility of electronic records in court, specifying the conditions for using digital evidence in criminal trials. Despite these guidelines, challenges remain. Investigators need technical knowledge to manage complex digital evidence effectively. There are concerns about data privacy and protection during evidence collection too. As technology evolves, courts must develop new standards and procedures for handling digital evidence, with variations depending on the crime and jurisdiction. These challenges highlight the need for ongoing updates in legal frameworks and investigator training to keep up with the swiftly changing digital world.

CHALLENGES IN CYBERCRIME DETECTION AND EVIDENCE COLLECTION 

Investigating cybercrime is challenging, especially in India. The legal system is still developing to deal with digital crimes. Major problems include the large scale of cybercrimes and how people can stay anonymous online. Technology is advancing quickly, which makes it hard to keep up. There are also limits in the law and not enough resources to tackle these issues.

  • Scale and complexity of cybercrimes
  • Anonymity in digital world 
  • Rapid technological advancements 
  • Legal and ethical constraints 
  • Insufficient training resources 

ETHICAL AND LEGAL CONSIDERATIONS IN AI DEPLOYMENT FOR CYBERCRIME INVESTIGATION 

Using Artificial Intelligence (AI) in solving cybercrimes introduces many questions related to ethics and law. It’s important to think about these questions carefully to make sure AI is used properly. AI can greatly assist in detecting and preventing cybercrimes and in gathering evidence. However, it also causes concerns about people’s privacy, treating everyone fairly, being responsible, and being clear about how things work. To use AI in ways that meet legal requirements for dealing with cybercrimes, such as the Bharatiya Nyaya Sanhita (BNS) and the Digital Personal Data Protection Act, 2023, we need to address these issues. This also includes following general principles of justice. Here, we will look at the main ethical and legal issues that need to be considered when using AI tools for investigating cybercrimes.

CONCLUSION 

The Bharatiya Nyaya Sanhita, 2023 is a major move by India to address the problems caused by crimes linked to technology. Technology is changing fast, making crimes more complicated and without borders. This shift means we need laws that can handle new kinds of threats. The Sanhita covers topics like data privacy, cyberbullying, cryptocurrency fraud, and rules for digital evidence, showing it understands the digital world’s challenges.

Still, the Sanhita is just one part of the solution. We need bigger and ongoing efforts to really deal with modern crimes. This means working with tech companies, using things like AI and machine learning in investigations, protecting vulnerable groups, and teaming up with other countries to fight international cyber threats. Teaching people about digital safety and having clear laws for cybercrimes will help both citizens and law enforcement better manage these issues.

We must also think ahead and prepare for changes in technology. Our laws need to stay flexible to adapt to new digital threats while protecting people’s rights. By keeping up with legal reforms, partnering with tech experts, and focusing on digital education, India can build a strong system to prevent and tackle technology-driven crimes, ensuring a safe digital future for everyone.

REFERENCES

  1. Shreya Mishra, “Cybersecurity in the Age of Digital India,” 12 Indian Journal of Cyber Law 112 (2019).
  2. Ananya Singh, “Artificial Intelligence and Criminal Law: A Critical Analysis,” 15 Indian Law Review 203 (2021).
  3. Pranav Gupta, “The Impact of Social Media on Crime and Criminal Justice in India,” 9 Journal of Social Science and Legal Studies 58 (2018).
  4. “The Role of Technology in Facilitating and Addressing New-Age Crimes under Bharatiya Nyaya Sanhita, 2023,” https://bpasjournals.com/library-science/index.php/journal/article/view/3043/2854.
  5. Eoghan Casey, Digital Evidence and Computer Crime: Forensic Science, Computers, and the Internet 150–55 (3d ed. 2011).
  6. Digital Personal Data Protection Act, No. 24 of 2023, Acts of Parliament, 2023 (India).
  7. The Bharatiya Nyaya Sanhita, 2023, Acts of Parliament, 2023 (India).

Name: Biswajit Dash
College Name: SOA National Institute of Law