Title: Use of Artificial Intelligence (AI) in Law Enforcement and Its Human Rights Implications

Abstract

This research paper examines the use of AI in law enforcement and its implications for human rights.

Artificial intelligence (AI) is a rapidly developing field that has captured widespread attention, and almost every industry across the globe is incorporating AI applications, including banking and finance, healthcare, sales and marketing, agriculture, and travel and hospitality. This is only a partial list, and this rapid growth brings various drawbacks and challenges. One area where AI is making a significant impact is law, where its application and sufficiency in the legal profession have been the subject of much controversy.

This paper delves into the relationship between AI and law, the use of AI in law enforcement, and its human rights implications, examining how AI could affect rights such as privacy, equality, and freedom of speech.

Keywords

Artificial Intelligence, Law Enforcement, Human Rights, Privacy, Crime, Algorithms.

Introduction

Artificial intelligence (AI) isn't a tool of the future; it's a present-day need. As technology gets smarter every day, various industries seize the opportunity to make the most of AI. In the legal domain, it assists in legal education, professional work, law enforcement, and the judicial system.

Law enforcement needs command over this technology to make its duties easier and more accurate.

AI can go a long way in improving productivity, saving time, streamlining processes, and allowing people to focus on tasks that only humans can do. Law enforcement agencies have unlocked the potential of AI in several important areas, such as surveillance, crime prevention, and crime-solving.

Research Methodology

This research adopts a qualitative approach using secondary data from the internet, books, academic journals, reports, and case studies. The methodology comprises a thorough review of artificial intelligence (AI) in law enforcement and a human rights and ethical assessment of these implementations. To give a broad understanding of the topic, the study also draws on professional opinion and relevant regulations.

Literature Review

Artificial intelligence and its working

Artificial intelligence (AI) is the application of machines or software to carry out tasks that ordinarily require human insight, such as learning, reasoning, problem-solving, perception, and language understanding. AI does this by simulating human intelligence through calculations, information, and computational power. Philosophy, neuroscience, psychology, cognitive science, computer science, engineering, mathematics, and physics provide the basis of artificial intelligence.

Generally, AI refers to algorithms that reflect human thinking in decision-making and can automate tasks that could previously only be performed by humans. However, AI is a field, not a system, and can be categorized into various subfields, such as robotics, image processing, machine learning, and deep learning. AI is particularly useful in fields that require processing large amounts of data, as machines can operate much faster and more efficiently than humans.

Use of AI in Law Enforcement

One area where AI is making a significant impact is the legal field. It assists in legal education (research and analysis, aiding in writing, and virtual learning), in the legal profession (document review and due diligence, predictive analytics, legal chatbots and virtual assistants, and ethical compliance and research), and in the judicial system (case prediction and analytics, legal assistance and access to justice, and case management).

AI in law enforcement is discussed from two angles. The first is to comprehend the use of AI by criminals, where three threats are commonly discussed. The first is digital crime: there is already a problem with cybercrime, and machines are replacing human hackers, posing a threat to security and money. A machine is less expensive, runs around the clock, and self-learns.

Furthermore, it is anticipated that humans will be unable to handle the growing amount of data. Physical crime is the next issue. A drone equipped with an AI program that can identify images and target attacks, for instance, can be piloted remotely. The likelihood of such crimes rises with the accuracy of image recognition. Political crime is the final issue. Fake news and misinformation are not new problems, but until now they have mostly taken written form. False material can now be produced in audio and video formats. False information about corporate CEOs and politicians can damage their reputations; in the extreme, it might even start a conflict. Many are now worried about such misinformation being used for political purposes.

AI-based criminal training is required for law enforcement agencies. In particular, scenarios covering the type of technology now in use, how it is used, and possible countermeasures must be taken into account.

AI has changed the way our societies function and ensure order, and one big change is in law enforcement. Even though the technology is still new, law enforcement agencies all over the world are quickly adopting it. In India, startups are already working on AI-enabled policing, and many departments are already using their products. However, the necessary legal reforms have not kept pace with this rapid growth, which means AI technology can be misused without much regard for the law.

AI, with its powerful algorithms and predictive analytics, is transforming how law enforcement agencies prevent and detect crime, providing valuable insights for enhancing law enforcement strategies. Prevention and detection of crime is one of the key roles of AI in law enforcement.

Crime Predictions and Prevention

Generative AI algorithms and machine learning can examine historical criminal data from various sources, such as crime reports, social media, and surveillance recordings, to identify high-risk areas and predict potential criminal activity. By finding patterns and anomalies in this information, AI can help law enforcement agencies predict where crimes might happen and stop them before they are committed.

For example, predictive policing systems driven by generative AI might analyse prior criminal data to determine locations at high risk of crime, so that officers can intervene before an offence occurs.
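The hotspot idea above can be sketched in code. The snippet below is a minimal, hypothetical illustration, not any deployed product: it buckets past incident coordinates into grid cells and flags the busiest cells as candidate patrol areas.

```python
from collections import Counter

def predict_hotspots(incidents, cell_size=1.0, top_k=3):
    """Bucket historical incident coordinates into grid cells and
    return the most active cells as candidate 'hotspots'."""
    cells = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    return [cell for cell, _ in cells.most_common(top_k)]

# Hypothetical historical incident coordinates (for illustration only)
history = [(0.2, 0.3), (0.4, 0.1), (0.5, 0.9), (5.1, 5.2), (0.7, 0.6)]
print(predict_hotspots(history, top_k=1))  # busiest grid cell
```

Real predictive-policing systems use far richer features and models, and, as discussed later in this paper, they inherit whatever bias is present in the historical data they are trained on.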

Suspect Identification

Law enforcement agencies (LEAs) have access to vast amounts of video evidence that can help catch suspects and crack cases. However, many agencies still review all that footage manually, consuming much of a department's resources. Just as agency databases hold the fingerprints of known offenders and persons of interest, booking photos of these individuals can be compared against videos or images collected during investigations. This makes departments more efficient and accurate.

Such applications help officials spot suspects faster. This software analyses past booking records in agency databases, making suspect identification easier; it can even help assemble virtual suspect line-ups, share suspect lists with other departments, and organise evidence by case details. As a result, investigators can work smarter, free up time for important tasks, and save costs along the way. AI also offers law enforcement agencies more ways to speed up data processing, such as redacting sensitive information.
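A simplified sketch of how such photo comparison might work (hypothetical code, not any agency's actual software): face images are reduced to numeric embedding vectors, and a probe image from investigation footage is matched against booking-photo embeddings by cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery, threshold=0.8):
    """Return gallery records whose embedding is similar enough
    to the probe, best match first."""
    scores = [(name, cosine(probe, emb)) for name, emb in gallery.items()]
    return sorted(
        ((n, s) for n, s in scores if s >= threshold),
        key=lambda t: t[1], reverse=True,
    )

# Hypothetical booking-photo embeddings (real systems use hundreds of dimensions)
gallery = {"record_17": [0.9, 0.1, 0.2], "record_42": [0.1, 0.95, 0.0]}
probe = [0.88, 0.12, 0.18]  # embedding extracted from investigation footage
print(rank_candidates(probe, gallery))  # record_17 is the closest match
```

The threshold is a policy choice, not a technical detail: set it too low and innocent people surface as "matches", which connects directly to the accuracy and bias concerns raised later in this paper.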

Surveillance Systems

In many major cities, cameras are widespread on streets and in public places, and law enforcement frequently uses this footage to review crimes after an incident and arrest offenders. AI-powered surveillance systems aid in monitoring public places and identifying possible threats by applying facial recognition to this footage, as well as recognising objects and complex events such as vehicle accidents. Object identification is important for police officers monitoring large crowds, such as music festivals or marathons. Since officers cannot be present in many locations at the same time, police can rely on AI to raise an alarm if someone in the crowd holds a weapon or exhibits unusual behaviour that could be seen as threatening.

Working together with drone cameras helps cover larger areas and speeds up search and rescue missions. These drones have built-in, AI-driven facial and object recognition features, which can help reduce crime and enhance public safety. However, the use of AI-powered surveillance raises concerns about privacy and civil rights. Balancing security requirements with protecting individual privacy is a significant challenge for law enforcement agencies implementing AI-based surveillance systems.

Decision Making

AI-assisted decision-making can also benefit investigators by offering insightful information and expediting decisions during criminal investigations. By evaluating a variety of information sources, including surveillance footage, witness accounts, and forensic evidence, AI algorithms can help identify links, patterns, and probable suspects that may not be immediately obvious to human detectives. This can help build stronger cases against offenders and considerably speed up the investigative process.

To ensure that decisions are ultimately just, open, and accountable, it is imperative to strike a balance between the use of AI and allowing human investigators to carry out their duties.

Emergency Response

Law enforcement personnel need to be ready for more than just criminal investigations, because they are often the first responders to a range of emergencies and an integral component of emergency response procedures. AI may also be able to assist while a crime is being committed: AI-aided services use cameras that are already in place to watch for possible gun violence and notify the authorities immediately.

Human Rights Concerns

In recent years, AI has advanced much more rapidly than legislation. This legal gap makes it possible for AI to be used, without adequate regulation, in ways that run against democratic ideals. The major human rights issues are: 1. privacy infringement, 2. discrimination and bias, and 3. lack of accountability. The misuse of AI threatens core human rights: freedom of opinion and expression, the right to privacy, and non-discrimination. One of the main concerns with AI in law enforcement is bias. Algorithmic bias results from the conscious or unconscious biases of human developers leaking into the algorithm.

Furthermore, Data Bias happens when artificial intelligence magnifies human prejudices found in the data it analyses, resulting in law enforcement agencies unfairly singling out particular groups for attention.

AI behaving recklessly could also affect the right to privacy in several ways: it can accumulate data on an unprecedented and unconsented scale; identify someone seeking anonymity; profile people based on that data (or other sources); and be used, over time, to track individuals. Facial Recognition Technology (FRT) is one technique that can be employed to track and identify people. FRTs are already being deployed for such uses globally, from flagging visa overstays by air passengers to enforcing quarantine measures. The pervasive nature of FRT can harm people's privacy, and, as might be expected, these technologies are very likely to be misused. This became apparent when Huawei, the Chinese technology powerhouse, created AI software able to recognise Uyghur individuals based on their facial features and alert Chinese officials.

This leads to a breach of freedom of thought and expression, which cannot be guaranteed under such conditions. Search engines and social media have begun to depend heavily on AI to decide what information users are presented with, and the opinion of a user who is fed selective information can never be truly free. Anonymity is an important enabler of freedom of expression, and any threat of persecution against a person for expressing themselves violates this right.

Judicial Aspect
  1. Indian Judiciary

The Supreme Court of India has been employing an AI-driven tool since 2021 to process data and provide it to justices for decision-making; the tool abstains from taking part in the decision itself. The Supreme Court also uses SUVAS (Supreme Court Vidhik Anuvaad Software), a technology that translates judicial documents from English into regional languages and vice versa. The Punjab & Haryana High Court denied a bail request in Jaswinder Singh v. State of Punjab because the prosecution claimed the petitioner had participated in a vicious, lethal assault. To obtain a broader viewpoint on granting bail where cruelty is involved, the presiding judge asked ChatGPT for an opinion. It is crucial to note that the trial court will not consider these remarks and that the mention of ChatGPT does not represent an opinion on the merits of the case; the reference was only meant to offer a more comprehensive grasp of bail jurisprudence in situations where cruelty is a contributing factor.

  2. USA

Artificial-intelligence-driven instruments, such as COMPAS (Correctional Offender Management Profiling for Alternative Solutions), aid courts in evaluating risk by examining variables including past criminal activity, socioeconomic status, and psychological state to forecast the probability of reoffending. AI is also used by the US Sentencing Commission to develop and implement sentencing guidelines that ensure equitable and reasonable punishment. The US court system uses chatbots to answer frequently asked questions from the public about court procedures, timetables, and related topics, improving information accessibility for all parties and lessening the workload of court employees.
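COMPAS itself is proprietary, but the general shape of a recidivism risk score can be illustrated with a toy weighted model. The inputs and weights below are invented purely for illustration and do not reflect COMPAS or any real instrument.

```python
def risk_score(priors, age_at_first_offense, failed_appearances):
    """Toy 0-10 recidivism risk score with made-up weights,
    illustrating how a handful of inputs become a single number."""
    score = 2.0 * min(priors, 3)                       # prior convictions (capped)
    score += 1.5 if age_at_first_offense < 21 else 0   # early first offence
    score += 1.0 * min(failed_appearances, 2)          # failures to appear (capped)
    return min(round(score), 10)

print(risk_score(priors=2, age_at_first_offense=25, failed_appearances=1))  # 5
```

The human rights critique follows directly from this structure: if inputs such as prior arrests, or proxies for socioeconomic status, are themselves products of biased policing, the single output number reproduces and launders that bias.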

There are guidelines on how to develop ethical AI. These guidelines aim to encourage trustworthy AI built on three components: 1. lawful, 2. ethical, and 3. robust. While these guidelines are not technically law, they are a great step in the right direction. Here are some measures we can commit to so that human rights are safeguarded:

  1. A human rights impact assessment should be carried out by every State, through its national legislative framework, before developing, acquiring, or deploying AI systems. In addition, the people who use a system should be AI-literate, able to understand how it works and how to interact with it. Artificial intelligence systems require human control: no machine should be making decisions independently, and human oversight must remain the controlling factor at every stage of an AI system's life cycle. This ensures that AI systems function within a regulated context and abide by human rights.
  2. Effective data protection legislation can contain many human rights risks. AI is data-driven, and therefore any law governing AI must give citizens ownership of their data, with consent necessarily obtained before their data is used.
  3. There is a need to prevent discrimination due to implicit bias. There must also be a failsafe that prevents AI systems from propagating bias, and training data must be diverse. A framework for due diligence should be established, and human rights impact assessments must be conducted periodically.

Method

The research included a deep exploration of AI use in law enforcement across various jurisdictions, examining case studies in detail to get a better sense of what using AI for policing looks like. To capture multiple views and opinions, it drew on expertly written articles and journals. It also reviewed legal documents and policy papers to examine the regulatory environment, including anything that appeared effective in addressing human rights considerations.

Suggestions

Here are some suggestions to make AI in law enforcement better:

  1. Set up oversight bodies to watch how AI is being used and make sure human rights are respected. They should be able to audit AI systems, look into complaints, and uphold the rules.
  2. Make sure decisions made by AI are transparent. Police should disclose which algorithms they use, how choices are made, and where their data comes from. This builds public trust and keeps them accountable.
  3. Stop biases in AI. Study biases in algorithms and work to get rid of them. Use different data for training AI and keep checking for bias. Use fair methods that stop discrimination.
  4. Update laws to deal with the new issues brought by AI in policing. Make rules that guard people’s privacy, say no to unfair practices, and lay out clear rules for using AI in law enforcement.
  5. Follow good practices when making and using AI in law enforcement. Stick to fairness, accountability, and transparency principles. Work with others like human rights groups to be sure the right things are done.

The future of AI in law enforcement can be promising for safety and efficiency if it is governed well, through strong supervision, honesty, and dedicated, proper use of AI technology. By dealing with these challenges smartly, we can benefit from the advantages of AI in law enforcement without violating human rights, while mitigating risks such as privacy problems, bias, and accountability issues.

Conclusion

Generative AI in law enforcement brings many benefits for criminal inquiries and deterrence, but it also raises problems of excessive monitoring, misuse of personal data, and other ethical and privacy concerns. AI in law enforcement must respect individual rights. This means strong data protection rules, transparent algorithmic decision-making, and safeguards against abuse of AI surveillance; establishing the right equilibrium necessitates well-defined protocols and rules that guarantee the ethical application of these systems.

AI is changing the game in law enforcement by processing vast amounts of data and finding complex patterns. By using AI in decision-making, crime prevention and prediction, surveillance systems, and suspect identification, law enforcement can boost efficiency and results. This helps authorities allocate resources better and focus on high-risk areas to prevent crimes before they happen. AI can also help investigators by providing insights and speeding up conclusions in criminal cases; it can analyse videos, witness statements, and evidence quickly. It remains crucial to find a balance between using AI and letting human investigators carry out their duties, so that decisions are fair, transparent, and accountable in the end.

Sonal Laxman Devare.

G J Advani Law College.