Algorithmic Bias and Discrimination in Automated Decision-Making: Examining Legal Remedies in India

ABSTRACT

The increasing use of algorithmic and artificial intelligence based systems in governance, welfare distribution, employment, finance, and law enforcement has transformed decision-making processes in India. While these technologies are often promoted as efficient, neutral, and objective, they frequently replicate and intensify existing social inequalities embedded within historical data and institutional practices. This phenomenon, commonly referred to as algorithmic bias, poses serious risks to the constitutional guarantees of equality, non-discrimination, privacy, and due process under Indian law.

This paper examines the nature and impact of algorithmic bias in automated decision-making systems in the Indian context, with particular emphasis on caste-based discrimination, welfare delivery mechanisms such as Aadhaar, predictive policing, and private-sector hiring and credit assessment practices. Through doctrinal legal research, the study analyses relevant constitutional provisions, judicial precedents including Justice K.S. Puttaswamy v. Union of India1 and E.P. Royappa v. State of Tamil Nadu,7 and existing statutory frameworks such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023.

The paper further undertakes a comparative analysis of regulatory approaches adopted in the European Union under the General Data Protection Regulation and the United States’ anti-discrimination framework to identify regulatory gaps in Indian law. It argues that the current Indian legal framework is inadequate to address the opacity, accountability deficits, and indirect discrimination caused by automated systems. The study concludes by proposing targeted legal and institutional reforms, including algorithmic transparency obligations, strengthened anti-discrimination standards, independent regulatory oversight, and mandatory algorithmic impact assessments, to ensure that technological governance aligns with constitutional values and protects vulnerable populations.

KEYWORDS

Algorithmic Bias, Artificial Intelligence, Automated Decision-Making, Discrimination, Equality, Constitutional Law

INTRODUCTION

The deployment of artificial intelligence and data-driven technologies has significantly transformed contemporary decision-making processes. Algorithms increasingly influence, and in certain instances autonomously determine, outcomes that were traditionally subject to human discretion. Automated systems now operate in domains that directly shape individuals' life chances: they determine eligibility for government assistance and loans, screen job applications, and attempt to predict whether a person will commit a crime.2 In doing so, they exercise considerable power over access to employment, finance, and public services. Such systems are commonly perceived as fair, rational, and free of the biases that afflict human decision-makers, a perception that frequently does not withstand scrutiny.

Algorithmic bias refers to systematic errors or unfair treatment produced by automated systems as a consequence of their design or of the data on which they are trained. Certain groups may be excluded altogether or treated less favourably, with particularly severe consequences for communities that have historically faced discrimination. Unlike deliberate human prejudice, algorithmic bias is rarely visible on the face of a decision; it operates as a hidden defect.2 This opacity makes it difficult to identify who is responsible and how a wrong may be remedied, and it is precisely this quality that makes algorithmic bias so hard to understand and to fix.

In India, concern is growing over the use of automated decision-making systems by both the State and private actors. The government relies heavily on such systems for Aadhaar-based identification, the distribution of welfare benefits, and data-driven policy formulation. These initiatives are justified as measures to improve efficiency and curb corruption, but critics contend that they can exclude eligible beneficiaries, enable State surveillance of citizens,3 and deprive individuals of their rights without procedural safeguards. The sheer scale at which the Indian State has deployed algorithmic systems makes these concerns especially pressing.

The growing use of algorithms in the private sector, in hiring, finance, and consumer services, compounds the problem. Automated recruitment tools trained on historically skewed data can disadvantage women and candidates from marginalised communities, while credit-scoring systems may deny credit to persons with limited financial means or no formal banking history.

RESEARCH METHODOLOGY

This research adopts a doctrinal methodology. It examines the legal framework currently in force in India, including constitutional provisions, statutes, judicial decisions, and policy instruments, with particular attention to the law governing discrimination, equality, data protection, and technology regulation. Judgments of the Supreme Court of India and the High Courts have been analysed closely to trace judicial reasoning on equality, fairness, privacy, and procedural due process in State action.

In addition to primary legal sources, the study draws on secondary materials, including academic books, law review articles, research papers, and reports by civil society organisations and international bodies. A comparative analysis of the regulatory approaches of the European Union and the United States has been undertaken, not to suggest that India should simply transplant foreign law, but to identify principles and mechanisms that could usefully be adapted to the Indian context.

REVIEW OF LITERATURE

Scholarly debate on algorithmic bias and unfairness has intensified over the past decade, driven by the growing use of artificial intelligence in consequential decision-making. Cathy O'Neil's influential book Weapons of Math Destruction4 argues that algorithms can actively entrench unfairness because they operate at scale, are frequently opaque, and diffuse responsibility when they err. O'Neil shows that algorithms used in education, employment, and the criminal justice system can consolidate the position of those already in power while projecting an appearance of neutrality. Her work remains foundational to understanding how ostensibly objective systems reproduce bias.

Solon Barocas and Andrew Selbst provide a rigorous account of how anti-discrimination law applies to algorithmic systems. They analyse data analytics through the doctrine of disparate impact, under which a facially neutral practice is unlawful if it disproportionately burdens a protected group. Barocas and Selbst demonstrate that algorithmic systems frequently discriminate not because anyone intends them to, but because they rely on data and design choices that embed existing inequality.2

Indian scholarship on algorithmic governance is comparatively recent, but it has already raised serious concerns. Usha Ramanathan and others have examined the Aadhaar project and warned that it can facilitate surveillance and exclude those already struggling, particularly people who depend on State support. Documented failures of biometric authentication have left beneficiaries without the food rations and cash transfers to which they were entitled.

ANALYSIS

Understanding Algorithmic Bias in Automated Decision-Making

Algorithmic bias is not an occasional accident but a systemic phenomenon8 arising from the interaction between technology and society. Algorithms identify patterns in data and use those patterns to generate predictions or decisions. When the data used to train such systems reflects existing social inequality, their outputs will reproduce that inequality.5 In a society such as India, marked by long histories of unfair treatment, algorithmic bias frequently mirrors, and deepens, the old social hierarchies of caste, gender, religion, and class.

Bias can enter at multiple stages of an algorithm's life cycle. At the data-collection stage, entire groups may be under-represented: persons without formal employment, internet access, or bank accounts are often missing from the datasets used to assess eligibility for credit or employment. At the design stage, developers may select features, such as residential locality, school attended, or language spoken, that operate as proxies for protected characteristics.1
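The proxy effect described above can be made concrete with a minimal, entirely synthetic sketch. The data, feature names, and decision rule below are hypothetical illustrations, not a description of any deployed system: a hiring model that never receives caste as an input can still reproduce caste-correlated outcomes when a feature such as locality is correlated with group membership in the historical record.

```python
# Illustrative sketch with synthetic data: a decision rule that never
# sees caste can still track it through a correlated proxy feature
# (here, a hypothetical "locality"). All names and figures are invented.

# Historical records: (locality, hired). Locality "A" correlates with a
# historically favoured applicant pool; locality "B" with a marginalised one.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def hire_rate(records, locality):
    """Fraction of past applicants from a given locality who were hired."""
    outcomes = [hired for loc, hired in records if loc == locality]
    return sum(outcomes) / len(outcomes)

def recommend(records, locality):
    """Naive rule: recommend hiring if the locality's historical hire
    rate exceeds 50%. Caste is never an input, yet because locality is
    a proxy for it, the recommendation reproduces the historical bias."""
    return hire_rate(records, locality) > 0.5

print(recommend(history, "A"))  # True  — the proxy favours locality A
print(recommend(history, "B"))  # False — the disadvantage persists
```

The point of the sketch is simply that removing the protected attribute from the input data does not remove the discrimination, which is why transparency and impact-assessment obligations discussed later in this paper focus on outcomes rather than inputs.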

Caste-Based Bias and Algorithmic Systems

In India, caste remains a pervasive social structure, shaping access to education, employment, housing, and social mobility. Caste is rarely recorded explicitly in computational systems, but it can be inferred from correlated attributes such as surname, place of origin, school attended, occupation, or language. Algorithms trained on data that reflects historical caste-based disadvantage can therefore continue to disadvantage those communities even where caste itself never appears as an input.

Hiring algorithms trained on historical recruitment data, for example, may treat candidates from the Scheduled Castes and Scheduled Tribes less favourably, because the past hiring practices reflected in that data were unfair to these groups. Credit-scoring algorithms exhibit a similar pattern: they may assign lower scores to persons from marginalised castes, who have long been excluded from formal banking and financial institutions. Such outcomes sit uneasily with the constitutional guarantee of equality. Articles 14, 15, and 16 of the Constitution prohibit discrimination and require that all citizens, including the Scheduled Castes and Scheduled Tribes, enjoy the same opportunities as everyone else.6
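The disparate-impact literature the paper relies on (Barocas and Selbst) frequently uses the US enforcement agencies' "four-fifths rule" as a rough diagnostic: if the selection rate for one group falls below 80% of the highest group's rate, the practice is flagged for review. The sketch below applies that diagnostic to synthetic figures; the group labels and numbers are hypothetical and do not describe any real hiring or credit system.

```python
# The "four-fifths rule" diagnostic from the US disparate-impact
# framework, applied to synthetic selection figures. Group names and
# numbers are illustrative assumptions only.

def selection_rate(selected, applicants):
    """Proportion of applicants from a group who were selected."""
    return selected / applicants

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    A ratio below 0.8 is conventionally flagged for review."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_1": selection_rate(selected=60, applicants=100),  # 0.60
    "group_2": selection_rate(selected=30, applicants=100),  # 0.30
}

ratio = disparate_impact_ratio(rates)
print(round(ratio, 2))  # 0.5
print(ratio < 0.8)      # True — flagged under the four-fifths rule
```

A statutory Indian standard for indirect discrimination, as proposed in the suggestions below, would need an analogous outcome-based test, since intent is absent in algorithmic discrimination.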

Algorithmic Bias in Welfare Delivery and Public Services

India's growing reliance on automated systems to determine welfare eligibility has generated considerable controversy. The Aadhaar system is used to verify that beneficiaries are who they claim to be, and the government defends it as a means of targeting benefits accurately and preventing fraud. In practice, however, the system has repeatedly failed: authentication errors have deprived people of the food and money they need,7 and have in some cases blocked access to healthcare and pensions.4 Where the goods at stake are essential to survival, such failures are not mere inconveniences but threats to life and livelihood.

When welfare entitlements are administered by automated systems, errors carry grave consequences, particularly for those already at the margins, such as persons with disabilities and migrant workers. These beneficiaries can easily be excluded when the system fails to recognise them or holds inaccurate records about them. The problem is aggravated by the absence of effective oversight of these systems and of accessible grievance mechanisms for people harmed by them.
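The safeguard implied by this paragraph, that an essential-services system should fail toward human review rather than toward exclusion, can be expressed as a short design sketch. The function, thresholds, and status labels below are hypothetical illustrations of the principle, not a description of Aadhaar or any deployed workflow.

```python
# Hypothetical design sketch: route repeated biometric authentication
# failures to a human officer instead of denying the benefit outright.
# All names and the retry threshold are illustrative assumptions.

def entitlement_decision(biometric_attempts, max_attempts=3):
    """Decide the next step for a beneficiary given their attempts.

    Returns "authenticated" on any success; after max_attempts
    failures, escalates to manual verification rather than denial.
    """
    if any(attempt == "success" for attempt in biometric_attempts):
        return "authenticated"
    if len(biometric_attempts) >= max_attempts:
        return "manual_verification"  # a human officer verifies identity
    return "retry"                    # allow another attempt

print(entitlement_decision(["fail", "success"]))       # authenticated
print(entitlement_decision(["fail", "fail", "fail"]))  # manual_verification
print(entitlement_decision(["fail"]))                  # retry
```

The design choice matters legally: a system that can only say yes or no converts a technical failure into a denial of an entitlement, whereas an escalation path preserves the beneficiary's access pending human review.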

Predictive Policing, Surveillance, and Criminal Justice

Policing and criminal justice are increasingly mediated by computational tools, including predictive policing systems and facial recognition technology. These tools claim to identify crime hotspots or likely offenders by analysing historical crime data. That data, however, is itself skewed: police attention has historically concentrated on particular communities, which are frequently the very groups already subject to social disadvantage.

Predictive policing algorithms trained on such data can entrench over-policing of neighbourhoods inhabited by minorities and the economically disadvantaged, subjecting those areas to intensified surveillance by police and other authorities.8
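The feedback loop at the core of this critique can be demonstrated with a minimal, synthetic simulation: patrols are allocated in proportion to *recorded* crime, but what gets recorded depends on where police are deployed. The model and all numbers below are illustrative assumptions, not drawn from any actual policing system.

```python
# Minimal synthetic simulation of the predictive-policing feedback loop:
# recorded crime drives patrol allocation, and patrol presence drives
# what gets recorded. With two areas whose TRUE crime rates are equal,
# an initial recording disparity never self-corrects — the data keeps
# confirming the original policing pattern. All figures are invented.

def recorded_crime_shares(recorded, true_rates, rounds=10, total_patrols=100):
    """Return each area's final share of recorded crime after feedback."""
    for _ in range(rounds):
        total = sum(recorded)
        # Patrols allocated proportionally to recorded incidents.
        patrols = [total_patrols * r / total for r in recorded]
        # New recorded incidents scale with true rate times patrol presence.
        recorded = [rate * p for rate, p in zip(true_rates, patrols)]
    total = sum(recorded)
    return [r / total for r in recorded]

# Equal true crime rates, but area 0 starts with more recorded incidents
# because of heavier historical policing.
shares = recorded_crime_shares(recorded=[60, 40], true_rates=[1.0, 1.0])
print(shares)  # [0.6, 0.4] — the initial bias is frozen in, not corrected
```

The simulation illustrates why "the data" cannot vindicate such a system: even when the underlying behaviour of the two areas is identical, the recorded disparity persists indefinitely because the system never observes what it does not patrol.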

Limits of Judicial Review in Algorithmic Contexts

In practice, however, the available remedies often fall short. Courts frequently cannot examine how an algorithm operates because its details are shielded as proprietary trade secrets9 or, in some cases, withheld on grounds of security. Judges, moreover, rarely possess the technical expertise needed to evaluate complex algorithmic systems. Constitutional remedies thus remain formally available, but these obstacles significantly limit their effectiveness.

STATUTORY FRAMEWORK GOVERNING ALGORITHMIC DECISION-MAKING IN INDIA

Information Technology Act, 2000

The Information Technology Act, 2000 (the IT Act) is India's principal legislation governing digital technologies. It was enacted at a time when automated decision-making was marginal and artificial intelligence played little role in government or business, and its provisions are consequently ill-suited to problems of bias and discrimination. The IT Act and its accompanying rules offer some protection for data security and against unauthorised access, but they impose no obligation on entities to disclose how their algorithmic systems work or to answer for what those systems do.

Digital Personal Data Protection Act, 2023

The Digital Personal Data Protection Act, 2023 marks a significant development in Indian data protection law. It confers on individuals the right to consent to or refuse the processing of their personal data, to access the data held about them, to seek correction of inaccuracies, and to lodge grievances when something goes wrong. Entities that process personal data, termed "data fiduciaries", must handle it lawfully and keep it secure.

Anti-Discrimination and Labour Laws

India's existing anti-discrimination and labour laws are poorly equipped to address the harms of biased automated decision-making. Equal-opportunity provisions in employment law might, in principle, apply where algorithmic hiring tools produce unfair outcomes, but they were not drafted with such systems in mind.

COMPARATIVE LEGAL PERSPECTIVES

European Union

The European Union10 has taken a leading role in regulating automated decision-making. The General Data Protection Regulation (GDPR) expressly recognises the risks of decisions made solely by automated means and grants individuals the right not to be subject to such decisions where they significantly affect them.11

United States

The United States lacks a comprehensive federal framework governing algorithmic decision-making. Instead, it addresses algorithmic discrimination through existing civil rights statutes and sector-specific regulation. American courts and enforcement agencies have begun scrutinising algorithmic systems for compliance with anti-discrimination law, particularly in employment and housing, and that scrutiny continues.

SUGGESTIONS

First, India should enact legislation specifically addressing algorithmic decision-making. Such a law should define which categories of algorithmic systems are high-risk and, for those systems, require operators to be transparent about how they function and to explain the basis of their decisions.

Second, anti-discrimination law should be strengthened to prohibit expressly both algorithmic and indirect discrimination, including situations in which automated decisions produce disproportionately adverse effects on particular groups. The law must reflect the forms discrimination actually takes today.

Third, an independent and technically competent regulator should be established to oversee the use of algorithmic decision-making systems by both companies and the government.

Fourth, algorithmic impact assessments should be made mandatory before automated systems are deployed in sensitive domains such as welfare, employment, credit, and law enforcement.

Fifth, judicial and administrative capacity must be built: judges, lawyers, and regulators need training in how automated decision-making systems operate so that they can scrutinise them effectively.

Sixth, policy-making in this field should be inclusive, involving civil society organisations, technologists, and the communities affected by these systems in both their design and their regulation.

CONCLUSION

While automated systems enable efficiency and scalability, they simultaneously pose serious risks of institutionalised discrimination. India already contends with deep structural inequality; biased automated decision-making can entrench it further12 and undermine the constitutional guarantees of fairness and equality.

This paper has argued that India's existing legal framework is inadequate to the challenges of algorithmic governance. Although grounded in sound constitutional principles, the relevant laws are fragmented and insufficient. The opacity and complexity of automated decision-making systems13 demand new legal approaches: comprehensive, forward-looking legislation that protects individual rights by combining constitutional values with meaningful accountability for the technologies that now mediate governance.

Name: Vanshita Lakhanpal
College: CPJCHS & School of law (Gurugobind Singh Indraprastha University)
BIBLIOGRAPHY
  1. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
  2. Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 CALIF. L. REV. 671 (2016).
  3. Usha Ramanathan, Aadhaar: From Welfare to Surveillance, 50 ECON. & POL. WKLY. 36 (2015).
  4. CATHY O’NEIL, WEAPONS OF MATH DESTRUCTION (2016).
  5. Kate Crawford, The Hidden Biases in Big Data, 31 HARV. BUS. REV. 1 (2013).
  6. Jean Drèze et al., Aadhaar and Food Security in Jharkhand, 52 ECON. & POL. WKLY. 50 (2017).
  7. E.P. Royappa v. State of Tamil Nadu, (1974) 4 SCC 3.
  8. Maneka Gandhi v. Union of India, (1978) 1 SCC 248.
  9. FRANK PASQUALE, THE BLACK BOX SOCIETY: THE SECRET ALGORITHMS THAT CONTROL MONEY AND INFORMATION (2015).
  10. Regulation (EU) 2016/679, General Data Protection Regulation, 2016 O.J. (L 119).
  11. GDPR art. 22.
  12. VIRGINIA EUBANKS, AUTOMATING INEQUALITY (2018).
  13. Sandra Wachter, Brent Mittelstadt & Luciano Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the GDPR, 7 INT’L DATA PRIVACY L. 76 (2017).
