Name of the author: Adhila Fathima.
Designation: Third-year BA LLB student at Chennai Dr. Ambedkar Government Law College, Pudupakkam.
Title of the paper: UNMASKING BIAS IN AI: PROTECTING EQUALITY IN THE ERA OF AUTOMATION
Contact: iqbaladhila171@gmail.com, 9487375874
UNMASKING BIAS IN AI: PROTECTING EQUALITY IN THE ERA OF AUTOMATION
Abstract
The increasing use of artificial intelligence (AI) to perform a wide range of functions has introduced issues of bias and discrimination embedded within AI systems. This paper analyses the bias inherent in and portrayed by AI, focusing on the impacts of algorithmic design and systemic disparities. Real-life examples of biased outputs produced by AI, and their effects, are examined. The paper then explores the legal framework from Indian and global perspectives, examining whether current laws are sufficient to tackle AI-based discrimination and bias.
Keywords: Artificial intelligence, Bias, Discrimination, Algorithmic bias, Regulatory Framework
I. Introduction
Artificial intelligence (AI), a sub-discipline of computer science, deals with creating highly sophisticated, intelligent machines that take available information as input and process it to automate tasks (Roselli et al. 2019). AI is rapidly transforming sectors ranging from healthcare and finance to recruitment and law enforcement. While AI brings efficiency and innovation, it also presents major risks, including bias. This paper attempts to unveil AI bias by examining real-world examples, effects, and remedies. It delves into the significant legal frameworks for addressing algorithmic discrimination and the difficulties in implementing effective legislation. Through an examination of case studies and recent legal regulation, this study endeavours to suggest implementable measures to safeguard equality and to ensure that AI is a means of justice, not an avenue of systemic discrimination.
II. Research Methodology
This study adopts a doctrinal research methodology, focusing on a systematic analysis of laws, including statutes, case law, and scholarly literature. Secondary research is used to interpret and evaluate existing legal principles within the chosen area of law.
III. Definition of bias in AI
Bias in AI can be understood as unfair and discriminatory treatment of certain groups of people by an AI system. It can occur because the data and algorithms used to train these systems embed intrinsic human biases. Various factors can be responsible for such biases, such as a lack of diversity in training data, improper metrics, and reliance on historical data. As artificial intelligence has the potential to influence people's lives in countless ways in the present digital world, the problem of biased outcomes is an alarming challenge.
IV. Real-life examples
Biased AI systems have been observed in various instances, such as poor facial recognition of darker skin tones and discrimination against women and minorities by hiring algorithms. A study by the USC Information Sciences Institute found that the databases underlying major AI systems like ConceptNet and GenericsKB are biased. For example, women are portrayed negatively relative to men, Muslims are associated with words like terrorism, Mexicans with poverty, priests with paedophilia, policemen with death, and lawyers with dishonesty. The list of groups discriminated against goes on: politicians, performing artists, detectives, pharmacists, handymen, British people, and more. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system used in the United States criminal justice system was found to be biased against African Americans, who were more likely to be labelled high-risk even when they had no prior convictions. African Americans also face discrimination from AI in the healthcare sector, for instance through inaccurate predictions of mortality rates. Furthermore, text-to-image generative systems like Stable Diffusion, OpenAI's DALL-E, and Midjourney have displayed biased results. Research conducted at Carnegie Mellon University in Pittsburgh found that Google's advertising system shows high-paying positions to men more often than to women. Gender bias has been reported numerous times: when an AI is asked to generate images of specialised professionals, it depicts both younger and older people, but the older professionals are almost always men.
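Association findings of this kind are typically measured with embedding-association tests. The sketch below is a minimal illustration with made-up toy vectors, not the actual ConceptNet or GenericsKB data: a group term whose vector sits closer to negative attribute terms than to positive ones receives a negative association score.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Made-up toy vectors standing in for learned word embeddings (hypothetical).
emb = {
    "group_x":    np.array([0.9, 0.1, 0.3]),
    "pleasant":   np.array([0.1, 0.9, 0.2]),
    "unpleasant": np.array([0.8, 0.2, 0.4]),
}

# Negative score: the group term sits closer to "unpleasant" than to "pleasant".
bias_score = (cosine(emb["group_x"], emb["pleasant"])
              - cosine(emb["group_x"], emb["unpleasant"]))
print(f"association bias score: {bias_score:.3f}")
```

Real audits apply the same idea over many attribute words and trained embeddings; the toy vectors here only show the mechanics of the measurement.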
- Tay: Microsoft’s racist chatbot
One of the earliest reported instances of bias in AI is the behaviour of Tay, a social chatbot designed by Microsoft. It was created as an automated, text-based program like those found on e-commerce sites. The chatbot was designed to act like a wisecracking teenage girl on Twitter, one who not only answered the online community's questions but engaged in interesting conversations (it was targeted at 18-24 year olds). Tay was trained to learn from its interactions with real people, which resulted in it posting antisemitic, sexist, and otherwise discriminatory comments. Microsoft shut the AI down within 24 hours and apologised, stating that it was deeply sorry for the offensive tweets. Tay was not Microsoft's first AI released into the online social world: its chatbot XiaoIce is widely used in China and admired for its conversations and stories, so the company had placed considerable hope in Tay.
- Racial bias reflected by a risk assessment tool
COMPAS is a risk assessment tool widely used in the U.S. criminal justice system. It was trained on historical criminal records. A study by ProPublica exposed bias in its risk assessments: the tool was found to over-predict Black defendants as high-risk far more often than white defendants. The study reflects how biased training data can produce racial disparities even in the criminal justice system.
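To make the disparity concrete, the minimal sketch below compares false positive rates (non-reoffenders labelled high-risk) across two groups. The records and group labels are hypothetical placeholders, not ProPublica's dataset; the point is only that the same metric can diverge sharply between groups.

```python
def false_positive_rate(records):
    """Share of non-reoffenders whom the tool labelled high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical scored records: group, risk label, and actual outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"group {group} false positive rate: {false_positive_rate(subset):.2f}")
```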
- Amazon's biased hiring algorithm
Amazon's recruitment algorithm was trained on résumés previously submitted to the company. The AI followed the company's historical hiring patterns, in which most of the data came from male applicants. It learned to favour résumés resembling those of existing employees, who were predominantly male, and unconscious bias crept in as a result. The algorithm consequently penalised experience associated with female candidates, making it difficult for competent women to qualify for roles. Amazon's recruitment algorithm thus discriminated systematically, showing how unrepresentative training data leads to discriminatory outcomes.
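A toy sketch of the mechanism follows (not Amazon's actual system, and assuming scikit-learn is available): when historical hiring decisions serve as training labels, any feature correlated with gender, such as the hypothetical "mentions a women's organisation" flag below, absorbs the historical bias even when qualifications are identical across groups.

```python
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, mentions_womens_organisation] (hypothetical).
X = [
    [5, 0], [6, 0], [4, 0], [7, 0],   # historically hired applicants (mostly men)
    [5, 1], [6, 1], [7, 1], [4, 1],   # equally qualified, historically rejected
]
y = [1, 1, 1, 1, 0, 0, 0, 0]          # past hiring decisions used as labels

model = LogisticRegression().fit(X, y)

# The second coefficient comes out strongly negative: the gendered proxy
# feature, not qualification, drives the model's predictions.
print(model.coef_)
```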
V. Sources of AI Bias
A critical question arises in these circumstances: where do these biases come from, and how can artificial intelligence display such discriminatory behaviour? Bias in AI can arise from various sources at different stages of the machine-learning pipeline.
Data bias occurs when the AI is trained on data that is unrepresentative, insufficient, erroneous, or missing crucial information. Algorithmic bias occurs when the algorithms used in machine learning contain inherent biases that are exhibited in the outputs; here, biases may be deliberately built into the system's judgments, or smoothing and regularisation parameters may be used to compensate for bias linked to the data. Confirmation bias is another type, in which developers tune AI systems to rely too heavily on pre-existing data. A further source is user bias, in which the individuals using the system consciously or unconsciously introduce their own biases and prejudices into it.
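Of these sources, data bias is the easiest to demonstrate. The sketch below is a minimal illustration with synthetic data and invented group labels, assuming scikit-learn is available: a classifier trained on a sample dominated by one group generalises well to that group but poorly to the under-represented one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature data; each group's class boundary sits at x0 + x1 = 2*shift."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

Xa, ya = make_group(500, shift=0.0)   # well-represented group
Xb, yb = make_group(500, shift=3.0)   # under-represented group, different distribution

# Unrepresentative training sample: 95% group A, 5% group B.
X_train = np.vstack([Xa[:475], Xb[:25]])
y_train = np.concatenate([ya[:475], yb[:25]])

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy, group A:", model.score(Xa[475:], ya[475:]))
print("held-out accuracy, group B:", model.score(Xb[25:], yb[25:]))
```

The model fits the majority group's boundary and scores near chance on the minority group, the same failure mode seen in facial recognition and hiring systems trained on skewed samples.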
VI. Potential Impacts
- Infringement of fundamental rights
An individual's fundamental rights are encroached upon when AI systems deliver erroneous results by relying on inadequate data. Biased AI outcomes violate fundamental rights including the right to equal opportunity, the right to privacy, the right to equality between men and women, and socio-economic rights. These biased algorithms pose severe risks and spread injustice in society. Some AI models reflect disparities in the healthcare sector by differentiating between high-income and low-income patients; it is against the principles of equity when poor and marginalised sections of society are denied essential socio-economic services. The right to a fair judicial system is also violated: the COMPAS system discussed above predicts the chance of an offender reoffending and falsely targets Black offenders more often than others, a clear instance of discriminatory practice. Privacy is an essential right that every individual seeks to protect, yet one study shows that only 53% of consumers are comfortable with AI-based companies leveraging their personal data, while the rest are only 'fairly' or 'not very' comfortable sharing it.
- Economic impacts
Financial sectors use AI systems to aid them in several fields, such as lending risk assessment, financial modelling, economic frameworks, and more. Decisions related to fraud detection, creditworthiness, loan approvals, and the like rely on AI. Researchers have opined that some of the automated algorithms used by fintech companies have caused discrimination and financial injustice. These AI systems are accused of deepening existing bias and inequality by discriminating against marginalised groups. Credit discrimination is one of the biases created by fintech companies; discriminatory advertising for financial services and differential pricing for goods and services are also prevalent. For example, AI algorithms used in setting credit scores have assigned lower scores to people based on their race and ethnic background. As industry experts advocate, mitigating these biases is necessary for the fair use of online services.
- Effects faced by business organisations
Business organisations rely on AI-based autonomous systems, which have the potential to influence an organisation's overall functioning by supporting several areas of the business. Businesses reap huge profits through algorithms that use patterns and inputs to provide specialised, more efficient services. This raises concerns, as businesses can lose their reputation once consumers become aware of biased algorithms. For example, Nikon faced severe backlash when its S630 camera displayed an alert suggesting people had blinked while photographing Asian subjects (Akter et al., 2021). Some companies may also lose future markets due to discriminatory or racist behaviour exhibited by their systems: Microsoft had to shut down its chatbot Tay within 24 hours of launch after the Twitter community accused it of making racist, sexist comments (Vincent, 2016).
VII. Legal framework in India
India lacks dedicated legislation focused on AI bias and discrimination. However, a few legislative and regulatory measures have been taken to control issues related to AI. The existing framework relevant to AI bias is as follows:
Constitutional Provisions: The Indian Constitution's crucial anti-discrimination provisions apply to all AI systems. The salient provisions giving protection against discrimination are Article 14, which guarantees equality before the law and equal protection of the laws; Article 15, which prohibits discrimination based on religion, race, caste, sex, or place of birth; and Article 21, which protects the right to life and personal liberty and has been interpreted to protect privacy. Because Article 21 guarantees the right to privacy, it is directly relevant to AI bias in data collection.
The Digital Personal Data Protection Act 2023: Enacted in August 2023, the Act aims to balance the protection of personal data against the growing influence of artificial intelligence. It applies to the automated processing of personal data, covering data collection by AI, disclosure, and other forms of processing. Organisations must comply with requirements such as using data only for specified purposes, obtaining consent before use, and informing individuals of the purpose of the data usage. These regulatory measures can help reduce bias.
In recent times, fintech firms have also begun to use AI for their development through data-driven approaches. Payment platforms like Paytm use AI to study user behaviour and transaction history, which enables them to give customised product recommendations. The Digital Personal Data Protection Act (DPDPA), 2023 aims to ensure transparency and individuals' control over their data. It requires fintech firms to obtain explicit consent from users, enables users to access, correct, and erase their data, and provides grievance redressal mechanisms for addressing data misuse. Financial services often leverage AI tools for fraud detection, investment advice, credit scoring, and so on, which can intensify inequality and discrimination; India's approach to regulating data privacy in fintech therefore promotes anti-discriminatory AI practices.
The Information Technology Act, 2000: The Act forbids intermediaries from publishing, hosting, or sharing any information that is defamatory or damaging. It requires organisations to take security measures while handling sensitive personal data, and companies may face penalties if such data is misused.
National Policies
Advisory issued by MeitY: The Ministry of Electronics and Information Technology (MeitY) issued an advisory on March 1, 2024, with the object of regulating untrustworthy AI models, generative AI, and LLMs. The key directives that AI models must comply with include the following:
- AI models must not facilitate bias, discrimination, or interference with the electoral process.
- Under-tested AI models must obtain permission from MeitY before deployment, and users must be warned in advance of the possible errors in the AI's output.
- Media generated by AI must be labelled with metadata or unique identifiers. This requirement is intended to enable users to identify the origin of content, particularly in the context of misinformation and deepfakes.
The Draft National Data Governance Framework Policy (NDGFP): The policy, released in May 2022, emphasises AI research and development. It enhances access to high-quality data for training AI algorithms, since the accuracy of the data used significantly affects the output of AI models.
National Strategy for Artificial Intelligence (2018)
NITI Aayog released the National Strategy for Artificial Intelligence in 2018, which provides guidelines for AI research and development focused on significant sectors such as healthcare, agriculture, education, "smart" cities, and transportation. In February 2021, NITI Aayog released an approach paper, "Part 1 – Principles for Responsible AI", focusing on the ethical and societal considerations of deploying AI solutions and on enhancing the accountability of AI decisions.
VIII. International and regional AI regulations
The Organisation for Economic Co-operation and Development (OECD) adopted principles for responsible AI to reinforce fairness, transparency, and accountability. The UNESCO Recommendation on the Ethics of AI likewise sets out governance steps for ethical AI practices. Additionally, the World Economic Forum and the IEEE are working on AI governance: the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has developed standards for fair AI.
Regional and national framework
Countries have begun to grow cautious about the potential risks and challenges posed by artificial intelligence. However, major nations have enacted legislation that addresses AI challenges only broadly and lacks specific provisions strictly regulating the issue of bias.
United States: The Federal Trade Commission (FTC) is one of the federal agencies playing a crucial role in regulating biased algorithms. In the absence of direct federal legislation, existing statutes such as the FTC Act are relied upon. Section 5 of the Act prohibits "unfair and deceptive acts or practices" in or affecting commerce, and because "commerce" is interpreted broadly, the provision applies to most fields. A company could be held liable if its algorithm misleads consumers, and the FTC has specified that it is prepared to treat racially biased algorithms as unfair.
The US federal government recently released the Blueprint for an AI Bill of Rights and a draft AI Risk Management Framework to cope with the challenges posed by AI. The Blueprint is a non-binding document that sets out key principles to guide the design of automated systems, including protection from algorithmic discrimination: any bias found in algorithms should be addressed, and algorithms should never treat people unfavourably on the basis of race, colour, ethnicity, sex, identity, disability, or any other protected characteristic.
The AI Risk Management Framework (AI RMF) is a set of guidelines developed to help organisations address the risks of AI. Its main goal is to promote accountable AI systems by addressing the risks posed to individuals and organisations; potential risks, including bias and threats to fairness and transparency, are to be identified and managed.
European Union: The Council of the European Union approved the EU AI Act in 2024, the first comprehensive legislation on AI, intended to mitigate risks and challenges across the 27 EU member states. The Act follows a risk-based approach, categorising AI systems into four risk levels: (1) unacceptable risk, (2) high risk, (3) limited risk, and (4) minimal or no risk. Social credit systems, behavioural manipulation, systems that harm vulnerable populations, and systems that assess the likelihood of committing crimes are categorised as unacceptable risk. Image generators and chatbots are subject to transparency obligations. Disclosure is a significant feature of the Act, which helps combat the 'black box' nature of AI.
China: The Interim Measures for the Management of Generative AI Services were issued in July 2023 to regulate generative AI technologies. They cover any generative AI service available in China, regardless of where the provider is based, and emphasise ethical AI development through data privacy and cybersecurity.
Canada: The Artificial Intelligence and Data Act (AIDA) has been proposed in Canada to protect citizens from the risks posed by AI and to promote accountable AI practices. It emphasises safety and human rights through transparent AI systems.
Australia: Australia lacks a specific law governing AI; however, it has introduced voluntary measures like its 'AI Ethics Principles' and related guidelines to promote responsible AI.
Of these, the EU AI Act can be assessed as the most decisive law yet enacted to regulate AI, and other countries, including India, are expected to enact similarly targeted laws to prevent the risks posed by AI.
IX. Suggestions and Conclusion
The following steps should be considered to mitigate AI bias and enhance fairness:
- Specific legal and regulatory framework: As discrimination is one of the most alarming human rights issues, specific legislation dedicated to mitigating this problem is suggested to achieve efficiency.
- Enhancing data diversity: It is suggested to use more inclusive and representative training datasets and to apply bias-mitigation techniques in data collection and training (a minimal sketch of one such technique follows this list).
- The need for global cooperation: International standards for AI ethics should be developed to encourage bias prevention. Establishing independent regulatory bodies to oversee AI deployments can also be an effective initiative.
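As one concrete instance of the bias-mitigation techniques suggested above, the sketch below implements the classic "reweighing" pre-processing step in the spirit of Kamiran and Calders: each (group, label) combination is weighted so that group membership and outcome become statistically independent in the training data. The groups and labels are hypothetical placeholders.

```python
from collections import Counter

# Hypothetical (group, label) pairs from a training set.
samples = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
n = len(samples)

group_counts = Counter(g for g, _ in samples)   # P(group) numerators
label_counts = Counter(y for _, y in samples)   # P(label) numerators
pair_counts = Counter(samples)                  # P(group, label) numerators

# Reweighing: weight(g, y) = P(g) * P(y) / P(g, y), so the reweighted data
# behaves as if group and outcome were independent.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
print(weights)  # pass these as per-sample weights when fitting a model
```

Combinations that are over-represented (for example, group A with positive outcomes) receive weights below 1, and under-represented combinations receive weights above 1, counteracting the skew before any model is trained.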
Ensuring fair AI will require a multifaceted strategy involving technical solutions, enhanced legal accountability, and ethical practices in AI development. Governments, technology firms, and civil society organisations must collaborate to develop transparent, bias-free AI systems grounded in human rights.
