DECODING AI GOVERNANCE: CHALLENGES AND WAY FORWARD

Abstract

Artificial intelligence (AI) is becoming increasingly prevalent in many spheres of society, and therefore robust regulation and legislation are needed to reduce hazards, guarantee ethical application, and uphold human rights. This paper examines the urgent need for regulation and the challenges involved in developing effective governance models. Utilizing a descriptive research method and a comprehensive literature analysis, it delves into the ethical considerations surrounding the implementation of AI, including algorithmic bias and privacy issues. It also decodes the current regulatory environment in India and provides a critical analysis of the European Union’s Artificial Intelligence Act as a groundbreaking regulatory framework. Moreover, the paper offers suggestions to combat bias and opacity. Overall, it argues for a balanced approach to the regulation of AI for effective and smooth development.

Keywords- algorithmic bias, regulatory framework, ethical AI, transparency requirements, risk-based approach.

INTRODUCTION

Artificial intelligence is pervading almost every area of our lives at a rapid pace. Applications of AI influence a wide range of decisions that, until recently, could only be made by highly skilled individuals. Even in developing and emerging nations, artificial intelligence has become a part of everyday life for the common masses. AI has already had a significant positive impact on several aspects of our lives, including banking, health care, travel, government initiatives, education, communication, agriculture, and even low-end retail purchases and payments. Every sector is attempting to weave AI inconspicuously into its operations to become more competitive while also offering individualized, customized services to its customers. AI has a huge and ubiquitous impact on all spheres of society, including individuals, organizations, corporations, and even national governments. The wide application of AI in today’s world has sparked worries about its ethical use, and various instances of its misuse have made effective regulation a necessity.

Any AI regulation, such as a transparency requirement, would need to define AI. However, there is no single perfect definition of AI. Various definitions characterize AI by the autonomy of systems or by their “human-like” intelligent outcomes, without clearly demarcating the boundaries of the field. Artificial intelligence (AI) was first defined as “the science and engineering of making intelligent machines, especially intelligent computer programs” by John McCarthy in 1956. The Merriam-Webster dictionary defines AI as “[a] branch of computer science dealing with the simulation of intelligent behaviour in computers,” or “[t]he capability of a machine to imitate intelligent human behaviour”.

RESEARCH METHODOLOGY-

This research paper is descriptive in nature and is based on secondary sources, including online articles, journals, and newspapers.

REVIEW OF LITERATURE- 

Several scholarly papers and journal articles have analysed the applications and governance of AI. Among them, Artificial Intelligence as a Challenge for Law and Regulation by Wolfgang Hoffmann-Riem explores the fields in which AI is applied, the obstacles AI poses to the application of law, and the types of rules and regulations available to address them.

Addressing Algorithmic Bias in India: Ethical Implications and Pitfalls by Yoshita Sood explains the presence of machine bias and gives solutions to counter it.

Towards Intelligent Regulation of Artificial Intelligence by Miriam C. Buiten enumerates various definitions of AI and algorithms. Together, these works act as important sources of information for understanding the field of artificial intelligence.

WHY IS THERE A NEED FOR REGULATION OF AI SYSTEMS?

While the use of artificial intelligence has improved our lives in many ways, it has also led to dangerous consequences, such as bias in predictive policing and violations of copyright and consumer laws. AI systems can have wide-ranging ethical impacts, influencing the functioning of sectors like criminal justice, healthcare, and transportation. Regulation ensures justice, accountability, transparency, and respect for human rights by ensuring that AI abides by moral standards and societal ideals. The potential harm brought about by malicious exploitation or malfunctioning AI gives rise to safety and security concerns.

Furthermore, privacy and data protection concerns are raised because AI depends on massive data warehouses. As a result, regulations governing data collection, usage, and sharing are required to preserve people’s right to privacy. The significance of regulation is further highlighted by addressing algorithmic bias and prejudice, improving transparency, and encouraging accountability. 

Some important aspects of AI that necessitate regulation are-

PRESENCE OF MACHINE BIAS-

The most pressing ethical issue with the deployment of AI is the presence of AI bias, which could have disastrous consequences in a country like India, where societal divisions based on caste, religion, gender, and economic position are already firmly ingrained. To prevent the manifestation of this bias, laws and rules that control the systems supporting this prejudice must be implemented.

Machine bias arises when algorithms reinforce preexisting biases and inflict costs on people and society through actions like denying people jobs, loans, or bail. It can also stem from biased data used to train AI systems.

Far from promoting inclusive cultures and design, machine bias perpetuates repressive institutions and upholds the existing status quo.

Two theories address the underlying causes of machine bias: the Biased Training Data theory and the Biased Programmers theory. The former focuses on the lacunae that exist in the massive amounts of data that machine learning programs are trained on, whereas the latter blames programmers and developers whose bias, whether intentional or unintentional, is reflected in their code. The sketch below illustrates the first theory.
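
To make the Biased Training Data theory concrete, the following minimal sketch (with entirely hypothetical data and feature names) shows how a standard model trained on historically skewed records reproduces that skew in new decisions:

    # Hypothetical illustration: loan-approval records in which a protected
    # group was historically penalised. A model trained on these records
    # learns and reproduces the disparity, even at identical income levels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B (protected)
    income = rng.normal(50, 10, n)      # incomes distributed identically across groups
    # Historical bias baked into the labels: group B was approved less often.
    approved = (income + rng.normal(0, 5, n) - 8 * group > 48).astype(int)

    model = LogisticRegression().fit(np.column_stack([income, group]), approved)

    # Audit: approval rates for identical applicants who differ only by group.
    test_income = rng.normal(50, 10, 2000)
    rate_a = model.predict(np.column_stack([test_income, np.zeros(2000)])).mean()
    rate_b = model.predict(np.column_stack([test_income, np.ones(2000)])).mean()
    print(f"approval rate, group A: {rate_a:.2f}")   # noticeably higher
    print(f"approval rate, group B: {rate_b:.2f}")   # bias inherited from the data

The same disparity could equally be introduced directly by a developer's code, which is the scenario the Biased Programmers theory describes.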

EXAMPLES-

Artificial intelligence (AI) in facial recognition technology may result in heightened surveillance of Muslims and lower-caste individuals, because the existing stored data is already biased against them.

Research suggests that if a chatbot is asked to list twenty doctors and professors from India, the names produced are usually those of high-caste Hindus, and the bias that arises from these statistics is then upheld in the actual world. According to a 2021 Google Research investigation, women, Adivasis, and people living in rural areas, who together constitute almost half of India’s population, may be missing from or inaccurately depicted in datasets, leading to false results in AI policing systems.

Similarly, middle-class users of mobile safety apps that use data mapping to identify dangerous areas tend to flag Muslim, Dalit, and impoverished neighbourhoods as suspect, which may lead to over-policing and needless widespread surveillance.

Official data suggests that India’s criminal databases are particularly problematic because Muslims, Dalits, and Indigenous people are arrested, charged, and incarcerated at higher rates than other groups. 

While it may not be possible to eliminate this prejudice, laws should include provisions requiring impartial databases, audits, and other related measures. 

PRIVACY AND COPYRIGHT ISSUES

Large language models (LLMs) are being trained on copious volumes of writing by authors, journalists, and content producers without their knowledge or approval. Content creators specifically allege that chatbot developers are improperly utilizing their works to train chatbots, and they have brought several cases requesting monetary damages and injunctions against those who utilize their works without permission. Prominent newspapers, including the New York Times and the Washington Post, have taken action to stop their material from being used to train AI models. To address these problems, content creators and chatbot developers are negotiating licenses for the use of such resources. If talks break down, more litigation is anticipated, with experts arguing that court rulings will ultimately decide whether AI systems are breaking copyright laws or staying within the parameters of fair use.

Moreover, AI algorithms can collect and analyze massive data samples, including personal and sensitive information, raising serious concerns over privacy rights violations. AI-generated content also has the potential to disclose confidential information. It is important that the training of generative AI on datasets and the subsequent production of information be monitored by necessary regulations to prevent a multiplicity of litigation and uphold privacy rights.

CHALLENGES- There are several factors which may pose a challenge to the development of effective regulation. Some of them are-

  • Opacity of AI systems- An inherent problem of AI systems is the black-box phenomenon, meaning that the internal workings or processing of these systems are not easily understandable by humans; often, not even the programmers or developers understand their inner workings. This lack of transparency poses a problem for creating trust, accountability, and liability: without it, it may prove difficult to hold AI developers responsible for harms caused by their systems. The black-box problem can be solved in specific cases, but doing so requires a high degree of expertise and capability.
  • Rapid pace of development (How to balance innovation and regulation?)

The velocity at which technology is advancing is ever-increasing. AI had been slowly developing behind the scenes for a long time, but with the advent of OpenAI’s ChatGPT and competitors such as Bing and Bard, it has come into the limelight, and a corporate race has started towards the development of faster and better AI systems. Regulations are needed to control this race and prevent the exploitation of datasets and violations of human rights. However, slowly developing regulations may be outpaced by rapidly developing technologies.

  • Geopolitical variables- Technology, even though it is not bound by borders, is affected by geopolitical factors and variables. Its development is shaped by heightened tech competition between nations, especially the US and China: every nation wants a leg up over others, particularly with the advent of autonomous weapon systems. The development of effective and ethical regulations may therefore be affected by these variables. Moreover, there is a possibility that the development of regulations may be disproportionately controlled by advanced or developed economies. Any international organization on AI may face this challenge, and it is important to focus on a holistic, representative development of AI that takes in the viewpoints of all nations.

EUROPEAN UNION’S ARTIFICIAL INTELLIGENCE ACT

The world’s first comprehensive artificial intelligence law, the Artificial Intelligence Act (the “AI Act”), was passed by the European Parliament on March 13, 2024. Fundamentally, the AI Act seeks to strike a careful balance between protecting democracy and fundamental rights and fostering development in the field. To accomplish this, it employs a risk-based framework, imposing different degrees of requirements according to the possible effects and risks presented by various AI applications.

Definition of AI- ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The AI Act will come into effect gradually during a 24-month transitional period following its entry into force. Prohibitions on banned AI systems will take effect six months after the entry-into-force date, codes of practice nine months after, and general-purpose AI rules, including governance, twelve months after. Obligations for high-risk systems will take effect 36 months after entry into force.

To whom does the Act apply? 

The Act applies to public and private entities both within and outside the EU if the AI model is used in the EU. It can apply to both providers (who develop the AI system) and deployers of the systems.

What are the risk categories?

The Act delineates four levels of risk for AI systems, as well as an identification of risks specific to general purpose models:

Minimal risk and Limited risk- Minimal-risk systems are unregulated and only subject to existing legislation, without additional requirements; most AI systems presently deployed belong to this category. Limited-risk systems, by contrast, are subject to transparency requirements.

Unacceptable risk- A limited set of dangerous applications of AI is enumerated under this category because they violate fundamental rights, and these systems are prohibited from use. They include the use of subliminal or deceptive mechanisms to influence behaviour, social scoring based on personal characteristics, biometric identification in public spaces, the exploitation of vulnerabilities such as disability, age, or socio-economic conditions, and predictive policing.

High risk- Strict rules targeting high-risk AI systems are introduced under the AI Act, and compliance with them requires extensive governance mechanisms. The term “high-risk AI systems” refers to those utilized in non-banned biometrics or safety-sensitive items governed by EU rules, as well as those operating in areas including essential infrastructure, public services, education, employment, and law enforcement. Providers of these systems must, among other things, create a risk management system for the duration of the AI system’s lifecycle, put strong data governance procedures into place, produce technical documentation, and design the AI system with cybersecurity, accuracy, robustness, and automatic record-keeping in mind. A simplified picture of the Act’s tiered structure follows.
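
Purely as an illustrative sketch (not a legal classification tool), the Act’s risk-based structure can be pictured as a simple mapping from use cases to compliance postures; the example use cases below are simplified assumptions, not quotations from the Act:

    # Illustrative data model of the AI Act's four risk tiers. Each tier is
    # paired with a summary of the compliance posture the Act attaches to it.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "strict governance: risk management, data governance, documentation"
        LIMITED = "transparency requirements"
        MINIMAL = "no additional requirements beyond existing law"

    # Hypothetical, simplified mapping of example use cases to tiers.
    EXAMPLES = {
        "social scoring based on personal characteristics": RiskTier.UNACCEPTABLE,
        "AI used in hiring or credit decisions": RiskTier.HIGH,
        "customer-service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    for use_case, tier in EXAMPLES.items():
        print(f"{use_case}: {tier.name} -> {tier.value}")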

GPAI Model- The term “GPAI model” refers to an artificial intelligence model that exhibits significant generality, can be integrated into a variety of downstream systems or applications, and can competently perform a wide range of different tasks, regardless of how the model is marketed. This includes models trained on large amounts of data using self-supervision at scale. Such models must meet transparency requirements, comply with EU copyright law, and provide the requisite information about their training datasets. The Act also makes it compulsory to label video or other content as AI-generated. Additionally, some AI systems may pose systemic risks owing to the scale of computation behind them: presently, GPAI models trained using a cumulative amount of compute exceeding 10^25 FLOPs are presumed to carry systemic risk, as greater training compute typically correlates with greater capability. Providers of such models are obligated to analyse and control the associated risks, report serious incidents, conduct tests and evaluations, impose cybersecurity measures, and disclose information regarding the energy consumed by their models.
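
To give a sense of scale for the 10^25 FLOP criterion, the following back-of-the-envelope sketch uses the widely cited “training FLOPs ≈ 6 × parameters × training tokens” rule of thumb for dense transformer models (an approximation from the machine learning literature, not part of the Act’s text) to check a hypothetical model against the threshold:

    # Back-of-the-envelope check against the AI Act's systemic-risk threshold.
    # The 6 * N * D estimate for training FLOPs is a common approximation for
    # dense transformers; the model size and token count below are hypothetical.
    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
        return 6 * n_parameters * n_training_tokens

    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"estimated training compute: {flops:.2e} FLOPs")            # ~6.3e24
    print("presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False, just under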

Criticisms- 

The new law has been criticized on several fronts. It has been claimed that the Act provides various concessions and exemptions which could be exploited by law enforcement agencies and Big Tech companies. Many believe that the Act is not as extensive as they had hoped it would be. Moreover, the regulations could adversely affect SMEs because of excessive compliance costs and administrative burden. The definition of AI in the Act has also been criticized for being too broad.

REGULATORY LANDSCAPE OF AI IN INDIA 

Presently, there is no codified legislation specifically governing the use of Artificial Intelligence in India. However, certain steps have been taken by the Government towards its regulation. 

The Ministry of Electronics & Information Technology (MeitY) issued an advisory dated 1 March 2024 to control the use and deployment of AI models. The advisory set out three compliance measures to be performed by all intermediaries: (1) ensuring that AI systems do not exhibit any bias or discrimination; (2) making untested AI systems available to the general public only after explicit approval of the Government of India; and (3) attaching a permanent label or identifier to any AI output capable of being misused as deepfakes or fake information, displaying where the content was made and whether it was changed by another computer source. Intermediaries must comply with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 if they do not wish to lose the safe harbour protection provided to them. The advisory was issued by MeitY in relation to its powers under the IT Rules and applies to all intermediaries operating in India. A subsequent advisory, issued on 15 March 2024, abolished the need for intermediaries to obtain prior approval of the Government.
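
Purely as a hypothetical sketch, the labelling obligation in measure (3) could be implemented as a tamper-evident provenance record attached to generated media; real deployments would more likely use an industry standard such as C2PA content credentials rather than this simplified format:

    # Hypothetical sketch of a permanent label for AI-generated content:
    # a provenance record that binds an identifier of the generator (and any
    # later modifier) to a hash of the content itself, so alterations are
    # detectable. Field names and the JSON format are illustrative choices.
    import hashlib
    import json
    from datetime import datetime, timezone

    def provenance_label(content: bytes, generated_by: str, modified_by=None) -> str:
        record = {
            "sha256": hashlib.sha256(content).hexdigest(),  # ties the label to this exact content
            "generated_by": generated_by,                   # where the content was made
            "modified_by": modified_by,                     # set if altered by another system
            "labelled_at": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps(record, indent=2)

    video_bytes = b"...synthetic video data..."
    print(provenance_label(video_bytes, generated_by="example-generator-v1"))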

The National Strategy for Artificial Intelligence, released by the NITI Aayog in 2018, enumerated measures for the innovation and development of AI, highlighting its use in sectors such as agriculture, healthcare, education, and smart-city infrastructure.

Furthermore, in February 2021, the NITI Aayog released a paper titled Part 1: Principles for Responsible AI, which establishes guidelines and rules for the responsible, ethical innovation of AI systems. In August 2022, Part 2: Operationalizing Principles for Responsible AI was released; this report discusses the regulatory and policy steps that both the government and private actors need to take to catalyse the ethical development of AI.

Moreover, India is a part of the Global Partnership on Artificial Intelligence (GPAI).

Existing Laws affecting AI-

The Information Technology Act 2000, along with the IT Rules 2011 and the Digital Personal Data Protection Act 2023 (DPDPA), does not specifically govern AI but could affect its development. For instance, the DPDPA does not include publicly available personal data within its scope, which could mean that AI models can scrape such data for self-training without obtaining consent beforehand.

The upcoming Digital India Act (set to replace the IT Act and the IT Rules) may contain provisions for regulating AI systems. 

It remains to be seen how the regulatory landscape of AI further evolves in India. Since AI technologies will soon become an integral part of different sectors, a balanced approach is needed in India which walks a fine line between innovation and protection of human rights. It is important that these systems are not misused by political or private actors in any way and special measures should be taken to prevent propagation of existing biases and prejudices by these AI systems.

SUGGESTIONS-

First and foremost, industry-wide standards for transparency in AI systems should be developed. Ex-post explainable AI (XAI) techniques, aimed at understanding the inner workings of AI, are being developed by companies all over the world. These techniques should be used to clearly explain the processing of AI systems to users, and the resulting understanding should be incorporated into the development of effective regulations and legislation.
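
One example of an ex-post technique is permutation importance, which probes a trained “black box” from the outside by shuffling one input at a time and measuring how much performance drops. The sketch below uses synthetic data and scikit-learn’s implementation; it is a minimal illustration, not a substitute for fuller methods such as SHAP or LIME:

    # Minimal ex-post explainability sketch: permutation importance on a
    # "black box" classifier trained on synthetic data. Features the model
    # truly relies on show a large accuracy drop when shuffled.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature on held-out data and record the mean score drop.
    result = permutation_importance(black_box, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i, drop in enumerate(result.importances_mean):
        print(f"feature_{i}: mean accuracy drop when shuffled = {drop:.3f}")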

Moreover, regulations should mandate certain steps to be performed by private entities developing or deploying AI, such as the employment of debiasing algorithms, mandatory bias audits of AI models, and disclosure of any bias present in AI models when they are used for decision-making; a sketch of such an audit follows.
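
As a minimal sketch of what a mandated bias audit could compute, the function below measures the “disparate impact” ratio over a model’s decisions; the 0.8 flag threshold echoes the US EEOC four-fifths rule and is an illustrative choice, not an Indian or EU statutory requirement:

    # Simple bias audit: ratio of favourable-decision rates between the
    # protected group and the reference group. Data here is hypothetical.
    import numpy as np

    def disparate_impact(decisions: np.ndarray, protected: np.ndarray) -> float:
        """Positive-decision rate of the protected group divided by the rest's."""
        rate_protected = decisions[protected == 1].mean()
        rate_reference = decisions[protected == 0].mean()
        return rate_protected / rate_reference

    # Hypothetical audit data: 1 = favourable decision (e.g. loan approved).
    decisions = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0])
    protected = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

    ratio = disparate_impact(decisions, protected)   # 0.33 / 0.67 = 0.50
    print(f"disparate impact ratio: {ratio:.2f}")
    print("audit flag:", "review for bias" if ratio < 0.8 else "within threshold")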

The digital divide among various countries and different classes of people should be kept in mind while crafting the necessary legislation. Additionally, there should be collaboration with different sectors (especially the energy sector, as AI consumes a great deal of energy) for the holistic development of AI systems.

CONCLUSION

In conclusion, a harmonious approach towards the governance of AI is needed to ensure its effective and smooth application. Technologies like AI are capable of miracles, but the same technology can also cause havoc and great destruction. Only effective governance can tilt this precarious scale towards peaceful development and innovation. It is the duty of legal stakeholders to become strong opponents of Big Tech if rapid innovation leads to violations of human rights. Moreover, it is important that the legal landscape does not remain static and continues to develop at the pace of technological change. Lastly, lessons should be learned from implementation so that an effective balance between innovation and societal implications can be reached.

NAME- ANANYA SRIVASTAVA

COLLEGE NAME- DR. RAM MANOHAR LOHIA NATIONAL LAW UNIVERSITY.