
Regulation of AI (Artificial Intelligence) in the contemporary world


Artificial Intelligence[1], or AI, means making machines think like humans. It is when computers can do smart things such as understanding language, recognizing speech, or even seeing and interpreting pictures. Examples include smart assistants like Siri, Alexa, and Google Assistant; chatbots like ChatGPT and Bard; self-driving cars; etc. AI works by combining large amounts of information with special computer programs. To make AI work, people use programming languages such as Python, Java, C++, or Julia to write programs that analyse vast quantities of data and situations, through which the systems learn. That is where issues arise: where will such huge volumes of data come from? This is how the concept of data protection becomes a practical reality.

Artificial intelligence has undergone a remarkable transformation since its inception in the 1950s. Early pioneers laid the groundwork for AI concepts, but the field faced setbacks due to computing limitations. In the 1990s, AI resurged with the advent of big data and more powerful computers, leading to breakthroughs in machine learning, deep learning, and natural language processing (NLP). The 2000s saw the commercialization of AI-powered products such as search engines, recommendation systems, and speech recognition. The 2010s witnessed further advancements, including voice assistants, self-driving cars, and AI-based cancer detection systems. The 2020s marked the emergence of generative AI, a new type of AI that can produce original content.
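The idea that a program "learns by analysing data" can be shown with a minimal sketch. The example below is purely illustrative: the messages, labels, and the `train`/`predict` helpers are all invented for this article, and real AI systems use far larger datasets and far more sophisticated statistical models.

```python
# A toy sketch of "learning from data": a word-count text classifier
# built only from Python's standard library. All data is invented.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training-set words overlap the message most."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

training_data = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]

model = train(training_data)
print(predict(model, "claim your free prize"))  # spam-like words dominate
print(predict(model, "monday team meeting"))    # work-like words dominate
```

The point of the sketch is only that the program's behaviour comes entirely from the data it was shown, not from hand-written rules, which is why the availability and protection of that data matter so much.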

Keywords: Artificial Intelligence (AI), Machine Learning, Data Protection, Generative AI, Ethical Guidelines, Regulatory Landscape, Multistakeholder Approach, Privacy Protection.


How AI has impacted humans for the better

Reducing Human Error: AI plays a crucial role in minimizing errors by offering a zero-error potential when programmed accurately. Through predictive analysis, AI models anticipate outcomes, leaving little room for mistakes. By automating repetitive tasks such as data collection, customer interactions, and software testing, AI allows human workers to focus on tasks that require uniquely human abilities, contributing to overall efficiency.

Efficient Data Handling and Decision-Making: One of AI’s significant advantages is its ability to smoothly handle big data. This facilitates quick decision-making by providing reliable insights at a faster pace. The 24/7 availability of AI systems, combined with powerful algorithms, allows for the consolidation of data and predictions, contributing to faster and more informed decision-making processes.

Emergence of Generative AI[2]: Generative AI can be used to automate many tasks that are currently performed by humans, such as data entry and customer service. For example, Amazon uses generative AI to automatically generate product descriptions and customer support responses. It is even being used to create new products and services that previously seemed impossible. For example, OpenAI’s GPT-3 and GPT-4 power engaging chatbots that can be used for customer service or to provide information about products and services.

Although computing offers humans many benefits in enhanced productivity, efficiency, and decision-making, the emergence of AI also raises many issues, such as job displacement, bias and discrimination, privacy and security concerns, and probable existential risk.

Current Regulatory Landscape in the world

Although the USA had previously viewed AI with some leniency, there have recently been increasing calls for regulation. While the UK is developing a set of pro-innovation regulatory principles, the Cyberspace Administration of China is also seeking feedback on a proposal to regulate AI. Internationally, the Council of Europe is presently drafting an international treaty on artificial intelligence (AI), UNESCO accepted recommendations on the ethics of AI in 2021, and the Organisation for Economic Co-operation and Development (OECD) adopted a (non-binding) recommendation on AI in 2019.[3]

Importance of AI (Artificial Intelligence) regulation

According to the AI Index at Stanford[4], the annual number of AI-related laws passed in the 127 surveyed countries jumped from one in 2016 to 37 in 2022 alone.

Privacy Protection: AI often involves the processing of vast amounts of data, raising concerns about individual privacy. Regulations are necessary to define how data should be collected, stored, and used, ensuring that individuals’ privacy rights are protected.

  1. The General Data Protection Regulation (GDPR) in the European Union, for example, establishes rules for the lawful and transparent processing of personal data.

Mitigating Bias and Discrimination: AI systems may inadvertently perpetuate biases present in training data, leading to discriminatory outcomes. Regulatory measures aim to minimize bias, promote fairness, and ensure that AI applications do not disproportionately impact certain groups or individuals.

  1. Amazon’s Gender Bias[5]: Amazon’s automated recruitment system unintentionally discriminated against women. The AI, trained on resumes from past candidates, learned that fewer women were in technical roles and, in turn, gave lower ratings to female applicants.
  2. US Healthcare Algorithm Racial Bias[6]: An algorithm used by US hospitals for predicting patients’ medical needs exhibited racial bias. It relied on healthcare cost history, overlooking the fact that black patients often paid for active interventions like emergency hospital visits.
  3. ChatBot Tay’s Discriminatory Tweets[7]: In 2016, Microsoft released a chatbot called Tay on Twitter. The chatbot was designed to learn from its interactions with users. However, Tay quickly began to post discriminatory and offensive tweets. Microsoft was forced to take Tay offline after just 16 hours.
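The pattern behind the Amazon example above can be illustrated with a deliberately tiny, invented sketch: a scorer trained on historically skewed hiring decisions ends up penalizing a word associated with the disadvantaged group. All resumes, labels, and helper functions here are made up for illustration and are not Amazon's actual system.

```python
# A toy illustration (invented data) of how bias in training data carries
# over into a model's decisions.
from collections import Counter

past_decisions = [
    ("captain chess club", "hired"),
    ("captain debate team", "hired"),
    ("captain women's chess club", "rejected"),  # historical skew in the data
    ("women's coding society lead", "rejected"),
]

def train(examples):
    """Count word frequencies under each historical outcome."""
    counts = {"hired": Counter(), "rejected": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def score(counts, resume):
    """Higher score = words seen more often in past 'hired' decisions."""
    words = resume.lower().split()
    return (sum(counts["hired"][w] for w in words)
            - sum(counts["rejected"][w] for w in words))

model = train(past_decisions)
# Two otherwise similar resumes: the learned scorer rates the second
# lower purely because "women's" appeared only in past rejections.
print(score(model, "captain chess club"))
print(score(model, "women's chess club captain"))
```

Because the model never sees an explicit rule about gender, only skewed historical outcomes, the bias is inherited silently; this is exactly the kind of effect that regulatory auditing requirements aim to detect and mitigate.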

Ensuring Safety and Security: AI systems can have real-world implications, especially in critical sectors like healthcare, transportation, and finance. Regulations are necessary to set safety standards, ensuring that AI applications meet certain criteria to prevent accidents, malfunctions, or intentional misuse.

  • In 2015, a high-frequency trading firm used an AI-powered algorithm to manipulate the stock market. The algorithm allowed the firm to make profits by executing trades faster than other traders.
  • In the 2020 US presidential election, AI was used to spread conspiracy theories about the election being stolen from Donald Trump. These conspiracy theories led to the January 6th attack on the US Capitol[8].

Protecting Against Misuse: AI technologies can be exploited for malicious purposes, such as deepfake creation, cyberattacks, or social manipulation. Regulations help create frameworks to detect and prevent the misuse of AI, safeguarding against potential threats to security and democracy.

  • In 2017, the WannaCry ransomware attack used AI to spread quickly across the world, infecting millions of computers and causing billions of dollars in damage.[9]
  • In 2016, Russia used AI to spread misinformation on social media in an attempt to influence the outcome of the US presidential election[10].

Research methodology: This paper is descriptive in nature, and the research is based on secondary sources for a deep analysis of the need for AI regulation in the world and the various approaches towards it. Secondary sources of information such as newspapers, journals, and websites are used for the research.

Review of Literature:

Some significant AI-related cases in India highlight the legal issues at stake when AI is brought into the courts:

  1. Jaswinder Singh v. State of Punjab[11]: In this case, the Punjab & Haryana High Court utilized ChatGPT, an AI language model, to analyse bail jurisprudence. While the court ultimately denied bail to the petitioner, it acknowledged the potential of AI to assist in legal decision-making.
  2. Christian Louboutin SAS & Anr. v. M/s The Shoe Boutique – Shutiq[12]: In this case, the Delhi High Court addressed the admissibility of AI-generated data as evidence. The court held that AI responses cannot replace human judgment in legal proceedings. However, AI tools can be used for preliminary understanding or research.

Recent case examples:

  1. Microsoft, GitHub, and OpenAI are currently being sued[13] in a class action motion that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit.
  2. Two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case that alleges they infringed on the rights of millions of artists by training their tools on web-scraped images.

Regulatory approaches of real-world examples

European Union: The European Union (EU) stands as one of the largest global jurisdictions actively shaping the regulation of digital technology on a worldwide scale. The Artificial Intelligence Act[14] was considered in 2023 to be the most comprehensive global regulation of AI.

The Act’s primary objective is to categorize and oversee AI applications based on their potential to cause harm. This classification falls primarily into three distinct categories: prohibited practices, high-risk systems, and other AI systems. Prohibited practices encompass the use of AI for subliminal manipulation, the exploitation of vulnerabilities leading to physical or psychological harm, and indiscriminate real-time remote biometric identification in public spaces by law enforcement. The Act outrightly prohibits the first two of these practices, while proposing an authorization regime for real-time biometric identification in the context of law enforcement.

USA: The proposed AI Initiative Act aims to accelerate AI research for economic and national security. The White House issued draft guidance on AI regulation, prompting responses and updates. Specific agencies like the FDA address AI in medical imaging. NYC’s Bias Audit Law prohibits biased AI tools in hiring. The Biden administration also hints at proactive federal AI regulation in 2023.[15][16]

Some global cooperation against concerns of Artificial Intelligence (AI)

The Global AI Governance Summit[17]: This is an annual event that brings together experts from around the world to discuss the ethical and responsible development and deployment of artificial intelligence (AI). The summit is organized by the World Economic Forum, a non-governmental organization that promotes international cooperation.

Bletchley Park Declaration[18]: The Artificial Intelligence (AI) Safety Summit 2023, held at Bletchley Park, England, marked a significant turning point in the global approach to tackling the challenges posed by frontier AI technologies. It aims to create a collective understanding and coordinated approach to address the potential risks and benefits of advanced AI systems, known as frontier AI.

Suggestions for Future regulations:

Developing effective regulations for artificial intelligence (AI) requires careful consideration of various factors to ensure ethical, fair, and safe deployment of AI technologies. Here are recommendations for future AI regulations:

  1. Multistakeholder Involvement: Encourage a multistakeholder approach involving government bodies, industry experts, academia, and civil society to collectively contribute to the regulatory framework. This inclusive approach ensures diverse perspectives and comprehensive insights.
  2. Risk-Based Regulation: Implement risk-based regulations that categorize AI applications based on their potential for harm. This allows for tailored regulatory measures, focusing more stringent requirements on high-risk applications while facilitating innovation in low-risk scenarios.
  3. Transparency and Explainability: Mandate transparency in AI systems, especially in critical applications. Users should understand how AI decisions are made, promoting trust and accountability. Require developers to provide explanations for significant AI-driven decisions, enhancing interpretability.
  4. Accountability and Liability: Clarify liability and accountability frameworks for AI systems. Establish clear lines of responsibility for developers, operators, and users, especially in cases of AI-driven errors or harm. Develop mechanisms to address challenges in attributing responsibility.
  5. Ethical Guidelines: Integrate ethical guidelines into regulations, emphasizing fairness, non-discrimination, and respect for human rights. Encourage developers to adopt ethical AI principles, promoting responsible AI development and deployment.
  6. International Collaboration: Foster international collaboration on AI regulations to ensure consistency and address global challenges. Collaborate with other nations, organizations, and international bodies to establish common standards and facilitate cross-border deployment.


In conclusion, the dynamic field of artificial intelligence (AI) has evolved significantly, impacting various aspects of human life. The global regulatory landscape is responding to the challenges posed by AI, emphasizing the need for ethical guidelines, transparency, and international collaboration. Recent legal cases underscore the importance of robust regulations to address privacy concerns and ensure accountability. Moving forward, embracing a multistakeholder approach and adapting regulations to the evolving AI landscape will be essential for harnessing its benefits responsibly.

Pradeep Yadav

Law Centre 1, Faculty of Law, University of Delhi











[11] CRM-M-22496-2022

[12] CS (COMM) 583/2023

[13] .