REGULATING AI – BALANCING INNOVATION AND ACCOUNTABILITY

ABSTRACT: –

The world has experienced very rapid development of Artificial Intelligence (AI) in almost all spheres and areas, exposing us to the downside risks that come with it. As AI systems become more and more autonomous, it is important to establish regulations that define appropriate behaviour in developing, deploying, and using these technologies safely.

Poor or absent regulation can allow discriminatory AI systems, privacy intrusions, and redundancies in the workforce, all of which have the potential to do real harm. Conversely, strong regulation can underpin trustworthy, innovation-friendly mechanisms as well as the protection of human rights.

The rapid growth of AI brings with it a multitude of risks, some of which have already become a reality. For instance, we’ve seen accidents caused by self-driving cars, and more subtly, AI has contributed to the manipulation of elections and the polarization of political views. It’s essential that we develop laws and regulations specifically tailored to AI to ensure its responsible use.

Keywords – Artificial Intelligence, Regulation, Privacy, Polarization. 

INTRODUCTION: – 

The fast pace of development and deployment of Artificial Intelligence systems brings with it many benefits but also significant risks that cannot be overlooked. This makes it imperative to have appropriate regulations in place so that AI systems can be safely and responsibly developed, deployed, and used with an increasing degree of autonomy. Lack of regulation can mean human rights violations, bias in AI systems, privacy violations, and loss of jobs.

On the contrary, good regulation can underpin trustworthiness, promote innovation, and protect human rights. According to the OECD (2019), developing effective regulations for AI is a very challenging task because it requires in-depth knowledge of the technology and its associated risks. A balance therefore needs to be drawn between promoting innovation and protecting against the possible risks associated with AI.

AI cannot be regulated on a one-size-fits-all basis. The technology is too multi-faceted, and the areas of its application are too diverse; regulation must instead be targeted and risk-based. For example, AI used in video games is far removed from AI that poses a serious threat to the security of critical infrastructure or to human lives.

RESEARCH METHODOLOGY: – 

This is a doctrinal research paper: the issue of Artificial Intelligence regulation is explored by analysing literature and other secondary data sources. This approach suits the research because it offers an in-depth view of the legal and theoretical frameworks governing AI regulation.

REVIEW OF LITERATURE: – 

  • ETHICALLY ALIGNED DESIGN 2019 REPORT: –  The IEEE's 2019 "Ethically Aligned Design" report envisions autonomous and intelligent systems that serve human well-being, dignity, and safety. It addresses urgent topics such as human rights, transparency, and fairness, putting forward principles and recommending initiatives that can guide the development of Artificial Intelligence in harmony with human values. Human welfare is treated as the prime consideration, requiring input from technology, ethics, philosophy, and social science. The report provides a framework for developing artificial intelligence and autonomous systems in ways that foster human good, looking towards a future in which such systems are designed ultimately to enhance human well-being, dignity, and safety.
  • AI NOW DECEMBER 2019 REPORT: –  The 2019 AI Now report sets out a sophisticated understanding of AI's impact on society, underscoring that AI is not neutral and can deepen existing inequities, particularly for marginalized groups. It calls for a democratic and participatory approach to AI development, in which social justice and human rights take precedence over corporate profits. This requires an acute, critical understanding of the ways in which AI exerts its impact, and demands greater justice and fairness in both its development and its deployment. The report thus calls for socially fairer and less biased directions in AI development, for the benefit of all members of society and not just a privileged few.
  • On Artificial Intelligence – A European approach to excellence and trust: – The European Commission's white paper "On Artificial Intelligence – A European approach to excellence and trust", COM (2020) 65 final, enunciates a vision for AI development and deployment in the EU. It centres on human-centred artificial intelligence, giving precedence to AI development that is trustworthy, transparent, and accountable. The paper charts a way ahead for AI development that respects European values and fundamental rights while innovation and competitiveness flourish. It also underscores the need for investment in AI research and development, and for a more enabling environment for AI start-ups and SMEs.
  • The Future of Jobs Report 2020: – According to the Future of Jobs Report 2020, advancing technology, and Artificial Intelligence in particular, is likely to disrupt job markets. By 2025, some 85 million jobs may be displaced by automation, while 97 million new roles adapted to the new technological landscape may emerge. The report expressly underlined the need for workers to develop skills that complement machines: critical thinking, creativity, and emotional intelligence. The most in-demand skills will be those that make humans effective at working with machines, such as data analysis, programming, and digital literacy. It also predicts a more gig-based future of work, with people engaging in freelance and project-based work. Workers will have to adjust to a job market transformed by automation by acquiring skills that augment their abilities and enable them to serve effectively in a fast-changing work environment.

AI AND ITS REGULATION: – 

Any regulation of AI use must address challenges in three threshold areas. The first deals with familiar abuses: privacy, security, and fairness. In particular, regulations should ensure that AI systems are designed and deployed in a way that respects basic human rights and dignity.

The second dimension concerns transparency and accountability. Mechanisms should be developed to ensure that AI systems are transparent, explainable, and accountable. Regulations should encourage AI systems that are auditable and whose decision-making processes are open to scrutiny.

The third area is the promotion of innovation and competition. The flipside of regulation is that there must also be a means of ensuring innovation and competition in the development of AI systems. Regulations need to encourage new AI applications and services while ensuring that they are safe and responsible.

Developing rules for AI also requires international cooperation. AI is a global technology, and its regulation calls for governments, industries, and civil society organizations to cooperate in developing common standards and guidelines for building and deploying AI.

Industry involvement is important. Rule-making for AI should proceed in close cooperation with industry players to ensure that the rules, when finally implemented, are practical, effective, and not prone to hampering innovation.

Finally, public engagement in the development of AI regulations is called for. The regulatory process ought to build in public engagement and consultation, so that the resulting rules mirror the values and concerns of society as a whole.

In other words, AI regulation is an extremely complex and challenging process. Nevertheless, it is of paramount importance: it ensures that AI systems are designed and rolled out as safely and responsibly as possible, while creating an enabling environment for the development of useful AI systems that serve human well-being and dignity.

CHALLENGES OF BALANCING INNOVATION AND ACCOUNTABILITY: – 

A new wave of Artificial Intelligence (AI) technology is enabling widespread integration that adds efficiency and effectiveness, reinforcing productivity across an assortment of sectors. Yet unfettered movement in the AI industry creates many risks for individuals and society. Unregulated AI risks causing damage, reinforcing biases, and violating privacy; we therefore need comprehensive regulation that manages both to spur innovation and to hold actors accountable when systems malfunction.

According to the European Commission's white paper, one of the greatest dangers of AI run off-leash is the harm it can do to individuals and society as a whole. If not designed and deployed responsibly, AI systems can lead to physical injury and death, financial loss, or emotional damage. For example, poorly tested autonomous vehicles can, and have, caused accidents resulting in fatalities. Analogously, biased or faulty AI-powered medical diagnosis systems can produce flawed diagnoses that harm patients.

Parallel to the rapid development of AI technologies is a growing fear of their vulnerabilities. It is extremely important that AI systems be designed to meet at least the same kinds of requirements expected of other engineered systems, such as those in aviation and power systems. Among the many pressing vulnerabilities needing urgent attention is the danger posed by data-poisoning techniques.

Data poisoning refers to manipulating training data to compromise an AI system. A classic example is spam filtering, where attackers can manipulate the training data so that spam goes undetected. Another kind of AI vulnerability is the "back door" attack, in which malicious programmers of AI systems insert code that lets them infiltrate the system later. A study at NYU demonstrated that back-door attacks can be used to create AI models with state-of-the-art performance on their training data but highly erratic behaviour when run on attacker-chosen inputs.
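
To make the data-poisoning mechanism concrete, the following minimal sketch (in Python, using scikit-learn; the messages, labels, and flipped indices are all invented for illustration) shows a label-flipping attack on a toy spam filter, in which the attacker relabels spam examples as legitimate mail so that similar spam later slips through:

    # A minimal, hypothetical sketch of label-flipping data poisoning.
    # The attacker cannot change the model, only some training labels.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_texts = ["win a free prize now", "claim your free money",
                   "meeting moved to 3pm", "lunch tomorrow?",
                   "free prize claim now", "quarterly report attached"]
    clean_labels = [1, 1, 0, 0, 1, 0]            # 1 = spam, 0 = ham

    poisoned_labels = list(clean_labels)
    poisoned_labels[0] = poisoned_labels[1] = 0  # attacker flips spam -> ham

    test_text = ["free prize money now"]         # spam the attacker wants through

    for name, labels in [("clean", clean_labels), ("poisoned", poisoned_labels)]:
        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(train_texts)
        model = MultinomialNB().fit(X, labels)
        flagged = bool(model.predict(vectorizer.transform(test_text))[0])
        print(f"{name} model flags the message as spam: {flagged}")

In this toy setting the clean model typically flags the test message as spam while the poisoned model typically lets it through; real attacks are subtler, but they follow the same principle of corrupting what the model learns from.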

First, contracting out the training of ML models to cloud platforms increases the risk of back-door attacks. Another grave problem arises from the recurrent practice of re-purposing and re-training AI models for new tasks, known as transfer learning. On the one hand, transfer learning can reduce the cost of training; on the other, it makes AI models more vulnerable to misclassification attacks, especially when the central models are publicly available.
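
The following minimal sketch (NumPy only; the "pretrained" feature extractor, trigger pattern, and data are synthetic inventions for illustration) suggests why transfer learning can inherit such flaws: the new task fine-tunes only a small head on top of a shared extractor, so hidden behaviour buried in the extractor survives untouched:

    # A hypothetical sketch: a back door hidden in a shared feature extractor
    # survives transfer learning, because fine-tuning never touches it.
    import numpy as np

    rng = np.random.default_rng(1)

    def public_features(x):
        # Pretend "pretrained" extractor. It behaves honestly, except that a
        # secret trigger in the input (x[0] > 4) silently inverts the features.
        honest = np.tanh(x[1:5])
        return -honest if x[0] > 4.0 else honest

    # Transfer learning: fit only a linear head on clean data for a new task
    # (label = whether x[1] is positive), reusing the public extractor as-is.
    X = rng.normal(size=(200, 16))
    y = (X[:, 1] > 0).astype(float) - 0.5
    F = np.array([public_features(x) for x in X])
    head = np.linalg.lstsq(F, y, rcond=None)[0]  # least-squares "fine-tuning"

    clean = rng.normal(size=16)
    clean[0], clean[1] = 0.0, 2.0                # no trigger, clearly positive
    triggered = clean.copy()
    triggered[0] = 5.0                           # attacker adds the trigger

    print("clean score:    ", public_features(clean) @ head)      # positive
    print("triggered score:", public_features(triggered) @ head)  # sign flips

The point of the sketch is structural rather than realistic: because only the head is re-trained, anything malicious baked into the shared representation is carried over to every downstream task.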

In particular, adversarial attacks are most potent against AI that depends heavily on inputs to drive decisions or predictions. In that sense, computer-vision systems are vulnerable to these attacks almost by definition, since they depend on countless inputs: the pixels. Likewise, an AI model making predictions about human behaviour or taste from a multitude of diverse inputs, such as social media data, search entries, and location tracking, is prone to misclassification, hacking, and strategic manipulation.
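
A minimal sketch of one well-known adversarial technique, the fast gradient sign method (FGSM), illustrates the mechanism; the toy logistic-regression "image" classifier and its random weights below are assumptions for illustration, not a real vision model:

    # A hypothetical FGSM-style attack on a toy linear classifier: nudge every
    # pixel slightly in the direction that increases the model's loss.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=64)                 # "trained" weights, 8x8 image flattened
    x = rng.normal(size=64)                 # a clean input image
    y = 1.0                                 # its true label

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(w @ x)))   # P(class = 1)

    # For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
    grad_x = (predict(x) - y) * w
    epsilon = 0.25                          # small per-pixel perturbation budget
    x_adv = x + epsilon * np.sign(grad_x)   # the FGSM step

    print("clean prediction:      ", predict(x))
    print("adversarial prediction:", predict(x_adv))  # pushed toward class 0

Each pixel moves by at most epsilon, so the perturbed input can look essentially unchanged to a human while the model's output shifts drastically, which is exactly the asymmetry described above.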

Centralized training of models increases the risk of vulnerabilities, and the re-use of AI models for new tasks compounds those risks. As AI models themselves become more complex, with ever more inputs, the likelihood of misclassification and manipulation goes up. Handling these vulnerabilities will be key to the responsible development and deployment of AI systems.

Because AI systems are exposed to performance and reliability risks, it is important to build secure and robust AI that can withstand data poisoning, back-door attacks, and adversarial attacks. If these vulnerabilities are recognized and remediation is begun, AI can be developed and used responsibly for the betterment of humankind.

STRATEGIES FOR EFFECTIVE AI REGULATION: – 

With the increasing spread of AI technology comes a growing need for proper regulation, so that systems are developed and deployed in a responsible and ethical way. Two key paths to effective AI regulation run through transparency and accountability.

Transparency in AI systems makes visible how decisions are made and thus underpins fairness. Artificial intelligence systems, in particular those making decisions that affect individuals, must be transparent about the processes and logic they use. In other words, the factors that contribute to a decision, and the reasoning used in arriving at it, should be open and understandable to people. This helps not only in detecting biases or errors that may creep into an AI system, but also in ensuring fairness and the absence of bias.
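
As one small illustration of what "open and understandable" can mean in practice, the sketch below (hypothetical loan-scoring factors and weights, invented for this example) reports each factor's contribution to a linear model's decision rather than only the decision itself:

    # A hypothetical sketch of decision transparency for a linear scorer:
    # report each factor's contribution alongside the final outcome.
    import numpy as np

    factors = ["income", "existing_debt", "years_employed", "missed_payments"]
    weights = np.array([0.6, -0.8, 0.3, -1.2])   # assumed model weights
    applicant = np.array([1.2, 0.4, 0.9, 0.0])   # standardized applicant inputs

    contributions = weights * applicant
    score = contributions.sum()
    print("decision:", "approve" if score > 0 else "decline", f"(score={score:.2f})")
    for name, c in sorted(zip(factors, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:16s} contributed {c:+.2f}")

Such factor-level reporting is trivial for linear models; for complex models it must be approximated, which is precisely why regulation that demands explainability shapes how systems are engineered.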

Proper measurement of autonomous and intelligent systems is necessary to make sure that they bring real benefits to humanity. Traditional metrics, such as profit, GDP, consumption levels, and occupational safety, may seem important, but they say little about improved human well-being, whether psychological, social, economic, or environmental in nature. Well-being metrics, in this respect, offer a more holistic approach to assessing just how beneficial technological progress really is.
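
For instance, a composite well-being index of the kind this paragraph contrasts with single economic metrics might be sketched as follows (the dimensions, weights, and scores here are entirely hypothetical):

    # A hypothetical composite well-being index: several normalized dimensions
    # weighted into one score, instead of a single economic figure like GDP.
    dimensions = {                      # each scored on a 0..1 scale (assumed)
        "psychological": 0.70,
        "social":        0.55,
        "economic":      0.80,
        "environmental": 0.45,
    }
    weights = {k: 0.25 for k in dimensions}     # equal weighting, for simplicity

    index = sum(weights[k] * dimensions[k] for k in dimensions)
    print(f"composite well-being index: {index:.2f}")   # vs. economic alone: 0.80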

AI can help analyse complex data to identify ways of improving human well-being, opening new avenues for societal and technological innovation. The caveat is that AI can also have unintended negative side effects that lower human welfare. It therefore becomes very important to develop AI that is oriented toward human well-being and freedom.

To attain this, AI should be designed with values-based methodologies focussed on human development, foregrounding human assistance over autonomy. The applications developed should be sustainable from the perspective of economic value creation, while accounting for broader social costs and benefits.

AI designs, therefore, should respect human emotions and emotional experience. Intelligence has an affective core driven by feelings such as anger, fear, or joy. Autonomous AI and intelligent systems participating in or facilitating human society should not cause harm by amplifying or dampening human emotional experience.

Clear norms and standards are a precondition for developing and deploying AI in accordance with human values. These norms should indicate which communities such systems will coexist with and what standards apply to particular tasks. With this in place, we stand a chance of developing AI that truly benefits humankind, furthering human well-being, freedom, and real development.

Ultimately, AI development should be human-centred in its approach, so that machines are positioned to assist humans rather than replace them. In a nutshell, a values-based approach to the development of AI will help in producing systems that deliver economic value alongside social responsibility and environmental sustainability.

FUTURE OF AI REGULATION: ACHIEVING A BALANCE: – 

While social sciences and humanities approaches have a long history in information security and risk management, research that tackles both the social and technical dimensions of security remains necessary but relatively nascent. At the core of this challenge lies the need to redraw the boundaries of analysis and design: to extend them beyond the algorithm, and to secure channels through which all affected stakeholders can democratically steer system development and dissent when concerns arise.

Such engagement also tends to yield a far better set of regulations, one that considers the different perspectives and needs of many stakeholder groups, including AI developers, policymakers, the general public, and civil society organizations. By engaging with stakeholders, regulators can develop a deeper understanding of the AI ecosystem and craft regulations appropriate to the specific needs of various industries and applications.

The need to balance accountability against innovation is real: major consequences for the development of AI technologies may result both from overly restrictive regulations and from a lack of them. Over-regulation can throttle innovation and development in AI, while a no-regulation approach may allow AI to grow without supervision and control, leaving avenues of misuse open. Regulators are therefore required to find the middle road that encourages innovation while making sure AI systems are developed and fielded in a responsible and ethical way.

The long-term implications of AI regulation are far-reaching and profound. Choices made today in the field of regulation will fashion tomorrow's technology and its relations with society. A forward-looking attitude to regulation is required, in which the possible risks and benefits of AI are correctly anticipated and strategies to mitigate the former and maximize the latter are charted accordingly.

One major priority for regulators has to be making AI transparent, explainable, and accountable. This covers standards for both development and deployment, as well as mechanisms for oversight and enforcement. Regulatory bodies, in these contexts, must also address the ethical dimensions of AI, including bias, fairness, and privacy.

Another major area for concentrated effort is the fair sharing of AI's benefits across society, with policies that address potential job displacement caused by AI and strategies aimed at building digital literacy and skills.

Achieving a balance between the innovation and accountability goals of AI regulation requires collaborative effort from stakeholders. Only by engaging manifold perspectives and assuming a forward-looking attitude can regulators develop comprehensive and impactful regulations that promote the sustainable development of AI technologies and equitably shared benefits for society.

CONCLUSION: – 

The regulation of Artificial Intelligence stands at the threshold of a new era in technological advancement. Navigating its complexities requires a balanced approach between the need to foster innovation and the need to ensure accountability. A regulatory framework with transparency, accountability, and collaboration at its heart can support the responsible advancement of AI while mitigating its potential risks.

The stakes are very high, and the consequences of inaction or poor regulation could be enormous. If left uncontrolled, AI could entrench existing social inequities, compromise privacy, and undermine trust in institutions. On the other hand, with considered, forward-looking regulation, we could harness the transformative power of AI to drive growth, improve lives, and work toward a fairer society.

The future of artificial intelligence regulation holds high potential for creating a world in which technology serves society positively and ethically. Success can come only through joint efforts by governments, industry leaders, civil society, and academia to build a fit-for-purpose regulatory environment focused on innovation, accountability, and the centrality of human beings.

Moving forward, we must remain vigilant, adaptive, and true to the principles of transparency, accountability, and collaboration. By doing so, the full potential of AI to drive progress, prosperity, and a better tomorrow for all can be unlocked. The time to act is now; it lies with us to realize the future of AI regulation.

NAME – Shagun Kothari

COLLEGE – Maharashtra National Law University, Nagpur.