Does the DPDP Act protect the mechanism of personal data protection in the face of artificial intelligence software?

Since man is a constantly inquisitive animal, he is always tempted to make discoveries to meet the new needs of life that arise; the digitized information world we live in today is the result. In Marshall McLuhan’s concept of the global village, the human being who searched for the new faces of globalization moved to the edge of the technological information world. Another valuable product of this technology is artificial intelligence, or automated intelligence, which is turning the current information world upside down and embarking on a new journey. However, personal data protection has been challenged by this artificial intelligence software, while the security of personal data and the copyrights attached to personal information systems have been directly and indirectly affected. The main issue discussed in this research paper is whether the Digital Personal Data Protection Act safeguards the personal data protection mechanism in the face of artificial intelligence software. As a developing country with the largest population in the world, India still has a low level of literacy. The problem that arises here is the extent of media literacy among the least literate people. The least media-literate Internet users are exposed to artificial intelligence data systems that affect their personal data. Through the context of the Act, it is possible to identify unauthorized access to personal data, unauthorized use of that data, and the impact on the organizations that provide the data. Just as the Act confirms the rights of individuals regarding their data, it does not hesitate to impose obligations on the institutions that process that data, which is itself challenging.
Accordingly, the purpose of this paper is to ask whether the legal mechanism has enough potential to solve the problems that have arisen, and to recommend how reforms of the local law should be aligned with international parameters, comparing it with the practical social context.

Keywords 

DPDP Act, Artificial Intelligence, Data Protection, Personal Data, Copyright

Research Methodology

This research is carried out using a doctrinal approach, drawing on primary and secondary sources. Statutes and court decisions are used as primary sources, while law books, legal journals, guidelines, and websites are utilized as secondary sources. The principal statute examined is the Indian Digital Personal Data Protection Act, No. 22 of 2023.

Introduction

“There is no universal definition of artificial intelligence. AI is generally considered to be a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence.” Technically, the material world presents an infinite number of tasks in which a person engages, and for these, a large-scale use of data systems based on automated intelligence has been identified. It is regrettable, however, that India, which has the largest population in the world, remains a developing state with a comparatively low level of literacy. Yet India is second in the world with 692 million internet users. A problem that arises is the extent of media literacy among this least literate population. Where media literacy is lacking, casual users of AI technology applications are unable even to assess their own vulnerability, which has hampered the assurance of personal data security in this regard.

In a digitized society, economic patterns as well as strategies are being digitized. Because many people are switching to online earning methods and paying more attention to them, greater legal intervention is necessary. Here, the DPDP Act seems to play a special role in personal data protection. It is therefore important to pay close attention to the protection of personal data systems affected by activities related to AI technology.

The development of artificial intelligence software, and the way that software interacts with people’s data, is advanced further by new technologies. The Digital Personal Data Protection Act created in India was also inspired by the European General Data Protection Regulation. Its primary purpose is to control the collection, storage, and processing of personal data. It also empowers individuals by ensuring transparency in access to personal data. Artificial intelligence functionality is being embedded in the mobile applications and other software being created today. Accordingly, the first point of impact on individual privacy rights is the ability of AI to provide responsive answers to a person’s request, through the widespread use of data algorithms that are updated to match new requirements. The specialty of these automated intelligent systems is their ability to access sensitive and non-sensitive stores of human information and to make predictions by screening and summarizing data. Through AI technology, even human images can be recreated and reproduced.

Review of Literature

In India, the DPDP Act, which was passed in 2023, provides a legal framework for the safeguarding of personal data. It is crucial to comprehend and adhere to the DPDPA’s standards in the context of AI technology, as data is essential for training algorithms and making defensible decisions.

AI applications may be subject to the provisions of the DPDP Act.

The Act imposes several obligations on data controllers and processors, including the requirement to obtain consent from individuals before processing personal data, ensure the security of data, and allow individuals to access, correct, or delete their personal information.

In the context of AI, these obligations can be understood as follows.

Obtaining Consent: AI developers must obtain consent from individuals before collecting, processing, or using their data. This may require obtaining explicit consent from individuals and ensuring that the purpose of data processing is clear and understandable.

Security of data: AI developers must implement appropriate security measures to protect personal data used by their AI systems. This may include encryption, access control, and regular security audits.

Rights of Access, Correction, and Deletion: Individuals shall have the right to access, correct, or delete their personal information processed by AI systems. AI developers must ensure that these rights are respected and that appropriate mechanisms are in place to facilitate these requests.

Accountability: AI developers must be accountable for their AI systems and the personal data they process. This includes implementing appropriate governance structures, conducting regular audits, and ensuring compliance with the Data Protection Act.

In the field of artificial intelligence, the Digital Personal Data Protection Act (DPDP Act) in India is essential for protecting personal data. The DPDPA’s guidelines must be followed by individuals and organizations to properly secure personal data.

Individuals now have control and transparency over their data thanks to the rights established by the DPDPA. Organizations that gather and handle personal data are bound by rules such as purpose limitation, data minimization, and storage limitation.

  1. Consent Mechanism

Getting people’s lawful consent before collecting and using their data is one of the core elements of the DPDPA. In the context of AI, organizations using user data to train algorithms must properly disclose this usage to users during the consent process.
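As a concrete illustration of such a consent mechanism, the sketch below records each user's consent against an explicitly disclosed purpose and refuses processing without it. This is a minimal Python sketch; the names (`ConsentRecord`, `may_process`) and fields are hypothetical, not drawn from the DPDP Act or any particular system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One user's consent decision for a single, explicitly disclosed purpose."""
    user_id: str
    purpose: str          # must be disclosed to the user, e.g. "model training"
    granted: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_process(records: list[ConsentRecord], user_id: str, purpose: str) -> bool:
    """Processing is allowed only if this user granted consent for this exact purpose."""
    return any(r.user_id == user_id and r.purpose == purpose and r.granted
               for r in records)

# Example: one user consented to model training, another refused.
records = [ConsentRecord("u1", "model training", True),
           ConsentRecord("u2", "model training", False)]
```

Note that consent recorded for one purpose does not carry over to another: asking `may_process` about an undisclosed purpose such as "advertising" returns `False` even for a consenting user.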

  2. Data Minimization and Purpose Limitation

Organizations ought to use data minimization, acquiring only the information required for the stated objective, which is to protect personal data from unnecessary exposure.

    – Organizations are prohibited by the purpose limitation principle from using personal data for reasons other than those made clear during the data-acquiring process.
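The two principles above can be sketched together: a record is filtered down to the minimum fields needed for a disclosed purpose, and any purpose not disclosed at collection time is rejected outright. This is an illustrative Python sketch; the purposes, field names, and the `minimize` helper are invented for the example.

```python
# Hypothetical mapping of each disclosed purpose to the minimum fields it needs.
PURPOSE_FIELDS = {
    "account_creation": {"name", "email"},
    "model_training": {"usage_stats"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields required for the stated purpose (data minimization);
    reject purposes that were never disclosed (purpose limitation)."""
    if purpose not in PURPOSE_FIELDS:
        raise ValueError(f"purpose {purpose!r} was not disclosed at collection time")
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

# A record holding more fields than any single purpose needs.
record = {"name": "A. Author", "email": "a@example.com",
          "phone": "555-0100", "usage_stats": [3, 7]}
```

Under this sketch, the phone number never reaches either processing pipeline, because no disclosed purpose requires it.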

  3. Security Measures

Put strong security measures in place to prevent unauthorized access, disclosure, alteration, and destruction of personal data. A thorough security policy must include encryption, secure storage, and access controls, particularly when working with AI models that handle enormous volumes of sensitive data.
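One small, concrete piece of such a policy is access control. The sketch below shows a minimal role-based check in Python; the roles and permissions are hypothetical examples chosen for illustration, not prescribed by the DPDP Act.

```python
# Hypothetical roles in an organization handling personal data, each granted
# only the actions it genuinely needs (least privilege).
ROLE_PERMISSIONS = {
    "data_protection_officer": {"read", "correct", "delete"},
    "ml_engineer": {"read"},
    "support": set(),
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role has been explicitly granted it.
    Unknown roles get no permissions at all (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

A deny-by-default design like this means a misconfigured or unrecognized role fails closed rather than open, which matters when the data behind the check is personal data.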

  4. Transparency and Accountability

Organizations need to give people clear information about how their data will be used and be open and honest about how they utilize data.

 – By putting accountability measures in place, organizations can be held accountable for the data processing they perform, which promotes trust between data controllers and data subjects.

  5. Data Subject Rights

Individuals are given certain rights under the DPDPA over their data, including the ability to access, correct, remove, and port their data. Organizations that use AI technology ought to have systems implemented to promote these rights and give people authority over their data.

  6. Impact Assessment for AI Systems

Deploying AI systems that process personal data requires the completion of Data Protection Impact Assessments (DPIAs). To guarantee DPDPA compliance and lessen any negative effects on people’s privacy, assess the possible risks and repercussions connected with AI applications.

  7. Cross-Border Data Transfer

Ensure cross-border transfers of personal data adhere to the DPDPA’s regulations, which include getting express consent or putting in place authorized measures. Since AI systems frequently involve cross-border cooperation, compliance with data transfer regulations is essential.

  8. Anonymization and Pseudonymization

Use methods such as pseudonymization and anonymization to safeguard private information while allowing AI apps to use it. By reducing the chance of re-identification, these techniques protect the privacy of datasets used to train AI models.
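A common pseudonymization technique is a keyed hash: each direct identifier is replaced with an HMAC, so records can still be linked across a training dataset while the raw identifier stays hidden. A minimal Python sketch, assuming the key is stored separately from the dataset (the key value and function name are illustrative):

```python
import hmac
import hashlib

# The key must live outside the dataset; anyone holding it can re-link the
# pseudonyms, which is why pseudonymized data is generally still treated as
# personal data under regimes such as the GDPR.
SECRET_KEY = b"example-key-kept-outside-the-dataset"  # illustrative value only

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).
    The same input always yields the same pseudonym, so records remain
    joinable for AI training without exposing the raw identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()
```

Full anonymization goes further than this: it aims to make re-identification impossible even for the key holder, typically by aggregation or generalization rather than hashing.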

  9. Regular Audits and Compliance Verifications

Carry out routine audits to evaluate DPDP Act compliance and make sure AI systems adhere to data protection guidelines. Organizations can discover opportunities for improvement and adjust to changing privacy standards with the support of ongoing compliance inspections.

A thorough strategy is needed to protect personal data in the age of artificial intelligence under India’s Digital Personal Data Protection Act. Organizations must understand the nuances of the DPDP Act to protect people’s right to privacy. This includes getting informed permission, putting strong security measures in place, and guaranteeing transparency. Following these recommendations not only promotes a responsible data handling culture but also helps to establish trust between users and AI-powered organizations.

Addressing the convergence of data privacy, intellectual property rights, and technical advances is necessary to protect authors’ copyrights under the Digital Personal Data Protection Act (DPDPA) in the context of AI technology.

Copyright Framework and AI

Although the DPDP Act is primarily concerned with data protection, copyright issues are indirectly affected by it, particularly when it comes to personal data that is used in AI algorithms.

   – Copyrights are included in authors’ original works, and AI, which is frequently trained on a variety of datasets, may infringe upon these rights.

Defining Authorship and Ownership  

Give a clear explanation of authorship and ownership of content produced by artificial intelligence. Even though copyright isn’t specifically addressed in the DPDPA, the concepts of fair and lawful processing still apply.

Explicit Licensing Agreements

Writers ought to create clear licensing contracts outlining the uses of their works in artificial intelligence applications. To guarantee that AI technology respects the author’s intellectual property rights, specify the extent of use, time frame, and any restrictions.

Informed Consent for AI Utilization

Get writers’ permission before using their works to train AI models or for other data processing tasks. Openly convey the goal, scope, and possible consequences of AI use, respecting the rights and choices of content creators.

Copyright Metadata Integration

Include copyright information in digital content to make ownership and licensing conditions evident. As AI systems handle large datasets, this metadata becomes increasingly important to maintain usage rights and attribution.
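One lightweight way to carry such metadata is to bundle it with the content in a machine-readable form that AI pipelines can inspect before use. The Python sketch below uses JSON; the field names and the "no-ai-training" license value are illustrative, not a standard schema.

```python
import json

def attach_copyright_metadata(text: str, author: str, license_terms: str) -> str:
    """Bundle content with machine-readable copyright metadata so a
    downstream AI pipeline can check ownership and licensing before use.
    The schema here is invented for illustration."""
    return json.dumps({
        "content": text,
        "rights": {"author": author, "license": license_terms},
    })

def license_of(bundle: str) -> str:
    """Read back the licensing terms from a bundled document."""
    return json.loads(bundle)["rights"]["license"]

bundle = attach_copyright_metadata("Chapter one ...", "A. Author", "no-ai-training")
```

A dataset builder that respects such metadata can then filter out any bundle whose license forbids the intended use before it ever reaches training.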

Monitoring and Enforcement systems

Put in place systems for monitoring how copyrighted content is being used in AI applications. Enforcement tools help authors detect and handle cases of unapproved duplication or distribution made possible by AI systems.

Watermarking and Traceability 

Utilize digital watermarking strategies to incorporate distinct identifiers into protected content. This makes traceability easier and aids authors in locating instances in which their intellectual property is included in AI-generated work.
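As one simple illustration of the idea, a text can be watermarked by appending an owner identifier encoded as zero-width characters: invisible when displayed, but recoverable from copies. This is a minimal sketch with invented function names; it is trivially stripped by a determined adversary, and production watermarking schemes are far more robust.

```python
# Zero-width space and zero-width non-joiner: render as nothing, but survive
# copy-and-paste, so they can carry a hidden bit string through plain text.
ZERO, ONE = "\u200b", "\u200c"

def watermark(text: str, owner_id: str) -> str:
    """Append the owner ID, encoded one bit per zero-width character."""
    bits = "".join(f"{byte:08b}" for byte in owner_id.encode())
    return text + "".join(ONE if bit == "1" else ZERO for bit in bits)

def extract(marked: str) -> str:
    """Recover the owner ID from a watermarked copy (assumes the visible
    text itself contains no zero-width characters)."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in marked if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode()
```

The visible text is unchanged, yet a copy pasted into an AI-generated work would still carry the identifier, giving the author a traceable link back to the source.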

Fair Use Considerations

Understand fair use as it relates to AI-generated work under copyright law.

   – Create rules that strike a fair and equitable balance between the rights of writers being protected and the transformational character of AI.

Rights Management Platforms

To automate the monitoring and enforcement of copyright agreements, make use of rights management platforms that interface with AI systems. These platforms can simplify the process of guaranteeing adherence to authorship rights and licensing requirements.

Collaboration with AI Developers 

Encourage authors and AI developers to work together to create moral and legal guidelines for the use of material.

    – Promote discussions and collaborations that give writers’ copyrights top priority while utilizing AI developments.

Data Minimization Practices

Promote the use of data minimization techniques in AI training datasets to reduce the amount of extraneous copyrighted material included. This strategy aligns with the DPDPA’s principles of data privacy as well as with copyright protection.

Public Awareness and Education

Inform writers about how AI may affect copyright and what steps they can take to safeguard their creative works.

    – By educating the public, artists can become more knowledgeable decision-makers and actively influence copyright laws about artificial intelligence.

Protecting authors’ copyrights in the ever-changing world of AI technology under India’s Digital Personal Data Protection Act necessitates a proactive and comprehensive strategy. It is essential to match the protection of intellectual property rights with the values of data privacy and fair use. This can be achieved through the use of technology solutions such as watermarking and partnerships with AI developers, as well as through explicit licensing agreements. A harmonious balance between innovation and copyright protection guarantees that authors’ contributions are appreciated and valued in the digital age, even as AI continues to transform the digital landscape.

The case Anil Kapoor v. Simply Life India & Ors. can be presented as one of the most controversial cases concerning copyright in the face of AI technology, in which the actor’s interests and reputation suffered. The court took into consideration the individual’s public reputation, personal safety, and the importance of balance. In the view of the Hon’ble Judge, the infringement of individual rights can cause real harm, including harm to the right to privacy, the right to livelihood, the right to dignity in a social context, and reputational harm.

Suggestions

AI software and methods that acquire personal data are a sensitive and complicated topic involving ethical, security, and privacy concerns. Although artificial intelligence (AI) cannot intrinsically “steal” data, improper application or management can lead to unauthorized data access, breaches, or misuse. This part covers in detail the main ways such situations could develop and the safeguards put in place against them.

Unauthorized Access and Data Security 

Like any technology, AI software depends on data availability for functionality and training. When security measures are insufficient, personal information might be compromised by malicious individuals or even inadvertent flaws leading to unauthorized access.

AI Algorithm Misuse

Sensitive information may occasionally be extracted from AI systems through unintended exploits. Adversarial attacks and the manipulation of machine learning algorithms to uncover patterns in the data that were never meant for public consumption are two ways this might happen.

Lack of Ethical Supervision

Preventing data theft in AI development requires attention to ethical issues. When developers disregard ethical standards, AI applications run the risk of being used to gather and exploit personal data for a variety of purposes, such as identity theft, surveillance, or unauthorized profiling.

Insufficient Data Protection Measures

Inadequate authentication procedures, weak access controls, or insufficient encryption leave room for unauthorized people or organizations to access personal information that AI systems store or process.

Third-Party Data Sharing 

Working with third parties presents some dangers, particularly when managing huge datasets. There is a chance that personal information will be shared or accessed without the right authorization or protections if these organizations do not follow strict data protection regulations.

Insider Threats

People who work for companies or development teams and improperly utilize their access to personal data can also be involved in instances of data theft. Whether deliberate or inadvertent, this insider threat is a serious threat to data security.

Inadequate Consent Mechanisms

People should give their explicit and informed consent before AI systems process their data. However, users’ intentions or expectations regarding the use of their data may not be met if consent processes are insufficient or inaccurately stated.

Issues in Law and Regulation

Gaps in legal and regulatory monitoring might create opportunities for data theft. If unlawful use or weak security measures carry insufficient consequences, companies may be less inclined to prioritize data protection.

Dealing with Discrimination and Bias

Artificial intelligence (AI) systems may unintentionally reinforce discrimination and bias if they are not properly developed and supervised, particularly when handling sensitive personal data. This presents problems of unfair treatment or profiling based on stolen data, in addition to ethical concerns.

Best Practices and Preventive Measures

– Use strong encryption methods to protect data both in transit and at rest.
– Apply authentication procedures and access controls to prevent unauthorized access to personal information.
– Update and patch AI software frequently to fix bugs and close other security gaps.
– Carry out in-depth privacy impact assessments to identify and reduce risks related to AI applications.
– Be transparent in communications with users regarding the usage of their data, and keep data processing procedures accessible.
– Strengthen legislative frameworks and regulatory actions to guarantee accountability and consequences for data breaches.

Although artificial intelligence (AI) does not “steal” personal data per se, there are real and pressing concerns about abuse, unauthorized access, and insufficient security. A comprehensive strategy is needed to mitigate these risks, one that includes strict regulatory frameworks, strong data protection measures, and ethical concerns in AI development. In an increasingly AI-driven environment, maintaining the security of personal data requires both the ethical application of AI and continuous efforts to address security issues.

Conclusion

India might improve its AI data protection laws by harmonizing them with internationally accepted norms such as the California Consumer Privacy Act (CCPA) in the US and the General Data Protection Regulation (GDPR) in the EU. To do this, it is necessary to prioritize openness, user consent, and the right to be forgotten. Sturdy security measures must also be maintained, and both users and creators of AI should be held accountable. Furthermore, encouraging global collaboration in the field of data governance will help create a more thorough and efficient legal framework. Even as the world updates around them, human beings keep adapting; yet media users, however closely connected to technology they may be, do not seem active enough in recognizing its negative consequences.

K.L.A.Hasini Imalsha Weerasundara.

Faculty of Law, University of Colombo, Sri Lanka.