ALGORITHMIC AMPLIFICATION OF HATE SPEECH AND COMMUNAL VIOLENCE IN INDIA

Abstract

India’s rapid expansion of internet access and technology has fundamentally transformed how people connect, interact, and share ideas. It has, however, also made the spread of hate speech and crimes against vulnerable groups easier and more intense. This paper examines how the algorithmic amplification of divisive and inflammatory content on social media platforms such as Facebook, Instagram, YouTube, and X fuels the circulation of messages that incite communal conflict, resulting in hate crimes and other forms of social unrest. The analysis indicates that a large portion of hate speech incidents either originate from or are amplified by these algorithm-driven systems. Despite widespread incidents and reporting, measures to curb such amplification remain minimal, with only a fraction of harmful posts being removed by the platforms. This study critically analyses India’s legal frameworks concerning hate crimes and digital platforms’ responsibility, encompassing Sections 153A and 295A of the IPC (now Sections 196 and 299 of the Bharatiya Nyaya Sanhita, 2023), the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and significant Supreme Court rulings. The analysis stresses the substantial lacunae in legislation and enforcement concerning digital amplification. It also suggests actionable reforms, including algorithmic transparency, improved content moderation and recommendation practices by platforms, legislative clarity, and enhanced regulatory capacity. These measures aim to find a harmonious ground between the protection of expression and the necessity of preventing digitally intensified communal hatred, thereby supporting India’s democratic and diverse nature.

Keywords

Algorithmic amplification, hate speech, communal violence, social media, algorithm, content moderation, legal reform, India.

Introduction

India has witnessed unprecedented growth in internet and social media usage, with over 750 million internet users and more than 448 million active social media participants as of 2024, spread across platforms such as Facebook, Instagram, WhatsApp, and similar networking sites. This technological development has opened new paths for communicating and exchanging moral, socio-economic, and political views across borders. At the same time, such widespread exchange of views and opinions has enabled the rapid spread of hateful content that threatens social cohesion. Hate speech targeting minority communities, provoked by algorithmically amplified content, poses a serious challenge to the maintenance of public order. Social media platforms are structured to depend heavily on algorithmic curation systems that maximise user engagement, which means provocative and polarising content often spreads faster and wider than moderating voices, thus inciting communal tension. Over the years, India has witnessed an alarming rise in communal violence cases linked closely to the spread of hate speech online. This paper explores the technological and social nuances of content amplification within the Indian legal and regulatory frameworks, and the challenges platforms face in balancing freedom of expression with the need to curb hate speech.

Research Methodology

This study uses doctrinal legal analysis and empirical data review from sources including the India Hate Lab, Supreme Court rulings, and statutory frameworks such as IPC Sections 153A and 295A and the IT Rules, 2021. Supplementary research draws from Human Rights Watch, the Mozilla Foundation, and Indian Kanoon. Constraints in accessing algorithmic data, and moderation bias towards specific content, pose limitations.

Review of Literature

India’s legal environment reflects a balance between the well-established freedom of speech under Article 19(1)(a) and the restrictions necessary to maintain public order under Article 19(2).

The 267th Law Commission Report highlighted the inadequacy of current laws in addressing the complexity and scale of online hate speech, recommending a legislative upgrade.

Scholars have stressed the need to revise legal standards in light of digital developments, particularly algorithmic moderation, which the traditional framework has not adequately captured. Reports on platform governance have further revealed that algorithmic boosting aimed at maximising engagement may favour polarising content over balanced content.

Frances Haugen’s whistleblower accounts revealed that Meta internally recognised these issues but allowed commercial interests to delay decisive action.

Empirical investigations, such as those by the Centre for the Study of Organised Hate and Human Rights Watch, have demonstrated clear links between algorithmically amplified hate speech and the rise of communal violence, focusing on the political exploitation of social media for curated mobilisation, particularly during elections.

Additional research stresses the need for algorithmic transparency, moderation strategies sensitive to vulnerable cultural groups, and regulatory measures adapted to India’s sociolinguistic context. Recent scholarship also calls for an in-depth understanding of the social implications of algorithmic content governance, noting that digital platforms operate as “arbiters” of speech with societal consequences that require accountability commensurate with their influence. Cross-disciplinary analyses also scrutinise the opacity of these algorithmic systems (“black boxes”), demanding transparency to ensure effective legal and societal oversight.

The literature highlights that combating hate speech in India necessitates a complex strategy involving technological innovation, consolidated legal reform, and enhanced civil society engagement for the protection of democratic norms and minority rights.

Methods

Social media platforms use machine learning to determine what users see in their feeds. This often results in emotional or divisive posts rising to the top, because they generate more clicks and reactions. Internal research indicates that hate speech in particular draws stronger engagement, which encourages its spread. On YouTube, the recommendation system gradually leads users towards more and more extreme content, while WhatsApp’s group feature helps inciteful messages spread faster to larger audiences.
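The engagement-driven ranking described above can be illustrated with a minimal sketch. The scoring weights and post data below are invented for illustration only; real platform ranking systems are proprietary and far more complex.

```python
# Hypothetical illustration of engagement-based feed ranking.
# Weights and post data are invented; no real platform's algorithm is shown.

def engagement_score(post):
    # Shares and comments are weighted more heavily than clicks, so
    # provocative posts that provoke reactions tend to outrank measured ones.
    return (post["clicks"] * 1.0
            + post["shares"] * 3.0
            + post["comments"] * 2.0)

posts = [
    {"id": "measured_report", "clicks": 120, "shares": 5, "comments": 10},
    {"id": "divisive_rumour", "clicks": 90, "shares": 60, "comments": 80},
]

# Rank the feed by descending engagement score.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # the divisive post rises to the top
```

Even with fewer clicks, the divisive post outranks the measured one once reactions are weighted in, which is the dynamic the internal research cited above describes.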

Although these platforms have their own policies against hate speech, labelling what is harmful is fraught with difficulty and the policies remain largely ineffective: as of 2024, Facebook took action on only 1.6% of hate speech reports.

From a legal perspective, provisions such as IPC Sections 153A and 295A (now Sections 196 and 299 of the BNS) make hate speech a punishable offence, but they fall short of addressing how algorithms amplify such content. The IT Rules, 2021 do require platforms to take down harmful material swiftly, but vague enforcement and lenient penalties diminish their effectiveness.

The Supreme Court has called for more effective action to curb hate speech, but the fast and far-reaching nature of social media platforms makes timely responses difficult. This gap was exploited during recent elections, when provocative speeches were live-streamed and boosted by social media algorithms, leading to real-world communal violence.

Suggestions

Effective countermeasures require integrated reforms. Legislative amendments should explicitly bring algorithmic amplification within the scope of hate speech laws and impose transparency and accountability obligations on social media platforms. Platforms must undergo independent algorithmic audits, accompanied by comprehensive vernacular content moderation standards.

Regulators must ensure that transparency reports contain detailed hate speech metrics, including language and regional breakdowns. During periods of communal tension, algorithmic “circuit breakers” should suspend the amplification of content flagged as hate speech. Fast-track courts capable of adjudicating cases involving algorithmic content must be institutionalised, alongside judicial training.
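The “circuit breaker” suggestion can be sketched as a simple rule that withholds amplification once flagged content crosses a threshold during a declared tension period. The threshold, the tension-period flag, and the function names here are hypothetical assumptions, not any platform’s actual mechanism.

```python
# Hypothetical "circuit breaker" for algorithmic amplification.
# FLAG_THRESHOLD and the tension-period flag are illustrative assumptions.

FLAG_THRESHOLD = 10  # user flags per hour before amplification is suspended

def should_amplify(post, tension_period_active):
    """Return False when the circuit breaker trips for a flagged post."""
    if tension_period_active and post["flags_per_hour"] >= FLAG_THRESHOLD:
        return False  # breaker trips: stop recommending this post
    return True

post = {"id": "flagged_speech", "flags_per_hour": 25}
print(should_amplify(post, tension_period_active=True))   # False
print(should_amplify(post, tension_period_active=False))  # True
```

The design point is that the breaker is automatic and time-bound: amplification pauses during the declared tension period without requiring the content itself to be adjudicated first.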

Law enforcement agencies must have enhanced investigative capabilities while respecting due process of law. Investment should be made in AI-powered regional-language moderation tools, and civil society must organise initiatives to educate people about the risks of online hate speech.

Conclusion

The rise of algorithmic amplification of hate speech has fuelled communal violence, outpacing current legal solutions.

Indian democracy faces a risk of further polarisation due to the limited moderation efforts of social media platforms. Combined efforts from the legislature, judiciary, technology sector, and civil society, each attentive to India’s diverse cultural and linguistic context, are urgently needed to protect and maintain constitutional freedoms.

Author

Anayza Faiyaz, B.A.LL.B. (Third Year)

Barkatullah University, Bhopal, Department of Legal Studies and Research.