Author : Soumya Awasthi

Issue Brief | Published on March 28, 2025

Extremist Propaganda on Social Media: Impact, Challenges, and Countermeasures

Social media is becoming an increasingly useful tool for radicalisation and for the recruitment and mobilisation of individuals for extremist activities. India, with its unique socio-political landscape, is particularly susceptible to the misuse of social media. This brief explores the challenges posed by social media extremism in India and globally. It examines the psychological and societal impact of extremist content on platforms like X, the interplay between local and international propaganda, and the limitations of existing regulatory measures. The brief makes a case for a multi-pronged strategy to address these gaps, allowing India to counter the evolving threats of social media extremism effectively while balancing security needs with the right to freedom of expression.

Attribution:

Soumya Awasthi, “Extremist Propaganda on Social Media: Impact, Challenges, and Countermeasures,” ORF Issue Brief No. 790, March 2025, Observer Research Foundation.

Introduction

Propaganda is a powerful tool for influencing public opinion and normalising violence. For extremists, a primary propaganda strategy is the exploitation of individuals’ vulnerabilities—such as emotional instability, social isolation, dissatisfaction with government policies, and the desire for belonging or respect—to create an “us vs. them” mentality, often using psychological warfare to dehumanise perceived adversaries and justify violence.[1]

In recent years, extremist actors have increasingly used social media platforms, which are low-cost, fast, decentralised, and globally connected, to spread their ideologies, recruit followers, and foster support for their activities. Terrorist groups turn to the internet for recruitment and for the dissemination of violent content through hashtags, videos, images, and open letters. Though social media is an enabler rather than a primary driver of violent radicalisation, its role in reinforcing extremist ideologies, identifying potential recruits, and fostering engagement should not be underestimated.

Each platform offers unique advantages to extremist groups. Facebook, for example, acts as a decentralised hub for sharing information; X allows for rapid communication and engagement with global audiences; and YouTube is the preferred platform for video propaganda that is often tailored to resonate with specific cultural and linguistic audiences.

As of 2024, there were around 5.35 billion internet users worldwide, each generating an estimated 15.87 terabytes (TB) of data daily; X alone carries roughly 500 million posts a day.[2] Facebook had the highest number of visitors globally in 2023, producing around 4,000 TB of data daily that year. The same study found that WhatsApp users share the most images, with 6.9 billion photos exchanged between users daily, followed by Snapchat, with 3.8 billion.[3]

Terror groups can easily reach out to registered users on social media without having to build their audiences. Extremist groups exploit social media’s global reach, anonymity, and interactive capabilities by creating emotionally charged content. For radicalisation and recruitment, terror groups disseminate violent content targeting vulnerable youth, such as propaganda videos that show military training and other battlefield activities. Extremists also use video games to embed their narratives, modifying popular titles to reflect jihadist themes.

They use visual and audiovisual mediums, such as videos of attacks or symbolic imagery, to trigger powerful psychological responses and incite violent behaviour. Vulnerable individuals, particularly the youth, are drawn to these narratives, which aim to provide a sense of purpose and identity.

Understanding the methods and strategies that extremist groups use on social media is essential to countering their influence. By examining the psychological and social dynamics at play, policymakers and stakeholders can develop effective counter-narratives and interventions to address the root causes of radicalisation and prevent its spread. This will ensure that the digital space enables positive engagement rather than divisiveness, conflict, and violence.

This brief uses interviews and analyses of research material to identify propaganda strategies used by terror groups to rationalise their acts of terror and to recruit and radicalise members. Approximately 100 such images, slogans, hashtags, and social media posts in English, Hindi, and Urdu were assessed. According to the analysis, extremist groups target three kinds of audiences: the general public, for gaining sympathy and justifying their actions; the enemy group, usually the government and its various agencies; and existing members of the group to keep them motivated.

Table 1 shows the results of interviewing practitioners from the paramilitary and the strategic communications and cybersecurity sectors.

Table 1: Propaganda Strategies Used by Extremist/Terror Groups

Audience | Strategies
Members of the Group | Identity Fusion; Martyrdom Narrative; Glorification; Victory Narrative; Victimisation
Enemy Audience | Slurring; Demonising; Defamation; Disinformation; Conspiracy Against Them; Instilling Threat; Challenging Their Strength
Universal Audience | Us vs. Them; Victimhood; Social Welfare

Source: Author’s own, based on field work[a]

India’s Challenge

India’s diverse cultural and political fabric makes it vulnerable to exploitation by these groups. Platforms like Facebook, WhatsApp, Telegram, and X are extensively used for spreading propaganda, mobilising individuals, and fuelling communal tensions.

For instance, extremist groups capitalised on the COVID-19 pandemic, framing it as divine retribution against non-believers[4] and leveraging the increased screen time as a result of the lockdowns to recruit and radicalise individuals.

Indian security agencies have attempted to counter these threats by monitoring social media and collaborating with tech companies to flag harmful content. However, the decentralised and rapidly evolving nature of digital platforms requires more robust policies and proactive measures. Understanding how these platforms are exploited is crucial to addressing the broader threats. Extremist groups rely on the susceptibility of audiences to unverified information, using repeated exposure to propaganda to shape opinions and deepen ideological divides.

Pakistan-backed groups such as the Kashmir Tigers (KT),[b] The Resistance Front (TRF), and the People’s Anti-Fascist Front (PAFF), as well as groups like the Popular Front of India and the Islamic State (IS), have used social media for propaganda.

TRF, a proxy of Lashkar-e-Taiba, emerged in 2019 and projects itself as an “indigenous resistance movement”, using victory narratives to rally sympathisers and instil fear among adversaries. The group’s attacks, such as the Anantnag (2023) and Ganderbal (2024) incidents,[c] are celebrated in propaganda posts that glorify martyrs and emphasise the group’s strength.[5] The group also uses slurring as part of ‘psychological operations’ to undermine the morale of the Indian security forces and foster a sense of shared identity among members of the terror groups.

Similarly, the PAFF, which emerged in 2020, operates as a proxy for Jaish-e-Mohammed (JeM). PAFF publicises its attacks and spreads misinformation to generate public sympathy and degrade the state and its forces. For example, during the Poonch attack, PAFF circulated messages and images designed to provoke a response from security forces and amplify its propaganda.

These groups demonstrate a calculated and adaptive use of social media to advance their objectives. By understanding their strategies and countering their influence, India can better address the challenges posed by extremist propaganda in the digital age.

Global Overview

The use of social media and digital platforms by extremist and terrorist groups reflects a sophisticated adaptation to technology, enabling recruitment, propaganda dissemination, and global coordination. Hamas[d] utilised platforms like Telegram, X, and Instagram during the October 2023 Israel-Gaza conflict to distribute graphic content and sensationalist posts.[6] By January 2024, Hamas had escalated its digital presence, expanding onto platforms such as TikTok and collaborating with Hezbollah to amplify anti-Israel narratives and glorify its own militancy through coordinated messaging strategies involving humour, memes, and emotionally charged content.[7]

ISIS also leverages encrypted platforms like Telegram and WhatsApp. Throughout 2023, the group expanded its recruitment campaigns into Southeast Asia by exploiting local political and economic grievances. In 2024, ISIS activities extended into South Asia and Africa,[8] targeting vulnerable populations in regions like India, Bangladesh, Nigeria, and Somalia through emotionally charged narratives spread through secure communication channels.

White supremacist and far-right groups[9] in the West, such as the Identitarian Movement (Europe), the Proud Boys (US and Canada), and the Patriot Front (US), mirrored these tactics, using platforms such as Telegram and Gab for recruitment and coordination. These groups exploited Middle Eastern conflicts to propagate narratives about a “clash of civilisations”, fuelling anti-immigrant and anti-Muslim sentiments. The Islamic State’s regional affiliates also adapted their digital strategy after the Taliban regained power in Afghanistan, publishing online magazines like Voice of Khurasan and Voice of Hind and using platforms like X and WhatsApp to build legitimacy, disseminate governance content, and suppress dissenting narratives. Crowdfunding became another key strategy, with Hamas and Hezbollah exploiting online platforms to circulate financial appeals through cryptocurrency.[10]

Psychological warfare has also become a prominent tactic. During the Gaza conflict in 2023-24, Hamas spread social media posts that included manipulated footage aimed at instilling fear and confusion among Israeli civilians.[11] Similarly, Iran-backed militias have disseminated anti-Western sentiments on social media during the ongoing Russia-Ukraine conflict.[12] Encrypted platforms like Telegram also serve as hubs for extremist activity, hosting virtual “classes” on operational security and propaganda dissemination.

The widespread exploitation of digital platforms has made them indispensable tools for extremist agendas, underscoring the critical need for robust countermeasures.

Psychological Impact

Social media extremism operates on a foundation of emotional and psychological manipulation. At the heart of this manipulation is the exploitation of human emotions, which often lead to the escalation of violent tendencies. Extremist narratives tap into deep-seated grievances, cultivating anger and resentment among targeted individuals or communities. These narratives usually frame certain groups or institutions as oppressors, creating a perceived justification for hostility and aggression. Over time, exposure to such content desensitises individuals to violence, normalising it as a legitimate response to perceived injustices. The desensitisation is coupled with a sense of empowerment derived from aggression, as individuals feel validated through the approval and encouragement of like-minded online communities.

Alienation and anti-state sentiments are exacerbated by extremist propaganda, which fosters a sense of victimhood in vulnerable individuals. By portraying communities as systematically oppressed or marginalised, these narratives reinforce an “us vs. them” mentality that pits groups against one another and the state. This polarisation feeds into the formation of radical identities marked by loyalty to extremist ideologies.

This psychological manipulation extends beyond individuals to target state institutions, particularly security agencies. Extremist groups deploy strategies of humiliation and psychological warfare against law enforcement, undermining their authority and credibility. Public shaming and online abuse of security officials are commonplace, often accompanied by victory narratives that glorify attacks on state agencies. Conspiracy theories further erode trust, painting security forces as corrupt, ineffective, or oppressive. This barrage of disinformation weakens public confidence in the institutions responsible for maintaining law and order.[13]

Among security officials, operational stress and paranoia become more widespread in an environment rife with misinformation and hostility. The vilification and scrutiny strain their capacity to act effectively while also impacting their mental health. Propaganda-induced mistrust also creates barriers between law enforcement and the public. Extremist narratives also often target the families and social standing of security officials.

The cumulative effect is a spiralling of violence and distrust. Online extremism frequently translates into street-level violence,[14] which weakens the capacity of security forces to respond and further polarises society. Communities become divided, with mutual distrust and hostility replacing dialogue and cooperation. The resulting fractures threaten the social fabric, undermining efforts to build cohesive, peaceful societies.

Counter-extremism strategies must prioritise individuals’ emotional and cognitive well-being, promoting inclusion, resilience, and critical thinking. Strengthening community ties and fostering trust between the public and security agencies are equally vital. Only by tackling the psychological roots of extremism can society hope to break the cycle of manipulation, violence, and division perpetuated by extremist groups online.

India’s Countermeasures

The Government of India has implemented a comprehensive approach to address the misuse of social media platforms by extremist groups for radicalisation, propaganda, and recruitment. Recognising the influence and potential of these platforms to disseminate harmful content rapidly, Indian authorities have introduced a combination of legislative frameworks, technological solutions, and collaborations with social media companies to mitigate the threat. In 2024, the Ministry of Electronics and Information Technology (MeitY) used its mandate under Section 69A of the Information Technology Act to block websites, URLs, WhatsApp accounts, Instagram, Facebook, and YouTube channels related to, linked to, or owned by extremists and terrorist groups, including KT, Kashmir Fights, Resistance Media, TRF, and Jhelum Media.[15]

Table 2: Number of Sites/URLs and Accounts Blocked by MeitY Since 2022

Year | Social Media Platform | Number of Accounts/URLs
2022 | WhatsApp | 6,775
2022 | X | 3,417
2022 | Facebook | 1,743
2023 | WhatsApp | 12,483
2023 | X | 3,772
2023 | Facebook | 6,074
2024 | WhatsApp | 8,821
2024 | X | 2,950
2024 | Facebook | 3,159
2024 | Khalistan-linked URLs | 10,500
2024 | Popular Front of India URLs | 2,100
2024 | YouTube (extremist content) | 2,211
2024 | Instagram accounts | 2,198
2024 | Telegram accounts | 225
2024 | WhatsApp (extremist content) | 138

Source: Author’s own

Legal Instruments

India has enacted legal frameworks to regulate social media content. The Information Technology (IT) Rules 2021 empower law-enforcement agencies to take swift action against unlawful content. These rules require flagged material that violates laws relating to public order, security, or morality to be removed within 24 hours of notification by the authorities. This provision is critical in combating extremist propaganda that could incite violence or spread misinformation.[16] Section 69A of the IT Act also grants the government authority to block access to content or entire platforms that threaten national security, public order, or the country’s sovereignty. This has proven effective in curbing the spread of harmful material, with around 28,000 URLs blocked in 2024.[17]
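To make the 24-hour obligation concrete, the minimal sketch below (in Python) computes a removal deadline from a hypothetical notification timestamp and checks compliance; the timestamps and helper functions are illustrative and do not reflect any platform's actual compliance tooling.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the 24-hour takedown window under the IT Rules 2021:
# given the time an authority notified a platform about flagged content,
# compute the removal deadline and check whether the removal was compliant.
TAKEDOWN_WINDOW = timedelta(hours=24)

def removal_deadline(notified_at: datetime) -> datetime:
    """Deadline by which flagged content must be taken down."""
    return notified_at + TAKEDOWN_WINDOW

def is_compliant(notified_at: datetime, removed_at: datetime) -> bool:
    """True if the content was removed within 24 hours of notification."""
    return removed_at <= removal_deadline(notified_at)

# Hypothetical example: flagged at 10:00 on 1 March, removed at 09:00 the next day.
notified = datetime(2025, 3, 1, 10, 0)
removed = datetime(2025, 3, 2, 9, 0)
print(removal_deadline(notified))       # 2025-03-02 10:00:00
print(is_compliant(notified, removed))  # True
```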

Monitoring and Intelligence Gathering

India has enhanced its monitoring and intelligence capabilities to address the use of encrypted platforms by extremist groups. The Indian Cybercrime Coordination Centre (I4C) works with state-level cybercrime cells to track suspicious accounts and identify patterns of harmful online activity.[18] Mechanisms have been developed to provide specialised training, monthly workshops, and information-sharing to detect extremist content and trace networks operating on encrypted platforms such as Telegram, WhatsApp, and Signal.[19] Although end-to-end encryption presents challenges in accessing communications, intelligence agencies employ advanced analytical tools to track digital footprints, uncover group activities, and pre-empt potential threats. This coordinated effort between central and state authorities ensures a comprehensive response to the misuse of digital platforms.
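As an illustration of the kind of pattern analysis described above, the sketch below builds a simple co-sharing graph to surface clusters of accounts that repeatedly amplify the same URLs. The data, account names, and threshold are hypothetical; this is a minimal sketch, not a description of any agency's actual tooling.

```python
import networkx as nx
from itertools import combinations
from collections import defaultdict

# Hypothetical observations: (account, url_shared). Accounts that repeatedly
# share the same URLs may indicate coordinated amplification of propaganda.
posts = [
    ("acct_a", "hxxp://example/prop1"), ("acct_b", "hxxp://example/prop1"),
    ("acct_c", "hxxp://example/prop1"), ("acct_a", "hxxp://example/prop2"),
    ("acct_b", "hxxp://example/prop2"), ("acct_d", "hxxp://example/other"),
]

# Group accounts by the URL they shared.
by_url = defaultdict(set)
for account, url in posts:
    by_url[url].add(account)

# Build a graph: an edge links two accounts, weighted by how many URLs they co-shared.
graph = nx.Graph()
for accounts in by_url.values():
    for a, b in combinations(sorted(accounts), 2):
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# Flag clusters of accounts connected by at least two co-shared URLs (threshold is arbitrary).
suspicious_edges = [(a, b) for a, b, d in graph.edges(data=True) if d["weight"] >= 2]
for cluster in nx.connected_components(nx.Graph(suspicious_edges)):
    print("possible coordinated cluster:", sorted(cluster))
```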

Mutual Legal Assistance Treaties

The Indian government has entered Mutual Legal Assistance Treaties (MLATs) with various countries[e] to facilitate cross-border investigations. These treaties provide a structured framework for cooperation between nations to address crimes involving digital platforms, many of which are headquartered outside India. Through MLATs, Indian authorities can request access to user data, trace the origins of extremist content hosted on foreign servers, and access encrypted communications when necessary. Such international collaboration is essential for overcoming jurisdictional challenges posed by the global operations of social media platforms, ensuring that offenders cannot take advantage of the constraints of geographical boundaries to evade accountability.

These measures have helped India’s efforts at regulating social media and addressing its misuse for extremist purposes. However, the rapidly evolving tactics of extremist groups and technological innovations employed by social media platforms demand ongoing innovation and adaptability. Balancing the need for security with preserving democratic principles and individual privacy is an enduring challenge in the effort to combat online radicalisation.

Current Policy Gaps

Despite India’s legal framework aimed at regulating social media and curbing online extremism, the system has shortcomings in effectively preventing the misuse of these platforms.

Ineffectiveness of Internet Shutdowns in Jammu & Kashmir

Following the abrogation of Article 370 in August 2019, the Indian government imposed a prolonged internet shutdown in Jammu & Kashmir, intended to curb the spread of extremist content and maintain public order. The measure failed: extremist groups switched to alternative communication methods, such as publishing and circulating pamphlets and newsletters, and the shutdowns served mainly to alienate the local population.[20]

Legal Challenges to Content Regulation

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2023, were introduced to enhance social media content regulation.[21] These rules empower the government to direct social media platforms to remove content deemed “fake, false, or misleading” by a government-established fact-checking unit.[22] This approach has raised concerns about potential overreach and censorship, leading to legal challenges.[f]

Proliferation of Hate Speech and Misinformation

Despite existing laws, social media platforms continue to be conduits for hate speech and misinformation. Unchecked online content exacerbates communal tensions.[23]

Misuse of Social Media During Elections

Social media has been used during electoral periods to spread disinformation and manipulate public opinion. For instance, during the 2024 Indian general elections, deepfake videos featuring political figures were circulated to mislead voters: AI was used to create a fake video of Prime Minister Narendra Modi dancing and misleading videos of Bollywood celebrities using Hindu supremacist language.[24] Despite legal frameworks, the rapid dissemination of such content has posed challenges to regulatory authorities.[25]

Inadequate Response to Online Extremism

The limitations of the legal framework are evident in the persistence of online extremism. Despite laws to curb hate speech and actions promoting enmity between groups, online platforms continue to host extremist content, indicating enforcement gaps.[26]

These cases illustrate that, while India has established legal measures to counter the misuse of social media, enforcement challenges and the dynamic nature of digital platforms often undermine their effectiveness. Addressing these issues requires robust legislation, adaptive enforcement strategies, and collaboration with social media companies to mitigate the spread of extremist content.

Which Way Forward?

A multi-pronged approach is essential to address the multifaceted challenges posed by the misuse of social media platforms for radicalisation, propaganda, and extremist activities. This involves strengthening existing regulatory frameworks, enhancing technological capabilities, and fostering awareness among the population. Below are policy recommendations that aim to counter these issues effectively.

Regulatory Reforms

One key issue is the lack of jurisdiction over platforms like WhatsApp, which do not host their servers in India and operate under United States laws. The Indian government should introduce regulations under Section 87 of the IT Act to ensure that intermediaries adhere to Indian legal requirements. Strengthening existing provisions like Section 79 and Section 85 of the IT Act is necessary to hold platforms accountable for extremist content. Additionally, a specific law targeting the spread of misinformation by organisations and individuals should be enacted, with precise definitions of cyber offences, penalties for non-compliance, and a system for regularly evaluating these policies to ensure effective enforcement.

National Cyber Doctrine

India must develop a comprehensive National Cyber Doctrine that includes a clear definition of cybercrimes, categories of cybercriminals, and legal repercussions for offenders. Such a doctrine should also outline a strategic framework for planning, training, and executing cybersecurity initiatives. By integrating various stakeholders, including law enforcement, intelligence agencies, and private organisations, the doctrine can enable a cohesive approach to countering cyber threats.

Tri-Service Military Cyber Intelligence Team

Establishing a joint cyber-intelligence cell across all military commands is crucial for bolstering cyber defences against state-sponsored propaganda and cyber warfare. The team should focus on timely and precise actions against hostile activities, including monitoring and countering adversaries’ misuse of social media.

Digital Literacy Initiatives

With increasing social media usage among young people, early education on identifying and resisting disinformation is vital. A nationwide digital literacy drive should be launched, integrating modules on cyber safety into school curricula. Teaching students how to disengage from harmful online interactions and adopt a “no-comment” policy for contentious content can be a first step toward sanitising digital platforms.

Content Analyses and Building Counter-Narratives

State-sponsored research into flagged content on platforms like YouTube and Facebook can help identify the rhetoric used by extremist groups. Such analyses should guide the development of effective counter-narratives to neutralise harmful ideologies. YouTube’s technology that redirects users vulnerable to extremist messaging towards curated videos that counter these narratives is an example of effective intervention.[27] India can adopt similar models to combat online extremism.
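The sketch below illustrates the underlying idea of such redirect-style interventions: queries matching terms associated with extremist content are answered with curated counter-narrative material instead of the usual results. The keyword list and playlist identifiers are placeholders, not drawn from YouTube's actual implementation.

```python
# Minimal sketch of a redirect-style intervention: flagged search terms are mapped
# to curated counter-narrative playlists; all names here are hypothetical.
REDIRECT_RULES = {
    "recruitment": "counter_narrative_playlist_01",
    "martyrdom": "counter_narrative_playlist_02",
}

def choose_results(query: str, default_results: list[str]) -> list[str]:
    """Return curated counter-narrative content for flagged queries, else the normal results."""
    tokens = query.lower().split()
    for keyword, playlist in REDIRECT_RULES.items():
        if keyword in tokens:
            return [playlist]
    return default_results

print(choose_results("join recruitment video", ["regular_video_1", "regular_video_2"]))
# ['counter_narrative_playlist_01']
```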

Digital Fingerprints and Cross-Platform Blocking

Another plausible solution is using digital “fingerprints”, or hashes, to identify and remove harmful content. This technology ensures that, once content such as terrorist imagery or recruitment videos is removed from one platform, it cannot resurface on others within the same cooperative network. Indian authorities should collaborate with international organisations and platforms to adopt this technology.
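A minimal sketch of the hashing idea follows, assuming partner platforms exchange fingerprints of removed media. For simplicity it uses an exact cryptographic hash (SHA-256); industry hash-sharing arrangements typically rely on perceptual hashes that also catch slightly altered copies.

```python
import hashlib

# Hypothetical shared database of fingerprints of content already removed by partner platforms.
shared_hash_db: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register_removed(data: bytes) -> None:
    """Add removed content's fingerprint to the shared database."""
    shared_hash_db.add(fingerprint(data))

def is_known_extremist_content(data: bytes) -> bool:
    """Check an incoming upload against fingerprints shared across the network."""
    return fingerprint(data) in shared_hash_db

# One platform removes a propaganda video and registers its fingerprint ...
register_removed(b"<bytes of removed propaganda video>")
# ... another platform can then block the same file from being re-uploaded.
print(is_known_extremist_content(b"<bytes of removed propaganda video>"))  # True
```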

Evaluation and Improvement of Regulatory Policies

Regularly evaluating existing regulatory measures is necessary to adapt to evolving threats and challenges. Building a feedback loop between stakeholders—governments, tech companies, and civil society—can improve the effectiveness of policies over time.

Conclusion

Countering the misuse of social media platforms for extremist propaganda and radicalisation requires an evolving approach. While India has progressed through measures such as the IT Rules 2021 and Section 69A of the IT Act, gaps persist in adapting to emerging technologies, ensuring swift content removal, and navigating the complexities of encrypted communications.

Strengthening the IT Act to include provisions for AI-driven propaganda and deepfakes, introducing penalties for non-compliance, and enhancing cyber law enforcement capacities are crucial. A National Cyber Doctrine can serve as a guiding framework for involving various stakeholders and addressing threats comprehensively. Establishing joint military cyber cells and state-sponsored research into extremist content can enhance India’s ability to counter threats at both domestic and international levels.

Promoting digital literacy, particularly among young people, will empower individuals to resist harmful narratives and become responsible digital citizens. Incorporating advanced technologies like digital fingerprints for cross-platform blocking and creating tailored counter-narratives will strengthen the defence against online extremism.

By balancing security needs with the preservation of democratic freedoms, India can build a resilient digital ecosystem that safeguards citizens from the threats posed by online extremism while fostering a safe and inclusive digital space for all.

Endnotes

[a] The author interviewed a group of Indian security officials in November 2024.

[b] KT, which is linked to Jaish-e-Mohammed, emerged after the abrogation of Article 370 in India in 2019. They have carried out multiple attacks, including the Kathua attack in 2024, and use slurring and conspiracy narratives to mobilise followers and gain media attention. See: https://www.firstpost.com/explainers/doda-encounter-kashmir-tigers-terror-group-jaish-e-mohammed-security-personnel-attacks-jammu-kashmir-13793546.html. In the case of the Kathua attack, they created a perception of power and dominance, claiming moral superiority by emphasising that their targets are only security forces, not civilians.

[c] These were attacks carried out by The Resistance Front and the Kashmir Tigers in 2023 and 2024.

[d] The United Nations has not designated Hamas as a terrorist organisation. A number of countries and blocs have, including the US, the UK, Japan, Australia, and the EU.

[e] As of 2019, the Government of India has signed MLATs with 42 countries.

[f] Comedian Kunal Kamra filed a petition in the Bombay High Court, arguing that these provisions could suppress free speech. See: https://internetfreedom.in/in-kunal-kamras-petition-in-the-bombay-high-court-the-government-undertakes-not-to-notify-its-fact-check-unit/

[1] A. Schmid, “Radicalisation, De-Radicalisation, Counter-Radicalisation: A Conceptual Discussion and Literature Review,” International Centre for Counter-Terrorism, https://icct.nl/publication/radicalisation-de-radicalisation-counter-radicalisation-conceptual-discussion-and.

[2] Edge Delta, “Breaking Down the Numbers: How much Data Does the World Create Daily in 2024?,” Edge Delta, March 11, 2024, https://edgedelta.com/company/blog/how-much-data-is-created-per-day

[3] Edge Delta, “Breaking Down the Numbers: How much Data Does the World Create Daily in 2024?”

[4] Soumya Awasthi, “Fragile State of Africa, Non-State Actors Annual Assessment,” Journal of Family Medicine and Health Care 7, no. 1, March 2021, https://www.sciencepublishinggroup.com/article/10.11648/j.jfmhc.20210701.11

[5] Sudeep Lavania, “Why Pakistan-Backed The Resistance Front Has Become Biggest Headache of Security Forces in Kashmir,” India Today, September 14, 2023, https://www.indiatoday.in/india/story/the-resistance-front-trf-let-lashkar-e-taiba-front-most-active-anantnag-encounter-2435627-2023-09-14.

[6] Kevin Collier, “Hamas Videos Spread Across Some Social Media Apps,” NBC News, October 14, 2023, https://www.nbcnews.com/tech/internet/hamas-videos-spread-social-media-apps-rcna120128

[7] Edmund Fitton-Brown, “The Global Jihadi Terror Threat in September 2024,” Combating Terrorism Center Sentinel 17, no. 8, September 2024, https://ctc.westpoint.edu/commentary-the-global-jihadi-terror-threat-in-september-2024/.

[8] Fitton-Brown, “The Global Jihadi Terror Threat in September 2024”

[9] Edma Ajanovic et al., “Spaces of Right Wing Populism and Anti-Muslim Racism in Austria. Identitarian Movement, Civil Initiatives and the Fight against ‘Islamisation’,” Czech Journal of Political Science, no. 2, 2016, https://czechpolsci.eu/article/view/34915/29805

[10] S. Farber and S.A. Yehezkel, “Financial Extremism: The Dark Side of Crowdfunding and Terrorism,” Terrorism and Political Violence (2024): 1–20, doi: 10.1080/09546553.2024.2362665.

[11]  “Misinformation About the Israel-Hamas War is Flooding Social Media,” AP News, October 30, 2023, https://apnews.com/article/israel-hamas-gaza-misinformation-fact-check-e58f9ab8696309305c3ea2bfb269258e

[12] Benjamin Jensen and Divya Ramjee, “Beyond Bullets and Bombs: The Rising Tide of Information War in International Affairs,” Center for Strategic and International Studies, December 20, 2023, https://www.csis.org/analysis/beyond-bullets-and-bombs-rising-tide-information-war-international-affairs

[13] Johannes Baldauf et al., “Hate Speech and Radicalisation Online,” Institute for Strategic Dialogue, 2019, https://www.isdglobal.org/wp-content/uploads/2019/06/ISD-Hate-Speech-and-Radicalisation-Online-English-Draft-2.pdf

[14] R. Scrivens et al., “The Role of the Internet in Facilitating Violent Extremism and Terrorism: Suggestions for Progressing Research,” in Handbook of International Cybercrime and Cyberdeviance, ed. T. Holt and A. Bossler (Palgrave Macmillan, 2020), https://doi.org/10.1007/978-3-319-78440-3_61

[15] “India Blocks 10,500 Social Media URLs Promoting Khalistan Referendum in Three Years,” Greater Kashmir, 2024, https://www.greaterkashmir.com/latest-news/india-blocks-10500-social-media-urls-promoting-khalistan-referendum-in-three-years/.

[16] Ministry of Electronics and IT, Government of India, https://pib.gov.in/PressReleseDetailm.aspx?PRID=1700749&reg=3&lang=1

[17] Rimjhim Singh, “Govt Blocks Record 28,000 URLs in 2024; Facebook, X Face Maximum Takedowns,” Business Standard, December 28, 2024, https://www.business-standard.com/technology/tech-news/govt-blocks-record-28-000-urls-in-2024-facebook-x-face-maximum-takedowns-124120300714_1.html

[18] Indian Cybercrime Coordination Centre (I4C), Government of India, “About I4C,” https://i4c.mha.gov.in/about.aspx

[19] Indian Cybercrime Coordination Centre (I4C), Government of India, “Major Initiatives,” https://i4c.mha.gov.in/initiative.aspx

[20] Khalid Shah, “How the World’s Longest Internet Shutdown Has Failed to Counter Extremism in Kashmir,” Observer Research Foundation, August 22, 2020, https://www.orfonline.org/expert-speak/how-the-worlds-longest-internet-shutdown-has-failed-to-counter-extremism-in-kashmir

[21] “Draconian Rules: On the Impact of the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2023,” The Hindu, April 10, 2023, https://www.thehindu.com/opinion/editorial/draconian-rules-the-hindu-editorial-on-the-impact-of-the-it-intermediary-guidelines-and-digital-media-ethics-code-amendment-rules-2023/article66717811.ece

[22] “Draconian Rules: On the Impact of the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2023”

[23] Niam Yaraghi, “How Should Social Media Platforms Combat Misinformation and Hate Speech?,” Commentary – Brookings, April 9, 2019, https://www.brookings.edu/articles/how-should-social-media-platforms-combat-misinformation-and-hate-speech/

[24] Anadi, “Deep Fakes, Deeper Impacts: AI’s Role in the 2024 Indian General Elections and Beyond,” Global Network on Extremism and Technology, September 11, 2024, https://gnet-research.org/2024/09/11/deep-fakes-deeper-impacts-ais-role-in-the-2024-indian-general-election-and-beyond/

[25] Tom Wheeler, “The Three Challenges of AI Regulation,” Brookings, June 15, 2023, https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/

[26] Archit Lohani, “Countering Disinformation and Hate Speech Online: Regulation and User Behavioural Change,” Observer Research Foundation, January 25, 2021, https://www.orfonline.org/research/countering-disinformation-and-hate-speech-online

[27] Todd C. Helmus and Kurt Klein, “Assessing Outcomes of Online Campaigns Countering Violent Extremism: A Case Study of the Redirect Method,” RAND Corporation, 2018, https://www.rand.org/content/dam/rand/pubs/research_reports/RR2800/RR2813/RAND_RR2813.pdf

The views expressed above belong to the author(s).