India’s Bangladesh Crisis: Defending Against AI Weaponization and Digital Ghosts in South Asia
Article by Sudhanshu Kumar
A Digital Phantom Haunts South Asia
History teaches us that information warfare evolves with technology, but its objectives remain constant: to confuse, divide, and delegitimise. What has changed is that the time between lies and violence has collapsed from months to weeks to hours. The methods have changed too: AI-generated voices, deepfake videos, and algorithmic amplification are the new instruments, and the technology is democratising their use. We are fighting digital ghosts today because our enemies have learned to exploit the gap between the speed of synthetic lies and the speed of human verification. It is unfortunate to acknowledge that this is the new reality in South Asia. We are fighting a war against enemies we cannot see, using weapons we did not deploy, in a conflict that exists primarily in the digital imagination of our neighbours.
I say India is fighting a new kind of war because this war is not with a single nation-state. Not with Pakistan’s artillery across the Line of Control, nor with China’s transgressions in Ladakh, but with invisible enemies wielding invisible weapons: AI-generated voices, deepfake videos, and algorithmic amplification. In this article, I will decode how technology has turned our diplomatic crisis with Bangladesh into a digital battlefield where truth is the first casualty, and where India is learning, painfully, that you cannot fact-check your way out of a mob’s fury.
The Making of a Digital Crisis
To understand how we arrived at this moment, we must rewind to August 2024, when Sheikh Hasina fled Dhaka following massive student-led protests. At that time, India, bound by decades of alliance with the Awami League and genuine humanitarian concerns, offered her refuge. It was a decision rooted in realpolitik and regional stability. But in the age of synthetic media, hospitality became hostage-taking.
The foundation of what intelligence analysts now call the “Zombie Regime” was laid in July 2025, when the BBC Eye investigation authenticated leaked audio recordings of Hasina ordering a violent crackdown on protesters during her final weeks in power. The phrase “Shoot wherever you find them” became seared into Bangladesh’s collective consciousness.
But truth, in the digital age, is merely the raw material for deception. Once the Bangladeshi public had accepted that Hasina could and did give lethal orders via phone, the stage was set for a more insidious operation: the information ecosystem was flooded with AI-generated clips that built upon this established baseline of credibility.
By September 2025, deepfake videos of Hasina began circulating, showing her allegedly “addressing the nation” about detention camps. The quality was poor, and deliberately so: low-fidelity audio clips on WhatsApp carry an aesthetic of authenticity. They feel like intercepted intelligence, like forbidden knowledge shared among conspirators. These weren’t meant to fool forensic experts. They were designed to fool frightened people scrolling through their phones at midnight.
The Mechanics of the “Zombie Regime”
The “Zombie Regime” is a form of political necromancy unique to the AI age. In traditional exile politics, distance meant diminishing power. When Ayatollah Khomeini was exiled in Paris, projecting his voice into Iran required smuggled cassette tapes and weeks of physical distribution. Digital technology collapses that distance entirely. Today, people believe that Sheikh Hasina, sitting in a safehouse in Delhi, can be algorithmically resurrected daily in the smartphones of 170 million Bangladeshis.
This phenomenon operates on three interconnected levels:
First, Acousmatic Authority: The power of a voice heard but whose source remains unseen. In cinema and psychology, the “acousmatic voice” carries disproportionate authority because the listener cannot verify the speaker’s context or sincerity. Every “Hasina audio” that circulated on Telegram, whether real or fake, reinforced the perception that she is not in passive exile but in active command.
Second, The Liar’s Dividend: a concept from disinformation studies where the mere existence of deepfake technology allows bad actors to dismiss authentic evidence as “fake,” while simultaneously making fabricated evidence seem plausible. When India’s Ministry of External Affairs denied that Hasina was politically active, the Bangladeshi interim government and its supporters dismissed the denial itself as a “deepfake denial.” Conversely, each time a new AI-generated clip surfaces, India cannot easily debunk it, because previous leaks were authentic. South Asia is now trapped in a zero-trust information environment, and it is eroding India’s strategic interests.
Third, Synthetic Revanchism: unlike traditional revanchist movements, which require organisational structures, funding networks, and physical mobilisation, synthetic revanchism can be conjured by a single actor with a laptop and access to commercial AI tools like ElevenLabs or Descript. The frightening implication is that a Pakistani intelligence cell, a radical Bangladeshi student group, or even a lone digital provocateur can generate a “Hasina Declaration of War,” release it on dark social platforms, and within hours watch real mobs attack real Indian targets.
The Pakistan-ISI Amplification Network
It is important to make clear that this is not happening in a vacuum. Several intelligence assessments indicate that Pakistan is actively involved. Reports suggest that the Inter-Services Intelligence (ISI) has deployed over 1,825 YouTube channels, 3,200 X (formerly Twitter) accounts, and 192 government-linked handles specifically targeting India’s position in Bangladesh. Following Operation Sindoor in May 2025, India’s military action in Pakistan-occupied Kashmir, the ISI activated what internal Indian intelligence documents call a “multi-front escalation strategy”.
In this regard, the assassination of Bangladeshi student leader Osman Hadi in December 2025 is a case study in hybrid warfare. Within hours of his death, a fabricated “Al Jazeera investigation,” complete with fake branding and deepfake reporter voiceovers, circulated claiming that his killers had fled to India and were being harboured by R&AW. Though the video was forensically debunked within 48 hours, by then it had already been viewed over 2 million times and had triggered protests outside Indian diplomatic missions in Chittagong.
Analysts note that the ISI’s real innovation lies in understanding platform economics. They have realised that they don’t need to create all the content; they merely need to seed the initial narratives and let Bangladesh’s own “creator economy” take over. Local TikTok and YouTube influencers have discovered that anti-India content generates exponentially higher engagement than neutral political commentary. The algorithm rewards outrage. Today, a 19-year-old student in Sylhet doesn’t need a handler from Rawalpindi to tell them to post about “Indian aggression.” They just need to see their view count spike when they do. This is participatory propaganda: decentralised, self-sustaining, and deeply corrosive to peace in South Asia.
The Geopolitical Chessboard: China’s Digital Silk Road
While Pakistan provides the kindling, China offers the accelerant. The “India Out” campaign in Bangladesh isn’t merely about bilateral animosity; it’s about infrastructure replacement. For over a decade, India has been exporting its digital architecture to neighbours. UPI payment systems, Aadhaar-style identity frameworks, and satellite communications. This “India Stack” represents soft power projection far more durable than any defence agreement.
The “India Out” narrative provides perfect cover for dismantling this digital influence. When Bangladeshi student groups call for boycotting Indian apps and payment systems, they believe they are reclaiming sovereignty. What they don’t realise is that they are acting as the demolition crew for China’s Digital Silk Road. Every Indian satellite communication terminal removed creates space for Chinese alternatives. Every UPI integration suspended creates an opening for Alipay or WeChat Pay.
This pattern is repeating across South Asia. The Maldives elected a pro-China government that immediately began distancing itself from Indian defence agreements. Nepal oscillates between Delhi and Beijing depending on which capital offers better terms. In each case, anti-India narratives, often amplified by social media disinformation, grease the wheels of geopolitical realignment.
The Domestic Danger: When Deepfakes Come Home
The tactics perfected against India abroad are now being imported domestically. In December 2025, a deepfake video showing Lok Sabha Speaker Om Birla announcing a ₹12,000 monthly cash transfer to poor families went viral. It was designed to create expectations that, when unfulfilled, would generate anti-government resentment. Another manipulated video purportedly showed Uttar Pradesh Chief Minister Yogi Adityanath demanding Prime Minister Modi’s resignation over the Bangladesh policy. Both were debunked, but not before millions had seen them.
AltNews, India’s leading fact-checking organisation, reported that 2025 witnessed an unprecedented surge in political deepfakes, with AI technology “supercharging disinformation and propaganda like never before”. The implications for the 2026 state elections and the 2029 general elections are profound. If a foreign adversary can generate a believable deepfake of the Prime Minister making inflammatory communal remarks and release it 48 hours before polling day, can India’s fact-checking infrastructure respond quickly enough to prevent violence? The answer, currently, is no.
The Economic and Social Costs
This isn’t merely an abstract information warfare problem. It has tangible human costs. The suspension of visa services has disrupted thousands of families with cross-border ties. Medical tourism, a significant revenue source for Indian hospitals in Kolkata and Chennai, has collapsed as Bangladeshi patients fear being labelled “pro-India.” The diplomatic freeze has also frozen approximately $2 billion in bilateral trade. Indian exports of textiles, pharmaceuticals, and agricultural products face informal boycotts. Bangladeshi garment manufacturers, who rely on Indian raw materials, are scrambling to find alternative suppliers, often at higher costs. The “India Out” campaign, born in the digital realm, is now imposing real economic pain on both nations.
What India Must Do: A Comprehensive Strategy
The challenge is unprecedented, but not insurmountable. India needs a multi-layered response that combines technological innovation, diplomatic agility, and strategic communication.
Immediate Crisis Management
1. Real-Time Dark Social Monitoring
India’s Press Information Bureau (PIB) Fact Check unit currently operates primarily on public platforms like X and Facebook. But the most damaging disinformation circulates on WhatsApp, Telegram, and closed Facebook groups, which researchers call “dark social.” The Ministry of Electronics and Information Technology must establish dedicated teams that can infiltrate (legally, through open groups) and monitor these spaces in Bengali, Urdu, Tamil, and other regional languages. When a fake Hasina audio drops on a Dhaka Telegram channel, Indian diplomats should have the forensic analysis ready within hours, not days.
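One building block of such a rapid-response capability is simple media triage: if a circulating clip is byte-identical to something already debunked, analysts should know instantly rather than re-running forensics. The sketch below is a minimal, hypothetical illustration of that idea; the `KNOWN_FAKES` registry, its sample entry, and the function names are all assumptions for demonstration, not a description of any real PIB system.

```python
import hashlib

# Hypothetical registry of already-debunked media, keyed by SHA-256 digest.
# In a real deployment this would be a shared database maintained by
# forensic teams across agencies; the entry below is illustrative only
# (it is the SHA-256 digest of the bytes b"test").
KNOWN_FAKES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08":
        "fabricated audio clip, previously debunked",
}


def fingerprint(media_bytes: bytes) -> str:
    """Return a stable SHA-256 fingerprint for a media file's exact bytes."""
    return hashlib.sha256(media_bytes).hexdigest()


def triage(media_bytes: bytes) -> str:
    """Flag a file immediately if its exact bytes match a known debunked fake."""
    digest = fingerprint(media_bytes)
    if digest in KNOWN_FAKES:
        return f"KNOWN FAKE: {KNOWN_FAKES[digest]}"
    return "UNKNOWN: queue for full forensic analysis"
```

Note the limitation: exact-hash matching is defeated by trivial re-encoding, so production systems would also need perceptual hashing and audio fingerprinting. The point of the sketch is the workflow, not the matching technique.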
2. Forensic Transparency
India should publicly release the technical details of how it verifies official communications from Hasina. If she gives a genuine interview, it should be accompanied by cryptographic signatures that can be independently verified. This radical transparency might seem to expose vulnerabilities, but in a zero-trust environment, it’s the only way to establish baseline credibility.
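The sign-and-verify workflow this recommendation describes can be sketched in a few lines. A real deployment would use asymmetric signatures (e.g. Ed25519), so that anyone can verify a release without holding a secret; the HMAC construction below is a deliberately simplified stand-in to show the shape of the workflow, and the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration only. A real system would use an
# asymmetric key pair (sign with a private key, publish the public key) and
# would never hard-code secrets in source.
SIGNING_KEY = b"demo-signing-key"


def sign_release(media_bytes: bytes) -> str:
    """Produce a signature to publish alongside an official recording."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()


def verify_release(media_bytes: bytes, published_sig: str) -> bool:
    """Check whether a circulating clip matches the published signature.

    compare_digest performs a constant-time comparison, avoiding timing
    side channels when checking signatures.
    """
    expected = sign_release(media_bytes)
    return hmac.compare_digest(expected, published_sig)
```

Any edit to the media bytes, however small, invalidates the signature, which is exactly the property a “baseline credibility” scheme needs: authentic releases verify, and tampered or synthetic derivatives do not.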
3. Platform Diplomacy
India must work bilaterally with Meta, Google (YouTube), TikTok’s parent ByteDance, and Telegram to implement mandatory “AI-generated content” labels for synthetic media. The current self-regulation model has catastrophically failed. If platforms refuse, India should consider regulatory action, not censorship, but disclosure requirements similar to political advertisement disclaimers.
Structural Reforms
4. Build India’s “Truth Stack”
Just as the India Stack revolutionised digital payments and identity, India needs a sovereign AI-powered verification infrastructure. Every official government communication, from the Prime Minister’s speeches to MEA statements, should carry a blockchain-verified digital signature that cannot be spoofed. Citizens could scan a QR code on any official video and instantly verify its authenticity through a government portal.
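The tamper-evidence property behind the “blockchain-verified” idea is an append-only hash chain: each record commits to the digest of the record before it, so altering any past entry breaks every link after it. The sketch below is a minimal, hypothetical illustration of that mechanism; the class and field names are assumptions, and a real “Truth Stack” would add signatures, timestamps, and distributed replication on top.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class Entry:
    statement: str   # e.g. a reference to an official video or transcript
    prev_hash: str   # digest of the previous entry, linking the chain

    def digest(self) -> str:
        payload = json.dumps({"s": self.statement, "p": self.prev_hash})
        return hashlib.sha256(payload.encode()).hexdigest()


class TruthLedger:
    """Append-only hash chain: tampering with any past entry changes its
    digest, which breaks the link stored in every later entry."""

    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def append(self, statement: str) -> str:
        prev = self.entries[-1].digest() if self.entries else "genesis"
        entry = Entry(statement, prev)
        self.entries.append(entry)
        return entry.digest()

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            if e.prev_hash != prev:
                return False
            prev = e.digest()
        return True
```

A citizen-facing portal would only need to recompute this chain and compare the tip digest (the value a QR code on an official video could encode) to confirm nothing upstream was altered.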
5. National Digital Literacy Campaign
The Election Commission issued warnings about AI manipulation during the 2024 elections, but one-off advisories are insufficient. India needs a sustained, multimedia campaign, similar to COVID-19 vaccination drives, teaching citizens that “seeing is no longer believing.” This should start in schools, with curricula updated to include media literacy modules on deepfakes, algorithmic manipulation, and source verification.
6. Train Diplomatic Corps in Synthetic Media Warfare
Every Indian mission abroad should have at least one officer trained in AI forensics and digital disinformation. The Ministry of External Affairs should partner with IITs and private sector AI firms to create a crash course for diplomats. When the next crisis erupts in Kathmandu or Colombo, our ambassadors need to speak the language of algorithms, not just diplomacy.
Offensive Capabilities
7. Active Cyber Defense
India’s cyber doctrine has been largely defensive: protecting critical infrastructure and monitoring threats. It’s time to develop “active defence” protocols. If a Pakistani or Chinese cyber cell is producing deepfakes targeting India, Indian agencies should have the capability to identify, infiltrate, and disrupt those networks. This isn’t about launching retaliatory cyberattacks, but about making the cost of AI disinformation prohibitively high for adversaries.
8. Strategic Counter-Narratives
India should not merely debunk lies; it should proactively tell its own story. The Ministry of External Affairs needs a rapid-response content team that can produce compelling, shareable media in regional languages. When the ISI claims “India is destabilising Bangladesh,” India should have ready-made infographics, short videos, and data visualisations showing India’s actual development assistance, trade benefits, and cultural ties. Fight algorithmic fire with algorithmic fire.
9. Regional AI Governance Coalition
India should lead a South Asian initiative, potentially through BIMSTEC, to establish shared norms against AI disinformation. If Bangladesh, Sri Lanka, Nepal, and Bhutan are all facing similar manipulation from extra-regional actors, there’s common ground for cooperation. A regional “Deepfake Non-Proliferation Treaty” might sound ambitious, but the alternative is a digital arms race that destabilises the entire neighbourhood.
The Larger War Ahead
The Sheikh Hasina “Zombie Regime” will eventually dissipate. She will either return to Bangladesh through political reconciliation, remain permanently in exile, or pass away. But the technology that resurrected her digitally will not disappear. It will be refined, democratised, and deployed in the next crisis.

Tomorrow’s target might be Nepal, where a fake King Gyanendra could be AI-generated to stoke monarchist sentiments against the republic. It could be the Maldives, where deepfake “evidence” of Indian military coercion could justify Chinese naval basing rights. Or most terrifyingly, it could be India’s own Northeast, where a deepfake video of the Prime Minister making anti-tribal remarks could ignite ethnic violence that takes weeks to contain.

The strategic insight that India’s adversaries have internalised is this: You don’t need tanks to destabilise India’s periphery anymore. You need AI, cheap smartphones, and an understanding of how algorithms weaponise emotion. As a senior Indian intelligence officer confided to me: “We’re fighting a war where the enemy manufactures evidence faster than we can manufacture trust.”
Conclusion: Building India’s Digital Immune System
The age of “trust but verify” is over. We now inhabit the age of “verify mathematically, verify independently, verify continuously.” India’s response to the Bangladesh deepfake crisis will serve as a blueprint for how democracies navigate the synthetic media age.

This is not a problem that can be outsourced to fact-checkers or solved through internet shutdowns. It requires a whole-of-government approach: technologists working with diplomats, intelligence agencies coordinating with civil society, platforms accepting regulatory responsibility, and citizens becoming active participants in information hygiene.

The ghost haunting India from Dhaka will fade. But unless we build the institutional immune system to resist synthetic media warfare, new ghosts will rise in Kathmandu, in Colombo, in Malé, and eventually within our own borders. The question is not whether AI will be weaponised against India again. The question is whether we will be ready when it happens. The mathematics of verification must become as fundamental to Indian statecraft as the mathematics of deterrence, because in the 21st century, the deepfake that goes unchallenged is as dangerous as the missile that goes unintercepted.