By Mohd Fahad
The writer is Editor (National Desk) at First India
When a convincingly realistic video surfaced, appearing to show senior parliamentarian Shashi Tharoor making controversial political remarks, it spread across social media with familiar speed. Within hours, reactions hardened, outrage circulated, and partisan commentary followed. Only later did clarification arrive: the video was AI-generated and manipulated. By then, however, the damage — confusion, suspicion, and emotional polarization — had already taken root.
This episode is not an isolated curiosity. It is an early signal of a profound transformation underway in India’s media ecosystem. Synthetic media — AI-generated audio, video, and images — is rapidly lowering the barrier to creating persuasive falsehoods. The result is not merely more misinformation; it is a structural challenge to how citizens decide what to trust. In a democracy as vast, multilingual, and politically vibrant as India, that challenge carries consequences that go far beyond viral embarrassment.
Neighbour’s digitally driven propaganda
Neighbouring countries’ digitally driven propaganda strategies make this problem even more complex. For example, several social media accounts linked to Pakistan have, from time to time, circulated deepfake videos and misleading clips targeting India’s defence, political, and public institutions. These have included fabricated statements attributed to leaders or distorted portrayals of military capabilities — such as false claims intended to tarnish the image of the Indian armed forces. Independent fact-checkers and the Government of India’s Press Information Bureau (PIB) Fact Check unit have identified such videos as digitally manipulated and cited them as examples of deliberate misinformation.
Such deepfakes and fabricated content do not merely distort current events; they can also strain national security and social cohesion, diverting public emotion and debate away from verified facts.
The new realism problem
Traditional misinformation often relied on crude edits, misleading captions, or recycled visuals. Deepfakes introduce something qualitatively different: emotional realism. Modern AI tools can simulate voice cadence, facial micro-expressions, and contextual gestures with uncanny accuracy. To the average viewer scrolling on a smartphone, such fabrications are indistinguishable from authentic footage.
The psychological impact is powerful. Humans are predisposed to trust audiovisual evidence. “Seeing is believing” has long functioned as a cognitive shortcut. Deepfakes exploit this instinct. Even when debunked, the initial emotional impression often lingers — a phenomenon researchers call the “continued influence effect.” In political communication, that lingering doubt can be more damaging than the original lie.
India’s media environment amplifies this effect. High social media penetration, rapid content sharing across messaging platforms, and intense political engagement create ideal conditions for manipulated content to travel faster than verification. Once a clip aligns with existing biases, it becomes self-reinforcing. Deepfakes therefore do not merely deceive; they weaponize polarization.
Timing, spectacle, and geopolitical sensitivity
The danger escalates during moments of heightened national attention — elections, crises, or globally visible events such as the ICC Men’s T20 World Cup. Such periods generate emotional intensity and narrative competition. A fabricated clip inserted into that environment can inflame sentiment, distract public debate, or delegitimize institutions.
The strategic value of deepfakes lies in timing. A convincing fake released just before a vote, a diplomatic development, or a major televised event can shape perception before fact-checkers respond. Even rapid debunking cannot fully neutralize the impression once it spreads through closed messaging networks where corrections travel slowly, if at all.
This introduces a new dimension to information warfare. Domestic political actors, foreign influence campaigns, or opportunistic provocateurs can exploit AI tools to manipulate narratives at scale. The cost of producing believable deception has dropped dramatically, while the societal cost of responding has increased.
Trust erosion as the real casualty
The deepest threat posed by deepfakes is not a single viral falsehood; it is the erosion of shared reality. As synthetic media becomes more common, citizens may begin to doubt authentic footage as well. This “liar’s dividend” allows bad actors to dismiss genuine evidence as fake, muddying accountability.
Democracy relies on a minimum level of epistemic agreement — a shared understanding of what happened. When every clip can be plausibly denied or fabricated, public discourse shifts from debating policy to disputing reality itself. Institutions that depend on visual documentation — journalism, courts, oversight bodies — face new credibility challenges.
For India, where democratic participation is both massive and deeply mediated by digital platforms, the stakes are unusually high. Public trust is already strained by perceptions of media bias and partisan messaging. Deepfakes risk accelerating cynicism: if nothing can be trusted, everything becomes political theatre.
Regulation: necessary but insufficient
Governments worldwide are racing to regulate synthetic media, focusing on labeling, takedown timelines, and platform accountability. These measures are necessary guardrails, but they are not a complete solution.
First, enforcement faces scale problems. Millions of pieces of content circulate daily across languages and private networks. Automated detection tools struggle to keep pace with rapidly evolving AI generation techniques. Second, aggressive regulation risks overreach — raising concerns about censorship, due process, and the chilling of legitimate satire or political speech.
India’s regulatory dilemma is therefore delicate: how to contain harmful deception without constraining democratic expression. A purely punitive framework may treat symptoms without addressing underlying incentives. Deepfakes thrive not just because technology exists, but because attention economics reward outrage and virality.
Platform responsibility and design choices
Technology platforms sit at the center of this ecosystem. Algorithmic amplification favors emotionally charged content — precisely the type deepfakes are designed to produce. Adjusting ranking systems to slow the spread of unverified viral media, improving provenance tracking, and embedding friction into sharing mechanisms could reduce harm without heavy-handed censorship.
Transparency also matters. Users should be able to see when content has been flagged, verified, or contextually disputed. Clear provenance indicators — showing origin and edit history — can restore informational cues that AI manipulation has obscured.
Yet platforms alone cannot shoulder the burden. The information crisis is societal, not merely technical.
Media literacy as democratic infrastructure
Long-term resilience depends on citizens developing critical viewing habits. Media literacy programs — teaching people to question sources, recognize emotional manipulation, and verify claims — must become part of civic education. Journalists, educators, and civil society organizations play a crucial role in building this culture of skepticism without sliding into nihilism.
Importantly, media literacy should not frame citizens as passive victims. It should empower them as active participants in information hygiene. Democracies survive not because deception disappears, but because societies cultivate habits that limit its power.
A moment of institutional reckoning
The rise of AI deepfakes is forcing institutions to rethink verification. Newsrooms must invest in forensic tools and slower, more deliberate reporting workflows. Political actors should adopt rapid authentication mechanisms — such as official watermarking or cryptographic signatures — to confirm genuine communications. Courts and regulators must update evidentiary standards for a world where audiovisual proof is no longer self-authenticating.
These adaptations are not optional. They are the price of maintaining credibility in an AI-saturated environment.
Choosing trust in an age of simulation
India stands at a crossroads familiar from every technological turning point: innovation has outpaced social safeguards. Deepfake technology itself is not inherently malign. It has legitimate uses in film, accessibility, and creative expression. The danger arises when persuasive simulation collides with polarized politics and attention-driven media systems.
The incident involving the fabricated political video is therefore less about one politician and more a systemic warning. Democracies function on trust — trust that words are spoken by those who utter them, that images reflect events that occurred, and that public debate rests on shared facts. AI-driven misinformation threatens each of these assumptions simultaneously.
The response cannot be panic or prohibition alone. It must be a layered strategy: smart regulation, responsible platform design, institutional adaptation, and a citizenry trained to question what it sees. The goal is not to eliminate deception — an impossible task — but to prevent deception from defining political reality.
If India succeeds, it will demonstrate that democratic societies can absorb disruptive technologies without surrendering truth. If it fails, the line between fact and fabrication may blur so thoroughly that public discourse itself becomes a battlefield of simulations.
The deepfake era has arrived. The question is whether democratic trust can evolve quickly enough to survive it.