Spot AI-generated election videos
You do not need a cyber forensics degree, you need a different set of instincts

Last year, an audio clip of Rastriya Swatantra Party (RSP) vice-president Swarnim Wagle surfaced on Facebook in which the potential future finance minister appeared to be having a private conversation with Indian Prime Minister Narendra Modi about Nepal-India border issues.
The clip racked up 1.5 million views. It jumped through Viber family groups without caption or context, the way political gossip always does in Nepal. By the time Wagle walked into the Cyber Bureau office in Bhotahiti to file a complaint and confirm the audio was fabricated using AI, the clip had already done what it was designed to do.
This is the problem with fake AI videos: the damage is not in the content but in the lag between the share and the correction.
Some videos are partisan but so funny that millions share them: like the one of K P Oli running away carrying a huge bell while his rival Balendra Shah chases him to claim his election symbol.
Google ‘how to detect AI videos’, and there is a tidy checklist. Look for jerky motion. Count the fingers on a hand. Check if the eye blinking looks unnatural. Inspect the lighting. That advice made sense two years ago; it does not anymore.
On TikTok, the account ‘AI Nepal’ recently posted an entirely AI-generated video of a young Nepali living abroad on a video call with his parents and sister, encouraging them to vote. The footage looked like any other family video call. No glitchy fingers. No uncanny valley. No visual tell whatsoever.
The tech industry’s response to hyper-realistic AI-generated videos has been cryptographic watermarks and metadata trails embedded in files: solutions that require software to detect, and that no voter scrolling through their feed eight days before the 5 March election will ever use.
RED FLAGS
So, forget the checklist. The fakes now pass the eye test. What has not changed is the ecosystem around them. Here are a few things to look out for this election cycle.
Look at the source, not the spectacle. Before you look at the content of a video, look at who posted it. What does this account usually share?
If a profile known for uploading grainy, shaky smartphone footage from political rallies suddenly drops a cinematic, flawlessly lit clip of a private scandal, that gap between the account’s history and the production quality is the first red flag.
Is the profile tied to a verifiable person, or is it a faceless account that appeared three weeks ago and posts exclusively about one candidate?
In Nepal’s current election cycle, AI-generated images of Nepali Congress (NC) president Gagan Thapa being chased away by crowds were traced back to exactly these kinds of disposable accounts.
Fact-checkers confirmed the images were fabricated. But by then, the images had already been screenshotted and forwarded thousands of times.
Look at the cuts. This is the tell that still works, even as the visuals get better. Despite all recent advances, the best AI video generators top out at five to ten seconds of high-fidelity footage per prompt.
A longer AI-generated video has to be stitched together from multiple short generations, which means frequent, awkward cuts every few seconds. If you are watching a tense two-minute ‘leaked’ recording and the camera conveniently cuts every eight seconds with no logical reason for the edit, you are watching a patchwork of separately generated clips glued into a sequence.
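For readers who want to see the logic behind this tell, here is a minimal sketch, purely illustrative and not any real detection tool. It assumes you already have one average-brightness value per frame of a video; a sudden jump between frames marks a hard cut, and a clip where every stretch between cuts is shorter than about ten seconds matches the stitched-together pattern described above.

```python
# Hypothetical sketch: flag videos whose hard cuts all arrive within a few
# seconds of each other, the pattern left behind when short AI generations
# are glued into one sequence. Input: average brightness (0-255) per frame.

def find_cuts(brightness, threshold=40):
    """Return frame indices where brightness jumps past the threshold,
    a crude stand-in for hard-cut detection."""
    return [i for i in range(1, len(brightness))
            if abs(brightness[i] - brightness[i - 1]) > threshold]

def suspiciously_regular(brightness, fps=30, max_gap_seconds=10):
    """True if no shot between cuts lasts longer than ~10 seconds,
    the rough per-prompt limit of current AI video generators."""
    cuts = find_cuts(brightness)
    if not cuts:
        return False  # one unbroken take: not a stitched patchwork
    boundaries = [0] + cuts + [len(brightness)]
    gaps = [(b - a) / fps for a, b in zip(boundaries, boundaries[1:])]
    return max(gaps) <= max_gap_seconds
```

A genuine single-take recording, or footage with one or two long shots, fails the test; a clip that cuts every eight seconds for its whole length passes it. In practice the brightness values would come from a video library, but the reasoning is the same as the eye test in the paragraph above.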
Evaluate your own reaction. This is the one nobody talks about, and it is the most important. The algorithm does not care about truth; it feeds on emotional resonance.
A clip that makes your blood boil within five seconds, that perfectly validates something you already believe about a politician, that gives you exactly the outrage you were looking for, that precision is the red flag. Real events are messy and ambiguous.
They do not arrive in 15-second packages engineered to confirm your worldview. The instinct we need before the election is the pause between seeing and sharing. That cognitive pause, the two seconds where you ask yourself why this video is making you feel exactly what it wants you to feel, is the only defence that scales against a technology that is evolving faster than any detection tool.
Nepal went from masu bhat to digital masu bhat. The plate of mutton curry that once bought votes has been replaced by AI-generated dopamine hits served through the feed. In both cases, the target is a voter who reacts before thinking.
Think you have the instincts to tell AI from reality? Test yourself. We built AI OR NOT, a challenge where you listen to audio clips and watch footage and vote on what is AI-generated and what is not. Try it out online.
