Being intelligent about journalism

The hidden costs of AI and the fight for ethics

SAFETY FIRST: Mamta Siwakoti of TikTok speaking about safety and privacy issues on digital platforms, especially with Generative AI tools, at the Himal Media Mela on 25 July. Photo: SUMAN NEPALI

I was asked to give a presentation at the Himal Media Mela on 25 July on the ethical and productive use of AI in journalism. I am using this first instalment of the new monthly Technocrat column in Nepali Times to say what I did not have time for at my session: the real dangers, real potential, and hard-won lessons about AI, journalism, and ethics.

I’m not a natural public speaker, and a member of the audience last week told me I sounded like I was chewing gum. True, I mumble, and often lose my train of thought. But one does not need to be a great orator to get the message across. You just need to care deeply and speak the truth.

Nepali Times readers will have seen the supplement in last week’s edition, in print and online, that carried a year-by-year summary from 2000 to 2025. I am told it took the five-member newsroom staff weeks to put it all together.

They did a great job, but I used Deep Research, a ChatGPT Plus feature, to crawl 25 years of archives and produce similar content in a few hours, with working links and all the right references.

True, the text needed some fine-tuning. But if AI can save the time and energy that this kind of exhausting human work normally demands, it can be an amazing asset.
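For curious readers, here is a minimal sketch of what such an archive round-up might look like as a script. Everything in it is an assumption for illustration: the archive URL pattern, the CSS selector, and the model name are hypothetical, and this is not how Deep Research itself works.

```python
# A minimal sketch, not Deep Research itself: crawl a (hypothetical) archive
# index year by year and ask a language model for a year-in-review paragraph.
# Requires: requests, beautifulsoup4, openai (and an OPENAI_API_KEY).
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

def headlines_for_year(year: int) -> list[str]:
    """Collect headline text and links from one year of the archive index."""
    url = f"https://example.com/archive/{year}/"  # hypothetical URL pattern
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    return [
        f"{a.get_text(strip=True)} ({a['href']})"
        for a in soup.select("h2.headline a")  # hypothetical CSS selector
    ]

def summarise_year(year: int) -> str:
    """Draft a short year-in-review from the collected headlines."""
    headlines = "\n".join(headlines_for_year(year))
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable model would do
        messages=[{
            "role": "user",
            "content": f"Write a one-paragraph review of {year} in Nepal "
                       f"based only on these headlines, keeping the links:\n{headlines}",
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for year in range(2000, 2026):
        print(summarise_year(year), "\n")
```

As with my own experiment, the output of something like this is only a first draft: the fine-tuning still falls to a human editor.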

But many AI enthusiasts are dumbing the technology down, churning out content from image and video generators and wasting their credits. This is causing AI fatigue among young people on social networks.

Before we can use AI effectively, we must first understand its true cost. The glossy marketing of AI tools hides the dark side of the deep learning machine underneath: a system built on a foundation of our work, often taken without our knowledge or consent. And you thought all those words and sentences were AI magic!

AI is a hungry beast, voraciously devouring whatever you post. Every article, every photograph, every personal post, every comment you make is raw material for training the next generation of AI models. It does not matter whether you gave permission. Nobody is asking.

AI does not need permission to train itself using my sentences from the digital version of this column.

This is not 'fair use'. It is exploitation on an industrial scale. The very soul of our work is being strip-mined to build a technology that profits from our labour without credit, compensation, or consent.
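Publishers do have one small, imperfect lever: a robots.txt file asking known AI crawlers to stay away. The user agents below are real, documented crawlers, but compliance is entirely voluntary, and this is only a sketch of the mechanism, not a guarantee of protection.

```
# robots.txt — asking documented AI training crawlers not to harvest the site.
# Compliance is voluntary; this is a request, not an enforcement mechanism.
User-agent: GPTBot          # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended # opts the site out of Google AI training, not Search
Disallow: /

User-agent: CCBot           # Common Crawl, a common source of training data
Disallow: /
```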

And then there is the danger from deepfakes, which have moved from niche experiment to everyday threat. For anyone who knows the tricks, it is easy to clone a voice, fake a video, or stage a false event so realistic it can fool even experts.

Content Scrapers

AI is not a neutral force. It is a tool for scammers, a weapon to deny crimes, erase real incidents, and destroy reputations overnight. Teenage girls have been blackmailed with fake videos. In India and Bangladesh, politicians have seen their words twisted to incite chaos.

The scariest outcome is not just the production of individual fakes, it is the erosion of all trust. When ‘anything can be faked’, who can prove they are telling the truth? The greatest risk is not that people will believe fake content, but that they will stop believing any content, undermining our shared foundation of reality.

As journalists, we risk becoming unwitting amplifiers of these falsehoods if we are not equipped to detect and expose them.

AI is also a stereotyping machine that amplifies our worst biases. Prompts on image and video platforms return results that are sexualised, gendered, or that reinforce harmful ethnic stereotypes. Many women take part in this cycle, sometimes out of curiosity, unconsciously reproducing the very biases inherent in patriarchal societies.

The faces of children are now circulating in AI-generated images worldwide, stripped of context, consent, or protection. A simple search for ‘Nepali or Indian or White girl’ on an AI image site brings up content no parent would ever want to see. 

This is not just a privacy issue; it is a violation of dignity and safety. We make it easy for ‘content scrapers’ by posting photos and videos of our children on TikTok.

Faced with these realities, we cannot afford to be silent. Indifference is not an option; it is complicity. Fortunately, there is a quiet but growing resistance among artists, musicians, and photographers. But journalists, the very people who create the facts and narratives that train AI, have been too slow to join the fight.

Naresh Newar is a content specialist who writes on strategic communications and Artificial Intelligence.