Ghiblification is more than cute pixels

The soft pastels and whimsical background looked like a moment stolen from a Hayao Miyazaki film. The thrill on my niece’s face was palpable as she shared her first Ghibli-style portrait earlier this month.

The latest generative AI tools can transform personal photographs into Studio Ghibli-inspired artwork. Dubbed ‘Ghiblification’, the trend has captured the global imagination.

But danger lurks. The playful trend masks serious hazards: every image can be retained and repurposed as training data, jeopardising privacy and creating fresh pathways for child sexual abuse material, sextortion, bullying, and hate speech.

A recent study by our organisation, ChildSafeNet, with UNICEF Nepal on generative AI and child safety reveals that over 60% of young people in Kathmandu have experimented with generative AI, often oblivious to the hidden costs.

Every time a photograph is uploaded to create AI-generated artwork or submitted to an app like Dreamify, the user is giving away more than just pixels: they are entrusting their likeness, metadata, and private spaces to an opaque system. The image may be stored indefinitely and woven into the AI model’s training data.

Distinctive facial features, location metadata, or even background details can be memorised and later reproduced in outputs for other users, violating privacy and breaching confidentiality. Even heavily edited or filtered images can leave a digital fingerprint that sophisticated models can reproduce.
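For readers who still want to join the trend, a practical first line of defence is to remove that metadata before sharing. Below is a minimal sketch in Python, assuming the open-source Pillow library; the file names are illustrative, and this is one simple approach rather than a complete safeguard.

```python
# A minimal sketch: re-save a photo with pixel data only, discarding
# EXIF metadata such as GPS coordinates and device details.
# Assumes the Pillow library (pip install Pillow); file names are illustrative.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image keeping only its pixels, so EXIF metadata is dropped."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)   # a fresh image carries no metadata
        clean.putdata(list(img.getdata()))      # copy pixel values only
        clean.save(dst_path)

strip_metadata("family_photo.jpg", "family_photo_clean.jpg")
```

Stripping metadata does not stop a model from learning faces or backgrounds, but it does keep location and device details out of the upload.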

OpenAI, the company behind generative AI models like ChatGPT and DALL·E, uses images shared by users as training data to refine its models unless the user opts out. Even then, the potential for misuse is great.


The viral adoption of AI-generated imagery encourages people to upload personal images, including those of families and minors. These images contain rich personal information and serve as a significant source of data, allowing technology firms to collect valuable insights into facial features, social dynamics, and cultural nuances.

“Such visually appealing images can be easily misused for generating seemingly credible imagery for spreading misinformation and reinforcing cultural stereotypes,” says computer scientist Dovan Rai with Body and Data. “Children are particularly vulnerable, as sexualised deepfake content can be generated with ease.”

Most platforms that provide AI image-generation tools do not transparently disclose how they handle uploaded content. When children's photos are ingested, the model internalises these features, which can later resurface in contexts no one intended, creating an ethical time bomb.

“Children’s likenesses could also appear in unexpected contexts, such as advertisements, memes, or controversial content, all without their families' knowledge,” Rai warns.

The Internet Watch Foundation recently identified over 3,500 items of AI-generated child sexual abuse material in a single month on encrypted forums, some of them grotesque deepfakes superimposing children’s faces onto sexual content. While those examples were not in the Ghibli style, they demonstrate how any benign filter can be twisted into a tool for exploitation.

AI tools have even begun to produce deepfake videos of child rape and torture by superimposing victims’ faces onto pornographic content. Such material can normalise sexual violence, facilitate grooming, enable sextortion, and serve as an instrument of bullying and hate speech.

The risks associated with generative AI have already begun to emerge in countries like Nepal, says Superintendent Deepak Raj Awasthi of the Nepal Police Cyber Bureau.

“We are investigating cases involving the use of AI-generated images and videos for defamation and the spread of misinformation, disinformation, and hate against teens and young people,” Awasthi says. “We have also received complaints regarding AI-generated deepfake videos aimed at defaming politicians and celebrities.”

Parents should be concerned about how AI-generated imagery may affect children’s safety as well as their creativity. Over-reliance on AI tools could diminish traditional creative skills such as drawing and painting.

Says Kabindra Napit of Smart Parents Nepal: “Parents must educate their children about associated online risks and stay updated on emerging threats.”

Tips to protect the young

Safety-by-Design: Technology companies need to prioritise safety for children and vulnerable groups from the very beginning of product and service development, using the Safety-by-Design approach developed by Australia’s eSafety Commissioner.

Consent and Transparency: Every AI art application should provide clear disclosure that submitted images may be used as training data, offering users an easy option to opt out.

Stronger Moderation: Technology companies must combine automated detection with human oversight to intercept harmful content, blocking any prompts or requests for sexualised imagery of minors and swiftly removing material that slips through. Watermarking or ‘fingerprinting’ systems can help trace and block harmful AI-generated images (see the sketch after these tips).

Legal Protection: Laws must criminalise the creation, distribution, and use of AI-generated child sexual abuse material. Enhanced international collaboration will be crucial for tracking and prosecuting offenders.

Multi-stakeholder Collaboration: Technology companies, law enforcement agencies, educators, and NGOs must collaborate to share knowledge and resources.

Digital Literacy: Develop digital literacy skills among children so they can distinguish between fantasy art and reality and recognise risks. Clear and confidential channels must also be established for reporting harmful content.

Parental Support: Establishing open and trusted communication with children about the potential dangers of AI is crucial. Parents and carers can also implement age-appropriate filters and monitoring tools to enhance safety.

Support Services: Service providers need to offer counselling and legal support that are friendly to children and young people.
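To illustrate the ‘fingerprinting’ idea in the moderation tip above, here is a minimal sketch using the open-source imagehash library. This is an illustration only: real platforms rely on proprietary systems such as PhotoDNA, and the file names below are hypothetical.

```python
# A minimal sketch of image 'fingerprinting': a perceptual hash changes
# very little when an image is resized or lightly edited, so a known
# harmful image can be recognised and blocked on re-upload.
# Assumes the open-source Pillow and imagehash libraries
# (pip install Pillow imagehash); file names are illustrative.
from PIL import Image
import imagehash

# Fingerprint of a known harmful image, e.g. from a moderation blocklist.
known_bad_hash = imagehash.phash(Image.open("blocklisted_image.jpg"))

def is_blocked(upload_path: str, threshold: int = 8) -> bool:
    """Return True if an upload is perceptually close to the blocklisted image."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two hashes gives a Hamming distance; small means near-duplicate.
    return (upload_hash - known_bad_hash) <= threshold

print(is_blocked("new_upload.jpg"))
```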

Anil Raghuvanshi is the founder of ChildSafeNet. Reach out to him at anil.raghuvanshi@childsafenet.org