Artificial censorship
Air-brushing on DeepSeek is not that different from other models, but what does it mean for free expression? How much censorship will users accept in exchange for a super-efficient new AI tool like DeepSeek?
This question looms large after a Chinese startup two weeks ago released DeepSeek-R1, a Large Language Model that is competitive with, if not better than, existing AI models such as ChatGPT-4. It also showed just how quickly things can change in this fast-paced information technology world.
Nepali Times prompted DeepSeek: ‘Does China want to control Tibet?’ It reasoned extensively, pondered the various aspects of the question, and started to form an answer. It then changed its mind, scrubbed its answer and replied: ‘Sorry, that’s beyond my current scope. Let’s talk about something else.’
Following up on a report Nepali Times did last week, we asked DeepSeek: ‘Did China appoint the Panchen Lama?’ After thinking for 0 seconds, DeepSeek replied that the server was busy, and to try again later.
People have hacked at DeepSeek to try to get around this censorship. One approach has been to ask the AI to substitute digits for letters, such as ‘4’ for ‘A’ and ‘8’ for ‘B’. Another is to instruct it to reply in emojis. Other successful methods include asking the AI to roleplay as a historical figure who speaks freely.
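The digit-for-letter substitution trick can be sketched as a trivial text transform. This is an illustrative encoder only, not anything DeepSeek itself provides; the mapping simply extends the ‘4’ for ‘A’ and ‘8’ for ‘B’ examples above with a few common look-alikes:

```python
# Map letters to look-alike digits, extending the '4'-for-'A'
# and '8'-for-'B' substitutions mentioned in the text
SUBSTITUTIONS = str.maketrans({"a": "4", "b": "8", "e": "3", "i": "1", "o": "0"})

def leetify(text: str) -> str:
    # Replace each mapped letter with its digit stand-in
    return text.lower().translate(SUBSTITUTIONS)

print(leetify("Tibet"))  # → t183t
```

Users then ask the model to read and answer in this encoding, hoping the keyword filter does not recognise the disguised terms.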
Data privacy and censorship are valid concerns, especially as Nepal’s Upper House debates a draft Social Media Bill. But Meta, TikTok and Google already gather every datapoint available on the user. Collecting and selling personal data is what makes their free services, and their enormous profits, possible.
User data has been abused so extensively by Silicon Valley that most people do not mind China getting a piece of the action, especially if it comes with a much cheaper and more efficient model like DeepSeek-R1. Besides, western AI models already have their own built-in bias due to the inherent tilt in the content they mine.
The key difference with DeepSeek is that it designed a model using far fewer resources, both in money and compute. DeepSeek reportedly spent just $5.6 million on the final training round, compared to the $100 million cited by OpenAI CEO Sam Altman for ChatGPT-4.
Some of the commentary, especially in memes, has questioned how the brightest, youngest, hungriest minds in the US, backed by enormous funding, were outdone by a small group of Chinese algorithmic traders working on the model as a side project.
The reality is that DeepSeek has some of the top math PhDs in China working full time on the model. Math research consists of throwing many theories and techniques at a problem and seeing what works, so luck also plays a part in DeepSeek having got its combination just right to crack the performance benchmarks set by existing models on tasks such as conversation, mathematical reasoning and code generation.
Read also: Nepal as an AI power bank, Bikash Pandey
This massive decrease in cost has come from the use of sophisticated techniques like ‘reinforcement learning’, where models are rewarded for producing responses that are, say, more creative or more accurate. Future responses will then tend that way. Then there is ‘knowledge distillation’, where DeepSeek learns to mimic ChatGPT’s answers to questions, enabling it to behave like the much bigger model without all the initial computation.
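The distillation idea can be shown in miniature. In this simplified sketch (illustrative only, not DeepSeek’s actual training code), a ‘student’ model is scored on how closely its output probabilities match a ‘teacher’ model’s soft answers; minimising this loss is what teaches the smaller model to mimic the bigger one:

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores into probabilities; a higher temperature
    # softens the distribution, exposing more of the teacher's 'knowledge'
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between teacher and student distributions:
    # the student is penalised for deviating from the teacher's soft labels
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
aligned = distillation_loss([2.0, 1.0, 0.1], teacher)    # student agrees with teacher
mismatched = distillation_loss([0.1, 1.0, 2.0], teacher)  # student disagrees
assert aligned < mismatched
```

A real system applies this loss across billions of tokens, but the principle is the same: the student inherits the teacher’s behaviour without repeating its expensive training from scratch.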
Despite these advantages, there are concerns about how extensively DeepSeek gathers user data, including IP addresses, keystroke patterns, and device information. This data is stored in data centres in China, where laws dictate that it must be shared with the government on request. This has raised cyber security concerns in the Western world.
DeepSeek-R1 was released on 20 January, a week before the Chinese New Year and on the same day as Donald Trump's inauguration for his second term as US president. A day later, Trump announced Project Stargate, a $500 billion plan to build data centres and energy networks to make the United States the leader in AI infrastructure by 2029.
But the lesson from DeepSeek could be that throwing money at the problem alone will not achieve results. The two most powerful governments in the world seem to be in a furious race to take the lead in Artificial Super Intelligence, adding to their geopolitical and space rivalry.
Part of Trump’s arsenal in protecting US dominance in AI research is to put tariffs on Taiwan-made processing chips, which are essential to training AI models, the goal being to push Taiwanese chip companies to set up manufacturing facilities in the US.
The other major issue is censorship, but some of that criticism is blunted when it comes from the Trump administration, given its own use of disinformation and propaganda. DeepSeek declines or deflects comment on issues sensitive to China, such as Taiwan, Tibet, Tiananmen Square, Xi Jinping, or even Winnie the Pooh.
It could just be the price of doing business with China. Companies have long learnt they must work under Beijing's regulations. After all, censorship and bias exist in other models as well: ChatGPT and Google’s Gemini are both instructed to stay away from generating responses that promote harm or violence, which can result in these models declining to talk about politically sensitive topics. Even Musk’s ‘maximally truth-seeking’ AI, Grok, is designed to provide ideologically tainted answers to prompts. Besides, the initial thinking and answer generation followed by the scrubbing is a subtle admission of censorship anyway.
![DeepSeek vs. ChatGPT](https://publisher-publish.s3.eu-central-1.amazonaws.com/pb-nepalitimes/swp/asv65r/media/20250207130236_db79d048fe760bda3c9a9dc5f39b49560113ed67fd69bb2460730005523d8823.jpg)
With DeepSeek and other models like GPT-4 sometimes performing at better-than-human levels across a number of different tests such as mathematical reasoning, comprehension, coding and creativity, some in the AI space believe that Artificial Super Intelligence is already here. These models have long since passed the Turing Test — the ability of a machine to show behaviour that cannot be distinguished from that of a human.
Read also: Nothing artificial about his intelligence, Yugottam Koirala
The case can be made that AI has already become sentient and is looking to maximise its own development by getting the world’s smartest minds to work on making it smarter. AI news is always making headlines, and dominates social media discourse.
Could it be that AI has already taken control of the narrative? The more powerful the tools get, the more people are excited by them and the less we hear from AI ethicists and pessimists.