AI takeover of all media
AI is coming for ‘content’ — everything from advertising and novels to movies and journalism. The result is likely to be simultaneously horrific, wonderful, depressing and exciting. There will be not only creative destruction, but also lots of plain old destruction.
Having spent most of my adult life producing research, journalism and documentaries, as well as consuming escapist novels and movies, I have great sympathy for creators.
But for the past three years, I have been an investor and venture capitalist in AI, and this experience has shaped the message I would offer to everyone in journalism, publishing, music, advertising and Hollywood: you ignore the potential of this technology at your peril.
First, consider the prospects for the film and television industry, which has been contracting for years owing to new forms of media delivery (streaming) enabled by the internet, laptops, tablets and mobile phones.
The decline of cable TV and DVDs reflects a variety of factors, including video streaming, the rise of user-generated content, the democratisation of creation through inexpensive cameras and software, and the resulting competition for eyeballs from YouTube, Facebook and TikTok.
Yet throughout this decade of painful contraction, the fundamental techniques of video production have not changed much. You still use real cameras to film real people and things.
Soon, though, all these real-world inputs will be obsolete, replaced by AI. The pioneers of this new world will be, without exception, startups, some of them less than a year old.
The $600 billion digital advertising industry is next. The leading startup in AI commercials, Higgsfield, was founded only in 2023, but its business has exploded, with revenues doubling every month, on track to exceed $1 billion this year.
DAY AFTER TOMORROW
The AI revolution is coming to the arts and the carnage in legacy industries will be awful. What the day after will look like, however, is a far more complicated question.
Personally, as a once and future filmmaker, I am excited about AI filmmaking. I would love to be able to write treatments and screenplays, feed them to my AI ‘studio’, get back a good rough cut and then hone and hone with AI until I have exactly the film I want to make, with every character, setting, movement, line of dialogue and camera angle perfect.
There is, however, an urgent need for new laws, systems and institutions to protect intellectual property and its creators. The most discussed issue is the very real need to compensate traditional creators whose prior work is being used to train AI models. But there is also a need to protect AI creators and creations.
Far more frightening is what is happening to nonfiction — news, information sources and reference services. Here, we are already witnessing the blurring of the boundaries to the point of indistinguishability between fact and fabrication.
While the AI era of art excites me more than it worries me, the balance is different in the realm of truth and reality. As much as there is to celebrate, I am terrified by what AI might bring.
Journalism, like Hollywood, has already contracted. The internet forced daily newspapers, weekly magazines, radio and television news all into the same market; it destroyed the classified advertising revenues that newspapers depended on; and it spawned thousands of low-quality new entrants.
To be sure, after multiple near-death experiences, a small number of high-quality English-language news organisations emerged even stronger and with larger global audiences than before: the New York Times, the Financial Times, the Guardian, Bloomberg News, the Economist, Politico and the Reuters and AP wire services. But these outlets reach only a small minority of the population. They are also expensive to produce and their finances are fragile.
The most frequently discussed issue is AI deepfakes. These are indeed a huge problem, considering that YouTube, Facebook, Snap, X and TikTok face few obligations with regard to truth or accuracy. Soon, it will be possible to synthesise nearly undetectable fake versions of almost anyone and almost any event.
Even the most carefully trained AI models can be misused and some open-source AI models have no controls whatsoever. Yet at the same time, AI has greatly improved the quality of news and information available to the public, at least for anyone interested enough to look.
The major models (mainly OpenAI, Anthropic and Google), and many value-added services enabled by them, are now remarkably good. Hallucination is still a problem, but far less so than even a year ago.
Already, AI models provide a miraculous portal to knowledge for more than a billion users. I use Perplexity at least a dozen times a day, and I used it repeatedly in writing this essay — far more often than I referred to legacy publications (or Google Search).
Similarly, there has been an explosion of specialised AI services, including reference resources for lawyers, scientists, doctors, patients and now also AI therapists.
But there is a dark side. AI models do not create knowledge. They harvest and distribute knowledge superbly, but they are totally dependent on information created by others. We (and the models) still need Politico, the New York Times, the Financial Times, AP, Reuters and the whole world of news organisations. They alone employ commissioning editors, full-time journalists and fact-checkers; AI models do not hire investigative journalists or war correspondents willing to take risks.
Yet as much as AI models depend on legacy journalism, they also profoundly threaten it in at least two ways. As in the case of Hollywood, these threats are further amplified by the fact that the legacy industry isn’t paying attention.
The first problem is direct competition. If you want to know something specific, or want to stay current with some issue, you don’t need a news publication anymore; you can just ask a model. Moreover, the currently available models can answer many questions that the news organisations cannot. Perhaps worst of all, they are cheaper — much cheaper. For individual users, they typically charge $10 per month, whereas the New York Times costs about $25 per month.
The AI models have a cost advantage in part because they can amortise their fixed costs across huge numbers of users. But they also benefit greatly from not paying for most of the information they use.
Currently, there is a strong moral and practical argument being made for forcing model vendors to compensate creators fairly. But this will probably require new court decisions or new laws. In the meantime, there is a very real risk that unless news organisations, journalists, writers and documentary filmmakers are compensated sufficiently, the AI industry will eventually kill the very sources on which it depends to provide accurate results.
This brings us to the second problem posed by AI: the potential destruction of trustworthy news sources as a result of overwhelming pollution from AI junk and fraud. Innumerable AI services will arise, and even the major foundation models and the most careful news organisations might be degraded by skilful AI fakery that cannot be distinguished from reality. So far, the models have been trained on reality; but soon, most training ‘content’ will be AI-generated.
One can hope that news organisations will wake up, that courts and legislatures and popular demand will force AI companies to compensate journalists and researchers fairly, and that AI will give rise to a new industry of high-quality journalism. © Project Syndicate, 2026
Charles Ferguson is a technology investor, policy analyst and documentary filmmaker whose films include the Oscar-winning Inside Job.
