Looking past all the hype and the fearmongering, it is easy to see a place for artificial intelligence in journalism; the what, when and how just haven't been properly figured out yet.
A recent survey of more than 3,000 journalists across several countries showed that only about 5% use generative AI tools (Gemini, ChatGPT, Claude, LLaMa and others) regularly, while 28% use them in some capacity. That figure is strikingly low, given what these tools can accomplish for journalists.
This tells us that journalists still haven't found a daily routine for these kinds of resources, or simply don't find them useful for their own purposes right now. After all, it takes a lot of work, and plenty of trial and error, to find the right balance when incorporating new tools into a routine.
But the main issue isn't that individual journalists fail to see AI as a valuable resource; most of them do. It is that organizations, as a group, are taking a long time to bring together the moving pieces that would make this resource an intrinsic part of newsrooms as a whole: proofreading, automatic summarization, data classification, data visualization and much more. (Please note that I did not say "content creation".)
There are a lot of projects and experiments, but few actual journalism-centric products. News organizations are, right now, adhering to policies and programs from Big Tech companies while trying to navigate the landscape without unnecessary friction (well, maybe not The New York Times), getting to know the field and its rules.
At the same time, many different industries have been pouring money into AI-powered solutions, leaving media organizations as buyers of subscription-based technologies that, in a way, they helped create (by providing high-quality training data for new language models).
This whole dynamic puts journalists in a very sensitive position: as companies start to deploy products and programs for newsrooms (like Google's initiative to offer an AI tool that produces content from public sources), the market for news organizations to build their own services, serving their own needs and interests and generating their own revenue, keeps shrinking.
OpenAI's agreements with local news organizations in the US and with big media groups in Europe add further complexity to this equation, especially because they involve money. They are a step in the right direction, and they signal the value of journalistic content. And, sure enough, if deep-pocketed tech companies want to send money our way, we should accept it. But these deals also keep news organizations within tech companies' low orbit, and we all remember the fate of the many news organizations that relied solely on Facebook, Google and other platforms as their business model: a stark reminder that we also need revenue from innovative tools and methods of our own making.
Journalists report on the newest technical aspects of AI, on how many parameters a model has and how large its context window is. They write about regulation and how parliaments all over the world are approaching the question. They publish complex analyses that explain and evaluate the opacity surrounding copyright and the training of new large language models. And yet their own use of AI capabilities is basic, to say the least.
There is, understandably, some fear among media professionals that these new, more capable large language models might come for their jobs. Tools like the one created by Google only make those fears worse. All of this makes it even more important for news organizations to partner up and create something of their own.
The Associated Press has done very interesting work with AI over the past year, especially around fact-checking. The New York Times is pushing ahead as well. We need more of that, not only from major news organizations but from across the industry.
Journalists need to understand, in the end, that AI is a resource, a tool, and that it needs humans behind it. It can help with the more menial aspects of our trade by handling time-consuming tasks, and it can catch the typos and other silly mistakes that our weary eyes fail to spot. But as good as these models are right now, their job is to pile word after word based on statistical parameters; they cannot create knowledge, only replicate it. They need human judgment behind them to be truly effective. I doubt this will change anytime soon, especially in light of upcoming regulations and risk assessments.
This is undoubtedly tricky terrain to cross, but terrain into which newsrooms nevertheless need to venture.
Sérgio Spagnuolo is Executive Director at Brazilian newstech organization Nucleo Jornalismo and Innovations Consultant at the International Press Institute.