AI Tools in Journalism: A Double-Edged Sword?

Source: editorandpublisher.com

Published on October 31, 2025 at 06:40 AM

What Happened

Last year, Hilke Schellmann, a journalism professor at NYU, found a major problem with Whisper, a popular AI transcription tool: it was adding lines of text that weren't in the original recording. This 'hallucination' issue is especially serious because Whisper is used in medical centers to transcribe doctor-patient conversations.

Schellmann and her team also tested several AI tools for summarizing transcripts of local government meetings and found their accuracy was low, at around 50%. That makes them hard for journalists to rely on, since important information might be missed. The results were even worse for AI tools that generate literature reviews of scientific papers, with accuracy rates so low that Schellmann wouldn't recommend using them at all.

Why It Matters

AI tools can still help journalists in some ways: improving text, analyzing data, even finding sources. Schellmann, for example, has used chatbots to find email addresses of hard-to-reach sources. But the tools' low accuracy rates raise ethical and practical concerns.

One issue is confidentiality. Journalists often conduct off-the-record conversations or protect sources' anonymity; if an AI tool records and transcribes those conversations, they may be violating their sources' trust and their newsroom's ethics guidelines.

Our Take

AI tools can be a helpful addition to a journalist's toolkit, but they're not a replacement for human work. They can save time and effort on certain tasks, but journalists need to be aware of their limitations and potential risks.

There's a need for a trusted outlet that tests and reviews these tools for journalists, something like a Wirecutter or Consumer Reports for AI. It would help newsrooms make informed decisions about which tools to use and how to use them ethically and effectively.