It’s been seven months since the artificial intelligence chatbot ChatGPT entered our consciousness, and the media landscape has already shifted dramatically. National and local publications have moved toward integrating generative AI, which creates text and images in response to prompts. BuzzFeed and Insider shared that they would begin experimenting with AI-powered quizzes and day-to-day assignments, then soon executed mass layoffs. In May, Wired announced that it would use text generators to suggest headlines, story ideas and short social posts.
The existential threat the technology poses to the industry can feel destabilizing.
“It’s very unsettling for writers and editors alike,” said Jeffrey Israely, co-founder of Worldcrunch, a Paris-based outlet that publishes English versions of foreign-language journalism. “We’ve been told for the last 20 years that we’re in an industry that’s protected from machines.”
But AI isn’t necessarily a harbinger of the end times for human-based journalism. When he launched Worldcrunch in 2012, Israely said, he was troubled to discover that many of his writers were using the AI-powered Google Translate to do their work. Then, in speaking with staff and freelancers, he slowly came to appreciate the ways in which AI has made reporting more efficient. Seasoned editors were using sophisticated translation tools like DeepL to generate rough English-language versions of French, German and Spanish news stories that they would then thoroughly edit.
“What we realized is that people had been using AI all along, but they knew how to use it,” said Israely, a former Time foreign correspondent. “They knew its limits and how to interact with AI, as it were.”
He said generative AI can be a starting point that, much like the internet, helps reporters gather useful research materials. Its ability to create new content based on existing data sets, if harnessed responsibly, can also help journalists execute time-consuming and costly tasks.
“It’s a question of finding out how to fuse that collection of background information with the job of communicating and writing,” he said. “But it cannot be faster and worse. The quality can never suffer as a result.”
How newsrooms can use AI for good
Subramaniam Vincent, director of the Journalism and Media Ethics program at Santa Clara University, said generative AI is revolutionary in its capacity to speak “like a human” in response to prompts, an advance he calls “communicative intelligence.”
AI models can simulate human conversations, sift through troves of documents and write entire articles by learning language patterns from vast amounts of textual data online. Vincent said some newsrooms have begun using generative AI to automate tasks that demand more manpower and resources than they can afford, such as summarizing complex city documents and generating stories from statistics and large datasets.
He said one outlet in particular plans to use text-to-image generators like Midjourney or Stable Diffusion Online to create header photos, the main art under a story’s headline, in lieu of hiring an illustrator. Still, he said, the machine should play only an assisting role in news gathering.
“Overall, I don’t think there’s a substitute for boots-on-the-ground reporting on emerging realities,” Vincent said. “Coverage of anything that involves the flexing of power can’t be done by generative tech.”
The Associated Press was one of the first news organizations to experiment with AI in news production and distribution. Since 2014, it has been using machine learning to transcribe videos in real time, automate stories about corporate earnings and improve image recognition, among other functions.
Aimee Rinehart, program manager for AP’s local news AI initiative, said generative AI is a “less predictable form of technology,” so it’s important for journalists to understand what it does and doesn’t do well.
“It can be enormously helpful in sorting out information and helping with distribution,” she said, “but it’s really lacking when it comes to knowledge capacity.”
For instance, ChatGPT could aptly summarize an article that’s been edited and fact-checked, Rinehart said, but it may not produce a detailed or accurate weather report. She said AI-authored stories should always include a tagline explaining how they’re produced, and newsrooms should have ongoing discussions about implications for audiences.
Brian Carovillano, senior vice president and head of standards at NBCU News Group, also said the priority is to be clear and transparent about news gathering. When deployed responsibly, he said, generative AI can offer reporters and editors “tremendous opportunities” to work more productively. The processing power of AI tools could help reporters transcribe and analyze large quantities of video content, a game changer for coverage of lengthy court recordings like the Jan. 6 testimonies. (AI-powered audio-to-text transcription tools like Otter.ai are already widely used in newsrooms to transcribe interviews with sources.)
At NBCU, Carovillano is part of a working group that’s building a framework for how reporters and editors can responsibly use AI.
“From my perspective, it’s just a matter of separating the risks from opportunities,” he said. “We want to make sure we’re mitigating the risks of exposing the public to AI-generated content.”
The dangers of journalists working with AI
Generative AI models are trained on massive troves of textual and visual data scraped from the internet. Because the internet is rife with disinformation, the content these models generate can be, too. ChatGPT is capable of mimicking the voices of specific publications, a function that’s already been abused by some news outlets. CNET, for example, came under fire earlier this year for publishing error-ridden AI-written articles without disclosure. A Guardian investigation found that ChatGPT attached the names of reputable reporters and researchers to fabricated articles and academic papers.
There is also the fear of plagiarizing sources without giving proper credit. That is why AI-assisted news gathering should still be vetted and properly cited by a human before it makes its way into published work.
A fundamental step toward building an AI ethics framework is disclosing policies and continuously monitoring all AI-assisted reporting, Vincent said. Another important action is facilitating open and honest discussions within the newsroom about how to responsibly use AI and reckon with its inherent biases.
NBCU’s guidance to editorial staff across platforms is that every story produced with AI assistance be reviewed by a human journalist before publication, Carovillano said. Journalists using AI should be actively checking in with their editors during their reporting. When appropriate, they should also disclose to their audience when and how the technology is deployed.
For Israely, the rise of generative AI does not portend a dystopian future for journalists, since only human beings can break news.
“The machine can only learn from the past,” he said. “Our job is ultimately to find new stories, and that’s the good stuff. Everything that hasn’t happened yet will be written by humans.”