Welcome to the ‘Wild West’ of Generative AI in Documentaries

Critics allege Jennifer Pan’s misaligned teeth are evidence AI was used in Netflix’s “What Jennifer Did.” (Netflix)

The Netflix true crime documentary “What Jennifer Did” skyrocketed to the top of streaming charts with more than 26 million hours viewed in its premiere week last month. The unsettling film details a brutal 2010 attack on Jennifer Pan’s parents and her subsequent murder conviction. But the story itself was eclipsed by questions about how it was told.

Shortly after the film’s release, the tech publication Futurism accused the filmmakers of using AI-generated images of Pan to illustrate a friend’s description of her as “bubbly, happy, confident and very genuine.” The story described two images of Pan — one smiling, one making peace signs — with “mangled hands and fingers, misshapen facial features, morphed objects in the background and a far-too-long front tooth.” (Executive producer Jeremy Grimaldi has denied using AI. “The photos of Jennifer are real photos of her,” he told the Toronto Star. “The foreground is exactly her. The background has been anonymized to protect the source.”)


The headlines are part of a wider conversation about the use of generative artificial intelligence (GenAI) — a branch of AI that can create video, photos, audio and other content based on prompts — in documentary filmmaking. The emergence of this rapidly evolving technology opens doors to both opportunities and pitfalls, especially in an industry where accuracy is a fundamental tenet. In fact, inaccuracy was the most-cited risk in a 2023 survey of workers whose organizations had adopted GenAI, which can blur the lines between truth and fiction. It’s a delicate dance that has already led storytellers into ethical gray areas.

In 2021, synthetic audio of the late Anthony Bourdain caused an uproar when filmmaker Morgan Neville revealed he used GenAI to emulate Bourdain reading an email in the documentary “Roadrunner.” “We can have a documentary ethics panel about it later,” Neville told The New Yorker. The following year, the docuseries “The Andy Warhol Diaries” used GenAI to let a simulated version of the pop artist narrate his diary three decades after his death — though director Andrew Rossi said the Andy Warhol Foundation granted him permission.

From left: Stephanie Jenkins, Rachel Antell and Jennifer Petrucelli of the Archival Producers Alliance.

“The ethical issue I’m most thinking about is the power of generative AI to mislead,” said David Rand, a professor of management science and brain and cognitive sciences at MIT who researches the intersection of AI and misinformation. “People are worried about the ability of generative AI to produce deceptive materials.”

The Archival Producers Alliance, a group of more than 300 documentary filmmakers seeking to establish standards for the use of GenAI in their work, describes the technology as the “Wild West.” “Sometimes in order to tell stories we need to get creative, but there isn’t yet a cinematic language for getting creative with generative AI,” said Stephanie Jenkins, co-founder of the APA. “Because it’s so new at this point, we need to be transparent with our audiences because we don’t want there to be confusion that material is real and authentic when it is not.”

Jenkins and her APA co-founders, Rachel Antell and Jennifer Petrucelli, fear that without best practices, GenAI could muddle the historical record. “This is a moment in time for the documentary community to reaffirm what its values have always been and to incorporate the latest technology,” Antell said. “It isn’t a departure from those beliefs — this is just a new iteration.”

Don’t bypass primary sources

For some documentarians, GenAI can be a solution — a technology to turn to when authentic audiovisual records aren’t available. “Maybe we can’t find the primary sources, or maybe those didn’t exist or were actively suppressed or actively destroyed,” Jenkins said about using GenAI.

But finding authentic primary sources tied to moments in history should be a first stop for documentary storytellers, she said. “What I don’t want is for [GenAI] to become a replacement for doing research and connecting the dots in the real world,” Jenkins said. “This is just as much an affirmation of history and the importance of working and wrestling with history as it is about learning a new technology.”

While both GenAI and primary sources have biases, the APA notes crucial distinctions between the two. Producers can learn about the owner, context and intent of archival material. Content created by GenAI, however, can be a mystery because it draws from multiple unknown internet sources. “It’s easy and inexpensive and fun to play with and see what [GenAI] comes up with, but there are real risks, especially in this realm of nonfiction documentary work,” Petrucelli said.

Plus, unlike humans, the algorithm lacks abstract knowledge. That’s why you’ve likely seen an AI-generated image of a person with an extra finger on their hand. “It doesn’t conceptually understand why a hand has five fingers,” Rand said. “It’s really good at some things and amazingly bad at other things.”

Be transparent

A vital building block of nonfiction storytelling, whether journalism or documentary filmmaking, is transparency. When a storyteller is transparent, audiences can decide for themselves the credibility and trustworthiness of the information presented to them.

The use of GenAI should be continually communicated to the audience, beginning with broad messaging in the opening or closing credits, the APA argues. More transparency is necessary when an element is specifically tied to a time, place or person. An on-screen note in the lower third of the screen, a watermark or explicitly addressing the use of GenAI in the film itself can be sufficient, Antell said.

She cites the documentary “Another Body” as a strong example of incorporating GenAI into the narrative arc with creativity and transparency — AI-generated “face veils” are used to protect the identities of victims of deepfake abuse. Visual cues, like a shift into sepia tone, can also be used as a thematic approach.

“There’s a contract between the filmmaker and audience that says what we’re showing you, you can assume to be true,” Antell added. “Anytime you’re stepping outside of that, you don’t want to break that contract. So, you need to be explicit.”

David Rand

To standardize the labeling of AI-generated content, Rand has co-authored a policy brief that lays out a framework for what types of content to label and the efficacy of different approaches.

“If you want a North Star for using generative AI as a good actor in the world,” Rand said, “the No. 1 thing is to make sure you’re using it in a way that’s not misleading, but instead helping people have an accurate understanding of the world.”

Likewise, transparency among colleagues during the production process is also important. When gathering elements, teams should be able to differentiate between authentic and inauthentic online material to avoid accidentally bringing AI-generated media into their files.

The APA also suggests engaging in open conversations with colleagues about any GenAI material you intend to use. Ethical or legal concerns should be addressed during production — early and often. The group recommends keeping a cue sheet that tracks any AI software used, the prompts fed to generate content and the dates of creation.

Consider how GenAI could elevate your work

Speaking out on camera can sometimes put sources in danger. This fear was palpable in the documentary “Welcome to Chechnya,” which followed a group of activists fighting the government’s campaign to capture and kill LGBTQ+ Chechens. Oscar-nominated director David France, however, found an innovative solution: he used GenAI to create deepfakes that allowed victims of Chechnya’s oppression to share their stories — disguised as someone else.

GenAI can also be used for re-creations, a critical creative tool in documentaries when primary sources can’t be located or simply don’t exist. A traditional re-enactment can involve actors, set designers, costume designers, cinematographers, lighting directors, researchers, animators and illustrators. But the calculus changes when using GenAI.

“You take all those jobs out of there, of course, but you also take the human decision-making out,” Jenkins said, encouraging filmmakers to bring the same intentionality, accuracy and sensitivity they would to a traditional re-enactment. In other words, don’t lose your human touch.

Navigating the ethics of any new technology is challenging, but the first step is accepting that GenAI is here to stay. “Part of that is learning what it’s good at and not good at,” Rand said. Though you can’t rely on GenAI to be factually accurate, he says it can help you get closer to the truth. “Trust, but verify.”

Bernie Lubell

Bernie Lubell is a three-time Emmy Award-winning journalist, writer and senior producer at ABC’s “World News Tonight.” Bernie is also the recipient of three Edward R. Murrow awards for writing and producing pieces for NBC News on the Pittsburgh synagogue shooting and the racial justice riots of 2020.