What the Media Can Learn From the ‘Red Wave’ That Wasn’t

Mehmet Oz supporters during a rally near Bethlehem, Pennsylvania, on November 6. Oz, a Republican, lost the race for Pennsylvania’s Senate seat to Democrat John Fetterman. ED JONES/AFP via Getty Images

If you had even casually glanced at the news leading up to the midterms, you’d surely have heard a red wave was coming to wash over the country.

However, that wave barely crested. Instead of Republican candidates sweeping up seats in both houses of Congress, Democrats actually gained a seat in the Senate and, with several House races yet to be called, are on track to lose control of the lower chamber by only a few seats. Media prognostications were so far off that even the White House called out the press for having “egg on its face” over its coverage.


While flashy narratives and one-liners make for great headlines and D.C. cocktail party gossip, this year’s red fizzle shows it’s clearly time for news outlets to rethink how they frame elections. A journalist’s job is to inform the public about what’s at stake — how policies will affect communities, what candidates have done based on their records and actions — not get caught up in predictions. 

But before we can begin to understand where we go from here, we first need to understand where we went wrong. 

It’s time to deemphasize polls

Let’s start with polls. Journalists love them. They are one of the few indicators any of us have of the pulse of the electorate, and they make for clear, easy TV and web graphics. One of the first polls journalists latched onto this cycle was President Joe Biden’s near-historically low approval rating, which conventional political wisdom said would cost his party control of the House and Senate, especially since the party out of the White House usually gains seats in the midterms.

However, to paint a picture of what a diverse, complicated country is thinking ahead of voting day, you can’t just look at one polling question or fall back on historical patterns. And if we’re going to look at polling, it should also be noted that hypothetical Democratic candidates had been outperforming hypothetical Republicans in polls ever since the Supreme Court overturned Roe v. Wade, even outpolling Biden’s approval numbers.

But while pundits largely discounted or even ignored that polling, many journalists seized on the polls closer to the election. These showed Democratic candidates trailing conservative challengers even in deep blue states like Washington (Democratic Sen. Patty Murray went on to beat Republican Tiffany Smiley by almost 15 percentage points). This is how we ended up with journalists predicting a Republican surge.

There are many reasons why polling has become less accurate over time, including cellphones disrupting a polling industry that long depended on landlines. And not every poll is created equal. For example, a Slate analysis of polls by Trafalgar Group, an Atlanta-based conservative pollster given an A- rating for accuracy by FiveThirtyEight, revealed the group’s numbers were off, favoring conservatives, by whopping margins in just about every key race. Trafalgar, whose founder appeared on Fox News several times, also has a transparency problem, especially around disclosing how its polls were funded, according to The New York Times.

When I interviewed for my last big political journalism job, I told the editors I met with that I didn’t believe in polls. A palpable chill ran through the room as they asked me what I meant. I explained that polls don’t do much to inform the public, that they’re mostly useful to campaigns in gauging how voters feel about an issue, and that they shouldn’t be the focus of an election story.

As I pointed toward the nearby U.S. Capitol from the conference room window, I explained to them that my job was to take what was happening in that building, where the country was being legislated, and explain what it means to everyone else. How will their policies impact rural voters, low-income communities or LGBTQ people? Will their promises actually help or harm constituents? What have these candidates done versus what they said? This was what I felt my job was as a journalist. 

The editors seemed to like my answer well enough to hire me, but once I was on the job, many of the political stories I wrote centered on a poll an editor found interesting. It provided an easy, timely angle into a story. What, they wondered, could we say about it?

But polls don’t educate voters about important issues, like inflation or abortion. Polls don’t explain the reasons why politicians act, vote and campaign the way they do — like why Republicans spent millions of dollars on scare-mongering ads about trans people instead of Biden’s economic performance. Polls provide a snapshot of a portion of public opinion at a given time — but this information comes with a lot of limitations.
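One of those limitations is basic sampling error. As a rough, hypothetical illustration (using the standard 95% margin-of-error formula for a simple random sample, not any specific pollster’s methodology), a poll of 800 likely voters carries a margin of roughly 3.5 points in either direction, which means a headline-grabbing “48–45” lead is statistically a toss-up:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """95% margin of error for a simple random sample, using the
    normal approximation; proportion=0.5 gives the widest (most
    conservative) margin."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical statewide poll of 800 likely voters:
moe = margin_of_error(800)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.5 points
```

And that figure captures only random sampling error; it says nothing about who refuses to answer the phone, how “likely voters” are modeled, or how responses are weighted, which is where polls like Trafalgar’s went wrong.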

‘Both-sidesism’ doesn’t equal accuracy

Another reason why the media missed the story, some say, is bias. Mainstream news outlets may have been afraid of appearing biased toward Democrats, Norman Ornstein, emeritus scholar at the center-right think tank American Enterprise Institute, told CNN. In turn, they may have overcorrected their coverage, coming down harder on the left to seem more even-handed. 

“There is so many in the mainstream press that are just fearful to a remarkable degree of being branded as having a liberal bias. And what we see is that the reaction to that is to bend over quadruply backwards to show there is no bias,” Ornstein said. “This business of ‘both-sidesism’ to show that there is no bias gives us another kind of bias.”

Whether or not this is completely accurate, it does touch on why leaning into a particular narrative is troublesome. When your coverage consists primarily of predictions about who will win instead of analysis of the candidates, their policy positions and their campaigns, you’re not telling the whole story, and you’re doing your viewers and readers a major disservice.

Horse race coverage is also known to affect how people vote. There is the bandwagon effect, in which people are swayed toward the seemingly more popular candidate, and there is the possibility that polls affect voter turnout itself. If voters see their preferred candidate portrayed as a certain victor or a certain loser, they may be more likely to skip voting. Conversely, if coverage shows a very tight race, supporters of both candidates may be more likely to make voting a priority.

A way forward

One outlet that had a refreshing approach to election coverage this year was The Texas Tribune, a member-supported, nonpartisan online news publication. In August, the Tribune released a coverage guide explaining to readers what they should expect from its election coverage, including how the Tribune planned to hold politicians accountable and cover misinformation.

The part I found most interesting was that editors weigh reader feedback in making coverage decisions. “Instead of letting only politicians set the agenda, we talk to voters and scrutinize polling data to understand ordinary Texans’ top concerns,” reads the guide. “Our readers’ questions and needs help inform our priorities.”

A national media system more invested in accurately reporting voters’ concerns would have better detected the mass backlash against the overturning of Roe v. Wade, or noticed the disconnect between how voters felt about Biden and how they felt about their own representatives and senators. It might also have noticed that younger voters were fired up to turn out over these issues. Had outlets done so, they might have been more skeptical of the late-breaking red-wave polling that turned out to be erroneous.

Now is a good time for the media to reflect on what went wrong with the midterm predictions and to back off from narratives that are impossible to prove accurate. 2024 doesn’t need to look like 2022 or 2020 or 2016, with everyone shaking their heads on a Wednesday morning in November, asking, “What went wrong with the polls?”