After the botched coverage of the Boston Marathon bombing investigation, the media excuse was that events were moving quickly. But they had weeks to cover the opening of HealthCare.gov, and most still got it wrong. (More)

A Year of Media Failure, Part II: Broken News

This week Morning Feature looks at the year’s media failures. Yesterday we began with ScandalFest 2013, a series of falsely-hyped and long-debunked partisan attacks, and two policy disputes that the media reported as “scandals.” Today we see broken news, from shoddy reporting on the Boston Marathon bombing investigation to shoddier reporting on the HealthCare.gov website. Saturday we’ll conclude with how the media claim a mission to hold government accountable, while largely rejecting accountability for themselves.

“You rarely get it right the first time”

At 1pm ET on April 17, CNN’s John King announced:

I was told they have a breakthrough in the identification of the suspect, and I’m told – and I want to be very careful about this because people get very sensitive when you say these things – I was told by one of these sources who’s a law enforcement official that this was a dark-skinned male. The official used some other words, I’m not going to repeat them until we get more information because of the sensitivities. There are some people who will take offense even in saying that.

I’m making a personal judgment – forgive me – and I think it’s the right judgment not to try to inflame tensions.

After so carefully explaining his “personal judgment,” 45 minutes later King reported that “an arrest has been made.” Only NBC’s Pete Williams refused to join the bandwagon, and the FBI later criticized the media for jeopardizing an ongoing investigation.

It wasn’t the only reporting blunder that week. The New York Post printed photos of two so-called “bag men” being sought in the Boston Marathon bombing, when both men had already been cleared. As Boston University journalism professor and veteran AP reporter Fred Bayles later wrote:

Twenty years at the Associated Press taught me that although getting it first was high on the list of journalistic accomplishments, getting it wrong was the absolute worst thing you could do. That sensible dogma has changed in the frenzied world of media saturation. The downside of this shift was again on display during the week of the Boston bombing coverage.

Mixed in with very commendable reporting was very, very bad journalism. There were two explosions. No there were three. Additional bombs were found and dismantled. A Saudi national, a person of interest, was under armed guard in a hospital surrounded by SWAT teams. Two days later came the report that suspects had been identified, arrested and were on their way to court. None of this breathless reporting was right.

Bad information was so endemic that we heard a reporter on one cable network explain that this was the new normal. “You rarely get it right the first time,” he said.

Yes, breaking stories are a challenge in today’s era of continuous 24-hour news. Law enforcement authorities sometimes identify what turn out to be wrong suspects, and eyewitnesses are notoriously unreliable. Yet “You rarely get it right the first time” seems a poor excuse for rushing to air the latest rumor.

“Mammoth swarms of data”

It’s an even worse excuse when a story plays out over weeks, such as the initial flaws in the HealthCare.gov data hub. The story is still usually framed as being about a “website,” as if HealthCare.gov were no different from your own personal site.

In fact, HealthCare.gov’s most vexing technical challenge began in 2011, with conservative concerns about a comprehensive user database. The administration recognized those concerns and designed HealthCare.gov as a data hub, as ComputerWorld explained in early September:

Described by CMS as a routing tool, the hub is designed to let state and federal facilitated healthcare marketplaces quickly verify the eligibility of individuals seeking insurance coverage. The system connects healthcare insurance exchanges with numerous federal government databases at agencies like the Social Security Administration, the Internal Revenue Service, the Department of Homeland Security and the Department of Veterans Affairs.

The Hub itself will not store any data. It’s designed to move information between the federal database systems and the marketplaces. “The Hub increases efficiency and security by eliminating the need for each Marketplace, Medicaid agency … to set up separate data connections to each database,” the CMS said.
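The architecture CMS describes – a stateless router that verifies eligibility against many agency databases without keeping a copy of anything – can be sketched in a few lines. This is an illustration only: the function names and stand-in agency checks below are hypothetical, not the hub’s actual interfaces.

```python
# Hypothetical sketch of a "data hub" as a stateless router.
# Each agency check is a callable; the hub fans the applicant's data out
# to every agency and returns only the pass/fail results -- it stores nothing.

def make_hub(agency_checks):
    """agency_checks: dict mapping agency name -> function(applicant) -> bool."""
    def verify(applicant):
        # Route the data to each agency and collect verification results.
        return {agency: check(applicant) for agency, check in agency_checks.items()}
    return verify

# Stand-in checks for illustration only (real checks are remote database calls).
checks = {
    "SSA": lambda a: len(a["ssn"]) == 9,       # Social Security number looks valid
    "IRS": lambda a: a["income"] >= 0,         # income figure is plausible
    "DHS": lambda a: a["citizen_or_lawful"],   # lawful-presence flag is set
}

hub = make_hub(checks)
result = hub({"ssn": "123456789", "income": 42000, "citizen_or_lawful": True})
# result -> {"SSA": True, "IRS": True, "DHS": True}; the hub keeps no copy.
```

The design tradeoff is the story’s whole point: because the hub holds no data of its own, every application must reach out to the agencies in real time, which is exactly where the launch-day load problems arose.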

By October 4th, Reuters had identified the core problems:

One possible cause of the problems is that hitting “apply” on HealthCare.gov causes 92 separate files, plug-ins and other mammoth swarms of data to stream between the user’s computer and the servers powering the government website, said Matthew Hancock, an independent expert in website design. He was able to track the files being requested through a feature in the Firefox browser.
“They set up the website in such a way that too many requests to the server arrived at the same time,” Hancock said.

Not storing users’ data internally forced the hub to exchange secure data with dozens of government and private insurance company databases, for tens of thousands of users at a time, simultaneously.
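Hancock’s diagnosis – too many requests arriving at the server at the same time – is a classic concurrency problem, and a standard mitigation is to cap the number of requests in flight. The sketch below (hypothetical, with the network call stubbed out) uses a semaphore to guarantee that 92 fetches never exceed a fixed concurrency limit:

```python
import threading

MAX_IN_FLIGHT = 6  # browsers typically cap concurrent connections per host

sem = threading.Semaphore(MAX_IN_FLIGHT)
lock = threading.Lock()
in_flight = 0
peak = 0  # highest number of simultaneous fetches observed

def fetch(resource):
    """Stand-in for downloading one of the 92 files; tracks peak concurrency."""
    global in_flight, peak
    with sem:  # blocks if MAX_IN_FLIGHT fetches are already running
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... actual network I/O would happen here ...
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=fetch, args=(f"file{i}",)) for i in range(92)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The semaphore ensures the server never sees more than MAX_IN_FLIGHT at once.
assert peak <= MAX_IN_FLIGHT
```

Without such throttling – or without bundling 92 resources into far fewer requests – every new visitor multiplies the simultaneous load on the servers.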

Three weeks after HealthCare.gov opened, Forbes contributor Anthony Kosner explained the problem again:

Far more efficient than cross checking multiple times with multiple sources during the data intake process would be to take all the data in, validate locally for obvious errors and missing data and then validate with all external agencies once at the end. Or better yet, the external validation could be going on asynchronously in the background while the rest of the information is being entered. If this external validation is a bottleneck, the user could be guided to the rate information based on the data they entered with the clearly marked caveat that final rates would be offered once all of a user’s data has been verified.

The point is that this is not a monumental amount of data to collect from each user. As a frontend exercise it’s rather trivial. But it is a mess on the backend, with its intricate dance of server calls and data validation, and these complexities are far harder to sort out.
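Kosner’s proposed fix – validate locally right away, then run the slow external cross-checks asynchronously while the user keeps typing, joining the results once at the end – maps directly onto a worker-pool pattern. A minimal sketch, with hypothetical helper names and simulated agency delays:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Hypothetical helpers standing in for real services.
def validate_locally(field, value):
    """Cheap, immediate checks: obvious errors and missing data."""
    return value is not None and value != ""

def validate_externally(field, value):
    """Slow cross-check with an external agency (delay simulated)."""
    time.sleep(0.1)
    return True  # assume the agency confirms the value

executor = ThreadPoolExecutor()
pending = []

# Intake loop: local checks happen now; external checks run in the background.
for field, value in [("name", "Jane Doe"), ("ssn", "123456789"), ("income", "42000")]:
    assert validate_locally(field, value), f"fix {field} before continuing"
    pending.append(executor.submit(validate_externally, field, value))

# The user keeps filling out the form here; external checks overlap that work.
all_verified = all(f.result() for f in pending)  # join once, at the end
print("final rates offered" if all_verified else "rates pending verification")
```

The design choice Kosner describes is simply to overlap the slow external validation with the data entry the user is doing anyway, rather than blocking the intake process on every cross-check.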

Yet three days after Kosner’s column, nineteen days after the Reuters report, and six weeks after ComputerWorld explained how the data hub was set up, CNN’s Wolf Blitzer breezily ignored the technical challenges:

BLITZER: You should not have to give the private password information just to do window shopping for options. You should be able to go there, check out various options. And then, when you make a decision, then you go in there. You put the password, the Social Security number, all of that very confidential information, and that decision was made only two weeks before October 1st.

In fact, HealthCare.gov was largely fixed by November 30th, as promised, and it has been improving steadily ever since. That could be a story about how quickly the “surge” tech team debugged the incredibly complex data hub. Instead, Ron Fournier likens it to the Iraq War … three months after the ComputerWorld explanation.

Yes, it’s hard for the 24-hour media to keep up with breaking events. But when the facts have been out there for three months, you’d think reporters could debug their work. Maybe they need a fact surge.


Happy Friday!