Tuesday, February 22, 2011

Keeping up appearances at PolitiFact

Yesterday PolitiFact published a piece by editor Bill Adair apparently intended to reassure readers that PolitiFact is, well, politifair in the way it does business.

Given Adair's recent history of expressing indifference to the public's perception of bias at PolitiFact, this is a significant development.  Eric Ostermeier probably deserves a great deal of the credit for putting PolitiFact on the defensive.  Ostermeier published a study of PolitiFact's results suggesting the strong possibility of selection bias and called for PolitiFact to make its selection process transparent.

Though Ostermeier's name might as well have been "Voldemort" for purposes of Adair's article, the latter probably serves as Adair's response to Ostermeier's call.

How does the answer measure up?
Editor's Note: We've had some inquiries lately about how we select claims to check and make our rulings. So here's an overview of our procedures and the principles for Truth-O-Meter rulings.
The editor's note is about half true.  PolitiFact didn't just have inquiries.  It found itself criticized by a serious researcher who made a good case that PolitiFact ought to be viewed as having a selection bias problem unless PolitiFact could allay the concern by making its methods transparent.  The editor's note isn't exactly transparent.

Adair's off to a great start!
The next paragraph covers the basic description of PolitiFact.  It's a fact-checking website.  We get it.  But the second paragraph contains an intriguing tidbit:
PolitiFact staffers research statements and rate their accuracy on the Truth-O-Meter, from True to False. The most ridiculous falsehoods get the lowest rating, Pants on Fire.
The most ridiculous statements get the "Pants on Fire" rating?  The description PolitiFact has used for years suggests that the difference between "False" and "Pants on Fire" is that the latter are ridiculous claims:
False – The statement is not accurate.
Pants on Fire – The statement is not accurate and makes a ridiculous claim.
I had imagined it was difficult enough to objectively determine the presence of ridiculousness.  But now Adair introduces the possibility that claims rated above "Pants on Fire" on the "Truth-O-Meter" might likewise count as ridiculous, leaving us with an unspecified degree of ridiculousness separating the merely false from the sizzled shorts.

So far, Adair has only made things worse.  The new description seems to contradict the old.

After more routine description of PolitiFact's function we get to the key heading:  "Choosing claims to check."
Every day, PolitiFact staffers look for statements that can be checked. We comb through speeches, news stories, press releases, campaign brochures, TV ads, Facebook postings and transcripts of TV and radio interviews. Because we can't possibly check all claims, we select the most newsworthy and significant ones.
PolitiFact solicits readers' suggestions.
Again, Adair provides a description apparently at odds with past descriptions of PolitiFact's methods.  The preceding description makes it sound like PolitiFact staffers independently search the gamut of political material for potential stories.  But this account leaves out PolitiFact's past (and present -- see the image embedded at right) calls for readers to send in their suggestions.

If PolitiFact steadfastly maintains editorial independence in ferreting out stories then it misleads readers by making it appear that the organization has an interest in their story suggestions.  Alternatively, Adair's explanation misleads readers as to how the story selection process works.

Speaking of that explanation, there's more:
In deciding which statements to check, we ask ourselves these questions:
  • Is the statement rooted in a fact that is verifiable? We don’t check opinions, and we recognize that in the world of speechmaking and political rhetoric, there is license for hyperbole.
  • Is the statement leaving a particular impression that may be misleading?
  • Is the statement significant? We avoid minor "gotchas" on claims that obviously represent a slip of the tongue.
  • Is the statement likely to be passed on and repeated by others?
  • Would a person hear or read the statement and wonder: Is that true?
PolitiFact doesn't check opinions, eh?  And grants license for hyperbole?

Rating a statement that may mislead seems like a good rationale for performing a fact check.  But this criterion alone leaves us hard pressed to explain Ostermeier's twin findings: Republican and Democrat officeholders received about the same number of ratings, yet one side received significantly worse grades.  If one side really does make more questionable statements, then this criterion should lead us to expect PolitiFact to check more of those questionable statements, and thus more statements from the side making them.  To illustrate with hypothetical numbers of my own: if one party made misleading claims at twice the rate of the other, a selection process driven by this criterion should produce roughly twice as many checks of that party, not an even split with worse grades.  What is Adair not telling us?

Does PolitiFact rate insignificant statements?  That's debatable.  Avoid "gotchas"?  Also debatable.  There's apparently quite a bit of room between a minimally significant statement and an obvious slip of the tongue.  Wiggle room is great if you're PolitiFact.  But it does little to promote transparency.

Likewise "Is the statement likely to be passed on and repeated by others?"  How does one make a neutral judgment on that apart from waiting to see if it gets passed on and repeated by others?  And if PolitiFact waits to see the phrase repeated then it matters quite a bit what sources end up repeating the claim (JournoList, anyone?).  It makes up one more potential avenue for selection bias.

Supposedly PolitiFact judges whether the typical person would hear or read a statement and wonder whether it is true.  As with others on the list, this criterion hardly restricts the reporter at all.  PolitiFact's "Lie of the Year" for 2010 was regarded as true by a majority of Americans, according to a poll cited in the story concerning that so-called lie.

That's how PolitiFact selects which claims to check.  Adair's description does nothing to address Ostermeier's criticism.  Perhaps Adair fails to understand that simply describing the wide latitude PolitiFact grants itself in choosing its stories does not defuse the charge that PolitiFact has a selection bias problem.  Or maybe he understands it and doesn't care, the response simply indicating his desire not to "undermine the ability of readers, viewers or listeners to believe what they print or broadcast."  Whatever the case, Adair has made PolitiFact's methods look even more questionable by offering discrepant explanations that fail to address the problem even if we ignore the discrepancies.

There's more.


Adair tries to reassure readers about PolitiFact's transparency under the heading "Transparency and on-the-record sources":
PolitiFact relies on on-the-record interviews and publishes a list of sources with every Truth-O-Meter item. When possible, the list includes links to sources that are freely available, although some sources rely on paid subscriptions. The goal is to help readers judge for themselves whether they agree with the ruling.
Those in PolitiFact's audience who think "on the record" means that you get to read the record may be forgiven their misconception.  "On the record" is a journalistic term for an interview whose quotations may be freely used and attributed.  It's still up to the journalist to decide what makes it into the published record.  That fact creates a problem for Adair's claim that the source list includes, "when possible," links to sources that are "freely available."  PolitiFact could make its expert interviews more fully available, and that would help readers judge the issues for themselves.

That last line of Adair's is kind of ridiculous, isn't it?  Why would any reader need help judging whether to agree with a PolitiFact ruling?  The statement almost makes sense if we presuppose that the goal is to equip readers with the tools to make an accurate ruling on their own.  But if that's the goal then why bias the reader's effort with a bold graphic indicating what PolitiFact editors think the reader ought to think?  The "Truth-O-Meter" ends up undermining the goal, unless the goal is actually to influence the reader into agreement with the judgment of the editors.

There's more.

Adair peddles the idea that PolitiFact has principles under the heading "Principles in Truth-O-Meter rulings":
Words matter -- We pay close attention to the specific wording of a claim. Is it a precise statement? Does it contain mitigating words or phrases?
Don't make me laugh.

PolitiFact's history is littered with obvious failures to pay attention to precise wording and context (another example, not relying on my analysis, here).  But Adair's correct if we interpret him charitably.  Words matter sometimes.
Context matters -- We examine the claim in the full context, the comments made before and after it, the question that prompted it, and the point the person was trying to make.
As with Adair's claim about paying attention to "specific wording," finding exceptions to this assertion amounts to picking low-hanging fruit.  Let's not doubt Adair's sincerity that PolitiFact aims to meet these criteria, but he makes it sound as though PolitiFact succeeds in the attempt.  That claim doesn't pass the sniff test.
Burden of proof -- People who make factual claims are accountable for their words and should be able to provide evidence to back them up. We will try to verify their statements, but we believe the burden of proof is on the person making the statement.
PolitiFact produced a remarkably recent counterexample to this claim by Adair.  PolitiFact failed to confirm a claim by the Obama administration, then proceeded to treat the claim as true.  PolitiFact is correct that the burden of proof rests on the one making a claim, but wrong to suppose that a fact checker may use a failure to meet that burden (or, as in the aforementioned case, the reverse) to justify assigning a truth value to the statement.  Doing so commits the fallacy of appeal to ignorance: if we don't know the claim is true, then the claim is false; if we don't know the claim is false, then the claim is true.  Burden-of-proof considerations don't really assist in fact checking, though it may interest readers to learn that a speaker could not or would not produce support for a claim.
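
To put those two patterns in schematic form (the schema is my rendering, not anything Adair wrote; "p" stands for the claim under review):

  We cannot show that p is true; therefore, p is false.
  We cannot show that p is false; therefore, p is true.

Either way the conclusion outruns the evidence.  A failure to verify or refute p leaves p's truth value undetermined, which is precisely why burden of proof cannot substitute for fact checking.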
Statements can be right and wrong -- We sometimes rate compound statements that contain two or more factual assertions. In these cases, we rate the overall accuracy after looking at the individual pieces.
Has PolitiFact shifted its stance on this one?  Compare:
The Truth-O-Meter is based on the concept that – especially in politics - truth is not black and white. Depending on how much information a candidate provides, a statement can be half true or barely true without being false.
About PolitiFact
Adair's recent phrasing appears to grant that truth is black and white but that compound statements (those containing more than one factual assertion) defy simple categorization as true or false.  Perhaps it is not a reversal but simply inelegant wording in the "About PolitiFact" version.  Ultimately, however, PolitiFact's grading system doesn't help much in judging compound statements.  A bad argument can have all true premises, after all, and averaging the accuracy of the individual pieces will never detect that problem.
Timing – Our rulings are based on when a statement was made and on the information available at that time.
It took me a moment to think of an exception to this claim (in contrast to the earlier ones), but there is one.  PolitiFact ruled false a claim that was based on an ABC News report, and an ABC News report is "information available at that time."  PolitiFact simply found the claim poorly supported (see "Burden of proof," above) and ruled it false anyway.

There's more.

Adair:
Process for Truth-O-Meter rulings

A PolitiFact writer researches the claim and writes the Truth-O-Meter article with a recommended ruling. After the article is edited, it is reviewed by a panel of at least three editors that determines the Truth-O-Meter ruling.
They vote on the truth.  And you don't hear about the split decisions, if any.  Is anyone reassured by that?  I guess it could help if a conservative minority's vote stood any chance of counteracting the left-leaning bias of the newsroom majority.

There's more, after I skip Adair's segment on the "Flip-O-Meter."

We come to PolitiFact's corrections policy:
We strive to make our work completely accurate. When we make a mistake, we correct it and note it on the original item. If the mistake is so significant that it requires us to change the ruling, we will do so.
When PolitiFact omitted the word "billion" from an item about the economic stimulus package, the correction was made without giving the reader any notice whatsoever.  Omitting a word is not ordinarily a serious error, but when the claim is made that mistakes are noted on the original item and that is not the case, then something's wrong.

Adair continues:
When we find we've made a mistake, we correct the mistake.
  • In the case of a factual error, an editor's note will be added and labeled "CORRECTION" explaining how the article has been changed.
  • In the case of clarifications or updates, an editor's note will be added and labeled "UPDATE" explaining how the article has been changed.
  • If the mistake is significant, we will reconvene the three-editor panel. If there is a new ruling, we will rewrite the item and put the correction at the top indicating how it's been changed.
The described policy is a good one if PolitiFact actually follows it.  I think the policy is new.  Adair and PolitiFact would prefer that people believe a standard like this has always been in effect, but the evidence suggests that either the policy is new or exceptions to it have been made.

If PolitiFact graded Adair's piece by the same standards it applies to others, he would be hit with a bunch of ratings in the "Half True" region of the "Truth-O-Meter."  As a reply to Eric Ostermeier's concerns about selection bias (not to mention mine), Adair rates an "Epic Fail" on the Futile-O-Meter.


Feb. 23, 2011:  Changed "it" to "the latter" to help remove ambiguity in the third paragraph.
