That’s a catchy blog headline, I hope, or at least an important issue. But I’ve fooled you because while answering the question, I am going to discuss “audit” again. I wrote earlier that one kind of audit is performed by election officials to detect errors in voting machines, or to put it another way, to ensure that election results weren’t garbled by the computers used to create them. That sounds like a good thing to detect, and ensure, but how can we understand whether the detection is effective? Today’s post is the beginning of an answer to that question.

And it’s a very relevant question, because we know from last year’s experience in Humboldt County, CA that malfunctions do occur. In fact, with just the right bad luck in the locales affected, perhaps only half a dozen Humboldt-sized, Humboldt-style glitches would have been required to swing MN’s close Coleman-Franken race. And recall that each county has hundreds of opportunities for such a glitch! Five or ten malfunctions out of thousands of machine counts, across a medium-sized state, may not sound like a lot, but it’s enough to swing a major contest every few years.

To take a specific example, let’s look at the voting method of paper ballots, counted by machine partly in polling places and partly in a central facility. (Similar issues apply to other voting methods, including those using touch-screens or other direct-record devices.) One audit procedure is essentially a hand-count “spot check” or partial “re-do” of the machine count. Precincts are randomly selected until the chosen set’s combined ballots exceed some threshold percentage of the vote, say 1%. Then each of these precincts’ ballots is re-counted by hand, for each contest, and the hand-count results are compared to the machine count. There are often small variances (different interpretations by people and software), and these are scrutinized and documented to ensure that they are in fact borderline interpretation cases, or due to some other procedural, non-technical issue. Any substantial variance would be a sign of potential machine malfunction, and would trigger further hand counts until either the audit procedure’s rules are satisfied or those rules trigger a full re-count.
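The selection step above can be sketched in a few lines of code. This is a minimal illustration, not any jurisdiction’s actual procedure: the precinct names and ballot counts are invented, and real audits draw their random selections with publicly verifiable methods (dice rolls, published seeds), not a plain software RNG.

```python
import random

# Hypothetical data: ballots cast per precinct. These names and counts are
# invented for illustration; a real audit uses the county's precinct list.
precinct_ballots = {f"P{i:03d}": random.randint(200, 1500) for i in range(400)}

def select_audit_precincts(precinct_ballots, threshold=0.01, seed=None):
    """Randomly pick precincts until their combined ballots exceed
    `threshold` (e.g. 1%) of all ballots cast -- the flat-percentage
    selection step described above."""
    rng = random.Random(seed)
    total = sum(precinct_ballots.values())
    target = threshold * total
    names = list(precinct_ballots)
    rng.shuffle(names)  # stand-in for a publicly verifiable random draw
    chosen, covered = [], 0
    for name in names:
        if covered >= target:
            break
        chosen.append(name)
        covered += precinct_ballots[name]
    return chosen, covered, total

chosen, covered, total = select_audit_precincts(precinct_ballots, 0.01, seed=1)
# Every precinct in `chosen` would then be hand-counted, contest by contest,
# and the results compared against the machine count.
```

Note that the threshold is on ballots covered, not on the number of precincts, so a county with a few very large precincts may audit only a handful of them.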

Fair enough, but in the typical case where 1% of a county’s paper ballots have been audited with no errors detected, what do we actually know? How confident can we be that the remaining unaudited ballots were correctly machine-counted? What if a race is pretty darn close, say a 2% margin, but not so close as to trigger a recount? If 1% of ballots were audited, what can we expect about the other 99% of ballots, and the chance that machine-counting errors might have changed the election result?
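We can put a rough number on that uncertainty. Suppose a county has 400 precincts, of which some small number suffered a Humboldt-style glitch, and a flat 1% audit happens to sample 4 precincts. The chance the audit catches at least one bad precinct follows directly from counting the ways to draw only clean ones. All the specific numbers here are invented for illustration; they are not from any actual election.

```python
from math import comb

def detection_probability(num_precincts, num_bad, num_audited):
    """Chance that a uniformly random audit of `num_audited` precincts
    includes at least one of the `num_bad` miscounted precincts:
    1 - C(N - bad, audited) / C(N, audited)."""
    clean_draws = comb(num_precincts - num_bad, num_audited)
    all_draws = comb(num_precincts, num_audited)
    return 1 - clean_draws / all_draws

# Hypothetical scenario: 400 precincts, 6 with glitches, 4 audited
# (roughly a 1% sample by precinct count).
p = detection_probability(400, 6, 4)  # comes out to roughly 6%
```

In other words, under these assumptions a clean 1% audit would miss all six bad precincts about 94% of the time, which is exactly why the flat-percentage approach, by itself, tells us less than we might hope.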

Yes, I started with a general question, and answered it with some more specific questions. But at least I didn’t bore you with too much more of the A-word. Coming soon: another post that answers the questions remaining from today, by explaining in simple terms what a “risk-limiting audit” is, how it differs from the flat-percentage audit discussed today, and, finally, how you can tell, for any election you like, whether the election officials were able to test whether the election results were garbled by the computers used to create them.

— EJS