Recently I’ve made a series of posts seemingly obsessed with chanting “audit, audit, …” mantra-like, to put readers into a trance. For those of you still awake enough to want to know how to find out whether election results were garbled by the computers used to create them, today we have some more answers. The key word is “risk limiting audit” and here today to explain it is election expert Mark Lindeman of Bard College. Over to Mark …
Many observers agree that electronic voting machines and optical scanners cannot be assumed to count votes accurately. A commonly proposed solution is to audit — through a hand count — a random sample of the paper ballots (or, perhaps, voter-verifiable paper records) from each election. For instance, California mandates a hand count in 1% of all precincts. Intuitively, if a “large enough” random sample of ballots uncovers no material counting errors, we can be confident that the count is accurate. But what counts as “material,” “accurate,” or “large enough”? Risk-limiting audits offer one answer: they are designed so that if miscounts have altered an election outcome, the wrong outcome is likely to be corrected via a full hand recount. “Likely” is not a weasel word: it is specified as a guaranteed minimum probability. If an audit guarantees at least a 99% chance of correcting an outcome when it is wrong, then we can say that there is at most a 1% risk of an undetected wrong outcome.
“Risk-limiting audits” may sound like a no-brainer, but most existing audits come nowhere near this standard. There are at least two big problems. One is getting a reasonable sample size. Suppose for a moment that in order to alter some election outcome, at least half the election precincts would have to be miscounted. At that miscount rate, a random sample of just seven precincts has about a 99% chance of including at least one miscounted precinct — whether the contest being audited is in a single congressional district or an entire large state. Now suppose that miscounts in just 5% of election precincts could alter the outcome. To get that same 99% chance of detecting some miscount, one needs to sample about 90 precincts — again, whether one is auditing a single CD or all of California. (The numbers do decrease for smaller contests, but not as fast as many people expect.) So a “1% audit” may be far larger than needed to confirm who won an election, or it may be far too small, depending on the size of the contest and the winning margin, among other things. Changing the percentage doesn’t solve the problem, but only alters the balance of “too-large” and “too-small” samples.
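The figures above come from simple binomial arithmetic. Here is a minimal sketch (my own illustration, not part of Mark’s analysis), assuming a simple random sample and a population large enough to ignore sampling-without-replacement effects: if a fraction p of precincts is miscounted, a sample of n precincts misses all of them with probability (1 − p)^n.

```python
import math

def detection_probability(n_sampled: int, miscount_rate: float) -> float:
    """Chance that a random sample of n_sampled precincts includes at
    least one miscounted precinct, when a fraction miscount_rate of
    all precincts is miscounted (with-replacement approximation)."""
    return 1 - (1 - miscount_rate) ** n_sampled

def sample_size_needed(miscount_rate: float, confidence: float) -> int:
    """Smallest sample size giving at least `confidence` probability
    of catching some miscounted precinct."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - miscount_rate))

# Half the precincts miscounted: seven sampled precincts suffice.
print(detection_probability(7, 0.5))    # ~0.992
# Only 5% miscounted: about 90 precincts are needed for 99% confidence.
print(sample_size_needed(0.05, 0.99))   # 90
```

Note that neither function takes the total number of precincts as an argument, which is exactly the point of the paragraph above: for a wide-margin contest the required sample depends on the miscount rate that could flip the outcome, not on whether the contest spans one district or a whole state.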
Another big problem with existing audits is the gap between detecting miscounts and actually correcting wrong outcomes. If an audit detects a miscount, what happens? In many states — including California — nothing. Some states do provide that sufficiently large miscounts trigger larger audit samples, and perhaps eventually full recounts; only a full recount, not a mere sample, can actually correct a wrong outcome. But even the best of these escalation rules are not very good: they don’t always count more when they should, and they sometimes count far more than necessary to confirm election outcomes. This is not to say that fixed-percentage audits are useless, but they aren’t tailored to the task of efficiently detecting and ultimately correcting most wrong outcomes.
Many thanks to Mark for this explanation! Coming soon: practical use of risk-limiting audits, and possibilities for DIY.