
Transparency, Voting Machines, Choices

Today I provide the next step in clarifying TTV goals in relation to discussions with election transparency advocates. Regarding the previous posting, I want to emphasize that voting machines — in this case we focus on paper ballot scanning machines — are a transparency problem if there is no human involvement in counting paper ballots and the public has no access to audit records of the counting process. Even with current systems, election officials can choose to mitigate these difficulties; and as I said before, we will deliver to them some technology that can make that a lot easier to do.

Today, I wanted to talk about choices. In discussing voting machines as part of the problem, it seemed that TTV might be part of the problem too, because we are failing to advocate for hand counting of paper ballots, for abandoning paper ballot scanning devices, or both. So let me be clear about that: it is true that we are not advocating those positions, not lobbying legislators to make such changes in election law, and not advocating that election officials make those particular changes in their election methods. Such advocacy work may be to the public benefit, and is rightly performed by activists and advocates.

The choice is with election officials, on how to use available technology. In making available some new paper-ballot-counting technology, we are not advocating that a particular voting method be used. I’ve listed several voting methods below as an illustration of the many choices that election officials could make, all of them choices in which new voting technology could be used and could help with transparency. With the exception of advocates of a completely zero machine count (a worthy topic for another day), we hope that advocates of many positions might extend the benefit of the doubt that our efforts can help, at a minimum with some interesting “side effects” that I’ll discuss next time.

— EJS

PS: Here is that list of several kinds of voting methods:

  • Polling-place machine-counted paper ballots, centrally machine-counted other ballots, and a minimum 2% partial hand count in a risk-limiting audit methodology;
  • Similar, but with 100% hand counting, for the full benefit of each coeval counting method checking the other (consilience), and a standard methodology for auditing and resolving differences;
  • Hand counting, with machine counting for consilience benefits in recounts, and in automatically triggered audits of contests above a specified “close result” level;
  • Polling place electronic voting (no paper ballots), centrally machine-counted vote-by-mail ballots;

As you can see, that’s a broad range, and with variants of each, there are dozens of choices. Paper ballot scanning/counting devices have a role in each, and do not preclude any of these choices. Again: 100% hand count, 0% machine count is a separate topic I promise to get to.

Virus in NY Voting Machine? Not Really

The reports of computer viruses in NY voting machines — though spurious — cause me to return to a basic mantra of TrustTheVote: we do technology development so that election tech helps inspire public confidence in elections, rather than erode it.

The NY case is a great example of erosion, but also a cautionary tale for future inspiration. The caution comes from the significant and ongoing confusion about the term “virus”. But first, the situation in question arose in Hamilton County, NY, part of the hotly contested NY 23rd Congressional District race between Hoffman and Owens. It’s an ugly scene: the vote was close, it’s already certified, Owens is seated, but re-canvassing efforts highlighted some counting irregularities. These weren’t large enough to affect the race, but were enough to prompt Hoffman to retract his concession, and to issue a letter with some really disturbing claims of the election having been stolen. Now, add to this the claim that the election result is further tainted by the discovery of a computer virus in the voting system used in Hamilton. That’s a real example of tech digging the confidence hole that much deeper – ouch!

But the really sad part of this, for me, is that the true story is a good story about election officials doing the right thing: when they found a software bug, they worked with the vendor and created an effective work-around — maintaining the integrity of the system, the exact opposite of the story about the virus undermining the system. The real virus is that spurious story! The details, provided by NY State election official Doug Kellner, also provide another example of the complexity of diligent election administration:

In pre-election testing several counties discovered the Dominion ImageCast machines froze when fed ballots that contained contests with multiple candidates to be elected.  It was determined during the week before the election that the cause was a source code programming error in the dynamic memory allocation of the function that stores ballot images–not the counting function.   Although only one line of source code needed modification, NYSBOE staff properly refused to approve any modification of source code without proper certification.  Dominion developed a work-around by changing the ballot configuration file–not the source code so that the machines using the new configuration files functioned on election day.  It is my understanding that a few county officials, who were using the machines for the first time, did not properly revise the configuration files and the machines were used in emergency ballot mode–that is, ballots were inserted in the emergency ballot boxes contained within the machine and were counted manually after the close of the polls.

Kudos to NY for doing their job right, in the real world of flawed equipment, not the fantasy land of viruses and stolen elections. New Yorkers should be thanking the NYSBOE for a job well done!

— EJS

PS: For a detailed debunking of the virus claims, see the blog of NY election tech expert and advocate Bo Lipari. It’s excellent, and it got picked up in the local press. But it can’t catch up with the idea virus, as the tale continues to mutate through the blogosphere: Hoffman was cheated by corrupt election officials, or ACORN, or computer hackers, or viruses, or some combination. ;-(

Pennsylvania Paperless Recount

I wrote before that this month’s recount activity in Pennsylvania was notable because of the variety of voting methods used there, and hence the variety of recounting methods needed. In contrast to Lackawanna County, which I mentioned specifically, there are many counties in PA that use completely paperless DRE voting machines. In these cases, there are no actual ballots to recount, nor are there paper-trail tape rolls to examine.

As a result, the recount is more a matter of re-obtaining the vote totals from the DREs and re-doing the tabulation that adds up the machines’ vote totals for the recounted contest, in order to re-compute the election result. This is similar in principle to recounts of PA’s old lever machines, where the recount involved re-inspection of the counters on the back of each lever machine. One difference in practice, though, is that the lever machine counters could be directly inspected by a person, who would have little doubt that the totals they gathered from each machine were in fact recorded by that machine. A DRE’s vote totals are stored on re-writable digital storage media that are often separated from the machine itself. And as we saw recently in Myrtle Beach, human error can play a role in that separation.
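For the curious, here’s a minimal sketch (in Python, with made-up numbers) of what that paperless “recount” boils down to — re-adding each machine’s reported totals, with everything resting on trust in the storage media that delivered those numbers:

```python
def retabulate(machine_totals: list[dict[str, int]]) -> dict[str, int]:
    """Re-add each DRE's reported totals for one contest. The paperless
    'recount' is essentially this sum — trusted only as far as the
    storage media that delivered the numbers."""
    result: dict[str, int] = {}
    for totals in machine_totals:
        for candidate, votes in totals.items():
            result[candidate] = result.get(candidate, 0) + votes
    return result

# Hypothetical totals from two machines in one county:
print(retabulate([{"A": 412, "B": 390}, {"A": 205, "B": 260}]))
# -> {'A': 617, 'B': 650}
```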

So, election geek that I am, I’m waiting with interest to hear about the various re-counting methods used, the variances found, how the variances get accounted for, and so on. It should be a very interesting comparison of different means to the same end that one Lackawanna County candidate expressed so well:

Every vote should count. It’s hard enough to get the people to come out and vote. … The election process is under shadow.

Removing that shadow is what PA officials are working hard to do in a scant week of effort that, along with the work of many public-spirited observers, could teach us all a lot about how recount methods can create transparency and restore trust that every vote counts.

— EJS

Levers, HAVA, and “Compliance”

Kudos to Brad Friedman for making a good call on a subtle point in his comment on my posting about Bo Lipari’s coverage of the NY State testing of voting systems. Brad objects to my statement that lever machines are not compliant with the Help America Vote Act (HAVA).

And rightly so! The bad news about the adjective “HAVA compliant” is that people can and do disagree about the interpretation of that Act of Congress. The good news is that the noun “HAVA compliance” is well defined by facts on the ground, if not in the Act itself.

Those facts on the ground are composed of each state’s implementation of its HAVA compliance plan, under the oversight of the U.S. Department of Justice. The DoJ has for years been working with states, including the lever-machine states of NY and CT, on each state’s HAVA compliance plan. Those plans in NY include the use of machine-counted paper ballots, some hand-marked, and some from ballot marking devices that provide enhanced access for voters who are unable or unwilling to mark paper ballots by hand. Those plans do not include the continued use of lever machines.

So we can say that lever machines are not part of HAVA-compliance (noun) in NY or CT.

Further, I got the impression, from talking to folks involved in HAVA compliance program implementations, that there was no chance of a compliance program being approved if it was based on the continued use of lever machines. If true, that might well be based on what Brad would consider a misinterpretation of HAVA.

Would it be possible for a state to have an acceptable HAVA compliance plan that included lever machines? Perhaps a plan that included electronic DREs for enhanced access, lever machines (which are mechanical direct-record election devices), and tools for combining the results from both into an auditable election result? Possibly, but likely we’ll never know, as the last few HAVA-compliance program engines pull into the station at the end of the ride.

— EJS

Identifying the Gold, Redux

I recently commented on the specific connection, in the case of the TrustTheVote project, between open source methods and the issue of identifying a “gold build” of a certified voting system. As a reminder to more recent readers, most states have laws that require election officials to use only those specific voting system products that were previously certified for use in that state — and not some slightly different version of the same product. But recently, I got a good follow-up question: what is the role of the Federal government in this “gold build” identification process? There is in fact an important role, one that is potentially very helpful — and openness can magnify its benefit.

Here’s the scoop. The EAC has the fundamental responsibility for Federal certification, which is used in varying degrees as part of some states’ certification. Testing is the main body of work leading up to certification. Testing is performed by private companies that have qualified, under a NIST-managed accreditation program, as official Voting Systems Test Labs. There are two key steps at the end of the overall process. First, the test lab re-does the “trusted build” process to re-create the soon-to-be “gold” version, verifying that the rebuild produces exactly the same software that was tested. Then, as the EAC Web site briefly states: “Manufacturer provides software identification tools to EAC, which enables election officials to confirm use of EAC-certified systems.”

But here is the fly in the ointment: for your typical PC or server, this is not easy! And the same is true for current voting systems. Yes, you could crack open the chassis, remove the hard drive, examine it as the boot medium, re-derive a fingerprint, and compare the fingerprint to something on the EAC web site. But in practice this is not going to happen in real election offices, and in any case it would be fruitless — even if you did, you would still have no assurance that the device in the precinct was still the same as the gold build, because the boot media can be written after the central office tests the device, but before it goes into use in a polling place.

That’s quite an annoying fly in the ointment, but it doesn’t have to be that way. In fact, for a carefully designed dedicated system, the fingerprinting and re-checking can be quite feasible — and that applies to carefully made voting systems too, as we’ve previously explained. Such carefully made voting systems would be a real improvement in trustworthiness (which is why we’re building them!), but they aren’t a silver bullet, since you can never 100% trust the integrity of a computing system. That’s why vote tabulation audits are an important ingredient, and why I periodically bang on about auditing in election processes.
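To make “fingerprinting and re-checking” concrete, here’s a minimal sketch of the idea for a dedicated system with a single fixed boot image. The file path and reference value are hypothetical placeholders, not actual EAC tooling:

```python
import hashlib

def fingerprint(image_path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 fingerprint of a device image, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical published value for the certified "gold build":
GOLD_FINGERPRINT = "9f2c..."  # placeholder, not a real hash

observed = fingerprint("/media/device/boot.img")  # hypothetical path
if observed == GOLD_FINGERPRINT:
    print("Device image matches the certified gold build.")
else:
    print("MISMATCH: image differs from the certified gold build.")
```

The point is not the hashing itself, which is trivial, but designing the device so that there is one fixed artifact to hash and a trustworthy moment to hash it.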

— EJS

Stalking the Errant Voting Machine: the Final Chapter

Some readers may breathe a sigh of relief at the news that today’s post is the last (for a while at least!) in a series about the use of vote-count auditing methods to detect a situation in which an election result was garbled by the computers used to create them. Today, a little reality check on the use of the risk-limiting audit methods described earlier. As audit guru Mark Lindeman says,

Risk-limiting audits clearly have some valuable properties, yet no state has ever implemented a risk-limiting audit.

Why not? Despite the rapid development of RLA methods (take a quick glance at this paper to get a flavor), there are several obstacles, including:

  • Basic misconceptions: Nothing short of a full recount will ever prove the absence of a machine count error. Instead, the goal of RLA is to reduce the risk that machine count errors altered the outcome of any contest in a given election. Election result correctness is the goal, not machine operations correctness — yet the common misperception is often the reverse.
  • Requirements for election audits must be part of state election laws or the regulations that implement them. Details of audit methods are technical and difficult to write into law — and detailed enough that it is perhaps unwise to enshrine them in law rather than regulation. Hence, there is some tension and confusion about the respective roles of states’ legislative and executive branches.
  • Funding is required. Local election officials have to do the work of audits of any kind, and need funding to do so. A standard flat-percent audit is easier for a state to know how to fund than a variable-effort RLA that depends on election margins and voter turnout.
  • The variability itself is a confusing factor, because you can’t know in advance how large an audit will have to be (see the sketch after this list). This fact creates confusion or resistance among policy-makers and under-funded election officials.
  • Election tabulation systems often do not provide timely (or any) access to the data needed to implement these audits efficiently. These systems simply weren’t designed to help election officials do audits — and hence are another variable cost factor.
  • Absentee and early-voting ballots sometimes pose large logistical challenges.
  • Smaller contests are harder to audit to low risk levels, so someone must decide how to allocate resources across various kinds of contests.
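Here’s the sketch promised above: a rough, with-replacement approximation (not a full RLA calculation) of how the number of precincts to sample depends on the fraction of precincts that would have to be miscounted to change the outcome:

```python
import math

def precincts_to_sample(miscount_fraction: float, risk_limit: float) -> int:
    """Smallest sample size (with-replacement approximation) that gives
    at least a (1 - risk_limit) chance of drawing one miscounted precinct."""
    return math.ceil(math.log(risk_limit) / math.log(1.0 - miscount_fraction))

# The required sample balloons as the outcome-changing miscount
# fraction shrinks (i.e., as the contest gets closer):
for m in (0.50, 0.20, 0.05, 0.01):
    print(f"{m:.0%} of precincts miscounted -> sample {precincts_to_sample(m, 0.01)}")
# -> 7, 21, 90, 459 precincts respectively
```

A landslide can be confirmed with a handful of precincts; a squeaker can require hundreds — which is exactly the budgeting headache described above.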

As Lindeman points out, each of these problems is tractable, and real progress in RLA practice can be made without a solution to all of them. And in my view, one of the best ways to help would be to greatly increase transparency, of both the operations of the voting systems (not just the tabulation components!) and the auditing process itself. Then we could at least determine which contests in an election are most at risk even after the audits that election officials are able to conduct at present. Perhaps that would also enable experts like Lindeman to conduct unofficial audits, to demonstrate effectiveness and help indicate efforts and costs for official use of RLA.

And dare I say it, we might even enable ordinary citizens to form their own judgment of an individual contest in an election, based on real published facts: total number of ballots cast in a county, total number of votes in the contest, margins in the contest, total number of precincts, precincts officially audited, and (crank a statistics engine) the actual confidence level in the election result — whether the official audit was too little, too much, or just right. That may sound ambitious, and maybe it is, but that’s what we’re aiming for with operational transparency of the voting system components of the TTV System, and in particular with the TTV Auditor — currently a gleam in the eye, but picking up steam with efforts from NIST and OASIS on standard data formats for election audit data.
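For a flavor of what that statistics engine might compute, here’s a simplified sketch. The numbers are hypothetical, and the “minimum miscounted precincts to flip the outcome” input is a crude stand-in for what real methods derive from margins and turnout:

```python
from math import comb

def audit_confidence(total_precincts: int, audited: int, min_bad: int) -> float:
    """Chance that a clean audit of `audited` randomly chosen precincts
    would have caught at least one miscounted precinct, assuming `min_bad`
    precincts (the minimum needed to flip the outcome) were miscounted.
    Hypergeometric: precincts are sampled without replacement."""
    p_all_clean = comb(total_precincts - min_bad, audited) / comb(total_precincts, audited)
    return 1.0 - p_all_clean

# Hypothetical county: 400 precincts, 8 audited with no errors found,
# and a crude bound saying 40 miscounted precincts could flip the contest:
print(f"confidence in the outcome: {audit_confidence(400, 8, 40):.1%}")
# -> about 57% — far short of a 99% assurance, i.e., "too little" audit
```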

— EJS

What’s an RLA? What Does a Good One Look Like, and Why Would I Care?

Recently I’ve made a series of posts seemingly obsessed with chanting “audit, audit, …” mantra-like, to put readers into a trance. For those of you still awake enough to want to know how to find out whether election results were garbled by the computers used to create them, today we have some more answers. The key phrase is “risk-limiting audit,” and here today to explain it is election expert Mark Lindeman of Bard College. Over to Mark …

Many observers agree that electronic voting machines and optical scanners cannot be assumed to count votes accurately. A commonly proposed solution is to audit — through a hand count — a random sample of the paper ballots (or, perhaps, voter-verifiable paper records) from each election. For instance, California mandates a hand count in 1% of all precincts. Intuitively, if a “large enough” random sample of ballots uncovers no material counting errors, we can be confident that the count is accurate. But what counts as “material,” “accurate,” or “large enough”? Risk-limiting audits offer one answer: they are designed so that if miscounts have altered an election outcome, the wrong outcome is likely to be corrected via a full hand recount. “Likely” is not a weasel word: it is specified as a guaranteed minimum probability. If an audit guarantees at least a 99% chance of correcting an outcome when it is wrong, then we can say that there is a 1% risk of an undetected wrong outcome.

“Risk-limiting audits” may sound like a no-brainer, but most existing audits come nowhere near this standard. There are at least two big problems. One is getting a reasonable sample size. Suppose for a moment that in order to alter some election outcome, at least half the election precincts would have to be miscounted. At that miscount rate, a random sample of just seven precincts has about a 99% chance of including at least one miscounted precinct — whether the contest being audited is in a single congressional district or an entire large state. Now suppose that miscounts in just 5% of election precincts could alter the outcome. To get that same 99% chance of detecting some miscount, one needs to sample about 90 precincts — again, whether one is auditing a single CD or all of California. (The numbers do decrease for smaller contests, but not as fast as many people expect.) So a “1% audit” may be far larger than needed to confirm who won an election, or it may be far too small, depending on the size of the contest and the winning margin, among other things. Changing the percentage doesn’t solve the problem, but only alters the balance of “too-large” and “too-small” samples.
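Mark’s numbers are easy to check. Here’s a quick sketch using a with-replacement approximation, which is close enough for large contests:

```python
def detection_chance(miscount_fraction: float, sample_size: int) -> float:
    """Chance a random sample of precincts includes at least one miscounted
    precinct (with-replacement approximation, fine for large contests)."""
    return 1.0 - (1.0 - miscount_fraction) ** sample_size

print(f"{detection_chance(0.50, 7):.1%}")   # half miscounted, 7 sampled -> ~99.2%
print(f"{detection_chance(0.05, 90):.1%}")  # 5% miscounted, 90 sampled -> ~99.0%
```

Note that neither calculation depends on the total number of precincts, which is why the same sample sizes apply to one congressional district or all of California.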

Another big problem with existing audits is the gap between detecting miscounts and actually correcting incorrect outcomes. If an audit detects a miscount, what happens? In many states — including California — nothing happens. A mere sample can never correct an incorrect outcome; some states do provide that sufficiently large miscounts lead to larger audit samples, and perhaps eventually to full recounts. But even the best of these rules are not very good: they don’t always count more when they should, or they sometimes count much more than necessary to confirm election outcomes. This is not to say that fixed-percentage audits are useless, but they aren’t tailored to the task of efficiently detecting and ultimately correcting most incorrect outcomes.

Many thanks to Mark for this explanation! Coming soon: practical use of risk-limiting audits, and possibilities for DIY.

— EJS

Voter Registration, Fraud, and Transparency

In this week’s news we have a classic example of how transparency (a.k.a. “open government”) has enormous potential to defuse some thorny political issues that can rise to the highest heights of U.S. political news.  The news is about Karl Rove’s involvement in Bush-administration actions to dismiss some U.S. Attorneys, including David Iglesias.

A New York Times article, E-Mail Reveals Rove’s Key Role in ’06 Dismissals, describes how Iglesias lost favor with the Bush Administration as a result of being perceived as slack in pursuing cases of possible voter fraud. In a PBS interview, Mr. Iglesias described exactly what type of voter fraud was at issue, and how his investigation sought, but did not find, evidence of fraud to be prosecuted. And the connection with voter registration? The potential fraud in question was voter registration fraud. Mr. Iglesias said that New Mexico state GOP officials

singled out ACORN  as an entity that they thought was engaging in … a plan to register individuals who were not legally entitled to vote… under-aged people, people who perhaps were felons, people who perhaps were not American citizens.

The concern was that if such fraud were occurring, then it would enable the further fraud of actual voting by people who were fraudulently registered and had no legal right to vote. If that were to occur, the election result could be swung — particularly of concern in NM, where the 2000 presidential election hung on 344 votes — and even worse, one couldn’t be sure, because there is no way to know after the fact how these hypothetical illegal voters actually cast their ballots.

That’s serious stuff. Again, you may be asking what’s the connection to voter registration systems technology. Well, consider the effect of lack of transparency. Mr. Iglesias’ efforts were based on information not readily available to the public, or to his detractors inside the beltway. As a result, there was real angst over ACORN’s activities and a possible conspiracy to swing a Presidential election. And that information vacuum was a factor in feeding the conspiracy theorists who may eventually have helped the process of sacking Iglesias.

Now, imagine a world in which there is, in fact, quite readily available information about [a] the entire stream of voter registration requests, [b] the source of requests (e.g., individuals, ACORN, Rock the Vote, etc.), [c] county officials’ adjudication of those requests, [d] the results of adjudication, etc. Suppose that a state could easily generate reports about this stream for officials (e.g., a state’s A.G. or the Federal DoJ), and even openly publish redacted versions of these reports or even the raw data.
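As a sketch of what such reporting could look like — the record layout and category names here are hypothetical illustrations, not the TrustTheVote design — consider tallying the request stream by source and adjudication outcome:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RegistrationRequest:
    source: str   # e.g., "individual", "ACORN", "Rock the Vote"
    county: str
    outcome: str  # e.g., "accepted", "rejected-ineligible", "pending"

def transparency_report(requests: list[RegistrationRequest]) -> dict[str, int]:
    """Tally the request stream by (source, outcome) — the kind of redacted
    aggregate a state could publish without exposing personal data."""
    tally = Counter((r.source, r.outcome) for r in requests)
    return {f"{src} / {out}": n for (src, out), n in tally.items()}

requests = [
    RegistrationRequest("ACORN", "Bernalillo", "accepted"),
    RegistrationRequest("ACORN", "Bernalillo", "rejected-ineligible"),
    RegistrationRequest("individual", "Santa Fe", "accepted"),
]
for line, count in transparency_report(requests).items():
    print(f"{line}: {count}")
```

With aggregates like these published routinely, a claim that some organization was flooding the rolls with ineligible registrants could be checked against the record rather than argued in a vacuum.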

Well, that’s what we’re building in the TrustTheVote Project. And that transparency is (and should be) what “open government” is about. If that transparency had existed in NM a couple of years ago, then the information vacuum would not have existed, except as a willful refusal to examine readily available information.

And that’s where open-source, open-data, operationally transparent, “people’s technology” can be the basis for real IT systems that can fill information vacuums and defuse conspiracy theories — helping to increase the health of public discourse. Yes, it sounds a bit highfalutin, idealistic, so don’t take it from me — let us prove it with real running code and stuff people can see, touch, and try.

— EJS

Identifying the Gold: Does Open Source Help?

A good question re-surfaced for us as we participated in the National Civic Summit recently. The issue was and remains identifying a “gold build”: when a particular system/version is certified for use as a voting system, how should election officials know that the systems they deploy are instances of that certified system? Previously, we provided some answers to the question “How do I know that this voting machine is a good one?” and provide in the wiki a more technical treatment of “field validation” of voting system devices.

But the slightly different question that arose recently is: how does open source help?

The simple answer is that open source techniques do not directly help at all. We could build a completely open system that has exactly the same architectural blockades to field validation as the current vendors’ products do. However, the TrustTheVote open source project has some advantages. First, we’re working on voting systems, which have sufficiently simple functional requirements (compared to general purpose computing systems) that field validation of voting devices isn’t as difficult as in the more general case. *

The second advantage is that, given the relative simplicity of voting devices, we were able to go back to the drawing board and use an architecture that simplifies the field validation problems, for the very specific and limited class of systems that are voting devices.

Openness itself didn’t create these two advantages; but in conducting a public works project, we have the freedom to start fresh and avoid basic architecture pitfalls that can undermine trust. The value of working openly is that the benefit of this work — increased confidence and trust — is more easily achieved, because field validation is fundamentally a systems trust issue, and we address it in a way that can be freely assessed by anyone. And that’s where the open source approach helps.

— EJS

* NOTE: for the detail-oriented folks: in general, the twin problems of Trusted Software Distribution and Trusted System Validation are, in their general form, truly hard problems. Feasible approaches to them usually rely on complex use of cryptography, which simply shifts the burden to other hard problems in practical applied cryptography. For example, with “code signing” my Dad’s computer can tell him that it thinks he should trust some new software because it is signed as being from his SW vendor (e.g., Microsoft or HP); but he wonders (rightly) why he should trust his computer’s judgment in this matter, given the other mistakes that his computer makes. For more on the non-general voting-system-friendly solution, see the TrustTheVote wiki: https://wiki.trustthevote.org/index.php/Field_Validation_of_Voting_Systems

Can We Really Detect Flakey Voting Machines?

That’s a catchy blog headline, I hope, or at least an important issue. But I’ve fooled you because while answering the question, I am going to discuss “audit” again. I wrote earlier that one kind of audit is performed by election officials to detect errors in voting machines, or to put it another way, to ensure that election results weren’t garbled by the computers used to create them. That sounds like a good thing to detect, and ensure, but how can we understand whether the detection is effective? Today’s post is the beginning of an answer to that question.

And it’s a very relevant question, because we know from last year’s experience in Humboldt County CA that malfunctions do occur. In fact, with just the right bad luck in the locales affected, perhaps only half a dozen Humboldt-sized, Humboldt-style glitches would have been required to swing MN’s close Coleman-Franken race. And recall that each county has hundreds of opportunities for such a glitch! Five or ten malfunctions per thousand machines, across a medium-sized state, may not sound like a lot, but it’s enough to swing a major contest every few years.

To take a specific example, let’s look at the voting method of paper ballots, counted by machine partly in polling places and partly in a central facility. (Similar issues apply to other voting methods, including those using touch-screens or other direct-record devices.) One audit procedure is essentially a hand-count “spot check” or partial “re-do” of the machine count. Precincts are randomly selected until the combined ballots of the selected precincts exceed some threshold percentage of the vote, say 1%. Then each of these precincts’ ballots are re-counted, for each contest, and the hand-count results compared to the machine count. There are often small variances — different interpretations by people and software — and these are scrutinized and documented to ensure that they are in fact borderline interpretation cases or due to some other procedural, non-technical issue. Any substantial variance would be a sign of potential machine malfunction, and would trigger further hand counts until the rules of the audit process are satisfied, or a full re-count is triggered by those rules.
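Here’s a minimal sketch of that precinct-selection step; the county, ballot counts, and threshold are made up for illustration:

```python
import random

def select_audit_precincts(ballots_by_precinct: dict[str, int],
                           threshold: float = 0.01) -> list[str]:
    """Randomly draw precincts until their combined ballots meet or exceed
    the threshold share of all ballots cast (a flat-percentage audit)."""
    total_ballots = sum(ballots_by_precinct.values())
    order = list(ballots_by_precinct)
    random.shuffle(order)
    chosen, covered = [], 0
    for precinct in order:
        if covered >= threshold * total_ballots:
            break
        chosen.append(precinct)
        covered += ballots_by_precinct[precinct]
    return chosen

# Hypothetical county: 150 precincts of ~600 ballots each;
# a 1% audit typically lands on just 2 precincts here.
county = {f"P{i:03d}": 600 for i in range(150)}
print(select_audit_precincts(county))
```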

Fair enough, but in the typical case where 1% of a county’s paper ballots have been audited with no errors detected, what do we actually know? How confident can we be that the remaining unaudited ballots were correctly machine-counted? What if a race is pretty darn close, say a 2% margin, but not so close as to trigger a recount; if 1% of ballots were audited, what can we expect about the other 99% of ballots, and the chance that machine counting errors might change the election result?

Yes, I started with a general question, and answered it with some more specific questions. But at least I didn’t bore you with too much more of the A-word. Coming soon: another post that answers the questions remaining from today, by explaining in simple terms what a “risk limiting audit” is, how it is different from the flat-percentage audit discussed today, and, finally, how you can tell, for any election you want, whether the election officials were able to test whether election results were garbled by the computers used to create them.

— EJS