
AR vs. UVR?

I came across an interesting article about voter registration: “The Alternative to Universal Voter Registration,” in which John N. Hall strongly supports Automatic Registration (AR) over Universal Voter Registration (UVR).

To people who are not election experts, the distinction is a bit subtle. UVR has states proactively try to register everyone to vote, while AR has the federal government somehow use Social Security records to automatically register people.

In Mr. Hall’s words, AR can be implemented by doing the following:

“Computer programs read through the Social Security Administration database, extract the data of age-eligible citizens, then send that data to the states.” (from The Alternative To Universal Voter Registration)

Now that I have started writing this post and done some more googling, I see that this is already a heavily debated topic that has flown back and forth: starting with John Fund’s (of the Wall Street Journal) original talk, to Rep. Barney Frank’s angry denial that he was involved in any way with Universal Voter Registration, to Mr. Fund’s retraction of the claim, to more links than you can shake a stick at about the topic.

Anyway, as usual, I am a Johnny-come-lately. Phew. My original thought, though, was about the original post. You see, John Hall, being a “computer programmer,” makes it sound simple:

“Voila! Why make mandates on all those state agencies and dragoon all that manpower entailed by UVR when a computer program can register everyone? A competent programmer could write the extract program in his sleep.” (from The Alternative to Universal Voter Registration)

Any description of this that starts with “Voila” and ends with “in his sleep” is … well, let’s just say, it must be a bit of a simplification….

For example, I would imagine that there would be major privacy concerns about sending information out of the Social Security systems to each of the states. I am not sure that the states’ registration records even include Social Security information. And what about all the voters who don’t have Social Security numbers? There must be some, perhaps many. And how quickly are the Social Security databases updated when people die?
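Just to make the point concrete, here is roughly what that “write it in his sleep” extract program might look like, sketched in Python. Everything about it is hypothetical (I have no knowledge of the SSA’s actual schema), and notice that each of my questions above shows up as a filter condition or an unanswered comment:

    from datetime import date

    VOTING_AGE = 18

    def is_age_eligible(record, today=None):
        """Rough age check; the record layout here is entirely made up."""
        today = today or date.today()
        birth = record["date_of_birth"]
        had_birthday = (today.month, today.day) >= (birth.month, birth.day)
        return (today.year - birth.year - (0 if had_birthday else 1)) >= VOTING_AGE

    def extract_registrants(ssa_records):
        """The naive extract -- with the hard parts showing up as caveats."""
        for record in ssa_records:
            if not is_age_eligible(record):
                continue
            # Open question: how quickly is the death index updated?
            if record.get("deceased"):
                continue
            # Open question: SSA records don't necessarily establish
            # citizenship or current residence -- which state gets this one?
            if record.get("citizenship") != "citizen":
                continue
            yield {"name": record["name"],
                   "state": record.get("state_of_residence")}

    # Toy demo with one made-up record:
    demo = [{"name": "A. Citizen", "date_of_birth": date(1980, 7, 4),
             "citizenship": "citizen", "state_of_residence": "OH"}]
    print(list(extract_registrants(demo)))

Even this toy version begs the privacy question: every record it yields is personal data leaving the SSA’s systems. Simple, indeed.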

OSDV Foundation Called to Testify on State of CA Voting Systems Future

Gregory Miller of the OSDV Foundation will provide testimony during the State of California’s hearings on the future of elections systems next Monday, February 8th.

CA Secretary of State Debra Bowen asked elections and voting systems experts from around the country to attend, testify, and answer questions about the current election administration landscape and how California can best prepare for the future. The Secretary noted in a prepared statement:

Demands for increased transparency and services, shrinking government budgets, and technological advances that outpace elections laws and regulations have combined to challenge what many thought were ‘permanent’ solutions developed as part of the 2002 Help America Vote Act. Many in California and across the nation are ready to move in a new direction. The question is, what should Californians seek in the next generation of voting equipment and how can new products truly serve the interests of voters?

Secretary Bowen will preside over the Hearing, joined by county elections executives from Los Angeles, Orange, Sacramento, San Joaquin, Santa Cruz, and Madera counties. In addition to the testimony from OSDV, wide-ranging testimony will come from the U.S. Election Assistance Commission, the Pew Center on the States, the Federal Voting Assistance Program, representatives from every major voting system manufacturer with contracts in California, and more. The complete agenda is available here.

California has a strong record of thoughtful analysis of its voting systems. In 2007, Secretary Bowen led a top-to-bottom review of certified voting systems. Bowen asserted from the outset that the review would:

Ensure that California’s voters cast their ballots on voting systems that are secure, accurate, reliable, and accessible.

And following the top-to-bottom review, on August 3, 2007, Secretary Bowen strengthened the security requirements and use conditions for certain systems.

So it’s no surprise to us that continuing developments in the elections technology industry, as well as legislative initiatives, are leading the Secretary to conduct this Hearing next Monday. Part of that change is best evidenced by the MOVE Act.

We’ll discuss more about the MOVE Act in other posts, but in summary, President Obama signed the Military and Overseas Voter Empowerment (MOVE) Act in October 2009. The most immediate impact of the law from the State perspective has to do with the provision that establishes a 45-day deadline for States to provide ballots to voters. Because Primary results need to be certified and General ballots need to be constructed and conveyed, additional time (beyond 45 days) is required to meet the new federal guideline. And the largest impact on elections technology, processes, and practices comes from two principal provisions of the Act, which mandate that States shall provide:

  1. A digital means by which overseas voters can verify and manage their voter registration status; and
  2. A digital means by which an overseas voter can receive a digital, download-ready, blank ballot (think PDF; see the sketch below).
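To give a flavor of what those two mandates might amount to in software, here is a minimal sketch. Every name and field below is hypothetical; a toy in-memory “database” stands in for a real State registration system:

    import io

    # Toy stand-ins for a State's registration database and ballot store.
    REGISTRATIONS = {
        "voter-123": {"name": "A. Voter", "status": "active", "precinct": "42"},
    }
    BLANK_BALLOTS = {
        "42": b"%PDF-1.4 ... (blank ballot layout for precinct 42) ...",
    }

    def check_registration(voter_id):
        """Mandate 1: let an overseas voter verify registration status."""
        record = REGISTRATIONS.get(voter_id)
        if record is None:
            return {"registered": False}
        return {"registered": True, "status": record["status"]}

    def download_blank_ballot(voter_id):
        """Mandate 2: deliver a download-ready blank ballot (think PDF)."""
        record = REGISTRATIONS.get(voter_id)
        if record is None:
            raise KeyError("no registration on file")
        return io.BytesIO(BLANK_BALLOTS[record["precinct"]])

    print(check_registration("voter-123"))
    print(download_blank_ballot("voter-123").read()[:8])

The hard parts, of course, are everything around this sketch: authenticating the voter, keeping registration data current, and producing the correct ballot style for each voter.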

Success in implementing these mandates will reduce lost participation by overseas voters; studies have shown that approximately 1 out of every 4 overseas ballots goes uncounted because it fails to arrive in time.

But if only it were that easy. You see, in 2008, many States changed their Primary dates by several months to allow their voters to more heavily impact the presidential nomination process. And additional moves are likely in 2010, because 11 states and the District of Columbia have Primaries so close to the General Election that ballots may not be produced in time to comply with the new MOVE Act. California has a very large overseas and military voting contingent, and you can imagine MOVE Act mandates are on the minds of CA elections officials, State legislators, and the Secretary.

Of equal interest, Los Angeles County, the largest election jurisdiction in the United States, is engaged in a process known as the Voting Systems Assessment Project (VSAP) to determine the design of their next generation voting system.

Serving over 4 million registered voters, the County is examining the ways in which it can modernize its voting systems. Dean Logan, the County Registrar, and Ken Bennett, the County IT Director, are working to analyze the ways in which technology can ensure their ability to meet operational mandates and better serve their voters. With the VSAP underway (a project the OSDV Foundation is participating in), our “take” is that more (and possibly dramatic) change in elections technology in the great State of California is all but assured.

Stepping back, the current voting technology used in Los Angeles County and elsewhere is provided by private companies; they offer election jurisdictions proprietary technology solutions that need to be certified by the CA Secretary of State. While there is oversight at a State level, and mandates at the Federal level, each jurisdiction must purchase their own technology and do the very important business of conducting elections. Consequently, jurisdictions find themselves in multi-year contracts for technology.

This gives a jurisdiction continuity, but impairs its ability to innovate and to collaborate with, and learn from, neighboring or similar jurisdictions elsewhere in the state or country.

With L.A. County — the largest elections jurisdiction in the nation — considering the future of elections technology for its voters, the mandates of MOVE Act implementation bearing down, and the complexities of the largest State’s processes and regulations for selecting and implementing elections technology, the Secretary’s Hearing next week is nearly essential.

So we are honored to be asked to testify next week. And the timing is good. As a means to developing a holistic architecture for next-generation systems, one of the imperative elements is a common data format for the exchange of election event data. This is one particular element we’re working on right now. In fact, we will shortly be collaborating with a group of States and jurisdictions on the testing of several framework components, including election event management, ballot preparation, and automated generation of printable ballots (watch for this announcement shortly).
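As a purely illustrative example (not the actual format, whose details are still being worked out with our collaborators), election event data in a common format might look something like this:

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class Contest:
        name: str
        candidates: list = field(default_factory=list)

    @dataclass
    class ElectionEvent:
        """Hypothetical election event record: the kind of data that
        election event management, ballot preparation, and ballot
        generation components would exchange."""
        election_id: str
        date: str
        jurisdiction: str
        contests: list = field(default_factory=list)

    event = ElectionEvent(
        election_id="ca-primary-2010",
        date="2010-06-08",
        jurisdiction="Los Angeles County",
        contests=[Contest("Governor", ["Candidate A", "Candidate B"])],
    )

    # One format, many consumers -- that's the point of a common format.
    print(json.dumps(asdict(event), indent=2))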

Here’s the cool thing: it turns out that all of this work currently underway in the TrustTheVote Project, which leverages this common data format and some other innovations, provides a ready-made, freely available open source solution to implement the mandates of the MOVE Act.

So, we hope that this work will prove to be relevant and purposeful for the Hearings. Our opportunity to testify is timely, because we believe our work is in line with the agenda driving the hearing: What do next-generation systems look like, and how do states like CA comply with Federal mandates? How can we develop quickly to adapt to changing needs on the ground from elections officials, voters, and federal requirements?

We’re excited to participate; go Greg!

For interested viewers, there will be a webcast available here.  And the event will likely be carried live on Cal Channel Television.

Stay tuned; more to come.
Matt

Voting Systems Certified in New York

New York State recently certified two voting systems, and the end of the process offers an interesting insight into current certification and standards — particularly the view of the lone dissenting voter, Bo Lipari, who explained his vote in his blog post My Vote on NY Voting Machine Certification. It’s certainly worth reading Bo’s complete rationale, but I think that the most important take-away is very aptly expressed by today’s guest blogger, Candice Hoke, Director of the Center for Election Integrity and Associate Professor of Law at the Cleveland-Marshall College of Law at Cleveland State University.

I read Bo Lipari’s blog regarding the NY VS certification issue, and the 9:1 vote in favor of certification, with Bo’s vote the only dissent. To provide a lawyer’s view, I would mention that Anglo-American law includes a principle termed “substantial compliance.” It has limitations and caveats, but it’s worth considering how this principle might apply to the voting tech certification area, or instead be excluded from it.

At base, Bo’s blog, and certification facts he presents, pose a very important question:

Do we really want voting system vendors to be able to “substantially comply” with the certification standards, or do we want to require more rigorous, complete compliance; and if so, why?

This is a critical question, of course.  Certainly, in the earlier NASED certification process, the ITAs (labs operating as Independent Testing Authorities) viewed substantial compliance to be all that was required.  The ITA view of “substantial” seemed to be inchoate and ad hoc, perhaps based on a general gestalt of the voting system product under review. As the California TTBR and other independent voting system studies documented, “substantial” offers a great deal of interpretive wiggle room.

My thanks to Candice both for posing this important question and for pointing out that any answer is not going to be tidy, whether it is black-or-white or a paler shade of gray.

— EJS

Two Ballots, New Ballots

Following a previous post with before-and-after pictures of an ideal “re-modeling” of a ballot, I have a couple of notes about how such remodeling is harder in practice; another ballot image to illustrate; and some good news about ongoing TTV work on ballot image processing.

That ideal remodeling showed how to fix one class of usability flaws (visually “losing” some of the candidates in a race), and a typical approach to increasing accessibility: abandoning the eye-crossingly stark and skeletal black-and-white layout for one with colors, shading, fonts, and space to help visually separate distinct elements and visually highlight important elements. But the “after” picture is idealistic in two ways.

[1] The full range of accessibility issues is much larger. To get an idea of how much larger — for example, variations on one color or two, one language or two, paper size, placement of instruction text — check out part of the results of the AIGA work on ballot design. Or, take a look at the sample image below, which shows some of the fruit of two years of expert input and testing — which we at TTV are very fortunate to be able to leverage!

[2] The intended use of these images is to be printed as paper ballots that are marked by voters (manually or with digital assistance) and can be counted by an optical scanner. The previous “after” picture lacked the big visual mess of a bunch of black rectangles that leap to the eye much more than the actual ballot information does, yet are needed for an optical scanner to orient the ballot image and find the marks. The AIGA sample below has these “timing marks” added back in, with a bit less visual clutter, but still a lot of them.

The good news I mentioned is about the timing marks and their usability impact. Results so far indicate that our scanner can get by just fine with only marks in each of the 4 corners — thus dispensing with most of the usability impact of the timing marks. This may seem ultra geeky, but it is the sort of techie result that keeps us going. 😉 More details on this result in a later post.
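For the geeks among you, here is a rough sketch of why four corner marks can be enough: once the scanner finds them, it can fit a transform from the ballot’s design coordinates to the scanned image’s pixel coordinates, and then look up every vote target by its designed position. The least-squares affine fit and the numbers below are illustrative only, not our scanner’s actual code:

    import numpy as np

    def fit_affine(design_pts, scan_pts):
        """Least-squares affine transform mapping design coordinates
        (where the layout says marks are) to scanned pixel coordinates."""
        A = np.hstack([np.asarray(design_pts, float),
                       np.ones((len(design_pts), 1))])  # rows of [x, y, 1]
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(scan_pts, float), rcond=None)
        return coeffs  # 3x2 matrix

    def to_scan(pt, coeffs):
        """Map one design-space point into the scanned image."""
        return np.array([pt[0], pt[1], 1.0]) @ coeffs

    # Corner marks: designed positions vs. where the scan found them
    # (slightly shifted and skewed; made-up values for illustration).
    design  = [(0, 0), (2550, 0), (0, 3300), (2550, 3300)]
    scanned = [(12, 9), (2561, 22), (3, 3310), (2553, 3324)]

    coeffs = fit_affine(design, scanned)
    print(to_scan((1275, 1650), coeffs))  # locate a vote oval mid-page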

— EJS

AIGA Optical-Scan Sample Ballot

Et Tu, Coding Standards?

As you may know, our approach to developing software is kind of agile development meets high assurance. What the heck does that mean? We are now engaged in prototyping and modeling, so the slider is over on the agile development side. But the high assurance part will come. And when it comes, and when we want our code to be certified, then clearly coding standards (and many other matters) will come to the fore.

But for the moment, as you take a look at the code that we have already put out there on github, and other code that is on its way, remember where we are in our evolution. For now, we feel that coding standards are something of a moving target, so we are not going to be draconian in our oversight of them. In fact, I have to say that harder than following a particular set of coding standards is ensuring that the software we design and write is as simple, clear, and well structured as possible, and then some. Personally, I place a higher value on that than on whether we use 2- or 4-space tab settings 😉 The other point worth noting is that different parts of the overall election technology suite are subject to different degrees of review and certification. For example, it stands to reason that the code driving the design of ballots is different than the code tabulating the vote.

So, as software engineers who care about our work, and especially because we are working on something as important as elections technology, you can count on us to write code that we can be proud of. You won’t find us crying crocodile tears when some of our code comes into the public domain and is scrutinized – after all, that’s what we’ve been all about from the very start.


Core + Interfaces

We are thinking a lot about the overall architecture of our system. We are thinking and talking with election officials and their technology pals.

One question that is asked of us, and that we ask ourselves, is: “Will your system work, right out of the box, for any jurisdiction?”

For example, a product that I work on, BlogBridge, is free and open source, and works out of the box for any user. While the code is available to anyone, in practice hardly any of the users look at it or have reason to modify it. That’s one model: one size fits all. But we know that in the arena of elections technology, one size definitely does not fit all.

We know that states do things differently, sometimes (very) differently from one state to the next. So the question becomes, “… how can you guys assure us that what you build will work for OUR scenario?”

Being the geek I am, I approach the question this way…

It seems that what we need to do is design a system with a core that represents the 90% case (maybe it’s 85 or 95?), and then on top of that provide a set of appropriate interfaces that would allow a stakeholder to add their own code as they deploy our technology.

So: Core + Interfaces.
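Here is a minimal sketch of the idea in Python (every name in it is hypothetical): the core handles the common case, and a jurisdiction plugs in its own variation through a narrow interface:

    from abc import ABC, abstractmethod

    class JurisdictionRules(ABC):
        """The interface: the hooks where behavior varies by jurisdiction."""
        @abstractmethod
        def candidate_order(self, candidates):
            ...

    class DefaultRules(JurisdictionRules):
        """The core's 90% case: say, alphabetical candidate order."""
        def candidate_order(self, candidates):
            return sorted(candidates)

    class RotatingRules(JurisdictionRules):
        """A stakeholder's own code: rotate order per precinct, say."""
        def __init__(self, precinct_index):
            self.precinct_index = precinct_index
        def candidate_order(self, candidates):
            k = self.precinct_index % len(candidates)
            return candidates[k:] + candidates[:k]

    def build_contest(candidates, rules):
        """Core logic stays fixed; variation enters only via the interface."""
        return {"contest": rules.candidate_order(candidates)}

    print(build_contest(["Baker", "Able", "Chan"], DefaultRules()))
    print(build_contest(["Baker", "Able", "Chan"], RotatingRules(1)))

The point of the design is that the core never changes per jurisdiction; only implementations of the interface do.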

As we work further with our collaborators, we will learn how best to structure these interfaces. See the next post, where I will get into how this might affect certification.

Stalking the Errant Voting Machine: the Final Chapter

Some readers may sigh with relief at the news that today’s post is the last (for a while at least!) in a series about the use of vote-count auditing methods to detect a situation in which an election result was garbled by the computers used to create it. Today, a little reality check on the use of the risk-limiting audit methods described earlier. As audit guru Mark Lindeman says,

Risk-limiting audits clearly have some valuable properties, yet no state has ever implemented a risk-limiting audit.

Why not? Despite the rapid development of RLA methods (take a quick glance at this paper to get a flavor), there are several obstacles, including:

  • Basic misconceptions: Nothing short of a full re-count will ever prove the absence of a machine count error. Instead, the goal of RLA is to reduce the risk that machine count errors altered the outcome of any contest in a given election. Election result correctness is the goal, not machine operations correctness — yet the common misperception is often the reverse.
  • Requirements for election audits must be part of state election laws or the regulations that implement them. Details of audit methods are technical and difficult to write into law — and detailed enough that it is perhaps unwise to enshrine them in law rather than regulation. Hence, there is some tension and confusion about the respective roles of states’ legislative and executive branches.
  • Funding is required. Local election officials have to do the work of audits of any kind, and need funding to do so. A standard flat-percent audit is easier for a state to know how to fund than a variable-effort RLA whose cost depends on election margins and voter turnout.
  • The variability itself is a confusing factor, because you can’t know in advance how large an audit will have to be. This fact creates confusion or resistance among policy-makers and under-funded election officials.
  • Election tabulation systems often do not provide timely (or any) access to the data needed to implement these audits efficiently. These systems simply weren’t designed to help election officials do audits — and hence are another variable cost factor.
  • Absentee and early-voting ballots sometimes pose large logistical challenges.
  • Smaller contests are harder to audit to low risk levels, so someone must decide how to allocate resources across various kinds of contests.

As Lindeman points out, each of these problems is tractable, and real progress in RLA practice can be made without a solution to all of these problems. And in my view, one of the best ways to help would be to greatly increase transparency, including both the operations of the voting systems (not just the tabulation components!), and of the auditing process itself. Then we could at least determine which contests in an election are most at risk even after the audits that election officials are able to conduct at present. Perhaps that would also enable experts like Lindeman to conduct unofficial audits, to demonstrate effectiveness and help indicate efforts and costs for official use of RLA.

And dare I say it, we might even enable ordinary citizens to form their own judgment of an individual contest in an election, based on real published facts: the total number of ballots cast in a county, the total number of votes in the contest, the margins in the contest, the total number of precincts, and the precincts officially audited, plus (crank a statistics engine) the actual confidence level in the election result, and whether the official audit was too little, too much, or just right. That may sound ambitious, and maybe it is, but that’s what we’re aiming for with operational transparency of the voting system components of the TTV System, and in particular with the TTV Auditor — currently a gleam in the eye, but picking up steam with efforts from NIST and OASIS on standard data formats for election audit data.
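To illustrate what “crank a statistics engine” might mean, here is a toy calculation, with deliberately simplified assumptions and no claim to be any official audit method. Given a contest’s margin and a crude bound on how many votes a single corrupted precinct could switch, it computes how many precincts would have to be corrupted to flip the outcome, and the chance that a flat-percentage random audit catches at least one of them:

    from math import comb

    def min_precincts_to_flip(margin_votes, max_switched_per_precinct):
        """Crude model: each corrupted precinct can switch at most
        'max_switched_per_precinct' votes from winner to runner-up,
        closing the margin by twice that many votes."""
        return -(-margin_votes // (2 * max_switched_per_precinct))  # ceiling

    def detection_confidence(total_precincts, audited, corrupted):
        """Chance a simple random sample of 'audited' precincts hits at
        least one of 'corrupted' bad precincts (hypergeometric)."""
        miss = comb(total_precincts - corrupted, audited) / comb(total_precincts, audited)
        return 1.0 - miss

    # Made-up contest: 400 precincts, a 5,000-vote margin, and at most
    # 500 switchable votes per precinct; a flat 1% audit = 4 precincts.
    k = min_precincts_to_flip(5000, 500)        # -> 5 precincts
    print(k, detection_confidence(400, 4, k))   # -> ~0.05: far too little

Turning that roughly 5% confidence into, say, 95% is exactly the knob that risk-limiting audits adjust: sample more where margins are tight, and less where they are wide.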

— EJS

California de-re-de-certification for voting machine use

There’s a pretty regular stream of news about activities in the office of California Secretary of State Debra Bowen, de-certifying or re-certifying voting systems following the results of the state’s top-to-bottom review. Rather than making an up-to-the-minute comment, I thought it would be useful to revisit what I think is one of the more notable past scenes in the ongoing drama.
