
Recounts, Russian Hackers, and Misunderstood Claims

There’s a lot of news coverage of the Green Party’s push for recounts. Some is accurate, some is wildly alarmist, but most of what I’ve read misses a key point that you need to understand in order to make up your own mind about these issues, especially claims of Russian hacking.

For example, the University of Michigan’s Dr. Alex Halderman is advising the Green Party, and has been widely quoted recently about the possible attacks that could be made on election technology, especially on the “brains” of a voting system, the Election Management System (EMS) that “programs” all the voting machines and collates their tallies, yet is really just some fairly basic desktop application software running on ancient MS Windows. Though the attacks are sometimes complex to explain, Halderman and others are doing a good job of explaining what is possible in terms of election-result-altering attacks.

In response to these explanations, several news articles note that DHS, DNI, and other government bodies take the view that it would be “extremely difficult” for nation-state actors to exploit these vulnerabilities. I don’t doubt that DHS cyber-security experts would rank exploits of this kind (attacks that are both effective and successful in hiding themselves) as being on the high end of the technical difficulty chart, out there with hacking Iranian uranium enrichment centrifuges.

Here’s the Problem: “extremely difficult” has nothing to do with how likely it is that critical election systems might or might not have been penetrated.

The intrinsic difficulty of an attack is a completely different issue from the capabilities of specific attackers. We know full well that attacks of this kind, while high on technical difficulty, are entirely feasible for a few nation-state adversaries. It’s like noting that a particular class of technical platform diving has a high intrinsic difficulty level beyond the reach of most world-class divers, but also noting that the Chinese team has multiple divers who are capable of performing those dives.

You can’t just say “extremely difficult” and completely fail to check whether one of those well known capable divers actually succeeded in an attempt — especially during a high stakes competition. And I think that all parties would agree that a U.S. Presidential election is pretty high stakes. So …

  • 10 out of 10 points for security experts explaining what’s possible;
  • 10 out of 10 points for DHS and others for assessing those attacks as extremely difficult to carry out;
  • 10 out of 10 points for several news organizations reporting on these complex and scary issues; and
  • 0 out of 10 points for news and media organizations concluding that because some attacks are difficult, they probably didn’t happen.

Personally, I don’t have any reason to believe such attacks occurred, but I’d hate to deter anybody from looking into it, as a result of confusing level of difficulty with level of probability.

— John Sebes

The Root Cause — Long Lines, Late Ballot Counts, and Election Dysfunction in General

I’ve spent a fair bit of time over the last few days digesting a broad range of media responses to the operation of last week’s election, much of it reaction to President Obama’s “we’ve got to fix that” comment in his acceptance speech. There’s a lot of complaining about the long lines, for example, demands for explanations of them, or ideas for preventing them in the future — and similar for the difficulty that some states and counties face in finishing the process of counting the ballots. It’s a healthy discussion for the most part, but one that makes me sad, because it mostly misses the main point: the root cause of most election dysfunction. I can explain that briefly from my viewpoint, and back it up with several recent events.

The plain unvarnished truth is that U.S. local election officials, taken all together as the collective group that operates U.S. federal and state elections, simply do not have the resources and infrastructure to conduct elections that

  • have large turnout and close margins, preceded by much voter registration activity; and
  • are performed with the transparency needed to support public trust that the election was accessible, fair, and accurate.

There are longstanding gaps in the resources needed, which range from ongoing budget for sufficient staff to adequate technology for election administration, voting, counting, and reporting.

Of course, in any given election, there are local election operations that proceed smoothly, with adequate resources and physical and technical infrastructure. But we’ve seen again and again that in every “big” election there is a shifting cast of distressed states or localities (and a few regulars), where administrative snafus, technology glitches, resource limits, and other factors get magnified as a result of high participation and close margins. Recent remarks by Broward County, FL, election officials — among those with the most experience in these matters — really crystallized it for me. When asked about the cause of the long lines, their response (my paraphrase) was that when the election is important, people are very interested in the election, and show up in large numbers to vote.

That may sound like a trivial or obvious response, but consider it just a moment more. Another way of saying it is that their resources, infrastructure, and practices have been designed to be sufficient only for the majority of elections that have less than 50% turnout and few if any state or federal contests that are close. When those “normal parameters” are exceeded, the whole machinery of elections grinds down to a snail’s pace. The result: an election that is, or appears to be, not what we expect in terms of being visibly fair, accessible, accurate, and therefore trustworthy.

In other words, we just haven’t given the election officials in our thousands of localities what they really need to collectively conduct a larger-than-usual, hotly contested election with the excellence that they are required to deliver but are not able to. Election excellence is, as much as any of several other important factors, a matter of resources and infrastructure. If we could somehow fill this gap in infrastructure, and provide sufficient funding and staff to use it, then there would be enormous public benefits: elections that are high-integrity and demonstrably trustworthy, despite being large-scale and close.

That’s my opinion anyway, but let me try to back it up with some specific and recent observations about specific parts of the infrastructure gap, and then how each might be bridged.

  • One type of infrastructure is voter record systems. This year in Ohio, the state voter record system poorly served many LEOs who searched for, but didn’t find, many registered absentee voters to whom they should have mailed absentee ballots. The result was a quarter million voters forced into provisional voting — where, unlike casting a ballot in a polling place, there is no guarantee that the ballot will be counted — and many long days of effort for LEOs to sort through them all. If the early, absentee, and election-night presidential voting in Ohio had been closer, we would still be waiting to hear from Ohio.
  • Another type of infrastructure is pollbooks — both paper and electronic — and the systems that prepare them for an election. As usual in any big election, we have lots of media anecdotes about people who had been on the voter rolls, but weren’t on election day (that includes me, by the way). Every one of these instances slows down the line, causes provisional voting (which also takes extra time compared to regular voting), and contributes to long lines.
  • Then there are the voting machines. For the set of places where voting depends on electronic voting machines, there are always some places where the machines don’t start, take too long to get started, break, or don’t work right. By now you’ve probably seen the viral YouTube video of the touch screen that just wouldn’t record the right vote. That’s just emblematic of the larger situation of unreliable, aging voting systems, used by LEOs who are stuck with what they’ve got and have no funding to try to get anything better. The result: late poll openings, insufficient machines, long lines.
  • And for some types of voting machines — those that are completely paperless — there is simply no way to do a recount, if one is required.
  • In other places, paper ballots and optical scanners are the norm, but they have problems too. This year in Florida, some ballots were huge: six pages in many cases. The older scanning machines physically couldn’t handle the increased volume. That’s bad but not terrible; at least people can vote. However, there are still integrity requirements — for example, voters need to put their unscanned ballots in an emergency ballot box, rather than entrust a marked ballot to a poll worker. But those crazy huge ballots, combined with frequent scanner malfunctions, created overstuffed emergency ballot boxes and poll workers trying to improvise a way to store them. Result: more delays in the time each voter required, and a real threat to the secret ballot and to every ballot being counted.

Really, I could go on about more and more of the infrastructure elements that showed dysfunction in this election, but I expect that you’ve seen plenty already. Why, you ask, is the infrastructure so inadequate to the task of a big, complicated, close election conducted with accessibility, accuracy, security, transparency, and the earning of public trust? Isn’t there something better?

The sad answer, for the most part, is not at present. Thought leaders among local election officials — in Los Angeles and Austin just to name a couple — are on record that current voting system offerings just don’t meet their needs. And the vendors of these systems don’t have the ability to innovate and meet those needs. The vendors are struggling to keep up a decent business, and don’t see the type of large market with ample budgets that would be a business justification for new systems and the burdensome regulatory process to get them to market.

In other cases, most notably with voter records systems, there simply aren’t products anymore, and many localities and states are stuck with expensive-to-maintain legacy systems that were built years ago by big system integrators, that have no flexibility to adapt to changes in election administration, law, or regulation, and that are too expensive to replace.

So much complaining! Can’t we do anything about it? Yes. Every one of these and other election infrastructure breakdowns or gaps can be improved, and the improvements, taken together, could provide immense public benefit if state and local election officials could use them. But where can they come from, especially if the current market hasn’t provided them despite a decade of effort and much federal funding? Longtime readers know the answer: from election technology development that is outside of the current market, breaks the mold, and leverages recent changes in information technology and the business of information technology. Our blog in the coming weeks will have several examples of what we’ve done to help, and what we’re planning next.

But for today, let me be brief with one example, with details on it later. We’ve worked with the state of Virginia to build one part of a new infrastructure for voter registration, voter record lookup, and reporting that meets existing needs and offers needed additions that the older systems don’t have. The Virginia State Board of Elections (SBE) doesn’t pay any licensing fees to use this technology — that’s part of what open source is about. They don’t have to acquire the software, deploy it in their datacenter, and pay additional (and expensive) fees to their legacy datacenter operator, a government systems integrator. They don’t have to go back to the vendor of the old system to pay for expensive but small and important upgrades in functionality to meet new election laws or regulations.

Instead, the SBE contracts with a cloud services provider, who can — for a fraction of the costs in a legacy in-house government datacenter operated by a GSI — obtain the open-source software, integrate it with the hosting provider’s standard hosting systems, test, deploy, operate, and monitor the system. And the SBE can also contract with anyone they choose to create new extensions to the system, with competition for who can provide the best service to create them. The public benefits because people anywhere and anytime can check if they are registered to vote, or should get an absentee ballot, and not wait, as in Ohio, until election day to find out that they are one of a quarter million people with a problem.

And then the finale, of course, is that other states can also adopt this new voter records public portal, by doing a similar engagement with that same cloud hosting provider, or any other provider of their choice that supports similar cloud technology. Virginia’s investment in this new election technology is fine for Virginia, but can also be leveraged by other states and localities.

After many months of work on this and other new election technologies put into practical use, we have many more stories to tell, and more detail to provide. But I think that if you follow along and see the steps so far, you may just see a path towards these election infrastructure gaps getting bridged, and flexibly enough to stay bridged. It’s not a short path, but the benefits could be great: elections where LEOs have the infrastructure to work with excellence in demanding situations, and can tangibly show the public that they can trust the election as having been accessible to all who are eligible to vote, performed with integrity, and yielding an accurate result.

— EJS

At the Risk of Running off the Rails

So, we have a phrase we like to use around here, borrowed from the legal academic world. The phrase “frolic and detour” is used to describe an action or course of conduct when analyzing a nuance of tort negligence. I am taking a bit of a detour and frolicking in an increasingly noisy element of explaining the complexity of our work here. (The detour comes from the fact that, as “Development Officer,” my charge is ensuring the Foundation and projects are financed, backed, supported, and succeed in adoption. The frolic is in the form of the commentary below about software development methodologies, although I am not currently engaged in or responsible for technical development outside of my contributions in UX/UI design.) Yet I won’t attempt to deny that this post is also a bit of promotion for our stakeholders — elections IT officials who expect us to address their needs for formal requirements, specifications, benchmarks, and certification, while embracing the agility and speed of modern development methodologies.

This post was catalyzed by chit-chat at dinner last evening with an energetic technical talent who is jacked-up about the notion of elections technology being an open source infrastructure.  Frankly, in 5 years we haven’t met anyone who wasn’t jacked-up about our cause, and their energy is typically around “damn, we can do this quick; let’s git ‘er done!”  But it is at about this point that the discussion always seems to go a bit sideways.  Let me explain.

I guess I am exposing a bit of old school here, but having had formal training in computer systems science and engineering (years ago), I believe data modeling — especially for database-backed enterprise apps — is an absolute priority.  And the stuff of elections systems is serious technology, requiring a significant degree of fault tolerance, integrity and verification assurance, and, perhaps most important, a sound data model.  And modeling takes time and requires documentation, both of which are nearly antithetical to today’s pop culture of agile development.

Bear in mind, the TTV Project embraces agile methods for UX/UI development efforts. And there are a number of components in the TTV elections technology framework that do not require extensive up-front data modeling and can be developed purely in an iterative environment.

However, we claim that data modeling is critical for certain enterprise-grade elections applications because (as many seasoned architects have observed): [a] the data itself has meaning and value outside of the app that manipulates it, and [b] scalability requires a good DB design, because you cannot just add in scalability later.  The data model or DB design defines the structure of the database and the relationships between the data sets; it is, in essence, the foundation on which the application(s) are built.  A solid DB design is essential to achieve a scalable application.  Which leads to my lingering question:  How do agile development shops design a database?
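
Before I get to the answers I’ve heard, here is a minimal sketch, in generic SQL, of the kind of artifact I mean by a data model.  To be clear, the table and column names below are hypothetical illustrations of the idea, not the TrustTheVote schema:

-- A minimal, hypothetical sketch of an up-front data model for voter records.
-- Table and column names are illustrative only; this is not the TrustTheVote schema.
create table precinct (
  precinct_id INTEGER      PRIMARY KEY,
  county      VARCHAR(64)  NOT NULL,
  name        VARCHAR(128) NOT NULL
);

create table voter (
  voter_id      INTEGER     PRIMARY KEY,
  precinct_id   INTEGER     NOT NULL REFERENCES precinct (precinct_id),
  last_name     VARCHAR(64) NOT NULL,
  first_name    VARCHAR(64) NOT NULL,
  status        VARCHAR(16) NOT NULL,  -- e.g. 'active', 'inactive'
  registered_on DATE        NOT NULL
);

-- Because the entities and relationships are explicit, the database itself can
-- enforce them, and questions about indexes, constraints, and scale can be
-- answered before any application code is written.
create index idx_voter_precinct on voter (precinct_id);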

I’ve heard the “Well, we start with a story...” approach.  And when I ask those whom I really respect as enterprise software architects with real DB design chops, who also respect and embrace agile methodologies, they tend to express reservations about the agile mindset being boorishly applied to truly scalable, enterprise-grade relational DB design that results in a well-performing application and related data integrity.

Friends, I have no intention of hating on agile principles of lightweight development methods — they have an important role in today’s application software development space and an important role here at the Foundation.  But at the same time, I want to try to explain why we cannot simply “bang out” new elections apps for ballot marking, tabulation, or ballot design and generation in a series of sprints and scrums.

First, in all candor, I fear this confusion rests in the reality that fewer and fewer developers today have had a complete computer science education, and cannot really claim to be disciplined software engineers or architects.  Many (not all) have just “hacked” with, and taught themselves, development tools because they built a web site or implemented a digital shopping bag for a friend (much like the well-intentioned developer my wife and I met last evening).

Add in the fact that the formality and discipline of compiled code has given way to the rapid prototyping benefits of interpreted code.  And in the process of this modern training in software development (almost exclusively for the sandbox of the web browser as the UX/UI vehicle), what has been forgotten is that data modeling exists not because it creates overhead and delays, but because it removes such impediments.

Look at this another way.  I like to use building analogies — perhaps because I began my collegiate studies long ago in architectural engineering, before realizing that computer graphics would replace drafting.  There is a reason we spend weeks, sometimes months, traveling past large holes in the ground with towers of re-bar, forms, and concrete pouring, without any clue of what really will stand there once finished.  And yet, later, as the skyscraper takes form, the speed with which it comes together seems to accelerate almost weekly.  Without that foundation carefully laid, the building cannot stand for any extended period of time, let alone bear the dynamic and static weights of its appointments, systems, and occupants.  So too is this the case with complex, highly scalable, fault-tolerant enterprise software — without the foundation of a solid data model, the application(s) will never be sustainable.

I admit that I have been out of production-grade software development (i.e., in-the-trenches coding, compiling, linking, loading, dealing with lint, and running in debug mode) for years, but I can still climb on the bike and turn the pedals.  The fact is, data flow and data model could not be more different, and the former cannot exist without the latter.  It is well understood, and has been demonstrated many times, that one cannot create a data flow out of nothing.  There has to be a base model as the foundation of one or more data flows, each mapping to its application.  Yet in our discussion, punctuated by a really nice wine and great food, this developer seemed to want to dismiss modeling as something that can be done later… perhaps like refactoring (!?)

I am beginning to believe this fixation of modern developers on “rapid,” non-data-model development is misguided, if not dangerous for its latent, time-shifted costs.

Recently, a colleague at another company was involved with the development of a system where no time whatsoever was spent on data model design.  Indeed, the screens started appearing in record time.  The UX/UI was far from complete, but usable.  And the team was cheered as having achieved great “savings” in the development process.  However, when it came time to expand and extend the app with additional requirements, the developers waffled and explained they would have to recode the app in order to meet the new process requirements.  The data was unchanged, but the processes were evolving.  The balance of the project ground to a halt while the first team was dismissed amid arguments over why up-front requirements planning should have been done, and while the client figured out whom to hire to solve it.

I read somewhere of another development project where the work was getting done in 2-week cycles.  They were about 4 cycles away from finishing when a task called “concurrency” appeared on the tracker schedule for the penultimate cycle.  The project subsequently imploded because all of the code had to be refactored (a core entity actually turned out to be two entities).  It turns out that no up-front modeling led to this sequence of events, but unbelievably, the (agile) development firm working on the project spun this as a “positive outcome”; that is, they explained, “Hey, it’s a good thing we caught this a month before go-live.”  Really?  Why wasn’t that caught before that pungent smell of freshly cut code started wafting through the lab?

Spin doctoring notwithstanding, the scary thing to me is that performance and concurrency problems caused by a failure to understand the data are being caught far too late in the Agile development process, which makes it difficult if not impossible to make real improvements.  In fact, I fear that many agile developers have the misguided principle that all data models should be:

create table DATA
  (key   INTEGER,
   stuff BLOB);

Actually, we shouldn’t joke about this.  That idea comes from a scary reality: a DBA (database architect) friend tells of a development team he is interacting with on an outsourced state I.T. project that has decided to migrate a legacy non-Oracle application to Oracle using precisely this approach.  Data that had been stored as records in old ISAM-type files will be stored in Oracle as byte sequences in BLOBs, with an added surrogate generated unique primary key.  When he asked what the point of that approach was, no one at the development shop could give him a reasonable answer other than “in the time frame we have, it works.”  It raises the question: what do you call an Oracle database where all the data in it is invisible to Oracle itself and cannot be accessed and manipulated directly using SQL?  Or said differently, would you call a set of numbered binary records a “database,” or just “a collection of numbered binary records?”
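
To make the contrast concrete, here is a simplified, hypothetical illustration in generic SQL (the names are mine, purely for the example): in the blob design the database can only hand back opaque byte sequences, while in a modeled design the same kind of question is a one-line query.

-- Blob approach (illustrative): the database knows nothing about what it stores.
create table legacy_record (
  record_id INTEGER PRIMARY KEY,  -- surrogate key added during the migration
  payload   BLOB                  -- the old ISAM record, byte for byte
);
-- All SQL can do is fetch bytes; every bit of interpretation lives in application code.
select payload from legacy_record where record_id = 42;

-- Modeled approach (illustrative): the same data, visible to the database.
create table absentee_request (
  request_id   INTEGER     PRIMARY KEY,
  voter_id     INTEGER     NOT NULL,
  requested_on DATE        NOT NULL,
  status       VARCHAR(16) NOT NULL
);
-- Now the database itself can answer questions directly.
select count(*) from absentee_request where status = 'pending';

Only one of these is a database in any meaningful sense of the word.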

In another example of the challenges of agile development in a database-driven app world, a DBA colleague describes being brought in on an emergency contract basis to an Agile project under development on top of Oracle, to deal with “performance problems” in the database.   Turns out the developers were using Hibernate and apparently relied on it to create their tables on an as-needed basis, simply adding a table or a column in response to incoming user requirements and not worrying about the data model until it crawled out of the code and attacked them.

This sort of approach to app development is what I am beginning to see as “hit and run.”  Sure, it has worked so far in the web app world of start-ups: get it up and running as fast as possible, then exit quickly and quietly before they can identify you as triggering the meltdown when scale and performance start to matter.

After chatting with this developer last evening (and listening to many others over recent months lament that we’re simply moving too slowly), I am starting to think of Agile development as a methodology of “do anything rather than nothing, regardless of whether it’s right.”  And this may be to support the perception of rapid progress: “Look, we developed X components/screens/modules in the past week.”  Whether any of this code will stand up to production performance environments is to be determined later.

Another Agile principle is incremental development and delivery.  It’s easy for a developer to strip out a piece of poorly performing code and replace it with a chunk that offers better or different capabilities.  Unfortunately, you just cannot do this in a database.  For example: you cannot throw away old data in old tables and simply create new empty tables.
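
A small, hypothetical example (again generic SQL, with illustrative names) of what that looks like in practice: when a requirement changes after go-live, the existing table and its data have to be migrated in place, not thrown away and rewritten the way a code module can be.

-- Hypothetical scenario: after go-live, each ballot record must also capture
-- which ballot style was used. The existing rows have to survive the change;
-- you cannot simply drop the table and start over.
alter table ballot add column ballot_style_id INTEGER;

-- Backfill the new column from data that already exists elsewhere in the model,
-- then tighten the constraint once every row is consistent.
-- (Exact ALTER/UPDATE syntax varies by database; this is the PostgreSQL flavor.)
update ballot b
   set ballot_style_id = (select p.default_style_id
                            from precinct p
                           where p.precinct_id = b.precinct_id);

alter table ballot alter column ballot_style_id set not null;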

The TrustTheVote Project continues to need the kind of talent this person exhibited last evening at dinner.  But her zeal aside (and obvious passion for the cause of open source in elections), and at the risk of running off the (Ruby) rails here, we simply cannot afford to have these problems happen with the TrustTheVote Project.

Agile methodologies will continue to have their place in our work, but we need to be guided by some emerging realities, and appreciate that for as fast as someone wants to crank out a poll book app or a ballot marking device, we cannot afford to short-cut simply for the sake of speed.  Some may accuse me of being a waterfall Luddite in an agile world; however, I believe there has to be some way to mesh these things, even if it means requirements scrums, data modeling sprints, or animated data models.

Cheers
GAM|out

EAC Guidelines for Overseas Voting Pilots

Last Friday was a busy day for the federal Election Assistance Commission.  They issued their Report to Congress on efforts to establish guidelines for remote voting systems.  And they closed their comment period at 4:00pm for the public to submit feedback on their draft Pilot Program Testing Requirements.

This is being driven by the MOVE Act implementation mandates, which we have covered previously here (and summarized again below).  I want to offer a comment or two on the 300+ page report to Congress and the Pilot program guidelines for which we submitted some brief comments, most of which reflected the comments submitted by ACCURATE, friends and advisers of the OSDV Foundation.

To be sure, the size of the Congressional Report is due to the volume of content in the Appendices including the full text of the Pilot Program Testing Requirements, the NIST System Security Guidelines, a range of example EAC processing and compliance documents, and some other useful exhibits.

Why Do We Care?
The TrustTheVote Project’s open source elections and voting systems framework includes several components useful for configuring a remote ballot delivery service for overseas voters.  And the MOVE Act updates existing federal regulations intended to ensure that voters stationed or residing (not visiting) abroad can participate in elections at home.

A Quick Review of the Overseas Voting Issue
The Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) protects the absentee voting rights of U.S. citizens, including active members of the uniformed services and the merchant marine, and their spouses and dependents, who are away from their place of legal voting residence.  It also protects the voting rights of U.S. civilians living overseas.  Election administrators are charged with ensuring that each UOCAVA voter can exercise their right to cast a ballot.  In order to fulfill this responsibility, election officials must provide a variety of means for these voters to obtain information about voter registration and voting procedures, and to receive and return their ballots.  (As a side note, UOCAVA also establishes requirements for reporting statistics on the effectiveness of these mechanisms to the EAC.)

What Motivated the Congressional Report?
The MOVE (Military and Overseas Voter Empowerment) Act, which became law last fall, is intended to bring UOCAVA into the digital age.  Essentially, it mandates a digital means to deliver a blank ballot.

Note: the law is silent on a digital means to return prepared ballots, although several jurisdictions are already asking the obvious question:  “Why improve only half the round trip of an overseas ballot casting?”

And accordingly, some pilot programs for MOVE Act implementation are contemplating the ability to return prepared ballots.  Regardless, there are many considerations in deploying such systems, and given that the EAC is allocating supporting funds to help States implement the mandates of the MOVE Act, the EAC is charged with ensuring that those monies are allocated to programs adhering to the guidelines it promulgates.  I see it as a “checks and balances” effort to ensure EAC funding is not spent on system failures that put UOCAVA voters’ participation at risk of disenfranchisement.

And this is reasonable given the MOVE Act’s intent.  After all, in order to streamline the process of absentee voting and to ensure that UOCAVA voters are not adversely impacted by the transit delays involved in mail delivery around the world, technology can be used to facilitate overseas absentee voting in many ways, from managing voter registration to balloting, and notably for our purposes:

  • Distributing blank ballots;
  • Returning prepared ballots;
  • Providing for tracking ballot progress or status; and
  • Compiling statistics for UOCAVA-mandated reports.

The reality is, however, systems deployed to provide these capabilities face a variety of threats.  If technology solutions are not developed or chosen so as to be configured and managed using guidelines commensurate with the importance of the services provided and the sensitivity of the data involved, a system compromise could carry severe consequences for the integrity of the election, or the confidentiality of sensitive voter information.

The EAC was therefore compelled to report to Congress and to establish (at least voluntary) guidelines.  And so we commented on those Guidelines, as did colleagues of ours from other organizations.

What We Said – In a Nutshell
Due to the very short comment period, we were unable to dive into the depth and breadth of the Testing Requirements.  And that’s a matter for another commentary.  Nevertheless, here are the highlights of the main points we offered.

Our comments were developed in consultation with ACCURATE; they consisted of (a) underlining a few of the ACCURATE comments that we believed were most important from our viewpoint, and (b) adding a few suggestions for how Pilots should be designed or conducted.  Among the ACCURATE comments, we underscored:

  • The need for a Pilot’s voting method to include a robust paper record, as well as complementary data, that can be used to audit the results of the pilot.
  • Development and publication of security specifications that are testable.

In addition, we recommended:

  • Development of a semi-formal threat model, and comparison of it to threats of one or more existing voting methods.
  • Testing in a mock election, in which members of the public can gain understanding of the mechanisms of the pilot, and perform experimentation and testing (including security testing), without impacting an actual election.
  • Auditing of the technical operations of the Pilot (including data center operations), publication of audit results, and development of a means of accounting for the cost of operating the pilot.
  • Publication of ballot data, cast vote records, and the results of auditing them, without compromising the anonymity of the voter and the ballot.
  • Post-facto reporting on means and limits of scaling the size of the pilot.

You can bet this won’t be the last we’ll hear about MOVE Act pilot issues; I think it’s just the second inning of an interesting ball game…
GAM|out

Setting a Technology Agenda for Overseas Voting

I have arrived in Munich, reached my hotel and actually caught a nap.  It was a sloppy slushy day here from what I can tell; about 30 degrees and some wet snow; but spring is around the corner.  On the flight over the Pole last evening (I’m a horrible plane sleeper) I worked on final preparations for our Technology Track at this year’s UOCAVA Summit (which I wrote about yesterday).  I thought I’d share some more about this aspect of the Conference.  This is another long post, but for those who cannot be in Munich at this conference, here are the details.

Historically, as I see it, the Summit has been primarily a policy discourse.  While the Overseas Vote Foundation always has digital services to show off in the form of their latest Web facilities to support overseas voters, Summit has historically focused on efforts to comply with, enforce, and extend UOCAVA (the Uniformed and Overseas Citizens Absentee Voting Act).  This year, with the passage of the MOVE Act (something I also wrote about yesterday), a new track of topics, discussion, and even debate has surfaced, and it is of a technical nature.  This is in principle why the Overseas Vote Foundation approached the OSDV Foundation about sponsorship and co-hosting.  We thought about it, and agreed to both.

Then came the task of actually putting together an agenda, topics, speakers, and content.

I owe a tremendous “thank you” to all of the Panelists we have engaged, and to Dr. Andrew Appel of Princeton, our Chief Technology Officer John Sebes, and our Director of Communications, Matthew Douglass, for their work in helping produce this aspect of Summit.  Our Director of Outreach Strategy, Sarah Nelson, should be included here for her logistics and advance work in Munich.  And of course, I would be remiss if I left out the fearless and brilliant leader of the OVF, Susan Dzieduszycka-Suinat, for all of her coordination, production work, and leadership.

A quick note about Andrew:  I’ve had the privilege of working with Professor Appel on two conferences now.  Many are aware that one of our track sessions is going to be a debate on so-called “Internet Voting” and that Dr. Appel will give the opening background talk.  I intend to post another article tomorrow on the Debate itself.  But I want to point out something now that certain activists may not want to hear (let alone believe).  While Andrew’s view of Internet-based voting systems is well known, there can be no doubt of his interest in a fair and balanced discourse.  Regardless of his personal views, I have witnessed Andrew go to great lengths to examine all sides and build arguments for and against using public packet-switched networks for public ballot transactions.  So, although several are challenging his giving the opening address, which in their view taints the effort to produce a fair and balanced event, I can state for a fact that nothing is further from the truth.

Meanwhile, back to the other Track events.

We settled on 2 different Panels to advance the discussion of technology in support of the efforts of overseas voters to participate in stateside elections:

  1. MOVE Act Compliance Pilot Programs – titled: “Technology Pilots: Pros and Cons, Blessing or Curse”
  2. Technology Futures – titled: “2010 UOCAVA Technology Futures”

Here are the descriptions of each and the Panelists:

Technology Pilots: Pros and Cons, Blessing or Curse

The title is the work of the Conference Sponsor, OVF, but we agree that the phrase, “Technology Pilots” trips wildly different switches in the minds of various UOCAVA stakeholders.  The MOVE Act requires the implementation of pilots to test new methods for U.S. service member voting.  For some, it seems like a logical step forward, a natural evolution of a concept; for others pilots are a step onto a slippery slope and best to avoid at all costs. This panel will discuss why these opposing views co-exist, and must continue to do so.

  • Paul Docker, Head of Electoral Strategy, Ministry of Justice, United Kingdom
  • Carol Paquette, Director, Operation BRAVO Foundation
  • Paul Stenbjorn, President, Election Information Services
  • Alec Yasinsac, Professor and Dean, School of Computer and Information Sciences, University of South Alabama

Moderator:
John Sebes, Chief Technology Officer, TrustTheVote Project (OSDV Foundation)

2010 UOCAVA Technology Futures

UOCAVA is an obvious magnet for new technologies that test our abilities to innovate.  Various new technologies now emerging and how they are coming into play with UOCAVA voting will be the basis of discussion.  Cloud computing, social networking, centralized database systems, open source development, and data transfer protocols: these are all aspects of technologies that can impact voting from overseas, and they are doing so.

  • Gregory Miller, Chief Development Officer, Open Source Digital Voting Foundation
  • Pat Hollarn, President, Operation BRAVO Foundation
  • Doug Chapin, Director, Election Initiatives, The Pew Center on the States
  • Lars Herrmann, Red Hat
  • Paul Miller, Senior Technology and Policy Analyst, State of Washington
  • Daemmon Hughes, Technical Development Director, Bear Code
  • Tarvi Martens, Development Director, SK (Estonia)

Moderator:
Manuel Kripp, Competence Center for Electronic Voting

The first session is very important in light of the MOVE Act implementation mandate.  Regardless of where you come down on the passage of this UOCAVA update (as I like to refer to it), it is now federal law, and compliance is compulsory.  So, the session is intended to inform the audience of the status of, and plans for, pilot programs to test various ways to actually do at least two things, and for some (particularly in the Military), a third:

  1. Digitally enable remote voter registration administration so an overseas voter can verify and update (as necessary) their voter registration information;
  2. Provide a digital means of delivering an official blank ballot for a given election jurisdiction, to a requesting voter whose permanent residence is within that jurisdiction; and for some…
  3. Examine and test pilot digital means to ease and expedite the completion and return submission of the ballot (the controversy bit flips high here).

There are, as you might imagine, a number of ways to fulfill those mandates using digital technology.  And the third ambition raises the most concern.  Because it almost certainly involves the Internet (or more precisely, public packet-switched networks), the activists against the use of the Internet in elections administration, let alone voting, are railing against such pilots, preferring to find another means to comply with the so-called “T-45 Days” requirement of placing an official ballot in the hands of an overseas voter, lest we begin the slide down the proverbial slippery slope.

Here’s where I go rogue for a paragraph or two (whispering)…
First, I’m racking my brain here trying to imagine how we might achieve the MOVE Act mandates using a means other than the Internet.  Here’s the problem: other methods have been tried and have failed, which is why as many as 1 in 4 overseas voters are disenfranchised now, and why Sen. Schumer (D-NY) pushed so hard for the MOVE Act in the first place.  Engaging in special alliances with logistics companies like FedEx has helped, but has not resolved the cycle-time issues completely.  And the U.S. Postal Service hasn’t been able to completely deliver either (there is, after all, this overseas element, which sometimes means reaching voters in the mountainous back regions of, say, Pakistan).  Sure, I suppose the U.S. could invest in new ballot delivery drones, but my guess is we’d end up accidentally papering innocent natives in a roadside drop due to a technology glitch.

Seriously though (whispering still), perhaps a reasonable way forward may be to test pilot limited uses of the Internet (or heck, perhaps even military extensions of it) to carry non-sensitive election data, which can reach most of the farther outposts today through longer-range wireless networks.  So, rather than investing ridiculous amounts of taxpayer dollars in finding non-Internet means to deliver blank ballots, one proposal floating around is to figure out the best, highest-integrity solution using packet-switched networks already deployed, and perhaps limit use of the Internet solely to [1] managing voter registration data, and [2] delivering blank ballots for subsequent return by means other than e-mail or web-based submission (until such time as we can work out the vulnerabilities on the “return loop”).  While few can dispute the power of ballot marking devices to avoid under-voting and over-voting (among other things), there is trepidation about even that, let alone digital submission of the completed ballot.  As far as pilots go, it seems we can make some important headway on solving the challenges of overseas voter participation with the power of the Internet without having to jump from courier mule to complete Internet voting in one step.  That observed, IMHO, R&D resulting in test pilots responsibly advances the discussion.

Nevertheless, the slippery slope glistens in the dawn of this new order.  And while we’ll slide around a bit on it in these panels, the real sliding sport is the iVoting Debate this Friday — which I will say more about tomorrow.

OK, back from rogue 😉

So, that is where the first Panel is focused and where those presentations and conversations are likely to head in terms of pilots.  In my remaining space (oops, I see I’ve gone way over already, sorry), let me try to quickly comment on the second panel regarding “technology futures.”

I think this will be the most enjoyable panel, even if not the liveliest (that’s reserved for the iVoting Debate).  The reason this ought to be fun is that we’ll engage in a discussion of a couple of things about where technology can actually take us in a positive way (I hope).  First, there should be some discussion about where election technology reform is heading.  After all, there remain essentially two major commercial voting systems vendors in the industry, controlling some 88% of the entire nation’s voting technology deployment, with one of the two holding a white-knuckled grip on roughly 76% market share.  And my most recent exposure to discussions amongst commercial voting vendors about the future of voting technology suggests that their idea of the future amounts to discussing the availability of spare parts (seriously).

So, I’m crossing my fingers that this panel will open up discussion about all kinds of technology impacts on the processes of elections and voting – from the impact of social media to the opportunities of open source.  I know that for my 5-minute part I am going to roll out the TTV open source election and voting systems framework architecture and run through the 4-5 significant innovations the TrustTheVote Project is bringing to the future of voting systems in a digital democracy.  Each speaker will take 5 minutes to rush through their topic, then our moderator Manuel will open it wide up for, hopefully, an engaging discussion with our audience.

OK, I’ve gone way over my limit here; thanks for reading all about this week’s UOCAVA Summit Technology Track in Munich.

Now, time to find some veal Bratwurst und ausgezeichnetes Bier.  There is a special meaning to my presence here; my late parents are both from this wonderful country, and their families ended up in München, from which both were forced out in 1938.  Gute Nacht und auf Wiedersehen!

GAM|out

Tim Bray on the way Enterprise Systems are built (compared to open source)

Tim Bray is one of the main people behind XML, so he has some serious cred in the world of building and deploying systems. So it was with interest (and some palpable butterflies) that I read a recent missive of his: “Doing it Wrong”.

I don’t know how much of what he says is relevant to what we at TrustTheVote are doing and how we are doing it, but it makes for interesting and highly relevant reading. I do know that many of his examples are very different from elections technology, in fundamental ways, and for many many reasons. So there’s no one-to-one correlation, but listen to what he says:

“Doing it Wrong: Enterprise Systems, I mean. And not just a little bit, either. Orders of magnitude wrong. Billions and billions of dollars worth of wrong. Hang-our-heads-in-shame wrong. It’s time to stop the madness.” (from “Doing it Wrong” from Tim Bray)

and:

“What I’m writing here is the single most important take-away from my Sun years, and it fits in a sentence: The community of developers whose work you see on the Web, who probably don’t know what ADO or UML or JPA even stand for, deploy better systems at less cost in less time at lower risk than we see in the Enterprise.” (from “Doing it Wrong” from Tim Bray)

and:

“The Web These Days · It’s like this: The time between having an idea and its public launch is measured in days not months, weeks not years. Same for each subsequent release cycle. Teams are small. Progress is iterative. No oceans are boiled, no monster requirements documents written.” (from “Doing it Wrong” from Tim Bray)

and:

“The point is that that kind of thing simply cannot be built if you start with large formal specifications and fixed-price contracts and change-control procedures and so on. So if your enterprise wants the sort of outcomes we’re seeing on the Web (and a lot more should), you’re going to have to adopt some of the cultures and technologies that got them built.” (from “Doing it Wrong” from Tim Bray)

All of these quotes are from Tim Bray’s “Doing it Wrong”. I suggest reading it.

OSDV Responds to FCC Inquiry about Internet Voting

The Federal Communications Commission (FCC) asked for public comment on the use of the Internet for election-related activities (among other digital democracy related matters).  They recently published the responses, including those from OSDV.  I’ll let Greg highlight the particularly public-policy-related questions and answers, but I wanted to highlight some aspects of our response that differ from some others.

  • Like many respondents, we commented on that slippery phrase “Internet voting”, but focused on a few of the specific issues that apply particularly in the context of overseas and military voters.
  • Also in that context, we addressed some uses of the Internet that could be very beneficial, but are not voting per se.
  • We contrasted other countries’ experiences with elections and the Internet with the rather different conditions here in the U.S.

For more information, of course, I suggest reading our response. In addition, for those particularly interested in Internet voting and security, you can get additional perspectives from the responses of TrustTheVote advisors Candice Hoke and David Jefferson, which are very nicely summarized on the Verified Voting blog.

— EJS

Wired: Nation’s “First” Open Source Election Software

Wired’s Kim Zetter reported on our Hollywood Hill event, in an article titled “Nation’s First Open Source Election Software Released.”  I got a few questions about that “First” part, and I thought I’d share a few personal thoughts about it.

First of all, there is certainly plenty of open source software that does election-related stuff, as a few searches on GitHub and SourceForge will show you. And there are other organizations that have had open source election software as part of their activities. FairVote’s work on IRV software (some done by TTV’s own Aleks Totic) is a notable example. Another is Ben Adida’s work (on crypto-enabled ballot-count verification, among other things), represented in his Helios system, recently used in a real university election. And Helios is only one of several such systems.

Now, what is “first” about OSDV’s release of a part of the election tech suite that we’re developing in the TTV project? In my own view, one first is that the software is targeted specifically at U.S. elections, and at providing automation of election operations in ways that match the existing practices and needs of U.S. elections officials. The many well-meaning efforts on open-source Web apps for Internet voting, for example, are laudable work, but not what most election officials actually need right now or can legally deploy and use for U.S. government elections.

So I think that it is a first indeed, when you combine that factor with all the other attributes of open-source, non-proprietary, open-data, operations-transparent, and so forth. It’s not exactly a great invention to do what we’re doing: pick a target for deployment; talk to the people who work there; find out what they want, and how to deliver it without asking them to also change the way that they do their work. Applied to election tech development, that approach is fundamental to what we do, and to whether our work is “first.”

— EJS

“Adoptability” and Sustainability of the TrustTheVote Project

Ok, so rumors of my being radio silent for months due to my feeble attempts to restore my software development skills are greatly unfounded.  I’ve been crazy busy with outreach to States’ elections officials, as our design and specification work is driven by their domain expertise.  In the midst of that, I received a question/comment from a Gartner analyst, Brian Prentice, whom I consider to be very sharp on a number of topics around emerging technologies, trends, and open source.  If you have a chance you should definitely check out his blog.

In any event, I thought it would be interesting to simply post my reply to his inquiry here, to potentially shed some light on our strategy and mindset here at the OSDV Foundation and the TrustTheVote Project in particular.  So with that, here it is…

Greetings Brian
I am replying directly (as Marie requested).  Please let me focus on your specific question with regard to the TrustTheVote Project and States’ participation to ensure viability, “adoptability” (sic), and of course, sustainability.  Quickly, if I may, I owe you a background brief in ~150 words…

I can understand if you’re wondering “Who are these guys, and if they matter, why haven’t I heard of them?” Fair enough.  Backed by the likes of Mitch Kapor, a team of notable Silicon Valley technologists has been (intentionally) quietly plowing away on a hard problem: restoring trust in America’s voting technology by producing transparent, high-assurance systems… and helping them to be freely deployed as “public infrastructure.”  To avoid the trouble of announcing vaporware (and because we have no commercial agenda wrapped up in a competitive first-mover-advantage stunt), we’ve remained under the PR radar (except for those avid OSS folks who have been following our activities on the Net).  Now we’re being pressed by many to go public, given the level of work we’re accomplishing and the momentum we’re achieving.  So, here we are.  Now, to your question:

To what extent has the TrustTheVote Project displaced bespoke state-specific VR efforts?  The success of any open source project is directly related to the vitality of the community that supports it.  As I would see it, TTV needs states to move away from their own software solutions and instead contribute to the TTV project.

1. All states that register voters are under a HAVA mandate to provide for a centralized voter registration database, and to varying extents they are either self-vending or looking to outside (expensive) proprietary solutions.

2. Early on in our nearly 3-year-old project, we recognized that we did not want to build the ideal “Smithsonian solution” (i.e., an elegant solution that no one adopted, but that made a perfect example of how it could’ve and should’ve been done).  Therefore, we realized that amongst all stakeholders in America’s elections systems, the States’ elections directors and local elections jurisdictions’ officials are on the front lines and arguably have the most at stake — they succeed or fail by their decisions on what technology to choose and deploy to manage elections.   So, we created a stakeholder community we affectionately call the “Design Congress,” comprised of States’ elections directors.   Ideally, at full implementation, we will have all 50 states and 5 territories represented.   Currently, 18 states have expressed interest at some level, and about 12-15 are committed, on board, and advising us.  In many cases, we even have Secretaries of State themselves involved.

3. The TTV Project’s voter registration system is part of a larger elections management system we’re designing and building — under the advice and counsel of those very States’ elections directors and other domain experts who are actively “weighing in.”  We use a process very similar to that of the IETF (Internet Engineering Task Force) called the “RFC” or “Request for Comment.”

4. In the case of our Voter Registration Design Specification, we were encouraged by a number of States to freely adopt specs of their own, and in fact, CA encouraged us to look closely at theirs as a basis. We did.  In other cases (such as our work on the Ballot Design Studio, ballot casting/counting services, tabulators, etc.), some States are freely contributing to our overall code base (their “IP” is generally paid for by taxpayer dollars, and they cannot sell it but can give it away, so they are eager to contribute to this public digital works project).

5. So, you are 110% correct in your observations, and the TrustTheVote Project is already fully on track with you in building a strong stakeholder community to drive the design and specifications of all parts of the voting technology ecosystem we’re examining, re-thinking, designing, developing, and offering in an open source manner.

Our goal is simple: create accountable, reliable, transparent, and trustworthy elections and voting systems that are publicly owned “critical democracy infrastructure.”

And our work is gaining the attention of folks from the U.S. DoJ, the Obama Administration’s OSTP, the American Enterprise Institute, the Brookings Institution, several universities, States’ Secretaries, and of course, folks like Rock The Vote and the Overseas Vote Foundation.

I (and our CTO or anyone here appropriate) would love an opportunity to brief you further; not because we have anything to promote or sell in the commercial sense, but because a growing group of some of the best in technology and public policy sectors are working together in a purely philanthropic manner, to produce something we think is vitally important to our democracy.

Cheers
Gregory Miller, JD
Chief Foundation Development Officer
Open Source Digital Voting Foundation

Arizona: a New Definition of “Sufficiently” Mis-Counted?

There’s a fascinating nugget inside a fine legal story unfolding in Arizona. I know that not all our readers are thrilled by news of court cases related to election law and election technology, so I’ll summarize the legal story in brief, and then get to the nugget. The Arizona Court of Appeals has been working on a case that considers this interesting mix:

  • The State’s constitutional right of free and fair elections;
  • The recognition that voting systems can mis-count votes;
  • The idea that a miscounted election fails to be fair;
  • The certification for use in AZ of voting system products that had counting errors before;
  • The argument over whether certified systems can be de-certified on constitutional grounds.

For the latest regular press news on the case, see the Arizona Daily Star’s article “Appeals court OKs group’s challenge to touch-screen voting.”

Now let’s look at what Judge Philip Hall actually said in the decision (thanks to Mark Lindeman for trolling this out). The judge refers to a piece of AZ law, A.R.S. § 16-446(B)(6), that says: “An electronic voting system shall . . . [w]hen properly operated, record correctly and count accurately every vote cast.” That “every” is a pretty strong word! Judge Hall wrote:

We conclude that Arizona’s constitutional right to a “free and equal” election is implicated when votes are not properly counted. See A.R.S. §16-446(B)(6). We further conclude that appellants may be entitled to injunctive and/or mandamus relief if they can establish that a significant number of votes cast on the Diebold or Sequoia DRE machines will not be properly recorded or counted.

As election-ologist Joe Hall pointed out, “Of course, I’m left wondering ‘what is significant?’ here. Sounds like a question we’ll hear a lot about in the future of this case!” Indeed we will. Of course, neither AZ law nor the legal ruling provides a prescription for “significant,” but note also that “significant” may be a relative concept, depending on how close a race is. (Thanks again to Mark Lindeman for the point.) We know it’s pretty easy for today’s voting systems to miscount modest numbers (hundreds) of votes and escape the notice of humans; and we know that contests that close will occur. Does that mean we can’t use these voting systems?

I guess the argument is going to continue, both on “significant” in Hall’s decision, and on “properly operated” in the AZ law. And as we saw in Humboldt County and many other places, “operator error” is often in the eye of the beholder.

— EJS