
Elections + National Security = Hardware Threats + Policy Questions

U.S. election technology is increasingly regarded as critical to national interests. In discussions about the national-level importance of election technology, I’ve also increasingly heard the term “national security” used. The idea seems to be that election technology is as important as other national-security-critical systems. That’s fair enough in principle, but at present we are a long way from any critical piece of election technology – such as machines for casting and counting ballots – being manufactured, operated, and protected like other systems that currently do meet the definition of “national security systems”.

However, there is one element of national security systems (NSSs) that I believe is overlooked by, or unfamiliar to, many observers of election technology as critical infrastructure. NSSs have to address hardware-level threats by containing their risk with a set of practices called supply chain risk management (SCRM). Perhaps hardware threats have been overlooked by some national policy makers because of the policy issue that I’ll close with today.

The Hardware Threat

I’d like to explain why hardware-level threats are more feasible to address than many other challenges of re-inventing election technology to meet national security threats. But first I should explain what’s usually meant by hardware-level threats, and where supply chains come into it.

Hardware-level threats exist because it’s possible for an adversary to craft malicious hardware components that work just like the regular component, but also contain hidden logic that makes the component misbehave. To take one simplistic example, a malicious optical disk drive might faithfully copy the contents of a DVD-R when requested, except in special circumstances, such as installing a particular operating system. In that special case, it might deliver a maliciously modified copy of a critical OS file, effectively compromising the system that the drive is installed in.

To those not familiar with the concept, it might seem fanciful that a nation-state actor would engage in such activities: target a specific device manufacturer; create malicious hardware components; inject them into the supply chain of the manufacturer so that malicious hardware components become part of its products. But, in fact, such attacks have happened, and on systems that could have a significant impact on defense or intelligence.

That’s why one of the basic requirements for national security systems is that their manufacturers take active steps to reduce the risk of such attacks, in part by operating a rigorous SCRM program. Though unfamiliar to many, the concepts and practices have been around for almost a decade.

Since the inception of the Comprehensive National Cybersecurity Initiative (CNCI) at that time, many defense- and intelligence-related systems have been procured using SCRM methods specifically because of hardware threats. In fact, the DoD likely has the most experience in managing a closed supply chain and qualifying vendors based on their SCRM programs.

SCRM for Election Technology

What might this mean for the future of election technology that is genuinely treated as a national security asset? It means that such systems would eventually have to be manufactured like national security systems. Significant efforts to increase voting technology security would almost demand it; the value of those efforts would be significantly undercut by leaving the hardware Achilles’ heel unaddressed.

What would that look like? One possible future:

  • Some government organization operates a closed supply chain program; perhaps piggybacking on existing DoD programs.
  • Voting technology manufacturers source their hardware components from the hardware vendors in this program.
  • Voting technology manufacturers would operate an SCRM program, with similar types of documentation and compliance requirements (a rough sketch of that kind of provenance documentation follows this list).
  • Voting technology operators – election officials – would cease their current practice of replacing failing components with parts sourced on the open market.
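To make “documentation and compliance requirements” slightly more concrete, here is a minimal sketch of the kind of provenance record an SCRM program might keep for a single hardware component. The structure, field names, and values are illustrative assumptions only, not any DoD, EAC, or manufacturer format.

```python
# Purely illustrative: one component's provenance record in a hypothetical SCRM program.
component_provenance = {
    "component": "optical-scan ballot reader, model OS-100",   # hypothetical part
    "supplier": "vendor qualified under the closed supply chain program",
    "lot_number": "LOT-2018-0042",
    "chain_of_custody": [
        {"holder": "approved fabricator", "transfer_date": "2018-03-01"},
        {"holder": "voting system manufacturer", "transfer_date": "2018-03-15"},
    ],
    "tamper_evidence": "seal A-91724 intact on receipt",
}

# A compliance reviewer would expect every component in a finished voting machine
# to trace back to a record like this one, with no gaps in the custody chain.
```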

This would be a big change from the current situation. How would that change come about? Hence the open issue for policy makers …

Policy Issues and Questions

The opportunity for voting system vendors to benefit from a managed closed supply chain might actually be feasible in the short term. But how would that come about? And what would motivate the vendors to take advantage of it, and to expend the funds to set up and operate an SCRM program?

To me, this is an example of a public good (reduced risk of our elections being attacked) that doesn’t obviously pencil out as profit: the manufacturer gets no clear return on the investment (“ROI”) in additional manufacturing processes and compliance efforts. So I suppose that for this to work, some external requirement would have to be imposed (just as the DoD and other parts of the Federal government do for their vendors of NSSs) to obligate manufacturers to incur those costs as part of the business of voting technology, and to choose how to pass the costs along to, eventually, taxpayers.

However, in this case, the Federal government has no direct role in regulating the election technology business. That’s the job of each State: to decide which voting systems are allowed to be used by its localities, and to decide which technology companies to contract with for IT services supporting state-operated election technology for voter registration and election management. But States don’t have the existing expertise in SCRM that Federal organizations do.

So, there is plenty of policy analysis to do before we could have a complete approach to addressing hardware-level threats to elections. But there’s one thing that could be done in the near term, without defining a complete solution. Admittedly, it’s a bit of a “build it and they might show up” approach, based on a possible parallel case.

Parallel to Certification

The best parallel I know of is with voting system certification. Currently, about half the States require that a voting system manufacturer successfully complete an evaluation and certification program run by the Federal government’s Election Assistance Commission (EAC). That’s a prerequisite for the State’s certification. A possible future parallel would be (a) for the Federal government to perform supply chain regulation functions and compliance monitoring of manufacturers, and (b) for States to voluntarily choose whether to require manufacturers to participate. The Federal function might be performed by an organization that already supports supply chain security, which would set up a parallel program for election technology and offer its use to manufacturers of election technology of all kinds. If that were available, perhaps vendors might dip a toe in the water, or States might begin to decide whether they want to address hardware threats. Even if this approach worked, there would still be the question of how all this might apply to all the critical election technology that isn’t machines for casting and counting ballots. But at least it would be a start.

That’s pretty speculative, I admit, but at least it is a start that can be experimented with in the relatively near term – certainly in time for the 2020 elections, which will use election systems that are newer than today’s decade-plus-old systems but that inside have the same vulnerabilities as today’s technology. Hardware assurance won’t fix software vulnerabilities, but it would make it much more meaningful to attempt to fix them, with the hardware Achilles’ heel on its way to being addressed.

— EJS

Dismantling Federal Assistance to US Elections — The Freeze/Thaw Cycle

Last time I wrote in this series on the EAC being dismantled, I used the metaphor of freezing and thawing to describe not only how the EAC’s effectiveness has been limited, but also the consequence:

We now have voting systems that have been vetted with standards and processes that are almost as Jurassic as the pre-Internet era.

This time I need to support my previous claims by explaining the freeze/thaw cycle in more detail, and connecting it to the outcome of voting systems that are not up to today’s job, as we now understand it, post-2016.

The First Try

EAC’s first try at voting system quality started after the year-2000 hanging-chad debacle, and after the Help America Vote Act (HAVA) that was designed to fix it. During the period from 2004 to 2006, the EAC was pretty busy defining standards and requirements (technically “guidelines,” because states are not obligated to adopt them) for the then-next-gen voting systems, and setting up processes for testing, review, and certification.

That first try was “good enough” for getting started on a way out of the hanging-chad morass, but was woefully inadequate in hindsight. The beginning of a second try resulted in the 2007 recommendations to significantly revise the standards, because hindsight showed that the first try rested on some assumptions that weren’t so good in practice. My summary of those assumptions:

  • Electronic Voting Machines (EVMs) were inherently better than paper-based voting, not just for accessibility (which is a true and important point) but also for reliability, accuracy, and many other factors.
  • It’s OK if EVMs are completely paperless, because we can assume that the hardware and software will always make an accurate and permanent digital record of every voter’s choice.
  • The then-current PC technology was good enough for both EVMs and back-office systems, because that PC tech was good enough for desktop computing.
  • Security and quality are important, and can be “legislated” into existence by written standards and requirements, and a test process for evaluating whether a voting system meets those requirements.

Even in 2007, and certainly even more since then, we’ve seen that what these assumptions actually got us was not what we really wanted. My summary of what we got:

  • Voting machines lacking any means for people to cross-check the work of the black-box hardware and software, to detect malfunctions or tampering.
  • Voting machines and back-office systems that election officials can only assume are unmodified, un-tampered copies of the certified systems, but can’t actually validate.
  • Voting machines and back-office systems based on decades old PC technology, with all the security and reliability limitations thereof, including the ready ability of any software to modify the system.
  • Voting system software that passed testing, but when opened up for independent review in California and in Ohio, was found to be rife with security and quality problems.

Taken together, that meant that election tech broadly was physically unreliable, and very vulnerable, both to technological mischance and to intentional meddling. A decade ago, we had much less experience than today with the mischances that early PC tech is prone to. At the time, we also had much less sensitivity to the threats and risks of intentional meddling.

Freeze and Thaw

And that’s where the freeze set in. The 2007 recommendations have been gathering dust since then. A few years later, the freeze set in on the EAC as well, which spent several years operating without a quorum of congressionally approved commissioners, unable to change much – including certification standards and requirements.

That changed a couple of years ago. One of the most important things that the new commissioners have done is to re-vitalize the process for modernizing the standards, requirements, and processes for new voting systems. And that re-vitalization is not a moment too soon, just as most of the nation’s states and localities have been replacing decaying voting machines with “new” voting systems that are not substantially different from what I’ve described above.

That’s where the huge irony lies – after over a decade of inactivity, the EAC has finally gotten its act together to try to become an effective voting system certification body for the future — and it is getting dismantled.

It is not just the EAC that’s making progress. The EAC works with NIST, a Technical Guidelines Working Group (TGWC), and many volunteers from many organizations (including ours) working in several groups focused on helping the TGWC. We’ve dusted off the 2007 recommendations, which address how to fix at least some of those consequences I listed above. We’re writing detailed standards for interoperability, so that election officials have more choice about how to acquire and operate voting tech. I could go on about the range of activity and potential benefits, but the point is, there is a lot currently being built that is poised to be frozen again.

A Way Forward?

I believe that it is vitally important, indeed a matter of national security, that our election tech makes a quantum leap forward to address the substantial issues of our current threat environment, and the economic and administrative environment that our hardworking election officials face today.

If that’s to happen, then we need a way to not get frozen again, even if the EAC is dismantled. A look at various possible ways forward will be the coda for this series.

— EJS

The Freeze Factor – Dismantling Federal Assistance to U.S. Elections

“Frozen” is my key word for what happens to the voting system certification process after EAC is dismantled. And in this case, frozen can be really harmful. Indeed, as I will explain, we’ve already seen how harmful.

Last time I wrote in this series on the EAC being dismantled (see the first and second posts), I said that EAC’s certification function is more important than ever. To re-cap:

  • Certification is the standards, requirements, testing, and seal-of-approval process by which local election officials gain access to new election tech.
  • The testing is more important than ever, because of the lessons learned in 2016:

1. The next gen of election technology needs to be not only safe and effective, but also …

2. … must be robust against whole new categories of national security threats, which the voting public only became broadly aware of in late 2016.

Today it’s time to explain just how ugly it could get if the EAC’s certification function gets derailed. Frozen is the starting point, because frozen is exactly where EAC certification has been for over a decade, and as a result, voting system certification is simply not working. That sounds harsh, so let me first explain the critical distinction between standards and process, and then give credit where credit is due to the hardworking EAC folks running the certification process.

  • Standards comprise the critical part of the voting system certification program. Standards define what a voting system is required to do. They define a test lab’s job for determining whether a voting system meets these requirements.
  • Process is the other part of the voting system certification program: the set of activities that the players – mainly a voting system vendor, a test lab, and the EAC – must collectively step through to get to the Federal “seal of approval” that is the starting point for state election officials to make their decisions about which voting systems to allow in their state.

Years’ worth of EAC efforts have improved the process a great deal. By contrast, the standards and requirements have been frozen for over a decade. During that time, here is what we got in the voting systems that passed the then-current and still-current certification program:

Black-box systems that election officials can’t validate, for voting that voters can’t verify, with software that despite passing testing, later turned out to have major security and reliability problems.

That’s what I mean by a certification program that didn’t work, based solely on today’s outcome – election tech that isn’t up to today’s job, as we now understand the job to be, post-2016. We are still stuck with the standards and requirements of the process that did not and does not work. While today’s voting systems vary a bit in terms of verifiability and insecurity, what’s described above is the least common denominator that the current certification program has allowed to get to market.

Wow! Maybe that actually is a good reason to dismantle the EAC – it was supposed to foster voting technology quality, and it didn’t work. Strange as it may sound, that assessment is actually backwards. The root problem is that, as a Federal agency, the EAC had been frozen itself. It got thawed relatively recently, and has been taking steps to modernize voting system standards and certification. In other words, just when the EAC has thawed out and is starting to re-vitalize voting system standards and certification, it is getting dismantled – and that at a time when we have only recently understood how vulnerable our election systems are.

To understand the significance of what I am claiming here, I will have to be much more specific in my next segment about the characteristics of the certification that didn’t work, and how the fix started over a decade ago, got frozen, and has been thawing. When we understand the transformational value of the thaw, we can better understand what we need in terms of a quality program for voting systems, and how we might get to such a quality program if the EAC is dismantled.

— EJS

Cancellation of Federal Assistance to US Elections — The Good, The Bad, and The Geeky

Recently I wrote about Congress dismantling the only Federal agency that helps states and their local election officials ensure that the elections that they conduct are verifiable, accurate, and secure — and transparently so, to strengthen public trust in election results. Put that way, it may sound like dismantling the U.S. Election Assistance Commission (EAC) is both a bad idea, and also poorly timed after a highly contentious election in which election security, accuracy, and integrity were disparaged or doubted vocally and vigorously.

As I explained previously, there might be a sensible case for shutdown with a hearty “mission accomplished” — but only with a narrow view of the original mission of the EAC. I also explained that since its creation, the EAC’s evolving role has come to include duties that are uniquely imperative at this point in U.S. election history. What I want to explain today is that evolved role, and why it is so important now.

Suppose that you are a county election official in the process of buying a new voting system. How do you know that what you’re buying is a legit system that does everything it should do, and reliably? It’s a bit like a county hospital administrator considering adding new medications to their formulary — how do you know that they are safe and effective? In the case of medications, the FDA runs a regulatory testing program and approves medications as safe and effective for particular purposes.

In the case of voting systems, the EAC (with support from NIST) has an analogous role: defining the requirements for voting systems, accrediting test labs, defining requirements for how labs should test products, reviewing test labs’ work, and certifying those products that pass muster. This function is voluntary for states, who can choose whether and how to build their certification program on the basis of federal certification. The process is not exactly voluntary for vendors, but since they understandably want to have products that can work in every state, they build products to meet the requirements and pass Federal certification. The result is that each locality’s election office has a state-managed approved product list that typically includes only products that are Federally certified.

Thus far the story is pretty geeky. Nobody gets passionate about standards, test labs, and the like. It’s clear that the goals are sound and the intentions are good. But does that mean that eliminating the EAC’s role in certification is bad? Not necessarily, because there is a wide range of opinion on the EAC’s effectiveness in running the certification process. However, recent changes have shown that the stakes are much higher, and the role of requirements, standards, testing, and certification is more important than ever. The details about those changes will be in the next installment, but here is the gist: we are in the middle of a nationwide replacement of aging voting machines and related election tech, and in an escalating threat environment with global adversaries targeting U.S. elections. More of the same-old-same-old isn’t nearly good enough. But how would election officials gain confidence in new election tech that’s not only safe and effective, but robust against whole new categories of threat?

— EJS

Accurate Election Results in Michigan and Wisconsin Are Not a Partisan Issue

[Image: county-level election results map – courtesy of Alex Halderman’s Medium article]

In the last few days, we’ve been getting several questions that are variations on:

Should there be recounts in Michigan in order to make sure that the election results are accurate?

For the word “accurate” people also use any of:

  • “not hacked”
  • “not subject to voting machine malfunction”
  • “not the result of a tampered voting machine”
  • “not poorly operated voting machines” or
  • “not falling apart unreliable voting machines”

The short answer to the question is:

Maybe a recount, but absolutely there should be an audit because audits can do nearly anything a recount can do.

Before explaining that key point, a nod to the University of Michigan computer scientists pointing out why we don’t yet have full confidence in the election results in their State’s close presidential election, and possibly other States as well. A good summary is here, and an even better explanation is here.

A Basic Democracy Issue, not Partisan

The not-at-all partisan or even political issue is election assurance – giving the public every assurance that the election results are the correct results, despite the fact that bug-prone computers and human error are part of the process. Today, we don’t know what we don’t know, in part because the current voting technology not only fails to meet the three (3) most basic technical security requirements, but really doesn’t support election assurance very well. And we need to solve that! (More on the solution below.)

A recount, however, is a political process and a legal process that’s hard to see as anything other than partisan. A recount can happen when one candidate or party looks for election assurance and does not find it. So it is really up to the legal process to determine whether to do a recount.

While that process plays out let’s focus instead on what’s needed to get the election assurance that we don’t have yet, whether it comes via a recount or from audits — and indeed, what can be done, right now.

Three Basic Steps

Leaving aside a future in which the basic technical security requirements can be met, right now, today, there is a plain pathway to election assurance for the recent election. This path has three basic steps that election officials can take.

  1. Standardized Uniform Election Audit Process
  2. State-Level Review of All Counties’ Audit Records
  3. State Public Release of All Counties’ Audit Records Once Finalized

The first step is the essential auditing process that should happen in every election in every county. Whether we are talking about the initial count or a recount, it is essential that humans do the required cross-check of the computers’ work to detect and correct any malfunction, regardless of origin. That cross-check is a ballot-polling audit, where humans manually count a batch of paper ballots that the computers counted, to see if the human results and machine results match. It has to be a truly random sample, and it needs to be statistically significant, but even in a close election, it is far less work than a recount. And it works regardless of how a machine malfunction was caused, whether hacking, manipulation, software bugs, hardware glitches, or anything else.
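To make the cross-check concrete, here is a minimal sketch in Python of the comparison step: draw a random sample of ballot batches, hand-count them, and flag any batch where the human and machine tallies disagree. The batch names, tallies, sample size, and seed are illustrative assumptions, not real data or any state’s prescribed audit procedure.

```python
import random

def audit_sample(machine_counts, hand_count, sample_size, seed=20161108):
    """machine_counts: dict of batch_id -> {candidate: machine tally}.
    hand_count: function(batch_id) -> {candidate: human tally} from counting the paper.
    Returns the sampled batches where the human and machine results disagree."""
    rng = random.Random(seed)  # in practice the seed would come from a public random drawing
    sampled = rng.sample(sorted(machine_counts), sample_size)
    discrepancies = []
    for batch_id in sampled:
        human = hand_count(batch_id)
        if human != machine_counts[batch_id]:
            discrepancies.append((batch_id, machine_counts[batch_id], human))
    return discrepancies

# Illustrative use with made-up numbers: 20 batches, 3 sampled.
machine = {f"precinct-{i}": {"Candidate A": 100 + i, "Candidate B": 95} for i in range(20)}
hand_count_stub = lambda batch: machine[batch]  # stand-in for the humans' manual count
print(audit_sample(machine, hand_count_stub, sample_size=3))  # [] means no discrepancies found
```

Choosing the sample size so that the result is statistically significant is the part that real audit procedures specify; the sketch only shows the mechanical comparison.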

This first step should already have been taken by each county in Michigan, but at this point it is hard to be certain. Though less work than a recount, a routine ballot polling audit is still real work, and made harder by the current voting technology not aiding the process very well. (Did I mention we need to solve that?)

The second step should be a state-level review of all the records of the counties’ audits. The public needs assurance that every county did its audit correctly, and further, documented the process and its findings. If a county can’t produce detailed documentation and findings that pass muster at the State level, then alas the county will need to re-do the audit. The same would apply if the documentation turned up an error in the audit process, or a significant discrepancy between the human count and the machine count.

That second step is not common everywhere, but the third step would be unusual but very beneficial and a model for the future: when a State is satisfied that all counties’ election results have been properly validated by ballot polling audit, the State elections body could publicly release all the records of all the counties’ audit process. Then anyone could independently come to the same conclusion as the State did, but especially election scientists, data scientists, and election tech experts. I know that Michigan has diligent and hardworking State election officials who are capable of doing all this, and indeed do much of it as part of the process toward the State election certification.

This Needs to Be Solved – and We Are

The fundamental objective for any election is public assurance in the result.  And where the election technology is getting in the way of that happening, it needs to be replaced with something better. That’s what we’re working toward at the OSET Institute and through the TrustTheVote Project.

No one wants the next few years to be dogged by uncertainty about whether the right person is in the Oval Office or the Senate. That will be hard for this election because of the failing voting machines that were not designed for high assurance. But America must say never again, so that two years and four years from now, we have election infrastructure in place that was designed from the ground up and purpose-built to make it far easier for election officials to deliver election results and election assurance.

There are several matters to address:

  • Meeting the three basic security requirements;
  • Publicly demonstrating the absence of the vulnerabilities in current voting technology;
  • Supporting evidence-based audits that maximize confidence and minimize election officials’ efforts; and
  • Making it easy to publish detailed data in standard formats that enable anyone to drill down as far as needed to independently assess whether audits really did the job right.

All that and more!

The good news (in a shameless plug for our digital public works project) is that’s what we’re building in ElectOS. It is the first openly public and freely available set of election technology; an “operating system” of sorts for the next generation of voting systems, in the same way that Android is the basis for much of today’s mobile communication and computing.

— John Sebes

More on CyberScoop Coverage of Voting Machine Vulnerabilities

CyberScoop‘s Chris Bing wrote a good summary of the response to Cylance’s poorly timed announcement of old news on voting machine vulnerabilities: Security Firm Stokes Election Hacking Fears.

I have a couple of details to add, but first let me re-iterate that the system in question does have vulnerabilities which have been well known for years, and reference exploits are old news. Sure, Cylance techs did write some code to create a new variant on previous exploits, but as Princeton election security expert Andrew Appel noted, the particular exploit was detectable and correctable, unlike some other hacks.

Regardless of whether Cylance violated the unwritten code of reporting on new vulnerabilities only, and regardless of good intentions vs. fear-mongering effects, the basic premise is wrong.

You can’t expect election officials to modify critical voting systems in response to a blog. In fact, election officials should not be modifying software at all, and should modify hardware only for breakage replacement.

Perhaps the folks at Cylance didn’t know that there are very special and very specific rules for modifying voting systems. Here are 5 details about how it really works:

  • The hardware and software of voting systems are highly regulated, and modifications can only be made following regulatory review.
  • Even if this were a new vulnerability, and even if there were what some would claim is an easy fix, it would still require the vendor to act, not the election officials. Vendors would have to make the fix, and re-do their testing, then re-engage for testing by an accredited test lab (at the vendor’s expense), and then go back to government certification of the test lab’s finding.
  • Election officials are barred from “patching” or any kind of unsupervised modification. This makes a lot of sense, if you think about it: if someone representing the vendor wants to modify these systems, each of 10,000+ local election bodies is supposed to ensure that only legitimate changes happen? That’s not feasible, even if it were legal.
  • Local election officials are required to do pre-election testing for machines’ “logic and accuracy,” and they must not use machines that have not passed such testing, which in some localities must also be signed off by an elections board. Making even a legitimate, certified change to a system 4 days before an election would invalidate it for use on election day. Consider early voting! It has really been many weeks since modifications of any kind were allowed.
  • So there is no way that a disclosure like this, with this timing, could ever be viewed as responsible by anyone who understands how voting tech is regulated and operated. I expect that it didn’t occur to the Cylance folks that there might be special rules about voting systems that would make disclosure 4 days before, or even 4 weeks before, completely impractical for any benefit. But regardless of a possible upside, it ought to have been clear that there is considerable downside to fear-mongering about the integrity of an election mere days before election day – especially this one.

And that would still be the case if this were a new finding.  Which it isn’t.

Making a new variant exploit on a vulnerability well known for some time is just grandstanding, and most responsible security folks steer clear of that to maintain their reputation.  I can’t fathom why Cylance in this case behaved so at variance with the unwritten code of ethical vulnerability research. I hope it was just impulsive behavior based on a genuine concern about the integrity of our elections.  The alternative would be most unfortunate.

— John Sebes, CTO

State Certification of Future Voting Systems — 3 Points of Departure

In advance of this week’s EVN Conference, we’ve been talking frequently with colleagues at several election-oriented groups about the way forward from the current voting system certification regime. One of the topics for the EVN conference is a shared goal for many of us: how to move toward a near-future certification regime that gives state election officials more control, customization, and tailoring of the certification process, to better serve the needs of their local election officials.

Election Standards – Part Two

Last time I reported on the first segment of the annual meeting of the Voting Systems Standards Committee (VSSC) of the IEEE. Most of that segment was about the soon-to-be standard for election definition and election results (called VSSC.2). I recapped some of the benefits of data standards, using that as an example. Much of the rest of day one was related to that standard in a number of ways. First, we got an update on the handful of major comments submitted during the earlier review periods, and provided input on how to resolve them and finalize the standard document. Second, we reviewed suggestions for other standards that might be good follow-on efforts.

What’s in a Name?

One example of the comments concerned the issue that I mentioned previously (about aggregation): the object identifiers in one VSSC.2 dataset might have no resemblance to identifiers in another dataset, even though the two datasets were referring to some of the same real-world items described by the objects. We discussed a couple of existing object-naming standards for election data, FIPS and OCD-IDs. FIPS is a federal standard for numeric identification of states, counties, townships, municipalities, and related minor civil divisions (as well as many other things not related to elections).

That’s useful because those types of real-world entities are important objects in election definitions, and it’s very handy to have standard identifiers for them. However, FIPS is not so useful because there are loads of other kinds of real-world entities that are electoral districts but are not covered by FIPS. In fact, there are so many of them, and in such variety, that no one really knows all of the types of districts in use in the U.S. So we really don’t have a finished standard naming scheme for U.S. electoral districts. We also discussed the work of the Open Civic Data project, specifically their identifier scheme and repository, abbreviated as OCD-IDs.
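For a concrete, purely illustrative comparison of the two schemes, here is the same county identified both ways. The FIPS code and OCD division ID shown are public identifiers for Wayne County, Michigan, but the little mapping structure (and the second, hypothetical district) is just an assumed example, not part of the VSSC.2 standard.

```python
# The same real-world jurisdiction, named two different ways.
wayne_county = {
    "fips": "26163",  # numeric: state 26 (Michigan) + county 163 (Wayne)
    "ocd_id": "ocd-division/country:us/state:mi/county:wayne",  # hierarchical string
}

# The gap discussed above: many electoral districts are not civil divisions and so
# have no FIPS code at all. The district and its OCD-style ID below are hypothetical.
example_district = {
    "fips": None,
    "ocd_id": "ocd-division/country:us/state:mi/county:wayne/school_district:example",
}

print(wayne_county["ocd_id"], example_district["fips"])
```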

More on that in the report from Day 2, but to make a long story short, the consensus was that the VSSC.2 standard was just fine without a unique ID scheme, and that a new standard specifically for standardized IDs was not a pressing need right now.

Supporting Audits

That’s one possible new standard, related to the .2 standard, that we considered and deferred. Two others got the thumbs up, at least at the level of agreement to form a study group, which is the first step. One case was pretty limited: a standard for cast-vote records (CVRs) to support ballot audits with an interoperable data format. To only slightly simplify, one common definition of a CVR is a record of exactly what a ballot-counting device recorded from a specific ballot, the votes of which are included in a vote tally created by the device. Particularly helpful is the inclusion of a recorded image of the ballot. With that, a person who is part of a typical ballot audit can go through a batch of ballots (typically all from the same precinct) and decide whether their human judgment agrees with the machine’s interpretation, based on the human’s understanding of relevant state election law.
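As a rough illustration of what such an interoperable record might carry, here is a minimal sketch of one CVR. The field names, identifiers, and contest names are assumptions for illustration only, not the schema the study group would standardize.

```python
# One cast-vote record (CVR): the counting device's interpretation of one ballot,
# plus a pointer to the ballot image that a human auditor can review.
cast_vote_record = {
    "cvr_id": "scanner-042-000137",            # unique per device and sequence number
    "device_id": "scanner-042",
    "batch_id": "precinct-7-batch-3",          # ties the CVR to an auditable batch of ballots
    "ballot_image": "images/scanner-042-000137.png",
    "contests": {                               # the machine's interpretation of the marks
        "County Clerk": ["Candidate A"],
        "Proposal 1": ["Yes"],
    },
}

# An auditor pulls the image, reads the ballot under relevant state law, and checks
# whether that judgment matches what the device recorded under "contests".
```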

Support for audits with CVRs is a fundamental requirement for voting systems, so this standard is pretty important. Its scope is limited enough that I hope we can get it done relatively quickly.

More to Come

The other study group will be looking at the rather large issue of standardizing an election definition beyond the level of the .2 standard. That standard is very useful for a number of purposes (including election result reporting) but is intentionally limited, not trying to be a comprehensive standard. The study group will be looking at some use cases that might guide the definition of a smaller scope, which could be a timely, right-sized step from .2 toward a truly comprehensive standard. My personal goal, which I think many share, is to look at the question of what else, besides what we already have in .2, is needed for an election definition that could support ballot layout, at least at the level of sample ballots. I like that, of course, because we already documented the TrustTheVote requirements for it when we developed the sample-ballot feature of the Voter Services Portal.

Onward to day 2!

— EJS

PS: For more on the Voter Services Portal … the production version of the VSP in Virginia is at https://www.vote.virginia.gov, and the demo version is described at PCEA’s web site http://www.supportthevoter.gov and at http://web.mit.edu/vtp/ovr3.html, with an interactive version at http://va-demo.voterportal.trustthevote.org

A Northern Exposed iVoting Adventure

Alaska’s extension to its iVoting venture may have raised the interest of at least one journalist for one highly visible publication. When we were asked for our “take” on this form of iVoting, we thought that we should also comment here on this “northern exposed adventure.” (Apologies to fans of the mid-90s wacky TV series of a similar name.)

Alaska has been among the states that allow military and overseas voters to return marked absentee ballots digitally, starting with fax, then email, and then adding a web upload as a third option. Focusing specifically on the web-upload option, the question was: “How is Alaska doing this, and how do their efforts square with common concerns about security, accessibility, Federal standards, testing, certification, and accreditation?”

In most cases, any voting system has to run that whole gauntlet through to accreditation by a state, in order for the voting system to be used in that state. To date, none of the iVoting products have even tried to run that gauntlet.

So, what Alaska is doing, with respect to security, certification, and host of other things is essentially: flying solo.

Their system has not gone through any certification program (State, Federal, or otherwise, as far as we can tell); hasn’t been tested by an accredited voting system test lab; and nobody knows how it does or doesn’t meet federal requirements for security, accessibility, and other (voluntary) specifications and guidelines for voting systems.

In Alaska, they’ve “rolled their own” system.  It’s their right as a State to do so.

In Alaska, military voters have several options, and only one of them is the ability to go to a web site, indicate their vote choices, and have their votes recorded electronically — no actual paper ballot involved, no absentee ballot affidavit or signature needed. In contrast to the sign/scan/email method of returning an absentee ballot and affidavit (used in Alaska and 20 other states), this is straight-up iVoting.

So what does their experience say about all the often-quoted challenges of iVoting?  Well, of course in Alaska those challenges apply the same as anywhere else, and they are facing them all:

  1. insider threats;
  2. outsider hacking threats;
  3. physical security;
  4. personnel security; and
  5. data integrity (including that of the keys that underlie any use of cryptography)

In short, the Alaska iVoting solution faces all the challenges of digital banking and online commerce that every financial services industry titan and eCommerce giant spends big $ on every year (capital and expense), and yet still routinely suffers attacks and breaches.

Compared to those technology titans of industry (banking, finance, technology services, or even the Department of Defense), how well are Alaskan election administrators doing on their (by comparison) shoestring budget?

Good question. It’s not subject to annual review (like banks’ IT operations audits for SAS-70), so we don’t know. That also is their right as a U.S. state. However, the fact that we don’t know does not debunk any of the common claims about these challenges. Rather, it simply says that in Alaska they took on the challenges (which are large) and the general public doesn’t know much about how they’re doing.

To get a feeling for the risks involved, just consider one point: think about the handful of IT geeks who manage the iVoting servers where the votes are recorded and stored as bits on a disk. They are not election officials, and they are no more entitled to stick their hands into paper ballot boxes than anybody else outside a county elections office. Yet they have the ability (though not the authorization) to access those bits.

  • Who are they?
  • Does anybody really oversee their actions?
  • Do they have remote access to the voting servers from anywhere on the planet?
  • Using passwords that could be guessed?
  • Who knows?

They’re probably competent, responsible people, but we don’t know. Not knowing any of that, every vote on those voting servers is actually a question mark — and that’s simply being intellectually honest.

Lastly, to get a feeling for the possible significance of this lack of knowledge, consider a situation in which Alaska’s electoral college votes swing an election, or where Alaska’s Senate race swings control of Congress (not far-fetched given Murkowski‘s close call back in 2010.)

When the margin of victory in Alaska, for an election result that effects the entire nation, is a low 4-digit number of votes, and the number of digital votes cast is similar, what does that mean?

It’s quite possible that that many digital votes could be cast in the next Alaska Senate race. If the contest is that close again, think about the scrutiny those IT folks will get. Will they be evaluated any better than every banking data center investigated after a data breach? Any better than Target? Any better than Google’s or Adobe’s IT management after having trade secrets stolen? Or any better than the operators of military unclassified systems that for years were penetrated by hackers located in China, who were likely supported by the Chinese Army or intelligence groups?

Probably not.

Instead, they’ll be lucky (we hope) like the Estonian iVoting administrators, when the OSCE visited back in 2011 to have a look at the Estonian system. Things didn’t go so well. The OSCE found that one guy could have undermined the whole system. Good news: it didn’t happen. Cold comfort: that one guy didn’t seem to have the opportunity — most likely because he and his colleagues were busier than a one-armed paper hanger during the election, worrying about Russian hackers attacking again, after the hackers had previously shut down the whole country’s Internet-connected government systems.

But so far the threat is remote, and it is still early days even for small-scale usage of Alaska’s iVoting option. While the threat remains remote, though, it might be good for the public to see more about what’s “under the hood” and who’s in charge of the engine — that would be our idea of more transparency.

<soapbox>

Wandering off the Main Point for a Few Paragraphs
So, in closing I’m going to run the risk of being a little preachy here (signaled by that faux HTML tag above); again, probably due to the surge in media inquiries recently about how the Millennial generation intends to cast their ballots one day.  Lock and load.

I (and all of us here) are all for advancing the hallmarks of the Millennial mandates of the digital age: ease and convenience.  I am also keenly aware there are wing-nuts looking for their Andy Warhol moment.  And whether enticed by some anarchist rhetoric, their own reality distortion field, or most insidious: the evangelism of a terrorist agenda (domestic or foreign) …said wing nut(s) — perhaps just for grins and giggles — might see an opportunity to derail an election (see my point above about a close race that swings control of Congress or worse).

Here’s the deep concern: I’m one of those who believes that the horrific attacks of 9.11 had little to do with body count or the implosions of western icons of financial might.  The real underlying agenda was to determine whether it might be possible to cause a temblor of sufficient magnitude to take world financial markets seriously off-line, and whether doing so might cause a rippling effect of chaos in world markets, and what disruption and destruction that might wreak.  If we believe that, then consider the opportunity for disruption of the operational continuity of our democracy.

It’s not that we are Internet haters: we’re not — several of us came from Netscape and other technology companies that helped pioneer the commercialization of that amazing government and academic experiment we call the Internet. It’s just that THIS Internet and its current architecture simply were not designed to be inherently secure or to ensure anyone’s absolute privacy (and strengthening one necessarily means weakening the other).

So, while we’re all focused on ease and convenience, and we live in an increasingly distributed democracy, and the Internet cloud is darkening the doorstep of literally every aspect of society (and now government too), great care must be taken as legislatures rush to enact new laws and regulations to enable studies, or build so-called pilots, or simply advance the Millennial agenda to make voting a smartphone experience. We must be very careful and considerably vigilant, because it’s not beyond the realm of reality that some wing-nut is watching, cracking their knuckles in front of their screen and keyboard, mumbling, “Oh please. Oh please.”

Alaska has the right to venture down its own path in the northern territory, but in doing so it exposes an attack surface. They need not (indeed, cannot) see this enemy from their back porch (I really can’t say of others). But just because it cannot be identified at the moment doesn’t mean it isn’t there.

</soapbox>

One other small point: as a research and education non-profit, we’re asked why we shouldn’t be “working on making Internet voting possible.” Answer: perhaps in due time. We do believe that on the horizon, responsible research must be undertaken to determine how we can offer an additional digital alternative for casting a ballot, next to the absentee and polling place experiences. And that “digital means” might be over the public packet-switched network, or maybe some other type of network. We’ll get there. But candidly, our charge for the next couple of years is to update an outdated architecture of existing voting machinery and elections systems and bring about substantial, but still incremental, innovation that jurisdictions can afford to adopt, adapt, and deploy. We’re taking one thing at a time and first things first; or as our former CEO at Netscape used to say, we’re going to “keep the main thing, the main thing.”

Onward
GAM|out