
Elections + National Security = Hardware Threats + Policy Questions

U.S. election technology is increasingly regarded as critical to national interests. In discussions about the national-level importance of election technology, I’ve also increasingly heard the term “national security” used. The idea seems to be that election technology is as important as other national-security-critical systems. That’s fair enough in principle, but at present we are a long way from any critical piece of election technology – such as machines for casting and counting ballots – being manufactured, operated, and protected like other systems that currently do meet the definition of “national security systems”.

However, there is one element of national security systems (NSSs) that I believe is overlooked or unfamiliar to many observers of election technology as critical infrastructure. NSSs have to address hardware level threats by containing their risk using a set of practices called supply chain risk management (SCRM). Perhaps hardware threats have been overlooked by some national policy makers because of the policy issue that I’ll close with today.

The Hardware Threat

I’d like to explain why hardware level threats are more feasible to address than many other challenges of re-inventing election technology to meet national security threats. But first I should explain what’s usually meant by hardware level threats, and where supply chains come into it.

Hardware level threats exist because it’s possible for an adversary to craft malicious hardware components that work just like a regular component, but also contain hidden logic that makes them misbehave. To take one simplistic example, a malicious optical disk drive might faithfully copy the contents of a DVD-R when requested, except in special circumstances, such as installing a particular operating system. In that special case, it might deliver a maliciously modified copy of a critical OS file, effectively compromising the hardware that the system is installed in.
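To make the idea concrete, here is a purely illustrative sketch in Python of the trigger-and-payload behavior described above. This is not firmware from any real device; all filenames and contents are hypothetical, chosen only to show how a component can behave correctly in every test except on its hidden trigger condition.

```python
# Conceptual sketch of a "hardware trojan" style behavior (hypothetical
# names throughout): the component answers every request faithfully,
# except when it recognizes one trigger, where it swaps in a payload.

FAITHFUL_COPY = {
    "boot.cfg": "trusted contents",
    "kernel.img": "trusted kernel",
}
MALICIOUS_PAYLOAD = "tampered kernel"

def read_file(filename: str) -> str:
    """Simulate a compromised drive: faithful reads for everything,
    except one trigger file that silently receives a modified payload."""
    if filename == "kernel.img":      # hidden trigger condition
        return MALICIOUS_PAYLOAD      # deliver the tampered copy
    return FAITHFUL_COPY[filename]    # otherwise behave normally
```

The point of the sketch is that ordinary functional testing, which exercises the common path, sees only correct behavior; the malicious branch fires only under the adversary's chosen condition.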

To those not familiar with the concept, it might seem fanciful that a nation-state actor would engage in such activities: target a specific device manufacturer; create malicious hardware components; inject them into the supply chain of the manufacturer so that malicious hardware components become part of its products. But, in fact, such attacks have happened, and on systems that could have a significant impact on defense or intelligence.

That’s why it is a basic requirement of national security systems that their manufacturers take active steps to reduce the risk of such attacks, in part by operating a rigorous SCRM program. Though unfamiliar to many, the concepts and practices have been around for almost a decade.

Since the inception of the Comprehensive National Cybersecurity Initiative (CNCI), many defense- and intelligence-related systems have been procured using SCRM methods specifically because of hardware threats. In fact, the DoD likely has the most experience in managing a closed supply chain and qualifying vendors based on their SCRM programs.

SCRM for Election Technology

What might this mean for the future of election technology that is genuinely treated as a national security asset? It means that in the future, such systems would eventually have to be manufactured like national security systems. Significant efforts to increase voting technology security would almost demand it; those efforts’ value would be significantly undercut by leaving the hardware Achilles heel unaddressed.

What would that look like? One possible future:

  • Some government organization operates a closed supply chain program; perhaps piggybacking on existing DoD programs.
  • Voting technology manufacturers source their hardware components from the hardware vendors in this program.
  • Voting technology manufacturers would operate an SCRM program, with similar types of documentation and compliance requirements.
  • Voting technology operators – election officials – would cease their current practice of replacing failing components with parts sourced on the open market.

This would be a big change from the current situation. How would that change come about? Hence the open issue for policy makers …

Policy Issues and Questions

The opportunity for voting system vendors to benefit from a managed closed supply chain might actually be possible in the short term. But how would that come about? What would motivate the vendors to take that benefit, and to expend the funds to set up and operate an SCRM program?

To me, this is an example of a public good (reduced risks to our elections being attacked) that doesn’t obviously pencil out as profit where the manufacturer gets a return on the investment (“ROI”) of additional costs for additional manufacturing process and for compliance efforts. So, I suppose that in order for this to work, some external requirement would have to be imposed (just as the DoD and other parts of the Federal government do for their vendors of NSSs) to obligate manufacturers to incur those costs as part of the business of voting technology, and choose how to pass the costs along to, eventually, taxpayers.

However, in this case, the Federal government has no direct role regulating the election technology business. That’s the job of each State: to decide which voting systems are allowed to be used by their localities, and to decide which technology companies to contract with for IT services related to state-operated election technology for voter registration and election management. But States don’t have the existing expertise in SCRM that Federal organizations do.

So, there is plenty of policy analysis to do before we could have a complete approach to addressing hardware level threats to elections. But there’s one thing that could be done in the near term, without defining a complete solution. Admittedly, it’s a bit of a “build it and they might show up” approach, based on a possible parallel case.

Parallel to Certification

The best parallel I know of is voting system certification. Currently, about half the States require that a voting system manufacturer successfully complete an evaluation and certification program run by the Federal government’s Election Assistance Commission (EAC). That’s a prerequisite for the State’s certification. A possible future parallel would be a] for the Federal government to perform supply chain regulation functions and compliance monitoring of manufacturers, and b] for States to voluntarily choose whether to require manufacturers’ participation. The Federal function might be performed by an organization that already supports supply chain security, which would set up a parallel program for election technology and offer its use to manufacturers of election technology of all kinds. If that were available, perhaps vendors might dip a toe in the waters, or States might begin to decide whether they want to address hardware threats. Even if this approach worked, there would remain the question of how all this might apply to all the critical election technology that isn’t machines for casting and counting ballots. But at least it would be a start.

That’s pretty speculative, I admit, but at least it is a start that can be experimented with in the relatively near term – certainly in time for the 2020 elections, which will use election systems newer than today’s decade-plus-old systems but with the same vulnerabilities inside. Hardware assurance won’t fix software vulnerabilities, but it would make it much more meaningful to attempt to fix them, with the hardware Achilles heel on the way to being addressed.

— EJS

Dismantling Federal Assistance to US Elections — The Freeze/Thaw Cycle

Last time I wrote in this series on the EAC being dismantled, I used the metaphor of freezing and thawing to describe not only how the EAC’s effectiveness has been limited, but also the consequence:

We now have voting systems that have been vetted with standards and processes that are almost as Jurassic as the pre-Internet era.

This time I need to support my previous claims by explaining the freeze/thaw cycle in more detail, and connecting it to the outcome of voting systems that are not up to today’s job, as we now understand it, post-2016.

The First Try

EAC’s first try at voting system quality started after the year 2000 election hanging chad debacle, and after the Help America Vote Act (HAVA) that was designed to fix it. During the period of 2004 to 2006, the EAC was pretty busy defining standards and requirements (technically “guidelines” because states are not obligated to adopt them) for the then-next-gen of voting systems, and setting up processes for testing, review, and certification.

That first try was “good enough” for getting started on a way out of the hanging chad morass, but was woefully inadequate in hindsight. A beginning of a second try resulted in the 2007 recommendations to significantly revise the standards, because the hindsight then showed that the first try had some assumptions that weren’t so good in practice. My summary of those assumptions:

  • Electronic Voting Machines (EVMs) were inherently better than paper-based voting, not just for accessibility (which is a true and important point) but also for reliability, accuracy, and many other factors.
  • It’s OK if EVMs are completely paperless, because we can assume that the hardware and software will always make an accurate and permanent digital record of every voter’s choice.
The then-current PC technology was good enough for both EVMs and back-office systems, because that PC tech was good enough for desktop computing.
  • Security and quality are important, and can be “legislated” into existence by written standards and requirements, and a test process for evaluating whether a voting system meets those requirements.

Even in 2007, and certainly even more since then, we’ve seen that what these assumptions actually got us was not what we really wanted. My summary of what we got:

  • Voting machines lacking any means for people to cross-check the work of the black-box hardware and software, to detect malfunctions or tampering.
  • Voting machines and back-office systems that election officials can only assume are unmodified, un-tampered copies of the certified systems, but can’t actually validate.
  • Voting machines and back-office systems based on decades old PC technology, with all the security and reliability limitations thereof, including the ready ability of any software to modify the system.
  • Voting system software that passed testing, but when opened up for independent review in California and in Ohio, was found to be rife with security and quality problems.

Taken together, that meant that election tech broadly was physically unreliable, and very vulnerable, both to technological mischance and to intentional meddling. A decade ago, we had much less experience than today with the mischances that early PC tech is prone to. At the time, we also had much less sensitivity to the threats and risks of intentional meddling.

Freeze and Thaw

And that’s where the freeze set in. The 2007 recommendations have been gathering dust since then. A few years later, the freeze set in on EAC as well, which spent several years operating without a quorum of congressionally approved commissioners, and not able to change much – including certification standards and requirements.

That changed a couple of years ago. One of the most important things that the new commissioners have done is to re-vitalize the process for modernizing the standards, requirements, and processes for new voting systems. And that re-vitalization is not a moment too soon, just as most of the nation’s states and localities have been replacing decaying voting machines with “new” voting systems that are not substantially different from what I’ve described above.

That’s where the huge irony lies – after over a decade of inactivity, the EAC has finally gotten its act together to try to become an effective voting system certification body for the future — and it is getting dismantled.

It is not just the EAC that’s making progress. The EAC works with NIST, a Technical Guidelines Working Group (TGWC), and many volunteers from many organizations (including ours) working in several groups focused on helping the TGWC. We’ve dusted off the 2007 recommendations, which address how to fix at least some of those consequences I listed above. We’re writing detailed standards for interoperability, so that election officials have more choice about how to acquire and operate voting tech. I could go on about the range of activity and potential benefits, but the point is, there is a lot currently being built that is poised to be frozen again.

A Way Forward?

I believe that it is vitally important, indeed a matter of national security, that our election tech makes a quantum leap forward to address the substantial issues of our current threat environment, and the economic and administrative environment that our hardworking election officials face today.

If that’s to happen, then we need a way to not get frozen again, even if the EAC is dismantled. A look at various possible ways forward will be the coda for this series.

— EJS

Kudos to EAC for Exploring Critical Nature of Election Infrastructure

Kudos to EAC for this week’s public hearing on election infrastructure as critical infrastructure! After the 2016 election cycle, I think that there is very little disagreement that election infrastructure (EI) is critical, in the sense of: vital, super-important, a matter of national security, etc. But this hearing is a bit of a turning point. I’ll explain why in terms of the discussion before the hearing and the aftermath, and then I will make my one most important point about action going forward. I’ll close with specific recommended steps forward.

Prior Negativity

Prior to this hearing, I heard and read a lot of negativity about the idea that EI is “critical infrastructure” (CI) in the specific sense of homeland security policy. Yes, late last year, DHS did designate EI as CI, specifically as a sub-sector of the existing CI sector for government systems. And that caused alarm and the negativity I referred to, ranging from honest policy disagreement (what are the public policy ramifications of designation?) to par-for-the-course political rhetoric (an unprecedented Federal takeover of elections, infringement of states’ rights, etc.), and just plain “fake news” (DHS hackers breaking Federal laws to infiltrate state-managed election systems).

The fracas has been painful to me especially, as someone with years of experience in the disparate areas of cyber-security technology (since the ‘80s), critical infrastructure policy and practice (since before 9/11), DHS cyber-security research (nearly since its inception), and election technology (merely the last decade or so).

Turning Point in Dialog

That’s why the dialogue during the EAC hearing, and the reflections in online discussion since, have been so encouraging. I hear fewer competing monologues and more dialogue about what EI=CI means, what official designation actually does, and how it can or can’t help us as a community respond to the threat environment. The response includes a truly essential and fundamental shift to creating, delivering, and operating EI as critical national assets like the power grid, local water and other public utilities, air traffic control, financial transaction networks, and so on. Being so uplifted by the change in tenor, I’ll drop a little concept here to blow up some of this new dialogue:

Official CI designation is irrelevant to the way forward.

The way forward has essential steps that were possible before the official designation, and that remain possible if the designation is rescinded. These steps are urgent. Fussing over official designation is a distraction from the work at hand, and it needs to stop. EAC’s hearing was a good first step. My blog today is my little contribution to dialog about next steps.

Outlining the Way Forward

To those who haven’t been marinating in cyber CI for years, it may be odd to say that this official announcement of criticality is actually a no-op, especially given its news coverage. But thanks to changes in cyber-security law and policy over the years, the essential first steps no longer require official designation. There may be benefits over the longer term, but the immediate tasks can and should be done now, without concern for Federal policy wonkery.

Here is a short and incomplete list of essential tasks, each of which I admit deserves loads more unpacking and explaining to non-CI-dweeb people, than I can possibly do in a blog. But regardless of DHS policy, and definitely in light of the 2016 election disruption experience, the EI community can and should:

  • Start the formation of one or more of the information-sharing communities (like ISAOs or similar) that are bread-and-butter of other CI sectors.
  • If needed, take voluntary action to get DoJ and DHS assistance in the legal side of such formation.
  • Use the information sharing organizations to privately share and discuss what really happened in 2016 to prepare, detect, and respond to attacks on EI.
  • Likewise use the organizations to jointly consider available assistance, and to assess:
    • the range of types of CI related assistance that are available to election officials – both cyber and otherwise;
    • the costs and benefits of using them; and
    • for those participants who have already used, or voluntarily choose to use, that assistance (from DHS or elsewhere), share what they learn with all EI/CI operators who choose to participate.
  • Begin to form sector-specific CI guidelines specifically about changes required to operate EI assets as CI.

And all that is just to get started, to enable several further steps, including: informing the election tech market of what it needs to respond to; helping the 1000s of local election offices begin to learn how their responsibilities evolve during the transformation of EI into a true part of CI in practice.

— EJS

The Freeze Factor – Dismantling Federal Assistance to U.S. Elections

“Frozen” is my key word for what happens to the voting system certification process after EAC is dismantled. And in this case, frozen can be really harmful. Indeed, as I will explain, we’ve already seen how harmful.

Last time I wrote in this series on the EAC being dismantled (see the first and second posts), I said that EAC’s certification function is more important than ever. To re-cap:

  • Certification is the standards, requirements, testing, and seal-of-approval process by which local election officials gain access to new election tech.
  • The testing is more important than ever, because of the lessons learned in 2016:

1. The next gen of election technology needs to be not only safe and effective, but also …

2. … must be robust against whole new categories of national security threats, which the voting public only became broadly aware of in late 2016.

Today it’s time to explain just how ugly it could get if the EAC’s certification function gets derailed. Frozen is that starting point, because frozen is exactly where EAC certification has been for over a decade, and as a result, voting system certification is simply not working. That sounds harsh, so let me first explain the critical distinction between standards and process, and then give credit where credit is due for the hardworking EAC folks doing the certification process.

  • Standards comprise the critical part of the voting system certification program. Standards define what a voting system is required to do. They define a test lab’s job for determining whether a voting system meets these requirements.
  • Process is the other part of the voting system certification program: the set of activities that the players – mainly a voting system vendor, a test lab, and the EAC – must collectively step through to get to the Federal “seal of approval” that is the starting point for state election officials to make their decisions about which voting systems to allow in their state.

Years worth of EAC efforts have improved the process a great deal. But by contrast, the standards and requirements have been frozen for over a decade. During that time, here is what we got in the voting systems that passed the then-current and still-current certification program:

Black-box systems that election officials can’t validate, for voting that voters can’t verify, with software that despite passing testing, later turned out to have major security and reliability problems.

That’s what I mean by a certification program that didn’t work, based solely on today’s outcome – election tech that isn’t up to today’s job, as we now understand the job to be, post-2016. We are still stuck with the standards and requirements of the process that did not and does not work. While today’s voting systems vary a bit in terms of verifiability and insecurity, what’s described above is the least common denominator that the current certification program has allowed to get to market.

Wow! Maybe that actually is a good reason to dismantle the EAC – it was supposed to foster voting technology quality, and it didn’t work. Strange as it may sound, that assessment is actually backwards. The root problem is that as a Federal agency, the EAC had been frozen itself. It got thawed relatively recently, and has been taking steps to modernize the voting systems standards and certification. In other words, just when the EAC has thawed out and is starting to re-vitalize voting system standards and certification, it is getting dismantled – that at a time when we just recently understood how vulnerable our election systems are.

To understand the significance of what I am claiming here, I will have to be much more specific in my next segment about the characteristics of the certification that didn’t work, and how the fix started over a decade ago, got frozen, and has been thawing. When we understand the transformational value of the thaw, we can better understand what we need in terms of a quality program for voting systems, and how we might get to such a program if the EAC is dismantled.

— EJS

Blockchains for Elections, in Maine: “Don’t Be Hasty”

Many have noted with interest some draft legislation in Maine that mandates the exploration of how to use blockchain technology to further election transparency. My comment is, to quote one well known sage, “Don’t Be Hasty”. First, though, let me say that I am very much in favor of any state resolving to study the use of innovative tech in elections, even tech as widely misunderstood as blockchains. This bill is no exception: study is a great idea.

However, there is already elsewhere a considerable amount of haste in the elections world, with many enthusiasts and over a dozen startups thinking that since blockchains have revolutionized anonymous financial transactions — especially via Bitcoin — elections can benefit too. But in fact they can’t benefit much, at least in terms of voting. As one of my colleagues who is an expert on both elections and advanced cryptography says, “Blockchain voting is just a bad idea – even for people who like online voting.” It will take some time and serious R&D to wrestle to the ground whether and how blockchains can be one of (by my count) about half a dozen innovative ingredients that might make online voting worth trying.

However, in the meantime, there are plenty of immediate term good uses of blockchain technology for election transparency, including two of my favorites that could be put into place fairly quickly in Maine, if the study finds it worthwhile.

  1. In one case, each transaction is a change to the voter rolls: adding or deleting a voter, or updating a voter’s name or location or eligibility. Publication — with provenance — would provide the transparency needed to find the truth or lack thereof of claims of “voter roll purging” that crop up in every election.
  2. In the other case, each transaction is either that of a voter checking in to vote in person — via a poll book, paper or digital — or having their absentee ballot received, counted, or rejected. I hope the transparency value is evident: the public knowing in detail who did and didn’t vote in a given election.

In each case, there is a public interest in knowing the entirety of a set of transactions that have an impact on every election, and in being able to know that a claimed log of transaction records is the legitimate log. Without that assurance of “data provenance” there are real risks of disinformation, to the detriment of confidence in elections, and of confusion rather than transparency. Publication of these types of transaction data, using blockchains, can provide the provenance that’s needed for both confidence and transparency. Figuring out the details will require study — Don’t Be Hasty — but it would be a big step in election transparency. Go Maine!
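The core mechanism behind the “data provenance” idea is simpler than the word “blockchain” suggests: a log where each entry cryptographically commits to the entry before it, so any alteration of the published record is detectable. Here is a minimal Python sketch of such a hash-chained log; the record contents are hypothetical voter-roll transactions, not any real system’s schema.

```python
import hashlib
import json

def append(chain, record):
    """Append a record to a hash-chained log. Each entry commits to the
    hash of its predecessor, so altering any earlier entry breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

def verify(chain):
    """Recompute every link; True only if no entry has been altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append(log, {"op": "add_voter", "id": "V-1001"})
append(log, {"op": "update_address", "id": "V-1001"})
assert verify(log)

log[0]["record"]["op"] = "delete_voter"   # tampering with a published entry...
assert not verify(log)                    # ...is detectable by anyone
```

A real deployment would add signatures, distribution, and governance on top, but this is the provenance property the post is pointing at: the public can check for themselves that a claimed log is the legitimate one.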

— EJS

Cancellation of Federal Assistance to US Elections — The Good, The Bad, and The Geeky

Recently I wrote about Congress dismantling the only Federal agency that helps states and their local election officials ensure that the elections that they conduct are verifiable, accurate, and secure — and transparently so, to strengthen public trust in election results. Put that way, it may sound like dismantling the U.S. Election Assistance Commission (EAC) is both a bad idea, and also poorly timed after a highly contentious election in which election security, accuracy, and integrity were disparaged or doubted vocally and vigorously.

As I explained previously, there might be a sensible case for shutdown with a hearty “mission accomplished” — but only with a narrow view of the original mission of the EAC. I also explained that since its creation, EAC’s evolving role has come to include duties that are uniquely imperative at this point in U.S. election history. What I want to explain today is that evolved role, and why it is so important now.

Suppose that you are a county election official in the process of buying a new voting system. How do you know that what you’re buying is a legit system that does everything it should do, and reliably? It’s a bit like a county hospital administrator considering adding new medications to their formulary — how do you know that they are safe and effective? In the case of medications, the FDA runs a regulatory testing program and approves medications as safe and effective for particular purposes.

In the case of voting systems, the EAC (with support from NIST) has an analogous role: defining the requirements for voting systems, accrediting test labs, defining requirements for how labs should test products, reviewing test labs’ work, and certifying those products that pass muster. This function is voluntary for states, who can choose whether and how to build their certification program on the basis of federal certification. The process is not exactly voluntary for vendors, but since they understandably want to have products that can work in every state, they build products to meet the requirements and pass Federal certification. The result is that each locality’s election office has a state-managed approved product list that typically includes only products that are Federally certified.

Thus far the story is pretty geeky. Nobody gets passionate about standards, test labs, and the like. It’s clear that the goals are sound and the intentions are good. But does that mean that eliminating the EAC’s role in certification is bad? Not necessarily, because there is a wide range of opinion on EAC’s effectiveness in running the certification process. However, recent changes have shown how the stakes are much higher, and the role of requirements, standards, testing, and certification is more important than ever. The details about those changes will be in the next installment, but here is the gist: we are in the middle of a nationwide replacement of aging voting machines and related election tech, and in an escalating threat environment of global adversaries targeting U.S. elections. More of the same-old-same-old isn’t nearly good enough. But how would election officials gain confidence in new election tech that’s not only safe and effective, but robust against whole new categories of threat?

— EJS

The Myth of Technologist Suppression of Internet Voting

I’ve got to debunk a really troubling rumor. It’s about Internet voting, or more specifically, about those who oppose it. Longtime readers will recall that Internet voting is not one of the favorite topics here, not because it isn’t interesting, but because there are so many more near-term, low-effort ways to use tech to improve U.S. elections. However, I’ve heard this troubling story enough times that I have to debunk it today, and return to more important topics next time.

Here’s the gist of it: there is a posse of respectable computer scientists, election tech geeks, and allies who are:

  • Un-alterably opposed to Internet voting, for ever, and
  • Lying about i-voting’s feasibility in order to prevent its use as a panacea for increased participation and general wonderfulness, because they have a hidden agenda to preserve today’s low-participation elections.

I have to say, simply: no. I’ve been in this pond for long enough to know just about every techie, scientist, academic, or other researcher who understands both U.S. elections and modern technology. We all have varying degrees of misgivings about current i-voting methods, but I am confident that every one of these people stands with me on these 4 points.

  1. We oppose the increased use of i-voting as currently practiced.
  2. We very much favor use of the Internet for election activities of many kinds, potentially nearly everything except returning ballots; many of us have been working on such improvements for years.
  3. We strongly believe and support the power of invention and R&D to overcome the tech gaps in current i-voting, despite believing that some of the remaining issues are really* hard problems.
  4. We strongly believe that i-voting will eventually be broadly used, simply because of demand.

We all share a concern that if there is no R&D on these hard problems, then eventually today’s highly vulnerable forms of i-voting will be used widely, to the detriment of our democracy, and to the advantage of our nation-state adversaries who are already conducting cyber-operations against U.S. elections.

I believe that we need a two-pronged approach: to support the R&D that’s needed, but in the meantime to enable much-needed modernization of our existing clunky, decaying elections infrastructure, to lay the rails for future new Internet voting methods to be adopted.

Returning to the kooky story … but what about all those Luddite nay-sayers who say i-voting is impossible and that the time for i-voting is “never”? There are none, at least among tech professionals and/or election experts. There is some harsh rhetoric that’s often quoted, but it is against the current i-voting methods, which are indeed a serious problem.

But for the future, the main difference among us is about the little asterisk that I inserted in point 3 above — it means any number of “really” before “hard.” I’m grateful to colleague Joe Kiniry of Galois and of Free&Fair, for noting that our differences are really “just the number of ‘really’ we put before the word ‘hard’.”

— EJS

PS: A footnote about i-voting Luddites and election tech Luddites more broadly. There are indeed some vocal folks who are against the use of technology in elections, for example, those who advocate a return to hand-counted paper ballots, with no computers used for ballot casting or counting. They do indeed say “never” when it comes to using the Internet for voting, and for e-voting as well. But that’s because of personal beliefs and policy positions, not because of a professionally informed judgment that hard problems in computer science can never be solved. In fact, these anti-tech people are at the other end of the spectrum from the folks who so strongly favor i-voting at any cost that they caricature nay-sayers of any kind; both groups use out-of-context quotes about current i-voting drawbacks as a way to shift the conversation to the proposition “Internet voting, no way, not ever” and away from the more important but nuanced question: Internet voting, not whether, but how?

From DHS Symposium — The Three Basic Requirements for Voting System Security

In a recent posting, I noted that despite current voting systems’ basic flaws, it is still possible to do more to provide the public with details that can offer peace of mind that close contests’ results are not invalid due to technology-related problems. Now I should explain what I meant by basic security flaws, especially since that was the topic of a panel I recently joined, with a group of security and election professionals, addressing a DHS meeting on security tech transfer.

We agreed on three basic security and integrity requirements that are not met by any existing product:

  1. Fixed-function: each machine should run only one fixed set of software that passed accredited testing and government certification.
  2. Replace not modify: that fixed software set should not be modifiable; it can be updated only by being replaced with another certified system.
  3. Validation: all critical components of these systems are required to support election officials’ ability to validate a machine before each election, to ensure that it remains in exactly the same certified configuration as before.
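The third requirement, validation, can be made concrete with a small sketch. This is a minimal illustration under assumptions of my own: a hypothetical manifest of SHA-256 digests recorded at certification time, with illustrative file names and digests that are not from any real system.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest recorded at certification time:
# relative file path -> SHA-256 digest of the certified file contents.
# (This example digest is simply the SHA-256 of the bytes b"test".)
CERTIFIED_MANIFEST = {
    "firmware.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def validate_machine(root: Path, manifest: dict) -> bool:
    """Return True only if every certified file is present and unmodified."""
    for rel_path, expected_digest in manifest.items():
        target = root / rel_path
        if not target.exists():
            return False  # a certified component is missing
        actual = hashlib.sha256(target.read_bytes()).hexdigest()
        if actual != expected_digest:
            return False  # the component differs from the certified version
    return True
```

In practice such a check must itself run from trusted media, since a compromised machine could lie about its own contents; that caveat is part of why these requirements are architectural, not bolt-on.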

These critical properties are absent today because of a basic decision vendors made years ago: to bring new voting technology to market quickly by basing it on ordinary turn-of-the-century PC technology that was, and in today’s market remains, fundamentally unable to support fixed-function systems inherently capable of validation. All voting systems today lack these basic properties, and without them, all other security requirements are largely irrelevant — and compliance with current certification requirements is impossible.

Crazy, eh? Then add to that:

  • the remarks of panelist and voting-system security expert Matt Bishop of UC Davis on the many software-level security problems encountered in reviews of voting systems, problems found despite the official federal testing and certification process intended to find them; and
  • Virginia Election Commissioner Edgardo Cortez’s examples of system-level security issues found in the state’s review of a voting system that was subsequently banned for use in VA. A few minds in the audience were blown.

The Consensus and One More Thing

The consensus at this DHS event, among both panel and audience, was that any future voting system worth having should be validated by a future testing and certification process that, among other goals, specifically requires the architecture-level security requirements that I outlined, and focuses on the types of issues Cortez and Bishop described – and one more thing that’s important for completely different reasons.

That one more thing: future voting systems need to be designed from scratch for ease of use by election officials, so that they no longer need today’s extraordinary measures, with so much manual, error-prone work, to operate these systems with reasonable security that can be demonstrated in the event of disputes.

So, leaving aside “known unknowns” about recent hacks or the lack thereof, we have some really important “known knowns”: there is enormous potential for improvement in a wholesale replacement of voting tech that meets the three basic integrity requirements above, can feasibly be examined for the issues our panelists discussed, and can be operated easily and safely by ordinary election officials.

— John Sebes

Recounts, Russian Hackers, and Misunderstood Claims

There’s been a lot of news coverage of the Green Party’s push for recounts. Some of it is accurate, some wildly alarmist, but most of what I’ve read misses a really key point that you need to understand in order to make up your own mind about these issues, especially claims of Russian hacking.

For example, the University of Michigan’s Dr. Alex Halderman is advising the Green Party, and has been widely quoted recently about the possible attacks that could be made on election technology, especially on the “brains” of a voting system: the Election Management System (EMS) that “programs” all the voting machines and collates their tallies, yet is really just fairly basic desktop application software running on ancient MS Windows. Though the attacks are sometimes complex to explain, Halderman and others are doing a good job explaining what is possible in terms of election-result-altering attacks.

In response to these explanations, several news articles note that DHS, DNI, and other government bodies take the view that it would be “extremely difficult” for nation-state actors to carry out exploits of these vulnerabilities. I don’t doubt that DHS cyber-security experts would rank exploits of this kind (both effective and successful in hiding themselves) at the high end of the technical difficulty chart, out there with hacking Iranian uranium-enrichment centrifuges.

Here’s the Problem: “extremely difficult” has nothing to do with how likely it is that critical election systems might or might not have been penetrated.

Comparing intrinsic difficulty with the capabilities of specific attackers is a completely different issue. We know full well that attacks of this kind, while high on technical difficulty, are entirely feasible for a few nation-state adversaries. It’s like noting that a particular class of platform dive has an intrinsic difficulty beyond the reach of most world-class divers, while also noting that the Chinese team has multiple divers capable of performing those dives.

You can’t just say “extremely difficult” and completely fail to check whether one of those well known capable divers actually succeeded in an attempt — especially during a high stakes competition. And I think that all parties would agree that a U.S. Presidential election is pretty high stakes. So …

  • 10 out of 10 points for security experts explaining what’s possible.
  • 10 out of 10 points for DHS and others for assessing the possibilities as being extremely difficult to do.
  • 10 out of 10 points for several news organizations reporting on these complex and scary issues; and
  • 0 out of 10 points for news and media organizations concluding that because some attacks are difficult, they probably didn’t happen.

Personally, I don’t have any reason to believe such attacks occurred, but I’d hate for anyone to be deterred from looking into it by confusing level of difficulty with level of probability.

— John Sebes

Accurate Election Results in Michigan and Wisconsin Are Not a Partisan Issue


Courtesy, Alex Halderman Medium Article

In the last few days, we’ve been getting several questions that are variations on:

Should there be recounts in Michigan in order to make sure that the election results are accurate?

For the word “accurate” people also use any of:

  • “not hacked”
  • “not subject to voting machine malfunction”
  • “not the result of tampered voting machines”
  • “not poorly operated voting machines” or
  • “not falling apart unreliable voting machines”

The short answer to the question is:

Maybe a recount, but absolutely there should be an audit, because audits can do nearly everything a recount can do.

Before explaining that key point, a nod to University of Michigan computer scientists for pointing out why we don’t yet have full confidence in the election results of their State’s close presidential election, and possibly other States as well. A good summary is here, and an even better explanation is here.

A Basic Democracy Issue, not Partisan

The not-at-all-partisan or even political issue is election assurance – giving the public every assurance that the election results are the correct results, despite the fact that bug-prone computers and error-prone humans are part of the process. Today, we don’t know what we don’t know, in part because current voting technology not only fails to meet the three most basic technical security requirements, but also doesn’t support election assurance very well. And we need to solve that! (More on the solution below.)

A recount, however, is a political process and a legal process that’s hard to see as anything other than partisan. A recount can happen when one candidate or party looks for election assurance and does not find it. So it is really up to the legal process to determine whether to do a recount.

While that process plays out let’s focus instead on what’s needed to get the election assurance that we don’t have yet, whether it comes via a recount or from audits — and indeed, what can be done, right now.

Three Basic Steps

Leaving aside a future in which the basic technical security requirements can be met, right now, today, there is a plain pathway to election assurance of the recent election. This path has three basic steps that election officials can take.

  1. Standardized Uniform Election Audit Process
  2. State-Level Review of All Counties’ Audit Records
  3. State Public Release of All Counties Audit Records Once Finalized

The first step is the essential auditing process that should happen in every election in every county. Whether we are talking about the initial count or a recount, it is essential that humans cross-check the computers’ work to detect and correct any malfunction, regardless of origin. That cross-check is a ballot-polling audit, in which humans manually count a batch of the paper ballots that the computers counted, to see whether the human and machine results match. The sample has to be truly random and statistically significant, but even in a close election it is far less work than a recount. And it works regardless of how a machine malfunction was caused: hacking, manipulation, software bugs, hardware glitches, or anything else.
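The mechanics of that cross-check can be sketched in a few lines of Python. This is a minimal illustration only, not a statistically rigorous risk-limiting audit; the function names, the public seed, and the tallies are all hypothetical.

```python
import random

def draw_audit_sample(ballot_ids, sample_size, public_seed):
    """Draw a reproducible, uniformly random sample of ballots to hand-count.

    Publishing the seed in advance lets observers verify the sample was
    truly random rather than cherry-picked.
    """
    rng = random.Random(public_seed)
    return rng.sample(ballot_ids, sample_size)

def tally_discrepancies(hand_tally, machine_tally):
    """Compare vote shares from the hand count against the machine count.

    Returns each choice's hand-count share minus its machine-count share;
    a large gap signals a malfunction, regardless of its cause.
    """
    hand_total = sum(hand_tally.values())
    machine_total = sum(machine_tally.values())
    return {
        choice: hand_tally.get(choice, 0) / hand_total
                - machine_tally[choice] / machine_total
        for choice in machine_tally
    }
```

A real audit would also derive the sample size from the contest margin, so that a closer contest triggers a larger hand count before the result is accepted.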

This first step should already have been taken by each county in Michigan, but at this point it is hard to be certain. Though less work than a recount, a routine ballot polling audit is still real work, and made harder by the current voting technology not aiding the process very well. (Did I mention we need to solve that?)

The second step should be a State-level review of all the records of the counties’ audits. The public needs assurance that every county did its audit correctly, and further, documented the process and its findings. If a county can’t produce detailed documentation and findings that pass muster at the State level, then alas, the county will need to re-do the audit. The same would apply if the documentation turned up an error in the audit process, or a significant anomaly, such as a difference between the human count and the machine count.

That second step is not common everywhere, and the third step would be unusual, but very beneficial and a model for the future: when a State is satisfied that all counties’ election results have been properly validated by ballot-polling audit, the State elections body could publicly release all the records of the counties’ audit processes. Then anyone, especially election scientists, data scientists, and election-tech experts, could independently come to the same conclusion the State did. I know that Michigan has diligent, hardworking State election officials who are capable of doing all this, and who indeed do much of it as part of the process toward State election certification.

This Needs to Be Solved – and We Are

The fundamental objective for any election is public assurance in the result.  And where the election technology is getting in the way of that happening, it needs to be replaced with something better. That’s what we’re working toward at the OSET Institute and through the TrustTheVote Project.

No one wants the next few years to be dogged by uncertainty about whether the right person is in the Oval Office or the Senate. That will be hard for this election, because of failing voting machines that were not designed for high assurance. But America must say “never again,” so that two short years and four years from now, we have election infrastructure in place that was designed from the ground up and purpose-built to make it far easier for election officials to deliver both election results and election assurance.

There are several matters to address:

  • Meeting the three basic security requirements;
  • Publicly demonstrating the absence of the vulnerabilities in current voting technology;
  • Supporting evidence-based audits that maximize confidence and minimize election officials’ effort; and
  • Making it easy to publish detailed data in standard formats, that enable anyone to drill down as far as needed to independently assess whether audits really did the job right.

All that and more!

The good news (in a shameless plug for our digital public works project) is that this is what we’re building in ElectOS. It is the first openly public and freely available set of election technology: an “operating system” of sorts for the next generation of voting systems, in the same way that Android is the basis for much of today’s mobile communication and computing.

— John Sebes