There are many reasons to be concerned that some adversaries of the United States will attempt, again, to sow chaos in the upcoming U.S. elections. What are the threats that US election systems face right now?
This Monday, state officials in Maryland acknowledged that problems with their “motor voter” systems are more significant than originally described:
[A]s many as 80,000 voters — nearly quadruple the original estimate — will have to file provisional ballots Tuesday because the state Motor Vehicle Administration failed to transmit updated voter information to the state Board of Elections.
— Up to 80,000 Maryland voters will have to file provisional ballots, state says (Washington Post, 6/25/18)
This announcement, made only hours before the polls opened for Maryland’s Tuesday primary, will mean more than just a minor inconvenience for the tens of thousands of voters affected. Sen. Joan Carter Conway (D-Baltimore City), chairwoman of the Senate Education, Health and Environment Committee, said that this situation will “confuse voters, suppress turnout, and disenfranchise thousands of Marylanders.”
Yet the significance of this programming error is broader still. Sen. Richard S. Madaleno Jr. (D-Montgomery), who is also running for governor of Maryland, called the incorrect registration of thousands of voters a “catastrophic failure.” In his statement, he continued, “The chaos being created by this failure subjects real harm to our most cherished democratic values.”
Is this election season hyperbole? Not at all, says John Sebes, Chief Technology Officer of the OSET Institute (the organization that runs the Trust The Vote project). In his recent article, Maryland Voter Registration Glitch: A Teachable Snafu, Mr. Sebes identifies the wide-ranging problems that will follow from these kinds of disruptions at a larger scale:
If a foreign adversary can use cyber-operations to maliciously create a similar situation at large scale, then they can be sure of preventing many voters from casting a ballot. With that disruption, the adversary can fuel information operations to discredit the election because of the large number of voters obstructed.
— John Sebes, OSET Institute
It is, in fact, the credibility of the entire election itself that is at stake. These kinds of technical problems don’t need to be the result of nefarious interference in the election process. Mr. Sebes continues,
The alleged system failure (hack, glitch, or whatever) doesn’t even need to be true! If this accidental glitch had occurred a couple of days before the November election, and came on the heels of considerable conversation and media coverage about election hacking, rigging, or tampering then it would be an ideal opportunity for a claimed cyber-attack as the cause, with adversaries owning the disruptive effects and using information operations to the same effect as if it were an actual attack.
— Maryland Voter Registration Glitch: A Teachable Snafu by John Sebes
Maryland is clearly vulnerable to this kind of attack on the credibility of their electoral process. Already, some are sounding the alarm that these voter registration problems weren’t identified quickly — plus, there’s no way to verify the process itself:
Damon Effingham, acting director of the Maryland chapter of Common Cause, said it was “preposterous” that it took MVA officials four days to figure out the extent of the problem and that there is no system to ensure that its system is working properly.
— Up to 80,000 Maryland voters will have to file provisional ballots, state says (Washington Post, 6/25/18)
John Sebes and the Trust The Vote project have spent years developing open source election software and systems to address these issues. But that alone isn’t sufficient. Mr. Sebes identifies the steps that election officials can take now to prevent the kind of problems that Maryland is experiencing this week:
Mr. Sebes concludes:
The Maryland glitch is not so much about failed integration of disparate data systems, but much more about unintentional catalyzing of opportunities to mount “credibility attacks” on elections and the need for a different kind of preparation.
Read the full article, Maryland Voter Registration Glitch: A Teachable Snafu by John Sebes, on the OSET Institute website.
The OSET Institute runs the TrustTheVote Project, a real alternative to nearly obsolete, proprietary voting technology. TrustTheVote is building an open, adaptable, flexible, full-featured and innovative elections operating system called ElectOS. It supports all aspects of elections administration and voting including creating, marking, casting, and counting ballots, as well as managing all back-office functions. Check out this overview of the TrustTheVote Project to learn more. If you’re involved in the election process, as an election official, or an academic or researcher, join the TrustTheVote Project as a stakeholder to help develop and deploy open, secure, reliable, and credible election technologies. If you’re concerned about the health of our election systems, you can donate or volunteer. If you have any questions about the TrustTheVote Project, contact us today.
To prepare for the new General Data Protection Regulation (GDPR) for EU countries, the TrustTheVote Project web support team and OSET Institute Legal reviewed all of our data privacy and security policies to ensure that we meet (or exceed) the standards set by the GDPR. Data privacy and security are foundational values of the TrustTheVote Project, and we want to be sure that we’re consistently applying best practices and principles.
We also believe it’s important to support and promote international norms for digital privacy. Although the OSET Institute is headquartered in California, our mission is global in nature, because verifiable, accurate, secure and transparent election technology is a mandate for all democracies, worldwide. Trust in elections depends on digital privacy and security. That’s why we support the principles of the GDPR, both in all of our web properties and in ElectOS, the software we build for safe and secure elections.
For those interested in how foreign adversaries are meddling in U.S. elections with social media, there is a recent must-read paper from the NDN Think Tank: A Primer on Social Media Bots And Their Malicious Use In U.S. Politics authored by Tim Chambers of the Dewey Square Group. This report is the definitive work about:
The transport layer of weaponized social media content used for political purposes.
The methods work for any kind of content, whether preaching to the choir, intentionally distorting real political events, or telling outright lies. Probably the most illuminating aspect of the report is the explanation of how any single blob of content gets its reputation burnished. It’s not just disinformation campaigns where the goal is to get the content widely heard or seen. It’s that, plus getting the content normalized by the support of large numbers of people. And here is the really clever bit: the content is liked/retweeted/etc. by large numbers of people who turn out not to exist, including some bot-people with an (invented) impressive following or reputation.
This approach is a key part of the new information operations, and the chilling thing for me is how these info-ops are already part of current politics and electioneering, just adjacent to the actual administration and operation of elections. Since our focus is on innovative technology for election administration and operations, I have to be concerned with how election technology can be damaged by the new info-ops.
I can see adversaries revving this engine to attack U.S. elections at the level of basic trust, in contrast to the operational attacks that are the current focus of election hacking FUD. When I started in the cyber aspect of critical infrastructure protection in early 2001, we worried about “cyber-physical” attacks, where a cyber attack would magnify a physical attack on physical infrastructure.
Now we also need to address the much more insidious and diffuse threat of cyber-social attack, where a publicly visible cyber attack on election infrastructure is performed to provide the starter fluid for bot-based information operations intended to undermine belief in election outcomes and potentially de-legitimize an election. I’m talking about malevolent technology plus social media being a machine where cyber-ops are used to intentionally create a publicly visible mole-hill, which info ops then turn into a political mountain.
A scary thought.
For myself and our work it underlines the need for election infrastructure that is publicly, demonstrably, ground-up strong against cyber-ops. With the weak and architecturally flawed election technology we have today, any claim about an election technology hack, even a completely spurious one, is just too credible by default. We are addressing the readiness of election technology to withstand or rebuff digital attack, but the cyber-social threat to elections requires its own focused effort as well; to my understanding, the work of projects like the Belfer Center’s Defending Digital Democracy is an important step in the right direction. Reading Chambers’s paper on social media bots should be an early step of that effort.
I have one last comment on CAP’s recent report “9 Solutions to Secure America’s Elections” in addition to my previous comments and those of my colleagues here at the OSET Institute. I don’t agree that any nine steps can “Secure America’s Elections,” and especially not CAP’s nine steps. Their recommendations are fundamentally about gradualism: the belief that what’s in place can be incrementally improved until we are “secure.” Though we can never be completely “secure,” we need more than gradualism to create fundamental changes that can significantly reduce risks to our elections, especially cyber security risks.
Curiously, despite the report’s bold title, I’m not sure the authors actually intended to advocate for only gradual improvements, because of their agreement on two key points. What’s missing is the explicit logical conclusion from those two points. The brief version is simply this:
But further, it is an enigma to me why this logic isn’t more widely accepted. I have a couple theories, related to my most important point: that wholesale replacement does not require anything like an Election Manhattan Project. In fact, the core tech has already been proven in use in other fields for years.
I’ll start with the two points that I agree with the most. The first point is national security. Elections are indeed a matter of national security, a bedrock part of our sovereignty and our ability to democratically affect the course of governance of our country. That bedrock is broadly understood of course, but only recently has there been broad awareness of how we are at risk. Our elections are in the cross-hairs of our nation-state adversaries, well funded and well equipped for hybrid cyber-operations, information operations, disinformation campaigns, and social media cyber-operations. In 2016 we saw a warm-up exercise by one adversary, but we know that there are several. Some of them have demonstrated capabilities well beyond what the public saw in 2016.
Kudos to CAP authors Danielle Root and Liz Kennedy for coming straight out with the point about national security risks. Not so long ago, a small group of election integrity and technology experts were routinely belittled with nursery-rhyme epithets about crying wolf and skies falling. Along with recent work by the Brennan Center, the EAC, Congressional testimony by DHS, among others, this CAP report helps put to rest a host of baseless optimisms about early 21st century election security. I’m grateful, but I wish I had been a louder voice in that earlier Cassandra Chorus.
The second point that I agree with the most is the CAP report’s several observations of prior work on two points: demonstrations of the technological vulnerabilities of current voting systems; assessment of the set of broader risks to elections that are enabled by voting technology insecurity. The CAP report provides another important voice for the observation that our current technology for election infrastructure (“EI”) is mismatched for the present and future; it was never designed for or intended to operate in the current threat environment of nation state adversaries with the sophisticated capabilities that I noted above.
I believe that there is certainly room for varying assessments or assumptions about the likelihood of various types of attack, and of the likelihood of detection, and the likely consequences of detection or lack thereof. But the bottom line is that EI technical vulnerabilities are a gift to our adversaries, regardless of one’s assessment of how those gifts might be used to our detriment.
I truly respect the CAP authors’ intent of collating the most important steps that many have identified as essential for mitigation of risk, and incremental improvement of EI. Within the narrow scope of voter records management, I mostly agree with gradualism. And yes, there are some important caveats on a few items of agreement, as noted by my colleague Sergio Valente. And yes, I have some significant disagreements on the way that the CAP report treats stakeholders. But these caveats aside, I fundamentally disagree with the implied statement — which I hope the authors didn’t fully intend — that it’s enough to do gradual in-place mitigation of existing vulnerable EI.
In-place mitigation is important, but only as a band-aid for fundamentally vulnerable EI. Apply the band-aids while getting a replacement prepared, and then jettison the band-aided EI in favor of fundamentally stronger replacement technology. Let me apologize here for the convenient shorthand “band-aid,” which really refers to extraordinary efforts in thousands of elections offices to mitigate the potential harm from the fundamental vulnerabilities of existing EI. This existing EI technology, which “gradualists” would have us accept as unavoidable reality, requires these extra efforts, but the efforts could well be on the short end of an asymmetric conflict with world-class cyber-warriors. If that sounds fanciful to you, I respect your assessment of a low risk, but I ask: “Risk assessment aside, should we really accept the vulnerabilities that create the risk? Especially given the existence of proven technology alternatives?”
As I said, I understand that everyone has their own views, risk assessments, realpolitik, and numerous other factors that color their conclusions on what to do with EI that is a national security asset. Everyone is entitled to their own opinions, and increasingly to the opinion that they are entitled to their own facts. But nobody is entitled to their own logic; or at least Commander Spock and I hope not. Here is the logic:
Captain, that is not logical. It is an enigma! The only two things that I can think of that make sense for a blinkered view of this logical conundrum are: market dogma, and unawareness of existing alternatives.
The market dogma is what I sum up as:
“What the for profit market has no ability or incentive to develop and deliver to the government, the government will never have.”
I disagree! At the local level, that’s understandable for all but a handful of county governments who have developed a broader assessment of EI. But at a national strategy level, that assumption’s falsity is demonstrated by the history of ARPAnet, DARPA, NSFnet, e-commerce, the Global Information Grid, and the digital world at your fingertips on your “phone”.
But suppose that you admit that, in general, strategic technological hurdles can be largely overcome with basic R&D, applied R&D, and technology transfer. Admitting that, you could well believe that there is simply no fiscal or political will for an Elections Manhattan Project. Could be — and the good news is that it’s not needed. The basic R&D and applied R&D has already been done on trustworthy computing (especially fault tolerant, high assurance, fixed function, dedicated systems), and applied in practice from satellites to carriers to in-theater ad-hoc mesh networks for C4I. The task at hand is not to invent the base technology for trustworthy computing for a critical infrastructure – including that of elections. That’s already been done. The task is to:
That’s not a Manhattan Project. The pessimistic might compare it in logistical complexity to Operation Overlord. But for elections, we did that operation already. It was called HAVA – the Act of Congress; the billions of dollars; and the replacement of punch cards and butterfly ballots and paperless mechanical voting machines. That wasn’t a great success, in part because the result included paperless electronic voting machines. But that experience provided many lessons learned in the elections community. If steps 1, 2, 3 above can be done expeditiously, then step 4 could be done faster, better, and far cheaper, given the experience of HAVA.
My thanks to the Center for American Progress (“CAP”) for their recent report “9 Solutions to Secure America’s Elections”. As my colleagues here at OSET Institute have already written, we agree with many of the report’s recommendations at a short term tactical level, but in addition have a longer term strategic view based on principles of national security, homeland security, and critical infrastructure protection. I’m very pleased that CAP has joined the discussions in the election integrity and technology community — especially the discussion about how election officials (“EOs”) can move ahead to better protect the critical election infrastructure (“EI”) that they operate. One of my two contributions to this discussion is not to disagree with CAP’s tactical recommendations, but to suggest that as guidance to EOs, any tactical recommendation is going to be more effective in a framework of greater respect for EOs.
That more respectful framework includes:
I comment below on CAP’s report in each of these areas.
While I don’t disagree with most of CAP’s recommendations, I do find several of them to be formulated with a common flaw — lack of acknowledgement of states’ critical roles in U.S. elections. There are several statements that use words like “require” and “mandatory.”
I object to all of these as being likely interpreted as over-riding states’ fundamental independence in matters of elections. The Federal government can’t and shouldn’t try to make any requirements, and no one else should dictate to states either. In the election integrity and technology community, we can suggest to individual states that their state election directors determine how to make new state specific requirements on their localities – for example, for uniform risk-limiting ballot audit processes and creation of public evidence from them. But it is up to states to decide what is appropriate and feasible for their state and its local elections offices.
We can hope that the growth of multi-state information sharing practices will lead to common approaches nation wide, but I don’t think it’s right to say that states need to be dictated to by anyone.
Particularly vexing is the suggestion that new Federal law should mandate vulnerability analysis of EI. Existing voting systems have already seen ample security analysis and discovery of many security vulnerabilities. Such discovery has occurred in every certified voting system product that states have assessed in efforts like TTBR, Everest, and more recent work. Federal legislators or regulators are not equipped to specify exactly what types of analysis are sufficient. Even if they were, a new unfunded Federal mandate to perform analysis would likely have a perverse effect — to shift limited funds away from the mitigation of vulnerabilities that are already well known, to required re-analysis of systems already known to be highly vulnerable.
CAP makes a similar suggestion to require updating and securing voter registration (VR) systems. But this assumes that VR systems have inadequate security that isn’t being addressed. In fact, that’s not yet known. Some states have already focused on VR security, others recently sought DHS cyber security assistance, and others have not. Some VR systems might need a major re-design for cyber-security, while others might benefit from operations changes for better cyber “hygiene”. As with other activities that I list in the next section, VR cyber security improvements are ongoing.
Lastly, I find especially inappropriate the recommendation for automatic voter registration (AVR). Each state has a right to regulate its voter rolls as it sees fit, and AVR is not universally viewed as an improvement. Indeed, in some states, AVR would work against the state’s political culture that participation in elections should require a pro-active step on the voter’s part to register. It is certainly the case that adding AVR would require major technology updates to any VR system, but that is no reason to label current systems as antiquated. While some may have a political agenda for nationally uniform automatic registration, that agenda has no place in any recommendations to strengthen cyber security of the state IT systems that manage voter records.
Of the nine recommendations, four are part of existing Election Official practices. EOs already have done, or are in the process of doing, a significant hardware and system transition. That transition includes efforts to replace aging unreliable machines, replace paperless voting machines, and support post-election ballot audits. Similarly, there is an ongoing shift in ballot audit processes to adopt scientific and statistically sound methods for people to cross-check the work of fallible voting technology. (The scientific basis ensures the minimum effort for the maximum assurance that machine malfunction did not change an election result.) Most recently, Colorado and New Mexico have made notable progress. In recommending these and other activities, we should respect that EOs already understand their importance and are pursuing them. Some EOs certainly could use assistance and encouragement that starts with respect.
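To give a feel for the “minimum effort for maximum assurance” property of these statistical methods, here is a rough sketch using a commonly cited approximation for the expected sample size of a BRAVO-style ballot-polling risk-limiting audit in a two-candidate contest. This is an illustrative back-of-the-envelope calculation, not the audit procedure of any particular state:

```python
import math

def bravo_asn(risk_limit, winner_share):
    """Approximate average sample size (ASN) for a BRAVO ballot-polling
    audit of a two-candidate contest.

    winner_share: reported fraction of valid votes for the winner (> 0.5)
    risk_limit:   the audit's risk limit, e.g. 0.05 for a 5% risk limit
    """
    pw = winner_share
    pl = 1.0 - winner_share
    zw = math.log(2.0 * pw)   # log-likelihood step when a winner ballot is drawn
    zl = math.log(2.0 * pl)   # log-likelihood step when a loser ballot is drawn
    # Expected ballots to examine before the audit can stop.
    return (math.log(1.0 / risk_limit) + zw / 2.0) / (pw * zw + pl * zl)

# Tighter reported margins require dramatically larger samples:
for share in (0.55, 0.52, 0.51):
    print(f"winner share {share:.2f}: ~{bravo_asn(0.05, share):,.0f} ballots")
```

At a 5% risk limit, a 10-point reported margin needs only a few hundred randomly drawn ballots, while a 2-point margin needs thousands — which is why the statistical design matters so much for keeping audits feasible.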
Likewise, EOs already perform pre-election testing on voting machines, to the best of their abilities. But those abilities are limited by shortcomings of the voting systems that they have. One limit of particular concern is that most if not all voting machines in use today lack support for EOs to feasibly and accurately validate them. Such validation should consist of means to assess each voting machine to ensure that it remains in the original certified configuration, without modification or tampering. Given that limitation, EOs are already doing all the testing that’s meaningful to detect malfunction and unreliability.
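In software terms, one can sketch what such configuration validation might look like: compare each file on a machine against a manifest of cryptographic hashes recorded at certification time. This is a hypothetical illustration of the concept only — `validate_against_manifest` and its manifest format are invented for this sketch, and a real mechanism would also need a hardware root of trust, since compromised firmware can misreport what is actually installed:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file in fixed-size chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_against_manifest(root, manifest):
    """Compare files under `root` to a {relative_path: sha256_hex} manifest
    captured when the system was certified.

    Returns a list of (problem, path) pairs for files that are missing,
    modified, or present but not in the certified manifest.
    """
    root = Path(root)
    problems = []
    seen = set()
    for rel, expected in manifest.items():
        seen.add(rel)
        p = root / rel
        if not p.is_file():
            problems.append(("missing", rel))
        elif sha256_of(p) != expected:
            problems.append(("modified", rel))
    for p in root.rglob("*"):
        rel = str(p.relative_to(root))
        if p.is_file() and rel not in seen:
            problems.append(("unexpected", rel))
    return problems
```

An empty result would mean the installation still matches its certified configuration; any “modified” or “unexpected” entry is grounds to pull the machine from service and investigate.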
So yes, these recommendations are sensible, but more importantly, let’s commend EOs. Let’s ask them, “What more do you need to strengthen existing practices or accelerate in-progress changes?” — not just tell them to do what they are already doing.
Two other CAP recommendations are about information sharing and coordination. Even leaving aside the inappropriate “mandatory” reporting idea, these two recommendations don’t recognize the extent of these and other related activities that are already ongoing: several organizations have started collaborating in the formation of election infrastructure (“EI”) as a new critical infrastructure (“CI”) sub-sector. CAP provides helpful background to those new to CI: information about ISACs, the role of the intelligence community, the existing MS-ISAC, National Intelligence Priorities Framework, the Cyber Threat Intelligence Integration Center and so forth. And it’s good to note one meeting hosted by EAC and DHS.
However, such meetings are part of an ongoing process in which we need to identify the key stakeholders, not just these Federal organizations and programs. EI sector formation activities already include leading local EOs, state EOs, and their organizations and associations including NASS and NASED. Rather than recommending general goals for activity in the EI sector or the community as a whole, let’s commend EOs and all the stakeholder organizations for the formation work that they are already doing, and ask them, “What resources could accelerate the process?”
But further yet, we should identify specific aspects of work in progress that can be supported and accelerated by EI sector formation and sector organizations. For example, statewide uniform practices for risk limiting audits might emerge from both: inter-state information sharing that enables some states to learn from the early work of others; intrastate sharing of audit experiences to determine what works in the specific environment of each state. Intrastate sharing and cross-state local learning might also be fortified by more local stakeholder organizations, such as each state’s association of local EOs, and IAOGO.
Similarly, CAP’s “require minimum cyber security standards” for voter registration (“VR”) systems should not be cyber-operations standards imposed by some authority. Rather, effective VR security measures should emerge from on-going EI sector information sharing activities including: survey of existing practices, ongoing security assessment and remediation, and lessons learned by some states that can guide other states’ activity. The need is certainly urgent, given attacks in 2016, but professional assessment and practical remediation are called for, not top-down rules that might interfere with implementing lessons learned from the cross-state sharing activity that’s already in progress.
To close, I want to re-emphasize that most of the CAP recommendations are sound at the core, but would be better with a couple improvements:
Our EOs are hardworking public servants who just received a new unfunded (for now) mandate to manage their election assets as critical infrastructure. There’s a lot to learn, and a lot to do. The election integrity and technology community can have a helpful and supportive role, but it needs to start with both gratitude and respect for EOs’ work.
U.S. election technology is increasingly regarded as critical to national interests. In discussions about the national-level importance of election technology, I’ve also increasingly heard the term “national security” used. The idea seems to be that election technology is as important as other national-security-critical systems. That’s fair enough in principle, but at present we are a long way from any critical piece of election technology – such as machines for casting and counting ballots – being manufactured, operated, and protected like other systems that currently do meet the definition of “national security systems”.
However, there is one element of national security systems (NSSs) that I believe is overlooked or unfamiliar to many observers of election technology as critical infrastructure. NSSs have to address hardware level threats by containing their risk using a set of practices called supply chain risk management (SCRM). Perhaps hardware threats have been overlooked by some national policy makers, because of the policy issue that I’ll close with today.
I’d like to explain why hardware level threats are more feasible to address than many other challenges of re-inventing election technology to meet national security threats. But first I should explain what’s usually meant by hardware level threats, and where supply chains come into it.
Hardware level threats exist because it’s possible for an adversary to craft malicious hardware components that work just like the regular component, but also carry hidden logic that makes them misbehave. To take one simplistic example, a malicious optical disk drive might faithfully copy the contents of a DVD-R when requested, except in special circumstances, such as installing a particular operating system. In that special case, it might deliver a malicious modified copy of a critical OS file, effectively compromising the hardware that the system is installed in.
To those not familiar with the concept, it might seem fanciful that a nation-state actor would engage in such activities: target a specific device manufacturer; create malicious hardware components; inject them into the supply chain of the manufacturer so that malicious hardware components become part of its products. But, in fact, such attacks have happened, and on systems that could have a significant impact on defense or intelligence.
That’s why one of the basic requirements for national security systems is that their manufacturers take active steps to reduce the risk of such attacks, in part by operating a rigorous SCRM program. Though unfamiliar to many, the concepts and practices have been around for almost a decade.
Since the inception of the Comprehensive National Cybersecurity Initiative (CNCI), many defense and intelligence related systems have been procured using SCRM methods specifically because of hardware threats. In fact, the DoD likely has the most experience in managing a closed supply chain, and qualifying vendors based on their SCRM programs.
What might this mean for the future of election technology that is genuinely treated as a national security asset? It means that in the future, such systems would eventually have to be manufactured like national security systems. Significant efforts to increase voting technology security would almost demand it; those efforts’ value would be significantly undercut by leaving the hardware Achilles heel unaddressed.
What would that look like? One possible future:
This would be a big change from the current situation. How would that change come about? Hence the open issue for policy makers …
The opportunity for voting system vendors to benefit from a managed closed supply chain might actually be something possible in the short term. But how would that come about? And what would motivate the vendors to take that benefit? And to expend the funds to set up and operate an SCRM program?
To me, this is an example of a public good (reduced risk of attacks on our elections) that doesn’t obviously pencil out as profit: it’s not clear how a manufacturer gets a return on the investment (“ROI”) of the additional costs of new manufacturing processes and compliance efforts. So, I suppose that in order for this to work, some external requirement would have to be imposed (just as the DoD and other parts of the Federal government do for their vendors of NSSs) to obligate manufacturers to incur those costs as part of the business of voting technology, and choose how to pass the costs along to, eventually, taxpayers.
However, in this case, the Federal government has no direct role in regulating the election technology business. That’s the job of each State: to decide which voting systems are allowed to be used by their localities, and to decide which technology companies to contract with for IT services related to state-operated election technology for voter registration and election management. But States don’t have the existing expertise in SCRM that Federal organizations do.
So, there is plenty of policy analysis to do, before we could have a complete approach to addressing hardware level threats to elections. But there’s one thing that could be done in the near term, without defining a complete solution. Admittedly, it’s a bit of a “build and they might show up” approach, based on a possible parallel case.
The best parallel I know of is with voting system certification. Currently, about half the States require that a voting system manufacturer successfully complete an evaluation and certification program run by the Federal government’s Election Assistance Commission (EAC). That’s a prerequisite for the State’s certification. A possible future parallel would be a] for the Federal government to perform supply chain regulation functions, and compliance monitoring of manufacturers, and b] for States to voluntarily choose whether to require participation of manufacturers. The Federal function might be performed by an organization that already supports supply chain security, which would set up a parallel program for election technology and offer its use to the manufacturers of election technology of all kinds. If that’s available, perhaps vendors might dip a toe in the waters, or States might begin to decide whether they want to address hardware threats. Even if this approach worked, there would then be the question of how all this might apply to all the critical election technology that isn’t machines for casting and counting ballots. But at least it would be a start.
That’s pretty speculative, I admit, but at least it is a start that can be experimented with in the relatively near term – certainly in time for the 2020 elections, which will use election systems that are newer than today’s decade-plus-old systems but have the same vulnerabilities inside. Hardware assurance won’t fix software vulnerabilities, but it would make attempting to fix them much more meaningful, with the hardware Achilles’ heel on its way to being addressed.
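To make the idea of “compliance monitoring” a little more concrete, here is a toy sketch of one thing it could mean in practice: checking that the firmware actually loaded on a piece of election hardware matches what the manufacturer attested to under some compliance program. This is purely illustrative; all names and the manifest structure are invented for this example, not taken from any real program or specification.

```python
# Hypothetical sketch of firmware attestation checking. The component
# names and manifest layout are invented for illustration only.
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a firmware image as a hex string."""
    return hashlib.sha256(data).hexdigest()


def verify_firmware(image: bytes, manifest: dict, component: str) -> bool:
    """Check a firmware image against a manufacturer-published manifest.

    `manifest` maps component names to expected SHA-256 digests, e.g. as
    published (and signed) under a hypothetical compliance program.
    """
    expected = manifest.get(component)
    return expected is not None and sha256_hex(image) == expected


# Example: a manifest entry for a made-up ballot scanner controller.
firmware = b"example controller image v1.2"
manifest = {"scanner-controller": sha256_hex(firmware)}

print(verify_firmware(firmware, manifest, "scanner-controller"))
print(verify_firmware(b"tampered image", manifest, "scanner-controller"))
```

Of course, a real program would need signed manifests, trusted measurement of the running device, and much more; the point is only that hardware assurance can be checked mechanically once a manufacturer is obligated to publish what “genuine” looks like.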
We now have voting systems that have been vetted with standards and processes that are almost as Jurassic as the pre-Internet era.
This time I need to support my previous claims by explaining the freeze/thaw cycle in more detail, and connecting it to the outcome of voting systems that are not up to today’s job, as we now understand it, post-2016.
The First Try
The EAC’s first try at voting system quality started after the year-2000 hanging-chad election debacle, and after the Help America Vote Act (HAVA) that was designed to fix it. During the period of 2004 to 2006, the EAC was pretty busy defining standards and requirements (technically “guidelines,” because states are not obligated to adopt them) for the then-next-gen of voting systems, and setting up processes for testing, review, and certification.
That first try was “good enough” for getting started on a way out of the hanging-chad morass, but was woefully inadequate in hindsight. The beginning of a second try resulted in the 2007 recommendations to significantly revise the standards, because hindsight by then showed that the first try rested on some assumptions that didn’t hold up in practice. My summary of those assumptions:
Even in 2007, and certainly even more since then, we’ve seen that what these assumptions actually got us was not what we really wanted. My summary of what we got:
Taken together, that meant that election tech broadly was physically unreliable, and very vulnerable, both to technological mischance and to intentional meddling. A decade ago, we had much less experience than today with the mischances that early PC tech is prone to. At the time, we also had much less sensitivity to the threats and risks of intentional meddling.
Freeze and Thaw
And that’s where the freeze set in. The 2007 recommendations have been gathering dust since then. A few years later, the freeze set in on EAC as well, which spent several years operating without a quorum of congressionally approved commissioners, and not able to change much – including certification standards and requirements.
That changed a couple of years ago. One of the most important things that the new commissioners have done is to revitalize the process for modernizing the standards, requirements, and processes for new voting systems. And that revitalization is not a moment too soon, just as most of the nation’s states and localities have been replacing decaying voting machines with “new” voting systems that are not substantially different from what I’ve described above.
That’s where the huge irony lies – after over a decade of inactivity, the EAC has finally gotten its act together to try to become an effective voting system certification body for the future — and it is getting dismantled.
It is not just the EAC that’s making progress. The EAC works with NIST, a Technical Guidelines Working Group (TGWC), and many volunteers from many organizations (including ours) working in several groups focused on helping the TGWC. We’ve dusted off the 2007 recommendations, which address how to fix at least some of the consequences I listed above. We’re writing detailed standards for interoperability, so that election officials have more choice in how they acquire and operate voting tech. I could go on about the range of activity and potential benefits, but the point is: there is a lot currently being built that is poised to be frozen again.
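To give a feel for why interoperability standards matter, here is a toy sketch of the underlying idea: once different vendors’ exports can be translated into one shared schema, records from different systems can be combined and compared. The field names below are invented for illustration; the real common data formats being developed are far richer than this.

```python
# Illustrative sketch only: field names are invented, not taken from
# any real election data format specification.

def to_common_format(vendor_record: dict, mapping: dict) -> dict:
    """Translate a vendor-specific tally record into a shared schema.

    `mapping` maps common-schema field names to the vendor's field names.
    """
    return {common: vendor_record[vendor_key]
            for common, vendor_key in mapping.items()}


# Two vendors export the same information under different field names.
vendor_a = {"precinctId": "P-101", "candName": "Smith", "voteCount": 342}
vendor_b = {"pct": "P-102", "candidate": "Smith", "tally": 287}

map_a = {"precinct": "precinctId", "candidate": "candName", "votes": "voteCount"}
map_b = {"precinct": "pct", "candidate": "candidate", "votes": "tally"}

records = [to_common_format(vendor_a, map_a), to_common_format(vendor_b, map_b)]

# Once in one format, records from different systems can be aggregated.
total = sum(r["votes"] for r in records if r["candidate"] == "Smith")
print(total)  # 629
```

The payoff for election officials is exactly the “more choice” mentioned above: with a shared format, a tabulator from one vendor, a reporting system from another, and an audit tool from a third can all work on the same data.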
A Way Forward?
I believe that it is vitally important, indeed a matter of national security, that our election tech makes a quantum leap forward to address the substantial issues of our current threat environment, and the economic and administrative environment that our hardworking election officials face today.
If that’s to happen, then we need a way to not get frozen again, even if the EAC is dismantled. A look at various possible ways forward will be the coda for this series.
Kudos to the EAC for this week’s public hearing on election infrastructure as critical infrastructure! After the 2016 election cycle, I think there is very little disagreement that election infrastructure (EI) is critical, in the sense of: vital, super-important, a matter of national security, etc. But this hearing is a bit of a turning point. I’ll explain why in terms of the discussion before the hearing, then the aftermath, and then I will make my one most important point about action going forward. I’ll close with specific recommended steps forward.
Prior to this hearing, I heard and read a lot of negativity about the idea that EI is “critical infrastructure” (CI) in the specific sense of homeland security policy. Yes, late last year, DHS did designate EI as CI, specifically as a sub-sector of the existing CI sector for government systems. And that caused alarm and the negativity I referred to, ranging from honest policy disagreement (what are the public policy ramifications of designation?) to par-for-the-course political rhetoric (an unprecedented Federal takeover of elections, a threat to states’ rights, etc.) and just plain “fake news” (DHS hackers breaking Federal laws to infiltrate state-managed election systems).
The fracas has been painful to me especially, as someone with years of experience in the disparate areas of cyber-security technology (since the ‘80s), critical infrastructure policy and practice (since before 9/11), DHS cyber-security research (nearly since its inception), and election technology (merely the last decade or so).
Turning Point in Dialog
That’s why the dialogue during the EAC hearing, and the reflections in online discussion since, have been so encouraging. I hear fewer competing monologues and more dialogue about what EI=CI means, what official designation actually does, and how it can or can’t help us as a community respond to the threat environment. The response includes a truly essential and fundamental shift to creating, delivering, and operating EI as a critical national asset, like the power grid, local water and other public utilities, air traffic control, financial transaction networks, and so on. Being so uplifted by the change in tenor, I’ll drop a little concept here to blow up some of this new dialogue:
Official CI designation is irrelevant to the way forward.
The way forward has essential steps that were possible before the official designation, and that remain possible if the designation is rescinded. These steps are urgent. Fussing over official designation is a distraction from the work at hand, and it needs to stop. EAC’s hearing was a good first step. My blog today is my little contribution to dialog about next steps.
Outlining the Way Forward
To those who haven’t been marinating in cyber CI for years, it may be odd to say that this official announcement of criticality is actually a no-op, especially given its news coverage. But thanks to changes in cyber-security law and policy over the years, the essential first steps no longer require official designation. There may be benefits over the longer term, but the immediate tasks can and should be done now, without concern for Federal policy wonkery.
Here is a short and incomplete list of essential tasks, each of which I admit deserves loads more unpacking and explaining to non-CI-dweeb people, than I can possibly do in a blog. But regardless of DHS policy, and definitely in light of the 2016 election disruption experience, the EI community can and should:
And all that is just to get started, to enable several further steps, including: informing the election tech market of what it needs to respond to; and helping the thousands of local election offices begin to learn how their responsibilities evolve as EI is transformed into a true part of CI in practice.
“Frozen” is my key word for what happens to the voting system certification process after EAC is dismantled. And in this case, frozen can be really harmful. Indeed, as I will explain, we’ve already seen how harmful.
1. The next gen of election technology needs to be not only safe and effective, but also …
2. … must be robust against whole new categories of national security threats, which the voting public only became broadly aware of in late 2016.
Today it’s time to explain just how ugly it could get if the EAC’s certification function gets derailed. Frozen is that starting point, because frozen is exactly where EAC certification has been for over a decade, and as a result, voting system certification is simply not working. That sounds harsh, so let me first explain the critical distinction between standards and process, and then give credit where credit is due for the hardworking EAC folks doing the certification process.
Years worth of EAC efforts have improved the process a great deal. But by contrast, the standards and requirements have been frozen for over a decade. During that time, here is what we got in the voting systems that passed the then-current and still-current certification program:
Black-box systems that election officials can’t validate, for voting that voters can’t verify, with software that despite passing testing, later turned out to have major security and reliability problems.
That’s what I mean by a certification program that didn’t work, based solely on today’s outcome – election tech that isn’t up to today’s job, as we now understand the job to be, post-2016. We are still stuck with the standards and requirements of the process that did not and does not work. While today’s voting systems vary a bit in terms of verifiability and insecurity, what’s described above is the least common denominator that the current certification program has allowed to get to market.
Wow! Maybe that actually is a good reason to dismantle the EAC – it was supposed to foster voting technology quality, and it didn’t work. Strange as it may sound, that assessment is actually backwards. The root problem is that, as a Federal agency, the EAC had been frozen itself. It got thawed relatively recently, and has been taking steps to modernize voting system standards and certification. In other words, just when the EAC has thawed out and is starting to revitalize voting system standards and certification, it is getting dismantled – and at a time when we have only recently understood how vulnerable our election systems are.
To understand the significance of what I am claiming here, I will have to be much more specific in my next segment about the characteristics of the certification that didn’t work: how the fix started over a decade ago, got frozen, and has been thawing. When we understand the transformational value of the thaw, we can better understand what we need in terms of a quality program for voting systems, and how we might get to such a quality program if the EAC is dismantled.