Tagged e-voting

Bedrock 2: Adventures at the Board of Elections

Today, we’ll continue our illustrative story of elections — and as in the first installment of the story, we’ll keep it simple with the setting in the Town of Bedrock. As we tune in, we find Fred Flintstone in downtown Bedrock at the offices of Cobblestone County’s Bedrock Board of Elections (BBoE). He’s checking up on the rumor that Mayor Flint Eastrock has resigned, and that there is a Special Mayoral Election scheduled. When he asks BBoE staffer Rocky Stonerman, Rocky replies, “Of course, Fred! Just check the BBoE’s public slab-site.” Going back outside the BBoE offices, he checks the public slab-site, and sure enough there is a newly posted slab announcing the election.

Fred tells Rocky he’d like to run, and Rocky explains how Fred needs to apply as a candidate, and what the eligibility rules are. “Fred, I’ll tell you straight up, don’t bother to fill out the application, you’re not eligible because you’re a Quarry Commissioner. If you want to run for Mayor, you’ll need to resign first, and then apply as a mayoral candidate.”

“Yabba dabba doo – that’s what I’m here to do!” Forms and formalities of resignation then taken care of, Rocky gives Fred an application slab and chisel, and then grabs a chisel and runs out to update the Upcoming Elections slab page to include information about the contest for Quarry Commission, Seat #2. In the meantime, Fred has finished his application, and hands it in when Rocky returns. “Did you put me on the candidate list?”

“Of course not, Fred. We have to process your application! Best bet is to come down tomorrow — I’m going to have to pull your voter record from our voter record tablet-base system. And I’ve got to tell you, it’ll take a while — the VRTB has thousands of records. We’re still running on an unsupported old ScryBase system! Wish we had funding to upgrade, but not so far. Petro tells me we should look at an open-stone MyScryql system, and …”

Not so interested in Rocky and Petro’s slabs and tablets, Fred interrupts, “And what about that referendum?” Rocky replies, “Oh yes! The Quarry Courier brought over an application yesterday, but it didn’t have all the commissioners’ signatures on the application. You probably want to get the commission to fix that — unless you want to go the petition route, though you’d need 300 signatures and frankly I don’t know if our TBMS has room, because …”

Having heard more than enough about stone-age election technology for one day, Fred beats a hasty retreat. Tune in for the next installment, to find out if Fred actually gets to run for mayor.

— EJS

NYT Totally Spot-on for Value of Open Data

You’ll often find the term “open source” here, used to describe either the source code for software, or the license that allows you to take that source code and use it. But “open data” is just as important. A recent New York Times article read almost like I would have said it, starting with “It’s not boring, really!” or to be precise, the title “This Data Isn’t Dull. It Improves Lives”. NYT’s Richard H. Thaler starts on exactly the right point:

Governments have learned a cheap new way to improve people’s lives. Here is the basic recipe: Take data that you and I have already paid a government agency to collect, and post it online in a way that computer programmers can easily use. Then wait a few months. Voilà! The private sector gets busy, creating Web sites and smartphone apps that reformat the information in ways that are helpful to consumers, workers and companies.

That’s exactly the approach to election open-data that we’re taking in the next steps of election data management at TrustTheVote. Right now, I have to admit that the current set of election data might actually be fairly boring unless you have an interest in ballot proofing or electoral districting (which we’ll get to in a couple more installments in our Bedrock series). But the next step might be more interesting: combining that data with election-result information, which up to now we’ve managed only in the context of the TTV Tabulator and some common data formats that we’re working on with some help from EAC and NIST.

But by adding election result data back into the election definition data, we get the next cool part: a new TTV component that is like the current Election Manager (which would remain deployed privately within a BoE or state), but with only the ability to publicly provide election and election result data via a Web services API. That, in turn, becomes the back end for an election-night reporting Web site and smartphone app.
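To make that concrete, here is a minimal sketch of the kind of read-only, machine-readable payload such a Web services API might serve. All of the field names and numbers are invented for illustration; they are not an actual TTV or standards-body data format.

```python
import json

# Hypothetical combined election-definition-plus-results data.
# Every name and count here is made up for this sketch.
ELECTION = {
    "election": "Special Mayoral Election",
    "contests": {
        "mayor": {
            "candidates": {"Fred": 512, "Barney": 489},
            "precincts_reporting": 12,
            "precincts_total": 12,
        }
    },
}

def results_json(contest_id):
    """Return one contest's definition and current results as JSON text,
    the way a public results API endpoint might."""
    contest = ELECTION["contests"][contest_id]
    return json.dumps({"election": ELECTION["election"],
                       "contest": contest_id, **contest})

print(results_json("mayor"))
```

Because the payload is plain structured data rather than a human-oriented report page, any third-party site or app can consume it directly — which is the point of the paragraphs that follow.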

But perhaps just as important, that API would be publicly accessible to any software, including 3rd party sites and apps, as Mr. Thaler points out. Right now, most election definition data and result data is locked up in the EMS of proprietary voting system products, with a sliver of it published in human-oriented reports and sometimes web content. A new TTV component for Web publication of that information would be a fine first step, but by itself, the data would still be limited to availability in whatever form (however broad) the human-oriented Web interface provides. The really key point, instead, is this:

Not only publish via a Web site, but also make all the data accessible, so anyone can do their own thing to slice and dice the data to gain confidence that the election results are right.

And that point — confidence — is where we rendezvous with the NYT’s point about data improving people’s lives. I’m not sure that open-data for elections can save lives, but I think it can help save some people’s faith in election integrity. And now is certainly a trying time, with a variety of election-related litigation news from NY and CO and IN and SC, all seeming to say that election irregularities or outright fraud — even at the top with IN’s highest election official being indicted — is all over the country.

There’s an old saying “one bad apple doesn’t spoil the whole barrel” but we should not have to take it on faith for the large barrel of honest diligent election officials and the valid results of their well-run elections. A bit of open-data might actually help.

— EJS

Voting System (De)certification – A Way Forward? (2 of 2)

Yesterday I wrote about the latest sign of the downward spiral of the broken market in which U.S. local election officials (LEOs) purchase product and support from vendors of proprietary voting system products — monolithic technology that is the result of years’ worth of accretion, and that costs years and millions to test and certify for use — including a current case where the process didn’t catch flaws that may result in a certified product being de-certified and replaced by a newer system, at the LEOs’ cost.

Ouch! But could you really expect a vendor in this miserable market to give away a new product that they spent years and tens of millions to develop, to every customer of the old product — customers to whom the vendor had planned to sell upgrades — just because of flaws in the old product? But the situation is actually worse: LEOs don’t actually have the funding to acquire a hypothetical future voting system product in which the vendor was fully open about true costs, including

(a) certification costs, both direct (fees to VSTLs) and indirect (staff time), as well as

(b) costs of development including rigorously designed and documented testing.

Actually, development costs alone are bad enough, but certification costs make it much worse — as well as creating a huge barrier to entry of anyone foolhardy enough to try to enter the market (or even stay in it!) and make a profit.

A Way Forward?

That double-whammy is why I and my colleagues at OSDV are so passionate about working to reform the certification process, so that individual components can be certified for far less time and money than a mess o’code accreted over decades, including wads of interwoven functionality that might not even need to be certified! And then of course, these individual components could also be re-certified for bug fixes by re-running a durable test plan that the VSTL created the first time around. And that of course requires common data formats for inter-operation between components — for example, between a PCOS device and a Tabulator system that combines and cross-checks all the PCOS devices’ outputs, in order to either find errors/omissions or produce a complete election result.

So once again our appreciation to NIST, EAC, and IEEE 1622 for actually doing the detailed work of hashing out these common data formats, which are the bedrock of inter-operation, which is the pre-req for certification reform, which enables reduction of certification costs, which might result in voting system component products being available at true costs that are affordable to the LEOs who buy and use them.

Yes, that’s quite a stretch, from data standards committee work to a less broken market that might be able to deliver to customers at reasonable cost. But to replace a rickety old structure with a new, solid, durable one, you have to start at the bedrock, and that’s where we’re working now.

— EJS

PS: Thanks again to Joe Hall for pointing out that the current potential de-certification and mandatory upgrade scenario (described in Part 1) illustrates the untenable nature of a market that would require vendors to pay for expensive testing and certification efforts, and to also have to (as some have suggested) forego revenue when otherwise for-pay upgrades are required because of defects in software.

Voting System (De)certification – Another Example of the Broken Market (1 of 2)

Long-time readers will certainly recall our view that the market for U.S. voting systems is fundamentally broken. Recent news provides another illustration of the downward spiral: the likely de-certification of a widely used voting system product from the vendor that owns almost three quarters of the U.S. market.

The current stage of the story is that the U.S. Election Assistance Commission is formally investigating the product for serious flaws that led to errors of the kind seen in several places in 2010, and perhaps best documented in Cuyahoga County. (See:  “EAC Initiates Formal Investigation into ES&S Unity 3.2.0.0 Voting System”.) The likely end result is the product being de-certified, rendering it no longer legal for use in many states where it is currently deployed. Is this a problem for the vendor? Not really. The successor version of the product is due to emerge from a lengthy testing and certification process fairly soon. Having the current product banned is actually a great tool for migrating customers to the latest product!

But at what cost, and to whom? The vendor will charge the customers (local election officials, or LEOs) for the new product, the same as it would have if the migration were voluntary and the old product version still legal. The LEOs will have to sign and pay for a multi-year service agreement. And they will have the same indirect costs of staff efforts (at the expense of other duties like running elections, or getting enough sleep to run an election correctly), and direct costs for shipping, transportation, storage, etc. These are real costs! (Example: I’ve heard reports of some under-funded election officials opting not to use election equipment that they already have, because they have no funding for the expense of taking it out of the warehouse to a testing facility and doing the required pre-election testing.)

Some observers have opined that vendors of flawed voting system products should pay: whether damages, or fines, or doing the migration gratis, or something. But consider this deeper question, from UCB and Princeton’s Joe Hall:

Can this market support a regulatory/business model where vendors can’t charge for upgrades and have to absorb costs due to flaws that testing and certification didn’t find? (And every software product, period, has them).

The funding for a high level of quality assurance has to come from somewhere, and that’s not voting system customers right now. Perhaps we’re getting to the point where the amount of effort it takes to produce a robust voting system and get it certified — at the vendor’s expense — creates a cost that customers are not willing or able to pay when the product gets to market.

A good question! And one that illustrates the continuing downward spiral of this broken market. The cost to vendors of certification is large, and you can’t really blame a vendor for the sort of overly rapid development, marketing, and sales that leads to the problems being investigated. The folks are in this business to make a profit, for heaven’s sake — what else could we expect?

— EJS

PS – Part Two, coming soon: a way out of the spiral.

Bedrock of Election Management

As I said in my recent MLK posting, I’m starting a series of blogs that should provide a concrete example of election management, at a small scale and (I hope) with some interest value.  But before I tell a story of election management, we need to first have a story of an election, and this particular election starts with a candidate.

So, let me tell you a little story about a man named Jed — oops, sorry, a man named Fred*.  Fred lives in the Town of Bedrock, and just heard that the famous mayor, Flint Eastrock, has resigned in order to start a new film project. Fred decides to run for mayor in the Special Mayoral Election, because he’s ready for the big time, having served on the Quarry Commission for some years. Like modern-day Americans, Bedrockites prefer to elect as many government positions as possible; rather than trusting the Mayor or Bedrock City Council to appoint Quarry Commissioners, the 5 commissioners are elected. So, in the special election, Fred’s open seat on the Commission will also be up for election.

Lastly, as Fred’s last act as Commissioner before resigning to run for mayor, Fred proposes a new referendum about the Quarry: a question for the voters to approve or reject a new usage fee for quarrying — some needed additional revenue for the quarry upgrade that he hopes to be the centerpiece of his tenure as mayor.

So, there we have an election coming up, with three ballot items:

  • An open seat for mayor, which Fred wants to run for;
  • An open seat on the Quarry Commission, from which Fred has resigned;
  • A referendum on the new quarry usage fee.

That’s almost enough for getting started on our Bedrock election story, but we’ve also seen a bit of Bedrock election law and election administration in action:

  • When the office of Mayor is vacant, it is filled by special election, not by appointment or by remaining vacant until the next regular election.
  • The Bedrock Board of Election (BBoE) called a special election.
  • If a current local office-holder wants to run for a vacant office, he or she must resign from the office they already hold.
  • If there is a local referendum pending for the next election, and a special election is called, then the referendum is held during the special election.

Next time, Fred applies to be a candidate for mayor, and gets an earful about how the BBoE works in practice. Fred knows, as I do, that there is always more to learn in election-land!

— EJS

* My thanks and apologies to David Pogue on this one.

Open-Source Election Software Hosting — What Works

Putting an open source application into service – or “deployment” – can be different from deploying proprietary software. What works, and what doesn’t? That’s a question that’s come up several times in the last few weeks, as the TTV team has been working hard on several proposals for new projects in 2011. Based on our experiences in 2009-10, here is what we’ve been saying about deployment of election technology that is not a core part of certified ballot casting/counting systems, but is part of the great range of other types of election technology: data management solutions for managing election definitions, candidates, voter registration, voter records, pollbooks and e-pollbooks, election results, and more – and for reporting and publishing the data.

For proprietary solutions – off the shelf, or with customization and professional services, or even purely custom applications like many voter record systems in use today – deployment is most often the responsibility of the vendor. The vendor puts the software into the environment chosen by the customer – state or local election officials – ranging from the customer’s IT plant, to outsourced hosting, to the vendor’s offering of a managed service in an application-service-provider approach. All have distinct benefits, but share the drawback of “vendor lock-in.”

What about open-source election software? There are several approaches that can work, depending on the nature of the data being managed, and the level of complexity in the IT shop of the election officials. For today, here is one approach that has worked for us.

What works: outsourced hosting, where a system integrator (SI) manages outsourced hosting. For our 2010 project for VA’s FVAP solution, the project was led by an SI that managed the solution development and deployment, providing outsourced application hosting and support. The open-source software included a custom Web front-end to existing open-source election data management software that was customized to VA’s existing data formats for voters and ballots. This arrangement worked well because the people who developed the custom front-end software also performed the deployment on a system completely under their control. VA’s UOCAVA voters benefited from the voter service blank-ballot distribution, while the VA state board of elections was involved mainly by consuming reports and statistics about the system’s operation.

That model works, but not in every situation. In the VA case, this model also constrained the way that the blank ballot distribution system worked. In this case, the system did not contain personal private information — VA-provided voter records were “scrubbed”. As a result, it was OK for the system’s limited database to reside in a commercial hosting center outside of the direct control of election officials. The deployment approach was chosen first, and it constrained the nature of the Web application.

The constraint arose because the FVAP solution allowed voters to mark ballots digitally (before printing and returning them by post or express mail). Therefore it was essential that the ballot-marking be performed solely on the voter’s PC, with absolutely no visibility by the server software running in the commercial datacenter. Otherwise, each specific voter’s choices would be visible to a commercial enterprise — clearly violating ballot secrecy. The VA approach was a contrast to some other approaches, in which a voter’s choices were sent over the Internet to a server that prepared a ballot document for the voter. To put it another way …

What doesn’t work: hosting of government-privileged data. In the case of the FVAP solution, this would have been outsourced hosting of a system that had visibility on the ultimate in election-related sensitive data: voters’ ballot choices.

What works: engaged IT group. A final ingredient in this successful recipe was engagement of a robust IT organization at the state board of elections. The VA system was very data-intensive during setup, with large amounts of data from legacy systems. The involvement of VA SBE IT staff was essential to get the job done on the process of dumping the data, scrubbing and re-organizing it, checking it, and loading it into the FVAP solution — and doing this several times as the project progressed to the point where voter and ballot data were fixed.
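The “scrubbing” step deserves a concrete illustration. A minimal sketch of the idea, with entirely invented field names (the actual VA record layout is not described here): keep only the fields the hosted ballot-distribution system needs, and drop everything personally identifying before the data ever leaves the election officials’ control.

```python
# Hypothetical personally identifying fields to strip before hosting.
# Field names are invented for this sketch, not the real VA schema.
PRIVATE_FIELDS = {"name", "ssn", "street_address", "phone"}

def scrub(record):
    """Return a copy of a voter record with private fields removed,
    keeping only what the hosted system needs (e.g. precinct, ballot style)."""
    return {k: v for k, v in record.items() if k not in PRIVATE_FIELDS}

def scrub_all(records):
    """Scrub a whole legacy data dump, record by record."""
    return [scrub(r) for r in records]
```

The design point is that scrubbing happens inside the government IT shop, so the commercial hosting center only ever receives data that is safe to hold outside direct government control.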

To sum up what worked:

  • data that was OK to be outside direct control of government officials;
  • government IT staff engaged in the project so that it was not a “transom toss” of legacy data;
  • development and deployment managed by a government-oriented SI;
  • deployment into a hosted environment that met the SI’s exact specifications for hosting the data management system.

That recipe worked well in this case, and I think would apply quite well for other situations with the same characteristics. In other situations, other models can work. What are those other models, or recipes? Another day, another blog on another recipe.

— EJS

King’s Mighty Stream, Re-Visited

As I often do, I had a thoughtful Martin Luther King Day — as you can see from my still pondering a couple days later. But I think I now have something to share. Last time I wrote on MLK, I likened two unlikely things:

  • King’s demand for social justice and peace, using Isaiah’s prophetic words that “Justice shall roll down like water, and righteousness like a mighty stream.”
  • My vision of really meaningful election transparency, stemming from a mighty torrent of data that details everything that happened in a county’s conduct of an election, published in a form anyone can see, and can use to check whether the election outcomes are actually supported by the data.

Still a bit of a stretch, no doubt, because, since my little moment by the waterfalls of the MLK memorial in San Francisco, I’ve had rather mixed success in explaining why this kind of transparency is so difficult. Among the reasons are the complexity of the data, and the very inconvenient way it is locked up inside voting system products and proprietary data formats.

But perhaps more important, it is just a vexingly detailed and complicated process to administer elections and conduct voting and counting — paradoxically made even more complex with the addition of new technology. (Just ask a New York state election admin person about 2010.) In some cases, I am sure that local election officials would not take umbrage at the phrase “Rube Goldberg Machine” to describe the whole passel of people, process, and tools.

So, among my new year’s resolutions, I am going to try to communicate, by example, a large part of the scope of data and transparency that is needed in U.S. elections. It will take some time to do in small digestible blogs, but I hope the example will serve to illustrate several things:

  • What election administration is really like;
  • What kinds of information and operations are used;
  • How a regular process of capturing and exposing the information can prevent some of the mishaps, doubts, and litigation you’ve often read about here;
  • Last but not least, how the resulting transparency connects directly to the nuts-and-bolts election technology work that we are doing on vote tabulation and on digital pollbooks.

One challenge will be keeping the example at an artificially small scale, for comprehensibility, while still providing meaningful examples of the data and the election officials’ work to use it. On that point especially, feedback will be particularly welcome!

— EJS

“Where Are the Vote Counts?” From New York to Ivory Coast

Yesterday, judges in New York state were hearing calls for a hand recount, while elsewhere other vote counts were being factored into the totals, and on the other side of the Atlantic, the same question “where are the election results?” was getting very serious. In the Ivory Coast, as in some places in the U.S., there is a very close election that still isn’t decided. There, it’s gotten serious, as the military closed off all points of entry into the country as a security measure related to unrest about the close election and lack of a winner.

Such distrust and unrest, we are lucky to have avoided here; despite the relatively low levels of trust in U.S. electoral processes (less than half of eligible people vote, and of voters polled in years past, a third to a half were negative or neutral), we are content to let courts and the election finalization process wind on for weeks. OK, so maybe not content, maybe extremely irate and litigious in some cases, but not burning cars in streets.

That’s why I think it is particularly important that Americans better understand the election finalization process — which of course, like almost everything in U.S. elections, varies by state or even locality. But the news from Queens NY (New York Times, “A Month After Elections, 200,000 Votes Found”), though it sounds awful in the headline, is actually enormously instructive — especially about our hunger for instant results.

It’s not awful; it’s complicated. As the news story outlines, there is a complicated process on election night, with lots of room for human error after a 16-hour day. The finalization process is conducted over days or weeks to aggregate vote data and produce election results carefully, catching errors, though usually not changing preliminary election-night results. As Douglas A. Kellner, co-chairman of the State Board of Elections, said:

The unofficial election night returns reported by the press always have huge discrepancies — which is why neither the candidates or the election officials ever rely on them.

That’s particularly true as NY has moved to paper optical scan voting from lever machines, and the finalization process has changed. But even in the old days, it was possible to misplace one or a few lever machines’ worth of vote totals with human errors in the paper process of reading dials, writing numbers on reporting form sheets, transporting the sheets, etc. Then, add to that the computer factor for human error, and you get your 80,000-vote variance in Queens.

Bottom line — when an election is close, of course we want the accurate answer, and getting it right takes time. Using computerized voting systems certainly helps with getting quicker answers for contests that aren’t close and won’t change in the final count. And certainly they can help by enabling audits and recounts that lever machines could not. But for close calls, it’s back to elbow grease and getting out the i-dotters and t-crossers — and being thankful for their efforts.

— EJS

Tabulator Troubles in Colorado

More tabulator troubles! In addition to the continuing saga in New York with the tabulator troubles I wrote about earlier, now there is another tabulator-related situation in Colorado. The news report from Saguache County CO is about:

a Nov. 5 “retabulation” of votes cast in the Nov. 2 election Friday by Myers and staff, with results reversing the outcome …

In brief, the situation is exactly about the “tabulation” part of election management, that I have been writing about. To recap:

  • In polling places, there are counting devices that count up votes from ballots, and spit out a list of vote-counts for each candidate in each contest, and each option in each referendum. This list is in the form of a vote-count dataset on some removable storage.
  • At county election HQ, there are counting devices that count up vote-by-mail ballots and provisional ballots, with the same kind of vote-counts.
  • At county election HQ, “tabulation” is the process of aggregating these vote-counts and adding them up, to get county-wide vote totals.

In Saguache, election officials did a tabulation run on election night, but the results didn’t look right. Then on the 5th, they did a re-run on the “same ballots” but the results were different, and it appears to some observers that some vote totals may have been overwritten. Then, on the 8th, with another re-try, a result somewhat like in NY:

… the disc would not load and sent an error message

What this boils down to for me is that current voting system products’ Tabulators are not up to correctly doing some seemingly simple tasks, when operated by ordinary election officials. I am sure they work right in testing situations that include vendor staff; but they must also work right in real life with real users. The tasks include:

  • Import an election definition that specifies how many counting devices are being used for each precinct, and how many vote-count datasets are expected from them.
  • Import a bunch of vote-count datasets.
  • Cross-check to make sure that all expected vote-count datasets are present, and that there are no unexpected ones.
  • Cross-check each vote-count dataset to make sure it is consistent with the election definition.
  • If everything cross-checks correctly, add up the counts to get totals, and generate some reports.
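To show how simple the core logic really is, here is a sketch of those tasks, with entirely invented data shapes (no real voting system uses exactly these structures): refuse to total anything until every expected device has reported exactly once and every dataset matches the election definition.

```python
# Sketch of the cross-check-then-total tasks listed above.
# All data shapes (dicts of device IDs, contests, counts) are invented.
def tabulate(election_def, datasets):
    """Cross-check per-device vote-count datasets against the election
    definition, then return county-wide totals per contest."""
    expected = set(election_def["expected_devices"])
    seen = {d["device_id"] for d in datasets}
    if seen != expected:
        missing, extra = expected - seen, seen - expected
        raise ValueError(f"missing={sorted(missing)} unexpected={sorted(extra)}")
    contests = set(election_def["contests"])
    totals = {c: {} for c in contests}
    for d in datasets:
        if set(d["counts"]) != contests:
            raise ValueError(f"device {d['device_id']}: contests don't match definition")
        for contest, counts in d["counts"].items():
            for choice, n in counts.items():
                totals[contest][choice] = totals[contest].get(choice, 0) + n
    return totals
```

Note that a missing or duplicate dataset produces a loud, explainable error before any totals are produced — exactly the transparent behavior the Saguache re-runs lacked.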

That’s not exactly dirt-simple, but it also sounds to me like something that could be implemented in well-designed software that is easy for election officials to use, and easy for observers to understand. And that understanding is critical, because without it, observers may suspect that the election has been compromised, and some election results are wrong. That is a terrible outcome that any election official would work hard to avoid — but it appears that’s what is unfolding in Saguache. Stay tuned …

— EJS

PS: Hats off to the Valley Courier‘s Teresa L. Benns for a really truly excellent news article! I have only touched on some of the issues she covered. Her article has some of the best plain-language explanation of complicated election stuff, that I have ever read. Please take a minute to at least scan her work. – ejs

Tabulator Technology Troubles

In my last post, I recounted an incident from Erie County NY, but deferred until today an account of the technology troubles that prevented the routine use of a Tabulator to create county-wide vote totals by combining count data from each of the opscan paper ballot counting devices. The details are worth considering as a counter-example: technology that is not transparent, but should be.

As I understand the incident, it wasn’t the opscan counting systems that malfunctioned, but rather the portion of the voting system that tabulates the county-wide vote totals. As I described in an earlier post, the ES&S system has no tabulator per se, but rather some aggregation software that is part of the larger body of Election Management System (EMS) software that runs on an ordinary Windows PC. Each opscan device writes data to a USB stick, and election officials aggregate the data by feeding each stick into the EMS. The EMS is supposed to store all the data on the stick, and add up all the opscan machines’ vote counts into a vote total for each contest.

Last week, though, when Erie County officials tried to do so, the EMS rejected the data sticks. Election officials had no way to use the sticks to corroborate the vote totals that they had made by visually examining the election-night paper-tapes from the 130 opscan devices. Sensible questions: Did the devices’ software err in writing the data to the sticks? If so, might the tapes be incorrect as well? Is the data still there? It turns out that the cause was a bug in the EMS software, not the devices, and in fact the data on the sticks was just fine. With a workaround on the EMS, the data was extracted from the sticks and used as planned. Further, the workaround did not require a bug fix to the software, which would have been illegal. Instead, some careful hand-crafting of EMS data enabled the software to stop choking on the data from the sticks.

Now, I am not feeling 100% great about the need for such hand-crafting, or indeed about the correctness of the totals produced by a voting system operating outside of its tested ordinary usage. But some canny readers are probably wondering about a simpler question. If the data was on the sticks, why not simply copy the files off the sticks using a typical PC, and examine the contents of the files directly? With 40-odd contests countywide and 100-odd sticks and paper tapes, it’s not that much work to just look at them to see whether the numbers on each stick match those on the tapes. Answer: the voting system software is set up to prevent direct examination, that’s why! The vote data can only be seen via the software in the EMS. And when that software glitches, you have to wonder about what you’re seeing.

This is at least one area where better software design can lead to higher-confidence systems: write-once media for storing each counting device’s tallies; use of public standard data formats so that anyone can examine the data; use of human-usable formats so that anyone can understand the data; use of a separate, single-purpose tabulator device that operates autonomously from the rest of the voting system; publication of the tally data and the tabulator’s output data, so that anyone can check the correct results either manually or with their choice of software. At least that’s the TrustTheVote approach that we’re working out now.
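The “check the results with your choice of software” part can be surprisingly small once the per-device tallies and the tabulator’s output are both published in an open format. A sketch, with invented data shapes: re-add the published tallies yourself and compare against the published totals.

```python
# Sketch of an independent third-party check on published tally data.
# The data shapes (list of per-device contest->choice->count dicts)
# are invented for illustration, not any real published format.
def recompute(tallies):
    """Re-add per-device tallies into contest totals."""
    totals = {}
    for device in tallies:
        for contest, counts in device.items():
            t = totals.setdefault(contest, {})
            for choice, n in counts.items():
                t[choice] = t.get(choice, 0) + n
    return totals

def matches(published_totals, tallies):
    """True if the tabulator's published totals agree with a recount
    of the published per-device tallies."""
    return recompute(tallies) == published_totals
```

The point is not this particular code, but that with open data, any observer can write their own twenty-line version of it, in any language, and decide for themselves whether the totals add up.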

— EJS