
The “VoteStream Files”: A Summary

The TrustTheVote Project Core Team has been hard at work on the Alpha version of VoteStream, our election results reporting technology. They recently wrapped up a prototype phase funded by the Knight Foundation, and then forged ahead a bit, incorporating data from additional counties provided by participating state or local election officials after the official wrap-up.

Along the way, there has been a series of postings here that together tell a story about the VoteStream prototype project. They start with a basic description of the project in Towards Standardized Election Results Data Reporting and Election Results Reload: the Time is Right. Then there was a series of posts about the project’s assumptions about data, about software (part one and part two), and about standards and converters (part one and part two).

Of course, the information wouldn’t be complete without a description of the open-source software prototype itself, provided in Not Just Election Night: VoteStream.

Actually, the project was as much about data, standards, and tools as it was about software. On the data front, there is a general introduction to a major part of the project’s work in “data wrangling” in VoteStream: Data-Wrangling of Election Results Data. After that came more posts on data wrangling, quite deep in the data-head shed, but still important, because each one is about the work required to take real election data and real election results data from disparate counties across the country and fit it into a common data format and a common online user experience. The deep data-heads can find quite a bit of detail in three postings about data wrangling, in Ramsey County MN, in Travis County TX, and in Los Angeles County CA.
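
For a feel of what that wrangling involves, here is a tiny, purely illustrative sketch; it assumes a hypothetical county CSV export with columns named precinct, contest, candidate, and votes, and maps each row into a made-up common record structure. It is not the project’s actual converter code.

```python
import csv

# Hypothetical column names for one county's CSV export; real county exports
# differ widely, which is exactly what makes the wrangling hard.
FIELD_MAP = {
    "precinct": "reporting_unit",
    "contest": "contest_name",
    "candidate": "candidate_name",
    "votes": "vote_count",
}

def normalize_row(row):
    """Map one county-specific row onto the common record structure."""
    record = {common: row[local].strip() for local, common in FIELD_MAP.items()}
    record["vote_count"] = int(record["vote_count"])
    return record

def load_county_results(path):
    """Read a county CSV export and yield records in the common shape."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield normalize_row(row)
```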

Today, there is a VoteStream project web site with VoteStream itself and the latest set of multi-county election results, along with some additional explanatory material, including the election results data for each of these counties. Of course, you can get that from the VoteStream API or data feed, but there may be some interest in the actual source data. For more on those developments, stay tuned!
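
For anyone who would rather pull the data programmatically than browse it, a consumer of that feed might look roughly like the sketch below; the endpoint URL and response shape are assumptions for illustration only, not the documented VoteStream interface.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint and response shape; consult the VoteStream project
# web site for the real API or data feed details.
FEED_URL = "https://example.org/votestream/api/results.json"

def fetch_results(url=FEED_URL):
    """Download the results feed and return it as parsed JSON."""
    with urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    results = fetch_results()
    print(f"fetched {len(results)} result records")
```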

Election Results Reporting – Assumptions About Standards and Converters

It’s time to finish, in two parts, the long-ish explanation of the assumptions behind our current “VoteStream” prototype stage of the TrustTheVote Project’s Election Night Reporting System (ENRS) project. As I said before, it is an exercise in validating some key assumptions and discovering their limits. Previously, I’ve described our assumptions about election results data and about the software that can present it. Today, I’ll explain the third of three basic assumptions, which in a nutshell is this:

  • If the data has the characteristics that we assumed, and
  • if the software (to present that data) is as feasible and useful as we assumed;
  • then there is a method for getting the data from its source to the reporting software, and
  • that method is practical for real-world elections organizations, scalable, and feasible to be adopted widely.

So, where are we today? Well, as previous postings have described, we made a good start on validating the first two assumptions during the previous design phase. And since starting this prototype phase, we’ve improved the designs and put them into action. So far so good: the data is richer than we assumed, and the software is significantly more flexible than before and effectively presents the data. We’re pretty confident that our assumptions were valid on those two points.

But where did the 2012 election results data come from, and how did it get into the ENRS prototype? Invented elections, or small transcribed subsets of real results, were fine for design; but in this phase it needs to be real data, complete data, from real election officials, used in a regular and repeated way. That’s the kind of connection between data source and ENRS software that we’ve been assuming.
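
To picture the kind of connection we have in mind, here is a minimal sketch: one converter per county source, all feeding a common loading step that can be re-run whenever a county republishes its results. Every name and record shape in it is illustrative, not the prototype’s actual code.

```python
from typing import Callable, Dict, Iterable

# Illustrative sketch only: one converter per county source, all feeding a
# common loading step that is meant to be re-run as results are updated.
ResultRecord = dict  # a record already translated into the common format

def load_into_reporting_store(records: Iterable[ResultRecord]) -> None:
    """Stand-in for handing normalized records to the reporting software."""
    for record in records:
        print("loading:", record)

def refresh(converters: Dict[str, Callable[[], Iterable[ResultRecord]]]) -> None:
    """Run each county's converter and load its output; repeatable by design."""
    for county, convert in converters.items():
        print(f"refreshing results for {county}")
        load_into_reporting_store(convert())

# Usage sketch with placeholder converters (real ones would parse each
# county's own export format):
# refresh({"Ramsey County MN": lambda: [], "Travis County TX": lambda: []})
```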

Having stated this third of three assumptions, the next point is what we’re doing to prove that assumption and assess its limits. That will be part two of two of this last segment of my account of our assumptions and progress to date.

— EJS

 

Election Results Reload: the Time is Right

In my last post, I said that the time is right for breaking the logjam in election results reporting, enabling a big reload of reporting technology and a big increase in public transparency. Now, let me explain why, starting with the biggest of several reasons.

Elections data standards are needed to define common data formats into which a variety of results data can be converted.

Those standards are emerging now, and previously the lack of them was a real problem.

  • We can’t reasonably expect a local elections office to take additional efforts to publish the data, or otherwise serve the public with election results services, if the result will be just one voice in a Babel of dozens of different data languages and dialects.
  • We can’t reasonably expect a 3rd party organization to make use of the data from many sources, unless it’s available in a single standard format, or they have the wherewithal to do huge amounts of work on data conversion, repeatedly.
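
To make the idea of a common format concrete, here is a deliberately simplified sketch of what a single results record might look like; it is not any of the actual published standards discussed below, just an illustration of the kind of structure a standard pins down.

```python
# Deliberately simplified example of one record in a hypothetical common
# results format; not any actual published standard, just the kind of
# structure such a standard defines so that data from many counties can be
# published and consumed uniformly.
common_result_record = {
    "election": "2012 General Election",
    "jurisdiction": "Example County",
    "reporting_unit": "Precinct 42",
    "contest": "U.S. Senate",
    "candidate": "Jane Doe",
    "party": "Independent",
    "vote_count": 1234,
}
```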

The good news is that election data standards have come a long way in the last couple of years, due to:

  • Significant support from the U.S. Government’s standards body, the National Institute of Standards and Technology (NIST);
  • Sustained effort from the volunteers working in standards committees in the international standards body — the IEEE 1622 Working Group; and
  • Practical experience with evolving de facto standards, particularly with the data formats and services of the Pew Voting Information Project (VIP), and the several elections organizations that participate in providing VIP data.

There are other reasons why the time is right, but they are more widely understood:

  • We now have technologies that perennially understaffed and underfunded elections organizations can feasibly adopt quickly and cheaply, including powerful web application frameworks, supported by cloud hosting operations, within a growing ecosystem of web services that enable many organizations to access a variety of data and apps.
  • “Open government,” “open data,” and even “big data” are buzz phrases now commonly understood, which describe a powerful and maturing set of technologies and IT practices.  This new language of government IT innovation facilitates actionable conversations about the opportunity to provide the public with far more robust information on elections and their participation and performance.

It’s a “promised land” of government IT and the so-called Gov 2.0 movement (arguably more like Gov 3.0, when you consider that 2.0 was all about collaboration while 3.0 is becoming all about the “utility web” of real apps available on demand, a direction some of these services will inevitably take). However, for election technology in the near term, we first have to cross the river by learning how to “get the data out” (and that is more like Gov 2.0). More next time on our assumptions about how that river can be crossed, and our experiences to date on doing that crossing.

— EJS