For (Digital) Poll Books — Custody Matters!

Today I am presenting at the annual Elections Verification Conference in Atlanta, GA, where my panel is discussing the good, the bad, and the ugly of the digital poll book (often referred to as the “e-pollbook”).  For our casual readers, the digital poll book or “DPB” is, as you might assume, a digital relative of the paper poll book: that pile of print-outs containing the names of registered voters for the precinct where they are registered to vote.

For our domain-savvy readers, the issues to be discussed today concern the application (sometimes the overloaded application) of DPBs, and the related questions of reliability, security, and verifiability.  So as I head into this, I wanted to echo some thoughts here about DPBs as we are addressing them at the TrustTheVote Project.

We’ve been hearing much lately about State and local election officials’ appetite (or infatuation) for digital poll books.  We’ve been discussing various models and requirements (or objectives), while developing the core of the TrustTheVote Digital Poll Book.  But in several of these discussions, we’ve noticed that only two out of three basic purposes of poll books of any type (paper or digital, online or offline) seem to be well understood.  And we think the gap shows why physical custody is so important—especially so for digital poll books.

The first two obvious purposes of a poll book are to [1] check in a voter as a prerequisite to obtaining a ballot, and [2] to prevent a voter from having a second go at checking-in and obtaining a ballot.  That’s fine for meeting the “Eligibility” and “Non-duplication” requirements for in-person voting.

But then there is the increasingly popular absentee voting, where the role of poll books seems less well understood.  In our humble opinion, those in-person polling-place poll books are also critical for absentee and provisional voting.  Bear in mind, those “delayed-cast” ballots can’t be evaluated until after the post-election poll-book-intake process is complete.

To explain why, let’s consider one fairly typical approach to absentee evaluation.  The poll book intake process results in an update to the voter record of every voter who voted in person.  Then, the voter record system is used as one part of absentee and provisional ballot processing.  Before each ballot may be separated from its affidavit, the reviewer must check the voter identity on the affidavit, and then find the corresponding voter record.  If the voter record indicates that the voter cast their ballot in person, then the absentee or provisional ballot must not be counted.
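To make that rule concrete, here is a minimal sketch in Python of the evaluation logic described above; the record layout, identifiers, and function name are hypothetical, not drawn from any real voter record system.

```python
# Hypothetical sketch of the absentee/provisional evaluation rule:
# a delayed-cast ballot counts only if the voter exists in the records
# and the poll-book intake did NOT mark them as having voted in person.

def evaluate_delayed_ballot(affidavit_voter_id, voter_records):
    """Decide whether an absentee or provisional ballot may be counted."""
    record = voter_records.get(affidavit_voter_id)
    if record is None:
        return "reject"            # no matching voter record: not eligible
    if record["voted_in_person"]:
        return "reject"            # already voted in person: no double vote
    return "count"                 # eligible and not yet voted

# Example: poll-book intake has marked voter 'v42' as having voted in person.
records = {
    "v42": {"voted_in_person": True},
    "v77": {"voted_in_person": False},
}
assert evaluate_delayed_ballot("v42", records) == "reject"
assert evaluate_delayed_ballot("v77", records) == "count"
```

Note that the decision depends entirely on the intake update being trustworthy, which is the custody point developed below.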

So far, that’s a story about poll books that should be fairly well understood, but there is an interesting twist when it comes to digital poll books (DPBs).

The general principle for DPB operation is that it should follow the process used with paper poll books (though other useful features may be added).  With paper poll books, both the medium (paper) and the message (who voted) are inseparable, and remain in the custody of election staff (LEOs and volunteers) throughout the entire life cycle of the poll book.

With the DPB, however, things are trickier. The medium (e.g., a tablet computer) and the message (the data that’s managed by the tablet, and that represents who voted) can be separated, although they should not be.

Why not? Well, we can hope that the medium remains in the appropriate physical custody, just as paper poll books do. But if the message (the data) leaves the tablet, and/or becomes accessible to others, then we have potential problems with accuracy of the message.  It’s essential that the DPB data remain under the control of election staff, and that the data gathered during the DPB intake process is exactly the data that election staff recorded in the polling place.  Otherwise, double voting may be possible, or some valid absentee or provisional ballots may be erroneously rejected.  Similarly, the poll book data used in the polling place must be exactly as previously prepared, or legitimate voters might be barred.
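One common way to meet that “exactly the data election staff recorded” requirement is a keyed integrity check on the poll book data. The sketch below is a simplified illustration using an HMAC; the key handling and data format are assumptions for the example, not a description of the TTV design.

```python
# Illustrative sketch: sealing poll-book data so the intake process can
# detect any alteration in transit. The key custody shown here is
# deliberately simplified; a real system needs proper key management.
import hmac
import hashlib

SECRET_KEY = b"held-by-election-officials-only"  # hypothetical key

def seal(poll_book_bytes):
    """Compute an integrity tag over the poll-book data."""
    return hmac.new(SECRET_KEY, poll_book_bytes, hashlib.sha256).hexdigest()

def verify(poll_book_bytes, tag):
    """Check that the data matches the tag computed at the polling place."""
    return hmac.compare_digest(seal(poll_book_bytes), tag)

data = b"precinct-12,voter-v42,checked-in"
tag = seal(data)
assert verify(data, tag)                       # unmodified data verifies
assert not verify(data + b",voter-v99", tag)   # any alteration is detected
```

An integrity check like this detects tampering after the fact; it does not replace physical custody, which is why both matter.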

That’s why digital poll books must be carefully designed for use by election staff in a way that doesn’t endanger the integrity of the data.  And this is an example of the devil in the details that’s so common for innovative election technology.

Those devilish details derail some nifty ideas, like one we heard of recently: a simple and inexpensive iPad app that provides the digital poll book UI based on poll book data downloaded (via a 4G wireless network) from “cloud storage,” where an election official previously put it as a simple CSV file, and where the end-of-day poll book data was put back into the cloud storage for later download by election officials.

Marvelous simplicity, right?  Oh heck, I’m sure some grant-funded project could build that right away.  But it turns out that this is wholly unacceptable in terms of chain of custody of the data that accurate vote counts depend on.  You wouldn’t put the actual vote data in the cloud that way, and poll book data is no less critical to election integrity.

A Side Note:  This is also an example of the challenge we often face from well-intentioned innovators of the digital democracy movement who insist that we’re making a mountain out of a molehill in our efforts.  They argue that this stuff is way easier than we make it, and ripe for all of the “kewl” digital innovations at our fingertips today.  Sure, there are plenty of very well designed innovations and combinations of ubiquitous technology that have driven the social web and now the emerging utility web.  And we’re leveraging and designing around elements that make sense here, for instance the powerful new touch interfaces driving today’s mobile digital devices.  But there is far more to it than a sexy interface with a 4G connection.  Oops, I digress into a tangential gripe.

This nifty example of well-intentioned innovation illustrates why the majority of technology work in a digital poll book solution is actually in [1] the data integration (to and from the voter record system); [2] the data management (to and from each individual digital poll book); and [3] the data integrity (maintaining the same control present in paper poll books).

Without a doubt, the voter’s user experience, as well as the poll worker’s and election official’s, is very important—and we’re gathering plenty of requirements and feedback based on our current work.  But before the TTV Digital Poll Book is fully baked, we need to do equal justice to those devilish details, in ways that meet the varying requirements of the various States and localities.

Thoughts? Your ball (er, ballot?)
GAM | out

TrustTheVote on HuffPost

We’ll be live on HuffPost online today at 8pm eastern:

  • @HuffPostLive http://huff.lv/Uhokgr or live.huffingtonpost.com

and I thought we should share our talking points for the question:

  • How do you compare old-school paper ballots vs. e-voting?

I thought the answers would be particularly relevant to today’s NYT editorial on the election which concluded with this quote:

That the race came down to a relatively small number of voters in a relatively small number of states did not speak well for a national election apparatus that is so dependent on badly engineered and badly managed voting systems around the country. The delays and breakdowns in voting machines were inexcusable.

I don’t disagree, and indeed would extend from flaky voting machines to election technology in general, including clunky voter record systems that lead to many of the lines and delays in polling places.

So the HuffPost question is apposite to that point, but still not quite right. It’s not an either/or but rather a comparison of:

  • old-school paper ballots and 19th century election fraud;
  • old-school machine voting and 20th century lost ballots;
  • old-school combo system of paper ballots and machine counting, and botched re-counting;
  • new-fangled machine voting (e-voting) and 21st century lost ballots;
  • newer combo system of paper ballots and machine counting (not voting).

Here are the talking points:

  • Old-school paper ballots were cast by hand and counted by hand, and the counters could change the ballot: for example, a candidate Smith partisan could invalidate a vote for Jones by adding a mark for Smith.
  • These and other paper ballot frauds in the 19th century drove adoption in the early 20th century of machine voting, on the big clunky “lever machines” with the satisfying ka-thunk-swish of the lever recording the votes and opening the privacy curtain.
  • However, big problem with machine voting: no ballots! Once that lever is pulled, all that’s left is a bunch of dials and counters on the backside, each increased by one. In a close election that requires a re-count, there are no ballots to examine! Instead, the best you could do is re-read each machine’s totals and re-run the process of adding them all up, in case there was an arithmetic error.
  • Also, the dials themselves, after election day but before a recount, were a tempting target for twiddling, for the types of bad actors who in the 19th century fiddled with ballot boxes.
  • Later in the 20th century, we saw a move to a combo system of paper ballots and machine counting, with the intent that the machine counts were more accurate than human counts and more resistant to human meddling, with the paper ballots remaining for recounts, and for audits of the accuracy of the counting machinery.
  • Problem: these were the punch ballots of the infamous hanging chad.
  • Early 21st century: run from hanging chad to electronic voting machines.
  • Problem: no ballots! Same as before, only this time the machines are smaller and much easier to fiddle with. That’s “e-voting” but without ballots.
  • Since then, a flimsy paper record was bolted on to most of these systems to support recount and audit.
  • But the trend has been to go back to the combo system, this time with durable paper ballots and optical-scanning machinery for counting.
  • Is that e-voting? Well, it is certainly computerized counting. And the next wave is computer-assisted marking of paper ballots, particularly for voters with disabilities, but with these machine-created ballots counted the same as hand-marked ballots.

Bottom line: whether or not you call it e-voting, so long as there are both computers and human-countable durable paper ballots involved, the combo provides the best assurance that neither humans nor computers are mis-counting or interfering with voters casting ballots.


PS: If you catch us on HP online, please let us know what you thought!

Election Tech “R” Us – and Interesting Related IP News

Good Evening–

On this election night, I can’t resist pointing out the irony of the USPTO’s news of the day for Election Day earlier: “Patenting Your Vote,” a nice little article about patents on voting technology.  It’s also a nice complement to our recent posting on the other form of intellectual property protection on election technology — trade secrets.  In fact, there is some interesting news of the day about how intellectual property protections won’t (as some feared) inhibit the use of election technology in Florida.

For recent readers, let’s be clear again about what election technology is, and our mission. Election technology is any form of computing — “software ‘n’ stuff” — used by election officials to carry out their administrative duties (like voter registration databases), or by voters to cast a ballot (like an opscan machine for recording votes off of a paper ballot), or by election officials to prepare for an election (like defining ballots), or to conduct an election (like scanning absentee ballots), or to inform the public (like election results reporting). That covers a lot of ground for “election technology.”

With the definition, it’s reasonable to say that “Election Technology ‘R’ Us” is what the TrustTheVote Project is about, and why the OSDV Foundation exists to support it.  And about intellectual property protection?   I think we’re clear on the pros and cons:

  • CON: trade secrets and software licenses that protect them. These create “black box” for-profit election technology that seems to decrease rather than increase public confidence.
  • PRO: open source software licenses. These enable government organizations to [A] adopt election technology with a well-defined legal framework, without which the adoption cannot happen; and [B] enjoy the fruits of the perpetual harvest made possible by virtue of open source efforts.
  • PRO: patent applications on election technology.  As in today’s news, the USPTO can make clear which aspects of voting technology can or can’t be protected with patents that could inhibit election officials from using the technology, or require them to pay licensing fees.
  • ZERO SUM: granted patents on techniques or business processes (used in election administration or the conduct of elections) in favor of for-profit companies.  Downside: can increase costs of election technology adoption by governments. Upside: if the companies do have something innovative, they are entitled to I.P. protection, and it may motivate investment in innovation.  Downside: we haven’t actually seen much innovation by voting system product vendors, or contract software development organizations used by election administration organizations.
  • PRO: granted patents to non-profit organizations.  To the extent that there are innovations that non-profits come up with, patents can be used to protect the innovations so that for-profits can’t nab the I.P., and charge license fees back to governments running open source software that embodies the innovations.

All that stated, the practical upshot as of today seems to be this: there isn’t much innovation in election technology, and that may be why for-profits try to use trade secret protection rather than patents.

That underscores our practical view at the TrustTheVote Project: a lot of election technology isn’t actually hard, but rather simply detailed and burdensome to get right — a burden beyond the scope of all but a few do-it-ourself elections offices’ I.T. groups.

That’s why our “Election Technology ‘R’ Us” role is to understand what the real election officials actually need, and then to (please pardon me) “Git ‘er done.”

What we’re “getting done” is the derivation of blueprints and reference implementations of an elections technology framework that can be adopted, adapted, and deployed by any jurisdiction, with common open data formats, processes, and verification and accountability loops designed in from the get-go.  This derivation is based on the collective input of elections experts nationwide, from every jurisdiction and every political process point of view.  And the real beauty: whereas no single jurisdiction could possibly afford (in terms of resources, time, or money) to achieve this on their own, by virtue of the collective effort they can, because everyone benefits — not just from the initial outcomes, but from the on-going improvements and innovations contributed by all.

We believe (and so do the many who support this effort) that the public benefit is obvious and enormous: from every citizen who deserves their ballot counted as cast, to every local election official who must have an elections management service layer with complete fault tolerance, in a transparent, secure, and verifiable manner.

From what we’ve been told, this certainly lifts a load of responsibility off the shoulders of elections officials and allows it to be more comfortably distributed.  But what’s more, regardless of how our efforts may lighten their burden, the enlightenment that comes from this clearinghouse effect is of enormous benefit to everyone by itself.

So, at the end of the day, what we all benefit from is a way forward for publicly owned critical democracy infrastructure.  That is, that “thing” in our process of democracy that causes long lines and insecurities, which the President noted we need to fix during his victory speech tonight.  Sure, it’s about a lot of process.  But where there will inevitably be technology involved, well, that would be the TrustTheVote Project.


At the Risk of Running off the Rails

So, we have a phrase we like to use around here, borrowed from the legal academic world, where it describes a nuance of conduct in analyzing tort negligence: “frolic and detour.”  I am taking a bit of a detour and frolicking in an increasingly noisy element of explaining the complexity of our work here.  (The detour comes from the fact that as “Development Officer” my charge is ensuring the Foundation and its projects are financed, backed, supported, and succeed in adoption.  The frolic is the commentary below about software development methodologies, although I am not currently engaged in or responsible for technical development outside of my contributions in UX/UI design.)  Yet I won’t attempt to deny that this post is also a bit of promotion for our stakeholders — elections IT officials who expect us to address their needs for formal requirements, specifications, benchmarks, and certification, while embracing the agility and speed of modern development methodologies.

This post was catalyzed by chit-chat at dinner last evening with an energetic technical talent who is jacked-up about the notion of elections technology being an open source infrastructure.  Frankly, in 5 years we haven’t met anyone who wasn’t jacked-up about our cause, and their energy is typically around “damn, we can do this quick; let’s git ‘er done!”  But it is at about this point that the discussion always seems to go a bit sideways.  Let me explain.

I guess I am exposing a bit of old school here, but having had formal training in computer systems science and engineering (years ago), I believe data modeling — especially for database-backed enterprise apps — is an absolute priority.  And the stuff of elections systems is serious technology, demanding a significant degree of fault tolerance, integrity and verification assurance, and, perhaps most important, a sound data model.  And modeling takes time and requires documentation, both of which are nearly antithetical to today’s pop culture of agile development.

Bear in mind, the TTV Project embraces agile methods for UX/UI development efforts. And there are a number of components in the TTV elections technology framework that do not require extensive up-front data modeling and can be developed purely in an iterative environment.

However, we claim that data modeling is critical for certain enterprise-grade elections applications because (as many seasoned architects have observed): [a] the data itself has meaning and value outside of the app that manipulates it, and [b] scalability requires a good DB design, because you cannot just add scalability later.  The data model or DB design defines the structure of the database and the relationships between the data sets; it is, in essence, the foundation on which the application(s) are built.  A solid DB design is essential to a scalable application.  Which leads to my lingering question: how do agile development shops design a database?
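For illustration, here is the kind of tiny up-front model we’re talking about, sketched with SQLite; the tables and constraints are invented for the example, but they show how a data model can enforce rules (like non-duplication of check-ins) in the data layer itself, independent of any one application.

```python
# A deliberately tiny, hypothetical data model: voters and check-in events
# are separate entities related by key, so "who voted" has meaning outside
# the app that manipulates it, and the DB itself enforces integrity rules.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE voter (
    voter_id   INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    precinct   TEXT NOT NULL
);
CREATE TABLE check_in (
    check_in_id INTEGER PRIMARY KEY,
    voter_id    INTEGER NOT NULL UNIQUE REFERENCES voter(voter_id),
    method      TEXT NOT NULL CHECK (method IN ('in_person', 'absentee'))
);
""")
conn.execute("INSERT INTO voter VALUES (1, 'A. Voter', 'P-12')")
conn.execute("INSERT INTO check_in VALUES (NULL, 1, 'in_person')")

# The UNIQUE constraint enforces non-duplication in the model itself:
try:
    conn.execute("INSERT INTO check_in VALUES (NULL, 1, 'absentee')")
except sqlite3.IntegrityError:
    print("duplicate check-in rejected by the data model")
```

The point is not the particular tables; it is that these rules were decided before any screens were built, and every future application inherits them.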

I’ve heard the “Well, we start with a story...” approach.  And when I ask those whom I really respect as enterprise software architects with real DB design chops, who also respect and embrace agile methodologies, they tend to express reservations about the agile mindset being boorishly applied to truly scalable, enterprise-grade relational DB design that results in a well-performing application and related data integrity.

Friends, I have no intention of hating on agile principles of lightweight development methods — they have an important role in today’s application software development space and an important role here at the Foundation.  But at the same time, I want to explain why we cannot simply “bang out” new elections apps for ballot marking, tabulation, or ballot design and generation in a series of sprints and scrums.

First, in all candor, I fear this confusion rests in the reality that fewer and fewer developers today have had a complete computer science education, and cannot really claim to be disciplined software engineers or architects.  Many (not all) have just “hacked” with, and taught themselves, development tools because they built a web site or implemented a digital shopping bag for a friend (much like the well-intentioned developer my wife and I met last evening).

Add in the fact that the formality and discipline of compiled code has given way to the rapid-prototyping benefits of interpreted code.  And in the process of this new, modern training in software development (almost exclusively for the sandbox of the web browser as the UX/UI vehicle), what has been forgotten is that data modeling exists not because it creates overhead and delays, but because it removes such impediments.

Look at this another way.  I like to use building analogies — perhaps because I began my collegiate studies long ago in architectural engineering, before realizing that computer graphics would replace drafting.  There is a reason we spend weeks, sometimes months, traveling past large holes in the ground with towers of re-bar, forms, and concrete pouring, without any clue of what will really stand there once finished.  And yet, later, as the skyscraper takes form, the speed with which it comes together seems to accelerate almost weekly.  Without that foundation carefully laid, the building cannot stand for any extended period of time, let alone bear the dynamic and static weights of its appointments, systems, and occupants.  So too is this the case with complex, highly scalable, fault-tolerant enterprise software: without the foundation of a solid data model, the application(s) will never be sustainable.

I admit that I have been out of production-grade software development (i.e., in-the-trenches coding and compiling; link, load, dealing with lint, and running in debug mode) for years, but I can still climb on the bike and turn the pedals.  The fact is, data flow and data model could not be more different, and the former cannot exist without the latter.  It is well understood, and data modeling has demonstrated many times, that one cannot create a data flow out of nothing.  There has to be a base model as a foundation for one or more data flows, each mapping to its application.  Yet in our discussion, punctuated by a really nice wine and great food, this developer seemed to want to dismiss modeling as something that can be done later… perhaps like refactoring (!?)

I am beginning to believe this fixation of modern developers with “rapid” non-data-model development is misguided, if not dangerous for its latent, time-shifted costs.

Recently, a colleague at another company was involved with the development of a system where no time whatsoever was spent on data model design.  Indeed, the screens started appearing in record time.  The UX/UI was far from complete, but usable.  And the team was cheered as having achieved great “savings” in the development process.  However, when it came time to expand and extend the app with additional requirements, the developers waffled and explained they would have to recode the app in order to meet the new process requirements.  The data was unchanged, but the processes were evolving.  The balance of the project ground to a halt amid the dismissal of the first team, arguments over why up-front requirements planning wasn’t done, and the scramble to figure out who to hire to solve it.

I read somewhere of another development project where the work was getting done in 2-week cycles. They were about 4 cycles away from finishing when a task called “concurrency” appeared on the tracker schedule for the penultimate cycle.  The project subsequently imploded because all of the code had to be refactored (a core entity actually turned out to be two entities).  It turns out that no upfront modeling led to this sequence of events, but unbelievably, the (agile) development firm working on the project spun this as a “positive outcome”; that is, they explained, “Hey, it’s a good thing we caught this a month before go-live.”  Really?  Why wasn’t that caught before that pungent smell of freshly cut code started wafting through the lab?

Spin doctoring notwithstanding, the scary thing to me is that performance and concurrency problems caused by a failure to understand the data are being caught far too late in the Agile development process, which makes it difficult if not impossible to make real improvements.  In fact, I fear that many agile developers have the misguided principle that all data models should be:

create table DATA (
  key INTEGER,
  stuff BLOB
);

Actually, we shouldn’t joke about this.  That idea comes from a scary reality: a DBA (database administrator) friend tells of a development team he is interacting with on an outsourced State I.T. project that has decided to migrate a legacy non-Oracle application to Oracle using precisely this approach.  Data that had been stored as records in old ISAM-type files will be stored in Oracle as byte sequences in blobs, with an added surrogate generated unique primary key.  When he asked what the point of that approach was, no one at the development shop could give him a reasonable answer other than “in the time frame we have, it works.”   It begs the question: what do you call an Oracle database where all the data in it is invisible to Oracle itself and cannot be accessed and manipulated directly using SQL?  Or, said differently, would you call a set of numbered binary records a “database,” or just “a collection of numbered binary records”?
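A quick sketch shows what is lost with that approach; the tables and column names below are invented for illustration, contrasting a blob-only table with a proper relational one holding the same data.

```python
# Demonstration: records stored as opaque blobs are invisible to SQL,
# while the same data in real columns can be queried directly.
import sqlite3
import json

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (key INTEGER PRIMARY KEY, stuff BLOB)")
conn.execute("CREATE TABLE voter (key INTEGER PRIMARY KEY, name TEXT, precinct TEXT)")

row = {"name": "A. Voter", "precinct": "P-12"}
conn.execute("INSERT INTO data VALUES (1, ?)", (json.dumps(row).encode(),))
conn.execute("INSERT INTO voter VALUES (1, ?, ?)", (row["name"], row["precinct"]))

# SQL can filter the relational table directly...
hits = conn.execute("SELECT key FROM voter WHERE precinct = 'P-12'").fetchall()

# ...but the blob table can only be filtered by decoding every row in app code.
blob_hits = [k for (k, blob) in conn.execute("SELECT key, stuff FROM data")
             if json.loads(blob)["precinct"] == "P-12"]

assert hits == [(1,)] and blob_hits == [1]
```

Every query against the blob table becomes a full scan plus application-side decoding, which is exactly the “collection of numbered binary records” problem.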

In another example of the challenges of agile development in a database-driven app world, a DBA colleague describes being brought in on an emergency contract basis to an Agile project under development on top of Oracle, to deal with “performance problems” in the database.   Turns out the developers were using Hibernate and apparently relied on it to create their tables on an as-needed basis, simply adding a table or a column in response to incoming user requirements and not worrying about the data model until it crawled out of the code and attacked them.

This sort of approach to app development is what I am beginning to see as “hit and run.”  Sure, it has worked so far in the web app world of start-ups: get it up and running as fast as possible, then exit quickly and quietly before they can identify you as having triggered the meltdown when scale and performance start to matter.

After chatting with this developer last evening (and listening to many others over recent months lament that we’re simply moving too slowly), I am starting to think of Agile development as a methodology of “do anything rather than nothing, regardless of whether it’s right.”  And this may be to support the perception of rapid progress: “Look, we developed X components/screens/modules in the past week.”  Whether any of this code will stand up to production performance environments is to be determined later.

Another Agile principle is incremental development and delivery.  It’s easy for a developer to strip out a piece of poorly performing code and replace it with a chunk that offers better or different capabilities.  Unfortunately, you just cannot do this in a database: for example, you cannot throw away old data in old tables and simply create new empty tables.
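Here is a minimal sketch of what “replacing” a populated table actually entails; the names are illustrative, and a real migration would add transactions, constraints, and verification steps.

```python
# Unlike swapping a code module, restructuring a populated table means
# migrating every existing row forward, not creating a new empty table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voter_old (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO voter_old VALUES (1, 'Ada Voter')")

# New requirement: split the name into two columns.
conn.execute("CREATE TABLE voter (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")

# The old data must be carried forward row by row...
rows = conn.execute("SELECT id, full_name FROM voter_old").fetchall()
for vid, full in rows:
    first, last = full.split(" ", 1)
    conn.execute("INSERT INTO voter VALUES (?, ?, ?)", (vid, first, last))

# ...and only then can the old structure be retired.
conn.execute("DROP TABLE voter_old")

assert conn.execute("SELECT first, last FROM voter").fetchall() == [("Ada", "Voter")]
```

Every such change also ripples into every application reading the old structure, which is why getting the model right up front pays for itself.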

The TrustTheVote Project continues to need the kind of talent this person exhibited last evening at dinner.  But her zeal aside (and obvious passion for the cause of open source in elections), and at the risk of running off the (Ruby) rails here, we simply cannot afford to have these problems happen with the TrustTheVote Project.

Agile methodologies will continue to have their place in our work, but we need to be guided by some emerging realities, and appreciate that for as fast as someone wants to crank out a poll book app or a ballot marking device, we cannot afford to short-cut simply for the sake of speed.  Some may accuse me of being a waterfall Luddite in an agile world; however, I believe there has to be some way to mesh these things, even if it means requirements scrums, data modeling sprints, or animated data models.


An Independence Holiday Reflection: IP Reform and Innovation in Elections Technology

On this Independence Day I gave some reflection to the intentions of our founding fathers, and how that relates to our processes of elections and the innovations we should strive for to ensure accuracy, transparency, verification, and security.  And as I thought about this more while gazing out at one of the world’s most precious natural resource treasures and typing this post, it occurred to me that innovation in elections systems is largely around the processes and methods more than any discrete apparatus.

That’s when the old recovering IP lawyer in me had an “ah ha” moment.   And that’s what this long-winded post is about: something that actually should matter to you, a reader of this forum about our ongoing effort to make elections and voting technology critical democracy infrastructure.

You see, in America, innovation has long been catalyzed by intellectual property law, specifically patents.

And as you probably also know, patent law is going through major reform efforts in Congress as you read this.  Now here is what you may have missed, which dawned on me as I reflected on this Fourth of July holiday, the efforts of the TrustTheVote Project, and innovations in voting technology: there is a bad ingredient in the current patent reform legislation that threatens not only to undermine the very foundations on which patent law catalyzes innovation, but also to undermine some very basic ideals our founding fathers had in mind as this nation was born.  Bear with me while I unravel this for you; I think it will grab your attention.

So it starts with Members of Congress debating patent reform through the America Invents Act (H.R. 1249).  You see, few may be aware of the role that business method patents (BMPs) play in the political process, especially during elections.  BMPs have been used to protect innovations designed to improve the operation of the political process.   And it is not unreasonable to assume that the TrustTheVote Project itself is working on innovations that could well qualify for patent protection, resulting in patents whose ownership we would assign to the general public.  Weakening the protection for such innovations may in turn reduce the motivation for companies and individuals to continue innovating in these technologies, and it certainly could impact our work as well.  But this is exactly what Section 18 of H.R. 1249, the America Invents Act of 2011, as currently drafted, would likely do.

There is a long history of inventors using BMPs to protect their innovations related to voting systems.  As such systems have developed, from paper voting, to electronic voting, to on-line voting, companies both large and small have continued to innovate, and to protect their new technologies via the patent system, often through the use of BMPs.  Let’s look at just two major areas.

  1. Electronic Voting Systems – it is estimated that between 20% and 30% of American voters now cast their ballots electronically, chiefly via Direct Recording Electronic (DRE) systems. Yet these systems have encountered many problems related to their ability to record votes accurately, verifiably, and securely.  In an effort to remedy these problems (though largely to no demonstrable gain), companies have developed technologies designed to overcome these shortcomings, and have protected those technologies with a series of patents, many of which are classed as BMPs. Organizations with numerous BMPs related to improving electronic voting systems include large companies such as IBM, Accenture and Pitney Bowes, and smaller specialist companies such as Hart InterCivic, Avante International and Smartmatic.
  2. Internet Voting Systems – in DRE systems, voters typically have to be physically present at a polling station in order to cast their ballots. The next logical progression is for voters to cast their ballots remotely, for example via the Internet. For reasons repeatedly explained here and elsewhere, this is just not a good idea given today’s “Internet.” But in any event, such ill-advised efforts require a whole new level of network security in order to ensure that votes are recorded both accurately and verifiably (both being extremely difficult to do, and it’s unclear that any system, patented or not, can do so, but bear with me for the sake of argument). A search of the patents in this area, however, reveals that companies such as Accenture, Hart InterCivic, Scytl, and Avante have BMPs describing so-called Internet voting. These BMPs sit alongside their earlier BMPs covering DRE systems, as these companies develop successive generations of voting technology.

In short, companies continue to seek patent protection for innovations in this sector, and business methods continue to be a vehicle for doing so.

Section 18 of H.R. 1249, the America Invents Act of 2011, aims to give one special interest, banks, a get-out-of-jail-free card. As I read it, the provision does this: if you sue a bank for infringement of a business method patent, the bank can stay the court litigation and take your patent to the USPTO for a special post-grant review (PGR) process. If the bank loses the first round at the Office, it has an automatic appeal to the board within the Office. If it loses again, it has another automatic appeal to the Court of Appeals for the Federal Circuit (CAFC), the sole appeals court for patent cases. This process takes between four and seven years, based on the existing reexamination systems at the Office. And it is special in that the bank can bring in additional forms of prior art not permitted in other reexamination systems.

There is a good reason why the range of prior art that can be used in court to challenge a patent is not available at the Office. A judge, jury, rules of evidence, cross-examination, and other time-tested features of court do not exist at the Patent and Trademark Office. A patent examiner does not have the experience, procedures, institutional knowledge, or time to ascertain whether prior art is genuine or fraudulent. More importantly, examiners do not have the resources to deal with the increased volume of art. Worse, a Section 18 review can be conducted regardless of whether the patent has already been deemed valid in a prior proceeding.

And on this Independence Day Holiday, it occurs to me this violates separation of powers and should be unconstitutional.

Before I explain how I can envision this impacting what we’re doing, let me state that I will not delve into the debate over BMPs, because it devolves into a religious war, and one in which I, as both a computer scientist and an IP lawyer, have actually shifted viewpoints from one side to the other over the years. But suffice it to say that there are examples of useful business method patents that would be eliminated by Section 18 of the patent reform legislation winding its way through Congress. We are all very familiar with one example: SSL. Indeed, the secure sockets layer is covered by two BMPs. Everyone in the world touches this patented innovation every day. If Section 18 had been law in 1995, then Visa and MasterCard, with their SET proposal, could have stalled Netscape and SSL in the USPTO for many years. Microsoft and CommerceNet could have done the same with S-HTTP. The world would be worse off with competing security protocols. E-commerce itself might not have taken off at all; at the very least, its growth would have been stunted.

It is worth noting that since about 2000 the USPTO has employed a “second pair of eyes” process to examine BMP applications twice. Moreover, given the public acrimony over BMPs, the USPTO is very slow to grant them, and the allowance rate is 20% lower than in other art areas. And recently the Supreme Court, in its Bilski decision, affirmed the patentability of BMPs.

Yet, in spite of the acrimony and the higher threshold to get a BMP, many companies large and small innovate and invest in BMPs. The top 20 owners of BMPs are Fortune 100 companies or respectable startups; non-practicing entities comprise a very small portion of the BMP ownership pool. And considering the innovations resident in the open source elections technology framework we’re developing, we too may find ourselves in the middle of the BMP and Section 18 crossfire.

The challenge is that, as a non-profit (pending 501(c)(3)) organization, we cannot and do not engage in the political process of legislation or lobbying. Yet we’re wary of where this is going, and I think you should be too. You see, policy makers don’t often have the time to consume, absorb, and digest the data. They prefer anecdotes, headline-grabbing stories, one-page summaries, and talking points.

So let me turn to our thinking about BMPs and the impact of Section 18.

As mentioned above, without debating the basis for BMPs, we at the TrustTheVote Project have come to accept that they are an essential part of technology IP. One reason is that the scope for IT innovation far exceeds the scope for inventing new technology; it includes innovation in the use of existing technology for new purposes. That has been increasingly true for some 20 years, as the online world has come to encompass so many areas of human activity. One of the more recent advances is the use of IT innovations for public benefit. I’ll explain that in terms of elections and political activity, but first let me give a general idea and one specific existing example.

In our experience with IT IP, a BMP can be used as a way to make a claim that “X has been used for many things before, but not in the area of Y; here is a way to use X for a particular purpose in the area of Y; this enables a new human activity Z.” Now, I could forgo that claim and limit myself to a claim about a Y-inspired extension of X; that extension might be significant enough to warrant a patent for a technical innovation, or it might not.

If I limited myself that way, then another party could claim the innovation of using that new method for a particular purpose Z.  So in general, I want to claim both, to protect the right to use X in Y for Z.

Here’s a big idea: “Protect” in the public benefit world means “anyone can do so, not limited by a private or for profit IP holder.” That applies whether or not my extension of prior X is sufficiently innovative by itself.

As an example of this idea, let’s return to SSL, the subject of very well known and high quality BMPs.  When SSL was invented, the use of cryptography for communication security was already well established, including the use of digital certificates to establish (a chain of) trust in the identity of parties communicating.  In fact, there were many examples of cryptographic protocols and communication protocols.  So for X, let’s say “use of cryptographic protocols and communication protocols together for communication with security properties.”

Now, SSL as a protocol may well have been sufficiently innovative to warrant patents on algorithms. But whether or not that was true, SSL was used for several purposes, including a particular kind of communication in which one party trusts a third to vouch for the second party’s identity as being sufficiently established for a financial transaction. That’s Y. Z is “digital commerce,” meaning financial transactions performed as part of an exchange in which one party pays another for goods and services, including digital goods and digital services. Without X used for Y, digital commerce wouldn’t exist, and many forms of digital services and digital goods simply would not be provided. With X used for Y, Z is enabled for the first time. And I view Z, digital commerce, as a major public benefit, even if it was primarily for private, for-profit commercial transactions.

The public benefit is a larger economy with the addition of digital commerce.

So far so good, but let’s revisit the value of the BMP. If it didn’t exist, the holders of patents for X could effectively block Z, or insert themselves as intermediaries into every use of X in Y for Z, or X in A for B: any use at all. For example, IBM holds many patents on cryptographic protocols. I don’t know whether those protocols and patents were sufficiently broad to cover the SSL protocol as an algorithm or apparatus. But if that were so, and BMPs didn’t exist, then IBM could have insisted that it be a party to every digital commerce transaction, only allowing transaction services by parties that made payments to IBM on terms dictated by IBM. Any other parties would be barred from digital commerce. Of course, that public benefit may be a matter of opinion on which many people would differ.

In elections and politics, public benefit may be clearer.

For a first example, consider technical innovations for online voter registration. Such innovations might include the use of a “forms wizard” to help people follow complicated rules for filling out voter registration forms; digital means for capturing a signature for the form; digital transmission of the form itself, or its data; and more. All these techniques have been invented before and used in other areas of human endeavor. Adapting them for use in voter registration is probably not an adaptation that qualifies as an innovation. But if one wants to ensure that the public can use IT implementations of online voter registration, a BMP can cover the use of forms wizards (or other X) for online voter registration (Y) to enable more rapid and widespread ability of citizens to register to vote (Z). Many people would definitely regard that as a public benefit. The BMP protects that benefit: when the BMP holder permits anyone to use the business process, the BMP bars a patent holder for X (the specific IT technique) from claiming that online VR implementations infringe their patent.

I don’t know who, if anyone, holds a patent relevant to the types of innovation in online VR that I refer to here. However, I suspect that many would regard it as a public detriment for citizens to have to pay a for-profit company for the right to use an online VR service, or for local or state governments to have to pay for the privilege of operating such a service.

Other examples lie in the activities around political campaigns to form communities of supporters, organize volunteers, raise money, and so on. The use of social media and other online technology has increased, and I expect it will continue to increase, enabling more citizens to participate more easily in the political process. As in elections technology, such innovation is often the application of established technology to a new purpose.

BMPs can protect the right of political organizations to use such established technology.  I can easily imagine a PAC or other issues-based political organization building a membership organization that includes online interaction with members, including gaining and retaining credit card information for future contributions to the organization, or directly to a candidate or campaign. If I were a member of such an organization, I might expect to get an email about a new set of candidate reviews for candidates in an upcoming election.  I could go to the organization’s web site and read up on candidates.  I could choose to make a donation directly to the candidate’s campaign, immediately, with a single click of a “give $100” button in the candidate review.

Suppose that there were a private company with a patent on making payment in digital commerce using a similar method.  Without a BMP for the process of a citizen contributing to a campaign as part of a Web session with a web site of an issues based voluntary membership association, that patent holder could insist that it be the sole conduit of such contributions.  I suspect most people would view it as a public detriment to either pay a for-profit company for the privilege of a quick and easy campaign contribution, or use a more cumbersome and error prone method for free.

Worse, one could imagine selective enforcement of the patent, or selectively preferential licensing agreements, to make the quick and easy contribution method available only to political campaigns that the patent holder favored.

The same selective approach could be applied to any part of the political process.  Back to voter registration, it’s possible that a patent holder would choose to license its innovations selectively, only to those local election officials in locales where the majority of unregistered voters are perceived as friendly to the politics of the patent holder.

A selective approach could also be applied in disputes. Imagine, for example, a financial transactions company able to stop a political campaign from collecting online contributions in a certain manner while the dispute is resolved. If the time frame stretches long enough, it doesn’t matter whether the campaign wins the dispute; the election will already be over, and the opportunity to raise and use funds will be gone.

And these types of scenarios could fit pretty much any use of social media technology, where a patent holder of a purely technical patent could assert the right to constrain the use of the technique in any field of human activity, including elections or politics.

These examples may be fanciful, or not based on a real scenario where an election-relevant or politics-relevant technology-using process is the subject of a BMP that involves a particular use of a particular underlying technology for enabling or automating the process.  But I believe that the general benefit of BMPs would apply to real cases.

This may be a new idea: organizations with a public-benefit motivation wanting to ensure general use of technology-enabled innovations in electoral or political processes, rather than trying to control, reserve, or profit from BMPs. And it is certainly not what BMPs might have been intended for. But I believe that BMPs could be used, and for all I know are already being used, for electoral or political processes. It would be a shame, and a public detriment, if BMPs became less useful, either in general or in disputes with a particular class of organizations. This might be counterintuitive, but as we see the growth of digital democracy, open government, online activism, and the like, it shouldn’t come as a surprise that these new forms of technology-enabled human activity also create new uses for IP protections that pre-date these evolving activities.

Setting aside the efficacy of BMPs and the related religious debates, I bet we can all agree that without BMPs, Goliath (IBM in my perhaps fanciful example above) can block the public, especially the little guy. Section 18 in the patent bill gives banks a new tool, unique to banks, to stop David from getting his idea to market. And this troubles me, for it moves us toward that proverbial slippery slope.

At the end of the day, Section 18 of H.R. 1249, the America Invents Act of 2011, is, frankly, akin to a government regime refusing to grant a permit to open a business simply because the applicant is from the wrong caste, religion, or political party… and that is not the government regime of this nation, whose independence we celebrate today. Yet it appears some special interests in patent reform may hold an otherwise misguided view to the contrary.

Your ball

Voting System (De)certification – A Way Forward? (2 of 2)

Yesterday I wrote about the latest sign of the downward spiral of the broken market in which U.S. local election officials (LEOs) purchase products and support from vendors of proprietary voting systems: monolithic technology that is the result of years of accretion, and that costs years and millions of dollars to test and certify for use. That includes a current case where the process didn’t catch flaws, which may result in a certified product being de-certified and replaced by a newer system, at the LEOs’ cost.

Ouch! But could you really expect a vendor in this miserable market to give away a new product that it spent years and tens of millions of dollars to develop, to every customer of the old product to whom the vendor had planned to sell upgrades, just because of flaws in the old product? But the situation is actually worse: LEOs don’t actually have the funding to acquire a hypothetical future voting system product in which the vendor was fully open about true costs, including

(a) certification costs both direct (fees to VSTLs) and indirect cost (staff time), as well as

(b) costs of development including rigorously designed and documented testing.

Actually, development costs alone are bad enough, but certification costs make it much worse, as well as creating a huge barrier to entry for anyone foolhardy enough to try to enter the market (or even stay in it!) and make a profit.

A Way Forward?

That double-whammy is why I and my colleagues at OSDV are so passionate about working to reform the certification process, so that individual components can be certified for far less time and money than a mess o’ code accreted over decades, including wads of interwoven functionality that might not even need to be certified! And then, of course, these individual components could be re-certified for bug fixes by re-running a durable test plan that the VSTL created the first time around. That, in turn, requires common data formats for inter-operation between components; for example, between a PCOS device and a Tabulator system that combines and cross-checks all the PCOS devices’ outputs, in order either to find errors and omissions or to produce a complete election result.
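To make the idea of a durable, re-runnable test plan concrete, here is a minimal sketch in Python of a conformance check for one component's output against a shared data format. The schema and field names are invented stand-ins for illustration, not any real standard:

```python
# A durable, re-runnable conformance check for one component's output.
# The schema below is an invented stand-in for a real common data format.
REQUIRED_KEYS = {"device_id", "precinct", "counts"}

def conforms(dataset: dict) -> bool:
    """Check one counting device's output dataset against the shared format."""
    if not REQUIRED_KEYS <= dataset.keys():
        return False
    counts = dataset["counts"]
    # Every tally must be a non-negative integer keyed by choice name.
    return (isinstance(counts, dict) and
            all(isinstance(v, int) and v >= 0 for v in counts.values()))
```

A VSTL could keep checks like this in a versioned test plan and re-run them unchanged whenever a component is patched, which is exactly what makes component-level re-certification cheap.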

So once again, our appreciation to NIST, the EAC, and IEEE 1622 for actually doing the detailed work of hashing out these common data formats, which are the bedrock of inter-operation, which is the prerequisite for certification reform, which enables certification cost reduction, which might result in voting system component products being available at true costs that are affordable to the LEOs who buy and use them.

Yes, that’s quite a stretch, from data-standards committee work to a less broken market that might be able to deliver to customers at reasonable cost. But to replace a rickety old structure with a new, solid, durable one, you have to start at the bedrock, and that’s where we’re working now.


PS: Thanks again to Joe Hall for pointing out that the current potential de-certification and mandatory upgrade scenario (described in Part 1) illustrates the untenable nature of a market that would require vendors to pay for expensive testing and certification efforts, and also (as some have suggested) to forgo revenue when otherwise for-pay upgrades are required because of defects in software.

Voting System (De)certification – Another Example of the Broken Market (1 of 2)

Long-time readers will certainly recall our view that the market for U.S. voting systems is fundamentally broken. Recent news provides another illustration of the downward spiral: the likely de-certification of a widely used voting system product from the vendor that owns almost three quarters of the U.S. market.

The current stage of the story is that the U.S. Election Assistance Commission is formally investigating the product for serious flaws that led to errors of the kind seen in several places in 2010, and perhaps best documented in Cuyahoga County. (See:  “EAC Initiates Formal Investigation into ES&S Unity Voting System”.) The likely end result is the product being de-certified, rendering it no longer legal for use in many states where it is currently deployed. Is this a problem for the vendor? Not really. The successor version of the product is due to emerge from a lengthy testing and certification process fairly soon. Having the current product banned is actually a great tool for migrating customers to the latest product!

But at what cost, and to whom? The vendor will charge the customers (local election officials, or LEOs) for the new product, the same as it would have if the migration were voluntary and the old product version still legal. The LEOs will have to sign and pay for a multi-year service agreement. And they will have the same indirect costs of staff effort (at the expense of other duties, like running elections, or getting enough sleep to run an election correctly), and direct costs for shipping, transportation, storage, and so on. These are real costs! (Example: I’ve heard reports of some under-funded election officials opting not to use election equipment that they already have, because they have no funding for the expense of moving it from the warehouse to a testing facility and doing the required pre-election testing.)

Some observers have opined that vendors of flawed voting system products should pay: whether damages, or fines, or doing the migration gratis, or something. But consider this deeper question, from UCB and Princeton’s Joe Hall:

Can this market support a regulatory/business model where vendors can’t charge for upgrades and have to absorb costs due to flaws that testing and certification didn’t find? (And every software product, period, has them).

The funding for a high level of quality assurance has to come from somewhere, and that’s not voting system customers right now. Perhaps we’re getting to the point where the amount of effort it takes to produce a robust voting system and get it certified — at the vendor’s expense — creates a cost that customers are not willing or able to pay when the product gets to market.

A good question, and one that illustrates the continuing downward spiral of this broken market. The cost to vendors of certification is large, and you can’t really blame a vendor for the sort of overly rapid development, marketing, and sales that leads to the problems being investigated. These folks are in this business to make a profit, for heaven’s sake; what else could we expect?


PS – Part Two, coming soon: a way out of the spiral.

2011: A Look Ahead; Another Glance Back

As Greg said in his New Year’s posting, we’ve been planning a variety of activities for 2011, and reflecting on what we did in 2010, on what remains to do, and on what we can do better. But at the risk of boring you with a laundry list, I wanted to provide some additional detail on some of the 2010 activities that Greg mentioned. Many of the items listed below indicate how detail-oriented work in election technology (ours and others’) has to get in order to actually deliver.

Voter Registration

  • Released version 2.0 of the TTV Online Voter Registration tool.
  • Put OVRv2 into production, operated by Open Source Labs and managed by RockTheVote.
  • Under RTV’s management, OVR has served well over 200,000 registrants for the 2010 election cycle, nearing the quarter-million total.

Election Management System

  • First-ever open source election management software deployed for use in DC and VA overseas voting projects in November 2010 elections.
  • TTV Election Manager supports DC legacy data formats, VIP standard election data for VA, DC-specific jurisdiction definitions, and first-ever new VA custom jurisdictions for local referenda.
  • First-ever system for computing and proofing an entire state’s worth of election data and ballot definitions.

Ballot Design

  • First-ever open source paper ballot design system supporting local and state-specific ballot formats and composition rules for multiple jurisdictions, including DC, VA, and NH.
  • For VA statewide election, over 2,700 locality-specific ballots generated, including first-ever state-law compliant ballots for special classes of non-local UOCAVA voters.
  • First-ever generation of dual-use ballot documents, the same document marked either digitally or physically to become the same legal paper ballot of record.

Overseas Ballot Distribution

  • Fully localized ballots delivered to thousands of UOCAVA voters worldwide
  • Data integration with state voter record databases, ensuring every eligible UOCAVA voter gets their correct ballot
  • Public test of Digital Ballot Return – a controversial activity with many lessons learned on all sides, but we’re proud to have supported the D.C. BOEE in a rare example of responsible open public testing that should be the model for any assessment of new election technology.

Open-Source Software License

  • Released the OSDV Public License, or OPL, the first open source license specifically designed to aid state and local governments in acquiring open-source technology.
  • Published the OPL Rationale document, explaining the goals of the OPL and the reasoning behind each element of the OPL as meeting government needs for software licensing.

Public Speaking and Education

As you can see from these highlights — the tip of the proverbial iceberg — 2010 was a busy year for us. And 2011 is shaping up to be even busier!


Tabulator Troubles in Colorado

More tabulator troubles! In addition to the continuing saga in New York with the tabulator troubles I wrote about earlier, now there is another tabulator-related situation in Colorado. The news report from Saguache County CO is about:

a Nov. 5 “retabulation” of votes cast in the Nov. 2 election Friday by Myers and staff, with results reversing the outcome …

In brief, the situation is exactly about the “tabulation” part of election management that I have been writing about. To recap:

  • In polling places, there are counting devices that count up votes from ballots, and spit out a list of vote-counts for each candidate in each contest, and each option in each referendum. This list is in the form of a vote-count dataset on some removable storage.
  • At county election HQ, there are counting devices that count up vote-by-mail ballots and provisional ballots, with the same kind of vote-counts.
  • At county election HQ, “tabulation” is the process of aggregating these vote-counts and adding them up to get county-wide vote totals.
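In code terms, the aggregation step above is simple arithmetic. Here is a minimal sketch in Python, assuming an invented dict-based dataset format (real systems use their own, often proprietary, formats):

```python
from collections import Counter

def tabulate(datasets):
    """Aggregate per-device vote-count datasets into county-wide totals.

    Each dataset maps (contest, choice) pairs to a vote count, as produced
    by one counting device.  The format here is purely illustrative.
    """
    totals = Counter()
    for counts in datasets:
        for (contest, choice), n in counts.items():
            totals[(contest, choice)] += n
    return dict(totals)

# One precinct's in-person counts plus the county's vote-by-mail counts:
precinct_1 = {("Mayor", "Smith"): 120, ("Mayor", "Jones"): 95}
mail_in    = {("Mayor", "Smith"): 40,  ("Mayor", "Jones"): 55}
print(tabulate([precinct_1, mail_in]))
# {('Mayor', 'Smith'): 160, ('Mayor', 'Jones'): 150}
```

The arithmetic is trivial; as the Saguache story shows, the hard part is everything around it: knowing which datasets to expect, and noticing when they are missing, duplicated, or corrupted.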

In Saguache, election officials did a tabulation run on election night, but the results didn’t look right. Then on the 5th, they did a re-run on the “same ballots,” but the results were different, and it appears to some observers that some vote totals may have been overwritten. Then, on the 8th, another re-try produced a result somewhat like the one in NY:

… the disc would not load and sent an error message

What this boils down to for me is that current voting system products’ Tabulators are not up to doing some seemingly simple tasks correctly when operated by ordinary election officials. I am sure they work right in testing situations that include vendor staff; but they must also work right in real life with real users. The tasks include:

  • Import an election definition that specifies how many counting devices are being used for each precinct, and how many vote-count datasets are expected from them.
  • Import a bunch of vote-count datasets.
  • Cross-check to make sure that all expected vote-count datasets are present, and that there are no unexpected ones.
  • Cross-check each vote-count dataset to make sure it is consistent with the election definition.
  • If everything cross-checks correctly, add up the counts to get totals, and generate some reports.
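The cross-checking steps above can be sketched in a few lines. This is illustrative Python with invented data shapes, not any vendor's actual logic:

```python
from collections import Counter

def cross_check(election_def, datasets):
    """Validate incoming vote-count datasets against the election definition
    before any totaling.

    election_def maps each precinct to the number of datasets expected from
    it; datasets is a list of (precinct, counts) pairs.  Returns a list of
    discrepancies; an empty list means everything checks out.
    """
    received = Counter(precinct for precinct, _ in datasets)
    errors = []
    # Every expected dataset must be present...
    for precinct, expected in election_def.items():
        if received[precinct] < expected:
            errors.append(f"missing dataset(s) for precinct {precinct}")
    # ...and none may be unexpected or duplicated.
    for precinct, got in received.items():
        if precinct not in election_def:
            errors.append(f"unexpected dataset from precinct {precinct}")
        elif got > election_def[precinct]:
            errors.append(f"too many datasets from precinct {precinct}")
    return errors
```

Only when `cross_check` returns an empty list would tabulation proceed to totaling and report generation; any discrepancy should halt the process and be shown plainly to officials and observers.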

That’s not exactly dirt-simple, but it also sounds to me like something that could be implemented in well-designed software that is easy for election officials to use, and easy for observers to understand. And that understanding is critical, because without it, observers may suspect that the election has been compromised, and some election results are wrong. That is a terrible outcome that any election official would work hard to avoid — but it appears that’s what is unfolding in Saguache. Stay tuned …


PS: Hats off to the Valley Courier‘s Teresa L. Benns for a really truly excellent news article! I have only touched on some of the issues she covered. Her article has some of the best plain-language explanation of complicated election stuff, that I have ever read. Please take a minute to at least scan her work. – ejs

Tabulator Technology Troubles

In my last post, I recounted an incident from Erie County NY, but deferred until today an account of the technology troubles that prevented the routine use of a Tabulator to create county-wide vote totals by combining count data from each of the opscan paper-ballot counting devices. The details are worth considering as a counter-example: technology that is not transparent, but should be.

As I understand the incident, it wasn’t the opscan counting systems that malfunctioned, but rather the portion of the voting system that tabulates the county-wide vote totals. As I described in an earlier post, the ES&S system has no tabulator per se, but rather some aggregation software that is part of the larger body of Election Management System (EMS) software running on an ordinary Windows PC. Each opscan device writes data to a USB stick, and election officials aggregate the data by feeding each stick into the EMS. The EMS is supposed to store all the data on the stick, and add up all the opscan machines’ vote counts into a vote total for each contest.

Last week, though, when Erie County officials tried to do so, the EMS rejected the data sticks. Election officials had no way to use the sticks to corroborate the vote totals that they had made by visually examining the election-night paper tapes from the 130 opscan devices. Sensible questions: Did the devices’ software err in writing the data to the sticks? If so, might the tapes be incorrect as well? Is the data still there? It turns out that the cause was a bug in the EMS software, not the devices, and in fact the data on the sticks was just fine. With a workaround on the EMS, the data was extracted from the sticks and used as planned. Further, the workaround did not require a bug fix to the software, which would have been illegal. Instead, some careful hand-crafting of EMS data enabled the software to stop choking on the data from the sticks.

Now, I am not feeling 100% great about the need for such hand-crafting, or indeed about the correctness of the totals produced by a voting system operating outside of its tested ordinary usage. But some canny readers are probably wondering about a simpler question. If the data was on the sticks, why not simply copy the files off the sticks using a typical PC, and examine the contents of the files directly? With 40-odd contests countywide and 100-odd sticks and paper tapes, it’s not that much work to just look at them to see whether the numbers on each stick match those on the tapes. Answer: the voting system software is set up to prevent direct examination, that’s why! The vote data can only be seen via the software in the EMS. And when that software glitches, you have to wonder about what you’re seeing.

This is at least one area where better software design can lead to a higher-confidence system: write-once media for storing each counting device’s tallies; public, standard data formats so that anyone can examine the data; human-usable formats so that anyone can understand the data; a separate, single-purpose tabulator device that operates autonomously from the rest of the voting system; and publication of the tally data and the tabulator’s output data, so that anyone can check the results either manually or with their choice of software. At least, that’s the TrustTheVote approach that we’re working out now.
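To make the “anyone can check” property concrete, here is a minimal sketch using plain JSON as a stand-in for a public standard format: each device’s tally is published as human-readable text, and any observer can independently re-total it with ordinary tools. The record layout is invented for illustration:

```python
import json

# One device's published tally (invented format): plain JSON that anyone
# can read with ordinary tools, no voting-system software required.
device_tally = json.dumps({"device": "opscan-042",
                           "Mayor": {"Smith": 120, "Jones": 95}})

def retotal(published_tallies):
    """Independently recompute contest totals from published tally records."""
    totals = {}
    for blob in published_tallies:
        record = json.loads(blob)
        for contest, counts in record.items():
            if contest == "device":   # metadata, not a contest
                continue
            for choice, n in counts.items():
                totals.setdefault(contest, {}).setdefault(choice, 0)
                totals[contest][choice] += n
    return totals
```

Because the inputs are public and the arithmetic is trivial, any observer can re-run this kind of check with software of their own choosing and compare the result against the official tabulator’s output.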