By Gregory Miller

The Gift Has Arrived: Our Exempt Status After 6 Years

Today marks a bit of a historic moment for us: we can publicly announce that the IRS has finally granted our tax-exempt status.

The digital age is wreaking havoc, however, on the PR and news processes.  In fact, we knew about this nearly 2 weeks ago, but due to a number of legal and procedural issues, and a story we were being interviewed for, we held off on making this important announcement.  And we’re still struggling to get it out on the wires (mostly due to a change of our PR agency at the most inopportune moment).

The result: WIRED Magazine actually got the jump on us (oh, the power of digital publishing), and now hours after their article posting, we’re finally getting our own press release to the world.

I have to observe that, notwithstanding a paper chase of near-epic proportions with the IRS in granting us what we know our charter deserves in order to foster the good work we intend, at the end of the day, 501(c)(3) status is a gift from the government.  And we cannot lose sight of that.

So, for the ultimate outcome we are deeply grateful; please make no mistake about that.  The ways and means of getting there were exhausting… emotionally, financially, and intellectually.  And I notice that the WIRED article makes a showcase of a remark I made, in one of the many interviews and exchanges leading up to that story, about being “angry.”

I am (or was) angry at the process because 6 years of asking and re-asking us many of the same questions, and performing what I humbly believe at some point amounted to intellectual navel gazing, was crazy.  I can’t help but feel like we were being bled.  I fear there are many other valuable public benefit efforts, involving intangible assets and striving for the ability to raise public funds to do public good, that are caught up in this same struggle.

What’s sad is that it took the guidance and expertise (and lots of money that could have been spent on delivering on our mission) of high-powered Washington, D.C. lawyers to negotiate this to a successful conclusion.  That’s sad because the vast majority of projects cannot afford to do that.  Had we not been so resolute in our determination, and willing to risk our own financial stability to see this through, the TrustTheVote Project would have withered and died in pursuit of our tax-exempt status over 6 years and 4 months.

Specifically, it took the expertise and experience of Caplin Drysdale lawyers Michael Durham and Marc Owen himself (who actually ran the IRS Tax Exempt Division for 10 years).  If you can find a way to afford them, you can do no better.

There is so much that could be shared about what it took and what we learned, from issues of technology licensing to nuances of what constitutes public benefit in terms of IRS regulations — not just what seems obvious.  Perhaps we’ll do so another time.  I note, for instance, that attorney Michael Durham was a computer science major and software engineer before becoming a tax lawyer.  I too have a similar combined background in computer science and intellectual property law, and it turned out to be hugely helpful to have this interdisciplinary view — just odd that such would be critical to a tax-exempt determination case.

However, in summary, I was taught at a very young age, and through several life lessons, that only patience and perseverance empower prevailing.  I guess it’s just the way I, and all of us on this project, are wired.

Cheers
GAM | out

If it Walks Like a Duck, and Quacks Like a Duck…

So, in the midst of participating in an amazing conference at MIT’s Media Lab, produced in conjunction with the Knight Foundation (thanks, Chris Barr, you rock!), today we learned of additional conduct by the IRS with regard to their Exempt Division’s handling of 1023 filings (for 501(c)(3) determination).  In particular, this revelation appeared in an NY Times article today:

But groups with no political inclinations were also examined. “‘Open source software’ organizations seeking nonprofit status are usually for-profit business or for-profit support technicians of the software,” a lookout list warns. “If you see a case, elevate it to your manager.”

Please let us go on record once again, here and now:

  1. The OSDV Foundation’s 1023 application (filed in February 2007) states, with sworn affidavits under penalty of perjury, that our charter, in Article II(A), defines the OSDV Foundation as an organization organized as a nonprofit public benefit corporation, not for the private gain of any person, organized under California Nonprofit Public Benefit Corporation Law for public and charitable purposes, with by-laws consistent with that charter.
  2. We are not a “for-profit business,” rather we are a non-profit project, conducting our activities as such, and never intend to be a commercial operation, or convert into a commercial entity.
  3. We are genuinely seeking, through (continued) philanthropic support of larger grantor organizations, as well as through the generous support of individual citizens, to provide education, research, and reference development of blueprints for trustworthy elections technology, on an open source basis for the benefit of elections jurisdictions nationwide because the current commercial market place is woefully falling short of that capability.
  4. We are willing to ensure our reference designs and implementations, some reduced to software source code, remain up-to-date with as-then-published Federal and/or State-specific criteria or specifications, but we will not do so as a commercial venture.
  5. We reiterate here that we are not “for-profit support technicians” for any software that results from the efforts of this California public benefit non-profit corporation.

As you might imagine, these statements have been backed by over 6-years of considerable supporting documentation, sworn to be accurate and correct to the best of our knowledge, under penalties of perjury and backed by sworn, signed affidavits.

And to be sure, we take our sworn signed statements and veracity very seriously.

We remain hopeful that the IRS will ultimately acknowledge these points and grant us our exempt determination.

TrustTheVote Project Earns Backing from Knight Foundation Prototype Fund

Greetings All-

Apologies for the extended radio silence.  I promise to one day be able to explain in some detail why that occasionally occurs, but for now I have to remain, um, silent on that point.  However, I am very happy to share with you that one of the additional reasons for being distracted from this forum has been work that resulted in today’s announcement.

Indeed, the OSDV Foundation’s TrustTheVote Project has earned a substantial grant from the Knight Foundation’s Prototype Fund.  This was a favorable consequence of being a near-bridesmaid in the Knight News Challenge, in which we competed earlier this spring.  While we did not make the final cut for the major grant program, the Knight Foundation was sufficiently excited about our proposal for open-data-standards-based election night reporting services that they awarded us a Prototype Grant.

You can learn more about our project here.  Below are the metes and bounds of this project; we will share a little about the what, how, why, and when in subsequent posts.

In a sentence our project is:
Building an open source election night results reporting service tying directly into local and State elections data feeds (for which the TrustTheVote Project has already helped establish the required standards), with a public-facing web app, and a robust API to enable anyone to access reporting data for further analysis and presentation.

Some Details
So, essentially, the Knight Foundation’s Prototype Fund is designed to provide a “seed grant” to enable a prototype or “early alpha” of an app, service, or system that advances the causes of civic media and citizen engagement with news and information.  Our Election Night Reporting Service is a perfect fit.  This 6-month project is intended to finish the development and deployment stages for an evaluation/test run of the system.  I need to point out that it will definitely be a prototype and will not include some components necessary to put the system into production, but it will include enough framework and scaffolding to conduct a robust “alpha test” for which 3 or 4 elections jurisdictions have agreed to participate.

We will announce those jurisdictions soon.  The test will utilize an early release of the Results Scoreboard — a web-based app/service to display elections results.  The alpha will also deliver an API and data feed service.
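To make the flavor of that API concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and data shapes — the real service will follow the open data standards mentioned above) of the kind of roll-up an election night reporting service performs over precinct-level feeds:

```python
# Hypothetical sketch: aggregate precinct-level result feeds into
# contest-wide totals, the kind of summary an election night
# reporting API might serve. Field names are illustrative only.

from collections import defaultdict

def aggregate_results(precinct_feeds):
    """Roll precinct-level vote counts up into per-contest totals."""
    totals = defaultdict(lambda: defaultdict(int))
    for feed in precinct_feeds:
        for contest, counts in feed["contests"].items():
            for candidate, votes in counts.items():
                totals[contest][candidate] += votes
    return {contest: dict(counts) for contest, counts in totals.items()}

feeds = [
    {"precinct": "101", "contests": {"Mayor": {"Ada": 120, "Bo": 95}}},
    {"precinct": "102", "contests": {"Mayor": {"Ada": 80, "Bo": 110}}},
]
print(aggregate_results(feeds))
# {'Mayor': {'Ada': 200, 'Bo': 205}}
```

The point of exposing this through an API, rather than only a web page, is that anyone can re-aggregate or re-present the same feed data for their own analysis.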

In our next post, we will discuss some details about the project in terms of the what, how, and why.  But let me say quickly that the name has some legacy meaning, because it’s not just about election night — it’s about election reporting any time.  So, stay tuned!

I’d like to thank the OSDVF Board and TTV Project Advisers for their tremendous support in working closely with us on the Knight News Challenge application, and the Core Team and our CTO John Sebes for hammering out sufficient details originating in our work with Travis County, TX a couple of years ago.  Without their contributions — many in the 11th hour and into the pre-dawn hours last March — this would not have been possible.

Onward!

Crowd Sourcing Polling Place Wait Times – Part 2

Last time, we wrote about the idea of a voter information service where people could crowd source the data about polling place wait times, so that other voters would benefit by not going when the lines are getting long, and so that news media and others could get a broad view of how well or poorly a county or state was doing in terms of voting time.

And as we observed, that would be a fine idea, but the results from that crowd-sourced reporting would be far better if the reporting were not on the “honor system.”  Without going on a security and privacy rampage, it would be better if this idea were implemented using some existing model for people to do mobile-computing voter-stuff, in a way that is not trivial to abuse, unlike the honor system.

Now, back to the good news we mentioned previously: there is an existing model we could use to limit the opportunity for abuse.  You see, many U.S. voters, in a growing number of States, already have the ability to sit in a café and use their smart phone and a web site to identify themselves sufficiently to see their full voter record, and in some cases even update part of that voter record.

So, the idea is: why not extend that with a little extra record keeping of when a voter reports that they have arrived at the polls, and when they said they were done? In fact, it need not even be an extension of existing online voter services, and could be done in places that are currently without online voter services altogether.  It could even be the first online voter service in those places.

The key here is that voters “sufficiently identify themselves” through some existing process, and that identification has to be based on an existing voter record.  In complex online voter services (like paperless online voter registration), that involves a 3-way real-time connection between the voter’s digital device, the front-end web server it talks to, and a privileged and controlled connection from the front-end to obtain specific voter data from the back-end.  But in a service like this, it could be even simpler, with a system that’s based on a copy of the voter record data, indeed, just that part that the voter needs to use to “identify themselves sufficiently.”
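A minimal sketch of that simpler approach might look like the following (the choice of matching fields here is purely hypothetical; each State’s election laws dictate what actually constitutes sufficient identification):

```python
# Illustrative sketch only: check a voter's self-reported details
# against a minimal copy of the voter roll -- just the fields needed
# to "identify themselves sufficiently." The fields chosen (last name,
# date of birth, ZIP) are an assumption for illustration.

MINIMAL_ROLL = {
    # keyed by (last name, date of birth, ZIP code)
    ("Rivera", "1980-04-12", "94110"): "voter-0042",
}

def identify_voter(last_name, dob, zip_code):
    """Return a voter ID if the supplied details match a record, else None."""
    return MINIMAL_ROLL.get((last_name, dob, zip_code))

assert identify_voter("Rivera", "1980-04-12", "94110") == "voter-0042"
assert identify_voter("Rivera", "1980-04-12", "99999") is None
```

Note that the service only ever needs this stripped-down copy, never the full voter database, which is part of what keeps it simple.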

Well, let’s not get ahead of ourselves.  The fact is, State and/or local elections officials generally manage the voter database.  And our stakeholders inform us it’s still likely these jurisdictions would want to operate this service in order to retain control of the data, and to control the ways and means of “sufficient identity” so as to be consistent with election laws, current usage practices, and other factors.  On the other hand, a polling place traffic monitor service can be a completely standalone system – a better solution, we think, and more likely to be tried by everyone.

OK, that’s enough for the reasonably controlled and accurate crowd-source reporting of wait times. What about the benefits from it – the visibility on wait times?  As is almost always the case in transparent, open government computing these days, there are two parallel answers.

The first answer is that the same system that voters report into, could also provide the aggregated information to the public.  For example, using a web app, one could type in their street address (or get some help in selecting it, akin to our Digital Poll Book), and see the wait time info for a given precinct.  They could also view a list of the top-5 shortest current wait times and bottom-5 longest wait times of the precincts in their county, and see where their precinct sits in that ranking.  They could also study graphs of moving averages of wait times – well, you can ideate for yourself.  It’s really a question of what kind of information regular voters would actually value, and that local election officials would want to show.

The second answer is that this system must provide a web services API so that “other systems” can query this wait-time-reporting service.  These other systems should be able to get any slice of the raw data, or the whole thing, up to the minute.  Then they could do whatever visualization, reporting, or other services thought up by the clever people operating this other system.
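As a sketch of the kind of query either channel might answer (all names and data shapes here are hypothetical, not a description of any actual service), here is how a shortest/longest wait-time ranking could be computed from current precinct reports:

```python
# Hypothetical sketch: rank precincts by current reported wait time,
# the kind of "top-5 shortest / bottom-5 longest" view described above.
# The data shape (precinct id -> wait in minutes) is illustrative.

def rank_wait_times(current_waits, n=5):
    """Return (n shortest, n longest) precincts by current wait minutes."""
    ranked = sorted(current_waits.items(), key=lambda kv: kv[1])
    return ranked[:n], ranked[-n:][::-1]

waits = {"P-1": 12, "P-2": 45, "P-3": 5, "P-4": 90, "P-5": 30, "P-6": 8}
shortest, longest = rank_wait_times(waits, n=3)
print(shortest)  # [('P-3', 5), ('P-6', 8), ('P-1', 12)]
print(longest)   # [('P-4', 90), ('P-2', 45), ('P-5', 30)]
```

The same raw data behind this ranking is what the API would hand to “other systems” for their own visualizations.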

For me, I’d like an app on my phone that pings like my calendar reminders, that I set to ping myself after 9am (no voting without adequate caffeine!) but before 3pm (high school lets out and street traffic becomes a sh*t show ;-)); but oh, when the waiting time is <10 minutes.  I’d also like something that tells me if/when turn-out in my precinct (or my county, or some geographic slice) tips over 50% of non-absentee voters.  And you can imagine others.  But the main point is that we do not expect our State or local election officials to deliver that to us.  We do hope that they can deliver the basics, including that API so that others can do cool stuff with the data.

Actually, it’s an important division of labor.

Government organizations have the data and need to “get the data out” both in raw form via an API, and in some form useful for individual voters’ practical needs on Election Day.  Then other organizations or individuals can use that API with their different vision and innovation to put that data to a range of additional good uses.  That’s our view.

So, in our situation at the TrustTheVote Project, it’s actually really possible.  We already have the pieces: [1] the whole technology framework for online voter services, based on existing legacy databases; [2] the web and mobile computing technology framework with web services and APIs; [3] existing voter services that are worked examples of how to use these frameworks; and [4] some leading election officials who are already committed to using all these pieces, in real life, to help real voters.  This “voting wait-time tracker” system we call PollMon is actually one of the simplest examples of this type of computing.

We’re ready to build one.  And we told the Knight News Challenge so.  We say, let’s do this.  Wanna help?  Let us know.  We’ve already had some rockin good ideas and some important suggestions.

GAM | out

Crowd Sourcing Polling Place Wait Times

Long lines at the polling place are becoming a thorn in our democracy.

We realized a few months ago that our elections technology framework data layer could provide information that when combined with community-based information gathering might lessen the discomfort of that thorn.  Actually, that realization happened while hearing friends extol the virtues of Waze.  Simply enough, the idea was crowd-sourcing wait information to at least gain some insight on how busy a polling place might be at the time one wants to go cast their ballot.

Well, to be sure, lots of people are noodling around lots of good ideas and there is certainly no shortage of discussion on the topic of polling place performance.  And, we’re all aware that the President has taken issue with it and after a couple of mentions in speeches, created the Bauer-Ginsberg Commission.  So, it seems reasonable to assume this idea of engaging some self-reporting isn’t entirely novel.

After all, it’s kewl to imagine being able to tell – in real time – what the current wait time at the polling place is, so a voter can avoid the crowds, or a news organization can track the hot spots of long lines.  We do some “ideating” below, but first I offer three observations from our noodling:

  • It really is a good idea; but
  • There’s a large lemon in it; yet
  • We have the recipe for some decent lemonade.

Here’s the Ideation Part

Wouldn’t it be great if everybody could use an app on their smarty phone to say, “Hi all, it’s me, I just arrived at my polling place, the line looks a bit long,” and then later, “Me again, OK, just finished voting, and geesh, like 90 minutes from start to finish… not so good,” or “Me again, I’m bailing.  Need to get to the airport.”

And wouldn’t it be great if all that input from every voter was gathered in the cloud somehow, so I could look-up my polling place, see the wait time, the trend line of wait times, the percentage of my precinct’s non-absentee voters who already voted, and other helpful stuff?  And wouldn’t it be interesting if the news media could show a real time view across a whole county or State?

Well, if you’re reading this, I bet you agree, “Yes, yes it would.”  Sure.  Except for one thing.  To be really useful it would have to be accurate.  And if there is a question about accuracy (ah shoot, ya know where this is going, don-cha?), then yes, there is always that Grinch called “abuse.”

Sigh. We know from recent big elections that apparently, partisan organizations are sometimes willing to spend lots of money on billboard ads, spam campaigns, robo-calls, and so on, to actually try to discourage people from going to the polls, within targeted locales and/or demographics. So, we could expect this great idea, in some cases, to fall afoul of similar abuse.  And that’s the fat lemon.

But, please read on.

Now, we can imagine some frequent readers spinning up to accuse us of wanting everything to be perfectly secure, of letting the best be the enemy of the good, and noting that nothing will ever be accomplished if first every objection must be overcome. On other days, they might be right, but not so much today.

We don’t believe this polling place traffic monitoring service idea requires the invention of some new security, or integrity, or privacy stuff.  On the other hand, relying on the honor system is probably not right either.  Instead, we think that in real life something like this would have a much better chance of launch and sustained benefit, if it were based on some existing model of voters doing mobile computing in responsible way that’s not trivial to abuse like the honor system.

And that leads us to the good news – you see, we have such an existing model, in real life.  That’s the new ingredient, along with that lemon above and a little innovative sugar, for the lemonade that I mentioned.

Stay tuned for Part 2, and while waiting you might glance at this.

For (Digital) Poll Books — Custody Matters!

Today, I am presenting at the annual Elections Verification Conference in Atlanta, GA, and my panel is discussing the good, the bad, and the ugly about the digital poll book (often referred to as the “e-pollbook”).  For our casual readers, the digital poll book or “DPB” is—as you might assume—a digital relative of the paper poll book… that pile of print-outs containing the names of registered voters for a given precinct wherein they are registered to vote.

For our domain savvy reader, the issues to be discussed today are on the application, sometimes overloaded application, of DPBs and their related issues of reliability, security and verifiability.  So as I head into this, I wanted to echo some thoughts here about DPBs as we are addressing them at the TrustTheVote Project.

We’ve been hearing much lately about State and local election officials’ appetite (or infatuation) for digital poll books.  We’ve been discussing various models and requirements (or objectives), while developing the core of the TrustTheVote Digital Poll Book.  But in several of these discussions, we’ve noticed that only two out of three basic purposes of poll books of any type (paper or digital, online or offline) seem to be well understood.  And we think the gap shows why physical custody is so important—especially so for digital poll books.

The first two obvious purposes of a poll book are to [1] check in a voter as a prerequisite to obtaining a ballot, and [2] to prevent a voter from having a second go at checking-in and obtaining a ballot.  That’s fine for meeting the “Eligibility” and “Non-duplication” requirements for in-person voting.

But then there is the increasingly popular absentee voting, where the role of poll books seems less well understood.  In our humble opinion, those in-person polling-place poll books are also critical for absentee and provisional voting.  Bear in mind, those “delayed-cast” ballots can’t be evaluated until after the post-election poll-book-intake process is complete.

To explain why, let’s consider one fairly typical approach to absentee evaluation.  The poll book intake process results in an update to the voter record of every voter who voted in person.  Then, the voter record system is used as one part of absentee and provisional ballot processing.  Before each ballot may be separated from its affidavit, the reviewer must check the voter identity on the affidavit, and then find the corresponding voter record.  If the voter record indicates that the voter cast their ballot in person, then the absentee or provisional ballot must not be counted.
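That evaluation rule can be sketched in a few lines (a hypothetical illustration only; the field names and record structure are assumptions, not any real voter record system’s schema):

```python
# Sketch of the absentee-evaluation rule described above: after the
# poll book intake updates each voter's record, an absentee or
# provisional ballot may be counted only if its voter did NOT already
# vote in person. Record fields here are illustrative.

voter_records = {
    "voter-001": {"voted_in_person": True},
    "voter-002": {"voted_in_person": False},
}

def may_count_absentee(voter_id):
    """Decide whether an absentee/provisional ballot may be counted."""
    record = voter_records.get(voter_id)
    if record is None:
        return False  # no matching voter record -> ballot not counted
    return not record["voted_in_person"]

assert may_count_absentee("voter-001") is False  # already voted in person
assert may_count_absentee("voter-002") is True
```

Simple as the rule is, it only works if the poll-book intake data feeding `voted_in_person` is trustworthy, which is exactly the custody point below.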

So far, that’s a story about poll books that should be fairly well understood, but there is an interesting twist when it comes to digital poll books (DPBs).

The general principle for DPB operation is that it should follow the process used with paper poll books (though other useful features may be added).  With paper poll books, both the medium (paper) and the message (who voted) are inseparable, and remain in the custody of election staff (LEOs and volunteers) throughout the entire life cycle of the poll book.

With the DPB, however, things are trickier.  The medium (e.g., a tablet computer) and the message (the data that’s managed by the tablet, and that represents who voted) can be separated, although they should not be.

Why not? Well, we can hope that the medium remains in the appropriate physical custody, just as paper poll books do. But if the message (the data) leaves the tablet, and/or becomes accessible to others, then we have potential problems with accuracy of the message.  It’s essential that the DPB data remain under the control of election staff, and that the data gathered during the DPB intake process is exactly the data that election staff recorded in the polling place.  Otherwise, double voting may be possible, or some valid absentee or provisional ballots may be erroneously rejected.  Similarly, the poll book data used in the polling place must be exactly as previously prepared, or legitimate voters might be barred.

That’s why digital poll books must be carefully designed for use by election staff in a way that doesn’t endanger the integrity of the data.  And this is an example of the devil in the details that’s so common for innovative election technology.

Those devilish details derail some nifty ideas, like one we heard of recently: a simple and inexpensive iPad app that provides the digital poll book UI based on poll book data downloaded (via 4G wireless network) from “cloud storage” where an election official previously put it in a simple CSV file; and where the end-of-day poll book data was put back into the cloud storage for later download by election officials.

Marvelous simplicity, right?  Oh heck, I’m sure some grant-funded project could build that right away.  But it turns out that is wholly unacceptable in terms of chain of custody of the data that accurate vote counts depend on.  You wouldn’t put the actual vote data in the cloud that way, and poll book data is no less critical to election integrity.

A Side Note:  This is also an example of the challenge we often face from well-intentioned innovators of the digital democracy movement who insist that we’re making a mountain out of a molehill in our efforts.  They argue that this stuff is way easier and ripe for all of the “kewl” digital innovations at our fingertips today.  Sure, there are plenty of very well designed innovations and combinations of ubiquitous technology that have driven the social web and now the emerging utility web.  And we’re leveraging and designing around elements that make sense here—for instance, the powerful new touch interfaces driving today’s mobile digital devices.  But there is far more to it than a sexy interface with a 4G connection.  Oops, I digress into a tangential gripe.

This nifty example of well-intentioned innovation illustrates why the majority of technology work in a digital poll book solution is actually in [1] the data integration (to and from the voter record system); [2] the data management (to and from each individual digital poll book), and [3] the data integrity (maintaining the same control present in paper poll books).

Without a doubt, the voter’s user experience, as well as the election poll worker or official’s user experience, is very important (note pic above)—and we’re gathering plenty of requirements and feedback based on our current work.  But before the TTV Digital Poll Book is fully baked, we need to do equal justice to those devilish details, in ways that meet the varying requirements of various States and localities.

Thoughts? Your ball (er, ballot?)
GAM | out

The 2013 Annual Elections Verification Conference Opens Tonight

If it’s Wednesday 13 March, it must be Atlanta.  And that means the opening evening reception for the Elections Verification Network’s 2013 Annual Conference.  We’re high on this gathering of elections officials, experts, academicians, and advocates because it represents a unique interdisciplinary collaboration of technologists, policy wonks, legal experts, and even politicians, all with a common goal: trustworthy elections.

The OSDV Foundation is proud to be a major sponsor of this event.  We do so because it is precisely these kinds of forums where discussions about innovation in HOW America votes take place and it represents a rich opportunity for collaboration, debate, education, and sharing.  We always learn much and share our own research and development efforts as directed by our stakeholders — those State and local elections officials who are the beneficiaries of our charitable work to bring increased accuracy, transparency, verification, and security (i.e., the 4 pillars of trustworthiness) to elections technology reform through education, research and development for elections technology innovation.

Below are my opening remarks, to be delivered this evening or tomorrow morning at the pleasure of the Planning Committee, depending on how they slot the major sponsors’ opportunities to address the attendees.  There are 3 points we wanted to get across in the opening remarks: [1] why we support the EVN; [2] why there is a growing energy around increased election verification efforts; and [3] how the EVN can drive that movement forward…

Greetings Attendees!

On behalf of the EVN Planning Committee and the Open Source Digital Voting Foundation I want to welcome everyone to the 2013 Elections Verification Network Annual Conference.  As a major conference supporter, the Planning Committee asked if I, on behalf of the OSDV Foundation, would take 3 minutes to share 3 things with you:

  • 1st, why the Foundation decided to help underwrite this Conference;
  • 2nd, why we believe there is a growing energy and excitement around election verification; and
  • 3rd, how the EVN can bring significant value to this growing movement

So, we decided to make a major commitment to underwriting and participating in this conference for two reasons:

  1. We want to strengthen the work of this diverse group of stakeholders and do all that we can to fortify this gathering to make it the premier event of its kind; and
  2. The work of the EVN is vital to our own mission because there are 4 pillars to trustworthy elections: Accuracy, Transparency, Verification, and Security, and the goals and objectives of these four elements require enormous input from all stakeholders.  The time to raise awareness, increase visibility, and catalyze participation is now, more than ever.  Which leads to my point about the movement.

We believe the new energy and excitement being felt around election verification is due primarily to 4 developments which, when viewed in the aggregate, illustrate an emerging movement.  Let’s consider them quickly:

  1. First, we’re witnessing an increasing number of elections officials considering “forklift upgrades” in their elections systems, which are driving public-government partnerships to explore and ideate on real innovation – the Travis County Star Project and the LA County’s VSAP come to mind as two showcase examples, which are, in turn, catalyzing downstream activities in smaller jurisdictions;
  2. The FOCE conference in CA, backed by the James Irvine Foundation was a public coming out of sorts to convene technologists, policy experts, and advocates in a collaborative fashion;
  3. The recent NIST Conferences have also raised the profile as a convener of all stakeholders in an interdisciplinary fashion; and finally,
  4. The President’s recent SOTU speech and the resulting Bauer-Ginsberg Commission arguably will provide the highest level of visibility to date on the topic of improving access to voting.  And this plays into EVN’s goals and objectives for elections verification.  You see, while on its face the visible driver is fair access to the ballot, the underlying aspect soon to become visible is the reliability, security, and verifiability of the processes that make fair access possible.  And that leads to my final point this morning:

The EVN can bring significant value to this increased energy, excitement, and resulting movement if we can catalyze a cross pollination of ideas and rapidly increase awareness across the country.  In fact, we spend lots of time talking amongst ourselves.  It’s time to spread the word.  This is critical because while elections are highly decentralized, there are common principles that must be woven into the fabric of every process in every jurisdiction.  That said, we think spreading the word requires 3 objectives:

  1. Maintaining intellectual honesty when discussing the complicated cocktail of technology, policy, and politics;
  2. Sustaining a balanced approach of guarded optimism with an embracing of the potential for innovation; and
  3. Encouraging a breadth of problem awareness, possible solutions, and pragmatism in their application, because one size will never fit all.

So, welcome again, and let’s make the 2013 EVN Conference a change agent for raising awareness, increasing knowledge, and catalyzing a nationwide movement to adopt the agenda of elections verification.

Thanks again, and best wishes for a productive couple of days.

Election Tech “R” Us – and Interesting Related IP News

Good Evening–

On this election night, I can’t resist pointing out the irony of the USPTO’s news of the day for Election Day earlier: “Patenting Your Vote,” a nice little article about patents on voting technology.  It’s also a nice complement to our recent posting on the other form of intellectual property protection on election technology — trade secrets.  In fact, there is some interesting news of the day about how intellectual property protections won’t (as some feared) inhibit the use of election technology in Florida.

For recent readers, let’s be clear again about what election technology is, and our mission. Election technology is any form of computing — “software ‘n’ stuff” — used by election officials to carry out their administrative duties (like voter registration databases), or by voters to cast a ballot (like an opscan machine for recording votes off of a paper ballot), or by election officials to prepare for an election (like defining ballots), or to conduct an election (like scanning absentee ballots), or to inform the public (like election results reporting). That covers a lot of ground for “election technology.”

With that definition in hand, it’s reasonable to say that “Election Technology ‘R’ Us” is what the TrustTheVote Project is about, and why the OSDV Foundation exists to support it.  And about intellectual property protection?  I think we’re clear on the pros and cons:

  • CON: trade secrets and software licenses that protect them. These create “black box” for-profit election technology that seems to decrease rather than increase public confidence.
  • PRO: open source software licenses. These enable government organizations to [A] adopt election technology with a well-defined legal framework, without which the adoption cannot happen; and [B] enjoy the fruits of the perpetual harvest made possible by virtue of open source efforts.
  • PRO: patent applications on election technology.  As in today’s news, the USPTO can make clear which aspects of voting technology can or can’t be protected with patents that could inhibit election officials from using the technology, or require them to pay licensing fees.
  • ZERO SUM: granted patents on techniques or business processes (used in election administration or the conduct of elections) in favor of for-profit companies.  Downside: can increase costs of election technology adoption by governments. Upside: if the companies do have something innovative, they are entitled to I.P. protection, and it may motivate investment in innovation.  Downside: we haven’t actually seen much innovation by voting system product vendors, or contract software development organizations used by election administration organizations.
  • PRO: granted patents to non-profit organizations.  To the extent that there are innovations that non-profits come up with, patents can be used to protect the innovations so that for-profits can’t nab the I.P., and charge license fees back to governments running open source software that embodies the innovations.

All that stated, the practical upshot as of today seems to be this: there isn’t much innovation in election technology, and that may be why for-profits try to use trade secret protection rather than patents.

That underscores our practical view at the TrustTheVote Project: a lot of election technology isn’t actually hard; rather, it is simply detailed and burdensome to get right — a burden beyond the scope of all but a few do-it-ourselves elections offices’ I.T. groups.

That’s why our “Election Technology ‘R’ Us” role is to understand what the real election officials actually need, and then to (please pardon me) “Git ‘er done.”

What we’re “getting done” is the derivation of blueprints and reference implementations of an elections technology framework that can be adopted, adapted, and deployed by any jurisdiction, with common open data formats, processes, and verification and accountability loops designed in from the get-go.  This derivation is based on the collective input of elections experts nationwide, from every jurisdiction and every political process point of view.  And the real beauty: whereas no single jurisdiction could ever afford (in terms of resources, time, or money) to achieve this on its own, by virtue of the collective effort they all can, because everyone benefits — not just from the initial outcomes, but from the ongoing improvements and innovations contributed by all.
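To make the idea of a common open data format concrete, here is a purely illustrative sketch — not the TrustTheVote Project’s actual schema, and every field name here is hypothetical — of what a jurisdiction-neutral results record might look like, along with the kind of built-in accountability check such a format makes possible:

```python
import json

# Hypothetical, illustrative record shape only -- NOT the TrustTheVote
# Project's actual data format.  The point is that every jurisdiction
# emits the same fields, so results can be aggregated and verified
# with one common set of tools.
precinct_result = {
    "jurisdiction": "Example County, ST",
    "precinct_id": "P-0042",
    "contest": "County Clerk",
    "ballots_cast": 1180,
    "tallies": [
        {"candidate": "A. Smith", "votes": 612},
        {"candidate": "B. Jones", "votes": 545},
    ],
    "write_ins": 23,
}

def tallies_consistent(record):
    """Accountability check: recorded votes cannot exceed ballots cast."""
    counted = sum(t["votes"] for t in record["tallies"]) + record["write_ins"]
    return counted <= record["ballots_cast"]

# Because the format is common, the same check runs unchanged against
# any precinct's record, from any jurisdiction.
print(tallies_consistent(precinct_result))
print(json.dumps(precinct_result, indent=2)[:40])
```

The design point is that verification logic lives once, in shared open source code, rather than being re-implemented (or omitted) by each of thousands of election offices.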

We believe (and so do the many who support this effort) that the public benefit is obvious and enormous: from every citizen who deserves to have their ballot counted as cast, to every local election official who must have an elections management service layer that is fault tolerant, transparent, secure, and verifiable.

From what we’ve been told, this certainly lifts a load of responsibility off the shoulders of elections officials and allows it to be more comfortably distributed.  But what’s more, regardless of how our efforts may lighten their burden, the enlightenment that comes from this clearinghouse effect is of enormous benefit to everyone by itself.

So, at the end of the day, what we all benefit from is a way forward for publicly owned critical democracy infrastructure — that “thing” in our process of democracy that causes long lines and insecurities, which the President noted we need to fix during his victory speech tonight.  Sure, it’s about a lot of process.  But where there will inevitably be technology involved, well, that would be the TrustTheVote Project.

GAM|out

Do Trade Secrets Hinder Verifiable Elections? (Duh)

Slate Magazine posted an article this week which, in sum and substance, suggests that trade secret law makes it impossible to independently verify that voting machines are working correctly.  In short, we say, “Really?  And is this a recent revelation?”

Of course, those who have followed the TrustTheVote Project know that we’ve been suggesting this in so many words for years.  I appreciate that author David Levine refers to elections technology as “critical infrastructure.”  We’ve been suggesting the concept of “critical democracy infrastructure” for years.

To be sure, I’m gratified to see this article appear, particularly as we head to what appears to be the closest presidential election since 2000.  The article is totally worth a read, but here is an excerpt worth highlighting from Levine’s essay:

The risk of the theft (known in trade secret parlance as misappropriation) of trade secrets—generally defined as information that derives economic value from not being known by competitors, like the formula for Coca-Cola—is a serious issue. But should the “special sauce” found in voting machines really be treated the same way as Coca-Cola’s recipe? Do we want the source code that tells the machine how to register, count, and tabulate votes to be a trade secret such that the public cannot verify that an election has been conducted accurately and fairly without resorting to (ironically) paper verification? Can we trust the private vendors when they assure us that the votes will be assigned to the right candidate and won’t be double-counted or simply disappear, and that the machines can’t be hacked?

Well, we all know (as he concludes) that all of the above have either been demonstrated to be a risk or have actually transpired.  The challenge is that the otherwise legitimate use of trade secret law ensures that the public has no way to independently verify that voting machinery is properly functioning, as was discussed in this Scientific American article from last January (also cited by Levine).

Of course, what Levine is apparently not aware of (probably our bad) is that there is an alternative approach on the horizon,  regardless of whether the government ever determines a way to “change the rules” for commercial vendors of proprietary voting technology with regard to ensuring independent verifiability.

As a recovering IP lawyer, I’ll add one more thing we’ve discussed within the TrustTheVote Project and the Foundation for years: this is a reason that patents — including business method patents — are arguably helpful.  Patents are about disclosure and publication; trade secrets are, by definition, not.  To be sure, a patent alone would not be sufficient, because within the intricacies of a patent prosecution there is an allowance that requires only partial disclosure of software source code.  Of course, “partial disclosure” must meet a test of sufficiency for one “reasonably skilled in the art” to “independently produce the subject matter of the invention.”  And therein lies the wonderfully mushy ground on which to argue a host of issues if put to the test.  But ironically, the intention of partial code disclosure is to protect trade secrets while still facilitating a patent prosecution.

That aside, I also note that, in the face of all the nonsense floating about in the blogosphere and mainstream media — whether charges that Romney’s ownership interest in voting machinery companies is a pathway to steal an election, or suggestions that a Soros-backed, Spanish-based voting technology company conspired to deliver tampered tallies — Levine’s article is a breath of fresh air that deserves the attention being ridiculously lavished on these latest urban myths.

Strap in… T-12 days.  I fear a nail-biter from all viewpoints.

GAM|out

Movement to Bring Open Source to Government Being Reorganized

Greetings-

Just a quick post to suggest an interesting report out this afternoon on the TechPresident blog.  The move to consolidate the efforts of Civic Commons (home of Open311.org) and Code For America (CfA), notwithstanding the likely trigger being Civic Commons’ leader, Nick Grossman, moving on, actually makes sense to us.  CfA’s Jennifer Pahlka‘s write-up is here.

Recently in a presentation, I was asked where our work fits into the whole Gov 2.0 movement.  It seems to us that we are probably a foundational catalyst to the movement; related, but only tangentially.  To be sure, we share principles of accuracy, transparency, verification, and security in government information (ours being elections information).  But Gov 2.0 (and its thought leaders such as CfA) is a considerably different effort from ours at the TrustTheVote Project.  That’s mainly because the backbone of the Civic Commons, Open311.org, and CfA efforts is Web 2.0 technology (read: the social web and related mash-up tools).  There is nothing wrong with that; in fact, it’s downright essential for transparency.

But to keep the apples in their crate and the oranges elsewhere, our work is about a far heavier lifting exercise.  Rather than liberating legacy government data stores to deliver enlightened public information sites, or to shed sunlight on government operations, we’re building an entirely new open source elections technology stack from the OS kernel up through the app layer, with particular emphasis on an open standards common data format (more news on that in coming posts).

Ours is about serious fault-tolerant software architecture, design, and engineering: stuff built in C++ and Objective-C, even dropping down to the machine level (potentially as far as firmware, if necessary), but also, at the app layer, higher-level programming tools, including frameworks like Rails and UX/UI delivery vehicles like HTML5 and AJAX (to the extent of browser-based or iOS5-based applications).

And that point is the segue to my closing comment: the Gov 2.0 movement is smartly delivering government information via the web, the social web in particular.  That’s huge.  By contrast, remember that a good portion of our work is focused on purpose-built, application-specific devices: optical scanners to “read” ballots, devices to mark a ballot for printing and processing, and mobile tablets to serve as digital poll books.  Sure, the web is involved in some voter-facing services in our framework, like voter registration.  But unlike the Gov 2.0 effort, we have no plans to leverage the web or the Internet in general for anything (save blank ballot delivery or voter registration updates).

So by contrast, we’re in the rough, while Code for America is on the putting green.  Either way, have a look at the TechPresident article today.

Cheers,

GAM|out