One of the signal failures of digital technology in recent years has been e-voting. Practically every high-profile attempt to switch from quaint analogue technologies to swish new digital ones has proved a complete and utter disaster.
But take a closer look at these failures and it becomes evident that the problem is not so much e-voting itself as the toxic combination of e-voting with black-box software.
The problem is quite simple. If you can't see what the software is doing by looking at the code, you can't possibly trust it. And e-voting without trust is about as useful as the proverbial chocolate teapot.
The solution is equally obvious: mandate open source solutions so that the code can be checked before use. That's all very well in theory, but what about the practice? Well, that's precisely what the Open Source Digital Voting Foundation is attempting to address:
We’re a meritocratic community of technology and policy geeks, developing open source guidelines, specifications, and prototypes of high assurance digital voting systems and services.
The results of our work are intended to be publicly vetted, peer-reviewed proposed draft standards for building, verifying, and using digital voting technology.
Here's an interesting section from their manifesto:
We believe the root cause is the basing of digital voting products on popular, widely available computing systems and software (familiar to many people in their personal and professional lives) and then building closed systems on top that are difficult to assess and easy to criticize.
The first corollary is that these systems appear to have unreliable software, that is, voters observe what appear to be software glitches (e.g., vote flipping, printer problems) and elections volunteers observe complex systems that are often beyond their ability to operate confidently.
For many voters and volunteers, these appearances are unsurprising. After all, they are accustomed to personal computers that are not simple to operate, and which sometimes behave in bewildering ways. While this lack of confidence is commonplace in consumer computing, it’s simply unacceptable for building public trust in digital voting.
The second corollary is that voting machines have a basic characteristic that is wholly undesirable for special-purpose systems like digital voting equipment: they are easy to access, modify, reconfigure, etc. Yet digital voting equipment must be prepared for a narrow and specific use – “cast in stone” – to perform only its exact function during a limited time period (i.e., before, during, and after Election Day), and without any alterations within that time frame. However, currently these systems are not – by and large – built that way. So if they are even potentially modifiable, it is very difficult to have confidence that a system will stay the same and always function correctly.
When apparent glitches or confusion occurs, it is equally difficult to determine whether it’s a result of a malfunctioning normal system, or a system malfunctioning because it was modified or prepared erroneously.
Both these corollaries converge in one more effect of the root cause: A system that can “misbehave” and can also be changed to create new misbehavior is also a system that could have security flaws or vulnerabilities that could be used to create such flaws.
Although technically enabled (and potentially widespread and subtle) election fraud is more a matter of current fiction or conspiracy theory, the factual basis for this possibility is frequently demonstrated by credible experts, and increasingly given wide currency in the media.