Re: Microsoft-backed Consortium, AeA, Opposing Open Voting Bill, AB 852

From: Edward Cherlin <echerlin_at_gmail_dot_com>
Date: Thu Apr 26 2007 - 21:04:04 CDT

On 4/25/07, Hamilton Richards <hrichrds@swbell.net> wrote:
> At 5:38 PM -0700 2007/4/13, OVC Announce wrote:
> >
> >
> >Dear Friends of Open Voting:
> >
> >The American Electronics Association (AeA) has come out against AB
> >852 [1], the bill carried by California Assemblymember Paul
> >Krekorian that would require full public disclosure of voting system
> >technology upon state certification. A Microsoft lobbyist has also
> >been working the state Capitol against us. We've heard that
> >Microsoft is also turning up the heat in Washington D.C. in
> >opposition to language in federal bills that would require voting
> >technology disclosure.
> >
> >[...]
> >
> >Let your legislators know that you want them to stand up to industry
> >opposition. The people have a right to know how their votes are
> >counted!

I like it.

> Of course the people have a right to know how their votes are
> counted, but let's pause for a moment to consider whether it's
> worthwhile to devote a lot of attention, energy, and political
> capital to the campaign for full public disclosure of voting
> technology. There's good reason to doubt that public disclosure would
> actually help the people to "know how their votes are counted".

I will respectfully disagree, below. All of Hamilton's observations,
while true in their place, contribute little or nothing to this
discussion, because they are answers to the wrong questions.

> The primary reason that disclosure's benefits are doubtful is this
> bit of conventional computer-science wisdom:
>
> Inspecting and testing software can show that it is capable of
> operating correctly, but can never show that it is incapable
> of operating incorrectly.

Inspecting can often show that software is capable of operating
incorrectly, or of being suborned. I don't expect perfection from this
bill, just massive improvement.

SS officer: "Can you give us a 100% guarantee that there will be no
prisoner escapes?"
Prison camp officer: "Only amateurs talk of 100% security. We of the
profession are not accustomed to speak in such inexactitudes."
SSO: "Which profession is that?"
PCO: "The insurance profession."
Colditz, BBC-TV

> What matters in elections is whether e-voting software can be relied
> on to count

Record, rather. Counting is a separate issue.

> votes correctly, and that question can never be answered
> by inspection or testing.

What matters in elections is that you can find out who messed up when
machines fail to record and count votes correctly. For this, audit is
required: audit of the code, audit of the counts, audit of who did
what when, audit of chains of custody... Perhaps the best audit cannot
catch all problems. Should we give up, then? Enron demonstrates how
much shoddy auditing can miss, and how much of it nobody would have
attempted had they known that competent auditors were coming.

> For those who are not computer scientists and find the calls for
> inspection and testing to be eminently sensible, some explanation is
> in order.

I am a computer scientist with experience in programming language
design and implementation. I find the calls for inspection and testing
to be eminently sensible, and your objections to be irrelevant.

> Let's suppose that a vendor agrees to disclose their e-voting
> software, and we set out to inspect it. The first problem we face is
> that the code that's readable (and writable) by people (the "source
> code") is not the code that actually controls the machine--

This is not by any means the first problem. The first problem today is
that all the code that has passed the so-called testing process
currently in place is junk. None of it could pass any real code
inspection. So we have to come up with a process for creating code
that can pass inspection. Hence OVC.

> it has to
> be translated into machine-executable code by a software package
> called a compiler.

That's one way of doing it. There are also interpreters, which are
often much simpler in construction than compilers.

> Like e-voting software, a compiler can't be
> assumed to perform its translation correctly, so we must include the
> compiler in our inspection.
> That's not as simple as it sounds. The typical compiler is the
> product of years of development in which each version is used to
> compile the next version.

But this is not at all necessary. I was in charge of the project that
created the first ISO/ANSI compliant APL system from scratch. We
defined and implemented a virtual machine, wrote an editor in our VM
language, wrote an integrated development environment in the editor,
and wrote the APL in the IDE. It took one man-year. A FORTH system can
be put together from scratch in much less time, since it is little more
than a virtual machine with an editor and assembler. The compiler is
half a page long, and can be understood by an undergraduate.
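
To make that concrete, here is a toy FORTH-style system as a Python
sketch (illustrative only -- not OVC code, and not real FORTH): a data
stack, a dictionary of words, and a one-screen outer loop that both
executes and defines words.

    # Toy FORTH-style interpreter: a data stack, a word dictionary,
    # and an outer loop. ':' compiles a new word from tokens up to ';'.
    stack = []

    def add():
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)

    def mul():
        b, a = stack.pop(), stack.pop()
        stack.append(a * b)

    def dup():
        stack.append(stack[-1])

    def dot():
        print(stack.pop())

    words = {'+': add, '*': mul, 'dup': dup, '.': dot}

    def interpret(text):
        tokens = iter(text.split())
        for tok in tokens:
            if tok == ':':                  # define a new word
                name = next(tokens)
                body = []
                for t in tokens:
                    if t == ';':
                        break
                    body.append(t)
                words[name] = lambda body=body: interpret(' '.join(body))
            elif tok in words:
                words[tok]()                # execute a known word
            else:
                stack.append(int(tok))      # literals go on the stack

    interpret(': square dup * ;  7 square .')    # prints 49

The real thing adds an assembler and a block editor, but the shape is
the same, and every line of it is inspectable.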

> As Ken Thompson explained in his Turing
> Award lecture [2], this self-compilation makes it possible to infect
> a compiler with a clever bug which reproduces itself invisibly in the
> machine code of all subsequent versions, where it will lie in wait to
> be triggered by a particular bit of source code in the software it's
> compiling. So we must inspect not only the compiler that's used to
> compile the e-voting source code but all previous versions of that
> compiler as well--many of which may no longer exist (indeed, someone
> who deliberately inserted a clever bug would do well to destroy every
> copy of the infected source code).

Yes, apparently Thompson implemented his trick, and a perverted
compiler escaped to Bolt, Beranek, and Newman, but not to the world in
general.

Your objection holds only if the language is too complex for mere
mortals to understand in full. Mere mortals can understand the whole
of several languages and operating systems, where such understanding
was a design goal, or where radical simplicity of syntax and semantics
was a goal for other reasons.
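
For those who haven't read the paper, here is a toy model of
Thompson's trick as a Python sketch (names and payload are
hypothetical; his actual code was in C and operated on binaries):

    # Toy model of the self-reproducing compiler bug. The "compiler"
    # here just copies its input; the attack is the two pattern-matches.
    import inspect

    PAYLOAD = ' or password == "open-sesame"'

    def compile_c(source):
        obj = source
        if 'def check_login' in source:
            # Pattern 1: miscompile the login program so that it
            # accepts a secret password.
            obj = obj.replace('return password == stored',
                              'return password == stored' + PAYLOAD)
        if 'def compile_c' in source:
            # Pattern 2: miscompile the compiler itself, re-emitting
            # this infected version, so the bug survives recompilation
            # from perfectly clean compiler source.
            obj = inspect.getsource(compile_c)
        return obj

    clean_login = ('def check_login(password, stored):\n'
                   '    return password == stored\n')
    print(compile_c(clean_login))    # the "object code" has a backdoor

The moral for voting systems is the one above: keep the toolchain
small enough to rebuild and audit from scratch.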

> Another problem with our inspection project is that the e-voting
> code is only a small fraction of the software that controls the
> e-voting machine on election day. If we believe in inspection, we
> must also inspect all of the operating-system software, not to
> mention the firmware resident in graphics cards, disk controllers,
> and so on. And for each of these, we must also inspect the compilers
> used in their development, along with all of those compilers'
> ancestors.

Actually, for firmware we can inspect the object code with a
disassembler and debugger. It's small enough. We don't need a
gamer-level 3D graphics screamer in a voting machine. BTW, the One
Laptop Per Child computer happens to be the first computer running
only Free Software: LinuxBIOS, kernel, drivers, and applications. We
in the project expect a few of the children to understand the whole
system before they go to college. All of this technology will be
available for voting machines.

FORTHs include their own operating systems, written about 90% in
FORTH, so that's manageable. Or we can use a stripped-down Linux meant
for tiny embedded systems.

> At this point it's probably obvious that all this inspection
> would be an enormous logistical nightmare.

Sorry, not obvious at all. We are starting from a base of
much-inspected software, and there are large numbers of people capable
of inspecting our voting application.

> Even if Microsoft and the
> other vendors of COTS software used in e-voting systems were to do an
> about-face and open their source code to public inspection, there's
> no way such a massive undertaking could ever be funded.

I'm perfectly happy if they butt right out, since their code would
fail inspection immediately. Of course they will attempt to raise a
political storm, but that is completely separate from the point you
are trying to make. That battle has already been joined, and Free/Open
Source Software is winning on the global front.

> Suppose we give up the idea of inspecting software, and decide to
> test it instead.

Not us.

> Unfortunately, determining whether software can be
> trusted by testing it is just as hopeless as inspecting it.
> The reason lies in a fundamental difference between software and
> physical systems. If you're testing a bridge, and you prove that it
> can handle a 10-ton load by driving a 10-ton truck across it, you can
> reasonably trust it to handle any load weighing less than 10 tons.

This turns out not to be the case. The Roebling firm was the first in
the US to build bridges that routinely failed to fall down. They did
this by calculation and testing, and then by making their bridges six
times stronger than seemingly necessary. To meet their standard, you
would have to test your bridge with a 60-ton load under otherwise
ideal conditions before certifying it for 10 tons in any weather, any
state of the river, and any other traffic.

> With software, however, there are no physical laws supporting such
> inferences;

Yes. We have to use mathematical inference. Poor us. See Dijkstra, A
Discipline of Programming.

> if a bridge were like software, you'd actually have to
> test it with trucks of all possible weights less than 10 tons.

I give up. That's nonsense. You can't do complete testing of software.
You have to design it and implement it correctly. So that tells us
nothing about bridges. And anyway, I thought the reason for talking
about bridges was to explain the software problem, not the other way
around.

Look, if you want to give up and go away, then do, and let the rest of
us get on with it. If your analysis were correct, it wouldn't matter
whether we wrote better code than Diebold, because we wouldn't be able
to convince anybody of it. Since this is not the case (Diebold code
has been savaged by computer scientists in the press and on TV, and
most of the public has understood), your argument is invalid.

> And that's only the beginning-- the load a bridge can support is
> a single variable, but the typical software package has many
> independent inputs which would have to be tested in all possible
> combinations. This situation is captured in the well known aphorism
>
> Testing software can reveal the presence of bugs, but can never
> confirm their absence [1].

"Apart from trivial coding mistakes, we expect the result to be
perfect."--Edsger Dijkstra, The Design of the T.H.E Operating System,
Technische Hogeschoole Eindhoven
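
The combinatorial point is easy to quantify, though; a
back-of-the-envelope sketch (Python, with made-up numbers):

    # Exhaustive testing blows up immediately (made-up numbers).
    contests = 10                # independent contests on one ballot
    choices = 50                 # possible selections per contest
    combinations = choices ** contests
    print(f'{combinations:.2e} distinct ballots')      # ~9.8e16

    # At a million test ballots per second, exhaustive testing takes:
    years = combinations / 1e6 / (3600 * 24 * 365)
    print(f'{years:,.0f} years')                       # ~3,100 years

Nobody proposes exhaustive testing; the argument is for inspection,
audit, and correct-by-design construction.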

> Even if these insurmountable obstacles were somehow surmounted, we
> would still face massive challenges in ensuring that the software
> we've inspected and tested is the same as the software that's
> controlling the machines on election day.

All in the plan. That's why we use checksums, and why we plan to run
the software directly from CD-ROM, where it can't be changed on the
day. Among other things.
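
A minimal sketch of the checksum step (Python; the file path and
reference digest below are hypothetical placeholders):

    # Verify the machine's software image against a digest published
    # before the election. Path and digest are placeholders only.
    import hashlib

    PUBLISHED_SHA256 = '...replace with the published digest...'

    def image_digest(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    if image_digest('/media/cdrom/voting-image.iso') == PUBLISHED_SHA256:
        print('image matches the published digest')
    else:
        raise SystemExit('MISMATCH: do not place this machine in service')

The published digest must, of course, reach the verifier through a
channel the machine's software cannot touch.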

> To prevent unauthorized
> installation of fraudulent software, every e-voting machine would
> have to be secured with an unbroken chain of custody.

Not the case, as I have just explained.

> Such stringent
> security measures have been shown to exceed the capabilities of many
> election administrators.

True, but not relevant.

> The bottom line is that no matter how much effort we put into
> inspecting and testing disclosed e-voting source code, we still can't
> say for sure that we know that it will operate correctly under all
> conditions. If the software counts votes, we still don't know how our
> votes are counted.

It may be that another audit will be required: randomly taking units
out of service during voting and testing whether they record votes as
cast. But don't tell us that there is nothing we can do.
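
The arithmetic for such spot checks is simple hypergeometric sampling;
a sketch (Python, made-up numbers):

    # Chance that random spot checks catch at least one bad machine.
    from math import comb

    N = 1000     # machines deployed
    m = 20       # machines misbehaving (2%)
    k = 100      # machines pulled at random for parallel testing

    p_all_clean = comb(N - m, k) / comb(N, k)
    print(f'detection probability: {1 - p_all_clean:.2f}')   # ~0.88

Tune k to whatever detection probability the law demands.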

> Some proponents of disclosure argue that inspecting and testing
> software can reveal flaws, and that correcting those flaws must
> necessarily improve the system, making it more trustworthy than it
> was. Of course correcting software's flaws usually improves its
> quality (the exceptions are the "corrections" that introduce new
> bugs).

Well, thank you.

> "More trustworthy", however, isn't good enough for elections.
> Unless we know for sure that e-voting software is incapable of
> operating incorrectly --and that's something we can never know-- we
> must treat it as not to be trusted.

Exactly! Therefore we audit six ways from Sunday, and we use code from
different sources for recording votes, reading ballots to voters,
counting votes, auditing, and any other functions. But why would you
turn down a chance to improve the code? That still doesn't make the
slightest sense to me. The only people I know who talk like that are
at Microsoft.
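
A sketch of the cross-check (Python; data and names are made up, and
the point is only that independently written tallies must agree before
any result is accepted):

    # Two tallies written by independent teams, cross-checked.
    from collections import Counter

    def count_a(ballots):                 # team A's implementation
        return dict(Counter(ballots))

    def count_b(ballots):                 # team B's, written separately
        tally = {}
        for choice in ballots:
            tally[choice] = tally.get(choice, 0) + 1
        return tally

    ballots = ['alice', 'bob', 'alice', 'carol', 'alice', 'bob']
    a, b = count_a(ballots), count_b(ballots)

    if a == b:
        print('tallies agree:', a)
    else:
        raise SystemExit('DISAGREEMENT: escalate to a hand count')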

> Note that these arguments against the linkage between source-code
> disclosure and correctness apply equally well to open-source
> software. Open source has many virtues --amply explained elsewhere--
> but guaranteeing correct vote counting is not one of them.

More secure, with fewer bugs, easier to adapt to new requirements --
why does this have nothing to do with correctness in your mind?

> Fortunately, as readers of ovc-discuss well know, the untrustworthy
> e-voting software is part of a larger system, and that larger system
> can be made trustworthy despite the untrustworthiness of its software
> component.

Wait, you really think we can fix voting without touching the broken
code that we know is out there?

> The OVC design [3] accomplishes this feat by printing each
> ballot for its voter to check. The printer-equipped system can be
> trusted, whether or not its source code has been publicly inspected,
> because the results --the printed ballots-- are verified by the
> voters.

Yes, you really do. Well, think again. Voters do a terrible job of
inspection: too many false positives, where they vote for someone and
then forget they have done it.

No, it has to be belt and suspenders. Embalm, cremate, and bury, and
take no risks.

> So a good question is: What would disclosure, inspection, and testing
> buy us? OK, it would ensure public humiliation of the vendor, but
> satisfying though that may be, is it worth all the trouble?

Disqualification of the vendor, please, and then competition to create
software that can pass the tests.

> --Ham
>
> 1. E. W. Dijkstra, "The Humble Programmer". CACM 15 (1972), 10: 859-866
> <http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD340.html>
>
> 2. K. Thompson, "Reflections on Trusting Trust". CACM 27 (1984), 8:
> 761-763 <http://portal.acm.org/citation.cfm?doid=358198.358210>
>
> 3. Open Voting Solution <http://www.openvotingconsortium.org/our_solution>
> --
> ------------------------------------------------------------------
> Hamilton Richards, PhD Department of Computer Sciences
> Senior Lecturer (retired) The University of Texas at Austin
> ham@cs.utexas.edu hrichrds@swbell.net
> http://www.cs.utexas.edu/users/ham/richards

The perfect is the enemy of the good. We can't have the perfect, and
the good isn't good enough for you. What's left?

-- 
Edward Cherlin
Earth Treasury: End Poverty at a Profit
http://wiki.laptop.org/go/Earth_Treasury
WIRE AFRICA  http://www.wireafrica.org/
http://www.linkedin.com/in/cherlin
_______________________________________________
OVC-discuss mailing list
OVC-discuss@listman.sonic.net
http://lists.sonic.net/mailman/listinfo/ovc-discuss
==================================================================
= The content of this message, with the exception of any external 
= quotations under fair use, are released to the Public Domain    
==================================================================