Re: OVC-discuss Digest, Vol 27, Issue 28

From: Kathy Dopp <kathy_dot_dopp_at_gmail_dot_com>
Date: Sat Jan 27 2007 - 15:09:58 CST

> Date: Sat, 27 Jan 2007 01:06:47 -0500 (EST)
> From: charlie strauss <>
> Subject: Re: [OVC-discuss] A 3-Step Audit Protocol w/ 99%confidence

> Kathy I sent that message to you privately, I'm not sure why it was essential to post it here. I apologize to the OVC for the perpetuation of this.

Charlie, I posted it because I thought the group would be interested in
the reasons why there is no Achilles heel or fudge factor in the audit
strategy. Apparently I did not explain it sufficiently earlier, or you
of all people would have understood it, Charlie, since your math skills
are excellent and your statistical skills are better than mine. If you
still did not understand it, then it was apparent to me that many
others on this list did not either, so it bears another explanation of
what the maximum vote shift per vote count is and why it is used in the
calculations.

As for the history of who discovered what and when, anyone can check it
by studying the papers that Joe Hall has published on his audit page:

A quick recap that leaves out my work in 2005 is:

Roy Saltman and then the Brennan Center first proposed the concept of a
maximum wrongful vote shift per vote count that would not be
immediately noticed as suspicious. The Brennan Center also gave a
method for estimating sample sizes for parallel machine testing, based
on sampling with replacement.

I first correctly applied that concept to the margins between
candidates, calculating the amount of vote miscount an audit must be
able to detect in order to ensure correct election outcomes to any
desired degree of certainty. I applied both the estimation formula and
the maximum vote shift per vote count given in the Brennan Center
appendices to produce a spreadsheet for calculating audit sample sizes
based on the margins between candidates, and I was the first person to
do this audit mathematics based on candidate margins correctly.
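To make the idea concrete, here is a minimal sketch of this kind of margin-based calculation, assuming equal-size precincts, uniform random sampling, and the commonly cited 20% maximum undetected within-precinct shift. It is an illustration of the technique, not the actual spreadsheet, and all numbers in it are hypothetical:

```python
from math import ceil, comb

def min_corrupted_precincts(margin_fraction, num_precincts, max_shift=0.20):
    """Fewest equal-size precincts an attacker must corrupt to overturn a
    margin, if switching more than max_shift of a precinct's votes would be
    noticed.  Each switched vote moves the margin by 2, so one precinct can
    shift up to 2 * max_shift of its votes' worth of margin."""
    return ceil(margin_fraction * num_precincts / (2 * max_shift))

def exact_sample_size(num_precincts, num_corrupted, confidence=0.95):
    """Smallest uniform random sample (drawn without replacement) that hits
    at least one corrupted precinct with probability >= confidence."""
    for n in range(num_precincts + 1):
        miss = comb(num_precincts - num_corrupted, n) / comb(num_precincts, n)
        if miss <= 1 - confidence:
            return n
    return num_precincts

b = min_corrupted_precincts(0.01, 400)  # 1% margin, 400 precincts
n = exact_sample_size(400, b)
print(b, n)  # -> 10 103
```

With a 1% margin across 400 equal precincts, corrupting 10 precincts suffices to flip the outcome, so the audit must sample enough precincts to catch at least one of those 10 with 95% probability.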

Howard Stanislevic and I simultaneously devised methods to adjust for
precinct size variations, but my method was an exact numerical method,
whereas Howard's algorithm had some minor errors that made it very good
but not precisely accurate. As I recall, Howard did not publicly
release an exact method like the one Frank and I developed, but
presented a somewhat loose algorithm at the time, and a few of the
sample sizes Howard shared with me were occasionally slightly off and
needed minor correction. Howard also neglected to mention my earlier
July paper at all in his August one.
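The size-variation adjustment can also be sketched. Since a rational attacker targets the largest precincts first, the minimum number of corrupted precincts is found greedily over the sorted sizes. This is only an illustration of the idea (again using a hypothetical 20% maximum undetected shift), not Howard's algorithm or my exact method:

```python
def min_corrupted_varsize(margin_votes, precinct_sizes, max_shift=0.20):
    """Fewest precincts an attacker must corrupt when precinct sizes vary:
    corrupt the largest first, each shifting the margin by up to
    2 * max_shift of its ballots."""
    shifted = 0.0
    for count, size in enumerate(sorted(precinct_sizes, reverse=True), start=1):
        shifted += 2 * max_shift * size
        if shifted >= margin_votes:
            return count
    return None  # margin too large to overturn within the shift bound

# 5 large precincts (1000 ballots each) and 20 small ones (200 each):
print(min_corrupted_varsize(1500, [1000] * 5 + [200] * 20))  # -> 4
```

Treating these precincts as equal-sized would suggest many more precincts must be corrupted; in fact four large ones suffice, so a sample size computed under the equal-size assumption would be too small.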

Frank Stenger and I were the first to create a numerical method for
exactly calculating the sample size for election audits using all of
the above. While I was finishing this paper, released in September,
Jerry sent frequent emails to this list that mischaracterized my work
and positions, which had not yet been publicly released, and took
credit for Frank's work.

Ronald Rivest then released a new formula for estimating audit sample
sizes that is more precise than the estimation method first presented
by the Brennan Center: it gives smaller overestimates of the exact
sample sizes that Frank's and my numerical method calculates.
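For comparison, here is how the two styles of estimate relate to the exact answer on one hypothetical example. The rule-of-thumb formula attributed to Rivest is written from memory and should be treated as an assumption; the with-replacement estimate and the exact without-replacement answer follow directly from the probabilities:

```python
from math import ceil, comb, log

N, b, c = 400, 10, 0.95  # precincts, corrupted precincts, confidence

# Brennan-style estimate from sampling with replacement:
# miss probability (1 - b/N)**n <= 1 - c
n_with_repl = ceil(log(1 - c) / log(1 - b / N))

# Rivest-style rule of thumb (as I recall it):
n_rivest = ceil((N - (b - 1) / 2) * (1 - (1 - c) ** (1 / b)))

# Exact without-replacement sample size:
n_exact = next(n for n in range(N + 1)
               if comb(N - b, n) / comb(N, n) <= 1 - c)

print(n_with_repl, n_exact)  # -> 119 103
print(n_rivest)              # falls between the two, close to the exact answer
```

The with-replacement estimate always errs on the safe side (a larger sample), while the sharper approximation lands much nearer the exact value computed numerically.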

You can check out all these facts by carefully studying the papers here:

Just a quick overview.
OVC-discuss mailing list
= The content of this message, with the exception of any external
= quotations under fair use, is released to the Public Domain
Received on Tue Jan 1 14:12:50 2008

This archive was generated by hypermail 2.1.8 : Tue Jan 01 2008 - 14:12:51 CST