# Re: [EILeg] [ei] Re: A 3-Step Audit Protocol w/ 99% Confidence

From: Ron Crane <voting_at_lastland_dot_net>
Date: Fri Jan 26 2007 - 15:39:25 CST

A related problem with the audit schemes is that their assurance factors
rely upon the assumption that each precinct has the same probability of
being miscounted. This is arguable with respect to accidental miscounts,
such as those caused by using the wrong ballot description files. It is
suspect with respect to fraud. Indeed, the schemes seem partially to
recognize this when they calculate the minimum number of precincts (M)
needed to flip the election (i.e., sort precincts from largest margin to
smallest and reduce the overall margin by each precinct's margin until
the remaining margin is <= 0). But then they throw all the precincts
back into one bin and pick the audit candidates uniformly at random.
This means that, though fraud is probably significantly more likely in
larger precincts (better yield per conspirator), such precincts are no
more likely to be audited than much smaller precincts (lower yield per
conspirator).
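In rough Python (the per-precinct figures and the assumption that each
precinct can shift at most its own margin's worth of votes are
illustrative, not taken from any particular scheme), the sort method
amounts to:

```python
def min_precincts_to_flip(precinct_margins, overall_margin):
    """Return M, the fewest precincts whose combined margins cover the
    overall margin: sort from largest to smallest and subtract each
    precinct's margin until the remainder is <= 0."""
    remaining = overall_margin
    for m, margin in enumerate(sorted(precinct_margins, reverse=True), start=1):
        remaining -= margin
        if remaining <= 0:
            return m
    # Margin can't be covered by any subset: every precinct is needed.
    return len(precinct_margins)
```

For example, with precinct margins of 100, 50, 30, and 20 and an overall
margin of 160, the two largest precincts cover only 150, so M = 3.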

This shouldn't be a problem when precinct size varies only slightly. But
sometimes it varies considerably. For example, in San Francisco's recent election,
precinct size by registered voters (ignoring mail-in-only precincts)
ranged from 249 (#1136) to 1134 (#3631).

One way to approach this problem is to stratify the audit, always
auditing the M largest precincts and selecting another N precincts to
audit randomly. You could calculate N by using the existing schemes'
sort method on a precinct list that excludes the M largest precincts.
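A sketch of the stratified selection, assuming M and N have already been
computed (the precinct IDs and sizes below are made up):

```python
import random

def stratified_audit(precinct_sizes, m, n, rng=random):
    """precinct_sizes: dict mapping precinct id -> size (RVs or ballots).
    Always audit the m largest precincts, plus n more drawn uniformly
    at random from the remainder. Returns the set of precinct ids."""
    by_size = sorted(precinct_sizes, key=precinct_sizes.get, reverse=True)
    certain = by_size[:m]                 # the M largest: audited always
    sampled = rng.sample(by_size[m:], n)  # N uniform draws from the rest
    return set(certain) | set(sampled)
```

With five precincts of sizes 1000, 900, 100, 50, and 30 and m=2, n=1, the
two largest are always in the audit set and one of the remaining three is
drawn at random.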

Another approach might be to weight a precinct's probability of
selection based upon its size (either by number of RVs or number of
ballots cast). But it isn't totally clear that this approach is
mathematically valid, and it'd be difficult to choose the precincts
without using computers.
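For concreteness, size-weighted (probability-proportional-to-size)
selection without replacement might look like the following; the function
name and data are hypothetical, and this says nothing about whether the
resulting assurance math works out:

```python
import random

def weighted_draw(precinct_sizes, k, rng=random):
    """Draw k distinct precincts; each draw is weighted by precinct size
    (registered voters or ballots cast)."""
    chosen = set()
    pool = dict(precinct_sizes)
    for _ in range(min(k, len(pool))):
        ids = list(pool)
        weights = [pool[i] for i in ids]
        pick = rng.choices(ids, weights=weights, k=1)[0]
        chosen.add(pick)
        del pool[pick]  # without replacement: a precinct is drawn once
    return chosen
```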

-R

charlie strauss wrote:
> I agree with Arthur: this fudge factor is the Achilles heel of the recount strategy. The good news, however, is twofold.
>
> First, by asking the question in the right way ("provide a sampling procedure that gives a 90% chance of discovering at least one fraudulent machine, if one exists, assuming that no machine shifts more than Fudge% of its votes"), you have reduced an enormously slippery problem down to a single parameter we can argue over (actually there are two parameters, sort of).
>
> Second, we already know how to fix the fudge factor problem. The solution is to provide a limited TAR to go along with the sampling.
>
> Why do we need this TAR? Well, the problem with the fudge factor is that apparent vote shifts, which can only be argued from statistical analysis of polls, registration data, and comparison to other voting modalities, have proven fairly unconvincing. Witness Sarasota, FL in the last election, where some precincts are claimed to have shifted by figures exceeding 25% in some reports; yet the judge dismissed that evidence as insufficient speculation. And we are all familiar with the studies, ad nauseam, of Ohio and Florida that use statistical evidence to claim large vote shifts in certain precincts, yet no investigations resulted. Candidates seldom challenge even bigger suspect vote shifts because they felt they lost in aggregate regardless of apparent shifts.
>
>
> Thus this presumed "fudge factor" might be a lot larger than anyone is really comfortable with. Yet that creates an enormous problem for recount designs. If we were to set the fudge factor at some ridiculously high number, say a 75% vote shift, as being an undeniable, self-evident situation that would be automatically recounted, then one computes a rather painfully stringent sampling rate (possibly too high for practical value). When you combine this with the fact that precincts vary in size, it gets a bit worse (a 75% shift in a big precinct is worse than a 75% shift in a tiny one).
>
> Thus the solution here is to set the fudge factor at a small value, then satisfy the candidates and voters with a TAR that can probe specific contested results, not simply take random samples.
>
> Random sampling + TAR lacks the Achilles heel.
>
> I don't believe the TAR needs to be very big either.
>

_______________________________________________
OVC-discuss mailing list
OVC-discuss@listman.sonic.net
http://lists.sonic.net/mailman/listinfo/ovc-discuss
==================================================================
= The content of this message, with the exception of any external
= quotations under fair use, are released to the Public Domain
==================================================================
Received on Tue Jan 1 14:12:49 2008

This archive was generated by hypermail 2.1.8 : Tue Jan 01 2008 - 14:12:51 CST