Re: TAR--audits. urgently need help to get law.

From: Ron Crane <voting_at_lastland_dot_net>
Date: Tue Nov 21 2006 - 19:38:15 CST
I am not sure I understand how your machine-by-machine strategy is applied. Do you consider a tabulator (whether precinct-based or central) to be a "machine" for purposes of recounts?

Does NM use any DREs with VVPAT or ballot printers? If so, you need to increase the random sample size because many voters will not check their VVPAT or printed ballot.

Charlie Strauss wrote:
As a follow-up to my original query:

Here's a synopsis of what we decided to put in the proposed law.

First, the number of randomly selected machines is chosen such that, if there were enough bad machines to overturn the closest race, the chance that at least one of them would appear in the sample is 90%. We left it open for the legislature to change that figure, but we feel 90% is excellent.
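
For concreteness, here is a minimal Python sketch of the "at least one bad machine appears in the sample" calculation that this requirement implies. The function names, and treating the number of bad machines as a known input, are illustrative only, not anything the bill specifies.

    def miss_probability(total, bad, n):
        # Chance that a uniform random sample of n machines, drawn
        # without replacement from `total` machines of which `bad`
        # are corrupted, contains no corrupted machine at all.
        p = 1.0
        for i in range(n):
            p *= (total - bad - i) / (total - i)
        return p

    def sample_size(total, bad, confidence=0.90):
        # Smallest n that catches at least one corrupted machine
        # with probability >= confidence.  Assumes bad >= 1.
        n = 0
        while miss_probability(total, bad, n) > 1.0 - confidence:
            n += 1
        return n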

Additionally, 1/4 of the number used above will be chosen in a targeted fashion. Each major party will get an equal number of choices in the target (and the SOS gets an equal share as well).
  
Does this mean 1/4N in addition to the N chosen for random sampling, or that 3/4N are randomly sampled and 1/4N are targeted?

Also, I am not happy with enshrining "major parties" in this manner. Every party should get a popularity-weighted proportion of the choices, with popularity determined by statewide voter registration statistics or some other reasonably-objective criterion. I suppose that this could become a political sticking point, and it'd clearly be better to have a bill that enshrines the major parties than no bill at all, but all the same it's not fair.
1/4 was chosen so that we could treat the TAR machines as though they were randomly chosen, without excessively diluting the statistical power of the random sample. That is, it will introduce a slight bias, but not one big enough to argue over. Most of the time it serves its main purpose well: to satisfy the candidates that anomalies were simply anomalous, not errors.

In typical elections the number of recounted machines in NM will be small: if the election margin between two candidates were 2%, then to achieve 90% confidence all of NM would have to hand count only 33 machines.
Which formula did you use to calculate this number?
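
(A guess at the arithmetic, since the formula isn't quoted in this thread: if one assumes each corrupted machine can switch at most 15% of its ballots, a single corrupted machine shifts its own margin by 30%, so overturning a 2% statewide margin requires corrupting roughly 0.02 / 0.30, or about 6.7%, of all machines. With, say, 1500 precinct-machine-equivalents that is about 100 bad machines, and sample_size(1500, 100) from the sketch above returns 34, with the miss probability at n = 33 sitting almost exactly at 10%, so the quoted 33 is consistent to within rounding. Both the 15% cap and the 1500 figure are illustrative assumptions, not numbers from the bill.)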
Thus the number in the TAR is very small, so we felt it was best to place the choice in the hands of the major stakeholder parties rather than the candidates. This way we get party-wide buy-in to the entire recount process, both random and TAR, and thus a more generally authoritative public reassurance (we felt parties are likely to be more restrained about unsupported accusations than individual candidates). The SOS's share is there to help cover the independent and minor parties' interests, as well as to target known problem spots that might not be of interest to the parties.

If the sample detects a bad machine, the problem is handed over to a commission that decides how to expand the recount given whatever error modality was detected (e.g., in some cases the errors will point to a county-specific or precinct-specific problem rather than a statewide one, or vice versa). The commission is charged with expanding the recount sample until it determines that there is less than a 10% chance that any race would be overturned.
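
Purely as an illustration of one way a commission might mechanize "expand until there is less than a 10% chance of reversal", reusing miss_probability() from the sketch above (the worst-case model and every name here are assumptions, not the bill's procedure):

    def expansion_sufficient(total, audited, bad_found, bad_to_flip,
                             risk=0.10):
        # bad_to_flip: minimum number of corrupted machines needed to
        # reverse the closest race under the drafters' model.
        remaining = bad_to_flip - bad_found
        if remaining <= 0:
            # The corrupted machines already found could flip the race
            # by themselves: keep expanding, or order a full recount.
            return False
        # Chance that a random audit of `audited` machines would have
        # missed all of the additional corrupted machines an attacker
        # would still need.
        return miss_probability(total, remaining, audited) <= risk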

  
Are they told how to determine the size and location of the recount sample?
In this second stage, where a detection event has occurred, it's more important to maximize the statistical power of the sample to estimate the error size accurately than to search for more detection events. Therefore, we made it discretionary whether to include targeted precincts in the expansion.

When I say "machines" and "recount", what I mean is a hand recount of the paper ballots from a given machine.


Some of the reasons we are comfortable with 90% here: 1) We feel that in the case of systematic errors there will, in many cases, either be none at all or legislative-district-wide errors. The latter will always be detected at well above the 90% threshold.
If indeed we're talking about innocent errors, then probably yes.
2) Moreover, remember that the number of machines we chose to recount is based on the closest race.
So you recount all races on all ballots that include the closest race? Do the detection-expansion thresholds then apply to all of those races?
Thus, even if the detection probability for that race is 90%, it's far higher for all the other races; it's not particularly likely that an error would affect only the closest race. 3) The same logic applies to fraud, which would be detected at greater than 90% confidence.
I don't agree with sentence (3). It is pretty likely that fraud will be directed at one or a few races, rather than being spread across the entire ballot.
4) We also feel that if the recount deters fraudsters at all, then there's little difference in deterrence between 90% and 99%.
Attackers implicitly weigh (expected rewards * expected likelihood of getting rewards) against (expected penalties * expected likelihood of being penalized). I wonder whether a 90% detection threshold will be enough to deter attackers who see huge rewards in success and see little chance of being caught. Considering that the number of machines you need to recount for 90% is so small, why not consider raising the threshold to 95% (as the Brennan Center implicitly recommended) and seeing whether the recount is still manageable?
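
(To put rough numbers on that trade-off: under the sampling sketch above, raising the confidence from 90% to 95% multiplies the required sample by about ln(0.05)/ln(0.10), roughly 1.3, i.e., about 43 machines instead of 33 in the 2%-margin example, while halving an attacker's chance of slipping through from 10% to 5%.)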
5) Finally, that percentage is the single-election detection threshold. Over the course of many elections the process will find many error modalities with much higher confidence, and those can be prevented from recurring.

We also liked the idea of counting whole machines rather than, say, sampling 10% of the ballots in every machine.
Yes, well, sampling less than all the ballots processed by a machine would be difficult and susceptible to error.
Our feeling was that most error modalities and most suspicious events are machine- or precinct-specific. By recounting a machine in its entirety you get definitive and actionable information about the accuracy of that machine's electronic totals. While the ballot-sampling approach might be equally good statistically, we preferred the concreteness and interpretability of machine-centric recounts.

The bill is being massaged right now in committees, so any comments you care to make are still welcome. If you have sound procedures for including non-random choices in a way that improves the statistical decision theory, those would be especially welcome.

-----Original Message-----
  
From: Ron Crane <voting@lastland.net>
Sent: Nov 21, 2006 2:35 PM
To: Open Voting Consortium discussion list <ovc-discuss@listman.sonic.net>
Cc: Kurt Hyde <Dr-Jekyll@att.net>
Subject: Re: [OVC-discuss] TAR--audits. urgently need help to get law.

Charlie Strauss wrote:
    
Kurt (or anyone), I urgently need your help turning the Targeted
Audit Recount into law.

In New Mexico we're writing a law to introduce more rigorous recount
procedures.  The current bill is enclosed.
But we just got an invitation from the subcommittee that will be
recommending the bill to include a Targeted Audit, and we have to get
the amendments in by tomorrow (Friday).

First, are there any attempts to write TAR as legislation already available?

Second, TAR has one very serious problem that is keeping me from
finishing this amendment.

Unlike a randomly sampled recount, you can't rigorously use the 
results of a TAR to estimate how many machines are making errors.  If 
you can estimate how many machines in the full population are making 
errors, then you can construct a reasonable argument about how likely it
is that there could be enough errors in the election to change its
outcome based on the sample.  From that you can either iteratively 
augment the sample size or decide a full recount is needed.

      
I am sorry that I didn't get to this before the deadline, but I hope 
that my response might help your efforts generally.

I think that TAR precincts, on average, are about as likely to exhibit 
tampering as randomly-chosen precincts (I don't have much faith in 
candidates' and parties' analyses of tampering patterns). This suggests 
treating the TAR precincts as if they had been chosen randomly. If TAR 
precincts turn out to exhibit tampering more often than randomly-chosen 
precincts, this approach oversamples the vote, which isn't a bad thing. 
If TAR precincts exhibit tampering less often than randomly-chosen 
precincts, this approach undersamples the vote, which is why you want 
TAR to supplement, not to replace, random sampling.

Of course, recounting pure DREs is essentially meaningless, and 
recounting VVPATs is probably not very useful because few voters 
generally check them. The Brennan Center's report cited an (admittedly
synthetic) study by Ted Selker showing a <3% error-detection rate. We
need to take account of the VVPAT error-detection rate if
VVPAT recounts are to have any real meaning. [1]
    
The basic law simply says this:
enough machines will be sampled (hand counted) such that there is a 
90% chance of detecting whether an election outcome would be reversed 
by bad machines.  ("machine" here means precinct-machine-equivalents:  
batch counted absentee ballots are divided into sub-batches of the 
size of a typical precinct ballot count).  If a mismatch between the 
hand count and the electronic count is found then an election 
commission will decide how to expand the recount:  their criterion is
to keep expanding until they estimate there is less than a 10% chance
the election would be reversed.
      
I think 90% is too low. Elections are among the most important things we 
do. Even an iPod has something like three-9s reliability (it will work 
99.9% of the time). True, inevitable errors limit hand recounts' 
accuracy, but we should strive for something better than a 90% chance of 
detecting tampering. The Brennan Center's report seemed to suggest 95%.

-R

[1] We also need to account for the possibility that presentation 
attacks (e.g., dropping candidates from the ballot, reordering the 
ballot) will influence some voters actually to change their votes to the 
ones a tamperer prefers. Alas, VVPAT doesn't help detect these attacks.

_______________________________________________
OVC-discuss mailing list
OVC-discuss@listman.sonic.net
http://lists.sonic.net/mailman/listinfo/ovc-discuss

==================================================================
= The content of this message, with the exception of any external
= quotations under fair use, are released to the Public Domain
==================================================================
Received on Thu Nov 30 23:17:10 2006

This archive was generated by hypermail 2.1.8 : Thu Nov 30 2006 - 23:17:19 CST