Monkey Testing and the State of the Testing Art

From: Richard C. Johnson <dick_at_iwwco_dot_com>
Date: Thu Jan 04 2007 - 15:50:44 CST

Folks,
   
  The exposure of what we knew about the ITA testing is not a surprise, but it is helpful to have the subject brought to public attention. I have spoken to Mr. Drew, the author of the Times article, and I do believe that some of his comments in the article were tongue in cheek. I understood "complicated" in that sense. The facts are so outrageous that such a reaction is understandable.
   
  In fact, only automated regression testing allows the repetition required to test and debug code. You will never prove that any software works; you can logically only prove that it does not. The point of testing is to expose the software to the risk of failure; the best you can say is that you threw everything you had at it, repeated the challenges many times and in many ways, documented the tests, the testing, and the test results, and the software produced no exceptions (errors) under those tests.
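   
  To make the idea concrete, here is a minimal sketch of what one such repeatable, automated test might look like, written in Python with the standard unittest module. The tally_votes function and the specific cases are hypothetical illustrations, not code from any actual voting system; the point is only that the same documented challenges can be re-run, unchanged, on every build.

    import unittest

    def tally_votes(ballots):
        # Hypothetical vote-counting routine under test.
        counts = {}
        for choice in ballots:
            counts[choice] = counts.get(choice, 0) + 1
        return counts

    class TallyRegressionTest(unittest.TestCase):
        # Each test is a documented challenge that can be repeated verbatim
        # on every build; a failure is an exception the suite records.
        def test_simple_tally(self):
            self.assertEqual(tally_votes(["A", "B", "A"]), {"A": 2, "B": 1})

        def test_empty_ballot_box(self):
            self.assertEqual(tally_votes([]), {})

    if __name__ == "__main__":
        unittest.main()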
   
  You can do coverage analysis. That is, you can study which lines of source code get exercised in the process of running tests. Exercising a line never proves that you tested it, but you know for a fact that any lines of code you did not exercise were not tested. Knowledge of what is untested can help you estimate the risk of failure under use.
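   
  As a small illustration, here is a hypothetical validation routine with a branch that the accompanying tests never reach; the function, the tests, and the file name are assumptions made for the example. Running it under a coverage tool such as coverage.py (coverage run this_file.py, then coverage report -m) would flag the unreached branch as unexecuted, and therefore untested, code.

    def validate_ballot(ballot, max_choices):
        # Hypothetical ballot-validation routine.
        if len(ballot) > max_choices:
            return False   # overvote branch
        if len(ballot) == 0:
            return False   # undervote branch -- never exercised by the tests below
        return True

    def test_overvote():
        assert validate_ballot(["A", "B"], 1) is False

    def test_normal():
        assert validate_ballot(["A"], 1) is True

    if __name__ == "__main__":
        # A trivial runner; a coverage report over this run would show the
        # undervote branch as a line that was never executed.
        test_overvote()
        test_normal()
        print("tests passed")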
   
  Paying huge sums for random human testing is not cost-effective. Code review is reasonable, but it is very expensive and is best done simply by using Open Source for critical applications and letting the community see the source. If Ciber did not document their testing, they did not do what they should have done; they may have waved their arms and bumped the machines, but they did not do what a professional software engineer would do. In a strict sense, the spokesperson for Ciber is wrong: without a test plan, Ciber did not test the code in any meaningful sense.
   
  It is quite possible to do Open Test, where the tests, the test procedures, and the results are all known and published. The community can contribute to Open Test by writing modules that cover the various performance aspects of voting machines; a sketch of one such module follows. Open Test, and Open Testware, are completely analogous to Open Source, and in voting applications Open Test is needed to replace the unprofessional hash the ITAs have made of their business. IMHO.
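   
  Here is one way such a published Open Test module might look, again as a minimal Python sketch. The machine_api module, its open_polls/cast_ballot/close_polls calls, and the test vector are all hypothetical; the point is that the inputs, the procedure, and the expected results are in the open for anyone to inspect and re-run.

    import unittest

    # Published test vector: inputs and expected results are public.
    PUBLISHED_VECTOR = {
        "ballots": ["Alice", "Bob", "Alice", "Alice"],
        "expected_totals": {"Alice": 3, "Bob": 1},
    }

    class OpenTallyTest(unittest.TestCase):
        def test_published_tally_vector(self):
            import machine_api    # hypothetical interface to the device under test
            machine_api.open_polls()
            for ballot in PUBLISHED_VECTOR["ballots"]:
                machine_api.cast_ballot(ballot)
            totals = machine_api.close_polls()
            self.assertEqual(totals, PUBLISHED_VECTOR["expected_totals"])

    if __name__ == "__main__":
        unittest.main()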
   
  -- Dick

