Testing the statistical algorithms

A statistical program has to produce accurate results reliably, and it has to keep doing so even as the program changes between versions. Seemingly trivial programming changes can have an enormous impact on the final results, so the only way to have confidence in a program is through automated testing. In many cases it is also possible to test against a standard dataset with a guaranteed, known result (e.g. http://www.itl.nist.gov/div898/strd/general/dataarchive.html).

The one-way ANOVA has passed the most difficult NIST test when using the default “precision” setting (as opposed to the “speed” setting, which relies on floating-point maths).
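For readers unfamiliar with what is being tested, the core of a one-way ANOVA reduces to a few sums of squares. The sketch below is purely illustrative (it is not SOFA Statistics’ actual implementation) and uses a small hand-checkable dataset:

```python
# Minimal one-way ANOVA F statistic -- illustrative only,
# NOT SOFA Statistics' actual implementation.
def one_way_anova_f(groups):
    """Return the F statistic for a list of groups of values."""
    all_values = [x for g in groups for x in g]
    n = len(all_values)
    k = len(groups)
    grand_mean = sum(all_values) / n
    # Between-groups sum of squares (each group mean vs the grand mean)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares (each value vs its own group mean)
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = k - 1
    df_within = n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hand-checkable case: SS_between = 6 (df 2), SS_within = 6 (df 6),
# so F = (6/2) / (6/6) = 3.0
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # → 3.0
```

Being able to work a case like this by hand is exactly what makes it useful as a test fixture.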

Additionally, the ANOVA and all the other statistical procedures are now tested using a number of carefully crafted Python functions and a simple test runner called nose (http://somethingaboutorange.com/mrl/projects/nose/0.11.1/testing.html). The tests can feed hundreds of random samples of data into each SOFA Statistics algorithm and check the output against a trusted algorithm (e.g. stats.py from SciPy).
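The pattern is straightforward: generate many random samples, run both the algorithm under test and a trusted reference over each, and assert the results agree within tolerance. A minimal sketch of that pattern follows; the function names are illustrative (not SOFA’s actual API), and Python’s standard-library statistics module stands in for the trusted reference:

```python
import math
import random
import statistics

def mean_under_test(sample):
    # Stand-in for the algorithm being verified (e.g. a SOFA routine);
    # purely illustrative.
    return sum(sample) / len(sample)

def test_mean_against_reference():
    # nose collects any function named test_*. Hundreds of random
    # samples are checked against a trusted reference implementation.
    rng = random.Random(42)  # fixed seed so any failure is reproducible
    for _ in range(500):
        size = rng.randint(2, 50)
        sample = [rng.uniform(-1e6, 1e6) for _ in range(size)]
        expected = statistics.mean(sample)  # exact-arithmetic reference
        result = mean_under_test(sample)
        assert math.isclose(result, expected, rel_tol=1e-9, abs_tol=1e-6), (
            "mismatch on sample of size %d" % size)

test_mean_against_reference()  # raises AssertionError on any mismatch
```

The fixed seed is a deliberate choice: random data widens coverage, but a reproducible failure is far easier to debug than one that vanishes on the next run.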

Of course, randomness alone is not enough to test an algorithm. It is also necessary to feed in cases where some values are very large, very close to zero, or very similar to one another. The specific approach needed to separate out the weak algorithms depends on the particular test. The NIST ANOVA datasets, for example, include lots of values with the same leading digits, differing only after the decimal point. A deliberate approach to testing increases the odds of exposing errors.
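The effect of such data is easy to demonstrate. The sketch below (illustrative only, not SOFA’s code) compares a numerically naive one-pass variance formula with a two-pass version on NIST-style data with identical leading digits. The one-pass formula loses the answer entirely to floating-point cancellation:

```python
import statistics

def variance_naive(xs):
    # Textbook one-pass formula E[x^2] - E[x]^2 -- numerically fragile
    # because two huge, nearly equal quantities are subtracted.
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def variance_two_pass(xs):
    # Two-pass formula: compute the mean first, then sum squared
    # deviations -- far more robust on this kind of data.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# NIST-style data: same leading digits, differences only after the point.
data = [1_000_000_000 + 0.1 * i for i in range(10)]
exact = statistics.pvariance(data)  # exact-arithmetic reference, ~0.0825
print(variance_two_pass(data))      # close to the exact value
print(variance_naive(data))         # wildly wrong -- cancellation error
```

This is exactly why a testing regime needs crafted pathological inputs as well as random ones: on everyday data both formulas agree, and only data like this separates them.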

In the open source world there is no need to take anyone’s word for it. The test script, and all the algorithms for SOFA Statistics, are open source (https://code.launchpad.net/sofastatistics), and any developers or statisticians who can extend or otherwise improve the tests are welcome to do so. That’s the open source way. So if you think of something that could help strengthen SOFA Statistics or its testing, please feel free to contact me.

As part of the testing just completed, a couple of small bugs were detected; these will be corrected in the next release.
