[R] Why software fails in scientific research

Jim Lemon jim at bitwrit.com.au
Thu Jul 1 12:18:37 CEST 2010


On 07/01/2010 03:29 AM, Dr. David Kirkby wrote:
> On 03/1/10 12:23 AM, Sharpie wrote:
>>
>>
>> John Maindonald wrote:
>>>
>>> I came across this notice of an upcoming webinar. The issues identified
>>> in the first paragraph below seem to me exactly those that the R project
>>> is designed to address. The claim that "most research software is barely
>>> fit for purpose compared to equivalent systems in the commercial world"
>>> seems to me not quite accurate! Comments!
>
It can be argued that this is a case of reporting bias. Whenever I point 
people doing epidemiology with Excel to Ian Buchan's paper on Excel errors:

http://www.nwpho.org.uk/sadb/Poisson%20CI%20in%20spreadsheets.pdf

there is a sort of reflexive disbelief, as though something as widely 
used as Excel could not possibly be wrong. That is to say, most people 
using commercial software, especially the sort that lets them follow a 
cookbook method and get a result acceptable to supervisors, journal 
editors and paymasters, simply accept its output without question.
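For readers unfamiliar with the linked paper, it concerns spreadsheet errors in 
computing confidence intervals for Poisson counts. As a rough illustration of 
what an exact calculation involves, here is a minimal sketch in Python using 
only the standard library; the function names are my own, and in R one would 
simply call the built-in poisson.test():

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    return math.exp(-mu) * sum(mu**j / math.factorial(j) for j in range(k + 1))

def bisect(f, lo, hi, iters=100):
    """Root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    flo = f(lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(mid) > 0) == (flo > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def exact_poisson_ci(x, alpha=0.05):
    """Exact two-sided (Garwood) CI for the mean of an observed Poisson count x."""
    top = 10.0 * (x + 1) + 20.0  # generous upper bracket for the search
    lower = 0.0 if x == 0 else bisect(
        lambda mu: (1.0 - poisson_cdf(x - 1, mu)) - alpha / 2, 0.0, top)
    upper = bisect(lambda mu: poisson_cdf(x, mu) - alpha / 2, 0.0, top)
    return lower, upper

lo, hi = exact_poisson_ci(10)
print(round(lo, 3), round(hi, 3))  # roughly 4.795 and 18.390
```

In R, poisson.test(10)$conf.int returns the same exact interval. The point of 
the exercise is that "exact" here has a precise definition, so a spreadsheet 
that silently substitutes a normal approximation can be checked against it.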

The counterweight to the carefree programming style employed by many 
researchers (I include myself) is the multitude of enquiring eyes that 
find our mistakes and foster a continual refinement of our programs. I 
received one such report just this evening, about yet another case I had 
never considered: perfect agreement between rating methods in a large 
trial. Thus humanity bootstraps upward. My AUD0.02

Jim
