[Rd] default for 'signif.stars'

Fox, John jfox at mcmaster.ca
Thu Mar 28 17:18:40 CET 2019


Dear all,

I agree with both Russ and Terry that the significance stars option should default to FALSE. Here's what Sandy Weisberg and I say about significance stars in the current edition of the R Companion to Applied Regression:

	'If you find the “statistical-significance” asterisks that R prints to the right of the p-values annoying, as we do, you can suppress them, as we will in the remainder of the R Companion, by entering the command: options(show.signif.stars=FALSE).'
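
As a concrete illustration, turning the option off and then fitting a model looks like this (a minimal sketch; the lm() call on the built-in mtcars data is an arbitrary example, not one from the book):

    ## Suppress significance stars for the rest of the session.
    options(show.signif.stars = FALSE)

    ## Fit any model; the coefficient table now prints without the star
    ## column and without the "Signif. codes" legend.
    fit <- lm(mpg ~ wt + hp, data = mtcars)
    summary(fit)

    ## The options() line can also go in a startup file such as ~/.Rprofile
    ## to make this the personal default.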

This is a rare case in which I find myself disagreeing with Martin, whose arguments are almost invariably careful and considered. In particular, the crude discretization of p-values into a handful of categories seems a poor visualization to me, and in any event "scanning" many p-values quickly, which is the use case Martin cites, raises serious issues of simultaneous inference.
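
To make the discretization concrete: the stars amount to binning each p-value at fixed cutpoints, roughly as symnum() does it (a sketch; the cutpoints are the ones shown in the "Signif. codes" legend, and the p-values below are made-up examples):

    ## Sketch: p-values collapsed into a handful of categories at fixed
    ## cutpoints. The p-values here are invented purely for illustration.
    p <- c(0.0004, 0.009, 0.049, 0.051, 0.4)
    symnum(p, corr = FALSE, na = FALSE,
           cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
           symbols   = c("***", "**", "*", ".", " "))
    ## 0.049 and 0.051 land in different categories despite being nearly
    ## identical -- which is the discretization objection in a nutshell.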

Best,
 John

> -----Original Message-----
> From: R-devel [mailto:r-devel-bounces at r-project.org] On Behalf Of 
> Therneau, Terry M., Ph.D. via R-devel
> Sent: Thursday, March 28, 2019 9:28 AM
> To: r-devel at r-project.org
> Subject: Re: [Rd] default for 'signif.stars'
> 
> The addition of significance stars was, in my opinion, one of the
> worst defaults ever added to R. I would be delighted to see it
> removed, or at least to see the default changed. It is one of the few
> overrides that I have argued to add to our site-wide defaults file.
> 
> My bias comes from 30+ years in a medical statistics career where
> fighting the disease of "dichotomania" has been an eternal struggle.
> Continuous covariates are split in two, nuanced risk scores are
> thresholded, decisions become yes/no, ... Adding stars to output is,
> to me, simply a gateway drug to this pernicious addiction. We
> shouldn't encourage it.
> 
> Wrt Abe's rant about the Nature article: I've read the article and
> found it to be well reasoned, and I can't say the same about the rant.
> The issue in biomedical science is that the p-value has fallen victim
> to Goodhart's law: "When a measure becomes a target, it ceases to be a
> good measure." The article argues, and I would agree, that the .05
> yes/no decision rule is currently doing more harm than good in
> biomedical research. What to do instead is a tough question, but it is
> fairly clear that the current approach isn't working. I have seen many
> cases of two papers that both found a risk increase of 1.9 for
> something, where one paper claimed "smoking gun" and the other
> "completely exonerated". Do YOU want to take a drug with 2x risk and a
> p = 0.2 'proof' that it is okay? Of course, if there is too much to do
> and too little time, people will find a way to create a shortcut
> yes/no rule no matter what we preach. (We statisticians will do it
> too.)
> 
> Terry T.
> 
> ______________________________________________
> R-devel at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel

