[R] Problem applying McNemar's - Different values in SPSS and R

Marc Schwartz marc_schwartz at me.com
Wed Dec 29 15:00:25 CET 2010

On Dec 28, 2010, at 4:13 PM, Johannes Huesing wrote:

> Marc Schwartz <marc_schwartz at me.com> [Tue, Dec 28, 2010 at 07:14:49PM CET]:
> [...]
>>> An old question of mine: Is there any reason not to use binom.test()
>>> other than historical reasons?
> (I meant "in lieu of the McNemar approximation", sorry if some
> misunderstanding ensued).

After I posted, I had a thought that this might be the case. Apologies for the digression then.

>> I may be missing the context of your question, but I frequently see
>> exact binomial tests being used when one is comparing the
>> presumptively known probability of some dichotomous characteristic
>> versus that which is observed in an independent sample. For example,
>> in single arm studies where one is comparing an observed event rate
>> against a point estimate for a presumptive historical control.
> In the McNemar context (as used by SPSS) the null hypothesis is p=0.5.

Yes, from what I can tell from a brief Google search, it appears that there are some software packages offering an exact variant of McNemar's test, which will automatically shift to performing an exact binomial test if the sample size is small (say, below 25).

I rarely use exact tests in general practice (I am not typically involved with "smallish" data sets), so I do not come across this situation frequently. That said, back to your original query: if one is using these techniques, one may find that the exact binomial test is in fact being used as noted, and should therefore be aware of the package's documentation, especially if the output does not make the effective shift in methodology clear.

So historical issues notwithstanding, the functional equivalent of binom.test() is used elsewhere in current practice under certain conditions.
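To make the equivalence concrete, here is a minimal sketch in base R, using a hypothetical 2x2 table of paired responses (the counts are made up for illustration). Under the null, the discordant counts b and c are binomial with p = 0.5, so the "exact McNemar" is just binom.test() applied to the off-diagonal cells:

```r
# Hypothetical 2x2 table of paired dichotomous responses
# (rows = condition A, columns = condition B)
tab <- matrix(c(20, 5, 12, 18), nrow = 2,
              dimnames = list(A = c("yes", "no"), B = c("yes", "no")))

# Asymptotic McNemar chi-squared test (base R)
mcnemar.test(tab)

# Exact analogue: the discordant pairs follow Binomial(b + c, 0.5)
# under H0, so the exact variant reduces to a sign test
b <- tab[1, 2]   # discordant one way (here 12)
c <- tab[2, 1]   # discordant the other way (here 5)
binom.test(b, b + c, p = 0.5)
```

Note that both tests use only the discordant cells; the concordant counts on the diagonal do not enter either statistic.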

>> I also see exact binomial (Clopper-Pearson) confidence intervals
>> being used when one wants conservative CI's, given that their
>> nominal coverage is at least as large as requested. That is, 95%
>> exact CI's will have coverage of at least 95%, but in reality it
>> can tend to be well above that, depending upon various factors.
>> This is well documented in various papers.
> Confidence intervals are not that regularly used in the McNemar context, as the
> conditional probability "a > b given they are unequal" is not as
> interpretable a quantity as the event probability in a single arm study.
>> I generally tend to use Wilson CI's for binomial proportions when
>> reporting analyses. I have my own code but these are implemented in
>> various R functions, including Frank's binconf() in Hmisc.
> Thanks for the hint.

Happy to help.
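For anyone following along, here is a small base-R sketch (with made-up counts) contrasting the two intervals discussed above. The exact interval comes straight from binom.test(); the Wilson interval is computed by hand from the standard formula, though Hmisc::binconf(x, n, method = "wilson") gives the same result without the arithmetic:

```r
# Illustrative data: 15 successes in 20 trials
x <- 15
n <- 20

# Exact (Clopper-Pearson) interval, via base R
exact_ci <- binom.test(x, n)$conf.int

# Wilson score interval, computed by hand
z <- qnorm(0.975)
p <- x / n
centre <- (p + z^2 / (2 * n)) / (1 + z^2 / n)
half   <- z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
wilson_ci <- c(centre - half, centre + half)

exact_ci   # wider; nominal 95% coverage is a lower bound
wilson_ci  # narrower; coverage closer to 95% on average
```

With these counts the exact interval is noticeably wider than the Wilson interval, which is the conservatism described above.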
