# [R] Sig differences in Loglinear Models for Three-Way Tables

Charles C. Berry cberry at tajo.ucsd.edu
Wed Apr 14 23:50:59 CEST 2010

```
See comments in line.

On Wed, 14 Apr 2010, Sachi Ito wrote:

> Hi all,
>
> I've been running loglinear models for three-way tables in which one
> variable has three levels and the other two have two levels each.
>
> An example looks like this:
>
>> yes.no <- c("Yes","No")
>> switch <- c("On","Off")
>> att <- c("BB","AA","CC")
>> L <- gl(2,1,12,yes.no)
>> T <- gl(2,2,12,switch)
>> A <- gl(3,4,12,att)
>> n <- c(1136,4998,25,339,305,2752,31,692,251,1677,17,557)
>> d.table <- data.frame(A,T,L,n)
>> d.table
>    A   T   L    n
> 1  BB  On Yes 1136
> 2  BB  On  No 4998
> 3  BB Off Yes   25
> 4  BB Off  No  339
> 5  AA  On Yes  305
> 6  AA  On  No 2752
> 7  AA Off Yes   31
> 8  AA Off  No  692
> 9  CC  On Yes  251
> 10 CC  On  No 1677
> 11 CC Off Yes   17
> 12 CC Off  No  557
>
>
>
> First, I ran the independence model and found a poor fit:
>
>> library(MASS)
>> loglm(n~A+T+L)
> Call:
> loglm(formula = n ~ A + T + L)
>
> Statistics:
>                      X^2 df P(> X^2)
> Likelihood Ratio 1001.431  7        0
> Pearson          1006.287  7        0
>
>
>
> Thus, I went on and ran the two-way association model and found a good fit:
>
>> loglm(n~A:T+A:L+T:L)
> Call:
> loglm(formula = n ~ A:T + A:L + T:L)
>
> Statistics:
>                      X^2 df   P(> X^2)
> Likelihood Ratio 4.827261  2 0.08948981
> Pearson          4.680124  2 0.09632168
>
>
> I compared the independence model (Model 1), the two-way association model
> (Model 2), and the three-way interaction model (Saturated), and found that the
> two-way association model was the most parsimonious model that still fit the data:
>
>> ind <- loglm(n~A+T+L)
>> twoway <- loglm(n~A:T+A:L+T:L)
>> anova(ind,twoway)
> LR tests for hierarchical log-linear models
>
> Model 1:
> n ~ T + A + L
> Model 2:
> n ~ A:L + A:T + T:L
>
>             Deviance df Delta(Dev) Delta(df) P(> Delta(Dev)
> Model 1   1001.430955  7
> Model 2      4.827261  2 996.603694         5        0.00000
> Saturated    0.000000  0   4.827261         2        0.08949
>
>
> By running chi-square tests, I found that all three of the two-way
> associations are significant.
>> drop1(twoway,test="Chisq")
> Single term deletions
>
> Model:
> n ~ A:T + A:L + T:L
>       Df    AIC    LRT   Pr(Chi)
> <none>     24.83
> A:T     2 645.91 625.08 < 2.2e-16 ***
> A:L     2 152.93 132.10 < 2.2e-16 ***
> T:L     1 143.60 120.77 < 2.2e-16 ***
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
>
> Then, I got the coefficients:
>> coef(twoway)
> $`(Intercept)`
> [1] 5.866527
>
> $A
>         BB          AA          CC
> 0.27277069 -0.01475892 -0.25801177
>
> $T
>       On       Off
> 1.156143 -1.156143
>
> $L
>      Yes        No
> -1.225228  1.225228
>
> $A.T
>    T
> A            On        Off
>  BB  0.4809533 -0.4809533
>  AA -0.1783651  0.1783651
>  CC -0.3025882  0.3025882
>
> $A.L
>    L
> A            Yes          No
>  BB  0.19166429 -0.19166429
>  AA -0.15632604  0.15632604
>  CC -0.03533825  0.03533825
>
> $T.L
>     L
> T            Yes         No
>  On   0.2933774 -0.2933774
>  Off -0.2933774  0.2933774
>
>
> I then hand-calculated the odds ratios for A x T, A x L, and T x L.
>
> T x L:
> θ_TL = e^(4(0.293)) = 3.23
>
> A x L:
> θ_AL(BB vs. AA) = e^(2(0.19166) + 2(0.1563)) = 2.01
> θ_AL(BB vs. CC) = e^(2(0.19166) + 2(0.03533)) = 1.57
>
> A x T:
> θ_AT(BB vs. AA) = e^(2(0.48095) + 2(0.17837)) = 3.74
> θ_AT(BB vs. CC) = e^(2(0.48095) + 2(0.30259)) = 4.79
>
>
>
> Now, I'd like to know whether BB and AA are significantly different from each
> other (i.e., whether the 2.01-times-greater odds for BB relative to AA are
> significant), whether BB and CC differ significantly (i.e., whether the
> 1.57-times-greater odds for BB are significant), and so on.
>
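
## A minimal sketch (not part of the original post) reproducing the
## hand-calculated odds ratios above from the coefficients of the quoted
## loglm() fit; it assumes `twoway` is still in the workspace:
co <- coef(twoway)
exp(4 * co$T.L["On", "Yes"])                          # T x L:           ~3.23
exp(2 * (co$A.L["BB", "Yes"] - co$A.L["AA", "Yes"]))  # A x L, BB vs AA: ~2.01
exp(2 * (co$A.L["BB", "Yes"] - co$A.L["CC", "Yes"]))  # A x L, BB vs CC: ~1.57
exp(2 * (co$A.T["BB", "On"]  - co$A.T["AA", "On"]))   # A x T, BB vs AA: ~3.74
exp(2 * (co$A.T["BB", "On"]  - co$A.T["CC", "On"]))   # A x T, BB vs CC: ~4.79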

It will be easier to answer this if you formulate the problem as a
surrogate Poisson regression via glm().

The relationship between the surrogate Poisson and the loglinear model is
well known. See V&R or McCullagh and Nelder, for example.

Then the hypotheses about odds ratios become hypotheses about
coefficients, which you can test via summary(), or linear combinations of
coefficients, which you can test with the pieces that vcov() and coef()
provide.
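
## A minimal sketch of the surrogate Poisson approach (assumed code, not part
## of the original reply). With the default treatment contrasts the baseline
## levels are BB, On and Yes, so each interaction coefficient is a log odds
## ratio against that baseline and is tested directly by summary():
fit <- glm(n ~ A * T + A * L + T * L, family = poisson, data = d.table)
summary(fit)       # Wald z tests for every log-linear coefficient
exp(coef(fit))     # the same effects on the odds-ratio scale

## For a comparison in which neither level is the baseline (e.g. the A x L
## association for AA versus CC), test a linear combination of coefficients
## with coef() and vcov(); the names "AAA:LNo" and "ACC:LNo" are the ones
## glm() generates for these factors under the default contrasts:
b <- coef(fit)
V <- vcov(fit)
k <- setNames(rep(0, length(b)), names(b))
k["AAA:LNo"] <- 1
k["ACC:LNo"] <- -1
est <- sum(k * b)                    # estimated difference in log odds ratios
se  <- sqrt(drop(t(k) %*% V %*% k))  # its standard error
2 * pnorm(-abs(est / se))            # two-sided Wald p-value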

HTH,

Chuck

>
> I'd really appreciate it if someone could answer this question!
>
> Thank you,
> Sachi
>
>
>

Charles C. Berry                            (858) 534-2098
Dept of Family/Preventive Medicine
E mailto:cberry at tajo.ucsd.edu	            UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/  La Jolla, San Diego 92093-0901

```
