[R-sig-ME] Orthogonal vs. Non-orthogonal contrasts

Yasuaki SHINOHARA y.shinohara at aoni.waseda.jp
Mon Jun 20 10:40:59 CEST 2016


Dear Thierry,

Thank you very much for your help.
I understand the advantages of multcomp!

Sorry for this late reply.
Thank you, again.

Best wishes,
Yasu

On Mon, 30 May 2016 09:47:06 +0200
  Thierry Onkelinx <thierry.onkelinx at inbo.be> wrote:
> Dear Yasu,
> 
> I see two advantages of multcomp:
> 
> 1) It can work with any parametrisation, so it doesn't matter whether you
> use dummy encoding or some kind of contrast. Hence you can do any post-hoc
> test without having to refit the model. Note that the specification of the
> contrast will depend on the parametrisation; I find dummy encoding easier
> for generating post-hoc contrasts (see the sketch below).
> 
> 2) It corrects for multiple testing.
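> 
> For instance (a minimal sketch: "model", the factor name "A", and the
> level names "pre", "mid" and "post" are placeholders for whatever your
> fit actually uses), with dummy coding you can request the post-hoc
> comparisons directly:
> 
> library(multcomp)
> # pre vs. mid and pre vs. post, corrected for multiple testing
> summary(glht(model, linfct = mcp(A = c("mid - pre = 0",
>                                        "post - pre = 0"))))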
> 
> Best regards,
> 
> ir. Thierry Onkelinx
> Instituut voor natuur- en bosonderzoek / Research Institute for Nature and Forest
> team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
> Kliniekstraat 25
> 1070 Anderlecht
> Belgium
> 
> To call in the statistician after the experiment is done may be no more
> than asking him to perform a post-mortem examination: he may be able to say
> what the experiment died of. ~ Sir Ronald Aylmer Fisher
> The plural of anecdote is not data. ~ Roger Brinner
> The combination of some data and an aching desire for an answer does not
> ensure that a reasonable answer can be extracted from a given body of data.
> ~ John Tukey
> 
> 2016-05-30 9:30 GMT+02:00 Yasuaki SHINOHARA <y.shinohara at aoni.waseda.jp>:
> 
>> Dear Thierry,
>>
>> Thank you very much for your reply.
>>
>> I understood that the results for each main fixed factor (e.g., Factors
>> A, B, C and D) are pointless on their own, since the interactions
>> involving them affect those results. Actually, I manually coded the
>> contrasts for all the fixed factors based on the hypotheses I wanted to
>> test, as follows.
>>
>> # Factor A (testing block)
>> PreVsMid <- c(1, -1, 0)
>> PreVsPost <- c(-1, 0, 1)
>> contrasts(alldata$FactorA) <- cbind(PreVsMid, PreVsPost)
>> # Factor B (trainer order)
>> IDVsDIS <- c(1, -1)
>> contrasts(alldata$FactorB) <- cbind(IDVsDIS)
>> # Factor C (phonetic environment)
>> IniVsMid <- c(1, -1, 0)
>> IniVsCls <- c(-1, 0, 1)
>> contrasts(alldata$FactorC) <- cbind(IniVsMid, IniVsCls)
>> # Factor D (length of experience, a continuous variable)
>> alldata$FactorD <- as.numeric(alldata$FactorD)
>>
>> What I really wanted to test is the interaction between Factor A and
>> Factor B. Factor A has two contrasts (PreVsMid = c(1,-1,0) and
>> PreVsPost = c(-1,0,1)), and Factor B has only one contrast (IDVsDIS =
>> c(1,-1)), since Factor B has only two levels. I tested whether the
>> PreVsMid contrast differs between the two levels of Factor B (IDVsDIS).
>> Therefore, I did not use dummy (simple-effect) coding but effect coding.
>>
>> As you suggested, I tried to figure out how to use the multcomp package.
>> I found that the glht function in that package allows me to test a
>> variety of contrasts via a matrix. However, I felt that the contrasts I
>> coded above may be enough to test my hypothesis, and I am wondering
>> whether I should use the glht function for the contrasts.
>>
>> Could you please let me know whether there are any advantages to using
>> the glht function?
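>>
>> (A minimal sketch of the matrix interface for the interaction contrast I
>> describe above; the coefficient name below is hypothetical and would be
>> taken from names(fixef(model)) in the actual fit:)
>>
>> library(multcomp)
>> K <- matrix(0, nrow = 1, ncol = length(fixef(model)),
>>             dimnames = list("PreVsMid: ID vs DIS", names(fixef(model))))
>> K[1, "FactorAPreVsMid:FactorBIDVsDIS"] <- 1  # hypothetical coefficient name
>> summary(glht(model, linfct = K))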
>>
>> I really appreciate your help.
>>
>> Best wishes,
>> Yasu
>>
>>
>> On Thu, 26 May 2016 08:53:44 +0200
>>  Thierry Onkelinx <thierry.onkelinx at inbo.be> wrote:
>>
>>> Dear Yasu,
>>>
>>> The contrast x = c(1, -1, 0) enters the model as
>>> beta_x * 1 * a_1 + beta_x * (-1) * a_2 + beta_x * 0 * a_3.
>>> Likewise, the contrast y = c(0.5, -0.5, 0) enters as
>>> beta_y * 0.5 * a_1 + beta_y * (-0.5) * a_2 + beta_y * 0 * a_3.
>>>
>>> Since both model the same thing,
>>> beta_x * 1 * a_1 + beta_x * (-1) * a_2 = beta_y * 0.5 * a_1 + beta_y * (-0.5) * a_2.
>>> Some simple math shows that beta_y = 2 * beta_x: halving the contrast
>>> weights doubles the coefficient and its standard error, while the z and
>>> p values are unchanged.
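>>>
>>> A toy check (a minimal sketch on simulated data):
>>>
>>> set.seed(1)
>>> f <- gl(3, 10, labels = c("pre", "mid", "post"))
>>> y <- rnorm(30, mean = as.numeric(f))
>>> contrasts(f) <- cbind(c(1, -1, 0), c(1, 1, -2))
>>> coef(summary(lm(y ~ f)))[2, ]  # estimate, SE, t, p for the first contrast
>>> contrasts(f) <- cbind(c(0.5, -0.5, 0), c(1, 1, -2))
>>> coef(summary(lm(y ~ f)))[2, ]  # estimate and SE double; t and p identical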
>>>
>>> Your contrasts are correct but pointless given your model: they would
>>> only be meaningful if FactorA appeared as a main effect alone. Since you
>>> included FactorA in some interactions as well, you'll need to define
>>> contrasts over the full set of fixed-effect parameters to get sensible
>>> results. You can do that with the multcomp package. I would also suggest
>>> finding a local statistician to help you define the contrasts relevant
>>> for your model.
>>>
>>> Best regards,
>>>
>>>
>>> ir. Thierry Onkelinx
>>>
>>> 2016-05-26 6:31 GMT+02:00 Yasuaki SHINOHARA <y.shinohara at aoni.waseda.jp>:
>>>
>>>> Dear Thierry,
>>>>
>>>> Thank you very much for your reply.
>>>> I understood why. The blockPreVsMid:FactorD interaction turned out
>>>> significant in the model in which the testing-block factor was
>>>> contrasted as PreVsMid and PreVsPost (i.e., cbind(c(1,-1,0), c(-1,0,1))),
>>>> although the interaction was not significant in the model with the
>>>> testing block contrasted as PreVsMid and PreMidVsPost (i.e.,
>>>> cbind(c(1,-1,0), c(1,1,-2))).
>>>>
>>>> Could I ask another question?
>>>> What is the difference between coding the PreVsMid contrast as
>>>> c(1, -1, 0) and as c(0.5, -0.5, 0)? It seems that the beta and SE are
>>>> doubled if I code the contrast with c(0.5, -0.5, 0). I hope this does
>>>> not matter.
>>>>
>>>> Also, I coded contrasts(data$FactorA) <- cbind(c(1,-1,0), c(-1,0,1)) to
>>>> test the difference between the mean of level 1 and the mean of level 2,
>>>> and between the mean of level 1 and the mean of level 3. Is this
>>>> correct? Some websites say something different from what I understood
>>>> (e.g., the first answer at
>>>> http://stats.stackexchange.com/questions/44527/contrast-for-hypothesis-test-in-r-lmer).
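>>>>
>>>> (A minimal sketch of the check suggested in that answer, using
>>>> MASS::ginv to see what each coefficient actually estimates:)
>>>>
>>>> library(MASS)
>>>> cmat <- cbind(PreVsMid = c(1, -1, 0), PreVsPost = c(-1, 0, 1))
>>>> # each row gives the weights on the three level means for one coefficient
>>>> fractions(ginv(cbind(Intercept = 1, cmat)))
>>>> # row 2 is (1/3, -2/3, 1/3): the "PreVsMid" coefficient actually
>>>> # compares mid against the average of pre and post, not pre against mid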
>>>>
>>>> My model includes both categorical and numeric variables, and all
>>>> categorical variables were coded manually.
>>>>
>>>> Best wishes,
>>>> Yasu
>>>>
>>>>
>>>> On Wed, 25 May 2016 09:44:14 +0200
>>>>  Thierry Onkelinx <thierry.onkelinx at inbo.be> wrote:
>>>>
>>>> Dear Yasu,
>>>>>
>>>>> A is part of two interactions, so you cannot interpret this main
>>>>> effect without the interactions. Note that changing the contrast will
>>>>> also affect the interactions.
>>>>>
>>>>> Best regards,
>>>>>
>>>>> ir. Thierry Onkelinx
>>>>>
>>>>> 2016-05-25 4:42 GMT+02:00 Yasuaki SHINOHARA <y.shinohara at aoni.waseda.jp>:
>>>>>
>>>>>> Dear all,
>>>>>>
>>>>>> Hello, I am doing research on second language acquisition and have a
>>>>>> question about glmer in R. Could you please answer my question?
>>>>>>
>>>>>> I have the following logistic mixed-effects model.
>>>>>> model <- glmer(corr ~ A + B + C + D + A:B + B:C + A:D +
>>>>>>                  (1 + A | subject) + (1 + A | item:speaker),
>>>>>>                family = binomial, data = mydata,
>>>>>>                control = glmerControl(optimizer = "bobyqa",
>>>>>>                                       optCtrl = list(maxfun = 1000)))
>>>>>>
>>>>>> I tested language learners (subjects) three times (pre-training,
>>>>>> mid-training, post-training) with "items" produced by "speakers", so
>>>>>> Factor A is "testing block", which has three levels (pre, mid, post).
>>>>>> Since each subject took the test three times, random slopes for
>>>>>> Factor A were also included.
>>>>>>
>>>>>> I made orthogonal contrasts for Factor A (testing block) as follows.
>>>>>> PreVsMid <- c(1, -1, 0)
>>>>>> PreMidVsPost <- c(1, 1, -2)
>>>>>> contrasts(mydata$A) <- cbind(PreVsMid, PreMidVsPost)
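>>>>>>
>>>>>> (A quick check that these two columns are orthogonal:)
>>>>>> crossprod(cbind(PreVsMid, PreMidVsPost))  # off-diagonal entries are 0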
>>>>>>
>>>>>> The results from summary(model) for this factor were as follows.
>>>>>> pre vs. mid test: β = 0.22, SE = 0.05, z = 4.34, p < 0.001
>>>>>> pre & mid vs. post test: β = -0.21, SE = 0.04, z = -5.96, p < 0.001
>>>>>>
>>>>>> However, I thought it would be better to use non-orthogonal contrasts
>>>>>> for this factor, "pre vs. mid" and "pre vs. post", to test my
>>>>>> hypothesis. So I made new contrasts for Factor A as follows.
>>>>>> PreVsMid <- c(1, -1, 0)
>>>>>> PreVsPost <- c(1, 0, -1)
>>>>>> contrasts(mydata$A) <- cbind(PreVsMid, PreVsPost)
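>>>>>>
>>>>>> (The same check shows that these columns are not orthogonal:)
>>>>>> crossprod(cbind(PreVsMid, PreVsPost))  # off-diagonal entry is 1, not 0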
>>>>>>
>>>>>> The results from summary(model) for this contrast were
>>>>>> pre vs. mid test: β = -0.01, SE = 0.04, z = -0.14, p > 0.05 (= 0.89),
>>>>>> pre vs. post test: β = 0.42, SE = 0.07, z = 5.96, p < 0.001.
>>>>>>
>>>>>> Although the first contrast (pre vs. mid) is the same in both models,
>>>>>> why are the results for the pre vs. mid contrast so different (highly
>>>>>> significant in one model, not significant in the other)?
>>>>>>
>>>>>> I really appreciate any help.
>>>>>>
>>>>>> Best wishes,
>>>>>> Yasu
>>>>>>