[R-sig-ME] lmer() for conjoint analysis? (interpreting coefficients)

Andy Fugard andyfugard at gmail.com
Fri Aug 6 13:41:27 CEST 2010


Dear Marianne,


On Thu, Aug 5, 2010 at 19:00, Marianne Promberger
<marianne.promberger at kcl.ac.uk> wrote:

>
> Each of 98 subjects made 9 choices, choosing one alternative each
> time from pairs of two.

Is this coded so that the model predicts the probability of choosing
the a priori determined best option in each of the 9 comparisons?

>
> We had prior evidence that subjects would prefer the standard
> treatment to any of the alternatives, at equal effectiveness. Hence,
> to reduce number of pairs to present to each subject, one option in
> each pair was always standard medication at lowest level of
> effectiveness (10 out of 100), and the other option was one of the
> three alternatives, at equal or better effectiveness: 10 out of 100,
> 20 out of 100, 40 out of 100.
>
> str(long)
> 'data.frame':   882 obs. of  5 variables:
>  $ subject      : Factor w/ 98 levels "subject 001",..: 1 2 3 4 5 6 7 8 9 10 ...
>  $ alternative  : Factor w/ 3 levels "alt1","alt2",..: 1 1 1 1 1 1 1 1 1 1 ...
>  $ effectiveness: Factor w/ 3 levels "10","20","40": 1 1 1 1 1 1 1 1 1 1 ...
>  $ choice       : Factor w/ 2 levels "0","1": 1 2 2 1 1 1 1 2 1 2 ...

(Just a thought: I find it easier to interpret logistic fits if I
encode the dependent variable as numeric 0s and 1s rather than as a
factor.)
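For example, a minimal recode (a sketch, using the data frame and column names from your str() output):

```r
## The factor has levels "0" and "1", so go via character:
## as.numeric() on the factor itself would return the level
## indices 1 and 2, not the 0/1 values you want.
long$choice <- as.numeric(as.character(long$choice))
```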

>
> I fit this model: (model 1)
> lmer(choice ~ 0 + alternative + effectiveness + (1|subject), family = binomial, data = long)
>
>                Estimate Std. Error z value Pr(>|z|)
> alternativealt1   -0.363      0.446   -0.81     0.42
> alternativealt2   -0.679      0.448   -1.51     0.13
> alternativealt3    2.422      0.459    5.28  1.3e-07 ***

Is one of these the standard medication, then?

Also, why did you drop the intercept?  It might make sense to put the
standard medication in the intercept; then you can see whether
participants always prefer it over the others (i.e., the other
coefficients will be negative).
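For instance (just a sketch, and assuming here that alt1 is the standard medication; with R's default treatment contrasts the first factor level goes into the intercept):

```r
## With the intercept kept, the first level of "alternative" is
## the baseline; the remaining alternative coefficients are then
## log-odds differences from the standard treatment.
m <- lmer(choice ~ 1 + alternative + effectiveness + (1 | subject),
          family = binomial, data = long)
```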

> effectiveness20    2.543      0.321    7.93  2.3e-15 ***
> effectiveness40    3.846      0.376   10.22  < 2e-16 ***

Okay, so these two are comparisons against effectiveness10, which is
the reference level.

>
> Of interest in conjoint analysis are the relative preferences, or
> "part-worth utilities", and to my understanding I can get these by
> comparing coefficients, e.g. increasing effectiveness from 10 to 40
> (3.846) is about 1.5 times as important as increasing effectiveness
> from 20 to 40 (revealed in choice behaviour in that more subjects
> choose the alternative). Alternatives 1 and 2 are not significant
> because standard and alternative get chosen about equally often, but
> they can be compared in that alternative 3 is preferred to medication,
> and that preference is, e.g., 2.42/.67= 3.6 times stronger than the
> slight preference of medication over alternative 2.

Have you tried getting the mean predictions from the model and feeding
them through the inverse logit?  It can be helpful for seeing what's
going on within-subject between conditions.
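In base R the inverse logit is plogis(); e.g., sketching with your model-1 estimates (and calling the fitted model m, a made-up name):

```r
## Predicted probability of choosing alt3 at effectiveness 40,
## for a subject whose random intercept is 0:
b <- fixef(m)                # fixed effects, on the log-odds scale
plogis(b["alternativealt3"] + b["effectiveness40"])
## fitted(m) gives per-observation fitted probabilities.
```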

You could also use relevel() to recode the factors so the model makes
the comparisons you want, or use one of the multiple-comparison
packages sometimes discussed on this list.
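For example (again a sketch with your variable names; refit the model after releveling):

```r
## Make alt3 the reference level, so the other alternative
## coefficients become contrasts against alt3.
long$alternative <- relevel(long$alternative, ref = "alt3")
## For all pairwise contrasts, multcomp::glht() on the fitted
## model is one option, e.g.
## glht(m, linfct = mcp(alternative = "Tukey")).
```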

>
> We also asked each subject once about their perceptions of
> responsibility for smoking, and had a hypothesis that high perceptions
> would lead to rejection of the alternative treatment.
>
>  $ respcause    : int  4 5 6 6 5 5 6 5 4 5 ...

This sounds like a hypothesized interaction between respcause and alternative.

You could test that with something like:

M1 <- lmer(choice ~ 1 + alternative + effectiveness + respcause +
           (1 | subject), family = binomial, data = long)

M2 <- lmer(choice ~ 1 + alternative + effectiveness + respcause +
           alternative:respcause + (1 | subject),
           family = binomial, data = long)

## Likelihood-ratio test for the interaction:
anova(M1, M2)

Just some thoughts.  Maybe I've partially understood what you're doing!

Cheers,

Andy
