[R] deviance vs entropy

Thomas Lumley tlumley at u.washington.edu
Thu Feb 15 17:41:28 CET 2001


On Thu, 15 Feb 2001, RemoteAPL wrote:

> Hello,
>
> The question looks simple. It's probably even a stupid one. But I spent several
> hours searching the Internet and downloaded tons of papers where deviance is
> mentioned... and I haven't found an answer.
>
> Well, the use of entropy when splitting a node of a classification tree is clear
> to me. It makes sense, because entropy is a good old measure of how uniform a
> distribution is, and of course we want the class distribution in each node to be
> as pure as possible, ideally representing one class only.
>
> Where does deviance come from at all? I look at the formula and see that the
> only difference from entropy is the use of the *number* of points in each class,
> instead of the *probability*, as the multiplier of log(Pik). So it looks like
> deviance and entropy differ by a factor of 1/N (or 2/N), where N is the total
> number of cases. Then WHY call it "deviance"? Is there a historical reason?
> Or, most likely, I do not understand something very basic. Please help.
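The relationship described in the question is easy to check numerically.
A minimal R sketch (the class counts `n` are made up for illustration,
and natural logs are assumed throughout):

    ## class counts in one node (made-up numbers)
    n <- c(40, 10)
    N <- sum(n)
    p <- n / N

    entropy  <- -sum(p * log(p))      # -sum_k p_k log(p_k)
    deviance <- -2 * sum(n * log(p))  # -2 sum_k n_k log(p_k)

    all.equal(deviance, 2 * N * entropy)  # TRUE: deviance = 2 * N * entropy

So the two really do differ only by a factor of 2N, and the question is
why anyone would want that factor.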


Entropy is, as you say, a measure of non-uniformity. Deviance (which is
based on the log-likelihood) is a measure of evidence.  A given level of
improvement in classification is much stronger evidence for a split if it
is based on a large number of points, and that is exactly what the factor
of N (or 2N) buys you.  For example, with 2 points you can always find a
split that gives perfect classification; with 2000 points it is very
impressive to be able to get perfect classification with one split.
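
To see the same point numerically, here is a small sketch (node_dev is a
hypothetical helper, not from the original post; zero counts are dropped
so that 0 * log(0) terms vanish):

    ## deviance of a node from its class counts; zero counts drop out
    node_dev <- function(n) {
      p <- n / sum(n)
      keep <- n > 0
      -2 * sum(n[keep] * log(p[keep]))
    }

    node_dev(c(1, 1))        # parent of 2 points:    ~2.77
    node_dev(c(1000, 1000))  # parent of 2000 points: ~2772.6
    ## a perfect split makes both children pure, so child deviance is 0;
    ## the drop in deviance (the "evidence") is ~2.77 vs ~2772.6,
    ## even though the parent entropy is log(2) in both cases

The parent entropy is log(2) in both cases, but the deviance that a
perfect split removes is about 2.8 with 2 points and about 2770 with 2000.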

	-thomas

Thomas Lumley			Asst. Professor, Biostatistics
tlumley at u.washington.edu	University of Washington, Seattle
