[R] Question on estimating standard errors with noisy signals using the quantreg package
Thorsten Vogel
vogeltho at staff.hu-berlin.de
Tue Nov 1 09:29:51 CET 2011
Many thanks for your comments. The median of the r_i is somewhere around
1000. For the time being there are no covariates, though this might change
in the future; we are only just starting to exploit a very nice data set.

Regarding the probability of being in the data, p, I would say it is indeed
constant across doctors. The data set is a subset of a larger administrative
data set. While the administrative data cover all patients, the data we use
cover all patients born on one of four days of the month (specified a
priori). Since I regard this sampling procedure as akin to drawing patients
at random from the complete administrative data set, I think p = 4/30 is
constant across doctors.
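In case it is useful, here is a toy check of this sampling scheme (birth
days drawn uniformly over a 30-day month; the four sampling days below are
hypothetical, since the real ones are fixed a priori):

set.seed(1)
days <- c(5, 12, 19, 26)                  # hypothetical pre-specified birth days
birthday <- sample(1:30, 1e5, replace = TRUE)
mean(birthday %in% days)                  # close to 4/30 ~ 0.133 for every doctor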
Again, I very much appreciate any comments or suggestions.
Regards, Thorsten
-----Original Message-----
From: Roger Koenker [mailto:rkoenker at illinois.edu]
Sent: Monday, 31 October 2011 21:24
To: Thorsten Vogel
Cc: r-help at r-project.org
Subject: Re: [R] Question on estimating standard errors with noisy signals
using the quantreg package
On Oct 31, 2011, at 7:30 AM, Thorsten Vogel wrote:
> Dear all,
>
> My question might be more of a statistics question than a question on R,
> although it's on how to apply the 'quantreg' package. Please accept my
> apologies if you believe I am strongly misusing this list.
>
> To be very brief, the problem is that I have data on only a random draw,
> not all of doctors' patients. I am interested in, say, the median number
> of patients of doctors. Does it suffice to use the "nid" option in
> summary.rq?
>
> More specifically, if the model generating the number of patients, say,
> r_i, of doctor i is
> r_i = const + u_i,
> then I think I would obtain the median of the number of doctors' patients
> using rq(r ~ 1, ...) and plugging this into summary.rq() using the option
> se = "iid".
How big are the r_i? I presume that they are big enough that you don't want
to worry about the integer "features" of the data? Are there really no
covariates? If so, then you are fine with the iid option, but if not, it is
probably better to use "nid". If the r_i can be small, it is worth
considering the dithering approach of Machado and Santos-Silva (JASA, 2005).
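For small r_i, a minimal sketch of that dithering idea, using quantreg's
dither() to add a uniform right-jitter to the counts before fitting (the
Poisson counts here are purely illustrative):

library(quantreg)
set.seed(1)
y <- rpois(200, lambda = 3)                # small integer counts
z <- dither(y, type = "right", value = 1)  # y + Uniform(0,1), breaking the ties
fit <- rq(z ~ 1, tau = 0.5)
summary(fit, se = "nid")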
>
> Unfortunately, I don't observe r_i in the data but, instead, in the data
> I only have a fraction p of these r_i patients. In fact, with (known)
> probability p a patient is included in the data. Thus, for each doctor i
> the number of patients IN THE DATA follows a binomial distribution with
> parameters r_i and p. For each i I now have s_i patients in the data,
> where s_i is a draw from this binomial distribution. That is, the problem
> with the data is that I don't observe r_i but s_i.
Is it reasonable to assume that p is the same across doctors? This seems to
be some sort of compound Poisson problem to me, but I may misunderstand
your description.
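For what it's worth, a minimal simulation of that thinned-count structure,
comparing the (infeasible) fit on the true counts with the feasible fit on
the rescaled observed counts (all parameter values are illustrative):

library(quantreg)
set.seed(1)
p <- 4/30
r <- rpois(500, lambda = 1000)          # true, unobserved patient counts
s <- rbinom(500, size = r, prob = p)    # patients actually in the data
summary(rq(r ~ 1, tau = 0.5), se = "iid")       # benchmark on the true counts
summary(rq(I(s/p) ~ 1, tau = 0.5), se = "nid")  # feasible version; larger SE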
>
> Simple Monte Carlo experiments confirm my intuition that standard errors
> should be larger when using the "noisy" information s_i/p instead of
> (the unobserved) r_i.
>
> My guess is that I can consistently estimate any quantile of the number
> of doctors' patients AND THEIR STANDARD ERRORS using quantreg's rq
> command: rq(I(s/p) ~ 1, ...) and the summary.rq() command with option
> se = "nid".
>
> Am I correct? I am grateful for any help on this issue.
>
> Best regards,
> Thorsten Vogel
>