[Rd] unstable corner of parameter space for qbeta?

Martin Maechler maechler at stat.math.ethz.ch
Thu Mar 26 15:34:29 CET 2020


>>>>> J C Nash 
>>>>>     on Thu, 26 Mar 2020 09:29:53 -0400 writes:

    > Given that a number of us are housebound, it might be a good time to try to
    > improve the approximation. It's not an area where I have much expertise, but in
    > looking at the qbeta.c code I see a lot of root-finding, where I do have some
    > background. However, I'm very reluctant to work alone on this, and will ask
    > interested others to email off-list. If there are others, I'll report back.

Hi John.
Yes, qbeta() {in its "main branches"} does zero finding, but
zero finding of   pbeta(...) - p*,  and as I tried to explain in my
last e-mail, the real problem is that pbeta() itself is not
accurate enough in some unstable corners ...
The order of fixing should typically be
1) fix pbeta()
2) look at qbeta(), which may then not even need a fix, because its
   problems may have been entirely a consequence of pbeta()'s inaccuracies.
   And even in cases where the qbeta() problems are not
   only pbeta()'s "fault", any fixes that would still be needed
   crucially depend on the detailed working of the function whose
   zero(s) are sought, i.e., pbeta().
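
To make that dependence concrete, here is a minimal sketch (NOT the
actual algorithm in qbeta.c, which uses its own Newton/bisection steps)
of what "zero finding of  pbeta(...) - p*" means, and why such an
inversion can never be more accurate than pbeta() itself:

  ## hypothetical helper, for illustration only
  qbeta_via_uniroot <- function(p, shape1, shape2, tol = 1e-12) {
    ## locate x in (0, 1) with  pbeta(x, shape1, shape2) - p == 0
    uniroot(function(x) pbeta(x, shape1, shape2) - p,
            interval = c(0, 1), tol = tol)$root
  }
  qbeta_via_uniroot(0.25, shape1 = 2, shape2 = 3)   # compare with
  qbeta(0.25, 2, 3)
  ## however tight 'tol' is, the located root inherits whatever error
  ## pbeta() makes near the solution -- hence "fix pbeta() first".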

    > Ben: Do you have an idea of parameter region where approximation is poor?
    > I think that it would be smart to focus on that to start with.

----------------------------

Rmpfr matrix-/vector products:

    > Martin: On a separate precision matter, did you get my query early in year about double
    > length accumulation of inner products of vectors in Rmpfr? R-help more or
    > less implied that Rmpfr does NOT use extra length. I've been using David
    > Smith's FM Fortran where the DOT_PRODUCT does use double length, but it
    > would be nice to have that in R. My attempts to find "easy" workarounds have
    > not been successful, but I'll admit that other things took precedence.

Well, the current development version of 'Rmpfr' on R-forge now
contains facilities to enlarge the precision of the computations
by a factor 'fPrec', with default 'fPrec = 1';
notably,  x %*% y  (where `%*%` cannot take more than two
arguments) now has a counterpart  matmult(x, y, ...)  which
allows more arguments, namely 'fPrec', or directly 'precBits';
and of course there are  crossprod() and tcrossprod(), which one
should use when applicable, and they also got the 'fPrec' and
'precBits' arguments.
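
For instance (a hedged sketch against that development version, with the
argument names 'fPrec' / 'precBits' exactly as described above):

  library(Rmpfr)
  x <- mpfr(1:7, precBits = 128) / 3
  y <- mpfr(7:1, precBits = 128) / 7
  x %*% y                        # inner product at the inputs' 128-bit precision
  matmult(x, y, fPrec = 2)       # the same product, computed at doubled precision
  crossprod(x, y, fPrec = 2)     # t(x) %*% y, likewise at doubled precision
  matmult(x, y, precBits = 512)  # or request an absolute precision directly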

{The %*% etc. precision increase still does not work optimally
 efficiency-wise, as it increases the precision of all
 computations simply by increasing the precision of x and y (the inputs).}
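
In other words, the effect is roughly this manual equivalent (a sketch
using roundMpfr() and getPrec() from Rmpfr):

  x2 <- roundMpfr(x, precBits = 2 * getPrec(x)[1])   # carry the inputs at 2x precision
  y2 <- roundMpfr(y, precBits = 2 * getPrec(y)[1])
  x2 %*% y2    # every intermediate multiply-add then runs at that higher precision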

The whole matrix and matrix-vector arithmetic is still
comparatively slow in Rmpfr .. mostly because I valued human time
(mine!) much higher than computer time in its implementation.
That's one reason I would never want to double the precision
everywhere, as it decreases speed even more, and often
unnecessarily: doubling the precision is basically a "worst-case
scenario" precaution.

Martin


