[Rd] code validation (was Re: NY Times article)
marc_schwartz at comcast.net
Sun Jan 11 17:23:18 CET 2009
on 01/10/2009 03:06 PM Spencer Graves wrote:
> Hi, All:
> What support exists for 'regression testing'
> (http://en.wikipedia.org/wiki/Regression_testing) of R code, e.g., as
> part of the "R CMD check" process?
> The"RUnit" package supports "unit testing"
> Those concerned about software quality of code they use regularly
> could easily develop their own "softwareChecks" package that runs unit
> tests in the "\examples". Then each time a new version of the package
> and / or R is downloaded, you can do "R CMD check" of your
> "softwareChecks": If it passes, you know that it passed all your checks.
> I have not used "RUnit", but I've done similar things computing the
> same object two ways then doing "stopifnot(all.equal(obj1, obj2))". I
> think the value of the help page is enhanced by showing the "all.equal"
> but not the "stopifnot". I achieve this using "\dontshow" as follows:
> obj1 <- ...
> obj2 <- ...
> \dontshow{stopifnot(}
> all.equal(obj1, obj2)
> \dontshow{)}
> Examples of this are contained, for example, in "fRegress.Rd" in
> the current "fda" package available from CRAN or R-Forge.
> Best Wishes,
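The stopifnot/all.equal idiom Spencer describes can be sketched in plain R; the data and the two computations here are hypothetical, chosen only to illustrate computing the same object two ways:

```r
# Minimal sketch of the regression-test idiom: compute the same
# quantity two independent ways, then assert that they agree.
x  <- c(1.5, 2.5, 3.5)
m1 <- mean(x)                 # built-in implementation
m2 <- sum(x) / length(x)      # independent re-derivation
stopifnot(all.equal(m1, m2))  # silent if equal; stops with an error otherwise
```

In an Rd file's \examples section, wrapping the stopifnot() call in \dontshow{} keeps the check running under "R CMD check" while only the all.equal() line appears on the rendered help page.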
I think that there are two separate issues being raised here.
One is, how does R Core implement and document an appropriate software
development life cycle (SDLC), which covers the development, testing and
maintenance of R itself. This would include "Base" R and the
Recommended packages.
The second is how does an end user do the same with respect to their own
use of R and their own R code development.
I'll answer the second one first, which is essentially that it is up to
the end user and their organization. There is an intrinsic
misunderstanding if an end user believes that the majority of the burden
for this is on R Core, or that it is up to R Core to facilitate the end
user's internal QA processes.
If the end user is in an environment that requires formalized IQ/OQ/PQ
types of processes (building jet engines for example...), then there
will be SOPs in place that define how these are to be accomplished. The
SOPs may need to be adjusted to R's characteristics, but they should be
in place. The end user needs to be familiar with these, implement
appropriate mechanisms that are compliant with them and operate within
those parameters to reasonably ensure the quality, consistency and
repeatability of their work.
This is no different for R than for any other mission-critical
application. If somebody is using SAS to build jet engines and has not
implemented internal processes that establish and document SAS'
performance, beyond what the SAS Institute documents, the end user
and their organization are out on a limb in terms of risk.
With respect to the first issue and R itself, as you may be aware, there
is a document available here:
which while geared towards the regulated clinical trials realm,
documents the R SDLC and related issues. The key is to document what R
Core does so that end users can be cognizant of the internal quality
processes in place and that an appropriate understanding of these can be
achieved. Given the focus of the document, it also covers other
regulatory issues applicable to clinical trials (e.g., 21 CFR 11).
In that document, I would specifically point you to Section 6 which
covers R's SDLC.
Note that this document DOES NOT cover CRAN add-on packages in any
fashion, as the SDLC for those packages is up to the individual authors
and maintainers to define and document. If a user is installing and
using CRAN add-on packages, they should communicate with those
package authors to identify their SDLC, and then implement their own
internal processes to test the packages.
The typical "R CMD check" process essentially only tests that the
package is a valid CRAN package, unless the CRAN package author has
implemented their own testing process (e.g., a 'tests' sub-directory)
with additional code, as R Core has done with procedures such as
'make check-all' subsequent to compiling R from source code.
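A hedged sketch of what a script in a package's 'tests' sub-directory might look like (the file name and the particular checks are hypothetical): "R CMD check" runs each *.R file under tests/ and fails the check if any of them signals an error.

```r
# Hypothetical tests/regression-checks.R inside a package source tree.
# Any error raised here causes "R CMD check" to fail.
fit <- lm(dist ~ speed, data = cars)   # 'cars' ships with base R
stopifnot(length(coef(fit)) == 2)      # intercept + slope
# slope of the classic cars regression, checked to a tolerance
stopifnot(isTRUE(all.equal(unname(coef(fit)[2]), 3.9324,
                           tolerance = 1e-4)))
```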