[Rd] strange apparently data-dependent crash with large dataset
tplate at blackmesacapital.com
Tue Jun 8 17:11:15 CEST 2004
At Monday 12:14 PM 6/7/2004, Duncan Murdoch wrote:
>On Mon, 7 Jun 2004 18:59:27 +0200 (CEST), tplate at blackmesacapital.com wrote:
> >I'm consistently seeing R crash with a particular large data set. What's
> >strange is that although the crash seems related to running out of memory,
> >I'm unable to construct a pseudo-random data set of the same size that also
> >causes the crash. Further adding to the strangeness is that the crash only
> >happens if the dataset goes through a save()/load() cycle -- without that,
> >the command in question just gives an out-of-memory error, but does not
> >crash.
>This kind of error is very difficult to debug. What's likely
>happening is that in one case you run out of memory at a place with a
>correct check, and in the other you are hitting some flaky code that
>assumes every memory allocation is guaranteed to succeed.
This seems likely, given that there is quite a wait (with the CPU pegged)
between when the out-of-memory message is printed and when the
Windows error dialog appears. I guess the different setups (save()/load()
vs. read.table()) leave memory in different states, which then
affects whether or not the bug is triggered.
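For concreteness, here is a minimal sketch of the two paths being
compared (the file names and the final command are hypothetical
stand-ins for the actual data set and operation):

  ## Path 1: read the data directly; here the command reliably gives
  ## an out-of-memory error, but R does not crash.
  x <- read.table("bigdata.txt")      # hypothetical file name
  result <- try(some.command(x))      # hypothetical failing command

  ## Path 2: the same data after a save()/load() cycle; after this,
  ## the same command crashes R instead of reporting the error.
  save(x, file = "bigdata.RData")
  rm(x); gc()                         # clear the workspace between runs
  load("bigdata.RData")
  result <- try(some.command(x))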
Is code that assumes every memory allocation is guaranteed to succeed
considered OK, or is it something to fix when found?
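For contrast, the correctly-checked case is visible from the R level:
a failed allocation surfaces as an ordinary, catchable R error rather
than a crash (the allocation size below is an arbitrary value chosen
to exceed available memory):

  ## With a correct check, an impossible allocation is reported as a
  ## normal R error that tryCatch() can intercept:
  tryCatch(numeric(2^29),  # ~4 Gb of doubles; adjust to exceed your RAM
           error = function(e) conditionMessage(e))
  ## e.g. "cannot allocate vector of size ..." -- the flaky case is C
  ## code that skips this check and uses the failed allocation anyway.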
Thanks to you and Brian Ripley for the informative responses.
-- Tony Plate
>You could install DrMinGW (which produces a stack dump when you
>crash), but it's not necessarily informative: often the crash occurs
>relatively distantly from the buggy code that caused it.
>The other problem with this kind of error is that it may well
>disappear if you run under a debugger, since that will make you run
>out of memory at a different spot, and it may not appear on a
>different machine. For example, I ran your examples and they all
>failed because R ran out of memory, but none crashed.