GC, Disk Storage, etc

Rob Creecy rcreecy@Census.GOV
Thu, 01 Apr 1999 15:40:48 -0500

Something to keep in mind when designing future memory management
 schemes is that in the not too distant future many machines will
 be 64-bit and have more than 2GB of memory, and it would be nice to be
 able to fully utilize the available memory. I discovered what seems to
 be a limit of 2GB on the vector heap size on a DEC Alpha running UNIX
 4.0: the malloc for the vector heap in memory.c fails when the request
 is larger than 2GB. I presume this is because address pointers are 32
 bits. This situation is unfortunate since the machine has 12GB of
 memory and I had a problem which could have used more than 2GB of it.
 I wrote a program instead (which may have been better in the long run
 anyway).

 I don't have any ideas on how to address this problem, but thought I'd
 raise the issue for your consideration.


Ross Ihaka wrote:
> Gregory R. Warnes writes:
>  >
>  > I'm wondering what the plans are for the memory management system.
>  >
>  > Once there is a dynamic memory allocation system in place it would be nice
>  > to augment that so that objects which are not currently being used need
>  > not be resident in memory.  This way, rarely used objects will not be
>  > filling up the memory of the R process.
>  >
>  > (Yes, I am thinking about a cross between the current R and S object
>  > storage ideas.)
>  >
>  > It should be possible to set up an object access system that loads objects
>  > on demand, and saves them out to disk when not used using a LRU strategy.
>  >
>  > The save to disk could be part of the garbage collection strategy.  Some
>  > heuristic like: if we run out of memory, throw all the objects we've not
>  > used for XXX period of time out to disk.  If there are no such objects,
>  > try to increase our available memory.  If that fails, throw things out
>  > from least recently used forward.  If we can't manage to satisfy the
>  > request using either method, fail the memory allocation and tell the user.
>  >
>  > Glancing at the current code, it seems that the hardest part of this would
>  > be keeping track of object usage, without imposing excessive overhead.
> This is very much what I have in mind, but more for system objects
> than user objects.  At startup or when a package is attached, instead
> of loading all the objects, we build a list of "promise-to-load"
> objects which will load the objects when they are referenced (this is
> problematic for closures though because of shared state, but there are
> very few if any in the system code).  It would certainly be possible
> to time-stamp objects when they are referenced and unload them at gc
> time when they have not been referenced for a while.
> I'm currently looking at Luke's xlispstat autoload ideas and waiting
> for the availability of his new garbage collector to see if there might
> better possibilities.
> Step one though is to separate the save/restore mechanism from the
> garbage collector so that experiments of this type get easier.
>         Ross
> -.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-
> r-devel mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
> Send "info", "help", or "[un]subscribe"
> (in the "body", not the subject !)  To: r-devel-request@stat.math.ethz.ch
> _._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._._