[R] R's memory limitation and Hadoop
Jeff Newmiller
jdnewmil at dcn.davis.CA.us
Tue Sep 16 14:27:28 CEST 2014
If you need to start your question with a false dichotomy, by all means choose the option you seem to have already chosen and stop trolling us.
If you actually want an answer here, try searching the topic first (is "R hadoop" really so un-obvious a query?) and then phrase a specific question so someone has a chance to help you.
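For what it's worth, "hundreds of thousands of records" is usually well within what R holds in RAM on an ordinary machine, so the dichotomy really is false. A minimal sketch using the data.table package (the file name "records.csv" and the columns group_col/value_col are hypothetical, for illustration only):

    ## install.packages("data.table")   # once, if not already installed
    library(data.table)

    ## fread() parses large delimited files quickly; a few hundred
    ## thousand rows is comfortably in-memory territory.
    dt <- fread("records.csv")

    ## Group-wise aggregation, all done in RAM -- no Hadoop required.
    summary_dt <- dt[, .(mean_value = mean(value_col)), by = group_col]

If the data ever genuinely outgrows RAM, the usual search results for "R hadoop" include the RHadoop packages (rmr2, rhdfs), and there are out-of-core CRAN packages such as ff and bigmemory.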
---------------------------------------------------------------------------
Jeff Newmiller                                  jdnewmil at dcn.davis.ca.us
Research Engineer (Solar/Batteries/Software/Embedded Controllers)
---------------------------------------------------------------------------
Sent from my phone. Please excuse my brevity.
On September 16, 2014 4:40:29 AM PDT, Barry King <barry.king at qlx.com> wrote:
>Is there a way to get around R's memory-bound limitation by interfacing
>with a Hadoop database, or should I look at products like SAS or JMP
>to work with data that has hundreds of thousands of records? Any help
>is appreciated.
>
>--
>__________________________
>*Barry E. King, Ph.D.*
>Analytics Modeler
>Qualex Consulting Services, Inc.
>Barry.King at qlx.com
>O: (317)940-5464
>M: (317)507-0661
>__________________________