[R] Efficient way of loading files in R

Deepa deepamahm.iisc at gmail.com
Fri Sep 7 12:10:30 CEST 2018


The following is the system configuration:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 142
Model name:            Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
Stepping:              9
CPU MHz:               2844.008
CPU max MHz:           3500.0000
CPU min MHz:           400.0000
BogoMIPS:              5808.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              4096K
NUMA node0 CPU(s):     0-3


On Fri, Sep 7, 2018 at 3:38 PM Deepa <deepamahm.iisc at gmail.com> wrote:

> Hello,
>
> I am using a bioconductor package in R.
> The command that I use reads the contents of a file downloaded from a
> database and creates an expression object.
>
> The call works fine when the input file is about 10 MB, but when the
> file is around 40 MB the expression object isn't created.
>
> Is there an efficient way of loading a large input file to create the
> expression object?
>
> This is my code,
>
>
> library(gcrma)
> library(limma)
> library(biomaRt)
> library(GEOquery)
> library(Biobase)
> gseEset1 <- getGEO('GSE53454')[[1]] # file size 10 MB
> gseEset2 <- getGEO('GSE76896')[[1]] # file size 40 MB
>
> ##gseEset2 doesn't load and isn't created
>
> Many thanks
>
>
>
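One workaround worth trying, sketched below. This assumes the failure is a
download timeout (R's default `timeout` option is 60 seconds, which a 40 MB
transfer from GEO can easily exceed) rather than memory exhaustion;
`destdir` and `getGPL` are documented arguments of `getGEO()`, and the
`geo_cache` directory name is just an example.

```r
library(GEOquery)

## Raise R's download timeout so a slow 40 MB transfer can complete
## (the default of 60 seconds often aborts large GEO downloads).
options(timeout = 600)

## Cache downloads in a local directory; on a re-run getGEO() reuses the
## file in destdir instead of downloading it again.
dir.create("geo_cache", showWarnings = FALSE)

## getGPL = FALSE skips the (often large) platform annotation file,
## which further shrinks the download; getGEO() returns a list of
## ExpressionSet objects, one per platform.
gseEset2 <- getGEO("GSE76896",
                   destdir = "geo_cache",
                   getGPL  = FALSE)[[1]]
```

If the object still fails to appear with no error, running the call with
`Sys.setenv("VROOM_CONNECTION_SIZE" = 2^20)` unset aside, checking
`warnings()` right after the call usually reveals whether the download was
truncated.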





More information about the R-help mailing list