[BioC] How do you analyze very large Affymetrix data?

Henrik Bengtsson hb at stat.berkeley.edu
Wed Jun 17 00:51:18 CEST 2009


Since you don't say what you've tried ("any normalization") or what
you mean by "very long time", it's hard to give details.  However,
apart from the fact that raw data is read from CEL files and
normalized data is written back to new CEL files, there is nothing
special about aroma.affymetrix that makes it slower than other
methods.  Actually, great care has been taken to optimize the
algorithms/estimators for both memory usage and speed.
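
For reference, here is a minimal sketch of the standard RMA-style
workflow in aroma.affymetrix; the dataset name, chip type, and
directory layout below are placeholders, so adapt them to your own
data.  Each step writes its results to disk, which is what keeps the
memory footprint bounded even for hundreds of arrays:

  library("aroma.affymetrix")

  # Placeholder layout: CEL files in rawData/MyDataSet/HG-U133_Plus_2/
  # and the CDF in annotationData/chipTypes/HG-U133_Plus_2/
  verbose <- Arguments$getVerbose(-8, timestamp=TRUE)
  cs <- AffymetrixCelSet$byName("MyDataSet", chipType="HG-U133_Plus_2")

  # RMA-style background correction (results written to new CEL files)
  bc <- RmaBackgroundCorrection(cs)
  csBC <- process(bc, verbose=verbose)

  # Quantile normalization of the PM probes (also written to disk)
  qn <- QuantileNormalization(csBC, typesToUpdate="pm")
  csN <- process(qn, verbose=verbose)

  # Probe-level summarization via the RMA log-additive model
  plm <- RmaPlm(csN)
  fit(plm, verbose=verbose)
  ces <- getChipEffectSet(plm)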

What you might have experienced is the automatic generation of
internal annotation files ("index files"), which happens only once
(in a lifetime) for each new chip type you analyze.  You can read
more about what is going on here:

  http://groups.google.com/group/aroma-affymetrix/web/improving-processing-time

[sorry about the current "spam-ban" hiccup on Google's side that you
have to click through; very annoying]
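
If you want to get that one-time step out of the way up front rather
than in the middle of your first run, a sketch like the following
should work (placeholder chip type; it assumes getMonocellCdf(), the
helper that generates the internal "monocell" CDF, which is the slow
part):

  library("aroma.affymetrix")
  verbose <- Arguments$getVerbose(-8, timestamp=TRUE)

  # Locate the main CDF for the chip type (placeholder)
  cdf <- AffymetrixCdfFile$byChipType("HG-U133_Plus_2")

  # Generate the monocell CDF; only needs to be done once per chip type
  cdfM <- getMonocellCdf(cdf, verbose=verbose)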

Henrik
(author)

2009/6/16 UsuarioR España <kurtney_84 at hotmail.com>:
>
> Hi all,
>
> I am trying to load and apply some summarization methods to 200 .CEL files. The only R package I found that handles this is aroma.affymetrix, but it takes a very long time to apply any normalization.
>
> I am using R 2.9 and aroma.affymetrix version 1.1.0.
>
> Is there any other solution to this problem? What do you usually use?
>
> Thanks in advance.
>


