[R-sig-hpc] Intel Phi Coprocessor?

ivo welch ivo.welch at gmail.com
Mon Jun 10 17:43:27 CEST 2013

thx, simon.

> No, parallel does not use threads. multicore forks off a process from the current session, all the other methods create a new process, send the data to it etc.
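to make sure I follow the distinction: a minimal sketch of the two backends (fork vs socket), assuming plain library(parallel); the fork backend falls back to serial on Windows since it cannot fork:

```r
library(parallel)

# fork-based backend: the worker is forked from the current session, so it
# inherits the workspace copy-on-write (serial fallback on Windows, which
# does not support forking)
cores <- if (.Platform$OS.type == "windows") 1L else 2L
res_fork <- mclapply(1:4, function(i) i^2, mc.cores = cores)

# socket-based backend: fresh R processes are started and the data has to
# be shipped to each worker over a connection
cl <- makeCluster(2)
res_sock <- parLapply(cl, 1:4, function(i) i^2)
stopCluster(cl)

unlist(res_fork)  # 1 4 9 16, same as unlist(res_sock)
```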

there is a paragraph in the library(parallel) package description (apr
18, 2013) that says:

"On Windows the default is to report the number of logical CPUs. On
modern hardware (e.g. Intel Core i7) the latter may not be unreasonable
as hyper-threading does give a significant extra throughput."

I took this to mean that threads are useful.
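for reference, detectCores() reports either count, which is where that paragraph applies (the actual numbers depend on the machine, and the physical count can be NA on some platforms):

```r
library(parallel)

n_logical  <- detectCores(logical = TRUE)   # counts hyper-threads too
n_physical <- detectCores(logical = FALSE)  # physical cores; NA on some platforms

# on a hyper-threaded i7, n_logical is typically twice n_physical; whether
# the extra logical CPUs help depends on the workload, because each
# parallel worker is still a full R process, not a thread
c(logical = n_logical, physical = n_physical)
```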

> However, with so few cores that i5/i7 have I would certainly go for more. If you want real speed, you have to pay much more ;)

depressingly more.  the 6-core and 8-core xeons are now two generations
old (still based on Sandy Bridge).  haswell only exists in 4-core parts,
though, and ivy bridge-E has been delayed.  maybe amd kaveri will be a
quantum leap...again, next year.  alas, I am getting so old, I just hope
to still be alive by then.  if my computer is twice as fast, maybe I can
write twice as many papers by next year...[= weird sense of humor]
I am now wondering what the smallest form-factor i7-haswell board is,
so I could patch a few of them together.  I wish there were a good
5-motherboard chassis, but they don't seem to exist.  I guess I will
need 5 SFFs.

> At least with gcc I found this to be a mixed bag, sse is certainly faster, but although avx can in theory speed double-precision arithmetics, it can also slow it down quite a bit, so for now I have only found MKL to use AVX to be consistently faster, but not the gcc or llvm compilers. This may change as the compilers get better ... hopefully ...

I tried gcc with more aggressive compile optimization flags and sse2.
dirk e predicted correctly that it would make little difference over
the stock binary R distribution.  :-(
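for the record, this is roughly the sort of thing I tried -- a hypothetical config.site fragment for building R from source; the flag choices are illustrative, not a recommendation:

```sh
## hypothetical config.site fragment, read by R's configure script
CC="gcc"
CFLAGS="-O3 -msse2 -mfpmath=sse"
FFLAGS="-O3 -msse2"
```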

I don't understand enough about the internals of intel processors,
compilers, and R, but it is surprising to me that with all the focus
on vector processing, the various SIMD instruction-set extensions
(MMX, SSE, AVX) still seem to make little difference in R in 2013.
what exactly is the default here?  are we still using the old intel
80387 (x87) instructions, "just" better optimized for a vector
language like R?
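to make my question concrete: each vectorized arithmetic operation in R is one call into a compiled C loop, so any SSE/AVX benefit would have to come from the compiler vectorizing those loops -- an illustrative sketch:

```r
n <- 1e6
x <- runif(n); y <- runif(n)

# vectorized arithmetic: a single call into one compiled C loop -- this
# is the loop that -msse2 / -mavx would (or would not) auto-vectorize
z1 <- x + y

# the same arithmetic in interpreted R, for contrast; the large gap
# between the two is interpreter overhead, not SIMD
z2 <- numeric(n)
for (i in seq_len(n)) z2[i] <- x[i] + y[i]

all.equal(z1, z2)  # TRUE
```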

> Cheers,
> Simon
>> /iaw
>> ----
>> Ivo Welch (ivo.welch at gmail.com)
>> http://www.ivo-welch.info/
>> J. Fred Weston Professor of Finance
>> Anderson School at UCLA, C519
>> Director, UCLA Anderson Fink Center for Finance and Investments
>> Free Finance Textbook, http://book.ivo-welch.info/
>> Editor, Critical Finance Review, http://www.critical-finance-review.org/
>> On Mon, Jun 10, 2013 at 5:15 AM, Simon Urbanek
>> <simon.urbanek at r-project.org> wrote:
>>> On Jun 10, 2013, at 1:44 AM, ivo welch wrote:
>>>> does R run on the intel phi coprocessor?  the intel literature makes it seem as if it can be treated just like a 50-core 200-thread just-like-i686 processor running linux, albeit with only 8GB of very fast shared RAM.  some posts have suggested it can be 2-3 times as fast as two high-end Intel Xeon 8-core machines.  how do simple library(parallel) R tasks scale on it?
>>> Given that R is not thread-safe and almost everything (apart from parallel BLAS) is single-threaded, it's exactly the opposite of what you need for R. Explicit parallelization in R has overhead and cannot use threads, so you're better off with higher clock speed than a large number of cores (unless you use those explicitly for particular tasks by writing your own low-level code). I was not able to test phi, but generally, in our experience scaling to many cores does not work very well, in particular when you have so little RAM (the only way parallel can scale is by running multiple processes, which limits the amount of memory sharing that can be done). So, the way I see it, you'd have to treat phi like a GPU: you'll be able to leverage the speeds that are claimed by very specific code and algorithms written for it (or, e.g., by running BLAS on it if that's what you do often), but it will be much slower than Xeons for regular use of R. Your mileage may vary - this is just my personal experience evaluating high-core machines (250+) and R (the lesson was it's better to get multiple low-core, high-clockspeed, high-RAM machines instead - the opposite of phi), not particularly with phi.
>>> Cheers,
>>> Simon
>>>> regards,
>>>> /iaw
>>>> ----
>>>> Ivo Welch (ivo.welch at gmail.com)
>>>> _______________________________________________
>>>> R-sig-hpc mailing list
>>>> R-sig-hpc at r-project.org
>>>> https://stat.ethz.ch/mailman/listinfo/r-sig-hpc
