[R-sig-hpc] What to Experiment With?

Andrew Piskorski atp at piskorski.com
Sat Apr 21 13:38:26 CEST 2012

On Fri, Apr 20, 2012 at 03:16:47PM -0700, ivo welch wrote:

> I have about $5,000 to spend on building fast computer hardware to run
> our problems.  if it works well, I may be able to scrounge up another
> $10k/year to scale it up.

If you had a larger hardware budget, I'd suggest contacting one of the
(small, specialized) HPC cluster vendors to see what they recommend.
Like say these guys; from their founder Joe Landman's posts on the
Beowulf list, I strongly suspect they know what they're doing:


I haven't researched current hardware in a while, but as of spring
2010, AMD and Supermicro had stopped charging a premium for their
4-socket cpus and motherboards; they switched to basically linear
price scaling per socket.  Alex Chekholko commented on that here:


If that's still true, then a 4-socket, 24- to 64-core Opteron box may
currently (depending on your workload) be close to optimal on the
price/performance curve, particularly if you need large memory.  For
the same total number of CPU cores and amount of RAM, partitioning a
cluster into fewer, fatter (e.g. 4-socket) nodes will generally be
more convenient and flexible than using many smaller nodes, and thanks
to the small dollar savings from fewer cases, power supplies, etc.,
the fatter nodes may actually end up slightly cheaper overall.

Googling just now, you can order a maxed-out 4-socket, 64-core,
512-GB-RAM box for about $14k, or under $8k for 48 cores and 256 GB.
(Add more money if you want lots of disk I/O or a much faster
InfiniBand network.):
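
Working out the implied unit prices from those rough quotes (a quick
sketch; the dollar figures are just the approximate ones above):

```python
# Implied $/core and $/GB for the two quoted configurations.
# Prices are the approximate figures above ("about $14k", "under $8k").
configs = {
    "64-core / 512 GB": (14000, 64, 512),
    "48-core / 256 GB": (8000, 48, 256),
}
for name, (price, cores, ram_gb) in configs.items():
    print(f"{name}: ${price / cores:.2f}/core, ${price / ram_gb:.2f}/GB RAM")
```

The smaller box is cheaper per core; the bigger one is cheaper per GB
of RAM, so which is the better deal depends on whether your workload
is CPU-bound or memory-bound.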


However, I'm told that the performance, and likely also the
price/performance, of Intel CPUs has been improving relative to AMD's
for several years now, and as far as I recall it was only AMD that
dropped its 4-socket CPU prices to compete with 1- and 2-socket parts,
not Intel.  So you'd want to compare the two yourself, ideally by
running your own algorithms on representative AMD and Intel nodes.
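
A minimal, hypothetical timing harness for that kind of comparison:
run the identical script on each candidate node and compare the median
wall-clock times.  (workload() here is just a stand-in; substitute
your own algorithm.)

```python
# Minimal harness for comparing the same workload across machines.
import time
import statistics

def workload():
    # Placeholder compute kernel; replace with your real algorithm.
    return sum(i * i for i in range(1_000_000))

def benchmark(fn, repeats=5):
    """Return the median wall-clock time of fn over several runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

print(f"median runtime: {benchmark(workload):.3f} s")
```

The median over several repeats damps out one-off noise from caches,
other processes, and the like; just make sure the test problem is
sized like your real jobs, since relative CPU performance can shift
with working-set size.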

If you REALLY want to go as dollar-cheap as possible, you may be
interested in my old (2005) cookie-tray cluster ideas:


Since the rack (tray cage) is an open mesh, attach a 20-inch household
box fan to the side of the rack for better cooling.  Rather than the
1/4-inch-thick urethane foam tape, it'd probably work better to use
1/8-inch foam strips plus some heavy-duty Velcro, so that the
motherboards remain removable from the cookie trays.

For mid- to high-end hardware, the cost of nice rack-mount cases (with
fans etc.) and a rack to put them in isn't a significant fraction of
the total budget.  But if you manage to scrounge up really cheap
compute hardware, a cheap homebrew case and rack-mount solution like
those cookie trays starts looking attractive.  (Note that old, slow
computers are unlikely to be cost-effective for production compute
cluster purposes; they might not be worth it even if you get them for
free.  But they may be just fine for home experimentation or the
like.)

Andrew Piskorski <atp at piskorski.com>
