Intel Launches New Xeon CPU, Announces Deep Learning Inference Accelerator


nutjob2

They should have called it the BrainBurst architecture.

Or maybe they should just admit it's a bunch of overpriced FPUs? Nah. Intel is desperate to stay relevant, and most of its R&D budget goes to the marketing department.

bit_user

"Speaking of Purley, Intel also has its Skylake-EP, which is the next-generation Xeon Purley platform, on display at the show. Intel has general availability slated for mid-2017."
WHY IS THIS TAKING SO @#$!% LONG?!?

"The new chips boast support for AVX-512 instructions, which accelerate floating-point calculations and encryption algorithms."
Note that Intel has partitioned the AVX-512 instructions into different groups. The second-generation Xeon Phi (Knights Landing) is the first product to support AVX-512, but it doesn't support the entire set. The Purley-compatible Skylakes are actually slated to support a different subset. So it's not strictly a generational division, unlike previous vector instruction set extensions.

https://en.wikipedia.org/wiki/AVX-512#Instruction_set
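To make the fragmentation concrete, here's a minimal sketch of probing each subset separately at runtime, using GCC/Clang's __builtin_cpu_supports (the AVX-512 feature names assume a reasonably recent compiler):

```c
/* Minimal sketch: "supports AVX-512" is ambiguous, so each subset
 * has to be checked on its own. Requires GCC 5+ or a recent Clang. */
#include <stdio.h>

int main(void)
{
    /* Foundation: common to Knights Landing and the Purley Skylakes. */
    printf("AVX-512F:  %d\n", __builtin_cpu_supports("avx512f"));
    /* ER/PF: Knights Landing only. */
    printf("AVX-512ER: %d\n", __builtin_cpu_supports("avx512er"));
    printf("AVX-512PF: %d\n", __builtin_cpu_supports("avx512pf"));
    /* VL/BW/DQ: slated for the Skylake server parts, absent on KNL. */
    printf("AVX-512VL: %d\n", __builtin_cpu_supports("avx512vl"));
    printf("AVX-512BW: %d\n", __builtin_cpu_supports("avx512bw"));
    printf("AVX-512DQ: %d\n", __builtin_cpu_supports("avx512dq"));
    return 0;
}
```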

One of Intel's key developments is its 72-core Knights Landing (KNL) platform, which it claims offers 5x the performance and 8x the power efficiency of the competing GPU solutions.
You shouldn't just repeat these specious claims without qualification. Since these measurements were made against Kepler-generation GPUs, they weren't really "competing GPU solutions".

"KNL is also bootable, which is a key value proposition."
I think the bigger value proposition is that they can integrate with legacy codebases without the sort of rewrite necessary to exploit CUDA. You can program them in any language and run basically any existing software on them.
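As a minimal sketch of that point (a hypothetical saxpy loop; assumes a compiler with OpenMP support, e.g. gcc -fopenmp -O3): the same plain C that runs on any Xeon runs unchanged on a self-hosted KNL node, and a single pragma spreads it across the cores and vector lanes, with no CUDA port in sight.

```c
#include <stddef.h>

/* Hypothetical legacy kernel: nothing KNL-specific in the source. */
void saxpy(float a, const float *x, float *y, size_t n)
{
    /* One directive distributes iterations over all cores and lets the
     * compiler vectorize the body for whatever SIMD width is present. */
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```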

But for people who prize FLOPS/$ or FLOPS/W above all else, GPUs (and FPGAs) are still where it's at. ...unless you can afford to make your own ASIC, like Google.

PaulAlcorn

Managing Editor: News and Emerging Technology

Yes, in the article I linked we specified that the claims weren't made against newer GPUs (copy/pasted below), and we noted in this piece that those are Intel's claims, but I will add "previous-gen".

"It is notable that these results are from internal Intel testing, some of which the company generated with previous-generation GPUs. Intel indicated this is due to limited sample availability."


GPUs and FPGAs are still king from a FLOPS/$ and FLOPS/W perspective, but I'm curious how a GPU would compare if you also included the host CPU in the equation, since a GPU still requires one. The GPU would probably still win, but it would be interesting to see a more direct comparison.
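As a back-of-the-envelope illustration of why the host matters (every figure below is a made-up placeholder, not a measurement):

```c
#include <stdio.h>

int main(void)
{
    /* Placeholder numbers for illustration only. */
    double gpu_tflops = 5.0, gpu_watts = 250.0;   /* hypothetical GPU  */
    double host_watts = 100.0;                    /* hypothetical host */
    double knl_tflops = 3.0, knl_watts = 215.0;   /* hypothetical KNL  */

    printf("GPU alone:       %.4f TFLOPS/W\n", gpu_tflops / gpu_watts);
    printf("GPU + host CPU:  %.4f TFLOPS/W\n",
           gpu_tflops / (gpu_watts + host_watts));
    printf("Self-hosted KNL: %.4f TFLOPS/W\n", knl_tflops / knl_watts);
    return 0;
}
```

With these placeholder numbers the GPU's lead shrinks once the host is counted; the real comparison would need measured figures.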