8350rocks
Distinguished
juanrga:
Nvidia is comparing against a Haswell Celeron consuming roughly double the power; this important detail cannot be forgotten. Of course, third-party analyses/benchmarks are needed, but the interesting part for me is that their in-order + DCO design could be easily scalable. One can dream that their Boulder core for high-performance servers is an ultrawide machine. There is some irony in the fact that Nvidia is matching Haswell using an Itanium-like VLIW architecture.
I read that bit...
Essentially, Nvidia is still using in-order execution, and they expect to optimize the software such that it makes up for the performance deficit of not going OoO. Power consumption will be low as well, but that is largely because the power-hungry hardware required for OoO execution (reorder buffers, register renaming, scheduling logic) is markedly absent.
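To make the trade-off concrete, here is a toy sketch (hypothetical, not Nvidia's actual DCO) of why software scheduling matters on an in-order core: a dependent instruction stalls until a load completes, so the optimizer hoists independent work into the load's shadow, which is exactly the kind of reordering OoO hardware would otherwise do for you. The instruction tuples, latencies, and register names below are all made up for illustration.

```python
LOAD_LATENCY = 3  # cycles until a loaded value is ready (assumed)

def cycles_in_order(program):
    """Count cycles on a simple in-order pipeline: one issue per cycle,
    plus stalls whenever a source register is not ready yet."""
    ready = {}  # register -> cycle its value becomes available
    cycle = 0
    for op, dst, srcs, latency in program:
        # in-order: stall until all source registers are ready
        cycle = max([cycle] + [ready.get(r, 0) for r in srcs])
        ready[dst] = cycle + latency
        cycle += 1  # issuing the instruction takes one cycle
    return cycle

# (op, destination, sources, latency)
naive = [
    ("load", "r1", ["r0"], LOAD_LATENCY),
    ("add",  "r2", ["r1"], 1),  # stalls waiting on the load
    ("mul",  "r3", ["r4"], 1),  # independent work, issued too late
    ("mul",  "r5", ["r4"], 1),
]

scheduled = [
    ("load", "r1", ["r0"], LOAD_LATENCY),
    ("mul",  "r3", ["r4"], 1),  # independent work fills the load shadow
    ("mul",  "r5", ["r4"], 1),
    ("add",  "r2", ["r1"], 1),  # load result is ready by now
]

print(cycles_in_order(naive), cycles_in_order(scheduled))  # → 6 4
```

Same four instructions, two fewer cycles, purely from reordering in software; the catch is that the optimizer has to find that independent work at compile/translation time, which is the bet Nvidia is making.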
Notice that the benchmarks shown are all either memory benchmarks or generic integer/FP benchmarks.
I will be interested to see what it really does in terms of performance, simply because I am curious how much Nvidia has overstated it with the propaganda this time around.