Updated SPEC benchmarks

Games require lots of integer math for score-tracking and big number displays, but for the most part, it's float. 3D games with serious detail require serious precision, and doing it all in integer (fixed-point, or whatever the technique was called that let the N64 do 3D in integer only) would be bad for image quality and slow. Further proof of how much float matters is the huge debate around FP precision on the current generation of video cards, and the allegations that nVidia used 16-bit FP precision instead of 24-bit, the minimum required by DX9 compliance tests. And it's actually 128-bit precision, though I don't know why they then refer to it as 16, 24, or 32.
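
To illustrate the image-quality problem with integer-only 3D math, here's a minimal C sketch using a 16.16 fixed-point format (an assumption for illustration, not necessarily what any console actually used): a tiny per-frame step that a 32-bit float represents fine simply vanishes in fixed point.

/* A minimal sketch of integer-only (fixed-point) math, using an assumed
 * 16.16 format for illustration: 16 bits of integer, 16 bits of fraction. */
#include <stdio.h>
#include <stdint.h>

typedef int32_t fix16;                      /* 16.16 fixed-point number */
#define TO_FIX(x)   ((fix16)((x) * 65536.0))
#define TO_FLOAT(x) ((double)(x) / 65536.0)

static fix16 fix_mul(fix16 a, fix16 b) {
    /* widen to 64 bits so the intermediate product doesn't overflow */
    return (fix16)(((int64_t)a * b) >> 16);
}

int main(void) {
    double step = 0.00001;                  /* a tiny per-frame increment */
    fix16  fstep = TO_FIX(step);            /* smallest step is 1/65536 */

    printf("float step: %.8f\n", step);
    printf("fixed step: %.8f  (truncated to zero!)\n", TO_FLOAT(fstep));
    printf("float step*step: %.12f\n", step * step);
    printf("fixed step*step: %.12f\n", TO_FLOAT(fix_mul(fstep, fstep)));
    return 0;
}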

--
I am my own competition. -VJK
 
Have you read the discussion on the boards about how quickly people rushed to defend GCC? A shame, since it was a major clean win for the Intel compiler. Frank, the guy who compiled the workload, tried all the "tips" given beforehand by the GCC "gurus"; none of them produced a faster overall score. The Intel compiler was also the fastest with SSE disabled versus all the others, including other compilers with SSE2 optimization, let alone the staggering 3x lead when its own SSE2 is enabled, hitting a nice 1.8 GFLOPS in many tests.
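
For context, the kind of code where that SSE2 lead shows up is dense FP loops like this minimal sketch (my own example, not Frank's actual workload); with vectorization, the compiler processes four packed 32-bit floats per instruction instead of one:

/* saxpy.c - a minimal sketch, not the benchmark workload itself.
 * Build with vectorization enabled, e.g.: gcc -O3 -msse2 -mfpmath=sse saxpy.c */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* SAXPY: y = a*x + y. A vectorizing compiler can issue these
     * 32-bit float operations four at a time with SSE. */
    const float a = 0.5f;
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);            /* 0.5 * 1.0 + 2.0 = 2.5 */
    return 0;
}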

To clear up my "even I cannot read it" post from before: none of the Unreal games ever featured SSE code, only plain x86, so even a powerful 3.2GHz Canterwood is still stuck in the 100 FPS range. It's not about how huge the game engine is but about how badly it works; likewise, a P4 3.0 on NWN reaches only 60 FPS even with the best graphics card.

The bad programming in Unreal was intentional.

[-peep-] French
 
On video cards, the precision is called "128-bit" because all data values (pixels) are handled as vectors of four 32-bit FP values: a red value, a green value, a blue value, and an alpha value. They are massively parallel SIMD processors (in fact, very similar to IA-64's design).
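
A minimal sketch of that idea using x86 SSE intrinsics (standing in for the GPU hardware, which this obviously isn't): one pixel is four 32-bit floats, 128 bits in total, and a single instruction operates on all four channels at once.

/* One RGBA pixel as a single 4 x 32-bit float vector (128 bits total). */
#include <stdio.h>
#include <xmmintrin.h>              /* SSE: __m128 holds four 32-bit floats */

int main(void) {
    /* _mm_set_ps takes arguments high lane to low: a, b, g, r */
    __m128 pixel = _mm_set_ps(1.0f, 0.25f, 0.5f, 0.75f);
    __m128 half  = _mm_set1_ps(0.5f);

    /* one SIMD multiply dims all four channels at once */
    __m128 dimmed = _mm_mul_ps(pixel, half);

    float out[4];
    _mm_storeu_ps(out, dimmed);
    printf("r=%.3f g=%.3f b=%.3f a=%.3f\n", out[0], out[1], out[2], out[3]);
    return 0;
}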

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
That 128-bit figure relates to DX9, which was speaking of vector instruction support.

IA-64 has almost no vector ops; it uses an inner loop if needed.

I don't like french test
 
Most graphics cards' native ISA is VLIW. It isn't SIMD per se, but it functions like SIMD most of the time; in that sense, it is a vector processor.
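
To make the distinction concrete, here's a rough sketch in plain C (the bundling is something a compiler and the hardware would do; the C only shows the shape of the work): SIMD applies the same operation across several data lanes, while VLIW packs several independent operations into one wide instruction word.

/* SIMD vs. VLIW, illustrated with plain scalar C. */
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[4];

    /* SIMD-like: the SAME operation over four lanes; a vector unit
     * would run this whole loop body as one instruction. */
    for (int i = 0; i < 4; i++)
        out[i] = a[i] * b[i];

    /* VLIW-like: four INDEPENDENT operations with no data dependencies;
     * a VLIW compiler could pack all four into one wide instruction word. */
    float w = a[0] * b[0];
    float x = a[1] + b[1];
    float y = a[2] - b[2];
    float z = a[3] / b[3];

    printf("simd: %g %g %g %g\n", out[0], out[1], out[2], out[3]);
    printf("vliw: %g %g %g %g\n", w, x, y, z);
    return 0;
}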

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
 
Um, I think you have a word for what I'm about to do...

*bump*

There you go. I updated the relevant SPEC benchmarks in the first message. Interesting stuff there... I think Eden was a little worried that scalability on Itanium was worse than on Opteron? Going by the rate numbers, Opteron doesn't scale that much better than Itanium at all. Both scale well, with Itanium looking as if it'd scale a bit better...

And with the 844 costing $2149, you can see that a 1.3GHz Itanium has rather good performance for its price, particularly if used in multi-CPU configurations.

Then again, the single-processor 144 probably has the best price/performance by a long shot on single-processor workstations, no doubt about that. Oh, and for single-CPU workstations that don't need 64-bit, the even cheaper and quite good 32-bit 3.2GHz P4 is more than enough.

Anyway, lately I've been evaluating Itanium in comparison to Opteron here, but just as a reminder: I am aware that 32-bit code runs more easily on Opteron than on Itanium (though someone said this Itanium will run 32-bit a little more smoothly...) and that this is one of Opteron's main strengths.