Official Intel Ivy Bridge Discussion

Intel should have released more IVB Core i3s with the HD 4000 IGP. It's already bad enough that the HD 4000 barely catches up to Llano's 6550D (IIRC in only one compute benchmark), and Trinity's IGP runs circles around it. It'll be worse on mobile platforms, where Trinity offers even more of a lead. Me disappoint. :S
Desktop gaming performance, however, will very likely get a boost from the IVB architecture and the 22nm process. That could mean Core i3s finally outperform overclocked Phenom II X4s. 😉
 

They pretty much already do:
http://www.anandtech.com/bench/Product/289?vs=80

but then throw in the "games are going to use more cores in the future" argument . . . :heink:
 
Which won't happen. You ALREADY have games with some 80+ threads that don't scale, and there's a REASON for that. Never mind the performance cost of keeping all those threads in the proper state. Thread too much, and you get negative performance returns.
 
Well, games today DO use more than 2 hard threads, so 4 cores are actually useful today, not just in the future 😛

Problem is, the i3 and notebook i5s are fast enough to be comparable to a PhII X4 at high clock rates, but compare them to the desktop i5s and you see the real difference the extra cores make.

The "i3 is not a good long term buy" is not a pro-AMD statement, is a some-what universal statement...

Cheers!
 

Some do use more than 2 threads, some don't.

Problem is, a Core 2 Duo is only just now becoming inadequate for gaming. Since game designers in the future will still be catering to people with laptops or lesser desktops, a good dual core will most likely continue to be "good enough" for the foreseeable future. Even more so since the demand gaming places on the CPU seems to grow more slowly than the demand it places on the GPU.
 


You're looking at the glass as half empty, my friend 😛

--

@gamerk:

Nice reads, but they still don't take my statement down: i3s are not a good long-term buy. You know it's thanks to clock speed and the uarch that they behave like true quads, but in real quad-core workloads they fall short.

If programmers can do it with 3 or 4 hard threads, an i3 will still cope with 2 of them, one per core, and the third and fourth will depend on clock speed and HT. I'm not talking about 80+ threads as you say; we know current OSes juggle that many (or more) as a regular duty, but most of those threads belong to background or sleeping programs, so a foreground program gets a big chunk of the real CPU for as long as it needs it. I don't want to get into the scheduling chat again, but games can cope with a lot of independent threads; programmers just need to design them for that.

Next gen consoles will let us know (if they ever come out) how many hard threads will be the norm in gaming for a good 5+ years down the road.

Cheers!
 
Nice reads, but they still don't take my statement down: i3s are not a good long-term buy. You know it's thanks to clock speed and the uarch that they behave like true quads, but in real quad-core workloads they fall short.

You apparently missed my "Duos are dead" argument back when everyone here was recommending the E8600 OC'd as a better long-term buy than the Q9xxx series. For GAMING purposes, though, a dual core will be sufficient for some time, though with prices where they are now, it's almost silly NOT to go with at least an i5-based CPU.

If programmers can do it with 3 or 4 hard threads, an i3 will still cope with 2 of them, one per core, and the third and fourth will depend on clock speed and HT. I'm not talking about 80+ threads as you say; we know current OSes juggle that many (or more) as a regular duty, but most of those threads belong to background or sleeping programs, so a foreground program gets a big chunk of the real CPU for as long as it needs it. I don't want to get into the scheduling chat again, but games can cope with a lot of independent threads; programmers just need to design them for that.

The problem is one of thread overhead. Say I do the same number of tasks across more threads. Now I have overlapping tasks, which means more locking [both explicit by the programmer and implicit by the OS]. That robs performance [one thread stuck waiting for another to finish] and makes the programming significantly tougher.
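Here's a minimal Python sketch of that overhead argument (all names are mine, just to illustrate): the same total work is split across more threads, but every thread has to take the same lock, so the extra threads only add locking and scheduling cost, not throughput.

```python
import threading

def run_with_threads(n_threads, total_increments=100_000):
    """Split the same total work across n_threads threads that all
    contend on one shared lock (the overlapping-tasks case)."""
    counter = 0
    lock = threading.Lock()

    def worker(count):
        nonlocal counter
        for _ in range(count):
            with lock:  # serialization point: threads wait on each other
                counter += 1

    threads = [threading.Thread(target=worker,
                                args=(total_increments // n_threads,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# Same result either way; the 8-thread run just pays for lock
# contention and thread management on top of the real work.
print(run_with_threads(1))   # 100000
print(run_with_threads(8))   # 100000
```

Timing the two calls shows the point: more threads on a lock-bound task don't finish faster, and past a certain count they finish slower.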

At the end of the day, the main rendering thread is the one that will do the majority of the work. Physics/AI processing is still relatively simple from a processing point of view...[PhysX aside; physics gets complicated real fast once you get into multiple-object dynamic interactions]

Even then, there's no guarantee where the OS is going to allocate said threads...

Next gen consoles will let us know (if they ever come out) how many hard threads will be the norm in gaming for a good 5+ years down the road.

Cheers!

Consoles are totally different beasts. The 360 has a three-core CPU capable of running two threads per core. The PS3 has 6 SPE units [think of them as mini CPU cores]. Especially in the PS3's case, you code to a VERY low level, as keeping the SPEs well fed is very challenging to pull off [no direct access to main memory; data transfer is done via DMA]. The 360 is less challenging, but most of the threading logic is still hardcoded for performance reasons.

Contrast this to a PC, where you create a thread, and leave it for the OS to put on a core.

Never try to compare embedded architectures to general-purpose PCs. Totally different beasts, and totally different coding styles are needed. I've worked on major systems on both top-of-the-line supercomputers and systems with 256 KB of RAM to play with [most of that reserved for the program itself!]. You can't compare the two.
 
Well, most console ports don't handle parallelism dynamically at all.

I've read that some Xbox games don't even use all 3 cores for hard threads, since an odd number of cores doesn't exist on PCs, which would make the game harder to port.

You're thinking of good programming practices, but there's a lot of code around that's hardwired to a certain number of cores even in a PC environment. I'd say only a tiny number of programs are threaded dynamically enough to use 100% of every core (as in cores, not arch).

Cheers!
 


Because that entire mechanism has to be re-coded from scratch. Instead of hard-coded thread management, you move to OS-managed thread management. Core loading becomes less of a concern [more powerful HW], and you have to worry about thread-locking dynamics for the first time [they don't exist with hard-coded thread management].

I've read that some Xbox games don't even use all 3 cores for hard threads, since an odd number of cores doesn't exist on PCs, which would make the game harder to port.

Nonsense.

First off, PCs don't CARE how many "hard" threads you run. The OS manages which cores you use, not the user. [Trust me. I know people who tried managing core usage directly when the first dual cores came out. It worked, until some other application did the same thing and both parked their heavy-duty workloads on the 2nd core. Programmers make no assumptions about core usage. We spawn a thread and leave it to the OS to manage and schedule.]
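That "spawn a thread and let the OS schedule it" model looks like this in Python (the task is a made-up stand-in, not from any real game): the program never names a core; it just hands the thread to the scheduler.

```python
import threading

# The "PC way": spawn a worker and let the OS scheduler decide which
# core it runs on -- no hard-coded core assignment as on consoles.
results = []

def heavy_task(n):
    # Stand-in for a heavy-duty workload (hypothetical example).
    results.append(sum(range(n)))

t = threading.Thread(target=heavy_task, args=(1000,))
t.start()   # the OS picks a core; the program never specifies one
t.join()
print(results[0])  # 499500
```

Pinning a thread to a core is possible via OS-specific affinity APIs, but as the post says, two programs both "claiming" core 2 just end up fighting over it; leaving placement to the scheduler avoids that.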

Secondly, the real issue, again, is that you typically won't have more than one or two really heavy-duty threads able to run at any single point in time, due to various bottlenecks. Never mind that you don't want the application thrashing at 100% load for extended periods...

You're thinking of good programming practices, but there's a lot of code around that's hardwired to a certain number of cores even in a PC environment. I'd say only a tiny number of programs are threaded dynamically enough to use 100% of every core (as in cores, not arch).

Cheers!

Because most problems that scale beyond a few cores are already being offloaded onto the GPU [CUDA, OpenCL, etc.]. As for the rest, it's not so much that the number of cores used is fixed [which would be silly; that's what the OS scheduler is for]; it's more an issue of scalability.

Example: I have 100 AI objects that need to be processed. There are two approaches:

1: Create a single thread to manage everything, and loop through the AIs one at a time.
2: Create one thread per AI object, and invoke all their update routines at the same time.

The 1st is the approach everyone takes, because of the overhead of having 100 threads running: they all need to be scheduled, you have to ensure they all finish within an arbitrary time period, and the AIs interact [which would lead to significant deadlock/waiting time in the 2nd approach if the AI objects need to cross-reference each other].

So 100 AIs get processed in a single thread, pushing 1 core toward 100%. The 2nd approach is simply too hard to manage properly, even if it is more scalable.
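The two approaches can be sketched in a few lines of Python (the `AI` class is a toy of mine, not engine code); both do identical work, but approach 2 creates, schedules, and joins 100 threads to do it.

```python
import threading

class AI:
    """Toy AI object: update() is a stand-in for real per-frame logic."""
    def __init__(self):
        self.ticks = 0
    def update(self):
        self.ticks += 1

ais = [AI() for _ in range(100)]

# Approach 1: one thread loops over every AI in turn (what games do).
def update_all(objects):
    for ai in objects:
        ai.update()

update_all(ais)

# Approach 2: one thread per AI object -- more "scalable" on paper,
# but 100 threads must be created, scheduled, and joined every frame.
threads = [threading.Thread(target=ai.update) for ai in ais]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Same work either way; each AI has now been updated twice.
print(all(ai.ticks == 2 for ai in ais))  # True
```

And this toy version doesn't even touch the hard part: the moment one AI's `update()` needs to read another AI's state, approach 2 needs locks between the 100 threads, which is where the waiting and deadlock risk come in.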
 


Tell me about it. At work, we have spreadsheets that have tracked manpower for some two decades across hundreds of programs. Even on Westmere CPUs [and only a handful of us have them], Excel runs like a dog.
 
