gamerk316 :
But as I've explained: the embarrassingly parallel stuff has already been offloaded to the GPU [rendering and physics]. The remaining code is MUCH harder to make parallel, especially outside of specific subsystems.
For example, I can easily make a parallel AI engine where every in-game AI object is fully independent of the next. That's simple to accomplish. But then you get into interactions between AIs, which requires a LOT of locking between each AI object [or each AI thread], which kills performance. Then I have to worry about processing the environmental factors my AI routines need [the rendering engine has to finish creating the geometry so the AI can do a LOS check, the audio engine needs to process the audio cues the AI uses, etc.]. So even though each AI could in theory be processed on a different core, because of low-level locking between threads you won't process more than one or two at a time.
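The locking problem described above can be sketched in Java. Everything here (AIAgent, interactWith, the awareness field) is a hypothetical illustration, not code from any real engine: the point is that two interacting agents must both be locked (in a fixed order, to avoid deadlock), and while both locks are held no other thread can touch either agent, which is exactly the serialization that eats the parallel speedup.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: each AI agent is independent until agents interact,
// at which point BOTH must be locked -- serializing the "parallel" work.
class AIAgent {
    final int id;
    final ReentrantLock lock = new ReentrantLock();
    int awareness = 0;

    AIAgent(int id) { this.id = id; }

    // Acquire both locks in a fixed (id) order so two threads interacting
    // with the same pair of agents can never deadlock each other.
    void interactWith(AIAgent other) {
        AIAgent first  = this.id < other.id ? this : other;
        AIAgent second = this.id < other.id ? other : this;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                // While both locks are held, no other thread can touch
                // either agent -- pairwise interactions run one at a time.
                this.awareness++;
                other.awareness++;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}
```

The more agents interact with each other, the more often threads queue up on these locks instead of running, regardless of how many cores are available.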
On top of that, developers NEVER select which cores threads run on. That's the job of the OS scheduler. The best we can do is give "hints" about which threads can run in parallel, but it's up to the OS to load and schedule them in the most optimal manner.
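In Java, for instance, about the strongest "hint" available is a thread priority, and even that is purely advisory: which core the thread lands on, and when it runs, remains entirely the OS scheduler's call. A minimal sketch (the class and method names are made up for illustration):

```java
// Sketch: a thread priority is only a *hint* to the scheduler.
// Nothing here selects a core; the OS decides placement and timing.
public class SchedulerHint {
    public static Thread startWorker(Runnable work) {
        Thread t = new Thread(work, "worker");
        t.setPriority(Thread.MAX_PRIORITY); // advisory only -- may be ignored
        t.start();
        return t;
    }
}
```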
Well, I was thinking in a broader sense. Clustering, for instance. Also, you can always come up with parallelism; just try not to abuse it, or that wall you mention will work against you, and yes, I totally agree. You still have a lot of headroom thanks to the scheduler, though, and some high-level APIs ("synchronized" in Java, for instance).
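For what "synchronized" buys you, a tiny sketch (SharedCounter is a made-up example class): the JVM acquires and releases the object's monitor for you, so two threads hammering the same counter can't lose updates, without any explicit lock objects.

```java
// Sketch: Java's high-level "synchronized" keyword hides the low-level
// locking -- the JVM takes and releases the object's monitor for you.
class SharedCounter {
    private int count = 0;
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}
```

With plain `count++` (a read-modify-write, not atomic) two threads would occasionally overwrite each other's updates; with `synchronized` the total always comes out exact.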
I love (just like my teachers) giving the Fibonacci series as a threading example. It's quite simple to grasp, and it makes the incredible difference (time-wise) between the approaches easy to see.
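That classroom example can be sketched with Java's fork/join framework (the class name and cutoff value are my own choices, not from the thread): the naive recursive Fibonacci is split into independent subtasks, with small subproblems falling back to the sequential version so thread overhead doesn't swamp the actual work.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch of the classic exercise: naive recursive Fibonacci, parallelized
// with fork/join. fib(n-1) and fib(n-2) are independent, so one side can
// be forked to another worker while the current thread computes the other.
class FibTask extends RecursiveTask<Long> {
    private static final int CUTOFF = 20; // below this, just recurse serially

    private final int n;

    FibTask(int n) { this.n = n; }

    private static long seqFib(int n) {
        return n < 2 ? n : seqFib(n - 1) + seqFib(n - 2);
    }

    @Override
    protected Long compute() {
        if (n < CUTOFF) return seqFib(n);
        FibTask left = new FibTask(n - 1);
        left.fork();                             // run fib(n-1) elsewhere
        long right = new FibTask(n - 2).compute(); // fib(n-2) here
        return left.join() + right;
    }
}
```

Running `new ForkJoinPool().invoke(new FibTask(30))` returns 832040; the algorithm stays exponential either way, which is exactly what makes the sequential-vs-parallel timing gap so visible in class.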
esrever :
Well, the consumer gets to say what is high end by voting with their money, but I consider midrange to start at $150, since anything over $350 is pointless. So low end would be $0-150, mid $150-250, and high end $250+.
Ugh, I'd have to disagree. Just selling cheap stuff to consumers who don't have much choice doesn't make that stuff "premium" at all.
CAR EXAMPLE!
If you only see Ladas (http://en.wikipedia.org/wiki/Lada) being sold between 1k and 5k and the next jump you have Mustangs, 'Vettes and WRX STIs selling for 10k+, I would never ever call those Ladas "high performance vehicles" even if they match up in speed to the higher band.
In this case, Intel is selling the LGA1155 stuff as the "mid" and "low" tiers and leaving LGA2011 as "premium/high". You can see the differences between SB-E and SB/IB right away. Problem is, AMD doesn't have anything to push the prices down (or the features up) in the "cheap" bracket, letting Intel reap big greens on each sale. If you look at the desktop i5 and i7 closely, they're the same bloody chip sold at different price points just because of HT. If I remember correctly, when the first HT'ed P4s came out, all of them shipped with the feature activated, except Celerons, to match up to the Athlon X2s (which, recalling that time, were really expensive ~_____~).
So, TL;DR: features decide "low", "mid" or "high". The price is a reflection of competition and positioning, not of features anymore.
Cazalan :
Market people are smarter than that. They'll know it will be 5+ years before AMD could make a competitive ARM+graphics solution. Nvidia has been doing it for years and is only now making waves with the Tegra 3 chip.
Qualcomm already bought a license from AMD for their graphics core. They call it Adreno, which is just an anagram of Radeon. It's in the Snapdragon S4 (Krait) SoC.
Well, GCN seems fairly scalable both up and down, so I'd say putting it in a low-power ARM SoC package is kind of doable. I wonder how much of the patent portfolio for low-power stuff went into Qualcomm's hands, though (if there was such a thing, teh hee), hmm...
Cheers!