[citation][nom]mindbreaker[/nom]Regarding Fritz Chess Benchmark: the assumption that floating-point has anything to do with chess is just wrong. "Llano assumes a lead in Fritz, pointing to Piledriver’s shared floating-point resources as a weakness in this particular test."Chess does not use floating point at all.[/citation]
I'm not saying that anything you've said is wrong, but I'm not sure whether you're talking about Fritz in particular or chess software in general. Even if chess engines generally don't need floating point, remember that programs can be coded in diverse ways: the makers of Fritz might've intentionally used floating-point operations precisely for the sake of benchmarking floating-point performance. Anyway, just my thought on that.
[citation][nom]blazorthon[/nom]Think of it like this. When a module hits high utilization in the FX-8150, then it can Turbo from 3.6GHz all the way up to 4.2GHz or thereabouts. However, with a core from each module disabled, you now have each module, at best, running at about 60% utilization instead of topping out at 100%. So, Turbo can't go as high because each module is only being pushed somewhat beyond half-way. It maxes out at about 3.9GHz when one core per module is disabled.[/citation]
Oh, so it's not about putting more modules to sleep?
I'm just checking if this is what you're saying. It's not about whether the modules actually need the extra Turbo headroom; rather, whatever mechanism Turbo Core uses to read a module's utilization can only ever see "at best...about 60% utilization," because the disabled core accounts for a potential 40% of the whole module. Is that it? (I'm not sure if this info was in the article you gave me before. Sorry, I haven't quite read it through.) Sounds like a patch to Turbo Core could fix this then, but alas, this mod is not officially supported. :-(
EDIT: I've finished the article you gave me.
http://techreport.com/articles.x/21865/1 Just for the sake of being more sure, where have you seen how Turbo Core works?
🙂 Also, an interesting idea is how you could force an application to use certain threads, as done in the article. Hm... Do you think this could serve as a workaround for Bulldozer owners whose motherboards can't turn off one core per module? I'm thinking of making .bat files for their games. :-D
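A minimal sketch of that .bat idea, using the Windows `start /affinity` switch. The game path is a placeholder, and the mask 0x55 is an assumption: in binary it's 01010101, which would pin the process to logical cores 0, 2, 4, and 6, i.e. one core per module on an eight-core FX chip (whether the OS numbers cores that way on a given board is another assumption).

```bat
@echo off
REM Sketch: launch a game pinned to one core per Bulldozer module.
REM /affinity takes a hex mask; 55 hex = 01010101 binary = cores 0, 2, 4, 6.
REM The first quoted argument to start is the window title (left empty here).
REM Replace the path below with the actual game executable.
start /affinity 55 "" "C:\Games\game.exe"
```

The same trick can be done after launch from Task Manager (Set Affinity), but a .bat makes it stick for every session without the motherboard needing to disable cores.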
[citation][nom]blazorthon[/nom]I have to wonder if AMD is doing this intentionally. Maybe someone is being paid to not have a proper competitor for Intel? Who knows. If AMD truly wanted to be a proper competitor for both Intel and Nvidia, then they would have done what you said in 3, release a die shrink of Phenom II while they did the right job on Bulldozer, fixing every problem that I mentioned and more. After that, they could have also done what we've suggested, disable one core per module on some consumer SKUs, and then reap the benefits of probably more than doubling Bulldozer's performance while cutting power usage even further down.Further along, yes, AMD should have at least made the low end Radeon 7000 cards use VLIW4 if they didn't want to use GCN. It would have been both a power efficiency boost over VLIW5 cards and better CF compatibility, among other advantages. Heck, they could have used the same GPU as Trinity's Devastator GPU. However, AMD seems intent on not being a very good competitor.[/citation]
Maybe AMD is just having trouble settling in with the new management/CEO. I mean, with all those job cuts, something might've been shaken up in there. Also, sometimes things get out of order going from division to division, person to person. It could've been an unfortunate chain of events, or they might just not have thought of the ideas you and others have suggested in time to implement them. It could be a (bad) business decision. It could've also been what you've said about being bribed to be that way.
🙂 I have read a comment somewhere before about how Intel would have government (monopoly) issues if AMD's CPU division ever died. But we shouldn't jump to conclusions.
As for them imitating the tick-tock strategy, maybe they don't want to appear to be copycats?
I just had an idea right now. Maybe they can do a tick-tock strategy with CPUs and APUs: release a CPU, then apply a die shrink and add in graphics and release it as an APU, then a CPU again. Haha! It sounds funny and, by the looks of it, unlikely, since they released Trinity first with Piledriver and haven't applied a die shrink since Llano and Bulldozer...
Hm... That's a thought. Is there any chance we'd be surprised by a release of Vishera with a die shrink? They did get some practice with Trinity on 32nm.