blazorthon
[citation][nom]army_ant7[/nom]
Oh, so it's not about putting more modules to sleep? I'm just checking that this is what you're saying. It's about whether or not the modules actually need Turbo Boost, or rather, that whatever mechanism it uses to read a module's utilization can only read "at best...about 60% utilization," because the disabled core carries a potential 40% of the whole module's utilization. Is that it? (I'm not sure if this info was in the article you gave me before. Sorry, I hadn't quite read it through.) It sounds like a patch to Turbo Boost could fix this then, but alas, this mod is not officially supported. :-(

EDIT: I've finished the article you gave me. http://techreport.com/articles.x/21865/1 Just for the sake of being more sure, where have you seen how Turbo Boost works? Also, an interesting idea is how you could force an application to use certain threads, as done in the article. Hm... Do you think this could serve as a workaround for Bulldozer owners who don't have motherboards that can turn off one core per module? I'm thinking of making .bat files for their games. :-D[/citation]
Yes. Turbo on AMD's modular architecture currently works on a per-module basis rather than per core or per CPU. Since only one core per module would now be in heavy use, each module can't reach the 100% utilization needed for the maximum Turbo state. You're also correct that batch files and the like could probably do this for people who don't have a motherboard with BIOS support for it, but that way the inactive cores would still be drawing power and generating heat, so the chip wouldn't overclock as well with this usage (although it would still be an improvement over doing nothing).
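To make the batch-file workaround concrete, here's a minimal sketch of how you'd build the affinity mask. It assumes a 4-module/8-core Bulldozer chip (like the FX-8150) where logical cores 2m and 2m+1 share module m; that numbering is an assumption, so check your own topology. The resulting hex mask can then be passed to Windows' `start /affinity` in a .bat launcher.

```python
# Compute a CPU-affinity bitmask that pins a program to one core per
# Bulldozer module. ASSUMPTION: logical cores 2m and 2m+1 share module m
# (4 modules / 8 cores); adjust for your actual chip/topology.

def one_core_per_module_mask(modules: int = 4) -> int:
    mask = 0
    for m in range(modules):
        mask |= 1 << (2 * m)  # select only the even-numbered core of each module
    return mask

if __name__ == "__main__":
    mask = one_core_per_module_mask(4)
    print(f"{mask:x}")  # binary 01010101 -> hex 55

# A game launcher .bat could then use this mask, e.g.:
#   start /affinity 55 game.exe
# (start /affinity takes the mask in hexadecimal.)
```

Of course this only steers the scheduler; unlike the BIOS method, the idle cores stay powered, which is exactly the heat/overclocking downside mentioned above.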
[citation][nom]army_ant7[/nom] Maybe AMD is just having trouble settling in with the new management/CEO. I mean, with all those job cuts, something might've been shaken up in there. Also, sometimes things can get out of order going from division to division... person to person. It could've been an unfortunate chain of events, or they might just not have thought of the ideas you and others have in time to implement them. It could be a (bad) business decision. It could also have been what you said about them being bribed to be that way. :-) I have read a comment somewhere before about how Intel would have government (monopoly) issues if AMD's CPU division ever died. But we shouldn't jump to conclusions.

As for imitating the tick-tock strategy, maybe they don't want to appear to be copycats? I just had an idea right now. Maybe they could do a tick-tock strategy with CPUs and APUs: release a CPU, then apply a die shrink and add in graphics and release it as an APU, then a CPU again. Haha! It sounds funny and, by the looks of it, unlikely, since they released Trinity first with Piledriver and haven't applied a die shrink since Llano and Bulldozer... Hm... That's a thought. Is there any chance we'd be surprised by a release of Vishera with a die shrink? They've had some practice with Trinity on 32nm.[/citation]
Maybe AMD's management is killing AMD's competitiveness. These are the people who changed the design methods yet again, to inferior computer-generated designs, and really... that these architectures do as well as they do despite the huge mountain of problems holding them back is a testament to their quality, IMO. That idea of tick-tocking between APUs and CPUs could be very helpful for AMD: debut each new architecture on the CPUs and do the die shrinks on the APUs. It wouldn't really be copying Intel (although it's an arguably similar concept), and it would give AMD the same amount of time on each process node that Intel gets, and with it more experience.
Basically, it would give AMD a way to beta-test both new nodes and new architectures in the places where they could do the most good and (theoretically) the least harm. All AMD would then need to do is merge their CPU and APU platforms. That wouldn't be difficult: just make the CPUs compatible with the APU platform (it can't be done the other way around, because the CPU socket doesn't have pins for display outputs and such). AMD could even make a socket that fits an adapter for older-generation CPUs, so they keep their main advantage over Intel: inter-compatibility between generations on the same platform.
Maybe AMD's management isn't at fault and they're simply having trouble, but I just can't see it. Sure, this isn't simple technology, but their older engineers knew what they were doing and did their job very well. When it first came out, Phenom II was a good architecture (Phenom was computer-designed, if I remember correctly, which is reminiscent of Bulldozer). So why is it that now, when AMD has what I would go as far as to call a great architecture for the time, AMD has handicapped it in seemingly almost every way reasonably imaginable?
With Bulldozer, I could understand that maybe AMD had trouble getting the architecture to work at first. It is, after all, a very radical change from conventional CPU architectures. However, they have been working on it for about a decade now (AMD has been working on it since at least when Athlon 64 came out back in 2003), if not longer. They would have needed to be doing something very wrong to work on it for so long and have it turn out the way it did. If anything, the BD CPUs seemed rushed, which is a little odd considering the timeframes involved. Maybe AMD only got it working more or less a year or two ago and had to ship something as quickly as they could. Regardless, Piledriver shouldn't fix only some of those problems; it should fix much, much more, and AMD should do it ASAP.
I suppose that yes, we shouldn't jump to conclusions. However, no matter how I look at it, this is what it seems to be. Yes, if AMD goes under, then Intel is screwed: they have had anti-trust lawsuits going after them, and there are people lining up to take Intel down should they become a monopoly. Still, I don't think AMD will go under, even on their current path.