AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red and green teams and they will be deleted.

Enjoy ...
 
You can push it a lot higher than that, but at that point you become limited only by your board and the cooler. Some are getting 3GHz+ on this chip. Only the OEM samples work; if you bought yours on eBay or Amazon, the speeds reported by CPU-Z are false readings. It is a shame that there are no desktop-class FS1 boards, or the A8 35xxMX line could easily break 4GHz thanks to the higher-binned sample quality.

Absolutely agree.

I just ran Prime95 in torture test mode with its state locked at 2.6GHz. Within two minutes it had hit 85°C and I manually stopped the test; I didn't want to risk finding out exactly how good the CPU thermal failsafe was (or wasn't). For those two minutes the CPU ran without a hitch, no issues, and the system remained responsive. If I could have found a way to cool it, then I ~know~ I could have run it at 3GHz+.

Heck, considering it's just a 32nm Phenom II X4 with double the L2 but no L3 and a GPU bolted on, I could probably get 3.5GHz on it with a desktop cooler. I really wish they made FS1 Mini-ITX boards.

If it were my laptop, as in mine to use, then I'd probably put on better thermal compound and modify the thermal solution, along with putting in DDR3-1600 memory. This is an incredibly flexible CPU. I fear for what AMD will do with the mobile Trinitys; I hope they're unlocked and adjustable like these are.
 

Even with my crippled A8-3530MX the thermal failsafe doesn't seem to be there at all, so what will happen is that it will crash. I got DDR3-1600 working, but you must use 1.5V kits or it won't work. The stock coolers on many of these laptops are just far too low-mass and very cheap, and the fan profiles are not aggressive enough to keep temps down, so I picked up a scraptop off a local Goodwill store for the small fans inside; I am going to mod it pretty soon. Only 2-5% of chips ever make it as an MX, let alone an A8.
 
The problem is that BD's scheduler problems, at least those I know of, were mostly with just a few parallel threads (not a full CPU load). Folding launches eight parallel threads, so that shouldn't be an issue. I have folded with BD on Ubuntu, though I don't remember the exact results. If I remember correctly, BD did better, but to be fair they all might have done a little better. I haven't really done a scientific analysis of it.

I think this is more a case of how the NT scheduler is handling thread assignments. I noticed earlier that when running one heavy thread (single-thread CB), NT wouldn't keep it on one core and kept shifting it around. Instead of one core at 100% I had four cores at 25%. This is really bad for two reasons. One is that boost technology activates based on individual core thermal loading; by constantly moving a thread around, the scheduler prevents it from activating. When I force a program onto a core it immediately boosts to 2.6GHz and stays there, versus constantly shifting from 800MHz to 1.9GHz to 2.6GHz as the thread gets bounced all over the place.

The second reason is how CPUs execute code. A CPU cannot really run multiple instruction streams at once: it executes one stream of instructions at a time (per individual processor target), and in order to execute a different stream you must first save the processor state and use sliding register windows / register rename files to give the new code stream a clean set of registers. This is known as a state change: all the processor's state info is saved off to cache and either a blank state is loaded or a previous state is fetched and reloaded. This takes time and stalls the CPU. Constantly moving a thread across multiple cores causes a ton of unnecessary state changes and needless stalling.

Finally, L2 cache is dedicated to each core (or shared across two with BD), and cache contents are context-sensitive to the code being executed. Moving code from one core to a different core will cause the contents of the L2 cache to be invalidated, causing an immense number of misses and reloads.

Core 0 => executing Thread A, OS priority interrupt, Task Switch (code moved to Core 3)
Core 3 => executing Thread A; Thread A's data was in Core 0's L2 cache and isn't available in Core 3's L2, so it must be fetched and Core 0's L2 lines invalidated.

Now multiply this across four or eight threads being constantly moved to different cores; it would be a nightmare for the caching system. This is where Intel's approach of an inclusive L3 makes the most sense; they practically designed the CPU for poorly scheduling OSes. Core 3 can reload the L3 copy of Core 0's L2 contents when the thread is task-switched, versus Core 3 having to issue an interrupt to Core 0 to read its L2 cache (if allowed) or having to (heaven forbid) go to main memory to fetch the data.
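For anyone who wants to reproduce the "force a program onto a core" trick, here is a minimal sketch assuming the third-party psutil package is installed; core 0 is just an example, and Windows Task Manager's "Set affinity" dialog does the same thing by hand:

```python
# Pin the current process to a single core so the scheduler stops bouncing
# its threads around. Minimal sketch using the third-party psutil package;
# core 0 is an arbitrary choice.
import psutil

me = psutil.Process()  # current process

print("allowed cores before:", me.cpu_affinity())  # e.g. [0, 1, 2, 3]

# Restrict the process to core 0 only. With the thread parked on one core,
# per-core boost can kick in and the L2 working set stays warm instead of
# being refetched after every migration.
me.cpu_affinity([0])

print("allowed cores after:", me.cpu_affinity())   # [0]
```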
 

Great post and I agree.
 
They hardly offer any game dev courses under Linux in college; it is all done on Windows and very rarely OS X.


For Ubuntu or any distro of Linux to really take off with the general public there need to be cool games.
When my 10-year-old daughter saw Ubuntu 11.04 running on my desktop she thought it was cool and wanted it for her laptop.
It is amazing how smoothly it runs even on older hardware.
 

I cringe at Ubuntu. It's nice for beginners, but any time real work needs to be done it's on Red Hat or CentOS (at least where I work).

Ubuntu does have a nice GUI, I suppose.
 
As a beginner learning Linux myself, Ubuntu is perfect.
Ubuntu is almost getting to the point of being a viable OS for the general public.
For a professional in the trade it wouldn't be the number one choice,
but OEMs have already been shipping computers with Ubuntu.
If Linux is going to go mainstream then it needs Ubuntu.
 

Linux has been going mainstream for 10 years. I don't think it's ever going to happen.
 
Well, if more developers made their games for Linux, I'd so use it a lot more, lol.

If more people ran Linux, there would be a market for their games.

Demand creates supply. No significant demand for games on Linux, no production of significant amounts of games on Linux. Basic market economics.

And I honestly can't believe people consider Ubuntu to be a serious OS...
 
More cores are a good thing, especially for multitaskers.
I would think BD would be a decent Folding@Home CPU, for example.
AMD didn't really need killer IPC on BD, but still, it should've been stronger than their older architecture.
BD might have its place in workstations where more cores are crucial.
Everybody worries about gaming so much and forgets there is real work to be done.
I think people do overlook where Bulldozer could be useful just because its IPC is so poor. Everyone knows it, AMD screwed up. Even so, I think there is a place where Bulldozer is still viable.

I'm building a new setup soon, and in thinking through what I would use it for, having four cores doesn't seem like enough for what I want to do.
That essentially rules out any Intel solution. Even the thought of going LGA 2011 with the 3820 and upgrading to six cores later doesn't seem like a good option because of the high motherboard costs.

That leaves BD, and I've seriously considered going with an 8120. Most people would immediately say I'm crazy, and they have a right to. The 8120 has a ton of issues, and the i5 2500K is a beast.

However, if I were doing what I mentioned before (video exporting while hosting and playing on a small game server for some friends), even the super-duper fast quad core seems like it would be limiting me. With the 8120, not so much.
In a situation like this, recommending the quad core would essentially be saying that one Sandy Bridge core is >= two Bulldozer cores. However bad the BD cores may be, that isn't true.

Real-life people don't use their computers only to run benchmarks.
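To put rough numbers on that core-count argument (assumed relative figures only, using the ~20-25% per-core Sandy Bridge advantage people quote in this thread, not benchmark data):

```python
# Rough throughput sketch for a perfectly parallel workload, with no scaling
# losses assumed. Per-core figures are relative guesses based on the ~20-25%
# Sandy Bridge per-core advantage mentioned in this thread, not measurements.
sb_core = 1.00          # one Sandy Bridge core (baseline)
bd_core = 1.00 / 1.25   # one Bulldozer core, ~25% behind per core (0.8)

quad_sb = 4 * sb_core   # i5 2500K-style quad
eight_bd = 8 * bd_core  # FX-8120-style "eight core"

print(f"4 x SB: {quad_sb:.1f}    8 x BD: {eight_bd:.1f}")
# Prints 4.0 vs 6.4 -- so one SB core is nowhere near two BD cores when a
# workload genuinely keeps all eight threads busy.
```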
 
It really isn't bad. AMD made matters worse by significantly overvolting the chips, or at least they did for my 8120. I can OC it to 3.6GHz below stock voltage and power, and to 4GHz at or below stock voltage. It's just that for those of us hoping it would be at least a Phenom II "X8", it was quite a disappointment. I still think that would have been a better product.
 
In gaming, it does still get the job done. It may not do well compared to SB's awesome IPC, but it still provides playable frame rates. (By my standards anything over 30 fps is playable, though I have read that some people refuse to dip under 60.)

I'll say it now: Bulldozer is bad. You only say it's bad because there is something better, though. If you were like me, upgrading from an AMD Sempron LE-1250 with integrated graphics from 2008, Bulldozer is fast. More power than I ever thought I could use. The 2500K would be the same way, or maybe better in some cases, but to me both seem really nice.

It does show BD's problems that you have to reach for such examples to show where it may perform well, but I think people just want to say that there is not much else you could get that is worse.
 
Bulldozer really isn't as bad as people say it is. It's not amazing, but it does perform decently where people need it to. The only place it doesn't do well is gaming.

It didn't perform to the hype and doesn't perform to the price.

The fact that Phenom II beats it in many benchmarks is the primary reason why it gets so much hate. Sure, it will get the job done, but so did Phenom II. What's the incentive to upgrade when it's not really an upgrade? Any time a company releases a product that doesn't beat the performance of its previous products, there will be those who claim it is a bust.

If you want a general-purpose PC, what's the point in going BD? To support AMD. That's about it.
 
Or if you're like me and have ways of utilizing eight cores consistently and don't want to pay half your budget for six Intel cores. On top of that, LGA 2011 mobos have steep prices. To me, and probably to many like me who will do large amounts of content creation, paying $190 for eight cores is a better idea than paying $220 for four faster cores, even if we can fit the quad in our budget.

I'm not at all trying to say that AMD made a better product than Intel, or even than their own previous products. I'm saying the products they do have are well suited to my (and others') situations. For many, all they want is a gaming rig. For that, the 2500K cannot be beat.

Also, I'm sorry for filling the page with my posts.
 
I'm also in for an AM3+ because I think that going from Bulldozer to Piledriver when I upgrade will be better than going from Sandy to Ivy. Ivy is probably going to be ~5% better; I think PD will make the 10-15% AMD is calling for.

Your logic is flawed. You are assuming Sandy Bridge and Bulldozer are on equal footing. They aren't. For most workloads a 15% increase would pull PD about in line with or behind SB.
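Putting the ballpark figures from this thread together (just the quoted percentages, not measured numbers), a quick sanity check of that claim:

```python
# Back-of-the-envelope per-core comparison using the rough numbers quoted in
# this thread: SB ~20-25% ahead of BD in IPC, PD assumed to gain 10-15% over
# BD. All values are relative; none of this is benchmark data.
bd = 1.00                                # Bulldozer per-core baseline
sb_low, sb_high = 1.20, 1.25             # Sandy Bridge, per the 20-25% figure
pd_low, pd_high = bd * 1.10, bd * 1.15   # Piledriver, per the 10-15% claim

print(f"PD estimate: {pd_low:.2f} - {pd_high:.2f}")   # 1.10 - 1.15
print(f"SB estimate: {sb_low:.2f} - {sb_high:.2f}")   # 1.20 - 1.25
# Even the best PD case (1.15) stays below the low SB case (1.20), which is
# the point: +10-15% leaves PD roughly level with or still behind SB per core.
```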


Well this is a niche case where Bulldozer may be the appropriate solution.
 
I am aware SB has about a 20-25% IPC advantage over BD and uses less power. I was saying that a 10-15% gain on something that suits me is worth more than an extra 5% on something that doesn't.

P.S. Okay, seriously, I'm done posting for a while. Too many posts of mine on this page.
 
+1

I am in the same boat. I went from a C2D + HD 4650 to an FX-4100 + GTX 560 Ti.

Between F@H, VMs for school, and general gaming, the difference is amazing.


About a year ago I had a 1.8GHz C2D OC'd to 2.4GHz with an HD 4650 1GB.
Now with a Phenom II @ 3.5GHz / HD 5770 it is an amazing difference.
Really it comes down to what level of performance you are accustomed to:
sure, if you have been using an OC'd 2500K then an FX doesn't seem that great,
but if you were using a Pentium 4 then an FX would be more than enough.
 