AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, or green team and they will be deleted.

Enjoy ...
 
Exactly what does it matter if you have to run minimum resolution to "see a virtual bottleneck"? Who gives a ____, really ...

"dude I run 5,000 fps at 480x320 16 colors" " wow man, thats some major bragging rights, how does that game look?" "dude, like the atari 2600" "hahahaha you idiot"

http://video.answers.com/battlefield-3-meets-minecraft-517188631

If you run with video settings maxed out and it's bottlenecking your GPU, guess what: it doesn't frigging matter what happens at low resolutions, you're still GPU bottlenecked at your play settings. Lowering the settings doesn't change that fact, period, end of story, because it's not something you're going to do just to play.

I didn't buy two video cards so I could run 640x480, 800x600, or 1280x1024. I bought them to play at 1920x1200, ultra settings, maxed AA, and that's it. I don't care about any other resolution, and neither does my CPU, since I don't play there.

So again, what does it matter what happens at low resolutions? Two, three, four years down the road, if you upgrade your video cards, you might see a bottleneck? Guess what, by then you're looking for a CPU anyway, so again, what does it matter today?

Yes, it's interesting to see how CPUs compare at low resolutions, but that's it; you don't use those settings to see how your system is going to perform, just like you don't use single-player benches to judge multiplayer gaming.


Well, it's to give us an idea of how the CPU will perform in newer games. I never really agreed with it, since I want to see if the CPU will bottleneck us at the most common resolutions. They should do min/avg benchmarks, since that's what I care about: I don't care if I average 120 FPS if my game drops to 15 FPS in some areas.
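As a minimal sketch of that min/avg point (Python, with hypothetical frame times, not data from any review), a healthy average can completely hide the drops:

# Minimal sketch (hypothetical frame times) of why min/avg numbers
# matter more than a single average FPS figure.

frame_times_ms = [8.3] * 950 + [66.7] * 50   # mostly smooth, with occasional heavy stutter

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
min_fps = 1000 / max(frame_times_ms)

# "1% low": average FPS over the worst 1% of frames
worst = sorted(frame_times_ms, reverse=True)[: max(1, len(frame_times_ms) // 100)]
one_percent_low = 1000 * len(worst) / sum(worst)

print(f"avg: {avg_fps:.0f} FPS, min: {min_fps:.0f} FPS, 1% low: {one_percent_low:.0f} FPS")
# The average comes out around 89 FPS, yet the worst frames sit near 15 FPS;
# exactly the "I average 120 FPS but drop to 15" situation described above.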
 
Exactly what does it matter if you have to run minimum resolution to "see a virtual bottleneck"? Who gives a ____, really ...

"dude I run 5,000 fps at 480x320 16 colors" " wow man, thats some major bragging rights, how does that game look?" "dude, like the atari 2600" "hahahaha you idiot"

To find out how much headroom a CPU has.

When you bench at a GPU-bottlenecked resolution, you may see, say, an i3 2300 and an i7 2600K running at the same FPS. Turn down the settings, however, and you can see how much more powerful the 2600K really is. It's a test to see how close in performance the CPUs really are.

Sure, two CPUs may have the same FPS in game X using GPU Y, but what happens when you upgrade to new GPU Z? Do you have headroom? Or will you have a CPU bottleneck? YOU DO NOT KNOW, because you never bothered to find out how much power to spare the CPU has. [This is important to know for all you SLI/CF users out there, or those of you who upgrade GPUs frequently.]

When I build a machine, I build it around a very powerful CPU that I don't plan to upgrade for at least three architecture updates. As such, I want to know how much power to spare is NOT currently being used. That requires a low-resolution test to remove the GPU from the equation, separate the CPUs, and see how close they really perform in pure number crunching.

I view every CPU test at 1920x1080 as invalid by default due to the presence of a GPU bottleneck.
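A rough illustration of the headroom argument, with made-up FPS numbers rather than real benchmark results:

def headroom(fps_cpu_bound, fps_gpu_bound):
    """How much faster the CPU can push frames than the current GPU actually needs."""
    return fps_cpu_bound / fps_gpu_bound

# Hypothetical results for one game with the same GPU:
low_res_fps = {"CPU A": 95, "CPU B": 180}   # 640x480, CPU-limited
play_settings_fps = 60                       # both CPUs show 60 FPS at 1920x1200, GPU-limited

for name, fps in low_res_fps.items():
    print(f"{name}: {headroom(fps, play_settings_fps):.1f}x headroom")

# Both CPUs look identical at play settings today, but CPU B has roughly twice
# the spare capacity, so it is far less likely to bottleneck a faster future GPU.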
 
@lilcinw

As those above have said, SIMD units and FPUs already have a completely separate register stack from x86 instructions. They're actually more like RISC instructions that use generic registers. The OS has no problem scheduling them separately from the x86 CPU. It would use the front-end scheduler, but that's already the case right now. FPU instructions use a different path than x86 ones do, though.

As for the traces, it's nowhere near 50%. Latency isn't an issue with SIMD, never has been. It's cache hits that are highly sensitive to latency, because a miss stalls out the CPU. A single instruction taking 1~3 cycles longer than usual will not stall the system out. By definition SIMD is highly parallel: you send a whole batch of instructions off and they eventually come back.

There is a point about it competing with the iGPU if you're gaming on a laptop or desktop. The counterargument is that it's performance-neutral overall. If you remove eight 128-bit SIMD FPUs from the cores, you can just add those to the iGPU itself with the saved space. In this case the iGPU would actually have more resources than before. Previously you would have six to eight unused SIMD FPUs doing absolutely nothing for you. By combining those with the iGPU you can actually put them to work. The benefit of centralizing that kind of resource is that it becomes available to all units, not just the one it's exclusively bound to. Ultimately that is the goal behind Fusion / HSA: centralizing processing resources for use by the OS. Move away from treating CPUs as "cores" and instead treat the entire CPU as a set of processing resources to be used as needed.
 
@gamer

Except you're not seeing what a CPU can do; you're only seeing what 25% of a CPU can do.

An i7 has literally 100% more CPU power than an i3. A pure comparison between them should have the i7 always doubling what the i3 does. Of course, single-player timed-loop demos won't ever show that. I cringe whenever I hear someone give advice to only use an i3 for gaming...
 
@gamer

Except you're not seeing what a CPU can do; you're only seeing what 25% of a CPU can do.

An i7 has literally 100% more CPU power than an i3. A pure comparison between them should have the i7 always doubling what the i3 does. Of course, single-player timed-loop demos won't ever show that. I cringe whenever I hear someone give advice to only use an i3 for gaming...

It's a comparison test, not a benchmark. Low-resolution gaming is the only way to figure out how much headroom a given CPU has compared to another CPU in gaming.

Either that, or you'd need a REALLY high-end SLI/CF setup and a game that actually scales well in SLI/CF. Even then, with FPS so inconsistent in those setups, I view those tests as highly unreliable.

The point is a simple one: if the GPU bottleneck is removed, where do CPUs fall in comparison to each other? How much more headroom does CPU X have over CPU Y? Will there be a CPU bottleneck if you go CF/SLI in the future? These are the questions that need to be answered when looking at CPUs in gaming, and no one does that anymore; they just want a test that shows every CPU from an i3 to an i7 putting out 42 FPS in BF3. What are you REALLY proving? Taking the BF3 example, how many times have we heard on this forum people asking if an i3 is sufficient, because it puts out the same FPS? Then we have to explain that in MP, the i3 chokes to death. Whoops, so what does the BF3 benchmark actually prove?

Nothing.
 
To find out how much headroom a CPU has.

When you bench at a GPU-bottlenecked resolution, you may see, say, an i3 2300 and an i7 2600K running at the same FPS. Turn down the settings, however, and you can see how much more powerful the 2600K really is. It's a test to see how close in performance the CPUs really are.

Sure, two CPUs may have the same FPS in game X using GPU Y, but what happens when you upgrade to new GPU Z? Do you have headroom? Or will you have a CPU bottleneck? YOU DO NOT KNOW, because you never bothered to find out how much power to spare the CPU has. [This is important to know for all you SLI/CF users out there, or those of you who upgrade GPUs frequently.]

When I build a machine, I build it around a very powerful CPU that I don't plan to upgrade for at least three architecture updates. As such, I want to know how much power to spare is NOT currently being used. That requires a low-resolution test to remove the GPU from the equation, separate the CPUs, and see how close they really perform in pure number crunching.

I view every CPU test at 1920x1080 as invalid by default due to the presence of a GPU bottleneck.
Here is the thing: in order to even achieve a CPU bottleneck, you need $1000+ in video cards. Are you really concerned about saving $50-150 when you upgrade your video cards again and not the CPU along with it? Chances are, if you're spending that kind of money, you're going to be upgrading both components. Very few people upgrade one component at a time when they are running enough high-end hardware to even achieve a CPU bottleneck.

Yes, I see your point, but I also already said that at the end of what I wrote; saying it again doesn't add much.

Then again, you said you build your computer around three gens of CPUs. Do you update your video card every time a new one comes out, then? That would be the only type of person I can see that would even care about 640x480 resolutions. The thing is, that three-gen-old CPU is never tested against new video cards three years down the road, so how do you know it's not bottlenecking? Where are the reviews of the Q9600 with the 7970 and GTX 690? Oh, that's right, new GPUs are only tested with the newest and greatest CPU available. You still don't know whether your three-gen-old CPU will bottleneck the GPU, because it wasn't tested.

Systems are built for what they are used for. If you're pretending to build for the future because you know what is going to happen, you're just fooling yourself.

According to this chart, even a one-gen-old CPU can be a bottleneck with just a GTX 460:

http://images.hardwarecanucks.com/image/mac/reviews/intel/ivybridge/IvyBridge_3770K_87.jpg


So should you upgrade from the 2600K to the 3770K because of this one? That didn't last three gens; heck, it can't even keep up with a three-gen-old GPU.

The truth of the matter is that people want to see low-res testing because that's what Intel is good at. People use it to justify how much superior Intel is, but when put side by side, no one can even tell the difference. http://www.fudzilla.com/home/item/25695-amd-pulls-blind-test-at-recent-show

Low-res testing is just another form of synthetic test that only matters for testing and has no reflection on real-world usage. I didn't buy a 3D HDTV to watch DVDs, I bought it for Blu-ray 3D movies. I didn't buy a 1920x1200 28" monitor to play at 640x480. If I get to the point where I actually need to upgrade my CPU, it's not going to be because it's running slow with my 6970 CrossFire setup. It's going to be because I upgraded to the HD 9990 DX12 series that's required to play Diablo IV, BF4, HL3, or whatever game comes out in three years that will require an upgrade. I don't see any game in the near future that's going to bottleneck my 6970s below 60 FPS, or even come close to hitting 30 FPS.
 
Problems can arise using the slower / fewer-cored CPUs of today, where a seemingly slower older gen has more cores and in some games can compete with and beat the newer CPUs.
The combinations today, where we have many more choices and major changes in just a few gens, make the old testing ways extremely trying.
 
 
In a game engine that is known to be not very reliant on GPU performance. And to be fair, FPS is over 100, which is far above a satisfactory performance level.

http://images.hardwarecanucks.com/image/mac/reviews/intel/ivybridge/IvyBridge_3770K_87.jpg

So should you upgrade from the 2600K to the 3770K because of this one? That didn't last three gens; heck, it can't even keep up with a three-gen-old GPU.

See above point.

Rofl, that's one statement, not two.

Intel is good at low-res testing because, gasp, their CPUs are better. Shocking fact. When it comes to gaming, AMD CPUs tend to bottleneck first, though in high-res testing there's no way to know that.

Since you don't even run high-end video cards, you will never run into this situation. The only way you can even see a difference is to run the low-end resolutions. No wonder you're defending it so hard.

http://images.hardwarecanucks.com/image/mac/reviews/intel/ivybridge/IvyBridge_3770K_85.jpg


For you, yes: to justify buying the best of the best with a mediocre video card, you have to run minimum resolution. That's the difference here; I play to max out the settings, which means I need the video cards to do the work for me.

A GTX 570 and a single 6970 are incapable of delivering that performance.

http://static.techspot.com/articles-info/458/bench/1920_Ultra.png
 
The dumbest thing AMD has done was Bulldozer..... AMD's response to why it's not that great was that "Bulldozer isn't optimized for Windows 7." WTH, were they testing the CPU on Windows 98? You'd figure when you create a new product you design it for the operating system it's intended for, right? Now everyone has to wait for more patches or for Windows 8 to come out. Now, if Piledriver fails and makes me regret ever buying a new motherboard and another copy of Windows 7 just so I could upgrade my CPU, and I'm really let down again, I'm going to explode! BTW, I'm still with my Phenom II 965, just because wasting any more money on AMD wouldn't be justified by the performance of the CPU. My main focus on my PC is gaming, and Bulldozer fails hard.
 
The point gamer has is that review sites need to remove the GPU bottleneck and shift it onto the CPU in their game tests.

Most every game will perform very much the same at higher resolutions on a lot of CPUs. But then some CPUs will start to choke while others will last a year or longer.

Overall, an i5-2500K system will probably outlast an FX-based system with just GPU upgrades. That's due to the CPU's IPC and the inefficiency of the current FX lineup.

Just as an example, a Q6600 outlasted a Phenom for gaming or anything else, and it was also a year older. I probably could have kept my Q6600 and just gone with the HD 7970, but it did its job well for five years, so it deserves the rest.
 
The dumbest thing AMD has done was Bulldozer..... AMD's response to why it's not that great was that "Bulldozer isn't optimized for Windows 7." WTH, were they testing the CPU on Windows 98? You'd figure when you create a new product you design it for the operating system it's intended for, right? Now everyone has to wait for more patches or for Windows 8 to come out. Now, if Piledriver fails and makes me regret ever buying a new motherboard and another copy of Windows 7 just so I could upgrade my CPU, and I'm really let down again, I'm going to explode! BTW, I'm still with my Phenom II 965, just because wasting any more money on AMD wouldn't be justified by the performance of the CPU. My main focus on my PC is gaming, and Bulldozer fails hard.
While this is somewhat true, guess what? Intel did the same thing with HT.

http://techreport.com/discussions.x/5280

It was so long ago people have forgotten already. Didn't you expect AMD to need the same type of hotfix, since this is their iteration of HT?
 
While this is somewhat true, guess what? Intel did the same thing with HT.

http://techreport.com/discussions.x/5280

It was so long ago people have forgotten already. Didn't you expect AMD to need the same type of hotfix, since this is their iteration of HT?

I knew the Pentium 4 had problems with Hyper-Threading, but I never knew why. So thanks for the knowledge! :)
The fact is, I have no idea how Bulldozer will do once the operating system fixes the issues it is having with it. I can't see it making a large enough improvement to still make it worthwhile. I hope AMD starts competing and putting out more powerful CPUs so we consumers can get better prices/quality. Right now, if Intel wants to be a jerk and starts really cutting prices on its CPUs, it will make it so no one can justify buying AMD over Intel.
 
I knew the Pentium 4 had problems with Hyper-Threading, but I never knew why. So thanks for the knowledge! :)
The fact is, I have no idea how Bulldozer will do once the operating system fixes the issues it is having with it. I can't see it making a large enough improvement to still make it worthwhile. I hope AMD starts competing and putting out more powerful CPUs so we consumers can get better prices/quality. Right now, if Intel wants to be a jerk and starts really cutting prices on its CPUs, it will make it so no one can justify buying AMD over Intel.
To this day Hyper-Threading still fails in many, many applications in engineering and the sciences. Bulldozer wasn't even that bad. It was generally a small step up from a Phenom II X6 in multithreaded applications and was within range of the i7s. It makes for a great multipurpose CPU. The only realistic application that would be bad for it would be high-end gaming, and unless you spent $500 on a video card, it wouldn't make a difference anyway.

There's a reason AMD has to design for the future and do forward-thinking designs. It's not like AMD has the power to make Microsoft program for the modular concept in a three-year-old OS. Sometimes companies have to take leaps of faith to get anywhere, and that's what AMD banked on with the modular CPU design. Maybe they have a plan here, maybe they don't, but I think the Bulldozer chips have finally gotten to a price-competitive point now. Everything but the 6-cores seems to have a marketable audience, and I would think they would sell a lot better if there wasn't so much negativity associated with Bulldozer from the initial disappointment.

Piledriver will give AMD something better than what they have now and will work as a good CPU for people. It's just going to be how well they get received that matters. As of now, AMD really is just suffering from bad public perception. As far as the CPUs they sell, they are quite competitive on price, IMO.
 
Rofl, that's one statement, not two.

Intel is good at low-res testing because, gasp, their CPUs are better. Shocking fact. When it comes to gaming, AMD CPUs tend to bottleneck first, though in high-res testing there's no way to know that.

Since you don't even run high-end video cards, you will never run into this situation. The only way you can even see a difference is to run the low-end resolutions. No wonder you're defending it so hard.

http://images.hardwarecanucks.com/image/mac/reviews/intel/ivybridge/IvyBridge_3770K_85.jpg

For you, yes: to justify buying the best of the best with a mediocre video card, you have to run minimum resolution. That's the difference here; I play to max out the settings, which means I need the video cards to do the work for me.

A GTX 570 and a single 6970 are incapable of delivering that performance.

http://static.techspot.com/articles-info/458/bench/1920_Ultra.png

You miss the point. I care about whether, the next time I upgrade a GPU, I will be CPU bottlenecked or not. And I'd hardly call a GTX 570 "mid-range."

Your first chart indicates a clear GPU bottleneck, even at 1680x1050, from IB all the way down to the A8. [I generally assume a ±5% variance in FPS when benchmarking.] That's the type of test that is totally invalid.

Your second chart indicates a clear GPU bottleneck on whatever CPU is being tested. Which is fine. But that DOESN'T tell me if one CPU is significantly more powerful [and thus longer lasting] than another CPU, which is what I want to know.

And I'm quite happy with a steady 45 FPS with all eye candy turned on in the most graphically intensive game on the market, thank you, especially with Adaptive VSync. I'm in a better GPU situation now than all those guys who spent $800 on 8800 Ultras trying (and failing) to max Crysis...
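For what it's worth, the ±5% rule of thumb mentioned above is easy to apply as a quick check (a sketch, not anyone's official methodology):

def effectively_equal(fps_a, fps_b, tolerance=0.05):
    """Treat two benchmark results as a tie if they fall within the assumed run-to-run variance."""
    return abs(fps_a - fps_b) / max(fps_a, fps_b) <= tolerance

print(effectively_equal(87, 85))   # True: likely the same GPU-limited result
print(effectively_equal(87, 62))   # False: a real CPU-side difference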
 
You miss the point. I care about whether, the next time I upgrade a GPU, I will be CPU bottlenecked or not. And I'd hardly call a GTX 570 "mid-range."

Your first chart indicates a clear GPU bottleneck, even at 1680x1050, from IB all the way down to the A8. [I generally assume a ±5% variance in FPS when benchmarking.] That's the type of test that is totally invalid.

Your second chart indicates a clear GPU bottleneck on whatever CPU is being tested. Which is fine. But that DOESN'T tell me if one CPU is significantly more powerful [and thus longer lasting] than another CPU, which is what I want to know.

And I'm quite happy with a steady 45 FPS with all eye candy turned on in the most graphically intensive game on the market, thank you, especially with Adaptive VSync. I'm in a better GPU situation now than all those guys who spent $800 on 8800 Ultras trying (and failing) to max Crysis...

Who cares how fast your CPU is if you can't even max out your graphics settings? That's like saying you have the loudest car on the race track but it can only go 45 MPH. That's the point he's trying to make.
 
Not really related to PD, but I thought this could go here as well. (From the Haswell thread)

Intel is investing in 450mm wafer technology. You should see how tall the new fab in Ocotillo is. The new equipment is going to be huge.

$5 billion on fab 42
$3 billion on other fab upgrades
$4.1 billion on 450mm wafers

Intel is serious about maintaining their process lead.

http://allthingsd.com/20120709/intel-takes-15-percent-stake-in-asml-as-part-of-research-deal/

ASML is a $7.3 billion (2011 sales) company that makes the equipment that companies like Intel use in its factories, known in industry lingo as “fabs.” Since Intel is the biggest in the world, as Intel goes, so goes the rest of the world. Now that Intel has thrown in seriously on moving its manufacturing processes to support 450-millimeter wafers, that means two things: Other companies will have to do it, too, eventually. Also, older 300-millimeter manufacturing processes will migrate down to manufacturing lines cranking out lower-end chips. They’ll benefit from the same increase in efficiency that comes from larger wafers.

The deal calls for a few things. First, Intel will put about $1 billion toward ASML’s research and development into building chipmaking gear that can handle the larger wafers, but also into advanced technologies using extreme ultraviolet light to etch the ever-smaller circuits on chips.

Second, Intel will take a 10 percent stake in ASML for about $2.1 billion. It will then commit to take another 5 percent later on. All in, the deal is being valued at about $4.1 billion.
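Those figures roughly add up if you assume the later 5% stake is priced pro rata to the first 10% (that pricing is an assumption, not something the article states):

r_and_d = 1.0                   # $1B committed to ASML R&D
stake_10_pct = 2.1              # $2.1B for the 10% stake
stake_5_pct = stake_10_pct / 2  # implied ~$1.05B for the additional 5%, assumed pro rata

print(f"Total: ~${r_and_d + stake_10_pct + stake_5_pct:.2f}B")  # ~$4.15B, close to the quoted ~$4.1B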
 
You miss the point. I care about whether, the next time I upgrade a GPU, I will be CPU bottlenecked or not. And I'd hardly call a GTX 570 "mid-range."

Your first chart indicates a clear GPU bottleneck, even at 1680x1050, from IB all the way down to the A8. [I generally assume a ±5% variance in FPS when benchmarking.] That's the type of test that is totally invalid.

Your second chart indicates a clear GPU bottleneck on whatever CPU is being tested. Which is fine. But that DOESN'T tell me if one CPU is significantly more powerful [and thus longer lasting] than another CPU, which is what I want to know.

And I'm quite happy with a steady 45 FPS with all eye candy turned on in the most graphically intensive game on the market, thank you, especially with Adaptive VSync. I'm in a better GPU situation now than all those guys who spent $800 on 8800 Ultras trying (and failing) to max Crysis...


Not that this relates to PD, but I consider the 570 to be higher end myself.
Also, games look great at 45 FPS as long as your min FPS is good.
I can't find the link right now, but I read an article where they had a $2500 machine vs a $500 machine,
and they did a blind test where they asked people to pick them out and they couldn't.
The human eye just isn't good enough to notice a difference between 50 FPS and 100 FPS as long as the minimum frames are good.
In fact, a machine that averages 60 FPS and drops into the teens will look crappier than a machine that does 40 FPS and doesn't go below 30 FPS.

Now, does anybody have any comments about this article?
http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-trinity-apu,3241.html

"AMD Desktop Trinity Update: Now With Core i3 And A8-3870K "

Going to go read it now; curious what everybody thinks about it.
Plus, this is an AMD Piledriver/Trinity thread, isn't it?
 
To this day Hyper-Threading still fails in many, many applications in engineering and the sciences. Bulldozer wasn't even that bad. It was generally a small step up from a Phenom II X6 in multithreaded applications and was within range of the i7s. It makes for a great multipurpose CPU. The only realistic application that would be bad for it would be high-end gaming, and unless you spent $500 on a video card, it wouldn't make a difference anyway.

There's a reason AMD has to design for the future and do forward-thinking designs. It's not like AMD has the power to make Microsoft program for the modular concept in a three-year-old OS. Sometimes companies have to take leaps of faith to get anywhere, and that's what AMD banked on with the modular CPU design. Maybe they have a plan here, maybe they don't, but I think the Bulldozer chips have finally gotten to a price-competitive point now. Everything but the 6-cores seems to have a marketable audience, and I would think they would sell a lot better if there wasn't so much negativity associated with Bulldozer from the initial disappointment.

Piledriver will give AMD something better than what they have now and will work as a good CPU for people. It's just going to be how well they get received that matters. As of now, AMD really is just suffering from bad public perception. As far as the CPUs they sell, they are quite competitive on price, IMO.

You are right about a lot of things you said, but AMD over-hyped Bulldozer's performance before it was released; even the name sounds over-hyped. So most people were disappointed with the performance it gave. More cores are great, but IPC is better. Rumour has it that Piledriver will have a 15% performance boost over Bulldozer. That's great, but if Piledriver's stock clocks are higher, then it may not be. If you put the top FX chip against a Piledriver chip at the same clock speed and it only shows a 5% performance improvement, then it's not much better. (We have to wait and see.) You're right that FX will only bottleneck the GPU if you have a high-end GPU. I agree with most of what you say, but it does show that its lifespan won't be as long as far as gaming goes. FX is great for multitasking and a lot of other things, and it is competing pretty strongly against Intel. It just got way over-hyped by AMD, and when they release a new socket I won't buy the board until the actual CPU releases, so they may have just hurt themselves with that. I'm sure I'm not the only one who feels this way.
 
To this day Hyper-Threading still fails in many, many applications in engineering and the sciences. Bulldozer wasn't even that bad. It was generally a small step up from a Phenom II X6 in multithreaded applications and was within range of the i7s. It makes for a great multipurpose CPU. The only realistic application that would be bad for it would be high-end gaming, and unless you spent $500 on a video card, it wouldn't make a difference anyway.

There's a reason AMD has to design for the future and do forward-thinking designs. It's not like AMD has the power to make Microsoft program for the modular concept in a three-year-old OS. Sometimes companies have to take leaps of faith to get anywhere, and that's what AMD banked on with the modular CPU design. Maybe they have a plan here, maybe they don't, but I think the Bulldozer chips have finally gotten to a price-competitive point now. Everything but the 6-cores seems to have a marketable audience, and I would think they would sell a lot better if there wasn't so much negativity associated with Bulldozer from the initial disappointment.

Piledriver will give AMD something better than what they have now and will work as a good CPU for people. It's just going to be how well they get received that matters. As of now, AMD really is just suffering from bad public perception. As far as the CPUs they sell, they are quite competitive on price, IMO.

Let's not call Bulldozer a good CPU; it isn't, and there are good reasons why it's not. AMD can turn Bulldozer into a good CPU, if not with Piledriver then perhaps with the next iteration of that architecture. You are only angering the Intel fanboys.

Understand that Bulldozer, had it been done well, was the CPU that I wanted. Intel won't dare let us have an 8-core CPU, pfft.
 
You are right about a lot of things you said, but AMD over-hyped Bulldozer's performance before it was released; even the name sounds over-hyped. So most people were disappointed with the performance it gave. More cores are great, but IPC is better. Rumour has it that Piledriver will have a 15% performance boost over Bulldozer. That's great, but if Piledriver's stock clocks are higher, then it may not be. If you put the top FX chip against a Piledriver chip at the same clock speed and it only shows a 5% performance improvement, then it's not much better. (We have to wait and see.) You're right that FX will only bottleneck the GPU if you have a high-end GPU. I agree with most of what you say, but it does show that its lifespan won't be as long as far as gaming goes. FX is great for multitasking and a lot of other things, and it is competing pretty strongly against Intel. It just got way over-hyped by AMD, and when they release a new socket I won't buy the board until the actual CPU releases, so they may have just hurt themselves with that. I'm sure I'm not the only one who feels this way.
Benchmarks tell us that Piledriver will have about 15% (or more) better IPC than BD; that's not overall performance, which will improve even more due to higher stock clocks.
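A back-of-the-envelope sketch of why IPC gains and overall gains differ (the Piledriver clock here is an assumed illustration, not a confirmed spec):

bd_ipc, bd_clock_ghz = 1.00, 3.6   # FX-8150 base clock, IPC normalised to 1
pd_ipc = bd_ipc * 1.15             # rumoured ~15% IPC improvement
pd_clock_ghz = 4.0                 # assumed higher stock clock, for illustration only

speedup = (pd_ipc * pd_clock_ghz) / (bd_ipc * bd_clock_ghz)
print(f"Estimated single-thread gain: {speedup - 1:.0%}")   # ~28%

# At equal clocks the gain would be just the 15% IPC; the rest comes from the clock
# bump, which is why "better IPC" and "better overall performance" are different numbers.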
 