AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
It seems some people can't read very well...So I'll say it again:

The majority of software is linear in nature. And since the majority of people out there do no more than one heavy application at a time, you quickly run into diminishing returns.

Now, for embarrassingly parallel problems, yes, more cores = more performance. Problem is, those tasks are all better suited to a GPU-like architecture. Hence why Rasterization, Video Encoding, and [to a lesser extent] Physics are all gradually being offloaded from the CPU in the first place.

I'm speaking as a Software Engineer: the overwhelming majority of software simply will not scale well in most cases. After about 3-4 cores, any performance increase one would expect from using more cores is basically eaten up by synchronization/locking overhead or other I/O bottlenecks.
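A back-of-the-envelope way to see those diminishing returns is Amdahl's law. The sketch below is just an illustration (the 80% parallel fraction is an assumed figure, not a measurement from any real application):

```cpp
// amdahl.cpp -- minimal sketch of why extra cores hit diminishing returns
// once part of the work is serial. Build with: g++ -std=c++11 amdahl.cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallel fraction and n is the core count.
double speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    const double p = 0.80;  // assumption: 80% of the program parallelizes cleanly
    for (int n : {1, 2, 4, 8, 16}) {
        std::printf("%2d cores -> %.2fx speedup\n", n, speedup(p, n));
    }
    // With p = 0.80: 4 cores give ~2.5x, 8 cores ~3.3x, 16 cores only ~4.0x.
    return 0;
}
```

Even with a generous parallel fraction, doubling the core count past 4 buys very little, which is the point being made above.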
Then why have more cores at all? If a gpu could perform most/all of the parallel problems, why don't we have massive single cores, and cheap, tiny gpus that could fit in most/all of consumer devices?

I'll trust your word as a Software Engineer, but logic just tells me that it doesn't end that simply.
 
cancelled, and the whole MORE CORES concept is also out the window..


a fan of more weak-to-mediocre cores..?
why.

not speaking about the Thubans, but more about FX.
Piledriver needs to be 4 to 8 HARD HITTING cores.
it doesn't have to beat or even match Intel, just be competitive...
(competitive as in better than current Bulldozer chips and better than Deneb as well, single core performance included..)
please.

AMD will fix it and we will have 8 strong cores. Then I'll buy one. And a few years from then people will be saying that software just doesn't scale well beyond 8 cores.

Meanwhile, Intel will have ____________ (exercise left for the reader.)
 
thanks, and what are your upgrade plans?
It's planned for December and depends on November's phone savings too. My whole household will be up on their contracts. I don't sweat the drama until it gets close. Last year it was just the SSD (awesome upgrade). This year, it really does need full reconstructive surgery. The case is dented, all front-panel USBs are broken, PCIe slot no. 2 doesn't work, and my 5.1 sound is only 3-1/2.1 now. At work, things are really slow, so we're told to stretch all hardware until next year. But they're i5-540s with Intel 80GB SSDs, and are going strong anyway.
Here at TH, I get entertained (Chad and Keith etc(but not Triny)), (I say test with all options on the table 😀 ), and educated (everyone else). And I can be a jerk after a thread has been totally derailed already 😗
 
..... are you going to make the jump to DDR4 or wait until it becomes mainstream much like many users did with the switch from DDR2 to DDR3?
The question is, how much of an increase in latency will there be? DDR2 to DDR3 increased it; will it jump again, and by how much? DDR4 won't actually be faster until it's clocked much higher, just like the jump from DDR2.
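A rough way to put numbers on that (the timings below are typical retail figures, assumed for illustration rather than taken from this thread):

```cpp
// ddr_latency.cpp -- sketch of why a new DDR generation isn't actually faster
// until it clocks much higher: CAS latency in cycles keeps climbing.
#include <cstdio>

// First-word latency in nanoseconds: CL cycles divided by the memory clock,
// where the memory clock is half the data rate (DDR = double data rate).
double cas_ns(double cl_cycles, double data_rate_mts) {
    return cl_cycles / (data_rate_mts / 2.0) * 1000.0;
}

int main() {
    std::printf("DDR2-800  CL5 : %.1f ns\n", cas_ns(5, 800));   // ~12.5 ns
    std::printf("DDR3-1333 CL9 : %.1f ns\n", cas_ns(9, 1333));  // ~13.5 ns
    std::printf("DDR3-1600 CL9 : %.1f ns\n", cas_ns(9, 1600));  // ~11.3 ns
    // CL climbs each generation, so absolute latency only drops once the
    // clock has risen enough to cover the extra cycles.
    return 0;
}
```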
 
I'm guessing it's a business call. Developing on an x86 platform is way cheaper to do and then port. I kinda like the approach, since we'll have better ported games in the future, lol.

Cheers!

I again stress that the whole "console porting" thing is a myth made up by unhappy PC gamers who feel the need to be the center of attention. The massive increase in graphical quality some are predicting when we get new consoles will not happen, unfortunately. [I doubt we'll see much improvement until we move to Ray Tracing, which is still a few years away]
 
Then why have more cores at all? If a gpu could perform most/all of the parallel problems, why don't we have massive single cores, and cheap, tiny gpus that could fit in most/all of consumer devices?

Because the GPU is generally overburdened JUST with graphics [rasterization], so from a performance standpoint, it doesn't make sense to also offload everything else onto it. And to be fair, you just described most SoCs that existed just two years ago [which are only now becoming multi-core designs]. Besides, it's not like games are using a fully dynamic physics engine capable of computing hundreds of forces on a single object in real time [which would choke any CPU to death instantly], though if we did, physics would certainly be offloaded to some GPU-like device.

And from an OS perspective, having multiple cores does make sense, as you are running a couple dozen or so applications at one time, and you can reasonably separate applications to run in parallel [OS/memory/IO bottlenecks aside]. It's at the per-application level where using more cores is hard to do. Hence why single-core designs are going the way of the dodo. My point is, FOR A SINGLE APPLICATION, there should not be an expectation of significant threading beyond what we have now in most instances.
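As a hedged illustration of that per-application limit (the thread count and workload here are made up for the example), a single hot lock serializes the work no matter how many cores the OS has free:

```cpp
// contention.cpp -- sketch of a synchronization bottleneck inside one
// application. Build with: g++ -std=c++14 -pthread contention.cpp
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 8;
    constexpr long kItersPerThread = 1'000'000;
    long counter = 0;
    std::mutex m;

    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i) {
        workers.emplace_back([&] {
            for (long j = 0; j < kItersPerThread; ++j) {
                std::lock_guard<std::mutex> lock(m);  // every thread queues up here
                ++counter;                            // the "work" runs serially
            }
        });
    }
    for (auto& t : workers) t.join();

    // The answer is correct, but wall-clock time barely improves over a single
    // thread, because the lock forces the cores to take turns.
    std::printf("counter = %ld\n", counter);
    return 0;
}
```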
 
More cores is the future IMHO
currently I am playing Metro 2033, watching Netflix, doing emails, on Tom's, designing a flyer in MS Publisher, and just alt-tabbing off of Metro
I couldn't have done that on my old Core2Duo 3GHz
great old CPU
I have to adjust to a quad-core way of thinking
I have had this Phenom II X4 Deneb @ 3.4 since December and only now am I starting to realize its full potential
I was stuck in a dual-core way of thinking where you couldn't have too many intensive apps open at a time
Just yesterday I was watching Netflix as I was encoding/burning a wedding video to DVD and browsing
this has been an eye opener for me as a dual-core user
and now six and eight cores are becoming more common
soon we will have to give up the "quad-core way of thinking"
and move on to the hexa and octo way of thinking
combine more cores with more display screens for more desktop real estate
no longer are you limited to just one or two intensive apps
you can have video encoding, hard game playing, Photoshopping, browsing, etc. all at once
The future is the Age of the Multitasker
 
The thing with making stronger single cores is that it gets to a point of really diminishing returns as well. Engineers can spend effort on optimizing micro-ops and branch prediction and making a better single-threaded CPU, or they can stick more cores in and call it done. Of course there is balancing, but a well-optimized single-core CPU will never be as fast as a decently well-optimized 2-core CPU when software can use the 2 cores.

CPUs have gotten to the point where single-threaded performance isn't too useful outside of gaming. Very little single-threaded software used professionally requires extremely powerful CPUs. Everyday single-threaded tasks are all done very adequately with just about any above-minimal CPU. More cores are also not helpful to most people.

So most people would be fine with any CPU. Professionals can have multi-core setups where software can be highly parallel. And it's really only gamers who really need single-threaded performance on a few cores.

Once transactional memory gets here, expect multithreading to increase greatly. I would expect games to use many more cores as that becomes the standard. Could be a while though.
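For anyone curious what transactional memory looks like from the programmer's side, here is a minimal sketch using GCC's experimental -fgnu-tm extension (an assumption on my part: compiler and hardware support, and real-world performance, vary a lot):

```cpp
// tm_sketch.cpp -- sketch of a transactional update instead of explicit locks.
// Build with: g++ -fgnu-tm tm_sketch.cpp
#include <cstdio>

long balance_a = 100;
long balance_b = 0;

// Instead of taking locks in a fixed order, the whole transfer is declared
// atomic; the runtime (or hardware TM, if present) retries it on conflict.
void transfer(long amount) {
    __transaction_atomic {
        balance_a -= amount;
        balance_b += amount;
    }
}

int main() {
    transfer(25);
    std::printf("a=%ld b=%ld\n", balance_a, balance_b);
    return 0;
}
```

The appeal for games and other hard-to-lock code is that the programmer states what must happen atomically and lets the system sort out the contention.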
 
Then why have more cores at all? If a gpu could perform most/all of the parallel problems, why don't we have massive single cores, and cheap, tiny gpus that could fit in most/all of consumer devices?

There's no one-size-fits-all solution when it comes to computing.

Cores are getting "fatter", which helps improve IPC, but there are limits imposed by the OS and the compilers on how optimally they'll run.

 
Because the GPU is generally overburdened JUST with graphics [rasterization], so from a performance standpoint, it doesn't make sense to also offload everything else onto it. And to be fair, you just described most SoCs that existed just two years ago [which are only now becoming multi-core designs]. Besides, it's not like games are using a fully dynamic physics engine capable of computing hundreds of forces on a single object in real time [which would choke any CPU to death instantly], though if we did, physics would certainly be offloaded to some GPU-like device.

And from an OS perspective, having multiple cores does make sense, as you are running a couple dozen or so applications at one time, and you can reasonably separate applications to run in parallel [OS/memory/IO bottlenecks aside]. It's at the per-application level where using more cores is hard to do. Hence why single-core designs are going the way of the dodo. My point is, FOR A SINGLE APPLICATION, there should not be an expectation of significant threading beyond what we have now in most instances.
And when you have an integrated GPU alongside a high-end discrete video card, do you disable the iGPU, or use all of it for FPU offloads since it's not being used for graphics?

In that case, it only makes sense to offload to the iGPU.

 
http://semiaccurate.com/2012/03/02/sony-playstation-4-will-be-an-x86-cpu-with-an-amd-gpu/

With the current wave of Playstation 4 leaks, it is time to spill some of the beans on what we know about the console. The short story is that it completes the AMD clean sweep of all the next gen consoles.

Yes, you heard that right, multiple sources have been telling SemiAccurate for some time that AMD won not just the GPU as many are suggesting, but the CPU as well. Sony will almost assuredly use an x86 CPU for the PS4, and after Cell in the PS3, can you really blame them? While this may point to a very Fusion/Llano-like architecture we hear that is only the beginning.
 
The thing with making stronger single cores is that it gets to a point of really diminishing returns as well. Engineers can spend effort on optimizing micro-ops and branch prediction and making a better single-threaded CPU, or they can stick more cores in and call it done. Of course there is balancing, but a well-optimized single-core CPU will never be as fast as a decently well-optimized 2-core CPU when software can use the 2 cores.

CPUs have gotten to the point where single-threaded performance isn't too useful outside of gaming. Very little single-threaded software used professionally requires extremely powerful CPUs. Everyday single-threaded tasks are all done very adequately with just about any above-minimal CPU. More cores are also not helpful to most people.

So most people would be fine with any CPU. Professionals can have multi-core setups where software can be highly parallel. And it's really only gamers who really need single-threaded performance on a few cores.

Once transactional memory gets here, expect multithreading to increase greatly. I would expect games to use many more cores as that becomes the standard. Could be a while though.

I agree that more cores looks good. We still have to get past the software engineers who don't think programs can be written to use many threads. But we will.
 
http://semiaccurate.com/2012/03/02/sony-playstation-4-will-be-an-x86-cpu-with-an-amd-gpu/

With the current wave of Playstation 4 leaks, it is time to spill some of the beans on what we know about the console. The short story is that it completes the AMD clean sweep of all the next gen consoles.

Yes, you heard that right, multiple sources have been telling SemiAccurate for some time that AMD won not just the GPU as many are suggesting, but the CPU as well. Sony will almost assuredly use an x86 CPU for the PS4, and after Cell in the PS3, can you really blame them? While this may point to a very Fusion/Llano-like architecture we hear that is only the beginning.

To me, this is great news. Although I remember following the news about the Cell processor. What happened?

Mr. AMD guy, tell your engineers to work 12-hour shifts until Piledriver is fixed.
 
The Cell was canned by IBM. I don't know the exact reasons, but it was generally regarded as a failure, and nearly nothing uses it outside of the PS3. Mostly because it was just too darn hard to program for to get effective performance out of it.

edit: well, apparently it was used effectively in supercomputers to push peak performance and had very good performance/watt, but its time has come and gone, it seems.
 
hee hee hee
this thread is so derailed now
it is like a train off the tracks offroading through a forest LOL


here, let's try this

"Although AMD remains tight-lipped about exact plans for integration of SeaMicro’s Freedom supercomputer fabric with up to 1.28Tb/s (160GB/s) transfer speed into its own chip designs, it clearly stated that it bought the micro-server company for its intellectual property and technologies, not in order to make servers itself. AMD hopes that the ultra high-speed transfer fabric will allow it to create ultra-dense server platforms for cloud servers and other power consumption-sensitive applications.

"Integration our strong AMD Opteron roadmap with SeaMicro's technology will provide customers with a range of processor choices and platforms. [...] Our goal is to leverage SeaMicro IP with our Opteron processor to create industry-leading flexible silicon solutions. [...] When we think about SeaMicro acquisition, this is a technology play for us. [...] It is very much possible to [integrate SeaMicro's fabric technology into AMD processors] and when we look at progression of processor technology, [addition of] fabric would be a natural evolution," said Lisa Su, general manager of global business units at AMD."

source- http://www.xbitlabs.com/news/other/display/20120301171053_AMD_Vows_Not_to_Compete_Against_Its_Customers_with_SeaMicro_Technologies.html
 
"SeaMicro claims four-fold power reduction and six-fold space reduction by eliminating the typical busy server chipset to just three chips, via proprietary interconnect technology"

"All of this is good news for AMD; as SeaMicro's strength in terms of power and density could offset its weaknesses in power performance, while accentuate its strengths in highly-threaded performance."

source- http://www.dailytech.com/AMD+Acquires+Cloud+Server+Maker+SeaMicro+for+334M+USD/article24132c.htm
 
http://semiaccurate.com/2012/03/02/sony-playstation-4-will-be-an-x86-cpu-with-an-amd-gpu/

With the current wave of Playstation 4 leaks, it is time to spill some of the beans on what we know about the console. The short story is that it completes the AMD clean sweep of all the next gen consoles.

Yes, you heard that right, multiple sources have been telling SemiAccurate for some time that AMD won not just the GPU as many are suggesting, but the CPU as well. Sony will almost assuredly use an x86 CPU for the PS4, and after Cell in the PS3, can you really blame them? While this may point to a very Fusion/Llano-like architecture we hear that is only the beginning.

Perhaps AMD was more willing to work with Sony on tailoring an APU for their needs.

I'm not sure I buy it yet, though. Sony's latest development, the PS Vita, is ARM-based. Going from PS2 (MIPS) to PS3 (Cell) to Vita (ARM) and now PS4 (x86) seems like an internal developer nightmare.
 
Perhaps AMD was more willing to work with Sony on tailoring an APU for their needs.

I'm not sure I buy it yet, though. Sony's latest development, the PS Vita, is ARM-based. Going from PS2 (MIPS) to PS3 (Cell) to Vita (ARM) and now PS4 (x86) seems like an internal developer nightmare.
Sony just loves a new CPU for everything they make xD It shouldn't be too hard to develop for both ARM and x86, though.
 