AMD Piledriver rumours ... and expert conjecture

Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Not quite. The 2500K will still hold a lot of value since it's a quad core, not a tri-core.

My main point is that's why Intel's CPUs are designed the way they are: to be able to add or remove cores as needed. I am sure they will go that route with the IGP at some point too, to make it easier.

It's much like the Terascale CPU, where you can add or remove any type of core in the design. The 80-core part was just a number they settled on; it could have been 50 cores or 100.

Intel's use of the ring bus to connect the various components, such as the CPU cores and the GPU, makes it easier to drop in different components from a layout perspective as well. If Intel does add a new coprocessor component, maybe an FPGA for instance, all they have to do is add another bidirectional node to the ring bus. Pretty far-sighted of them IMO, and indicative of future plans as well.
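A toy sketch of that point (Python; the node names and ring layout are my own invention, not Intel's actual topology): adding an agent to a bidirectional ring is just one more stop, with no rewiring of the existing agents, and traffic still takes the shorter direction around.

```python
# Toy model of a bidirectional ring interconnect (illustrative only;
# node names and ordering are made up, not Intel's actual design).

def min_hops(ring, src, dst):
    """Shortest hop count between two stops on a bidirectional ring."""
    i, j = ring.index(src), ring.index(dst)
    d = abs(i - j)
    return min(d, len(ring) - d)  # go whichever way around is shorter

ring = ["core0", "core1", "core2", "core3", "LLC", "GPU", "sysagent"]

# Dropping in a hypothetical coprocessor is just one more stop on the
# ring -- none of the existing agents need to change.
ring.insert(ring.index("sysagent"), "fpga")

print(min_hops(ring, "core0", "fpga"))
```

With the FPGA inserted, core0 reaches it in two hops going backwards around the ring, even though it is six stops away going forwards; that constant-ish hop cost is what makes the layout easy to extend.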
 
"Vishera: 8 cores and better scores..."

http://www.fudzilla.com/home/item/26253-amd-fx-8350-vishera-cpu-production-in-q3

Interesting theory. We shall see of course.

I think AMD will end up going the same route. In fact, BD's original design was meant to be truly modular, not just built from fixed modules.
 
If they could release an unlocked i3 at the same price as the regular i3, nothing could beat that chip at that price, and maybe only a few people would buy an i5.

Vishera!
The question is: will its single core at 1GHz beat a single Thuban core at 1GHz?

That's why they haven't released an unlocked i3. It would cut into the i5 sales too much.
 

Interesting question indeed. Too bad you can't drop the multi lower than 16 on a 2500K/2600K. Have a 1-core, 1GHz faceoff, and even throw the Pentium III Coppermine at 1GHz into the mix just for shnitz and giggles.
 
Well, I don't know how accurate it'll be, but if you can get your 2500K to 2GHz and then use that 50% clock throttling option in RealTemp, it should be something pretty close, eh? Or am I missing something here?

BTW, I had a 933MHz Coppermine for 7+ years, lovely old chip :) :)
 
Could see if you can clock the BCLK to 50MHz and then the multi to 20. That would give 1GHz.

I think Toms should do it for fun.
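The clock math behind that suggestion, for reference (Python; the 100MHz x 33 stock figures are approximate, not exact specs):

```python
# Effective CPU clock = base clock (BCLK) x multiplier.

def effective_clock_mhz(bclk_mhz, multiplier):
    return bclk_mhz * multiplier

# Roughly stock Sandy Bridge: 100MHz BCLK x 33 = 3300MHz (3.3GHz)
print(effective_clock_mhz(100, 33))

# The 1GHz face-off trick: drop BCLK to 50MHz and the multi to 20.
print(effective_clock_mhz(50, 20))  # 1000 MHz
```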
 
Everyone wants profit, and that's why they haven't released a K version of the i3, and why there's no i4 or i5-2400K.

But if they did, there would be no point recommending AMD; even the Phenom X6 can't compete with those CPUs at that price.

I went AMD in my latest build because of the overclocking option at a cheap price, which Intel lacks in the sub-$200 market (I OC for fun; it's more fun than playing games to me, and I can't give it up), and also because of the extra cores at that price.
 
I see modern RISC processors needing just as much magic HW and SW to get them to "play nice" too, at least any that can compete at the workstation level. They require some fairly complex/expensive I/O systems and database/grid-compute software to get that performance.

IBM processors are usually packaged in giant MCMs with 100s of MB of cache/eDRAM. The MCM itself needs 800 Watts just to run it.

This is an old pic but similar is done with the latest Power7.
http://en.wikipedia.org/wiki/File:Power5.jpg

SPARC modules required similarly large and expensive amounts of cache to be useful. This is partly why they pretty much died off until Oracle breathed some life back into them. The price/performance just wasn't worth it.

As long as price/performance keeps getting better the underlying ISA is just 1 aspect of 1 piece of the puzzle.

http://www.theregister.co.uk/2012/03/08/supercomputing_vs_home_usage/page2.html

*Cough*

Considering I have several SUN stacks sitting about 50 feet from my desk I might have just a little bit of hands on experience with this.

To someone who doesn't know what they're looking at, you'd think the interconnects on a SPARC, PPC and x86 look the same. They aren't, not by a long shot.

SPARC itself is modular: you can pull a SPARC CPU, memory or peripheral device out of a system and it won't crash. That circuitry you see is what allows that to happen. SPARCs now ship looking exactly like Xeons, either a chip in a flat socket package or part of a CPU card assembly (for hot-swap usage). Depending on the backplane you use, you can keep getting bigger and bigger; I've worked with 64-way systems before.

I'm really shocked that so few people actually know what happened to SUN. Their hardware was never the problem; it wasn't expensive or clunky. You could buy an Enterprise 25K if you wanted, an entire rack of equipment for a mainframe. But that era's most common product was actually the SunFire V240, a 2U dual-socket server designed for low-to-medium processing. After that were the V490s, which are 5U with two dual-socket CPU cards. The 880s and 890s were rare and expensive; you only bought them if you had a specific purpose for them. Each was just a bigger version of the 480/490: four CPU cards instead of two and twelve FC HDD bays instead of two, with lots more I/O capability due to the additional PCI buses.

Which brings us to one of the starkest differences: x86 systems tend to have only one PCI bus, while SPARCs tend to have three to four separate PCI buses. That's what the extra I/O circuitry is for; it handles all the additional components you'll put in there. I have a box sitting out in one of my racks that has four dual-port 4Gbps FC cards in it. Two form a double loop to the local FC disk array and the other two form a double loop to the enterprise SAN. That's four 4Gbps pipes that get muxed via mpxio on both the back end and the front end, for 16Gbps in all directions. That kind of I/O isn't possible unless each set of cards gets its own bus. And this is an older V490 system; newer T2/T3/T4 boxes can do 16GFC and dual 10GE.
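The aggregate-bandwidth arithmetic from that setup (Python; the card and port counts follow the description above, and it assumes mpxio spreads traffic evenly across paths):

```python
# Back-of-envelope for multipath FC I/O: bandwidth aggregates across
# ports when the multipathing layer (mpxio here) muxes over all paths.

def aggregate_gbps(cards, ports_per_card, gbps_per_port):
    return cards * ports_per_card * gbps_per_port

# Two dual-port 4Gbps cards per loop, as described above:
local_array = aggregate_gbps(2, 2, 4)  # loop to the local FC disk array
san         = aggregate_gbps(2, 2, 4)  # loop to the enterprise SAN

print(local_array, san)  # 16 16  -> 16Gbps each way, per loop
```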

SUN tanked because of a history of bad management decisions. Solaris is difficult and archaic to work with; it's unforgiving and makes only the slightest attempt at a GUI. Oracle changed out the management and infused cash to get the T3s finished and out the door while furthering production of the T4s. And while I think Oracle is shady when it comes to customer service (you must pay for security updates), they're the king of databases for a reason. Oracle bought SUN because they knew they could make a profit with the company and further integrate SPARC + Solaris + Oracle Database + Oracle Middleware (what used to be BEA WebLogic).

Anyhow, this was all about the x86 ISA being old and not scalable. It was designed in the '70s and has stayed the same to maintain backwards compatibility. Only extensions have been added, in the form of coprocessors (SIMD/FPU), with the 64-bit iteration being just an extension of the registers to 64 bits with everything else kept the same. To get more performance out of the metal, CPU engineers have to design complex decoders and translators to act as middlemen between x86 binary and the internal RISC-like language of the CPUs. CPUs would be faster/cheaper/smaller if they didn't need to do that.
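A schematic illustration of that "middleman" decode step (Python; the instruction strings and micro-op splits are invented mnemonics for illustration, not real x86 encodings): a complex memory-operand instruction gets cracked into several simple RISC-like micro-ops, while a register-register op maps through nearly 1:1.

```python
# Toy sketch of an x86-style decoder cracking instructions into
# micro-ops. Purely schematic -- real decoders work on binary
# encodings, not strings.

DECODE_TABLE = {
    # Memory-operand add: one CISC instruction, three micro-ops
    "ADD [mem], reg": ["LOAD tmp, [mem]", "ADD tmp, reg", "STORE [mem], tmp"],
    # Register-register add passes through as a single micro-op
    "ADD reg, reg": ["ADD reg, reg"],
}

def decode(instr):
    # Unknown instructions pass through unchanged in this toy model
    return DECODE_TABLE.get(instr, [instr])

print(decode("ADD [mem], reg"))
```

The point of the paragraph above is that this translation layer costs die area and design effort that a native-RISC front end wouldn't need.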
 
*Cough* 9/10 Top 10 Supercomputers are x86 based *Cough*

I'm well aware of Sun's hardware, and your history lesson just backed my assessment 100%. They relied on a lot of expensive "magic HW & SW" to build those platforms. The ISA didn't give a native advantage or make it any better than x86. The hot-swap technology you refer to isn't in their CPUs; it's in the chipsets that go with them. IBM designed similar chipsets for x86, BTW.

A company's death always comes down to bad management, but they also tanked because their hardware couldn't keep up with the bang/buck of x86 once it gained 64-bit. Engineers I know who were relying 100% on Sun kit were able to get 5-10x the processing power for the same price in x86. By the time Sun started offering AMD Opteron kit it was two years too late.

The CPU complexity is going to be somewhere. Intel chose to keep x86 and scale it the way they have, and they're making a killing at it. If there was a simple fix to replace that you can be sure Intel or AMD would do it. They have some of the brightest engineers and scientists in the world. You can bet they have several alternatives in the queue, and if those simulations ever panned out to give a significant advantage they'd switch to it.

For now the x86 clan have the next several steps already planned out, and that's basically making the x86 a SoC. Technically this is already true for the Atom CPU but that's a low end chip.
* the discrete primary GPU is already gone but continues to gain power
* the northbridge is already gone
* the southbridge will be gone next year
* Transactional cache looks to speed CPU/GPU operation a good amount
* Another couple years out, the "main memory" will be stacked on or next to the CPU but in the same package (similar to how Core2Quad and Interlagos are made now)

Anyhow, bring on the Trinity news. 😉
 
Considering we're constantly doing performance evals on "Sun kit", I can call BS on your price/performance ratios. We recently chose T4-2s over x86 kit for our next HW fielding. When it comes to processing width, very few things can touch the SPARC uarch; it's designed with the specific purpose of doing lots of unrelated tasks at once. This has been demonstrated through SPEC benchmarks, specifically with web front ends and Oracle/middleware back ends.

Intel chose to stick with x86 because it's backwards compatible with current code, because they've invested a lot of money into optimizing their compiler, and finally because it's a closed ISA that they completely own. Nobody can produce an x86 CPU without Intel's permission, and that gives them near-exclusive access to most of the computing market. SPARC is an open standard; anyone can make a SPARC-compatible CPU. While you bash SUN you completely forgot about the other SPARC maker, Fujitsu, who makes the SPARC64 VIII line of CPUs. Sun =/= SPARC; they just decided to go with that ISA and develop on it. Did you ever even learn to program in ASM?

SUN was tanking not because they failed to produce powerful hardware but because they couldn't make it accessible enough. They focused too much on specialized systems and applications. With commodity x86 servers becoming so cheap, the average business doesn't need the power behind a SPARC kit, so they opt for the cheaper commodity box. That, and they insisted on everything being programmed in Java, which was one of their worst mistakes.
 
And I see you're continuing on with this stacking-main-memory-onto-CPU-dies nonsense. That will never happen, for two really big reasons.

First is heat. High-performance CPUs are already rated at 100W or more, and silicon is not a good thermal conductor, not compared to actual conductors. So your heat moves from the logic through the layered memory into your cooling solution, heating the memory up on the way. The heat produced by the memory also counts towards the CPU's TDP, much like the graphics cores in the APUs count towards the total package's TDP budget. Putting small amounts of memory on wouldn't be a big issue, as you could strategically place them away from the hottest regions of the logic below. Their heat production would still count towards the total heat budget, but at least they wouldn't be insulating the CPU from its own heat spreader.
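The budget argument can be put in rough numbers (Python; all wattages are invented for illustration, not measured figures): everything sharing the package draws from one thermal allowance, so watts spent on stacked DRAM are watts the cores can't have.

```python
# Rough TDP bookkeeping: cores, GPU, and any stacked DRAM all share
# one package heat budget. Numbers are made up for illustration.

PACKAGE_TDP_W = 100  # hypothetical high-performance desktop part

def remaining_for_cores(package_tdp_w, gpu_w, stacked_dram_w):
    """Thermal headroom left for the CPU cores themselves."""
    return package_tdp_w - gpu_w - stacked_dram_w

# Suppose an on-package GPU draws 30W and a DRAM stack draws 10W:
print(remaining_for_cores(PACKAGE_TDP_W, gpu_w=30, stacked_dram_w=10))  # 60
```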

Second is physical space. Go take a really good look at DIMM chip sizes, not the sticks but the actual memory chips on those sticks, then take a look at a CPU die. You'll quickly notice that CPU dies are usually smaller than the entire DIMM package, which presents a problem: how much memory could you actually fit onto a single layer on top of the CPU, versus how much memory is in the average system? The amount you could fit, assuming 100% space utilization, is a mere fraction of what's in today's systems, not to mention what will be in systems in the next four or five years. And due to the heat problem above, you can't actually use 100% of the surface area, or you'd be building a good insulator over the hotter CPU below. You could reverse the order and stack the CPU onto the memory chips, but then you run into engineering problems, not to mention you now have a heat generator under your CPU.

Put those two problems together and you won't ever get a desktop CPU with enough memory on die to actually run anything. You'll be forced to keep system RAM installed, which presents a NUMA issue, with one region of memory an order of magnitude faster than another. Ultimately what you end up with is a very large L4 cache, which is exactly what they will make it into: stack 64~256MB of ultra-fast memory and use it as a glorified cache. You might get enough memory to run a phone, but with those now coming with 1GB+ I wouldn't count on it.
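A back-of-envelope for the space argument (Python; the die and chip sizes are rough era-appropriate guesses, not measured values, and perfect packing is assumed):

```python
# How much DRAM fits in one layer the size of a CPU die?
# All figures are ballpark guesses for illustration only.

CPU_DIE_MM2 = 216        # roughly a Sandy Bridge quad-core die
DRAM_CHIP_MM2 = 50       # ballpark area of a 2Gbit DDR3 die
DRAM_GBIT_PER_CHIP = 2   # capacity of that die

def one_layer_capacity_gbytes(die_mm2, chip_mm2, gbit_per_chip):
    chips = die_mm2 // chip_mm2          # assume perfect packing
    return chips * gbit_per_chip / 8     # Gbit -> GByte

print(one_layer_capacity_gbytes(CPU_DIE_MM2, DRAM_CHIP_MM2, DRAM_GBIT_PER_CHIP))
```

Under these assumptions a single layer holds about 1GB, which supports the point above: a fraction of a typical system's RAM, enough for a big L4-style cache rather than a replacement for main memory.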
 
That is why I have an older 1156 i5 build instead of SB, besides me just being cheap; I also enjoy overclocking on my 790X and 990FX boards. The locked-down approach for 1155 on everything but the unlocked versions is a turn-off at almost every price point for me.
 
You are spot on about people not knowing much, if anything, about Sun at all. Most here have never owned, maintained, or even used Sun hardware. Only a few nowadays know what they're looking at when it comes to Sun hardware, and much the same goes for SGI. It's a shame it's that way, but maybe that will change. Perhaps there will one day be another golden era, when everything isn't locked down like it is now, and we gain some diversity once again.
 
The lockdown wasn't 100% on purpose, though they knew about it in the end. It was mainly due to integrating the PCIe and SATA clocks; if those weren't tied together, BCLK OCing would be possible, like on LGA2011.

Honestly though, I would not be surprised if AMD took the same approach. They'd kill themselves by offering lower-end CPUs that can be unlocked or overclocked to beat a higher-priced CPU.

Maybe that's why they have the new "K" edition Llanos and will have them in Trinity too.....
 
And I see you're continuing on with this stacking-main-memory-onto-CPU-dies nonsense. That will never happen, for two really big reasons.

I'll just stop you right there because it's already being done. Chips are taped out and being fabbed RIGHT NOW doing this. They'll be in product this year.

There is an entire JEDEC standard based on this. It's called WideIO and it's just in the first phase. You're looking at 16Gb stacks today that will double easily by next year. That means within 2 years you could have 8GBytes which is more than adequate for most laptop/desktop computing needs. And that's just WideIO. Micron and IBM have their own thing going on and they were talking 20 die stacks.
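The density math behind that claim, assuming the doubling actually holds (Python):

```python
# A 16Gbit stack that doubles in density each year for two years.

gbits = 16
for year in range(2):
    gbits *= 2  # one doubling per year

print(gbits, "Gbit =", gbits / 8, "GBytes")  # 64 Gbit = 8.0 GBytes
```

That is where the "8GBytes within 2 years" figure comes from: two doublings of a 16Gbit stack.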

The power/heat savings are significant (~70%) when you don't have to go off chip for memory.

You cite heat as an issue, and yet we're looking at 35W Trinity and Ivy Bridge CPUs. They can put entire GPUs, which generate significantly more heat than DRAM, on the same die. There's nothing preventing the power envelopes needed for 3D stacks, other than the technical challenge of assembling stacks with sub-micron accuracy.

Denying that CPUs will go 3D is like 3DFX denying the integration of 2D/3D video cards. It's not a matter of IF, it's a matter of WHEN.
 
I was talking price/performance when they tanked. The performance has come back now that Oracle has injected enough cash to get new processors made. And of course, with 8 threads per core (SMT-style, more like Bulldozer) they'll have an advantage for some operations. Oracle ditched Itanium for SPARC, and more power to them for taking on that effort.

BTW I would hope Oracle software runs better on Oracle hardware. 😉

ASM? Yes, that was in a freshman EE class. And yes, I know Fujitsu makes the processors. What's with the HS questions?

I'm not bashing Sun. They were great during the dot-com bubble, they tanked and now they're on a comeback (the hardware anyway). They're just not relevant for 99.999% of consumers. They're not in the desktop space.
 