AMD CPU speculation... and expert conjecture


Cazalan

Distinguished
Sep 4, 2011


What's the issue? The 22nm Atoms are actually going to be quite powerful with the new OoO design.

AMD sells A6 APUs that are both Jaguar and Richland/Trinity based. That's the equivalent of Atom/iSomething.

 

griptwister

Distinguished
Oct 7, 2012
Unless they make some sort of special BIOS in the next AM3+ boards to support the next-gen CPUs, they may just come out with a new socket entirely. Steamroller is kind of a wild card for AMD; no one knows what to expect.
 

Cataclysm_ZA

Honorable
Oct 29, 2012


Intel also integrated the VRM onto the same die, a very difficult thing to do. If you do the numbers, they run 13% higher in power consumption with a 12% increase in performance. I don't think that was appreciated by many, but it's a necessary step in Intel's plans, and it was going to happen sooner or later.
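For what it's worth, here is a quick sanity check of those figures in plain Python (the +12%/+13% numbers are simply the ones quoted above, not measured data):

```python
# Rough perf-per-watt check using the figures quoted above (+12% performance,
# +13% power). These are forum numbers, not measured data.
baseline_perf = 1.00
baseline_power = 1.00

new_perf = baseline_perf * 1.12    # +12% performance
new_power = baseline_power * 1.13  # +13% power consumption

perf_per_watt_change = (new_perf / new_power) - 1.0
print(f"Performance per watt change: {perf_per_watt_change:+.1%}")
# -> roughly -0.9%, i.e. efficiency is essentially flat, slightly worse
```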



The AM3 forward support depended on the board's power delivery, so not a lot of people were able to do that.
 

8350rocks

Distinguished


Uhh...NO! AM3+ CPUs have an extra pin that would break off if you tried to put them into an AM3 board.

Wikipedia agrees with me too:

http://en.wikipedia.org/wiki/Socket_AM3

AM3 is 941 pins, AM3+ is 942 pins.

That would get someone a broken CPU if they tried to put an AM3+ CPU into an AM3 board. Please do not instruct someone to do that. It would turn a new CPU into a really expensive paperweight, and they likely couldn't get it RMA'ed because the issue would be user error, not a manufacturer defect.

EDIT: Doing some more research...some boards were AM3 with an AM3+ socket installed for forward compatibility, but this is not recommended, as trying it on a board that did not have a compatible socket would be seriously bad (bent pins, bad power regulation, etc.).
 

Ranth

Honorable
May 3, 2012


As far as I know, AM3 doesn't officially support AM3+ CPUs.
 

No; via BIOS update, the AM3 boards (the 890 series, which could handle 130W+ TDP) could be used with Bulldozer or Piledriver CPUs. I believe the AM3+ CPUs only have 938 or 939 pins, and those compatible mobos never had an AM3b socket. If I stuck an 8350 even in an old 790 board, it would come out with no bent pins, but of course it would not boot.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


What is wrong with ordinary SO-DIMMs instead of the slot? There is a JEDEC SO-DIMM standard for GDDR5.



The higher latency of GDDR5 seems to be a myth; its latency is rather close to DDR3's, and it is more than compensated for by the huge bandwidth. The current price of GDDR5 is relatively high because the volume is low: only high-end GPUs use it, and in very small quantities (1 or 2 GB of VRAM is the norm). If GDDR5 is adopted for main memory, prices will drop. The PS4 shows how you can have 8GB of GDDR5 without increasing the price excessively over a DDR3 design (Xbox One).
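To make the latency point concrete, here is a minimal sketch; the DDR3 timing is a common retail spec (DDR3-1600 CL11), while the GDDR5 cycle count and command clock are assumed placeholders rather than datasheet values, chosen only to show why a larger CAS cycle count at a higher clock can still land at a similar absolute latency:

```python
# Illustrative only: absolute CAS latency in nanoseconds for assumed timings.
# The GDDR5 numbers are NOT from a datasheet; they are placeholders to show
# why "GDDR5 has huge latency" does not follow from its larger cycle counts.

def cas_latency_ns(cas_cycles: float, command_clock_mhz: float) -> float:
    """Convert a CAS latency in clock cycles to nanoseconds."""
    return cas_cycles / command_clock_mhz * 1000.0

ddr3_ns  = cas_latency_ns(cas_cycles=11, command_clock_mhz=800)   # DDR3-1600 CL11
gddr5_ns = cas_latency_ns(cas_cycles=15, command_clock_mhz=1250)  # assumed GDDR5 timings

print(f"DDR3  (assumed): {ddr3_ns:.1f} ns")   # ~13.8 ns
print(f"GDDR5 (assumed): {gddr5_ns:.1f} ns")  # ~12.0 ns
```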
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I see an 8% increase in general performance, but only 1-5% in games. Moreover, the chips run hotter now and overclock poorly.
 

8350rocks

Distinguished


Well, supposing that's the case, AMD does not support it, and it will void any support from the MB manufacturer, so I still would advise against it...even if it hypothetically works under certain circumstances.

EDIT: The socket is wider on the AM3+ boards...are you sure they all fit? I don't know...I still wouldn't recommend it either way. It was never designed for the newer architectures.
 
AM3 can take an AM3+ CPU and AM3+ can take any AM3 CPU, therefore incorrect. As for 28nm, I have never seen it as that much of a manufacturing process upgrade. GDDR5 will likely only be for BGA, which kind of defeats the upgrading purpose.

AM3+ CPUs have the same pin width, length, and count. AM3 CPUs will fit in AM3+ boards without a problem; the wider socket is just to prevent bent pins, and the chip will still work fine.
 

noob2222

Distinguished
Nov 19, 2007


black socket vs white socket.

black am3 = am3+ capable, reduced functionality.
white am3 = no fit.
 
According to their motherboard specs, ASRock boards (and I suspect most or all manufacturers') officially support AM3+ CPUs on all AM3 chipsets, even back to the Nvidia ones. This one even has a white socket: http://www.asrock.com/mb/NVIDIA/N68-VS3%20FX/
 

8350rocks

Distinguished
Up to 95W, sure...the supported CPU list shows a single 125W SKU, the 4350...which is the least power-hungry of the 125W SKUs. Notice the FX 6350, 8320, and 8350 are all absent from that list? Those CPUs would draw more power than the board can handle; the VRMs are not up to spec for the highest-end AM3+ CPUs.

I would still be cautious about recommending that anyone do that. AMD does not officially support the practice, and if something goes up in smoke, you're simply out the money you spent with no recourse from AMD. I also doubt the MB manufacturer would honor anything, because those SKUs are not on their supported CPU list.
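To illustrate the VRM point, a rough back-of-the-envelope sketch in Python; the phase count, per-phase current, and core voltage below are hypothetical placeholders, not the specs of that ASRock board:

```python
# Hypothetical VRM headroom check. The phase count, per-phase current limit
# and core voltage are placeholders, NOT the actual specs of the N68-VS3 FX.

def vrm_capacity_watts(phases: int, amps_per_phase: float, vcore: float) -> float:
    """Approximate continuous output capability of the CPU power stage."""
    return phases * amps_per_phase * vcore

budget = vrm_capacity_watts(phases=4, amps_per_phase=25, vcore=1.35)  # ~135 W
print(f"Estimated VRM budget: {budget:.0f} W")

# A 125 W TDP FX chip can pull well past its TDP under load or overclock, so a
# budget this close to the rated TDP leaves little margin, which would explain
# why a budget board might list the 95 W parts but skip most 125 W SKUs.
```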
 
^^ strangely, the cpu support list says fx4350 is supported (listed as 125w) yet none of the other 125w fx cpus are supported...
edit: the vrms look anemic... and it's a cheap asrock...
however, i suppose higher end mobos with better vrms can support it as long as bios support is there (wikipedia says cpu pin counts are the same while socket hole numbers are different...)
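Summing up the physical-fit side of this back-and-forth, a small sketch using the counts cited in the thread and on Wikipedia; the checker itself is just an illustration, and electrical/BIOS support is a separate matter:

```python
# Sketch of the physical-fit argument above. Counts are the ones cited in the
# thread / Wikipedia (AM3 socket: 941 holes, AM3+ socket: 942 holes, both CPU
# families: ~938 pins). BIOS and VRM support are a separate question entirely.

SOCKET_HOLES = {"AM3": 941, "AM3+": 942}
CPU_PINS     = {"AM3 CPU": 938, "AM3+ CPU": 938}

def physically_fits(cpu: str, socket: str) -> bool:
    """A CPU drops into a socket only if the socket has at least as many holes."""
    return SOCKET_HOLES[socket] >= CPU_PINS[cpu]

for cpu in CPU_PINS:
    for socket in SOCKET_HOLES:
        print(f"{cpu:8s} in {socket:4s}: fits={physically_fits(cpu, socket)}, "
              f"supported=BIOS/VRM dependent")
```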
 

Cazalan

Distinguished
Sep 4, 2011


Haswell will be successful because the graphics are 40% faster than Ivy's and overall it consumes less power than Ivy, allowing it to fit into even lower-power form factors.

What some niche overclocked pre-built shops can do is entirely irrelevant at under 0.1% of the marketplace.
 

8350rocks

Distinguished


I pointed that out...the 4350 is the least power hungry SKU of the 125W TDP chips.
 

8350rocks

Distinguished


Actually...not really.

2500K and 3570K, basically...yes. The 4670K won't overclock nearly as well, though. You might get 4.8 GHz out of the first two, but the 4670K won't get past 4.3-4.4 GHz on a good day, based on the reviews I am seeing. The ES's that Intel sent out are being called "misleading" by industry analysts because the regular product will not get near the 4.5 GHz many were seeing from the ES's. Evidently Intel hand-picked the best-binned samples to send out, and the rest of the product doesn't have that much headroom. Plus hasfail runs considerably hotter when OC'ed...on the order of 15C hotter, in fact.

If you want to OC go SB or IB.
 
In regards to the AM3 vs AM3+ CPU support list debate...

I had the Crosshair IV (890FX) and, officially, it only supported BD with a beta BIOS (last thing I knew) and PD was a hit or miss with some hacked BIOS from OC'ers around the globe. Some got lucky and were able to put 8350's in the Crosshair IV with very decent OC numbers.

I switched that MoBo for the Crosshair V (990FX) before going SB, and it was really sweet (OC-wise). I got the same OC numbers with the PhII 965 (~4.1GHz at ~1.51v) and switched just because of the "we're not sure if we'll support BD, sorry" from Asus. Also, the Crosshair IV had a crappy sound chip/driver; God, that thing sounded terrible. They fixed that with the Crosshair V, though.

Anyway, if the MoBo supports 125W with decent VRMs (most high-tier 890FXes do), it should be able to power a PD. The problem is that most vendors will want you to buy a new 990FX and don't give backwards support, because business. AMD can't get in there, so you're at the mercy of your MoBo vendor/maker. So, more than a technical complication, I would say it's a business call.

Cheers!
 

8350rocks

Distinguished
I am going to say maybe both...

Having the VRMs on die is presenting a challenge they didn't account for: higher power draw at peak and much higher temps.

Additionally...because Intel's tri-gate process is on bulk silicon...the heat dissipation and efficiency are worse than they would have been on FD-SOI.
 

hcl123

Honorable
Mar 18, 2013


Don't know, but I doubt many BGA solutions will have GDDR5. Perhaps up to 384 SPs (the same as Trinity/Richland) is enough for those solutions; it is already better than Intel in games (in general) and should be quite a bit better with OpenCL GPGPU.

For high-performance graphics, it is possible it will have GDDR5 not on the socket but on the substrate, like the MCM solutions with 2 CPU chips on one substrate (instead of 2 processor dies it would be 1 APU die plus the GDDR device dies)... and it's nothing new for AMD; we can say it's NOT a debut.

SEE GDDR5 "ON THE SOCKET"... (in this case it is a mobile GPU with GDDR5, I think)
http://www.techpowerup.com/img/11-05-03/17a.jpg

If, as it seems, the DRAM device dies don't have their own packages but sit on the same substrate as the main "processor" (like a server MCM)... it's quite similar, and a little better because of less impedance from crossing electrical domains. If it is exactly like that mobile GPU, all it takes is replacing that GPU die with an APU die and the GDDR packages with the new ones... it will be nothing new really, only new for Intel. Xbitlabs seems to be considering things from an Intel POV, which in this case is late to the game (yet "extorts" a lot of money as if they invented something new and wonderful, when in fact they are late).

 