AMD CPU speculation... and expert conjecture


hcl123

Honorable
Mar 18, 2013
425
0
10,780


gee.. AT, no wonder...

That 57% is not a performance measure, so it's not 57% better for web browsing or anything. It's an indication of battery endurance while running the Light workload (EDIT: measured with software?? ... lots of salt!..).

It doesn't make web browsing or anything else "better"; the experience is exactly the same, only the battery lasts longer... and it's only an approximation. We can extrapolate, but it's representative only of the machines tested (EDIT: other machines can have different batteries and deplete differently, and it's very workload dependent: "Workloads with greater idle time will show the biggest improvement in battery life thanks to Haswell ULT.").
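To illustrate the workload-dependence point, here's a back-of-the-envelope sketch (every number here is invented for illustration, not AT's measurements):

```python
# Crude battery-life model: average power is a duty-cycle-weighted mix of
# active and idle power. All figures below are made-up assumptions.
def runtime_hours(battery_wh, active_w, idle_w, active_fraction):
    avg_w = active_fraction * active_w + (1 - active_fraction) * idle_w
    return battery_wh / avg_w

before = runtime_hours(50.0, active_w=8.0, idle_w=4.0, active_fraction=0.2)
after = runtime_hours(50.0, active_w=8.0, idle_w=2.0, active_fraction=0.2)
print(f"{before:.1f} h -> {after:.1f} h ({after / before - 1:.0%} longer)")
# 10.4 h -> 15.6 h (50% longer): same performance, longer runtime.
# The more idle time in the workload, the bigger the gain.
```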

 
@hcl,

SPEC isn't a consumer benchmark, though you can kind of use it as one. They primarily do industrial benches, and in that space reputation is extremely important. Intel couldn't pay them enough money to cheat, because if it were found out, their entire customer base would abandon them. They also do custom benches, and you get ALL the info from those (you paid for them). Anyway, I trust them more than any other suite. Of course their benches require people to actually understand what they mean; they won't hold your hand.
 


I think both of you might have a point. The rumored roadmaps have Intel desktops becoming BGA. There will very likely always be socketed server setups, since server operators, unlike desktop users, do often swap out CPUs. However, single-socket server setups are going to be pretty pricey compared to most LGA1150 desktop builds.

1. The least expensive single-socket Xeons are $189, considerably more than what a cheap Pentium or i3 in LGA1150 runs.

2. Boards for single-socket server setups will very likely be considerably more expensive than current LGA1150 boards. Intel currently has single-socket Xeons in no fewer than three sockets: the E3-12xx V3 series is in LGA1150, the two E5-14xx units are in LGA1356, and the E5-16xx units are in LGA2011. It would be very easy to eliminate the LGA1150 units and migrate people to the successors of LGA1356 and LGA2011 if the desktop LGA units moved to BGA. LGA1356 and LGA2011 boards are considerably more expensive due to the added complexity of three- and four-channel memory controllers as well as a much smaller target audience.

3. Server boards usually require specific heatsinks because desktop units generally won't work with server-style screw-and-backplate mounting. Good quiet tower heatsinks made specifically for a server setup are quite expensive; it's a niche market, and only folks like Noctua, who charge a royal sum for their heatsinks, make suitable units. Ask me how I know this one.

4. Intel will capitalize on the trend and expand the line of "server-socket" desktop chips like the LGA2011 i7s. They will of course be no less expensive than they are now, since the choice will be "buy these, buy a completely locked Xeon, or get a BGA chip."
 


I am with the speculation that this is not a 220W TDP but rather max-load wattage; I believe the TDP is around 140W on a mature Piledriver process with RCM. Clocked at 5GHz with Turbo Core and extensive overclockability, AMD's V12 equivalent could start to rack up the benches. It will be interesting to see what a 900MHz base-clock boost does to single- and multithreaded performance; it should also mitigate the current Achilles' heel, the IPC penalties from shared resources.

We saw the 6800K gain 0.14 points in Cinebench 11 single-threaded over the 8350/5800K and close the gap to the i3 and lower-end i5s, so it will be interesting to see how far this closes the gap.

 
The price will likely be higher than the current ~$200 or ~$250.
If the CPUs' performance is up to snuff, AMD might try to compete with the Core i7-3930K instead of the Core i7-3770K. There are 2-3 big price gaps in Intel's lineup: one between the LGA11xx Core i5 and Core i7, another among the LGA2011 Core i7 CPUs. The chips could fall into any of those gaps.
If it's a 'TWKR'-like chip, aimed only at enthusiasts, it might still cost a lot regardless of performance.
 

8350rocks

Distinguished


I agree with that assessment. I think TDP will actually come in around 140W, and max draw will be something around 220W.
 


1) AMD's process is now very mature, so they can squeeze better bins and power gating from each chip.
2) S/A's super secret about AMD Richland maybe relates to this aspect and a working RCM.

I am putting this down to AMD's execution improving rather than some monster power-guzzling monstrosity. Since AMD chips are more durable than Intel's, there won't be an issue. We also need to factor in Intel's Extreme-line hexcore beasts: at 4.4GHz, where they are max stable, they run at around 230+W, so for AMD to do 5GHz on an octocore at 220W is pretty awesome.
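For a rough sanity check of that comparison, divide the quoted draw by cores times clock (a crude proxy, since power actually scales superlinearly with clock and voltage; the wattages are just the ballpark figures from this thread):

```python
# Watts per core-GHz, using the rough numbers quoted above.
def w_per_core_ghz(watts, cores, ghz):
    return watts / (cores * ghz)

print(f"Intel hexcore @ 4.4GHz: {w_per_core_ghz(230, 6, 4.4):.1f} W/core-GHz")  # ~8.7
print(f"AMD octocore @ 5.0GHz: {w_per_core_ghz(220, 8, 5.0):.1f} W/core-GHz")   # ~5.5
```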



 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


We won't find out. They're only being sold to OEMs so they'll show up in prebuilt systems costing an arm and a leg (think Alienware). It just boils down to a marketing gimmick.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780
No, it's TDP alright, but if it follows Richland it must have quite a lot of headroom:
http://twimages.vr-zone.net/2013/01/richlandSPEC-2-665x162.jpg

The A10-6800K (100W) has a 10% higher base clock than the A10-6700 (65W), yet its TDP is over 50% higher. I believe all unlocked SKUs carry a lot of excess "rated" TDP... but I doubt it's anything close to 140W in any case (perhaps the 9370).

Also the "improved" process of Richland might have been adopted, along with DDR3 2400 ( very good, a major performance block surpassed). So it could be an improved Vishera (new mem controller) not FX8350 OC. Linear by clock should be ~18% better performance, but the new mem controller, and the extraordinary scalability this design presented by untying the big L2, i would say 25%

http://translate.google.com/translate?langpair=auto|en&u=http%3A%2F%2Fwww.pcgameshardware.de%2FCPU-Hardware-154106%2FSpecials%2FFX-9590-FX-9370-AMD-Centurion-1073412%2F

EDIT: damn! ... original link (this site breaks those translation links): http://www.pcgameshardware.de/CPU-Hardware-154106/Specials/FX-9590-FX-9370-AMD-Centurion-1073412/

EDIT 2: price says $800 (9590) (?) and $400 (9370)... should the first one include a waterblock for the GPUs too?

EDIT 3: dreams...
We saw the 6800K gain 0.14 points in Cinebench 11 single-threaded over the 8350/5800K and close the gap to the i3 and lower-end i5s, so it will be interesting to see how far this closes the gap.

They will release a new build with the exact same name... and everything falls back to usual.

EDIT4:
We won't find out. They're only being sold to OEMs so they'll show up in prebuilt systems costing an arm and a leg (think Alienware). It just boils down to a marketing gimmick.

The rumor says OEMs first, then retail. Makes perfect sense to me. If AT wants to test it, they'll have to buy a pricey machine lol.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860


Found a very good writeup on the tech: http://sinhardware.com/index.php/blog/314-haswellfivr



So the Haswell VRM is a 2-step voltage regulator. The 1st step is on the motherboard: it has to supply an extremely clean, one-size-only 2.5V line. This is where and why you're seeing "16 and 32" phase VRMs for Haswell mobos. They are there to clean up dirty power supplies and make Haswell possible with a cheaper PSU.

This is probably ~95% efficient from the motherboard.

Haswell takes the 2.5V and turns it into the required voltages internally. The efficiency is much worse, ~82+%. Another interesting point is they stated the "power cell" is on a 90nm process.

As I stated earlier, smaller parts = less power handling. 25A at 22nm would be a no-go. Figure 4x the size at 90nm; you would need 4x the number of cells at 22nm to have the same thermal density, making each cell ~6A.

Also, it is a 16-phase VRM. The efficiency comes from shutting off unneeded "cells" so that it can reach its peak faster, as each parallel group of "cells" extends the efficiency peak to a higher amperage. It's not changing from a 4-phase to a 16-phase as needed; doing that would require extremely complex circuitry. Cutting off parallel pathways is simpler and easier to implement than adjusting a series circuit.
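Putting rough numbers on the write-up's figures (the 4x shrink factor is the same ballpark assumption used above):

```python
# Two conversion stages multiply: motherboard VR feeding 2.5V at ~95%,
# then the on-die FIVR at ~82%.
mobo_eff, fivr_eff = 0.95, 0.82
print(f"wall-to-core efficiency: {mobo_eff * fivr_eff:.0%}")   # ~78%

# Thermal-density estimate: a 90nm cell handling 25A, shrunk ~4x to 22nm,
# needs ~4x as many cells to keep the same heat per area.
cell_amps_90nm = 25.0
shrink = 4   # rough 90nm -> 22nm area factor (assumption)
print(f"per-cell current at 22nm: ~{cell_amps_90nm / shrink:.1f} A")  # ~6A
```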
 

cowboy44mag

Guest
Jan 24, 2013
315
0
10,810


9590 Piledriver at 5.0GHz, looking forward to seeing the benchmarks on that :D I can only wonder: if Piledriver FX is hitting 5.0GHz turbo, where will Steamroller FX debut? Could we be seeing Steamroller FX with a 5.5GHz turbo, or a 6.0GHz turbo?

With Haswell being less than impressive, I wonder how far the 9590 will close the gap and how the first Steamroller FX processors will benchmark compared to Haswell i7s.
 

8350rocks

Distinguished
Well, rest assured, with the Intel "atmosphere" out there now from many sites... places like AT will probably say something like: "Well, this AMD sure is fast... for an AMD... but look at what Hasfail did for laptops!! Much better battery life!!!"

Pfft...
 


I did the analysis a few months back in this thread; looked at a good dozen games or so that I had installed (various publishers; all the big AAA studios), ran them through the top two programs to determine compiler signatures.

ALL of them were MSVC based. ONE was MSVC 2008; the rest were either MSVC 2005 or 2003 (note the implication: no SSE support beyond SSE3), plus one case of MSVC 6.0.

Now, you can yell and scream about "compiler switches" and "versions". I call BS. Why? Because beyond setting the -O2 flag and maybe a few architecture-specific flags, very few, if any, compiler flags are set by the developer. And given how we almost NEVER manually insert any special CPU opcodes, you aren't going to see platform bias out of MSVC.

As far as .md5 goes, who cares? If you want a "static" version of a benchmark, never update it. Simple. Simply installing standard updates and hotfixes to a compiler is going to affect the final output to SOME degree, so you are never going to have .md5s that carry over across multiple compilations.
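To be concrete about why the hashes never carry over, here's how you'd compute one (the .exe paths are hypothetical; any rebuild, even with identical flags, typically embeds timestamps and shifts the digest):

```python
import hashlib

def md5_of(path):
    # Stream the binary so large executables don't need to fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical builds of the same source with a patched compiler:
print(md5_of("benchmark_old.exe"))
print(md5_of("benchmark_new.exe"))  # almost certainly a different digest
```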

But of course, people like you continue to propagate the myth that there's some systematic bias against AMD, and that if only it were to go away, AMD would outperform Intel by double in all situations. Stop it.
 

jdwii

Splendid
Off Topic

But I'm wondering if Microsoft wants to go bankrupt lately. I honestly can't understand why that console is releasing for $499.99 when the PS4 has 50% more GPU resources and 2.5 times the memory bandwidth and is releasing for $399.99 (which, sorry PC gamers, is a heck of a deal for that type of power in 2013). Let's also not forget about forcing users to go online every 24 hours (sorry, there are plenty of times that would screw me over; hell, I have HughesNet here). Not to mention the used-game fee is going to come back on the consumer regardless of what people are saying: you're either going to get less for used Xbox One games, or they're going to cost more, or a little of both.

Not to mention I think it's still going to cost you a monthly fee to Microsoft to use your Netflix account, whereas on the PS4/PS3/Wii/Wii U/3DS and PSP it's free (except the Netflix cost, of course).

If I was a console dude I would forget Microsoft even existed and just get the PS4/Wii U (I love Nintendo games, they just need to price their system at $250).

So I ask you guys: with Windows 8 (half of my Sims 3 expansions did not work out of the box, and Metro still stinks, so I use Classic Shell) and the Xbox One, is Microsoft trying to go bankrupt? Is the ghost of Steve Jobs haunting Steve Ballmer and telling him to do such ridiculous things?
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360
I can't even imagine what the TDP of those chips will be. They probably won't perform too much better than the FX-83xx either if it's just a few small tweaks and a huge clock rate increase. It might be able to beat the i7-4770K at stock in most multithreaded benchmarks, but we'll see. It will probably be a little overpriced as well.
 

montosaurous

Honorable
Aug 21, 2012
1,055
0
11,360


Sims 3 works fine on Windows 8. It's Sims 2 that has issues. And if you think Microsoft is bad, Apple is 10 times worse.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
They are TWKR chips. I'm wondering if they will be hitting 6.0GHz on chilled water loops. I wouldn't exactly call it exciting; the power draw is going to be insane on these "high leakage" parts.
 

jdwii

Splendid


Yeah, I agree with that; my comment sure wasn't meant to make Apple seem better lol. If anything, in the mobile market Android is my favorite, and then I like iOS better than W8.
 