AMD CPU speculation... and expert conjecture



I said the power draw difference at load is around 50 watts, to use a round number. The actual power consumption difference between the FX-8350 and the i7-4770K across THG's entire benchmark run was only 35 watts, despite the AM3+ board being a higher-end unit than the LGA1150 board (actually, closer to the LGA2011 board used with the i7-3930K). So I even overestimated a bit just to get a round number.

[Image: average-power.png — THG average power consumption chart for the full benchmark run]
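Just to put that 35 W in perspective, here's a quick back-of-the-envelope (the usage hours and electricity rate are assumptions for illustration, not measurements):

```python
# Rough annual cost of a ~35 W average power gap at load.
# delta_w is the measured THG average; the rest are assumed values.
delta_w = 35.0            # W, average gap from the THG benchmark run
hours_per_day = 4.0       # assumed hours/day spent at load
rate_usd_per_kwh = 0.12   # assumed electricity rate

kwh_per_year = delta_w / 1000.0 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * rate_usd_per_kwh:.2f}/yr")
# ~51 kWh/yr, roughly $6/yr under these assumptions
```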



"The other thing is I had an IBM R40 and it had 8-10 hour battery life web browsing on a 15-inch SXGA screen."

I don't believe it unless you had an external battery. Notebooks in those days were considered to be very, very good if they got 4-5 hours on a giant battery. Most got 2-3 hours on a regular sized battery. IBM claimed 4 hours runtime with an idle computer and dimmed screen out of a single battery (see here) which is very consistent with battery runtimes of the era. Various user reviews on sites like Notebook Review say 3-4 hours with one battery and 6-ish with both batteries installed. So I call BS on that claim of 8-10 hours.

"Also I know for a fact that a 1.3 GHz and a 1.5 GHz Pentium M were much faster than most P4s at gaming; in games like Need for Speed: Underground and Flash-based games they destroyed a 2.66 GHz P4 and a 1.6 GHz P4."

It is very difficult to directly compare the Pentium Ms and Pentium 4s in gaming since you are talking about laptops. More likely than not you are comparing how much RAM and what IGP/GPU each machine had rather than the CPUs. I don't doubt that the 1.6 P4 was slower than the 1.3 or 1.5 P-M, but a 2.66 P4B with an equivalent RAM/GPU setup should be faster than a 1.5 P-M in gaming. To really test them you would need a rare standalone motherboard or socket adapter for a PGA479 CPU, so that you could run desktop RAM and a desktop GPU with the mobile CPU and compare directly against the P4. The ASUS CT479 socket adapter allowed exactly that, and there were some tests done pitting the Dothan 533 against various 90 nm Pentium 4s and A64s. (Here is one.) The fastest Pentium M (the 780) did decently in 1024x768 gaming, on par with the A64s of similar clock speed, but finished midpack to bottom of the list in other programs.

"The Pentium M CPUs were definitely better than the K8s. I was amazed how good the Pentium Ms were; on the IBM R40 I could not really even hear it at 100% CPU load, it was such a low-power chip. The Pentium Ms are probably still as efficient as Intel's Atoms, performance-per-watt wise."

The Pentium Ms were pretty efficient, but they weren't really better than the K8 as far as performance was concerned; look at the link above where a Pentium M was benched against K8s. It would be impossible to do a direct power comparison of a current Atom vs. a Pentium M since they use wildly different platforms. If you just did a "plug a board that supports either into the wall and let 'er rip" test, I would strongly suspect the Atom would do better, considering it's a 32 nm SoC setup vs. a 90 nm CPU with an older two-chip chipset. Total performance of the Atom would be lower on single-threaded stuff since the Atom isn't too fast, but a dual-core Atom should outperform a Pentium M on multithreaded code, and it would use less power for sure since the platform TDP is a lot lower.

"Intel can release CPUs with more cores if they want, they just don't have to. It makes no sense for them to, as they can still mark up their 6-core CPUs in price. They make 8-core Xeons already, and made the 6-core 980X a few years ago. It's not hard for them, but because the FX-8350 isn't faster than the 3770K or 4770K or 2600K, Intel doesn't have a reason to add more cores yet, and we all know Intel likes to make easy money."

Intel probably could drop the 6-core SB-E to the $300-ish mark, but that's all they could do today for a 6-core chip. The only dies Intel currently makes that can yield a CPU with more than four cores are the 435 mm^2 8-core SB-E die and the 513 mm^2 10-core Westmere-EX die. Both are huge dies and expensive to produce, even as 6-core salvage parts. Intel's best option would be a dedicated 6-core die, but on 32 nm that would be in the ~300 mm^2 range and obsolete before it even came out, and as I said above, I am not sure Intel can yet make a 6-core-sized die on 22 nm due to process immaturity.

You made a remark about the 6-core Westmere chips. Intel never sold one for less than around $500, but it did have its own dedicated 248 mm^2 die, which Intel used extensively in the Xeon 5600 line as the top core-count chip for the LGA1366 platform. I suppose the argument you would be better off making is that Intel feels no pressure to release the full 8-core SB-E as a desktop chip due to a lack of pressure from AMD: AMD can't touch the 8-core E5s with an AM3+ chip, plus such a chip would likely retail in the $1500+ range, out of reach of all but the most loaded enthusiasts.

AMD, on the other hand, is using an 8-core die about 3/4 that size, and yields improve exponentially as die size shrinks. AMD also uses that 4-module FX die in every single non-APU part it sells, so there is a significant economy of scale. Intel only uses the 6-core-capable die in a small high-end desktop line and in midrange servers; I'd hazard a guess AMD sells considerably more 4-module dies than Intel sells SB-E dies, considering the E3s use the 4-core die and the E7s use the 10-core Westmere-EX die.
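To put rough numbers on the yield point, here's a toy Poisson defect-density model (the defect density is an assumed, illustrative figure, not anything GF or Intel has published):

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson yield model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.4  # assumed defect density in defects/cm^2 (illustrative only)
dies = [("8-core SB-E", 435), ("10-core Westmere-EX", 513), ("4-module FX", 315)]
for name, area in dies:
    print(f"{name}: {area} mm^2 -> ~{poisson_yield(area, D0):.0%} good dies")
# The exp() is why shrinking the die helps "exponentially":
# ~28% good dies for the FX-sized die vs. ~18% for SB-E at the same defect density.
```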

"Last thing: I only mention power as that is how I judge whether a CPU is better or not. Technology companies always go on about increasing performance per watt, but AMD has not moved forward on the desktop since like 2009, considering that at 32 nm they should have improved a lot more."

Performance relative to power use in a few benchmarks has not improved with BD/PD compared to Stars (K10), but in most it has, especially anything multithreaded. If you really want to look at power use, look at how much AMD cleaned up the idle power of BD/PD. They introduced clock gating, and BD/PD are *much* better at idle than any previous chip ever was. My Bulldozer-based Opteron 6234 runs a little hotter than the Stars-based 6128 it replaced (because the 6234 uses the entire 115 W TDP due to Turbo CORE and is significantly faster than the 6128) but it is significantly cooler at idle. Most desktop CPUs idle the vast majority of the time, so you should be very happy with these more meaningful reductions in idle power use.
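Since the idle-versus-load point is easy to miss, here's the arithmetic spelled out (all wattages are made-up round numbers for illustration, not measurements of any specific chip):

```python
# Duty-cycle-weighted average power: idle dominates on a typical desktop.
def average_power(idle_w: float, load_w: float, load_fraction: float) -> float:
    return load_w * load_fraction + idle_w * (1.0 - load_fraction)

LOAD_FRACTION = 0.10  # assume the box is under load only 10% of the time
old = average_power(idle_w=40.0, load_w=125.0, load_fraction=LOAD_FRACTION)
new = average_power(idle_w=15.0, load_w=125.0, load_fraction=LOAD_FRACTION)
print(f"weak idle gating: {old:.1f} W avg, aggressive clock gating: {new:.1f} W avg")
# Same 125 W load figure, but the average drops from ~48.5 W to ~26 W,
# which is why idle improvements matter more than the load number.
```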

"I get your power consumption point, but personally I like the best performance-per-watt components at the time, as it's more future proof, or shall I say it won't look ancient in a few years' time, and I like to buy stuff that is innovative instead of slapdash CPUs like Intel's P4 NetBurst parts."

I betcha AMD's CPUs will look much better in the future compared to the present Intel competition. The future is clearly more multithreaded, and multithreading is BD/PD's forte. Intel is hanging on with a bunch of fairly low-core-count CPUs, trying to flog single-threaded performance. Think of it this way: would you rather use a Phenom X4 9850BE or a Core 2 Duo E6850 today? Back in the day the E6850 was considered much faster and the Phenom X4 was considered crap, because lightly-threaded performance was all that mattered...
 

jdwii

Splendid
"instead of releasing slap dash cpus like intel p4 netburst cpus" Cough Haswell,Ivy, Sandy all of these CPU's measured in a 15% increase in CPU performance From Sandy-Haswell so i still find no reason for a person with a 2500K to upgrade on an Intel machine. Intel is no different and in fact worse since they have the money to improve quicker and are worth much more.

On a laptop a 35 watt TDP CPU is fine and stable, and on a desktop a 125 watt CPU is stable. It's funny how you have fallen for the marketing and care more about performance per watt than about performance; for desktops it is really nothing more than a gimmick. Again you avoid the question: why is power consumption so important to you?
 

jdwii

Splendid
"would you rather use a Phenom X4 9850BE or a Core 2 Duo E6850 today? Back in the day the E6850 was considered much faster and the Phenom X4 was considered to be crap as poorly-threaded performance was all that mattered..."

He explains how AMD CPUs are supposedly so far behind looking into the future, when your statement actually proves him wrong; and according to game developers, an FX-8350 is going to do the same thing to a 4-core HT CPU.
 

8350rocks

Distinguished


Based on things like Unreal Engine 4, CryENGINE 3, Frostbite, and several other cutting-edge game engines, it is already happening. The fruit is forthcoming...we just don't have it yet.
 

ohyouknow

Distinguished
The past few pages have been battling with a troll. Never argue with trolls; they will bring you down to their level and beat you with experience. Obviously hijacked from a more simply stated quote.

Back to the subject at hand: a full-spec Steamroller CPU vs. a Steamroller APU. What would the performance difference amount to? The APU seems like it will be the only cat in town sooner rather than later, so it may be best to look at it from that perspective. Last time I saw gaming numbers, the APU+GPU wasn't far behind a traditional AMD CPU+GPU setup.

Please for the love of everything, no compiler issues, older proc comparisons, so on and so forth.
 

juanrga

Distinguished




B) The FX-9590 isn't simply an overclocked FX-8350. We don't have details, but either it is a chip with extraordinary thermal and electrical tolerances preselected from the main FX-8350 line, or it includes some 2.0-style revision like Richland. In either case, the FX-9590 will consume less power than an overclocked FX-8350.

E) Compared with the ordinary i7 line, such as the 3770K or 4770K, perhaps; compared with the extreme series, such as the 3960X or 3970X, I am not so sure.

H) I don't know of any updated desktop roadmap for 2014; I only know the server roadmap and the 2013 desktop roadmap. If Steamroller is 2 threads per module I would wait for some 4-module FX, but if Steamroller is 4 threads per module, as speculated before, then a 2-module FX makes more sense for the general public.


 

Cazalan

Distinguished


It's truly difficult to guess how Kaveri will perform. There are several big changes that diverge from the most recent Richland, and it's on a new process on top of that. It will have higher CPU performance and higher GPU performance (512 GCN shaders), but it will still be marred by memory bandwidth issues.
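For a sense of scale on the bandwidth problem, the peak numbers are easy to work out (the DDR3 speeds are assumptions about what FM2+ boards will run; the HD 7750 figure is its stock 128-bit GDDR5 configuration):

```python
# Peak DRAM bandwidth = transfer rate (MT/s) x bus width (bytes) x channels.
def peak_bw_gbs(mt_per_s: float, bus_bytes: int, channels: int) -> float:
    return mt_per_s * bus_bytes * channels / 1000.0

print(f"DDR3-1866 dual channel: {peak_bw_gbs(1866, 8, 2):.1f} GB/s")   # ~29.9
print(f"DDR3-2400 dual channel: {peak_bw_gbs(2400, 8, 2):.1f} GB/s")   # ~38.4
print(f"HD 7750 GDDR5 (128-bit, 4500 MT/s): {peak_bw_gbs(4500, 16, 1):.1f} GB/s")  # ~72
# Even overclocked DDR3 leaves an APU with roughly half the bandwidth
# of a ~$100 discrete card.
```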

It will be interesting to see what enthusiasts do with the choice between a 4-core Steamroller APU and an 8-core Piledriver for the desktop. SR doesn't come close to a 100% per-core gain, so 8 Piledriver cores will still win out over 4 SR cores simply on numbers. SR will be more efficient per watt, but anyone who already has an 8350 won't care about that.

Some will just want the new SR cores and go discrete, using the idle 512-shader GCN block as a heatsink to OC the CPU cores. Then it comes down to the new 28 nm node and how well it will OC. Too many variables still. Is it the new FD-SOI or the older PD-SOI? It has the potential to be the new overclocking king.

As usual, AMD is stuck between a rock and a hard place. They spent a fortune in penalties to GF to be allowed to produce at TSMC, and now they're getting screwed by Apple's wad of cash buying up the front of the TSMC fab line.
 

8350rocks

Distinguished


If you had 10 hours of battery back then...why are people thrilled about getting 7 hours with an XL battery in current laptops? Why isn't Intel making a CPU with 20 hours of web-browsing battery life if they could make one that got 10 hours 9 years ago? If Intel is so great...then surely such an awesome and powerful company has a CPU that can run for a week of web browsing without charging the battery!! By your logic, 9 years ago they had one that ran 10 hours, and Moore's law says that technology gets twice as capable every 2 years...that means in the 9 years since your "10 hour CPU", the technology should have doubled 4.5 times, a factor of more than 20. By that logic, Intel should have a current mobile CPU that runs 200+ hours of web browsing without charging the battery.
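Worked out explicitly, in case anyone wants to check the arithmetic (this just restates the doubling argument above, nothing more):

```python
# Extrapolating the disputed "10 hour" claim by Moore's-law-style doubling.
claimed_hours = 10          # the battery-life claim in question
years = 9
doublings = years / 2       # doubles every 2 years -> 4.5 doublings
factor = 2 ** doublings     # ~22.6x
print(f"{claimed_hours} h x 2^{doublings} = {claimed_hours * factor:.0f} h")
# -> roughly 226 hours of web browsing, which obviously doesn't exist
```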

However, they don't...why is that?

Because your battery didn't really last 10 hours unless you were plugged into a wall for ~6 of them.

I invite you to show me a documented test showing your same-configured laptop with that CPU lasting 10 hours of web browsing unplugged. If it was an IBM laptop, battery life benchmarks should be easy to find on your favorite site, Notebookcheck.

Benchmarks or it didn't happen. From this point forward, I would like to see you post a benchmark to support anything you say in this thread, because everything you've said so far has been wildly off base with no evidence to support it.

 

jdwii

Splendid
"Finally power efficiency is important as you get basically double the performance at same power it seems every 4 years. For example my 5930g I bought in january 2009 mid 2008 model scores 5200 in 3dmark06 and takes 70w while gaming. My new acer m3 bought in december 2012 got in january 2013 a mid 2012 model takes 65w and the cpu is 2.2x faster and the 3dmark06 score is just over 10000. The 3317u cpu on most things like cinebench video encoding is 2.2x faster then the p7350 c2d and my p7350 because of 9600m gt idles took 50w for cpu vs 28-30w for my 3317u for cpu 100% test."


Reading this, all I can say is: who cares? Are you a green-energy guy or something? The most energy-efficient design does not make it the best; if it did, AMD's Jaguar would be considered the best design in 20 years. Besides bragging rights I still see no reason to think the way you do, but it's your own belief and you are entitled to it; just don't be a troll going around insisting it's right.

"Question to you is why buy inferior technology. AMD have a lot of catching upto do. Its actually quite funny intel desktop cpus with netburst cpus from 2000-2005 were worse then there laptop cpus in terms of power efficiency. Now AMD are in a similar way there laptop cpus are more advanced then there desktop cpus in terms of power efficiency i.e. performance per watt. So the moral of this story is intel have been ahead basically in the mobile market since 2003 by far."

It wouldn't be if those laptop CPUs had to compete with desktop CPUs on performance (clocked higher); they would run HOT and require a LOT of power. Laptop CPUs are fun for looking at performance per watt, which you seem to have fallen for; honestly it's just a way to lower performance targets each generation, and at the end of the day a gimmick on the desktop. And finally, AMD is improving performance per watt: they are making their CPUs and APUs faster while improving efficiency.
 

8350rocks

Distinguished
Anyone else see this M$ restructuring article? Somehow I doubt this will be significant in any sense beyond changing the way they try to ram crap down our throats about how good Win8 is supposed to be...even though it isn't.

http://www.tomshardware.com/news/Restructure-Steve-Ballmer-Microsooft-Redmond-Devices-and-Services,23232.html

Some Kaveri speculation from BSN:

http://www.brightsideofnews.com/news/2013/6/1/amd-updates-roadmaps2c-shows-28nm-kaveri-for-socket-fm22b.aspx

If you look at the product roadmap .pdf here, it shows that HD 8XXX series are due in Q3 this year...so likely before end of summer we'll see HD 8XXX series GPUs hit:

http://phx.corporate-ir.net/phoenix.zhtml?c=74093&p=irol-IRHome
 
All right guys, it was fun to battle and all, but we really should get back on topic. How about that Steamroller?

I'll start. Steamroller is apparently only announced for FM2(+) so far, judging from the server roadmaps. The server die is used in the high-end desktop as well, and the 2014 server part is going to be a tweaked Piledriver, not Steamroller. Thoughts?
 
But are they going to scrap dies for the "high end" desktop parts, or just bin them? Maybe both?

I'd like to see some AM3+ love with a new chipset, to be honest... The right way would be DDR4 and PCIe 3.0 for the "high end" desktop from AMD, just like the HW-E plans Intel leaked some time ago.

Cheers!
 

8350rocks

Distinguished


I would think binning would be enough; they don't seem to be having the same yield problems they had before.

Agreed on new chipset. 1090X/FX
 

cowboy44mag

Guest
Personally, if they are going to do a new chipset (1090X/FX), I would like to see AM4. I would like to see a Steamroller processor for AM3+ 990FX, but top-of-the-line Steamroller processors running on an AM4 1090FX. If they are going to come out with a new chipset requiring a new motherboard anyway, I would like to see AM4, which would hopefully mean "high end" FX processors through Excavator. It would be a shame if AMD dropped its high-end FX line and only produced APUs.

I know I read somewhere a while back that AMD was "unifying" its line, which I took to mean producing only APUs and ending the FX line. I personally want to see the high-end FX line continue and keep nipping at Intel's heels until they are able to surpass Intel. A commitment like AM4 1090FX would, in my mind, show that AMD is committed to continuing its FX line, hopefully through Excavator.
 

No, AM3 is a legend like 939 and must live on until the very death of the non-APU CPU :3. I believe AM3+ 990/1090FX should stay until the end of Steamroller. From there, I think AMD will have a fully matured APU with 8/9870-ish integrated graphics, and FX will die.
 


I'd say he's just a regular spammer, haha.

Anyway, in regards to the SR-without-iGPU talk... Is it going to exist at all? If so, they should move it to a timeframe that allows them to have DDR4 (mainly) on their MoBos. Man, I wish they'd do just that for the "high end" SR parts.

APUs would benefit the most from it, but the move to GDDR5 will be enough for a good while. In all honesty, 8GB of RAM is still enough for Windows computers, and Win7 (not sure about 8) sits around the 2GB mark, 1.5GB when "optimized", so there's still a lot of memory pool left for games and everything else. I wonder if they'll use GDDR5 sticks or what, haha.

Cheers!
 

That is because the Kabini notebooks came out before the new Kabini-based all-in-one ITX systems, which are markedly better across the board than the old E-350s they replace. So if a true ITX setup running low power with ample performance is what you are looking for, you may want to look at those :D

"Anyway, in regards to the SR-without-iGPU talk... they should move it to a timeframe that allows them to have DDR4 (mainly) on their MoBos... the move to GDDR5 will be enough for a good while."

At this point we cannot assume DDR4 or GDDR5 for Kaveri, but what we do know is that on both the x86 and IGP sides Kaveri will be more up to date. Current APUs run a hybrid Turks-class Radeon core, which we know is not good at tessellation or compute; Kaveri features a hybrid Pitcairn-class core along with a new x86 arch, and these two advances alone will bring notable performance gains even if limited to DDR3-2400/2800 bandwidth. I contrast this with Intel's Iris Pro: the HD 5200 is a beast at low resolutions, where the potent i7 core is capable of driving its frame rates, but as soon as you turn up the details or bump the resolution to 1680x1050 or 1920x1080, performance falls behind an APU despite 2x the bandwidth. I will say that yes, bandwidth is important, but if the rest of the CPU+IGP architecture is flimsy, even that is a fallacy.

Soon I will be doing a demonstration: an i7-4770K vs. A10-5800K Battlefield 3 showdown at 1680x1050, same settings, same RAM, same multiplayer maps, just to show how playable the APU is at HD resolution and how much more fluid it is. This follows a person who bought a 4770 from me claiming that the HD 4600 is nothing like its reviews, and by that I mean not good at all; he complains of massive stutters and lag.





 

Trinity - not Turks. It's a Cayman derivative, with VCE taken from the GCN GPUs. The APU is called Trinity because of this 3-in-1 design. Edit: also because it matches a river's name; AMD was happy because the codename fit perfectly.
If Kaveri has a Pitcairn iGPU, that'd be GCN 1.0, unlike the GCN 1.1 in Bonaire GPUs and unlike Kabini's iGPU or the Kaveri iGPU (based on "GCN 2.0" arch) suggested by the rumors and leaks so far.
Are you absolutely sure Kaveri has a Pitcairn iGPU? What would be so "hybrid" about it? It doesn't seem like you're speculating...

Edit 2: a bit of googling with the Jaguar successor's "Beema" codename directed me to the Bhima river. I wonder if Fudzilla got the name wrong; Bhima is another Indian river like Kaveri and Kabini.
http://www.fudzilla.com/home/item/30108-amds-2014-kabini-successor-is-beema
If Fudzilla is wrong, they might have used the wrong pun... :whistle: :sol:
 

8350rocks

Distinguished
Nvidia Shield thoughts from semiaccurate:

http://semiaccurate.com/2013/06/27/an-encounter-with-nvidia-shield/

AMD OCP "Roadrunner" server systems drop the cost of VDI slices from $91 (Intel) to $38.

http://www.theregister.co.uk/2013/05/15/amd_roadrunner_opteron_ocp_systems/

The systems were part of Facebook's "Open Compute" challenge and were designed in collaboration with large financial institutions to meet their needs.

Sounds like AMD's going after the big dogs in the server market with this... I think they could regain a large portion of market share in the next 12 months.
 


Speculative, based on an article out last week.

Cayman was the HD 6900 family, and I seriously doubt Trinity is anything remotely close to a Cayman core. I remember reading that it was Turks-based, the core used in the HD 6500/6600 families.

 