AMD CPU speculation... and expert conjecture

Status
Not open for further replies.

hcl123

Honorable
Mar 18, 2013


Yes, someone grabbed an AMD die shot stripped of most of its labels and struck green bars across the "core" pipes. Bars that aren't even straight, a lousy Photoshop job (or something). But it makes sense that it's now 8 pipes, 4 ALU + 4 AGU (?)... that's the "rumor"... it's just too rough a slide to be official anything, least of all official AMD.

Also, judging by that die shot, they forgot to include the 2 FlexFPUs, which have not 3 pipes each but 4 pipes each (2 MMX + 2 FMAC).

 

hcl123

Honorable
Mar 18, 2013


Yes, that is what I find amusing: AMD supposedly always has yield problems, yet they manage to sell chips at considerably lower prices, LOL.

AFAIK about fab processes, nothing has worse yields than FinFET on bulk pushed for high performance (I won't bother presenting a flood of links now). For high performance it needs quite advanced fab techniques: oxide isolation has to be implanted beneath those "fin" channels after the fins are formed, and due to gap and fill issues, the fins tend to come out triangular (like pyramids), not rectangular.

High-performance FinFET on bulk presents the worst variability of any process. Those techniques are quite complicated; Intel must have real yield issues, and that is why their chips stay below 200mm², that is, the larger they make them, the worse it gets... and a reason the big HSW/IB-E parts are very expensive.

Compared to that, ESRAM is a walk in the park. Sure, ESRAM tends to be big; it's not a yield-friendly feature by any measure, but it yields well enough, and it's cheap enough to include in the Wii U chips, which are quite low power, quite small, and so must be quite cheap.



 

mayankleoboy1

Distinguished
Aug 11, 2010


Instead of using raw x86 opcodes/microcode, compilers provide intrinsics to programmers. They are like shorthand for the opcodes. When used in programs, they are treated by the compiler much like hand-written machine code.
To use these intrinsics effectively, the coder must understand memory addressing, register usage, register renaming and all that machine-level stuff. 90% of programmers won't ever use these intrinsics, though.

(I follow Mozilla's JS engine (SpiderMonkey) development, and those coders know machine-level code pretty well. They routinely analyse the x86 opcodes emitted for JS code to check the correctness of the JS compiler/interpreter.)

Hence why I laugh every single time someone claims a few new optimized opcodes are going to lead to across-the-board performance enhancements that NEVER come to pass [see: AVX].

AIUI, compiling a program with /arch:AVX2 instructs the compiler to try using AVX2 in loops. 99% of the time, the compiler can't convert those loops to AVX/AVX2/FMA3.
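To make concrete what "using intrinsics" means in the post above, here is a minimal sketch in C: the scalar loop and the hand-vectorized version compute the same thing, but the intrinsic version maps almost 1:1 onto specific x86 opcodes (movups/addps), which is exactly why the coder has to think about registers and alignment. SSE is used rather than AVX so the sketch runs on any x86-64 machine; function and array names are illustrative, not from any real codebase.

```c
#include <xmmintrin.h>  /* SSE intrinsics (_mm_loadu_ps, _mm_add_ps, ...) */
#include <stddef.h>

/* Plain C: the compiler may or may not auto-vectorize this loop. */
void add_scalar(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Hand-written with SSE intrinsics: 4 floats per iteration, by construction. */
void add_sse(const float *a, const float *b, float *out, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   /* unaligned 128-bit load */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); /* maps to addps */
    }
    for (; i < n; i++)   /* scalar tail when n is not a multiple of 4 */
        out[i] = a[i] + b[i];
}
```

The AVX2 equivalents (`_mm256_*` on `__m256` registers, 8 floats at a time) follow the same pattern, which is what the compiler is being asked to generate automatically when you pass the AVX2 flag.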
 

hcl123

Honorable
Mar 18, 2013


200W is for enthusiasts; they always OC. Period.

Look at an FX8320 OC'd to 6GHz. No, it's not fake, it's 6GHz on a good water block, with an idle temp of 20ºC 8) ... The lower the temperature you can cool a chip to, the lower the power drawn, no matter if your chip is marked 1000W (understand??)... the inverse is also true: rated 45W, but the higher the temp, the higher the power drawn, which is why the "hot Hasfail", after heating up, is no 45W anymore...
http://www.xtremesystems.org/forums/showthread.php?284577-8350-Power-Consumption&p=5195002&viewfull=1#post5195002

http://img853.imageshack.us/img853/591/7rs8.jpg

That machine is not drawing more than 300W on average; perhaps it could be TDP-rated at that.

OTOH, if power is a concern, there you have the FX8350 at BELOW 45W... yes, 45W at 2.7GHz, 8 real cores, and a better TDP than a 45W Hasfail, because Intel never counts large parts of the "uncore" on the 3.3V rails.

http://www.xtremesystems.org/forums/showthread.php?284577-8350-Power-Consumption&p=5163272&viewfull=1#post5163272
8-core FX within 45W TDP envelope...

Note for Prime95 LFFT: At stock specs the rated TDP was exceeded by 15.5% during Prime95.
The same margin is allowed for 45W TDP test too.

Frequency: 2.7GHz
Voltage: 0.925V

Prime95 LargeFFT
DCR Imax (A): 47.25A
DCR Vsen (V): 0.919V
DCR Pmax (W): 43.42W

43.42 / 1.155 ("Prime95 margin") = 37.6W
37.6W + 6.75W (BaseTDP = NB, Logics) = 44.345W

X264 R2230 64/10-bit BD-RAW 1080p -> MKV
Preset: Slow, Tune: Film, Bitrate: 9000, Output: MKV

DCR Imax (A): 41.5A
DCR Vsen (V): 0.919V
DCR Pmax (W): 38.13W

38.13W + 6.75W (BaseTDP = NB, Logics) = 44.88W
(TDP ratings invariably must be lower than the max power drawn with Prime... umm, closer to 35W than 40W is what that chip could perhaps be marked)

Perhaps with better turbo, better power management, better DVFS (dynamic voltage and frequency scaling), it could do a little better than 2.7GHz and turbo as high as Intel (right now 2.7GHz is a little low for an FX).

Look, perhaps below the 25W mark:
http://www.xtremesystems.org/forums/showthread.php?284577-8350-Power-Consumption&p=5163749&viewfull=1#post5163749
Well under 25W average now, the peak is barely over...

Note for Prime95 LFFT: At stock specs the rated TDP was exceeded by 15.5% during Prime95.
The same margin is allowed for 25W TDP test too.

I did not underclock the NCLK domain as I wanted to keep the L3 cache at stock performance level.

Frequency: 1.8GHz
Voltage: 0.7875V
NCLK Frequency: 2200MHz
Voltage: 1.075V

I cannot make separate DCR measurements for VRM Loop 2 (CNB) currently, so the new BaseTDP is an estimate.
Default BaseTDP = 6.75W (CNB 1.1625V -> 1.075V): 6.75 * ((1.075 / 1.1625)^2) = 5.77W

Prime95 LargeFFT
DCR Imax (A): 28.75A
DCR Vsen (V): 0.781V
DCR Pmax (W): 22.45W
EPS12V Clamp Imax (A): 2.95A
EPS12V Vin (V): 12.13V
EPS12V Pmax (W): 35.78W

22.45 / 1.155 ("Prime95 margin") = 19.4W
19.4W + 5.77W (BaseTDP = NB, Logics) = 25.17W

X264 R2230 64/10-bit BD-RAW 1080p -> MKV
Preset: Slow, Tune: Film, Bitrate: 9000, Output: MKV

DCR Imax (A): 27.75A
DCR Vsen (V): 0.781V
DCR Pmax (W): 21.7W
EPS12V Clamp Imax (A): 2.58A
EPS12V Vin (V): 12.13V
EPS12V Pmax (W): 31.3W

21.7W + 5.77W (BaseTDP = NB, Logics) = 27.47W

DCR = Current flow over VRM inductors
EPS12V = Current draw from EPS12V connector measured with a separate clamp meter (VRM Iin, both loops included).

Cooling: Passive Thermalright Ultra 120 Extreme (not very well suited for the purpose...), peak temperature after Prime95 LFFT 63ºC (TSI).

That is the same process as the "Centurion" (and that one must be a little improved)... whoever complains about 220W doesn't understand anything about chips, circuits and nanotechnology; they only parrot the charts rubbed in their faces.

It's hard to say AMD has the better fab process, there are too many things to consider; if you want low power, you must give up high clocks. Nevertheless, this SOI process, designed with high performance in mind, owes nothing to Intel's.

 

Cazalan

Distinguished
Sep 4, 2011


Sure, there are cheap inverters, but they're likely to blow up on you. I've fried a few in my car. Never again will I trust a cheap inverter.

6 hours is a lot, but that's the peak for when the hardware is brand new. Anyone who's used rechargeable batteries knows the life degrades significantly over time.

My phone could easily go a week when it was new. Now I have to charge it every night.
 

hcl123

Honorable
Mar 18, 2013
Every night? ... That battery is gone; its capacity has completely faded.

And yes, due to battery capacity fading, it's better to have a laptop with a better battery than with a lower-power-rated CPU, and the cooling also counts a lot; it's no good if your CPU is low power but your laptop heats up a lot... That is why the "slim ultras" tend to be worse than a good entry-level notebook.

But propaganda is king; people look at the CPU ratings without ever giving a second thought to the whole-system quality and features, which matter much more for low power than the CPU does.

EDIT:

that is why Apple got a flood of complaints about their MacBooks running hot on SNB
 
Meh, most tasks can be done in parallel; you just need to rethink the process flow. We tend to design in bottlenecks without knowing it, and then complain that we can't go any faster. Unfortunately, methods that organize data so it can be worked on in parallel also tend to be less efficient when forced to run serially. It's mostly in task and process organization. And gamer, I have no idea how you can claim not to use GCC and keep a straight face. I can attest that GCC is heavily used in the server industry, and our primary desktop product uses it for binary code (most of our desktop apps are written in Java and designed to be modular).
 


But here's the key: the ESRAM on the XB1 is tied to the GPU clock. The higher the clock, the more yields become an issue. Hence why the XB1 is (allegedly) having yield problems, and why there's even a rumor that a downclock may be in the mix, while the Wii U doesn't have as many yield problems.
 


In the x86/Windows world, GCC is in the minority. And it's a small minority. In the server world, where you have to worry about a much wider range of processor architectures and platforms, GCC makes more sense.

Different worlds, palladin. Never try to mix and match the two.
 

juanrga

Distinguished
BANNED
Mar 19, 2013


And in the real world, Windows is a minority. Android/Linux is currently the majority. Intel and AMD have already announced plans to migrate to a post-Windows world.
 

hcl123

Honorable
Mar 18, 2013
425
0
10,780


YES, the Wii U doesn't have yield problems, yet it has the same 32MB as the XB One that supposedly has yield problems, LOL.
Some calculated this takes between 40 and 50 mm² of die space, and doesn't the fact that both have it tell you the concept is quite feasible, from small chips up to much bigger ones?

Why not Kaveri? ... which seems to be your point against it. If a Steamroller module is the same size as a PD module by that die shot, yet could run 4 threads per module (2 in SMT), it would have the same 8 threads as Hasfail and as the XB One. The XB One has 4 channels of DDR3, but it also has a 768-SP GPU (oops, lol), plus 4 more pipes of dedicated functions (especially for Kinect) and a full audio processor on die, besides the video UVD and display blocks of the other APUs. Kaveri will have a 512-SP GPU and none of those additional functions, so it can very well get by with only 2 channels of DDR3, which could be tweaked to 2133 or 2400 on the AMD brand. The 28nm in that die shot is getting close to a 30% improvement in size, and the Kaveri GPU has only 33% more SPs than Trinity/Richland, so even with that ESRAM on board the result would be a chip not much bigger than Trinity (especially if that ESRAM is T-RAM)... and quite a bit smaller than the XB One chip, which is quite above 300mm² for sure. Kaveri with ESRAM could easily have double the effective GPU bandwidth of Trinity/Richland and probably above a 60% overall improvement in performance.

[EDIT: and as can be suspected from the "scheme" of the Berlin APU for servers http://semiaccurate.com/2013/06/17/amd-announces-seattle-berlin-and-warsaw-for-servers/amd-berlin-block-diagram/ where a large part of the system I/O functions passes to the chipset (like an I/O-stripped-down Trinity die), it is possible to make a Kaveri with a 512-SP GPU almost the same size as Trinity on 28nm, yet with 32MB of ESRAM on board. But I think they will not do it; Kaveri will be ~280mm² with 32MB ESRAM, to fit current chipset models, and when they do strip the I/O out, it will be Kaveri 2.0 for FM3, but with a 768-SP GPU... so the same size.]

So any yield problems are most likely not due to the ESRAM but because the whole XB One chip is almost the size of a Tahiti... a big chip...

All processes and all chips have yield problems (some more than others); there never was a design and implementation that got 100% yields.

If you want to believe the smear-and-slander spin propaganda, that's up to you, but I think I demonstrated that Intel does NOT have the best process around, and, believe me (if you want), it has the worst yields bar none (because variability can also be considered a yield problem).

I think that is why they didn't go for that eDRAM on die... it's OFF die... yet they make it an L4 nonetheless; only, the 6T SRAM CAM structures for the tag files (to make it a cache) for those 64MB of eDRAM occupy almost the same space as the eDRAM itself, lol.

AMD's ESRAM is a pseudo-cache... it doesn't use CAM structures, it's dynamically assigned, and the tags are stored in the same kind of cells as the data... as a matter of fact, tags and data tend to be stored in the same row...

 

Cazalan

Distinguished
Sep 4, 2011
"TSMC will reportedly dedicate the phase-4, -5 and -6 facilities of Fab 14, its 12-inch fab located in southern Taiwan, to produce Apple's A-Series chips. Sources said this foundry will initially allocate a capacity of 6,000 to 10,000 12-inch wafers for the manufacture of Apple's chips. Output will rise gradually starting in 2014."


And there you have why Nvidia/AMD are stuck at 28nm through 2014. Gates should have let Apple die when he had the chance.
 

hcl123

Honorable
Mar 18, 2013


Yes, you proved your ignorance. Especially when running an intensive game, a 77W IB can already go above 200W; a Titan can go over 300W if not 400W... If you don't believe me, go search through the many power measurements; the TDP marks are for "typical workloads" on average...

And 400W was what one OCer got on air, at full load, with Prime95 or something more taxing, which is well above typical.

People overvolt because otherwise they couldn't get the transistors to fire faster. A transistor doesn't switch instantly; there is a "build-up" period and then the switch... of course we are talking about picoseconds here... and the higher the voltage, the faster the build-up period.

Also a good water block, because the lower the temperature you manage to run at, the less power is wasted, no matter if your chip is rated 25W or 1000W... understand?

EDIT: and if you care to look, that same record holder, the same exact chip, no more no less, is happy running Prime95 drawing less than 25W (CPU alone), lol... at that rate you could run 4 FX8320s for the power of one IB 3770K, lol... what a record holder...

 

I want to know how pricing will affect the end chip with on-package or on-die eD/ESRAM, since current APUs seem to ditch the L3 cache for cost purposes. I think a tri-module Jaguar (around 2.5-3.0GHz turbo) + 512 GCN-based Radeon cores would be a lot cheaper.
If it were before, I would have thought that at least AMD was safe from fab-monopolizing, since only Nvidia makes ARM SoCs. But now that AMD is talking about their own ARM SoC (and most likely more on the way), I wonder how Apple will react. I hope AMD doesn't get stuck with GloFo... I remember the old GPU roadmap where GloFo was supposed to make Cape Verde GPUs...
Samsung to the rescue..?

 

blackkstar

Honorable
Sep 30, 2012


I am in total agreement with you on power consumption in desktops. Bad power consumption means I basically have to leave my desktop plugged in all the time. It's just not acceptable.

Oh wait, desktop computers are more like home appliances. Oh, that's right: air conditioners use over 1000W, refrigerators can use between 1500W and 3000W, and electric water heaters can use almost 5000W.

Please, however, keep masturbating over 500W for a computer. If you are so concerned about power consumption, you should perhaps give up air conditioning, electric heating, refrigerators, freezers, and hot water.

An electric water heater will basically cost more than ten times as much to run as an FX 8350 at 464W, which means that if that FX would cost you an extra $10 a year, the water heater would be $100 a year.

Do you feel stupid yet? Because you sure look stupid.
 

hcl123

Honorable
Mar 18, 2013


Most probably ESRAM will be quite a bit cheaper than GDDR5; the Wii U has it, and those must be small, quite cheap consoles. And better: no GDDR5 means the whole platform for Kaveri can stay the same too... cheaper (comparatively).



If I were AMD, I wouldn't mind being stuck with GloFo for a while, if they prove they can deliver good FD-SOI processes down to the 10nm node, which could quite reverse the situation compared to the 32nm SOI transition. Yes, ALL MY PRODUCTION would transition to FD-SOI under those conditions. FD-SOI is not only THE BEST bar none for low power, it can also be better for high performance, and, as the cherry on top of the cake, it can be cheaper than plain "bulk" (and most certainly than FinFET on bulk), all factors accounted for... yes, even GPUs would be FD-SOI...

 

hcl123

Honorable
Mar 18, 2013


You must be dreaming, or you must not be feeling very well, lol... yet saving $10 is better than nothing, right?

It's amazing how you still "blindly" believe some benchmarks after more than 100 pages of posts in here (go back and reason)... just a hint: that difference in voltages is called Dynamic Voltage and Frequency Scaling (Intel is very good at it, all right), a headache for you for sure, but nothing in today's chips is fixed or "absolute"; as an example, the FX8350 is happily running at 25W, and it even passes Prime95... (it can scale quite low, too)

Well, believing the Earth is flat is worse... but just out of curiosity, what is your religion?

 

Ranth

Honorable
May 3, 2012



You're starting to hurt my brain... like, seriously. So the 3770K needs 0.7 volts, and? Explain how this is of any importance. "That is a sign of Intel's superior power consumption." So Intel has low watts? The only place that really matters is small enclosures and mobile, though... The problem with Hotwell is... it's freaking hot. Now, I don't know if Hotwell is as hot on mobile as it is on desktop. But if it is, I'd rather grab something that's cool and quiet than hot and noisy.

Also, of all videos, you chose that one? Those butthurt guys? That video was a response to Teksyndicate, and their video was about what performance you get with the 8350 at settings that people actually USE in games: if you're going to build a system at a given price point (as far as gaming goes), spend more on the GPU and less on the CPU, as an 8350 + 7870 will beat an i5-3570K + 7850 at the settings sane people would play at (not ultra-low at 800x600... no, mid-high at 1920x1080...).

Would you be so nice as to leave? Seriously? You're driving the page count up, and for no good reason.
 

griptwister

Distinguished
Oct 7, 2012


You are so annoying... I found your comment on their channel. Just give it a rest... I don't get you! You have, like, zero tech knowledge, yet you make these assumptions and claims.

Nissangtr786 1 hour ago
I liked this video as it dispels myths on a variety of benchmarks with everything as same as it can be. I got an i5 3317u ivy and am amazed how little watts it consumes and is 2.2x faster then my old c2d p7350. It always is frustrating seeing so many deluded comments as in buy fx8350 for future, its a lot cheaper when the fx8350 consumes so much electricity and they make it like the fx8350 get similar fps in games when even toms shows the i3 getting better overall game performance over an fx8350
 

juanrga

Distinguished
BANNED
Mar 19, 2013


I am sorry, but that record goes to Intel. An FX-8350 @ 4.8GHz consumes about 364W. The Intel six-cores @ 4.7GHz are above 500W: 525W for the 3930K and 538W for the 3960X.
 

mlscrow

Distinguished
Oct 15, 2010
Okay, so the ultimate conclusions of this thread can be:

A) hajifur is a complete waste of worldly resources.
B) The 220W TDP of the 5GHz FX9590 is spot-on, as it's simply an overclocked FX8350, which we all know is based on Piledriver, which we all know requires exponentially more power as you overclock it beyond 4GHz. 220W is actually surprisingly low for a 5GHz OC (compared to an FX8350 OC'd to 5GHz), but we can attribute that to process maturity and AMD lowballing.
C) High-end performance enthusiasts don't care about power consumption anyway.
D) The FX-8350 provides better performance per dollar than its Intel counterparts.
E) The FX-8350 provides worse performance per watt than its Intel counterparts.
F) The point of this thread is supposed to be Steamroller anyway, so it's dumb to battle the troll in this virtual stadium; let's get back on topic.
G) Steamroller should put AMD back in direct competition with Intel in the high-end.
H) AMD only shows a quad core Steamroller available for 2014, which is very, very sad and hopefully they'll realize their mistake sooner than later, because we all want an 8-core Steamroller FX for 2014, don't we?
 


The desktop i7s are not low end SKUs. They are actually some of the higher end units as the SKU list goes Celeron -> Pentium -> i3 -> i5 -> LGA1150 i7 -> LGA2011 i7. Also like I stated before, Intel is pretty well tapped out as to how fast they can crank the Haswell/Ivy Bridge arch on their current 22 nm process because they optimized for low power over all else. This is why laptop and desktop are similar speeds and power dissipations- LGA1155/1150 Haswell/IB is a laptop CPU and doesn't have much clock headroom which can be exploited on the desktop. They couldn't make it much faster even if they wanted to without significant changes to the manufacturing process and/or chip macro-architecture.

Also, Intel may not be able to very easily spit out a 6-core CPU which meets the <$200 and 3+ GHz base clock requirements of the mainstream segment. If they were going to do so now, it would probably be a 32 nm Sandy Bridge variant rather than a 22 nm Haswell as I am guessing their 22 nm process isn't up to the task yet. Intel has been pretty slow in ramping up production with the last two nodes. They have introduced relatively mediocre processors with small die sizes and low clock speeds at first- 32 nm came with the 2C Clarkdale and 22 nm came with the 2C/4C Ivy Bridge. Notice that they didn't intro the arch on very high-end, high-clocked, many-cored chips. This is because making small, low-clocked chips allows for you to have some yields despite a very immature process. It is only when a process is much more mature does Intel open it up to much larger chips with higher clock speeds like SB-E. I don't think it's AMD being not competitive as AMD is able to equal or beat the quad-core i7s with their FXes costing 2/3 as much in heavily multithreaded tasks. You'd think Intel would allow a six-core i7 to sneak down into the $250-300-ish territory to stave off the 4-module FXes if they could do so easily as that would shut down AMD, as they did with the newly-inexpensive Q6600s when the Agena Phenom X4s debuted. But they don't. I guess they probably can't or else they probably would do so.

2. 32nm sandy bridge is 70-80% better performance per watt then piledriver in terms of kilojoule to complete the same task.

Way to mix units there buddy (golf clap.) Sounds like you are repeating some marketing points somebody threw out there. Joules are a quantity of energy while watts are a rate of energy usage, aka power. I suppose you are trying to say that on some benchmark Sandy Bridge (which we weren't even really talking about, we were discussing Haswell) finishes earlier and uses less energy to complete some unspecified task than Piledriver. Without the particular setups and task used, it's impossible to arrive at a meaningful figure for energy efficiency. Cite your source and we'll discuss.


3. Intel already shown with haswell 8 core announcement they can crank the power consumption up if they wanted. Eben that 8 core haswell cpu will probably take less then the fx4300 power consumption wise. Intel are in a position where they could release easily a 16 core cpu at same power consumption as the fx9000 series cpu if they wanted to. The new ivy bridge e and haswell e has a better heat spreader I believe which will solve all heat problems.

Intel might have announced an 8-core Haswell-E* unit but they sure haven't shipped it. We don't have a clue to how it clocks, how it performs, how hot it runs, and if it is any better or worse than Sandy Bridge-E*. Shoot, we don't even know what socket it will use (rumor is a new 2011 land socket called "Socket R3.") We only have the current 2C/4C Socket G2/BGA laptop and LGA1150/BGA Haswells to compare against.

I would highly doubt that Intel can even release a 16-core CPU presently. It would be too large to get more than a handful of viable chips out of a wafer. The current 32 nm process is what Intel uses for its >4 core CPUs, presumably because it's more mature and can yield larger chips better than 22 nm. An 8-core SB-E is around 400 mm^2 in size. A 16-core unit would be twice that size, an >800 mm^2 utterly unyieldable chip. A 22 nm 4-core Haswell is 177 mm^2, which if you made a 16-core variant would likely be in the 600 mm^2 range if you stripped out the IGP. That would be very, very tough to yield, especially on an immature process. Rumor has the $4000+ top-line Haswell-EXes having 12 cores ($4000+ Westmere-EXes on 32 nm have 10 cores maximum.) That sounds more reasonable. Intel would need to steal a page from AMD and incorporate on-die QPI links to tie two separate 8-core dies together in an MCM package to make a 16-core chip such as AMD does with the 16-core Opteron 6300s. That would still be really ugly as that chip would have 8 memory channels to route from the socket. Hello 3000+ land sockets and EATX single-socket boards!

4. There was no p4 at 4ghz at stock and I know how great these pentium m machines are, the 1.3ghz pentium m destroyed a 1.6ghz p4 and 2.66ghz p4. I had a 1.5ghz one as well but the thing was these took 1/6th the power could have 10 hour battery life on ibm r40 and was so light platform with centrino as well as with the extra cache of the p4 and shorter pipelines embarrassed p4s especially at gaming. A top end 2ghz or 2.26ghz pentium m will destroy a p4 3.8ghz at gaming any day of the week.
Pentium m cpus were like years ahead of the time, nothing was even close to as good in the market as them for a good few years.

Pentium 4s could get well over 4 GHz, particularly in later iterations. The fastest stock P4 was 3.8 GHz for crying out loud, and that was a 90 nm Prescott, not even a later 65 nm Cedar Mill, one of which held the world overclocking record for several years until unseated by an FX-8150. A 1.3 Banias was not faster than a 2.66 P4B in very many tasks, either, nor did any computer from 2003-2005 have a 10 hour battery life unless you had multiple batteries adding up to at least 200 Wh in total capacity. The rest of the chipset, hard drive, display, and such were too inefficient. You are only starting to see that today with CPUs which are much more efficient than any P-M running on much more efficient chipset, LED-backlit displays, and with SSDs instead of HDDs. And that's essentially an "active idle" measurement with an 80+ Wh battery, not the one with half that capacity that you'll see in an "ultrabook."

Pentium Ms were not especially ahead of their time, and there were certainly chips shortly after it that exceeded their performance. The first Pentium M Banias was simply a PIII-M Tualatin with a few extra tweaks for power consumption, twice the L2 cache, and the original P4's FSB and SSE2 capabilities. Performance per clock wasn't astoundingly better than Tualatin but it did clock a few hundred MHz higher. The second Pentium M, the 90 nm Dothan, was followed up by the Core Solo/Core Duo "Yonah." Yonah was essentially a die shrink of Dothan to 65 nm with SSE3 and a second core. It performed as well or better than Dothan per core and per clock. So the Pentium M wasn't anything particularly unique if you really look at it. The competing Mobile A64 and later the Turion 64 were every bit as fast if not faster per clock than the Pentium Ms. The big thing Intel had with the Pentium M was a real idea of platform marketing with the Centrino brand, as well as a moderate improvement over the Mobile A64/Turion 64 + NVIDIA chipset platform's power consumption. The P-Ms certainly weren't faster than the AMD K8s and it took the second follow-on to the P-M (the Core 2) to actually beat AMD's K8.

5. Intel is definately the leader in low power computing in terms of performance per watt nothing even comes close and finally the reason why power matters is I see constantly people recommend an fx8300 or fx6300 etc when these take on peak load around 90-100w then an intel competitive cpu. I think if you are an above average user an fx8350 will cost around £30 a year over an i7 3770k to run.

Like I originally said, power consumption doesn't really matter, as the difference in load power consumption is a very small cost. The only reason power consumption really matters is if you are truly TDP-constrained, such as in a very small form factor setup, and absolutely cannot exceed a certain TDP point, lest you overheat the system. This is especially true with gaming setups.

You are not pushing your system at 100% CPU load very much, because you would have to be playing CPU-bound games 24/7/365. That simply does not happen. Your machine is going to be idling or turned off most of the time, in which case the difference in power consumption is zero (turned off) or largely dependent on what HDD/GPU/MB you picked rather than which CPU you are running, as CPU-specific idle power is pretty similar between the two. You certainly are not going to make up the $150 or more in difference between an i7-4770K plus a decent LGA1150 board and an FX-8350 plus a decent AM3+ board in power costs at current electricity rates of roughly a dime per kWh, unless you are talking about a time period of decades in real-life usage. The extra 50 watts or so a similarly-equipped Piledriver system uses versus a Haswell system at full CPU load would take over three years of sustained 24/7/365 100% CPU usage before it cost as much as that price difference. You aren't going to recoup the difference in purchase price, period.

If you really cared about power usage while gaming, you'd use an IGP or a lower-end GPU rather than a 200+ watt high-end GPU, as GPUs have a much bigger effect on power consumption than CPUs. Yet I hear very, very little about GPU power consumption and efficiency, likely because nobody really cares due to the fairly small cost difference. Witness the fact that Tom's is currently running a front-page article about using two 300 W GPUs. So I would give up the "the price of power makes it cost-efficient to spend way more on an Intel CPU" argument, as it has been thoroughly debunked.
 

jdwii

Splendid


Define what you mean by "most" and remember that we're not discussing servers.
 

jdwii

Splendid


Until you explain why this matters when it's completely stable, a question you continue to avoid, you are nothing but a troll.
 