AMD CPU speculation... and expert conjecture

The 6800K with DDR3-2133 has been a very good HTPC / casual gaming PC for me. I have a 360 wireless controller configured for it and frequently load up all sorts of stuff. I even have various emulators loaded, so it's not uncommon to play PSX and N64 era games on my big living room screen.

That being said, it has one downside. Even with a Noctua low-profile HSF on it, I can still hear fan noise when I play. This is a sticking point for me: I don't want to hear any whirring whenever I'm enjoying a game, as it detracts from the experience. I'm also a bit hypersensitive to sound, so what's "background" noise to some people is grating to me. I had planned on replacing it with an A8-7600, which looks to offer similar performance but at a much lower TDP and thus lower cooling requirements.
 
There seriously needs to be a Mini-ITX version of an AM3+ board with a 970 chipset. I could really see an FX-6300 being used in a Mini-ITX system with a midrange dGPU. Small, tight and efficient.

To those of you still arguing the "core" concept: AMD's design does indeed have two cores per module, as the two x86 integer units are separate entities with separate instruction pipelines. The FPU has always been a co-processor, going back to the days of x87. Even when they started integrating it onto the same chips during the 486 era, it still maintained a separate set of registers, its own instruction pipeline and a register stack. So AMD sharing a large SIMD FPU between two x86 cores doesn't make them any less x86 cores.

Now as to the "why" of it all. AMD wasn't lying when they said that third ALU doesn't get used much. x86 integer code is largely serial: most instructions depend on the result of the one before them. Learn some basic x86 ASM and it becomes painfully obvious why this is so. What both companies do to get around this is speculative, out-of-order execution: the CPU analyzes the code stream and attempts to guess which instructions can safely be executed ahead of time. That is where those extra ALUs come in handy. The predictor marks the instructions that need to run now, the front end decodes them into miniature RISC-like operations, and the scheduler assigns those to the internal ALUs for processing.
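To make that concrete, here's a rough C illustration I threw together (mine, not anything from AMD or Intel) of a serial dependency chain versus independent work that a superscalar core can spread across its ALUs:

```c
/* Illustration only: instruction-level parallelism.
 * The first loop is one long dependency chain -- every add needs the
 * previous result, so extra ALUs sit idle no matter how many exist.
 * The second loop keeps four independent chains in flight, which an
 * out-of-order scheduler can issue to multiple ALUs per cycle. */
#include <stdio.h>

int main(void) {
    long a = 0, b = 0, c = 0, d = 0;

    /* Serial: each iteration depends on the previous one. */
    for (long i = 0; i < 1000000; i++)
        a += i;                 /* must finish before the next add starts */

    /* Parallel-friendly: four independent accumulators. */
    for (long i = 0; i < 1000000; i += 4) {
        a += i;
        b += i + 1;
        c += i + 2;
        d += i + 3;
    }

    printf("%ld\n", a + b + c + d);  /* keep the work from being optimized out */
    return 0;
}
```

The four accumulators in the second loop can retire in parallel; the first loop can't, no matter how many ALUs the chip has.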

I say all this because AMD lacks some critical technology to do what Intel does. If AMD had just made a four-core "3 ALU" CPU, it would still have lost on "IPC", because they lack the ability to reliably use that third ALU. Instead AMD decided to split their design into two independent integer units and let software be the one to sort out how best to assign work to the hardware. The performance difference between the old K10 and the BD design was almost entirely attributable to the long L2 cache latency, something AMD's engineers didn't expect to be as great as it was. Typically engineers spend a large number of man-hours tightly timing the scheduler, MMU and cache load/store cycles. AMD didn't do this and instead let a computer program auto-optimize it, and that optimization only worked halfway. AMD still needed its engineers to go through and tighten up the timings, and that is the big difference between BD and PD. There is still quite a bit of inefficiency in the timings; there isn't much else they can do with the way the MMU is split between the integer cores.
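If anyone wants to see that cache latency for themselves, the classic trick is a pointer-chasing loop. This is just my own rough sketch with arbitrary sizes, not a proper benchmark:

```c
/* Crude pointer-chasing latency sketch (illustrative, numbers arbitrary).
 * Every load depends on the previous one, so the average time per
 * iteration approximates the load-to-use latency of whatever cache level
 * the buffer fits in.  ~64 KB should sit in L2 on most of these chips.
 * Note: a sequential chain is prefetcher-friendly; randomize the order
 * for a more honest number. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (64 * 1024 / sizeof(void *))
#define ITERS   100000000UL

int main(void) {
    void **buf = malloc(ENTRIES * sizeof(void *));
    for (size_t i = 0; i < ENTRIES; i++)
        buf[i] = &buf[(i + 1) % ENTRIES];   /* circular chain */

    void **p = buf;
    clock_t t0 = clock();
    for (unsigned long i = 0; i < ITERS; i++)
        p = *p;                             /* serial dependent loads */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%p  %.2f ns per load\n", (void *)p, secs / ITERS * 1e9);
    free(buf);
    return 0;
}
```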

That's really just to explain why AMD did what they did; it was them working with what they had. You couldn't just die-shrink the K10 and call it a new chip, Llano pretty much proved that. They needed better prefetch and scheduling tech, something to keep the internal ALUs fed. Also the "FPU", or really SIMD, is rarely used: as a percentage of overall instructions executed, SIMD instructions are a small minority compared to the ALU and MMU ones. But when a SIMD instruction is used, it typically has a very large impact on performance, because it can execute in a few cycles what would take the integer units a hundred or more cycles.
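For the curious, here's a tiny sketch of what that SIMD leverage looks like, using SSE intrinsics. My own illustration, nothing AMD-specific:

```c
/* Tiny SIMD illustration: one packed SSE add does the work of four
 * scalar adds.  Real workloads see much bigger swings, which is why a
 * rarely-used instruction class still matters so much. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];

    /* Scalar version: four separate adds. */
    for (int i = 0; i < 4; i++)
        out[i] = a[i] + b[i];

    /* SIMD version: one instruction adds all four lanes at once. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));

    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```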
 

iron8orn

Admirable


The 990 is not high-end anymore. It's pretty much dumb to buy anything but a 990 board with a 6350/8350 and overclock it, but the value quickly loses out to Intel when comparing price vs. performance, plus you don't even need to overclock an i5.

NEED THOSE PRICE CUTS NOW!

 


That's why I put "high end" in quotes. It has 38 PCIe lanes, which require additional layers on the board to accommodate them. By comparison, the non-LGA2011 Intel boards have fewer traces and are more comparable to a 990X or 970.

The only thing the 900 series is missing is PCIe 3.0, and we haven't even saturated PCIe 2.0 yet, so it's not even a problem. Just more myths being passed around without actual technical knowledge.
 

iron8orn

Admirable
PCIe 2.0 is not saturated in bandwidth, but it lacks the better coding efficiency of PCIe 3.0, which shows up as a few more frames per second in games.

In my mind, let's say you were using Windows 7 and PCIe 2.0, then upgraded to Windows 8 and PCIe 3.0 with the same CPU and GPU. That is really a solid stability increase.
 

iron8orn

Admirable
You're crazy, palladin. You should get some quieter fans for your CPU instead of downgrading to a worse CPU. The Kaveri has lower power consumption and better performance if you must waste money emulating old games, but I'm talking about the latest games.

Sure, the 970 could be better, but few boards have been made to its real potential, man. Talking real world, not wiki info. Price cuts seem much more logical than making new models.
 
PCIe 2.0 is not saturated in bandwidth, but it lacks the better coding efficiency of PCIe 3.0, which shows up as a few more frames per second in games.

Yeah .... umm no. Seriously ... no. You are horribly wrong on this.

The only difference between the two is bandwidth: PCIe 3.0 delivers roughly double the usable bandwidth of PCIe 2.0 (8 GT/s with the more efficient 128b/130b encoding, versus 5 GT/s with 8b/10b). So a PCIe 3.0 x8 link is about the same as a PCIe 2.0 x16 link. They are even backwards and forwards compatible with each other.
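Here's the per-lane math from the published signaling rates, for anyone who wants to check it. The "coding efficiency" you keep bringing up is already baked into these usable-bandwidth numbers:

```c
/* Per-lane PCIe bandwidth from the published signaling rates and
 * encodings -- the coding efficiency is part of the bandwidth figure. */
#include <stdio.h>

int main(void) {
    double gen2 = 5.0e9 * (8.0 / 10.0) / 8;     /* 5 GT/s, 8b/10b  -> bytes/s */
    double gen3 = 8.0e9 * (128.0 / 130.0) / 8;  /* 8 GT/s, 128b/130b */

    printf("PCIe 2.0: %.0f MB/s per lane, x16 = %.1f GB/s\n",
           gen2 / 1e6, gen2 * 16 / 1e9);
    printf("PCIe 3.0: %.0f MB/s per lane, x16 = %.1f GB/s\n",
           gen3 / 1e6, gen3 * 16 / 1e9);
    printf("ratio: %.2fx\n", gen3 / gen2);      /* ~1.97x */
    return 0;
}
```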

Seriously, there is nothing wrong with the 990 platform. Ever since the memory controller was moved into the CPU, the northbridge chipset means almost nothing nowadays. It only gets hate because it's associated with the BD release, and it's vogue to hate on BD.

Now as to why the bandwidth isn't important: you don't send data to the dGPU for the sake of sending it, you send it there because the GPU does work with it. Bandwidth only matters when the dGPU can do the work faster than you're able to send it; otherwise more bandwidth doesn't do anything for you. Current top-end dGPUs are not doing work faster than a PCIe 2.0 x16 link can feed them. Current midrange dGPUs are not doing work faster than a PCIe 2.0 x8 link can feed them.
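To put rough numbers on that (my own back-of-the-envelope; the per-frame upload figure is purely hypothetical):

```c
/* Back-of-the-envelope: what a PCIe 2.0 x16 link can move per frame at
 * 60 fps.  The per-frame upload figure below is hypothetical, purely to
 * show how far the budget stretches. */
#include <stdio.h>

int main(void) {
    double bus = 8.0e9;             /* bytes/s, PCIe 2.0 x16 (from above) */
    double per_frame = bus / 60.0;  /* transfer budget per frame at 60 fps */
    printf("per-frame budget: %.0f MB\n", per_frame / 1e6);  /* ~133 MB */

    double streamed = 50.0e6;       /* hypothetical per-frame upload */
    printf("link busy %.0f%% of the frame\n", 100.0 * streamed / per_frame);
    return 0;
}
```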

This is something we see with every generation of system interconnect: the technology for the new interconnect always arrives four to five years before it's needed. We saw this with PCI vs. AGP, AGP vs. PCIe, and now PCIe 2.0 vs. PCIe 3.0. You can expect PCIe 3.0 to become a requirement in another two to three years, if that. The trend has been that cards require a smaller bandwidth increase each generation, because games have been very conservative in how much graphics data they push through the pipe. If we finally move on to 64-bit gaming and the huge amount of memory that opens up, we could see this trend reverse and consumer dGPUs come out with 8GB or more of graphics memory.
 

iron8orn

Admirable
Dude, you're just jumping all over, really.

My point was that they need price cuts or it's not worth it anymore.

We all know everything AMD memory-related cannot compare to Intel.

I am pretty certain I have read that the coding efficiency of 3.0 is worth a few fps with the same hardware.
 
I am pretty certain I have read that the coding efficiency of 3.0 is worth a few fps with the same hardware.

Nope, not close.

That's like saying putting a statue of Bill Gates on your PC raises your FPS by a few. Or that connecting to a NAS over 10GbE will let you copy data faster off a 7200 RPM drive.

You're spreading a very bad myth that has been disproven many times over. Tom's even did a segment where they took a PCIe 3.0 card and measured FPS at x4, x8 and x16, then did the same on a PCIe 2.0 system at x4, x8 and x16. PCIe 2.0 x8 and above gave the same FPS on all but the highest-end cards. You could have a 100 Tbps (yes, tera) connection between the CPU and GPU; it won't do a damn thing if the GPU is busy processing data.
 

juanrga

Distinguished
BANNED


Both AMD and I claim that ARM will win in the long run... and we aren't alone: :sarcastic:

Conclusions: long-term outlook

Unless Intel fundamentally changes their design strategy, in the long term, they cannot win this competition with ARM for the microprocessor design market, despite their substantial cash and research assets. Market share for x86-based designs may grow more slowly in the future and will suffer from shrinking profit margins; eventually squeezing AMD out from the middle of the CPU design market (the author anticipates that by the mid 2020's, x86-based designs will cease to retain a significant market share in ordinary people's everyday computing needs, so that x86 will become a niche market for legacy or specialist hardware).

http://www.slyman.org/blog/2011/02/arm-to-dominate-microprocessor-architecture/
 

juanrga

Distinguished
BANNED


What part of "the A57 core belongs to ARM, and AMD cannot reuse 'blocks' from it" wasn't understood?
 

juanrga

Distinguished
BANNED


It is not about being either optimistic or pessimistic; it is about being realistic. The idea that AMD will outperform Intel because it did in the past is unfounded, because the contexts are very different, as I explained before: DEC legacy, foundries, funds, Intel mismanagement/under-execution, chip complexity...

Nobody is saying that AMD will be "trying to match what Intel has now". What we are saying is that we expect AMD's new arch to match Haswell performance. The new AMD arch will be superior to anything that "Intel has now" in other respects, such as efficiency, customization, and cost. AMD doesn't need to match Skylake core performance to be competitive.
 
@yuka: if you want to go the APU route, AMD has the A10-7800 coming out soon. It's got a 65W (max) configurable TDP, a 3.5GHz base clock rate, a 720MHz iGPU clock rate (turbo, I assume), fully enabled cores and shaders, and 1.5V DDR3-2133 RAM support.

The A10-7800 is available in Japan for now. CPU-World says other markets will have to wait till the end of July for wide availability:
http://www.cpu-world.com/news_2014/2014070301_AMD_launches_A4-7300_A6-7400K_and_A10-7800_APUs.html
If the 7800 sells for around $150 or less, IMO it'll be better than the A8-7600 in terms of performance.
 

szatkus

Honorable


You also have problems with understanding that "ideas aren't patented".
 

jdwii

Splendid


Juan, that site does not quote anyone from AMD saying that; they just used this quote:
“People may increasingly depend on tablets and smartphones—instead of PCs—as their main point of contact with the Internet, experts say.”
What about people still on PCs or servers? I for one couldn't care less about tablets or smartphones; they are extremely limited in what they can do, and any screen under 15.6 inches is too small for me to do work or emails. I use a 28+ inch screen at work and a 32-inch 1080p screen for PC gaming and movies. Smartphones have their uses for things outside of real gaming and real work.
 


I just maintain that they will be targeting Skylake with their design, NOT Haswell. Agreed, it doesn't need to win in outright performance, but it HAS to at least match it from an efficiency standpoint (which means either much lower wattage for less performance, or similar performance at similar power levels). Now they may not achieve that, but to start out on a design project with the intent to make something inferior would be stupid, and Keller knows what he's doing (look at the Apple ARM designs: they didn't just match the standard ARM cores, they surpassed them on a number of fronts).
 

iron8orn

Admirable


Nice website and review!

If AMD can keep getting more games on Mantle with xfire working, Kaveri will keep looking better and better. I wish they had done a test with 2400MHz RAM.
 

8350rocks

Distinguished
@juan

Notice how all this propaganda you spread mentions nothing of the traditional PC space, and how all the articles/sources are at minimum a year old?

I would also like to point out that GF is now partnered with Samsung and owns IBM fabs too, which should provide a massive boost in terms of technology.

You also have to consider that Jim Keller and others have been at other companies, exposed to new ideas, and some of them are even from Intel originally. Now obviously they cannot rip Intel off; however, who better to reverse engineer something and improve on it than the people who helped design it to begin with?

I would also like to point out that many of you assume a great deal about node shrinks that you have no information about.

That is me straining to point you all in the right direction as best I can.
 


:) I understand people being skeptical about AMD given the last few years; however, if nothing else, they haven't been short of ambition with their designs. The main things that have crippled AMD recently have been more manufacturing-related than design-related, so I'm fairly certain their design, at least, will be good. The real key will be whether they can actually get it out on that 12 to 14nm process (which should be comparable to Intel's 10nm), or whether they'll be hampered, as in the past, by being a node (or two) behind. The only thing in their favor is that Intel appears to be having as much trouble shrinking nodes as everyone else, so I doubt they can maintain much of a lead moving forward.
 

jdwii

Splendid
For CPU performance a 6300 is superior, and for gaming an A8-7600 is really only decent for 720p monitors. I guess you could build a rig for $370 with one. The TDP is quite nice and I like having it configurable, but isn't an AMD Athlon 5350 already enough for 1080p HD movie playback in HTPC rigs?
 