Discussion: AMD's last hope for survival lies in the Zen CPU architecture



 
I agree with you that Nvidia has made some good graphics cards too, but again it comes down to price point: a GTX 980 in Canada runs $800 and change, while a 290X runs $300 and change. I am a working man with a family and can't afford to shell out $800 for an i7, or $400 for an i5 plus nearly $1,000 for a graphics card. But I can afford $200 for an FX-8350 and another $300 for a 290X, and that gets me very good video rendering and very good gaming. Last time I checked there are a lot more guys like me than there are people who can shell out over two grand on a daily home PC for entertainment.

Yes, I agree, and I have an i7 for my work computer; as far as I am concerned there is no substitute for running CAD. For the lad who made the comment about Intel integrated graphics: come on, let's be real, they all suck compared to dedicated. All the debt AMD has can be squashed by everyday consumers who want a good product at a fair price. Let's face it, you can brag about the POWER of Intel all you want, but does the average user even utilize half of that power? Hell no! Even at work, some of us need it few and far between. That is why AMD will be around: not just because of fanboys, but because of the average everyday consumer who just wants a good, reliable rig for entertainment after he has been hard at work doing quantum physics on his i7!
 


I am aware of the configurations on the chips and that's my point. An application that runs faster on a Pentium D than on a Pentium 4 with Hyper-Threading disabled should theoretically scale well across multiple threads, yet Hyper-Threading would often still lower performance compared to having it off. I still have a Pentium 4 with Hyper-Threading and a Pentium D.

You aren't looking at the context. AMD bashed those technologies when they were generally not ideal. Again, Intel effectively agreed, otherwise they wouldn't have dropped Hyper-Threading in Core 2 and then moved away from MCM from Nehalem onward. Also keep in mind that the technology behind Hyper-Threading and MCM has changed over the years. For Hyper-Threading, the cache architecture has improved and there are more registers. For MCM, an FSB is no longer bottlenecking things.

I can see where you're coming from; I just disagree with it being ironic. Things are different now. It isn't AMD coming out and saying "these were great ideas all along, what were we thinking?" or something like that. I was surprised to see AMD going with SMT at first; granted, it makes sense, but it's still a different time and the technology isn't the same. We don't even know exactly how it will be implemented, other than that we get two logical threads per core. That could mean several different things, not just a Hyper-Threading clone.
 
$800 CAD for a GTX 980 is rather expensive... did you mean the GTX 980 Ti, which should be closer to that mark? You do not need a Core i7 to play games; a Core i5 will be more than enough. Based on Newegg.ca, Core i5 prices range between $260 and $310 depending on the model. You do not necessarily need to buy the i5-4690K "Devil's Canyon", since a stock-speed Core i5 will have the same performance as an overclocked FX-8350, if not better. That means you can buy a less expensive H97 chipset motherboard and stick with Intel's stock heatsink to save money.

While the FX-8350 is less expensive ($235 CAD at Newegg), if you want to overclock it to get performance closer to that of a stock-speed Intel Core i5, you will need to invest in a third-party heatsink, a good motherboard that will let you overclock the FX-8350, and a power supply with higher wattage output.
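To put rough numbers on that trade-off, here's a quick back-of-the-envelope comparison in Python. Only the CPU prices come from this thread; every other figure is a placeholder guess, so plug in your own local prices before deciding:

    # Back-of-the-envelope CAD totals for the two platforms above. The CPU
    # prices come from this thread; the board, heatsink and PSU numbers are
    # placeholder assumptions, not real quotes.
    intel_build = {
        "Core i5 (stock)":      285,  # midpoint of the $260-$310 quoted above
        "H97 motherboard":      110,  # assumed
        "stock Intel heatsink":   0,  # bundled with the CPU
        "modest PSU":            70,  # assumed
    }
    amd_build = {
        "FX-8350":                     235,  # Newegg.ca price quoted above
        "overclock-ready 990FX board": 140,  # assumed
        "third-party heatsink":         45,  # assumed
        "higher-wattage PSU":           90,  # assumed
    }
    for name, parts in (("Intel", intel_build), ("AMD", amd_build)):
        print(f"{name} build: ${sum(parts.values())} CAD")

Once you add the extras the FX platform needs for overclocking, the gap between the two builds gets a lot smaller than the CPU prices alone suggest.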

If you plan on buying Fallout 4 when it comes out, be aware that it will most likely favor Intel CPUs, because it is based on the Creation Engine that was designed for Skyrim. Between an FX-8350 and a Sandy Bridge-generation i5-2500K, both running at stock speed with the same GPU, the i5-2500K got over 25% better performance in Skyrim.
 


Even with multi-threading, SMT still requires optimization because it does not act the same as a real core. The biggest downfall of the P4's HT was that no programs were optimized for it, yet the OS still saw the logical processor as a secondary core and would bottleneck it. Once optimizations for SMT came in, it behaved vastly better. Just because an app can utilize two cores does not mean it will automatically perform better with SMT; the OS has to understand how the registers and pipeline work for SMT to be utilized properly. That is why, even today, not all multi-threaded applications benefit from an i7.
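If you want to see that for yourself on a modern Linux box, you can pin two CPU-bound processes either to the two SMT siblings of one physical core or to two separate physical cores and time them. A rough Python sketch, assuming Linux with SMT enabled; the workload and iteration count are arbitrary:

    import os
    import time
    from multiprocessing import Process

    def spin(n=40_000_000):
        # purely CPU-bound integer work, no memory pressure
        x = 0
        for i in range(n):
            x += i

    def siblings_of(cpu=0):
        # sysfs lists a core's SMT siblings as e.g. "0,4" or "0-1"
        path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
        with open(path) as f:
            txt = f.read().strip()
        if "-" in txt:
            lo, hi = map(int, txt.split("-"))
            return list(range(lo, hi + 1))
        return [int(c) for c in txt.split(",")]

    def timed_pair(cpus):
        def work(cpu):
            os.sched_setaffinity(0, {cpu})  # pin this process to one logical CPU
            spin()
        procs = [Process(target=work, args=(c,)) for c in cpus]
        t0 = time.time()
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return time.time() - t0

    sib = siblings_of(0)  # the logical CPUs sharing physical core 0
    other = next(c for c in range(os.cpu_count()) if c not in sib)
    print("two SMT siblings:   %.2fs" % timed_pair(sib[:2]))
    print("two separate cores: %.2fs" % timed_pair([sib[0], other]))

On a core whose siblings genuinely contend for execution resources, the first number should come out noticeably worse.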

And bashing is bashing. They said that SMT was never a good idea, nor was an MCM design versus a monolithic design. And MCM has not changed much at all. SMT has, but not enough to call it a vastly superior technology. It is much like CMT: CMT was a bad idea at the time, but once Windows had optimizations built in, CMT started to shine more, since the OS understood how to handle the threads.

AMD has a history of saying things are bad when they are not. They laughed at Core 2 when Intel first announced it, thinking that K8 was invincible. Funny enough, they got a taste of the Core uArch when a Socket 479-to-478 adapter was released and people threw a Pentium M onto a desktop board and overclocked it to the point where it was smoking Athlon 64s and Pentium EEs left and right.

Point is, both sides have had ideas the other has bashed. Intel said an IMC was not necessary when it was.
 


Intel's integrated graphics in Broadwell is probably the most powerful integrated graphics of all, thanks to the eDRAM. Although they'll probably never do it, I'd like to see a scaled-up version of Intel's graphics and how it compares with high-end cards from Nvidia and AMD. Intel's architecture is certainly interesting in that it's still very energy efficient despite having far fewer cores for a given level of performance.
 

The problem with the P4's SMT implementation is that XP really didn't have the best SMT support. It treated the HTT logical processors as fully independent CPU cores, so you had collision problems when two threads tried to use the same CPU resource at the same time. Nowadays, Windows (and most OSs) are designed with schedulers that understand when cores are shared, and they try to avoid resource collisions whenever possible.

The problem wasn't HTT; the problem was the underlying OS.
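For what it's worth, the fix in modern schedulers boils down to a simple preference: fill one logical CPU per physical core first, and only double up on SMT siblings once every core is busy. A toy Python sketch of that ordering, assuming Linux sysfs and a single socket (core_id alone doesn't distinguish packages):

    import os

    def core_groups():
        # group logical CPUs by the physical core they live on
        groups = {}
        for cpu in range(os.cpu_count()):
            with open(f"/sys/devices/system/cpu/cpu{cpu}/topology/core_id") as f:
                groups.setdefault(int(f.read()), []).append(cpu)
        return list(groups.values())

    def placement_order():
        groups = core_groups()
        order, i = [], 0
        # pass 0 picks one logical CPU from every physical core;
        # pass 1 starts doubling up on SMT siblings, and so on
        while any(len(g) > i for g in groups):
            order += [g[i] for g in groups if len(g) > i]
            i += 1
        return order

    print(placement_order())  # all physical cores first, siblings only at the end

XP had no notion of this, so it happily parked two hot threads on the same physical core while other cores sat idle.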
 


Although you are correct about this, and it is part of my point, it is not everything. The software was not all that needed improvement; the hardware changed as well. A more ideal cache hierarchy with better performance, hugely better memory performance with the IMC, and more registers all helped too. Furthermore, why an app might benefit from another core but not from SMT isn't what mattered as far as marketing goes: AMD was correct, and this proves it, whether or not it was through bashing that they made their point.



Fair enough on "bashing is bashing"; I wasn't denying that it happened, only saying that I didn't think what is happening now is ironic, even with that past in mind. With MCM especially, a lot more has changed in how effectively it works, even if not in exactly how it works. Simply by removing the FSB bottleneck, MCM performance scaling skyrocketed. With SMT, granted, more of the improvements were in software than hardware, but it has still improved significantly on the hardware side. Although perfect scenarios would see up to a 30% increase, with current processors we can hit that 30% instead of 10% or 15% much more often, even under the venerable XP, than we could with P4s with HTT.
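To make those percentages concrete, here's the arithmetic in Python; the throughput numbers are made up to match the figures in this post, not measurements:

    base = 100.0  # work units/sec for one thread with its core to itself
    cases = {
        "ideal SMT scaling":  base * 1.30,  # the "up to 30%" best case
        "typical P4-era HTT": base * 1.12,  # somewhere in the 10-15% band
        "HTT regression":     base * 0.95,  # the slower-than-HTT-off case
    }
    for label, tput in cases.items():
        print(f"{label}: {tput:.0f} units/s ({(tput / base - 1) * 100:+.0f}% vs one thread)")

The point is that the same second logical thread can swing from +30% to an outright loss depending on how well the hardware and scheduler handle the sharing.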

CMT performance really didn't change much with software optimization, vastly unlike SMT. However, that's not to say this is an inherent trait of CMT. It's merely what we've seen from the Bulldozer architecture and its derivatives, and that shouldn't be surprising considering the slow but high-capacity cache, the weak memory controller, and what amounts to a poorly inspired Netburst copy of an architecture relative to its primary competitors, Sandy Bridge and onward. The high capacity of the cache negates most of the disadvantages of sharing the L2 between two cores, and the poor performance of the cache and memory controller stops CMT from mattering anyway.

Whether you have two threads loading up one module, or two threads loading up one core each in two modules, doesn't make a big difference, because the L2 and L3 caches are both similarly high-latency and of average capacity per core, unless you're doing floating-point work. If AMD had L2 cache comparable to Intel's, then CMT might have actually done something once it had software optimization, but only when the application could take advantage of more cache and/or faster cache shared with another core. The funny thing is that if Zen fixes the cache latency like AMD claims it will, but does not use CMT, then AMD will be dropping CMT at the first point where it actually could've had a chance at being useful.


 
Let's face it: as much as some people hate AMD, deep inside you don't want AMD to disappear. They are the counterweight to both Intel and Nvidia. Zen might not be as fast as Skylake, but being as fast as Haswell or Broadwell is actually good enough. Plus, their Polaris architecture seems promising. They can't beat both Intel and Nvidia at the same time; they just need to beat one competitor to stay where they are.
 


Then the answer is that they will die. The PC market is going away; mobile is here to stay forever. Offloading computing to the cloud is the future. Sometimes you need a leap in innovation instead of taking baby steps like they always do. There is no reason not to embrace a new technology just because it is different from the roadmap.
 
The cloud isn't the future at all. I've worked with corporations that migrated everything to the cloud, had the cloud go down when they had to get to a REALLY important document, then spent millions getting everything OFF the cloud again.
 

In any business situation: always have a backup. Especially with cloud-based services, you have no control over when they go down or become unreachable for whatever reason.

That applies to many personal situations too, for people who have personal photos, videos, and other data they want to keep safe from cloud f-ups, accidental deletion, fire/flood/whatever destroying their local copy or copies, etc.
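For the home-user case, even a dumb one-way mirror to a second drive covers a lot of that. A minimal Python sketch; both paths are placeholders, and this is no substitute for a real backup tool with versioning:

    import shutil
    from pathlib import Path

    SRC = Path.home() / "Documents"      # placeholder: whatever you can't lose
    DST = Path("/mnt/backup/Documents")  # placeholder: second drive or NAS mount

    for f in SRC.rglob("*"):
        if not f.is_file():
            continue
        target = DST / f.relative_to(SRC)
        # copy anything new, or anything modified since the last run
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)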
 


So because a company had one bad experience, the cloud isn't the future for everyone? All the cloud is, is a datacenter that hosts app logic outside of the local network. The ability to scale quickly, the flexibility, and the resilience are still far better than hosting everything in your own datacenter... or should I recite all the times the entire network was down at my previous companies, who ran their own datacenters...?
 


Anecdotal evidence doesn't get you anywhere when speaking broadly, true, but it does count for something in terms of experience. If your data and processing needs aren't big enough to warrant keeping them in-house, use the cloud. If you have critical processes (time-critical apps / high-priority processing) and heavily restricted information, you will lean toward an in-house approach, especially when your most reliable providers are overseas.

Just like with *anything* in technology, you can't say "one solution fits all".

Where I work, we use hybrid solutions for things that are mid/low in terms of security restrictions and processing, but all the confidential and processing-hungry stuff is always kept in-house (mainframe, data grids, and big data marts). There's no one better qualified to know how much power/space you need than your own people. Plus, laws protecting data 😛

Cheers!
 


But the cloud brings other problems: connectivity, bandwidth, and, perhaps most importantly, ownership of data. Never mind that by definition your data isn't air-gapped, which means security becomes a nightmare. One phishing attack on one person's user credentials and all your company's data could be exposed. As you might expect, this makes a LOT of legal departments skittish.

Anyway, I see business going back to the old mainframe model, just with the mainframes replaced by servers. It's not like HDD space or IT personnel are at a premium.
 