AMD CPU speculation... and expert conjecture


jdwii

Splendid


I'd like to point out that Windows already does this with the patches installed for 7, 8, 8.1, and 10.
 

8350rocks

Distinguished
I would not worry about AMD dropping x86 to the back burner anytime soon. John Byrne wants to see them compete with Intel and hold a majority share in dGPU. They are committed to the product, and Rory put him in place managing that for a reason. Now AMD will be run by an electrical engineer with a doctorate from MIT. The board will handle the business end, and AMD will make the products right for the consumer.

While I, personally, lament the departure of Rory Read, I think a transition at this time is an opportune moment. They are trending upward, their financial situation is far more solvent, and they are making great products.
 
Read's entire approach lacked the depth required to execute on a rapidly moving technological landscape.

Lacking any intuition or experience in uarch, engineering, or the difficulties associated with Feed (he should have had at least 4 different scenarios under research, for instance), he was going nowhere fast.

He applied a static beancounting approach to a highly fluid technological company and we watched it go backwards.

If you can't continue to innovate, no one wants to buy your products, no matter how cheaply you can eventually produce them using Lean and Six Sigma.

You need an innovative engineer driving the company, in AMD's case you need 3.
 
Rory's departure sure surprised me, but he said "Lisa Su was being trained to be the next CEO even when I joined the company". He didn't hit any special milestone while being CEO, but he didn't do anything wrong either. He steered the sinking ship away from the shore, but it's still sinking little by little; I'll give you that. In my opinion, his best move was to bring Keller back.

Having a "technical" background does very little in a CEO position, if you ask me. The CEO has to manage more than do engineering work. Sure, it helps, but the skill set is skewed more towards HR than science.

In any case, it's good to see a Doctor of Electrical Engineering as the head of AMD again.

Cheers!
 


There are two problems, really. The first is that Windows tends not to move threads between cores when it can avoid it, because of the performance benefit of not having to copy data across CPU caches. The second is that, at the time a thread gets assigned, it is entirely possible that higher-priority tasks are on the other cores, leaving two heavy-workload but low-priority threads on the same core. We saw this in the Headline Benchmark CPU tests. AMD in theory could have the same problem, but with more cores to play with, the odds of two heavy threads getting loaded onto the same core decrease.

The other answer is that HTT could be nuking performance.
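The odds argument above can be sketched as a quick simulation, under the simplifying assumption that each heavy thread lands on a uniformly random core (real schedulers are smarter, so treat this as an upper-bound illustration):

```python
import random

def prob_shared_core(num_cores, trials=100_000):
    """Estimate the probability that two heavy threads land on the
    same core, assuming each one is placed uniformly at random."""
    hits = sum(
        random.randrange(num_cores) == random.randrange(num_cores)
        for _ in range(trials)
    )
    return hits / trials

# The collision odds are exactly 1/num_cores, so more cores help:
print(prob_shared_core(4))   # ~0.25 on a 4-core chip
print(prob_shared_core(8))   # ~0.125 on an 8-core chip
```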
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


If you're looking for an actual explanation, it's because a margin of error is both positive and negative as it's a range.

For example, if you have a margin of error of 5% on something that's 100, it is either 95 (100 - 5) or 105 (100 + 5). The correct way to write margin of error is (+/-), but you can just write that you have a margin of error of 5% and it's usually implied that it's +/-.

Maybe sort of like this

<============xxxxx100xxxxx==========>

Where x is area of margin of error and = is where you don't expect results to be.
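That +/- range works out in a few lines, assuming a symmetric margin expressed as a percentage of the measured value:

```python
def error_interval(value, margin_pct):
    """Return the (low, high) range implied by a symmetric +/-
    margin of error, given as a percentage of the measured value."""
    delta = value * margin_pct / 100
    return value - delta, value + delta

# A 5% margin on a score of 100 spans 95..105.
print(error_interval(100, 5))  # (95.0, 105.0)
```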

Seeing Read step down is sort of a big deal. I never really expected it. He had what it takes to keep the company on track and make it profitable, but he was far too slow. Coming from Lenovo and being in the OEM market, it's a place where you don't rush products out. How many times do we see Intel, AMD, or Nvidia announce a new mobile part and then you can buy those parts in an OEM laptop in the next week or so? Rarely, if ever.

I waited for a long time to buy an A4-5000 laptop (MONTHS), and I thought it was just OEMs ignoring AMD like usual. But I looked into it and surprisingly enough, Intel products get delayed as well. It's just not as fast paced as the DIY market and a lot of people (myself included) would view the lethargic OEM market as intentionally sabotaging AMD because they took forever to get products out.

I am hoping Su has learnt enough from Read about maintaining stability while she brings her technical and engineering expertise to the table to push AMD into a more aggressive position. We've seen AMD transform into a much more customer-friendly company with things like Gaming Evolved. They're also reaching out to developers and such, which they've never done before.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


That might explain the two million-dollar retention bonuses Byrne got recently. Some fighting at the top.
 

h2323

Distinguished
Sep 7, 2011
78
0
18,640


Yes it could, Su also clarified on the conference call yesterday that x86 will still be a significant focus.
 

jdwii

Splendid
http://www.pcgamer.com/how-gamecubewii-emulator-dolphin-got-a-turbocharge/

Gamers, you need to check this out. It just proves once again that how well a program is written matters the most for performance. I never thought the Dolphin emulator would get this much better; gonna have to turn it on and play.
Edit: I just got done playing some SSB Melee on the emulator. It still kinda skips every now and then (darn it, FX), but it's much better.
 

8350rocks

Distinguished
Just wondering, you guys do know AMD stole John Byrne from Nvidia some time back, right? He was a VP of sales and product for team green first...

John is the passionate kind of guy you want running your development. Explains a lot about the quality differences seen since he took over compute. Expect a lot more to come.
 

wh3resmycar

Distinguished



She said she has extremely high standards, so hopefully by Catalyst 15 we'll have dynamic vsync built into it. :pt1cable:
 

colinp

Honorable
Jun 27, 2012
217
0
10,680
You can get one of those cold fusion devices sometime after a genuine one is invented, which will happen sometime after it's found to be even theoretically possible.
 
On a complete side note, LENR (Low Energy Nuclear Reactions) has been demonstrated to some degree; nobody is quite sure exactly why it's happening or why it's only on such a small scale. It probably has something to do with QM. It's currently so incredibly inefficient that I wouldn't expect anything remotely reasonable in any of our lifetimes, if ever. Also, nearly all nuclear fuels have ridiculously high energy density, as in many orders of magnitude, yet all the supporting equipment, not to mention thermal conversion equipment, makes it not very portable. The US Navy has done some pretty interesting things in miniaturizing nuclear power plants so that they can fit on ships.
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780
FYI regarding thoughts on CMT by a lot of you. http://semiaccurate.com/forums/showpost.php?p=223138&postcount=290

Seronx found that Oracle is pushing CMT in their new architectures. So we can watch and see whether this chip turns out poorly like Bulldozer, or whether CMT is a valid option and AMD just had a bad implementation. But judging by the fact that Oracle felt it was a good idea even after Bulldozer, I doubt it's as bad an idea as some of you would like to think.

What's interesting is it's a cluster of four "cores" in a module, to use AMD terminology that we've been using. Much like that mysterious leaked slide we saw a while ago that was deduced to have really fat 4 core modules.

Looking at Rock, you have a dual-core module with a shared FPU like AMD, and then two modules in a cluster. And they both appear to have SMT as well, judging by the fact that you have a 2m/4c/8t setup. Unless I'm not understanding this properly. Seronx, who I admire for digging through patents and LinkedIn like he does and presenting information to us without bias regardless of how ridiculous it may seem (objectively instead of subjectively), can be a bit vague at times.

I don't know enough about Rock, but perhaps the difference between Rock and Bulldozer is that Rock is set up in such a way that if you're going single thread, you give a core more resources than it would normally have as opposed to if it were a single core. And the AMD approach is to share things and if you're not sharing, you end up with just a typical core.

I say this because SMT usually provides more resources to a single core (whether they are used effectively or not is up for debate) when running single thread and CMT, at least in Bulldozer designs, just gives a standard core's resources.

Up for discussion. It's interesting to see a company besides AMD or Intel embrace CMT like this. Especially with such strong opinions out there that CMT is broken and led to Bulldozer's current position in the market (like less than 4% server market share).
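One way to frame the SMT-vs-CMT argument in this thread is a toy throughput model. All the scaling factors below are illustrative assumptions, not benchmark data: with SMT, a lone thread can borrow the whole core when its sibling is idle, while a CMT core keeps roughly its fixed share of the module.

```python
def module_throughput(design, threads):
    """Toy model of one two-thread module/core, in units of a full
    core's single-thread performance. The factors are rough
    illustrative assumptions:
    - CMT (Bulldozer-style): a second thread scales well (~2x), but
      the shared front end costs each thread some single-thread speed.
    - SMT (Hyper-Threading): one thread gets the whole core; the
      second thread only adds ~25%."""
    if design == "CMT":
        return {1: 0.9, 2: 1.8}[threads]
    if design == "SMT":
        return {1: 1.0, 2: 1.25}[threads]
    raise ValueError(design)

for design in ("CMT", "SMT"):
    print(design, module_throughput(design, 1), module_throughput(design, 2))
```

Under these made-up numbers, CMT loses single-threaded but wins two-thread throughput, which matches the trade-off being debated here.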
 

szatkus

Honorable
Jul 9, 2013
382
0
10,780
@blackkstar actually CMT is one of the few things that works as it should in Bulldozer. People were looking for the cause of poor performance, so they blamed the most hyped feature. And no one cares that BD has, for example, a smaller L1, a bigger L2, and fewer ALUs/AGUs than K10.
 


Dolphin has made HUGE progress the past year or so. I remember the 3.5 branch and all the issues it had, but Dolphin is a LOT faster and a LOT more stable than it used to be. I give them a lot of credit for the optimizations and fixes they've put in.
 


That's my thought. No one really has a clue how the heck it works, but experimentally, it does.

I'm waiting to see how Europe's upcoming experiments go. Their small-scale reactor finally managed to achieve ignition, albeit at a very small scale. They plan to start ramping up the power output later this year to see if they can achieve energy out > energy in at higher scales.

Just remember: Fusion is ALWAYS 20 years away.
 

jdwii

Splendid


Yes, that is often overlooked; it really comes down to what you're saying.
I honestly upgraded to the 8350 to see for myself, and its L3 cache is one major issue, along with the L2 cache; one programmer even claimed it's easier to just treat the CPU as a 4-core, since one core can steal all the L2 cache in a module.

Also, as stated by palladin9479 (and many others), the scheduler/branch predictor is also an issue. Hopefully AMD does what Intel did and includes a µop cache (and maybe decreases their pipeline length too). Furthermore, I hope they stop sharing the L1 cache.

CMT will still lower single-threaded performance; it's just impossible to claim otherwise. When it shares resources, performance will degrade, even if it's by 5% one day (20% now, 30% in some cases). The only way to overcome that is by brute-forcing your design, like adding lots of cache or increasing frequency by a decent amount. Edit: actually, if the software you use can take advantage of sharing resources, it might overcome the 20% hit and perhaps boost performance.

CMT also introduces latency issues, and overall it might just be easier for AMD or others to use SMT, since it's free to add on, pushes efficiency, and won't make single-threaded performance worse (if the OS/software handles it right).
I see CMT being more useful for many-core designs where single-threaded performance doesn't matter as much, like servers.
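That "treat it as a 4-core" advice can be approximated from software on Linux by pinning work to one logical CPU per module. The topology below is an assumption (FX chips pair logical CPUs 0-1, 2-3, 4-5, 6-7 into modules; check `/proc/cpuinfo` before relying on it):

```python
import os

# Assumed FX-style topology: logical CPUs pair up into modules.
MODULE_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

def one_core_per_module(pairs):
    """Pick the first logical CPU of each module, so heavy threads
    never share a module's front end or L2."""
    return {pair[0] for pair in pairs}

cpus = one_core_per_module(MODULE_PAIRS)
print(sorted(cpus))  # [0, 2, 4, 6]

# Linux-only: restrict this process to one core per module.
try:
    os.sched_setaffinity(0, cpus)
except (AttributeError, OSError):
    pass  # not Linux, or the machine has fewer CPUs than assumed
```

This leaves each heavy thread a whole module's L2 to itself, at the cost of idling half the integer cores.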
 