AMD CPU speculation... and expert conjecture

Page 700
Status
Not open for further replies.

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


It was a private conversation about Zen, but I can copy and paste:

****** wants an IPC 100% greater than Piledriver/Excavator.

To take that quote in context, I will add that, according to Bulldozer engineers:

AMD made a significant investment in delivering high, single-thread performance levels.
Power efficiency was a critical design characteristic from Bulldozer’s inception.
 


It appears to have been a giant miscommunication internally. The technical folks provided that information to the marketing folks, who promptly "simplified" it off the piece of paper. As much as people want to have faux outrage over it, unless I see direct evidence to the contrary I'm going to believe nVidia's story. This is because I have personally witnessed several situations exactly like this within my own company and the projects I work on. I do a write-up with lots of good technical info, the presentation folks then turn my papers into a PowerPoint deck and promptly drop a ton of important technical sh!t so that it "looks pretty". This is so prevalent inside IT that often, after the big presentation, the client's technical folks will do a mind meld with our technical folks and hash out stuff like that.

From an engineering point of view it's a very elegant implementation, a method of getting free performance (yes, people are actually complaining about free performance). Normally you'd have to just chop the entire section off, and then one memory bus gets more DRAM chips than another and you still end up with a region of "slow" memory, as those extra DRAM chips can't be interleaved across all the 32-bit buses. What they did instead was to partially disable the defective region (it's a binned chip precisely because there is a defect in it), reroute the control logic, and use the slower region as a form of super-fast victim cache. It still has its own 32-bit memory controller with its own data path, just no control logic. And all that cool stuff was completely lost on its way to the guys who make the flyers and PowerPoint presentations.
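For a rough sense of what that partitioning costs, here is a back-of-the-envelope bandwidth calculation (a sketch only; the 256-bit bus, 7 Gbps GDDR5 and 8x32-bit channel figures are the commonly reported GTX 970 specs, not numbers taken from this thread):

```python
# Back-of-the-envelope GTX 970 memory bandwidth (commonly reported specs).
GDDR5_GBPS = 7.0      # effective data rate per pin, in Gbit/s
CHANNEL_BITS = 32     # each memory controller drives one 32-bit channel
NUM_CHANNELS = 8      # 256-bit bus in total

per_channel = GDDR5_GBPS * CHANNEL_BITS / 8   # GB/s per channel
full_bus = per_channel * NUM_CHANNELS         # if all 8 channels could interleave
fast_pool = per_channel * 7                   # the 3.5 GB region: 7 channels interleaved
slow_pool = per_channel * 1                   # the 0.5 GB region: 1 channel on its own

print(f"per channel: {per_channel:.0f} GB/s")   # 28 GB/s
print(f"full bus:    {full_bus:.0f} GB/s")      # 224 GB/s
print(f"3.5 GB pool: {fast_pool:.0f} GB/s")     # 196 GB/s
print(f"0.5 GB pool: {slow_pool:.0f} GB/s")     # 28 GB/s
```

So the fast pool still gets 7/8 of the full interleaved bandwidth, which is why treating the last half-gigabyte as a slower spill region is arguably free performance compared with shipping a straight 3.5 GB card.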
 

juanrga

Distinguished
BANNED
The whole Internet is abuzz after ARM's announcement of the new A72.

Elsewhere, a pair of x86 fanboys (I know one of them works for Intel) tried to convince me that the ARM numbers were pure marketing crap and that it is impossible for ARM to achieve those performance levels, because it is hard for Intel to get 5% IPC gains per generation. One of them claimed that at best the A72 could improve IPC by 15%.

Well, the chief of the CPU group at ARM has clarified things and closed some mouths. He has confirmed that the new A72 has IPC gains of 10--50% over the A57 (the core used by AMD), depending on the application, with everything else kept the same for both cores. Moreover, they achieved those IPC gains while reducing power.

Moreover, he confirmed that the A72 will achieve 30% higher frequencies. Thus, total performance will be about 40--80% over the A57.
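Treating the IPC and frequency gains as independent multipliers gives a quick sanity check of those figures (a rough sketch; that the quoted 40--80% range tops out below the naive 95% suggests the peak IPC gain and the peak clock gain don't land on the same workload):

```python
def combined_speedup(ipc_gain, freq_gain):
    """Total speedup when IPC and clock improve independently: (1+i)*(1+f) - 1."""
    return (1 + ipc_gain) * (1 + freq_gain) - 1

# The post's A72-vs-A57 numbers: 10-50% IPC gain, 30% higher clocks.
low = combined_speedup(0.10, 0.30)    # ~0.43, roughly the quoted 40%
high = combined_speedup(0.50, 0.30)   # ~0.95, above the quoted 80%
print(f"naive range: +{low:.0%} to +{high:.0%}")
```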
 

jdwii

Splendid


Oh no, sure you will get a lot of hate for speaking the truth. I tested the CPU like crazy and it indeed has 10-15% less performance per clock compared to the Phenom. Actually, under Dolphin I saw a decrease in performance when the 8350 was at 4.3GHz and the 1100T was at 3.9GHz.

Piledriver is still faster overall because of clock speeds and new instruction sets, which enable higher performance in newer programs. For example, Watch Dogs on the 1100T performs pretty badly where the 8350 performs much better, and this isn't down to the pseudo cores.

Using Fritz Chess, the 1100T overclocked performed very similarly to an 8350 out of the box.

To a noob, the easiest way I can explain the 8350 is that it's smarter, not more powerful, than an 1100T. Makes little sense, but hey, I'm saying it.
 

jdwii

Splendid
Oh, and sorry for the extra post, but I agree with palladin: why on earth would Nvidia lie when they don't have to? It's not like they are cornered or anything. It honestly makes more sense than the tin-foil-hat mess.
 

8350rocks

Distinguished
Why correct it now if they knew all along?

That is what really bothers me about it. They should have clarified early to avoid the mess entirely, but that is not the Nvidia way. They just hoped it would be a while before the 4 GB frame buffer flaw was shown for what it was.

The other thing I dislike about the solution is that the design leaves the GPU unable to read or write both sections of memory simultaneously. Sure, it would have been more complex to implement a second memory controller for the second chunk of memory, but now you end up with a GPU that thinks it has 4 GB but really only has fast access to 3.5 GB.
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360



So which is it? Is this refresh fixing all these things, or is it getting nothing but frequency changes, as your source claims?
 



......
I'm all out of faith
This is how I feel
I'm cold and I am shamed
Lying naked on the floor

Illusion never changed
Into something real
I'm wide awake and I can see
The perfect sky is torn
You're a little late, I'm already torn

So I guess the fortune teller's right
Should have seen just what was there
And not some holy light

It crawled beneath my veins
And now I don't care, I had no luck
I don't miss it all that much
There's just so many things
That I can touch, I'm torn
.....

;D
 

juanrga

Distinguished
BANNED


The first quote is not from my source, but what "The Stilt" wrote on another forum. I gave the link to his post when I quoted him the first time.

The second quote is based on what my source told me.
 

juanrga

Distinguished
BANNED


You don't have any proof of this, right?



It resembles certain CPUs that think they have 8 independent cores, but really only have fast access to half the cores, while access to the other half carries a 20% performance penalty. The reason was that implementing a second decoder and a smarter fetch for those cores was "more complex".

Moreover, the engineers that built that CPU didn't claim it has 8 cores; they used the term "8 clusters". However, the marketing department 'simplified' the specs and told everyone the CPU has 8 cores. And this was the starting point of unending discussions on forums. You can still find people in forums debating whether those CPUs have 8 real cores or 8 fake cores or what.

The analogy with the Nvidia case is amazing, isn't it?
 

truegenius

Distinguished
BANNED
Your own previous posts conflict with what you say in your current posts.

Now


earlier

So if we combine the 3.5GHz of your beliefs vs the current 2.05GHz of the top Jaguar, that means 70% more clock, plus the +40% IPC over current Jaguar.
That means 140% more performance over current Jaguar as per your previous beliefs, but now you have changed it to 70%.
Are you mixing Jaguar and Piledriver numbers? Because 40% IPC over Piledriver means a 22.5% performance increase for the top parts (3.5GHz Zen vs 4GHz 8350, considering the clock difference).
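That arithmetic is just compounding the two ratios; here is a quick check of the figures being argued over (a sketch; the clocks and IPC percentages are the ones quoted in these posts, not confirmed specs):

```python
zen_clock = 3.5      # GHz, the claimed Zen clock (from these posts)
jaguar_clock = 2.05  # GHz, the top current Jaguar part
ipc_gain = 1.40      # the claimed +40% IPC as a multiplier

clock_ratio = zen_clock / jaguar_clock   # ~1.71, i.e. ~70% more clock
total = clock_ratio * ipc_gain - 1       # ~1.39, i.e. ~140% more performance

# The same +40% IPC measured against Piledriver instead, at the clocks mentioned:
pd_total = ipc_gain * (3.5 / 4.0) - 1    # 3.5 GHz Zen vs a 4.0 GHz FX-8350
print(f"vs Jaguar: +{total:.0%}, vs Piledriver: +{pd_total:.0%}")
```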

Let me guess the future:
"Look, AMD got an X% performance increase, and that's exactly as per my prediction."
 


Emphasis mine. It's BD all over again.
 


Pretty much this. Been through this myself often enough.

But yeah, people are complaining about free performance. It really does boggle the mind.
 

truegenius

Distinguished
BANNED


Who is that big mouth? Some Nvidia engineer, really?
And what, is he trying to say that consumers are just a bunch of fools for buying their stuff because they and Nvidia didn't know about the issues with their cards? If so, then I will recommend AMD cards over Nvidia's even more.

He is trying to cover Nvidia's lack of quality control, or its failure, by showing his foolishness, and I can't believe how many people liked it and how few disliked it.

According to him, engineers at Nvidia don't have enough brains to find a way to decrease the 970's performance. They could have just released a 3GB card, used lower-clocked memory chips, or decreased the bus width. Any one of those would have served as a significant reason to get a 980 over a 970.

Though I bet all the gibberish he talked is what Nvidia told him to say; just look at his facial expressions and body language and you will spot that he is lying. This is simply a technique to manipulate the human mind and make people think their mistake wasn't a mistake but was intentional. Many people around me use this technique to defend themselves when their cards are down; a mix of fallacies, in short. To me it looks like he used these fallacy techniques there:

special pleading (saying that the performance loss after 3.5GB was intentional, to make the card less appealing than the 980, when the truth is no one had any idea of this issue or performance loss prior to the reports by 970 users)
black-or-white (saying that this was the only way to decrease the 970's performance, when there are many other ways, listed above)
anecdotal (saying that the Nvidia engineers were discussing ways to decrease the 970's performance, were running out of ideas, and finally arrived at this over-3.5GB loss idea)
bandwagon (saying that they did this because it was the only valid option remaining)
 

juanrga

Distinguished
BANNED


You are entirely right. I mixed both in my head and wrote that plain stupid post.

Note that your 140% only holds if you consider the upper 3.5GHz. If you consider the lower 3.0GHz then it is reduced to 110%.


It is easy to guess the future in this case. If the final numbers are close to my 'prediction', some people from one camp here will do the impossible and more to try to negate any merit, and only a couple of people from the other camp will say "Juan was right again". If on the contrary my 'prediction' is wrong, then the failure will be thrown at me again and again for months by the people from the former camp. But guess what? I don't care. I will keep making predictions/guesses and giving my opinion.

I also know that they will not apply the same standard to their own predictions. For instance, the people from the second camp who said that "Zen will be faster than Skylake" will try to deny they said that. This kind of stuff has happened before.
 

8350rocks

Distinguished


I cannot say that there is any proof...

As for your analogy... it is a poor analogy. You can still access the second core of a module on the BD/PD/SR line of CPUs; on the Nvidia GPU, you literally cannot access the 512 MB portion while you are accessing the 3.5 GB side. It is either/or, not both with one at a penalty.

Hence, your ignorance proves my point. Even if you could do both with a penalty to one side, it would still be better than having it be a binary scenario.
 

8350rocks

Distinguished


That is not what was said...let us be fair now Juan.

My source said that Zen would be... "within 5% of Skylake across the board"

 

szatkus

Honorable
Jul 9, 2013
382
0
10,780


Do they know the real performance of Skylake, or is it just a prediction?
 

8350rocks

Distinguished



I would imagine it would be a very educated guess...
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


That's a giant lie peddled by journalists who don't want to risk losing their free review samples.

The BIOS itself is programmed to report the invalid specifications, which are not limited to the 3.5GB. The ROPs and L2 cache are reported incorrectly by the BIOS, which is why GPU-Z reads the incorrect specs. And the ROPs and L2 are actually the things where Nvidia blatantly lied. The card really does have 4GB of memory; it's just that 3.5GB is usable as one pool and 0.5GB is usable as a separate, slower pool.

https://forum.beyond3d.com/threads/nvidia-maxwell-speculation-thread.50568/page-142#post-1821198

The whole thing is a lie, unless you think the PR department writes the BIOS. Nvidia fessed up to the memory thing because they know they aren't in legal trouble over it. But the ROPs and L2 are blatant lies, and Nvidia would probably lose a class-action lawsuit over the ROPs and L2, yet not the VRAM.

It's a classic PR tactic. You draw attention to some other big thing that you know you can get away with to draw attention away from the things you can actually get in trouble for.

Nvidia did more or less get away with passing off 3.5GB of usable memory as 4GB of memory, and I fully expect this trend to continue with whatever Nvidia comes out with after the GTX 900 series.
 

etayorius

Honorable
Jan 17, 2013
331
1
10,780


There is no chance of AMD doing that much in a single generation; I expect around 40% over Piledriver, and maybe close to Haswell in the best scenario.

What would they do with all the Piledriver CPUs they still have if they presented something 60% faster?

If this were possible they would have done it instead of Crapdozer.

Also, how can they know the performance of Skylake? There is no chance of AMD even matching Haswell, and if they do I would probably buy one, but I can't wait till 2016 for another miracle... I'll just grab the i5 4690K, or wait a little longer till Skylake arrives and grab the cheapest i5.
 

truegenius

Distinguished
BANNED


And the funny thing about predictions is that many people (excluding me, because I preferred SB, or else a PhII; I never liked the module design) said "buy AM3+, because LGA1155 is a dead socket", and some of that continued into the LGA1150 era.
And AM3+ users are still waiting for AMD to launch an upgrade for their socket (glad I bought an ASUS AM3 8+2-phase board).
So the current million-dollar question is: will AMD launch an AM3+ CPU soon?

Users are waiting




Can you explain a little in terms of current CPUs like PhII, FX x3xx, SB etc., with core count, clock, and performance per clock (Cinebench)? Some idea of the socket would be even more helpful :D



exactly

 