AMD CPU speculation... and expert conjecture


juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
I love seeing people now reusing my old arguments -- including the ease of migrating to other foundries. I really love that these are the same people who criticized those arguments back then.

I still remember when the SOI fans were claiming that Carrizo could use 22nm FD-SOI. As I wrote then: "Carrizo will be produced on bulk for reasons explained again and again over the months", back when people pretended that Kaveri was SOI... but I smell a new wave of SOI hype/nonsense/lies coming back to this thread.

Now the hype/nonsense/lies have been pushed to sub-10nm, despite my having already explained why FD-SOI doesn't scale well beyond 10nm. When the 7nm and 5nm nodes are officially announced and people can see that FD-SOI is not being used, it will be interesting to watch the new excuses. Any bets that the same people now claiming FD-SOI will be used for sub-10nm will be reusing my former arguments about why FD-SOI doesn't work at those scales?

Finally, I am not surprised to see the same individuals trying the same fallacies/lies again about dGPU/iGPU and the A10/2500k. Now that the first ARM servers have been demonstrated to run faster than top Xeon servers in some applications, these people need to hide their silly comments about ARM servers, such as this classic gem: "I remember i just asked my boss yesterday about Arm and he laughed and said that is low-end crap and couldn't even run a print server let alone anything that actually matters i guess in the U.S we can afford are electricity Bill" :lol:
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


GloFo canceled its in-house 14XM process (FinFET on bulk) and licensed the 14nm FinFET process from Samsung (also FinFET on bulk), because Samsung's process was better and was ready earlier.
 


Well, I don't think the *total* RAM requirement really changes, just the distribution. People don't usually account for dedicated VRAM in a system, but it still has to be paid for as part of the card. A common system with a dGPU will have 10GB of total RAM (8GB system + 2GB GPU), so with an HSA APU you would still need access to 10GB, just all as system RAM instead of the split. Also, in some situations the dGPU VRAM is duplicated in system RAM; in those situations HSA will represent a reduction in overall usage, so 8GB total may be sufficient where you'd need 10GB total with the dGPU.
 


I'm going to say it for the one millionth time: you should not expect any degree of security for any data stored on any device hooked up to the internet. Period.
 


That's not how it works. In terms of graphics processing, while you need the data in system RAM to facilitate the transfer to VRAM, once that is done you can free up that space, so that data isn't a major user of system RAM. What you have now is essentially the main game engine using system RAM and the GPU using VRAM. So you can have a situation where the game engine uses 2GB, the GPU uses 6GB of VRAM, and you aren't limited by RAM size even with 4GB of system RAM.

Remove VRAM and use one pool for everything, and suddenly, even 8GB isn't nearly enough. You'd need 12GB or so. That increases system costs by about $50, which doesn't sound like much until you realize that's about the price difference between AMD and Intel.

And I again note: VRAM does away with the limitation of slow system RAM slowing the GPU to a crawl, which DDR4 isn't solving either.
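To put rough numbers on the split-pool vs. unified-pool accounting in these posts, here is a minimal sketch. The figures (8GB system + 2GB VRAM, a 2GB engine + 6GB VRAM case, ~12GB for a unified pool) are just the illustrative values from the posts above, not measurements, and the function is purely for demonstration.

```c
/* Rough memory-accounting sketch for the split-pool vs. unified-pool
 * argument above. All figures are the illustrative numbers from the
 * posts in this thread, not measurements. */
#include <stdio.h>

int main(void)
{
    /* cdrkf's example: a common dGPU system */
    int system_ram_gb = 8, dgpu_vram_gb = 2;
    printf("Split pools, total paid for: %d GB\n", system_ram_gb + dgpu_vram_gb);

    /* gamerk316's example: engine in system RAM, assets in VRAM */
    int engine_gb = 2, vram_in_use_gb = 6;
    printf("Split pools, actually used : %d GB system + %d GB VRAM\n",
           engine_gb, vram_in_use_gb);

    /* Unified (HSA-style) pool: everything must fit in one pool,
     * plus headroom for the OS and other processes. */
    int unified_gb = engine_gb + vram_in_use_gb + 4; /* ~12 GB as argued above */
    printf("Unified pool, rough need   : %d GB\n", unified_gb);
    return 0;
}
```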
 


Yeah, I did say it's the total pool that is important. Maybe I'm wrong about a potential saving; still, what you're not accounting for is that you'd pay $50 more on system memory, but you *would have already paid that* to get the dedicated VRAM included in the dGPU's cost.

I don't think HSA makes the overall system cost any greater - it just redistributes where the money gets spent. Obviously I agree with you that current APUs are seriously restricted by the fairly slow / narrow system RAM interface. That is hopefully something that will improve over time. There is an argument, though, that a larger shared memory pool is a more flexible approach: when you're not running graphics-heavy applications you can use the extra space for general-purpose applications, whereas the memory on a dGPU can only be used when the card is needed.
 

8350rocks

Distinguished
So, Samsung announcing 14nm FDSOI AND 10nm FDSOI is interesting, especially considering STMicro has stated they project things being fine down to 5nm. Obviously it remains to be seen how things go, but it does make things really interesting at this point.
 

h2323

Distinguished
Sep 7, 2011
78
0
18,640
Juan, not sure what that rant was about; I'm more concerned about AMD information than your ego and martyrdom. If you take the words "me", "I", and anything else self-related out of your posts and focus on AMD and the other semis, then less perceived attacking will go on, I guarantee it.

You don't have special access to AMD information. We all knew when...... AMD said it was so. TAM for ARM servers accelerates from 2015 through 2020. Yes, ARM servers are a reality, but claiming anything in the early days is really pointless. It will all be much clearer next year. AMD has a conservative management team; we will see how they update their roadmaps for ARM penetration as time goes on.

I would stay away from roadmaps and information from AMCC; they have lost a lot of shareholder value by making extraordinary claims with nothing to back them up. They will likely fail, as they have less cash than AMD and less talent as well.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


STMicro will be using FDSOI for low-performance products such as 2W phone chips, because it is not adequate for high-performance chips such as server SoCs.¹ They licensed it to Samsung because they want second-source production, due to projections of a possible increase in phone demand from the Asian market.

It is worth mentioning that 14nm FDSOI is internally called UTBOX20 because the BOX (buried oxide) thickness is 20nm, and 10nm FDSOI is internally called UTBOX15s because the BOX thickness is 15nm. I want to emphasize that my old explanation of why FDSOI doesn't scale well beyond 10nm used real 10nm dimensions. If someone in the SOI industry invents a UTBOX10 and renames it FDSOI 7nm or 5nm, I expect nobody will come and pretend that my claims about scaling were incorrect, because that would be pretty ridiculous.

¹ In fact, even IBM had abandoned FDSOI for its future products before it sold off its foundries.
 

jdwii

Splendid


Juan, once again, they are not fallacious arguments; would you like me to show all of them to you again from this forum? Perhaps you need to learn how to use words correctly before you post them online or in real life. Yeah, about the ARM thing: it was crap at the time, and it's still only for limited apps, as you just claimed yourself. I also don't see us at my workplace (a school) upgrading to an ARM server anytime soon, over many, many issues, one being software compatibility. Also, stop showing marketing slides when comparing performance, man; it looks sad when you do that.
 

noob2222

Distinguished
Nov 19, 2007
2,722
0
20,860
Ultra-thin buried oxide layers have already been developed down to 10nm, aka UTBOX 10. That's not the "internal name", just the layer thickness.

Surely you're not trying to imply that a 22nm transistor means 22nm thickness of the silicon used.
 

8350rocks

Distinguished
Noob... if he knew half as much about substrates as he thinks, and the details behind a processor uarch, he would realize half of what he says is so patently false that there is no point in debating it. Then his head would explode, because he would never admit how ridiculous he sounded dismissing credible ideas out of hand with irrelevant information.
 

anxiousinfusion

Distinguished
Jul 1, 2011
1,035
0
19,360


Somewhat relevant.
 
On the note of games and applications keeping graphics resources in main memory: they do and they don't. They keep what's currently on the screen in main memory, which is what enables alt-tabbing and fast task switching. If they have also loaded other resources that aren't immediately being used, those most likely won't be kept in system memory.

Also remember that games rarely directly consume that much video RAM; it's the kernel-mode drivers doing all that AA / AF and other special effects that really consume memory. A game might only be using 400~600MB of actual resources, but the kernel graphics drivers are using another GB or so by rendering them with 8~16x AA/AF. Normally this is all transparent to the user, it's "magic" so to speak, but if we're discussing stuff like HSA and memory pool allocation then it's important to remember which is doing what. Right now nearly no game uses more than 2GB of system memory due to the 32-bit NT limitations. That is acting like a huge roadblock to development, and once games start being designed as 64-bit we can expect memory utilization to explode. That explosion should also drive an explosion in VRAM usage, as developers will want to pack more and more into graphics memory for fast access.
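As a rough illustration of why driver-side AA multiplies memory use, here is a back-of-the-envelope sketch. It assumes a single 1080p target with RGBA8 color and D24S8 depth/stencil per sample; the "another GB or so" above also covers multiple render targets, mip chains, staging copies and so on, which this deliberately ignores.

```c
/* Back-of-the-envelope MSAA memory cost, assuming a 1920x1080 target with
 * 4-byte color (RGBA8) and 4-byte depth/stencil (D24S8) per sample.
 * Illustrative only; real drivers allocate more (MRTs, mips, staging). */
#include <stdio.h>

int main(void)
{
    const long w = 1920, h = 1080;
    const long bytes_per_sample = 4 /* color */ + 4 /* depth+stencil */;

    for (int samples = 1; samples <= 16; samples *= 2) {
        double mb = (double)(w * h * bytes_per_sample * samples) / (1024 * 1024);
        printf("%2dx MSAA: ~%.0f MB of render-target memory\n", samples, mb);
    }
    return 0;
}
```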
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


If data is going to be used once then you can free space in system RAM, but this is not the norm and, in practice, you will often have a copy of the same data in two places. Moreover, game developers have invented tricks to work around the limits of the traditional CPU/dGPU setup, including a tiled approach with lots of small textures mapped repetitively, or storing textures in compressed format in RAM, sending them compressed over PCIe, and decompressing them on the GPU side when they are applied to the scene.
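On the PC side, the usual form of that compressed-texture trick is block-compressed formats (DXT/BC): the texture stays compressed in system RAM, crosses PCIe compressed, stays compressed in VRAM, and the texture units decompress blocks on sampling. A minimal OpenGL sketch, assuming a context with the S3TC extension is already available; the function name and parameters are made up for illustration:

```c
/* Minimal sketch: uploading a DXT1-compressed texture so the data crosses
 * PCIe in compressed form and is decompressed by the GPU when sampled.
 * Assumes an OpenGL context with EXT_texture_compression_s3tc; on some
 * platforms glCompressedTexImage2D must be obtained via a loader/glext.h. */
#include <GL/gl.h>

#ifndef GL_COMPRESSED_RGBA_S3TC_DXT1_EXT
#define GL_COMPRESSED_RGBA_S3TC_DXT1_EXT 0x83F1
#endif

void upload_dxt1_texture(GLuint tex, GLsizei w, GLsizei h, const void *blocks)
{
    /* DXT1 stores each 4x4 pixel block in 8 bytes. */
    GLsizei image_size = ((w + 3) / 4) * ((h + 3) / 4) * 8;

    glBindTexture(GL_TEXTURE_2D, tex);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
                           w, h, 0, image_size, blocks);
}
```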

You also omit what happens when the GPU is used for more than graphics. When the GPU is used for compute, PCIe is a bottleneck, storing the same data in two memory pools is a large waste of resources, and copying data back and forth between the two pools hurts performance. Anxiousinfusion has given a relevant link, but I gave a paper earlier showing a performance wall of more than 10x for some workloads.
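To make the copy-over-PCIe point concrete, here is a hedged OpenCL sketch contrasting the discrete path (explicit buffer copies) with OpenCL 2.0 shared virtual memory, which on an HSA-style APU maps to the single physical pool. Context, queue and kernel setup are omitted; `discrete_path`, `unified_path` and `N` are illustrative names/sizes, not part of any real codebase.

```c
/* Sketch only: discrete-GPU path with explicit PCIe copies vs. an OpenCL 2.0
 * shared-virtual-memory path (on an HSA-style APU this is one physical pool).
 * Context, queue and kernel creation are omitted for brevity. */
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>

#define N (1024 * 1024)

void discrete_path(cl_context ctx, cl_command_queue q, cl_kernel k, float *host)
{
    cl_int err;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, N * sizeof(float), NULL, &err);
    /* Copy host -> device over PCIe, run, then copy the result back. */
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, N * sizeof(float), host, 0, NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
    clEnqueueNDRangeKernel(q, k, 1, NULL, (size_t[]){N}, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, N * sizeof(float), host, 0, NULL, NULL);
    clReleaseMemObject(buf);
}

void unified_path(cl_context ctx, cl_command_queue q, cl_kernel k)
{
    /* One allocation visible to both CPU and GPU; no explicit copies
     * (requires fine-grained SVM support on the device). */
    float *svm = (float *)clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                     N * sizeof(float), 0);
    /* ... CPU fills svm[] in place ... */
    clSetKernelArgSVMPointer(k, 0, svm);
    clEnqueueNDRangeKernel(q, k, 1, NULL, (size_t[]){N}, NULL, 0, NULL, NULL);
    clFinish(q);
    clSVMFree(ctx, svm);
}
```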

The unified memory pool of the PS4 was a choice driven by game developers. Besides simplifying programming (which means they can spend more time making a better game), the unified pool gives programmers the freedom to do things that cannot be done with a traditional CPU/dGPU architecture with two separate memory pools.

cdrkf also mentions another important point: flexibility. With a single memory pool, programmers can split it according to their needs. For instance, the Killzone Shadow Fall programmers split the PS4 memory pool into three regions: main, VRAM, and an intermediate buffer. And the split can be dynamic, with one game requiring more VRAM than another, or even one section of a game requiring more VRAM than another section of the same game.
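A trivial sketch of what that kind of per-level partitioning looks like from the programmer's side; the structure, names and region sizes are made up for illustration and are not the actual Killzone Shadow Fall allocator.

```c
/* Made-up illustration of splitting one unified pool into main / vram /
 * intermediate regions whose sizes can differ per game or per level. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned char *main_mem;     /* engine/game data               */
    unsigned char *vram;         /* textures, render targets       */
    unsigned char *intermediate; /* CPU<->GPU staging/shared work  */
} PoolLayout;

static PoolLayout split_pool(unsigned char *pool, size_t main_sz, size_t vram_sz)
{
    PoolLayout p;
    p.main_mem     = pool;
    p.vram         = pool + main_sz;
    p.intermediate = pool + main_sz + vram_sz; /* remainder of the pool */
    return p;
}

int main(void)
{
    size_t total = 256u << 20;                 /* one 256 MB pool for the demo */
    unsigned char *pool = malloc(total);

    /* A graphics-heavy level gives more of the same pool to "vram". */
    PoolLayout light = split_pool(pool, 160u << 20, 64u << 20);
    PoolLayout heavy = split_pool(pool, 64u << 20, 160u << 20);

    printf("light level vram region at offset %td\n", light.vram - pool);
    printf("heavy level vram region at offset %td\n", heavy.vram - pool);
    free(pool);
    return 0;
}
```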

With a traditional CPU/GPU architecture you are constrained to a fixed partition decided at the hardware design phase. That is why the game developers mentioned before recommend purchasing a dGPU with as much VRAM as possible. In most cases most of that VRAM will be unused, but it is needed for the worst-case scenario. This is a waste of money and resources.

You also omit that system RAM doesn't need to be slow. The PS4 uses fast system RAM based on GDDR5. Current PCs use slow DDR3 because memory architectures have traditionally been designed for capacity, not speed. But that has changed. Intel, Micron, ARM and others have developed HMC, which is not only much faster than DDR but also much faster than GDDR5:

http://www.hybridmemorycube.org/faq.html

Future CPUs/APUs will not be limited by slow DDR3/4 memory. Intel is going to release a CPU with >500GB/s bandwidth (about double the bandwidth available to the new GTX 970 and 980). Nvidia is researching a future SoC with 1.6TB/s of system RAM bandwidth, and AMD has a similar design in the lab, but with HBM instead.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
I want to add that I am glad to be participating in a forum full of SOI and substrate experts, especially those who predicted that Kaveri had been delayed because it was being made on SOI, that Carrizo was going to be made on 22nm FDSOI, that AMD would return to SOI with K12/Zen, and those who now predict that FDSOI will be needed for sub-10nm nodes.
 

8350rocks

Distinguished
Juan, while that was speculation at the time, you have made absurdly inaccurate posts about certain aspects of hardware design.

While the speculation at the time was due to a lack of information, the information you presented contained inaccuracies about technical facts.

As for your articles, John took over compute and graphics some time ago. This is just some housekeeping from that reorganization.
 


The game I've been following, 'PA', is natively 64-bit. The amount of RAM directly limits map size (maps are procedurally generated), and the game does use more RAM than most. 8GB is a good minimum to play it well; however, some of the bigger maps can use 20+ GB. The engine has been built to scale with hardware.

It can also use *a lot* of graphics RAM with all the settings dialed up, with 2GB being the practical minimum if you want to run anything above minimum textures (minimum textures will work with about 1GB of VRAM).

Oddly enough, despite the high memory requirements the game is quite forgiving on CPU / GPU power (it's quite well threaded).
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


I don't know why you have such a hard time accepting the fact that plans can definitely change when the technology a product depends on doesn't work out. I'm specifically speaking about fab processes.

Is it that hard for you to extrapolate that AMD had this stuff planned for 22nm SOI and it all fell through? Is GlobalFoundries completely abandoning it, when they were talking it up like crazy in 2012, not enough evidence for you?

I realize you like to think that AMD relying on Piledriver in HEDT two years after it was released is part of the plan of moving to ARM, but if that were the case, AMD would not be losing CPU market share like crazy. The fact is, no company in their right mind would sabotage a product line just because two years later they will have a good product in an untested market to replace the old one.

There's plenty of evidence that HEDT is alive and well; look at what the PC gaming market is doing. And prices are as low as they have ever been for a good gaming PC. There is no longer a need to spend $300 on a CPU just to be able to play on high settings.

Things completely fell through for AMD's plans with HEDT because the technology wasn't available. And I know you will just go "LOL SWITCHING TO ARM", but there's too much evidence out there suggesting that riding on PD for two years was not part of the original plan: old roadmaps, code names, 22nm SOI being completely cancelled at GloFo, people with insider information claiming one thing and then things changing years later.

Look at AMD's current CPU situation. Their latest CPU technology is limited to mid-range parts at best, while the higher-profit markets are all limping along on an ancient platform with an ancient (in x86 architecture terms) CPU. You have to be completely daft to think AMD would choose to be in the position they are in, especially if you think they're going to pin all of their hopes on an emerging market two years from now.

What AMD has done with their x86 CPUs/APUs is the same as if Intel said "sorry, everything beyond 2-core CPUs is not going to get architecture or platform updates". No chip maker in their right mind would intentionally do that. And that's what you're implying this insider information from so long ago that didn't pan out amounts to: a grand plan to keep R&D out of markets they normally compete in, while those markets are doing very well.

What if Samsung said "all our high-end SoCs will no longer get platform, modem, or controller updates!"? Or if Nvidia said "only low-end parts get Maxwell, everyone will be fine with GK104 until the new architecture, which isn't even compatible with DX, comes out in 2016", and all the low-end parts were only around half as fast as GK104?

That's basically what you're arguing AMD's intentions are. And it's the most ludicrous thing I've read on any forum in a very long time. AMD did not choose to be in their position and they had plans like Komodo and 22nm SOI. And it all fell through for reasons they couldn't control. Technical reasons. Not reasons like "here's a marketing slide and some guy that sells ARM servers saying ARM is going to win in the long run so cancel all your x86 CPUs two years before you have your good ARM products out!"
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
You don't need to give us more excuses! We all know that you are experts and that the true reason all your predictions have proven wrong is that "plans can definitely change". And when your future predictions suffer the same fate, you will blame the "plans" again. Won't you?

Some time ago I decided to contact some friends here at TOMS via PM to share all my info about Excavator and Carrizo instead of posting it in this thread. Since then I have been contacting friends via PM whenever I have more info. Lately we shared a roadmap of 2015--2016 products. We have learned the replacement for the Opteron 6000 series, the replacement for the forthcoming Excavator-based SoC, the replacement for the Puma-based products... what is made on 28nm, what is made on 20nm... This is what one of them told me:

I have watched people fight postings like yours for a good two years now and it appears those guys don't want to see any speculation. Admittedly, it's made me afraid of sharing my own thoughts regarding the future of consumer computing and integration. Thanks for sharing!

I like the idea of sharing relevant info about AMD products between us via PM and leaving this thread for the real experts to post their useless thoughts and failed predictions about everything, from sockets and IC manufacturing to OSes and APIs.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
This thread is not about predictions. It is speculation and conjecture.

If one has to be right all the time there is little point in discussing, as you'll never learn anything.
 


Some people love self-recognition out of psychological issues. For example, low self-esteem.

In any case: Juan, noob, 8350rocks and jdwii, stop it with the personal bashing. We're supposed to be grown-ups with a shared love of technology, not raging teen fanbois who just got out of their houses' basements and are socializing for the first time, right? In particular, Juan, stop with the sarcastic tone. It doesn't help the conversation at all.

Cheers!
 

sapperastro

Honorable
Jan 28, 2014
191
0
10,710
These days, I have no idea whether AMD is coming or going. They put one roadmap up, tear it down, put another up, say something else, etc.

Since my FX is (currently) doing its job well enough, I am happy to sit and see what happens.

Unfortunately, this thread seems to be spiraling into one big argument, probably because of the lack of new information and things to discuss. It reminds me a little of a wild west saloon, actually: one guy says something, another calls BS, another calls liar, and we descend yet another level.

2015 is going to be a looooooong year AMD wise...
 