AMD CPU speculation... and expert conjecture


8350rocks

Distinguished


LOL...you do not think for one minute that AMD brought Jim Keller back for nothing, do you? I can guarantee you they have him focused on this segment to get a viable product moving and gain market share. Once they have a foothold there, we will see a new uarch that is Jim Keller's brainchild, and the HEDT platform will not be an APU (not in the traditional sense; it may have a more robust coprocessor setup, but not a full-blown, die-hogging iGPU). The current uarch was too far along in the design process for massive revisions, so Excavator will likely be mostly Jim Keller's revisions to it, and it will likely be the last we see of this modular uarch outside of custom designs using the modular core design.
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
Beema has some impressive improvements, but with Intel throwing out $5 Bay Trail parts with 'contra revenue' it probably won't gain them any traction. They should just bring it straight to AM1, where it would at least move some parts.
 

harly2

Distinguished
Nov 22, 2007
124
0
18,680



Well if you said it...then....

As they (all foundries) run into MASSIVE issues at 10nm, software performance will increase dramatically.

 


No, it won't. We've been trying to make massively parallel systems work since the '70s, and every single time we find there's no way to do it for general workloads. The best you can do is offload some subset of tasks (rendering, physics, etc.), but overall performance is forever limited by the time it takes to finish the serial portion of the work.
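
That limit is just Amdahl's law. Here's a minimal sketch of it in Python, assuming a made-up 25% serial fraction purely to show the shape of the curve (the number is an illustration, not a measurement):

[code]
# Amdahl's law: overall speedup on n cores = 1 / (s + (1 - s) / n),
# where s is the serial fraction. The 25% below is an assumed, illustrative value.
def amdahl_speedup(serial_fraction, n_cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for cores in (2, 4, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.25, cores), 2))
# 2 -> 1.6, 4 -> 2.29, 8 -> 2.91, 64 -> 3.82, 1024 -> 3.99
# With 25% serial work the speedup stalls near 4x no matter how many cores you add.
[/code]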
 

harly2

Distinguished
Nov 22, 2007
124
0
18,680


Beema will gain traction in laptops; Mullins won't in tablets.
 

colinp

Honorable
Jun 27, 2012
217
0
10,680


There is literally no way that AMD is going to design a chip specifically for HEDT. It's a small and shrinking market in comparison to all-in-ones, laptops and tablets.

The only way they'll get into HEDT again is if one of their lower power designs happens to scale up well to HEDT TDPs.
 




I think they will probably have to worry more about a 14nm Atom than the 22nm Atom, and that is probably coming sooner rather than later.



The massive issue is with the materials more than anything, and they are already working on that. I think beyond 7nm is where things will really get more interesting.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I only hope you aren't conflating "low-IPC" with "low-power" again.



It seems you missed this post of mine:


 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810
PCIe is a bottleneck only for certain workloads. As you can see in these mining rigs, they're happy with an x1 connection. If the data to be worked on fits in local RAM, it's no issue.

http://www.brightsideofnews.com/2014/04/18/primochill-hasher-mining-rack-review-madness-organized/
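
Rough back-of-envelope of why an x1 link can be enough there; the link rate is the nominal PCIe 2.0 figure, while the per-batch transfer sizes and dispatch rate are assumptions just to show the order of magnitude:

[code]
# If the working set lives in the card's RAM, only small work units and results
# cross the bus. Link rate is nominal PCIe 2.0 x1; the other numbers are assumptions.
PCIE2_X1_BW   = 500e6        # ~500 MB/s per direction for a PCIe 2.0 x1 link
WORK_UNIT     = 4 * 1024     # assumed bytes sent to the GPU per batch
RESULTS       = 256          # assumed bytes returned per batch
BATCHES_PER_S = 1000         # assumed batches dispatched per second

bus_bytes_per_s = (WORK_UNIT + RESULTS) * BATCHES_PER_S
print(f"link utilization: {bus_bytes_per_s / PCIE2_X1_BW:.2%}")   # well under 1%
[/code]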

 

harly2

Distinguished
Nov 22, 2007
124
0
18,680
The massive issue is with the materials more than anything, and they are already working on that. I think beyond 7nm is where things will really get more interesting.




Yes "they" are working on it, but what does that mean, working on it with success, failing, we don't know, we won't know. I think from 14nm on we see severe slow downs meaning the time from node to node is unreliable and full of misinformation due to investor relations. Corporate compartmentalization will be in full effect, the stakes will be high. I also don't think it's bad, some big changes will be coming to the industry and I think they will be good for the consumer.

14nm will be the last node where we on the forums will be saying this date and that date for this fab and that fab with any sort of accuracy.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


First, nobody said it would be easy.

Second, the golden days of sloppy programmers (those who would compensate for their inefficient serial code by forcing a hardware upgrade on the user side) are gone. Dennard scaling died around 2005:

http://www.extremetech.com/wp-content/uploads/2013/08/CPU-Scaling-640x637.jpg

and the only way to increase performance significantly since then has been with multiple cores and parallel programming. Technological scaling predicts that the traditional multi-core paradigm is outdated and is starting to be replaced by the new many-core paradigm, aka massive parallelism:

http://www.extremetech.com/wp-content/uploads/2012/02/Scaling1.jpg

In the 70s, programmers could learn parallel programming as a hobby; by the year 2020, parallel programming will be the only way to extract significant performance from hardware. The target in the HPC community is to scale up to ~10^5 threads per TCU (~10^9 threads per machine) in the next four years.

Of course serial code will still be needed for those tasks that cannot be parallelized, but those are a minority, probably around 10%.
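
Taking that ~10% figure at face value (it is an estimate, not a measurement), the serial-fraction ceiling from Amdahl's law is easy to see:

[code]
# With a 10% irreducibly serial share, Amdahl's ceiling is 1 / 0.10 = 10x,
# whether the machine offers 10^2 or 10^5 threads. The 10% share is the
# estimate above, not a measured value.
serial = 0.10
for threads in (10**2, 10**5):
    print(threads, round(1.0 / (serial + (1.0 - serial) / threads), 2))
# 100 -> 9.17, 100000 -> 10.0
[/code]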
 


This is misinformation at best, blatant lying at worst. We've been successfully making massively parallel systems work since the 70's; they were known as mainframes. Earlier in this discussion I described exactly how you would build a data processing stack that would host thousands of simultaneous connections and process tens of thousands of transactions, all running on hardware that is massively parallel. When it comes to data processing, massively parallel is the only way to go. This also works in the home, though home-based data processing tends to be minuscule in comparison. Data encoding and rendering are massively parallel tasks; sorting, cataloging, and indexing are all massively parallel tasks. Playing video games is a massively parallel task, though the massively parallel part tends to be done by specialized coprocessors rather than by general-purpose processors.

I don't care what color of paint you put on your face in the morning, but don't go around lying about the nature of computing. We've been in the age of parallelism for a long time now.
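
A minimal sketch of the kind of stack described above, assuming a plain thread pool standing in for the connection handlers; the names and the fake process_transaction() are illustrative, not any particular product's API:

[code]
# Each transaction is independent, so a pool of workers fans the load out across
# however many hardware threads are available. Everything here is a toy stand-in.
from concurrent.futures import ThreadPoolExecutor

def process_transaction(txn_id):
    # Stand-in for real work: parse the request, hit storage, build a reply.
    return f"txn {txn_id}: committed"

with ThreadPoolExecutor(max_workers=64) as pool:   # roughly one worker per hardware thread
    results = list(pool.map(process_transaction, range(10_000)))

print(len(results), "transactions processed")
[/code]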
 
Juan, stop embedding marketing slides into your posts without at least using a spoiler tag. Preferably just put a link to the source of the image so people can choose to open it if they want, rather than screwing the forums up by loading it directly. I had to start editing your posts because they were getting too graphics-heavy. Also, people, try not to make large link pyramids without using spoiler tags; it keeps the page clean and easy to read.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


A single core is not a bottleneck if you are running ancient software or only browsing the internet and reading email. 2GB of RAM is not a bottleneck if the OS and applications fit in it, and PCIe is not a bottleneck if data movement is minimized and/or compression is used.

However, CPUs with 16 cores or more are needed for certain workloads. 32GB DIMMs are needed for certain workloads. NVLink and similar interconnects are needed for certain workloads...



14nm is here. 10nm will be ready sooner than expected. The interesting stuff starts at 7nm. For instance, nobody has shown that FD-SOI scales down beyond 10nm.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


OK, done. I replaced the spoiler tag with direct links.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Thus you assume that AM4 is coming because you "have not had confirmation that AM4 is not coming"? Interesting.

I only hope we don't get more leaks about the AM4 Baeca CPU*, because I am still laughing hard at this one :LOL:

* AMD Phenom IV X12 170 “Baeca” 25nm FD-SOI CPU with 12 Cores, 6 GHz Core Clock, and AM4 Socket Compatibility

http://www.ocaholic.ch/modules/xcgal/albums/userpics/10001/AMD-Phenom-IV-X12-170.jpg
 
This is misinformation at best, blatant lying at worst. We've been successfully making massively parallel systems work since the 70's; they were known as mainframes. Earlier in this discussion I described exactly how you would build a data processing stack that would host thousands of simultaneous connections and process tens of thousands of transactions, all running on hardware that is massively parallel. When it comes to data processing, massively parallel is the only way to go. This also works in the home, though home-based data processing tends to be minuscule in comparison. Data encoding and rendering are massively parallel tasks; sorting, cataloging, and indexing are all massively parallel tasks. Playing video games is a massively parallel task, though the massively parallel part tends to be done by specialized coprocessors rather than by general-purpose processors.

I don't care what color of paint you put on your face in the morning, but don't go around lying about the nature of computing. We've been in the age of parallelism for a long time now.

I explicitly called some tasks, such as rendering and encoding, massively parallel. You can simply add more cores on specialized co-processors (GPUs), practically to infinity, to improve performance of those tasks. And yes, anything to do with processing data can be made massively parallel. I'm not arguing that point. I'm arguing against Juan's claim that every task under the sun can be handled this way. It can't, and for those tasks you are limited by how long your serial processing takes to complete.
 

truegenius

Distinguished
BANNED


you predicted it like a pro :fille:



:chaudar: what's this?
a belated April Fools' joke? :hum:
2e8jdQJ.jpg
 

I wonder if... the 4 ACEs in the iGPU could be used together with the 4 CPU cores as an 8 "compute cores" configuration, like the Kaveri APUs. Maybe in a future SoC... :whistle:
A 16-core SoC would then have a total of 20 "compute cores", and with a GCN 1.1 iGPU (8 ACEs)... a total of 24 frickin' "compute cores"...
 

blackkstar

Honorable
Sep 30, 2012
468
0
10,780


GamerK, you do realize Bulldozer was designed to be a server chip first and foremost, meaning it was designed to run in situations where Apache creates a thread and runs it on a CPU core for every request.

You have it backwards. You are thinking AMD designed 8 core CPU because they thought multi-threaded software was coming.

They basically designed a server chip, found it didn't meet most demands of current software for desktop consumers, and then tried to push developers into using more cores for things.

But as for those of you who think that the PC is dying, well.

The "PC is dying" numbers don't include DIY sales. In fact, the whole "PC is dying" thing is only referring to people who buy pre-built systems from Dell, Wal-Mart, etc.

So, imagine in your head how many people used to buy desktops before smartphones, tablets, and laptops were the main computing devices. Now imagine that the DIY market is growing enough to substantially offset a bunch of schmucks who can't build their own computers. These internet blogs and finance sites love to grab the number of PC shipments from Dell, Gateway, etc. and then go "ZOMG DESKTOP SKY IS FALLING ITS ALL OVER BUY 5 iPADS SO YOU CAN MULTITASK!!!!!" without even thinking about how accessible and easy it has become to build your own computer, or how that market is basically exploding much faster than console sales are.

I could see AMD releasing an HEDT platform if they can get some form of HSA working across dGPU and dCPU. It doesn't have to be as good as HSA on an APU, but it needs to at least be better than just straight-up OpenCL. Which, I think, is possible. It makes more sense to release a gimped HSA paired with a more powerful traditional computing platform.

The big thing some of you are forgetting is that HSA depends on software success. The majority of people who are into HEDT are early adopters of technology and are usually rather vocal about their purchases. They are ideal candidates for HSA. It would also provoke more companies to write more HSA software.

From a software developer's standpoint, AMD sort of needs an HSA HEDT platform. HSA exists to increase performance, and an AMD APU will never be as fast as two big dCPUs and dGPUs, so the high-end people will never move to an APU only. Yet at the same time, an APU is the only thing that can do HSA, and HSA is what's used to push performance. I don't want to beat a dead horse, but convincing people to drop traditional performance for the sake of "maybe someday you can get HSA for the software you need to use!" isn't going to work on anyone.

AMD needs an HSA HEDT platform to spur development of HSA software. Imagine an HEDT platform where games ran Mantle and could access each GPU's memory and main system memory without having to copy everything to VRAM like we do for AFR, with another GPU doing physics and another doing global illumination calculations. It's like PhysX before Nvidia destroyed it and turned it into "let's give the part of the computer that is usually a bottleneck in games more computation to do so we can sell more chips!" (remember the old PPUs?), but on steroids.
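
A rough toy model of why skipping the copy-to-VRAM step matters even if the "gimped" HSA path is otherwise slower; every number below is an assumption for illustration, not a benchmark of any real platform:

[code]
# Toy per-frame latency model: the classic path pays a PCIe copy before the GPU can
# start, a shared-memory (HSA-style) path does not. All figures are assumptions.
frame_data_mb = 256         # assumed data touched per frame that would need copying
pcie_bw_mb_s  = 12_000      # roughly PCIe 3.0 x16 effective bandwidth
compute_ms    = 8.0         # assumed GPU compute time per frame

copy_ms = frame_data_mb / pcie_bw_mb_s * 1000
print(f"copy-then-compute: {compute_ms + copy_ms:.1f} ms/frame")   # ~29.3 ms
print(f"shared memory:     {compute_ms:.1f} ms/frame")             # 8.0 ms
[/code]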

To put an AMD HEDT platform into perspective for you all:

How many of you would switch from an Intel i7, i5 or AMD FX 3m/6c or better to an AMD APU with HSA if it meant a few of your programs were HSA accelerated but you lost performance in traditional programs?

And how many of you would switch from an Intel i7, i5 or AMD FX 3m/6c or better to a platform that was HSA capable (but less efficient at HSA than a full-fledged APU) but had a 5m/10c or better Steamroller or Excavator CPU with at least one Tahiti- or Hawaii-class GPU?

The answer is obvious to me: the second one sounds better. Perhaps HSA would be a lot slower than if you put a Hawaii-class GPU on the same die as a 5m/10c FX-class CPU, but if it ends up 40% faster than just plain old OpenCL (or CUDA, for that matter) while the APU is 200% faster, does it even matter? It's still faster than regular OpenCL and CUDA, which, IMO, is a win. It is not the best that HSA can do, but it should still be significantly better than no HSA at all.

But that is why I think an AMD HEDT platform still makes sense. HSA depends on getting a lot of HSA-capable systems up and running, and ignoring the HEDT crowd, which is full of knowledgeable people who basically go on forums to advertise for free (like we're all doing now), is a massive mistake and a massive missed opportunity for AMD if they just stick to APUs forever.
 

8350rocks

Distinguished


I can only sit back and laugh at your snide comments.

If you think I base my estimations of what may or may not be coming on anything I see on the internet, you are so far off base that I cannot, and will not, even begin to have the conversation with you about what you do not know.

Let me say this...I personally refer to you as "90/10"...as in, approximately 10% of what you say is close; the rest is what you have gleaned and speculated from articles that are typically also "90/10" sources.

GG.
 

8350rocks

Distinguished


This is the most accurate comment about the mindset of AMD to this point...the issue is lack of opportunity to do it well, and after the BD launch, they are not keen on having another product come out that does not meet expectations.

Jim Keller is working on it...that is all I can say...(though I am also under NDA...you can read between the lines I am sure...)
 

colinp

Honorable
Jun 27, 2012
217
0
10,680


Up until the launch date of Kaveri, you were still clinging to the hope that it would be SOI and not bulk. You also had no foreknowledge of Mantle, HSA, TrueAudio or TressFX. You even thought that Mantle was in the Linux version of Catalyst.

I mean, surely you can understand why any of us may be skeptical about your supposed insider knowledge. Personally, my internal BS-o-meter is waving like the Queen on a state visit.

Throw us a bone, something that will be verifiable later on (so we can look back and say, "He was right!") and not vague like, "AMD are going to release a better chip some time in 2015."
 

jdwii

Splendid


To be honest, that's down to AMD not telling all of their engineers the truth. Sorry, but that happens; I was told that as well. The truth is, not everyone has access to everything at AMD.
 