AMD CPU speculation... and expert conjecture

Page 588
RAM isn't moving onto the die for reasons already explained. It costs MORE to put RAM on the CPU die. It draws MORE power. You don't have the transistor budget. And apps aren't RAM-bandwidth limited, so there's no reason for it.
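A rough back-of-envelope sketch of the transistor-budget point (the 1T1C DRAM cell assumption and the ~1.4 billion transistor quad-core figure are mine, for scale only, not from the post):

# Back-of-envelope: transistor cost of putting DRAM on the CPU die.
# Assumptions (not from the post): one transistor per DRAM bit (1T1C cell)
# and roughly 1.4 billion transistors for a contemporary quad-core CPU die.
GIB = 1024**3

dram_bytes = 8 * GIB                 # even a modest 8GB on-die pool
dram_transistors = dram_bytes * 8    # one transistor per bit
cpu_transistors = 1.4e9              # approx. quad-core CPU die budget

print(f"DRAM transistors needed: {dram_transistors / 1e9:.1f} billion")
print(f"That is ~{dram_transistors / cpu_transistors:.0f}x the whole CPU logic budget")

Even a modest on-die pool dwarfs the logic budget, which is exactly the point being made.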
 

jdwii

Splendid


Gamer, do you think we will ever see 8GB of VRAM on the APU alone in mainstream gaming parts? Or do you picture only a small buffer, for example 512MB of VRAM at most?
 

Cazalan

Distinguished
Sep 4, 2011
2,672
0
20,810


There will be a CPU with 16GB of RAM on it in 2016. For many things that will be plenty, but it will cost around $4500.

The question is how quickly they can scale down the costs to something mainstream consumers might buy. ;)
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


I find it funny that you accuse others of parroting, when your main activity in this thread consists of copy/pasting new links. The only thing you have verified is that you call people liars when you are unable to grasp their arguments, even though others can understand them easily.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@8350rocks, AMD's plans have not changed. The FF interconnect only provides 10 Gbps per socket. This is slow even for future APU--APU communication. AMD's exascale design uses 40--100 Gbps interconnects for APU--APU links.
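A quick unit conversion to show the scale (the PCIe 3.0 x16 number is my own reference point for comparison, not from the thread):

# Why 10 Gbps per socket looks slow for APU--APU traffic.
ff_link_gbps = 10                  # FF interconnect, per socket (from the post)
exascale_gbps = (40, 100)          # range quoted for exascale APU--APU links
pcie3_x16_gbs = 15.75              # approx. PCIe 3.0 x16, for comparison (my figure)

print(f"FF link: {ff_link_gbps / 8:.2f} GB/s")
print(f"Exascale links: {exascale_gbps[0] / 8:.1f}-{exascale_gbps[1] / 8:.1f} GB/s")
print(f"PCIe 3.0 x16: ~{pcie3_x16_gbs} GB/s")

So the FF link delivers about 1.25 GB/s, less than a tenth of what a dGPU already gets over a PCIe 3.0 x16 slot.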

FD-SOI doesn't scale well beyond 10nm, unlike FinFETs. Even IBM rejected FD-SOI for its future nodes, but this doesn't matter anymore, with IBM abandoning the foundry business due to its lack of competitiveness.

Kaveri reduced clocks for two reasons: (i) lower TDP and (ii) HDL. FD-SOI was not on the menu because it cannot provide the properties needed for GCN. AMD is not the only company that has rejected SOI. The foundry engineer correctly notes how all the big players have rejected SOI as pure hype.


@gamerk316, Nvidia has not changed its plans, because the motivation to build CPU cores has not changed. When the project started in 2006, it was an x86 core with Fermi GPU shaders. Due to x86 license problems, Nvidia changed to ARM cores and the GPU cores have been updated, but the main goal is unchanged. Nvidia has given a technical talk about its plans for 2018--2020. Check the slide that I reproduced a couple of pages ago. You can continue to deny the evidence, but dGPUs will be replaced by HCNs (Heterogeneous Compute Nodes). Now check my signature.


@Cazalan, research follows a well-defined series of steps, starting with ideas on a piece of paper, continuing with computer simulations, and finishing with prototypes on silicon. Exascale research doesn't even need drivers. Not sure what you are arguing here.


@jdwii, only your comment was "silly": first, because the FPU was integrated into the CPU despite being a big chip by the standards of the era

[Image: Intel 80386 CPU alongside its 80387 FPU coprocessor]


and, second, because at 14nm there is enough space to include an R9 290X on a small-die APU. At 10nm there is double that space.
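A purely illustrative shrink calculation, assuming the ~438 mm^2 usually quoted for Hawaii (R9 290X) at 28nm and perfect (node ratio)^2 area scaling, which no real process achieves:

# Ideal area scaling of a Hawaii-class GPU to smaller nodes (illustrative only).
hawaii_mm2_28nm = 438.0   # assumed die size of the R9 290X at 28nm

def ideal_scale(area_mm2, from_nm, to_nm):
    """Shrink a die area assuming area scales with the square of the node name."""
    return area_mm2 * (to_nm / from_nm) ** 2

for node in (14, 10):
    print(f"Hawaii-class GPU at {node}nm: ~{ideal_scale(hawaii_mm2_28nm, 28, node):.0f} mm^2")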
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


In 2015 a 'CPU' with up to 16GB of RAM will be released, but it will not use ordinary DDR3/4 RAM; it will use MCDRAM with a bandwidth much greater than the GDDR5 memory used in today's high-end GPUs. That 'CPU' will be up to 4x faster in real-world HPC workloads than the fastest dGPU made by Nvidia to date: the $5000 K40.

Moreover, the 'CPU' will be easier to program than the GPU.
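For a rough sense of the bandwidth gap, a sketch with my own reference figures (the K40's 288 GB/s spec and an assumed 400 GB/s-class MCDRAM), not numbers from the post:

# Rough bandwidth comparison behind the MCDRAM claim.
k40_gddr5_gbs = 288     # Tesla K40, 12GB GDDR5 (spec-sheet figure)
mcdram_gbs = 400        # assumed 400 GB/s-class on-package MCDRAM

print(f"MCDRAM vs K40 GDDR5: ~{mcdram_gbs / k40_gddr5_gbs:.1f}x the bandwidth")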
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
@Yuka, you are right. From the economic side it was an acquisition by AMD, but I was referring to the technical side, which was a merger, a fusion, between the GPU team and the CPU team. The motivations were:

The AMD/ATI acquisition doesn’t make a whole lot of sense on the discrete graphics side if you view the evolution of PC graphics as something that will continue to keep the CPU and the GPU separate. If you look at things from another angle, one that isn’t too far fetched we might add, the acquisition is extremely important.

Some game developers have been predicting for quite some time that CPUs and GPUs were on this crash course and would eventually be merged into a single device. The idea is that GPUs strive, with each generation, to become more general purpose and more programmable; in essence, with each GPU generation ATI and NVIDIA take one more step to being CPU manufacturers. Obviously the GPU is still geared towards running 3D games rather than Microsoft Word, but the idea is that at some point, the GPU will become general purpose enough that it may start encroaching into the territory of the CPU makers or better yet, it may become general purpose enough that AMD and Intel want to make their own.

It’s tough to say if and when this convergence between the CPU and GPU would happen, but if it did and you were in ATI’s position, you’d probably want to be allied with a CPU maker in order to have some hope of staying alive. The 3D revolution killed off basically all giants in the graphics industry and spawned new ones, two of which we’re talking about today. What ATI is hoping to gain from this acquisition is protection from being killed off if the CPU and GPU do go through a merger of sorts.

This is the same reason why Nvidia began to design its own APUs in the same year that AMD bought ATI and rejected Nvidia.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Since you are an Nvidia owner, let us contrast your ruminations with actual data from the Nvidia project that will kill its own GPUs. Nvidia uses the buzzword HCN: Heterogeneous Compute Node.

The design has a three-level L0/L1/L2 cache hierarchy. The LLC is 256MB, the DRAM is 256GB, and the integrated SSD is 1TB.
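The capacity ratios of that hierarchy, as a simple unit-conversion sketch using the sizes quoted above:

# Capacity ratios of the quoted HCN memory hierarchy (LLC / DRAM / SSD).
MIB, GIB, TIB = 1024**2, 1024**3, 1024**4

llc = 256 * MIB
dram = 256 * GIB
ssd = 1 * TIB

print(f"DRAM is {dram // llc}x the LLC")   # 1024x
print(f"SSD is {ssd // dram}x the DRAM")   # 4x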

This is not the most aggressive design. Since you mention a Haswell CPU, let me point out that Intel mentions 512GB of DRAM for its own project.

As a final note, since you mention Linux, let me inform you that several lightweight Linux distros can run in 256MB or even less. You can run the kernel and the applications within that small memory footprint.
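A minimal sketch of how you could check a box against that 256MB figure on Linux, by reading the standard /proc/meminfo file (illustrative only):

# Report total RAM and compare it against a 256 MiB target.
def mem_total_mib(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024   # /proc/meminfo reports kB
    raise RuntimeError("MemTotal not found")

total = mem_total_mib()
print(f"MemTotal: {total} MiB -> {'fits' if total <= 256 else 'exceeds'} a 256MB target")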
 


There most certainly won't be a CPU with 16GB of DRAM on die; the manufacturing technology isn't there yet. It should be possible to put 16GB of 3D stacked DRAM on the same socket; it would just be a bunch of ICs surrounding the die inside the same package. You can make the package as large as you want it to be, so there isn't as much of a space limitation. The real issue is cost. 16GB per chip is extremely small in the server industry, where 64~128GB per chip is common. It would only be useful for a small set of circumstances where you'd want a multi-socket system with a small memory footprint that needed to access that memory at ultra-fast speeds / incredibly low latency. It wouldn't be useful for commodity computing, which is what we're discussing.
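A rough count of what "a bunch of ICs surrounding the die" would mean, assuming hypothetical 4 Gbit DRAM dies in eight-high stacks (my assumptions, not anyone's roadmap):

# How many DRAM dies/stacks would 16GB on-package take?
target_gb = 16
die_gbit = 4          # assumed density per DRAM die
stack_height = 8      # assumed dies per 3D stack

dies = target_gb * 8 // die_gbit     # 16GB = 128 Gbit -> 32 dies
stacks = dies // stack_height        # -> 4 eight-high stacks in the package
print(f"{dies} dies, i.e. about {stacks} eight-high stacks around the CPU die")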

It's funny that some people here are basically saying "640KB ought to be enough for anybody" with regard to the integration of memory / graphics.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790



 

8350rocks

Distinguished


My point was essentially that a bargain bin GPU in 2020 will run 4K, but a top end GPU will be able to accomplish so much more.

For example, you can get by at 1080p with an HD 7770 and its ~1.5B transistors... while back in the day you needed an HD 5850 with ~2.15B transistors...
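Just to put a number on that comparison, using the transistor counts quoted above:

# Transistor budget needed for 1080p, then vs. now (counts in billions, from the post).
hd7770 = 1.5    # handles 1080p today
hd5850 = 2.15   # what 1080p needed back in the day

print(f"The HD 7770 does it with ~{(1 - hd7770 / hd5850) * 100:.0f}% fewer transistors")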
 

jdwii

Splendid
"@jdwii, only your comment was "silly", first because the FPU was integrated in the CPU despite being a big chip by the standards of the epoch"
No Juan, yet again yours is silly, since an FPU is much smaller than a GPU such as the 295X. So again you are making silly claims.
 

jdwii

Splendid


That goes for everyone at Tom's Hardware, Juan. I guess it's you then, since you always claim things and find ways out of them when you are proven wrong, such as the silly FPU argument or the silly claim that a 2500K is even with an A10 7850K.
 


"Ever" is such a loaded term, especially since CPUs are going to look very different in 20 or so years. Short term though? Maybe 128MB or so of cache on die just for the GPU portion. Any more makes the part too expensive to mass produce. And I suspect I'm being VERY generous here.
 




I think Iris Pro already uses 128MB. Personally, I think *if* stacked memory actually turns into a 'thing' we might see a bit more than that in the not-too-distant future (maybe)...
 

see, this is where you're lying. again. your whole reply is a strawman argument (edit2: distorting my stance and p.o.v. with intent to troll/bait is lying, btw.) with ad hominem sprinkled here and there.
i am gonna clarify things here: i accused you of parroting in the argument, only after noticing it. not plural. i do post links from other sites. that activity is very much different from what you do. you posted links/spammed promo slides in arguments/discussions without pointing out relevant specifics, most of the time without relevance or specifics at all and without displaying any familiarity with or understanding of the content.
most of the time when i post links it is outside any argument/discussion, mostly for reference. almost everyone does that.

i've verified that you frequently lie in attempts to win arguments instead of engaging in constructive discussion, and i've pointed out several occasions where you've done so. you did, however.....fail to debunk my accusations, so they still stand.
i don't see what attacking my lack of grasp is supposed to accomplish, though. i question the same things everyone else does. LOL.
 

logainofhades

Titan
Moderator


There's been talk for years, and a few products, of external boxes with a dGPU inside. I would actually like to have such a thing, with a proper interface connection, for laptops. It is one of those products that makes perfect sense, if the details of getting it to work properly are sorted out and laptop makers provide a usable interface to connect to them.
 
i've been ignoring the money aspects in favor of the technological and performance improvement aspects. in reality, chip designers won't make anything if they can't make money off of it. this is why intel doesn't sell hex cores under $250, ddr4 is still unavailable, stacked memory repeatedly gets pushed back, and 1366x768 is still the norm.
for example: we could have an lga 2011 platform with cheap 128GB (4x 32GB of ddr3 1866) system memory and a 1TB ssd and eliminate the hdd entirely. we can't, because memory manufacturers decided to prioritize mobile over pc. we had lpddr2 for a long time and then suddenly lpddr3 and lpddr4 came out within a rather short time span.
as a result ddr3 kit prices went up and you can't build a small box with a kaveri a10 7800 non-K, 64GB of ddr3 2133 ram and a 512GB SSD (ssd prices stagnated iirc). imagine booting windows/linux, playing games, editing documents and converting videos from and in system ram, and then saving the system snapshot to the SSD at periodic intervals or at shutdown (provided the o.s. enables doing this).
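a minimal sketch of that "work in ram, snapshot to ssd" idea, with hypothetical mount points (a real setup would put a tmpfs at the first path and the ssd at the second):

# Periodically copy a RAM-resident working directory to an SSD-backed snapshot dir.
import shutil
import time
from datetime import datetime
from pathlib import Path

RAM_DIR = Path("/mnt/ramdisk/work")    # hypothetical tmpfs-backed directory
SSD_DIR = Path("/mnt/ssd/snapshots")   # hypothetical SSD-backed directory
INTERVAL_S = 15 * 60                   # snapshot every 15 minutes

def snapshot_once():
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = SSD_DIR / f"work-{stamp}"
    shutil.copytree(RAM_DIR, dest)     # copy the RAM-resident tree to the SSD
    return dest

if __name__ == "__main__":
    SSD_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        print("snapshot written to", snapshot_once())
        time.sleep(INTERVAL_S)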
 


What I find frustrating is how hard it is to get a stock laptop with a reasonable screen and an SSD?!

I mean you can add them with the customisable options available through Dell or Lenovo, but they want something stupid like £300 for a 128GB SSD 'upgrade' when I can buy the same drive for £60 online. As for the fact that my phone has a higher resolution screen (in total pixels, not just density) than my laptop... it's crazy.
 


Except the eDRAM isn't on the CPU die itself; it's external.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/3

Also worth noting:

I asked about the potential to integrate eDRAM on-die, but was told that it’s far too early for that discussion. Given the size of the 128MB eDRAM on 22nm (~84mm^2), I can understand why. Intel did float an interesting idea by me though. In the future it could integrate 16 - 32MB of eDRAM on-die for specific use cases (e.g. storing the frame buffer).

So we're right back to limited die space, hence why it's external to the main CPU die. It's essentially acting as an external L4 cache.
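To put the "storing the frame buffer" use case in perspective, a quick size check (the resolutions and 4 bytes per pixel are my assumptions):

# How big is a single frame buffer at common resolutions?
def framebuffer_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024**2

for w, h in ((1920, 1080), (2560, 1440)):
    print(f"{w}x{h}: {framebuffer_mib(w, h):.1f} MiB per buffer")

So 16-32MB of on-die eDRAM would comfortably hold a couple of full frame buffers, which lines up with Intel's suggestion.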
 