AMD Ryzen MegaThread! FAQ and Resources


imrazor

Distinguished
Dec 28, 2009
So the new Threadripper 2 begs the question...are there actually any apps that can use 64 threads? I guess if you have enough virtual machines you can, but I don't think apps like Premiere Pro can really use all of the threads on that monster...
 

daerohn

Distinguished
Jan 18, 2009
The main issue with render farms is that they are licensed per CPU. So a single CPU with more cores and threads will help decrease costs greatly. There is also simulation and analysis software that can benefit greatly from more cores. I, for one, can see replacing my rendering, simulation, and analysis machines with AMD-powered workstations, since we need more cores per CPU. Depending on testing, of course.
 

Yuka

Splendid


You mention "issue", but then argue for "advantage". What is it? :p

Also, your argument doesn't go against TR being a niche product TBH. Did you just want to add a bit more to the "rendering farm" part?

Cheers!
 

daerohn

Distinguished
Jan 18, 2009
Well, the issue is that for the render farm we have here, we have to purchase an extra license for every CPU we use. So basically, more cores on a single CPU would reduce the annual licensing cost. We also need to check the render-time difference those extra cores provide. Basically, what I am trying to express is that if Epyc can provide a significant performance increase in rendering, simulation, and analysis, we can replace our workstations with AMD ones in the coming years.
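Just to put rough numbers on the per-socket licensing point (the 256-core target and the $2,000/socket price below are made-up figures, not real license pricing), here is a quick sketch of how higher core counts per CPU cut the license count:

Code:
CORES_NEEDED = 256                 # hypothetical render capacity target
LICENSE_PER_SOCKET = 2000          # hypothetical annual cost per CPU socket

for cores_per_cpu in (8, 16, 32, 64):
    sockets = -(-CORES_NEEDED // cores_per_cpu)   # ceiling division
    print(f"{cores_per_cpu:>2} cores/CPU -> {sockets:>2} licenses, "
          f"${sockets * LICENSE_PER_SOCKET:,}/year")

With those assumptions, going from 8-core to 64-core CPUs drops the bill from 32 licenses to 4 for the same total core count.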
 

Eximo

Titan
Herald


That won't last long. They'll switch to a per-core licensing scheme like all the major vendors have done. They might just be waiting for the majority of their customers to move to high-core-count chips before they spring it on you. It's been happening throughout the industry for the last five years.

Some are really evil about it too, charging you for all the cores on the host computer even if the VM only has access to two cores.
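As a purely hypothetical illustration of that last point (all numbers made up), per-core licensing that counts the whole host rather than the VM's vCPUs looks like this:

Code:
host_cores = 64        # hypothetical core count of the host machine
vm_vcpus = 2           # cores the VM actually gets
price_per_core = 500   # made-up per-core license price

print(f"billed on the VM's vCPUs:  ${vm_vcpus * price_per_core:,}")
print(f"billed on every host core: ${host_cores * price_per_core:,}")

Same two-core VM, a 32x difference in the bill.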

 


*cough*Oracle*cough*
 

genz

Honorable
Oct 8, 2012


Programmatically, yes, but I think the current consensus is that the in/out pathways on the die are constrained compared to most lower-core-count platforms in most apps, meaning your cores will struggle to get fed enough data unless programs start to optimize code specifically to stay in-core more, or place data dynamically via NUMA so it can be fed from different sources. This is simply because, since HyperTransport, the core-to-northbridge pipe has never been the bottleneck, as the motherboard interconnects would never reach its bandwidth limits except in edge-case, extreme builds. AMD is likely to keep future RAM advances on the same socket, though, so that may go away as RAM gets faster. This also allows more cores to stay at peak power efficiency and data output, which is below max MHz.
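For what it's worth, here is a minimal, Linux-only sketch of the kind of NUMA-aware placement described above; the two-node core ranges are assumptions, so check the real layout with lscpu or numactl --hardware before relying on anything like this:

Code:
import os
from multiprocessing import Process

NODE_CORES = {0: range(0, 16), 1: range(16, 32)}   # assumed 2-node topology

def worker(node, size=50_000_000):
    # Pin this process to one node's cores *before* touching the working set.
    os.sched_setaffinity(0, NODE_CORES[node])
    # Under Linux's default first-touch policy, pages land on the node of the
    # CPU that first writes them, so this buffer should stay node-local.
    data = bytearray(size)
    checksum = sum(data[::4096])    # stand-in for the real per-node workload
    print(f"node {node}: touched {size} bytes, checksum {checksum}")

if __name__ == "__main__":
    procs = [Process(target=worker, args=(n,)) for n in NODE_CORES]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

The idea is just that each die's cores read from memory attached to their own node instead of pulling everything across the interconnect.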

Mixed GPU renders will limit the core-count use, I think, but there are still many fields where that doesn't matter (higher-than-double precision, heavily deferred rendering, and a lower end of the market where this can be cheaper than a whole suite of CUDA software plus a GPU). It's also superbly priced for its performance regardless, simply because of the number of apps we run today that rely on 50 background services.

The current trend is still away from what this CPU is about, and it remains to be seen whether a wealth of off-die processors in the future will limit this CPU's performance, or enhance it by allowing more pathways in and out.

 
Long term, I'm expecting a move toward off-die processing, essentially reversing the recent trend of putting all that processing on the CPU die. GPUs are going to be your massively parallel floating point processors, and I expect to see FPGAs become mainstream due to the flexibility they provide as specialized co-processors.
 

8350rocks

Distinguished


I mean, I am not sure FPGAs and ASICs will become that mainstream. They are very highly specialized and run at a penalty doing anything that is not along the lines of what they are hardwired to do. You can try to implement some flexibility in the design, but then you sacrifice specialization advantages, and at a certain point, you lose out to a regular CPU if you get very broad in terms of design capability. Ultimately, that would defeat the purpose.
 


My stance is CPU performance gains are going to hit a wall as die shrinks end, so maximizing performance isn't going to be the main driver of CPU design going forward. Security and flexibility will be the focus, and FPGAs are the easiest way to accomplish that.
 

8350rocks

Distinguished


EUV lithography may very well breathe more life into Moore's Law though...
 

Eximo

Titan
Herald
I've seen future plans for 3nm, but that number is just a number, and given where Intel is in relation to the other foundries, the others have a few more node shrinks to go before they are stuck where Intel is. They'll probably have to start stacking things or something to keep Moore's law going (which has been superseded and modified; only convention keeps it being mentioned). Intel has some big question marks on their roadmap; at one point they had a switch to a diamond substrate on it. And there are always carbon nanotube options to consider. AMD only consumes process nodes, so they'll innovate at the pace of the rest of the market.

I agree FPGA adoption might be the way of the near future. Imagine programmers telling the CPU to program a section of itself for the express purpose of running that program's code. It could lead to some super-efficient programs that only fall back on the CPU's general processing abilities when they have to. Though I have not looked into the rewrite capabilities of the FPGA tech Intel acquired, so that might not be super feasible for everyday consumers. But if I could make a server really good at doing one job, while still having it be a general item that can be purchased, that's pretty enticing.
 


3nm falls into the "we can't prove it's not doable according to physics" category, but electron leakage could very well be an insurmountable problem. There's a point where physics itself becomes a limiting factor.
 

goldstone77

Honorable
Aug 22, 2012
Samsung and TSMC think 3nm is possible, and have it on their road maps for ~2021. This is heavily dependent on the development of an EUV process to make it cost effective to produce. The main problem with making these smaller chips is cost. Manufacturing has always been about making a cost-effective solution. They can make these higher-density chips, but at such a slow rate that it's not economical, as I've pointed out in another post here. Also, the proper tooling to make these smaller features is not supposed to show up until 2021:
"their new tool (again it won't even be arriving at fabs for evaluation until 2021)"
https://thechipcollective.com/posts/moshedolejsi/what-is-up-with-euv/
It's a hard task and the physics is getting trickier, but it's possible.

As far as the future of processing goes, I see A.I. developing hardware and software. You can see that A.I. is already better at strategy games than human beings, and once it's able to learn how to build CPUs and write code, it will be a game changer. I think we will also see more specialized cores to run specific tasks.

Edit: I just wanted to add here the current processes in mass production and their transistor densities. While a "node" is just a name, you can look back and see that since 130nm it really hasn't been accurate (a quick scaling check on these figures is sketched below).

Intel's 22nm process (2012) had 16.5 MTr/mm², the 14nm process (2014) had 44.67 MTr/mm², and the 14++ process dropped back to 37.22 MTr/mm².
https://en.wikichip.org/wiki/mtr-mm%C2%B2
Samsung's 8nm uHD cell has a transistor density of 61.2 MTr/mm².
https://fuse.wikichip.org/news/1443/vlsi-2018-samsungs-8nm-8lpp-a-10nm-extension/

There is a little information about 3nm on here.
TSMC's 7nm HD is 96.49 MTr/mm² (estimated), and the 7nm HPC is 67 MTr/mm² (AMD is using this process for CPU/GPU).
Seeing is believing: look at the side-by-side comparison of GlobalFoundries' 14nm vs TSMC's 7nm, 4 chiplets vs 8 chiplets (that is a 14nm I/O package in the middle).
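To put those figures in perspective, here's a quick back-of-the-envelope check (using only the densities quoted above) of how the actual density jumps compare with the ideal (old/new)² scaling the node names imply:

Code:
densities = {"Intel 22nm": 16.5, "Intel 14nm": 44.67,
             "Samsung 8nm": 61.2, "TSMC 7nm HD": 96.49}   # MTr/mm^2, from above

actual_22_to_14 = densities["Intel 14nm"] / densities["Intel 22nm"]
print(f"22nm -> 14nm: actual {actual_22_to_14:.2f}x vs ideal {(22/14)**2:.2f}x")

actual_14_to_7 = densities["TSMC 7nm HD"] / densities["Intel 14nm"]
print(f"14nm -> 7nm HD: actual {actual_14_to_7:.2f}x vs ideal {(14/7)**2:.2f}x")

The names really have drifted: Intel's 14nm actually beat the ideal shrink (~2.7x vs ~2.5x), while TSMC's "7nm" HD is only ~2.2x denser than Intel's "14nm", not the ~4x the labels suggest.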
 
