Discussion AMD Ryzen MegaThread! FAQ and Resources


imrazor

Distinguished
So the new Threadripper 2 raises the question: are there actually any apps that can use 64 threads? I guess if you have enough virtual machines you can, but I don't think apps like Premiere Pro can really use all of the threads on that monster...
 

daerohn

Distinguished
Jan 18, 2009
105
0
18,710
The main issue with render farms is that they are licensed per CPU. So a single CPU with more cores and threads will help decrease costs greatly. There is also simulation and analysis software that can greatly benefit from more cores. I, for one, could see replacing my rendering, simulation, and analysis machines with AMD-powered workstations, as we need more cores per CPU. Depending on testing, of course.
 


You mention "issue", but then argue for "advantage". What is it? :p

Also, your argument doesn't go against TR being a niche product TBH. Did you just want to add a bit more to the "rendering farm" part?

Cheers!
 

daerohn

Distinguished
Jan 18, 2009
105
0
18,710
Well, the issue is that for the render farm we have here, we have to purchase an extra license for every CPU we use. So basically, more cores on a single CPU would reduce the annual licensing cost. We also need to check the render-time difference those extra cores provide. Basically, what I am trying to express is that if Epyc can provide a significant performance increase in rendering, simulation, and analysis, we can replace our workstations with AMD ones in the coming years.
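To illustrate the licensing math, here's a minimal sketch in Python; all prices and core counts are invented for illustration, not any vendor's actual figures:

```python
# Toy comparison of annual render-farm licensing cost when the renderer is
# licensed per CPU (socket). All figures below are invented for illustration.

def cpus_needed(target_cores: int, cores_per_cpu: int) -> int:
    """CPUs (and therefore per-CPU licenses) needed to reach a core target."""
    return -(-target_cores // cores_per_cpu)  # ceiling division

TARGET_CORES = 256       # total cores the farm should provide (assumption)
LICENSE_PRICE = 2_000    # hypothetical per-CPU license price per year

for cores in (8, 16, 32, 64):
    n = cpus_needed(TARGET_CORES, cores)
    print(f"{cores:2d} cores/CPU -> {n:2d} licenses -> ${n * LICENSE_PRICE:,}/year")
```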
 

Eximo

Titan
Ambassador


That won't last long. They'll switch to a per-core licensing scheme like all the major vendors have done. They might just be waiting for the majority of their customers to move to high-core-count chips before they spring it on you. It's been happening throughout the industry for the last five years.

Some are really evil about it, too, charging you for all the cores on the host computer even if the VM only has access to two cores.
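A quick sketch of how that stings in a virtualised setup, again with invented numbers (the per-core price and core counts are assumptions, not any vendor's real pricing):

```python
# Some vendors count every physical core on the host rather than the vCPUs
# assigned to the VM. All figures below are invented for illustration.

HOST_CORES = 64        # physical cores on the host machine (assumption)
VM_VCPUS = 2           # cores the VM is actually allowed to use (assumption)
PRICE_PER_CORE = 300   # hypothetical per-core license price per year

print(f"Billed on the VM's vCPUs:   ${VM_VCPUS * PRICE_PER_CORE:,}/year")
print(f"Billed on all host cores:   ${HOST_CORES * PRICE_PER_CORE:,}/year")
```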

 


*cough*Oracle*cough*
 

genz

Distinguished


Programmatically, yes, but I think the current consensus is that the in/out pathways on the die are constrained compared to most lower-core-count platforms in most apps, meaning your cores will struggle to get fed enough data unless programs start optimizing code specifically to stay in-core more, or place data dynamically across NUMA nodes so it can be fed from different sources. This is simply because, since HT, the core-to-northbridge pipe has never been the bottleneck; the motherboard connectors would never reach its bandwidth limits except in edge-case, insane builds. Future advances in RAM are likely to stay on the same socket with AMD, though, so that may go away as RAM gets faster. That would also allow more cores to stay at their peak power-efficiency/data-output point, which is below max MHz.
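To put rough numbers on the "feeding the cores" point, here's a back-of-the-envelope sketch assuming a 32-core part with quad-channel DDR4-2933; these are nominal peak figures only, and the exact platform is just an assumption for illustration:

```python
# Rough peak-DRAM-bandwidth-per-core arithmetic for an assumed 32-core CPU
# with quad-channel DDR4-2933. Nominal peak numbers, not measured results.

CHANNELS = 4
TRANSFERS_PER_SEC_M = 2933   # DDR4-2933: mega-transfers per second
BYTES_PER_TRANSFER = 8       # 64-bit memory channel
CORES = 32

peak_gb_s = CHANNELS * TRANSFERS_PER_SEC_M * BYTES_PER_TRANSFER / 1000
print(f"Peak DRAM bandwidth:               ~{peak_gb_s:.0f} GB/s")
print(f"Per core if all cores stream:      ~{peak_gb_s / CORES:.1f} GB/s")
print(f"Per thread with SMT (64 threads):  ~{peak_gb_s / (CORES * 2):.1f} GB/s")
```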

Mixed GPU renders will limit core-count use, I think, but there are still many fields where that doesn't matter (higher-than-double precision, heavily deferred rendering, and a lower end of the market where this can be cheaper than a whole software suite of CUDA products plus a GPU). It's also superbly priced for its performance regardless, simply because of the number of apps we run today that rely on 50 background services.

The current trend is still away from what this CPU is about, and it remains to be seen whether a wealth of off-die processors in the future will limit this CPU's performance or enhance it by allowing more pathways in and out.

 
Long term, I'm expecting a move toward off-die processing, essentially reversing the recent trend of putting all that processing on the CPU die. GPUs are going to be your massively parallel floating point processors, and I expect to see FPGAs become mainstream due to the flexibility they provide as specialized co-processors.
 

8350rocks

Distinguished


I mean, I am not sure FPGAs and ASICs will become that mainstream. They are very highly specialized and run at a penalty doing anything that is not along the lines of what they are hardwired to do. You can try to implement some flexibility in the design, but then you sacrifice specialization advantages, and at a certain point, you lose out to a regular CPU if you get very broad in terms of design capability. Ultimately, that would defeat the purpose.
 


My stance is CPU performance gains are going to hit a wall as die shrinks end, so maximizing performance isn't going to be the main driver of CPU design going forward. Security and flexibility will be the focus, and FPGAs are the easiest way to accomplish that.
 

8350rocks

Distinguished


EUV lithography may very well breathe more life into Moore's Law though...
 

Eximo

Titan
Ambassador
I've seen future plans for 3nm, but that number is just a number, and given where Intel is in relation to the other fabs, they have a few more node shrinks to go before they are stuck where Intel is. They'll probably have to start stacking things or something to keep up Moore's law (which has been superseded and modified; only convention keeps it being mentioned). Intel has some big question marks on their roadmap; at one point they had a switch to a diamond substrate on it. And there are always carbon nanotube options to consider. AMD is only consuming process nodes, so they'll innovate at the pace of the rest of the market.

I agree FPGA adoption might be the way of the near future. Imagine programmers telling the CPU to program a section of itself for the express purpose of running that program's code; it could lead to some super-efficient programs that only rely on the CPU's general processing abilities when they have to. Though I have not looked into the rewrite capabilities of Intel's acquired FPGA tech, so that might not be feasible for everyday consumers. But if I could make a server really good at doing one job, while still having it be a general item that can be purchased, that would be pretty enticing.
 


3nm falls into the "we can't prove it's not doable according to physics" category, but electron leakage could very well be an insurmountable problem. There's a point where physics itself becomes a limiting factor.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965
Samsung and TSMC think 3nm is possible and have it on their roadmaps for ~2021. This is heavily dependent upon the development of an EUV process to make it cost-effective to produce. The main problem with making these smaller chips is COST. Manufacturing has always been about making a cost-effective solution. They can make these higher-density chips, but at such a slow rate that it's not economical, as I've pointed out in another post here. Also, the proper tooling to make these smaller features isn't supposed to show up until 2021:
their new tool (again it won’t even be arriving at fabs for evaluation until 2021)
https://thechipcollective.com/posts/moshedolejsi/what-is-up-with-euv/
It's a hard task and the physics is getting more tricky, but it's possible.

As far as the future of processing, I see A.I. developing hardware and software. You can see that A.I. is already better at strategy games than human beings, and once it's able to learn how to build CPUs and code it will be a game changer. I think we will also see more specialized cores to run specific tasks.

Edit: I just wanted to add the current processes in mass production and their transistor densities. While a "node" is really just a name, you can look back and see that since 130nm it hasn't been accurate.
[Image: transistor densities of current processes in mass production]

Intel's 22nm process (2012) had 16.5 MTr/mm², the 14nm process (2014) had 44.67 MTr/mm², and the 14nm++ process had 37.22 MTr/mm².
https://en.wikichip.org/wiki/mtr-mm%C2%B2
Samsung's 8nm uHD cell has a transistor density of 61.2 MTr/mm².
https://fuse.wikichip.org/news/1443/vlsi-2018-samsungs-8nm-8lpp-a-10nm-extension/
[Image: Samsung transistor density scaling, 14nm through 5nm]

There is a little information about 3nm on there as well.
TSMC's 7nm HD is 96.49 MTr/mm² (estimated), and the 7nm HPC is 67 MTr/mm² (AMD is using this process for its CPUs/GPUs).
Seeing is believing. Look at the side-by-side comparison of GlobalFoundries' 14nm vs TSMC's 7nm: 4 chiplets vs 8 chiplets (that is a 14nm I/O package in the middle).
[Image: side-by-side comparison of GlobalFoundries 14nm vs TSMC 7nm chiplet packages]
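As a quick sanity check on how uneven the scaling per "node name" is, the densities quoted above can be compared directly; the values are copied from the figures/links above, and the Python below just takes the ratios:

```python
# Density figures (MTr/mm^2) as quoted above, compared against Intel 22nm.

densities = {
    "Intel 22nm (2012)":  16.5,
    "Intel 14nm (2014)":  44.67,
    "Intel 14nm++":       37.22,
    "Samsung 8nm uHD":    61.2,
    "TSMC 7nm HPC":       67.0,
    "TSMC 7nm HD (est.)": 96.49,
}

baseline = densities["Intel 22nm (2012)"]
for process, density in densities.items():
    print(f"{process:<20} {density:6.2f} MTr/mm^2  ({density / baseline:.1f}x vs Intel 22nm)")
```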
 
Seems there might be a 3rd Gen Threadripper after all!!

 
As far as the future of processing, I see A.I. developing hardware and software. You can see that A.I. is already better at strategy games than human beings, and once it's able to learn how to build CPUs and code it will be a game changer. I think we will also see more specialized cores to run specific tasks.

Two points:

1: AI in strategy games still uses very sub-optimal strategy, generally winning by virtue of having better reaction times and superior micro. For example, in Starcraft 2 the AI doesn't take the basic step of using buildings to prevent entry into its base. In addition, a human player was able to stall the AI from attacking by constantly sending just one or two units at the AI's base, as the AI would pull back its entire army to deal with them. This highlights an issue I have with current AI development: AIs can learn and optimize "bad" strategies, and those bad behaviors will never get culled out.

2: Specialized co-processors have a major downside: you give up die space and power in exchange for a computing advantage in very specialized tasks, which leads to a decrease in performance for more general tasks. You can't keep adding more and more dedicated co-processors; it's just not efficient. It's more likely you end up with FPGAs on die that are re-programmed in near real time to fit whatever task they are required to perform.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965
You make good points.

1: The point I'm making is that A.I. can run huge data sets faster and more accurately than a human, and all you have to do is give it rules to follow.
The incredible inventions of intuitive AI | Maurice Conti
TED
Published on Feb 28, 2017

2: As we hit the limitations of what is practical to do with general processors, I believe specialized cores will become more commonplace.

Exynos 8 Octa (8890) is a 64-bit octa-core high-performance mobile SoC designed by Samsung and introduced in early 2016 for their consumer electronics. Manufactured on Samsung's 2nd generation 14 nm process, the 8890 features eight cores consisting of a big quad-core cluster operating at 2.3 GHz with a turbo of up to 2.6 GHz based on Samsung's custom Mongoose 1 microarchitecture and another little quad-core cluster operating at 1.6 GHz consisting of Cortex-A53 cores.
https://en.wikichip.org/wiki/samsung/exynos/8890

Now that chiplets are becoming popular because they improve yields and help with the cost of shrinking the process, I think it will be easier to add specialized cores, since you can just add them to the package.
https://www.tomshardware.com/reviews/intel-hades-canyon-nuc-vr,5536.html

[Image: 8th Gen Intel Core processor package]
 
You make good points.

1: The point I'm making is that A.I. can run huge data sets faster and more accurately than a human, and all you have to do is give it rules to follow.

I'm a mechanical engineer, and "AI can design it better" is a huge sales pitch in the CAD industry at the moment. The problem, though, is that, as you say, you have to give the AI the rules to follow, which are usually based on an FEA study of a part. In my experience, all the AI does is remove every scrap of material it can whilst still meeting the load case given in the FEA study; however, that results in a part that has no capacity to withstand unexpected forces (e.g. side loading, shock loading, etc.). The parts are also totally impossible to manufacture save for 3D printing (another buzzword that is being held up as the answer to everything, yet in reality is only suited to very limited production volumes, and parts produced this way have a lot of limitations).
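A toy sketch of that behaviour, assuming invented element stresses and a made-up allowable stress; it is nothing like a real FEA or topology-optimisation tool, but it shows how greedily deleting everything the nominal load case doesn't need leaves the most loaded feature with zero margin for anything unexpected:

```python
# Greedy "remove whatever the nominal load case doesn't need" illustration.
# Element stresses, the redistribution rule and the limit are all invented.

YIELD_LIMIT = 100.0   # allowable stress, made-up units

# utilisation of each element under the *nominal* load case only
element_stress = {
    "rib_1": 20.0, "rib_2": 35.0, "web_1": 60.0,
    "web_2": 85.0, "flange": 95.0,
}

kept = dict(element_stress)
for name in sorted(element_stress, key=element_stress.get):  # least-stressed first
    if len(kept) == 1 or name not in kept:
        continue
    shed = kept[name]
    # crude assumption: a removed element's load spreads evenly over the rest
    trial = {k: v + shed / (len(kept) - 1) for k, v in kept.items() if k != name}
    if all(v <= YIELD_LIMIT for v in trial.values()):
        kept = trial  # nominal case still passes, so the "optimiser" deletes it

print("Material kept:   ", sorted(kept))
print("Remaining margin:", {k: round(YIELD_LIMIT - v, 1) for k, v in kept.items()})
```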