AMD Announces 32-core Threadripper 2, Shows 7nm EPYC CPU

Status
Not open for further replies.

yorich

Honorable
Mar 20, 2013
18
0
10,510
"This motherboard comes with a beefy power delivery subsystem to accommodate the higher YDP rating of the Threadripper 2 processors" YDP? Who proofs these articles? a 6 year old?
 

bit_user

Titan
Ambassador
One thing to note: 32-core Threadripper will still have only 4 memory channels. They'll connect to 2 of the 4 Zeppelin dies, meaning the other two dies will have to traverse the Infinity Fabric to reach memory. It will be interesting to see the performance impact this has on different workloads.
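
As a back-of-the-envelope illustration of why that could matter, here's a toy model in Python. The latency numbers are made up, not measurements; they're only there to show the shape of the problem:

```python
# Toy model: average DRAM latency on a 4-die package where only 2 of
# the 4 dies have memory channels attached. All numbers below are
# hypothetical placeholders, chosen just to illustrate the effect.

LOCAL_NS = 90   # assumed latency for a die accessing its own channels
HOP_NS = 60     # assumed extra cost of one Infinity Fabric hop

DIES_WITH_MEMORY = 2
DIES_TOTAL = 4

# Threads on a die with local channels pay LOCAL_NS; threads on a
# compute-only die pay the extra fabric hop on every DRAM access.
avg_ns = (DIES_WITH_MEMORY * LOCAL_NS
          + (DIES_TOTAL - DIES_WITH_MEMORY) * (LOCAL_NS + HOP_NS)) / DIES_TOTAL

print(f"Average latency across dies: {avg_ns:.0f} ns "
      f"(vs {LOCAL_NS} ns if every die had local channels)")
```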
 

bit_user

Titan
Ambassador

Please stop calling it "desktop". There's really nothing desktop about it - not the socket, memory, TDP, or motherboard. It's not even a Xeon W (Workstation) chip.

It's a server chip that they simply overclocked and might choose to market to desktop users with a high tolerance for large electricity bills. The price of these systems will likely be "server-grade", as well.
 

Dosflores

Reputable
Jul 8, 2014
147
0
4,710
A 32-core processor that doesn't require a chilled water cooler (unlike Intel's)?
A 7nm EPYC processor coming to market this year?
7nm gaming GPUs early next year?

This is huge, but I guess Intel and Nvidia will make sure that the press reports it as totally insignificant.
 

InvalidError

Titan
Moderator

The real problem with the FX series was the lack of performance to justify the increased TDP. That shouldn't be a problem with ThreadRipper 2... aside from its TDP likely exceeding many TR4 motherboards' VRM comfort zone, so people will need to upgrade their motherboards despite TR2 being otherwise technically compatible.
 


Or it's an expo where companies release new products all the time, and this was planned months in advance, with or without the knowledge that AMD was also launching a 32-core CPU.




Looks like it's going to be 3 GHz with a boost of 3.4 GHz, although per the article I read that's a WIP, which means they are not 100% set on stock clock rates.



Intel only needed the cooling for the overclock, which makes a ton of sense. 5 GHz on 28 cores is a lot of power. 5 GHz on 32 cores would also be a ton of power and need a lot of cooling. Hell, I wonder what it will need at stock with a 250 W TDP.
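
For a rough sense of why 5 GHz on 28 cores needs a chiller: dynamic power scales roughly with frequency times voltage squared. A quick sketch, using the 28-core Xeon Platinum 8180 (205 W TDP, 2.5 GHz base) as the baseline; both voltage figures below are pure guesses on my part, since Intel never published them:

```python
# Back-of-the-envelope dynamic power scaling: P2 ~ P1 * (f2/f1) * (V2/V1)**2
# Baseline: 28-core Xeon Platinum 8180, 205 W TDP at 2.5 GHz base clock.

P1, F1, V1 = 205.0, 2.5, 1.00   # watts, GHz, volts (V1 is an assumption)
F2, V2 = 5.0, 1.35              # demo frequency; V2 is an assumption

P2 = P1 * (F2 / F1) * (V2 / V1) ** 2
print(f"Estimated package power at {F2} GHz: ~{P2:.0f} W")
```

Even with those guessed voltages it lands north of 700 W for the package alone, which is well past anything an air cooler handles.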

Everything I have read about the 7nm process says it is closer to what Intel's 10nm should be, and I will believe it when I see it. I don't see how anyone else wouldn't have issues with smaller processes if Intel is having them.

To be fair to Nvidia, AMD hasn't really shown a lot of competition in the past few years. Vega isn't much of a threat to them, especially with the price gouging thanks to miners, and their next GPU, which will be announced in a few months, will extend that lead. AMD's "7nm" GPU had better be more like Zen and close that gap a lot.
 

Dosflores

Reputable
Jul 8, 2014
147
0
4,710


It doesn't make any sense to show a 28-core processor at 5 GHz, because no one will use it that way. It's a totally ludicrous way of setting a Cinebench record. I don't think anyone would work on a workstation that needs a water chiller to get the kind of performance that was advertised.



Maybe because Intel isn't a god and sometimes can fail. It has happened once (Pentium 4 vs. Athlon 64).



A 7nm Vega will be twice as efficient as current Vega. The problem with Vega for most gamers was its performance/watt ratio. It doesn't matter whether Nvidia keeps the performance lead or not. Making attractive mainstream cards is AMD's goal. Making attractive mainstream CPUs has worked pretty well for them.
 

Chris Fetters

Distinguished
BANNED
Dec 6, 2013
31
12
18,535
InvalidError said:

The real problem with the FX series was the lack of performance to justify the increased TDP. That shouldn't be a problem with ThreadRipper 2... aside from its TDP likely exceeding many TR4 motherboards' VRM comfort zone, so people will need to upgrade their motherboards despite TR2 being otherwise technically compatible.

Actually, not so much, at least according to AMD's HW VP, Jim Anderson, in his interview with PCWorld. When discussing first-wave X399 boards, he said the vast majority would be able to handle TR2 at stock clocks absolutely fine and dandy (assuming one's VRM cooling is able to properly keep the MOSFETs cool), as most of the 2017 X399 boards shipped with overkill VCore power delivery for the 1st Gen TR SKUs. This means that the potential power delivery issues with most 1st Gen X399 boards and TR2 will largely be isolated to overclock attempts and insufficient airflow/cooling, not a lack of possible output amperage.

Namely, if you want to be able to notably overclock these new high-core-count SKUs via an all-core multiplier or very aggressive PBO parameters, and you don't already have a top-end 1st Gen X399 board, upgrading might very well be wise. If you plan to run totally stock and have a decent board, though, you should most likely be fine.
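
To put rough numbers on "output amperage" (the Vcore and phase count below are assumptions for illustration, only the 250 W TDP is from the reports):

```python
# Rough VRM load estimate: current = power / Vcore, split across phases.
# 250 W matches the reported TR2 TDP; Vcore and phase count are assumed.

TDP_W = 250.0
VCORE = 1.2      # assumed load voltage
PHASES = 8       # assumed number of VCore phases

amps = TDP_W / VCORE
print(f"Package current: ~{amps:.0f} A total, "
      f"~{amps / PHASES:.0f} A per phase")
```

That works out to roughly 26 A per phase at stock on these assumptions, comfortably inside what typical power stages are rated for, which is consistent with Anderson's "overkill power delivery" comment.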
 

bit_user

Titan
Ambassador

This might be a credible explanation if Intel hadn't pulled that sketchy OC trick with the hidden water chiller and then kept quiet about it until it was exposed afterwards.

I don't recall a time Intel has done something this sketchy in a public demo. I think it's a new low for them, possibly hinting at the pressure they're under from their ongoing 10 nm delays.
 

Giroro

Splendid
So with 4 8-core dies, these are basically built-up Ryzen 2 instead of chopped-down EPYC.
Which tells me they never intended to produce enough 12nm EPYC chips to make it worthwhile to design Threadripper 2 around them. It also means a 7nm Threadripper 3 is probably going to be released before a 7nm Ryzen 3 (which will be the one AMD actually wants you to call Ryzen 2). That, or a lower-end version of EPYC with only 2 active dies.
Either way, AMD is going to find some way to re-purpose faulty EPYC chips.
 
*Tina Turner voice*



THUNDERDOME!
 

Nintendork

Distinguished
Dec 22, 2008
464
0
18,780
Having so much MT potential at 32C/64T, if you've got a lesser X399 mobo, just disable turbo; it should still eat any OC'd 1950X for breakfast. The 250 W TDP was well above typical usage (per mobo OEMs).

For applications, the latency penalties are pretty insignificant; for games they matter, in which case this is not your CPU, and a 12nm-based 16-core, two-die TR2 with no latency penalties should be the one.
 


I don't disagree that it's pointless. Still, it's interesting to see 5 GHz on 28 cores without even using LN2. It, and Tom's recent delidding review, make me wonder whether Intel would get better stock clocks if they moved back to solder for the IHS.

I never stated Intel was a god. However, being a solo company that developed its own new process tech is impressive to me, and a bad uArch is not the same as bad process tech. Pentium 4 was bad due to its uArch, much like Bulldozer.

Also, never believe a company's own claims. Hell, I don't even believe Intel's claims, as they might only apply in certain scenarios.



Maybe, maybe not. Honestly, we can assume all day. I always assume that companies plan a lot in advance. It is really hard to think that Intel would be able to come up with this last minute, much like I don't think the 6-core CPUs were last minute rather than planned. This isn't simplistic stuff. Overclocking 28 cores takes a lot of time to work out and tweak with that many cores.

This 32-core is impressive, although I still find it ironic that it's using MCM, which Intel got a lot of crap for with Core 2 Quad. It's good to see AMD finally competing. My hope is for more from both sides. We really don't want AMD to become Intel, and if Intel doesn't compete well enough, they will.
 

bit_user

Titan
Ambassador

Yes! I definitely agree that overclocking that system wasn't last-minute, and involved careful planning, experimentation, and fabrication. I think it's an impressive feat, and would love to know if they went direct-die, delidded, or if the HCC Xeons already use solder under their IHS.

The point is that someone planned that demo, and they either planned from the beginning to hide the OC or maybe that was a last-minute change. The one thing we know is that you don't just "forget" to mention, on the slides or anywhere else, that it was an overclocking stunt. That omission, and - if you go back and read the Intel exec's exact words - the implication that it's a demo of a product they can and are willing to bring to the desktop market, were absolutely intentional, and can have no other purpose than to upstage AMD's press conference.

The tone is everything. Companies have staged overclocking competitions and demo'd OC'd systems before. If they'd highlighted that it was an overclocking exercise from the beginning, then I'd have applauded them - because what PC geek wouldn't think 28 Skylake-SP cores @ 5 GHz is damn impressive?
 
It probably was last minute. People could see the CPU, and we could tell it wasn't air-cooled. Imagine if they could launch a 5 GHz 28-core CPU air-cooled. They would have it plastered on massive posters in your face. No one would hide that.

Maybe in 10 years we will have 5 GHz 28-core CPUs, but by then I expect something different.

What's interesting is this got me thinking of Intel's TeraScale CPU. It could run at 3.13 GHz while only using 24 W. They had it at 6.26 GHz, pushing 2 TFLOPS of performance on 80 cores while using about the same power as a Core 2 Quad. Wonder why that didn't pan out.
 

bit_user

Titan
Ambassador

According to this, it actually burned 62 W @ 1 TFLOPS and maxed out at 265 W, for only 1.81 TFLOPS.

https://en.wikipedia.org/wiki/Teraflops_Research_Chip#Statistics_[21]

Remember the Cell processor, of PS3 fame? That originally ran at 3.2 GHz on a 90 nm node (same as later Pentium 4s), delivering 230 GFLOPS in 2006. According to Wikipedia, that first generation utilized 170-200 W (total system power).

https://en.wikipedia.org/wiki/PlayStation_3_technical_specifications#Form_and_power_consumption

Subtracting what the GPU, optical drive, HDD, bluetooth, HDMI, fan, etc. burned, you might estimate the CPU only accounted for 100 W of that.

Compare that to the Coffee Lake i7-8700K, which can deliver only somewhere in the ballpark of 200 GFLOPS from its AVX units, at a roughly similar TDP, but at 14 nm and more than a decade later. And it costs as much as an entire PS3 (Super Slim).
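
For context on where numbers like the Cell's come from: peak FLOPS is just execution units × FLOPs per cycle × clock. The famous 230 GFLOPS figure (single precision, counting a multiply-add as 2 FLOPs) decomposes like this:

```python
# Peak FLOPS = units * (FLOPs per cycle per unit) * clock (GHz -> GFLOPS).

GHZ = 3.2
spe_gflops = 8 * (4 * 2) * GHZ   # 8 SPEs, 4-wide FP32 SIMD, madd = 2 FLOPs
ppe_gflops = 1 * (4 * 2) * GHZ   # the PPE's VMX unit contributes the same

print(f"SPEs: {spe_gflops:.1f} + PPE: {ppe_gflops:.1f} "
      f"= {spe_gflops + ppe_gflops:.1f} GFLOPS peak")
```

That's 204.8 + 25.6 = 230.4 GFLOPS of theoretical peak; like any peak figure, sustained throughput on real code is lower.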

Why? Probably owing largely to in-order SPE cores and DMA-driven (rather than cache-mediated) memory access. Not unrelated to that, the Cell was notoriously hard to program. Assuming the Teraflops research chip is similar (except that it uses a mesh instead of a ring bus), I think you have your answer.

As the world discovered with GPUs, you can reach high efficiency when your cores are simple and in-order, and your cache hierarchy is relatively flat and small (or non-existent). This is good for things like large matrix multiplies and deep learning, but not so good for general-purpose computation.
 