News Leak reveals 500W fire-breathing 128-core Granite Rapids Xeon 6 CPU

Status
Not open for further replies.
However, with all that additional power headroom Intel has been able to double the core count on the Granite Rapids CPUs from 64 cores to 128 cores. This is a massive change that will greatly enhance Granite Rapids' multi-core capabilities.

Granite Rapids is the first Intel server CPU architecture to outperform AMD's outgoing EPYC 9654 series (Genoa) in raw core count.

Granite Rapids is Intel's next-generation CPU architecture that will succeed Emerald Rapids.

Overall, Intel claims Granite Rapids will provide up to a 2x to 3x performance improvement in mixed AI workloads and up to 2.8x better memory bandwidth.

Can you make some corrections though? There have been some confusing typos regarding the naming scheme Intel uses vs. AMD's.

"Granite Ridge" has been used instead of "Granite Rapids" in a few paragraphs. We all know that Granite Ridge is the rumored codename for the desktop implementation of the Zen 5 microarchitecture, the upcoming Ryzen 9000 series, so it's obviously confusing.

Even I get confused when it comes to "Granite Ridge", and "Granite Rapids" naming scheme.

On a serious note, AMD should definitely change its codename, assuming the "Granite" name has not been locked in yet, because we don't expect Intel to make any changes now, since the Granite Rapids family is already finalized.


EDIT:

Thanks for the correction.
 
Btw, as expected, there are two variants of these chips Intel plans to release, notably within the AP and SP lineup.

AP seems to be the higher-end line, since these chips are centered around a bigger socket, and there are four higher-end Granite Rapids-AP CPUs listed, which would sport Intel's "Platinum" badge per one leak.

But there are definitely more SKUs in the pipeline, since this can't be the final product list. These are early qualification samples though.
  • Xeon 6980P - 128 Cores (Redwood Cove P-Cores) / 500W / 2.0-3.2 GHz
  • Xeon 6979P - 120 Cores (Redwood Cove P-Cores) / 500W / 2.1-3.2 GHz
  • Xeon 6972P - 96 Cores (Redwood Cove P-Cores) / 500W / 2.4-3.5 GHz
  • Xeon 6960P - 72 Cores (Redwood Cove P-Cores) / 500W / 2.7-3.8 GHz
  • Intel Xeon 6 6900E/P - Platinum
  • Intel Xeon 6 6700E/P - Gold
  • Intel Xeon 6 6500P - Silver
  • Intel Xeon 6 6300P - Bronze
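Taking the leaked AP figures above at face value, the core-count vs. clock tradeoff at a fixed 500W is easy to sketch (all numbers come from the leak; nothing here is a confirmed spec):

```python
# Leaked Granite Rapids-AP qualification samples (figures from the leak above;
# none of this is confirmed by Intel).
skus = {
    "Xeon 6980P": {"cores": 128, "tdp_w": 500, "base_ghz": 2.0, "boost_ghz": 3.2},
    "Xeon 6979P": {"cores": 120, "tdp_w": 500, "base_ghz": 2.1, "boost_ghz": 3.2},
    "Xeon 6972P": {"cores": 96,  "tdp_w": 500, "base_ghz": 2.4, "boost_ghz": 3.5},
    "Xeon 6960P": {"cores": 72,  "tdp_w": 500, "base_ghz": 2.7, "boost_ghz": 3.8},
}

for name, s in skus.items():
    w_per_core = s["tdp_w"] / s["cores"]
    print(f"{name}: {w_per_core:.2f} W/core at {s['base_ghz']} GHz base")
```

At a shared 500W TDP the per-core budget ranges from roughly 3.9W (128 cores) up to roughly 6.9W (72 cores), which would explain why the lower-core SKUs can afford noticeably higher base clocks.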


It appears all the previous leaks from the past 1-2 years have materialized, since there has again been a mention of two reference platforms used by Intel, sporting different sockets as well. Both still come under the "Birch Stream" platform though.

For the SP chips Intel is using the reference evaluation platform known as "Beechnut City" sporting the LGA 4710 socket.
For the AP chips Intel is using the reference evaluation platform known as "Avenue City" sporting the LGA 7529 socket.


This rumored 7529 socket is kind of huge. In fact, the socket is so big that it can literally house six AMD Ryzen 7000 CPUs. One guy on a Chinese forum posted a screenshot of this chip last year, lol. Doesn't come as a surprise though, since this is a server socket.

View: https://www.youtube.com/watch?v=ZlLkoRiyA8U&t=84s

[Image: fo2vYLm.png]
 
Intel's new 500W ceiling is almost twice as high as its outgoing Xeon Scalable parts which peak at 350W to 385W.
Questionable maths.
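For reference, here is how the claimed ratios actually work out, using only the figures quoted above:

```python
# "Almost twice as high" check, using the numbers quoted above.
new_tdp = 500                            # leaked Granite Rapids ceiling, watts
old_peak_low, old_peak_high = 350, 385   # outgoing Xeon Scalable peaks, watts

print(f"vs 350 W: {new_tdp / old_peak_low:.2f}x")   # ~1.43x
print(f"vs 385 W: {new_tdp / old_peak_high:.2f}x")  # ~1.30x
```

A 1.30x to 1.43x jump, not "almost twice."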

Intel is equipping five Xeon 6 CPUs with a sky-high 500W TDP, including the top four most powerful Granite Rapids SKUs and even the flagship Sierra Forest SKU comprised entirely of efficiency cores.
AMD applies the same 350 W TDP to all Ryzen Threadripper 7000 CPUs, ranging from 12 to 96 cores. Intel could be doing a similar thing here. But maybe some of these need all the watts they can get. Sierra Forest at 288 cores and actual 500 W usage would be only ~1.74 Watts per core.

Either way, physically large chips can dissipate more heat.


What does TDP even mean anymore? Apparently for Emerald Rapids it's the old definition from before base/turbo TDP:
Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload.
 
What does TDP even mean anymore? Apparently for Emerald Rapids it's the old definition from before base/turbo TDP:
Basically still the same thing; this is from the current-gen desktop info on ARK:

Processor Base Power

The time-averaged power dissipation that the processor is validated to not exceed during manufacturing while executing an Intel-specified high complexity workload at Base Frequency and at the junction temperature as specified in the Datasheet for the SKU segment and configuration.

And yes, base power IS TDP.
https://www.intel.com/content/www/us/en/support/articles/000055611/processors.html
 
A leak has revealed that Intel is planning two new Xeon 6 CPUs boasting a whopping 500W TDP rating, one with just P-cores and the other featuring E-cores only. The P-core model will have 128 cores while the E-core variant will come with 288 cores.

Leak reveals 500W fire-breathing 128-core Granite Rapids Xeon 6 CPU : Read more

😱

Okay, now we're just getting stupid. 500W 128 core CPU? That's like wrapping a Dodge Neon around a Viper V10.

I can see where it might be useful for fileservers over broadband or something... But how do you cool it?

I can't see the average person needing that. 😎
 
Questionable maths.


AMD applies the same 350 W TDP to all Ryzen Threadripper 7000 CPUs, ranging from 12 to 96 cores. Intel could be doing a similar thing here. But maybe some of these need all the watts they can get. Sierra Forest at 288 cores and actual 500 W usage would be only ~1.74 Watts per core.

Either way, physically large chips can dissipate more heat.


What does TDP even mean anymore? Apparently for Emerald Rapids it's the old definition from before base/turbo TDP:
I thought TDP is related to the necessary cooling hardware, using Tjmax as a parameter. Given power and Tjmax, one gets a thermal resistance (K/W) that is used to design the cooling system. Second question: what is the area of the chip? How much is the power density (W/m2) increasing? Thanks.
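That's the standard sizing approach. A minimal sketch of the calculation, where the Tjmax and ambient values are illustrative assumptions (not leaked specs):

```python
# Required thermal resistance of the cooling stack, junction to ambient.
# Tjmax and ambient are assumed values for illustration, not leaked specs.
tdp_w = 500.0       # leaked TDP
tj_max_c = 95.0     # assumed max junction temperature
t_ambient_c = 35.0  # assumed intake/ambient air temperature

r_ja = (tj_max_c - t_ambient_c) / tdp_w  # K/W, junction to ambient
print(f"Required R_ja <= {r_ja:.3f} K/W")  # 0.120 K/W
```

Around 0.12 K/W junction-to-ambient is far below what typical desktop tower coolers achieve, which is presumably why these parts are aimed at server chassis with high-pressure airflow or liquid cooling.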
 
Okay, now we're just getting stupid. 500W 128 core CPU? That's like wrapping a Dodge Neon around a Viper V10.
Larger = easier to cool because the heat is spread out over a larger area and more chiplets/tiles. Xeons and Threadripper/Epyc are much larger than the consumer sockets.

More cores = more watts. At least if cores and clocks are rising faster than efficiency gains. But think about it. Tom's Hardware pushed the i9-14900K to use 359 Watts in Prime95. That's 24 cores (8+16), a lot less than 128 P-cores or 288 E-cores, in a chip that is what, maybe 25% the size or less?

Having said that, these chips can probably pull more than 500 Watts. How about 1 kW?
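Putting the per-core numbers from that comparison side by side (the 14900K figure is the Tom's Hardware Prime95 measurement mentioned above; the Xeon figures are leaked TDPs, not measurements):

```python
# Watts per core: desktop part under load vs. leaked server TDPs.
chips = {
    "i9-14900K (Prime95, measured)":  (359, 24),   # watts, cores (8P + 16E)
    "Xeon 6980P (leaked TDP)":        (500, 128),
    "Sierra Forest 288E (leaked TDP)": (500, 288),
}

for name, (watts, cores) in chips.items():
    print(f"{name}: {watts / cores:.2f} W/core")
```

Roughly 15 W/core on the desktop flagship versus about 3.9 and 1.7 W/core on the leaked server parts, which is why a 500W package spread over a huge socket is less scary than it sounds.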

I thought TDP is related to the necessary cooling hardware, using Tjmax as a parameter. Given power and Tjmax, one gets a thermal resistance (K/W) that is used to design the cooling system. Second question: what is the area of the chip? How much is the power density (W/m2) increasing? Thanks.
TDP is a marketing term, and AMD and Intel have come up with their own definitions and tweaked them over the years.
 
TDP is a marketing term, and AMD and Intel have come up with their own definitions and tweaked them over the years.
It is not a marketing term, it's just like the minimum specs for games, it's what you need to get going while biggerrer will be betterrer.

All the CPUs, be they from AMD or Intel, run perfectly fine and at stated speeds with a cooler that can handle the rated TDP. It's just that a bigger cooler can let them run faster than they otherwise would.
 
Some slides detailing the upcoming Birch Stream AP/SP platform. The info seems to be accurate, but Intel can still make last-minute changes before the final release date, so exercise some caution.

Btw, some of you might have already seen these before as well. I had these pics saved on my old backup drive.

EDIT: The socket 4677 support mentioned for the SP CPU lineup is not entirely correct, since it has been replaced by socket 4710. These slides are a bit old though.

[Slides: FYi3CYS.png, YidfRDd.jpeg, IdicpFZ.png, whPg70c.png, BRUobFN.jpeg, Kc4lxqO.png, e5KBKij.png, r11LTJJ.png, E3L7Auo.jpeg]
 
It is not a marketing term, it's just like the minimum specs for games, it's what you need to get going while biggerrer will be betterrer.

You're absolutely right about what you need to get going, versus 'biggerer will be betterer'.

After what I've seen just tinkering, I seriously question whether 'biggerer will be betterer' is really worth the price, especially in the here and now. Of course, well-informed upgrades, and finding the right information when things are not working well, tend to help.

I'm generally about bang for the buck, but the last two years of my escapades have been a prime example.

I am now sort of between systems. To try to keep a long story short, I was gifted a CyberPower in 2018. The box labeled "Ultimate Gaming System" contained a CyberPower Onyx case with a 500W Channel Well PSU, R7 1700, AMD-equivalent Cooler Master, MSI B450M Bazooka, 8GB DDR4-2133, MSI GT1030, and a 2TB HDD. Not terrible, but hardly what it was made out to be, and likely overpriced as well.

Between now and then, $500 in upgrades didn't turn out as planned. Mostly mysterious lagging. But I couldn't seem to find anyone that knew what was going on. 2022 was a bad time to buy hardware, but I'd had enough of the problems and didn't think I had enough horsepower.

This time, $2600 got me the Asus Tuf B550-PLUS with an R9 5900X, Scythe Mugen 5, 1TB SN570 M.2, 8TB WD Black, 32GB of Crucial Ballistix DDR4-3200 (same previous SKU), Asus KO RTX3060ti-8-OC, with a Corsair RM850x in a Corsair 4000X RGB, cooled with six Corsair LL120s, and a Commander Core XT.

Fast? Oh, hell, yes. But this one had issues too. Chiefly a random blank screen on start, almost from the get-go. Reseating things was a temporary fix before it did it again. Someone suggested a memory training issue when I homed in on it being every 8-12 boot cycles. Since memtest86 had shown nothing, I RMA'd the board.

A bad CPU was also possible, so I built a temporary machine / test bed for the other components with an ASRock B450M-HDV and a used 3600X ($175 total). It lagged badly with one of these DIMMs installed. The other duplicated my issue. You could have heard me cussing in the next county. However, a kit from its QVL cured its ills. I also noticed very little difference in performance (apart from total CPU load) from the 5900X.

When returned, Asus noted my board as "could not duplicate issue". A Patriot kit from the Asus board's QVL cured its ills as well -- A YEAR AFTER I BUILT IT.

I upgraded to a Fractal Pop Air XL, ($109), and finished out the 3600X in the 4000X for resale with an M.2 ($40), a GTX1650 ($210) and Corsair CX650F ($70). Thought I could at least get my money back out of it with mostly new parts, but anyone here knows that building PCs for profit almost never works out.

Turns out the previous 1700 rig's memory wasn't on its board's QVL either. It now runs perfectly with some HyperX Fury DDR4-2133 ($50 on eBay) from its QVL. So what's my point?

Through all this, I learned that Ryzens and supporting boards are apparently quite picky about RAM, something that quite a few people have argued with me about on other forums. I've also learned that persistent annoying random video recording glitches in OBS Studio were solved by recording in FLV format, then editing and exporting to MP4 / M4A. You could have heard me cussing in the next county when I figured that one out, too.

The 3600X is now a web / media NAS box ($150 case with another $150 in adapters), using the Tuf 1650S for encoding / transcoding, so it can still do light gaming. While the RTX3060ti is a better card that brings out the eye candy in some games I play, I wonder, was it really worth $850? I've even considered selling the 5900X rig.

Had I known then what I know now, or been able to find people with the right answers, my best bet would have been to shell out $2000 to swap the B450M / 1700 to a better case with a big cooler, PSU, SSD, RAM more to its liking, an RTX3080ti, overclocked the 1700, and it would have performed just as well, if not better, for about half to two-thirds of what I've wound up spending.

I mean, yeah, don't get me wrong, it's cool that the 5900X boosts to 5.00-5.025 at times, and ATS is awesome at 2K with high settings. I can also stage 14-28 vehicles chasing me on high detail at 2K in BeamNG, something the GTX1650 couldn't touch.

But being disabled on a fixed income now, with what I now know are autistic traits that make what others consider fairly innocuous issues 300X more frustrating to me, I really couldn't afford the headache or expense of the twists and turns I've had. Unfortunately, the last time I had someone build a machine for me, it did not go well.

While my 275mm KO card won't fit the 2U NAS case, an EVGA RTX3060ti XC will, and I can get one for $300 right now. I've been told my 5900X is worth $1000-$1500, and I could use the money.

But then again, a 3080 / 3080 Ti / 3090 is really overkill for anyone who isn't doing CAD / CGI work. I've seen ATS played on a 3090, and there really isn't much difference from my 3060ti.

So, yeah, 'biggerer' isn't always 'betterer'.
 
Turns out the previous 1700 rig's memory wasn't on its board's QVL either. It now runs perfectly with some HyperX Fury DDR4-2133 ($50 on eBay) from its QVL. So what's my point?

Through all this, I learned that Ryzens and supporting boards are apparently quite picky about RAM, something that quite a few people have argued with me about on other forums

Is the current AMD consumer/client platform too picky about QVL-certified memory kits?
 
Is the current AMD consumer/client platform too picky about QVL-certified memory kits?
I can't really speak to that, as I have not worked with any.

However, in my experience, AMD is not Intel, and many standard RAM kits are optimized for Intel, not AMD (some may offer versions for either). The DDR5 memory standard may have resolved some of these issues, but I really can't answer that either way.

A number of factors seriously delayed my 5900X upgrade. I actually bought the chip in Sep 2021, but would not get to actually use it until Mar 2022 because MSI goofed in saying my board supported it. A myriad of naysayers poo-poo'd the idea, saying that AM4 was "dead" and that AM5 would be out soon.

Not only had AMD announced and missed one release date for AM5, it had also released a statement in the weeks following saying that AM4 was not going anywhere. This was not lost on me. It suggested issues with the new platform that were taking longer than planned to sort out. While my 5900X build's RAM stability issues were persistent, they were fairly minor in the grand scheme of things.

Two years later, GamersNexus has identified and exposed a pattern of issues with the AM5 boards, EXPO, and thermals, for starters. There are many reviewer videos showing AM5s are running an average temp of 75-85 degrees. Some cannot run cooler than 80, even at idle!

GamersNexus has posted videos showing AM5 boards running hot, burning up, and in some cases, CPUs exploding. It's a board manufacturer's nightmare, and GN has even had to call out Asus for their poorly-disguised efforts to trick customers into voiding their warranty in the process of BIOS updates to fix existing problems.

However, JayzTwoCents has mentioned an interesting tidbit. While AM5 is supposed to be compatible with AM4-style coolers, he's noted that the AM5's IHS sits somewhat lower than the AM4's, so the cooler simply cannot make firm contact with the IHS for proper heat transfer. The IHS is also thinner, which makes shaving the lid ill-advised.

Nevertheless, J2C did this with an AM5, hoping it would help thermals. What he found is what anyone would suspect: while it helped somewhat, it actually increased the existing gap between the IHS and the cooler's base surface, which worsened the already poor heat transfer and likely lessened any real impact the modification might have had. Thermals are PC killers, and that is one area where AM5 is extremely weak.

Meanwhile, my 5900X / RTX3060ti build has run 24/7 for various reasons, mostly because it doubled as a Plex media server. With properly tuned fan curves, the 5900X usually held 65C, peaking around 70-72C if run hard. The RTX3060ti would touch 68-75C, but usually held 65C or so depending. Idle temps hung around 35-40C.

As for the aforementioned 3600X / GTX1650S-4-OC HTPC / web build, the CPU peaks around 65-68C and the GPU around 68-75C. These are, of course, gaming loads. For normal use, they rarely top 55C and idle around 38-40C.

But I digress. The takeaway from this, of course, is that bigger isn't always better. And newer isn't always better either. I am very glad that I built what I built when I built it, and quite confident my three AM4s will be around for years to come, doing anything I need them to do, because when I can no longer install or use Win10, I will be switching to Linux full-bore. I've already put Slacko 7 on a Celeron D 325 for a friend's kid, and I'm pretty impressed with it given the platform. PGA478 Celerons weren't known for being barn-burners.

Were I to build a new machine today, I would choose an AM4. Some might say I'm crazy, but I'm just not seeing where AM5 or LGA1700 are all they're cracked up to be.

The latest Intel chips are also already showing stability issues that no one can seem to fix. No offense to those who have these platforms, of course, but sometimes, it's better the devil you know.
 
Granite Rapids doubles the core count, increases memory bandwidth by 2.5x, doubles DSA performance, adds full CXL 2.0 ... and the story is that the chip only increases power by 1.5x?
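Framed as perf-per-watt, that point holds up even on the most conservative reading (the multipliers below come from the leaks discussed in this thread, not from measurements):

```python
# Rough perf-per-watt framing of the gen-over-gen claims in this thread.
core_count_gain = 2.0       # 64 -> 128 cores (leaked)
power_gain = 500 / 350      # ~1.43x vs. outgoing 350 W parts

perf_per_watt_gain = core_count_gain / power_gain
print(f"Cores-per-watt improvement: {perf_per_watt_gain:.2f}x")  # ~1.40x
```

Even ignoring any IPC or bandwidth gains, twice the cores at roughly 1.43x the power is still a net efficiency win per core.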
 
Well, at least Intel's Granite Rapids-D lineup, designed for edge solutions, should be more power-efficient (aimed at virtual radio access network (vRAN) workloads when it launches in 2025).

The previous 4th-generation Xeon platform already reduced power consumption by up to 20%, so the new GR-D SKUs should be in an even better position.
 