News Intel 12th-Gen Alder Lake Release Date, Benchmarks, Specifications, and All We Know


InvalidError

Titan
Moderator
The benefits of PCI-E 4 are largely on paper, the difference for GPU performance is pretty small
In many games, the 4GB RX 5500 gains 50-70% in performance going from 3.0x8 to 4.0x8. If 4GB GPUs, which are useless for mining memory-hard coins, end up being the only GPUs tons of gamers can get their hands on at a reasonable price for the foreseeable future, the hypothetical 4GB RX 6500 and RTX 3050 should benefit even more from 4.0x16.
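For context on what those link widths mean in raw numbers, PCIe roughly doubles per-lane throughput each generation, so 4.0x8 matches 3.0x16. A quick back-of-the-envelope sketch (one-direction bandwidth after 128b/130b encoding overhead; ignores protocol overhead beyond line coding):

```python
# Approximate usable PCIe bandwidth per link, after 128b/130b line coding.
# Gen 3 runs at 8 GT/s per lane, Gen 4 at 16 GT/s, Gen 5 at 32 GT/s.

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Return one-directional link bandwidth in GB/s."""
    transfer_rates = {3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane
    encoding = 128 / 130  # 128b/130b line coding (Gen 3 onward)
    return transfer_rates[gen] * encoding / 8 * lanes  # bits -> bytes

for gen, lanes in [(3, 8), (3, 16), (4, 8), (4, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {pcie_bandwidth_gbps(gen, lanes):.1f} GB/s")
```

This puts 3.0x8 at roughly 7.9 GB/s and 4.0x16 at roughly 31.5 GB/s, which is why a card limited to an x8 link cares so much about the PCIe generation it runs on.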
 

TheJoker2020

Commendable
Oct 13, 2020
219
64
1,690
Here is my expectation of how AMD could respond to Intel's big.LITTLE architecture; AMD has options.

AMD could quite easily update its Atom equivalent and add it as a chiplet. Paired with an integrated GPU aimed at video playback, that would be ideal for taking over from a dedicated GPU on desktop parts when nothing 3D is running.

AMD could of course build this into its memory-controller I/O die without any real cost in silicon space.

On the APU side of things, AMD could simply move the APUs to a chiplet-based design and do the above.

The flexibility AMD has with chiplet-based designs, plus its experience with chiplets, could mean it can respond quickly and effectively to Intel; or it might simply wait to see what end users think and for the software side of things to catch up.

The next 18 months are going to be very interesting in the computer technology sphere 😀
 

TheJoker2020

Commendable
Oct 13, 2020
219
64
1,690
In many games, the 4GB RX 5500 gains 50-70% in performance going from 3.0x8 to 4.0x8. If 4GB GPUs, which are useless for mining memory-hard coins, end up being the only GPUs tons of gamers can get their hands on at a reasonable price for the foreseeable future, the hypothetical 4GB RX 6500 and RTX 3050 should benefit even more from 4.0x16.
The problem there is that if cards running at x8 had run at x16 to begin with, they would already have had that performance increase on PCIe 3.

Yes, I know that AMD APUs only connect to GPUs at x8 (which I think was a mistake by AMD), so there is that valid argument. In my defense, I don't really consider an APU paired with a dedicated GPU a worthwhile option, so that never even crossed my mind.
 

InvalidError

Titan
Moderator
The problem there is that if cards running at x8 had run at x16 to begin with, they would already have had that performance increase on PCIe 3.
The 4GB RX 5500 still trails the 8GB model by 10-20% even on 4.0x8, so there is still a fair chunk of performance to be had there. Next-gen 50-tier GPUs will hopefully be at least 30% faster, so the potential for 4.0x8 / 3.0x16 to be a bottleneck should be considerably greater. PCIe bandwidth is becoming critical to the continued viability of 4GB GPUs.

Yes, I know that AMD APUs only connect to GPUs at x8 (which I think was a mistake by AMD), so there is that valid argument. In my defense, I don't really consider an APU paired with a dedicated GPU a worthwhile option, so that never even crossed my mind.
What do APUs have to do with it? The 4.0x8 limitation on the RX 5500 comes from the GPU itself having only an x8 electrical interface, which is why you won't see anyone testing it at anything more than 4.0x8 on AMD-based systems or 3.0x8 on anything Intel up to 10th-gen.
 

TheJoker2020

Commendable
Oct 13, 2020
219
64
1,690
I wasn't aware that any modern graphics cards were made with less than an x16 interface. What a nerf!

The smaller 4GB of GPU RAM plus the nerfed interface is a great example of a non-top-tier GPU needing PCIe 4 vs. PCIe 3.

Might have to read up on this odd GPU!
 
Well, for PC gamers on a budget, PCIe 4.0x16 would make GPUs faster than the RX5500 and GTX1650S still viable with only 4GB of VRAM while incidentally making them unusable for ETH and other memory-hard crypto-mining. If AMD and Nvidia genuinely want to throw desperate gamers a bone, they need to launch those.
With mid-range cards in the $200+ MSRP bracket shipping with 6+GB of VRAM for years, I can't imagine many people wanting to cut that back to 4GB, especially with video memory requirements increasing. A faster PCIe interface (and faster RAM) might reduce the performance hit of offloading data to system memory, but it's still not going to perform as well as dedicated VRAM on the card. At the very least, it will result in less-predictable performance in future games.

And I really doubt such cards would be unaffected by the current price hikes. Even the existing 1650 SUPER and 5500 XT 4GB are currently selling for around double their MSRP on sites like eBay. I'm sure we will see new "budget" 4GB cards released some months down the line, though whether they remain reasonably priced is likely to depend more on whether the mining market has collapsed by then.
 
If PCIe 5 makes any difference, it will be because of the new way console games will run.
Direct I/O between the NVMe drive and the GPU is the only thing that could maybe use that bandwidth, and it depends on how fast and how widely it gets adopted on the consoles. Think DX12 here.
 

InvalidError

Titan
Moderator
With mid-range cards in the $200+ MSRP bracket shipping with 6+GB of VRAM for years, I can't imagine many people wanting to cut that back to 4GB, especially with video memory requirements increasing.
Have you looked at GPU prices lately? If you want to buy new without waiting who-knows-how-long to get one at anywhere near MSRP, $300 only gets you a 4GB 1650S and you have to step up to $500+ for anything recent with 6GB. 6GB GPUs aren't anywhere near $200 anymore.

With even the 4GB models priced out of most non-hardcore PC gamers' budgets, there will be a pretty good market for those if they can leverage 4.0x16 to offset most of their VRAM deficit and be available in sufficient quantities to keep them anywhere near MSRP.
 
...Honestly I see it as a nice way for the OEMs to be like "we can sell you a brand new 20-core GAMING computer! For the low, low price of $499!" and not mention that it's 16 Atom cores and 4 Core i-series cores (although, to be honest, in a virtual environment that's not the worst idea if you just run a bunch of lightweight VMs)...
I think THIS is Intel's point. They can sell a bunch of cores and act like their 16-core is the same as AMD's 16-core. But in reality, Intel is selling 8 good cores and some junk against AMD's 16 good cores. Great for Intel's marketing, considering the multicore benchmarks are already brutal against Intel.

Then Intel will act like their 6+4, 16-thread chips should match AMD's, but really they'll be closer to AMD's 12-thread 6-core chips.

AMD might need to increase its marketing budget to dispel some of these misleading comparisons. Of course, if big.LITTLE software catches up with the hardware, AMD might need to get on this train.
 
Have you looked at GPU prices lately? If you want to buy new without waiting who-knows-how-long to get one at anywhere near MSRP, $300 only gets you a 4GB 1650S and you have to step up to $500+ for anything recent with 6GB. 6GB GPUs aren't anywhere near $200 anymore.

With even the 4GB models priced out of most non-hardcore PC gamers' budgets, there will be a pretty good market for those if they can leverage 4.0x16 to offset most of their VRAM deficit and be available in sufficient quantities to keep them anywhere near MSRP.
I really like this idea. I can't mine because I've got a 4GB RX 480. It's hilarious that I got it for $200 four years ago and it's STILL the best deal out there, used, at $180.

It's odd that AMD and Nvidia only sell high-end cards (nothing decent ever came out to replace the $200 RX 580). If they just made a $150 RX 570 4GB, it would dominate the gaming market. Instead, they sell a limited supply of more expensive cards, which creates a scalping market. They could just sell those expensive cards at double the MSRP and cut out the scalpers while still selling at the same rate. I'm very confused about what's happening there.
 
Won't Alder Lake be competing with Ryzen 6000 series pretty directly, on the calendar? From what I can tell, the 6000 series is looking at a 25% IPC improvement (or more): https://hothardware.com/news/amd-zen-4-ryzen-6000-cpus-25-percent-ipc-lift

That keeps Intel firmly in the backseat with AMD driving.

Plus, I'm not going to trust Intel's claims of keeping two or three generations on the same socket. They haven't done that in 15 years. They just don't have the consumer-pleasing culture that AMD has. I can't help but feel like Intel's 10nm node problems are karma.
 

InvalidError

Titan
Moderator
Won't Alder Lake be competing with Ryzen 6000 series pretty directly, on the calendar? From what I can tell, the 6000 series is looking at a 25% IPC improvement (or more): https://hothardware.com/news/amd-zen-4-ryzen-6000-cpus-25-percent-ipc-lift
The CPUs AMD may be launching around the time of Alder Lake are Zen 3+ and I've only seen claims of 5-7% performance gain on those. Zen 4 is the one with the bigger uplift and that isn't coming this year - at least not to the mainstream.
 

TheJoker2020

Commendable
Oct 13, 2020
219
64
1,690
The CPUs AMD may be launching around the time of Alder Lake are Zen 3+ and I've only seen claims of 5-7% performance gain on those. Zen 4 is the one with the bigger uplift and that isn't coming this year - at least not to the mainstream.

My primary hope is that AMD has dramatically increased the performance of the memory controller; right now it is the biggest weakness of Zen CPUs.

We have already seen the new Zen 3 APUs being tested with 4,800 MHz RAM! They have a much better RAM controller, and I hope it is brought to the next-gen desktop Zen 3+ CPUs. Then again, I expected the Ryzen 3000 CPUs to be able to run RAM at 4,000 MHz.
 
The CPUs AMD may be launching around the time of Alder Lake are Zen 3+ and I've only seen claims of 5-7% performance gain on those. Zen 4 is the one with the bigger uplift and that isn't coming this year - at least not to the mainstream.
Zen 4 is expected early 2022 (Feb to April). Considering Alder Lake is also slated for 2022, that's pretty close to the same time. Maybe Intel will hit the early end of the Alder Lake release timeline of "late 2021 to early 2022", but I wouldn't bet on Intel's timeliness these days.
 
My primary hope is that AMD has dramatically increased the performance of the memory controller; right now it is the biggest weakness of Zen CPUs.

We have already seen the new Zen 3 APUs being tested with 4,800 MHz RAM! They have a much better RAM controller, and I hope it is brought to the next-gen desktop Zen 3+ CPUs. Then again, I expected the Ryzen 3000 CPUs to be able to run RAM at 4,000 MHz.
Well Zen 4 is supposed to be on DDR5. That will require reworking the memory system a bit. Hopefully they totally rebuild it.
 

TheJoker2020

Commendable
Oct 13, 2020
219
64
1,690
Well Zen 4 is supposed to be on DDR5. That will require reworking the memory system a bit. Hopefully they totally rebuild it.
It will have to for DDR5, as DDR5 is designed to run at far faster speeds. I was referring to Zen 3+, which is rumoured to have a small IPC bump; my hope was for faster memory support. It will still be the same socket and DDR4.

Zen 4 will be a new socket and DDR5 👍
 
Have you looked at GPU prices lately? If you want to buy new without waiting who-knows-how-long to get one at anywhere near MSRP, $300 only gets you a 4GB 1650S and you have to step up to $500+ for anything recent with 6GB. 6GB GPUs aren't anywhere near $200 anymore.

With even the 4GB models priced out of most non-hardcore PC gamers' budgets, there will be a pretty good market for those if they can leverage 4.0x16 to offset most of their VRAM deficit and be available in sufficient quantities to keep them anywhere near MSRP.
Well of course the graphics card pricing from a few months back isn't comparable to current pricing, though it's mostly miners and enthusiasts willing to pay a large premium who are buying cards at these inflated prices. My point is that cards launching in the $200+ price bracket have had 6-8GB of VRAM as standard for at least several years, and a "faster" card with less VRAM than that is going to be a bit imbalanced, and will likely be undesirable to anyone who knows what recent mid-range cards have typically offered. PCIe 4.0 x16 isn't likely to improve that much. You are still likely to see a fairly large performance hit when VRAM is exceeded, and that's only assuming one is building a brand new PCIe 4.0-capable system, since the feature still isn't at all common among those who might consider upgrading to such a card.

And an increasing number of current games already need more than 4GB of VRAM to run their best, and that probably isn't going to improve as developers start designing for the new consoles. That may not be as much of a concern if one were buying a lower-end gaming card like a 1650 SUPER near its MSRP, but an improved 4GB card isn't likely to be selling for that much for quite some time. It would probably be more like $400+ if released today, making it no more attractive to the budgets of "non-hardcore PC gamers" than any of the other cards currently on the market. Who wants to pay that kind of money for what would probably be like a 1660 SUPER with less VRAM and questionable future performance? Supply isn't likely to be much better either, when the manufacturers will undoubtedly be prioritizing higher-margin parts.
 

InvalidError

Titan
Moderator
a "faster" card with less VRAM than that is going to be a bit imbalanced, and will likely be undesirable to anyone who knows what recent mid-range cards have typically offered. PCIe 4.0 x16 isn't likely to improve that much.
The 4GB RX 5500 performs 70-90% as well as the 8GB variant when it has 4.0x8 available; 4.0x16 should provide enough extra bandwidth and reduced latency to push that bar higher.

an improved 4GB card isn't likely to be selling for that much for quite some time. It would probably be more like $400+ if released today
The 3060 launched for $325 so the 3050's MSRP should be well under $300. Beyond that, it'll be a matter of whether Nvidia can make enough chips to keep retail prices near MSRP and with 4GB being mostly useless for mining, that's one less major sinkhole for gamers to compete with.
 
Feb 17, 2021
9
2
10
Intel is so confusing. Why release Rocket Lake only a few months before Alder Lake?
Why announce Alder Lake before Rocket Lake is even available?
If Alder Lake is this good, then why release a subpar CPU right before?
I'm so confused.
 

InvalidError

Titan
Moderator
Intel is so confusing. Why release Rocket Lake only a few months before Alder Lake?
If Alder Lake is this good, then why release a subpar CPU right before?
Alder Lake is on 10nm, and Intel's 10nm production may never ramp up enough to sustain volume production across every market segment: additional 10nm fabs came too little, too late to be decently profitable, so I doubt there will be enough chips to meet demand. It may have more to do with shifting some demand away from 14nm to offset the production-throughput loss from those huge Rocket Lake dies.

Alder Lake uses DDR5, which will likely be ludicrously expensive for the next year or two, and I haven't seen any rumors of backward compatibility with DDR4, so that may not even be an unofficial option this time around. The LGA1700 socket and support for PCIe 5.0 also likely mean more expensive motherboards.

Alder Lake also introduces mixed core architectures to x86. Given that it took software developers 2-3 years to figure out how to deal with Ryzen's CCXes and IF, it could be a while until the peculiarities of mixed x86 cores get sorted out.
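Part of that sorting-out is simply steering work onto the right cores. On Linux, for instance, a process can restrict itself to a chosen subset of logical CPUs via `os.sched_setaffinity`; which CPU numbers map to big cores versus little cores is platform-specific, so the IDs in this sketch are purely illustrative:

```python
import os

# Hypothetical mapping: pretend logical CPUs 0-3 are the "big" cores.
# Real big/little core numbering varies by platform and must be read
# from the OS, so these IDs are purely illustrative.
P_CORES = {0, 1, 2, 3}

def restrict_to(requested: set) -> set:
    """Intersect a requested CPU set with what the OS actually offers,
    falling back to the full available set if there is no overlap."""
    if hasattr(os, "sched_getaffinity"):  # Linux-only API
        available = os.sched_getaffinity(0)
    else:
        available = requested
    return (requested & available) or available

if hasattr(os, "sched_setaffinity"):
    # Pin the current process to the chosen cores.
    os.sched_setaffinity(0, restrict_to(P_CORES))
```

Until OS schedulers understand the difference between the core types, this kind of manual pinning is roughly what performance-sensitive software would have to do itself.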

I get the feeling that Alder Lake will be mainly a niche/enthusiast thing due to how expensive it will initially be simply from too many new things and associated early-adopter taxes hitting simultaneously along with relatively limited (though probably not Broadwell-esque) availability.
 
intel is so confusing. why release rocket lake only a few months before alder lake
Alder Lake desktop processors are probably the better part of a year away. Intel expects to launch Alder Lake sometime toward the end of this year, but that may only apply to laptop parts. They launched their "11th gen" notebook processors half a year ago, after all, and are only getting around to the desktop lineup now. It's possible we could see some Alder Lake parts on the desktop shortly before the year is through, but I wouldn't be at all surprised if they didn't launch until early next year.
 

Geef

Distinguished
I can't say I'm too fond of the big.LITTLE approach on desktop CPUs.
Perhaps on mobile, but I just don't get the benefit on desktop.
Maybe others feel differently.
I am hoping this change will make it easier for devs to tell their programs to actually use additional cores. That has been a problem for a long time; programs don't take advantage of large core counts.
 
I am hoping this change will make it easier for devs to tell their programs to actually use additional cores. That has been a problem for a long time; programs don't take advantage of large core counts.
It's not that devs are too incompetent to know how to use additional cores; it's just that a lot of programs simply can't, because they're not CPU-bound. This is a well-known problem.

You can throw as many cores at it as you want, to infinity, but it ain't going to make Notepad run any better.
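The diminishing return being described here can be put in numbers with Amdahl's law: if only a fraction p of a program's work can run in parallel, n cores can speed it up by at most 1 / ((1 - p) + p/n). A quick illustration:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only part of the work parallelizes
    (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A program that is only 50% parallel barely benefits past a few cores:
for n in (2, 4, 16, 1_000_000):
    print(f"{n:>9} cores -> {amdahl_speedup(0.5, n):.2f}x speedup")
```

Even with a million cores, a half-serial program never reaches a 2x speedup, which is why piling on cores does nothing for mostly-serial software.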
 

InvalidError

Titan
Moderator
It's not that devs are too incompetent to know how to use additional cores; it's just that a lot of programs simply can't, because they're not CPU-bound.
And many everyday algorithms simply cannot be made multi-threaded. For example, you cannot parse a stream input in a multi-threaded manner since you need to know the context of the next byte coming in to interpret it correctly. That's why things like code compilers are fundamentally single-threaded, though you can build multiple object files in parallel to achieve full multi-threaded speed-up.
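The compiler example can be sketched in a few lines: each translation unit is processed sequentially inside, but independent units can be handled concurrently, which is what `make -j` style builds exploit. A toy model (file names hypothetical; a thread pool stands in for the compiler processes a real build system would spawn):

```python
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source: str) -> str:
    """Stand-in for compiling one translation unit. Internally this is
    sequential: each token only makes sense given what came before it."""
    return source.replace(".c", ".o")

# Hypothetical source files. The units are independent of one another,
# so they can be "compiled" concurrently even though each one is serial
# inside; map() returns the results in input order.
sources = ["main.c", "util.c", "parser.c"]

with ThreadPoolExecutor() as pool:
    objects = list(pool.map(compile_unit, sources))

print(objects)
```

The parallelism lives between the units, not inside any one of them, which matches the point above: the parser itself stays single-threaded no matter how many cores are available.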