News Intel 5th Gen Xeon 'Emerald Rapids' pushes up to 64 cores, 320MB L3 cache — new CPUs claim up to 1.4X higher performance than Sapphire Rapids

Micron introduced DDR5 RDIMM-8000 in November. Mass production starts next year, and yes, module capacities are low, but this is just a glimpse of what's coming next.
Nothing can use 8000 yet, and nothing will be able to for a while (those modules also don't use standard JEDEC timings), and support for different formats will likely come first.

MCR DIMMs are not comparable to regular DDR5: they use a multiplexer to pull data from two ranks at once to get the extra bandwidth, and they will only work on specific Intel platforms. MRDIMMs are the same concept, and while there's been less specific information about them, they seem to be the standard the industry will settle on. They will need IMC support as well, so nothing current will be able to use them.
 
Micron introduced DDR5 RDIMM-8000 in November. Mass production starts next year, and yes, module capacities are low, but this is just a glimpse of what's coming next.
It'll come, sure. But when you can already buy UDIMMs around that speed while RDIMMs are limited to 6400, and most RDIMM-supporting platforms can only use 5600, that's a large part of why AMD went with 12 channels.
 

George³

Prominent
Oct 1, 2022
228
124
760
It'll come, sure. But when you can already buy UDIMMs around that speed while RDIMMs are limited to 6400, and most RDIMM-supporting platforms can only use 5600, that's a large part of why AMD went with 12 channels.
Yeah, I just flagged this as a sign of what to expect from future workstation and server platforms with DDR5.
 
The thing is, unless you're a board designer (and it's clear that you're not), you don't know if something is a dominant issue or just also true. I'm getting a lot of that from you, where you read about some effect and assume it explains some particular design decision. It's basically the same kind of thinking that conspiracy nuts use to establish causal relationships anywhere they perceive an alignment of interests between two entities or persons.

In engineering, there's usually one dominant factor which drives a decision, even when it seems like there could be more. In order to truly know the rationale behind certain design decisions, you need to be intimately familiar with the details (or take it on the authority of someone who is).
That's not been my experience. While my job isn't just engineering, it is a considerable part of it. I've had to design and build boards to interface between (admittedly simple) devices when my boss was upset about the new Chinese arcade games not playing nice with our card reader system (the manufacturer of said card readers was also entirely at a loss as to how to make them work). I do some technical work including reverse-engineering old game machines to repair them or to replace broken components that aren't made anymore with parts that weren't made for them. In my years of studying electrical engineering at Penn State and at work, engineering was always about making the best of bad tradeoffs. There was almost never only one dominant factor at play. Sure, I've never designed a modern computer board with PCIe 5 and DDR5 slots, but I'm no stranger to the field.
This is wildly out of date, if it were ever true. AFAICT, virtually all current datacenter SSDs use NVMe x4 connections. There might still be some x8 drives kicking around, but the industry has standardized on x4 and the advent of PCIe 5.0 and CXL (not to mention PCIe 6.0 and CXL 3.0, waiting in the wings) would seem to eliminate any foreseeable need for them to go wider.
I'm well aware that most SSDs (both consumer and data center) use x4 connections. The point I was making was that they could easily not and it wouldn't make much difference.

Because they had to maintain backward compatibility, they couldn't do obvious things like differential signalling. That doesn't mean PATA hit a fundamental wall, just that maintaining support for IDE and multiple devices on a bus required they introduce those extra grounds. If you were starting from scratch, there's a lot more you could do.

You're clearly starting with an explanation and claiming it applies everywhere it seems like it could. That's not engineering. Engineers start with data and work forwards to the answer.
Parallel SCSI often supported two different types of signaling in a single version. Was doing so expensive and complicated, as I previously mentioned? Probably, but certainly doable. Two devices, again, were not a big deal for PATA as it almost always had devices that were too slow to saturate the interface. This is not true for consumer PCI cards or for RAM, where a single device is capable of saturating the bus all on its own (a GbE adapter, PATA adapter, or multiport SATA adapter on the PCI bus, for a few examples).
Less significant for signal integrity? How do you know?


PATA supports IDE, which is much older than either of those technologies.
Same as above. PATA devices were generally too slow to push the interface hard enough for a second device on the bus to cause signal integrity issues. The most common time PATA's support for two devices caused signal integrity issues was if you had one device on the nearer connector and left the far connector open, which is why the later PATA versions mandated using the far connector for the master device. PATA's signal integrity issues in regards to not getting much faster had little to do with the second device. Furthermore, the second device was usually an optical disk drive, which was never fast enough to be an issue for PATA and was rarely in simultaneous use while the system's hard drive was being hammered.
IDE is from 1986, PATA was last updated in 2005. That's a 19 year run on compatibility. USB was introduced in 1996 and was last updated with USB4 in 2019. That's 23 years, though if you prefer the last version of USB using the same connector, that was USB 3.1 in 2016, a 20 year run. Ethernet has both of them beat and it has even used the same connector for longer.

So, B replaced A, instead of being replaced by C, because D is better and cheaper? Do you read this stuff, before you post it?


With margins as thin as theirs, a couple $ per unit is huge! This tells me you've never worked anywhere in the hardware business. BoM costs are one of their main concerns. That said, your cost estimate is probably off by an order of magnitude.


Early SATA drives were premium. Surely, it was the OEMs' ability to get cheaper PATA drives which drove any decision to keep using them.


But it didn't stay expensive because of the interface. It stayed expensive because it had been relegated to a high-end niche market of workstation and server hardware that was expensive for other reasons - not the interface.
There is no such logic in my posts.

If they couldn't make a decent profit selling systems with PATA devices, they never would have sold them. Claiming it made a "huge" difference is nonsense. They would have been inclined to shift to anything cheaper wherever possible, sure. BoM cost is one of their main concerns, sure. But something which costs a tiny few percent of the total at most is hardly their great dragon to slay. Will OEMs care about the cost benefits of a smaller cable that does the same job? Yes. They'll eventually get around to using cheaper things that work better. The fact of the matter, though, is that even ignoring the cost of the cable, you can't push a wide parallel interface as far as SATA went (and we all know it had much more potential bandwidth left if desired) using hardware as cheap as SATA's hardware. Not then and not now.

However, the simple fact is that there was little choice in the matter. PATA couldn't be pushed much faster without more expensive hardware. They needed a new storage interface to deal with things like flash memory getting faster, and serial was the best way to go to avoid the issues that plagued PATA. You cannot push parallel interfaces at higher speeds far at all without expensive hardware. Nobody wants to pay for big, high-power RF transistors in their ICs to drive the signal lines going to their hard drive, or, god forbid, an external drive, any more than they want to pay for 80 wires in a PATA cable. SATA didn't fully replace PATA in OEM systems until more than a few years after it came out.

I already mentioned that they likely had a good deal going for some of the older drives before the stock ran out and SATA drives were cheaper, but the fact remains that OEMs always sell computers that have features that aren't in use and are therefore entirely wasted money. Are they using every slot and port on the board? No? Then they're wasting money paying for them. Boards used to be entirely designed by hand, and the cost of adding things that would later go unused is not zero. OEMs aren't in a constant, absolute race to completely minimize costs.

The cost of the interface cannot be ignored when it is that high. While technology will inevitably improve, pushing parallel interfaces to high speeds is simply more of a physics problem than pushing a serial link to high speeds. The fact of the matter is that the drives connecting to those interfaces did come down in price, yet the interface cards themselves didn't come down nearly as fast because driving such interfaces is just not that easy. By the time a high-end interface like SCSI becomes cheap to make, it's practically obsolete. Many of the transistors in it are doing things that most people don't need and therefore aren't interested in paying for. Yes, there's a server tax. No, it's not just the server tax.

You completely missed the point. It wasn't about 24-slot boards, but about 48-slot boards. I was pointing out how little room is left after just 24 slots. Once you go to 48 slots, you're left with a huge board that can fit nothing else.


Their server CPUs don't support 3 DPC, and that's probably quite alright, since they're typically used in 2 CPU configurations and there doesn't seem to be much demand for 48-slot boards.
I didn't miss the point unless your point had nothing to do with what either of us said. I never said anything about going beyond 48 DIMMs per board, I talked about the different ways to get there depending on the needs of the workload. If memory bandwidth isn't a huge factor for a given workload but you need large RAM pools to hold a whole dataset, then if you could do three DIMMs per channel with an 8 channel CPU, that's 24 DIMMs which may be ideal for the workload. AMD sells six channel CPUs too. 18 DIMMs in the one board is not far-fetched if you don't need more CPU performance or memory bandwidth. Other workloads, of course, are different and need the memory bandwidth more than anything and obviously their users would have no interest in such configurations.

You're thinking of consumer RAM, here. Servers use registered memory, meaning they can handle more ranks.

Anyway most of the server market is datacenters, and they care a lot about density (i.e. fitting the most resources in the least rack space). I think they would go with higher-density RAM no matter what.


Cheaper memory generally runs slower, which is another reason why you don't want it in 100+ core systems.


ECC has nothing to do with it. And ECC RDIMMs are available at the maximum speed that the CPUs, themselves, support. These days, the ECC UDIMM market is suffering, mainly just because servers and now workstations only support RDIMMs. That leaves a small market for ECC UDIMMs, so DIMM manufacturers don't give it much love.
The same thing applies to server memory. A 64GB DDR5 RDIMM can be had for around $200. 128GB RDIMMs are around $1300 and 256GB RDIMMs are around $2500. I suspect server farms of any sort look at paying over three times as much per GB as not a small thing.
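
For a rough sense of the per-GB gap, here's a quick sketch in Python using the street prices quoted above (approximate figures of my own gathering, and prices obviously move around):

```python
# Per-GB comparison using the approximate street prices quoted above.
modules = {64: 200, 128: 1300, 256: 2500}   # capacity in GB -> rough price in USD
for capacity_gb, price_usd in modules.items():
    print(f"{capacity_gb:>3} GB RDIMM: ${price_usd / capacity_gb:.2f}/GB")
# Roughly $3.1/GB for the 64 GB module vs. ~$10/GB for the 128 GB and 256 GB ones,
# i.e. a bit over 3x the cost per GB for the high-capacity parts.
```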

I had a typo: I meant "RDIMM" where I wrote "DIMM". I wasn't saying ECC had anything to do with it, only that they aren't offered by manufacturers. For the record, it doesn't make a difference as you still can't easily buy ECC modules, be they DIMMs or RDIMMs, over DDR5-6400 even though Sapphire Rapids workstations have supported faster speeds like DDR5-6800 for a while now and Threadripper launched with even higher speeds supported. I understand the lack of faster DDR5 ECC UDIMMs to a point simply because there aren't any platforms that officially support overclocking of DDR5 ECC UDIMMs, though some consumer AMD boards are probably capable of it unofficially. But Sapphire Rapids is more than old enough that it should be more readily available.
 

bit_user

I've had to design and build boards to interface between (admittedly simple) devices when my boss was upset about the new Chinese arcade games not playing nice with our card reader system
Building some glue logic is very different from working at the cutting edge of technology. You don't know what technical challenges drove the decisions in current standards unless you have insight into the main challenges of implementing the previous generation of standards at their very limits.

Same as above. PATA devices were generally too slow to push the interface hard enough for a second device on the bus to cause signal integrity issues.
One has nothing to do with the other.

PATA's signal integrity issues in regards to not getting much faster had little to do with the second device.
Source?

IDE is from 1986, PATA was last updated in 2005. That's a 19 year run on compatibility. USB was introduced in 1996 and was last updated with USB4 in 2019. That's 23 years,
You're looking at the wrong aspect. It's not how long a standard was built upon, but the state of technology at its inception. That's what ultimately limits its longevity.

If they couldn't make a decent profit selling systems with PATA devices, they never would have sold them.
It's about cost reduction. In case you hadn't noticed, the price of PCs fell substantially, for quite a while. That enabled greater sales volumes and more profit. You couldn't just keep doing what had been viable in prior years, or else you'd fall behind the competition, get priced out, and have to drop out of the race.

Claiming it made a "huge" difference is nonsense.
I didn't. What I said was that a couple $ of BoM cost, as you claimed, was a big deal.

something which costs a tiny few percent of the total
With typical margins of less than 10%, a few % is a big deal. This is how I knew you never worked in a high-volume hardware or manufacturing business.
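
To put a number on that, here's a toy calculation; the figures are purely illustrative assumptions of mine, not anyone's actual financials:

```python
# Toy numbers showing why a small BoM saving matters at thin margins.
system_price = 500.0                          # assumed OEM selling price, USD
net_margin = 0.08                             # assumed net margin, 8%
profit_per_unit = system_price * net_margin   # $40 of profit per box
bom_saving = 2.0                              # the "couple $" of cable cost discussed above
share_of_profit = bom_saving / profit_per_unit
print(f"${bom_saving:.0f} saved per unit is {share_of_profit:.0%} of the "
      f"${profit_per_unit:.0f} per-unit profit")
```

At those assumed margins, shaving a couple dollars per unit moves per-unit profit by several percent, which is why BoM reductions get so much attention.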

The fact of the matter, though, is that even ignoring the cost of the cable, you can't push a wide parallel interface as far as SATA went
Again, SCSI proved you wrong on that.

using hardware as cheap as SATA's hardware.
Yes, SATA was about cost. Cable costs, primarily. The controllers sure weren't simpler!

PATA couldn't be pushed much faster without ~~more expensive hardware~~ breaking backward compatibility
Fixed that, for you.

You cannot push parallel interfaces at higher speeds far at all without expensive hardware.
Source?

Boards used to be entirely designed by hand,
Not in the post-2000's timeframe we're talking about, when SATA came along.

Anyway, I'm still waiting to see any evidence that timing skew was what killed PATA, as you had originally claimed. Just to keep things in perspective, the highest frequency PATA supported was about 66.5 MHz, which translates to a wavelength I'd conservatively estimate at 2.25m. And it's a ribbon cable, with all the conductors being the same length. Now, explain how timing skew was a deal breaker, beyond that speed.
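
For anyone who wants to check that estimate, here's the back-of-envelope version. The only inputs not already in this post are the propagation velocity, where I'm assuming roughly 0.5c as a conservative figure for a ribbon cable, and the 0.46 m maximum cable length, which is the ATA spec's 18-inch limit quoted from memory:

```python
# Back-of-envelope check of the wavelength estimate above.
f_hz = 66.5e6                  # highest PATA clock mentioned, ~66.5 MHz
v_prop = 0.5 * 3.0e8           # assumed signal velocity in the cable, m/s (~0.5c)
wavelength_m = v_prop / f_hz   # ~2.26 m, in line with the ~2.25 m figure above
cable_m = 0.46                 # ATA's 18-inch maximum cable length, from memory
print(f"wavelength ≈ {wavelength_m:.2f} m vs. a {cable_m} m maximum cable run")
```

With the cable being a small fraction of a wavelength, timing skew alone looks like a hard sell as the deal breaker.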

I didn't miss the point unless your point had nothing to do with what either of us said. I never said anything about going beyond 48 DIMMs per board,
You were complaining about the lack of DIMMs per channel, and my point about 48 DIMMs per board was to illustrate that it's a non-issue, with CPUs now having so many memory channels.

if you could do three DIMMs per channel with an 8 channel CPU, that's 24 DIMMs which may be ideal for the workload.
On a standard server workhorse 2-CPU board, that's 48 DIMMs. Again, virtually impractical. And Intel is moving to 12-channels per CPU, just like AMD.

AMD sells six channel CPUs too.
Siena is primarily for the lower-power CSP vertical. Not the same kind of cloud applications as Genoa.

The same thing applies to server memory. A 64GB DDR5 RDIMM can be had for around $200. 128GB RDIMMs are around $1300 and 256GB RDIMMs are around $2500.
There's a supply crunch in the DDR5 market, right now. The OEMs & hyperscalers have most of the higher-density RDIMM supply tied up, which is pushing up prices on the open market. A better example would be to look at DDR4, where 128 GB RDIMMs can be found for $650.

Still, if we take a DDR5 module capacity of 64 GB x 12 channels, that's 768 GB, which is a respectable 8 GB per core for a 96-core CPU.

you still can't easily buy ECC modules, be they DIMMs or RDIMMs, over DDR5-6400 even though Sapphire Rapids workstations have supported faster speeds like DDR5-6800 for a while now
They do not. The highest-spec Xeon W only officially supports DDR5-4800.

The fact that you can overclock them to higher is beside the point, because most people will not run such a machine with RAM at such speeds.

Threadripper launched with even higher speeds supported.
Which is, in fact, being served by at least some memory makers.

I understand the lack of faster DDR5 ECC UDIMMs to a point simply because there aren't any platforms that officially support overclocking of DDR5 ECC UDIMMs,
Some Raptor Lake CPUs officially support it at DDR5-5600, on a W680 board.
 