Lam Research has proposed 3D DRAM as a pathway to higher density memory solutions.
3D DRAM Proposal Paves the Road for a Density Increase : Read more
"For bigger needs like servers there are a ton of slots for memory so its not really an issue, if it is just change motherboards for more slots."
The number of slots isn't the only problem. You also have the cost of platforms with more slots, or of slots that support higher-density DIMMs. There are applications that can benefit from having more RAM but aren't necessarily worth that much extra RAM at current prices. Then you have applications that already use all of the RAM that can practically be thrown at them and could use more if there were a cost-effective way of plugging it in. If you can cram 16X the amount of memory per chip, that opens up the possibility of putting all of the bandwidth mainstream CPUs and GPUs need into a single HBM-like chip mounted directly on the CPU/GPU substrate, or even 3D-stacked on a memory/cache/IO die/tile.
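As a rough, illustrative sketch of the scale involved: the die capacity, stack height, and interface figures below are round assumptions (roughly HBM3-class numbers), not anything stated in the article or the comment above.

/* Sketch: what a single HBM-like stack could hold and move if each die
 * carried 16X today's capacity. All figures are assumptions for illustration. */
#include <stdio.h>

int main(void) {
    const double die_gbit_today = 32.0;   /* assumed current DRAM die capacity, Gbit */
    const double density_gain   = 16.0;   /* the 16X figure from the comment above */
    const int    dies_per_stack = 8;      /* assumed stack height */

    double stack_gbyte = die_gbit_today * density_gain * dies_per_stack / 8.0;

    const double bus_bits     = 1024.0;   /* HBM-style interface width, assumed */
    const double gbps_per_pin = 6.4;      /* assumed per-pin data rate */
    double stack_gbs = bus_bits * gbps_per_pin / 8.0;   /* ~819 GB/s per stack */
    double ddr5_gbs  = 2 * 64 * 5.6 / 8.0;               /* dual-channel DDR5-5600, ~90 GB/s */

    printf("Capacity per stack:  %.0f GB\n", stack_gbyte);
    printf("Bandwidth per stack: %.0f GB/s (vs %.1f GB/s dual-channel DDR5)\n",
           stack_gbs, ddr5_gbs);
    return 0;
}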
"PC memory hardly ever needs to be upgraded. For many years 8GB of memory was enough for everyone. Then it moved up to 16GB."
For most people? Sure.
"...if it is just change motherboards for more slots."
That has limitations: as your memory traces get longer, your speed and latency suffer as a result.
"This isn't too much of a problem. PC memory hardly ever needs to be upgraded. For many years 8GB of memory was enough for everyone. Then it moved up to 16GB.
For bigger needs like servers there are a ton of slots for memory so its not really an issue, if it is just change motherboards for more slots.
Maybe in 20+ years it might become an issue for servers but PCs will be just fine running at - double the memory every 10 years? - a 64 GB PC as a base model and gamers running 128GB."
Consumer memory requirements have mostly plateaued, suspiciously around the same time frame that cost per bit stopped reliably declining and memory prices started zigzagging every few years.
"I would much, much prefer that they allocate resources to develop manufacturing tools to scale up the manufacturing of Non-Volatile Memory (NVM) such as VCMA MRAM, or VG-SOT-MRAM (as per the concept from European research center IMEC), to replace volatile DRAM, as Non-Volatile Memory is a needed disruptive technology to enable so many improvements like « Normally-off Computing »: this is soooo much needed!"
DRAM in self-refresh mode is only 20-30mW per chip, a negligible part of system power next to the 10-70W it takes just to display a static screen between key presses, depending on how large and how bright the screen is. Most modern motherboards waste more power on on-board RGB than DRAM self-refresh needs.
If you want to reduce idle/standby power consumption, a better thing to focus on is reducing the number of background processes that needlessly wake up the CPU and its peripherals.
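A quick back-of-the-envelope check of that claim, using the figures quoted above; the chip count and the mid-range values chosen are assumptions for illustration, not measurements.

/* Rough share of standby power going to DRAM self-refresh vs the display,
 * using the 20-30mW-per-chip and 10-70W figures from the comment above. */
#include <stdio.h>

int main(void) {
    const double refresh_per_chip_w = 0.025; /* middle of the 20-30mW range */
    const int    dram_chips         = 16;    /* e.g. two double-sided DIMMs, assumed */
    const double display_w          = 15.0;  /* laptop-class panel, assumed */

    double dram_w = dram_chips * refresh_per_chip_w;        /* 0.4 W */
    double share  = 100.0 * dram_w / (dram_w + display_w);  /* ~2.6% */

    printf("DRAM self-refresh: %.2f W\n", dram_w);
    printf("Share of DRAM + display standby power: %.1f%%\n", share);
    return 0;
}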
"Your system only consumes energy when it is doing a task, not when it is idle or shut down, and it is always immediately ready as soon as you turn on power (no boot time, or at least much less): I think people don't really realize how groundbreaking that would be because, as of 2023, such a system doesn't yet exist."
Standby mode has existed for over 20 years and gives you the benefit of a computer that wakes up faster than the monitor can sync up with the video signal. Having a computer that is "instant on" is nothing new to people who can be bothered to use it.
"For many infrequently triggered, battery-operated IoT sensors it would be game changing: if the IoT device can harvest energy from the environment (small solar panel, vibration, ...) and store it in a small battery, then it may provide enough energy to never have to manually recharge or change a non-rechargeable coin cell battery in your devices."
What sort of energy-harvesting IoT device does sufficiently complex tasks that it needs so much RAM that self-refreshing RAM becomes a meaningful concern? Typical sensors don't really do much besides measure, transmit the measurement, and go back to sleep. You can do that sort of stuff in less than 1KB of (S)RAM... unless you are running a few million lines of bloat in the form of an OS when all you really need from it is a barebones network stack.
The most popular IoT devices are things like voice assistants, smart plugs, doorbells, surveillance cameras, thermostats, etc., all of which are wired or plugged in and most of which are always on, continuously recording to catch trigger words or motion. Not much point in worrying about 30mW of self-refresh when the bus IO is drawing 300+mW.
"I would much, much prefer that they allocate resources to develop manufacturing tools to scale up the manufacturing of Non-Volatile Memory (NVM) such as VCMA MRAM, or VG-SOT-MRAM ... to replace volatile DRAM ..."
Intel from 2016 is calling and asking where you were when they were losing money left and right producing non-volatile Optane memory.
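A minimal sketch of the measure/transmit/sleep pattern described above, assuming a generic micro-controller; the sensor, radio, and sleep calls are stand-in stubs (implemented here so the sketch compiles and runs on a desktop), not a real vendor API.

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

static uint16_t last_reading;   /* entire persistent state of the sensor: 2 bytes */

/* Stand-in hardware hooks; real firmware would map these to an ADC,
 * a radio driver and a deep-sleep call on the micro-controller. */
static uint16_t read_sensor(void)            { return 231; }  /* pretend 23.1 C */
static void     radio_send(uint16_t value)   { printf("tx %u\n", (unsigned)value); }
static void     enter_deep_sleep(unsigned s) { sleep(s); }     /* real HW powers down */

int main(void) {
    for (;;) {
        last_reading = read_sensor();   /* measure       */
        radio_send(last_reading);       /* transmit      */
        enter_deep_sleep(60);           /* back to sleep */
    }
}

The point is only that the persistent working set in this regime is a couple of bytes of static RAM, which is the scale the comment above is describing.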
"This isn't too much of a problem. PC memory hardly ever needs to be upgraded. For many years 8GB of memory was enough for everyone. Then it moved up to 16GB. [...]"
I think that you should stop giving your opinion, because you really sound like a clown here.
"Intel from 2016 is calling and asking where you were when they were losing money left and right producing non-volatile Optane memory."
Optane memory seems to be some kind of Phase Change Memory (PCM), which seems to have very high read/write power consumption: that makes the technology a very bad fit for IoT devices to begin with...
"You involuntarily have a « survivor bias » here (see the Veritasium video on YouTube about it) in your assessment of IoT devices, because you are not considering all the battery-operated IoT applications that can't yet exist (so you don't take them into account) because of the continuous energy draw represented by the 30mW self-refresh power consumption you are talking about."
If your IoT temperature/moisture/light/etc. sensor requires 2GB of DRAM to operate, start your power-optimizing journey by ditching Linux/BSD from your IoT device payload. A micro-controller running a bare-metal software stack could do that job using only 1KB of on-chip SRAM and 1/1000th the CPU power.
"That said, there are likely many IoT devices for which more Non-Volatile Memory (NVM) could open new opportunities: hard to foresee which ones exactly, but I have no doubt about that..."
If your IoT device really requires GBs worth of working memory and storage, chances are it requires a lot more processing power than you can comfortably hook up to a coin/button-sized rechargeable battery.
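To put numbers on why a continuous draw matters at coin-cell scale, a quick sketch; the ~225mAh/3V CR2032 figure and the two example loads are assumptions, with 30mW taken from the self-refresh figure discussed above.

/* How long a CR2032-class coin cell (~225 mAh at 3 V, assumed) lasts
 * against a constant load: 30mW continuous vs a 30uW sleep current. */
#include <stdio.h>

int main(void) {
    const double cell_wh    = 0.225 * 3.0;          /* ~0.675 Wh, assumed */
    const double loads_w[2] = { 0.030, 0.000030 };  /* 30 mW vs 30 uW */

    for (int i = 0; i < 2; i++) {
        double hours = cell_wh / loads_w[i];
        printf("%8.6f W -> %8.0f hours (%6.1f days)\n",
               loads_w[i], hours, hours / 24.0);
    }
    return 0;
}

With these assumptions, a constant 30mW drains the cell in under a day, while a 30uW sleep current lasts for years, which is the gap the two sides of this exchange are arguing about.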
"I would much, much prefer that they allocate resources to develop manufacturing tools to scale up the manufacturing of Non-Volatile Memory (NVM) such as VCMA MRAM, or VG-SOT-MRAM (as per the concept from European research center IMEC), to replace volatile DRAM, as Non-Volatile Memory is a needed disruptive technology to enable so many improvements like « Normally-off Computing »."
Well, since Martin Fink could not deliver the memristor he promised, that looks further away than ever.
All the working NV-RAM technologies I've been seeing can't get anywhere near current RAM density, and the last one I read about was MRAM on a 45nm process, basically for stop-clock logic: you can implement all register files and SoC caches as non-volatile and therefore don't need to spend time and effort on saving processor state when going to sleep.
It's interesting, even available, but in an embedded power domain that never got anywhere near the IoT hype predictions. So the technology is in fact there; I believe it's even used in some embedded storage controllers, but the market is so tiny it hardly makes the news.
Martin's memristor promised a single technology covering everything from register-file storage to what's still stored on tape, with SRAM speeds and tape economy, and that would indeed be transformational.
The busy evolution of ever-expanding storage technologies and densities at least makes for an interesting life as an IT architect; Martin's vision would have been one big bang and then little else to do. So in a perverse way I'm happier about it now than I ever was before...
"All the working NV-RAM technologies I've been seeing can't get anywhere near current RAM density, and the last one I read about was MRAM on a 45nm process, basically for stop-clock logic."
The latest numbers I can find are 0.041 um^2 for 28nm MRAM vs 0.0016 um^2 for Micron's D1a cells and 0.002 um^2 for previous-gen DRAM. So the smallest MRAM so far has ~20X worse density than contemporary DRAM.
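Those cell-area figures can be sanity-checked directly; a quick sketch, assuming one bit per cell:

/* Convert the quoted cell areas into bits per mm^2 and compute the
 * MRAM-vs-DRAM density ratios (one bit per cell assumed). */
#include <stdio.h>

int main(void) {
    const double mram_um2 = 0.041;   /* 28nm MRAM cell, as quoted */
    const double d1a_um2  = 0.0016;  /* Micron D1a DRAM cell, as quoted */
    const double prev_um2 = 0.002;   /* previous-gen DRAM cell, as quoted */
    const double um2_per_mm2 = 1.0e6;

    printf("MRAM: %6.1f Mbit/mm^2, D1a: %6.1f Mbit/mm^2\n",
           um2_per_mm2 / mram_um2 / 1.0e6, um2_per_mm2 / d1a_um2 / 1.0e6);
    printf("MRAM cell vs D1a:      %.1fx larger\n", mram_um2 / d1a_um2);
    printf("MRAM cell vs prev-gen: %.1fx larger\n", mram_um2 / prev_um2);
    return 0;
}

On these quoted figures the gap is about 20.5X against previous-gen DRAM cells and closer to 25X against D1a, which is where the "~20X worse density" estimate comes from.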
"DRAM in self-refresh mode is only 20-30mW per chip, a negligible part of system power next to the 10-70W it takes just to display a static screen between key presses. If you want to reduce idle/standby power consumption, a better thing to focus on is reducing the number of background processes that needlessly wake up the CPU and its peripherals."
While that is true, it's also very much a personal computer perspective.
I think the current Wikipedia article on MRAM has a rather balanced and fair assessment of the current state. The net result is that in terms of endurance, density, speed, retention and power it's always a compromise, never able to push out the alternatives 1:1. And it's not the fabrication scale that limits it, it's the underlying physics.
Thanks very much for this information.
I would think MRAM is already at (or close to) better density than the one you mention, because NXP has announced that it should be using it in 2025 in a 16nm TSMC automotive process.
Also, there are multiple variations of MRAM, as it can be tuned with different trade-offs (speed, endurance, low power, ...; for example, VCMA-MRAM is very good for low power consumption), which could make Non-Volatile MRAM very versatile, and some variations may be easier to manufacture and to scale to smaller nodes than others.
But competing with DRAM is really, really hard, because even if you could make MRAM with near-DRAM density, at first the cost would likely be much, much higher (no idea how much more, but I would think it could easily be at least 100x more...).
That is why I hope that the European research center IMEC can find useful applications that drastically (10x / 100x / 1000x) improve AI / Artificial Neural Network (ANN) energy efficiency, as that would provide a much greater financial incentive to speed up the manufacturing scale-up of this technology (which is needed to significantly reduce cost and make it more cost-competitive with DRAM...).