Samsung uses 12nm DRAM process technology to build its highest-capacity DRAM IC to date.
Samsung Paves The Way to 1TB Memory Sticks with 32Gb DDR5 ICs : Read more
> I wish it would be low-power 32Gb Non-Volatile VG-SOT-MRAM ICs, to be able to get low-power 32GB Non-Volatile memory in a package.

According to your link, even the most advanced MRAM variant still has a write endurance of only a trillion cycles. In a PC-style environment or anything with comparably regular use, you may burn through that endurance within weeks if it is used as DRAM, possibly within hours if used as SRAM. I wouldn't want that in any of my devices until endurance is up by at least another +e3.
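As a rough back-of-envelope sketch of that claim (the per-location write rates below are illustrative assumptions, not measurements), here is how quickly a 10^12-cycle budget can be spent on a single hot cell:

```python
# Rough endurance estimate for a single hot memory cell.
# The write rates are assumptions for illustration only.
ENDURANCE_CYCLES = 1e12  # quoted MRAM write endurance

scenarios = {
    "DRAM-like hot location (~1M writes/s)": 1e6,
    "SRAM/cache-like hot cell (~50M writes/s)": 50e6,
}

for name, writes_per_s in scenarios.items():
    seconds = ENDURANCE_CYCLES / writes_per_s
    # ~11.6 days for the DRAM-like case, ~0.2 days (about 5.5 hours) for the SRAM-like case
    print(f"{name}: worn out in ~{seconds / 86400:.1f} days")
```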
> Samsung can now build a 128 GB DDR5 RDIMM with ECC using 36 single-die 32 Gb DRAM chips

Nope. Starting with DDR5, ECC DIMMs require 25% more chips. So, the number would be 40 chips, whereas a non-ECC DIMM of the same capacity would have only 32.
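For anyone wanting to check the arithmetic, here is a sketch assuming x4 chips and 2 ranks (one common layout for a 128 GB RDIMM; the organization is an assumption for illustration): DDR5 splits the DIMM into two 32-bit subchannels and ECC adds 8 bits to each, so each rank goes from 16 to 20 chips.

```python
# Chip-count check for a 128 GB DDR5 DIMM built from 32 Gb (4 GB) DRAM dies.
# Assumes x4 chips and 2 ranks; illustrative only.
DATA_BITS = 32        # data bits per DDR5 subchannel
ECC_BITS = 8          # extra bits per subchannel on an ECC DIMM
CHIP_WIDTH = 4        # x4 DRAM
SUBCHANNELS = 2
RANKS = 2

non_ecc = RANKS * SUBCHANNELS * (DATA_BITS // CHIP_WIDTH)
ecc = RANKS * SUBCHANNELS * ((DATA_BITS + ECC_BITS) // CHIP_WIDTH)

print(non_ecc, "chips non-ECC ->", non_ecc * 4, "GB")                 # 32 chips, 128 GB
print(ecc, "chips with ECC (+{:.0%})".format(ecc / non_ecc - 1))      # 40 chips, +25%
```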
> According to your link, even the most advanced MRAM variant still has a write endurance of only a trillion cycles.

Do you have comparable figures for DDR5 DRAM and ~7 nm SRAM cells? Just curious.

BTW, as memory layout randomization seems to be gaining favor for security reasons, I think it will have the effect of leveling out wear. Most DRAM content should be rather low-turnover. Therefore, I'm a little skeptical that ~1T cycles is inadequate for a client PC or other device.
> Do you have comparable figures for DDR5 DRAM and ~7 nm SRAM cells? Just curious.

Ever heard of DRAM or SRAM failing? CPU registers are made the same way as SRAM cells; they get read and written trillions of times per active hour, and we usually take it for granted that, apart from a material, manufacturing or other system failure killing the CPU first, CPUs typically outlast how long people can be bothered to run them to find out how long they really last. Mostly the same goes for DRAM.
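For scale (pure arithmetic, assuming a register file touched roughly every cycle at an assumed ~4 GHz clock), an hour of active use really is in the tens-of-trillions range:

```python
# Order-of-magnitude check on register/SRAM access counts per active hour.
clock_hz = 4e9            # assumed ~4 GHz core clock
print(f"{clock_hz * 3600:.1e} cycles per active hour")   # ~1.4e13
```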
> BTW, as memory layout randomization seems to be gaining favor, for security reasons, I think it will have the effect of leveling-out wear. Most DRAM content should be rather low turnover. Therefore, I'm a little skeptical that ~1T cycles is inadequate for a client PC or other device.

While the OS may be randomizing memory page allocations, once that allocation is done, stuff usually stays wherever it was physically mapped at allocation until it either gets evicted to swap or freed. The int64 that counts 1kHz clock ticks for the RTC is allocated at boot and likely stays there for as long as the system is turned on. That is ~31 billion writes per year on the LSB.

Your NIC's driver has stats, status and buffer blocks that get written to every time a packet comes in or out; that can be millions of writes per second if you are transferring large files at high speeds and, AFAIK, peripherals cannot directly DMA in/out of CPU caches to avoid transient memory writes. Last time I looked, my system was reporting 30k interrupts/s while mostly idle, which I suspect translates to thousands of memory writes/s to mostly static locations too.
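A quick sanity check on those numbers (no assumptions beyond the update rates already stated above):

```python
# Writes accumulated by one fixed physical memory location at a given update rate.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def writes_per_year(rate_hz: float) -> float:
    return rate_hz * SECONDS_PER_YEAR

print(f"1 kHz RTC tick counter: {writes_per_year(1_000):.2e} writes/year")    # ~3.2e10, i.e. ~31 billion
print(f"30k updates/s:          {writes_per_year(30_000):.2e} writes/year")   # ~9.5e11
```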
> Ever heard of DRAM or SRAM failing?

DRAM, for sure. I've had about a dozen DRAM DIMMs that once tested fine later develop bad cells. It's not hard to find other accounts of this happening.
> CPU registers are made the same way as SRAM cells, they get read and written trillions of times per active hour

Nothing says that registers need to be constructed at the same size as memory cells in L3 cache, for instance. I'd almost be surprised if they were exactly the same.
> Your NIC's driver has stats, status and buffer blocks that get written to every time a packet comes in or out, that can be millions of writes per second if you are transferring large files at high speeds and AFAIK, peripherals cannot directly DMA in/out of CPU caches to avoid transient memory writes.

This used to be true, but once memory controllers and PCIe switches both moved on-die, it became cheap enough for memory accesses by PCIe devices to snoop the CPU cache. Intel calls this Data Direct I/O, and it was first introduced during the Sandy Bridge generation.
> Last time I looked, my system was reporting 30k interrupts/s while mostly idle, which I suspect translates to thousands of memory writes/s to mostly static locations too.

While your system is idling, those probably won't tend to get past L3 cache.
> Burning through 1T writes to xRAM within a desktop's useful life isn't too far-fetched

Okay, maybe 1T is a bit on the low side.
> If imminently consumable memory becomes a thing, we'll need AMD, Intel and friends to implement system-managed "L4$" memory pools for the OS to use as a target for high-traffic OS/driver structures, DMA buffers, high-frequency and transient data.

A more likely scenario is that they expose the error counts from the on-die ECC in DDR5 (or later) DRAM chips to the OS. Once the error rate of a memory page crosses some threshold, the kernel can remap the page and exclude that address range from further use (which it can already do).
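A hypothetical sketch of the kind of retirement policy being described (the names and threshold are illustrative, not a real kernel API; actual kernels use mechanisms like page soft-offlining):

```python
# Hypothetical sketch: count corrected-ECC reports per physical page and
# retire pages that cross a threshold. Illustrative only.
from collections import Counter

ERROR_THRESHOLD = 16          # corrected errors tolerated before retiring a page
PAGE_SIZE = 4096

corrected_errors = Counter()  # page frame number -> corrected error count
retired_pages = set()

def on_corrected_ecc_error(phys_addr: int) -> None:
    pfn = phys_addr // PAGE_SIZE
    if pfn in retired_pages:
        return
    corrected_errors[pfn] += 1
    if corrected_errors[pfn] >= ERROR_THRESHOLD:
        retired_pages.add(pfn)
        # A real kernel would migrate the page's contents and mark the
        # frame unusable from that point on.
        print(f"Retiring page frame {pfn:#x} after {ERROR_THRESHOLD} corrected errors")
```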
> This used to be true, but once memory controllers and PCIe switches both moved on-die, it became cheap enough for memory accesses by PCIe devices to snoop the CPU cache. Intel calls this Data Direct I/O, and it was first introduced during the Sandy Bridge generation.

The article says DDIO is only available on Xeon E5 and E7 CPUs, though. Another page adds the Xeon Scalable (XSP) Gen1/2/3 lineup and the W-2200 and W-3200 families. Unless it has changed since then, it looks like there is still no DDIO on consumer CPUs and entry-level Xeons. Also, DDIO was responsible for the NetCAT security flaw exposed in 2019, which means no DDIO for you if your data needs privacy.
> My dream is that low-power MRAM endurance is high enough to at least be able to replace SRAM and DRAM in mobile phones and edge IoT devices in order that they would inherently be built on the « Normally-Off Computing » concept.

MRAM would have near-zero benefit in smartphones and similar devices, where the screen typically accounts for 50-90% of energy usage while the device is being actively used, and WiFi/BT/3/4/5G/GPS/etc. with all of the associated background processes and services ensure the 15-20Wh battery still gets drained in 2-4 days even when the screen stays off the whole time. My "newest" device that tracked screen power usage separately from everything else was my Nexus 7 tablet, and the screen typically accounted for 80-90% of overall battery drain. Eliminating all idle power draw/leakage would only buy you a few hours of battery life, nothing worth overhauling the whole software and hardware design paradigm for.

If you want a cellphone with long standby battery life, get the dumbest phone you can find for your preferred carrier. I could forget my N5190 in my backpack and it would be over a week until the low-battery chime reminded me to take it out, if nobody called me first.
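A rough back-of-envelope illustration of why killing idle memory power barely moves the needle (the ~10 mW DRAM self-refresh figure is an assumption for a phone-sized LPDDR package, not a measured value; the battery and drain numbers are the ones quoted above):

```python
# How much standby time does eliminating DRAM idle/refresh power actually buy?
battery_wh = 18.0            # within the quoted 15-20 Wh range
standby_days = 3.0           # within the quoted 2-4 day screen-off drain
avg_draw_mw = battery_wh / (standby_days * 24) * 1000   # ~250 mW average standby draw
dram_idle_mw = 10.0          # assumed LPDDR self-refresh draw (illustrative)

new_days = battery_wh / ((avg_draw_mw - dram_idle_mw) / 1000 * 24)
print(f"average standby draw: {avg_draw_mw:.0f} mW")
print(f"extra standby gained: {(new_days - standby_days) * 24:.1f} hours")   # ~3 hours
```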
> energy usage while the device is being actively used and WiFi/BT/3/4/5G/GPS/etc. with all of the associated background processes and services are ensuring the 15-20Wh battery still gets drained in 2-4 days even when the screen stays off the whole time.

Yeah, I'm dragging my feet on upgrading to 5G for as long as I can. I wish my current phone would turn off the cell modem while I'm on wifi, but at least I turn off GPS and Bluetooth when I'm not using them. I often use airplane mode while I'm sleeping or driving.
> you could store most of the software directly in the MRAM (instead of the NAND Flash storage), and get extremely responsive app launching (near instantaneous), which I am also strongly interested in.

Do the math. Any delay or sluggishness in app launching on modern phones probably isn't due to the speed of reading it from NAND. It's probably trying to communicate with cloud services or something. Depending on the app, it might be doing some non-trivial amount of computation at startup.

As I've said before: take the time to form a detailed understanding of the problem you're trying to solve before reaching for solutions. Even if you're starting with a solution, you still need to do the analysis to see whether a given problem is a good fit for it.
> You could stick such device in every room in your house, and use it as home automation interface, with integrated voice assistant to replace dumb switches.

A dumb switch is 100% efficient, nearly 100% reliable, 100% secure against cyber-attacks and works perfectly fine without network or online service access. A networked smart switch will need an AC-DC converter to power its standby electronics, and each of those will typically spend about 200mW just powering the control IC and feedback circuit to maintain a stable output. Slap one of those in every individual light bulb in your home and you can end up with hundreds of watts worth of phantom power draw that never needed to exist.

Saving maybe 1W (16 control panels x 60mW each) on the ever-so-slightly more power-efficient home automation panels' 4GB of DRAM throughout your home doesn't mean much when you are throwing away hundreds of watts out the windows everywhere else.
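To put numbers on the phantom-draw side of the argument (the 200 mW/device figure is the one quoted above; the device counts are arbitrary examples):

```python
# Always-on standby draw from smart switches/bulbs at ~200 mW each.
STANDBY_W = 0.2          # ~200 mW per AC-DC converter, as quoted above
HOURS_PER_YEAR = 8766

for devices in (10, 50, 100):
    total_w = devices * STANDBY_W
    kwh_per_year = total_w * HOURS_PER_YEAR / 1000
    print(f"{devices:3d} devices: {total_w:5.1f} W continuous, ~{kwh_per_year:.0f} kWh/year")
```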
> Also if the MRAM memory/storage is large enough (at least 64GB), I guess you could store most of the software directly in the MRAM (instead of the NAND Flash storage), and get extremely responsive app launching (near instantaneous), which I am also strongly interested in.

Yes, let's replace DRAM, which costs under $2/GB, and NAND, which costs ~$0.05/GB, with MRAM, which currently costs somewhere in the neighborhood of $5000/GB extrapolating from 16 Mbit chip prices.

As bit_user mentioned, when you can read from NAND at the speeds we have today, the main obstacle to load times is software being written to load most of its stuff sequentially. On PCs, there is virtually no benefit in most software to loading from NVMe 4.0 x4 vs a SATA SSD, despite 4.0 x4 being about 12X faster on bandwidth and 50X faster on latency.
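For reference, here is the extrapolation the $5000/GB ballpark implies (the ~$10 per 16 Mbit chip price is an assumption for illustration; actual distributor pricing varies):

```python
# Extrapolating MRAM cost per GB from small serial MRAM chip prices.
chip_capacity_mbit = 16
chip_price_usd = 10.0        # assumed street price per 16 Mbit chip (illustrative)

chips_per_gb = (8 * 1024) / chip_capacity_mbit   # 1 GB = 8192 Mbit -> 512 chips
price_per_gb = chips_per_gb * chip_price_usd
print(f"{chips_per_gb:.0f} chips/GB -> ~${price_per_gb:,.0f}/GB")   # ~$5,120/GB
```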
> If you are living in the US, I would think you already have plenty of « smart » devices throughout your home (voice enabled speakers, smart alarm clock, smart thermostats,…) that replaced or come in addition to your previous « dumb » version of those belongings because they provide more comfort. And all those devices are already sipping phantom power as you said.

You must have missed the part where I stated that I strongly prefer dumb devices that don't need any sort of network access, are impervious to cyber-attacks and cannot spy on me. I stick to dumb devices wherever I can.
> So, as possible, it would technically be better that those devices don't sip power when they are idle…

The sort of micro-controllers that get stuffed into simple IoT devices like smart LED bulbs already have stupidly low power modes in the 0.001-0.05mW range. A fully-powered 16MHz ATU256, which is completely overkill for most IoT stuff, draws 1mW while idle, 0.01mW if you pause the core clock and use an external Vcore supply instead of the internal linear regulator.

If you want to control these things over RF though, you are looking at 15+mW for BT/WiFi/Zigbee receivers. And when those things are integrated into smart LED bulbs or similar AC-powered devices, you also have the ~200mW from the AC-DC converter running itself. Power used by the mCU's DRAM/SRAM is of no material consequence there; that is less than 0.01mW out of 200+mW, 1/200,000th, 0.005%, a fart in the tornado.

The sort of low-power micro-controllers that go into battery-powered temperature sensors have standby power in the micro-watt range, which would let them do nothing for 10-15 years on a single LR44 cell. Most of the energy drain in those things comes from the RF transmitter's 50-250mW, depending on how much range you want. If you want the battery to last longer, reduce the reporting rate and range before bothering with anything on the micro-controller side, beside picking an appropriate micro-controller for the job and programming it correctly.
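A quick sketch of that battery budget (the LR44 capacity, transmit power and burst length are illustrative assumptions; the micro-watt standby figure is the one quoted above), showing how the reporting rate, not the micro-controller, dominates battery life:

```python
# Battery-life estimate for a coin-cell IoT sensor: tiny standby draw plus
# periodic RF transmit bursts. Capacity, TX power and burst length are assumed.
BATTERY_J = 0.15 * 1.5 * 3600      # LR44: ~150 mAh at 1.5 V -> ~810 J
STANDBY_W = 2e-6                   # ~2 uW micro-controller standby
TX_W = 0.1                         # 100 mW radio while transmitting (assumed)
TX_BURST_S = 0.010                 # 10 ms per report (assumed)

def battery_years(reports_per_hour: float) -> float:
    avg_w = STANDBY_W + TX_W * TX_BURST_S * reports_per_hour / 3600
    return BATTERY_J / avg_w / (3600 * 24 * 365.25)

print(f"standby only:    {battery_years(0):.1f} years")    # ~12.8 years
print(f"1 report/10 min: {battery_years(6):.1f} years")    # ~7 years
print(f"1 report/minute: {battery_years(60):.1f} years")   # ~1.4 years
```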
> your mindset is set up by the status quo there is nowadays: you are accustomed to the fact that memory is volatile, and you don't put that into question: « it has always been like this, why do you want to change it? » I want to change it because it should NEVER have been like this to begin with.

Well, either show where the status quo stops scaling as well as your proposed solution, or make a numerical argument for why/how your proposed solution would be better.
> For a system that computes infrequently, like an IoT sensor triggered by an event, it will necessarily help increase battery life further.

As I wrote earlier, a micro-controller sitting there doing nothing can last about 15 years on an LR44 cell if you design the circuit and program it accordingly. The battery will die of old age first if a micro-controller waiting for external input that never comes is the only load.