News: Samsung Paves The Way to 1TB Memory Sticks with 32Gb DDR5 ICs

The smallest 16 Gbit DDR5 die I remember reading about was 88 mm², which means this 32 Gbit chip must be around 160 mm². These DRAMs are going to be a bit pricey from having fewer than half as many viable chips per wafer.
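For anyone curious, a rough sketch of that dies-per-wafer argument in Python (the die areas are the figures above; the wafer size and yield model are generic assumptions, not Samsung's numbers):

import math

WAFER_DIAMETER_MM = 300  # standard DRAM wafer size (assumption)

def gross_dies(die_area_mm2):
    # Classic dies-per-wafer approximation: wafer area / die area, minus an edge-loss term.
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

print(gross_dies(88))   # ~730 gross dies for a ~16 Gbit die
print(gross_dies(160))  # ~390 gross dies for a ~32 Gbit die
# Defect yield also falls with area (e.g. Poisson model: yield = exp(-defect_density * area)),
# so the count of *viable* dies drops by more than the ~2x implied by area alone.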
 
I wish these were low-power 32 Gb non-volatile VG-SOT-MRAM ICs, so we could get low-power 32 GB non-volatile memory in a package.
According to your link, even the most advanced MRAM variant still has a write endurance of only a trillion cycles. In a PC-style environment, or anything with comparably regular use, you might burn through that endurance within weeks if it were used as DRAM, and possibly within hours if used as SRAM.
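The rough arithmetic behind the weeks/hours figures, with the write rates being assumptions rather than measurements:

ENDURANCE_CYCLES = 1e12  # per the linked MRAM figure

dram_hot_location_writes_per_s = 1e6   # assumption: a heavily rewritten DRAM location
sram_hot_line_writes_per_s = 1e8       # assumption: a hot cache line or similar SRAM cell

print(ENDURANCE_CYCLES / dram_hot_location_writes_per_s / 86400)  # ~11.6 days
print(ENDURANCE_CYCLES / sram_hot_line_writes_per_s / 3600)       # ~2.8 hours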

I wouldn't want that in any of my devices until endurance is up by at least another factor of 1,000.
 
Samsung can now build a 128 GB DDR5 RDIMM with ECC using 36 single-die 32 Gb DRAM chips
Nope. Starting with DDR5, ECC DIMMs require 25% more chips, because each 32-bit subchannel gets its own 8 ECC bits (an 80-bit module instead of 72-bit). So, the number would be 40 chips, whereas a non-ECC DIMM of the same capacity would have only 32.

However, the 12.5% overhead figure does apply to DDR4 and older DIMM standards.
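The chip-count arithmetic, assuming x4 DRAM chips and a dual-rank module:

def chips_per_rank(data_bits, ecc_bits, chip_width=4):
    return (data_bits + ecc_bits) // chip_width

# DDR5: two 32-bit subchannels, each with 8 ECC bits -> 80-bit module, 25% overhead
print(chips_per_rank(64, 16))  # 20 chips per rank, 40 for a dual-rank 128 GB RDIMM
# DDR4: one 64-bit channel with 8 ECC bits -> 72-bit module, 12.5% overhead
print(chips_per_rank(64, 8))   # 18 chips per rank, 36 for a dual-rank module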
 
According to your link, even the most advanced MRAM variant still has a write endurance of only a trillion cycles.
Do you have comparable figures for DDR5 DRAM and ~7 nm SRAM cells? Just curious.

BTW, as memory layout randomization seems to be gaining favor, for security reasons, I think it will have the effect of leveling-out wear. Most DRAM content should be rather low turnover. Therefore, I'm a little skeptical that ~1T cycles is inadequate for a client PC or other device.
 
Do you have comparable figures for DDR5 DRAM and ~7 nm SRAM cells? Just curious.
Ever heard of DRAM or SRAM failing? CPU registers are made the same way as SRAM cells; they get read and written trillions of times per active hour, and we usually take for granted that, barring material, manufacturing, or something else in the system failing and killing the CPU first, CPUs typically outlast how long people can be bothered to run them to find out how long they can really last. Mostly the same goes for DRAM.

Basically, cycle endurance does not appear to be a thing for DRAM and SRAM. At least not yet.

Papers do predict that N-channel performance degradation from ions getting trapped in the gate oxide over time will become a problem eventually.

BTW, as memory layout randomization seems to be gaining favor, for security reasons, I think it will have the effect of leveling-out wear. Most DRAM content should be rather low turnover. Therefore, I'm a little skeptical that ~1T cycles is inadequate for a client PC or other device.
While the OS may be randomizing memory page allocations, once that allocation is done, stuff usually stays wherever it was physically mapped at allocation until it either gets evicted to swap or freed. The int64 that is counting 1kHz clock ticks for the RTC is allocated at boot and likely stays there for as long as the system is turned on. That is ~31 billion writes per year on the LSB. Your NIC's driver has stats, status, and buffer blocks that get written to every time a packet comes in or out; that can be millions of writes per second if you are transferring large files at high speeds, and AFAIK, peripherals cannot directly DMA in/out of CPU caches to avoid transient memory writes. Last time I looked, my system was reporting 30k interrupts/s while mostly idle, which I suspect translates to thousands of memory writes/s to mostly static locations too.
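A quick order-of-magnitude check on those write rates (the packet figures are assumptions, not measurements):

SECONDS_PER_YEAR = 365 * 24 * 3600

tick_rate_hz = 1000                       # the 1 kHz tick counter
print(tick_rate_hz * SECONDS_PER_YEAR)    # 31_536_000_000 -> ~31 billion writes/year to one int64

packets_per_s = 800_000                   # assumption: ~10 Gbit/s of ~1500-byte frames
writes_per_packet = 3                     # assumption: descriptor + buffer + stats updates
print(packets_per_s * writes_per_packet)  # ~2.4 million writes/s to mostly fixed locations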

Burning through 1T writes to xRAM within a desktop's useful life isn't too far-fetched if it is used for some consistently intensive stuff that doesn't cache particularly well.

If imminently consumable memory becomes a thing, we'll need AMD, Intel and friends to implement system-managed "L4$" memory pools for the OS to use as a target for high-traffic OS/driver structures, DMA buffers, high-frequency and transient data.

2T0C IGZO is another potentially interesting DRAM technology that takes the refresh interval from 16ms to 100-1000s, practically eliminating self-refresh and could easily scale in a multi-layer manner. One major drawback (at least for now) is that the material can fail in as little as 120 days when exposed to oxygen or moisture.
 
Ever heard of DRAM or SRAM failing?
DRAM, for sure. I've had about a dozen DRAM DIMMs that once tested fine later develop bad cells. It's not hard to find other accounts of this happening.

As far as SRAM, I'm not even sure how you'd know. In CPUs with ECC-protected caches, I guess you might get a machine check exception and maybe you happen to check the system logs and see it mentioned there. In general, I'd expect CPUs to be built with more generous margins for SRAM than your typical DRAM chip, because getting it wrong is a lot more costly to them.

CPU registers are made the same way as SRAM cells; they get read and written trillions of times per active hour
Nothing says that registers need to be constructed at the same size as memory cells in L3 cache, for instance. I'd almost be surprised if they were exactly the same.

Anyway, I just asked if you had stats. It'd suffice to simply say "no", or just not respond to this point.

Your NIC's driver has stats, status, and buffer blocks that get written to every time a packet comes in or out; that can be millions of writes per second if you are transferring large files at high speeds, and AFAIK, peripherals cannot directly DMA in/out of CPU caches to avoid transient memory writes.
This used to be true, but once memory controllers and PCIe controllers both moved on-die, it became cheap enough for memory accesses by PCIe devices to snoop the CPU cache. Intel calls this Data Direct I/O, and it was first introduced during the Sandy Bridge generation.


Last time I looked, my system was reporting 30k interrupts/s while mostly idle, which I suspect translates to thousands of memory writes/s to mostly static locations too.
While your system is idling, those probably won't tend to get past L3 cache.

Burning through 1T writes to xRAM within a desktop's useful life isn't too far-fetched
Okay, maybe 1T is a bit on the low side.

If imminently consumable memory becomes a thing, we'll need AMD, Intel and friends to implement system-managed "L4$" memory pools for the OS to use as a target for high-traffic OS/driver structures, DMA buffers, high-frequency and transient data.
A more likely scenario is that they expose the on-die ECC errors of DDR5 (or later) DRAM chips to the OS. Once the error rate of a memory page crosses some threshold, the kernel can remap the page and exclude that address range from further use (which it can already do).
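As a purely hypothetical sketch of that policy (the names and threshold below are made up for illustration, not an existing kernel interface):

RETIRE_THRESHOLD = 16   # hypothetical: corrected-error reports before a page is pulled from service

error_counts = {}       # physical page frame number -> corrected-error count
retired_pages = set()

def report_corrected_error(pfn):
    # Called whenever the DRAM chip reports an on-die ECC correction for an address in this page.
    error_counts[pfn] = error_counts.get(pfn, 0) + 1
    if error_counts[pfn] >= RETIRE_THRESHOLD and pfn not in retired_pages:
        retired_pages.add(pfn)
        # A real kernel would migrate the page's contents and mark the frame unusable,
        # similar in spirit to existing soft-offlining of failing pages.
        print(f"retiring page frame {pfn:#x}")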
 
This used to be true, but once memory controllers and PCIe controllers both moved on-die, it became cheap enough for memory accesses by PCIe devices to snoop the CPU cache. Intel calls this Data Direct I/O, and it was first introduced during the Sandy Bridge generation.
The article says DDIO is only available on Xeon E5 and E7 CPUs, though. Another page adds the 1st/2nd/3rd-gen Xeon Scalable lineup and the W-2200 and W-3200 families. Unless it has changed since then, it looks like there is still no DDIO on consumer CPUs and entry-level Xeons. Also, DDIO was responsible for the NetCAT security flaw exposed in 2019, which means no DDIO for you if your data needs privacy.
 
According to your link, even the most advanced MRAM variant still has a write endurance of only a trillion cycles. In a PC-style environment, or anything with comparably regular use, you might burn through that endurance within weeks if it were used as DRAM, and possibly within hours if used as SRAM.

I wouldn't want that in any of my devices until endurance is up by at least another factor of 1,000.

It seems that European research center IMEC is first targeting VG-SOT-MRAM as an SRAM cache replacement (levels 1, 2 and 3).

Although 10^12-cycle endurance is, I think, lower than current-generation SRAM/DRAM cycle endurance, it may be good enough for many (IoT) applications.

And I would think that, with further R&D, better endurance may be achievable over time.

So yes, it may not yet be a good fit for all applications, but it could certainly already be used for workloads that are not too read/write intensive.

For example, NXP has announced that in 2025 they intend to use TSMC's automotive 16nm FinFET process with MRAM, and I think that offers only around 10^6 read/write cycles of endurance…
 
Do you have comparable figures for DDR5 DRAM and ~7 nm SRAM cells? Just curious.

BTW, as memory layout randomization seems to be gaining favor, for security reasons, I think it will have the effect of leveling-out wear. Most DRAM content should be rather low turnover. Therefore, I'm a little skeptical that ~1T cycles is inadequate for a client PC or other device.

The feedback from InvalidError may be valid, but to get a better idea, I agree that it would be better to have some "hard" statistical numbers from research papers, related to actual usage, to give some context.

I have no idea, but I would think that 10^12 (1 trillion) read/write cycles could be enough for edge IoT devices/sensors, as long as they are not triggered too intensively.

But I have no idea for mobile devices (smartphones), desktops and servers…

My dream is that low-power MRAM endurance gets high enough to at least replace SRAM and DRAM in mobile phones and edge IoT devices, so that they could inherently be built around the "Normally-Off Computing" concept.

It would provide the opportunity to re-architect compute in those devices differently.

Desktops and servers are not a priority for me…
 
My dream is that low-power MRAM endurance gets high enough to at least replace SRAM and DRAM in mobile phones and edge IoT devices, so that they could inherently be built around the "Normally-Off Computing" concept.
MRAM would have near-zero benefit in smartphones and similar devices where the screen typically accounts for 50-90% of energy usage while the device is being actively used and WiFi/BT/3/4/5G/GPS/etc. with all of the associated background processes and services are ensuring the 15-20Wh battery still gets drained in 2-4 days even when the screen stays off the whole time. My "newest" device that tracked screen power usage separately from everything else was my Nexus 7 tablet and the screen typically accounted for 80-90% of overall battery drain. Eliminating all idle power draw/leakage would only buy you a few hours of battery life, nothing worth overhauling the whole software and hardware design paradigm for.
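Rough numbers behind the "only a few hours" point (all of these are assumptions in the middle of the ranges above, not measurements):

battery_wh = 17.5            # middle of the 15-20 Wh range
standby_days = 3             # middle of the 2-4 day range
avg_standby_w = battery_wh / (standby_days * 24)
print(avg_standby_w)         # ~0.24 W average drain with the screen off

dram_idle_w = 0.02           # assumption: self-refresh/standby power of a few GB of LPDDR
extra_hours = battery_wh / (avg_standby_w - dram_idle_w) - standby_days * 24
print(extra_hours)           # ~6-7 extra hours out of ~72, i.e. not a game-changer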

If you want a cellphone with long standby battery life, get the dumbest phone you can find for your preferred carrier. I could forget my N5190 in my backpack and it would be over a week until the low-battery chime reminded me to take it out if nobody called me first.
 
energy usage while the device is being actively used and WiFi/BT/3/4/5G/GPS/etc. with all of the associated background processes and services are ensuring the 15-20Wh battery still gets drained in 2-4 days even when the screen stays off the whole time.
Yeah, I'm dragging my feet on upgrading to 5G, for as long as I can. I wish my current phone would turn off the cell modem, while I'm on wifi, but at least I turn off GPS and Bluetooth, when I'm not using them. I often use airplane mode, while I'm sleeping or driving.

I also leave it in the middle power-saving mode and keep the screen brightness set as low as possible (which is also more comfortable to look at than an overly bright screen). Occasionally, I need to temporarily boost the brightness, which I do by pointing its back-facing camera at the brightest object in the room (i.e. window, ceiling light, computer monitor, etc.). I think the adaptive brightness feature is called "comfort view".

What bugs me the most is that apps are targeting newer devices, and that means doing more stuff in the background and perhaps being optimized less. The biggest battery drain of the apps I use is Amazon Music, which seems to be getting terribly bloated and yet seems to offer no noticeable benefits over the version I used 3 years ago, at the start of lockdown.

I never watch videos on my phone. The only time I browse the web is in the rare event that I need to look something up while I'm away from home or the office.

Consequently, the phone is on year 5 and battery #2. I toyed with the idea of taking advantage of discounts and replacing it earlier this year, but I decided to wait until its battery gets markedly worse.

BTW, as far as old phones go, I literally kept two phones until my cell network dropped support for the RF standard they used. In fact, I've only upgraded phones 1 time when I didn't absolutely have to, but that phone (my first smartphone) was starting to get flaky, occasionally refusing to unlock the screen until I rebooted it, which took about 2 minutes.
 
MRAM would have near-zero benefit in smartphones and similar devices where the screen typically accounts for 50-90% of energy usage while the device is being actively used and WiFi/BT/3/4/5G/GPS/etc. with all of the associated background processes and services are ensuring the 15-20Wh battery still gets drained in 2-4 days even when the screen stays off the whole time. My "newest" device that tracked screen power usage separately from everything else was my Nexus 7 tablet and the screen typically accounted for 80-90% of overall battery drain. Eliminating all idle power draw/leakage would only buy you a few hours of battery life, nothing worth overhauling the whole software and hardware design paradigm for.

If you want a cellphone with long standby battery life, get the dumbest phone you can find for your preferred carrier. I could forget my N5190 in my backpack and it would be over a week until the low-battery chime reminded me to take it out if nobody called me first.

For edge IoT devices, I would think that not consuming any energy while idle could help extend battery life.

For mobile phones, I would think that if hardware and software were designed primarily around the advantages of non-volatile MRAM, some interesting new innovations could follow: for example, you could make more use of an "always-on" mode on a small fraction of the display (as Samsung does for showing the date, time, etc. while idle), or build a new type of phone around some kind of bi-stable color E-ink display, like E Ink Spectra 6, to further reduce power consumption.

Also if the MRAM memory/storage is large enough (at least 64GB), I guess you could store most of the software directly in the MRAM (instead of the NAND Flash storage), and get extremely responsive app launching (near instantaneous), which I am also strongly interested in.

I could definitely imagine getting a low-power, mobile-sized device with a faster-refresh bi-stable color E Ink Spectra 6 display, at least 64GB of bi-stable non-volatile MRAM serving as both memory and storage, and an artificial neural network (ANN) stored in VG-SOT-MRAM.

You could stick such a device in every room of your house and use it as a home-automation interface with an integrated voice assistant, replacing dumb switches.
 
you could store most of the software directly in the MRAM (instead of the NAND Flash storage), and get extremely responsive app launching (near instantaneous), which I am also strongly interested in.
Do the math. Any delay or sluggishness in app launching, on modern phones, probably isn't due to the speed of reading it from NAND. It's probably trying to communicate with cloud services or something. Depending on the app, it might be doing some non-trivial amount of computation, at startup.

As I've said before: take the time to form a detailed understanding of the problem you're trying to solve, before reaching for solutions. Even if you're starting with a solution, you still need to do the analysis to see whether a given problem is a good fit for it.
 
You could stick such a device in every room of your house and use it as a home-automation interface with an integrated voice assistant, replacing dumb switches.
A dumb switch is 100% efficient, nearly 100% reliable, 100% secure against cyber-attacks and works perfectly fine without network or online service access. A networked smart-switch will need an AC-DC converter to power its standby electronics and each of those will typically spend about 200mW just powering the control IC and feedback circuit to maintain a stable output. Slap one of those in every individual light bulb in your home and you can end up with hundreds of watts worth of phantom power draw that never needed to exist.

Saving maybe 1W (16 control panels x 60mW each) on the ever-so-slightly more power-efficient home automation panels' 4GB of DRAM throughout your home on one hand doesn't mean much when you are throwing away hundreds of watts out the windows everywhere else.
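The comparison in numbers (the device count and per-device standby figure are assumptions for illustration):

panels = 16
saving_per_panel_w = 0.060          # the 60 mW per panel figure above
print(panels * saving_per_panel_w)  # ~0.96 W saved across the whole house

always_on_devices = 50              # assumption: smart bulbs/switches/speakers with AC-DC converters
standby_per_device_w = 0.2          # ~200 mW each just to keep the converter + radio alive
print(always_on_devices * standby_per_device_w)  # ~10 W of phantom draw, dwarfing the ~1 W saved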

Also if the MRAM memory/storage is large enough (at least 64GB), I guess you could store most of the software directly in the MRAM (instead of the NAND Flash storage), and get extremely responsive app launching (near instantaneous), which I am also strongly interested in.
Yes, let's replace DRAM, which costs under $2/GB, and NAND, which costs ~$0.05/GB, with MRAM, which currently costs somewhere in the neighborhood of $5000/GB, extrapolating from 16 Mbit chip prices.
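The extrapolation, spelled out (the per-chip price is an assumed ballpark, not a quote):

chip_bits = 16e6           # a 16 Mbit MRAM part
chip_price_usd = 10.0      # assumption: rough unit price for such a part
gb_per_chip = chip_bits / 8 / 1e9
print(chip_price_usd / gb_per_chip)  # ~$5,000/GB, vs <$2/GB for DRAM and ~$0.05/GB for NAND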

As bit_user mentioned, when you can read from NAND at the speeds we have today, the main obstacle to load times is software being written to load most of its stuff sequentially. On PCs, there is virtually no benefit in most software to loading from NVMe 4.0x4 vs SATA SSD despite 4.0x4 being about 12X faster on bandwidth and 50X faster on latency.
 
Do the math. Any delay or sluggishness in app launching, on modern phones, probably isn't due to the speed of reading it from NAND. It's probably trying to communicate with cloud services or something. Depending on the app, it might be doing some non-trivial amount of computation, at startup.

As I've said before: take the time to form a detailed understanding of the problem you're trying to solve, before reaching for solutions. Even if you're starting with a solution, you still need to do the analysis to see whether a given problem is a good fit for it.

I had "games" in mind when talking about app launching; they usually have a bit of latency when starting. But you are likely correct for apps that need to retrieve data from the internet.

But again, your mindset is shaped by today's status quo: you are accustomed to the fact that memory is volatile and you don't question it: 'it has always been like this, so why would you want to change it?'

I want to change it because it should NEVER have been like this to begin with: memory should be non-volatile, and I think in the 1960s it indeed wasn't (with something like magnetic core memory).

I understand where you are coming from, but please try to think about it the other way around: if you lived in a world where memory was non-volatile (some sort of MRAM being the default RAM you were accustomed to), would you want to switch to volatile DRAM/SRAM?

You would argue, "Why would you want to switch to a volatile memory? It doesn't make sense; you would lose your data and have to keep constant power to refresh the memory just to retain it."

In the course of history, it just happened that volatile DRAM/SRAM was invented before non-volatile MRAM, and that is the reason why we have volatile memory nowadays.

Now, one of the challenges is obviously that, since DRAM/SRAM benefit from decades of investment in manufacturing and R&D, they are much, much more cost-optimized than emerging non-volatile memories (MRAM).

MRAM is much, much more expensive, so obviously, from an economic point of view, it doesn't yet make sense as DRAM for consumer devices.

Another challenge is read/write endurance, which has to be high enough (VG-SOT-MRAM is in the range of 10^12 cycles, which may already be good enough for some use cases, but more may be needed to cover all of them).

I would think we are heading toward a world with more and more sensors and computing devices surrounding each individual (thousands per person), and for that to be doable and energy-sustainable, it will likely require non-volatile memory, so that you don't have to change thousands of coin cells every year…

I also think IMEC is investigating/envisioning non-volatile memory (MRAM) for those use cases…
 
A dumb switch is 100% efficient, nearly 100% reliable, 100% secure against cyber-attacks and works perfectly fine without network or online service access. A networked smart-switch will need an AC-DC converter to power its standby electronics and each of those will typically spend about 200mW just powering the control IC and feedback circuit to maintain a stable output. Slap one of those in every individual light bulb in your home and you can end up with hundreds of watts worth of phantom power draw that never needed to exist.

Saving maybe 1W (16 control panels x 60mW each) on the ever-so-slightly more power-efficient home automation panels' 4GB of DRAM throughout your home on one hand doesn't mean much when you are throwing away hundreds of watts out the windows everywhere else.

I would think that a low-power, mobile-sized device with a bi-stable color E-ink display and bi-stable non-volatile memory for home automation would come in addition to the "dumb switch" in most rooms.

If you are living in the US, I would think you already have plenty of "smart" devices throughout your home (voice-enabled speakers, smart alarm clocks, smart thermostats, …) that replaced, or came in addition to, the previous "dumb" versions of those belongings because they provide more comfort. And all those devices are already sipping phantom power, as you said.

So in my mind it is not really a question: a low-power, reprogrammable, phone-sized device in every room for home automation (Matter) has the potential to improve day-to-day comfort (especially for people with disabilities), and therefore, given time, it is something that will happen.

If such a device had 64GB of low-power non-volatile MRAM storage, it would unify memory and storage, which could avoid a lot of data shuffling, and so would conceptually still be better than using volatile memory.

Yes, let's replace DRAM, which costs under $2/GB, and NAND, which costs ~$0.05/GB, with MRAM, which currently costs somewhere in the neighborhood of $5000/GB, extrapolating from 16 Mbit chip prices.

I 100% agree that, as of 2023, it wouldn't make economic sense for consumer devices to replace volatile DRAM with non-volatile MRAM, because it is much, much more expensive.

But cost is partly a high-volume manufacturing (HVM, i.e. scale) issue and, if needed, incentives could help.

20, 30, 40 years ago, solar panels and battery cells cost much, much more than the alternatives, but with HVM (and incentives), their cost has come down tremendously, to the point that they are now beginning to be cost-competitive.

As bit_user mentioned, when you can read from NAND at the speeds we have today, the main obstacle to load times is software being written to load most of its stuff sequentially. On PCs, there is virtually no benefit in most software to loading from NVMe 4.0x4 vs SATA SSD despite 4.0x4 being about 12X faster on bandwidth and 50X faster on latency.

So I have no doubt in my mind that non-volatile memory like MRAM would technically (I insist on technically) be preferable, but yes, as of 2023, there are challenges like cost and endurance.

I would expect that, depending on whether there are incentives or not, it could take at least 10 to 30+ years to feel the shift from volatile to non-volatile: it won't happen overnight, but a transition to spintronics-related technologies is clearly under way, first for storage and, given time, also for compute (Intel's MESO concept), a bit like the ongoing shift from LCD to OLED that is spreading into more and more categories of devices.

Spintronics-related technologies will likely first find niches where they bring advantages (it seems that non-volatile VG-SOT-MRAM cache is smaller than volatile SRAM at the same capacity), which will help drive some HVM, which should help reduce cost, which should open up new market opportunities.

So it is a journey, but my belief is that we are already at the start of that path…
 
If you are living in the US, I would think you already have plenty of "smart" devices throughout your home (voice-enabled speakers, smart alarm clocks, smart thermostats, …) that replaced, or came in addition to, the previous "dumb" versions of those belongings because they provide more comfort. And all those devices are already sipping phantom power, as you said.
You must have missed the part where I stated that I strongly prefer dumb devices that don't need any sort of network access, are impervious to cyber-attacks and cannot spy on me. I stick to dumb devices wherever I can.
 
Alright. Many people don't, and more and more people are buying smart devices…

It is perfectly conceivable that in 10 or 20 years, or more, there will be hundreds and then thousands of "smart" devices per individual in rich countries…

So, as far as possible, it would technically be better if those devices didn't sip power while idle…
 
So, as far as possible, it would technically be better if those devices didn't sip power while idle…
The sort of micro-controllers that get stuffed into simple IoT devices like smart LED bulbs already have stupidly low power modes in the 0.001-0.05mW range. A fully-powered 16MHz ATU256, which is completely overkill for most IoT stuff, draws 1mW while idle, and 0.01mW if you pause the core clock and use an external Vcore supply instead of the internal linear regulator.

If you want to control these things over RF though, you are looking at 15+mW for BT/WiFi/Zigbee receivers. And when those things are integrated into smart LED bulbs or similar AC-powered devices, you also have the ~200mW from the AC-DC converter running itself. Power used by the MCU's DRAM/SRAM is of no material consequence there; that is less than 0.01mW out of 200+mW, 1/20,000th, 0.005%, a fart in the tornado.
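The fraction, spelled out with the figures quoted above:

sram_idle_mw = 0.01    # the paused-clock MCU draw quoted above
converter_mw = 200     # AC-DC converter overhead in an AC-powered smart device
print(sram_idle_mw / converter_mw)  # 5e-05 -> 0.005%, i.e. 1/20,000th of the standby budget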

The sort of low-power micro-controllers that go into battery-powered temperature sensors have standby power in the micro-watt range, which would let them do nothing for 10-15 years on a single LR44 cell. Most of the energy drain in those things comes from the RF transmitter's 50-250mW, depending on how much range you want. If you want the battery to last longer, reduce the reporting rate and range before bothering with anything on the micro-controller side, besides picking an appropriate micro-controller for the job and programming it correctly.
 
your mindset is shaped by today's status quo: you are accustomed to the fact that memory is volatile and you don't question it: 'it has always been like this, so why would you want to change it?'

I want to change it because it should NEVER have been like this to begin with
Well, either show where the status quo stops scaling as well as your proposed solution, or make a numerical argument for why/how your proposed solution would be better.

No matter what, you can't avoid doing the math. It always comes down to the hard numbers. Then, they're either on your side or they're not. If they are, then the discussion can move on to one of cost, practicality, and the mechanics of transitioning. If they're not, we can all save a lot of time and end the discussion right there.

If you try to go anywhere with this idea, that's exactly what you're going to run up against. So, my advice would be to stop wasting time on long posts, and start doing more homework on the technologies involved, to achieve a comfortable understanding of the intricate details and be able to make credible numerical arguments.
 
Assuming the read/write (R/W) energy of non-volatile MRAM is lower than, equal to, or only very slightly higher than that of volatile SRAM and DRAM, then, since MRAM doesn't consume any energy while idle, it will necessarily save at least a little energy/power at idle, because DRAM consumes energy for self-refresh and SRAM leaks while idle.

This also assumes you keep the same hardware design and software behavior, which I think wouldn't be the case with NVM: given (a long) time, you would likely redesign software/hardware around the assumption of NVM to further save energy.

In particular, according to the video below at 5:00, shuffling software instructions and/or data from storage to memory seems to consume at least 1000x more energy than the compute task itself.

Therefore, if you could unify memory and storage into a single fast, low-power NVM (preferably low-power MRAM), you would not need to shuffle data/software from a slow, power-hungry storage location (NAND flash), or at least would need to less, saving a lot of energy in the process.

I would think you might design your system around the memory and spread many more non-volatile memory blocks near the compute, to lower the probability of having to shuffle data/software from one memory block/location to another, thereby further reducing power consumption.


For a system that computes infrequently, like an IoT sensor triggered by an event, it will necessarily help increase battery life further. But yes, the longer the time between events (weeks, months, years, …), the higher the energy-saving benefit (for shorter timeframes, the benefit may indeed be very minimal, BUT still there), since yes, currently, RF does consume a lot more energy (though spintronics-related technologies could help reduce RF power consumption as well).

I think one of the challenges European research center IMEC is working on is decreasing active (R/W) power consumption, and there VCMA-MRAM seems to have a clear advantage for IoT devices.


Regarding hard numbers on power consumption, I have some difficulty finding figures at the same node for an apples-to-apples comparison (ideally 14nm LPDDR5 DRAM versus 14nm VG-SOT-MRAM or 14nm VCMA-MRAM). You can find numbers, but they are not at the same node, so it is difficult to compare read energy per bit and write energy per bit across different nodes…

Furthermore, regarding SRAM, it seems that VG-SOT-MRAM blocks are smaller than SRAM blocks at the same capacity, so you could also increase cache capacity by using VG-SOT-MRAM instead of SRAM.

Then yes, as of 2023, cost is a huge challenge, but cost was also a challenge for solar panels and battery cells 10, 20, 30 years ago…
 
The sort of micro-controllers that get stuffed into simple IoT devices like smart LED bulbs already have stupidly low power modes in the 0.001-0.05mW range. A fully-powered 16MHz ATU256, which is completely overkill for most IoT stuff, draws 1mW while idle, and 0.01mW if you pause the core clock and use an external Vcore supply instead of the internal linear regulator.

If you want to control these things over RF though, you are looking at 15+mW for BT/WiFi/Zigbee receivers. And when those things are integrated into smart LED bulbs or similar AC-powered devices, you also have the ~200mW from the AC-DC converter running itself. Power used by the MCU's DRAM/SRAM is of no material consequence there; that is less than 0.01mW out of 200+mW, 1/20,000th, 0.005%, a fart in the tornado.

The sort of low-power micro-controllers that go into battery-powered temperature sensors have standby power in the micro-watt range, which would let them do nothing for 10-15 years on a single LR44 cell. Most of the energy drain in those things comes from the RF transmitter's 50-250mW, depending on how much range you want. If you want the battery to last longer, reduce the reporting rate and range before bothering with anything on the micro-controller side, besides picking an appropriate micro-controller for the job and programming it correctly.

Thanks for that information.

The thing is that I want highly reprogrammable/updatable IoT devices: I would prefer that they have more silicon (transistors/NVM) than strictly needed at first, so that as new usage ideas pop up, they can access more resources (allocate more compute/memory) and avoid the need to replace the device too often…

For example: say your IoT device has 1 billion transistors able to run at up to 2GHz, and 64GB of MRAM as unified non-volatile memory (NVM)/storage.

Say it is initially compatible with Google Assistant. It only needs 1 million transistors at 10MHz and 8MB of memory for that (I have no idea, it's just an example), and only that is used.

Then, 2 years later, you want to update it to make it compatible with Apple HomeKit or Matter.

You don't have to worry about whether you will have enough transistors/NVM to do so, as it was overprovisioned in the first place.

You just need to update your IoT device to allocate more of the available transistors and memory to your new tasks.
 
For a system that computes infrequently, like an IoT sensor triggered by an event, it will necessarily help increase battery life further.
As I wrote earlier, a micro-controller sitting there doing nothing can last about 15 years on an LR44 cell if you design the circuit and program it accordingly. The battery will die of old age first if a micro-controller waiting for external input that never comes is the only load.
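A sanity check on that lifetime, using a typical LR44 datasheet capacity and an assumed micro-watt-class sleep draw:

lr44_mah = 150                       # typical alkaline LR44 capacity
lr44_v = 1.5
cell_wh = lr44_mah / 1000 * lr44_v   # ~0.225 Wh

sleep_power_uw = 1.5                 # assumption: micro-watt-class sleep draw
hours = cell_wh / (sleep_power_uw / 1e6)
print(hours / 8760)                  # ~17 years, ignoring self-discharge (which matters at this scale)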
 