> The closer you get to the CPU cores, the more stringent the bandwidth and latency requirements become. The article from the European research center IMEC (likely the world's most important research center for developing semiconductor manufacturing tools and processes) seems to indicate that VG-SOT-MRAM meets the requirements to replace cache at some level (at least L3, and maybe further down).
The main thing I'm saying is that if you want to solve a problem, try to learn everything you can about it. If you're pitching it to me as a solution for cache memory, I shouldn't have to remind you about tag RAM. You should already know all about it, and how your MRAM-based solution can most effectively be adapted to it.
For all I know, there might be interesting ways to use some property of MRAM to make tag lookups more efficient. That's on you to figure out, since you're advocating for it.
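For readers who need a reminder of what tag RAM does: in a set-associative cache, every access must compare the requested address's tag against the stored tag for each way in the indexed set, so the tag array sits on the critical path of every lookup alongside the data array. Here's a minimal sketch of that lookup (sizes and names are hypothetical, not tied to any MRAM design):

```python
# Minimal sketch of a 4-way set-associative cache tag lookup.
# Geometry is illustrative: 64-byte lines, 128 sets (hypothetical).

LINE_BITS = 6   # 64-byte cache lines
SET_BITS = 7    # 128 sets
NUM_WAYS = 4

def split_address(addr: int):
    """Split a physical address into (tag, set index, line offset)."""
    offset = addr & ((1 << LINE_BITS) - 1)
    index = (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (LINE_BITS + SET_BITS)
    return tag, index, offset

# tag_ram[set][way] = (valid_bit, stored_tag)
tag_ram = [[(False, 0)] * NUM_WAYS for _ in range(1 << SET_BITS)]

def lookup(addr: int):
    """Return the hitting way, or None on a miss.
    In hardware, all NUM_WAYS tag comparisons happen in parallel,
    which is why tag-array latency matters as much as data-array
    latency for any candidate cache memory technology."""
    tag, index, _ = split_address(addr)
    for way, (valid, stored_tag) in enumerate(tag_ram[index]):
        if valid and stored_tag == tag:
            return way
    return None

def fill(addr: int, way: int):
    """Install a line's tag (victim selection omitted for brevity)."""
    tag, index, _ = split_address(addr)
    tag_ram[index][way] = (True, tag)

fill(0x1234_5678, 0)
assert lookup(0x1234_5678) == 0     # hit in way 0
assert lookup(0xDEAD_BEEF) is None  # miss
```

The point of the sketch: whatever replaces SRAM in a cache has to serve both the data array and these per-access tag comparisons, so an MRAM pitch for cache needs a story for the tag side too.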
Prototypes are great, but you need sound theory to even believe an idea is worth taking that far. Clearly, someone did that homework.

> It is likely that before the end of 2025 there will be a prototype chip demonstrating the viability of this (it seems they are already working on that…).
Could be, but you ought to know the argument for doing so before trying to make it. To convince someone, you should at least be able to quote the most relevant bits of the documents you cite. If you believe in the idea as much as you say you do, that shouldn't be too much to ask.

> For AI applications, it seems that VG-SOT-MRAM could be used differently: it looks like an interesting candidate for implementing multi-level deep-neural-network weights for analog in-memory computing, and likely much, much better suited to that than highly energy-inefficient DRAM.
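For context on the "multi-level weights" claim: the idea is that each memory cell stores one of a small number of discrete conductance states, and the dot products of inference are computed in place by accumulating currents on a bitline. The sketch below only models the arithmetic of that scheme with a made-up 4-level quantizer; it says nothing about any actual VG-SOT-MRAM device:

```python
# Back-of-the-envelope model of a multi-level in-memory MAC.
# Hypothetical assumption: 4 conductance levels per cell (2 bits).
# In an analog crossbar the sum would be performed physically by
# current accumulation; here we just compute the same arithmetic.

LEVELS = 4  # hypothetical number of distinct conductance states

def quantize(w: float, w_max: float = 1.0) -> float:
    """Map a weight in [-w_max, w_max] to the nearest of LEVELS states."""
    step = 2 * w_max / (LEVELS - 1)
    level = round((w + w_max) / step)
    level = max(0, min(LEVELS - 1, level))
    return level * step - w_max

def in_memory_mac(inputs, weights):
    """Dot product against quantized weights (the 'analog' operation)."""
    return sum(x * quantize(w) for x, w in zip(inputs, weights))

x = [0.5, -1.0, 0.25]
w = [0.9, 0.1, -0.6]
exact = sum(a * b for a, b in zip(x, w))
approx = in_memory_mac(x, w)
# Quantizing to 4 levels introduces a bounded error per weight.
assert abs(exact - approx) < 0.5
```

Whether a given MRAM cell can actually hold enough well-separated levels, with enough retention and endurance, is exactly the kind of question the cited documents would need to answer.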
I'm not opposed to anything you're saying - just skeptical. I could be convinced, but you're currently a ways away from that.
Good luck on your journey.