Princeton: Replacing RAM with Flash Can Save Massive Power

[citation][nom]Duckhunt[/nom]I am not really sold on the idea of Flash RAM. We need some new tech. We need the ability to turn on PCs in an instant rather than having everyone put them in sleep or hibernate mode. I know whole corporations that run all of theirs like that.[/citation]

We have that. Try upgrading your PC.
 
[citation][nom]Onus[/nom]Hmmm, shades of "Expanded" memory? Those running PCs in the late 80's early 90's will remember... LIM EMS...QEMM386...DesQview...[/citation]

Not really. EMM and EMS only allowed the use of existing onboard memory above the 640 KB barrier. There was no physical difference, only the availability of the addressing to DOS, usually by loading system drivers (can't recall the actual terminology... lol) into higher memory, thus freeing up the space below 640 KB for normal program use.
 



Woah hold on there. *Pulls out dusty old timer hat*

There certainly were physical differences in EMM / EMS; they were NOT just special addressing schemes. At least not when they were originally created.

Way, way, way back in the era of the 8086 / 80286, mainboards had their memory soldered onto them, and there was a real problem with x86 CPUs accessing memory above 1 MB. Some manufacturers went so far as to create special memory expansion buses that allowed additional memory to be bolted on. Programs written for this special memory type could then directly access it and use it for data storage. There were two different types, Extended and Expanded, and the primary difference was the addressing method: Extended presented its entire range to the program, while Expanded was organized into 64 KB pages and only one page could be accessed at a time. The EMM / EMS drivers that were eventually created were there to maintain backwards compatibility with those older programs.
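The bank-switching idea behind Expanded memory can be sketched as a toy model. This is illustrative only, not real EMS driver code; the class, the page count, and the method names are made up for the example:

```python
# Toy model of Expanded Memory (EMS)-style bank switching.
# Expanded memory is a large pool divided into 64 KB pages; the
# program can only "see" one page at a time through a fixed-size
# window (the real EMS page frame sat in the upper memory area).

PAGE_SIZE = 64 * 1024  # 64 KB per page

class ExpandedMemory:
    def __init__(self, num_pages):
        # The full pool, far larger than what is addressable at once.
        self.pages = [bytearray(PAGE_SIZE) for _ in range(num_pages)]
        self.mapped = 0  # which page the window currently shows

    def map_page(self, page_number):
        # "Bank switch": point the window at a different 64 KB page.
        self.mapped = page_number

    def read(self, offset):
        # All reads go through the window into the mapped page.
        return self.pages[self.mapped][offset]

    def write(self, offset, value):
        self.pages[self.mapped][offset] = value

# 2 MB of expanded memory (32 pages), but only one 64 KB window
# visible at any moment:
ems = ExpandedMemory(num_pages=32)
ems.map_page(5)
ems.write(0, 42)        # lands in page 5
ems.map_page(6)
print(ems.read(0))      # page 6 is untouched, so this prints 0
ems.map_page(5)
print(ems.read(0))      # back on page 5: prints 42
```

The key point the post makes is visible here: the total pool can be much larger than the window, but every access is limited to whichever single page is currently mapped in.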
 
Well, there is more than one reason, and you aren't the only one saving. Don't think of it only as saving money; money is not our god. Go green, don't think brown. Multiply a 1 W saving by a billion machines and the world is saved, no more stupid nuclear power. I don't save money, I save resources, electricity, and water...
 
Well, I guess we are already doing this with SSD caches and so on. It's like L1, L2, L3, and L4 caches: the higher up you go, the slower but more capacious the memory gets. RAM was basically the unwritten L5 cache (the first level not on the CPU die), and seeing SSDs become the unwritten L6 would be cool, as long as it made things faster.
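The level-by-level fall-through described above can be sketched in a few lines. The latency numbers are illustrative order-of-magnitude placeholders, not measurements, and the lookup function is a made-up toy, not how real hardware resolves misses:

```python
# Toy memory hierarchy: each level is checked in order, and the
# first level holding the data answers the request. Latencies are
# rough illustrative figures in arbitrary units.
hierarchy = [
    ("L1 cache", 1),
    ("L2 cache", 4),
    ("L3 cache", 40),
    ("RAM", 100),
    ("SSD", 100_000),
    ("HDD", 10_000_000),
]

def lookup(key, contents):
    """Return (level, total cost) of the first level containing key."""
    cost = 0
    for level, latency in hierarchy:
        cost += latency  # every miss adds that level's latency
        if key in contents.get(level, set()):
            return level, cost
    raise KeyError(key)

contents = {"L1 cache": {"a"}, "RAM": {"b"}, "HDD": {"c"}}
print(lookup("a", contents))   # ('L1 cache', 1)
print(lookup("b", contents))   # found in RAM after missing L1-L3
```

The pattern is exactly what the post describes: each step outward is slower but bigger, and a miss at one level simply pushes the request to the next.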
 


At least with modern systems, I think that RAM would count as L4 by your logic, since we don't have L4 caches anymore (which, as I recall, were never on the CPU die even in the old computers that had them), and SSDs would be L5. However, since they both act as main memory rather than just cache (not to say that they can't be used to cache some things), they're technically not really cache anyway 😉
 


That's not perfectly true. They CAN be used as main memory, but RAM drives have existed for years, and Fusion Drive for Macs as well as off-the-shelf hybrid drives both use an SSD as a cache.

As for L4 cache, they may not entirely be gone...
http://www.bit-tech.net/news/hardware/2012/03/19/haswell-l4-cache-rumour/1
 


I was saying that although they can be used to cache things, that is not their main purpose these days, so what I said really is perfectly true, at least as far as this discussion has gone 😉

Also, that L4 cache thing with Haswell is just a rumor and even if it's true, it'd only be for Haswell, a platform that is still not out. Current platforms would still work according to what I said and I did specify that I'm talking about current platforms.
 
I agree with a lot of what you are saying; this is fairly pointless for the average consumer. If you had farms of computers, like some elite Bitcoin operators or large networks of supercomputers, etc., I can see the benefits, because watts quickly add up as you scale to larger numbers of computers running off the same power source. But still, unless there are other advantages that would encourage us power users to take a serious look at it, this will only catch the interest of a handful of people.
 


And I'm saying that your nitpicking is completely irrelevant to the point, which was that address space, whether volatile or not, gets more capacious and slower as the scale goes up. Just because L4 caches are gone does not mean RAM is the new L4 cache; it just means there is none. When that Celeron processor came out from Intel with no L2 cache, did they call RAM the L2 cache? No.
 


I know I'm late, but w/e.

I disagree. You specifically said L5 for the RAM. Are you saying that without L4, we are just going to go from L3 to L5 in this context? That seems rather nonsensical to me. So yes, in the absence of a real L4 cache, by your logic, the RAM would have taken its place as L4.

They didn't call RAM L2 cache because RAM is not cache. I only called it cache for the sake of argument in this context because "cache" was your word, not mine, for what you described.

Yes, I was nitpicking, but it most certainly was not irrelevant and it still isn't now.
 


I see that nitpicking and possibly trolling are your strengths. RAM and all forms of temporary memory used for organizing data to be processed are forms of cache. L1 typically holds a cache for a processor's registers, L2 for all processors, L3 for everything on die, RAM for all components of the PC full stop, and the HDD for anything that does not fit into RAM (virtual memory).


Knowing that all of those abbreviations just stand for "Level," what exactly restricts RAM from being level 4 of the caching hierarchy, in your knowledgeable opinion? You've been pretty vocal in saying I am wrong, but have given no reasoning as to why I cannot be right so far.

And no, I did not "specifically say". I gave a process and came to a logical conclusion. The number does not matter, the sequence does; so yes, RAM can be level 362 if we can find that many forms of organizational storage that are faster than it and closer to the CPU in the hierarchy.


That's like saying, oh the third guy in the race cheated and got disqualified, but you still got fourth so no bronze for your sorry backside lol.
 


RAM can be used for cache. Storage drives can be used to cache things too. However, this is not their only purpose, unlike CPU cache; therefore they are not cache, although they can be used for it. The entirety of the RAM is not used for caching purposes, and the same is true for the hard drive. That is the difference, and I most certainly have pointed this out earlier, so yes, I have given a reason.

Whether or not RAM can be level 362 by your logic is irrelevant in these examples because there aren't 361 levels of caching before it. That is my point. Not once did I say that RAM couldn't be any number, only that from the logic used in your earlier post, RAM would not be L5 for then-current systems because there is no caching nor even proper storage in them that acts as a level between L3 cache on the CPU and the RAM. If there were 361 levels before it, then sure, calling it L362 would be true within your logic.

My point this whole time, and still is, simply that at least when I made my post, the RAM in your example would have been L4, not L5, for current systems. Storage might still be considered L6 since storage drives have their own small caches too, but RAM has no such caching in consumer systems as far as I'm aware and would thus have been L4 in your example.

Yes, you gave a process, what I've been saying this whole time is that your conclusion used an incorrect placement for the RAM for current systems.

Also, L2 is almost always limited to a single processing core, not all processors, at least in modern x86 CPUs. The only current major exception would be AMD's modular architectures and even then, it's still not linked to all cores, just to one module per L2 cache.

Technically, calling RAM and storage "cache" can be compared to calling a log a bench. Sure, you can sit on both, but the log still isn't a bench. For example, not all data stored in the RAM is found on a storage drive, therefore it is not cached. Almost all data stored on a hard drive is typically not a cache.
 
I'm going to have to do this bit by bit, because it seems you are trying really hard to misunderstand/disprove my statements.



RAM is categorically cache. The scientific definition of a cache "is a component that transparently stores data so that future requests for that data can be served faster". RAM by its nature is volatile, so all data must be loaded into it, and none of it survives unless offloaded from it. It is the system's classical cache. There is nothing saying that a cache must only serve one unit within a system.



That IS my point. You nitpicked about me saying that RAM was L5 cache on the basis that there were not four levels of cache before it, and I said that if you could find 361 levels before it, then it would be level 362. It's sequential... do you understand the nature of counting? I'm sorry to seem condescending, but wow, do you realize that calling my illustration of the sequential nature of cache levels irrelevant also makes your argument over level 4 not existing irrelevant, seeing as that is an argument about me missing the next number in a sequence? You just dismissed your own argument, pal.


If so, you're either obsessive-compulsive, or really, really, really trying to ignore my statement about Haswell chips, which (then) were rumored to have L4 caches. Further to this point, given the previous existence of L4 caches, I would think that if we ever adopted this kind of naming system, we would still skip L4 in case it should ever return, much like the removal of GT2 racing would not automatically promote the class previously called GT3 to GT2. And should the name L5 ever be given to RAM, nobody would want to order an "L5" down the line and be delivered an SSD (due to your types renaming RAM L4, with L5 then being passed on to SSDs).

Anyway, RAM does not have cache. RAM IS cache. A cache is a faster temporary store of data used for the sake of speed of access. That is RAM precisely defined; else we would skip RAM and just write directly to and from the disk. We already do if and when we run out of RAM, or simply don't need its speed. It's called virtual memory.



You are right here.



Technically, you are wrong on many counts here. Not all data stored in any cache can be found on a storage drive. Firstly, within the L1 cache you have procedural code which often creates its own results, which must then be reprocessed. Then you have the fact that any data that is the result of CPU processing must be offloaded to a cache before being sent to RAM (and then to the drive or other devices; RAM is not only a cache for the CPU, but for the whole system, as mentioned earlier). The only real difference is that, to a large extent, CPU caches are governed by a simple ruleset and operate automatically, whereas the programmer controls RAM, because we reach a stage where the data is big enough to make human control a smarter idea.

Then you say that the entirety of RAM is not used for caching purposes, and nor is the hard drive. Please state a single use of RAM or hard drive data that does not involve processing that data at some stage and is part of its intended purposes. Using an HDD as a paperweight does not count, and storing already-processed data doesn't count either, as L1/L2/L3 cache will temporarily store output data before it is sent to RAM in blocks too. Oh, and RAM sending data to other devices doesn't count either, as it will have received that data from elsewhere at some stage and therefore just held it to be processed.

Like Wikipedia says: In computer science, a cache (/ˈkæʃ/ KASH)[1] is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes.

That is general enough to include all forms of memory in a computer.
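That hit/miss behavior from the quoted definition boils down to a few lines. This is a generic sketch of the concept, not any particular hardware's replacement policy; the class and the backing-store dictionary are invented for the example:

```python
# Minimal cache per the quoted definition: serve hits from fast
# local storage, fall back to the slower backing store on a miss
# and keep a copy so the next request is faster.
class Cache:
    def __init__(self, backing_store):
        self.store = {}               # fast local copies
        self.backing = backing_store  # slower original storage
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:         # cache hit: served quickly
            self.hits += 1
        else:                         # cache miss: fetch, then keep a copy
            self.misses += 1
            self.store[key] = self.backing[key]
        return self.store[key]

disk = {"file.txt": b"hello"}         # stand-in for slow storage
cache = Cache(disk)
cache.get("file.txt")                 # miss: fetched from backing store
cache.get("file.txt")                 # hit: served from the cache
print(cache.hits, cache.misses)       # 1 1
```

Note that this sketch is transparent in the sense the definition requires: the caller asks for the same key either way and never needs to know whether the answer came from the cache or the backing store.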
 