A-DATA to OCZ: We Can Slap a 64 MB Cache on an SSD, Too


baov

Distinguished
Jan 21, 2009
30
0
18,530
Why are we talking about pulling reads out of the cache? The reason it's there on an SSD is to buffer those slow writes, not to serve reads out of the cache.
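To picture what that buffering looks like, here's a minimal Python sketch of a write-back cache: host writes are acknowledged as soon as they land in DRAM, and the slow flash cost is only paid when the buffer flushes. Purely illustrative; the class, sizes, and page unit are made up, not how any real controller firmware works.

[code]
# Minimal sketch of a write-back cache: host writes are acknowledged once
# they land in DRAM; the slow flash program cost is deferred to the flush.
# All sizes and structure here are made-up illustration values.

FLASH_PAGE = 4096  # assumed flash program unit, in bytes


def program_flash(lba, data):
    """Stand-in for the slow flash program operation."""
    pass


class WriteBackCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.pending = []   # buffered host writes: (lba, data)
        self.used = 0

    def host_write(self, lba, data):
        """Acknowledge the write as soon as it fits in DRAM."""
        if self.used + len(data) > self.capacity:
            self.flush()    # cache full: pay the flash cost now
        self.pending.append((lba, data))
        self.used += len(data)

    def flush(self):
        """Push everything buffered in DRAM down to flash."""
        for lba, data in self.pending:
            program_flash(lba, data)
        self.pending = []
        self.used = 0


cache = WriteBackCache(64 * 1024 * 1024)   # 64 MB, as in the article
cache.host_write(0, b"x" * FLASH_PAGE)     # returns "instantly"
cache.flush()                              # the deferred slow part
[/code]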
 

jacobdrj

Distinguished
Jan 20, 2005
1,475
0
19,310
As cheap as DDR-style memory is, why not have a gig of built-in cache? These drives cost upwards of $500 anyway; what is another $5? Intel still has a leg up, but they are pushing them with innovation. For you geeks out there, it is taking the brute-force Stargazer approach rather than the finesse Excelsior one.
 

Grims

Distinguished
Sep 17, 2008
174
0
18,680
[citation][nom]jacobdrj[/nom]As cheap as DDR-style memory is, why not have a gig of built-in cache? These drives cost upwards of $500 anyway; what is another $5? Intel still has a leg up, but they are pushing them with innovation. For you geeks out there, it is taking the brute-force Stargazer approach rather than the finesse Excelsior one.[/citation]

Why stop there? Why not just make a 64GB cache drive and be done with it? :p
 

MikePHD

Distinguished
Jun 25, 2007
24
0
18,510
I don't know which Vertex you are talking about, but my Vertex fresh out of the box gets 120 MB/s sustained writes and 240 MB/s reads.
 

mavroxur

Distinguished
[citation][nom]jacobdrj[/nom]As cheap as DDR-style memory is, why not have a gig of built-in cache? These drives cost upwards of $500 anyway; what is another $5? Intel still has a leg up, but they are pushing them with innovation. For you geeks out there, it is taking the brute-force Stargazer approach rather than the finesse Excelsior one.[/citation]


Because putting that much data in a volatile RAM cache is asking for problems unless you add a battery backup on the drive, like most high-end RAID cards with a lot of on-board cache do. Not to mention that a flush of a 1GB cache would bog things down, especially if it happens during a Windows shutdown or a latency-sensitive operation.
 

jacobdrj

Distinguished
Jan 20, 2005
1,475
0
19,310
[citation][nom]Grims[/nom]Why stop there? Why not just make a 64GB cache drive and be done with it?[/citation]
Because 1GB costs 5 dollars, while 64GB costs 500. If you can put in enough cache that the law of diminishing returns doesn't catch up with it, at minimal cost (especially relative to the overall cost of the drive), it might be worth it.
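For the curious, the arithmetic behind that, using the ballpark 2009 prices quoted in this thread (not current figures):

[code]
# Rough $/GB comparison based on the prices quoted in this thread.
dram_cost_per_gb = 5.0          # "1GB costs 5 dollars"
flash_cost_per_gb = 500.0 / 64  # "$500 for a 64GB drive" -> ~$7.81/GB
drive_price = 500.0

print(f"DRAM cache: ~${dram_cost_per_gb:.2f}/GB")
print(f"Drive flash: ~${flash_cost_per_gb:.2f}/GB")
print(f"Adding 1GB of DRAM is a {dram_cost_per_gb / drive_price:.0%} bump on the drive price")
[/code]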
 

jacobdrj

Distinguished
Jan 20, 2005
1,475
0
19,310
[citation][nom]mavroxur[/nom]Because putting that much data in a volatile RAM cache is asking for problems unless you add a battery backup on the drive, like most high-end RAID cards with a lot of on-board cache do. Not to mention that a flush of a 1GB cache would bog things down, especially if it happens during a Windows shutdown or a latency-sensitive operation.[/citation]
Interesting, but if that means they are effectively engineering for failure, you could argue that using an SSD in the first place is a bad choice, since data from a dead drive is unrecoverable. Also, at 64MB you still have a tremendous data loss were power to be interrupted; it is a buffer. By that logic, no buffer should be used at all for any data-sensitive application, and the hit in performance would be justified. I would imagine that with proper engineering of the controller you could at least mitigate these problems for consumer-grade drives.
 

mavroxur

Distinguished
[citation][nom]jacobdrj[/nom]Interesting, but if that means they are effectively engineering for failure, you could argue that using an SSD in the first place is a bad choice, since data from a dead drive is unrecoverable. Also, at 64MB you still have a tremendous data loss were power to be interrupted; it is a buffer. By that logic, no buffer should be used at all for any data-sensitive application, and the hit in performance would be justified. I would imagine that with proper engineering of the controller you could at least mitigate these problems for consumer-grade drives.[/citation]



I never said SSDs were a bad choice. And 64MB isn't all that far off from the 32MB caches on typical hard drives, but when you start using cache sizes of 1GB like you mentioned, there's a lot more data sitting in a volatile state, and it takes longer to flush. It's usually not an issue when you're sporting 32MB of cache, since your buffer-to-disk speed can flush that quickly; with 1GB, it's much more pronounced. I know it's hard to picture the size difference between 64MB and 1GB, but there is a substantial difference there.
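A quick back-of-the-envelope in Python, using the ~120 MB/s sustained write figure MikePHD quoted above. Actual flush speed depends on the controller and how fragmented the buffered data is, so treat these as orders of magnitude only:

[code]
# Rough flush-time estimates at an assumed 120 MB/s sustained write speed
# (the figure quoted earlier in the thread). Real numbers vary with the
# controller and write pattern; the point is how the times scale.
sustained_write_mb_s = 120.0

for label, cache_mb in [("32MB (typical HDD cache)", 32),
                        ("64MB (this drive)", 64),
                        ("1GB (proposed)", 1024)]:
    seconds = cache_mb / sustained_write_mb_s
    print(f"{label:>24}: ~{seconds:.2f} s to flush a full cache")

# Prints roughly:
#   32MB -> ~0.27 s
#   64MB -> ~0.53 s
#   1GB  -> ~8.53 s of volatile data to flush at shutdown or power loss
[/code]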
 

jacobdrj

Distinguished
Jan 20, 2005
1,475
0
19,310
[citation][nom]mavroxur[/nom]I never said SSDs were a bad choice. And 64MB isn't all that far off from the 32MB caches on typical hard drives, but when you start using cache sizes of 1GB like you mentioned, there's a lot more data sitting in a volatile state, and it takes longer to flush. It's usually not an issue when you're sporting 32MB of cache, since your buffer-to-disk speed can flush that quickly; with 1GB, it's much more pronounced. I know it's hard to picture the size difference between 64MB and 1GB, but there is a substantial difference there.[/citation]
I am not saying you were; it just seems like the logical conclusion if failure were the only reason for not increasing the cache size more drastically.
To your point about the size difference being significant: as a non-EE/CE I don't have a concept of what is truly 'large', so I'll take your word for it. What would you say would be a good size for the cache?
 