Seagate: Industry Not Ready for 3TB HDD Capacity

Cool. I'll load a few 3 TB HDDs into my Mac Pro the day they come out because Macs don't have these issues. Good luck, PC users.
 
Ok at http://www.48bitlba.com/ it says:
"48-bit Logical Block Addressing (LBA) is a technology which extends the capacity of IDE ATA/ATAPI devices beyond a previous limit of 137.4 GB. This limit applies to IDE ATA/ATAPI devices only and not to SCSI interface devices. The original design specification for the ATA interface only provided 28-bits with which to address the devices. This meant that a hard disk could only have a maximum of 268,435,456 sectors of 512 bytes of data thus limiting the ATA interface to a maximum of 137.4 gigabytes. With 48-bit addressing the limit is 144 petabytes (144,000,000 gigabytes). "

Soooooooooo what's with the 2.1 TB limit?
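(For what it's worth, the quoted figures check out, and the ~2.1 TB number lines up with a 32-bit sector count - which is what MBR partition tables use, as noted further down the thread - rather than the 48-bit count the drive interface itself supports. A rough sketch, assuming 512-byte sectors:)

[code]
# Quick check of the quoted figures, assuming 512-byte sectors throughout.
SECTOR = 512

limits = {
    "28-bit LBA (old ATA)":      2**28 * SECTOR,
    "32-bit sector count (MBR)": 2**32 * SECTOR,
    "48-bit LBA (current ATA)":  2**48 * SECTOR,
}
for name, size in limits.items():
    print(f"{name}: {size:,} bytes  (~{size / 1e12:.2f} TB)")

# 28-bit LBA (old ATA):      137,438,953,472 bytes         -> the old 137.4 GB wall
# 32-bit sector count (MBR): 2,199,023,255,552 bytes       -> the ~2.1-2.2 TB wall
# 48-bit LBA (current ATA):  144,115,188,075,855,872 bytes -> ~144 PB
[/code]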
 
[citation][nom]smugmug[/nom]Cool. I'll load a few 3 TB HDDs into my Mac Pro the day they come out because Macs don't have these issues. Good luck, PC users.[/citation]

Wanna talk about my 6TB array?
 
I like Seagate for some reason. Everyone goes for WD, but I see no fault in Seagate's products. Just buy a retail package. When you buy OEM drives, who knows how they were handled in transit and what kind of packaging they were in. Retail drives are only like $5-10 more and there's less chance of failure.
 
Another issue with having > 2 TB hard drives is that current motherboards can't boot from a > 2 TB partition, as MBR-formatted drives are limited to 2 TB per partition.

In order to boot from a GPT partition, you have to have:

1) A motherboard with an EFI, not a BIOS. Apple MacBooks currently use EFIs.
2a) A version of Windows that's NT 6.0 (Vista or 7) or higher. The only NT 5.x versions that support booting from a GPT partition are the Itanium versions of Windows Server 2003 (as far as I know).
or
2b) GPT and EFI support in the non-Windows OS of your choice (which I think the Linux communities support by now...I could be wrong)

In order to access a GPT partition (not necessarily boot from it), you need the following:

1a) A version of Windows that's NT 5.2 (XP x64 or Windows Server 2003, any flavor) or higher.
or
1b) Support in the non-Windows OS of your choice.
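To put a number on the 2 TB MBR ceiling mentioned above: each of the four MBR partition entries stores its start LBA and sector count as 32-bit fields. Here's a rough sketch that dumps them (the device path is purely illustrative - on Windows you'd open \\.\PhysicalDrive0 as admin - so treat it as a sketch, not a recipe):

[code]
# Rough sketch: read the MBR and print each partition entry's 32-bit fields,
# which is where the ~2.2 TB (2 TiB) per-partition ceiling comes from.
import struct

DEVICE = "/dev/sda"   # illustrative path; use r"\\.\PhysicalDrive0" on Windows (needs admin)

with open(DEVICE, "rb") as disk:
    mbr = disk.read(512)

for i in range(4):
    entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
    part_type = entry[4]
    start_lba, num_sectors = struct.unpack_from("<II", entry, 8)   # both fields are 32-bit
    if part_type:
        print(f"partition {i}: type 0x{part_type:02X}, starts at sector {start_lba:,}, "
              f"{num_sectors * 512 / 1e9:.1f} GB (a 32-bit count maxes out at ~2199 GB)")
[/code]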
 
Confused_about_lba:

Even if the LBA can go up to 48-bit, the rest of your computer needs to be capable of handling those addresses too. EFI is ready for it since it is completely designed to be new-world tech, while most of our computers still run legacy code from the '80s 😉

*Yes, I am exaggerating a bit, and dude, it was hard to find the correct spelling of exaggerating! 😀
 
a) I have set up my OS to use ALL available RAM before swapping and it took 4GB,
so I could benefit from more than 4GB of RAM.
b)
I have my 3*1.5TB drives filled with videos - not everything could fit.
I'm moving to FullHD stuff and I hope to get BIGGER HDs in order to have any of them online on my home entertainment system.
c)
Bill Gates is rumoured to have said that 640 KB would be enough.
The first HDs were 10MB.
We will need terabytes of RAM and petabytes of storage in the year 202*.
Nah, we could have even more - think about Mo(o)re's law: it has worked for the past 30 years!!!!
 
[citation][nom]CyberAngel[/nom]a) I have set up my OS to use ALL available RAM before swapping and it took 4GB, so I could benefit from more than 4GB of RAM. b)[/citation]

Sure, if you set up your OS to use ALL available RAM then the same would happen with 32GB :) If OSs (Microsoft's particularly) were cars, then today's cars would merely squeeze 5HP out of a 5000cc engine. It is a pity that we have such powerful machines today only to be strangled by such inefficiently written software.

If technical merit
As a programmer I can only say - what a waste of resources.

 
[citation][nom]beayn[/nom]Unless they mean LBA was brought out in 1990, which seems more logical as CHS mapped drives were still in use back then.[/citation]

I'd support this version of the correction. Hardly anyone was using hard disks in 1980, and those that were in use maybe went up to 100mb in a washing-machine sized body with 128 or 256 byte sectors. There wasn't really anything resembling standardisation, let alone a need to use 32 bits of LBA (= 4 billion addresses (x512 = 2 trillion)) when the device itself only stored about 800 million BITS in total over 400,000 sectors.

1990 was about the point where hard disks large enough to threaten the limits of the existing CHS addressing scheme - i.e. over 512mb - were coming up on the horizon, even as home users were just getting used to the idea of using something other than floppies (or tapes!) anyway. Sounds about right. My first HDD based computer (in '94!) had a 540mb disk which could be run in either CHS or LBA, the latter giving us those last, precious 28mb, even though we had to go to 8kb FAT16 clusters to use it.
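(In case anyone wants to see where that roughly-half-a-gig ceiling comes from, here's the classic geometry worked out - a quick sketch:)

[code]
# The old CHS ceiling: the BIOS/ATA geometry intersection of
# 1024 cylinders x 16 heads x 63 sectors x 512 bytes per sector.
cylinders, heads, sectors, sector_size = 1024, 16, 63, 512
limit = cylinders * heads * sectors * sector_size
print(f"{limit:,} bytes = {limit / 1e6:.1f} MB = {limit / 2**20:.1f} MiB")
# 528,482,304 bytes = 528.5 MB = 504.0 MiB
[/code]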

Two tera would have seemed immense back then, just as 4Gb RAM would have (we had 8Mb, a massive leap up from 1Mb!)... in fact it still did when it was talked about as being the upper limit of FAT32 disks when Win95b came along in '96, or even when I bought my first 2Tb external for a ludicrous sum a couple of years ago.

This all explains why the previously steadily-increasing maximum size of available disks suddenly stalled at 2Tb. LAPTOP 1-tera disks are even starting to become common and affordable; the desk/lap storage gap hasn't ever really been less than 3x before now. However, I'm puzzled as to why other workarounds can't be used.

The first would be just the already-in-progress shift to 4kb sectors; that would allow 16Tb without any more messing about than a slight code tweak to access 4096 bytes at once instead of 512 and multiply size readings by the same factor (and limiting systems that have to use 4096-to-512 translation and alignment to 2Tb, much as old pre-48bit controllers could only reach 128Gb, and previous ones topped out at 32, 8, 2 or even 0.5Gb where you set a drive jumper to hard-limit the size reported), giving us an extra few years to iron out any dumb mistakes in the standard.
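The arithmetic behind that 16Tb figure, for the record (a quick sketch):

[code]
# Same 32-bit sector count, bigger sectors: 8x the ceiling.
print(f"{2**32 * 512:,}")    # 2,199,023,255,552   ~2.2 TB / 2 TiB   (512-byte sectors)
print(f"{2**32 * 4096:,}")   # 17,592,186,044,416  ~17.6 TB / 16 TiB (4 KB sectors)
# ...and the old pre-48-bit wall mentioned above was the 28-bit count:
print(f"{2**28 * 512:,}")    # 137,438,953,472     ~137 GB / 128 GiB
[/code]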

The other would be to alter what the drive controller reports (again, maybe jumper-settable) - rather than one 3Tb drive, there would be two 1.5Tb ones (or 10x 2Tb for a theoretical 20Tb disk...). Under NTFS in a suitably high-tier version of Windows, these could even be joined back together into a single seamless volume under a dynamic disk scheme.
 
[citation][nom]Moshu78[/nom]3 GB of RAM, more than 3 GB. You can't access 4 GB on a 32 bit OS (...) Add some home-made video processing and gaming and the 3 GB will not suffice.[/citation]

Nonsense. It depends a lot on your motherboard and the integrated graphics (if you use it). I've seen reported amounts between 3.0 and 3.75Gb on 32-bit systems with 4Gb installed, the latter being functionally no different from 4.0Gb (>93%) unless you've got the system under quite incredible stress. And I still quite happily manage to plug through video and audio processing on a 1Gb, 32-bit, sub-2Ghz solution. For hi-rez stuff I have to leave renders to run in the background for a few hours, but the actual interactive part of the work is fine, and the heavy processing doesn't stop me from doing other stuff while I multitask.

The memory load of even a very complex Avisynth script (manually chopping together a 120+ cut, 40 minute, 3 camera production with several effects) didn't go over 480mb. For plain transcoding I'd be surprised at it peaking at 1/10th of that - you're never looking at more than a few ~1mb (for SD, ~5mb for HD) frames at a time, or holding more than ~12 seconds of encoded stuff (300 frames of MPG4) in memory. CPU/RAM SPEED and DISK are what those applications need, and even so, 2Tb (and 64-bit) is massive overkill if you're only doing one project at once - a 120Gb disk is usually sufficient (and a 32-bit CPU will get there, at very high speed if you have a high-Ghz multicore; they still use 64/128-bit memory access and SIMD after all). Did it comfortably on an 850mhz (overnight encodes 😉), 256mb system back in the days of Win98 and SVCD...
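(Rough sketch of where those per-frame figures come from - an uncompressed frame is just width x height x bytes-per-pixel, around 1.5 for the YV12 format Avisynth mostly works in, 3 for RGB24:)

[code]
# Back-of-the-envelope per-frame memory, uncompressed.
def frame_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / 1e6

for name, w, h in [("SD 720x576", 720, 576), ("HD 1920x1080", 1920, 1080)]:
    print(f"{name}: ~{frame_mb(w, h, 1.5):.1f} MB (YV12) to ~{frame_mb(w, h, 3):.1f} MB (RGB24)")

# SD 720x576:   ~0.6 MB (YV12) to ~1.2 MB (RGB24)
# HD 1920x1080: ~3.1 MB (YV12) to ~6.2 MB (RGB24)
[/code]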

Gaming, you have a point, but I gave up on PC gaming as a needlessly expensive, running-to-stand-still hobby covering a depressingly small spread of genres many years ago. The only vaguely contemporary PC game I've had any interest in is Live For Speed, and that runs nicely on the aforementioned sub-2Ghz system (with Intel integrated graphics no less). When I can pick up a PS3 for the cost of a typical "good" PC graphics card, let alone the memory, the power-hungry multicore CPU (which itself may eat up half its purchase cost in energy bills per year), etc, all of which will be hopelessly out of date in half the time of the console, I really don't see the point.

For everything else, including playing back HD video (ok, admittedly only 720p, but it would take a trivial spec increase to reach 1080p or even 3D) off disk or online, 32-bit XP Pro and reasonably modest hardware do JUST fine. We're not in the days of hurriedly rushing from a lowly 486 through Pentium MMXs to Athlons then Cores, and Win 3.1 thru 9x to 2000 and XP any more. We've reached a level where even a rather cheap, nasty, and somewhat old PC is effortlessly powerful enough for most needs. So I don't see why I need to run out and spend money I can ill afford on bringing it up to the latest, typically power-hungry system (my example system is in fact a laptop, too - peaks at 70w and typically uses less than 30 when on AC, under test) to cope with the latest memory/disk-hungry OS (64-bit instructions being 2x as big, after all) just because some geek decides that, as we haven't chased the cutting edge as hard as he has, we're hopelessly obsolete. No, we're not, any more than a 10-year-old economy car is.

It's a bit of a shame I won't be able to use disks bigger than 2Tb with my current setup, but by the time 3 or 4s become properly affordable, it really WILL be time to upgrade anyway. For now, a not particularly sprawling pair of 2s is EASILY enough for all the data I can currently hope to try filling them with. The increase won't be necessary until we're routinely storing scads of full-HD alt-frame 3D material.
 
[citation][nom]w3k3m[/nom]Sure, if you set up your OS to use ALL available RAM then the same would happen with 32GB. If OSs (Microsoft's particularly) were cars, then today's cars would merely squeeze 5HP out of a 5000cc engine. It is a pity that we have such powerful machines today only to be strangled by such inefficiently written software. If technical merit... As a programmer I can only say - what a waste of resources.[/citation]

Actually, with the system cache, this is what NT-class OSes do anyway. The problem with RAM going to waste was more of a 9x issue. The "memory in use" line you see in Task Manager is how much of the total is being used by PROGRAMS. If you scan the numbers you'll also see quite a lot is being silently used by disk/read-ahead cache as well. Even that won't necessarily fill RAM immediately (might not have been enough stuff read from disk, on an over-2Gb system!), and because of some overhanging programming practices some programs may still swap out to virtual memory (and complain if it isn't there) even when there's excess RAM left... but left on long enough and with enough concurrent things running at once, it WILL all get filled, and the cache will eventually have to start giving way to program memory. If you manage to fill the latter at the moment, though, you're either running some very heavy jobs/games, or something's gone badly wrong. It's taken until the release of Adobe CS5 for me to actually have a need to consider upgrading TO 2Gb. 512mb wasn't enough, but 1Gb allows enough XP headroom for heavy, 9x style swapping to be so rare as to be notable when it happens.

I support the complaint about inefficiency though. I don't do significantly more complex things with, say, Word, Excel or Outlook than would have been possible with the word processors, spreadsheets or email programs available for my family's "original" home computer with 1mb RAM. OK, there was no multitasking, options for inserting images were limited (mainly because of a lack of routes for putting them in the system), and the maximum sheet size was limited - but it's nothing that wasn't fixed in fullness when going up to an 8mb Win3.1 system and its 16-ish mb of Virtual memory. Word 6, Excel 4 and their 2007 variants have but a credit card's width of difference between them, mostly in improved colour handling and graphical (smart art, charts) effects that still could have been done with the old hardware (as proved by Serif Pageplus) had MS bothered to program them in. God only knows what all that extra - and let me emphasize, it's about 1000 out of the 1024mb installed in my "lowly" system, plus all of the 2Gb swapfile - RAM is being used for.
 
[citation][nom]Littlun[/nom]Eh, I'd be a little weary of storing such massive amounts on a single HDD anyway.[/citation]

What, pray, is the difference in terms of "massive amounts of data" between a 3Tb disk, a 2Tb one, or even a 100Gb? You're still going to lose several congressional libraries worth of information should it break.

Backups, man.
 
[citation][nom]thomaseron[/nom]It only displays about 93% of your total space, but you can use it all. Windows just calculates it differently. Right-click on your harddrive and choose properties. Check the Capacity and you'll see one specified in Gigabytes, and one in bytes. There should be quite a difference.[/citation]

A 3 "TB" disk will only have a touch over 3,000,000,000,000 bytes available, because the manufacturer will be offering the lowest amount they can legally get away with providing (every tiny increase in the allowable tolerances reduces production costs). Thanks to the difference between this decimal way of representing capacity and the binary one of increasing the magnitude name with each power of 2^10 (i.e. every 10 bits), there's a disparity. The new, rather kludgey way of representing it is to have the "binary" flavour suffixed with TiB instead of TB, and to call it ... urgh ... tebibytes (and gibi, mebi etc).

It wasn't so bad when it was the 2.4% difference between 1000 and 1024 at the kilo level, and no-one really minded you describing a 713KiB floppy as a "720", or even at the megabyte level where the 4.9% difference between a 1.39MiB and 1.44MB floppy, or a 850MB and 810MiB hard disk made little odds... but once you get up into the Giga/gibi and particularly Tera/tebi region, things get a bit silly, and you "lose" almost 10% of the rated capacity in moving between standards (a 1TB disk only has ~931GiB (0.909TiB) available for just this reason, despite offering over 1,000,000,000,000 bytes AFTER formatting and filesystem). I have a feeling we'll really have to change things when we push on towards Petabyte and Exabyte capacities 😉
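(Exact figures for the curious - a quick sketch of how the gap widens with each prefix:)

[code]
# How far the decimal prefixes drift from the binary ones as they grow.
prefixes = [("kilo vs kibi", 1e3, 2**10), ("mega vs mebi", 1e6, 2**20),
            ("giga vs gibi", 1e9, 2**30), ("tera vs tebi", 1e12, 2**40)]
for name, dec, bi in prefixes:
    print(f"{name}: {(1 - dec / bi) * 100:.1f}% smaller")

print(f"1 TB = {1e12 / 2**30:.0f} GiB = {1e12 / 2**40:.3f} TiB")
# kilo vs kibi: 2.3%  ...  tera vs tebi: 9.1%
# 1 TB = 931 GiB = 0.909 TiB
[/code]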
 
[citation][nom]spectrewind[/nom]Just a quick clarification on this:Assuming a single drive is used, you have have up to four primary partitions, one of them being marked as active.[/citation]

I don't think it will make much difference in this case though - rather than as before, where the problems all lay in the design of the hard disk access controller on the motherboard (fewer traces = less cost...), the BIOS, or the operating system access routines... this is a problem of the entire STANDARD running out of room, which unalterably affects how things are accessed at the actual DISK end. Splitting up your 8Tb physical disk into 4x 2Tb partitions won't help, as you won't be able to instruct the heads to read past the first 2Tb anyway. This would only be of use if the partitions are pre-set and presented as completely separate drives (that way being allocated two extra bits by the back door - using the disk ID system instead of pure LBA), or we get the physical channel issue fixed but the operating systems lag behind.

I've done it this way before, e.g. using a 10Gb (presented as 8Gb by the BIOS) drive with Win3.1 / 95a, or tricking 98SE into using the full size of a 250Gb disk (turns out the 128Gb limit is only on drive size and where the FAT table lies, so if you make 2x 125Gb partitions, it still works so long as your motherboard is 32-bit LBA capable), but it wouldn't work here. It's exactly the same problem as running out of CHS in the early 90s, for which we moved to LBA, with some fakery of "extended CHS" emulation for not-fully-compatible boards.
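(A quick sketch of why the partition trick runs out of road this time: the MBR fields describing where a partition starts and how big it is are both 32-bit sector counts, so the table can't even point at space much beyond the first couple of terabytes - this is illustrative arithmetic, not a recipe:)

[code]
# Why carving up a big drive doesn't dodge the MBR limit: the partition
# table's "start LBA" and "sector count" fields are each 32 bits wide.
SECTOR = 512
max_field = 2**32 - 1                      # largest value either field can hold
print(f"latest possible partition start: sector {max_field:,} "
      f"(~{max_field * SECTOR / 1e12:.2f} TB into the disk)")
print(f"largest possible partition size:  ~{(max_field + 1) * SECTOR / 1e12:.2f} TB")
# So on a hypothetical 8 TB drive, everything past the first few TB simply
# can't be described by the table at all, no matter how you slice it.
[/code]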
 
[citation][nom]mayne92[/nom]Even if Seagate comes out with a 3Tera HDD I wouldn't buy from them. I had loyalty with them but seem to have a knack for crappy firmware and high failure rate HDD.[/citation]

Yeah, they used to be my go-to guys for reliable disks (their older ones, up to at least 40Gb, used to be rock solid and almost unbreakable, even after being physically abused), but I'm not going to buy any of theirs for a while. Had both a Maxtor (which uses SG internals) and an actual Seagate external disk fail on me in short order over the last couple of years despite very careful handling, which even a typically-flaky WD didn't (bought as a gamble, and some of whose advanced functions never worked). They may recover, but for now it's going to be Iomega*, Hitachi, maybe WD...

* Bet I'll find out they use SG or WD internals now 😉 but they must get the ones that score highly on QC tests, because they're as reliable as oldskool Seagates.
 

I don't think so. There's absolutely no reason to change it, because HDD capacities are not inherently powers of 2. For flash memory devices like SSDs, though, capacity should be reported using the binary system, because those devices do have inherently binary capacities.
 
Windows XP Ultimate by Johnny (32-bit) can see and use a 3TB partition if you format it to FAT32 with "CompuApps SwissKnife". This works for me, as I store movies for viewing on a PS3. I have 3,692 movies on it and still have about 400 gigs left. The hell with Windows 7 - XP is fine if you know how to get it to work for you.
 