Chkdsk is a Windows application. It (and Windows itself) keeps the Windows list of Bad Sectors in a file on each respective drive: the C: drive has a file on it with Windows' list of Bad Sectors for that drive, and the D: drive has its own list on it. (On NTFS volumes this is the hidden $BadClus metadata file.) These files are essentially hidden, so normally only Windows uses them. But they are no big secret, so anybody writing their own utility to do diagnostics and repairs on a HDD under Windows might refer to that list.
No software could reliably detect and reverse the "fixes" - or at least, it would be VERY difficult! I don't even know the full details of how the NTFS File System works; as I understand it, Microsoft has never officially published them. So what I'm about to outline is NOT exactly what happens, but it will give you some feeling for the complexity. This outline is of the older FAT system; NTFS uses more structures and tracks much more data, in greater complexity.
In FAT, there are two fundamental structures at the base of the system. The first is the Root Directory. (Strictly, FAT allocates space in clusters - fixed-size groups of Sectors - but "Sector" will do for this sketch.) In the older FAT12/FAT16 layouts, the Root Directory had a fixed size set when the drive was formatted, commonly 512 entries and that's all; FAT32 removed that limit by storing the Root Directory like an ordinary file. For each file, a directory entry has a number of "fixed fields" - that is, the length of each field is rigidly defined. One entry holds things like the file name, the file extension, the file length in bytes, the file date, and the number of the very first Sector it uses. If a file is truly deleted, its name is modified by changing only the first character to one particular reserved marker. Now, a fixed-size Root Directory is not much room! But the system allows you to create Sub-Directories (now called Folders). Each Sub-Directory is actually simply a special kind of file structured in the same way. But since a file can be of any length, so can a Sub-Directory, and it can hold a VERY large number of files (and Sub-Sub-Directories, etc.). So that's how the START of each file can be found, along with its length.
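To make the directory-entry idea concrete, here is a minimal Python sketch that parses the classic 32-byte 8.3 directory entry (name, extension, attribute byte, starting cluster, byte size). The sample entry bytes at the bottom are fabricated purely for illustration:

```python
import struct

DELETED_MARKER = 0xE5  # reserved first name byte marking a deleted entry

def parse_dir_entry(raw: bytes) -> dict:
    """Parse one 32-byte FAT directory entry (classic 8.3 layout)."""
    assert len(raw) == 32
    name, ext = raw[0:8], raw[8:11]                       # 8.3 file name
    attributes = raw[11]                                  # attribute flags
    first_cluster = struct.unpack_from("<H", raw, 26)[0]  # start of the file's chain
    size = struct.unpack_from("<I", raw, 28)[0]           # length in bytes
    return {
        "deleted": raw[0] == DELETED_MARKER,
        "name": name.rstrip(b" ").decode("ascii", "replace"),
        "ext": ext.rstrip(b" ").decode("ascii", "replace"),
        "attributes": attributes,
        "first_cluster": first_cluster,
        "size": size,
    }

# Fabricated example: "README.TXT", 1234 bytes, starting at cluster 5.
entry = (b"README  TXT" + bytes([0x20]) + bytes(14)
         + struct.pack("<H", 5) + struct.pack("<I", 1234))
print(parse_dir_entry(entry))
```

The key point is the rigid layout: every field sits at a fixed byte offset, which is exactly what "fixed fields" means above.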
The other key structure is known as the FAT, or File Allocation Table. It has a fixed size, and every entry in it is just the number of a Sector. Hence this system imposes a limit on the size of the HDD, because it limits the number of Sectors it can track (a Sector being 512 bytes) - one of the reasons FAT is rarely used these days. Within the FAT, the entry that corresponds to one Sector on the HDD holds the Sector Number of the NEXT Sector that is part of the same file. Remember that the Directory entry points to the file's FIRST Sector; from then on, each entry points to the next. The entry for a file's LAST Sector holds a special value in it that signals the End of File. When a file is truly deleted, all of its entries in the FAT are reset to 0, so a string of zeros in successive FAT positions indicates Sectors that a file USED to occupy but that are now available for re-assignment.
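The chain-of-pointers idea can be sketched in a few lines of Python. The FAT here is just a toy list, with 0 marking a free entry and values of 0xFFF8 or above (the FAT16 convention) marking End of File:

```python
def fat_chain(fat, start):
    """Follow a file's chain through a simplified FAT.

    fat[i] holds the number of the NEXT Sector in the file;
    0 means free, and 0xFFF8 or above means End of File.
    """
    EOF = 0xFFF8
    chain = [start]
    cur = start
    while fat[cur] < EOF:
        if fat[cur] == 0:
            # A chain should never point into free space.
            raise ValueError(f"entry {cur} points to free space - chain is broken")
        cur = fat[cur]
        chain.append(cur)
    return chain

# Toy FAT: a file starts at Sector 2 and occupies 2 -> 3 -> 7 -> End of File.
fat = [0] * 16
fat[2], fat[3], fat[7] = 3, 7, 0xFFFF
print(fat_chain(fat, 2))  # [2, 3, 7]
```

Note how the Directory only has to supply the starting number (2 here); everything after that comes from the FAT itself.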
As a HDD is used and files are created, the Directory and Sub-Directories fill up with entries, and the FAT fills up to track Sector use. When deletions happen, these are revised, leaving unused FAT positions that indicate available Sectors. As more files are created, those Sectors MAY be assigned to new files and filled with new data, and the corresponding FAT entries are updated. At first, when new files are created, the system will likely choose a block of never-used Sectors for each file, and the file's FAT entries will look like each one simply points to the next Sector on the HDD. But after a while those blocks of empty Sectors are all broken up, and the FAT entries start to look like the file is scattered all over the HDD, with no pattern to the sequence at all. This is called Fragmentation.
So now imagine what happens when CHKDSK decides one Sector is Bad, AND you have told it to "fix" that problem. It writes that Sector Number to its hidden file of Bad Sectors on the HDD and never uses it again. Then it finds a Sector not in use, assigns it to the gap in the file, and writes that Sector Number into the correct FAT position. Much later, some hypothetical software tool comes looking for files that have been "fixed" in this way. By scanning through the Directory and FAT, it can identify the START of each file's trail in the FAT, and it can check whether each FAT entry points simply to the next Sector in sequence. If it finds one NOT in simple sequence, it could assume that was a substitute. BUT - and it's a HUGE BUT - that out-of-sequence entry could easily have come from normal operation and NOT a "fix", and it has no way of knowing which is the real cause. Even if it assumes this was a "fix", it STILL does not know where the original Sector was that held the faulty data. Now, MAYBE, since it knows what Sector Number should have been in that sequence (IF the sequence had been maintained undisturbed), it could go looking for that entry in Windows' hidden file of Bad Sectors and try to find that Sector. THEN, even if it gets that far, it has to try to recover data from a Sector that is known to have yielded bad data before. (Moreover, in modern HDD's, the HDD itself already tried valiantly to recover data from it before letting Windows know it could not.) So what are the chances that this hypothetical utility can recover data from that? Just about NONE!!
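The hypothetical utility's first step - spotting entries NOT in simple sequence - is easy to sketch. What the sketch also makes plain is the HUGE BUT above: it flags ordinary fragmentation and bad-sector substitutions identically, with no way to tell them apart:

```python
def out_of_sequence(chain):
    """Find points where a file's Sector chain does not simply step to
    the next Sector in order. Each such pair is a candidate for EITHER
    a substitution after a "fix" OR plain ordinary fragmentation -
    the chain alone cannot distinguish the two causes."""
    return [(a, b) for a, b in zip(chain, chain[1:]) if b != a + 1]

# A chain that jumps from 3 to 7: was Sector 4 remapped, or was the
# file simply fragmented? The data gives no answer.
print(out_of_sequence([2, 3, 7, 8]))  # [(3, 7)]
```

So even this easy detection step only yields ambiguous suspects, which is why the recovery odds work out to just about NONE.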
Bottom line: when you have lost data to a Bad Sector, even if the file has been "fixed" so it is readable with one corrupted Sector, your chances of recovering the original data are essentially zero. You are MUCH better off trying to find another copy of that file somewhere else, or resigning yourself to the loss of the file.
Now to better news - the SMART warnings. As I said, one type of warning you could see is one from Windows reporting a Read Error on a drive, and these usually indicate a real possibility of damaged data. But well before that happens, the HDD itself is running its own data quality monitor and trying to recover data before it is permanently lost. It is also running its own routine diagnostic checks on itself and logging the results into records it keeps on itself. These records are accessible through the SMART reporting system on the HDD.
When do you get to see SMART warnings? Usually you have two opportunities. The first is run by the BIOS, not by Windows or another OS: every time your system boots, one of the things the BIOS does is ask each HDD for its SMART report, and then it prints a brief message on the screen as part of the start-up displays. Most times you won't notice it, because it says something comforting like "SMART data OK". You can turn this check and message off in BIOS Setup, but most people leave it on, and that's good.
The second: any application software, like HD Tune, can ask each HDD for a SMART report and show you everything it gets back in detail - often a hundred lines or so. Those are the two types of messages you report seeing. Usually the best use of the SMART reports is to warn you that small failures have been happening more often, indicating an increased probability of future failure. The prudent response is to replace the HDD while the failures are still minor enough that the HDD's own automatic data recovery keeps up, so NO files are lost.
Regarding choosing a new HDD, I can offer a few comments. First, check what OS (version of Windows) you are using. A 3 TB HDD is larger than the commonly used Partitioning system, MBR, can handle: MBR limits a Partition to about 2.2 TB. To use the full capacity as one large drive, you need the newer system known as GPT, which XP does not support. Now, I think there are ways to use a 3 TB unit in Win XP by creating TWO Partitions, each under 2.2 TB, but I'm not sure of the details. On the other hand, if you are using Vista or something more recent, that is not an issue.
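That MBR size limit comes straight from the format: MBR records partition sizes as 32-bit sector counts, so with standard 512-byte sectors the arithmetic is:

```python
# MBR stores a partition's size as a 32-bit count of sectors.
max_sectors = 2**32            # 4,294,967,296 addressable sectors
sector_size = 512              # bytes per sector on a traditional HDD
max_bytes = max_sectors * sector_size

print(max_bytes)               # 2199023255552
print(max_bytes / 10**12)      # ~2.2 TB in decimal terabytes
```

That is why a 3 TB drive cannot be one MBR partition, while anything up to about 2.2 TB is fine.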
HOWEVER, there is another issue. The common 32-bit versions of Windows do NOT allow you to BOOT from a HDD Partitioned in the GPT system; the 64-bit versions do, provided your motherboard can boot in UEFI mode rather than legacy BIOS mode. So if your plan is to use a 3 TB unit for data storage only, you'd be all right; but if you want one of these as your BOOT drive, you MUST be running a 64-bit version of Windows (preferably Vista or later) on UEFI-capable hardware.
Regarding the WD lineup of HDD's, my impression is that the Blue line is the low-cost base of the product line. The Green drives are specifically designed to minimize power consumption (and hence heat generation) through a number of tricks like slower rotational speeds, slower access, and powering down after a period of non-use. They are very popular and well-priced, and I do not recall seeing anything to indicate they fail any more than others. Above those, the Black line offers faster speeds and data access rates, plus other features to enhance performance, and sometimes longer warranty periods; they are hence more expensive. Then there is the Red line, designed for NAS boxes and RAID arrays, and not really what you need.
There are other HDD makers, of course, and Seagate is certainly a major competitor in this area. That company had a problem several years ago with one particular new model, but I have not heard of any significant problem with Seagate units recently. Like WD, Seagate has a "green" line as well as a higher-performance line, plus some specialty designs. Actually, my machine has Seagate drives, but we do have WD units in other machines here. I'd consider both companies' products reliable.
Since you have seen SMART warnings through HD Tune, let me suggest something. Run HD Tune again on your drives and print out the results. Then shut down the system and disconnect power. For each drive, go to the data cable plugged into the back of the drive and carefully unplug it, then plug it back in; do this two or three times. Do the same at the other end of the cable, where it plugs into the mobo. Then go back to the drive and do the same with the wider power supply connector. Lastly, look around carefully and verify that all the cables you handled are reconnected properly, AND that you did not disturb any other connection in the process. Why do all this? Well, one of the reasons you can get bad data reads is that the connection between a cable and a socket is loose, or dirty with a bit of oxidation on the metal contacts. Unplugging and reconnecting can "scrub" the metal surfaces cleaner and solve that problem. If this actually is NOT the source of your troubles, the process will do no good at all, but it is easy, harmless, and free to try.
Now close up, reconnect power, boot up, and re-run the HD Tune tests to see if anything changed. When you're done, you might want to disconnect the one drive you have NOT been using, to preserve it.