What distro to use to resurrect an old Thunderbird rig?


randomizer (Champion, Moderator)
I have an Athlon T-bird rig I'm trying to resurrect from terribly slow back to usable. It's been running Windows XP for years with 448MB of RAM, but one of the RAM slots has kicked the bucket, so I'm down to 384MB now. I have run XP on it with 256MB, but I'm sure you can imagine what fun that would be, especially after using the same installation for 3 years straight. But I digress.

I am wondering if the newer versions of the common distributions would be too hefty for 384MB RAM and an 850MHz T-bird (it's degrading so I've toned down the overclock). I wouldn't be running KDE because it will probably suck up too much of the precious little RAM I have.

Full specs:

850MHz Athlon T-bird
384MB PC133
9800 Pro 128MB (overkill to the max, but I had a spare card lying around and it was better than the old GF2 MX400 that the PC already had 😛)
Sound Blaster Live! 16-bit. I think it's the Platinum version.
ASUS A7V (VIA KT133 chipset)
160GB 7200RPM Samsung PATA drive
 
If the Hyper-Threading in the i7 is the same as the Hyper-Threading in the P4, then almost any kernel should work.

With Hyper-Threading enabled you should see 8 fake CPUs when you boot up.

With Hyper-Threading disabled you should see 4.
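
A quick way to count them once booted is to ask the kernel directly:

    grep -c ^processor /proc/cpuinfo    # prints the number of logical CPUs the kernel sees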
 
P4 Hyper-Threading wasn't the same implementation, though. It had pretty negative impacts on single-threaded performance in particular, but that isn't the case with the i7. I suppose I could just try a distro and find out. 😉 Now I just need to re-partition my HDD... again.
 
I've never even heard of it before now. Granted, I was an all-AMD guy back then :)

Another thing I haven't been able to find anything about on Google (strangely) is whether, by default, you can remove a USB flash drive without unmounting/ejecting it, the way you can by default in Windows because write caching is disabled there.
 




AMD is good :)

Removing USB flash drives, memory cards or any other storage devices without unmounting/ejecting them is not recommended, because the filesystem might be left in an inconsistent state. Linux caches everything, which is great for performance but not so great if you lose power or if the storage device is pulled out in the middle of a write operation. This is less of an issue with journaled filesystems, but it can still cause problems.

ext3, ext4, JFS, XFS, NTFS, etc. are journaled.

ext2, FAT12, FAT16 and FAT32 are not journaled.

Unfortunately, most USB flash drives, memory cards and other storage devices default to some form of FAT.
 
Try the "sync" option passed to mount.

You can edit your fstab or configure your automounter to use the "sync" and "dirsync" options.

For more info, see "man mount":

    sync    All I/O to the file system should be done synchronously.
            In case of media with limited number of write cycles
            (e.g. some flash drives) "sync" may cause life-cycle
            shortening.

    dirsync
            All directory updates within the file system should be
            done synchronously. This affects the following system
            calls: creat, link, unlink, symlink, mkdir, rmdir, mknod
            and rename.
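
As a sketch, a flash-drive fstab entry along those lines might look like this (the device name and mount point are assumptions, so adjust them for your system):

    /dev/sdb1  /media/usb  vfat  user,noauto,sync,dirsync  0  0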


commit=1 (the journal commit interval for ext3/ext4, in seconds) might help too.


Good luck :)
 
As an aside, I'm quite surprised to hear Linux enthusiasts say they've never configured (so presumably never compiled) the kernel.

I know some of these comments were made ages ago, but I only just got the chance to read them today. Anyway, in my earlier comment, I stated that the first time I ever installed Gentoo was also the first time I had ever configured and compiled the kernel, and since it was a bit scary at first, I spent ages and ages reading up on how to do it right. Strangely enough, this still didn't stop me from getting a kernel panic when I tried running my first ever home-compiled kernel!

I tried to make sure that I didn't compile in support for conflicting drivers, but that ended up being my problem, as I had inadvertently compiled in support for conflicting chipsets! I don't remember the particulars, but I do remember wondering why all the LEDs on my keyboard were flashing and the OS just hung at boot time. Even this was easy enough to fix once I knew there was a problem and what it was. The thing I love about Gentoo is that when stuff breaks, it is very easy to fix (which is not to say that it would be harder in any other distro).

The reason why I hadn't done it up to that point (I'd already tried Fedora and Ubuntu for almost 3 years by then) was that Gentoo and the kernel (configuring/compiling the kernel and Gentoo go hand in hand) were presented as this seemingly insurmountable task that only true *nix gurus were capable of. So, seeing as I was familiar with Linux but by no means considered myself an expert, I just hadn't done it before. Configuring/compiling the kernel also isn't really encouraged (or discouraged) in Fedora or Ubuntu, so I hadn't felt compelled to do it until I dove head first into Gentoo.

By far, configuration took me the longest. I spent nearly a day and a half on that, simply because I read ALL the options. The actual compiling took about 5 minutes. I thought it would take much longer because the kernel is such a complex beast, but really, compiling it takes much, much less time than compiling, say, a new version of GCC or Firefox.
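
For anyone who hasn't tried it, the whole sequence is shorter than its reputation suggests. A rough sketch (exact steps vary a little between kernel versions and bootloaders):

    cd /usr/src/linux
    make menuconfig         # pick your drivers and options interactively
    make                    # build the kernel image and any modules
    make modules_install    # install the modules under /lib/modules
    make install            # copy the kernel to /boot and update the bootloader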

As for the size of the produced kernel, I think it is actually heavier than my laptop's Ubuntu kernel (something like 4.1MB vs 3.4MB), but I also compiled in support for the ext3 file system directly, which means I don't need an initramfs image, saving something like 6MB, which is admittedly not much.

Really though, depending on the flags you give GCC, the binaries you end up with can actually be BIGGER than what you'd have under a distro like Ubuntu. It all comes down to what you optimize for: a lot of the time, optimizing for speed yields a bigger compiled program, and sometimes a smaller compiled program will run faster. This happens for a number of very technical reasons that I won't discuss here.

So, whether or not a Gentoo installation ends up being smaller than an Ubuntu installation depends on what you optimize for, what you compile in support for, and what you choose to install. By default, once you have the base system installed (CLI only), it's probably less than 1GB, but if you choose to go crazy and install XFCE _and_ GNOME _and_ KDE to see which you like better, then of course things are going to balloon out of control. Another thing to mention is that the source files you download will also take up some space, so it is important to clean them out once in a while (this isn't such a big deal in a binary-based distro like Ubuntu).
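
To make the flags point concrete: on Gentoo those knobs live in /etc/make.conf, something like the sketch below (the exact values are illustrative for a T-bird, not a recommendation):

    CFLAGS="-O2 -march=athlon-tbird -pipe"    # -Os here would favour smaller binaries over speed
    CXXFLAGS="${CFLAGS}"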

And as to all the talk about removing flash drives without unmounting or "safely removing" (in Windows speak), this is generally a bad idea. If you ever have the opportunity to write a file system, you will understand why. I had to write a file system (albeit a really rudimentary and crappy one) for an EEPROM in my embedded systems class at school, and it really opened my eyes.

What it comes down to is that the majority of the writes made to files in the file system are stored in memory until such time that it is convenient to write a whole bunch of them back at once. This is usually because devices like flash or EEPROM make it as cheap to write back a block of data (512 bytes or more) at a time as to write back a single byte, so it makes more sense to buffer the writes and send them later. That, coupled with the fact that writing to a disk is orders of magnitude slower than writing to RAM, makes it very enticing to do things this way.

The big drawback comes when either there is power loss or you yank the disk. Since the file system hasn't had a chance to close() the files yet, it doesn't get to commit the most recent changes back to the file. The worst case for the FS I wrote was that you'd lose 512 bytes of changes, but in a real FS, the results can be much worse.
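
You can actually watch this buffering happen from a shell (the mount point below is an assumption):

    cp somebigfile /media/usb/    # returns quickly; much of the data may still be sitting in RAM
    grep Dirty /proc/meminfo      # shows how much cached data is still waiting to be written
    sync                          # blocks until everything has been flushed to the device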

In any case, I have babbled long enough.

--Zorak
 
I'm sure you are aware, but didn't mention, one of the reasons that a custom-configured kernel tends to be bigger than a stock one. A stock kernel is generally fairly lean, with much of the functionality contained in hundreds of modules, whereas most people configuring the kernel themselves will build the functionality for the devices they know they will be using into the kernel itself. Hence a larger (but IMO more efficient) kernel, but hardly any modules.

My major gripe with Fedora is that it positively discourages you from tailoring the kernel to your particular requirements. In Gentoo (as in FreeBSD, which I keep banging on about) you are encouraged to tailor the system, and the kernel, to your particular hardware and general requirements. And I've got to say that by compiling your own kernel, and getting it wrong once or twice, you learn a whole lot more about the boot process and its configuration than if you stick with a stock Fedora install.
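
You can see just how module-heavy a stock distro kernel is at a glance:

    lsmod | wc -l    # one line per loaded module (plus a header); often dozens on a stock kernel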
 

That's only true if write caching is enabled though, isn't it? At least on Windows, it appears as though the only difference between "optimised for quick removal" and "optimised for performance" is write caching, but I don't know for sure because that's all you're told. I've never had data corruption on any flash drive removed without unmounting in Windows, as long as I had caching disabled. I don't want to argue (you're the one who's written a file system 😉), just curious, that's all.
 


Wow. I wouldn't give me THAT much power just because I wrote an FS for one class, and a really crude FS at that! You should probably be fine if you can turn off caching (i.e. force it to write as soon as you request a file write). I thank you for your deference, I am honored, but you already knew what you were doing. I just wanted to explain the rationale behind why the situation is the way it is. By the way, if you ever get the chance to write an FS, take it. It is a fun and enlightening experience (for me, anyway).

I'm sure you are aware, but didn't mention, one of the reasons that a custom-configured kernel tends to be bigger than a stock one. A stock kernel is generally fairly lean, with much of the functionality contained in hundreds of modules, whereas most people configuring the kernel themselves will build the functionality for the devices they know they will be using into the kernel itself. Hence a larger (but IMO more efficient) kernel, but hardly any modules.

My major gripe with Fedora is that it positively discourages you from tailoring the kernel to your particular requirements. In Gentoo (as in FreeBSD, which I keep banging on about) you are encouraged to tailor the system, and the kernel, to your particular hardware and general requirements. And I've got to say that by compiling your own kernel, and getting it wrong once or twice, you learn a whole lot more about the boot process and its configuration than if you stick with a stock Fedora install.

You are absolutely right. I had forgotten about this, although I was aware of the practice. In fact, that is the way I set up my kernel (no modules and only support for my HW). It was the conflicting integrated drivers that caused my kernel panics, and I agree I did learn quite a bit from this experience, though I still have a long way to go. I hadn't realized that Fedora keeps you from customizing and compiling your own kernel. How strange! In any case, it was my first distribution and my first ever non-Windows, non-Macintosh OS, so it has a special place in my heart 😉 It allowed me to configure enough about it that I could learn how the system works and go from a beginner's level to a more intermediate level, and I believe that certainly has some value.

You know, one of these days I am going to finally get around to trying the *BSDs. Hopefully it will be sooner rather than later.

-Zorak
 
I hadn't realized that Fedora keeps you from customizing and compiling your own kernel.
No, you can do it. It's just not made as easy as in Gentoo. I suspect the percentage of Fedora users who customize their kernel is much lower than the percentage of Gentoo users who do.

The reason I got into kernel compilation in the first place is that when I started using Linux, you pretty much had to. This was a 0.something kernel, before the days of modules. Now it's optional, but still a good learning experience IMO.
 


That's why water cooling was invented: you can run hot, full-performance CPUs in a normal-sized case quietly 😀 Special low-TDP units are only advantageous in situations where you have a specific limit on case size, cooling capacity, or electrical power supply; otherwise they are just needlessly more expensive than standard chips. Notebooks, high-performance HTPCs, and blade/rack servers are good candidates for special low-TDP chips. Other units would do better with either a budget chip that doesn't suck much power thanks to its low clock speed and small die (file servers, most HTPCs, routers, kiosks, etc.) or a full-TDP standard chip.



In Gentoo, the package manager Portage does all of the configuring and compiling for you. In Debian-based distros, apt-build works in much the same way, except that you can't pass nearly as many options to apt-build as you can to Portage through /etc/make.conf. If you just have a source package, untar it somewhere, cd to the program's folder, and run ./configure && make -jn && sudo make install, where n = number of CPU cores plus one.
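
Spelled out for a hypothetical tarball (the package name is made up):

    tar xzf foo-1.0.tar.gz    # unpack the source
    cd foo-1.0
    ./configure               # generate Makefiles for this system
    make -j3                  # e.g. 2 cores + 1
    sudo make install         # installs under /usr/local by default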



I generally still compile everything as a module anyway. It's a pain to get some new widget, discover you didn't compile the module for it, and then have to goof around building individual modules from the kernel tree. It doesn't take all that much longer to compile a kernel with the default list of modules than a stripped-down one anyway, even on my old X2 4200+.
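
For the record, building just the modules in one directory of the tree looks something like this on 2.6 kernels (the directory is only an example):

    cd /usr/src/linux
    make M=drivers/usb/storage modules    # rebuild only the modules under that directory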



You can get a basic CLI-only install of Debian down to about 1 GB. My Debian Lenny headless file server uses about 2 GB of the HDD for the OS, but it is a standard Debian XFCE installation that I just SSH into remotely. I didn't care to remove X and such because the OS drive is an 80 GB HDD and space is not at a premium at all.



HyperThreading sucked on the Pentium 4 because its L1 cache wasn't big enough (~8 KB), few pieces of software were multi-threaded at the time, and CPU schedulers were too dumb to know which "CPU" was the physical core and which was the logical one. HT on the old P4s runs a lot better with modern OSes and modern software than it did when it was introduced, but the L1 cache issue, among others, means that the boost from HT on a P4 won't touch the boost from HT on a Nehalem.



Installing a kernel on Fedora is exactly the same as installing one on Gentoo if you don't use genkernel. The package manager in both cases dumps the kernel source in /usr/src, and then you execute your make commands and such in the exact same way. I use genkernel as it saves me from having to type in the various make commands myself, and it automatically calls menuconfig for me.
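
For reference, the genkernel route really is a one-liner (standard usage; flags vary a little between versions):

    genkernel --menuconfig all    # run menuconfig, then build and install the kernel and initramfs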