As an aside, I'm quite surprised to hear Linux enthusiasts say they've never configured (so presumably never compiled) the kernel.
I know some of these comments were made ages ago, but I only just got the chance to read them today. Anyway, as I said in my earlier comment, the first time I installed Gentoo was also the first time I had ever configured and compiled a kernel, and since it was a bit scary at first, I spent a long time reading up on how to do it right. Strangely enough, that still didn't stop me from getting a kernel panic when I tried to boot my first ever home-compiled kernel!
I had tried to make sure I didn't compile in support for conflicting drivers, but that ended up being exactly my problem: I had inadvertently compiled in support for conflicting chipsets! I don't remember the particulars, but I do remember wondering why all the LEDs on my keyboard were flashing and the OS just hung at boot time (which, I later learned, is how the kernel signals a panic). Even that was easy enough to fix once I knew there was a problem and what the problem was. The thing I love about Gentoo is that when stuff breaks, it is very easy to fix (which is not to say that it would be harder in any other distro).
The reason I hadn't done it up to that point (I'd already been using Fedora and Ubuntu for almost 3 years by then) was that Gentoo and the kernel (configuring/compiling the kernel and Gentoo go hand in hand) were presented as a seemingly insurmountable task that only true *nix gurus were capable of. So, since I was familiar with Linux but by no means considered myself an expert, I just hadn't tried it. Configuring and compiling the kernel is also something that isn't really encouraged (or discouraged) in Fedora or Ubuntu, so I hadn't felt compelled to do it until I dove head first into Gentoo.
By far, configuration took me the longest. I spent nearly a day and a half on it, simply because I read ALL the options. The actual compiling took about 5 minutes. I thought it would take much longer because the kernel is such a complex beast, but compiling it really takes much, much less time than compiling, say, a new version of GCC or Firefox.
As for the size of the resulting kernel, mine is actually heavier than my laptop's Ubuntu kernel (something like 4.1MB vs 3.4MB), but I also compiled in support for the ext3 file system directly, which means I don't need an initramfs image, which saves something like 6MB (admittedly not much). Really though, depending on the flags you give GCC, the binaries you end up with can actually be BIGGER than what you'd have under a distro like Ubuntu. It really comes down to what you optimize for: a lot of the time, optimizing for speed yields a bigger compiled program (things like inlining and loop unrolling duplicate code to avoid branches), and sometimes a smaller compiled program will actually run faster (compact code fits better in the CPU's instruction cache). There's a quick way to see this for yourself below.

So whether or not a Gentoo installation ends up being smaller than an Ubuntu installation depends on what you optimize for, what you compile in support for, and what you choose to install. By default, once you have the base system installed (CLI only), it's probably less than 1GB, but if you choose to go crazy and install XFCE _and_ GNOME _and_ KDE to see which you like better, then of course things are going to balloon out of control. Another thing to mention is that the source files you download also take up space, so it is important to clean them out once in a while (a problem that isn't such a big deal in a binary-based distro like Ubuntu).
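If you want to poke at that trade-off yourself, here's a tiny C sketch; the flags and the `size` comparison are just one illustrative way to look at it, and the actual numbers will vary with your GCC version and target, but it's the same mechanism that decides how big your kernel ends up:

```c
/* sum.c - toy loop to illustrate the size-vs-speed trade-off.
 *
 * Compile it two ways and compare the text segment sizes:
 *   gcc -O2 -funroll-loops -c sum.c && size sum.o   # favor speed
 *   gcc -Os -c sum.c && size sum.o                  # favor size
 *
 * With -funroll-loops GCC may duplicate the loop body several times
 * to cut branch overhead, giving a larger but often faster object;
 * with -Os it keeps the compact rolled loop. The same effect, scaled
 * up across thousands of functions, is what can make a "fast" build
 * bigger than a "small" one.
 */
long sum(const long *a, long n)
{
    long total = 0;
    for (long i = 0; i < n; i++)
        total += a[i];
    return total;
}
```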
And as for all the talk about removing flash drives without unmounting or "safely removing" them (in Windows speak): this is generally a bad idea. If you ever get the chance to write a file system, you will understand why. I had to write one (albeit a really rudimentary and crappy one) for an EEPROM in my embedded systems class at school, and it really opened my eyes. What it comes down to is that most of the writes made to files are held in memory until it is convenient to write a whole bunch of them back at once. This is usually because devices like flash or EEPROM make writing back a whole block of data (512 bytes or more) about as cheap as writing back a single byte, so it makes more sense to buffer the writes and send them later in batches. That, coupled with the fact that writing to disk is orders of magnitude slower than writing to RAM, makes it very enticing to do things this way. The big drawback comes when there is power loss or you yank the disk: since the file system hasn't had a chance to close() the files yet, it never gets to commit the most recent changes back to the file. The worst case for the FS I wrote was losing 512 bytes of changes, but in a real FS the results can be much worse.
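To make that concrete, here's a stripped-down sketch in C of the kind of buffering I'm talking about. To be clear, this isn't my class FS or any real file system's code; the 512-byte block, the simulated device array, and the fs_write()/fs_flush() names are all just assumptions for illustration:

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 512  /* smallest unit the device accepts */

static uint8_t device[16 * BLOCK_SIZE]; /* stand-in for the flash chip */

/* Committing a whole block costs about the same as committing one
 * byte on flash/EEPROM, which is exactly why buffering pays off. */
static void device_write_block(uint32_t block_no, const uint8_t *data)
{
    memcpy(device + (size_t)block_no * BLOCK_SIZE, data, BLOCK_SIZE);
}

static uint8_t  buf[BLOCK_SIZE]; /* pending writes live here, in RAM */
static uint32_t buf_block;       /* which block the buffer shadows */
static size_t   buf_used;        /* bytes accepted but NOT yet on "disk" */

/* Append data, committing to the device only when a block fills up.
 * Until then, those bytes exist only in RAM: yank the drive now and
 * they are simply gone. */
void fs_write(const uint8_t *data, size_t len)
{
    while (len > 0) {
        size_t n = BLOCK_SIZE - buf_used;
        if (n > len)
            n = len;
        memcpy(buf + buf_used, data, n);
        buf_used += n;
        data     += n;
        len      -= n;
        if (buf_used == BLOCK_SIZE) {          /* block full: commit */
            device_write_block(buf_block++, buf);
            buf_used = 0;
        }
    }
}

/* This is what unmounting / "safely remove" ultimately triggers:
 * force the partial block out so nothing is left only in RAM.
 * (A one-shot flush at unmount time; a real FS tracks dirty blocks
 * properly instead of this simple append-only scheme.) */
void fs_flush(void)
{
    if (buf_used > 0) {
        memset(buf + buf_used, 0xff, BLOCK_SIZE - buf_used); /* pad */
        device_write_block(buf_block, buf);
        buf_used = 0;
    }
}
```

Pull the drive (or the power) at any point before fs_flush() runs, and up to a block's worth of "written" data simply evaporates. That was the worst case in my toy FS; a real file system buffers far more than one block.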
In any case, I have babbled long enough.
--Zorak