Discussion Pentium 4s Were Fast. Core i7s Are Not.

In my stack of junk in the garage, I have a Dell Latitude laptop, from my old workplace.
PIII CPU(?), 1998 era.
1GB RAM, I think.

Two drives available: one with Win 2000, one with Puppy Linux.

It would take 3-4 minutes to boot up with either drive.
If I had the power cable handy, I would test this. (I don't.)

Any system today that takes 3 minutes to present a usable desktop would result in: "OMG!!! WTF IS WRONG!!"

Rose colored glasses indeed.
I had some server computers (fanless Pentium IIIs) running Debian 6 back in the day. They were smooth and nice; definitely not fast, but for what they were worth they did more than anticipated. Debian 6 was a very good OS: very stable and fast.
 
Render farms. Massive render farms.

When you have a minute, look up Silicon Graphics and their workstations; those were the machines people used to make Toy Story and the like.

Ooh, found the stats:


"A cluster of 117 (87 dual-processor and 30 quad-processor, 100-MHz) SPARCstation 20s with 192 to 384 megabytes of RAM, and a few gigabytes (4 or 5gb) of disk space each. They ran Solaris, with Pixar’s proprietary “Renderman” software, and a SparcServer 1000e for job distribution.

Pixar had some storage arrays totalling ~250GB (an SGI Challenge with 144 gigabytes, and a Sun array of 108 gigabytes) that the movie was ultimately rendered to before being backed up to magnetic tape (no more than 40GB per tape, likely less according to wikipedia’s specs on the Exabyte 8mm data tape aka data8 format) and then rendered to film for delivery, as the disk array was not actually large enough to store the final render in its entirety."
For some reason, I couldn’t see your full post until I hit reply. (This has happened many times with other posts as well.) My question is: are these still used today? How powerful exactly are they? I don’t know if you would be able to benchmark them, since I’m sure they don’t run Windows.
 
I remember the breathless P4 logo while the machine booted up, before the Windows XP logo came up. You felt a sensation of power never to be duplicated again.
I can kind of remember that feeling the first time I had a P4 2.66 GHz, but it faded fast.

If I took my first car, a '69 Ford Torino, I could say the same. It had the big-block 351 and that car moved. Looking back, that car was too fast and too heavy. I loved the crap out of that thing back then, but today, even if one were made available for me to take a spin around the block, I think I would pass.

We do the same things today on our PCs, but it's all that revised background stuff that makes it seem like we're at the same spot power-wise.

Sometimes it feels like chasing that carrot on a string, upgrading just to stay where we already were. But doing a side-by-side of where we were versus where we are is an eye-opener.
 
Rose-tinted glasses.

I mean, sure, it's not entirely without merit – *some* things do feel slower and clunkier today than they ought to, and *some* programs are lazily written. Many webpages are slow, often because of all the junk the adware ecosystem pulls in...

But the requirements are also a lot higher – we're running 1440p or 4K monitors at 100+ Hz refresh rates rather than a dinky 1024x768@60Hz, and even for 2D work that takes some resources: higher-quality font rendering, higher-resolution media, supersampling and downscaling, etc. Some of the applications that feel less snappy than "back in the day" also do a lot more - the development environments I use are a lot more powerful than 10+ years ago, and deal with much larger codebases.
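To put rough numbers on just the display side (the two modes below are illustrative picks of mine, not anyone's actual setup), the pixel-throughput arithmetic alone is striking:

```python
# Rough pixel-throughput arithmetic; the display modes are illustrative examples only.
old = 1024 * 768 * 60      # pixels per second at 1024x768 @ 60 Hz
new = 2560 * 1440 * 144    # pixels per second at 1440p @ 144 Hz
print(f"{new / old:.1f}x more pixels pushed per second")   # roughly 11x
```

And that's before counting the per-pixel work of compositing, font rasterization, and scaling.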

And a whole lot of stuff is just vastly faster now. Compressing all the tracks of an audio CD to MP3 can be done in the blink of an eye on a mid-range computer today - that was a couple of minutes *per track* in the Pentium 4 days.
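If anyone wants to sanity-check that on their own hardware, here's a minimal timing sketch. It assumes the LAME command-line encoder is installed and that the CD has already been ripped to WAV files in a ./tracks folder - both assumptions are mine, not part of the claim above:

```python
# Time how long it takes to encode every ripped WAV track to MP3 with LAME.
# Assumes the `lame` binary is on PATH and ./tracks/ holds the ripped WAV files.
import glob
import subprocess
import time

start = time.time()
tracks = sorted(glob.glob("tracks/*.wav"))
for wav in tracks:
    # -V2 is LAME's high-quality VBR preset; the MP3 lands next to the input file.
    subprocess.run(["lame", "--quiet", "-V2", wav, wav[:-4] + ".mp3"], check=True)

print(f"Encoded {len(tracks)} tracks in {time.time() - start:.1f} seconds")
```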

Some stuff is slower, and the internet has gotten worse in a lot of ways... other stuff is just at a way higher level of complexity/quality. And then there's all the stuff that's amazingly faster – heck, I don't even have to go back a decade; the compute power available at mortal consumer cost has increased impressively even compared to five years ago.
 
Ray tracing a scene...

Back then – a 486 in the '90s, a P4 in the early 2000s – ray tracing a single semi-complex scene would take hours, or run overnight.

Today... it's pretty much rendered on the fly, at 60 fps in a game.
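For a feel of why the brute-force approach was so slow, here's a toy sketch of mine - a single-sphere, no-bounce CPU ray tracer - whose cost is simply pixels × samples. Offline renders push resolution, sample counts, bounces, and scene complexity enormously higher than this:

```python
# Toy brute-force CPU ray tracer: one sphere, primary rays only, written as an
# illustration of how cost scales with width * height * samples per pixel.
import math
import time

WIDTH, HEIGHT, SAMPLES = 320, 240, 4   # offline film renders push all three far higher

def trace(ox, oy, oz, dx, dy, dz):
    """Return a grayscale shade for one ray against a sphere at (0, 0, -3), radius 1."""
    cx, cy, cz, r = 0.0, 0.0, -3.0, 1.0
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    # Solve |o + t*d - c|^2 = r^2, a quadratic in t.
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (lx * dx + ly * dy + lz * dz)
    c = lx * lx + ly * ly + lz * lz - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0.2                      # ray misses the sphere: flat background
    t = (-b - math.sqrt(disc)) / (2 * a)
    if t < 0:
        return 0.2
    nz = (oz + t * dz - cz) / r         # z component of the surface normal
    return max(0.0, nz)                 # crude "facing the camera" shading

start = time.time()
rows = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        shade = 0.0
        for s in range(SAMPLES):        # more samples = smoother edges, linearly more work
            u = (x + s / SAMPLES) / WIDTH * 2 - 1
            v = 1 - (y + s / SAMPLES) / HEIGHT * 2
            shade += trace(0.0, 0.0, 0.0, u, v, -1.0)
        row.append(int(255 * shade / SAMPLES))
    rows.append(row)
print(f"{WIDTH * HEIGHT * SAMPLES:,} rays in {time.time() - start:.1f} s")

with open("sphere.pgm", "w") as f:      # ASCII PGM; most image viewers can open it
    f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
    for row in rows:
        f.write(" ".join(map(str, row)) + "\n")
```

Even this trivial scene takes a noticeable moment in pure Python; a film frame multiplies the work by orders of magnitude.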
That's not the best comparison though. Realtime raytracing in a game takes a lot of shortcuts to achieve playable performance, which will often negatively impact visuals, whereas the single scene would be rendered with a focus on quality over performance, in many cases going past the point of diminishing returns. The realtime routines are cutting a lot of corners to minimize the number of calculations that need to be performed, like greatly reducing the number of light rays and bounces, as well as performing significant noise reduction after that's done and upscaling the resulting image, and in most cases only using raytracing for a portion of the scene's rendering.

And of course, realtime raytracing today isn't being performed on a CPU, but rather on specialized hardware designed with that purpose in mind, so it's not exactly a direct comparison. That additional hardware isn't really contributing to making a system feel responsive in general use, which seems to be the center of this discussion. Certainly, today's systems are much more capable of performing demanding workloads faster, but that doesn't necessarily translate to a more performant user experience outside of those brute-force workloads.

And on that note, realtime raytraced lighting effects only work on that specialized hardware today because of a lot of heavy optimization that's been done. All those shortcuts involving minimizing rays and bounces, reducing the noise of the resulting messy output and upscaling in a way that looks decent, are all heavily optimized by people who know what they are doing with the hardware at a low-level. And that's pretty much the opposite of what we have seen going on with a lot of other software that isn't pushing the limits of modern hardware. A large portion of programmers today are outright bad at optimization, or are at least not given the time to optimize, since there's less incentive to optimize code for something that will at least sort of work on most modern hardware.
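To make the scale of those shortcuts concrete, here's a back-of-envelope comparison; the resolution, sample, and bounce figures are purely illustrative picks of mine, not numbers from any specific film renderer or game engine:

```python
# Back-of-envelope ray-budget comparison with illustrative (made-up) settings.
def rays_per_frame(width, height, samples_per_pixel, bounces):
    """Total ray segments for one frame: pixels * samples * (1 primary + bounces)."""
    return width * height * samples_per_pixel * (1 + bounces)

offline  = rays_per_frame(1920, 1080, samples_per_pixel=512, bounces=8)  # quality-first
realtime = rays_per_frame(1280, 720,  samples_per_pixel=1,   bounces=2)  # then denoised
                                                                         # and upscaled
print(f"offline : {offline:>14,} ray segments per frame")
print(f"realtime: {realtime:>14,} ray segments per frame")
print(f"the realtime path traces roughly {offline / realtime:,.0f}x fewer rays")
```

And that gap is before the denoiser and upscaler do their work to hide how sparse those realtime samples are.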

And that's the problem. Outside of a limited subset of tasks that still push hardware to its limits in a clearly measurable way, modern software is largely becoming less optimized. Take a web browser, for example. There's less "need" for the browser's developers to focus on performance than there was a decade or two ago, since the minimum baseline of hardware tends to be higher, so they are more prone to letting optimization slide. It also doesn't help that Chrome has attained a near-monopoly on the browser space, much like IE had a couple of decades back, so we don't have nearly as much competition between browser engines as we had when there were a number of them all vying for attention.

Microsoft's browser is now a Chromium reskin. Opera is now a Chromium reskin. And Chrome itself has become a bloated, unoptimized memory hog, with Google seemingly not caring much about performance since they strong-armed their browser into the number one position 10 years ago. Outside of WebKit, which has its own monopoly on Apple devices (and which Chromium itself was forked from), the only real competition would be Firefox, though they seem content to no longer try to directly compete with Google, getting by on whatever limited resources they have access to as they continue to slide into obscurity, perhaps with the eventual fate of becoming a Chromium reskin themselves. So the developers of Chromium don't have any significant competition for the browser to be compared against, and as a result performance (and new functionality in general) takes a backseat to simply maintaining the browser and keeping it secure so that Google can continue to use it for its primary task of harvesting user data.

This lack of optimization extends to web developers as well. Developing a website used to be a careful balancing act between making the site look good and keeping it usable on lower-end hardware and slow Internet connections, but that's generally considered less important today. With faster Internet connections and faster hardware in general, many web developers care less whether their page is bloated with unnecessary images at larger-than-necessary file sizes, or with advertisements and tracking scripts that bog down performance unless the end user deals with them. Combine that with the lack of optimization in the browser and the increased background activity from OS processes, and it's no wonder the web browsing experience of today ends up significantly less optimized than it was in the past.

There are undoubtedly many parts of a user's computing experience that will perform better today, particularly with hardware like SSDs that have helped counter the unoptimized software bloat, but there are definitely areas where there have been regressions made, or no obvious improvements despite being run on significantly faster hardware.
 
That’s what I was going to say, but I decided that it just wasn’t worth arguing the point as it’s still an interesting comparison.
 
If you were able to take modern hardware and use it in the context of software 20 or 30 years ago, it would be lightning fast in comparison to the hardware of the day. It's unfortunate that powerful hardware disincentivizes optimization, but what can you do?

Are these still used today? How powerful exactly are they?
For modern 3D animated films, it'll still be server farms or the like, and definitely not rendered in real time. The ray-tracing used in animated films is far beyond the real-time capabilities of even the most expensive GPU.
 
Software.

I remember that back in the day (around 2005), 1 GB of RAM could load programs and do almost as much as 48-64 GB can today.
Many programs are written badly, and it seems my Core i7-12700H performs simple tasks like web browsing even a little slower than a P4 did back then.
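For anyone curious where those gigabytes actually go, here's a quick sketch that totals a browser's resident memory across all of its processes. It assumes the third-party psutil package is installed, and 'chrome' is just an example process name:

```python
# Sum the resident memory of every process whose name contains the given string.
# Requires the third-party psutil package; the default target name is an example.
import sys
import psutil

target = (sys.argv[1] if len(sys.argv) > 1 else "chrome").lower()
total_bytes = 0
count = 0
for p in psutil.process_iter(attrs=["name", "memory_info"]):
    name = p.info["name"] or ""
    mem = p.info["memory_info"]
    if target in name.lower() and mem is not None:
        total_bytes += mem.rss          # resident set size, in bytes
        count += 1

print(f"{count} '{target}' processes using about {total_bytes / 2**30:.1f} GiB of RAM")
```

With a pile of tabs open, a modern multi-process browser will typically report more resident memory by itself than an entire 2005-era PC had installed.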

In the past, computers were infinitely slow and software was well written. Today, computers are infinitely fast and software is badly written. If this trend continues, simple tasks will soon not run well at all, even on the most powerful modern hardware.

Do write up your thoughts. Thank you!
One thing I distinctly remember from the Pentium 4 (single-core) era is that the entire screen, including the mouse, would freeze when 100% of the CPU was being used. That included virus scans, heavy applications, and even viruses (hey, it happened a lot more back then). After I purchased a dual-core CPU I never had a single freeze like that again. It was groundbreaking.
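That starvation effect is easy to poke at even now. Below is a small experiment of mine (just an illustration, nothing era-accurate): start some CPU-burning worker processes and measure how late a 50 ms 'UI tick' in the main process runs. Leave a core free and the tick stays close to on time; oversubscribe every core and the lag creeps up - and on a genuine single-core machine, one busy process was enough to make the whole desktop feel frozen.

```python
# Start N busy-loop workers, then report how late a trivial 50 ms "UI tick" in the
# main process runs. Try N = cpu_count() - 1 versus N = 2 * cpu_count() and compare.
import multiprocessing as mp
import sys
import time

def burn(stop_at):
    """Spin the CPU until the deadline passes."""
    while time.time() < stop_at:
        pass

if __name__ == "__main__":
    n_workers = int(sys.argv[1]) if len(sys.argv) > 1 else mp.cpu_count()
    deadline = time.time() + 5
    workers = [mp.Process(target=burn, args=(deadline,)) for _ in range(n_workers)]
    for w in workers:
        w.start()

    # Pretend this loop is the GUI thread trying to redraw the pointer every 50 ms.
    while time.time() < deadline:
        t0 = time.time()
        time.sleep(0.05)
        lag_ms = (time.time() - t0 - 0.05) * 1000
        print(f"UI tick finished {lag_ms:.1f} ms late")

    for w in workers:
        w.join()
```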

But other than that, yes, the operating system was much snappier. As programs get bigger, they become more inefficient. Not endorsing it, but I think some Linux variants still have that 'snappiness' that older Windows versions had.