Tom's Hardware Wants You: CPU Benchmarks 2011/12

I'd like to see more on power consumption, especially at idle, since most people leave their PCs on 24/7 with the CPU idle most of the time. With the latest round of GPU-on-CPU offerings from Intel and AMD, I would also like to see something that compares them to systems with discrete video cards and with graphics integrated onto the motherboard.
 
With the increasing interest in and use of Linux, I would also vote for including the Phoronix Test Suite. Of course this will involve extra work swapping out the OS, and it probably won't interest gamers, but I do think it is an idea whose time has come.
 
Boot/Shutdown benchmarks.
Service Pack upgrades.

I know these things are HDD/SSD-dependent, but with compressed OSes, a lot of the heavy lifting gets passed to the CPU.
 
I vote for emulation/virtualization benchmarks... virtual-environment performance will become increasingly important. Various 32-bit (and 16-bit) apps under XP Mode. Perhaps emulators that stress a single core and graphics simultaneously, e.g., MAME32.

Also, more power-management-related benchmarks and stability tests, e.g., Start Menu pop-up latency with/without C-state control, etc.
 
Some benchmarks that show a PC in a production environment other than gaming. Music software can be very CPU-intensive; the most notable example is Wave Arts' Tube Saturator, which stresses the living hell out of my Core i7 920. Let's see some tests using Sonar X1, Ableton Live, Cubase, Omnisphere, Trillian, Emulator X3, Massive, etc.
 
Hi, could you make an audio test with Cubase or FruityLoops, with lots of VSTs open in the DAW? Thank you!
 
[citation][nom]cmcghee358[/nom]Considering that Phenoms have a fair amount of L3 cache, and Athlons have none, I disagree with this statement.[/citation]
I know the Athlons have no L3 cache, but the Athlon in the test has a 600 MHz advantage. That's almost a 20% higher clock speed, while it is also around 20% slower in fps. No way that's down to the L3 cache alone.
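A quick back-of-the-envelope check of that argument. All numbers below are invented for illustration, since the exact clocks and frame rates from the test aren't quoted here:

```python
# Hypothetical figures: a higher-clocked Athlon II vs a lower-clocked Phenom II,
# with the Athlon ~20% slower in fps despite a ~20% clock advantage.
athlon_clock, phenom_clock = 3.2, 2.6   # GHz (assumed)
athlon_fps, phenom_fps = 48.0, 60.0     # average fps (assumed, ~20% gap)

clock_advantage = athlon_clock / phenom_clock - 1       # about +23%
fps_deficit = 1 - athlon_fps / phenom_fps               # exactly 20%
# Per-clock performance gap that cache/architecture would have to explain:
per_clock_gap = 1 - (athlon_fps / athlon_clock) / (phenom_fps / phenom_clock)

print(f"clock advantage: {clock_advantage:+.1%}")   # +23.1%
print(f"fps deficit:     {fps_deficit:.1%}")        # 20.0%
print(f"per-clock gap:   {per_clock_gap:.1%}")      # 35.0%
```

With those assumed numbers, the per-clock deficit works out to ~35%, which is indeed a lot to pin on L3 cache alone; that is the poster's point.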
 
I want to see AutoCAD load times. But AutoCAD is not mainstream and isn't free, so instead I want to see GIMP load times. Both are heavy and depend on both CPU and SSD speed. Even on an i7 920 + Vertex 2 they are painfully slow.

The same goes for compilation times with Visual Studio. They take so long...
 
Compared to Intel, AMD focuses on highly threaded applications with its cheap cores. Intel, on the other hand, has put everything it has into multimedia optimizations, which are simply brilliant in practice. Those are their differing engineering philosophies.

Now, how relevant is Intel's effort given that GPU assistance is becoming popular for multimedia work in general? Is their advantage obsolete compared to GPU processing?

So my suggestion is: include GPU-assisted numbers (onboard and discrete) in every possible benchmark.
 
I would like to have SolidWorks tested.

Specifically the flow-simulation add-on (formerly COSMOS FloWorks). This program is heavily CPU-bound, and because I use it for hours a day, any performance increase will be noticed.

As a side note, it would be nice to see SolidWorks used as a benchmark for video cards. There is always the question of whether the powerhouse Quadro cards are really worth the cost ($600-$2,000).
 
How about throwing a bone to your F@H team and benching one SMP and one BigAdv project over 10 or so frames and calculating the PPD from the average frame time?
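For reference, the PPD arithmetic described above is simple. A minimal sketch, assuming the usual 100-frame work unit and ignoring quick-return/BigAdv bonus multipliers; the point value in the example is invented for illustration:

```python
def ppd_from_frame_time(avg_frame_seconds, base_points, frames_per_wu=100):
    """Estimate Folding@home points per day from an average frame time.

    PPD = points per work unit * work units completed per day.
    Ignores quick-return bonus multipliers, so real PPD may be higher.
    """
    seconds_per_wu = avg_frame_seconds * frames_per_wu
    wus_per_day = 86_400 / seconds_per_wu  # seconds in a day / seconds per WU
    return base_points * wus_per_day

# e.g. a 2:24 (144 s) average frame time on a hypothetical 1920-point SMP project:
print(ppd_from_frame_time(144, 1920))  # -> 11520.0
```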
 
[citation][nom]echdskech[/nom]I've been using Tom's CPU charts since it came out to spec out not only desktops but also entry level servers built on enthusiast parts. Most of them run server software on Linux though so I'd like to cast a vote for the Phoronix test suite.[/citation]

That would be great, as long as they don't test the way they did the last time they used it, when they didn't recompile the tests on the different processors. In theory that shouldn't happen anymore: we now have built-in detection, so when the processor changes, the suite recompiles itself before running the tests, which prevents that kind of bad benchmarking practice. The results from the last time they used PTS, before we added that detection, were largely useless: they compiled for one specific architecture and used those binaries across the board, negating any CPU optimization the benchmark would have shown except on the CPU it was compiled for.
 
Folding@home SMP client. I realize that different work units can affect times and PPD, so that would have to be accounted for.
 
I suggest non-synthetic benchmarks for multiplayer online gaming. Mapping synthetic scores to average frames per second in your charts would be useful. I mention this because synthetic benchmarks don't translate into anything without a real-world reference. For example, take a 3DMark11 score of P4000 versus P5000. What does that 'mean' in terms of benefit to the user? Does it translate to a 25% increase in frames per second?
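A trivial sketch of the comparison being asked for, with made-up numbers, just to show what such a "reference benefit" column could report:

```python
# Hypothetical data: two systems' 3DMark11-style scores vs measured in-game fps.
p_low, p_high = 4000, 5000        # synthetic scores (assumed)
fps_low, fps_high = 52.0, 61.0    # measured average fps (assumed)

score_gain = p_high / p_low - 1   # +25% synthetic
fps_gain = fps_high / fps_low - 1 # ~ +17% real-world
print(f"synthetic gain: {score_gain:.0%}, measured fps gain: {fps_gain:.0%}")
```

The interesting number for readers is precisely how far those two percentages diverge in each game.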

I would also like to see physics on/off benchmarks for shared versus dedicated GPU use. Does a dedicated physics card improve frame rates in multiplayer too?

Non-synthetic benchmarks for the Adobe suite would be nice. How long does it take to open a specific file? How long to render a file in AE5 versus AE4 on different platforms? And the same tests on Quadro versus GeForce versus Radeon versus FirePro.
 
I would really like to see StarCraft II in your CPU benches, specifically with detail turned all the way down and at native resolution (1680x1050/1920x1080). This is the setting many people prefer to play at. And not just single-player or standard benching, but 200/200 maxed armies fighting each other, since performance changes drastically over the course of a single game.
 
I would also like to cast my vote for:
- CAD benchmarks
- heavy MATLAB/Simulink code or some similar language
- Finite Element Analysis/Computational Fluid Dynamics solving. SolidWorks and most CAD packages have some integrated FEA/CFD tools. While CAD-based FEA/CFD is very basic, it could be a good benchmark and is much more attainable than a real FEA/CFD solver.
 
For the love of beans, we need a Java benchmark, and there is already a free one that anyone can run.

Please, please add this Java benchmark from NIST (the National Institute of Standards and Technology). It runs well-known math, science, and engineering kernels in a Java virtual machine:

math.nist.gov/scimark2/run.html

It produces repeatable scores and runs the same in Firefox, IE, or Chrome (it is browser-neutral). And it is quick to run, too.
 
And for the love of techs everywhere:

Time how long it takes to install some big antivirus programs, like AVG, McAfee, MSE, etc.

Years ago, working as a tech at a big-box store, installing tons of antivirus programs took up much of my time, and I realized that timing them was a great way to measure the system-wide performance of the machines I worked on!

Hell, I still have to do serious malware cleanup for friends/family/acquaintances a few times a month, it seems.

Actually, to make things easier for you, just report how long it takes to install some of the bigger programs or games that you are already using as benchmarks! I have a feeling that the install times of a few big programs would indicate how long it takes to install just about any other big program, antivirus included.
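Install timing needs nothing fancy; a minimal sketch follows. The installer command shown in the docstring is a hypothetical placeholder, since silent-install flags vary by vendor:

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command and return its wall-clock duration in seconds.

    For an installer you'd pass its silent-install invocation,
    e.g. ["setup.exe", "/S"] (hypothetical; check the vendor's docs).
    """
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return time.perf_counter() - start

# Trivial stand-in command so the sketch runs anywhere:
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"{elapsed:.2f} s")
```

For comparable numbers across machines you'd want the same installer version, a cold cache, and a few repeats averaged.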


 
1) More FLAC, less AAC! MP3 is the de facto standard for compatibility, while FLAC is the de facto standard for quality.

2) The ultimate CPU stress test: EMULATION. Unlike modern PC games, Dolphin and PCSX2 rely on a fast CPU rather than a fast GPU.
 
Casting a vote for:
-SiSoftware Sandra
This is a must; everyone has access to it.
-Music production/digital audio workstation software
Ableton Live, Cubase, Pro Tools, etc. Load them up with a huge amount of synthetic instruments, effects, and mixed and sampled audio clips, then track CPU usage over the course of the song.
-AutoCAD (Inventor/Revit), SolidWorks: application and project load times. Simulation, stress, CFD, etc.
-Java, Flash, HTML5. Like it or not, HTML5 is going to be used to write new applications as our software moves to the cloud and browser-based OSes become more popular.

I think a strong distinction needs to be made between low-level benches (ALU/FPU/GFLOPS, inter-core and memory bandwidth, latency, IPC), medium-level synthetics (standard benchmarking software), and high-level applications (load times, open/close, game FPS, level load/save, etc.). I.e., how many more GFLOPS translate into how many more FPS, or how much shorter load times? Care should also be taken in the write-up to avoid benchmarking the software rather than the hardware.
 