[citation][nom]Cleeve[/nom]As a writer I don't really have any contact with the advertising side. I chose scanners that data indicated have a widespread user base. AVG has a free version, by the way. I'm not sure why some people assume that a free AV scanner might perform wildly different than a paid-for AV scanner.[/citation]
I wouldn't assume a drastic difference in performance between a paid-for and a free version, but I might expect a difference in features, and that difference in features could lead to a difference in performance as a side effect. Certainly some anti-malware companies might try to differentiate free from paid by performance, but I think the scrutiny applied to anti-malware products tends to minimize any overt performance differences that can't be explained by a difference in features.
[citation]Actually, I try to avoid assumptions and test even things that I might not consider possible. Assumptions are often incorrect. Often this attitude leads us to interesting discoveries. Case in point: the single-core internet page load time was significantly longer with the AVG security suite compared to the AVG virus scanner, but only when a single-core CPU is used. But if nothing else, such tests can support a hypothesis and dispel myths.[/citation]
That would depend on which browser you used as well -- browsers like Firefox and older versions of IE are single-threaded, while Chrome, and maybe Opera and newer IE versions, are multi-threaded. A multi-threaded browser will show more harm from being constrained to a single core than a browser that is already limited to one core but optimized for it (like FF -- though, optimize as they may, it's still a poor substitute for a multi-threaded browser).

I assume (?) that you simulate some of your single-core testing by setting process affinity on the active processes? That's a great shortcut, but it is important to note that it will differ from a true single-core machine: restricting a process to one core of a dual-core CPU will give faster performance on that core than a real single-core machine would, for one and possibly two reasons. First, most dual-core machines list secondary cache on a per-core basis, but most modern multi-core chips allow cache sharing, so if one core is kept inactive the other core gets double the cache. Second, on modern Intel chips, reducing the number of active cores usually allows the remaining cores to run at a higher frequency.
That could explain the differences you noted in single-core page load times, unless there was really only one underlying CPU -- with multiple underlying cores, the active core would, at the least, have gotten more cache. If it was a dual-core machine with the second core disabled in the BIOS, that can still allow the speed bump. I seem to remember reading that even if the second core wasn't disabled in the BIOS, an idle second core lets the active core execute faster, because the chip still operates within its allowed Thermal Design Power (TDP).
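For reference, if you are simulating the single-core runs by setting affinity rather than disabling a core in the BIOS, the Windows 'start' command (Windows 7 and later) can pin a process at launch. Just a sketch -- 'firefox.exe' is a placeholder for whatever you're actually testing, and the mask '1' means CPU 0 only:
start /affinity 1 firefox.exe
You can verify or change the affinity of a running process in Task Manager, but either way the cache and turbo effects above still apply.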
Another factor would be whether or not the browser(s) tested have hooks that allow an antivirus to scan incoming data. Another would be whether or not the browser stores the data in memory using its own cache system (thus avoiding any anti-virus hooks in the file system that might scan data written by the browser, or scan on reads). There are a bunch of factors that can affect how an anti-virus product changes things.
And yet another factor that could be in play -- when IE operates in protected mode, it writes internet data to the 'LocalLow' cache, where normal user- and system-level processes (like a virus scanner) are unable to read or execute it. This would prevent virus scanning of such items (because normal user-level processes cannot read or execute such files, AND files that are executed at that level, running with 'low' integrity, cannot write to any object -- file or process -- of 'normal' integrity, though they can read, if permitted by the DACLs). The integrity controls are mandatory and cannot be overridden by normal users; it is difficult even for Administrators to override mandatory labels. (Just try reading or setting the process priority of AudioDG.exe!)
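If you want to see the label in question, 'icacls' will display mandatory integrity labels along with the DACL (illustrative only -- the exact output varies by Windows version, but you should see a 'Mandatory Label\Low Mandatory Level' entry with a no-write-up flag):
icacls "%USERPROFILE%\AppData\LocalLow"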
I would strongly suggest you test Microsoft's Security Essentials -- it might not show up in the normal market-usage studies because it is a free product. But precisely because it is a free product from Microsoft, it would be very instructive to know how it affects performance in your suite of tests.
To beef up your performance testing, the easiest route would be to load the Cygwin (cygwin.com) tools (FREE!). They provide most of the Linux utilities and allow a great deal of flexibility in testing that isn't available under standard Windows. Some examples:
For network testing to a remote file system, you can get very repeatable performance numbers that eliminate many random factors. For example, to time maximum write throughput to a server, first create a file on the server of the size you wish to test. On a gigabit network you will want to write multiple gigabytes of data, using ~16MB chunks. So create a file, say 1GB long, on the server by using 'dd':
dd if=/dev/zero of=localfile bs=1M count=1024
Then use the Sysinternals utility 'contig' to defragment that file and make sure it is contiguous. Then, on the machine under test (the client), use:
dd if=/dev/zero of=//server/share/localfile bs=16M count=64 conv=notrunc oflag=direct
If you install the docs, "man dd" will explain the options. The notrunc option prevents dd from destroying the file that's already there, so it starts writing over the top of the existing file -- you want this so that you won't be timing how long it takes to delete and reallocate space, and so that you are writing to the area you have already defragmented. The oflag=direct option prevents the write from using the local buffers -- the only thing you don't get is a guarantee that the other end has been written to disk. The only way to get that (due to server optimizations) is to write enough data to exceed the memory size on the remote system. So, for example, writing 24GB on an 8GB system will give you a good idea of actual write-to-disk speed. You can compare it to a smaller write that fits in the remote system's memory.
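As a concrete sketch of that larger-than-memory variant (assuming an 8GB server; pre-create and defragment a correspondingly larger file on the server first, the same way as above):
# on the server, a 24GB test file:
dd if=/dev/zero of=localfile bs=1M count=24576
# from the client, 1536 x 16MB = 24GB, enough to exceed the server's cache:
dd if=/dev/zero of=//server/share/localfile bs=16M count=1536 conv=notrunc oflag=direct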
If your remote system is a Linux machine, you can make the file you write to a link to /dev/null, to test raw network write speed without the disk getting in the way. On a gigabit network, with appropriate memory tuning, you can get 125MB/s raw write speeds -- EVEN to disk, if you have a fast RAID (verified with a Linux server and a Win7-64 client).
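A rough sketch of that setup, assuming a Samba share configured to allow the link to be followed outside the share (the server path and file name here are placeholders):
# on the linux server, inside the exported directory:
ln -s /dev/null nullfile
# from the windows client -- bytes go over the wire but never touch a disk:
dd if=/dev/zero of=//server/share/nullfile bs=16M count=256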
Doing similar, but in reverse:
dd if=//server/share/localfile of=/dev/null bs=16M count=64
you can get up to around 119MB/s raw reads from memory on the server, and about 112MB/s from disk.
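If you want to be certain the "from disk" number really is from disk, one option on a Linux server is to flush its page cache between runs (a standard kernel knob, run as root); otherwise a file that fits in the server's RAM will keep being served from memory:
sync
echo 3 > /proc/sys/vm/drop_caches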
----
More pertinent to virus scanning, you can exercise various types of file access.
A simple 'find' will look for file names (though it also examines each file to determine its size and other metadata), an 'ls -R' gives a simple name listing without file opens,
or, probably the most anti-virus intensive of all, tree copies:
cp -r /source /dest...
Since it has to read and write each file, you'll see whether there is any penalty on simple reads and writes of a file.
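To put numbers on those, bash's 'time' builtin is enough; something along these lines (the paths are placeholders, and each command should be run with on-access scanning on and then off):
time ls -R /source > /dev/null     # name listing only, no file opens
time find /source > /dev/null      # also reads each file's size/attributes
time cp -r /source /dest           # reads and writes every file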
To test execution impact, be sure to load the 'gcc' compiler suite so you can create a trivial C program (you could use a shell or batch script, but that would add the overhead of the interpreter).
I tried a simple test of a null program: "main() {}" compiled with -Os and stripped, binary size 6k.
Doing 1000 iterations of executing it, with and without the on-access scanning turned on, showed no noticeable impact. I tried making 1000 differently named copies of the file as well -- still no difference (~79-80 seconds). Maybe if I varied the contents and added a unique string to each of the files it would make a difference, since even though the names differ, the scanner can apparently do its checksum so quickly that it makes no difference.
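For anyone who wants to reproduce that, it was roughly along these lines (a sketch, not the exact script; run each timing loop with on-access scanning enabled, then again with it disabled):
echo 'int main(void) { return 0; }' > null.c
gcc -Os -o null.exe null.c && strip null.exe
# 1000 runs of the same binary:
time for i in $(seq 1 1000); do ./null.exe; done
# 1000 differently named copies of it:
for i in $(seq 1 1000); do cp null.exe null_$i.exe; done
time for i in $(seq 1 1000); do ./null_$i.exe; done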
But using those tools you could create arbitrarily large executables, each containing non-compressible random data, and time them to measure scan costs. It would give you a large tool set for benchmarking, one that lets you measure specific aspects of system performance.
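One quick-and-dirty way to build those (just a sketch -- appending data to the end of a PE binary normally leaves it runnable, since the loader ignores trailing overlay data, but verify that on your setup first):
# pad copies of the null program with 1MB, 4MB and 16MB of random, non-compressible data:
for size in 1 4 16; do
    cp null.exe rand_${size}M.exe
    dd if=/dev/urandom bs=1M count=$size >> rand_${size}M.exe
done
time for f in rand_*M.exe; do ./$f; done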
I hope you do test MS Security Essentials, since I'd really be interested to know how it stacks up in your tests -- and judging from the other comments, several other readers would be interested as well.
Astara