Yeah, I know we're way OT, but the OP seems to have gone AWOL. I have tried the raw physical disk approach, but it's never really given me any benefits, except as a data disk, not the VM itself (and that was just for easy file management via the host OS, not for performance). I think where our differences lie is in the vastly different uses we have. Running CAD, gaming, and streaming media can all be very disk intensive, hence the problems and solutions I've experienced. I did look around the VMware community forums last night, and I did find a bunch of posts with similar problems. The majority of advice given there was to separate the VM images from the host OS, RAID 10 the VM image disks, and/or separate the VMs onto different physical spindles, with no more than 2 VMs per dedicated spindle. The problem with what I found is that it's all very dated, circa 2006, which is approximately when I was having these problems myself. I just didn't feel right quoting old links. Perhaps VMware's file management improvements have somewhat rectified these issues since then, I don't know. Most of these things are not anywhere near as much of an issue for ESX Server, as you do not "double up" on file systems with the host Windows OS.
When I said that network speed is more limiting than HDD speed, I was referring to max throughput. 4 HDDs in RAID 0 will easily surpass the actual file transfer speeds of GbE. Streaming 1 HD movie across GbE will noticeably slow down performance of all the VMs. Again, vastly different uses, you and me. I do run dual GbE myself, and for small I/O transactions it is great, and I wouldn't go back to 100 Mbit speeds by any means, but actual streaming transfer speeds do not set my world on fire. I'm sure you are absolutely right, and I know I am too; it's just the difference between an I/O-heavy environment and disk-intensive applications, not to mention my handicap of running the whole ball of wax inside a host Windows OS. Using ESX Server is far more efficient for large numbers of VMs.
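To put rough numbers on that claim, here's a back-of-envelope sketch. The per-drive sequential rate and the GbE efficiency factor are my assumptions (typical-era ballpark figures, not benchmarks from the thread):

```python
# Assumed sustained sequential rate for one 7200 rpm SATA drive of that era
single_drive_mbps = 80            # MB/s (assumption)
raid0_drives = 4
raid0_throughput = single_drive_mbps * raid0_drives  # ideal RAID 0 striping scale-up

# Gigabit Ethernet: 1 Gbit/s line rate; real file transfers usually land
# around 75% of that after protocol overhead (assumption)
gbe_line_rate = 1000 / 8          # 125 MB/s theoretical
gbe_effective = gbe_line_rate * 0.75

print(f"RAID 0 x{raid0_drives}: ~{raid0_throughput} MB/s")     # ~320 MB/s
print(f"GbE effective:         ~{gbe_effective:.0f} MB/s")     # ~94 MB/s
```

Even with generous overhead assumptions for the network and pessimistic ones for the array, the 4-drive stripe outruns the wire by roughly 3x, which is why the disks stop being the bottleneck once traffic goes over the LAN.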
It's also important to note that I do not rely on network transfer rates for this very reason. Most of my VMs are directly connected to my host using multiple monitors, keyboards, and mice. Strange, I know. I am experimenting with the improved RDP in 2008 R2 now to see if I can actually pull it all off the more traditional "thin client" way. In the past, both Terminal Services and network bandwidth bottlenecked me.
From the limited information given by the OP, do you believe his problem is actually the lack of a good hardware controller card? I've run heavily disk-intensive VMs before off of on-board RAID with no major issues to speak of.
I know that I'll probably never see DX support in ESX, but it would be nice. A number of my CAD workstations at work could easily be replaced by VMs, as they do not need extreme modeling performance, but DX support would greatly help basic operation. OpenGL would be nicer yet, but that really is wishful thinking.