Guest
Let me just say this, from the perspective of a corporation running literally thousands of servers distributed between two Tier 3 data centers.
We don't have the time or inclination to benchmark any CPUs, let alone server-class ones. For us it's all about business requirements and the bottom line. Our current cost for a quad-core Intel CPU in the blades we buy is double that for a sixteen-core AMD CPU. About the only reason we buy Intel is to constrain costs for enterprise applications like Oracle databases, which are licensed on a per-core basis; for those we want the most efficient cores possible. For Microsoft and VMware applications it's all about putting the greatest number of cores possible under each very expensive per-processor license. In the last seven years we have not had a single server-class AMD CPU fail.
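The licensing trade-off described above can be sketched with a few lines of arithmetic. All prices below are made-up placeholders for illustration; real CPU and license costs vary widely by vendor, volume discount, and contract terms.

```python
# Hypothetical figures only -- not actual Intel, AMD, Oracle, Microsoft,
# or VMware pricing.
intel_quad = {"price": 1600, "cores": 4}    # assumed fast quad-core part
amd_sixteen = {"price": 800, "cores": 16}   # assumed cheaper 16-core part

def per_core_licensed_cost(cpu, license_per_core):
    """Hardware plus software cost for one socket when the software
    (e.g. a database) is licensed per core: every extra core costs money."""
    return cpu["price"] + cpu["cores"] * license_per_core

def cost_per_core_of_capacity(cpu, license_per_socket):
    """Effective cost per core when the software (e.g. a hypervisor)
    is licensed per socket: more cores per socket dilute the license."""
    return (cpu["price"] + license_per_socket) / cpu["cores"]

# Per-core licensing favors the few-fast-cores part:
print(per_core_licensed_cost(intel_quad, 2000))    # 1600 + 4*2000  = 9600
print(per_core_licensed_cost(amd_sixteen, 2000))   # 800 + 16*2000 = 32800

# Per-socket licensing favors the many-cores part:
print(cost_per_core_of_capacity(intel_quad, 4000))   # (1600+4000)/4  = 1400.0
print(cost_per_core_of_capacity(amd_sixteen, 4000))  # (800+4000)/16 = 300.0
```

With these placeholder numbers, the per-core-licensed workload is cheapest on the quad-core part despite its higher hardware price, while the per-socket-licensed workload is nearly five times cheaper per core on the sixteen-core part, which is exactly the split purchasing logic described above.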
In the real world, where servers are just another commodity, server hardware represents a relatively small percentage of total application costs. Switching and storage hardware costs routinely exceed server costs. Application software costs dwarf application hardware costs, and application staff are more expensive than both combined.
The lesson is that raw performance numbers are all well and good, but hardly enough to justify the massive price premiums the vendors of those parts demand, especially in a world in which server resource utilization (primarily CPU and memory), despite virtualization efforts, is routinely below 20%. Will there be areas in which the new Intel CPUs offer value? Of course there will. But to tar all server requirements with the same broad brush is to ignore application business requirements. When you do that, you ignore business rule number one: remain profitable.