Well I suppose this is where I come in.
Crysis is built on an older, more brute-force approach to polygon-based 3D rendering. The engine isn't geared towards complex compute effects (heavy shader work); instead it relies on relatively simple shaders combined with enormous amounts of high-resolution textures and alpha textures.
In other words, the game is TMU-bound. We see the results of this when SLI or CrossfireX is used with Crysis-engine games: there is very little scaling. That's because TMUs (Texture Mapping Units) don't scale with SLI or CFX. Each GPU has its own memory pool, so each GPU's TMUs end up processing the textures for the entire scene rather than just the portion of the load assigned to them.
This is also why a Radeon HD 4870X2 2GB is effectively only a 1GB card, and why a GTX 295 1792MB is effectively only an 896MB card: the texture data is mirrored into both memory pools.
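To put numbers on that, here's a quick back-of-the-envelope sketch (plain Python, purely illustrative) of how AFR-style multi-GPU mirrors the texture set into every GPU's memory pool, so the usable texture memory is one GPU's pool rather than the advertised total:

```python
# Illustrative only: models why mirrored texture data makes the usable
# VRAM of a dual-GPU card equal to a single GPU's memory pool.

def effective_texture_memory(total_vram_mb: int, num_gpus: int) -> int:
    """Each GPU keeps a full copy of the scene's textures in its own
    memory pool, so the advertised total doesn't add up across GPUs."""
    per_gpu_pool = total_vram_mb // num_gpus
    return per_gpu_pool  # the whole texture set must fit in ONE pool

cards = {
    "Radeon HD 4870X2 (2048MB, 2 GPUs)": (2048, 2),
    "GeForce GTX 295 (1792MB, 2 GPUs)":  (1792, 2),
}

for name, (vram, gpus) in cards.items():
    print(f"{name}: effective {effective_texture_memory(vram, gpus)}MB")

# Output:
# Radeon HD 4870X2 (2048MB, 2 GPUs): effective 1024MB
# GeForce GTX 295 (1792MB, 2 GPUs): effective 896MB
```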
Traditionally (in the DX10 era), nVIDIA cards have had far more TMUs than comparable ATi cards, which is why nVIDIA cards perform better in Crysis than ATi cards.
This changed recently with the release of the 5870, as it contains... you guessed it... 80 TMUs (up from 40 on the 4870/4890), while nVIDIA's top GT200/GT200b-based cards also carry 80 TMUs. With both sides sharing the same bottleneck on that front, shader throughput and triangle rate become the main performance differentiators.
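To see why matched TMU counts level the playing field, raw texture fill rate is roughly TMU count multiplied by core clock. A rough sketch below; the clock speeds are the stock reference values as I remember them, so treat the exact figures as approximate:

```python
# Rough texel fill rate comparison: fill rate ~ TMUs x core clock.
# Clocks are stock reference values from memory; treat as approximate.

cards = {
    # name:            (TMUs, core clock in MHz)
    "Radeon HD 4870":  (40, 750),
    "Radeon HD 4890":  (40, 850),
    "Radeon HD 5870":  (80, 850),
    "GeForce GTX 285": (80, 648),
}

for name, (tmus, clock_mhz) in cards.items():
    gtexels = tmus * clock_mhz / 1000  # MTexels/s -> GTexels/s
    print(f"{name}: {tmus} TMUs x {clock_mhz}MHz = {gtexels:.1f} GTexels/s")

# Output:
# Radeon HD 4870: 40 TMUs x 750MHz = 30.0 GTexels/s
# Radeon HD 4890: 40 TMUs x 850MHz = 34.0 GTexels/s
# Radeon HD 5870: 80 TMUs x 850MHz = 68.0 GTexels/s
# GeForce GTX 285: 80 TMUs x 648MHz = 51.8 GTexels/s
```

Once the TMU counts match at 80, the raw texturing gap closes, and shader and triangle throughput take over as the deciding factors between the 5870 and the GT200b cards.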